XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 09/15/2020 in all areas

  1. NOTE: This problem consistently manifests when running Jun's loader on a virtual machine with 6.2.3, but some users also have problems on baremetal and, under certain conditions, with other 6.2.x versions. The fix can be implemented safely on all Jun loader installs.

     You can verify whether you have the issue by connecting via SSH and issuing the following command:

     $ ls /dev/synoboot*
     /dev/synoboot
     /dev/synoboot1
     /dev/synoboot2

     If these files are not present, your synoboot devices are not being configured correctly, and you should install the fix script. If synoboot3 exists, that is okay.

     TL;DR:
     - When running DSM 6.2.3 as a virtual machine (and sometimes on baremetal), Jun's 1.03b and 1.04b bootloaders fail to build /dev/synoboot
     - The bootloader SATA device, normally mapped beyond the MaxDisks limit, becomes visible if /dev/synoboot is not created
     - The DSM 6.2.3 update rewrites the synoinfo.conf disk port bitmasks, which may break arrays of more than 12 disks and cause odd behavior with the bootloader device

     Background: Setting the PID/VID for a baremetal install allows Jun's loader to pretend that the USB key is a genuine Synology flash loader. On an ESXi install, there is no USB key - instead, the loader runs a script to find its own boot device and then remakes it as /dev/synoboot. This was very reliable on 6.1.x and Jun's loader 1.02b. But with DSM 6.2.x and loaders 1.03b and 1.04b, there are circumstances in which /dev/synoboot is not created and the original boot device is not suppressed. The result is that the loader device is sometimes visible in Storage Manager. Someone found that if the controller was mapped beyond the maximum number of disk devices (MaxDisks), any errant /dev/sd boot device was suppressed. Adjusting DiskIdxMap became an alternative way to "hide" the loader device on ESXi, and Jun's latest loaders use this technique.
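The quick check above can be wrapped in a small POSIX-shell helper. This is only an illustrative sketch: the function name check_synoboot and the directory argument are my own additions so the logic can be exercised anywhere; on a real DSM box you would simply run ls /dev/synoboot* as shown in the post.

```shell
#!/bin/sh
# Hedged sketch: report whether the three synoboot device nodes exist.
# The function name and directory argument are illustrative, not from the post.
check_synoboot() {
    dir="${1:-/dev}"
    [ -e "$dir/synoboot" ] && [ -e "$dir/synoboot1" ] && [ -e "$dir/synoboot2" ]
}

if check_synoboot /dev; then
    echo "synoboot devices present - fix not needed"
else
    echo "synoboot devices missing - install FixSynoboot.sh"
fi
```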
     Now, DSM 6.2.3: The upgrade changes at least two fundamental DSM behaviors:
     - SATA devices that are mapped beyond the MaxDisks limit are no longer suppressed, including the loader (which appears as /dev/sdm if DiskIdxMap is set to 0C)
     - The disk port configuration bitmasks in synoinfo.conf (internalportcfg, usbportcfg and esataportcfg) are rewritten, and on 1.04b they no longer match the default MaxDisks=16 (or a MaxDisks you have modified). NOTE: If you have more than 12 disks, this will probably break your array and you will need to restore the values of those parameters.

     Also, when running as a virtual machine (and sometimes on baremetal), DSM 6.2.3 breaks Jun's loader synoboot script such that /dev/synoboot is not created at all.

     Negative impacts:
     - The loader device might be accidentally configured in Storage Manager, which will crash the system
     - The loader partitions may inadvertently be mapped as USB or eSATA folders in File Station and become corrupted
     - The absence of /dev/synoboot devices may cause future upgrades to fail when the upgrade wants to modify rd.gz in the loader (often ERROR 21 or "file corrupt")

     Unpacking Jun's synoboot script reveals that it harvests the boot device nodes, deletes the devices altogether, and remakes them as /dev/synoboot. It tries to identify the boot device by looking for a partition smaller than the smallest array partition allowed. This is an ambiguous strategy for identifying the device, and something new in 6.2.3 causes it to fail during the early-boot system state. There are a few technical configuration options that can cause the script to select the correct device, but they are difficult and depend on the loader version, DSM platform, and BIOS/EFI boot mode.

     Fix: However, if Jun's script is re-run after the system is fully started, everything works as it should.
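Before and after an upgrade, the bitmasks the post warns about can be inspected with a couple of shell one-liners. A hedged sketch follows: the helper names are mine, the parameter names come from the post, and the live file on DSM is typically /etc.defaults/synoinfo.conf (verify the path on your own system before touching anything). The mask arithmetic reflects the common convention that the low N bits cover N internal slots.

```shell
#!/bin/sh
# Hedged sketch: print the port bitmasks from a synoinfo.conf-style file.
# Helper names are illustrative; parameter names are from the post.
show_portcfg() {
    cfg="$1"
    for key in maxdisks internalportcfg usbportcfg esataportcfg; do
        grep "^${key}=" "$cfg"
    done
}

# A mask with the low N bits set covers N internal disk slots, e.g. 16 -> 0xffff.
internal_mask() {
    printf '0x%x\n' $(( (1 << $1) - 1 ))
}
```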
     So extracting the script from the loader and adding it to the post-boot actions appears to solve the problem:
     - Download the attached FixSynoboot.sh script (if you cannot see it attached to this post, be sure you are logged in)
     - Copy the file to /usr/local/etc/rc.d
     - chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh

     Thus, Jun's own code will re-run after the initial boot, once whatever system initialization state breaks the first run of the script no longer applies. This solution works with either 1.03b or 1.04b and is simple to install. It should be considered required for a virtual system running 6.2.3, and it won't hurt anything if installed or ported to another environment.

     FixSynoboot.sh
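The copy-and-chmod steps above can be collected into one helper. This is a sketch with placeholder arguments: on a real DSM box the source is wherever you uploaded FixSynoboot.sh and the destination is /usr/local/etc/rc.d; they are parameterized here only for illustration.

```shell
#!/bin/sh
# Hedged sketch of the two install steps from the post.
# src: path to the uploaded FixSynoboot.sh (placeholder)
# rcd: rc.d directory, /usr/local/etc/rc.d on DSM (placeholder)
install_fixsynoboot() {
    src="$1"
    rcd="$2"
    cp "$src" "$rcd/FixSynoboot.sh" && chmod 0755 "$rcd/FixSynoboot.sh"
}
```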
    2 points
  2. Thank you fryride for the time you spent. I'm done and happy. Now I know how to change disks in the NAS, move packages, and something new about how DSM works.
    1 point
  3. why so secretive? i could add a link in the 1st post if there is a good enough description of the file, or the creator could just make a new topic in the "Additional Compiled Modules" area
    1 point
  4. SHR is not a filesystem. It's an array. If you have only one drive, it can be called a JBOD, RAID0, SHR, or Basic - they are all essentially the same thing. It doesn't matter. You have a one-disk SHR/Basic drive now (which is just a RAID0) on your HDD's partition #3. Depending on the version of DSM, the hardware platform, and the way you configured your original array, Synology may call an array a volume in a storage pool, or just a volume.

     Install the SSD and create a NEW, separate storage pool/volume. You don't need two drives; you can create a Basic or SHR with one drive. As soon as you do this, partitions #1 and #2 will be created and common across all devices. They ignore the layout of other arrays/storage pools/etc.

     Once you have two volumes (one on the HDD and one on the SSD), you can even move shares straight over using Control Panel. Again, packages may need to be uninstalled/reinstalled and their data restored if applicable. There are lots of online resources on how to do that (move Synology packages from one volume to another). When all is complete, shut down, remove the HDD, and you're done.
    1 point
  5. after testing the loaders 1.03b for 3615 and 1.04b for 918+ with the 7.0 preview, i'd say that's not the case - the loaders do not work anymore with the kernel of dsm 7.0. jun used a "pre" kernel (bzImage on the 1st partition of the loader) that is started by grub and that then loads the synology kernel, and i guess that's the point where it fails: the synology kernel is not loaded properly and it fails to start, resulting in dsm not booting (drivers like extra.lzma come later into play). i guess it needs to be the same kernel version to step from a running "pre" kernel into loading a new kernel, and as synology changed the base kernel version in all three dsm versions we have loaders for, it will not work with any of them:
     3615: 3.10.105 -> 3.10.108
     3617: 3.10.105 -> 4.4.180+
     918+: 4.4.59+ -> 4.4.180+
     all loaders throw a "va not found / Failed to process header" on the serial console when booting with the new 7.0 kernel and stop after this (as @OllieD already wrote some days ago: https://xpenology.com/forum/topic/33940-dsm-7-preview/?do=findComment&comment=164967). case closed, i'd say
    1 point
  6. For those Linux newbs who need exact instructions on installing the script, follow this here. Please be VERY careful with syntax, especially when working as root.
     - If you have not turned on SSH in Control Panel remote access, do it
     - Download PuTTY or another terminal emulator for SSH access
     - Connect to your NAS with PuTTY and use your admin credentials. It will give you a command line with a "$" prompt, which means non-privileged.
     - In File Station, upload FixSynoboot.sh to a shared folder. If the folder name is "folder" and it's on Volume 1, the path on the command line is /volume1/folder
     - From the command line, enter "ls /volume1/folder/FixSynoboot.sh" and the filename will be returned if it was uploaded correctly. Case always matters in Linux.
       $ ls /volume1/folder/FixSynoboot.sh
       FixSynoboot.sh
     - Enter "sudo -i", which will elevate your admin session to root. Use the admin password again. Now everything you do is destructive, so be careful. The prompt will change to "#" to tell you that you have done this.
       $ sudo -i
       Password:
       #
     - Copy the file from your upload location to the target location.
       # cp /volume1/folder/FixSynoboot.sh /usr/local/etc/rc.d
     - Make the script executable.
       # chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh
     - Now verify the install. The important part is the leading -rwx, which indicates FixSynoboot.sh can be executed.
       # ls -la /usr/local/etc/rc.d/FixSynoboot.sh
       -rwxr-xr-x 1 root root 2184 May 18 17:54 FixSynoboot.sh
     With the file configuration verified as correct, reboot the NAS and FixSynoboot will be enabled.
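For context on why dropping the file into /usr/local/etc/rc.d is enough: DSM runs scripts in that directory at boot with a "start" argument (and "stop" at shutdown). The skeleton below only illustrates that convention as an assumption - it is NOT the contents of FixSynoboot.sh, whose real body is Jun's extracted synoboot code.

```shell
#!/bin/sh
# Illustrative rc.d-style skeleton only - NOT Jun's actual FixSynoboot.sh code.
# Assumption: DSM invokes scripts in /usr/local/etc/rc.d with "start" at boot.
fixsynoboot_rc() {
    case "$1" in
        start) echo "start: re-run Jun's synoboot setup here" ;;
        stop)  : ;;   # nothing to undo at shutdown for this fix
        *)     echo "Usage: {start|stop}" ;;
    esac
}

fixsynoboot_rc "$@"
```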
    1 point
  7. Good news! I just applied this update and everything works. I still have a few tests to run, but everything seems to be working. For those who are unsure: launch the update; when it reboots, power everything off, remove the USB key and install extra 0.5, put the key back, and restart - normally everything will work. For my part, I just had to do one extra hard reset, that's all.
    1 point