Leaderboard


Popular Content

Showing content with the highest reputation on 06/22/2020 in all areas

  1. 1 point
    NOTE: This problem is consistently manifested when running on ESXi, but many have encountered problems with Synoboot devices on baremetal installs of 6.2.3 and, under certain conditions, other 6.2.x versions. The fix can be implemented safely on baremetal installs and resolves the issue there as well. You can verify whether you have the issue by launching SSH and issuing the following command:

      $ ls /dev/synoboot*
      /dev/synoboot /dev/synoboot1 /dev/synoboot2

    If anything other than those three devices is returned, your Synoboot devices are not being configured correctly, and you should install the corrective (attached) script.

    TL;DR:
    - When running DSM 6.2.3 under ESXi, Jun's 1.03b and 1.04b bootloaders fail to build /dev/synoboot (this can be fixed by extracting a script from the loader and re-running it after boot has completed)
    - DSM 6.2.3 displays SATA devices (i.e. the bootloader on 1.04b) that are mapped beyond the MaxDisks limit, when previous versions did not
    - The DSM 6.2.3 update rewrites the synoinfo.conf disk port bitmasks, which may break some high-disk-count arrays and cause odd behavior with the bootloader device

    Background: Setting the PID/VID for a baremetal install allows Jun's loader to pretend that the USB key is a genuine Synology flash loader. On an ESXi install there is no USB key; instead, the loader runs a script to find its own boot device and then remakes it as /dev/synoboot. This was very reliable on 6.1.x with Jun's loader 1.02b. With the move to DSM 6.2.x and loaders 1.03b and 1.04b, there are circumstances in which /dev/synoboot is created but the original boot device is not suppressed, so the loader device is sometimes visible in Storage Manager. Someone found that if the controller was mapped beyond the maximum number of disk devices (MaxDisks), any errant /dev/sd boot device was suppressed. Adjusting DiskIdxMap therefore became an alternative way to "hide" the loader device on ESXi, and Jun's latest loaders use this technique.

    Now, DSM 6.2.3: The upgrade changes at least two fundamental DSM behaviors:
    - SATA devices that are mapped beyond the MaxDisks limit are no longer suppressed, including the loader (which appears as /dev/sdm if DiskIdxMap is set to 0C)
    - The disk port configuration bitmasks in synoinfo.conf (internalportcfg, usbportcfg and esataportcfg) are rewritten and, on 1.04b, no longer match the default MaxDisks=16. NOTE: If you have more than 12 disks, this will probably break your array and you will need to edit them back.

    Also, when running under ESXi, DSM 6.2.3 breaks Jun's loader synoboot script such that /dev/synoboot is not created at all.

    Negative impacts:
    - The loader device might be accidentally configured in Storage Manager, which will crash the system
    - The loader partitions may inadvertently be mapped as USB or eSATA folders in File Station and become corrupted
    - The absence of /dev/synoboot devices may cause future upgrades to fail when the upgrade wants to modify rd.gz in the loader (often ERROR 21)

    Unpacking Jun's synoboot script reveals that it harvests the device nodes, deletes the devices altogether, and remakes them as /dev/synoboot. It tries to identify the boot device by looking for a partition smaller than the smallest array partition allowed. That is an ambiguous strategy for identifying the device, and something new in 6.2.3 causes it to fail during the early-boot system state.
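    To illustrate the idea of that remapping (a hypothetical sketch only, not the code from Jun's loader; the device name and major/minor numbers are just examples for a loader that happened to be detected as /dev/sdm):

      # Hypothetical illustration only -- not the actual loader script.
      # Suppose the loader disk shows up as /dev/sdm (block device major 8, minor 192):
      rm -f /dev/sdm /dev/sdm1 /dev/sdm2     # delete the errant SATA nodes
      mknod /dev/synoboot  b 8 192           # remake the whole device...
      mknod /dev/synoboot1 b 8 193           # ...its first partition
      mknod /dev/synoboot2 b 8 194           # ...and its second partition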
    There are a few technical configuration options that can cause the script to select the correct device, but they are difficult and depend on the loader version, DSM platform, and BIOS/EFI boot. However, if Jun's script is re-run after the system is fully started, everything is as it should be. So extracting the script from the loader and adding it to the post-boot actions appears to be a solution to this problem:
    - Download the attached FixSynoboot.sh script
    - Copy the file to /usr/local/etc/rc.d
    - chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh

    Thus, Jun's own code will re-run after the initial boot, once whatever system initialization state that breaks the first run of the script no longer applies (see the rc.d sketch below). This solution works with either 1.03b or 1.04b and is simple to install. It should be considered required for ESXi running 6.2.3, and it won't hurt anything if installed or ported to another environment.

    FixSynoboot.sh
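    For context, DSM executes executable scripts placed in /usr/local/etc/rc.d at startup (passing "start") and at shutdown (passing "stop"), which is why the copy-and-chmod steps above are all that is needed. The skeleton below is only an illustration of that mechanism; the real logic lives in the attached FixSynoboot.sh:

      #!/bin/sh
      # Illustrative skeleton of a DSM rc.d script -- not the attached file itself.
      case "$1" in
          start)
              # re-run the synoboot device setup once boot has completed
              ;;
          stop)
              # nothing to clean up on shutdown
              ;;
      esac
      exit 0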
  2. 1 point
    Small problem solved by applying the following little tutorial: I did it on 2 VM instances, a DSM3617xs and a DSM3615xs, and everything works very well. Fred
  3. 1 point
    No, for this you would need another tool, like the "USB image tool" (freeware): https://www.alexpage.de/usb-image-tool/
  4. 1 point
    That is incorrect. Without the patch they are not recognized at all by Synology's utilities (they are accessible by Linux). You can hack them in as storage but it is not supported or safe, and it does not matter whether the patch is installed or not. If you want to remove the patch, put the original file back.
  5. 1 point
    Here I have gathered all the versions starting from 7.2.0 in one place. All versions are patched. Starting with 8.1.2, a restart is needed once a day. As soon as Вирус finishes with the latest one, I will add it too. Guys, there is no working version for ARM yet. Perhaps there will be in the future. https://mega.nz/folder/q80zQATS#1VAWvg4Dr0rfSnRjM5X9pQ
  6. 1 point
    8.2.7-6221: restart at 5 AM with the command /usr/syno/sbin/synoservice --restart pkgctl-SurveillanceStation. System version: DSM 6.2.2-24922 Update 6 - DS3615xs
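    One way to schedule such a nightly restart (a hedged example; a root user-defined script in DSM Task Scheduler works just as well) is an entry in /etc/crontab, assuming DSM's usual system crontab format with tab-separated fields and a user column:

      # min  hour  mday  month  wday  user  command
      0      5     *     *      *     root  /usr/syno/sbin/synoservice --restart pkgctl-SurveillanceStation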
  7. 1 point
    For those Linux newbs who need exact instructions on installing the script, follow this here. Please be VERY careful with syntax, especially when working as root.
    - If you have not turned on SSH in Control Panel remote access, do it
    - Download PuTTY or another SSH terminal emulator for SSH access
    - Connect to your NAS with PuTTY and use your admin credentials. It will give you a command line prompt "$", which means non-privileged
    - In File Station, upload FixSynoboot.sh to a shared folder. If the folder name is "folder" and it's on Volume 1, the path on the command line is /volume1/folder
    - From the command line, enter "ls /volume1/folder/FixSynoboot.sh" and the filename will be returned if it was uploaded correctly. Case always matters in Linux.

      $ ls /volume1/folder/FixSynoboot.sh
      FixSynoboot.sh

    - Enter "sudo -i", which will elevate your admin session to root. Use the admin password again. Now everything you do is destructive, so be careful. The prompt will change to "#" to tell you that you have done this.

      $ sudo -i
      Password:
      #

    - Copy the file from your upload location to the target location.

      # cp /volume1/folder/FixSynoboot.sh /usr/local/etc/rc.d

    - Make the script executable.

      # chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh

    - Now verify the install. The important part is the leading -rwx, which indicates FixSynoboot.sh can be executed.

      # ls -la /usr/local/etc/rc.d/FixSynoboot.sh
      -rwxr-xr-x 1 root root 2184 May 18 17:54 FixSynoboot.sh

    With the file configuration confirmed to be correct, reboot the NAS and FixSynoboot will be enabled.
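    After the reboot, you can confirm the script did its job using the same check from item 1 above:

      $ ls /dev/synoboot*
      /dev/synoboot  /dev/synoboot1  /dev/synoboot2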