NOTE: This problem consistently manifests when running on ESXi, but many have encountered problems with Synoboot devices on baremetal installs of 6.2.3 and, under certain conditions, other 6.2.x versions. The fix can be implemented safely on baremetal installs and resolves the issue there as well. You can verify whether you have the issue by connecting via SSH and issuing the following command:
$ ls /dev/synoboot*
/dev/synoboot /dev/synoboot1 /dev/synoboot2
If anything other than these three devices is returned, your Synoboot devices are not being configured correctly, and you should install the corrective script (attached).
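For reference, on an affected system the same command typically returns fewer nodes, or none at all. The output below is illustrative only; the exact error text varies by system:
$ ls /dev/synoboot*
ls: /dev/synoboot*: No such file or directory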
When running DSM 6.2.3 under ESXi, Jun's 1.03b and 1.04b bootloaders fail to build /dev/synoboot (this can be fixed by extracting the script from the loader and re-running it after boot has completed)
DSM 6.2.3 displays SATA devices (e.g. the bootloader on 1.04b) that are mapped beyond the MaxDisks limit, when previous versions did not
The DSM 6.2.3 update rewrites the synoinfo.conf disk port bitmasks, which may break some high disk count arrays and cause odd behavior with the bootloader device
Setting the PID/VID for a baremetal install allows Jun's loader to pretend that the USB key is a genuine Synology flash loader. On an ESXi install there is no USB key; instead, the loader runs a script to find its own boot device and then remakes it as /dev/synoboot. This was very reliable on 6.1.x with Jun's loader 1.02b, but with the move to DSM 6.2.x and loaders 1.03b and 1.04b, there are circumstances in which /dev/synoboot is created but the original boot device is not suppressed. The result is that the loader device is sometimes visible in Storage Manager. Someone found that if the controller was mapped beyond the maximum number of disk devices (MaxDisks), any errant /dev/sd boot device was suppressed. Adjusting DiskIdxMap thus became an alternative way to "hide" the loader device on ESXi, and Jun's latest loaders use this technique.
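For reference, DiskIdxMap (and the related SataPortMap) are passed to the kernel on its command line by the loader's grub.cfg, so you can see what your loader is actually using on a running system. The values shown below are assumptions for illustration only and will differ by loader version and configuration:
$ tr ' ' '\n' < /proc/cmdline | grep -iE 'diskidxmap|sataportmap'
DiskIdxMap=0C
SataPortMap=1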
Now, DSM 6.2.3: The upgrade changes at least two fundamental DSM behaviors:
SATA devices that are mapped beyond the MaxDisks limit are no longer suppressed, including the loader (appearing as /dev/sdm if DiskIdxMap is set to 0C)
The disk port configuration bitmasks are rewritten in synoinfo.conf: internalportcfg, usbportcfg and esataportcfg and on 1.04b, do not match up with default MaxDisks=16 anymore. NOTE: If you have more than 12 disks, it will probably break your array and you will need to edit them back
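To see what the upgrade actually wrote, the bitmasks and MaxDisks value can be reviewed directly. This assumes the standard synoinfo.conf locations and is read-only, so it is safe to run before deciding whether anything needs to be edited back:
$ grep -E 'maxdisks|internalportcfg|usbportcfg|esataportcfg' /etc/synoinfo.conf /etc.defaults/synoinfo.conf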
Also, when running under ESXi, DSM 6.2.3 breaks Jun's loader synoboot script such that /dev/synoboot is not created at all. Negative impacts:
The loader device might be accidentally configured in Storage Manager, which will crash the system
The loader partitions may inadvertently be mapped as USB or eSATA folders in File Station and become corrupted
Absence of /dev/synoboot devices may cause future upgrades to fail, when the upgrade wants to modify rd.gz in the loader (often, ERROR 21)
Unpacking Jun's synoboot script reveals that it harvests the boot device nodes, deletes the devices altogether, and remakes them as /dev/synoboot. It tries to identify the boot device by looking for a partition smaller than the smallest array partition allowed. It's an ambiguous strategy for identifying the device, and something new in 6.2.3 causes it to fail during the early-boot system state. There are a few technical configuration options that can cause the script to select the correct device, but they are difficult to apply and depend upon loader version, DSM platform, and BIOS/EFI boot.
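To make that mechanism concrete, here is a heavily simplified sketch of the general technique (not Jun's actual code): take the major/minor numbers of the errant boot device, delete its /dev/sd* nodes, and recreate the same block devices under the synoboot names. The device name sdm and the two-partition layout are assumptions for illustration only:
# illustrative sketch only - assumes the loader was identified as /dev/sdm
BOOTDEV=sdm
MAJOR=$(cut -d: -f1 /sys/block/${BOOTDEV}/dev)
MINOR=$(cut -d: -f2 /sys/block/${BOOTDEV}/dev)
# remove the errant /dev/sd* nodes so DSM no longer treats the loader as a disk
rm -f /dev/${BOOTDEV} /dev/${BOOTDEV}1 /dev/${BOOTDEV}2
# recreate the same block devices under the synoboot names DSM expects
mknod /dev/synoboot  b ${MAJOR} ${MINOR}
mknod /dev/synoboot1 b ${MAJOR} $((MINOR+1))
mknod /dev/synoboot2 b ${MAJOR} $((MINOR+2))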
However, if Jun's script is re-run after the system is fully started, everything is as it should be. So extracting the script from the loader and adding it to post-boot actions appears to be a solution to this problem (an example session follows the steps below):
Download the attached FixSynoboot.sh script
Copy the file to /usr/local/etc/rc.d
chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh
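Putting the steps together, a typical install session looks like the following. The download location /tmp/FixSynoboot.sh is an assumption; scripts in /usr/local/etc/rc.d are run by DSM at boot, so the fix takes effect from the next startup onward:
$ sudo cp /tmp/FixSynoboot.sh /usr/local/etc/rc.d
$ sudo chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh
$ sudo reboot
# after the reboot, confirm the devices are back:
$ ls /dev/synoboot*
/dev/synoboot /dev/synoboot1 /dev/synoboot2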
Thus, Jun's own code will re-run after the initial boot, once whatever system initialization state that breaks the first run of the script no longer applies. This solution works with either 1.03b or 1.04b and is simple to install. It should be considered required for ESXi running 6.2.3, and it won't hurt anything if installed or ported to another environment.