XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 03/12/2021 in all areas

  1. Thanks Gex for the update for v1.02a

     1. Read the original post announced by jun: http://xpenology.com/forum/viewtopic.php?f=2&t=20216
     2. Download the bootloader v1.01 (DSM 6.0.2-8451) or bootloader v1.02a (alpha version for DSM 6.1-15047): https://mega.nz/#F!yQpw0YTI!DQqIzUCG2RbBtQ6YieScWg
     3. Unzip the package. We will need 2 files: synoboot.img and synoboot.vmdk
     4. Create a VirtualBox VM:
        4.1 Operating System: Other Linux (64-bit)
        4.2 Create 2 storage controllers:
            a) IDE: mount the file synoboot.vmdk
            b) SCSI: create a new hard disk for your data storage (it has to be SCSI, otherwise it won't work!)
        4.3 Set up the network:
            a) Bridged or NAT, it's up to you
            b) Expand the Advanced options
            c) Adapter type: Intel PRO/1000 MT Desktop (82540EM)
            d) MAC address: 0011322CA785 (this is the most important setting to make your XPEnology accessible!)
     5. Start the VM and press F12 to bring up the boot menu
     6. Boot from IDE hard disk 1
     7. In the GRUB menu, select the last option (xxx VMware xxx)
     8. Wait around 2~5 minutes, depending on your hardware spec. The bootloader only shows around 10 lines of messages. Don't worry, be patient; it is still loading the system.
     9. Install Synology Assistant and press the Search button
     10. You should see your new DiskStation now. If not, wait another 1 or 2 minutes
     11. The most important step! Verify the MAC address shown in Synology Assistant. Make sure it is the same as the one you set in step 4.3d above.
     12. Here you go, do what you usually do on a Synology system to install DSM. Enjoy!
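     For reference, the VM setup in steps 4.1-4.3 can also be done from the command line with VBoxManage instead of the GUI. The following is only a rough sketch using the same settings as above; the VM name "XPEnology", the data disk data.vdi and its 102400 MB size, and the host interface "eth0" are placeholders to adapt (older VirtualBox releases use "createhd" instead of "createmedium"):

     # Create and register the VM (Other Linux, 64-bit)
     VBoxManage createvm --name XPEnology --ostype Linux_64 --register
     # IDE controller carrying the loader image synoboot.vmdk
     VBoxManage storagectl XPEnology --name IDE --add ide
     VBoxManage storageattach XPEnology --storagectl IDE --port 0 --device 0 --type hdd --medium synoboot.vmdk
     # SCSI controller with a new data disk (must be SCSI; the size is just an example)
     VBoxManage createmedium disk --filename data.vdi --size 102400
     VBoxManage storagectl XPEnology --name SCSI --add scsi
     VBoxManage storageattach XPEnology --storagectl SCSI --port 0 --device 0 --type hdd --medium data.vdi
     # Bridged NIC, Intel PRO/1000 MT Desktop (82540EM), fixed MAC from step 4.3d
     VBoxManage modifyvm XPEnology --nic1 bridged --bridgeadapter1 eth0 --nictype1 82540EM --macaddress1 0011322CA785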
    1 point
  2. For those who need an MBR bootloader for DSM 6.2, I've uploaded it here, in case someone will give it a chance. I've successfully installed it on a Supermicro X7DBP-i with the Intel Blackford 5000P chipset. It is a loader for the DSM 6.2-23739 .pat and can be directly upgraded to 6.2.2-24922 Update 2. http://ul.to/zg58eusm This loader is based on Jun's mod of the Genesys loader and was done with their help. Thanks a lot!
    1 point
  3. NOTE: This problem consistently manifests when running Jun's loader on a virtual machine with 6.2.3, but some also have problems on baremetal and, under certain conditions, on other 6.2.x versions. The fix can be implemented safely on all Jun loader installs.

     You can verify whether you have the issue by launching SSH and issuing the following command:

     $ ls /dev/synoboot*
     /dev/synoboot /dev/synoboot1 /dev/synoboot2

     If these files are not present, your synoboot devices are not being configured correctly and you should install the fix script. If synoboot3 exists, that is okay.

     TL;DR:
     - When running DSM 6.2.3 as a virtual machine (and sometimes on baremetal), Jun's 1.03b and 1.04b bootloaders fail to build /dev/synoboot
     - The bootloader SATA device, normally mapped beyond the MaxDisks limit, becomes visible if /dev/synoboot is not created
     - The DSM 6.2.3 update rewrites the synoinfo.conf disk port bitmasks, which may break arrays of more than 12 disks and cause odd behavior with the bootloader device

     Background: Setting the PID/VID for a baremetal install allows Jun's loader to pretend that the USB key is a genuine Synology flash loader. On an ESXi install, there is no USB key - instead, the loader runs a script to find its own boot device and then remakes it as /dev/synoboot. This was very reliable on 6.1.x and Jun's loader 1.02b. But moving to DSM 6.2.x and loaders 1.03b and 1.04b, there are circumstances when /dev/synoboot is created and the original boot device is not suppressed. The result is that sometimes the loader device is visible in Storage Manager. Someone found that if the controller was mapped beyond the maximum number of disk devices (MaxDisks), any errant /dev/sd boot device was suppressed. Adjusting DiskIdxMap became an alternative way to "hide" the loader device on ESXi, and Jun's latest loaders use this technique.

     Now, DSM 6.2.3: The upgrade changes at least two fundamental DSM behaviors:
     - SATA devices that are mapped beyond the MaxDisks limit are no longer suppressed, including the loader (appearing as /dev/sdm if DiskIdxMap is set to 0C)
     - The disk port configuration bitmasks internalportcfg, usbportcfg and esataportcfg are rewritten in synoinfo.conf and, on 1.04b, no longer match the default MaxDisks=16 (or your value, if you have modified MaxDisks). NOTE: If you have more than 12 disks, this will probably break your array and you will need to restore the values of those parameters

     Also, when running as a virtual machine (and sometimes on baremetal), DSM 6.2.3 breaks Jun's loader synoboot script such that /dev/synoboot is not created at all.

     Negative impacts:
     - The loader device might be accidentally configured in Storage Manager, which will crash the system
     - The loader partitions may inadvertently be mapped as USB or eSATA folders in File Station and become corrupted
     - Absence of the /dev/synoboot devices may cause future upgrades to fail when the upgrade wants to modify rd.gz in the loader (often ERROR 21 or "file corrupt")

     Unpacking Jun's synoboot script reveals that it harvests the device nodes, deletes the devices altogether, and remakes them as /dev/synoboot. It tries to identify the boot device by looking for a partition smaller than the smallest array partition allowed. It's an ambiguous strategy for identifying the device, and something new in 6.2.3 causes it to fail during the early-boot system state. There are a few technical configuration options that can cause the script to select the correct device, but they are difficult and depend on the loader version, DSM platform, and BIOS/EFI boot.
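     Before applying the fix, both symptoms described above can be checked from the same SSH session. This is only a sketch; /etc.defaults/synoinfo.conf and /etc/synoinfo.conf are the usual DSM locations for these settings, and root privileges are assumed:

     # Should list synoboot, synoboot1 and synoboot2 on a healthy install
     ls /dev/synoboot*
     # Show MaxDisks and the port bitmasks that DSM 6.2.3 may have rewritten
     grep -E 'maxdisks|internalportcfg|usbportcfg|esataportcfg' /etc.defaults/synoinfo.conf /etc/synoinfo.conf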
     Fix: However, if Jun's script is re-run after the system is fully started, everything is as it should be. So extracting the script from the loader and adding it to the post-boot actions appears to be a solution to this problem:
     - Download the attached FixSynoboot.sh script (if you cannot see it attached to this post, be sure you are logged in)
     - Copy the file to /usr/local/etc/rc.d
     - chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh

     Thus, Jun's own code will re-run after the initial boot, once whatever system initialization parameters that break the first run of the script no longer apply. This solution works with either 1.03b or 1.04b and is simple to install. It should be considered required for a virtual system running 6.2.3, and it won't hurt anything if installed or ported to another environment.

     FixSynoboot.sh
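     For convenience, the copy and permission steps above can be done in one SSH session, roughly as follows. This is only a sketch and assumes FixSynoboot.sh has already been transferred to your home directory (e.g. via File Station or scp):

     # Install the script into the post-boot startup directory and make it executable
     sudo cp ~/FixSynoboot.sh /usr/local/etc/rc.d/
     sudo chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh
     # After the next reboot, the synoboot devices should be present again
     ls /dev/synoboot*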
    1 point
  4. So I leave for 3 days on holiday and everyone forgets the rules... They are rather simple and stated just above the first post. All off-topic posts have been moved.
    1 point
  5. I tried this one with the latest 6.2.3, but like on Jun's mod it says
    1 point