tomtcs Posted June 14, 2016 #1

Anyone willing to offer assistance getting Xpenology installed on a chassis with 58 disks? Payment via PayPal is available for anyone who actually gets it working...
AllGamer Posted June 15, 2016 #2

It's not that hard, just edit synoinfo.conf to include 58 disks.

Some tips worth keeping in mind for safety and easy troubleshooting: keep volumes to the number of disks per controller, normally 8 disks per SAS/SATA controller, unless you purchased the really expensive controllers worth $1000; those can easily do 16 or 24 disks per controller.

If it really does come to that, I'd suggest setting up a small volume of 12 disks, since that is the default with XPEnology. Once booted up, you can edit synoinfo.conf to allow more disks. Here is a nice reference guide for the DS214play, which defaults to 2 disks: viewtopic.php?f=15&t=15305

The same "best practice" works well for the 64-bit version of XPEnology.
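For reference, the *portcfg entries in synoinfo.conf are bitmasks with one bit per drive slot (lowest bit = first slot, as far as I can tell). A minimal sketch of how the values can be worked out, run anywhere Python is available rather than on the NAS; slot_mask is just an illustrative helper and the 12-bay default of 0xfff is an assumption based on the stock image:

# Sketch only: the *portcfg values in synoinfo.conf are bitmasks, one bit per
# drive slot. This prints candidate values; it does not touch the NAS.

def slot_mask(num_slots, first_slot=0):
    """Mask with num_slots consecutive bits set, starting at bit first_slot."""
    return ((1 << num_slots) - 1) << first_slot

# Stock 12-bay layout (assumed default): internalportcfg covers 12 slots.
print(hex(slot_mask(12)))   # 0xfff

# 58 internal slots, with eSATA/USB left at zero while troubleshooting.
print(hex(slot_mask(58)))   # 0x3ffffffffffffff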
sbv3000 Posted June 15, 2016 #3

My build approach to this would be:

1) Set up the mobo with 'out of the box' hardware and install XPE/DSM with the onboard SATAs and the default number of drives; test network, USB boot, BIOS reset issues etc.
2) Edit synoinfo.conf for 58 drives as per AllGamer and check Storage Manager - see below for possible entries. You might need to exclude USB enumeration (there are threads about that).
3) Install 'Controller 1' and its disks - create a volume - document HDD slots vs drive serial numbers.
4) Repeat 3) with 'Controller 2' - you might find the drives shift slots due to PCI slot enumeration in BIOS.
5) Repeat for the remaining controllers/disks.
6) Soak test/check drives spin down.
7) Watch the electricity bill go up.

maxdisks="58"
esataportcfg="0x000000000000000"
usbportcfg="0x000000000000000"
internalportcfg="0x3ffffffffffffff"
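A minimal sketch of applying the values above, assuming root shell access to the box and that a Python interpreter is available there (otherwise vi or WinSCP does the same job by hand). The paths are the two copies of the conf file discussed later in the thread, and the backup step is just a precaution:

# Sketch: write the 58-disk values above into both copies of synoinfo.conf.
# Assumes the keys already exist in the files; back each file up first.
import re, shutil

values = {
    "maxdisks": "58",
    "esataportcfg": "0x000000000000000",
    "usbportcfg": "0x000000000000000",
    "internalportcfg": "0x3ffffffffffffff",
}

for path in ("/etc/synoinfo.conf", "/etc.defaults/synoinfo.conf"):
    shutil.copy(path, path + ".bak")
    with open(path) as f:
        conf = f.read()
    for key, val in values.items():
        conf = re.sub(r'^%s=".*"$' % key, '%s="%s"' % (key, val), conf, flags=re.M)
    with open(path, "w") as f:
        f.write(conf)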
tomtcs Posted June 15, 2016 Author #4

As for hardware, I'm using (3) LSI 9201-16i controllers and (1) 9201-8i in a 45 Drives Storinator enclosure. The 8-port controller is used for SSD caching while the rest are used for direct-attached storage (WD Reds and Seagate Constellations).

I tried the following, thinking it would be as follows:

0011 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0x300000000000000000
0000 1111 1111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0xFF000000000000000
0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 = 0xFFFFFFFFFFFFFFF

then tried this when the first didn't boot properly:

1100 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0xC000000000000000 (eSATA)
0011 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0x3000000000000000 (USB)
0000 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 = 0xFFFFFFFFFFFFFFF (Drives)

That didn't work either. It seems like I lose my synoinfo.conf changes on reboot... it simply will NOT initialize all the drives after a reboot. Keep in mind that the pod is currently only loaded with 12 Seagate drives and 4 SSDs plus the USB boot device.
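Those hand-written layouts can be cross-checked mechanically. A quick sketch (plain Python, nothing XPEnology-specific) that converts the second attempt above to hex and counts how many slots each mask covers:

# Sketch: convert the hand-written slot layouts to hex and count the bits,
# to cross-check the values before pasting them into synoinfo.conf.
layouts = {
    "esataportcfg":    "1100" + "0000" * 15,   # 2 eSATA slots at the top
    "usbportcfg":      "0011" + "0000" * 15,   # 2 USB slots below them
    "internalportcfg": "0000" + "1111" * 15,   # 60 internal slots
}

for name, bits in layouts.items():
    value = int(bits, 2)                       # leftmost character = highest bit
    print(name, hex(value), "slots:", bin(value).count("1"))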
sbv3000 Posted June 15, 2016 #5

1) Edit the conf file in both /etc and /etc.defaults.
2) Try 'no' eSATA and USB (i.e. all zeros, as in my suggestion) for diagnostics, and add them later once you have 58 drives showing.
3) Pad the hex values with zeros so that they have the same number of characters in the strings and no bit positions overlap. So your first set of values would become:

0011 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0x300000000000000000
0000 1111 1111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0x0FF000000000000000
0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 = 0x000FFFFFFFFFFFFFFF

(this equates to 2 eSATA, 8 USB and 60 internal drives, I think)

The conf file only controls the values that Storage Manager will show and manage - you still need to make sure the drives all appear in the controller BIOS, set the correct initiator mode, JBOD etc.
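The padding and non-overlap rule in 3) can also be checked in a few lines; a sketch using the padded values above (any Python will do, run off the box):

# Sketch: check rule 3) above - all three masks padded to the same width and
# no bit claimed by more than one mask.
esata    = 0x300000000000000000   # 2 slots at the very top
usb      = 0x0FF000000000000000   # 8 slots below them
internal = 0x000FFFFFFFFFFFFFFF   # 60 internal slots

masks = {"esataportcfg": esata, "usbportcfg": usb, "internalportcfg": internal}

assert esata & usb == 0 and esata & internal == 0 and usb & internal == 0, "masks overlap"

hex_digits = (max(m.bit_length() for m in masks.values()) + 3) // 4
for name, mask in masks.items():
    print('%s="0x%0*X"  (%d slots)' % (name, hex_digits, mask, bin(mask).count("1")))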
sbv3000 Posted June 15, 2016 #6

PS - you can edit the conf files on the fly with WinSCP etc., then refresh Storage Manager to see what shows up, without having to reboot.
tomtcs Posted June 16, 2016 Author #7

Oddly enough, the first set of values, with the 0x0... padding for USB and eSATA, works to bring all the other drives up as available. However, on reboot they end up showing a failure notification... So, if I reinstall the DSM image and then edit the files again without rebooting, all the drives show up and I can create the volumes and storage groups as I want. It just doesn't survive a reboot.
sbv3000 Posted June 16, 2016 #8

How many drives are connected in this config? How many controller cards?

I run 16-disk and 24-disk units that both 'survive' reboots; these are the /etc.defaults/synoinfo.conf settings from them:

maxdisks="16"
esataportcfg="0x0000"
usbportcfg="0x00000"
internalportcfg="0xffff"

maxdisks="24"
esataportcfg="0x00000"
usbportcfg="0x00000"
internalportcfg="0xfffff"

I suggest you run some tests with different settings to get the conf file stable - e.g. the default 12-bay, then 16/24 etc., then jump to the full requirement.

Something else to do: I suggest disabling all features on the mobo that aren't needed, i.e. serial/parallel/additional USB ports.
tomtcs Posted June 16, 2016 Author #9

That seems to have been the trick. I turned off the integrated SATA ports, USB, etc., and now it retains the info on reboot. Thanks for the assistance! Shoot me a PM with your email address when you have a minute.
tomtcs Posted June 16, 2016 Author #10

On second thought... I just moved the two SSDs from my internal SATA headers (which were disabled) to the SAS controllers, and I received the same failure notification as before. I'm wondering if the SSDs are bad, perhaps?
sbv3000 Posted June 16, 2016 #11

My experience is that onboard controllers will always be HDD 1-4, so if you add controllers such that drives land on ports > 12 they will get 'lost'. I suspect that somehow DSM was applying defaults on reboot, hence losing drives, but once you disabled the onboard SATA the only drives seen are on the add-in cards, hence the stability.

With a complex setup like this I would make sure you label/cross-reference drives vs slots in case of disk failures - it will make life easier.
tomtcs Posted October 20, 2016 Author #12

I'm back at it again. Since I've been seeing some slow progress with DSM 6, I'm looking for someone to assist with getting this working on a large storage pod. SBV3000 and I were close before, but the settings would just never stick. I'm hoping someone else might take a stab at it. Any help would be greatly appreciated.
quicknick Posted October 20, 2016 #13

Make sure you edit /etc.defaults/synoinfo.conf, not /etc/synoinfo.conf.
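A small sketch for checking quicknick's point, assuming shell access and a Python interpreter on the box: it compares the disk-related keys in both copies so a mismatch shows up before the next reboot:

# Sketch: confirm the disk-related keys match in /etc and /etc.defaults,
# since values that only live in /etc can be lost on reboot or upgrade.
import re

keys = ("maxdisks", "esataportcfg", "usbportcfg", "internalportcfg")

def read_keys(path):
    """Return the values of the disk-related keys from one synoinfo.conf copy."""
    with open(path) as f:
        conf = f.read()
    found = {}
    for key in keys:
        match = re.search(r'^%s="(.*)"$' % key, conf, re.M)
        found[key] = match.group(1) if match else None
    return found

etc = read_keys("/etc/synoinfo.conf")
defaults = read_keys("/etc.defaults/synoinfo.conf")

for key in keys:
    status = "OK" if etc[key] == defaults[key] else "MISMATCH"
    print("%-16s /etc=%-20s /etc.defaults=%-20s %s" % (key, etc[key], defaults[key], status))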
tomtcs Posted October 20, 2016 Author #14

I've done that. What I can't confirm or determine is whether a system will even handle anything more than 26 drives (a-z). If I could determine that it doesn't, I don't think I would try so hard to make this work...
sbv3000 Posted October 25, 2016 #15

Hi Tom, see my info post: viewtopic.php?f=2&t=21115

I think XPE is 'limited' to 26 drives (sda-sdz) for some reason within the Linux version used.