
Remote Assistance with Initial Configuration and Testing


tomtcs


It's not that hard.

Just edit synoinfo.conf to allow 58 disks.

Some tips worth keeping in mind for safety and easy troubleshooting:

 

Keep volumes to the number of disks per controller, normally 8 disks per SAS/SATA controller, unless you purchased one of the really expensive controllers worth $1,000; those can easily do 16 or 24 disks per controller.

 

If it really comes to that, I'd suggest setting up a small volume of 12 disks, since that is the default with XPEnology.

 

Once booted up, you can edit synoinfo.conf to allow more disks. Here is a nice reference guide for the DS214play, which defaults to 2 disks:

viewtopic.php?f=15&t=15305

 

the same "Best practice" works well for the 64bit version XPEnology


My build approach to this would be:

 

1) Set up the mobo with 'out of the box' hardware and install XPE/DSM with the onboard SATAs and a handful of drives; test network, USB boot, BIOS reset issues etc.

2) Edit synoinfo.conf for 58 drives as per AllGamer and check Storage Manager; see below for possible entries. You might need to exclude USB enumeration (there are threads about that).

3) Install 'Controller 1' and its disks, create a volume, and document HDD slots vs. drive serial numbers.

4) Repeat 3) with 'Controller 2'. You might find the drives shift slots due to PCI slot enumeration in the BIOS.

5) Repeat for the remaining controllers/disks.

6) Soak test and check that the drives spin down.

7) Watch the electricity bill go up :smile:

 

maxdisks="58"

esataportcfg="0x000000000000000"

usbportcfg="0x000000000000000"

internalportcfg="0x3ffffffffffffff"


As for hardware, I'm using (3) LSI 9201-16i controllers and (1) 9201-8i in a 45 Drives Storinator enclosure. The 8-port controller is used for SSD caching, while the rest are used for direct-attached storage (WD Reds and Seagate Constellations).

 

I tried the following, thinking it would be:

0011 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0x300000000000000000

0000 1111 1111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0xFF000000000000000

0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 = 0xFFFFFFFFFFFFFFF

 

Then I tried this when the first set didn't boot properly:

1100 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0xC000000000000000 (eSATA)

0011 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0x3000000000000000 (USB)

0000 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 = 0xFFFFFFFFFFFFFFF (Drives)

 

That didn't work either. It seems like my synoinfo.conf changes are lost on reboot... it simply will NOT initialize all the drives after a reboot.

 

Keep in mind that the pod is currently only loaded with 12 Seagate drives and 4 SSDs, plus the USB boot device.


1) Edit the conf file in both /etc and /etc.defaults.

2) Try 'no' eSATA and USB (i.e. all zeros, as in my suggestion) for diagnostics, and add them back later once you have 58 drives showing.

3) Pad the hex values with zeros so that the strings all have the same number of characters and no bit positions overlap; your first set of values would become:

 

0011 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0x300000000000000000

0000 1111 1111 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 = 0x0FF000000000000000

0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 = 0x000FFFFFFFFFFFFFFF

 

(this equates to 2 eSATA, 8 USB and 60 internal drives, I think)
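A short Python sketch of that stacking (my illustration, not from the post): internal slots in the lowest bits, USB directly above, eSATA on top, with an explicit overlap check:

# "Stacked, non-overlapping" layout: internal slots occupy the lowest bits,
# USB sits directly above them, eSATA above that. The counts here mirror the
# corrected values above (60 internal, 8 USB, 2 eSATA).

internal_count, usb_count, esata_count = 60, 8, 2

internalportcfg = (1 << internal_count) - 1
usbportcfg = ((1 << usb_count) - 1) << internal_count
esataportcfg = ((1 << esata_count) - 1) << (internal_count + usb_count)

# No bit may belong to two masks; padding to equal width makes this easy to see.
assert internalportcfg & usbportcfg == 0
assert usbportcfg & esataportcfg == 0

width = (internal_count + usb_count + esata_count + 3) // 4  # hex digits needed
for name, mask in [("esataportcfg", esataportcfg),
                   ("usbportcfg", usbportcfg),
                   ("internalportcfg", internalportcfg)]:
    print(f'{name}="0x{mask:0{width}x}"')

Running this prints exactly the three padded values shown above.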

 

The conf file only controls the values that Storage Manager will show and manage; you still need to make sure the drives all appear in the controller BIOS, set the correct initiator mode, JBOD, etc.


Oddly enough, the first set of values with the 0x0... padding for USB and eSATA works to bring all the other drives up as available. However, on reboot they end up showing a failure notification...

 

[screenshot: xpenoboot.jpg - drive failure notification after reboot]

 

So: if I reinstall the DSM image, then edit the files again without rebooting, all the drives show up and I can create the volumes and storage groups as I want. It just doesn't survive a reboot.


How many drives are connected in this config?

How many controller cards?

I run 16-disk and 24-disk units that both 'survive' reboots; these are the /etc.defaults/synoinfo.conf settings from them:

 

maxdisks="16"

esataportcfg="0x0000"

usbportcfg="0x00000"

internalportcfg="0xffff"

 

maxdisks="24"

esataportcfg="0x00000"

usbportcfg="0x00000"

internalportcfg="0xfffff"

 

Suggest you run some tests with different settings to get the conf file stable, e.g. the default 12-bay, then 16/24 etc., then jump to the full requirement.
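If it helps with that stepwise testing, here is a hypothetical Python helper (mine, not part of the thread) that prints the three portcfg lines for any bay count, with eSATA/USB zeroed for diagnostics as in the configs above:

# Generate synoinfo.conf values for a given bay count, eSATA/USB disabled.
def synoinfo_lines(maxdisks):
    internal = (1 << maxdisks) - 1
    width = (maxdisks + 3) // 4   # pad all three values to the same width
    return "\n".join([
        f'maxdisks="{maxdisks}"',
        f'esataportcfg="0x{0:0{width}x}"',
        f'usbportcfg="0x{0:0{width}x}"',
        f'internalportcfg="0x{internal:0{width}x}"',
    ])

for bays in (12, 16, 24, 58):
    print(synoinfo_lines(bays))
    print()

For 16 bays this reproduces the 0xffff values above; for 58 it reproduces 0x3ffffffffffffff.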

 

Something else to do: disable all features on the mobo that aren't needed, i.e. serial/parallel/additional USB ports.



That seems to have been the trick. I turned off the integrated SATA ports, USB, etc., and now it retains the settings on reboot. Thanks for the assistance! Shoot me a PM with your email address when you have a minute.


On second thought... I just moved the two SSDs from the disabled internal SATA headers to the SAS controllers, and I got the same failure as before in the image above. I'm wondering if perhaps the SSDs are bad?


My experience is that onboard controllers will always be HDD 1-4, so if you add controllers such that drives land on ports > 12 they will get 'lost'. I suspect that somehow DSM was applying defaults on reboot, hence losing drives, but once you disabled the onboard SATA the only drives seen are on the add-in cards, hence the stability :smile:

With a complex setup like this I would make sure you label/cross-reference drives vs. slots in case of disk failures; it will make life easier :smile:


  • 4 months later...

I'm back at it again. Since I've been seeing some slow progress with DSM 6, I'm looking for someone to assist with getting this working on a large storage pod. SBV3000 and I were close before, but the settings would just never stick. I'm hoping someone else might take a stab at it. Any help would be greatly appreciated.


Make sure you edit /etc.defaults/synoinfo.conf, not /etc/synoinfo.conf.
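A hedged Python sketch of that edit (my own illustration; the paths and values are taken from earlier in the thread). It applies the same settings to both copies of the file, since DSM can rewrite /etc/synoinfo.conf from the defaults copy on boot:

import re

SETTINGS = {
    "maxdisks": "58",
    "esataportcfg": "0x000000000000000",
    "usbportcfg": "0x000000000000000",
    "internalportcfg": "0x3ffffffffffffff",
}

# Edit both copies; /etc.defaults is the one DSM may use to regenerate /etc.
for path in ("/etc.defaults/synoinfo.conf", "/etc/synoinfo.conf"):
    with open(path) as f:
        text = f.read()
    for key, value in SETTINGS.items():
        text = re.sub(rf'^{key}=".*"$', f'{key}="{value}"', text, flags=re.M)
    with open(path, "w") as f:
        f.write(text)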

 


