XPEnology Community

64-bit - DS3612xs is capable of 12 drives, is that true?


Elpee


Just installed DSM 5.0-4493 on my PC case with 12 HDDs. No matter how many times I've tried (uninstalled and then reinstalled), XPEnology DS3612xs only sees 10 HDDs, even when I move the 12 HDDs around.

 

 

[screenshot: 15323170056_3b30603599_b.jpg]

 

In fact, Disks 1 & 2 are missing :sad:

[screenshot: 15346168415_d227a3ae59_b.jpg]

 

Have you ever seen this issue or do you know how to fix it?

 

Much appreciated.


You may have PATA/IDE controller enabled. That may account for the first two unpopulated drives.

 

I set it to "SCSI", not IDE or SATA, as recommended. Do I need to change it to IDE or SATA?

I replaced the first two with known-working drives, but the issue is the same.

Thanks.


Thanks, I just found out that the missing ports are on my SAS card. It looks like DSM 5.0 NanoBoot doesn't include a driver for that card.

This is the page where I can download my card's driver, but I don't know exactly which driver is correct for DSM 5.0 NanoBoot.

http://www.highpoint-tech.com/USA_new/r ... wnload.htm

Can you guys please help me figure out which driver I should download and how to install it?

 

Thank you very much for your help.


You may have PATA/IDE controller enabled. That may account for the first two unpopulated drives.

 

This is your problem. You need to unload the PATA/IDE driver in the nanoboot config, or it will keep reserving those two slots.

 

If using nanoboot, you need to edit syslinux.cfg. Here's an example from my config:

 

LABEL Synology DSM 5.0
 MENU LABEL Synology DSM 5.0
 kernel /zImage rmmod=ata_piix ihd_num=0 netif_num=4 syno_hw_version=DS3612xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168 loglevel=0 vga=0x305

 

The part I am referring to is rmmod=ata_piix. That unloads the driver; without it, my slots would be like yours, with the first two reserved. Change all the menu entries in the config to reflect the change.
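For example, if your config also has a second boot entry (the label name below is illustrative, not taken from my config above), its kernel line needs the same parameter appended:

LABEL Synology DSM 5.0 Reinstall
 MENU LABEL Synology DSM 5.0 Reinstall
 kernel /zImage rmmod=ata_piix ihd_num=0 netif_num=4 syno_hw_version=DS3612xs sn=B3J4N01003 vid=0x0EA0 pid=0x2168 loglevel=0 vga=0x305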


There is a really simple fix for this problem.

 

SSH into your XPEnology system

edit /etc.defaults/synoinfo.conf
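For example (the IP address is a placeholder for your NAS; on DSM 5 you can log in over SSH as root with your admin password):

ssh root@192.168.1.100
vi /etc.defaults/synoinfo.conf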

 

modify internalportcfg and esataportcfg fields

 

original values

----------------

internalportcfg="0xfff"

esataportcfg="0xff000"

 

change to something like this

-----------------

internalportcfg="0xfffff"

esataportcfg="0x0"

 

Save the file and reboot. You should see all of your drives now.

 

The above fields are device letter masks. The original value of 0xfff is a hexadecimal mask indicating that up to 12 devices are internal drives, from /dev/sda to /dev/sdl. However, if your device assignments fall outside of that range (for example /dev/sdm and /dev/sdn), then you'll need to increase it to accommodate them. The new value of 0xfffff signifies that there can be up to 20 devices, from /dev/sda through /dev/sdt. Hopefully this is enough to cover all of your internal disk letters.
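If you want to double-check the mask math yourself, ordinary shell arithmetic will do it (a minimal sketch; any bash/ash shell works, and the bit counts are the only inputs):

printf '0x%x\n' $(( (1 << 12) - 1 ))   # 0xfff   = 12 bits, /dev/sda - /dev/sdl
printf '0x%x\n' $(( (1 << 20) - 1 ))   # 0xfffff = 20 bits, /dev/sda - /dev/sdt
printf '0x%x\n' $(( (1 << 14) - 1 ))   # 0x3fff  = enough if your highest disk is /dev/sdn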

 

NOTE: On a real Synology DS3612xs, there are exactly 12 internal ports that will get enumerated from /dev/sda to /dev/sdl. However, on your system, you may have additional ports (SATA or PATA) that are unused, but they will still occupy a device letter. So let's say you have 16 ports on your server, but you're only using 12 of them. Some of your drives' enumeration may very well fall outside of the original 12 from /dev/sda through /dev/sdl. Therefore, you need to modify your internalportcfg value to cover your system's device enumeration.

 

If you want to know exactly which drive letters are being enumerated on your system, run the following command:

fdisk -l | grep ^Disk

If you search for internalportcfg on this forum, you'll see it discussed multiple times. I have 14 ports (6 onboard SATA + 8 SAS) on my server, 12 of which are currently populated. The values I use are:

 

esataportcfg="0x0"        # I have no eSATA drives
internalportcfg="0xfffff" # up to 20 internal device letters, from /dev/sda to /dev/sdt
usbportcfg="0xf00000"     # USB devices get enumerated from /dev/sdu to /dev/sdx
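Whatever values you choose, the three masks must not overlap: each device letter can belong to only one class. A quick sanity check in the shell (a sketch using the values above):

echo $(( 0xfffff & 0xf00000 ))   # internal vs. usb:   0 means no overlap
echo $(( 0xfffff & 0x0 ))        # internal vs. esata: 0 means no overlap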

 

Check my other post to see how I got 24 drives to show up in VMware.

http://xpenology.com/forum/viewtopic.php?f=2&t=3529&p=21475&hilit=internalportcfg#p21475


This is your problem. You need to unload the PATA/IDE driver in the nanoboot config ... If using nanoboot, you need to edit syslinux.cfg.

Sorry, but I don't know where to find the syslinux.cfg file. I'm using the NanoBoot ISO to boot my NAS.


There is a really simple fix for this problem. SSH into your XPEnology system, edit /etc.defaults/synoinfo.conf, and modify the internalportcfg and esataportcfg fields ...

I've followed your instructions: changed internalportcfg="0xfffff", saved the file, and rebooted.

Yes, I can now see 20 slots as you said, but Disks 1 & 2 are still missing. :oops:

 

[screenshot: 15209727920_00e414260a_b.jpg]


There is a really simple fix for this problem. SSH into your XPEnology system, edit /etc.defaults/synoinfo.conf, and modify the internalportcfg and esataportcfg fields ...

 

Do you mean that I can have more than 12 disks in one volume if I follow your instructions?


Do you mean that I can have more than 12 disks in one volume if I follow your instructions?

 

The number of disks you can use in a volume depends on the type of volume you're creating.

 

From what I vaguely remember, you cannot exceed 12 disks per volume for RAID5 and RAID6 with Synology. Don't quote me on this.

 

However, as you can see below, I've successfully created a single SHR1 volume with 24 drives.

 

[screenshot: file.php?id=526]


  • 5 weeks later...
From what I vaguely remember, you cannot exceed 12 disks per volume for RAID5 and RAID6 with Synology ... I've successfully created a single SHR1 volume with 24 drives.

 

"Hi, What is your Server Config., I just want to know as I am planning to build Highend NAS using AMD 990FX Chipset with 6 Drive. Planning for SHR1 / Raid 10. Please reply I will be waiting"

