
DSM 5.2 & random drive bay layout



Hi,

 

I've just come back to XPEnology now that DSM 5.2 supports SMB3; well done to all involved.

 

When I was running 5.1, my baremetal system (SM X7SPA-HF-D510) numbered my 12 hot-swap drive bays as 1-4 on the onboard SATA and 5-12 on the LSI HBA in IT mode, meaning that on the 12-bay display in DSM the leftmost drive corresponded to drive 1 and hence bay 1. A perfect match!

 

Now, running DSM 5.2 from scratch (no upgrade), SATA ports 1-4 map to hot-swap bays 2-5, and for some reason the HBA has drive 1 as its first drive, even though it's connected to hot-swap bay 5, followed by bays 6-12. Is there any way I can get back to hot-swap bay 1 on SATA 1 showing as drive 1 in Synology? It would make it easier to replace drives, etc.

 

Cheers,

 

Chris


OK, so I went back to 5.1, as I had nothing on the drives, and they all sorted themselves and booted in the right order. Upon updating the USB drive to XPEnology 5.2, booting into 5.1 causes the messed-up drive ordering. It appears that something in the boot system may not be playing fair with my LSI RAID card and the X7SPA-HF-D510 SATA ports to get them in the right order!

 

@Trantor and other devs: is it possible to set the order in which SATA drives are enumerated? For example, all motherboard-based SATA first, followed by HBAs?

 

Cheers,

 

Chris


So, another question: what happens to disk groups etc. if this change happens again on a new boot image release? At the moment, my HBA controller puts one of its drives into drive 1 within DSM, so my disk groups are not laid out the way I intended. What happens to a disk group if a new boot image causes the controllers to move the drives again? Will I lose access to the group, and therefore the data, or does DSM store this by UUIDs and therefore know which HDD is attached to which group?
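If I understand mdraid right (which is what DSM builds its disk groups on underneath), members are identified by the UUID written in each partition's md superblock, not by the /dev/sdX name, so a reshuffled detection order shouldn't by itself break assembly. One way to check would be something like the sketch below; the device name and UUID are made-up examples, not from a real box:

```shell
# If DSM really tracks members by UUID, every partition in the same md
# array should report the same Array UUID no matter which sdX it got.
# On the box itself (as root) that would be something like:
#   mdadm --examine /dev/sda5 | grep 'Array UUID'
#
# Offline illustration, parsing a captured --examine header
# (device name and UUID below are hypothetical):
examine='/dev/sda5:
          Magic : a92b4efc
        Version : 1.2
     Array UUID : 2b00b7af:8fc9be3b:0df0c86c:7deef4de
           Name : DSM:2'

# Strip the leading spaces and the "Array UUID : " label.
uuid=$(printf '%s\n' "$examine" | sed -n 's/^ *Array UUID : //p')
echo "Array UUID: $uuid"
```

If all members of a group show the same Array UUID after a reboot that shuffled the sdX names, the data should still be reachable, but I'd still want a dev to confirm.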

 

Thanks,

 

Chris


See my comment here: viewtopic.php?f=2&t=5026&start=1020#p38031

 

To sum it up, LSI cards have never mapped the physical port numbers in order. Drive labels have always been assigned by whichever drive spooled up and was seen first on boot. Onboard SATA has always retained some kind of consistent mapping, though. Not sure why this is, but if this LSI issue can be fixed I'd donate some $ to whoever fixes it, since it stops me from turning a specific drive bay into an eSATA slot for backups... right now I have to pull the drive on boot and re-insert it after DSM boots so it doesn't mess up the array.


I found some info on the LSI drive mapping here:

http://unix.stackexchange.com/questions ... id-systems

https://wiki.debian.org/Persistent_disk_names

 

Apparently, since DSM 5.1 Synology uses udevadm, but it doesn't seem to have the port mapping correct. I have the first two drives connected to ports 0 and 1, and the third connected to port 7. You can see it doesn't have anything to show that:

 

DSM-Test> udevadm info --query=path --name=/dev/sda
/devices/pci0000:00/0000:00:15.0/0000:03:00.0/host0/port-0:0/end_device-0:0/target0:0:0/0:0:0:0/block/sda
DSM-Test> udevadm info --query=path --name=/dev/sdb
/devices/pci0000:00/0000:00:15.0/0000:03:00.0/host0/port-0:1/end_device-0:1/target0:0:1/0:0:1:0/block/sdb
DSM-Test> udevadm info --query=path --name=/dev/sdc
/devices/pci0000:00/0000:00:15.0/0000:03:00.0/host0/port-0:2/end_device-0:2/target0:0:2/0:0:2:0/block/sdc
DSM-Test>

But if you look at dmesg, you'll see that the phy number correlates to the real port numbers:

DSM-Test> dmesg |grep scsi
[    0.725581] scsi0 : Fusion MPT SAS Host
[    2.956387] scsi 0:0:0:0: Direct-Access     WDC      WD740GD-00FLA1           8D27 PQ: 0 ANSI: 6
[    2.956403] scsi 0:0:0:0: SATA: handle(0x0009), sas_addr(0x4433221100000000), phy(0), device_name(0x0000000000000000)
[    2.956408] scsi 0:0:0:0: SATA: enclosure_logical_id(0x5000000080000000), slot(3)
[    2.956536] scsi 0:0:0:0: atapi(n), ncq(n), asyn_notify(n), smart(y), fua(n), sw_preserve(n)
[    2.956541] scsi 0:0:0:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
[    3.202241] scsi 0:0:1:0: Direct-Access     WDC      WD5000AAKS-00UU3A0       3B01 PQ: 0 ANSI: 6
[    3.202256] scsi 0:0:1:0: SATA: handle(0x000a), sas_addr(0x4433221107000000), phy(7), device_name(0x0000000000000000)
[    3.202261] scsi 0:0:1:0: SATA: enclosure_logical_id(0x5000000080000000), slot(4)
[    3.202420] scsi 0:0:1:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    3.202426] scsi 0:0:1:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
[    3.456442] scsi 0:0:2:0: Direct-Access     WDC      WD740GD-00FLA1           8D27 PQ: 0 ANSI: 6
[    3.456459] scsi 0:0:2:0: SATA: handle(0x000b), sas_addr(0x4433221101000000), phy(1), device_name(0x0000000000000000)
[    3.456464] scsi 0:0:2:0: SATA: enclosure_logical_id(0x5000000080000000), slot(2)
[    3.456588] scsi 0:0:2:0: atapi(n), ncq(n), asyn_notify(n), smart(y), fua(n), sw_preserve(n)
[    3.456594] scsi 0:0:2:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
[    8.585973] sd 0:0:0:0: Attached scsi generic sg0 type 0
[    8.586282] sd 0:0:1:0: Attached scsi generic sg1 type 0
[    8.586623] sd 0:0:2:0: Attached scsi generic sg2 type 0
DSM-Test>
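Since the phy number is right there in dmesg, the real port for each sdX can be recovered with a bit of shell. This is just a sketch against the captured lines above; the sed pattern and the target-to-letter arithmetic assume the kernel hands out sdX names in SCSI target order, which matches the "Attached scsi generic" lines:

```shell
# Captured "phy(" lines from the dmesg output above.
# On a live box you'd feed it real output: dmesg | grep 'phy('
dmesg_sample='[    2.956403] scsi 0:0:0:0: SATA: handle(0x0009), sas_addr(0x4433221100000000), phy(0), device_name(0x0000000000000000)
[    3.202256] scsi 0:0:1:0: SATA: handle(0x000a), sas_addr(0x4433221107000000), phy(7), device_name(0x0000000000000000)
[    3.456459] scsi 0:0:2:0: SATA: handle(0x000b), sas_addr(0x4433221101000000), phy(1), device_name(0x0000000000000000)'

# Pull out "<target> <phy>" pairs, then map target 0 -> sda, 1 -> sdb, ...
map=$(printf '%s\n' "$dmesg_sample" |
  sed -n 's/.*scsi 0:0:\([0-9][0-9]*\):0: .*phy(\([0-9][0-9]*\)).*/\1 \2/p' |
  while read -r target phy; do
    # 97 is ASCII 'a'; convert target index to a drive letter.
    letter=$(printf "\\$(printf '%03o' $((97 + target)))")
    echo "sd${letter} -> phy ${phy}"
  done)
echo "$map"
```

For the capture above this prints sda -> phy 0, sdb -> phy 7, sdc -> phy 1, i.e. my third port (7) came up as sdb, which is exactly the reordering DSM then displays.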

 

edit: a little info I just found that correlates with my observation about the phy #: https://utcc.utoronto.ca/~cks/space/blo ... uxSASNames

 

edit2: the last post of this thread seems pretty interesting: https://forums.freenas.org/index.php?th ... der.15286/. I'll have to try playing with LSIUtil when I get a chance.

 

edit3: interesting features from the LSIUtil manual:

 

Figure 2.13 Changing SAS I/O Unit Settings

SATA Maximum Queue Depth: [0 to 127, default is 32]
Device Missing Report Delay: [0 to 2047, default is 0]
Device Missing I/O Delay: [0 to 255, default is 0]
PhyNum  Link     MinRate  MaxRate  Initiator  Target    Port
  0     Enabled  1.5      3.0      Enabled    Disabled  Auto
  1     Enabled  1.5      3.0      Enabled    Disabled  Auto
  2     Enabled  1.5      3.0      Enabled    Disabled  Auto
  3     Enabled  1.5      3.0      Enabled    Disabled  Auto
  4     Enabled  1.5      3.0      Enabled    Disabled  Auto
  5     Enabled  1.5      3.0      Enabled    Disabled  Auto
  6     Enabled  1.5      3.0      Enabled    Disabled  Auto
  7     Enabled  1.5      3.0      Enabled    Disabled  Auto
Select a Phy: [0-7, 8=AllPhys, RETURN to quit] 0
Link: [0=Disabled, 1=Enabled, default is 1]
MinRate: [0=1.5 Gbps, 1=3.0 Gbps, default is 0]
MaxRate: [0=1.5 Gbps, 1=3.0 Gbps, default is 1]
Initiator: [0=Disabled, 1=Enabled, default is 1]
Target: [0=Disabled, 1=Enabled, default is 0]
Port: [0 to 7 for manual config, 8 for auto config, default is 8]
Persistence: [0=Disabled, 1=Enabled, default is 1]
Physical mapping: [0=None, 1=DirectAttach, 2=EnclosureSlot, default is 0]

 

edit4: I changed the ports from Auto to port # matching phy #. It made no difference...


  • 3 weeks later...

Hi Diverge,

 

I didn't see the rest of your investigations; I don't get post notifications when you edit a post, sorry!

It's a shame that it does this, although I have no idea what has changed between 5.1 and 5.2 to cause it, unless the mpt2sas driver has been updated in the 5.2 image?

I'm currently running virtualised so that I can assign drives manually to DSM and they end up aligned to the physical ports, but I'd prefer to run baremetal, as I'd like to run some of my virtual machines on my ESXi hosts from the array.

 

Cheers,

 

Chris


Hi Diverge,

I didn't see the rest of your investigations; I don't get post notifications when you edit a post, sorry!

It's a shame that it does this, although I have no idea what has changed between 5.1 and 5.2 to cause it, unless the mpt2sas driver has been updated in the 5.2 image?

I'm currently running virtualised so that I can assign drives manually to DSM and they end up aligned to the physical ports, but I'd prefer to run baremetal, as I'd like to run some of my virtual machines on my ESXi hosts from the array.

Cheers,

Chris

I searched for a while on the LSI drive-order issue, and it's just how LSI does it. It's not OS-related, as far as I've read.

On 5.2, I just think it's broken. I played around with it for a few days on a brand-new install, no migration or previous data (in ESXi), and had too many issues. You can see them here: viewtopic.php?f=2&t=6570


Out of curiosity, trublu, were you using just the HBA, or did you use the motherboard ports as well? I'm wondering if the issue has come about due to the use of 8 HBA ports and 4 motherboard SATA ports.

I've just bought some components to build a more modern baremetal XPEnology system using a spare i5 I have, so I will try it with just the 8 ports of the HBA. If that works, then I'll know it didn't like combining motherboard and HBA ports at the same time on that particular motherboard.


Out of curiosity, trublu, were you using just the HBA, or did you use the motherboard ports as well? I'm wondering if the issue has come about due to the use of 8 HBA ports and 4 motherboard SATA ports.

I've just bought some components to build a more modern baremetal XPEnology system using a spare i5 I have, so I will try it with just the 8 ports of the HBA. If that works, then I'll know it didn't like combining motherboard and HBA ports at the same time on that particular motherboard.

I'm using the HBA only.


I've restarted my system (with an LSI 9211-8i) a few times, but I don't have this issue. After installing DSM 5.2 (new baremetal), I put in the additional drives one after the other, after each previous one was recognized.

I don't think you understand the issue. Your LSI card has 8 ports (port 0 to port 7). Say you plug port 0 into slot 1, port 1 into slot 2, ... port 7 into slot 8, then put one drive in slot 1 and one in slot 8. Look in DSM and they will show as slot 1 and slot 2. DSM fills its slots in whatever order the drives are detected, regardless of their port number or which physical slot they are actually plugged into.

Everything I've read says this is the only way LSI cards work, and everything I've tried supports it as well.


I've restarted my system (with an LSI 9211-8i) a few times, but I don't have this issue. After installing DSM 5.2 (new baremetal), I put in the additional drives one after the other, after each previous one was recognized.

I don't think you understand the issue. Your LSI card has 8 ports (port 0 to port 7). Say you plug port 0 into slot 1, port 1 into slot 2, ... port 7 into slot 8, then put one drive in slot 1 and one in slot 8. Look in DSM and they will show as slot 1 and slot 2. DSM fills its slots in whatever order the drives are detected, regardless of their port number or which physical slot they are actually plugged into.

Everything I've read says this is the only way LSI cards work, and everything I've tried supports it as well.

That is the problem, but I suspect it is only a problem if you mix and match the HBA with onboard SATA, which is what I was trying to do to maximise storage across the 12 hot-swap bays. Now, if this works, I'd be happy to run a SAS expander to fill the other slots and just use my HBA, as long as it populates the drives in the order they are filled. Obviously this is no good if I add a drive to bay 1 and bay 12, as it would most likely detect physical bay 12 as DSM bay 2, but I am unlikely to load my drives that way.


I've restarted my system (with an LSI 9211-8i) a few times, but I don't have this issue. After installing DSM 5.2 (new baremetal), I put in the additional drives one after the other, after each previous one was recognized.

I don't think you understand the issue. Your LSI card has 8 ports (port 0 to port 7). Say you plug port 0 into slot 1, port 1 into slot 2, ... port 7 into slot 8, then put one drive in slot 1 and one in slot 8. Look in DSM and they will show as slot 1 and slot 2. DSM fills its slots in whatever order the drives are detected, regardless of their port number or which physical slot they are actually plugged into.

Everything I've read says this is the only way LSI cards work, and everything I've tried supports it as well.

I perfectly understood what was said, hence my explanation of the order in which I added my drives. Putting them in one after the other keeps them in order. I have an 8-bay UNAS populated with 5 drives, with no issues after adding them one after the other from left to right.


  • 3 weeks later...
I've restarted my system (with an LSI 9211-8i) a few times, but I don't have this issue. After installing DSM 5.2 (new baremetal), I put in the additional drives one after the other, after each previous one was recognized.

I don't think you understand the issue. Your LSI card has 8 ports (port 0 to port 7). Say you plug port 0 into slot 1, port 1 into slot 2, ... port 7 into slot 8, then put one drive in slot 1 and one in slot 8. Look in DSM and they will show as slot 1 and slot 2. DSM fills its slots in whatever order the drives are detected, regardless of their port number or which physical slot they are actually plugged into.

Everything I've read says this is the only way LSI cards work, and everything I've tried supports it as well.

I perfectly understood what was said, hence my explanation of the order in which I added my drives. Putting them in one after the other keeps them in order. I have an 8-bay UNAS populated with 5 drives, with no issues after adding them one after the other from left to right.

 

So I've now gone back to baremetal and have come across the same problem as before, even though I'm not using the SATA ports on my new motherboard. When running via ESXi, the drives were in the correct order due to the setup; now, on baremetal, I have the 5 drives coming up as the first 5 (great, this is progress), but physical drive 1 is not drive 1 in DSM. I presume that when you restart your box after a shutdown, the drives are in the same order as they were when you started?

Can I also ask what version of firmware you are using? I'm on P20, I believe, and want to make sure this is not an issue with the latest firmware.

 

Cheers,

 

Chris


Right, so I'm done with this. I went from vSphere back to baremetal expecting it all to be good. The drives were enumerating at the adapter screen in the same physical order as installed, but DSM was not doing the same; no matter what I did, it wouldn't reflect that disk 2 was actually disk 2, etc. To cap it all off, the entire system died: 2 HDDs crashed and the entire system volume failed. 10 hours later, a full format of all drives and lots of swearing. I lost pretty much everything, bar the important stuff I had backed up. Very disappointing.

DSM is fantastic, powerful, and easy to use, but man, when it goes wrong, it really goes wrong! Whilst the write performance is great, and block storage for iSCSI/NFS is cool, I am returning to Windows Server 2012 R2 with Flexraid's T-Raid product combined with DrivePool... Nowhere near as good, but it will be more reliable, as the disks are basically JBOD with virtual spanning and parity protection.

It's a shame I couldn't fix this problem and had to lose my data in the process :sad:


So I've now gone back to baremetal and have come across the same problem as before, even though I'm not using the SATA ports on my new motherboard. When running via ESXi, the drives were in the correct order due to the setup; now, on baremetal, I have the 5 drives coming up as the first 5 (great, this is progress), but physical drive 1 is not drive 1 in DSM. I presume that when you restart your box after a shutdown, the drives are in the same order as they were when you started?

Can I also ask what version of firmware you are using? I'm on P20, I believe, and want to make sure this is not an issue with the latest firmware.

Cheers,

Chris

I'm not sure why that's the case for you. Did you put them in in order? Before I put my drives in, I brought up the HDD/SSD page. After I added each disk, I waited for it to initialize and show "Normal" status, then I added the next one. I also created the disk group first, before creating the volume, but I don't know if that made a difference. I'm on DSM 5.2-5565 Update 2, using an LSI 9211-8i flashed to 2118it. I've restarted multiple times with no issue. I even took screenshots to be sure.

Thanks Trublu,

 

Yeah, I don't know why. Once I had lost all the data and wiped the drives clean, I started with physical slot 1 filled. On start, the LSI screen showed it as device 0, and DSM showed it in drive 1. Every time I populated the bays, the LSI showed them in the correct order on every reset, so that was fine... However, DSM lost the order after a reboot and it never came back.

What does the 2118it version offer? Mine is just flashed from IR to IT and is running P20. I'd still rather run DSM, but at the moment I can't see how, while keeping the logic of the drives in order. As I said, the LSI seems to report fine when not using the motherboard SATA ports at the same time; it's just that DSM is not showing it correctly.


Thanks Trublu,

Yeah, I don't know why. Once I had lost all the data and wiped the drives clean, I started with physical slot 1 filled. On start, the LSI screen showed it as device 0, and DSM showed it in drive 1. Every time I populated the bays, the LSI showed them in the correct order on every reset, so that was fine... However, DSM lost the order after a reboot and it never came back.

What does the 2118it version offer? Mine is just flashed from IR to IT and is running P20. I'd still rather run DSM, but at the moment I can't see how, while keeping the logic of the drives in order. As I said, the LSI seems to report fine when not using the motherboard SATA ports at the same time; it's just that DSM is not showing it correctly.

Not sure what 2118it offers vs P20. I only flashed to it because it was the latest version.

I think you should give it another go, and create the disk group before creating the volume (I might have gotten lucky doing it that way). It's weird that your DSM instance didn't keep track, because the actual order of the disks matters. The Synology docs specifically say that when migrating disks from one Synology NAS to another, the disks must be in the same exact order.


One more thing I forgot to mention: when I first added my disks, the HBA was in IR mode because I had forgotten to flash it. I moved the HBA to a different computer for flashing, and when I put it back into my NAS the drive order stayed the same. Could the backplanes in my build be part of the reason the order holds? I'm curious to hear the experience of anyone else who used the NSC-800 for their build.


So I'm going to downgrade the controller to P16, as I've read that P19 appeared to have problems in FreeNAS, and I want to exclude that from the equation; then I'll see what happens. When I was booting up, I had empty drives doing nothing and they were still moving around...


So I'm back on P16 for the LSI 9211-8i and running it in IT mode as before. After spending some time using the format function of the LSI, I've got DSM to stop saying that one of my disks has crashed, so that is better; it took 7 hours to format, though!

Now the LSI displays the drives in the correct order on boot, but DSM is still completely wrong.

Perhaps the SAS backplane is making a difference to the way DSM deals with the cards.

I really do not know why it is doing this. I'm on a completely different motherboard/CPU setup now, and have formatted all the drives, etc.

The only way I can think of to make this work is to run vSphere again and assign the drives using RDM.


OK, so I have no idea what it is doing. I realised that on boot, drives 2 & 4 were listed in the correct physical order, but DSM had them swapped over... When I then swapped the SATA breakout cables over, plugging cable #2 into drive 4 and cable #4 into drive 2, DSM got them in the correct physical order, but then on boot the LSI screen showed physical drive 2 as being drive bay 4, etc.

I do not know why it has done this; I wonder if something is wrong with my breakout cable? Either way, DSM is now surviving reboots/cold boots with the 5 drives in the right order. We'll see how I go, although I'm still gutted, as I lost probably 700 GB of data that wasn't backed up... mainly ripped music and movies, but some applications too.

Would a SAS backplane make much difference? Not that I can afford to get one shipped to New Zealand!

