Rhubarb

How to modify grub.cfg or otherwise modify DS3615 HDD port mapping.


My Supermicro m'brd system (X10SL7-F as DS3615xs) has lately had occasional disk issues, which Supermicro has advised me could be due to RAIDing 6Gb/s (SATA1 x2), 3Gb/s (SATA2 x4), and 6Gb/s SAS (4 of 8) ports together as a 10-disk RAID 6 array. I was generally advised that RAID groups should be confined to a single controller.

My m'brd has 2xSATA3 RAID0,1 ports; 4xSATA2 RAID0,1,5,10 ports; and 8xSAS2 (LSI2308) ports.

To resolve my problems, I placed 4 of my HDDs on the slower SATA2 ports as RAID5, and 6 drives on 6 of the 8 available SAS2 ports. Because of the 12-disk limit for DS3615, I can't use more than 6 of the 8 SAS2 ports.

I guess my options are to:

           a: change the MaxDrives limit to 14 and place two of my HDDs onto the 2xSATA1 ports and the balance on the 8 SAS ports (the problem here is that the 4 vacant SATA2 ports show as "blanks" on the Storage Manager Overview page); or,

           b: modify my system's config to hide all the SATA2 ports from the DS3615xs, so that the 4 vacant ports don't show as "blanks" in the Storage Manager Overview and all 8 SAS ports are available for use.

Any ideas how I can achieve either of a/b above? Suggestions welcome.


I'm not sure SuperMicro can advise you on how DSM should be configured.  In any case, using a combination of hardware (motherboard) and software (DSM) RAID seems counterproductive.  You will get the most out of DSM if you present raw drives and let DSM do all the RAID operations you want to do.  That would mean increasing MaxDrives in your case.

 

If you want to delete an entire controller or a subset of drives on a controller, look into SataPortMap, SasIdxMap, and DiskIdxMap.  This is covered in the main tutorial here.
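For reference, all three parameters live on the sata_args line of the loader's grub.cfg. A sketch of where they go, using the same format Rhubarb quotes later in the thread (the values shown are placeholders to adapt, not a recommendation):

```
set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=4 SasIdxMap=0'
```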

 


What you want is to limit the number of drives per controller.

If possible, put all drives to your SAS controller and tell XPE to see 0 drives on the other two controllers.

 

SataPortMap=008 would result in: first controller = 0 drives, second controller = 0 drives, third controller = 8 drives.

If your SAS controller is not recognized as the third, you need to switch the positions.
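If it helps, the digit-per-controller rule haydibe describes can be sketched as a tiny decoder (illustrative only, not XPEnology code; single digits per controller assumed):

```python
def parse_sata_port_map(value: str) -> list[int]:
    """Each digit caps how many drives DSM probes on one controller,
    in PCIe enumeration order (1st digit = 1st controller)."""
    return [int(ch) for ch in value]

# "008": the first two controllers contribute 0 drives; the third
# (the SAS HBA here) contributes 8.
print(parse_sata_port_map("008"))  # [0, 0, 8]
```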

 

The "12 drives" limit is the default configuration. If you change it, be prepared that updates sometimes reset your settings to the default value.
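For context, on DSM that limit is the maxdisks value in /etc.defaults/synoinfo.conf, usually alongside the internalportcfg bitmask; a sketch of the relevant lines, assuming the DS3615xs defaults (this is the file an update can silently rewrite):

```
maxdisks="12"
internalportcfg="0xfff"   # bitmask: twelve 1-bits, one per internal slot
```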

 

11 minutes ago, flyride said:

I'm not sure SuperMicro can advise you on how DSM should be configured.  In any case, using a combination of hardware (motherboard) and software (DSM) RAID seems counterproductive.  You will get the most out of DSM if you present raw drives and let DSM do all the RAID operations you want to do.  That would mean increasing MaxDrives in your case.

 

If you want to delete an entire controller or a subset of drives on a controller, look into SataPortMap, SasIdxMap, and DiskIdxMap.  This is covered in the main tutorial here.

 

Thanks for your advice, flyride.  I have seen some material on this topic on the net, and read the tutorial some months ago, when I was not yet aware of the need to map out a device (SATA2 x4), because my system appeared to be functioning flawlessly with all drives connected across the available controllers (incl. SATA2).  Will investigate further today.  Thanks again for your advice and for pointing me in the right direction.

15 minutes ago, haydibe said:

What you want is to limit the number of drives per controller.

If possible, put all drives to your SAS controller and tell XPE to see 0 drives on the other two controllers.

 

SataPortMap=008 would result in: first controller = 0 drives, second controller = 0 drives, third controller = 8 drives.

If your SAS controller is not recognized as the third, you need to switch the positions.

 

The "12 drives" limit is the default configuration. If you change it, be prepared that updates sometimes reset your settings to the default value.

 

Thanks to you too, haydibe. Much appreciated!  Ideally, my SAS controller would become the first device, then SATA1 (x2), and finally SATA2 (x4), without increasing/changing the MaxDrives limit of 12. Will post again to confirm when I've got the system configured the way I want.

On 11/27/2018 at 7:13 AM, Rhubarb said:

Thanks to you too, haydibe. Much appreciated!  Ideally, my SAS controller would become the first device, then SATA1 (x2), and finally SATA2 (x4), without increasing/changing the MaxDrives limit of 12. Will post again to confirm when I've got the system configured the way I want.

Had no success mapping out the SATA2 controller in two attempts.

1st attempt was to set "set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=208 SasIdxMap=0'"

2nd attempt was to set "set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=20 SasIdxMap=8'"

On both occasions, the blank SATA2 (x4) ports still show as vacant slots (3 to 6) in the Storage Manager Overview, and DSM is unable to access the 7th & 8th disks on the LSI2308 (SAS) ports.

I do want to retain max disks set to 12, as suggested by haydibe: "The "12 drives" limit is the default configuration. If you change it, be prepared that updates sometimes reset your settings to the default value."

I'm still struggling with how to go about offsetting the location of the 8xSAS ports to mask over the empty SATA2 ports.

 


I'm not 100% sure of the controller order on your system, but assuming your PCIe enumeration = SATA1x2, SATA2x4, LSI, try DiskIdxMap=080A00

 

You can also add SataPortMap=228, but it should not be necessary, since you just want to cut off 2 of the SATA2 devices.
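If I read DiskIdxMap right, it is a string of two-hex-digit values, one per controller, each giving the zero-based slot where that controller's first disk appears. A hypothetical decoder, just to show the arithmetic behind 080A00 (the interpretation is my assumption, not confirmed loader behavior):

```python
def parse_disk_idx_map(value: str) -> list[int]:
    """Split into hex pairs: the zero-based starting slot for each
    controller, in PCIe enumeration order."""
    return [int(value[i:i + 2], 16) for i in range(0, len(value), 2)]

# "080A00": SATA1 starts at index 8 (slot 9), SATA2 at index 10 (slot 11),
# and the LSI SAS ports at index 0 (slots 1-8).
print(parse_disk_idx_map("080A00"))  # [8, 10, 0]
```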

1 hour ago, flyride said:

I'm not 100% sure of the controller order on your system, but assuming your PCIe enumeration = SATA1x2, SATA2x4, LSI, try DiskIdxMap=080A00

 

You can also add SataPortMap=228, but it should not be necessary, since you just want to cut off 2 of the SATA2 devices.

Thanks again for your advice, flyride.  However, no luck yet with DiskIdxMap=080A00 and SataPortMap=228.  Also tried 208. Neither removes the 4 vacant slots for SATA2 (positions 3 to 6 of the Storage Manager drive listing).

Have just noticed, however, that the 6 drive RAID 6 group has now entered degraded mode 5/6.  One of the drives that was previously not visible is now included in the array, replacing one of the other disks.  Will allow the RAID to recover before further changes.  Will probably take at least 24hrs to get the RAID normalised.

 


Then the PCIe enumeration (controller order) is not known - you really need to figure that out before you do anything else.
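One way to work out the enumeration from an SSH shell on the box is to trace each disk back to its controller's PCI address via sysfs (a sketch; paths assume a Linux/DSM-style sysfs layout):

```shell
# Print which PCI controller each disk hangs off; ascending PCI addresses
# give the enumeration order used for SataPortMap/DiskIdxMap.
pci_of() {
  # extract the last PCI address (domain:bus:dev.fn) in a sysfs device path
  echo "$1" | grep -o '[0-9a-f]\{4\}:[0-9a-f]\{2\}:[0-9a-f]\{2\}\.[0-9]' | tail -n 1
}

for d in /sys/block/sd*; do
  [ -e "$d" ] || continue
  echo "$(basename "$d") -> $(pci_of "$(readlink -f "$d")")"
done
```

Running `lspci` alone also lists the controllers in ascending PCI-address order, which is usually the enumeration order.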

 

Why are you trying this with a production array??

 


I have a similar issue here. I have an ASM1602 card connected via passthrough, and the mSATA SSD drive is detected as sdaf, which is far beyond the 12 drives supported. Any idea how to fix this?

The other 4 drives are added via RDM.

On 11/28/2018 at 9:54 PM, bearcat said:

Have you tried the "easy way" and disabled the internal controller for the ports you don't want to use?

Seemed to be the only way, bearcat - but it's not good to have to disable the whole 6-port SATA controller (2x 6Gbps & 4x 3Gbps) just to disable 2 ports. I have now arranged the purchase of a new LSI2308 8-port SAS PCIe card and an SFF-8087 SAS to 4-port SATA cable to make up for the lost ports.

System is now running with 8 disks (RAID 6) until I get/add the new hardware.


@flyride

 

Hello,

 

I have an H370M-ITX/ac motherboard with 918+ 1.04b, which works pretty well (although sadly without the 2nd NIC, which requires e1000e, I think).

 

I have 6 SATA ports on board, and another 4 ports on an external 88SE9215 via minipcie-->m.2. Both work very well with no problems.

 

But DSM always shows 16 drives instead of 10 in the Storage Manager map, no matter what I do.

I used SataPortMap=64 (I verified the order is correct using 2 disks, one connected to the 1st controller and the other to the 2nd: mainboard SATA first, then the 88SE9215, as the 1st and 9th positions were populated in the map). It looks like it reserves 8 drives for each controller...

Even when I leave grub.cfg at the loader's default, which is SataPortMap=4, it always shows 16 drives.

 

In the guide you mentioned, SasIdxMap and DiskIdxMap are not explained, so I didn't know how to proceed.

 

Does your baremetal ASRock J4105-ITX show exactly the number of drive ports you have, or more, like mine?

What can I do/try?

 

I think it is just cosmetic but I would like to solve it.

 

Thanks!!

