
External HDDs detected as internal HDDs after synoinfo.conf edit.


Question

Please be patient with the newbie here. This is my third xpenology box that I put together in the last 5 months, so I have some experience with editing the synoinfo.conf file, and minimal experience in Linux. This is my very first post.

 

DS918+ running DSM 6.2.3-25426 Update 2 on Jun's 1.04b loader. I love it! :) Everything was working fine after install (including detecting and reading from external HDDs), but not all 10 internal drives were detected. I had this problem before when using DS3615xs on the same NAS box, and this tutorial worked great! But this time the external HDDs are showing up as internal HDDs!

 

Original binary to HEX (with "maxdisks=16"):

11 0000 0000 0000 0000 ==> USB ports = 30000

00 1111 1111 1111 1111 ==> Sata ports = FFFF

 

Modified binary to HEX (with "maxdisks=20"):

11 0000 0000 0000 0000 0000 ==> USB ports = 300000

00 1111 1111 1111 1111 1111 ==> Sata ports = FFFFF

esataportcfg="0x0"

Both copies of synoinfo.conf (/etc and /etc.defaults) were edited.
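For reference, the binary-to-hex conversion above can be reproduced with shell arithmetic. A minimal sketch (the bit positions are taken from the layout above; the `mask` helper is my own, not a DSM tool):

```shell
#!/bin/sh
# Build port-bitmask hex values for synoinfo.conf. Each disk slot is one
# bit; bit 0 is the first internal slot (the rightmost digit of the
# binary strings above).
# mask FIRST COUNT prints the hex mask for COUNT ports starting at bit FIRST.
mask() {
  printf '0x%x\n' $(( ((1 << $2) - 1) << $1 ))
}

mask 0 16   # internalportcfg for maxdisks=16 -> 0xffff
mask 0 20   # internalportcfg for maxdisks=20 -> 0xfffff
mask 20 2   # usbportcfg with 2 bits above 20 internal slots -> 0x300000
```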

dmesg.txt shows 4 USB ports detected, but even when I changed the USB part of the binary to "1111" (four ports), no USB port was detected.

dmesg.txt shows 20 ata ports detected, which is why I expanded maxdisks to 20.

It sounds cool to be able to use external HDDs as part of internal volumes, but my data is located on the external HDDs, so I cannot bring it back into new volumes. I want access to the external HDDs so I can use Hyper Backup to an NTFS drive for portable backups.

 

Hardware:

• OptiPlex XE2: Intel Core i7-4770S (4 cores/8 threads, 3.10 GHz, Turbo 3.90 GHz), 12 GB DDR3-1600 RAM

 

Drives #17 and #18 below are two different external HDDs.

 

Any ideas are highly appreciated! Thanks! :)

Capture2.PNG

Capture.PNG


9 answers to this question

Recommended Posts

  • 1
17 hours ago, Triplex said:

Please be patient with the newbie here. [...] Both synoinfo.conf from /etc and /etc.defaults were edited. [...] Drives #17 and #18 below are two different external HDDs.

from the picture it looks as if you missed putting a 5th zero in the usb config, so your usb starts above 16 instead of above 20

it should look like this:

maxdisks=20
internalportcfg="0xfffff"
esataportcfg="0x0"
usbportcfg="0xf00000"
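A sketch of applying those four values with sed; on a real box you would run the loop over both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf (a scratch copy with illustrative starting values is used here so the snippet is self-contained):

```shell
#!/bin/sh
# Demo copy; on a live system loop over /etc/synoinfo.conf and
# /etc.defaults/synoinfo.conf instead of this scratch file.
conf=/tmp/synoinfo.conf.demo
printf '%s\n' 'maxdisks="16"' 'internalportcfg="0xffff"' \
  'esataportcfg="0xff000"' 'usbportcfg="0x300000"' > "$conf"

set_key() {  # set_key FILE KEY VALUE: replace the KEY= line in FILE
  sed -i "s|^$2=.*|$2=\"$3\"|" "$1"
}

for f in "$conf"; do
  set_key "$f" maxdisks 20
  set_key "$f" internalportcfg 0xfffff
  set_key "$f" esataportcfg 0x0
  set_key "$f" usbportcfg 0xf00000
done
cat "$conf"
```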

 

how do you reach 20 ports? what additional hardware did you use? maybe you can provide /var/log/dmesg

 

  • 1
On 12/11/2020 at 11:55 AM, xpevenom said:

Did an update from 6.2.2 to 6.2.3 Update 2. Box rebooted, update seemed to apply successfully and onboard nic works... but array comes up degraded with last 2 drives missing from array. Why?

that's normal and expected behavior: "big" updates (~200-300 MB, containing a hda1.tgz) replace the whole content of the system partition, and all "undefined" changes are ignored and overwritten. it's an appliance after all, so any mangling of files beside what synology allows can vanish at any time with any update

the loader takes care of some settings: they are checked on every boot and get re-patched when they have changed, i.e. have been reset to the original

the 12 disks of 3615/17 are the default and were enough when the patch was created, so the patch does not contain any parts monitoring the disk count. if you set it to something different manually it will be used, but it will also be reset to default on some updates, and your settings are gone

the 918+ is different in that regard, as it had a default of 4 disks and would not have been that useful with it, so jun also patched the disk count to 16. even after a bigger update it is patched back to 16. there is no equivalent in 3615/17, but technically it's possible to create a new patch that raises the 12-disk default to a higher number; only no one has taken the effort and time to do so (i would incorporate the patch in my extra.lzma - if done properly)

 

On 12/11/2020 at 11:55 AM, xpevenom said:

Roughly speaking it seems editing synoinfo.conf will blow up in your face if used to enable some external ports as internal ports

yes, and that's why it's not suggested and i warn people not to do so. it can be handled, but it can be complicated or dangerous, contradicting the purpose of an appliance of being easier to handle

 

 

worst case is "losing" the redundancy disks: the raid will come up and you will need to rebuild. a little scarier on 1st boot after an update is when more disks than the redundancy allows are missing; then the raid completely fails to start, but that's actually better, because you just change the synoinfo.conf back the way you had it before, reboot, and everything should be back to normal (besides some raid1 system partitions that can be auto-repaired by dsm)

 

On 12/11/2020 at 11:55 AM, xpevenom said:

A) Way to utilize the kernel/grub commands to achieve same results as I outlined above? I am assuming the grub based kernel commands would persist across an update where as synoinfo.conf edits will not.

yes, but the max disk count and layout of the disks are in the synoinfo.conf, and the kernel commands (if working for sata) are more or less for remapping

 

On 12/11/2020 at 11:55 AM, xpevenom said:

B) Way to update and not degrade an array that depends on default external ports being mapped to internal? (Preferably without having to LiveCD after patch applied)

see above. if you lost redundancy then it's kind of the worst case scenario; it's not always that way. a "crashed" array (like not enough disks to assemble it) is much less hassle (but looks scary when happening after boot)

 

On 12/11/2020 at 11:55 AM, xpevenom said:

A better way to do this? Or just not recommend port remapping because update will degrade the array.

 

with just 8 disks the suggested scenario is to use the available sata ports first and only use the needed lsi ports, so you won't miss the 2 ports at the end and don't need to change anything. the other way would be to completely disable the sata and have the 8 ports of the lsi at positions 1-8 (being inside the 12). there might be a kernel option to swap controller positions and get the lsi used 1st, but that's still not fool proof and might change when synology changes the kernel (they are in control and we don't even have a recent source). using the sata ports 1st before the lsi ports is safer, and the performance is not a problem; the onboard chipset sata usually works well enough

 

  • 1
21 minutes ago, xpevenom said:

2. RAID failure due to missing disks (more disks gone than redundancy allows) results in RAID not starting but recovers gracefully once the disks are seen once again. If only the redundancy disks are missing, the RAID will start and returning the missing disks will require them to be rebuilt (time + strain on the disks).

 

All array disks are redundancy disks.  If DSM boots and there are not enough disks available to start the array, no change will be made to the array and a subsequent correction of the problem allowing the array to start will result in a normal state.  When there are enough disks to start the array (given redundancy), it is subsequently modified and the disks left out of the array are no longer valid.  Therefore, the array must be rebuilt to restore them.

 

IMHO, "strain on disks" is poppycock with the exception of all-flash arrays that will use up some of their write allowance.

Edited by flyride
  • 0

Hi IG-88! It's an honor to have you addressing my question, as I have learned so much from your posts/replies in the last months. 🤩 Thank you for your support! 👍

After a few days battling this issue, I gave up troubleshooting and installed DS3615xs with Jun's 1.03b loader and the latest DSM (6.2.3-25426), because I had a good experience using the 6.2.2 version before (and I didn't need transcoding in this NAS box). I was trying to use 918+ simply because it is running in another NAS at home. Now, with 3615xs, everything is working great after a few edits to adjust the number of drives and activate SHR in synoinfo.conf.

 

To answer your questions, so maybe it will help others:

  1. I may have messed up the number of zeros on the HEX code, but I tried so many times, using different numbers from 12 to 24 drives (and different HEX codes for USB and internal ports every time) that I don't believe this may be the issue.
  2. To reach 20 drives, I used 2 PCie Sata controllers: one for 6 drives and one for 2 drives (plus the 4 SATA onboard). 
  3. Sorry, I cannot provide the dmesg file simply because I installed DS3615xs and did not backup the DS918+ system files (as they were giving me trouble).
  4. Now, I am trying to work with those ata ports that are always listed as DUMMY in the dmesg file (#5, #6, #9, #10). They remain empty. I also don't understand why disks 13 -> 16 are not automatically populated, as they are not listed as dummy in the dmesg file and I did change the grub.cfg file with SataPortMap=462 (as per my setup). I understand that I can use sata_remap (works great!), and I am also trying to learn more about SasIdxMap. Let me know if you have any ideas on how to deal with these drive numbers that don't populate automatically. 

Thank you again for helping this amazing community! 😃

  • 0

there is some documentation about kernel commands in older sources of synology

i started documenting it in a thread in tutorials and guides, but it's not visible yet as it hasn't been approved by a mod

"sata and sas config commands in grub.cfg and what they do"

 

just a small snippet that might be interesting here

 

config SYNO_SATA_PORT_MAP
    bool "Modify SATA Hosts Port Number"
    depends on SYNO_FIXED_DISK_NAME
    default y
    help
      <DSM> #18789
      Reads Sata-Port-Mapping information and forces the sata hosts
      to initialize specified number of ports. This makes the disk
      name not skip some characters.

      Notice - Do NOT set the port number out of the range that [0-9].
               It supports as most 9 ports now.

      For example, SataPortMap=4233 means the 1st host use 4 ports,
      the 2nd host use 2 ports, the 3rd and 4th host use 3 ports.

 

config SYNO_DISK_INDEX_MAP
    bool "Modify Disk Name Sequence"
    depends on SYNO_FIXED_DISK_NAME
    default y
    help
      <DSM> #19604
      Add boot argument DiskIdxMap to modify disk name sequence. Each
      two characters define the start disk index of the sata host. This
      argument is a hex string and is related with SataPortMap.

      For example, DiskIdxMap=030600 means the disk name of the first
      host start from sdd, the second host start from sdg, and the third
      host start sda.

 

config SYNO_SATA_REMAP
    bool "Re-map Disk Name Sequence"
    depends on SYNO_FIXED_DISK_NAME
    default y
    help
      <DSM> #47418
      Add boot argument sata_remap to remap data port sequence.

      For example, sata_remap=0>4:4>0 means swap the first disk name
      and the 5th. The following is the remap result.
          ata1 - sde
          ata2 - sdb
          ata3 - sdc
          ata4 - sdd
          ata5 - sda

 

config SYNO_SATA_DISK_SEQ_REVERSE
    bool "Reverse Disk Port Sequence"
    depends on SYNO_FIXED_DISK_NAME
    default y
    help
      <DSM> #23278
      Add boot argument DiskSeqReverse to reverse the ports of each SATA chip.

      For example, for a 4 SATA chips model, 4 ports of each SATA chip.
      We want to reverse all 4 ports of 1st chip, no modification of 2nd chip,
      reverse 2 former ports of 3rd chip, and reverse 3 former ports of 4th chip.
      The boot argument should be DiskSeqReverse=4023. And the sequence of disk is:
        1st chip - [sdd, sdc, sdb, sda]
        2nd chip - [sde, sdf, sdg, sdh]
        3rd chip - [sdj, sdi, sdk, sdl]
        4th chip - [sdo, sdn, sdm, sdp]

 

  • 0

@IG-88 On the topic of the kernel commands vs synoinfo.conf

 

HP Gen8 MicroServer w/ LSI 9211 HBA running 6.2.2 DS3617xs w/ 1.03b loader.

Note: motherboard is no longer living in original chassis, migrated to new chassis with 8 drive bays.

Running one critical tweak - modified both synoinfo.conf's:

usbportcfg="0x7c000"
esataportcfg="0x0000"
internalportcfg="0x3fff"

 

Quote

(14 drive bays, 5 USB, 0 esata)

0000 0000 0000 0000 0000 0000 0111 1100 0000 0000 0000 ==> Usb ports (dmesg reports 5 assigned)

0000 0000 0000 0000 0000 0000 0000 0011 1111 1111 1111 ==> Sata ports (14 - 4 onboard, ODD port, MicroSD slot, 8 on LSI 9211)

 

Without this modification, default DS3617xs configuration detects the last two ports of the LSI 9211 as eSata. After the edit, a volume using all 8 ports of the LSI 9211 is created.
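As a sanity check, the number of set bits in each mask should equal the intended port counts from the layout above (14 internal, 5 USB, 0 eSATA). A small sketch (the `popcount` helper is my own):

```shell
#!/bin/sh
# Count the 1-bits in a mask to verify it covers the intended number of ports.
popcount() {
  n=$(( $1 )); c=0
  while [ "$n" -gt 0 ]; do c=$(( c + (n & 1) )); n=$(( n >> 1 )); done
  echo "$c"
}

popcount 0x3fff    # internalportcfg -> 14
popcount 0x7c000   # usbportcfg      -> 5
popcount 0x0       # esataportcfg    -> 0
```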

 

Did an update from 6.2.2 to 6.2.3 Update 2. Box rebooted, update seemed to apply successfully and onboard nic works... but the array comes up degraded with the last 2 drives missing. Why? All 8 ports of the LSI 9211 are used in the array; the update rewrote `/etc.defaults/synoinfo.conf` (but not `/etc/synoinfo.conf`), so the last two drives are now being detected as eSata external disks. Manually editing `/etc.defaults/synoinfo.conf` and rebooting allows the last two drives to be visible again as internal drives. Triggered a repair of the storage pool, which is reinitializing the data on the last two drives. So the array is rebuilding, but overall it looks like the same unpleasantry could happen with the next minor update.
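Since a big update can rewrite /etc.defaults/synoinfo.conf while leaving /etc/synoinfo.conf alone, a quick post-update check for drift between the two copies can help catch this before the array degrades further. A sketch (scratch files stand in for the real paths so the snippet is self-contained; the values are illustrative):

```shell
#!/bin/sh
# Scratch stand-ins; on a live box use /etc/synoinfo.conf and
# /etc.defaults/synoinfo.conf instead.
etc=/tmp/demo.etc.synoinfo.conf
def=/tmp/demo.defaults.synoinfo.conf
printf 'internalportcfg="0x3fff"\n' > "$etc"   # edited copy kept the tweak
printf 'internalportcfg="0xfff"\n'  > "$def"   # update restored the default

status=ok
for key in maxdisks internalportcfg esataportcfg usbportcfg; do
  a=$(grep "^$key=" "$etc"); b=$(grep "^$key=" "$def")
  if [ "$a" != "$b" ]; then
    echo "MISMATCH $key: etc has '$a', defaults has '$b'"
    status=drift
  fi
done
echo "$status"
```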

 

Issue never noticed before because I never maxed out all 8 ports of the LSI 9211 (the last 2-4 ports were not used).

 

Roughly speaking, it seems editing synoinfo.conf will blow up in your face if used to enable some external ports as internal ports. Wondering, is there:

A) Way to utilize the kernel/grub commands to achieve same results as I outlined above? I am assuming the grub based kernel commands would persist across an update where as synoinfo.conf edits will not.

B) Way to update and not degrade an array that depends on default external ports being mapped to internal? (Preferably without having to LiveCD after patch applied)

C) A better way to do this? Or just not recommend port remapping because update will degrade the array.

Edited by xpevenom
minor typo, formatting fix
  • 0

Hmm, very interesting. Thanks for the details, that helps better explain what the options are.

 

Takeaways:

1. If you have more than 12 ports, the 918+ jun loader is patched to 16 disks. Might have made more sense to use that loader vs the 3617 loader (which sticks to the default 12)

 

2. RAID failure due to missing disks (more disks gone than redundancy allows) results in RAID not starting but recovers gracefully once the disks are seen once again. If only the redundancy disks are missing, the RAID will start and returning the missing disks will require them to be rebuilt (time + strain on the disks).

 

3. For the particular case of the HP Gen8 MicroServer with LSI 9211: Using the onboard ports + first 4 ports of the LSI 9211 should allow an 8-disk array to not degrade after a major update. The remaining 4 ports can be used for an SSD array (2 to be safe) and the last 2 for a cache array (which can be safely blown away without concern).

  • 0
46 minutes ago, xpevenom said:

3. For the particular case of the HP Gen8 MicroServer with LSI 9211: Using the onboard ports + first 4 ports of the LSI 9211 should allow an 8-disk array to not degrade after a major update. The remaining 4 ports can be used for an SSD array (2 to be safe) and the last 2 for a cache array (which can be safely blown away without concern).

when sticking to the 12 limit (918+ is no option with an hp microserver gen8 mainboard)

6 onboard + 8 lsi - 2 (two onboard unusable) - 2 (over 12) = 10 usable disk "slots" (4 onboard sata and 6 out of 8 on the lsi)

1 1 1 1 0 0 1 1 1 1 1 1 0 0 - that's your 10 disks seen from the 6 + 8 layout of the controllers

if you already have 8 then you can add two, not four, disks

 

just scrap the cache drives; with a 1GBit nic the performance of the raid array will be enough (110MB/s), no speed gain in most to all cases from ssd cache (at least on a home-use system; in a small business environment there might be a difference)

a local ssd as a volume can be interesting for local tasks on the nas like docker/vm's and photostation/moments

it might also be an option to switch off onboard sata completely and get a 16-port lsi sas controller, but at this point it might be better to think about a new cpu/board (does not have to be brand new)

 

if you just stick to the rules and keep it to 10 disks there is no need to change anything; just keep the default of 12 drives and forget about update problems with synoinfo.conf

if there are no immediate plans to extend, then see to it later, when a new cpu/board might already be needed or planned and 918+ might be the choice

maybe i will do the patch for more disks for 3615/17 - but i've been talking about that for over a year now ... - let's see what the new hard corona shutdown will bring, maybe i will get bored so much that i do it this time. 16 or 20 disks would be my choice for a new patch

 

  • 0

Yeah, that makes good sense. For my own sake, I will repeat my takeaway:

 

As you say, given the age of the HP Gen8 Micro, it looks like the best-value path forward would be:

- Split the main 8-disk data array between the onboard 4 ports + first 4 ports of the LSI 9211. This is easy enough: swap the 3 SFF8087 cables, with disks 1-4 going onboard, disks 5-8 to LSI 9211 Slot 1, and disks 9-12 to LSI 9211 Slot 2. I'm assuming there shouldn't be any array issues after swapping the cables around; drives will be recognized and picked up by the array on the next boot.

- Out of the remaining 4 ports of the LSI 9211, the first two are used as a nice SSD RAID1 for things like docker etc.

    - The remaining two will be seen as external eSATA drives: Modifications to synoinfo.conf can be made to switch them to internal drives, but since updates could potentially disable access to them, they ideally should only be used for read-only cache (which is not even necessary, but hey, got an empty SSD so why not).

 

Longer term, replacing the motherboard with something more modern and with good support would be ideal. I'm thinking I will probably have to wait and see what DSM 7 will allow, if it ever comes out and gets support from the community.

 

Thanks again, getting some life out of an increasingly obsolete HP Gen8 MicroServer w/ Xeon 12xx chip (that's still faster than most factory units being shipped today.... 🤦‍♂️)

