Volume degraded after switching HBA


I installed DSM successfully with a 4 x 500 GB RAID5 volume (volume1) on the motherboard's built-in SATA ports. Later I successfully added an 8-port SATA PCI-e card with a 4 x 2 TB RAID5 volume (volume2, a separate pool). When I moved the first volume/pool (volume1) to the 8-port SATA PCI-e card, the volume/pool degraded. I powered off the machine and put the volume1 drives back in the original order (but still on the PCI-e card), yet it still shows as degraded. I'm not sure if this is part of the problem, but I noticed two of the volume1 drives now show up as external hard drives (rather than as regular drives in Storage Manager -> HDD/SSD). How do I fix volume1 so it gets back to good standing while staying on the PCI-e card (I don't want to use the motherboard's built-in SATA ports)?


https://hedichaibi.com/fix-xpenology-problems-viewing-internal-hard-drives-as-esata-hard-drives/


Problem solved, I think, by following the above article. After making the changes in the synoinfo.conf files, all drives show up as internal drives, and volume1 went from a red degraded icon to an amber/orange icon. Volume1 showed 3 drives as Normal and 1 drive as Initialized, so I kicked off a repair of volume1. All data appears to be there.
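For reference, the edits boil down to a few bitmask settings in /etc/synoinfo.conf and /etc.defaults/synoinfo.conf. Here is a minimal sketch; the values below are illustrative assumptions for a 12-slot layout, so compute your own masks (one bit per port) as the article describes:

```
# /etc/synoinfo.conf and /etc.defaults/synoinfo.conf
# Illustrative values for a 12-slot layout -- derive your own bitmasks.
maxdisks="12"              # total number of disk slots DSM will manage
internalportcfg="0xfff"    # bits 0-11 set: treat the first 12 ports as internal
esataportcfg="0x0"         # no ports reserved as eSATA
usbportcfg="0x300000"      # USB slots mapped above the internal range
```

Each port DSM sees corresponds to one bit, so a drive shows up as eSATA whenever its bit lands in esataportcfg instead of internalportcfg; moving those bits into internalportcfg is what flips the drives back to internal.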


Also, it appears the order does not matter when migrating a set of RAID drives to another HBA: Linux/DSM will still recognize the RAID set (although some drives may show up as external drives after the move unless you modify the synoinfo.conf files).
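That matches how Linux md assembles arrays: membership is read from the metadata superblock on each drive (the array UUID plus each device's role), not from the port it is plugged into. If you want to verify after a move, something like this works over SSH (the device names here are just examples; DSM typically puts data volumes on /dev/md2 and up):

```
# Overall array status -- a healthy 4-disk RAID5 shows [UUUU], a degraded one [UUU_]
cat /proc/mdstat

# Details for one array: state, and which member devices are active
mdadm --detail /dev/md2

# Read the on-disk superblock of a member partition to see its array UUID
mdadm --examine /dev/sdb5
```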

