digity

volume degraded after switching HBA


I installed DSM successfully with a 4 x 500 GB RAID5 volume1 on the motherboard's built-in SATA ports. Later I successfully added an 8-port SATA PCI-e card with a 4 x 2 TB RAID5 volume2 (separate pool). When I moved the first volume/pool (volume1) to the 8-port SATA PCI-e card, the volume/pool degraded. I powered off the machine and put the volume1 drives back in their original order (but still on the PCI-e card), yet it still shows up as degraded. I'm not sure if this is part of the problem, but I noticed 2 of the volume1 drives are now showing up as external hard drives (not regular drives in Storage Manager -> HDD/SSD). How do I get volume1 back into good standing while keeping it on the PCI-e card (I don't want to use the motherboard's built-in SATA ports)?


https://hedichaibi.com/fix-xpenology-problems-viewing-internal-hard-drives-as-esata-hard-drives/

 

Problem solved, I think, by following the above article. All drives show up as internal drives now, and volume1 went from a red degraded icon to an amber/orange icon after making the changes in the synoinfo.conf files. Volume1 showed 3 drives as Normal and 1 drive as Initialized, so I kicked off a repair of volume1. All data appears to be there.
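For anyone wondering what those synoinfo.conf edits actually do: the port config keys (internalportcfg, esataportcfg, usbportcfg, plus maxdisks) are hex bitmasks with one bit per drive slot, and the "internal drive shows up as eSATA" symptom appears when a slot's bit ends up in esataportcfg instead of internalportcfg. A rough sketch of the arithmetic is below - the key names are DSM's, but the slot counts are just illustrative assumptions, not the values for this particular box:

```python
# Sketch of the bitmask arithmetic behind the synoinfo.conf port keys.
# The key names (maxdisks, internalportcfg, esataportcfg, usbportcfg) are
# DSM's; the slot counts here are assumptions for illustration only.

def port_mask(first_slot: int, count: int) -> int:
    """Bitmask with `count` bits set, starting at 0-based `first_slot`."""
    return ((1 << count) - 1) << first_slot

max_disks = 12                          # assumed total slots DSM should treat as internal
internal  = port_mask(0, max_disks)     # slots 0-11 -> internal drives
esata     = port_mask(max_disks, 0)     # no slots reserved for eSATA
usb       = port_mask(max_disks, 2)     # assume 2 USB slots after the internal ones

print(f'maxdisks="{max_disks}"')
print(f'internalportcfg="0x{internal:x}"')   # 0xfff
print(f'esataportcfg="0x{esata:x}"')         # 0x0
print(f'usbportcfg="0x{usb:x}"')             # 0x3000
```

The masks shouldn't overlap; once a slot's bit is set in internalportcfg (and cleared from esataportcfg), DSM treats the drive on that port as a normal internal disk.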

 

Also, it appears the drive order does not matter when migrating a set of drives in RAID to another HBA - Linux/DSM will still see the RAID set (although some drives may show up as external drives after the move if the synoinfo.conf files aren't modified).


Is there a way to do this automatically? Every time I upgrade I have to change this value and rebuild my array.
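Not an official fix, but one way to handle this is a small boot-up scheduled task (Control Panel -> Task Scheduler, run as root) that re-applies the values to both copies of synoinfo.conf after an update has reset them. A minimal sketch, with hypothetical key/value pairs - substitute whatever masks match your own port layout:

```python
# Hypothetical boot-time task: re-apply custom synoinfo.conf values after a
# DSM update resets them. The key/value pairs are placeholders, not values
# for any specific box; run as root.
import re

WANTED = {
    "maxdisks": '"12"',
    "internalportcfg": '"0xfff"',
    "esataportcfg": '"0x0"',
    "usbportcfg": '"0x3000"',
}

for path in ("/etc/synoinfo.conf", "/etc.defaults/synoinfo.conf"):
    with open(path) as f:
        text = f.read()
    for key, value in WANTED.items():
        pattern = rf"^{key}=.*$"
        if re.search(pattern, text, flags=re.M):
            # overwrite the existing key="..." line
            text = re.sub(pattern, f"{key}={value}", text, flags=re.M)
        else:
            # key missing entirely; append it
            text += f"\n{key}={value}\n"
    with open(path, "w") as f:
        f.write(text)
```

Whether this survives a major DSM upgrade is another question, so it's still worth checking the drives in Storage Manager after every update.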

