
RAID degraded after upgrade from 5.2 to 6.1



I followed the excellent topics by Polanskiman (http://xpenology.com/forum/viewtopic.php?f=2&t=22100) and quickni (http://xpenology.com/forum/viewtopic.php?f=2&t=24308), but I made the mistake (or was just being brave and ruthless... :roll: ) of using the bootloader "DS3615xs 6.1 Jun's Mod V1.02-alpha.zip".

 

Anyway, the installation went fine and after all the reboots I was able to log in to the web GUI without problems. I did, however, see a message that my RAID was degraded and that 2 out of my 8 disks were not in the array (I do have a proper backup... :cool: ).

 

Poking around in the web GUI I realised two things:

 

1. I had made the above-mentioned mistake, which meant I was actually running DSM 6.1.xxx instead of 6.0.xxx.

 

2. The 2 disks that had been taken out of the RAID now appeared as two external eSATA disks and were working normally. Strange!

 

I have not yet run smartctl or badblocks tests on those two disks, but I thought I would post this issue here first to get some guru advice from you guys (before I make any stupid modification attempts on the RAID).
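(For reference, these are the kinds of read-only checks I have in mind; the device names sdm/sdn are just assumed from the output further down, so adjust as needed:)

# SMART health and attributes for the two suspect disks
smartctl -H -A /dev/sdm
smartctl -H -A /dev/sdn

# kick off a long self-test, then read the result later with "smartctl -a"
smartctl -t long /dev/sdm

# read-only surface scan; do NOT use the destructive -w option on disks with data
badblocks -sv /dev/sdm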

 

This is what I get when I do "fdisk -l":

 

fdisk -l  

 

Disk /dev/sdg: 3.7 TiB, 4000753476096 bytes, 7813971633 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 325DFE1B-A2AC-4AEA-B75B-8D86E7473C6F
 
Device Start End Sectors Size Type
/dev/sdg1 2048 4982527 4980480 2.4G Linux RAID
/dev/sdg2 4982528 9176831 4194304 2G Linux RAID
/dev/sdg5 9453280 7813765983 7804312704 3.6T Linux RAID
 
Disk /dev/sdh: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: BC62F0DF-E10F-44B5-940A-D1D15A9748D0
 
Device Start End Sectors Size Type
/dev/sdh1 2048 4982527 4980480 2.4G Linux RAID
/dev/sdh2 4982528 9176831 4194304 2G Linux RAID
/dev/sdh5 9453280 7813765983 7804312704 3.6T Linux RAID
 
Disk /dev/sdi: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E368D235-865B-4571-9077-07A8FD1E1DE5
 
Device Start End Sectors Size Type
/dev/sdi1 2048 4982527 4980480 2.4G Linux RAID
/dev/sdi2 4982528 9176831 4194304 2G Linux RAID
/dev/sdi5 9453280 7813765983 7804312704 3.6T Linux RAID
 
Disk /dev/sdj: 3.7 TiB, 4000753476096 bytes, 7813971633 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 9A4B1520-F64A-4A2F-B0F8-186F400290EE
 
Device Start End Sectors Size Type
/dev/sdj1 2048 4982527 4980480 2.4G Linux RAID
/dev/sdj2 4982528 9176831 4194304 2G Linux RAID
/dev/sdj5 9453280 7813765983 7804312704 3.6T Linux RAID
 
Disk /dev/sdk: 3.7 TiB, 4000753476096 bytes, 7813971633 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 809D9F0D-2528-4BC9-9103-E445E2CE7CB6
 
Device Start End Sectors Size Type
/dev/sdk1 2048 4982527 4980480 2.4G Linux RAID
/dev/sdk2 4982528 9176831 4194304 2G Linux RAID
/dev/sdk5 9453280 7813765983 7804312704 3.6T Linux RAID
 
Disk /dev/sdl: 3.7 TiB, 4000753476096 bytes, 7813971633 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: E4D6DD02-1B61-455F-B0E4-3C738F12640A
 
Device Start End Sectors Size Type
/dev/sdl1 2048 4982527 4980480 2.4G Linux RAID
/dev/sdl2 4982528 9176831 4194304 2G Linux RAID
/dev/sdl5 9453280 7813765983 7804312704 3.6T Linux RAID
 
Disk /dev/sdm: 3.7 TiB, 4000753476096 bytes, 7813971633 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F3BD886D-CE11-4F0E-91D9-1FBEA97DE8E9
 
Device Start End Sectors Size Type
/dev/sdm1 2048 4982527 4980480 2.4G Linux RAID
/dev/sdm2 4982528 9176831 4194304 2G Linux RAID
/dev/sdm5 9453280 7813765983 7804312704 3.6T Linux RAID
 
Disk /dev/sdn: 3.7 TiB, 4000753476096 bytes, 7813971633 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 05E2163B-EBFF-47DD-9752-2FFE1FED7BB4
 
Device Start End Sectors Size Type
/dev/sdn1 2048 4982527 4980480 2.4G Linux RAID
/dev/sdn2 4982528 9176831 4194304 2G Linux RAID
/dev/sdn5 9453280 7813765983 7804312704 3.6T Linux RAID
 
Disk /dev/md0: 2.4 GiB, 2549940224 bytes, 4980352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
 
Disk /dev/md1: 2 GiB, 2147418112 bytes, 4194176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
 
Disk /dev/sdu: 3.7 TiB, 4000752599040 bytes, 976746240 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 671DE8BB-ED64-4DEC-96CF-15AA514BA272
 
Device Start End Sectors Size Type
/dev/sdu1 32 976735934 976735903 3.7T Linux filesystem
 
GPT PMBR size mismatch (102399 != 7843838) will be corrected by w(rite).
Disk /dev/synoboot: 3.8 GiB, 4016045568 bytes, 7843839 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5D2A121E-5B72-49A8-9416-54409373E88F
Device Start End Sectors Size Type
/dev/synoboot1 2048 32767 30720 15M EFI System
/dev/synoboot2 32768 94207 61440 30M Linux filesystem
/dev/synoboot3 94208 102366 8159 4M BIOS boot
 
Disk /dev/zram0: 2.4 GiB, 2524971008 bytes, 616448 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
 
Disk /dev/zram1: 2.4 GiB, 2524971008 bytes, 616448 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
 
Disk /dev/zram2: 2.4 GiB, 2524971008 bytes, 616448 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
 
Disk /dev/zram3: 2.4 GiB, 2524971008 bytes, 616448 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
 
Disk /dev/md2: 21.8 TiB, 23974842335232 bytes, 46825863936 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 393216 bytes
 
Disk /dev/mapper/vg1-syno_vg_reserved_area: 12 MiB, 12582912 bytes, 24576 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 393216 bytes
 
Disk /dev/mapper/vg1-volume_1: 21.8 TiB, 23974826213376 bytes, 46825832448 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 393216 bytes

 


 

and when I do cat /proc/mdstat I get

 

cat "/proc/mdstat"  

 

Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid6 sdg5[0] sdl5[5] sdk5[4] sdj5[3] sdi5[2] sdh5[1]
23412931968 blocks super 1.2 level 6, 64k chunk, algorithm 2 [8/6] [UUUUUU__]
 
md1 : active raid1 sdg2[0] sdh2[1] sdi2[2] sdj2[3] sdk2[4] sdl2[5]
2097088 blocks [12/6] [UUUUUU______]
 
md0 : active raid1 sdg1[0] sdh1[1] sdi1[2] sdj1[3] sdk1[4] sdl1[5] sdm1[6] sdn1[7]
2490176 blocks [12/8] [UUUUUUUU____]

 


 

which clearly shows 2 devices ("sdm" and "sdn") missing from md1 and md2, even though their first partitions still appear in md0.
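(For anyone who wants more detail than /proc/mdstat gives, mdadm can show the per-member state; a minimal sketch, with the device names taken from the output above:)

# detailed state of the data array; sdm5/sdn5 should be listed as removed/missing
mdadm --detail /dev/md2

# check whether the "missing" partitions still carry valid RAID superblocks
mdadm --examine /dev/sdm5 /dev/sdn5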

 

What confuses me though is:

 

1. Why do I see md0, md1 and md2? I only have 8 disks of 4 TB each in the RAID (connected via the xxxx controller), one external USB disk (not shown above), and the USB stick running the bootloader.

 

2. Why are the 2 "failing" disks visible in the DSM GUI as 2 externally attached eSATA disks, in normal mode, without any problems?

 

I am currently doing a 2nd full backup of the data on my RAID onto a separate external USB disk, and when that is completed I am ready to follow your advice and run whatever disk commands you suggest.


I'm not a guru, but as you figured out, it was a mistake to upgrade from DSM 5.2 straight to 6.1. I think I saw somewhere that Synology says the upgrade to DSM 6.1 can only be made from DSM 6.0.

 

Now, about your questions: md0 and md1 are system partitions, and I'm guessing one is from DSM 5.2 and the other from DSM 6.1. One of them could have been unmounted, but due to upgrade issues it's still there.

About the 2nd question: you mentioned those 2 disks are eSATA. Did you change the flags in the configuration files under DSM 5.2 to make them visible as normal disks? If yes, check whether those settings were changed by the upgrade.

 

You can try to downgrade the system to DSM 5.2 and from that point upgrade to DSM 6.0 and then to DSM 6.1. At least, those are the steps I would take.


You mentioned those 2 disks are eSATA. Did you change the flags in the configuration files under DSM 5.2 to make them visible as normal disks? If yes, check whether those settings were changed by the upgrade.

Nope... all 8 of those disks were part of my RAID while running DSM 5.2. On DSM 5.2 everything ran normally with all 8 disks in my RAID 6 array, and all disks passed a smartctl long test before the upgrade with no errors shown.

 

You said you attached them to eSATA ports; that means you had to edit synoinfo.conf. After the upgrade that file was replaced with a default one, so your eSATA disks were not "internal" anymore and the system saw them as lost.

No, I did not attach them to eSATA ports. I did NOT change ANYTHING in my hardware config before or after the upgrade. All of these disks are attached to the SAS 9211-8i card (the card is physically an "IBM ServeRAID M1015 8-CH SAS-SATA PCI-E - 46M0861").

 

Anyway, I'll revert back to DSM 5.2 and then do a "proper" upgrade to 6.0 (bootloader DS3615xs 6.0.2 Jun's Mod V1.01.zip) instead of the 6.1 upgrade (bootloader DS3615xs 6.1 Jun's Mod V1.02-alpha.zip) that I did "accidentally".


[SOLVED!]

 

Alright, Saturday and Sunday were spent trying to figure out what had happened when I went from DSM 5.2 to DSM 6.1 (yeah, yeah, I know I made a mistake by testing 6.1 directly even though it is still a pure alpha, but hey, I'm brave and I have backups... :cool: ).

 

I first tried downgrading to 6.0.2, but I had the same problem: 2 of my 8 disks were shown as eSATA disks and DSM kept warning about a degraded RAID.

I then tried downgrading to 5.2, but it always went back to "migratable" mode, so even though I rebooted several times with XPEnology 5.2 (which I had initially) I was not able to complete the roll-back to 5.2. (I never tried a FULL RE-INSTALL though, where all settings are wiped but data is kept.)

 

So I decided to take the hard path and just get DSM 6.1 working... :twisted:

I now decided to sacrifice all my settings but keep my data, so I did a full re-install, but that didn't help either... :evil:

 

I tried modifying the SataPortMap setting in the grub file and also played around with the internalportcfg and usbportcfg settings in /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, but I was not able to get it to work properly. I realised I had to be doing something wrong, since no matter which settings I tried, 2 of my 8 disks (all connected to the LSI card; I have no SATA drives connected) always showed up as externally connected eSATA drives.
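(If anyone wants to see their current values before touching anything, a quick grep over both files shows them; these are the same keys discussed in this thread:)

grep -E 'maxdisks|internalportcfg|esataportcfg|usbportcfg' /etc.defaults/synoinfo.conf /etc/synoinfo.conf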

 

My disk setup is 6 SATA connectors on the mobo with NO SATA drives connected to them. Instead I have the "standard" LSI 9211-8i with 8 disks connected to it via fan-out SATA cables.

So: Google, Google, Google and more Google, and finally, after reading a BUNCH of different posts here at the XPEnology forum as well as other posts/blogs/comments, I realised that I most likely had:

 

1. forgotten to modify the esataportcfg setting,

and

2. used the wrong values,

and

3. forgotten to INCREASE the maxdisks setting from 12 to 14 (6 SATA ports with NO disks connected + my 8 disks connected to the LSI card).

 

Looking at the dmesg output I could see that my LSI-connected disks showed up as slots 7 to 14.
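(Something like this is enough to see how the kernel enumerates the drives; the sdg-sdn names are simply what my disks came up as:)

dmesg | grep -i 'sd[g-n]'                     # kernel messages for the LSI-attached disks
fdisk -l 2>/dev/null | grep '^Disk /dev/sd'   # quick list of all detected disks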

 

I figured out that the internalportcfg value should therefore be

binary 0011 1111 1100 0000 = 0x3fc0 (i.e. zeroes in the 6 lowest bit positions, since I do not have any disks connected to the mobo Asus P7F-X SATA ports)

and I set esataportcfg = 0x0

and kept the current setting usbportcfg = 0x300000
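To make it concrete, this is roughly how the relevant lines in both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf ended up looking on my box (treat it as a sketch; the exact quoting and order in the file may differ between DSM versions):

maxdisks="14"                # 6 mobo SATA ports + 8 slots on the LSI card
internalportcfg="0x3fc0"     # 0011 1111 1100 0000: LSI slots 7-14 marked internal, the 6 empty mobo ports cleared
esataportcfg="0x0"           # no eSATA ports in use
usbportcfg="0x300000"        # left at the value that was already there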

 

I made the change in both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, rebooted, waited anxiously.... and voilaaaaa! :razz:

The 2 external eSATA disks were now back as two internal disks within the RAID, but degraded of course. I just did "Repair volume" from the GUI and it started rebuilding/parity checking; now, 10 hours later, it is at 90% and I am happy and confident it will work as intended from now on...
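(The rebuild progress can also be followed from the shell if you prefer, using the same md tools as above:)

cat /proc/mdstat          # shows a progress bar and percentage while md2 rebuilds
mdadm --detail /dev/md2   # shows "Rebuild Status" and the state of each member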

 

So my conclusions are:

 

1. SataPortMap=1, which is the default setting, works fine for me and didn't seem to affect my problem either way.

 

2. It is VERY important to understand how many SATA ports you have and, if you have extra controller cards (like I do), to figure out the proper value to use for the internalportcfg setting in BOTH files.

In my case I had 6 mobo SATA ports (which I do not use) and the LSI controller card with 8 disks, i.e. 14 disk slots in total (and remember, the default setting in the conf files seems to be maxdisks=12).

 

3. It is also important to set a proper esataportcfg value (in my case I forced it to zero since I have no eSATA drives/ports; I use USB 3.0 ports for backing up data to external USB 3.0 disks).

 

In my case it was this post that pointed me in the right direction (along with many other posts here at xpenology.com):

SATA and eSATA port configuration at the xpenology.eu forum, as well as this one here at xpenology.com


Thanks for the feedback! Very helpful. :smile:

I also have an HP MicroServer Gen8 with a Dell H200 flashed to IT mode as an LSI 9211, currently running DSM 5.2.

I will remember your thread if I make the move to DSM 6.1 (but I think I will wait until an updated loader is available).


Yesterday I also took the next step and added 3 x 3 TB SATA disks to 3 of the motherboard's internal SATA connectors, and changed the internalportcfg setting:

 

from binary 0011 1111 1100 0000 = 0x3fc0 (i.e. zeroes in the 6 lowest positions, from when I had no disks connected to the mobo Asus P7F-X SATA ports)

 

to binary 0011 1111 1111 1000 = 0x3ff8 (i.e. zeroes now only in the 3 lowest positions, since I now have 3 SATA disks connected to the mobo and the other 8 disks connected to my LSI controller)
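(So the only line that actually changed in both synoinfo.conf files was, roughly:)

internalportcfg="0x3ff8"     # was "0x3fc0"; 3 more low bits set for the 3 new mobo SATA disks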

 

I rebooted the system and the 3 new disks on the SATA ports were visible as disks in Synology. When I started to create a separate disk group from the 3 newly added disks I could not choose SHR as the RAID type, but some searching here on the forum quickly enlightened me: I needed to

 

comment out / delete

support_raid_group = "yes"

and then add

support_syno_hybrid_raid = "yes"

 

in /etc.defaults/synoinfo.conf and /etc/synoinfo.conf, reboot, and voilaaaa! My 2nd disk group is now also up and running in SHR mode with a Btrfs filesystem. Cool! :razz:
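For reference, that part of both synoinfo.conf files ends up looking roughly like this after the change (a sketch; I simply removed/commented the one line and added the other):

# removed / commented out:
#support_raid_group="yes"
# added to enable SHR in Storage Manager:
support_syno_hybrid_raid="yes"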

