XPEnology Community

Chrunch
Rookie · 7 posts

  1. Thanks, everything seems to be back in order. Thanks for your time; I wouldn't have been able to solve that part with mdadm myself.
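     (For anyone landing here in the same state: the final mdadm step referenced above was presumably a re-add of the missing members to the md0/md1 system and swap mirrors shown degraded in the post below. This is a sketch, not the thread's literal commands; partition names are assumed from the fdisk listings later on this page:)

     # Re-add the remaining disks' small system/swap partitions to the DSM
     # system mirror (md0) and swap mirror (md1); device names assumed.
     sudo mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
     sudo mdadm /dev/md1 --add /dev/sdc2 /dev/sdd2 /dev/sde2 /dev/sdf2
     # Then watch the mirrors resync:
     cat /proc/mdstat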
  2. Hi Flyride, it took some time, so I left it overnight to complete. The volume seems to have been repaired now and is in "warning" status. Should I just try the repair link from the overview page, or should I do something else first?

     Overview status: [screenshot]
     Volume status: [screenshot]

     cat /proc/mdstat
     Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
     md2 : active raid5 sdd5[7] sdc5[5] sdb5[6] sdf5[2] sde5[8]
           11701741824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
     md3 : active raid5 sde6[4] sdb6[5] sdd6[3] sdc6[1]
           14651252736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
     md4 : active raid1 sdg3[0]
           1948692544 blocks super 1.2 [1/1] [U]
     md1 : active raid1 sdb2[0]
           2097088 blocks [16/1] [U_______________]
     md0 : active raid1 sdb1[0]
           2490176 blocks [12/1] [U___________]
     unused devices: <none>

     sudo fdisk -l /dev/sdb
     Disk /dev/sdb: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
     Disk model: WD80EZAZ-11TDBA0
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 9E87F26C-B630-11E9-8F59-0CC47AC3A20D

     Device          Start         End     Sectors  Size Type
     /dev/sdb1        2048     4982527     4980480  2.4G Linux RAID
     /dev/sdb2     4982528     9176831     4194304    2G Linux RAID
     /dev/sdb5     9453280  5860326239  5850872960  2.7T Linux RAID
     /dev/sdb6  5860342336 15627846239  9767503904  4.6T Linux RAID

     sudo fdisk -l /dev/sdc
     Disk /dev/sdc: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
     Disk model: WD80EZAZ-11TDBA0
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: B9B51C1C-CA99-43CC-8DC1-80EA2EDF294A

     Device          Start         End     Sectors  Size Type
     /dev/sdc1        2048     4982527     4980480  2.4G Linux RAID
     /dev/sdc2     4982528     9176831     4194304    2G Linux RAID
     /dev/sdc5     9453280  5860326239  5850872960  2.7T Linux RAID
     /dev/sdc6  5860342336 15627846239  9767503904  4.6T Linux RAID

     sudo fdisk -l /dev/sdd
     Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
     Disk model: WD80EZAZ-11TDBA0
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 9F406465-1A66-436C-BB6E-8A558D642FC9

     Device          Start         End     Sectors  Size Type
     /dev/sdd1        2048     4982527     4980480  2.4G Linux RAID
     /dev/sdd2     4982528     9176831     4194304    2G Linux RAID
     /dev/sdd5     9453280  5860326239  5850872960  2.7T Linux RAID
     /dev/sdd6  5860342336 15627846239  9767503904  4.6T Linux RAID

     sudo fdisk -l /dev/sde
     Disk /dev/sde: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
     Disk model: WD80EZAZ-11TDBA0
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 70959473-593B-4676-BE10-42738C834B2C

     Device          Start         End     Sectors  Size Type
     /dev/sde1        2048     4982527     4980480  2.4G Linux RAID
     /dev/sde2     4982528     9176831     4194304    2G Linux RAID
     /dev/sde5     9453280  5860326239  5850872960  2.7T Linux RAID
     /dev/sde6  5860342336 15627846239  9767503904  4.6T Linux RAID

     sudo fdisk -l /dev/sdf
     Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
     Disk model: WD30EZRX-00DC0B0
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 9E87F269-B630-11E9-8F59-0CC47AC3A20D

     Device          Start         End     Sectors  Size Type
     /dev/sdf1        2048     4982527     4980480  2.4G Linux RAID
     /dev/sdf2     4982528     9176831     4194304    2G Linux RAID
     /dev/sdf5     9453280  5860326239  5850872960  2.7T Linux RAID
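     (While a repair like that runs, progress can be followed from the shell; standard mdadm/procfs usage, nothing specific to this thread:)

     # Refresh the array status every 10 seconds while md0/md1 resync:
     watch -n 10 cat /proc/mdstat
     # Or query a single array directly:
     sudo mdadm --detail /dev/md0 | grep -E 'State|Active|Working|Failed'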
  3. Not sure what has changed since last time, besides a reboot. Files are still accessible.

     sudo mdadm -D /dev/md3
     /dev/md3:
             Version : 1.2
       Creation Time : Mon Aug 12 05:25:30 2019
          Raid Level : raid5
          Array Size : 14651252736 (13972.52 GiB 15002.88 GB)
       Used Dev Size : 4883750912 (4657.51 GiB 5000.96 GB)
        Raid Devices : 4
       Total Devices : 3
         Persistence : Superblock is persistent

         Update Time : Mon Aug 8 23:53:10 2022
               State : clean, degraded
      Active Devices : 3
     Working Devices : 3
      Failed Devices : 0
       Spare Devices : 0

              Layout : left-symmetric
          Chunk Size : 64K

                Name : Xpenology:3 (local to host Xpenology)
                UUID : 934a52c6:b8d5e8f1:5653d81b:340aed0c
              Events : 219089

         Number   Major   Minor   RaidDevice State
            5       8       22        0      active sync   /dev/sdb6
            1       8       38        1      active sync   /dev/sdc6
            3       8       54        2      active sync   /dev/sdd6
            -       0        0        3      removed

     sudo fdisk -l /dev/sdb
     Disk /dev/sdb: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
     Disk model: WD80EZAZ-11TDBA0
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 9E87F26C-B630-11E9-8F59-0CC47AC3A20D

     Device          Start         End     Sectors  Size Type
     /dev/sdb1        2048     4982527     4980480  2.4G Linux RAID
     /dev/sdb2     4982528     9176831     4194304    2G Linux RAID
     /dev/sdb5     9453280  5860326239  5850872960  2.7T Linux RAID
     /dev/sdb6  5860342336 15627846239  9767503904  4.6T Linux RAID

     sudo fdisk -l /dev/sdc
     Disk /dev/sdc: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
     Disk model: WD80EZAZ-11TDBA0
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: B9B51C1C-CA99-43CC-8DC1-80EA2EDF294A

     Device          Start         End     Sectors  Size Type
     /dev/sdc1        2048     4982527     4980480  2.4G Linux RAID
     /dev/sdc2     4982528     9176831     4194304    2G Linux RAID
     /dev/sdc5     9453280  5860326239  5850872960  2.7T Linux RAID
     /dev/sdc6  5860342336 15627846239  9767503904  4.6T Linux RAID

     sudo fdisk -l /dev/sdd
     Disk /dev/sdd: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
     Disk model: WD80EZAZ-11TDBA0
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 9F406465-1A66-436C-BB6E-8A558D642FC9

     Device          Start         End     Sectors  Size Type
     /dev/sdd1        2048     4982527     4980480  2.4G Linux RAID
     /dev/sdd2     4982528     9176831     4194304    2G Linux RAID
     /dev/sdd5     9453280  5860326239  5850872960  2.7T Linux RAID
     /dev/sdd6  5860342336 15627846239  9767503904  4.6T Linux RAID

     sudo fdisk -l /dev/sde
     Disk /dev/sde: 7.3 TiB, 8001563222016 bytes, 15628053168 sectors
     Disk model: WD80EZAZ-11TDBA0
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 70959473-593B-4676-BE10-42738C834B2C

     Device          Start         End     Sectors  Size Type
     /dev/sde1        2048     4982527     4980480  2.4G Linux RAID
     /dev/sde2     4982528     9176831     4194304    2G Linux RAID
     /dev/sde5     9453280  5860326239  5850872960  2.7T Linux RAID
     /dev/sde6  5860342336 15627846239  9767503904  4.6T Linux RAID

     sudo fdisk -l /dev/sdf
     Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
     Disk model: WD30EZRX-00DC0B0
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 9E87F269-B630-11E9-8F59-0CC47AC3A20D

     Device          Start         End     Sectors  Size Type
     /dev/sdf1        2048     4982527     4980480  2.4G Linux RAID
     /dev/sdf2     4982528     9176831     4194304    2G Linux RAID
     /dev/sdf5     9453280  5860326239  5850872960  2.7T Linux RAID
     Mino@Xpenology:~$
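     (A useful check at this point, since the fdisk output above shows /dev/sde6 still present: compare the dropped partition's RAID superblock against an active member of md3. If the Array UUID matches and only the event counter lags, the member can usually be re-added rather than rebuilt from scratch. A sketch of standard mdadm usage, device names taken from the outputs above:)

     # Superblock of the dropped member:
     sudo mdadm --examine /dev/sde6 | grep -E 'Array UUID|Events|Device Role'
     # Compare with a healthy member of md3:
     sudo mdadm --examine /dev/sdb6 | grep -E 'Array UUID|Events|Device Role'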
  4. This is out of my league by now, so I'm glad for your help.

     sudo mdadm -D /dev/md3
     Password:
     /dev/md3:
             Version : 1.2
       Creation Time : Mon Aug 12 05:25:30 2019
          Raid Level : raid5
          Array Size : 14651252736 (13972.52 GiB 15002.88 GB)
       Used Dev Size : 4883750912 (4657.51 GiB 5000.96 GB)
        Raid Devices : 4
       Total Devices : 3
         Persistence : Superblock is persistent

         Update Time : Mon Aug 8 22:58:54 2022
               State : clean, degraded
      Active Devices : 3
     Working Devices : 3
      Failed Devices : 0
       Spare Devices : 0

              Layout : left-symmetric
          Chunk Size : 64K

                Name : Xpenology:3 (local to host Xpenology)
                UUID : 934a52c6:b8d5e8f1:5653d81b:340aed0c
              Events : 219081

         Number   Major   Minor   RaidDevice State
            5       8       22        0      active sync   /dev/sdb6
            1       8       38        1      active sync   /dev/sdc6
            3       8       54        2      active sync   /dev/sdd6
            -       0        0        3      removed
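     (The mdstat further up this page shows /dev/sde6 back in md3 afterwards, so the repair presumably amounted to re-adding that member. A sketch, not necessarily the exact commands used; --re-add keeps the existing data if the superblock still matches, while --add rebuilds the slot from parity:)

     # Try a non-destructive re-add of the dropped member first:
     sudo mdadm --manage /dev/md3 --re-add /dev/sde6
     # If mdadm refuses the re-add, fall back to a full rebuild of the slot:
     sudo mdadm --manage /dev/md3 --add /dev/sde6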
  5. Hi, all the data is visible and accessible; at least I couldn't find anything missing. Just the "system partition failed" error and the warnings from DSM.

     cat /proc/mdstat
     Personalities : [raid1] [raid6] [raid5] [raid4] [raidF1]
     md4 : active raid1 sdg3[0]
           1948692544 blocks super 1.2 [1/1] [U]
     md2 : active raid5 sdd5[7] sdc5[5] sdb5[6] sdf5[2] sde5[8]
           11701741824 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
     md3 : active raid5 sdb6[5] sdd6[3] sdc6[1]
           14651252736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [UUU_]
     md1 : active raid1 sdg2[3] sdb2[0] sdd2[2] sdc2[1]
           2097088 blocks [16/4] [UUUU____________]
     md0 : active raid1 sdg1[2] sdb1[0] sdc1[4] sdd1[1]
           2490176 blocks [12/4] [UUU_U_______]
     unused devices: <none>
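     (Reading the md3 line above: [4/3] means the array defines four slots but only three are active, and the underscore in [UUU_] marks the dropped member. Naming the missing device is standard mdadm usage, as the --detail output quoted in the posts above shows:)

     # The device table lists each slot; "removed" marks the empty one:
     sudo mdadm --detail /dev/md3 | grep -A6 'RaidDevice State'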
  6. I'm contemplating creating a new VM with DSM 7, a new .vmdk file, etc., and letting DSM try to "migrate" the installation? Not sure if that would make it worse, however. As I understand it, "system partition failed" means DSM isn't installed across all disks, which makes sense, since not all of the disks were visible to DSM when DSM 7 was installed, due to my mistake.
  7. Initially, when I got my DSM 7 installation started on ESXi, Synology only found 3 of 5 disks in my SHR volume. That of course caused some panic. My disks are all passed through on a SATA controller. I managed to fix DSM not seeing the remaining disks via satamap, so they are identical to my setup from DSM 6.2; however, I'm unable to repair the volume. My SHR consists of 5 disks: 4x 8 TB and 1x 3 TB. It seems it refuses to repair unless I replace my 3 TB disk with another 8 TB now?

     Error now: [screenshot]
     Error when trying to repair: [screenshot]
     How the disks were with DSM 6.2: [screenshot]

     Hoping for some suggestions.
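     (For context on the satamap fix mentioned here: on ESXi builds, the loader's satamap step ultimately sets the SataPortMap/DiskIdxMap boot arguments, which tell DSM how many ports each SATA controller exposes and at which index its disks start numbering. The fragment below is purely illustrative; the values are assumptions, not the poster's actual map:)

     # grub.cfg fragment (RedPill/Jun-style loader; illustrative values only):
     # first controller (the loader's vmdk) exposes 1 port, the passed-through
     # controller 6; its disks start numbering right after the loader disk.
     set sata_args='SataPortMap=16 DiskIdxMap=0001'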