XPEnology Community

peterzil · Transition Member · 11 posts

Everything posted by peterzil

  1. Despite the unfortunate result, I really appreciate your help. You did a lot for me. I wish there were more professionals like you. Have a nice day.
  2. This is the log of the commands:

     root@XP:~# btrfs ins dump-super -Ffs 67108864 /dev/md3
     superblock: bytenr=67108864, device=/dev/md3
     ---------------------------------------------------------
     csum                    0x00000000 [DON'T MATCH]
     bytenr                  0
     flags                   0x0
     magic                   ........ [DON'T MATCH]
     fsid                    00000000-0000-0000-0000-000000000000
     label
     generation              0
     root                    0
     sys_array_size          0
     chunk_root_generation   0
     root_level              0
     chunk_root              0
     chunk_root_level        0
     log_root                0
     log_root_transid        0
     log_root_level          0
     total_bytes             0
     bytes_used              0
     sectorsize              0
     nodesize                0
     leafsize                0
     stripesize              0
     root_dir                0
     num_devices             0
     compat_flags            0x0
     compat_ro_flags         0x0
     incompat_flags          0x0
     csum_type               0
     csum_size               4
     cache_generation        0
     uuid_tree_generation    0
     dev_item.uuid           00000000-0000-0000-0000-000000000000
     dev_item.fsid           00000000-0000-0000-0000-000000000000 [match]
     dev_item.type           0
     dev_item.total_bytes    0
     dev_item.bytes_used     0
     dev_item.io_align       0
     dev_item.io_width       0
     dev_item.sector_size    0
     dev_item.devid          0
     dev_item.dev_group      0
     dev_item.seek_speed     0
     dev_item.bandwidth      0
     dev_item.generation     0
     sys_chunk_array[2048]:
     backup_roots[4]:

     root@XP:~# btrfs ins dump-super -Ffs 274877906944 /dev/md3
     superblock: bytenr=274877906944, device=/dev/md3
     ---------------------------------------------------------
     btrfs: ctree.h:2183: btrfs_super_csum_size: Assertion `!(t >= (sizeof(btrfs_csum_sizes) / sizeof((btrfs_csum_sizes)[0])))' failed.
     csum 0x
     Aborted (core dumped)
  3. root@XP:~# btrfs ins dump-super -fFa /dev/md3
     superblock: bytenr=65536, device=/dev/md3
     ---------------------------------------------------------
     btrfs: ctree.h:2183: btrfs_super_csum_size: Assertion `!(t >= (sizeof(btrfs_csum_sizes) / sizeof((btrfs_csum_sizes)[0])))' failed.
     csum 0x
     Aborted (core dumped)
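
     Since dump-super itself aborts on the garbage it reads, a raw, read-only check of the superblock magic is a safer way to confirm whether any of the three btrfs superblock mirrors survived. A minimal sketch (the offsets are the standard btrfs mirror locations at 64 KiB, 64 MiB and 256 GiB; the magic string "_BHRfS_M" sits 64 bytes into each superblock):

     # Read 8 bytes of magic from each btrfs superblock mirror (read-only).
     for off in 65536 67108864 274877906944; do
         echo "mirror at byte $off:"
         dd if=/dev/md3 bs=1 skip=$((off + 64)) count=8 2>/dev/null | hexdump -C
     done
     # An intact mirror prints the ASCII string "_BHRfS_M"; all-zero or random
     # bytes mean that copy is gone, matching the dump-super output above.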
  4. You are right - it was btrfs.

     root@XP:~# btrfs rescue super-recover -v /dev/md3
     No valid Btrfs found on /dev/md3
     Usage or syntax errors
     Segmentation fault (core dumped)
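
     When super-recover finds no valid superblock at all, the usual last resort is to scan for surviving tree roots and copy files out without ever mounting. A sketch, assuming a scratch directory /mnt/rescue with enough free space (the <bytenr> placeholder comes from the find-root output):

     # Scan the device for candidate tree roots (read-only; can run for hours).
     btrfs-find-root /dev/md3
     # Dry run against a candidate root: lists recoverable files, writes nothing.
     btrfs restore -D -v -t <bytenr> /dev/md3 /mnt/rescue
     # If files are listed, drop -D to actually copy them into /mnt/rescue.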
  5. Thank you. This is the log of the commands:

     root@XP:~# cat /etc/fstab
     none /proc proc defaults 0 0
     /dev/root / ext4 defaults 1 1
     /dev/md3 /volume2 btrfs 0 0
     /dev/mapper/cachedev_0 /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0

     root@XP:~# mdadm --stop /dev/md3
     mdadm: stopped /dev/md3
     root@XP:~# mdadm -v --create --assume-clean -e1.2 -n3 -l5 /dev/md3 /dev/sdh3 /dev/sdi3 /dev/sdj3 -u22a4b5c5:8103a815:1de617b2:3f23ee03
     mdadm: layout defaults to left-symmetric
     mdadm: chunk size defaults to 64K
     mdadm: /dev/sdi3 appears to be part of a raid array:
            level=raid5 devices=3 ctime=Sat Nov 16 12:10:31 2019
     mdadm: size set to 5855700544K
     Continue creating array? y
     mdadm: array /dev/md3 started.

     root@XP:~# cat /proc/mdstat
     Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
     md3 : active raid5 sdj3[2] sdi3[1] sdh3[0]
           11711401088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

     md4 : active raid1 sda1[0] sdb1[1]
           117216192 blocks super 1.2 [2/2] [UU]

     md2 : active raid1 sdg3[0]
           3902196544 blocks super 1.2 [1/1] [U]

     md1 : active raid1 sdg2[0] sdh2[1] sdi2[2] sdj2[3]
           2097088 blocks [12/4] [UUUU________]

     md0 : active raid1 sdg1[0] sdh1[1] sdi1[2] sdj1[3]
           2490176 blocks [12/4] [UUUU________]

     unused devices: <none>

     The storage pool was repaired, but the volume is still "crashed".
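
     Recreating an array with --create --assume-clean rewrites only the md metadata, but a wrong member order (or a different data offset chosen by a newer mdadm) leaves the array assembled with the contents scrambled. Before trusting the result, two non-destructive checks are worth running; a sketch (/mnt/test is an arbitrary example mount point):

     # Does a btrfs signature show up on the recreated array at all?
     btrfs filesystem show /dev/md3
     # Read-only mount attempt; fails harmlessly if the filesystem is broken.
     mkdir -p /mnt/test && mount -o ro /dev/md3 /mnt/test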
  6. Does anyone have a technical manual for DSM? I want to try to add the disks back to the RAID via the Linux configuration files. Thanks
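
     For what it's worth, DSM builds its storage pools from standard Linux md arrays, so the generic mdadm documentation is effectively the manual; membership lives in the on-disk md superblocks rather than in an editable config file. A minimal read-only sketch of what to check first (device names as used in this thread):

     # What metadata, if any, does each claimed member still carry?
     mdadm --examine /dev/sdh3 /dev/sdi3 /dev/sdj3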
  7. What happened before? I think the volume was overflowing. The web access to the storage returned a message like "the system cannot display the page" (Synology's error message, not a standard browser error). After a reboot the system asked me to install DSM again, and after the installation I got this situation; no additional actions were performed. I agree that trying to manually add the disks back to the RAID is the only option with some chance of success. I would be grateful if you could tell me how to do this. Thank you
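
     Before anything destructive, the standard first attempt is to stop the degraded array and force-assemble it from all three members; this only works while the members still carry md superblocks (which, as the --examine output in post 9 shows, /dev/sdh3 and /dev/sdj3 no longer did). A sketch:

     # Non-destructive first attempt: force-assemble from all claimed members.
     mdadm --stop /dev/md3
     mdadm --assemble --force /dev/md3 /dev/sdh3 /dev/sdi3 /dev/sdj3
     # Only if members have lost their superblocks does recreating the array
     # with --create --assume-clean (original order, level and chunk) remain.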
  8. I think so (like on pic 1).

     root@XP:~# fdisk -l /dev/sdh
     Disk /dev/sdh: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: 987F18E3-DCA2-431A-9174-AADC0F9C53EC

     Device        Start         End     Sectors  Size Type
     /dev/sdh1      2048     4982527     4980480  2.4G Linux RAID
     /dev/sdh2   4982528     9176831     4194304    2G Linux RAID
     /dev/sdh3   9437184 11720840351 11711403168  5.5T Linux RAID

     root@XP:~# fdisk -l /dev/sdj
     Disk /dev/sdj: 5.5 TiB, 6001175126016 bytes, 11721045168 sectors
     Units: sectors of 1 * 512 = 512 bytes
     Sector size (logical/physical): 512 bytes / 4096 bytes
     I/O size (minimum/optimal): 4096 bytes / 4096 bytes
     Disklabel type: gpt
     Disk identifier: D6A7A561-92BA-4C7C-B9AA-7C6DA547F406

     Device        Start         End     Sectors  Size Type
     /dev/sdj1      2048     4982527     4980480  2.4G Linux RAID
     /dev/sdj2   4982528     9176831     4194304    2G Linux RAID
     /dev/sdj3   9437184 11720840351 11711403168  5.5T Linux RAID
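
     With several identically-sized disks, a quick side-by-side view of the partition tables confirms the members share the same geometry before any reconstruction. A read-only sketch:

     # Summarize label type and partitions of the three data disks (read-only).
     for d in sdh sdi sdj; do
         echo "== /dev/$d =="
         fdisk -l /dev/$d | grep -E '^Disklabel|^/dev/'
     done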
  9. root@XP:~# mdadm --detail /dev/md3
     /dev/md3:
             Version : 1.2
       Creation Time : Sat Nov 16 12:10:31 2019
          Raid Level : raid5
          Array Size : 11711401088 (11168.86 GiB 11992.47 GB)
       Used Dev Size : 5855700544 (5584.43 GiB 5996.24 GB)
        Raid Devices : 3
       Total Devices : 1
         Persistence : Superblock is persistent
         Update Time : Mon Oct 19 22:13:04 2020
               State : clean, FAILED
      Active Devices : 1
     Working Devices : 1
      Failed Devices : 0
       Spare Devices : 0
              Layout : left-symmetric
          Chunk Size : 64K
                Name : XPEH:2
                UUID : 22a4b5c5:8103a815:1de617b2:3f23ee03
              Events : 376

         Number   Major   Minor   RaidDevice State
            -       0        0        0      removed
            1       8      131        1      active sync   /dev/sdi3
            -       0        0        2      removed

     root@XP:~# mdadm --examine /dev/sd[hij]3 | egrep 'Event|/dev/sd'
     mdadm: No md superblock detected on /dev/sdh3.
     mdadm: No md superblock detected on /dev/sdj3.
     /dev/sdi3:
     Events : 376

     Thank you
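
     Since two members have lost their md superblocks, any fix from here rewrites metadata, so preserving what is left first costs nothing. A sketch (the output paths are arbitrary examples):

     # Save the surviving member's metadata as text.
     mdadm --examine /dev/sdi3 > /root/sdi3.examine.txt
     # Image the start of each member; the v1.2 md superblock sits 4 KiB in,
     # well inside the first few MiB.
     for p in sdh3 sdi3 sdj3; do
         dd if=/dev/$p of=/root/$p.head.img bs=1M count=8
     done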
  10. admin@XP:/$ cat /proc/mdstat
      Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
      md3 : active raid5 sdi3[1]
            11711401088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/1] [_U_]

      md4 : active raid1 sda1[0] sdb1[1]
            117216192 blocks super 1.2 [2/2] [UU]

      md2 : active raid1 sdg3[0]
            3902196544 blocks super 1.2 [1/1] [U]

      md1 : active raid1 sdg2[0] sdh2[1] sdi2[2] sdj2[3]
            2097088 blocks [12/4] [UUUU________]

      md0 : active raid1 sdg1[0] sdh1[1] sdi1[2] sdj1[3]
            2490176 blocks [12/4] [UUUU________]

      unused devices: <none>
  11. Hi all. For an unknown reason the storage pool crashed: two of the three disks now show the status "Initialized", although they are completely healthy. How can I repair it?
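
      A read-only first pass collects the array state before any repair attempt (this is what the rest of the thread goes on to do):

      # Collect the md state without changing anything.
      cat /proc/mdstat
      mdadm --detail /dev/md3
      mdadm --examine /dev/sd[ghij]3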