XPEnology Community

Showing results for tags 'btrfs'.

Found 8 results

  1. Hi guys, for those of you who wish to expand a btrfs Syno volume after increasing the underlying disk size.
     Before:
     df -Th
     btrfs fi show
     mdadm --detail /dev/md2
     SSH commands:
     syno_poweroff_task -d
     mdadm --stop /dev/md2
     parted /dev/sda resizepart 3 100%
     mdadm --assemble --update=devicesize /dev/md2 /dev/sda3
     mdadm --grow /dev/md2 --size=max
     reboot
     btrfs filesystem resize max /dev/md2
     After:
     df -Th
     btrfs fi show
     mdadm --detail /dev/md2
     Voilà!
     Kall
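     The same procedure, condensed into a commented sketch. It assumes the data volume sits on /dev/md2 built from partition 3 of /dev/sda, and that the btrfs filesystem is mounted at /volume1 after the reboot; device names, partition numbers and the mount point will differ on other systems, so adjust before running anything.
     # Inspect the current layout (read-only).
     df -Th
     btrfs fi show
     mdadm --detail /dev/md2
     # Stop DSM services and the array so the partition can be grown.
     syno_poweroff_task -d
     mdadm --stop /dev/md2
     # Grow partition 3 to use all newly available space on the disk.
     parted /dev/sda resizepart 3 100%
     # Re-assemble the array with the updated device size, then grow it to the maximum.
     mdadm --assemble --update=devicesize /dev/md2 /dev/sda3
     mdadm --grow /dev/md2 --size=max
     reboot
     # After the reboot, grow the btrfs filesystem to fill the array.
     # Note: btrfs filesystem resize normally takes the mount point (e.g. /volume1) rather than the device node.
     btrfs filesystem resize max /volume1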
  2. Hello. The read/write cache in my storage pool is broken: the disks are present, but DSM reports that there is no cache and the volume is not available. I tried the following in Ubuntu:
     root@ubuntu:/home/root# mdadm -D /dev/md2
     /dev/md2:
         Version : 1.2
         Creation Time : Tue Dec 8 23:15:41 2020
         Raid Level : raid5
         Array Size : 7794770176 (7433.67 GiB 7981.84 GB)
         Used Dev Size : 1948692544 (1858.42 GiB 1995.46 GB)
         Raid Devices : 5
         Total Devices : 5
         Persistence : Superblock is persistent
         Update Time : Tue Jun 22 22:16:38 2021
         State : clean
         Active Devices : 5
         Working Devices : 5
         Failed Devices : 0
         Spare Devices : 0
         Layout : left-symmetric
         Chunk Size : 64K
         Consistency Policy : resync
         Name : memedia:2
         UUID : ae85cc53:ecc1226b:0b6f21b5:b81b58c5
         Events : 34754
     root@ubuntu:/home/root# btrfs-find-root /dev/vg1/volume_1
     parent transid verify failed on 20971520 wanted 48188 found 201397
     parent transid verify failed on 20971520 wanted 48188 found 201397
     parent transid verify failed on 20971520 wanted 48188 found 201397
     parent transid verify failed on 20971520 wanted 48188 found 201397
     Ignoring transid failure
     parent transid verify failed on 2230861594624 wanted 54011 found 196832
     parent transid verify failed on 2230861594624 wanted 54011 found 196832
     parent transid verify failed on 2230861594624 wanted 54011 found 196832
     parent transid verify failed on 2230861594624 wanted 54011 found 196832
     Ignoring transid failure
     Couldn't setup extent tree
     Couldn't setup device tree
     Superblock thinks the generation is 54011
     Superblock thinks the level is 1
     Well block 2095916859392(gen: 218669 level: 1) seems good, but generation/level doesn't match, want gen: 54011 level: 1
     Well block 2095916580864(gen: 218668 level: 1) seems good, but generation/level doesn't match, want gen: 54011 level: 1
     Well block 2095916056576(gen: 218667 level: 1) seems good, but generation/level doesn't match, want gen: 54011 level: 1
     Well block 2095915335680(gen: 218666 level: 1) seems good, but generation/level doesn't match, want gen: 54011 level: 1
     root@ubuntu:/home/root# btrfs check --repair -r 2095916859392 -s 1 /dev/vg1/volume_1
     enabling repair mode
     using SB copy 1, bytenr 67108864
     couldn't open RDWR because of unsupported option features (3).
     ERROR: cannot open file system
     root@ubuntu:/home/root# btrfs check --clear-space-cache v1 /dev/mapper/vg1-volume_1
     couldn't open RDWR because of unsupported option features (3).
     ERROR: cannot open file system
     root@ubuntu:/home/root# btrfs check --clear-space-cache v2 /dev/mapper/vg1-volume_1
     couldn't open RDWR because of unsupported option features (3).
     ERROR: cannot open file system
     root@ubuntu:/home/root# mount -o recovery /dev/vg1/volume_1 /mnt/volume1/
     mount: /mnt/volume1: wrong fs type, bad option, bad superblock on /dev/mapper/vg1-volume_1, missing codepage or helper program, or other error.
     Every command I try answers with: couldn't open RDWR because of unsupported option features (3).
     Can anything be recovered from such an array? Thank you very much in advance!
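     For what it's worth, "couldn't open RDWR because of unsupported option features" usually means the btrfs-progs build on the rescue system does not recognise one of the features enabled on the Synology-created filesystem, so it refuses to open it read-write. A minimal check (a sketch, assuming the volume is still exposed as /dev/vg1/volume_1) before attempting any repair from another live distro:
     # Version of btrfs-progs on the rescue system.
     btrfs version
     # Read-only dump of the superblock: the compat_ro/incompat flag lines show which
     # features the filesystem uses; retry from a live system whose btrfs-progs and
     # kernel support all of them before any --repair attempt.
     btrfs inspect-internal dump-super /dev/vg1/volume_1 | grep -E 'compat_ro_flags|incompat_flags'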
  3. Hi, I am new to XPEnology and Proxmox, but I just managed to install XPEnology DSM 6.2.3 on Proxmox 6.2-4. DSM was assigned a single disk created by Proxmox from a RAID10 (mirrored striped) ZFS storage. Seeing that this disk already has redundancy from the underlying ZFS storage, which also offers features similar to BTRFS such as snapshots, replication, quotas and integrity protection, is it redundant to use BTRFS instead of ext4 for a new volume in DSM? Should we use 'Basic' or 'JBOD' for the storage pool in DSM? DSM only sees a single disk here. Thank you for any guidance on this issue!
  4. Hello, I am trying to either get the system to mount the volume or recover the data. My volume crashed with no system reboot or OS crash. I have looked through the forums and tried some of the suggested steps with no success, and I'm hoping someone can assist. I have gone through all the steps other than the repair command by following the info found in this thread: Volume Crash after 4 months of stability. Below are the commands that I have run so far; I have attached a text file with the results, since I kept getting an error when posting them directly.
     fdisk -l
     cat /proc/mdstat
     cat /etc/fstab
     btrfs check /dev/md2
     btrfs rescue super /dev/md2
     btrfs insp dump-s -f /dev/md2
     btrfs restore /dev/md2 /volumeUSB1/
     Thanks in advance for any assistance.
     results log.txt
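     As a general note, a read-only triage pass along these lines (a sketch reusing the device and target paths from the post; --repair is deliberately left out) can tell you whether the data is reachable before anything destructive is attempted:
     # Confirm the md array is assembled and healthy before touching btrfs.
     cat /proc/mdstat
     # Superblock dump and a read-only consistency check; neither writes to the device.
     btrfs inspect-internal dump-super -f /dev/md2
     btrfs check --readonly /dev/md2
     # Dry-run restore: lists what could be copied out to /volumeUSB1/ without actually writing it yet.
     btrfs restore -D -v /dev/md2 /volumeUSB1/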
  5. Hi guys, I decided to make a call for help, as right now I'm stuck recovering data from my BTRFS drive. I am using hardware RAID 1 on the back end (2x 4TB WD Red drives), and on the front end, in XPEnology, I configured a Basic RAID group with only "one drive" passed through from ESXi. Until this January I had been using the ext file system, but I read that BTRFS is better in both speed and stability, so I decided to give it a go :) I run my system on a UPS that can keep it powered for more than 4 hours in case of a blackout, so I thought my data was safe. Two weeks ago I decided to power off the system after 3 months of continuous use, just to check that everything was okay and to clean the dust out of its inside, as a normal routine check. Unfortunately, after I powered the system back on, the main data volume in XPEnology was crashed, not even degraded. I rebooted, thinking it might run a fsck by itself, but unfortunately it remained crashed. I ejected both drives from the system and ran extensive checks on them, both SMART tests and surface checks, and everything looks just fine. Given this, I decided to take one of the drives out of the system and connect it to an external docking station in order to perform further troubleshooting. Unfortunately, it seems I'm unable to get it mounted, even read-only with recovery parameters :( I have the drive connected to a live Parted Magic OS for troubleshooting and, hopefully, recovery :) This is the output of some of the commands I issued, with no luck getting it mounted:
     root@PartedMagic:~# btrfs version
     btrfs-progs v4.15
     root@PartedMagic:~# cat /proc/mdstat
     Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [multipath]
     md3 : active raid1 sda3[0]
           3902163776 blocks super 1.2 [1/1]
     root@PartedMagic:~# lsblk -f
     NAME      FSTYPE             LABEL                       UUID                                  MOUNTPOINT
     loop253   squashfs
     sr0
     loop254   squashfs
     loop252   squashfs
     sda
     ├─sda2    linux_raid_member                              7b661c20-ea68-b207-e8f5-bc45dd5a58c4
     ├─sda3    linux_raid_member  DISKSTATION:3               be150a49-a5a3-f915-32bf-80e4732a20ac
     │ └─md3   btrfs              2018.01.02-20:42:27 v15217  1457616f-5daf-4487-ba1c-07963a0c4723
     └─sda1    linux_raid_member
     root@PartedMagic:~# btrfs fi show -d
     Label: '2018.01.02-20:42:27 v15217'  uuid: 1457616f-5daf-4487-ba1c-07963a0c4723
             Total devices 1 FS bytes used 2.15TiB
             devid    1 size 3.63TiB used 2.21TiB path /dev/md3
     root@PartedMagic:~# mount -t btrfs -oro,degraded,recovery /dev/md3 /mnt/temp1
     mount: wrong fs type, bad option, bad superblock on /dev/md3,
            missing codepage or helper program, or other error
            In some cases useful info is found in syslog - try dmesg | tail or so.
     root@PartedMagic:~# dmesg | tail -n 100
     [SNIP]
     [39026.756985] BTRFS info (device md3): allowing degraded mounts
     [39026.756988] BTRFS warning (device md3): 'recovery' is deprecated, use 'usebackuproot' instead
     [39026.756989] BTRFS info (device md3): trying to use backup root at mount time
     [39026.756990] BTRFS info (device md3): disk space caching is enabled
     [39026.756992] BTRFS info (device md3): has skinny extents
     [39027.082642] BTRFS error (device md3): bad tree block start 1917848902723122217 2473933062144
     [39027.083051] BTRFS error (device md3): bad tree block start 5775457142092792234 2473933062144
     [39027.083552] BTRFS error (device md3): bad tree block start 1917848902723122217 2473933062144
     [39027.083936] BTRFS error (device md3): bad tree block start 5775457142092792234 2473933062144
     [39027.097706] BTRFS error (device md3): bad tree block start 1917848902723122217 2473933062144
     [39027.098146] BTRFS error (device md3): bad tree block start 5775457142092792234 2473933062144
     [39027.114806] BTRFS error (device md3): bad tree block start 1917848902723122217 2473933062144
     [39027.115410] BTRFS error (device md3): bad tree block start 5775457142092792234 2473933062144
     [39027.133510] BTRFS error (device md3): bad tree block start 1917848902723122217 2473933062144
     [39027.133941] BTRFS error (device md3): bad tree block start 5775457142092792234 2473933062144
     [39027.136206] BTRFS error (device md3): open_ctree failed
     root@PartedMagic:~# btrfsck /dev/md3
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     checksum verify failed on 2473933062144 found E41F2C90 wanted 90ED32B2
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     bytenr mismatch, want=2473933062144, have=1917848902723122217
     Couldn't setup device tree
     ERROR: cannot open file system
     root@PartedMagic:~# btrfs check --repair /dev/md3
     enabling repair mode
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     checksum verify failed on 2473933062144 found E41F2C90 wanted 90ED32B2
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     bytenr mismatch, want=2473933062144, have=1917848902723122217
     Couldn't setup device tree
     ERROR: cannot open file system
     root@PartedMagic:~# btrfs restore -v -i /dev/md3 /mnt/temp1
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     checksum verify failed on 2473933062144 found E41F2C90 wanted 90ED32B2
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     bytenr mismatch, want=2473933062144, have=1917848902723122217
     Couldn't setup device tree
     Could not open root, trying backup super
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     checksum verify failed on 2473933062144 found E41F2C90 wanted 90ED32B2
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     bytenr mismatch, want=2473933062144, have=1917848902723122217
     Couldn't setup device tree
     Could not open root, trying backup super
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     checksum verify failed on 2473933062144 found E41F2C90 wanted 90ED32B2
     checksum verify failed on 2473933062144 found 908A6889 wanted 1646EB1A
     bytenr mismatch, want=2473933062144, have=1917848902723122217
     Couldn't setup device tree
     Could not open root, trying backup super
     root@PartedMagic:~# btrfs rescue super-recover -v /dev/md3
     All Devices:
             Device: id = 1, name = /dev/md3
     Before Recovering:
             [All good supers]:
                     device name = /dev/md3
                     superblock bytenr = 65536
                     device name = /dev/md3
                     superblock bytenr = 67108864
                     device name = /dev/md3
                     superblock bytenr = 274877906944
             [All bad supers]:
     All supers are valid, no need to recover
     root@PartedMagic:~# smartctl -a /dev/sda | grep PASSED
     SMART overall-health self-assessment test result: PASSED
     root@PartedMagic:~# btrfs inspect-internal dump-super /dev/md3
     superblock: bytenr=65536, device=/dev/md3
     ---------------------------------------------------------
     csum_type               0 (crc32c)
     csum_size               4
     csum                    0x91d56f51 [match]
     bytenr                  65536
     flags                   0x1 ( WRITTEN )
     magic                   _BHRfS_M [match]
     fsid                    1457616f-5daf-4487-ba1c-07963a0c4723
     label                   2018.01.02-20:42:27 v15217
     generation              31802
     root                    2473938698240
     sys_array_size          129
     chunk_root_generation   30230
     root_level              1
     chunk_root              20987904
     chunk_root_level        1
     log_root                0
     log_root_transid        0
     log_root_level          0
     total_bytes             3995815706624
     bytes_used              2366072147968
     sectorsize              4096
     nodesize                16384
     leafsize (deprecated)   16384
     stripesize              4096
     root_dir                6
     num_devices             1
     compat_flags            0x0
     compat_ro_flags         0x0
     incompat_flags          0x16b ( MIXED_BACKREF | DEFAULT_SUBVOL | COMPRESS_LZO | BIG_METADATA | EXTENDED_IREF | SKINNY_METADATA )
     cache_generation        22402
     uuid_tree_generation    31802
     dev_item.uuid           8c0579fe-b94c-465a-8005-c55991b8727e
     dev_item.fsid           1457616f-5daf-4487-ba1c-07963a0c4723 [match]
     dev_item.type           0
     dev_item.total_bytes    3995815706624
     dev_item.bytes_used     2428820783104
     dev_item.io_align       4096
     dev_item.io_width       4096
     dev_item.sector_size    4096
     dev_item.devid          1
     dev_item.dev_group      0
     dev_item.seek_speed     0
     dev_item.bandwidth      0
     dev_item.generation     0
     Given all this, do you have any other ideas I should try to get my data recovered? If you need the output of other troubleshooting commands, just let me know and I will post them here. Thank you in advance for your tips and help! :)
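     Since super-recover reports all three superblocks as valid and SMART passes, the damage looks confined to metadata tree blocks. One avenue sometimes suggested (a sketch only, assuming /dev/md3 is still assembled and /mnt/temp1 has enough free space; a newer btrfs-progs than v4.15 may also cope better) is to let btrfs-find-root list older tree roots and point a read-only btrfs restore at one of them:
     # List candidate tree roots; pick a recent-looking bytenr from the output.
     btrfs-find-root /dev/md3
     # Attempt a read-only extraction using that tree root (replace <bytenr> with a
     # value reported above); nothing is written to /dev/md3 itself.
     btrfs restore -t <bytenr> -v -i /dev/md3 /mnt/temp1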
  6. Hi everyone, I'm using DSM 6.1.4 in my case. What filesystem do you use? Could you share your suggestions?
  7. Hey, I'm running XPEnology DSM 6.0.2-8451 Update 11 on a self-built computer. I started out with 4x 1TB older Samsung drives (HD103UJ & HD103SJ) in an SHR2/BTRFS array (SHR enabled for DS3615xs). This setup hasn't had any issues, and I intended to expand the array with other 1TB drives, but I decided to go with bigger drives since I had the chance to do so. So I added a 3TB WD Red and started expanding the volume. The main goal was to replace the 1TB drives one by one with 3TB drives and end up with 5x 3TB WD Reds.
     The expansion went OK and so did the consistency check. Then, for some unknown reason, the newly added disk was restarted; it degraded the swap system volume, then the data volume, was "inserted" and "removed" (although I didn't do anything), and finally it degraded the root system volume on the disk. I tried repairing the volume, but it didn't help. I shut the server down, and no new data has been written to the array since this issue.
     Yesterday I finally had time to do something about this, so I removed the disk, wiped everything on it and re-inserted it into the XPEnology server. I also changed the SATA cable and power cable for the disk. The repair was successful, like the array expansion before, and so was the consistency check. After this had finished, I started a RAID scrub. Even the scrub went through just fine at 3:28:01, and then it did just about the same as when I was expanding the array: the disk restarted due to an "unknown error", the volumes degraded, and disk 5 was inserted and removed from the array. This is the situation now.
     The next step is of course to run diagnostics on that WD Red, but for some reason I don't think it's the disk that is causing this issue. I also have a few other WD Reds which I could try out, but I'd need to empty them first. If you have any inkling of what could be causing this, it'd be appreciated. Best regards, Darkened aka Janne
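     Before blaming the drive itself, one cheap check (a sketch, assuming SSH access to DSM and that the new disk appears as /dev/sde; device names will differ) is to look for SATA link resets in the kernel log and at the drive's SMART counters, since repeated "unknown error" restarts of a single bay often point at the cable, port or power rather than the platters:
     # Kernel log: look for link resets / I/O errors around the time the disk dropped.
     dmesg | grep -iE 'ata[0-9]+|reset|i/o error' | tail -n 50
     # SMART counters that matter for a dropping disk (hypothetical device /dev/sde).
     smartctl -A /dev/sde | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|UDMA_CRC_Error_Count'
     # A rising UDMA_CRC_Error_Count usually implicates the cable or port, not the disk surface.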
  8. I've had a mixture of WD Red drives in a Syno DS410 and an Intel SSE4200 enclosure running XPEnology for years with very few drive issues. Recently I thought I'd repurpose an Intel box I'd built a few years ago that was just sitting there (CPU/RAM/mobo), and I successfully set it up with 4x 3TB WD Red drives running XPEnology. When given the choice, I chose to create a btrfs RAID 5 volume. But in the 5 or so months I've been running this NAS, three drives have crashed and started reporting a bunch of bad sectors. These drives have less than 1000 hours on them, so they're practically new. Fortunately they are under warranty at some level. But still, I'm wondering: could this be btrfs?
     I'm no file system expert. Light research suggests that while btrfs has been around for several years and is of course a supported option in Synology, some feel it isn't ready for prime time. I'm at a loss to explain why three WD Red drives with less than 1000 hours on them, manufactured on different dates, are failing so catastrophically. I understand btrfs and bad sectors are not really in the same problem zone; software shouldn't be able to cause hardware faults. I considered heat, but these drives are rated to 65 °C and they are not going above 38 or so.
     If it matters, when drives fail, the drive always reports problems at boot-up; in fact, as the volume is now degraded with the loss of yet another drive, I'm just not turning the system off until I get a new drive in there; one of the remaining drives reported a failure to start up properly in the past week. My final consideration is that this is a build-a-box using a Gigabyte motherboard with 4 drives on the SATA bus in AHCI mode. Could some sort of random hardware issue in this system be causing bad sectors to be reported on the drives? It seems unlikely. Has anyone ever heard of SynologyOS reporting bad sectors when there weren't actually bad sectors? Anyone have any thoughts on this? Should I go back to ext4? This is mainly a Plex media server.
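     One way to sanity-check whether the bad sectors are real or only a DSM reporting artifact (a sketch, assuming SSH access and that a failing drive is /dev/sdb; adjust the device name) is to compare DSM's warnings against the drive's own raw SMART counters and self-test results, which the firmware tracks independently of the OS and file system:
     # Raw SMART attributes: genuine bad sectors show up here regardless of the OS.
     smartctl -A /dev/sdb | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
     # Kick off the drive's extended self-test, then check the log once it completes.
     smartctl -t long /dev/sdb
     smartctl -l selftest /dev/sdb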