Search the Community

Showing results for tags 'btrfs'.

Found 7 results

  1. Hi, I am new to XPEnology and Proxmox, but I just managed to install XPEnology DSM 6.2.3 on Proxmox 6.2-4. DSM was assigned a single disk created by Proxmox from a RAID10 (striped mirrors) ZFS pool. Since this disk already has redundancy from the underlying ZFS storage, which also offers features similar to BTRFS (snapshots, replication, quotas, integrity protection), is it redundant to use BTRFS instead of ext4 for a new volume in DSM? Should the storage pool in DSM be 'Basic' or 'JBOD'? DSM only sees a single disk here.
  2. Hi guys, for those of you who wish to expand a btrfs Syno volume after the underlying disk has been enlarged, here is the procedure (a scripted version with safety checks follows this list). Check the state before:

         df -Th
         btrfs fi show
         mdadm --detail /dev/md2

     SSH commands:

         syno_poweroff_task -d
         mdadm --stop /dev/md2
         parted /dev/sda resizepart 3 100%
         mdadm --assemble --update=devicesize /dev/md2 /dev/sda3
         mdadm --grow /dev/md2 --size=max
         reboot
         btrfs filesystem resize max /volume1

     Note that the last command takes the volume's mount point (usually /volume1), not the /dev/md2 device. After, verify with the same commands:

         df -Th
         btrfs fi show
         mdadm --detail /dev/md2

     Voila, Kall
  3. Hello, I am trying either to get the volume to mount or to recover the data. My volume crashed with no system reboot or OS crash. I have looked through the forums and tried several things with no success, and I'm hoping someone can assist. I have gone through all the steps except the repair command, following the info found in this thread: Volume Crash after 4 months of stability. Below are the commands I have run so far; I have attached a text file with the results, since I kept getting an error when posting them directly (a read-only diagnostic sequence for cases like this is sketched after this list):

         fdisk -l
  4. Hi guys, I decided to make a call for help, as right now I'm stuck recovering data from my BTRFS drive. I am using hardware RAID 1 on the back end [2x 4TB WD Red drives], and on the front end, in XPEnology, I configured a Basic RAID Group with only "one drive" passed through from ESXi. Until this January I had been using an ext filesystem, but I read that BTRFS is better in terms of both speed and stability, so I decided to give it a go :) I run my system on a UPS that can keep it powered for more than 4 hours in case of a blackout, so I thought my data was safe.
  5. Hi everyone, I'm using DSM 6.1.4 in my case. What filesystem do you use? Could you share your suggestions?
  6. Hey, I'm running XPEnology DSM 6.0.2-8451 Update 11 on a self-built computer. I started out with 4x 1TB older Samsung drives (HD103UJ & HD103SJ). These are in an SHR-2/BTRFS array (SHR enabled for DS3615xs). This setup hasn't had any issues, and I intended to expand the array with other 1TB drives, but I decided to go with bigger drives since I had the chance to do so. So I added a 3TB WD Red and started expanding the volume. The main goal was to replace the 1TB drives one by one with 3TB drives and have 5x 3TB WD Reds in the end. The expansion went ok and s
  7. I've had a mixture of WD Red drives in a Syno DS410 and an Intel SSE4200 enclosure running XPEnology for years with very few drive issues. Recently I thought I'd repurpose an Intel box I built a few years ago that was just sitting there (CPU/RAM/MOBO), and I successfully set it up with 4x 3TB WD Red drives running XPEnology. When given the choice, I chose to create a btrfs RAID 5 volume. But in the 5 or so months I've been running this NAS, three drives have crashed and started reporting a bunch of bad sectors. These drives have less than 1,000 hours on them, practically new. Fortuna
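
The expansion steps from result 2, gathered into one annotated shell sketch. This is a minimal sketch assuming the single-disk layout from that post: /dev/sda with the data on partition 3, array /dev/md2, and a volume mounted at /volume1 are all assumptions that must be checked against the output of mdadm --detail and df -Th on your own box, and the whole thing presumes a current backup.

    # Sketch only: device names below are assumptions from result 2, not universal.
    set -eu

    DISK=/dev/sda        # physical disk that was enlarged (assumed)
    PART=3               # DSM data partition number (assumed)
    MD=/dev/md2          # md array backing the volume (assumed)

    # Record the current state for comparison afterwards.
    df -Th
    btrfs fi show
    mdadm --detail "$MD"

    # Stop DSM services and the array before touching the partition table.
    syno_poweroff_task -d
    mdadm --stop "$MD"

    # Grow the data partition to the end of the now-larger disk.
    parted "$DISK" resizepart "$PART" 100%

    # Reassemble with the new device size, then grow the array itself.
    mdadm --assemble --update=devicesize "$MD" "${DISK}${PART}"
    mdadm --grow "$MD" --size=max

    # Reboot; the filesystem grow happens once the volume is mounted again.
    reboot
    # ...after the reboot (note: takes the mount point, not the md device):
    # btrfs filesystem resize max /volume1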
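
For crashed volumes like those in results 3 and 4, a read-only look before any repair attempt is the safer order of operations. The sequence below is a sketch under the same assumptions as above (/dev/md2, /volume1); btrfs check --repair is deliberately omitted, since on an already-damaged filesystem it can do further harm.

    # Read-only diagnostics; nothing here writes to the array.
    fdisk -l                    # partition layout of every disk
    cat /proc/mdstat            # state of the md arrays
    mdadm --detail /dev/md2     # is the array assembled and clean?
    btrfs fi show               # devices btrfs sees, and their usage
    dmesg | grep -i btrfs       # kernel messages from the failed mount

    # Try a read-only, recovery-oriented mount (option name on DSM 6-era
    # kernels; newer kernels call it usebackuproot):
    mount -o ro,recovery /dev/md2 /volume1

    # If the mount still fails, btrfs restore can copy files off the
    # unmountable filesystem to other storage:
    # btrfs restore /dev/md2 /path/to/safe/destination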