XPEnology Community

Everything posted by Dashrendar2507

  1. Good news: I was able to copy most of my data from the broken volume, even though I couldn't mount it. I used the following command:

        sudo btrfs restore -x -m -S -i /dev/vg1/volume_1 /volume2/Volume1_Backup/

     This searched the broken Volume 1 for retrievable files and copied them to a backup folder on another volume. Hope this can help others.
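     For anyone attempting the same recovery, a rough sketch of the full sequence (device and destination paths are from my setup, so adjust them to yours; the dry run is just a sanity check before writing anything):

        # dry run: list what btrfs restore believes it can recover, without writing any files
        sudo btrfs restore --dry-run -v /dev/vg1/volume_1 /volume2/Volume1_Backup/
        # actual restore: -x extended attributes, -m metadata (owner/times), -S symlinks, -i ignore errors and continue
        sudo btrfs restore -x -m -S -i /dev/vg1/volume_1 /volume2/Volume1_Backup/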
  2. I inserted my drives into an Ubuntu machine and tried to mount the crashed volume using:

        mount -t btrfs -o ro,usebackuproot /dev/vg1/volume_1 /vol1

     That didn't work; I get the error "wrong fs type, bad option, bad superblock". I also tried mounting /dev/md2, but it always says it's already mounted or the mount point is busy...
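     If anyone wants to reproduce the attempt, the sequence on a plain Ubuntu box looks roughly like this (the vgscan/vgchange steps are what I'd expect to need to activate the Synology LVM volume there; VG/LV names may differ on your system):

        # assemble/activate the RAID and LVM layers so /dev/vg1/volume_1 exists
        cat /proc/mdstat
        sudo vgscan
        sudo vgchange -ay vg1
        sudo lvs
        # peek at the btrfs superblock before trying to mount
        sudo btrfs inspect-internal dump-super /dev/vg1/volume_1
        # read-only mount attempt using a backup tree root
        sudo mkdir -p /vol1
        sudo mount -t btrfs -o ro,usebackuproot /dev/vg1/volume_1 /vol1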
  3. Just an update: I inserted a spare SSD I had, hoping I could mount /dev/mapper/cachedev_1 to it. Unfortunately I get the following error:

        mount: /volume3: wrong fs type, bad option, bad superblock on /dev/mapper/cachedev_1, missing codepage or helper program, or other error.

     I tried various mount options, to no avail.
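     The next things I plan to check, to see what the kernel actually thinks is on that device (standard tools, nothing Synology-specific; cachedev_1 is the name on my box):

        # list device-mapper targets and the table behind cachedev_1
        sudo dmsetup ls
        sudo dmsetup table cachedev_1
        # what blkid detects on the device
        sudo blkid /dev/mapper/cachedev_1
        # the real reason behind "wrong fs type / bad superblock" usually shows up here
        sudo dmesg | tail -n 30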
  4. I'll see what I can do. I have a Dell R720XD, and I didn't think I had any storage options left. EDIT: I do! I was looking to get the rear backplane for 2 more 2.5" drives, but noticed that I have 2 internal SATA ports. I'll just stick an SSD on there and call it a day. However, I still have to get my issue resolved first.
  5. @Orphée Thank you for the idea! I'm just back from doing that. I had to use something other than smartctl, but it seems to have worked all the same. No issue detected. The one device that could not be opened is a USB drive, so no problem there. Pastebin of the output: https://pastebin.com/MBkJB0cH
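     For completeness, the loop I was originally aiming for looks roughly like this (smartctl itself didn't cooperate in my setup, so treat it as the generic equivalent of what the pastebin shows):

        # quick SMART health + identity check on every sd* disk
        for d in /dev/sd?; do
            echo "=== $d ==="
            sudo smartctl -H -i "$d"
        done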
  6. Thanks for the idea! I had tried that previously, with one big issue: I only have the 12 drives on the LSI controller, so passing that controller through would make me lose my ESXi datastores, which host the SynoBoot. I'll try and see if I can get another drive in there to act as a datastore. If so, I'll go by your recommendation. Thanks again!
  7. Hi all, very new to Synology shell commands, but I've read pretty much all the forum posts and couldn't find a solution. Again, new to this, so please let me know if you need anything.

     Originally an XPEnology 6.2.2 install, upgraded to 7.2. Worked for a few days. Using raw device mapping (RDM) in ESXi to pass 10 x 3.3TB drives to Synology, arranged like so:
     Volume 1: RAID6, 4 x 3.3TB, BTRFS.
     Volume 2: RAID10, 6 x 3.3TB, BTRFS.

     Storage Pool 1 seems fine; it's Volume 1 that has crashed. I checked using the commands on the forum and my array looks fine. In dmesg I get the following error (repeating):

        BTRFS error (device dm-3): parent transid verify failed on 341688320 wanted 1499107 found 1499106
        [ 5725.178296] parent transid verify failed on 341688320 wanted 1499107 found 1499104
        [ 5725.475268] parent transid verify failed on 341688320 wanted 1499107 found 1499106
        [ 5725.475795] parent transid verify failed on 341688320 wanted 1499107 found 1499106
        [ 5725.476298] parent transid verify failed on 341688320 wanted 1499107 found 1499106

     I have tried mounting both "cachedev_1" and "/dev/vg1/volume1", but it always says that it's "already mounted or mount point busy". I tried the command for unmounting and shutting down all services except SSH (forgot the exact command), but it doesn't exist in DSM 7, and I couldn't find an alternative. There is a backup on the crashed Volume 1, so accessing it read-only would be sufficient for me to recover. Would love and appreciate any help on this. Again, please let me know if there is anything I can do to help you help me. Thanks again!
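     In case it helps with diagnosis, here is roughly what I ran to conclude that the RAID/LVM layers look healthy, plus the read-only mount attempts that keep failing (/mnt/recover is just an empty directory I made for the attempt; device names are from my box):

        # RAID and LVM layers
        cat /proc/mdstat
        sudo mdadm --detail /dev/md2
        sudo lvs
        # read-only mount attempts that fail with "already mounted or mount point busy"
        sudo mkdir -p /mnt/recover
        sudo mount -t btrfs -o ro /dev/mapper/cachedev_1 /mnt/recover
        sudo mount -t btrfs -o ro,usebackuproot /dev/vg1/volume_1 /mnt/recover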
  8. My issue seems to be fixed, thanks to this post: I created 2 "sacrificial" drives of 50 MB each in ESXi and assigned them to SATA0:0 and SATA0:1. Those don't show up, but the 2 missing drives do! The only issue left is that the "drive rack" in Synology still says it's empty, but it doesn't seem to affect anything.
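     For anyone wanting to replicate the workaround: I simply added two tiny new virtual disks in the VM settings, on the SATA controller at positions 0:0 and 0:1. The resulting .vmx entries look roughly like this (the .vmdk file names are placeholders for whatever ESXi generates):

        sata0:0.present = "TRUE"
        sata0:0.fileName = "sacrificial-disk-1.vmdk"
        sata0:1.present = "TRUE"
        sata0:1.fileName = "sacrificial-disk-2.vmdk"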
  9. Hi all! I'm new to DSM 7 and Redpill, coming from XPE 6.2.2. I've got it working on ESXi 6.5 on an HP DL380p G8. However, I see only 8 of my 10 drives. I'm using RDM and virtual SATA (was using iSCSI with 6.2.2). I've read about SataPortMap and tried setting it to the correct values, but can never get past the 8-disk limit. Also, while Synology sees 8 drives in the drive manager, it says that the 12 disk bays are empty. What logs/configs do you need? Again, I'm new here. Thank you!
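     For reference, the knobs I've been experimenting with are the SataPortMap/DiskIdxMap parameters passed on the loader's kernel command line. As I understand it, SataPortMap takes one digit per SATA controller (number of ports to expose) and DiskIdxMap takes two hex digits per controller (starting drive slot). The values below are placeholders I've been testing, not a verified fix:

        # hypothetical example: first controller (holding the loader disk) exposes 1 port, second exposes 8
        SataPortMap=18
        # push the loader controller's slot out of the visible range (0x10), start data disks at slot 0x00
        DiskIdxMap=1000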