XPEnology Community

flyride

Moderator
  • Posts

    2,438
  • Joined

  • Last visited

  • Days Won

    127

Everything posted by flyride

  1. DSM updates according to the settings you have configured. The platform does not matter, except that Synology does not push to all platforms or devices at once. It's not recommended to ever auto-update with XPenology; instead, control the updates so you can confirm they work with your platform and hardware first. The main reason there would be a difference between disks on one controller versus several is if physical access to bus bandwidth is constrained. For example, a 4-port SATA controller on PCIe 2.0 x1, fully populated with SSDs, would substantially exceed the bandwidth available and performance would suffer. Logically, there is no such bandwidth restriction on a virtual SATA controller. However, it could still depend on how the disks you are presenting to the ESXi guest via RDM are physically configured, per the above. The only way to tell for sure is to try it, but really I think it would be a waste of time.
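The bandwidth point above can be sketched with quick arithmetic. The figures below are rough nominal numbers I'm assuming for illustration (~500 MB/s usable on PCIe 2.0 x1, ~550 MB/s sequential per SATA SSD), not measurements:

```shell
# Assumed nominal figures -- not measured values.
pcie2_x1_mbps=500        # usable PCIe 2.0 x1 bandwidth, roughly 500 MB/s
ssd_mbps=550             # sequential throughput of one SATA SSD
ports=4                  # fully populated 4-port controller

demand=$((ports * ssd_mbps))
echo "aggregate SSD demand: ${demand} MB/s vs ${pcie2_x1_mbps} MB/s available"
# Four SSDs can consume roughly 4x what the controller's link can supply.
```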
  2. Ugh, my last post was on my phone and I didn't see that you had posted the contents of your fstab, which DSM has crosslinked and confused. Probably not a good deal; hopefully no damage has been done. I think that's rather unlikely, but still too bad. It would have been better to take the initial advice: copy everything off while it was up and running, and don't go change anything. At this point, please post DSM Storage Manager screenshots of your RAID groups and volumes. Also, run this set of commands again and post the output:
     # vgdisplay
     # lvs
     # lvm vgscan
     # lvm pvscan
     # lvm lvmdiskscan
  3. When you added the new volume through the GUI, DSM probably rewrote the /etc/fstab file which we customized to get your broken volume to mount. Go back to post #93 and edit it again. Note that you might have a new line in the file for your volume2/md3 that you should leave alone.
  4. I understand. I still advise to recover in place, get two copies of your data (one on a healthy RAID5, the other on 8TB) before making changes to DSM. This situation exists due to acceptance of excessive risk (no backups) and now your only copy of your data is barely accessible. Simultaneous data recovery operation and a DSM upgrade is a bad combination. I'm not lecturing, just stating facts. But I'll stop advising on this matter as it's entirely your choice.
  5. Sure, you can make a new volume and copy, as long as there are enough slots to build your new array for volume1. I'm not sure why you think reinstalling DSM is easier, though. There is nothing unstable or corrupted about your DSM installation, thus nothing to be gained by a reinstall, and it adds the risk of having to make sure your 8TB volume stays accessible and undamaged.
  6. Better not to threadjack and cross post. I believe you are saying that you can mount readonly and get to files, so do that and let this be closed.
  7. Honestly I don't know. btrfs repair is a bit of a void even if you search online. I still have the preference to copy off and rebuild the volume.
  8. You can try and do the repair options (post #14 in the thread). But really btrfs is supposed to self-heal. If it were me I would probably copy everything off and rebuild the btrfs volume.
  9. @supermounter if your mdstats indicate healthy arrays, check out this thread, starting from post #9 https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability
  10. The data loss would be due to the partial resync that affected /dev/sdc5. It was enough to invalidate the filesystem superblock so there is definitely some damage. However, you probably won't know until you open an affected file. The number of files affected should be low.
  11. Thanks for the offer of a beer... but pay it forward to help someone else! I hope you don't have too much data loss. Good luck.
  12. It's up to you. I suspect that if you do the steps in the last post, you can access the files via Windows. But here's the situation:
     • Your RAID5 array is critical (no redundancy) and has mild corruption
     • Your filesystem has some corruption, but we have been able to get it to mount
     My strong recommendation is that you not attempt to "fix" anything further, and do the following:
     1. Copy everything off your volume1 onto another device. If you need to go buy an 8TB external drive, do it.
     2. Delete your volume1
     3. Delete your SHR
     4. Click the Fix System Partition options in Storage Manager to correct the DSM and swap replicas
     5. Remove/replace your bad drive #0/sda
     6. Create a new SHR
     7. Create a new volume1
     8. Copy your files back
     The copy operations can be done on either platform (Mac or Windows). So it's up to you.
  13. Ok, let's modify /etc/fstab to mount your filesystem with the good superblock and in read-only mode. First, make a backup copy of fstab:
     # cp /etc/fstab /etc/fstab.bak
     Right now your fstab looks like this:
     none /proc proc defaults 0 0
     /dev/root / ext4 defaults 1 1
     /dev/vg1000/lv /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0
     You need to edit it so that it looks like this:
     none /proc proc defaults 0 0
     /dev/root / ext4 defaults 1 1
     /dev/vg1000/lv /volume1 ext4 ro,noload,sb=1934917632,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0
     After you are done editing, cat /etc/fstab and confirm that it looks exactly like the above, then reboot the NAS. I'm pretty sure Syno won't rewrite fstab as long as you don't make any changes in the GUI. After reboot, report whether you can get to files on the network, and post the output of:
     # cat /proc/mdstat
     # cat /etc/fstab
     # df
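The edit above can also be done non-interactively with sed instead of a hand edit. A minimal sketch, assuming GNU sed (as on DSM); the mount options and superblock value are exactly the ones from this post, but the scratch file /tmp/fstab.test is just an assumed name, and you should verify the output before ever touching the real /etc/fstab:

```shell
# Sketch against a scratch copy; on the NAS you would start from /etc/fstab.
cat > /tmp/fstab.test <<'EOF'
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0
EOF

# Prepend ro,noload,sb=... to the option field of the /volume1 line only.
sed -i 's|\(/dev/vg1000/lv /volume1 ext4 \)|\1ro,noload,sb=1934917632,|' /tmp/fstab.test

cat /tmp/fstab.test   # confirm the line matches the target before copying back
```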
  14. If you reboot the NAS, the filesystem will not mount and you will need to repeat the mount command in post #83. This should start SMB:
     # /sbin/restart smbd
     Report results, and whether you can access files over the network.
  15. Ok, I think we couldn't modify the superblock because we had set the array to read-only (which only makes sense). Check your files; they should be there (File Station). If the files are there, I expect there will be some data corruption; ext4 won't really be able to tell you about it. Can you map a drive or UNC path to your data (\\nasname\volume1 in Windows Explorer or Finder)?
  16. Sorry, let's try this variant:
     # mount -v -oro,noload,sb=1934917632 /dev/vg1000/lv /volume1
  17. I must point out that all of the superblocks you tried produced output different from the first one. e2fsck did not report corrupted journal entries.
     # mount -v -b 1934917632 -oro,noload /dev/vg1000/lv /volume1
  18. Repeat the last command, substituting each of the superblocks from this list in sequence (we already tried 32768):
     Superblock backups stored on blocks:
     32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 4096000, 7962624, 11239424, 20480000, 23887872, 71663616, 78675968, 102400000, 214990848, 512000000, 550731776, 644972544, 1934917632
     If one of them produces an output different from what you just received, stop and post the results.
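Rather than retyping the mount command for each value, the retry sequence above can be scripted. A sketch, assuming the same read-only mount options used earlier in the thread; it walks the backup superblock list (minus 32768, already tried) and stops at the first one that mounts:

```shell
# Backup superblocks reported by e2fsck, excluding 32768 (already tried).
superblocks="98304 163840 229376 294912 819200 884736 1605632 2654208 \
4096000 7962624 11239424 20480000 23887872 71663616 78675968 102400000 \
214990848 512000000 550731776 644972544 1934917632"

for sb in $superblocks; do
    echo "trying superblock $sb"
    # Same read-only, no-journal mount as before, with the candidate superblock.
    if mount -v -oro,noload,sb=$sb /dev/vg1000/lv /volume1; then
        echo "mounted read-only with superblock $sb"
        break
    fi
done
```

In the interactive recovery above, stopping to inspect each different-looking output is still the safer habit; the loop is just a convenience once you know what a failed attempt looks like.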
  19. That's good. This next command may or may not work, and it may or may not take a long time. Don't interrupt it:
     # e2fsck -b 32768 /dev/vg1000/lv
  20. Ok, still investigating:
     # vgdisplay
     # lvs
     # lvm vgscan
     # lvm pvscan
     # lvm lvmdiskscan
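Since this same diagnostic set gets requested more than once in the thread, it can be captured in one pass for posting. A sketch; the command list is the one above, and /tmp/lvm-report.txt is just an assumed output filename:

```shell
# Run each LVM diagnostic in turn and collect labeled output into one file.
for cmd in "vgdisplay" "lvs" "lvm vgscan" "lvm pvscan" "lvm lvmdiskscan"; do
    echo "### $cmd"
    $cmd 2>&1          # capture errors too, so missing tools are visible
done > /tmp/lvm-report.txt

cat /tmp/lvm-report.txt
```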