Everything posted by flyride

  1. There is no server config on Synology; it all comes from the array itself. If your /dev/sdc5 got corrupted as part of a failed rebuild, then your volume won't mount and your data is probably lost. We don't know whether that happened yet, so let's investigate further.
     # cat /etc/fstab
     # cat /proc/mdstat
     # df
  2. No problem, conservative is good when dealing with arrays. I think your array is started and should have data. We absolutely do not want it to rebuild, resync, or do anything else to the array, so don't click on any "fix" buttons in the GUI. It also has no redundancy, and your drive #0 /dev/sda is presumed to be dead. I advise flagging the whole array read-only:
     # mdadm --misc -o /dev/md2
     Then reboot and see if your data is there. Report back on status.
  3. # mdadm --stop /dev/md2
     # mdadm -v --create --assume-clean -e1.2 -n4 -l5 /dev/md2 missing /dev/sdc5 /dev/sdb5 /dev/sdd5 -u75762e2e:4629b4db:259f216e:a39c266d
     # cat /proc/mdstat
  4. Still quite perplexed about the refusal of this drive to play, but we're probably out of non-invasive options and need to do a create - the path that IG-88 charted out. Before we do that, let's get a current state of your system. Please do not reboot or do anything else to change the system state once we start using this information, or your data is at risk. If anything changes at all, please advise.
     # mdadm --detail /dev/md2 | fgrep "/dev/"
     # mdadm --examine /dev/sdb5 /dev/sdc5 /dev/sdd5 | egrep "/dev|Role|Events|UUID"
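     As a rough guide only (illustrative, not output from your system), a four-drive RAID5 data array running degraded shows up in /proc/mdstat something like:
     md2 : active raid5 sdb5[1] sdc5[2] sdd5[3]
           nnnnnnnnnn blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
     The [4/3] and the underscore mean one of the four members is missing.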
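     If you want a sanity check that the read-only flag took effect, the standard Linux md interfaces should report it (paths and names below are the generic md ones, assumed to be present on DSM):
     # cat /sys/block/md2/md/array_state
     # mdadm --detail /dev/md2 | grep -i state
     array_state should read "readonly" while the flag is set.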
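     If the create succeeds and mdstat looks sane, it's worth checking the filesystem without writing to it before anything mounts read-write. Which tool applies depends on whether your volume is btrfs or ext4, and the device may differ if your volume sits on LVM rather than directly on md2; run these only while the volume is unmounted:
     # btrfs check --readonly /dev/md2
     # fsck.ext4 -n /dev/md2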
  5. If you are saying that you can't get a virtual disk that you expanded in ESXi to be recognized in DSM, see this: https://xpenology.com/forum/topic/14091-esxi-unable-to-expand-syno-volume-even-after-disk-space-increased/
  6. No need to retry #39, it didn't get us anywhere. Please do #41, it doesn't matter if you reboot first or not.
  7. That was going to be my next suggestion. But are you sure there was not more output from the last command? For verbose mode, it sure didn't say very much. Can you post an mdstat please? After that, if it has still only assembled two of the three drives, let's try:
     # mdadm --stop /dev/md2
     # mdadm -v --assemble --force /dev/md2 --uuid 75762e2e:4629b4db:259f216e:a39c266d
  8. I'm also a bit perplexed about /dev/sdc not coming online with the commands we've used. But I think I know why it isn't joining the array - it has a "feature map" bit set which flags the drive as being in the middle of an array recovery operation, so mdadm is reluctant to include the drive in the array assembly. In my opinion, zapping the superblocks is a last resort, only when nothing else will work. There is a lot of consistency information embedded in the superblock (evidenced by the --detail command output), along with the positional information of the disk within the stripe, and all of that is lost when we zero a superblock. Before we go doing that, let's repeat the last command with verbose mode on and change the syntax a bit:
     mdadm --stop /dev/md2
     mdadm -v --assemble --scan --force --run /dev/md2 /dev/sdb5 /dev/sdc5 /dev/sdd5
     If that doesn't work, we'll come up with something to clear the feature map bit.
  9. Hello, sorry, I had to work (I work in health care so very busy lately). These commands we are trying have not started the array yet, but you are no worse off. I don't quite understand why the drive hasn't unflagged, but let's try one more combination before telling it to create new array metadata.
     # mdadm --stop /dev/md2
     # mdadm --assemble --run --force /dev/md2 /dev/sd[bcd]5
  10. Ok, the goal here is to flag the out-of-sequence drive for use. Try this:
      # mdadm --stop /dev/md2
      # mdadm --assemble --run /dev/md2 /dev/sd[bcd]5
      Post results as before.
  11. Yes, it's hard to do all this remotely and from memory.
      # mdadm --stop /dev/md2
      then
      # mdadm --assemble --force /dev/md2 /dev/sd[bcd]5
      Sorry for the false start.
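     If you're curious what that flag looks like, the superblock fields can be inspected directly; the field names below are the standard ones mdadm prints for a v1.2 superblock, and a member caught mid-recovery typically shows a non-zero Feature Map and a Recovery Offset:
     # mdadm --examine /dev/sdc5 | egrep 'Feature Map|Recovery Offset|State'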
  12. The pound sign I typed was to represent the operating system prompt, so you knew the command was to be run with elevated privilege. When you entered the command with the pound sign included, you turned it into a comment, and exactly nothing was done. Please do not click that repair button right now. It won't be helpful.
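      To illustrate with a purely hypothetical session: typing the line with the leading pound sign
      # mdadm --stop /dev/md2
      does nothing, because the shell treats everything after # as a comment, whereas
      mdadm --stop /dev/md2
      entered at a root prompt actually runs the command.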
  13. Well, it looks like drive 1 is not worth trying to use. Lots of bad sectors and no partitions with obvious data on them. Let's try restarting your array in degraded mode using the remaining drives.
      # mdadm --force --assemble /dev/md2 /dev/sd[bcd]5
      Post the output exactly.
  14. When you say something like this, be very specific as to which disk it is. Is it disk #1 or disk #3? (please answer) This is not too bad; there might be some modest data corruption (as IG-88 quantified), but most files should be ok. Do you know whether your filesystem is btrfs or ext4? (please answer) And please answer the two questions I asked in the first place.
  15. Before you do anything else, heed IG-88's advice to understand what happened and hopefully determine that it won't happen again. From what you have posted, DSM cannot see any data on disk #1 (/dev/sda). There is an incomplete Warning message that might tell us more about /dev/sda. Also, disk #3 (/dev/sdc) MIGHT have data on it, but we aren't sure yet. In order to effect a recovery, one of those two drives has to be functional and contain data. So first, please investigate and report on the circumstances that caused the failure. Also, consider power-cycling the NAS and/or reseating the drive connector on disk #1. Once /dev/sda is up and running (or you are absolutely certain that it won't come up), complete the last investigation step IG-88 proposed. You only have four drives, so adapt the last command as follows:
      # mdadm --examine /dev/sd[abcd]5 | egrep 'Event|/dev/sd'
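      If you aren't sure of the filesystem type, it can usually be read from the mount table or fstab (assuming the data volume is /volume1; adjust the name if yours differs):
      # grep volume1 /etc/fstab
      # mount | grep volume1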
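      While you investigate, a quick way to gauge the health of disk #1 is a SMART query; smartctl is normally available on DSM, and the -d sat form is sometimes needed for SATA drives behind certain controllers:
      # smartctl -a /dev/sda
      # smartctl -d sat -a /dev/sda
      Pay attention to Reallocated_Sector_Ct and Current_Pending_Sector in the attribute table.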
  16. CPU is "Prestonia," which is Netburst architecture. Unlikely to work, as the DSM Linux kernel is compiled for Nehalem or later. That chip is more than 15 years old!
  17. 6.2 has received a few security patches that 6.1 has not. However, Synology lags so badly on critical updates compared with a normal Linux distro that I personally won't rely on 6.2.2's security state. My recommendation is that you should assume it's hackable and never expose your NAS to the Internet. A VPN or a 2-factor encrypted proxy service are really the only ways to safely access it remotely. If you subscribe to that opinion, the difference between 6.1's and 6.2's security state is really irrelevant.
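      A rough way to check whether a CPU clears that bar (an approximation: SSE4.2 first appeared with Nehalem, so if the flag is absent the chip is too old):
      # grep -m1 flags /proc/cpuinfo | grep -o sse4_2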
  18. https://xpenology.com/forum/topic/9392-general-faq/?do=findComment&comment=113011 Read the last paragraph in particular...
  19. Yes, if you do a DSM migration install with the new VM. Yes. They have to be seen as SSDs by the DSM instance. Don't use cheap SSDs for write cache, or you risk your data. Better yet, don't use write cache at all.
  20. DS918 and DS3615 - 8 threads (4 cores + 4 hyperthreads, or 8 cores if you turn off HT in the BIOS)
      DS3617 - 16 threads (8 cores + 8 hyperthreads, or 16 cores if you turn off HT in the BIOS), but it is the most finicky with regard to hardware compatibility.
      You can add as many threads as you want to the VM, but DSM won't use more than stated; it's a kernel compile limitation.
  21. It depends on which loader you choose. 1.03b requires legacy BIOS (CSM) boot rather than UEFI.
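      If in doubt about whether DSM sees a virtual disk as an SSD, the kernel's view can be checked directly (replace sdb with the device in question; 0 means non-rotational, i.e. treated as SSD):
      # cat /sys/block/sdb/queue/rotational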
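      To confirm how many threads a DSM instance actually sees, standard Linux commands from the DSM shell will do:
      # nproc
      # grep -c ^processor /proc/cpuinfo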