XPEnology Community

flyride

Moderator
Everything posted by flyride

  1. Yes, this is quite "curious." @IG-88 is THE heavy lifter with regard to driver support on this hack of a hack and is the #1 contributor as recognized by the forum population. I won't speak for him, but from my view 6.2.1 and 6.2.2 were real problems for driver support, as Synology essentially broke their own driver model, let it get out into the public, and then fixed it with 6.2.3. And contrary to uninformed opinion, the available drivers don't always just compile without errors in the Syno DSM environment; they take some work to address incompatibilities and weird dependencies. I say this because I tried to duplicate his work when he was offline for a while, and found a lot of problems in doing so. So if he doesn't want to update his own contributions to 6.2.2 because it's fundamentally broken, drivers don't work even when they compile cleanly, and it's generally a real pain in the butt, so be it. If the recommendation to a new user is NOT to use 6.2.2 because of this, that's still help. To accuse him of insufficient or faulty altruism is the height of arrogance. Nobody gets paid for this; we do it because we like the platform and want others to enjoy it.
  2. If nothing else changed, you have a network configuration problem and nothing wrong with DSM. Do you know the IPs of your PC, DSM and router? What is your network mask and gateway address? You might have to change the IP of your PC so that you can reach DSM if the router was routing between the two and now it is not.
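     If you want to check those from a shell, something like this will show the relevant settings (a minimal sketch assuming a Linux PC and a hypothetical DSM address of 192.168.1.50):
     $ ip addr show
     (shows your PC's IP address and network mask)
     $ ip route
     (shows your default gateway)
     $ ping -c 3 192.168.1.50
     (tests whether the PC can reach DSM at all; if this fails and the two are on different subnets, temporarily set the PC to a static IP in DSM's subnet)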
  3. The usual suspects have tried to help him in the past and he rejects the advice. He also posts about super obscure things that turn out to be bad hardware on his end. @omar please take the advice and create your own new question thread (leave this one be), and document your question as specifically and completely as possible. We'll try and help.
  4. If you need to reinstall for whatever reason, you can just initiate that from Synology Assistant and it will reuse the existing loader USB. You don't have to reburn it. At any time, you can burn a new loader USB and install it to a running system, and it will update the loader automatically. So just save the .IMG file that you originally built with your serial and other loader configs in a safe place (helps to identify the platform and loader version so that if you experiment with others it won't get crossed up). If you ever need to install from scratch (assuming that you do not change platforms), just reburn that file. There is no reason to save or clone a running loader outside of the above.
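     If you ever do need to reburn, any raw image writer works; here is a minimal sketch from a Linux shell, assuming your saved image is named loader.img and the USB stick appears as /dev/sdX (both placeholders - triple-check the device name, since dd will happily overwrite the wrong disk):
     $ sudo dd if=loader.img of=/dev/sdX bs=4M status=progress && sync
     (writes the image byte-for-byte to the stick, then flushes the write cache)
     On Windows, Win32 Disk Imager or Rufus in dd mode accomplishes the same thing.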
  5. Having a test platform makes things less intimidating. Virtually everything I do on XPEnology happens on a test system first. If all you have is your production data, I get your trepidation. Maybe you just need to migrate to more compact cases. I use the U-NAS 810 and 410 cases with my own motherboard and power supplies. These are frequently out of stock and a bit of a pain to work with, but extremely small and quiet. But there are a lot of NAS-tuned case options out there that might be an ideal fit for your motherboard and drive counts. I question running 10TB of SSDs in RAID 0, however. They can and do fail. You might consider switching to DS3617xs on that system and running RAIDF1. But maybe you have everything backed up somewhere.
  6. The array I/O modification part has nothing to do with ESXi or NVMe. Jun's loader is also a hack, but it leaves the DSM system in a state it expects to see itself in. Shutting down the /dev/md0 and /dev/md1 replicas leaves DSM in a state it never expected to be in, so there is no guarantee there won't be a problem in the future. However, the worst thing I've encountered in several years of running partial /dev/md0 and /dev/md1 arrays is unexpected resumption of replicas on drives I was attempting to omit.
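     If you run this way, it's worth verifying after each reboot that the replicas are still omitted where you intended; a quick check from ssh:
     # cat /proc/mdstat
     (shows which disk partitions are currently active members of each md array)
     # mdadm --detail /dev/md0
     (per-array view; members you dropped should show as removed, not quietly resynced back in)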
  7. Not necessary unless you functionally require the reported MAC to match the MAC on your new card (DSM does not care). Otherwise just change out the NIC and continue.
  8. If you are creating a SHR, you must include the smallest disks at the beginning or they won't be supported. If you are creating a RAID, it will only use the storage on each disk that corresponds to the smallest disk in the initial array. Unfortunately, one manufacturer's "x terabyte" drive may be slightly larger than another's, so some have run into the problem of trying to add additional drives that are marketed as the same size but are still too small to function in the existing array.
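     You can compare exact byte counts before adding a drive; a quick check from ssh (the device names are examples):
     # fdisk -l /dev/sda /dev/sdb | grep "^Disk /dev"
     (prints each drive's exact size in bytes; a new member must be at least as large as the smallest drive already in the array)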
  9. You are asking about two different things:
     1. How to migrate a fully intact array from one DSM instance to another? The easiest way is probably to reinstall DSM with both array sets connected. There is also a way to import into a running system, but it's complex (a rough sketch of the kind of commands involved is at the end of this post). It's a lot easier if the array to be imported is RAID and not SHR. You need to be able to hotswap the disks into the running system, as Bad Things can happen if a conflicting array is suddenly present on boot. This procedure is from my notes importing RAID arrays (not SHR). If the array to be migrated is SHR, don't attempt this procedure as written, as there are additional steps, including extracting a volume group reserved area, that I don't quite understand. It is nondestructive to your data, but it might leave your array inaccessible and require a more in-depth recovery operation if something goes wrong, and I don't know all the things that can go wrong with it. You have to decide whether that risk justifies avoiding a manual copy of your files from one system to the other. If all went well, you should see a new Storage Pool and Volume entry in Storage Manager. It will probably show the System Partition crash error and offer to fix it; go ahead and do that. Then reboot and make sure everything restarts correctly.
     2. Can the DSM OS be installed on only some drives and not others (in your case, on SSD and not on HDD)? Sort of. You can disable the disk I/O to the DSM and swap partitions on the HDDs via the procedure detailed in the link below. It cannot reclaim the space; the DSM and swap partitions must continue to exist on the HDDs. They become hot spares that are activated if an SSD goes offline. Please note this is not supported in any way by Synology and they have no intention for this to work - it's just something I figured out a long time ago. https://xpenology.com/forum/topic/12391-nvme-optimization-baremetal-to-esxi-report
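     As promised, a rough sketch of the import commands - this is not the full procedure from my notes, and the device names and md number are hypothetical; adapt them to what mdadm --examine actually reports:
     # mdadm --examine /dev/sdk3
     (read the on-disk metadata first; confirms the foreign array's UUID, RAID level, and member order before touching anything)
     # mdadm --assemble --run /dev/md4 /dev/sdk3 /dev/sdl3 /dev/sdm3
     (assembles the foreign array under an md device number that doesn't conflict with the existing ones)
     # cat /proc/mdstat
     (verify the array came up clean before doing anything else)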
  10. I suspect that OP has #5 and #6 disks that are slightly smaller than the first four, so it won't allow them to be added to the array.
  11. You mean with File Station? That would be uncommon, but if that is how you are doing it, DSM is probably using gzip in the background, which is a single-threaded application. Synology's version is 1.6 and the current (since 2018) is 1.10, so you can see they aren't too concerned with it. If you want, you can install a multithreaded zip app like pigz and run your heavy zip operations from the shell instead of File Station.
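     For example, a minimal sketch assuming pigz is already installed and using a hypothetical share path:
     $ tar -cf - /volume1/share/mydata | pigz -p 4 > /volume1/share/mydata.tar.gz
     (tar streams the files; pigz compresses with 4 parallel threads instead of gzip's single thread)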
  12. It has no practical limitations (24 disks for "high performance" configuration, even more for flexible). More likely that your drives are not compatible with what you wish to do. To verify, run this command from ssh and post the result here:
     $ sudo fdisk -l | grep "^Disk /dev"
  13. Your DSM/XPenology system is not responsible for decompressing RAR or ZIP archives - all it does is serve up the original files to your PC, and your PC does the decompression. Serving up files is not a particularly taxing operation, thus CPU usage is commensurately low. Your i3 is more than adequate for the file serving tasks you may give your NAS. USB on ESXi doesn't work the same way as baremetal hardware. The device needs to be plugged in and explicitly added to the VM prior to booting it. You will actually see the external HD device in ESXi, not a generic USB port.
  14. Ugh. btrfs stores its superblocks in three different places and we just tried to look for all of them, but the btrfs binary keeps crashing (on #1 and #3; #2 returned unusable data). For the sake of completeness, please post a dmesg to see if there is any kernel-related log information about the last crash. Because of the btrfs crashes, we have not positively proven that all three superblocks are inaccessible.
     Really the only thing left to try now is to install a new Linux system, connect the drives to it, and see if a new Linux kernel and the latest btrfs utilities can read anything useful without core dumping. I suppose you could also try to reinstall DSM (maybe using the DS918 platform since it has the newest kernel) and see if that makes a difference, but I don't hold out much hope for that.
     Barring that result, whatever happened to your drives has caused them to return data so corrupted that there is probably no filesystem recovery possible without forensic tools. However, we haven't written over the filesystem areas of the disk, so forensic recovery should still be possible. And the new metadata we created for the array will help a forensic lab know the correct order of the disks, should you decide to go in that direction.
     If you decide to abandon the array and remake it, test the two failed drives very carefully before putting production data on them again, because this could be the result of controller or drive failure (although two drives failing in this way at the same time seems unlikely). We did everything reasonable to recover this data. I'm sorry that the results were not better.
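     If you do try the fresh Linux route, the usual tool for pulling files off an unmountable btrfs volume is btrfs restore; a minimal sketch, assuming the array assembles as /dev/md3 on the new system and /mnt/recovery is an empty directory on a separate disk:
     # mdadm --assemble --scan
     (reassembles the Synology arrays from their on-disk metadata)
     # btrfs restore -v /dev/md3 /mnt/recovery
     (read-only file extraction; it never writes to the damaged volume, so it is safe to attempt)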
  15. Ok, btrfs is crashing before we have tested all three superblocks. So let's try and reach the other two directly:
     # btrfs ins dump-super -Ffs 67108864 /dev/md3
     # btrfs ins dump-super -Ffs 274877906944 /dev/md3
  16. Ok, let's see if there is any valid superblock in btrfs.
     # btrfs ins dump-super -fFa /dev/md3
  17. No, this is definitely not the case. There is no attempt to hide anything at all. The FAQ recommends not using Synology cloud services for this reason. Plus, how would Synology know it was talking to the Internet? All it knows is a default gateway which is on your LAN. The only thing that knows about the Internet and port forwarding is your router. I'd check your network settings very carefully on DSM and your router. I've seen weird things like this when the network masks are mismatched.
  18. Yep. There is corruption to the extent that DSM thinks it must be an ext4 volume, because it cannot find the initial btrfs superblock. Your fstab says it was previously mounted as a btrfs volume; do you concur with that? If so, try to recover the btrfs superblock with:
     # btrfs rescue super-recover -v /dev/md3
     If it errors out, post the error. If it suggests that it may have fixed the superblock, try mounting the volume in recovery mode:
     # mount -vs -t btrfs -o ro,recovery,errors=continue /dev/md3 /volume2
  19. I had a few minutes, so here's a plan:
     1. Retrieve the current array-to-filesystem relationship
     2. Stop the array
     3. Force (re-)create the array
     4. Check the array for proper configuration before doing anything else (or report the exact failure response)
     Assumptions based on the prior posts:
     1. Array members are on disks h (8), i (9), j (10) and the array is ordered in that sequence
     2. Data corruption has at least damaged the array superblocks (/dev/md3 RAID5) - but the extent is unknown
     Comments and caveats: Note that this is an irreversible operation. Any metadata on the disks containing the array state will be overwritten. Files on the disks are not damaged by this operation (so you could, in theory, still send the drives out for forensic recovery). It's possible that the create operation will fail without zeroing the array superblocks first; I don't like doing that unless it's absolutely necessary. If corruption is extensive, the array will start but it will not be possible to mount the filesystem (we'll try and check that after the array creates correctly). Feel free to question, research, and verify this suggestion prior to executing. At the end of the day it's your data, and your decision to follow the free advice you obtain here. Again, I hope you have a backup, because we already know there is some amount of data loss.
     Commands, execute in sequence:
     # cat /etc/fstab
     # mdadm --stop /dev/md3
     # mdadm -v --create --assume-clean -e1.2 -n3 -l5 /dev/md3 /dev/sdh3 /dev/sdi3 /dev/sdj3 -u22a4b5c5:8103a815:1de617b2:3f23ee03
     # cat /proc/mdstat
     Post output, including error messages, from each of these.
  20. Sorry, I've had a crazy work schedule for the last couple of days, and not been able to get back to this. It isn't rocket science but I will post more detailed instructions when I get back from work in about 8 hours.
  21. Normally you would use visudo to ensure that you have valid syntax. Unfortunately Synology doesn't bother with that package, so be extra, extra careful when editing sudoers!
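     One workaround sketch, assuming you have access to any other Linux box (or container) with visudo on it: edit a copy, syntax-check it there, and only then put it in place:
     # cp /etc/sudoers /tmp/sudoers.new
     (edit /tmp/sudoers.new with vi, then copy it to the machine that has visudo)
     # visudo -c -f /tmp/sudoers.new
     (check mode: parses the file and reports syntax errors without installing anything)
     # cp /tmp/sudoers.new /etc/sudoers
     (only after the check passes; keep a root shell open until you've confirmed sudo still works)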
  22. It is passively supported via snapshots and the Copy on Write feature. The few times I have seen Synology comment on the matter, other deduplication services are not supported. There are open source utilities that can look for file-level duplicates and submit them in batch to btrfs for deduplication. I have used rmlint successfully, installed via the syno-cli SynoCommunity package. I'm not aware of block-level deduplication that works on Synology btrfs. Note that some of the dedupe utilities have specific btrfs version and kernel version requirements that DSM does not meet. Tread carefully!
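     For reference, a minimal rmlint sketch - this assumes your rmlint build supports the btrfs clone handler, and the share path is hypothetical:
     $ rmlint -c sh:clone /volume1/share
     (scans for duplicate files and writes an rmlint.sh script that reflinks duplicates instead of deleting them)
     $ ./rmlint.sh
     (review the generated script first; running it is what actually performs the clone/dedupe operations)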
  23. Hmm, well, the script is not working. Someone posted about trouble with newline conversion due to how they downloaded the script. You might try this:
     # tr -d '\r' < FixSynoboot.sh > FixSynoboot.new
     # mv FixSynoboot.new FixSynoboot.sh
     Then try running it manually again.
  24. Did you receive Error 21? (Reporting this sort of detail makes diagnosis somewhat easier.) Ok, that makes more sense. Most upgrades require the availability of synoboot so that the upgrade can modify the loader, so that is quite possibly the issue, and you are trying to do the right thing to resolve it. That said, it would seem that you have not successfully installed FixSynoboot.sh. Double-check the installation directions: you must install in /usr/local/etc/rc.d, not /etc/rc.d. You should be able to run the script manually in this way (must be root):
     /usr/local/etc/rc.d # ./FixSynoboot.sh
     Then validate that the synoboot devices appear:
     # ls /dev/synoboot*
  25. It's not a patch, it is a script that has to be installed and run on every boot. Its purpose is to make the synoboot device present when missing from your system (which you reported). That said, I don't know why you were "struggling to mount synoboot disk through ssh to update to the newest DSM" There is no need to mount synoboot prior to update; in fact if you do it probably will fail. It might be helpful if you explain in a little more detail what you are doing and how.