Everything posted by flyride

  1. DSM is Linux, but a highly modified Linux, and many utilities you expect in a normal distro are omitted. There is no compile environment, and many of the system files are changed from standard. It isn't impossible to install applications directly, but running your apps via Docker is a much better idea.
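To illustrate, deploying an app as a container from the DSM shell looks something like this (the image, port, and volume path are placeholder examples, not a recommendation from the thread):

```shell
# Hypothetical container deployment - image, port, and paths are examples
sudo docker run -d --name webserver \
    --restart unless-stopped \
    -p 8080:80 \
    -v /volume1/docker/webserver:/usr/share/nginx/html:ro \
    nginx:latest
```

The app then runs isolated from DSM's modified userland, so a DSM update is far less likely to break it.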
  2. Sorry, no confusion here. I am not referring to any of those posts. I quoted you specifically from your fifth contribution to this forum, where you decided to admonish a highly respected contributor, bracketed with two paragraphs of tripe about how you wish folks would behave, and the wistful recollection of idealism from the early days of ARPANET. It was all perfectly communicative, thank you. At best, you make it hard to want to help you. I certainly won't forget who you are. At worst, you (and now I) have damaged a valuable information thread and made it less useful. Thus I
  3. Yes, this is quite "curious." @IG-88 is THE heavy lifter with regard to driver support on this hack of a hack and is the #1 contributor as recognized by the forum population. I won't speak for him, but from my view 6.2.1 and 6.2.2 were real problems for driver support as Synology essentially broke their own driver model, let it get out into the public, and then fixed it with 6.2.3. And contrary to uninformed opinion, the drivers available don't always just compile without errors into the Syno DSM environment, they take some work to address incompatibility and weird dependencies. I say this
  4. If nothing else changed, you have a network configuration problem and nothing wrong with DSM. Do you know the IPs of your PC, DSM, and router? What is your network mask and gateway address? You might have to change the IP of your PC so that you can reach DSM, if the router was previously routing between the two and now is not.
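A quick way to reason about it: two hosts can only talk directly if their addresses land in the same subnet under the mask. A minimal shell sketch, using made-up addresses (substitute your real PC, DSM, and mask values):

```shell
# Convert dotted-quad to an integer, then compare the network portions
# (all addresses below are made-up examples)
ip2int() { local IFS=.; set -- $1; echo $(( ($1<<24) + ($2<<16) + ($3<<8) + $4 )); }
mask=$(ip2int 255.255.255.0)
pc=$(ip2int 192.168.1.50)     # your PC
nas=$(ip2int 192.168.1.77)    # DSM
if [ $(( pc & mask )) -eq $(( nas & mask )) ]; then
    echo "same subnet - they can reach each other directly"
else
    echo "different subnets - a router must forward between them"
fi
```

With these example values the script reports the same subnet; if your PC and DSM land in different subnets, either fix the mask or move the PC's IP.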
  5. The usual suspects have tried to help him in the past and he rejects the advice. He has also posted about something super obscure that turned out to be bad hardware on his end. @omar, please take the advice and create your own new question thread (leave this one be), and document your question as specifically and completely as possible. We'll try and help.
  6. If you need to reinstall for whatever reason, you can just initiate that from Synology Assistant and it will reuse the existing loader USB. You don't have to reburn it. At any time, you can burn a new loader USB and install it to a running system, and it will update the loader automatically. So just save the .IMG file that you originally built with your serial and other loader configs in a safe place (helps to identify the platform and loader version so that if you experiment with others it won't get crossed up). If you ever need to install from scratch (assuming that you do not
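One way to keep that image safe, sketched with placeholder filenames (the actual dd reburn line is commented out on purpose; always confirm the target device with lsblk before writing):

```shell
# Placeholder image so this sketch is self-contained; in practice this is
# the synoboot .img you built with your serial and loader configs
[ -f synoboot.img ] || dd if=/dev/zero of=synoboot.img bs=1M count=1 status=none

# Archive it under a descriptive name and record a checksum
cp synoboot.img synoboot-loader-backup.img
sha256sum synoboot-loader-backup.img > synoboot-backup.sha256

# Verify the archived copy is intact before ever reburning it
sha256sum -c synoboot-backup.sha256

# Reburn to USB later (example device - confirm with lsblk first!):
# sudo dd if=synoboot-loader-backup.img of=/dev/sdX bs=1M conv=fsync
```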
  7. Having a test platform makes things less intimidating. Virtually everything I do on XPEnology happens on a test system first. If all you have is your production data, I get your trepidation. Maybe you just need to migrate to more compact cases. I use the U-NAS 810 and 410 cases with my own motherboard and power supplies. These are frequently out of stock and a bit of a pain to work with, but extremely small and quiet. But there are a lot of NAS-tuned case options out there that might be an ideal fit for your motherboard and drive counts. I question running 10TB of S
  8. The array I/O modification part has nothing to do with ESXi or NVMe. Jun's loader is also a hack, but it leaves the DSM system in a state it expects to see itself in. Shutting down /md0 and /md1 replicas leaves DSM in a state it never expected to be in, so there is no guarantee there won't be a problem in the future. However, the worst thing I've encountered in several years of running partial /md0 and /md1 arrays is unexpected resumption of replicas on drives I was attempting to omit.
  9. Not necessary unless you functionally require the reported MAC to match the MAC on your new card (DSM does not care). Otherwise just change out the NIC and continue.
  10. If you are creating an SHR, you must include the smallest disks at the beginning or they won't be supported. If you are creating a RAID, it will only use the storage on each disk that corresponds to the smallest disk in the initial array. Unfortunately, one manufacturer's "x terabyte" drive may be slightly larger than another's, so some have run into the problem of trying to add additional drives marketed as the same size that are still too small to function in the existing array.
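A worked example of why this bites people, with made-up GiB capacities for four drives all marketed as the same size:

```shell
# Four "identical" drives whose real capacities differ slightly (GiB)
sizes="9314 9314 9313 9315"
set -- $sizes
count=$#
smallest=$(printf '%s\n' "$@" | sort -n | head -1)
# A classic RAID5 only uses the smallest member's capacity on every disk:
echo "usable: $(( (count - 1) * smallest )) GiB"   # (N-1) x smallest
```

Here the array is sized by the 9313 GiB disk; a later replacement or added drive even 1 GiB smaller than that would be rejected.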
  11. You are asking about two different things: 1. How to migrate a fully intact array from one DSM instance to another? The easiest way is probably to reinstall DSM with both array sets connected. There is also a way to import into a running system, but it's complex. It's a lot easier if the array to be imported is RAID and not an SHR. You need to be able to hotswap the disks into the running system, as Bad Things can happen if a conflicting array is suddenly present on boot. This procedure is from my notes importing RAID arrays (not SHR). If the array to be m
  12. I suspect that OP has #5 and #6 disks that are slightly smaller than the first four, so it won't allow them to be added to the array.
  13. You mean with File Station? That would be uncommon, but if that is how you are doing it, DSM is probably using gzip in the background, which is a single-threaded application. Synology's version is 1.6 and the current release (since 2018) is 1.10, so you can see they aren't too concerned with it. If you want, you can install a multithreaded compression app like pigz and run your heavy zip operations from the shell instead of File Station.
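For example, a shell-side compression run might look like this; pigz is a drop-in replacement for gzip, so the same flags apply (the file path is just a demo):

```shell
# Create a demo file and compress it, keeping the original (-k)
printf 'compress me\n' > /tmp/archive-demo.txt
gzip -kf /tmp/archive-demo.txt
# With pigz installed, the multithreaded equivalent would be:
#   pigz -kf /tmp/archive-demo.txt
ls -l /tmp/archive-demo.txt.gz
```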
  14. It has no practical limitations (24 disks for "high performance" configuration, even more for flexible). More likely that your drives are not compatible with what you wish to do. To verify, run this command from ssh and post the result here: $ sudo fdisk -l | grep "^Disk /dev"
  15. Your DSM/XPEnology system is not responsible for decompressing RAR or ZIP archives - all it does is serve up the original files to your PC, and your PC does the decompression. Serving up files is not a particularly taxing operation, thus CPU usage is commensurately low. Your i3 is more than adequate for the file-serving tasks you may give your NAS. USB on ESXi doesn't work the same way as on baremetal hardware. The device needs to be plugged in and explicitly added to the VM prior to booting it. You will actually see the external HD device in ESXi, not a generic USB port.
  16. Ugh. btrfs stores its superblocks in three different places and we just tried to look for all of them, but the btrfs binary keeps crashing (on #1 and #3; #2 returned unusable data). For the sake of completeness, please post the dmesg output to see if there is any kernel-related log information about the last crash. Because of the btrfs crashes, we have not positively proven that all three superblocks are inaccessible. Really, the only thing left to try now is to install a new Linux system, connect the drives to it, and see if a new Linux kernel and the latest btrfs utilities would be able to read
  17. Ok, btrfs is crashing before we have tested all three superblocks. So let's try and reach the other two directly: # btrfs ins dump-super -Ffs 67108864 /dev/md3 # btrfs ins dump-super -Ffs 274877906944 /dev/md3
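For reference, those offsets aren't arbitrary - btrfs keeps its superblock copies at fixed byte positions, which is where the numbers in the commands come from:

```shell
# btrfs superblock copy locations, as byte offsets
echo $(( 64 * 1024 ))                  # 65536        - primary, 64 KiB
echo $(( 64 * 1024 * 1024 ))           # 67108864     - copy 2, 64 MiB
echo $(( 256 * 1024 * 1024 * 1024 ))   # 274877906944 - copy 3, 256 GiB
```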
  18. Ok, let's see if there is any valid superblock in btrfs. # btrfs ins dump-super -fFa /dev/md3
  19. No, this is definitely not the case. There is no attempt to hide anything at all. The FAQ recommends not using Synology cloud services for this reason. Plus, how would Synology know it was talking to the Internet? All it knows is a default gateway which is on your LAN. The only thing that knows about the Internet and port forwarding is your router. I'd check your network settings very carefully on DSM and your router. I've seen weird things like this when the network masks are mismatched.
  20. Yep. There is corruption to the extent that DSM thinks it must be an ext4 volume, because it cannot find the initial btrfs superblock. Your fstab says it was previously mounted as a btrfs volume; do you concur with that? If so, try to recover the btrfs superblock with: # btrfs rescue super-recover -v /dev/md3 If it errors out, post the error. If it suggests that it may have fixed the superblock, try mounting the volume in recovery mode: # mount -vs -t btrfs -o ro,recovery,errors=continue /dev/md3 /volume2
  21. I had a few minutes, so here's a plan: 1. Retrieve the current array to filesystem relationship 2. Stop the array 3. Force (re-)create the array 4. Check the array for proper configuration before doing anything else (or report the exact failure response) Assumptions based on the prior posts: 1. Array members are on disks h (8), i (9), j (10) and the array is ordered in that sequence 2. Data corruption has at least damaged the array superblocks (/dev/md3 RAID5) - but the extent is unknown Comments and caveats: Note that this is an irreversible operation.
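A minimal sketch of those four steps, assuming the member layout from the assumptions above. The partition suffix is a guess to be confirmed against fdisk -l output, and step 3 is destructive, so treat this as an outline rather than something to paste blindly:

```shell
# 1. Record the current array-to-filesystem relationship (read-only)
cat /proc/mdstat
sudo mdadm --detail /dev/md3

# 2. Stop the array
sudo mdadm --stop /dev/md3

# 3. Force (re-)create with the members in their ORIGINAL order.
#    --assume-clean avoids a resync; the partition suffix "3" is an
#    assumption - confirm against fdisk -l before running
sudo mdadm --create /dev/md3 --level=5 --raid-devices=3 --assume-clean \
    /dev/sdh3 /dev/sdi3 /dev/sdj3

# 4. Check the result before touching the filesystem
sudo mdadm --detail /dev/md3
```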
  22. Sorry, I've had a crazy work schedule for the last couple of days, and not been able to get back to this. It isn't rocket science but I will post more detailed instructions when I get back from work in about 8 hours.
  23. Normally you would use visudo to ensure that you have valid syntax. Unfortunately Synology doesn't bother with that package, so be extra, extra careful when editing sudoers!
  24. It is passively supported via snapshots and the Copy-on-Write feature. The few times I have seen Synology comment on the matter, other deduplication services are not supported. There are open-source utilities that can look for file-level duplicates and submit them in batch to btrfs for deduplication. I have used rmlint successfully, installed via the syno-cli SynoCommunity package. I'm not aware of block-level deduplication that works on Synology btrfs. Note that some of the dedupe utilities have specific btrfs version and kernel version requirements that DSM does not meet
  25. Hmm, well, the script is not working. Someone posted about trouble with newline conversion due to how they downloaded the script. You might try this (substitute your script's actual filename): # tr -d '\r' < script.sh > script.fixed # mv script.fixed script.sh Then try and run it manually again.