Everything posted by flyride

  1. That hardware should work great, either bare-metal or under ESXi. Suggested loader is 1.02b or 1.03b with DS3615 DSM.
  2. Azure

     Well, this doesn't show any virtual NIC in use, at least not one that is connected to the PCI bus. However, it does indicate that the virtualization environment is Hyper-V, which is known not to work with DSM/XPEnology because the Microsoft virtual drivers aren't supported within DSM. See this thread for relevant discussion - if you can select that legacy adapter type on your Azure instance, the DEC 21140/net-tulip driver might be added via extra.lzma and might work, but I haven't heard of anyone getting it running yet.
  3. Azure

     That screenshot does not show the NIC type. Can you do an lspci -k on the CentOS box?
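     For example, something along these lines (the -k flag lists which kernel driver each device is bound to; the grep just narrows the output):

     lspci -k | grep -iA3 ethernet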
  4. Azure

     What network adapters is Azure providing? If the VM is being delivered via Hyper-V, the current state is that Microsoft does not provide a compatible NIC.
  5. DSM 6.2 Loader

     1.03b with DS3615 is supported up to the current 6.2.1.
  6. I don't run Syno VMM anymore, but when I was playing around with it, I found that I had to install the exact supported releases and versions to avoid kernel panics.
  7. DSM 6.2 Loader

     Under Tutorials and Guides, or use the links in my signature. I strongly recommend you use DS3615 instead of DS3617. Also, it's well documented that you must have an Intel NIC to boot 6.2.1 on 1.03b because of kernel crashes.
  8. Ok, there is good information here. The kernel output (dmesg) tells us that you probably have a hardware failure that is not a bad sector - possibly a disk problem, possibly a bad SATA cable, or a disk controller failure - which has caused some erroneous information to be written to the disk. It's extremely likely that a btrfs check would fix your problem. In particular, I would try these repair commands in this order, with an attempt to mount after each has completed:

     sudo btrfs check --init-extent-tree /dev/vg1000/lv
     sudo btrfs check --init-csum-tree /dev/vg1000/lv
     sudo btrfs check --repair /dev/vg1000/lv

     HOWEVER: Because the btrfs filesystem has incompatibility flags reported via the inspect-internal dump, I believe you won't be able to run any of these commands; they will error out with "couldn't open RDWR because of unsupported option features." The flags tell us that the way the btrfs filesystem has been built, the btrfs code in the kernel and the btrfs tools are not totally compatible with each other. I think Synology may use its own custom binaries to generate the btrfs filesystem and doesn't intend for the standard btrfs tools to be used to maintain it. They may have their own maintenance tools that we don't know about, or they may only load the tools when providing remote support.

     It might be possible to compile the latest versions of the btrfs tools against a Synology kernel source and get them to work. If it were me in your situation, I would try that. It's actually been on my list of things to do when I get some time, and if it works I will post them for download. The other option is to build up a Linux system, install the latest btrfs tools, connect your drives to it and run the tools from there. Obviously both of these choices are fairly complex to execute.

     In summary, I think the filesystem is largely intact and the check options above would fix it. But in lieu of a working btrfs check, consider this alternative, which definitely does work on Synology btrfs: execute a btrfs recovery. Btrfs has a special option to dump the whole filesystem to a completely separate location, even if the source cannot be mounted. So if you have a free SATA port, install an adequately sized drive, create a second storage pool and set up a second volume to use as a recovery target. Alternatively, you could build it on another NAS and NFS-mount it. Whatever you come up with has to be directly accessible on the problem system. For example's sake, let's say that you have installed and configured /volume2. This command should extract all the files from your broken btrfs filesystem and drop them on /volume2. Note that /volume2 can be set up as btrfs or ext4 - the filesystem type does not matter.

     sudo btrfs restore /dev/vg1000/lv /volume2

     FMI: https://btrfs.wiki.kernel.org/index.php/Restore

     Doing this restore is probably a good safety option even if you are able to successfully execute some sort of repair on the broken filesystem with btrfs check. I'm just about out of advice at this point. You do a very good job of pulling together relevant logs, however. If you encounter something interesting, post it. Otherwise, good luck!
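     If your btrfs-progs build supports it, a dry run is a low-risk first step before committing to the full restore - a sketch, assuming the same /volume2 target:

     # List what would be recovered, without writing anything
     sudo btrfs restore -D -v /dev/vg1000/lv /volume2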
  9. Just so you know, this strategy doesn't accomplish anything. The instant you installed the 16 spinning disks, OS and swap partitions were created and DSM was replicated to each of them. It is in fact possible to keep DSM I/O confined to your SSDs, but it is impossible to recover the space on the spinning disks. If the former interests you, see here.

     Unfortunately, the answer is that it depends. There are several reasons for this:
       • Intel has up-revved the silicon for the I219-V PHY and has unfortunately been assigning new device IDs, which are incompatible with existing drivers
       • The various DSM versions (DSM 6.1.7, 6.2, 6.2.1) and platforms (DS3615, DS916, DS918) are not identical with regard to which kernel drivers they include
       • Those same versions and platforms are likewise not identical with regard to kernel driver versions

     The first bullet is the major issue. Essentially, Intel is selling a new device but calls it the same name as the old one. So depending on which one you have, it may work or not. Installing the "latest" drivers via extra.lzma has been the usual way of combatting this, but with the latest loaders and versions, extra.lzma support is less robust, so your mileage may vary. Without extra.lzma, the only way to tell for sure that your device will work is to get the device ID (see the sample command after this post) and check it against the supported device IDs for the DSM version and platform you want to use. Links to these resources are also in my signature, or you can go to the Tutorials and Guides page where they are pinned.
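     A sample device ID check from any Linux shell - the bracketed [vendor:device] pair is what you compare against the spreadsheets; the 8086:15b8 shown is just an illustration of one I219-V revision:

     lspci -nn | grep -i ethernet
     # e.g. 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (2) I219-V [8086:15b8]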
  10. No, the stick is the boot device and is modified by DSM whenever you take an update, and perhaps at other times. If you need to access the files on it while booted up, there are methods of mounting the partitions so that they are accessible from the shell, such as the sketch below.
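     A minimal sketch - note the loader partition's device name varies by loader and DSM version, so /dev/synoboot1 here is an assumption:

     mkdir -p /tmp/synoboot
     sudo mount -o ro /dev/synoboot1 /tmp/synoboot   # read-only is safest for a look around
     ls /tmp/synoboot
     sudo umount /tmp/synoboot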
  11. A few more things to try. But before that, do you have another disk on which you could recover the 4TB of files? If so, that might be worth preparing for. First, attempt one more mount:

     sudo mount -o recovery,ro /dev/vg1000/lv /volume1

     If it doesn't work, do all of these:

     sudo btrfs rescue super-recover /dev/vg1000/lv
     sudo btrfs-find-root /dev/vg1000/lv
     sudo btrfs inspect-internal dump-super -f /dev/vg1000/lv

     Post the results of all three. We are getting to the point where the only options are a check repair (which doesn't really work on Syno because of kernel/utility mismatches) or a recovery to another disk as mentioned above.
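     If you want to capture everything for posting in one go, just redirect each command's output to a file, e.g.:

     sudo btrfs inspect-internal dump-super -f /dev/vg1000/lv > /tmp/dump-super.txt 2>&1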
  12. Haswell or later is required with the DS918+ image, which is what loader 1.04b supports.
  13. Syno has several different historical methods of building and naming LVs. I think this one must have been built on 6.1.x because it doesn't use the current design; the current one adds a "syno_vg_reserved_area" second LV when a volume is created, but yours may be complete with only one LV. It should not be necessary, but make sure you have activated all your LVs by executing:

     sudo vgchange -ay

     At this point I think we have validated all the way up to the filesystem. Yep, you are using btrfs. Therefore the tools (fsck, mke2fs) you were trying to use at the beginning of the post are not correct - those are appropriate for the ext4 filesystem only.

     NOTE: Btrfs has a lot of redundancy built into it, so running a filesystem check using btrfs check --repair is generally not recommended. Before you do anything else, just be absolutely certain you cannot manually mount the filesystem via:

     sudo mount /dev/vg1000/lv /volume1

     If that doesn't work, try mounting without the filesystem cache:

     sudo mount -o clear_cache /dev/vg1000/lv /volume1

     If that doesn't work, try a recovery mount:

     sudo mount -o recovery /dev/vg1000/lv /volume1

     If any of these mount options work, copy all the data off that you can, then delete your volume and rebuild it. If not, report the failure information.
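     To confirm the LV actually came up active after the vgchange, something like:

     sudo lvscan
     # expect a line like: ACTIVE '/dev/vg1000/lv' [x.xx TiB] inherit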
  14. So FWIW this proves that your volume is not subject to a 16TB limitation (only applicable to ext4 created with a 32-bit OS). Do you know if your filesystem was btrfs or ext4?

     /$ sudo vgdisplay
       --- Volume group ---
       VG Name               vg1000
       System ID
       Format                lvm2
       Metadata Areas        2
       Metadata Sequence No  5
       VG Access             read/write
       VG Status             resizable
       MAX LV                0
       Cur LV                1    <----------- this looks like a problem to me, I think there should be 2 LVs

     Post the output of vgdisplay --verbose and lvdisplay --verbose.
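     That is, run and post both of:

     sudo vgdisplay --verbose
     sudo lvdisplay --verbose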
  15. Ok, your arrays seem healthy. It's strange that you have a filesystem (volume) crash without any array corruption. What exactly was happening when the volume crashed? When you say it has "populated with an empty volume," do you mean that it shows 0 bytes? I suspect it just isn't mounting. Run a vgdisplay and post the results, please.
  16. You want to troubleshoot your crash from the lowest (atomic) level, progressing to the highest. You are starting at the highest level (the filesystem), which is the wrong end of the problem. Generally you don't want to force things, as that can render your system unrecoverable. I have no idea if your data can be recovered at this point, but you can troubleshoot until you run into something that can't be resolved. Start by posting screen captures of Storage Manager, both the HDD/SSD screen and the RAID Group screen. Are all your drives present and functional? Then run a cat /proc/mdstat and post the output.
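     For reference, a healthy array line in /proc/mdstat looks roughly like this (illustrative, not your output):

     cat /proc/mdstat
     # md2 : active raid5 sda3[0] sdb3[1] sdc3[2]
     #       ... [3/3] [UUU]   <- all members up; an underscore in place of a U marks a failed member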
  17. NVMe cache support

     Ooo, I wonder if Syno coded their utilities specifically to a particular PCIe address for the DS918+? Can anyone with a real DS918+ and an NVMe card run an lspci on their NVMe card?
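     Something like this would show it (illustrative):

     lspci -nn | grep -i "non-volatile"
     # NVMe devices report class "Non-Volatile memory controller"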
  18. It would have to be standalone or on another ESXi host; otherwise you could not get to the XPE backup if ESXi was down.
  19. DSM 6.2 Loader

     Get the vendor and device ID from the onboard NIC and you may find that it is supported.
  20. DSM 6.2 Loader

     There is nothing magical about the NC360T. The only thing you need to do is to make sure that the device you are trying to buy is supported by the driver in the image you want to run. Refer to this Intel list: https://www.intel.com/content/www/us/en/support/articles/000005612/network-and-i-o/ethernet-products.html

     The PRO 1000/CT is device ID 10D3. Now look here (for DS3615) or here (for DS918) and look up the device ID in the appropriate spreadsheet. The 10D3 card is supported by both images, so you should be fine with that card.
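     If you already have a card in hand, you can confirm its device ID from any Linux system before committing to an image; a sketch:

     lspci -nn | grep -i ethernet
     # for this card you'd want to see [8086:10d3] in the bracketed [vendor:device] pair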
  21. That would work, or you can just use the datastore browser in the vSphere or web consoles to download the files individually.
  22. It depends a little bit on your installation. In my particular case, since all the NAS storage is passthrough or RDM, there really is no unique XPEnology data in datastores other than the vmx file and the RDM pointers, so I just copy those small files up to my PC for safekeeping. For VMware itself, it's pretty typical to make a clone of the USB boot device when the system is down, as it doesn't change unless you upgrade or patch VMware. I'm running a couple of Linux VMs and I'm using VEEAM for Linux to back those up to XPEnology.
  23. The NIC on that board is Atheros, which is not very frequently used, and 100Mbps besides. There is some Atheros support in the DS3615 image, but I doubt it is comprehensive. I think your simplest solution will be to buy an Intel PCIe NIC. There are some PCIe x1 flavors, and your board has a PCIe x16 slot too. It should cost you no more than $15-$20.
  24. I believe that your statement is correct.
  25. Again, start with loader 1.02b for DS3615, and download a DSM PAT file for version 6.1.7 DS3615xs - NOT DS3617, NOT 6.2.x. When it installs, DON'T let it upgrade to 6.2.x. This is a version that is almost guaranteed to work on your hardware, and once you figure out what's going on you can experiment with other loaders and DSM versions.