flyride

Everything posted by flyride

  1. Just delete the XML tag with the serial number of the drive in question. When you reboot the tag will be regenerated and any reconnect history will start from 0.
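    In case a sketch helps, here is roughly what that looks like over SSH (the file path is purely illustrative; I am not confirming it for your DSM version, so locate the actual XML on your own build first):
      sudo -i                              # elevate to root over SSH
      vi /var/log/disk_overview.xml        # hypothetical path; open the XML that Storage Manager reads
      # delete the element containing that drive's serial number, save, then reboot so the tag is regenerated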
  2. That's all the GRUB menu will say. Which loader? Which DSM and version?
  3. Which loader and DSM are you trying to use? The recommended VM type is "Other 3.x Linux 64-bit" for ESXi, but I don't know if that translates to VMware Workstation. EDIT: It would appear from the topic that you have tried both 1.04b/DS918 and 1.03b/DS3617. I recommend 1.04b/DS918 with a vmxnet3-equivalent NIC (not sure what the VMware "host" type is), or 1.03b/DS3615 with the e1000e NIC type.
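    For reference, a minimal sketch of the relevant .vmx settings I would try under ESXi (these are standard vmx keys; I can't confirm how Workstation exposes the same options in its UI):
      guestOS = "other3xlinux-64"          # "Other 3.x Linux 64-bit"
      ethernet0.present = "TRUE"
      ethernet0.virtualDev = "vmxnet3"     # use "e1000e" here instead if you go with 1.03b/DS3615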
  4. r8168/r8169 is removed from DS3615 6.2.1 https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-and-621-on-ds3615/
  5. You are correct, it will limit the amount of RAM on passthrough. RAM can remain dynamic if you use RDM, however, which is equally functional.
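    For anyone trying the RDM route, a minimal sketch of creating a physical RDM pointer on ESXi (device and datastore paths are placeholders for your own hardware):
      # list the physical devices to find the target disk
      ls /vmfs/devices/disks/
      # create a passthrough (physical) RDM mapping file, then attach it to the VM as an existing disk
      vmkfstools -z /vmfs/devices/disks/t10.ATA_____YOURDISK____ /vmfs/volumes/datastore1/xpenology/disk1-rdmp.vmdk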
  6. DSM 6.2 Loader

    And this if you are using ESXi. Search is your friend.
  7. Do you have another PC you can use to experiment with ESXi first? Yes, there is a learning curve, but it's not awful. It would be wise to do some practice installations using both ESXi and XPEnology on ESXi before trying to convert your main hardware and storage. And, as always, when you do your "real" installation, your data should be backed up somewhere else in case of catastrophe. Performance-wise, any difference between ESXi and baremetal is negligible. You will be running a hypervisor in addition to XPEnology, but you can give the entire machine (RAM and CPU) to the XPEnology VM.
  8. That link says passthrough is not possible with Hyper-V. So I agree that your only possible option will be the tulip driver compiled for the version you want to install. This is not part of any DSM PAT file, but it is in the source code tree.
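    A rough outline of what that build looks like, assuming you have the matching Synology kernel source and cross toolchain already set up (paths and toolchain prefix below are placeholders, not a verified recipe):
      cd /path/to/synology-kernel-source
      # prepare the tree with the Synology .config first (make oldconfig && make modules_prepare)
      make ARCH=x86_64 CROSS_COMPILE=/path/to/toolchain/bin/x86_64-pc-linux-gnu- CONFIG_TULIP=m M=drivers/net/ethernet/dec/tulip modules
      # copy the resulting tulip.ko to the box and load it with insmod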
  9. Interesting. I am using that same strategy (virtual LSI SAS to get SMART) to connect to NVMe drives via physical RDM, on DSM 6.1.7 and DS3615. I don't see any reason one could not attach SAS drives via physical RDM and do the same. On 6.2.x, 1.03b and DS3615, this does not work unless the storage pool is built first using a virtual SATA controller; you can then go back to the VM config and move the drives to the virtual LSI SAS controller. I'm not sure if that would work with your SAS drives via RDM, but I suspect it may. Your post however caused me to edit the original "p
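    For reference, a sketch of the vmx entries for that layout (the RDM mapping filename is a placeholder):
      scsi1.present = "TRUE"
      scsi1.virtualDev = "lsisas1068"        # virtual LSI SAS controller, which is what passes SMART through
      scsi1:0.present = "TRUE"
      scsi1:0.deviceType = "scsi-hardDisk"
      scsi1:0.fileName = "nvme0-rdmp.vmdk"   # physical RDM mapping file created with vmkfstools -z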
  10. You can run two Storage Pools (one on enterprise SSD and the other on regular disk) and accomplish what you want without worrying about whether DSM is installed on regular disk. However, my configuration is exactly what you are envisioning - isolate DSM to an enterprise SSD (in my case NVMe) RAID 1 which speeds things up quite a bit. Link/guide in my sig.
  11. AFAIK all supported DSM versions support 8 threads total regardless of the processor type. So 4 hyperthreaded cores, or 8 non-HT cores.
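    A quick way to confirm how many threads DSM actually ended up with, over SSH:
      grep -c ^processor /proc/cpuinfo
      # nproc also works if it is available on your build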
  12. The router is offering DHCP on 192.168.0.x and in his diagram there is no connection from the 10Gbe interfaces to the router with DHCP, so there is no requirement to exclude the range.
  13. I agree with the advice to change addresses, but in the meantime, you have IP address pairs on the 10Gbe segments which require subnet masks of 255.255.0.0 in order to work. I suspect at least one of the interfaces on your Syno-to-XPe network has a subnet mask of 255.255.255.0, and that is why it is not working. Typically you want to limit an IP network to a class C scope (255.255.255.0 subnet mask), which means that the first three octets (i.e. 169.254.72.x) must be the same on every device on that network. The reason for this is to limit the collision domain and broadcast traffic.
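    To check which mask each 10Gbe interface is actually using (interface names here are just examples):
      ip addr show eth2
      # a "/24" suffix means 255.255.255.0, "/16" means 255.255.0.0; with /24, 169.254.72.x and 169.254.73.x are treated as different networks and will not talk to each other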
  14. insmod at the booted console so far, just to see if it would work.
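    For anyone following along, the test amounts to something like this (module path is a placeholder):
      sudo insmod /path/to/driver.ko
      dmesg | tail          # check whether the module initialized or threw errors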
  15. The Aquantia card is supported; I'm not sure your card's implementation is identical to the OEM card, but if it presents as the Aquantia device type and ID, it should work on the DS3615 image. I'd start with the 1.02b loader and 6.1.7 and prove it is working. Then if you still want 6.2.1, try the install again. Native driver support on 1.03b is imperfect; some drivers crash. Even when it is working, NVMe only shows up when you go to add cache to an existing storage pool. You won't see it anywhere else. It only works in the DS918 image, so using 10Gbe and NVMe at the s
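    An easy way to verify how the card presents itself, from any Linux shell on that hardware:
      lspci -nn | grep -i aquantia
      # the [vendor:device] IDs shown need to match an OEM Aquantia part for the native driver to bind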
  16. I have compiled drivers and loaded them successfully; driver signing is not the reason that drivers are not working.
  17. Maybe I am not understanding what is being stated. NIC drivers are not in loaders except as part of an extra.lzma. The implication is that DSM 6.2 or 6.2.1 is used with a 1.03 or 1.04 loader, and for DS3615, DS3617 and DS918 DSM distributions, e1000e driver is present in all cases.
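    A quick way to confirm that on a running box, over SSH (DSM keeps its kernel modules under /lib/modules, though layouts can vary between builds):
      ls /lib/modules/ | grep -i e1000     # shows whether the module ships with the DSM build
      lsmod | grep e1000e                  # shows whether it is actually loaded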
  18. You are probably correct; in this particular example you need to switch the partition type to GPT. gdisk can do this, but you'll have to load the binary onto your system, as it's not there now. Again, this problem would never actually happen with a real Synology (since disks cannot change sizes), so they probably don't have a utility/method for this. Please note that changing partition types is a pretty high-risk operation and that you might be better served just creating a new store that is large enough, then deleting your old one. If you want to pursue it, start here: https://askubuntu.co
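    If you do go down that road, a minimal sketch of the gdisk conversion (substitute your own device, and make sure everything is backed up first):
      sudo ./gdisk /dev/sdX
      # gdisk reads the MBR table and converts it to GPT in memory; review with 'p', then 'w' writes the new GPT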
  19. DSM 6.2 Loader

    Updated https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-and-621-on-ds3615/
  20. Bump for the addition of the 6.2.1 driver guide for DS3615. Comparing 6.1.7 and 6.2.1 is interesting, and I'll summarize the findings here:
      • r8168/r8169 support was completely removed from 6.2.1 on this platform
      • Intel i40e and ixgbe drivers were enhanced on 6.2.1 to support newer cards
      • Mellanox mlx4 and mlx5 drivers were enhanced on 6.2.1 to support newer cards
    Comparing DS3615 and DS918, in most cases there continues to be significantly better native driver support in DS3615, especially for 10Gbe+ cards. DSM supports many more 10Gbe+ cards than Synology lists on their compatibility list.
  21. There is nothing wrong with 6.1.7; it's fully supported by Synology and will be for some time. I'm sure you understand you cannot upgrade DSM to 6.2 without using the 1.03b loader. That said, I can't see any reason you cannot use 1.03b and DS3615, as your chipset NIC (Q77 Intel 82579LM/82579V Gigabit Ethernet) is supported by e1000e, which is what is required. I actually have one of these in my office that I was making ready to sell... I might try to do an install on it just to prove it works. But I think you might be making a mistake somewhere.
  22. That's a pretty good sign that the Storage Pool is not using LVM. Also when you df the volume and see that the host device is the /dev/md2 array and not the logical volume, you may conclude the same thing. Therefore you can just follow the plan from the very first post in this thread - in other words, the only step left should be to expand the filesystem. Do you know if you are running btrfs or ext4? If it is btrfs, the command is different and it is preferable to have the volume mounted: $ sudo btrfs filesystem resize max /volume1
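    For completeness, a sketch of the check and the ext4 alternative (assuming /dev/md2 is the array behind /volume1, as above):
      df -h /volume1                 # if the filesystem sits directly on /dev/md2, there is no LVM layer
      sudo resize2fs /dev/md2        # ext4 case: grows the filesystem to fill the expanded array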