flyride

Everything posted by flyride

  1. If you are not using DSM to provide storage redundancy, why are you using it in the first place vs. a Linux fileshare or Docker host? You could theoretically use vSAN to achieve redundancy in the VMware space (again, DSM seems useless at that point). Like IG-88 says, you can provision on two datastores and then JBOD them together in DSM, but there is no redundancy. If you have a 1.5TB volume on a 3TB datastore, and you want a 2TB volume on a 3TB datastore, it's not difficult to add space to your VMDK, then use mdadm and btrfs/ext tools to expand the disk from
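     A minimal sketch of that expansion, assuming a simple single-disk layout where the data partition is /dev/sdb3, the array is /dev/md2, and the volume is mounted at /volume1 — check your own layout with cat /proc/mdstat and df before running anything:

        # Grow the VMDK in ESXi first, then from a DSM root shell:
        parted /dev/sdb resizepart 3 100%      # extend the data partition into the new space
        mdadm --grow /dev/md2 --size=max       # expand the md array to fill the partition
        btrfs filesystem resize max /volume1   # btrfs volumes; for ext4 use: resize2fs /dev/md2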
  2. Nice, I had not seen native NVMe for cache validated yet.
  3. XPEnology is Intel-only; there is no support for ARM platforms.
  4. See this post for your first issue.
  5. I had a similar problem; the only way I could get things to work when ANY NIC had jumbo frames was to use E1000. I finally passed my 10GbE card through to DSM, and that works, but I don't have other VMs that need high bandwidth.
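     For reference, the jumbo-frame setting in question is the vSwitch MTU; on a standard vSwitch it is set from the ESXi shell roughly like this (vSwitch0 is a placeholder for your own switch name):

        esxcli network vswitch standard set --vswitch-name=vSwitch0 --mtu=9000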
  6. As you found out, for some reason 6.1.7 and ESXi don't work with the DS3617 image. 6.1.5 works with DS3617, and DS3615 works with all 6.1.x. I have not used an LSI controller before, but most LSI components seem to be supported, either natively on DS3617 or via extra.lzma. You do have the choice to present storage to DSM via 1) a virtual drive on a datastore, 2) RDM, or 3) a passthrough controller and drives (see the RDM sketch below).
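     For option 2, a sketch of creating a physical RDM from the ESXi shell; the device ID and datastore path below are placeholders for your own:

        ls /vmfs/devices/disks/                      # find your disk's device ID
        vmkfstools -z /vmfs/devices/disks/t10.ATA_____YOUR_DISK_ID \
            /vmfs/volumes/datastore1/dsm/disk1-rdm.vmdk   # -z = physical RDM, -r = virtual RDM
        # then attach disk1-rdm.vmdk to the DSM VM as an existing hard disk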
  7. Make the boot disk a SATA drive (add a SATA controller, delete the IDE controller). The boot drive should be the only drive on that SATA controller. Make sure you are using the ESXi boot option on the loader. Then your boot disk should be hidden.
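     The resulting .vmx entries look roughly like this; a sketch only, since controller/disk numbering and the loader filename depend on your VM (after deleting the IDE device its lines may simply be absent rather than set FALSE):

        sata0.present = "TRUE"
        sata0:0.present = "TRUE"
        sata0:0.fileName = "synoboot.vmdk"
        ide0:0.present = "FALSE"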
  8. - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.1.7-15284
     - Loader version and model: Jun's loader v1.02b - DS3615xs
     - Using custom extra.lzma: NO
     - Installation type: VM - ESXi 6.5.0 U1-7388607
     - Additional comments: REBOOT REQUIRED / No issues with test VM upgrade or production VM with physical RDM and controller pass-through.
  9. https://xpenology.com/forum/topic/12779-tried-a-recovery-choosing-migrate-option-worried-now-bricked/
  10. I have a J4105, which presumably has the same ASMedia controller, and this is not the case. The four ports (Intel x2, ASMedia x2) line up as disks 1-4. Which image are you using?
  11. It works on my Kaby Lake processor, and Skylake/Kaby Lake are basically identical. I don't have a QuickSync model, however, so I can't address transcoding questions.
  12. https://xpenology.com/forum/topic/12764-dsm-62-23739-downgrade-or-possible-solution/?do=findComment&comment=91880
  13. Bad news: if you did not specify a 6.1.x .PAT file during the migration, you downloaded and installed 6.2, now that it has been released across all platforms. This post is consistent with how I would attempt to recover it.
  14. Platform: ASRock J4105-ITX
      Loader: Jun's v1.02b DS916+
      DSM: DSM 6.1.7-15284
      LAN: Realtek RTL8111H Gigabit
      SATA3: 2x Intel via chipset, 2x ASMedia ASM1061 (hot plugging supported on all ports)
      Comments: This is the Gemini Lake update of the ITX motherboard ASRock has been selling for years, and it runs XPEnology baremetal without special configs or drivers. Pros: 4x 2.5 GHz cores and up to 32GB RAM (ignore the manual's 8GB limit). Hardware-accelerated video (Intel UHD 600) and AES-NI hardware encryption. On-board WiFi option. Only con is lack of expansion - only on
  15. Just built a machine with the latest Gemini Lake rev of this board, the J4105-ITX. It works perfectly baremetal, no drivers required. A pretty nice XPEnology setup for a 4-bay NAS: 4x 2.5 GHz cores and 4 SATA ports for $85 USD. Expandable to 32GB RAM (ignore the false 8GB limit in the manual).
  16. I'm using a Kaby Lake Xeon on C236 chipset, and it works fine on both baremetal and with ESXi. Given that CFL is basically Kaby Lake and Z370 is basically Z270, I can't imagine you would have trouble.
  17. It would have to be a SN/MAC from a DS3615xs, and one that has not been used by anyone else prior. Also, see this: https://xpenology.com/forum/topic/9392-general-faq/?do=findComment&comment=82390
  18. I'd wait at least until the loader is officially released, since there are other code bases that support 6.2. There should be good demand for the trusty DS3615xs, although it has the 3.10 Linux kernel, not the 4.4 kernel of the DS918+.
  19. Yes, you should be able to migrate. Have a backup. As a newbie to XPEnology installs, it would be useful for you to get another hard drive and do some test installation/migrations before you move forward on your production disks.
  20. Verify you have a 6th gen Intel processor or later: Skylake, Kaby Lake, Coffee Lake, Apollo Lake, Denverton, Gemini Lake. The current alpha loader is built on an Apollo Lake binary, so it may have some processor-specific instructions.
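     A quick way to check which CPU you have from any Linux shell, assuming nothing beyond a standard /proc:

        grep -m1 'model name' /proc/cpuinfo   # e.g. Intel(R) Celeron(R) J4105 CPU @ 1.50GHz

     Then match the model number against Intel ARK to confirm the generation.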
  21. I cannot install on a drive on a secondary controller, but you probably knew that. I also tried substituting a secondary/alternate controller on a running system, which also did not work. The alpha tutorial indicates this, but the synoboot drive is not hidden (and you knew that too). At some point, if there are things you'd like the community to test, please post. Bravo on a great alpha effort.
  22. Installed fine on ESXi 6.5 on a Kaby Lake Xeon. I tried setting up a VMware NVMe virtual disk, but DSM does not recognize it for SSD Cache. It can see it using the command line tools, however (see below). It may be an issue with an enforced approved-equipment compatibility list. Will poke around a bit and report back.
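     By "command line tools" I mean ordinary block-device checks over SSH; a sketch, with the device name assumed:

        ls /dev/nvme*            # a device node exists if the kernel sees the drive
        fdisk -l /dev/nvme0n1    # reports the capacity even though the GUI won't offer it for cache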
  23. Rereading the later posts - you did identify your drive as an mSATA drive. In any case, you should be able to use that (or any) drive as an SSD cache with ESXi by virtualizing the storage or using physical RDM.
  24. You did not say what type of Kingston drive you are trying to use. If your drives are NVMe, they won't work on a baremetal installation. None of the currently supported XPEnology hardware options (DS3615xs, DS3617xs, DS916+) can support an NVMe drive. If you have an M.2 SATA drive, I'm pretty sure it would work. The M2D17 only provides M.2 SATA interfaces, so it will probably also work (but don't try to install NVMe devices into it). Back to NVMe: I was able to make NVMe drives work under ESXi as physical RDM or virtualized (both options work). See https://xpenology.com/