Everything posted by flyride

  1. DSM 6.2 on DS918 as compiled by Synology supports NVMe drives for cache only, not for installation/boot/data, which must be SATA. But I don't think that's the issue here; it seems you may not fully understand how devices are represented in ESXi. Your NVMe drive appears to be attached to ESXi as a datastore (which is fine). Then you are trying to configure a virtual SCSI controller, which won't work with DSM 6.2. I'm not sure whether you created a virtual data disk or not, but you need something to attach to the controller. You can definitely cre...
  2. Patchlevel? I'm not sure why you don't have it, but it is required for 6.2.x, so you will need to solve that.
  3. This is an ESXi error, nothing to do with XPEnology. You need to use a virtual SATA controller for the 1.04b bootloader, not SCSI. Set up SATA Controller 0 with the bootloader as disk 0, and set up a second SATA Controller 1 with your data drive(s); a .vmx sketch follows below.
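     A minimal .vmx sketch of that controller layout, assuming disk files named synoboot.vmdk and data1.vmdk (both names are placeholders for your own files; you would normally get the same result by adding SATA Controller devices in the ESXi VM editor rather than hand-editing the .vmx):

       sata0.present = "TRUE"
       sata0:0.present = "TRUE"
       sata0:0.fileName = "synoboot.vmdk"
       sata1.present = "TRUE"
       sata1:0.present = "TRUE"
       sata1:0.fileName = "data1.vmdk"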
  4. You will initially need a DHCP server for installation. Once installed, you can change to a static address with no problem.
  5. This part is true. This part is not true: 1.03b/DS3615 works fine on 6.2.1 as long as you have the e1000e NIC set.
  6. Thanks for posting. This is probably applicable to any of the Apollo Lake/Gemini Lake-based products.
  7. You can definitely upgrade from 6.1 to 6.2.x without building a new VM, but you will need to replace your boot loader with 1.03b or 1.04b. As soon as you do that, it will prompt you to install DSM 6.2.x and then upgrade with your existing VM information and drives. Your risk is that your NIC or controllers aren't supported; if you don't passthrough anything, that shouldn't be a problem. But you have a virtualized environment already: build up a test system exactly like your current system, write down your upgrade procedure, and test it so you know your procedure will work. And even if...
  8. I can see that by reading the posts in this thread. The answer to your problem is also in the posts in this thread, if you will kindly do the same.
  9. I think most HP Microserver users are staying with 1.03b and DS3615xs DSM, along with a PCIe Intel NIC and the on-board NIC disabled. 1.04b is only for DS918, which requires a Haswell CPU. FMI: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  10. I believe that motherboard uses Realtek NICs, so you will need to buy an Intel PCIe NIC and disable the Realtek NIC in the BIOS before updating to 6.2.1. FMI: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  11. Download the loader again and write a fresh copy to your USB.
  12. This suggests you are trying to reuse a USB loader that was used (or was attempted to be used) for another XPEnology 6.2.1 installation. For example, you accidentally clicked the "install the latest DSM" option, it installed, and then of course it hung, since that loader doesn't support 6.2.x. Download the loader again and write a fresh copy to your USB, as sketched below.
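     A minimal sketch of rewriting the stick from Linux, assuming the image is named synoboot.img and the USB stick appears as /dev/sdX (both are assumptions; verify the device name first, since dd will overwrite whatever it is pointed at):

       lsblk                                                  # confirm which device is the USB stick
       sudo dd if=synoboot.img of=/dev/sdX bs=1M conv=fsync   # destructive: double-check /dev/sdX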
  13. Try the DS3615xs version. You may need to regenerate your boot loader USB too.
  14. While modules are in fact signed, I do not see any evidence that kernel signature enforcement is turned on. I can load and unload my own compiled modules into 6.2.1 just fine; a quick way to check this yourself is sketched below.
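     A minimal check, assuming shell access and a trivial module you compiled yourself (hello.ko is a placeholder name; the sig_enforce node only exists on kernels built with module signing):

       cat /sys/module/module/parameters/sig_enforce   # "N" means unsigned modules may load
       insmod ./hello.ko                               # loads if enforcement is off (expect a taint warning, not a failure)
       rmmod hello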
  15. If by settings you mean Control Panel, that is a static report and is cosmetic only. DSM will use up to 8 non-hyperthreaded cores or 4 hyperthreaded cores on all versions supported by XPEnology. See cat /proc/cpuinfo for what is actually running; the "processor" field numbers the threads starting from 0 (it will report 7 for 8 threads). Alternative command: nproc --all. Both are sketched below.
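     A minimal sketch of both checks over SSH (nothing assumed beyond a stock DSM shell):

       grep -c ^processor /proc/cpuinfo   # counts logical CPUs, e.g. prints 8 for 8 threads
       nproc --all                        # same count, one command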
  16. GNU GRUB version 2.02~beta2-36ubuntu3.9

        +--------------------------------------------------------------+
        | DS3617xs 6.1 Baremetal with Jun's Mod v1.02-alpha            |
        |*DS3617xs 6.1 Baremetal with Jun's Mod v1.02-alpha Reinstall  |
        | DS3617xs 6.1 VMWare/ESXI with Jun's Mod v1.02-alpha          |
        +--------------------------------------------------------------+
  17. You can use the 3615 image with your CPU with no negative effects. 3617 seems to have some problems in unpredictable situations.
  18. @Bearcat, you are correct: e1000e is the driver for 8086:105E in all the driver records I have. I'm not sure why I came to that answer, but I apologize for the inaccuracy. The other comments about the e1000 card's lack of function on DS3615 6.2.1 remain accurate. @bluemax, I do suggest you try DS3615 instead of DS3617.
  19. I understand; our information is collected by the community and is not perfect. Some of the collective information posted has errors, and it cannot easily be corrected once posted. And nobody has all the possible hardware, or the time, to test everything. EDIT: I seem to be contributing to the imperfect knowledge; see the update to my previous post. My own test VM on 1.03b running DS3615 6.2.1 consistently fails on e1000 emulation but not e1000e. I did look through the same thread you cited to see if other patterns were emerging, but the majority of the successful reports are...
  20. /dev/md0 is the DSM system partition and is an n-disk RAID1 (n = the number of disks in your system). /dev/md1 is the swap partition, also an n-disk RAID1. Your data arrays are /dev/md2, /dev/md3, and so forth. The [16/n] (or [12/n] in the case of earlier loaders) notation is normal and is linked to maxdisks; even on a real Synology box you will at least see underscores where empty slots are. See the sketch below.
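     A minimal way to see this yourself, as root over SSH (assuming /dev/md2 is your first data array, which holds on a default install):

       cat /proc/mdstat           # md0 = system, md1 = swap, md2 and up = data arrays, with the [16/n] notation
       mdadm --detail /dev/md2    # per-array detail for the first data volume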
  21. Maxdrives=16 is already overriding synoinfo.conf. This is no different from the Maxdrives=12 we have had up until now, with no more or less risk due to upgrades. You'd have to get Jun to speak to its user configurability, however; I suspect it's not easy to do, as the changes in grub affect arguments passed to the kernel at boot time, and Maxdrives is not a kernel parameter. A way to inspect the resulting values is sketched below.
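     A minimal sketch for inspecting what DSM ended up with (the file paths are the standard DSM locations; treating maxdisks and internalportcfg as the keys of interest is an assumption based on the usual edits):

       grep -E "^maxdisks|^internalportcfg" /etc.defaults/synoinfo.conf   # defaults applied by the loader
       grep -E "^maxdisks|^internalportcfg" /etc/synoinfo.conf            # running copy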
  22. Maxdrives=16 via Jun's loader 1.04b only (so far). There was a forum request to have that change implemented.
  23. e1000e is required; e1000 currently is not working as a primary network interface on Jun 1.03b with DS3615/17 and 6.2.1. The relevant .vmx line is sketched below.
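     A minimal .vmx sketch for the NIC type, assuming the first virtual NIC (ethernet0, an assumption; in the ESXi VM editor this is the Adapter Type selection on the network adapter):

       ethernet0.virtualDev = "e1000e"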
  24. I don't think anyone can guarantee future-proofing on any platform, except perhaps ESXi. I'll respond to your hardware query by PM.