flyride

Members
  • Content Count: 1,843
  • Days Won: 96

Everything posted by flyride

  1. Try the DS3615xs version. You may need to regen your boot loader USB too.
  2. (DSM 6.2 Loader)
    While modules are in fact signed, I do not see any evidence that kernel signature enforcement is turned on. I can load and unload my own compiled modules into 6.2.1 just fine.
  3. If by "settings" you mean Control Panel, that is a static report and is cosmetic only. DSM will use up to 8 non-hyperthreaded cores or 4 hyperthreaded cores on all versions supported by XPEnology. See cat /proc/cpuinfo for what is actually running. The "processor" field numbers threads starting at 0, so it will report 7 for 8 threads. Alternative command: nproc --all
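     A quick way to verify over SSH (both commands are standard Linux, nothing XPEnology-specific):

         grep ^processor /proc/cpuinfo    # one line per thread, numbered from 0
         nproc --all                      # prints the total thread count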
  4. (DSM 6.2 Loader)

     GNU GRUB version 2.02~beta2-36ubuntu3.9
     +-------------------------------------------------------------+
     | DS3617xs 6.1 Baremetal with Jun's Mod v1.02-alpha           |
     |*DS3617xs 6.1 Baremetal with Jun's Mod v1.02-alpha Reinstall |
     | DS3617xs 6.1 VMWare/ESXI with Jun's Mod v1.02-alpha         |
     +-------------------------------------------------------------+
  5. You can use the 3615 image with your CPU with no negative effects. 3617 seems to have some problems in unpredictable situations.
  6. @Bearcat you are correct and e1000e is the driver for 8086:105E for all the drivers I have records for. Not sure why I came to that answer but I apologize for the inaccuracy. The other comments about e1000 card lack of function on DS3615 6.2.1 remain accurate. @bluemax, I do suggest you try DS3615 instead of DS3617.
  7. I understand; our information is collected by the community and is not perfect. Some of the collective information posted has errors, but it cannot be easily corrected once posted. And nobody has all the possible hardware, or the time with which to test it. EDIT: I seem to be contributing to the imperfect knowledge; see the update of the previous post. My own test VM on 1.03b running DS3615 6.2.1 consistently fails on e1000 emulation but not e1000e. I did look through the same thread you cited to see if there were other patterns emerging but the majority of the successful reports are
  8. (DSM 6.2 Loader)

     /dev/md0 is the DSM system partition and is an n-disk RAID1 (n = the number of disks in your system).
     /dev/md1 is the swap partition and is also an n-disk RAID1 (n = the number of disks in your system).
     Your data arrays are /dev/md2, /dev/md3 and so forth.
     The [16/n] (or [12/n] in the case of earlier loaders) notation is normal and linked to maxdisks. Even on a real Synology box you will at least see underscores where empty slots are.
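     If you want to confirm this on your own box, a minimal check over SSH (mdadm is included with DSM; run as root):

         cat /proc/mdstat            # lists md0 (DSM system), md1 (swap), md2+ (data arrays)
         mdadm --detail /dev/md0     # shows the n-disk RAID1 membership described above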
  9. (DSM 6.2 Loader)
     Maxdrives=16 is already overriding synoinfo.conf. This is no different from the Maxdrives=12 we have had up until now, with no more or less risk due to upgrades. You'd have to get Jun to speak about its user configurability, however. I suspect it's not easy to do, as the changes in grub affect arguments passed to the kernel at boot time, and Maxdrives is not a kernel parameter.
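     To see the effective value on a running system, a simple check (these are the stock DSM config paths; /etc.defaults holds the copy that upgrades restore from):

         grep maxdisks /etc.defaults/synoinfo.conf /etc/synoinfo.conf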
  10. (DSM 6.2 Loader)
    Maxdrives=16 via Jun's loader 1.04b only (so far). There was a forum request to have that change implemented.
  11. e1000e is required; e1000 currently does not work as a primary network interface on Jun 1.03b, DS3615/17, and 6.2.1.
  12. I don't think anyone can guarantee future-proofing on any platform, except perhaps ESXi. I'll respond to your hardware query via PM.
  13. 8086.105E is the Intel PRO/1000 PT card, which is supported by the "e1000" driver set. This is different from "e1000e", which is the only one that appears to work on 1.03b/DS3615/6.2.1. EDIT: The above seems incorrect, and I can't replicate the research that led me to state it. 8086.105E is supported on e1000e. This doesn't explain why it isn't working for @bluemax. I think the problem is that there are many posts saying to "buy [any] Intel NIC" without considering the above. Acquire and use an e1000e card (i.e. an 82574L-based card) and you should be good. If
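     On a generic Linux box with the module present, one way to check which device IDs a driver claims (the grep pattern targets the 8086:105E ID in question):

         modinfo -F alias e1000e | grep -i 105e
         # a line like pci:v00008086d0000105E... means the e1000e module lists this device ID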
  14. I don't have a microserver; you may not be viewing signatures or you would see this:
      VM: ESXi 6.5 Jun 1.02b DS3615xs / DSM 6.1.7-15284U3 / Supermicro X11SSH-F / E3-1230v6 / 64GB / Mellanox ConnectX-3 dual 10GbE passthrough / Intel P3500 NVMe RAID 1 (physical RDM) / WD Red RAID 10 (chipset SATA controller passthrough) / Samsung PM961 NVMe (ESXi datastore)
      Baremetal: Jun 1.04b DS918 / DSM 6.2.1-23824U4 / ASRock J4105-ITX / 4GB
  15. (Hp n40l)
    lspci -k can be helpful here
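     For anyone unfamiliar with the command, run it over SSH; no DSM-specific flags are needed:

         lspci -k
         # each device is followed by "Kernel driver in use:" and "Kernel modules:",
         # which tells you whether a driver actually bound to the NIC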
  16. 6.2.1 DS3615 (loader 1.03b) does not work with vmxnet3, only the Intel e1000e virtual NIC.
      6.2.1 DS918 (loader 1.04b) does work with vmxnet3, but you will need a Haswell+ processor.
      I realize it's a lot to keep track of, but the information is out there: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
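      For ESXi users, a minimal sketch of the corresponding guest .vmx setting (pick one; "ethernet0" is illustrative, and the same choice appears in the vSphere UI as the adapter type):

          ethernet0.virtualDev = "e1000e"    # required for DS3615/17 on loader 1.03b
          ethernet0.virtualDev = "vmxnet3"   # works for DS918 on loader 1.04b (Haswell+ host)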
  17. That S/N and MAC won't do you any good, sorry. Loaders have only been built up for reference platforms. You probably should review the following links: https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/ https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ https://xpenology.com/forum/topic/7848-links-to-loaders/
  18. Interesting. DS3617 has had historical problems on DSM 6.1.x (for example, ESXi and 6.1.7 don't work on DS3617 but work fine on DS3615). I've read that DS3617 support was not intended but the loader worked, so it was released as a combo. I'll have to load up a test DS3617 install and see what driver version it has.
  19. Nothing special. A Haswell or later processor is required in order to run DS918.
      Do a standard VM install; add a 2nd SATA controller to connect vdisks or RDM devices, or alternatively pass through a controller.
      On my test instance, I need DiskIdxMap=0C00 to hide the boot loader and get the SATA vdisks to show up in the right place.
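      As a sketch, that lands in the loader's grub.cfg roughly like this (the sata_args variable name follows Jun's stock file; an existing line may already carry other arguments that should be kept):

          set sata_args='DiskIdxMap=0C00'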
  20. I am running an XPEnology instance on ESXi right now with 1.04b DS918 and it works with vmxnet3
  21. Each hardware manufacturer registers a PCI vendor/device ID before a device is made available for sale. It's how drivers can identify the hardware with which they are supposed to work, so a driver essentially carries a list of the hardware IDs it can work with. The above is an extract of that information from the drivers Synology included in DSM 6.2.1 for DS918. You can see device IDs within Synology or Linux by using lspci. I'm not sure what the MacOS command would be to do the same, but I am sure there is an equivalent. What I'm suggesting to @hoidoi is to verify t
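      Under Linux or DSM, the lookup looks like this (the grep filter is just a convenience):

          lspci -nn | grep -i ethernet
          # prints each NIC with its [vendor:device] pair, e.g. [8086:xxxx] for Intel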
  22. You need the "right" NIC. Here are the device ID's of the Intel NICs that are presently supported on DS918 6.2.1:
  23. Your discovery is consistent with this report here. Note that 6.2.x doesn't like creating storage pools when the devices are attached to a SAS controller - but once they are built they can be moved to it and seem to work fine. Probably not compatible with most users' expectations of reliability and supportability.
  24. DS918 has direct support for far fewer devices than DS3615. For example, there are over 200 distinct Intel NIC types supported by DS3615 under DSM 6.1.7. On DS918 6.2.1, that number is 135. And it isn't just the older boards that are missing; in some cases Synology regressed to an older driver package and dropped support for new NICs they were not going to use in their platform. What's worse is that Intel releases multiple cards (multiple PCI device IDs) under a single nomenclature, so you can have two of the "same" card, and one works and the other doesn't.
  25. You mention 4K H.265 transcoding. What's really happening here is probably 4K H.265 decoding + H.264 encoding at a low-quality profile. I don't think there is an H.265 display device that doesn't also support H.264. So if your expectation is that a huge CPU is required for H.265 encoding (it is), it doesn't apply to real-time transcoding. Regardless of the method (hardware vs. software), the encoding profile needed to produce an H.264 stream in real time is of comparatively low quality relative to the source. You are always better off streaming a source file directly to the device that can accept it (
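      To make the decode + encode flow concrete, here is a hedged ffmpeg sketch of the same pipeline using Intel VAAPI hardware transcoding (file names, resolution, and bitrate are placeholders; this is not the actual command Video Station runs):

          ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 \
                 -hwaccel_output_format vaapi -i movie_4k_hevc.mkv \
                 -vf 'scale_vaapi=w=1920:h=1080' \
                 -c:v h264_vaapi -b:v 6M -c:a copy movie_1080p_h264.mp4
          # decodes 4K HEVC on the iGPU, downscales, and re-encodes to 1080p H.264 in real time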