XPEnology Community


Community Answers

  1. IG-88's post in DSM7 Compatible SATA PCIe Controller was marked as the answer   
    the picture shows a pcie 1x card (now?), if that's real then only one of the two pcie lanes would be used and that's not desirable
    the one i have looks the same but has a pcie 4x connector (there is no 2x defined in the standard, at least not as a slot, but i've also seen cards with a 2x connector)
    the pcb has no vendor but a number marking PCE6SAT-A01, VER006S
    mine is like this

    and i only tested it briefly, it's been lying around for over a year as i was using jmb585/582 (and an older pcie 1x marvell) in my normal systems
    so i can't say much about it other than that it worked for testing with a few different disks, but i guess by now there should be more reports about asm1166 based cards, so use a search engine to get more information
  2. IG-88's post in DSM 6.2.3 Update 3 on HP Server was marked as the answer   
    that 2GB is a hardware limit of this controller chip, nothing can fix it, no driver, no firmware; use sata ahci or an lsi sas2008 based controller (IT firmware), 2308 should work too (at least with 3617 and 3622)
  3. IG-88's post in M.2 to SATA card, it will work? was marked as the answer   
    i tested a jmb585 m.2 card nearly 2 years ago and it never ran stable, and there are mechanical issues too as the m.2 boards are very thin: forget about unplugging or plugging cables, you might break or stretch the board beyond its limits (at the very least you need to put something below the board to limit how far it can bend downward). i tried to have all 5 sata cables connected and even that put a lot of strain on the board, and i had to hold the sata cables in place with zip ties on the cpu cooler
    i ended up with an m.2 flex cable to pcie slot solution and a normal (reliable, cheaper) pcie sata card (also jmb585). it might be preferable to have a 10g nic in that extra slot, because errors on network traffic might be easier to detect/fix and have lower impact; disks doing strange things because of pcie problems might be more dangerous
    no need for that: as jmb585/asm1166 are ahci compatible, the ahci driver in synology's kernel will handle them
    we had some discussion about pcie 3.0 and newer sata chips and afaik there is nothing newer/better than the jmb585 and asm1166 (both are pcie 3.0 and "only" support two lanes, and i've not seen affordable cards combining two of these with a pcie multiplexer for better use of 4 lanes in a slot and more sata ports - that's more the domain of sas controllers, and as sas is downward compatible with sata most people end up using a more capable sas controller)
  4. IG-88's post in best image to use for 9th gen intel setup. was marked as the answer   
    the i5-9500t only has 6 cores and no HT, so it's within the max of 8 for 918+, you lose nothing, and its iGPU (0x3e92) is supported by dsm's i915 driver, the only thing to do is to copy the firmware files for i915 when using 7.0
    there is no "extension" for the redpill loader to take care of that, synology only ships the firmware files needed for its own hardware in the dsm *.pat file (but the driver is not limited in any way, if the right firmware is present it will work with all hardware that's supported by the driver)
  5. IG-88's post in 918+ Can Not Create Hot Spare Drives "Operation Failed" was marked as the answer   
    21st of November 2014 is the date code of the intel driver in kernel 3.10.105, which hints that hardware released much later (like 2016) will not be covered
    only hardware already planned for release in 2015 might work (and from the source it seems to be max. skylake, which came out in 2015)
    from gut feeling about what was wrong with the i915 driver i tried to build for 3615/17: it was about missing parts in the kernel of dsm, and as we can't make a new kernel we can only use drivers (aka kernel modules) that work with the kernel synology provides; the logical result is that it never even got tested because of this
    918+ is different as it already comes with i915 drivers, and jun backported a newer i915 driver for 918+ making gpu's up to coffee lake work; later, with 6.2.3, synology backported a newer i915 driver themselves as they needed a gemini lake capable driver for the x20+ models
    if it's 6 + 2 then your lsi's usually start with port 9 in your map in the dsm gui (maybe 7 if the asmedia is found later than the lsi's, but that's unlikely: usually it's ahci first as it's part of the kernel, and then the lsi's, as their driver is loaded later in the boot process); if you get them really disabled, the disks connected to the lsi's will start lower
    so you can look at the disk map in the web gui to see where your lsi connected drives start (if you don't have anything connected to the 8 onboard sata's)
    btw. the asm1061 has just one pcie 2.0 lane so it's max. 500MB/s for both ports together, so best try to disable it as it's the weakest part (the 6 from the chipset should be able to work at full sata3 speed); also the asm1061 can have problems when vt-d is enabled in bios
    as it's unclear how it's configured atm, i'd say a dmesg will give a better picture, as we will see all controllers found and see how the disks are numbered/counted
    if you look at the dsm gui you will usually see the number of drive slots the system is configured for by synoinfo.conf, 16 is the default for 918+
    i don't know how you have done the 20 drives, it needs to be done the right way or it might not work properly, so if it's not default anymore then specify what values you used in synoinfo.conf
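    the slot arithmetic above can be sketched like this (a rough illustration only, assuming the enumeration order described in the post: chipset ahci first, then the asm1061, then the lsi's):

```shell
# rough sketch of dsm slot numbering; the enumeration order (chipset ahci,
# then asm1061, then lsi) is taken from the post, not guaranteed on every board
ONBOARD=6    # chipset sata ports
ASMEDIA=2    # asm1061 ports
echo "first lsi slot: $(( ONBOARD + ASMEDIA + 1 ))"
# with the asm1061 disabled in bios the lsi disks shift down
echo "first lsi slot (asm1061 disabled): $(( ONBOARD + 1 ))"
```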
  6. IG-88's post in Network interface and bond Error was marked as the answer   
    you can reset the whole dsm config to how it was when starting the 1st time (2nd option in the grub menu of the loader) or you mount the system partitions (raid1) and restore the files
    the default files are in the dsm *.pat file used for installing
    you can open it with 7zip, in there you open hda1.tgz, then hda1, and in there
    ifcfg-eth0 to ifcfg-eth7
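    for reference, a default ifcfg file after a reset is just a minimal dhcp config; the exact contents below are an assumption for illustration, check the files extracted from your own *.pat:

```
# assumed shape of a default ifcfg-eth0 (illustrative; verify against your *.pat)
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
```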
  7. IG-88's post in HP MicroServer GEN8 Migation To 2U was marked as the answer   
    when it comes to buy hardware its good to know about the limits you are up to with xpenology
    (that's based on what's usable now; we don't know if there will ever be a hack for dsm 7.0 and what hardware base it will use, so buying for the future is futile or at least risky and might impose limits now that you might never overcome in the next two years)
    1. cpu cores, HT "cores" count as normal cores
    3615 and 918+ - 8 cores
    3617 - 16 cores
    BUT only 918+ can use the hardware transcoding of the iGPU (intel quick sync video)
    so if you buy a cpu with 6 cores with HT, that's like 12 cores to DSM, and then only 8 cores would be used in 918+, and usually half of those would be HT cores, so
    you can estimate the performance with a ht core at ~25% of a normal/full core: a 6 + ht would be the equivalent of 7.5 real cores, but when only 4 real + 4 ht can be used with 918+ you have the equivalent of 5 cores, or, when disabling HT in bios (if that is possible, you should check that before buying a board), 6 cores
    only 3617 could use the full 6 + ht performance (but has no hardware transcoding)
    sizing needs to be mindful of what the system is supposed to do, and bigger is not better, or at least you waste some money as you lose performance to disabling ht and end up at a lower grade cpu, so save that money
    there is also a limit to what the driver of the 918+ supports as iGPU for transcoding
    here too, bigger or newer is not better, and in most cases the few percent for a faster or newer cpu are not that important
    so a low tier 9th gen cpu could be a good choice compared to the biggest 10th gen cpu
    in that example the i915 driver works ootb with 9th gen low tier (9100/9300) and with the high tier 10th gen there is no way at all to get it working (for now)
    high tier 9th gen and low tier 10th gen can be persuaded with a manually patched driver (copying a patched driver needs to be redone after most updates)
    i would see two 4x pcie 3.0 slots as the minimum (when 10G is not onboard; they can also be 8x or 16x, but most use cases only have 4x cards) and 2x M.2 if possible (when there are three 4x pcie slots, a 2nd m.2 can be realized with a pcie to m.2 adapter card)
    so a mini-itx is no good choice
    when playing safe it's 5 additional sata ports per pcie 4x slot and a 10G nic per 4x slot
    when the board comes with 6x sata and you add a 5 port card then there are enough ports, and there is still one or two pcie 1x slots left for a 2 port sata card
    so for 10th gen cpu this would be your options
    (i also added a serial port as it's a great option to get forward when dsm is not booting, as the console output is switched to serial ports; no use for a monitor)
    same for 8/9th gen cpu's
    as long as you are not planning extensive vm usage the MHz is not that important, you won't see a measurable impact on a system using normal hdd's (maybe on an all flash system with 64/128GB RAM for caching?), 16GB leaves options for VM use and is used as cache (but that's moot with a 1GBit nic, the impact of more ram can be seen with a 10G nic)
    F is without iGPU, so no hardware transcoding; there is not that much price difference and you would keep that option, but if you're sure about not needing it in the future you can buy an F version and plan for 3617 with max. 16 cores
    if you buy a 10th high tier there might be an option to use it in the future but that's not sure (at least not with dsm)
    my choice is still 9100/9300 as it works ootb and it's not too big for 918+
    if cpu heavy vm use is planned then a bigger cpu and esxi is an option, then using dsm in a vm with fewer cores to stay within the limits
    but i guess if you were thinking about a RS1219+, a 9300 should be ok
    1 ssd = read cache and that's not very useful in most home use scenarios
    i'd scrap that for now, a 1GBit network as you plan it is just ~110MB/s and a normal raid with 8-16TB disks should do that without an ssd cache
    keep the option of two m.2 ssd's on the board and you can add that later when you need it, like when you have a 10G network and the ~400-500 MB/s that normal hdd's can handle without cache is not enough
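    the core-equivalence estimate above can be put into numbers (a sketch using the ~25% rule of thumb from this post; the real HT gain varies by workload):

```shell
# estimate usable cpu power, counting an HT "core" as ~25% of a real core
# (25% is this post's rule of thumb, not a measured value)
calc() { awk "BEGIN{printf \"%.1f\n\", $1 + $2 * 0.25}"; }
echo "3617, 6 real + 6 ht threads: $(calc 6 6) core equivalents"
echo "918+, capped at 8 threads (4 real + 4 ht): $(calc 4 4) core equivalents"
echo "918+, ht disabled in bios: $(calc 6 0) core equivalents"
```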
  8. IG-88's post in ASM 1061 support on 6.2? was marked as the answer   
    that's ahci and works in 6.x without any added drivers, it's built into the kernel
    you will not get anything like console output on the monitor, the local console is switched to the serial port, grab a null modem cable and putty if you can
    you might start to read the faq and tutorial section, there are lots of changes in 6.x compared with 5.2
    especially  min. cpu support for 918+ and csm/legacy requirement (no uefi)  for 3615/17 (csm setting in bios and choosing the non uefi usb boot device)
    please don't assume anything like a driver issue as long as you can't prove the point, it's misleading for you and for others trying to help you
    also it's kind of weird to expect us to know what hardware, loader and dsm type you used, you need to tell us
  9. IG-88's post in CPU/Core Features on bare metal was marked as the answer   
    this cpu (iGPU) is not supported by synology's (or jun's) i915 driver
    i did a crude hack to patch it in (-> link) but there was not much feedback, a 9th gen cpu might be a better choice (at least as long as there is nothing to say about dsm 7.0)
    hevc/h.265 should be no problem with 9th gen cpu's
    the best solution might be to disable HT in bios (if that option is available); maybe it is possible to add something in grub.cfg as a kernel parameter to disable HT support
    HT "cores" usually add 15-20% of a real hardware core, so without them you lose about 1.6 core equivalents in performance but can use 918+ with the best possible performance
    3617 does not have i915 driver support and with its old kernel (3.10.105) in dsm 6.2 it does not look like anyone will backport this for us as driver
    with dsm 7.0 3617 will also get a 4.4.x kernel (3615 will still be 3.10.x) so it might be possible to have the same i915 driver as 918+ uses in dsm 6.2.3
    BUT 7.0 would need a new hack and loader (no word yet if anything is in the works for the 7.0 beta) and also kernel sources synology has not released (pretty sure we will have to compile the i915 driver ourselves for 3617); the last one is important, and taking into account that synology published the 6.x (non beta) kernel source 2.5 years after release (with some pressure from outside), i would not bet on anything in that direction. more likely it would be a thing to have nvidia support added (not usable with synology's video station but plex could use it; i can't test that as i don't have a plex license and could not find anything that works baremetal (non docker) with nvidia to test it)
  10. IG-88's post in DS3617 not see all disk was marked as the answer   
    yes, Model: PCE6SAT-M01
    -> in description
    marvell 88SE92xx (4 port ahci) + ASM1092 (port multiplier)
    "ASM1092 is Serial ATA port multiplier controller, supporting one host ports and two device drives"
    so forget about it as a 6 port card with dsm, it's 4 ports you can use, and don't forget it being pcie 1x, it will result in bad performance
  11. IG-88's post in 2 Diffrent Controllers, what to change SataPortMap to? was marked as the answer   
    you usually don't have to, but if you want to ...
    it's the number of ports of every controller in the system in a row, one after another, so 62 in your case
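    as a hedged illustration, for 6 onboard ports followed by a 2 port card the loader's grub.cfg line would look like this (the exact variable name and surrounding arguments depend on your loader version, check your own grub.cfg):

```
# illustrative grub.cfg fragment (jun's loader style); verify against your loader
set sata_args='SataPortMap=62'
```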
  12. IG-88's post in Advice in Virtuabox : DS918 or DS3617 was marked as the answer   
    no, it will not make a difference in that scenario, the main difference is that 3617 supports up to 16 cpu threads/cores and 918+ "only" 8
    maybe you will see a difference in 3rd party software packages, as the 3615/17 has been around longer and some older packages were not renewed to show up on 918+
  13. IG-88's post in USB and win32disk imager in w10 and win8 was marked as the answer   
    if you read the tutorial, there is osfmount used to alter the grub.cfg
    the image for usb has more than one partition in it, the 1st contains grub.cfg, the 2nd contains extra.lzma and the kernel files for dsm
    win10 was also able to show the 1st partition until the creators update, atm win10 only mounts the 2nd partition
    if you really have to, you can read the usb with "Win32DiskImager 1.0" (set the option "read only allocated partitions") into an image file, use osfmount to alter it and then write it back to usb
    it's also possible to use linux to mount and write the data on the usb, it's just win10 behaving that way; linux has nothing to complain about regarding the file system on the 1st partition
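    a sketch of the linux route: find the 1st partition's start sector and mount the image with a byte offset (the start sector 2048 below is an assumption, read the real one from fdisk -l):

```shell
# compute the byte offset of the image's 1st partition for a loop mount;
# START and SECTOR are assumptions, read them from: fdisk -l synoboot.img
START=2048; SECTOR=512
OFFSET=$(( START * SECTOR ))
echo "offset=$OFFSET"
# then, as root (not run here):
#   mount -o loop,offset=$OFFSET synoboot.img /mnt/boot
#   ... edit /mnt/boot/grub/grub.cfg ...
#   umount /mnt/boot
```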
  14. IG-88's post in XPEnology on bare metal + VMM or ESXi with XPEnology was marked as the answer   
    that depends on what you are planning to do: if it's more a nas and server role (use docker instead of vm's) then it's baremetal
    if it's more about experimenting with virtualization, like training for a professional environment, and the nas is just a small part, then it's esxi
    it might also depend on how you plan to do backups for your nas, or whether the hardware in question supports vt-d and you want to use that to have the whole controller with its disks inside the dsm vm (or do you accept having virtual disks on esxi to use in dsm)
    also a factor is the user base of the nas: if it's the whole family then a less experimental dedicated baremetal box would be the choice
    virtualization can also be handled on a desktop with virtualbox, and if the nas storage is connected over a 10G network you can use this with vm's while keeping good performance (in most cases 1-2TB of local ssd storage on that desktop can do better)
  15. IG-88's post in Can's see my sever on network was marked as the answer   
    just an assumption, as an extra.lzma package for 6.2.2 918+ is already available, the 3615/17 one is work in progress and not public yet
    never mind, i only had a quick glance on a picture and overlooked the two black sata connectors
    yes that's a 918+ capable cpu
    because the driver packages are not well tested yet and i haven't seen much feedback
    it's more meant as a test install to see if network and storage are working; if it's just sata onboard and ahci then there is no problem at all with storage
    it's pretty easy to test with a fresh usb and an old empty disk
  16. IG-88's post in Recommended PCIE x1 NIC (PCIE 3.0) was marked as the answer   
    safe would be to choose something that already has a driver in DSM so if things go south with driver support you won't depend on extra drivers
    e1000e, igb
    e1000e: Intel PCIe PRO/1000 82563/82566/82567/82571/82572/82573/82574/82577/82578/82583/Gigabit CT Desktop Adapter/PRO/1000 PT/PF/I217-LM/V/I218-V/LM/I219 LM/V
    igb: Intel Gigabit Ethernet 82575/82576/82580/I350/I210/I211
    both types are available as pcie 1x nic, i have both for testing
    Gigabit CT Desktop Adapter for e1000e
    the one with igb was sold as server nic i210T
    both can be found on amazon
    the information is also documented (but you can also open the dsm pat file with 7zip: hda1, and inside it in /usr/lib/modules you find the kernel drivers that are part of dsm, though it's kind of hard to make out the thing you are looking for among all these *.ko files)
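    a minimal sketch of the check described above, simulating the module directory you would get after extracting hda1 from the *.pat (the directory and file names here are stand-ins created for the demo, not a real extraction):

```shell
# simulate dsm's module dir and filter for the nic drivers discussed above
mkdir -p hda1/usr/lib/modules
touch hda1/usr/lib/modules/e1000e.ko hda1/usr/lib/modules/igb.ko \
      hda1/usr/lib/modules/usb-storage.ko
# on a real extraction, the same grep shows whether your nic's driver ships in dsm
ls hda1/usr/lib/modules | grep -E '^(e1000e|igb)\.ko$' | sort
```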
  17. IG-88's post in Last guaranteed working DSM version was marked as the answer   
    just two posts above you can read that 6.1.5 is working, that's the latest 6.1.x version
  18. IG-88's post in Is it possible to use a Silverstone PCIe expansion card to use NVMe as cache? was marked as the answer   
    nvme is a completely new way of attaching a device and it's not supported in the models we can use for xpenology; i did a driver for nvme in my extra.lzma, but using it for dsm as cache needs much more, so it's not supported, you can read here and can experiment if you want to push things further
  19. IG-88's post in Will this sata card work with xpenology was marked as the answer   
    chip is marvell 88SE9215
    -> https://ata.wiki.kernel.org/index.php/Hardware,_driver_status
    -> AHCI
    and AHCI driver is part of the dsm kernel itself so it will work
    OR you could just use the forum search and look for "88SE9215" plenty of hits for this
  20. IG-88's post in syninfo.conf gets overwritten on system updates / Raid with 8 disks+ degraded was marked as the answer   
    in theory you could change the patch file inside the extra.lzma to change the values you need; the 3615/3617 patch files do not contain sections with these values (they have the "default" of 12), the 916+ patch does patch the values from 4 to 12, so doing it with the 916+ would be easier than for 3615/3617; for 3615/3617 you would have to create a diff for this and make it part of the patch (if you know how a diff works it's not so difficult), i was thinking of doing this last year but it would (kind of) collide with the driver thing i do
    if you think that further, it would be possible to define a value in grub.conf and use it as the amount of disks and change the things in the patch according to this value, but that's beyond what i will do for now, and afaik quicknick did this already in his loader (should he not release, then maybe i will do something in this direction)
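    the diff idea can be sketched like this (a hypothetical one-hunk patch; the file path and values are illustrative only, not the real contents of any extra.lzma):

```shell
# write a hypothetical one-hunk diff raising maxdisks; path and values are made up
cat > synoinfo.patch <<'EOF'
--- a/etc.defaults/synoinfo.conf
+++ b/etc.defaults/synoinfo.conf
@@ -1 +1 @@
-maxdisks="12"
+maxdisks="16"
EOF
# sanity check: exactly one added maxdisks line
grep -c '^+maxdisks' synoinfo.patch
```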
  21. IG-88's post in grub.cfg - change the serial no after DSM 6.x installation was marked as the answer   
    it's on the first partition of the drive: shut down your nas, remove the usb, insert it into a linux system, mount the 1st partition, change grub.cfg
    win10 creators update was also able to do this, but with the fall creators update it's not working
    on windows you can still do it: use Win32DiskImager to make an image of it, use osfmount to mount the partition, change grub.cfg, then use Win32DiskImager to write the modded version back to usb
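    on linux the edit itself is a one-liner once the partition is mounted; a sketch on a scratch copy (the sn values below are made up, and your grub.cfg may quote the value differently):

```shell
# edit the serial in a scratch copy of grub.cfg (illustrative values only)
cfg=$(mktemp)
printf 'set sn=1234ABC000001\nset mac1=0011327XXXXXX\n' > "$cfg"
# replace the serial line with the new value
sed -i 's/^set sn=.*/set sn=1234ABC000002/' "$cfg"
grep '^set sn=' "$cfg"
rm -f "$cfg"
```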
  22. IG-88's post in Offical hardware Upgradeable and IDE controller support was marked as the answer   
    no, most ide will not work as the kernel modules for this (pata_*.ko) do not load; synology made changes to the kernel for the drivers they need, and as they don't need pata_*.ko those drivers are not adapted, so they will not work
    no, that's an older ARM processor based unit, xpenology is based on x86 processors (nowadays 64bit capable processors)
  23. IG-88's post in Sata portmapping problem was marked as the answer   
    6 onboard + 8 on sas controller = 14
    as the default config is for 12 disks and the onboard controller comes first, the last 2 disks of the 8 port controller will be missing i guess
    internalportcfg="0x3fff" is ok for 14 disks, but you also have to set usbportcfg and esataportcfg to patterns that do not overlap with the internal defined ports (0x0 for testing), and maxdisks has to be set to 14
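    the bitmasks can be derived rather than hand-typed, one bit per slot; a sketch for the 14 disk case from this post (the usb placement is one possible non-overlapping choice, not the only valid one):

```shell
# derive synoinfo.conf bitmasks for 14 internal disks: one bit per slot
DISKS=14
printf 'maxdisks="%d"\n' "$DISKS"
printf 'internalportcfg="0x%x"\n' $(( (1 << DISKS) - 1 ))   # bits 0..13 -> 0x3fff
# esata set to 0 for testing (as above), usb placed above the internal bits
printf 'esataportcfg="0x0"\n'
printf 'usbportcfg="0x%x"\n' $(( 0xf << DISKS ))
```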
  24. IG-88's post in SHR2/BTRFS array degraded after adding a disk was marked as the answer   
    if the disk turns out to be ok it might be a software flaw in dsm, possibly fixed in dsm 6.1
    if you write what hardware you use i can tell you if it's compatible (as there are fewer drivers for 6.1 available than for 6.0.2)
    btw. you can also try to check the "real" log files in /var/log/ when using putty/ssh, maybe you will see more information about the "unknown" error of disk5
  25. IG-88's post in Lots of ram usage was marked as the answer   
    a little bit more context would be nice, like what hardware, like HP server?