XPEnology Community

flyride

Moderator
  • Posts: 2,438
  • Days Won: 127

Everything posted by flyride

  1. flyride

    DSM 6.2 Loader

    Yes, that's clear. However, a sample size of one does not prove your assertion (that an iGPU is required).
  2. flyride

    DSM 6.2 Loader

    There are many examples of non-iGPU-equipped CPUs in the DSM update reporting threads. My own 1.04b test environment is ESXi with no iGPU.
  3. flyride

    DSM 6.2 Loader

    Integrated graphics is not required. Any Haswell or later CPU is OK (AFAIK the actual requirement is support for the FMA3 instructions; see the check below). @CheapSk8, I think your system should run 1.04b and DS918 DSM fine - keep at it.
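
     A quick, generic way to check whether a CPU exposes FMA3 from any Linux shell - this is a standard /proc/cpuinfo check, not something specific to the loader:

     # Prints "fma" if the CPU advertises FMA3 (Haswell or later, and comparable AMD parts)
     grep -m1 -o '\bfma\b' /proc/cpuinfo || echo "no FMA3 support"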
  4. @advin bump
     Also https://xpenology.com/forum/topic/13342-nvme-cache-support/?do=findComment&comment=107911
     If possible, please post the output of the following commands on a real DS918 with a working NVMe cache drive installed. You may need to be elevated to root (sudo -i):
     lspci -k
     ls /dev/nvm*
     udevadm info /dev/nvme0
     synonvme --get-location /dev/nvme0
     synonvme --port-type-get /dev/nvme0
     ls /sys/block
     ls /run/synostorage/disks
  5. Running some of these services in Docker may help, as it is possible to bind Docker apps to specific networks (see the sketch below).
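
     A rough sketch of two ways to do that binding - the container names, IP addresses, and interface name (eth1) are made-up examples, not values from this thread:

     # Publish a container's port on one specific host address only
     docker run -d --name web -p 192.168.10.5:8080:80 nginx

     # Or attach a container to a macvlan network tied to a particular NIC
     docker network create -d macvlan \
       --subnet=192.168.20.0/24 --gateway=192.168.20.1 \
       -o parent=eth1 lan2
     docker run -d --name web2 --network lan2 nginx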
  6. Actually, mdadm is able to use all the RAID1 members to improve read throughput. See https://superuser.com/questions/1169537/md-raid-1-read-balancing-algorithm SATA3 max bandwidth per drive is 600 MB/s. Assuming an 80% efficiency rate for mdadm across two drives, that's still a possible 960 MB/s burstable. 10GbE max throughput is 1280 MB/s, so it's pretty close with a minimal hardware investment, and certainly much better than the 115 MB/s maximum we can get from 1Gbps Ethernet. EDIT: I realize your comment about equipment is referring to the OP. Lack of SATA3 capability makes 10GbE not very useful in this particular case.
  7. 10GbE is transformative. Two SATA SSDs in RAID1 can burst to 10GbE speeds, and most NVMe drives can fill a 10GbE pipe with one device. Economically getting to 10GbE was one of the main reasons I personally went to XPEnology in the first place.
  8. Shared folders need to be created in the DSM UI. It takes just a few seconds to create each one. It's pretty simple. Is the issue that copying the files to the new shared folders takes time? Because that can be done from the shell very quickly (Linux mv command).
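
     If the concern is the time to move the data itself, a move within the same volume is just a rename, so it completes almost instantly from the shell. A sketch - the share names here (oldshare, newshare) are placeholders:

     # Run as root (sudo -i); mv within the same volume does not copy the data
     mv /volume1/oldshare/somefolder /volume1/newshare/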
  9. Serial port virtualization on the VPS is probably your best bet.
  10. I don't think an LSI controller will work at all without at least a PCIe x4 slot. If you can live with four drives, the J-series are decent, inexpensive boards and the Realtek driver works fine on DS918 with the "real3x" mod - see my signature for more or search for real3x. If you need more than four SATA ports, an ASRock J-series is not the way to go.
  11. https://xpenology.com/forum/forum/36-tutorials-and-guides/ There are discrete tutorials for ESXi and Proxmox; you should be able to figure out what you need to support KVM by itself from those.
  12. If you are willing to use a hypervisor, you don't need a USB stick to boot DSM. The loader is a virtual disk at that point.
  13. <snip> Polanskiman is giving you good advice regarding the CPUs, 3615xs DSM base and LSI card configuration. Additionally, I strongly suggest you reconsider the use of ESXi, for several reasons:
     • Your on-board NIC won't work with baremetal 1.03b - you will need an e1000e-compatible NIC like an Intel CT, but a PT card works fine with ESXi.
     • ESXi directly supports LSI. For best results, use passthrough mode, or if DSM 6.2.x has problems with it, translate the LSI-attached drives to your VM via physical RDM (see the sketch below).
     • DSM max CPU cores is 8c/16t due to a Linux kernel limitation. You'll need to run DSM as a VM (and run other workloads on the same hypervisor) if you want to use all your system's compute resources.
     • I don't recall anyone ever posting a system with 120GB of RAM - while I can confirm 64GB works fine baremetal, I don't know the max RAM supported, but...
     • The ESXi free license supports two processor dies with unlimited cores and 128GB of RAM.
     If you do insist on running baremetal, you will have an easier time starting with loader 1.02b and DS3615xs DSM 6.1.7, which will definitely support your NIC and HBA. But even then I would recommend ESXi to fully utilize your hardware.
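
     For reference, a minimal sketch of the physical RDM approach from the ESXi host shell - the device identifier and datastore path are hypothetical, so substitute your own from the ls output:

     # List the physical disks ESXi can see
     ls -l /vmfs/devices/disks/
     # Create a physical-mode RDM pointer file for one LSI-attached disk
     vmkfstools -z /vmfs/devices/disks/naa.5000c500xxxxxxxx \
       /vmfs/volumes/datastore1/DSM/disk1-rdm.vmdk
     # Then attach disk1-rdm.vmdk to the DSM VM as an existing hard disk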
  14. flyride

    DSM 6.2 Loader

    The kernel is compiled with instructions that are only available in Haswell or later. There is a lot of variation in the capability of "Quicksync" across processor families. For example, current CPUs can directly decode H.265 and HDR content while older ones cannot. Even if you could get an old processor to boot, you would still need Quicksync compatibility with the transcoding software in Video Station or Plex.
  15. If you are writing it off as dead, you have nothing to lose by trying more advanced options, like re-creating the array. Google is your friend, but here's a thread to start with: http://paregov.net/blog/21-linux/25-how-to-recover-mdadm-raid-array-after-superblock-is-zeroed Don't mess with /dev/md0, /dev/md1 or their members (/dev/sda1, /dev/sdb1 and /dev/sda2, /dev/sdb2) - those are your DSM system and swap partitions, respectively. What you care about recreating is /dev/md2 and its members /dev/sda3 and /dev/sdb3 - your RAID group and volume (a rough sketch follows below).
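
     A heavily hedged sketch of what re-creating that array can look like for a two-drive RAID1, per the linked article. The metadata version and member order below are assumptions - confirm them with --examine first, because --create with the wrong parameters is destructive:

     # Inspect whatever superblock information is still readable
     mdadm --examine /dev/sda3 /dev/sdb3
     # Re-create the array in place without resyncing, ONLY once level, member
     # order and metadata version are known (1.2 and this order are assumptions)
     mdadm --create /dev/md2 --level=1 --raid-devices=2 --metadata=1.2 \
       --assume-clean /dev/sda3 /dev/sdb3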
  16. This is a two-drive RAID1. If both members have problems, you are not in great shape for recovery. But try the mdadm assemble command with force (-f) instead of -v, for example:
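
     (Device names here follow the earlier post and may differ on your system.)

     # Stop the array if it is partially assembled, then force-assemble both members
     mdadm --stop /dev/md2
     mdadm --assemble --force /dev/md2 /dev/sda3 /dev/sdb3
     cat /proc/mdstat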
  17. flyride

    DSM 6.2 Loader

    No difference, except as noted in that thread: DS3615xs instead of DS3617xs. FWIW, I don't think the speed difference has anything to do with the connection type - it's all virtual anyway. Subjectively, 6.2 is slower than 6.1 even on a real Synology.
  18. flyride

    DSM 6.2 Loader

    I think we covered this in your thread here; I don't know what might have changed since then, though.
  19. The slots will be shown in DSM regardless of whether there is a controller available to service them. You are assigning meaning to the empty slots, but they don't tell you anything about what is and is not recognized in the system. You are connecting a large number of drives - not a typical or simple installation. All three add-in SATA controllers are using port multiplication, and I suspect the DSM drivers are having trouble with that many controllers along with port multiplication, which has been documented to cause trouble in complex scenarios (which your configuration definitely is). Even your base motherboard configuration has FOUR controllers onboard (2x Intel SATA via PCH, 1 embedded 88SE9128 and the JMicron eSATA). A couple of points:
     • Disable controllers you aren't using.
     • Each controller added may "remap" the active ports and order of drives in DSM (that's where SataPortMap/DiskIdxMap comes into play - see the sketch below).
     • Personally, I avoid multiple instances of the same controller type in one system.
     • Intel SATA usually works well on DSM. If I were in your shoes, I'd back up and figure out why the Intel SATA II disks aren't working. Using the cheapest tech (Marvell) to support a complex configuration (11 drives) is part of the problem, as it is making debugging and analysis very hard.
     • ESXi may help you support multiple controllers (and the Intel SATA). However, it doesn't like Marvell too much. There are some hacks to make it work, but again, LSI is a much better option.
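
     For reference, a hedged sketch of where those parameters live in Jun's loader. The values below are examples only (they assume two controllers with four ports each) and depend entirely on your controller layout, so don't copy them verbatim:

     # In grub.cfg on the loader's boot partition, the stock line looks something like:
     #   set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=1 SasIdxMap=0'
     # SataPortMap: one digit per controller (in PCI order) = ports claimed from it
     # DiskIdxMap:  two hex digits per controller = starting DSM disk slot
     set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0004 SataPortMap=44 SasIdxMap=0'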
  20. Probably, but it is not guaranteed. "Warning" disk status means that the disk has been flagged but is otherwise working. "Initialized" means that the disk is not currently part of the array but is recognized by DSM and has the Synology disk structure (partitions) configured on it. Generally, a disk that has been part of an array and is now in Initialized status has either been disconnected from its controller (bad data cable or power), or has been in a read-error state (bad sectors) for long enough that DSM decided it was no longer functioning. Standard hard disks (in contrast with "NAS" hard disks like WD Reds) have internal firmware that tries to recover disk errors for a very long time, which can cause them to be dropped from the RAID. Two drives must be "offline" in an SHR of three or more drives for the array to be down.
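
     A quick way to see how mdadm itself counts the members, independent of what the DSM UI shows (md2 is the usual name for the first data array, but check /proc/mdstat on your system):

     # [2/1] [U_] in the output means one of two members is missing
     cat /proc/mdstat
     # Per-array detail, including which member is faulty or removed
     mdadm --detail /dev/md2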
  21. You can try to force the arrays online with mdadm --assemble --force. That's not the exact syntax; you will need to do some investigation first. Google "recover mdadm array" for some examples. Because this is SHR, you may need to reboot again once the arrays are online in order for the volume to be visible. This is one of the reasons I personally don't use SHR (it makes recovery more complicated in this scenario). Once the volume is back online, you can force a resync. You will undoubtedly lose some data, and you won't know which files are affected. Sorry, there is no easy step-by-step solution to this. You also need to figure out the original cause... Got a backup, right? (A rough outline of the general shape of this follows below.)
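
     As a hedged outline only - the array names and members below are placeholders, and an SHR pool often spans more than one md array (md2, md3, ...), so identify yours from /proc/mdstat first:

     # Identify the data arrays and their members
     cat /proc/mdstat
     mdadm --examine /dev/sda3 /dev/sdb3 /dev/sdc3
     # Force-assemble a stopped/degraded data array from its members
     mdadm --assemble --force /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3
     # After the volume is back online, trigger a resync/repair of the array
     echo repair > /sys/block/md2/md/sync_action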
  22. flyride

    Totally new

    Your onboard NIC may be too new for the drivers in DSM. If the motherboard has two NICs, try the other port. Or buy an Intel CT (82574L) card, plug it in, and it should work.
  23. U.2 does not break out to SATA. You may be confusing the SFF-8087 multi-SAS port with U.2. U.2 is basically a PCIe port, just as an M.2 NVMe slot is a PCIe port, and there is no way to connect SATA directly to the PCIe bus. I have heard of an embedded SAS/SATA controller with a U.2 interface at one end and a SAS/SATA breakout at the other. That would appear as a SATA controller to DSM, and as long as it was compatible with the standard SATA driver, it should work. The only way I know of to get U.2 ports (and U.2 drives) to work is with ESXi via physical RDM (I actually have two U.2 drives mirrored on my main XPEnology system).