XPEnology Community

Everything posted by mervincm

  1. Well, I just built a fresh XPEnology on my i3-8100 exactly as described, and sure enough, Plex transcodes in hardware (hw) exactly as you said! I used the "real" Synology SN and MACs but didn't burn the MACs into the NICs. Also, the bug with low quality on the 20 Mbps HD HEVC jellyfish sample file is gone. With hardware transcoding enabled I can play up to the 45 Mbps HD HEVC file, but it fails on the 50 Mbps file, the same as in Ubuntu. Also, just like Ubuntu, with software transcoding the 50 Mbps file works fine. This is good to know, as I was stuck on trying to burn the NICs. I have no issues with pauses or freezing on regular files. Do you have a sample "problem" file that's publicly available I can grab and test, to see if I can replicate your issue?
  2. Can you please confirm a few things? Did you use the serial in the USB image, or did you change it to a generated serial, or maybe a real serial? Did you burn any MAC addresses into your Intel NIC? Did you add those into the mac1 and mac2 entries on the USB image? Can you confirm whether you have any other video card in your system? Any monitor connected, or headless? Can you confirm the contents of your conf file with: cat /usr/syno/etc/codec/activation.conf (a quick check sketch is below). I am stuck trying to burn the MACs into my dual-port NIC, and I am really hoping it's not actually required for Plex (I didn't have to do it for Plex on my Haswell system). If I can get to where you are, maybe we can work on it together. Have you tried the 20 Mbps HEVC HD jellyfish sample file to determine if you get the quality deterioration issue when it hw transcodes to a web client set to MAX bitrate? http://jell.yfish.us/media/jellyfish-20-mbps-hd-hevc.mkv
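     For reference, a minimal check sketch (assuming root SSH access to the DSM box; both paths come straight from the discussion above):

         # device nodes that Plex/VideoStation hardware transcoding needs from the Intel iGPU
         ls -l /dev/dri
         # result of the Synology codec activation check (this is where serial/MAC validation shows up)
         cat /usr/syno/etc/codec/activation.conf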
  3. OK, I just purchased a new system board (B360), i3-8100, RAM, and case. The plan is:
     - wiped HDD
     - wiped USB key
     - freshly downloaded 1.04b loader and DSM OS
     - correct pid/vid, a real Synology 918+ serial number, and 2 real Synology MACs
     Step 1: burn the real Synology 918+ MACs into my NIC.
     Step 2: configure pid/vid/serial/MACs in the boot USB image (see the grub.cfg sketch below).
     Step 3: install DSM.
     Step 4: confirm /dev/dri and /usr/syno/etc/codec/activation.conf.
     I have created a bootable FreeDOS USB drive with Rufus. So far I have not been able to find a utility to wipe either of my 2-port Intel NICs: an HP 2x10GbE Intel 560SFP+ (1st choice) or an HP 2x1GbE Intel NC360T (2nd choice). Work continues!
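     For step 2, a hedged sketch of the relevant lines in the loader's grub/grub.cfg (mounted from the first partition of the USB key). The variable names follow the usual 1.04b loader layout, and every value below is a placeholder, not a real one:

         # grub/grub.cfg on the 1.04b boot USB - placeholder values only
         set vid=0x058f          # USB key vendor ID, read from the actual stick
         set pid=0x6387          # USB key product ID, read from the actual stick
         set sn=XXXXXXXXXXXX     # DS918+ serial number
         set mac1=001132XXXXXX   # first Synology MAC (001132 is Synology's OUI)
         set mac2=001132XXXXXX   # second Synology MAC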
  4. I also tried installing VideoStation. Still no /dev/dri, but I did get this error in /usr/syno/etc/codec/activation.conf: {"success":false,"msg":"SN format is wrong."} So my VideoStation transcode does not work. That being said, on the old system board I had working hardware transcoding in Plex, so I don't think the serial and MAC are important for Plex hardware transcode. It would be interesting to compile a list of what people were able to get working. As an example, I can confirm Haswell CPUs will support hw transcode with 1.04b + DS918+ 6.2 patched to date + Plex + Plex Pass. I can not get a 9xxx Coffee Lake Refresh CPU with a Z390 board + 1.04b + DS918+ 6.2 patched to date + Plex + Plex Pass to support hw transcode (I don't think this is possible yet). Has anyone got an 8xxx Coffee Lake CPU with a Z390 (or any other) board + 1.04b + DS918+ 6.2 patched to date + Plex + Plex Pass to support hw transcode? (I don't think this is possible yet.) Has anyone got a 7xxx Kaby Lake CPU + chipset + 1.04b + DS918+ 6.2 patched to date + Plex + Plex Pass to support hw transcode? (Should work.) Has anyone got a 6xxx Sky Lake CPU + chipset + 1.04b + DS918+ 6.2 patched to date + Plex + Plex Pass to support hw transcode? (Should work.)
  5. I run the 918+ image with a 10G Ethernet card (a recycled HP server NIC).
  6. Transcoding was working on my old hardware; I replaced the system board and now it does not work anymore. Old system: Lenovo TS140 with an E3-1225 v3 (Haswell). Replacement: ASUS Prime Z390-A with an i5-9600K. I moved the USB key, NIC (10GbE), HBA, and all HDDs to the new system board. It boots as expected, but in Plex hardware encoding and hw decoding never seem to kick in anymore. I believe I am on bootloader 1.04b (can you check that without a monitor connected? I run it headless; a hedged check is sketched below). Latest DSM, Plex Media Server, and Plex Pass. I connect to DSM with WinSCP and I see the /dev/ folder but not the dri subfolder. Also, I see /usr/syno/etc/ but no codec/activation.conf.
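     A minimal sketch of what I would check over SSH instead of a monitor (assuming root SSH access; that the loader's settings show up on the kernel command line is an assumption based on how Jun-style loaders typically pass them):

         # kernel boot arguments - on Jun-style loaders this usually shows the model, sn and mac settings
         cat /proc/cmdline
         # did the i915 (Intel iGPU) driver come up on this board?
         dmesg | grep -i i915
         # device nodes Plex needs for hardware transcoding
         ls -l /dev/dri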
  7. Did you try iperf from your workstation to the DiskStation? There is an iperf3 Synology package kicking around (a usage sketch is below). From my desktop to my XPEnology box I can copy files initially at 1.1 GB/s, and then once the RAM fills up it drops to 700-800 MB/s. The desktop is on an NVMe SSD, and the XPEnology box has 6x 8TB Seagate IronWolf disks with 2x 500GB 860 EVO SSDs as a non-sequential cache. NICs are Intel in the NAS and a Mellanox CX2 in the desktop; the switch is a D-Link.
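     A minimal iperf3 sketch, assuming the iperf3 package (or a static binary) is present on both the NAS and the workstation; the hostname and flags are just illustrative:

         # on the NAS: run the server
         iperf3 -s
         # on the workstation: test toward the NAS with 4 parallel streams for 30 seconds
         iperf3 -c nas.local -P 4 -t 30
         # add -R to test the reverse direction (NAS -> workstation)
         iperf3 -c nas.local -P 4 -t 30 -R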
  8. Rebranded LSI controllers like the H200 (I have an HPE version) in IT mode are the default best option, in my opinion. Inexpensive, TONS of them around, and lots of support available from home-labbers. Most add eight 6G SAS/SATA ports with no complications around SMART, etc.
  9. I agree with the MikroTik interface comment for the most part; that being said, for a flat switch there's really nothing to do. Additionally, this model runs SwOS, not the RouterOS you are used to, which is much, much simpler. I have four 10-gig switches in my home lab (a Ubiquiti UniFi 16x10G, a MikroTik CRS226 24x1G/2x10G, a Quanta LB4 48x1G/2x10G, and a D-Link 24x1G/4x10G), so I have seen a range of less common options (no Cisco/HPE/Juniper). At this time I only really use the D-Link enterprise-line switch. A switch allows you to have a single IP address and a single name/DNS entry for your NAS; it's a nice-to-have, not a need-to-have. I do 10GbE from my main PC to the physical NAS and to the vSphere host. I also do multichannel SMB3 over multiple NICs to some secondary PCs and a second NAS (a real Synology box). Physical routing/firewall/IDS make do with 1GbE because my WAN link is limited.
  10. I suggest getting a switch. It will cost you one more cable, but it makes your networking so much easier. Managing two networks (one data and one storage) is definitely possible, but it's a mess that you don't have to deal with for under $200. I have tried both. You can buy a brand new switch from MikroTik with a couple of 10-gig ports for under $150 (CSS326-24G-2S+RM).
  11. Seems like the right approach to me. Disable sequential caching until NVMe support is available. It should not be too long, as models like the 918+ have NVMe caching features. Once that is in XPEnology, you can LACP your two links from switch to NAS and one day be pushing for 2 gigabytes per second transfers!
  12. I have a Lenovo XPEnology system with a 10-gig Intel 559-based card, a Xeon E3-1226 v3 CPU, 32GB RAM, 6 HDDs (WD Red) in RAID5, and an LSI IT-mode HBA (6G SATA/SAS), as well as a real Synology 1815+ with 5 HDDs in RAID5. I have not tried adding an SSD cache, as my testing (back in the 5.x days) showed that adding them in impacted my sequential performance in a negative way. My Win10 system is a 6700K with 32GB RAM, an Intel SSD 750 and a Samsung 960 EVO, on the same 10GigE Intel card. I can do some tests, so you have something to compare to.
  13. Safe to assume you are using a 10GbE network here? When you created your SSD cache, you created a 2-SSD read/write cache, and only for sequential transfers, correct? The Atom CPU (a great Atom, but still an Atom) might be impacting performance; did you monitor CPU while this is happening? A 3-HDD RAID5 supplies a steady 500-600 MB/s? That's not real; there must be caching involved. The most you can hope for when writing to a 3-disk RAID5 array is 2x a single disk (each stripe is two data chunks plus one parity chunk, so sequential writes top out near twice one disk's speed), and I doubt you have HDDs that can go that fast. My really old Agility 3 SSDs give about 120 MB/s write performance in the DSM write benchmark on my Xeon system. Maybe you have an issue with the backplane? BTW, very cool motherboard! I would love one! (A benchmark sketch that bypasses caching is below.)
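     A minimal sketch of how I would sanity-check raw sequential throughput while bypassing the RAM cache (assuming SSH access; /volume1 and the file name are placeholders, and oflag=direct/iflag=direct are what skip the page cache):

         # write a 10 GiB test file straight to the array, skipping the RAM page cache
         dd if=/dev/zero of=/volume1/ddtest.bin bs=1M count=10240 oflag=direct
         # read it back, again bypassing the cache, then clean up
         dd if=/volume1/ddtest.bin of=/dev/null bs=1M iflag=direct
         rm /volume1/ddtest.bin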
  14. I have used a range of 10GbE NICs, optical transceivers as well as DAC cables, and four different switches with 10G. I don't think the Mellanox ConnectX-2 (I tested one in the past) has any sort of built-in driver support in XPEnology. I have used Broadcom in the past, and now use Intel 82599-based cards. If you just want to build a point-to-point 10GbE link, I had great luck with the IBM BNX2 Broadcom cards, their included transceivers, and a standard OM3 LC/LC fiber patch cable. I also use the 3615xs version. (A quick driver check sketch is below.)
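     A minimal sketch, assuming SSH access and that lspci is present, for checking whether DSM actually loaded a driver for a given 10GbE card (the module names are the usual Linux ones for the cards mentioned: ixgbe for Intel 82599, bnx2x for Broadcom 10G, mlx4 for ConnectX-2):

         # list the PCI Ethernet devices the kernel can see
         lspci | grep -i ethernet
         # check whether a matching 10GbE driver module actually loaded
         lsmod | grep -E 'ixgbe|bnx2x|mlx4'
         # and whether the kernel logged anything about bringing the card up
         dmesg | grep -iE 'ixgbe|bnx2x|mlx4'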
  15. This can be a number of things, and some are not easily fixed: HBA, SATA/SAS backplane, port multiplier, OS. The 6/i looks to be a bad choice for XPEnology for two reasons: no IT-mode firmware and no support for disks over 2TB. If you are sure you don't have a backplane limitation, this might be as easy to fix as purchasing a replacement SAS/SATA HBA. There are other forums that are better suited to server-grade parts in the home lab; you might try there for specific advice.
  16. I started with ESXi, but am now running it bare metal. ESXi was fine, but I didn't need any extra VM hosting; you can do that now inside DSM with Virtual Machine Manager anyway. I wanted to give DSM local access to ALL of the HDDs for SMART testing, and I wanted push-button power-button support and USB support. One item you should consider is that a bare-metal install means you need DSM to include your driver support, whereas with ESXi you need vSphere to include your driver support. This might be a deal breaker in either direction, depending on what you have.
  17. Update: it's working. I moved the USB boot drive from my test server to the main XPEnology NAS (exact same hardware) and it is stable, running at 10 Gbps on the Intel card (Broadcom still non-functional). It seems that something in .1 or u4 fixed the problem. Still no luck with the Broadcom, but I would rather use the Intel anyway.
  18. Did a test today. I have a real Synology 1813+, and it's set up (today) with a single SHR volume, 8 HDDs + BTRFS. I set up a 4-gig link aggregation with its 4 NICs, removed all packages, and turned off all reports, indexing, etc., to create a best-case scenario (without SSDs). Directly from the 1813+, I did an rsync copy pulling huge files from my XPEnology box (sketched below); at the same time I did a CIFS copy (huge files) from a workstation to the 1813+. I was able to get CUMULATIVE averages over 1 Gbps and CUMULATIVE peaks at about 1.6 Gbps. If you had more clients looking to write to the 1813+ at the same time, it might have gone even higher. This is a typical scenario where link aggregation can help.
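     A minimal sketch of the kind of rsync pull I mean, run on the 1813+ (the hostname, user, and folder names are placeholders, and it assumes SSH/rsync are enabled on the XPEnology box):

         # pull a folder of huge files from the XPEnology box to the local volume, with per-file progress
         rsync -av --progress admin@xpenology.local:/volume1/video/bigfiles/ /volume1/test/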
  19. 10 gigabit networking does not have to be expensive. IMO a starter setup is just: 2 NICs (used can be as low as $20-30 each, though not all work with XPEnology at this time); 2 "cables" (for cheap 10-gig networking a cable usually means 2 optical SFP+ transceivers plus a fiber patch cable of the length required, OR an SFP+ DAC cable), each about $20-75; and 1 switch with at least 2 SFP+ ports, new from $200-400, used from $100. It's doable for not too much.
  20. I have played with read and read/write caches. I found the read cache made no noticeable improvement, but it never seemed to get in the way either. The read/write cache actually made random reads of recent data noticeably better, but it also negatively impacted sequential writes. I took both SSDs out and built a dedicated SSD volume instead. MUCH, MUCH better. It was big enough to run my apps and small VMs from, and it worked great.
  21. Before you spend more time on link aggregation, are you sure you understand what advantage it offers? If you are trying to get more than 1 Gbps between your NAS and a single client, IT DOESN'T WORK THAT WAY. No matter how many NICs are in your PC or NAS, and no matter how you create your bonded channel, one client maxes out at 1 Gbps. What you can hope to achieve (and it's hardly easy) is to have multiple clients simultaneously aggregate to over 1 Gbps. And even if you get the networking right, you still need the disks to keep up. When you have multiple devices accessing your NAS simultaneously, that's somewhat random activity, and you need A LOT of hard drives to exceed 1 Gbps of random activity unless you create an all-SSD volume. Even if you get everything working, how many minutes a week is everything going to line up perfectly? There are two solutions that do give you over 1 Gbps somewhat often: (a) 10 Gbps NICs and a 10 Gbps switch; used cards and switches are very affordable, and this is what I do; (b) multichannel SMB3; right now it only works between Windows systems, and it is available on Linux-based systems using the latest Samba releases if you enable EXPERIMENTAL features (a config sketch is below).
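     A minimal sketch of what enabling that experimental Samba feature looks like; the smb.conf location varies by system, and whether a given DSM build's Samba honors it is an assumption, so treat it as illustrative only:

         # fragment for the [global] section of smb.conf (often /etc/samba/smb.conf)
         [global]
             # experimental in older Samba releases: lets one client spread SMB3 traffic across several NICs
             server multi channel support = yes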
  22. I should have added that I was able to get both the Broadcom 10GbE and the Intel 10GbE NICs working with 5.x, and ran it that way for months if not over a year. I have upgraded from 6.1 to 6.1.1u4 and there is a significant improvement, if not quite a fix; is it possible Syno tweaked a driver or something? It is MUCH better, at least with the Intel card I have been testing with so far. I can read from the NAS at about 300 MB/s (OK for a 4-disk RAID5 BTRFS volume), but I can only write to the NAS at about 105 MB/s (not great). The only thing I did differently was testing BTRFS this time; I used EXT4 for previous testing. I want to let it settle for a while and make sure it is stable, then I will try the Broadcom card, and then both again with the new RAM drive (assuming that this RAM drive is NOT the same as the one in your 6.02 tutorial, as I used that one in yesterday's testing).
  23. Used NICs are really cheap; you should be able to get a PCI-based Intel NIC if you have an old-style PCI slot on your motherboard. 2- and 4-port PCIe boards are also available.
  24. My Lenovo TS140 has a C226 platform chipset; it works well.