XPEnology Community

mic-cosmos

Rookie
  • Posts: 8
  • Joined
  • Last visited


mic-cosmos's Achievements

Newbie (1/7)

Reputation: 1

  1. It's definitely a HW transcoding issue. CPU use is actually lower when I disable HW transcoding, and there's no buffering. Could anyone give advice on how to get Plex HW transcoding to work? As I understand it, Synology AME isn't necessary for Plex. I've tried elevating the privileges on /dev/dri, but that didn't work.
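For anyone chasing the same /dev/dri permissions angle, here's a rough diagnostic sketch to run as root over SSH. It assumes a DS918+ build where the loader's i915 driver should expose a render node; the chmod workaround in the comments is a common forum suggestion, not a guaranteed fix, and the Plex service account name varies by package version, so check it yourself with `ps aux | grep -i plex`.

```shell
#!/bin/sh
# Sketch: check whether the Intel iGPU render node Plex needs for HW
# transcoding is present and accessible (DS918+ / i915 driver assumed).
if [ -e /dev/dri/renderD128 ]; then
    status="render node present"
    ls -l /dev/dri    # check owner/group/mode on card0 and renderD128
else
    status="render node missing (i915 driver not loaded by the loader?)"
fi
echo "$status"
# If the node exists but Plex still software-transcodes, a common
# (non-persistent) test is:  chmod 666 /dev/dri/*  -- it reverts on
# reboot, so people usually reapply it via a Task Scheduler boot script.
```

If the node is missing entirely, no permissions change will help; the problem is the driver/loader layer, not Plex.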
  2. Hi all, I used a Dell Precision 3620 (E3-1225 v5, 8GB ECC) to build a TCRP-based (DS918+, DSM 7.1.1-42962) Plex Media Server (v1.30.0.6486). My problem is excessive buffering when trying to stream Plex Live TV from an HDHomeRun Duo.
     - Not a network issue: everything is hardwired, and I can stream smoothly using the HDHomeRun TV app instead of Plex.
     - CPU use goes up from 2-3% to 30% (due to Plex Transcoder; HW transcoding enabled in Plex, transcoding off in the Live TV/DVR settings), with buffering every couple of seconds regardless of video output quality.
     - No issues streaming locally stored video files (even transcoding, with its CPU-use bump, doesn't cause buffering).
     - No issues streaming live TV from another PMS running on Ubuntu with GPU passthrough under ESXi 6.7.
     I appreciate any pointers on what the issue might be and how to resolve it.
  3. Thanks both. I think it's obvious that the CPU is the bottleneck keeping this system from exceeding 1Gbps. What surprises me is that I see essentially no CPU utilization with my VM and my DS1518+. With the CPU being the bottleneck on the C60M1-I, I would expect a small but noticeable increase in CPU use on the other two systems (given that they're quad-core with much faster clocks), but I don't see that. What do you see on your own systems during file transfers: a small but noticeable CPU bump, or none at all, with CPUs beefier than the one on the C60M1-I?
  4. Hi all, I have an Asus C60M1-I running bare metal on DSM 6.2.3-25426 Update 3, with six 6TB WD Blues in a RAID 10 configuration.
     - At baseline, idling with the Realtek onboard NIC, CPU use fluctuates between 10-40%, and I can get a steady 58MBps read at roughly 85% CPU utilization.
     - With the onboard NIC disabled and an Intel i350-T4 installed with a 2-port static LAG, the CPU idles at 10%. With a single client pulling, I can get >100MBps with occasional dips (disk I/O catching up?) at around 65% CPU use. Performance with the i350-T4 is clearly better than with the onboard NIC, though I did run into a weird issue where I could only pull ~8MBps with this setup; a reboot fixed it.
     - While pulling with 2 clients, I haven't been able to consistently break 1Gbps due to high CPU use.
     - On my genuine DS1518+ (RAID 6, also on a static LAG), CPU utilization barely budged from low idle while pulling >100MBps. On an ESXi VM (Dell T30 with an i350-T4 vSwitch), I can also pull >100MBps with CPU utilization barely budging.
     There is no encryption on the file transfers, and I've tried both AFP and SMB with similar utilization results on the C60M1-I. My two questions: 1) What's causing the high CPU utilization on file transfers with the C60M1-I that I'm not seeing on the DS1518+ and the VM? 2) I'd like to utilize the LAG on the C60M1-I, but the bottleneck is the CPU; is there something I can do to lower the CPU utilization?
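One way to narrow down where the C60M1-I's cycles go during a transfer is to sample /proc/stat while a copy is running: if the jump is mostly system/softirq time rather than user time, the cost is in the kernel network/RAID path (interrupt handling, parity/checksum work) rather than in a userland daemon like smbd. A minimal sketch (the 1-second sample window is arbitrary):

```shell
#!/bin/sh
# Sketch: sample aggregate CPU counters from /proc/stat twice and report
# where the time went (jiffies per field over the sample window).
# Run this over SSH while a large file transfer is in progress.
read -r _ u1 n1 s1 i1 w1 q1 sq1 _ < /proc/stat
sleep 1
read -r _ u2 n2 s2 i2 w2 q2 sq2 _ < /proc/stat
summary="user=$((u2-u1)) system=$((s2-s1)) idle=$((i2-i1)) iowait=$((w2-w1)) irq=$((q2-q1)) softirq=$((sq2-sq1))"
echo "$summary"
```

High iowait would instead point at the disks catching up, matching the throughput dips mentioned above.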
  5. I have the same issue. In response to a DM, here's how I reverted my VM to Update 2, in case anyone's curious. It was a difficult process: the server was stuck in an endless recovery loop where I could reach the recovery menu after using a fresh synoboot.img, but the server would then reboot and fail to register an IP address. I managed to get into DSM by choosing the "baremetal" option instead of the virtual machine option, but only 2 of the 4 HDDs on the HBA passthrough would register, and my RAID array showed up as "crashed". I eventually managed to reinstall the Update 2 DSM by disconnecting the HBA passthrough and installing a fresh DSM on a virtual HDD, then reconnecting the HBA passthrough. After booting into DSM from the virtual HDD, I got a message that the DSM installs on my HBA-passthrough HDDs were no good and was asked whether I wanted to reinstall DSM (using the one from the virtual HDD); I chose yes. I then removed the virtual HDD, was able to boot DSM from my HBA-passthrough disks, and updated back up to Update 2 after that.
  6. - Outcome of the update: Unsuccessful
     - DSM version prior to update: DSM 6.2.3-25426 Update 2
     - Loader version and model: JUN'S LOADER v1.03b - DS3617XS
     - Using custom extra.lzma: No
     - Installation type: VM - ESXi 6.7 (FixSynoboot.sh), Intel EXPI9404PTL Pro/1000 PT quad-port (passthrough), Dell 47MCV PERC H200 flashed to LSI 9207-8i IT mode (passthrough)
     - Additional comments: Workstation not detected (no IP) after the update once the flashed Dell HBA card was added. Built a new VM starting from DSM 6.2.3 using a VM HD and FixSynoboot.sh, and updated to Update 3 without issues; the workstation was again not detected once the flashed Dell HBA card was added.
  7. It's not a BIOS issue, as the BIOS reports the 16GB and 4GB just fine. I get that AMD CPUs are not officially supported; I'm just wondering whether anyone else with this board has run into these issues or found workarounds. I suspect the incorrectly reported RAM may be the reason the system sometimes crashes. FWIW, I'm running the DS3617XS profile.
  8. Running on Jun's loader v1.02b with DSM 6.1.6-15266 Update 1 installed. I've only had two issues: 1) Incorrect RAM is reported: with 2x8GB installed, DSM reported 32GB; when I switched to 1x4GB, DSM reported 8GB. 2) DSM randomly crashes, in that I can no longer reach the DSM GUI via its IP in a browser, though pinging the IP still works. I'm going to try SSH next time that happens.
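When the GUI next hangs, it may help to distinguish "the whole box is wedged" from "just the web stack died". Since ping still answers, a quick port probe from another machine shows whether DSM's web server (5000/5001) and sshd (22) are still listening. A sketch, assuming a netcat variant with the -z scan flag and using a placeholder NAS address:

```shell
#!/bin/sh
# Sketch: probe DSM service ports from another machine while the GUI is
# unresponsive. NAS_IP is a placeholder -- substitute your NAS address.
# Assumes a netcat with the -z (scan-only) flag, e.g. OpenBSD nc.
NAS_IP="192.0.2.10"
result=""
for port in 5000 5001 22; do
    if nc -z -w 1 "$NAS_IP" "$port" 2>/dev/null; then
        result="$result $port:open"
    else
        result="$result $port:closed"
    fi
done
echo "Port status:$result"
```

If 22 is still open, SSH in and look at process state and memory pressure (`ps`, `free`, dmesg for OOM kills); if every port is closed while ping answers, the kernel is alive but userland is likely wedged.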