mervincm

Members
  • Content Count

    172
  • Joined

  • Last visited

  • Days Won

    4

mervincm last won the day on September 2

mervincm had the most liked content!

Community Reputation

11 Good

About mervincm

  • Rank
    Advanced Member


  1. Yes, you can bond multiple NICs in DSM; a quick way to check the bond from the shell is sketched below.
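    A minimal way to confirm the bond from SSH (assuming DSM used its usual default name of bond0 for the first bond; these are standard Linux bonding paths, not anything XPenology-specific):

        # Show the bonding mode, member NICs, and their link state
        cat /proc/net/bonding/bond0
        # Confirm the bonded interface is up and has the expected address
        ifconfig bond0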
  2. If you want to see it graphically, install Docker and the netdata/netdata container (a minimal run command is sketched below). Here are my 8 cores.
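    A minimal sketch for starting the container from the command line (the netdata docs list extra mounts and capabilities for fuller metrics; this is just the bare minimum to get the dashboard up):

        # Run netdata and expose its web dashboard on port 19999
        docker run -d --name=netdata -p 19999:19999 netdata/netdata
        # Then browse to http://<nas-ip>:19999 for the per-core graphs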
  3. I can only back up what the guru master has to say. I also picked up a couple of JMB585-based cards to add 5 SATA ports each. I don't have them in my full-time system yet, still in my lab / test box, but so far they work very well. Their use of PCIe 3.0 for performance, the fact that they only require 2 lanes (a common slot wiring on boards for these CPUs), and their low price make them easy to recommend. If you don't plan on adding a bunch of cards, almost anything has worked for me (so far). If you do want to add a bunch of cards (GPU, NICs, an HBA for more drive slots, AIC NVMe drives), then pay attention to: A) the physical slot sizes, B) how the slots are wired electrically, C) how the lanes are shared when multiple slots are in use, and D) which features are either/or. Most boards have features that you must choose between and can't use at the same time, and some boards are much better than others in this respect. As an example, a pile of 1x slots is useless, whereas a board with a bunch of physical 4x/8x/16x slots is very handy, provided they can be used at the same time with enough active lanes. The only useful thing you can put into a PCIe 2.0 1x slot is a 1 / 2.5 Gbit Ethernet NIC. When I buy a board I just make an Excel spreadsheet and track it. Z390 boards seem the best to me so far. To see what a card actually negotiated once installed, see the sketch below.
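    A quick check from any Linux shell with pciutils available shows how many lanes a card really got (LnkCap is what the device supports, LnkSta is what it actually negotiated):

        # Print the link capability and link status lines for every PCIe device
        sudo lspci -vv | grep -E 'LnkCap:|LnkSta:'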
  4. I seriously recommend the i3-8100 and the i3-8350K on the used market. They go for very little, and "used" is not a scary word when it comes to a CPU. They are super easy to cool, and there are tons of inexpensive system boards you can get for very little as well. The idle power usage is about the same across nearly every model in a given series, so the power costs will be the same as the Pentiums you mention. They have exceptional hardware transcoding with no driver fuss if you want to use that (it's amazing if you use it). And they have 4 honest-to-goodness fast Intel cores for muscle when you want to do anything locally. They also do not rely on turbo mode. Turbo mode may or may not function in XPenology; only base clocks and power-saving modes are a given. "Turbo boost is managed by the intel_pstate scaling driver, which is not available with the 918+ kernel." The high base clock on the i3 models means they really kick ass compared to the lower-end models and, frankly, some of the higher-end models. I have run XPenology servers with the i3-8100, i3-8350K, i5-8600K, i5-9600K, and even the i9-9900K. The J5040 is not anywhere near as powerful as the others below, as it is a Pentium "Silver" based on the Atom core, not the Skylake core. Also, although it has 4 cores, ignoring turbo mode it is only a 2.0 GHz CPU. Consider these CPUs (what you will see without turbo):
    - Pentium 6400T: 2 cores w/ HT, 4 MB cache, 2.3/3.4 GHz
    - Pentium 6400: 2 cores w/ HT, 4 MB cache, 4.0 GHz
    - i3-8100T: 4 cores, 6 MB cache, 2.4/3.1 GHz (BIOS set)
    - i3-8100: 4 cores, 6 MB cache, 3.6 GHz
    - i3-8300T: 4 cores, 8 MB cache, 2.5/3.2 GHz (BIOS set)
    - i3-8300: 4 cores, 8 MB cache, 3.7 GHz
    - i3-8350K: 4 cores, 8 MB cache, 4.0 GHz
    - i5-8400T: 6 cores, 9 MB cache, 1.2/1.7 GHz
    - i5-8400: 6 cores, 9 MB cache, 2.8 GHz
    - i5-8500T: 6 cores, 9 MB cache, 2.1 GHz
    - i5-8500: 6 cores, 9 MB cache, 3.0 GHz
    - i5-8600T: 6 cores, 9 MB cache, 2.3 GHz
    - i5-8600: 6 cores, 9 MB cache, 3.1 GHz
    - i5-8600K: 6 cores, 9 MB cache, 3.6 GHz
    Then consider that both single-thread and multi-thread performance are important. In my mind the 8100 is a rock star because you can get them so cheap, at times only about $40 more than a used 6400. The 8350K is also a standout because of its comparatively excellent single-core and multi-core performance. The 8600K is the next model I like, because its multithreaded performance is finally enough of a step up over the i3-8350K to more than make up for the drop in single-core performance. Don't make the mistake of thinking the low-end models will save you appreciable money in power; they will not. Neither will the low-power models; they don't burn meaningfully less power. When you are looking at a NAS from a power-cost perspective, the most important thing to look at is idle power usage. Once you add in the power-saving script to schedule power savings, every single one of these CPUs runs at the same 800 MHz, has the same power-gating tech, supports the same power-saving modes, and so on. They idle at the same power usage (or close enough that it doesn't matter). Then you look at the relatively rare (for a NAS) power burn when the CPU is busy. This is where the lower frequency of the T models will burn less power over time. The important thing to remember is that in a NAS your CPU workload is fixed, and the T models, being slower, spend a greater amount of time in the high-burn mode. The regular CPUs are already back in idle mode while the low-power CPUs are still in their high-burn mode. So, while it's not exactly the same, even when you need to do work, the overall power used is a lot closer than you would think from looking at TDP numbers. So why choose the low-power models? Because they are easier to cool. You can use them in all sorts of places where the heat generated by a high-power CPU would overheat or damage nearby caps and the like. In my opinion, T models are never worth looking at unless you can't fit even a small heatsink and fan with at least some airflow. If turbo mode works (and the fact that I see CPU rates at 36,000,001 means it might), things change a bit. To check what your own box is doing, see the sketch below.
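    To check what clocks and scaling driver your own box is actually using (and therefore whether turbo is in play), a minimal look from SSH, using the standard Linux cpufreq sysfs paths:

        # Which scaling driver is active (intel_pstate is the one that manages turbo boost)
        cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
        # Current per-core clocks in kHz; values above the base clock suggest turbo is working
        cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq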
  5. Yes, I am running bare metal, also with an LSI HBA; I didn't change anything to allow for 16 HDDs.
  6. I hate to fix what isn't broken, but I might have the issue:
        admin@DSM:~$ ls /dev/synoboot*
        /dev/synoboot  /dev/synoboot1  /dev/synoboot2  /dev/synoboot3
    Given I have the extra device (/dev/synoboot3), I appear to have a problem. I am running 6.2.3-U2 on 918+ with 16 visible drive slots, 13 of them used (6 SSDs and 7 HDDs). I also have an AIC NVMe Intel SSD 750 400GB used as a read cache on the HDD volume. Volume 1 / storage pool 1 is SHR-1, 7x 8TB HDD using slots 1, 2, 3, 4, 5, 6, 12, BTRFS, healthy. Volume 2 / storage pool 2 is SHR-1, 6x 1TB SSD using slots 7, 8, 9, 10, 11, 13, BTRFS, healthy. My NVMe is cache device 1, healthy. Other than the strangeness that my HDDs and SSDs are not sequential in their slot numbers, and the fact that the two misordered drives (12 and 13) have had a few reconnects (none since at least March), I don't see any issues with the storage on any of the other disks.
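    For anyone comparing their own system, the check is just the commands below; the extra /dev/synoboot3 in my output above is what suggests a problem, since without it only /dev/synoboot, /dev/synoboot1, and /dev/synoboot2 would be listed:

        # List the loader device nodes over SSH
        ls /dev/synoboot*
        # Optionally, see how the kernel carved up the boot device
        cat /proc/partitions | grep synoboot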
  7. Hi folks! I am considering adding an additional server to my lab, this time going with another OS (OpenMediaVault, Ubuntu Server 20.04 LTS, maybe TrueNAS Core) to learn something new and get around a few items that I have not been able to get going in XPenology (16 threads, CPU turbo). Nothing else I look at seems to trust BTRFS in RAID 5 and offer the snapshots, self-healing, and file system scrubbing that are so straightforward with a Synology/XPenology box. Apparently the generic advice is:
    - The RAID 5/6 built into BTRFS is not ready for prime time; it is NOT usable for production.
    - You can make an mdadm RAID 5 and then format it with BTRFS, but that is a bad idea (unsure why).
    Does anyone here have a solid understanding of what Synology does differently from the other Linux-based options? (A sketch of the mdadm-plus-BTRFS layering is below.)
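    For reference, a minimal sketch of the mdadm-plus-BTRFS layering the second bullet describes (device names are placeholders; this is the generic Linux recipe, not Synology's exact tooling):

        # Build a 4-disk RAID 5 array with md, then put a single-device btrfs on top of it
        mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mkfs.btrfs -L volume1 /dev/md0
        mkdir -p /mnt/volume1
        mount /dev/md0 /mnt/volume1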
  8. I understand the difference between software RAID and hardware RAID. I was wondering if one of the cards had its own BIOS or some bizarre hooks into the motherboard firmware to allow that to function. The vendor says RAID is supported if the motherboard supports it. I can't imagine what that could mean, or what RAID functionality would depend on motherboard support. As an HBA, you can use standard operating-system RAID independently of the motherboard features (see the check below). This vendor also states you can boot from it provided the BIOS supports it. The second one says storage only, no boot, and non-RAID. Perhaps it won't be bootable, but it is an HBA, so it will absolutely support software-based RAID. They just seem to have very different descriptions for what are in all likelihood functionally identical cards. I ordered the same card as you, as well as an M.2 version. I have an unused PCIe 3.0 x2 M.2 slot that would be perfect, so I thought why the heck not try it out. I am concerned it will be fragile, more than anything else, to be honest.
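    One way to see that the RAID really lives in the operating system rather than on the card: any md software-RAID arrays the OS has assembled are listed in /proc/mdstat, regardless of which controller the member disks hang off:

        # Show md arrays, their RAID level, and member devices
        cat /proc/mdstat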
  9. The JMB585 seems to support (software) RAID and non-RAID. Any ideas if there is actually a difference here, or is it just Amazon fluff and BS? https://www.amazon.ca/ADWITS-Express-Expansion-Controller-Software/dp/B07X27R477/ref=sr_1_7?dchild=1&keywords=jmb585&qid=1595879950&sr=8-7 https://www.amazon.ca/Internal-Non-Raid-Controller-Desktop-Bracket/dp/B07ST9CPND/ref=sr_1_8?dchild=1&keywords=jmb585&qid=1595878196&sr=8-8 I like the first one as it has activity lights and is a little cheaper, but... it's the same chipset, so I expect support is there...
  10. 918+ image worked with an HPE version of the x520 for me.
  11. NVMe cache support

    I applied the -2 patch to my test system via auto update and, as a test, I even left the read-only cache on NVMe enabled. For some strange reason it failed to restart correctly (the OS didn't appear to start), but after a physical power down and power up everything seems to be working, and the NVMe read-only cache is still working.
  12. You are correct, it does seem to be used; this is either new or I was wrong all along. Strangely, on my i5-9600K system, enabling Quick Sync support led to MUCH lower CPU usage (85% down to 10%), but I didn't see much change in Optimize performance. I wonder if Plex throttles the optimize rate, because otherwise I would expect to see a significant difference when Quick Sync is used. Thank you!!!
  13. Starting with a fresh 1.04b boot image, I added the 13.3 extra and extra2, installed the latest DSM, and then updated the i915.ko file via SSH. Right away my i5-9600K was able to hardware transcode in Plex. Excellent work!!! (A quick way to confirm the driver loaded is sketched below.)
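    A minimal post-install check (standard Linux paths, nothing XPenology-specific): if the i915 driver loaded correctly, the DRI render node that Plex uses for Quick Sync should exist.

        # The GPU device nodes should be present, e.g. card0 and renderD128
        ls /dev/dri
        # The kernel log should also show the i915 driver initializing
        dmesg | grep -i i915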