NooL

Members
  • Content Count: 79
  • Joined

  • Last visited

Community Reputation

7 Neutral

About NooL

  • Rank: Regular Member


  1. @kiler129 Wow, this is such an impressive and exciting read. Awesome work!
  2. Currently using a Fractal Design Node 804, which I'm pretty happy with, but there is room for improvement: 1.) When the drive bays are mounted there is very little wiggle room for the power/SATA adapters - an SFX power supply is recommended in my opinion. 2.) I'd like better overall airflow, although only so much can be done, I suppose. 3.) I'd like a few more options for routing cables through the case. I have bought, but not yet installed, a 16-port hot-swap Inter-Tech 3U 3414 case which I'm very much looking forward to putting to use:
  3. There hasn't been much of an update from me, as I haven't gotten around to doing much yet. I have a new Inter-Tech 3U case coming soon, along with a new HBA card and a new PSU. I haven't laid out an exact plan yet, but I think I'll go ahead and split up the volume so that I have one volume with the WD Reds and one with the Toshibas; that way I can run pure RAID 5 or 6. The only problem is that I have to temporarily find a place to dump 30TB of data, so I'll probably purchase some external drives as a temporary location. I'll probably also move as many drives as I can onto the onboard SATA. The new h
  4. @flyride Ah, that makes sense in regards to the SHR2. My stripe cache was at 4096; I tried raising it to 32768 (see the stripe-cache sketch after this list), but without any noticeable result except for a bit higher RAM usage (I have 16GB). I'm starting to think I have one or two shady disks: I keep seeing higher utilization on 2 disks (drive b and drive k) along with noticeably lower reads/s (this is during a btrfs scrub). Looks kind of off, doesn't it? It seems to be persistent on those 2 drives in high-activity scenarios. I've run an extended SMART diagnostic on all the drives, all came back
  5. Hmm, I must have misunderstood you then; my logic was based on you saying that it was most likely due to more than 28TB of my current ~36TB volume being used, and that adding an extra 8TB would decrease the overall percentage of space used - or so I understood your earlier post. If it were the CPU limiting it, wouldn't I expect to see higher utilization or I/O wait numbers during transfers?
  6. Doh! I didn't even notice they had gone from 1151 to 1200 :S Thank you
  7. How about this: I'll get another 8TB HDD and add it to the volume, making it a 10-disk array. Then it should be below the drive capacity utilization, and the accompanying performance decrease, that @flyride referred to in regards to raw write performance. If it's still the same, then I might put as much as I can on onboard SATA just to test, but my thinking was that an LSI should have more bandwidth than onboard SATA. If I were to upgrade the CPU down the line: @IG-88 would an i3-10100 be okay (also for transcoding)? I think I saw you "warn" against it in the driver thread as it was a new device id or
  8. It is also the oldest drive I have, with close to 42,000 power-on hours, but yeah, I'm surprised that it's that slow compared to the rest. Write cache was disabled in Synology Storage Manager for all drives. But I tried checking the write caching with hdparm -W /dev/sd* and could see that for sdb write-caching was off while it was on for all the other drives, oddly enough. I changed that to on via hdparm (see the write-cache sketch after this list) and now the utilization is more in line with the others - a tad higher, but overall more in line. I did try the dd tests a bit later (every time I have run it, I hav
  9. @flyride Gotcha. Here are the results (see the hdparm read-test sketch after this list):
     /dev/sda: Timing buffered disk reads: 1040 MB in 3.01 seconds = 345.27 MB/sec
     /dev/sdb: Timing buffered disk reads: 660 MB in 3.01 seconds = 219.31 MB/sec
     /dev/sdg: Timing buffered disk reads: 692 MB in 3.01 seconds = 230.25 MB/sec
     /dev/sdh: Timing buffered disk reads: 712 MB in 3.00 seconds = 237.23 MB/sec
     /dev/sdi: Timing buffered disk reads: 708 MB in 3.00 seconds = 235.76 MB/sec
     /dev/sdj: Timing buffered disk reads: 720 MB in 3.00 seconds = 239.
  10. I removed the cache before these results, yep; with the cache on it was about 200 MB/s: 1073741824 bytes (1.1 GB) copied, 5.41584 s, 198 MB/s. I can confirm that I am currently using 29.71TB on the data volume, yep - would the performance hit really be that big on SHR/SHR2 with ~85% used? In regards to the SMR part, to my knowledge they should be non-SMR drives. The only "odd" thing I've noticed is that when I'm running the dd tests above, as an example, "Drive Utilization" will be way higher for Drive 2 than for the other drives,
  11. Yeah, I call it the System Disk - probably a poor choice of words; a better name would be "App Disk", since it holds my installed apps, Docker, Emby, etc. The App disk is an SHR volume with no data protection. The Storage volume is an SHR2 volume. The NVMe is attached to the Storage volume, yes (as read-only cache). mdstat looks like this: Preliminary tests (see the dd sketch after this list): Single SSD volume (3 tests)
      dd bs=1M count=1024 if=/dev/zero of=/volume1/System/testx conv=fdatasync
      1073741824 bytes (1.1 GB) copied, 2.54734 s, 4
  12. Good point. My NAS (DS918+ 6.2.3 on loader 1.04b with driver pack v0.12.1):
      Motherboard: ASRock B365M Pro4
      CPU: Intel Pentium Gold G5400
      Memory: G.Skill 16GB DDR4-2400
      NIC: Intel X540-T2 10GbE RJ45
      NVMe: 2x 128GB Intel 660p
      "System Disk": Crucial MX500 1TB SSD (attached to the onboard SATA controller)
      "Storage volume": 4x4TB WD Red + 5x8TB Toshiba N300 NAS (the 4x4TB + 4x8TB are attached to an HP H220 (LSI 9207-8i, PCIe 3.0 x8) and the last 8TB is attached to onboard SATA)
      Copying internally (via DSM GUI copy) from my Storage volume to System Disk (HDD
  13. Alrighty, I will try digging through the logs a bit and see what I find. I would love to stay on 918+ as I am using SHR (I know it can be enabled on 3617 with an edit, though) along with transcoding and NVMe. But I think I might have to try, or try a reinstall, because lately I'm experiencing really poor performance. Copying from the 9-disk array -> SSD is around 200-300MB/s. Copying from SSD -> the 9-disk array is around 200-300MB/s. Copying over 10GbE is around 200-300MB/s. I know I've had a lot better performance in the past, but the system has b
  14. LSI controller in IT mode, yes, but not using hibernation, no. In theory it could be either a PSU issue or a cable, but both have been checked (a new cable was tried and a new PSU was used during the new build). It did happen again on a reboot yesterday, so it happens sometimes on either power off/on or reboot (it has happened 3 times now: twice on shutdown/startup and once on a reboot). I haven't checked /var/log - do I need to look for anything specific there? (see the log-checking sketch after this list) I did a quick look or two in dmesg but could only see a message from when the RAID was degraded, from what I could tell.
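
The stripe-cache sketch referenced above: a rough example of how the md stripe cache can be inspected and raised on the fly, assuming the data volume sits on md2 (the device name is an assumption; confirm it in /proc/mdstat first). The value is in pages, so the RAM cost is roughly stripe_cache_size x 4KB per member disk, and the setting does not survive a reboot.

    cat /proc/mdstat                                   # confirm which md device backs the data volume
    cat /sys/block/md2/md/stripe_cache_size            # current value in pages (4096 in my case)
    echo 32768 > /sys/block/md2/md/stripe_cache_size   # raise it; costs about 32768 x 4KB of RAM per member disk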
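
The write-cache sketch: checking and enabling drive write caching with hdparm, roughly as described above (the /dev/sd[a-k] range is an assumption; adjust it to the drives actually present).

    for d in /dev/sd[a-k]; do echo "$d"; hdparm -W "$d"; done   # show the current write-cache state per drive
    hdparm -W1 /dev/sdb                                         # enable write caching on the drive that had it off
    hdparm -W0 /dev/sdb                                         # or -W0 to turn it back off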
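
The hdparm read-test sketch: the per-drive numbers above come from hdparm's buffered read test run per device; it's best done while the array is otherwise idle, and repeated a couple of times to smooth out outliers.

    for d in /dev/sd?; do hdparm -t "$d"; done   # prints "Timing buffered disk reads: ... MB/sec" for each device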
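
The dd sketch: the sequential write test quoted above, plus a matching read-back with the page cache dropped first so cached data does not inflate the result (the test path is from my setup; adjust as needed, and run as root).

    dd bs=1M count=1024 if=/dev/zero of=/volume1/System/testx conv=fdatasync   # sequential write, flushed before dd reports the rate
    sync; echo 3 > /proc/sys/vm/drop_caches                                    # drop the page cache before reading back
    dd bs=1M count=1024 if=/volume1/System/testx of=/dev/null                  # sequential read of the same file
    rm /volume1/System/testx                                                   # clean up the test file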
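
The log-checking sketch: the kind of greps I would try for the /var/log and dmesg question above (the persistent log location is an assumption for DSM and may differ).

    dmesg | grep -iE 'ata[0-9]|mpt[23]sas|md/raid|error'   # controller resets, link errors, kicked or failed drives
    grep -iE 'raid|degraded|sd[a-z]' /var/log/messages     # the same events in the persistent log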