merve04 last won the day on February 17

merve04 had the most liked content!

About merve04

  • Rank
    Advanced Member


  1. Yes, I’ve read that exact post somewhere else, I think on a different website; to be honest, I’m not nerdy enough to know the meaning behind all of it. Since it says each digit position represents a controller and the digit itself is the number of ports, my thinking is 655: the first controller (the mobo) has 6 ports, and each JMB585 has 5. Is that a correct assumption? Is that all I need to change? Heck, do I even need to change anything? I’ve never played around with any of those numbers and all my drives populate correctly in DSM now.
  2. I’ve searched around for an answer and I’m not 100% sure I fully understand, but here we go. My mobo, a Gigabyte B365, has 6 onboard SATA ports. I’ve ordered 2x JMB585 cards, each with 5 SATA ports. Would the proper value be =655? Thanks.
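For what it’s worth, here is roughly where that value would live, assuming a Jun-style loader where the settings sit in grub.cfg on the USB stick’s first partition. Treat this as a sketch, not gospel; the only thing confirmed above is the digit-per-controller meaning of SataPortMap:

```shell
# grub.cfg fragment (illustrative). Each SataPortMap digit is the port count
# of one controller, in the order the controllers are detected:
#   6 (B365 onboard) + 5 (JMB585 #1) + 5 (JMB585 #2) -> 655
set sata_args='SataPortMap=655'
```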
  3. Any particular brand you’ve found success with? Or will any do, as long as it’s based on the JMB585 chipset?
  4. I sincerely thank you for your time and narrowing down the likely cause of my issue.
  5. Interesting, but it must be slow; 10 HDDs on a PCIe 1x bus, ouch. Also the price: since I’m in Canada, it’s $85 + $16 shipping/import, so roughly $135 CAD. I’ve looked on Newegg and could get a pair of reverse-breakout cables and a pair of 5-port JMB585 cards for slightly less. It’s something I may consider in the future. For now, shuffling the drives around so the bulk of the md3 array sits on the mobo controller has greatly improved write speeds. I moved a couple of 20GB files from my Mac to the NAS and gigabit was fully saturated; I’ll admit it’s faster going from NAS to desktop, b
  6. Could it be the difference between 3615 and 918 when using a 9211? As mentioned in a previous post, prior to wiping out my NAS, my HDDs were somewhat mixed between the mobo and the LSI controller. I remember starting with only 7 HDDs, all plugged into the LSI, and as I expanded, I naturally started plugging into the mobo. So going back before this all started, I would have had 4x 8TB and 1x 3TB (surveillance) on the mobo, and the remainder of the 4s and 8s on the LSI. I’ve kind of mimicked this again, with 5x 8TB and 1x 4TB on the mobo and my 3TB (surveilla
  7. So I moved my drives around: I have 5x 8TB and 1x 4TB on Intel, and 4x 4TB and 2x 8TB on LSI. I was lucky to hit 30 MBps prior, so I may need to rethink using the LSI controller with 918+.
  8. Yes, I’ve decided to look into the JMB585 cards. A bit pricey at $66 a pop on Amazon; I found them on eBay for as low as $30-35, but I’m never sure what I’m getting on eBay as far as quality goes. Plus I’d need a couple more sets of reverse 8087-to-SATA breakout cables. I’ll try moving one drive off the LSI onto the mobo and see if it boots fine, then rinse and repeat if all goes well.
  9. You have mapped out my arrays exactly as they’re configured. I did check the enable-write-cache setting: drives 1-5 are enabled, 7-14 are not. Should I enable them? Disable it on drives 1-5? Yeah, it won’t let me enable it on 7-14; it says operation failed. I did move drives around in my bays before reinstalling. I used to have 4x 8TB on Intel and 3x 8TB plus 5x 4TB on LSI. Could this be the tipping point in performance? With the array primarily using the group of 8TB drives, them being active without write cache enabled could be killing performance. Could I power down and move 6x 8TB onto the mobo controlle
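On the write-cache question: DSM’s checkbox toggles the drive’s own volatile write cache, and the same state can be inspected from an SSH shell. A hedged sketch only; the device name is an example, and drives hanging off some LSI HBA firmware refuse the command, which would line up with the “operation failed” error:

```shell
# Query the on-drive write cache state (example device name):
sudo hdparm -W /dev/sdb
# Try to enable it; this can fail for drives behind certain HBAs:
sudo hdparm -W1 /dev/sdb
```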
  10. I’m not sure what to take from this?!?
  11. All drives but #5 are part of volume1. Here I’m transferring 20GB of random documents to the NAS, and I’m seeing drives 1-4 doing nothing and drive 14 doing nothing. Drives 1-5 are plugged into the mobo, 7-14 into the LSI-9211.
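A quick way to double-check which disks are actually taking writes during a transfer, independent of the DSM graphs (a rough sketch; the sd-name pattern assumes standard Linux device naming):

```shell
# Field 3 of /proc/diskstats is the device name and field 10 is total
# sectors written since boot; run this twice a few seconds apart during
# a copy, and the disks whose counters don't move are the idle ones.
awk '$3 ~ /^sd[a-z]+$/ { print $3, $10 }' /proc/diskstats
```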
  12. admin@DiskStation:/$ dd if=/dev/zero bs=1M count=1024 | md5sum
      1024+0 records in
      1024+0 records out
      1073741824 bytes (1.1 GB) copied, 1.50142 s, 715 MB/s
      cd573cfaace07e7949bc0c46028904ff  -
      admin@DiskStation:/$ dd if=/dev/zero bs=1M count=4096 | md5sum
      4096+0 records in
      4096+0 records out
      4294967296 bytes (4.3 GB) copied, 6.02468 s, 713 MB/s
      c9a5a6878d97b48cc965c1e41859f034  -
      admin@DiskStation:/$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync
      Password:
      256+0 records in
      256+0 records out
      268435456 bytes (268 MB) copied,
  13. I don’t have or use SSD cache. Is there benchmarking software that can be used directly in DSM?
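As far as I know there’s nothing built into the DSM UI for this, but the same dd pattern from an SSH shell gives a rough sequential-write figure. A minimal sketch; the target path is just an example, so point it at the volume you want to measure:

```shell
# Write 256 MiB of zeros and force a flush to disk before dd reports,
# so the number reflects the array rather than RAM caching.
dd if=/dev/zero of=/tmp/dd-writetest bs=1M count=256 conv=fdatasync
rm -f /tmp/dd-writetest
```

dd prints the throughput on its final line; deleting the test file afterwards keeps the volume clean.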
  14. I think you misread my question, so let me ask again: is the correct procedure to reinstall DSM simply to make a new USB key and boot my machine back up? As mentioned, I’m currently on 6.2.2u6. Can I reinstall 6.2.2, or am I forced to upgrade to 6.2.3? I’ve seen before that you can do a reinstall of DSM with no personal data loss, but all packages and settings are gone?
  15. I really appreciate the help, but comparing my system to others?! When I offloaded 38TB over the span of 4 days via network transfer it was fine, yet now when I try the same simple process of copying files from the NAS to my desktop, it starts at a crawl: 10... 20... 40... and maxes out around 75-90 MBps, where before it was instantly 100+ MBps. Is it possible to just reinstall DSM fresh? Do I just make a new USB key and reinstall DSM?