XPEnology Community
  • 0

Starting from scratch


merve04

Question

Some of you may have seen my recent post about a file system error. I haven't been able to fix it, so I picked up 4x 12TB drives to offload my data and rebuild my current array. I'm going to return the four 12s when done; greasy, but whatever.
What I would like to know is: since I will once again have a 54TB storage pool in SHR2, can I create a btrfs volume and an ext4 volume on the same pool? My thought is to make a 1TB btrfs volume just for sensitive data (which is also duplicated on two different cloud storage services) and for all my apps. The second volume would be ext4, strictly for holding media. Is this doable?


Recommended Posts

  • 0

Unless there is a clear smoking gun, maybe figure out what's happening before making a big change?

 

Maybe follow the process in this thread; it will help us understand your CPU and separate the disk I/O from the interaction with the network. There are also lots of good examples of representative systems that you can compare against.

https://xpenology.com/forum/topic/13368-benchmarking-your-synology/

 


  • 0

I really appreciate the help, but comparing my system to others?! I offloaded 38TB in the span of 4 days via network transfer, and now when I try the same process of simply copying files from the NAS to my desktop, it starts at a crawl: 10... 20... 40... and maxes out at around 75-90 MB/s, whereas before it was instantly 100+ MB/s. Is it possible to just reinstall DSM fresh?

Do I just make a new USB key and reinstall DSM?


  • 0

The point of that test (or a NASPT test) is to separate the behavior of your disk system from the network or file type.  That would tell you something about what is or isn't happening and whether it is a problem.

 

You asked for help troubleshooting performance, but it sounds like you just want an affirmation to install again, so go for it.

 

 

 


  • 0

I think you misread my question, so let me ask again.

Is the correct procedure to reinstall DSM to just make a new USB key and boot my machine back up? As mentioned, I'm currently at 6.2.2u6. Can I reinstall 6.2.2 or am I forced to upgrade to 6.2.3?

I've seen before that you can do a reinstall of DSM with no personal data loss, but all packages and settings are gone?


  • 0

  

17 minutes ago, merve04 said:

I think you misread my question, so let me ask again.

Is the correct procedure to reinstall DSM to just make a new USB key and boot my machine back up? As mentioned, I'm currently at 6.2.2u6. Can I reinstall 6.2.2 or am I forced to upgrade to 6.2.3?

I've seen before that you can do a reinstall of DSM with no personal data loss, but all packages and settings are gone?

7 hours ago, merve04 said:

Just trying to browse the DiskStation via macOS using Finder is painfully slow. Is there anything I can check to improve this?

 

I don't think I misread your question; I'm still trying to help you with your request. Benchmarking the disk system (array) without the network is an important part of understanding what is going on. Also, I was going to ask you to test with and without SSD cache.

 

You still don't know where the performance bottleneck is but you want to reinstall DSM anyway.  Your choice.

 

So the same version can be reinstalled directly from Synology Assistant.  Packages and settings will need to be reconfigured.  No need to reburn your loader USB.

Edited by flyride

  • 0

admin@DiskStation:/$ dd if=/dev/zero bs=1M count=1024 | md5sum
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB) copied, 1.50142 s, 715 MB/s
cd573cfaace07e7949bc0c46028904ff  -

admin@DiskStation:/$ dd if=/dev/zero bs=1M count=4096 | md5sum
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 6.02468 s, 713 MB/s
c9a5a6878d97b48cc965c1e41859f034  -

 

admin@DiskStation:/$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync
Password:
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.03175 s, 132 MB/s
admin@DiskStation:/$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 1.77604 s, 151 MB/s
admin@DiskStation:/$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.59729 s, 103 MB/s
admin@DiskStation:/$ sudo dd bs=1M count=256 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync
256+0 records in
256+0 records out
268435456 bytes (268 MB) copied, 2.08095 s, 129 MB/s
admin@DiskStation:/$

 

 

admin@DiskStation:/$ sudo dd bs=1M count=2048 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 12.7959 s, 168 MB/s
admin@DiskStation:/$ sudo dd bs=1M count=2048 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB) copied, 13.3068 s, 161 MB/s
admin@DiskStation:/$

 

admin@DiskStation:/$ sudo dd bs=1M count=4096 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 33.0768 s, 130 MB/s
admin@DiskStation:/$ sudo dd bs=1M count=4096 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB) copied, 32.0901 s, 134 MB/s
admin@DiskStation:/$

 

 

admin@DiskStation:/$ time $(i=0; while (( i < 9999999 )); do (( i ++ )); done)

real    0m24.437s
user    0m23.838s
sys    0m0.597s
admin@DiskStation:/$

Edited by merve04
Link to comment
Share on other sites

  • 0

Array performance is definitely slow.  You have plenty of CPU (should be no penalty for btrfs or SHR2) and your disk throughput is 1/3 of what it should be on writes.

 

Individual read performance seems okay, so something interesting is going on with writes. In DSM there is a Resource Monitor utility: launch it, go to Performance > Disk, then click View All, and you should get real-time I/O figures showing reads, writes, utilization, etc. on a per-disk basis. The panel can be expanded to show all your disks by dragging the top or bottom.

 

Repeat the write tests above while monitoring this panel to see if a particular drive is overused on writes or has extremely different utilization.
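If you prefer watching this from the shell instead of the GUI, here is a rough sketch (it assumes iostat from sysstat is present on DSM; if it isn't, the raw counters in /proc/diskstats show the same information). Run the monitor in one SSH session and repeat the write test in another, using the same test path as earlier in the thread:

# session 1: extended per-device statistics, refreshed every 2 seconds (MB/s)
iostat -x -m 2

# session 2: repeat the write test while watching which sdX devices are busy
sudo dd bs=1M count=2048 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync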

 

[Screenshot: DSM Resource Monitor > Performance > Disk, "View All" panel showing per-disk reads, writes, and utilization]

Edited by flyride
added image

  • 0
4 hours ago, merve04 said:

sudo dd bs=1M count=4096 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync

I have a 12x 4TB RAID6 (1.04b, 918+, 6.2.3, fresh install, all HDDs on AHCI, i3-9100), and with lower counts like 256 or even 4096 I see low and inconsistent values.

With 20-50 GB I see much higher values, and that is also what I see when writing big files of that size in real life (like 10x 3GB files).

From Windows 10 (M.2 NVMe SSD) to the NAS I see ~1100 MB/s for as long as the RAM acts as a cache, and ~600-700 MB/s after that point, which is roughly the value I see in the test above when writing 20-50 GB.

 

I was thinking about migrating it to 3617 to see how it performs with that; I guess the differences some people see might be because of the different kernel, 3.10.105 vs. 4.4.59.
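For reference, a minimal sketch of a larger run along these lines, reusing the test path from earlier in the thread (adjust the count to whatever your free space allows; the read-back step is optional):

# write ~20 GB so the result is not dominated by RAM caching
sudo dd bs=1M count=20480 if=/dev/zero of=/volume1/Documents/testx conv=fdatasync

# optionally drop the page cache, then read the file back for a read figure
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches
sudo dd bs=1M if=/volume1/Documents/testx of=/dev/null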

Edited by IG-88

  • 0
5 hours ago, IG-88 said:

I have a 12x 4TB RAID6 (1.04b, 918+, 6.2.3, fresh install, all HDDs on AHCI, i3-9100), and with lower counts like 256 or even 4096 I see low and inconsistent values.

With 20-50 GB I see much higher values, and that is also what I see when writing big files of that size in real life (like 10x 3GB files).

From Windows 10 (M.2 NVMe SSD) to the NAS I see ~1100 MB/s for as long as the RAM acts as a cache, and ~600-700 MB/s after that point, which is roughly the value I see in the test above when writing 20-50 GB.

 

I was thinking about migrating it to 3617 to see how it performs with that; I guess the differences some people see might be because of the different kernel, 3.10.105 vs. 4.4.59.

I’m not sure what to take from this?!?


  • 0

You have mapped out my arrays exactly as they're configured. I did check the enable write cache setting: drives 1-5 are enabled, 7-14 are not. Should I enable them? Or disable it on drives 1-5?

 

 

Yeah, it won't let me enable it on 7-14; it says "operation failed." I did move drives around in my bays before reinstalling. I used to have 4x 8TB on the Intel controller and 3x 8TB, 5x 4TB on the LSI. Could this be the tipping point in performance? If the array is primarily keeping the group of 8TB drives active, could not having cache enabled on them be killing the performance?
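A hedged way to cross-check this from the shell (the device name /dev/sdg is only an example; smartctl normally ships with DSM, hdparm may not, and through the LSI HBA these commands may fail just like the GUI did):

# query the current write-cache state of one drive (read-only, safe)
sudo smartctl -g wcache /dev/sdg
sudo hdparm -W /dev/sdg

# attempt to turn it on; if DSM reports "operation failed" this will likely fail too
sudo smartctl -s wcache,on /dev/sdg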

 

Could I power down and move 6x 8TB to the mobo controller and leave the balance on the LSI controller?

 

I'm kind of guessing here that md2 is 4TB x 12 and md3 is the other half of the 8TB drives, so 4TB x 5. Therefore, with 38TB of data, md2 is full and md3 is being populated?

 

Also, just for info, all drives are run-of-the-mill desktop-grade Barracuda 5400 or 5900 RPM drives.

Edited by merve04

  • 0
14 hours ago, flyride said:

Just a random thought, is write cache enabled for the individual drives?

AFAIR this did not work with the LSI SAS controller (one of the reasons I switched to AHCI).

 

15 hours ago, merve04 said:

I’m not sure what to take from this?!?

When estimating performance with dd as above, you might want to write a bigger amount of data to get a reliable number.

 

Edit: the method of testing the volume with dd will not work reliably with SHR, because you have mixed RAID sets, and depending on where you are on the LV a different number of disks will be used for writing (like RAID5 over 12 disks vs. RAID5 over 7 disks).

IMHO, when using SHR you will end up with one LV (as in LVM), and the size of the volumes you choose in DSM (DSM's name for what is more likely a partition within the LV?) does not correlate to the RAID structure below in the LV. You would need to check the structure manually and make the volumes correlate by size to the found (or guessed) structure.
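A sketch of how one might check that structure manually over SSH, assuming the usual SHR layout of mdraid arrays combined with LVM (the LVM tools are normally present on an SHR system; output will differ per box):

# the md arrays behind the SHR pool (md0/md1 are system/swap, md2 and up hold data)
cat /proc/mdstat
sudo mdadm --detail /dev/md2

# how the data arrays are combined into the logical volume the DSM volumes sit on
sudo pvs
sudo lvs
sudo vgdisplay -v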

Edited by IG-88

  • 0
14 hours ago, merve04 said:

Yeah, it won't let me enable it on 7-14; it says "operation failed." I did move drives around in my bays before reinstalling. I used to have 4x 8TB on the Intel controller and 3x 8TB, 5x 4TB on the LSI. Could this be the tipping point in performance? If the array is primarily keeping the group of 8TB drives active, could not having cache enabled on them be killing the performance?

 

Could I power down and move 6x 8TB to the mobo controller and leave the balance on the LSI controller?

 

I'm kind of guessing here that md2 is 4TB x 12 and md3 is the other half of the 8TB drives, so 4TB x 5. Therefore, with 38TB of data, md2 is full and md3 is being populated?

 

Not quite enough information to be sure, but it seems like we should keep exploring the relationship between the drives and the controllers. I'm not sure what the impact is of not having disk write cache enabled on the LSI, but a lack of local write cache is certainly going to impact performance to some extent, and it would only show up in the dd throughput or when writing files to the NAS, not in the raw read tests.

 

Technically you should be able to move the drives around between the controllers with no problem.  Always make sure the arrays are healthy before doing this.

 

We really don't have any control over where the filesystem writes the files; you would assume it starts filling up from the first sector (md2), but it's hard to know for sure without digging into btrfs itself.

 

Two things I'd think about trying at this point are: 1) updating the LSI controller driver (IG-88 extra.lzma) and 2) replacing the LSI with an AHCI-compatible controller (super inexpensive now).  I really don't think reinstalling DSM is going to help at all.

Edited by flyride

  • 0
18 minutes ago, flyride said:

1) updating the LSI controller driver (IG-88 extra.lzma) and

IMHO that's not working; it might be because of kernel code changes in Synology's kernel. There are other problems too, like what we see with the LSI drivers and disk hibernation: when using the driver from Synology's source that problem is gone, but we lose S.M.A.R.T. (and the temperature and serial number of the disk).

 

18 minutes ago, flyride said:

2) replacing the LSI with an AHCI-compatible controller (super inexpensive now).

 

In general there is no good, affordable 8-port AHCI controller; they only support PCIe 2.0 and two PCIe lanes -> max 1000 MB/s for all disks.

The best I found are JMB585-based 5-port controllers: only 2 lanes, but with PCIe 3.0 support -> max. 2000 MB/s.

I use a combination of 6x onboard, 5x JMB585, 1x 88SE9215.

 

 

18 minutes ago, flyride said:

  I really don't think reinstalling DSM is going to help at all.

I think the same. The missing write cache will lower the performance, and the "inconsistent" structure of SHR will also make measurements inconsistent; when hitting an "area" with a lower number of disks in a RAID set, it will be slower.


  • 0

Yes, I've decided to look into the JMB585 cards. They're a bit pricey, $66 a pop on Amazon; I found them on eBay for as low as $30-35, but I'm not always sure what I'm getting on eBay as far as quality goes. Plus, I would need a couple more sets of reverse 8087-to-SATA breakout cables. I will try to move one drive off the LSI onto the mobo and see if it boots fine, and maybe rinse and repeat if all goes well.


  • 0
8 minutes ago, merve04 said:

I was lucky to hit 30 MB/s prior... So I may need to rethink the use of the LSI controller with 918+.

I used 11x 4TB disks (RAID6) on my old hardware with an LSI (5 disks on the LSI) and had 350-450 MB/s read and write speeds with DSM 6.1 3615 (used a 10G NIC; that's real-life performance with Windows).

The prices on Amazon are inflated; I got my first one in April for 25€ and the second two weeks ago for 45€ (I wanted it delivered by Amazon in a few days; it gets cheaper if it is delivered from China, but that might take weeks instead of days).


  • 0
3 minutes ago, IG-88 said:

DSM 6.1 3615 (used a 10G NIC; that's real-life performance with Windows)

Could it be the difference between 3615 and 918 when using a 9211?

 

As mentioned in a previous post, prior to wiping out my NAS, my HDDs were somewhat mixed between the mobo and the LSI controller. I remember starting with only 7 HDDs, which I had all plugged into the LSI, and as I was expanding I naturally started plugging into the mobo. So going back to before this all started, I would have had 4x 8TB and 1x 3TB (surveillance) plugged into the mobo, and the remainder of the 4s and 8s were on the LSI. I've kind of mimicked this again, but with 5x 8TB and 1x 4TB on the mobo, and I moved my 3TB (a surveillance HDD which runs standalone) to the LSI.
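A quick, hedged way to confirm which controller each disk currently hangs off (the sysfs symlink targets contain the PCI address of the controller; compare those addresses against the lspci output, assuming lspci is available on your build):

# each symlink target contains the PCI path of the controller the disk sits on
ls -l /sys/block/sd*

# identify which PCI address is the onboard AHCI controller and which is the LSI HBA
lspci | grep -iE 'sata|sas|ahci'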

Edited by merve04

  • 0
1 minute ago, merve04 said:

Could it be the difference between 3615 and 918 when using a 9211?

I can't rule that out. I did not compare directly between 918+ and 3615/17 on the same hardware, and when changing to new 918+-capable hardware I also changed to AHCI only. That's my normal NAS and I'm not going to do experiments with my 30TB of data; I already had enough fun with the LSI controllers and disks dropping from the RAID. Also, my supply of 10G NICs is short, so even with my new Intel i5-6500 based test hardware (and two LSI 9211s) I will not be able to experiment much, as I would need to remove the 10G NIC from the system doing backups, and I usually don't have 8-10 disks around for a bigger RAID array to test with (or 3-4 SSDs).

 

3615/17 come with native LSI SAS drivers (mpt2sas, mpt3sas), so they might do better than the added 918+ LSI SAS drivers when it comes to performance - not to mention the problems with disk hibernation and the risk of damaged RAIDs with 918+.

If I needed to use LSI SAS, I'd go with 3617, as it has the newer LSI SAS driver (newer than in 3615).

On some systems with just one or two PCIe 2.0 slots (and enough lanes on those slots), an LSI 9211 might be the only good (cheap) choice for a high disk count like 12 while also using a 10G NIC.


  • 0
3 hours ago, merve04 said:

So I moved my drives around; I have 5x 8TB and 1x 4TB on Intel, 4x 4TB and 2x 8TB on the LSI.

I was lucky to hit 30 MB/s prior... So I may need to rethink the use of the LSI controller with 918+.

 

From the "View All" pictures it looks like there are 7 disks active when writing, and that should be the 2nd array with the 7x 8TB disks (where the used space is 7x 4TB); the 1st array would be 12x 4TB (containing all disks except 5).

 

When moving a drive that is used during writes (in the pictures, drives 7-13) from the LSI to AHCI, you could check that drive's individual write performance in the View All table with the HDD's cache on or off while it is connected to AHCI.
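For the read side, a per-drive comparison can be done safely from the shell, since it never writes to an array member; a sketch (the device name /dev/sdh is only an example, and hdparm may not be installed on DSM):

# raw sequential read from a single member disk -- read-only, so safe on a live array
sudo dd if=/dev/sdh of=/dev/null bs=1M count=2048 iflag=direct

# rough equivalent if hdparm happens to be available
sudo hdparm -t /dev/sdh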

 

If you are going to change to AHCI, then besides the 5-port controller you would also need at least 2 more ports, so you would need a PCIe x1 controller like a SI-PEX40064 (88SE9215-based; I would not use an 88SE91xx card; the other two bigger slots are already taken by the 10G NIC and the 5-port JMB585 controller).

If you are not in a hurry, you might wait until I have some time to do tests with an LSI SAS controller in my new test hardware and the new version of the driver pack. I guess I will be able to get 3 or 4 SSDs for testing next week, so I might be able to do some performance testing with 3615/17 and 918+ on the same hardware.


  • 0

Interesting, but it must be slow; 10 HDDs on a PCIe x1 bus, ouch. Also the price: being in Canada, it's $85 + $16 of shipping/import, so that's like $135 CAD.
I've looked on Newegg and could get a pair of reverse breakout cables and a pair of 5-port SATA JMB585 cards for slightly cheaper. It's something I may consider in the future. For now, it seems that shuffling the drives around so the bulk of the md3 array resides on the mobo controller has greatly improved write speeds. I moved a couple of 20GB files from the Mac to the NAS and gigabit was fully saturated; I'll admit it's faster going from the NAS to the desktop, but that's better than 50-70 MB/s and sometimes stalling, resulting in incomplete transfers.

Edited by merve04

This topic is now closed to further replies.