XPEnology Community

DS918+ 1.04b - Strange issue with "disappearing" hard drives on shutdown.


NooL


Hi all

 

System: DS918+

Loader: 1.04b

Volume: 4x4TB + 5x8TB in SHR2.

 

A while ago I had a power outage, and when the DiskStation came back up, one drive was missing completely in DSM and the RAID was degraded. I thought it was a bit strange, since it was one of my newer drives, but wrote it off to the power outage. I then turned the NAS off completely once or twice for some reason, and the drive came back, this time as "not initialized", but I could add it to the RAID and rebuild successfully. Again, I thought it was a bit strange but didn't think much of it.

 

In the meantime I have moved the system to a new motherboard and new NIC, added the 9th drive (before it was 4x4 + 4x8), and reinstalled the system.

 

About a week ago I was playing around with my local network (changing IP ranges) and had to turn off the NAS twice. When I logged in after those two power cycles (done properly via the power button package), I could see that my RAID was again degraded, this time with 2 drives missing, and in the log I could see that the first power cycle had taken out drive 14 and the next one drive 13. I slightly panicked, but also figured this couldn't be 2 dead drives in a row - something must be going on. I tried rebooting a few times with no result, tried turning it on and off a couple of times with no result, then pulled the power cord and moved it to my office, booted it up again (no changes made), and voila, both drives were there again - again as "Not Initialized", but they could be added to the RAID and rebuilt successfully.

 

What is going on here? I'm almost 100% positive that this is not a hard drive issue; both drives are perfectly fine, have ~10k hours on them, and show no issues.

 

Could it be that the system boots too quickly for all drives to be initialized/seen by DSM? (And if so, could it be fixed by simply increasing the timeout in grub.cfg? The timeout options are shown there.)
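For what it's worth, a sketch of what that change would look like on the loader stick - the parameter name is standard GRUB, but the default value here is an assumption, so check your own grub.cfg:

# grub/grub.cfg on the 1.04b USB loader (illustrative values)
set timeout='10'    # a longer menu delay before the kernel loads gives
                    # slow-spinning drives extra time to come online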

 

[screenshot: DSM Storage Manager showing the degraded volume]

This is how it looks in DSM. The reason I'm thinking it could be a timeout/too-quick-boot issue is that both times it has been the "last" drives: the first time it was drive 14, the other time drive 14 and then 13.

 

Any ideas? I'm at the point where I'm scared to shut it down :D

 

Oh, and it doesn't seem like there are any issues with reboots once the system is running; I've only seen the issue on power cycles.


On 2/6/2021 at 5:30 PM, NooL said:

Oh, and it doesn't seem like there are any issues with reboots once the system is running; I've only seen the issue on power cycles.

LSI SAS controller? The 918+ has issues when disk hibernation is active; the 3617 is the best alternative in that case (besides not using disk hibernation).

It can also be the PSU, and maybe a cable problem.

Did you check the logs in /var/log/?

 


8 hours ago, IG-88 said:

LSI SAS controller? The 918+ has issues when disk hibernation is active; the 3617 is the best alternative in that case (besides not using disk hibernation).

It can also be the PSU, and maybe a cable problem.

Did you check the logs in /var/log/?

 

 

LSI controller in IT mode, yes, but not using hibernation, no :)

 

In theory it could be either a PSU issue or a cable - but both have been checked (tried a new cable, and a new PSU was used during the new build).

 

It did happen again on a reboot yesterday... So it happens sometimes on either power off/on or reboot. (It has happened 3 times now: twice on shutdown/startup and once on reboot.)

 

Haven't checked /var/log - do I need to look for anything specific there? I did a quick look or two in dmesg but could only see a message from when the RAID was degraded, from what I could tell.

 

Appreciate your help

 

 


On 2/8/2021 at 10:34 AM, NooL said:

Haven't checked /var/log - do I need to look for anything specific there? I did a quick look or two in dmesg but could only see a message from when the RAID was degraded, from what I could tell.

There are some disk-specific log files and the general "messages" log.
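Something along these lines should surface disk events (standard Linux/DSM log locations; exact file names can vary by DSM version):

dmesg | grep -iE 'sd[a-z]|ata[0-9]|link|reset'                    # kernel messages about disks and the controller
grep -iE 'disk|hotplug|md[0-9]' /var/log/messages | tail -n 50    # recent entries from the general system log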

IMHO the safe way to exclude any DSM/driver-related problems would be to change to loader 1.03b and migrate to 3617. No NVMe or hardware transcoding with that, but it has the Synology LSI SAS drivers they use for the business units, which are supposed to work stably; with 918+ the driver is added on and not as safe as the "original" (as we can see with the SMART and hibernation problems).

By using 3617 (at least temporarily) you could rule out some factors and drill down to the source of the problem.

I did the same thing with my system: just swapped the loader and installed DSM 3617 over the 918+. It can be reverted the same way; just insert the "old" 918+ loader and install 918+.

 


On 2/9/2021 at 9:30 PM, IG-88 said:

There are some disk-specific log files and the general "messages" log.

IMHO the safe way to exclude any DSM/driver-related problems would be to change to loader 1.03b and migrate to 3617. No NVMe or hardware transcoding with that, but it has the Synology LSI SAS drivers they use for the business units, which are supposed to work stably; with 918+ the driver is added on and not as safe as the "original" (as we can see with the SMART and hibernation problems).

By using 3617 (at least temporarily) you could rule out some factors and drill down to the source of the problem.

I did the same thing with my system: just swapped the loader and installed DSM 3617 over the 918+. It can be reverted the same way; just insert the "old" 918+ loader and install 918+.

 

 

Alrighty, I will try digging through the logs a bit and see what I find.

 

I would love to stay on 918+ as I am using SHR (I know it can be enabled on 3617 by editing the config, though) along with transcoding and NVMe.

 

But I think I might have to try, or try to reinstall, because lately I'm experiencing really poor performance.

 

Copying from the 9-disk array -> SSD is around 200-300MB/s

Copying from SSD -> the 9-disk array is around 200-300MB/s

Copying over 10GbE is around 200-300MB/s

 

I know I've had a lot better performance in the past, but the system has been reinstalled a couple of times since then, so I'm not sure when it went wrong.

 

The driver packages shouldn't have anything to do with it, right?

 

Best regards

 

 


1 hour ago, NooL said:

Copying from the 9-disk array -> SSD is around 200-300MB/s

Copying from SSD -> the 9-disk array is around 200-300MB/s

Copying over 10GbE is around 200-300MB/s

 

I'd figure out what the bottleneck is before doing anything else; this doesn't seem to tell us much. Clearly, if you get 200-300MB/s you are using 10GbE for the network connection, as otherwise you would be limited to about 110MB/s throughput.
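(Rough numbers behind that: 1Gbit/s / 8 = 125MB/s raw, or roughly 110MB/s after protocol overhead, while 10Gbit/s is ~1250MB/s raw - so a sustained 200-300MB/s is only possible over the 10GbE link.)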

 

Is the SSD in the NAS or on a client device?

What are the disk makes and models in the array?

 

With this additional information we can formulate a test strategy to try and isolate the bottleneck. Depending on the drives, the SHR configuration, and the type of data you are moving, this performance is not implausible.


31 minutes ago, flyride said:

 

I'd figure out what the bottleneck is before doing anything else; this doesn't seem to tell us much. Clearly, if you get 200-300MB/s you are using 10GbE for the network connection, as otherwise you would be limited to about 110MB/s throughput.

 

Is the SSD in the NAS or on a client device?

What are the disk makes and models in the array?

 

With this additional information we can formulate a test strategy to try and isolate the bottleneck. Depending on the drives, the SHR configuration, and the type of data you are moving, this performance is not implausible.

 

Good point :)

My NAS: DS918+ 6.2.3 on 1.04b with driver pack v0.12.1

Motherboard: ASRock B365M Pro4

CPU: Intel Pentium Gold G5400

Memory: G.Skill 16GB DDR4-2400

NIC: Intel X540-T2 10GbE RJ45

NVMe: 2x 128GB Intel 660p

"System Disk": Crucial MX500 1TB SSD (attached to the onboard SATA controller)

"Storage volume": 4x4TB WD Red + 5x8TB Toshiba N300 NAS (the 4x4TB + 4x8TB are attached to an HP H220 (LSI 9207-8i, PCIe 3.0 x8) and the last 8TB is attached to onboard SATA)

 

 

Copying internally (via the DSM GUI copy) from my Storage Volume to the System Disk (HDD array to SSD) gives this:

 

[screenshot: copy speed, HDD array to SSD]

 

 

And from the SSD to the HDD array:

[screenshot: copy speed, SSD to HDD array]

 

 

Copying from my PC over 10GbE:

[screenshot: copy speed over 10GbE]

 

 

 

So a bit faster, but still way lower than I would expect.

 

 


What exactly do you have on your "System" disk? DSM is installed to all drives, so maybe this is a volume for Docker and other packages, etc.?

 

The System disk is a Basic volume?

 

The NVMe cache is dedicated to the data volume?

 

To objectively evaluate each array, first remove the NVMe cache. Then run this test on each volume (again, assuming one for SATA SSD and another for spinning disk data) and let us know what you get: https://xpenology.com/forum/topic/13368-benchmarking-your-synology/?tab=comments#comment-97997
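For reference, the test in that thread boils down to a synchronous sequential write with dd, along these lines (substitute the volume under test for volumeX):

dd bs=1M count=1024 if=/dev/zero of=/volumeX/testfile conv=fdatasync    # fdatasync forces a flush so RAM caching doesn't inflate the result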


4 minutes ago, flyride said:

What exactly do you have on your "System" disk? DSM is installed to all drives, so maybe this is a volume for Docker and other packages, etc.?

 

The System disk is a Basic volume?

 

The NVMe cache is dedicated to the data volume?

 

To objectively evaluate each array, first remove the NVMe cache. Then run this test on each volume (again, assuming one for SATA SSD and another for spinning disk data) and let us know what you get: https://xpenology.com/forum/topic/13368-benchmarking-your-synology/?tab=comments#comment-97997

 

Yeah, I call it the System Disk - probably a poor choice of words; a better name would be "App Disk". This is for my installed apps, Docker, Emby, etc.

 

The App disk is an SHR volume with no data protection.

 

The Storage volume is an SHR2 volume.

 

The NVMe is attached to the Storage volume, yes (as a read-only cache).

 

Mdstat looks like this:

 

[screenshot: /proc/mdstat output]

 

 

Preliminary tests:

 

Single SSD Volume (3 tests)

dd bs=1M count=1024 if=/dev/zero of=/volume1/System/testx conv=fdatasync
1073741824 bytes (1.1 GB) copied, 2.54734 s, 422 MB/s
1073741824 bytes (1.1 GB) copied, 2.39601 s, 448 MB/s
1073741824 bytes (1.1 GB) copied, 2.42827 s, 442 MB/s

 

HDD Volume (3 tests)

dd bs=1M count=1024 if=/dev/zero of=/volume2/Lager/testx conv=fdatasync
1073741824 bytes (1.1 GB) copied, 23.817 s, 45.1 MB/s
1073741824 bytes (1.1 GB) copied, 24.3493 s, 44.1 MB/s
1073741824 bytes (1.1 GB) copied, 23.7791 s, 45.2 MB/s

 

CPU:

dd if=/dev/zero bs=1M count=1024 | md5sum

1073741824 bytes (1.1 GB) copied, 1.52714 s, 703 MB/s

 

 

 

 

 


2 hours ago, NooL said:

HDD Volume (3 tests)

dd bs=1M count=1024 if=/dev/zero of=/volume2/Lager/testx conv=fdatasync
1073741824 bytes (1.1 GB) copied, 23.817 s, 45.1 MB/s
1073741824 bytes (1.1 GB) copied, 24.3493 s, 44.1 MB/s
1073741824 bytes (1.1 GB) copied, 23.7791 s, 45.2 MB/s

This is lackluster performance, I agree.  Did you remove the cache before this test?

 

Can you confirm that you have >28TB (4TB * (9-2)) in use on the data volume? If so, this is illustrative of the negative impact of SHR. Your 8TB drives are part of both /dev/md3 and /dev/md4. Once the 4TB drives fill up (meaning the first 28TB used in the volume), the performance benefit of 9 spindles drops to five. This is the price paid for the additional storage enabled via SHR. You're also using SHR2/RAID6, so there is also double the write overhead, compounded by the above.
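Sketching that layout with round numbers (an approximation of how SHR2 splits these drives; partition names as above):

/dev/md3: 4TB slice across all 9 drives, RAID6 -> (9-2) x 4TB = 28TB, 7 data spindles
/dev/md4: remaining 4TB slice across the 5x8TB drives, RAID6 -> (5-2) x 4TB = 12TB, 3 data spindles

So the volume totals ~40TB, and any write past the first 28TB has only 3 data spindles behind it.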

 

I'm not convinced there is anything wrong, but the next thing that I would try is a synthetic test on each of the HDDs to see if one is underperforming for some reason. Have you confirmed that your WD Reds are not the SMR versions? You didn't post the actual models so I can't look them up for you.


1 hour ago, flyride said:

This is lackluster performance, I agree.  Did you remove the cache before this test?

 

Can you confirm that you have >28TB (4TB * (9-2)) in use on the data volume? If so, this is illustrative of the negative impact of SHR. Your 8TB drives are part of both /dev/md3 and /dev/md4. Once the 4TB drives fill up (meaning the first 28TB used in the volume), the performance benefit of 9 spindles drops to five. This is the price paid for the additional storage enabled via SHR. You're also using SHR2/RAID6, so there is also double the write overhead, compounded by the above.

 

I'm not convinced there is anything wrong, but the next thing that I would try is a synthetic test on each of the HDDs to see if one is underperforming for some reason. Have you confirmed that your WD Reds are not the SMR versions? You didn't post the actual models so I can't look them up for you.

 

I removed the cache before these results, yep; with the cache on it was about 200-ish:

1073741824 bytes (1.1 GB) copied, 5.41584 s, 198 MB/s

 

I can confirm that I am currently using 29.71TB on the data volume, yep. Would the performance hit really be that big on SHR/SHR2 with ~85% used?

 

[screenshot: volume usage]

 

 

In regards to the SMR part, to my knowledge they should be non-SMR drives:

 

[screenshot: drive models in Storage Manager]

 

 

The only "odd" thing i've noticed is that when im running the DD tests above as an example, "Drive Utilization" will be way higher for Drive2 than the other drives, Drive2 is the one attached to onboard SATA, not sure if that has anything to say.

 

With a setup similar to mine, what would you expect in raw write performance over 10GbE? I mean, I wasn't expecting full 10GbE speeds, but I was hoping to come a lot closer than sub-300MB/s, and looking at the "speedtest thread" I see a lot faster results with somewhat comparable setups.

Take this as an example from that thread:

[screenshot: benchmark result from the speedtest thread]

 

So mine at 45MB/s seems extremely low?

 

Oh, another issue - not sure if it's related, but my Plex transcoding performance is horrible even though it's using HW transcoding (checked in Plex while it was running). I can't transcode a 4K remux HEVC to 1080p over the local network; it will buffer for 2 minutes, play for 8-10 seconds, and buffer again. I should be able to do this, right? From my Google results, people are talking about several transcoding streams on this processor without it breaking a sweat, yet I can't transcode one :D

 

 


46 minutes ago, NooL said:

 

I removed the cache before these results, yep; with the cache on it was about 200-ish:

1073741824 bytes (1.1 GB) copied, 5.41584 s, 198 MB/s

 

I can confirm that I am currently using 29.71TB on the data volume, yep. Would the performance hit really be that big on SHR/SHR2 with ~85% used?

The point is that you have moved out of the part of the array where all the drives are utilized, so the throughput will be slower than when the array was empty.

 

46 minutes ago, NooL said:

In regards to the SMR part, to my knowledge they should be non-SMR drives:

I agree you have CMR drives, that's good.

 

46 minutes ago, NooL said:

The only "odd" thing i've noticed is that when im running the DD tests above as an example, "Drive Utilization" will be way higher for Drive2 than the other drives, Drive2 is the one attached to onboard SATA, not sure if that has anything to say.

The onboard SATA might be a factor, but also that drive is a different model with half the onboard cache of your other 8TB drives. So it is going to be the slowest, and therefore the most heavily utilized. Array performance limits are defined by the slowest drive in the array.

 

46 minutes ago, NooL said:

With a setup similar to mine, what would you expect in raw write performance over 10GbE? I mean, I wasn't expecting full 10GbE speeds, but I was hoping to come a lot closer than sub-300MB/s, and looking at the "speedtest thread" I see a lot faster results with somewhat comparable setups.

 

So mine at 45MB/s seems extremely low?

Yes, it seems low, but you are 1) using SHR2, and 2) using dissimilar drives, so you have the worst possible configuration for performance. That doesn't mean the performance should be bad.

 

The next thing is to check the drives themselves. Use hdparm to check the raw read rates for each drive:

 

# hdparm -t /dev/sdX

 

where sdX is sda, sdb, sdg, sdh, sdi, sdj, sdk, sdl in sequence
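Or, as a quick loop over the whole set, if that's easier:

for d in sda sdb sdg sdh sdi sdj sdk sdl; do hdparm -t /dev/$d; done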

 

46 minutes ago, NooL said:

Oh, another issue - not sure if it's related, but my Plex transcoding performance is horrible even though it's using HW transcoding (checked in Plex while it was running). I can't transcode a 4K remux HEVC to 1080p over the local network; it will buffer for 2 minutes, play for 8-10 seconds, and buffer again. I should be able to do this, right? From my Google results, people are talking about several transcoding streams on this processor without it breaking a sweat, yet I can't transcode one :D

My advice is to fix or accept this performance issue before worrying about that one...


@flyride Gotcha :)

 

Here are the results :)

 

/dev/sda:
 Timing buffered disk reads: 1040 MB in  3.01 seconds = 345.27 MB/sec

 

/dev/sdb:
 Timing buffered disk reads: 660 MB in  3.01 seconds = 219.31 MB/sec

 

/dev/sdg:
 Timing buffered disk reads: 692 MB in  3.01 seconds = 230.25 MB/sec

 

/dev/sdh:
 Timing buffered disk reads: 712 MB in  3.00 seconds = 237.23 MB/sec

 

/dev/sdi:
 Timing buffered disk reads: 708 MB in  3.00 seconds = 235.76 MB/sec

 

/dev/sdj:
 Timing buffered disk reads: 720 MB in  3.00 seconds = 239.64 MB/sec

 

/dev/sdk:
 Timing buffered disk reads: 396 MB in  3.01 seconds = 131.48 MB/sec

 

/dev/sdl:
 Timing buffered disk reads: 522 MB in  3.00 seconds = 173.96 MB/sec

 

I included sdm and sdn too

 

/dev/sdm:
 Timing buffered disk reads: 506 MB in  3.01 seconds = 168.35 MB/sec

 

/dev/sdn:
 Timing buffered disk reads: 540 MB in  3.01 seconds = 179.52 MB/sec

 

 

 


So /dev/sdk, which is a 4TB Red drive, is quite a bit slower than its peers on reads. I'd test that a little bit more, and maybe review the SMART data for it? FWIW, WD Reds are among the slowest drives out there for throughput, but I would not expect to see net throughput like yours. That said, I'm not a huge fan of SHR2/RAID6.

 

/dev/sdb, which is the Toshiba N300 with 128MB of cache and on the onboard SATA port, is slower than its peers on reads, but not significantly so. I don't think it's a problem. The high utilization you observed is expected when the 8TB drives are being pushed to their performance limits, and that is a good thing.

 

Your CPU has only 2 cores, which is probably a limiting factor given the computational requirements of SHR2/RAID6. I think your system is working correctly, but everything is at a worst-case state from a performance standpoint, and the negative impact of all the items at their performance limit is cumulative.

 

In summary, investigate whether /dev/sdk is working correctly, and change your CPU to one with four cores (a Core i3-8100 or 8300 would work great and retain your transcoding capability). Also, is write cache on for the individual drives?


13 hours ago, flyride said:

So /dev/sdk, which is a 4TB Red drive, is quite a bit slower than its peers on reads. I'd test that a little bit more, and maybe review the SMART data for it? FWIW, WD Reds are among the slowest drives out there for throughput, but I would not expect to see net throughput like yours. That said, I'm not a huge fan of SHR2/RAID6.

 

/dev/sdb, which is the Toshiba N300 with 128MB of cache and on the onboard SATA port, is slower than its peers on reads, but not significantly so. I don't think it's a problem. The high utilization you observed is expected when the 8TB drives are being pushed to their performance limits, and that is a good thing.

 

Your CPU has only 2 cores, which is probably a limiting factor given the computational requirements of SHR2/RAID6. I think your system is working correctly, but everything is at a worst-case state from a performance standpoint, and the negative impact of all the items at their performance limit is cumulative.

 

In summary, investigate whether /dev/sdk is working correctly, and change your CPU to one with four cores (a Core i3-8100 or 8300 would work great and retain your transcoding capability). Also, is write cache on for the individual drives?

 

 

It is also the oldest drive I have, with close to 42,000 power-on hours, but yeah, I'm surprised that it's that slow compared to the rest.

 

Write cache was disabled in Synology Storage Manager for all drives.

 

But I tried checking the write caching with hdparm -W /dev/sd* and could see that for sdb write caching was off, while it was on for all the other drives, oddly enough. I changed that to on via hdparm, and now its utilization is more in line with the others - a tad higher, but overall more in line.
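For reference, the commands involved were along these lines (device name as in my case):

hdparm -W /dev/sdb     # report the drive's current write-cache setting
hdparm -W1 /dev/sdb    # enable write caching (-W0 disables it)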

 

I did try the dd tests a bit later (every time I have run them, I have made sure that disk and storage-volume activity was at 0%), but I seem to be getting very varied results.

 

[screenshot: dd test results]

 

Tried again this morning:

 

[screenshot: dd test results, second run]

 

So from 45MB/s to 421MB/s, and I have no idea why - and at the same time I'm getting 79MB/s write speed over the network (tested 2 minutes later).

 

In regards to the CPU, I'll go ahead and buy a new one; the i3s are pretty cheap, so it won't break the bank. I need to stick with Coffee Lake / Coffee Lake S, right? Comets are too new to be fully supported, if I remember correctly from the driver thread.

 

Again, I want to thank you @flyride and @IG-88 for your input and assistance; it's very much appreciated, and these issues/learning experiences are part of what makes it fun :)

 


2 minutes ago, NooL said:

In regards to the CPU, I'll go ahead and buy a new one; the i3s are pretty cheap, so it won't break the bank. I need to stick with Coffee Lake / Coffee Lake S, right? Comets are too new to be fully supported, if I remember correctly from the driver thread.

 

I think you need to stick with what your motherboard will support.  DSM shouldn't care though.


1 hour ago, NooL said:

In regards to the CPU, I'll go ahead and buy a new one; the i3s are pretty cheap, so it won't break the bank -

Not sure if that will help; if that were the limit, I'd expect to see a high CPU load when using the RAID system to copy data.

From my own systems, I had the impression the LSI SAS controller did not perform as well as the AHCI PCIe 3.0 controllers I used later.

What you can try for free is to use all 6 onboard SATA ports and see if the performance is any better when the disks are connected to an AHCI controller (and write cache for the disks is on).

 

 


How about this: I'll get another 8TB HDD and add it to the volume, making it a 10-disk array; then it should be below the drive-capacity utilization and the accompanying performance decrease @flyride referred to in regards to raw write performance. If it's still the same, then I might put as much as I can on onboard SATA just to test, but my thinking was that an LSI should have more bandwidth than onboard SATA.

 

If I were to upgrade the CPU down the line:

@IG-88 Would an i3-10100 be okay (also for transcoding)? I think I saw you "warn" against it in the driver thread as it was a new device ID or something. The reason I'm asking is that it's considerably cheaper than, say, an i3-8300 and quite a bit quicker.

 

 


33 minutes ago, NooL said:

Would an i3-10100 be okay (also for transcoding)?

Yes, with a patched i915.ko (I provide that in my 6.2.3 driver thread),
but your system board is only OK for 8th and 9th gen CPUs; I use an i3-9100 on a B365M board from Gigabyte.

 

8086:3E92 => iGPU UHD 630, Low End Desktop 9 Series (original driver)
->
8086:9BC8 => iGPU UHD 630, Low End Desktop i5-10500, i5-10600T and lower

 

 

33 minutes ago, NooL said:

I think I saw you "warn" against it in the driver thread as it was a new device ID or something. The reason I'm asking is that it's considerably cheaper than, say, an i3-8300 and quite a bit quicker.

There are higher-tier 10th gen CPUs that might not work; I've only had one piece of feedback.

-> one user's negative feedback for an i9-10900 (8086:9BC5): the system does not boot anymore - seems to be a solid hands-off?

 


10 minutes ago, IG-88 said:

Yes, with a patched i915.ko (I provide that in my 6.2.3 driver thread),
but your system board is only OK for 8th and 9th gen CPUs; I use an i3-9100 on a B365M board from Gigabyte.

 


8086:3E92 => iGPU UHD 630, Low End Desktop 9 Series (original driver)
->
8086:9BC8 => iGPU UHD 630, Low End Desktop i5-10500, i5-10600T and lower

 

 

There are higher-tier 10th gen CPUs that might not work; I've only had one piece of feedback.


-> one user's negative feedback for an i9-10900 (8086:9BC5): the system does not boot anymore - seems to be a solid hands-off?

 

 

Doh! Didn't even notice they had gone from 1151 to 1200 :S

 

Thank you :)


1 hour ago, NooL said:

How about this: I'll get another 8TB HDD and add it to the volume, making it a 10-disk array; then it should be below the drive-capacity utilization and the accompanying performance decrease @flyride referred to in regards to raw write performance

Really, I don't think adding a single drive will get you much. If we take the example of your SHR, which has 7 data spindles for the first 28TB and then 3 for the remaining space, and expand it with another disk, 50% of the new data blocks are still on the 8TB-only part of the array. So you would get 4TB of theoretically improved performance, but not for the last 4TB, nor for accessing any of the files already on the 8TB-only part of the array.

 

If you want to rule out SHR, mitigate it altogether with a RAID6 (or RAID5) of 8TB drives. I'd try the CPU first and see how SHR2 responds before doing anything else, though.

 

EDIT: if your intention is to REPLACE a potentially problematic drive, that makes some sense.

 

Quote

if it's still the same, then I might put as much as I can on onboard SATA just to test, but my thinking was that an LSI should have more bandwidth than onboard SATA.

Why? Onboard SATA ports either connect to the CPU via the chipset or have a direct connection to the PCIe bus. The motherboard documentation will confirm, but for mainstream Core CPU chipsets, it's at least four lanes.
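Ballpark math, assuming a PCIe 3.0 x4 chipset link and the HP H220 in a PCIe 3.0 x8 slot (~985MB/s per PCIe 3.0 lane):

chipset link (x4): ~3.9GB/s
LSI 9207-8i (x8): ~7.9GB/s
9 spinning disks at ~250MB/s peak each: ~2.3GB/s aggregate

Either path has headroom, so the controller link itself shouldn't be the write bottleneck.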


7 minutes ago, flyride said:

Why? Onboard SATA ports either connect to the CPU via the chipset or have a direct connection to the PCIe bus. The motherboard documentation will confirm, but for mainstream Core CPU chipsets, it's at least four lanes.

There are some exceptions on boards like Apollo Lake or Gemini Lake: usually 2 real onboard (SoC) ports, and when 2 more are added with an ASM1061, it's one PCIe 2.0 lane, limiting those 2 ports to ~500MB/s combined - not usable for SSDs.

