N54L 100MByte/sec Transfer speed



I've seen a few posts stating that people were able to achieve 100MByte/sec read/write speeds using XPEnology on their N54L. I was wondering what your setups look like and what tools you used to measure.

 

Mine is currently:

N54L

16GB ram

4x 3TB WD Red, single SHR volume

No SSD Cache

TheBay BIOS

XPEnology DS3612xs DSM 4.3 build 3810++ (repack v1.0)

Catalyst 2970

60MB/sec Windows transfer

 

 

Apologies in advance if this has already been posted. I've spent a few days looking, but N54L is such a common search term that it's pretty hard to find related info :smile:


The more hard drives you have, the higher the transfer speed with larger files, as a large file is split across multiple drives. Each drive can then serve up its chunk of the file at the same time, giving good transfer speeds.

But with lots of small files you will not see much benefit. Servers do not like transferring lots of small files, as there is a fixed overhead associated with each one, which slows the whole transfer down.
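That per-file overhead can be sketched with a toy model. The overhead and link-speed numbers below are illustrative assumptions, not measurements from this thread:

```python
# Toy model of transfer time: each file pays a fixed per-file cost
# (protocol round-trips, metadata lookups) on top of the raw payload
# time. Both constants are assumed values for illustration only.

PER_FILE_OVERHEAD_S = 0.05   # assumed per-file cost in seconds
LINK_SPEED_MBS = 110.0       # assumed usable gigabit throughput, MB/s

def transfer_time(total_mb, n_files):
    """Seconds to move total_mb of data split across n_files."""
    return n_files * PER_FILE_OVERHEAD_S + total_mb / LINK_SPEED_MBS

def effective_rate(total_mb, n_files):
    """Effective end-to-end throughput in MB/s."""
    return total_mb / transfer_time(total_mb, n_files)

# 1GB as one big file vs. 2,000 small files
print(effective_rate(1024, 1))      # close to the assumed link speed
print(effective_rate(1024, 2000))   # per-file overhead dominates
```

With these assumed constants, one big file transfers at nearly full link speed while 2,000 small files crawl, which matches the shape of the measurements described below.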

 

For example:

 

I copied a collection of about 2,000 files totalling 1GB from the server to the local PC. This transferred at about 55MB/s on the first run and about 75MB/s on the second; the second run was faster because the server had cached the files in memory and didn't have to load them off the slower hard drive.

I then copied a single contiguous 1GB file from the server to the local PC. This transferred at about 115MB/s on the first run and the same on the second. This was hitting 98% utilization on the gigabit network card.

The actual transfer rate will vary with the size of the files you are moving.

If the PC you're transferring to has an SSD and a large amount of RAM you will see higher transfer rates. The transfer rate is limited by the speed of both the source and destination disks.
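Those single-file numbers are roughly what gigabit Ethernet allows. A quick sanity check (the ~6% framing overhead figure is an assumption for Ethernet/IP/TCP headers):

```python
# What gigabit Ethernet can actually carry in MB/s.
raw_mbs = 1_000_000_000 / 8 / 1e6   # 1 Gbit/s expressed in MB/s -> 125.0
framing_overhead = 0.06             # assumed ~6% Ethernet/IP/TCP header cost
usable_mbs = raw_mbs * (1 - framing_overhead)

print(raw_mbs)                       # 125.0 MB/s raw line rate
print(round(usable_mbs, 1))          # ~117.5 MB/s practical ceiling
print(round(115 / raw_mbs * 100))    # 115 MB/s is ~92% of raw line rate
```

So a sustained 115MB/s single-file copy is effectively saturating the gigabit link; there is not much headroom left to gain.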


Great point, sinnuendo. These drives have very low IOPS, so I'm using 4GB files. What are your BIOS settings?

Mine is set up the same as this tutorial:

http://homeservershow.com/hp-proliant-n ... sited.html

 

I'd love to run iperf to rule out the network, but I'm not totally sure where to get it for XPEnology... it's not the marvel version, is it?
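For reference, once a binary is on both ends, the classic iperf invocation is simple. This is a sketch assuming iperf is in the PATH on both machines, and the NAS address below is just an example:

```shell
# On the NAS (server side): listen for incoming test connections
iperf -s

# On the client, pointed at the NAS IP (example address),
# running a 30-second throughput test
iperf -c 192.168.1.50 -t 30
```

If iperf reports ~940Mbit/s between the two hosts, the network itself is fine and the bottleneck is disks or the OS, not the wire.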


Native, and LAN Speed Test by totusoft.com from several locations. I've tested VMs, physical servers, desktops, laptops, and direct crossover, and even tried multiple simultaneous connections to measure the aggregate. It peaks at 60MB/sec but averages around 55 write and 30 read. I'm willing to use any other utility, though.

 

 

Here are my thoughts: 1. I've set up something wrong. 2. That's just the speed of the thing.

 

In order to eliminate #2, I thought I would ask about setups and throughput. If everyone running WD Reds is seeing 60MB/sec and the folks running Seagates are getting 110MB/sec, it might be time to swap. If they are all running a cache drive, it's time to buy an SSD. No one seems to be posting builds and speeds :sad:

 

Thoughts?


Running Native.

 

Zarocq, thank you. That's incredibly helpful. Is that 100 on reads and 100 on writes?

 

I will look over everything again in my test environment, but any pointers, advice, or settings recommendations would be appreciated. I don't think the number of drives is the issue, but I will also send off for 2 more drives.


Here are my thoughts: 1. I've set up something wrong. 2. That's just the speed of the thing.

Without wishing to offend anyone...

Let me give you my ideas on #2.

It's the speed of the thing for that interface type.

I didn't change any network adapter settings relating to speed, but I consistently get 100+MB/sec "out of the box" with my set-up (see sig).

I attribute that to the driver (dual Intel-chip gigabit NICs on the NAS), which can transfer at that speed between Win 8.1 (Asus X79 chipset) and the XPEnology box (Asus X45 chipset).

However, transfer between the NAS and a KDLinks HD720 (again, gigabit switch, gigabit connections) runs at ~10MB/sec due to their implementation of the NIC protocols (a sore point for KDLinks owners).

As an aside, when moving large amounts of data (>150GB), I find I need the source on an SSD on the X79. After a minute or so from a 7200rpm SATA III drive, some sort of saturation point is reached and the transfer often drops to 30-60MB/sec (though not always). I think Windows can't continuously buffer from the "rotational" drive.

IMHO.

Running Native.

 

Zarocq, thank you. That's incredibly helpful. Is that 100 on reads and 100 on writes?

That is 100 read OR 100 write - not simultaneously - and I've reached this speed using only the 250GB disk that came with the N54L.

 

So I think HDMann has a good point - it is the entire chain that has to be able to deliver.


@HDMann, I see where you're going and I'm not offended. I have Raptors and 840s in the other NAS. The file server has the bandwidth to saturate a gigabit port. The SQL server has the bandwidth to saturate it. The switch has more than enough to support what I'm testing 24 times over.

 

Added the 5th drive. Rebuilt the array. Only saw a marginal improvement.

Ran BIOS update 2011.07.29, then 2013.10.01, pulled the 5th drive, rebuilt the array, and bam: 98MB/sec.

I think the issues may have been caused by going from 2011.07.29 to a Russian mod, then to TheBay. There may have been flags set in the BIOS that I was no longer able to see, or a disconnect with the speed selection or write mode. Either way, it's acting like it should now.

 

Honestly, a few months ago I set up an N54L Solaris array that was giving me really low bandwidth, so I shelved the whole thing. I'd never used Reds before, so I thought they were the likely issue. About a week ago I read someone describing XPEnology as the fastest NAS OS they'd tried... I also saw someone mention they were using Reds and getting 110MB/sec. So I pulled the aborted project down and installed XPEnology, but ran into the same low-bandwidth issue again. Hence my post, and then Zarocq's response. Seriously, Zarocq, knowing that it could work made all the difference. Thanks again.


2 weeks later...

Just set up two more MicroServers.

 

Used TheBay BIOS.

 

1st install was slow... here's why:

 

[ 23.642201] ata1.00: configured for UDMA/133

[ 23.642256] ata1: EH complete

[ 23.648070] ata2.00: configured for UDMA/133

[ 23.648124] ata2: EH complete

[ 23.648255] ata3.00: configured for UDMA/133

[ 23.648305] ata3: EH complete

[ 23.649466] ata4.00: configured for UDMA/133

[ 23.649515] ata4: EH complete

[ 23.649925] ata5.00: configured for DMA/33 <<<<<<<<<<<<

[ 23.649974] ata5: EH complete

 

Re-checked BIOS settings; ports 5 and 6 were set to IDE mode.

 

Disabled IDE mode on ports 5/6.

 

Rebooted, and:

 

[ 23.642201] ata1.00: configured for UDMA/133

[ 23.642256] ata1: EH complete

[ 23.648070] ata2.00: configured for UDMA/133

[ 23.648124] ata2: EH complete

[ 23.648255] ata3.00: configured for UDMA/133

[ 23.648305] ata3: EH complete

[ 23.649466] ata4.00: configured for UDMA/133

[ 23.649515] ata4: EH complete

[ 23.649925] ata5.00: configured for UDMA/133 <<<<<<<<<<<<<

[ 23.649974] ata5: EH complete
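For anyone checking their own box, the negotiated mode for each port can be pulled straight out of the kernel log. A minimal sketch, assuming shell access to the NAS and standard dmesg/grep:

```shell
# List the transfer mode the kernel negotiated for each SATA port;
# any line not showing UDMA/133 (e.g. DMA/33) is worth a BIOS check.
dmesg | grep -E 'ata[0-9]+\.00: configured for' | grep -v 'UDMA/133'
# no output means every port negotiated UDMA/133
```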
