XPEnology Community

Change disks order in DSM


ed_co


Hello guys,

 

Maybe it is a stupid question, but is it possible to configure several hard disks in a NAS, and then rearrange them in a different order?

Let's take an example:

I have 2 controllers in my XPEnology PC: one (SATA1) on my motherboard with 6 SATA ports, and an external one (SATA2) - a mini PCI-E card connected via the M.2 E-key port - with 4 SATA ports.

I have 5 HDDs.

 

Now, in my current configuration I don't have problems, as everything fits on the first controller:

- HDD1 is going to be in SATA1_1

- HDD2 to SATA1_2

...

- HDD5 to SATA1_5

 

But let's say that I want 2 SSDs in RAID 1 for cache. I know that the first controller is faster (only noticeable with SSDs) than the second one (which is just as good for HDDs), so I would then connect (for example):

- HDD5 in SATA2_1

- SSD1 in SATA1_5

- SSD2 in SATA1_6

 

So, questions:

1) I know that the SSDs are not part of the same volume, so they shouldn't be a problem (we could even leave the SSDs aside and think only about HDD5), but if I move HDD5 from SATA1_5 to SATA2_1, does it keep working without doing anything else?

2) So if I permute any HDDs (like HDD1 to SATA1_5, HDD5 to SATA1_1), would it work too? Or does something need to be changed?

 

Maybe someone can help me with my questions.

 

Thanks and greetings!!


Generally the answer to both your questions is yes.  But it is hard to answer without understanding much more about your system.

 

Assuming your HDDs are in some sort of redundant RAID config, you can discover the answer on your own.  Why not just shut down, move one drive over to your other controller, boot up and see?  If it works, great, then shut down again and move a second drive.  If it does not, move the first drive back and recover your array.
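If you want to double-check from the command line after each move (SSH into the box), a minimal sketch like the one below - assuming the standard Linux mdraid status file at /proc/mdstat and that Python is available on your system - will list each md array, its member disks, and whether anything is missing:

```python
# Minimal sketch: parse /proc/mdstat and report each md array, its member
# devices, and whether a member is missing (a "_" in the [UUUUU] status).
# Assumes the standard Linux mdraid status file; run it on the NAS over SSH.
import re

with open("/proc/mdstat") as f:
    lines = f.read().splitlines()

for i, line in enumerate(lines):
    if not line.startswith("md"):
        continue                                  # skip "Personalities", "unused devices", etc.
    name, _, rest = line.partition(" : ")
    members = re.findall(r"(\w+)\[\d+\]", rest)   # e.g. sda3[0], sdb3[1], ...
    status = lines[i + 1] if i + 1 < len(lines) else ""
    print(f"{name}: members={members} degraded={'_' in status}")
```

If every array shows all of its members and nothing is flagged as degraded, the move was picked up cleanly.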

 

Moving the drives so that they do not match the controller drive sequence can complicate a recovery from a catastrophic failure where you are trying to manually reconstruct an array that won't start.

 

Now, even if you are successful moving your drives, you can still get your array corrupted by an SSD cache failure, so maybe we should not be helping you ... :-)


@flyride, apart from the information in the previous post, I want to clarify what I am trying to do.

 

I am about to do the setup (with 5x 8TB HDDs: 3x Seagate and 2x WD), so I haven't done it yet. I am asking because I plan to add an SSD cache later (2 disks in RAID 0), and if I connect everything to the first controller for now (not using the second one, meaning I don't even need to connect it yet), I was wondering whether I could run into trouble afterwards if I want to change things... so what I am proposing is this:

HDD1 TO SATA1_1

HDD2 TO SATA1_2

...

HDD5 TO SATA1_5

and not even connect the second SATA controller.

 

The second option is, to connect:

HDD1 TO SATA1_3

HDD2 TO SATA1_4

HDD3 TO SATA1_5

HDD4 TO SATA1_6

HDD5 TO SATA2_1

And reserve SATA1_1 and SATA1_2 for SSD1 and SSD2, respectively.

 

Which option do you think is better?

 

Thanks!!


You are asking for opinion, not what is technically possible.

  1. I do not value the DSM SSD write cache implementation highly.  There are many examples of data loss due to array corruption when using SSD write cache.  Plus, most feasible DSM NAS write workloads benefit similarly from increasing RAM, with no corruption risk.
  2. You assert that the chipset controller is faster than an (M.2) PCIe-connected controller, and that is the reason for all this effort.  Based on what? Have you actually benchmarked this?  A SATA III SSD is limited to 550MBps by its interface specification, no matter where it is connected. M.2 is essentially a PCIe x4 slot with 4GBps of bandwidth to the CPU, so SATA drives connected there should run just fine. I don't really understand the performance advantage from moving things around.
  3. There is an inherent, latent risk in spanning volumes across controllers.  What if you upgrade and only one controller comes up?  Your array is now broken.  Unless the ports are needed for the drives you want to connect, it would be better not to run a second controller at all.  This is also true with SSD cache on a secondary controller - your array is broken if the SSD cache isn't available (i.e. you do not reduce risk by putting the SSD cache on a secondary controller).

Summarizing, I don't think you will get much performance value for this plan, and will incur subjectively unnecessary risk, both short term and long term.  If you do still want to move drives around, I recommend moving ONE drive at a time with DSM shut down.  Then boot up and let DSM update the array information and verify that everything comes up clean.  Then shutdown and move another drive, repeat until you are done with whatever you are trying to do.


@flyride, first of all, thanks for your info.

I will try to clarify some things first.

First my rigs details (what I already HAVE):

- Intel Core i5-8400 6-core processor

- H370M-ITX/ac motherboard with 6 SATA ports

- 32GB DDR4 RAM

- SYBA SI-MPE40125 controller with 4 SATA ports (Marvell 88SE9215 chipset). It is already installed via an M.2 E-key to mini PCI-E adapter (removing the WiFi card and putting this in its place; it works well). So it is not an M.2 SSD controller, or whatever you were talking about - that is not what I said. This is the card I am using and will keep using in the future. This Marvell chipset is known not to have the best performance with SSDs, which is why I asked.

- 3 x Seagate IronWolf 8TB

- 2 x WD 8TB (WD80EZAZ)

So, to your replies:

1) That's why I was planning to do RAID 1 (mirror) with 2 SATA SSDs, to avoid corruption.

2) Not applicable... I am not planning to get one of those.

3) There is no other way to expand. 6 SATA ports are enough for now, but not in the future. There are other possibilities with other controllers, though.

So, summarising, you think the SSD cache isn't worth it. Curious, I thought it would improve the NAS performance a lot. Here is one example.

 

Thanks!!


9 minutes ago, ed_co said:

1) That's why I was planning to do RAID 1 (mirror) with 2 SATA SSDs, to avoid corruption.

Again, there are plenty of examples of volumes being corrupted with SSD cache in RAID1!  Check reddit, Synology forum, etc.

 

9 minutes ago, ed_co said:

2) Not applicable... I am not planning to get one of those.

You have an M.2-to-PCIe adapter and a PCIe SATA controller.  The bandwidth commentary is completely relevant to your configuration.   Consider testing performance with a SATA SSD connected to both the chipset and Marvell controllers and prove to yourself that it is different, rather than relying on "known" information.  That will save you a lot of effort if the performance is the same!
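For example, something like this rough sketch will give you a sequential-read MB/s figure you can compare between the two controllers (the file path is just a placeholder, not a real DSM path; use a test file larger than your RAM so the page cache doesn't skew the result, and it assumes Python is available on the box):

```python
# Rough sequential-read benchmark sketch: read a large file in big chunks and
# report MB/s. Run it once against a file on a chipset-attached SSD and once
# against a Marvell-attached one, then compare the numbers.
import os
import time

TEST_FILE = "/volume1/test/bigfile.bin"   # hypothetical path - point it at your own large file
CHUNK = 8 * 1024 * 1024                   # 8 MiB reads

size = os.path.getsize(TEST_FILE)
start = time.monotonic()
with open(TEST_FILE, "rb", buffering=0) as f:
    while f.read(CHUNK):
        pass
elapsed = time.monotonic() - start
print(f"read {size / 1e6:.0f} MB in {elapsed:.1f} s -> {size / 1e6 / elapsed:.0f} MB/s")
```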

 

9 minutes ago, ed_co said:

3) There is no other way to expand. 6 SATA ports are enough for now, but not in the future. There are other possibilities with other controllers, though.

So, summarising, you think the SSD cache isn't worth it. Curious, I thought it would improve the NAS performance a lot. Here is one example.

The example just shows that the poster planned to configure SSD cache, not that it was tested to offer any measurable benefit.  He hypothesizes that writes will benefit (theoretically true).  It is equally true that RAM will give you the same benefit, on real-world workloads, without the corruption risk.  SSD cache sounds good in concept but it is not very useful in reality, unless you have a sustained enterprise workload.

 

And now I am just repeating my opinion here.  You can do what you want to, half the fun is in experimenting!


4 hours ago, flyride said:

Again, there are plenty of examples of volumes being corrupted with SSD cache in RAID1!  Check reddit, Synology forum, etc.

Fair enough, I will take a look. I guess you are saying it is not just a problem with XPEnology, but that original Synology NASes (the ones that support adding an SSD cache, I mean) have that problem too, right?

 

Quote

You have an M.2-to-PCIe adapter and a PCIe SATA controller.  The bandwidth commentary is completely relevant to your configuration.

Here I don't agree: the adapter is M.2 E-key (which is nowadays the WiFi connector on motherboards; don't confuse it with B+M key, which is used just for SSDs). They are very different, even in speed.

But even so, the only bandwidth you have to consider here is that of the mini PCI-E link (which carries the SATA card, and remember its 4 ports share it), and that is half (x1) of what the M.2 E-key can offer (x2).
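Just as a rough back-of-the-envelope calculation (assuming PCIe 2.0 at roughly 500 MB/s of usable bandwidth per lane, which I haven't verified for this exact card and adapter):

```python
# Back-of-the-envelope sketch: upstream bandwidth shared by the 4 SATA ports on
# the add-on card. Assumes roughly 500 MB/s usable per PCIe 2.0 lane; the real
# figure depends on the card, the adapter and protocol overhead.
PCIE2_PER_LANE = 500    # approx. usable MB/s per PCIe 2.0 lane (assumption)
SATA3_CEILING = 550     # practical ceiling of a single SATA III SSD

for lanes in (1, 2):    # mini PCI-E (x1) vs. M.2 E-key (up to x2)
    upstream = lanes * PCIE2_PER_LANE
    per_port = upstream / 4          # 4 SATA ports sharing the link when all are busy
    print(f"x{lanes} link: ~{upstream} MB/s total, ~{per_port:.0f} MB/s per port "
          f"if all 4 ports are active (one SATA SSD alone can use up to {SATA3_CEILING})")
```

So on an x1 link even a single SATA SSD could saturate the upstream bandwidth by itself, while spinning HDDs are much less likely to hit that limit.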

 

Quote

Consider testing performance with a SATA SSD connected to both the chipset and Marvell controllers and prove to yourself that it is different, rather than relying on "known" information.  That will save you a lot of effort if the performance is the same!

I don't have an SSD right now to try with... I haven't even configured the RAID in Synology yet (I am testing with another spare drive on which I have DSM 6.2.1 installed). That is what this is all about: deciding how to configure it so I don't mess anything up, now or in the future...

 

Quote

The example just shows that the poster planned to configure SSD cache, not that it was tested to offer any measurable benefit.  He hypothesizes that writes will benefit (theoretically true).  It is equally true that RAM will give you the same benefit, on real-world workloads, without the corruption risk.  SSD cache sounds good in concept but it is not very useful in reality, unless you have a sustained enterprise workload.

I talked with the guy, and he did it, and all went fine. He didn't give me any numbers though. I will try to talk with him.

 

Quote

And now I am just repeating my opinion here.  You can do what you want to, half the fun is in experimenting!

Which is HIGHLY appreciated buddy!!

 

I just want to do things right. I don't want to realize in a few months that I did things wrong and 1) be stuck with something whose limitations I can't change easily, or 2) have a really hard time improving it afterwards because I chose wrong at the beginning.

Thanks!!


@flyride please take a look at my previous post!! Cheers

 

EDIT: after reading several threads on reddit about what you told me regarding SSD cache, it looks like the best option to speed things up is a read-only SSD cache... it speeds the system up without compromising the data... what do you think?

 

 


On 12/26/2018 at 7:20 PM, ed_co said:

EDIT: after reading several threads on reddit about what you told me regarding SSD cache, it looks like the best option to speed things up is a read-only SSD cache... it speeds the system up without compromising the data... what do you think?

@flyride I mean just using one SSD for a read-only cache, no need for 2 in RAID...


Again, just my personal opinion and continued thread drift:

 

I'm not trying to talk you out of SSD cache, but you seem to want to talk me into it!  I do agree that Synology read cache carries less risk than write cache.

 

There are going to be 5 HDDs in your RAID5. If a typical NAS drive is capable of 75MBps sequential read, 4 of them (the net throughput of a 5-disk RAID5) can do 300MBps.  Let's round down to 250MBps for rotational latency and other overhead.  With a Gigabit Ethernet interface, the maximum throughput is about 125MBps (1Gbps divided by 8 bits). That is half the sequential throughput of your HDDs.
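Put as a quick sketch with the same rough numbers:

```python
# Rough numbers only: net RAID5 sequential read vs. what Gigabit Ethernet can carry.
hdd_seq_read = 75               # assumed MB/s sequential read per NAS drive
data_drives = 5 - 1             # a 5-disk RAID5 delivers roughly 4 drives of net throughput
array_read = hdd_seq_read * data_drives     # ~300 MB/s theoretical
array_read_rounded = 250                    # rounded down for latency and other overhead
gbe_limit = 1000 / 8                        # 1 Gbps / 8 bits -> ~125 MB/s
print(f"array ~{array_read} MB/s (call it {array_read_rounded}), "
      f"network cap ~{gbe_limit:.0f} MB/s -> the LAN, not the disks, is the limit")
```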

 

SATA SSD maximum throughput is 550MBps.  550 is more than 250, but if it all has to fit in a 125MBps pipe, it doesn't matter much.  The only benefit is for small, random reads that happen to be cached already. In that specific case, the SSD cache is probably "faster." The SSD Cache feature visually markets to you how great the cache is ("90% cache effective!") but it doesn't explain how fast the HDDs would have retrieved the same data without the cache.

 

If the main workload is single-user, then it is also going to be affected by the performance of the client. Very often, the small random reads that the cache can improve are workloads that the client takes the most time to process, and therefore it can't make requests fast enough to fill the pipe.  We want to blame the NAS performance, but it is the client PC or OS that is at fault.

 

So if you have a 10GbE interface and a specific workload (e.g. multi-user) that you are sure the cache can optimize, then by all means do it.  For the general file and media serving activities that 90% of us do on our systems, cache offers little performance benefit and rapidly wears out your SSD.  That SSD can be put to much better use isolated to disk-intensive activities WITHIN the NAS, where all of its performance can be leveraged, such as Synology apps, Docker, or virtual machines.

 

I strongly encourage you to set up some workloads that are meaningful to you, and benchmark both with and without SSD cache. You may be surprised.


As I said, I haven't configured DSM on my 5x8TB disks yet. I was waiting to decide whether or not to get the SSD cache, because of the mess of moving disks between the two SATA controllers afterwards.

But now I think I am pretty much convinced: I will start with the RAID5, put all my stuff on it, and see how it performs. If it is good, I will just leave it as is; if not, I will try to improve it in the future.

Thanks for your explanations, they were really clear and helpful!!

