XPEnology Community

Starting from scratch


merve04

Question

So if any of you have seen a recent post from me about a file system error: I haven't been able to fix it, so I picked up 4x 12TB drives to offload my data and rebuild my current array. I'm going to return the four 12s when done, greasy but whatever.
What I would like to know is this: I will once again have a 54TB storage pool in SHR2. Can I create a btrfs volume and an ext4 volume on the same pool? My thought is to make a 1TB btrfs volume for just my sensitive data (which is also duplicated on two different cloud storage services) and for all my apps. The second volume would be ext4, strictly holding media. Is this doable?


Recommended Posts


So mapping your drives out by controller and array, it looks like this.

 

[Attached screenshot: drive layout by controller and md array]

 

md2 is the array that supports the small drives (sda/b/c/d/j) and md3 is the array that incorporates the remainder of the storage on the large drives.  The two arrays are joined and represented as a single Storage Pool. The larger the size difference between the small and large drives, the more writes are going to the large array so the small drives will be less busy (or even idle sometimes).  This is a byproduct of how SHR/SHR2 is designed, and not anything wrong.
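If you want to see that split for yourself over SSH, something like this would show it (just a sketch; sudo/root is needed and the exact LVM tooling can vary by DSM version):

# Show the md arrays and which disk partitions belong to each
cat /proc/mdstat

# SHR joins md2 and md3 with LVM; these show the physical volumes and the logical volume built on top of them
sudo pvdisplay
sudo lvdisplay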

 

Because of this, you haven't seen the actual write throughput on disks 1-4. So if they were underperforming for some reason, the impact on performance would be random and sporadic, because it would only show up when writes were going to the small array. What we are trying to figure out is whether one component or array (or even controller) is performing abnormally slowly. You might want to repeat the evaluation of the drives using the dd technique from earlier instead of copying files from the Mac, and maybe run it for an extended time until disks 1-4 get used, as it will stress the disk system much more.
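For reference, the dd test I mean is roughly this (only a sketch; /volume1 and the sizes are examples, and it writes a large temporary file on the volume):

# Sequential write test straight to the volume, bypassing the network (~20GB so it runs long enough to reach the small array)
dd if=/dev/zero of=/volume1/ddtest.tmp bs=1M count=20480 conv=fdatasync

# Optional read-back test, then clean up
dd if=/volume1/ddtest.tmp of=/dev/null bs=1M
rm /volume1/ddtest.tmp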

 

Just a random thought, is write cache enabled for the individual drives?
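You can check that from the shell too, something like this (device names are just examples):

# Query the write-cache state of a drive; look for "write-caching = 1 (on)"
hdparm -W /dev/sda

# Repeat for each member disk: sdb, sdc, sdd, and so on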

 

Edited by flyride

On very slow hardware with limited memory, yes. You always have to trade features for performance until your hardware makes the tradeoff cost irrelevant.

 

My 4-core system can run my RAIDF1 array with btrfs at speeds over 1 gigabyte per second, exceeding the capacity of the 10GbE interface. Should I care that ext4 is faster?


Another thought came to mind: I would imagine that by plugging in each 12TB drive one at a time, I'd transfer the data over substantially faster than over the network. My concern is that DSM will want to partition each drive in such a way that it has a system partition on it. When I wipe all of the HDDs in the array (as I'll just want to start with a clean DSM install), those 12TB drives would want to boot DSM. I'm guessing there's no real effective way around this?


Your CPU and RAM are fine. btrfs will use more RAM than ext4 with your data size. At some point I'd consider adding memory given your large storage capacity, but it isn't urgent.

 

But I'm not quite sure I understand what you are describing with the 12TB drives. Are these bare drives you are going to add to DSM? Or are they externals?

 


Thanks for that reply, I am considering doubling up my RAM in the near future.

Yes, they are bare drives; I will be getting them some time this week.

What I was trying to say is that I still have open slots in my chassis, so wouldn't I get faster transfer speeds if I plugged them in directly and had the drives visible under DSM? Just add one 12TB at a time as a simple volume to copy the data over?
But it's my understanding that once a drive is installed, DSM will probably want to create a system partition on it. Won't that create a potential hassle when I try to add one drive back at a time to copy the data back over? How would my system know which drive to boot?!?

I want to format all my drives and literally start from scratch once all the data is transferred out.
If it's too complicated, I will stick with doing it over the network via a desktop machine.

Edited by merve04

Do you have slots for all four drives? If so, I would create them as another storage pool/volume.

  • Assuming your current volume is Volume1, you can create the new one as Volume2 (or Volume3 if necessary)
  • Make it a JBOD single volume (or a RAID5 if you can fit within 36TB of storage)
  • Once all the data is copied, pull the 12TB drives out and set them aside
  • Reinstall and create a new SHR2 btrfs Volume1 with your old drives
  • Once everything is up and running, reinstall the 12TB drives and Volume2 will magically appear, ready for you to copy back

This keeps all the storage transfer in-box and doesn't use the network.  The critical item for you is to make sure your volume numbering does not collide from old to new. In other words, the 12TB volume number must be larger than the number of volumes in the new configuration.
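A quick way to sanity-check the numbering at each step might be something like this (a sketch; paths are the DSM defaults):

# List the volume mount points that currently exist
ls -d /volume*

# Cross-check which arrays back them
cat /proc/mdstat
df -h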


Unfortunately, no, I don't have enough ports. I've got 13 drives running at the moment. I could rip out Volume 2, as it's a single HDD with nothing important on it; I'd then have physical space to add 4 drives (I have a 16-bay chassis), but between the mobo and the LSI controller I only have 14 connections. It would be too insane to rip out the 2 parity drives IMO.

Edited by merve04

My advice is not to get creative moving drives around in your live system, as you only have one copy of your data.

 

While you could create volumes one at a time to offload data, it's again your only copy distributed across multiple volumes that all have to be compatible with your new configuration.  It's possible but increases your risk.

 

Any solution that requires you to expand your very large volume on the new build, or to change between SHR1 and SHR2, will take at least as long as the network copy.

 

So keeping things uncomplicated and safe may be the right move here.


Agreed, I do only have one copy of my data, with the exception of my personal docs/pics, but yeah, I guess I'd best keep things simple. I went with conservative numbers: transferring data at 50MB/s will take about 9 days one way. It just would have been nice to do it twice as fast 😉

Edited by merve04

Today I nuked my server and reinstalled DSM, but did I do something wrong?

I went into Storage Pool and created an SHR2 with all the HDDs I wanted, then went into Volume and created one volume for the entire pool in btrfs. It's now doing a consistency check, which is extremely slow, and my whole system is bogged down. I don't recall seeing a skip option when creating the volume.
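The only thing I've found so far is watching it from the shell, roughly like this (I'm assuming the standard Linux md knobs apply to DSM; I haven't tried throttling it):

# Watch the consistency check / resync progress and estimated time remaining
cat /proc/mdstat

# Supposedly the resync speed ceiling (KB/s) can be lowered so the system stays usable, e.g.:
echo 50000 | sudo tee /proc/sys/dev/raid/speed_limit_max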


Well, I don't know if it's just my bad luck, but I'm really hating btrfs. My system performance is absolute garbage. I do a few simple tasks and everything comes to a crawl. I'm no longer able to hit 60-70MB/s download speeds with Usenet; 30 is the best I've seen, and the average is more like 15 now. When I move/rename files I see it drop to almost 1-2MB/s, and if Plex is trying to do a task in the library it also takes a hit. Just trying to browse the DiskStation from macOS using Finder is painfully slow. Is there anything I can check to improve this?

The sad part in all of this is that I still haven't installed Surveillance Station and the virtual machines which I had running previously.

Edited by merve04

Synology certifies btrfs on Atom hardware and it's not that slow.  So something isn't quite right.

 

Start with the lowest-level hardware first, then move to higher-level structures. Is everything else the same? Have you tested the raw performance of each connected disk? What is the configuration and state of the array? What's the memory usage?
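For example, something along these lines (device names as in your earlier layout):

# Raw sequential read speed of each member disk
hdparm -t /dev/sda /dev/sdb /dev/sdc /dev/sdd

# Array configuration and state; watch for a resync or scrub still running
cat /proc/mdstat

# Memory usage and whether anything is swapping
free -m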

Edited by flyride

/dev/sda:
 Timing buffered disk reads: 366 MB in  3.01 seconds = 121.57 MB/sec
/dev/sdb:
 Timing buffered disk reads: 440 MB in  3.01 seconds = 146.22 MB/sec
/dev/sdc:
 Timing buffered disk reads: 414 MB in  3.01 seconds = 137.74 MB/sec
/dev/sdd:
 Timing buffered disk reads: 388 MB in  3.01 seconds = 128.90 MB/sec
/dev/sde:
 Timing buffered disk reads: 340 MB in  3.01 seconds = 113.03 MB/sec

/dev/sdg:
 Timing buffered disk reads: 430 MB in  3.01 seconds = 142.88 MB/sec
/dev/sdh:
 Timing buffered disk reads: 418 MB in  3.00 seconds = 139.12 MB/sec
/dev/sdi:
 Timing buffered disk reads: 446 MB in  3.00 seconds = 148.57 MB/sec

/dev/sdj:
 Timing buffered disk reads: 396 MB in  3.01 seconds = 131.71 MB/sec

/dev/sdk:
 Timing buffered disk reads: 470 MB in  3.00 seconds = 156.66 MB/sec

/dev/sdl:
 Timing buffered disk reads: 526 MB in  3.00 seconds = 175.17 MB/sec

/dev/sdm:
 Timing buffered disk reads: 428 MB in  3.00 seconds = 142.52 MB/sec

/dev/sdn:
 Timing buffered disk reads: 556 MB in  3.02 seconds = 184.23 MB/sec


Those look ok.

 

Did you turn OFF "Record file access time frequency" on each of the volumes? (Storage Manager | Volume Select | Action | Configure | General)
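If you want to confirm that change actually took effect at the filesystem level, checking the mount options is one way (a sketch; which atime option DSM sets for each choice may vary):

# Look for noatime/relatime among the volume's mount options
mount | grep volume1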

 

You didn't add encryption by chance?

 

When you created the Storage Pool, did you pick the flexibility or performance option?

19 minutes ago, merve04 said:

I've turned off Record file access time frequency on Volume 1, and none of my shared folders have encryption. I do believe I would have selected flexibility, as choosing performance does not offer SHR.

 

Did you just turn it off now, or had you turned it off before? The SHR option makes sense.

 

Might be worth doing some synthetic tests using something like NASPT for reference now.

https://www.intel.com/content/www/us/en/products/docs/storage/nas-performance-toolkit.html


I just turned it off on Volume1; it was set to monthly. Unfortunately, being on a Mac, I won't be able to use NASPT. I'm currently running 6.2.2u6 on 918+. Would it be worth my while to try 6.2.3 on 918+ again? Maybe migrate to 3615 while staying on 6.2.2u6, as it has native support for the LSI-9211? In the end it still baffles me, because I never had performance issues prior to wiping my NAS.

This topic is now closed to further replies.