XPEnology Community


Posts posted by ilciumbia

  1. Hi all,

     

     I have just purchased a 4TB HDD which I would like to use purely for backup purposes. I have an XPEnology box with DSM 5.2.5644.5 and 3 volumes: RAID1, RAID5 and no RAID (actually they are all SHR; I used standard nomenclature to describe the redundancy).

     

    My question is: should I install the new disk inside the XPEnology box, create a new volume (volume4) and use this as the backup destination or should I insert the HDD into a USB external box and backup everything there?

     

    My thoughts:

     

     Internal: takes no space on the desk, as fast as it can get, protected from power surges and outages, BUT: ext4 (so a bit more complex to access the backup from another machine if needed), power always on, and the backup sits in the same place (and on the same power source) as the data;

     

     External: removable, NTFS makes access easier, powers off between backups, BUT: slower and it heats up more.

     

    What do you guys think?

     

     BTW: I tried to mount an internal drive as NTFS so that it is not part of the DSM arrays, trying to get the best of both worlds, but Synology seems to handle fstab in a rather "unusual" way: I can mount internal NTFS disks, but they are not seen by the DSM GUI, so using them for backup makes things a little more cumbersome... Therefore I gave up.
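
     For reference, a minimal sketch of the kind of manual mount I was attempting. The device node /dev/sdd1 and the mount point are hypothetical, and it assumes an NTFS driver (e.g. ntfs-3g from a community package) is actually present on the box:

     # create a mount point outside the DSM-managed volumes (path is just an example)
     mkdir -p /mnt/ntfs_backup
     # mount the NTFS partition by hand; requires ntfs-3g (or an equivalent "mount -t ntfs")
     ntfs-3g /dev/sdd1 /mnt/ntfs_backup -o big_writes,noatime
     # to redo this at every boot without relying on DSM's fstab handling, the same
     # commands could go into a start-up script, e.g. under /usr/local/etc/rc.d/ on DSM 5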

     

    Thanks!

  2. Hi all,

     

     I realise I need a backup disk for all my data, so I've been wondering lately: if I connect an internal hard drive to a SATA port (so not USB), is there a way to convince DSM not to initialise it as part of an array, but to keep it standalone and access it as if it were an external USB disk? There are two reasons for this:

     

     1- Since it would only serve a backup purpose, I would like to format it as NTFS, which would make things easier should I ever need to recover data;

     2- since I often have Download Station running, the disks never go to sleep; if one of the disks is not part of an array, it would not have DSM installed on it, so there is at least a chance for it to spin down (see the sketch right below).
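
     Just to illustrate point 2, a minimal sketch of how such a standalone disk could be told to spin down, assuming it shows up as /dev/sdd (hypothetical) and nothing else keeps it awake:

     hdparm -S 240 /dev/sdd   # spin down after 20 minutes of inactivity (240 x 5 s)
     hdparm -y /dev/sdd       # put the drive into standby right now
     hdparm -C /dev/sdd       # check the current power state (active/idle or standby)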

     

    What do you guys think?

     

    Thanks!

  3. What does parted's "align opt n", where n is each of the partition numbers on the disk, report? In particular, does it agree that the partitions are aligned?

     

    Er... it does not... ???

     

    root@DiskStation:~>parted /dev/sda align opt 1

    1 aligned

    root@DiskStation:~>parted /dev/sda align opt 2

    2 not aligned

    root@DiskStation:~>parted /dev/sda align opt 5

    5 not aligned

     

    root@DiskStation:~>parted /dev/sdc align opt 1

    1 aligned

    root@DiskStation:~>parted /dev/sdc align opt 2

    2 not aligned

    root@DiskStation:~>parted /dev/sdc align opt 3

    3 aligned

     

    Ouch... I am lost again... :shock:

     

    hdparm -I /dev/sda

     

    reports:

     

    Logical Sector size: 512 bytes

    Physical Sector size: 4096 bytes

     

    So my data partition on sda is in fact not aligned (not to mention the cache partition)?...
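
     A possible explanation (just my reading of the parted docs, not something I have verified on this box): the bare "align" above is short for align-check, and "opt" checks the optimal I/O boundary reported to the kernel, which is often 1 MiB and therefore stricter than the 4 KiB physical sector. Checking the minimal alignment instead should tell whether the partitions at least sit on physical-sector boundaries:

     parted /dev/sda align-check min 2   # minimal (physical-sector) alignment of partition 2
     parted /dev/sda align-check min 5
     parted /dev/sdc align-check min 2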

  4. Ilciumbia,

     

     May I ask you to please repeat your parted output (print list) for sda-f, but setting the units to bytes (unit B) to eliminate rounding? That way we can divide the starting offsets by 4096 and at least confirm good alignment if the results are integers.

     

    Would you also consider the output of parted's align, e.g. align opt 1, for each partition on each sample disk?

     

     

    Wow, how right you were! I did as you suggested:

     

    
    Model: HGST HDN724040ALE640 (scsi)
    Disk /dev/hda: 4000787030016B
    Sector size (logical/physical): 512B/512B
    Partition Table: gpt
    Disk Flags:
    
    Number  Start        End             Size            File system     Name  Flags
    1      1048576B     2551054335B     2550005760B     ext4                  raid
    2      2551054336B  4698537983B     2147483648B     linux-swap(v1)        raid
    5      4840079360B  4000681082879B  3995841003520B                        raid
    
    

     

    So in fact partitions ARE 4k aligned!

    1048576/4096 = 256

    2551054336/4096 = 622816

    4840079360/4096 = 1181660
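
     For anyone who wants to repeat this check without doing the divisions by hand, a small sketch (the disk name sda is just an example) that reads the start sectors straight from sysfs; a partition is 4 KiB-aligned when its start, counted in 512-byte sectors, is divisible by 8:

     for p in /sys/block/sda/sda*; do
         start=$(cat "$p/start")                    # start offset in 512-byte sectors
         if [ $((start % 8)) -eq 0 ]; then
             echo "$(basename "$p"): start $start -> 4K aligned"
         else
             echo "$(basename "$p"): start $start -> NOT 4K aligned"
         fi
     done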

     

     I did not know about the "unit" option in parted! Thank you sooo much!

     

    But now I have yet another question: why is the first disk listed twice, once as hda and once as sda??? I see it happens to you as well...

  5. Hi all, thanks for your replies.

     

     The best way to find out if an HDD is really an "Advanced Format" drive is by checking the model number and specs on the manufacturer's website.

     Absolutely agree. The datasheet for the HGST https://www.hgst.com/products/hard-drives/nas-desktop-drive-kit drives in question is silent on the topic though.

     

     Exactly. The WD EARS are indeed Advanced Format, while HGST says nothing about it; yet, according to Linux, the drive does have 4k sectors.

     

     Besides, once the HDD becomes part of the RAID, all the formatting goes out the window, as the RAID and format structure will be maintained by mdadm, so the underlying physical cluster size no longer matters.

     

     It mostly matters when you try to use it for FAT32 or NTFS partitions under Windows; otherwise, under Linux, an ext4 partition just blankets over whatever is underneath.

    I must gently disagree. For all operating systems and disk arrangements, sector size matters.

    For single disks, the minimum file system allocation unit must be a multiple of the sector size. Otherwise, misaligned allocation units will cause two sectors to be read / modified / written whenever a file system allocation unit crosses a 4k sector boundary.

     

    I gently disagree too. Now that I think about it, when I initialized under DSM the EARS disk which is NOT reported as having 4k sectors, the file transfers were extremely slow, around 30MB/s, and I could not understand why. Now I do!

     

     Fundamentally, make sure the start of each label / partition / volume, whatever the OS calls it, is 4k aligned.

     

     Yeah, the point is that, as I showed in my dumps in the OP, the DSM system and cache partitions are not, while the data partitions are. It is probably not so bad, since the DSM system does not need that much speed; however, the reason behind this strange behaviour puzzles me...

  6. Hi all,

     

     I have recently set up my system, starting with a pair of 4TB HGST Deskstar NAS drives and then adding 1 TB WD Green disks.

     

    Now I wonder if DSM is capable of correctly recognizing and aligning 4k-sector disks. I don't know if the HGST are in fact 4k-sector disks, but if I run:

     

    cat /sys/class/block/sda/queue/physical_block_size

     

    I get:

    4096

     

    However, if I run parted on /dev/sda and I press p I get:

     

     Model: HGST HDN724040ALE640 (scsi)
     Disk /dev/sda: 4001GB
     Sector size (logical/physical): 512B/512B
     Partition Table: gpt
     Disk Flags:

     Number  Start   End     Size    File system     Name  Flags
      1      1049kB  2551MB  2550MB  ext4                  raid
      2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
      5      4840MB  4001GB  3996GB                        raid

     

     So apparently parted and /sys/class/block/sda/queue/physical_block_size tell two different stories. In fact, you can see that the system partitions reported by parted are not at all aligned, while the data partition, the most important one, appears to be; but I cannot be certain, since the start is given in MB and not in kB, so the rounding could trick me.
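
     (For reference, parted can be asked to print exact offsets instead of rounded ones, which would remove the ambiguity; something along these lines:)

     parted /dev/sda unit B print    # partition table with exact byte offsets
     parted /dev/sda unit s print    # the same, in 512-byte sectors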

     

    Then I have another disk, a WD WD10EARS-14Y5B1, which, following cat /sys/class/block/sda/queue/physical_block_size, reports 4096 as well, whereas parted says:

     

     Model: WDC WD10EARS-14Y5B1 (scsi)
     Disk /dev/sdd: 1000GB
     Sector size (logical/physical): 512B/512B
     Partition Table: msdos
     Disk Flags:

     Number  Start   End     Size    Type     File system  Flags
      1      1049kB  2551MB  2550MB  primary               raid
      2      2551MB  4699MB  2147MB  primary               raid
      3      4832MB  1000GB  995GB   primary               raid

     

     Again, parted does not agree on the physical sector size, and the system partitions are exactly as on the HGST (so not 4k aligned). However, the data partition appears to be aligned, with the same proviso as before.

     

    Curiously enough, then, I have another WD disk, this time the model is reported by parted as:

     

     Model: WDC WD10EARS-00MVWB0 (scsi)
     Disk /dev/sdc: 1000GB
     Sector size (logical/physical): 512B/512B
     Partition Table: msdos
     Disk Flags:

     Number  Start   End     Size    Type     File system  Flags
      1      1049kB  2551MB  2550MB  primary               raid
      2      2551MB  4699MB  2147MB  primary               raid
      3      4832MB  1000GB  995GB   primary               raid

     

     however, this one answers 512 to cat /sys/class/block/sda/queue/physical_block_size. Possibly it does not report the physical sector size correctly, because, to my knowledge, the EARS are 4k "Advanced Format" drives... Am I correct?

     

     Being in RAID1 with the previous one, the starting points of its partitions are identical.

     

    Last disk I have is:

     

     Model: WDC WD20EARX-00PASB0 (scsi)
     Disk /dev/sdf: 2000GB
     Sector size (logical/physical): 512B/512B
     Partition Table: msdos
     Disk Flags:

     Number  Start   End     Size    Type      File system  Flags
      1      1049kB  2551MB  2550MB  primary                raid
      2      2551MB  4699MB  2147MB  primary                raid
      3      4832MB  2000GB  1996GB  extended               lba
      5      4840MB  2000GB  1995GB  logical                raid

     

    reported again as 4k by cat /sys/class/block/sda/queue/physical_block_size.

     

     Also in this case the data partition would seem to be 4k aligned, even if I do not know why it ended up as a logical partition instead of a primary one...???

     

     To recap, my question is: why are the system partitions not 4k aligned even though the disks report a 4096-byte block size? Is it because it does not matter for the system partitions, since they do not require speed, unlike the data partition? Or did something go wrong during installation?

     

    Is there a way to realign existing partitions in a non-destructive way?

     

     Also: why does sdf have a logical partition in it?

     

    Thanks!!!

  7. Hi everyone,

     I have an HP Gen8 server with two 2 TB disks.

     I did a first, successful installation of XPEnology with DSM, configuring the two disks in RAID 1.

     Then, by mistake, I installed DSM version 6 and, of course, everything broke. :cry:

     Using a USB stick with Ubuntu I recovered all the data, and then I deleted only the small initial partition of the disks, where I believe DSM gets installed.

     I did not delete the (mirrored) partition holding the data.

     I started over and reinstalled everything, but a problem came up:

     after installing the correct DSM 5.x version, it will not let me create the new volume with the two disks in RAID 1.

     The disks are detected, but it will not let me apply the settings, even if I tell it to erase all the data.

     Has this problem ever happened to you?

     Do you have any suggestions?

     Thanks a lot for the help!!!

     The disks are detected, but detected how? As new, initialised, not initialised, empty, full?...

     Sent from my Nexus 10 using Tapatalk

  8. Apparently the only reason why they moved to the N3160 is that the N3150 was phased out by Intel, but since no recalls have been issued, I believe there are no problems with the processor itself and the DS716+ and DS216+ will continue to receive support and upgrades... I myself have an ASRock N3150M and I have no intention of changing it...

     

     Sent from my Nexus 10 using Tapatalk

  9. I thought about buying the N3700 as well, but 30€ difference (70 vs 100) was unjustified for me, the difference being only the boost frequency (for the most part you will stay at 1.6GHz or less, unless you are heavily transcoding) and a slightly better GPU (which is mainly useless for a NAS). No brainer after a bit of research, and the reason is still valid after trying it.

  10. I would like to buy an ASRock N3150M and run XPEnology on it.

    However, according to the thread http://xpenology.com/forum/viewtopic.php?f=2&t=9658, many people said that XPEnology cannot control the CPU frequency with Asrock N3700-itx.

     Basically, the CPU runs at 1.6 GHz constantly and cannot sleep or boost.

     So, will XPEnology work well with the ASRock N3150M and be able to control the CPU frequency? Has anyone tried to install XPEnology on an ASRock N3150M?

    Also, how about the WOL function?

    Thank you and sorry for my poor English.

     I have just built a NAS with exactly that board, and I am very happy with it. It is true, there is no burst CPU control yet, but that is no real problem: at least for the basic stuff, I have yet to see the processor go above 50%, and usually much less. Another problem is that I cannot control the HDD fan speed, but for that too I hope drivers will be included in the next XPEnoboot release.
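
     For anyone curious, a quick way to check whether the kernel exposes CPU frequency scaling at all: if the cpufreq directory is missing, the scaling driver simply is not there and the CPU stays at its base clock. These are standard sysfs paths; whether they exist depends on the XPEnoboot build:

     ls /sys/devices/system/cpu/cpu0/cpufreq/ 2>/dev/null || echo "no cpufreq support exposed"
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor 2>/dev/null
     cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq 2>/dev/null   # current frequency in kHz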

     

    I cannot say anything about WOL, never used it...

  11. Hi Trantor,

     

     I just assembled my NAS with an ASRock N3150M motherboard, and I cannot see any fan controls in the control panel. Does that require a specific driver? I would like to be able to control the fan speed according to the disk temperatures, otherwise it always sounds like an aeroplane taking off... :grin:
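
     In case it helps the diagnosis, a minimal sketch of how to check from the shell whether any hardware-monitoring / PWM fan interface is exposed at all (standard sysfs paths; if nothing shows up, the sensor chip driver is probably missing from the build):

     ls /sys/class/hwmon/                                   # one hwmonN entry per detected sensor chip
     cat /sys/class/hwmon/hwmon*/name 2>/dev/null           # which driver each entry belongs to
     ls /sys/class/hwmon/hwmon*/pwm* 2>/dev/null || echo "no PWM fan controls exposed"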

     

    Thanks!

  12. Thanks for the reply. Extend it, yes, but add the 4 TB to change the RAID level from nothing to RAID1... I have not found a way... Is it possible?

     

    EDIT:

     

     I have re-read your reply more carefully and I cannot quite figure out what would happen without trying (and I cannot try it now because the two 2TB disks are currently busy)... I can only guess that adding a 4 TB disk to a 2x2TB SHR will give me the choice to move from RAID1 to RAID5... Perhaps it could allow me to create two 2TB volumes, each mirrored on half of the 4TB disk, i.e. 2TB//(4TB/2) + 2TB//(4TB/2), but I am not sure this is what will happen...

     

    Can anybody enlighten me?...

     

    Thanks!

  13. Hi all,

     

     I have just finished assembling my station: N3150M motherboard, an extra Syba SATA controller, 8GB of RAM. I am experimenting a bit and I am trying to do something which, unfortunately, does not appear to be possible.

     

     First attempt: I thought I could take my new 4 TB drive, create a volume and copy my data onto it, then add my spare 2x2TB drives, create a disk group in JBOD and use this group as the second member of a RAID1 array. In practice, 4//(2+2). Apparently, not possible.

     

     Second attempt: I was disappointed to find out I could not do this either: create a 2TB+2TB disk group in JBOD, create a volume with that group, then add the 4TB disk and create a RAID1 array. In practice, (2+2)//4. Not possible either.

     

     What I get from this is that, even if it is still great, the DiskStation has its limits in terms of flexibility in disk usage and assembly (or maybe I was expecting too much!).

     

     Can you confirm that my deductions are correct? Is there no way to concatenate several drives JBOD-style and use the resulting bigger capacity as if it were a single bigger disk in a RAID1/5/6 array?
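
     For what it's worth, the Linux md layer itself can do this kind of nesting; the limitation appears to be in the Storage Manager GUI. A rough sketch of what the 4//(2+2) layout would look like with plain mdadm (device names and partition numbers are hypothetical, and DSM would neither create nor monitor such an array, so this is purely illustrative):

     # concatenate the two 2TB disks into one ~4TB "linear" device
     mdadm --create /dev/md10 --level=linear --raid-devices=2 /dev/sdb3 /dev/sdc3
     # mirror the 4TB disk against the concatenated pair
     mdadm --create /dev/md11 --level=1 --raid-devices=2 /dev/sda3 /dev/md10
     mkfs.ext4 /dev/md11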

     

    Thanks!

  14. Hello,

     

     Thanks for your replies. In fact I am still undecided: I am trying to choose between the N3700M (mATX) and the N3150-ITX. The former has 3 free PCIe slots but only 2 SATA ports on board, so I would need to install 2 PCIe SATA controllers to reach 10 SATA ports (8 needed at the moment), plus 1 empty slot for a possible GbE card for redundancy/link aggregation on Ethernet. The N3150-ITX, on the other hand, has 4 SATA ports but only one PCIe slot, plus a mini-PCIe slot, for which GbE cards do exist, but it is unknown (to me!) whether they will work... And then there is the memory thing... but I like the ITX form factor! :smile:

     

     Are there any PCIe x1 SATA controller cards with more than 4 ports that are known to work with XPEnology? And any mPCIe GbE network cards?

     

    Thanks!

  15. Hello,

     

     I am wondering if, for a system based on the N3150 or N3700, it makes a huge difference to have single-channel or dual-channel memory. I am planning to buy 8 GB for my XPEnology station; the question is: 2x4 or 1x8? Of course 2x4 would be better; however, one single 8 GB module is more "recyclable" should I realise that I need more memory (maybe for virtualisation): I could simply add one more module, whereas if I buy 2x4 now I would need to replace both, and recycling 2x4 GB modules might be a little harder.

     

    Or is 2x2GB enough? :smile:

     

    Thanks!
