XPEnology Community

flyride

Moderator
  • Posts

    2,438
  • Joined

  • Last visited

  • Days Won

    127

Posts posted by flyride

  1. You have a simple RAID5, so the logical volume manager (lvm) is probably not in use and you won't have any vg's.  You need to figure out which device your array is.  Try a "df" and see if you can match /dev/md... to your volume.  If that is inconclusive because the volume isn't mounting, try "cat /etc/fstab" (see the example commands below).

     

    See this thread for some options: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/#comment-108013
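
    A minimal set of commands to run over SSH, assuming a stock DSM layout where data volumes live on /dev/mdX devices:

    # Match an md device to a mounted volume (if it is mounted at all)
    df -h | grep -e /volume -e /dev/md
    # If nothing is mounted, see which md device fstab expects for the volume
    cat /etc/fstab
    # And list the arrays the kernel currently knows about
    cat /proc/mdstat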

  2. If you are familiar with LAG or other network interface aggregation tech, you'll know that it won't help a single client (e.g. a gaming machine) go any faster than a single port.

     

    To put this into perspective:
    A single SATA SSD (or 2 in RAID 1) will easily read faster than a 1 GbE interface.

    2 SATA SSDs in RAID 0 will nearly fill a 10 GbE interface.

    4 SATA SSDs in RAID 5 will certainly saturate a 10 GbE interface.
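
    Rough numbers, assuming typical figures of ~550 MB/s sequential read per SATA SSD, ~118 MB/s usable on 1 GbE, and ~1,200 MB/s usable on 10 GbE: one SSD already exceeds 1 GbE several times over, two in RAID 0 (~1,100 MB/s) sit just under the 10 GbE ceiling, and a 4-drive RAID 5 reading from three data members (~1,650 MB/s) is comfortably past it.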

    • Like 1
  3. 14 minutes ago, nick413 said:

    CPU - sees 16 cores (Check - cat /proc/cpuinfo)

     

    Again, DSM will only use 16 THREADS, not cores.  You have 12 physical cores plus 12 SMT (Hyper-Threading) threads.  So DSM is actually only using 8 of those cores and their 8 SMT threads.

     

    You will get better performance if you disable SMT; DSM will then report and use all 12 actual cores.
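
    A quick way to see the thread-vs-core distinction for yourself, assuming a standard /proc/cpuinfo layout:

    # Logical threads the kernel sees (DSM will use at most 16 of these)
    grep -c ^processor /proc/cpuinfo
    # Physical cores, counted as unique (physical id, core id) pairs
    awk -F: '/physical id/{p=$2} /core id/{print p":"$2}' /proc/cpuinfo | sort -u | wc -l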

  4. Well, that explains why you want everything you can get out of that 40Gbps card, as the theoretical drive throughput is 2.5x your network bandwidth.  So maybe it's not quite so critical that you get the iSCSI hardware support working natively, as that won't be the limiting factor.  But good luck however it turns out.

     

    You may know this already, but:

     

    The DS3615xs/DS3617xs images have a native maximum of 12 drives

    The DS918 image has a native maximum of 16 drives

     

    These can be modified, but every time you update DSM the maximum will revert and your array will be compromised.  It SHOULD come right back once you fix the maxdisks setting.
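
    For reference, the limit lives in synoinfo.conf; this is a sketch assuming the usual key name and file locations (back both files up first):

    # See the current value in the running copy and the defaults copy
    # (the defaults copy is what a DSM update restores)
    grep -i maxdisks /etc/synoinfo.conf /etc.defaults/synoinfo.conf
    # Example of raising the limit to 16 drives
    sed -i 's/^maxdisks=.*/maxdisks="16"/' /etc/synoinfo.conf /etc.defaults/synoinfo.conf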

  5. 26 minutes ago, nick413 said:

    Thanks for the advice, but my infrastructure is already built on ESXi; I know its capabilities and weaknesses.

    The SSD storage with RAID F1 will be connected to the Dell M1000e fabric via the iSCSI protocol through the Dell Networking MXL blade switch QSFP+ port; it is important for me to have 40G of bandwidth.

     

    It is the LSI SAS 9300-8i that has powerful chips for transmitting data via the iSCSI protocol, which suits me.

    How many drives will you have in your RAID F1?

  6. 52 minutes ago, nick413 said:

    An SSD storage system virtualized on ESXi means a significant drop in performance.

    Also, ESXi does not fully support 40G data transfer with Mellanox network cards.

     

    My system is very close in design to yours (see my signature).  If you virtualize your network and storage, you may be correct.  However, ESXi allows you to be selective as to what it manages and what it does not.

     

    I am using 2x enterprise NVMe drives that are presented to DSM via physical RDM, which is a simple command/protocol translation.  The disks are not otherwise managed by ESXi.  This allows me to use them as SATA or SCSI within DSM (they would be totally inaccessible otherwise).  If you have a difficult-to-support storage controller, the same tactic may apply.  From a performance standpoint, any overhead is negligible: I routinely see 1.4 GBps (that's gigaBYTES per second) of throughput, which is very close to the stated limits of the drives.

     

    If the hardware is directly supported by DSM, ESXi can passthrough the device and not touch it at all.  I do this with my dual Mellanox 10Gbps card and can easily max out the interfaces simultaneously.  In the case of SATA, I pass that through as well so there is no possible loss of performance on that controller and attached drives.

     

    The point is that ESXi can help resolve a problematic device in a very elegant way, and can still provide direct access to hardware that works well with DSM.
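
    For anyone wanting to try the same physical RDM approach, the general shape of the command on the ESXi host shell is shown below (the device path, datastore and file names are placeholders, not my actual config):

    # Create a physical-compatibility (passthrough) RDM pointer file for a local NVMe device,
    # then attach the resulting .vmdk to the DSM VM as an additional disk
    vmkfstools -z /vmfs/devices/disks/t10.NVMe____EXAMPLE_SERIAL \
      /vmfs/volumes/datastore1/DSM/nvme_rdm.vmdk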

    • Thanks 1
  7. On 12/10/2019 at 3:14 PM, nick413 said:

    I have 2 processors, how do I know if the system uses one processor or two?

     

     

    The CPU information DSM displays is cosmetic and is hard-coded into the DSM image you're using.  Run "cat /proc/cpuinfo" if you want to see what is actually recognized by the system.  There is a limit of 16 threads.  You will need to disable SMT if you want to use all the cores (you are using two hexa-core CPUs).
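
    To answer the one-or-two-processors question directly, assuming a standard /proc/cpuinfo layout:

    # Number of physical CPU packages (sockets) the kernel sees
    grep "physical id" /proc/cpuinfo | sort -u | wc -l
    # Total logical threads (DSM will use no more than 16 of them)
    grep -c ^processor /proc/cpuinfo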

     

    https://xpenology.com/forum/topic/15022-maximum-number-of-cores/?do=findComment&comment=115359

     

    Just a general comment on this thread (which I am following with interest): this task would be a lot easier if you ran the system as a large VM within ESXi.

    • Like 1
  8. Under "Features and Services" within the TOS:

    2. QuickConnect and Dynamic Domain Name Service (DDNS)
    Users who wish to use this service must register their Synology device to a Synology Account.

     

    When using XPenology, you are not using a Synology device.  Therefore you aren't able to register that device to a Synology account.  If you do, you are violating the TOS.

     

    This is tantamount to stealing proprietary cloud services, and is discouraged here and by the cited FAQ.

  9. Nobody knows.  The current 1.03b and 1.04b loaders seem to work with DSM 6.2.x, but any new DSM patch can (and does, with surprising regularity) break compatibility with them.  The community has found workarounds in most cases.  That's the reason for this thread here:

    https://xpenology.com/forum/forum/78-dsm-updates-reporting/

     

    Look for folks with similar hardware, virtualization, loader and DSM versions being successful before attempting any DSM update.  And seeing as you are planning to use ESXi, there really is no excuse not to have a test XPenology DSM instance to see if the upgrade fails or succeeds before committing the update to your production VM.

     

    When Synology releases DSM 7.0, it's a virtual certainty that the current loaders will not work.  Someone will have to develop a new DSM 7.0 loader hack, and there is really no information about how long it might take or how difficult it may be.

  10. 27 minutes ago, Jamzor said:

    I have an HP MicroServer Gen8 with the Intel E3-1265L v2 CPU, running XPEnology on ESXi 6.5 U3 - HP custom at the moment.
    Now the question is: I saw in the other post that hardware transcoding is only possible with 918+?  But I see everyone is using 3615xs on this machine.  Why not 918+?  Is that not working even if you buy the correct network card?

     

    I'm not an HP expert, but I can answer the distilled-down question above.  Your CPU is an Ivy Bridge architecture, which is too old to run the DS918 version of DSM; that image is compiled to use instructions present only in Haswell or later.  So those running Ivy Bridge architecture have no choice but to run DS3615xs.

     

    Hardware transcoding requires Intel QuickSync drivers that are only implemented in DS918 DSM.  This post may help you understand the limitations further.
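
    If you want to check a given CPU yourself, one rough test (assuming the commonly cited requirement that the DS918 image needs Haswell-era instructions such as FMA3/MOVBE) is to look for those CPU flags:

    # Prints "fma" and "movbe" if the CPU advertises them; no output suggests DS918 won't boot
    grep -m1 ^flags /proc/cpuinfo | tr ' ' '\n' | grep -E '^(fma|movbe)$'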

  11. MBR and Legacy are two different things.  If you can support a GPT partition, definitely do so.

     

    Loader 1.02b (for 6.1.x) can work in either Legacy or UEFI mode

    Loader 1.03b (for 6.2.x) works only in Legacy mode

    Loader 1.04b (for 6.2.x) works only in UEFI mode

     

  12. 9 hours ago, bughatti said:

    I have a [raid 5] that correlates to volume 1.  I moved my setup a few days ago and when I plugged it back in, my raid 5 lost 2 of the 4 drives.  One drive was completely hosed, not readable in anything else.

     

    [snip]

     

    I tried a lot of commands (I apologize, but I do not remember them all) to get the raid 5 back.  In the end I just replaced the bad drive, so at this point I had 2 original good raid 5 drives, and 2 other drives that did not show in the raid 5.

     

    I ended up doing mdadm --create /dev/md2 --assume-clean --level=5 --verbose --raid-devices=4 /dev/sda3 missing /dev/sdc3 /dev/sdd3

    This put the raid back in a degraded state, which allowed me to repair using the newly replaced drive.  The repair completed, but now volume1, which did show up under volumes as crashed, is missing under volumes.

     

    Sorry for the event and to bring you bad news.  As you know, RAID 5 spans parity across the array such that all members, less one, must be present for data integrity.  Your data may have been recoverable at one time, but once the repair operation was initiated with only 2 valid drives, the data on all four drives was irreparably lost.  I've highlighted the critical items above.
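
    For anyone who hits a similar failure in the future, the usual order of operations before anything destructive looks roughly like this (device names follow the example above; treat it as a sketch, not a recipe):

    # Inspect each surviving member and compare Event counts / Update times
    mdadm --examine /dev/sd[abcd]3
    # See what the kernel has currently assembled
    cat /proc/mdstat
    # A forced assemble of the surviving members preserves data;
    # --create rewrites array metadata and is an absolute last resort
    mdadm --assemble --force /dev/md2 /dev/sda3 /dev/sdc3 /dev/sdd3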
