XPEnology Community

flyride
Moderator · Posts: 2,438 · Days Won: 127

Posts posted by flyride

  1. 3 hours ago, haldi said:

    Is there a way to migrate into ESXi by directly passing through the hard drives, without needing to wipe the existing RAID 5?

     

    I've done this successfully.  But have a backup of your data.

  2. Not initialized just means that the drive is foreign to DSM and the DSM partition structure has not been built on it yet.  As soon as you add the drive to an array, or create an array using it, it will be initialized automatically.

     

    A drive will not automatically be joined to the volume - you must initiate it with a Repair operation.  However, you cannot replace a 4TB drive with a 500GB drive and restore redundancy.  You can only replace a drive with another one equal or larger in capacity.  Because of this, DSM is not offering you the option to Repair the array.
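
    If you want to double-check from the shell what DSM actually sees (a sanity check only, assuming SSH access; replace sdX with the real device name), compare the capacities the kernel reports:

    # List every block device and partition with its size in 1K blocks
    cat /proc/partitions
    # Or inspect a single disk (replace sdX with the drive in question)
    sudo fdisk -l /dev/sdX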

  3. AFAIK the limit is in the kernel itself, as compiled by Synology.  The platforms enabled by the current loaders have the following compute characteristics:

     

    DS3617xs - Xeon D-1527 - 4C/8T

    DS3615xs - Core i3-4130 - 2C/4T

    DS918 - J3455 - 4C/4T

     

    There has been some confusion about cores vs. threads.  I think that 16 threads is the kernel limit.

    As you can see, 16 threads covers all these CPUs and we have evidence that 16 threads are supported on all three platforms (EDIT: this is not correct after further examination, see here for current discussion.)

     

    If you have more than 8 cores, you will get better performance by disabling SMT. @levifig, you are already doing this. I don't think there is any other way to support @Yossi1114's 10C/20T processor other than to disable SMT.

     

    If someone wants to develop a loader against a platform with more thread support, may I suggest investigating the FS3017 (E5-2620v3 x 2 = 12C/24T), FS2017 (D-1541 = 8C/16T), or RS3618xs (D-1521 = 8C/16T).  It would stand to reason that the kernel thread limits might be higher for those platforms.
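
    If you want to confirm how many threads your kernel actually brought online (assuming SSH access to the box), a simple check is:

    # Count the logical CPUs (threads) the running DSM kernel has online
    grep -c ^processor /proc/cpuinfo
    # If this is lower than your CPU's thread count, the kernel limit
    # (or disabled SMT) is the likely cause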

  4. Your BIOS may not support hot plugging, or you may need to enable it.

    What you want to avoid is booting the wrong DSM copy (the one from the drive you removed). 

     

    Do you have a computer that you could use to wipe the WD disk not currently in the array?  If you can install it in another computer and delete all the partitions, then you can put it back in your NAS, boot normally, and rebuild the clean drive back into the array.
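
    If that other computer runs Linux, one possible way to wipe the disk (a sketch only, and destructive - be absolutely sure /dev/sdX is the WD disk and nothing else):

    # Identify the correct disk first
    lsblk
    # Remove all partition-table, RAID, and filesystem signatures from it
    sudo wipefs --all /dev/sdX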

  5. Did you switch platforms?  There is better SCSI support on DS3615xs vs. DS918, but in any case I wouldn't be surprised if 6.2.1 and 6.2.2 break things that worked before, since that seems to be the way things go now.

  6. Vdisks have to be attached as SATA for installation, but you may be able to switch them to SCSI or SAS after the installation is done.  I did test that on 6.2 some time ago.

     

    However, if you upgrade or degrade the array, it will probably fail on bootup until you go back into the VM and reattach the drives to SATA.  My recollection is that it was all a bit finicky.

  7. So you deliberately degraded your array in order to backrev.  Boggle...

     

    At this point, remove the Hitachi and re-install your fourth WD drive. You really ought to do this while the system is running (don't reboot or power it down).  The system should recognize the WD as a new drive. You should then be able to repair your array and the array may be back to normal.

     

    You'll need to noodle through your packages and settings (hopefully you did a settings backup?).  Packages are lost when the system cannot see the volume the packages are installed on after bootup.  Is your volume named the same as it was before the upgrade (i.e. volume1)?
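
    Before running the Repair, it's worth confirming over SSH that the kernel actually picked up the hot-inserted WD drive, for example:

    # The most recent kernel messages should show the new disk attaching
    dmesg | tail -n 20
    # The arrays will still show as degraded until the repair completes
    cat /proc/mdstat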

     

  8. Honestly, I don't think any of the repair options are likely to help.  Your LV seems to be ok, but the FS is toast.  The post I linked specifically discussed the recovery option.  The FS does not have to mount in order to use that option to recover your files.

     

    Your biggest challenge will be to find enough storage to perform the recovery.  I would probably build up another NAS and NFS mount it to the problem server.
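
    As a rough sketch of what that recovery looks like (the device path, NFS export, and IP here are examples, not your actual names - adjust them to match your system):

    # Mount the spare NAS export somewhere with enough free space
    sudo mkdir -p /mnt/recovery
    sudo mount -t nfs 192.168.1.50:/volume1/recovery /mnt/recovery
    # Copy files off the damaged btrfs filesystem without mounting it
    sudo btrfs restore -v /dev/vg1000/lv /mnt/recovery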

  9. You might want to review this thread: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability

     

    And in particular, recovering files per post #14: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/?do=findComment&comment=108021

     

    Mostly btrfs tends to self-heal, but there are not a lot of easy options on Synology to fix a btrfs volume once it has become corrupted.  At least, none that are documented and functional.

  10. I think you are declaring success... ?

     

    This technical information is earlier in the thread, but I'll restate here.

     

    Synology's reference to the "system partition" is a generic term for two physical partitions that are each in RAID 1.  These arrays span across all regular drives in the system.  When a disk is called "Initialized" it means that DSM has created the partition structures on the drive.  The first partition is for DSM itself and is mapped to /dev/md0.  The second is for Linux swap and is mapped to /dev/md1.  A "failed" system partition means that either or both of the arrays (/dev/md0 or /dev/md1) are degraded.  Repairing the system partition resyncs those arrays.

     

    What we have done is changed two of your four drives to hotspares in /dev/md0, the RAID array for the DSM partition.  The hotspare drives must have the partition configured (or DSM will call the disk "Not Initialized"), and must be in the array (or DSM will call the system partition "Failed") but are idle unless one of the remaining members fails. 
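
    You can see this state from the shell if you're curious - the member roles (active vs. spare) for the DSM partition array show up with:

    # Shows each member of the DSM system partition array and whether it
    # is an active device or a spare
    sudo mdadm --detail /dev/md0
    # Quick summary of all md arrays, including md0 (DSM) and md1 (swap)
    cat /proc/mdstat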

  11. This doesn't make a ton of sense.  Did you have 5 drives to start with?  What was the exact configuration of the array and volumes before the upgrade?

    The 160GB Hitachi drive can't be part of your array, yet it says it is intact and degraded.  So something isn't adding up.

     

    Along with the answers to the above, go to the command line, run cat /proc/mdstat, and post the output here.

     

  12. This is referring to the presence or absence of kernel support for hardware video acceleration.  DRI is an acronym for "Direct Rendering Infrastructure".

     

    The /dev/dri driver will appear if 1) kernel support for the driver exists, and 2) compatible display hardware has been detected.

     

    Hardware acceleration in Video Station, Plex, etc. requires this driver to be present and working.
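
    A quick way to check (from SSH) is simply to look for the device nodes and the driver module:

    # If acceleration is available, card0 and renderD128 should be listed
    ls -l /dev/dri
    # If the directory is missing, check whether the Intel i915 module loaded
    lsmod | grep i915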

  13. I think there are two issues here.  First, the GUI "hot spare" refers to /dev/md2, /dev/md3... and is not the same thing as manually configured hotspares on /dev/md0.  I am unsure if you are trying additional commands in order to get something to appear in the hotspares GUI screen.  If you successfully convert /dev/sdc1 and /dev/sdd1 to hotspares, it will NOT be reflected there.

     

    Second, this technique was tested and documented for DSM 6.1.x and you are running 6.2.x.  I just tested on a DSM 6.2 system and it does just what you say it does - clicking repair from the GUI restores the failed partitions to active status.  This is a change in behavior from the GUI in 6.1.x.

     

    EDIT: the previous advice to perform the conversion from the command line instead of the GUI Repair button has now been incorporated into the original post.
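
    For reference, one plausible shape of that command-line conversion (a sketch only - the partition names /dev/sdc1 and /dev/sdd1 are the ones discussed above; check the original post before running anything like this):

    # Drop the two members out of the DSM system partition array
    sudo mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1
    sudo mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1
    # Shrink the array to two active devices...
    sudo mdadm --grow -n 2 /dev/md0
    # ...then re-add the partitions so they come back as hot spares
    sudo mdadm /dev/md0 --add /dev/sdc1 /dev/sdd1
    # Verify the result
    cat /proc/mdstat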

  14. 13 hours ago, martine said:

     

    
    admin@BALOO:/$ sudo mdadm --grow -n 2 /dev/md0
    mdadm: /dev/md0: no change requested

    Hope anyone can point me to the next steps, as I am no Linux guy.

     

    This could be a bit of a high-risk operation if you don't understand what is happening.  But in any case, if you are seeking assistance, post the output of

    cat /proc/mdstat

    and

    df

     
