XPEnology Community

flyride

Moderator

Posts posted by flyride

  1. "Note that when I pop the synoboot usb back, it asks me to migrate, and since all files are read-only, I can't restore my synoinfo.conf back to the original - 20 drives, 4 usb, 2 esata."

     

    Can you explain this further?  You said at the beginning  you had 12 data drives and one cache drive.  synoinfo.conf is stored on the root filesystem, which is /dev/md0 and is still functional.  Why can't you modify synoinfo.conf directly as needed?

     

    If you are saying drives are not accessible, we need to fix that before anything else.
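
    If the root filesystem is up, checking and editing the file from a root shell is straightforward. A minimal sketch (the setting names below are the ones usually customized on XPEnology builds; verify them against your own file and back it up first):

    # Back up, then inspect the drive/port settings in both copies of the file
    cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak
    grep -E '^(maxdisks|internalportcfg|usbportcfg|esataportcfg)=' /etc.defaults/synoinfo.conf /etc/synoinfo.conf

    # Edit the values as needed, e.g. maxdisks="20"
    vi /etc.defaults/synoinfo.conf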

  2. - Outcome of the update: SUCCESSFUL

    - DSM version prior update: DSM 6.2.2-24922 Update 4

    - Loader version and model: Jun's v1.04b - DS918+

    - Using custom extra.lzma: real3x mod

    - Installation type: BAREMETAL - ASRock J4105-ITX

    - Additional comments: This was a DSM instance patched for NVMe. Disabled SSD Cache prior to upgrade.

      Upgrade completed normally. NVMe SSD still recognized and verified no change to libsynonvme.so.  R/O Cache reinstalled without issue.
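
      (If anyone wants to run the same check, a simple before-and-after checksum comparison is enough. The path below is the one targeted by the common NVMe patch; confirm it on your own system.)

      # Record the checksum before updating, then compare again after the reboot
      md5sum /usr/lib64/libsynonvme.so.1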

     

    - Outcome of the update: SUCCESSFUL

    - DSM version prior update: DSM 6.2.2-24922 Update 4

    - Loader version and model: Jun's v1.04b - DS918+

    - Using custom extra.lzma: NO

    - Installation type: VM - ESXi 6.7U2

  3. [Screenshot: DSM update release notification for the DS918+]

     

    https://www.synology.com/en-global/releaseNote/DS918+


    (2020-01-15)

    Important Note

    1. The update is expected to be available for all regions within the next few days, although the time of release in each region may vary slightly.
    2. This update will restart your Synology NAS.

     

    Fixed Issues

    1. Enhanced the compatibility of M.2 NVMe SSD devices.

     

    ⚠️ Note the update of NVMe code. ⚠️

    Anyone who has used the NVMe patch should consider removing the SSD cache prior to applying this update.

  4. I apologize if this seems unkind, but you need to get very methodical and organized, and resist the urge to do something quickly. There's a pattern here: folks throw themselves at this problem and end up doing damage without really doing the research. That leads to a low likelihood of data recovery, and at best a recovery with a lot of corruption and data loss.

     

    If your data is intact, it will patiently wait for you. I don't know why you decided to boot up Ubuntu, but you must understand that all the device IDs are probably different there and nothing will match up. It actually looks like some of the info you posted is from DSM and some of it is from Ubuntu, so we pretty much have to ignore it and start over to have any chance of figuring things out.

     

    If you want help, PUT EVERYTHING BACK EXACTLY THE WAY IT WAS, boot up in DSM, and get shell access. Then:

    1. Summarize exactly what potentially destructive steps you took.  You posted that you clicked Repair, and then Deactivate.  Specifically where, affecting what?  Anything else?
    2. cat /proc/mdstat
    3. Using each of the arrays returned by mdstat: mdadm --detail /dev/<array>
    4. ls /dev/sd*
    5. ls /dev/md*
    6. ls /dev/vg*

    Please post the results  (that is if you want help... on your own you can do what you wish, of course). There is no guarantee that your data is recoverable - there is no concrete evidence of anything yet.
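
    If it's easier, the commands above can be run in one shot and the output captured to a single file for posting (a minimal sketch; the output path is arbitrary):

    # Run as root; collects all of the requested output into one file
    {
      cat /proc/mdstat
      for md in /dev/md*; do mdadm --detail "$md"; done
      ls /dev/sd* /dev/md* /dev/vg*
    } > /tmp/array_report.txt 2>&1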

  5. That is wholly dependent upon your hardware.  The Synology fan control software interfaces with the fans via serial hardware that is not present on a regular motherboard.

     

    Your motherboard may natively support a fan profile that responds to internal temp sensors and/or system utilization to control the fans.  It may also have a fixed "high/medium/low" setting (which would solve your noise problem).  This is probably the simplest solution, and in that case XPEnology has nothing to do with it.

     

    There are a number of online examples of more sophisticated, NAS-specific hacks to directly control fan interfaces such as on Supermicro motherboards.  This approach would be dependent upon the fan hardware interface in your specific motherboard.  At one time, I wrote up something that ran as a Linux daemon so that it could monitor drive temperatures and respond to that more aggressively than the hardware profile.  But in the end, there wasn't enough of a measured difference to matter, at least in my climate.
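
    For the curious, the daemon approach was roughly along these lines (a rough sketch only, not the original script; the PWM path and drive list are hypothetical and entirely board-specific, and it assumes smartctl is available):

    #!/bin/sh
    # Minimal sketch: poll drive temps via SMART and bump fan PWM when hot.
    # PWM_PATH and DRIVES are placeholders; SMART attribute names also vary by drive.
    # (Many boards also need pwm1_enable set to 1 for manual control.)
    PWM_PATH=/sys/class/hwmon/hwmon0/pwm1
    DRIVES="/dev/sda /dev/sdb /dev/sdc"

    while true; do
      max=0
      for d in $DRIVES; do
        t=$(smartctl -A "$d" | awk '/Temperature_Celsius/ {print $10}')
        [ -n "$t" ] && [ "$t" -gt "$max" ] && max=$t
      done
      # Simple two-step profile: quiet at or below 40C, full speed at or above 45C
      if [ "$max" -ge 45 ]; then echo 255 > "$PWM_PATH"
      elif [ "$max" -le 40 ]; then echo 120 > "$PWM_PATH"
      fi
      sleep 60
    done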

  6. 1 hour ago, Captainfingerbang said:

    Speaking of brilliant, i think i forgot all of my linux commands in my brain. Its been so long!

    It appears the system detects the drive but its also giving me error.

    I DID manage to get the sh script moved to the proper directory though!

    
    root@ahern2:~# cp /volume1/test/libNVMEpatch.sh /usr/local/etc/rc.d/

     

     

    Yeah, the only thing in that whole long terminal session that did anything was the file copy above.  You're lucky the random shell activity didn't do any damage as root.

    Now you need to make the script executable:

     

    On 11/29/2019 at 10:38 AM, The Chief said:

     

    
    chmod 0755  /usr/local/etc/rc.d/libNVMEpatch.sh

     

     

    Then reboot and your drive should show up in Storage Manager.  Please do yourself a favor and only use R/O cache mode, not R/W ...
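
    A quick sanity check before rebooting, to confirm the permissions took:

    ls -l /usr/local/etc/rc.d/libNVMEpatch.sh
    # Expect the mode to show as -rwxr-xr-x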

  7. Interesting.  In the past few years, though, you would think we would have seen an empirical example of someone specifically hitting the 8-thread limit.

     

    Is there anyone out there that can attest to using more than 8 threads (or cores, for that matter) on a platform other than DS3617?
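
    For anyone checking their own box, the thread count the DSM kernel actually brought up can be read directly from a root shell (a simple check):

    # Number of logical CPUs (threads) visible to the kernel
    grep -c ^processor /proc/cpuinfo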

  8. 27 minutes ago, IG-88 said:

    but as i did not know of the processor (and that server has also 2 sockets) the 3617 is a safer choice as it supports 8cores with HT or 16 without HT, its only half of that for the 3615

    so it depends on the cpu, if it its a smaller build then 3615 can also be used (the perc h310 in it mode would use the normal mpt2sas driver)

     

    Are you sure about that?  I don't think there is a kernel thread support difference between the two platforms.

  9. If /dev/sdb is your new drive #2, there's your problem.  While they are the "same" type (they are actually two different models of Caviar Green) and are both 1TB drives, the actual number of sectors available on /dev/sdb is less than on /dev/sdf.  The SHR array's base size is set by the exact size of /dev/sdf.

     

    In other words, the new drive (again, assuming it's mapped to /dev/sdb) is too small to add to the array.

     

    mdadm will let you create an array with up to a 1% size mismatch, because not all drives are exactly the same size, but a drive added to an existing array must be at least as large as the existing members.
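
    To verify the size difference, the sector counts can be compared directly (device names here follow the ones above; confirm them on your own system):

    # Disk sizes in 512-byte sectors; an added drive must be at least as large as the existing member
    blockdev --getsz /dev/sdb
    blockdev --getsz /dev/sdf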

  10. Most on this site are using Jun's loader and most of the resources and community support are aligned to that.   The link you cited is to another resource.

     

    Using Jun's loader, typically a minor version update is done through the Control Panel.  The community uses crowdsourced reports to help understand where there might be problems.  See here: https://xpenology.com/forum/forum/78-dsm-updates-reporting/

     

    As far as recovering a bricked installation - the best answer is not to have a bricked system.  Use a test environment to make sure your hardware accepts the update and then you will never have to worry about it.

  11. 18 hours ago, merve04 said:

    I will ask this, maybe some knows.. so there is X amount of data which had 1 parity, now changing to having 2 parity.. that's all great as its reading all the files and figuring stuff about, but due to the nature of how long this takes, I'm constantly adding more and more data in the GB to TB range while this is processing.. is that data written as double parity as I'm adding these new files? or is it caching somewhere to be processed once shr2 migration is completed?

     

    This article by the author of md might help answer your questions. 

    http://neil.brown.name/blog/20090817000931

     

    This excerpt should help explain why your conversion speed is so slow:

    Quote

    For a reshape that does not change the number of devices, such as changing chunksize or layout, every write will be over-writing the old layout of that same data so after a crash there will definitely be a range of blocks that we cannot know whether they are in the old layout or the new layout or a bit of both. So we need to always have a backup of the range of blocks that are currently being reshaped.

     

    This is the most complex part of the new functionality in mdadm 3.1 [snip]. mdadm monitors the reshape, setting an upper bound of how far it can progress at any time and making sure the area that it is allowed to rearrange has writes disabled and has been backed-up.

     

    This means that all the data is copied twice, once to the backup and once to the new layout on the array. This clearly means that such a reshape will go very slowly. But that is the price we have to pay for safety. It is like insurance. You might hate having to pay it, but you would hate it much more if you didn't and found that you needed it.

     

    The TL;DR version: if you convert from RAID5 to RAID6 while adding a disk, the conversion takes one pass.  Converting an existing array set in place takes two passes.
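
    For illustration, the two cases look something like this at the mdadm level (a sketch only; DSM normally drives this itself, and the array/device names and backup path here are placeholders):

    # Converting while adding a disk: only a small critical section needs a backup,
    # after which the restripe proceeds in a single pass onto the new layout
    mdadm --manage /dev/md2 --add /dev/sdg5
    mdadm --grow /dev/md2 --level=6 --raid-devices=5 --backup-file=/root/md2-grow.bak

    # Reshape with the same number of devices (e.g. a layout or chunk change): every region
    # is backed up before it is rewritten, so all data is copied twice and progress is slow
    mdadm --grow /dev/md2 --chunk=64 --backup-file=/root/md2-reshape.bak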

  12. It's a lot of I/O and multiple stages, as I recall.  You can speed it up with mdadm tuning, or in DSM 6.2.x go to Storage Manager | Storage Pool | Configurations | Custom and change the max and min to something silly like 2500MB (25000000).  Put it back to Lower Impact when done.
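
    The Storage Manager setting appears to map to the standard md speed-limit tunables, so the same thing can be done from the shell (values are in KB/s):

    # Check the current limits, raise them for the rebuild/reshape
    cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
    echo 25000000 > /proc/sys/dev/raid/speed_limit_max
    echo 25000000 > /proc/sys/dev/raid/speed_limit_min

    # When done, restore the defaults (typically 200000 / 1000)
    echo 200000 > /proc/sys/dev/raid/speed_limit_max
    echo 1000   > /proc/sys/dev/raid/speed_limit_min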

  13. Please, only try this on a non-production system you don't mind being irrevocably corrupted.  The likelihood of the loader failing is extremely high.

    And given that the DS918+ code base uses Haswell+ instructions, I would not expect your Westmere CPUs to work even if the loader does.

     

    Lastly, even on a compatible Synology box, it is very likely that a full reinstall will be required in order to replace the preview version with the eventual release of DSM 7.0.

  14. I think this is your server here: https://h20195.www2.hpe.com/v2/GetDocument.aspx?docname=c04111408

    The onboard Ethernet card is e1000e based and should work with DSM/XPEnology.  You will need to use the DS3615xs or DS3617xs image and Jun loader 1.03b because of your older processor.

     

    Most people who need SATA ports beyond those on the motherboard are using Marvell-based or LSI Logic PCIe cards.  You'll need to determine which one works best for your present and future plans.  There is a lot of information on this forum: start by looking for folks with system configurations similar to yours, and validate against the compatibility threads.  Some places to look:

     

    https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-and-621-on-ds3615/

    https://xpenology.com/forum/forum/90-user-reported-compatibility-thread-drivers-requests/

    https://xpenology.com/forum/forum/78-dsm-updates-reporting/

     
