XPEnology Community

flyride

Moderator
  • Posts

    2,438
  • Joined

  • Last visited

  • Days Won

    127

Community Answers

  1. flyride's post in Ironwolf question was marked as the answer   
    RedPill will spoof SMART if necessary. For sure with virtual disks, and possibly with HBA drives.  You can check the kernel log or comparatively analyze the SMART output to be certain.
     
    To make things more complicated, Synology added Ironwolf support for some models, and then removed it from all models from 2022 on (i.e. DS3622xs+) as they are trying to steer customers into their branded drives.
    https://kb.synology.com/en-vn/DSM/tutorial/Which_Synology_DiskStation_RackStation_supports_Seagate_IronWolf_Health_Management
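If you want to check for yourself, a quick comparison is to pull SMART data straight from the drive with smartctl (shipped with DSM) and compare it against what Storage Manager shows; /dev/sda below is just an example device, and whether anything useful appears in the kernel log depends on your loader build:
# smartctl -a /dev/sda        (dump all SMART data for the first disk)
# dmesg | grep -i smart       (scan the kernel log for SMART-related messages)
If the values are implausibly static or identical across drives, you are probably looking at spoofed data.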
  2. flyride's post in M.2 to SATA card, it will work? was marked as the answer   
M.2 is a physical port that can expose a SATA controller interface, a miniaturized PCIe slot, or both (configurable to one or the other).
If it is NVMe-compatible or M-key, it's a PCIe slot.  It should be PCIe 1x, 2x or 4x, depending upon the motherboard implementation.
     
The card itself can be PCIe 1x, 2x or 4x as well.  The actual number of lanes in use is defined by whichever of the slot or card has fewer lanes.
     
Maximum bandwidth of a SATA-3 port is 600 MBps, so the maximum bandwidth of a 5-port card would be 3 GBps.  Saturating that requires SSDs, as real-world performance of spinning disks maxes out around 200 MBps per channel.
     
    PCIe 3.0 performance per lane is roughly 1 GBps.
Thus, a 1x connection will definitely be maxed out by 5 channels of SSD.
A 2x connection would potentially be maxed out with SSD (but is probably OK with spinning disk).
A 4x connection will always have headroom regardless of the SATA device types.
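If you want to verify what actually negotiated, lspci can show the link width the card is running at. The bus address 01:00.0 below is hypothetical; find yours in the listing first:
# lspci | grep -i sata                     (find the card's bus address)
# lspci -s 01:00.0 -vv | grep -i lnksta    (LnkSta shows negotiated speed and width, e.g. x1/x2/x4)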
  3. flyride's post in M.2 NVMe speed only 400 MBps was marked as the answer   
It's not a typical configuration to have NVMe as a regular DSM volume.  How did you set it up?  The method might have an impact.
     
You did not say what model your SSD is. Generally, write performance on low-capacity Samsung models is not great.  The PM961 is a typical 128GB unit and its spec is "up to" 600 MBps sequential write, which is a best-case raw value; the score you are citing is full throughput including filesystem overhead, etc.
     
Also, all consumer Samsung drives use TLC flash with an SLC/MLC cache, so once that internal cache fills up, write speed drops a lot.
     
I suspect Windows is lying to you with 1 GBps write.  But post more info about what you have if you want.
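For a sanity check on the DSM side, a simple dd run with direct I/O takes the page cache out of the equation. The path below assumes your NVMe volume is mounted at /volume1 (adjust for your setup), and 4GB is enough to blow through most SLC caches on a 128GB drive:
# dd if=/dev/zero of=/volume1/ddtest.bin bs=1M count=4096 oflag=direct    (sequential write, no cache)
# rm /volume1/ddtest.bin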
  4. flyride's post in Issue Installing it on Z290 MSI OR Z390 Asus was marked as the answer   
    The embedded Intel LAN driver does not support newer revisions of silicon.  You can look up which are supported here and match it to the device on your motherboard:
    https://xpenology.com/forum/topic/13922-guide-to-native-drivers-dsm-617-and-621-on-ds3615/
    https://xpenology.com/forum/topic/14127-guide-to-native-drivers-dsm-621-on-ds918/
     
    Adding extra.lzma with newer driver compiles may allow your device to work:
    https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/
     
    Alternatively purchase an Intel CT PCIe card ($20) which is supported, or use a hypervisor and install XPe as a VM.
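To find the exact silicon to match against those driver tables, check the PCI vendor:device ID (8086:xxxx for Intel). This works from any Linux live USB if you can't boot DSM:
# lspci -nn | grep -i ethernet     (the [8086:....] ID identifies the exact NIC silicon)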
  5. flyride's post in Am I Restricted In How Many NICs I Can Add To A System? was marked as the answer   
DS918+ has an internal synoinfo.cfg parameter, maxlanport, which defaults to 2 ports.  You can change it, but it will get overwritten on a DSM upgrade, so be careful it doesn't disable your LAN because you didn't think about it.
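A minimal sketch of the change, assuming the usual file location (/etc.defaults/synoinfo.conf, so it survives reboots; as noted, an upgrade will still rewrite it):
# grep maxlanport /etc.defaults/synoinfo.conf     (check the current value)
# vi /etc.defaults/synoinfo.conf                  (change maxlanport="2" to, e.g., maxlanport="4")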
  6. flyride's post in Error When connecting 2 Drive to SATA PCie Controller was marked as the answer   
    Port multiplier-based controllers are not supported.  You'll have to find an appropriate controller.
  7. flyride's post in Crashed volume - SHR was marked as the answer   
This thread (starting with the linked post) details a specific recovery using lvm's backup.  However, your underlying problem (which is not really known) is different from that of the thread's original poster. https://xpenology.com/forum/topic/41307-storage-pool-crashed/?do=findComment&comment=195342
     
    The folder with the backup data is /etc/lvm/backup
    The restore command involved is vgcfgrestore (or in your case lvm vgcfgrestore)
     
You are running an older lvm than this example (because of DSM 5.2), so there might be other differences.
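For reference, a restore from the lvm backup would look roughly like this. The volume group name vg1000 is the typical DSM default but is an assumption here, so confirm yours first:
# lvm vgcfgrestore --list vg1000                       (list the available metadata backups/archives)
# lvm vgcfgrestore -f /etc/lvm/backup/vg1000 vg1000    (restore VG metadata from the backup file)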
  8. flyride's post in S.M.A.R.T end of lifetime (SSDs) was marked as the answer   
Raw value does not correspond to anything standardized; it's unique to the manufacturer.
     
The value you are concerned with is the "Worst" column.  Your disk 5 is at 71% life (29% used).  Your disk 6 is at 4% life (96% used).  The disks have not failed SMART, so they are healthy, but DSM is rightly warning you that one of your SSDs has the potential of expiring very soon.
     
    SSD cache is very hard on consumer drives that have very low TBW ratings.
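You can watch this from the shell as well. Attribute names vary by manufacturer (Wear_Leveling_Count on Samsung, Percent_Lifetime_Remain on some others), so this grep is just a starting point:
# smartctl -A /dev/sdf | grep -iE "wear|percent|used"    (the VALUE/WORST columns are the normalized life figures)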
  9. flyride's post in Change to RAID 0 was marked as the answer   
mdadm should allow you to reshape without unmounting, and it is easier if you don't have to.  I would stop Docker from Package Manager before trying a mounted reshape, however.

The only command I believe you should have to run is the reshape (decreasing the array size in the article would not apply to you). I expect that DSM will automatically increase your Storage Pool and volumes when more space is available, but if it does not, it's easy to do.
     
i.e.:
    # mdadm --grow /dev/md2 --level=0
     
    Obviously, have a backup before doing anything like this.  The reshape is dependent upon kernel support, and Synology modifies mdadm for their own purposes, so no guarantee it will work.  If it does work, you save time.  If it doesn't, you are back to the manual process.
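If you do try it, you can watch the reshape and then confirm the expansion. The resize2fs line assumes an ext4 volume living directly on /dev/md2 and is only needed if DSM doesn't grow things itself:
# cat /proc/mdstat              (shows reshape progress as a percentage)
# mdadm --detail /dev/md2       (confirm the new level and size afterward)
# resize2fs /dev/md2            (grow the filesystem if DSM doesn't do it automatically; ext4 only)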
  10. flyride's post in 6.2.1-23824 Update 6 -> 6.2.3-25426 was marked as the answer   
    Most J3xxx/4xxx/5xxx need a modification to the loader in order to work with 6.2.1 and 6.2.2.
     
    If you modified the loader with extra.lzma or "real3x" mod you will need to revert it by restoring the original configuration to the loader from running DSM. Then apply the 6.2.3 update immediately, before rebooting.  See details here: https://xpenology.com/forum/topic/28131-dsm-623-25423-recalled-on-may-13/?do=findComment&comment=141604
     
    If you are running now without a loader modification, you should be able to upgrade with no other actions.
     
    If for some reason you lock yourself out trying to upgrade, just do a migration install with a clean 1.04b loader and install 6.2.3 directly, should work fine.
  11. flyride's post in Xeon CPU Showing Diffrent CPU was marked as the answer   
It's cosmetic and does not affect performance.  The DSM UI displays the CPU that originally came with the host system (you are using a DS3615xs image).
     
Someone built a patch to change the text string, if it really bothers you.
  12. flyride's post in Starting from scratch was marked as the answer   
So mapping your drives out by controller and array, it looks like this:
[drive-to-controller/array mapping table from the original post]
    md2 is the array that supports the small drives (sda/b/c/d/j) and md3 is the array that incorporates the remainder of the storage on the large drives.  The two arrays are joined and represented as a single Storage Pool. The larger the size difference between the small and large drives, the more writes are going to the large array so the small drives will be less busy (or even idle sometimes).  This is a byproduct of how SHR/SHR2 is designed, and not anything wrong.
     
Because of this, you haven't seen the actual write throughput on disks 1-4.  So if they were underperforming for some reason, the impact on performance would be random and sporadic, because it would show up only when writes were going to the small array.  What we are trying to figure out is whether one component or array (or even controller) is performing abnormally slowly.  You might want to repeat the evaluation of the drives using the dd technique from earlier instead of copying files from the Mac, and maybe run for an extended time until disks 1-4 get used, as it will stress the disk system much more.
     
    Just a random thought, is write cache enabled for the individual drives?
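You can check that from the shell, assuming hdparm is present on your build; it reports and sets the drive's volatile write cache (sda is an example, repeat per drive):
# hdparm -W /dev/sda        (shows whether write-caching is on)
# hdparm -W1 /dev/sda       (enables it, if it was off)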
     
  13. flyride's post in Critical system errors occurred. Please contact us immediately for technical support was marked as the answer   
    Well, you can't repair a system partition when you only have one drive.  There is no disk redundancy in your system.  If there were, DSM would allow you to do the repair from the Control Panel.
     
    For reference, /dev/md0 is DSM/Linux.  /dev/md1 is Linux swap and /dev/md2 is your volume. "E" is a custom mdadm disk status flag that is part of Synology's data integrity enhancements to Linux.
     
While it appears that the filesystem on /dev/md0 is operational (I assume you can still boot DSM), the one array member is flagged as bad. I agree the cause may have been the power outage and uncommitted write state, so DSM flagged it as errored. There are two ways to fix this problem:

1. Recreate the array with mdadm.  Here's a reference:
https://www.dsebastien.net/2015/05/19/recovering-a-raid-array-in-e-state-on-a-synology-nas/
However, this requires that you are able to stop the array to recreate it, and that is your booted OS.  So you will need to take the drive and install it on another Linux system to do it.  The array you need to rebuild is /dev/md0 and you will have to figure out the disk array member (it will probably be /dev/sdb1 if you install the DSM disk to a single-disk Linux system).  As long as you don't make a mistake, this has no impact on the data inside the array.  (A rough sketch follows at the end of this answer.)

2. Reinstall DSM.  You should be able to do this from Synology Assistant without building up a new loader USB.  Just download and install the same PAT file you are running now.  This will reset any OS customizations (save off your configuration in Control Panel first, then restore afterward), but your user data should be unaffected.

In the future, you should consider adding another drive for redundancy so that you don't encounter this again. It really should be a non-issue.
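For the mdadm route (option 1), the general shape of the procedure on the rescue system would be something like the following. Every parameter must be copied from what --examine reports, so treat the values here (metadata version, device count N) as placeholders, not a recipe:
# mdadm --examine /dev/sdb1       (note metadata version, raid level, and device count)
# mdadm --stop /dev/md0           (the array must be stopped before recreating)
# mdadm --create /dev/md0 --metadata=0.9 --level=1 --raid-devices=N --force /dev/sdb1 missing
(N and the number of trailing "missing" entries must match the original geometry exactly)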
  14. flyride's post in DSM 6.2.3 -- does not see more than one HDD was marked as the answer   
Be sure you are using an isolated virtual SATA controller for your loader, and another controller for all the data drives.
  15. flyride's post in About a number of logs... was marked as the answer   
I believe you do not need to be concerned. Linux in general is designed to manage log files in a way that does not exhaust the available storage.  Spurious data in logs does use up space, but I did not suggest there was a crash problem solely due to log utilization. Logging is a good thing; zeroing log files is an unnecessary and forensically destructive practice.
     
    In DSM, syslog events are logged by default to /var/log/messages.  Each Linux installation has syslog rules that split off certain logs to other files.  There are also multiple ingress points.  Kernel events, for instance, are independently logged to the kernel log and the syslog default.
     
    There are a number of unimpactful, essentially useless, and unmanageable error logs due to unsupported system events in XPEnology. My intention was solely to improve the signal to noise ratio in the log files (make them more useful) by suppressing those types of logs.
  16. flyride's post in How to suppress the log? was marked as the answer   
    Actually now that I'm looking at it, you are trying to filter non-disk messages.  All the other filters presume and require that there is a /dev/sdx reference in the log entry.
     
    You'll need to modify the file in this way:
filter fs_cachemonitor { match("cache_monitor\.c:.*Can't support DS with cpu number" value("MESSAGE")); };
filter fs_allmsgs { filter(fs_badsec) or filter(fs_errcnt) or filter(fs_tmpget) or filter(fs_health) or filter(fs_sdread) or filter(fs_stests) or filter(fs_tstget); };
filter fs_smart { filter(fs_disks) and filter(fs_allmsgs); };
filter f_smart { filter(fs_smart) or filter(fs_cachemonitor); };
log { source(src); filter(f_smart); };
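After saving, make syslog-ng reload its config. A HUP signal is the generic way (DSM 6.x also has a synoservicectl wrapper, if your build includes it):
# killall -HUP syslog-ng          (reload configuration without stopping the service)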
  17. flyride's post in RAID1->RAID5, disk failed during migration. was marked as the answer   
    Is data still accessible now?  Let the RAID transformation finish, then replace the crashed drive.  RAID rebuild is a lot faster than conversion.
     
What is the SMART status of the crashed drive?  Are the sectors pending or reallocated?  ESXi does not show you all SMART data. Sometimes if a drive runs outside its temperature spec it will mark sectors for replacement, but if they are later overwritten they will be recovered.  If the drive actually has permanent bad sectors, replace it.  But if not, once the RAID transformation completes, just deallocate and reallocate it and see if it recovers.
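Since ESXi hides some of this, check from a live environment or a system with direct disk access; these two attribute names are standard across most drives (sdX stands in for the crashed drive):
# smartctl -A /dev/sdX | grep -iE "Current_Pending_Sector|Reallocated_Sector"
Pending sectors can clear on overwrite; a climbing reallocated count means the drive is consuming spares and should be replaced.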
  18. flyride's post in SSD Sata in cache ? was marked as the answer   
    Yes, SATA works fine for cache.  Be advised more RAM is probably better than SSD cache for most workloads.
     
    R/W cache exposes your array to crash-induced data loss.  R/O cache is okay.
  19. flyride's post in Create volume at drive 1? was marked as the answer   
    DSM is installed on all the data drives in the system.  So by definition you create volumes on the same drives as the DSM OS.  Any disk that shows "Initialized" or "Normal" has the DSM OS installed to it.
     
You might be thinking of the bootloader device - that is not the DSM OS.  If it's visible, it's because you haven't configured VID/PID and/or SATA mapping to hide it properly.  Regardless, do not attempt to create a volume on it.
  20. flyride's post in Docker -Bridge vs Docker Host was marked as the answer   
    Running docker in host or bridge mode won't have any impact on your stated concern.
     
    Your firewall, inherent Docker (in)security, the apps you are using and their configuration will determine that.
     
  21. flyride's post in Using SHR was marked as the answer   
You should be able to do a migration install using SHR on DS3615/17xs even though SHR management is not enabled in the UI.
     
    If you want to be able to make new SHR volumes, just enable the UI functionality per the FAQ:
    https://xpenology.com/forum/topic/9394-installation-faq/?tab=comments#comment-81094
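The FAQ edit boils down to two lines in /etc.defaults/synoinfo.conf (shown here from memory, so verify against the FAQ before applying):
# vi /etc.defaults/synoinfo.conf
(comment out the line:  supportraidgroup="yes")
(add the line:          support_syno_hybrid_raid="yes")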
  22. flyride's post in Controlling fan speed was marked as the answer   
That is wholly dependent upon your hardware.  The Synology fan control software interfaces via serial hardware that is not present on a regular motherboard.
     
Your motherboard may integrally support a fan profile which responds to internal temp sensors and/or system utilization to control fans.  It also may have a fixed "high/medium/low" setting (which would solve your noise problem).  This is probably the simplest solution, and in that case, XPEnology has nothing to do with it.
     
    There are a number of online examples of more sophisticated, NAS-specific hacks to directly control fan interfaces such as on Supermicro motherboards.  This approach would be dependent upon the fan hardware interface in your specific motherboard.  At one time, I wrote up something that ran as a Linux daemon so that it could monitor drive temperatures and respond to that more aggressively than the hardware profile.  But in the end, there wasn't enough of a measured difference to matter, at least in my climate.
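As an example of the Supermicro-style approach, the fans can be driven over IPMI with raw commands. The opcodes below are the ones commonly reported for Supermicro X9/X10/X11 boards and are an assumption for any other hardware:
# ipmitool raw 0x30 0x45 0x01 0x01              (set fan mode to "Full" on many Supermicro boards)
# ipmitool raw 0x30 0x70 0x66 0x01 0x00 0x32    (set zone 0 duty cycle to 50% on X10/X11)
A daemon like the one I described just polls drive temperatures (e.g., via smartctl) and issues the duty-cycle command accordingly.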
  23. flyride's post in Can one use Synology Control panel DSM upgrade? was marked as the answer   
You were so close; that knowledge arrives in the 16th minute of looking.
     
    https://xpenology.com/forum/topic/9392-general-faq/?tab=comments#comment-95507
  24. flyride's post in Asrock J4105 - What is the current working DSM? was marked as the answer   
Look up the real3x mod to disable the faulty i915 video driver; the combination you are proposing works fine.
  25. flyride's post in Stupid question: Going from 6.1 loader to 6.2 was marked as the answer   
Hmm, calling people out as arrogant isn't the way to curry favor when asking for help.  Anyone you could possibly reach with that comment has spent countless hours trying to help many, many people such as yourself. Based on your questions, both you and OP present with little or no evidence of effort to understand DSM and how the XPEnology loader works.  And only slightly below the veneer of that request is a challenge to be assured you won't lose data, which you cannot possibly get from someone on an online forum.  It's your responsibility, nobody else's, to make sure your data is safe.
     
Let's spell it out: All the loader does is let us boot DSM on non-Synology hardware. Nothing more, nothing less.  Any other behavior is attributable to DSM, Synology's operating system.  Yes, it's based on Linux, but that's not a limiting factor.  Many XPEnology users never launch the shell, nor do they need to. If you want to be successful running DSM on XPEnology, it will be in your interest to know something about DSM.  There are many, many places to learn about how to do things with DSM, not the least of which are Synology's forums.
     
    So here are a couple of key points that ARE, literally, embedded in the tutorials.  Hopefully they will help steer you in the right direction.
• Upgrading DSM from 6.1 to 6.2 is a function of DSM, not the loader.
• If you want to upgrade from 6.1 to 6.2, you'll need to install a 6.2-compatible version of the loader (either 1.03b or 1.04b), otherwise DSM will crash once upgraded.
• The 6.2-compatible loader must also work with your hardware, which isn't a guarantee even if you were successfully running DSM 6.1.
• Installing a new loader is analogous to moving your disks to a new Synology DiskStation: DSM will prompt for migration or upgrade.
• Migrating and/or upgrading DSM isn't inherently a data-destroying process, if done properly.  Again, this is DSM behavior.
• Any upgrade or migration operation can fail for many reasons, including loader incompatibility (ref hardware issues above) or user mistake. Those who attempt an upgrade or migration operation without a data backup plan are, bluntly, foolish.
• To you, OP and anyone else who wants to upgrade: it's very much in your interest to build up a test environment and validate your upgrade plan each and every time before subjecting your production system and data to risk.  This is repeated again and again in the tutorials.  It is one of the benefits of a virtualized (i.e. ESXi) environment: it makes it very easy to test without extra hardware.
     
Good luck to you and OP.  Your arrogant friends online will be waiting to help if you run into trouble.
     