flyride

Members
  • Content Count: 1,934
  • Joined
  • Last visited
  • Days Won: 98

Posts posted by flyride

  1. nVidia drivers are not included on any currently supported platform.

     

     However, I believe there has been some progress in compiling nVidia support and adding it to DS918+.  You'll have to search the forums for that.  Once it is working (the nVidia device appears as /dev/dri), it should be trivial to set up a Docker mining app.
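
     As a rough illustration, and only once the driver is actually working, a minimal sketch of the Docker side, assuming the device really does show up as /dev/dri as described above (the image name is just a placeholder):

       # confirm the kernel driver is exposing the GPU device nodes
       ls -l /dev/dri

       # run a container with the device passed through
       # ("some/miner-image" is a placeholder; substitute the mining image you actually use)
       docker run -d --name miner --device /dev/dri:/dev/dri some/miner-image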

  2. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/

     

     Based on this, you would need to install DS3615xs or DS3617xs with loader 1.03b if you want the latest functioning version, 6.2.3.  Note that you must download exactly the 6.2.3-25426 build, as the newer 6.2.4 and 7.0.1 builds do not work with the released loader.

     

    It's not difficult to install if you follow the FAQ and tutorials closely.  But it can be a bit finicky as it is, at its core, a hack of a purpose-built software/hardware system.

  3. 14 hours ago, tdse13 said:

     I do not use a basic volume but RAID1. I formatted one of the 2 disks and started again. Unfortunately, I cannot move shares from the read-only remaining disk of the degraded RAID1 volume1 to the new basic volume3 since it is read-only. Any idea how to fix it? Thank you.

     

    Moving shared folders from the control panel is a write operation.  So that cannot be done on a read-only filesystem.

     

     btrfs attempts to fix itself in real time.  If it cannot do so due to corruption, it will mount read-only, or fail to mount at all without external input.  There is a filesystem check tool, but it really isn't intended to return a filesystem to service once it has suffered unfixable corruption.  The correct course of action is to COPY the files off the read-only filesystem, which it sounds like you are doing.

     

    17 hours ago, tdse13 said:

     When checking dmesg I find btrfs errors related to docker: BTRFS error (device dm-1): cannot find qgroup item, qgroupid=2419 ! I don't want to get a second read-only system

     

    If Synology's implementation of docker is installed to a btrfs filesystem, it will use btrfs snapshots in support of docker functionality.  Those snapshots can get corrupted on their own.  My experience is that again, the corruption cannot be fixed, but simply deleting and recreating the affected container will resolve the problem.
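
     For the copy-off step itself, a minimal sketch using rsync over SSH (the share name and volume numbers are examples; adjust to your layout):

       # create the matching shared folder on the new volume first (Control Panel > Shared Folder),
       # then copy the data across, preserving permissions and timestamps
       sudo rsync -avh /volume1/myshare/ /volume3/myshare/

       # add -n for a dry run first if you want to see what would be copied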

  4. You can convert a Basic Storage Pool to a RAID1 at any time by adding an appropriate disk.

     

     Understand the difference between the Storage Pool (where the array redundancy is) and the Volume (the filesystem, which has some corruption that is resulting in the read-only state).

     

     You are proposing to sacrifice your healthy array redundancy and create another Storage Pool and Volume.  This will work logically, but it may be an inappropriate risk depending upon the value of the data and whether you have another copy somewhere else.  As always, redundancy in the RAID array is not a BACKUP.  It's a method of ensuring that your data remains immediately and functionally available in the event of a disk media failure of some sort.

     

    If it were my data, I would insist on a copy of the data somewhere else before I deliberately sacrificed array redundancy.
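
     If it helps to see the two layers concretely, on a typical DSM box (device and volume names below are the usual defaults; yours may differ):

       # the Storage Pool is an md array; this shows its members and health
       cat /proc/mdstat

       # the Volume is a filesystem on top of it; this shows its type and mount state
       df -Th /volume1
       mount | grep volume1    # "ro" in the mount options confirms the read-only state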

  5. You're running low on options... please take a look at this post. If none of the information works for you, that is probably the extent of what I know to help you.

    https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/?do=findComment&comment=108021

     

    There might be some btrfs experts out there in the ether but I don't think you'll get any other options here.

     

    Good luck and if something works, please report back!

  6. Well, you can do what I suggested, which is to copy off your files, then delete and re-create your Storage Pool/Volume, and it will use the full disk size.

     

    Synology will automatically expand multi-drive arrays that are grown, but it has no provision for expanding a single drive.

     

    It can be done manually and is a basic procedure common to all Linux systems. You can follow this thread to get it done:

    https://xpenology.com/forum/topic/14091-esxi-unable-to-expand-syno-volume-even-after-disk-space-increased/

     

     You have an ext4 filesystem, you are not using LVM, and on a one-disk system DSM appears to be using the physical partition directly instead of an array, so choose your command arguments accordingly.  It also goes without saying that you run the risk of volume corruption if a mistake is made, so you should have all your files backed up somewhere else before trying this.
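
     In rough outline, under the assumptions above (ext4, no LVM, partition-backed volume), the general approach is something like this; the linked thread has the authoritative steps, and the device/partition names here are examples, so verify yours with fdisk or lsblk before touching anything:

       # identify the data partition backing the volume (commonly the 3rd partition on the disk)
       sudo fdisk -l /dev/sda

       # grow the partition into the newly available space
       sudo parted /dev/sda resizepart 3 100%

       # then grow the ext4 filesystem to fill the enlarged partition
       sudo resize2fs /dev/sda3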

     

  7. 7 hours ago, kaku said:

     I have copied out my data to another HDD!!!!! I made another storage pool and volume and copied the data off to it (3TB). I think the data is good. How can I check which files were corrupted or not fully restored?

     

    Did you have btrfs checksum "on" for the affected volume?  If so, btrfs would have told you if there was data corruption via pop-up in the UI.  Normally it would also fix it, but you have no redundancy.  Without the checksum, there is no way to determine corruption.  If files were missing for some reason, that is also not really detectable.  A good reason to have a real backup somewhere...
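
     For the "fully restored" part of the question, one practical check is to compare the old copy against the new one directly; and if checksums were on, reading a damaged file back from the old volume will surface it in dmesg.  Paths below are examples:

       # checksum-based dry run: anything listed is missing or different on the destination
       sudo rsync -rcnv /volume1/share/ /volume3/share/

       # a corrupted file read from the old btrfs volume logs a checksum error
       dmesg | grep -i csum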

     


     

    7 hours ago, kaku said:

     Now that the data is back, should I try repairing the original Volume?

     

     I say this often: btrfs will fix itself if it can.  If it cannot do that, there is underlying corruption that MAY be addressable with the filesystem repair tools, but they probably won't fully fix the problem.  Linux culture holds that ext4/fsck can fix ANYTHING and that you can always have confidence in the filesystem after a repair completes, but that just isn't true with btrfs.

     

     I strongly recommend you delete the volume and rebuild it from scratch.  Since you also have a Storage Pool problem, there is no reason not to delete that as well, replace any drives that are actually faulty, and re-create a clean Storage Pool too.  Then copy your files back in from your backup.

     

    Glad this worked out, probably as well as it could have for you given the difficult intermediate steps.

  8. Hello,

     

    Some of what you are reporting doesn't make sense.  Maybe you have tried some recovery activity from the thread?  The thread you cited is a pretty catastrophic failure recovery with multiple bad drives, and the procedures in it circumvent the protections inherent to the md system.

     

     Also, this post is in the Q&A section and you really need much more than a simple, single-post response.

     

    Please repost in a support forum like The Noob Lounge or Post-Installation Questions as this will take many post iterations.

     

     And use the first few posts in this example thread to help with the data gathering that will be needed to determine whether your data still exists in a recoverable form:

    https://xpenology.com/forum/topic/41307-storage-pool-crashed/
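
     For reference, the kind of information that gets asked for early on is along these lines (run from an SSH session; md2 and sda3 are the typical names for the first data array and its member partition, so adjust to what you actually have):

       # overall md array status for all Storage Pools
       cat /proc/mdstat

       # partition layout of every disk
       sudo fdisk -l

       # detail and per-member views of the data array
       sudo mdadm --detail /dev/md2
       sudo mdadm --examine /dev/sda3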

     

     

  9. Jun/default drivers for DS918+ support the I219-V on PCI IDs 1570 and 15B8 only (e1000e driver).  Yours is 15FA (clearly newer).

    https://xpenology.com/forum/topic/14127-guide-to-native-drivers-dsm-621-on-ds918/

     

     The DS918+ extra.lzma/extra2.lzma 0.13.3 driver compilation contains the following PCI IDs (the newest is 15E3, still older than yours):

    Spoiler

    alias=pci:v00008086d00000D4Dsv*sd*bc*sc*i*
    alias=pci:v00008086d00000D4Csv*sd*bc*sc*i*
    alias=pci:v00008086d00000D4Fsv*sd*bc*sc*i*
    alias=pci:v00008086d00000D4Esv*sd*bc*sc*i*
    alias=pci:v00008086d000015E2sv*sd*bc*sc*i*
    alias=pci:v00008086d000015E1sv*sd*bc*sc*i*
    alias=pci:v00008086d000015E0sv*sd*bc*sc*i*
    alias=pci:v00008086d000015DFsv*sd*bc*sc*i*
    alias=pci:v00008086d000015BCsv*sd*bc*sc*i*
    alias=pci:v00008086d000015BBsv*sd*bc*sc*i*
    alias=pci:v00008086d000015BEsv*sd*bc*sc*i*
    alias=pci:v00008086d000015BDsv*sd*bc*sc*i*
    alias=pci:v00008086d00000D55sv*sd*bc*sc*i*
    alias=pci:v00008086d00000D53sv*sd*bc*sc*i*
    alias=pci:v00008086d000015D6sv*sd*bc*sc*i*
    alias=pci:v00008086d000015E3sv*sd*bc*sc*i*
    alias=pci:v00008086d000015D8sv*sd*bc*sc*i*
    alias=pci:v00008086d000015D7sv*sd*bc*sc*i*
    alias=pci:v00008086d000015B9sv*sd*bc*sc*i*
    alias=pci:v00008086d000015B8sv*sd*bc*sc*i*
    alias=pci:v00008086d000015B7sv*sd*bc*sc*i*
    alias=pci:v00008086d00001570sv*sd*bc*sc*i*
    alias=pci:v00008086d0000156Fsv*sd*bc*sc*i*
    alias=pci:v00008086d000015A3sv*sd*bc*sc*i*
    alias=pci:v00008086d000015A2sv*sd*bc*sc*i*
    alias=pci:v00008086d000015A1sv*sd*bc*sc*i*
    alias=pci:v00008086d000015A0sv*sd*bc*sc*i*
    alias=pci:v00008086d00001559sv*sd*bc*sc*i*
    alias=pci:v00008086d0000155Asv*sd*bc*sc*i*
    alias=pci:v00008086d0000153Bsv*sd*bc*sc*i*
    alias=pci:v00008086d0000153Asv*sd*bc*sc*i*
    alias=pci:v00008086d00001503sv*sd*bc*sc*i*
    alias=pci:v00008086d00001502sv*sd*bc*sc*i*
    alias=pci:v00008086d000010F0sv*sd*bc*sc*i*
    alias=pci:v00008086d000010EFsv*sd*bc*sc*i*
    alias=pci:v00008086d000010EBsv*sd*bc*sc*i*
    alias=pci:v00008086d000010EAsv*sd*bc*sc*i*
    alias=pci:v00008086d00001525sv*sd*bc*sc*i*
    alias=pci:v00008086d000010DFsv*sd*bc*sc*i*
    alias=pci:v00008086d000010DEsv*sd*bc*sc*i*
    alias=pci:v00008086d000010CEsv*sd*bc*sc*i*
    alias=pci:v00008086d000010CDsv*sd*bc*sc*i*
    alias=pci:v00008086d000010CCsv*sd*bc*sc*i*
    alias=pci:v00008086d000010CBsv*sd*bc*sc*i*
    alias=pci:v00008086d000010F5sv*sd*bc*sc*i*
    alias=pci:v00008086d000010BFsv*sd*bc*sc*i*
    alias=pci:v00008086d000010E5sv*sd*bc*sc*i*
    alias=pci:v00008086d0000294Csv*sd*bc*sc*i*
    alias=pci:v00008086d000010BDsv*sd*bc*sc*i*
    alias=pci:v00008086d000010C3sv*sd*bc*sc*i*
    alias=pci:v00008086d000010C2sv*sd*bc*sc*i*
    alias=pci:v00008086d000010C0sv*sd*bc*sc*i*
    alias=pci:v00008086d00001501sv*sd*bc*sc*i*
    alias=pci:v00008086d00001049sv*sd*bc*sc*i*
    alias=pci:v00008086d0000104Dsv*sd*bc*sc*i*
    alias=pci:v00008086d0000104Bsv*sd*bc*sc*i*
    alias=pci:v00008086d0000104Asv*sd*bc*sc*i*
    alias=pci:v00008086d000010C4sv*sd*bc*sc*i*
    alias=pci:v00008086d000010C5sv*sd*bc*sc*i*
    alias=pci:v00008086d0000104Csv*sd*bc*sc*i*
    alias=pci:v00008086d000010BBsv*sd*bc*sc*i*
    alias=pci:v00008086d00001098sv*sd*bc*sc*i*
    alias=pci:v00008086d000010BAsv*sd*bc*sc*i*
    alias=pci:v00008086d00001096sv*sd*bc*sc*i*
    alias=pci:v00008086d0000150Csv*sd*bc*sc*i*
    alias=pci:v00008086d000010F6sv*sd*bc*sc*i*
    alias=pci:v00008086d000010D3sv*sd*bc*sc*i*
    alias=pci:v00008086d0000109Asv*sd*bc*sc*i*
    alias=pci:v00008086d0000108Csv*sd*bc*sc*i*
    alias=pci:v00008086d0000108Bsv*sd*bc*sc*i*
    alias=pci:v00008086d0000107Fsv*sd*bc*sc*i*
    alias=pci:v00008086d0000107Esv*sd*bc*sc*i*
    alias=pci:v00008086d0000107Dsv*sd*bc*sc*i*
    alias=pci:v00008086d000010B9sv*sd*bc*sc*i*
    alias=pci:v00008086d000010D5sv*sd*bc*sc*i*
    alias=pci:v00008086d000010DAsv*sd*bc*sc*i*
    alias=pci:v00008086d000010D9sv*sd*bc*sc*i*
    alias=pci:v00008086d00001060sv*sd*bc*sc*i*
    alias=pci:v00008086d000010A5sv*sd*bc*sc*i*
    alias=pci:v00008086d000010BCsv*sd*bc*sc*i*
    alias=pci:v00008086d000010A4sv*sd*bc*sc*i*
    alias=pci:v00008086d0000105Fsv*sd*bc*sc*i*
    alias=pci:v00008086d0000105Esv*sd*bc*sc*i*

    https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/

     

     So unless someone has a newer compile of e1000e, you are probably SOL...
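
     If you want to verify this on your own hardware (from a live Linux USB stick, or via SSH if you can get any NIC working), a quick sketch; the module path is a placeholder:

       # show the NIC with its vendor:device ID, e.g. 8086:15fa for this I219-V
       lspci -nn | grep -i ethernet

       # list the PCI IDs a particular e1000e build claims to support;
       # no match for 15FA means that build will not drive your NIC
       modinfo /path/to/e1000e.ko | grep -i alias | grep -i 15fa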

  10. On 9/23/2021 at 9:18 PM, Darksplat said:

     The crazy thing is that my last server was from an ASUS B150 Pro Gaming D3 rig that has the same onboard NIC, so I suspect that it is the order in which the motherboard boots up, maybe IDK

     

     Check the PCI IDs; Intel revs the NIC silicon with every chipset.  The marketing device name remains the same, but the silicon and driver requirements differ.  They have done this consistently for a long time.