XPEnology Community

flyride

Moderator
Posts posted by flyride

  1. - Outcome of the update: SUCCESSFUL

    - DSM version prior to update: DSM 6.2.1-23824U6

    - Loader version and model: Jun v1.03b - DS3615

    - Using custom extra.lzma: NO

    - Installation type: VM - ESXi 6.7

     

    - Outcome of the update: SUCCESSFUL

    - DSM version prior to update: DSM 6.2.1-23824U6

    - Loader version and model: Jun v1.04b - DS918

    - Using custom extra.lzma: NO

    - Installation type: VM - ESXi 6.7

     

    - Outcome of the update: SUCCESSFUL

    - DSM version prior to update: DSM 6.2.1-23824U6

    - Loader version and model: Jun v1.04b - DS918 plus real3x mod

    - Using custom extra.lzma: YES, see above

    - Installation type: BAREMETAL - ASRock J4105-ITX

  2. DSM is installed on all your drives, unless you do something heroic with mdadm to disable updates to the system partitions.

     

    Reinstalling DSM doesn't impact any of your pools unless you choose to overwrite them.

     

  3. It is not at all like RAID1.

     

    In your original premise, you identified two independent hard disks with two filesystems.  You then suggested that you want to use a replication strategy (which is temporal - e.g. HyperBackup, rsync, btrfs Snapshot).

     

    With RAID1, a real-time copy of your data would exist on both drives simultaneously, and at all times.

     

    With a temporal replication, the data you select for replication is duplicated on intervals you specify.  At other times new data is not replicated.  However, a temporal replication strategy may be more efficient with disk utilization (as you point out) and will be tolerant of source file system corruption or file deletion (Snapshots with multiple copies).

     

  4. Ah, I understand your question now. Yes, a snapshot is a point-in-time reference, so only subsequent changes are stored as new blocks. But each snapshot references all files in the filesystem. So when a snapshot is replicated, it is initially a copy of all files in the filesystem at that point in time. Subsequent replicated snapshots recognize the blocks already present on the target, so only the updated blocks are copied, and then you have the same point-in-time views on both filesystems.
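
    For a rough picture of that mechanism, here is a minimal sketch using plain btrfs send/receive with hypothetical share and snapshot names - Snapshot Replication packages this up for you, so the exact commands it runs under the hood may differ:

    # initial replication: take a read-only snapshot and send it in full
    sudo btrfs subvolume snapshot -r /volume1/share /volume1/share.snap1
    sudo btrfs send /volume1/share.snap1 | sudo btrfs receive /volume2/

    # later replication: with -p, only blocks changed since the common parent snapshot are sent
    sudo btrfs subvolume snapshot -r /volume1/share /volume1/share.snap2
    sudo btrfs send -p /volume1/share.snap1 /volume1/share.snap2 | sudo btrfs receive /volume2/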

  5. I use Snapshot Replication between two systems as backup.  If you understand what it is doing, it is extremely effective.  Enterprise storage systems use snapshotting for backup all the time.

     

    I'm not sure what you mean by "snapshot works without the original files."  A share is snapshotted to another (new) target share.  These can be set up in each direction on your two volumes just fine, as long as you have the space on each.

     

  6. You are ignoring the fact that Synology compiles DSM for each piece of hardware.  We have support for the DS918 build via 1.04b and DS3615 via 1.03b.

     

    The only Haswell requirement comes from the DS918 DSM image, as it is compiled with FMA3 instructions.

     

    As long as you stay on loader 1.03b and DS3615 DSM you should be able to run 6.2.1 fine.  However, on this loader and 6.2.1, very few network drivers are able to boot - e1000e is the one most are successful with.  I think you have an 82576 NIC (you said 82676), which is supported by the igb driver.  You can try it, but if the network won't come up, you may need to disable it in the BIOS and add an e1000e NIC such as an 82571, 82574 or 82579, and it should work.

  7. 4 hours ago, ccxpenologyxcc said:

    OP, did you ever get it running with this hardware? I'm planning a build with the same hardware for FreeNAS and I want to see if XPEnology will work with it in case it becomes too much of a headache (ZFS sounds like a pain to deal with)

     

    That hardware should work great, either baremetal or under ESXi.  Suggested loader 1.02b or 1.03b and DS3615 DSM.

     

  8. Well, this doesn't show any virtual NIC in use, at least not one that is connected to the PCI bus.  However, it does indicate that the virtualization environment is Hyper-V, which is known not to work with DSM/XPEnology because the Microsoft virtual drivers aren't supported within DSM.

     

    See this thread for relevant discussion - it's possible that, if you can select that NIC emulation on your Azure instance, the DEC 21140/net-tulip driver could be added via extra.lzma and might work, but I haven't heard of anyone who has gotten it running yet.

     

  9. What network adapters is Azure providing?  If the VM is being delivered via Hyper-V, the current state is that Microsoft does not provide a compatible NIC.

  10. 1 hour ago, georgg said:

    Is there a place where I can check a list of supported NICs for 6.2/1.03b?

     

    Under Tutorials and Guides, or use the links in my signature.  I strongly recommend you use DS3615 instead of DS3617.

     

    However, it's well documented you must have an Intel card to boot 6.2.1 on 1.03b because of kernel crashes.

  11. 5 hours ago, Donkey545 said:

    dmesg

    
    [249007.103992] BTRFS: error (device dm-0) in convert_free_space_to_extents:456: errno=-5 IO failure
    [249007.112998] BTRFS: error (device dm-0) in add_to_free_space_tree:1049: errno=-5 IO failure
    <snip>
    [249007.181327] BTRFS: open_ctree failed
    

    sudo btrfs inspect-internal dump-super -f /dev/vg1000/lv

    
    sudo btrfs inspect-internal dump-super -f /dev/vg1000/lv
    superblock: bytenr=65536, device=/dev/vg1000/lv
    <snip>
    compat_flags            0x8000000000000000
    compat_ro_flags         0x3
                            ( FREE_SPACE_TREE |
                              FREE_SPACE_TREE_VALID )
    incompat_flags          0x16b
                            ( MIXED_BACKREF |
                              DEFAULT_SUBVOL |
                              COMPRESS_LZO |
                              BIG_METADATA |
                              EXTENDED_IREF |
                              SKINNY_METADATA )

     

     

    Ok, there is good information here.  The kernel output (dmesg) tells us that you probably have a hardware failure that is not a bad sector.  Possibly a disk problem, possibly a bad SATA cable, or a disk controller failure which has caused some erroneous information to be written to the disk.  

     

    It's extremely likely that a btrfs check would fix your problem.  In particular, I would try these repair commands in this order, attempting to mount after each one completes (a quick mount test is sketched just below the list):

    sudo btrfs check --init-extent-tree /dev/vg1000/lv

    sudo btrfs check --init-csum-tree /dev/vg1000/lv

    sudo btrfs check --repair /dev/vg1000/lv
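
    For the mount test between each command, something like this is enough (assuming /volume1 is the usual mount point; unmount again before running the next check):

    sudo mount -o ro /dev/vg1000/lv /volume1
    sudo umount /volume1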

     

    HOWEVER: Because the btrfs filesystem has incompatibility flags reported via the inspect-internal dump, I believe you won't be able to run any of these commands, and they will error out with "couldn't open RDWR because of unsupported option features." The flags tell us that, given the way this btrfs filesystem has been built, the btrfs code in the kernel and the btrfs tools are not totally compatible with each other.  I think Synology may use its own custom binaries to generate the btrfs filesystem and doesn't intend for the standard btrfs tools to be used to maintain it.  They may have their own maintenance tools that we don't know about, or they may only load the tools when providing remote support.

     

    It might be possible to compile the latest versions of the btrfs tools against a Synology kernel source and get them to work.  If it were me in your situation I would try that. It's actually been on my list of things to do when I get some time, and if it works I will post them for download. The other option is to build up a Linux system, install the latest btrfs, connect your drives to it and run btrfs tools from there.  Obviously both of these choices are fairly complex to execute.

     

    In summary, I think the filesystem is largely intact and the check options above would fix it.  But in lieu of a working btrfs check option, consider this alternative, which definitely does work on Synology btrfs:

     

    Execute a btrfs recovery.

     

    5 hours ago, Donkey545 said:

    Now this may be a dumb question, but what is my best method for backing up the data? Should I connect directly to SATA, use USB to SATA, or configure my other NAS and use my network?

     

    Btrfs has a special option to dump the whole filesystem to a completely separate location, even if the source cannot be mounted.  So if you have a free SATA port, install an adequately sized drive, create a second storage pool and set up a second volume to use as a recovery target.  Alternatively, you could build it on another NAS and NFS-mount it (see the sketch below).  Whatever you come up with has to be directly accessible on the problem system.
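
    If you go the NFS route, mounting a share from the other NAS as the recovery target might look like this (the IP and export path are hypothetical - substitute your own, and make sure the export allows read/write from this system):

    sudo mkdir -p /mnt/recovery
    sudo mount -t nfs 192.168.1.50:/volume1/recovery /mnt/recovery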

     

    For example's sake, let's say that you have installed and configured /volume2.  This command should extract all the files from your broken btrfs filesystem and drop them on /volume2.  Note that /volume2 can be set up as btrfs or ext4 - the filesystem type does not matter.

    sudo btrfs restore /dev/vg1000/lv /volume2

     

    FMI: https://btrfs.wiki.kernel.org/index.php/Restore

    Doing this restore is probably a good safety option even if you are able to successfully execute some sort of repair on the broken filesystem with btrfs check.
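
    If the plain restore stops on individual damaged files, the restore tool documented at the link above also has verbose and ignore-errors options; a sketch (untested on this particular filesystem) would be:

    sudo btrfs restore -v -i /dev/vg1000/lv /volume2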

     

    I'm just about out of advice at this point. You do a very good job of pulling together relevant logs, however.  If you encounter something interesting, post it.  Otherwise, good luck!

  12. On ‎2‎/‎9‎/‎2019 at 4:53 PM, IT_Informant said:

    3. Booted with my 4 SSDs so DSM would install there (unplugged my 16 spinning disks) 

    9. Connected up my 16 spinning disks with the SSDs. Booted and bingo. All 20 drives present

     

    Just so you know, this strategy doesn't accomplish anything.  The instant you installed the 16 spinning disks, OS and swap partitions were created and DSM was replicated to each of them.

     

    It is in fact possible to keep DSM I/O confined to your SSDs, but it is impossible to recover the space on the spinning disks.  If the former interests you, see here.

     

    Quote

    I would like to get some clarification on whether or not Intel's I219-V NIC is compatible with any recent boot loaders (1.02b +)

     

    Unfortunately the answer is that it depends.  There are several reasons for this:

    • Intel has up-revved the silicon for the I219-V PHY and unfortunately has been assigning new device IDs, which are incompatible with existing drivers
    • The various DSM versions (DSM 6.1.7, 6.2, 6.2.1) and platforms (DS3615, DS916, DS918) are not identical with regard to kernel drivers
    • The various DSM versions (DSM 6.1.7, 6.2, 6.2.1) and platforms (DS3615, DS916, DS918) are not identical with regard to kernel driver versions

     

    The first bullet is the major issue.  Essentially, Intel is selling a new device but calls it the same name as the old one.  So depending on which one you have, it may work or not.  Installing the "latest" drivers via extra.lzma has been the usual way of combatting this, but with the latest loaders and versions, extra.lzma support is less robust, so your mileage may vary.

     

    Without extra.lzma, the only way to tell for sure that your device will work is to get the device ID and check it against the supported device IDs for the DSM version and platform you want to use.  Links to these resources are also in my signature, or you can go to the Tutorials and Guides page where they are pinned.
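
    To pull the device ID, something like this from the shell works (lspci may or may not be present on DSM; the sysfs read below is an alternative, with eth0 as an example interface name):

    lspci -nn | grep -i ethernet
    # or read the PCI vendor:device pair straight from sysfs
    cat /sys/class/net/eth0/device/vendor /sys/class/net/eth0/device/device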

  13. No, the stick is the boot device and is modified by DSM whenever you take an update, and perhaps at other times.

     

    If you need to access the files on it while booted up, there are methods of mounting the partitions so that they are accessible from the shell.
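
    As a sketch, assuming a Jun loader where the stick shows up as /dev/synoboot and the grub config lives on the first partition (device names can vary):

    sudo mkdir -p /tmp/synoboot
    sudo mount /dev/synoboot1 /tmp/synoboot
    ls /tmp/synoboot/grub        # grub.cfg is typically here
    sudo umount /tmp/synoboot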

  14. A few more things to try.  But before that, do you have another disk on which you could recover the 4TB of files?  If so, that might be worth preparing for.

     

    First, attempt one more mount:

    sudo mount -o recovery,ro /dev/vg1000/lv /volume1

     

    If it doesn't work, do all of these:

    sudo btrfs rescue super /dev/vg1000/lv

    sudo btrfs-find-root /dev/vg1000/lv

    sudo btrfs insp dump-s -f /dev/vg1000/lv

     

    Post results of all.  We are getting to a point where the only options are a check repair (which doesn't really work on Syno because of kernel/utility mismatches) or a recovery to another disk as mentioned above.

  15. 3 hours ago, Donkey545 said:

    Is there a reason that you think the missing LV is the issue?

     

    Syno has several different historical methods of building and naming LVs.  I think this one must have been built on 6.1.x because it doesn't use the current design, which adds a second "syno_vg_reserved_area" LV when a volume is created.  But yours may be complete with only one LV.

     

    It should not be necessary, but make sure you have activated all your LV's by executing

    sudo vgchange -ay

     

    At this point I think we have validated all the way up to the filesystem. 

     

    3 hours ago, Donkey545 said:

    I think that it was BTRFS, but I am not sure honestly. For some reason in my research I neglected to make sure I was looking for that detail. This suggests that btrfs is correct

     

    Yep, you are using btrfs.  Therefore the tools (fsck, mke2fs) you were trying to use at the beginning of the post are not correct; those are appropriate for the ext4 filesystem only.  NOTE: Btrfs has a lot of redundancy built into it, so running a filesystem check using btrfs check --repair is generally not recommended.

     

    Before you do anything else, just be absolutely certain you cannot manually mount the filesystem via

    sudo mount /dev/vg1000/lv /volume1

     

    If that doesn't work, try mounting without the filesystem cache

    sudo mount -o clear_cache /dev/vg1000/lv /volume1

     

    If that doesn't work, try a recovery mount

    sudo mount -o recovery /dev/vg1000/lv /volume1

     

    If any of these mount options work, copy all the data off that you can, delete your volume and rebuild it.

    If not, report the failure information.
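
    If one of those mounts does succeed, copying everything off before the rebuild can be as simple as this (target path is hypothetical - any volume or mount with enough free space will do):

    sudo rsync -avh --progress /volume1/ /volume2/rescue/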

  16. So FWIW this proves that your volume is not subject to a 16TB limitation (which only applies to ext4 filesystems created with a 32-bit OS).

    Do you know if your filesystem was btrfs or ext4?

    /$ sudo vgdisplay
      --- Volume group ---
      VG Name               vg1000
      System ID
      Format                lvm2
      Metadata Areas        2
      Metadata Sequence No  5
      VG Access             read/write
      VG Status             resizable
      MAX LV                0
      Cur LV                1  <----------- this looks like a problem to me, I think there should be 2 LVs

    Post the output of vgdisplay --verbose and lvdisplay --verbose
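
    That is, from the shell:

    sudo vgdisplay --verbose
    sudo lvdisplay --verbose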

     

     
