Posts posted by flyride

  1. - Outcome of the update: SUCCESSFUL (but see comments)

    - DSM version prior update: DSM 6.2.2-24922 Update 6

    - Loader version and model: Jun's v1.04b - DS918+

    - Using custom extra.lzma: YES - real3x mod (but see comments)

    - Installation type: BAREMETAL - ASRock J4105-ITX

    - Comments: no /dev/dri (missing Gemini Lake firmware)

     

The NVMe code is new, and the original NVMe patch does not work. I uninstalled the NVMe cache prior to the upgrade and recommend that you do too. The NVMe cache was reinstalled and is working fine after applying the updated patch here.
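For anyone wanting to sanity-check their own system, a quick way to confirm that DSM still enumerates the NVMe devices after the patch (a minimal sketch; standard Linux NVMe device naming assumed):

# ls /dev/nvme*

(a working system should list the controller and its namespace, e.g. /dev/nvme0 and /dev/nvme0n1)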

     

ASRock J/Q-series motherboards require an extra.lzma or the real3x mod to boot DSM 6.2.2. 6.2.3 is verified to install on the J4105-ITX with no mods, so I chose to revert the real3x mod immediately before invoking the upgrade, using the following procedure:


     

# mkdir /mnt/synoboot2
# cd /dev
# mount synoboot2 /mnt/synoboot2
(synoboot2 is the loader's second partition, which carries extra.lzma/extra2.lzma)
# cd /mnt/synoboot2
# <copy/rename/restore the original 1.04b extra.lzma and extra2.lzma to this folder, extracting them from the boot loader's second partition if necessary>
# cd /
# umount /mnt/synoboot2
(do not skip this umount step or your upgrade will fail with Error 21)
# rm /usr/lib/modules/update/*
# rmdir /usr/lib/modules/update
(this removes the real3x driver modules so that stock modules are used after the upgrade)

     

Now install the 6.2.3 upgrade via Control Panel Manual DSM Update before rebooting; if you reboot first, your system will be bricked with no access to the LAN!

     

     

  2. Loader version and type (918+):  1.04b DS918+

    DSM version in use (including critical update):  DSM 6.2.3-25423

    Using custom modules/ramdisk? If yes which one?:  None

    Hardware details:  Motherboard:  ASRock J4105-ITX

Comments: A clean baremetal install of 6.2.3 works on this motherboard and supports the Realtek NIC

     

Gemini Lake i915 drivers and firmware will need to be added in order to transcode.

NVMe cache support requires an updated patch.

3. NOTE: the 6.2.3 upgrade overwrites the NVMe lib file.

I strongly recommend deleting your r/w cache prior to installing 6.2.3, or you risk corrupting your volume on reboot.

     

The patch installs (meaning the search strings are found), but it is not working completely. There is an additional PCI port check in the new library that is not addressed by this patch.
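If you want to look at the new library yourself, a rough sketch (assuming the patch targets libsynonvme.so.1 as the commonly circulated script does; the path and filename are an assumption and may differ):

# strings /usr/lib64/libsynonvme.so.1 | grep -i pci

(lists PCI-related strings in the library; the port check itself is compiled code, so this is only a starting point for analysis)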

  4. 1 minute ago, chickenstealers said:

    I mount to volume2 because volume2 has 1 TB space and works well

     

That doesn't work the way you think it does. A mount point is just a directory: mounting the filesystem there does not store any of its data on volume2's space. You will only confuse yourself by doing so.

But it doesn't matter; as far as I can see, your filesystem is quite damaged. Do the restore and see if you can get your files out.
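To illustrate why, a minimal sketch (paths are hypothetical):

# mkdir /volume2/recovery
# mount -o ro /dev/vg1000/lv /volume2/recovery
# df -h /volume2/recovery

(df now reports the size and usage of the mounted lv, not of volume2; nothing is written to volume2, because the mount point is just a directory)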

     

  5. 1 hour ago, chickenstealers said:

    My volume only contain 1 disk each

    
    IPUserverOLD:~# cat /proc/mdstat
    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
    md2 : active raid1 sdb3[0]
          1068919808 blocks super 1.2 [1/1] [U]
    
    md3 : active raid1 sdc3[0]
          1068919808 blocks super 1.2 [1/1] [U]

     

In that case your only options are filesystem repair, or extracting the files in a recovery mode.

     

    If you have no other plan, start here and follow the subsequent posts until resolution.
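If you do attempt the repair route, start with a no-change pass so you can see the extent of the damage before committing to anything (a sketch, assuming ext4 on the md2 array shown above):

# fsck.ext4 -n /dev/md2

(the -n flag answers "no" to every prompt: it reports problems but fixes nothing)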

6. Bay Trail precedes Haswell, but it's one of those hybrid architectures; the GPU is from Ivy Bridge. I've previously suggested that the processor feature DS918+ depends upon is FMA3, which showed up in Haswell. FMA3 really isn't anything we should need in the NAS world, but it seems to be how the kernel is compiled.

     

    I'm not sure how Bearcat got a J1900 to work in that case, but here are the architecture comparisons nonetheless.

    https://en.wikipedia.org/wiki/Silvermont

    https://en.wikipedia.org/wiki/Haswell_(microarchitecture)
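If you want to check a candidate CPU for the flag in question, fma is the /proc/cpuinfo flag name for FMA3; a one-liner sketch:

# grep -m1 -ow fma /proc/cpuinfo

(prints "fma" if the CPU supports FMA3, and nothing otherwise)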

  7. - Outcome of the update: SUCCESSFUL

    - DSM version prior update: DSM 6.2.2-24922 Update 6

    - Loader version and model: Jun's v1.03b - DS3615xs

    - Using custom extra.lzma: NO

    - Installation type: VM - ESXi 6.7U2

    - Comments: Test VM.  Boot loader no longer aliased as /dev/synoboot but displayed as /dev/sdm. "satashare-x" appeared in Control Panel and File Station

     

    - Outcome of the update: SUCCESSFUL

    - DSM version prior update: DSM 6.2.2-24922 Update 5

    - Loader version and model: Jun's v1.04b - DS918+

    - Using custom extra.lzma: NO

    - Installation type: VM - ESXi 6.7U2

    - Comments: Test VM.  Boot loader no longer aliased as /dev/synoboot but displayed as /dev/sdm

     

/dev/synoboot is broken with DSM 6.2.3 on ESXi. This has a number of negative implications; see here for the resolution, and a warning for arrays larger than 12 disks.
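A quick way to tell whether a given system is affected (device names per the normal loader layout):

# ls /dev/synoboot*

(a healthy system shows /dev/synoboot, /dev/synoboot1 and /dev/synoboot2; if these are missing and the loader appears as a regular disk such as /dev/sdm, you are affected)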

  8. 5 hours ago, jbesclapez said:

I did some reading like you recommended, and I also contacted a forensic recovery service to ask for a quote with all the details you gave me.

Then I did more reading to see what I could do. Now I need your opinion, as once again, I might misunderstand things :-)

I was thinking of doing a RAW copy of the drives and then some analysis. I will have to buy a new drive, but I was thinking of upgrading my NAS anyway as it is pretty old, so the drive is not a waste.

Before diving into this, I would like to know if you would still be OK to assist me as you did before? I will surely need your help, and I was wondering if you would be OK to continue this adventure with me :-). I would prefer having your help over that of a forensic service! At least I am learning a side of IT I did not know before... It is also kind of fun, I have to admit, even if it plays with my nerves.

Some good news: I realised yesterday that I have a drive with a backup of 99% of my personal videos (kids, family). What is left on the NAS, I would still like to recover...

     

There is a big difference between directing someone through a technical recovery and offering an opinion on a course of action. I have done a lot of mdadm array recoveries in my past work life, a fair number of my own, and those you see online. DSM cannot repair anything but the simplest array failures, which otherwise leaves folks out in the cold, while I am pretty confident that if there is a reasonable way to recover an mdadm array, I can help find it.

     

    I don't know anything about working with a forensic recovery service, and I have no direct knowledge of how to run recovery tools.  So while I am happy to offer my opinion if you describe a situation and a decision point, I can't offer step-by-step, detailed direction on how to proceed. In short, I can't continue to be a guru. I post online here because I want to provide good examples on how to recover broken arrays so that XPenology users can help each other and resolve their own problems. I think it would be worth documenting your upcoming experience here for the same reason, and I am willing to participate in and facilitate that.

     

I agree that if you want to attempt filesystem repairs, making a full copy of your array to another (larger) drive is a good move before proceeding. You will need to decide whether you want to work on the original or the copy (I advise the copy). Be careful when doing this operation, as it acts directly on storage devices with no safeguards, and it's not impossible to accidentally damage your original or some other storage device (like the root filesystem).
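For the copy itself, the usual approach is a raw block copy along these lines. This is only a sketch: /dev/sdX is a hypothetical placeholder for the backup target and must be triple-checked before running, because dd will silently overwrite whatever it is pointed at.

# dd if=/dev/vg1000/lv of=/dev/sdX bs=64M status=progress

(reads the entire logical volume and writes it raw to the target device; status=progress requires a reasonably recent coreutils)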

  9. 58 minutes ago, jbesclapez said:
    
    [ 4796.371660] EXT4-fs (dm-0): VFS: Can't find ext4 filesystem

     

     

    So that is where you are now.  48 hours ago, this was the state:

     

    On 4/10/2020 at 12:35 PM, flyride said:

    But here's the situation:

    1. Your RAID5 array is critical (no redundancy) and has mild corruption
    2. Your filesystem has some corruption but we have been able to get it to mount

    My strong recommendation is that you not attempt to "fix" anything further, and do the following:

    1. Copy everything off your volume1 onto another device.  If you need to go buy an 8TB external drive, do it.
    2. Delete your volume1
    3. Delete your SHR
    4. Click the Fix System Partition options in Storage Manager to correct the DSM and swap replicas
    5. Remove/replace your bad drive #0/sda
    6. Create a new SHR
    7. Create a new volume1
    8. Copy your files back

     

    I also said this:

     

    On 4/10/2020 at 12:20 PM, flyride said:

    I'm pretty sure Syno won't rewrite fstab as long as you don't make any changes to the GUI.

     

Up to this point we had really made *NO* irreversible changes to the system, aside from rewriting the array superblocks when executing the array create. The point of editing the fstab file was so that we did not have to do a filesystem recovery or any other action that would alter the data in the filesystem. When the filesystem was finally mounted, it was mounted read-only so that no other action could damage the data within.

     

Then you asked if you could attach an 8TB drive directly to the system. Aside from diverging from the recommendation, I was not expecting you to use the GUI to configure it. It could have been attached via USB, or mkfs run against the new device from the command line. Unfortunately the GUI was used, and I believe DSM tried to write directly to your lv device to build the new filesystem, which by all evidence has corrupted the filesystem that was already there. So when dmesg says "can't find ext4 filesystem," that is probably the reason why.

     

I think that now there is no direct way to get to the data on the filesystem without heroic and/or irreversible filesystem check/recovery operations. That's the bad news.

The good news is that your array is still in the same state (critical, but intact), and it is very likely that most of your data is still physically present on the array.

     

    You basically have three choices now, and to be blunt, you're on your own with the decision and how to proceed further.

     

1. Send your drives off for forensic file recovery. This is expensive, but they will find data and recover it for you. If you do this, it's vitally important that you tell them the following (the UUIDs were read with mdadm --examine; see the sketch after this list):

    • You are sending a FOUR-drive Synology SHR, with one missing member
    • drive 0 is missing
    • drive 1 is /dev/sdc5:      Array UUID : 75762e2e:4629b4db:259f216e:a39c266d     Device UUID : 6ba575e4:53121f53:a8fe4876:173d11a9
    • drive 2 is /dev/sdb5:      Array UUID : 75762e2e:4629b4db:259f216e:a39c266d     Device UUID : 7eee55dc:dbbf5609:e737801d:87903b6c
    • drive 3 is /dev/sdd5:      Array UUID : 75762e2e:4629b4db:259f216e:a39c266d     Device UUID : fb417ce4:fcdd58fb:72d35e06:9d7098b5
    • The filesystem type is ext4, NOT BTRFS (there may be spurious btrfs signatures in various places in /dev/lv)

    2. Attempt to recover the filesystem using e2fsck or variant

    • At its core, fsck is an irreversible process, which might invalidate a subsequent choice to send the drives for forensic recovery
    • If I were to do this, I would first clone /dev/lv to another device as a backup.  Unfortunately you don't possess a device large enough to do this (8.17TB > 7.27TB)

    3. Use third-party utilities to attempt to recover data from the /dev/lv device directly

• You would need to be able to safely install such a utility via the command line
• You would need to connect storage. The best way to do that would be to connect your 8TB drive via USB, which would then be accessible as a mounted filesystem.
• These tools are not plug-and-play, may need some technical guidance, and may have varying degrees of effectiveness

    Here's a Google search to help you with your decisions:

    https://www.google.com/search?q=recover+data+from+unmountable+ext4&oq=recover+data+from+unmountable+ext4
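For reference, the Array UUID / Device UUID values quoted under option 1 are read from the member partitions with mdadm, along these lines:

# mdadm --examine /dev/sdc5 | grep -E 'Array UUID|Device UUID'

(repeat for /dev/sdb5 and /dev/sdd5; --examine only reads the member superblocks and changes nothing)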

     

  10.  

    5 minutes ago, jbesclapez said:

The 8TB is out of the system. I edited the fstab, did the reboot, and the fstab went back to its previous state without our work.

Did you also note that when I log on with PuTTY I get this error > Could not chdir to home directory /var/services/homes/admin: No such file or directory
Any idea what is happening?

     

    The symlink to the homes root folder has been damaged.  I don't think you should worry about that now.

     

    Can you mount your filesystem manually?

    # mount -v -oro,noload,sb=1934917632 /dev/vg1000/lv /volume1
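For background, the sb= option points mount at a backup superblock. If you need to locate the backups yourself, dumpe2fs can list them without touching the filesystem (a sketch):

# dumpe2fs /dev/vg1000/lv | grep -i superblock

(prints the primary and backup superblock locations; note that mount's sb= value is given in 1k units, per the mount(8) man page)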

     
