jun

DSM 6.2 Loader


So I made the mistake of running an update without checking the forum first.

I'm on an Ivy Bridge i5, and as far as I can tell the new update doesn't support that old architecture.

 

I can do a clean install with my old image on Jun's 1.03b and DS3617xs, but when I connect the disks from my old SHR RAID it doesn't initialize them or recognize them as an SHR group until a reboot. After the reboot it shows as recoverable, but the recovery process runs the newer update on the RAID disks and takes me back to square one (Jun's loader boots normally, but there's no network on the onboard Intel NIC and no disk activity).

 

Is there a good way of preventing the upgrade during recovery, or failing that, of invalidating the DSM install that is on the old RAID?
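One approach that might address the second option (a sketch only, not a tested procedure): boot a live Linux, assemble the small DSM system partition (md0), and lower the version recorded in /etc.defaults/VERSION so the install flow treats the disks as carrying the older build. The device names and build number below are assumptions for illustration; editing the system partition can destroy the install, so have backups first.

```shell
# Boot any live Linux with mdadm available, then assemble the DSM
# system partition (a RAID1 across the first partition of every disk).
mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1   # adjust device names to your disks

mkdir -p /mnt/dsmroot
mount /dev/md0 /mnt/dsmroot

# Inspect the version DSM thinks is installed...
cat /mnt/dsmroot/etc.defaults/VERSION

# ...and, if it shows the unwanted build, lower buildnumber so the older
# PAT file is accepted again. The number here is an example, not a
# known-good value for every setup.
sed -i 's/^buildnumber=.*/buildnumber="23739"/' /mnt/dsmroot/etc.defaults/VERSION

umount /mnt/dsmroot
mdadm --stop /dev/md0
```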


Sounds like you need to downgrade to a compatible version of DSM, yes?

 

Barnie, when I had an issue with an update not suitable for my system, I followed the advice of a fellow member here that allowed me to downgrade my DSM version.

 

1. I made a USB stick for an alternate model and did a migration keeping data (I have DS3617xs hardware, so I made a DS3615xs USB). That forced some of the core files changed in the unsuccessful upgrade to be rewritten and allowed me to do a proper downgrade of the DSM OS.

 

Manually select the DSM file (not online) and use the version that is correct for the loader (3615 vs. 3617, for example).

 

2. Once I had things running properly again on the older version, I then (with a new DS3617xs USB) did a second migration back to the loader I wanted, with the appropriate DSM version for my hardware.

 

I hope that makes sense and is successful for you as well.

 

Hopefully someone else will chime in if my advice is not correct for Barnie.

Edited by mgrobins


Hi, just after clarification.

 

I'm on a baremetal install with the hardware listed in my sig: loader 1.03b DS3617xs and DSM 6.2-23739.

 

6.2-23824 is the next offered update, but this will NOT run on the DS3617xs with 1.03b.

 

Some are commenting that minor updates to 23739 are operable on the DS3615xs. Is this the case?

 

I don't think there are any landmark changes between 23739 and 23824 for a general home NAS (any disagreement?), so I will just sit on my current version for now.


Unfortunately, it will not install a lower DSM version, even though it's a migration from 3617 to 3615.

I think I'll have to edit the system partitions on the drives to make it work?

 

Capture.PNG


I downloaded the .ISO file and changed the MAC address.

 

Is there a step-by-step instruction for what to do next in ESXi, and how to install Xpenology on ESXi?

 

Does the .ISO file need to be converted to .VMDK?

When you create the VM, do you add one or two HDDs? Where do you put the .ISO, and where the .VMDK file?

Edited by mandreto10

On 1/11/2019 at 8:24 AM, NeoID said:

 

I will answer my own post as it might help others, or at least shine some light on how to configure the sata_args parameters.
As usual I modified grub.cfg and added my serial and MAC before commenting out all menu items except the one for ESXi.

 

When booting the image, Storage Manager gave me the above setup, which is far from ideal. The first one is the bootloader attached to virtual SATA controller 0, and the other two are my first and second drives connected to my LSI HBA in pass-through mode. In order to hide the bootloader drive I added DiskIdxMap=1F to the sata_args; this pushed the bootloader out of the 16-disk boundary, so I was left with the two data drives. In order to move them to the beginning of the slots I also added SasIdxMap=0xfffffffe. I tested multiple values and decreased the value one by one until all drives aligned correctly. The reason you see 16 drives is that maxdrives in /etc.defaults/synoinfo.conf was set to 16. I'm not exactly sure why it is set to that value and not 4 or 12; maybe it's changed based on your hardware/SATA ports and the fact that my HBA has 16 ports? No idea about that last part, but it seems to work as intended.

 


set sata_args='DiskIdxMap=1F SasIdxMap=0xfffffffe'
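As a side note (not from the post above): the maxdrives value referred to can be inspected over SSH; a minimal sketch, assuming stock DSM paths:

```shell
# Show the current drive-slot limit DSM was configured with.
grep '^maxdrives' /etc.defaults/synoinfo.conf

# The running copy lives in /etc; DSM regenerates it from /etc.defaults
# on boot, so any manual change usually needs to land in both files.
grep '^maxdrives' /etc/synoinfo.conf
```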

 

Capture.PNG

Capture.PNG

 

 

Hi,

When you say "As usual I modified grub.cfg and added my serial and MAC before commenting out all menu items except the one for ESXi":

 

Where did you find instructions on what to comment out, and which serial to provide (I assume the serial number of a Synology device, not a serial port)?

Why do you need to comment anything out in the .cfg file?

 

Please clarify.
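For orientation, the file being discussed is grub.cfg on the loader image's first partition; below is a rough, illustrative excerpt with placeholder values (the serial, MAC, and menu-entry titles here are made up, not working values):

```
# Excerpt of grub.cfg on the loader's first partition (placeholders only).
set vid=0x058f          # USB stick vendor ID (matters for baremetal USB boots)
set pid=0x6387          # USB stick product ID
set sn=0000XXXXXXXX     # serial number matching the emulated model
set mac1=001132XXXXXX   # MAC assigned to the first NIC

# Commenting out the baremetal entries leaves only the ESXi one, so the
# VM always boots the right menu item without manual selection:
#menuentry "DS3617xs 6.2 Baremetal with Jun's Mod v1.03b" { ... }
#menuentry "DS3617xs 6.2 Baremetal with Jun's Mod v1.03b Reinstall" { ... }
menuentry "DS3617xs 6.2 VMWare/ESXI with Jun's Mod v1.03b" {
    ...
}
```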

 

 

2 hours ago, mandreto10 said:

I downloaded the .ISO file and changed the MAC address.

 

Is there a step-by-step instruction for what to do next in ESXi, and how to install Xpenology on ESXi?

 

Does the .ISO file need to be converted to .VMDK?

When you create the VM, do you add one or two HDDs? Where do you put the .ISO, and where the .VMDK file?

 

How do you know which PID and VID to use?
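For what it's worth: on baremetal installs the vid/pid in grub.cfg are usually set to match the USB stick the loader is written to (on an ESXi install booting from the vmdk image they generally don't matter). On a Linux machine the stick's IDs can be read from `lsusb` output; a sketch parsing a sample line (the device shown is illustrative):

```shell
# Run `lsusb` with the stick plugged in; the ID column is vendor:product
# (VID:PID). A sample output line, hard-coded here for illustration:
line='Bus 002 Device 004: ID 058f:6387 Alcor Micro Corp. Flash Drive'

# Extract the two hex IDs in the 0x-prefixed form grub.cfg expects.
vid=0x${line#*ID }; vid=${vid%%:*}
pid=0x$(echo "$line" | sed 's/.*ID [0-9a-f]*:\([0-9a-f]*\).*/\1/')

echo "set vid=$vid"   # -> set vid=0x058f
echo "set pid=$pid"   # -> set pid=0x6387
```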

17 hours ago, Bamle said:

Unfortunately, it will not install a lower DSM version, even though it's a migration from 3617 to 3615.

I think I'll have to edit the system partitions on the drives to make it work?

 

Capture.PNG

You need to follow the instructions in this topic; a few of us had the same issue. You can do it, though the instructions can be a bit confusing to follow. Just let us know if you get stuck.

 

 



Hi All

 

I am on v1.03b DS3615xs, and for some odd reason my NAS is showing me there is an update to 6.2.1 when it should not. Is anyone else seeing this, or is this correct?

 

DSM-Update-Error.thumb.PNG.541a8af803304e1d8486200ce65d444d.PNG

 

Thanks


Yes, 6.2.1 is a Synology-recommended update from 6.2. If you do NOT have an Intel NIC serviced by the e1000e driver, installing that update will brick your installation; otherwise it should be okay.

 

You can confirm you have such a NIC by SSHing into your box and typing "lspci -k"; you should see something similar to the following in the output:

# lspci -k
0000:13:00.0 Class 0200: Device 8086:10d3
        Subsystem: Device 15ad:07d0
        Kernel driver in use: e1000e
#

 



As mentioned before, when using the following sata_args, the system crashes after a little while.

I have given my VM a serial port so I can see what's going on, but I'm having a hard time understanding the issue.

 

Can anyone explain to me what's going on here and how I can get the sata_args to work? Everything looks right from DSM/Storage Manager...

set sata_args='DiskIdxMap=1F SasIdxMap=0xfffffffe'

 

Spoiler

[GRUB 2.02 boot menu: entry "DS918+ 6.2.1/6.2 VMWare/ESXI with Jun's Mod v1.04b" highlighted and booted automatically after 5s]

[    1.648999] ata2: No present pin info for SATA link down event
[    1.954077] ata3: No present pin info for SATA link down event
[    2.259160] ata4: No present pin info for SATA link down event
[    2.564267] ata5: No present pin info for SATA link down event
[    2.869319] ata6: No present pin info for SATA link down event
[    3.174492] ata7: No present pin info for SATA link down event
[    3.479478] ata8: No present pin info for SATA link down event
[    3.784601] ata9: No present pin info for SATA link down event
[    4.089639] ata10: No present pin info for SATA link down event
[    4.394779] ata11: No present pin info for SATA link down event
[    4.699792] ata12: No present pin info for SATA link down event
[    5.004877] ata13: No present pin info for SATA link down event
[    5.310027] ata14: No present pin info for SATA link down event
[    5.615100] ata15: No present pin info for SATA link down event
[    5.920115] ata16: No present pin info for SATA link down event
[    6.225196] ata17: No present pin info for SATA link down event
[    6.530275] ata18: No present pin info for SATA link down event
[    6.835423] ata19: No present pin info for SATA link down event
[    7.140430] ata20: No present pin info for SATA link down event
[    7.445510] ata21: No present pin info for SATA link down event
[    7.750664] ata22: No present pin info for SATA link down event
[    8.055675] ata23: No present pin info for SATA link down event
[    8.360797] ata24: No present pin info for SATA link down event
[    8.665902] ata25: No present pin info for SATA link down event
[    8.970911] ata26: No present pin info for SATA link down event
[    9.276046] ata27: No present pin info for SATA link down event
[    9.581121] ata28: No present pin info for SATA link down event
[    9.886194] ata29: No present pin info for SATA link down event
[   10.191250] ata30: No present pin info for SATA link down event
patching file etc/rc
patching file etc/synoinfo.conf
Hunk #2 FAILED at 263.
Hunk #3 FAILED at 291.
Hunk #4 FAILED at 304.
Hunk #5 FAILED at 312.
Hunk #6 FAILED at 328.
5 out of 6 hunks FAILED -- saving rejects to file etc/synoinfo.conf.rej
patching file linuxrc.syno
patching file usr/sbin/init.post
START /linuxrc.syno
Insert basic USB modules...
:: Loading module usb-common ... [  OK  ]
:: Loading module usbcore ... [  OK  ]
:: Loading module xhci-hcd ... [  OK  ]
:: Loading module xhci-pci ... [  OK  ]
:: Loading module usb-storage ... [  OK  ]
:: Loading module BusLogic ... [  OK  ]
:: Loading module vmw_pvscsi ... [  OK  ]
:: Loading module megaraid_mm ... [  OK  ]
:: Loading module megaraid_mbox ... [  OK  ]
:: Loading module scsi_transport_spi ... [  OK  ]
:: Loading module mptbase ... [  OK  ]
:: Loading module mptscsih ... [  OK  ]
:: Loading module mptspi ... [  OK  ]
:: Loading module mptctl ... [  OK  ]
:: Loading module megaraid ... [  OK  ]
:: Loading module megaraid_sas ... [  OK  ]
:: Loading module scsi_transport_sas ... [  OK  ]
:: Loading module raid_class ... [  OK  ]
:: Loading module mpt3sas ... [  OK  ]
:: Loading module mdio ... [  OK  ]
:: Loading module rtc-cmos ... [  OK  ]
Insert net driver(Mindspeed only)...
Starting /usr/syno/bin/synocfgen...
/usr/syno/bin/synocfgen returns 0
[   10.663536] md: invalid raid superblock magic on sda3
[   10.679427] md: invalid raid superblock magic on sdb3
Partition Version=8
 /sbin/e2fsck exists, checking /dev/md0... 
/sbin/e2fsck -pvf returns 0
Mounting /dev/md0 /tmpRoot
------------upgrade
Begin upgrade procedure
No upgrade file exists
End upgrade procedure
============upgrade
Wait 2 seconds for synology manufactory device
Sun Jan 20 20:31:05 UTC 2019
/dev/md0 /tmpRoot ext4 rw,relatime,data=ordered 0 0
none /sys/kernel/debug debugfs rw,relatime 0 0
sys /sys sysfs rw,relatime 0 0
none /dev devtmpfs rw,relatime,size=1012292k,nr_inodes=253073,mode=755 0 0
proc /proc proc rw,relatime 0 0
linuxrc.syno executed successfully.
Post init


synotest login: [10259.795238] blk_update_request: I/O error, dev sda, sector in range 4980736 + 0-2(12)
[10259.796149] blk_update_request: I/O error, dev sdb, sector 4982400
[10259.796776] blk_update_request: I/O error, dev sdb, sector in range 9437184 + 0-2(12)
[10259.797596] blk_update_request: I/O error, dev sda, sector 9437192
[10259.811059] blk_update_request: I/O error, dev sdb, sector in range 9580544 + 0-2(12)
[10259.811892] md/raid1:md2: sdb3: rescheduling sector 143776
[10259.812617] blk_update_request: I/O error, dev sdb, sector 9583008
[10259.813311] blk_update_request: I/O error, dev sda, sector 9583008
[10259.813951] raid1: Disk failure on sdb3, disabling device. 
[10259.813951]     Operation continuing on 1 devices
[10259.814969] md/raid1:md2: redirecting sector 143776 to other mirror: sda3
[10259.815712] blk_update_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
[10259.816551] blk_update_request: I/O error, dev sda, sector in range 9580544 + 0-2(12)
[10259.816574] blk_update_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
[10259.818192] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 1, flush 0, corrupt 0, gen 0
[10259.819209] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10259.820249] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10259.821295] blk_update_request: I/O error, dev sda, sector in range 9580544 + 0-2(12)
[10259.822108] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 2, flush 0, corrupt 0, gen 0
[10259.823030] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10259.824078] blk_update_request: I/O error, dev sda, sector in range 9580544 + 0-2(12)
[10259.824909] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 3, flush 0, corrupt 0, gen 0
[10259.825336] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10259.825822] blk_update_request: I/O error, dev sda, sector in range 9580544 + 0-2(12)
[10259.825824] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 4, flush 0, corrupt 0, gen 0
[10259.825828] blk_update_request: I/O error, dev sda, sector in range 9580544 + 0-2(12)
[10259.825829] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 5, flush 0, corrupt 0, gen 0
[10259.830364] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10259.831691] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10259.832682] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 6, flush 0, corrupt 0, gen 0
[10259.833685] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10259.834674] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 7, flush 0, corrupt 0, gen 0
[10259.835676] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10259.836665] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 8, flush 0, corrupt 0, gen 0
[10259.837577] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 9, flush 0, corrupt 0, gen 0
[10259.838449] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10259.839543] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10259.840654] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 10, flush 0, corrupt 0, gen 0
[10259.843248] BTRFS error (device md2): BTRFS: md2 failed to repair btree csum error on 65224704, mirror = 1
[10259.844581] blk_update_request: I/O error, dev sda, sector 11680160
[10259.847284] BTRFS error (device md2): BTRFS: md2 failed to repair btree csum error on 65224704, mirror = 2
[10259.848491] BTRFS error (device md2): error loading props for ino 17188 (root 257): -5
[10259.873224] md/raid1:md0: sdb1: rescheduling sector 831776
[10259.874011] blk_update_request: I/O error, dev sdb, sector 833824
[10259.874796] blk_update_request: I/O error, dev sda, sector 833824
[10259.875564] raid1: Disk failure on sdb1, disabling device. 
[10259.875564]     Operation continuing on 1 devices
[10259.876809] md/raid1:md0: redirecting sector 831776 to other mirror: sda1
[10259.877732] blk_update_request: I/O error, dev sda, sector 833824
[10259.882257] blk_update_request: I/O error, dev sda, sector 4982400
[10259.885206] blk_update_request: I/O error, dev sda, sector 1045816
[10259.993102] BTRFS error (device md2): failed to repair data csum of ino 16633 off 0 (ran out of all copies)
[10259.994648] BTRFS error (device md2): failed to repair data csum of ino 16633 off 4096 (ran out of all copies)
[10259.995787] BTRFS error (device md2): failed to repair data csum of ino 16633 off 8192 (ran out of all copies)
[10260.003366] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
[... the md0 buffer I/O error above repeats ~10 times ...]
[... hundreds of further "BTRFS error (device md2): failed to repair data csum of ino N off M (ran out of all copies)" lines for inodes 16633, 2557, 15149 and 14128 at successive 4096-byte offsets trimmed ...]
[10260.113355] 
[10260.113355] BTRFS error (device md2): failed to repair data csum of ino 15149 off 335872 (ran out of all copies)
[10260.113355] 
[10260.113357] BTRFS error (device md2): failed to repair data csum of ino 15149 off 339968 (ran out of all copies)
[10260.113357] 
[10260.113358] BTRFS error (device md2): failed to repair data csum of ino 15149 off 344064 (ran out of all copies)
[10260.113358] 
[10260.113359] BTRFS error (device md2): failed to repair data csum of ino 15149 off 348160 (ran out of all copies)
[10260.113359] 
[10260.113360] BTRFS error (device md2): failed to repair data csum of ino 15149 off 352256 (ran out of all copies)
[10260.113360] 
[10260.113361] BTRFS error (device md2): failed to repair data csum of ino 15149 off 356352 (ran out of all copies)
[10260.113361] 
[10260.113363] BTRFS error (device md2): failed to repair data csum of ino 15149 off 360448 (ran out of all copies)
[10260.113363] 
[10260.113364] BTRFS error (device md2): failed to repair data csum of ino 15149 off 364544 (ran out of all copies)
[10260.113364] 
[10260.113365] BTRFS error (device md2): failed to repair data csum of ino 15149 off 368640 (ran out of all copies)
[10260.113365] 
[10260.113366] BTRFS error (device md2): failed to repair data csum of ino 15149 off 372736 (ran out of all copies)
[10260.113366] 
[10260.113367] BTRFS error (device md2): failed to repair data csum of ino 15149 off 376832 (ran out of all copies)
[10260.113367] 
[10260.113368] BTRFS error (device md2): failed to repair data csum of ino 15149 off 380928 (ran out of all copies)
[10260.113368] 
[10260.113369] BTRFS error (device md2): failed to repair data csum of ino 15149 off 385024 (ran out of all copies)
[10260.113369] 
[10260.113370] BTRFS error (device md2): failed to repair data csum of ino 15149 off 389120 (ran out of all copies)
[10260.113370] 
[10260.113371] BTRFS error (device md2): failed to repair data csum of ino 15149 off 393216 (ran out of all copies)
[10260.113371] 
[10260.113372] BTRFS error (device md2): failed to repair data csum of ino 15149 off 397312 (ran out of all copies)
[10260.113372] 
[10260.113373] BTRFS error (device md2): failed to repair data csum of ino 15149 off 401408 (ran out of all copies)
[10260.113373] 
[10260.113374] BTRFS error (device md2): failed to repair data csum of ino 15149 off 405504 (ran out of all copies)
[10260.113374] 
[10260.113375] BTRFS error (device md2): failed to repair data csum of ino 15149 off 409600 (ran out of all copies)
[10260.113375] 
[10260.113376] BTRFS error (device md2): failed to repair data csum of ino 15149 off 413696 (ran out of all copies)
[10260.113376] 
[10260.113378] BTRFS error (device md2): failed to repair data csum of ino 15149 off 417792 (ran out of all copies)
[10260.113378] 
[10260.113379] BTRFS error (device md2): failed to repair data csum of ino 15149 off 421888 (ran out of all copies)
[10260.113379] 
[10260.113380] BTRFS error (device md2): failed to repair data csum of ino 15149 off 425984 (ran out of all copies)
[10260.113380] 
[10260.113381] BTRFS error (device md2): failed to repair data csum of ino 15149 off 430080 (ran out of all copies)
[10260.113381] 
[10260.113382] BTRFS error (device md2): failed to repair data csum of ino 15149 off 434176 (ran out of all copies)
[10260.113382] 
[10260.113383] BTRFS error (device md2): failed to repair data csum of ino 15149 off 438272 (ran out of all copies)
[10260.113383] 
[10260.113384] BTRFS error (device md2): failed to repair data csum of ino 15149 off 442368 (ran out of all copies)
[10260.113384] 
[10260.113385] BTRFS error (device md2): failed to repair data csum of ino 15149 off 446464 (ran out of all copies)
[10260.113385] 
[10260.113386] BTRFS error (device md2): failed to repair data csum of ino 15149 off 450560 (ran out of all copies)
[10260.113386] 
[10260.113388] BTRFS error (device md2): failed to repair data csum of ino 15149 off 454656 (ran out of all copies)
[10260.113388] 
[10260.113389] BTRFS error (device md2): failed to repair data csum of ino 15149 off 458752 (ran out of all copies)
[10260.113389] 
[10260.113390] BTRFS error (device md2): failed to repair data csum of ino 15149 off 462848 (ran out of all copies)
[10260.113390] 
[10260.113391] BTRFS error (device md2): failed to repair data csum of ino 15149 off 466944 (ran out of all copies)
[10260.113391] 
[10260.113392] BTRFS error (device md2): failed to repair data csum of ino 15149 off 471040 (ran out of all copies)
[10260.113392] 
[10260.113393] BTRFS error (device md2): failed to repair data csum of ino 15149 off 475136 (ran out of all copies)
[10260.113393] 
[10260.113394] BTRFS error (device md2): failed to repair data csum of ino 15149 off 479232 (ran out of all copies)
[10260.113394] 
[10260.113395] BTRFS error (device md2): failed to repair data csum of ino 15149 off 483328 (ran out of all copies)
[10260.113395] 
[10260.113396] BTRFS error (device md2): failed to repair data csum of ino 15149 off 487424 (ran out of all copies)
[10260.113396] 
[10260.113398] BTRFS error (device md2): failed to repair data csum of ino 15149 off 131072 (ran out of all copies)
[10260.113398] 
[10260.125896] EXT4-fs error (device md0): ext4_find_entry:1614: inode #7151: comm SYNO.Core.Deskt: reading directory lblock 0
[10260.126926] Buffer I/O error on dev md0, logical block 0, lost sync page write
[10260.232402] BTRFS error (device md2): failed to repair data csum of ino 36135 off 4096 (ran out of all copies)
[10260.232402] 
[10260.233653] BTRFS error (device md2): failed to repair data csum of ino 36135 off 8192 (ran out of all copies)
[10260.233653] 
[10260.233775] BTRFS error (device md2): failed to repair data csum of ino 36135 off 12288 (ran out of all copies)
[10260.233775] 
[10260.233778] BTRFS error (device md2): failed to repair data csum of ino 36135 off 16384 (ran out of all copies)
[10260.233778] 
[10260.233779] BTRFS error (device md2): failed to repair data csum of ino 36135 off 20480 (ran out of all copies)
[10260.233779] 
[10260.233780] BTRFS error (device md2): failed to repair data csum of ino 36135 off 24576 (ran out of all copies)
[10260.233780] 
[10260.233796] BTRFS error (device md2): failed to repair data csum of ino 36135 off 32768 (ran out of all copies)
[10260.233796] 
[10260.233812] BTRFS error (device md2): failed to repair data csum of ino 36135 off 28672 (ran out of all copies)
[10260.233812] 
[10260.233819] BTRFS error (device md2): failed to repair data csum of ino 36135 off 36864 (ran out of all copies)
[10260.233819] 
[10260.233820] BTRFS error (device md2): failed to repair data csum of ino 36135 off 40960 (ran out of all copies)
[10260.233820] 
[10260.233829] BTRFS error (device md2): failed to repair data csum of ino 36135 off 49152 (ran out of all copies)
[10260.233829] 
[10260.233830] BTRFS error (device md2): failed to repair data csum of ino 36135 off 45056 (ran out of all copies)
[10260.233830] 
[10260.233834] BTRFS error (device md2): failed to repair data csum of ino 36135 off 53248 (ran out of all copies)
[10260.233834] 
[10260.233835] BTRFS error (device md2): failed to repair data csum of ino 36135 off 61440 (ran out of all copies)
[10260.233835] 
[10260.233836] BTRFS error (device md2): failed to repair data csum of ino 36135 off 65536 (ran out of all copies)
[10260.233836] 
[10260.233841] BTRFS error (device md2): failed to repair data csum of ino 36135 off 57344 (ran out of all copies)
[10260.233841] 
[10260.233858] BTRFS error (device md2): failed to repair data csum of ino 36135 off 69632 (ran out of all copies)
[10260.233858] 
[10260.233966] BTRFS error (device md2): failed to repair data csum of ino 36135 off 73728 (ran out of all copies)
[10260.233966] 
[10260.233969] BTRFS error (device md2): failed to repair data csum of ino 36135 off 77824 (ran out of all copies)
[10260.233969] 
[10260.233976] BTRFS error (device md2): failed to repair data csum of ino 36135 off 0 (ran out of all copies)
[10260.233976] 
[10260.233978] BTRFS error (device md2): failed to repair data csum of ino 36135 off 90112 (ran out of all copies)
[10260.233978] 
[10260.233979] BTRFS error (device md2): failed to repair data csum of ino 36135 off 81920 (ran out of all copies)
[10260.233979] 
[10260.233988] BTRFS error (device md2): failed to repair data csum of ino 36135 off 94208 (ran out of all copies)
[10260.233988] 
[10260.234001] BTRFS error (device md2): failed to repair data csum of ino 36135 off 86016 (ran out of all copies)
[10260.234001] 
[10260.234014] BTRFS error (device md2): failed to repair data csum of ino 36135 off 102400 (ran out of all copies)
[10260.234014] 
[10260.234028] BTRFS error (device md2): failed to repair data csum of ino 36135 off 106496 (ran out of all copies)
[10260.234028] 
[10260.234041] BTRFS error (device md2): failed to repair data csum of ino 36135 off 110592 (ran out of all copies)
[10260.234041] 
[10260.234054] BTRFS error (device md2): failed to repair data csum of ino 36135 off 114688 (ran out of all copies)
[10260.234054] 
[10260.234067] BTRFS error (device md2): failed to repair data csum of ino 36135 off 118784 (ran out of all copies)
[10260.234067] 
[10260.234080] BTRFS error (device md2): failed to repair data csum of ino 36135 off 122880 (ran out of all copies)
[10260.234080] 
[10260.234093] BTRFS error (device md2): failed to repair data csum of ino 36135 off 126976 (ran out of all copies)
[10260.234093] 
[10260.235079] BTRFS error (device md2): failed to repair data csum of ino 36135 off 98304 (ran out of all copies)
[10260.235079] 
[10260.276973] BTRFS error (device md2): failed to repair data csum of ino 36135 off 364544 (ran out of all copies)
[10260.276973] 
[10260.278239] BTRFS error (device md2): failed to repair data csum of ino 36135 off 352256 (ran out of all copies)
[10260.278239] 
[10260.279428] BTRFS error (device md2): failed to repair data csum of ino 36135 off 356352 (ran out of all copies)
[10260.279428] 
[10260.280603] BTRFS error (device md2): failed to repair data csum of ino 36135 off 348160 (ran out of all copies)
[10260.280603] 
[10260.281321] BTRFS error (device md2): failed to repair data csum of ino 36135 off 331776 (ran out of all copies)
[10260.281321] 
[10260.281324] BTRFS error (device md2): failed to repair data csum of ino 36135 off 319488 (ran out of all copies)
[10260.281324] 
[10260.281325] BTRFS error (device md2): failed to repair data csum of ino 36135 off 323584 (ran out of all copies)
[10260.281325] 
[10260.281335] BTRFS error (device md2): failed to repair data csum of ino 36135 off 147456 (ran out of all copies)
[10260.281335] 
[10260.281336] BTRFS error (device md2): failed to repair data csum of ino 36135 off 151552 (ran out of all copies)
[10260.281336] 
[10260.281338] BTRFS error (device md2): failed to repair data csum of ino 36135 off 155648 (ran out of all copies)
[10260.281338] 
[10260.281339] BTRFS error (device md2): failed to repair data csum of ino 36135 off 159744 (ran out of all copies)
[10260.281339] 
[10260.281340] BTRFS error (device md2): failed to repair data csum of ino 36135 off 163840 (ran out of all copies)
[10260.281340] 
[10260.281341] BTRFS error (device md2): failed to repair data csum of ino 36135 off 167936 (ran out of all copies)
[10260.281341] 
[10260.281343] BTRFS error (device md2): failed to repair data csum of ino 36135 off 172032 (ran out of all copies)
[10260.281343] 
[10260.281344] BTRFS error (device md2): failed to repair data csum of ino 36135 off 176128 (ran out of all copies)
[10260.281344] 
[10260.281345] BTRFS error (device md2): failed to repair data csum of ino 36135 off 180224 (ran out of all copies)
[10260.281345] 
[10260.281346] BTRFS error (device md2): failed to repair data csum of ino 36135 off 184320 (ran out of all copies)
[10260.281346] 
[10260.281347] BTRFS error (device md2): failed to repair data csum of ino 36135 off 188416 (ran out of all copies)
[10260.281347] 
[10260.281349] BTRFS error (device md2): failed to repair data csum of ino 36135 off 192512 (ran out of all copies)
[10260.281349] 
[10260.281350] BTRFS error (device md2): failed to repair data csum of ino 36135 off 196608 (ran out of all copies)
[10260.281350] 
[10260.281351] BTRFS error (device md2): failed to repair data csum of ino 36135 off 200704 (ran out of all copies)
[10260.281351] 
[10260.281352] BTRFS error (device md2): failed to repair data csum of ino 36135 off 204800 (ran out of all copies)
[10260.281352] 
[10260.281354] BTRFS error (device md2): failed to repair data csum of ino 36135 off 208896 (ran out of all copies)
[10260.281354] 
[10260.281354] BTRFS error (device md2): failed to repair data csum of ino 36135 off 212992 (ran out of all copies)
[10260.281354] 
[10260.281356] BTRFS error (device md2): failed to repair data csum of ino 36135 off 217088 (ran out of all copies)
[10260.281356] 
[10260.281357] BTRFS error (device md2): failed to repair data csum of ino 36135 off 221184 (ran out of all copies)
[10260.281357] 
[10260.281359] BTRFS error (device md2): failed to repair data csum of ino 36135 off 225280 (ran out of all copies)
[10260.281359] 
[10260.281377] BTRFS error (device md2): failed to repair data csum of ino 36135 off 233472 (ran out of all copies)
[10260.281377] 
[10260.281379] BTRFS error (device md2): failed to repair data csum of ino 36135 off 237568 (ran out of all copies)
[10260.281379] 
[10260.281380] BTRFS error (device md2): failed to repair data csum of ino 36135 off 241664 (ran out of all copies)
[10260.281380] 
[10260.281381] BTRFS error (device md2): failed to repair data csum of ino 36135 off 245760 (ran out of all copies)
[10260.281381] 
[10260.281382] BTRFS error (device md2): failed to repair data csum of ino 36135 off 249856 (ran out of all copies)
[10260.281382] 
[10260.281386] BTRFS error (device md2): failed to repair data csum of ino 36135 off 258048 (ran out of all copies)
[10260.281386] 
[10260.281387] BTRFS error (device md2): failed to repair data csum of ino 36135 off 262144 (ran out of all copies)
[10260.281387] 
[10260.281389] BTRFS error (device md2): failed to repair data csum of ino 36135 off 266240 (ran out of all copies)
[10260.281389] 
[10260.281392] BTRFS error (device md2): failed to repair data csum of ino 36135 off 274432 (ran out of all copies)
[10260.281392] 
[10260.281397] BTRFS error (device md2): failed to repair data csum of ino 36135 off 286720 (ran out of all copies)
[10260.281397] 
[10260.281399] BTRFS error (device md2): failed to repair data csum of ino 36135 off 290816 (ran out of all copies)
[10260.281399] 
[10260.281400] BTRFS error (device md2): failed to repair data csum of ino 36135 off 294912 (ran out of all copies)
[10260.281400] 
[10260.281401] BTRFS error (device md2): failed to repair data csum of ino 36135 off 299008 (ran out of all copies)
[10260.281401] 
[10260.281402] BTRFS error (device md2): failed to repair data csum of ino 36135 off 303104 (ran out of all copies)
[10260.281402] 
[10260.281412] BTRFS error (device md2): failed to repair data csum of ino 36135 off 307200 (ran out of all copies)
[10260.281412] 
[10260.281424] BTRFS error (device md2): failed to repair data csum of ino 36135 off 315392 (ran out of all copies)
[10260.281424] 
[10260.281425] BTRFS error (device md2): failed to repair data csum of ino 36135 off 327680 (ran out of all copies)
[10260.281425] 
[10260.281426] BTRFS error (device md2): failed to repair data csum of ino 36135 off 335872 (ran out of all copies)
[10260.281426] 
[10260.281428] BTRFS error (device md2): failed to repair data csum of ino 36135 off 339968 (ran out of all copies)
[10260.281428] 
[10260.281429] BTRFS error (device md2): failed to repair data csum of ino 36135 off 344064 (ran out of all copies)
[10260.281429] 
[10260.281434] BTRFS error (device md2): failed to repair data csum of ino 36135 off 360448 (ran out of all copies)
[10260.281434] 
[10260.281438] BTRFS error (device md2): failed to repair data csum of ino 36135 off 135168 (ran out of all copies)
[10260.281438] 
[10260.281439] BTRFS error (device md2): failed to repair data csum of ino 36135 off 368640 (ran out of all copies)
[10260.281439] 
[10260.281441] BTRFS error (device md2): failed to repair data csum of ino 36135 off 372736 (ran out of all copies)
[10260.281441] 
[10260.281442] BTRFS error (device md2): failed to repair data csum of ino 36135 off 376832 (ran out of all copies)
[10260.281442] 
[10260.281447] BTRFS error (device md2): failed to repair data csum of ino 36135 off 143360 (ran out of all copies)
[10260.281447] 
[10260.281448] BTRFS error (device md2): failed to repair data csum of ino 36135 off 229376 (ran out of all copies)
[10260.281448] 
[10260.281449] BTRFS error (device md2): failed to repair data csum of ino 36135 off 253952 (ran out of all copies)
[10260.281449] 
[10260.281451] BTRFS error (device md2): failed to repair data csum of ino 36135 off 270336 (ran out of all copies)
[10260.281451] 
[10260.281452] BTRFS error (device md2): failed to repair data csum of ino 36135 off 278528 (ran out of all copies)
[10260.281452] 
[10260.281462] BTRFS error (device md2): failed to repair data csum of ino 36135 off 282624 (ran out of all copies)
[10260.281462] 
[10260.281475] BTRFS error (device md2): failed to repair data csum of ino 36135 off 311296 (ran out of all copies)
[10260.281475] 
[10260.281531] BTRFS error (device md2): failed to repair data csum of ino 36135 off 131072 (ran out of all copies)
[10260.281531] 
[10260.348805] BTRFS error (device md2): failed to repair data csum of ino 36135 off 139264 (ran out of all copies)
[10260.348805] 
[10260.696920] BTRFS error (device md2): failed to repair data csum of ino 27165 off 0 (ran out of all copies)
[10260.696920] 
[10260.698120] BTRFS error (device md2): failed to repair data csum of ino 27165 off 8192 (ran out of all copies)
[10260.698120] 
[10260.699357] BTRFS error (device md2): failed to repair data csum of ino 27165 off 4096 (ran out of all copies)
[10260.699357] 
[10260.699360] BTRFS error (device md2): failed to repair data csum of ino 27165 off 12288 (ran out of all copies)
[10260.699360] 
[10260.705479] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41178: comm synopkg: reading directory lblock 0
[10260.706684] EXT4-fs (md0): previous I/O error to superblock detected
[10260.707378] Buffer I/O error on dev md0, logical block 0, lost sync page write
[10260.708285] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41178: comm synopkg: reading directory lblock 0
[10260.709493] EXT4-fs (md0): previous I/O error to superblock detected
[10260.710206] Buffer I/O error on dev md0, logical block 0, lost sync page write
[10260.711011] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41178: comm synopkg: reading directory lblock 0
[10260.712173] EXT4-fs (md0): previous I/O error to superblock detected
[10260.712968] Buffer I/O error on dev md0, logical block 0, lost sync page write
[10260.713776] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41178: comm synopkg: reading directory lblock 0
[10260.714941] EXT4-fs (md0): previous I/O error to superblock detected
[10260.715799] Buffer I/O error on dev md0, logical block 0, lost sync page write
[10260.718017] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41149: comm synopkg: reading directory lblock 0
[10260.719205] EXT4-fs (md0): previous I/O error to superblock detected
[10260.720032] Buffer I/O error on dev md0, logical block 0, lost sync page write
[10260.720852] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41149: comm synopkg: reading directory lblock 0
[10260.722048] EXT4-fs (md0): previous I/O error to superblock detected
[10260.722865] Buffer I/O error on dev md0, logical block 0, lost sync page write
[10260.723690] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41149: comm synopkg: reading directory lblock 0
[10260.724893] EXT4-fs (md0): previous I/O error to superblock detected
[10260.725699] Buffer I/O error on dev md0, logical block 0, lost sync page write
[10260.726514] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41149: comm synopkg: reading directory lblock 0
[10260.727605] EXT4-fs (md0): previous I/O error to superblock detected
[10260.728326] Buffer I/O error on dev md0, logical block 0, lost sync page write
[10260.731103] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41141: comm synopkg: reading directory lblock 0
[10260.732204] EXT4-fs (md0): previous I/O error to superblock detected
[10260.732924] Buffer I/O error on dev md0, logical block 0, lost sync page write
[10260.733730] EXT4-fs (md0): previous I/O error to superblock detected
[10264.816286] blk_update_request: I/O error, dev sda, sector in range 897024 + 0-2(12)
[10264.817714] blk_update_request: I/O error, dev sda, sector in range 4980736 + 0-2(12)
[10264.820288] blk_update_request: I/O error, dev sda, sector in range 901120 + 0-2(12)
[10264.824346] blk_update_request: I/O error, dev sda, sector in range 897024 + 0-2(12)
[10264.832623] blk_update_request: I/O error, dev sda, sector in range 909312 + 0-2(12)
[10264.833553] blk_update_request: I/O error, dev sda, sector in range 2125824 + 0-2(12)
[10264.834646] Buffer I/O error on dev md0, logical block in range 262144 + 0-2(12) , lost sync page write
[10264.835793] Aborting journal on device md0-8.
[10264.836372] blk_update_request: I/O error, dev sda, sector in range 2097152 + 0-2(12)
[10264.837343] Buffer I/O error on dev md0, logical block in range 262144 + 0-2(12) , lost sync page write
[10264.838486] blk_update_request: I/O error, dev sda, sector in range 0 + 0-2(12)
[10264.839385] Buffer I/O error on dev md0, logical block in range 0 + 0-2(12) , lost sync page write
[10264.839387] JBD2: Error -5 detected when updating journal superblock for md0-8.
[10264.841774] EXT4-fs error (device md0): ext4_journal_check_start:56: Detected aborted journal
[10264.842892] blk_update_request: I/O error, dev sda, sector 2048
[10265.044738] blk_update_request: I/O error, dev sda, sector in range 4980736 + 0-2(12)
[10265.316836] blk_update_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
[10265.317809] BTRFS error (device md2): bdev /dev/md2 errs: wr 2, rd 3154, flush 0, corrupt 0, gen 0
[10265.331020] blk_update_request: I/O error, dev sda, sector 13710688
[10265.331719] BTRFS error (device md2): bdev /dev/md2 errs: wr 3, rd 3154, flush 0, corrupt 0, gen 0
[10265.333550] blk_update_request: I/O error, dev sda, sector 13710688
[10265.334225] BTRFS error (device md2): bdev /dev/md2 errs: wr 4, rd 3154, flush 0, corrupt 0, gen 0
[10265.336264] blk_update_request: I/O error, dev sda, sector 13710688
[10265.336927] BTRFS error (device md2): bdev /dev/md2 errs: wr 5, rd 3154, flush 0, corrupt 0, gen 0
[10265.725824] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3154, flush 0, corrupt 0, gen 0
[10265.911190] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3155, flush 0, corrupt 0, gen 0
[10265.912153] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10265.913177] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10265.914171] blk_update_request: I/O error, dev sda, sector 14187288
[10265.914175] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3156, flush 0, corrupt 0, gen 0
[10265.915760] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10265.916778] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3157, flush 0, corrupt 0, gen 0
[10265.917699] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10265.918730] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10265.919744] blk_update_request: I/O error, dev sda, sector 14187296
[10265.919746] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3158, flush 0, corrupt 0, gen 0
[10265.919749] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3159, flush 0, corrupt 0, gen 0
[10265.922210] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10265.923206] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10265.924222] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10265.925225] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10265.926239] blk_update_request: I/O error, dev sda, sector 14187296
[10265.926892] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
[10265.928329] BTRFS error (device md2): failed to repair data csum of ino 35375 off 4096 (ran out of all copies)
[10265.928329] 
[10265.929571] BTRFS error (device md2): failed to repair data csum of ino 35375 off 8192 (ran out of all copies)
[10265.929571] 
[10265.930880] BTRFS error (device md2): failed to repair data csum of ino 35375 off 12288 (ran out of all copies)
[10265.930880] 
[10265.932071] BTRFS error (device md2): failed to repair data csum of ino 35375 off 0 (ran out of all copies)
[10265.932071] 
[10285.122533] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10285.123476] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10285.124393] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10285.125310] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10285.126282] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10285.126406] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
[10285.126420] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
[10285.136863] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
[10285.137799] blk_update_request: I/O error, dev sda, sector in range 720896 + 0-2(12)
[10285.138597] blk_update_request: I/O error, dev sda, sector in range 720896 + 0-2(12)
[10293.649817] blk_update_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
[10293.650971] blk_update_request: I/O error, dev sda, sector in range 9646080 + 0-2(12)
[10293.651927] blk_update_request: I/O error, dev sda, sector in range 9646080 + 0-2(12)
[10293.652882] blk_update_request: I/O error, dev sda, sector in range 11743232 + 0-2(12)
[10293.652997] BTRFS error (device md2): bdev /dev/md2 errs: wr 7, rd 3203, flush 0, corrupt 0, gen 0
[10293.652999] BTRFS error (device md2): bdev /dev/md2 errs: wr 8, rd 3203, flush 0, corrupt 0, gen 0
[10293.653000] BTRFS error (device md2): bdev /dev/md2 errs: wr 9, rd 3203, flush 0, corrupt 0, gen 0
[10293.653001] BTRFS error (device md2): bdev /dev/md2 errs: wr 10, rd 3203, flush 0, corrupt 0, gen 0
[10293.653002] BTRFS error (device md2): bdev /dev/md2 errs: wr 11, rd 3203, flush 0, corrupt 0, gen 0
[10293.653003] BTRFS error (device md2): bdev /dev/md2 errs: wr 12, rd 3203, flush 0, corrupt 0, gen 0
[10293.653004] BTRFS error (device md2): bdev /dev/md2 errs: wr 13, rd 3203, flush 0, corrupt 0, gen 0
[10293.653004] BTRFS error (device md2): bdev /dev/md2 errs: wr 14, rd 3203, flush 0, corrupt 0, gen 0
[10293.661248] BTRFS error (device md2): bdev /dev/md2 errs: wr 15, rd 3203, flush 0, corrupt 0, gen 0
[10293.661273] blk_update_request: I/O error, dev sda, sector in range 11743232 + 0-2(12)
[10293.662955] BTRFS error (device md2): bdev /dev/md2 errs: wr 16, rd 3203, flush 0, corrupt 0, gen 0
[10293.663889] BTRFS: error (device md2) in btrfs_commit_transaction:2347: errno=-5 IO failure (Error while writing out transaction)
[10293.665153] BTRFS: error (device md2) in cleanup_transaction:1960: errno=-5 IO failure
[10293.864269] blk_update_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
[10294.863533] blk_update_request: I/O error, dev sda, sector in range 4980736 + 0-2(12)
[10294.864685] blk_update_request: I/O error, dev sda, sector in range 106496 + 0-2(12)
[10294.865634] Buffer I/O error on device md0, logical block in range 12288 + 0-2(12)
[10295.066598] blk_update_request: I/O error, dev sda, sector in range 4980736 + 0-2(12)
[10335.337416] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10335.338386] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10335.339320] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10335.340249] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10335.341176] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10335.342238] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
[10335.343141] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
[10335.343353] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
[10335.343371] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
[10335.343375] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
[10349.636854] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10349.637657] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10349.638444] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10349.639231] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10349.640014] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
[10349.644211] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
[10349.645067] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
[10349.647320] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
[10349.648150] blk_update_request: I/O error, dev sda, sector in range 1216512 + 0-2(12)
[10349.648154] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)

Edited by NeoID

@flyride Thanks for the command. Yes, I heard I need an Intel NIC, and the HP NC360T was one of the cards recommended to me. My CPUs are not Haswell; they are all this model: https://ark.intel.com/products/71074/Intel-Celeron-Processor-G1610T-2M-Cache-2-30-GHz-

 

So can I still upgrade to 6.2.1 on DS3615xs v1.03b once I sort out the NIC issue?

 

This is what I get:

 

0000:00:00.0 Class 0600: Device 8086:0158 (rev 09)
0000:00:01.0 Class 0604: Device 8086:0151 (rev 09)
        Kernel driver in use: pcieport
0000:00:06.0 Class 0604: Device 8086:015d (rev 09)
        Kernel driver in use: pcieport
0000:00:1a.0 Class 0c03: Device 8086:1c2d (rev 05)
        Subsystem: Device 103c:330d
        Kernel driver in use: ehci-pci
0000:00:1c.0 Class 0604: Device 8086:1c10 (rev b5)
        Kernel driver in use: pcieport
0000:00:1c.4 Class 0604: Device 8086:1c18 (rev b5)
        Kernel driver in use: pcieport
0000:00:1c.6 Class 0604: Device 8086:1c1c (rev b5)
        Kernel driver in use: pcieport
0000:00:1c.7 Class 0604: Device 8086:1c1e (rev b5)
        Kernel driver in use: pcieport
0000:00:1d.0 Class 0c03: Device 8086:1c26 (rev 05)
        Subsystem: Device 103c:330d
        Kernel driver in use: ehci-pci
0000:00:1e.0 Class 0604: Device 8086:244e (rev a5)
0000:00:1f.0 Class 0601: Device 8086:1c54 (rev 05)
        Kernel driver in use: lpc_ich
0000:00:1f.2 Class 0106: Device 8086:1c02 (rev 05)
        Subsystem: Device 103c:330d
        Kernel driver in use: ahci
0000:01:00.0 Class 0880: Device 103c:3306 (rev 05)
        Subsystem: Device 103c:3381
0000:01:00.1 Class 0300: Device 102b:0533
        Subsystem: Device 103c:3381
0000:01:00.2 Class 0880: Device 103c:3307 (rev 05)
        Subsystem: Device 103c:3381
0000:01:00.4 Class 0c03: Device 103c:3300 (rev 02)
        Subsystem: Device 103c:3381
        Kernel driver in use: uhci_hcd
0000:03:00.0 Class 0200: Device 14e4:165f
        Subsystem: Device 103c:2133
        Kernel driver in use: tg3
0000:03:00.1 Class 0200: Device 14e4:165f
        Subsystem: Device 103c:2133
        Kernel driver in use: tg3
0000:04:00.0 Class 0c03: Device 1912:0014 (rev 03)
        Subsystem: Device 103c:1996
        Kernel driver in use: xhci_hcd
0001:07:00.0 Class 0000: Device 1b4b:9235 (rev ff)
0001:08:00.0 Class 0000: Device 1b4b:9235 (rev ff)
0001:09:00.0 Class 0000: Device 1b4b:9235 (rev ff)
0001:0a:00.0 Class 0000: Device 1b4b:9235 (rev ff)


The tg3 driver indicates the active NICs are Broadcom, which I think is the native network adapter on your motherboard. So you will need an Intel Ethernet card before going to 6.2.1. When you install it, configure it with your IPs and then disable your motherboard NICs; you should see the "e1000e" driver servicing the 0200 class devices.
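To see at a glance which driver each network controller is using, you can filter the `lspci -k` output for class 0200 devices. A minimal sketch, run here against a saved copy of the dump above (on a live box, pipe `lspci -k` in instead):

```shell
# Show the kernel driver bound to each network controller (PCI class 0200).
# With the onboard Broadcom NICs this prints tg3; after installing and
# enabling an Intel card it should print e1000e instead.
lspci_dump='0000:03:00.0 Class 0200: Device 14e4:165f
        Subsystem: Device 103c:2133
        Kernel driver in use: tg3'
printf '%s\n' "$lspci_dump" | grep -A2 "Class 0200" | grep "driver in use"
```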


@flyride Thanks, I will go find one. Will any card with an Intel chipset do? Also, I need to stay on v1.03b DS3615xs due to the CPU limitations.

What card do you recommend?

Edited by Vodka2014


Not just any Intel Ethernet card; it must be one supported by the e1000e driver. You can see which chipsets are supported here:

https://www.intel.com.au/content/www/au/en/support/articles/000005480/network-and-i-o/ethernet-products.html

“The Linux* e1000e driver supports PCI Express* Gigabit Network Connections except the 82575, 82576, 82580, I350, I354, and I210/I211.”

@chipped Thanks for this.

Has anyone tried to add the drivers for the HP Ethernet 1Gb 2-port 332i adapter, so those of us with a Gen8 can get 6.2.1 installed?

Edited by Vodka2014


extra.lzma for loader 1.04b / DS918+. It seems that HDD hibernation will work and speeds in VMM have become very fast. Try it and report back.

 

P.S. To replace extra.lzma on a working DSM install, you need to mount synoboot2 as described here, replace extra.lzma, and reboot XPEnology.

Thanks so much to @TeleDDim for the idea to modify extra.lzma.
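As a hedged sketch of that procedure (the /dev/synoboot2 device path and file locations are assumptions; verify them on your own box before running anything as root):

```shell
# Swap in a new extra.lzma on the loader's second partition, keeping a
# backup of the original. Run as root on the NAS, then reboot.
replace_extra() {
    dev=$1       # loader partition, e.g. /dev/synoboot2 (assumed)
    newfile=$2   # path to the replacement extra.lzma
    mnt=$(mktemp -d)
    mount "$dev" "$mnt" || { rmdir "$mnt"; return 1; }
    cp "$mnt/extra.lzma" "$mnt/extra.lzma.bak"   # keep a backup
    cp "$newfile" "$mnt/extra.lzma"
    umount "$mnt"
    rmdir "$mnt"
}
# usage on the NAS: replace_extra /dev/synoboot2 /root/extra.lzma && reboot
```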

Edited by Olegin

Can somebody test the extra.lzma from my previous post on an ASRock-4205-itx or 3455-itx? It seems that hw transcoding will work.

Edited by Olegin

On 8/6/2018 at 7:58 PM, Saoclyph said:

@extenue, @Lennartt, @wenlez, @pateretou, @sashxp, @enzo, @dodo-dk

 

- Outcome of the update: SUCCESSFUL 

- DSM version prior update: 6.1.7 Update 2 with Jun's loader v1.02b

- Loader version and model: Jun's Loader v1.03b - DS3617

- Using custom extra.lzma: NO

- Installation type: VM Proxmox 5.2.6 - Xeon D-1537 (need to pass kvm64 cpu type), passthrough LSI SAS2116 with 5 x WD RED 3TB Raid5, 2 x WD RED 4TB Raid1 & 2 x Intel DC S3700 200GB Raid1

- Additional comments : SeaBIOS, loader on sata and ESXi boot line. Update to U2 ok. Had to replace/delete remnant files from older loaders in /etc/rc.*, /.xpenoboot (see last paragraph below).

 

Using the USB method, I got a "mount failed" error like others on Proxmox, but it was successful when using a SATA image disk:

  • rename the loader with a .raw extension instead of .img and place it in the VM images folder /var/lib/vz/images/100/ (the Proxmox parser does not understand .img)
  • add a sata0 disk in the VM .conf (/etc/pve/qemu-server/100.conf):

sata0: local:100/synoboot_ds3617_v1.03b.raw,size=52429K
  • choose sata0 in Option/Boot Order in the GUI
  • at start in the GUI console, choose the ESXi boot line

 

My VM ID is 100; replace it with yours.
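The rename and config steps above can be sketched as follows (the VMID and loader filename are assumptions; the guards keep it from touching anything if your paths differ):

```shell
# Rename the loader so the Proxmox parser accepts it, then attach it as
# sata0 in the VM config. Run on the Proxmox host as root.
VMID=100
LOADER=synoboot_ds3617_v1.03b
IMGDIR="/var/lib/vz/images/$VMID"
CONF="/etc/pve/qemu-server/$VMID.conf"
[ -f "$IMGDIR/$LOADER.img" ] && mv "$IMGDIR/$LOADER.img" "$IMGDIR/$LOADER.raw"
[ -f "$CONF" ] && echo "sata0: local:$VMID/$LOADER.raw,size=52429K" >> "$CONF"
true  # the guards above are no-ops when the paths do not exist
```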

I also had to choose the kvm64 cpu type.

 

  Bonus: easy way to edit grub.cfg

It is easy to change the loader's grub.cfg by mounting the loader image:



cd /var/lib/vz/images/100/
mkdir synoboot_mount
mount -o loop,rw,offset=$((2048*512)) synoboot_ds3617_v1.03b.raw synoboot_mount
vi synoboot_mount/grub/grub.cfg
# unmount it after editing
umount /var/lib/vz/images/100/synoboot_mount

 

 

A serial port is also a good thing to have for debugging. You can access the serial console with the following line (type Ctrl-O to exit):


socat UNIX-CONNECT:/var/run/qemu-server/100.serial0 STDIO,raw,echo=0,escape=0x0f

 

The serial port was essential in my case.

After I first updated from 6.1 to 6.2, the VM was starting fine (docker and ssh were OK) but I was not able to log in to DSM, and after ~5 mins from boot everything was shutting down and I was losing network (like @JBark). I thought it had completely shut down, but using the serial port I saw that it had just killed everything Synology-related, even the network config.

With a fresh VM it was working well, so I tried to find differences between the DSM filesystems.

I found that a lot of the /etc/rc.* files were referencing old binaries that no longer exist, so I replaced all the /etc/rc.* files with the ones from the fresh installation. When rebooting, it was still shutting down after 5 mins, but I think this was needed in combination with the following removals.

I also saw several /etc/*.sh scripts, and a /.xpenology folder, that were not there in the fresh installation.

After deleting them and cleaning up the /etc/synoinfo.conf file a little (removed the support_syno_hybrid_raid option and some other options; not sure if that had an effect), everything was working great again!
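That rc.* comparison can be sketched as a small helper (the fresh install's mount point is hypothetical; substitute wherever you mounted the clean filesystem):

```shell
# List /etc/rc.* files that differ from a fresh DSM install, so you know
# which ones to replace. Both directory paths are passed as arguments.
list_changed_rc() {
    etcdir=$1    # e.g. /etc on the broken system
    freshdir=$2  # e.g. /mnt/fresh/etc from a clean install (assumed path)
    for f in "$etcdir"/rc.*; do
        name=$(basename "$f")
        cmp -s "$f" "$freshdir/$name" || echo "differs: $name"
    done
}
# usage on the NAS: list_changed_rc /etc /mnt/fresh/etc
```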

 

@jun Thanks for the loader!

 

 

I seem to have killed my 6.2.1 upgrade on my HP Gen8. I can SSH in but can't get into the web interface, and after 5 mins or so the services were shutting down and I was losing network too.

 

Where are the rc.* files in the extracted PAT file? I have extracted the PAT file but I can't seem to find any of the rc files.

 

Thanks

7 hours ago, Olegin said:

Can somebody test the extra.lzma from my previous post on an ASRock-4205-itx or 3455-itx? It seems that hw transcoding will work.

@Olegin, I have checked your extra.lzma file on my ASRock-4205-itx. I don't see the /dev/dri entry, so it doesn't work.

I have only your extra.lzma file in my synoboot2 partition (no extra2.lzma).

Do you have a specific BIOS or BIOS setting?
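For anyone else testing, the check being described amounts to looking for the DRI device nodes (a minimal sketch; card0 and renderD128 are the nodes hw transcoding needs):

```shell
# Look for the DRI device nodes created by the i915 driver; hardware
# transcoding needs them. Prints the nodes, or a message if absent.
if [ -d /dev/dri ]; then
    ls /dev/dri
else
    echo "no /dev/dri: hw transcoding not available"
fi
```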

Edited by John RAZ

1 hour ago, John RAZ said:

@Olegin, I have checked your extra.lzma file on my ASRock-4205-itx. I don't see the /dev/dri entry, so it doesn't work.

I have only your extra.lzma file in my synoboot2 partition (no extra2.lzma).

Do you have a specific BIOS or BIOS setting?

I have another motherboard, so I asked for someone to check on the 4205. It means that this extra.lzma didn't fix the problem with hw transcoding on the 4205. 😡

On my ASRock E3C226D2I, hw transcoding started working after installing BIOS version 3.5.


For some reason I cannot upgrade my MicroServer Gen8 to 6.2.1 using loader 1.03b. I installed an Intel-based HP NC360T dual-port network card and disabled the onboard NICs. I tried manually uploading .PAT files etc., but the unit disappears after migrate/repair actions. No clues anymore.

Could it be my processor, an i3-3240T?

