XPEnology Community

Posts posted by NeoID

  1. 18 minutes ago, ed_co said:

    Again, I came back after a while (I have been busy changing jobs), and again I don't know how to even start. Apart from the first post, which is highly outdated, I don't know whether there has been any progress, whether it is usable, or how to begin. It looks like the original poster ThorGroup hasn't replied in a long time... is it abandoned like Jun's? There are no changelogs, just a huge thread, and I seriously don't have the time to read all 144 pages of it... Don't get me wrong, I appreciate that there are people working on/using it. I am just lost, and a little guidance would be great.

    I am still very happy with my bare-metal Xpenology 918+ on an ASRock H370M-ITX with the last Jun loader. It is stable and works well, but I wouldn't mind updating if the new loader is reliable enough. I'm not interested in a VM, only bare metal.

    Thanks!!

    1) Yes, it's "abandoned" like Jun's, but this time it's open source.
    2) To be honest... if you don't want to read the 144+ pages, you should invest in a real Synology that just works. Here be dragons (and lots of them).
    3) Most/all people use this for testing only, and it's not as stable as Jun's loader.

    4) If you are happy with Jun's loader, I recommend staying on it, especially if you use it for "production". RedPill is not ready for production. If you have hardware to spare, you should look into the amazing TinyCore project to get up and running quickly. Synology will also launch DSM 7.1 sometime in Q1, so the grand question is whether that will be supported/stable at all going forward.

  2. 17 hours ago, Null said:

    RedPill has been gone for many months. Does anyone know what happened to it, or have any information?

    Without RedPill, what should we do about the next major Synology version update?


    RedPill, as in the project, isn't gone, but the core developer(s) @ThorGroup are. No, unfortunately there is no additional information available regarding their disappearance. Thanks to the developers being awesome and making the project open source, anyone can contribute this time. You should not use this project if you depend on future updates or intend to make this a production system / expose it to the Internet. As mentioned before, it's only for testing purposes, and that will probably always be the case.
     

    11 hours ago, Hackaro said:

    Hi all! :-)  @WiteWulf @pocopico

    After many months of following this thread I was able to play with some VMs (in Fusion, actually) and successfully install DSM 7.0.1 (918+) on them.

    Now I would like to move to my bare-metal rig, first with some test HDDs and then with the working ones.

     

    My HW is pretty simple (the one in my signature): a Gigabyte C246-M with an Intel i7-9700T CPU, two onboard NICs (an i219 and a 1000e) plus an added NIC (a Realtek 8125 2.5GbE, working thanks to extra.lzma in Jun's loader 🙏 DSM 6.2.3u3), and two NVMe disks working as a RAID1 cache.
    Now, considering that my mobo belongs to the Intel 300 series, so its SATA controller is 8th/9th gen and I will therefore go for a DS918+ setup, some questions for you:

    • In your experience, do you expect any problems with correct recognition of its 8 SATA ports? I need them all! ;-)
    • Is there any chance of having both NVMe drives working as cache drives as expected? If someone could provide a procedure to successfully patch the necessary library, please post it in reply to this message. Thanks!
    • I truly need my Realtek 8125 to work correctly because I've implemented a 2.5GbE ring that's working very well, but I might consider upgrading to a 10Gb NIC if needed (which one is most compatible at the moment?).
    • If I buy a Synology 10Gb NIC, would I have the chance to use QC? (It is not enabled now because I was not able to change the MAC address in hardware.)

    From a software point of view I don't need that much:

    • I do not use Docker at the moment.
    • TM MUST work because I absolutely need a backup. It's very important to me.
    • I use Plex extensively and I need it, with HW acceleration. The lack of it would be a no-go for me.
    • I use Nextcloud, installed with this procedure. It needs the Apache web server, MariaDB and PHP. The lack of it would be a no-go for me.
    • It would be nice if Synology Photos worked correctly, including facial recognition.
    • Wake-on-LAN must work on all the NICs. The lack of it would be a no-go for me.
    • Sleep should work, and HDD hibernation too, as I don't want my HDDs spinning needlessly. The CPU should run at the correct frequencies; I really hate wasted energy and noise.
    • QuickConnect would be nice if it's possible... 

    Last but not least, please suggest the git branch that is currently the most robust and stable, including the necessary compiled drivers.

     

    Sorry for the length of my post, and thanks and gratitude 🙏 to all who have the patience to read it all and answer all the questions. ☺️

    I wouldn't recommend running Plex on DSM if you are highly dependent on performance. It's way better on a Linux VM with a later kernel and a GPU in passthrough. Some have also reported DSM 7 to be unstable when stressed; I'm not sure if that's only tied to Docker or also to the Plex package. I've never been able to get sleep to work, but honestly it's not recommended either, as it will most likely decrease the life expectancy of your drives depending on how often you wake them up. Spinning up/down is what kills most drives, not spinning 24/7. I also looked into this for a long time, but concluded that spindown is a silly thing unless you are pretty sure the disks can stay spun down for at least a day at a time, preferably several days. QuickConnect and face recognition only work on real devices, as they require a real serial/MAC. There are ways to get around those limitations, but at that point I would highly recommend buying a real Synology, especially if you want something that just works and doesn't waste energy. They are great devices: low power consumption and rock solid. Without the use of Docker the performance is also not that bad, given the dated hardware they use.

    There is no such thing as a robust or stable branch. Everything is done in development branches that might break tomorrow, but a good starting point is the config used by TinyCore: https://github.com/pocopico/tinycore-redpill/blob/main/global_config.json
     

    23 minutes ago, J0K3R said:

    Hello. Like others, I wanted to upgrade my 6.2.3-25426 instance to DSM 7. After going through the steps in the instructions, I got a working DSM 7.0.1-42218U2 install without disks. The only thing left is to migrate to the new instance. However, after passing the SATA controller (built into the motherboard) to the VM with DSM 7, I got errors: "we've detected errors on the hard drives" or uninitialized disks, depending on the moment when I attach the controller with the disks to the VM. I found posts describing the same trouble but no proper solution. Any ideas?

    [screenshots attached: 11.jpg, 12.jpg]


    You need to play around with SataPortMap in order to get the disk order correct.
    I haven't tried TinyCore much myself, but it has an auto-detect feature that might help you. I also recommend reading through the other link, which has very easy-to-understand information on how SataPortMap and DiskIdxMap work.
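
    For reference, a minimal sketch of what those two values can look like. It assumes the loader's virtual boot controller is enumerated first with a single port and the passed-through motherboard controller second with eight ports; the digits below are illustrative, not taken from this setup, so adjust them to your own controller order and port counts. Under RedPill the same two values go into the "extra_cmdline" section of user_config.json rather than grub.

    # Illustrative values only -- controller order and port counts are assumptions.
    # SataPortMap: one digit per SATA controller = number of ports that controller exposes
    #   ("18" -> controller 1 exposes 1 port for the loader disk, controller 2 exposes 8 ports).
    # DiskIdxMap: two hex digits per controller = first disk index for that controller
    #   ("1000" -> controller 1 starts at index 0x10, pushed out of view; controller 2 at 0x00).
    set sata_args='SataPortMap=18 DiskIdxMap=1000'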

     

     

  3. 5 hours ago, ilovepancakes said:

     

    Tested, doesn't work on ESXi. Without VMware Tools or open-vm-tools installed, it greys out the option to shut down or restart the guest VM. I'm pretty sure those two options in ESXi work differently from typical acpid events anyway. It was worth a shot.

    I was pretty sure ESXi also sent ACPI requests on shutdown, since you can do it from its command line... It's really a bummer that you can't configure it as the default way of shutting down a given VM... 😕

  4. 10 hours ago, ilovepancakes said:

    Could open-vm-tools somehow be implemented as a Redpill extension? It seems it can't be installed the conventional way (DSM Package) used in DSM 6.2 because DSM 7 doesn't allow packages to run as root, which seems to be needed for open-vm-tools to work 100%. This would allow data (IP address, etc.) to be passed from DSM to ESXi based installs and would allow ESXi VM Guest shutdown/restart features to work.

    Just use this, it works great and gives you support for shutdown/restart. The IP should be static anyway, so it never changes, and I don't think there are any other benefits to open-vm-tools. The most important thing for me was the shutdown/restart support: https://github.com/jumkey/redpill-load/raw/develop/redpill-acpid/rpext-index.json
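
    For reference, a rough sketch of how an extension index URL like that gets pulled into a loader build. It assumes you build with redpill-load and its ext-manager.sh helper; verify the script name and options in your own checkout, and note that TinyCore has its own mechanism for adding extensions.

    # Assumption: run from the root of a redpill-load checkout that ships ext-manager.sh.
    ./ext-manager.sh add 'https://github.com/jumkey/redpill-load/raw/develop/redpill-acpid/rpext-index.json'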


  5. bromolow_user_config.json:

    {
      "extra_cmdline": {
        "vid": "0x46f4",
        "pid": "0x0001",
        "sn": "*******",
        "mac1": "xxxxxxxxxxxx",
        "mac2": "xxxxxxxxxxxx",
        "mac3": "xxxxxxxxxxxx",
        "netif_num": 3
      },
      "synoinfo": {},
      "ramdisk_copy": {}
    }

     

    bundled-exts.json:

    {
      "thethorgroup.virtio": "https://github.com/jumkey/redpill-load/raw/develop/redpill-virtio/rpext-index.json",
      "thethorgroup.boot-wait": "https://github.com/jumkey/redpill-load/raw/develop/redpill-boot-wait/rpext-index.json"
    }

     

    unRAID xml:

    ...
        <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='e1000e'/>
          <address type='pci' domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </interface>
        <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
        </interface>
        <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
        </interface>
    ...


    lsmod:
     

    virtio_balloon          4629  0
    virtio_scsi             9753  0
    virtio_net             18933  0
    virtio_blk              8528  0
    virtio_pci              6925  0
    virtio_mmio             3984  0
    virtio_ring             8912  6 virtio_blk,virtio_net,virtio_pci,virtio_balloon,virtio_mmio,virtio_scsi
    virtio                  3602  6 virtio_blk,virtio_net,virtio_pci,virtio_balloon,virtio_mmio,virtio_scsi

     

    dmesg:

    https://pastebin.com/BS0Br3nD

     

    I can see that only the first vNIC (the e1000e) works; the other two do not show up in DSM and there is nothing in the dmesg logs as far as I can tell. lspci does list them:

    0000:03:00.0 Class 0200: Device 8086:10d3
    0000:04:00.0 Class 0200: Device 1af4:1041 (rev 01)
    0000:05:00.0 Class 00ff: Device 1af4:1045 (rev 01)
    0000:06:00.0 Class 0200: Device 1af4:1041 (rev 01)
    0000:07:00.0 Class 0780: Device 1af4:1043 (rev 01)
    0001:07:00.0 Class 0106: Device 1b4b:9235 (rev 11)



     

  6. On 8/19/2021 at 12:01 PM, ThorGroup said:

    It's recommended to use q35 for everything (not just here). It's a much newer platform and supports proper PCIe passthrough.

    I see you're recommending changing VirtIO to e1000e - is the VirtIO broken on unRAID?


    I've built RedPill and everything is running as intended, but I can't get any vNIC other than e1000e to work.

    Neither virtio nor virtio-net is recognized. Has anyone else tried this on unRAID and gotten them to work? It would be nice to test speeds higher than 1Gbps.

  7. 3 hours ago, apriliars3 said:

    Is RedPill's development on hold, or has it definitely died? Any news from ThorGroup?

    That discussion should probably be posted in: 

    At this point it's unlikely that ThorGroup will return, most likely due to personal reasons, given that the repository is still active and the silence came rather suddenly after they spent so much time responding to comments and writing long and incredibly intriguing changelogs. I'm sure someone knows more, but nothing has been posted as of yet. Whether it's dead or just on hold all depends on whether someone else takes over the project.

  8. 1 hour ago, Brunox said:

    How does it go on with RP if TTG no longer reports in? Does the project die... or can someone continue it?? I hope the guys are fine and that they'll be back soon.

    Well, yes... either someone takes it over or it dies... It all depends on whether anyone else is skilled enough and willing to put time and effort into this.

     

    I have my doubts that "they" will be back; it makes no sense to disappear in order to focus on a beta version. My bet is that something personal happened. I really hope "they" will manage and continue this project. It's nice to see how much more active the forum has become since then.

    Oh, thanks a ton for the links! I thought I was the only one having these issues. I've encountered this as well after going to v7 U2 (installed on a USB device) and have been pulling my hair out in frustration. I've since upgraded to U3 and haven't encountered the issue again yet, so I hope this one is true...
     

    Quote

    For ESXi 7.0, the issue is resolved in Update 3, available at VMware Downloads.

     

  10. - Outcome of the installation/update:  UNSUCCESSFUL

    - DSM version prior update:  DSM 6.2.3 25426 Update 2

    - Loader version and model: JUN'S LOADER v1.03b - DS3617xs

    - Using custom extra.lzma: NO

    - Installation type: VM - ESXi 6.7

    - Additional comments: Same as all other people who use ESXi and a PCI/LSI card in passthrough mode: after the upgrade the NAS cannot be found on the network anymore. The only way I got access again was to temporarily switch to the DS3615xs loader, reinstall DSM, and then migrate back to DS3617xs. If you use a PCI card in passthrough, DO NOT UPDATE to this version.

    I found out by accident that the DS Photo app (at least on Android) by default overwrites images with the same name when moving them from one album to another.

    So if you move test.jpg from album A to B and album B already has test.jpg (even if it isn't the same photo), it's overwritten WITHOUT notice.

     

    There is supposed to be a setting to change the default behaviour from overwrite to keep, but I can't find it anywhere. Anyone else familiar with this?

  12. 7 hours ago, jensmander said:

    QGROUPS are part of the btrfs file system. Did you enable snapshots or quotas on your volume? 

    Not that I know of. I have not used snapshots or quotas before.
    "Enable shared folder quota" is off for every shared folder I have... The user quotas in the profiles are all set to "No Limit".

  13. I've searched around the web, but can't seem to find any information on this.

    Has anyone seen these errors in "/var/log/messages" before, and do you know what they mean?

    2019-12-11T11:15:53+01:00 x kernel: [13651.708876] BTRFS error (device md2): cannot find qgroup item, qgroupid=15084 !
    2019-12-11T11:15:53+01:00 x kernel: [13651.944144] BTRFS error (device md2): cannot find qgroup item, qgroupid=14903 !
    2019-12-11T11:15:54+01:00 x kernel: [13652.795072] BTRFS error (device md2): cannot find qgroup item, qgroupid=14897 !
    2019-12-11T11:15:54+01:00 x kernel: [13652.796348] BTRFS error (device md2): cannot find qgroup item, qgroupid=14894 !
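
    In case it helps anyone digging into the same thing, here is a hedged way to look at the qgroup state over SSH. Run it as root; "/volume1" is an assumption, so substitute whichever volume sits on md2, and take a backup before changing anything.

    btrfs qgroup show /volume1        # list the qgroups the filesystem currently knows about
    btrfs quota rescan -w /volume1    # rebuild qgroup accounting and wait for it to finish
    btrfs quota disable /volume1      # if quotas were never wanted, disabling them also stops the messages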

     

    I'm running ESXi with the HBA in passthrough mode and Jun's loader v1.03b DS3617xs without any customization.

    I use an Intel Xeon D-1527 with 4 cores and 25 GB of ECC RAM. I also use an Intel X540-T2 network card in passthrough.

     

    My Xpenology server feels a bit sluggish, but I have a hard time pinpointing the issue. I run quite a few Docker images, but there is hardly any CPU utilization.

    Disk and volume utilization stay around 50% each and I/O wait at ~25%. I'm not sure, but doesn't that sound like a lot? Does anyone have any tips on how to get to the bottom of this?

     

    I've read some people suggesting enabling the disk write cache, but that only yields "Operation failed". I'm not sure that's even possible when using an LSI card in passthrough?
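
    For reference, a hedged starting point for narrowing down where the I/O wait comes from over SSH. iostat is part of the sysstat package and is not necessarily present on DSM, so treat its availability as an assumption; running it from a Docker container or another Linux box against the same workload is an alternative.

    iostat -x 5    # per-device utilization, await and queue depth, refreshed every 5 seconds
    top            # compare the "wa" (I/O wait) figure in the CPU line against individual processes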

     

    Is anyone else struggling with SMART on the DS918+ image on ESXi?

    I'm using an LSI00244 9201-16i with 3615 and 3617 and everything works great, but when trying 918 I get the following message when requesting SMART data for any hard drive in Storage Manager. Since SHR and vmxnet3 work in 918, it would be awesome to keep using that image.

     

    Edit: Is missing kernel support the reason for this?

     

    [screenshot of the SMART error message attached]
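
    In case anyone wants to cross-check from the shell, here is a hedged way to see whether SMART data is reachable at all behind the HBA, assuming a smartctl binary is available on the box (DSM normally ships one). The device name and the "-d sat" hint are assumptions to adjust; drop "-d sat" if smartctl complains about the device type.

    smartctl -d sat -H /dev/sdb    # overall health self-assessment only
    smartctl -d sat -a /dev/sdb    # full SMART identity and attribute dump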

  16. As mentioned before, when using the following sata_args, the system crashes after a little while.

    I have given my VM a serial port so I could see what's going on, but I have a hard time understanding the issue.

     

    Can anyone explain to me what's going on here and how I can get these sata_args to work? I mean, everything looks right in DSM/Storage Manager...

    set sata_args='DiskIdxMap=1F SasIdxMap=0xfffffffe'

     

    Spoiler
    
                                  GNU GRUB  version 2.02

        *DS918+ 6.2.1/6.2 VMWare/ESXI with Jun's Mod v1.04b

         Use the ^ and v keys to select which entry is highlighted.
         Press enter to boot the selected OS, `e' to edit the commands
         before booting or `c' for a command-line.
         The highlighted entry will be executed automatically in 5s.

    [    1.648999] ata2: No present pin info for SATA link down event
    [    1.954077] ata3: No present pin info for SATA link down event
    [    2.259160] ata4: No present pin info for SATA link down event
    [    2.564267] ata5: No present pin info for SATA link down event
    [    2.869319] ata6: No present pin info for SATA link down event
    [    3.174492] ata7: No present pin info for SATA link down event
    [    3.479478] ata8: No present pin info for SATA link down event
    [    3.784601] ata9: No present pin info for SATA link down event
    [    4.089639] ata10: No present pin info for SATA link down event
    [    4.394779] ata11: No present pin info for SATA link down event
    [    4.699792] ata12: No present pin info for SATA link down event
    [    5.004877] ata13: No present pin info for SATA link down event
    [    5.310027] ata14: No present pin info for SATA link down event
    [    5.615100] ata15: No present pin info for SATA link down event
    [    5.920115] ata16: No present pin info for SATA link down event
    [    6.225196] ata17: No present pin info for SATA link down event
    [    6.530275] ata18: No present pin info for SATA link down event
    [    6.835423] ata19: No present pin info for SATA link down event
    [    7.140430] ata20: No present pin info for SATA link down event
    [    7.445510] ata21: No present pin info for SATA link down event
    [    7.750664] ata22: No present pin info for SATA link down event
    [    8.055675] ata23: No present pin info for SATA link down event
    [    8.360797] ata24: No present pin info for SATA link down event
    [    8.665902] ata25: No present pin info for SATA link down event
    [    8.970911] ata26: No present pin info for SATA link down event
    [    9.276046] ata27: No present pin info for SATA link down event
    [    9.581121] ata28: No present pin info for SATA link down event
    [    9.886194] ata29: No present pin info for SATA link down event
    [   10.191250] ata30: No present pin info for SATA link down event
    patching file etc/rc
    patching file etc/synoinfo.conf
    Hunk #2 FAILED at 263.
    Hunk #3 FAILED at 291.
    Hunk #4 FAILED at 304.
    Hunk #5 FAILED at 312.
    Hunk #6 FAILED at 328.
    5 out of 6 hunks FAILED -- saving rejects to file etc/synoinfo.conf.rej
    patching file linuxrc.syno
    patching file usr/sbin/init.post
    START /linuxrc.syno
    Insert basic USB modules...
    :: Loading module usb-common ... [  OK  ]
    :: Loading module usbcore ... [  OK  ]
    :: Loading module xhci-hcd ... [  OK  ]
    :: Loading module xhci-pci ... [  OK  ]
    :: Loading module usb-storage ... [  OK  ]
    :: Loading module BusLogic ... [  OK  ]
    :: Loading module vmw_pvscsi ... [  OK  ]
    :: Loading module megaraid_mm ... [  OK  ]
    :: Loading module megaraid_mbox ... [  OK  ]
    :: Loading module scsi_transport_spi ... [  OK  ]
    :: Loading module mptbase ... [  OK  ]
    :: Loading module mptscsih ... [  OK  ]
    :: Loading module mptspi ... [  OK  ]
    :: Loading module mptctl ... [  OK  ]
    :: Loading module megaraid ... [  OK  ]
    :: Loading module megaraid_sas ... [  OK  ]
    :: Loading module scsi_transport_sas ... [  OK  ]
    :: Loading module raid_class ... [  OK  ]
    :: Loading module mpt3sas ... [  OK  ]
    :: Loading module mdio ... [  OK  ]
    :: Loading module rtc-cmos ... [  OK  ]
    Insert net driver(Mindspeed only)...
    Starting /usr/syno/bin/synocfgen...
    /usr/syno/bin/synocfgen returns 0
    [   10.663536] md: invalid raid superblock magic on sda3
    [   10.679427] md: invalid raid superblock magic on sdb3
    Partition Version=8
     /sbin/e2fsck exists, checking /dev/md0... 
    /sbin/e2fsck -pvf returns 0
    Mounting /dev/md0 /tmpRoot
    ------------upgrade
    Begin upgrade procedure
    No upgrade file exists
    End upgrade procedure
    ============upgrade
    Wait 2 seconds for synology manufactory device
    Sun Jan 20 20:31:05 UTC 2019
    /dev/md0 /tmpRoot ext4 rw,relatime,data=ordered 0 0
    none /sys/kernel/debug debugfs rw,relatime 0 0
    sys /sys sysfs rw,relatime 0 0
    none /dev devtmpfs rw,relatime,size=1012292k,nr_inodes=253073,mode=755 0 0
    proc /proc proc rw,relatime 0 0
    linuxrc.syno executed successfully.
    Post init
    
    
    synotest login: [10259.795238] blk_update_request: I/O error, dev sda, sector in range 4980736 + 0-2(12)
    [10259.796149] blk_update_request: I/O error, dev sdb, sector 4982400
    [10259.796776] blk_update_request: I/O error, dev sdb, sector in range 9437184 + 0-2(12)
    [10259.797596] blk_update_request: I/O error, dev sda, sector 9437192
    [10259.811059] blk_update_request: I/O error, dev sdb, sector in range 9580544 + 0-2(12)
    [10259.811892] md/raid1:md2: sdb3: rescheduling sector 143776
    [10259.812617] blk_update_request: I/O error, dev sdb, sector 9583008
    [10259.813311] blk_update_request: I/O error, dev sda, sector 9583008
    [10259.813951] raid1: Disk failure on sdb3, disabling device. 
    [10259.813951]     Operation continuing on 1 devices
    [10259.814969] md/raid1:md2: redirecting sector 143776 to other mirror: sda3
    [10259.815712] blk_update_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
    [10259.816551] blk_update_request: I/O error, dev sda, sector in range 9580544 + 0-2(12)
    [10259.816574] blk_update_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
    [10259.818192] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 1, flush 0, corrupt 0, gen 0
    [10259.819209] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10259.820249] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10259.821295] blk_update_request: I/O error, dev sda, sector in range 9580544 + 0-2(12)
    [10259.822108] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 2, flush 0, corrupt 0, gen 0
    [10259.823030] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10259.824078] blk_update_request: I/O error, dev sda, sector in range 9580544 + 0-2(12)
    [10259.824909] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 3, flush 0, corrupt 0, gen 0
    [10259.825336] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10259.825822] blk_update_request: I/O error, dev sda, sector in range 9580544 + 0-2(12)
    [10259.825824] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 4, flush 0, corrupt 0, gen 0
    [10259.825828] blk_update_request: I/O error, dev sda, sector in range 9580544 + 0-2(12)
    [10259.825829] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 5, flush 0, corrupt 0, gen 0
    [10259.830364] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10259.831691] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10259.832682] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 6, flush 0, corrupt 0, gen 0
    [10259.833685] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10259.834674] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 7, flush 0, corrupt 0, gen 0
    [10259.835676] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10259.836665] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 8, flush 0, corrupt 0, gen 0
    [10259.837577] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 9, flush 0, corrupt 0, gen 0
    [10259.838449] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10259.839543] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10259.840654] BTRFS error (device md2): bdev /dev/md2 errs: wr 0, rd 10, flush 0, corrupt 0, gen 0
    [10259.843248] BTRFS error (device md2): BTRFS: md2 failed to repair btree csum error on 65224704, mirror = 1
    [10259.843248] 
    [10259.844581] blk_update_request: I/O error, dev sda, sector 11680160
    [10259.847284] BTRFS error (device md2): BTRFS: md2 failed to repair btree csum error on 65224704, mirror = 2
    [10259.847284] 
    [10259.848491] BTRFS error (device md2): error loading props for ino 17188 (root 257): -5
    [10259.873224] md/raid1:md0: sdb1: rescheduling sector 831776
    [10259.874011] blk_update_request: I/O error, dev sdb, sector 833824
    [10259.874796] blk_update_request: I/O error, dev sda, sector 833824
    [10259.875564] raid1: Disk failure on sdb1, disabling device. 
    [10259.875564]     Operation continuing on 1 devices
    [10259.876809] md/raid1:md0: redirecting sector 831776 to other mirror: sda1
    [10259.877732] blk_update_request: I/O error, dev sda, sector 833824
    [10259.882257] blk_update_request: I/O error, dev sda, sector 4982400
    [10259.885206] blk_update_request: I/O error, dev sda, sector 1045816
    [10259.993102] BTRFS error (device md2): failed to repair data csum of ino 16633 off 0 (ran out of all copies)
    [10259.993102] 
    [10259.994648] BTRFS error (device md2): failed to repair data csum of ino 16633 off 4096 (ran out of all copies)
    [10259.994648] 
    [10259.995787] BTRFS error (device md2): failed to repair data csum of ino 16633 off 8192 (ran out of all copies)
    [10259.995787] 
    [10260.003366] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
    [10260.004476] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
    [10260.005420] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
    [10260.006371] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
    [10260.007319] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
    [10260.007320] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
    [10260.007321] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
    [10260.007322] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
    [10260.007322] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
    [10260.017129] BTRFS error (device md2): failed to repair data csum of ino 16633 off 49152 (ran out of all copies)
    [10260.017129] 
    [10260.017517] Buffer I/O error on device md0, logical block in range 221184 + 0-2(12)
    [10260.018539] BTRFS error (device md2): failed to repair data csum of ino 16633 off 12288 (ran out of all copies)
    [10260.018539] 
    [10260.018544] BTRFS error (device md2): failed to repair data csum of ino 16633 off 16384 (ran out of all copies)
    [10260.018544] 
    [10260.018545] BTRFS error (device md2): failed to repair data csum of ino 16633 off 20480 (ran out of all copies)
    [10260.018545] 
    [10260.018547] BTRFS error (device md2): failed to repair data csum of ino 16633 off 24576 (ran out of all copies)
    [10260.018547] 
    [10260.018548] BTRFS error (device md2): failed to repair data csum of ino 16633 off 28672 (ran out of all copies)
    [10260.018548] 
    [10260.018550] BTRFS error (device md2): failed to repair data csum of ino 16633 off 32768 (ran out of all copies)
    [10260.018550] 
    [10260.018558] BTRFS error (device md2): failed to repair data csum of ino 16633 off 40960 (ran out of all copies)
    [10260.018558] 
    [10260.019465] BTRFS error (device md2): failed to repair data csum of ino 16633 off 36864 (ran out of all copies)
    [10260.019465] 
    [10260.019466] BTRFS error (device md2): failed to repair data csum of ino 16633 off 45056 (ran out of all copies)
    [10260.019466] 
    [10260.020695] BTRFS error (device md2): failed to repair data csum of ino 16633 off 53248 (ran out of all copies)
    [10260.020695] 
    [10260.020697] BTRFS error (device md2): failed to repair data csum of ino 2557 off 0 (ran out of all copies)
    [10260.020697] 
    [10260.022003] BTRFS error (device md2): failed to repair data csum of ino 2557 off 4096 (ran out of all copies)
    [10260.022003] 
    [10260.023254] BTRFS error (device md2): failed to repair data csum of ino 2557 off 8192 (ran out of all copies)
    [10260.023254] 
    [10260.024410] BTRFS error (device md2): failed to repair data csum of ino 2557 off 12288 (ran out of all copies)
    [10260.024410] 
    [10260.024412] BTRFS error (device md2): failed to repair data csum of ino 2557 off 16384 (ran out of all copies)
    [10260.024412] 
    [10260.024413] BTRFS error (device md2): failed to repair data csum of ino 2557 off 20480 (ran out of all copies)
    [10260.024413] 
    [10260.024415] BTRFS error (device md2): failed to repair data csum of ino 2557 off 24576 (ran out of all copies)
    [10260.024415] 
    [10260.026028] BTRFS error (device md2): failed to repair data csum of ino 2557 off 28672 (ran out of all copies)
    [10260.026028] 
    [10260.026030] BTRFS error (device md2): failed to repair data csum of ino 2557 off 32768 (ran out of all copies)
    [10260.026030] 
    [10260.026031] BTRFS error (device md2): failed to repair data csum of ino 2557 off 36864 (ran out of all copies)
    [10260.026031] 
    [10260.026032] BTRFS error (device md2): failed to repair data csum of ino 2557 off 40960 (ran out of all copies)
    [10260.026032] 
    [10260.026033] BTRFS error (device md2): failed to repair data csum of ino 2557 off 45056 (ran out of all copies)
    [10260.026033] 
    [10260.026035] BTRFS error (device md2): failed to repair data csum of ino 2557 off 49152 (ran out of all copies)
    [10260.026035] 
    [10260.026036] BTRFS error (device md2): failed to repair data csum of ino 2557 off 53248 (ran out of all copies)
    [10260.026036] 
    [10260.026037] BTRFS error (device md2): failed to repair data csum of ino 2557 off 57344 (ran out of all copies)
    [10260.026037] 
    [10260.026038] BTRFS error (device md2): failed to repair data csum of ino 2557 off 61440 (ran out of all copies)
    [10260.026038] 
    [10260.026039] BTRFS error (device md2): failed to repair data csum of ino 2557 off 65536 (ran out of all copies)
    [10260.026039] 
    [10260.026040] BTRFS error (device md2): failed to repair data csum of ino 2557 off 69632 (ran out of all copies)
    [10260.026040] 
    [10260.026041] BTRFS error (device md2): failed to repair data csum of ino 2557 off 73728 (ran out of all copies)
    [10260.026041] 
    [10260.026042] BTRFS error (device md2): failed to repair data csum of ino 2557 off 77824 (ran out of all copies)
    [10260.026042] 
    [10260.062467] BTRFS error (device md2): failed to repair data csum of ino 15149 off 4096 (ran out of all copies)
    [10260.062467] 
    [10260.063653] BTRFS error (device md2): failed to repair data csum of ino 15149 off 8192 (ran out of all copies)
    [10260.063653] 
    [10260.064868] BTRFS error (device md2): failed to repair data csum of ino 15149 off 12288 (ran out of all copies)
    [10260.064868] 
    [10260.064965] BTRFS error (device md2): failed to repair data csum of ino 15149 off 0 (ran out of all copies)
    [10260.064965] 
    [10260.064968] BTRFS error (device md2): failed to repair data csum of ino 15149 off 16384 (ran out of all copies)
    [10260.064968] 
    [10260.064970] BTRFS error (device md2): failed to repair data csum of ino 15149 off 20480 (ran out of all copies)
    [10260.064970] 
    [10260.064971] BTRFS error (device md2): failed to repair data csum of ino 15149 off 24576 (ran out of all copies)
    [10260.064971] 
    [10260.064972] BTRFS error (device md2): failed to repair data csum of ino 15149 off 28672 (ran out of all copies)
    [10260.064972] 
    [10260.064973] BTRFS error (device md2): failed to repair data csum of ino 15149 off 32768 (ran out of all copies)
    [10260.064973] 
    [10260.064979] BTRFS error (device md2): failed to repair data csum of ino 15149 off 36864 (ran out of all copies)
    [10260.064979] 
    [10260.065045] BTRFS error (device md2): failed to repair data csum of ino 15149 off 40960 (ran out of all copies)
    [10260.065045] 
    [10260.065046] BTRFS error (device md2): failed to repair data csum of ino 15149 off 45056 (ran out of all copies)
    [10260.065046] 
    [10260.065058] BTRFS error (device md2): failed to repair data csum of ino 15149 off 57344 (ran out of all copies)
    [10260.065058] 
    [10260.065059] BTRFS error (device md2): failed to repair data csum of ino 15149 off 61440 (ran out of all copies)
    [10260.065059] 
    [10260.065075] BTRFS error (device md2): failed to repair data csum of ino 15149 off 65536 (ran out of all copies)
    [10260.065075] 
    [10260.065086] BTRFS error (device md2): failed to repair data csum of ino 15149 off 69632 (ran out of all copies)
    [10260.065086] 
    [10260.065099] BTRFS error (device md2): failed to repair data csum of ino 15149 off 73728 (ran out of all copies)
    [10260.065099] 
    [10260.065113] BTRFS error (device md2): failed to repair data csum of ino 15149 off 77824 (ran out of all copies)
    [10260.065113] 
    [10260.065235] BTRFS error (device md2): failed to repair data csum of ino 15149 off 114688 (ran out of all copies)
    [10260.065235] 
    [10260.065247] BTRFS error (device md2): failed to repair data csum of ino 15149 off 118784 (ran out of all copies)
    [10260.065247] 
    [10260.065273] BTRFS error (device md2): failed to repair data csum of ino 15149 off 122880 (ran out of all copies)
    [10260.065273] 
    [10260.065275] BTRFS error (device md2): failed to repair data csum of ino 15149 off 126976 (ran out of all copies)
    [10260.065275] 
    [10260.065355] BTRFS error (device md2): failed to repair data csum of ino 15149 off 81920 (ran out of all copies)
    [10260.065355] 
    [10260.065357] BTRFS error (device md2): failed to repair data csum of ino 15149 off 86016 (ran out of all copies)
    [10260.065357] 
    [10260.065358] BTRFS error (device md2): failed to repair data csum of ino 15149 off 90112 (ran out of all copies)
    [10260.065358] 
    [10260.065359] BTRFS error (device md2): failed to repair data csum of ino 15149 off 94208 (ran out of all copies)
    [10260.065359] 
    [10260.065360] BTRFS error (device md2): failed to repair data csum of ino 15149 off 98304 (ran out of all copies)
    [10260.065360] 
    [10260.065361] BTRFS error (device md2): failed to repair data csum of ino 15149 off 102400 (ran out of all copies)
    [10260.065361] 
    [10260.065362] BTRFS error (device md2): failed to repair data csum of ino 15149 off 106496 (ran out of all copies)
    [10260.065362] 
    [10260.065363] BTRFS error (device md2): failed to repair data csum of ino 15149 off 49152 (ran out of all copies)
    [10260.065363] 
    [10260.065364] BTRFS error (device md2): failed to repair data csum of ino 15149 off 53248 (ran out of all copies)
    [10260.065364] 
    [10260.065365] BTRFS error (device md2): failed to repair data csum of ino 15149 off 110592 (ran out of all copies)
    [10260.065365] 
    [10260.065743] BTRFS error (device md2): failed to repair data csum of ino 14128 off 0 (ran out of all copies)
    [10260.065743] 
    [10260.065757] BTRFS error (device md2): failed to repair data csum of ino 14128 off 4096 (ran out of all copies)
    [10260.065757] 
    [10260.065771] BTRFS error (device md2): failed to repair data csum of ino 14128 off 8192 (ran out of all copies)
    [10260.065771] 
    [10260.110152] BTRFS error (device md2): failed to repair data csum of ino 15149 off 135168 (ran out of all copies)
    [10260.110152] 
    [10260.111476] BTRFS error (device md2): failed to repair data csum of ino 15149 off 139264 (ran out of all copies)
    [10260.111476] 
    [10260.112707] BTRFS error (device md2): failed to repair data csum of ino 15149 off 143360 (ran out of all copies)
    [10260.112707] 
    [10260.113300] BTRFS error (device md2): failed to repair data csum of ino 15149 off 147456 (ran out of all copies)
    [10260.113300] 
    [10260.113303] BTRFS error (device md2): failed to repair data csum of ino 15149 off 151552 (ran out of all copies)
    [10260.113303] 
    [10260.113304] BTRFS error (device md2): failed to repair data csum of ino 15149 off 155648 (ran out of all copies)
    [10260.113304] 
    [10260.113306] BTRFS error (device md2): failed to repair data csum of ino 15149 off 159744 (ran out of all copies)
    [10260.113306] 
    [10260.113307] BTRFS error (device md2): failed to repair data csum of ino 15149 off 163840 (ran out of all copies)
    [10260.113307] 
    [10260.113308] BTRFS error (device md2): failed to repair data csum of ino 15149 off 167936 (ran out of all copies)
    [10260.113308] 
    [10260.113309] BTRFS error (device md2): failed to repair data csum of ino 15149 off 172032 (ran out of all copies)
    [10260.113309] 
    [10260.113310] BTRFS error (device md2): failed to repair data csum of ino 15149 off 176128 (ran out of all copies)
    [10260.113310] 
    [10260.113311] BTRFS error (device md2): failed to repair data csum of ino 15149 off 180224 (ran out of all copies)
    [10260.113311] 
    [10260.113313] BTRFS error (device md2): failed to repair data csum of ino 15149 off 184320 (ran out of all copies)
    [10260.113313] 
    [10260.113314] BTRFS error (device md2): failed to repair data csum of ino 15149 off 188416 (ran out of all copies)
    [10260.113314] 
    [10260.113315] BTRFS error (device md2): failed to repair data csum of ino 15149 off 192512 (ran out of all copies)
    [10260.113315] 
    [10260.113316] BTRFS error (device md2): failed to repair data csum of ino 15149 off 196608 (ran out of all copies)
    [10260.113316] 
    [10260.113318] BTRFS error (device md2): failed to repair data csum of ino 15149 off 200704 (ran out of all copies)
    [10260.113318] 
    [10260.113319] BTRFS error (device md2): failed to repair data csum of ino 15149 off 204800 (ran out of all copies)
    [10260.113319] 
    [10260.113320] BTRFS error (device md2): failed to repair data csum of ino 15149 off 208896 (ran out of all copies)
    [10260.113320] 
    [10260.113321] BTRFS error (device md2): failed to repair data csum of ino 15149 off 212992 (ran out of all copies)
    [10260.113321] 
    [10260.113323] BTRFS error (device md2): failed to repair data csum of ino 15149 off 217088 (ran out of all copies)
    [10260.113323] 
    [10260.113324] BTRFS error (device md2): failed to repair data csum of ino 15149 off 221184 (ran out of all copies)
    [10260.113324] 
    [10260.113325] BTRFS error (device md2): failed to repair data csum of ino 15149 off 225280 (ran out of all copies)
    [10260.113325] 
    [10260.113326] BTRFS error (device md2): failed to repair data csum of ino 15149 off 229376 (ran out of all copies)
    [10260.113326] 
    [10260.113327] BTRFS error (device md2): failed to repair data csum of ino 15149 off 233472 (ran out of all copies)
    [10260.113327] 
    [10260.113328] BTRFS error (device md2): failed to repair data csum of ino 15149 off 237568 (ran out of all copies)
    [10260.113328] 
    [10260.113330] BTRFS error (device md2): failed to repair data csum of ino 15149 off 241664 (ran out of all copies)
    [10260.113330] 
    [10260.113331] BTRFS error (device md2): failed to repair data csum of ino 15149 off 245760 (ran out of all copies)
    [10260.113331] 
    [10260.113332] BTRFS error (device md2): failed to repair data csum of ino 15149 off 249856 (ran out of all copies)
    [10260.113332] 
    [10260.113333] BTRFS error (device md2): failed to repair data csum of ino 15149 off 253952 (ran out of all copies)
    [10260.113333] 
    [10260.113334] BTRFS error (device md2): failed to repair data csum of ino 15149 off 258048 (ran out of all copies)
    [10260.113334] 
    [10260.113335] BTRFS error (device md2): failed to repair data csum of ino 15149 off 262144 (ran out of all copies)
    [10260.113335] 
    [10260.113336] BTRFS error (device md2): failed to repair data csum of ino 15149 off 266240 (ran out of all copies)
    [10260.113336] 
    [10260.113338] BTRFS error (device md2): failed to repair data csum of ino 15149 off 270336 (ran out of all copies)
    [10260.113338] 
    [10260.113339] BTRFS error (device md2): failed to repair data csum of ino 15149 off 274432 (ran out of all copies)
    [10260.113339] 
    [10260.113340] BTRFS error (device md2): failed to repair data csum of ino 15149 off 278528 (ran out of all copies)
    [10260.113340] 
    [10260.113341] BTRFS error (device md2): failed to repair data csum of ino 15149 off 282624 (ran out of all copies)
    [10260.113341] 
    [10260.113342] BTRFS error (device md2): failed to repair data csum of ino 15149 off 286720 (ran out of all copies)
    [10260.113342] 
    [10260.113343] BTRFS error (device md2): failed to repair data csum of ino 15149 off 290816 (ran out of all copies)
    [10260.113343] 
    [10260.113344] BTRFS error (device md2): failed to repair data csum of ino 15149 off 294912 (ran out of all copies)
    [10260.113344] 
    [10260.113345] BTRFS error (device md2): failed to repair data csum of ino 15149 off 299008 (ran out of all copies)
    [10260.113345] 
    [10260.113346] BTRFS error (device md2): failed to repair data csum of ino 15149 off 303104 (ran out of all copies)
    [10260.113346] 
    [10260.113347] BTRFS error (device md2): failed to repair data csum of ino 15149 off 307200 (ran out of all copies)
    [10260.113347] 
    [10260.113349] BTRFS error (device md2): failed to repair data csum of ino 15149 off 311296 (ran out of all copies)
    [10260.113349] 
    [10260.113350] BTRFS error (device md2): failed to repair data csum of ino 15149 off 315392 (ran out of all copies)
    [10260.113350] 
    [10260.113351] BTRFS error (device md2): failed to repair data csum of ino 15149 off 319488 (ran out of all copies)
    [10260.113351] 
    [10260.113352] BTRFS error (device md2): failed to repair data csum of ino 15149 off 323584 (ran out of all copies)
    [10260.113352] 
    [10260.113353] BTRFS error (device md2): failed to repair data csum of ino 15149 off 327680 (ran out of all copies)
    [10260.113353] 
    [10260.113355] BTRFS error (device md2): failed to repair data csum of ino 15149 off 331776 (ran out of all copies)
    [10260.113355] 
    [10260.113355] BTRFS error (device md2): failed to repair data csum of ino 15149 off 335872 (ran out of all copies)
    [10260.113355] 
    [10260.113357] BTRFS error (device md2): failed to repair data csum of ino 15149 off 339968 (ran out of all copies)
    [10260.113357] 
    [10260.113358] BTRFS error (device md2): failed to repair data csum of ino 15149 off 344064 (ran out of all copies)
    [10260.113358] 
    [10260.113359] BTRFS error (device md2): failed to repair data csum of ino 15149 off 348160 (ran out of all copies)
    [10260.113359] 
    [10260.113360] BTRFS error (device md2): failed to repair data csum of ino 15149 off 352256 (ran out of all copies)
    [10260.113360] 
    [10260.113361] BTRFS error (device md2): failed to repair data csum of ino 15149 off 356352 (ran out of all copies)
    [10260.113361] 
    [10260.113363] BTRFS error (device md2): failed to repair data csum of ino 15149 off 360448 (ran out of all copies)
    [10260.113363] 
    [10260.113364] BTRFS error (device md2): failed to repair data csum of ino 15149 off 364544 (ran out of all copies)
    [10260.113364] 
    [10260.113365] BTRFS error (device md2): failed to repair data csum of ino 15149 off 368640 (ran out of all copies)
    [10260.113365] 
    [10260.113366] BTRFS error (device md2): failed to repair data csum of ino 15149 off 372736 (ran out of all copies)
    [10260.113366] 
    [10260.113367] BTRFS error (device md2): failed to repair data csum of ino 15149 off 376832 (ran out of all copies)
    [10260.113367] 
    [10260.113368] BTRFS error (device md2): failed to repair data csum of ino 15149 off 380928 (ran out of all copies)
    [10260.113368] 
    [10260.113369] BTRFS error (device md2): failed to repair data csum of ino 15149 off 385024 (ran out of all copies)
    [10260.113369] 
    [10260.113370] BTRFS error (device md2): failed to repair data csum of ino 15149 off 389120 (ran out of all copies)
    [10260.113370] 
    [10260.113371] BTRFS error (device md2): failed to repair data csum of ino 15149 off 393216 (ran out of all copies)
    [10260.113371] 
    [10260.113372] BTRFS error (device md2): failed to repair data csum of ino 15149 off 397312 (ran out of all copies)
    [10260.113372] 
    [10260.113373] BTRFS error (device md2): failed to repair data csum of ino 15149 off 401408 (ran out of all copies)
    [10260.113373] 
    [10260.113374] BTRFS error (device md2): failed to repair data csum of ino 15149 off 405504 (ran out of all copies)
    [10260.113374] 
    [10260.113375] BTRFS error (device md2): failed to repair data csum of ino 15149 off 409600 (ran out of all copies)
    [10260.113375] 
    [10260.113376] BTRFS error (device md2): failed to repair data csum of ino 15149 off 413696 (ran out of all copies)
    [10260.113376] 
    [10260.113378] BTRFS error (device md2): failed to repair data csum of ino 15149 off 417792 (ran out of all copies)
    [10260.113378] 
    [10260.113379] BTRFS error (device md2): failed to repair data csum of ino 15149 off 421888 (ran out of all copies)
    [10260.113379] 
    [10260.113380] BTRFS error (device md2): failed to repair data csum of ino 15149 off 425984 (ran out of all copies)
    [10260.113380] 
    [10260.113381] BTRFS error (device md2): failed to repair data csum of ino 15149 off 430080 (ran out of all copies)
    [10260.113381] 
    [10260.113382] BTRFS error (device md2): failed to repair data csum of ino 15149 off 434176 (ran out of all copies)
    [10260.113382] 
    [10260.113383] BTRFS error (device md2): failed to repair data csum of ino 15149 off 438272 (ran out of all copies)
    [10260.113383] 
    [10260.113384] BTRFS error (device md2): failed to repair data csum of ino 15149 off 442368 (ran out of all copies)
    [10260.113384] 
    [10260.113385] BTRFS error (device md2): failed to repair data csum of ino 15149 off 446464 (ran out of all copies)
    [10260.113385] 
    [10260.113386] BTRFS error (device md2): failed to repair data csum of ino 15149 off 450560 (ran out of all copies)
    [10260.113386] 
    [10260.113388] BTRFS error (device md2): failed to repair data csum of ino 15149 off 454656 (ran out of all copies)
    [10260.113388] 
    [10260.113389] BTRFS error (device md2): failed to repair data csum of ino 15149 off 458752 (ran out of all copies)
    [10260.113389] 
    [10260.113390] BTRFS error (device md2): failed to repair data csum of ino 15149 off 462848 (ran out of all copies)
    [10260.113390] 
    [10260.113391] BTRFS error (device md2): failed to repair data csum of ino 15149 off 466944 (ran out of all copies)
    [10260.113391] 
    [10260.113392] BTRFS error (device md2): failed to repair data csum of ino 15149 off 471040 (ran out of all copies)
    [10260.113392] 
    [10260.113393] BTRFS error (device md2): failed to repair data csum of ino 15149 off 475136 (ran out of all copies)
    [10260.113393] 
    [10260.113394] BTRFS error (device md2): failed to repair data csum of ino 15149 off 479232 (ran out of all copies)
    [10260.113394] 
    [10260.113395] BTRFS error (device md2): failed to repair data csum of ino 15149 off 483328 (ran out of all copies)
    [10260.113395] 
    [10260.113396] BTRFS error (device md2): failed to repair data csum of ino 15149 off 487424 (ran out of all copies)
    [10260.113396] 
    [10260.113398] BTRFS error (device md2): failed to repair data csum of ino 15149 off 131072 (ran out of all copies)
    [10260.113398] 
    [10260.125896] EXT4-fs error (device md0): ext4_find_entry:1614: inode #7151: comm SYNO.Core.Deskt: reading directory lblock 0
    [10260.126926] Buffer I/O error on dev md0, logical block 0, lost sync page write
    [10260.232402] BTRFS error (device md2): failed to repair data csum of ino 36135 off 4096 (ran out of all copies)
    [10260.232402] 
    [10260.233653] BTRFS error (device md2): failed to repair data csum of ino 36135 off 8192 (ran out of all copies)
    [10260.233653] 
    [10260.233775] BTRFS error (device md2): failed to repair data csum of ino 36135 off 12288 (ran out of all copies)
    [10260.233775] 
    [10260.233778] BTRFS error (device md2): failed to repair data csum of ino 36135 off 16384 (ran out of all copies)
    [10260.233778] 
    [10260.233779] BTRFS error (device md2): failed to repair data csum of ino 36135 off 20480 (ran out of all copies)
    [10260.233779] 
    [10260.233780] BTRFS error (device md2): failed to repair data csum of ino 36135 off 24576 (ran out of all copies)
    [10260.233780] 
    [10260.233796] BTRFS error (device md2): failed to repair data csum of ino 36135 off 32768 (ran out of all copies)
    [10260.233796] 
    [10260.233812] BTRFS error (device md2): failed to repair data csum of ino 36135 off 28672 (ran out of all copies)
    [10260.233812] 
    [10260.233819] BTRFS error (device md2): failed to repair data csum of ino 36135 off 36864 (ran out of all copies)
    [10260.233819] 
    [10260.233820] BTRFS error (device md2): failed to repair data csum of ino 36135 off 40960 (ran out of all copies)
    [10260.233820] 
    [10260.233829] BTRFS error (device md2): failed to repair data csum of ino 36135 off 49152 (ran out of all copies)
    [10260.233829] 
    [10260.233830] BTRFS error (device md2): failed to repair data csum of ino 36135 off 45056 (ran out of all copies)
    [10260.233830] 
    [10260.233834] BTRFS error (device md2): failed to repair data csum of ino 36135 off 53248 (ran out of all copies)
    [10260.233834] 
    [10260.233835] BTRFS error (device md2): failed to repair data csum of ino 36135 off 61440 (ran out of all copies)
    [10260.233835] 
    [10260.233836] BTRFS error (device md2): failed to repair data csum of ino 36135 off 65536 (ran out of all copies)
    [10260.233836] 
    [10260.233841] BTRFS error (device md2): failed to repair data csum of ino 36135 off 57344 (ran out of all copies)
    [10260.233841] 
    [10260.233858] BTRFS error (device md2): failed to repair data csum of ino 36135 off 69632 (ran out of all copies)
    [10260.233858] 
    [10260.233966] BTRFS error (device md2): failed to repair data csum of ino 36135 off 73728 (ran out of all copies)
    [10260.233966] 
    [10260.233969] BTRFS error (device md2): failed to repair data csum of ino 36135 off 77824 (ran out of all copies)
    [10260.233969] 
    [10260.233976] BTRFS error (device md2): failed to repair data csum of ino 36135 off 0 (ran out of all copies)
    [10260.233976] 
    [10260.233978] BTRFS error (device md2): failed to repair data csum of ino 36135 off 90112 (ran out of all copies)
    [10260.233978] 
    [10260.233979] BTRFS error (device md2): failed to repair data csum of ino 36135 off 81920 (ran out of all copies)
    [10260.233979] 
    [10260.233988] BTRFS error (device md2): failed to repair data csum of ino 36135 off 94208 (ran out of all copies)
    [10260.233988] 
    [10260.234001] BTRFS error (device md2): failed to repair data csum of ino 36135 off 86016 (ran out of all copies)
    [10260.234001] 
    [10260.234014] BTRFS error (device md2): failed to repair data csum of ino 36135 off 102400 (ran out of all copies)
    [10260.234014] 
    [10260.234028] BTRFS error (device md2): failed to repair data csum of ino 36135 off 106496 (ran out of all copies)
    [10260.234028] 
    [10260.234041] BTRFS error (device md2): failed to repair data csum of ino 36135 off 110592 (ran out of all copies)
    [10260.234041] 
    [10260.234054] BTRFS error (device md2): failed to repair data csum of ino 36135 off 114688 (ran out of all copies)
    [10260.234054] 
    [10260.234067] BTRFS error (device md2): failed to repair data csum of ino 36135 off 118784 (ran out of all copies)
    [10260.234067] 
    [10260.234080] BTRFS error (device md2): failed to repair data csum of ino 36135 off 122880 (ran out of all copies)
    [10260.234080] 
    [10260.234093] BTRFS error (device md2): failed to repair data csum of ino 36135 off 126976 (ran out of all copies)
    [10260.234093] 
    [10260.235079] BTRFS error (device md2): failed to repair data csum of ino 36135 off 98304 (ran out of all copies)
    [10260.235079] 
    [10260.276973] BTRFS error (device md2): failed to repair data csum of ino 36135 off 364544 (ran out of all copies)
    [10260.276973] 
    [10260.278239] BTRFS error (device md2): failed to repair data csum of ino 36135 off 352256 (ran out of all copies)
    [10260.278239] 
    [10260.279428] BTRFS error (device md2): failed to repair data csum of ino 36135 off 356352 (ran out of all copies)
    [10260.279428] 
    [10260.280603] BTRFS error (device md2): failed to repair data csum of ino 36135 off 348160 (ran out of all copies)
    [10260.280603] 
    [10260.281321] BTRFS error (device md2): failed to repair data csum of ino 36135 off 331776 (ran out of all copies)
    [10260.281321] 
    [10260.281324] BTRFS error (device md2): failed to repair data csum of ino 36135 off 319488 (ran out of all copies)
    [10260.281324] 
    [10260.281325] BTRFS error (device md2): failed to repair data csum of ino 36135 off 323584 (ran out of all copies)
    [10260.281325] 
    [10260.281335] BTRFS error (device md2): failed to repair data csum of ino 36135 off 147456 (ran out of all copies)
    [10260.281335] 
    [10260.281336] BTRFS error (device md2): failed to repair data csum of ino 36135 off 151552 (ran out of all copies)
    [10260.281336] 
    [10260.281338] BTRFS error (device md2): failed to repair data csum of ino 36135 off 155648 (ran out of all copies)
    [10260.281338] 
    [10260.281339] BTRFS error (device md2): failed to repair data csum of ino 36135 off 159744 (ran out of all copies)
    [10260.281339] 
    [10260.281340] BTRFS error (device md2): failed to repair data csum of ino 36135 off 163840 (ran out of all copies)
    [10260.281340] 
    [10260.281341] BTRFS error (device md2): failed to repair data csum of ino 36135 off 167936 (ran out of all copies)
    [10260.281341] 
    [10260.281343] BTRFS error (device md2): failed to repair data csum of ino 36135 off 172032 (ran out of all copies)
    [10260.281343] 
    [10260.281344] BTRFS error (device md2): failed to repair data csum of ino 36135 off 176128 (ran out of all copies)
    [10260.281344] 
    [10260.281345] BTRFS error (device md2): failed to repair data csum of ino 36135 off 180224 (ran out of all copies)
    [10260.281345] 
    [10260.281346] BTRFS error (device md2): failed to repair data csum of ino 36135 off 184320 (ran out of all copies)
    [10260.281346] 
    [10260.281347] BTRFS error (device md2): failed to repair data csum of ino 36135 off 188416 (ran out of all copies)
    [10260.281347] 
    [10260.281349] BTRFS error (device md2): failed to repair data csum of ino 36135 off 192512 (ran out of all copies)
    [10260.281349] 
    [10260.281350] BTRFS error (device md2): failed to repair data csum of ino 36135 off 196608 (ran out of all copies)
    [10260.281350] 
    [10260.281351] BTRFS error (device md2): failed to repair data csum of ino 36135 off 200704 (ran out of all copies)
    [10260.281351] 
    [10260.281352] BTRFS error (device md2): failed to repair data csum of ino 36135 off 204800 (ran out of all copies)
    [10260.281352] 
    [10260.281354] BTRFS error (device md2): failed to repair data csum of ino 36135 off 208896 (ran out of all copies)
    [10260.281354] 
    [10260.281354] BTRFS error (device md2): failed to repair data csum of ino 36135 off 212992 (ran out of all copies)
    [10260.281354] 
    [10260.281356] BTRFS error (device md2): failed to repair data csum of ino 36135 off 217088 (ran out of all copies)
    [10260.281356] 
    [10260.281357] BTRFS error (device md2): failed to repair data csum of ino 36135 off 221184 (ran out of all copies)
    [10260.281357] 
    [10260.281359] BTRFS error (device md2): failed to repair data csum of ino 36135 off 225280 (ran out of all copies)
    [10260.281359] 
    [10260.281377] BTRFS error (device md2): failed to repair data csum of ino 36135 off 233472 (ran out of all copies)
    [10260.281377] 
    [10260.281379] BTRFS error (device md2): failed to repair data csum of ino 36135 off 237568 (ran out of all copies)
    [10260.281379] 
    [10260.281380] BTRFS error (device md2): failed to repair data csum of ino 36135 off 241664 (ran out of all copies)
    [10260.281380] 
    [10260.281381] BTRFS error (device md2): failed to repair data csum of ino 36135 off 245760 (ran out of all copies)
    [10260.281381] 
    [10260.281382] BTRFS error (device md2): failed to repair data csum of ino 36135 off 249856 (ran out of all copies)
    [10260.281382] 
    [10260.281386] BTRFS error (device md2): failed to repair data csum of ino 36135 off 258048 (ran out of all copies)
    [10260.281386] 
    [10260.281387] BTRFS error (device md2): failed to repair data csum of ino 36135 off 262144 (ran out of all copies)
    [10260.281387] 
    [10260.281389] BTRFS error (device md2): failed to repair data csum of ino 36135 off 266240 (ran out of all copies)
    [10260.281389] 
    [10260.281392] BTRFS error (device md2): failed to repair data csum of ino 36135 off 274432 (ran out of all copies)
    [10260.281392] 
    [10260.281397] BTRFS error (device md2): failed to repair data csum of ino 36135 off 286720 (ran out of all copies)
    [10260.281397] 
    [10260.281399] BTRFS error (device md2): failed to repair data csum of ino 36135 off 290816 (ran out of all copies)
    [10260.281399] 
    [10260.281400] BTRFS error (device md2): failed to repair data csum of ino 36135 off 294912 (ran out of all copies)
    [10260.281400] 
    [10260.281401] BTRFS error (device md2): failed to repair data csum of ino 36135 off 299008 (ran out of all copies)
    [10260.281401] 
    [10260.281402] BTRFS error (device md2): failed to repair data csum of ino 36135 off 303104 (ran out of all copies)
    [10260.281402] 
    [10260.281412] BTRFS error (device md2): failed to repair data csum of ino 36135 off 307200 (ran out of all copies)
    [10260.281412] 
    [10260.281424] BTRFS error (device md2): failed to repair data csum of ino 36135 off 315392 (ran out of all copies)
    [10260.281424] 
    [10260.281425] BTRFS error (device md2): failed to repair data csum of ino 36135 off 327680 (ran out of all copies)
    [10260.281425] 
    [10260.281426] BTRFS error (device md2): failed to repair data csum of ino 36135 off 335872 (ran out of all copies)
    [10260.281426] 
    [10260.281428] BTRFS error (device md2): failed to repair data csum of ino 36135 off 339968 (ran out of all copies)
    [10260.281428] 
    [10260.281429] BTRFS error (device md2): failed to repair data csum of ino 36135 off 344064 (ran out of all copies)
    [10260.281429] 
    [10260.281434] BTRFS error (device md2): failed to repair data csum of ino 36135 off 360448 (ran out of all copies)
    [10260.281434] 
    [10260.281438] BTRFS error (device md2): failed to repair data csum of ino 36135 off 135168 (ran out of all copies)
    [10260.281438] 
    [10260.281439] BTRFS error (device md2): failed to repair data csum of ino 36135 off 368640 (ran out of all copies)
    [10260.281439] 
    [10260.281441] BTRFS error (device md2): failed to repair data csum of ino 36135 off 372736 (ran out of all copies)
    [10260.281441] 
    [10260.281442] BTRFS error (device md2): failed to repair data csum of ino 36135 off 376832 (ran out of all copies)
    [10260.281442] 
    [10260.281447] BTRFS error (device md2): failed to repair data csum of ino 36135 off 143360 (ran out of all copies)
    [10260.281447] 
    [10260.281448] BTRFS error (device md2): failed to repair data csum of ino 36135 off 229376 (ran out of all copies)
    [10260.281448] 
    [10260.281449] BTRFS error (device md2): failed to repair data csum of ino 36135 off 253952 (ran out of all copies)
    [10260.281449] 
    [10260.281451] BTRFS error (device md2): failed to repair data csum of ino 36135 off 270336 (ran out of all copies)
    [10260.281451] 
    [10260.281452] BTRFS error (device md2): failed to repair data csum of ino 36135 off 278528 (ran out of all copies)
    [10260.281452] 
    [10260.281462] BTRFS error (device md2): failed to repair data csum of ino 36135 off 282624 (ran out of all copies)
    [10260.281462] 
    [10260.281475] BTRFS error (device md2): failed to repair data csum of ino 36135 off 311296 (ran out of all copies)
    [10260.281475] 
    [10260.281531] BTRFS error (device md2): failed to repair data csum of ino 36135 off 131072 (ran out of all copies)
    [10260.281531] 
    [10260.348805] BTRFS error (device md2): failed to repair data csum of ino 36135 off 139264 (ran out of all copies)
    [10260.348805] 
    [10260.696920] BTRFS error (device md2): failed to repair data csum of ino 27165 off 0 (ran out of all copies)
    [10260.696920] 
    [10260.698120] BTRFS error (device md2): failed to repair data csum of ino 27165 off 8192 (ran out of all copies)
    [10260.698120] 
    [10260.699357] BTRFS error (device md2): failed to repair data csum of ino 27165 off 4096 (ran out of all copies)
    [10260.699357] 
    [10260.699360] BTRFS error (device md2): failed to repair data csum of ino 27165 off 12288 (ran out of all copies)
    [10260.699360] 
    [10260.705479] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41178: comm synopkg: reading directory lblock 0
    [10260.706684] EXT4-fs (md0): previous I/O error to superblock detected
    [10260.707378] Buffer I/O error on dev md0, logical block 0, lost sync page write
    [10260.708285] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41178: comm synopkg: reading directory lblock 0
    [10260.709493] EXT4-fs (md0): previous I/O error to superblock detected
    [10260.710206] Buffer I/O error on dev md0, logical block 0, lost sync page write
    [10260.711011] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41178: comm synopkg: reading directory lblock 0
    [10260.712173] EXT4-fs (md0): previous I/O error to superblock detected
    [10260.712968] Buffer I/O error on dev md0, logical block 0, lost sync page write
    [10260.713776] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41178: comm synopkg: reading directory lblock 0
    [10260.714941] EXT4-fs (md0): previous I/O error to superblock detected
    [10260.715799] Buffer I/O error on dev md0, logical block 0, lost sync page write
    [10260.718017] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41149: comm synopkg: reading directory lblock 0
    [10260.719205] EXT4-fs (md0): previous I/O error to superblock detected
    [10260.720032] Buffer I/O error on dev md0, logical block 0, lost sync page write
    [10260.720852] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41149: comm synopkg: reading directory lblock 0
    [10260.722048] EXT4-fs (md0): previous I/O error to superblock detected
    [10260.722865] Buffer I/O error on dev md0, logical block 0, lost sync page write
    [10260.723690] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41149: comm synopkg: reading directory lblock 0
    [10260.724893] EXT4-fs (md0): previous I/O error to superblock detected
    [10260.725699] Buffer I/O error on dev md0, logical block 0, lost sync page write
    [10260.726514] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41149: comm synopkg: reading directory lblock 0
    [10260.727605] EXT4-fs (md0): previous I/O error to superblock detected
    [10260.728326] Buffer I/O error on dev md0, logical block 0, lost sync page write
    [10260.731103] EXT4-fs error (device md0): ext4_find_entry:1614: inode #41141: comm synopkg: reading directory lblock 0
    [10260.732204] EXT4-fs (md0): previous I/O error to superblock detected
    [10260.732924] Buffer I/O error on dev md0, logical block 0, lost sync page write
    [10260.733730] EXT4-fs (md0): previous I/O error to superblock detected
    [10264.816286] blk_update_request: I/O error, dev sda, sector in range 897024 + 0-2(12)
    [10264.817714] blk_update_request: I/O error, dev sda, sector in range 4980736 + 0-2(12)
    [10264.820288] blk_update_request: I/O error, dev sda, sector in range 901120 + 0-2(12)
    [10264.824346] blk_update_request: I/O error, dev sda, sector in range 897024 + 0-2(12)
    [10264.832623] blk_update_request: I/O error, dev sda, sector in range 909312 + 0-2(12)
    [10264.833553] blk_update_request: I/O error, dev sda, sector in range 2125824 + 0-2(12)
    [10264.834646] Buffer I/O error on dev md0, logical block in range 262144 + 0-2(12) , lost sync page write
    [10264.835793] Aborting journal on device md0-8.
    [10264.836372] blk_update_request: I/O error, dev sda, sector in range 2097152 + 0-2(12)
    [10264.837343] Buffer I/O error on dev md0, logical block in range 262144 + 0-2(12) , lost sync page write
    [10264.838486] blk_update_request: I/O error, dev sda, sector in range 0 + 0-2(12)
    [10264.839385] Buffer I/O error on dev md0, logical block in range 0 + 0-2(12) , lost sync page write
    [10264.839387] JBD2: Error -5 detected when updating journal superblock for md0-8.
    [10264.841774] EXT4-fs error (device md0): ext4_journal_check_start:56: Detected aborted journal
    [10264.842892] blk_update_request: I/O error, dev sda, sector 2048
    [10265.044738] blk_update_request: I/O error, dev sda, sector in range 4980736 + 0-2(12)
    [10265.316836] blk_update_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
    [10265.317809] BTRFS error (device md2): bdev /dev/md2 errs: wr 2, rd 3154, flush 0, corrupt 0, gen 0
    [10265.331020] blk_update_request: I/O error, dev sda, sector 13710688
    [10265.331719] BTRFS error (device md2): bdev /dev/md2 errs: wr 3, rd 3154, flush 0, corrupt 0, gen 0
    [10265.333550] blk_update_request: I/O error, dev sda, sector 13710688
    [10265.334225] BTRFS error (device md2): bdev /dev/md2 errs: wr 4, rd 3154, flush 0, corrupt 0, gen 0
    [10265.336264] blk_update_request: I/O error, dev sda, sector 13710688
    [10265.336927] BTRFS error (device md2): bdev /dev/md2 errs: wr 5, rd 3154, flush 0, corrupt 0, gen 0
    [10265.725824] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3154, flush 0, corrupt 0, gen 0
    [10265.911190] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3155, flush 0, corrupt 0, gen 0
    [10265.912153] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10265.913177] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10265.914171] blk_update_request: I/O error, dev sda, sector 14187288
    [10265.914175] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3156, flush 0, corrupt 0, gen 0
    [10265.915760] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10265.916778] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3157, flush 0, corrupt 0, gen 0
    [10265.917699] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10265.918730] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10265.919744] blk_update_request: I/O error, dev sda, sector 14187296
    [10265.919746] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3158, flush 0, corrupt 0, gen 0
    [10265.919749] BTRFS error (device md2): bdev /dev/md2 errs: wr 6, rd 3159, flush 0, corrupt 0, gen 0
    [10265.922210] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10265.923206] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10265.924222] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10265.925225] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10265.926239] blk_update_request: I/O error, dev sda, sector 14187296
    [10265.926892] md2: syno_self_heal_is_valid_md_stat(451): md's current state is not suitable for data correction
    [10265.928329] BTRFS error (device md2): failed to repair data csum of ino 35375 off 4096 (ran out of all copies)
    [10265.928329] 
    [10265.929571] BTRFS error (device md2): failed to repair data csum of ino 35375 off 8192 (ran out of all copies)
    [10265.929571] 
    [10265.930880] BTRFS error (device md2): failed to repair data csum of ino 35375 off 12288 (ran out of all copies)
    [10265.930880] 
    [10265.932071] BTRFS error (device md2): failed to repair data csum of ino 35375 off 0 (ran out of all copies)
    [10265.932071] 
    [10285.122533] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10285.123476] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10285.124393] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10285.125310] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10285.126282] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10285.126406] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
    [10285.126420] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
    [10285.136863] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
    [10285.137799] blk_update_request: I/O error, dev sda, sector in range 720896 + 0-2(12)
    [10285.138597] blk_update_request: I/O error, dev sda, sector in range 720896 + 0-2(12)
    [10293.649817] blk_update_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
    [10293.650971] blk_update_request: I/O error, dev sda, sector in range 9646080 + 0-2(12)
    [10293.651927] blk_update_request: I/O error, dev sda, sector in range 9646080 + 0-2(12)
    [10293.652882] blk_update_request: I/O error, dev sda, sector in range 11743232 + 0-2(12)
    [10293.652997] BTRFS error (device md2): bdev /dev/md2 errs: wr 7, rd 3203, flush 0, corrupt 0, gen 0
    [10293.652999] BTRFS error (device md2): bdev /dev/md2 errs: wr 8, rd 3203, flush 0, corrupt 0, gen 0
    [10293.653000] BTRFS error (device md2): bdev /dev/md2 errs: wr 9, rd 3203, flush 0, corrupt 0, gen 0
    [10293.653001] BTRFS error (device md2): bdev /dev/md2 errs: wr 10, rd 3203, flush 0, corrupt 0, gen 0
    [10293.653002] BTRFS error (device md2): bdev /dev/md2 errs: wr 11, rd 3203, flush 0, corrupt 0, gen 0
    [10293.653003] BTRFS error (device md2): bdev /dev/md2 errs: wr 12, rd 3203, flush 0, corrupt 0, gen 0
    [10293.653004] BTRFS error (device md2): bdev /dev/md2 errs: wr 13, rd 3203, flush 0, corrupt 0, gen 0
    [10293.653004] BTRFS error (device md2): bdev /dev/md2 errs: wr 14, rd 3203, flush 0, corrupt 0, gen 0
    [10293.661248] BTRFS error (device md2): bdev /dev/md2 errs: wr 15, rd 3203, flush 0, corrupt 0, gen 0
    [10293.661273] blk_update_request: I/O error, dev sda, sector in range 11743232 + 0-2(12)
    [10293.662955] BTRFS error (device md2): bdev /dev/md2 errs: wr 16, rd 3203, flush 0, corrupt 0, gen 0
    [10293.663889] BTRFS: error (device md2) in btrfs_commit_transaction:2347: errno=-5 IO failure (Error while writing out transaction)
    [10293.665153] BTRFS: error (device md2) in cleanup_transaction:1960: errno=-5 IO failure
    [10293.864269] blk_update_request: I/O error, dev sda, sector in range 9437184 + 0-2(12)
    [10294.863533] blk_update_request: I/O error, dev sda, sector in range 4980736 + 0-2(12)
    [10294.864685] blk_update_request: I/O error, dev sda, sector in range 106496 + 0-2(12)
    [10294.865634] Buffer I/O error on device md0, logical block in range 12288 + 0-2(12)
    [10295.066598] blk_update_request: I/O error, dev sda, sector in range 4980736 + 0-2(12)
    [10335.337416] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10335.338386] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10335.339320] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10335.340249] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10335.341176] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10335.342238] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
    [10335.343141] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
    [10335.343353] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
    [10335.343371] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
    [10335.343375] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
    [10349.636854] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10349.637657] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10349.638444] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10349.639231] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10349.640014] blk_update_request: I/O error, dev sda, sector in range 831488 + 0-2(12)
    [10349.644211] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
    [10349.645067] blk_update_request: I/O error, dev sda, sector in range 794624 + 0-2(12)
    [10349.647320] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
    [10349.648150] blk_update_request: I/O error, dev sda, sector in range 1216512 + 0-2(12)
    [10349.648154] blk_update_request: I/O error, dev sda, sector in range 835584 + 0-2(12)
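
    For completeness, this is roughly where a log like the above would send me next. These are all standard smartctl/btrfs/mdadm commands; the /volume1 mount point is an assumption about a default DSM layout rather than something taken from this box:

    # SMART health and error counters of the disk that keeps reporting I/O errors
    smartctl -a /dev/sda
    # btrfs' own per-device write/read/flush/corruption counters (assumes the data volume is mounted at /volume1)
    btrfs device stats /volume1
    # state of the md arrays named in the log (md0 = DSM system partition, md2 = data volume)
    cat /proc/mdstat
    mdadm --detail /dev/md2
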
  17. I know this is only cosmetic, but no matter how I try to hide the boot drive using DiskIdxMap in the sata_args, DSM stops working after a few hours: at first I'm only presented with a white page when trying to log in, and eventually the login attempt just reports that the disk space is full. Any ideas?

    I guess the best choice is to stick to the default "set sata_args='SataPortMap=4'" and simply ignore the bootloader showing up and the gap of four empty slots before the PCI card's drives. This way you still get to use 12 drives. (A rough grub.cfg example follows.)
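
    For anyone who still wants to experiment with hiding the boot device anyway, this is roughly the kind of grub.cfg line I mean. The port counts and index values are examples for a hypothetical layout (controller 1 holding the loader with a single port, controller 2 an 8-port HBA), not tested recommendations:

    # grub.cfg on the loader -- example values only, adjust to your own controller layout
    # SataPortMap=18  : controller 1 exposes 1 port (the loader), controller 2 exposes 8 ports
    # DiskIdxMap=0C00 : start controller 1 at slot 13 (0x0C, past the visible bays) and controller 2 at slot 1 (0x00)
    set sata_args='SataPortMap=18 DiskIdxMap=0C00'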

  18. Has anyone seen this before? It occurs after 5-10 hours and requires a reboot to get past.

    I can't see anything in dmesg or the logs, and I can't get SSH access either once the error appears (a few console checks I'd try are sketched after the screenshot).

    (screenshot attachment: Capture.PNG)
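
    When it gets into that state, these are the checks I'd run from a serial or local console, since SSH is gone. Standard commands; the paths are my assumptions about where DSM tends to accumulate data, not anything confirmed:

    # Is the system partition (md0, mounted on /) actually full?
    df -h /
    # Usual suspects for runaway growth on the system partition
    du -xsh /var/log /root 2>/dev/null
    # Any disk or md errors in the minutes before the hang?
    dmesg | tail -n 50
    cat /proc/mdstat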

  19. Apparently the latest bootloader or version of DSM does something silly.

    When running cat /proc/mdstat I get:

    md1 : active raid1 sda2[0] sdb2[1]
          2097088 blocks [16/2] [UU______________]

    But that doesn't seem right: a RAID 1 on the previous loader looked like this:

    md3 : active raid1 sdi3[0] sdj3[1]
          483564544 blocks super 1.2 [2/2] [UU]

    Netdata doesn't like the [16/2] status, since it assumes drives are missing from the array and reports it as degraded. DSM doesn't seem to care, but it's still not perfect. (The exact commands I compared are sketched below.)
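
    If anyone wants to reproduce the comparison, this is all I'm looking at; plain mdadm/mdstat output, nothing loader-specific:

    # what the superblock expects vs. what is actually active
    cat /proc/mdstat
    mdadm --detail /dev/md1 | grep -E 'Raid Devices|Active Devices|State'
    # the same fields for a data array created with metadata 1.2
    mdadm --detail /dev/md3 | grep -E 'Raid Devices|Active Devices|State'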

  20. I may have found a bug. I'm using Netdata for system monitoring on both my "production" and test XPEnology boxes (see the Storage Manager screenshot in my previous post).

    With synoboot 1.03b/DS3615xs everything is fine, but with 1.04b and two drives Netdata tells me my array is degraded, while DSM says everything is normal.

    As mentioned, this isn't an issue on the previous loader. (A possible monitoring workaround is sketched after the screenshot.)

    (screenshot attachment: Capture.PNG)
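
    For anyone hitting the same false positive, a possible workaround is to alert on the failed-device count instead of the [n/m] slot counter. A rough sketch, not something I run in production:

    # report per-array failures as mdadm sees them, ignoring the slot counter
    for md in /dev/md?; do
        printf '%s:%s\n' "$md" "$(mdadm --detail "$md" | awk -F: '/Failed Devices/ {print $2}')"
    done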
