Bose321

Members
  • Content Count: 18
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Bose321
  • Rank: Newbie

  1. Is it strange that /dev/vg1000 isn't available? I keep reading about it everywhere, but it doesn't exist on my system, and almost every recovery command I can find assumes it. I used passthrough disks in the past, but that was later no longer advised, so I switched away from them. I've got the volume mounted with this command (a sketch of the related checks follows after this list):

       mount -o recovery,ro /dev/md2 /volume1

     That works and I can see my files, but I can't mount it normally. I'll have to back everything up and see whether I can remount it, or just recreate the volume. edit2: Okay, weird. I just shut my VM down, spun it up again, and my volume is back to normal!?
  2. Of course, sorry (a hedged recovery sketch follows after this list):

       [ 2362.066098] parent transid verify failed on 2854282428416 wanted 5246133 found 5245893
       [ 2362.066580] parent transid verify failed on 2854282428416 wanted 5246133 found 5245893
       [ 2362.066953] md/raid1:md2: syno_raid1_self_heal_set_and_submit_read_bio(1226): No suitable device for self healing retry read at round 2 at sector 1563983552
       [ 2362.067252] md/raid1:md2: syno_raid1_self_heal_set_and_submit_read_bio(1226): No suitable device for self healing retry read at round 2 at sector 1563983560
       [ 2362.067536] md/raid1:md2: syno_raid1_self_heal_set_and_submit_read_bio(1226): No suitable device for self healing retry read at round 2 at sector 1563983576
       [ 2362.067808] md/raid1:md2: syno_raid1_self_heal_set_and_submit_read_bio(1226): No suitable device for self healing retry read at round 2 at sector 1563983568
       [ 2362.068123] parent transid verify failed on 2854282428416 wanted 5246133 found 5245893
       [ 2362.068127] BTRFS error (device md2): BTRFS: md2 failed to repair parent transid verify failure on 2854282428416, mirror = 2
       [ 2362.089368] BTRFS: open_ctree failed
  3. Thanks, but that tells me this:

       mount: wrong fs type, bad option, bad superblock on /dev/md2,
              missing codepage or helper program, or other error

              In some cases useful info is found in syslog - try
              dmesg | tail or so.
  4. Thanks, here are the outputs (a few follow-up checks are sketched after this list):

       synodisk --enum
       ************ Disk Info ***************
       >> Disk id: 2
       >> Slot id: -1
       >> Disk path: /dev/sdb
       >> Disk model: Virtual SATA Hard Drive
       >> Total capacity: 2794.00 GB
       >> Tempeture: -1 C
       ************ Disk Info ***************
       >> Disk id: 3
       >> Slot id: -1
       >> Disk path: /dev/sdc
       >> Disk model: Virtual SATA Hard Drive
       >> Total capacity: 2794.00 GB
       >> Tempeture: -1 C
       ************ Disk Info ***************
       >> Disk id: 4
       >> Slot id: -1
       >> Disk path: /dev/sdd
       >> Disk model: Virtual SATA Hard Drive
       >> Total capacity: 50.00 GB
       >> Tempeture: -1 C
       ************ Disk Info ***************
       >> Disk id: 5
       >> Slot id: -1
       >> Disk path: /dev/sde
       >> Disk model: Virtual SATA Hard Drive
       >> Total capacity: 3500.00 GB
       >> Tempeture: -1 C

       synodisk --detectfs /volume1
       Partition [/volume1] unknown

       cat /etc/fstab
       none /proc proc defaults 0 0
       /dev/root / ext4 defaults 1 1
       /dev/md2 /volume1 btrfs auto_reclaim_space,synoacl,relatime 0 0
       /dev/md4 /volume3 btrfs auto_reclaim_space,synoacl,relatime 0 0
       /dev/md3 /volume2 btrfs auto_reclaim_space,synoacl,relatime 0 0
       /dev/md5 /volume4 btrfs auto_reclaim_space,synoacl,relatime 0 0

       cat /proc/mdstat
       Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
       md2 : active raid1 sdb3[0]
             2924899328 blocks super 1.2 [1/1] [U]
       md3 : active raid1 sdc3[0]
             2924899328 blocks super 1.2 [1/1] [U]
       md4 : active raid1 sdd3[0]
             47606784 blocks super 1.2 [1/1] [U]
       md5 : active raid1 sde3[0]
             3665193984 blocks super 1.2 [1/1] [U]
       md1 : active raid1 sdb2[0] sdc2[1] sdd2[2] sde2[3]
             2097088 blocks [12/4] [UUUU________]
       md0 : active raid1 sdb1[0] sdc1[1] sdd1[3] sde1[2]
             2490176 blocks [12/4] [UUUU________]
       unused devices: <none>
  5. I've updated to 6.2.3 and now one of my four volumes shows as crashed. I'm running on VMware and I applied the synoboot fix from here: The SATA drives that wrongly appeared are gone, so that's good, but my volume1 is still crashed. The pool and the disk both still show as healthy. All volumes and pools are on separate disks and are basic, no RAID or anything; I believe SHR. IIRC it was btrfs. I can still cd to /volume1 via SSH, but I only see a `@database` folder, with mariadb10 in it, it seems. So can this be fixed somehow, or is everything gone? Thanks in advance.
  6. Can anyone help me out here? I've noticed that one of my volumes crashed after updating from 6.2.2 to 6.2.3. So I looked into it and noticed that I needed to run this script. The two SATA drives (or something like that) are now gone, so that's good. However, my volume is still crashed. The pool and disk that the volume sits on are still healthy according to DSM. Is there anything I can do, or am I in trouble? Most of my packages were on that volume... I can still cd to /volume1, but all I see is a `@database` folder there.
  7. Just managed to get this working in VMware Workstation 12 by following the first post. I had to change my NIC from e1000 to e1000e (a sketch of that change follows after this list), but after that it worked fine. I'm on the latest release now. What's the best way to get my existing 6.1 install onto 6.2.2? Can I just use the new loader (1.02b now, and 1.03b in a test VM), change the NIC, and then update the DSM version? Or should I migrate to the new VM?
  8. Bose321

    DSM 6.2 Loader

    I can't get the new loaders (1.03 and 1.04) to work on my VMware Workstation installation. It's working fine on 6.1 with 1.02 right now, but I'd like to update. I assume I can just boot the new loader and update afterwards? I tried 1.03, but I'm not getting anything in Synology Assistant. I haven't tried 1.04, since I've got a 3615 now. I once had a 3617, which worked great, but it stopped working for some reason... no internet access. I downgraded to a 3615 and it worked again. Is there any difference for a VM? I haven't noticed any. What about updates? I've read that you can't run the latest version but have to stay on a specific one?
  9. Bose321

    DSM 6.2 Loader

    I'm running 6.1.7 on a VMware Workstation install. Can I just reboot with 1.03 or 1.04 and upgrade to 6.2 directly via DSM? Or should I try it via the .pat file, or isn't it that easy at all? Right now I have a 3615xs, but there's no loader for that on 1.04?
  10. Thanks. I tried that, but it didn't work. Switching to another loader and migrating worked! Thanks a lot. Very strange issue...
  11. Bose321

    DSM 6.1.x Loader

    It's not booting any more for me on VMware Workstation with the 1.02 ds3617 loader. It says it boots, but there's nothing on the network.
  12. Same here on my production VM with DSM 6.1.6-15266. Any solution?
  13. I've had a VMware Workstation installation running for a long time and decided to update. I checked the thread first and saw nothing weird there. I updated and waited for it to reboot, but it was taking a while, so I tried to ping it. Nothing. I tried rebooting it, but still nothing. Does anyone have an idea? I'm not sure if there's an error or something... I'm using Jun's 1.02b loader.
  14. Right, sorry, I did mean Workstation or Player. I'll try to move some stuff over and start over.
  15. Thanks. I guess I'll have to find a way to transfer my files then... ESXi or bare metal isn't an option, since I've got an HTPC for Kodi with DSM running as a VM. VMware supports 6.1 well, right?
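
For the recovery mount in post 1, here is a minimal sketch of the checks involved, assuming the device names from that thread (/dev/md2, /volume1) and that the LVM and mdadm tools are present on the box. Whether a vg1000 exists at all depends on how the volume was created, so treat this as a starting point, not a fix:

    # Is there an LVM layer (the vg1000 most guides assume)? On single-disk
    # "basic" volumes DSM can put the filesystem directly on the md device,
    # so these may legitimately show nothing.
    vgdisplay
    lvs

    # Confirm the md array itself is assembled and clean.
    cat /proc/mdstat
    mdadm --detail /dev/md2

    # The read-only recovery mount from the post; it does not write to the disk.
    mount -o recovery,ro /dev/md2 /volume1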
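The log in post 2 ("parent transid verify failed" followed by "open_ctree failed") points at btrfs metadata damage rather than a broken md array. A hedged sketch of non-destructive options, again assuming /dev/md2 and /volume1 from the thread; /mnt/rescue is a hypothetical destination directory, and copying data off before any repair attempt is the safe path:

    # Try the backup tree roots, read-only. Older kernels call the mount
    # option "recovery", newer ones "usebackuproot".
    mount -o ro,recovery /dev/md2 /volume1 || mount -o ro,usebackuproot /dev/md2 /volume1

    # Read-only consistency check: report only, no repairs.
    btrfs check --readonly /dev/md2

    # btrfs restore can copy files out of an unmountable filesystem;
    # -D is a dry run that only lists what would be restored.
    btrfs restore -D /dev/md2 /mnt/rescue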
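The output in post 4 shows the md arrays up ([1/1] [U]) while synodisk can't detect a filesystem on /volume1. A few follow-up checks that narrow down where it breaks, assuming the same /dev/md2:

    # What filesystem signature does the block layer see on the array?
    blkid /dev/md2

    # Does btrfs still recognise a filesystem (label, UUID, device) on it?
    btrfs filesystem show /dev/md2

    # Kernel-side errors from the last mount attempt.
    dmesg | tail -n 50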
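Post 7 mentions switching the virtual NIC from e1000 to e1000e. A minimal sketch of what that change looks like in the VM's .vmx file, edited with the VM powered off; "ethernet0" assumes the first adapter, and the index may differ on your VM:

    Before: ethernet0.virtualDev = "e1000"
    After:  ethernet0.virtualDev = "e1000e"

Workstation's settings dialog typically doesn't expose the adapter type, so the edit is usually made in the .vmx file directly.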