Chuck12

Members
  • Content Count: 22
  • Joined
  • Last visited
  • Days Won: 2

Chuck12 last won the day on October 28, 2020

Chuck12 had the most liked content!

Community Reputation

5 Neutral

About Chuck12

  • Rank: Junior Member


  1. I have disabled it since the host redo. There were a few things that caused the problem: one of the drives in the pool was bad, which caused VMware to halt during a scrub, and another was a kernel power failure from a bad Windows update. A clean install of Windows has been stable so far. I have the VHDs set aside but still have not recovered the data yet; I need to read up on Linux some more. I thought I did everything the same as my first attempt, but it's not working the second time.
  2. Well, that didn't last long. A Windows update forced a reboot while the NAS was doing a consistency check on a separate volume. Unfortunately, this time I can't mount the array in Ubuntu to access the data. Below is the diagnostic output for the drives in the NAS (a few read-only checks I'd run next are sketched after it). Note that two of the missing drives show as Initialized and healthy in the HDD/SSD group, but they are not assigned to the specific pool. HELP!
     root@NAS:~# cat /proc/mdstat
     Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
     md4 : active raid5 sdi5[0] sdo5[3]
           7749802112 bloc
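     A minimal sketch of read-only checks that could narrow down where the two unassigned drives went (the md4 and sdX5 names are taken from my output above and are assumptions for anyone else's system):
     # Which members does the degraded array think it has, and which are missing?
     mdadm --detail /dev/md4
     # Dump the on-disk RAID superblock of every DSM data partition, including
     # the two drives that show as Initialized but are not assigned to the pool
     mdadm --examine /dev/sd[a-z]5
     # Kernel messages about what md detected at boot
     dmesg | grep -i raid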
  3. After lots of googling, I was able to get access to my data from that crashed volume. I removed the VHDs from the VM, added them to an Ubuntu VM, and followed the Synology link to recover the data (a sketch of the sequence is below). This only worked after the RAID was repaired; only then did the volume show up in the Ubuntu Files app. Previously, when the failed drive was not repaired, the volume was not available: it was visible as an option, but an error appeared when I tried to access it. I ran lvscan to display what to mount, and mine showed /dev/vg1003/lv. Since it was a btrfs file system, I then installed the btrfs-progs package an
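     For anyone following along, a minimal sketch of the sequence that worked for me on the Ubuntu VM, assuming the /dev/vg1003/lv name from my lvscan output and a /mnt/recovery mount point (both will differ on other setups):
     # Install the RAID, LVM, and btrfs userspace tools
     sudo apt-get install -y mdadm lvm2 btrfs-progs
     # Assemble the Synology arrays and activate the LVM volume group (per the Synology guide)
     sudo mdadm -Asf && sudo vgchange -ay
     # List the logical volumes; mine showed /dev/vg1003/lv
     sudo lvscan
     # Mount the btrfs volume read-only and copy the data off
     sudo mkdir -p /mnt/recovery
     sudo mount -o ro /dev/vg1003/lv /mnt/recovery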
  4. The repair process completed, the drives show as healthy, and the storage pool shows healthy, but the volume shows crashed. This is Storage Pool 4 / Volume 4. I'm running XPEnology in VMware Workstation 16.
     root@NAS:~# cat /proc/mdstat
     Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
     md5 : active raid5 sde5[0] sdd5[3] sdf5[1]
           9637230720 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
     md3 : active raid1 sdp5[0]
           1882600064 blocks super 1.2 [1/1] [U]
     md2 : active raid5 sdj5[3] sdl5[5] sdk5[4]
           15509261312 bloc
  5. I put the drives back and the repair option re-appeared. One of the drives was bad, so I'll need to get a replacement drive in its place to do the repair. Thanks
  6. I read the other post by pavey, but my situation is different. One of my pools crashed and the Actions button is grayed out. The volume is SHR-2. I followed Synology's suggestion to recover the data from https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/How_can_I_recover_data_from_my_DiskStation_using_a_PC but got stuck at step 9 (the next steps I'm considering are sketched below). I get the following when I run mdadm -Asf && vgchange -ay:
     mdadm: No arrays found in config file or automatically
     I launched GParted and can see the volume info:
     /dev/md127  14.44 TiB  Partition:
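     What I'm planning to try next, as a sketch rather than a known-good procedure (it assumes the /dev/md127 device that GParted shows is the already-assembled array):
     # Check whether the array is already assembled and what state it is in
     cat /proc/mdstat
     mdadm --detail /dev/md127
     # If md127 is already up, mdadm -Asf has nothing left to assemble,
     # so try activating LVM directly and listing the logical volumes
     vgchange -ay
     lvscan
     # If the array is not assembled, inspect the member superblocks first
     mdadm --examine --scan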
  7. Just wanted to let those interested know that v6.2.3-25426 Update 3 works fine on this VM. The update was applied to the original upload (synobootfix applied) and not the updated appliance.
  8. Hi, I have a remote NAS running DSM 6.2.3 Update 2 that I don't have physical access to, and I want to encrypt the shared folders. When I initialize the Key Manager, it recommends storing the keys on a USB drive. In a VM setup, how do I add a thin-provisioned virtual USB drive to XPEnology so I can just move that virtual USB drive to the cloud for safe storage? For VM setups, what is your process to encrypt shared folders and safeguard the keys outside of DSM for safekeeping? Thanks!
  9. I was able to update mine after following flyride's post. I'm a Linux newbie and followed his second post exactly. Update 2 worked perfectly on an existing VM. Many, many thanks to armazohairge, flyride, and everyone else for their help!
  10. It's working great now, thank you! 24 drives is more than enough for my needs. I have 13 drives, and running two separate VMs was cumbersome. I was pleasantly surprised that I was able to combine the existing SHR drives from the second VM without any trouble. All of the folder shares and permissions were intact; I just had to do a quick system partition repair and it was all good. To simplify setting this up for a friend who, like me, has more than 12 drives: would setting the USB and eSATA port configs to 0x0 be OK?
  11. I'm curious: since I don't use any virtual USB-connected drives, does it make sense to zero out the USB configuration and just rely on the internalportcfg value? (My understanding of these masks is sketched below.)
     esataportcfg="0x0"
     internalportcfg="0xffffff"
     usbportcfg="0x0"
     What would an attached virtual USB drive be used for anyway?
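     For context, my understanding of how these masks work (a sketch of the convention as I understand it, not official documentation): each set bit in the hex value claims one drive slot, counted from bit 0, and each slot should belong to exactly one of the three masks.
     # /etc/synoinfo.conf excerpt with the values above
     maxdisks="24"
     internalportcfg="0xffffff"   # bits 0-23 set -> slots 1-24 treated as internal
     esataportcfg="0x0"           # no slots reserved for eSATA
     usbportcfg="0x0"             # no slots reserved for USB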
  12. After posting the above, I tried moving drives > 13 to SATA2:x in the VM settings. Sure enough, the drives are now shown in Storage Manager and I am able to add them to a volume just like any other drive. The graphical display of available drives is not like the other screens I've seen, but as long as the drives are seen and can be used, I think I'm good for now. I'll continue to play around with this some more.
  13. I have the DSM VM set up (DS3615xs, DSM 6.2.3-25426 Release final) in VMware Workstation 15 with mods to the synoinfo.conf file as follows (the apparent overlap in these masks is discussed below):
     maxdisks="24"
     esataportcfg="0xff000"
     internalportcfg="0xFFFFFF"
     usbportcfg="0x300000"
     24 disk slots are displayed in the Storage Manager overview. I added about 18 VHDs (20 GB each) to my VM. The first 12 are shown in Storage Manager and can be included in a volume; the remaining VHDs are listed as eSATA disks. I'm able to format these eSATA-designated drives in File Station, but what I'm looking to do is to have them populate
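     Looking back at those values, my guess (an interpretation, not something I've confirmed in any documentation) is that the masks overlap, which would explain why drives above slot 12 are classified as eSATA:
     # Bit layout of the values above (each set bit = one drive slot)
     # internalportcfg 0xFFFFFF -> bits 0-23  (slots 1-24 internal)
     # esataportcfg   0xff000   -> bits 12-19 (slots 13-20) -- inside the internal range
     # usbportcfg     0x300000  -> bits 20-21 (slots 21-22) -- also inside it
     # One hypothetical non-overlapping layout, if the higher bits are usable:
     esataportcfg="0xf000000"     # bits 24-27, above the 24 internal slots
     usbportcfg="0x30000000"      # bits 28-29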