About kyeung

  1. Unfortunately my system is an Xpenology DS3615xs, and I no longer have the option to create an SHR volume (even with SHR support activated through the config file). I still have >10TB available in my volume, which should last me at least several more years, but it's kind of sad to realize that I can't add more drives to it. On top of that, I'm not sure whether the volume can even expand when I replace the existing 6TB drives with enough (at least 4) 8TB or 10TB ones.
  2. Thanks again IG-88. Apparently the screen is different for SHR (Synology Hybrid RAID), and it does not have the "Limit max drive number in RAID" option. My SHR volume was first set up under DSM 5.2 and migrated over to DSM 6.1 with SHR support reinstated in the config file. I understand that Synology has officially dropped SHR support in DSM 6.x, and I suspect this is why I could not add another disk to that legacy volume (even after the config file update). If anyone has had a different experience working with legacy SHR in DSM 6.x, please share it. Thanks!
  3. Thanks for the reply, IG-88. When I check the "RAID Group" tab under "Storage Manager", I can see the RAID type (SHR with 2-disk fault tolerance), but I don't see "Limit max drive number in RAID". Can you please show me where to find that setting?
  4. Hi, I am running ESXi 6.5.0 build 5310538 with Xpenology DS3615xs, DSM 6.1.5-15254, Jun 1.02b loader. I have two passthrough RAID cards and am currently using only one, with 8 HDDs attached (7x6TB + 1x8TB, all WD RED). When I added a 9th drive (6TB WD RED) on the second RAID card and tried to expand the volume under the "RAID Group" tab in Storage Manager, I got the error "Operation failed because errors occurred with the file system". It seems that the system has limited the volume to 8 HDDs even though the NAS can support up to 12. Is this normal? Or am I missing something in my configuration setup? P.S. I could create a new RAID Group using the 9th drive, so the drive hardware does not seem to be the issue.
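For anyone retracing the "SHR support activated through the config file" step mentioned above, the edit usually described in community guides looks like the sketch below. The file path (`/etc.defaults/synoinfo.conf`) and key names (`supportraidgroup`, `support_syno_hybrid_raid`) are taken from community loader write-ups, not official Synology documentation, so treat them as assumptions and back up the real file first. The sketch runs against a throwaway copy rather than the live config:

```shell
# Hedged sketch of the community-described SHR re-enable edit for a
# DS3615xs image. Key names and the real target path
# (/etc.defaults/synoinfo.conf) are assumptions from community guides.
conf=$(mktemp)
printf 'supportraidgroup="yes"\n' > "$conf"   # stock DS3615xs value

# RAID Groups and SHR are mutually exclusive in the DSM UI, so the usual
# edit turns RAID Groups off and declares hybrid-RAID support on:
sed -i 's/^supportraidgroup="yes"$/supportraidgroup="no"/' "$conf"
grep -q '^support_syno_hybrid_raid=' "$conf" || \
    printf 'support_syno_hybrid_raid="yes"\n' >> "$conf"

cat "$conf"
```

Note that, as post 2 above describes, this edit restores the SHR creation screen but may not help a legacy SHR volume grow: DSM 6.x on this model no longer officially supports SHR, and a DSM update can also revert the file.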
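On the 8-disk ceiling in post 4: community guides usually point at two places, and both are assumptions here rather than official documentation. First, `synoinfo.conf` holds `maxdisks` plus hex bitmasks such as `internalportcfg` (one bit per internal slot); second, on ESXi the Jun loader's `SataPortMap` setting in `grub.cfg` can impose its own per-controller limit. The bitmask for N internal slots is simply the low N bits set, which can be sanity-checked like this:

```shell
# Hedged sketch: compute the internalportcfg-style bitmask for a given
# number of internal slots. Key names (maxdisks, internalportcfg) are
# from community Xpenology guides, not official Synology docs.
slots=12                      # DS3615xs nominally supports 12 internal disks
mask=$(( (1 << slots) - 1 ))  # low 12 bits set
printf 'maxdisks="%d"\ninternalportcfg="0x%x"\n' "$slots" "$mask"
# prints:
#   maxdisks="12"
#   internalportcfg="0xfff"
```

If the values found on the box only cover 8 slots, that would match the observed limit; if they already cover 12, the loader-side `SataPortMap` setting is the next thing to check.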