Search the Community

Showing results for tags 'shr'.


Found 9 results

  1. So I have a bunch of drives in /volume1: 5x3TB + 3x6TB (added later). I also have more drives currently in /volume3 (/volume2 was deleted; it's just 1 SSD for VM storage): 1x3TB + 2x10TB, plus a bunch of unused drives: 3x3TB. I want to merge all of these into one big /volume1. The plan:
     1. Back up /volume1 with Hyper Backup into /volume2.
     2. Delete/destroy /volume1.
     3. Build a new /volume1 from all the 3TB drives (minus the one 3TB drive in /volume2) plus the 3x6TB drives, all in SHR.
     4. Restore the Hyper Backup from /volume2, then destroy /volume2.
     5. Expand /volume1 with the 2x10TB from /volume2.
     That leaves 1x3TB unused. Questions:
     1. Will this work?
     2. What happens to my DSM and apps during deletion, SHR creation, and restoration from Hyper Backup?
     3. Will the newly created volume/storage pool be /volume1 or /volume4 after I remove /volume1?
     4. Anything else I need to worry about? /volume1's used size is 20TB, and the Hyper Backup in /volume2 is 18.4TB. Cheers and thanks!
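On question 4, a quick back-of-the-envelope capacity check suggests the restore should fit. This sketch assumes the step-3 array is 8x3TB + 3x6TB (my reading of "all 3TB drives minus the one in /volume2") with single-disk SHR redundancy; SHR layers classic RAID over equal-size slices of the disks:

```shell
#!/bin/sh
# Rough SHR usable capacity for the intermediate array in step 3.
#   slice 1: 3TB from all 11 disks          -> RAID 5 -> (11-1)*3 = 30 TB
#   slice 2: extra 3TB on the three 6TB     -> RAID 5 -> (3-1)*3  =  6 TB
slice1=$(( (11 - 1) * 3 ))
slice2=$(( (3 - 1) * 3 ))
usable=$(( slice1 + slice2 ))   # 36 TB decimal, ~32.7 TB as DSM displays it
echo "usable: ${usable} TB"
```

So the 18.4TB backup (and even the full 20TB of used data) fits with plenty of headroom, before filesystem overhead.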
  2. Hi, I have a DS3617xs setup. I created an SHR volume with 4x 2TB disks. One of the disks died, and I decided I needed more space anyway, so I bought 2 new 4TB Red drives. I started by replacing the failed 2TB drive with a 4TB, repaired my volume, and finally swapped another 2TB for a 4TB. All went well except the total volume size doesn't seem to have increased: I was expecting 8TB available, but I only have 5.6TB. Can any of you give me a clue about what's going on?
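For reference, the arithmetic behind the 8TB expectation (a sketch of SHR's layering scheme, single-disk redundancy assumed):

```shell
#!/bin/sh
# Expected SHR usable space for 2x2TB + 2x4TB, one-disk redundancy.
# SHR stacks classic RAID across equal-size slices of the disks:
#   slice 1: 2TB from all four disks    -> RAID 5 -> (4-1)*2 = 6 TB
#   slice 2: extra 2TB on the 4TB pair  -> RAID 1 ->           2 TB
slice1=$(( (4 - 1) * 2 ))
slice2=2
expected=$(( slice1 + slice2 ))   # 8 TB decimal, ~7.3 TB as DSM displays it
echo "expected: ${expected} TB"
```

Note that the base slice alone (6TB decimal ≈ 5.45 in binary units) is close to the reported 5.6TB, which would suggest the extra slice on the two 4TB drives hasn't been claimed yet; worth checking whether Storage Manager is still offering an expand action for the volume.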
  3. So I've been running different versions of XPEnology for a while now, and when I think about it, the only real reason I like it is that I can use multiple drive sizes to create one volume. My workplace recently bought a brand-new Supermicro server valued at over $15k USD, and the IT guy was explaining how "pools" work in the new Windows Server environment. Apparently it's a rip of a Linux method of creating volumes, which means it now works similarly to SHR: it can create volumes from multiple drive sizes without hiding a big chunk of capacity because it won't fit into an equal RAID amount. For me the biggest plus would be that if I have to restart my server, I won't have to rebuild the entire volume, because Windows Server supports more than 12 drives on reboot, unlike XPEnology. Not to mention hyperthreading power. I just wanted to hear people's thoughts on this. Would you switch to Windows Server 2016? Why or why not?
  4. A little background... I've been thinking about setting up a NAS for quite a while. Recently my family decided to digitize our entire stockpile of VHS and camcorder Hi8 home videos. After digitizing the first box, we calculated that it's going to take many terabytes to get it all done, and cloud storage is looking to be hundreds of dollars a year. So I decided to pull the trigger and build a NAS to store them all, along with all my other media. I have Rokus all through the house and purchased Plex; Rokus have a great Plex application. My co-worker suggested building my own NAS to save money and get one capable of supporting Plex transcoding and the various other compute-intensive activities desirable on a NAS.

     I'm using a Windows box with an i3 processor and 12 GB RAM that was lying around. It had a couple of 500 GB drives, and I added 2 more 4 TB drives with the expectation of upgrading the 500s over time. In another thread on this forum, I was told that all I needed was a supported SATA 3.0 expansion card with the port density I wanted, and to let DSM's software RAID do the work. I went with that advice and installed a four-port SATA 3.0 card, giving me 6 ports total including the 2 motherboard SATA 3.0 ports. Currently the 4 drives are attached to the card. I had a small spare SSD that I connected to the motherboard, which I intend to use as a cache.

     After the hardware build, I followed these instructions: and successfully installed DSM from the .pat file linked in that tutorial. Using the two 500 GB drives was based on my co-worker's experience of using SHR to get a redundant RAID setup with different-sized drives and the ability to swap out a bad drive for a larger one and/or upgrade to larger drives in the future. However, after the install, when attempting to build the RAID group, there was no SHR option. Some quick research revealed that not all versions of DSM support SHR. Here's a page that lists those versions: Looking in the specs of my DSM install shows that it thinks it's a DS3615xs device, and sure enough, that model is listed on the above page as not supporting SHR. So my questions are: should I pursue installing a version of DSM that does support SHR with the XPEnology boot loader? I really like the idea of being able to use whatever size drives I want. If I set up a traditional RAID, I'll need to purchase two more 4 TB drives now, and then I'd have to start all over if I want to increase drive size instead of adding more 4 TB drives. If SHR is desirable, what version of DSM should I use, and where do I get it? Thanks
  5. Hi everyone, I made the really stupid mistake of deleting volume 1 of my NAS. The volume consists of 2x 2TB WD drives and 1x 1TB WD drive. One of the 2TB drives failed and I was supposed to repair it. I've done this before, but in this instance it totally slipped my mind that I need to uninstall the failed drive first before running the repair function. I made the moronic mistake of thinking the repair function was the remove-volume function, and now I've lost volume 1. The failed drive is now uninstalled; the other 2 drives show as healthy, but their status is "system partition failed". Is there a way I can rebuild volume 1, or just remount it from the data on the 2 remaining drives? Thanks so much for your help. Details: RAID: SHR Machine: DS3615xs DSM Version: 5.2-5644
  6. Hi, if someone has some time to spare, it might be worth having a look into the extra.lzma at /etc/jun.patch. Jun uses it to patch (diff) DSM config files at boot. On the 916+ he patches synoinfo.conf to maxdisks=12 (there might be a mistake there, as he sets the internal disks to 0xff instead of 0xfff? maybe just a typo no one noticed before). The same could be done on 3615/17 to achieve a higher disk count and activate SHR. And since a patch (diff) only kicks in if it matches exactly, it could be written so that it applies whenever the already-modded synoinfo.conf is reset to the default one, like when a DSM update resets synoinfo.conf. Mostly interesting for people with a disk count higher than 12. Anyone willing to build and test it? If done and tested, it could become part of the extra.lzma I make for the additional drivers (an extended jun.patch file): a new default disk count of 24 (needs to touch maxdisks, usbportcfg, internalportcfg, esataportcfg) and SHR activated. In the best case there will be much less hassle when updating DSM.
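For anyone attempting this, the four synoinfo.conf fields are per-port bitmasks that must not overlap: internal slots occupy the low bits, with eSATA and then USB stacked above them. A sketch of the arithmetic, assuming a 24-internal / 4-eSATA / 2-USB layout (the eSATA/USB counts here are illustrative, not from the post):

```shell
#!/bin/sh
# One bit per drive slot: internal ports first, then eSATA, then USB.
# Note 12 internal ports would give (1<<12)-1 = 0xfff, which is why the
# 0xff in jun.patch looks like a typo on a maxdisks=12 box.
internal=24; esata=4; usb=2
internalportcfg=$(( (1 << internal) - 1 ))
esataportcfg=$(( ((1 << esata) - 1) << internal ))
usbportcfg=$(( ((1 << usb) - 1) << (internal + esata) ))
printf 'maxdisks="%d"\n' "$internal"
printf 'internalportcfg="0x%x"\n' "$internalportcfg"
printf 'esataportcfg="0x%x"\n' "$esataportcfg"
printf 'usbportcfg="0x%x"\n' "$usbportcfg"
```

With these inputs it prints internalportcfg="0xffffff", esataportcfg="0xf000000", usbportcfg="0x30000000"; adjust the three counts to match the real hardware before writing anything into synoinfo.conf.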
  7. I'm just getting started on my own bare-metal installation, but I have been running several true Synology systems for years. The first thing I noticed was the lack of SHR (Synology Hybrid RAID) as an option. Is this a hardware limitation or something else?
  8. Alright, strap yourselves in, because this might get long... Hardware setup: 4x WD 2TB Red in SHR, ASRock H81M-HDS mobo, Intel Celeron processor, 8GB Crucial Ballistix RAM.

     First, some background: a few days ago I noticed the network drives on my system were not showing up in Windows, so I navigated to the system via my browser, and it told me I needed to install an update and that my drives were from an old system and would need migration. I wrote a different post about that here: The version it wanted to install was the same (or slightly higher) 5.2, so I thought nothing of it and agreed to let the system update. It went through the install smoothly but never rebooted. Eventually I was able to get back to the web interface, and it told me I now had 5.2 firmware but 6.1-15152 DSM. I am still unclear how this install happened, but I assume it was downloaded automatically from the internet even though I had enabled the "only install security patches" option. As I posted in the Tutorial forum a few posts after the linked one, I was able to get Jun's loader installed and boot into 6.1-15152, and I thought all was well.

     However, when I booted into DSM, I was in a world of hurt. I clearly have one bad disk in the array that lists bad sectors, but that's the point of an SHR array, right? Well, I let the RAID start to repair itself, and always around 1.5% into the repair it crashes and tells me the system volume has crashed. You'll also notice in the Disk Info section there are only 3 disks. The logs show that Disk 5 (the bad one) failed while trying to correct bad sectors. However, when this happens, Disk 1 (AFAIK a perfectly fine drive) switches to "Initialized, Normal", drops out of the RAID array, and then the whole thing goes into crashed mode. I don't understand the relationship between Disk 5 crashing out during the repair and Disk 1 disappearing. It stands to reason that if Disk 1 is fine, which it seems to be, the array would just stay in degraded mode until I can swap in a new drive. I have tried starting the system with Disk 5 unplugged, but that does no good. I have also begun playing with data recovery from a LiveUSB of Ubuntu, using some of Synology's guides as well as just googling around.

     So I suppose I have a few questions:
     1. Does anyone know of a relationship between possibly installing the new system and the bad disk causing the good disk to crash?
     2. How likely is it that Disk 1 (the AFAIK-good disk) is also toast?
     3. Do you have any tips for recovering data from a situation like this?
     I would greatly appreciate any help or advice. I have been banging my head against a wall for 3 nights working on this. I have all the really important stuff backed up to the cloud, so it's not a matter of life and death (5 years, 10,000 photos), but there is a lot of other media that I am willing to do a lot to avoid replacing, or to only replace some of.
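On question 2, the quickest way to judge whether Disk 1 is actually failing is to read its SMART counters and md's view of the array from a shell (over SSH or from the Ubuntu LiveUSB). A sketch only: the device names below are assumptions to be checked against `fdisk -l` first, and nothing here writes to the disks:

```shell
#!/bin/sh
# Health triage for a suspect member disk; /dev/sda stands in for the
# real device of "Disk 1".
sudo smartctl -a /dev/sda | grep -Ei 'overall|reallocated|pending|uncorrect'

# md's own view of every array and its members:
cat /proc/mdstat
sudo mdadm --detail /dev/md2   # md2 is commonly the data array on DSM
```

A drive that only drops out under repair load but shows zero reallocated or pending sectors often points at a cable, port, or power problem rather than the platters, which would also fit the "good disk vanishes when the bad disk thrashes" pattern.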
  9. I have run DSM 5.2 for quite a while, initially on an HP MicroServer, then on ESXi, and upgraded a couple of times (from DSM 4) without issue. This time I upgraded from 5.2 to 6.1 and also moved from ESXi to running directly on a Lenovo TS440. It took me a while to make the upgrade and migration work because I initially downloaded the wrong loader image. When it finally worked, I noticed one of the 6 disks that make up my primary volume was missing. Files were still fine, but the volume was in a degraded state.

     Before I realized this was because DS3615 supports up to 12 internal drives by default, and just by chance the missing drive was in the 13th slot, I did a couple of things to try to identify which drive was missing and whether it was damaged:
     - Multiple tries pulling out individual drives. At one point two drives were missing, but I could get SHR back once I put everything back.
     - I narrowed it down to one of three drives, so instead of pulling them one by one, I rotated them into different slots. Call them A, B, C in original slot order; after the switch, the order became B, C, A.
     - Now I could see that C was the one originally missing, but A went missing instead, and C appeared with a system-partition access error. At this point the volume was "crashed".

     I think the fatal step was that I opted to "repair" the system partition on C at this point; I should have put everything back instead. Of course I still didn't have all the drives, but I finally remembered the 12-disk default limit on DS3615, and I changed it to 24, expecting all would be good. Instead I was presented with the degraded volume showing 0 size and 0 available space, and the shared directories disappeared from both the web interface and `ls -l` over SSH. The missing drive became an available drive, so I used it to "repair" the volume. I was really worried by now, but still hopeful that after the repair I would be able to recover some data. I was wrong: after the repair I have a volume with full size (about 13T) but fully used, 0 bytes free, and all folders still missing.

     I did back up my photo/home-video folders, home folders, and shared document folders. I do wish to get my video, music, and ebook libraries back, so that I don't need to go back to the original media (some might not be here anymore). I am quite comfortable with the Linux command line; if someone can point out a way to recover the data, it would be very much appreciated!!! Thank you in advance!
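Since the poster is comfortable on the command line: Synology's knowledge-base procedure for reading data off the drives in a PC boils down to assembling the md array read-only under a Linux live system. A sketch under those assumptions; the device names and the LVM volume name are guesses that must be checked against the `--examine` output first, and nothing should be repaired or resynced:

```shell
#!/bin/sh
# Read-only recovery sketch for an SHR data volume on an Ubuntu live
# system: examine first, assemble read-only, mount read-only, then copy
# the data off to a separate disk.
sudo apt-get update && sudo apt-get install -y mdadm lvm2

# 1. Survey what the kernel and the on-disk superblocks think is there.
cat /proc/mdstat
sudo mdadm --examine /dev/sd[a-f]5   # the data partition is usually the largest one

# 2. Assemble every detected array without triggering a resync.
sudo mdadm --assemble --scan --readonly

# 3. SHR volumes normally sit on LVM; activate it and mount read-only.
sudo vgchange -ay
sudo mount -o ro /dev/vg1000/lv /mnt   # volume group name is a guess; check `lvs`
```

If the assemble step refuses because too many members are marked failed, `--force` can sometimes bring back a readable-but-degraded set, but given the repairs already run here, that is exactly the step worth posting the `--examine` output about before attempting.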