Search the Community

Showing results for tags 'shr'.

Found 11 results

  1. Good morning. I have an HP Proliant MicroServer Gen8 running Proxmox, and one of its virtual machines runs XPEnology DSM 6.1. In that XPEnology VM, the first bay (sata0) holds the synoboot.img boot image, the server's four physical bays are mapped straight through as sata1, sata2, sata3 and sata4 in an SHR group, and finally sata5 is a virtual disk mapped from an SSD and used as SSD cache, so the first bay shows as occupied but unused and the following five bays hold the disks. I have got hold of another identical server and I want to migrate it with…
  2. So I have a bunch of drives in /volume1: 5x3TB and 3x6TB (added later). Now I have a bunch more drives that are currently in /volume3 (/volume2 was deleted, just 1 SSD for VM storage): 1x3TB and 2x10TB, and a bunch of unused drives: 3x3TB. So I wanted to add all of the drives into one big /volume1. So... 1. I backed up /volume1 using HyperBackup into /volume2. 2. Deleted/destroyed /volume1. 3. Added all the 3TB drives first (minus the one 3TB drive in /volume2) as well as the 3x6TB drives into the newly built /volume1, all in SHR. 4. Restored…
  3. Hi, I have a DS3617xs setup. I created an SHR volume with 4 x 2TB disks. One of the disks died, and since I decided I needed more space, I bought 2 new 4TB Red drives. I started by replacing the failed 2TB drive with a 4TB one, repaired my volume, and finally swapped another 2TB for a 4TB. All went well except that the total volume size doesn't seem to have increased: I was expecting to have 8TB available but I only have 5.6TB. Can any of you give me a clue about what's going on? (See the capacity sketch after this list.)
  4. So I've been running different versions of XPEnology for a while now, and when I think about it, the only real reason I like it is that I can use multiple drive sizes to create one volume. My workplace recently bought a brand new Supermicro server valued at over $15k USD, and the IT guy was explaining how the "pools" work in the new Windows Server environment. Apparently it's a rip of a Linux method of creating volumes, which means it now works similarly to SHR in that it can create volumes with multiple drive sizes without hiding a big chunk of data because it won't fit int…
  5. A little background... I've been thinking about setting up a NAS for quite a while. Recently my family decided to digitize the entire stockpile of VHS and camcorder Hi8 home videos. After digitizing the first box, we calculated that it's going to take many terabytes to get it all done, and cloud storage looks like it would cost hundreds of dollars a year. So I decided to pull the trigger and build a NAS to store them all along with all my other media. I have Rokus all through the house and purchased Plex; Rokus have a great Plex application. My co-worker suggested building my own…
  6. Hi everyone, I made the really stupid mistake of deleting volume 1 of my NAS. The volume consists of two 2TB WD drives and one 1TB WD drive. One of the 2TB drives failed and I was supposed to repair it. I've done this before, but in this instance it totally slipped my mind that I need to uninstall the failed drive first before running the repair function. I made the moronic mistake of thinking the repair function was the remove volume function, and now I've lost volume 1. The failed drive is now uninstalled and the other 2 drives show they are healthy, but the status is "…
  7. General FAQ / General frequently asked questions. A translation; the original is here: https://xpenology.com/forum/topic/9392-general-faq/ The purpose of this guide is to answer commonly asked questions! This is not a section for asking for help; you can use the forum for that. 1. What is XPEnology? XPEnology is a Linux-based bootloader developed to emulate the (original) Synology bootloader, allowing the Synology Disk Station Manager (DSM) operating system (OS) to run on third-party hardware (read: hardware produced…
  8. Hi, if someone has some time to spare, it might be worth a try to have a look into the extra.lzma /etc/jun.patch. Jun is using that to patch (diff files) DSM config files at boot on the 916+. He is patching synoinfo.conf to maxdisks=12 (there might be a mistake in that case, as he sets the internal disks to 0xff instead of 0xfff? Maybe just a typo no one recognized before?). That could also be done on 3615/17 to achieve a higher disk count and activate SHR, and as the patch (diff) only kicks in if it matches exactly, it could be done in a way that it kicks in when… (See the bitmask sketch after this list.)
  9. I'm just getting started on my own bare-metal installation, but I have been running several true Synology systems for years. The first thing I noticed was the lack of SHR (Synology Hybrid RAID) as an option. Is this a hardware limitation or something else?
  10. Alright, strap yourselves in, because this might get long... Hardware setup: 4x WD 2TB Red in SHR, ASRock H81M-HDS motherboard, Intel Celeron processor, 8GB Crucial Ballistix RAM. First, some background: a few days ago I noticed the network drives that I have on my system were not showing up in Windows, so I navigated to the system via my browser, and the system told me I needed to install an update and that my drives were from an old system and would need migration. I wrote a different post about that here: The versions it wanted to install were…
  11. I have run DSM 5.2 for quite a while, initially on an HP MicroServer, then on ESXi. I upgraded a couple of times (from DSM 4) and didn't have issues. This time I upgraded from 5.2 to 6.1 and also moved from ESXi to running directly on a Lenovo TS440. It took me a while to make the upgrade and migration work because I initially downloaded the wrong loader image. When it finally worked, I noticed one of the 6 disks that make up my primary volume was missing. The files are still fine, but the volume was in a 'degraded' state. Before I realized it was because DS3615 supports up to 12 internal drives by default…
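
For the capacity question in result 3, a rough way to reason about SHR-1 usable space is to lay the drives out in layers: each layer spans every drive that still has free space, gives up one drive's share of that layer to redundancy, and needs at least two drives to be usable. Below is a minimal Python sketch of that commonly described layering rule, not code taken from DSM:

```python
def shr1_usable(drive_sizes_tb):
    """Rough usable capacity of an SHR-1 pool, computed layer by layer.

    Each layer spans all drives that still have space left and loses one
    drive's share of the layer to redundancy; leftover space on a single
    drive stays unusable until a second large drive joins it.
    """
    remaining = sorted(drive_sizes_tb)
    usable = 0
    while sum(1 for r in remaining if r > 0) >= 2:
        active = [r for r in remaining if r > 0]
        layer = min(active)                    # thickness of this layer
        usable += layer * (len(active) - 1)    # one share lost to redundancy
        remaining = [r - layer if r > 0 else 0 for r in remaining]
    return usable

print(shr1_usable([2, 2, 2, 2]))  # 6 -> the original 4 x 2TB pool
print(shr1_usable([2, 2, 4, 4]))  # 8 -> expected after swapping in two 4TB drives
```

By this reckoning a 2+2+4+4 layout should end up around 8TB, so seeing roughly 5.6TB (about the first 6TB layer, shown in DSM's binary units) suggests the pool has not yet been expanded onto the extra space of the two 4TB drives.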
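Result 8's 0xff versus 0xfff question comes down to bit counting: if the internal-disk setting in synoinfo.conf is a hex bitmask with one bit per internal slot, as that post assumes, then maxdisks=12 needs twelve set bits. A small sketch of the arithmetic; the one-bit-per-port interpretation is taken from the post rather than verified against DSM:

```python
def port_mask(num_ports: int) -> str:
    """Hex bitmask with one bit set per internal disk port."""
    return hex((1 << num_ports) - 1)

print(port_mask(8))   # 0xff  -> covers only 8 internal slots
print(port_mask(12))  # 0xfff -> matches maxdisks=12
```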