Polar

Transition Members

  • Content count: 5
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Polar

  • Rank: Newbie
  1. Polar

    SHR non-hot-swap disk failure simulation

    "It's more like: if any drive fails (just one, of course) your data will still be there" ... Haha! And that is the technical wonder I'm trying to understand. If I have 16TB of data and only 8TB for failover, how does this work? I only know one-on-one failover systems (like I used to have in Linux: one disk has its own "second" disk for RAID). All data is continuously synced, so if one disk goes belly up, the second already has ALL the data. But I can't figure out how SHR does this with 16TB against 8TB of failover.
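For what it's worth, the answer this question is circling around is parity, not mirroring: with three equal disks, SHR behaves like RAID-5, so the "failover" disk holds parity blocks computed from the data disks rather than a copy of either one. A toy sketch of the idea, with single bytes standing in for whole disk blocks (the values are made up for illustration):

```shell
# Toy parity illustration -- each variable stands in for one disk's block.
d1=170                 # data block on disk 1 (binary 10101010)
d2=85                  # data block on disk 2 (binary 01010101)
parity=$(( d1 ^ d2 ))  # XOR parity stored on disk 3 -- not a copy of anything

# Disk 1 fails: XOR the surviving data block with the parity to rebuild it.
rebuilt=$(( parity ^ d2 ))
echo "rebuilt block: $rebuilt"   # same value as d1
```

This is why 8TB of parity can protect 16TB of data against a single-disk failure: any one missing block is recomputed from the remaining data plus parity. Two simultaneous failures, however, are unrecoverable.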
  2. Hi all, I have 2 x 8TB configured as SHR with 1-disk fault tolerance. To make sure that the RAID function is working, I was looking for a way to test it. On a Linux computer I could force-fail a disk in software: newer versions of raidtools come with a raidsetfaulty command, which lets you simulate a drive failure without unplugging anything. Is this also possible when running XPEnology? Any other suggestions on how to simulate this on a non-hot-swap system (HP N54L, BIOS not modified)?

    Another question that is bugging me: right now I have only 2 disks, and I assume the current setup is meant for the situation where disk 1 fails and the second disk takes over. That would mean there is already a synced copy of the data on disk 2, or not? And what would happen when I add a third 8TB HDD to the mix? The final goal is to have 2 x 8TB = 16TB of data storage available + 1 x 8TB for 1-disk fault tolerance. My assumption is that if disk 1 or 2 fails, disk 3 steps in. But then the question becomes: if I have data spread over disks 1 and 2, how does SHR know what data to mirror on disk 3? Is there actual data on disk 3? Thanks for clearing up this technical dilemma.
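Assuming DSM's arrays are ordinary Linux md devices (SHR is built on mdadm), the raidsetfaulty trick from raidtools has a direct modern equivalent. The device names below are hypothetical — check /proc/mdstat for the real ones on your box — and this is a sketch of the approach, not a tested DSM procedure:

```shell
MD_DEV=/dev/md2    # hypothetical data array; confirm the name in /proc/mdstat
MEMBER=/dev/sdb5   # hypothetical member partition of that array

# Guarded so the commands only run where the array actually exists.
if [ -b "$MD_DEV" ]; then
    mdadm --manage "$MD_DEV" --fail "$MEMBER"    # simulate a drive failure
    cat /proc/mdstat                             # array should now show as degraded
    mdadm --manage "$MD_DEV" --remove "$MEMBER"  # drop the "failed" member
    mdadm --manage "$MD_DEV" --add "$MEMBER"     # re-add it; a rebuild starts
fi
```

No unplugging needed, which suits a non-hot-swap chassis like the N54L; the array degrades and rebuilds exactly as it would on a real failure.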
  3. Polar

    unable to find libfuse

    Hi all. I am running DSM 5.0.4458 (model name DS3612xs). I installed Python 3.4 and pip successfully, then installed acd_cli. When mounting the folder as an ACD drive I got an error saying that the system was unable to find libfuse. Doing a find / -name libfuse gave no results. Can somebody explain how to install libfuse on my NAS? Many thanks in advance.
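A couple of non-destructive checks may help narrow this down: a FUSE mount needs both the kernel module and the userspace library, and on DSM either one can be missing. The search paths below are common library locations, not DSM-specific guarantees:

```shell
# Is kernel-side FUSE available at all?
grep -qw fuse /proc/filesystems && echo "kernel FUSE: present" \
                                || echo "kernel FUSE: missing"

# Does the userspace library exist anywhere in the usual library paths?
hits=$(find /lib /usr/lib /usr/local/lib -name 'libfuse*' 2>/dev/null | wc -l)
echo "libfuse files found: $hits"
```

If the kernel side is missing too, installing the library alone will not make the mount work.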
  4. Thx for the advice. I have been reading that section and looked at the videos, but maybe I am over-complicating things... What I cannot seem to figure out is whether I need a RAID controller or not, for the reason I explained above. Another thing I noticed is the status of my two added RDM drives. In ESXi 5.1 I added 2 drives to my VM using RDM. The drives show in XPEnology > Control Panel > Info Center > Storage. Disk 4: Virtual disk, Size 16GB, Status: Normal, Volume 1 // Disk 5: WD41EADS, Disk Size 931GB, Status "Not Initialized" // Disk 6: WD20EARX, Disk Size 1863GB, Status "Not Initialized". What does this mean? Are disks 5+6 ready for use or not? I've been reading about how XPenology installs an OS on every drive. Is that a must? Or am I missing something here? Sent from my i9220 using Tapatalk
  5. Hi. Since the N54L does not support passthrough, I am uncertain about how to go about SHR. My VM running XPEnology has no hardware RAID controller. Is that a requirement for SHR, or is it a purely software-based solution? If SHR is not possible, I guess the only way to play it safe is to set up an HA platform with 2 servers. Any other suggestions? Sent from my i9220 using Tapatalk
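For context: SHR is not hardware RAID at all — it is a layering of standard Linux software RAID (md) plus LVM, so no controller is required; the hypervisor only has to hand the VM its disks. A quick, read-only way to see the software arrays on a Linux/DSM box (a sketch; output varies by system):

```shell
# /proc/mdstat lists every Linux software-RAID array and its member disks.
if [ -r /proc/mdstat ]; then
    md_state=$(cat /proc/mdstat)
else
    md_state="(no md support on this kernel)"
fi
printf '%s\n' "$md_state"
```

On a DSM system this typically shows the small system arrays plus the data array that backs the SHR volume, confirming everything is software-managed.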