honelnik

Experiences with drive crash?

Recommended Posts

Hello everyone,

 

I'm a noob who's just considering building his first xpenology box over buying a genuine synology one.

 

I've been reading a lot and understand the basic install, the hardware, and the day-to-day operational stuff; however, I have not seen many survival stories from after things go bad.

I understand that most posts here reporting issues are catastrophic failures that require more than basic troubleshooting to recover from. What I am interested in are survival stories from people who experienced a failure and went on to resolve it without losing their data.

 

1. From what I understand, running a bare-metal install is almost identical to running a Synology box, so I shouldn't expect any problems beyond what a Synology box would give me. Correct me if I'm wrong, please.

 

2. I'm more interested in running an ESXi host with RDM:

 

A. Has anyone experienced a drive failure? Did a simple swap of the drive solve the issue?

 

B. When running xpenology as a VM, has anyone experienced a power failure? Was the system affected in any way? Were the data intact?

 

C. If my HW crashes, will I be able to migrate my drives to a new xpenology box or synology without issues/losing data? I've read multiple times that "this should be possible", however I haven't seen a post by anyone who has actually done this.

 

D. I've read a couple of posts about people having issues with their pool after expanding it. Does this generally work fine?

 

E. Is there anything on the VMWare side of things that can cause issues?

 

F. Just curious, has anybody been able to run their box without downtime for extended periods of time (years)?

 

Thanks a lot


1. From what I understand running bare metal install is almost identical to running a synology box, so I shouldn't expect any extra problems than what a synology box would give me. Correct me if I'm wrong please.

- Mostly correct. Depending on the boot image you use, drivers may be a sticking point, especially if you add eSATA cards to expand the drive count beyond what your motherboard offers. I don't know enough about Linux to say with any degree of accuracy, but I suspect the same applies to volume expansion - see link.

 

2. I'm more interested in running esxi host with RDM :

- No problem; I have one of those too, but under Workstation rather than ESXi.

 

A. Has anyone experienced a drive failure? Did a simple swap of the drive solve the issue?

- I had one drive go bad, and have also replaced smaller-capacity drives with larger ones - twice. Most recent experience: viewtopic.php?f=15&t=3339.
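For context: DSM normally handles a swap through the Storage Manager GUI, but under the hood the data volume is an ordinary Linux mdadm array. A failed-disk replacement roughly corresponds to the commands below - a sketch only, and `/dev/md2` / `/dev/sdb5` are assumed names that will differ per system:

```shell
# Mark the failing member faulty and drop it from the array
mdadm --manage /dev/md2 --fail /dev/sdb5
mdadm --manage /dev/md2 --remove /dev/sdb5

# Physically swap the disk, recreate the partition layout, then re-add it
mdadm --manage /dev/md2 --add /dev/sdb5

# Watch the rebuild progress
cat /proc/mdstat
```

Knowing this is also what makes recovery on a plain Linux box possible if DSM itself will not boot.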

 

B. When running xpenology as a VM, has anyone experienced a power failure? Was the system affected in any way? Were the data intact?

- I use a UPS, but have had an extended power outage where the battery drained. It booted up without issue. (Dedicated hardware only; no idea about VMs, but the hypervisor is smart enough to make checkpoints.)
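One common setup for the VM case is a UPS plus Network UPS Tools, with the NAS or the ESXi host running as a NUT client that shuts down cleanly before the battery dies. A minimal, hypothetical client-side `upsmon.conf` fragment - the UPS name, server address, and credentials below are placeholders:

```
# /etc/nut/upsmon.conf (client side) - names and credentials are examples
MONITOR myups@192.168.1.10 1 upsmon mypass slave
MINSUPPLIES 1
SHUTDOWNCMD "/sbin/shutdown -h +0"
```

With this in place, the client powers itself off when the server reports the battery is low, instead of taking a hard power cut.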

 

C. If my HW crashes, will I be able to migrate my drives to a new xpenology box or synology without issues/losing data? I've read multiple times that "this should be possible", however I haven't seen a post by anyone who has actually done this.

- Within reason: if you move the drives to like hardware, you are good to go.
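Migration works because DSM volumes are plain Linux md (and, for SHR, LVM) underneath, so any Linux machine can at least assemble and read them. A recovery sketch, assuming an SHR-style volume; `vg1000/lv` is the usual Synology volume-group name but may differ on your system:

```shell
# Scan the moved disks and assemble the md arrays they contain
mdadm --assemble --scan

# For SHR volumes, activate the LVM layer that sits on top of md
vgchange -ay

# Mount the data volume read-only first, to verify it survived the move
mkdir -p /mnt/volume1
mount -o ro /dev/vg1000/lv /mnt/volume1
```

Mounting read-only before anything else is the safe way to confirm the data is intact.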

 

D. I've read a couple of posts about people having issues with their pool after expanding it. Does this generally work fine?

- I have two DSM systems and mixed results: DSM 5.x, no problem; DSM 4.x, see link.
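For reference, behind the GUI an expansion is plain mdadm: add the new member, reshape, then grow the filesystem. A sketch under the assumption of a 4-disk RAID 5 growing to 5 disks, with hypothetical device names (DSM normally runs the equivalent for you):

```shell
# Add the new disk's data partition to the existing array
mdadm --manage /dev/md2 --add /dev/sde5

# Reshape the RAID 5 from 4 to 5 active devices (slow; watch /proc/mdstat)
mdadm --grow /dev/md2 --raid-devices=5

# Once the reshape finishes, grow the filesystem to fill the larger array
resize2fs /dev/md2
```

The reshape step is where reported problems tend to happen, which is why a backup before expanding is the standard advice.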

 

E. Is there anything on the VMWare side of things that can cause issues?

- Can't say; I only use VMs for testing.

 

F. Just curious, has anybody been able to run their box without downtime for extended periods of time (years)?

- Running 24x7 for months at a time; I will power down if away for extended periods, or for impending doom.

 

Thanks a lot


I have been running an HP MicroServer with XPEnology 4 on it for about a year - nice and stable, no problems. I've now upgraded (fresh install) to NanoBoot DSM 5. I have a few MicroServers running DSM and a few VMs. Stable and a very nice NAS OS!


 


Hi, maybe my experience will be of interest.

 

I have an 8-bay U-NAS box:

 

- MB: Jetway NF9E-Q77

- 500 GB SATA drive for ESXi & datastore

- Adaptec 7805

- NanoBoot + DSM 5 + ESXi 5.5

- 3 × 3 TB WD Red in RAID 5 + 1 disk as a hot spare

- the whole LUN passed to DSM as RDM and used for a single volume
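For anyone setting up the same thing: a pass-through RDM mapping on ESXi is created with `vmkfstools`, then attached to the VM as an existing disk. A sketch - the device identifier and datastore path below are placeholders:

```shell
# Find the raw device identifier of the LUN to pass through
ls -l /vmfs/devices/disks/

# Create a physical-mode (pass-through) RDM pointer vmdk for that LUN
vmkfstools -z /vmfs/devices/disks/naa.600508b1001c4d5e \
    /vmfs/volumes/datastore1/xpenology/lun0-rdm.vmdk
```

Physical mode (`-z`) passes SCSI commands through to the device; `-r` instead creates a virtual-mode RDM, which virtualizes them.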

 

As to your specific questions:

 

- I never had a drive failure as such

- I had power failures due to my own stupidity: while setting up Network UPS Tools as a server on the router and a client in ESXi, I realised too late that the box was plugged into the surge-filter socket rather than into a battery-backed socket, so a dozen power-offs occurred. No issues, except for DSM complaining that it experienced an abnormal shutdown.

- Not sure about drive migration, but as far as I remember I successfully mounted the RDM in a different DSM VM, and for sure several times under Ubuntu and CentOS VMs.

- Expansion can be a bit difficult; see my post http://xpenology.com/forum/viewtopic.php?f=15&t=3339&start=40#p24563 and that thread itself - it is not that long.

- SMART data passthrough can be an issue; I actually don't care - I will set up some notification with ESXi/Adaptec's tools a bit later. After switching to NanoBoot, I experienced DSM losing all connectivity even though the VM was still running; after rebooting it from the console (VM Tools were also off) it ran fine. I guess this is a NanoBoot issue; I will dig into it when I get a chance.

- I have not run the box for years, but it has gone several months without reboots or power-offs.

 

If you need more help, please ask.

 

BR
