XPEnology Community

voidru

Rookie
  • Posts: 3
Everything posted by voidru

  1. When the RAID transformation finished, I got a degraded RAID5 (2 of 3 disks). The new disk was marked 'crashed', so I unplugged it and ran a SMART check and a surface test -- no errors. Then I plugged the disk into another SATA port and started the RAID5 rebuild. The rebuild finished successfully and my DSM is now healthy. Thanks everyone! The thread can be closed.
  2. Yes, it is. I decided to follow your suggestion; the conversion will finish in 24 hours. Fortunately, I've made a full backup.
  3. Hi, I have the following configuration: ESXi 6.7.0 Update 2, XPEnology DSM 6.1.5-15254 (DS3615xs), 2 x WD Red 3 TB added to the VM using vmkfstools -z (physical RDM), RAID1, btrfs. I was running low on space in DSM, so I decided to add another WD Red 3 TB (same WD model, but 4 years newer) to the VM and migrate from RAID1 to RAID5. I made a full backup and started the migration. The progress bar showed the migration would finish within 48 hours, but after several hours I got the following in the Log Center: Disk 3 (the new one) is marked as "Crashed", yet the RAID change keeps running. Here is what I have in /var/log/messages: and /proc/mdstat: S.M.A.R.T. on ESXi showed no errors: Nevertheless, the volume is mounted and my files are accessible over SMB, although latency is high due to the migration. I believe something is wrong with the new HDD, and I will never feel safe knowing there were errors during the RAID migration. Any suggestions? Thanks. UPD: The new disk shows a 0 read/write rate in the ESXi monitor: So I guess in the end I will get a RAID5 with only 2 disks (one missing).
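For anyone hitting the same situation: the degraded state reported by DSM can be confirmed from the shell by reading the health string in /proc/mdstat. A minimal sketch below, run against sample mdstat text for a 3-disk RAID5 with one failed member (the array name md2 and the sample output are illustrative assumptions, not the original poster's actual paste):

```shell
#!/bin/sh
# Sample /proc/mdstat output for a degraded 3-disk RAID5 (illustrative text;
# on a live box you would use: cat /proc/mdstat).
mdstat='md2 : active raid5 sda3[0] sdb3[1]
      5850889088 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/2] [UU_]'

# Extract the member-health string: each U is an in-sync disk, each _ is a
# missing or failed one. The bracketed [3/2] (wanted/active) is skipped
# because the pattern only allows U and _ inside the brackets.
health=$(printf '%s\n' "$mdstat" | grep -oE '\[[U_]+\]')
echo "$health"   # → [UU_]

# Any underscore means the array is running degraded.
case "$health" in
  *_*) state=degraded ;;
  *)   state=healthy ;;
esac
echo "$state"    # → degraded
```

On the live system, `mdadm --detail /dev/md2` gives the same information per member, including which slot is marked faulty.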
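For context on the vmkfstools -z step mentioned above: -z creates a physical-compatibility raw device mapping, which passes the whole disk (including SMART data) through to the VM. A hedged sketch of the command line follows; the device NAA identifier and datastore paths are placeholders, not the author's actual values, and the script only prints the command for review rather than running it on a host:

```shell
#!/bin/sh
# Hypothetical device and mapping-file paths -- replace with your own.
DEVICE="/vmfs/devices/disks/naa.50014ee2b1234567"
MAPFILE="/vmfs/volumes/datastore1/xpenology/wd-red-3tb-rdm.vmdk"

# vmkfstools -z <device> <mapfile> creates a physical-mode RDM; the .vmdk
# mapping file is then attached to the VM as an existing disk.
CMD="vmkfstools -z $DEVICE $MAPFILE"
echo "$CMD"   # review, then run on the ESXi host shell
```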