Rowdy

Members
  • Content Count: 29
Community Reputation: 3 Neutral

About Rowdy

  • Rank: Junior Member

Recent Profile Visitors

820 profile views
  1. It works! It just works! And a green 'healthy' sign; I'm really happy. Thank you very, very much, and should you ever find yourself in the vicinity of Venlo, the Netherlands, swing by, I'll buy you a beer. (or ten)
  2. Yes, my drives are hot-swappable. :) The parity consistency check is done, and the volume has entered a new status: the warning status. That's a new one! Mdstat also seems okay?

     rowdy@prime-ds:/$ sudo cat /proc/mdstat
     Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
     md3 : active raid5 sdb6[6] sda6[0] sdd6[4] sdc6[5]
           2930228736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
     md2 : active raid5 sda5[6] sdb5[1] sdc5[4] sdd5[5]
           5846050368 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
     md1 : active raid1 sdd2[3] sdc2[2]
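     For reference, a consistency check like this can also be started and followed by hand; a minimal sketch (md3 stands in for whichever array you want to scrub):

     # Ask the md layer to run a parity/consistency check on md3.
     echo check | sudo tee /sys/block/md3/md/sync_action
     # Follow the progress; a [===>...] bar and ETA appear while it runs.
     watch -n 5 cat /proc/mdstat
     # Afterwards, this counter shows how many mismatches were found.
     cat /sys/block/md3/md/mismatch_cnt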
  3. I will. The parity consistency check is running. Just the below command, I presume?

     rowdy@prime-ds:/$ sudo cat /proc/mdstat

     And looking back now, I totally missed that there was something wrong... So this old output:

     md3 : active raid5 sda6[0] sdd6[4] sdc6[5]
           2930228736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [U_UU]

     should become something like this once the parity consistency check is done?

     md3 : active raid5 sda6[0] sdd6[4] sdc6[5]
           2930228736 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

     Trying to grasp wh
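     (Reading the fields: [4/3] means the array wants 4 devices but only 3 are active, and [U_UU] means the device in slot 1 is missing; [4/4] [UUUU] is the healthy state. If you want it spelled out, mdadm can do that, e.g. for md3:)

     # Per-device detail: look for "State : clean" vs "clean, degraded"
     # and for members marked "removed" or "faulty".
     sudo mdadm --detail /dev/md3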
  4. I was aware of that, but I tried it three times because I thought I had screwed up, i.e. by repairing it while the volume wasn't accessible..

     rowdy@prime-ds:/$ sudo smartctl -d sat --all /dev/sda | fgrep -i sector
     Sector Sizes:     512 bytes logical, 4096 bytes physical
       5 Reallocated_Sector_Ct   0x0033   195   195   140   Pre-fail   Always   -   154
     197 Current_Pending_Sector  0x0032   200   200   000   Old_age    Always   -   0
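     (The raw value in the last column is the one to watch: 154 reallocated sectors means the drive has already remapped that many failing sectors, while 0 pending is good news. A quick way to get the drive's overall verdict, sda as the example:)

     # Overall SMART health verdict (PASSED/FAILED).
     sudo smartctl -d sat -H /dev/sda
     # Full attribute table, beyond just the sector counters.
     sudo smartctl -d sat -A /dev/sda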
  5. Oh well. That was a bust. So, after the parity check it shows as degraded, but fine; I can access all files etc. I occasionally get a message that the system partition is damaged, asking if I want to repair it. Yes, and then it's fine. I don't have the option to deactivate drive 1 (I could deactivate 3..4, not a good plan) and if I yank it out, the volume is crashed. If I enter the new drive, it won't show me the option to repair, probably because the volume is crashed? After that, if I put the old drive back and run the command you gave me (mdadm /dev/md2 --manage --add /dev/sda5), it will start the check again
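     (For anyone following along, the usual mdadm sequence for swapping a member out of /dev/md2 looks roughly like this; a sketch using the partition names from this thread, and the replacement partition has to exist and be at least as large first:)

     # Mark the old member as failed and pull it from the array.
     sudo mdadm /dev/md2 --manage --fail /dev/sda5
     sudo mdadm /dev/md2 --manage --remove /dev/sda5
     # Add the replacement; the array starts rebuilding onto it.
     sudo mdadm /dev/md2 --manage --add /dev/sdb5
     # Rebuild progress shows up here.
     cat /proc/mdstat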
  6. Small update; after the check I could 'kick' the drive, however, I had done the add/parity check while the volume was not accessible.. So I could kick the drive, but could not add the new drive. So now I've booted with the old drive and an accessible volume, and restarted the add/parity check.. More news probably late this evening...
  7. It's at 88%.. Fingers crossed! I'll keep you posted. Thanks for the help so far!
  8. No problem, I was not able to deduce that there were two arrays myself. Using the OLD drive, I could perform that command, and the storage manager seems to be checking parity now. Does that mean it's finding the loss in redundancy and is going to correct it?
  9. Unfortunately, I can't do anything there, I'm afraid. I can do health checks with both (OLD, NEW) drives in and hit 'Configure' with the OLD drive in. I've also checked what I could do on drive two, which should have some errors as per your first reply, but no joy I'm afraid?
  10. Thanks! I've tried that, with the old drive, the new drive, no drive, and reboots, but they all say the same:

      rowdy@prime-ds:/$ sudo mdadm /dev/md2 --manage --add /dev/sdb5
      Password:
      mdadm: Cannot open /dev/sdb5: Device or resource busy

      How do I find what's keeping the drive busy?
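      (A few standard ways to see what is holding a partition such as /dev/sdb5 open; a sketch, nothing Synology-specific:)

      # Is it already a member of some md array?
      grep sdb5 /proc/mdstat
      # Does it still carry md metadata from a previous array?
      sudo mdadm --examine /dev/sdb5
      # Is a process or mount holding it open?
      sudo lsof /dev/sdb5
      sudo fuser -v /dev/sdb5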
  11. So, I'm a bit stuck. Over the last years, when a disk crashed, I'd pop the crashed disk out, put in a fresh one and repair the volume. No sweat. Over time I've migrated from mixed 1 and 2TB disks to all 3TB ones, and earlier this year I received a 6TB one as an RMA replacement for a 3TB one from WD. So I was running:

      Disk 1: 3TB WD - Crashed
      Disk 2: 3TB WD
      Disk 3: 6TB WD
      Disk 4: 3TB WD

      So I bought a shiny new 6TB WD to replace disk 1. But that did not work out well. When running the above setup, I have a degraded volume, but it's accessible. When putting in the new
  12. - Outcome of the update: SUCCESSFUL
      - DSM version prior update: 6.2.2-24922 Update 5
      - Loader version and model: Jun's v1.04b DS918
      - Using custom extra.lzma: ig-88's extra.lzma and extra2.lzma
      - Installation type: BAREMETAL - ASRock J4105-ITX
      - Additional comments: I've rebuilt my boot USB stick with the new extra.lzma and the extra2.lzma from ig-88's topic. I've combined that with the rd.gz and the zImage files from the DSM_DS918+_25426.pat from Synology. After rebooting I've used https://find.synology.com to locate my server and started the migration process. Choose
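      (Roughly what that rebuild looks like, assuming the loader stick's second partition is the one carrying zImage, rd.gz and extra.lzma, and that this era of .pat file is a plain tar archive; device names and paths below are examples, not gospel:)

      # Pull rd.gz and zImage out of the Synology .pat archive.
      mkdir pat && tar -xf DSM_DS918+_25426.pat -C pat rd.gz zImage
      # Mount the loader stick's second partition (example device name!).
      sudo mount /dev/sdX2 /mnt
      # Drop in the new kernel files plus ig-88's extra/extra2.
      sudo cp pat/rd.gz pat/zImage extra.lzma extra2.lzma /mnt/
      sudo umount /mnt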
  13. - Outcome of the update: SUCCESSFUL
      - DSM version prior update: DSM 6.2-23739 Update 2
      - Loader version and model: Jun's v1.04b DS918
      - Using custom extra.lzma: real3x mod
      - Installation type: BAREMETAL - ASRock J4105-ITX
      - Additional comments: No reboot
  14. No, I'm sorry, that's for a rainy day; I never got around to activating it. Also, the mentioned folder does not exist on my system..
  15. Ah, I didn't get why my logs were not collapsed. Thanks, will do!