Rihc0

Members

  • Content Count: 23
  • Joined
  • Last visited

Community Reputation

0 Neutral

About Rihc0

  • Rank: Junior Member


  1. Right now I am using the ReclaiMe Pro recovery software from a friend of mine, and I can see the data, so I am trying to get it off the RAID. Hopefully it works.
  2. I shut down the server and removed the GPU that was causing problems. I started the server and the virtual machine, and this is the output of the commands you sent me. This is with the hard drives the Xpenology VM had in the first place; I won't touch it unless you say so ^^. Sorry for doing many things wrong. Output of cat /proc/mdstat:

         ash-4.3# cat /proc/mdstat
         Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
         md3 : active raid5 sdg3[0] sde3[1]
               15606591488 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/2
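     For context on reading that output: the truncated "[5/2" is presumably mdstat's [total/active] counter, meaning the array was created with 5 member devices but only 2 are currently active, which puts a RAID5 set past its one-disk fault tolerance. A minimal sketch of the usual read-only inspection commands, assuming the device names from the output above:

         # Per-array status: members, sync state, [total/active] counter
         cat /proc/mdstat

         # Detailed view of an assembled array: UUID, chunk size, device slots
         mdadm --detail /dev/md3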
  3. Sorry for not saying the purple screen happened; I totally forgot it due to some personal issues. I have removed the GPU which caused the purple screen and will be uploading the output of the commands, and I won't touch the server until you guys say so. Sorry for the trouble; I appreciate you guys helping me.
  4. The whole server rebooted because I got a purple screen a few days ago. I also removed a few hard drives, but those did not belong to the original RAID; maybe I added them to the virtual machine accidentally and removed them again when I pulled them out of the server. Do you still think this can work?
  5.     ash-4.3# fdisk -l /dev/sd*
         Disk /dev/sdb: 16 GiB, 17179869184 bytes, 33554432 sectors
         Units: sectors of 1 * 512 = 512 bytes
         Sector size (logical/physical): 512 bytes / 512 bytes
         I/O size (minimum/optimal): 512 bytes / 512 bytes
         Disklabel type: dos
         Disk identifier: 0x22d5f435

         Device     Boot   Start      End  Sectors  Size Id Type
         /dev/sdb1          2048  4982527  4980480  2.4G fd Linux raid autodetect
         /dev/sdb2       4982528  9176831  4194304    2G fd Linux raid autodetect
         /dev/sdb3       9437184 33349631 23912448 11.4G fd Linux raid autodetect

         Disk /dev/sdb1: 2.4 GiB, 2550005760 bytes, 49804
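     In that listing, the three partitions typed fd (Linux raid autodetect) are the usual DSM layout: a ~2.4G system partition, a 2G swap partition, and the large third partition that becomes a member of the data array. If lsblk is available in the recovery environment, it gives a quick cross-check of which partitions the kernel sees as RAID members (a sketch, not taken from the thread itself):

         # RAID member partitions show up with FSTYPE linux_raid_member
         lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT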
  6. So I did the first 3 commands, and with the third command I got this. Output of mdadm -v --create --assume-clean -e1.2 -n5 -l5 /dev/md3 /dev/sdg3 /dev/sde3 /dev/sdf3 /dev/sdh3 missing -uff64862b:9edfe233:c498ea84:9d4b9ffd:

         ash-4.3# mdadm -v --create --assume-clean -e1.2 -n5 -l5 /dev/md3 /dev/sdg3 /dev/sde3 /dev/sdf3 /dev/sdh3 missing -uff64862b:9edfe233:c498ea84:9d4b9ffd
         mdadm: layout defaults to left-symmetric
         mdadm: chunk size defaults to 64K
         mdadm: /dev/sdg3 appears to be part of a raid array:
                level=raid5 devices=5 ctime=Sat Jun 20 00:46:08 2020
         mdadm: /dev/sde3 app
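     After a --create --assume-clean recreation like the one above, the usual precaution (not shown in the thread) is to verify everything read-only before allowing any writes, since a wrong device order or chunk size would corrupt data once written. A minimal sketch, assuming /dev/md3 carries the ext4 filesystem mentioned later in the thread and using /mnt/recovery as a hypothetical mount point:

         # Non-destructive filesystem check: read-only, fix nothing
         fsck.ext4 -n /dev/md3

         # Mount read-only and inspect the data before trusting the array
         mkdir -p /mnt/recovery
         mount -o ro /dev/md3 /mnt/recovery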
  7. I see. I thought I used RAID 5, but I'm not sure; the filesystem is ext4 for sure. Let's try. By the way, how did you configure your NAS? I want to do it the best way, but at the moment I learn by making mistakes, and I don't know what the best way to set it up is.
  8. I don't; I guess that is something that shows how the RAID is configured?
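     The thing that "shows how the RAID is configured" is the md superblock stored on each member partition; mdadm can dump it even when the array itself will not assemble. A short sketch, assuming one of the member partitions named earlier in the thread:

         # Print the RAID superblock: level, chunk size, array UUID, and
         # this device's slot in the array
         mdadm --examine /dev/sdg3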
  9. Okay, sorry for the late reply; the Lifecycle Controller on my server was not doing great and I had to troubleshoot it. I'll redo all the commands because there might have been some changes. Output of sudo fdisk -l /dev/sd*:

         ash-4.3# sudo fdisk -l /dev/sd*
         Disk /dev/sdb: 16 GiB, 17179869184 bytes, 33554432 sectors
         Units: sectors of 1 * 512 = 512 bytes
         Sector size (logical/physical): 512 bytes / 512 bytes
         I/O size (minimum/optimal): 512 bytes / 512 bytes
         Disklabel type: dos
         Disk identifier: 0x22d5f435

         Device     Boot   Start      End  Sectors  Size Id Type
         /dev/sdb1
  10. Okay, I had to do this on another virtual machine, because I deleted the other one thinking it was hopeless :P. Output of sudo fdisk -l /dev/sd*:

          ash-4.3# sudo fdisk -l /dev/sd*
          Disk /dev/sdb: 16 GiB, 17179869184 bytes, 33554432 sectors
          Units: sectors of 1 * 512 = 512 bytes
          Sector size (logical/physical): 512 bytes / 512 bytes
          I/O size (minimum/optimal): 512 bytes / 512 bytes
          Disklabel type: dos
          Disk identifier: 0x22d5f435

          Device     Boot   Start      End  Sectors  Size Id Type
          /dev/sdb1       2048  4982527  4980480  2.4G fd Linux raid autodetect
          /dev/sdb2
  11. I created the 2x 16 GB so I could boot the Synology NAS. I had a DS918+, but the hardware was slow :(. I asked whether you think I won't be able to repair this, because if not, I know it is pointless to keep working on this problem. I should learn more about RAID and how it works; I am too unfamiliar with it.
  12.     Disk /dev/sdb: 16 GiB, 17179869184 bytes, 33554432 sectors
          Units: sectors of 1 * 512 = 512 bytes
          Sector size (logical/physical): 512 bytes / 512 bytes
          I/O size (minimum/optimal): 512 bytes / 512 bytes
          Disklabel type: dos
          Disk identifier: 0xa43eb840

          Device     Boot   Start      End  Sectors  Size Id Type
          /dev/sdb1          2048  4982527  4980480  2.4G fd Linux raid autodetect
          /dev/sdb2       4982528  9176831  4194304    2G fd Linux raid autodetect
          /dev/sdb3       9437184 33349631 23912448 11.4G fd Linux raid autodetect

          Disk /dev/sdb1: 2.4 GiB, 2550005760 bytes, 4980480 sectors
          Units: sectors of 1