C-Fu


  1. I just changed from a 750W PSU to a 1600W PSU that's fairly new (only a few days' use at most), so I don't believe the PSU is the problem. When I get back on Monday, I'll see if I can replace the whole system (I have a few unused motherboards), plus the cables and whatnot, and reuse the SAS card if that's not likely the issue, and maybe reinstall Xpenology. Would that be a good idea?
  2. Damn. You're right. Usually when something like this happens... is there a way to prevent the SAS card from doing this? Like a setting or a BIOS update or something? Or does this mean that the card is dying? If I take out, say, sda - the SSD - and put it back in, will the assignments change and revert? Or whichever drive is connected to the SAS card (see the disk-mapping sketch after this list). Sorry, I'm just frustrated but still want to understand.
  3. Yeah it is. Slow, but working.
     # cat /proc/mdstat
     Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
     md4 : active raid5 sdl6[0] sdn6[2] sdm6[1]
           11720987648 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/3] [UUU__]
     md2 : active raid5 sdb5[0] sdk5[12] sdo5[11] sdq5[9] sdp5[8] sdn5[7] sdm5[6] sdl5[5] sdf5[4] sde5[3] sdd5[2] sdc5[1]
           35105225472 blocks super 1.2 level 5, 64k chunk, algorithm 2 [13/12] [UUUUUUUUUU_UU]
     md5 : active raid1 sdo7[3]
           3905898432 blocks super 1.2 [2/0] [__]
     md1 : active raid1 sdb2[0] sdc2[1] sd
  4. There was another power cut. Dammit! Now I can't even run cat /proc/mdstat. I'll wait a while just to see whether it will work or not. Sorry! So frustrating. I connected the PC to a UPS, by the way, so I'm not really sure why power cuts can still take down the whole system. Edit: OK, cat /proc/mdstat works, but slowly. Should I continue with mdadm assemble for md4 and md5 (see the force-assembly sketch after this list)? Or maybe take out the sda SSD and see if that fixes the slow mdstat? Please advise.
     # cat /proc/mdstat
     Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
     md4 : active raid5 sdl6[0] sdq6[
  5. Two errors:
     root@homelab:~# mdadm --stop /dev/md4
     mdadm: stopped /dev/md4
     root@homelab:~# mdadm --stop /dev/md5
     mdadm: stopped /dev/md5
     root@homelab:~# mdadm --assemble /dev/md4 -u648fc239:67ee3f00:fa9d25fe:ef2f8cb0
     mdadm: /dev/md4 assembled from 3 drives - not enough to start the array.
     root@homelab:~# mdadm --assemble /dev/md5 -uae55eeff:e6a5cc66:2609f5e0:2e2ef747
     mdadm: /dev/md5 assembled from 0 drives and 1 rebuilding - not enough to start the array.
     What do you mean by "not current", by the way? (See the event-count sketch after this list.)
  6. Cool, no worries.
     # mdadm --detail /dev/md5
     /dev/md5:
             Version : 1.2
       Creation Time : Tue Sep 24 19:36:08 2019
          Raid Level : raid1
          Array Size : 3905898432 (3724.96 GiB 3999.64 GB)
       Used Dev Size : 3905898432 (3724.96 GiB 3999.64 GB)
        Raid Devices : 2
       Total Devices : 0
         Persistence : Superblock is persistent
         Update Time : Tue Jan 21 05:58:00 2020
               State : clean, FAILED
      Active Devices : 0
      Failed Devices : 0
       Spare Devices : 0
         Number   Major   Minor   RaidDevice State
           -        0       0        0      removed
           -        0
  7. Great!
     root@homelab:~# mdadm -Cf /dev/md2 -e1.2 -n13 -l5 --verbose --assume-clean /dev/sd[bcdefpqlmn]5 missing /dev/sdo5 /dev/sdk5 -u43699871:217306be:dc16f5e8:dcbe1b0d
     mdadm: layout defaults to left-symmetric
     mdadm: chunk size defaults to 64K
     mdadm: /dev/sdb5 appears to be part of a raid array: level=raid5 devices=13 ctime=Tue Jan 21 05:07:10 2020
     mdadm: /dev/sdc5 appears to be part of a raid array: level=raid5 devices=13 ctime=Tue Jan 21 05:07:10 2020
     mdadm: /dev/sdd5 appears to be part of a raid array: level=raid5 devices=13 ctime=Tue Jan 21 05:07:10 2020
     md
  8. I can confirm: no array drives have changed since 9 hours ago. The run I just did still has the same 4 "new" add & remove entries for the [sda] SSD compared to Tuesday's post #113.
     # fgrep "hotswap" /var/log/disk.log
     2020-01-18T10:21:23+08:00 homelab hotplugd: hotplugd.c:1451 ==== SATA disk [sdk] hotswap [add] ====
     2020-01-18T10:21:23+08:00 homelab hotplugd: hotplugd.c:1451 ==== SATA disk [sdl] hotswap [add] ====
     2020-01-18T10:21:24+08:00 homelab hotplugd: hotplugd.c:1451 ==== SATA disk [sdm] hotswap [add] ====
     2020-01-18T10:21:24+08:00 homelab hotplugd: hotplugd.c:1451 ==== SATA d
  9. OK, I did an fgrep hotswap and compared the two in Notepad++, and it seems there are a few additions (a per-device summary sketch follows this list):
     2020-01-21T18:19:12+08:00 homelab hotplugd: hotplugd.c:1451 ==== SATA disk [sda] hotswap [remove] ====
     2020-01-21T18:19:14+08:00 homelab hotplugd: hotplugd.c:1451 ==== SATA disk [sda] hotswap [add] ====
     2020-01-21T20:39:20+08:00 homelab hotplugd: hotplugd.c:1451 ==== SATA disk [sda] hotswap [remove] ====
     2020-01-21T20:39:22+08:00 homelab hotplugd: hotplugd.c:1451 ==== SATA disk [sda] hotswap [add] ====
     I thought sda is the SSD?
     # fdisk -l /dev/sda
     Disk /dev/sda: 223.6 GiB, 2400
  10. # fdisk -l /dev/sdb
      Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 43C8C355-AE0A-42DC-97CC-508B0FB4EF37
      Device        Start         End     Sectors  Size Type
      /dev/sdb1      2048     4982527     4980480  2.4G Linux RAID
      /dev/sdb2   4982528     9176831     4194304    2G Linux RAID
      /dev/sdb5   9453280  5860326239  5850872960  2.7T Linux RAID
      # fdisk -l /dev/sdc
      Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 586
  11. # fdisk -l /dev/sd?
      Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 512 bytes
      I/O size (minimum/optimal): 512 bytes / 512 bytes
      Disklabel type: dos
      Disk identifier: 0x696935dc
      Device     Boot Start       End   Sectors   Size Id Type
      /dev/sda1        2048 468857024 468854977 223.6G fd Linux raid autodetect
      Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 byte
  12. @flyride I wanted to wait 10 mins, but my SSH session got closed a few seconds after I connected the drive. Still going to wait 10 mins. fdisk -l shows the drive is at /dev/sdo.
      Disk /dev/sdo: 9.1 TiB, 10000831348736 bytes, 19532873728 sectors
      Units: sectors of 1 * 512 = 512 bytes
      Sector size (logical/physical): 512 bytes / 4096 bytes
      I/O size (minimum/optimal): 4096 bytes / 4096 bytes
      Disklabel type: gpt
      Disk identifier: 1713E819-3B9A-4CE3-94E8-5A3DBF1D5983
      # mdadm --examine /dev/sd?5
      /dev/sdb5:
                Magic : a92b4efc
              Version : 1.2
          Feature Map : 0x0
  13. root@homelab:~# vgchange -an
        0 logical volume(s) in volume group "vg1" now active
      root@homelab:~# mdadm --examine /dev/sd?5
      /dev/sdb5:
                Magic : a92b4efc
              Version : 1.2
          Feature Map : 0x0
           Array UUID : 43699871:217306be:dc16f5e8:dcbe1b0d
                 Name : homelab:2  (local to host homelab)
        Creation Time : Tue Jan 21 05:07:10 2020
           Raid Level : raid5
         Raid Devices : 13
       Avail Dev Size : 5850870912 (2789.91 GiB 2995.65 GB)
           Array Size : 35105225472 (33478.95 GiB 35947.75 GB)
          Data Offset : 2048 sectors
         Super Offset : 8 sectors
         Unused Space : be
  14. Would reinserting the "bad" 10TB and swapping it with the current 10TB, or adding it to the array, help?
      root@homelab:/# vgchange -an
        0 logical volume(s) in volume group "vg1" now active
      root@homelab:/#
      root@homelab:/# mdadm --stop /dev/md2
      mdadm: stopped /dev/md2
      root@homelab:/# mdadm -Cf /dev/md2 -e1.2 -n13 -l5 --verbose --assume-clean /dev/sd[bcdefpqlmn]5 missing /dev/sdo5 /dev/sdk5 -u43699871:217306be:dc16f5e8:dcbe1b0d
      mdadm: layout defaults to left-symmetric
      mdadm: chunk size defaults to 64K
      mdadm: /dev/sdb5 appears to be part of a raid array: level=raid5 devices=13 ctime=Sat Jan
  15. Nope, doesn't work (see the btrfs restore sketch below).
      root@homelab:~# mount -t btrfs -o ro,nologreplay /dev/vg1/volume_1 /volume1
      mount: wrong fs type, bad option, bad superblock on /dev/vg1/volume_1,
             missing codepage or helper program, or other error
             In some cases useful info is found in syslog - try
             dmesg | tail or so.
      root@homelab:~# dmesg | tail
      [16079.284996] init: dhcp-client (eth4) main process (16851) killed by TERM signal
      [16079.439991] init: nmbd main process (17405) killed by TERM signal
      [16083.428550] alx 0000:05:00.0 eth4: NIC Up: 100 Mbps Full
      [16084.471931] iSCSI:iscsi_target.c:520:isc
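
Disk-mapping sketch (for the question in post 2): whether a pulled disk comes back under the same sdX name is up to the kernel, so the dependable check is to match kernel names against drive serial numbers before and after the hotplug. This is only a sketch, assuming smartctl is present on the box and that the /dev/sd[a-q] range seen in this thread still applies; neither is confirmed by the thread.

    # Record which serial number currently sits behind each kernel name.
    # (Drives behind some SAS HBAs may need an extra -d option for smartctl.)
    for d in /dev/sd[a-q]; do
        printf '%s ' "$d"
        smartctl -i "$d" | grep -i 'serial number'
    done > /tmp/disk-map.before

    # ...pull and re-insert the disk, give it a minute to settle, then repeat...
    for d in /dev/sd[a-q]; do
        printf '%s ' "$d"
        smartctl -i "$d" | grep -i 'serial number'
    done > /tmp/disk-map.after

    # Any device that changed names shows up in the diff.
    diff /tmp/disk-map.before /tmp/disk-map.after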
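
Force-assembly sketch (for the question in post 4): when members only disagree slightly after a power cut, mdadm --assemble --force asks mdadm to accept a small event-count mismatch. The md4 member list below is guessed from the mdstat output above (sdl6, sdm6, sdn6, sdq6; the fifth member is unknown), so it is an assumption that must be confirmed with --examine before forcing anything.

    # Stop the half-assembled array first.
    mdadm --stop /dev/md4

    # Inspect each candidate member before touching anything.
    mdadm --examine /dev/sd[lmnq]6

    # Force assembly despite a small event-count mismatch. Do not force
    # if the --examine output shows a member that is clearly stale.
    mdadm --assemble --force /dev/md4 /dev/sd[lmnq]6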
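
Event-count sketch (for the "not current" question in post 5): mdadm treats a member as not current when its superblock event count lags behind the others, which is typical after an unclean shutdown. Comparing event counts and update times across the members shows which disks mdadm is refusing; the partition set is the same guess as in the previous sketch.

    # Event count and last update time of each md4 member superblock.
    # Members whose Events value is lower than the rest are "not current".
    mdadm --examine /dev/sd[lmnq]6 | egrep '/dev/|Events|Update Time'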
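
Hotswap-summary sketch (for the log comparison in posts 8 and 9): instead of diffing the whole file in Notepad++, the hotswap lines can be summarized per device directly on the box. This uses the /var/log/disk.log path already shown above; the awk field positions are an assumption based on the line format quoted in these posts.

    # Count add/remove hotswap events per device to spot the flapping disk.
    fgrep hotswap /var/log/disk.log | awk '{print $8, $10}' | sort | uniq -c

    # Show only the most recent events for the suspect SSD.
    fgrep hotswap /var/log/disk.log | fgrep '[sda]' | tail -n 10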
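
btrfs restore sketch (for the failed read-only mount in post 15): when even ro,nologreplay will not mount, btrfs restore can walk the filesystem offline and report, or copy out, whatever is still reachable without writing to the volume. A sketch only, assuming the DSM build ships the btrfs restore subcommand and that /mnt/recovery is scratch space on a separate, healthy disk.

    # Dry run: list what btrfs restore believes it could recover; changes nothing.
    btrfs restore -D -v /dev/vg1/volume_1 /mnt/recovery

    # If the dry run looks sane, actually copy files out to the scratch space.
    mkdir -p /mnt/recovery
    btrfs restore -v /dev/vg1/volume_1 /mnt/recovery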