XPEnology Community

Everything posted by undecided

  1. Cool, thanks. I plugged additional power into the LSI 9300-16i using the 6-pin PCIe cable from the PSU, and everything works fine now. It just needs extra power when stressed (like when adding disks). Thx
  2. 6.1.7. I also found a post on STH about possibly needing to plug the additional PCIe power into the 9300-16i, which I have not done since the card sits in a PCIe x16 slot that should supply enough power, but who knows. I was very nervous about going to 7.1; I've been on 6.x for years.
  3. There were no errors during regular operation of the RAID (I copied 4-6TB to it). I only see this problem during the disk initialization process.
  4. Ooof, it just happened again. This time it was a different set of disks. The system was running fine until I added this new disk and started the 'expansion' process, which has been going for 2-3 days now. I wonder if the HBA is faulty.
  5. Interesting. It was the same 5 disks both times. The weird thing is that they all failed on the same sector. My SATA power comes from different modular PSU cables, so I am pretty sure they do not all share one cable; the cable with the most plugs has 4 SATA power connectors, yet 5 drives failed simultaneously. I will double check.
  6. This is a new Xpenology server running 6.1.7 bare metal on an Intel 4790T with an LSI HBA (LSI 9300-16i) and six 6TB HDDs in RAID Group 1 (RAID-6, Btrfs). Most drives are WDC Red Plus, with 2 Seagates. I also have a RAID Group 2 (RAID-6, Btrfs) made of five 500GB SSDs; that one has been stable. I had an additional EXOS drive set up as a Hot Spare on RAID Group 1, and when the System Partition Failed message happened, that drive dropped out of being hot spare. The logs look suspicious:
     2023/03/03 02:52:35 Write error at internal disk [12] sector 2153552.
     2023/03/03 02:52:35 Write error at internal disk [8] sector 2153552.
     2023/03/03 02:52:35 Write error at internal disk [7] sector 2153552.
     2023/03/03 02:52:34 Write error at internal disk [13] sector 2153552.
     2023/03/03 02:52:26 Write error at internal disk [14] sector 2153552.
     How can all 5 drives fail on the same sector? (See the diagnostic sketch after this list.) Thx
  7. I have the following hardware:
     Gigabyte Z97N-WIFI with Intel i7-4790T (onboard Intel & Realtek LAN ports)
     16GB RAM
     LSI 9300-16i SAS controller
     I booted this up with the old 6.1.2 Jun loader that I've been using on a different system and am familiar with, and everything was discovered and operated fine. Is this loader capable of running on the above hardware, and if so, which version of DSM should I put on it? Thanks
  8. Tried DSM_DS918+_23739, DSM_DS918+_25426, and DSM_DS918+_25556. Using loader 1.04b as DS918+ on a Haswell-based quad-core Gigabyte motherboard.
  9. Thx, do you know what's weird? I have another ST6000DM003 in the same box and it works fine. So I put the problematic ST6000DM003 in another PC, and it works fine there. This is the most bizarro thing I've seen.
  10. I have an XPenology 6.x box on an old FoxConn D70S-P motherboard with an add-on SATA controller for a total of 8 drives. When I replaced one of the older 3TB drives with a new 6TB Seagate ST6000DM003, the PC wouldn't boot. It gets to the first screen, which shows the add-on controller's SATA drives (4 of them), and then that's it: the fans just spin at full speed and it won't boot. If I put the older drive back, it boots fine. The Seagate ST6000DM003 works fine in a USB enclosure. I tried the drive on each of the SATA cables, both on the onboard controller and on the add-on controller; it made no difference.
  11. Yeah, this worked. My array is now in the process of becoming an SHR-2 array. Awesome, thanks a bunch
  12. Thanks, I understand that, but are you saying I should degrade the array on purpose by removing the 500GB drive? Then what?
  13. Yeah, data is backed up. Why would the swap from 500GB to 4TB allow me to change to SHR-2? It would go from
      Disk 1: 1.8TB / Disk 3: 2.7TB / Disk 5: 2.7TB / Disk 6: 3.6TB / Disk 7: 3.6TB (not initialized, reserved for upgrade to SHR-2) / Disk 8: 3.6TB / Disk 9: 2.7TB / Disk 10: 466GB
      to
      Disk 1: 1.8TB / Disk 3: 2.7TB / Disk 5: 2.7TB / Disk 6: 3.6TB / Disk 7: 3.6TB (not initialized, reserved for upgrade to SHR-2) / Disk 8: 3.6TB / Disk 9: 2.7TB / Disk 10: 3.6TB
      Still 3 different sizes of drives.
  14. Disk 1: 1.8TB / Disk 3: 2.7TB / Disk 5: 2.7TB / Disk 6: 3.6TB / Disk 7: 3.6TB (not initialized, reserved for upgrade to SHR-2) / Disk 8: 3.6TB / Disk 9: 2.7TB / Disk 10: 466GB
      The shitty part is that right after the 6.1 upgrade I 'initialized' Disk 8, which was previously unused as well.
  15. Tried it again. It seems that making that change to synoinfo.conf results in the DiskStation wanting to reinstall.
  16. So, I reinstalled, but I still don't see the 'Change RAID Type' option.
  17. I guess not. A quick Google search found that I need to SSH in, sudo vi synoinfo.conf, comment out supportraidgroup="yes", and add support_syno_hybrid_raid="yes" (see the sketch after this list). Just did it. Damn, now upon reboot my DSM is asking to reinstall/migrate. Is this normal? Should I proceed with installing/migrating again?
  18. I have 7 drives in there right now as SHR-1; the largest drive is a 4TB, and I added a non-initialized 4TB to perform the change to SHR-2.
  19. Hello, does anyone know why I cannot perform 'Change RAID Type' on 6.1.3 update 8? That is the main reason I upgraded. I wanted to go from SHR-1 to SHR-2.
  20. Also, I would like to let everyone know that the migration went flawlessly. I even let it auto-update to 6.1.3 update 8, and it's all good so far. Thanks to the OP for the great tutorial.
  21. I would like to thank sbv3000. Using his advice, I changed the PID/VID in the grub config file, made the rest of the changes at the grub console, and the install proceeded fine (see the grub sketch after this list).
  22. Sorry, just to clarify, are you saying that editing the s/n and MAC BEFORE the install (hitting 'c' right after it boots) is the way to go? Like, don't let it boot with the default settings? If that's what you mean, it is weird that it works that way.
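
A note on the simultaneous write errors in post 6: an identical failing sector across five disks usually points at something the disks share (HBA, cabling, or power) rather than five simultaneous media failures, since the md-RAID layer underneath DSM writes the same stripe offset to every member. A minimal diagnostic sketch follows; the device names /dev/sdg through /dev/sdk and the array name /dev/md2 are assumptions for illustration, not values from these posts.

    # Check each suspected member for real media errors in its SMART attributes
    # (device names below are assumed; adjust to your system)
    for d in /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk; do
        echo "== $d =="
        sudo smartctl -A "$d" | grep -i -E 'reallocated|pending|uncorrect'
    done

    # See how the md array itself recorded the failures
    cat /proc/mdstat
    sudo mdadm --detail /dev/md2

    # Look for controller resets or timeouts around the failure time;
    # mpt3sas is the kernel driver for the LSI 9300-16i
    dmesg | grep -i -E 'mpt3sas|reset|timeout'

If SMART is clean on every drive, a shared-path problem is the likelier explanation, which matches how this thread resolved in post 1: feeding the 9300-16i its auxiliary 6-pin PCIe power made the errors stop.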
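
A note on the synoinfo.conf edit in post 17: this change is commonly applied to both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, since DSM can regenerate the former from the latter; the second path is an assumption from common XPEnology practice, not something stated in these posts. A minimal sketch of the edit described there:

    # Comment out supportraidgroup and enable SHR in both copies of the file.
    # Back up first; posts 15 and 17 report DSM prompting to reinstall/migrate
    # after this change, so expect that prompt on reboot.
    for f in /etc/synoinfo.conf /etc.defaults/synoinfo.conf; do
        sudo cp "$f" "$f.bak"
        sudo sed -i 's/^supportraidgroup=/#supportraidgroup=/' "$f"
        echo 'support_syno_hybrid_raid="yes"' | sudo tee -a "$f" >/dev/null
    done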
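
A note on the grub edits in posts 21 and 22: with Jun's loader, the VID/PID (which should match the USB boot stick), serial number, and MAC address live in grub.cfg on the stick, and they can alternatively be set from the grub console by pressing 'c' at the boot menu before the first boot. The sketch below uses placeholder values; every value shown is hypothetical, and the exact console commands can vary by loader version.

    # In grub.cfg on the loader stick (replace the VID/PID with your own
    # stick's values and use your own serial number and MAC):
    set vid=0x058f
    set pid=0x6387
    set sn=XXXXXXXXXXXXX
    set mac1=001132XXXXXX

    # Or at the grub console, reached by hitting 'c' at the boot menu:
    vid 0x058f
    pid 0x6387
    sn XXXXXXXXXXXXX
    mac1 001132XXXXXX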