Posts posted by undecided
-
6.1.7. I also found a post on STH about possibly needing to plug the additional PCIe power connector into the 9300-16i, which I have not done since the card sits in a PCIe x16 slot that should supply enough power, but who knows. I was very nervous going to 7.1. Been on 6.x for years.
-
There were no errors during regular operation of the RAID (I copied 4-6TB to it). I only see this problem during the disk initialization process.
-
-
-
Ooof, it just happened again, this time with a different set of disks. The system was running fine until I added this new disk and started the 'expansion' process, which has been running for 2-3 days now. I wonder if the HBA is faulty.
-
16 minutes ago, Mary Andrea said:
Sometimes it could be power micro-cuts. Have you tried changing the cables (SATA/power) among the disks? Check each connection between the board and the disks, and check that every plug is tight. Try it.
Interesting. It was the same 5 disks both times, and the weird thing is that they all failed on the same sector. My SATA power comes from different modular PSU cables, so I am pretty sure they don't all share one cable: the cable with the most plugs has 4 SATA power connectors, yet 5 drives failed simultaneously. But I will double-check.
-
This is a new XPenology server running 6.1.7 bare metal on an Intel 4790T with an LSI HBA (LSI 9300-16i) and six 6TB HDDs in RAID Group 1 (RAID-6, Btrfs). Most drives are WD Red Plus, plus 2 Seagates. I also have a RAID Group 2 (RAID-6, Btrfs) made of five 500GB SSDs; that has been stable. I had an additional EXOS drive set up as a hot spare on RAID Group 1, and when the System Partition Failed message happened, that drive dropped out of being the hot spare. The logs look suspicious:
2023/03/03 02:52:35 Write error at internal disk [12] sector 2153552.
2023/03/03 02:52:35 Write error at internal disk [8] sector 2153552.
2023/03/03 02:52:35 Write error at internal disk [7] sector 2153552.
2023/03/03 02:52:34 Write error at internal disk [13] sector 2153552.
2023/03/03 02:52:26 Write error at internal disk [14] sector 2153552.
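As a side note for anyone hitting the same thing: log lines like these can be grouped by sector to confirm the pattern. A minimal sketch, assuming the exact field layout shown above (the function name and the /var/log/messages path are just illustrative):

```shell
# group_by_sector: read DSM "Write error" log lines on stdin and list,
# per failing sector, the disk numbers that reported it.
group_by_sector() {
    awk '/Write error at internal disk/ {
        disk = $8; gsub(/[^0-9]/, "", disk)   # strip the [ ] around the disk number
        sec  = $10; gsub(/[^0-9]/, "", sec)   # strip the trailing period
        disks[sec] = disks[sec] " " disk
    }
    END { for (s in disks) printf "sector %s: disks%s\n", s, disks[s] }'
}
# e.g.  group_by_sector < /var/log/messages
```

Five different disks reporting the very same sector within one second points at something shared (HBA, power, cabling) rather than five simultaneous media failures.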
How can all 5 drives fail on the same sector?
Thx
-
I have the following hardware:
Gigabyte Z97N-WIFI with Intel i7-4790T (Onboard Intel & Realtek LAN ports)
16GB RAM
LSI 9300-16i SAS Controller
I booted this up with the old 6.1.2 Jun loader that I've been using on a different system and am familiar with, and everything was discovered and operated fine.
Is this loader capable of running on the above hardware and if yes, which version of DSM should I put on it? Thanks
-
Tried DSM_DS918+_23739, DSM_DS918+_25426, and DSM_DS918+_25556.
Using loader 1.04b as a DS918+ on a Haswell-based quad-core Gigabyte motherboard.
-
On 8/12/2021 at 4:08 PM, sbv3000 said:
This website seems to suggest the 6tb drives are not compatible with the d70 http://findhard.ru/en/checkcompatibility/part?partType=Drives&id=17597&model=seagate-6-tb-st6000nm0034
Thx. Do you know what's weird? I have another ST6000DM003 in the same box and it works fine. So I put the problematic ST6000DM003 in another PC, and it works fine there too. This is the most bizarro thing I've seen.
-
I have an XPenology 6.x box on an old Foxconn D70S-P motherboard with an add-on SATA controller, for a total of 8 drives. When I replaced one of the older 3TB drives with a new 6TB Seagate ST6000DM003, the PC wouldn't boot: it gets to the first screen, which shows the add-on controller's SATA drives (4 of them), and then that's it. The fans just spin at full speed and it won't boot. If I put the older drive back, it boots fine. The ST6000DM003 works fine in a USB enclosure. I tried the drive on each of the SATA cables, so both on the onboard controller and on the add-on controller; it made no difference.
-
On 11/6/2017 at 7:16 PM, Dfds said:
Essentially yes: when you remove the 500gb the array will degrade; add the 4tb and DSM should see the new disk and expand the array. Once complete, you should be able to change the RAID type to SHR-2 with the spare 4tb disk.
Yeah, this worked. My array is now in the process of becoming an SHR-2 array. Awesome, thanks a bunch
-
6 minutes ago, Dfds said:
Check out the raid calculator https://www.synology.com/en-uk/support/RAID_calculator
Thanks, I understand that, but are you saying that I should degrade the array on purpose by removing the 500GB?
Then what?
-
11 minutes ago, Dfds said:
If your array is OK, why not swap the 500gb disk for a 4tb, let the array rebuild, then change to SHR-2? I'm assuming that you have backups of all your important files while you're doing this, of course.
Yeah, the data is backed up. But why would swapping the 500GB for a 4TB let me change to SHR-2? It would go from
Disk 1: 1.8TB
Disk 3: 2.7TB
Disk 5: 2.7TB
Disk 6: 3.6TB
Disk 7: (Not initialized, reserved for upgrade to SHR-2) 3.6TB
Disk 8: 3.6TB
Disk 9: 2.7TB
Disk 10: 466GB
to
Disk 1: 1.8TB
Disk 3: 2.7TB
Disk 5: 2.7TB
Disk 6: 3.6TB
Disk 7: (Not initialized, reserved for upgrade to SHR-2) 3.6TB
Disk 8: 3.6TB
Disk 9: 2.7TB
Disk 10: 3.6TB
Still 3 different sizes of drives.
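For anyone following along, here's a back-of-the-envelope check of usable space. The usual approximation (and it is only an approximation; Synology's RAID calculator is the authoritative answer) is total capacity minus the largest disk for SHR-1, or minus the two largest for SHR-2. A sketch:

```shell
# shr_capacity REDUNDANCY SIZE...  — rough SHR usable-space estimate.
# REDUNDANCY is 1 (SHR-1) or 2 (SHR-2); sizes are plain decimals like 3.6 (TB).
shr_capacity() {
    r=$1; shift
    printf '%s\n' "$@" | sort -rn | awk -v r="$r" '
        { total += $1; if (NR <= r) skip += $1 }  # skip the r largest disks
        END { printf "%.1f\n", total - skip }'
}
# e.g.  shr_capacity 2 1.8 2.7 2.7 3.6 3.6 3.6 2.7 3.6
```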
-
Just now, Dfds said:
I think you need another 4tb disk.
Oh boy. I am out of SATA ports
-
7 minutes ago, Dfds said:
From Synology website:
SHR-1 with the following disk configurations may require two additional disks when changing to SHR-2:
There are only two disks.
All disks have different capacities.
Three or more disks in the RAID Array contain a pair of higher-capacity disks compared to the other disks in the RAID Array.
What disk configuration do you have?
Disk 1: 1.8TB
Disk 3: 2.7TB
Disk 5: 2.7TB
Disk 6: 3.6TB
Disk 7: (Not initialized, reserved for upgrade to SHR-2) 3.6TB
Disk 8: 3.6TB
Disk 9: 2.7TB
Disk 10: 466GB
The shitty part is that right after the 6.1 upgrade I 'initialized' Disk 8, which was previously unused as well.
-
33 minutes ago, undecided said:
So, I reinstalled but I still don't see the option to 'Change Raid Type'
Tried it again. It seems that making that change to synoinfo.conf results in the diskstation wanting to reinstall.
-
39 minutes ago, undecided said:
I guess not. A quick Google search found that I need to SSH in, sudo vi the file, comment out supportraidgroup="yes", and add support_syno_hybrid_raid="yes". Just did it.
Damn, now upon reboot, my DSM is asking to reinstall/migrate. Is this normal? Should I proceed with installing/migrating again?
So, I reinstalled but I still don't see the option to 'Change Raid Type'
-
34 minutes ago, b0fh said:
And do you have SHR enabled in synoinfo.conf? It will certainly "import" existing SHR volumes, but I am not sure you can make any changes without it enabled.
I guess not. A quick Google search found that I need to SSH in, sudo vi the file, comment out supportraidgroup="yes", and add support_syno_hybrid_raid="yes". Just did it.
Damn, now upon reboot, my DSM is asking to reinstall/migrate. Is this normal? Should I proceed with installing/migrating again?
-
1 hour ago, Dfds said:
Do you have the required number of disks?
I have 7 drives in there right now as SHR-1, largest drive is a 4TB and I added a non-initialized 4TB to perform the change to SHR-2.
-
Hello, does anyone know why I cannot perform 'Change RAID Type' on 6.1.3 update 8? That is the main reason I upgraded. I wanted to go from SHR-1 to SHR-2.
-
Also, I would like to let everyone know that the migration went flawlessly. I even let it auto-update to update 6.1.3 update 8 and it's all good so far. Thanks to the OP for the great tutorial.
-
I would like to thank sbv3000: using his advice, I changed the PID/VID in the grub config file, made the rest of the changes at the grub console, and the install proceeded fine.
-
6 hours ago, sbv3000 said:
I had the '56%' issue a few times and found that if I edited the vid/pid for my USB but left the MAC and S/N as the defaults in the grub config, DSM installed OK. After rebooting and checking, I changed the MAC to the machine's NIC (plus the S/N to a non-Syno random string) and it worked OK.
I have no idea why this worked; on some installs, editing the MAC before install didn't cause the error.
Sorry, just to clarify: are you saying that editing the S/N and MAC BEFORE the install (hitting 'c' right after it boots) is the way to go? Like, don't let it boot with the default settings? If that's what you mean, it's weird that it works that way.
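For later readers: instead of typing the values at the grub console on every attempt, the vid/pid can be edited in grub.cfg on the loader USB beforehand. A sketch with placeholder values (get your stick's real IDs from lsusb, shown as ID vvvv:pppp; the mount path and function name are illustrative):

```shell
# set_grub_id CFG VID PID — rewrite the loader's vid/pid lines in place.
set_grub_id() {
    sed -i -e "s/^set vid=.*/set vid=$2/" \
           -e "s/^set pid=.*/set pid=$3/" "$1"
}
# e.g.  set_grub_id /mnt/usb/grub/grub.cfg 0x0781 0x5583   # placeholder IDs
```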
Getting System Partition Failed on Raid Group 1 on multiple drives (SATA HDD) while verifying new disk to be added. Repair works but it happened twice already.
in General Questions
Cool, thanks. I plugged additional power into the LSI 9300-16i by means of the 6-pin PCIe cable from the PSU and everything works fine now. It just needs extra power when stressed (like when adding disks). Thx