
Preface: I'm new to Xpenology and DSM...

 

Long story short, I'm building a new (really just reallocating hardware) 24x SAS drive SAN and decided to run Xpenology on it following THIS to do the install and THIS to increase the disk capacity beyond 12 drives. Everything went great. I had a couple 3TB drives that I copied my vCenter VMs to, took all of the SAS drives from my hosts, put them in the 24 bays of my SAN, and started to migrate the VMs to the two R10 arrays I created via iSCSI. Migration was going great... I migrated everything but the vCenter appliance.... Well, I was migrating the vCenter appliance...

 

Then my UPS faulted, and my SAN shut down. See... I have two UPSes, and each server and my SAN have two PSUs each. One PSU goes to one UPS, and the other PSU goes to the other UPS. It's a good, redundant setup... However... My SAN only had one PSU connected at the time, and it was to the UPS that faulted...

 

Ok, so hopefully I can boot up the SAN, and it's no big deal. I could probably recover VCSA...

 

Nope. I boot up the SAN, go to the GUI, and am prompted with this message:

 

[Screenshot: DSM recovery prompt asking to reinstall — 8GfkTBY.png]

 

Geez... ok... I select "Migration: Keep my data and most of the settings" from the installation type selection, select the DSM that I downloaded in accordance with the first video I referenced, and let it install. Of course, when it came back up and I went to the GUI, only 12 disks were visible, not 45, so of course DSM thinks the array crashed. OK... FINE... I edit synoinfo.conf to allow 45 drives again. But that requires a reboot. So I rebooted the SAN, and I'm then greeted with the same message prompting me to reinstall DSM...

 

WHAT DO I DO!? How can I recover DSM and my arrays?


If you check the forum you will see that there are various posts about the maximum number of drives and I think there is a limit of 24. Some people have reported exceeding this but on reboot the config is lost.

25 minutes ago, sbv3000 said:

If you check the forum you will see that there are various posts about the maximum number of drives and I think there is a limit of 24. Some people have reported exceeding this but on reboot the config is lost.

 

That's great to know, thanks. I've searched for "xpenology more than 12 drives" and other terms on Google and found nothing about a 24 drive limit.

 

How would I go about ensuring that DSM then only sees my 24x hot-swap bays and not the internal SATA connections on my motherboard? In Storage Manager, I believe the disks start at an odd number, and the hot-swap bays were disks 11-34. How do I get DSM to see the hot-swap bays as disks 1-24?

54 minutes ago, eptesicus said:

 

That's great to know, thanks. I've searched for "xpenology more than 12 drives" and other terms on Google and found nothing about a 24 drive limit.

How would I go about ensuring that DSM then only sees my 24x hot-swap bays and not the internal SATA connections on my motherboard?

 

Did you already try the KISS approach? Just disable the internal controller ports?

 

You may want to do some reading here and here.

And also search for the terms "internalportcfg=" and "maxdisks="12""; that might give you some hints.

 

Related files:

 /etc/synoinfo.conf 

 /etc.defaults/synoinfo.conf 
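For illustration, a minimal sketch of the kind of edit those terms point at, run here against a local stand-in file (on DSM the real files are the two listed above; the bitmap values are examples, not ones matched to any particular controller layout):

```shell
# Demo on a local copy; on DSM you would edit BOTH files above, because
# /etc.defaults/synoinfo.conf overwrites /etc/synoinfo.conf on boot
# (which is why the change seems to vanish after a restart).
printf 'maxdisks="12"\ninternalportcfg="0xfff"\n' > synoinfo.conf.demo
sed -i -e 's/^maxdisks=.*/maxdisks="24"/' \
       -e 's/^internalportcfg=.*/internalportcfg="0xffffff"/' \
       synoinfo.conf.demo
cat synoinfo.conf.demo   # 0xffffff = 24 bits set = 24 internal ports
```

internalportcfg is a bitmask, one bit per port, so 24 internal bays means 24 set bits (0xffffff); which bits map to which controller depends on your hardware.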


50 minutes ago, eptesicus said:

 

That's great to know, thanks. I've searched for "xpenology more than 12 drives" and other terms on Google and found nothing about a 24 drive limit.

 

How would I go about ensuring that DSM then only sees my 24x hot-swap bays and not the internal SATA connections on my motherboard? In Storage Manager, I believe the disks start at an odd number, and the hot-swap bays were disks 11-34. How do I get DSM to see the hot-swap bays as disks 1-24?

Check this thread.

 


Thanks all. So with Jun's loader, I can get a max of 26 drives... I see instructions from Quicknick on his loader to get up to 64 drives, and found what I thought were the files here: https://xpenology.club/downloads/, but I find nothing else on his loader. Did he make it, provide instructions, but not actually release it?


1 - According to the FAQ : "There is a physical limit of 26 drives due to limitations in drive addressing in Linux"

2 - Forget all about Quicknick, as his work has been put on hold and might never be released/updated.

1 hour ago, bearcat said:

1 - According to the FAQ : "There is a physical limit of 26 drives due to limitations in drive addressing in Linux"

2 - Forget all about Quicknick, as his work has been put on hold and might never be released/updated.

Unfortunate about Quicknick. I imagine a lot of people would have loved to have used his more-than-26-drives bootloader/DSM package.

On 9/7/2018 at 5:40 PM, eptesicus said:

Unfortunate about Quicknick. I imagine a lot of people would have loved to have used his more-than-26-drives bootloader/DSM package.


It's (mostly) based on scripts, so you can "read" what's done and going on. I remember there were specific numbers of drives allowed, and we also had a discussion here about how to handle >26 drives (so there might be something here to find; maybe that was what Quicknick used as a base?).

But I think you can go another way to at least get your data out for backup purposes.

DSM uses the normal Linux tools/mechanisms for RAID and volumes, so it's about "mdadm" and "lvm".

 

I'd expect they did not invent everything new to get iSCSI into their DSM, so maybe it will be as easy for iSCSI as it is for the data volumes.

As there is no "Synology limit" in a normal (live) Linux on the number of drives that can be used with LVM and mdadm, you can use a live Linux to mount your drives and access them (even over the network; see the howto/FAQ section for this). With the proper iSCSI tools installed (iscsitarget or open-iscsi?) you can discover your iSCSI targets (similar to what was done in the step before with the RAID) and copy your data to a safe location. After this you can start experimenting with higher drive counts in DSM.
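A hypothetical sketch of that rescue path, assuming a live Linux session (e.g. Ubuntu) with the drives attached and root access. It is wrapped in a function here so nothing runs by accident when pasted; the volume group and logical volume names are illustrative, so check the lvs output on your system:

```shell
# Rescue sketch: DSM's data volumes are standard mdadm + LVM, so a generic
# live distro can assemble and mount them read-only for copying data off.
rescue_mount() {
    mdadm --assemble --scan            # start any Synology RAID sets found
    cat /proc/mdstat                   # confirm which /dev/mdX came up
    vgscan && vgchange -ay             # activate DSM's LVM volume groups
    lvs                                # list logical volumes (names vary)
    mkdir -p /mnt/rescue
    # mount read-only; /dev/vg1/volume_1 is a typical DSM name, verify first
    mount -o ro /dev/vg1/volume_1 /mnt/rescue
}
```

Mounting read-only keeps the arrays untouched while you copy everything somewhere safe.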

 

Just an example:

(iSCSI) Discovering the iSCSI target by using the iscsiadm utility on Red Hat 5, 6, 7, SUSE 10, 11

https://library.netapp.com/ecmdocs/ECMP1217221/html/GUID-2A8546C7-347A-40B0-B937-4B31DAAA16DA.html

 

https://www.thomas-krenn.com/de/wiki/ISCSI_unter_Linux_mounten
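In the same spirit as those links, a hypothetical open-iscsi sketch; the portal IP and the IQN are placeholders for whatever your SAN announces, and it is wrapped in a function so it does not try to reach a target when pasted:

```shell
# Discover and attach a DSM iSCSI LUN from a live Linux with open-iscsi.
iscsi_attach() {
    # list the targets the box announces on its portal
    iscsiadm -m discovery -t sendtargets -p 192.168.1.10
    # log in to one target; its LUN then appears as a normal /dev/sdX
    iscsiadm -m node -T iqn.2000-01.com.synology:DiskStation.Target-1 \
             -p 192.168.1.10 --login
    lsblk   # find the new block device, then mount it or image it with dd
}
```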



Did you get your RAID / DSM back working?

I'm also still searching for a permanent 24-bay solution :-S

 

 


If it's about updates and (always) having synoinfo.conf contain 24 drives instead of 12, then you could mod your boot image.

Jun's loader mods things at load time using a diff file and the program patch; if you look into the files containing the diffs, you can see what's done.

On 3615/3617 the Synology default is 12 drives, so you will not see maxdisks being modded there, but you can see how the end result would look in the 918+ image.

To create a diff (patch): take the file as it is (from Synology), let Jun's mod run normally (boot into DSM with his loader), and afterwards mod your synoinfo.conf the way you want it to be. Then create a new patch (diff file; just google "create diff file") and insert the parts about the drive count and pattern into the patch file in the loader you use to boot your system. When (with a bigger update) your synoinfo.conf is overwritten with the original, the patch kicks in at boot time and mods the drive count and pattern again, so it will always be 24 instead of 12. The only time you will have to redo this is when moving to a new loader (like 1.02/DSM 6.1 -> 1.03/DSM 6.2).
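The diff/patch steps above can be sketched on a stand-in file (file names and the maxdisks value are illustrative; on the real loader the patch file and its location depend on the loader version):

```shell
# Stand-in for the stock synoinfo.conf shipped by Synology
printf 'maxdisks="12"\ninternalportcfg="0xfff"\n' > synoinfo.conf
cp synoinfo.conf synoinfo.conf.orig      # keep the original for diffing
# make the change you want to survive updates
sed -i 's/^maxdisks=.*/maxdisks="24"/' synoinfo.conf
# create the patch (the kind of diff file a loader can apply at boot)
diff -u synoinfo.conf.orig synoinfo.conf > maxdisks.diff
# re-applying the patch to a fresh "stock" copy restores the change
patch synoinf o.conf.orig < maxdisks.diff 2>/dev/null \
    || patch synoinfo.conf.orig < maxdisks.diff
grep maxdisks synoinfo.conf.orig         # now maxdisks="24"
```

The point is that the diff only records the lines you changed, so even when an update replaces the whole file with Synology's original, re-running patch at boot restores just your drive-count settings.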

 

