eptesicus Posted September 6, 2018 #1

Preface: I'm new to XPEnology and DSM.

Long story short, I'm building a new (really just reallocating hardware) 24x SAS drive SAN and decided to run XPEnology on it, following THIS to do the install and THIS to increase the disk capacity beyond 12 drives. Everything went great. I had a couple of 3TB drives that I copied my vCenter VMs to, took all of the SAS drives from my hosts, put them in the 24 bays of my SAN, and started migrating the VMs to the two R10 arrays I created via iSCSI. Migration was going great... I had migrated everything but the vCenter appliance.

Well, while I was migrating the vCenter appliance, my UPS faulted and my SAN shut down. See, I have two UPSes, and each server and my SAN have two PSUs. One PSU goes to one UPS, and the other PSU goes to the other UPS. It's a good, redundant system... However, my SAN only had one PSU connected at the time, and it was to the UPS that faulted.

OK, so hopefully I can boot up the SAN and it's no big deal; I could probably recover VCSA. Nope. I boot up the SAN, go to the GUI, and am prompted with the message telling me to reinstall DSM.

Geez... OK. I select "Migration: Keep my data and most of the settings" from the installation type selection, select the DSM that I downloaded in accordance with the first video I referenced, and let it install. Of course, when it came back up and I went to the GUI, only 12 disks were visible, not 45, so of course DSM thinks the array crashed. OK... FINE... I edit synoinfo.conf to allow 45 drives again. But that requires a reboot. So I rebooted the SAN, and I'm then greeted with the same message prompting me to reinstall DSM.

WHAT DO I DO!? How can I recover DSM and my arrays?
sbv3000 Posted September 7, 2018 #2

If you check the forum you will see various posts about the maximum number of drives; I think there is a limit of 24. Some people have reported exceeding it, but on reboot the config is lost.
eptesicus Posted September 7, 2018 (Author) #3

25 minutes ago, sbv3000 said:
If you check the forum you will see various posts about the maximum number of drives; I think there is a limit of 24. Some people have reported exceeding it, but on reboot the config is lost.

That's great to know, thanks. I've searched for "xpenology more than 12 drives" and other terms on Google and found nothing about a 24-drive limit. How would I go about ensuring that DSM only sees my 24x hot-swap bays and not the internal SATA connectors on my motherboard? In Storage Manager, I believe the internal disks start at an odd number, and the hot-swap bays show up as disks 11-34. How do I get DSM to see the hot-swap bays as disks 1-24?
bearcat Posted September 7, 2018 (edited) #4

54 minutes ago, eptesicus said:
That's great to know, thanks. I've searched for "xpenology more than 12 drives" and other terms on Google and found nothing about a 24-drive limit. How would I go about ensuring that DSM only sees my 24x hot-swap bays and not the internal SATA connectors on my motherboard?

Did you already try the KISS approach and just disable the internal controller ports? You may want to do some reading here and here, and also search for the terms "internalportcfg=" and "maxdisks="12"" - that might give you some hints.

Related files:
/etc/synoinfo.conf
/etc.defaults/synoinfo.conf

Edited September 7, 2018 by bearcat
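To illustrate how those two settings fit together: internalportcfg is a bitmask with one bit per disk slot, and maxdisks is the slot count, so for 24 internal bays you want the low 24 bits set. A minimal sketch of the arithmetic (the values are an assumption for a plain 24-bay layout, not taken from any particular loader; note that esataportcfg and usbportcfg in the same file must not overlap these bits):

```shell
# One bit per internal disk slot: with maxdisks="24" the mask is the
# low 24 bits, i.e. 0xffffff. Shell arithmetic does the math:
maxdisks=24
mask=$(( (1 << maxdisks) - 1 ))

# These are the lines you would expect to end up in synoinfo.conf:
printf 'maxdisks="%d"\n' "$maxdisks"
printf 'internalportcfg="0x%x"\n' "$mask"   # 0xffffff for 24 slots
```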
sbv3000 Posted September 7, 2018 #5

50 minutes ago, eptesicus said:
That's great to know, thanks. I've searched for "xpenology more than 12 drives" and other terms on Google and found nothing about a 24-drive limit. How would I go about ensuring that DSM only sees my 24x hot-swap bays and not the internal SATA connectors on my motherboard? In Storage Manager, I believe the internal disks start at an odd number, and the hot-swap bays show up as disks 11-34. How do I get DSM to see the hot-swap bays as disks 1-24?

Check this thread.
eptesicus Posted September 7, 2018 (Author) #6

Thanks, all. So with Jun's loader, I can get a max of 26 drives. I see instructions from Quicknick for his loader to get up to 64 drives, and found what I thought were the files here: https://xpenology.club/downloads/, but I can find nothing else on his loader. Did he make it and provide instructions, but not actually release it?
bearcat Posted September 7, 2018 #7

1 - According to the FAQ: "There is a physical limit of 26 drives due to limitations in drive addressing in Linux."
2 - Forget all about Quicknick, as his work has been put on hold and might never be released/updated.
eptesicus Posted September 7, 2018 (Author) #8

1 hour ago, bearcat said:
1 - According to the FAQ: "There is a physical limit of 26 drives due to limitations in drive addressing in Linux."
2 - Forget all about Quicknick, as his work has been put on hold and might never be released/updated.

Unfortunate about Quicknick. I imagine a lot of people would have loved to use his more-than-26-drives bootloader/DSM package.
IG-88 Posted September 9, 2018 (edited) #9

On 9/7/2018 at 5:40 PM, eptesicus said:
Unfortunate about Quicknick. I imagine a lot of people would have loved to use his more-than-26-drives bootloader/DSM package.

It's (mostly) based on scripts, so you can "read" what's done and what's going on. I remember there were specific numbers of drives allowed, and we also had a discussion here about how to handle >26 drives (so there might be something to find here; maybe that was what Quicknick used as a base?).

But I think you can go another way to at least get your data off for backup purposes. DSM uses the normal Linux tools/mechanisms for RAID and volumes, so it's about "mdadm" and "lvm". I'd expect Synology did not reinvent everything for iSCSI on their DSM either, so it may be as easy for iSCSI as it is for the data volumes. Since a normal (live) Linux has no "Synology limit" on the number of drives usable with lvm and mdadm, you can use a live Linux to mount your drives and access them (even over the network; see the How-to/FAQ section for this). With the proper iSCSI tools installed (iscsitarget or open-iscsi?) you can discover your iSCSI targets (similar to what you did in the previous step with the RAID) and copy your data to a safe location. After that, you can start experimenting with higher drive counts in DSM.

Just an example: (iSCSI) Discovering the iSCSI target by using the iscsiadm utility on Red Hat 5, 6, 7, SUSE 10, 11
https://library.netapp.com/ecmdocs/ECMP1217221/html/GUID-2A8546C7-347A-40B0-B937-4B31DAAA16DA.html
https://www.thomas-krenn.com/de/wiki/ISCSI_unter_Linux_mounten

Edited September 9, 2018 by IG-88
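Put together, the live-Linux rescue path described above could look roughly like this. Everything here is a sketch: the target IP (192.168.1.10) is a placeholder, and /dev/vg1000/lv is only the typical DSM volume path, so verify yours with lvdisplay before mounting. It's written out to a file for review rather than executed blind:

```shell
# Sketch of a data-rescue session from a live Linux stick: assemble the
# md arrays DSM created, activate LVM, mount the data volume read-only,
# then discover the iSCSI LUNs with open-iscsi's iscsiadm.
cat > /tmp/rescue.sh <<'EOF'
#!/bin/sh
set -e

# 1. Re-assemble the RAID sets DSM created (DSM uses stock Linux md)
mdadm --assemble --scan

# 2. Activate any LVM volume groups sitting on top of the arrays
vgscan
vgchange -ay

# 3. Mount the data volume read-only for copying off
#    (/dev/vg1000/lv is typical for DSM; confirm with lvdisplay)
mkdir -p /mnt/dsm
mount -o ro /dev/vg1000/lv /mnt/dsm

# 4. Discover and log in to the iSCSI target (placeholder IP)
iscsiadm -m discovery -t sendtargets -p 192.168.1.10
iscsiadm -m node --login
EOF
chmod +x /tmp/rescue.sh
```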
smokers Posted September 16, 2018 #10

Did you get your RAID / DSM back working? I'm also still searching for a permanent 24-bay solution :-S
IG-88 Posted September 17, 2018 #11

If it's about updates and (always) having synoinfo.conf contain 24 drives instead of 12, then you could mod your boot image. Jun's loader mods things at boot with a diff file and the program `patch`; if you look into the files containing the diffs, you can see what's done. On 3615/3617 the Synology default is 12 drives, so you won't see maxdisks being modded there, but you can see how it would look in the end in the 918+ image.

To create a diff (patch), take the file as it is (from Synology), let Jun's mod run normally (boot into DSM with his loader), and afterwards mod your synoinfo.conf the way you want it. Then create a new patch (a diff file; just Google "create diff file") and insert the parts about the drive count and port patterns into the patch file in the loader you use to boot your system. When (with a bigger update) your synoinfo.conf is overwritten with the original, the patch kicks in at boot time and mods the drive count and patterns, so it will always be 24 instead of 12. The only time you will have to redo this is when moving to a new loader (like 1.02/DSM 6.1 -> 1.03/DSM 6.2).
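The diff-then-patch workflow above can be sketched end to end like this. The file contents and the /tmp/syno working directory are simplified illustrations (on a real loader the patch targets /etc.defaults/synoinfo.conf and is applied at boot), but the mechanics of creating and re-applying the diff are the same:

```shell
# Sketch: build a patch that restores the 24-drive settings after an
# update overwrites synoinfo.conf with the stock 12-drive values.
mkdir -p /tmp/syno && cd /tmp/syno

# Stock values as shipped (simplified stand-in for the real file)
printf 'maxdisks="12"\ninternalportcfg="0xfff"\n' > synoinfo.conf.orig

# The values we want to survive an update: 24 internal slots
sed -e 's/maxdisks="12"/maxdisks="24"/' \
    -e 's/internalportcfg="0xfff"/internalportcfg="0xffffff"/' \
    synoinfo.conf.orig > synoinfo.conf.new

# Unified diff: this is what the loader would apply at boot with `patch`
# (diff exits 1 when the files differ, hence the `|| true`)
diff -u synoinfo.conf.orig synoinfo.conf.new > maxdisks.patch || true

# Verify the patch re-applies cleanly to a fresh (overwritten) copy
cp synoinfo.conf.orig synoinfo.conf
patch synoinfo.conf < maxdisks.patch
grep maxdisks synoinfo.conf
```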