Transition Members
Community Reputation

2 Neutral

About tcs


  1. I've read most of the posts in this thread and didn't see it referenced, so I apologize if I just missed it: is this still bound by the stock Synology kernels? Can we get GPU offload (/dev/dri) into the 3615/3617 images even though Synology's official kernels leave that out? Any chance this can incorporate the changes I uncovered in the other thread here to support higher drive counts? Specifically:
     #change
     maxdisks="64"
     esataportcfg="0x0"
     usbportcfg="0x0"
     internalportcfg="0xffffffffffffffff"
     and possibly (not at home
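The settings above can be applied to a running system by rewriting the matching lines in synoinfo.conf. A minimal sketch, run here against a scratch copy of the file with illustrative starting values (on a real box the target would be /etc/synoinfo.conf and would need root):

```shell
# Work on a local scratch copy; the starting values below are illustrative.
conf=./synoinfo.conf.test
cat > "$conf" <<'EOF'
maxdisks="12"
esataportcfg="0xff000"
usbportcfg="0x300000"
internalportcfg="0xfff"
EOF

# Rewrite the four settings named in the post (GNU sed assumed for -i)
sed -i \
  -e 's/^maxdisks=.*/maxdisks="64"/' \
  -e 's/^esataportcfg=.*/esataportcfg="0x0"/' \
  -e 's/^usbportcfg=.*/usbportcfg="0x0"/' \
  -e 's/^internalportcfg=.*/internalportcfg="0xffffffffffffffff"/' \
  "$conf"

grep -E '^(maxdisks|.*portcfg)=' "$conf"
```

Zeroing esataportcfg and usbportcfg while widening internalportcfg reclassifies every port as "internal", which is what lets DSM count the extra drives.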
  2. Well sure - for your data volume you wouldn't put everything into one large array. When I had this working in the past with 40 drives, I did 2x16+2 with a hot spare for the data volume. If I get it working again, that'll likely be the case.
  3. Where specifically was it in Nick's loader? What file/folder? I can tell you that the system pretends like it's going to use rc.subr (and I'm guessing it did once upon a time) but at this point, as best I can tell, completely ignores it. Well in excess, but if you look at those systems they still only have 12-16 as "internal" disks; everything else is in an external enclosure. Keep in mind, the root volume and swap volume are RAID-1. Having 100 drives isn't any more risky than 2 (quite the opposite), you've jus
  4. For what it's worth I did more digging on this. The issue is that on setup, the system does a "raidtool initsetup" - this utility calls scemd which is a binary blob. That binary blob has a hard call to read synoinfo.conf to see what maxdisks is set to, it then uses the value in maxdisks to try to create the initial md0 (root volume) and md1 (swap). It will look at what your active drives are, and create logical placeholders for the rest called "missing". Unfortunately it's also hard-coded to use mdadm metadata format 0.9, which limits an md array to 27 devices. So - at least for initial s
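Since scemd is a closed binary blob, the exact logic can't be shown, but the behavior described above can be sketched: pad the detected drives out to maxdisks slots with "missing" placeholders, then hand that list to mdadm. Everything below is an illustrative reconstruction, not the real initsetup code, and the values are made up:

```shell
# Hypothetical reconstruction of the placeholder list scemd appears to build.
maxdisks=8                 # illustrative; synoinfo.conf value on a real box
present="sda sdb sdc"      # drives actually detected (illustrative)

members=""
i=0
for d in $present; do
  members="$members /dev/${d}1"   # first partition of each present drive
  i=$((i + 1))
done
while [ "$i" -lt "$maxdisks" ]; do
  members="$members missing"      # logical placeholder for an absent slot
  i=$((i + 1))
done

echo "$members"
# With the hard-coded metadata 0.9 format this list is capped far below 64
# slots; a newer superblock format would lift that limit, e.g. (not run here):
#   mdadm --create /dev/md0 --metadata=1.2 --level=1 \
#         --raid-devices=$maxdisks $members
```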
  5. tcs

    DSM 6.2 Loader

    Feel free to move this if it isn't in the right spot. I was running 6.2 with the 1.03b loader (DS3617) on a dual-Xeon X5600 Supermicro box (Tylersburg chipset) and recently upgraded to 6.2.1. Post-upgrade I'm getting a kernel panic around ehci-pci. Anyone run into this or have any ideas?
      patching file etc/rc
      patching file etc/synoinfo.conf
      Hunk #1 FAILED at 261.
      1 out of 1 hunk FAILED -- saving rejects to file etc/synoinfo.conf.rej
      patching file linuxrc.syno
      Hunk #3 succeeded at 552 (offset 1 line).
      patching file usr/sbin/
      cat: can't open '/etc/synoinfo_override.conf': No such
  6. Native "external" synology enclosures are treated differently than internal drives, that's why. I'm sure it would be possible to simulate external trays, but it's not really worth the effort.
  7. You actually CAN'T do it on initial install. If you have more than 12 drives on initial install you'll run into all sorts of issues. Only once you're past the initial install do you increase the number of drives. Still curious when this is going to get posted. I'm assuming Nick ran into issues, since he said it was uploading last Friday but I still haven't seen it posted anywhere.
  8. I would imagine he modifies the swap option that I found. My fix works just fine; the only issue was that it doesn't persist across upgrades, because Jun didn't change it in his loader, so on upgrade synoinfo.conf would revert to whatever was in the bootloader.
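The persistence problem described above (the fix reverting on every upgrade) could in principle be worked around by re-applying local overrides from a separate file at boot. A minimal sketch, run here against scratch copies; the override-file name echoes the /etc/synoinfo_override.conf path seen elsewhere in this thread, but the merge logic itself is purely illustrative:

```shell
# Scratch copies standing in for /etc/synoinfo.conf and the override file.
conf=./synoinfo.conf.test
override=./synoinfo_override.conf.test

cat > "$conf" <<'EOF'
maxdisks="12"
internalportcfg="0xfff"
EOF
cat > "$override" <<'EOF'
maxdisks="64"
internalportcfg="0xffffffffffffffff"
EOF

# For each key=value in the override file, rewrite the matching line
# in synoinfo.conf (GNU sed assumed for -i).
while IFS='=' read -r key val; do
  [ -n "$key" ] || continue
  sed -i "s|^${key}=.*|${key}=${val}|" "$conf"
done < "$override"

cat "$conf"
```

Run from a boot-time hook, a merge like this would survive DSM writing a stock synoinfo.conf back during an upgrade, since only the override file needs to persist.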
  9. You still planning on posting this weekend? I've actually got a system sitting in wait that I've yet to build a custom bootloader for that needs the drive fix.
  10. Still planning on releasing a new loader this week? Also, one word of caution on the RAID-card front - be very careful which RAID adapter you pick. I had some 3ware adapters that would go completely out to lunch, and the only way to recover was a hard power cycle. Then occasionally they would mark drives bad under heavy load even though the drives were perfectly fine... it was the ASIC on the RAID adapter losing its mind.
  11. I may have found a fix. In order for that to happen I'd need Jun to build us a custom bootloader. If he's not willing to build us a one-off, I can probably do it myself, but it'll be a while before I have time to build out that environment. There's a possibility the change I want to make isn't possible without the full source, but I'm not sure at this point.
  12. Well gentlemen, I'm going to say after much gnashing of teeth I've found a solution. The source of the issue seems to be something in /etc/rc.subr but I can't quite pin it down. HOWEVER, what I *DID* find is that you can disable swap entirely, which is where the actual problem lies. Normally when you have external shelves, the system identifies drives in them with unique IDs per shelf. So the system is never expecting a drive with more than 3 letters in the device ID (i.e., it expects an sda2 but not an sdaa2). First I tried getting it to identify my external trays, but I quick
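The sda2-vs-sdaa2 problem above is easy to demonstrate in isolation: a glob written for single-letter device names silently skips the double-letter names that appear once you pass 26 drives. A standalone sketch (the device names are illustrative, not read from a real system):

```shell
# A pattern written for single-letter names (sda2..sdz2) misses the
# double-letter names (sdaa2, sdab2, ...) used beyond the 26th drive.
missed=""
for dev in sda2 sdz2 sdaa2 sdab2; do
  case "$dev" in
    sd[a-z]2) echo "$dev matched the single-letter pattern" ;;
    *)        missed="$missed $dev"; echo "$dev was MISSED" ;;
  esac
done
echo "missed:$missed"
```

A pattern like `sd[a-z][a-z]*2` (or matching on the whole `/sys/block` entry rather than a fixed name length) would catch both forms.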