XPEnology Community


Posts posted by tcs

  1. 4 minutes ago, scoobdriver said:

    I have ESXi 6.7 on the micro SD in my G8 MicroServer with the free licence. I sometimes boot it to test ESXi VMs. It's on the free licence, but it doesn't appear to have virtual serial. Was this introduced for version 7 on the free licence?

     


    I believe directing it to a network port requires a higher license, but you should be able to present the physical serial port of the box to the VM, which, if you have a BMC, should be available over the network. Presumably if he has a 24-way Xeon box, it has a BMC.
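    If it helps, here's roughly what the relevant .vmx entries look like, a sketch based on VMware's documented serial-port keys (the device path is illustrative and varies by host; treat the "#" lines as annotation):

    serial0.present = "TRUE"
    # Physical port passthrough - should work on the free licence:
    serial0.fileType = "device"
    serial0.fileName = "/dev/char/serial/uart0"
    # Network-backed serial - the variant that needs the higher licence:
    # serial0.fileType = "network"
    # serial0.fileName = "telnet://:2001"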

     

  2. I've read most of the posts in this thread and didn't see it referenced, so I apologize if I just missed it:


    Is this still bound by the stock Synology kernels? Can we get GPU offload (/dev/dri) into the 3615/3617 images even though Synology's official kernels leave that out?

    Any chance this can incorporate the changes I uncovered in the other thread here to support higher drive counts?

     

     

    Specifically:

    #change
    maxdisks="64"
    esataportcfg="0x0"
    usbportcfg="0x0"
    internalportcfg="0xffffffffffffffff"

    and possibly (not at home to validate right now):

    no_disk_swap="yes"
    support_sys_adjust="yes"
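    For reference, the portcfg values are bitmasks with one bit per drive slot, and maxdisks should match the number of bits set in internalportcfg (64 slots, all internal, is just 64 one-bits, i.e. the 0xffffffffffffffff above). A quick shell sketch of how the values relate, shown for a 24-disk box:

    # Sketch: derive the synoinfo.conf masks for N internal disks,
    # assuming every slot is internal (eSATA and USB masks zeroed).
    N=24
    printf 'maxdisks="%d"\n' "$N"
    printf 'internalportcfg="0x%x"\n' $(( (1 << N) - 1 ))  # low N bits set: 0xffffff
    printf 'esataportcfg="0x0"\n'
    printf 'usbportcfg="0x0"\n'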

      

     

     

  3. 41 minutes ago, IG-88 said:

    somehow I do remember that it's always only the first 12 disks in that RAID-1 (never looked into this, as it seems not so important)

    my concern about redundancy is about the RAID-6 volume with my data. I don't mind DSM, that can be reinstalled, but losing a big xxx TB RAID volume... yeah, you should have a backup, but even a restore takes time. It's not meant to break because of failing disks (or even controller/CPU/RAM in SAN environments). Oh, what fun when those server SSDs all failed at the same time because an internal "counter" disabled them.

     

     

    Well, sure: for your data volume you wouldn't put everything into one large array. When I had this working in the past with 40 drives, I did 2x (16+2) with a hot spare for the data volume. If I get it working again, that'll likely be the case.

  4. Quote

    anything you can say about this?

     

    Where specifically was it in Nick's loader? What file/folder? I can tell you that the system pretends it's going to use rc.subr (and I'm guessing it did, once upon a time), but at this point, as best I can tell, it completely ignores it.

     

     

     

    Quote

    Synology at least sells units with expansion to go above 27, but I never studied the documentation of those units to see how big a single RAID set can be. Maybe a single set of disks is limited to 26?

     

    Well in excess of that, but if you look at those systems, they still only have 12-16 "internal" disks; everything else is in an external enclosure.

     

     

     

    Quote

    Besides this, the normal mdadm RAID (and DSM) might not be the best solution for such big systems (I do remember that Backblaze is using something more application-oriented for their storage systems). Just having two redundant disks (RAID-6) across 40-60 drives sounds risky, so it seems logical to limit the drive count of a single RAID set to 24 or 26?

     

     

    Keep in mind, the root volume and swap volume are RAID-1. Having 100 drives isn't any more risky than having 2 (quite the opposite); you've just got more copies of the data.

  5. For what it's worth, I did more digging on this. The issue is that on setup, the system does a "raidtool initsetup"; this utility calls scemd, which is a binary blob. That blob has a hard-coded call to read synoinfo.conf to see what maxdisks is set to, then uses that value to try to create the initial md0 (root volume) and md1 (swap). It looks at what your active drives are and creates logical placeholders called "missing" for the rest. Unfortunately, it's also hard-coded to use mdadm metadata format 0.9, which limits an md array to 27 devices. So, at least for initial setup, you cannot have maxdisks set to more than 27, or the raidtool setup will fail. What happens AFTER initial setup remains to be seen... it looks like the upgrade utilities should handle more devices just fine, but I haven't done any testing.
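    To illustrate the limit (a reproduction sketch only, not Synology's actual scemd invocation; device names are made up):

    # Build the member list the way setup does: one real partition plus
    # "missing" placeholders for every other slot up to maxdisks.
    MAXDISKS=28
    DEVICES="/dev/sda1"
    for i in $(seq 2 "$MAXDISKS"); do DEVICES="$DEVICES missing"; done
    # mdadm refuses this, because 0.90 metadata supports at most 27 devices:
    mdadm --create /dev/md0 --metadata=0.90 --level=1 \
          --raid-devices="$MAXDISKS" $DEVICES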

  6. Feel free to move this if it isn't in the right spot. I was running 6.2 with the 1.03b loader (DS3617) on a dual-Xeon X5600 Supermicro box (Tylersburg chipset) and recently upgraded to 6.2.1. Post-upgrade I'm getting a kernel panic around ehci-pci. Has anyone run into this, or have any ideas?

     

    patching file etc/rc
    patching file etc/synoinfo.conf
    Hunk #1 FAILED at 261.
    1 out of 1 hunk FAILED -- saving rejects to file etc/synoinfo.conf.rej
    patching file linuxrc.syno
    Hunk #3 succeeded at 552 (offset 1 line).
    patching file usr/sbin/init.post
    cat: can't open '/etc/synoinfo_override.conf': No such file or directory
    START /linuxrc.syno
    Insert Marvell 1475 SATA controller driver
    Insert basic USB modules...
    :: Loading module usb-common ... [  OK  ]
    :: Loading module usbcore ... [  OK  ]
    :: Loading module ehci-hcd ... [  OK  ]
    :: Loading modul[    1.249619] kernel tried to execute NX-protected page - exploit attempt? (uid: 0)
    [    1.257466] BUG: unable to handle kernel paging request at ffff880c02784260
    [    1.264447] IP: [<ffff880c02784260>] 0xffff880c0278425f
    [    1.269688] PGD 1ac2067 PUD 8000000c000000e3
    [    1.274081] Oops: 0011 [#1] SMP
    [    1.277346] Modules linked in: ehci_pci(F+) ehci_hcd(F) usbcore usb_common mv14xx(O) tthphahhpxqlhpx(OF)
    [    1.286936] CPU: 8 PID: 3548 Comm: insmod Tainted: GF          O 3.10.105 #23824
    [    1.294323] Hardware name: Supermicro X8DAH/X8DAH, BIOS 2.1        12/30/2011
    [    1.301445] task: ffff880c02706800 ti: ffff880bfdb88000 task.ti: ffff880bfdb88000
    [    1.308921] RIP: 0010:[<ffff880c02784260>]  [<ffff880c02784260>] 0xffff880c0278425f
    [    1.316579] RSP: 0018:ffff880bfdb8bb78  EFLAGS: 00010046
    [    1.321887] RAX: ffff880c0277ae10 RBX: 0000000000000006 RCX: 0000000000000002
    [    1.329009] RDX: 0000000000000006 RSI: 0000000000000000 RDI: ffff880c02797800
    [    1.336127] RBP: ffff880c02797800 R08: ffff880bfdb8bb84 R09: 000000000000fffb
    [    1.343248] R10: 0000000000000000 R11: 0000000000000027 R12: 0000000000000000
    [    1.350369] R13: ffff880bfdb8bbbe R14: 0000000000000246 R15: ffffffffa00ebb40
    [    1.357491] FS:  00007f4496161700(0000) GS:ffff880c3fd00000(0000) knlGS:0000000000000000
    [    1.365565] CS:  0010 DS: 0000 ES: 0000 CR0: 000000008005003b
    [    1.371298] CR2: ffff880c02784260 CR3: 0000000bfdbd8000 CR4: 00000000000207e0
    [    1.378418] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [    1.385539] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    [    1.392659] Stack:
    [    1.394670]  ffffffff81292f25 00000000fdb8bbd8 0000000000000015 000000000000000a
    [    1.402119]  ffff880c02796ff8 0000000000000000 ffff880c02797090 ffffffff81298046
    [    1.409570]  0000000000000001 ffff880c02796ff8 ffffffff8129808b ffff880bfdabd800
    [    1.417018] Call Trace:
    [    1.419466]  [<ffffffff81292f25>] ? pci_bus_read_config_word+0x65/0x90
    e ehci-pci[    1.425987]  [<ffffffff81298046>] ? __pci_bus_find_cap_start+0x16/0x40
    [    1.433365]  [<ffffffff8129808b>] ? pci_find_capability+0x1b/0x50
    [    1.439449]  [<ffffffffa00eb09d>] ? ehci_pci_setup+0x9d/0x5a0 [ehci_pci]
    [    1.446145]  [<ffffffffa00aac4d>] ? usb_add_hcd+0x1bd/0x660 [usbcore]
    [    1.452580]  [<ffffffffa00b99d3>] ? usb_hcd_pci_probe+0x363/0x410 [usbcore]
    [    1.459538]  [<ffffffff8129bd80>] ? pci_device_probe+0x60/0xa0
    [    1.465366]  [<ffffffff8130575a>] ? really_probe+0x5a/0x220
    [    1.470935]  [<ffffffff813059e1>] ? __driver_attach+0x81/0x90
    [    1.476676]  [<ffffffff81305960>] ? __device_attach+0x40/0x40
    [    1.482416]  [<ffffffff81303a53>] ? bus_for_each_dev+0x53/0x90
    [    1.488243]  [<ffffffff81304ef8>] ? bus_add_driver+0x158/0x250
    [    1.494072]  [<ffffffffa00ed000>] ? 0xffffffffa00ecfff
    [    1.499201]  [<ffffffff81305fe8>] ? driver_register+0x68/0x150
    [    1.505031]  [<ffffffffa00ed000>] ? 0xffffffffa00ecfff
    [    1.510177]  [<ffffffff810003aa>] ? do_one_initcall+0xea/0x140
    [    1.516008]  [<ffffffff8108baf4>] ? load_module+0x1a04/0x2120
    [    1.521751]  [<ffffffff81088cb0>] ? store_uevent+0x40/0x40
    [    1.527232]  [<ffffffff8108c2a1>] ? SYSC_init_module+0x91/0xc0
    [    1.533065]  [<ffffffff814c0dc4>] ? system_call_fastpath+0x22/0x27
    [    1.539238]  [<ffffffff814c0d11>] ? system_call_after_swapgs+0xae/0x13f
    [    1.545847] Code: 88 ff ff 00 ae 77 02 0c 88 ff ff 70 63 69 30 30 30 30 3a 30 30 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 <70> 63 69 30 30 30 30 3a 30 30 00 00 00 00 00 00 00 00 00 00 00
    [    1.565788] RIP  [<ffff880c02784260>] 0xffff880c0278425f
    [    1.571107]  RSP <ffff880bfdb8bb78>
    [    1.574590] CR2: ffff880c02784260
    [    1.577899] ---[ end trace 76b8d46e138e2d6c ]---
     ... [FAILED]
    :: Loading modul[    1.588113] ohci_hcd: USB 1.1 'Open' Host Controller (OHCI) Driver
    e ohci-hcd ... [  OK  ]
    :: Loading module uhci-hcd[    1.599906] uhci_hcd: USB Universal Host Controller Interface driver
    [    1.934848] tsc: Refined TSC clocksource calibration: 2533.422 MHz
    [    1.941026] Switching to clocksource tsc

     

  7. 10 hours ago, George said:

    me... still curious...

     

    So the default .pat file we're all using is mapped to the DS3615xs, which is specced as supporting up to 36 drives via a base unit of 12 drives plus 2 expansion units of 12 drives each.

     

    But when we install DSM using the 3615xs .pat file, it only shows 12-drive capability? Yeah, I figure that's because max drives is currently set to 12; I basically need confirmation of my logic... guessing that when you add a disk expander to a real Synology DS3615xs, it modifies max_drives itself once it detects that external disk trays are attached?

     

    G

     

    Native "external" Synology enclosures are treated differently from internal drives; that's why. I'm sure it would be possible to simulate external trays, but it's not really worth the effort.

  8. 2 minutes ago, George said:

    Is this (the increase of max_drives in grub.conf) something that HAS to be done before install, or can it be changed later?

     

    Looking at my long-term plans, I will probably push up to 24 drives.

     

    G

     

     

    You actually CAN'T do it on the initial install. If you have more than 12 drives attached during the initial install, you'll run into all sorts of issues. Once you're past the initial install is when you increase the number of drives.

     

    Still curious when this is going to get posted. I'm assuming Nick ran into issues, since he said it was uploading last Friday, but I still haven't seen it posted anywhere.

  9. 5 hours ago, IG-88 said:

     

    What's the reason for those specific numbers? Why not 36, 44, 52, or 56?

    The original number used in the 916+ by Synology is 4 (and it's set to 12 by Jun's patch).

     

     

    How? What's the difference from setting the values in synoinfo.conf after setting up DSM?

     

     

     

    I would imagine he modifies the swap option that I found. My fix works just fine; the only issue is that it doesn't persist across upgrades, because Jun didn't change it in his loader, so on upgrade synoinfo.conf would default back to whatever was in the bootloader.

     

  10. On 2/23/2018 at 11:59 PM, quicknick said:

    What everyone has failed to notice is that this guy was using XPEnoboot 5.2, which uses a heavily modded kernel, not the OEM one we currently use in 6.x.

    That being said, I can make this a permanent solution for the few that need it. My loader will be out in a few days, and I can certainly add this to the boot menu.

    What you should know, though, is that even with 12+ drives, DSM will only install to the first 12 drives. You can certainly rebuild the software RAID across all attached drives, but it is a manual process.

    My configuration tool, in the next release, lets you set 45 drives, a custom amount (12-64), or go back to the default value of 12 drives automatically.

    Another cheat way to get more drives is by using RAID cards. Instead of using them as HBAs or JBODs, you can do HW RAID and just provision virtual drives to DSM as basic no-protection single-drive volumes. Depending on your RAID controller, it could be faster than software RAID, and you still have protection.



    Sent from my SM-N950U using Tapatalk
     

     

     

    Still planning on releasing a new loader this week?

     

     

    Also, one word of caution on the RAID-card front: be very careful which RAID adapter you pick. I had some 3ware adapters that would go completely out to lunch, and the only way to recover was a hard power cycle. Then occasionally they would mark drives bad under heavy load even though the drives were perfectly fine... it was the ASIC on the RAID adapter losing its mind.

  11. On 12/5/2017 at 3:10 PM, sbv3000 said:

    That's an interesting find - well done; let's see what beasts it lets loose :)

    From my earlier tests, I suspect that your analysis of the SES services is what Synology uses with real systems, so it allows the four-letter allocation, as with my DX unit.

    The next challenge will be finding a way to retain the extensive synoinfo.conf changes that these setups need, to avoid RAID crashes during upgrades.

     

    I may have found a fix. For that to happen, I'd need Jun to build us a custom bootloader. If he's not willing to do a one-off, I can probably do it myself, but it'll be a while before I have time to build out that environment. There's a possibility the change I want to make isn't possible without the full source, but I'm not sure at this point.

  12. Well, gentlemen: after much gnashing of teeth, I've found a solution.

     

    The source of the issue seems to be something in /etc/rc.subr, but I can't quite pin it down. HOWEVER, what I *DID* find is where the actual problem lies: swap. Normally, when you have external shelves, the system identifies the drives in them with unique IDs per shelf, so it never expects a drive with more than three letters in the device ID (i.e., it expects an sda2, but not an sdaa2). First I tried getting it to identify my external trays, but I quickly realized they expect certain SES (SCSI Enclosure Services) responses that we can't really mimic. Maybe if you purchased a shelf from the same vendor they buy theirs through, but that's kind of a lost cause. So, drumroll (sorry for the long-winded response): you can disable swap entirely. Simply add no_disk_swap="yes" to your synoinfo.conf.

     

    Once that's done, you should be good to go.

     

    TL;dr - add 

    no_disk_swap="yes"

    to your synoinfo.conf 

    ???

    profit
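    In case it helps, one way to apply it idempotently from a root shell (paths assume stock DSM; /etc.defaults/synoinfo.conf is the template DSM restores from, so patching both is the safer bet; back up first):

    # Sketch: append no_disk_swap to synoinfo.conf if it isn't already set.
    for f in /etc/synoinfo.conf /etc.defaults/synoinfo.conf; do
        cp "$f" "$f.bak"
        grep -q '^no_disk_swap=' "$f" || echo 'no_disk_swap="yes"' >> "$f"
    done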
