Posts posted by gnoboot
-
@Diverge
@EmmTeh
Why do you want to use RS3614xs+?
I'm installing in a 24bay SuperMicro Chassis. Server has Xeon 5650 CPU, 24GB RAM, 10gig Ethernet, and 3 LSI HBAs.
I haven't had much luck with the DS3612 codebase. I can get all the drives to show up after editing synoinfo.conf, but things always end up breaking.
Have you tried this guide - viewtopic.php?f=2&t=2028?
That matters even if the ASM1061 PCIe SATA controller isn't being passed through? It's my ESXi boot device. When I don't try to pass through the Panther Point AHCI controller, everything works fine with the ASM1061 controller as the ESXi boot and datastore.
No, it only happens when you pass it through. Since your controller is an ASM1061, I suggest you install the latest Ubuntu/CentOS and try again. If it doesn't break there, then it's a Synology kernel bug, and I will try to find the patch on kernel.org.
Edit: @Diverge, found a possible patch.
-
@Diverge
Are you using ESXi 5.1 or 5.5? Which Linux kernel version did you test?
@EmmTeh
Why do you want to use RS3614xs+?
Edit:
@gnoboot, should I be able to pass my Intel Panther Point AHCI controller through to gnoboot? I'm getting page fault errors:
Known kernel issue with ASM1083/1085 - https://lkml.org/lkml/2012/1/30/216
-
Upgrade guide from 4.x to 5.x posted on page 1.
I tried this with VirtualBox; it cannot install any PAT, stock or patched. But after installing DSM in VMware, it works fine in VirtualBox.
Hope this helps
Post your `lspci -v`, or boot to debug mode and enable serial logging if supported by VirtualBox, and send me the results. Thanks!
-
Thanks a lot!! I would like to download the gnoboot-alpha5, but the file type is .RSDF?
Just use JDownloader to get the image, or google how to decrypt RSDF. Enjoy, and report any issues.
-
I'm currently preparing my upcoming release, so far I broke iSCSI again with the following drivers and features added (w/o my iSCSI fix).
- Add all PATA/SATA/SCSI and Network (1G - 10G) drivers supported by popular Linux distributions
- Infiniband - Qlogic, Chelsio, Mellanox
- Set maximum CPU supported to 4K
- Virtual I/O
- HyperV drivers
Alpha5 is coming soon; the maximum-CPU change will not be included as it breaks iSCSI. But more drivers were added in this release - check my blog to see the complete list ;). Don't forget to click ads to support my development, and I'm also accepting donations.
I think it is better if you can add the driver for HP N36L/N54L. NIC is NC107i. Thanks a lot.
Could you post the `lspci -v` result?
I am using HP N54L too, I can post it for your reference... thanks.
```
00:00.0 Class 0600: Device 1022:9601
        Subsystem: Device 103c:1609
        Flags: bus master, 66MHz, medium devsel, latency 0
        Capabilities: [c4] HyperTransport: Slave or Primary Interface
        Capabilities: [54] HyperTransport: UnitID Clumping
        Capabilities: [40] HyperTransport: Retry Mode
        Capabilities: [9c] HyperTransport: #1a
        Capabilities: [f8] HyperTransport: #1c

00:01.0 Class 0604: Device 103c:9602
        Flags: bus master, 66MHz, medium devsel, latency 64
        Bus: primary=00, secondary=01, subordinate=01, sec-latency=64
        I/O behind bridge: 0000e000-0000efff
        Memory behind bridge: fe700000-fe8fffff
        Prefetchable memory behind bridge: 00000000f0000000-00000000f7ffffff
        Capabilities: [44] HyperTransport: MSI Mapping Enable+ Fixed+
        Capabilities: [b0] Subsystem: Device 103c:1609

00:06.0 Class 0604: Device 1022:9606
        Flags: bus master, fast devsel, latency 0
        Bus: primary=00, secondary=02, subordinate=02, sec-latency=0
        Memory behind bridge: fe900000-fe9fffff
        Capabilities: [50] Power Management version 3
        Capabilities: [58] Express Root Port (Slot-), MSI 00
        Capabilities: [a0] MSI: Enable+ Count=1/1 Maskable- 64bit-
        Capabilities: [b0] Subsystem: Device 103c:1609
        Capabilities: [b8] HyperTransport: MSI Mapping Enable+ Fixed+
        Capabilities: [100] Vendor Specific Information: ID=0001 Rev=1 Len=010 <?>
        Capabilities: [110] Virtual Channel
        Kernel driver in use: pcieport

00:11.0 Class 0106: Device 1002:4391 (rev 40) (prog-if 01)
        Subsystem: Device 103c:1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 41
        I/O ports at d000 [size=1]
        I/O ports at c000 [size=1]
        I/O ports at b000 [size=1]
        I/O ports at a000 [size=1]
        I/O ports at 9000 [size=1]
        Memory at fe6ffc00 (32-bit, non-prefetchable) [size=1K]
        Capabilities: [50] MSI: Enable+ Count=1/8 Maskable- 64bit+
        Capabilities: [70] SATA HBA v1.0
        Capabilities: [a4] PCI Advanced Features
        Kernel driver in use: ahci

00:12.0 Class 0c03: Device 1002:4397 (prog-if 10)
        Subsystem: Device 103c:1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 18
        Memory at fe6fe000 (32-bit, non-prefetchable) [size=4K]
        Kernel driver in use: ohci_hcd

00:12.2 Class 0c03: Device 1002:4396 (prog-if 20)
        Subsystem: Device 103c:1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 17
        Memory at fe6ff800 (32-bit, non-prefetchable) [size=11]
        Capabilities: [c0] Power Management version 2
        Capabilities: [e4] Debug port: BAR=1 offset=00e0
        Kernel driver in use: ehci_hcd

00:13.0 Class 0c03: Device 1002:4397 (prog-if 10)
        Subsystem: Device 103c:1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 18
        Memory at fe6fd000 (32-bit, non-prefetchable) [size=4K]
        Kernel driver in use: ohci_hcd

00:13.2 Class 0c03: Device 1002:4396 (prog-if 20)
        Subsystem: Device 103c:1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 17
        Memory at fe6ff400 (32-bit, non-prefetchable) [size=11]
        Capabilities: [c0] Power Management version 2
        Capabilities: [e4] Debug port: BAR=1 offset=00e0
        Kernel driver in use: ehci_hcd

00:14.0 Class 0c05: Device 1002:4385 (rev 42)
        Flags: 66MHz, medium devsel

00:14.3 Class 0601: Device 1002:439d (rev 40)
        Subsystem: Device 103c:1609
        Flags: bus master, 66MHz, medium devsel, latency 0

00:14.4 Class 0604: Device 1002:4384 (rev 40) (prog-if 01)
        Flags: bus master, 66MHz, medium devsel, latency 64
        Bus: primary=00, secondary=03, subordinate=03, sec-latency=64

00:16.0 Class 0c03: Device 1002:4397 (prog-if 10)
        Subsystem: Device 103c:1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 18
        Memory at fe6fc000 (32-bit, non-prefetchable) [size=4K]
        Kernel driver in use: ohci_hcd

00:16.2 Class 0c03: Device 1002:4396 (prog-if 20)
        Subsystem: Device 103c:1609
        Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 17
        Memory at fe6ff000 (32-bit, non-prefetchable) [size=11]
        Capabilities: [c0] Power Management version 2
        Capabilities: [e4] Debug port: BAR=1 offset=00e0
        Kernel driver in use: ehci_hcd

00:18.0 Class 0600: Device 1022:1200
        Flags: fast devsel
        Capabilities: [80] HyperTransport: Host or Secondary Interface

00:18.1 Class 0600: Device 1022:1201
        Flags: fast devsel

00:18.2 Class 0600: Device 1022:1202
        Flags: fast devsel

00:18.3 Class 0600: Device 1022:1203
        Flags: fast devsel
        Capabilities: [f0] Secure device <?>
        Kernel driver in use: k10temp

00:18.4 Class 0600: Device 1022:1204
        Flags: fast devsel

01:05.0 Class 0300: Device 1002:9712
        Subsystem: Device 103c:1609
        Flags: bus master, fast devsel, latency 0, IRQ 10
        Memory at f0000000 (32-bit, prefetchable) [size=128M]
        I/O ports at e000 [size=11]
        Memory at fe8f0000 (32-bit, non-prefetchable) [size=64K]
        Memory at fe700000 (32-bit, non-prefetchable) [size=1M]
        Expansion ROM at [disabled]
        Capabilities: [50] Power Management version 3
        Capabilities: [a0] MSI: Enable- Count=1/1 Maskable- 64bit+

02:00.0 Class 0200: Device 14e4:165b (rev 10)
        Subsystem: Device 103c:705d
        Flags: bus master, fast devsel, latency 0, IRQ 42
        Memory at fe9f0000 (64-bit, non-prefetchable) [size=64K]
        Capabilities: [48] Power Management version 3
        Capabilities: [40] Vital Product Data
        Capabilities: [60] Vendor Specific Information: Len=6c <?>
        Capabilities: [50] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [cc] Express Endpoint, MSI 00
        Capabilities: [100] Advanced Error Reporting
        Capabilities: [13c] Virtual Channel
        Capabilities: [160] Device Serial Number xx-xx-xx-xx-xx-xx-xx-xx
        Capabilities: [16c] Power Budgeting <?>
        Kernel driver in use: tg3
```
Fixed, need testers...
- Add all PATA/SATA/SCSI and Network (1G - 10G) drivers supported by popular Linux distributions
-
I think it is better if you can add the driver for HP N36L/N54L. NIC is NC107i. Thanks a lot.
Could you post the `lspci -v` result?
-
I've already tricked the updater script into writing to a separate partition. Though, I haven't included it in the current boot image, as it makes a larger download (32MB - /dev/synoboot1, 96MB - /dev/synoboot2). After updating the boot image, it will then update the hardware BIOS, which breaks the whole update process. I've already posted some screenshots while trying to make it work on DSM 5.0 beta.
IMHO, update/upgrade process:
- upload pat file to DSM and verify checksum
- extract hda1.tgz to a certain directory
- verify checksum, update kernel and grub
- update AMI bios firmware
- reboot???
- /etc/upgrade.sh will remove the current root directory and move the extracted hda1.tgz (I tried this part but it breaks DSM installation)
- gnoboot will replace any updated kernel drivers.
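The "verify checksum" step in the list above can be sketched roughly like this. This is a hypothetical illustration only: the file name and the source of the expected hash are assumptions, not gnoboot's actual updater code (a real updater would read the expected hash from the pat's metadata rather than compute it).

```shell
# Hypothetical sketch of the "verify checksum" step; /tmp/demo.pat and the
# expected-hash source are stand-ins, not gnoboot's real updater.
set -e
PAT=/tmp/demo.pat
printf 'dummy pat contents' > "$PAT"            # stand-in for the uploaded .pat
EXPECTED=$(sha1sum "$PAT" | awk '{print $1}')   # a real updater gets this from the pat metadata
ACTUAL=$(sha1sum "$PAT" | awk '{print $1}')
if [ "$ACTUAL" = "$EXPECTED" ]; then
    echo "checksum OK, safe to extract hda1.tgz"
else
    echo "checksum mismatch, aborting" >&2
    exit 1
fi
```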
-
I'm currently working on adding support for Hyper-V with my boot image. What's the required controller and network driver to support Hyper-V?
-
You can use original pats; I have every time I've played with a new install. I'm not 100% sure about the updates, since I followed a guide here that changes the updater script just before you apply the updates so it doesn't try to reflash over the boot flash (I'm not sure if there is any built-in protection in gnoboot to prevent that - would be cool if there was).
I will try to re-write a new updater script, but users will have to learn how to make their own pat files, and it's not my priority right now. I also don't want to distribute pat files.
For those who want to try my boot image, please wait for the next release, as it will include the fix for the auto shutdown issue. But it's OK if you're just upgrading from an older release - that means you are not upgrading/updating DSM to the latest version.
I'm currently preparing my upcoming release, so far I broke iSCSI again with the following drivers and features added (w/o my iSCSI fix).
- Add all PATA/SATA/SCSI and Network (1G - 10G) drivers supported by popular Linux distributions
- Infiniband - Qlogic, Chelsio, Mellanox
- Set maximum CPU supported to 4K
- Virtual I/O
- HyperV drivers
-
@Diverge,
I was able to reproduce the issue reported on ESXi. I will try to fix it soon.
Good to know, I was thinking maybe it was something I was doing wrong when going from .IMG's to virtual disks, or adding/removing stuff from the images (zImage, rd, kernel mods, etc.).
My init script was broken starting from alpha3 up to the latest release, which causes auto shutdown for fresh installs. Will try to fix it this week.
-
I was able to go beyond the 12-disk limit using this guide, but tested it only on VMware Workstation/Player using a virtual LSI controller (SCSI) and the RS3614xs+ model. Using the right hex code will also allow you to automatically grow by 3 ports every time you reach the maximum.
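As a rough illustration of that kind of synoinfo.conf edit: the hex values below are examples, not the exact ones from the guide. Each set bit in `internalportcfg` marks a port treated as internal, so 24 internal disks corresponds to 24 set bits (0xffffff). This sketch works on a scratch copy instead of the real file.

```shell
# Illustrative synoinfo.conf edit on a scratch copy; the real file lives
# elsewhere on DSM, and the exact masks depend on your controller layout.
CONF=/tmp/synoinfo.conf
cat > "$CONF" <<'EOF'
maxdisks="12"
internalportcfg="0xfff"
EOF
# Raise the limit to 24 disks: 24 set bits -> 0xffffff
sed -i -e 's/^maxdisks=.*/maxdisks="24"/' \
       -e 's/^internalportcfg=.*/internalportcfg="0xffffff"/' "$CONF"
grep maxdisks "$CONF"
```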
-
Which version is appropriate for testing on real hardware?
use the latest one
Dear gnoboot, could you give links for the live CD? There are many versions,
i.e. SUSE, Ubuntu and/or some other live CD.
Or simply state what you use on your side?
Thanks!
Any livecd will work as long as it has dd. I'm using system rescue cd.
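A minimal sketch of the dd step from a live CD. `/dev/sdX` is a placeholder for the real target device (double-check it with `lsblk` first); for safety, this demo writes to a scratch file instead of a real device so nothing gets overwritten.

```shell
# Writing the gnoboot .img to the boot device from a live CD (sketch).
IMG=/tmp/gnoboot-demo.img
TARGET=/tmp/fake-usb.bin   # on real hardware this would be /dev/sdX -- verify with lsblk!
dd if=/dev/zero of="$IMG" bs=1K count=32 2>/dev/null    # stand-in for the gnoboot .img
dd if="$IMG" of="$TARGET" bs=1M conv=fsync 2>/dev/null  # the actual write step
cmp -s "$IMG" "$TARGET" && echo "image written and verified"
```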
@Diverge,
I was able to reproduce the issue reported on ESXi. I will try to fix it soon.
-
Hello
Sorry for a dumb question maybe, but i cannot find the answer.
What's the purpose/difference of the gnoboot version vs a 'traditional' Trantor/VM install?
thanks a lot for all the work here!
Check features from page one.
-
No, unless someone recompiles the kernel for Marvell Armada. I can do it for you, but it's not free.
-
Quick question: if you can't boot up while drives are connected, but have to remove all drives in order for XPEnology to kick in, is there something I should be looking for? A conf file? I tried asking in a few threads and came up empty, so I'm wondering if there's something else I might be missing. XPEnology works, but if I hit reboot with the drive connected, forget about it - the machine is useless.
IMHO, if the driver for your controller was built as a module, you can insmod it once XPEnology has booted to activate the disks.
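A hedged sketch of that insmod approach. The module name (`ahci` here) and the `.ko` path are examples only; substitute whatever driver your controller actually needs, and note the insmod line is commented out because it needs root on the NAS itself.

```shell
# Load a controller driver after boot (sketch; module name is an example).
MOD=ahci   # substitute your controller's driver module here
if grep -qw "$MOD" /proc/modules; then
    echo "$MOD already loaded"
else
    echo "would load $MOD"
    # insmod "/lib/modules/${MOD}.ko"   # uncomment on the NAS (needs root; path assumed)
fi
```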
-
Alpha4 is available.
To load and unload drivers, use the following parameters in GRUB:

load:

```
kernel /zImage root=/dev/md0 ihd_num=1 netif_num=0 syno_hw_version=DS3612xs vga=0x370 mod_add=igb,zram
```

unload:

```
kernel /zImage root=/dev/md0 ihd_num=1 netif_num=0 syno_hw_version=DS3612xs vga=0x370 mod_del=ata_piix,e1000
```
-
My kernels are built w/o the default wireless drivers.
-
Yes, give me a list of wireless adapters you need.
-
I've fixed the issue but haven't released the working boot image yet.
-
It's a known issue.
-
You need to remove the driver from the ramdisk; the code above will only work after DSM is installed. I will make it easier to add/remove drivers in the next release.
-
Add the following code in your /etc/rc.local:

```
touch /etc/rc.local
chmod 755 /etc/rc.local
echo "rmmod ata_piix" > /etc/rc.local
```
-
Haven't you applied the ihd_num setting in your grub.conf? There's a kernel traceback related to probing all the attached disks in your VM. Try changing ihd_num to 1 and let me know how it goes.
-
Don't forget to boot with the debug option :)
XPEnology gnoBoot
in DSM 5.2 and earlier (Legacy)
Did you update DSM 5 to update 1 first before moving your array?