About idstein

  1. The tool sources do not matter! You only need a custom gcc toolchain if you are targeting some obscure CPU architecture. For the Intel/AMD x86/AMD64 instruction set, which almost every XPenology build uses, we do not depend on those toolchains. You can compile and build the kernel with a standard gcc toolchain, cross-compiling for x64 on an x86 machine (or similar) if necessary. That's no magic at all.
  2. NEW DSM 5.1-5004 AVAILABLE !!

    Well, I think it's beyond my skills. I first put the list of files into a text file, resulting in something like this:

    DECIMAL       HEXADECIMAL   DESCRIPTION
    --------------------------------------------------------------------------------
    0             0x0           ELF 64-bit LSB executable, AMD x86-64, version 1 (SYSV)
    2113122       0x203E62      Zlib compressed data, default compression
    2118901       0x2054F5      Zlib compressed data, default compression
    2600084       0x27AC94      Zlib compressed data, best compression
    2866326       0x2BBC96      Zlib compressed data, default compression
    3241685       0x3176D5      LZMA compressed data, properties: 0xB8, dictionary size: 2097152 bytes, uncompressed size: 807683271 bytes
    3924944       0x3BE3D0      Zlib compressed data, compressed
    3950308       0x3C46E4      Zlib compressed data, compressed
    4084752       0x3E5410      Zlib compressed data, best compression
    4205012       0x4029D4      Zlib compressed data, best compression
    4321820       0x41F21C      Zlib compressed data, best compression
    4532468       0x4528F4      Zlib compressed data, best compression

    But then I fail. I can't seem to find the rd image, or I don't know how to use dd and cpio. I've used some commands like:

    grep -abo 0707 vmlinux | head -n1
    grep -abo TRAILER vmlinux | head -n1

    and these give me some numbers back. I then use dd with an offset to export to .cpio, but I can't seem to extract the cpio. It's beyond my skillset...

    The following will automatically extract any found compressed data (including the initramfs):

    binwalk -e vmlinux
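To make the dd-plus-decompress step concrete, here is a self-contained toy version of the carve. All file names and the offset are invented for the demo; on the real image you would use the decimal offset of the LZMA stream (3241685) from the binwalk listing above:

```shell
# Toy carve of an LZMA stream at a known offset; vmlinux.demo stands in
# for the real vmlinux binary.
set -e
work=$(mktemp -d); cd "$work"
printf 'initramfs payload' | xz --format=lzma > payload.lzma
{ head -c 1000 /dev/zero; cat payload.lzma; } > vmlinux.demo

# binwalk would report the stream at decimal offset 1000; carve it with dd
dd if=vmlinux.demo of=carved.lzma bs=1 skip=1000 2>/dev/null
xz -dc < carved.lzma    # prints: initramfs payload
```

On a real initramfs you would pipe the decompressed output into `cpio -id` as well.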
  3. NEW DSM 5.1-5004 AVAILABLE !!

    My search for the first occurrence of the CPIO magic header and the ending string was rewarded with a 35-byte cpio package. Consequently, I believe the vmlinux contains multiple cpio sections and you have to find the right one. I've used 'binwalk' to find the correct CPIO package. As there are many false positives inside the vmlinux binary, I simply decompressed everything binwalk found.
  4. NEW DSM 5.1-5004 AVAILABLE !!

    That's what I've done to extract the ramdisk from Nanoboot in order to include an additional driver module in it and make some changes in the rc files:

    1. Extract zImage to a vmlinux binary using the scripts/extract-vmlinux utility in the kernel source tree.
    2. In the vmlinux binary, search for the cpio magic string '0707'. That gives us the starting offset of the ramdisk.
    3. In the vmlinux binary, search for the end-of-cpio magic string 'TRAILER!!!'. From this we can calculate the size of the cpio binary.
    4. Strip the cpio ramdisk binary out of vmlinux using the dd utility with the starting offset and size.
    5. Extract the files from the ramdisk using cpio.

    After making your changes, to include your ramdisk files in the kernel you do the reverse procedure:

    1. Use cpio to pack the files into one binary.
    2. Use make config in the kernel sources to set the initramfs source to your file (somewhere in the 'General setup' menu).
    3. Make a kernel zImage, which will include your ramdisk. Put it in the .IMG, or simply copy it to the mounted flash disk, overwriting the old version.

    So I've just checked Nanoboot's initramfs and the scripts are a little bit different. In detail:
    - The kernel boot parameter upgrade replaces only some lines in /etc/VERSION instead of the complete file.
    - The init scripts always remove /dev/synobios if present.
    - The init scripts replace some standard Synology binaries such as scemd.

    You can find the compressed initramfs (gzipped) here: ... fs.gz?dl=0
  5. NEW DSM 5.1-5004 AVAILABLE !!

    Yet a better place would be inside rd.gz, which is an lzma-compressed cpio archive. I think the hda1.tgz is used for the SATA DOM (or only the installed Synology partition), whereas the rd.gz is the actual ramdisk used as the rootfs during system startup of Synology systems.

    @Difference between nanoboot: From what I see from my inspection, it basically uses the old way of assembling a ramdisk fs, that is, bundling it inside the kernel image (= zImage). See ... tramfs.txt for further details. However, there are also different boot parameters, such as upgrade=, that are not present in our currently available gnoboot source code. Therefore, I believe there are changes (even though probably very subtle ones) in the nanoboot source.
  6. NEW DSM 5.1-5004 AVAILABLE !!

    Looking at the PAT files from DSM 5.0 (revision 4458 or 4493) and comparing them to the most recent DSM 5.1 PATs (revision 5004 or 5021), you will notice a huge difference in the updater executable. They seem to have moved all the basic C lib, BIOS and crypto stuff out to H2OFFT-Lx64. In addition, the updater lacks the references to syno_dual_head (USB and SATA DOM parallel boot). Consequently, I assume they have changed the way the updater works. It has nothing to do with the kernel, which we can actually build thanks to GPL source code access. The updater, however, is bundled with the PAT and inaccessible to us. So far I have had no success modifying the updater, due to my lack of ASM skills, but maybe someone else can continue here. It does not contain any reference to '/dev/synoboot' or '/dev/synoboot2'.
  7. NEW DSM 5.1-5004 AVAILABLE !!

    Could you also please check the following: `ll /lib/libsyno*`? From inspecting a most recent PAT (5010), I believe there is a new lib called libsynovfs. Could that mean they have moved the virtual file system to user space?
  8. NEW DSM 5.1-5004 AVAILABLE !!

    I've just been checking on my XPenology NAS: there is a synoacl_vfs module, but there isn't a synoacl_ext4.

    diskstation> ll /lib/modules/syno*
    -rw-r--r-- 1 root root 23840 May 30 2014 /lib/modules/synoacl_vfs.ko
    -rw-r--r-- 1 root root 66416 Jun  8 2014 /lib/modules/synobios.ko

    Yet XPenology tries to load the kernel modules synoacl_vfs AND synoacl_ext4.

    diskstation> lsmod | grep acl
    synoacl_vfs 16275 2

    Consequently, I believe there formerly was a synoacl_ext4 module, but by now it has become part of the kernel's ext4 module via the kernel flag CONFIG_EXT4_FS_SYNO_ACL. Could someone check on a real device which modules are available, using `ll /lib/modules/syno*`? Thanks!
  9. NEW DSM 5.1-5004 AVAILABLE !!

    No, that basically ensures that the path /sbin/sysctl is present and, only if that's the case, sets the system control values. Actually, at that place I would have used -x instead, to check that /sbin/sysctl is executable. Refer to for further details.
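The difference between the two test operators can be seen in a small self-contained sketch (the file name is an invented stand-in, not the real /sbin/sysctl):

```shell
# -e only tests that a path exists; -x additionally requires execute permission.
set -e
work=$(mktemp -d)
touch "$work/sysctl.fake"                            # invented stand-in file
[ -e "$work/sysctl.fake" ] && echo "exists"          # prints: exists
[ -x "$work/sysctl.fake" ] || echo "not executable"  # prints: not executable
chmod +x "$work/sysctl.fake"
[ -x "$work/sysctl.fake" ] && echo "now executable"  # prints: now executable
```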
  10. NEW DSM 5.1-5004 AVAILABLE !!

    As this is a kernel image, you cannot boot it directly in VirtualBox. VirtualBox simulates a complete virtual machine, in contrast to qemu/KVM. Consequently, it switches directly to the boot device's MBR and tries to find a bootloader, which is not present in your img file. Take a look at for further details on how to package your compiled kernel image with GRUB (Legacy) into a bootable ISO.
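One classic GRUB (Legacy) recipe looks roughly like this. The stage2_eltorito path varies by distro, the menu.lst content is an assumption, and the ISO build only runs if GRUB Legacy is actually installed:

```shell
# Sketch: wrap a built bzImage and GRUB Legacy's stage2_eltorito into a
# bootable ISO using the standard El Torito mkisofs flags.
set -e
mkdir -p iso/boot/grub
cat > iso/boot/grub/menu.lst <<'EOF'
default 0
timeout 3
title DSM kernel
kernel /boot/bzImage
EOF
# Only attempt the ISO build if GRUB Legacy is present on this machine
stage2=$(ls /usr/lib/grub/*/stage2_eltorito 2>/dev/null | head -n1 || true)
if [ -n "$stage2" ]; then
    cp "$stage2" iso/boot/grub/
    cp arch/x86/boot/bzImage iso/boot/   # your freshly built kernel
    mkisofs -R -b boot/grub/stage2_eltorito -no-emul-boot \
            -boot-load-size 4 -boot-info-table -o dsm.iso iso
fi
```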
  11. +1 There's even a standard configuration for Synology KVM builds in the kernel! But my kernel build does not seem to start; I will investigate that if I have some time to spare. Even though it's currently possible to run DSM with decent performance on a Xen domU, virtio drivers are still missing in the Nanoboot build. That means you need to turn on Viridian virtualisation and HVM (instead of PV). Here's a cut-down version of my xl configuration:

    builder = 'hvm'
    name = "synology"
    memory = 4096
    vcpus = 2
    # Nanoboot image (first; read-only) and a virtual disk as boot disk to install apps to
    disk = ['file:/etc/xen/images/nanoboot/latest.img,hda,r','file:/etc/xen/images/synology.img,hdb,w']
    #boot = 'dc'
    vif = [ 'mac=00:16:3E:58:D7:8A,bridge=xenbr0,model=e1000' ]
    # No virtio tty (=console) drivers => use VNC as a console replacement
    vnc = 1
    # ACPI and APIC events
    acpi = 1
    apic = 1
    # Windows virtualization
    viridian = 1
    # PCI passthrough of an Intel SATA controller to get physical disk control
    xen_platform_pci = 1
    pci = ['00:1f.2']