
RedPill - the new loader for 6.2.4 - Discussion



@Elpatii The checkbox was checked but I had nothing; I unchecked it, validated, then checked it again, and now the beta pane is activated...

I will be able to test with the beta version whether the problems have been corrected when using several hundred GB of on-demand synchronisation.


Update

So a small update first: we've got the PCI emulation layer working!

 

The PCI standard is truly an awful thing, which kernel developers found out too and put a fair warning at the top of their PCI docs:

The world of PCI is vast and full of (mostly unpleasant) surprises.
Since each CPU architecture implements different chip-sets and PCI devices
have different requirements (erm, "features"), the result is the PCI support
in the Linux kernel is not as trivial as one would wish.

 

Linux doesn't have any high-level APIs to add devices or buses. Because of this we had to resort to emulating a custom hardware platform and exposing a virtualized PCI-compliant memory region which the PCI subsystem can then consume. This is as low-level as you can go in the kernel. We will not go into details here, as it is rather complex. For anyone interested there's a MASSIVE header comment essay in internal/virtual_pci.c which explains the fundamentals + a bunch of links to more details.
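
To give a rough idea of what "exposing a virtualized PCI-compliant memory region" boils down to, here is a heavily simplified, hypothetical sketch (not the actual RedPill code; names are made up, error handling is skipped, and on x86 a proper struct pci_sysdata has to be passed instead of NULL, as the diff later in this thread shows): a custom pci_ops answers config-space accesses from a plain buffer, and a fake bus is scanned through it.

    #include <linux/module.h>
    #include <linux/pci.h>
    #include <linux/string.h>

    /* Hypothetical 256-byte config space of a single emulated function (00.0).
     * It would first be filled with a valid header (vendor/device ID, class
     * code, BARs) - omitted here. */
    static u8 vdev_cfg[256];

    /* The PCI core calls these to access config space on our fake bus. */
    static int vpci_read(struct pci_bus *bus, unsigned int devfn,
                         int where, int size, u32 *val)
    {
        if (devfn != 0 || where + size > sizeof(vdev_cfg))
            return PCIBIOS_DEVICE_NOT_FOUND;

        *val = 0;
        memcpy(val, &vdev_cfg[where], size); /* serve the read from our buffer */
        return PCIBIOS_SUCCESSFUL;
    }

    static int vpci_write(struct pci_bus *bus, unsigned int devfn,
                          int where, int size, u32 val)
    {
        return PCIBIOS_SUCCESSFUL;           /* silently ignore writes here */
    }

    static struct pci_ops vpci_ops = {
        .read  = vpci_read,
        .write = vpci_write,
    };

    static int __init vpci_demo_init(void)
    {
        /* Claim a so-far-unused bus number; on x86 the real module passes an
         * arch-specific struct pci_sysdata here instead of NULL. */
        struct pci_bus *bus = pci_scan_bus(0x42, &vpci_ops, NULL);

        if (!bus)
            return -ENOMEM;

        pci_bus_add_devices(bus);            /* make the scanned devices visible */
        return 0;
    }
    module_init(vpci_demo_init);
    MODULE_LICENSE("GPL");

Once the scan finds a valid config-space header in that buffer, the PCI core treats the emulated function like any other device.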

 

This is also just scratching the surface of PCI itself. However, it is sufficient to emulate a device's presence and basic behavior if needed. The module comes with an easy yet flexible API, so adding a new device can be accomplished with just a few lines, as shown below for the 3615xs (shim/pci_shim.c):

    allocate_vpci_dev_dsc();                                                    /* grab a fresh virtual device descriptor */
    dev_dsc->vid = PCI_VENDOR_ID_MARVELL_EXT;                                   /* vendor ID: Marvell */
    dev_dsc->dev = 0x9235;                                                      /* device ID (88SE9235 AHCI controller) */
    dev_dsc->rev_id = 0x11;                                                     /* revision ID */
    dev_dsc->class = U24_CLASS_TO_U8_CLASS(PCI_CLASS_STORAGE_SATA_AHCI);        /* base class: mass storage */
    dev_dsc->subclass = U24_CLASS_TO_U8_SUBCLASS(PCI_CLASS_STORAGE_SATA_AHCI);  /* subclass: SATA */
    dev_dsc->prog_if = U24_CLASS_TO_U8_PROGIF(PCI_CLASS_STORAGE_SATA_AHCI);     /* programming interface: AHCI */
    add_vdev_and_return();                                                      /* register the device on the virtual bus */

 

There's however a major bug which will not cause problems for users, but is a PITA for developers: once a PCI device is added it cannot be removed and then re-added under the same BDF address. The API the kernel provides for this is simply broken (at least on DSM's v3.10 kernel). The device removes just fine (which is no small feat here), but the sysfs subsystem leaves some dangling entries in /sys. When we try to populate the device again the kernel crumbles in its internals, complaining about duplicated sysfs entries in /sys/devices/. It happens even when the official sysfs interface available in userland is used (writing "1" to /sys/bus/pci/devices/..../remove). There may be some way around it but it's not crucial.

 

As a side note: does anybody have a 3615xs and can dump "lspci -tvnn" for us? Jun's loader doesn't have a full emulation so it does not contain all the information.

 

==============================================================

 

Answers

Now let us respond to all the fascinating comments (we love this community already!).

 

On 7/13/2021 at 4:38 PM, Vortex said:

Not errors. Just status notifiers/commands going to the PIC16F1829

According to:
(...)

It would be great if these commands (GetMicroP is most important) (together with /dev/ttyS1) were shimmed in the LKM!

13 hours ago, Vortex said:

I realize that @ThorGroup did not understand the purpose of all the synobios_ops. Here's a complete list (...)

Huge thanks for both of these! We will add them to the code & docs soon. While searching for these constants we ran across a post on a random blog which has some details too: https://smallhacks.wordpress.com/2012/04/17/working-with-synology-hardware-devsynobios-and-devttys1/ (archived here).


Regarding the LKM emulation, the GetMicroP (& others) are already there - see shim/bios/bios_shims_collection.c. However this is not an exact science, as the replacement is not done via any API (as there obviously isn't any) but rather by patching the memory of a vtable (this is absolutely crazy but effective!). The code for it is in shim/bios_shim.c and it patches the vtable when the module reaches the "ready" state. This means there could be a brief window during which the bios module calls the original function.
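
To illustrate the idea, here is a conceptual sketch only, not the actual shim code: the structure and names below are hypothetical stand-ins for the real synobios_ops table, and the real code additionally has to deal with locking and write-protected pages.

    /* Hypothetical stand-in for the synobios ops table we want to patch. */
    struct example_bios_ops {
        int (*get_microp_id)(int *id);
    };

    static int (*org_get_microp_id)(int *id);   /* original, kept for restore */

    /* Our replacement: answer immediately instead of talking to /dev/ttyS1. */
    static int shimmed_get_microp_id(int *id)
    {
        *id = 0x01;                              /* made-up value for the sketch */
        return 0;
    }

    /* Called once the bios module is loaded and its ops table has been located;
     * overwriting the function pointer redirects all future calls to our shim. */
    static void shim_bios_ops(struct example_bios_ops *ops)
    {
        org_get_microp_id = ops->get_microp_id;
        ops->get_microp_id = shimmed_get_microp_id;
    }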

 

The code can be moved to run at the "loading" stage, but then there's a risk that some later bios module code overrides our override ;D We can actually do both here if it causes problems. It's also possible to release the spinlock, start a separate thread and just brute-force the replacement every ~20-30ms for some time (say 5s).
However, we don't believe such extreme measures, nor responding to the first GetMicroP, are needed here because:

  •  UART is inherently lossy and that's why the module repeats requests if they fail
  •  It's still WAY faster than any ioctl to the bios module
  •  Since the only method of kernel module <=> userspace communication is the ioctl, and we're replacing the vtable faster than /dev/synobios can appear, this method should be sufficient

 

 

On 7/13/2021 at 9:46 PM, T-REX-XP said:

It would be even greater if we could process it via the hook )) to fix fan speed or beep via the webui ))

As I mentioned above it is currently nullified via a hook.

But actually... this is not a bad idea :) Instead of nullifying the requests they can be used for something good. The beeps are very easy, as PC speaker interactions are practically baked into the PC platform (see how simple the pcspkr driver is). As for the fan speed, it can be done too, but to make it portable it should probably call some user-space daemon, akin to how hot-plug handles events from the kernel (e.g. lm-sensors).
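
As a taste of how little is needed for the beep part, here is a rough sketch in the spirit of the in-tree pcspkr driver (the helper names are made up, the real driver also takes the i8253 lock, and freq_hz would come from the intercepted request):

    #include <linux/io.h>
    #include <linux/timex.h>    /* PIT_TICK_RATE */

    /* Program PIT channel 2 and gate it to the speaker to start a tone. */
    static void pc_beep_on(unsigned int freq_hz)
    {
        unsigned int count = PIT_TICK_RATE / freq_hz;

        outb_p(0xB6, 0x43);                   /* channel 2, square-wave mode */
        outb_p(count & 0xff, 0x42);           /* divisor, low byte */
        outb_p((count >> 8) & 0xff, 0x42);    /* divisor, high byte */
        outb_p(inb_p(0x61) | 0x03, 0x61);     /* enable gate + speaker data */
    }

    /* Disconnect the speaker again to stop the tone. */
    static void pc_beep_off(void)
    {
        outb_p(inb_p(0x61) & ~0x03, 0x61);
    }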

 

 

On 7/13/2021 at 7:09 PM, mcdull said:

Is the blocker only for bare metal?  Can basic functionality work in QEMU?
Guess most bare-metal boxes can run QEMU, and it is far easier to maintain with a virtual environment.

We are testing and developing the code using a Proxmox cluster (= QEMU on steroids). Bare metal, as long as drivers are there, is no different. However, VMs are much better for testing (you can restore a live system from a snapshot in literally 1-2s after a kernel crash, instead of doing a full reboot taking a minute on a desktop and even 10 minutes on a server).
Besides, with the new powerful hardware it makes less and less sense to run NAS software on bare metal. Even assuming a given DSM build could utilize 64 cores, what's the point? There's Docker and there's VMM, but there are far superior tools to do the job. DSM is great as a storage platform with an occasional container here and there IMHO.

 

 

On 7/13/2021 at 11:13 PM, Eoghan said:

Looking forward to this release. However, I do have a question: will users on Jun's loader be able to upgrade to RedPill without the need of starting from scratch? Also, if you are planning on making a website for this release, I can host it for you free of charge as a thanks from the community, just send me a PM 😉

Yes, we are deliberately trying to keep e.g. config options the same to avoid confusion and make the upgrade process easy. At worst the DSM will ask to be "repaired" (which doesn't remove anything). I saw your PM too.

As for the website, when things stabilize a bit we will most likely add a GH Pages one, as this is the simplest to manage. Also, since the XPEnology community is concentrated here, we believe the best place for releases will be a separate thread (which people can follow, etc.).

 

 

On 7/14/2021 at 5:54 AM, ilovepancakes said:

P.S. Really looking forward to a DSM v7 compatible solution, sign me up as a tester 😅

13 hours ago, UnknownO said:

Cool! So... now it can work on QEMU?
PS: I want to join the test

Actually everything needed to play with it right now is in the repos ;) (open source ftw).

You need some GRUB-based bootloader with a cmdline exactly like Jun's loader, which boots a modified kernel image (binary patches are in the research repo). The kernel module can be compiled using a standard sequence of commands (it's also noted in the readme file). When we have a generator, which will be a massive shell script to keep it portable, it will be much easier to do it automatically.

 

 

On 7/14/2021 at 6:05 AM, rok1 said:

Not sure what 7.0 will give us aside from some minor updates aesthetically and functionally. DSM 7 for my 1621+ is still only on kernel 4.4.180. To be able to use XPE 918+ with more current hardware we're going to need a newer kernel.

21 hours ago, flyride said:

This is a bit OT, but we get what we get.  Synology is selling you a 14nm 1st gen circa 2018 CPU as a new product.  It's no mistake that older gen hardware tends to work better.  And even if a platform were not supported under kernel 4.4.x, all Syno has to do is backport the kernel mods for that particular CPU/chipset/NIC and they could care less about other new hardware.

 

I don't think the objective of redpill or any other loader should necessarily be to be compatible with ALL hardware, particularly the newest stuff.  It's great if it can work out that way, but I would consider a loader development a success if it could run on LIKE hardware, particularly if it can be emulated via virtual machine.

The kernel version number is not REALLY that meaningful in this case. They backport a lot of things (see the massive file present in the kernel sources under synoconfigs/back-porting-records.txt). However, the other side of the story is, as @flyride mentioned: they obviously target their own hardware. They aren't on the cutting edge for many reasons: stability of the platform, the R&D cost of completely custom motherboards, and frankly no pressing need. It's not like someone will use DSM with even 40G network cards (I know, someone for sure did, but that's a different class of solutions).

 

As for backporting deep hardware changes, this is VERY tricky. Oftentimes new hardware milestones require internal kernel changes (e.g. support for NUMA). The kernel development velocity is insane, and they have an internal policy to never break user space while not being afraid of major breakages in internal kernel space. This makes backporting in the kernel a nightmare. One of us used to do it as a daily job - I will not quote what he said about that because it's a public space :D

 

If anyone's interested there's an old but great video from one of the Linux core maintainers describing the issue with having a custom fork: https://www.youtube.com/watch?v=L2SED6sewRw

 

 

 

On 7/14/2021 at 6:10 AM, pro_info said:

Personally, what I expect from DSM 7 is the improvement in RAID rebuild, "smart" bad-disk detection, improved SSD cache management and access to Synology Drive 3.0, because v2 is still buggy as hell when using on-demand sync under Windows... As for the interface, of course it's nice, but I rarely go on it once everything is set up.

On 7/14/2021 at 6:47 AM, ilovepancakes said:

The latest security updates and some features which actually do improve DSM. I agree a lot of v7 is fluff but is always nice to run on latest builds especially as they start introducing more new features and eventually 7.1, 7.2, etc.

It's a massive overhaul of the UI. However, the stability for now is far from perfect. Honestly, we believe it shouldn't be called stable. It will get better with 7.1/7.2. Problems are especially visible on RS models, where probably not a lot of people tested the betas.

As for on-demand sync under Windows, one of our Windows guys mentioned that this has more to do with the Windows side of things and NTFS being difficult to work with. Sure, there are a lot of things to improve on the DSM side, but the client side is way more problematic.

 

 

19 hours ago, rok1 said:

You're not wrong of course, but hardware continues to improve and new features are no longer compatible. Since my 10th gen i3 won't work (Plex transcoding, etc.) I had to buy older stuff to get my desired outcome (4c/8t). That used i7-7700 and an older motherboard cost more than the i3-10100 and motherboard, and it's technically slower.

 

I've been using XPEnology since 5.x. I use it instead of something like Unraid or OpenMediaVault because I have official Synology units and love the ecosystem. Folks like IG88 and others have done standout work getting us compatibility drivers for newer NICs, HBAs, IGPs etc. for use above baseline installs.

 

Regarding virtual machines, I'm not a huge fan of using them. I like them in my OS for use, but not as the primary hypervisor with multiple OSes or containers running off of them. If our only way forward for 6.2.4/7.0 is via VM, so be it, but many users are not going to want to dig that deep to get it to work. Even ESXi 7.0u2 is not compatible out of the box and required some finagling to get a 2.5GbE driver installed on the Z490 motherboard I attempted it with. I finally gave up and decided to drop $350 for a used 7700 CPU/motherboard to run bare metal.

We are definitely not excluding bare metal here.

Loader aside, you may want to try Proxmox instead of ESXi. While ESXi is more of an enterprise solution, it usually is only stable (if even that) on officially supported platforms. The Linux KVM subsystem has way more developers and works great.
We actually had a discussion (unrelated to RP) about that, and we believe the hardware has crossed a tipping point. Especially with the new AMD server platforms there aren't many scenarios where you want to dedicate the whole machine to a single task. With DSM especially you cannot really utilize the full power of a very fast box (the 40-100G networking kind of thing). It's a SOHO/medium-business system after all. The overhead introduced by VMs is very small in comparison to what the hardware is capable of, and you get many features which you cannot get on bare metal.

 

 

19 hours ago, test4321 said:

Sounds like a CIA / FBI op.

Are you seriously saying that a three-letter agency is interested in running an OS made by a Taiwanese company on 3rd-party hardware and is sharing the progress and source code on GitHub?

I guess we do have connections with the NSA since we use Ghidra and with Russia because we use IDA. No, but seriously: as cool as their tech is none of us is affiliated with them ;)

 

 

13 hours ago, smoojay said:

Does the anonymization of dates really make sense?

Metadata is king - if we can strip it, we will. However, here it's actually a by-product of the (spaghetti-code) tools we use to pull stuff into git from other VCSs. It doesn't break anything though, as git doesn't rely on timestamps for anything besides showing them.

 

 

11 hours ago, Aigor said:

It would be amazing, if possible, to add ZFS filesystem support

Adding ZFS support isn't hard - it's just a module and some userland tools. However, nothing in the DSM will work with it... so practically you get a FS which you can mount and manage from the CLI, but that's about it. Syno bet on btrfs (didn't they even hire some btrfs devs?) and built arrays on rock-solid md. Besides, ZoL isn't as stable as ZFS on BSD itself. We run many Linux boxes with btrfs and many TrueNAS instances... but ZoL... well, we have a couple of test VMs ;)

 

 

==============================================================

p.s. Is there any way to switch the post editor to some text mode with markup? The default one doesn't even allow copying with quotes etc., which forced a rewrite of this post after an accidental "Back" (it "restored" an empty post).

Edited by ThorGroup

On 7/13/2021 at 5:47 PM, ilovepancakes said:

 

The latest security updates and some features which actually do improve DSM. I agree a lot of v7 is fluff but is always nice to run on latest builds especially as they start introducing more new features and eventually 7.1, 7.2, etc.

Does it now natively support exFAT? At some point, that was supposed to be included by default in newer Linux kernels.

 

https://www.techrepublic.com/article/synology-ceo-why-a-nas-is-the-solution-to-tiny-storage-space-on-mobile-devices/


I want to know whether this program can be used for the 918+. I downloaded the 25426 Linux source code for the 918+, but I got the following error during compilation:

scripts/Makefile.build:269: recipe for target '/home/dog/dog/nas/redpill-lkm/internal/stealth/sanitize_cmdline.o' failed
make[2]: *** [/home/dog/dog/nas/redpill-lkm/internal/stealth/sanitize_cmdline.o] Error 1
Makefile:1411: recipe for target '_module_/home/dog/dog/nas/redpill-lkm' failed
make[1]: *** [_module_/home/dog/dog/nas/redpill-lkm] Error 2
make[1]: Leaving directory '/home/dog/dog/nas/linux-4.4.x'
Makefile:27: recipe for target 'all' failed
make: *** [all] Error 2
5 hours ago, ThorGroup said:

Actually everything needed to play with it right now is in the repos ;) (open source ftw).

You need some GRUB-based bootloader with cmdline exactly like for the Jun's loader which boots modified kernel image (binary patches are in the research repo). The kernel module can be compiled using a standard sequence of commands (it's also noted in the readme file). When we have a generator, which will be a massive shell script to be portable, it will be much easier to do it automatically.

 


7 hours ago, gericb said:

Does it now natively support exFAT? At some point, that was supposed to be included by default in newer Linux kernels.

That's kernel v5.7 territory. However, since they charge for exFAT, they may have some agreement with MS to license it fully legally.

 

3 hours ago, UnknownO said:

I downloaded this file and compiled it as described in readme.md, and added the compiled file to extra.lzma. But it's still unable to start 6.2.4.

You cannot mix Jun's kexec kernel patching with RedPill LKM. You have to patch the kernel with binary patches.

 

2 hours ago, yanjun said:

I want to know whether this program can be used for 918+. I downloaded the 25426 linux source code of 918+, but I got the following error during compilation


scripts/Makefile.build:269: recipe for target '/home/dog/dog/nas/redpill-lkm/internal/stealth/sanitize_cmdline.o' failed
make[2]: *** [/home/dog/dog/nas/redpill-lkm/internal/stealth/sanitize_cmdline.o] Error 1
Makefile:1411: recipe for target '_module_/home/dog/dog/nas/redpill-lkm' failed
make[1]: *** [_module_/home/dog/dog/nas/redpill-lkm] Error 2
make[1]: Leaving directory '/home/dog/dog/nas/linux-4.4.x'
Makefile:27: recipe for target 'all' failed
make: *** [all] Error 2

 

We didn't test it with v4 yet. However, can you post the full error log? Make sure you prepare the kernel sources first as described in the readme.

 


30 minutes ago, ThorGroup said:

You cannot mix Jun's kexec kernel patching with RedPill LKM. You have to patch the kernel with binary patches.

Wow, this touches a blind spot in my knowledge. Can you share how to compile the kernel?

PS: I also got an error when compiling against the Linux 4.x kernel.


1 hour ago, ThorGroup said:

We didn't test it with v4 yet

There are various problems with v4; the ones I saw are with 'set_memory_rw/ro', which are not exported in v4 anymore. You have to find them via 'kallsyms_lookup_name'. In addition, I had to revert the last two commits, 'Add virtual PCI subsystem & preliminary PCI shim' and 'Implement kernel cmdline sanitization', to get it compiled with a v4 kernel. Tests pending...
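
A minimal sketch of that lookup approach (the symbol names are real kernel symbols, the wrapper is hypothetical, and error handling is kept to a bare minimum):

    #include <linux/errno.h>
    #include <linux/kallsyms.h>

    /* set_memory_rw()/set_memory_ro() are not exported on the v4 kernels,
     * so resolve them at runtime via kallsyms instead of linking to them. */
    static int (*_set_memory_rw)(unsigned long addr, int numpages);
    static int (*_set_memory_ro)(unsigned long addr, int numpages);

    static int resolve_set_memory_helpers(void)
    {
        _set_memory_rw = (void *)kallsyms_lookup_name("set_memory_rw");
        _set_memory_ro = (void *)kallsyms_lookup_name("set_memory_ro");

        return (_set_memory_rw && _set_memory_ro) ? 0 : -ENOENT;
    }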

Edited by smoojay

/root/redpill-lkm/internal/stealth/sanitize_cmdline.c: In function 'locate_proc_cmdline':
/root/redpill-lkm/internal/stealth/sanitize_cmdline.c:114:35: error: 'struct proc_dir_entry' has no member named 'next'
     while (proc_entry = proc_entry->next, proc_entry) {
                                   ^~

That's the v4 error log above, and another change needed:

# git diff
diff --git a/Makefile b/Makefile
index 30f7a24..6490e83 100755
--- a/Makefile
+++ b/Makefile
@@ -1,4 +1,4 @@
-LINUX_SRC := "./linux-3.10.x-bromolow-25426"
+LINUX_SRC := "./linux-4.4.x-apollolake-25426"
 PWD   := $(shell pwd)

 SRCS-$(DBG_EXECVE) += debug/debug_execve.c
diff --git a/internal/virtual_pci.c b/internal/virtual_pci.c
index 5858dd7..0a0f37e 100644
--- a/internal/virtual_pci.c
+++ b/internal/virtual_pci.c
@@ -306,7 +306,7 @@ static struct pci_ops pci_shim_ops = {
 static struct pci_sysdata x86_sysdata = {
     .domain = PCIBUS_VIRTUAL_DOMAIN,
 #ifdef CONFIG_ACPI
-    .acpi = NULL,
+    .companion = NULL,
 #endif
     .iommu = NULL
 };

 

Edited by jumkey

13 hours ago, ThorGroup said:

But actually... this is not a bad idea :) Instead of nullifying the requests they can be used for something good.

 

Great to see the RedPill talk is back!

 

Why not also use an Arduino/ESP or even a PIC16 etc. to actually perform some of the actions on real HW? E.g. power on/off schedules, LED control, etc...

 


1 minute ago, yanjun said:

So could we establish a developer discussion group, similar to Discord, Telegram or others? I don't know if this is compliant, but I will try to establish one first, and everyone is welcome to join so that we can iterate on this project more efficiently.
Telegram: https://t.me/joinchat/eYcaYV4ywDY4MWU9

I think this is a good way to communicate.


Hi,

 

I'm a computer engineer by education, but not professionally.  A couple of questions about this excellent project:

 

  1. Are all of the DSM apps built into the kernel?  This is an interesting way to get this done... no wonder they don't want to just give it away.
  2. Since we're compiling from the actual DSM kernel, does this mean we can build this for any model that Synology makes (more than just what is offered by Jun)?  I can see some awesome projects coming out of this possibility.  

 

I'm excitedly looking forward to a tutorial on how to build this with your patches.

 

Thank you.


3 hours ago, calimansi said:

Hi,

 

I'm a computer engineer by education, but not professionally.  Couple questions for this excellent project:

 

  1. Are all of the DSM apps built into the kernel?  This is an interesting way to get this done... no wonder they don't want to just give it away.
  2. Since we're compiling from the actual DSM kernel, does this mean we can build this for any model that Synology makes (more than just what is offered by Jun)?  I can see some awesome projects coming out of this possibility.  

 

I'm excitedly looking forward to a tutorial how to build this with your patches.

 

Thank you.

I am an amateur; I spent a whole day trying to compile the other day, but only got redpill.ko. And I don't even know how to use redpill.ko or add it to the Synology kernel.

I also expect that the author of this patch will explain how to use it or compile the kernel. But... he seems to have not logged in to this forum for a while; he last logged in on Friday.


4 hours ago, calimansi said:

Since we're compiling from the actual DSM kernel, does this mean we can build this for any model that Synology makes (more than just what is offered by Jun)?  I can see some awesome projects coming out of this possibility.  

 

@calimansi Yes, even with Jun's loader you could modify it to support new models. I think there is a post somewhere on a Chinese forum where somebody did it for the 1019+ (if I am not mistaken). The issue here is that by developing a loader to support other models you won't gain anything. For example, the 3615xs and 3617xs support different numbers of cores, and the DS918+ supports booting from UEFI and NVMe. So taking the example of the 1019+ compared to the 918+, there is no difference between them; there's no point in making a new loader if they both offer the same stuff.


1 hour ago, gadreel said:

 

@calimansi Yes, even with Jun's loader you could modify it to support new models. I think there is a post somewhere on a Chinese forum where somebody did it for the 1019+ (if I am not mistaken). The issue here is that by developing a loader to support other models you won't gain anything. For example, the 3615xs and 3617xs support different numbers of cores, and the DS918+ supports booting from UEFI and NVMe. So taking the example of the 1019+ compared to the 918+, there is no difference between them; there's no point in making a new loader if they both offer the same stuff.

They modified it to the 1019+ simply because it has the exact same core hardware. And it is NOT an easy task to support an arbitrary emulated model, because of those PCI address issues as discussed.


I would be more inclined to see if we can develop a loader for the RackStation-branded versions. It looks like some of them support a higher core count, which may be valuable to those of us who would like to run more than 8 cores. It looks like the RS3621xs+ can support up to 16 threads, which I believe is more than what images such as the 918 support.


1 hour ago, wreslrstud said:

I would be more inclined to see if we can develop a loader for the Rackstation branded versions. It looks like some of them support a higher core count which may be valuable to us that would like to run more than 8 cores. It looks like the RS3621xs+ can support up to 16 threads which I believe is more than what images such as the 918 support.

 

DS3617xs offers 16 threads to you now.


Ah, thanks for the info flyride! I've been using the 918 distro for about 2 years now and love it. I know VMM isn't the best hypervisor out there, but it's easier and cheaper for me to just fire up a VM real quick to test than to spin up a second box running ESXi or something like that.


46 minutes ago, merve04 said:

Do RackStations even support Intel Quick Sync for HW transcoding?

 

No, the RS3621xs+ indirectly supports NVMe cache through an add-in card, but there is no current Synology device that supports all of the desired image-specific features that are available in either DS918+ or DS3617xs: NVMe, Quick Sync, RAIDF1 and 16 threads.


This topic is now closed to further replies.