RedPill - the new loader for 6.2.4 - Discussion



Thanks for helping me get up and running with ESXi, folks. I spent most of the evening upgrading from 6.5 to 7.0 (damn VIB dependencies!) but once that was out of the way the DSM installation was pretty smooth. The only problem I had was that the network it's on has no DHCP (for reasons), so I had to get the virtual terminal working so I could go in and manually give it an IP address, netmask and gateway to even be able to get at the web interface. Now it's up and running! 👍

 

It's got a 918+ image on it at the moment, given that I've got loads of CPUs and RAM in this host: is 918+ the best image for it? I'm hoping to eventually use this as a Plex host, so CPU, disk and network throughput will be important.

49 minutes ago, shibby said:

@pocopico or someone else, can you compile tn40xx module for me (Tehuti Networks Ltd. TN9710P 10GBase-T/NBASE-T Ethernet Adapter) for Edimax EN-9320TX-E?

 

I tried to compile the module myself, but without luck.

Edited by shibby
On 10/4/2021 at 4:45 PM, luckcolors said:

I wanted to ask: has anyone got a working compiled poweroff module for DSM 6.2.3?
Currently AHCI shutdown does not work, so you have to manually log into the dashboard to start a DSM shutdown.

 

If someone has one, I wouldn't mind helping to package it into a RedPill mod.

 

Since there is no power-button module yet, I've used a workaround in Proxmox using hookscripts.

When the VM is shut down from Proxmox, the hookscript's 'pre-stop' phase SSHes into DSM and issues a 'shutdown now' command.

 

elsif ($phase eq 'pre-stop') {

    # Third phase 'pre-stop' will be executed before stopping the guest
    # via the API. Will not be executed if the guest is stopped from
    # within, e.g. with a 'poweroff'.

    print "$vmid will be stopped.\n";
    print "Shutting down DSM\n";
    system("ssh -T USERNAME\@xx.xx.xx.xx 'sudo shutdown now'");
}

 

Works great!

6 minutes ago, scoobdriver said:

Is there a fork that supports the DS3615xs 7.0.1-42218 loader?

You can build it with Haydib's docker scripts by adding the relevant details to the config JSON file. NB: TTG don't support 42218, so this pulls from jumkey's repo and you won't be able to use extensions with it. This is what I'm using on my system.

 

{
    "id": "bromolow-7.0.1-42218",
    "platform_version": "bromolow-7.0.1-42218",
    "user_config_json": "bromolow_user_config.json",
    "docker_base_image": "debian:8-slim",
    "compile_with": "toolkit_dev",
    "redpill_lkm_make_target": "dev-v7",
    "downloads": {
        "kernel": {
            "url": "https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/25426branch/bromolow-source/linux-3.10.x.txz/download",
            "sha256": "18aecead760526d652a731121d5b8eae5d6e45087efede0da057413af0b489ed"
        },
        "toolkit_dev": {
            "url": "https://sourceforge.net/projects/dsgpl/files/toolkit/DSM7.0/ds.bromolow-7.0.dev.txz/download",
            "sha256": "a5fbc3019ae8787988c2e64191549bfc665a5a9a4cdddb5ee44c10a48ff96cdd"
        }
    },
    "redpill_lkm": {
        "source_url": "https://github.com/RedPill-TTG/redpill-lkm.git",
        "branch": "master"
    },
    "redpill_load": {
        "source_url": "https://github.com/jumkey/redpill-load.git",
        "branch": "develop"
    }
},

 

Edited by WiteWulf
6 minutes ago, WiteWulf said:

You can build it with Haydib's docker scripts by adding the relevant details to the config JSON file. NB: TTG don't support 42218, so this pulls from jumkey's repo and you won't be able to use extensions with it. This is what I'm using on my system.

 

Cheers! LOL, this is what I was doing to use Apollolake 918 7.0.1, not sure why I forgot (go on holiday for a week and the mind goes blank, must make more notes :) ). I checked jumkey's fork but was looking at master rather than develop :S

 

I guess I may wait if extensions aren't supported. I wanted to try an HBA card on ESXi on the Gen8, but I'll perhaps wait until I can use the sas-activator extension.

27 minutes ago, scoobdriver said:

 

Cheers! LOL, this is what I was doing to use Apollolake 918 7.0.1, not sure why I forgot (go on holiday for a week and the mind goes blank, must make more notes :) ). I checked jumkey's fork but was looking at master rather than develop :S

 

I guess I may wait if extensions aren't supported. I wanted to try an HBA card on ESXi on the Gen8, but I'll perhaps wait until I can use the sas-activator extension.

You can do it the old way and extract/rebuild rd.gz, adding the mpt2sas.ko from @pocopico into it, if you already have a working 7.0.1 build.

 

Edit: but there is already a reported issue with HBA/SAS passthrough:

https://github.com/RedPill-TTG/redpill-lkm/issues/19#issuecomment-932954295

Edited by Orphée
19 minutes ago, Orphée said:

You can do it the old way and extract/rebuild rd.gz, adding the mpt2sas.ko from @pocopico into it, if you already have a working 7.0.1 build.

 

Edit: but there is already a reported issue with HBA/SAS passthrough:

https://github.com/RedPill-TTG/redpill-lkm/issues/19#issuecomment-932954295

 

Thanks, yeah, I'll perhaps hold off for the moment. I have some test 1TB disks, but I did note the issue logged on GitHub with <2TB disks; my production disks are 4x3TB (not that I'm ready or willing right now to move my stable system across).

 

Edit: Also sounds like a few of us have similar hardware, which is great, so I'll keep following developments. I have a little test ESXi system, but it lacks the LSI card my MicroServer has.

Edited by scoobdriver
1 hour ago, WiteWulf said:

Yeah, ideally we need to wait for TTG to support 7.0.1 or, as a bodge, for jumkey to update their repo to support extensions.

I already have locally updated sources for 918+ v7.0.1 with support for extensions. The problem is that TTG's extensions also need to be updated to support 7.0.1, or someone has to fork and modify them...

 

edit: so I've decided to wait for official support for 7.0.1. Until then I will stay on my current 7.0.1. Good for me that it is stable :)

Edited by abesus
5 minutes ago, Orphée said:

If you mean "stuck with DS3615xs loader because of missing CPU feature", yes, it seems ! 😅

and the frustration of not being able to use Quick Sync Video on the E3-1265 v2 due to the architecture of the Gen8 MicroServer 😂, but hey, I still love the little box.

12 hours ago, WiteWulf said:

Thanks for helping me get up and running with ESXi, folks. I spent most of the evening upgrading from 6.5 to 7.0 (damn VIB dependencies!) but once that was out of the way the DSM installation was pretty smooth. The only problem I had was that the network it's on has no DHCP (for reasons), so I had to get the virtual terminal working so I could go in and manually give it an IP address, netmask and gateway to even be able to get at the web interface. Now it's up and running! 👍

 

It's got a 918+ image on it at the moment, given that I've got loads of CPUs and RAM in this host: is 918+ the best image for it? I'm hoping to eventually use this as a Plex host, so CPU, disk and network throughput will be important.

[image: table of maximum CPU/thread counts per DSM model]

 

Am I right in thinking that the above table means both 918+ and 3615xs images will only make use of up to 8 cores, and that only the 3617xs image (which isn't available for RedPill) will support 16?


I assume they used threads rather than cores in the table to cover hyperthreaded CPUs. As I'm provisioning vCPUs in ESXi it makes no difference: I've assigned 8 cores/1 socket and it's making use of all 8 "CPUs" for Plex transcoding.

 

Now I wonder where we're at with hardware transcoding for Plex? I've never bothered looking into it before as my main hardware (HP Gen8) isn't capable...

6 minutes ago, WiteWulf said:

Now I wonder where we're at with hardware transcoding for Plex? I've never bothered looking into it before as my main hardware (HP Gen8) isn't capable...

I pass the iGPU through ESXi to a DS918+ VM and am able to do hardware transcoding in Plex (Docker).

You need to change the permissions on /dev/dri at boot-up so Plex can use it.
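The /dev/dri permission change can be done with a small boot-time task run as root (e.g. a triggered/scheduled task). A minimal sketch, assuming the usual iGPU-passthrough recipe of opening the nodes to mode 666; adding the container's user to the device nodes' group would be a tidier alternative:

```shell
#!/bin/sh
# Sketch of a boot-time task: relax permissions on the DRI device nodes so
# a non-root Plex container can open them. Mode 666 is the blunt, common
# choice in the passthrough guides; adjust to taste.
fix_dri_perms() {
    dir="$1"
    [ -d "$dir" ] || return 0   # nothing to do if no iGPU was passed through
    chmod 666 "$dir"/* 2>/dev/null || true
}

fix_dri_perms /dev/dri
```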

9 minutes ago, scoobdriver said:

I pass the iGPU through ESXi to a DS918+ VM and am able to do hardware transcoding in Plex (Docker).

You need to change the permissions on /dev/dri at boot-up so Plex can use it.

Ah, my mistake. I'd assumed the 2.5GHz E5-2680 v3 beasties in this box supported QuickSync, but apparently not.
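For what it's worth, a quick way to check from a shell whether the kernel is exposing a render node (which QuickSync transcoding needs) is to look for the standard Linux DRI device paths:

```shell
#!/bin/sh
# Report whether a DRI render node is available for hardware transcoding.
check_render_node() {
    if ls /dev/dri/renderD* >/dev/null 2>&1; then
        echo "render node present: hardware transcoding possible"
    else
        echo "no /dev/dri render node: CPU transcoding only"
    fi
}

check_render_node
```

On a Xeon E5 box with no iGPU this reports no render node, matching the "apparently not" above.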

34 minutes ago, WiteWulf said:

Ah, my mistake. I'd assumed the 2.5GHz E5-2680 v3 beasties in this box supported QuickSync, but apparently not.

 

LOL, not sure what your use case is... most of my devices direct-play content, so I very rarely need to transcode (and the Gen8 Xeon handled CPU transcoding OK). I only really set up the hardware-transcoding 918+ VM as an experiment, and in case I ever need to use it while travelling with a bad internet connection. It maps my media content on the main HP MicroServer. I also use it as a Hyper Backup and Snapshot Replication target for the HP MS. It runs in ESXi on a tiny micro-form-factor OptiPlex; the CPU's very low power consumption (35W) makes it no big deal to keep it running.

3 minutes ago, scoobdriver said:

 

LOL, not sure what your use case is... most of my devices direct-play content, so I very rarely need to transcode (and the Gen8 Xeon handled CPU transcoding OK). I only really set up the hardware-transcoding 918+ VM as an experiment,

 

Yeah, this was mostly as an experiment just to see if it would work :)

 

I share Plex libraries with a few work colleagues. My TVs are both 4K HDR capable, so I never do any transcoding (and keep the 4K content in a separate library so people without capable equipment don't play it by accident), but some folk watch on laptops and 720p TVs, so transcoding is often used for 1080p content. It's currently running on a QNAP TS-853A, which does hardware transcoding with a GPU (it needs it, as the Celeron CPU in it is rubbish!), but I figure moving it onto this new system will be faster, and I'll just use the QNAP for storage (it has a 12TB SATA array in it).

 

Quote

and in case I ever need to use it while travelling with a bad internet connection. It maps my media content on the main HP MicroServer. I also use it as a Hyper Backup and Snapshot Replication target for the HP MS. It runs in ESXi on a tiny micro-form-factor OptiPlex; the CPU's very low power consumption (35W) makes it no big deal to keep it running.

 

I'm hoping to do similar with Hyper Backup. I had a quick play last night backing up my home server to the new ESXi host at work and it was very fast and easy.

