XPEnology Community

haydibe

Contributor

Posts posted by haydibe

  1. 15 hours ago, haydibe said:

    Update the toolchain builder to 0.6.0

     

    @Amoureux just pointed out that "realpath" causes errors on his system. I was mistaken to think it comes pre-installed with the Linux core tools. Apparently that's not the case.

    If you get an error like this:

    ./redpill_tool_chain.sh: line 34: realpath: command not found


    You will need to install the realpath package, e.g. on Ubuntu-based systems:

     

    sudo apt-get install -y realpath

     

    On yum-based systems it should be part of the coreutils package.
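
    Alternatively, the script could fall back to `readlink -f` (also part of coreutils) when realpath is missing. A minimal sketch, assuming only the canonical path of a file is needed (`$some_path` is a placeholder):

    # use realpath if available, otherwise fall back to readlink -f
    if command -v realpath >/dev/null 2>&1; then
        canonical=$(realpath "$some_path")
    else
        canonical=$(readlink -f "$some_path")
    fi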

    • Like 1
  2. 56 minutes ago, dodo-dk said:

    Hi, thanks for all your work @ThorGroup @haydibe and all @others.

     

    I have updated the loader and everything works well with i440fx in Proxmox (DS3615xs).

    I have read that q35 is better than i440fx. So I switched to q35, but with that the boot takes longer and my Volume2 crashed.

    It didn't find all hard drives. I have one virtual and 5 passthrough disks. I switched back to i440fx and everything works, thank god, Volume2 is back.


    With Q35 the first drive is located at position 7; you will need to define SataPortMap and DiskIdxMap to set the ordering right. Before you do that, though, check how many SATA/SCSI controllers are detected inside DSM7 with `lspci | grep -Ei '(SATA|SCSI)'`. I get 4 controllers in total for my Apollolake DSM7 instance without any additional passthrough adapters - you will want to configure values for all controllers to get a reliable drive ordering!
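
    For illustration only (device names and PCI addresses will differ on your system), two matches in the output would mean two controllers to cover in SataPortMap/DiskIdxMap:

    $ lspci | grep -Ei '(SATA|SCSI)'
    00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller
    01:00.0 SCSI storage controller: Red Hat, Inc. Virtio SCSI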

    • Like 1
    • Thanks 1
  3. @ThorGroup thank you once again for the update and for addressing all the issues that come up!

     

    Would you mind providing some insight into where `"DiskIdxMap": "0C", "SataPortMap": "1", "SasIdxMap": "0"` are actually meant to be configured in the user_config.json? According to the redpill-load README.md, it should be a k/v structure specified in "synoinfo". Though so far, everyone seems to add them in "extra_cmdline".


    Is this the intended usage?

    ...
    "synoinfo": {
      "DiskIdxMap": "OC",
      "SataPortMap": "1",
      "SasIdxMap": "0"
    },
    ...
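
    For contrast, this is the placement everyone seems to use so far:

    ...
    "extra_cmdline": {
      "DiskIdxMap": "0C",
      "SataPortMap": "1",
      "SasIdxMap": "0"
    },
    ...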

     

    After all, those parameters are not only useful for those who use sata_dom boot, but also for those who use USB boot and want to bring order to the drive numbering.

     

    Note: I believe SasIdxMap is not required if no SAS controller is passed into the VM; on the other hand, it doesn't seem to do any harm.

  4. Update the toolchain builder to 0.6.0


    # Unofficial redpill toolchain image builder
    - Creates an OCI container (~= Docker) image-based toolchain.
    - Takes care of downloading (and caching) the required sources to compile redpill.ko and the required OS packages the build process depends on.
    - Caches the .pat downloads made inside the container on the host.
    - Configuration is done in the JSON file `global_config.json`; custom <platform_version> entries can be added underneath the `build_configs` block. Make sure the id is unique per block!
    - Supports a `user_config.json` per <platform_version>.
    - Allows binding a local redpill-load folder into the container (set `"docker.local_rp_load_use": "true"` and `"docker.local_rp_load_path": "path/to/rp-load"`).

    ## Changes
    - Removed `user_config.json.template`, as it was orphaned and people had started to use it in an unintended way.
    - New parameters in `global_config.json` (see the sketch below):
    -- `docker.local_rp_load_use`: whether to mount a local folder with redpill-load into the build container (true/false)
    -- `docker.local_rp_load_path`: path to the local copy of redpill-load to mount into the build container (absolute or relative path)
    -- `build_configs[].user_config_json`: allows defining a user_config.json per <platform_version>.
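
    A minimal sketch of how the two new docker parameters could look in `global_config.json` - assuming the dotted names map to a nested "docker" block; the path is a placeholder:

    {
      "docker": {
        "local_rp_load_use": "true",
        "local_rp_load_path": "./redpill-load"
      },
      ...
    }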

    ## Usage

    1. Edit the `<platform>_user_config.json` that matches your <platform_version> according to https://github.com/RedPill-TTG/redpill-load and place it in the same folder as redpill_tool_chain.sh
    2. Build the image for the platform and version you want:
       `./redpill_tool_chain.sh build <platform_version>`
    3. Run the image for the platform and version you want:
       `./redpill_tool_chain.sh auto <platform_version>`

     

    You can always use `./redpill_tool_chain.sh run <platform_version>` to get a bash prompt, modify whatever you want, and finally execute `make -C /opt/build_all` to build the boot loader image.

     

    Note1: run `./redpill_tool_chain.sh` to get the list of supported ids for the <platform_version> parameter.

    Note2: if `docker.local_rp_load_use` is set to `true`, the auto action will not pull the latest redpill-load sources.
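
    For example, assuming `apollolake-7.0-41890` shows up in that list:

    ./redpill_tool_chain.sh build apollolake-7.0-41890
    ./redpill_tool_chain.sh auto apollolake-7.0-41890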

     

    See README.md for examples

    redpill-tool-chain_x86_64_v0.6.zip

    • Like 6
    • Thanks 8
  5. I just migrated a 3-node vSphere cluster with vCenter to Proxmox. vSphere is definitely more polished, and its fixed set of operations can be used almost completely from the web UI. Proxmox looks and feels less polished, and allows configuring the main subset of features that qemu/kvm provides from the UI - everything beyond that requires the user to jump to the command line.

     

    I believe qemu/kvm is hooked into the host's kernel so well that the performance of VM guests is astonishingly good. Proxmox is more flexible than ESXi, especially if you compare the "free" versions of both - with ESXi, many features (like direct I/O or serial port forwarding) are not available in the free version.

     

    In Proxmox you can leverage Ceph for hyperconverged storage for free, which is comparable to what vSphere's vSAN provides for a shi*load of money. Though, both really only make sense if the network between the nodes has more than 10 Gbps (40 Gbps+ preferred) and the machines are beefy with fast storage.

     

    If KVM is good enough for OpenStack to create bare-metal private clouds, then it is good enough to be my hypervisor. Most public cloud providers base their compute instances on a KVM-ish core rather than on ESXi...

     

    So is Proxmox the better ESXi? If you want to replace the free ESXi version -> definitely yes. If you compare the enterprise versions -> I doubt it. If you consider the value you get for your money -> it definitely is.

     

    • Like 2
    • Thanks 1
  6. 11 hours ago, renxpe said:

    Really want to give a huge thanks to all the devs that have made, and continue to make, this possible: @ThorGroup @haydibe @gadreel and any others I've missed!


    3 hours ago, seanone said:

    Thank you for your hard work, @ThorGroup @haydibe and others.


    You're welcome, guys. It's a pleasure to contribute something that makes it easier for you to test what @ThorGroup, @jumkey and @UnknownO created.

    Without their work, and without the feedback from all the enthusiastic testers here who reflect the current state and help each other reach a better understanding, we wouldn't be where we are now.

     

    So thanks to all of you :)

     

    • Like 5
    • Thanks 1
  7. According to https://ark.intel.com/content/www/us/en/ark/products/43517/intel-atom-processor-d410-512k-cache-1-66-ghz.html, it is an x86_64 CPU with an astonishing PassMark score of 164 - I didn't know the numbers could go that low ^^.

     

    Just for comparison: the Celeron J3455 in a DS918+ has a PassMark score of 2259.

    Given that DSM is already bloody sluggish on original boxes, you don't want to put it on a system that is roughly 13 times slower than an original DS918+ :)


    • Haha 2
  8. 22 minutes ago, Orphée said:

    I'm still trying to understand how DiskIdxMap and SataPortMap work... maybe I'm missing something...

     

    DiskIdxMap is a concatenation of two-digit hex values, one per SATA controller, indicating the index at which its first drive starts.

    SataPortMap is a concatenation of single-digit integer values, one per SATA controller, indicating how many ports each controller provides.

     

    `DiskIdxMap=0008 SataPortMap=82` for instance would give the first controller 8 ports starting at index 00 (= drive 1 in the UI) and a second controller two ports starting at index 08 (= drive 9 in the UI). You should check with lspci how many controllers are actually detected and assign values for each of them!
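
    As a sketch, that example would land in the user_config.json like this (in "extra_cmdline", where everyone seems to configure these values so far):

    "extra_cmdline": {
      "DiskIdxMap": "0008",
      "SataPortMap": "82"
    }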

     

    I always thought their number on the PCI bus would indicate their position in DiskIdxMap and SataPortMap, but that appears not to be the case. I always have to play around to find out which port belongs to which controller...


    Though, I am curious what `sata_uid=1 sata_pcislot=5` actually means. Does it specify the location of the sata_dom drive?

    On my Proxmox bromolow + LSI box with Jun's loader, I used USB boot instead of sata_dom and did not specify these two settings, and it still works as desired - but I had to specify DiskIdxMap and SataPortMap to get the right order.

    • Like 1
    • Thanks 1
  9. 2 hours ago, taiziccf said:

    alright, figured it out!

    user_config.json must be in the root folder, next to redpill_tool_chain.sh

    but the template is actually kept inside the docker folder; that's why I thought the user_config.json file should be saved inside the docker folder as well....

    Yep, everything inside the docker folder is really just meant to be used when building the Docker image.

    Of course the user_config.json needs to be where redpill_tool_chain.sh is. Actually, the template was intended for people who just wanted to specify the 4 params and didn't want to deal with a user_config.json. During the migration from make to bash I should have removed it, as I implemented nothing that would actually set those parameters and render a meaningful user_config.json inside the container. It is impossible to implement the required flexibility without ending up with something that looks like the user_config.json, thus mapping a pre-created user_config.json makes more sense.
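
    So the expected layout looks roughly like this (the user_config file name is illustrative):

    redpill-tool-chain/
    ├── redpill_tool_chain.sh
    ├── global_config.json
    ├── apollolake-7.0-41890_user_config.json   <- user_config.json files live here
    └── docker/                                 <- only used for building the image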

     

    • Like 1
  10. I'd argue that it is not possible, since these Syno packages are not standalone applications (own port? Separate UI?).

    These packages are wired into DSM so tightly that even for a "pro" it would be difficult to impossible to identify all the necessary dependencies, isolate them, and then also get them to fly.


  11. Should we worry about this output in /var/log/messages?

     

    2021-08-20T22:30:28+02:00 dsm synocloudserviceauth[27855]: cloudservice_get_api_key.cpp:21 Cannot get key
    2021-08-20T22:30:28+02:00 dsm synocloudserviceauth[27855]: cloudservice_register_api_key.cpp:293 Register api key failed: Auth Fail
    2021-08-20T22:30:28+02:00 dsm notify[27851]: pushservice_update_ds_token.c:52 fgets failed
    2021-08-20T22:30:28+02:00 dsm notify[27851]: pushservice_update_ds_token.c:147 Can't set api key
    2021-08-20T22:30:28+02:00 dsm notify[27851]: pushservice_utils.c:325 SYNOPushserviceUpdateDsToken failed.
    2021-08-20T22:30:28+02:00 dsm notify[27851]: pushservice_utils.c:387 GenerateDsToken Failed
    2021-08-20T22:30:28+02:00 dsm notify[27851]: Failed to get ds token.

    Since 18:30 this block has repeated roughly 50 times in the logs. Seems there is a need to suppress one more URL.


  12. 50 minutes ago, psychoboi32 said:

    @haydibe Are you able to get an LSI SAS 2008 card & virtio built on ds3615xs? I tried but failed; I hope you can guide me here.

    I have just migrated my ESXi test box to redpill apollolake-7.0-41890 on XPE - but this box has no additional controller. Virtio just works fine on this box. SMART looks fine with thin-LVM provisioned drives, Info Center looks fine, Docker works... I am good. This is stable enough for me to "look around" in DSM7.

     

    I am going to migrate my main ESXi box, based on bromolow with an LSI 9211-8i controller, tomorrow. Though, for that one I am aiming for Jun's bootloader.

     

    Can't help you when it comes to redpill and a passed-through LSI controller, as that migration will not happen before redpill is final.

     

  13. Changing the URL in the global_config.json and building a new image after that should do the job.

    After all, the parameters are fetched from the JSON file for the given "platform_version" and provided as parameters to the Dockerfile.

     

    add this to build_configs:

           ,{
                "id": "apollolake-7.0-41890-mpt3sas",
                "platform_version": "apollolake-7.0-41890",
                "docker_base_image": "debian:10-slim",
                "compile_with": "toolkit_dev",
                "download_urls": {
                    "kernel": "https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/25426branch/apollolake-source/linux-4.4.x.txz/download",
                    "toolkit_dev": "https://sourceforge.net/projects/dsgpl/files/toolkit/DSM7.0/ds.apollolake-7.0.dev.txz/download"
                },
                "redpill_lkm": {
                    "source_url": "https://github.com/RedPill-TTG/redpill-lkm.git",
                    "branch": "master"
                },
                "redpill_load": {
                    "source_url": "https://github.com/420Xnu/redpill-load.git",
                    "branch": "mpt3sas"
                }
            }


    Then run it with: `./redpill_tool_chain.sh build apollolake-7.0-41890-mpt3sas`

     

    • Like 1
  14. The reverse proxy would've been the dealbreaker. I couldn't find it. Thanks to Orphée, we know it exists, but it is placed in the "Advanced" tab of the "Login Portal" navigation item.

     

    Docker works fine. Just tested a compose deployment -> works like a charm. Tested a swarm stack deployment -> still broken on Synology's Docker distribution (it has been broken ever since 17.05 brought swarm mode support).

     

  15. 42 minutes ago, psychoboi32 said:

    I have one last option, which is to pass through all my HDDs and mask the drive controller (everything physical becomes a virtual HDD) (no temps, nothing; it will just work for the sake of a PoC (proof of concept)).

    Seems reasonable for a PoC... Since redpill is not final - regardless of whether you manage to pass through the controller and find a way to compile your own set of drivers, or pass through each drive and let the DS918+ platform handle them for you - it is a PoC anyway.

     

    My main box has been using an LSI controller with bromolow for ages. Though, until redpill is final, it will remain on Jun's bootloader.

     

    Yay, even the Docker package works like a charm on DSM7. Though, I am curious what happened to Synology's reverse proxy UI... Seems they gave up on it.

  16. 1 hour ago, psychoboi32 said:

    Didn't get my device working. If I get mpt3sas.ko working, that will be great; I will add things like virtio and modify the Docker setup so it builds the image. But for now I am tired; if you guys have anything, please share with me, thanks.

    If I am not mistaken, the drivers are only provided OOTB on bromolow; for apollolake you would need your own set of compiled drivers. I also remember that @IG-88 once wrote that LSI controllers are not reliable on 918+ with the additional driver package, and that it's advised to stick to bromolow for the sake of reliability.

    • Confused 1
  17. 2 hours ago, scoobdriver said:

    Without removing everything and starting again, are there any locations where I can remove files?


    If layers end up in the build cache, you can clean it with: `docker builder prune -a`

     

    If images stack up in the local image cache, you can clean them with: `docker image prune` (warning: this will remove all "dangling" images, i.e. images whose image:tag got reassigned to a newer image). Though, I typically remove images by hand with `docker image rm imageid1 imageid2 imageid3` (you can delete one or more images at a time).
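
    Put together, and assuming you want the non-interactive variants (-f just skips the confirmation prompt):

    docker builder prune -a -f                    # drop the whole build cache
    docker image prune -f                         # remove dangling images only
    docker image rm imageid1 imageid2 imageid3    # remove specific images by hand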

     

    • Like 1
  18. 13 hours ago, psychoboi32 said:

    It is easy; if you need my config, I can share it with you:

    
    
    boot: order=usb0
    cores: 4
    cpu: SandyBridge
    hostpci0: 0000:01:00
    hostpci1: 0000:08:00
    memory: 8192
    name: XPenology-TMP-ds918
    net1: e1000=8A:C1:48:E3:A6:4D,bridge=vmbr0
    numa: 0
    ostype: l26
    sata0: local-lvm:vm-102-disk-2,size=32G
    scsihw: virtio-scsi-pci
    serial0: socket
    smbios1: uuid=04F674CF-F8B7-4268-998C-629149B97412
    sockets: 1
    tablet: 0
    usb0: host=090c:1000,usb3=1
    vga: serial0


    Just make a VM and put in this config; change the PCIe cards according to your setup. If you need any help with passthrough, I have written a guide I can share with you.

    There was a backup of a Proxmox VM available on this forum; just use that, remove the Jun loader files and put redpill there. As you can see from my VM name, it came from that 😛

    You are right, it wasn't that hard after all.

     

    I had noticed the backup, but honestly don't like the idea of restoring someone else's backup.

     

    In the end it was as easy as adding a virtio network card, the USB controller and a serial0 port, and manually editing the VM's conf to add:

    args: -device 'qemu-xhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/XXX/redpill-DS918+_7.0-41890.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot,bootindex=5'

    at the top of the VM config file: /etc/pve/nodes/zzz/qemu-server/xxx.conf

     

    Then I used this user_config.json:

    {
        "extra_cmdline": {
            "pid": "0x0001",
            "vid": "0x46f4",
            "sn": "xxxxxxxx",
            "mac1": "xxxxxxxxxxx"
        },
        "synoinfo": {},
        "ramdisk_copy": {}
    }

    Of course with a generated sn and mac1.

     

    What can I say: I successfully migrated one of my old XPE VMs from ESXi to Proxmox.

     

    I do favor the USB boot drive approach over the sata_dom boot approach. For me that's the way to go, unless ThorGroup argues in one of their fabulous explanations why it's better to use sata_dom instead :)

     

    • Like 2
    • Haha 1
  19. 17 minutes ago, Balrog said:

    I am currently not able to build the Apollolake 7.0 image for #41890:

     

    
    ...
     > [stage-1 3/9] RUN git clone https://github.com/RedPill-TTG/redpill-lkm.git  -b master  /opt/redpill-lkm &&     git clone https://github.com/jumkey/redpill-load.git -b 7.0-41890 /opt/redpill-load:
    #6 0.438 Cloning into '/opt/redpill-lkm'...
    #6 1.081 Cloning into '/opt/redpill-load'...
    #6 1.426 fatal: Remote branch 7.0-41890 not found in upstream origin
    ------
    executor failed running [/bin/sh -c git clone ${REDPILL_LKM_REPO}  -b ${REDPILL_LKM_BRANCH}  ${REDPILL_LKM_SRC} &&     git clone ${REDPILL_LOAD_REPO} -b ${REDPILL_LOAD_BRANCH} ${REDPILL_LOAD_SRC}]: exit code: 128

     

    Is the remote branch for 7.0-41890 deactivated, and have I missed this?

    You need to use v0.5.4 of the script, OR just change the URL and branch in the global_config.json. Just copy the values for the load repo from one of the other profiles.
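
    E.g., copying the upstream values into the failing profile's redpill_load block would look like this (URL/branch are an illustration - take whatever a working profile uses):

    "redpill_load": {
        "source_url": "https://github.com/RedPill-TTG/redpill-load.git",
        "branch": "master"
    }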

    • Like 1