XPEnology Community

Everything posted by haydibe

  1. @Amoureux just pointed out that "realpath" causes errors on his system. I was mistaken to think it comes pre-installed with the Linux core tools. Apparently that is not the case. If you get an error like this: you will need to install the realpath package, e.g. on Ubuntu-based systems: sudo apt-get install -y realpath On yum-based systems it should be part of the coreutils package.
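     A minimal check you could run up front (just a sketch; the package and tool names are taken from the post above):

         # install realpath only if it is not already on the PATH
         if ! command -v realpath >/dev/null 2>&1; then
             sudo apt-get install -y realpath   # Debian/Ubuntu; yum-based systems get it via coreutils
         fi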
  2. With Q35 the first drive is located at position 7, you will need to define SataPortMap and DiskIdxMap to set the ordering right. Though, before you do that, you will need to check how many sata/scsi controllers are detected inside DSM7 with `lspci | grep -Ei '(SATA|SCSI)'`. I get 4 controllers in total for my Apollolake DSM7 instance without additional passthrough adapters - you will want to configure the values for all controllers to get a reliable drive ordering!
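     A hedged illustration of the two steps (the controller counts and values below are made-up examples, not a recommendation for any specific hardware):

         # inside DSM 7, list the SATA/SCSI controllers that were actually detected
         lspci | grep -Ei '(SATA|SCSI)'
         # hypothetical result: two controllers; give the first one 4 ports starting at drive 1
         # and the second one 2 ports starting at drive 5
         #   SataPortMap=42 DiskIdxMap=0004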
  3. @ThorGroup thank you once again for the update and for addressing all issues that come up! Would you mind providing some insight into where `"DiskIdxMap": "0C", "SataPortMap": "1", "SasIdxMap": "0"` are actually meant to be configured in the user_config.json? According to the redpill-load README.md it should be a k/v structure specified in "synoinfo". Though so far everyone seems to add them in "extra_cmdline". Is this the intended usage? ... "synoinfo": { "DiskIdxMap": "0C", "SataPortMap": "1", "SasIdxMap": "0" }, ... After all, those parameters are not only useful for those that use sata_dom boot, but also for those that use usb boot and want to bring order into the drive ordering. Note: I believe SasIdxMap is not required if no SAS controller is passed into the VM; on the other hand it doesn't seem to do any harm.
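     For illustration only, the two placements being discussed side by side; which one redpill-load actually honours is exactly the open question of this post:

         Variant A (as documented in the redpill-load README.md):
           "synoinfo":      { "DiskIdxMap": "0C", "SataPortMap": "1", "SasIdxMap": "0" }
         Variant B (what most testers in this thread currently use):
           "extra_cmdline": { "DiskIdxMap": "0C", "SataPortMap": "1", "SasIdxMap": "0" }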
  4. There is a bind mount in place from the ./cache folder on the host to the /opt/redpill-load/cache folder in the container. The content (~= .pat files) of the cache folder has been stored directly on the host since v0.4.
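     For reference, this is roughly what such a bind mount looks like on the Docker command line (a sketch only; the image name is a placeholder, not the toolchain's actual invocation):

         # map the host-side ./cache folder into the container, so downloaded .pat files survive container removal
         docker run --rm -it -v "$(pwd)/cache:/opt/redpill-load/cache" <toolchain-image> bash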
  5. Updated the toolchain builder to 0.6.0

     # Unofficial redpill toolchain image builder
     - Creates an OCI container (~= Docker) image based tool chain.
     - Takes care of downloading (and caching) the required sources to compile redpill.ko and the required OS packages that the build process depends on.
     - Caches .pat downloads (fetched inside the container, stored on the host).
     - Configuration is done in the JSON file `global_config.json`; custom <platform_version> entries can be added underneath the `build_configs` block. Make sure the id is unique per block!
     - Supports a `user_config.json` per <platform_version>.
     - Allows to bind a local redpill-load folder into the container (set `"docker.local_rp_load_use": "true"` and set `"docker.local_rp_load_path": "path/to/rp-load"`).

     ## Changes
     - Removed `user_config.json.template`, as it was orphaned and people started to use it in an unintended way.
     - New parameters in `global_config.json`:
       - `docker.local_rp_load_use`: whether to mount a local folder with redpill-load into the build container (true/false)
       - `docker.local_rp_load_path`: path to the local copy of redpill-load to mount into the build container (absolute or relative path)
       - `build_configs[].user_config_json`: allows to define a user_config.json per <platform_version>.

     ## Usage
     1. Edit the `<platform>_user_config.json` that matches your <platform_version> according to https://github.com/RedPill-TTG/redpill-load and place it in the same folder as redpill_tool_chain.sh
     2. Build the image for the platform and version you want: `./redpill_tool_chain.sh build <platform_version>`
     3. Run the image for the platform and version you want: `./redpill_tool_chain.sh auto <platform_version>`

     You can always use `./redpill_tool_chain.sh run <platform_version>` to get a bash prompt, modify whatever you want and finally execute `make -C /opt/build_all` to build the boot loader image.

     Note1: run `./redpill_tool_chain.sh` to get the list of supported ids for the <platform_version> parameter.
     Note2: if `docker.local_rp_load_use` is set to `true`, the auto action will not pull the latest redpill-load sources.

     See README.md for examples.

     redpill-tool-chain_x86_64_v0.6.zip
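     As a concrete example, using an id that appears elsewhere in this thread (substitute whichever id `./redpill_tool_chain.sh` lists for your target):

         # build the toolchain image for the chosen platform/version ...
         ./redpill_tool_chain.sh build apollolake-7.0-41890
         # ... then let it produce the loader image in one go
         ./redpill_tool_chain.sh auto apollolake-7.0-41890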
  6. I just migrated a 3-node vSphere cluster with vCenter to Proxmox. vSphere is definitely more polished and its fixed set of operations can be used almost completely from the web UI. Proxmox looks and feels less polished, and lets you configure the main subset of features that qemu/kvm provides from the UI - everything beyond that requires jumping to the command line. I believe qemu/kvm is so well hooked into the host's kernel that the performance of VM guests is astonishingly good. Proxmox is more flexible than ESXi, especially if you compare the "free" versions of both - with ESXi many features (like direct-io or serial port forwarding) are not available in the free version. In Proxmox you can leverage Ceph for hyperconverged storage for free, which is comparable to what vSphere's vSAN provides for a shi*load of money. Though both really only make sense if the network amongst the nodes has more than 10 Gbps (40 Gbps+ preferred) and beefy machines with fast storage. If KVM is good enough for OpenStack to create bare-metal private clouds, then it should be good enough to be my hypervisor. Most public cloud providers base their compute instances on a KVM-ish core rather than on ESXi. So is Proxmox the better ESXi? If you want to replace the free ESXi version -> definitely yes. If you compare the enterprise versions -> I doubt it. If you consider the value you get for your money -> it definitely is.
  7. You're welcome guys, it's a pleasure to contribute something that makes it easier for you to test what @ThorGroup, @jumkey and @UnknownO created. Without their work, and without the feedback from all the enthusiastic testers here that reflects the current state and helps everyone get a better understanding, we wouldn't be where we are now. So thanks to all of you
  8. Hmm, according to https://www.synology-wiki.de/index.php/Welchen_Prozessortyp_besitzt_mein_System? only the DS710+ used a D410. The specs for the QNAP TS-439 Pro II+ seem hard to track down.. So you mean it didn't use an Atom D410 CPU?
  9. According to https://ark.intel.com/content/www/us/en/ark/products/43517/intel-atom-processor-d410-512k-cache-1-66-ghz.html, it is an x86_64 CPU with an astonishing Passmark value of 164 - I didn't know the numbers could go that low ^^. Just for comparison, the Celeron J3455 in a DS918+ has a Passmark of 2259. Given that DSM is already bloody sluggish on original boxes, you don't want to put it on a system that is roughly 13 times slower (2259 / 164 ≈ 13.8) than an original DS918+
  10. DiskIdxMap is a sequence of two-digit hex values, one per SATA controller, that indicates at which index that controller's first device starts. SataPortMap is a sequence of single-digit values, one per controller, that indicates how many ports each SATA controller provides. `DiskIdxMap=0008 SataPortMap=82` for instance would give the first controller 8 ports starting at 00 (= drive 1 in the UI) and a second controller two ports starting at 08 (= drive 9 in the UI). You should check with lspci how many controllers are actually detected and assign values for each of them! I always thought their number on the PCI bus would indicate their position in DiskIdxMap and SataPortMap, but that appears not to be the case. I always have to play around to find out which port belongs to which controller... Though, I am curious what `sata_uid=1 sata_pcislot=5` actually means. Does it specify the location of the sata_dom drive? On my Proxmox bromolow + LSI box with Jun's loader, I used usb boot instead of sata_dom and did not specify these two settings and it still works as desired - but I had to specify DiskIdxMap and SataPortMap to get the right order.
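      To spell the pattern out (my reading of the values above, with illustrative numbers only):

          # one digit per controller in SataPortMap, one two-digit hex value per controller in DiskIdxMap
          # example with three controllers:
          #   controller 1: 2 ports, first drive at 0x00 (drive 1 in the UI)
          #   controller 2: 6 ports, first drive at 0x02 (drive 3 in the UI)
          #   controller 3: 4 ports, first drive at 0x08 (drive 9 in the UI)
          # SataPortMap=264 DiskIdxMap=000208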
  11. Does `lspci -k` even show that the .ko is assigned to the controller?
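      For reference, what that check looks like (the mpt3sas module name is just an example of what one would hope to see for an LSI controller):

          # list PCI devices together with the kernel modules bound to them
          lspci -k
          # for a working controller you would expect lines like:
          #   Kernel driver in use: mpt3sas
          #   Kernel modules: mpt3sas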
  12. Yep, everything inside the docker folder is really just meant to be used when building the docker image. Of course the user_config.json needs to be where redpill_tool_chain.sh is. Actually the template was intended for people that just wanted to specify the 4 params and didn't want to deal with a user_config.json. During the migration from make to bash I should have removed it, as I implemented nothing that actually sets the parameters that would render a meaningful user_config.json inside the container. It is impossible to implement the required flexibility without ending up with something that looks like the user_config.json, thus mapping a pre-created user_config.json makes more sense.
  13. haydibe

    DSM Packages

    I claim that it is not possible, since these Syno packages are not standalone applications (own port? separate UI?). These packages are wired into DSM in such a way that even for a "pro" it becomes difficult, if not impossible, to identify and isolate all the necessary dependencies and then also get them to fly.
  14. Might be related to accidentally opening "Active Insight" (the package came pre-installed and can't be removed). I would be surprised if the installation of the docker package caused it.
  15. Should we worry about this output in /var/log/messages?

      2021-08-20T22:30:28+02:00 dsm synocloudserviceauth[27855]: cloudservice_get_api_key.cpp:21 Cannot get key
      2021-08-20T22:30:28+02:00 dsm synocloudserviceauth[27855]: cloudservice_register_api_key.cpp:293 Register api key failed: Auth Fail
      2021-08-20T22:30:28+02:00 dsm notify[27851]: pushservice_update_ds_token.c:52 fgets failed
      2021-08-20T22:30:28+02:00 dsm notify[27851]: pushservice_update_ds_token.c:147 Can't set api key
      2021-08-20T22:30:28+02:00 dsm notify[27851]: pushservice_utils.c:325 SYNOPushserviceUpdateDsToken failed.
      2021-08-20T22:30:28+02:00 dsm notify[27851]: pushservice_utils.c:387 GenerateDsToken Failed
      2021-08-20T22:30:28+02:00 dsm notify[27851]: Failed to get ds token.

      Since 18:30 this block has repeated roughly 50 times in the logs. Seems there is a need to suppress one more URL.
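      A quick way to count how often the block shows up (the search string is taken from the log lines above):

          # count the failed token generations logged so far
          grep -c 'Failed to get ds token' /var/log/messages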
  16. I just migrated my ESXi test box to redpill apollolake-7.0-41890 on XPE - but this box has no additional controller. Virtio just works fine on this box. SMART looks fine with thin-LVM provisioned drives, Info Center looks fine, Docker works.. I am good. This is stable enough for me to "look around" in DSM7. I am going to migrate my main ESXi box, based on bromolow with an LSI9211-8i controller, tomorrow. Though, there I am aiming for Jun's bootloader. Can't help you when it comes to redpill and a passed-through LSI controller, as this migration will not happen before redpill is final.
  17. Changing the url in the global_config.json and building a new image after that should do the job. After all, the parameters for the "platform_version" are fetched from the JSON file and provided as parameters to the dockerfile. Add this to build_configs:

      ,{
          "id": "apollolake-7.0-41890-mpt3sas",
          "platform_version": "apollolake-7.0-41890",
          "docker_base_image": "debian:10-slim",
          "compile_with": "toolkit_dev",
          "download_urls": {
              "kernel": "https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/25426branch/apollolake-source/linux-4.4.x.txz/download",
              "toolkit_dev": "https://sourceforge.net/projects/dsgpl/files/toolkit/DSM7.0/ds.apollolake-7.0.dev.txz/download"
          },
          "redpill_lkm": {
              "source_url": "https://github.com/RedPill-TTG/redpill-lkm.git",
              "branch": "master"
          },
          "redpill_load": {
              "source_url": "https://github.com/420Xnu/redpill-load.git",
              "branch": "mpt3sas"
          }
      }

      then run it with `build apollolake-7.0-41890-mpt3sas`
  18. The RP would've been the dealbreaker. I couldn't find it at first. Thanks to Orphée, we know it exists, but it is placed in the "Advanced" tab of the "Login Portal" navigation item. Docker works fine. Just tested a compose deployment -> works like a charm. Tested a swarm stack deployment -> still broken on Synology's Docker distribution (it has been broken ever since 17.05 brought swarm mode support).
  19. Seems reasonable for a PoC... Since Redpill is not final - regardless of whether you manage to pass through the controller and find a way to compile your own set of drivers, or pass through each drive and let the DS918+ handle them for you - it is a PoC anyway. My main box has been using an LSI controller just fine with bromolow for ages. Though, until Redpill is final, it will remain on Jun's bootloader. Yay, even the docker package works like a charm on DSM7. Though, I am curious what happened to Synology's reverse proxy UI... Seems they gave up on it.
  20. If I am not mistaken, the drivers are only provided ootb on bromolow. For apollolake you will need your own set of compiled drivers. I also remember that @IG-88 once wrote that LSI controllers are not reliable on 918+ with the additional driver package, and it's advised to stick to bromolow for the sake of reliability.
  21. If layers end up in the build cache, you can clean it with `docker builder prune -a`. If images stack up in the local image cache, you can clean them with `docker image prune` (warning: this will remove all "dangling" images, i.e. those whose image:tag got reassigned to a newer image). Though, I typically remove images by hand with `docker image rm imageid1 imageid2 imageid3` (you can delete one or more images at a time).
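      Put together as a hedged cleanup sketch (the -f flag merely skips the confirmation prompt; leave it out if you prefer to be asked):

          # drop all build cache entries left behind by image builds
          docker builder prune -a -f
          # drop dangling images (untagged layers whose tag moved to a newer image)
          docker image prune -f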
  22. You are right, it wasn't that hard after all. I noticed the backup, but honestly I don't like the idea of restoring someone else's backup. In the end it was as easy as adding a virtio network card, the usb controller and a serial0 port, and manually editing the VM's conf to add:

      args: -device 'qemu-xhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/XXX/redpill-DS918+_7.0-41890.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot,bootindex=5'

      at the top of the VM config file /etc/pve/nodes/zzz/qemu-server/xxx.conf. Then I used this user_config.json:

      {
          "extra_cmdline": {
              "pid": "0x0001",
              "vid": "0x46f4",
              "sn": "xxxxxxxx",
              "mac1": "xxxxxxxxxxx"
          },
          "synoinfo": {},
          "ramdisk_copy": {}
      }

      Of course with a generated sn and mac1. What should I say: successfully migrated one of my old XPE VMs from ESXi to Proxmox. I favor the usb bootdrive approach over the sata_dom boot approach. For me that's the way to go, unless ThorGroup argues in one of their fabulous explanations why it's better to use sata_dom instead
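      For those who prefer the shell over the web UI, the non-args additions can be made with qm as well (a sketch; the VM id 100 and bridge vmbr0 are placeholders for your own values):

          # add a virtio network card attached to the default bridge
          qm set 100 --net0 virtio,bridge=vmbr0
          # add the serial0 port mentioned above
          qm set 100 --serial0 socket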
  23. You need to use v0.5.4 of the script OR just change the url and branch in the global_config.json. Just copy the values for the redpill_load repo from one of the other profiles.
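      The values in question are the ones under the redpill_load key of a build profile (shown here with placeholders rather than any particular repo):

          "redpill_load": {
              "source_url": "<url of the redpill-load repo you want to build from>",
              "branch": "<branch of that repo>"
          }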
  24. Haven't migrated my DSM config from ESXi to Proxmox yet. I still need to dig into how to actually run XPE/Redpill on Proxmox.
  25. The fix is not required anymore. It's fixed in the redpill-load repo. There is no need to build a new image; the next `auto` execution pulls the latest sources and everything should be fine.