XPEnology Community

haydibe

Contributor
  • Posts: 705
  • Joined
  • Last visited
  • Days Won: 35
Everything posted by haydibe

  1. Pinging @haydibe here. The loader itself doesn't care [now], but the container config probably has some assumption somewhere. You can leave them; you can even set them to DEAD and BEEF - it will not cause any problems, just a warning message during boot. When they're set but not needed, they're only validated to the point of "is this a hex number". When the container is created, the file is mounted into the rp-load folder without any validation.
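For illustration, the "is this a hex number" check could look like the following bash sketch (my guess at the shape of the validation, not the loader's actual code):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the loader's lenient validation:
# it only cares that the value consists of hex digits.
is_hex() { [[ "$1" =~ ^[0-9A-Fa-f]+$ ]]; }

is_hex "DEADBEEF" && echo "DEADBEEF passes"
is_hex "NOTAHEX" || echo "NOTAHEX fails"
```

This is why values like DEAD and BEEF sail through: they are perfectly valid hex.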
  2. @Orphée I completely missed the fact that the issue only occurs with pass-through controllers...
  3. Can you try DiskIdxMap=0F00 (which should be drive 17 for the first controller, and drive 1 for the 2nd)? Note that it's just an issue with interpreting the first hex digit of the two-hex-digit value. I use this on Proxmox (virtual USB drive, no passthrough on my test rig):

     "SasIdxMap": "0",
     "SataPortMap": "66",
     "DiskIdxMap": "0600"

     I set the number of ports on each controller to 6, as I've read the claim that Proxmox would only support up to 6 drives (tbh, I never verified that claim).
     Update: I forgot to mention that with these settings my SATA drives start with drive 1 -> without them, they start at 7.
     Update 2: it probably makes sense to add the device list:

     lspci -k | grep -A2 -E '(SCSI|SATA)'
     0000:00:1f.2 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
             Subsystem: Red Hat, Inc. Device 1100
             Kernel driver in use: ahci
     --
     0000:06:07.0 SATA controller: Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)
             Subsystem: Red Hat, Inc. Device 1100
             Kernel driver in use: ahci
     --
     0001:00:12.0 SATA controller: Intel Corporation Celeron N3350/Pentium N4200/Atom E3900 Series SATA AHCI Controller
     0001:00:13.0 Non-VGA unclassified device: Intel Corporation Celeron N3350/Pentium N4200/Atom E3900 Series PCI Express Port A #1
     0001:00:14.0 Non-VGA unclassified device: Intel Corporation Celeron N3350/Pentium N4200/Atom E3900 Series PCI Express Port B #1
     --
     0001:01:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)
     0001:02:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
     0001:03:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
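To illustrate how I read the DiskIdxMap value (my interpretation, not authoritative): each controller gets two hex digits giving the zero-based index of its first drive. A quick bash sketch:

```shell
#!/usr/bin/env bash
# Decode a DiskIdxMap value: two hex digits per controller,
# each pair being the zero-based index of that controller's first drive.
DiskIdxMap="0600"   # the value from the Proxmox example above
for i in 0 1; do
  pair=${DiskIdxMap:$((i * 2)):2}
  echo "controller $i starts at drive $((0x$pair + 1))"
done
```

With "0600" this prints drive 7 for the first controller and drive 1 for the second, matching the behaviour described above.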
  4. I doubt that it will fix the general issue within DSM... Python3 applications seem to use a binding that makes use of the system's OpenSSL libraries, so all Python3 applications should be affected. Some SynoCommunity packages (like git) come with an updated OpenSSL library under the hood - I wonder if it's possible to copy the binaries and libs into the main system... Though, please do NOT try this on your live systems! I repeat: it is a stupid idea, don't do it on your live system!
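To see which OpenSSL build a Python 3 installation is actually linked against, you can ask the ssl module directly (harmless to run, unlike copying libraries around):

```shell
# Print the OpenSSL version Python's ssl binding was built against.
python3 -c 'import ssl; print(ssl.OPENSSL_VERSION)'
```

On an affected DSM 6 box this would be expected to report an old 1.0.x version.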
  5. No idea whether there will be any further updates for 6.2.3. Thanks, but I'll skip the blog post. If its essence is that a too-old OpenSSL version is in use, then https://hub.docker.com/r/linuxserver/nextcloud certainly won't have that problem.
  6. Is this about Let's Encrypt certificates? The certificate for the intermediate CA ISRG_Root_X1.crt is included in the u3.pat, as is a ca-certificates.crt file with the new certificates. Everything that uses the OS's OpenSSL libraries still won't work with the current version (even though all necessary certificates are present!). What is needed here is an update of the library (libraries?) so it can handle the algorithms used, or rather the dual certificate chain, of the LE certificates. The curl included in the OS needs these libraries and has not worked with LE certificates since 2021-09-30. Newly issued certificates no longer contain the expired CAs, but the outdated version of the OpenSSL libraries still cannot handle the LE certificates. If you use a statically compiled variant (= OpenSSL baked in), it works:

     wget -L https://github.com/moparisthebest/static-curl/releases/download/v7.79.1/curl-amd64 && chmod +x curl-amd64 && ./curl-amd64

     If the certificate chain were the problem, it wouldn't work with the statically compiled curl variant either... But that doesn't solve the general problem that everything depending on the OpenSSL libraries currently doesn't work. If NC runs in a container, it does not depend on the outdated OpenSSL library and the problem cannot occur there. Of course the client (Chrome, for example, has no problems) also has to know the new certificates and be able to handle them.
  7. I doubt that it will. If the u3 does not provide new OpenSSL libraries, then the updated ca-certificates.crt alone is worth nothing for everything that depends on those libraries. I couldn't find evidence whether OpenSSL libraries are part of u3 as well. Like I wrote: if a statically compiled version of curl is used (with a built-in OpenSSL library), accessing LE services works like a charm with the very same ca-certificates.crt. I repeat: the problem is not only that CA certificates are missing/expired in ca-certificates.crt. Hope you guys find a solution with DSM 6.2.3.
  8. I have checked the content of the 6.2.3u3 pat; it includes a new ca-certificates.crt and the new LE CA. Maybe installation of the u3.pat might solve the issue? I am not sure why, but since I moved from ESXi to Proxmox, updates fail for me as well. Previously I used a SATA loader, now I use a USB loader, and it seems like that is somehow related to why the updates fail.
  9. The problem does not exist with DSM7. I just copied the /etc/ssl/certs/ca-certificates.crt file from DSM7 to DSM6.2.3. It still does not work. When a statically compiled version of curl is used (see: https://github.com/moparisthebest/static-curl/releases/tag/v7.79.1), it works, even though it uses the very same CA bundle... Smells like a library problem in an outdated OpenSSL version. That's terrible! It is still worth checking whether the above action solves the issue for sabnzbd, as it uses Python, which may or may not use the OpenSSL libraries from the OS. The commands won't make anything worse than it already is ^^ And then there is always the option to run things in a Docker container, which won't suffer from the old library problem.
  10. There is a way to check if certificates are outdated:

      for cert in /etc/ssl/certs/*.pem; do
        openssl verify -CApath /etc/ssl/certs "$cert"
      done

      According to these tests, the cross-signed LE certificates are outdated?! That's why I removed them from the download list above.
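A filtered variant of the same idea that prints only certificates that are already expired; `openssl x509 -checkend 0` exits non-zero when the certificate has passed its notAfter date (a sketch, same loop over the system store):

```shell
# List only the already-expired certificates in the system store.
for cert in /etc/ssl/certs/*.pem; do
  if ! openssl x509 -checkend 0 -noout -in "$cert" > /dev/null 2>&1; then
    echo "expired: $cert"
  fi
done
```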
  11. Actually this one should do everything that is required, but it still doesn't seem to work:

      CERT_DIR=/usr/share/ca-certificates/le
      mkdir -p "${CERT_DIR}"

      # Root certificates
      curl --insecure --silent "https://letsencrypt.org/certs/isrgrootx1.pem" --output "${CERT_DIR}/isrgrootx1.crt"
      curl --insecure --silent "https://letsencrypt.org/certs/isrg-root-x2.pem" --output "${CERT_DIR}/isrg-root-x2.crt"

      # Intermediate certificates
      curl --insecure --silent "https://letsencrypt.org/certs/lets-encrypt-r3.pem" --output "${CERT_DIR}/lets-encrypt-r3.crt"
      curl --insecure --silent "https://letsencrypt.org/certs/lets-encrypt-e1.pem" --output "${CERT_DIR}/lets-encrypt-e1.crt"

      # add certificates to ca-certificates.crt; not sure if required?
      cat ${CERT_DIR}/*.crt >> /etc/ssl/certs/ca-certificates.crt

      for cert in ${CERT_DIR}/*.crt; do
        crt=${cert##*/}
        pem=${crt/.crt/.pem}
        # create symlink from downloaded certs to /etc/ssl/certs
        ln -fs ${CERT_DIR}/${crt} /etc/ssl/certs/${pem}
        pushd /etc/ssl/certs
        # create symlink named after the hash of the pem certificate
        ln -fs ${pem} `openssl x509 -hash -noout -in ${pem}`.0
        popd
      done

      In theory that should work. But it doesn't. Make sure to run the `cat` command no more than once, otherwise you might end up with duplicate entries in the ca-certificates.crt file. Either the server side still uses a certificate with the outdated certificate chain in one of the two cross-signed paths inside the certificate, or we really do have an issue with our OpenSSL version.
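For anyone wondering what the `.0` symlink in the script is for: OpenSSL looks up CA certificates in a CApath directory by the hash of their subject name. A small demonstration with a throwaway self-signed certificate (file names here are hypothetical):

```shell
#!/usr/bin/env bash
# OpenSSL finds CA certs in a CApath directory via <subject-hash>.0 symlinks.
# Demonstrate the hash on a throwaway self-signed certificate.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -keyout "$tmp/key.pem" \
  -out "$tmp/cert.pem" -days 1 -subj "/CN=demo" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$tmp/cert.pem")
ln -s "$tmp/cert.pem" "$tmp/${hash}.0"
echo "symlink name: ${hash}.0"
```

The hash is an 8-digit hex value; `openssl verify -CApath` resolves issuers through exactly these symlinks.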
  12. I envy France and Romania for these cheap lines. In Germany we have 1 Gbps down and 50 Mbps up for ~50€ (or was it 60€? not sure).
  13. My current understanding is that an extension bundles one or more main drivers and their dependencies. The "kmods" item in a recipe declares the load order of the modules for this extension during boot. If the dependencies are not supposed to be included in an extension, is there a mechanism in place to define dependencies on other extensions? I assume this would require dependency graph/tree detection, which I assume is highly messy or even impossible to achieve with plain bash.
  14. Not sure why I didn't think about it earlier, but in fact you can check out redpill-load on the host, configure "docker.local_rp_load_use": "true" and "docker.local_rp_load_path": "/path/where/your/local/redpill-load/copy/is" in global_config.json, and then use the ext-manager inside your local redpill-load copy before you perform your next auto build. This way there is no need to integrate the ext-manager into the redpill-tool-chain, as it can be used directly inside the local redpill-load folder. Update: I just checked out of curiosity - this feature has been available since v0.6.
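For reference, the relevant part of global_config.json would look something like this (the path is of course an example):

```json
{
  "docker": {
    "local_rp_load_use": "true",
    "local_rp_load_path": "/path/where/your/local/redpill-load/copy/is"
  }
}
```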
  15. Seems we are not getting on the same page here. I was not really asking for any instructions or ways to modify the toolchain builder... The run action actually IS the recommended approach at this time; I just wrote about the additional step required to actually build the bootloader image when using the run action. Update: there is a better alternative, see my next post.
  16. Uhm, so you didn't ask before what the next commands would be to follow up on the steps you posted? I already had the implementation for configurable ext-manager support finished on Wednesday, but had no extension repos to test with. Now that we have some existing repos, I can test it. But the release will be on hold to make sure my implementation aligns with what @ThorGroup has in mind.
  17. For the time being this is the approach, until it's sorted out with TTG how to implement the integration. To build the bootloader you then simply have to execute the command `make -C /opt/ build_all` or follow the steps in the redpill-load documentation. As per my understanding, the bootloader should be built with the added extension. As a quick solution to get rid of step 3, I could embed the symlink to ext-manager.sh like TTG suggested.
  18. I think I found a clever way. I added "extensions": to each platform version, with "id": and "url": fields, and I pass the whole extensions array in as a single string and process it inside the container. Now I need some extensions to take this addition for a test drive. I also added support for a custom_config.json that allows merging its values with the global_config.json. This lets you keep individual platform version configurations in your own config file, which won't be replaced by a new release of the toolchain loader.
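One way to picture the custom_config.json overlay is jq's recursive object merge, where values from the second file win (a sketch with hypothetical keys, not necessarily how the toolchain implements it):

```shell
# Recursively merge custom values over the global config (jq's * operator).
# "use_build_cache" is a made-up key purely for demonstration.
echo '{"docker": {"use_build_cache": "true"}, "other": "global"}' > global_config.json
echo '{"docker": {"use_build_cache": "false"}}' > custom_config.json
jq -s '.[0] * .[1]' global_config.json custom_config.json
```

Keys present only in global_config.json survive; keys present in custom_config.json override.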
  19. Well done @ThorGroup! Once again a marvelous update that brings a lot of excitement to all of us. I checked the toolchain builder builds for the 6.2.4 and 7.0 configurations that are supported out of the box (= configurations from TTG's redpill-load repository). All of them use the new plugin manager and build like a charm, though I haven't checked the generated images yet. I noticed the cached pat files for apollolake are re-downloaded by redpill-load's build-loader.sh, because this time the `+` character is replaced with a `p` character in the output filename: `ds918+_25556.pat` became `ds918p_25556.pat`. Whoever is affected can simply replace the + in the filenames with a p and prevent the files from being re-downloaded. Additionally I tested jumkey's apollolake-7.0.1-42218, which is not yet aligned with the recent changes in redpill-load. I must admit, this one went over my head. I was going to check how `build-loader.sh` calls `ext-manager.sh` and then simply add an array to declare plugins in the toolchain builder's global_config.json and execute the command for each element before executing the loader build. Update: the ext-manager.sh syntax is straightforward, no need to check how build-loader.sh uses it. Though, I have to find a clever way to pass in an array, even though Docker doesn't support using arrays as environment variables.
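The rename of affected cached pat files can be done in one go (run inside the cache directory of your checkout; the directory name is an assumption, adjust as needed):

```shell
# Rename cached pat files so '+' becomes 'p' in line with the new naming,
# e.g. ds918+_25556.pat -> ds918p_25556.pat.
for f in *+*.pat; do
  [ -e "$f" ] || continue       # skip if the glob matched nothing
  mv -- "$f" "${f//+/p}"
done
```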
  20. Affirmative. Everything right on the spot! DSM6.2.4 builds use the kernel sources that are publicly available - thus the condition that checks for kernel sources being used is met and there is no reason to throw the warning. From my perspective there is nothing wrong with the warning on DSM7 builds, as it's a constant reminder that we need to switch from toolkit-dev to the kernel sources as soon as they are publicly available. This is a functional warning, not a technical warning. @dateno1 To "fix" this warning you just need to convince Synology to publish the DSM7 kernel sources 😃
  21. Technically correct, but it won't make much of a difference in this case because "compile_with": "toolkit_dev" will use toolkit-dev and not the kernel to build redpill-lkm. Once the new kernel sources for DSM7 are available, the url has to be changed to the new url and "compile_with" needs to be set to "kernel" to build redpill-lkm using the kernel.
  22. Please open an issue in the GitHub repository of the redpill-load version configured in the <platform version> that you used to build the image. I repeat: the toolchain builder is not responsible for that.
  23. @ThorGroup thank you for the update! And indeed, I spotted and incorporated the new make targets into the new toolchain builder version. Taken from the README.md: Supports make targets to specify the redpill.ko build configuration. Set <platform version>.redpill_lkm_make_target to `dev-v6`, `dev-v7`, `test-v6`, `test-v7`, `prod-v6` or `prod-v7`. Make sure to use the -v6 ones on DSM6 builds and the -v7 ones on DSM7 builds. By default the targets `dev-v6` and `dev-v7` are used. I snatched the following details from the redpill-lkm Makefile:
      - dev: all symbols included, debug messages included
      - test: fully stripped with only warning & above (no debug or info)
      - prod: fully stripped with no debug messages
      See README.md for usage. redpill-tool-chain_x86_64_v0.10.zip
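As an example, selecting the stripped production target for a DSM7 build would mean adding this key to the relevant <platform version> entry of your config (a fragment only; the surrounding structure depends on your config file):

```json
{
  "redpill_lkm_make_target": "prod-v7"
}
```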
  24. This doesn't add up... The toolchain builder does exactly nothing to either support or prevent UEFI; this is part of what redpill-load does. So far I understand your post as "the toolchain builder uses a redpill-load version that lacks features that were previously there". The toolchain can and will not influence how redpill-lkm and redpill-load work. Never did, never will!