XPEnology Community

haydibe

Contributor
  • Posts: 705
  • Joined
  • Last visited
  • Days Won: 35

Everything posted by haydibe

  1. Neither the Dockerfile nor the Makefile changed between v0.8 and v0.9. Thus nothing that would affect how the image is built or how the container runs has changed... You might want to clean your build cache and image cache and try v0.9 again.
  2. Ah okay. Forget that I ever mentioned my observation regarding Ryzen CPUs and old kernels.
  3. @WiteWulf Are you and the others having problems with Docker on 3615xs running Ryzen CPUs, by any chance? My observation is that CPU softlocks randomly occur with modern Ryzen CPUs on older kernels, which causes the system to freeze. I made this observation when building Vagrant base boxes with Packer on a Ryzen 7 4800H CPU; especially CentOS (having a 3.10.x kernel) tends to randomly freeze during Packer execution (which just creates a VirtualBox VM, installs the OS from the ISO, and provisions additional software). I never had this experience on Intel CPUs.
  4. Copy the block with the id "bromolow-7.0-41222", change "id" and "platform_version" to "bromolow-7.0.1-42214", then point the "redpill_load" subitems "source_url" and "branch" to whatever you want to use, and you are good... The id can really be anything; it is just used to match the configuration when using the build/auto/run actions. Though, make sure the value of platform_version exactly matches the format "{platform}-{version}-{build}", as it is used to derive information required for the build.
  5. You can add as many platform version blocks (=set of configurations for a specific version) as you want. Just make sure every new block has a unique id.
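Based on the description above, a duplicated platform version block might look roughly like this. Only the keys mentioned in the post are shown; the fork URL and branch name are placeholders you would replace with whatever repo you actually want to build from:

```json
{
    "id": "bromolow-7.0.1-42214",
    "platform_version": "bromolow-7.0.1-42214",
    "redpill_load": {
        "source_url": "https://github.com/<your-fork>/redpill-load.git",
        "branch": "develop"
    }
}
```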
  6. Every time I think there is nothing left to optimize in the toolchain builder, someone comes around and finds something that makes sense to add... That said, here is v0.9 of the toolchain builder.
     - added sha256 checksums for the kernel and toolkit-dev downloads in `global_config.json`. The item `"download_urls"` is renamed to `"downloads"` and has a new structure. Make sure to align your custom <platform version> configurations to the new structure when copying them into the `global_config.json`
     - check the checksum of the kernel or toolkit-dev when building the image and fail if the checksums mismatch. Will not delete the corrupt file!
     - added `"docker.custom_bind_mounts"` in `global_config.json` to add as many custom bind mounts as you want; set `"docker.use_custom_bind_mounts"` to `"true"` to enable the feature.
     - fixed: only download the kernel or toolkit-dev actually required to build the image (before, both were always downloaded, but only one of them was used when building the image)
     - added a simple precondition check to see if the required tools are available
     See README.md for usage. redpill-tool-chain_x86_64_v0.9.zip
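A sketch of how the new bind-mount options might look in `global_config.json`. Only the two key names come from the changelog above; the exact shape of the `custom_bind_mounts` entries (host path / container path pairs) is an assumption:

```json
{
    "docker": {
        "use_custom_bind_mounts": "true",
        "custom_bind_mounts": [
            { "host_path": "./my-local-folder", "container_path": "/opt/my-local-folder" }
        ]
    }
}
```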
  7. Thanks, that reminds me that I wanted to add checksum validation for the downloaded files. I would put my money on a corrupt or incomplete download; delete the file mentioned in your screenshot and try again.
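A minimal sketch of the kind of sha256 validation the v0.9 changelog describes: compare the checksum of a downloaded file against an expected value and fail on mismatch without deleting the file. The file name is illustrative; here an empty file is used so the expected hash is the well-known sha256 of the empty string:

```shell
#!/bin/sh
# Illustrative file name; a real run would check the downloaded kernel/toolkit-dev archive.
file="linux-3.10.x.txz"
expected_sha256="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

printf '' > "$file"   # create an empty file; its sha256 matches the expected value above

actual_sha256=$(sha256sum "$file" | awk '{print $1}')

if [ "$actual_sha256" = "$expected_sha256" ]; then
    echo "checksum OK"
else
    # deliberately keep the corrupt file for inspection, as the changelog notes
    echo "checksum mismatch - keeping the file for inspection" >&2
    exit 1
fi
```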
  8. I assume you mean that the toolchain Docker image you wanted to build failed to be created? At least that's what your screenshot indicates. Though, the error message does not help to pinpoint the cause; it might be thrown by apt or curl itself. I just checked the jq download URL in a browser -> still valid. The "I have no idea" attempt to fix it would be: "clean all" and a restart of the Docker service or the Docker host itself.
  9. I removed all platform versions from the global_config.json that did not point to the @ThorGroup redpill-load repository. Thus, there is no built-in <platform_version> for DSM 7.0.1 anymore and you will have to add the corresponding settings to the global_config.json yourself. It proved to be impossible to keep track of all changes that happen around redpill-load, and after all not everyone wants an opinionated configuration that uses a specific fork.
     changes:
     - removed all platform versions that use redpill-load repositories other than the official TTG repo -> all 7.0.1 versions
     redpill-tool-chain_x86_64_v0.8.zip
  10. ... this kind of makes me wonder if I should release the toolchain builder with just the TTG repos in the global_settings.json, and let everyone add their own repos for everything custom. After all, the global_settings.json is designed to support that scenario.
  11. Thought about it... but this is rather an addition to redpill than XPE development, isn't it? I understand that you volunteer to push it to GitHub? Be my guest. Everyone is free to modify the sources, publish modifications here, or push them to GitHub. I claim no ownership of the toolchain builder - as such, I would appreciate it if my name is not mentioned in the GitHub repo. I prefer not to leave my marks on GitHub with it... Update: I forgot to mention that I uploaded the fix for 0.7.4 in the original post, pointing to the development branch for apollolake-7.0.1-42214.
  12. "clean all" removed all images (last build + orphaned), but did only remove the build cache of orhaned images, but not the latest. Now "clean all" will clean the build cache for the last build image as well. re-uploaded with yumkey fix redpill-tool-chain_x86_64_v0.7.4_fix.zip
  13. That's the base image for apollolake 7.x builds. "clean all" really just targets images built by the toolchain builder - it doesn't clean anything else. I introduced labels in the Dockerfile for this particular purpose: to be able to filter for those images and only prune the build cache for them. It wouldn't feel right to generally prune everything. Though, the double "clean all" still doesn't make sense, since the operation should be idempotent... really strange, I will check later whether something slips through for whatever reason. It will be sometime in the evening; I still have to work 4 hrs and then rest before I am able to take a look.
  14. Hmm, that's odd. Can you share the output of `docker image ls`? Let me check my history, not that I skipped testing it again... everything is possible before the first coffee 😊 Update: I did check it before and I checked it again: the image builds, and creating a loader with auto works for me. I have no idea how Docker for Desktop behaves differently than on Linux. I only use Docker where it's a first-class citizen... Linux. Everywhere else it's a half-assed implementation that monkeywires the local docker CLI client to a Linux VM that actually runs Docker. DfD dominates the issues in the official Docker forums...
  15. Seems I missed testing the bromolow-7.0.1 image... so this is the one that caused this mystery behavior. I disabled the build cache by default now - it can be turned on again in global_settings.json. Now "clean all" deletes all images, including the last built one.
      changes in v0.7.3:
      - fixed usage of the label that determines the redpill-tool-chain images for clean up
      - added `"docker.use_build_cache": "false"` to global_settings.json
      - added `"docker.clean_images": "all"` to global_settings.json (set to "orphaned" to clean up everything except the latest build)
      See README.md for usage.
      When I introduced labels, I must have made a last adjustment to the Dockerfile which actually resulted in all images being created with the correct key, but with the wrong value (= all identical). Please clean up your build cache and images to benefit from the fix:

          docker builder prune --all --filter label=redpill-tool-chain
          docker image ls --filter label=redpill-tool-chain --quiet | xargs docker image rm

      redpill-tool-chain_x86_64_v0.7.3.zip
  16. Is docker.local_rp_load_use set to true in your global_settings.json? If so:
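For reference, the setting in question sits under the `docker` section of `global_settings.json`. Only this key name is taken from the post; the surrounding structure is assumed, and the string-valued boolean follows the style of the other settings quoted in this thread:

```json
{
    "docker": {
        "local_rp_load_use": "true"
    }
}
```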
  17. I am not sure what you guys did, but I was able to build all variations supported by the toolchain builder without the issues you had (except the bug with 'WARNING: "unregister_native_sata_boot_shim" [/opt/redpill-lkm/redpill.ko] undefined!'). Once you map a modified local redpill-load folder into the build container, it's up to you to merge changes from the remote branch into your local copy yourself. This will prevent a pull in "auto", as your head has moved in a different direction than the head of the redpill-load repository. This is the effect of doing things manually and needs to be "fixed" manually - I used double quotes as this is common day-to-day git routine and nothing the toolchain builder should handle. While some people might prefer a `reset --hard`, others would start to curse if all their customizations suddenly disappeared. Sadly, a "do what I want" implementation rarely works for everyone, as everyone wants it to do something different... Though, what I can implement is a flag to define whether the build cache should be leveraged or not AND whether clean should delete just the orphaned images or all of them. Still: this will not help if a local redpill-load folder is mapped into the container.
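The diverged-head situation described above can be sketched end to end. This is a self-contained illustration run entirely in a temp directory (all repo names and file contents are made up): a "local" clone gains a customization, "upstream" moves on, and an explicit fetch + merge reconciles the two without the destructive `reset --hard`:

```shell
#!/bin/sh
# Sketch: a locally mapped redpill-load clone whose head diverged from upstream,
# reconciled with fetch + merge (the step "auto" deliberately will not do for you).
set -e
tmp=$(mktemp -d)
cd "$tmp"

# stand-in for the remote redpill-load repository
git init -q -b master upstream
cd upstream
git config user.email you@example.com
git config user.name you
echo base > file
git add file
git commit -qm "initial"
cd ..

# your local working copy, the one mapped into the build container
git clone -q upstream local
cd local
git config user.email you@example.com
git config user.name you
echo custom > custom.cfg
git add custom.cfg
git commit -qm "local customization"
cd ..

# meanwhile, upstream moves on
cd upstream
echo change >> file
git commit -qam "upstream change"
cd ..

# manual reconciliation: fetch and merge, keeping the local customization
cd local
git fetch -q origin
git merge -q origin/master -m "merge upstream"
echo "history now has $(git rev-list --count HEAD) commits"
```

After the merge, `custom.cfg` survives and the upstream change to `file` is present, which is exactly why a blanket `reset --hard` in the tool would be the wrong default.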
  18. Actually it deletes all orphaned images, as in images that are not the latest built image for a specific platform version. Individually mapped redpill-load folders should be updated outside the container, as the authentication information does not exist inside the image.
  19. I haven't followed the module compilation discussions in depth, as my target environment (= Proxmox) is supported out of the box. The toolchain builder just follows the instructions of the respective git project, which made it easy to put into a Dockerfile. Like I wrote before, I am not going to implement anything that needs to patch the redpill-load sources on the fly. Those types of changes can be done in individual forks of redpill-load; you can simply point the repo URL to your repo and be good. Everything else needs discussion via PM before I am able to decide whether it is in scope of the toolchain builder or not. Everyone is free to modify the toolchain builder as they need and can share those modifications here.
  20. Let's say I share the reservation that things might become ugly to maintain and new problems might be introduced. I must admit I am not really keen to spend time on research about what would need to be done. Personally, I prefer to wait for the official redpill implementation as well.
  21. Sure, let's assume I add the unmodified kernel sources and you provide the commands required to inject them into an image created by redpill-load - it can be done. Contribute the required commands, and I will bake them into the Makefile target "auto". Though if it's something where I would need to modify the redpill-load sources, then it's a hard pass...
  22. I would assume this is related to what redpill-load states in the README.md for DS918+:
  23. Are you sure that you executed `sudo su - root` to become root before the vi line? Alternatively, you could use `sudo -i` to become root. Refactored to be dynamic:

          #!/bin/sh
          for cpu in /sys/devices/system/cpu/cpu[0-9]*; do
              echo powersave > ${cpu}/cpufreq/scaling_governor
          done