XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 09/17/2021 in all areas

  1. Every time I think there is nothing left to optimize in the toolchain builder, someone comes around and finds something that makes sense to add... That said, here is v0.9 of the toolchain builder.
     - added sha256 checksums for the kernel and toolkit-dev downloads in `global_config.json`. The item `"download_urls"` is renamed to `"downloads"` and has a new structure. Make sure to align your custom <platform version> configurations to the new structure when copying them into `global_config.json`
     - verify the checksum of the kernel or toolkit-dev when building the image and fail if the checksums mismatch. The corrupt file will not be deleted!
     - added `"docker.custom_bind_mounts"` in `global_config.json` to add as many custom bind mounts as you want; set `"docker.use_custom_bind_mounts"` to `"true"` to enable the feature.
     - fixed: only download the kernel or toolkit-dev actually required to build the image (previously both were always downloaded, but only one of them was used when building the image)
     - added a simple precondition check to see if the required tools are available
     See README.md for usage. redpill-tool-chain_x86_64_v0.9.zip
    6 points
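To illustrate the v0.9 changes, here is a hedged sketch of what the renamed `"downloads"` item and the new bind-mount settings might look like in `global_config.json`. The key names `kernel` and `toolkit_dev`, the URLs, the hash values, and the bind-mount fields are illustrative guesses, not copied from the actual file — check the shipped `global_config.json` for the real structure:

```json
{
  "docker": {
    "use_custom_bind_mounts": "true",
    "custom_bind_mounts": [
      { "host_path": "./cache", "container_path": "/cache" }
    ]
  },
  "downloads": {
    "kernel": {
      "url": "https://sourceforge.net/projects/dsgpl/files/...",
      "sha256": "0000000000000000000000000000000000000000000000000000000000000000"
    },
    "toolkit_dev": {
      "url": "https://sourceforge.net/projects/dsgpl/files/...",
      "sha256": "0000000000000000000000000000000000000000000000000000000000000000"
    }
  }
}
```

Storing the expected hash next to each URL is what lets the builder fail the build on a mismatch while leaving the corrupt download on disk for inspection.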
  2. I guess we should open another thread for helping others use the loader and leave this thread for development needs. Right now this thread is flooded with operational issues.
    4 points
  3. @WiteWulf
     {
       "extra_cmdline": {
         "pid": "0x0001",
         "vid": "0x46f4",
         "sn": "1330Lxxxxx",
         "mac1": "0011xxxxxx",
         "DiskIdxMap": "1000",
         "SataPortMap": "4",
         "SasIdxMap": "0"
       },
       "synoinfo": {
         "supportsystemperature": "no",
         "supportsystempwarning": "no"
       },
       "ramdisk_copy": {}
     }
     This works on a fresh install.
    2 points
  4. Copy the block with the id "bromolow-7.0-41222", change "id" and "platform_version" to "bromolow-7.0.1-42214", then replace the "redpill_load" subitems "source_url" and "branch" with whatever you want to use, and you're good... The id can really be anything; it is just used to match the configuration when using the build/auto/run actions. Though, make sure the value of platform_version exactly matches the format "{platform}-{version}-{build}", as this will be used to derive information required for the build.
    2 points
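As a sketch, the copied block might end up looking like the fragment below. The `source_url` and `branch` values are placeholders for whatever fork you want to build from, and any other keys the original block carries (omitted here) should be copied along unchanged:

```json
{
  "id": "bromolow-7.0.1-42214",
  "platform_version": "bromolow-7.0.1-42214",
  "redpill_load": {
    "source_url": "https://github.com/RedPill-TTG/redpill-load.git",
    "branch": "master"
  }
}
```

Keeping `id` identical to `platform_version` is just a convenience; only `platform_version` must follow the "{platform}-{version}-{build}" format.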
  5. Just my feedback: I'd been testing redpill DS918 using ESXi on a new machine for HW transcoding, but out of curiosity did a bare-metal install on my Gen8 MicroServer. (This usually runs 6.2.3/Jun's 1.03b for most of my services/dockers and VMs, plus a VDSM 7.0, as I was testing the Photos app; face recognition works.) Spec:
     - Xeon E3-1265L V2, 16 GB
     - H220/LSI HBA card in IT mode; I disabled this as I don't believe there is SAS support as yet
     - used the onboard B120i in AHCI mode, with spare disks
     - onboard Broadcom dual NIC/HP Ethernet 1Gb 2-port 332i; both working (I use different VLANs on each)
     - APC UPS connected via USB; working
     So far I have not had any reboots. Installed Docker, already disabled IPv6, and am about to start copying some of my docker configs across (to check if I can replicate what others are seeing).
    1 point
  6. Hey people, please don’t share loader images 🙏🏻 They contain software that is the property of Synology and is not open source. ThorGroup specifically designed the toolchain and build process to download the freely available software from Synology’s own servers to build the images with. This way no one can be accused of redistributing Synology’s intellectual property, and potentially get the project and the forums shut down.
    1 point
  7. I just fixed this by changing the virtual COM port from COM2 to COM1 in the BIOS.
    1 point
  8. 1 point
  9. https://github.com/RedPill-TTG/redpill-lkm/issues/21
    1 point
  10. HPE ProLiant MicroServer Gen8, Intel Xeon E3-1220L v2, 12 GB. Loader: redpill-DS3615xs_7.0.1-42214_b1631729759. Image: ds3615xs_42214
    1 point
  11. @WiteWulf confirmed: Intel Xeon E3-1241 v3, DSM 7.0.1-42214, DS3615xs
    1 point
  12. HP Gen8 - ESXi 7.0u2d - DS3615xs - DSM 7.0.1 - Intel(R) Xeon(R) CPU E3-1270 V2 @ 3.50GHz
    1 point
  13. So it looks like this problem is connected to DS3615xs. I'm running DS918+. One week ago I was running DS3615xs on the same bare-metal machine (Xeon E3-1265L V4, MB MSI Z97I ACK GAMING) with the same dockers (at that time I had the InfluxDB docker running) and everything was working stably.
    1 point
  14. On my bare-metal Gen8 with redpill 7.0.1: 2 docker containers deployed, 2 reboots.
    1 point
  15. I've created an InfluxDB container, but at this moment I have no connection to the db. I'll let you know if anything happens. I'm using a few containers and everything is working great (GitLab, Roon Server, Minecraft server). BTW, I've found this info, which looks very similar to your problems: https://access.redhat.com/solutions/1354963
    1 point
  16. The system was not responding during the high CPU usage. I'm not able to reproduce the issue... docker started again. The only difference between the last crash and now is that I did not reboot after disabling IPv6 earlier, whereas now the system started fresh with IPv6 disabled. I don't know if it is related, but it currently seems to be stable.
    1 point
  17. Thanks for grabbing the console output 👍 This actually looks a little different from what I've been seeing (crashes with containerd-shim). Yours indicates a problem with runc, and seems related to a thread I found while searching this morning: https://github.com/opencontainers/runc/issues/2530 This still manifests as problems with containerd. Some people are seeing reboots, some are simply seeing lockups. You really ought to update the BIOS on your Gen8, by the way, and check out whatever other firmware updates are available for it...
    1 point
  18. Same for me, but at least disabling IPv6 in DSM fixes the crashes.
    1 point
  19. You need to build the apollolake target to get the 918+; bromolow builds the 3615xs. Also, make sure your hardware (CPU and chipset) is compatible with apollolake.
    1 point
  20. Same problem here on Proxmox after some Docker starts and stops. I have deactivated IPv6 support in the DSM network settings (not in the Docker settings) and it seems to be stable at the moment.
    1 point
  21. You can always open issues on GitHub for anything that is addressed to the developers. This thread, IMHO, helps people understand the concept and install the loader on their test systems, and that pushes development even further. The testing process is a development stage anyway.
    1 point
  22. synoinfo.conf gets patched by the loader image, like @Orphée says, so it's a solution that is permanent across reboots.
    1 point
  23. Thanks for the continued hard work developing and supporting this, @haydibe, it's an invaluable resource for those community members for whom maintaining docker is a stretch of the *nix skills. The streamlining of the build process also leaves a lot more time for us to get on with solving compatibility issues and bugs with the redpill tools themselves, so everyone wins!
    1 point
  24. You can add as many platform version blocks (=set of configurations for a specific version) as you want. Just make sure every new block has a unique id.
    1 point
  25. For inetd, create /usr/syno/etc/rc.d/J00inetd.sh; there is no need to patch /etc/rc or open http://IP:5000/webman/start_telnet.cgi
    1 point
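A minimal sketch of that startup script, assuming DSM executes the scripts in /usr/syno/etc/rc.d at boot. The sketch writes to a local ./rc.d directory so it can be tried safely anywhere; on the real box you would set RC_DIR to /usr/syno/etc/rc.d and run as root:

```shell
# Target directory: ./rc.d for this sketch; /usr/syno/etc/rc.d on DSM itself.
RC_DIR="./rc.d"
mkdir -p "$RC_DIR"

# "J00..." sorts first alphabetically, so inetd comes up early in the sequence.
cat > "$RC_DIR/J00inetd.sh" <<'EOF'
#!/bin/sh
# Start inetd so telnet is reachable after every reboot
# without patching /etc/rc or calling start_telnet.cgi.
/usr/sbin/inetd &
EOF
chmod 0755 "$RC_DIR/J00inetd.sh"
```

The path /usr/sbin/inetd inside the script is an assumption about where DSM keeps the binary; verify it with `which inetd` on your box first.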
  26. A similar situation is described here, but more clearly and in Russian.
    1 point
  27. Like I said, I have compiled it and verified that the vmxnet driver works on DSM 7.0/7.0.1. You should, though, put it in your loader's /usr/lib/modules and modify linuxrc.syno.impl (in rd.gz) so it will be loaded during boot. The best line number to add insmod /lib/modules/XXX.ko is 285. vmxnet3.7z
    1 point
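The edit described above could be scripted roughly like this. The sketch operates on a stand-in file so it is safe to run anywhere; on the loader you would operate on the real linuxrc.syno.impl extracted from rd.gz, and line 285 is taken from the post (it may shift between DSM builds, so locate the module-loading section in your own copy first):

```shell
# Stand-in for the real linuxrc.syno.impl extracted from rd.gz.
seq 1 300 | sed 's/^/# placeholder line /' > linuxrc.syno.impl

# Insert the insmod before what was line 285, as suggested in the post.
sed -i '285i insmod /lib/modules/vmxnet3.ko' linuxrc.syno.impl

# Confirm where the line landed.
grep -n 'vmxnet3' linuxrc.syno.impl
```

Note that `sed -i` is the GNU form of in-place editing; after the edit you would repack rd.gz with the modified file and the vmxnet3.ko module in place.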
  28. It's possible that 6.2.3 was the latest for XPEnology. I hope someone will find a way to use DSM 7.0.
    1 point
  29. Here is how I rolled back to 6.2.3 update 3:
     1) Power off.
     2) Remove the USB stick and the HDD.
     3) Rewrote the bootloader from scratch with 6.2.3.
     4) Connected the HDD to a Windows machine, downloaded "MiniTool Partition Wizard", selected the first partition (~2.7 GB in size) and formatted it.
     5) Put all of it back in and found the box via find.synology.com.
     6) Installed manually (having first downloaded the 6.2.3 *.pat from the site) and restored all settings.
     7) Updated to 6.2.3 update 3, also manually, having downloaded 6.2.3 update 3 from the site.
     Thanks to the guys who explained the process; it didn't work on the first try, but it worked. Next time I'll think three times before updating.
    1 point
  30. I haven't brought up this image in a long while... I ran it once, long ago. The 10 gigabits intrigued me, so I downloaded it and installed it. First system boot: Jun's loader has been crossed with the penguins, and the menu has been hacked up. Fine, moving on. First boot takes 2 minutes; strange, it never took more than 20 seconds even on bare metal. There are two disks in the archive: the boot disk and a 16 GB one. I put a partition on the 16 GB disk, after which the disk disappears from the volume menu. OK. I shut down the machine, add a 120 GB virtual disk, and start it up. Boot takes 3 minutes... what the? I log into the box; my disk has appeared. I create the partition, and the box screams that the new disk is crashed. How can it be crashed? It's virtual... I built it myself. I never got to the gigabits. Mahmud, light it up! A pity. I won't trust my databases to something like this. The only option is to install it on a Windows server under Workstation, but that's a type 2 hypervisor, blah blah blah: virtualization overhead and 70 MB/s, while with link aggregation on the bare-metal box I'm used to hitting 180, the speed of the physical disk. We'll wait for a new Jun, or better yet the authors of the original box: at least their Hyper-V worked...
    1 point
  31. 1. In Package Center, add the package source https://packages.synocommunity.com (in the settings, allow installation from any source).
     2. Install the SynoCli file tool.
     3. Copy libsynophoto-plugin-detection.so onto the box into your home folder (or wherever you like).
     4. Connect via PuTTY.
     5. Log in with your username and password, then run sudo su (it will ask for the password).
     6. Run mc.
     7. A file list of your home folder appears, including libsynophoto-plugin-detection.so. Select it and press Ctrl+X, then C.
     8. Set the permissions on the file (visually it looks like this, where 0 means unchecked): 0 0 0 X X X X 0 X X 0 X
     9. Press Tab to switch to the right pane and navigate to /var/packages/SynologyMoments/target/usr/lib/. Return to the left pane with Tab. If libsynophoto-plugin-detection.so is not selected, select it. Press F5 and agree to the replacement.
     10. Stop the Moments package and start it again. Profit!!!!
    1 point
  32. There are no drivers in the package, it's just x64 application code. Denverton seems to use the same kernel as the 918+ (4.4.59). The Denvertons are: 1618+, 1819+, 2419+, dva3219, rs820+, rs820rp+, rs2418+, rs2418rp+, rs2818rp+. As it seems to be for the dva3219, that one should at least contain the Nvidia drivers (binary): in DSM_DVA3219_24922.pat, /usr/lib/modules/ contains nvidia-modeset.ko, nvidia-uvm.ko and nvidia.ko. So as long as you don't have these files for the 918+, the spk package will be of no use. Edit: the Nvidia files are also in the 1618+, so they might be in all Denverton versions. The version number in nvidia.ko is 381.22; it is as old as the kernel Synology used (~5/2017).
    1 point