RedPill - the new loader for 6.2.4 - Discussion


Recommended Posts

16 minutes ago, Drones said:

I'm trying this solution, but no luck.

How can I change the PID & VID on a ready-made image (from tocinillo2's post)?

I think my problem is with wrong PID&VID.

You can use OSFMount to mount the image and then change the grub.cfg
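Once the image is mounted and grub.cfg is open, the edit itself is just a text substitution. As a sketch (assuming the file carries the USB IDs as `vid=0x…`/`pid=0x…` tokens on the kernel command line; the values below are made-up examples, not real IDs):

```python
import re

def set_usb_ids(grub_cfg_text, new_vid, new_pid):
    """Replace every vid=0x.../pid=0x... token in the grub.cfg text."""
    text = re.sub(r"vid=0x[0-9a-fA-F]+", f"vid={new_vid}", grub_cfg_text)
    text = re.sub(r"pid=0x[0-9a-fA-F]+", f"pid={new_pid}", text)
    return text

# Hypothetical kernel command line from grub.cfg:
line = 'linux /zImage vid=0x0001 pid=0x0002 sn=XXXX'
print(set_usb_ids(line, "0x0EA0", "0x2168"))
# -> linux /zImage vid=0x0EA0 pid=0x2168 sn=XXXX
```

The VID/PID must match the USB stick you actually boot from, which is why a ready-made image from someone else usually needs this edit.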

  • Thanks 1
1 hour ago, haydibe said:

Incorporated the addition from @taiziccf for chchia's  apollolake-7.0.1-42214, and added @jumkey's bromolow-7.0.1-42214 from his develop branch.

 

You guys are moving at an incredible pace :)

 

It prompts the following parse error:

parse error: Objects must consist of key:value pairs at line 66, column 10

redpill-tool-chain_x86_64_v0.6.1.zip 7.47 kB · 20 downloads

20210905012106.png

3 hours ago, pkdick1 said:

 

Hello, thank you very much for this file. May I ask you a question? I have an ASRock J4150-ITX myself; do you think that using your image I will be able to migrate from DSM 6.2.3 to DSM 7.0.1? I also thought it was necessary to use a legit serial number to make a baremetal XPEnology device work: could you give me your opinion on this?

 

I admit that I have been longing for this wonderful development for a while, so I am close to taking the risk of using your image...

 

Thank you in advance,

I can only answer part of your questions: yes, you can upgrade from DSM 6.2.3 to DSM 7.0. I have just done it on my production DSM server.

FYI, if you use an image from another user, you need to at least change the PID and VID.

Please check your installed packages against the DSM 7 package list. Many packages have been removed in DSM 7.0; after you update to DSM 7, those missing packages will no longer be available.

And remember to back up your data first... JUST IN CASE.

2 hours ago, haydibe said:

Incorporated the addition from @taiziccf for chchia's  apollolake-7.0.1-42214, and added @jumkey's bromolow-7.0.1-42214 from his develop branch.

 

You guys are moving at an incredible pace :)

 

 

 

redpill-tool-chain_x86_64_v0.6.1.zip 7.47 kB · 40 downloads

 

File global_config.json has a syntax error. You need to add "}" at the end of line 45
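For syntax errors like this, Python's json module reports the exact line and column, which makes it quick to check a global_config.json before rebuilding. A small illustration (the fragment below is made up; it is not the real file):

```python
import json

# A fragment with the same class of error: the object that starts
# at "user_config" is never closed with "}".
broken = '{"platform": "bromolow", "user_config": {"vid": "0x0001"'

try:
    json.loads(broken)
    print("valid JSON")
except json.JSONDecodeError as e:
    print(f"syntax error at line {e.lineno}, column {e.colno}: {e.msg}")
```

Running `python3 -m json.tool global_config.json` from a shell performs the same check on the actual file.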

  • Like 1

Thanks, will add it and repost. I should have tested it before *cough*

 

Also, I missed that VS Code highlighted the block as incomplete... 

 

The repost is attached. 

 

In case I miss out on those things: feel free to post the zip here, no need to wait for me.

 

Update: deleted the attachment again -> look for 0.7, which has a "clean" action now to clean up old images. 

 

Edited by haydibe
  • Like 2
  • Thanks 2

As in create a new image? Since the version information is embedded into the image at build time, I am afraid it needs to be rebuilt. 

If something is wrong, this makes it easier to pinpoint, and thus to maintain. 

 

Though, feel free to modify the script to support your needs. Pass TARGET_PLATFORM, TARGET_VERSION and TARGET_REVISION as --env into the container when it is created in the runContainer function and you should be good. This is off the top of my head and might miss other required variables, but they would follow the same principle: identify the variable and just pass it in as --env.
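The suggestion above can be sketched as follows. The function name and the way the argument list is assembled are illustrative only and do not mirror the actual script:

```python
def build_run_args(image, target_platform, target_version, target_revision):
    """Assemble a `docker run` argument list that forwards the build
    variables into the container as environment variables."""
    args = ["docker", "run", "--rm"]
    for name, value in [
        ("TARGET_PLATFORM", target_platform),
        ("TARGET_VERSION", target_version),
        ("TARGET_REVISION", target_revision),
    ]:
        args += ["--env", f"{name}={value}"]
    args.append(image)
    return args

print(" ".join(build_run_args("redpill-tool-chain", "bromolow", "7.0.1", "42214")))
# -> docker run --rm --env TARGET_PLATFORM=bromolow --env TARGET_VERSION=7.0.1 --env TARGET_REVISION=42214 redpill-tool-chain
```

Anything the build script reads from its environment can be passed this way; each extra variable is just another `--env NAME=value` pair before the image name.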

 

Update: apparently it's more complicated than that. Each image is for a dedicated platform version and set of configured git repositories. 

 

Edited by haydibe

After using the 7.0.1 RC (918) on Proxmox, I got quite a lot of call traces in the log, and the system is very slow. Is anyone experiencing the same?

[ 2160.666146] INFO: task kworker/u8:4:14489 blocked for more than 120 seconds.
[ 2160.668620]       Tainted: P           OE   4.4.180+ #42214
[ 2160.670135] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2160.671611] kworker/u8:4    D ffff8802613a3850     0 14489      2 0x00000000
[ 2160.672895] Workqueue: writeback wb_workfn (flush-btrfs-2)
[ 2160.674311]  ffff8802613a3850 00000000812c380a ffffffff818114c0 ffff8800367b6600
[ 2160.675579]  0000000000000000 ffff88027dc16300 7fffffffffffffff ffffffff81576880
[ 2160.676493]  ffff8802613a3970 ffff8802613a3860 ffffffff81576126 ffff8802613a38d8
[ 2160.677916] Call Trace:
[ 2160.678311]  [<ffffffff81576880>] ? bit_wait+0x60/0x60
[ 2160.679061]  [<ffffffff81576126>] schedule+0x26/0x70
[ 2160.679679]  [<ffffffff81578cfe>] schedule_timeout+0x16e/0x280
[ 2160.680306]  [<ffffffff810b836a>] ? ktime_get+0x3a/0xa0
[ 2160.682031]  [<ffffffff81576880>] ? bit_wait+0x60/0x60
[ 2160.682702]  [<ffffffff81575871>] io_schedule_timeout+0xa1/0x110
[ 2160.683374]  [<ffffffff81576896>] bit_wait_io+0x16/0x60
[ 2160.683942]  [<ffffffff8157663e>] __wait_on_bit_lock+0x4e/0xd0
[ 2160.684637]  [<ffffffff8113124b>] __lock_page+0xab/0xb0
[ 2160.685314]  [<ffffffff8108e850>] ? prepare_to_wait_event+0x100/0x100
[ 2160.685922]  [<ffffffffa08c150d>] extent_write_cache_pages.isra.46.constprop.68+0x34d/0x480 [btrfs]
[ 2160.686800]  [<ffffffff8108e351>] ? __wake_up+0x41/0x50
[ 2160.687517]  [<ffffffff81082c15>] ? update_curr+0xa5/0x130
[ 2160.688139]  [<ffffffffa08c28bb>] extent_writepages+0x4b/0x60 [btrfs]
[ 2160.688829]  [<ffffffffa08a00b0>] ? btrfs_submit_direct+0x940/0x940 [btrfs]
[ 2160.689620]  [<ffffffffa089c8b6>] btrfs_writepages+0x26/0x30 [btrfs]
[ 2160.690432]  [<ffffffff8113ea7b>] do_writepages+0x2b/0x80
[ 2160.690986]  [<ffffffffa08bd876>] ? clear_state_bit+0x156/0x1e0 [btrfs]
[ 2160.691691]  [<ffffffff811c34aa>] __writeback_single_inode+0x4a/0x380
[ 2160.692445]  [<ffffffff811c3c69>] writeback_sb_inodes+0x1b9/0x530
[ 2160.693183]  [<ffffffff811c4044>] __writeback_inodes_wb+0x64/0xb0
[ 2160.693901]  [<ffffffff811c434a>] wb_writeback+0x22a/0x310
[ 2160.694571]  [<ffffffff811c49f2>] wb_workfn+0x162/0x360
[ 2160.695197]  [<ffffffff81073ceb>] worker_run_work+0x9b/0xe0
[ 2160.695860]  [<ffffffff811c4890>] ? inode_wait_for_writeback+0x30/0x30
[ 2160.696723]  [<ffffffff8106b2e3>] process_one_work+0x1e3/0x4f0
[ 2160.697559]  [<ffffffff8106b61e>] worker_thread+0x2e/0x4b0
[ 2160.698251]  [<ffffffff8106b5f0>] ? process_one_work+0x4f0/0x4f0
[ 2160.698850]  [<ffffffff810700f5>] kthread+0xd5/0xf0
[ 2160.699369]  [<ffffffff81070020>] ? kthread_worker_fn+0x160/0x160
[ 2160.699981]  [<ffffffff81579fef>] ret_from_fork+0x3f/0x80
[ 2160.700547]  [<ffffffff81070020>] ? kthread_worker_fn+0x160/0x160
[ 2160.701382] INFO: task SYNO.API.Auth.T:16585 blocked for more than 120 seconds.
[ 2160.702204]       Tainted: P           OE   4.4.180+ #42214
[ 2160.702807] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2160.703679] SYNO.API.Auth.T D ffff880235643c50     0 16585  10537 0x00000000
[ 2160.704476]  ffff880235643c50 000000008182ed60 ffff880274970000 ffff8802643b8cc0
[ 2160.705284]  ffff88025e54d760 ffff8802643b8cc0 ffff88025e54d764 00000000ffffffff
[ 2160.706104]  ffff88025e54d768 ffff880235643c60 ffffffff81576126 ffff880235643c70
[ 2160.706929] Call Trace:
[ 2160.707196]  [<ffffffff81576126>] schedule+0x26/0x70
[ 2160.707682]  [<ffffffff815763b9>] schedule_preempt_disabled+0x9/0x10
[ 2160.708367]  [<ffffffff81577b6c>] __mutex_lock_slowpath+0x8c/0x100
[ 2160.709101]  [<ffffffff8119add9>] ? lookup_fast+0xc9/0x320
[ 2160.709633]  [<ffffffff81577bf2>] mutex_lock+0x12/0x30
[ 2160.710157]  [<ffffffff8119c8fb>] walk_component+0x21b/0x330
[ 2160.710773]  [<ffffffff8119db24>] path_lookupat+0xb4/0x230
[ 2160.711384]  [<ffffffff811a2d0a>] filename_lookup+0x9a/0x100
[ 2160.711974]  [<ffffffff8117ca4d>] ? kmem_cache_alloc_trace+0x13d/0x150
[ 2160.712665]  [<ffffffff811a2a23>] ? getname_flags+0x53/0x190
[ 2160.713253]  [<ffffffff811a2e23>] user_path_at_empty+0x33/0x40
[ 2160.713815]  [<ffffffff8118b0ff>] SyS_access+0x8f/0x2a0
[ 2160.714307]  [<ffffffff81579c4a>] entry_SYSCALL_64_fastpath+0x1e/0x8e
[ 2160.714866] INFO: task synoappnotify:16640 blocked for more than 120 seconds.
[ 2160.715496]       Tainted: P           OE   4.4.180+ #42214
[ 2160.715977] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2160.716882] synoappnotify   D ffff880035693c60     0 16640      1 0x00000004
[ 2160.717741]  ffff880035693c60 000000008119a7e5 ffffffff818114c0 ffff8802732ccc80
[ 2160.718576]  ffff88025e54d760 ffff8802732ccc80 ffff88025e54d764 00000000ffffffff
[ 2160.719414]  ffff88025e54d768 ffff880035693c70 ffffffff81576126 ffff880035693c80
[ 2160.720268] Call Trace:
[ 2160.720550]  [<ffffffff81576126>] schedule+0x26/0x70
[ 2160.721168]  [<ffffffff815763b9>] schedule_preempt_disabled+0x9/0x10
[ 2160.721904]  [<ffffffff81577b6c>] __mutex_lock_slowpath+0x8c/0x100
[ 2160.722627]  [<ffffffff81577bf2>] mutex_lock+0x12/0x30
[ 2160.723246]  [<ffffffff811a0614>] path_openat+0x444/0x1a40
[ 2160.723882]  [<ffffffff811e458a>] ? flock_lock_inode+0xda/0x280
[ 2160.724579]  [<ffffffff811e47ee>] ? locks_remove_flock+0xbe/0xc0
[ 2160.725298]  [<ffffffff811a390e>] do_filp_open+0x7e/0xc0
[ 2160.725897]  [<ffffffff811ab2b1>] ? dput.part.25+0x91/0x200
[ 2160.726549]  [<ffffffff811b1c6b>] ? __alloc_fd+0x3b/0x170
[ 2160.727187]  [<ffffffff81189b4f>] do_sys_open+0x1af/0x240
[ 2160.727715]  [<ffffffff8118bf3f>] SyS_openat+0xf/0x20
[ 2160.728242]  [<ffffffff81579c4a>] entry_SYSCALL_64_fastpath+0x1e/0x8e
[ 2160.728924] INFO: task SYNO.API.Auth.T:19202 blocked for more than 120 seconds.
[ 2160.729745]       Tainted: P           OE   4.4.180+ #42214
[ 2160.730348] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
[ 2160.731201] SYNO.API.Auth.T D ffff880231103c50     0 19202  10537 0x00000000
[ 2160.731980]  ffff880231103c50 000000008182ed60 ffffffff818114c0 ffff88026c4f3300
[ 2160.732827]  ffff88025e54d760 ffff88026c4f3300 ffff88025e54d764 00000000ffffffff
[ 2160.733753]  ffff88025e54d768 ffff880231103c60 ffffffff81576126 ffff880231103c70
[ 2160.734605] Call Trace:
[ 2160.734819]  [<ffffffff81576126>] schedule+0x26/0x70
[ 2160.735273]  [<ffffffff815763b9>] schedule_preempt_disabled+0x9/0x10
[ 2160.735812]  [<ffffffff81577b6c>] __mutex_lock_slowpath+0x8c/0x100
[ 2160.736580]  [<ffffffff8119add9>] ? lookup_fast+0xc9/0x320
[ 2160.737218]  [<ffffffff81577bf2>] mutex_lock+0x12/0x30
[ 2160.737791]  [<ffffffff8119c8fb>] walk_component+0x21b/0x330
[ 2160.738406]  [<ffffffff8119db24>] path_lookupat+0xb4/0x230
[ 2160.738980]  [<ffffffff811a2d0a>] filename_lookup+0x9a/0x100
[ 2160.739615]  [<ffffffff8117ca4d>] ? kmem_cache_alloc_trace+0x13d/0x150
[ 2160.740399]  [<ffffffff811a2a23>] ? getname_flags+0x53/0x190
[ 2160.741083]  [<ffffffff811a2e23>] user_path_at_empty+0x33/0x40
[ 2160.741734]  [<ffffffff8118b0ff>] SyS_access+0x8f/0x2a0
[ 2160.742330]  [<ffffffff81579c4a>] entry_SYSCALL_64_fastpath+0x1e/0x8e
[ 2180.147079] <redpill/pmu_shim.c:324> Got 1 bytes from PMU: reason=1 hex={2d} ascii="-"
[ 2180.297936] <redpill/pmu_shim.c:324> Got 1 bytes from PMU: reason=1 hex={75} ascii="u"
[ 2180.298917] <redpill/pmu_shim.c:253> Executing cmd OUT_FAN_HEALTH_ON handler cmd_shim_noop+0x0/0x2d [redpill]
[ 2180.298917] <redpill/pmu_shim.c:42> vPMU received OUT_FAN_HEALTH_ON using 1 bytes - NOOP

 

Edited by mcdull
3 hours ago, D.S said:

You can use OSFMount to mount the image and then change the grub.cfg

Thank you for your responses!
As an old XPE user, I certainly tried OSFMount, but it refused to open the new bootloader image.
As I later understood, it turned out to be due to the old version bundled with Xpenology_Tool_V141.
The current version of OSFMount opened the new bootloader IMG file correctly.
I confirm that it works on AsRock 4105. Everything works well, including hardware transcoding, VMM, Active Backup for Business and QC.
At the moment I only have a problem with face recognition (valid SN & MAC).
Many thanks to the community for DSM7!


Updated the toolchain image builder to 0.7

 

Changes:

- Added DSM7.0.1 support (done in 0.6.2)

- Added `clean` action that either deletes old images for a specific platform version, or for all platforms if `all` is used instead.

- Added the label `redpill-tool-chain` to the Dockerfile; it will be embedded in newly created images and allows the clean action to filter for those images.

 

The clean action will only work for images built from 0.7 onward. It will not clean up images created by previous versions of the script.

Use something like `docker image ls --filter dangling=true | xargs docker image rm` or `docker image prune` to get rid of previously existing images (note that this might delete all unused images, not just the redpill-tool-chain ones).

 

See README.md for usage.

 

 

redpill-tool-chain_x86_64_v0.7.zip

Edited by haydibe
  • Thanks 7

This is my first post on the forum, so it would be appropriate to say hello. Hello, XPenology!

Great job, RedPill! I was able to get DSM 7.0 running on an HP Microserver Gen8 under ESXi. The only problem was with the disk, but the solution from https://github.com/RedPill-TTG/redpill-lkm/issues/14 helped. The only thing left is the pesky message that the disk needs attention; does anyone have a solution for that?

5 hours ago, T-REX-XP said:

Is it possible to add UEFI support to the tool chain?

Actually this is nothing the tool chain does; it is something that needs to be done in redpill-load. The DSM 7.0.1 versions point to repos from jumkey and chchia that actually seem to have added UEFI support. 

54 minutes ago, haydibe said:

Actually this is nothing the tool chain does; it is something that needs to be done in redpill-load. The DSM 7.0.1 versions point to repos from jumkey and chchia that actually seem to have added UEFI support. 

Yes, UEFI is ready. Keep in mind that the 7.0.1 synoinfo adds some new update URLs that have not yet been changed to https://example.com/

Edited by jumkey

Further findings in my case of "We have detected errors on HDD" with HP EliteBook 840 G2:

 

On Jun's 6.2.1 bootloader, the disks are listed in DSM as Disk 2 and Disk 4. Both of them are detected without an issue and everything works, unlike under redpill.

Edited by maxhartung
8 hours ago, mcdull said:

After using the 7.0.1 RC (918) on Proxmox, I got quite a lot of call traces in the log, and the system is very slow. Is anyone experiencing the same?


[kernel call trace snipped; identical to the log in the post above]

 

 

I don't have this problem. Mind sharing your Proxmox setup?

9 hours ago, haydibe said:

Updated the toolchain image builder to 0.7

 

Changes:

- Added DSM7.0.1 support (done in 0.6.2)

- Added `clean` action that either deletes old images for a specific platform version, or for all platforms if `all` is used instead.

- Added the label `redpill-tool-chain` to the Dockerfile; it will be embedded in newly created images and allows the clean action to filter for those images.

 

The clean action will only work for images built from 0.7 onward. It will not clean up images created by previous versions of the script.

Use something like `docker image ls --filter dangling=true | xargs docker image rm` or `docker image prune` to get rid of previously existing images (note that this might delete all unused images, not just the redpill-tool-chain ones).

 

See README.md for usage.

 

 

redpill-tool-chain_x86_64_v0.7.zip 7.73 kB · 74 downloads

thanks haydibe! 

Has anybody gotten 3615xs 7.0.1 working? Build and boot are OK, but I can't find the IP for install. 918+ is fine.

