RedPill - the new loader for 6.2.4 - Discussion



I started the latest docker image and it works great as usual :-)

 

Wondering why I get this warning:

 

/opt/redpill-lkm/config/runtime_config.c:168:47: warning: passing argument 2 of 'validate_nets' discards 'const' qualifier from pointer target type
     valid &= validate_nets(config->netif_num, config->macs);
                                               ^
/opt/redpill-lkm/config/runtime_config.c:80:20: note: expected 'char (**)[13]' but argument is of type 'char (* const*)[13]'
 static inline bool validate_nets(const unsigned short if_num, mac_address *macs[MAX_NET_IFACES])
                    ^

 

for the following user_config (values obfuscated of course, but I kept their lengths :-) )

 

Using user_config.json:
{
  "extra_cmdline": {
    "pid": "XXXXXX",
    "vid": "XXXXXX",
    "sn": "-",
    "mac1": "000000000000",
    "mac2": "000000000000"
  },
  "synoinfo": {},
  "ramdisk_copy": {}
}
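Since a truncated MAC or a stray character in user_config.json only surfaces later in the build, a quick pre-flight check can help; a sketch assuming `jq` is installed (the path and values below are illustrative, not the real config):

```shell
# Write a sample config, then verify it parses and that mac1/mac2 are
# exactly 12 hex digits (requires jq).
cat > /tmp/user_config.json <<'EOF'
{
  "extra_cmdline": {
    "pid": "0x0001",
    "vid": "0x0002",
    "sn": "-",
    "mac1": "000000000000",
    "mac2": "000000000000"
  },
  "synoinfo": {},
  "ramdisk_copy": {}
}
EOF
jq -e '.extra_cmdline | [.mac1, .mac2] | map(test("^[0-9a-fA-F]{12}$")) | all' \
    /tmp/user_config.json && echo "user_config.json looks sane"
```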

 

Edited by titoum
3 minutes ago, pocopico said:

@haydibehi, thanks for your work, it simplifies things a lot. Current version fails in apt... any ideas?


I have the same issue with VMware Player. I did a restart and it went through like a breeze... Ubuntu 20.

Edited by titoum
spelling

The Dockerfile didn't change in a way that would cause this behavior. I assume it was a docker hiccup. Though, I can confirm that packages are downloaded bloody slowly from the Debian apt repos this morning. I was busy in the kitchen and didn't notice it directly.

 

I have been building images all morning, because I want to generate images with a populated build cache to see if my latest modification works.

I forgot to cover cleaning the build cache in the clean action. Will release it as soon as I'm finished with testing.

Edited by haydibe

Toolchain builder updated to 0.7.1

 

Changes:

- `clean` now cleans the build cache as well and shows before/after statistics of the disk consumption docker has on your system.

 

Bear in mind that cleaning the build cache will make the next build take longer, as it needs to build up the cache again.

 

See README.md for instructions.

 

Update: removed attachment, please use download 0.7.2 instead, which cleans the build cache as well. 

 

Edited by haydibe
5 minutes ago, haydibe said:

Toolchain builder updated to 0.7.1

 

Changes:

- `clean` now cleans the build cache as well and shows before/after statistics of the disk consumption docker has on your system.

 

Bear in mind that cleaning the build cache will make the next build take longer, as it needs to build up the cache again.

 

See README.md for instructions.

redpill-tool-chain_x86_64_v0.7.1.zip 7 kB · 0 downloads

Thanks for your work!

 

Maybe add clearing the cache on every build by default?

Edited by Amoureux
18 minutes ago, Amoureux said:

Maybe add clearing the cache on every build by default?

Thought about it, but it's a rather expensive operation because no build would be able to leverage any sort of build cache. I am not sure how I feel about that :)

 

Update: I hear you. Now there is a setting in `global_settings.json` that allows you to enable auto clean, which does what you asked for. It is set to "false" by default and needs to be set to "true" in order to be enabled.

 

 

redpill-tool-chain_x86_64_v0.7.2.zip

Edited by haydibe

Hello to all,

   I am trying to test the loader on my current hardware and I am stuck at a stupid step: my system doesn't find the USB key to boot from. With the previous setup (Jun's loader, 3615xs model, DSM 6.2.3) it was working well.

 

My hardware is :

Gen8 HP MicroServer with a Xeon E3-1220L CPU upgrade

6 GB of RAM

 

I am using the latest env (redpill-tool-chain_x86_64_v0.7.2) to build the image on macOS Big Sur, with Docker installed.

 

My bromolow_user_config.json content is (xx and yy replace real values; pid/vid are the ones from my USB key):

{
  "extra_cmdline": {
    "pid": "0xFFF7",
    "vid": "0x203A",
    "sn": "yyyyyyyyyyyyy",
    "mac1": "xxxxxxxxxxxx"
  },
  "synoinfo": {},
  "ramdisk_copy": {}
}

 

Commands to build the image:

cd ~/Desktop/redpill-tool-chain_x86_64_v0.7.2
sudo chmod 777 redpill_tool_chain.sh

sh ./redpill_tool_chain.sh build bromolow-7.0.1-42214
sh ./redpill_tool_chain.sh auto bromolow-7.0.1-42214
 

The image is written on USB key with BalenaEtcher

 

Am I missing something?

 

Help will be appreciated.

 

Thanks !

 

Rikk

 

 

On 9/3/2021 at 5:00 PM, tocinillo2 said:

Just for testing, but I could upgrade to 6.2.4 on one of my J4125 baremetal NAS boxes and log in without problems (latest redpill bootloader with UEFI).

 

But when I try to upgrade to 7.0, I have the same problem as @maxhartung. I tried SataPortMap with the same results:

 

Tried: DiskIdxMap=00 SataPortMap=4  -> We have detected errors on the hard drives 1,3,4

Tried: DiskIdxMap=00 SataPortMap=3  -> We have detected errors on the hard drives 1,3

Tried: DiskIdxMap=00 SataPortMap=2 -> We have detected errors on the hard drives 1

Tried: DiskIdxMap=00 SataPortMap=1 -> No drives detected

Tried: DiskIdxMap=00 SataPortMap=0 -> not loading/no network

 

Tried: DiskIdxMap=00 SataPortMap=1 SasIdxMap=0 -> No drives detected

Tried: DiskIdxMap=00 SataPortMap=2 SasIdxMap=0 -> We have detected errors on the hard drives 1

 

Tried: DiskIdxMap=01 SataPortMap=1 -> No drives detected

Tried: DiskIdxMap=01 SataPortMap=2 -> We have detected errors on the hard drives 2

 

The motherboard has 2 SATA ports; I added 2 PCIe 4-port SATA cards (total = 10 SATA ports) that work fine with 6.2.4...

@tocinillo2 Try DiskIdxMap=000307 SataPortMap=244 SasIdxMap=0
or
DiskIdxMap=00 SataPortMap=244 SasIdxMap=0
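For anyone puzzling over these values: the community reading is that each digit of `SataPortMap` gives the number of ports on the Nth SATA controller, and each two-hex-digit pair of `DiskIdxMap` gives the zero-based slot index where that controller's drives start. A small bash illustration of that interpretation (not actual loader code; real behavior ultimately depends on the kernel module):

```shell
# Decode SataPortMap/DiskIdxMap the way the community interprets them:
# one SataPortMap digit per controller, one DiskIdxMap hex byte per controller.
sata_port_map="244"
disk_idx_map="000307"

for i in $(seq 0 $(( ${#sata_port_map} - 1 ))); do
    ports=${sata_port_map:$i:1}                    # ports on controller i+1
    idx=$(( 16#${disk_idx_map:$(( i * 2 )):2} ))   # first slot (zero-based)
    echo "controller $(( i + 1 )): ${ports} port(s), drives start at slot $(( idx + 1 ))"
done
```

Under that reading, `DiskIdxMap=000307 SataPortMap=244` puts the two onboard ports at slots 1-2 and gives each 4-port card its own contiguous range, which is why that combination fits a 2+4+4 layout.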

Edited by jforts
22 minutes ago, jforts said:

@tocinillo2 Try DiskIdxMap=000307 SataPortMap=244 SasIdxMap=0
or
DiskIdxMap=00 SataPortMap=244 SasIdxMap=0

 

Don't worry. In my real NAS I can upgrade (yes, upgrade, keeping all my files, config, apps, scripts...) perfectly. First upgrade to 6.2.4 and then to 7.0.1. I compiled the img manually (uploaded here, a few posts ago).

 

Baremetal J4125M with 10 SATA ports (2 native + 2 PCIe 4-port SATA cards).

 

QZ6x5lG.jpg

Edited by tocinillo2
25 minutes ago, tocinillo2 said:

 

Don't worry. In my real NAS I can upgrade (yes, upgrade, keeping all my files, config, apps, scripts...) perfectly. First upgrade to 6.2.4 and then to 7.0.1. I compiled the img manually (uploaded here, a few posts ago).

 

Baremetal J4125M with 10 SATA ports (2 native + 2 PCIe 4-port SATA cards).

 

QZ6x5lG.jpg

 

Hello tocinillo2,

 

When you said that you "upgraded" your system from DSM 6.2.3 to 6.2.4 and then 7.0.1, did you use your 7.0.1 image, or is it necessary to use the first 6.2.4 image you posted?

 

Moreover, I installed in my ASRock J4125-ITX an M.2-to-SATA adapter (JMB58x AHCI SATA controller) in order to use two 256 GB SSDs for the caching function of the DS918+: do you think this controller will be recognized?

 

One last question: I need to update the PID/VID of my USB stick so it is recognized during the install. What about the serial number and the MAC address you put in your grub.cfg file? Are they mandatory for the install? (I know the serial number is necessary for DS Video to use hardware transcoding, but what about the MAC address?)

 

Thank you very much for your answers.

8 hours ago, haydibe said:

Thought about it, but it's a rather expensive operation because no build would be able to leverage any sort of build cache. I am not sure how I feel about that :)

 

Update: I hear you. Now there is a setting in `global_settings.json` that allows you to enable auto clean, which does what you asked for. It is set to "false" by default and needs to be set to "true" in order to be enabled.

 

 

redpill-tool-chain_x86_64_v0.7.2.zip 7.88 kB · 82 downloads

Thank you very much for your work, everything works flawlessly :)

 

One thing you could maybe add is a check on the toolchain download: on my first attempt the download wasn't complete and the build script failed.

Maybe you should add a hash check before the building process?

1 hour ago, john_matrix said:

Maybe you should add a hash check before the building process?

Good idea. Next time I am in the mood, I will check how ttg did it in rp-load and take it as inspiration. It is easiest to implement directly after the download, but it would be safer directly before the image is built. I am curious how ttg solved it in rp-load.
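Until that exists in the toolchain, a small wrapper along these lines could guard the build; `toolchain.txz` and the expected-hash value here are placeholders for illustration, not the actual file names the builder uses:

```shell
# Compare a downloaded file's SHA-256 against a known value before building.
verify_download() {
    file="$1"
    expected="$2"
    actual=$(sha256sum "$file" | awk '{print $1}')
    if [ "$actual" = "$expected" ]; then
        echo "checksum OK: $file"
    else
        echo "checksum MISMATCH for $file (got $actual)" >&2
        return 1
    fi
}

# Demo: check a dummy file against its own freshly computed hash.
printf 'demo' > /tmp/toolchain.txz
expected=$(sha256sum /tmp/toolchain.txz | awk '{print $1}')
verify_download /tmp/toolchain.txz "$expected"
```

Running the check right before the image is assembled (rather than only right after the download) would also catch archives that got corrupted on disk in the meantime.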


My Proxmox install seems broken for some reason:

 

I uploaded the img file via the web browser (local home -> ISO Images)

 

/var/lib/vz/template/iso/image.img

 

But when I try to start the VM:

 

kvm:  -device qemu-xhci,addr=0x18 -drive id=synoboot,file=/var/lib/vz/template/iso/image.img,if=none,format=raw -device usb-storage,id=synoboot,drive=synoboot,bootindex=5:

 

Could not open ' -device qemu-xhci,addr=0x18 -drive id=synoboot,file=/var/lib/vz/template/iso/image.img,if=none,format=raw -device usb-storage,id=synoboot,drive=synoboot,bootindex=5': No such file or directory

 

I checked the file via shell and it's there.

 

This is the args in the conf:

 

args: -device 'qemu-xhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/template/iso/image.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot,bootindex=5'

 

root@home:~# ls /var/lib/vz/template/iso/
image.img
root@home:~# 

 

Config:

 

args: -device 'qemu-xhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/template/iso/image.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot,bootindex=5'

balloon: 0
boot: order=sata0;net0
cores: 2
machine: q35
memory: 4096
name: DSM
net0: virtio=F2:9D:7D:2E:7E:61,bridge=vmbr0
numa: 0
ostype: l26
sata0: local-lvm:vm-100-disk-0,size=120G
scsihw: virtio-scsi-pci
smbios1: uuid=27d90b08-fa4c-4b43-85fc-f40200747f72
sockets: 1
vmgenid: 6e85c112-295e-4432-b661-7f91af624a80

Edited by maxhartung

I am not sure if it's because of the path you chose or because of the absence of the usb0 device. Apart from my bootloader image being in /var/lib/vz/images/${MACHINEID}/${IMAGENAME}, the args line looks identical.

 

Works like a charm on PVE 7.0.1

Edited by haydibe
6 minutes ago, haydibe said:

I am not sure if it's because of the path you chose or because of the absence of the usb0 device. Apart from my bootloader image being in /var/lib/vz/images/${MACHINEID}/${IMAGENAME}, the args line looks identical.

 

Works like a charm on PVE 7.0.1

 

I created a folder called 100 (the machine id), put the img there and tried again, but I get the same error.

Edited by maxhartung
3 minutes ago, maxhartung said:

 

I created a folder called 100 (the machine id), put the img there and tried again, but I get the same error.

 

Mine looks like this:

Quote

args: -device 'qemu-xhci,addr=0x18' -drive 'id=synoboot,file=/var/lib/vz/images/XXX/redpill.img,if=none,format=raw' -device 'usb-storage,id=synoboot,drive=synoboot,bootindex=5'
boot: order=sata0
cores: 2
machine: q35
memory: 4096
name: DSM
net0: virtio=XX:XX:XX:XX:XX:XX,bridge=vmbr0
onboot: 1
ostype: l26
sata0: local-lvm:vm-XXX-disk-0,discard=on,size=100G,ssd=1
serial0: socket
smbios1: uuid=XXX
usb0: spice,usb3=1
vmgenid: XXX

I crossed out some details with X, but that's the config that works for me without any issues.

24 minutes ago, haydibe said:

 

Mine looks like this:

I crossed out some details with X, but that's the config that works for me without any issues.

 

So I compared the configs with a diff tool but saw nothing syntax-related. I copied your config, replaced the lines, changed the information, and now it works.

 

Is there something to be done for the drive mapping? My drives start at 7.


On Proxmox 7.0.1 it's quite easy to edit grub.cfg without having to rebuild the image or re-upload a modified image.

 

Make sure the DSM VM is shut down before executing the commands, otherwise the image will get damaged!

 

# check if the loopback device is already used
losetup -a

# configure the next free loopback device; change this if losetup -a showed it is already used
_LOOP_DEV=/dev/loop1

# path to your redpill image; change this to reflect your path
_RP_IMG=/var/lib/vz/images/XXX/redpill.img

# attach the image to the loopback device and mount its first partition to /tmp/mount
losetup -P "${_LOOP_DEV}" "${_RP_IMG}"
mkdir -p /tmp/mount
mount "${_LOOP_DEV}p1" /tmp/mount

# edit the file
vi /tmp/mount/boot/grub/grub.cfg

# unmount the partition and detach the image from the loopback device again
umount /tmp/mount
losetup -d "${_LOOP_DEV}"

 

Then restart the VM again.

On 9/4/2021 at 7:53 PM, kennysino said:

 

 

there is a SATA error and I don't know how to fix it

D9D9D9FE864059CFA67FEEFCA9858E76.png

 

12 hours ago, viettanium said:

doesn't work for me!

 

update: after enabling SATA Hot Plug in the BIOS, it's working now!

I had a similar issue, but I found another solution. That post said that if you put "DiskIdxMap=00 SataPortMap=1 SasIdxMap=0" in grub, you can disable the mSATA port (also called sata1), but for a successful boot you also need to install the disk in sata2, which is located in the center of the motherboard. My question: if I don't install disks in mSATA or sata2, how do I disable both mSATA and sata2 in grub?

