RedPill - the new loader for 6.2.4 - Discussion



1 hour ago, Aigor said:

Forgive me if I'm dumb: if I wanted to build a loader for 7.0.1 for an HP Gen8 MicroServer, how should I do it?
Can I perform a direct in-place upgrade without losing config and data?
Should I back up, perform a new installation and restore?
Which process gives the better balance between data integrity and fewer operations?

 

Regardless of what upgrade/migration/fresh build approach you take, you should always take a backup of your data first. The developers do not recommend this release for production use in its current state and you should not risk your data with it.

 

With that in mind...

 

It's not possible to do an "upgrade" from 6.2.3/Jun's loader (which I assume you're on at present) in the sense of invoking an upgrade in the DSM Control Panel. However, most people on here refer to the following procedure as an "upgrade", although it's really a migration:

- take a backup

- shut down your server

- remove Jun's bootloader USB stick

- replace it with a new USB stick with the redpill loader on it, built with your SN, MAC address(es) and VID/PID in the grub.cfg

- boot the server on the redpill stick

- the server should DHCP an address; you can either use the Synology Assistant tool to find it, or look at the DHCP logs on your router to see what IP address it gets

- browse to http://<your_server_ip>:5000 (or https on port 5001); if it's gone well you should see the Synology web page telling you that it's found disks from another Synology server and offering to migrate them for you. Run through the migration process and you should have a 7.x server with all your existing data, settings and packages preserved
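For reference, the serial, MAC(s) and the USB stick's VID/PID are typically supplied to the loader build via a `user_config.json` rather than by editing grub.cfg by hand. A sketch with placeholder values (field names as used by the redpill-load project; double-check against its README):

```json
{
  "extra_cmdline": {
    "vid": "0x058f",
    "pid": "0x6387",
    "sn": "1330LWNXXXXXX",
    "mac1": "0011327CXXXX"
  },
  "synoinfo": {},
  "ramdisk_copy": {}
}
```

`vid`/`pid` must match the USB stick you write the image to, and `mac1` the NIC the serial was generated for.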

 

Did I mention to TAKE A BACKUP first? :D

9 minutes ago, Aigor said:

The worst part of the process 🤣 I have almost 18 TB to back up

 

Of course the alternative is to wait for this to be formally released and don't take the risk with production data. Or live dangerously.

Just now, WiteWulf said:

Of course the alternative is to wait for this to be formally released and don't take the risk with production data. Or live dangerously.

I'll wait for the official release; meanwhile, I'm going to run some tests on VMware on how to migrate

 


The latest 3615 bromolow loader (redpill-tool-chain_x86_64_v0.8.zip) is working on the Gen8 with a 1230v2 CPU and an Intel X520 SFP+ 10 Gbit NIC, on AHCI.

But there is a problem: sometimes within an hour the server loses connectivity to SMB and to the web interface, and in the DSM log there is a message that the server has shut down.

 

How can I check why the server loses its connection for 30 seconds?

 

thank you
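As a starting point for this kind of debugging, the kernel log and the system log usually record why the box went away. A sketch (DSM log paths assumed; a sample log is created here so the commands can be tried anywhere):

```shell
#!/bin/sh
# Where to look when the box drops off the network or DSM claims a shutdown.
# On the real server, check `dmesg | tail -100` and grep /var/log/messages;
# the sample.log below just stands in for /var/log/messages.
printf '%s\n' \
  'Sep 16 09:00:01 nas kernel: BUG: unable to handle kernel NULL pointer dereference' \
  'Sep 16 09:00:02 nas synologd: system shutdown detected' > sample.log

# Look for the usual culprits: kernel panics/oopses and shutdown records
grep -iE 'shutdown|panic|oops|BUG' sample.log
```

A kernel `BUG:`/oops line just before the disconnect would point at a driver or redpill issue rather than a network problem.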

 

 

 

 


I can confirm DS3615xs_7.0.1-42214 working as a VM on ESXi 6.0, using a bootloader compiled recently with @haydibe's great tool 0.7.3.

The VM configuration: the latest virtual hardware available in ESXi 6.0, custom config with Guest OS Family = Other, Guest OS Version = Other (64-bit), 1 socket with 2 CPUs, 2048 MB memory, an E1000e network adapter, 2 added SATA controllers, HDD1 = bootloader on SATA 1:0, HDD2 = OS on SATA 0:0.

In the VM BIOS I changed the boot order to HDD2. In the bootloader menu, select SATA boot. Bootloader cmdline used/changed: syno_hdd_detect=1 DiskIdxMap=0 SataPortMap=12 SasIdxMap=0 HddHotplug=1

 

9 minutes ago, havast said:

Is there any working solution for DS3617xs with DSM 7.0 on ESXi?

If yes, can somebody share with me how to create the loader? I would be grateful :)

 

Thanks

There isn't any DS3617xs bootloader; you'd better use DS3615xs.
https://xpenology.com/forum/applications/core/interface/file/attachment.php?id=12888 read this message 

 

readme.md from the zip file:
 

# Unofficial redpill toolchain image builder
- Creates an OCI container (~= Docker) image-based toolchain.
- Takes care of downloading (and caching) the required sources to compile redpill.ko and the required OS packages that the build process depends on.
- Caches .pat downloads inside the container on the host.
- Configuration is done in the JSON file `global_config.json`; custom <platform_version> entries can be added underneath the `building_configs` block. Make sure the id is unique per block!
- Supports a `user_config.json` per <platform_version>
- Supports binding a local redpill-load folder into the container (set `"docker.local_rp_load_use": "true"` and set `"docker.local_rp_load_path": "path/to/rp-load"`)
- Supports cleaning old image versions and the build cache per <platform_version> or for `all` of them at once.
- Supports auto-cleaning old image versions and the build cache for the current build image; set `"docker.auto_clean"` to `"true"`.
- Allows configuring whether the build cache is used (`"docker.use_build_cache"`)
- Allows specifying whether "clean all" should delete all or only orphaned images.
## Changes
- fixed usage of the label that determines the redpill-tool-chain images for clean-up
- added `"docker.use_build_cache": "false"` to global_settings.json
- added `"docker.clean_images": "all"` to global_settings.json

## Usage

1. edit the `<platform>_user_config.json` that matches your <platform_version> according to https://github.com/RedPill-TTG/redpill-load and place it in the same folder as redpill_tool_chain.sh
2. Build the image for the platform and version you want:
   `./redpill_tool_chain.sh build <platform_version>`
3. Run the image for the platform and version you want:
   `./redpill_tool_chain.sh auto <platform_version>`


You can always use `./redpill_tool_chain.sh run <platform_version>` to get a bash prompt, modify whatever you want and finally execute `make -C /opt/build_all` to build the boot loader image.
After step 3 the redpill load image should be built and can be found in the host folder `images`.

Note1: run `./redpill_tool_chain.sh` to get the list of supported ids for the <platform_version> parameter.
Note2: if `docker.use_local_rp_load` is set to `true`, the auto action will not pull latest redpill-load sources.


Feel free to modify any values in `global_config.json` to suit your needs!

Examples:
### See Help text

```
./redpill_tool_chain.sh
Usage: ./redpill_tool_chain.sh <action> <platform version>

Actions: build, auto, run, clean

- build:    Build the toolchain image for the specified platform version.

- auto:     Starts the toolchain container using the previously built toolchain image for the specified platform.
            Updates redpill sources and builds the bootloader image automatically. Will end the container once done.

- run:      Starts the toolchain container using the previously built toolchain image for the specified platform.
            Interactive Bash terminal.

- clean:    Removes old (=dangling) images and the build cache for a platform version.
            Use `all` as platform version to remove images and build caches for all platform versions. `"docker.clean_images"`="all" only takes effect with `clean all`.

Available platform versions:
---------------------
bromolow-6.2.4-25556
bromolow-7.0-41222
bromolow-7.0.1-42214
apollolake-6.2.4-25556
apollolake-7.0-41890
apollolake-7.0.1-42214
```

### Build toolchain image

For Bromolow 6.2.4   : `./redpill_tool_chain.sh build bromolow-6.2.4-25556`
For Bromolow 7.0     : `./redpill_tool_chain.sh build bromolow-7.0-41222`
For Bromolow 7.0.1   : `./redpill_tool_chain.sh build bromolow-7.0.1-42214`
For Apollolake 6.2.4 : `./redpill_tool_chain.sh build apollolake-6.2.4-25556`
For Apollolake 7.0   : `./redpill_tool_chain.sh build apollolake-7.0-41890`
For Apollolake 7.0.1 : `./redpill_tool_chain.sh build apollolake-7.0.1-42214`

### Create redpill bootloader image

For Bromolow 6.2.4   : `./redpill_tool_chain.sh auto bromolow-6.2.4-25556`
For Bromolow 7.0     : `./redpill_tool_chain.sh auto bromolow-7.0-41222`
For Bromolow 7.0.1   : `./redpill_tool_chain.sh auto bromolow-7.0.1-42214`
For Apollolake 6.2.4 : `./redpill_tool_chain.sh auto apollolake-6.2.4-25556`
For Apollolake 7.0   : `./redpill_tool_chain.sh auto apollolake-7.0-41890`
For Apollolake 7.0.1 : `./redpill_tool_chain.sh auto apollolake-7.0.1-42214`

### Clean old redpill bootloader images and build cache

For Bromolow 6.2.4   : `./redpill_tool_chain.sh clean bromolow-6.2.4-25556`
For Bromolow 7.0     : `./redpill_tool_chain.sh clean bromolow-7.0-41222`
For Bromolow 7.0.1   : `./redpill_tool_chain.sh clean bromolow-7.0.1-42214`
For Apollolake 6.2.4 : `./redpill_tool_chain.sh clean apollolake-6.2.4-25556`
For Apollolake 7.0   : `./redpill_tool_chain.sh clean apollolake-7.0-41890`
For Apollolake 7.0.1 : `./redpill_tool_chain.sh clean apollolake-7.0.1-42214`
For all              : `./redpill_tool_chain.sh clean all`

 

Edited by Aigor
11 hours ago, haydibe said:

I assume you mean that the toolchain docker image you wanted to build failed to be created? At least that's what your screenshot indicates.

 

Though, the error message does not help to pinpoint the cause. It might be thrown by apt or curl itself. I just checked the jq download URL in a browser: still valid.

 

The "I have no idea" attempt to fix it would be: "clean all" and a restart of the docker service, or of the docker host itself.

 

 

Sorry, I made a mistake. The problem was solved after reinstalling docker.

 

thank you again for your help


Heya Redpill Community,

 

I am following your progress and this is truly amazing. I am really excited to test DSM 7+ on my Gen8 running ESXi 7.0.2 for now.

Unfortunately, I don't have enough skills to compile a loader within docker, as I don't clearly understand the entire process (a shame for an IT guy), so I will wait until the first release arrives.

 

However, I have one question, because this is not clear to me for now.

Reading earlier comments, I saw that the onboard NICs on the MicroServer Gen8 are not supported for bare-metal installs, but would it work with the vmxnet3.ko driver under ESXi?

Or does the hardware incompatibility also apply to ESXi?

 

I currently have a P222 controller in the PCIe port... I will have to choose whether I want the network card or the RAID controller...

 

Thanks for your answer.

 

Edited by spikexp31
40 minutes ago, spikexp31 said:

However, I would have one question because that is not clear to me for now.

Reading earlier comments, I saw that the onboard NICs on the MicroServer Gen8 are not supported for bare-metal installs, but would it work with the vmxnet3.ko driver under ESXi?

Or does the hardware incompatibility also apply to ESXi?

 

I currently have a P222 controller in the PCIe port... I will have to choose whether I want the network card or the RAID controller...

 

A few people have managed to get the tg3 driver (for the onboard Broadcom NIC) added to the boot image on a Gen8, baremetal. Read back the last page or two looking for posts by scoobdriver and pocopico.

Edited by WiteWulf
40 minutes ago, spikexp31 said:

I saw that the onboard NICs on the MicroServer Gen8 are not supported for bare-metal installs, but would it work with the vmxnet3.ko driver under ESXi?

If you're using ESXi, you just use e1000e as the NIC type; no need to change the network card or use other drivers.

That's the advantage of running as a VM.


For some reason, since updating to the latest redpill version (Monday 13th of September), I can no longer mount /dev/synoboot1 on my running server. I'm absolutely certain I did this recently to edit the grub.cfg and update the MAC addresses in there, but I get the following now:

bash-4.4# mount /dev/synoboot1 /mnt/synoboot1
mount: /mnt/synoboot1: wrong fs type, bad option, bad superblock on /dev/synoboot1, missing codepage or helper program, or other error.

 

It's not the end of the world, but it was handy being able to edit the bootloader stick when it was still plugged in inside the server and just reboot it for changes, rather than having to take it out and edit it in another machine.

 

Am I doing something wrong?

 

FWIW, fdisk still thinks the device is valid:

bash-4.4# fdisk /dev/synoboot

Welcome to fdisk (util-linux 2.33.2).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/synoboot: 14.9 GiB, 16018046976 bytes, 31285248 sectors
Disk model: USB Flash Drive         
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0xf110ee87

Device         Boot Start   End Sectors Size Id Type
/dev/synoboot1 *      2048 100351   98304  48M 83 Linux
/dev/synoboot2      100352 253951  153600  75M 83 Linux
/dev/synoboot3      253952 262143    8192   4M 83 Linux
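When mount refuses a device with "wrong fs type", it can help to check what is actually on the partition before guessing at `-t` options. On the NAS that would be `file -s /dev/synoboot1` (or `blkid`, if present); demonstrated here on an ordinary file so it runs anywhere:

```shell
#!/bin/sh
# `file` identifies content by magic bytes, so it can tell you what filesystem
# (or archive format) a block device actually carries. Demo on a plain file:
printf 'hello' | gzip > demo.bin
file demo.bin
```

If the partition reports as vfat, the `-t vfat` suggestion below is the right direction; if it reports as something else entirely, the stick layout has changed.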

 

56 minutes ago, scoobdriver said:

Try mount -t vfat ……

That was almost it!

 

mount -t vfat /dev/synoboot1 /mnt/synoboot1

 

...would fail with the same error.

 

But:

cd /dev
mount -t vfat synoboot1 /mnt/synoboot1

 

..works!

 

You have to be in /dev for it to work 🙄

 

Found it here after Googling. Which, of course, links back to a post on this forum :D

 

This is dumb; I've never seen this sort of behaviour in decades of using POSIX systems.

Edited by WiteWulf

Works on an HP MicroServer Gen8, baremetal, with the onboard NIC

bromolow-7.0.1-42214

 

I used the tg3.ko from

Extract the rd.gz, put the *.ko files in /usr/lib/modules

Add insmod lines in linuxrc.syno.impl like this:

 

    insmod /lib/modules/libphy.ko

    insmod /lib/modules/tg3.ko

Repack rd.gz

And make the first partition active on the SD card

 

Thanks :)

 

Edited by Kouill

Installed bromolow-7.0.1-42214 on my HP N54L.
I was using Jun's Loader 1.03b (DSM 6.2.3) and all HDDs migrated fully with no errors (I chose to keep only my data and drop the previous config).
It works well, but some parts do not work:

1. The Info Center General tab is blank.
2. The USB UPS does not work. I'm using an APC UPS; it worked with 6.2.3.
 

Screen Shot 2021-09-16 at 09.03.25.png

Screen Shot 2021-09-16 at 09.03.48.png

Edited by Ermite
On 9/11/2021 at 7:22 PM, shibby said:

I tried to compile the atlantic.ko module (Aquantia) for the Asus XG-C100C but no luck. Can anyone compile this module for me (DS918+ image)? I'm using the 10GbE network card on 6.2.3, but on 7.0.1 I can't use it anymore.

 

Best Regards.

I just compiled the driver but haven't tested it yet. You may need to load crc-itu-t.ko first.

AQC107.zip


Hello, I added the 10G network card's driver to the boot file and it started smoothly, but when ds918+_42214.pat is installed and the system starts again, the 10G network card driver is lost. How can I make the driver load at startup? Which file do I need to modify?

 

After startup I can add the driver manually through SSH, but I can't do this every time. I want DSM to load the driver automatically when it starts.

 

sudo insmod /usr/lib/modules/ixgbe.ko

systemctl restart rc-network

 

How can I do this?
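The two manual commands above can be wrapped in a boot-time start script. One common approach is a start/stop script under `/usr/local/etc/rc.d/` (that path and its boot-time behaviour are assumptions about DSM, so verify on your build); the sketch writes the script to the current directory first so it can be inspected before copying it over:

```shell
#!/bin/sh
# Generate a start/stop script that loads the NIC driver at boot.
# Module path and restart command are taken from the post above.
cat > ixgbe-load.sh <<'EOF'
#!/bin/sh
case "$1" in
  start)
    insmod /usr/lib/modules/ixgbe.ko
    systemctl restart rc-network
    ;;
  stop)
    ;;
esac
EOF
chmod +x ixgbe-load.sh
# On the NAS, install it with:
#   cp ixgbe-load.sh /usr/local/etc/rc.d/ && chmod 755 /usr/local/etc/rc.d/ixgbe-load.sh
echo "script created"
```

Scripts in that directory are conventionally called with `start` at boot and `stop` at shutdown, which is why the case statement has both branches even though only `start` does anything here.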

On 9/14/2021 at 12:35 PM, pocopico said:

 

Try this and see if it works for you. It's for 3615 v7.0.1. Extract it and rename it to rd.gz on the first partition if you are booting with legacy BIOS, or the second if you are booting with EFI.

 

- I have added inetd starting by default (telnet will work as soon as you get an IP from DHCP, for troubleshooting)

- modules tg3, mptsas, mpt3sas, vmxnet3.ko, libphy.ko

 

Of course there is another way to load extra modules, but I haven't had the time to try it yet.

 

rd.gz.microserver.7z 8.08 MB · 38 downloads

You wouldn't be willing to use your ninja module skills to get my update-bricked Netgear NAS back up?

ICH7R chipset (NM10) Intel, and a Marvell-based NIC. It's a repurposed old Netgear ReadyNAS Ultra 2 RNDU2000 I had running on Jun's loader and updated to 6.2.4 like a noob. If I could get 7.0.1 booting on it and migrate it, that would be great.

Also, this rd.gz successfully booted my IBM x3650, but alas there are no M5110e SAS drivers so it doesn't detect the disks. All 6 NIC ports do work (Intel), though, so thanks for that :)


From haydibe's 0.7.3 Docker tool I created a redpill loader (redpill-DS3615xs_7.0.1-42214_b1631729759.img), bromolow-7.0.1-42214, installed with the PAT file from the compiler itself (ds3615xs_42214.pat), on my HP Gen8 with an Intel X520 SFP+ x2 NIC.

All is working, but when I set up a Docker container, for example JD2, the system shuts down and reboots.

I think this is NIC driver related. How can I compile or add this Linux driver to the loader or to the installed server?

https://ark.intel.com/content/www/us/en/ark/products/39776/intel-ethernet-converged-network-adapter-x520-da2.html

 

thank you for all tips and help

 

EDIT: it seems to be fixed with the file from this thread; thank you to its creator.

 

 

Also tested/compiled the redpill loader redpill-DS3615xs_6.2.4-25556_b1631723311.img with DSM 25556 (pat file ds3615xs_25556.pat); this one is working great and rock solid.

 

 

 

 

rd.gz.microserver.7z

Edited by nemesis122
