XPEnology Community

Automated RedPill Loader (ARPL)


fbelavenuto

Recommended Posts

8 hours ago, Peter Suh said:

There has been no news from fabio, who has been away for over two weeks.
My successor fork of fabio's arpl-modules repo and
M SHELL for TCRP are now ready with r8101 support.

 

https://github.com/PeterSuh-Q3/arpl-modules/tree/main/broadwellnk-4.4.180

 

https://github.com/PeterSuh-Q3/tinycore-redpill/releases/tag/v0.9.4.3-1

 

 

I thought your version fixed the three-drive issue, but connecting a third drive to my SAS card (along with an SSD connected to one of the mobo ports) makes the SAS drives disappear. Any idea what could cause this?

Edited by Black6spdZ

4 hours ago, Black6spdZ said:

 

I thought your version fixed the three-drive issue, but connecting a third drive to my SAS card (along with an SSD connected to one of the mobo ports) makes the SAS drives disappear. Any idea what could cause this?

 

 

Is the built-in SATA used with one SSD, and are you using only two disks on your SAS controller?

Does the mobo have only one SATA controller?

If so, when it is used together with a SAS controller,
configure user_config.json as below.

 

"SasIdMap": "0",
"SataPortMap": "1",
"DiskIdxMap": "02"
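A stray comma or quote in user_config.json will quietly break the next loader build, so it may be worth validating the file after editing. A minimal sketch: the sample below contains only the three mapping keys from this post (a real user_config.json carries additional entries), and the /tmp path is purely for illustration.

```shell
# Write a sample config containing only the three mapping keys above;
# a real user_config.json has more entries around them.
cat > /tmp/user_config_sample.json <<'EOF'
{
  "SasIdMap": "0",
  "SataPortMap": "1",
  "DiskIdxMap": "02"
}
EOF

# Validate the JSON before rebuilding the loader; a syntax error is far
# easier to catch here than after a failed boot.
python3 -m json.tool /tmp/user_config_sample.json >/dev/null && echo "JSON OK"
```

The same check works on the real file once you substitute its actual path.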

Edited by Peter Suh

1 hour ago, Peter Suh said:

 

 

Is the built-in SATA used with one SSD, and are you using only two disks on your SAS controller?

Does the mobo have only one SATA controller?

If so, when it is used together with a SAS controller,
configure user_config.json as below.

 

"SasIdMap": "0",
"SataPortMap": "1",
"DiskIdxMap": "02"

 

I made your suggested changes but still no go. My mainboard has two SATA3 ports, one SATA2 port, and an mSATA slot, yet for some reason the BIOS shows six ports to enable or disable (maybe the mini PCIe slot doubles as a second mSATA slot?). Anyway, my plan is to use one, and maybe both, of the onboard SATA3 ports with SSDs, while the four removable bays are connected to the SAS9308-4i IT-mode controller with four drives attached. I'll also note that it takes almost 10 minutes to reach the DSM login screen when three or four drives are connected to the LSI card; it boots in just a minute or two with one or two drives connected (and detects them).

Edited by Black6spdZ

14 hours ago, bloodilo said:

Wow! Works like a charm! Thanks a lot!

 

Unfortunately I didn't see any tips about addons, as this is my first time using ARPL, so I have no clue about them. Is there a topic I can read about it, or can you tell me which addons are required for my mobo?

They are not required; that's why they are called addons. Your best bet is to check manifest.yml and install.sh here: https://github.com/fbelavenuto/arpl-addons


Yesterday I set up an older HP 800 G2 system (i5-6500, 8GB RAM, 280W 80+ Platinum PSU) as a replacement for my previous TerraMaster F4-220. I decided to go with the DVA3221 model using the latest ARPL and firmware.

Everything works fine so far, but heat and power consumption caught my attention. As far as I can see, the CPU runs at turbo frequency even at 1% load.

 

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_cur_freq

3578198
3491074
3471251
3453339

 

 grep . /sys/devices/system/cpu/cpu0/cpufreq/*

/sys/devices/system/cpu/cpu0/cpufreq/affected_cpus:0
/sys/devices/system/cpu/cpu0/cpufreq/bios_limit:3201000
/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_cur_freq:3201000
/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq:3201000
/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_min_freq:800000
/sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_transition_latency:10000
/sys/devices/system/cpu/cpu0/cpufreq/freqdomain_cpus:0 1 2 3
/sys/devices/system/cpu/cpu0/cpufreq/related_cpus:0
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_frequencies:3201000 3200000 3000000 2900000 2700000 2500000 2300000 2200000 2000000 1800000 1700000 1500000 1300000 1100000 1000000 800000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_available_governors:powersave performance userspace
/sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq:3475888
/sys/devices/system/cpu/cpu0/cpufreq/scaling_driver:acpi-cpufreq
/sys/devices/system/cpu/cpu0/cpufreq/scaling_governor:performance
/sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq:3201000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq:800000
/sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed:<unsupported>
grep: /sys/devices/system/cpu/cpu0/cpufreq/stats: Is a directory

 

 

cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor

performance

 

 

Any idea how this can be changed?
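One possible direction, based on the acpi-cpufreq output above: scaling_governor is set to performance, while powersave is listed in scaling_available_governors, so switching governors via sysfs should let the CPU drop toward 800 MHz at idle. A hedged sketch (untested on DSM; the setting may be reset on reboot, so you would typically reapply it from a boot-time task run as root):

```shell
# Switch every core from the "performance" to the "powersave" governor.
# Assumes the acpi-cpufreq driver shown above; sysfs paths can differ.
for gov in /sys/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    # Guard so the loop is a no-op on systems without cpufreq (e.g. VMs)
    [ -w "$gov" ] && echo powersave > "$gov"
done
echo "governor update attempted"
```

If DSM reverts the governor after a reboot, a Task Scheduler boot-up task running this as root could reapply it.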

Edited by Tobias2k

Hello everyone,

 

I am using ARPL v1.1-beta2 on a Z590 VISION D + 11700K + Intel X540-AT2 + Define 7 case (DS3622, DSM 7.1.1).

 

But I have two issues:

1. The Z590 VISION D has two 2.5G Ethernet ports, one connected to my home router, and the X540 has two 10G Ethernet ports, one connected directly to my PC. The issue is that sometimes the X540 10G link doesn't work, and I can only reach the DSM login page via the IP assigned by the router.

2. When I select power off for the NAS in the DSM web page from the browser, the system reports it is powering off (and can no longer be connected to), but the hardware keeps running and never powers off.

 

Any ideas to fix these? Thanks very much!

 

 


55 minutes ago, Sanya_13 said:

Host CPU: Intel Celeron N5105

vCPU in VM: host (all 4 cores) 

How much CPU and RAM does your host have?

How much did you give to the VM?

My point is: does your host have enough free CPU and RAM left for itself once your VM is started?


7 minutes ago, Orphée said:

How much CPU and RAM does your host have?

How much did you give to the VM?

My point is: does your host have enough free CPU and RAM left for itself once your VM is started?

My host has 8GB of RAM and a 4-core CPU. There is another VM running on the same host with 4GB of RAM and 4 vCPUs.

As I wrote in my first post, I experienced these shutdowns and VM hangs after the automatic VM reboot. I configured an automatic reboot at 3 AM; then I got an unexpected shutdown at 4:44 AM. There was 0% usage across all VMs. I checked the Proxmox performance graphs: 1-2% CPU load for that time and enough free RAM.

This morning, when I tried to connect to DSM, it was not available; the VM was not responding. I had to force-stop the VM at 11:46.

Here is VM load:

 

[screenshot: VM load]

 

Here is host load:

[screenshots: host load]


Sorry for the newbie nature of this question, but I have updated DSM before with tragic consequences. I just want to confirm the steps for updating to a new DSM on the same model.

1) Restart the computer to get to the loader config page, type menu.sh at the prompt, and choose the update menu.

2) I am guessing that option gives me several choices, but I need to update everything under the update menu.

3) I can't remember the workflow: do I then rebuild the loader? I also remember being prompted to download a file when I built the original; will that happen again? I found that download odd anyway, since I had to access the web GUI and give it the .pat file I had manually downloaded from the website to my laptop.

4) The last step seems to be actually accessing the DSM GUI and updating from there. Will the GUI be fully functional so I can choose to update, or will it be the one where I have to give it the .pat file? Do I have to worry about the bootloader and DSM version? I know there is a 7.2 beta DSM out now that I am not really interested in.

Lots of questions. Once upon a time I updated Jun's version and there were a lot of consequences because I didn't do it correctly, lol.

 

PS: I used @apriliars3's post as a guide: https://xpenology.com/forum/topic/65408-automated-redpill-loader-arpl/?do=findComment&comment=342170

 

Edited by dasbooter

11 hours ago, Sanya_13 said:

My host has 8GB of RAM and a 4-core CPU. There is another VM running on the same host with 4GB of RAM and 4 vCPUs.

As I wrote in my first post, I experienced these shutdowns and VM hangs after the automatic VM reboot. I configured an automatic reboot at 3 AM; then I got an unexpected shutdown at 4:44 AM. There was 0% usage across all VMs. I checked the Proxmox performance graphs: 1-2% CPU load for that time and enough free RAM.

This morning, when I tried to connect to DSM, it was not available; the VM was not responding. I had to force-stop the VM at 11:46.

 


Update the kernel to 6.2 and install intel-microcode, and you won't have any more VM freezes.

Follow this link:

https://forum.proxmox.com/threads/vm-freezes-irregularly.111494/

This happens with DSM and your CPU (N5105) in a Proxmox VM.

Update the Intel microcode to revision 23, then manually to 24, following the topic's instructions.

 

Edited by frezeen

9 hours ago, frezeen said:

 

Update the kernel to 6.2 and install intel-microcode, and you won't have any more VM freezes.

Follow this link:

https://forum.proxmox.com/threads/vm-freezes-irregularly.111494/

This happens with DSM and your CPU (N5105) in a Proxmox VM.

Update the Intel microcode to revision 23, then manually to 24, following the topic's instructions.

 

Thank you @frezeen

 

I updated Proxmox, switched to the 6.2.6-1-pve kernel, and installed Intel microcode revision 24.

 

Here is the list of commands I used in the Proxmox shell, in case somebody faces the same issues:

1. Update Proxmox:

apt update
apt full-upgrade
reboot

2. Switch kernel to 6.2:

apt update
apt install pve-kernel-6.2
reboot

3. Update to Intel microcode 24:

wget https://r-1.ch/intel-microcode_3.20221108.2_amd64.deb
dpkg -i intel-microcode_3.20221108.2_amd64.deb
reboot

If "dpkg -i intel-microcode_3.20221108.2_amd64.deb" fails with the error "Package iucode-tool is not installed", install it first with "apt install iucode-tool".

 

Commands to check that everything is installed correctly:

cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.2.6-1-pve root=/dev/mapper/pve-root ro quiet
dmesg | grep "microcode updated early to"
[    0.000000] microcode: microcode updated early to revision 0x24000024, date = 2022-09-02

 

Will update this post in a few days after checking whether it helped.


Hi,

 

I am testing the latest loader, arpl-1.1-beta2a, on ESXi 8.0 with a two-SSD setup: one SSD for the loader and a second 240GB SSD for data. Tested with 1 and 2 SATA controllers. Network card with the default addons selected (all default), so vmxnet3 is included.

 

When trying to install DSM (latest build 42692) on the ds3622xs+, I get "failed to format drive" at 8%.

 

Doing exactly the same steps on a DS920+: no issues, everything works.

 

I saw some similar questions here, but no real solution.

 

 


2 hours ago, carbon6600 said:

Hello. Installed on Proxmox 7.2.3.

[screenshot]

I don't see any disks.

[screenshot]

How can I fix this?

Please help: the virtio network works, but it does not see virtio disks.

Are you trying to pass disks to your VM or creating virtual disks?


3 hours ago, w84no1 said:

I don't think you can use virtio disks with ARPL. I passed through the SATA controller to get my disks into DSM.

What makes you think so? If you look into the drivers, there are all sorts of virtio drivers present:

https://github.com/fbelavenuto/arpl-modules/tree/main/denverton-4.4.180

KVM-based solutions with virtio might be the most commonly used virtualization platform here, and I think the devs also use it for the loader.


FYI for those encountering TX problems (slow speeds when pulling files from the NAS to a PC, around 60~95MB/s with high CPU usage) on RTL8111-series NICs: select the rtl8168 driver (called a module) instead of rtl8169, or vice versa. (Don't select both, so you can force the exact one you want.)

Double-check with 'lspci -knn' from a terminal/shell on the NAS side to see which driver is loaded. (I found this fix the hard way, debugging between Proxmox KVM and a baremetal setup all day, with a corrupted system configuration and DSM reinstalls in between...)

If nothing helps, you might need to fiddle with 'ethtool -K ethX tso off' (X = 0/1/2... depending on your NIC number) or turn off other TX features (check with 'ethtool -k ethX') from a terminal/shell on the NAS side as a workaround. My Intel I219-V with the e1000e driver seems affected by TSO as well, though I checked that one while debugging with TCRP; I'm too lazy to experiment with it again from ARPL.

If the workaround works, you should get a consistent ~110MB/s transfer speed from NAS to PC.
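The ethtool steps above can be wrapped in a small guarded script so it does nothing on machines where the tool or interface is missing. A sketch: the interface name eth0 is an assumption, so substitute eth1, eth2, ... to match your NIC.

```shell
# Disable TCP segmentation offload on the NAS-side interface as a
# workaround for slow TX on some NICs. "eth0" is an assumption.
IFACE="eth0"
if command -v ethtool >/dev/null 2>&1 && [ -e "/sys/class/net/$IFACE" ]; then
    ethtool -k "$IFACE" | grep -i segmentation    # show current offload state
    ethtool -K "$IFACE" tso off                   # the actual workaround
else
    echo "ethtool or $IFACE not available; nothing changed"
fi
```

Per the post, re-check transfer speed afterwards; a sustained ~110MB/s suggests the offload was indeed the culprit.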

Edited by vbz14216

5 hours ago, IG-88 said:

What makes you think so? If you look into the drivers, there are all sorts of virtio drivers present:

https://github.com/fbelavenuto/arpl-modules/tree/main/denverton-4.4.180

KVM-based solutions with virtio might be the most commonly used virtualization platform here, and I think the devs also use it for the loader.

I stand corrected.


On 4/4/2023 at 9:41 AM, Sanya_13 said:

Thank you @frezeen

 

I updated Proxmox, switched to the 6.2.6-1-pve kernel, and installed Intel microcode revision 24.

 

Here is the list of commands I used in the Proxmox shell, in case somebody faces the same issues:

1. Update Proxmox:

apt update
apt full-upgrade
reboot

2. Switch kernel to 6.2:

apt update
apt install pve-kernel-6.2
reboot

3. Update to Intel microcode 24:

wget https://r-1.ch/intel-microcode_3.20221108.2_amd64.deb
dpkg -i intel-microcode_3.20221108.2_amd64.deb
reboot

If "dpkg -i intel-microcode_3.20221108.2_amd64.deb" fails with the error "Package iucode-tool is not installed", install it first with "apt install iucode-tool".

 

Commands to check that everything is installed correctly:

cat /proc/cmdline
BOOT_IMAGE=/boot/vmlinuz-6.2.6-1-pve root=/dev/mapper/pve-root ro quiet
dmesg | grep "microcode updated early to"
[    0.000000] microcode: microcode updated early to revision 0x24000024, date = 2022-09-02

 

Will update this post in a few days after checking whether it helped.

 

After two days, DSM works great, without freezes or reboots!


On 7/1/2022 at 9:45 PM, fbelavenuto said:

Hope you like it.

Totally. This is fantastic. Thanks a lot.

 

Runs under ESXi 8 like a charm. I upgraded my previous RedPill install by installing and configuring ARPL and then just mounting the disks into the new VM. DSM recognized the transfer and allowed "keep settings". Bingo.


22 hours ago, fbelavenuto said:

Hey guys,

I would like to inform you that I have been inactive for the last few days and will be for a long time due to personal issues. The ARPL will be stalled for the time being.

 

Best wishes to you. Take care, and have a break so you don't get burned out between projects and personal life.

