
TinyCore RedPill Loader Build Support Tool ( M-Shell )


Peter Suh


On 10/4/2023 at 11:07 PM, Peter Suh said:

 

I found the cause of the unnecessary space consumption on the 3rd partition.
Because the space occupied by these files was not being counted, the available space was being reported as 0.

 

There is no need to provide logs.

 

Rebuilding the loader will automatically resolve this issue.

https://github.com/PeterSuh-Q3/tinycore-redpill/commit/2930866140dc924eba2b57309cc53616f898cd33

 

Hey there @Peter Suh, sorry, just getting back online here. "Life" has been keeping me away and busy for a bit; I fully intended to provide the requested logs and wasn't ignoring you or your request. It seems that in my absence, the OCD part of any respectable IT/tech person's brain took hold of you, thanks to the quirky challenges I was running into, and you tracked the problem down. EXCELLENT FIND! I reiterate my DM! Thank you 🥳


Hello Peter :).

It's been a long time since I did any testing with you (back when you were first developing your my.sh automated install).

I'm really impressed with the way the tool has developed into a fully fledged installer - congratulations on a great outcome from many hours of hard work. An achievement to be proud of as an enthusiast :).

I had three issues using it. They relate to my own system, but I'll mention them in case you have input or think they need attention. Two completely killed my ability to use the installer and produced dead ends.

 

I began by updating DSM on my old loader and it messed up my system. It would not install and was just stuck. I decided I had to update the boot loader, but had to relearn a lot after so long away. I thought I was going to have to pull the drives, start fresh and then reinstall from my Hyper Backup backups... but I eventually got things going.

 

 

1. VGA is deprecated - a genuine issue for older server hardware using IPMI (the console just shows a blank screen after Linux loads).

I had to press "e" at the first boot menu to edit the boot entry and, before the linux line, add "set gfxpayload=1024x768x16,1024x768".

 

This allowed me to boot at a usable resolution. I'm wondering if VGA support can be included while still allowing higher resolutions (forcing people to 1024x768 is really awful to use given the screen/window layout :P). Maybe I could use 1280x1024? I didn't test that; after I finally got the next parts working I was too pleased to bother just then.
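For reference, the edit looks roughly like this in the GRUB editor (a sketch only - the kernel line is just a placeholder for whatever the loader's entry actually contains, left unchanged):

set gfxpayload=1024x768x16,1024x768   # inserted just above the existing linux line
linux /zImage ...                     # the loader's own kernel line, untouched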

 

Once this was fixed I was stuck for hours with point 2 because it was just not obvious from a troubleshooting perspective.

 

2. Installer requires UEFI BIOS mode.
The previous bootloaders required me to use Legacy BIOS mode to install successfully, and that was still how I had the BIOS configured. For hours I was trying to get your installer past the "Red Pill" logo on my console (after getting VGA to work), and it just produced a long list of errors relating to system IDs.

Changing my BIOS boot mode to EFI immediately allowed the bootloader to proceed and load without error. Setting the LEGACY option for devices - USB, LAN, LSI 9211-8i and onboard SATA - worked fine, and they loaded more quickly.

 

 

3. I CANNOT get all 8 of my HBA ports to be recognised in DSM if onboard SATA is enabled.
This was a problem for me on the previous loaders too. I usually bypassed it by disabling onboard SATA but I am going to need those extra 6 ports soon.

 

I do request your help with this part please Peter.

 

Testing Done: 
A. I have 7 drives installed on the HBA across its 2 ports. On installing DSM 7.2 after Loader upgrade (with SATA enabled), only 5 are present in DSM - drives 1 and 2 are not registered.

B. Disable SATA in the BIOS and all drives are visible again.

C. Connect drive 1 to SATA and leave drive 2 on the HBA = drive 1 visible and drive 2 not visible

D. Connect 1 and 2 to SATA  = all drives visible in DSM.

This means at the moment SATA appears to "block" 2 of my HBA drives. I have not confirmed if all 6 SATA drives will function just yet. 

I would like to investigate why this is happening and try to untangle the 2 drive controllers so I can use all drive ports.
 


On 10/4/2023 at 11:07 PM, Peter Suh said:

I found the cause of the unnecessary space consumption on the 3rd partition. … Rebuilding the loader will automatically resolve this issue.

I just finished test-building a new loader USB, as I did before: 7.2.0-64570, then 7.2.1-69057, and it went smoothly - no showstopper issues and decent free space on /sdb3 once it was completed. For an additional test, I restarted and repeated the 7.2.1-69057 build, since that would typically bring down any new additions/changes you might implement later, where a rebuild is required to pull them down. The additional step also ran smoothly, with decent free space left on all partitions. 👍🏼

 

Might I suggest that you automatically copy the zlastbuild.log file over to, say, the /dev/sdb2 partition at the end of each build cycle? It's a small file and would pose no issues being copied there, and you would have a saved snapshot in a safe, accessible location until the next loader build, which could easily be pulled in between as needed. I performed this additional step with a simple "cp" as a test, with success.
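At its simplest, the end-of-build step could look something like this (a sketch; the mount point and the log's home directory are assumptions, not the script's actual paths):

# mount the loader's 2nd partition and keep a copy of the last build log there
sudo mkdir -p /mnt/sdb2
sudo mount /dev/sdb2 /mnt/sdb2
sudo cp /home/tc/zlastbuild.log /mnt/sdb2/
sudo umount /mnt/sdb2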

 

Interesting side note: though I have not tested on several machines to fully confirm, at the moment the USB isn't natively viewable/accessible in macOS Sonoma or Windows 11... but it IS fully accessible on Windows 10. I'm intrigued, given the partitions are indicated as FAT/FAT32. 🤔


3 hours ago, mgrobins said:

I had three issues using it. … 1. VGA is deprecated - a genuine issue for older server hardware using IPMI … 2. Installer requires UEFI BIOS mode. … 3. I CANNOT get all 8 of my HBA ports to be recognised in DSM if onboard SATA is enabled. … I would like to investigate why this is happening and try to untangle the 2 drive controllers so I can use all drive ports.

 

 

1. As for the VGA resolution compatibility issue, I understand that ARPL already has a menu prepared for this.
I hadn't thought about it since no users had requested it so far.
I will prepare the same menu for TCRP as ARPL has.

 

2. EFI and legacy issues need to be handled separately for boot and for storage.
The EFI entry needed to boot the loader can be pointed at partition 1 of the USB (UEFI) in the BIOS.
You can choose whichever of Legacy/EFI works well for booting.
For storage, legacy mode is recommended, especially for HBAs.
This is something I also heard from other experts: legacy mode passes through the SAS controller's own BIOS check stage, giving it time to accurately recognize the disks.

 

3. If you are trying to utilize both the HBA and SATA controller ports, it will be difficult for me to give you an accurate guide.
These are the settings recommended by @flyride when using multiple disks on an HBA:

 

[HBA recommended settings from flyride]
Do not use EUDEV; use only DDSML (if you use TCRP-mshell).


"SataPortMap": "12",
"DiskIdxMap": "1000",
"SasIdxMap": "0"

"MaxDisks": "24"


SATA controllers are not taken into account by these values.
Once these recommended settings are in place, a guide pop-up is provided in MSHELL that you can refer to in order to adjust SataPortMap / DiskIdxMap for the SATA controller. This function is also taken from ARPL.
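For reference, my rough reading of these values (my own summary, not an authoritative definition - the pop-up below is the real guide):

SataPortMap "12"   -> one digit per SATA controller: the 1st controller maps 1 port, the 2nd maps 2 ports
DiskIdxMap  "1000" -> one hex byte per controller: the 1st controller's drives start at slot 0x10 (16), the 2nd at 0x00
SasIdxMap   "0"    -> SAS/HBA drives start numbering from slot 0

so the HBA keeps the low slots and the SATA ports are pushed out of the way.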


If you use the x "Show Sata(s) # ports and drives" option in the main menu, you can see the guide pop-up shown below.
You need to learn how to decipher and use this information.

 

[Screenshot: MSHELL "Show Sata(s) # ports and drives" guide pop-up]


I only learned how to decipher this myself not long ago.


I think it would be good to look at this map and learn more together.

 

If possible, I will try to configure an environment similar to your situation by combining HBA / SATA.

 

 

Edited by Peter Suh

1 hour ago, gericb said:

I just finished test-building a new loader USB … decent free space left on all partitions. 👍🏼 … Might I suggest that you automatically copy the zlastbuild.log file over to, say, the /dev/sdb2 partition at the end of each build cycle? … Interesting side note … the USB isn't natively viewable/accessible in macOS Sonoma or Windows 11... but it IS fully accessible on Windows 10.

 

 

I'm glad the loader build now goes smoothly. 😀


When the loader is running normally, the third partition is not easily accessible without mounting it. The 1st/2nd partitions vary depending on the situation, but their contents are also difficult to check without mounting them.


An expert can pull the file out with a mount, but for beginners that is not easy.
Considering this, the zlastbuild.log file does not seem convenient to retrieve no matter which of the 1st/2nd/3rd partitions it is placed on.
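For anyone comfortable on the TinyCore console, the mount route looks roughly like this (a sketch; the device name and log location are examples and depend on your setup):

# mount the loader's 3rd partition read-only and read the saved build log
sudo mkdir -p /mnt/sdb3
sudo mount -o ro /dev/sdb3 /mnt/sdb3
cat /mnt/sdb3/zlastbuild.log    # assumes the log was copied there
sudo umount /mnt/sdb3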


I am also using macOS Ventura as my default OS, but it is not easy to access this loader's USB FAT partition.
Just as Windows does not handle the macOS partition types APFS or Mac OS Extended (Journaled),
it seems natural that macOS does not handle these Windows FAT partitions well.
Since I don't use Windows much these days, I only now noticed that the way FAT partitions are displayed differs even between Windows versions (11 and 10).
The method I previously used to make good use of this FAT partition was to run "Total Commander" with administrator privileges rather than Explorer.
This tip is also useful for looking into the UEFI partition once a drive letter has been assigned to it.

Edited by Peter Suh

1 hour ago, Peter Suh said:

3. If you are trying to utilize both the HBA and SATA controller ports, it will be difficult for me to give you an accurate guide. These are the settings recommended by @flyride when using multiple disks on an HBA … Once these recommended settings are in place, a guide pop-up is provided in MSHELL that you can refer to in order to adjust SataPortMap / DiskIdxMap for the SATA controller. … If possible, I will try to configure an environment similar to your situation by combining HBA / SATA.

I'd like to explore that.

I changed the BIOS to offer dual Legacy/EFI boot, with the PCIe setting as Legacy again for the HBA.

On boot, and when it was stalling yesterday, I noticed messages regarding UDEV.
DDSML is associated with the HBA loading. I'm not seeing how the SATA controller is loading, though.

I'll have to go back and read up on the port map theory again. It was annoying last time I tried, because I have not used hexadecimal calculations in years, and much of the discussion online consists of partial answers or is grossly incomplete - like someone answering a person who already knows the answer :P.

 


Peter I am beginning to work on the SATA + HBA project now. 

To refresh my generally appalling memory (I take medication that has ruined my memory 😕 ) I'm using old threads. 
I found one with a bit of good info to start. 
https://xpenology.com/forum/topic/30898-dsm-623-ds3617xs-beyond-12-drive/

 

IG-88 offers some great help as always, and there is a video linked to show how the Hex calcs are being done.

 

I need to work out where to go and do the editing again as I can't even remember that lol.

 


 

I discovered with 16 disks my esataportcfg and internalportcfg values were overlapping when converted to binary. 

I decided to edit the setup for 20 total drives with 20 internal ports in case I swap out my 9211-8i for a 9300-16i (would that just work by the way?)

I have assumed the eSATA ports mentioned here are in fact my onboard SATA controller ports - is that right?


I'm not sure how to manage the SATA disk side of things, though, and I've forgotten where to do that SataPortMap and DiskIdxMap tinkering.
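For what it's worth, the three *portcfg values are just bit masks over the drive slots, so they must not overlap. A quick way to sanity-check a layout (the 20-internal-port, 4-USB-slot arrangement below is only an illustration, not the values to use):

# internal drives on bits 0-19, no eSATA, USB slots stacked above the internals
printf 'internalportcfg=0x%x\n' $(( (1 << 20) - 1 ))         # 0xfffff
printf 'esataportcfg=0x%x\n'    0                            # 0x0
printf 'usbportcfg=0x%x\n'      $(( ((1 << 4) - 1) << 20 ))  # 0xf00000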


23 minutes ago, mgrobins said:

I discovered with 16 disks my esataportcfg and internalportcfg values were overlapping when converted to binary. … I'm not sure how to manage the SATA disk side of things, though, and I've forgotten where to do that SataPortMap and DiskIdxMap tinkering.

 

 

I tried to reproduce your setup yesterday using my HBA and SATA ports.
However, my power supply was not sufficient, so I could not install that many disks.
The conclusion is that I cannot test it myself.
Instead, let me share a useful tool.

 

This is a SATA port mapping calculation tool provided by the ARPL-i18n developer @wjz304.
It seems the HBA can be ignored and left out of the calculation in this tool.
Please keep the user_config.json settings I told you about.

 

[Screenshot: wjz304's SATA port mapping calculation spreadsheet]

 

internalportcfg,esataportcfg,usbportcfg_eng.xlsx


18 hours ago, Peter Suh said:

This is a SATA port mapping calculation tool provided by the ARPL-i18n developer @wjz304. It seems the HBA can be ignored and left out of the calculation in this tool. Please keep the user_config.json settings I told you about.

internalportcfg,esataportcfg,usbportcfg_eng.xlsx 18.8 kB · 0 downloads

I don't read Korean but I think I worked it out :P
The top line is each disk in my system from 1 to x (in my case I have 6 internal SATA ports and 16 internal HBA ports, so should I use maxdisks 22?).

I'm finding it quite confusing to understand which settings change which feature, given the way different people describe them in different posts.

What settings (in what file) determine the number and addressing of:
1. HBA disks
2. onboard SATA disks

What settings determine the layout of the UI disk map to ensure all disks are represented and seen?

 

 


2 hours ago, mgrobins said:

In my case I have 6 internal SATA ports and 16 internal HBA ports, so should I use maxdisks 22? … What settings (in what file) determine the number and addressing of:
1. HBA disks
2. onboard SATA disks

What settings determine the layout of the UI disk map to ensure all disks are represented and seen?

 

 

I'm sorry. I tried to translate all of the Korean in the Excel file, but some parts were missed. They are as follows.

 

디스크 일련번호 ( Disk sequence )
디스크 이름 ( Disk Name)
유형 ( type )


I think maxdisks just needs to be kept above 22.


I have not yet been able to compile the information explained by many people and turn it into my own knowledge.
I am only following the explanation of wjz304, who has the most knowledge.
I'd like to believe that's the way to prevent confusion.


The file in charge of these adjustments in RedPill is user_config.json.
SataPortMap / DiskIdxMap / SasIdxMap correspond to items of the extra_cmdline element;
when they are recorded there, that information is passed on to synoinfo.conf.


maxdisks / internalportcfg / esataportcfg / usbportcfg correspond to items of another element, synoinfo;
I understand that when they are recorded there, that information is also passed on to synoinfo.conf.


As for internalportcfg / esataportcfg / usbportcfg, I am still in the process of checking, so I am not yet sure whether they are accurately transmitted to synoinfo.conf.
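To make the two sections concrete, this is roughly how they sit in user_config.json (an abbreviated sketch using the values discussed above; the three portcfg masks are placeholders for a 24-slot layout, not recommendations, and your generated file will contain more keys):

{
  "extra_cmdline": {
    "SataPortMap": "12",
    "DiskIdxMap": "1000",
    "SasIdxMap": "0"
  },
  "synoinfo": {
    "maxdisks": "24",
    "internalportcfg": "0xffffff",
    "esataportcfg": "0x0",
    "usbportcfg": "0x3000000"
  }
}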


[NOTICE]

 

TCRP-mshell Shared improvements dated 2023.10.09

 

[Screenshot: updated MSHELL main menu]

 

1. Even if you do not enter a MAC address separately, the real MAC address of each LAN card is now entered automatically in advance by default.

    If you later want to change it to a virtual MAC or a purchased MAC, you can use the "Select Mac Address #" menu above.

 

2. The "Burn Another TCRP Bootload to USB or SSD" menu distributed a few days ago was a function to create a USB or SSD for another new loader.

"Clone TCRP Bootload to USB or SSD" is a function that backs up the loader currently in use and clones it equally.

Likewise, it is possible to burn to USB or SSD.

 

Please note that in both the Burn and Clone menus, after the other loader is created, you must remove the rest, leaving only the loader you will actually use.

Otherwise you may end up in a situation where you don't know which of the two will be booted.

 

[Screenshot: Burn / Clone menu]

 

3. Changed the build menu to select the DSM VERSION, ARPL-style.

As the menu continues to expand and the number of items displayed on screen keeps increasing,

I changed the function slightly for organization purposes.

 

[Screenshot: DSM version selection in the build menu]

 

4. The Broadcom bnx2x module, which had unstable support in TinyCore Linux, has been stabilized.

Among LAN card modules, there are models that require firmware to operate properly.

The most prominent of these is the Broadcom 10G bnx2x series.

This time, I researched how to include the firmware and updated it.

(I was lost for a while because I couldn't find anything when I googled it. ^^ I finally have the know-how.)

 

In TinyCore Linux, the LAN card chipsets are detected in the upper-right monitor window.

If, as in the capture below (where only numbers 1 and 4 are recognized and bnx2x is missing),

eth0, eth1, eth2, eth3, etc. are not recognized, please let me know and I will try to include the missing firmware.

 

[Screenshot: TinyCore monitor window with the bnx2x interfaces missing]

 

In the case of a LAN card with missing firmware, you will see a message in TinyCore Linux's dmesg that the firmware is missing, as shown below.

 

[Screenshot: dmesg output showing the missing bnx2x firmware]


2 hours ago, Peter Suh said:

[NOTICE] TCRP-mshell shared improvements dated 2023.10.09: 1. real MAC addresses are now entered automatically for each LAN card; 2. "Burn Another TCRP Bootload to USB or SSD" and "Clone TCRP Bootload to USB or SSD" menus; 3. ARPL-style DSM version selection in the build menu; 4. firmware for the Broadcom bnx2x module is now included.

Congratulations on another milestone.


I wrote the names incorrectly, Peter - I meant to type "maxdisks / internalportcfg / esataportcfg / usbportcfg".

I have done some more searching and discovered that some people are having issues with the new DSM 7: alterations to these values in synoinfo.conf are not resulting in changes.

The examples being quoted relate to the DS918+ and people using the eSATA port as an internal drive, who used to set maxdisks to 5, esata to 0x00, and internalportcfg to 0x0011 (5, up from 4). Apparently this no longer works and the eSATA drive is always seen as external.

On boot-up I can hit "x" and enter the command line to alter my SATA or USB settings - that's the opportunity to alter user_config.json - but I may try to recompile and use your menu to see if it fixes my issue as well.

I think I will need to message wjz304 directly and see if he can talk me through some of it... then I can come back here with a solution and share it with you, so you can decide whether anything is worth building into your software.

Honestly, the simple solution for me is to disable SATA and just use the HBA, as all my drives are then seen. If I need to, I can buy an expander card or a -16i SAS card for more capacity.
The problem solver in me doesn't want to quit, though - and I'm positive this would be helpful for other users.


1 hour ago, mgrobins said:

Also Peter.... you speak English very well. Maybe other languages or dialects too in addition to Korean?

Please don't ever apologise for not perfectly translating all this very technical discussion into English. You do a great job :)

 

This is the power of Google Translate. ^^
I lived in the United States for about a year.
After running the translator, I only have to review the slightly awkward parts.

 

TCRP-mshell is supported in 10 languages.
The languages other than English and Korean have not yet been properly reviewed by native speakers.


On 10/9/2023 at 9:39 PM, pocopico said:

 

Correct, I think I got this fixed in a later version, didn't I? 🤔🤔🤔🤔🤔

 

 

@pocopico

 

Now the rd.gz file, which was always causing problems on Partition 1 because of its variable size, has been moved entirely to Partition 3.


I modified build-loader.sh in redpill-load.
It no longer copies rd.gz to the 1st partition of loader.img;
instead, it copies it to partition 3.


https://github.com/PeterSuh-Q3/redpill-load/commit/022e72c732ee0cba3a06377c9524f5baab031a96


In the rploader.sh script,
references to 1/rd.gz were simply replaced with 3/rd.gz.


https://github.com/PeterSuh-Q3/tinycore-redpill/commit/d16c2d5f8f4721d2a1df8a7e9905f981a8d0917b
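Conceptually the change boils down to something like this (purely illustrative - the variable and mount-point names are made up, not the actual diff):

# before: rd.gz was copied to the small 1st partition of loader.img
# cp "$CACHE_DIR/rd.gz" /mnt/loader_p1/rd.gz
# after: rd.gz now lives on the roomier 3rd partition instead
cp "$CACHE_DIR/rd.gz" /mnt/loader_p3/rd.gz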


The loader test results show that it operates without any problems and the ramdisk patch is also processed well.


[NOTICE]

 

TCRP-mshell Shared improvements dated 2023.10.18

 

ARPL was redesigned from TCRP and uses RAM disk temp space efficiently.

I was always envious of its dramatically faster build speed.

 

Although the build speed of my MSHELL is much faster than pocopico's original TCRP,

I wanted to get close to ARPL speeds.

 

I also tried using more of the RAM disk space (/dev/shm) to reduce the build time.

Downloading addons or integrated modules one by one from GitHub produces a lot of curl calls,

and this seemed to be a factor slowing things down.

 

So now, when the menu is first entered, these two repositories are downloaded to the RAM disk (/dev/shm) with git clone.

Because files are then imported by copying from there, the build appears significantly faster.

I think ARPL was probably redesigned in this way.

 

https://github.com/PeterSuh-Q3/tinycore-redpill/commit/bc4085050906fe04bbb5e1d880ede5d3eff2483e

https://github.com/PeterSuh-Q3/redpill-load/commit/29bbcc57630b3fa2c5164421449036ea749a3295
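The gist of the approach, as I understand it (an illustrative sketch; the real repositories and paths are the ones in the commits above):

# clone once into the RAM disk when the menu starts
git clone --depth 1 https://github.com/PeterSuh-Q3/tcrp-addons /dev/shm/tcrp-addons

# later, copy locally instead of curl-ing each file from raw.githubusercontent.com
cp /dev/shm/tcrp-addons/acpid/rpext-index.json \
   /home/tc/redpill-load/custom/extensions/_new_ext_index.tmp_json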

Edited by Peter Suh

Peter, I kept trying to get the SATA disks not to overlap with or block my HBA disks from showing up properly, and I just could not resolve it. In the end I have had a bit of a sook and gone back to disabling the SATA ports.

I can add one more disk for now, possibly 3-4 if I work out which HBA and SATA ports are not playing well together.

I plan to buy my way out of the problem if I need to - a -16i replacement HBA, or a port expander if I can find a small one that doesn't need a PCIe slot and can take power from a Molex or SATA power plug.


[NOTICE]

 

TCRP-mshell Improvements dated 2023.10.22

 

TCRP-mshell has had major changes in this version, so it has been bumped from 0.9.6.0, which brought the first build-speed improvement,

to 0.9.7.0, which brings the second.

I will soon upload the 0.9.7.0 img file to GitHub.

 

Starting from version 0.9.7.0, the DSM .pat file download step no longer runs; it is skipped.

As I understand it, ARPL-i18n has not applied this method and was finalized in its final version without it.

 

The reason for downloading the DSM .pat file from the Synology download site is to extract the rd.gz compressed RAM disk and the zImage file inside it.

Downloading this DSM file and extracting these two files every time the loader is built is inefficient, wastes space, and is a huge waste of build time,

so I extracted these files for all four revisions of all models in advance and stored them separately in a GitHub repository.

 

Because the DSM download process is omitted, loader build time is significantly reduced.

A loader in VM ATA mode builds in about 12 seconds, as shown in the capture,

and a bare-metal USB-mode loader is measured at about 20 seconds (on a 4-core Haswell).

 

[Screenshot: loader build completing in about 12 seconds in VM ATA mode]

Edited by Peter Suh

20 hours ago, Peter Suh said:

Starting from version 0.9.7.0, the DSM .pat file download step no longer runs; it is skipped. … I extracted these files for all four revisions of all models in advance and stored them separately in a GitHub repository. … Because the DSM download process is omitted, loader build time is significantly reduced.

 

The whole TinyCore filesystem is in memory, and since it is already on RAM there is no need to load anything else to a RAM disk. Requiring 2 GB of memory to build the loader is one of the constraints I decided was acceptable. If you have more memory available, you may speed things up by writing more files to /tmp instead of to partition 3 of the loader.

 

 

The whole process is based on the main constraint of not sharing anything copyrighted (kernel, ramdisk binaries, update images, etc.) in any shared location. If you decide to do so, you open up the possibility of being blamed for distributing copyrighted material. It's your choice, but you might have just opened a back door to being prosecuted.

Edited by pocopico

Hello Peter!

Updated to 69057, but the power button is not working anymore (it was working with 64570).

The loader building process shows this, but it still does not work :(. Can you check it, please?

Thank you in advance!

Quote
[#] Copy downloaded file /dev/shm/tcrp-addons/acpid/recipes/universal.json to /home/tc/redpill-load/custom/extensions/_ext_new_rcp.tmp_json
...
[#] Extension acpid is already installed from https://raw.githubusercontent.com/PeterSuh-Q3/tcrp-addons/master/acpid/rpext-index.json
...
[#] Updating acpid extension...
[#] Copy downloaded file /dev/shm/tcrp-addons/acpid/rpext-index.json to /home/tc/redpill-load/custom/extensions/_new_ext_index.tmp_json

[#] Extension acpid index is already up to date

 

 


17 hours ago, pocopico said:

The whole process is based on the main constraint of not sharing anything copyrighted (kernel, ramdisk binaries, update images, etc.) in any shared location. … It's your choice, but you might have just opened a back door to being prosecuted.


Peter, are you running out of ideas for new features and now trying to make this tool as perfect in every way as you can?

Surely it is more important to work on my Sata ports overlapping or blocking my HBA ports? 😁😇😜

I think Pocopico makes a very important point that needs careful examination. 

I have an idea that may help with build time yet be legally the same as now.

Make it so the .pat file download and extraction begin at the start of the loader launch, with a "begin process" button or something, so it runs in parallel with the build decision-making; by the time the file is needed, it is already in the temporary spot it needs to be.
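In shell terms the idea is roughly this (a sketch only; the variable names and file path are made up for illustration):

# kick off the DSM .pat download in the background as soon as the menu starts
curl -sL -o /dev/shm/dsm.pat "$PAT_URL" &
PAT_PID=$!

# ... the user works through the build menus in the meantime ...

wait "$PAT_PID"    # only blocks here if the download hasn't already finished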


This would potentially save some time, and would be a solution to use while examining whether there really are restrictions, as pocopico states, that could leave you vulnerable to legal attack, Peter.

In all instances I think: 
1. Protect yourself,
2. Protect yourself :) 

Build time is not likely a problem for most of us. 

How many people are actually rebuilding so often it needs to be optimised to the absolute minimum time?

The most optimised and best functioning result is what we want as users yes? 


20 hours ago, pocopico said:

The whole process is based on the main constraint of not sharing anything copyrighted (kernel, ramdisk binaries, update images, etc.) in any shared location. … It's your choice, but you might have just opened a back door to being prosecuted.

 

@pocopico Thank you for your concern about the possibility of running into copyright issues.


I've been thinking a lot about this issue since yesterday.
Where on the Synology website can I find the copyright notice that prohibits sharing the items you mentioned (kernel, ramdisk binaries, update images, etc.)?
Or what precedent does this follow?
I found it difficult to locate this in the legal notice or elsewhere:
https://www.synology.com/en-us/company/legal/terms_EULA


Also, looking at the contents of this legal notice with regard to the kernel, ramdisk binaries, update images, etc., it seems that tampering with the kernel may also be a problem.
In both your repo and mine, under redpill-load/config, there are bsp files created by modifying the kernel.
Wouldn't these bsp files also be considered a copyright violation?


2 hours ago, mgrobins said:

I have an idea that may help with build time yet be legally the same as now: make it so the .pat file download and extraction begin at the start of the loader launch … so it runs in parallel with the build decision-making. … In all instances I think: 1. Protect yourself, 2. Protect yourself :)

 

Thank you for your good comments.


A function that backs up the initially downloaded .pat file and reuses it for rebuilds, as long as the model and revision are not changed, was already in place.


This time, that function was bypassed in favor of the pre-extracted rd.gz and zImage files.


I would like to take a more in-depth look at the copyright-related aspects pocopico raised.
I left a related inquiry with him a little while ago.


If there is no good way to implement fast loader builds while protecting developers, I am currently considering returning to the original process.


Thank you.

Edited by Peter Suh
