
TinyCore RedPill Loader Build Support Tool ( M-Shell )


Peter Suh


1 hour ago, shibby said:

@Peter Suh there is still a problem with Virtual Machine Manager. The service starts correctly, but there is no way to start a VM 😕 It is linked to the iSCSI storage, but I didn't find exactly what causes it or how to solve it 😕

 

 

With the distribution of the sanmanager-repair addon a few days ago, SAN MANAGER and VMM now start successfully.

 

https://github.com/PeterSuh-Q3/tcrp-addons/tree/main/sanmanager-repair

 

However, we are receiving reports that VMs created within VMM do not actually run.

I am aware of this problem too, but I haven't found a solution yet.

Link to comment
Share on other sites

I have been using your M-Shell tools from the beginning. At the moment I am on 0.9.5.0 with the latest DSM version installed, and VMM works correctly. So there is something wrong with M-Shell, not with DSM. Maybe we should step back to a 100% working M-Shell and then apply updates step by step to find the point at which SAN/VMM broke.

Two days ago I updated my friend's XPEnology from 7.1.1 to 7.2.1. He uses VMM for a Home Assistant virtual machine. After the upgrade he cannot run the Home Assistant VM. This is a huge problem for him, because his entire smart home installation is based on Home Assistant, and I cannot help him.

 

I really want to see two changes in the M-Shell project:
- Do not push changes directly to the "prod" repository. Use "dev" and then pull the changes into "prod". Users should be able to select which M-Shell repository they want to use.

- M-Shell versioning: do not update M-Shell automatically. If users want to use an older version of M-Shell, they should be able to do so. Far more users need a working SAN/VMM than, for example, an "M-Shell offline installation". If I know that a specific version of M-Shell works well, I should be able to keep using it. At the moment I cannot, because M-Shell updates itself automatically.

Edited by shibby

On 1/29/2024 at 5:59 PM, shibby said:


 

 

I am sharing a solution to recover SAN MANAGER after it has been damaged, along with the VMM that depends on it.

You need to recreate the system /config directories used by VMM and adjust the permissions of the files within them.
The key is that two directories must be created and two files must be given permissions.

 

Ultimately, we will package this as a vmm-repair addon and distribute it.

 

The directories to be restored, and the files requiring permission changes, can be identified as shown below.
Run tail with root privileges:

 

tail -f /var/log/synoscgi.log | grep "No such file or directory"

 

 


 

2024-01-30T21:54:31+09:00 NAS4 synoscgi_SYNO.Core.ISCSI.LUN_1_load_lun[8001]: iSCSI:iscsi_lio_target_load.cpp:113:SYNOiSCSILioTargetLoad mkdir(/config/target/iscsi/iqn.4931fa37-41ab-44bc-b472-5c8ea14a36b2, 448), err=No such file or directory
2024-01-30T21:54:32+09:00 NAS4 synoscgi_SYNO.Core.ISCSI.LUN_1_load_lun[7999]: iSCSI:iscsi_lio_target_load.cpp:113:SYNOiSCSILioTargetLoad mkdir(/config/target/iscsi/iqn.4931fa37-41ab-44bc-b472-5c8ea14a36b2, 448), err=No such file or directory
2024-01-30T21:57:24+09:00 NAS4 synoscgi_SYNO.Core.ISCSI.LUN_1_load_lun[22764]: iSCSI:iscsi_configfs_rw.cpp:21:SYNOiSCSIConfigfsWrite open(/config/target/iscsi/iqn.4931fa37-41ab-44bc-b472-5c8ea14a36b2/tpgt_1/attrib/demo_mode_write_protect, 1), err=No such file or directory
2024-01-30T21:57:25+09:00 NAS4 synoscgi_SYNO.Core.ISCSI.LUN_1_loop_mount[22756]: iSCSI:iscsi_lun_loopback_mount.cpp:118:SYNOiSCSILunLoopbackMount mkdir(/config/target/loopback/naa.4931fa37-41ab-44bc-b472-5c8ea14a36b2, 448), err=No such file or directory
2024-01-30T21:57:25+09:00 NAS4 synoscgi_SYNO.Core.ISCSI.LUN_1_loop_mount[22814]: iSCSI:iscsi_lun_loopback_mount.cpp:118:SYNOiSCSILunLoopbackMount mkdir(/config/target/loopback/naa.4931fa37-41ab-44bc-b472-5c8ea14a36b2, 448), err=No such file or directory
2024-01-30T21:57:52+09:00 NAS4 synoscgi_SYNO.Core.ISCSI.LUN_1_loop_mount[25051]: iSCSI:iscsi_lun_loopback_mount.cpp:118:SYNOiSCSILunLoopbackMount mkdir(/config/target/loopback/naa.4931fa37-41ab-44bc-b472-5c8ea14a36b2, 448), err=No such file or directory
2024-01-30T21:57:52+09:00 NAS4 synoscgi_SYNO.Core.ISCSI.LUN_1_loop_mount[25050]: iSCSI:iscsi_lun_loopback_mount.cpp:118:SYNOiSCSILunLoopbackMount mkdir(/config/target/loopback/naa.4931fa37-41ab-44bc-b472-5c8ea14a36b2, 448), err=No such file or directory
2024-01-30T21:58:26+09:00 NAS4 synoscgi_SYNO.Core.ISCSI.LUN_1_loop_mount[27910]: iSCSI:iscsi_configfs_rw.cpp:151:SYNOiSCSIConfigfsGetString open(/config/target/loopback/naa.4931fa37-41ab-44bc-b472-5c8ea14a36b2/tpgt_1/address, O_RDONLY), err=No such file or directory

 

 

As the tail log shows, the following processing is required.
This example uses my UUID, so the paths will differ depending on your environment.

 

mkdir -p /config/target/iscsi/iqn.4931fa37-41ab-44bc-b472-5c8ea14a36b2
chmod 777 /config/target/iscsi/iqn.4931fa37-41ab-44bc-b472-5c8ea14a36b2/tpgt_1/attrib/demo_mode_write_protect
mkdir -p /config/target/loopback/naa.4931fa37-41ab-44bc-b472-5c8ea14a36b2
chmod 777 /config/target/loopback/naa.4931fa37-41ab-44bc-b472-5c8ea14a36b2/tpgt_1/address

 

The above processing may be required for each VMM VOLUME.
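If you have several VMM volumes, the affected paths can be pulled straight out of the log instead of typing them one by one. The following is only a rough sketch of the same manual steps above (it is not the sanmanager-repair addon itself); it assumes root privileges and the /var/log/synoscgi.log messages shown earlier.

#!/bin/sh
# Sketch: recreate the /config/target entries that synoscgi reports as missing,
# mirroring the manual mkdir/chmod steps above. Run as root.
grep "No such file or directory" /var/log/synoscgi.log \
  | grep -o '/config/target/[^,)]*' \
  | sort -u \
  | while read -r path; do
      case "$path" in
        */demo_mode_write_protect|*/address)
          # These are files: make sure the parent directory exists,
          # then relax the permissions as in the chmod commands above.
          mkdir -p "$(dirname "$path")"
          [ -e "$path" ] && chmod 777 "$path"
          ;;
        *)
          # These are directories (iqn.* / naa.* entries): recreate them.
          mkdir -p "$path"
          ;;
      esac
    done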

Edited by Peter Suh

Sorry, but this looks like "wound dressing" rather than a fix or solution. If I understand correctly, when the NAS is rebooted or loses power, the VM will not start automatically after boot. Your addon will then detect the problem in the log and recreate the folder paths and permissions, but the VM will stay off until I start it manually. That is not a solution.

 

I still don't understand when this bug (the problem with loading modules for SAN Manager) appeared. Was it after the offline installer was added to M-Shell? I ran some tests: when I manually edit the menu.sh file, force the update tag to v1.0.0.0 and then rebuild the loader, everything works as intended (SAN Manager works, VMM starts, and I can run a virtual machine without any issue).

 

I really think you should go back to v1.0.0.0 as the "prod" repository and work on the "offline installation" in a "dev" repository.


21 minutes ago, shibby said:


 

What you seem to mean is that the latest version of mshell cannot be trusted unconditionally.


I don't know if you are aware of this, but I changed redpill-load/bundled-exts.json to force use of the mac-spoof addon.
This was in effect from December 28, 2023 to January 2, 2024.


https://github.com/PeterSuh-Q3/redpill-load/commit/87ecb48fba5bdeb602b9e03c7dc1e7acba6da491


https://github.com/PeterSuh-Q3/redpill-load/commit/0ed22a488cc2b9d7ec198ea4e3531875056ec35c


Users who built their loaders during those five days suffered damage to SAN MANAGER and VMM because of the forced mac-spoof.

After that, mac-spoof was never forced again.


After performing a clean installation with the latest version of mshell, I tested VMM on DS3622xs+. It worked without problems.


Also, Friend's prod and dev do not provide the version-switch functionality you want.
Their only purpose is to switch between the development and release versions of the redpill.ko kernel.


I'm sorry, but the only fundamental solution so far is a clean installation.
I apologize for causing you this pain through my mistake of applying mac-spoof without sufficient understanding.


I will continue to do as much as possible to restore SAN MANAGER and VMM while avoiding a clean installation, which is the last resort.
For now, there is nothing I can do for you other than "wound dressing".
As you said, it seems difficult to start VMM automatically within this "wound dressing".

Edited by Peter Suh

Now I understand a lot more.

You said "clean install", but what do you think: could switching to a different Synology model fix it? For example, my friend is now on DS920+. What if he rebuilds the loader for DS923+? Would that be treated as a clean install (if he selects installation without losing his data)?

 

 

 


11 hours ago, shibby said:


A migration that just switches to a different model won't help.

SAN MANAGER appears to work fine on the first boot after the migration completes, but it is lost again on reboot.

If you want to reinstall, going from DS920+ to DS923+ is possible after rebuilding the loader.

The DSM of DS923+ is reinstalled only into the system partition area, so it does not touch the data partition.

All settings are reset, so prepare a backup of the settings (.dss file) and of your packages, and use them for restoration after the reinstallation.

For detailed instructions, please refer to the Synology KB.


11 hours ago, shibby said:


However, there is one caveat.

For VMM, the real CPU platform and the DSM model's CPU platform must match.

Intel and AMD cannot be mixed.

Although VMM can run on the Intel-based DS920+ (Gemini Lake), it will not work on DS923+ (r1000), because that is an AMD-based DSM.


Hi all

 

First of all, a big thank you for your hard work, Peter. I am a total newbie in the TCRP world, and I must confess it is pretty awesome how easy it is to get it running.

 

I have bought a T-Bao R3 with an AMD Ryzen 7 5700.

 

I have installed tinycore-redpill.v1.0.1.0.m-shell.img.gz and it works well using the SA6400 model.

 

1/ My previous NAS was originally set up with a static IP, since it provides DHCP for my network. I tried setting a static address in the ip section of the user config, but the IP was never picked up.

It only ever shows a 169.254.x.x address, even though the static settings were correct, so TCRP Friend was not happy at all :(

I had to force DHCP from my box just for this.
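As a temporary check from the console, a static address can also be assigned by hand with standard iproute2 commands, just to see whether the link itself is fine. This is not an M-Shell feature, and the interface name and addresses below are placeholders for your own values.

# Placeholder values - replace eth0, the address and the gateway with your own.
ip addr flush dev eth0                 # drop the 169.254.x.x link-local address
ip addr add 192.168.1.10/24 dev eth0   # assign the intended static address
ip route add default via 192.168.1.1   # restore the default gateway
ip addr show eth0                      # verify the address actually took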

 

2/ All my USB devices work properly except the Bluetooth dongle.

 

I followed the topic here https://svrforum.com/nas/1109248 and tried to copy the .ko files into /lib/modules, but loading them with insmod fails:

 

root@nasbrock:/lib/modules# uname -a
Linux nasbrock 5.10.55+ #69057 SMP Fri Jan 12 17:02:57 CST 2024 x86_64 GNU/Linux synology_epyc7002_sa6400
root@nasbrock:/lib/modules# insmod /lib/modules/bluetooth.ko
insmod: ERROR: could not insert module /lib/modules/bluetooth.ko: Unknown symbol in module
root@nasbrock:/lib/modules#
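"Unknown symbol in module" usually means bluetooth.ko was loaded before the modules it depends on. A generic way to check, assuming modinfo is available on this DSM build, is to read the depends: line and insmod those modules first; the module names below are examples only, and the real list comes from the modinfo output.

# List the modules bluetooth.ko expects (the "depends:" line).
modinfo /lib/modules/bluetooth.ko | grep -i '^depends'

# Example order only - load whatever the depends: line lists, then bluetooth.ko.
insmod /lib/modules/rfkill.ko
insmod /lib/modules/ecdh_generic.ko
insmod /lib/modules/bluetooth.ko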

 

I see that you are injecting modules for synology_epyc7002_sa6400 in your script. Would it be possible to get Bluetooth support as well, or could you provide some guidance?

 

many thanks

 


Edited by fishton

17 hours ago, shibby said:


There was a little more progress today.
There is no need to go to the trouble of figuring out the UUID.
The problem is simply that the iscsi and loopback folders do not exist under /config/target, so VMM fails to create its directories beneath them.
The fact that iscsi and loopback are not created automatically is probably related to the SAN MANAGER damage.
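In other words, the check reduces to whether those two base directories exist. A minimal sketch of the first step the repair has to do, under the same assumption as above that they can simply be recreated under /config/target:

# Recreate the missing fabric directories if they are gone.
[ -d /config/target/iscsi ]    || mkdir -p /config/target/iscsi
[ -d /config/target/loopback ] || mkdir -p /config/target/loopback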


20 hours ago, shibby said:


The vmm-repair addon is now deprecated.

The single sanmanager-repair addon can now restore both SAN MANAGER and VMM, and it allows more stable operation and control than a service. It is implemented in the form of a boot-up scheduler.


https://github.com/PeterSuh-Q3/tcrp-addons/blob/main/sanmanager-repair/src/install.sh


Automatic startup of individual VMs within VMM is now possible.


https://github.com/PeterSuh-Q3/tcrp-addons/blob/main/sanmanager-repair/src/sanrepair.sh


On 1/31/2024 at 10:33 PM, shibby said:


 

As you pointed out, there were still remaining issues.

Among the command-line options applied since Friend kernel 0.1.0d, skip_vender_mac_interfaces appears to be the direct cause of the SAN MANAGER damage.

rr is prepared to use this option, but mshell does not seem to be able to do so.

I updated the Friend kernel to 0.1.0j to remove this option, and have now confirmed the stability of SAN MANAGER once again.
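To see which side of this fix a running system is on, one can simply check the kernel command line; this is a generic check, not an mshell command.

# Prints the option if the loader still passes it; no output means
# skip_vender_mac_interfaces is no longer on the kernel command line.
grep -o 'skip_vender_mac_interfaces[^ ]*' /proc/cmdline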

Edited by Peter Suh


I checked my friend's XPEnology yesterday and it didn't work. sanrepair.sh reported that the directory already exists, but I couldn't start the virtual machine...

I migrated his XPEnology to run under Proxmox and moved the virtual machine from Synology VMM to Proxmox, so for me the problem is "solved".

Edited by shibby

8 minutes ago, shibby said:


 

The VM should start automatically, without any adjustments, once sanrepair.sh has done its processing.

It would have been nice to track the contents of /var/log/synoscgi.log more closely in the error situation.

Now that you have already migrated to Proxmox, it will be difficult to investigate any further.


On 1/2/2024 at 8:36 PM, Orphée said:

By serial console I mean serial COM port console access:

via Telnet from Windows PuTTY on a bare-metal system, via serial-over-TCP with ESXi, or from Proxmox by adding a serial port (serial0) socket.


 

Here is the RR grub.cfg:

 

# cat grub.cfg 
insmod search
insmod echo
insmod terminal
insmod test
insmod font
insmod loadenv
insmod serial
insmod usb_keyboard
insmod linux
insmod gzio
insmod fat
insmod ext2

set default="boot"
set timeout="5"
set timeout_style="menu"
set vesa_mode=1

if [ -s $prefix/grubenv ]; then
  load_env
fi
if [ "${next_entry}" ]; then
  set default="${next_entry}"
  unset next_entry
  save_env next_entry
fi
if [ "${vesa_mode}" ]; then
  set vesa_mode=${vesa_mode}
fi

function load_video {
  if [ x$feature_all_video_module = xy ]; then
    insmod all_video
  else
    insmod efi_gop
    insmod efi_uga
    insmod ieee1275_fb
    insmod vbe
    insmod vga
    insmod video_bochs
    insmod video_cirrus
  fi
}

load_video
if loadfont unicode; then
  set gfxmode=auto
  insmod gfxterm
  terminal_output gfxterm
fi

set menu_color_normal=light-green/blue
set menu_color_highlight=black/green

if serial --unit=0 --speed=115200; then
  terminal_input --append serial_com0
  terminal_output --append serial_com0
fi

function set_gfxpayload {
  if [ ${vesa_mode} -eq 1 ]; then
    set gfxpayload=keep
  else
    set gfxpayload=text
  fi
}

set TERM=tty2
set RR_CMDLINE=""

search --set=root --label "RR3"
if [ -s /zImage-dsm -a -s /initrd-dsm ]; then
  if [ "${default}" = "direct" ]; then
    set timeout="1"
    menuentry 'Boot DSM kernel directly' --id direct {
      set_gfxpayload
      echo "cmdline:"
      echo "${dsm_cmdline}"
      echo "Loading DSM kernel..."
      linux /zImage-dsm ${dsm_cmdline}
      echo "Loading DSM initramfs..."
      initrd /initrd-dsm
      echo "Booting..."
      echo "Access http://find.synology.com/ to connect the DSM via web."
    }
  fi
  menuentry 'Boot DSM' --id boot {
    set_gfxpayload
    echo "Loading kernel..."
    linux /bzImage-rr console=${TERM} net.ifnames=0 ${RR_CMDLINE}
    echo "Loading initramfs..."
    initrd /initrd-rr
    echo "Booting..."
  }
  menuentry 'Force re-install DSM' --id junior {
    set_gfxpayload
    echo "Loading kernel..."
    linux /bzImage-rr console=${TERM} net.ifnames=0 ${RR_CMDLINE} force_junior
    echo "Loading initramfs..."
    initrd /initrd-rr
    echo "Booting..."
  }
fi

menuentry 'Configure loader' --id config {
  set_gfxpayload
  echo "Loading kernel..."
  linux /bzImage-rr console=${TERM} net.ifnames=0 ${RR_CMDLINE} IWANTTOCHANGETHECONFIG
  echo "Loading initramfs..."
  initrd /initrd-rr
  echo "Booting..."
}

if [ ${vesa_mode} = 1 ]; then
  menuentry 'Change vesa to text video mode' --id videomode {
    set vesa_mode=0
    save_env vesa_mode
    configfile ${prefix}/grub.cfg
  }
else
  menuentry 'Change text to vesa video mode' --id videomode {
    set vesa_mode=1
    save_env vesa_mode
    reboot
    configfile ${prefix}/grub.cfg
  }
fi

 

 

@Orphée

 

 

I have successfully adjusted TinyCore Linux so that menu.sh can be used after logging in as the tc user over the serial COM port, as you requested.
The four windows that normally appear on the monitor console can also be accessed separately through the COM port.
You need to log in as tc / P@ssw0rd.
After this feature is distributed, the automatic update of corepure64.gz must run once before it starts working.
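For reference, connecting to that serial console is the usual serial/COM workflow. These are generic examples; the device name, VM id and port are placeholders, not mshell-specific values.

# Bare metal, via a USB serial adapter on a Linux client:
screen /dev/ttyUSB0 115200

# Proxmox guest with a serial0 socket attached:
qm terminal 100

# ESXi serial-over-network (telnet://:port), from any telnet client:
telnet 192.168.1.50 5023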

 

 



4 hours ago, cferra said:

@Peter Suh, on SA6400, is it a known issue that connecting new drives while the machine is powered on does not work with LSI HBA cards?

I am able to see the drives when I reboot the machine, but not if I add drives while the machine is on.

These are LSI 9305-series cards.

 

You seem to be talking about hot-plugging SATA or SAS disks.

Device-tree-based models such as SA6400 do not seem to support hot-plug disk mapping.

Newly added disks only get their mappings processed during the reboot process.

On a genuine Synology, this step is probably unnecessary.


13 minutes ago, Peter Suh said:

 


Makes sense. On DS3622xs+ I was unable to build the array for some reason, though - an instant error when attempting it with 14 x 10 TB SAS disks.

On SA6400 this worked with no issue.

Is SA3600 better for a large build-out, or is it still something being tested?

Edited by cferra
