XPEnology Community

TinyCore RedPill Loader Build Support Tool (M-Shell)


Peter Suh

Recommended Posts

On 9/8/2024 at 10:00 AM, djvas335 said:

It does not resize system partition automatically, to resize you need to format the NAS and reinstall the DSM, the DSM installer will create the new partition layout for you.

 

 

 

I'm not sure whether the 7.2.2 update actually resized my md0 or not; I'd need developer confirmation. Or I might be wrong about all of this.

 

A few days ago I migrated from 918 (7.2.1u5) to 3622, but got a load of broken app dependencies because I used the 7.2.1u1 .pat; the apps weren't recoverable, and File Station was also broken even after I updated the OS with the u5 .pat file. In the end I force re-installed the OS with the 7.2.2-72806 .pat, but selected "Reset system configurations" to start "fresh".

 

IIRC I had the old 2G layout on md0 when I was on 918 (7.2.1u5). After I recovered most things with HB (Hyper Backup) and started poking around over SSH, I found that md0 had been resized to the 8G layout. I only had 2 RAID1 disk pools.

 

So I did 3 major things:

1. Migrated from 918 (7.2.1u5) to 3622. I used the 7.2.1u1 .pat and selected "Retain system configurations" on the migration page.

But apps were a broken mess even after I updated the OS with the u5 .pat file, so I decided to nuke the loader and move on to 7.2.2-72806.

 

2. The migration page came up; I used the 7.2.2-72806 .pat.

 

3. And selected "Reset system configurations" (keep the files only).

After the initial welcome page, setup, and misc restorations with HB, I SSHed into the NAS and found that md0 was 8G.

(I migrated from 6 to 7 and checked when I had a drive error a few months ago, so I'm sure it was 2G.)

 

Some people on Reddit report that 7.2.2 messed with their storage pools and/or their system partition, so it shouldn't be a coincidence.

 

Before(when I had a drive error in April, 918 with 7.2.1u4):

root@NAS:~# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdd3[0] sdf3[1](F)
      966038208 blocks super 1.2 [2/1] [U_]

md2 : active raid1 sde3[3] sdc3[2]
      1942790208 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdf2[0] sdd2[3] sde2[2] sdc2[1]
      2097088 blocks [12/4] [UUUU________]

md0 : active raid1 sde1[0] sdc1[3] sdd1[2] sdf1[12](F)
      2490176 blocks [12/3] [U_UU________]

 

Now(3622 with 7.2.2-72806):

root@NAS:~# cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sde3[3] sdc3[2]
      1942790208 blocks super 1.2 [2/2] [UU]

md3 : active raid1 sdd3[0] sdf3[2]
      966038208 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdc2[0] sdf2[3] sde2[2] sdd2[1]
      2096128 blocks super 1.2 [12/4] [UUUU________]

md0 : active raid1 sdc1[0] sdf1[3] sde1[2] sdd1[1]
      8387584 blocks super 1.2 [12/4] [UUUU________]

 

Now here's the question: how was there reserved space to begin with, and which step triggered the expansion?

(918 to 3622, the 7.2.2 update, or "Reset system configurations")
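Side note on the numbers: the mdstat "blocks" column is in 1 KiB units, so the two md0 figures above convert to roughly 2.4G and 8G. A self-contained sketch of the arithmetic (numbers copied from the two outputs above):

```shell
# mdstat reports md sizes in 1 KiB blocks; divide by 1024^2 for GiB.
old_blocks=2490176    # md0 before (918, 7.2.1u4)
new_blocks=8387584    # md0 now (3622, 7.2.2-72806)
old_gib=$(awk -v b="$old_blocks" 'BEGIN { printf "%.1f", b / 1048576 }')
new_gib=$(awk -v b="$new_blocks" 'BEGIN { printf "%.1f", b / 1048576 }')
echo "md0 before: ${old_gib} GiB, after: ${new_gib} GiB"
# → md0 before: 2.4 GiB, after: 8.0 GiB
```

So the "old 2G layout" was really about 2.4G, which matches the DSM 6 layout discussed below.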


On 9/17/2024 at 9:38 PM, vbz14216 said:

 


 

I don't think the system partition size was fixed at 2G when migrating from DSM 6 to DSM 7.
Since it is difficult to reproduce by reinstalling DSM 6 now, I am looking for evidence from that time.


DSM 7.2.1-69057 has an 8G + 2G system partition structure, as shown below.
This has not changed in DSM 7.2.2.
Can you tell me which apps specifically were affected when migrating between DSM versions?

 

[screenshots: 8G + 2G system partition layout]

On 9/17/2024 at 9:38 PM, vbz14216 said:

 


 

Based on this data from 2017, it seems DSM 6 was in use around that time, and the DSM 6 system partition structure appears to be 2.4G + 2G.

 

 


2 hours ago, Peter Suh said:

 


 

I started from 6.2.x, then upgraded to 7.x. Then 7.2.2 a few days ago.

 

Extra apps installed:

Active Backup for Business

Cloud Sync

Container Manager

Download Station(+Python2)

Hyper Backup(+PHP7.4)

Log Center (the add-on you have to install in order to get the extra features)

Snapshot Replication

Drive Server(+Node.js v20, Universal Search, Application Service and Universal viewer)

Tailscale(not from official repo)

 

This is from step 1, migrating from 918 (7.2.1u5) to 3622. I used 7.2.1u1 and selected "Retain system configurations" on the migration page.

I did click "repair" on the packages, but they were still broken; then I updated to u5 to see if that helped.

(I suspect downgrading and repairing with an older OS was the culprit. But these were just the small updates, so I was puzzled as well.)

 

As far as I can recall, these were broken:

Active Backup for Business (like a dummy window when you opened the app; the calendar for tasks was all green)

Hyper Backup (blank, no tasks, as if "newly installed")

File Station (showing no shared folders, a HUGE problem since File Station can't be uninstalled)

Clicking buttons in those apps returned "Unable to perform this operation, possibly because the network connection is unstable or the system is busy" (the generic error message in DSM 7).

 

Drive Server and Application Service showed "Stopped due to dependency issues" and couldn't be repaired from Package Center no matter how many times I tried. The shutdown process always warned "Active Backup for Business is performing a task..." or something similar, and I had to click Yes.

 

I didn't poke around with /@appdata or other "fix bad package" tricks (like removing the apps and reinstalling them), as I had decided to nuke my loader and move on to 7.2.2 with a configuration reset.

 

At step 3, where I was on 7.2.2 with configurations reset, I started to restore from HB. I noticed the HB tasks were all available as soon as I launched the newly installed app; it seems that not nuking /@appdata or toying around with install/uninstall did help keep the app settings.

 

Other apps almost worked without me reconfiguring them, including:

Active Backup for Business (no need to relink the database and tasks; a manual backup worked fine)

Cloud Sync was almost fine; I had 2 folder sync jobs and only 1 was not working (something like "Sync folder was not mounted yet.")

Container Manager (containers and projects ran immediately, no permission issues or lost images)

Log Center lost all previous logs

Snapshot Replication required manually entering the snapshot schedules again

Drive Server (team folder still there, no need for a resync or re-indexing, but the SSL cert had to be re-trusted from the PC app)

Tailscale needed the account re-authenticated

File Station worked fine; the shared folders were all available, which was a relief.


9 hours ago, vbz14216 said:

 


 

So you ended up resetting all the packages and making them usable again.
It would have been nice if you could have restored without damaging the packages mid-process.

 

I saw that RR's misc addon has a feature that reduces the chance of running out of system partition space during the migration process.
However, according to wjz304, this feature is only used during the recovery process and is not involved in a normal migration.
/dev/md0 seems to play a very important role in the migration process,
but Redpill has not done any in-depth analysis or research on this yet.

 

P.S.: However, I think I misunderstood something in my last analysis. There seems to be a solution for the insufficient-disk-space problem during migration.

 


On 9/19/2024 at 10:19 AM, Peter Suh said:

 


 

Yes, wjz304 also said those could be deleted if one is low on md0 space. But I'm still puzzled about what triggered the /dev/md0 expansion; too bad the RP research ended abruptly.

 

Curiosity killed the cat: I just did a force re-install of the OS with the "remove VERSION" method from the loader and selected "Retain system configurations" (awful idea, I know). All packages were broken again with the identical behavior described before. I tried the "install/remove a package" trick (AFAIK this triggers some kind of package-list refresh), but to no avail.

 

After a quick search, it turns out QuickConnect is critical to getting the Application Service package working in the 7.2.2 update. I had always disabled it (with a power-up script that disables it) since I don't need it. Starting the QC package made everything work automagically, followed by a reboot to make extra sure.

 

Extra note from #1354 on "almost worked":

The deduplication feature in ABB was broken, showing a 1.00x ratio.
And df -h started to include the versioning subvolume "agent" as /dev/loop0 (which should not happen; these subvolumes should be hidden away). I guess some of the btrfs magic broke after the system was messed up by "Reset system configurations". File restoration, incremental backup, and compression still worked, but no deduplication (I was very surprised by the resiliency).

I'm not sure what caused this, but deleting the ABB storage/database, clearing out the ABB shared folder, and redoing a backup did the trick.


12 hours ago, vbz14216 said:

 


 

If you are curious about the structure of /dev/md0, you can also mount it directly and inspect it.
In the case of TCRP, you can access the TTYD console via a web browser during the DSM installation phase.

TTYD uses port 7681, and the account is root with no password.

You can mount it briefly and check the contents with the following commands:
mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0/
cd /mnt/md0/

 

Perhaps, when changing to DSM 7, the trigger that expands the system partition from 2.4G to 8G is executed automatically from the script that runs first when Synology boots in junior mode.
If you want to analyze this as well, check the "linuxrc.syno.impl" file in the root path via TTYD access in junior mode.
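One way to hunt for that trigger is simply to grep the junior-mode init script for array-resize commands. A self-contained sketch (the file name comes from the post above, but the excerpt contents and the matched command names, mdadm --grow / resize2fs, are assumptions for illustration; on a real box you would grep the actual file from the TTYD console):

```shell
# Simulated excerpt of linuxrc.syno.impl so the sketch runs anywhere
# (contents assumed, not taken from a real DSM image).
cat > /tmp/linuxrc.syno.impl <<'EOF'
# ... junior-mode init, heavily abridged ...
mdadm --grow /dev/md0 --size=max
resize2fs /dev/md0
EOF
# Count lines containing candidate resize commands.
matches=$(grep -cE 'mdadm --grow|resize2fs' /tmp/linuxrc.syno.impl)
echo "found ${matches} candidate resize commands"
rm -f /tmp/linuxrc.syno.impl
```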

 


 

Originally, QC might be related to package operation on genuine hardware, but
XPE seems to have implemented an attempt to bypass this in the lkm (by the RP developers).

Recently, when installing Apollo Lake on DSM 7.2.2, there was a problem in the part where the BIOS-bypass process was set not to perform the genuine check (probably through QC).

@wjz304 and I analyzed /var/log/messages on junior, found the cause of the above phenomenon (which produces error code 21), and updated and recompiled the lkm for it.

https://github.com/RROrg/rr/issues/2607#issuecomment-2315487946

 

I don't know much about the ABB package because I have no experience with it,
but /dev/loop0 does seem to be present on junior.
I don't know whether /dev/loop0 appears because the ABB package is pre-installed,
but the default log shows it is not being used. It would be difficult to prove that this loop device, which is used for temporary virtualization and mounting, played a major role in destroying the system partition.


On 9/21/2024 at 5:53 PM, Peter Suh said:

 


 

 

 

Yeah, the QC issue affecting the Application Service dependency was weird; looks like there's a reason why $yno never provided a GUI switch to disable QC.

 

I remembered seeing "blocking firmware/BIOS update" in the RP research topic. Great work that the problem was quickly fixed by the devs!

https://github.com/RedPill-TTG/dsm-research/blob/master/quirks/hw-firmware-update.md

 

The /dev/loop0 and "agent" subvolume issue in ABB occurred after the system had successfully booted into D$M, not during the recovery/migration page. I believe I somehow broke ABB's way of mounting and hiding its own datastore (backup images). Just take it with a grain of salt.


@Peter Suh Hi, I have a small question after updating to 7.2.2.

Storage Manager asks me to update the drive database, but when I try "Update Now" it gives me an error (screenshot).

A manual update from a file works.

Is this a problem just for me, or is it normal?

mshell v1.0.4.7, DSM 7.2.2-72806

 


2 hours ago, Cornelius_drebbel said:


 

 

The original @007revad version has been at 3.5.101 for months,
and there doesn't seem to be a patch for 7.2.2.

https://github.com/007revad/Synology_HDD_db/releases

 

mshell and rr use the original 007revad version as-is and stay on the same final version, 3.5.101.

https://github.com/PeterSuh-Q3/tcrp-addons/blob/main/hdddb/src/hdddb.sh
https://github.com/RROrg/rr-addons/blob/main/hdddb/all/usr/bin/hdddb.sh

 

For now, we can only assume this phenomenon persists; if you perform a manual update, it will disappear.


@Cornelius_drebbel @Peter Suh

 

If hdddb was run with the -n or --noupdate option, it prevents DSM from downloading drive database updates (and drive firmware updates).

 

Updating the drive database by downloading the Offline Update Pack and doing a Manual Install in package center is the best solution. You would then need to run hdddb again or reboot.


1 hour ago, 007revad said:


 

For mshell and rr, I already checked that they use the -n option. Could the combined options added here have the wrong effect?

 

[screenshots: hdddb launch options in mshell and rr]

9 hours ago, 007revad said:

hdddb with the -n option is supposed to prevent drive database updates. So it's working as it should.

 

If it didn't block drive database updates your drives would change to unverified each time Synology updates the drive database.

 

@Cornelius_drebbel, @007revad

Sorry.
In the case of mshell and rr, where hdddb is added as a service due to the nature of Synology, the service is only activated after at least one reboot following the initial installation of DSM, and hdddb.sh is executed with the -n option during that process.
I did not recheck this.
There is no problem with the hdddb script.

 

[screenshots: hdddb service state before and after reboot]

 


In the case of rr, the addon repo is hidden so you cannot see it,
but a script almost identical to mshell's is used.


9 hours ago, 007revad said:


 

Perhaps, for XPE, if we add the following additional processing, it will apply immediately after installing DSM.

 

https://github.com/PeterSuh-Q3/tcrp-addons/blob/main/hdddb/src/hdddb.sh#L559C1-L559C39

 

synoinfo="/etc.defaults/synoinfo.conf"

->

synoinfo="/tmpRoot/etc.defaults/synoinfo.conf"

 

https://github.com/PeterSuh-Q3/tcrp-addons/blob/main/hdddb/src/hdddb.sh#L2097C9-L2097C69

 

/usr/syno/bin/synosetkeyvalue "$synoinfo" "$dtu" "127.0.0.1"

->

/tmpRoot/usr/syno/bin/synosetkeyvalue "$synoinfo" "$dtu" "127.0.0.1"
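The two changes above are just a path-prefix rewrite so the script touches the installed system under /tmpRoot instead of the junior ramdisk. A minimal sketch of the same substitution applied with sed to a scratch copy (the two lines are the ones quoted above; the temp file is only for illustration):

```shell
# Apply the /tmpRoot prefix to the two lines from hdddb.sh, on a temp copy.
f=$(mktemp)
cat > "$f" <<'EOF'
synoinfo="/etc.defaults/synoinfo.conf"
/usr/syno/bin/synosetkeyvalue "$synoinfo" "$dtu" "127.0.0.1"
EOF
sed -i -e 's|"/etc.defaults|"/tmpRoot/etc.defaults|' \
       -e 's|^/usr/syno/bin/|/tmpRoot/usr/syno/bin/|' "$f"
line1=$(sed -n 1p "$f")
line2=$(sed -n 2p "$f")
printf '%s\n%s\n' "$line1" "$line2"
rm -f "$f"
```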

 

 


2 hours ago, 007revad said:

 

Thanks for the quick fix. However, it seems the results are not going the way we want.
First of all, since ash is used instead of bash in the junior state, the check routine breaks.
So I changed it to modify synoinfo.conf directly, as follows.

 

if [ "${1}" = "late" ]; then
  echo "Installing addon hdddb - ${1}"
  cp -vf hdddb.sh /tmpRoot/usr/sbin/hdddb.sh
  chmod +x /tmpRoot/usr/sbin/hdddb.sh

  echo "Add drive_db_test_url to synoinfo.conf"
  echo 'drive_db_test_url="127.0.0.1"' >> /tmpRoot/etc.defaults/synoinfo.conf
  echo 'drive_db_test_url="127.0.0.1"' >> /tmpRoot/etc/synoinfo.conf  
  #echo "Execute hdddb.sh with option n."
  #/tmpRoot/usr/sbin/hdddb.sh -n

  mkdir -p "/tmpRoot/etc/systemd/system"
  DEST="/tmpRoot/etc/systemd/system/hdddb.service"
  echo "[Unit]"                                    >${DEST}
  echo "Description=HDDs/SSDs drives databases"   >>${DEST}
  echo "After=multi-user.target"                  >>${DEST}
  echo                                            >>${DEST}
  echo "[Service]"                                >>${DEST}
  echo "Type=oneshot"                             >>${DEST}
  echo "RemainAfterExit=yes"                      >>${DEST}
  echo "ExecStart=/usr/sbin/hdddb.sh -nfreS"      >>${DEST}
  echo                                            >>${DEST}
  echo "[Install]"                                >>${DEST}
  echo "WantedBy=multi-user.target"               >>${DEST}

  mkdir -vp /tmpRoot/etc/systemd/system/multi-user.target.wants
  ln -vsf /etc/systemd/system/hdddb.service /tmpRoot/etc/systemd/system/multi-user.target.wants/hdddb.service
fi
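As a side note, the echo chain that writes the unit file can be collapsed into a single heredoc, which works in ash as well as bash. A sketch writing the same unit to a temp file (the real script would keep the /tmpRoot destination):

```shell
# Same hdddb.service content as above, written with one heredoc.
DEST=$(mktemp)
cat > "${DEST}" <<'EOF'
[Unit]
Description=HDDs/SSDs drives databases
After=multi-user.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/hdddb.sh -nfreS

[Install]
WantedBy=multi-user.target
EOF
sections=$(grep -c '^\[' "${DEST}")   # count the [Unit]/[Service]/[Install] headers
echo "wrote ${sections} unit sections"
rm -f "${DEST}"
```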

 

 

The synoinfo.conf value below was confirmed immediately at the first boot of DSM,
but the updates were not successfully blocked, as shown in the captures.

[screenshots: drive_db_test_url present, but the database update was not blocked]

 

I confirmed that this script was used at some point in mshell and was omitted when the addon was replaced with hdddb.
I think the script was valid at that time, but I don't know why it is not blocking now.

 

https://github.com/PeterSuh-Q3/tcrp-addons/blob/main/syno-hdd-db/src/install.sh#L139

 

 


2 hours ago, 007revad said:

 

And this is another story.
As you can see in the capture below, there are too many drive_db_test_url entries defined in /etc/synoinfo.conf. Maybe one is being added every time I boot.
My SYNO boots once a day.

 


15 hours ago, Peter Suh said:

As you can see in the capture below, there are too many drive_db_test_urls defined in /etc/synoinfo.conf. Maybe they are being added every time I boot.
My SYNO boots once a day.

 

The following will add an extra drive_db_test_url="127.0.0.1" line every time:

echo 'drive_db_test_url="127.0.0.1"' >> synoinfo.conf

I use synosetkeyvalue instead of echo because synosetkeyvalue only adds the key if it does not exist, or sets the "127.0.0.1" value if the key exists.

 

Do /tmpRoot/etc.defaults/synoinfo.conf and /tmpRoot/etc/synoinfo.conf exist on every reboot?
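The difference is easy to demonstrate. In the sketch below, set_key is a stand-in that mimics the update-or-append behavior described for synosetkeyvalue (it is not the real tool): plain >> duplicates the key on every "boot", while the idempotent version leaves exactly one line.

```shell
conf=$(mktemp)
# Stand-in for synosetkeyvalue: replace the key if present, append otherwise.
set_key() {
    key="$1"; val="$2"; file="$3"
    if grep -q "^${key}=" "$file"; then
        sed -i "s|^${key}=.*|${key}=\"${val}\"|" "$file"
    else
        echo "${key}=\"${val}\"" >> "$file"
    fi
}
# Plain append, three simulated boots -> three duplicate lines.
for i in 1 2 3; do echo 'drive_db_test_url="127.0.0.1"' >> "$conf"; done
count_echo=$(grep -c '^drive_db_test_url=' "$conf")
# Idempotent set, three simulated boots -> still one line.
: > "$conf"
for i in 1 2 3; do set_key drive_db_test_url 127.0.0.1 "$conf"; done
count_setkey=$(grep -c '^drive_db_test_url=' "$conf")
echo "echo >>: ${count_echo} lines, set_key: ${count_setkey} line"
rm -f "$conf"
```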


15 hours ago, Peter Suh said:

The synoinfo.conf value below was immediately confirmed at the first boot of DSM,
but it was not successfully blocked as shown in the capture.


 

I don't know how VMware virtual drives work. The script would need to be able to get "VMWare" as the vendor, "Virtual SATA Hard Drive" as the model, and "0000000001" as the firmware version to be able to add the virtual drive to the drive database.


4 hours ago, 007revad said:

 


 

The issue above occurred in the version that still maintains the previous state (v3.5.101), before the following line was used:

echo 'drive_db_test_url="127.0.0.1"' >> /tmpRoot/etc.defaults/synoinfo.conf

 

 


Does /tmpRoot/etc.defaults/synoinfo.conf get recreated at each boot?

 

If yes, does it then get copied to /etc.defaults/synoinfo.conf ?

 

I haven't used echo 'drive_db_test_url="127.0.0.1"' >> /etc.defaults/synoinfo.conf since v2.2.45

 

synosetkeyvalue /etc.defaults/synoinfo.conf drive_db_test_url "127.0.0.1" should only create the key if it's missing.


1 hour ago, 007revad said:


 

 

If your story is correct, the leftover entries could be a record from before v2.2.45.

I'll clean up the drive_db_test_url entries in my synoinfo.conf, which is still on v3.5.101, and see how it goes.

I think using /tmpRoot in junior is out of the question, as it doesn't seem to have any effect at the moment, so I'll have to stop that script.

Also, there is still no way to trigger synosetkeyvalue in junior.
