XPEnology Community

DS1823xs+, DS1821+ and DS1621+ seem to have hard coded number of M.2 drives in DSM 7.2-64570-U3 and later


007revad


I'm trying to update my Synology_HDD_db script, but I've been having issues getting the E10M20-T1, M2D20 and M2D18 working in the DS1823xs+, DS1821+ and DS1621+ on DSM 7.2-64570-U2 and later. These 3 models all use a device tree, have 2 internal M.2 slots, and do not officially support the E10M20-T1, M2D20, M2D18 or M2D17.

 

Setting the power_limit in model.dtb to 100 for each NVMe drive does not work for these models. The only way I can get more than 2 NVMe drives working in the DS1823xs+, DS1821+ and DS1621+ is to replace these 2 files with the versions from 7.2-64570 (which I'd rather not do):

  • /usr/lib/libsynonvme.so.1
  • /usr/syno/bin/synonvme

 

There are also error messages in synoscgi.log, synosnmpcd.log and synoinstall.log.

 

With 3 NVMe drives installed the logs contain: 

nvme_model_spec_get.c:90 Incorrect power limit number 3!=2

 

With 4 NVMe drives installed the logs contain: 

nvme_model_spec_get.c:90 Incorrect power limit number 4!=2

 

I suspect that the DS1823xs+, DS1821+ and DS1621+ have the number of M.2 drives hard coded to 2 somewhere.
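The check behind these messages can be sketched as a hypothetical reconstruction: the binary seems to compare the number of installed NVMe drives against a per-model expected count which, given the behaviour above, appears to be baked into the binary rather than derived from the power_limit property. The function and variable names here are illustrative, not from Synology's code.

```shell
# Hypothetical reconstruction of the check behind
# "nvme_model_spec_get.c:90 Incorrect power limit number N!=M".
# The expected count appears to be hard coded per model (2 for these
# three models), not taken from the power_limit string itself.
check_nvme_count() {
    # $1 = number of NVMe drives installed, $2 = expected count
    if [ "$1" -ne "$2" ]; then
        echo "Incorrect power limit number $1!=$2"
        return 1
    fi
}
```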

 

 

If I restore these 2 files to the versions from 7.2.1-69057-U1:

  • /usr/lib/libsynonvme.so.1
  • /usr/syno/bin/synonvme

 

and add the default power_limit 14.85,9.075, the internal M.2 slots appear in Storage Manager and the fans run at normal speed.
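The device-tree edit can be sketched like this: decompile the dtb with dtc, rewrite the power_limit string, and recompile. The set_power_limit helper is an illustrative sed wrapper, not a DSM tool, and the dtc round trip assumes dtc is available on the box.

```shell
# Illustrative helper: rewrite the power_limit string in a decompiled
# device-tree source (.dts). Not a DSM tool -- just a sed wrapper.
set_power_limit() {
    # $1 = dts file, $2 = new comma-separated power limit string
    sed -i "s/power_limit = \"[^\"]*\";/power_limit = \"$2\";/" "$1"
}

# Typical round trip on the NAS (assumes the dtc binary is present):
#   dtc -q -I dtb -O dts -o /tmp/model.dts /run/model.dtb
#   set_power_limit /tmp/model.dts "14.85,9.075"
#   dtc -q -I dts -O dtb -o /run/model.dtb /tmp/model.dts
```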

 

The "Incorrect power limit" log entries are gone, but synoscgi.log now contains the following for NVMe drives in the E10M20-T1:

synoscgi_SYNO.Core.System_1_info[24819]: nvme_model_spec_get.c:81 Fail to get fdt property of power_limit
synoscgi_SYNO.Core.System_1_info[24819]: nvme_model_spec_get.c:359 Fail to get power limit of nvme0n1
synoscgi_SYNO.Core.System_1_info[24819]: nvme_slot_info_get.c:37 Failed to get model specification
synoscgi_SYNO.Core.System_1_info[24819]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme0n1

synoscgi_SYNO.Core.System_1_info[24819]: nvme_model_spec_get.c:81 Fail to get fdt property of power_limit
synoscgi_SYNO.Core.System_1_info[24819]: nvme_model_spec_get.c:359 Fail to get power limit of nvme1n1
synoscgi_SYNO.Core.System_1_info[24819]: nvme_slot_info_get.c:37 Failed to get model specification
synoscgi_SYNO.Core.System_1_info[24819]: nvme_dev_port_check.c:23 Failed to get slot informtion of nvme1n1

 

I get the default power_limit 14.85,9.075 for the DS1823xs+, DS1821+ and DS1621+ with: 

cat /sys/firmware/devicetree/base/power_limit | cut -d"," -f1

Apparently other models don't have /sys/firmware/devicetree/base/power_limit.
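Reading the property can be wrapped in a small helper like this (the path argument is only there so the function can be exercised outside DSM; the default path is the one shown above):

```shell
# Print the per-slot power limits from a device-tree power_limit
# property, one per line. Device-tree string properties are
# NUL-terminated, so the trailing NUL is stripped first.
get_power_limits() {
    prop="${1:-/sys/firmware/devicetree/base/power_limit}"
    if [ ! -e "$prop" ]; then
        echo "no power_limit property (not a device-tree model?)" >&2
        return 1
    fi
    printf '%s\n' "$(tr -d '\0' < "$prop")" | tr ',' '\n'
}
```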

Edited by 007revad
Link to comment
Share on other sites

You're having trouble installing more than two NVMe drives on your Synology DS1823xs+, DS1821+, or DS1621+ NAS. This is because these models have a hardcoded limit of two NVMe drives.

So I think there are two workarounds:

Replace the two files /usr/lib/libsynonvme.so.1 and /usr/syno/bin/synonvme with the versions from DSM 7.2-64570.

Use a custom device tree to override the hardcoded power limit.

 

That's all :)


Yep, I'm aware of the 14.85,9.075 power limit in /run/model.dtb and /sys/firmware/devicetree/base/power_limit.

 

I already edit the device tree to add support for the E10M20-T1, M2D20 or M2D18.
 

With 3 NVMe drives installed, if I set the power limit to 14.85,9.075,9.075 or 14.85,14.85,14.85 or 100,100,100, all the NVMe drives vanish from Storage Manager and synoscgi.log shows "Incorrect power limit number 3!=2".

 

With 4 NVMe drives installed, if I set the power limit to 14.85,9.075,9.075,9.075 or 14.85,14.85,14.85,14.85 or 100,100,100,100, all the NVMe drives vanish from Storage Manager and synoscgi.log shows "Incorrect power limit number 4!=2".

 

So somewhere in 7.2-64570-U1 it knows there should not be more than 2 M.2 drives in a DS1823xs+, DS1821+, or DS1621+.

 

The synoscgi.log shows:

 

synoscgi_SYNO.Storage.CGI.Storage_1_load_info[32509]: nvme_dev_is_u2_slot.c:31 Failed to get slot informtion of /dev/nvme2n1
synoscgi_SYNO.Storage.CGI.Storage_1_load_info[32536]: nvme_model_spec_get.c:90 Incorrect power limit number 3!=2
synoscgi_SYNO.Storage.CGI.Storage_1_load_info[32536]: nvme_model_spec_get.c:164 Fail to get power limit of nvme2n1
synoscgi_SYNO.Storage.CGI.Storage_1_load_info[32536]: nvme_slot_info_get.c:37 Failed to get model specification
synoscgi_SYNO.Storage.CGI.Storage_1_load_info[32536]: nvme_dev_is_u2_slot.c:31 Failed to get slot informtion of /dev/nvme2n1

 

I would rather my Synology_HDD_db script did not have to resort to using older versions of synonvme and libsynonvme.so.1, because the script is used by @AuxXxilium and @Peter Suh and maybe other xpenology devs.

 

The synonvme binary is identical in DSM 7.2-64570-U1, U2, U3 and 7.2.1-69057-U1, and unlike the original 7.2-64570 version it includes the power_limit handling:

 

synonvme  7.2-64570        ef36da23c30c17aeef6af943a958a124

synonvme  7.2-64570-U1     97d51425e6ac48ce1ed5fafd8810d359
synonvme  7.2-64570-U2     97d51425e6ac48ce1ed5fafd8810d359
synonvme  7.2-64570-U3     97d51425e6ac48ce1ed5fafd8810d359
synonvme  7.2.1-69057-U1   97d51425e6ac48ce1ed5fafd8810d359

 

 

libsynonvme.so.1 is identical in DSM 7.2-64570-U1, U2 and U3, but the 7.2.1-69057-U1 version is different:
 

libsynonvme.so.1  7.2-64570       2b41e149acdb7281f6145f9fae214285

libsynonvme.so.1  7.2-64570-U1    1726d502a568c0843d43a2d95bcc6566
libsynonvme.so.1  7.2-64570-U2    1726d502a568c0843d43a2d95bcc6566
libsynonvme.so.1  7.2-64570-U3    1726d502a568c0843d43a2d95bcc6566

libsynonvme.so.1  7.2.1-69057-U1  b4f3463cf353978209171b5fc5a4bc2c
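To check which build's binaries are actually installed on a box, the checksums above can be compared with a plain md5sum call (nothing DSM-specific is assumed here):

```shell
# Compare a file's md5 against an expected checksum, e.g. one of the
# values from the tables above.
md5_matches() {
    # $1 = file, $2 = expected md5 hex digest
    [ "$(md5sum < "$1" | cut -d' ' -f1)" = "$2" ]
}

# Example (expected value taken from the table above):
#   md5_matches /usr/syno/bin/synonvme 97d51425e6ac48ce1ed5fafd8810d359
```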

 

 


The models that inevitably require modification of libsynonvme.so.1, and are limited to only 1 or 2 mountable NVMe drives, are clearly listed in rr's script, which references my nvme-cache addon.


https://github.com/PeterSuh-Q3/tcrp-addons/blob/main/nvme-cache/src/install-nvme-cache.sh
https://github.com/wjz304/rr-addons/blob/main/nvmecache/all/usr/bin/nvmecache.sh


DS918+ RS1619xs+ DS419+ DS1019+ DS719+ DS1621xs+
This applies only to the 6 models above.


If you examine the file with a hex editor, you can see these model names and the PCIe addresses of the genuine Synology adapters recorded inside.


Other models depend either on the device tree or on configuration in the extensionPorts file.


The scripts adjust NVMe cache recognition according to these three mechanisms.


The DS1823xs+, DS1821+ and DS1621+ are device-tree based models, so the libsynonvme.so.1 file is not involved.
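Whether a given model is device-tree based can be probed with a rough check like this (the default paths are an assumption about where DSM exposes the tree; the arguments exist only so the check can be tested elsewhere):

```shell
# Rough check: device-tree based DSM models expose the flattened
# device tree under /proc/device-tree, and DSM keeps a compiled
# copy at /run/model.dtb.
is_dt_model() {
    # Optional $1/$2 override the default paths (useful for testing)
    [ -d "${1:-/proc/device-tree}" ] || [ -e "${2:-/run/model.dtb}" ]
}
```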


If you treat these cases as one complex problem rather than distinguishing them, confusion can arise.

Edited by Peter Suh

On 11/24/2023 at 2:34 PM, Peter Suh said:

The models that inevitably require modification of libsynonvme.so.1, and are limited to only 1 or 2 mountable NVMe drives, are clearly listed in rr's script, which references my nvme-cache addon.

https://github.com/PeterSuh-Q3/tcrp-addons/blob/main/nvme-cache/src/install-nvme-cache.sh
https://github.com/wjz304/rr-addons/blob/main/nvmecache/all/usr/bin/nvmecache.sh

DS918+ RS1619xs+ DS419+ DS1019+ DS719+ DS1621xs+
This applies only to the 6 models above.

If you examine the file with a hex editor, you can see these model names and the PCIe addresses of the genuine Synology adapters recorded inside.

Other models depend either on the device tree or on configuration in the extensionPorts file.

The scripts adjust NVMe cache recognition according to these three mechanisms.

The DS1823xs+, DS1821+ and DS1621+ are device-tree based models, so the libsynonvme.so.1 file is not involved.

If you treat these cases as one complex problem rather than distinguishing them, confusion can arise.

Thanks, this solution helped me too :) Bless you!


  • 6 months later...
On 11/24/2023 at 8:34 AM, Peter Suh said:

The models that inevitably require modification of libsynonvme.so.1, and are limited to only 1 or 2 mountable NVMe drives, are clearly listed in rr's script, which references my nvme-cache addon.

https://github.com/PeterSuh-Q3/tcrp-addons/blob/main/nvme-cache/src/install-nvme-cache.sh
https://github.com/wjz304/rr-addons/blob/main/nvmecache/all/usr/bin/nvmecache.sh

DS918+ RS1619xs+ DS419+ DS1019+ DS719+ DS1621xs+
This applies only to the 6 models above.

If you examine the file with a hex editor, you can see these model names and the PCIe addresses of the genuine Synology adapters recorded inside.

Other models depend either on the device tree or on configuration in the extensionPorts file.

The scripts adjust NVMe cache recognition according to these three mechanisms.

The DS1823xs+, DS1821+ and DS1621+ are device-tree based models, so the libsynonvme.so.1 file is not involved.

If you treat these cases as one complex problem rather than distinguishing them, confusion can arise.

@Peter Suh Currently I have built a DS1621xs+ with your loader from a few revisions back, and I'm looking to migrate to a DS1821+ while keeping my data. I noticed the CPU for the DS1621xs+ is reported at 800 MHz, not 4 GHz. The CPU I'm using is an i7-4790K on an MSI Z97-GD65 Gaming (LGA 1150, Intel Z97) with 32 GB of RAM.

Everything is working, even the power schedule. I have not tried the power button on the front, but I'm not concerned with that since I can shut it down via the web portal.

 

Any downsides to migrating?

 

What is the easiest way to migrate?

 

Thanks for all your hard work.

 

 


20 minutes ago, midiman007 said:

@Peter Suh Currently I have built a DS1621xs+ with your loader from a few revisions back, and I'm looking to migrate to a DS1821+ while keeping my data. I noticed the CPU for the DS1621xs+ is reported at 800 MHz, not 4 GHz. The CPU I'm using is an i7-4790K on an MSI Z97-GD65 Gaming (LGA 1150, Intel Z97) with 32 GB of RAM.

Everything is working, even the power schedule. I have not tried the power button on the front, but I'm not concerned with that since I can shut it down via the web portal.

 

Any down sides of Migrating?

 

What is the easiest way to migrate?

 

Thanks for all your hard work.

 

 

 

I don't know why you would want to take the risk of migrating.
There is no guarantee that the M.2 volumes will survive intact.
If you simply want to increase the bay count, try changing the panel size with the storagepanel addon.


The two models are on different platforms and manage disks and devices differently:
DS1621xs+ (broadwellnk, non-DT) vs DS1821+ (v1000, DT).
Do you want to deal with these differences?


But all change comes with risk.
If the loader is operating without problems, it is best not to change it frequently.


2 minutes ago, Peter Suh said:

 

I don't know why you would want to take the risk of migrating.
There is no guarantee that the M.2 volumes will survive intact.
If you simply want to increase the bay count, try changing the panel size with the storagepanel addon.

The two models are on different platforms and manage disks and devices differently:
DS1621xs+ (broadwellnk, non-DT) vs DS1821+ (v1000, DT).
Do you want to deal with these differences?

But all change comes with risk.
If the loader is operating without problems, it is best not to change it frequently.

Thank you for your quick response.

The main reason is that I'm running a few VMs and could use the extra power from the CPU.

Bays I am fine with, since your loader made that part very easy.

 

I am not using M.2 or SSDs, just the USB drive to boot.

 

My build is 2 mirrored drives on SATA 0 and 1; the other 4 drives (2 through 5) are in SHR.


18 minutes ago, midiman007 said:

Thank you for your quick response.

The main reason is that I'm running a few VMs and could use the extra power from the CPU.

Bays I am fine with, since your loader made that part very easy.

I am not using M.2 or SSDs, just the USB drive to boot.

My build is 2 mirrored drives on SATA 0 and 1; the other 4 drives (2 through 5) are in SHR.

 

If you are only using hard disks, there doesn't seem to be a big risk, but if possible, prepare a backup before doing this.

One more thing to note is that DT-based platforms such as v1000, r1000, and Gemini Lake do not yet support HBA (SAS controller) cards.

Edited by Peter Suh

4 minutes ago, Peter Suh said:

 

If you are only using hard disks, there doesn't seem to be a big risk, but if possible, prepare a backup before doing this.

One more thing to note is that DT-based platforms such as v1000, r1000, and Gemini Lake do not yet support HBA (SAS controller) cards.

Thank you so much for everything. Again, your hard work on the loader is fantastic.

It's really great and easy to use.

SAS is more of an enterprise thing, not something I use at home.

