XPEnology Community

Recommended Posts

Posted

I wanted to update to DSM 7.2.1, so I updated the loader first. I'm currently using the ARC loader.

Once updated, all was fine: DSM upgraded without issue and I was on DSM 7.2.1, until I rebooted. After the reboot, all volumes had crashed. I had already experienced something similar in the past, and a force reinstall had fixed it, so I did the same. It fixed it, except that when I reboot, all volumes crash again. I did this 4-5 times with the same outcome each time. The last time I tried, I wasn't able to upload the .pat file and was greeted with the dreaded:

Quote

Failed to install DSM. Available system space is insufficient.

Screen Shot 2024-02-08 at 18.50.39.jpg

 

What's the deal here and how can I get out of this pickle?

 

Thanks all.

Posted

Ok, so I was able to mount /dev/md0 and delete a .pat file that was in the @autoupdate directory. That said, I don't think I have enough space yet, since the DSM 7.2.1 .pat file is over 400 MB.

SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.9G    296.6M  87% /tmp/test
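What I did above can be sketched as a dry run; run() echoes each command instead of executing it (swap it to actually execute, as root, on the real box), and the /tmp/test mount point is arbitrary:

```shell
# Dry-run sketch of the recovery-shell cleanup above.
run() { echo "+ $*"; }                     # change to run() { "$@"; } to execute for real
run mkdir -p /tmp/test
run mount /dev/md0 /tmp/test               # /dev/md0 is the DSM system partition (RAID1)
run rm -f '/tmp/test/@autoupdate/'*.pat    # stale update payload from a failed attempt
run df -h /tmp/test                        # confirm the space came back
run umount /tmp/test
```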

 

The question is, what else can be removed? I see plenty of data in 3 directories, namely upd@te, usr and var, but I am not sure what can safely be deleted:

SynologyNAS> du -hs *
4.0K    @autoupdate
26.8M   @smallupd@te_deb_uploaded
0       bin
4.0K    config
8.0K    dev
4.8M    etc
2.5M    etc.defaults
4.0K    initrd
0       lib
0       lib32
0       lib64
4.0K    lost+found
4.0K    mnt
4.0K    proc
28.0K   root
24.0K   run
0       sbin
0       stopping
4.0K    sys
4.0K    tmp
376.9M  upd@te
1.2G    usr
242.2M  var
9.2M    var.defaults
4.0K    volume1
4.0K    volume2

 

The upd@te directory contains the following:

SynologyNAS> cd upd@te
SynologyNAS> ls -la
drwxr-xr-x    3 root     root          4096 Jan  1 00:00 .
drwxr-xr-x   26 root     root          4096 Jan  1 00:01 ..
-rw-r--r--    1 root     root       5273600 Sep 23  2023 DiskCompatibilityDB.tar
-rw-r--r--    1 root     root           102 Sep 23  2023 GRUB_VER
-rwxr-xr-x    1 root     root       1010425 Sep 23  2023 H2OFFT-Lx64
-rw-r--r--    1 root     root          5998 Oct 12  2023 Synology.sig
-rwxr-xr-x    1 root     root           678 Sep 23  2023 VERSION
-rw-r--r--    1 root     root      24992123 Oct 12  2023 autonano.pat
-rwxr-xr-x    1 root     root       8388608 Sep 23  2023 bios.ROM
-rw-r--r--    1 root     root          2931 Oct 12  2023 checksum.syno
-rw-r--r--    1 root     root          1302 Sep 23  2023 expired_models
-rw-r--r--    1 root     root            55 Sep 23  2023 grub_cksum.syno
-rw-r--r--    1 root     root     239517368 Sep 23  2023 hda1.tgz
-rw-r--r--    1 root     root       6478104 Sep 23  2023 indexdb.txz
-rwxr-xr-x    1 root     root        917504 Sep 23  2023 oob.ROM
drwxr-xr-x    2 root     root          4096 Jan  1 00:01 packages
-rwxr-xr-x    1 root     root         40610 Sep 23  2023 platform.ini
-rw-r--r--    1 root     root       7157584 Sep 23  2023 rd.gz
-rw-r--r--    1 root     root      22217596 Sep 23  2023 synohdpack_img.txz
-rwxr-xr-x    1 root     root      16488320 Aug 30  2023 updater
-rw-r--r--    1 root     root       3437904 Sep 23  2023 zImage

 

Posted

Ok, so I solved the "Failed to install DSM. Available system space is insufficient." error.

I first deleted the upd@te directory entirely (which was not enough on its own) and then the contents of /var/log. I then ended up with:

SynologyNAS> mount /dev/md0 /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.4G    780.3M  65% /tmp/test
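A minimal sketch of that cleanup, again as a dry run (run() just echoes; swap it to execute for real, as root, on the actual box):

```shell
run() { echo "+ $*"; }               # dry-run guard; use run() { "$@"; } to execute
run mount /dev/md0 /tmp/test
run rm -rf '/tmp/test/upd@te'        # leftover update payload (~377 MB in my case)
run rm -rf /tmp/test/var/log         # old logs, as deleted above
run df -h /tmp/test                  # aim for ~750 MB free or more
```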

which was enough to allow the .pat file to be uploaded and deployed without any errors.

 

Something I noticed: anything below ~700-750 MB free is no good, because even though the .pat file itself is smaller than the available space, you need extra room for the file to be deployed after being uploaded.
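That observation suggests a rough pre-flight check before uploading: demand about twice the .pat size in free space, since the file needs roughly its own size again while being deployed. A sketch (PAT_MB is an assumption based on the 7.2.1 .pat being a bit over 400 MB; on the NAS, point df at the md0 mount point instead of /tmp):

```shell
PAT_MB=400                    # approximate size of the DSM .pat to upload
NEED_MB=$(( PAT_MB * 2 ))     # upload + deployment headroom (rule of thumb)
FREE_MB=$(df -m /tmp | awk 'NR==2 {print $4}')   # use your md0 mount point here
if [ "$FREE_MB" -ge "$NEED_MB" ]; then
    echo "OK: ${FREE_MB} MB free, ${NEED_MB} MB needed"
else
    echo "Too tight: ${FREE_MB} MB free, ${NEED_MB} MB needed"
fi
```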

 

Below you can see how the free space shrinks to 0 after the upload. I was nervous when I saw the available space reach 0 and was once again expecting the dreaded "Failed to install the file. The file is probably corrupt." The space was borderline enough.

 

SynologyNAS> mount /dev/md0 /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.4G    780.3M  65% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.8G    383.7M  83% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      2.0G    117.4M  95% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      2.1G    108.0M  95% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      2.2G         0 100% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.8G    373.3M  83% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.8G    373.3M  83% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.8G    373.3M  83% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.8G    341.4M  85% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.9G    314.6M  86% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.8G    395.2M  82% /tmp/test
SynologyNAS> df -h /dev/md0
Filesystem                Size      Used Available Use% Mounted on
/dev/md0                  2.3G      1.8G    395.2M  82% /tmp/test
SynologyNAS> Connection closed by foreign host.

 

 

My problem number 1, where all volumes crash after a reboot, still persists.

 

Anyone got any clues what is happening here and how I can solve this?

Posted

The problem got worse. I can't even access DSM over the network anymore after a force reinstall. Looking at the console logs after a force reinstall, I get a bunch of errors such as:

 

[FAILED] Failed to start Adjust NIC sequence.
See "systemctl status SynoInitEth.service" for details.

[FAILED] Failed to start Out of Band Management Status Check.
See "systemctl status syno-oob-check-status.service" for details.

[FAILED] Failed to start synoindex check if ... any synoindex-related packages.
See "systemctl status synoindex-checkpackage.service" for details.

 

I was able to access DSM by poking around and re-enabling the NICs from the command line (through a serial cable > console), but once I got into the DSM GUI I could see DSM was not acting normally. So it looks like something got corrupted somewhere along the process.

 

Does anyone know how I can recover from this? I would hate to have to nuke the DSM partition. This would force me to have to reinstall all apps and reconfigure everything which would be a major pain.

Posted

Well, since I could not figure it out and had no feedback here, I decided to nuke the DSM partition (sdx1 of each drive). There, it's done. Three days trying to solve this shit.

At least now it's all clean and new, and the box can run again, although it still needs some configuring.
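For anyone wondering, "nuking" the sdx1 system partition on each drive can be sketched like this, shown as a dry run because the commands are destructive; the device names are hypothetical, the array should be stopped first, and mdadm/wipefs are one way to do it, not necessarily the exact commands I ran:

```shell
run() { echo "+ $*"; }   # dry run: echoes instead of executing; swap to "$@" deliberately
# Hypothetical member drives; the DSM system partition is partition 1 of each.
for disk in /dev/sda /dev/sdb; do
    run mdadm --zero-superblock "${disk}1"   # drop the partition from the md0 RAID1
    run wipefs -a "${disk}1"                 # wipe its filesystem signatures
done
```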

Posted (edited)

I'm thinking of moving my old J3455 (previously J1900) DSM 7.2-64570 installation to the ARC loader. The install surely predates DSM 6.0, maybe even DSM 5. (I'm wondering how to check the initial DSM version, by the way.)

 

In any case, I found info in the ARC wiki - https://github.com/AuxXxilium/AuxXxilium/wiki#important - that DSM 6 has a different partition layout; maybe this is related to your issue @Polanskiman

 

/dev/md0 of mine:

 df -h /dev/md0
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        2.3G  1.4G  822M  63% /

 

Edited by klez
Posted
On 2/18/2024 at 3:54 AM, klez said:

I'm thinking of moving to ARC loader of my old j3455 (before it was j1900) DSM 7.2-64570 installation - which was before dsm 6.0 surely - maybe even dsm 5 - I'm wondering how to check this btw - the initial dsm version.

 

In any case I found info in ARC wiki - https://github.com/AuxXxilium/AuxXxilium/wiki#important - that dsm v6 have different partitions layout, maybe this is related to your issue @Polanskiman

 

/dev/md0 of mine:

 df -h /dev/md0
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0        2.3G  1.4G  822M  63% /

 

 

I was moving from DSM 7.2 to DSM 7.2.1, and I think the DSM 7 install I did last year was a clean one, although I can't be sure, so that could have been it.

In fact, nuking the first partition was the smartest thing I could have done, since all it took me was 2-3 hours of reconfiguring everything vs 3 days trying to debug something that was driving me nuts. Since I had a backup of the DSM config, and most of the app configs are saved on your volume, it was just a matter of reinstalling all the apps and making sure everything was configured as I wanted.

  • 6 months later...
Posted

@Polanskiman and @klez

 

I am experiencing a similar issue... Went from Jun's Loader (DSM 6.x) trying to get DSM 7.x with various loaders. ARC Loader, TinyCore Redpill (original) and TinyCore Redpill M-Shell... Well, what can I say. It was a pain to do so. Ugh. I got those annoying messages like DSM installation failed, not enough space, damaged, etc.

Anyways, now I finally got TinyCore Redpill M-Shell running with DSM 7.1.x, but apparently all my settings, applications and so on are completely gone. Sigh. Data is luckily still there. I have backups of everything nevertheless (took me a couple of days to rsync everything to a temp server based on RAID 0; yeah I know, but I needed the space for the backups, so no other choice).

Anyways, I checked my partition:

Quote

XX@DiskStation:/$ df -Th
Filesystem             Type      Size  Used Avail Use% Mounted on
/dev/md0               ext4      2.3G  1.8G  440M  81% /
devtmpfs               devtmpfs  3.7G     0  3.7G   0% /dev
tmpfs                  tmpfs     3.8G  232K  3.8G   1% /dev/shm
tmpfs                  tmpfs     3.8G   16M  3.8G   1% /run
tmpfs                  tmpfs     3.8G     0  3.8G   0% /sys/fs/cgroup
tmpfs                  tmpfs     3.8G  748K  3.8G   1% /tmp
/dev/mapper/cachedev_0 ext4       19T  7.5T   11T  41% /volume1



I also found a command online for listing all Synology partitions.
Anyway:

Quote

XX@DiskStation:~# synopartition --list
Index  Version  1-Bay  Size of Root/Swap/Reserved Partition
   1.        1    YES    273042/  787185/ 273042 sectors (  133/  384/ 133 MB)
   2.        2    YES   4980087/ 1044225/ 257040 sectors ( 2431/  509/ 125 MB)
   3.        3    YES    530082/  787185/  16002 sectors (  258/  384/   7 MB)
   4.        5    YES    722862/  594405/  16002 sectors (  352/  290/   7 MB)
   5.        6    YES   4980087/ 4192965/ 257040 sectors ( 2431/ 2047/ 125 MB)
   6.        7    YES   4980480/ 4194304/ 262144 sectors ( 2431/ 2048/ 128 MB)
   7.        8    YES   4980480/ 4194304/ 260352 sectors ( 2431/ 2048/ 127 MB)
   8.        9    YES  16777216/ 4194304/ 262144 sectors ( 8192/ 2048/ 128 MB)
   9.        1     NO   1574307/  787185/ 273105 sectors (  768/  384/ 133 MB)
  10.        2     NO   4980087/ 1044225/ 257040 sectors ( 2431/  509/ 125 MB)
  11.        6     NO   4980087/ 4192965/ 257040 sectors ( 2431/ 2047/ 125 MB)
  12.        7     NO   4980480/ 4194304/ 262144 sectors ( 2431/ 2048/ 128 MB)
  13.        8     NO   4980480/ 4194304/ 260352 sectors ( 2431/ 2048/ 127 MB)
  14.        9     NO  16777216/ 4194304/ 262144 sectors ( 8192/ 2048/ 128 MB)



I also ran fdisk, but the output was a bit too long to list here, so I attached it (fdisk.txt).
No clue if that's useful for anyone.

Anyways, what's my best bet to proceed now? You mentioned nuking the DSM partition (sdx1 of each drive). A few questions about that:

 

  1. How did you nuke it? My data will be safe, I guess. I don't mind reinstalling DSM, as I currently have to redo all applications anyway.
  2. Everything also seems really slow, and installing and running e.g. Docker is already causing issues (it cannot start).
    1. For Docker I am getting the message "Cannot start packet service" or something like that (sorry, translated).
    2. Maybe there are permission errors because of the upgrade from DSM 6.x to DSM 7.x, who knows?
  3. What can you recommend? I just want to run DSM 7.x like before; I never had issues with DSM 6.x, but it was getting older and older.

In the meantime I will try to update DSM to 7.2. Perhaps another shot with the ARC loader; people seem to favour that one over TinyCore Redpill M-Shell, as far as I can tell.
Maybe it's even better to just delete everything and start fresh. I think I have backed up all my data and made several backups of the MariaDB database and all container settings (e.g. SabNZBD, Sonarr, Radarr, Hydra2, etc.).

Like you said, it will take quite some time to set everything up again, but it beats troubleshooting these issues. I also don't know if a clean/fresh installation of DSM 7.2.x will be different in terms of partitions, etc. I think I did a fresh install of DSM 6.x years ago (4 years ago, or even longer).

Anyways, I hope you or someone else can advise me. In the meantime I will try ARC Loader and DSM 7.2 and see what happens.

Thanks in advance.

Regards,
HHawk

fdisk.txt

Posted

I had a similar issue while updating to 7.2.2. The problem is the size of the system partition carried over from DSM 6: DSM 7 creates a larger 7.6 GB partition when installed from scratch. So my solution was to back up the NAS to another NAS, erase the whole NAS, and reinstall DSM, which created the new partition layout with a larger system partition.
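A rough way to check which layout a box is on (run on the NAS, where / is the md0 system partition; the 4000 MB threshold is just a heuristic separating the old ~2.4 GB root from the newer ~7.6 GB one):

```shell
# Root filesystem size in MB, taken from df's "Size" column.
ROOT_MB=$(df -BM / | awk 'NR==2 {sub(/M$/, "", $2); print $2}')
if [ "$ROOT_MB" -lt 4000 ]; then
    echo "old DSM 6-style system partition: ${ROOT_MB} MB"
else
    echo "newer large system partition: ${ROOT_MB} MB"
fi
```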

 

11 hours ago, HHawk said:

@Polanskiman and @klez

 

I am experiencing a similar issue... Went from Jun's Loader (DSM 6.x) trying to get DSM 7.x with various loaders. ARC Loader, TinyCore Redpill (original) and TinyCore Redpill M-Shell... Well, what can I say. It was a pain to do so. Ugh. I got those annoying messages like DSM installation failed, not enough space, damaged, etc.

[...]

fdisk.txt 8.39 kB · 0 downloads

 

  • 6 months later...
Posted (edited)

Hi!

 

After my flash drive failed and several attempts to install ARC (the "next" version, then the old one, because DSM didn't work correctly on "next" and I didn't want to reinstall it), I had the same problem:

Quote

Failed to install DSM. Available system space is insufficient.

But I found a solution on the ARC Wiki for freeing the system space. To do this, you need to:

  • Use a USB flash drive with the ARC Next version (I did this on version 2.0.2)
  • Boot into DSM Recovery Mode
  • Go to: http://ip:5000/webman/clean_system_disk.cgi
  • Done!
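For convenience, the recovery URL from the steps above, parameterised; NAS_IP is a hypothetical address and 5000 is DSM's usual HTTP port:

```shell
NAS_IP=192.168.1.50   # hypothetical: replace with the address DSM Recovery Mode shows
URL="http://${NAS_IP}:5000/webman/clean_system_disk.cgi"
echo "Open in a browser, or: curl \"$URL\""
```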

I hope this helps someone!

Edited by Stalnoff
Posted

Beforehand, you would normally take a larger data hard drive, set it up with an 8 GB partition scheme, mirror the old system partition onto it, and then expand the ext4 file system to 8 GB.
Otherwise, as happened in my case, a lot of data can be lost, because this isn't even captured in a full backup.
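That mirror-and-expand procedure might be sketched like this, with hypothetical device names (/dev/sdb1 = the old system partition, /dev/sdc1 = the new 8 GB one); shown as a dry run because the commands are destructive if actually executed:

```shell
run() { echo "+ $*"; }   # dry run; swap to "$@" only once you are sure of the devices
run dd if=/dev/sdb1 of=/dev/sdc1 bs=1M status=progress   # clone the old system partition
run e2fsck -f /dev/sdc1                                  # fsck is required before resizing
run resize2fs /dev/sdc1                                  # grow ext4 to fill the 8 GB partition
```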
