XPEnology Community

Polanskiman

Administrator
  • Posts

    2,737
  • Joined

  • Last visited

  • Days Won

    120

Polanskiman last won the day on April 16 2024

Polanskiman had the most liked content!

Recent Profile Visitors

34,487 profile views

Polanskiman's Achievements

Guru Master (7/7)

549 Reputation

18 Community Answers

  1. - Outcome of the update: SUCCESSFUL - DSM version prior update: DSM 7.2.1-69057 Update 5 - DSM version AFTER update: DSM 7.2.2-72806 Update 2 - Loader version and model: ARC 24.2.3-next - Installation type: BAREMETAL - Gigabyte H97N-WIFI (rev. 1.0)
  2. DSM 7.2.2-72803 has been recalled by Synology. The new version, DSM 7.2.2-72806, supersedes it. DSM Updates Reporting: https://xpenology.com/forum/topic/70772-dsm-722-72806/
  3. You probably have to make a direct request to Synology now. I would not trust any download links you find on third-party websites.
  4. I was moving from DSM 7.2 to DSM 7.2.1, and I think the DSM 7 install I did last year was a clean one, although I can't be sure, so that could have been the cause. In fact, nuking the first partition was the smartest thing I could have done, since all it took was 2-3 hours of reconfiguring everything versus 3 days trying to debug something that was driving me nuts. Since I had a backup of the DSM configuration, and most of the app configurations are saved on your volume, it was just a matter of reinstalling all the apps and making sure everything was configured as I wanted.
  5. You have both been asked by Peter Suh not to derail the thread with the ARC loader, which is a separate loader. If you have questions or comments about that loader, create a new thread.
  6. Well, since I could not figure it out and had no feedback here, I decided to nuke the DSM partition (sdX1 of each drive). There, it's done. Three days trying to solve this. At least now it's all clean and new and the box can run again, although it still needs some configuring.
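     For reference, wiping that partition can look roughly like the sketch below. This is a hypothetical illustration, not the exact commands used here: it assumes a root shell on the loader's recovery console, that mdadm and dd are available, and that the DSM system partition is the first partition (sdX1) of each data drive, mirrored as the /dev/md0 RAID1 array. The drive letters are placeholders; double-check device names before running anything destructive.

         # Stop the RAID1 array that holds the DSM system partition.
         mdadm --stop /dev/md0

         # Zero the start of the first partition on each data drive
         # (sda1, sdb1, ... - adjust to the drives actually present).
         for part in /dev/sda1 /dev/sdb1; do
             dd if=/dev/zero of="$part" bs=1M count=16
         done

     These commands only touch the sdX1 system partition, not the data partitions; after a reboot, Synology Assistant should then offer a fresh DSM install.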
  7. The problem got worse. I can't even access DSM through the network anymore after a force reinstall. Looking at the console logs after the forced reinstallation, I get a bunch of errors such as:
     [FAILED] Failed to start Adjust NIC sequence. See "systemctl status SynoInitEth.service" for details.
     [FAILED] Failed to start Out of Band Management Status Check. See "systemctl status syno-oob-check-status.service" for details.
     [FAILED] Failed to start synoindex check if ... any synoindex-related packages. See "systemctl status synoindex-checkpackage.service" for details.
     I was able to access DSM by poking around and re-enabling the NICs from the command line (through a serial cable > console), but once I get into the DSM GUI I can see that DSM is not acting normally. So it looks like something got corrupted somewhere along the process. Does anyone know how I can recover from this? I would hate to have to nuke the DSM partition; that would force me to reinstall all apps and reconfigure everything, which would be a major pain.
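     For anyone in the same spot, bringing the network back up from the serial console can look like the sketch below. It is only an illustration: the interface name and addresses are placeholders that must match your LAN, and it assumes the ip and systemctl tools normally present on DSM 7.

         # Bring the first NIC up and give it a temporary address.
         ip link set eth0 up
         ip addr add 192.168.1.50/24 dev eth0
         ip route add default via 192.168.1.1

         # Or retry the service that failed at boot and read its log.
         systemctl restart SynoInitEth.service
         systemctl status SynoInitEth.service

     Either way this only restores access; it does not fix whatever corrupted the system partition in the first place.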
  8. OK, so I solved the "Failed to install DSM. Available system space is insufficient." error. I first deleted the upd@te directory entirely (which was not enough initially) and then /var/log. I then ended up with:

         SynologyNAS> mount /dev/md0 /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.4G    780.3M  65% /tmp/test

     which was enough to allow the pat file to be uploaded and deployed without any errors. Something I noticed: anything below 750M/700M is no good, because although the pat file is smaller than the available space, you need extra room for the file to be deployed after being uploaded. Below you can see how the space shrinks to 0 after the upload. I was nervous when I saw the available space reach 0 and was expecting, once again, the dreaded "Failed to install the file. The file is probably corrupt." The space was borderline enough.

         SynologyNAS> mount /dev/md0 /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.4G    780.3M  65% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.8G    383.7M  83% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      2.0G    117.4M  95% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      2.1G    108.0M  95% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      2.2G         0 100% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.8G    373.3M  83% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.8G    373.3M  83% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.8G    373.3M  83% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.8G    341.4M  85% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.9G    314.6M  86% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.8G    395.2M  82% /tmp/test
         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.8G    395.2M  82% /tmp/test
         SynologyNAS> Connection closed by foreign host.

     My problem number 1, where all volumes are crashed after a reboot, is still there. Anyone got any clues as to what is happening here and how I can solve it?
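     Summarised as a sketch, the cleanup amounts to something like the following, run from the same SynologyNAS> recovery shell as above. The paths follow the steps described in this post, but this is not an official procedure, so verify what you are deleting first.

         mkdir -p /tmp/test
         mount /dev/md0 /tmp/test          # DSM system partition
         rm -rf /tmp/test/upd@te/*         # leftover update payload
         rm -rf /tmp/test/var/log/*        # old logs
         df -h /dev/md0                    # aim for roughly 750M+ free
         umount /tmp/test

     The 750M figure is just the margin observed above; a larger pat file will need more headroom.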
  9. OK, so I was able to mount /dev/md0 and delete a .pat file that was in the @autoupdate directory. That said, I don't think I have enough space yet, since the DSM 7.2.1 pat file is over 400MB.

         SynologyNAS> df -h /dev/md0
         Filesystem                Size      Used Available Use% Mounted on
         /dev/md0                  2.3G      1.9G    296.6M  87% /tmp/test

     Question is, what else can be removed? I see plenty of data in 3 directories, namely upd@te, usr and var, but I am not sure what can safely be deleted:

         SynologyNAS> du -hs *
         4.0K    @autoupdate
         26.8M   @smallupd@te_deb_uploaded
         0       bin
         4.0K    config
         8.0K    dev
         4.8M    etc
         2.5M    etc.defaults
         4.0K    initrd
         0       lib
         0       lib32
         0       lib64
         4.0K    lost+found
         4.0K    mnt
         4.0K    proc
         28.0K   root
         24.0K   run
         0       sbin
         0       stopping
         4.0K    sys
         4.0K    tmp
         376.9M  upd@te
         1.2G    usr
         242.2M  var
         9.2M    var.defaults
         4.0K    volume1
         4.0K    volume2

     The upd@te directory contains the following:

         SynologyNAS> cd upd@te
         SynologyNAS> ls -la
         drwxr-xr-x    3 root  root       4096 Jan  1 00:00 .
         drwxr-xr-x   26 root  root       4096 Jan  1 00:01 ..
         -rw-r--r--    1 root  root    5273600 Sep 23  2023 DiskCompatibilityDB.tar
         -rw-r--r--    1 root  root        102 Sep 23  2023 GRUB_VER
         -rwxr-xr-x    1 root  root    1010425 Sep 23  2023 H2OFFT-Lx64
         -rw-r--r--    1 root  root       5998 Oct 12  2023 Synology.sig
         -rwxr-xr-x    1 root  root        678 Sep 23  2023 VERSION
         -rw-r--r--    1 root  root   24992123 Oct 12  2023 autonano.pat
         -rwxr-xr-x    1 root  root    8388608 Sep 23  2023 bios.ROM
         -rw-r--r--    1 root  root       2931 Oct 12  2023 checksum.syno
         -rw-r--r--    1 root  root       1302 Sep 23  2023 expired_models
         -rw-r--r--    1 root  root         55 Sep 23  2023 grub_cksum.syno
         -rw-r--r--    1 root  root  239517368 Sep 23  2023 hda1.tgz
         -rw-r--r--    1 root  root    6478104 Sep 23  2023 indexdb.txz
         -rwxr-xr-x    1 root  root     917504 Sep 23  2023 oob.ROM
         drwxr-xr-x    2 root  root       4096 Jan  1 00:01 packages
         -rwxr-xr-x    1 root  root      40610 Sep 23  2023 platform.ini
         -rw-r--r--    1 root  root    7157584 Sep 23  2023 rd.gz
         -rw-r--r--    1 root  root   22217596 Sep 23  2023 synohdpack_img.txz
         -rwxr-xr-x    1 root  root   16488320 Aug 30  2023 updater
         -rw-r--r--    1 root  root    3437904 Sep 23  2023 zImage
  10. I wanted to update to DSM 7.2.1, so I updated the loader first. I'm currently using the ARC loader. Once it was updated, all was fine: DSM upgraded fine and I was on DSM 7.2.1 until I rebooted. After the reboot, all volumes had crashed. I had already experienced something similar in the past, and a force reinstall had fixed it, so I did the same. It fixed it, except that when I reboot, all volumes crash again. I did this 4-5 times with the same outcome each time. The last time I tried, I wasn't able to upload the .pat file and was greeted with the dreaded:
     What's the deal here and how can I get out of this pickle? Thanks all.
  11. Multiple threads have been merged into this one.
  12. - Outcome of the update: SUCCESSFUL - DSM version prior update: DSM 7.1.1 42962 UPDATE 5 - Loader version and model: TCRP Friend v.0.9.4.9c withfriend - DS3622xs+ - Using custom extra.lzma: NO - Installation type: BAREMETAL - Gigabyte H97N-WIFI (rev. 1.0) - Additional comments: Burned updated loader to usb key. Built loader with DSM 7.1.1. Rebooted. Installed DSM 7.2. Rebooted. Loader detected new DSM version and updated accordingly.
  13. I was able to back up all my data first through SSH, as I wanted to make sure it was off these drives prior to trying anything. I used another loader as you suggested and was able to reinstall DSM. Now everything is working as it should! I suspect that during the initial DSM upgrade I did 2 days ago, something went wrong when I rebooted and some system files got corrupted somehow. Anyway, now I am back on track. Thank you all.
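     As an aside for anyone needing to do the same, pulling data off over SSH can be as simple as an rsync run from another machine. This is only a hypothetical illustration; the host address, user and paths are placeholders:

         rsync -avh --progress admin@192.168.1.50:/volume1/ /mnt/backup/volume1/

     rsync has to be available on the NAS side; otherwise scp -r over the same SSH connection works as a slower fallback.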
  14. I just realized I was using a pre-DSM 7 mount point, which is why it was not working. This time I used sudo mount -v /dev/mapper/cachedev_0 /volume1 and it worked. Volume1 and Storage Pool 1 instantly became healthy and all my data re-appeared. The strange thing, though, is that I can only access the data through SSH. Through the GUI, File Station is empty, all shared folders in the Control Panel are greyed out, and DSM says Volume1 is missing. Looks like the DSM system files are all screwed up.
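     For context, DSM 7 volumes typically sit on a device-mapper target rather than directly on an md array or LVM volume, so the mount has to point at /dev/mapper/cachedev_N. A rough way to check what is there before mounting (the device and volume names are examples, not guaranteed to match every box):

         ls /dev/mapper/                      # look for cachedev_* entries
         sudo dmsetup ls                      # confirm the mapping exists
         sudo mount -v /dev/mapper/cachedev_0 /volume1
         df -h /volume1                       # verify the data is visible

     Older-style mounts (e.g. /dev/md2 or /dev/vg1000/lv from pre-DSM 7 setups) are the kind of mount point that no longer applies here.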
  15. Once the repair was over, the storage pool still showed 'Warning' and the volume still showed 'Crashed'. I am hoping Flyride can give me a hand, as he is the godfather in these types of situations! Thank you for your answer, though.