
Volumes disappear sometimes



I am using 2 disks in my NAS - 8TB & 6TB - with SHR (RAID-1) set up on them. Using the standard process, I created a 6TB Btrfs volume, which is now mirrored & managed by DSM. This, however, leaves 2TB of space unused on the 8TB disk.

 

Wanting to make use of this space, I used "fdisk", "mdadm", "vgcreate" & "lvcreate" to manually create a partition, RAID array, volume group & logical volume on it. By leaving the logical volume unformatted, I was able to get DSM to pick it up as a Crashed Volume.

 

Then, using Storage Manager, I deleted my manually created volume & re-made it with the Synology tools. At this point, I have two SHR storage pools on the same disk, both being picked up by DSM & Storage Manager.

 

However, the second SHR pool is not reliable and sometimes vanishes between reboots. Sometimes DSM picks it up and mounts my extra volumes normally, sometimes not. How do I fix this? What is the best way to ensure DSM picks up all my volumes? Can I force the boot detection process to pick them up?

 

 


I have a solution. I don't know why it works or how reliable it is, but for the past 3 reboots my disks have come back up.

 

I have found that re-writing the fstab with the correct ext4 entries ensures the volumes are auto-mounted correctly. Made a little script for this and set it to run at boot.

#!/bin/bash

#Echo into fstab
echo -e "none /proc proc defaults 0 0\n/dev/root / ext4 defaults 1 1\n/dev/vg3/volume_1 /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0\n/dev/vg3/volume_2 /volume2 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0\n/dev/vg2/volume_4 /volume4 btrfs auto_reclaim_space,synoacl,relatime 0 0" > /etc/fstab

#Log when the script runs
date >> /volume1/log-fstab.log
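
In case anyone wants to copy this: one way to get a script like this to run at boot (on DSM 6.x at least) is a small wrapper in "/usr/local/etc/rc.d/" - DSM calls every executable *.sh in there with "start" on boot and "stop" on shutdown. The paths below are just examples; keep the re-write script itself on the system partition, not on a /volumeX that might not be mounted yet.

#!/bin/bash
# Example boot hook: /usr/local/etc/rc.d/S99fixfstab.sh (remember to chmod +x it)
case "$1" in
    start)
        /usr/local/bin/fix-fstab.sh   # the fstab re-write script from above
        ;;
    stop)
        # nothing to undo on shutdown
        ;;
esac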

 

 

My guess is that "/etc.defaults/rc.volume" doesn't expect to find my volumes, as Synology allows an HDD to be part of only 1 SHR/LV, while my 8TB HDD is part of 2. But when the previous fstab has entries for the volumes, it loads them up just fine.

 

The only difference I can see is that when DSM itself re-writes the fstab at boot, my entries are gone on the next boot, whereas with this script simulating a manual rewrite, the changes stick for the next boot. I assume it has something to do with overlay-fs.

 

 

EDIT: Nope, volumes still disappear. Any help or ideas would be appreciated

Edited by Jseinfeld

Ok, so I did some more digging in the "/etc.defaults/rc" file and found that "/usr/syno/cfgen/s00_synocheckfstab" is the process wiping my fstab. Checking further, "s00_synocheckfstab", along with all the other "sXX_XXXXXXXXXX" files in that directory, is just a symlink to "/usr/syno/bin/synocfgen".
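
You can see this for yourself - the whole directory is just symlinks:

ls -l /usr/syno/cfgen/
readlink -f /usr/syno/cfgen/s00_synocheckfstab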

 

So I wiped "/usr/syno/bin/synocfgen" and replaced it with the script I had written above.

 

As you may have guessed, this hosed my DSM. On reboot I was getting "Permission Denied" errors on SSH & "You are not authorized to use this service." on the Web Console. 

 

I re-installed via the bootloader, selected "Migration" and gave it the latest PAT file. That seems to have corrected something somewhere. My volumes now show up on boot, and running "/usr/syno/cfgen/s00_synocheckfstab" no longer hoses the "/etc/fstab" file. Screenshot of my Storage Manager:

 

 

[Screenshot: Storage Manager, 2019-09-15]

 

 

 

 

TL;DR - Having weird issues with volumes not being auto-mounted and detected at times, due to a custom mdadm/LVM config? Hose your DSM by deleting the "/usr/syno/bin/synocfgen" file, then select re-install via the bootloader. That does some mumbo-jumbo that fixes things ..... hopefully

 

Edited by Jseinfeld

Nope, that ^^ is not the solution either. Even after the "migration", my volumes vanished when I rebooted some 2-3 hours later. At this point I checked "s00_synocheckfstab", and it was back to hosing the "fstab" file (removing the volume1/volume2 entries).

 

This time, I replaced the "s00_synocheckfstab" file (a symlink) with my own bash script (see post above), and that seems to survive extended reboots. Rebooted some 9 hours later and volumes 1 & 2 came back just fine.

 

----------------------------------------------------------------------------------------

GUIDE on how to make use of the "unused" space in a Synology setup

----------------------------------------------------------------------------------------

 

In case anyone reading this is wondering how to make use of the "unused" space as shown by the Synology RAID Calculator, here are the steps -
 

 - Create a partition on the disk with the unused space. I used fdisk as I am most comfortable with that program. Set the partition type to 8e (Linux LVM).

fdisk /dev/sda 
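
Inside fdisk the key sequence is roughly this (partition numbers depend on your existing layout, so double-check before writing anything):

n   (new partition - accept the defaults to use the remaining free space)
t   (change the type of the new partition - 8e / Linux LVM on an MBR disk; on a GPT disk pick the "Linux LVM" type from the list)
p   (print the table and double-check it)
w   (write the changes and exit)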

 - Create a single-disk linear md array on the new partition

mdadm --create /dev/md5 --force --level=linear --raid-devices=1 /dev/sda3
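
Check that the array came up:

cat /proc/mdstat
mdadm --detail /dev/md5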

- Create a Physical Volume 

pvcreate /dev/md5

 - Create a Volume Group

vgcreate vg3 /dev/md5

 - Create an LV

lvcreate --name volume_1 --extents 100%FREE vg3 
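
pvs, vgs and lvs should now show the new physical volume, volume group and logical volume:

pvs
vgs
lvs vg3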

 

!!!! Now Reboot !!!! - Very Important step

 

- Post boot-up, DSM should pick up the newly created logical volume as a Crashed Volume, since we left the LV unformatted.

 

- Using Storage Manager, delete this crashed volume & create volumes on the unused space as you wish. In my case, I had ~1.8TB of unused space, so I created 2 volumes of 1TB & ~800GB. I formatted them as "ext4", just to keep things simple and to avoid the Docker Btrfs bug (it fills up the disk with old sub-volumes).

 

- At this point, everything should be perfect. The newly created volumes should be working, and DSM should be saying it's all GOOD.

 

- But not so fast - the next reboot will hose the "fstab" file, and DSM will ignore your newly created volumes. To avoid this, we need to replace the "s00_synocheckfstab" file with our own script.

 

 - To do this, open an SSH session and navigate to "/usr/syno/cfgen/".

 - Rename "s00_synocheckfstab" to any other name (or delete it). Here is what I ran:

sudo mv /usr/syno/cfgen/s00_synocheckfstab /usr/syno/cfgen/DISABLED-s00_synocheckfstab

 

- Now to convert your "fstab" into a script. First, print out the "fstab" file (cat /etc/fstab) while all the volumes are being detected and mounted. It should look something like this:

none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg3/volume_1 /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0
/dev/vg3/volume_2 /volume2 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0
/dev/vg2/volume_4 /volume4 btrfs auto_reclaim_space,synoacl,relatime 0 0

- Copy this into a text editor and make these modifications -

--- add "#!/bin/bash"  as the first line of the script file

--- add "rw" to the "ext4" entries

--- remove the line breaks and replace them with "\n", so everything ends up on a single line

--- Start the line with - echo -e " - and end it with - " > /etc/fstab - (the quotation marks are needed; see the shortcut just below)
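
If you don't fancy doing the "\n" replacement by hand, something along these lines should print the echo line for you (you still need to add the "#!/bin/bash" line and the "rw" options yourself, and double-check the output before using it):

printf 'echo -e "%s" > /etc/fstab\n' "$(awk 'NR>1{printf "\\n"} {printf "%s",$0}' /etc/fstab)"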

 

- Now create the "s00_synocheckfstab" file and copy-paste the script you made above into it. The final script should look like this:

#!/bin/bash
echo -e "none /proc proc defaults 0 0\n/dev/root / ext4 defaults 1 1\n/dev/vg3/volume_1 /volume1 ext4 rw,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0\n/dev/vg3/volume_2 /volume2 ext4 rw,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0\n/dev/vg2/volume_4 /volume4 btrfs auto_reclaim_space,synoacl,relatime 0 0" > /etc/fstab

 

 - Mark your new script executable 

sudo chmod +x /usr/syno/cfgen/s00_synocheckfstab
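
To sanity-check it before trusting a reboot, run it once by hand and make sure the fstab comes out right:

sudo /usr/syno/cfgen/s00_synocheckfstab
cat /etc/fstab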


and that's it. The custom "s00_synocheckfstab" will probably need to be put back after system updates, but apart from that the extra volumes should work as normal DSM volumes and be manageable via the Web UI.

 

Enjoy!! 

 

Edited by Jseinfeld

Nope, even all of that is wrong. Somehow, the more aggressive I get with my changes, the less things work.

 

I reverted everything back to stock and started going through the "/var/log/messages" files. It seems "/usr/syno/bin/spacetool" is the culprit; every other boot or so, it outputs a line like this:

 

2019-09-15T02:40:43-07:00 pi-hole spacetool.shared: space_unused_block_device_clean.c:50 Remove [/dev/vg3]

 

If that line is not present, then further down I see confirmation of the volume mounting:

 

2019-09-15T02:43:39-07:00 pi-hole synocheckshare: synocheckshare_vol_mount.c:47 Export Share [INTERNAL] [/dev/vg3/volume_1] [/volume1]
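
If anyone wants to compare their own boots, these are the lines I was grepping for (the paths and VG names are from my setup):

grep -E 'spacetool|synocheckshare' /var/log/messages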

 

Attaching my complete "/var/log/messages" file for the last 2 days, in case that helps with troubleshooting.

 

 

How do I fix this, so that it stops behaving randomly?

 

messages.txt.zip

Edited by Jseinfeld
