XPEnology Community

reclaiming / compacting unused VMDK space for backup purposes


raidsm


Hi all! 

 

Can somebody help me reduce/compact a DSM VMDK file from 30 GB down to what it's really using (about 1 GB)? Should I have made a snapshot right after DSM was installed with my packages?

 

I want to make a full backup of my virtual XPEnology guest in VMware Fusion (OS X host on an SSD). The thing is, XPEnology is using approximately 1 GB max, but my VMDK is about 30 GB as of now... I must have used that space for previously downloaded files that I deleted afterward. My Recycle Bin in DSM is turned off and empty. The backup matters because I use this machine for many things, including a Docker Homebridge setup, and I don't want to start over if my HD dies.

 

I've read that:

1 - a BTRFS (or EXT4) partition doesn't need to be "defragmented" like an NTFS partition in a Windows guest VM, it seems.

2 - you can't install VMware Tools on the DSM guest OS.

3 - as I use an SSD, I'm not convinced about using software that zeroes out all unused bytes (unnecessary wear?).

 

thanks for your help!

 

EDIT: I've run the DSM volume defragmentation and there's no change in the VMDK file size. The volume is at 1.02 GB but the VMDK file is still 30 GB.
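Since VMware Tools can't run inside DSM, the usual shape of the workaround is: fill the free space with zeros from inside the guest, then let the host compact the disk. The sketch below is not XPEnology-specific advice — `zero_free` is my own name, `/volume1` is the assumed DSM data volume, and the optional MiB cap exists only so it can be tried safely on a small directory first. Note it does write across the whole free area once, which is exactly the SSD-wear trade-off mentioned in point 3.

```shell
#!/bin/sh
# zero_free: fill a volume's free space with zeros, then delete the filler,
# so the host can later deallocate those blocks. Hypothetical helper name.
zero_free() {
    vol=${1:-/volume1}   # DSM's data volume by default (assumption)
    mb=${2:-0}           # optional cap in MiB; 0 = fill until the volume is full
    if [ "$mb" -gt 0 ]; then
        dd if=/dev/zero of="$vol/zerofill" bs=1M count="$mb" 2>/dev/null || true
    else
        # no count: dd stops on its own when the volume reports "disk full"
        dd if=/dev/zero of="$vol/zerofill" bs=1M 2>/dev/null || true
    fi
    sync                     # make sure the zeros actually hit the disk
    rm -f "$vol/zerofill"    # free the space again
}
```

After running this in the guest and shutting the VM down, Fusion's bundled `vmware-vdiskmanager -k /path/to/disk.vmdk` can then shrink the zeroed VMDK (path is an example).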

Edited by raidsm


If you used thick provisioning when creating the disk, this is the default VMware behaviour.

I ran into the exact same problem when installing DSM on a 1 TB HDD, thick provisioned.

 

I created another, smaller volume, added it to the XPE VM, moved my files over, and deleted the bigger one afterwards.

 

I did not use RAID mode inside XPE, as my NAS is already on a HW RAID, but from what I read you also just use a single drive.



You can zero out the free space in the VMDK from DSM, then use ESXi commands to shrink the VMDK. I have done this with Windows and Linux guests following the guide linked below, and I'm 99% sure I tried it with DSM in the past too and it worked, but use those commands with caution. Have a backup first. Hope this helps.

 

https://blah.cloud/infrastructure/zero-free-space-using-sdelete-shrink-thin-provisioned-vmdk/
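For the ESXi side of that guide (the linked article zeroes space with sdelete on Windows; on DSM the zeroing would be done with `dd` as a Linux guest), the reclaim step usually looks like the following. `punch_zero` is my own name and the datastore path in the example is made up; the guard means it only acts where `vmkfstools` actually exists.

```shell
#!/bin/sh
# punch_zero: ask ESXi to deallocate the zeroed blocks of a thin VMDK.
# Run on the ESXi host over SSH with the VM powered off. Sketch only.
punch_zero() {
    vmdk=$1
    if command -v vmkfstools >/dev/null 2>&1; then
        vmkfstools -K "$vmdk"   # "punch zero": frees blocks that are all zeros
    else
        echo "vmkfstools not found: run this on the ESXi host"
        return 1
    fi
}
# Example (path is hypothetical):
# punch_zero /vmfs/volumes/datastore1/xpenology/xpenology.vmdk
```

An alternative with the same effect is cloning to a fresh thin disk (`vmkfstools -i old.vmdk -d thin new.vmdk`), which skips zeroed blocks during the copy.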

Edited by ilovepancakes


Hi guys,

I have the same little problem. I have a Gen8 running ESXi 6 with XPEnology installed on a thin-provisioned disk. The DSM disk is 1 TB; I just uploaded about 900 GB of data to it and quickly received a low disk space alert.

 

[screenshot: DSM low disk space notification]

 

 

Now I've removed a lot of data and only 500 GB remains in my folders, but DSM tells me the used size is 888 GB. Why? The real data is 500 GB but DSM shows 888 GB and I don't understand it. If I try to clear some more space, the used size stays at 888 GB. It's driving me crazy :)


Hi guys, I SSH'd into the NAS and something is really strange, please see the picture below. The total volume size is 958 GB, OK.

Disk usage from Cloud Station is 812 GB; the NAS shared folder is 393 GB, which is fine; Surveillance is 123 GB, also fine. But if I add up the totals, the result is over 1 TB... that's impossible.

I think the issue is in the @cloudstation folder or something like that. Could you please help me?

 

[screenshot: du output over SSH showing per-folder disk usage]
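One reason the GUI numbers won't add up: DSM's File Station hides system folders whose names start with @ (Cloud Station's version history lives in @cloudstation), so a manual sum of the visible shares misses them. A quick way to see everything from SSH is sketched below; `top_usage` is my own name for it, not a Synology tool.

```shell
#!/bin/sh
# top_usage: size of every entry directly under a directory, smallest first.
# The shell glob * also matches the @-prefixed folders DSM hides in the GUI.
top_usage() {
    # -s: one total per entry, -h: human-readable, -x: stay on this filesystem
    du -shx "$1"/* 2>/dev/null | sort -h
}
# On DSM the volume root would be: top_usage /volume1
```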

Edited by reziel84

Finally I solved my issue. I followed these steps and now everything works correctly:

 

Logged in as admin via PuTTY after enabling SSH in Control Panel >> Terminal & SNMP >> check the box next to "Enable SSH service".


Linux commands I ran (you can copy and paste these into your PuTTY console):

sudo -i   <<< gives you root-level access

cd /volume1   <<< need to be on the right volume

du -h -d1   <<< this lists the space used in each directory; takes a while to run. That's dash, lowercase "d", and the digit one (max depth 1)

cd ./@cloudstation
dir
du -h -d1   <<<<< found out the "@sync" directory was the culprit with 1.4T used
cd ./@sync
dir
du -h -d1   <<<<< found out the "repo" directory was the culprit with 1.4T used
 

Then I stopped Cloud Station Server in my web browser.
 

Back in the PuTTY window, my location was >>> root@YOUR_SERVER_NAME:/volume1/@cloudstation/@sync#

rm -r repo   <<<< this deletes the whole directory and all subfolders and files

It freed up 1.4T.

Then back to the web browser, and I started Cloud Station Server again.

Everything seems to work perfectly.
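The manual steps above condense into one small helper; `purge_dir` is a hypothetical name, and on this system the target was /volume1/@cloudstation/@sync/repo. The deletion is irreversible, so stop Cloud Station Server first and check the `du` output before pointing it at anything:

```shell
#!/bin/sh
# purge_dir: show how much space a directory holds, then delete it.
# Sketch of the manual cleanup above; rm -r cannot be undone.
purge_dir() {
    target=$1
    [ -d "$target" ] || { echo "no such directory: $target"; return 1; }
    du -sh "$target"    # what is about to be freed (here: 1.4T in .../repo)
    rm -r "$target"     # delete the directory and everything under it
    echo "removed $target"
}
# e.g. purge_dir /volume1/@cloudstation/@sync/repo
```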

