
Installation on Proxmox (with physical disk assignment)


robvantol


Hi there,

 

I am new to XPEnology and I wanted to test an installation on Proxmox VE with my physical hard disks assigned to the XPEnology virtual machine.

My test has been successful so far, so here is a how-to:

 

=================================================================================================

READ THIS HOW-TO CAREFULLY BEFORE YOU START, OTHERWISE YOU MAY END UP WITH SOME EXTRA, UNNEEDED WORK

=================================================================================================

 

1. Download and unpack the XPEnology package for VirtualBox: http://yadi.sk/d/T3MbhAIK22OPL

 

2. Install VirtualBox on your PC (I did this on Linux Mint).

 

3. Create a virtual machine in Proxmox for XPEnology with the following settings:

Tab General:

--> Give the VM any name you want.

--> Remember the VM ID (we will need it later)

Tab OS:

--> Choose "Other OS Types"

Tab CD/DVD:

--> Choose: "Do not use any media"

Tab Hard Disk:

--> Storage: local

--> Size: 1GB

--> Format: Raw Disk Image (RAW)

Tab: CPU

--> Assign the cores you want

Tab: Memory

--> Assign as much memory as you want (I recommend a minimum of 1GB for good performance)

Tab: Network

--> Choose: Bridged mode

--> Model: Intel E1000

--> MAC Address: 00:11:32:08:D6:2A

Confirm your VM and let Proxmox create it (you do not need to start it yet)
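** Optional: roughly the same VM can also be created from the Proxmox shell with "qm create". This is only a sketch; the VM ID (100), the core count and the bridge name (vmbr0) are assumptions, so adjust them to your setup:

--> qm create 100 -name xpenology -ostype other -memory 1024 -cores 2 -net0 e1000=00:11:32:08:D6:2A,bridge=vmbr0 -ide0 local:1,format=raw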

 

4. Locate the .vdi file from the XPEnology package and note its location

 

5. Open a terminal window and go to the .vdi file location

 

6. Convert the .vdi to a RAW-based .img file with the following command:

--> VBoxManage clonehd --format RAW [XPEnology filename].vdi SynoBoot_Proxmox.img

 

7. Copy the file to your Proxmox machine over SSH:

--> scp SynoBoot_Proxmox.img root@[your proxmox ip]:/var/lib/vz/images/[your vm number]

 

8. Rename the original disk file

--> Log in via SSH ( ssh root@[your proxmox ip] )

--> cd /var/lib/vz/images/[your vm id]

--> ls (this will get you a list of files)

--> mv [the .raw disk file] backup-[the same filename]

 

9. Rename the SynoBoot file

--> mv SynoBoot_Proxmox.img [the original .raw filename]
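For example, a complete session for steps 7 / 8 / 9 might look like this (the IP address 192.168.1.50, the VM ID 100 and the disk filename vm-100-disk-1.raw are just examples; use the address, ID and filename you actually see on your system):

--> scp SynoBoot_Proxmox.img root@192.168.1.50:/var/lib/vz/images/100

--> ssh root@192.168.1.50

--> cd /var/lib/vz/images/100

--> mv vm-100-disk-1.raw backup-vm-100-disk-1.raw

--> mv SynoBoot_Proxmox.img vm-100-disk-1.raw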

 

==========================================================================================

THIS IS THE POINT FOR YOU TO DECIDE WHAT STORAGE YOU WANT TO ASSIGN TO YOUR XPENOLOGY VM

==========================================================================================

 

I chose to assign my two physical 500 GB disks to the VM. It is also possible to assign a virtual hard drive in Proxmox.

 

10. Prepare your physical disks for your XPEnology VM

--> login SSH ( ssh root@[your proxmox ip] )

--> fdisk -l (will get you a list of all the storage devices)

 

**Look carefully for your physical drives; you can check capacity and partitioning to be sure

**The hard drives must be completely empty, without any partitions on them

In my case my physical hard disks are "/dev/sdb1" and "/dev/sdc1"

 

--> fdisk /dev/sdb1 (you will be asked for a command; type n and accept all the given defaults; a sample session is sketched below)

--> Do the same for all your physical hard disks
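A sample fdisk session might look roughly like this (the exact prompts vary between fdisk versions; press Enter to accept each default and finish with w to write the partition table):

--> fdisk /dev/sdb1

Command (m for help): n (create a new partition)

Command action: p (primary)

Partition number (1-4): press Enter for the default

First / last sector: press Enter for the defaults

Command (m for help): w (write the table and exit)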

 

11. Assign the disks to your XPEnology VM

--> qm set [your vm id] -ide[storage index number] /dev/sdb1

You can find the storage index number in your Proxmox dashboard, under your XPEnology VM's Hardware tab.

Your local storage is most likely IDE0, so you can start at IDE1 and so on.

--> Repeat this step for all your physical hard drives ( note that the index number must be unique for each drive )
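With my two disks from above and a hypothetical VM ID of 100, that would be:

--> qm set 100 -ide1 /dev/sdb1

--> qm set 100 -ide2 /dev/sdc1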

 

12. Start your VM in Proxmox

--> it will say "booting kernel" (this is normal because there is no operating system yet)

 

13. Download Synology Assistant: http://www.synology.com/support/download.php?lang=enu&b=12%20bays&m=DS3612xs

 

14. Start Synology Assistant and let it search your network for your XPEnology VM

 

15. Right-click your XPEnology VM

--> Choose: Install

--> When prompted for the software, choose the .pat file from the XPEnology download package

--> Complete the remaining steps

** In my case the installation went wrong on the last step, "writing configuration files"

** As far as I could trace the problem, it is caused by the virtual machine setup: Synology itself is not able to restart the system.

** When this happens with your installation, it will brick the virtual disk image. The solution is to repeat steps 7 / 8 / 9.

** This will bring back your VM, and it should run without any problems after that, because DSM and its configuration are on your storage hard drives.

 

=================================================================================================

SOME WORK IN PROGRESS:

=================================================================================================

- Spinning down the physical hard drives ( Synology has no control over the hardware because it's a VM; Proxmox must spin down the hard drives )

- Backing up the XPEnology VM ( backing up from the Proxmox console has bricked my VM; it's really easy to recover with steps 7 / 8 / 9, but it's not as it should be )

- Updating DSM ( I tried to update DSM to 4.2.xxxx but it reported an error. This is really valuable to me because I do not want to re-install for every DSM update )

 

Any help with the last points is much appreciated.


==========================================

UPDATE

==========================================

HDD Spindown is working within Proxmox

Since the Synology VM can't control your hardware under Proxmox, Proxmox itself must spin down your hard drives after they have been idle for some time.

 

You can do this with "hdparm".

 

1. Install hdparm

--> Log in via SSH ( ssh root@[your proxmox ip] )

--> apt-get update

--> apt-get install hdparm

 

2. Now configure the idle time for spin-down (do this for each drive)

** In the original post above I assigned /dev/sdb1 and /dev/sdc1 to my VM **

 

--> hdparm -S 120 /dev/sdb

--> hdparm -S 120 /dev/sdc

 

** -S sets the standby (spindown) timeout; a value of 120 means 120 x 5 seconds = 10 minutes **

** In this case you don't specify a partition number of the drive, **

** because we want the whole drive to spin down: for example /dev/sdb or /dev/sdc etc. **
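** Note that settings made with hdparm this way are lost on reboot. On a Debian-based Proxmox install you should be able to make them persistent in /etc/hdparm.conf; a sketch for the same two drives: **

/dev/sdb {
    spindown_time = 120
}

/dev/sdc {
    spindown_time = 120
}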


I am having a go at this at the moment.

 

I found some different ways to do a couple of the steps.

 

For those without VirtualBox....

you can upload and convert the .vdi file from within Proxmox itself (saves you downloading and installing VirtualBox just to convert the image) :wink:

 

Rename your synoboot.vdi file to have an ISO extension

 

e.g. synoboot.iso or synoboot.vdi.iso

 

Then, within the Proxmox web interface, upload this image as an ISO file to your Proxmox CD-ROM store:

 

Click on your Proxmox host in the left-hand pane

Select the local data store

Click on Upload

Browse to your newly renamed file

 

Then SSH into the Proxmox host (or use the direct console if you have a keyboard / monitor attached),

log in,

and change to the ISO store directory:

 

cd /var/lib/vz/template/iso

 

From here, rename the file back to a .vdi extension

 

e.g.

mv synoboot.vdi.iso synoboot.vdi

 

Then we can use the qemu-img tool to convert the vdi file into a raw file

 

e.g.

qemu-img convert -O raw synoboot.vdi synoboot.raw

 

Now rename that file to the virtual machine's disk name, move it into the correct directory, and away you go.
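e.g. (assuming a VM ID of 100 and that the original disk file was called vm-100-disk-1.raw; check the real ID and filename on your system first)

mv synoboot.raw /var/lib/vz/images/100/vm-100-disk-1.raw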

 

Hope it helps

 

I am off to finish my setup (hopefully)

.


OK, I had a couple of niggling problems

 

but I overcame them

 

My Proxmox VE 2.3 install didn't seem to find the drives I gave the VM as IDE; I had to use SATA

 

so in my case I added 2 x fake SATA drives (passed through)

 

I also thought it best to reference disks by ID and not sdb, sdc, etc., as those names can and do change around sometimes....

 

so instead of using something like

 

qm set 100 -ide1 /dev/sdb

 

I used

 

qm set 100 -sata0 /dev/disk/by-id/ata-HUA721010KLA330-43W7625_42C0400IBM_GTA60PBHTURHE
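To find the right ID in the first place, list the stable names and check which sdX device each one points to, e.g.

ls -l /dev/disk/by-id/ | grep ata

(every entry is a symlink to the /dev/sdX device it currently maps to)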

 

I also tried removing a drive and replacing it with a larger one (twice) to simulate an HDD crash/failure etc.

 

Started with 2 x 50GB SSDs

 

Shut down the Synology VM

Shut down Proxmox

replaced 1 x SSD with a 1TB SATA drive

rebooted Proxmox

 

(this was the tricky bit) I had to look at Proxmox to figure out which HDD ID was which, and then redid the

qm set 100 -sata .............

command to replace the passed-through virtual HDD.

I started the VM back up and it detected right away that there was a problem, at which point I let it auto-repair itself (i.e. rebuild the mirror pair of 1 x 50GB SSD + 1 x 1TB SATA).

 

Once it had completed, I did the same for the second drive:

 

Shut down the VM

Shut down Proxmox

swapped the SSD for a 1TB SATA drive

rebooted

 

did the qm set 100 -sata ...... trick for the second 1TB drive,

restarted the VM,

and did the auto-recover

 

Once again it rebuilt the array.... but it ALSO expanded the array from roughly 44.5GB of space to 931GB :grin:

 

Next I will try adding a 3rd 1TB drive to see what happens.

 

.


BTW, I tried to update the software.... but that failed for me as well.

 

THOUGH

 

I noticed that, when first setting up the system using the Synology Assistant etc.,

the first time I performed it, it actually downloaded and installed the latest version (I think),

as I forgot to check the 3rd box and transfer it over from the downloaded files.

 

So maybe that's one way of getting the latest?

 

I will try that out later also,

 

.


  • 2 weeks later...

I tried to build a RAID-5 with 4 physical drives.

I assigned my hard drives by-id because a RAID-5 creates multiple partitions on each disk.

 

I managed to install and configure the system completely, but after one data copy to XPEnology, the RAID-5 set degraded almost immediately.

The Synology DSM web interface is now gone and I get a 500 Internal Server Error.

 

When I re-install Synology DSM, it shows a degraded RAID-5 which I cannot rebuild.

 

My setup:

Asus P8H77-I (ITX) mainboard

Intel Xeon E-1230V2 quad-core

8GB DDR3 memory

Passively cooled NVIDIA graphics card

1x 320GB hard drive (OS + VMs)

4x 2000GB hard drive (RAID-5, dedicated to XPEnology)

 

The only virtualization platform that runs a Synology RAID-5 reliably is VirtualBox.

The option that makes the difference is the fixed chipset driver, I think.

 

Is there an option for a fixed chipset in the configuration of a VM?


ESXi supports my hardware as of v5.1 U1, but ESXi doesn't have a web-based GUI.

The only way to manage the VMs is with the vSphere client, which requires Windows.

I don't want any Microsoft machine in my own network :smile: , not even a VM.

 

I was running the VM on Proxmox's KVM full virtualization. That could be the reason why the RAID set degrades.

Has anyone tried XPEnology on OpenVZ or on paravirtualized KVM?


Is there any reason why you forward only the first partition (sdb1 / sdc1) of a drive instead of the whole drive (sdb / sdc)?

It then should be possible to use synologys energy management.

 

Anyway, I tried your how-to but ended up with a kernel panic when booting.

Edit: I selected another CPU architecture (core2duo) and it booted fine.
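In case it helps others: the CPU type can also be changed from the shell; a sketch, again assuming a VM ID of 100:

qm set 100 -cpu core2duo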


  • 6 months later...

Another update

 

Tried to make myself a LARGE XPEnology Proxmox setup, but ran into some limitations

 

Case = Supermicro sc933t-r760b (15 x 3.5inch bays)

Motherboard = Gigabyte P55-UD3R

CPU = Xeon X3450

Memory = 8Gb

Dell H310 SAS Controller

 

Installed Proxmox onto a 640GB drive in bay 1

 

Made a new VM

Other OS Type

4GB memory

1GB IDE drive

 

Tried to assign 10 drives to the XPEnology VM... but here is where the problem lies

 

MAX SATA drives you can assign to a VM = 6

MAX IDE drives you can assign to a VM = 4

MAX SCSI drives you can assign to a VM = 16

 

Tried Trantor's 3810 repack v1.0

http://xpenology.com/forum/viewtopic.php?f=14&t=1700#p7980

 

But it didn't like something.

 

With 3 x 1TB IDE drives (can't use IDE0, as that's for the fake USB boot stick) and 6 x 1TB SATA drives assigned, the install goes well... it overwrites the synoboot-trantor-4.3-3810-v1.0.img, naturally

Replace the synoboot-trantor-4.3-3810-v1.0.img, reboot, and it fires up

It can see the drives, but complains about the volume being stuffed (because it was initially made from 9 x 1TB drives plus the 1GB boot .img)

 

So off to Storage Manager to remake the lot again.... this time skipping the 1GB boot .img

It seems to go fine.... but you can see it having trouble in the console output

 

Lots of

want_idx 5 index 6

want_idx 6 index 7

errors

 

and it's having trouble choosing which mode to talk to the drives in

 

some drives spit out

configured for MWDMA2/100

 

others spit out

configured for UDMA/133

 

some negotiate at 1.5Gbps, others 3.0Gbps

 

once the volume is complete,

shut down and restart...

it CRASHES with major EXT4 errors that I can't seem to fix.

 

Doing a df on the VM shows major weirdness going on

 

e.g. before any files are copied or anything is done

 

Volume = 7TB

Used = 4.5TB

 

Tried lots of different ways...

 

Made the VM without IDE drives, no difference

Made it with SCSI drives... can't boot

 

Maybe we need to compile proper virtio drivers for it to work efficiently?

 

Example errors

[   29.442391] ata1.00: configured for UDMA/100
[   29.442995] ata1: EH complete
[   29.443611] ata2.00: configured for UDMA/100
[   29.444307] ata2: EH complete
[   29.445022] ata3.00: configured for UDMA/100
[   29.445676] ata3: EH complete
[   29.446947] ata7.00: configured for MWDMA2
[   29.447551] ata7: EH complete
[   29.477663] netlink: 12 bytes leftover after parsing attributes.
[   35.974383] loop: module loaded
[   36.222399] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[   36.223222] ata1.00: irq_stat 0x40000001
[   36.223774] ata1.00: failed command: FLUSH CACHE EXT
[   36.224452] ata1.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[   36.224453]          res 41/00:00:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
[   36.226181] ata1.00: status: { DRDY ERR }
[   36.226905] ata1.00: configured for UDMA/100
[   36.227500] ata1.00: retrying FLUSH 0xea Emask 0x1
[   36.228200] ata1.00: device reported invalid CHS sector 0
[   36.228872] ata1: EH complete
[   36.250525] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[   36.251261] ata3.00: irq_stat 0x40000001
[   36.251825] ata3.00: failed command: FLUSH CACHE EXT
[   36.252453] ata3.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[   36.252453]          res 41/00:00:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
[   36.254268] ata3.00: status: { DRDY ERR }
[   36.254975] ata3.00: configured for UDMA/100
[   36.255570] ata3.00: retrying FLUSH 0xea Emask 0x1
[   36.256249] ata3.00: device reported invalid CHS sector 0
[   36.256947] ata3: EH complete
[   36.446143] findhostd uses obsolete (PF_INET,SOCK_PACKET)
[   59.517725] md: md0: resync done.
[   59.539980] md: resync of RAID array md1

 

Then later

 

[  205.264134] ata1.00: device reported invalid CHS sector 0
[  205.264817] ata1: EH complete
[  205.265589] ata2.00: device reported invalid CHS sector 0
[  205.266443] ata2: EH complete
[  205.267373] ata3.00: configured for UDMA/100
[  205.267934] ata3.00: retrying FLUSH 0xea Emask 0x1
[  205.268713] ata3.00: device reported invalid CHS sector 0
[  205.269447] ata3: EH complete
[  209.244813] ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[  209.245621] ata2.00: irq_stat 0x40000001
[  209.246175] ata2.00: failed command: FLUSH CACHE EXT
[  209.246824] ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[  209.246824]          res 41/00:00:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
[  209.248552] ata2.00: status: { DRDY ERR }
[  209.249259] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[  209.249960] ata3.00: irq_stat 0x40000001
[  209.250519] ata3.00: failed command: FLUSH CACHE EXT
[  209.251154] ata3.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[  209.251155]          res 41/00:00:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
[  209.252861] ata3.00: status: { DRDY ERR }
[  209.253810] ata2.00: configured for UDMA/100
[  209.254448] ata2.00: retrying FLUSH 0xea Emask 0x1
[  209.255185] ata2.00: device reported invalid CHS sector 0
[  209.277912] ata2: EH complete
[  209.278876] ata3.00: configured for UDMA/100
[  209.279454] ata3.00: retrying FLUSH 0xea Emask 0x1
[  209.280139] ata3.00: device reported invalid CHS sector 0

 

No volume

All drives recognised as SSD's with no SMART data (naturally)

[Screenshot: Xpenology-proxmoxerrors.png]

[Screenshot: Xpenology-proxmoxerrors1.png]

 

Later, the speed drops even more:

[  412.016564] ata3: EH complete
[  415.663388] ata3.00: limiting speed to UDMA/66:PIO4
[  415.664168] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
[  415.664966] ata3.00: irq_stat 0x40000001
[  415.665625] ata3.00: failed command: FLUSH CACHE EXT
[  415.666432] ata3.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[  415.666433]          res 41/00:00:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
[  415.668452] ata3.00: status: { DRDY ERR }
[  415.669174] ata3: hard resetting link
[  415.974170] ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[  415.975031] ata3.00: configured for UDMA/66
[  415.975575] ata3.00: retrying FLUSH 0xea Emask 0x1
[  415.976282] ata3.00: device reported invalid CHS sector 0
[  415.976964] ata3: EH complete
[  417.662362] ata1.00: limiting speed to UDMA/66:PIO4
[  417.663221] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
[  417.663932] ata1.00: irq_stat 0x40000001
[  417.664552] ata1.00: failed command: FLUSH CACHE EXT
[  417.665152] ata1.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[  417.665153]          res 41/00:00:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
[  417.666957] ata1.00: status: { DRDY ERR }
[  417.667567] ata1: hard resetting link
[  417.668175] ata2.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[  417.668895] ata2.00: irq_stat 0x40000001
[  417.669485] ata2.00: failed command: FLUSH CACHE EXT
[  417.670094] ata2.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[  417.670095]          res 41/00:00:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
[  417.671893] ata2.00: status: { DRDY ERR }
[  417.672827] ata2.00: configured for UDMA/100
[  417.673398] ata2.00: retrying FLUSH 0xea Emask 0x1
[  417.674059] ata2.00: device reported invalid CHS sector 0
[  417.674820] ata2: EH complete
[  417.975450] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 310)
[  417.976267] ata1.00: configured for UDMA/66
[  417.976933] ata1.00: retrying FLUSH 0xea Emask 0x1
[  417.977705] ata1.00: device reported invalid CHS sector 0
[  417.978377] ata1: EH complete
[  424.342755] EXT4-fs (dm-0): barriers disabled
[  424.453973] EXT4-fs (dm-0): mounted filesystem with writeback data mode. Opts: usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,data=writeback,oldalloc
[  424.532457] EXT4-fs (dm-0): re-mounted. Opts: usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl
[  424.543915] EXT4-fs (dm-0): re-mounted. Opts: (null)
[  424.564745] EXT4-fs (dm-0): re-mounted. Opts: (null)
[  435.077838] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
[  435.078634] ata3.00: irq_stat 0x40000001
[  435.079183] ata3.00: failed command: FLUSH CACHE EXT
[  435.079826] ata3.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0
[  435.079827]          res 41/00:00:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error)
[  435.082694] ata3.00: status: { DRDY ERR }
[  435.084111] ata3.00: configured for UDMA/66
[  435.084785] ata3.00: retrying FLUSH 0xea Emask 0x1
[  435.085699] ata3.00: device reported invalid CHS sector 0
[  435.086535] ata3: EH complete

 

Volume gets created again

[Screenshot: Xpenology-proxmoxerrors2.png]

 

Output of df before reboot

DiskStation> df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0               2451064    490516   1858148  21% /
/dev/md0               2451064    490516   1858148  21% /proc/bus/pci
/tmp                   2026972       324   2026648   0% /tmp
/dev/vg1000/lv       1913539412    246400 1913190612   0% /volume1

 

Massive errors on reboot, e.g.

[   26.178209] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14789 failed (2329!=0)
[   26.179321] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14790 failed (13801!=0)
[   26.180430] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14791 failed (57784!=0)
[   26.181539] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14792 failed (11912!=0)
[   26.182650] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14793 failed (64217!=0)
[   26.183770] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14794 failed (50729!=0)
[   26.185550] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14795 failed (4728!=0)
[   26.186748] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14796 failed (49097!=0)
[   26.187861] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14797 failed (27544!=0)
[   26.188973] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14798 failed (22376!=0)
[   26.190258] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14799 failed (33593!=0)
[   26.191380] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14800 failed (35083!=0)
[   26.192492] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14801 failed (23898!=0)
[   26.193606] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14802 failed (25002!=0)
[   26.194726] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14803 failed (46587!=0)
[   26.195809] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14804 failed (6218!=0)
[   26.196931] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14805 failed (52251!=0)
[   26.198052] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14806 failed (61675!=0)
[   26.199163] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14807 failed (9402!=0)
[   26.200273] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14808 failed (60298!=0)
[   26.201390] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14809 failed (16347!=0)
[   26.202499] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14810 failed (811!=0)
[   26.203601] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14811 failed (55162!=0)
[   26.204715] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14812 failed (31435!=0)
[   26.205833] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14813 failed (44698!=0)
[   26.206941] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14814 failed (37482!=0)
[   26.208076] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14815 failed (17979!=0)
[   26.209195] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14816 failed (34318!=0)
[   26.210315] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14817 failed (21087!=0)
[   26.211428] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14818 failed (28335!=0)
[   26.212533] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14819 failed (47870!=0)
[   26.213644] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14820 failed (5967!=0)
[   26.215430] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14821 failed (49950!=0)
[   26.216701] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14822 failed (65518!=0)
[   26.217816] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14823 failed (11199!=0)
[   26.218922] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14824 failed (58511!=0)
[   26.220044] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14825 failed (12510!=0)
[   26.221223] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14826 failed (3118!=0)
[   26.222335] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14827 failed (55423!=0)
[   26.223451] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14828 failed (30158!=0)
[   26.224570] EXT4-fs (dm-0): ext4_check_descriptors: Checksum for group 14829 failed (41375!=0)

 

Contradictory messages when logging in:

"Volume has been successfully created", but behind it the error "Volume crashed"

[Screenshot: Xpenology-proxmoxerrors3.png]

 

DiskStation> df
Filesystem           1K-blocks      Used Available Use% Mounted on
/dev/md0               2451064    493432   1855232  21% /
/dev/md0               2451064    493432   1855232  21% /proc/bus/pci
/tmp                   2026972       308   2026664   0% /tmp

 

[Screenshot: Xpenology-proxmoxerrors4.png]

 

.


I finally have my system running well, without any problems at all.

 

I added an extra storage controller to my system and configured passthrough to my XPEnology VM.

I also added an extra NIC, which is also forwarded to the VM.
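For reference, passing a whole controller through looks roughly like this (a sketch only: the PCI address 01:00.0 and the VM ID 100 are examples, and the host needs IOMMU / VT-d enabled for passthrough to work at all):

lspci (find your storage controller's PCI address)

qm set 100 -hostpci0 01:00.0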

 

I tried many setups with forwarded hard drives. They all crashed almost instantly.

After spending a lot of time searching on Google, I found a really obvious cause.

 

XPEnology is a dedicated system: it wants to control the hardware of your system.

Proxmox wants to do the same thing. The result is a conflict.

 

The conflict over the storage controller is the reason the RAID set was crashing and the disk I/O was really poor.

After the modifications I made, the performance is almost the same as on a system without virtualization.

 

Network throughput is stable at almost 80 MB/s and peaks up to 100 MB/s.

The VM has 2 CPU cores, which run between 5% and 40% usage, and memory usage is about 800 MB.


  • 2 weeks later...

Hi.

 

I have one question: which versions of XPEnology did you install, and what are the biggest hard drives you were able to connect?

 

I have a GL380 G5 with two HPSA P400 controllers, and I am doing exactly the same thing as you described in your last post, also on Proxmox.

