bmacklin

noVNC Virtual Machine Manager not working and SHR not an option DSM 6.1.2-15132


Running DSM 6.1.2-15132 on bare metal. Everything is working great except for two issues:

 

1. Virtual Machine Manager won't let me connect to my instances over noVNC. I can connect the first time during a VM installation, but once Windows reboots, noVNC doesn't reconnect and never connects again. I have been able to load the storage drivers during installation... after that first reboot, noVNC simply refuses to connect. I tried loading a VMDK of another virtual machine created elsewhere and it won't connect either.

2. SHR is missing as an option during storage creation. I tried this a few times with the same result each time: SHR is simply not available as an option.

 

Anyone else experience these issues?


Hi

 

For SHR, you need to do the following steps (see the command sketch after the list):

- connect via SSH

- get root access:

# sudo -i

- edit synoinfo.conf:

# vi /etc.defaults/synoinfo.conf

- comment out the line supportraidgroup="yes"

- add the line support_syno_hybrid_raid="yes"

- reboot, then SHR will be usable (be careful: all data on the disks will be lost)
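If you prefer to do it non-interactively, roughly the same edit can be scripted like this (a sketch using the key names above; double-check them in your own synoinfo.conf before rebooting, and keep the backup):

# sudo -i
# cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak
# sed -i 's/^supportraidgroup=/#supportraidgroup=/' /etc.defaults/synoinfo.conf
# echo 'support_syno_hybrid_raid="yes"' >> /etc.defaults/synoinfo.conf
# reboot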

 

For VNC: I haven't used VMM on bare metal.

 

 


Why did they disable SHR? I thought it was more flexible and probably better than standard RAID.


The new generation of NAS models doesn't officially support SHR.

SHR is more flexible and can use mixed-capacity drives, but it is also less stable than standard RAID (a bad drive can crash the entire volume).


The larger business devices, the DS3615xs and DS3617xs, natively support all standard RAID modes.

On the DS916+ image you can use SHR natively.

But you can enable it with the config change above.


I am using Virtual Machine Manager on my system.

I installed Windows 7 SP1 x86.

It's working fine so far, and noVNC is working too, but I turned on RDP because it is more flexible in my opinion.

But I don't think I can help you with your issue.

5 hours ago, nevusZ said:

Yeah, it was a long shot. By all accounts it should have been pretty straightforward to get VMM working, but for some reason it just doesn't work for me. I have reinstalled DSM twice now.

 

I also have two ethernet ports. Both are on the same subnet. I dedicated one for cluster-to-cluster communication and tried assigning my VMs to the other one, but no luck.


My second adapter has no IP.

They force you to have one, but I don't have a cluster, so there's no need for a second NIC.

Maybe try unplugging the cable from the second adapter and trying again.

On 7/25/2017 at 11:15 AM, bmacklin said:

I'm having the same issue as you. noVNC no longer works after the first reboot at the end of the Windows installation. I started on DSM 6.1-something, upgraded to 6.1.2-15132 and then to 6.1.3-1512, all on the same bare-metal Dell, reinstalling the VM each time I upgraded DSM (VMM had an update at 6.1.2). No luck. I did notice that installing Ubuntu 16.04 Server works after all reboots.

 

I SSHed to the xpenology box, ran top and noticed it is using QEMU as the hypervisor. I've tinkered with QEMU/KVM on OpenStack before, but can't remember much about it. I did notice that about five seconds after powering on, the Windows VM goes into a "paused" state. To watch this from the terminal, type "virsh list --all" (commands below). The number one cause of this for QEMU/virsh appears to be running out of disk space on the storage (according to my Google searches :smile:). I verified that my 100GB virtual disk fits just fine on the 500GB volume, and then recreated it on the 4TB volume just to make sure. No luck on either one. I lowered the RAM and CPUs from 4GB/2 cores to 1GB/1 core. Still no good. Note that the Ubuntu VM has 2 cores and 2GB RAM. Sorry I don't have an answer yet, but I'll keep looking into it. I wanted to post this in case it sparks some interest.
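For anyone who wants to check the same thing on their box, these are roughly the commands (the guest name is whatever virsh reports on your system):

# virsh list --all              (lists every guest and its state; the Windows one sits at "paused")
# virsh dominfo <guest-name>    (more detail on a single guest)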

 

TL;DR: The Ubuntu VMs stay in the "running" state and noVNC works fine. The Windows VMs go to a "paused" state and noVNC doesn't work.


I've messed around a bit with this when I can. I haven't gotten Windows to boot after being installed yet, but here are some more details. If you tail the log file in /var/log/libvirt/qemu that has the same ID as your VM from virsh list --all, you see this error a few seconds after the Windows VM boots (commands sketched below).
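Roughly how to get at that log (the exact file name follows the ID/name that virsh shows, so adjust the path for your VM):

# virsh list --all
# tail -f /var/log/libvirt/qemu/<your-vm-id>.log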

 

KVM internal error. Suberror: 1
emulation failure
EAX=00000200 EBX=0000aa55 ECX=00000007 EDX=00000080
ESI=00007bd0 EDI=00000800 EBP=000007be ESP=00007be0
EIP=00000684 EFL=00003202 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
ES =0000 00000000 ffffffff 00809300
CS =0000 00000000 ffffffff 00809b00
SS =0000 00000000 ffffffff 00809300
DS =0000 00000000 ffffffff 00809300
FS =0000 00000000 ffffffff 00809300
GS =0000 00000000 ffffffff 00809300
LDT=0000 00000000 0000ffff 00008200
TR =0000 00000000 0000ffff 00008b00
GDT=     00000000 00000000
IDT=     00000000 000003ff
CR0=00000010 CR2=00000000 CR3=00000000 CR4=00000000
DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
DR6=00000000ffff0ff0 DR7=0000000000000400
EFER=0000000000000000
Code=7c 68 01 00 68 10 00 b4 42 8a 56 00 8b f4 cd 13 9f 83 c4 10 <9e> eb 14 b8 01 02 bb 00 7c 8a 56 00 8a 76 01 8a 4e 02 8a 6e 03 cd 13 66 61 73 1c fe 4e 11

I've searched this error, and the main hits are one guy trying to do a passthrough of an Nvidia card in KVM/QEMU (not Synology, just a bare Linux install). I've also seen many bug reports against an older Linux kernel version from 2014 or so. I could get the exact version if anyone is interested, but Google's first 10 results for "KVM internal error suberror 1" will provide all the information I've learned so far.

 

I found that mounting the Ubuntu install CD and setting the VM to boot from the CD instead of the disk will in fact start the VM. If I choose "Boot from first hard disk" from the live CD, I get an error on the noVNC console that states "Booting from local disk... Boot failed: press a key to retry..."

 

I am running an older Core2 Quad Q9550, but verified that KVM is compatible with it.

# egrep -c '(vmx|svm)' /proc/cpuinfo

4

 

According to the KVM docs, as long as the result of that command isn't 0, I should be good. We don't have other packages installed, such as kvm-ok, so I couldn't test everything.


It has something to do with the bootloader that Windows is installing.  

* I booted the Windows VM off the Ubuntu 16.04 Desktop Live CD.

* I chose to install Ubuntu 16.04 Desktop on the same disk as Windows, shrinking the Windows partition to allow Ubuntu to fit.

* After Ubuntu installed, I shut down the VM, unmounted the Ubuntu Live CD and started the "Windows" VM.

* This time, it booted into GRUB (I think that's what Ubuntu 16.04 is still using :smile:).

* The default entry is Ubuntu, but I chose "Windows Vista" even though it's actually Windows Server 2016 Datacenter edition.

* She booted up just fine into Windows. So we have to figure out what Windows is doing to the boot loader that KVM/QEMU (aka Synology Virtual Machine Manager) doesn't like on our systems. (If you want Windows as the default GRUB entry so you don't have to pick it each boot, see the sketch after this list.)
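If anyone wants Windows as the default GRUB entry so it boots straight into it, the usual Ubuntu-side way is something like this (run from inside the installed Ubuntu; the entry title is just an example, use whatever grep actually prints on your install):

# grep menuentry /boot/grub/grub.cfg        (find the exact title of the Windows entry)
# vi /etc/default/grub                      (set GRUB_DEFAULT="<that Windows entry title>" or its menu index)
# update-grub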


This is amazing news and an awesome workaround! Please let us know if you figure out a way to boot Windows directly!


Any news on this?

I have the same problem and am testing the GRUB workaround as I write.

(AND IT WORKS!!!)

For me as a Linux noob, that's ~6GB of wasted Linux space in every Windows VM, plus the Linux install overhead. Not exactly smooth.

Is it possible to only install GRUB from a live CD or something like that, and set Windows as the first boot selection?

TIA and special thanks @Nucklez!

 

 

