XPEnology Community

Everything posted by Nucklez

  1. It has something to do with the bootloader that Windows is installing.
     * I booted the Windows VM off the Ubuntu 16.04 Desktop live CD.
     * I chose to install Ubuntu 16.04 Desktop on the same disk as Windows, shrinking the Windows partition to allow Ubuntu to fit.
     * After Ubuntu installed, I shut down the VM, unmounted the Ubuntu live CD and started the "Windows" VM.
     * This time, it booted into GRUB (I think that's what Ubuntu 16.04 is still using).
     * The default entry is Ubuntu, but I chose "Windows Vista" even though it's actually Windows Server 2016 Datacenter edition.
     * She booted up just fine into Windows.
     So, we have to figure out what Windows is doing to the boot loader that KVM/QEMU (aka Synology Virtual Machine Manager) doesn't like on our system.
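     If anyone wants to replicate that GRUB workaround, here is a rough sketch of the commands from inside the Ubuntu install. The Windows entry name below is only an example (yours will be whatever os-prober detects), and grub-set-default only takes effect if GRUB_DEFAULT=saved is set in /etc/default/grub:
     # Rescan the disks so os-prober adds the Windows install to the GRUB menu
     sudo update-grub
     # See which entries GRUB actually generated (Windows often shows up mislabelled as "Windows Vista")
     grep -E '^menuentry' /boot/grub/grub.cfg
     # Optionally make the Windows entry the default so the VM boots straight into Windows
     sudo grub-set-default 'Windows Vista (loader) (on /dev/sda1)'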
  2. I've messed around a bit with this when I can. I haven't gotten Windows to boot after being installed yet, but here are some more details.
     If you tail the log file in /var/log/libvirt/qemu that has the same ID as your VM when you run virsh list --all, you see this error a few seconds after the Windows VM boots:
     KVM internal error. Suberror: 1
     emulation failure
     EAX=00000200 EBX=0000aa55 ECX=00000007 EDX=00000080
     ESI=00007bd0 EDI=00000800 EBP=000007be ESP=00007be0
     EIP=00000684 EFL=00003202 [-------] CPL=0 II=0 A20=1 SMM=0 HLT=0
     ES =0000 00000000 ffffffff 00809300
     CS =0000 00000000 ffffffff 00809b00
     SS =0000 00000000 ffffffff 00809300
     DS =0000 00000000 ffffffff 00809300
     FS =0000 00000000 ffffffff 00809300
     GS =0000 00000000 ffffffff 00809300
     LDT=0000 00000000 0000ffff 00008200
     TR =0000 00000000 0000ffff 00008b00
     GDT=     00000000 00000000
     IDT=     00000000 000003ff
     CR0=00000010 CR2=00000000 CR3=00000000 CR4=00000000
     DR0=0000000000000000 DR1=0000000000000000 DR2=0000000000000000 DR3=0000000000000000
     DR6=00000000ffff0ff0 DR7=0000000000000400
     EFER=0000000000000000
     Code=7c 68 01 00 68 10 00 b4 42 8a 56 00 8b f4 cd 13 9f 83 c4 10 <9e> eb 14 b8 01 02 bb 00 7c 8a 56 00 8a 76 01 8a 4e 02 8a 6e 03 cd 13 66 61 73 1c fe 4e 11
     I've searched this error, and the main hit is one guy trying to do a passthrough for an NVIDIA card in KVM/QEMU (not Synology, just a bare Linux install). I've also seen a number of bug reports against an older Linux kernel version from around 2014. I could get the exact version if anyone is interested, but Google's first ten results for a search on "KVM internal error suberror 1" will provide all the information I've learned so far.
     I found that mounting the Ubuntu install CD and setting the VM to boot from the CD instead of a disk will in fact start the VM. If I pick "Boot from first hard disk" from the live CD menu, I get an error on the noVNC console that states "Booting from local disk... Boot failed: press a key to retry..."
     I am running an older Core 2 Quad Q9550, but I verified that KVM is compatible with it:
     # egrep -c '(vmx|svm)' /proc/cpuinfo
     4
     According to the KVM docs, as long as the result of that command isn't 0, I should be good. We don't have other helper packages installed, such as kvm-ok, so I couldn't test everything.
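     For anyone wanting to follow along, these are roughly the commands involved (the log file name is a placeholder; substitute whatever virsh list --all reports for your guest):
     # List the VMs that VMM has defined and their current state
     virsh list --all
     # Follow the matching QEMU log while the Windows VM powers on
     tail -f /var/log/libvirt/qemu/GUEST-ID.log
     # Confirm the CPU exposes hardware virtualization (a non-zero count is good)
     egrep -c '(vmx|svm)' /proc/cpuinfo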
  3. I'm in the same boat. Right now, I like the idea of running Synology on baremetal and using VMM for the VMs. I only have 8GB of RAM to play with, so running ESXi, a Windows VM and a Synology VM doesn't leave me much room to do anything else. Then again, running Synology on baremetal with a 4GB Windows VM doesn't leave Synology enough free RAM to boot a second VM with even 2GB, I suppose. With ESXi, you do get the ability to take snapshots of your Synology disk before running an upgrade, though, so there has to be some merit to that.
  4. I'm having the same issue as you. noVNC no longer works after the first reboot at the end of the Windows installation. I started on DSM 6.1-something, upgraded to 6.1.2-15132 and then again to 6.1.3-1512, all on the same baremetal Dell, reinstalling the VM each time I upgraded DSM (VMM had an update at 6.1.2). No luck. I did notice that installing Ubuntu 16.04 Server works after all reboots.
     I SSHed to the XPEnology box, ran top and noticed it is using QEMU as the hypervisor. I've tinkered with QEMU/KVM on OpenStack before, but I can't remember much about it. I did notice that about five seconds after powering on, the Windows VM goes into a "paused" state. To watch this from the terminal, type "virsh list --all".
     The number one cause of this for qemu/virsh appears to be running out of disk space on the storage (according to my Google searches). I verified that my 100GB virtual disk fits just fine on the 500GB volume, and then recreated it on the 4TB volume just to make sure. No luck on either one. I lowered the RAM and CPUs from 4GB/2 cores to 1GB/1 core. Still no good. Note that the Ubuntu VM has 2 cores and 2GB of RAM.
     Sorry I don't have an answer yet, but I'll keep looking into it. I wanted to post this in case it sparks some interest.
     TL;DR: The Ubuntu VMs stay in the "running" state and noVNC works fine. The Windows VMs go to a "paused" state and noVNC doesn't work.
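     If you want to run the same checks, something along these lines should work (the guest name and volume path are placeholders for whatever your setup uses):
     # List all VMs and their state; the Windows guest flips to "paused" a few seconds after power-on
     virsh list --all
     # Or query one guest directly
     virsh domstate WINDOWS-GUEST-NAME
     # Rule out the out-of-space cause by checking the volume that holds the virtual disk
     df -h /volume1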