XPEnology Community

JBark

Member
  • Posts: 16
  • Joined
  • Last visited
  • Days Won: 3

JBark last won the day on November 7 2019

JBark had the most liked content!


JBark's Achievements

Newbie (1/7)

Reputation: 6

  1. - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.1.7-15284 Update 2
     - Loader version and model: JUN'S LOADER v1.03b - DS3617xs
     - Using custom extra.lzma: NO
     - Installation type: BAREMETAL - HP MicroServer Gen8 with XEON 1265v2
     - Additional comments: Previous attempts to upgrade from 6.1 to 6.2 failed, as all services, including networking, would shut down a few seconds after logging in to DSM. I saw other posts mentioning an old .xpenoboot folder in / as the cause, so I deleted that and the upgrade worked fine. I've been running XPEnology on this same server for over 3 years now, so I'm not surprised there's a bunch of old crap lying around that could cause these problems.
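     For anyone hitting the same failure, the cleanup step is just a couple of commands over SSH (a minimal sketch, assuming the leftover folder really is /.xpenoboot as it was on my box; confirm it exists before deleting anything):

         # check whether the old 5.x boot folder is still present in the root filesystem
         sudo ls -la /.xpenoboot
         # if it is, remove it, then retry the 6.1 -> 6.2 upgrade
         sudo rm -rf /.xpenoboot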
  2. JBark

    DSM 6.2 Loader

    That's interesting, because I've also added that line to my synoinfo.conf file in 6.1. I'd completely forgotten about it, so I'll mess around with 6.2 again and see if this might be the trigger. Those scripts in /etc are suspicious as well, because they don't exist in 6.x, but it sure looks like /etc/upgrade.sh is being called on my system when I watch the first-boot output over VSP through iLO. I wonder if they're a holdover from 5.x that's accidentally being called during this upgrade and messing everything up. Oh well, it's a place to start.
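    If anyone else wants to check for the same leftovers before upgrading, something like this over SSH is enough (just a sketch; /etc/upgrade.sh is the one I saw being called, the rest is a generic search, not a confirmed list of culprits):

        # is the suspicious 5.x-era upgrade hook still there?
        ls -l /etc/upgrade.sh 2>/dev/null
        # look for any other stray shell scripts in /etc that mention an upgrade step
        grep -l -i "upgrade" /etc/*.sh 2>/dev/null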
  3. JBark

    DSM 6.2 Loader

    HP MicroServer Gen8 with E3-1265L V2 boots fine with the new loader and upgraded OK from 6.1 to 6.2, but I'm getting some odd behaviour when I log into DSM. If I don't mess with DSM, it runs fine, including the VM I have configured under VMM. The second I log into DSM, it shuts down every service and I lose the connection about 30 seconds later. It's not a hard crash, as I can watch the logs and see it shutting down/pausing every service that's running. Pretty odd. I'm guessing something didn't upgrade nicely from 6.1 -> 6.2; I doubt it's loader related. From /var/log/messages:

    2018-08-02T12:20:52+08:00 HPGEN8 synopkg: pkgtool.cpp:2284 chmod failed for /var/packages (Operation not permitted)
    2018-08-02T12:20:52+08:00 HPGEN8 synopkg: pkgtool.cpp:2293 chmod failed for /usr/local/etc/rc.d (Operation not permitted)
    2018-08-02T12:20:52+08:00 HPGEN8 synopkg: pkgstartstop.cpp:379 Failed to stop pkgctl-phpMyAdmin (err=-1) [0x0900 servicecfg_root_object_set.c:43]
    2018-08-02T12:20:52+08:00 HPGEN8 synopkg: pkgstartstop.cpp:379 Failed to stop pkgctl-PHP5.6 (err=-1) [0x0900 servicecfg_root_object_set.c:43]
    2018-08-02T12:20:52+08:00 HPGEN8 synopkg: pkgstartstop.cpp:379 Failed to stop pkgctl-MariaDB10 (err=-1) [0x0900 servicecfg_root_object_set.c:43]
    2018-08-02T12:20:52+08:00 HPGEN8 synopkg: pkgstartstop.cpp:379 Failed to stop pkgctl-Apache2.2 (err=-1) [0x0900 servicecfg_root_object_set.c:43]
    2018-08-02T12:20:52+08:00 HPGEN8 synopkg: pkgstartstop.cpp:379 Failed to stop pkgctl-FileStation (err=-1) [0x0900 servicecfg_root_object_set.c:43]
    2018-08-02T12:20:52+08:00 HPGEN8 synopkg: pkgstartstop.cpp:379 Failed to stop pkgctl-TextEditor (err=-1) [0x0900 servicecfg_root_object_set.c:43]
    2018-08-02T12:20:52+08:00 HPGEN8 synopkg: pkgstartstop.cpp:379 Failed to stop pkgctl-tvheadend (err=-1) [0x0900 servicecfg_root_object_set.c:43]
    2018-08-02T12:20:52+08:00 HPGEN8 synopkg: pkgstartstop.cpp:379 Failed to stop pkgctl-SynoFinder (err=-1) [0x0900 servicecfg_root_object_set.c:43]

    Tons of entries like that, followed by:

    2018-08-02T12:21:55+08:00 HPGEN8 ore.Desktop.Initdata_1_get: service_pause_all.c:173 Pause service [synosnmpcd] by reason of [unidentified] failed.
    2018-08-02T12:21:55+08:00 HPGEN8 ore.Desktop.Initdata_1_get: service_pause_all.c:173 Pause service [synogpoclient] by reason of [unidentified] failed.
    2018-08-02T12:21:55+08:00 HPGEN8 ore.Desktop.Initdata_1_get: service_pause_all.c:173 Pause service [crond] by reason of [unidentified] failed.
    2018-08-02T12:21:55+08:00 HPGEN8 ore.Desktop.Initdata_1_get: service_pause_all.c:173 Pause service [iscsitrg] by reason of [unidentified] failed.
    2018-08-02T12:21:55+08:00 HPGEN8 ore.Desktop.Initdata_1_get: service_pause_all.c:173 Pause service [atalk] by reason of [unidentified] failed.

    It continues like that until I lose the connection. If I check messages after the next boot, it has shut everything down like it's planning to reboot or something, but it never actually finishes.
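    If anyone else is seeing this, you can watch it happen live with a quick filter on the system log (just a sketch; the patterns simply match the two kinds of lines above):

        # tail the log and show only the package-stop failures and service-pause spam
        tail -f /var/log/messages | grep -E "Failed to stop pkgctl|service_pause_all"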
  4. Interesting thing to note is that ports 1&2 are SATA3 and ports 3&4 are SATA2. Googling around shows lots of other hits from people having issues with these new Gold drives not being detected, and in one case somebody reported they were only working at SATA2 speeds. I'd say there's probably a firmware bug in the drives that's affecting speed negotiation with the chipset at boot time, and the firmware that's loaded in AHCI vs RAID mode is different enough that it works. Some WD SATA drives have jumpers where you can force the drive back down to SATA2 speeds, but it doesn't look like this drive has any jumpers on it.
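     If you want to confirm what link speed a drive actually negotiated, a couple of quick checks from a shell will tell you (a sketch; /dev/sda is just an example device name):

         # the kernel log shows the negotiated rate per port (1.5, 3.0 or 6.0 Gbps)
         dmesg | grep -i "SATA link up"
         # smartctl reports the drive's maximum speed vs. the currently negotiated speed
         sudo smartctl -i /dev/sda | grep -i "SATA Version"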
  5. Yeah, as far as I can tell, it requires the 2nd interface, since it binds one NIC for management traffic (clustering), and the other NICs are for VM traffic. I don't think there's any reason why they couldn't do it all on 1 NIC, but requiring at least two means they can guarantee that VM traffic won't affect cluster traffic (and access to DSM itself), and vice versa. Would be kinda nice if they had an option to disable clustering, so you could run in single host mode, but I think this is designed mainly for the SMB side of things, and they probably want to get rid of as many points of failure as possible.
  6. I just switched to using the new Synology Virtual Machine Manager that's been in beta for a couple weeks: https://www.synology.com/en-global/beta ... ualization It's not anywhere near as full-featured as phpVirtualBox yet, but it's the usual qemu and libvirt that are common on Linux, and it runs my Windows VM just fine. I'll never have to worry about compatibility issues again when upgrading DSM, which is great.

     I would not recommend trying to migrate a VM from phpVirtualBox to VMM; it was an absolute nightmare. There are also some gotchas: you must have 2 or more LAN connections, and you can only put the VMs on BTRFS volumes. No workaround for either that I've seen.

     The problems I ran into were twofold. One, VMM doesn't seem to support UEFI yet, and my vbox VM was using UEFI and GPT partitions. Two, and this is the big problem, VMM only supports VirtIO SCSI for the drives, and it's virtually impossible to install those drivers cleanly in a pre-existing VM. If they were using the regular qemu setup, you could migrate over a VM, set the controller to IDE, and boot up fine the first time. Add a 2nd small drive using VirtIO SCSI to get Windows to properly install the drivers, then shut down, remove the 2nd drive, and change the primary over to VirtIO SCSI. However, this isn't possible with VMM.

     So here's what I did. First, I copied the vdi and vbox files from the NAS so I could fix everything up on my local PC, since it's much faster.

     UEFI to MBR: I imported into VirtualBox, added a 2nd drive the same size as the primary, and booted up with a GParted live ISO. Initialized the 2nd drive using MBR, then copied the partition from the GPT drive to the MBR drive using GParted. Shut down the VM, replaced the GParted ISO with a Win10 ISO, booted back up, ran Command Prompt from the Recovery menu, and ran these commands:
         bootrec /scanos
         bootrec /rebuildbcd
         bootrec /fixmbr
         bootrec /fixboot
     Rebooted back into recovery again, ran Startup Repair, and booted back into Windows without issue after that.

     Getting VirtIO drivers installed: This is annoying, because even though you can right-click install the inf files, Windows won't complete the install until it sees the hardware, and that can't happen on first boot. So you somehow need Windows to "see" the VirtIO hardware and install the drivers before you can switch the boot drive. And VirtualBox doesn't support VirtIO SCSI (just VirtIO network), so much more work is needed. Here are my painful steps. QEMU works much better under Linux, so I created an Ubuntu VM with VMware Player (vbox can't do nested virtualization, Player can if you enable it), copied my VDI file from vbox to it, and installed the qemu packages:
         sudo apt-get install qemu-kvm libvirt-bin virt-manager
     (something like that). Converted the VDI to a format qemu likes:
         qemu-img convert -p -f vdi vbox.vdi -O qcow2 qemu.qcow2
     Created a VM using virt-manager that matched the specs of my VM, and added the new qcow2 disk image as an IDE drive. Fought with the VM settings to actually get something to boot. Booted the VM up, confirmed it was working. Added a 2nd 1GB drive using VirtIO SCSI, Windows saw the device, and I installed the drivers from the VirtIO ISO: https://fedorapeople.org/groups/virt/vi ... io-win.iso Added a few other drives using the other VirtIO types, since I didn't want to mess with this again. Powered down the VM. Important step: you have to remove the VM (leaving the qcow2 image alone), then set up a new VM again, this time choosing to add the qcow2 drive as VirtIO. When I just changed the drive from IDE to VirtIO, it wouldn't boot, and googling found I had to remove and reimport. Dumb. Powered up the VM, all good. Powered down, then converted back to a format vbox can use (I used vmdk because I need to export to OVA anyway):
         qemu-img convert -p -f qcow2 qemu.qcow2 -O vmdk vbox-fixed.vmdk
     Copied vbox-fixed.vmdk back to the PC with vbox, replaced the old vdi in the VM with this new vmdk, and exported the VM to an OVA file. Uninstall phpVirtualBox, upgrade the NAS to 6.1.1 (required for VMM), enable beta software, install VMM, configure VMM, import the OVA file, cross fingers.

     Worked for me, though there's still some weirdness where the VM gets stuck in a boot loop if I reboot it, but it's fine if I shut down/power up. So it looks like things get borked as Windows does its first boot, installs devices, and reboots, but it just needs a forced power off/back on to be fine. It took me literally days to get it working, and it was totally not worth it. I could have easily just created a new VM and reconfigured it in far less time, but I was mainly curious to see if it was actually possible. If you're just starting out, and you meet the 2+ LAN and BTRFS requirements, definitely look at VMM. It's getting a bunch of updates to add new features roughly every week, so it should be quite nice in a couple months.
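     One extra step I'd suggest before booting each converted image (just a sketch, not something the migration strictly requires): sanity-check the conversions with qemu-img so you catch a bad image early, using the same file names as above.

         # confirm the converted images report the expected format and virtual size
         qemu-img info qemu.qcow2
         qemu-img info vbox-fixed.vmdk
         # optionally check the qcow2 for internal inconsistencies before wiring it into a VM
         qemu-img check qemu.qcow2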
  7. This latest beta version has been working fine for me on my HP Gen 8 running Jun's 1.02 Alpha loader and 6.1 Update 1. I did notice it didn't automatically shut down the guest VM when I rebooted DSM, but I'm not sure if that has ever worked.
  8. JBark

    DSM 6.1.x Loader

    Yep, running Update 1 without any problems, other than the usual false alarm "storage space has failed" message I get at boot.
  9. JBark

    DSM 6.1.x Loader

    Confirming this worked for me as well on my Gen8. I needed to get up to 6.1 so I could finally update to the newest phpVirtualBox, and this was my only option. I'm getting the "failed storage space" for my single SSD at boot, but everything runs fine so that's not really a big deal.
  10. Thanks guys for putting out an updated version for DSM 6. I've been meaning to get around to it, but haven't bothered updating since phpVirtualBox hasn't been updated yet. I'll update the OP here sometime soon with the links to this new version.
  11. I can confirm that the -5 version has fixed the bridging issues I was seeing as well with the latest bootloader. Nice to be able to run the latest everything again without having to resort to any sort of workaround.
  12. It does work if you can downgrade XPEnoboot to the previous version. After updating DSM to 5.2, I flashed XPEnoboot back to 5.2-5592.2. Everything has been fine for the past month or so, but it definitely doesn't work on every setup, so I can't say it's a guaranteed fix. Once we get a fixed XPEnoboot, it should work again.
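      "Flashed back" here just means re-writing the older boot image to the USB stick (a sketch; the image filename and /dev/sdX below are placeholders, so substitute the actual image you downloaded and whatever device your boot stick shows up as):

          # write the older XPEnoboot image back to the USB boot stick -- triple-check the target device!
          sudo dd if=XPEnoboot_5.2-5592.2.img of=/dev/sdX bs=1M   # hypothetical filename/device
          sync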
  13. On the HP Gen8, rebooting DSM will cause the BIOS to reset to defaults. It's annoying, but you can work around it by configuring everything correctly in the BIOS, then saving those settings as the default. Then when it resets to default, it will just reset to those settings. AHCI is the preferred way to go, since SMART values on the drives can't be read in RAID mode. There used to be some issues with fan speeds increasing when running in AHCI, but that should be fixed if you've got the latest firmware installed.
  14. JBark

    Baremetal vs ESXi

    If you're running bare metal, you can use this VirtualBox package to run VMs. viewtopic.php?f=15&t=3497 Just scroll to the end to find the post with the most recent package. Works great, I've been using it for the past couple months to run a Windows VM without issue.
  15. I'm running bare metal, with the B120i in AHCI mode. The fan speed depends on the intake temp, so I probably just have slightly lower temps than you. I just checked today: the intake temp is 21C and the fan is at 11%. I think AHCI is always slightly higher than RAID mode, though it used to be much worse with the older firmware, where the fan was around 30% in AHCI mode. There are a couple of other HP apps that get installed on Linux, like the hp-health one. It could be that one of those is needed as well to get the same fan speeds we see in supported OSes? I'm not really sure.