XPEnology Community

kachunkachunk

Transition Member
  • Posts: 8
  1. Ack, I was bitten by that earlier too. It's helpful to keep a clean (pre-installation) copy of your bootloader saved somewhere - I got back up and running pretty quickly because of that; clone it before attaching it to your VM with `vmkfstools -i` (a quick sketch of that step is below). You can also consider VM snapshotting, though I didn't go that route. Maybe next time I end up with a new bootloader:
     - The synoboot VMDK should be attached to SATA0:0 as usual, in Dependent mode.
     - Any capacity VMDKs you plan on using should be attached to SATA1:x in Independent-Persistent mode.
     - Finally, snapshot the VM before you boot, or at a time when you know it's in a good state, so you can roll back to that point later. You can confirm this is working if the VM directory contains a -000001.vmdk for the bootloader VMDK but not for the data disks.
     - If you use physical RDMs, you effectively get the same result, since pRDMs can't be snapshotted.
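     For the clone step, a minimal sketch from the ESXi shell - the datastore and file names here are only examples, so adjust them to your own layout:

        # Clone the pristine loader image and attach the copy to the VM, keeping the original untouched
        vmkfstools -i /vmfs/volumes/datastore1/loaders/synoboot-clean.vmdk \
                      /vmfs/volumes/datastore1/XPEnology/synoboot.vmdk -d thin

        # After snapshotting, only the loader disk should have a delta file:
        ls /vmfs/volumes/datastore1/XPEnology/*-000001.vmdk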
  2. I went through the trouble of building a DS3615xs VM for ESXi using the 1.03b loader (I'm running X5675 CPUs), but I wasn't happy with the e1000e or SATA devices, despite being on 6.2.1. I had drives dropping out prematurely after minor storage-network interruptions, and DSM wants you to rebuild permanently after enough of a jolt. I don't recall my 6.1.x/SCSI experience being that sensitive, so I opted to get back to a SCSI setup. I'm now using your/swords80's DS3617xs OVA (much easier to deal with), and indeed I still need to manually install DSM 6.2-23739 (not the latest it pulls, which is 6.2.1 so far). All is well.
     Now, regarding 6.2.1: looking at the serial output, after DSM 6.2.1 installs, all subsequent boots result in panics/backtraces upon loading pvscsi and vmxnet3. The VM then loses networking and has no disks except the loader itself, and it never properly finishes the installation process. Judging from how none of the other SCSI controllers work, I suspect the same thing is happening with them, but I haven't checked the serial logs for those (LSI SAS, LSI Parallel, and BusLogic - not that you want that last one). The traces also point at bogus memory addresses. Hopefully it's fixable in a new loader?
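     For anyone who wants to look at the same serial output, a sketch of the .vmx settings that log the VM's virtual serial port to a file (standard VMware serial-port options; the datastore path is only an example):

        serial0.present = "TRUE"
        serial0.fileType = "file"
        serial0.fileName = "/vmfs/volumes/datastore1/DS3615xs/serial0.log"

     The loader's boot entries direct the kernel console to the first serial port, so the panic backtraces should end up in that log file.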
  3. Same here. During Syno-HA configuration, there is a step that involves enabling/testing it. Both VMs locked up, and now every time they boot they lock up again in some kind of loop, using 100% CPU, until they are powered off. No way to resolve this without rebuilding my VMs, sadly. I'm hoping to hear whether the OP did this in a VM or on bare metal, and whether the test run worked.
  4. They are close, but not the same. Baremetal mode will not mask the local SATA controller in the VM, while ESXi mode will. I don't know what other changes there are (if any), though. The idea behind ESXi mode is that you can use the loader as a virtual SATA disk and have it disappear from DSM, so it doesn't take up a drive slot. Switching between the two has caused me problems between boots (installation recovery mode). I strongly recommend editing Grub and setting the default selection to ESXi if you plan on using the other boot option even once (sketch below). Note also that it counts from zero, so "2" selects the third grub menuentry.
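     A sketch of that edit, assuming the usual layout of Jun's loader where grub/grub.cfg sits on the first partition of the synoboot image - entry order can differ between loader versions, so verify which index your ESXi/VMware entry has before changing it:

        # grub/grub.cfg on the loader's boot partition (illustrative excerpt)
        set default='2'    # counts from zero, so this selects the third menuentry (the ESXi/VMware one here)
        set timeout='4'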
  5. That's great news. Hoping you can share some info about this because I've caused two virtual machines to self-brick when merely testing the HA config during setup. Given this was a primary use case for Xpenology, my project is on hold until I see some success stories like this, haha.
  6. Yeah, I think the LSI drivers/hardware are fairly ubiquitous, so it's pretty safe to err on the side of wider inclusion. I ended up pulling the card and swapping it for an older LSI 2008-based controller (IBM M1015, cross-flashed to LSI 9211-8i). This actually matches my known-good system, where another XPEnology VM is already working fine with its passthrough adapter. Since I wanted to play with Syno HA, I may as well stick with this.
     Some additional clarifications: it turns out the LSI 9361-8i cannot be cross-flashed, so I was mistaken about my own setup - I had disabled its BIOS (I'm not booting from a RAID on it), and the devices were just in JBOD mode. The adapter firmware was not current, so I'm not certain whether that played a role in a device-discovery panic like the one above. The included megasas driver appears to be quite old; there's a good chance that a newer driver would help here, but admittedly the recompiling instructions were making my eyes glaze over, and I deemed that a challenge for another day.
     To answer your questions: the controller always worked wonderfully in passthrough to another Ubuntu VM with current updates (I intend to move from a hand-built NAS VM to XPEnology). When troubleshooting, I can power down the XPEnology machine, then power on the original Ubuntu VM and see the module load, and the disks are detected and work without issue. So strictly speaking this looks to me like a driver/kernel issue, probably correctable with a newer driver. I'm afraid I just won't be testing that, since I moved the card out. Appreciate your input, though! If you do update the loader or extras, it could still help people.
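     If anyone wants to compare driver versions before going down the recompile route, a couple of quick checks (assuming a root SSH shell on each box and that the module is actually loaded):

        # On the DSM/XPEnology side - the loaded driver reports its version via sysfs
        cat /sys/module/megaraid_sas/version
        dmesg | grep -i -E 'megasas|megaraid'

        # On the Ubuntu VM, for comparison
        modinfo megaraid_sas | grep -i '^version'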
  7. Ah, it needed a reboot. Now it looks like megasas is loading, but it's definitely panicking. It also turns out megasas is included in the original ramdisk file, so there's no need for the extra/extension stuff in the mix after all. I may have to post elsewhere. Boot logs attached.
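     For anyone who wants to verify which modules ship in the stock ramdisk, a rough sketch - this assumes the loader's first partition is mounted on a Linux box at /mnt/synoboot1 and that rd.gz is a compressed cpio archive (check the compression with `file` first; paths are examples):

        mkdir /tmp/rd && cd /tmp/rd
        file /mnt/synoboot1/rd.gz                                  # gzip or LZMA, depending on the build
        zcat /mnt/synoboot1/rd.gz | cpio -idm                      # if gzip
        # xz -dc --format=lzma /mnt/synoboot1/rd.gz | cpio -idm    # if LZMA
        find . -name '*.ko' | grep -i mega                         # look for the megaraid/megasas modules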
  8. Hey guys. I'm fairly sure I've done my homework here before posting - I've done the following, but I cannot load/detect an LSI 9361-8i storage adapter despite loading the extra driver modules. I do have a working installation, though.
     - ESXi 6.5
     - Virtual DS3617xs, DSM 6.1.5-15254, kernel 3.10.102
     - Jun's official v1.02b loader (notably the DS3617xs derivative)
     - extra.lzma for ds3617, v4.5
     - LSI 9361-8i adapter, passed through to the VM (confirmed working fine in passthrough to an Ubuntu VM)

     lspci -v output:

        0000:0c:00.0 Class 0104: Device 1000:005d (rev 02)
                Subsystem: Device 1000:9361
                Flags: fast devsel, IRQ 19
                I/O ports at 4000 [disabled]
                Memory at fe100000 (64-bit, non-prefetchable)
                Memory at fe000000 (64-bit, non-prefetchable)
                Capabilities: <access denied>

     Two additional virtual disks on the paravirtual SCSI controller are attached and working fine (hence the working install). Because the VM boots in virtual/ESXi mode, the bootloader SATA device does not show up in DSM (this is good/desired). I can't enable write cache in DSM; I'm concerned this will torpedo my further efforts and performance, but it's probably irrelevant when virtualizing. I also can't see the estimated lifespan, sector counts, temperature, serial number, or firmware version of the virtual disks. I'm still investigating that, though it may not really matter in the end.
     Anyway, it seems that the extra kernel module is not being loaded here, or it isn't satisfied with the hardware. I haven't gone down the path of trying to compile things myself. Any suggestions or thoughts? I figure I could disable passthrough of the adapter and set the disks up as physical RDMs for the VM, but that masks the disks a bit too much for comfort.
     Edit: Booted back into Ubuntu to double-check which module was loaded (megaraid_sas, since I've flashed this adapter into IT mode). And now the device isn't showing any drives - I think the firmware crashed. Going to reboot and see if it works in the Syno VM!
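     For anyone checking the same thing, a minimal set of checks over SSH on the DSM side (assuming a root shell; the PCI address 0000:0c:00.0 is the one from the lspci output above):

        # Is the driver module actually loaded?
        lsmod | grep -i -E 'megaraid|mpt'

        # Did the kernel log anything while probing the adapter?
        dmesg | grep -i -E 'megaraid|megasas|scsi'

        # Is any driver bound to the passthrough device?
        ls -l /sys/bus/pci/devices/0000:0c:00.0/driver 2>/dev/null || echo "no driver bound"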