kachunkachunk

Transition Members
  • Content count: 6
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About kachunkachunk
  • Rank: Newbie

  1. Successfully setup HA cluster

    Same here. During Syno-HA configuration there is a step that enables and tests HA; both VMs locked up at that point, and now every time they boot they lock up again in some sort of loop, pegging the CPU at 100% until they are powered off. There is no way to recover from this short of rebuilding my VMs, sadly. I'm hoping to hear whether the OP did this in a VM or on bare metal, and whether the test run worked.
  2. Tutorial - Install DSM 6.1.5 on ESXi 6.5

    They are close, but not the same. Baremetal will not mask the loader's SATA device inside the VM, while ESXi will. I don't know what other differences there are (if any), though. The idea behind ESXi mode is that the loader runs as a virtual SATA disk and disappears from DSM, so it does not take up a drive slot. Switching between the two has caused me problems between boots (installation recovery mode), so I strongly recommend editing GRUB and setting the default selection to the ESXi entry if you plan on using that boot option even once; a rough example is sketched below. Also note that the default counts from zero, so "2" selects the third grub menuentry.
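    For reference, a minimal sketch of that edit, assuming Jun's 1.02b layout where grub.cfg lives on the loader's first partition and the ESXi entry is the third menuentry; the device name and path here are assumptions and can differ between builds:

        # from a root shell on DSM (the image can also be edited offline before writing it)
        mkdir -p /mnt/synoboot
        mount /dev/synoboot1 /mnt/synoboot          # assumption: first partition holds grub/grub.cfg
        # grub counts menu entries from zero, so default=2 selects the third entry
        sed -i "s/^set default=.*/set default='2'/" /mnt/synoboot/grub/grub.cfg
        umount /mnt/synoboot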
  3. Successfully setup HA cluster

    That's great news. Hoping you can share some info about this because I've caused two virtual machines to self-brick when merely testing the HA config during setup. Given this was a primary use case for Xpenology, my project is on hold until I see some success stories like this, haha.
  4. Yeah, I think the LSI drivers/hardware are fairly ubiquitous, so it's pretty safe to err on wider inclusion. I ended up pulling the card and swapping it for an older LSI 2008-based controller (an IBM M1015 cross-flashed to an LSI 9211-8i). This matches my known-good system, where another XPenology VM is already working fine with its passthrough adapter, and since I want to play with Syno HA I may as well stick with that.

    Some additional clarifications: it turns out the LSI 9361-8i cannot be cross-flashed, so I was mistaken about my own setup - I had only disabled its BIOS (I'm not booting from a RAID on it) and the devices were just in JBOD mode. The adapter firmware was not current, so I'm not certain whether that played a role in a device-discovery panic like the one above. The included megasas driver also appears to be quite old; there's a good chance a newer driver would do some good here (see the version check sketched below), but admittedly the recompiling instructions were making my eyes glaze over and I deemed that a challenge for another day.

    To answer your questions: the controller always worked wonderfully in passthrough to another Ubuntu VM with current updates (I intend to move from a hand-built NAS VM to XPenology). When troubleshooting, I can power down the XPenology machine, power on the original Ubuntu VM, watch the module load, and see the disks detected and working without issue. So strictly speaking this looks to me like a driver/kernel issue that a newer driver would probably correct, but I'm afraid I just won't be testing that since I moved the card out. Appreciate your input though! If you do update the loader or extras, it could still help people.
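    In case it helps anyone following along, a quick way to see how old the bundled driver actually is, assuming modinfo is available in the DSM shell (it may not be on every build) and that the module is named megaraid_sas as it is on my systems:

        # version of the megaraid_sas module DSM is carrying
        modinfo megaraid_sas | grep -iE '^(version|vermagic)'
        # run the same command on the Ubuntu VM's current kernel to see how far behind DSM's copy is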
  5. Ah, it needed a reboot. Now it looks like megasas is loading, but it is definitely panicking (boot logs attached); I may have to post elsewhere. It also turns out megasas is included in the original ramdisk file, so there was no need for the extra/extension stuff in the mix after all.
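    For anyone curious how I checked the ramdisk contents, a sketch assuming it is the usual LZMA-compressed cpio archive named rd.gz somewhere on the loader image (names and paths vary between loader versions):

        # unpack the loader's ramdisk into a scratch directory and look for bundled SAS drivers
        mkdir -p /tmp/rd && cd /tmp/rd
        lzma -dc /path/to/loader/rd.gz | cpio -idmv        # path is an assumption
        find . -name 'mega*.ko' -o -name 'mpt*.ko'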
  6. Hey guys. I'm fairly sure I've done my homework before posting - I've done the following, but cannot load/detect an LSI 9361-8i storage adapter despite loading the extra driver modules. I do have a working installation otherwise:
    • ESXi 6.5
    • Virtual DS3617xs, DSM 6.1.5-15254, kernel 3.10.102
    • Jun's official v1.02b loader (notably the DS3617xs derivative)
    • extra.lzma for ds3617 v4.5
    • LSI 9361-8i adapter, passed through to the VM (confirmed working fine in passthrough to an Ubuntu VM)

    lspci -v output:
      0000:0c:00.0 Class 0104: Device 1000:005d (rev 02)
              Subsystem: Device 1000:9361
              Flags: fast devsel, IRQ 19
              I/O ports at 4000 [disabled]
              Memory at fe100000 (64-bit, non-prefetchable)
              Memory at fe000000 (64-bit, non-prefetchable)
              Capabilities: <access denied>

    Two additional paravirtual SCSI disks are attached and working fine (hence the working install). Because I boot in virtual/ESXi mode, the bootloader SATA device does not show up in DSM (this is good/desired).

    I can't enable write cache in DSM. I'm concerned this will torpedo my further efforts and performance, but it is probably irrelevant when virtualizing. I also can't see the estimated lifespan, sector counts, temperature, serial number, or firmware version of the virtual disks. I'm still investigating that, though it may not really matter in the end.

    Anyway, it seems the extra kernel module is either not being loaded here or not satisfied with the hardware (the checks I ran are sketched below). I haven't gone down the path of trying to compile things myself. Any suggestions or thoughts? I figure I could just disable passthrough of the adapter and set the disks up as physical RDMs for the VM, but that masks the disks a bit too much for comfort.

    Edit: Booted back into Ubuntu to double-check which module was loaded (megaraid_sas, since I've flashed this adapter into IT mode). And now the device isn't showing any drives - I think the firmware crashed. Going to reboot and see if it works in the Syno VM!
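    To illustrate what I mean by "not being loaded", a rough set of checks from a root shell on DSM; the module and path names are assumptions based on my setup and the usual extra.lzma layout, and may differ on other builds:

        # is any megaraid/megasas module loaded at all?
        lsmod | grep -i mega
        # any probe, firmware, or panic messages from the driver?
        dmesg | grep -iE 'megaraid|megasas'
        # which kernel driver (if any) claimed the passed-through adapter
        lspci -k -s 0c:00.0
        # try loading the module by hand if it is missing (path is an assumption)
        insmod /lib/modules/megaraid_sas.ko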