• Content Count

  • Joined

  • Last visited

Community Reputation

0 Neutral

About fafner

  • Rank
    Junior Member
  1. - Outcome of the update: SUCCESSFUL
     - DSM version prior to update: DSM 6.2.1-23824U6
     - Loader version and model: Jun v1.04b - DS918+
     - Using custom extra.lzma: NO
     - Installation type: VM - ESXi 6.7u2 on Supermicro X11SPM
     - Additional comments: Manual update, then reboot. Disk as VMDK on a VMware Paravirtual SCSI controller.
  2. - Outcome of the update: SUCCESSFUL
     - DSM version prior to update: DSM 6.2.1-23824U6
     - Loader version and model: Jun v1.03b - DS3617xs
     - Using custom extra.lzma: NO
     - Installation type: VM - ESXi 6.7u2 on Supermicro X9SRH
     - Additional comments: Reboot required. All disks as RDMs on a second SATA controller.
  3. Did anyone get any SCSI controller to work? I would like to use RDMs. Edit: Forget it. I just noticed that I can attach an RDM to the SATA controller. Great, really. 👍
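     For anyone who prefers the CLI over the vSphere client, the RDM pointer itself can be created from the ESXi shell. This is only a rough sketch; the `naa.*` device ID and the datastore path are placeholders, not values from this setup:

     ```shell
     # Sketch only: create a physical-compatibility RDM pointer (-z) for a raw
     # disk, then attach the resulting .vmdk to the VM's SATA controller in the
     # VM settings. Replace naa.XXXX and the datastore path with your own values.
     ls -l /vmfs/devices/disks/        # list raw disks to find the naa.* identifier
     vmkfstools -z /vmfs/devices/disks/naa.XXXX \
         /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk
     ```

     Use `-r` instead of `-z` if you want virtual-compatibility mode instead of physical.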
  4. Not surprised. Probably no one is working on this. Sticking with iSCSI so far.
  5. iSCSI seems to work here so far...
  6. I've been running the DS412+ for months with NFS VAAI, including 5562 Update 2 before 5592, and it was working the whole time. So I am sure XPEnology is broken here.
  7. You think DSM 5.2-5565 has the problem but 5592 doesn't, right? No. XPEnology is broken; it's not a matter of the DSM version. I have 5592 on a DS412+ and there it works.
  8. I can confirm it's broken in XPEnoboot 5.2-5565.2 with vSphere 5.5.
  9. I did, and I used it, and it said "online" or "ready" or whatever it says when it's working. The server was connected too: the iSCSI initiator in Windows Server said "connected"/"ok", and so did DSM. But alas, no drive showed up. After recreating the target in DSM everything looked exactly the same, but now the drive was accessible.
  10. I have passed the LSI adapter through to the XPEnology VM, so it has direct access to the HDs. Great performance.
  11. Great work, guys. Thanks. It's back again! At first my WS2K12R2 didn't see the drive, although it was connected and everything looked fine. I then deleted and recreated the iSCSI target (not the LUN, of course), and then it was like before.
  12. Yes. Damn. I didn't see this thread before upgrading, and now I'm stuck: iSCSI targets are offline and I can't connect to the LUNs. Does anyone know whether it worked with 5.2-5565.1 without Update 1, and whether one can downgrade to that?
  13. For me the problem was that, although my X9SRH has an LSI controller for which ESXi drivers exist, it simply delivers abysmal performance, maybe 20 MB/s. The reason seems to be that it has no cache; you can google it along those lines. While reading up I also saw that some people "solve" this problem by passing the controller through to a NAS VM (usually OpenNAS, of course). Since I have been using Synology for a while and also experiment with XPEnology, that caught my attention. The result is sensational: the four HDs in the X9SRH, 14 TB gross, deliver 9 TB net under SHR. Exported via NFS to the two ESXi hosts, and done. Inside the X9SRH a VM reaches 250 MB/s.
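     Mounting the NAS VM's NFS export on the ESXi hosts can also be scripted from the ESXi shell. A minimal sketch, where the NAS address, share path, and volume name are placeholders for your own values:

     ```shell
     # Sketch only: mount an NFSv3 export from the NAS VM as an ESXi datastore.
     # Replace the address, share path, and datastore name with your own.
     esxcli storage nfs add --host=192.168.1.50 \
         --share=/volume1/vmstore --volume-name=nas-nfs
     esxcli storage nfs list    # verify the new datastore is mounted and accessible
     ```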
  14. That's strange. No difference here with IDE: same need to replace the .vmdk after the first install, on 5.5u1 as well. So at least some fiddling seems necessary, either replacing the .vmdk or "rmmod=ata_piix", which I have no idea what it means. In case someone wants to try my all-SCSI solution, here is an .ova: http://tinyurl.com/pros2tr Just add a VMXNET3 NIC and it's ready to go (admin, no password, DHCP, VMware Tools installed). Like I said, once it's installed, just replace the (small) .vmdk with a newer version of the StarWind-V2V-ESXi-converted Nanoboot if available.
  15. Not so here. I'm using the ESX variant with SCSI controllers (e.g., VMware Paravirtual) as device 0:0. Works like a charm. That's right, but it happens only on the very first (completely fresh) install of DSM. Just power off the VM, replace the Nanoboot .vmdk, and boot normally; everything works. After that you can simply replace the Nanoboot .vmdk with a new version when one is available, and DSM continues to work as before. At least that's my experience here; I've done this for the last couple of updates.