XPEnology Community

fafner


Posts posted by fafner

  1. On 7/1/2022 at 9:45 PM, fbelavenuto said:

    Hope you like it.

    Totally. This is fantastic. Thanks a lot.

     

    Runs under ESXi 8 like a charm. I upgraded my previous Red Pill install by installing/configuring arpl and then just mounting the disks into the new VM. DSM recognized the transfer and allowed "keep settings". Bingo.

  2. On 10/20/2022 at 12:54 PM, HorstHoden said:

    https://MYIP:5001/webapi/auth.cgi?api=SYNO.ActiveBackup.Activation&method=set&version=1&activated=true&serial_number=MYXPENOLOGYSERIAL

    {
        "error": {
            "code": 119
        },
        "success": false
    }

    This works for Active Backup for Business. But does anyone know the URL for Active Backup for Google Workspace...? 🤔
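
    For reference, here is how I script the Business activation call with Python and requests. Just a sketch: host, account, password and serial are placeholders, and I'm assuming error 119 simply means the request lacked a valid session, so I log in via SYNO.API.Auth first and pass the returned _sid along.

    import requests

    HOST = "https://MYIP:5001"  # placeholder

    # Log in first and grab a session id (assumption: error 119 = missing/invalid SID)
    auth = requests.get(
        f"{HOST}/webapi/auth.cgi",
        params={
            "api": "SYNO.API.Auth",
            "version": "3",
            "method": "login",
            "account": "admin",    # placeholder
            "passwd": "password",  # placeholder
            "format": "sid",
        },
        verify=False,  # self-signed certificate on the box
    ).json()
    sid = auth["data"]["sid"]

    # The Active Backup for Business activation call from above, with the SID attached
    resp = requests.get(
        f"{HOST}/webapi/auth.cgi",
        params={
            "api": "SYNO.ActiveBackup.Activation",
            "version": "1",
            "method": "set",
            "activated": "true",
            "serial_number": "MYXPENOLOGYSERIAL",  # placeholder
            "_sid": sid,
        },
        verify=False,
    ).json()
    print(resp)  # hoping for {"success": true}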

  3. On 3/23/2022 at 9:28 PM, alienman said:

    Anyone interested in creating a new module "open-vm-tools"? The idea is to generate something similar to the ACPI module, but including the "open-vm-tools" package.

    I created a docker container from https://registry.hub.docker.com/r/linuxkit/open-vm-tools/

     

    Runs like I've never seen before. OK, I haven't tested shutdown/restart so far because I always use the Syno web interface for that. But snapshots with quiescence work really nicely.
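
    In case someone wants to script it rather than click through the UI, here is a minimal sketch using the Docker SDK for Python (docker-py). The "latest" tag, the privileged flag and host networking are my assumptions about what vmtoolsd needs to reach the hypervisor devices.

    import docker

    client = docker.from_env()

    # Run the linuxkit/open-vm-tools image from the registry linked above.
    # privileged + host networking are assumptions: vmtoolsd wants access to
    # the VMware guest devices (/dev/vmci etc.) and the host's network view.
    container = client.containers.run(
        "linuxkit/open-vm-tools:latest",  # tag is an assumption
        detach=True,
        privileged=True,
        network_mode="host",
        restart_policy={"Name": "unless-stopped"},  # survive DSM reboots
        name="open-vm-tools",
    )
    print(container.name, container.status)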

  4. On 1/25/2022 at 4:58 PM, iceman said:

    open-vm-tools is updated to add support for DSM 7, please check it in use

     

    https://github.com/NeverEatYellowSwissSnow/synology-dsm-open-vm-tools

    Quiescence (snapshot) does not work.

     

    2022-01-27T12:20:44.657Z In(05) vcpu-2 - [msg.snapshot.quiesce.vmerr] The guest OS has reported an error during quiescing.
    2022-01-27T12:20:44.657Z In(05)+ vcpu-2 - The error code was: 3
    2022-01-27T12:20:44.657Z In(05)+ vcpu-2 - The error message was: Error when enabling the sync provider.
    2022-01-27T12:20:44.657Z In(05) vcpu-2 - ----------------------------------------
    2022-01-27T12:20:44.659Z In(05) vcpu-2 - VigorTransportProcessClientPayload: opID=4501266d-01-32-28e5 seq=8814: Receiving Bootstrap.MessageReply request.
    2022-01-27T12:20:44.659Z In(05) vcpu-2 4501266d-01-32-28e5 VigorTransport_ServerSendResponse opID=4501266d-01-32-28e5 seq=8814: Completed Bootstrap request.
    2022-01-27T12:20:44.659Z In(05) vcpu-2 - ToolsBackup: changing quiesce state: STARTED -> ERROR_WAIT
    2022-01-27T12:20:46.661Z In(05) vcpu-2 - ToolsBackup: changing quiesce state: ERROR_WAIT -> IDLE
    2022-01-27T12:20:46.661Z In(05) vcpu-2 - ToolsBackup: changing quiesce state: IDLE -> DONE
    2022-01-27T12:20:46.661Z In(05) vcpu-2 - SnapshotVMXTakeSnapshotComplete: Done with snapshot 'VM Snapshot 27.1.2022, 13:20:36': 0
    2022-01-27T12:20:46.662Z In(05) vcpu-2 - DISKLIB-CBT   :ChangeTrackerESX_DestroyMirror: Destroyed mirror node 126d1cea-4dd2938-cbtmirror.
    2022-01-27T12:20:46.727Z In(05) vcpu-2 - DISKLIB-CBT   :ChangeTrackerESX_DestroyMirror: Destroyed mirror node 13a31cde-50c2934-cbtmirror.
    2022-01-27T12:20:46.729Z In(05) vcpu-2 - SnapshotVMXTakeSnapshotComplete: Snapshot 0 failed: Failed to quiesce the virtual machine (29).
    2022-01-27T12:20:46.729Z In(05) vcpu-2 - VVolObjNotifySnapshotDone: isEnabled: 1
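
    For what it's worth, here is how I retrigger the quiesced snapshot from a script instead of the vSphere UI, which makes retesting quick. A pyVmomi sketch; host, credentials and VM name are placeholders.

    import ssl

    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()  # lab host, self-signed cert
    si = SmartConnect(host="esxi.local", user="root", pwd="password", sslContext=ctx)
    content = si.RetrieveContent()

    # Find the DSM VM by name (placeholder name)
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True
    )
    vm = next(v for v in view.view if v.name == "xpenology")

    # quiesce=True asks the guest tools to freeze the filesystems; this is the
    # step that fails above with "Error when enabling the sync provider"
    WaitForTask(vm.CreateSnapshot_Task(
        name="quiesce-test", description="", memory=False, quiesce=True
    ))
    Disconnect(si)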

     

  5. On 1/10/2022 at 8:27 PM, ilovepancakes said:

    That being said, on 1.04b 6.2.x I have used virtual SCSI disks in ESXi as the main DSM drives and have had 0 issues with them.

    Same here. I used to have four real SATA disks as RDMs via the PVSCSI controller... and thought I'd have to do the same with tinycore-redpill DSM 7. 🤔

     

    On 1/6/2022 at 4:32 PM, ilovepancakes said:

    ... and first off I am curious if anyone else experiences the same symptoms or if it's just me.

    Didn't get it done here either. 🙄

     

    BUT... it turns out I can use my SATA RDMs with vSATA AHCI. Works great so far. 🥳

  6. - Outcome of the update: SUCCESSFUL

    - DSM version prior to update: DSM 6.2.1-23824U6

    - Loader version and model: Jun v1.04b - DS918+

    - Using custom extra.lzma: NO

    - Installation type: VM - ESXi 6.7u2 on Supermicro X11SPM

    - Additional comments: Manual update, reboot. Disk as VMDK on VMware Paravirtual SCSI controller.

  7. - Outcome of the update: SUCCESSFUL

    - DSM version prior to update: DSM 6.2.1-23824U6

    - Loader version and model: Jun v1.03b - DS3617xs

    - Using custom extra.lzma: NO

    - Installation type: VM - ESXi 6.7u2 on Supermicro X9SRH

    - Additional comments: Reboot required, all disks as RDM on 2nd SATA controller.

  8. The problem is that you are testing with two different versions: the real hardware is on the current version, while XPEnology is only on 5.2-5565 Update 2. If possible, test the DS415+/DS412+ with 5.2-5565 Update 2 and see whether there is also no problem there.

     

    So you can't say with 100% certainty that XPEnology is broken. It's only an assumption.

    I've been running the DS412+ for months with NFS VAAI, including 5565U2 before 5592, and it was working the whole time.

     

    So I am sure XPEnology is broken here.

  9. Hi, to me it looks more like an issue with XPE than a Synology problem with DSM 5.2 and ESXi 6.0.

     

    As you can see, the NFS connection with my DS415+ (DSM 5.2-5592) is using hardware acceleration, while my DS3615 (XPE on DSM 5.2-5565 Update 2) is not.

    You think DSM 5.2-5565 has a problem, but 5592 doesn't, right?

    No. XPEnology is broken. It's not a matter of DSM.

    I have 5592 on a DS412+ and there it's working.

  10. @fafner: didn't you notice the enable button under storage manager -> iSCSI target?
    I did, and I used it, and it said "online" or "ready" or whatever it says when it's working. The server was connected too; the iSCSI initiator in Windows Server said "connected"/"OK", and so did DSM. But alas, no drive was showing up.

     

    After recreating the target in DSM it looked exactly the same, but now the drive was accessible. :roll:

  11. Great, guys. Thanks for the work. It's back again!

     

    At first my WS2K12R2 didn't see the drive although it was connected and everything looked fine. I then deleted and recreated the iSCSI target (not the LUN, of course) and then it worked like before.

  12. I haven't made the plunge to 5.2 yet, so you guys are specifically talking about 5.2 + Update 1, correct?
    Yes.

     

    Damn. I didn't see this thread before upgrading, and now I'm stuck. iSCSI targets are offline and I can't connect to the LUNs.

     

    Does anyone know if it worked with 5.2-5565.1 without Update 1, and whether one can downgrade to that?

  13. I'd like to hear your opinions on the PCIe passthrough feature with the Digitus DS-30104-1 controller.
    For me the problem was that although I have an LSI controller on the X9SRH, for which ESXi drivers exist, it simply delivers abysmal performance, maybe 20 MB/s. The reason seems to be that it has no cache; you can google reports to that effect. In the process I also read that some people "solve" this problem by passing the controller through to a NAS VM (usually OpenNAS, of course). Since I've been using Synology for a while and also experiment with XPEnology, that made me prick up my ears.

     

    The result is sensational. The four HDDs in the X9SRH, 14 TB gross, deliver 9 TB net under SHR. Shared via NFS to the two ESXi hosts and done. Within the X9SRH a VM reaches 250 MB/s.

    [Attached screenshots: 1.jpg, xpe-vm160gb-clone-iops.PNG, xpe-vm160gb-clone-xfer.PNG]

  14. The first time I did that I found it irritating that I had to replace the Nanoboot .vmdk. So instead of using SCSI I used the IDE pre-allocated .vmdk. Everything went ok. No more replacing.

    That's strange. No difference here with IDE; same need to replace the .vmdk after the first install. On 5.5u1 as well.

     

    That happens because DSM is seeing your boot drive. If you use IDE for your .vmdk boot drive you can use rmmod=ata_piix so that drive won't be detected.

    So at least some fiddling seems necessary: either replace the .vmdk or use "rmmod=ata_piix", though I have no idea what that means.

     

    In case someone wants to try my all-SCSI solution, here is an .ova: http://tinyurl.com/pros2tr

    Just add a VMXNET3 NIC and it's ready to go (admin, no password, DHCP, VMware Tools installed).

     

    Like I said, once it's installed just replace the (small) .vmdk with a newer version of Nanoboot, converted for ESXi with StarWind V2V, if available.

  15. Be aware that an IDE VMWare pre-allocated VMDK should be created, not the ESX variant.

    Not so here. I'm using the ESX variant with SCSI controllers (e.g., VMware Paravirtual) as device 0:0. Works like a charm.

     

    My problem was that the disk got overwritten or the VM didn't boot.

    That's right, but it happens only on the very first (completely fresh) install of DSM. Just power off the VM, replace the Nanoboot .vmdk, and boot normally. Everything works.

     

    After that, one can just replace the Nanoboot .vmdk with a new version when available. DSM continues to work as before. At least that's my experience here; I've done this for the last couple of updates.
