unmesh

Members
  • Content Count: 105
  • Joined

  • Last visited

Community Reputation

0 Neutral

About unmesh

  • Rank: Advanced Member

  1. Yes, open-vm-tools_x64-6.1_10.2.0-1
  2. When initiated from the DSM GUI, the OS appears to shut down, but not all the way, as shown by the VMware GUI. I set up a serial-over-LAN console and got the following output:

        [ 357.190206] init: synonetd main process (5393) killed by TERM signal
        [ 357.192923] init: ddnsd main process (9158) terminated with status 1
        [ 357.200603] init: synostoraged main process (9417) terminated with status 15
        [ 357.218126] init: hotplugd main process (9940) killed by TERM signal
        [ 357.233112] init: smbd main process (10212) killed by TERM signal
        System is going to poweroff.
        [ 361.195232] init: synoscheduler-vmtouch main process (11420) killed by TERM signal
        [ 361.290355] init: Disconnected from D-Bus system bus
        [ 364.414348] iSCSI:extent_pool.c:780:ep_exit syno_extent_pool successfully finalized
        [ 364.416200] iSCSI:target_core_rodsp_server.c:844:rodsp_server_exit RODSP server stopped.
        [ 364.479316] EXT4-fs (md2): re-mounted. Opts: (null)
        [ 364.601074] init: skip respawn syslog-ng during shutdown
        [ 365.534343] md2: detected capacity change from 29421993984 to 0
        [ 365.535315] md: md2: set sdb3 to auto_remap [0]
        [ 365.536008] md: md2 stopped.
        [ 365.536456] md: unbind<sdb3>
        [ 365.536974] md: export_rdev(sdb3)
        [ 366.678609] md: md0 in immediate safe mode
        [ 366.679304] md: md1 in immediate safe mode
        [ 366.681192] init: cgmanager main process (4717) killed by KILL signal
        [ 366.682227] init: skip respawn cgmanager during shutdown
        [ 366.683194] init: dhcp-client (eth0) main process (7474) killed by KILL signal
        [ 366.684645] init: tty main process (10168) killed by KILL signal
        [ 366.685720] init: skip respawn tty during shutdown
        [ 366.688852] init: Failed to spawn dhcp-client (eth0) post-stop process: unable to connect to CGManager: Unknown error 196609
        [ 366.727126] EXT4-fs (md0): re-mounted. Opts: (null)
        [ 367.767601] sd 1:0:0:0: [sdb] Stopping disk
        [ 367.768376] sd 0:0:0:0: [sdm] Stopping disk
        [ 367.769306] e1000e: EEE TX LPI TIMER: 00000000
        [ 367.805455] e1000e 0000:03:00.0: Refused to change power state, currently in D0

     Any thoughts on what might be the cause? I'm running Jun's 1.03b bootloader and the latest DS3615xs DSM with a virtual hard drive.
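     For anyone wanting to reproduce the serial-over-LAN console, a minimal sketch on ESXi (the port number 2001 is an arbitrary choice):

        # On the powered-off VM, add a serial port backed by the network:
        #   Serial Port -> Use Network, Direction: Server, Port URI: telnet://:2001
        # Enable the host firewall rule "VM serial port connected over network",
        # then connect from a workstation:
        telnet <esxi-host-ip> 2001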
  3. Thanks, @luchuma. I put identical MAC addresses in grub.cfg and the VM settings, and everything works fine now.
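     For reference, the relevant grub.cfg lines look roughly like this (placeholder value; the MAC must match what ESXi shows for the vNIC, written without separators):

        set netif_num=1
        set mac1=001132ABCDEF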
  4. I added a second virtual NIC, connected to the same vswitch as the first, to a test DSM 6.2 installation; changed grub.cfg to add a second MAC address; created new .vmdk files for the bootloader; and rebooted. DSM showed the second NIC and an IP address for it obtained through DHCP. I then created a new vswitch connected to a new physical NIC and moved the second vNIC to this vswitch. This configuration will not pick up an IP address from the DHCP server, even though the GUI "console" shows the relevant port icons as green. Do I need to change something with respect to the MAC addresses in the bootloader? The VM is configured to let ESXi assign them. Thanks. Added: I spun up an Ubuntu VM with a single vNIC connected to the second (newer) vswitch and the network works fine.
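     The grub.cfg edit for the second NIC was along these lines (placeholder values; mac2 should match the second vNIC's MAC exactly):

        set netif_num=2
        set mac1=001132ABCDEF
        set mac2=001132ABCDF0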
  5. Thanks, IG-88. I took a close look at the working VM and realized that I needed to make changes to things like the NIC type. In the end, I essentially cloned both the VM and the boot image file and made the necessary changes to get it up and running as a distinct DSM instance.
  6. I've had a DSM 6.1.7-15284 VM running on Jun's 1.02b loader for a while, and a DSM 6.2.2-24922 VM running for a shorter while on Jun's 1.03b loader. Each VM uses a different RDM hard disk. Earlier today, I accidentally hit update in the GUI for the first one when it offered to upgrade to 6.2.2-24922, and it (obviously!) won't boot correctly. So I replaced the loader in the VM with a suitably configured 1.03b and chose the ESXi option in the console, but it still won't boot DSM. I have the serial port on the VM configured to map to a network port, which displays "mount failed" and nothing else. Is it possible to recover the installation? Thanks.
  7. I was able to run DSM DS918+ 6.2.1 Update 6 on Jun's Mod V1.04b loader successfully, though I don't run it all the time.
  8. I'm assuming you mean M.2 NVMe and not SATA. Things may have changed for the better, but I've only been successful in using M.2 NVMe SSDs by running Xpenology as a guest OS on ESXi.
  9. Olegin, that is what I had done too. Thanks.
  10. I haven't done stress testing, but it absolutely splits traffic across both GE NICs, as reported by both the Windows 10 desktop and the baremetal DSM. I suppose I could time the transfer with a watch too and report back to this thread. After some experimentation, I managed to reproduce the results under ESXi: I created a new vswitch, attached an unused GE port as an uplink, created a new port group that used this vswitch, and added the port group as a new network interface to the DSM VM. I then configured SMB to use SMB3 through the GUI and by making an edit to smb.conf (see the sketch below), and was off and running. One thing that had concerned me was how Windows would know which two IP addresses to use for the server. It turns out that my network-attached drive from the single-port days was enough to make the connection. I had considered using 10G but put it off because I wanted to do multipoint and wasn't ready to invest in a 10G switch and a bunch of 10G NICs. I discovered that multiport GE NICs are very cheap on eBay; my i340-T4 was only $15! I will disconnect the cables to the second port everywhere for now and hope that gets rid of any potential instability or data corruption issues.
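     The smb.conf change itself is one line; a sketch, assuming DSM's bundled Samba honors the stock option (DSM may rewrite /etc/samba/smb.conf when the service restarts, so prefer the GUI setting where one exists):

        # /etc/samba/smb.conf, in the [global] section
        server multi channel support = yes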
  11. Not able to access my server at the moment, but the VM configuration has two vCPUs, 2 GB of memory, a hard disk for the bootloader on an IDE controller, another for an RDM'ed hard drive on a SATA controller, and an E1000 network adapter connected to vSwitch0. The Lenovo TS140 came with an i217LM on the motherboard, which shows up as vmnic0, configured as the uplink on vSwitch0. The 4 new ports show up as vmnic1 through vmnic4 but are not configured to anything. Since my opening post, I did try adding one of the new ports as an additional uplink to vSwitch0, but that did nothing for file transfer performance.
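     For anyone curious how the RDM'ed drive is attached, a sketch using vmkfstools, with placeholder device and datastore paths:

        # create a physical-mode RDM pointer vmdk, then add it to the VM
        # on the SATA controller like any other disk
        vmkfstools -z /vmfs/devices/disks/<device-id> /vmfs/volumes/datastore1/DSM/rdm-disk1.vmdk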
  12. Having had good luck using multiple NICs to increase file transfer speeds between a Windows 10 desktop and a baremetal Xpenology install, I'd like to do the same with an Xpenology-on-ESXi-6.7 system. To this end, I installed a quad-port Intel Gigabit NIC alongside the built-in i217LM, and VMware sees the 4 new physical NICs. I'm at a complete loss for how to get an additional NIC mapped over to the Xpenology VM. I don't want to do teaming, but rather use SMB 3.0 Multichannel end-to-end so that the network does not have to be specially configured. I will leave the other three Gigabit ports unconnected for now. Should I add a physical NIC to the existing vswitch as an additional uplink, or will the E1000 driver only allow 1 Gbps of throughput to the VM? If so, do I create an additional vswitch for one of the new Gigabit Ethernet ports? Any help will be greatly appreciated. Thanks.
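     (For anyone landing here later: the approach that eventually worked, described in the post a couple of items up, amounts to the following esxcli sketch; vswitch, uplink, and port group names are placeholders:)

        esxcli network vswitch standard add --vswitch-name=vSwitch1
        esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
        esxcli network vswitch standard portgroup add --portgroup-name="DSM LAN2" --vswitch-name=vSwitch1
        # then add a new network adapter attached to "DSM LAN2" to the DSM VM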
  13. In the process of trying out Jun's 1.03b bootloader on a new VM on my ESXi server, I accidentally reused the serial number from another VM that was using the 1.02b bootloader. From other threads, I figured out how to edit the grub.cfg in the vmdk version of the bootloader to provide a new serial number, and the VM booted, but the Control Panel -> Info Center -> General page has no information on it. Any suggestions for how to fix this will be greatly appreciated. (It is possible that this tab was blank when the original VM was created, since I did not bother to look.) P.S.: Synology Assistant does show the correct new serial number.
  14. I'm guessing this is for me, so I will take this to General Questions ...
  15. I came upon this thread after I accidentally reused the same serial number when I created a test VM on my ESXi server to migrate from the 1.02b to the 1.03b bootloader. Unlike sbv3000's experience, one or the other XPE VM crashes after a while when both are running, and I need to do a controlled experiment to see if the serial number is the root cause. In any case, is there a quick way for me to change one of the serial numbers without going back to the original bootloader image file, finding and editing the grub.cfg, and remaking the vmdk file? Thanks.
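     One shortcut, sketched here for a Linux machine and assuming Jun's standard image layout with grub.cfg on the first partition (loop device names will vary), is to edit the image in place; since a flat .vmdk extent is a raw image, the same works directly on the -flat.vmdk file:

        sudo losetup -fP --show synoboot.img   # prints the loop device, e.g. /dev/loop0
        sudo mount /dev/loop0p1 /mnt           # first partition holds /grub/grub.cfg
        sudo vi /mnt/grub/grub.cfg             # change "set sn=..." (and MACs if needed)
        sudo umount /mnt
        sudo losetup -d /dev/loop0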