unmesh

Members
  • Content Count: 109
  • Joined
  • Last visited

Community Reputation

0 Neutral

About unmesh

  • Rank
    Advanced Member


  1. I'm looking for advice on best practices for replacing hard drives with larger ones. Some background: my first foray into Xpenology was with the N54L when I threw in some spare hard drives in a JBOD configuration. Subsequently, I've spooled up a couple of Xpenology VMs, each of which uses a large-capacity RDMed hard drive, and the N54L is used as a backup for one of the VMs. I would now like to replace the smattering of small drives in the N54L with a single large hard drive and was planning on doing the following: add the new hard drive and let Xpenology create the partit
  2. I crossed my fingers and fired up the VM. The GUI said I had moved my hard disk and showed a button to recover my installation. I clicked it, the VM rebooted and I'm up and running! Now if I can only get the VMware website to let me download the ESXi 7 ISO ...
  3. I accidentally deleted the folder containing the second of two DSM VMs on ESXi 6.7U3 and am hoping I can recover the installation because it was a simple one as these things go. I have a copy of the customized v1.03b DS3615xs 6.2 synoboot file which I was using along with a single RDM'ed hard drive for the data, and I think I can recreate the VM settings from memory. However, I updated DSM from the GUI one or more times since the original installation (I'm guessing to 6.2.2-24922 Update 6 rather than U4 in my .sig) and vaguely remember reading that synoboot gets modified when that happ
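     For reference, a minimal sketch of how an RDM pointer vmdk like the one mentioned above can be recreated from the ESXi shell; the device identifier and datastore paths below are placeholders, not the poster's actual values:

         # List the physical disks visible to the host to find the device identifier
         ls /vmfs/devices/disks/
         # Create a physical-compatibility RDM pointer vmdk on a datastore (use -r instead of -z for virtual mode)
         vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK /vmfs/volumes/datastore1/dsm-vm/data-rdm.vmdk

     The pointer vmdk can then be attached to the rebuilt VM alongside the saved synoboot image.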
  4. Yes, open-vm-tools_x64-6.1_10.2.0-1
  5. When initiated from the DSM GUI, the OS appears to shut down but not all the way, as shown by the VMware GUI. I set up a serial-over-LAN console and got the following output:
     [ 357.190206] init: synonetd main process (5393) killed by TERM signal
     [ 357.192923] init: ddnsd main process (9158) terminated with status 1
     [ 357.200603] init: synostoraged main process (9417) terminated with status 15
     [ 357.218126] init: hotplugd main process (9940) killed by TERM signal
     [ 357.233112] init: smbd main process (10212) killed by TERM signal
     System is going to poweroff.
     [ 361.195232] init: sy
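     One way to get such a serial-over-LAN console is ESXi's network-backed virtual serial port; a sketch of the relevant .vmx entries, with the listening port chosen purely as an example:

         serial0.present = "TRUE"
         serial0.fileType = "network"
         serial0.fileName = "telnet://:2001"
         serial0.yieldOnMsrRead = "TRUE"

     Connecting to the host on that port with telnet then shows the guest's console output; depending on the host configuration, the remote serial port firewall rule may also need to be enabled.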
  6. Thanks @luchuma. I put identical MAC addresses in grub.cfg and the VM settings, and everything works fine now.
  7. I added a second virtual NIC to a test DSM6.2 installation that was connected to the same vswitch as the first one, changed the grub.cfg to add a second MAC address, created new .vmdk files for the bootloader, rebooted and DSM showed the second NIC and an IP address for it obtained through DHCP. I then created a new vswitch connected to a new physical NIC and moved the second vNIC to this vswitch. This configuration will not pick up an IP address from the DHCP server even though the GUI "console" shows the relevant port icons as green. Do I need to change something with
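     A minimal sketch of the grub.cfg change described above, assuming Jun's loader layout; the MAC values are placeholders and should match the addresses assigned to the VM's virtual NICs:

         set netif_num=2
         set mac1=001132AABBCC
         set mac2=001132AABBCD

     With the grub.cfg MACs matching the vNIC MACs in the VM settings, as noted in the post above, DSM sees both interfaces.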
  8. Thanks, IG-88. I took a close look at the working VM and realized that I needed to make changes to things like the NIC type, etc. In the end, I essentially cloned both the VM and the boot image file and made the necessary changes to get it up and running as a distinct DSM instance.
  9. I've had a DSM 6.1.7-15284 VM running on Jun's 1.02b loader for a while and a DSM 6.2.2-24922 running for a shorter while on Jun's 1.03b loader. Each VM uses a different RDM hard disk. Earlier today, I accidentally hit update on the GUI for the first one when it offered to upgrade to 6.2.2-24922 and it (obviously!) won't boot correctly. So I replaced the loader in the VM with a suitably configured 1.03b, chose the ESXi option in the console, but it still won't boot DSM. I have the serial port on the VM configured to map to a network port which displays "mount failed" an
  10. I was able to run DSM DS918+ 6.2.1 Update 6 over Jun's Mod V1.04b loader successfully though I don't run it all the time.
  11. I'm assuming you mean M.2 NVMe and not SATA. Things may have changed for the better, but I've only been successful in using M.2 NVMe SSDs by running Xpenology as a guest OS on ESXi.
  12. Olegin, that is what I had done too. Thanks.
  13. I haven't done stress testing, but it absolutely splits traffic across both GE NICs, as reported by both the Windows 10 desktop and the baremetal DSM. I suppose I could time the transfer with a watch too and report back to this thread. After some experimentation, I managed to reproduce the results under ESXi. I created a new vswitch, attached an unused GE port as an uplink, created a new port group that used this vswitch, and added the port group as a new network interface to the DSM VM. I then configured SMB to use SMB3 through the GUI and by making an edit to smb.conf and was off and
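     For context, the smb.conf edit referred to above is typically along these lines; parameter support depends on the Samba build shipped with DSM, so treat this as a sketch rather than a verified DSM setting:

         [global]
             server max protocol = SMB3
             server multi channel support = yes

     SMB3 multichannel is what lets a single transfer spread across both GE NICs.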
  14. Not able to access my server at the moment, but the VM configuration has two vCPUs, 2 GB of memory, a hard disk for the bootloader on an IDE controller, another for an RDM'ed hard drive on a SATA controller, and an E1000 network adapter connected to vswitch0. The Lenovo TS140 came with an i217LM on the motherboard, which shows as vmnic0 configured as the uplink on vswitch0. The 4 new ports are showing up as vmnic1 through vmnic4 but are not configured to anything. Since my opening post, I did try adding one of the new ports as an additional uplink to vswitch0 but tha
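     A hedged sketch of how one of the unused ports could instead be wired up from the ESXi shell, using placeholder names for the vswitch and port group:

         # Create a second standard vswitch, attach an unused port as its uplink, and add a port group
         esxcli network vswitch standard add --vswitch-name=vSwitch1
         esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
         esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="DSM-LAN2"

     A second vNIC on the DSM VM can then be attached to that port group, mirroring the setup described in the multichannel post above.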