unmesh

Members
  • Content Count: 97
  • Joined
  • Last visited

Community Reputation

  0 Neutral

About unmesh
  • Rank: Regular Member


  1. Olegin, that is what I had done too. Thanks.
  2. I haven't done stress testing, but it absolutely splits traffic across both GbE NICs, as reported by both the Windows 10 desktop and the bare-metal DSM. I suppose I could time the transfer with a watch too and report back to this thread.

     After some experimentation, I managed to reproduce the results under ESXi. I created a new vSwitch, attached an unused GbE port as an uplink, created a new port group that used this vSwitch, and added the port group as a new network interface to the DSM VM. I then configured SMB to use SMB3, both through the GUI and by editing smb.conf, and was off and running. One thing that had concerned me was how Windows would know which two IP addresses to use for the server; it turns out that my mapped network drive from the single-port days was enough to make the connection.

     I had considered 10G but put it off because I wanted to do multipoint and wasn't ready to invest in a 10G switch and a bunch of 10G NICs. Then I discovered that multiport GbE NICs are very cheap on eBay; my I340-T4 was only $15! I will disconnect the cables to the second port everywhere for now and hope that gets rid of any potential instability or data corruption issues.
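     For anyone repeating this from the ESXi shell instead of the GUI, the vSwitch plumbing is roughly the following. This is only a sketch, assuming the spare port shows up as vmnic1 and using DSM-LAN2 as a made-up port group name:

         # create a second standard vSwitch (name is arbitrary)
         esxcli network vswitch standard add -v vSwitch1
         # attach the spare physical port (assumed vmnic1) as its uplink
         esxcli network vswitch standard uplink add -v vSwitch1 -u vmnic1
         # add a port group for the VM to connect to (hypothetical name)
         esxcli network vswitch standard portgroup add -v vSwitch1 -p DSM-LAN2

     On the Samba side, SMB3 multichannel comes down to a couple of [global] options. These are the stock Samba option names; whether DSM preserves hand edits to its generated smb.conf across reboots is an assumption to verify:

         [global]
             server min protocol = SMB3
             server multi channel support = yes

     From the Windows side, Get-SmbMultichannelConnection in PowerShell shows whether both paths are actually being used.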
  3. Not able to access my server at the moment, but the VM configuration has two vCPUs, 2GB of memory, a hard disk for the bootloader on an IDE controller, another for an RDM'ed hard drive on a SATA controller, and an E1000 network adapter connected to vSwitch0. The Lenovo TS140 came with an i217LM on the motherboard, which shows as vmnic0 configured as the uplink on vSwitch0. The 4 new ports show up as vmnic1 through vmnic4 but are not configured to anything. Since my opening post, I did try adding one of the new ports as an additional uplink to vSwitch0, but that did nothing for file transfer performance.
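     For reference, the extra guest NIC ends up as a handful of lines in the VM's .vmx file. A sketch, assuming the new port group is the made-up DSM-LAN2 from above and sticking with the E1000 device:

         ethernet1.present = "TRUE"
         ethernet1.virtualDev = "e1000"
         ethernet1.networkName = "DSM-LAN2"
         ethernet1.addressType = "generated"

     Adding the adapter through the host client UI writes equivalent entries, so hand-editing the .vmx is only needed if you prefer to script it.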
  4. Having had good luck with using multiple NICs to increase file transfer speeds between a Windows 10 desktop and a bare-metal Xpenology install, I'd like to do the same with an Xpenology-on-ESXi-6.7 system. To this end, I installed a quad-port Intel Gigabit NIC alongside the built-in i217LM, and VMware sees the 4 new physical NICs. I'm at a complete loss for how to get an additional NIC mapped over to the Xpenology VM. I don't want to do teaming, but rather use SMB 3.0 Multichannel end-to-end so that the network does not have to be specially configured. I will leave the other three Gigabit ports unconnected for now. Should I add a physical NIC to the existing vSwitch as an additional uplink, or will the E1000 driver only allow 1Gbps of throughput to the VM? If so, do I create an additional vSwitch for one of the new Gigabit Ethernet ports? Any help will be greatly appreciated. Thanks.
  5. In the process of trying out Jun's 1.03b bootloader on a new VM on my ESXi server, I accidentally reused the serial number from another VM that was using the 1.02b bootloader. From other threads, I figured out how to edit the grub.cfg in the vmdk version of the bootloader to provide a new serial number, and the VM booted, but the Control Panel -> Info Center -> General page has no information on it. Any suggestions for how to fix this will be greatly appreciated. (It is possible that this tab was blank when the original VM was created, since I did not bother to look.) P.S.: Synology Assistant does show the correct new serial number.
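     For anyone searching later, the lines I edited in Jun's grub.cfg look roughly like the following. The variable names are as I recall them from the 1.02b/1.03b loaders and may differ between versions; the values here are placeholders, not real identifiers:

         # serial number and first MAC, placeholders only
         set sn=XXXXXXXXXXXXX
         set mac1=001132XXXXXX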
  6. I'm guessing this is for me, so I will take this to General Questions ...
  7. I came upon this thread when I accidentally reused the same serial number while creating a test VM on my ESXi server to migrate from the 1.02b to the 1.03b bootloader. Unlike sbv3000's experience, one or the other XPE VM crashes after a while when both are running, and I need to do a controlled experiment to see if the serial number is the root cause. In any case, is there a quick way for me to change one of the serial numbers without going back to the original bootloader image file, finding and editing the grub.cfg, and remaking the vmdk file? Thanks.
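     The in-place edit I was hoping for may just be a loop mount of the existing synoboot.img on any Linux box. A sketch, assuming grub.cfg lives on the first partition and that partition starts at sector 2048, which is an assumption worth checking with fdisk first:

         # confirm where partition 1 starts (512-byte sectors assumed)
         fdisk -l synoboot.img
         # mount partition 1 at the computed byte offset
         sudo mount -o loop,offset=$((2048*512)) synoboot.img /mnt
         # edit the 'set sn=' line, then clean up
         sudo vi /mnt/grub/grub.cfg
         sudo umount /mnt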
  8. I did a successful install on the ASRock!
  9. Disabling the onboard NIC fixed the power-off problem! Thanks, guys.
  10. @bearcat I might repurpose a seldom-used ASRock Q1900DC-ITX with v1.04b and DS918+ based on your experience. I've been wanting to play with Btrfs, and this will give me a platform to try it out on before I switch my "production" NASes over.
  11. @jarugut I have a different BIOS ("The Bay") that does not allow me to choose the ACPI version, but I will check that the onboard NIC is disabled, per your suggestion and @bearcat's earlier one. Thanks.
  12. @bearcat That was an old signature! And I will look at the NIC setting tonight.
  13. I recently upgraded from the 1.02b to the 1.03b loader and from DSM 6.1.7 to 6.2.1 (DS3615xs), and everything works except shutdown. I have C1E disabled in the BIOS, and my N54L uses an Intel NIC. Nothing extra. Any suggestions? Thanks.
  14. Be aware that it was not completely successful for me! I downloaded the v1.03b bootloader and edited grub.cfg to put in my serial number and MAC address. I shut down the DSM 6.1.7 VM and pointed the virtual HD to the new bootloader. I also changed the virtual NIC to e1000e. I then booted the VM, picked the ESXi entry from the bootloader menu, opened a web browser to the DSM's IP address, and did a manual update to the DS3615 6.2-23739 pat file, let it reboot, and then logged in to the DSM GUI. That is when I noticed that several of the Control Panel screens were messed up, and I went back to 6.1.7.
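     The vNIC swap, for what it's worth, is a one-line change in the .vmx (same key family as the sketch further up), assuming the adapter in question is ethernet0:

         ethernet0.virtualDev = "e1000e"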
  15. For now, I have gone back to 6.1.7.