berwhale

Members
  • Content Count

    183
  • Joined

  • Last visited

Community Reputation

1 Neutral

About berwhale

  • Rank
    Advanced Member

Recent Profile Visitors

559 profile views
  1. What do your VM settings look like? Here's mine with a 4-port SATA adapter and a USB controller passed through... Note: Hard disk 2 is a vmdk on my main datastore, which is an SSD; I did this so that Plex metadata sits on the fastest available drive.
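Since the screenshot doesn't survive here, a rough sketch of the equivalent entries in the VM's .vmx file (key names from ESXi 6.x; the file names and PCI addresses are placeholders, not my actual values):

```ini
; hypothetical excerpt from the DSM VM's .vmx - illustrative only
sata0.present = "TRUE"                 ; virtual SATA controller
sata0:0.fileName = "synoboot.vmdk"     ; Hard disk 1: the Xpenology bootloader
sata0:1.fileName = "dsm-meta.vmdk"     ; Hard disk 2: vmdk on the SSD datastore
pciPassthru0.present = "TRUE"          ; 4-port SATA adapter (DirectPathIO)
pciPassthru0.id = "00:1f.2"            ; placeholder PCI address - use your own
pciPassthru1.present = "TRUE"          ; USB controller passed through
pciPassthru1.id = "00:14.0"            ; placeholder PCI address
```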
  2. I wouldn't worry about it too much; ESXi will manage CPU resources across all your VMs and CPU cores. 'Over-allocating' vCPUs to a little-used VM is unlikely to impact your main systems because, by definition, it's not doing much. I allocate 4 vCPUs to my main server with Plex and 2 vCPUs to each of the other DSM instances (downloader, surveillance, test) - this is on a Dell T20 (3.4GHz quad-core Xeon) with 24GB RAM.
  3. Hi Jokerigno, It's quite easy to convert from bare metal (physical) to virtual (P2V) without losing any data. DSM and its configuration are stored on your data disks, so all you need to do is virtualize the bootloader (Xpenology) - i.e. replace the USB key with a vmdk attached to a virtual machine. You then attach your existing disks to the virtual machine and it will retain all of your configuration and data. If you match the bootloader versions on the physical and virtual machines, you should avoid any DSM upgrade during the P2V. When trying this for the 1s
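To flesh out the "replace the USB key with a vmdk" step: one common approach (a sketch, not the only way) is to upload synoboot.img to the datastore and wrap it in a vmdk descriptor file so ESXi can attach it as a disk. The file names and the RW sector count below are placeholders - the sector count must match your actual image size (bytes ÷ 512):

```
# synoboot.vmdk - hypothetical descriptor wrapping the raw bootloader image
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfs"

# Extent description - 102400 is a placeholder sector count (image bytes / 512)
RW 102400 VMFS "synoboot.img"

# The Disk Data Base
ddb.adapterType = "lsilogic"
ddb.virtualHWVersion = "13"
```

Attach synoboot.vmdk as the first disk (SATA 0:0) of the new VM, then add your existing data disks after it.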
  4. - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2-23739
     - Loader version and model: Jun's Loader v1.03b - DS3615xs
     - Using custom extra.lzma: NO
     - Installation type: VM - ESXi 6.7 on Dell T20
     - Additional comments: Switched vNIC to Intel E1000e (was VMXNET3)
  5. Yes and yes (as long as the disks are passed through or accessed as RDMs). Several years ago, I migrated a set of disks attached to a bare-metal Xpenology install to a DSM VM hosted on ESXi. I have the disks attached to a dedicated SATA adapter and this is passed through to the DSM VM. DSM has complete control of the disks (they're not visible in ESXi). I've also tested going back the other way. You need a motherboard and CPU that support DirectPathIO to pull this trick off. I believe it's also possible by passing the disks through as Raw Device Mappings (RDM), I do have a disk pass
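For the RDM route mentioned above, the usual incantation in the ESXi shell is roughly this (a sketch - the device identifier is a placeholder you'd look up yourself first):

```shell
# List the physical disks to find the right device identifier
ls /vmfs/devices/disks/

# Create a physical-mode RDM pointer file for one disk
# (t10.ATA_____..._PLACEHOLDER stands in for your real disk ID)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____WDC_WD40EFRX_PLACEHOLDER \
  /vmfs/volumes/datastore1/dsm/disk1-rdm.vmdk

# Then attach disk1-rdm.vmdk to the DSM VM like any other virtual disk
```

`vmkfstools -z` creates a physical-mode RDM (the guest sees the raw disk); `-r` would create a virtual-mode one instead.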
  6. What's the reason to attach 2 drives rather than one bigger one? Are they on different data stores?
  7. What type of drives are attached to SATA 1? What is the Compatibility and Disk Mode set to? Does the install work if you just attach one of the drives?
  8. Your 1st drive, Synoboot.vmdk, should be attached to SATA 0. If you attach it to SATA 1, DSM will try to wipe it during install.
  9. The USB drive is just a bootloader, so just recreate it with the correct version of Xpenology. DSM and its config are stored across all the HDDs in your array.
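If anyone needs the actual write step: on Linux/macOS it's a single dd (a sketch - /dev/sdX is a placeholder for your USB stick; double-check the device first, dd will happily overwrite the wrong disk):

```shell
# Identify the USB stick before writing anything (destructive operation!)
lsblk

# Write the Xpenology bootloader image to the stick
# Replace /dev/sdX with your actual USB device
sudo dd if=synoboot.img of=/dev/sdX bs=4M conv=fsync status=progress
```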
  10. I tried changing various offload settings on the Realtek NIC on my motherboard. I got some small improvements and increased throughput up to 49MB/sec, but it's still nowhere near where it should be. I've ordered a Dell Broadcom server NIC to see if that helps.
  11. I tried the settings suggested for smb.conf in this promising guide: https://turlucode.com/synology-optimizing-samba-2/ Transfers are still running at ~40MB/sec. *hint*: both etc/smb.conf and etc.defaults/smb.conf are editable using the Config File Editor package if you don't fancy using vi. *edit* It's my PC. I tested from another Windows 10 PC (with a much lower spec) and I get 100-110MB/sec.
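For anyone following along, the kind of [global] tweaks that guide suggests look like this (a sketch of illustrative Samba tuning options, not the guide's exact values - tune the buffer sizes for your own NIC and CPU):

```
# smb.conf [global] section - illustrative tuning options only
[global]
    socket options = TCP_NODELAY IPTOS_LOWDELAY SO_RCVBUF=131072 SO_SNDBUF=131072
    read raw = yes
    write raw = yes
    use sendfile = yes
    aio read size = 16384
    aio write size = 16384
```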
  12. Tried SMB2, SMB2 + Large MTU and SMB3 - it makes no difference (SMB1 doesn't appear to work at all).
  13. Hi, did you ever find out what the problem was? I seem to have a similar problem. I know it's not a network issue - I can run iperf and consistently transfer over 100MB/sec both ways.
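For reference, the two-way iperf check is just this (iperf3 syntax; 192.168.1.10 is a placeholder for the NAS's IP):

```shell
# On the NAS / DSM side: run the server
iperf3 -s

# On the workstation: test workstation -> NAS, then the reverse direction (-R)
iperf3 -c 192.168.1.10
iperf3 -c 192.168.1.10 -R
```

If both directions saturate the link here but SMB copies don't, the bottleneck is above the network layer (SMB settings, disks, or the client).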
  14. Interesting, I'll play with the NIC settings. I get the same speeds with larger files. It's always the same speed; it doesn't start fast and then degrade. I've ordered a second-hand HP Smart Array P410 and a couple of SFF-8087 to SATA cables from eBay, I'm hoping that's going to help.
  15. I did a bit more testing... Copy 1.2GB MKV from main 'production' DSM VM to workstation = 45MB/sec (data hosted on HDDs connected via a Marvell SATA adapter passed through with DirectPathIO). Copy 1.2GB MKV from 'test' DSM VM to workstation = 80MB/sec (data hosted on an SSD datastore). Both DSM VMs are at the same version and are hosted on the same vSwitch. So maybe there is an issue with DirectPathIO on the cheap Marvell adapter that I'm using? I wonder if it's worth swapping it out for a cheap SAS HBA off eBay...