XPEnology Community

Posts posted by berwhale

  1. Another option is to buy a cheap SATA controller that is supported by Xpenology, like this...

     

    http://www.dx.com/p/iocrest-marvell-88s ... een-282997

     

     ...and use VMDirectPath I/O to hand control of the adapter (and all its disks) directly to the Xpenology VM. This avoids the need to spend $$$ on an HBA that's on the supported HW list for ESXi.

     

    More about VMDirectPath I/O here: https://kb.vmware.com/selfservice/searc ... Id=2142307
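
     

     For reference, finding the card on the ESXi host goes roughly like this (just a sketch; device names and the exact menu path will vary by ESXi version):

         # list PCI devices on the host and look for the Marvell SATA controller
         lspci | grep -i marvell
         # fuller detail (vendor/device IDs, current owner) if you need it
         esxcli hardware pci list
         # passthrough itself is then enabled per device in the vSphere Client
         # (host Configuration > Advanced Settings / DirectPath I/O), followed by
         # a host reboot and adding the PCI device to the Xpenology VM's settings.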

  2. Yes, you can see that I paid the price of the CPU for the whole server; the rest of it was effectively free :smile:

     

     That said, the 1.1GHz Celeron 847 in my old box ran all of the apps that I needed just fine. I only upgraded because it was getting a little long in the tooth and I wanted to be able to transcode stuff at a decent speed in Plex (the Celery would only manage 1.1x realtime).

     

     You can buy the T20 with a G3220 CPU, and it's been going for around £100 in the UK after rebate. That's a great box for the money, but I don't think that CPU supports IOMMU (VT-d), which is required for VMDirectPath I/O in ESXi (and which is what enabled me to buy a cheap SATA card rather than one supported by ESXi).
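
     

     If you want to check a CPU/board combo, booting a Linux live USB and looking for IOMMU messages is a quick sanity check (only a sketch - it shows whether the kernel found an IOMMU, which needs both CPU and BIOS support, so also check Intel ARK for the exact SKU):

         # Intel VT-d shows up as DMAR in the boot log; AMD's equivalent as AMD-Vi/IOMMU
         dmesg | grep -i -e dmar -e iommu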

  3. Hi,

     

    I have a problem mounting remote folders to access media files in Plex. See the servers involved in my sig below.

     

     I'm trying to connect a VM running DSM and Plex on the T20 to a CIFS share on the physical DSM box. I created a shared folder on the VM and then remote mounted the share from the physical box. This appears to work, but when I try to access the share in File Station, it just gets stuck at a 'Loading...' prompt (I'm connecting using the admin account).

     

     I've set up a test share on another DSM VM and I can connect to that fine (I copied a couple of MKVs there and Plex was able to find, index and play them perfectly).

     

    When I first tried this, the physical box was running an older version of DSM, but I've now upgraded to the latest version and update and the problem persists.

     

    I created a new share on the physical server and copied 4 or 5 MKVs there. This seemed to be working, as I could browse the files via File Station. However, as soon as I pointed Plex at it, it went crazy slow and I was unable to browse via File Station or see anything in my Plex library.

     

     I always end up rebooting the DSM VM to clear the remote mount. If I try to remove it via the GUI or console, it always hangs or tells me that the mount is in use.
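
     

     For reference, the console attempt is roughly this (the mount point here is just an example path, not my real share):

         # see which CIFS mounts DSM thinks are active
         mount | grep cifs
         # try to release a stuck mount (-f forces it, -l is a lazy unmount)
         umount -f -l /volume1/video/remote
         # if lsof is available on the box, see what still has the path open
         lsof /volume1/video/remote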

     

    Any ideas?

  4. DSM needs to be installed to one or more HDDs, so I would disconnect your existing drives and plug in a spare HDD or two.

     

    PMS is free, although you get more features with a Plexpass.

     

     Plex notifies you that an upgrade is available via a banner on the app's web page. You have to click the link to the Plex website, download the .spk file and then manually install the package via the DSM package manager app - it usually takes me less than 5 minutes in total.
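
     

     I normally do this through Package Center's 'Manual Install', but I believe the same thing can be done from an SSH session with synopkg - something like this (untested by me, and the filename is just a placeholder):

         # manual install of a downloaded package from the DSM command line
         synopkg install /volume1/public/PlexMediaServer-x.x.x.spk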

  5. > @berwhale,
     > Congrats, sounds like a beast. What processor is it? E3-1220 / 1225 v3?
     > £400 seems incredibly low priced for a server... must have been one heck of a rebate!

     

     I bought the server for £250 from eBuyer with a £70 rebate from Dell. eBuyer were bundling an extra 4GB of RAM with it at the time, so it started with 8GB. Processor is the E3-1225 v3 3.2GHz.

     

    16GB ECC RAM from eBuyer = £80

    Samsung EVO 850 250GB from Amazon = £71

    2.5" WD Black 750GB from Amazon = £47

    Marvell 88SE9215 PCI-e SATA from DX = £22

     

    Total = £399

     

     Both the EVO and the WD Black fit in the optical drive bay at the top of the case, which leaves all four 3.5" bays free for the data drives I'll be moving from my old Xpenology box. A friend gave me the quad port Intel NIC, so I saved a bit there.

     

    I think I've ended up with significantly more grunt than a £400 Synology NAS :smile:

  6. FYI - I tested VMDirectPath as follows...

     

     1. Did a bare metal install on a spare PC with a couple of 1TB drives. Copied some test data to volume1.

    2. Moved 1TB drives to the T20 and connected them to the Marvell SATA controller.

    3. Configured PCI-e Pass-through in the vSphere Native Client - connected the Marvell card directly to my test DSM VM.

     4. Confirmed that the drives and volume1 were accessible in the test DSM VM - tested both reading and writing to volume1 (see the quick checks at the end of this post).

     

    I haven't moved my 'production' drives to the T20 yet as I need to sort out the config of all my apps first (SABNZBD+, Plex, SickRage, etc.) - the apps will reside on separate DSM VMs in the new setup (I want to separate file serving, downloading and media streaming functions).
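
     

     For anyone wanting to repeat the step 4 check over SSH, commands along these lines should confirm it (device names and the test file path are just examples):

         # confirm the passed-through disks are visible and the volume assembled
         fdisk -l
         cat /proc/mdstat
         df -h /volume1
         # simple read/write sanity check
         cp /volume1/testdata/sample.mkv /tmp/ && touch /volume1/testdata/write-test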

  7. I paid about £400 for the Dell T20 in my signature (after a rebate from Dell). That price includes the server, an additional 16GB of ECC RAM, the SSD and HDD for the ESXi datastores and a cheap Marvell 4-port SATA card (this is passed through to one of the DSM VMs and will soon have the four 3TB drives from my old server attached to it).

     

     This will handle everything you mention above and a lot more. To give you an idea of how fast the Xeon is, it will transcode a 1080p MKV to 720p 4Mbps at around 12x realtime (so around 5 minutes for an hour of video). The T20 is also extremely quiet (in fact, it's quieter than my water-cooled main PC).

  8. If you have an ESXi server with a CPU that supports VMDirectPath I/O (like the Xeon E3 in my T20), you can buy a cheap PCI-e controller that is supported by Xpenology (like this http://www.dx.com/p/iocrest-marvell-88s ... een-282997) and pass control of the PCI-e card over to the Synology VM. DSM then has direct control of your data disks (they're not even visible to ESXi). This avoids the need to buy an expensive HBA that is supported by ESXi.

     

    This also means you can transplant disks from a bare metal Xpenology installation into a virtualized setup - which is what I'm preparing to do right now :smile:

  9. Once you have ESXi up and running, you can install pretty much whatever you want in the guest virtual machines. I've just started playing with it myself and I have the following VMs configured: 4 Xpenology servers, 1 Windows 8.1 64-bit, 1 Ubuntu 15.04 64-bit and 1 TurnKey Linux appliance running WordPress.

     

    There's a pretty good install guide here: http://www.vladan.fr/vmware-esxi-6-installation-guide/

     

     You will need a Windows PC to run the vSphere Client to manage the ESXi server.

  10. Is this in an HP MicroServer? If so, there are some specific changes you need to make in the BIOS to get this working. I used to have 6 drives in my N36L: 4 in the cages and 2 in the ODD bay (one connected to the ODD SATA port, the other to the eSATA port).

  11. OK, I've built an Ubuntu 15.04 64-bit VM and updated it. I've installed the SynoCommunity packaging framework and I've managed to successfully compile a package from the syno git repo (Transmission) for the bromolow architecture (which is what XPEnology needs).

     

     In order to compile open-vm-tools, I think I need to clone the open-vm-tools git repo, create a makefile and then compile it for bromolow. I think the skill is in creating the makefile (the open-vm-tools documentation mentions GNU automake, which I have no idea how to use yet).
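
     

     Assuming it's a standard autotools project, the flow I'm expecting is roughly this (untested - the cross-toolchain path is just a placeholder, and I'd expect to need extra --disable/--without flags to drop X11 and other things DSM doesn't have):

         git clone https://github.com/vmware/open-vm-tools.git
         cd open-vm-tools/open-vm-tools
         # generate the configure script, then point it at a bromolow cross-compiler
         autoreconf -i
         ./configure --host=x86_64-pc-linux-gnu \
             CC=/usr/local/bromolow-toolchain/bin/x86_64-linux-gnu-gcc
         make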

     

     I've run out of time today; I may get a chance to take another look at the weekend.

  12. Hmmm, it still reverts to 10 Full after a reboot, even though I've fixed the line speed to 1000 Full. I'll try switching the port and cable to see if that helps.

     

    I think I will probably use the on-board NIC for out of band management (Intel AMT) and one of the ports on the Intel quad card for ESXi management.

     

    Update: I created another vSwitch for the guest network and teamed two ports of the quad card to it. I've left the management network on the internal NIC and it seems to negotiate 1000 Full fine now.
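
     

     For anyone else fighting this, the link speed can also be checked and forced from the ESXi shell - something like the following (the vmnic number is just an example):

         # show link state and speed for each physical uplink
         esxcli network nic list
         # force 1000/full on one uplink...
         esxcli network nic set -n vmnic0 -S 1000 -D full
         # ...or put it back to auto-negotiation
         esxcli network nic set -n vmnic0 -a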

  13. Hi,

     

    I've only just started playing with ESXi myself, but I'll have a go at answering your questions.

     

     I believe that the ESXi host will try to honor the power state request made by the guest OS. So when the Windows VM tries to hibernate, ESXi will capture the request and suspend the VM. I assume that this is also the case for DSM, but I'd be interested to hear a definitive answer.

     

     N.B. 'Bare metal' refers to an OS being deployed to a physical server; it's the opposite of deploying to a hypervisor (e.g. ESXi, Hyper-V, etc.).
