XPEnology Community

Posts posted by Tuatara

  1. Is there an idiot's guide on how to install this on ESXi 5.1, please?

     

    I've tried with IDE disks, with SCSI disks, with E1000, with VMXNET, and each time the VM boots and is found by Synology Assistant, but I get an Error 38 when the installer tries to format/install.

     

    Error 38 means that the motherboard cannot communicate with the HDD - basically, something is wrong. [Ref: Synology Forums]

     

    From the initial instructions:

    Create the virtual machine in ESXi and add hardware:

    IDE controller with a single hard disk (the VMDK file in the archive). Boot from this.

    PVSCSI controller with raw hard disks or VMDK files for data. I've tested with 3 disks so far.

    VMXNET3 network. Only a single interface in bridged mode has been tested so far. The MAC address can be anything.

     

    My recommendation would be to use RDM and PVSCSI controllers.
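
    For what it's worth, here's a minimal sketch of creating a physical-mode RDM from the ESXi shell; the naa. device ID and datastore path below are placeholders - substitute your own:

      # list the physical disks available for mapping
      ls /vmfs/devices/disks/

      # create a physical-mode (pass-through) RDM pointer file on a datastore
      vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk

    (Use -r instead of -z for a virtual-mode RDM.) Then attach the resulting .vmdk to the VM's PVSCSI controller as an existing disk.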

     

    I'll take notes during my attempts (ESXi 5.1, Vt-d & VMDirectPath I/O & a SIL controller (SMART / spin-down), RDM using PVSCSI ...) and post back a write-up when I'm successful.

  2. For reference, I've found the answer to my earlier question:

    (http://blogs.vmware.com/vsphere/tag/pvscsi)

     

    I'm hoping to be able to use 10 drives (Raw Device Mapping) in a SHR-2 (two drive data redundancy) combination under ESXi. Do you know if your version will support this many (physical pass-through PVSCSI) drives?

     

    The following is true for all the Virtual SCSI Controllers that are supported on VMs at this time (LSILogic, BUSLogic & PVSCSI).

    1. VMDK (Virtual Machine Disk) approach. VMDKs have a maximum size of 2TB – 512 bytes. The maximum amount of storage that can be assigned to a VM using VMDKs is as follows: 4 controllers x 15 disks each x 2TB (– 512 bytes) = ~120TB.
    2. Virtual (non pass-thru) RDM approach. vRDMs also have a maximum size of 2TB – 512 bytes (same as VMDKs). Therefore, the maximum amount of storage that can be assigned to a VM using vRDMs is the same: 4 controllers x 15 disks each x (2TB – 512 bytes) = ~120TB.
    3. Physical (pass-thru) RDM approach. The maximum size of a pRDM since vSphere 5.0 is ~64TB. Hence, the maximum amount of storage that can be assigned to a VM using pRDMs (assuming vSphere 5.0) is as follows: 4 controllers x 15 disks each x 64TB = ~3.75PB.

    Of course, these are theoretical maximums & should be considered as such. I personally don't know of any customers who are close to this maximum size.

     

    So ... my goal of 10 drives using PVSCSI controllers with RDM pass-through should, in theory, be possible. Now I just need the time to set it all up!
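
    For reference, a sketch of the .vmx entries involved - a single PVSCSI controller takes up to 15 disks (SCSI ID 7 is reserved for the controller itself), so 10 fits comfortably on one. The file names here are placeholders:

      scsi0.present = "TRUE"
      scsi0.virtualDev = "pvscsi"
      scsi0:0.present = "TRUE"
      scsi0:0.fileName = "disk1-rdm.vmdk"
      scsi0:0.deviceType = "scsi-hardDisk"

    ...and so on for scsi0:1 through scsi0:10, skipping scsi0:7.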

  3. Also, could you test if HDD hibernation is working? Thanks.

     

    I've only got Intel SSDs and WD Caviar Green drives available at the moment, which are low-power & auto-hibernate (the reason I got them). I'll see what I can get to test, though ... I must have another old drive around I could use. What drive(s) do you want tested? Seagates?

     

    Note: I'm assuming you mean on VMDirectPath I/O & a Synology-controlled spin-down time.

    Oh, I didn't know that some hard drives can auto-hibernate. That's good news; I was afraid that my fake DSM box would leave my hard drives spinning 24/7, which can't be a good thing unless they are server-class, super-expensive drives. I only have two 500GB WDs and one 500GB Samsung for now, for testing purposes, and I was planning to buy 2x 3TB WD Greens or Reds if I'm able to make the firmware run smoothly on my box.

     

    So, yes, could you test the Synology hibernation with your WD drives in VMDirectPath I/O, please? I don't see how Synology hibernation could work with VHD or RDM, but if that's actually the case, please let us know :smile:

     

    AFAIK, any VHD or RDM drive will not be controllable through Synology hibernation: in my (limited) testing, none of the SMART controls or diagnostics are available to the VM OS. Therefore, I believe that only a VMDirectPath I/O-attached drive would have SMART attributes available.
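
    If you want to check this yourself from the VM's command line, something along these lines should show what the guest actually sees (assuming smartctl/hdparm are present on the box; /dev/sda is a placeholder):

      # does the guest see a real drive identity and SMART data, or a virtual disk?
      smartctl -i /dev/sda

      # query the drive's current power state (active/idle vs. standby)
      hdparm -C /dev/sda

      # force the drive into standby and see whether it stays spun down
      hdparm -y /dev/sda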

     

    I've found a 1TB Seagate I could wipe & use, and a SIL 3124 for VMDirectPath I/O testing. I'll see when I can do this ... but it won't be for a while - at least until the weekend, maybe sometime next week. Still, it would be nice to get an ESXi Synology up and humming ... :mrgreen:

  4. Also, could you test if HDD hibernation is working? Thanks.

     

    I've only got Intel SSDs and WD Caviar Green drives available at the moment, which are low-power & auto-hibernate (the reason I got them). I'll see what I can get to test, though ... I must have another old drive around I could use. What drive(s) do you want tested? Seagates?

     

    Note: I'm assuming you mean on VMDirectPath I/O & a Synology-controlled spin-down time.

  5. Anyone tried VMDirectPath I/O for direct access to a SATA/RAID controller?

     

    Not yet. I was going to use RDM (Raw Device Mapping) for the drives, but I do have a box running ESXi with Vt-d support, so I could give it a go that way too. What SATA controllers are supported?

    Some people have reported that RDM performance is similar to a "classic" virtual hard drive: poor. Regarding the supported SATA controllers, you should check this post. VMDirectPath I/O is tricky - everything has to support PCI passthrough (motherboard, BIOS, CPU, and of course the controllers) - but it could be the most practical way to use XPEnology in terms of performance and reliability (snapshots).

     

    I wish I could test this but none of my current hardware supports XPEnology in standalone mode, let alone VT-d virtualization.

     

    I've found RDM performance to be very good - better than any VMFS configuration (either iSCSI or local), but not quite up to a native or VMDirectPath I/O setup. The performance loss in my (informal) testing was less than 5% (not 50% as in the link) - though I was using an i7 CPU. I've only used Vt-d and VMDirectPath I/O for a high-I/O database system; otherwise the performance loss has not been noticeable for anything else.
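
    If anyone wants to repeat that kind of informal comparison, simple sequential tests along these lines are what I mean (device and output path are placeholders - and mind which device you point these at):

      # raw sequential read timing straight off the drive
      hdparm -t /dev/sda

      # sequential write through the filesystem, flushed to disk before timing ends
      dd if=/dev/zero of=/volume1/testfile bs=1M count=1024 conv=fdatasync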

     

    For me, RDM works quite well - I had used it for my storage with FreeNAS (RAID-Z) without issues. Synology has since taken over my storage needs, but having an ESXi VM would provide a backup and also a virtualised testing/deployment environment - especially quick on an i7. :grin:

     

    Here's a link to a whole bunch of tests/reviews - probably less biased than a "personal" experience.

    http://communities.vmware.com/docs/DOC-10799

     

    Because I have a machine with VMDirectPath support, I'll give it a try (soon, I hope) and report back. I've got a Silicon Image 3124 lying around somewhere, and Intel ICH on the motherboard ... possibly a Marvell hiding somewhere too.

  6. It's easy, just take some tcpdump captures; in the first 5 minutes it connected to these IP addresses:

     

    Disclaimer: I have yet to install XPEnology (hence the question), but I will do so shortly (on ESXi).

     

    Just use nslookup to get the domain names and add them to the hosts file, as Tuatara said.

     

    The problem with a tcpdump/firewall capture is that it does not show the original DNS lookups performed, and the IPs themselves (as you provided them) don't have rDNS entries for nslookup (except one).

     

    59.124.41.250 - 250.41.124.59.in-addr.arpa name = 59-124-41-250.HINET-IP.hinet.net.

    59.124.41.245 - 245.41.124.59.in-addr.arpa name = 59-124-41-245.HINET-IP.hinet.net.

    91.121.40.14 - 14.40.121.91.in-addr.arpa name = 91-121-40-14.ovh.net.

    188.92.232.154 - 154.232.92.188.in-addr.arpa name = ukdl.synology.com.

     

    The lookups could be going to round-robin servers or be handled by any of a number of methods - which is probably why the IPs have no rDNS entries.

     

    It would be best to know the DNS lookups that are placed in the source code itself, as opposed to just viewing the output. But if that's not possible (i.e. it's all binary blobs), then Wireshark will be the next step. :cool:
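
    In the meantime, a capture of just the DNS traffic would at least show which hostnames are looked up, rather than bare IPs - something along these lines (the interface name is a guess; adjust for the box):

      # watch DNS queries/answers live during the first few minutes of uptime
      tcpdump -i eth0 -n port 53

      # or save them for Wireshark later
      tcpdump -i eth0 -s 0 -w dsm-dns.pcap port 53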

     

    On the bad side, you will not be able to update packages :roll:

     

    True ... but unless you then register the XPEnology box in the MyDS Center with a (real/fake/duplicated) serial number, automatic download/update will not be available - so this is probably not a "recommended" option either. As I use Synology products daily for my work, my benefit from XPEnology would be personal use, plus virtual deployment & infrastructure testing (scale, loading, sizing, etc.) before purchasing and installing actual Synology hardware.

     

    Again ... it all depends on how paranoid you are. :wink:

  7. I wouldn't use this for anything other than evaluation yet.

     

    I haven't yet installed it, but I will shortly on ESXi 5.1. I have an older (slow) consumer Synology myself and use FreeNAS for personal large data storage. I'm hoping to be able to use 10 drives (Raw Device Mapping) in a SHR-2 (two drive data redundancy) combination under ESXi. Do you know if your version will support this many (physical pass-through PVSCSI) drives?

    (FYI: Awesome Raid Calculator: http://www.synology.com/support/RAID_ca ... hp?lang=us).

     

    It would be great if I'm able to combine everything and leave FreeNAS behind. As I also support many businesses that use Synology, having a virtual development & deployment testing environment before deploying the actual Synology hardware would be incredible.

     

    Cheers & thanks for all your work.

  8. To satisfy the paranoid: would you be able to extract/list the servers which are contacted? It would (should?) then be a simple matter to create a hosts file which prevents any "unexpected" contact from occurring.
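
    As a sketch of what I mean - once the hostnames are known, /etc/hosts entries along these lines should black-hole the traffic (ukdl.synology.com is the only name confirmed so far, from the rDNS results above; add others as they're identified):

      # redirect known "phone home" hosts to a dead address
      0.0.0.0    ukdl.synology.com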
