XPEnology Community

Posts posted by berwhale

  1. When you deal with network transfer, it's better to use Mbps, as it's less confusing

    When dealing with network transfer speeds of over 1Gbit/s, it's easier to use MBps, IMO.

     

    Horses for courses. MB is more appropriate if you want to get a feel for how fast files will transfer (as file size is usually measured in bytes); Mb is more appropriate if you're trying to understand utilization (or lack of it) of a fixed-bandwidth channel like 1Gb Ethernet.

     

    You can translate between the two values if you factor in protocol overheads. As a rule of thumb, I assume that 1 byte of data (i.e. 8 bits) consumes 10 bits when transmitted, so 1GbE (1 gigabit per second) will transfer roughly 100MB of data per second.
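
    As a rough sketch of that rule of thumb (the 10-bits-per-byte factor is my own approximation, not an exact protocol calculation):

    # Estimate usable throughput from a nominal link speed, assuming
    # roughly 10 bits on the wire for every byte of payload data
    LINK_MBIT=1000                        # nominal link speed in Mbit/s (1GbE)
    echo "$(( LINK_MBIT / 10 )) MB/s"     # prints "100 MB/s" of usable payload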

  2. Xurasao, what are the specs of your NAS?

     

    I don't think that your issue lies with PowerLine, but I would say that the newness of your house doesn't necessarily help. Modern wiring tends to include things like earth leakage and surge protection technologies that can filter out the high frequency signals used by PowerLine and stop it working. That's one of the reasons I don't use PL in my own home.

  3. Yes, but the whole server is only £20 more than buying the CPU on its own. Also, you can fit more drives in the T20 - 2x 2.5" in the optical bay and 4x 3.5" in the normal bays, plus there's space to add one 3.5" or two 2.5" drives in the floppy drive hanger. There's also room to add an internal drive rack if you have some DIY skills.

     

    I have 2x 2.5" drives as ESXi datastores and 4x 3.5" drives passed through to DSM. All I had to buy was a couple of SATA power splitters.

  4. The OS is stored in a partition that gets duplicated to all of the drives that you install. If a drive fails, you can replace the drive and the data volume and OS partition will be rebuilt - you don't have to re-install anything.
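
    If you're curious, you can see this mirroring from an SSH session; a quick look, assuming DSM's usual md-based layout (device names will vary):

    # The DSM system partition is normally an md RAID1 mirrored across every installed disk
    cat /proc/mdstat          # look for an md device assembled from sda1, sdb1, etc.
    df -h /                   # the root filesystem should sit on that mirrored device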

  5. With DirectIO passthrough, vSphere hands over complete control of the disk adapter to the VM. Any disks attached to that adapter can only be seen within that VM.

     

    RDM is similar, but at the disk level, so I think you could map the 2nd RAID array to a vmdk using RDM. However, I've not tried this; I went straight down the DirectIO route, as I had an existing SHR array to migrate from a physical installation of DSM.
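
    For reference, an RDM mapping is created from the ESXi shell with vmkfstools. This is only a sketch, since I haven't tried it myself, and the device name and datastore path below are placeholders:

    # List the physical disks the host can see
    ls /vmfs/devices/disks/
    # Create a physical-compatibility RDM pointer file on a datastore
    # (naa.xxxxxxxx and datastore1 are placeholders for your own device and datastore)
    vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxx /vmfs/volumes/datastore1/DSM/raid-rdm.vmdk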

  6. If you pass through a controller, you have to assign a fixed amount of memory - the VM won't be able to use RAM ballooning (i.e. consuming only as much host RAM as the guest actually uses) any more.

     

    That's a good point - I allocated 4GB to my DSM VM, but I have 24GB in the server, so it's no problem for me; it may be an issue for others (a quick way to check what's been reserved is sketched at the end of this post).

     

    Note that you also lose the ability to hot-plug devices to the VM when using DirectIO - that's why I also pass through a cheap PCIe USB3 card to DSM.
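
    As promised, if you want to confirm what ESXi has reserved for the VM, the relevant entries end up in its .vmx file. A quick check from the host shell (the .vmx path below is a placeholder, and parameter names can vary between ESXi versions):

    # Show the memory reservation entries for the VM (the .vmx path is a placeholder)
    grep -i "sched.mem" /vmfs/volumes/datastore1/DSM/DSM.vmx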

  7. Excellent news. Can you SSH into DSM as root and run 'lspci -q' to enumerate your PCI devices? You should get something like the listing below... (obviously I'm running on ESXi, but you can see my Marvell SATA and Renesas USB3 adapter near the bottom, which are passed through to DSM)

     

    Tonka> lspci -q
    00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
    00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
    00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
    00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
    00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
    00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
    00:0f.0 VGA compatible controller: VMware SVGA II Adapter
    00:11.0 PCI bridge: VMware PCI bridge (rev 02)
    00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:15.1 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:15.2 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:15.3 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:15.4 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:15.5 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:15.6 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:15.7 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:16.0 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:16.1 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:16.2 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:16.3 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:16.4 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:16.5 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:16.6 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:16.7 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:17.0 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:17.1 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:17.2 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:17.3 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:17.4 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:17.5 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:17.6 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:17.7 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:18.0 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:18.1 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:18.2 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:18.3 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:18.4 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:18.5 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:18.6 PCI bridge: VMware PCI Express Root Port (rev 01)
    00:18.7 PCI bridge: VMware PCI Express Root Port (rev 01)
    03:00.0 SATA controller: Marvell Technology Group Ltd. Device 9215 (rev 11)
    0b:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)
    13:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
    1b:00.0 USB controller: VMware USB3 xHCI 0.96 Controller
    

  8. Noober69, I have recently carried out exactly the procedure that you propose. I moved 4x 3TB drives in an SHR array from a physical server running a Celeron SoC to ESXi 6.0U2 running on the Xeon server in my signature. There are a few caveats with PCI passthrough (AKA DirectIO) on ESXi, namely that you need a CPU and chipset that support the feature (the Xeon will, the G3240 won't), and you lose some control over power management, plug-and-play devices, etc. (a quick way to sanity-check the hardware side is sketched at the end of this post). It's worth reading the DirectIO documentation on VMware's website before going down this route, but I would say that it's worked very well for me.

     

    I would strongly recommend that you pick vSphere/ESXi over Hyper-V - there are a lot more people running XPEnology under VMware's hypervisor, so you'll find more information and help available when things inevitably go wrong.
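
    Here's that hardware sanity check, run from any Linux live USB on the target machine (only a sketch; the definitive test is whether the adapter appears on the host's DirectPath I/O configuration page in ESXi):

    # CPU virtualisation flags - vmx (Intel VT-x) or svm (AMD-V) should appear
    grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u
    # VT-d / IOMMU initialisation messages from the kernel
    dmesg | grep -i -e DMAR -e IOMMU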

  9. That's the part I've no problem with; I always have plenty of spare parts to swap around.

     

    People get rid of their old PCs for new PCs, and I recycle them all for other causes, so spare parts are abundant.

     

    Turning them into NAS devices is one of the better options for the slower machines; machines that are a little faster I load up with Edubuntu (Ubuntu for education) and donate to schools for little kids.

     

    Ah, now I understand. Your advice was predicated on the unstated assumption that Maelstrom should take up PC recycling as a hobby and means of philanthropy. In that case, I agree with everything you said.

  10. But I've always been skeptical of SoC motherboards - basically, if the CPU or the motherboard dies, for whatever reason, you'll have to replace the whole thing.

     

    That's why I always go with a regular motherboard and pair it with a low-power CPU like a Celeron or i3.

     

    So you prefer the option that is more expensive, more difficult to troubleshoot and to repair?

     

    If the SoC board fails, you replace it, either under warranty or with a much newer board if it's old.

     

    With your preferred option, you need to work out whether it's the CPU or the motherboard that has blown - which means you need a spare motherboard or CPU (or both) to confirm which component needs replacing. If the faulty component is under warranty, you wait for a replacement. If it's not, you have to decide whether it's worth investing more money in an old platform or ditching it for a whole new setup.

  11. Maybe a MicroATX board? This one only has 2x SATA on-board, but it has 3 PCIe slots for additional adapters...

     

    https://www.scan.co.uk/products/asus-n3 ... d-graphics

     

    Note: I believe that the N3150 is being replaced by the N3160 - they're essentially the same CPU, but the earlier version had some issues with its microcode. You might want to hang around for more N3160 boards to hit the market.

     

    Or there's a J3160 board here...

     

    http://www.biostar.com.tw/app/en/mb/int ... p?S_ID=838

     

    P.S. I ran XPEnology successfully for several years on this fanless MSI board...

     

    https://www.msi.com/Motherboard/C847MS- ... o-overview

     

    It's perfectly adequate for general NAS duties; I only upgraded to a Xeon server because I wanted extra grunt to play with vSphere and transcode in Plex.

  12. Hi andyl8u, I set up XPEnology as follows:

     

    1. Created an XPEnology VM with a temporary virtual drive on one of my datastores.

    2. Added the SATA adapter to the server and then to the VM via pass-through.

    3. Removed the virtual drive.

    4. Relocated the 3TB HDDs from my physical XPEnology server and connected them to the passed-through SATA adapter.

     

    The XPEnology VM picked up the personality of the old physical server (i.e. all data, permissions, apps, etc. were functioning as before).
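
    A quick way to confirm the migration worked, over SSH to the new VM as root (not part of my original steps, just a sanity check; volume1 is DSM's default volume name):

    # Each relocated HDD should show up as a /dev/sd* device inside DSM
    fdisk -l | grep '^Disk /dev/sd'
    # The existing data volume should mount in its usual place
    df -h /volume1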
