Posts posted by berwhale
-
Yeah, best to play it safe and get a T20 with a Xeon
Note: I've got nothing against HP, I owned an N36L and an N54L and was happy with both of them - plus I've specified millions of dollars' worth of HP kit to go into data centres over the last 20 years.
-
Xurasao, what are the specs of your NAS?
I don't think that your issue lies with PowerLine, but I would say that the newness of your house doesn't necessarily help. Modern wiring tends to include things like earth leakage and surge protection technologies that can filter out the high frequency signals used by PowerLine and stop it working. That's one of the reasons I don't use PL in my own home.
-
Have you reserved all guest memory for the VM? I believe ESXi does this automatically when you enable pass-through, but it's worth checking.
-
Yes, but the whole server is only £20 more than buying the CPU on its own. Also, you can fit more drives in the T20 - 2x 2.5" in the optical bay and 4x 3.5" in the normal bays, plus there's space to add one 3.5" or two 2.5" drives in the floppy drive hanger. There's also room to add an internal drive rack if you have some DIY skills.
I have 2x 2.5" drives as ESXi datastores and 4x 3.5" drives passed through to DSM. All I had to buy was a couple of SATA power splitters.
-
I tested passthrough on my server by creating an SHR array on a couple of spare 1TB drives. Have you tried this? Also, can you boot DSM with the P212 passed through, but with no drives attached?
-
If you're in the UK, you can buy the Dell T20 with a E3-1225 v3 Xeon for £200 after cashback...
http://www.serversplus.com/servers/towe ... rs/20-3708
That's not much more than the CPU would cost on its own.
-
Unfortunately I don't see any VMDirectPath I/O pass-through devices on my HP Microserver, so it's RDM for me.
You need CPU support for DirectPath I/O - a Xeon or an i5+ I think.
-
I believe that Synology's advice is to have both NASes running the same version of DSM, so ideally you should wait for XPEnology to support DSM 6.0. You could try a migration and upgrade on the new NAS, but make sure you have a backup of your data first.
-
The Nvidia Shield TV now runs both the Plex Client and Server - it's built into the latest firmware...
https://shield.nvidia.co.uk/blog/plex-media-server
So you could upgrade both your client and server without touching the NAS - just point the Shield TV at your NAS over SMB.
-
I manually created the VM and attached it to the XPEnology boot vmdk. BTW, I'm running DSM 5.2; I've not played with 6.0.
-
@Backslash - I think you have to select the 'quick' option when setting up the volume. You only get offered RAID with the custom option.
-
With DirectIO Passthrough vSphere hands over complete control of the disk adapter to the VM. Any disks attached to the adapter can only be seen within that VM.
RDM is similar, but at the disk level, so I think you could map the 2nd RAID array to a vmdk using RDM. However, I've not tried this - I went straight down the DirectIO route as I had an existing SHR array to migrate from a physical installation of DSM.
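For anyone who wants to try the RDM route, the usual sketch on the ESXi host console looks like this - note that the device identifier and datastore path below are placeholders, and I haven't tested this myself:

```shell
# List physical disks to find the real device name (the naa.* identifier):
ls -l /vmfs/devices/disks/

# Create a physical-mode RDM mapping file on an existing datastore
# (the naa.XXXX path and datastore/folder names are placeholders):
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX \
  "/vmfs/volumes/datastore1/DSM/raid2-rdm.vmdk"

# Then attach raid2-rdm.vmdk to the DSM VM as an existing disk.
```

(`-z` creates a physical-compatibility RDM; `-r` would create a virtual-compatibility one.)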
-
If you pass through a controller, you have to assign a fixed amount of memory - the VM won't be able to use RAM ballooning anymore (i.e. only consuming as much host RAM as the guest actually uses).
That's a good point - I allocated 4GB to my DSM VM, but I have 24GB in the server, so it's no problem - it may be an issue for others.
Note that you also lose the ability to hot-plug devices to the VM when using DirectIO - that's why I also pass through a cheap PCI-E USB3 card to DSM.
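For reference, the full-reservation settings end up in the VM's .vmx file along these lines (a sketch only - the 4096 value matches my 4GB allocation; check your own .vmx rather than copying these values):

```
memSize = "4096"
sched.mem.min = "4096"
sched.mem.pin = "TRUE"
```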
-
Interesting - the Marvell controller that I use also has an 88SE9215 chipset, so I know that works.
-
Yes, it's hard coded to match the CPU used in the Synology NAS hardware for the version of DSM we use (DS3615xs).
-
Excellent news. Can you SSH into DSM as root and run 'lspci -q' to enumerate your PCI devices? You should get something like the listing below... (obviously I'm running on ESXi, but you can see my Marvell SATA and Renesas USB3 adapters near the bottom that are passed through to DSM)
Tonka> lspci -q
00:00.0 Host bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge (rev 01)
00:01.0 PCI bridge: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge (rev 01)
00:07.0 ISA bridge: Intel Corporation 82371AB/EB/MB PIIX4 ISA (rev 08)
00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)
00:07.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 08)
00:07.7 System peripheral: VMware Virtual Machine Communication Interface (rev 10)
00:0f.0 VGA compatible controller: VMware SVGA II Adapter
00:11.0 PCI bridge: VMware PCI bridge (rev 02)
00:15.0 PCI bridge: VMware PCI Express Root Port (rev 01)
[00:15.1 through 00:18.7 - 31 more identical "VMware PCI Express Root Port (rev 01)" entries]
03:00.0 SATA controller: Marvell Technology Group Ltd. Device 9215 (rev 11)
0b:00.0 USB controller: Renesas Technology Corp. uPD720202 USB 3.0 Host Controller (rev 02)
13:00.0 Ethernet controller: VMware VMXNET3 Ethernet Controller (rev 01)
1b:00.0 USB controller: VMware USB3 xHCI 0.96 Controller
-
Noober69, I have recently carried out exactly the procedure that you propose. I moved 4x 3TB drives in an SHR array from a physical server running a Celeron SoC to ESXi 6.0U2 running on the Xeon server in my signature. There are a few caveats with PCI pass-through (AKA DirectIO) on ESXi, namely that you need a CPU and chipset that support the feature (the Xeon will, the G3240 won't) and you lose some control over power management, plug-and-play devices, etc. It's worth reading the DirectIO documentation on VMware's website before going down this route, but I would say that it's worked very well for me.
I would strongly recommend that you pick vSphere/ESXi over Hyper-V - there are a lot more people running XPEnology under VMware's hypervisor, so you'll find more information and help available when things inevitably go wrong.
-
That's the part I've no problem with, I always have plenty of spare parts to swap around.
People get rid of their old PCs for new PCs, and I recycle them all for other causes, so spare parts are abundant.
Turning them into NAS devices is one of the better options for slower machines; machines that are a little faster, I load up with Edubuntu (Ubuntu for education) and donate to schools for little kids.
Ah, now I understand. Your advice was predicated on the unstated assumption that Maelstrom should take up PC recycling as a hobby and means of philanthropy. In that case, I agree with everything you said.
-
but I've always been skeptical of SoC motherboards - basically, if the CPU dies, or the motherboard dies, for whatever reason, then you'll have to replace the whole thing again.
that's why I always go with a regular motherboard, then pair it up with a low-power-consumption CPU like a Celeron or i3
So you prefer the option that is more expensive, more difficult to troubleshoot and to repair?
If the SoC board fails, you replace it, either under warranty or with a much newer board if it's old.
With your preferred option, you need to work out whether it's the CPU or the motherboard that has blown - in which case you need either a spare motherboard or CPU (or both) to confirm which faulty component to replace. If the faulty component is under warranty, then you wait for a replacement. If it's not, you have to decide whether it's worth investing more money in an old platform or ditching it for a whole new setup.
-
What do you think.. will this work with Xpenology?
https://www.amazon.com/Ableconn-PEX10-S ... nsion+sata
It's a cheap option for just a JBOD, isn't it?
depends if...
Chipset: ASMedia ASM1062 + 2x JMicron JMB575
...is supported by Xpenology. If it uses SATA port multiplication, then it almost certainly won't work.
-
Maybe a MicroATX board, this one only has 2x SATA on-board, but has 3 PCI-e slots for additional adapters...
https://www.scan.co.uk/products/asus-n3 ... d-graphics
Note: I believe that the N3150 is being replaced by the N3160 - they're essentially the same CPU, but the earlier version had some issues with its microcode. You might want to hang around for more N3160 boards to hit the market.
Or there's a J3160 board here...
http://www.biostar.com.tw/app/en/mb/int ... p?S_ID=838
P.S. I ran XPEnology successfully for several years on this fanless MSI board...
https://www.msi.com/Motherboard/C847MS- ... o-overview
It's perfectly adequate for general NAS duties; I only upgraded to a Xeon server because I wanted extra grunt to play with vSphere and transcode in Plex.
-
Hi andyl8u, I set up XPEnology as follows:
1. Created an XPEnology VM with a temporary virtual drive on one of my datastores.
2. Added the SATA adapter to the server and then to the VM via pass-through.
3. Removed the virtual drive.
4. Relocated the 3TB HDDs from my physical XPEnology server and connected them to the pass-through SATA adapter.
The XPEnology VM picked up the personality of the old physical server (i.e. all data, permissions, apps, etc. functioned as before).
Slower read from xponology NAS by network
Horses for courses. MB is more appropriate if you want to get a feel for how fast files will transfer (as file size is usually measured in bytes); Mb is more appropriate if you're trying to understand utilization (or not) of a fixed-bandwidth channel like 1Gb Ethernet.
You can translate between the two if you factor in protocol overheads. As a rule of thumb, I assume that 1 byte of data (i.e. 8 bits) will consume 10 bits when transmitted. So 1Gb Ethernet (1 gigabit per second) will transmit roughly 100MB of data per second.
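The rule of thumb above can be sketched as a quick calculation (a minimal illustration; the function name and constant are mine):

```python
# Rule of thumb from the post: each byte of payload (8 bits) costs
# roughly 10 bits on the wire once protocol overheads are included.
WIRE_BITS_PER_BYTE = 10

def payload_mb_per_sec(link_mbps: float) -> float:
    """Approximate file-transfer rate (MB/s) for a raw link speed (Mb/s)."""
    return link_mbps / WIRE_BITS_PER_BYTE

print(payload_mb_per_sec(1000))  # 1GbE -> 100.0 MB/s
print(payload_mb_per_sec(100))   # Fast Ethernet -> 10.0 MB/s
```

Real-world throughput also depends on the protocol (SMB vs NFS vs FTP) and disk speed at both ends, so treat this as an upper bound.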