Posts posted by jaesii
-
I use an Intel X520-DA2 10 Gigabit converged network adapter.
I mainly use my XPEnology box as a SAN for my ESXi servers.
I connected each server to the XPEnology box using DAC cables.
I peak over 10G at times when accessing the storage over iSCSI.
10GbE isn't cheap; if you're willing to jump in, I think I spent about $400 between 3 network cards and 3 DAC cables.
-
Are you using IDE controllers in Hyper-V?
-
MB: Supermicro X8DT6-F
CPU: 2x Intel Xeon L5630 quad-core with HT @ 2.13GHz (8 cores / 16 threads total)
RAM: 12x 4GB DDR3 ECC Kingston 1066MHz registered DIMMs (48GB total)
Chassis: Chenbro RM23612 12-bay hot-swap SAS/SATA 2U chassis
Disk controller: onboard LSI 2008 8-port SAS flashed to IT mode + 6x onboard SATA (14-drive capacity)
LAN: 4x Intel GbE NICs (2x onboard, 2x PCI-E) + 1 dedicated IPMI
HDD: 6x Western Digital RE4 2TB enterprise disks (block-level iSCSI) + 6x 1.5TB Seagate Barracuda (Movies/TV/Data)
SSD: 2x OCZ Deneva 2 SLC 32GB drives for cache
PSU: Seasonic 400W 80+ Gold
DSM: XPEnoboot 5.2-5644.5 + DSM 5.2-5644 Update 3
I will soon be replacing the dual-NIC PCI-E card with a dual-port Chelsio 10GbE card.
This box is mainly used for iSCSI to my ESXi hosts. I added a second array of disks as a standard volume for Movies/TV/Data, and I will soon be migrating my Plex server to run off this box.
-
My box is extremely overkill:
Supermicro X8DT6-F with 2x mini-SAS and 6x SATA ports, flashed to IT mode for JBOD, for up to 14 drives
Dual Xeon L5630 hyper-threaded quad-cores
48GB DDR3 ECC
12-bay hot-swap Chenbro 2U disk array chassis
I converted my FreeNAS box to XPEnology. Surprisingly, this thing only uses about 130W of power.
-
It looks like that fixed it. I was using the 5644.4 boot image; it looks like the 5644.5 image came out a day after I downloaded .4.
-
This may or may not help, but here is some output from the messages log related to iSCSI:
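For anyone who wants to pull the same lines from their own box, a minimal sketch — DSM normally logs to `/var/log/messages`, and the sample file below is a made-up stand-in so the snippet runs anywhere:

```shell
# Filter iSCSI-related entries out of the syslog. On DSM the syslog is
# typically /var/log/messages; a throwaway sample file stands in for it
# here, with invented placeholder lines, so this is self-contained.
LOG=/tmp/messages.sample            # stand-in for /var/log/messages
printf '%s\n' \
  'kernel: example iSCSI target entry' \
  'kernel: unrelated disk entry' \
  'daemon: another example iscsi entry' > "$LOG"
grep -i iscsi "$LOG"                # keep only the iSCSI-related lines
```

`grep -i` makes the match case-insensitive, which matters because different components log "iSCSI", "iscsi", or "ISCSI".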
-
Hello,
This is my first time using XPEnology. I just installed it on my supermicro disk array server and attempted to create an iSCSI LUN at block level. After completing the wizard, I am told "There is no iSCSI LUN in your system". There is a target but it is offline. If I reboot the system, the LUN appears and the target is online, but there is no LUN mounted to the target. As soon as I try to mount the LUN to the target, it takes the target offline.
I have tried replicating this in a virtual machine as well without updating to update 3 and I have the same result.
I am able to create a working iSCSI LUN and mounted target when I create a volume first and choose regular files for the iSCSI LUN.
Since I'm going to use this device as storage for my VMware lab, I need block-level.
Has anyone been able to get a block-level LUN created and mounted in 5.2-5644?
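One thing that might be worth checking (an assumption on my part, not a confirmed fix): whether any iSCSI target kernel modules actually loaded, since block-level LUNs depend on them. A minimal sketch reading `/proc/modules`, matching loosely because module names vary by DSM build:

```shell
# Loosely check for iSCSI-related kernel modules via /proc/modules
# (the same data lsmod prints). Module names differ across DSM builds,
# so "iscsi" is used as a broad pattern rather than an exact name.
if grep -qi iscsi /proc/modules; then
    echo "iSCSI-related modules loaded:"
    grep -i iscsi /proc/modules
else
    echo "no iSCSI-related modules loaded"
fi
```

If nothing shows up while the target is supposedly online, that would point at the module load rather than the LUN wizard itself.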
Terrible iSCSI performance w/ 10G Networking
in Archives
Posted
So I've been using XPE for a while now as a SAN for my ESXi lab, and I have been very unhappy with the performance.
My XPE box is loaded up with resources, and I can't figure out why the performance is so bad.
It's got the following:
1x Intel Xeon L5630 Quad Core W/ HT
16GB ECC DDR3 1066MHz Memory
LSI 9211-8i + LSI 9211-4i
Intel X520-DA2 Dual port 10 Gigabit network card
5x WD Enterprise Drives 3.5"
2x OCZ Deneva 2 SLC SSDs for R/W cache
Both of my ESXi servers are directly attached to the XPEnology box using 10GbE SFP+ direct-attach cables.
I have a RAID 5 volume set up as an iSCSI target, and here is a screenshot of the performance I get on a Windows 2012 R2 VM.
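For reference, the raw sequential-write side of numbers like these can be sanity-checked with a quick `dd` run (sized down here so it finishes fast; a real test should write far more than the SSD cache can absorb):

```shell
# Rough sequential-write check. conv=fdatasync forces the data to disk
# before dd reports a rate, so the page cache doesn't inflate the number.
# 64 MB keeps this sketch quick; use a multi-GB count on real hardware.
TESTFILE=$(mktemp /tmp/ddtest.XXXXXX)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TESTFILE"
```

Running this inside the VM (against the iSCSI-backed disk) versus on the array itself helps separate disk throughput from network/iSCSI overhead.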
I also have a FreeNAS box that is set up very similarly to the XPEnology one:
1x Intel Xeon L5630 Quad Core W/ HT
16GB ECC DDR3 1066MHz Memory
LSI 1068E Raid Controller
Intel X520-DA2 Dual port 10 Gigabit network card
6x HGST 1TB 2.5" Consumer drives
Here is the same test on the same VM:
Any reason DSM gives me such bad disk performance?