flyride
Posts posted by flyride
-
-
My comment does refer to the case. The case assembly/disassembly is a bit intricate and requires some careful cable routing for airflow. And I made it somewhat more complicated by adding in the U.2 drives and doing a custom power supply.
That said, my ESXi environment has a lot of tweaks in it as well - you can find the ESXi/NVMe thread on here with a little searching.
-
I'm running almost everything you're inquiring about.
- 8-bay U-NAS 810A (8 hot swap drive chassis, MicroATX)
- SuperMicro X11SSH-F (MicroATX)
- E3-1230V6
- 64GB RAM
- 8x 4TB in RAID 10
- Mellanox ConnectX-3 dual 10GbE
- 2x Intel P3500 2TB NVMe (these are U.2 drives)
A few items of note: XPenology is running as a VM under ESXi. This allows the NVMe drives to be RDM'd as SCSI, which works fine; native NVMe doesn't work because DSM doesn't currently support it for regular storage. The NVMe drives are attached via PCIe U.2 adapters since I don't need the slots, and I'm using the motherboard M.2 slot for the ESXi scratch/VM drive. The SATA controller and the Mellanox card are passed through to the VM, so DSM is using mostly native drivers, which work fine bare-metal or virtualized with the Mellanox and the C236 chipset.
So all this fits in a package with the footprint of, and just a bit taller than, a DS1817. I'm pretty happy with it, except that the case was the hardest to set up of any server I've ever built.
I'm also running another XPenology server on an ITX board in a U-NAS 410, which is a 4-bay ITX chassis. Works fine but it is running a low-power embedded chip. I'd pay a lot of attention to cooling and cooler compatibility if you want to run a 95W chip in an ITX case.
You should know that the DSM code bases that work with XPenology only support 8 threads total (including hyperthreaded cores), so most E5s might not be a fit.
-
4 hours ago, Black6spdZ said:
what's the cutoff then?
Haswell.
See this:
-
Please review this.
-
Then the PCIe enumeration (controller order) is not known - you really need to figure that out before you do anything else.
Why are you trying this with a production array??
-
So, did you try DiskIdxMap=0C00?
-
I'm not 100% sure of the controller order on your system, but assuming your PCIe enumeration is SATA1x2, SATA2x4, LSI, try DiskIdxMap=080A00
You can also add SataPortMap=228, but it should not be necessary since you just want to cut off 2 of the SATA2 devices.
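If it helps to see the pattern: DiskIdxMap is just two hex digits per controller, giving each controller's starting drive index in PCIe enumeration order. A tiny sketch (a hypothetical helper, not an official tool) that builds the string:

```python
# Hypothetical helper to illustrate how DiskIdxMap is assembled:
# two hex digits per controller, each the 0-based starting drive
# index for that controller, concatenated in PCIe enumeration order.

def disk_idx_map(start_slots):
    """Build a DiskIdxMap string from per-controller starting slots."""
    return "".join(f"{slot:02X}" for slot in start_slots)

# SATA1 starting at slot 8, SATA2 at slot 10, LSI at slot 0:
print(disk_idx_map([8, 10, 0]))  # -> 080A00

# A single SATA controller pushed out to slot 12, LSI at 0:
print(disk_idx_map([12, 0]))     # -> 0C00
```

So 080A00 reads as: first controller's drives start at slot 8 (08), second at slot 10 (0A), third at slot 0 (00).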
-
I just need to point out that you post a problem, then conclude it's not fixable, and now complain when offered the tools that will fix your issue. How about just trying it out?
The tutorial does not contradict itself; you are just misapplying the syntax in the first example. You must have two controllers or this wouldn't be an issue in the first place. So if you have no drives on the first controller, you could use SataPortMap=0 and that controller's slots would disappear. However, I don't think this is your problem, but you don't offer any information about the loader version or anything else about your system. This is probably what you are trying to do, which leverages DiskIdxMap, but maybe not...
The tutorials aren't exhaustive. The one you reference introduces the correct tools as a solution. There are many, many configuration examples here if you just search for the port mapping terms. Also, I just put those same search terms into Google and the second hit takes me to this page, which has some excellent descriptions of how they work and interact.
Please be assured there is a relatively simple solution. Good luck.
EDIT: I only noticed after posting that you basically asked the same question before, and I offered up the same answer. In that thread you say you are using 6.2.1 and the synoboot on the vSATA controller. You could use DiskIdxMap to move the enumeration of your first (SATA) controller to something high like 1F and your second controller (LSI) to 00, and your problem would probably be solved. Alternatively, if you were using fewer than 12 drives, you could use SataPortMap=1 so the SATA controller would only use one slot for the synoboot (which is useless, but you can see how your problem could be solved).
-
SataPortMap, SASIdxMap, and DiskIdxMap - it's in the 6.1.x installation tutorial. Don't try it on a production Disk Group.
-
I'm not sure SuperMicro can advise you on how DSM should be configured. In any case, using a combination of hardware (motherboard) and software (DSM) RAID seems counterproductive. You will get the most out of DSM if you present raw drives and let DSM do all the RAID operations you want to do. That would mean increasing MaxDrives in your case.
If you want to delete an entire controller or subset of drives from a controller, look into SataPortMap, SASIdxMap, and DiskIdxMap. This is on the main tutorial here.
-
It's been a while since I tested this with bare-metal DSM, but I don't think you can see NVMe drives at all until you go into "add cache to volume," where they will be listed.
syno_hdd_util --ssd_detect will only return SSDs that are able to be used for disk groups - i.e. SATA SSDs.
-
The bonding strategies available to DSM (i.e. LACP) use some combination of source/target IP and MAC addresses to choose which of the NICs to use. It doesn't connect them together simultaneously for twice the bandwidth. For many clients connecting to a bonded NIC destination, there is an equal chance to select one or the other. Therefore the traffic theoretically gets distributed across both NICs and (again, theoretically) twice the traffic is possible.
For a single PC connected to single NAS, you probably won't ever see an increase in speed.
Always worth benchmarking your changes, and make sure you don't get fooled by cached performance.
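A toy sketch of why a single PC-to-NAS flow stays on one link: with a layer-2 style transmit hash (the assumed policy here, similar to Linux bonding's xmit_hash_policy=layer2), the NIC choice is a deterministic function of the MAC pair, so the same two endpoints always land on the same link.

```python
# Toy model of a layer-2 transmit hash (assumed XOR-of-MACs policy):
# the same source/destination MAC pair always selects the same slave
# NIC, so one conversation can never use more than one link's bandwidth.

def pick_nic(src_mac: bytes, dst_mac: bytes, n_links: int) -> int:
    """Return the index of the bonded link chosen for this MAC pair."""
    return (src_mac[-1] ^ dst_mac[-1]) % n_links

pc   = bytes.fromhex("aabbccddee01")  # hypothetical client MAC
nas  = bytes.fromhex("aabbccddee02")  # hypothetical NAS bond MAC

# Every frame between this PC and this NAS hashes to the same link:
print(pick_nic(pc, nas, 2))

# A different client can hash to the other link, which is how
# aggregate throughput grows with many clients:
pc2 = bytes.fromhex("aabbccddee04")
print(pick_nic(pc2, nas, 2))
```

This is why the benefit only shows up with multiple concurrent clients, not with one PC talking to one NAS.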
-
Link aggregation cannot double bandwidth for individual TCP/IP conversations such as would be encountered in a streaming video. You need a high traffic, multi-client environment for it to do any good whatsoever.
-
You can use 1.03b on DS3615 as long as you have an Intel NIC in your system and use BIOS mode for boot, not UEFI.
This might help you.
-
There is already a volume and Basic raid group on the second drive from a previous action on your part. Delete those and you will be able to create your RAID1.
-
You need a second drive installed first, before you can change to RAID1.
-
Confirmed: VMXNET and E1000 cause kernel panics; e1000e is OK. Note that this is only with the explicit combination of ESXi, the 1.03b loader, and DSM for DS3615/3617 version 6.2+.
Good catch, and thank you for pointing it out.
-
-
Consider fixing the drive numbering before trying to migrate?
Check for unnecessary vSATA controllers in your VM?
SataPortMap? DiskIdxMap? This stuff is in the basic tutorials.
-
Synology fixes the partition size when the drive is "Initialized." I can't recall anyone being able to change the partition structure and still have the drive be usable in the GUI.
I think DSM aggressively swaps if you have the "Enable Memory Compression" option ticked in Control Panel/Hardware & Power.
-
You said you want managed. I'm running a prerelease Ubiquiti ES10X and it's a full-featured L2 switch with GUI and CLI, plus a couple of SFP ports, for very little money.
-
Same driver works for ConnectX-3 (Synology's part) and ConnectX-2. I've run both in XPenology using the standard driver set. I'd be incredibly surprised if you had a problem.
-
No, in order to run RAID10 you would have to tear down your array and build it from scratch. You will be adding a 4th drive to either SHR (RAID5) or converting to SHR2 (RAID6).
If your drives are the same size, SHR doesn't offer any advantage over the equivalent RAID and is slightly slower. If you want to run mismatched drive sizes, SHR/SHR2 will give you somewhat more available storage than RAIDx.
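A rough back-of-the-envelope comparison (hypothetical helpers, not Synology's exact math): classic RAID5 yields (n-1) x the smallest drive, while SHR with one-disk redundancy can approach the total capacity minus the largest drive.

```python
# Rough capacity sketch for the one-disk-redundancy case.
# These are simplified approximations, not Synology's exact allocator.

def raid5_usable(drives_tb):
    """Classic RAID5: every member is truncated to the smallest drive."""
    return (len(drives_tb) - 1) * min(drives_tb)

def shr_usable(drives_tb):
    """SHR upper bound: total capacity minus the largest drive."""
    return sum(drives_tb) - max(drives_tb)

same = [4, 4, 4, 4]
print(raid5_usable(same), shr_usable(same))    # equal sizes: 12 12

mixed = [4, 4, 8, 8]
print(raid5_usable(mixed), shr_usable(mixed))  # mismatched: 12 16
```

With equal-size drives the two come out the same, which is the point above: SHR only buys you capacity when the drive sizes differ.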
-
Good to know for future reference. Thanks for posting about it.
Tutorial/Reference: 6.x Loaders and Platforms
in Tutorials and Guides
Posted
More specifically, 1.03b and DS3615/17 only support an Intel-type NIC on 6.2.1+ (e1000e on ESXi is an emulated Intel NIC).
On earlier versions of DSM, or with other loaders, other NICs may be supported depending on your combination of available drivers and hardware - e.g. the Intel limitation above does not apply.