Posts posted by flyride
-
Start here: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/?do=findComment&comment=107979 ignoring the lvm commands (vgchange -ay) and substituting your /dev/md2 for the lv device. I'd begin with the sudo mount commands and follow the work that thread's OP did.
-
You have a simple RAID5 so the logical volume manager (lvm) is probably not being used and you won't have vg's. You need to figure out what device your array is. Try a "df" and see if you can match /dev/md... to your volume. If that is inconclusive because the volume isn't mounting, try "cat /etc/fstab" to see what is expected to mount where.
See this thread for some options: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/#comment-108013
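The steps above boil down to a few read-only checks. A minimal sketch follows; /dev/md2 and /volume1 are typical DSM names but are assumptions here, so substitute whatever your box actually reports:

```shell
# Identify which md device backs the crashed volume.
# /dev/md2 and /volume1 below are assumptions -- use what your system shows.
cat /proc/mdstat 2>/dev/null || true   # arrays the kernel currently knows about
df -h                                  # try to match a /dev/md* device to a /volume*
cat /etc/fstab                         # what should mount where, if df is inconclusive

# Read-only inspection and a read-only mount attempt (run these on the NAS):
# sudo mdadm --detail /dev/md2
# sudo mount -o ro /dev/md2 /volume1
```

Nothing here writes to the array, so it is safe to run before deciding on any repair.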
-
-
As you are familiar with LAG or other network interface aggregation tech, you'll agree that it won't help you get a single client (i.e. gaming machine) to go any faster than a single port.
To put this into perspective:
A single SATA SSD (or 2 in RAID 1) will easily read faster than a 1 GbE interface.
2 SATA SSDs in RAID 0 will nearly fill a 10GbE interface.
4 SATA SSDs in RAID 5 will certainly saturate a 10GbE interface.
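The arithmetic behind those claims, as a quick sketch. The figures are rough assumptions (one SATA SSD ~550 MB/s sequential, 1 GbE ~125 MB/s payload, 10 GbE ~1250 MB/s), not measurements:

```shell
# Back-of-envelope throughput in MB/s; all figures are rough assumptions.
ssd=550      # one SATA SSD, sequential read
gbe1=125     # 1 GbE payload ceiling
gbe10=1250   # 10 GbE payload ceiling

echo "1x SSD: $ssd MB/s vs 1 GbE: $gbe1 MB/s"          # one SSD already outruns 1 GbE
echo "2x RAID0: $((2 * ssd)) MB/s vs 10 GbE: $gbe10"   # 1100 MB/s, nearly fills 10 GbE
echo "4x RAID5: $(((4 - 1) * ssd)) MB/s vs $gbe10"     # 1650 MB/s read, saturates 10 GbE
```

The RAID 5 line conservatively counts data spindles only (n-1); real sequential reads can be a bit higher still.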
-
-
Yes, there should be 24 threads but DSM cannot support that many.
To use all your cores, you must disable SMT (Simultaneous Multi-Threading, or Hyperthreading) in your motherboard BIOS.
-
14 minutes ago, nick413 said:
CPU - sees 16 cores (Check - cat /proc/cpuinfo)
Again, DSM will only use 16 THREADS not cores. You have 12 cores, and 12 SMT (Hyperthreading) threads. So DSM is actually only using 8 cores, and 8 threads.
You will get better performance if you disable SMT and then DSM will report 12 actual cores.
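To check this for yourself, a small sketch (standard Linux procfs; the "cpu cores" field is usually exposed on bare metal but not in every VM):

```shell
# Show what the kernel actually sees: logical CPUs (threads) vs physical cores.
threads=$(grep -c ^processor /proc/cpuinfo)
echo "logical threads: $threads"
# 'cpu cores' is per socket; bare metal exposes it, some VMs do not:
grep -m1 'cpu cores' /proc/cpuinfo || echo "cpu cores field not exposed"
# If threads is double the physical core count, SMT is on. With DSM capped at
# 16 threads, disabling SMT in the BIOS lets all 12 physical cores be used.
```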
-
Well, that explains why you want everything you can get out of that 40Gbps card, as the theoretical drive throughput is 2.5x your network bandwidth. So maybe it's not quite so critical that you get the iSCSI hardware support working natively, since that won't be the limiting factor. But good luck however it turns out.
You may know this already, but:
DS361x image has a native maximum of 12 drives
DS918 image has a native maximum of 16 drives
These can be modified, but every time that you update DSM the maximum will revert and your array will be compromised. It SHOULD come right back once you fix the MaxDisks setting.
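For reference, the drive limit the community edits lives in synoinfo.conf; a sketch of the change, run against a scratch copy here so nothing real is touched (on DSM the files are /etc.defaults/synoinfo.conf and /etc/synoinfo.conf, and as noted above, a DSM update rewrites them):

```shell
# Demonstrate the MaxDisks edit on a throwaway copy, not the live file.
conf=/tmp/synoinfo.conf.demo
printf 'maxdisks="12"\n' > "$conf"              # what a 12-drive image ships with
sed -i 's/^maxdisks=.*/maxdisks="16"/' "$conf"  # raise the limit
grep ^maxdisks "$conf"                          # -> maxdisks="16"
```

On a real system you would apply the same sed to both synoinfo.conf locations and re-apply it after every update, before the array is touched.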
-
26 minutes ago, nick413 said:
Thanks for the advice, but my infrastructure is built entirely on ESXi; I know its capabilities and weaknesses.
The storage with SSDs in RAID F1 will be connected to the Dell M1000e chassis via the iSCSI protocol through a Dell Networking MXL blade switch QSFP+ port; it is important for me to have 40G of bandwidth.
It is the LSI SAS 9300-8i that has powerful chips for moving data over the iSCSI protocol, which suits me.
How many drives will you have in your RAID F1?
-
52 minutes ago, nick413 said:
A data storage system with SSDs built on ESXi means a significant drop in performance.
Plus, ESXi does not fully support 40G data transfer with Mellanox network cards.
My system is very close in design to yours (see my signature). If you virtualize your network and storage, you may be correct. However, ESXi allows you to be selective as to what it manages and what it does not.
I am using 2x enterprise NVMe drives that are presented to DSM via physical RDM, which is a simple command/protocol translation. The disks are not otherwise managed by ESXi. This allows me to use them as SATA or SCSI within DSM (they would be totally inaccessible otherwise). If you have a difficult-to-support storage controller, the same tactic may apply. From a performance standpoint, if there is overhead it is negligible, as I routinely see 1.4GBps (that's gigaBYTES per second) throughput, which is very close to the stated limits of the drive.
If the hardware is directly supported by DSM, ESXi can passthrough the device and not touch it at all. I do this with my dual Mellanox 10Gbps card and can easily max out the interfaces simultaneously. In the case of SATA, I pass that through as well so there is no possible loss of performance on that controller and attached drives.
The point is that ESXi can help resolve a problematic device in a very elegant way, and can still provide direct access to hardware that works well with DSM.
-
ESXi assigns a random MAC the first time a VM is booted. If you are using a prebuilt VM, it probably doesn't do the MAC assignment unless you delete the virtual Ethernet card, save the VM, and then add it back in.
-
On 12/10/2019 at 3:14 PM, nick413 said:
I have 2 processors, how do I know if the system uses one processor or two?
DSM representation is cosmetic and is hard-coded into the DSM image you're using. Run "cat /proc/cpuinfo" if you want to see what is actually recognized in the system. There is a limit of 16 threads. You will need to disable SMT if you want to use all the cores (you are using two hexacore CPUs).
https://xpenology.com/forum/topic/15022-maximum-number-of-cores/?do=findComment&comment=115359
Just a general comment on this thread (which I am following with interest): this task would be a lot easier if you ran the system as a large VM within ESXi.
-
FYI there is almost no overhead with RDMs and you get access to the entire disk, so it should be portable.
-
Under "Features and Services" within the TOS:
2. QuickConnect and Dynamic Domain Name Service (DDNS)
Users who wish to use this service must register their Synology device to a Synology Account.
When using XPenology, you are not using a Synology device. Therefore you aren't able to register that device to a Synology account. If you do, you are violating the TOS.
This is tantamount to stealing proprietary cloud services, and is discouraged here and by the cited FAQ.
-
-
Nobody knows. The current 1.03b and 1.04b loaders seem to work with DSM 6.2.x but any new DSM patch can (and does with surprising regularity) fail to work with them. The community has found workarounds in most cases. That's the reason for this thread here:
https://xpenology.com/forum/forum/78-dsm-updates-reporting/
Look for folks with similar hardware, virtualization, loader and DSM versions being successful before attempting any DSM update. And seeing as you are planning to use ESXi, there really is no excuse not to have a test XPenology DSM instance to see if the upgrade fails or succeeds before committing the update to your production VM.
When Synology releases DSM 7.0, it's a virtual certainty that the current loaders will not work. Someone will have to develop a new DSM 7.0 loader hack, and there is really no information about how long it might take or how difficult it may be.
-
It's not possible for you to agree to Synology's Terms of Service. By using XPenology to connect to Synology services, you are directly violating them. Please note this from the FAQ:
https://xpenology.com/forum/topic/9392-general-faq/?do=findComment&comment=82390
-
Clicking the upgrade button would be unwise. You will need to burn a new boot loader, and you will need to evaluate your hardware to see what combination of loader and code to use.
https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
Please have a backup/backout plan as it's not always straightforward depending on your system.
-
2 hours ago, jsmith032 said:
Another question, the USB thumb drive needs to stay in at all times correct?
Yes, that's your boot loader and is a required runtime filesystem for DSM. It's not an installation key.
-
27 minutes ago, Jamzor said:
I have a HP Microserver gen 8 with the intel E3-1265L v2 CPU. Running Xpenology on ESXi 6.5 U3 - HP custom, at the moment.
Now question is, I saw in the other post that hardware transcoding is only possible with 918+? But I see everyone is using 3615xs on this machine? Why not 918+? Is that not working even if you buy the correct network card?
I'm not an HP expert, but I can answer the distilled-down question above. The CPU you have is an Ivy Bridge architecture, which is too old to run the DS918 version of DSM, which is compiled to use instructions only present in Haswell or later. So those running Ivy Bridge architecture have no choice but to run DS3615xs.
Hardware transcoding requires Intel Quicksync drivers that are only implemented on DS918 DSM. This post may help you understand the limitations further.
-
MBR and Legacy are two different things. If you can support a GPT partition, definitely do so.
Loader 1.02b (for 6.1.x) can work in either Legacy or UEFI mode
Loader 1.03b (for 6.2.x) works in Legacy mode
Loader 1.04b (for 6.2.x) works only in UEFI mode
-
On 12/1/2019 at 10:17 PM, test4321 said:
The problem with XPE is that you can't have it online, because each update is like having major surgery. And if you don't have your box updated it's going to get hacked.
Nothing that a VPN won't solve. If you think your "patched up" Synology box can't be hacked, you need to meet some white hat security folks.
-
9 hours ago, bughatti said:
I have a [raid 5] that correlates to volume 1. I moved my setup a few days ago and when I plugged it back in, my raid 5 lost 2 of the 4 drives. 1 drive was completely hosed, not readable in anything else.
[snip]
I tried a lot of commands (I apologize but I do not remember them all) to get the raid 5 back. In the end I just replaced the bad drive, so at this point I had 2 original raid 5 good drives, and 2 other drives that did not show in the raid 5.
I ended up doing mdadm --create /dev/md2 --assume-clean --level=5 --verbose --raid-devices=4 /dev/sda3 missing /dev/sdc3 /dev/sdd3
this put the raid back into a degraded state, which allowed me to repair using the newly replaced drive. The repair completed, but now volume1, which did show up under volumes as crashed, is missing under volumes.
Sorry for the event and to bring you bad news. As you know, RAID 5 spans parity across the array such that all members less one must be present for data integrity. Your data may have been recoverable at one time, but mdadm --create rewrites the member superblocks, so once that repair operation was initiated with only 2 valid drives, the data on all four drives was irreparably lost. I've highlighted the critical items above.
-
If the array was healthy when shut down, it should work out of order. But that is a last resort. I'd use some blank drives and figure out the order of the ports before installing.
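The reason order doesn't matter for a healthy array is that mdadm stamps each member's slot into its superblock, which "mdadm --examine" reports as a "Device Role" line. A minimal sketch; the sample output below is illustrative, not real output from any system:

```shell
# Illustrative (hypothetical) excerpts from 'mdadm --examine' on four members
# of a 4-drive array, one 'Device Role' line per partition:
sample='/dev/sdb3: Device Role : Active device 1
/dev/sda3: Device Role : Active device 0
/dev/sdd3: Device Role : Active device 3
/dev/sdc3: Device Role : Active device 2'

# Sort the members back into their recorded slot order:
printf '%s\n' "$sample" | sort -t: -k3 | awk -F: '{print $1}'
# prints the members in slot order: sda3, sdb3, sdc3, sdd3
```

Because the slot lives on the disk itself, assembly follows the superblocks, not the cable order; still, blank-drive testing of the ports first is the safer habit.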
Help I lost access to ~45 Tb of data ...
in General Post-Installation Questions/Discussions (non-hardware specific)
Posted
Another one bites the dust. At some point folks should heed the warnings about this time bomb waiting to happen.
What type of SSD drive were you using for cache? Were you monitoring for SSD health? How much SSD lifespan do you believe was remaining?