
make configuration from FS3017


nick413


On 12/10/2019 at 3:14 PM, nick413 said:

I have 2 processors, how do I know if the system uses one processor or two?

 

 

DSM representation is cosmetic and is hard coded to the DSM image you're using. Run cat /proc/cpuinfo if you want to see what is actually recognized in the system. There is a limit of 16 threads. You will need to disable SMT if you want to use all the cores (you are using two hexacore CPUs).
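For example, a quick check from the command line (this only reads /proc/cpuinfo, so no extra tools are needed; the grep patterns are just one way to slice it):

grep -c ^processor /proc/cpuinfo                      # logical CPUs (threads) the kernel sees
grep 'physical id' /proc/cpuinfo | sort -u | wc -l    # number of sockets detected
grep 'cpu cores' /proc/cpuinfo | sort -u              # cores per socket
grep 'siblings' /proc/cpuinfo | sort -u               # threads per socket (2x cores when SMT is on)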

 

https://xpenology.com/forum/topic/15022-maximum-number-of-cores/?do=findComment&comment=115359

 

Just a general comment on this thread (which I am following with interest): this task would be a lot easier if you ran the system as a large VM within ESXi.


17 minutes ago, flyride said:

 

DSM representation is cosmetic and is hard coded to the DSM image you're using. Run cat /proc/cpuinfo if you want to see what is actually recognized in the system. There is a limit of 16 threads. You will need to disable SMT if you want to use all the cores (you are using two hexacore CPUs).

 

https://xpenology.com/forum/topic/15022-maximum-number-of-cores/?do=findComment&comment=115359

 

Just a general comment on this thread (which I am following with interest): this task would be a lot easier if you ran the system as a large VM within ESXi.

An SSD-based data storage system running on top of ESXi means a significant drop in performance.

+ ESXi does not fully support 40G data transfer with Mellanox network cards.

Edited by nick413

12 hours ago, IG-88 said:

 

I already did this in the (not publicly available) test version. Your problem is the mpt3sas driver; if the one that comes with DSM does not work with your controller, I don't see how to get it working atm. It might hurt, but either you get yourself SAS controllers that work with 3617, or you throw out SAS and use SATA.

3617:

mpt3sas = version 22.00.02.00

mpt2sas = version 20.00.00.00

this version must be from an external source, as kernel 3.10.105 has different version numbers

 

what's the vendor ID of your controllers (lspci)?
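assuming lspci is present on the box, something like this should print the numeric vendor:device IDs (the grep pattern is only an example):

lspci -nn | grep -i -E 'sas|lsi|raid'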

edit:

if you want to try to build your own drivers and extra.lzma, my "old" description is still good; just look at the end of the thread for some info about what to change for DSM 6.2.2

https://xpenology.com/forum/topic/7187-how-to-build-and-inject-missing-drivers-in-jun-loader-102a/

 

edit2:

if you take the source of mpt3sas 22.00.02.00,

mpt3sas_base.h contains the names and IDs of the controllers supported (that's the version from DSM)

so if you insist on SAS3 there would be the Dell HBA330 12G, but there are others too

 

  mpt3sas_base.h (Show contents)

...

/*
 * Intel SAS2 HBA branding
 */
#define MPT2SAS_INTEL_RMS25JB080_BRANDING    \
    "Intel(R) Integrated RAID Module RMS25JB080"
#define MPT2SAS_INTEL_RMS25JB040_BRANDING    \
    "Intel(R) Integrated RAID Module RMS25JB040"
#define MPT2SAS_INTEL_RMS25KB080_BRANDING    \
    "Intel(R) Integrated RAID Module RMS25KB080"
#define MPT2SAS_INTEL_RMS25KB040_BRANDING    \
    "Intel(R) Integrated RAID Module RMS25KB040"
#define MPT2SAS_INTEL_RMS25LB040_BRANDING    \
    "Intel(R) Integrated RAID Module RMS25LB040"
#define MPT2SAS_INTEL_RMS25LB080_BRANDING    \
    "Intel(R) Integrated RAID Module RMS25LB080"
#define MPT2SAS_INTEL_RMS2LL080_BRANDING    \
    "Intel Integrated RAID Module RMS2LL080"
#define MPT2SAS_INTEL_RMS2LL040_BRANDING    \
    "Intel Integrated RAID Module RMS2LL040"
#define MPT2SAS_INTEL_RS25GB008_BRANDING    \
    "Intel(R) RAID Controller RS25GB008"
#define MPT2SAS_INTEL_SSD910_BRANDING        \
    "Intel(R) SSD 910 Series"

/*
 * Intel SAS2 HBA SSDIDs
 */
#define MPT2SAS_INTEL_RMS25JB080_SSDID         0x3516
#define MPT2SAS_INTEL_RMS25JB040_SSDID         0x3517
#define MPT2SAS_INTEL_RMS25KB080_SSDID         0x3518
#define MPT2SAS_INTEL_RMS25KB040_SSDID         0x3519
#define MPT2SAS_INTEL_RMS25LB040_SSDID         0x351A
#define MPT2SAS_INTEL_RMS25LB080_SSDID         0x351B
#define MPT2SAS_INTEL_RMS2LL080_SSDID          0x350E
#define MPT2SAS_INTEL_RMS2LL040_SSDID          0x350F
#define MPT2SAS_INTEL_RS25GB008_SSDID          0x3000
#define MPT2SAS_INTEL_SSD910_SSDID             0x3700

/*
 * Intel SAS3 HBA branding
 */
#define MPT3SAS_INTEL_RMS3JC080_BRANDING       \
    "Intel(R) Integrated RAID Module RMS3JC080"
#define MPT3SAS_INTEL_RS3GC008_BRANDING       \
        "Intel(R) RAID Controller RS3GC008"
#define MPT3SAS_INTEL_RS3FC044_BRANDING       \
        "Intel(R) RAID Controller RS3FC044"
#define MPT3SAS_INTEL_RS3UC080_BRANDING       \
        "Intel(R) RAID Controller RS3UC080"
#define MPT3SAS_INTEL_RS3PC_BRANDING       \
        "Intel(R) RAID Integrated RAID RS3PC"

/*
 * Intel SAS3 HBA SSDIDs
 */
#define MPT3SAS_INTEL_RMS3JC080_SSDID        0x3521
#define MPT3SAS_INTEL_RS3GC008_SSDID         0x3522
#define MPT3SAS_INTEL_RS3FC044_SSDID         0x3523
#define MPT3SAS_INTEL_RS3UC080_SSDID         0x3524
#define MPT3SAS_INTEL_RS3PC_SSDID            0x3527

/*
 * Dell SAS2 HBA branding
 */
#define MPT2SAS_DELL_6GBPS_SAS_HBA_BRANDING        "Dell 6Gbps SAS HBA"
#define MPT2SAS_DELL_PERC_H200_ADAPTER_BRANDING    "Dell PERC H200 Adapter"
#define MPT2SAS_DELL_PERC_H200_INTEGRATED_BRANDING "Dell PERC H200 Integrated"
#define MPT2SAS_DELL_PERC_H200_MODULAR_BRANDING    "Dell PERC H200 Modular"
#define MPT2SAS_DELL_PERC_H200_EMBEDDED_BRANDING   "Dell PERC H200 Embedded"
#define MPT2SAS_DELL_PERC_H200_BRANDING            "Dell PERC H200"
#define MPT2SAS_DELL_6GBPS_SAS_BRANDING            "Dell 6Gbps SAS"

/*
 * Dell SAS2 HBA SSDIDs
 */
#define MPT2SAS_DELL_6GBPS_SAS_HBA_SSDID           0x1F1C
#define MPT2SAS_DELL_PERC_H200_ADAPTER_SSDID       0x1F1D
#define MPT2SAS_DELL_PERC_H200_INTEGRATED_SSDID    0x1F1E
#define MPT2SAS_DELL_PERC_H200_MODULAR_SSDID       0x1F1F
#define MPT2SAS_DELL_PERC_H200_EMBEDDED_SSDID      0x1F20
#define MPT2SAS_DELL_PERC_H200_SSDID               0x1F21
#define MPT2SAS_DELL_6GBPS_SAS_SSDID               0x1F22
/*
 * Dell SAS3 HBA branding
 */
#define MPT3SAS_DELL_HBA330_ADP_BRANDING    \
    "Dell HBA330 Adp"
#define MPT3SAS_DELL_12G_HBA_BRANDING       \
        "Dell 12Gbps SAS HBA"
#define MPT3SAS_DELL_HBA330_MINI_BRANDING    \
    "Dell HBA330 Mini"

/*
 * Dell SAS3 HBA SSDIDs
 */
#define MPT3SAS_DELL_HBA330_ADP_SSDID    0x1F45
#define MPT3SAS_DELL_12G_HBA_SSDID    0x1F46
#define MPT3SAS_DELL_HBA330_MINI_SSDID    0x1F53

/*
 * Cisco SAS3 HBA branding
 */
#define MPT3SAS_CISCO_12G_8E_HBA_BRANDING       \
        "Cisco 9300-8E 12G SAS HBA"
#define MPT3SAS_CISCO_12G_8I_HBA_BRANDING       \
        "Cisco 9300-8i 12G SAS HBA"
#define MPT3SAS_CISCO_12G_AVILA_HBA_BRANDING       \
        "Cisco 12G Modular SAS Pass through Controller"
#define MPT3SAS_CISCO_12G_COLUSA_MEZZANINE_HBA_BRANDING       \
        "UCS C3X60 12G SAS Pass through Controller"        
/*
 * Cisco SAS3 HBA SSSDIDs
 */
#define MPT3SAS_CISCO_12G_8E_HBA_SSDID  0x14C
#define MPT3SAS_CISCO_12G_8I_HBA_SSDID  0x154
#define MPT3SAS_CISCO_12G_AVILA_HBA_SSDID  0x155
#define MPT3SAS_CISCO_12G_COLUSA_MEZZANINE_HBA_SSDID  0x156

/*
 * HP SAS2 HBA branding
 */
#define MPT2SAS_HP_3PAR_SSVID            0x1590
#define MPT2SAS_HP_2_4_INTERNAL_BRANDING    \
    "HP H220 Host Bus Adapter"
#define MPT2SAS_HP_2_4_EXTERNAL_BRANDING    \
    "HP H221 Host Bus Adapter"
#define MPT2SAS_HP_1_4_INTERNAL_1_4_EXTERNAL_BRANDING    \
    "HP H222 Host Bus Adapter"
#define MPT2SAS_HP_EMBEDDED_2_4_INTERNAL_BRANDING    \
    "HP H220i Host Bus Adapter"
#define MPT2SAS_HP_DAUGHTER_2_4_INTERNAL_BRANDING    \
    "HP H210i Host Bus Adapter"

/*
 * HP SAS2 HBA SSDIDs
 */
#define MPT2SAS_HP_2_4_INTERNAL_SSDID            0x0041
#define MPT2SAS_HP_2_4_EXTERNAL_SSDID            0x0042
#define MPT2SAS_HP_1_4_INTERNAL_1_4_EXTERNAL_SSDID    0x0043
#define MPT2SAS_HP_EMBEDDED_2_4_INTERNAL_SSDID        0x0044
#define MPT2SAS_HP_DAUGHTER_2_4_INTERNAL_SSDID        0x0046

...

 

 

0000:02:00.0 Class 0107: Device 1000:0097 (rev 02)
        Subsystem: Device 1000:30e0
        Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 32 bytes
        Interrupt: pin A routed to IRQ 28
        Region 0: I/O ports at 5000
        Region 1: Memory at c7200000 (64-bit, non-prefetchable)
        Expansion ROM at c7100000 [disabled]
        Capabilities: [50] Power Management version 3
                Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
                Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
        Capabilities: [68] Express (v2) Endpoint, MSI 00
                DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                        ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
                DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
                        RlxdOrd- ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
                        MaxPayload 256 bytes, MaxReadReq 512 bytes
                DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
                LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM not supported, Exit Latency L0s <2us, L1 <4us
                        ClockPM- Surprise- LLActRep- BwNot-
                LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                        ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
                LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
                DevCap2: Completion Timeout: Range BC, TimeoutDis+, LTR-, OBFF Not Supported
                DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
                LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                         Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                         Compliance De-emphasis: -6dB
                LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
                         EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
        Capabilities: [d0] Vital Product Data
                Unknown small resource type 00, will not decode more.
        Capabilities: [a8] MSI: Enable- Count=1/1 Maskable+ 64bit+
                Address: 0000000000000000  Data: 0000
                Masking: 00000000  Pending: 00000000
        Capabilities: [c0] MSI-X: Enable+ Count=96 Masked-
                Vector table: BAR=1 offset=0000e000
                PBA: BAR=1 offset=0000f000
        Capabilities: [100 v2] Advanced Error Reporting
                UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
                UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
                CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
                CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
                AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
        Capabilities: [1e0 v1] #19
        Capabilities: [1c0 v1] Power Budgeting <?>
        Capabilities: [190 v1] #16
        Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
                ARICap: MFVC- ACS-, Next Function: 0
                ARICtl: MFVC- ACS-, Function Group: 0
        Kernel driver in use: mpt3sas


 

Command used: lspci -vvv

Edited by Polanskiman
Added code tag.

52 minutes ago, nick413 said:

An SSD-based data storage system running on top of ESXi means a significant drop in performance.

+ ESXi does not fully support 40G data transfer with Mellanox network cards.

 

My system is very close in design to yours (see my signature).  If you virtualize your network and storage, you may be correct.  However, ESXi allows you to be selective as to what it manages and what it does not.

 

I am using 2x enterprise NVMe drives that are presented to DSM via physical RDM, which is a simple command/protocol translation.  The disks are not otherwise managed by ESXi. This allows me to use them as SATA or SCSI within DSM (they would be totally inaccessible otherwise).  If you have a difficult to support storage controller, the same tactic may apply. From a performance standpoint, if there is overhead it is negligible, as I routinely see 1.4MBps (that's megaBYTES) throughput, which is very close to the stated limits of the drive.
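For anyone who wants to try the same approach, this is roughly what creating a physical RDM pointer looks like from the ESXi shell with vmkfstools; the naa identifier, datastore and folder names below are placeholders, not my actual setup:

# list the raw devices ESXi can see
ls /vmfs/devices/disks/

# create a physical (pass-through) RDM pointer file for one NVMe device,
# then attach the resulting .vmdk to the DSM VM as an existing disk
vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXXXXXX /vmfs/volumes/datastore1/dsm-vm/nvme0-rdmp.vmdk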

 

If the hardware is directly supported by DSM, ESXi can pass the device through and not touch it at all. I do this with my dual Mellanox 10Gbps card and can easily max out the interfaces simultaneously. In the case of SATA, I pass that through as well, so there is no possible loss of performance on that controller and its attached drives.

 

The point is that ESXi can help resolve a problematic device in a very elegant way, and can still provide direct access to hardware that works well with DSM.

Edited by flyride

1 hour ago, nick413 said:

0000:02:00.0 Class 0107: Device 1000:0097 (rev 02)
        Subsystem: Device 1000:30e0

 

that's strange, it's a SAS3008, the same as the Dell type mentioned

https://pci-ids.ucw.cz/read/PC/1000/0097

 

in the driver it checks for differently branded SAS3008 cards; in theory it should also work with a generic type

can you provide dmesg from an installed 3617?

it would be interesting to see what's in the log when loading the mpt3sas driver
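something like this over SSH (as root) would be enough; the grep just narrows it down to the driver messages:

dmesg | grep -i mpt3sas

# or dump the whole log to a file for posting
dmesg > /tmp/dmesg.txt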

 

now I did some additional version checks

what DSM version did you use/install?

6.2.0 (and Jun's loader) comes with mpt3sas version 13.00.00.0

6.2.2 has version 22.00.02.00 as mentioned above, and my source check was against 22.00.02.00

so you would need 6.2.2 - but that breaks drivers that do not come with DSM (for that you would need my new extra.lzma, which is not published yet)

you can try to install 3617 6.2.2; your i350 uses the igb.ko driver and that's inside DSM, so you should be able to use the network, and your SAS3008 should work too
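once it is installed you can check which driver got bound to each card, roughly like this (assuming lspci and lsmod are available, which they normally are on these boxes):

lspci -k | grep -i -A 3 -E 'ethernet|sas|raid'    # shows "Kernel driver in use" per device
lsmod | grep -E 'igb|mpt3sas'                     # confirms the modules are actually loaded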

 

 

 

 

 

Edited by IG-88

2 hours ago, flyride said:

 

My system is very close in design to yours (see my signature).  If you virtualize your network and storage, you may be correct.  However, ESXi allows you to be selective as to what it manages and what it does not.

 

I am using 2x enterprise NVMe drives that are presented to DSM via physical RDM, which is a simple command/protocol translation.  The disks are not otherwise managed by ESXi. This allows me to use them as SATA or SCSI within DSM (they would be totally inaccessible otherwise).  If you have a difficult to support storage controller, the same tactic may apply. From a performance standpoint, if there is overhead it is negligible, as I routinely see 1.4MBps (that's megaBYTES) throughput, which is very close to the stated limits of the drive.

 

If the hardware is directly supported by DSM, ESXi can pass the device through and not touch it at all. I do this with my dual Mellanox 10Gbps card and can easily max out the interfaces simultaneously. In the case of SATA, I pass that through as well, so there is no possible loss of performance on that controller and its attached drives.

 

The point is that ESXi can help resolve a problematic device in a very elegant way, and can still provide direct access to hardware that works well with DSM.

Thanks for the advice, but my infrastructure is built on ESXi; I know its capabilities and weaknesses.

The SSD storage with RAID F1 will be connected to the Dell M1000E blade enclosure via the iSCSI protocol through a QSFP+ port on the Dell Networking MXL blade switch; it is important for me to have 40G of bandwidth.

It is the LSI SAS 9300-8i that has powerful chips for transferring data via the iSCSI protocol, which suits me.

Edited by nick413

26 minutes ago, nick413 said:

Thanks for the advice, but my infrastructure is built on ESXi; I know its capabilities and weaknesses.

The SSD storage with RAID F1 will be connected to the Dell M1000E blade enclosure via the iSCSI protocol through a QSFP+ port on the Dell Networking MXL blade switch; it is important for me to have 40G of bandwidth.

It is the LSI SAS 9300-8i that has powerful chips for transferring data via the iSCSI protocol, which suits me.

How many drives will you have in your RAID F1?


Well, that explains why you want everything you can get out of that 40Gbps card, as the theoretical drive throughput is 2.5x your network bandwidth. So maybe it's not quite so critical that you get the iSCSI hardware support working natively, as that won't be the limiting factor. But good luck however it turns out.

 

You may know this already, but:

 

DS361x image has a native maximum of 12 drives

DS918 image has a native maximum of 16 drives

 

These can be modified, but every time that you update DSM the maximum will revert and your array will be compromised.  It SHOULD come right back once you fix the MaxDisks setting.

Edited by flyride

1 hour ago, flyride said:

DS361x image has a native maximum of 12 drives

DS918 image has a native maximum of 16 drives

 

These can be modified, but every time that you update DSM the maximum will revert and your array will be compromised. It SHOULD come right back once you fix the MaxDisks setting.

 

kind of annoying, and it can be fixed in the extra.lzma with a little diff and patch; if my job were friendlier to me (or my new boss), I wouldn't be so fed up after work (lots of overtime at the moment)

imho it's not that difficult to achieve; Jun does it in 918+, as the default there is just 4 drives. He did not care about that for 3615/17, as the default is 12 and in most cases that is enough

it's just doing a diff of Jun's mod plus an added 24-drive mod against the original synoinfo.conf and then putting that into Jun's mod in the extra.lzma
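in shell terms it is roughly this (the file names are only examples, and carrying/applying the patch from the extra.lzma is the idea, not the exact script):

# make a 24-drive copy of the stock config, edit maxdisks/internalportcfg in it, then:
diff -u synoinfo.conf.orig synoinfo.conf.24bay > maxdisks-24.patch

# the patch can then be carried in the extra.lzma and applied to the live config:
patch /etc/synoinfo.conf < maxdisks-24.patch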

Edited by IG-88

So.

IG-88 created an extra.lzma for me.

Tutorial:

 

1. Download Jun 1.03b - https://mega.nz/#!zcogjaDT!qIEazI49daggE2odvSwazn3VqBc_wv0zAvab6m6kHbA

2. In Jun 1.03b, replace «rd.gz» and «zImage» with the ones from https://archive.synology.com/download/DSM/release/6.2.2/24922/DSM_DS3617xs_24922.pat

3. In Jun 1.03b, replace extra.lzma with "extra.lzma for loader 1.03b_mod ds3617 DSM 6.2.2 v0.4_Beta":
http://s000.tinyupload.com/?file_id=81158484589811846693

4. After starting, you need to get root rights and open access for WinSCP - https://suboshare.blogspot.com/2019/02/synology-nas-dsm-62-root-access.html

Restart.

5. Open up synoinfo.conf.

Search for maxdisks. Change the 12 to 24.

Search for internalportcfg. Change the 0xffff to 0xffffff for 24 drives. (A scripted version of these two edits is sketched right after the steps.)

Restart.
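For reference, the two edits from step 5 can also be done as a small script (a sketch only: back up first, run as root, and note that DSM keeps a second copy in /etc.defaults/synoinfo.conf that can overwrite /etc on reboot or update; the quoting follows the usual synoinfo.conf style):

cp /etc/synoinfo.conf /etc/synoinfo.conf.bak
sed -i 's/^maxdisks=.*/maxdisks="24"/' /etc/synoinfo.conf
sed -i 's/^internalportcfg=.*/internalportcfg="0xffffff"/' /etc/synoinfo.conf
sed -i 's/^maxdisks=.*/maxdisks="24"/' /etc.defaults/synoinfo.conf
sed -i 's/^internalportcfg=.*/internalportcfg="0xffffff"/' /etc.defaults/synoinfo.conf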

 

Results:

Ethernet connection on the Intel i350 dual-port 1G NIC - ready

Mellanox 40G (checked in Data Center) - ready

12 drives increased to 24 drives - ready

12 SSDs in RAID F1 - working

CPU - sees 16 threads (checked with cat /proc/cpuinfo)

 

 


Edited by nick413

14 minutes ago, nick413 said:

CPU - sees 16 cores (Check - cat /proc/cpuinfo)

 

Again, DSM will only use 16 THREADS not cores.  You have 12 cores, and 12 SMT (Hyperthreading) threads.  So DSM is actually only using 8 cores, and 8 threads.

 

You will get better performance if you disable SMT and then DSM will report 12 actual cores.
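Once SMT is disabled in the BIOS you can confirm it from inside DSM with /proc/cpuinfo alone: "siblings" should then equal "cpu cores" for each package.

grep -E 'siblings|cpu cores' /proc/cpuinfo | sort -u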


1 hour ago, flyride said:

 

Again, DSM will only use 16 THREADS not cores.  You have 12 cores, and 12 SMT (Hyperthreading) threads.  So DSM is actually only using 8 cores, and 8 threads.

 

You will get better performance if you disable SMT and then DSM will report 12 actual cores.

I agree that they are called threads; OK, I will use the professional terms.

When I run the command cat /proc/cpuinfo in PuTTY, I get a report of 16 threads.

It's strange. If I have 12 cores (2 processors, 6 cores in each), then there should be 24 threads.

 

How can I disable SMT?

 

Edited by nick413

2 minutes ago, flyride said:

Yes, there should be 24 threads but DSM cannot support that many.

 

To use all your cores, you must disable SMT (Simultaneous Multi-Threading, or Hyperthreading) in your motherboard BIOS.

Thank you, I will try this setting.


8 minutes ago, nick413 said:

It’s strange. If I have 12 cores (2 processors, 6 cores in each), then there should be 24 threads.

 

5 minutes ago, flyride said:

Yes, there should be 24 threads but DSM cannot support that many.

 

more precisely, it's a limit in the DSM kernel that Synology builds, and we are bound to use that kernel (same reason we don't have Hyper-V support)

we can add modules/drivers as long as it's nothing that needs to be built into the kernel

 

Edited by IG-88
