XPEnology Community

nick413

Member
  • Posts: 44
  • Joined
  • Last visited
  • Days Won: 1

nick413 last won the day on April 24 2021

nick413 had the most liked content!

nick413's Achievements

Junior Member (2/7)

Reputation: 6

  1. Thank you, I will try this setting.
  2. I agree that they are called threads; fine, I will use the proper terms. When I run the command cat /proc/cpuinfo in PuTTY, I get a report of 16 threads. It's strange: if I have 12 cores (2 processors, 6 cores each), there should be 24 threads. How can I disable SMT?
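     A quick way to cross-check what the kernel actually reports (standard Linux commands, nothing DSM-specific):
         grep -c ^processor /proc/cpuinfo                     # logical CPUs (threads) visible to the OS
         grep "physical id" /proc/cpuinfo | sort -u | wc -l   # number of sockets detected
         grep "cpu cores" /proc/cpuinfo | sort -u             # cores per socket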
  3. So, IG-88 created an extra.lzma for me. Tutorial:
     1. Download Jun's loader 1.03b - https://mega.nz/#!zcogjaDT!qIEazI49daggE2odvSwazn3VqBc_wv0zAvab6m6kHbA
     2. In Jun's 1.03b, replace «rd.gz» and «zImage» with the ones from https://archive.synology.com/download/DSM/release/6.2.2/24922/DSM_DS3617xs_24922.pat
     3. In Jun's 1.03b, replace extra.lzma with the one for loader 1.03b_mod ds3617 DSM 6.2.2 v0.4_Beta - http://s000.tinyupload.com/?file_id=81158484589811846693
     4. After the first start you need to get root rights and open access for WinSCP - https://suboshare.blogspot.com/2019/02/synology-nas-dsm-62-root-access.html - then restart.
     5. Open synoinfo.conf. Search for maxdisks and change the 12 to 24. Search for internalportcfg and change the 0xffff to 0xffffff for 24 drives. Restart.
     Results:
     - Ethernet connection on the Intel i350 dual-port 1G NIC - ready
     - Mellanox 40G (checked in Data Center) - ready
     - 12 drives increased to 24 drives - ready
     - 12 SSDs in RAID F1 - it works
     - CPU - sees 16 threads (check with cat /proc/cpuinfo)
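     A minimal sketch of step 5 as shell commands, assuming (as I understand it) that the copy under /etc.defaults is the template DSM restores from, so that is the one worth editing; the paths and quoting style are my assumptions about this DSM build, and a backup comes first:
         cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak
         sed -i 's/^maxdisks=.*/maxdisks="24"/' /etc.defaults/synoinfo.conf
         sed -i 's/^internalportcfg=.*/internalportcfg="0xffffff"/' /etc.defaults/synoinfo.conf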
  4. Now 16, next time increase to 24.
  5. Thanks for the advice, but my infrastructure is built entirely on ESXi; I know its capabilities and weaknesses. The SSD storage in RAID F1 will be connected to the Dell M1000E blade chassis over the iSCSI protocol through a QSFP+ port of the Dell Networking MXL blade switch, so 40G bandwidth is important to me. It is the LSI SAS 9300-8i that has powerful chips for pushing data over iSCSI, which suits me.
  6. It does not work - it does not see the network. I connected the USB flash drive and it boots, but the device does not appear on the network.
  7. 0000:02:00.0 Class 0107: Device 1000:0097 (rev 02)
         Subsystem: Device 1000:30e0
         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
         Latency: 0, Cache Line Size: 32 bytes
         Interrupt: pin A routed to IRQ 28
         Region 0: I/O ports at 5000
         Region 1: Memory at c7200000 (64-bit, non-prefetchable)
         Expansion ROM at c7100000 [disabled]
         Capabilities: [50] Power Management version 3
             Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
             Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
         Capabilities: [68] Express (v2) Endpoint, MSI 00
             DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                 ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
             DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
                 RlxdOrd- ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
                 MaxPayload 256 bytes, MaxReadReq 512 bytes
             DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
             LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM not supported, Exit Latency L0s <2us, L1 <4us
                 ClockPM- Surprise- LLActRep- BwNot-
             LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                 ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
             LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
             DevCap2: Completion Timeout: Range BC, TimeoutDis+, LTR-, OBFF Not Supported
             DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
             LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                 Compliance De-emphasis: -6dB
             LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
                 EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
         Capabilities: [d0] Vital Product Data
             Unknown small resource type 00, will not decode more.
         Capabilities: [a8] MSI: Enable- Count=1/1 Maskable+ 64bit+
             Address: 0000000000000000  Data: 0000
             Masking: 00000000  Pending: 00000000
         Capabilities: [c0] MSI-X: Enable+ Count=96 Masked-
             Vector table: BAR=1 offset=0000e000
             PBA: BAR=1 offset=0000f000
         Capabilities: [100 v2] Advanced Error Reporting
             UESta:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
             UEMsk:  DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
             UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
             CESta:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
             CEMsk:  RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
             AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
         Capabilities: [1e0 v1] #19
         Capabilities: [1c0 v1] Power Budgeting <?>
         Capabilities: [190 v1] #16
         Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
             ARICap: MFVC- ACS-, Next Function: 0
             ARICtl: MFVC- ACS-, Function Group: 0
         Kernel driver in use: mpt3sas
     Command used: lspci -vvv
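     If only this controller is of interest, the same detail can be pulled for just that slot by passing the bus address from the first line to lspci (the -s selector is a standard lspci option):
         lspci -vvv -s 02:00.0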
  8. A storage system with SSDs built on top of ESXi means a significant drop in performance, and ESXi does not fully support 40G data transfer with Mellanox network cards.
  9. Intel I350-T2 dual-port NIC information from Supermicro. This release includes base drivers for Intel(R) Ethernet Network Connections:
     - the igb driver supports all 82575-, 82576-, 82580-, I350-, I210-, I211- and I354-based gigabit network connections;
     - the igbvf driver supports 82576-based virtual function devices that can only be activated on kernels that support SR-IOV;
     - the e1000e driver supports all PCI Express gigabit network connections, except those that are 82575-, 82576-, 82580-, I350-, I210-, and I211-based*.
     * NOTES: The Intel(R) PRO/1000 P Dual Port Server Adapter is supported by the e1000 driver, not the e1000e driver, due to the 82546 part being used behind a PCI Express bridge. Gigabit devices based on the Intel(R) Ethernet Controller X722 are supported by the i40e driver.
     From this information, I need to implement the e1000 driver for 3617 or 3615.
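     To confirm which module the running kernel actually binds to the i350 ports, standard lspci options can help (how complete the output is may vary with DSM's stripped-down pciutils):
         lspci -nnk | grep -A 3 Ethernet    # shows "Kernel driver in use:" and "Kernel modules:" per NIC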
  10. Thanks for the answer; a 3617 or 3615 bootloader will be enough for me for RAID F1, but the network doesn't work with them - the Intel i350 and also the LSI 9300-8i. If you give me information on how to compile the drivers and insert them into extra.lzma, I will try to implement the necessary drivers myself. I need instructions; I think I can do it.
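     For reference, a rough sketch of how I understand extra.lzma can be opened and rebuilt on a regular Linux machine - it appears to be an LZMA-compressed cpio archive, but the exact compression options the loader expects are an assumption on my part:
         mkdir extra && cd extra
         lzma -dc ../extra.lzma | cpio -idmv                        # unpack; the kernel modules (.ko) live inside
         # ...add or replace the compiled igb.ko / mpt3sas.ko here...
         find . | cpio -o -H newc | lzma -9 > ../extra_new.lzma     # repack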
  11. The chip is the SAS3008, the driver is mpt3sas, and the ds3617 image from Synology comes with this driver in DSM (the 3615 does not) - I think this is exactly the right driver for the LSI 9300-8i, no?
  12. How do you like the idea of pulling the drivers out of the FS3017 .pat image? My server is assembled from the same hardware as the internals of the FS3017: chipset C612, 2x LSI SAS 9300-8i, CPU 2x Xeon E5-2620 v3, RAM 64 GB.
  13. I brought my synoinfo.conf settings to the same values as the FS3017 model; I left the WOL network parameters as they were. RAID F1 appeared in the menu, but it does not work. The settings are 1-to-1 like the FS3017, but if I turn on support_sas = "yes", then everything disappears and the bootloader does not see the disks.
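     For clarity, the line in question as it reads in synoinfo.conf (quoting style matching the other keys in that file), plus a quick check that both copies of the file agree - the dual /etc and /etc.defaults locations are my assumption about this build:
         support_sas="yes"
         grep support_sas /etc/synoinfo.conf /etc.defaults/synoinfo.conf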
  14. I have only SSD drives and need better performance than SATA 6 Gb/s; my SAS controllers support 12 Gb/s.