XPEnology Community

nick413

Member
  • Posts

    44
  • Joined

  • Last visited

  • Days Won

    1

Everything posted by nick413

  1. Thank you, I will try this setting.
  2. I agree that they are called threads; fine, I will use the proper terms. When I run cat /proc/cpuinfo in PuTTY, it reports 16 threads. That is strange: with 12 cores (2 processors, 6 cores each) there should be 24 threads. How can I disable SMT?
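     For reference, counting what the kernel actually exposes is a one-liner; whether SMT can be switched off from DSM is another matter (the nosmt kernel parameter and the BIOS Hyper-Threading switch are assumptions here, not something verified on this loader):

         grep -c ^processor /proc/cpuinfo                     # logical CPUs (threads) the kernel sees
         grep "siblings\|cpu cores" /proc/cpuinfo | sort -u   # threads vs. cores per package
         # SMT can usually be disabled in the BIOS, or by adding nosmt to the kernel line in grub.cfg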
  3. So: IG-88 created an extra.lzma for me. Tutorial:
     1. Download Jun's loader 1.03b - https://mega.nz/#!zcogjaDT!qIEazI49daggE2odvSwazn3VqBc_wv0zAvab6m6kHbA
     2. In Jun 1.03b, replace «rd.gz» and «zImage» with the ones from https://archive.synology.com/download/DSM/release/6.2.2/24922/DSM_DS3617xs_24922.pat
     3. In Jun 1.03b, replace extra.lzma with the one for loader 1.03b_mod ds3617 DSM 6.2.2 v0.4_Beta - http://s000.tinyupload.com/?file_id=81158484589811846693
     4. After the first start, get root rights and open access for WinSCP - https://suboshare.blogspot.com/2019/02/synology-nas-dsm-62-root-access.html Restart.
     5. Open synoinfo.conf. Search for maxdisks and change the 12 to 24. Search for internalportcfg and change 0xffff to 0xffffff for 24 drives. Restart (see the sketch below).
     Results:
     - Ethernet connection on the Intel i350 dual-port 1G NIC - ready
     - Mellanox 40G (checked in the Data Center) - ready
     - 12 drives raised to 24 drives - ready
     - 12 SSDs in RAID F1 - working
     - CPU - 16 threads visible (check with cat /proc/cpuinfo)
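     For step 5, a minimal sketch of the edits over SSH (assumes the usual /etc/synoinfo.conf and /etc.defaults/synoinfo.conf locations; back the files up and check the values before rebooting):

         # raise the disk limits in both copies of synoinfo.conf
         for f in /etc/synoinfo.conf /etc.defaults/synoinfo.conf; do
             cp "$f" "$f.bak"
             sed -i 's/^maxdisks=.*/maxdisks="24"/' "$f"
             sed -i 's/^internalportcfg=.*/internalportcfg="0xffffff"/' "$f"
         done
         grep -E '^(maxdisks|internalportcfg)=' /etc/synoinfo.conf   # verify before rebooting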
  4. Now 16, next time increase to 24.
  5. Thanks for the advice, but my infrastructure is built on ESXi; I know its capabilities and weaknesses. The SSD storage with RAID F1 will be connected to the Dell M1000E blade enclosure over iSCSI through the QSFP+ port of a Dell Networking MXL blade switch, and 40G of bandwidth is important to me. The LSI SAS 9300-8i has powerful chips for pushing data over iSCSI, which suits me.
  6. It does not work; it does not see the network. I plugged in the USB flash drive and it boots, but the device does not appear on the network.
  7. Command used: lspci -vvv

     0000:02:00.0 Class 0107: Device 1000:0097 (rev 02)
         Subsystem: Device 1000:30e0
         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx+
         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
         Latency: 0, Cache Line Size: 32 bytes
         Interrupt: pin A routed to IRQ 28
         Region 0: I/O ports at 5000
         Region 1: Memory at c7200000 (64-bit, non-prefetchable)
         Expansion ROM at c7100000 [disabled]
         Capabilities: [50] Power Management version 3
             Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
             Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
         Capabilities: [68] Express (v2) Endpoint, MSI 00
             DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                 ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
             DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
                 RlxdOrd- ExtTag+ PhantFunc- AuxPwr- NoSnoop+ FLReset-
                 MaxPayload 256 bytes, MaxReadReq 512 bytes
             DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
             LnkCap: Port #0, Speed 8GT/s, Width x8, ASPM not supported, Exit Latency L0s <2us, L1 <4us
                 ClockPM- Surprise- LLActRep- BwNot-
             LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                 ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
             LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
             DevCap2: Completion Timeout: Range BC, TimeoutDis+, LTR-, OBFF Not Supported
             DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
             LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                 Compliance De-emphasis: -6dB
             LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
                 EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
         Capabilities: [d0] Vital Product Data
             Unknown small resource type 00, will not decode more.
         Capabilities: [a8] MSI: Enable- Count=1/1 Maskable+ 64bit+
             Address: 0000000000000000  Data: 0000
             Masking: 00000000  Pending: 00000000
         Capabilities: [c0] MSI-X: Enable+ Count=96 Masked-
             Vector table: BAR=1 offset=0000e000
             PBA: BAR=1 offset=0000f000
         Capabilities: [100 v2] Advanced Error Reporting
             UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
             UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
             UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
             CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
             CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
             AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
         Capabilities: [1e0 v1] #19
         Capabilities: [1c0 v1] Power Budgeting <?>
         Capabilities: [190 v1] #16
         Capabilities: [148 v1] Alternative Routing-ID Interpretation (ARI)
             ARICap: MFVC- ACS-, Next Function: 0
             ARICtl: MFVC- ACS-, Function Group: 0
         Kernel driver in use: mpt3sas
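     If only the driver binding is of interest, a shorter check than the full -vvv dump (standard lspci options):

         lspci -nnk | grep -A 3 -i 'sas\|ethernet'        # devices with their kernel driver / modules
         ls -l /sys/bus/pci/devices/0000:02:00.0/driver   # symlink to the driver bound to this HBA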
  8. Running SSD storage on top of ESXi means a significant drop in performance. Besides, ESXi does not fully support 40G data transfer with Mellanox network cards.
  9. Intel i350-T2 dual-port NIC information from Supermicro. This release includes base drivers for Intel(R) Ethernet Network Connections:
     - the igb driver supports all 82575-, 82576-, 82580-, I350-, I210-, I211- and I354-based gigabit network connections;
     - the igbvf driver supports 82576-based virtual function devices, which can only be activated on kernels that support SR-IOV;
     - the e1000e driver supports all PCI Express gigabit network connections, except those that are 82575-, 82576-, 82580-, I350-, I210- and I211-based*.
     * NOTES:
     - The Intel(R) PRO/1000 P Dual Port Server Adapter is supported by the e1000 driver, not the e1000e driver, due to the 82546 part being used behind a PCI Express bridge.
     - Gigabit devices based on the Intel(R) Ethernet Controller X722 are supported by the i40e driver.
     From this information, I need the e1000 driver implemented for 3617 or 3615.
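     To double-check which module should claim the i350 on a given kernel, one option is to match the PCI ID against the module's alias table (a sketch; 8086:1521 is the usual ID of the copper I350):

         lspci -nn | grep -i ethernet     # note the [8086:xxxx] ID reported for the i350
         modinfo igb | grep -i -c 1521    # non-zero means this kernel's igb module knows that ID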
  10. Thanks for the answer; the 3617 or 3615 bootloader will be enough for me for RAID F1. But the network does not work with them (Intel i350), and neither does the LSI 9300-8i. If you give me information on how to compile the drivers and insert them into extra.lzma, I will try to implement the necessary drivers myself. I need instructions; I think I can do it.
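     For the repacking part only, a rough sketch (extra.lzma is an LZMA-compressed cpio archive; the internal layout named here is from memory of IG-88's packages and may differ, and building the .ko files still needs the matching DSM kernel sources and toolchain):

         mkdir extra && cd extra
         lzma -dc ../extra.lzma | cpio -idmv                       # unpack the archive
         # ...drop compiled .ko files into usr/lib/modules/ and list them in etc/rc.modules...
         find . | cpio -o -H newc | lzma -9 > ../extra.lzma.new    # repack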
  11. "Chip is SAS3008, driver mpt3sas; the ds3617 image from Synology comes with this driver in DSM (3615 does not)." I think this is exactly the right driver for the LSI 9300-8i, no?
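     Once the box boots, it is easy to confirm whether mpt3sas actually loaded and bound to the card:

         lsmod | grep mpt3sas       # module loaded?
         dmesg | grep -i mpt3sas    # probe and firmware messages
         lspci -k -d 1000:0097      # 1000:0097 is the SAS3008 ID from the dump above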
  12. How do you like the idea of pulling the drivers out of the .pat image of the FS3017? My server is built on the same hardware as the internals of the FS3017: chipset C612, 2 x LSI SAS 9300-8i, CPU 2 x Xeon E5-2620 v3, 64 GB RAM.
  13. I brought the synoinfo.conf settings to the same values as the FS3017 model; I left my WOL and network parameters as they were. RAID F1 appeared in the menu, but it does not work. The settings are 1:1 like the FS3017, but if I turn on support_sas="yes", everything disappears and the bootloader does not see the disks (comparison sketch below).
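     To see exactly which keys differ before flipping them, a small comparison loop helps (a sketch; /root/fs3017_synoinfo.conf is a hypothetical path for the copy extracted from the FS3017 .pat, and the key list is only the ones discussed here):

         for key in support_sas maxdisks internalportcfg; do
             echo "== $key =="
             grep "^$key=" /etc.defaults/synoinfo.conf
             grep "^$key=" /root/fs3017_synoinfo.conf    # hypothetical path to the FS3017 copy
         done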
  14. I have only SSD drives and need better performance than SATA 6G; my SAS controllers support 12G.
  15. https://www.synology.com/en-global/knowledgebase/DSM/tutorial/Storage/Which_Synology_NAS_models_support_RAID_F1 The DS3617xs model does support RAID F1, but the server does not see the hard drives with Jun's Loader v1.03b DS3617xs. A similar picture is described on the internet. The loading time is still long: 1. the IP address appears 10 minutes after the start; 2. the web interface starts to load after another 8-10 minutes. I will try this, thanks for the answers.
  16. I did this, but RAID F1 did not appear. I restarted the server before checking. Any ideas what else can be done? According to the technical documentation, the 918+ does not support RAID F1.
  17. I got all the disks running. It turned out I have 2 backplanes, which need only 2 SAS controllers; I had connected a third one to a free port, so in the end I disconnected the third. I also disabled all SATA ports in the BIOS, eSATA too. The first physical disk in the server now matches the first disk in the web interface, and all 16 SSDs came up. What remains is to activate RAID F1 and the Mellanox 40G network card. Clear progress.
  18. There are no problems with the i350. It was booting for a long time because of the extra SAS controller.
  19. I had 3 SAS controllers. I disabled one SAS controller and disabled all SATA ports, eSATA too. It boots faster now and the device appears on the network sooner; from the logs I realized the problem was the third SAS controller. Left to do: activate RAID F1 and activate the Mellanox 40G network card. I have 2 processors; how do I know whether the system uses one processor or two?
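     Checking how many physical packages the kernel actually brought up is quick:

         grep "physical id" /proc/cpuinfo | sort -u | wc -l   # number of CPU packages in use
         grep -c ^processor /proc/cpuinfo                     # total logical CPUs online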
  20. I uploaded the file in full; its contents look strange to me. dmesg
  21. On every boot. How do I upload the logs correctly? In the previous message I posted the information. Where can I disable it: in grub.cfg, in jun.patch, or in the 'synoinfo.conf' file located under '/etc.defaults'?
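     One simple way to hand over logs is to dump them to a file on a volume and fetch it with WinSCP (a sketch; the path assumes /volume1 exists):

         dmesg > /volume1/dmesg_$(date +%Y%m%d).txt   # kernel ring buffer to a downloadable file
         ls -l /var/log/                              # DSM keeps its own logs here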
  22. 0000:82:00.0 Class 0200: Device 15b3:1007
         Subsystem: Device 15b3:0006
         Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR+ FastB2B- DisINTx-
         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
         Latency: 0, Cache Line Size: 32 bytes
         Interrupt: pin A routed to IRQ 11
         Region 0: Memory at fbc00000 (64-bit, non-prefetchable)
         Region 2: Memory at fb000000 (64-bit, prefetchable)
         Expansion ROM at fbb00000 [disabled]
         Capabilities: [40] Power Management version 3
             Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
             Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
         Capabilities: [48] Vital Product Data
             Product Name: CX314A - ConnectX-3 Pro QSFP
             Read-only fields:
                 [PN] Part number: MCX314A-BCCT
                 [EC] Engineering changes: AC
                 [SN] Serial number: MT1843X01291
                 [V0] Vendor specific: PCIe Gen3 x8
                 [RV] Reserved: checksum good, 0 byte(s) reserved
             Read/write fields:
                 [V1] Vendor specific: N/A
                 [YA] Asset tag: N/A
                 [RW] Read-write area: 101 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 253 byte(s) free
                 [RW] Read-write area: 252 byte(s) free
             End
         Capabilities: [9c] MSI-X: Enable- Count=128 Masked-
             Vector table: BAR=0 offset=0007c000
             PBA: BAR=0 offset=0007d000
         Capabilities: [60] Express (v2) Endpoint, MSI 00
             DevCap: MaxPayload 512 bytes, PhantFunc 0, Latency L0s <64ns, L1 unlimited
                 ExtTag- AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
             DevCtl: Report errors: Correctable- Non-Fatal- Fatal- Unsupported-
                 RlxdOrd- ExtTag- PhantFunc- AuxPwr- NoSnoop- FLReset-
                 MaxPayload 256 bytes, MaxReadReq 512 bytes
             DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
             LnkCap: Port #8, Speed 8GT/s, Width x8, ASPM L0s, Exit Latency L0s unlimited, L1 unlimited
                 ClockPM- Surprise- LLActRep- BwNot-
             LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk+
                 ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
             LnkSta: Speed 8GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
             DevCap2: Completion Timeout: Range ABCD, TimeoutDis+, LTR-, OBFF Not Supported
             DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
             LnkCtl2: Target Link Speed: 8GT/s, EnterCompliance- SpeedDis-
                 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                 Compliance De-emphasis: -6dB
             LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete+, EqualizationPhase1+
                 EqualizationPhase2+, EqualizationPhase3+, LinkEqualizationRequest-
         Capabilities: [c0] Vendor Specific Information: Len=18 <?>
         Capabilities: [100 v1] Alternative Routing-ID Interpretation (ARI)
             ARICap: MFVC- ACS-, Next Function: 0
             ARICtl: MFVC- ACS-, Function Group: 0
         Capabilities: [148 v1] Device Serial Number 98-03-9b-03-00-cf-01-e0
         Capabilities: [108 v1] Single Root I/O Virtualization (SR-IOV)
             IOVCap: Migration-, Interrupt Message Number: 000
             IOVCtl: Enable- Migration- Interrupt- MSE- ARIHierarchy+
             IOVSta: Migration-
             Initial VFs: 8, Total VFs: 8, Number of VFs: 0, Function Dependency Link: 00
             VF offset: 1, stride: 1, Device ID: 1004
             Supported Page Size: 000007ff, System Page Size: 00000001
             Region 2: Memory at 0000000000000000 (64-bit, prefetchable)
             VF Migration: offset: 00000000, BIR: 0
         Capabilities: [154 v2] Advanced Error Reporting
             UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
             UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
             UESvrt: DLP+ SDES- TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
             CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr-
             CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
             AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
         Capabilities: [18c v1] #19
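     Note that, unlike the LSI dump above, this output has no "Kernel driver in use" line, which suggests the ConnectX-3 driver is not loaded. A quick check (mlx4_core/mlx4_en are the standard in-kernel ConnectX-3 modules):

         lsmod | grep mlx4        # core and ethernet modules loaded?
         dmesg | grep -i mlx4     # probe errors, missing firmware, etc.
         lspci -k -d 15b3:1007    # 15b3:1007 is the ConnectX-3 Pro ID from the dump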
  23. Jun's Loader v1.04b DS918+ works without any fuss; I will keep fine-tuning it from here.