XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 02/02/2022 in all areas

  1. Hi, I'm just leaving this here in case anyone needs it. To start the telnet service once you end up on the recovery page, the install page, etc., open your browser and go to the following URL: http://<your-dsm-ip-address>:5000/webman/start_telnet.cgi (a command-line equivalent is sketched below).
    1 point
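The same endpoint can be triggered from a shell instead of a browser. A minimal sketch, assuming curl is available and 192.168.1.50 stands in for your DSM IP address:

    # ask DSM's web manager to start the telnet service (example IP shown)
    curl "http://192.168.1.50:5000/webman/start_telnet.cgi"
    # then connect on the default telnet port (23)
    telnet 192.168.1.50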
  2. Hello everyone. There have been many discussions about LSI array cards under DSM 7.0, in various posts on GitHub and on the XPEnology forums, but so far there is no clear solution. The problems everyone runs into come down to two points: 1. After adding the mpt2sas/mpt3sas driver, the array card is not recognized. 2. The array card is recognized correctly, but the SMART information of the hard disks cannot be obtained. These two questions troubled me for a long time and I made a lot of attempts, so I thought I would publish some of my results and experiences for your reference.

Regarding question 1: There are many types and versions of LSI array cards, and the key to whether mpt2sas/mpt3sas can drive a given card is whether the driver contains the Vendor ID and Device ID of that card. So first we need to run:

lspci -nn | grep LSI

On my machine I get a result like the following:

01:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS2308 PCI-Express Fusion-MPT SAS-2 [1000:0087] (rev 05)

Next we can check whether the driver contains this ID. On any machine we can execute modinfo mpt3sas.ko (the driver is from pocopico's GitHub repository rp-ext); the result is as follows:

filename:       /root/test/mpt3sas.ko
alias:          mpt2sas
version:        22.00.00.00
license:        GPL
description:    LSI MPT Fusion SAS 3.0 & SAS 3.5 Device Driver
author:         Avago Technologies <MPT-FusionLinux.pdl@avagotech.com>
srcversion:     80624A1362CD953ED59AF65
alias:          pci:v00001000d000000D2sv*sd*bc*sc*i*
alias:          pci:v00001000d000000D1sv*sd*bc*sc*i*
alias:          pci:v00001000d000000D0sv*sd*bc*sc*i*
alias:          pci:v00001000d000000ACsv*sd*bc*sc*i*
alias:          pci:v00001000d000000ABsv*sd*bc*sc*i*
alias:          pci:v00001000d000000AAsv*sd*bc*sc*i*
alias:          pci:v00001000d000000AFsv*sd*bc*sc*i*
alias:          pci:v00001000d000000AEsv*sd*bc*sc*i*
alias:          pci:v00001000d000000ADsv*sd*bc*sc*i*
alias:          pci:v00001000d000000C3sv*sd*bc*sc*i*
alias:          pci:v00001000d000000C2sv*sd*bc*sc*i*
alias:          pci:v00001000d000000C1sv*sd*bc*sc*i*
alias:          pci:v00001000d000000C0sv*sd*bc*sc*i*
alias:          pci:v00001000d000000C8sv*sd*bc*sc*i*
alias:          pci:v00001000d000000C7sv*sd*bc*sc*i*
alias:          pci:v00001000d000000C6sv*sd*bc*sc*i*
alias:          pci:v00001000d000000C5sv*sd*bc*sc*i*
alias:          pci:v00001000d000000C4sv*sd*bc*sc*i*
alias:          pci:v00001000d000000C9sv*sd*bc*sc*i*
alias:          pci:v00001000d00000095sv*sd*bc*sc*i*
alias:          pci:v00001000d00000094sv*sd*bc*sc*i*
alias:          pci:v00001000d00000091sv*sd*bc*sc*i*
alias:          pci:v00001000d00000090sv*sd*bc*sc*i*
alias:          pci:v00001000d00000097sv*sd*bc*sc*i*
alias:          pci:v00001000d00000096sv*sd*bc*sc*i*
alias:          pci:v00001000d0000007Esv*sd*bc*sc*i*
alias:          pci:v00001000d000002B0sv*sd*bc*sc*i*
alias:          pci:v00001000d0000006Esv*sd*bc*sc*i*
alias:          pci:v00001000d00000087sv*sd*bc*sc*i*
alias:          pci:v00001000d00000086sv*sd*bc*sc*i*
alias:          pci:v00001000d00000085sv*sd*bc*sc*i*
alias:          pci:v00001000d00000084sv*sd*bc*sc*i*
alias:          pci:v00001000d00000083sv*sd*bc*sc*i*
alias:          pci:v00001000d00000082sv*sd*bc*sc*i*
alias:          pci:v00001000d00000081sv*sd*bc*sc*i*
alias:          pci:v00001000d00000080sv*sd*bc*sc*i*
alias:          pci:v00001000d00000065sv*sd*bc*sc*i*
alias:          pci:v00001000d00000064sv*sd*bc*sc*i*
alias:          pci:v00001000d00000077sv*sd*bc*sc*i*
alias:          pci:v00001000d00000076sv*sd*bc*sc*i*
alias:          pci:v00001000d00000074sv*sd*bc*sc*i*
alias:          pci:v00001000d00000072sv*sd*bc*sc*i*
alias:          pci:v00001000d00000070sv*sd*bc*sc*i*
depends:
retpoline:      Y
vermagic:       4.4.180+ SMP mod_unload
parm:           logging_level: bits for enabling additional logging info (default=0)
parm:           sdev_queue_depth: globally setting SAS device queue depth
parm:           max_sectors:max sectors, range 64 to 32767 default=32767 (ushort)
parm:           command_retry_count: Device discovery TUR command retry count: (default=144) (int)
parm:           missing_delay: device missing delay , io missing delay (array of int)
parm:           host_lock_mode:Enable SCSI host lock if set to 1(default=0) (int)
parm:           max_lun: max lun, default=16895 (int)
parm:           hbas_to_enumerate: 0 - enumerates all SAS 2.0, SAS3.0 & above generation HBAs 1 - enumerates only SAS 2.0 generation HBAs 2 - enumerates SAS 3.0 & above generation HBAs (default=-1, Enumerates all SAS 2.0, SAS 3.0 & above generation HBAs else SAS 3.0 & above generation HBAs only) (int)
parm:           mpt3sas_multipath: enabling mulipath support for target resets (default=0) (int)
parm:           multipath_on_hba: Multipath support to add same target device as many times as it is visible to HBA from various paths (by default: SAS 2.0,SAS 3.0 HBA & SAS3.5 HBA-This will be disabled) (int)
parm:           disable_eedp: disable EEDP support: (default=0) (uint)
parm:           diag_buffer_enable: post diag buffers (TRACE=1/SNAPSHOT=2/EXTENDED=4/default=0) (int)
parm:           disable_discovery: disable discovery (int)
parm:           allow_drive_spindown: allow host driver to issue START STOP UNIT(STOP) command to spindown the drive before shut down or driver unload, default=1, Dont spindown any SATA drives =0 / Spindown SSD but not HDD = 1/ Spindown HDD but not SSD =2/ Spindown all SATA drives =3 (uint)
parm:           prot_mask: host protection capabilities mask, def=0x7f (int)
parm:           protection_guard_mask: host protection algorithm mask, def=3 (int)
parm:           issue_scsi_cmd_to_bringup_drive: allow host driver to issue SCSI commands to bring the drive to READY state, default=1 (int)
parm:           sata_smart_polling: poll for smart errors on SATA drives: (default=0) (uint)
parm:           max_queue_depth: max controller queue depth (int)
parm:           max_sgl_entries: max sg entries (int)
parm:           msix_disable: disable msix routed interrupts (default=0) (int)
parm:           smp_affinity_enable:SMP affinity feature enable/disbale Default: enable(1) (int)
parm:           max_msix_vectors: max msix vectors (int)
parm:           mpt3sas_fwfault_debug: enable detection of firmware fault and halt firmware - (default=0)

I can find my corresponding ID in this line:

alias:          pci:v00001000d00000087sv*sd*bc*sc*i*

where v00001000 corresponds to Vendor ID 1000 and d00000087 corresponds to Device ID 0087. You can use the same method to check whether the driver supports your LSI adapter (a combined one-liner is sketched after this post).

Regarding question 2: Because the situation is more complicated, I made a table to show the results of my tests (a module-parameter experiment related to SMART is also noted after this post). In every test, card 1 was an LSI SAS2308 <LSI 9207-8i> <1000:0087> and card 2 was an LSI SAS3416 <LSI 9400-16i> <1000:00ac>.

DSM model | DSM version  | Driver source                           | Driver version | Card 1 SMART result  | Card 2 SMART result
DS918+    | 7.0.1-42218  | pocopico/rp-ext mpt3sas                 | 22.00.00.00    | no SMART information | OK
DS918+    | 7.0.1-42218  | Broadcom latest driver for LSI 9207-8i  | 20.00.04.00    | no SMART information | no PCI ID (card not recognized)
DS918+    | 7.0.1-42218  | Broadcom latest driver for LSI SAS3416  | 40.00.00.00    | no SMART information | OK

I also tried ig-88's suggestion of using the mpt3sas driver from the broadwell 7.0-41890 GPL source to compile the driver for the 918+, but got the following error:

make: Entering directory '/home/dog/dog/nas/toolkit/build_env/ds.apollolake-7.0/usr/local/x86_64-pc-linux-gnu/x86_64-pc-linux-gnu/sys-root/usr/lib/modules/DSM-7.0/build'
  CC [M]  /home/dog/dog/nas/sourcecode/broadwell/linux-4.4.x/drivers/scsi/mpt3sas/mpt3sas_scsih.o
/home/dog/dog/nas/sourcecode/broadwell/linux-4.4.x/drivers/scsi/mpt3sas/mpt3sas_scsih.c: In function ‘_scsih_probe’:
/home/dog/dog/nas/sourcecode/broadwell/linux-4.4.x/drivers/scsi/mpt3sas/mpt3sas_scsih.c:8790:14: error: ‘struct scsi_host_template’ has no member named ‘syno_set_sashost_disk_led’
   shost->hostt->syno_set_sashost_disk_led = syno_scsih_lsi3008_set_led;
              ^~
scripts/Makefile.build:277: recipe for target '/home/dog/dog/nas/sourcecode/broadwell/linux-4.4.x/drivers/scsi/mpt3sas/mpt3sas_scsih.o' failed
make[1]: *** [/home/dog/dog/nas/sourcecode/broadwell/linux-4.4.x/drivers/scsi/mpt3sas/mpt3sas_scsih.o] Error 1
Makefile:1445: recipe for target '_module_/home/dog/dog/nas/sourcecode/broadwell/linux-4.4.x/drivers/scsi/mpt3sas' failed
make: *** [_module_/home/dog/dog/nas/sourcecode/broadwell/linux-4.4.x/drivers/scsi/mpt3sas] Error 2
make: Leaving directory '/home/dog/dog/nas/toolkit/build_env/ds.apollolake-7.0/usr/local/x86_64-pc-linux-gnu/x86_64-pc-linux-gnu/sys-root/usr/lib/modules/DSM-7.0/build'
    1 point
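The question 1 check above can be combined into a short sequence. A minimal sketch, assuming the driver file sits at ./mpt3sas.ko and using the 1000:0087 ID from the example (substitute whatever vendor:device pair your own lspci prints):

    # list the vendor:device IDs of the SAS controllers in this machine
    lspci -nn | grep -i -E 'lsi|broadcom'
    # check whether the driver's alias table contains that ID
    # (v00001000 = Vendor ID 1000, d00000087 = Device ID 0087)
    modinfo ./mpt3sas.ko | grep -i 'v00001000d00000087'

If the last command prints nothing, the driver build does not know your card and adding it to a load-time alias will not help; it needs to be compiled with that PCI ID.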
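On the SMART side, the modinfo output above shows a sata_smart_polling parameter that defaults to 0. Whether setting it changes what DSM's Storage Manager displays is only an assumption on my part, not a confirmed fix, but it is a cheap experiment to try:

    # reload the driver with SATA SMART polling enabled
    # (untested assumption, not a verified fix; only try this with no volume in use)
    rmmod mpt3sas
    insmod /root/test/mpt3sas.ko sata_smart_polling=1
    # then query one drive directly for comparison (/dev/sdb is just an example device)
    smartctl -d sat -a /dev/sdb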
  3. Did you wipe the disk before the new install without network? Make sure the disk is clean, with nothing left over from the old failed install (one way to wipe it is sketched below).
    1 point
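For reference, one common way to clean a disk completely is from a Linux live USB. A minimal sketch, assuming the target disk is /dev/sdX (double-check the device name first; this destroys all data on it):

    # show all disks so you pick the right one
    lsblk
    # remove every filesystem/RAID signature from the disk (destructive!)
    wipefs -a /dev/sdX
    # optionally also zero the first few MB, including old partition tables
    dd if=/dev/zero of=/dev/sdX bs=1M count=16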
  4. NO, yes, yes. NO, not in general. Yes, only if the controller has its own PCIe bridge chip (expensive controller), or yes, when the system/BIOS supports bifurcation on an 8- or 16-lane PCIe slot. https://peine-braun.net/shop/index.php?route=information/information&information_id=7 So most cheap cards of that kind (4 x NVMe on a 16x card) will need bifurcation support; if the card costs 300-500 bucks it might be one with a bridge chip (a quick way to tell which kind you have is shown below).
    1 point
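One way to tell the two kinds of cards apart once installed: a card with its own bridge/switch chip shows the NVMe drives behind an extra PCI bridge in the device tree, while a plain bifurcation card does not. A minimal sketch, assuming a Linux shell on the box:

    # print the PCI device tree and look for the NVMe controllers
    lspci -tv
    # NVMe drives listed under an extra PLX/PEX or similar bridge entry mean
    # the card carries its own switch chip; drives hanging directly off the
    # CPU/chipset root ports usually mean the slot is being bifurcated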
  5. This is purely cosmetic & merely reflects the CPU of the genuine Synology model; all cores of your actual CPU will be used (see the quick check below).
    1 point
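A quick way to confirm this from an SSH session, independent of what the info page displays:

    # count the CPU threads the DSM kernel actually sees and schedules on
    grep -c ^processor /proc/cpuinfo
    # show the real CPU model string as the kernel reports it
    grep -m1 'model name' /proc/cpuinfo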
  6. Thanks! Will this boot my 6.2.3 so I can upgrade to 6.2.4 through DSM?
    1 point