ozool

Members
  • Content Count: 10

Community Reputation

0 Neutral

About ozool

  • Rank
    Newbie


  1. Did you manage to get this working?
  2. Hi, I'm trying to switch from a Highpoint SGL2720SGL to an LSI 9207-8i. The system runs correctly on ESXi 6.7 with the Highpoint card but seems to crash when I switch to the 9207-8i. The 9207-8i works perfectly when passed through to a Windows VM, but nothing shows up in Synology Assistant when it's passed to the Xpenology VM. I'm using loader 1.03a2 with DS918+, with synoboot on SATA0:0 and a second virtual disk on SATA1:0. Can anyone help? My serial port output is pasted below. Thanks.

     patching file etc/rc
     patching file etc/synoinfo.conf
     Hunk #2 FAILED at 266.
     Hunk #3 FAILED at 311.
     2 out of 3 hunks FAILED -- saving rejects to file etc/synoinfo.conf.rej
     patching file linuxrc.syno
     patching file usr/sbin/init.post
     START /linuxrc.syno
     Insert basic USB modules...
     :: Loading module usb-common ... [ OK ]
     :: Loading module usbcore ... [ OK ]
     :: Loading module ohci-hcd ... [ OK ]
     :: Loading module xhci-hcd ... [ OK ]
     :: Loading module xhci-pci ... [ OK ]
     :: Loading module usb-storage ... [ OK ]
     :: Loading module BusLogic ... [ OK ]
     :: Loading module vmw_pvscsi ... [ OK ]
     :: Loading module megaraid_mm ... [ OK ]
     :: Loading module megaraid_mbox ... [ OK ]
     :: Loading module scsi_transport_spi ... [ OK ]
     :: Loading module scsi_transport_sas ... [ OK ]
     :: Loading module libsas ... [ OK ]
     :: Loading module raid_class ... [ OK ]
     :: Loading module mptbase ... [ OK ]
     :: Loading module mptscsih ... [ OK ]
     :: Loading module mptspi ... [ OK ]
     :: Loading module mptctl ... [ OK ]
     :: Loading module megaraid ... [ OK ]
     :: Loading module megaraid_sas ... [ OK ]
     :: Loading module mptsas ... [ OK ]
     :: Loading module mpt2sas ... [ OK ]
     :: Loading module mdio ... [ OK ]
     :: Loading module 3w-9xxx ... [ OK ]
     :: Loading module 3w-sas ... [ OK ]
     :: Loading module aacraid ... [FAILED]
     :: Loading module arcmsr ... [ OK ]
     :: Loading module gdth ... [ OK ]
     :: Loading module hpsa ... [ OK ]
     :: Loading module hptiop ... [ OK ]
     :: Loading module isci ... [ OK ]
     :: Loading module mvsas ... [ OK ]
     :: Loading module mvumi ... [ OK ]
     :: Loading module aic94xx ... [ OK ]
     :: Loading module cciss ... [ OK ]
     :: Loading module ips ... [FAILED]
     :: Loading module sx8 ... [ OK ]
     Insert net driver(Mindspeed only)...
     [    3.212932] BUG: unable to handle kernel paging request at 0000000000306d73
     [    3.213522] IP: [<ffffffff813f0c19>] syno_ahci_disk_led_enable+0x39/0x170
     [    3.213936] PGD 72e85067 PUD 76f5c067 PMD 0
     [    3.213980] Oops: 0000 [#1] PREEMPT SMP
     [    3.214025] Modules linked in: apollolake_synobios(PO+) etxhci_hcd sx8(E) cciss(E) aic94xx(E) mvumi(E) mvsas(E) isci(E) hptiop(E) hpsa(E) gdth(E) arcmsr(E) 3w_sas(E) 3w_9xxx(E) mdio(E) mpt2sas(OE) mptsas(E) megaraid_sas(E) megaraid(E) mptctl(E) mptspi(E) mptscsih(E) mptbase(E) raid_class(E) libsas(E) scsi_transport_sas(E) scsi_transport_spi(E) megaraid_mbox(E) megaraid_mm(E) vmw_pvscsi(E) BusLogic(E) usb_storage xhci_pci xhci_hcd ohci_hcd(E) usbcore usb_common bfex(OE)
     [    3.218936] CPU: 1 PID: 4901 Comm: insmod Tainted: P OE 4.4.59+ #23739
     [    3.218981] Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 07/28/2017
     [    3.219936] task: ffff880070a3c480 ti: ffff880036b54000 task.ti: ffff880036b54000
     [    3.219981] RIP: 0010:[<ffffffff813f0c19>] [<ffffffff813f0c19>] syno_ahci_disk_led_enable+0x39/0x170
     [    3.220937] RSP: 0018:ffff880036b57b90 EFLAGS: 00010206
     [    3.220981] RAX: ffff880070162000 RBX: 0000000000306d63 RCX: 0000000000000000
     [    3.221025] RDX: 0000000000000001 RSI: 0000000000000000 RDI: ffff880070162500
     [    3.221070] RBP: ffff880036b57bc0 R08: 0000000300000001 R09: 0000000700000005
     [    3.221114] R10: 0000000300000001 R11: 0000000700000005 R12: ffff88007375b740
     [    3.221117] R13: ffff880070162000 R14: 0000000000000001 R15: ffff88007a776ac0
     [    3.221898] FS: 00007fa87ede2700(0000) GS:ffff88007fd00000(0000) knlGS:0000000000000000
     [    3.221942] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     [    3.221986] CR2: 0000000000306d73 CR3: 0000000072c4c000 CR4: 00000000001606f0
     [    3.222031] Stack:
     [    3.222075] 0000000000000286 ffffffff8180e040 ffff88007375b740 0000000000000000
     [    3.222937] ffffffffa02e15f0 ffff88007a776ac0 ffff880036b57cb8 ffffffffa02e516a
     [    3.222981] ffff88007d0da248 0000000000000000 0000000000000286 ffff880036b57c20
     [    3.223025] Call Trace:
     [    3.223070] [<ffffffffa02e15f0>] ? remove_card_detect_proc+0x20/0x20 [apollolake_synobios]
     [    3.223937] [<ffffffffa02e516a>] synobios_model_init+0x10a/0x650 [apollolake_synobios]
     [    3.223981] [<ffffffff812b63ff>] ? ida_pre_get+0x4f/0xe0
     [    3.224026] [<ffffffff8119da2d>] ? proc_alloc_inum+0x3d/0xc0
     [    3.224070] [<ffffffff8119dba9>] ? proc_register+0xb9/0x110
     [    3.224114] [<ffffffffa02e15f0>] ? remove_card_detect_proc+0x20/0x20 [apollolake_synobios]
     [    3.224937] [<ffffffff8119dd12>] ? proc_mkdir_data+0x62/0x90
     [    3.224981] [<ffffffff8119dd60>] ? proc_mkdir+0x10/0x20
     [    3.225026] [<ffffffffa02e1663>] synobios_init+0x73/0x470 [apollolake_synobios]
     [    3.225070] [<ffffffff810003b6>] do_one_initcall+0x86/0x1b0
     [    3.225114] [<ffffffff810df1d8>] do_init_module+0x56/0x1be
     [    3.225159] [<ffffffff810b554d>] load_module+0x1ded/0x2070
     [    3.225203] [<ffffffff810b28b0>] ? __symbol_put+0x50/0x50
     [    3.225247] [<ffffffff81112da5>] ? map_vm_area+0x35/0x50
     [    3.225292] [<ffffffff811140ec>] ? __vmalloc_node_range+0x13c/0x240
     [    3.225937] [<ffffffff810b5858>] ? SYSC_init_module+0x88/0x110
     [    3.225981] [<ffffffff810b58cf>] SYSC_init_module+0xff/0x110
     [    3.226026] [<ffffffff810b5969>] SyS_init_module+0x9/0x10
     [    3.226070] [<ffffffff81566b17>] entry_SYSCALL_64_fastpath+0x12/0x6a
     [    3.226114] Code: 41 55 41 54 53 48 8d 64 24 f8 e8 93 0d fb ff 48 85 c0 4c 8d 28 0f 84 39 01 00 00 48 8b 98 b0 07 00 00 48 85 db 0f 84 22 01 00 00 <48> 8b 7b 10 4c 8b a3 20 38 00 00 e8 07 5b 17 00 31 d2 31 ff 48
     [    3.228937] RIP [<ffffffff813f0c19>] syno_ahci_disk_led_enable+0x39/0x170
     [    3.228982] RSP <ffff880036b57b90>
     [    3.229026] CR2: 0000000000306d73
     [    3.229153] ---[ end trace 13b0883ac134e798 ]---
     Killed
     Starting /usr/syno/bin/synocfgen...
     /usr/syno/bin/synocfgen returns 255
     rmmod: can't unload 'apollolake_synobios': Device or resource busy
     [    3.234956] md: Autodetecting RAID arrays.
     [    3.240187] md: invalid raid superblock magic on sda3
     [    3.240227] md: sda3 does not have a valid v0.90 superblock, not importing!
     [    3.242073] md: invalid raid superblock magic on sdc3
     [    3.242113] md: sdc3 does not have a valid v0.90 superblock, not importing!
     [    3.242946] md: Scanned 6 and added 4 devices.
     [    3.242985] md: autorun ...
     [    3.243024] md: considering sda1 ...
     [    3.243063] md: adding sda1 ...
     [    3.243102] md: sda2 has different UUID to sda1
     [    3.243142] md: adding sdc1 ...
     [    3.243181] md: sdc2 has different UUID to sda1
     [    3.243220] md: created md0
     [    3.243259] md: bind<sdc1>
     [    3.243298] md: bind<sda1>
     [    3.243337] md: running: <sda1><sdc1>
     [    3.243377] md: kicking non-fresh sdc1 from array!
     [    3.243944] md: unbind<sdc1>
     [    3.247927] md: export_rdev(sdc1)
     [    3.249110] md/raid1:md0: active with 1 out of 12 mirrors
     [    3.249944] md0: detected capacity change from 0 to 2549940224
     [    3.249988] md: considering sda2 ...
     [    3.250027] md: adding sda2 ...
     [    3.250066] md: sdc2 has different UUID to sda2
     [    3.251287] md: created md1
     [    3.251546] md: bind<sda2>
     [    3.251802] md: running: <sda2>
     [    3.254118] md/raid1:md1: active with 1 out of 24 mirrors
     [    3.254684] md1: detected capacity change from 0 to 2147418112
     [    3.254954] md: considering sdc2 ...
     [    3.255062] md: adding sdc2 ...
     [    3.255101] md: md1 already running, cannot run sdc2
     [    3.255143] md: export_rdev(sdc2)
     [    3.255182] md: ... autorun DONE.
     Partition Version=8
     /sbin/e2fsck exists, checking /dev/md0...
     /sbin/e2fsck -pvf returns 0
     Mounting /dev/md0 /tmpRoot
     [    3.381474] EXT4-fs (md0): barriers disabled
     [    3.390031] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts:
     ------------upgrade
     Begin upgrade procedure
     No upgrade file exists
     End upgrade procedure
     ============upgrade
     Wait 2 seconds for synology manufactory device
     [    3.725047] clocksource: Switched to clocksource tsc
     Mon Apr 15 15:04:58 UTC 2019
     /dev/md0 /tmpRoot ext4 rw,relatime,data=ordered 0 0
     [    7.503481] random: nonblocking pool is initialized
     none /sys/kernel/debug debugfs rw,relatime 0 0
     sys /sys sysfs rw,relatime 0 0
     none /dev devtmpfs rw,relatime,size=1008764k,nr_inodes=252191,mode=755 0 0
     [    7.524078] VFS: opened file in mnt_point: (/dev), file: (/ttyS1)
     umount: can't umount /dev: Device or resource busy
     proc /proc proc rw,relatime 0 0
     [    7.529589] VFS: opened file in mnt_point: (/dev), file: (/ttyS1)
     linuxrc.syno executed successfully.
     Post init
     mount: mounting none on /dev failed: Device or resource busy
     [    7.555508] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: barrier=1
     [    7.784802] VFS: opened file in mnt_point: (/dev), file: (/ttyS1)
     umount: can't umount /dev: Device or resource busy
     [    7.861617] EXT4-fs (md0): re-mounted. Opts: (null)
     [    8.398568] sd 0:0:0:0: Attached scsi generic sg0 type 0
     [    8.398609] sd 1:0:0:0: Attached scsi generic sg1 type 0
     [    8.398646] sd 2:0:0:0: Attached scsi generic sg2 type 0
     [    8.495758] md1: detected capacity change from 2147418112 to 0
     [    8.496515] md: md1: set sda2 to auto_remap [0]
     [    8.496536] md: md1 stopped.
     [    8.496557] md: unbind<sda2>
     [    8.502495] md: export_rdev(sda2)
     [    8.666881] md: bind<sda2>
     [    8.666940] md: bind<sdc2>
     [    8.668243] md/raid1:md1: not clean -- starting background reconstruction
     [    8.668532] md/raid1:md1: active with 2 out of 24 mirrors
     [    8.676550] md1: detected capacity change from 0 to 2147418112
     [    8.676730] md: md1: current auto_remap = 0
     [    8.676898] md: resync of RAID array md1
     [    8.677152] md1: Failed to send sync event: (sync type: resync, finish: 0, interrupt: 0)
     [    8.677541] md: minimum _guaranteed_ speed: 1000 KB/sec/disk.
     [    8.677751] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
     [    8.678182] md: using 128k window, over a total of 2097088k.
     [    8.845603] Adding 2097084k swap on /dev/md1. Priority:-1 extents:1 across:2097084k
     [    9.147903] zram: Added device: zram0
     [    9.207625] zram0: detected capacity change from 0 to 1260388352
     [    9.209665] Adding 1230844k swap on /dev/zram0. Priority:1 extents:1 across:1230844k
     SS
     [    9.488735] NET: Registered protocol family 10
     [    9.493694] sit: IPv6 over IPv4 tunneling driver
     [   10.021693] init: syno-auth-check main process (6043) killed by TERM signal
     [   10.435755] AVX2 version of gcm_enc/dec engaged.
     [   10.435773] AES CTR mode by8 optimization enabled
     [   10.441756] fuse init (API version 7.23)
     [   10.446778] e1000e: Intel(R) PRO/1000 Network Driver - 3.4.0.2-NAPI
     [   10.446803] e1000e: Copyright(c) 1999 - 2017 Intel Corporation.
     [   10.449769] Intel(R) Gigabit Ethernet Network Driver - version 5.3.5.3
     [   10.449795] Copyright (c) 2007-2015 Intel Corporation.
     [   10.451908] Intel(R) 10GbE PCI Express Linux Network Driver - version 5.0.4
     [   10.452740] Copyright(c) 1999 - 2017 Intel Corporation.
     [   10.453866] i40e: Intel(R) 40-10 Gigabit Ethernet Connection Network Driver - version 2.3.6
     [   10.454751] i40e: Copyright(c) 2013 - 2017 Intel Corporation.
     [   10.455817] tn40xx low_mem_msg proc entry initialized
     [   10.455842] tn40xx low_mem_counter proc entry initialized
     [   10.455867] tn40xx debug_msg proc entry initialized
     [   10.455893] tn40xx: Tehuti Network Driver, 0.3.6.12.3
     [   10.461198] Compat-mlnx-ofed backport release: c22af88
     [   10.461753] Backport based on mlnx_ofed/mlnx-ofa_kernel-4.0.git c22af88
     [   10.461778] compat.git: mlnx_ofed/mlnx-ofa_kernel-4.0.git
     [   10.477762] bnx2x: QLogic 5771x/578xx 10/20-Gigabit Ethernet Driver bnx2x 1.712.30-0 (2014/02/10)
     [   10.488801] input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
     [   10.489740] ACPI: Power Button [PWRF]
     [   10.494749] Linux agpgart interface v0.103
     [   10.497207] agpgart-intel 0000:00:00.0: Intel 440BX Chipset
     [   10.497876] agpgart-intel 0000:00:00.0: AGP aperture is 256M @ 0x0
     [   10.505796] [drm] Initialized drm 1.1.0 20060810
     [   10.537958] Btrfs loaded, crc32c=crc32c-intel
     [   10.539776] exFAT: Version 1.2.9
     [   10.551145] jme: JMicron JMC2XX ethernet driver version 1.0.8
     [   10.554430] sky2: driver version 1.30
     [   10.559786] QLogic 1/10 GbE Converged/Intelligent Ethernet Driver v5.3.63
     [   10.562522] QLogic/NetXen Network Driver v4.0.82
     [   10.564209] Solarflare NET driver v4.0
     [   10.566789] e1000: Intel(R) PRO/1000 Network Driver - version 7.3.21-k8-NAPI
     [   10.566826] e1000: Copyright (c) 1999-2006 Intel Corporation.
     [   10.568258] pcnet32: pcnet32.c:v1.35 21.Apr.2008 tsbogend@alpha.franken.de
     [   10.570188] VMware vmxnet3 virtual NIC driver - version 1.4.5.0-k-NAPI
     [   10.570792] vmxnet3 0000:03:00.0: # of Tx queues : 2, # of Rx queues : 2
     [   10.572074] vmxnet3 0000:03:00.0 eth0: NIC Link is Up 10000 Mbps
     [   10.577335] cnic: QLogic cnicDriver v2.5.22 (July 20, 2015)
     [   10.591508] usbcore: registered new interface driver ax88179_178a
     [   10.596086] usbcore: registered new interface driver asix
     [   10.598361] Atheros(R) L2 Ethernet Driver - version 2.2.3
     [   10.598842] Copyright (c) 2007 Atheros Corporation.
     [   10.602784] bna: QLogic BR-series 10G Ethernet driver - version: 3.2.25.1
     [   10.604273] usbcore: registered new interface driver cx82310_eth
     [   10.612370] e100: Intel(R) PRO/100 Network Driver, 3.5.24-k2-NAPI
     [   10.612771] e100: Copyright(c) 1999-2006 Intel Corporation
     [   10.614175] enic: Cisco VIC Ethernet NIC Driver, ver 2.3.0.20
     [   10.618850] usbcore: registered new interface driver MOSCHIP usb-ethernet driver
     [   10.626113] pegasus: v0.9.3 (2013/04/25), Pegasus/Pegasus II USB Ethernet driver
     [   10.626784] usbcore: registered new interface driver pegasus
     [   10.628121] usbcore: registered new interface driver plusb
     [   10.630172] usbcore: registered new interface driver rtl8150
     [   10.634832] sis900.c: v1.08.10 Apr. 2 2006
     [   10.637167] via_rhine: v1.10-LK1.5.1 2010-10-09 Written by Donald Becker
     [   10.641351] usbcore: registered new interface driver r8152
     [   10.645195] vxge: Copyright(c) 2002-2010 Exar Corp.
     [   10.645773] vxge: Driver version: 2.5.3.22640-k
     [   12.108738] systemd-udevd[7169]: starting version 204
     [   26.245853] md: md1: resync done.
     [   26.246916] md1: Failed to send sync event: (sync type: resync, finish: 1, interrupt: 0)
     [   26.248022] md: md1: current auto_remap = 0
     [   26.258551] RAID1 conf printout:
     [   26.259003] --- wd:2 rd:24
     [   26.259390] disk 0, wo:0, o:1, dev:sda2
     [   26.259951] disk 1, wo:0, o:1, dev:sdc2
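     A side note on the "2 out of 3 hunks FAILED -- saving rejects to file etc/synoinfo.conf.rej" lines near the top of that log: `patch` keeps the hunks it could not apply in the `.rej` file, so you can read exactly which synoinfo.conf settings the loader failed to change. A minimal sketch of finding and reading such a file (demonstrated on a scratch directory with a made-up hunk; on the real system the file lives under the extracted ramdisk root):

     ```shell
     # Scratch tree standing in for the ramdisk root; the hunk content is illustrative.
     tmp=$(mktemp -d)
     mkdir -p "$tmp/etc"
     printf '@@ -266,1 +266,1 @@\n-maxdisks="4"\n+maxdisks="16"\n' > "$tmp/etc/synoinfo.conf.rej"

     # Locate any reject files left behind by patch ...
     find "$tmp" -name '*.rej'

     # ... and inspect the hunks that did not apply.
     cat "$tmp/etc/synoinfo.conf.rej"
     ```

     The failed hunks themselves don't necessarily cause the crash (the oops is in syno_ahci_disk_led_enable), but they tell you which settings DSM is running without.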
  3. My ESXi-based Xpenology system (DSM 6.2-23739 1.03a2) is now showing Disk1 as 'Crashed'. Disk1 is actually a vmdk file hosted on ESXi which seems to be healthy. Clicking the 'Health Info' button returns 'Access Error. An Error occurred when accessing this drive...' Is there any way to restore access to the disk in DSM?
  5. Hello again IG-88. This has been working fine for me with four disks attached to the first mini-SAS port on the controller. However, when I add a disk to the second mini-SAS port, it seems to occupy the same HDD slot in DSM as one of the four on the first port, i.e. DSM can't see all 8 ports as individual drives. My card is a Highpoint 2722 SGL with 2x internal mini-SAS ports for a maximum of 8 SATA ports. Looking in my logs, I see the following:

     [   10.448356] scsi host4: mvsas
     [   10.449301] sas: phy-4:4 added to port-4:0, phy_mask:0x1 ( 400000000000000)
     [   10.449326] sas: phy-4:5 added to port-4:1, phy_mask:0x2 ( 500000000000000)
     [   10.449345] sas: phy-4:6 added to port-4:2, phy_mask:0x4 ( 600000000000000)
     [   10.449366] sas: phy-4:7 added to port-4:3, phy_mask:0x8 ( 700000000000000)
     [   10.449382] sas: DOING DISCOVERY on port 0, pid:49
     [   10.449384] sas: DONE DISCOVERY on port 0, pid:49, result:0
     [   10.449386] sas: DOING DISCOVERY on port 1, pid:49
     [   10.449387] sas: DONE DISCOVERY on port 1, pid:49, result:0
     [   10.449393] sas: DOING DISCOVERY on port 2, pid:49
     [   10.449394] sas: DONE DISCOVERY on port 2, pid:49, result:0
     [   10.449399] sas: DOING DISCOVERY on port 3, pid:49
     [   10.449400] sas: DONE DISCOVERY on port 3, pid:49, result:0
     [   10.449414] sas: Enter sas_scsi_recover_host busy: 0 failed: 0
     [   10.449750] sas: ata0: end_device-4:0: dev error handler

     I don't fully understand this, but does it mean ports 4-7 are being remapped to ports 0-3, thereby causing the problem I've outlined above? Is there a way to make DSM see all 8 drives?
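     If the collision really is a slot-mapping issue, Jun's loader exposes two kernel arguments in the boot image's grub.cfg that are commonly used to separate controllers: SataPortMap (one digit per controller, in lspci order, giving the number of ports to map for that controller) and DiskIdxMap (two hex digits per controller, giving the first DSM slot that controller should occupy). A sketch of the kind of grub.cfg fragment involved; the values below are purely illustrative and must be adapted to the actual controller order on the system:

     ```shell
     # grub.cfg fragment (sketch, values illustrative, not a tested config):
     # first controller gets 4 ports starting at DSM slot 0x00,
     # second controller gets 4 ports starting at DSM slot 0x04,
     # so the second bank cannot land on the same slots as the first.
     set sata_args='DiskIdxMap=0004 SataPortMap=44'
     ```

     Whether SataPortMap applies to a SAS HBA driven by mvsas at all is a known point of confusion (it primarily targets AHCI/SATA controllers), so treat this as something to experiment with rather than a confirmed fix.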
  6. Is a custom ramdisk for DSM 6.2 on the 3617xs or 3615xs planned for the future, or is the intention to support only the 918+ loader?
  7. That seems to have fixed it. My drives are now visible on the RocketRAID card.
  8. I checked the log and I can see the mvsas driver has loaded. I'm not really sure what most of this means, but I'm seeing four instances of this type of error (I assume one for each of my four attached disks):

     [Fri Jul 27 12:30:46 2018] sas: DOING DISCOVERY on port 0, pid:6
     [Fri Jul 27 12:30:46 2018] sas: ATA device seen but CONFIG_SCSI_SAS_ATA=N so cannot attach
     [Fri Jul 27 12:30:46 2018] sas: unhandled device 5

     So it looks like something is preventing the drives from attaching. Thanks again for your help.
  9. Thanks! I now have all four NICs recognised and working. lspci -k | grep 'Kernel driver' reveals that mvsas is in use, but I still can't see my drives. Is this likely to be something to do with the sataportmap setting? Seeing as I'm running on ESXi, I'm not sure how to configure it or how to count the SATA controllers. I have two virtual disks attached (on SATA 0:0 and SATA 0:1). There are currently four drives attached to the RocketRAID card, but none are visible to DSM. Any suggestions?
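     On counting controllers for SataPortMap: the usual approach is to list PCI devices and count the SATA-class controllers in order, since SataPortMap takes one digit per controller in that order. A minimal sketch, run here against a captured sample listing (the sample text below is illustrative; on the actual VM you would pipe real `lspci` output instead):

     ```shell
     # Sample lspci output standing in for the real listing on the VM.
     lspci_out='00:07.1 IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE
     00:11.0 SATA controller: VMware SATA AHCI controller
     03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2308 Fusion-MPT SAS-2'

     # Count the SATA-class controllers; each one contributes a digit to SataPortMap.
     echo "$lspci_out" | grep -c 'SATA controller'
     ```

     In this sample there is one virtual AHCI controller, and the passed-through HBA shows up as a separate SAS-class device handled by its own driver, which is one reason SataPortMap tuning alone may not make HBA-attached drives appear.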
  10. Hi, after a lot of Googling how to fix this, I ended up here. I couldn't find synoconfig.conf, but I did find synoinfo.conf, which contained the same info. Unfortunately, after editing maxlanport="2" to maxlanport="8" I was still left with only two NICs visible from within DSM. Can you offer any help? Am I editing the correct file? I'm running 1.03a2 on ESXi 6.7. It seems to be working well apart from not seeing my passed-through RocketRAID 2720SGL and the NIC issue above. Thanks.
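     One detail worth checking when a synoinfo.conf edit doesn't stick: DSM keeps two copies, /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, and the /etc copy can be regenerated from the defaults on reboot or upgrade, so an edit to only one copy may be silently reverted. A minimal sketch of the edit itself, demonstrated on a scratch copy rather than the live files (apply the same change to both paths on a real box):

     ```shell
     # Scratch copy standing in for /etc/synoinfo.conf (and /etc.defaults/synoinfo.conf).
     tmp=$(mktemp -d)
     printf 'maxlanport="2"\n' > "$tmp/synoinfo.conf"

     # Raise the NIC limit in place.
     sed -i 's/^maxlanport="2"/maxlanport="8"/' "$tmp/synoinfo.conf"

     cat "$tmp/synoinfo.conf"
     ```

     This only lifts the configured limit; the extra NICs still need a driver that actually binds to them.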