XPEnology Community

RedPill - the new loader for 6.2.4 - Discussion


Recommended Posts

1 hour ago, ThorGroup said:

A quick update (we will write more tomorrow): the native mpt2sas driver works, but it's present only on the 3615xs platform. To activate it and make it work properly without any hacks, just add this small extension: https://github.com/RedPill-TTG/redpill-sas-activator

 

We also updated the kernel module to rewrite SAS => SATA ports so it should work with any off-the-shelf mpt2sas or mpt3sas driver.

 

Running "load-builtin-sas.sh" for thethorgroup.sas-activator->on_boot
Loading SAS controller(s) driver(s)
Loading LSI SAS 6Gb driver from /lib/modules/mpt2sas.ko
[    2.111757] mpt2sas version 20.00.00.00 loaded
[    2.117289] mpt2sas0: 32 BIT PCI BUS DMA ADDRESSING SUPPORTED, total mem (2022888 kB)
[    2.191254] mpt2sas0: MSI-X vectors supported: 1, no of cores: 2, max_msix_vectors: -1
[    2.193247] mpt2sas 0000:0b:00.0: irq 73 for MSI/MSI-X
[    2.195245] mpt2sas0-msix0: PCI-MSI-X enabled: IRQ 73
[    2.195719] mpt2sas0: iomem(0x00000000fd3f0000), mapped(0xffffc90008180000), size(65536)
[    2.197125] mpt2sas0: ioport(0x0000000000005000), size(256)
[    2.308337] mpt2sas0: Allocated physical memory: size(4964 kB)
[    2.309771] mpt2sas0: Current Controller Queue Depth(3305), Max Controller Queue Depth(3432)
[    2.310989] mpt2sas0: Scatter Gather Elements per IO(128)
[    2.370223] mpt2sas0: LSISAS2008: FWVersion(16.00.00.00), ChipRevision(0x03), BiosVersion(00.00.00.00)
[    2.372036] mpt2sas0: Dell 6Gbps SAS HBA: Vendor(0x1000), Device(0x0072), SSVID(0x1028), SSDID(0x1F1C)
[    2.373611] mpt2sas0: Protocol=(Initiator,Target), Capabilities=(TLR,EEDP,Snapshot Buffer,Diag Trace Buffer,Task Set Full,NCQ)
[    2.376558] mpt2sas0: sending port enable !!
[    2.384638] mpt2sas0: host_add: handle(0x0001), sas_addr(0x590b11c027f7ff00), phys(8)
[    2.387142] mpt2sas0: detecting: handle(0x0009), sas_address(0x4433221100000000), phy(0)
[    2.388790] mpt2sas0: REPORT_LUNS: handle(0x0009), retries(0)
[    2.389874] mpt2sas0: TEST_UNIT_READY: handle(0x0009), lun(0)
[    2.396730] scsi 1:0:0:0: SATA: handle(0x0009), sas_addr(0x4433221100000000), phy(0), device_name(0x50014ee26320545e)
[    2.408946] mpt2sas0: detecting: handle(0x000b), sas_address(0x4433221101000000), phy(1)
[    2.410586] mpt2sas0: REPORT_LUNS: handle(0x000b), retries(0)
[    2.411748] mpt2sas0: TEST_UNIT_READY: handle(0x000b), lun(0)
[    2.422008] scsi 1:0:1:0: SATA: handle(0x000b), sas_addr(0x4433221101000000), phy(1), device_name(0x50014ee26320252f)
[   17.648369] mpt2sas0: detecting: handle(0x000a), sas_address(0x4433221102000000), phy(2)
[   17.651123] mpt2sas0: REPORT_LUNS: handle(0x000a), retries(0)
[   17.652856] mpt2sas0: TEST_UNIT_READY: handle(0x000a), lun(0)
[   17.662223] scsi 1:0:2:0: SATA: handle(0x000a), sas_addr(0x4433221102000000), phy(2), device_name(0x50014ee20ef68994)
[   32.723942] mpt2sas0: detecting: handle(0x000c), sas_address(0x4433221103000000), phy(3)
[   32.726898] mpt2sas0: REPORT_LUNS: handle(0x000c), retries(0)
[   32.728908] mpt2sas0: TEST_UNIT_READY: handle(0x000c), lun(0)
[   32.743439] scsi 1:0:3:0: SATA: handle(0x000c), sas_addr(0x4433221103000000), phy(3), device_name(0x50014ee20dcd759f)
[   47.822440] mpt2sas0: detecting: handle(0x000d), sas_address(0x4433221104000000), phy(4)
[   47.825418] mpt2sas0: REPORT_LUNS: handle(0x000d), retries(0)
[   47.833947] mpt2sas0: TEST_UNIT_READY: handle(0x000d), lun(0)
[   47.847305] scsi 1:0:4:0: SATA: handle(0x000d), sas_addr(0x4433221104000000), phy(4), device_name(0x50014ee2647a1448)
[   62.933161] mpt2sas0: detecting: handle(0x000e), sas_address(0x4433221105000000), phy(5)
[   62.936117] mpt2sas0: REPORT_LUNS: handle(0x000e), retries(0)
[   62.938119] mpt2sas0: TEST_UNIT_READY: handle(0x000e), lun(0)
[   62.947998] scsi 1:0:5:0: SATA: handle(0x000e), sas_addr(0x4433221105000000), phy(5), device_name(0x5000c5008c5e912d)
[   78.018894] mpt2sas0: detecting: handle(0x000f), sas_address(0x4433221106000000), phy(6)
[   78.028297] mpt2sas0: REPORT_LUNS: handle(0x000f), retries(0)
[   78.030129] mpt2sas0: TEST_UNIT_READY: handle(0x000f), lun(0)
[   78.039803] scsi 1:0:6:0: SATA: handle(0x000f), sas_addr(0x4433221106000000), phy(6), device_name(0x5002538e49b3b3ca)
[   93.110287] mpt2sas0: port enable: SUCCESS

 

i can confirm that it now works with the stock v20.0 mpt2sas.ko from DSM v7.0.1, no need for the custom .ko from @pocopico: all drives (7 in my case) show up as /dev/sd* and DSM wants to install itself (did not try to proceed but i guess it works).

the problem still persists that if the loader is on sata0:0 it reserves /dev/sda, so the first disk connected to the LSI shows up as /dev/sdb.
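For anyone hitting the same /dev/sda shift: as commonly described for Jun's loader and carried into the configs later in this thread, the DiskIdxMap cmdline value is read as one two-hex-digit byte per SATA controller, giving that controller's starting disk index, so pushing the loader's controller to a high index frees sda for the LSI. A small POSIX-shell sketch of how the value decodes (the map value is just the common example from the configs below, not taken from my own box):

```shell
map=0C00   # two hex digits per controller: byte 0 -> controller 0, byte 1 -> controller 1

# decode each byte; printf understands the 0x prefix, so this is plain POSIX sh
c0=$(printf '%d' "0x$(echo "$map" | cut -c1-2)")   # the loader's SATA controller
c1=$(printf '%d' "0x$(echo "$map" | cut -c3-4)")   # the LSI HBA

# index 12 -> the loader lands on /dev/sdm, index 0 -> first LSI disk gets /dev/sda
echo "controller0 starts at index $c0, controller1 at index $c1"
```

With this decoding, "0C00" parks the loader well out of the way and restores Jun's-loader-style naming for the HBA disks.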

Edited by pigr8

another minor issue with the sas-activator and the stock v20.0 driver is that the drivers are slow to load on bootup; for each disk the serial console spams this (example for the first port detection, seen as /dev/sdb on redpill but as /dev/sda on jun's loader):

 

[    2.339686] scsi 1:0:1:0: Direct-Access     WDC      WD30EFRX-68EUZN0         0A82 PQ: 0 ANSI: 6
[    2.341594] sd 1:0:0:0: [sdb] Write Protect is off
[    2.341789] scsi 1:0:1:0: SATA: handle(0x000b), sas_addr(0x4433221101000000), phy(1), device_name(0x50014ee26320252f)
[    2.341791] scsi 1:0:1:0: SATA: enclosure_logical_id(0x590b11c027f7ff00), slot(2)
[    2.341866] scsi 1:0:1:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
[    2.342130] scsi 1:0:1:0: serial_number(     WD-WCC4N0LDREV8)
[    2.342133] scsi 1:0:1:0: qdepth(32), tagged(1), simple(0), ordered(0), scsi_level(7), cmd_que(1)
[    2.347292] want_idx 1 index 2. delay and reget
[    2.351539] sd 1:0:0:0: [sdb] Mode Sense: 7f 00 10 08
[    2.352633] sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
[    2.357909]  sdb: sdb1 sdb2 sdb5
[    2.365253] sd 1:0:0:0: [sdb] Attached SCSI disk
[    3.346388] want_idx 1 index 2
[    3.347047] want_idx 1 index 2. delay and reget
[    4.349242] want_idx 1 index 2
[    4.350358] want_idx 1 index 2. delay and reget
[    5.352080] want_idx 1 index 2
[    5.352188] want_idx 1 index 2. delay and reget
[    6.354932] want_idx 1 index 2
[    6.356676] want_idx 1 index 2. delay and reget
[    7.359783] want_idx 1 index 2
[    7.359895] want_idx 1 index 2. delay and reget
[    8.361635] want_idx 1 index 2
[    8.361747] want_idx 1 index 2. delay and reget
[    9.364484] want_idx 1 index 2
[    9.366152] want_idx 1 index 2. delay and reget
[   10.369338] want_idx 1 index 2
[   10.369958] want_idx 1 index 2. delay and reget
[   11.371201] want_idx 1 index 2
[   11.371841] want_idx 1 index 2. delay and reget
[   12.374042] want_idx 1 index 2
[   12.374712] want_idx 1 index 2. delay and reget
[   13.376895] want_idx 1 index 2
[   13.377569] want_idx 1 index 2. delay and reget
[   14.537123] want_idx 1 index 2
[   14.538402] want_idx 1 index 2. delay and reget
[   15.540697] want_idx 1 index 2
[   15.541951] want_idx 1 index 2. delay and reget
[   16.544593] want_idx 1 index 2
[   16.545475] want_idx 1 index 2. delay and reget
[   17.547442] want_idx 1 index 2

 

edit 1: with @pocopico's .ko it does not do that.

 

edit 2: another problem could be that, with either the stock mpt2sas.ko or the compiled one, disks larger than 2 TB won't show up as they should.

 

- disk 1 on lsi port 1, fdisk (/dev/sda with jun's loader)

 

Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 337DF73B-5685-46E1-9DA5-2960BA2BA3C8

Device       Start        End    Sectors  Size Type
/dev/sda1     2048    4982527    4980480  2.4G Linux RAID
/dev/sda2  4982528    9176831    4194304    2G Linux RAID
/dev/sda5  9453280 5860326239 5850872960  2.7T Linux RAID

 

- same disk 1 on lsi port 1, fdisk (/dev/sdb on redpill)

 

Disk /dev/sdb: 2048 GB, 2199023255040 bytes, 4294967295 sectors
267349 cylinders, 255 heads, 63 sectors/track
Units: sectors of 1 * 512 = 512 bytes

Device  Boot StartCHS    EndCHS        StartLBA     EndLBA    Sectors  Size Id Type
/dev/sdb1    0,0,1       1023,254,63          1 4294967295 4294967295 2047G ee EFI GPT
fdisk: device has more than 2^32 sectors, can't use all of them
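Worth noting: the sizes in the second listing match the 32-bit LBA ceiling exactly, which hints that the truncation may come from an MBR-only fdisk build (note the CHS geometry it prints) rather than from the mpt2sas data path itself. A quick arithmetic check, with the standard sysfs/blockdev cross-checks in comments (generic Linux paths, not verified on this particular box):

```shell
# Maximum a 32-bit LBA tool can address: (2^32 - 1) sectors of 512 bytes
cap_sectors=$((4294967296 - 1))
cap_bytes=$((cap_sectors * 512))
echo "$cap_sectors sectors = $cap_bytes bytes"
# -> 4294967295 sectors = 2199023255040 bytes, matching the listing above.

# On the live system, GPT-aware cross-checks would be:
#   cat /sys/block/sdb/size          # real sector count from the kernel
#   blockdev --getsize64 /dev/sdb    # real byte count
```

If those kernel-side numbers show the full 3 TB, the disk is fine and only the fdisk build is limited.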

 

Edited by pigr8

@ThorGroup Regarding Surveillance Station licences: there is an online registration step when you add a purchased licence inside the app.

As online Synology services require a real SN/MAC pair to work, you need real ones in order to buy/add a new camera licence.

If I'm not mistaken, you can even "unregister"/"re-register" purchased licences from an old NAS when you switch to a new one.

 

There is an offline procedure too, if I remember correctly.

https://www.synology.com/en-global/products/Device_License_Pack

 

https://kb.synology.com/en-global/Surveillance/tutorial/Can_I_install_or_delete_surveillance_device_licenses_offline

 

Edit: SS works with its default 2 licences with a generated SN on the current loaders.

I bet we would get the 8 default licences on DVA even with a generated SN.

The real question is about the AI features (similar to how HW transcoding on DS918+ does not work with a generated SN unless patched):

https://github.com/likeadoc/synocodectool-patch

Edited by Orphée

Is the Docker build failing for everyone or just me?

Quote

./redpill_tool_chain.sh build bromolow-6.2.4-25556
[+] Building 30.3s (3/3) FINISHED
 => [internal] load build definition from Dockerfile                     0.1s
 => => transferring dockerfile: 38B                                      0.0s
 => [internal] load .dockerignore                                        0.1s
 => => transferring context: 2B                                          0.0s
 => ERROR [internal] load metadata for docker.io/library/debian:8-slim  30.0s
------
 > [internal] load metadata for docker.io/library/debian:8-slim:
------
failed to solve with frontend dockerfile.v0: failed to create LLB definition: failed to do request: Head "https://registry-1.docker.io/v2/library/debian/manifests/8-slim": dial tcp 3.223.82.39:443: i/o timeout
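An i/o timeout reaching registry-1.docker.io usually points at DNS, firewall, or proxy trouble on the build host rather than at the toolchain itself, since the build dies before the Dockerfile even runs. If a registry mirror is reachable from your network, one workaround is to point the daemon at it; a sketch of /etc/docker/daemon.json (the mirror URL is a placeholder), after which you restart the Docker daemon and retry the build:

```json
{
  "registry-mirrors": ["https://mirror.example.com"]
}
```

A quicker first check is simply whether `docker pull debian:8-slim` works on its own; if that also times out, the problem is connectivity, not the redpill_tool_chain.sh script.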

 


1 hour ago, pigr8 said:

 


i can confirm that it now works with the stock v20.0 mpt2sas.ko from DSM v7.0.1, no need for the custom .ko from @pocopico: all drives (7 in my case) show up as /dev/sd* and DSM wants to install itself.

the problem still persists that if the loader is on sata0:0 it reserves /dev/sda, so the first disk connected to the LSI shows up as /dev/sdb.

please add a comment here if you can:

https://github.com/RedPill-TTG/redpill-lkm/issues/19


@ThorGroup, here are a few things regarding fan control with the PMU.

 

 

I've tested the following way to set the speed of the CPU fan on 6.2.3:

 

insmod hwmon-vid.ko
insmod nct6775.ko

Optionally, install sensors (lm-sensors) via ipkg.

 

Then the fan speed can be set like this:
 

#default value

echo 5 > /sys/devices/platform/nct6775.656/hwmon/hwmon1/pwm2_enable

 

Here are a few useful commands:

#find all pwm:

find /sys/devices -type f -name "pwm*"

#find beep

/sys/devices/platform/nct6775.656/hwmon/hwmon1/beep_enable

#enable/disable beep

echo 0 > /sys/devices/platform/nct6775.656/hwmon/hwmon1/beep_enable; sensors | grep beep

 

 

So, the vPMU could use this to manipulate the CPU fan speed.
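The sequence above can be sketched end-to-end. Since the sysfs tree only exists once nct6775 is loaded, this dry run recreates the two pwm files under a scratch directory so the logic can be exercised anywhere; on the real box you would run the two insmod calls and set base=/sys/devices/platform/nct6775.656 instead. The mode values follow the nct6775 driver convention (1 = manual, 5 = the chip's automatic SmartFan IV mode, matching the "default value" above):

```shell
# Dry-run stand-in for the driver's sysfs tree (replace with the real
# /sys/devices/platform/nct6775.656 path once the module is loaded)
base=$(mktemp -d)/nct6775.656
mkdir -p "$base/hwmon/hwmon1"
: > "$base/hwmon/hwmon1/pwm2_enable"
: > "$base/hwmon/hwmon1/pwm2"

# the hwmonN index can differ per boot, so resolve it instead of hardcoding
hw=$(ls -d "$base"/hwmon/hwmon*/ | head -n 1)

echo 1   > "${hw}pwm2_enable"   # 1 = take manual control of fan header 2
echo 128 > "${hw}pwm2"          # duty cycle 0-255, ~50%
echo 5   > "${hw}pwm2_enable"   # 5 = hand control back to the chip's auto mode

cat "${hw}pwm2"                 # -> 128
```

The vPMU would only need to perform the same two writes (mode, then duty) to steer the fan in software.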

 

But we need to resolve the following issues:

- build nct6775 for DSM 7.0 (I know that this module is just one of many for the different I/O chips found on motherboards); let's start with it, then add more later

- build pcspeaker module

- implement the vPMU to use software methods for manipulating the fan speed

 

@pocopico, could you please build the nct6775 module for 918 DSM 7.0? Thanks.

 

I will prepare the RP extension for that module.

 

Thanks in advance.

 

 

 

 


3 hours ago, Schyzo said:

Hmm, you're right, sorry for that.

The server boots from the USB and displays exactly that.

But it won't connect to the router over wired Ethernet; my router doesn't see it.

Any issue with the network card?

Thanks a lot for your replies.

Afaik only Intel network cards are supported ootb with the DS3615xs software. Synology removed the other network drivers some time ago (DSM 6.1.x or so). So one solution is to use an Intel card (and change the MAC address). The other solution is a driver extension for your onboard Broadcom network card (if one is already working?).


1 hour ago, paro44 said:

Afaik only Intel network cards are supported ootb with the DS3615xs software. Synology removed the other network drivers some time ago (DSM 6.1.x or so). So one solution is to use an Intel card (and change the MAC address). The other solution is a driver extension for your onboard Broadcom network card (if one is already working?).

I tried to add modules from @pocopico (bnx2x and bnx2) with OSFMount after the image was created, but no success, no network 😕

Do we have to include the modules (*.ko) BEFORE the image is created?


Hello @ThorGroup

I'm trying to run redpill for 6.2.3-25556 on proxmox.

Thank you for your work so far; the virtio network drivers are working perfectly and I've been able to connect to DSM. :D

 

I was wondering whether using the virtio block / virtio SCSI drivers to speed up disk access is supported yet.

I suppose SATA emulation would be fine for normal HDDs, but for running an NVMe SSD cache it's going to be a bottleneck.

 

I know it's definitely not a must-be-done-now kind of issue for the beta release, but would it be possible to add support for these kinds of virtual drives?

There are two different virtio implementations that can be used:

  •  Virtio Block (1 pcie address per device)
  •  Virtio SCSI (1 pcie address per controller, many devices per controller)

The virtio SCSI one seems the most interesting since it claims "Standard device naming". Documentation: https://www.ovirt.org/develop/release-management/features/storage/virtio-scsi.html
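For testing, switching a disk to virtio-scsi on the Proxmox side is a small change in the VM definition. A sketch of the relevant lines in /etc/pve/qemu-server/<vmid>.conf (the storage name and volume are placeholders for your setup; the same can be done from the GUI under Hardware):

```
scsihw: virtio-scsi-pci
scsi0: local-lvm:vm-100-disk-1,size=32G
```

The Virtio Block flavour would instead use a virtio0: line, one PCIe address per disk, matching the two options listed above.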

 

If you think this is better reported in the virtio git issues, I'll move it there.

 

Edit 1:

Forgot to add:

They are not detected at all during the install wizard; I'm going to do more testing to confirm whether they are also undetected in pool creation.

 

Edit 2:

Well, I was completely wrong: DSM seems to detect them both at setup time and later.

It even updates the disks listing in realtime matching what i add or remove from proxmox.

I'm going to go touch grass now.

Earlier it wasn't doing either, so I guess I had the wrong chipset? (currently i440fx).

Edit 3:
The chipset wasn't it.
It's working as well with Q35 now, and the disk map is shifted as others pointed out.

I guess the problem solved itself, or I was doing something wrong.

I'm pretty sure, though, that earlier I ran the VM multiple times with at least one disk attached via virtio SCSI, and the setup page complained that there were no disks available (I'm currently testing with 32 GB virtual drives).

In any case, thanks again for your work. :D

Edited by luckcolors

quick question: per the loader documentation, one can remove vid and pid when using SATA-based boot. Unfortunately, when I do that, my build fails with 'unbound parameter extra_cmdline('vid')'.

 

For now, I am still leaving vid/pid in there for my ESXi, but I'm not sure why the build fails. Any thoughts?


I was able to successfully compile on ESXi v7. I am using an LSI SAS controller.

 

If I use the sas_activator from thorgroup, I end up with no disks found.

 

With @pocopico's extensions, it finds the disks and I was able to install the pat. It goes to 100% completion and ends with 'Restarting Synology' and a countdown timer of 10 minutes.

 

But after that, when I go to the IP address (ip:5000), it still presents 'Not installed' and prompts me to go through the pat installation again. I have attempted this a few times and it always ends up this way; it's almost like the install is not recognized. I tried with the latest LKM (with the SAS->SATA mapping) as well. Nothing obvious in the logs either.

 

Any thoughts or guidance from the experts will greatly help.


19 minutes ago, urundai said:

I was able to successfully compile on ESXi v7. I am using an LSI SAS controller. If I use the sas_activator from thorgroup, I end up with no disks found. With @pocopico's extensions, it finds the disks and I was able to install the pat, but afterwards it still presents 'Not installed' and prompts me to go through the pat installation again.

 

try adding neither the sas-activator nor the pocopico extension; add this to your user config, rebuild redpill, and report the attempt :)

 

  "synoinfo": {
    "supportsas": "yes"
  },

 

 


1 hour ago, pigr8 said:

 

try adding neither the sas-activator nor the pocopico extension; add this to your user config, rebuild redpill, and report the attempt :)

 


  "synoinfo": {
    "supportsas": "yes"
  },

 

 

Thank you @pigr8. That results in a 'No Drives Detected in DS3615xs' message. Here is my user config:

 

{
  "extra_cmdline": {
    "pid": "0x6300",
    "vid": "0x13fe",
    "sn": "xxx",
    "mac1": "324D220C3CD1",
    "sata_uid" : "1",
    "sata_pcislot" : "5",
    "synoboot_satadom" : "1",
    "DiskIdxMap" : "0C00",
    "SataPortMap" : "18",
    "SasIdxMap" : "0"
  },
  "synoinfo": {
     "supportsas": "yes"
  },
  "ramdisk_copy": {}
}

 

Let me know if I messed up any of the bromolow config :(


@ThorGroup 

Thanks for your work; the problem with Docker on my side (only Pi-hole running for the moment) is fixed.

Also, Universal Search is working again!

Currently on 42218, I have fixed my issue with VMM: I made a mistake in my config.json with the MAC addresses; I put ":" in them but it was not necessary.

Thanks @WiteWulf

 

 

Edited by Buny74

1 hour ago, urundai said:

{
  "extra_cmdline": {
    "pid": "0x6300",
    "vid": "0x13fe",
    "sn": "xxx",
    "mac1": "324D220C3CD1",
    "DiskIdxMap" : "0C00",
    "SataPortMap" : "18",
    "SasIdxMap" : "0"
  },
  "synoinfo": {
     "supportsas": "yes"
  },
  "ramdisk_copy": {}
}

 

Let me know if I messed any of the bromolow config

 

Remove this:

"sata_uid" : "1", "sata_pcislot" : "5", "synoboot_satadom" : "1",

 

Thorgroup said more than once that these values should not be set.

satadom is part of the boot menu; you have to select SATA instead of USB at boot, no need to add it to the extra_cmdline parameters.

 

On 10/1/2021 at 7:31 AM, ThorGroup said:

Third: do NOT add sata_uid or sata_pcislot - these are Jun's custom parameters which were never implemented but were left there. The difference is Jun's loader was removing them from cmdline so that DSM cannot see them - RP does not as we never used them. Making them visible to DSM shows the DSM clearly it's running on non-official hardware.


Thank you @Orphée. I have removed those lines. The problem remains the same.

 

Both with the sas_activator and without it, I continue to get 'No Drives Detected'. With @pocopico extensions, I am able to get to the install, but after completion it continues to prompt 'Not Installed' with directions to install the pat again.

 

I have attached my VM config; I don't think it's any different from what I had for Jun's loader before.

 

Screen Shot 2021-10-03 at 2.40.32 PM.png


@urundai You should remove USB 3.1 and leave only USB 2.0.

You set 4 CPUs? Did you set 4 cores? If not, do it; don't use more than 1 socket.

 

Why are there a USB Seagate device and Security devices?

Change SataPortMap to 4 and check how it is.

 

From the serial console: fdisk -l

 

How to access the serial console with ESXi:

 

 

Edited by Orphée

22 hours ago, Schyzo said:

Hmm, you're right, sorry for that.

The server boots from the USB and displays exactly that.

No problem, it's just when reporting problems or asking for help it's in everyone's interest to be as precise as possible in the initial report.

 

"Booting" or "bootstrapping" (to use the old fashioned term) usually refers to the very first stage of loading code from a storage device once the BIOS has loaded. If you get as far as the "Booting the kernel." message your system has booted. It may not have successfully finished loading an operating system, but it's *booted* ;) 

 

What a lot of people are incorrectly assuming (and this bears repeating) is that when they see the "Booting the kernel." message and nothing else follows, the system has failed to boot. The Synology kernel has no video drivers or framebuffer in it (as "real" Synology devices don't have a video card or display connected), so all console output is sent to the serial console (virtual or physical) once the kernel is loaded. You will not see anything else on screen after this point.

 

If you have connectivity issues after this point (you don't see the system requesting a DHCP lease from your network or the Synology finder tool fails to locate your new system) the chances are that the NIC you're using isn't supported by the default image build. You either need to load the relevant drivers or get a NIC that's supported by the default build.

 

(I appreciate you've already figured out this last bit, @Schyzo, but was posting this for the benefit of others who may be in this situation as it pops up regularly on here)


10 hours ago, urundai said:

Thank you @Orphée. I have removed those lines. The problem remains the same: both with the sas_activator and without it, I continue to get 'No Drives Detected'; with @pocopico extensions I am able to install, but after completion it continues to prompt 'Not Installed'.

 

does dmesg | grep sas from the terminal show the HBA loaded? If yes, as @Orphée said, check with fdisk -l whether the disks show up.

Edited by pigr8

On 9/23/2021 at 2:26 AM, haydibe said:

@ThorGroup thank you for the update! And indeed, I spotted and incorporated the new make targets into the new toolchain builder version :)

 

Taken from the README.md:

  Supports the make target to specify the redpill.ko build configuration. Set <platform version>.redpill_lkm_make_target to `dev-v6`, `dev-v7`, `test-v6`, `test-v7`, `prod-v6` or `prod-v7`.
  Make sure to use the -v6 ones on DSM6 build and -v7 on DSM7 build. By default the targets `dev-v6` and `dev-v7` are used.

 

I snatched the following details from the redpill-lkm Makefile:

  - dev: all symbols included, debug messages included
  - test: fully stripped with only warning & above (no debugs or info)
  - prod: fully stripped with no debug messages


 See README.md for usage.

redpill-tool-chain_x86_64_v0.10.zip 9.38 kB · 488 downloads

 

Could you set up a GitHub repo for updating your script? Thx!


1 hour ago, WiteWulf said:

What a lot of people are incorrectly assuming (and this bears repeating) is that when they see the "Booting the kernel." message and nothing else follows, the system has failed to boot. The Synology kernel has no video drivers or framebuffer in it (as "real" Synology devices don't have a video card or display connected), so all console output is sent to the serial console (virtual or physical) once the kernel is loaded. You will not see anything else on screen after this point.

 

 

Hi WiteWulf, just a quick question: is it because of a change in DSM 6/7 that we can no longer see console boot messages like we did with the DSM5 xpenology loader? I'm not urging anybody to bring back this nostalgic function, just curious; it also seems to make debugging easier.


This topic is now closed to further replies.