XPEnology Community

RedPill - the new loader for 6.2.4 - Discussion


Recommended Posts

Hi @ThorGroup ! Many thanks for your contribution. 
I was trying to build a new bare-metal setup on DSM 7.0 and then migrate data from my existing DSM 6.2.3 install (using Jun's loader).
 
Hardware brief:
- Intel Core i7-6700K (Skylake) with ASUS Z170 motherboard
- 2 SATA SSDs connected to onboard SATA ports
- LSI 9300-8i HBA card (requires mpt3sas driver) with 2 SATA, 2 NLSAS drives
- Intel X710-DA2 10GbE NIC (requires i40e driver)
- RedPill-Loader image burned to USB flash disk using Rufus
 
After trying both the DS918+ and DS3615xs RedPill-LKM images, I got stuck:
- With the DS918+ image, the DSM installer only recognizes the SSDs on the onboard SATA ports
- With the DS3615xs image, the installer cannot recognize any disks at all (maybe the kernel is too old?)
- The onboard Intel i219-V NIC works well (I can reach the DSM installer web page), but I haven't confirmed yet whether the X710 works
 
It seems the image just lacks some recent drivers, and luckily I can confirm which driver each piece of hardware uses.
So is there any way to inject / include / copy driver files (probably *.ko) into the RP image?
I think you misread ThorGroup's latest post. SAS support is not there yet; it's one of the things on the list to be addressed before they release the first beta. No SAS, no LSI support. Official support for adding unsupported hardware is also on the list of things to be solved.

Sent from my GM1913 via Tapatalk


7 hours ago, Polanskiman said:

@Orphée As much as I agree that the user should make some efforts to read and understand, I advise you to tone it down and to read the rules. We are not here to bash people. You could have conveyed the same message without the aggression and foul language. I have edited your post accordingly.

Hi!

I expected to be censored sooner or later.

I just hope he was able to read the whole comment before the edit. It loses some flavour after editing.

Those kinds of kids deserve a slap to reorder the few neurons they have.

But I understand, don't worry!

Have a nice day!

 

Edited by Orphée
  • Like 2

15 hours ago, ThorGroup said:

The current ramdisk, especially on systems without compression, is almost at the limit and on slower systems can crash with even a few megabytes more added to it.

Pseudocode for compression (Java-style):

// First compress the ramdisk on the command line:
//   lzma -9 rd.cpio          (produces rd.cpio.lzma)

final int SIGN_LENGTH = 64;  // fake 64-byte signature appended at the end
// FIXME: possibly a magic value (pad with 0x00 to 4-byte alignment)
final int MAGIC_LENGTH = 4;
long length = filesize("rd.cpio");  // uncompressed size of the ramdisk
try (RandomAccessFile lzmaFile = new RandomAccessFile("rd.cpio.lzma", "rw")) {
    // pad with 0x00 up to 4-byte alignment, then append a 64-byte all-zero fake signature
    lzmaFile.setLength(((lzmaFile.length() + MAGIC_LENGTH - 1) / MAGIC_LENGTH) * MAGIC_LENGTH + SIGN_LENGTH);
    // rewrite the 8-byte little-endian "uncompressed size" field of the LZMA header
    // (at offset 5, right after the 5 properties bytes) from -1 to the real size
    lzmaFile.seek(5);
    lzmaFile.writeLong(Long.reverseBytes(length)); // writeLong() is big-endian, so byte-swap first
}
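For reference, the same header patch can be sketched in Python. This is a minimal sketch of the idea, not the actual build tooling: the legacy .lzma container starts with 5 properties bytes followed by an 8-byte little-endian uncompressed-size field, and the function name and demo payload below are illustrative.

```python
import lzma
import struct

def patch_lzma_size(blob: bytes, uncompressed_size: int,
                    align: int = 4, sign_len: int = 64) -> bytes:
    """Rewrite the 8-byte little-endian uncompressed-size field at offset 5
    of a legacy .lzma stream, pad with zeros to `align`-byte alignment and
    append an all-zero fake signature of `sign_len` bytes."""
    patched = blob[:5] + struct.pack("<Q", uncompressed_size) + blob[13:]
    pad = (-len(patched)) % align
    return patched + b"\x00" * (pad + sign_len)

# Demo on a dummy payload instead of a real rd.cpio
data = b"dummy ramdisk content " * 64
blob = lzma.compress(data, format=lzma.FORMAT_ALONE)  # legacy .lzma container
out = patch_lzma_size(blob, len(data))
print(len(blob), "->", len(out))
```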

 

  • Like 1
  • Thanks 1

1 hour ago, u357 said:

 

We're not there yet; don't use important data that is valuable to you. Keep your data where it is and use test data for testing only. I am using an LSI 9311 for the HDD pool and an X520 for the 10G NIC. I haven't found out how to inject those drivers yet, so it's still under observation.

Thanks!

Looking forward to custom driver support.


1 hour ago, Piteball said:

I think you misread ThorGroup's latest post. SAS support is not there yet; it's one of the things on the list to be addressed before they release the first beta. No SAS, no LSI support. Official support for adding unsupported hardware is also on the list of things to be solved.

Sent from my GM1913 via Tapatalk
 

 

Thanks, Piteball.

Really excited, and I can't wait to try out the new loader. I thought there was already some way to 'inject' driver files into the loader and the DSM installer as well.

Then I will wait, and also explore on my own to see if I can help.


47 minutes ago, Orphée said:

Hi!

I expected to be censored sooner or later.

I just hope he was able to read the whole comment before the edit. It loses some flavour after editing.

Those kinds of kids deserve a slap to reorder the few neurons they have.

But I understand, don't worry!

Have a nice day!

 

 

Don't push your luck. I have already asked you to tone it down. There is no need to be condescending to anyone.

  • Like 1
  • Thanks 1

 
Thanks, Piteball.
Really excited, and I can't wait to try out the new loader. I thought there was already some way to 'inject' driver files into the loader and the DSM installer as well.
Then I will wait, and also explore on my own to see if I can help.
The mpt2sas and mpt3sas drivers can be injected, but as DSM won't recognize SAS drives as working hard drives, it won't help. That's why it's on the to-do list. [emoji16]

Sent from my GM1913 via Tapatalk


4 minutes ago, Piteball said:

The mpt2sas and mpt3sas drivers can be injected, but as DSM won't recognize SAS drives as working hard drives, it won't help. That's why it's on the to-do list. emoji16.png

Sent from my GM1913 via Tapatalk
 

 

Well, what will happen when I connect SATA drives to my LSI SAS HBA? Will DSM recognize them properly?


@ThorGroup

Great update!

Waiting for custom drivers and SAS support!

 

I'm sorry for asking the face detection question again.

Does the loader disable network access to the Synology servers?

Judging from the log below, maybe that is what blocks face detection?

 

2021-09-23T14:36:54+08:00 DSM7 synofoto-face-extraction[15497]: uncaught thread task exception /source/synofoto/src/daemon/plugin/plugin_worker.cpp:90 plugin init failed: /var/packages/SynologyPhotos/target/usr/lib/libsynophoto-plugin-face.so
2021-09-23T14:36:54+08:00 DSM7 synofoto-face-extraction[15497]: /source/synophoto-plugin-face/src/face_plugin/lib/face_detection.cpp:214 Error: (face plugin) load network failed


1 minute ago, seanone said:

@ThorGroup

Great update!

Waiting for custom drivers and SAS support!

 

I'm sorry for asking the face detection question again.

Does the loader disable network access to the Synology servers?

Judging from the log below, maybe that is what blocks face detection?

 

2021-09-23T14:36:54+08:00 DSM7 synofoto-face-extraction[15497]: uncaught thread task exception /source/synofoto/src/daemon/plugin/plugin_worker.cpp:90 plugin init failed: /var/packages/SynologyPhotos/target/usr/lib/libsynophoto-plugin-face.so
2021-09-23T14:36:54+08:00 DSM7 synofoto-face-extraction[15497]: /source/synophoto-plugin-face/src/face_plugin/lib/face_detection.cpp:214 Error: (face plugin) load network failed

Face detection works for me with bromolow, basically plug & play, as long as I have a real SN and MAC and at least 2 CPU cores set in the VM configuration.


17 hours ago, ThorGroup said:

@WiteWulf Can you try deleting the line with "register_pmu_shim" from redpill_main.c (in init_(void) function) and rebuilding the kernel module? You can then use inject_rp_ko.sh (lkm repo) script to inject it into an existing loader image or rebuild the image. With that you shouldn't have PMU emulation anymore (so the instance can be killed in ~24-48h) but we can see if it's kernel panicking.
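The edit ThorGroup describes above can be scripted before rebuilding. A minimal sketch (the path to redpill_main.c inside the lkm repo checkout is an assumption):

```python
from pathlib import Path

def drop_pmu_shim(src: str) -> str:
    """Remove every source line that mentions register_pmu_shim."""
    kept = [ln for ln in src.splitlines(keepends=True)
            if "register_pmu_shim" not in ln]
    return "".join(kept)

# Hypothetical location of the checked-out redpill-lkm sources
path = Path("redpill-lkm/redpill_main.c")
if path.exists():
    path.write_text(drop_pmu_shim(path.read_text()))
```

After this, rebuild the module and inject it with inject_rp_ko.sh as described in the quote.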

@ThorGroup I tried removing the line myself and rebuilt the image...

It does not seem to make any difference while importing photos in Synology Moments:

 

Quote

IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
init: dhcp-client (eth0) main process (8142) killed by TERM signal
md: md2 stopped.
md: bind<sda3>
md/raid1:md2: active with 1 out of 1 mirrors
md2: detected capacity change from 0 to 12242124800
 md2: unknown partition table
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
BTRFS: device label 2021.09.22-14:16:50 v25556 devid 1 transid 383 /dev/md2
BTRFS info (device md2): enabling auto syno reclaim space
BTRFS info (device md2): use ssd allocation scheme
BTRFS info (device md2): using free space tree
BTRFS: has skinny extents
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
usbcore: registered new interface driver usblp
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
Synotify use 16384 event queue size
Synotify use 16384 event queue size
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd5
<redpill/smart_shim.c:514> Generating fake WIN_SMART log=0 entries
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd1
<redpill/smart_shim.c:455> Generating fake SMART thresholds
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd1
<redpill/smart_shim.c:455> Generating fake SMART thresholds
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
iSCSI:target_core_rodsp_server.c:1027:rodsp_server_init RODSP server started, login_key(001132417efd).
iSCSI:extent_pool.c:766:ep_init syno_extent_pool successfully initialized
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
iSCSI:target_core_device.c:617:se_dev_align_max_sectors Rounding down aligned max_sectors from 4294967295 to 4294967288
iSCSI:target_core_lunbackup.c:361:init_io_buffer_head 512 buffers allocated, total 2097152 bytes successfully
Synotify use 16384 event queue size
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd1
<redpill/smart_shim.c:455> Generating fake SMART thresholds
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
iSCSI:target_core_file.c:146:fd_attach_hba RODSP plugin for fileio is enabled.
iSCSI:target_core_file.c:153:fd_attach_hba ODX Token Manager is enabled.
iSCSI:target_core_multi_file.c:91:fd_attach_hba RODSP plugin for multifile is enabled.
iSCSI:target_core_ep.c:786:ep_attach_hba RODSP plugin for epio is enabled.
iSCSI:target_core_ep.c:793:ep_attach_hba ODX Token Manager is enabled.
usbcore: registered new interface driver usbhid
usbhid: USB HID core driver
input: VMware VMware Virtual USB Mouse as /devices/pci0000:00/0000:00:11.0/0000:02:00.0/usb2/2-1/2-1:1.0/input/input0
hid-generic 0003:0E0F:0003.0001: input: USB HID v1.10 Mouse [VMware VMware Virtual USB Mouse] on usb-0000:02:00.0-1/input0
loop: module loaded
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
warning: `nginx' uses 32-bit capabilities (legacy support in use)
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
ata5.00: configured for UDMA/100
ata5: EH complete
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd8
<redpill/smart_shim.c:654> Attempted ATA_SMART_ENABLE modification!
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there

Xpen_624 login:
ata5.00: configured for UDMA/100
ata5: EH complete
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
<redpill/intercept_execve.c:87> Blocked /usr/syno/bin/syno_pstore_collect from running
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
<redpill/memory_helper.c:18> Disabling memory protection for page(s) at ffffffffa09f4c50+12/1 (<<ffffffffa09f4000)
<redpill/override_symbol.c:244> Obtaining lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:244> Writing original code to <ffffffffa09f4c50>
<redpill/override_symbol.c:244> Released lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Obtaining lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Writing trampoline code to <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Released lock for <ffffffffa09f4c50>
<redpill/bios_hwcap_shim.c:66> proxying GetHwCapability(id=3)->support => real=1 [org_fout=0, ovs_fout=0]
<redpill/smart_shim.c:359> ATA_CMD_ID_ATA confirmed *no* SMART support - pretending it's there
synobios write K to /dev/ttyS1 failed
<redpill/bios_shims_collection.c:43> mfgBIOS: nullify zero-int for VTK_SET_HDD_ACT_LED
<redpill/override_symbol.c:244> Obtaining lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:244> Writing original code to <ffffffffa09f4c50>
<redpill/override_symbol.c:244> Released lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Obtaining lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Writing trampoline code to <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Released lock for <ffffffffa09f4c50>
<redpill/bios_hwcap_shim.c:66> proxying GetHwCapability(id=2)->support => real=1 [org_fout=0, ovs_fout=0]
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:42> mfgBIOS: nullify zero-int for VTK_SET_PHY_LED
<redpill/bios_shims_collection.c:36> mfgBIOS: nullify zero-int for VTK_SET_PWR_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
<redpill/bios_shims_collection.c:35> mfgBIOS: nullify zero-int for VTK_SET_DISK_LED
init: syno-check-disk-compatibility main process (12582) terminated with status 255
ip_tables: (C) 2000-2006 Netfilter Core Team
nf_conntrack version 0.5.0 (16384 buckets, 65536 max)
ip6_tables: (C) 2000-2006 Netfilter Core Team
aufs 3.10.x-20141110
Bridge firewalling registered
IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
Synotify use 16384 event queue size
Synotify use 16384 event queue size
<redpill/bios_shims_collection.c:44> mfgBIOS: nullify zero-int for VTK_GET_MICROP_ID
<redpill/override_symbol.c:244> Obtaining lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:244> Writing original code to <ffffffffa09f4c50>
<redpill/override_symbol.c:244> Released lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Obtaining lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Writing trampoline code to <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Released lock for <ffffffffa09f4c50>
<redpill/bios_hwcap_shim.c:66> proxying GetHwCapability(id=3)->support => real=1 [org_fout=0, ovs_fout=0]
Synotify use 16384 event queue size
Synotify use 16384 event queue size
init: synocontentextractd main process ended, respawning
<redpill/override_symbol.c:244> Obtaining lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:244> Writing original code to <ffffffffa09f4c50>
<redpill/override_symbol.c:244> Released lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Obtaining lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Writing trampoline code to <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Released lock for <ffffffffa09f4c50>
<redpill/bios_hwcap_shim.c:66> proxying GetHwCapability(id=3)->support => real=1 [org_fout=0, ovs_fout=0]
Synotify use 16384 event queue size
<redpill/bios_shims_collection.c:44> mfgBIOS: nullify zero-int for VTK_GET_MICROP_ID
<redpill/override_symbol.c:244> Obtaining lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:244> Writing original code to <ffffffffa09f4c50>
<redpill/override_symbol.c:244> Released lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Obtaining lock for <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Writing trampoline code to <ffffffffa09f4c50>
<redpill/override_symbol.c:219> Released lock for <ffffffffa09f4c50>
<redpill/bios_hwcap_shim.c:66> proxying GetHwCapability(id=3)->support => real=1 [org_fout=0, ovs_fout=0]
<redpill/smart_shim.c:644> Got SMART *command* - looking for feature=0xd0
<redpill/smart_shim.c:388> Generating fake SMART values
BUG: soft lockup - CPU#1 stuck for 41s! [fileindexd:12879]
Modules linked in: bridge stp aufs macvlan veth xt_conntrack xt_addrtype nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ipt_MASQUERADE xt_REDIRECT xt_nat iptable_nat nf_nat_ipv4 nf_nat xt_recent xt_iprange xt_limit xt_state xt_tcpudp xt_multiport xt_LOG nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack iptable_filter ip_tables x_tables cifs udf isofs loop hid_generic tcm_loop(O) iscsi_target_mod(O) target_core_ep(O) target_core_multi_file(O) target_core_file(O) target_core_iblock(O) target_core_mod(O) syno_extent_pool(PO) rodsp_ep(O) usbhid hid usblp bromolow_synobios(PO) exfat(O) btrfs synoacl_vfs(PO) zlib_deflate hfsplus md4 hmac bnx2x(O) libcrc32c mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) compat(O) qede(O) qed(O) atlantic(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) igb(O) i2c_algo_bit e1000e(O) dca vxlan fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha256_generic sha1_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand mperf processor thermal_sys cpufreq_stats freq_table dm_snapshot crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram(C) sg etxhci_hcd usb_storage xhci_hcd uhci_hcd ehci_pci ehci_hcd usbcore usb_common redpill(OF) [last unloaded: bromolow_synobios]
CPU: 1 PID: 12879 Comm: fileindexd Tainted: PF        C O 3.10.105 #25556
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
task: ffff8801327bd800 ti: ffff88011e8b4000 task.ti: ffff88011e8b4000
RIP: 0010:[<ffffffff81087d0e>]  [<ffffffff81087d0e>] generic_exec_single+0x6e/0xe0
RSP: 0018:ffff88011e8b7cc0  EFLAGS: 00000202
RAX: 00000000000008fb RBX: 0000000000000001 RCX: 0000000000000014
RDX: ffffffff816057c8 RSI: 00000000000000fb RDI: ffffffff816057c8
RBP: ffff88013dc12a80 R08: ffff88011448fe58 R09: 0000000000000000
R10: 0000000000000022 R11: ffff8800b49decc0 R12: ffff8800b0596890
R13: ffff88011a1c4170 R14: ffffffff81108d4d R15: ffff88011e8b7da0
FS:  00007fba3cf4c700(0000) GS:ffff88013dd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fba3ce4b000 CR3: 000000011b332000 CR4: 00000000001607e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Stack:
 0000000000000000 ffff88011e8b7d50 0000000000000001 ffffffff8186ec10
 ffffffff8102f9e0 ffffffff81087e55 0000000000000001 0000000000000000
 ffff88013dc12a80 ffff88013dc12a80 ffffffff8102f9e0 ffff88011e8b7d70
Call Trace:
 [<ffffffff8102f9e0>] ? do_flush_tlb_all+0x170/0x170
 [<ffffffff81087e55>] ? smp_call_function_single+0xd5/0x160
 [<ffffffff8102f9e0>] ? do_flush_tlb_all+0x170/0x170
 [<ffffffff8102ff5c>] ? flush_tlb_mm_range+0x22c/0x300
 [<ffffffff810d36c9>] ? tlb_flush_mmu.part.66+0x29/0x80
 [<ffffffff810d3ded>] ? tlb_finish_mmu+0x3d/0x40
 [<ffffffff810daa4e>] ? unmap_region+0xbe/0x100
 [<ffffffff810dad91>] ? vma_rb_erase+0x121/0x260
 [<ffffffff810dc8cd>] ? do_munmap+0x2ed/0x690
 [<ffffffff810dcca6>] ? vm_munmap+0x36/0x50
 [<ffffffff810ddb35>] ? SyS_munmap+0x5/0x10
 [<ffffffff814cfdc4>] ? system_call_fastpath+0x22/0x27
Code: 08 4c 89 ef 48 89 2b 48 89 53 08 48 89 1a e8 aa 68 44 00 4c 39 f5 74 6b f6 43 20 01 74 0f 0f 1f 80 00 00 00 00 f3 90 f6 43 20 01 <75> f8 5b 5d 41 5c 41 5d 41 5e c3 0f 1f 80 00 00 00 00 4c 8d 6d
BUG: soft lockup - CPU#1 stuck for 44s! [fileindexd:12879]
Modules linked in: bridge stp aufs macvlan veth xt_conntrack xt_addrtype nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ipt_MASQUERADE xt_REDIRECT xt_nat iptable_nat nf_nat_ipv4 nf_nat xt_recent xt_iprange xt_limit xt_state xt_tcpudp xt_multiport xt_LOG nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack iptable_filter ip_tables x_tables cifs udf isofs loop hid_generic tcm_loop(O) iscsi_target_mod(O) target_core_ep(O) target_core_multi_file(O) target_core_file(O) target_core_iblock(O) target_core_mod(O) syno_extent_pool(PO) rodsp_ep(O) usbhid hid usblp bromolow_synobios(PO) exfat(O) btrfs synoacl_vfs(PO) zlib_deflate hfsplus md4 hmac bnx2x(O) libcrc32c mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) compat(O) qede(O) qed(O) atlantic(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) igb(O) i2c_algo_bit e1000e(O) dca vxlan fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha256_generic sha1_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand mperf processor thermal_sys cpufreq_stats freq_table dm_snapshot crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram(C) sg etxhci_hcd usb_storage xhci_hcd uhci_hcd ehci_pci ehci_hcd usbcore usb_common redpill(OF) [last unloaded: bromolow_synobios]
CPU: 1 PID: 12879 Comm: fileindexd Tainted: PF        C O 3.10.105 #25556
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
task: ffff8801327bd800 ti: ffff88011e8b4000 task.ti: ffff88011e8b4000
RIP: 0010:[<ffffffff81087d0a>]  [<ffffffff81087d0a>] generic_exec_single+0x6a/0xe0
RSP: 0018:ffff88011e8b7cc0  EFLAGS: 00000202
RAX: 00000000000008fb RBX: 0000000000000001 RCX: 0000000000000014
RDX: ffffffff816057c8 RSI: 00000000000000fb RDI: ffffffff816057c8
RBP: ffff88013dc12a80 R08: ffff88011448fe58 R09: 0000000000000000
R10: 0000000000000022 R11: ffff8800b49decc0 R12: ffff8800b0596890
R13: ffff88011a1c4170 R14: ffffffff81108d4d R15: ffff88011e8b7da0
FS:  00007fba3cf4c700(0000) GS:ffff88013dd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fba3ce4b000 CR3: 000000011b332000 CR4: 00000000001607e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Stack:
 0000000000000000 ffff88011e8b7d50 0000000000000001 ffffffff8186ec10
 ffffffff8102f9e0 ffffffff81087e55 0000000000000001 0000000000000000
 ffff88013dc12a80 ffff88013dc12a80 ffffffff8102f9e0 ffff88011e8b7d70
Call Trace:
 [<ffffffff8102f9e0>] ? do_flush_tlb_all+0x170/0x170
 [<ffffffff81087e55>] ? smp_call_function_single+0xd5/0x160
 [<ffffffff8102f9e0>] ? do_flush_tlb_all+0x170/0x170
 [<ffffffff8102ff5c>] ? flush_tlb_mm_range+0x22c/0x300
 [<ffffffff810d36c9>] ? tlb_flush_mmu.part.66+0x29/0x80
 [<ffffffff810d3ded>] ? tlb_finish_mmu+0x3d/0x40
 [<ffffffff810daa4e>] ? unmap_region+0xbe/0x100
 [<ffffffff810dad91>] ? vma_rb_erase+0x121/0x260
 [<ffffffff810dc8cd>] ? do_munmap+0x2ed/0x690
 [<ffffffff810dcca6>] ? vm_munmap+0x36/0x50
 [<ffffffff810ddb35>] ? SyS_munmap+0x5/0x10
 [<ffffffff814cfdc4>] ? system_call_fastpath+0x22/0x27
Code: c6 48 89 5d 08 4c 89 ef 48 89 2b 48 89 53 08 48 89 1a e8 aa 68 44 00 4c 39 f5 74 6b f6 43 20 01 74 0f 0f 1f 80 00 00 00 00 f3 90 <f6> 43 20 01 75 f8 5b 5d 41 5c 41 5d 41 5e c3 0f 1f 80 00 00 00
BUG: soft lockup - CPU#1 stuck for 41s! [fileindexd:12879]
Modules linked in: bridge stp aufs macvlan veth xt_conntrack xt_addrtype nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ipt_MASQUERADE xt_REDIRECT xt_nat iptable_nat nf_nat_ipv4 nf_nat xt_recent xt_iprange xt_limit xt_state xt_tcpudp xt_multiport xt_LOG nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack iptable_filter ip_tables x_tables cifs udf isofs loop hid_generic tcm_loop(O) iscsi_target_mod(O) target_core_ep(O) target_core_multi_file(O) target_core_file(O) target_core_iblock(O) target_core_mod(O) syno_extent_pool(PO) rodsp_ep(O) usbhid hid usblp bromolow_synobios(PO) exfat(O) btrfs synoacl_vfs(PO) zlib_deflate hfsplus md4 hmac bnx2x(O) libcrc32c mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) compat(O) qede(O) qed(O) atlantic(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) igb(O) i2c_algo_bit e1000e(O) dca vxlan fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha256_generic sha1_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand mperf processor thermal_sys cpufreq_stats freq_table dm_snapshot crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram(C) sg etxhci_hcd usb_storage xhci_hcd uhci_hcd ehci_pci ehci_hcd usbcore usb_common redpill(OF) [last unloaded: bromolow_synobios]
CPU: 1 PID: 12879 Comm: fileindexd Tainted: PF        C O 3.10.105 #25556
Hardware name: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 11/12/2020
task: ffff8801327bd800 ti: ffff88011e8b4000 task.ti: ffff88011e8b4000
RIP: 0010:[<ffffffff81087d08>]  [<ffffffff81087d08>] generic_exec_single+0x68/0xe0
RSP: 0018:ffff88011e8b7cc0  EFLAGS: 00000202
RAX: 00000000000008fb RBX: 0000000000000001 RCX: 0000000000000014
RDX: ffffffff816057c8 RSI: 00000000000000fb RDI: ffffffff816057c8
RBP: ffff88013dc12a80 R08: ffff88011448fe58 R09: 0000000000000000
R10: 0000000000000022 R11: ffff8800b49decc0 R12: ffff8800b0596890
R13: ffff88011a1c4170 R14: ffffffff81108d4d R15: ffff88011e8b7da0
FS:  00007fba3cf4c700(0000) GS:ffff88013dd00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fba3ce4b000 CR3: 000000011b332000 CR4: 00000000001607e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
Stack:
 0000000000000000 ffff88011e8b7d50 0000000000000001 ffffffff8186ec10
 ffffffff8102f9e0 ffffffff81087e55 0000000000000001 0000000000000000
 ffff88013dc12a80 ffff88013dc12a80 ffffffff8102f9e0 ffff88011e8b7d70
Call Trace:
 [<ffffffff8102f9e0>] ? do_flush_tlb_all+0x170/0x170
 [<ffffffff81087e55>] ? smp_call_function_single+0xd5/0x160
 [<ffffffff8102f9e0>] ? do_flush_tlb_all+0x170/0x170
 [<ffffffff8102ff5c>] ? flush_tlb_mm_range+0x22c/0x300
 [<ffffffff810d36c9>] ? tlb_flush_mmu.part.66+0x29/0x80
 [<ffffffff810d3ded>] ? tlb_finish_mmu+0x3d/0x40
 [<ffffffff810daa4e>] ? unmap_region+0xbe/0x100
 [<ffffffff810dad91>] ? vma_rb_erase+0x121/0x260
 [<ffffffff810dc8cd>] ? do_munmap+0x2ed/0x690
 [<ffffffff810dcca6>] ? vm_munmap+0x36/0x50
 [<ffffffff810ddb35>] ? SyS_munmap+0x5/0x10
 [<ffffffff814cfdc4>] ? system_call_fastpath+0x22/0x27
Code: 48 89 c6 48 89 5d 08 4c 89 ef 48 89 2b 48 89 53 08 48 89 1a e8 aa 68 44 00 4c 39 f5 74 6b f6 43 20 01 74 0f 0f 1f 80 00 00 00 00 <f3> 90 f6 43 20 01 75 f8 5b 5d 41 5c 41 5d 41 5e c3 0f 1f 80 00

 


22 minutes ago, rdidier75 said:

Same for me on J3455

 

Scheduled power-on, to my understanding at least, is a PMU function on a real Syno device. For the time being we are emulating the PMU at the software level, but we are still missing the hardware part needed to implement power-on. It's on the roadmap; there is a line that says something about it.

 

And here is a deeper explanation : 

https://github.com/RedPill-TTG/dsm-research/blob/master/quirks/pmu.md

Edited by pocopico
  • Like 3

On 9/21/2021 at 6:31 PM, pocopico said:

 

As it was with Jun's loader, some files in the loader will be overwritten (mainly rd.gz and zImage). Recover the loader from a known working image and see if this works.

Simply replacing rd.gz and zImage in the built loader img with the ones extracted from the update2 pat doesn't work. Looking into rd.gz, they seem different.

 

Is there any method to make it happen on 6.2.4 update2?

 


16 minutes ago, man275 said:

Simply replacing rd.gz and zImage in the built loader img with the ones extracted from the update2 pat doesn't work. Looking into rd.gz, they seem different.

 

Is there any method to make it happen on 6.2.4 update2?

 

 

The rd.gz that exists on the loader, plus the zImage, have to be the ones that were patched during the loader creation process. If you manually update using a pat file from Syno, you will most probably overwrite rd.gz and zImage, rendering your loader useless. 

 

All the redpill magic happens in these two files. You need to put back the redpill-patched ones, not the ones from the update2 pat.

 

 

 

Edited by pocopico

12 hours ago, apriliars3 said:

To test DS3615xs with VMware, I added these lines to global_config.json:

 


{
            "id": "bromolow-7.0.1-42214",
            "platform_version": "bromolow-7.0.1-42214",
            "user_config_json": "bromolow_user_config.json",
            "docker_base_image": "debian:8-slim",
            "compile_with": "toolkit_dev",
            "redpill_lkm_make_target": "prod-v7",
            "downloads": {
                "kernel": {
                    "url": "https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/25426branch/bromolow-source/linux-4.4.x.txz/download",
                    "sha256": "af815ee065775d2e569fd7176e25c8ba7ee17a03361557975c8e5a4b64230c5b"
                },
                "toolkit_dev": {
                    "url": "https://sourceforge.net/projects/dsgpl/files/toolkit/DSM7.0/ds.bromolow-7.0.dev.txz/download",
                    "sha256": "a5fbc3019ae8787988c2e64191549bfc665a5a9a4cdddb5ee44c10a48ff96cdd"
                }
            },
            "redpill_lkm": {
                "source_url": "https://github.com/RedPill-TTG/redpill-lkm.git",
                "branch": "master"
            },
            "redpill_load": {
                "source_url": "https://github.com/jumkey/redpill-load.git",
                "branch": "develop"
            }
        },

 

For Apollolake you need to add the corresponding lines (see the attached global_config.json):

 

 

It's very fast and easy to build an .img for testing; you only need Linux (in my case, a terminal on Ubuntu):

 

1. Install Docker:


sudo apt-get update
sudo apt install docker.io

 

2. Install jq and curl:


sudo apt install jq
sudo apt install curl

 

3. Download redpill-tool-chain_x86_64_v0.10: https://xpenology.com/forum/applications/core/interface/file/attachment.php?id=13072

 

4. Go to the folder and make the .sh executable:


cd redpill-tool-chain_x86_64_v0.10
chmod +x redpill_tool_chain.sh

 

5. If you want to edit vid, pid, sn, mac:


#edit apollolake
vi apollolake_user_config.json
  
#edit bromolow 
vi bromolow_user_config.json
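If you prefer scripted edits over vi, jq (installed in step 2) can set these fields non-interactively. The key layout below ("extra_cmdline" containing vid/pid/sn/mac1) is an assumption about the user_config.json structure, so check it against your own file; a temp file stands in for the real config so the sketch runs as-is:

```shell
# Hypothetical user_config.json layout -- verify against your real file
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
{"extra_cmdline": {"vid": "0x0001", "pid": "0x0001", "sn": "CHANGEME", "mac1": "000000000000"}}
EOF

# Set sn and mac1 in place (values here are examples, not valid serials)
jq '.extra_cmdline.sn = "2021ABCDEFGH" | .extra_cmdline.mac1 = "001122334455"' \
  "$CFG" > "$CFG.tmp" && mv "$CFG.tmp" "$CFG"

jq -r '.extra_cmdline.sn' "$CFG"
```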

 

6. Build the img:


#for apollolake
./redpill_tool_chain.sh build apollolake-7.0.1-42214 && ./redpill_tool_chain.sh auto apollolake-7.0.1-42214

#for bromolow
./redpill_tool_chain.sh build bromolow-7.0.1-42214 && ./redpill_tool_chain.sh auto bromolow-7.0.1-42214

 

The resulting file will be in redpill-tool-chain_x86_64_v0.10/images
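For bare-metal use, dd is the usual Linux tool for writing the built image to a USB stick. The device name is deliberately a placeholder, and the sketch below writes to a temp file instead of a real device so it is safe to run:

```shell
# In real use: IMG=images/<your built image>.img and TARGET=/dev/sdX
# (confirm the device with lsblk first -- dd will happily destroy disks).
IMG=$(mktemp)
TARGET=$(mktemp)
head -c 1M /dev/urandom > "$IMG"    # dummy stand-in for the loader image

dd if="$IMG" of="$TARGET" bs=4M conv=fsync
cmp -s "$IMG" "$TARGET" && echo "write verified"
```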

 

7. For VMware, I convert the .img to .vmdk with StarWind V2V Converter and then add it to the virtual machine as a SATA disk. Also, change ethernet0.virtualDev = "e1000" to "e1000e" in the .vmx file.

 

Thanks, ThorGroup, for the great work!


global_config.json 8.09 kB · 42 downloads

Is bromolow really on linux-4.4.x.txz? Shouldn't it be a 3.x kernel?


49 minutes ago, man275 said:

Is bromolow really on linux-4.4.x.txz? Shouldn't it be a 3.x kernel?

I think you're right. This should be a correct bromolow 7.0.1-RC1 stanza:

 

{
            "id": "bromolow-7.0.1-42214",
            "platform_version": "bromolow-7.0.1-42214",
            "user_config_json": "bromolow_user_config.json",
            "docker_base_image": "debian:8-slim",
            "compile_with": "toolkit_dev",
            "redpill_lkm_make_target": "prod-v7",
            "downloads": {
                "kernel": {
                    "url": "https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/25426branch/bromolow-source/linux-3.10.x.txz/download",
                    "sha256": "18aecead760526d652a731121d5b8eae5d6e45087efede0da057413af0b489ed"
                },
                "toolkit_dev": {
                    "url": "https://sourceforge.net/projects/dsgpl/files/toolkit/DSM7.0/ds.bromolow-7.0.dev.txz/download",
                    "sha256": "a5fbc3019ae8787988c2e64191549bfc665a5a9a4cdddb5ee44c10a48ff96cdd"
                }
            },
            "redpill_lkm": {
                "source_url": "https://github.com/RedPill-TTG/redpill-lkm.git",
                "branch": "master"
            },
            "redpill_load": {
                "source_url": "https://github.com/jumkey/redpill-load.git",
                "branch": "develop"
            }
        },
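When swapping kernel URLs like this, it's worth checking the downloaded tarball against the sha256 field before building; the toolchain presumably verifies it too, but a manual check looks like the sketch below (a temp file with a known hash stands in for the tarball):

```shell
# Demo file whose sha256 is well known; in real use, hash the downloaded
# linux-3.10.x.txz and compare against the "sha256" value in the config.
f=$(mktemp)
printf 'hello' > "$f"
expected="2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
actual=$(sha256sum "$f" | awk '{print $1}')
[ "$actual" = "$expected" ] && echo "checksum OK" || echo "checksum MISMATCH"
```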

 


@ThorGroup it still kernel panics when launching an influxdb Docker container with "register_pmu_shim" removed from redpill_main.c, as suggested:

[  338.055690] Kernel panic - not syncing: Watchdog detected hard LOCKUP on cpu 6
[  338.091670] CPU: 6 PID: 21097 Comm: containerd-shim Tainted: PF          O 3.10.108 #42214
[  338.132114] Hardware name: HP ProLiant MicroServer Gen8, BIOS J06 04/04/2019
[  338.168045]  ffffffff814a2759 ffffffff814a16b1 0000000000000010 ffff880409b88d60
[  338.205031]  ffff880409b88cf8 0000000000000000 0000000000000006 0000000000000001
[  338.241507]  0000000000000006 ffffffff80000001 0000000000000030 ffff8803f4d4dc00
[  338.278173] Call Trace:
[  338.290006]  <NMI>  [<ffffffff814a2759>] ? dump_stack+0xc/0x15
[  338.318839]  [<ffffffff814a16b1>] ? panic+0xbb/0x1df
[  338.342727]  [<ffffffff810a9eb8>] ? watchdog_overflow_callback+0xa8/0xb0
[  338.375043]  [<ffffffff810db7d3>] ? __perf_event_overflow+0x93/0x230
[  338.405804]  [<ffffffff810da612>] ? perf_event_update_userpage+0x12/0xf0
[  338.438356]  [<ffffffff810152a4>] ? intel_pmu_handle_irq+0x1b4/0x340
[  338.469218]  [<ffffffff814a9d06>] ? perf_event_nmi_handler+0x26/0x40
[  338.500130]  [<ffffffff814a944e>] ? do_nmi+0xfe/0x440
[  338.525060]  [<ffffffff814a8a53>] ? end_repeat_nmi+0x1e/0x7e
[  338.552408]  <<EOE>>
[  338.562333] Rebooting in 3 seconds..

 

I tried it another time and got a similar crash, this time directly referencing influxdb in the output:

Kernel panic - not syncing: Watchdog detected hard LOCKUP on cpu 0
[  165.610117] CPU: 0 PID: 21435 Comm: influxd Tainted: PF          O 3.10.108 #42214
[  165.646799] Hardware name: HP ProLiant MicroServer Gen8, BIOS J06 04/04/2019
[  165.680969]  ffffffff814a2759 ffffffff814a16b1 0000000000000010 ffff880409a08d60
[  165.717087]  ffff880409a08cf8 0000000000000000 0000000000000000 0000000000000001
[  165.753547]  0000000000000000 ffffffff80000001 0000000000000030 ffff8803f5267c00
[  165.789827] Call Trace:
[  165.801740]  <NMI>  [<ffffffff814a2759>] ? dump_stack+0xc/0x15
[  165.830687]  [<ffffffff814a16b1>] ? panic+0xbb/0x1df
[  165.855249]  [<ffffffff810a9eb8>] ? watchdog_overflow_callback+0xa8/0xb0
[  165.888617]  [<ffffffff810db7d3>] ? __perf_event_overflow+0x93/0x230
[  165.919963]  [<ffffffff810da612>] ? perf_event_update_userpage+0x12/0xf0
[  165.952655]  [<ffffffff810152a4>] ? intel_pmu_handle_irq+0x1b4/0x340
[  165.983546]  [<ffffffff814a9d06>] ? perf_event_nmi_handler+0x26/0x40
[  166.013584]  [<ffffffff814a944e>] ? do_nmi+0xfe/0x440
[  166.038375]  [<ffffffff814a8a53>] ? end_repeat_nmi+0x1e/0x7e
[  166.065408]  <<EOE>>
[  166.075520] Rebooting in 3 seconds..

 


This topic is now closed to further replies.