
RedPill - the new loader for 6.2.4 - Discussion



SataPortMap, DiskIdxMap, etc. are not in /etc/synoinfo.conf; they're options passed to the Linux kernel via grub, i.e. the loader just assembles the kernel argument string from the options you provide.  I don't think that particular RedPill provisioning option will help with /etc/synoinfo.conf parameters.

 

Just to be clear, /etc.defaults/synoinfo.conf is copied to /etc/synoinfo.conf each boot.  And some of the /etc.defaults/synoinfo.conf parameters are "reset" by DSM on upgrades or on each boot, hence the Jun loader patch.
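
A quick way to see both behaviours on a running box (a sketch, over SSH):

# boot-string options live on the kernel command line, not in synoinfo.conf
cat /proc/cmdline
# /etc/synoinfo.conf should match /etc.defaults/synoinfo.conf right after boot
diff /etc.defaults/synoinfo.conf /etc/synoinfo.conf && echo "identical"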

Edited by flyride

Right.  It's a little confusing. 

SataPortMap, DiskIdxMap, etc. are in the boot string, so they really should be in the "extra_cmdline" portion of the user_config.json, no?

While maxdisks, internalportcfg, etc. are in synoinfo.conf, so they should be in the "synoinfo" portion of the user_config.json.

 

Using this as the user_config.json does change the synoinfo.conf to contain the desired values:

{
  "extra_cmdline": {
    "vid": "<fill me>",
    "pid": "<fill me>",
    "sn": "<fill me>",
    "mac1": "<fill me>",
    "DiskIdxMap": "00",
    "SataPortMap": "1",
    "SasIdxMap": "0"
  },
  "synoinfo": {
    "maxdisks": "32",
    "internalportcfg": "0xffffffff"
  },
  "ramdisk_copy": {}
}
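
Since a stray semicolon instead of a comma makes the file invalid JSON, a quick syntax check before rebuilding the image is cheap (a sketch, assuming python3 on the build machine):

python3 -m json.tool user_config.json > /dev/null && echo "user_config.json parses OK"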


How DSM interprets that value is beyond me.  With Jun's loader on DSM 6.2.3, maxdisks=6, but Storage Manager shows 5 used slots and 5 available slots.

It must be some combination of the maxdisks and the internalportcfg settings.

 

These new settings on 7.0 have Storage Manager showing disks 1, 17, and 18.  Disk 17 is the loader USB device, even though I do have synoboot, synoboot1, and synoboot2 in /dev.


9 hours ago, jhoughten said:

My face recognition is also not working.  In /dev/dri there are card0 and renderD128.

 

In /var/log/synofoto.log, I get this error over and over again:

2021-09-09T11:48:15-06:00 DS918 synofoto-face-extraction[19489]: /source/synophoto-plugin-face/src/face_plugin/main.cpp:22 face plugin init
2021-09-09T11:48:15-06:00 DS918 synofoto-face-extraction[19489]: uncaught thread task exception /source/synofoto/src/daemon/plugin/plugin_worker.cpp:90 plugin init failed: /var/packages/SynologyPhotos/target/usr/lib/libsynophoto-plugin-face.so

 

In /var/log/messages, I get this error over and over again:

2021-09-09T11:48:15-06:00 DS918 synofoto-face-extraction[19489]: uncaught thread task exception /source/synofoto/src/daemon/plugin/plugin_worker.cpp:90 plugin init failed: /var/packages/SynologyPhotos/target/usr/lib/libsynophoto-plugin-face.so
2021-09-09T11:48:15-06:00 DS918 synofoto-face-extraction[19489]: /source/synophoto-plugin-face/src/face_plugin/lib/face_detection.cpp:214 Error: (face plugin) load network failed

 

I am using a valid SN, but the MAC is the actual MAC of my NIC.

 

I installed Video Station, and /usr/syno/etc/codec/activation.conf showed successful activation of the various codecs.
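
For anyone comparing setups, the places I looked, in one sketch (paths as in this post):

ls -l /dev/dri                          # card0 + renderD128 should be present
tail /var/log/synofoto.log              # face-extraction errors land here
cat /usr/syno/etc/codec/activation.conf # codec activation status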

According to some posts, you need a valid SN and MAC.

Mine is using a valid SN and MAC.


2 minutes ago, jforts said:

According to some posts, you need a valid SN and MAC.

Mine is using a valid SN and MAC.

AFAIK, the valid MAC just needs to be entered in grub; the actual hardware MAC does not matter. Can you confirm?

I'm running under Proxmox, so I actually set the valid MAC on the virtual machine as well.
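
One way to see both values side by side on the running box (a sketch; eth0 assumed as the interface name):

# the MAC the loader passed to DSM...
tr ' ' '\n' < /proc/cmdline | grep '^mac1='
# ...vs. what the NIC actually reports
ip link show eth0 | grep ether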

Edited by mcdull

12 hours ago, tbc0309 said:

Help me build r8125.ko, thanks.

r8125-9.006.04.tar.bz2 (attachment)

Not tested:

make CONFIG_SYNO_LSP_RTD1619=y CONFIG_R8168=n CONFIG_R8168_PG=n CONFIG_R8125=m CROSS_COMPILE=/root/build/apollolake-DSM-7.0-toolchain/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu- -C /root/build/apollolake-DSM-7.0-toolkit/build/ M=`pwd` modules

 

r8125.ko
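
If you want to sanity-check the module before packaging it into the loader (a sketch; run on a box with a matching kernel):

# vermagic must match the target kernel (apollolake DSM 7.0 here)
modinfo r8125.ko | grep -E '^(filename|version|vermagic)'
# smoke test: load it and watch for the NIC registering
insmod r8125.ko && dmesg | tail -n 20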


7 hours ago, jhoughten said:

Right, it's a little confusing. SataPortMap, DiskIdxMap, etc. are in the boot string, so they belong in the "extra_cmdline" portion of user_config.json, while maxdisks, internalportcfg, etc. belong in the "synoinfo" portion. [...]

 

How can I change synoinfo parameters in grub.cfg on an already-generated image?

 

Thanks. This is part of my grub.cfg:

 

Also, excuse me, but I am a noob in some regards of XPEnology: how do I calculate the value of internalportcfg?  I have 2 SATA ports onboard and 2 more on a PCIe add-on card. Thanks!

 

Later edit: OK, got it.

 

0000 0000 1111 internalportcfg="0xf"

0011 1111 0000 usbportcfg="0x3f0"

 

4 sata (2 internal + 2 addon) , 6 usb ports, 0 esata.
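
For reference, these are bitmasks with one bit per drive slot (slot 1 = least significant bit); the hex values can be double-checked in bash:

# 4 internal SATA slots -> lowest 4 bits set
printf 'internalportcfg=0x%x\n' "$((2#1111))"        # 0xf
# 6 USB slots stacked above them -> bits 4-9
printf 'usbportcfg=0x%x\n' "$((2#1111110000))"       # 0x3f0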

 

I generated a new image with the synoinfo parameters inserted, but they are not found in grub.cfg.

 

menuentry 'RedPill DS918+ v7.0.1-42214 (USB, Verbose)' {
	savedefault
	set root=(hd0,msdos1)
	echo Loading Linux...
	linux /zImage HddHotplug=0 withefi console=ttyS0,115200n8 netif_num=1 syno_hdd_detect=0 syno_port_thaw=1 vender_format_version=2 earlyprintk mac1=XXXXXXXXXX syno_hdd_powerup_seq=1 pid=0xc96a log_buf_len=32M syno_hw_version=DS918+ vid=0x125f earlycon=uart8250,io,0x3f8,115200n8 sn=XXXXXXXXXXX elevator=elevator root=/dev/md0 loglevel=15 
	echo Loading initramfs...
	initrd /rd.gz
	echo Starting kernel
}
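
To poke at an already-generated image without rebuilding it, loop-mounting works (a sketch; the image filename is a placeholder and the grub.cfg path inside partition 1 may differ):

LOOP=$(losetup -Pf --show redpill.img)   # e.g. /dev/loop0, with p1..p3 partitions
mount "${LOOP}p1" /mnt
less /mnt/boot/grub/grub.cfg             # extra_cmdline lands here; synoinfo does not
umount /mnt && losetup -d "$LOOP"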

 

Edited by ct85msi

Hi

 

I'm trying to install DSM 7.0.1 apollolake 918+ on ESXi.

 

When running fdisk before installing the PAT file, I get this:

Disk /dev/sda: 120 GB, 128849018880 bytes, 251658240 sectors
15665 cylinders, 255 heads, 63 sectors/track
Units: sectors of 1 * 512 = 512 bytes

Disk /dev/sda doesn't contain a valid partition table
Disk /dev/synoboot: 128 MB, 134217728 bytes, 262144 sectors
1008 cylinders, 5 heads, 52 sectors/track
Units: sectors of 1 * 512 = 512 bytes

Device       Boot StartCHS    EndCHS        StartLBA     EndLBA    Sectors  Size Id Type
/dev/synoboot1    0,32,33     6,62,56           2048     100351      98304 48.0M 83 Linux
Partition 1 has different physical/logical start (non-Linux?):
     phys=(0,32,33) logical=(7,4,21)
Partition 1 has different physical/logical end:
     phys=(6,62,56) logical=(385,4,44)
/dev/synoboot2    6,62,57     15,205,62       100352     253951     153600 75.0M 83 Linux
Partition 2 has different physical/logical start (non-Linux?):
     phys=(6,62,57) logical=(385,4,45)
Partition 2 has different physical/logical end:
     phys=(15,205,62) logical=(976,3,36)
/dev/synoboot3    15,205,63   16,81,1         253952     262143       8192 4096K 83 Linux
Partition 3 has different physical/logical start (non-Linux?):
     phys=(15,205,63) logical=(976,3,37)
Partition 3 has different physical/logical end:
     phys=(16,81,1) logical=(1008,1,12)

And here are the logs from the serial console when trying to install the PAT file:

[  183.720933] <redpill/rtc_proxy.c:37> MfgCompatTime raw data: sec=20 min=3 hr=6 wkd=5 day=10 mth=8 yr=121
[  183.723631] <redpill/rtc_proxy.c:95> Writing BCD-based RTC
[  183.724954] RTC time set to 2021-09-10  6:03:20 (UTC)
[  186.800143] md: bind<sda1>
[  186.801637] md/raid1:md0: active with 1 out of 16 mirrors
[  186.802992] md0: detected capacity change from 0 to 2549940224
[  189.811875] md: bind<sda2>
[  189.813208] md/raid1:md1: active with 1 out of 16 mirrors
[  189.814563] md1: detected capacity change from 0 to 2147418112
[  190.198325] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  190.209674] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  190.240130] <redpill/rtc_proxy.c:222> Got an invalid call to rtc_proxy_set_auto_power_on
[  190.246693] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  190.257611] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  190.272865] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  190.283741] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  196.595667] ext2: synoboot2 mounted, process=updater
[  196.599505] synoboot2 unmounted, process=updater
[  196.607208] ext2: synoboot2 mounted, process=updater
[  196.609221] synoboot2 unmounted, process=updater
[  196.616749] ext2: synoboot2 mounted, process=updater
[  196.619137] synoboot2 unmounted, process=updater
[  196.625799] ext2: synoboot2 mounted, process=updater
[  198.179513] synoboot2 unmounted, process=updater
[  203.191011] ext2: synoboot2 mounted, process=updater
[  203.250955] synoboot2 unmounted, process=updater
[  203.261041] ext2: synoboot1 mounted, process=updater
[  203.265540] vfat: synoboot1 mounted, process=updater
[  203.268169] FAT-fs (synoboot1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

Can this be fixed?


32 minutes ago, ressof said:

I'm trying to install DSM 7.0.1 apollolake 918+ on ESXi. [...] Can this be fixed?

There doesn't seem to be a fatal problem in the log.


4 minutes ago, ct85msi said:

Yes, I know... but I can't find where to insert the parameters on the mounted image. I generated a new image with those synoinfo parameters inserted, but grub.cfg hasn't changed.

As per jhoughten's post, synoinfo parameters are patched into the running system, whereas extra_cmdline parameters are added to grub.cfg.

 

synoinfo doesn't change grub; they're two separate parts of the RedPill process.
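
One way to convince yourself of the split (a sketch, run on the booted box):

# extra_cmdline entries surface on the kernel command line
tr ' ' '\n' < /proc/cmdline | grep -iE 'sataportmap|diskidxmap'
# synoinfo entries are patched into synoinfo.conf, not grub.cfg
grep -E '^(maxdisks|internalportcfg)' /etc.defaults/synoinfo.conf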


Booted with the new image; all the parameters are now correct in synoinfo.conf, but md0 and md1 remained at 16 devices.

 

root@Apollo:~# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Thu Sep  9 11:12:29 2021
     Raid Level : raid1
     Array Size : 2097088 (2047.94 MiB 2147.42 MB)
  Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
   Raid Devices : 16
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent

    Update Time : Fri Sep 10 09:43:13 2021
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : a31d1e01:60fd6c6a:3017a5a8:c86610be
         Events : 0.125

    Number   Major   Minor   RaidDevice State
       0       8        2        0      active sync   /dev/sda2
       1       8       18        1      active sync   /dev/sdb2
       2       8       34        2      active sync   /dev/sdc2
       3       8       50        3      active sync   /dev/sdd2
       -       0        0        4      removed
       -       0        0        5      removed
       -       0        0        6      removed
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed
       -       0        0       12      removed
       -       0        0       13      removed
       -       0        0       14      removed
       -       0        0       15      removed
root@Apollo:~# mdadm --grow /dev/md1 --raid-devices=4
raid_disks for /dev/md1 set to 4

 

Now it's all good :)
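
md0 will want the same treatment (same idea, assuming the identical four-disk layout):

mdadm --grow /dev/md0 --raid-devices=4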


2 hours ago, ressof said:

But the installer won't continue after 



FAT-fs (synoboot1): Volume was not properly unmounted. Some data may be corrupt. Please run fsck.

 

 


Did you set the partition to active for synoboot1 (sorry, typo before)?
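
If the active flag isn't the issue, the FAT warning itself can be cleared with dosfstools from a recovery shell (a sketch; device name from the fdisk output above):

fsck.vfat -a /dev/synoboot1   # auto-repair the 48 MB FAT boot partition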

Edited by mcdull

FWIW, I've got all four drive bays in my Gen8 populated, plus an SSD on the ODD SATA port (configured as read cache, so not included in the system or data volumes; it's RAID0 on /dev/md4). mdadm reports the following for md0 and md1 (Raid Devices = 12, clean/degraded):

bash-4.4# mdadm --detail /dev/md1
/dev/md1:
        Version : 0.90
  Creation Time : Wed Jan 23 18:53:26 2019
     Raid Level : raid1
     Array Size : 2097088 (2047.94 MiB 2147.42 MB)
  Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
   Raid Devices : 12
  Total Devices : 4
Preferred Minor : 1
    Persistence : Superblock is persistent
    Update Time : Thu Sep  9 12:23:55 2021
          State : clean, degraded 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

 

...and this for md2 and md3:

bash-4.4# mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sat Apr  4 16:26:16 2015
     Raid Level : raid5
     Array Size : 5846338944 (5575.50 GiB 5986.65 GB)
  Used Dev Size : 1948779648 (1858.50 GiB 1995.55 GB)
   Raid Devices : 4
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Fri Sep 10 08:42:38 2021
          State : clean 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

Is it "normal" for the system partition to be in a degraded state (I never looked at this when I was on Jun's bootloader or XPEnoboot), or is this something that needs to be fixed with the maxdisks parameter? A real DS3615xs has 12 drive bays, obviously, so that would be why it's expecting to find up to 12 devices...

Edited by WiteWulf

Look at my posts. There should be no problem running in a degraded state, but you can fix it with maxdisks and internalportcfg/usbportcfg/esataportcfg, then --grow the array to only the maximum number of hard drives your system supports.

 

This is my config:

 

  },
  "synoinfo": {
    "maxdisks": "4",
    "internalportcfg": "0xf",
    "usbportcfg": "0x3f0"
  },

esataportcfg is already 0x0 and I have no esata port.

 

root@Apollo:~# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdc3[0] sdd3[1]
      971940544 blocks super 1.2 [2/2] [UU]

md2 : active raid1 sda3[0] sdb3[1]
      483564544 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sda2[0] sdd2[3] sdc2[2] sdb2[1]
      2097088 blocks [4/4] [UUUU]

md0 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      2490176 blocks [4/4] [UUUU]

unused devices: <none>

 

root@Apollo:~# mdadm --detail /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Thu Sep  9 11:12:22 2021
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Fri Sep 10 10:57:06 2021
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : cbf9a42e:9fc3aaf1:3017a5a8:c86610be
         Events : 0.5993

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1

 

Edited by ct85msi

I even modded my Gen8 to hold a fifth drive stuck to the side with Velcro, lol,

using the SATA cable that's meant for the optical drive.

 

I must say that 7.0-41222 works really well so far.

I set up a RAID5 pool with a btrfs volume, and even Surveillance Station is working with my camera (one of the main reasons for choosing Synology over Unraid/TrueNAS, imo).

So it's just like my genuine Synology... now I have two boxes running DSM 7.

