RedPill - the new loader for 6.2.4 - Discussion



"It means the boot device is not properly mounted. In that case you should share the boot device again, along with the config; check PID/VID and the grub config, etc."

 

So basically it's a case of trial and error with a USB stick: re-burning the image and then trying a different USB port each time, perhaps?
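Before re-burning repeatedly, it may be worth checking that the stick's USB vendor/product IDs match the vid/pid set in the loader's grub.cfg, since a mismatch is a common reason synoboot fails to mount. A self-contained sketch; the example lsusb line, the IDs, and the grub.cfg path are assumptions for illustration:

```shell
# Sketch: compare a USB stick's VID/PID with the values in grub.cfg.
# The example IDs and the grub.cfg path below are assumptions.
GRUB_CFG="${1:-grub/grub.cfg}"

# On Linux, `lsusb` prints lines like: "Bus 002 Device 003: ID 090c:1000 ..."
# We simulate that output here so the sketch is self-contained.
lsusb_line="Bus 002 Device 003: ID 090c:1000 Silicon Motion Flash Drive"
vid=$(echo "$lsusb_line" | sed -n 's/.* ID \([0-9a-f]*\):\([0-9a-f]*\).*/\1/p')
pid=$(echo "$lsusb_line" | sed -n 's/.* ID \([0-9a-f]*\):\([0-9a-f]*\).*/\2/p')
echo "stick reports vid=0x$vid pid=0x$pid"

# grub.cfg in these loaders typically carries lines such as:
#   set vid=0x090c
#   set pid=0x1000
# so a mismatch shows up with a simple grep:
if [ -f "$GRUB_CFG" ]; then
    grep -E 'set (vid|pid)=' "$GRUB_CFG"
fi
```

If the two don't agree, mount the image (e.g. with OSFmount) and edit the vid/pid lines before burning again.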

17 minutes ago, Orphée said:

SN & MAC must match the current DSM model running.

 

Ok! I supposed that, but thanks for confirming. It is hard to find a real SN & MAC from an old NAS right now.

4 minutes ago, MastaG said:

Oh right, I didn't know that.
I just put my own MACs there and a serial number from some keygen.

It is only needed for specific features of Syno products, like HW decoding...

Not needed for usual NAS activity.

Just now, Orphée said:

It is only needed for specific features of Syno products, like HW decoding...

Not needed for usual NAS activity.

 

If I'm not mistaken, it is also needed for indexing and face recognition, right? Yesterday I was monitoring the /var/log/messages file in the middle of an indexing run (photo and video files) and it constantly logged an "SN is incorrect" error related to .mp4 files.


Yeah, it doesn't seem to be required for basic NAS operation.

Other than that, I noticed that 7.0-41222 is actually a beta firmware for the DS3615xs and it's pretty outdated.

I see there were newer builds, even an RC, but they didn't release them for this specific model?


It's harder than I thought. It looks like the toolchain process builds an image which won't work for me. If I download one from this thread, it at least starts.

 

Is there someone who can prepare / build a workable bare metal SynoBoot image file for 6.2.4-25556?

  • CPU i3 8100 quad core
  • Motherboard H370M with 6 Sata ports (all from intel chipset)
  • 2 Intel NICs: an I219-V and an I211-AT; one of the two works natively, the other is enabled by an extra driver. I don't care if only one works.

Before I use this in production I will first try it on a single clean SATA test disk and play with it.

 

The SN and MACs I can add myself with OSFmount. A virtual beer from me for the genius(es)!

 

 


Do not use this in production: RedPill is beta, DSM is beta. Wait for a stable release.

 

11 minutes ago, hannibal1969 said:

Is there someone who can prepare / build a workable bare metal SynoBoot image file for 6.2.4-25556?

16 hours ago, pocopico said:

Just for testing purposes: it looks like it is possible to install 918+ 7.0.1 in VMware. The steps include linking the detected SATA loader to /dev/synoboot like this:

 

# ln -s /dev/sdXX   /dev/synoboot
# ln -s /dev/sdXX1  /dev/synoboot1
# ln -s /dev/sdXX2  /dev/synoboot2
# ln -s /dev/sdXX3  /dev/synoboot3

 

where sdXX is your loader vmdk disk, whose device name can be found by running

 

# fdisk -l 

 

Once you have it installed there will be no need to fake it again, as the md RAID devices are discovered and mounted during boot and are not related to the loader. In any case, you may relink using the same commands.

*** edit: you need to relink after boot; I will do some more testing.

 

Again, wait for developers to release a final version before installing on your production systems.

 

That is what FixSynoboot is intended to correct (and does exactly what you show above)

https://xpenology.com/forum/topic/28183-running-623-on-esxi-synoboot-is-broken-fix-available/

 

Synoboot devices are not required after install, until you want to do an upgrade.  The loader storage is always modified by an upgrade.  I realize that is getting ahead of things with regard to RedPill, but that's how it works.
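The relink commands quoted above lend themselves to a small post-boot script, which is essentially what FixSynoboot automates. A dry-run sketch; the loader device name is an assumption you would take from `fdisk -l` on your own system:

```shell
# Sketch of a post-boot synoboot relink, mirroring the quoted commands.
# DRY_RUN and the default LOADER_DISK are assumptions for illustration;
# on a real system, identify the loader disk with `fdisk -l` first.
DRY_RUN=1
LOADER_DISK="${LOADER_DISK:-sdb}"   # hypothetical loader device

link() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "ln -s $1 $2"       # dry run: show what would be linked
    else
        ln -sf "$1" "$2"         # real run: create/replace the symlink
    fi
}

link "/dev/${LOADER_DISK}" /dev/synoboot
for p in 1 2 3; do
    link "/dev/${LOADER_DISK}${p}" "/dev/synoboot${p}"
done
```

Setting DRY_RUN=0 (as root, with LOADER_DISK pointing at the actual loader device) would perform the relink for real.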

2 hours ago, MastaG said:

Other than that, I noticed that 7.0-41222 is actually a beta firmware for the DS3615xs and it's pretty outdated.

I see there were newer builds, even an RC, but they didn't release them for this specific model?

Yeah, that's why I skipped 7.0 for my Gen8 and went straight to the RC of 7.0.1. Other than the kernel panics with my InfluxDB Docker container, it's been very stable and fast.

1 hour ago, djvas335 said:

Do not use this in production: RedPill is beta, DSM is beta. Wait for a stable release.

 

 

I know. I won't use it in production. I'll just pull out the production disks and the USB stick I use now. I just want to see if I can get 6.2.4-25556 (which is not beta) to work. I will use a new USB stick and a new clean drive.

17 hours ago, pocopico said:

It’s only for kernel module compilation. The kernel itself will not build, as it relies on code that Synology has not released.

 

Kernel modules were built using the described process. Which modules are you looking to build? Maybe I can provide them.

 

How do I build the Intel 10G network card driver into a *.ko? Synology can't run the "make" command! There is no kernel-devel!

 

Can you help me build it for 7.0.1? Thanks.

ixgbe-5.12.5.tar.gz
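On the question above: out-of-tree modules like ixgbe are normally built on a separate Linux machine against Synology's published toolkit kernel sources, not on the NAS itself (which is why there is no make or kernel-devel on DSM). A sketch of the usual out-of-tree kbuild invocation; KSRC and CROSS are hypothetical placeholder paths, not the real toolkit layout:

```shell
# Sketch only: cross-compiling the ixgbe driver against Synology kernel sources.
# KSRC and CROSS are hypothetical placeholders; point them at the kernel build
# tree and cross-compiler prefix from Synology's toolkit for your platform.
KSRC=/opt/syno-toolkit/build                       # hypothetical kernel source path
CROSS=/opt/syno-toolkit/bin/x86_64-pc-linux-gnu-   # hypothetical toolchain prefix

# tar xf ixgbe-5.12.5.tar.gz && cd ixgbe-5.12.5/src
# Standard out-of-tree module build; echoed here instead of executed:
echo make -C "$KSRC" M="$(pwd)" ARCH=x86_64 CROSS_COMPILE="$CROSS" modules
# The resulting ixgbe.ko is then copied to the NAS and loaded with insmod.
```

The kernel version the module is built against must match the running DSM kernel, or insmod will refuse to load it.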


My face recognition is also not working. In /dev/dri there are a card0 and a renderD128.

 

In /var/log/synofoto.log, I get this error over and over again:

2021-09-09T11:48:15-06:00 DS918 synofoto-face-extraction[19489]: /source/synophoto-plugin-face/src/face_plugin/main.cpp:22 face plugin init
2021-09-09T11:48:15-06:00 DS918 synofoto-face-extraction[19489]: uncaught thread task exception /source/synofoto/src/daemon/plugin/plugin_worker.cpp:90 plugin init failed: /var/packages/SynologyPhotos/target/usr/lib/libsynophoto-plugin-face.so

 

In /var/log/messages, I get this error over and over again:

2021-09-09T11:48:15-06:00 DS918 synofoto-face-extraction[19489]: uncaught thread task exception /source/synofoto/src/daemon/plugin/plugin_worker.cpp:90 plugin init failed: /var/packages/SynologyPhotos/target/usr/lib/libsynophoto-plugin-face.so
2021-09-09T11:48:15-06:00 DS918 synofoto-face-extraction[19489]: /source/synophoto-plugin-face/src/face_plugin/lib/face_detection.cpp:214 Error: (face plugin) load network failed

 

I am using a valid SN, but the MAC is the actual MAC of my NIC.

 

I installed Video Station, and /usr/syno/etc/codec/activation.conf showed successful activation of the various codecs.


Found a bug: when I run the integrated benchmark tool on any disk from Storage Manager, the RAID array gets corrupted. The benchmark never finishes, and when I stop it, the array resyncs.

 

[20253.145546] md_error: sdd2 is being to be set faulty
[20253.145836] raid1: Disk failure on sdd2, disabling device. 
               	Operation continuing on 3 devices
[20253.182475] RAID1 conf printout:
[20253.182659]  --- wd:3 rd:16
[20253.182815]  disk 0, wo:0, o:1, dev:sda2
[20253.183027]  disk 1, wo:0, o:1, dev:sdb2
[20253.183239]  disk 2, wo:0, o:1, dev:sdc2
[20253.183466]  disk 3, wo:1, o:0, dev:sdd2
[20253.188464] RAID1 conf printout:
[20253.188648]  --- wd:3 rd:16
[20253.188804]  disk 0, wo:0, o:1, dev:sda2
[20253.189016]  disk 1, wo:0, o:1, dev:sdb2
[20253.189227]  disk 2, wo:0, o:1, dev:sdc2
[20254.150881] md: unbind<sdd2>
[20254.161450] md: export_rdev(sdd2)
[21149.167214] md: bind<sdd2>
[21149.196351] RAID1 conf printout:
[21149.196559]  --- wd:3 rd:16
[21149.196714]  disk 0, wo:0, o:1, dev:sda2
[21149.196925]  disk 1, wo:0, o:1, dev:sdb2
[21149.197136]  disk 2, wo:0, o:1, dev:sdc2
[21149.197347]  disk 3, wo:1, o:1, dev:sdd2
[21149.197912] md: md1: current auto_remap = 0
[21149.198229] md: recovery of RAID array md1
[21149.198487] md: minimum _guaranteed_  speed: 10000 KB/sec/disk.
[21149.198804] md: using maximum available idle IO bandwidth (but not more than 600000 KB/sec) for recovery.
[21149.199308] md: using 128k window, over a total of 2097088k.
[21229.131072] perf interrupt took too long (2510 > 2500), lowering kernel.perf_event_max_sample_rate to 50000
[21232.348680] md: md1: recovery done.
[21232.363216] md: md1: current auto_remap = 0
[21232.396664] RAID1 conf printout:
[21232.396863]  --- wd:4 rd:16
[21232.397018]  disk 0, wo:0, o:1, dev:sda2
[21232.397229]  disk 1, wo:0, o:1, dev:sdb2
[21232.397440]  disk 2, wo:0, o:1, dev:sdc2
[21232.397691]  disk 3, wo:0, o:1, dev:sdd2

 


Also, it seems like the maxdisks and internalportcfg settings in /etc/synoinfo.conf keep reverting to their original values.

I have made the change in both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf.

The usbportcfg setting is retained, but the others revert back to maxdisks="16" and internalportcfg=0xffff.

 

 

Just now, jhoughten said:

Also, it seems like the maxdisks and internalportcfg settings in /etc/synoinfo.conf keep reverting to their original values.

I have made the change in both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf.

The usbportcfg setting is retained, but the others revert back to maxdisks="16" and internalportcfg=0xffff.

I believe Jun's loader patches these on each boot.  This may need to be added to RedPill.
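Until the loader patches these itself, one workaround is a boot-time task (e.g. a Task Scheduler "triggered" task) that re-applies the overrides with sed. A sketch that patches a demo copy of the file so it is safe to run anywhere; on DSM you would target /etc/synoinfo.conf and /etc.defaults/synoinfo.conf instead, and the 0xf mask for four internal ports is an assumption about the desired layout:

```shell
# Sketch: re-apply synoinfo.conf overrides on every boot.
# We create and patch a demo copy here; a real boot task would edit
# /etc/synoinfo.conf and /etc.defaults/synoinfo.conf instead.
f="demo-synoinfo.conf"
printf 'maxdisks="16"\ninternalportcfg="0xffff"\nusbportcfg="0x300000"\n' > "$f"

# Force maxdisks to 4 and mask the internal ports accordingly (0xf = 4 ports):
sed -i -e 's/^maxdisks=.*/maxdisks="4"/' \
       -e 's/^internalportcfg=.*/internalportcfg="0xf"/' "$f"

cat "$f"
```

Since DSM rewrites these files on update (and, apparently, on reboot here), the task has to run on every boot rather than once.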


That was my next problem: maxdisks="16" reverts to the original value after a reboot. I want to set maxdisks to 4, but no luck so far...

 


 

root@Apollo:~# cat /proc/mdstat 
Personalities : [raid1] 
md2 : active raid1 sda3[0] sdb3[1]
      483564544 blocks super 1.2 [2/2] [UU]
      
md3 : active raid1 sdc3[0] sdd3[1]
      971940544 blocks super 1.2 [2/2] [UU]
      
md1 : active raid1 sdd2[3] sda2[0] sdc2[2] sdb2[1]
      2097088 blocks [16/4] [UUUU____________]
      
md0 : active raid1 sda1[0] sdd1[3] sdc1[2] sdb1[1]
      2490176 blocks [16/4] [UUUU____________]
      
unused devices: <none>

 

I need to set the drive count to 4 because the system partitions want to replicate to 16 drives, so the system arrays are always in a degraded state.

 

root@Apollo:~# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Thu Sep  9 11:12:22 2021
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 16
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Sep  9 22:06:34 2021
          State : clean, degraded 
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : cbf9a42e:9fc3aaf1:3017a5a8:c86610be
         Events : 0.4873

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
       -       0        0        4      removed
       -       0        0        5      removed
       -       0        0        6      removed
       -       0        0        7      removed
       -       0        0        8      removed
       -       0        0        9      removed
       -       0        0       10      removed
       -       0        0       11      removed
       -       0        0       12      removed
       -       0        0       13      removed
       -       0        0       14      removed
       -       0        0       15      removed

 

This is from a genuine DS918+

 

root@heimdall:~# mdadm -D /dev/md0
/dev/md0:
        Version : 0.90
  Creation Time : Mon Nov 18 16:53:50 2019
     Raid Level : raid1
     Array Size : 2490176 (2.37 GiB 2.55 GB)
  Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
   Raid Devices : 4
  Total Devices : 4
Preferred Minor : 0
    Persistence : Superblock is persistent

    Update Time : Thu Sep  9 22:17:38 2021
          State : clean
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0

           UUID : f42bd872:0362117a:3017a5a8:c86610be
         Events : 0.14903

    Number   Major   Minor   RaidDevice State
       0       8        1        0      active sync   /dev/sda1
       1       8       17        1      active sync   /dev/sdb1
       2       8       33        2      active sync   /dev/sdc1
       3       8       49        3      active sync   /dev/sdd1
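Nothing in the thread has tried this, but if the maxdisks patch ever sticks, the already-created system arrays would still report Raid Devices : 16. In principle mdadm --grow can shrink that to match the genuine DS918+ geometry shown above. A dry-run sketch that only echoes the commands; whether DSM's patched mdadm accepts this is untested, so do not run it on an array you care about without a backup:

```shell
# Sketch: shrink the DSM system arrays from 16 to 4 raid-devices.
# Untested on DSM and potentially risky, so the commands are only
# echoed here (a dry run) rather than executed.
for md in /dev/md0 /dev/md1; do
    cmd="mdadm --grow $md --raid-devices=4"
    echo "$cmd"
done
```

For RAID1, stock mdadm supports changing --raid-devices online; the removed slots 4-15 hold no data, so no reshape of the mirrored blocks is involved.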

 


Hi, I'm also quite interested in how to solve the missing synoboot device.

I used the following loader image: redpill-DS918+_7.0.1-42214_b1630747150.img, for a VM on Unraid. I'm not sure, though, whether that loader is meant for a VM or for bare metal.

But it fails (perhaps therefore) at 55%.

 

