

Posted
On 9/29/2021 at 12:04 PM, pocopico said:

 

Yes, they are there:

 

https://github.com/pocopico/3.10.108-Modules

 

On 3615 you don't need to insmod scsi_transport_sas.ko or raid_class; they are statically built into the kernel.

Works for me with an LSI card: after insmod mpt2sas.ko it sees my drive (ESXi), and after refreshing the install menu it said I could install. However, the mpt2sas module won't load at boot, so after a restart it's the same process: I need to load the driver manually, and then it says the syno is not installed. I tried modifying linuxrc to load it at kernel boot, but that didn't work...
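
For reference, this is roughly the change I tried in linuxrc (the module path is a guess on my part; the actual ramdisk layout may differ):

    # scsi_transport_sas and raid_class are built into the 3615 kernel,
    # so in theory only the LSI driver itself needs loading:
    insmod /lib/modules/mpt2sas.ko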

Posted
2 minutes ago, RedwinX said:

Works for me with an LSI card: after insmod mpt2sas.ko it sees my drive (ESXi), and after refreshing the install menu it said I could install. However, the mpt2sas module won't load at boot, so after a restart it's the same process: I need to load the driver manually, and then it says the syno is not installed. I tried modifying linuxrc to load it at kernel boot, but that didn't work...

 

I will release a RedPill extension for LSI cards (mptsas/mpt2sas/mpt3sas) soon. I just need to find some time :|

Posted
Just now, pocopico said:

 

I will release a RedPill extension for LSI cards (mptsas/mpt2sas/mpt3sas) soon. I just need to find some time :|

Oh OK, thanks for your answer. So basically, if I pass through my LSI card and just add my RedPill img on SATA0:0 like with my Jun 1.03b, will it just upgrade my 6.2.3 to 7? Because currently, when I boot in SATA mode, it says: no drive. When I manually load the driver, it sees the drive but says: not installed, it cannot upgrade.

Posted
1 minute ago, RedwinX said:

Oh OK, thanks for your answer. So basically, if I pass through my LSI card and just add my RedPill img on SATA0:0 like with my Jun 1.03b, will it just upgrade my 6.2.3 to 7? Because currently, when I boot in SATA mode, it says: no drive. When I manually load the driver, it sees the drive but says: not installed, it cannot upgrade.

 

There is a series of checks during boot. If the boot scripts don't find what they expect, they fall back to install mode instead of upgrade mode. You need to have the driver loaded during boot so that the boot image finds the drives and identifies that there was a previously installed system.
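
If you want to check from the serial console whether the drives were actually visible when those checks ran, these standard commands will tell you whether the driver attached and the disks are there:

    dmesg | grep -i mpt2sas    # did the HBA driver bind to the card?
    cat /proc/partitions       # are the disks and their DSM partitions listed?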

 

 

Posted
1 minute ago, pocopico said:

 

There is a series of checks during boot. If the boot scripts don't find what they expect, they fall back to install mode instead of upgrade mode. You need to have the driver loaded during boot so that the boot image finds the drives and identifies that there was a previously installed system.

 

 

Thanks for the clarification and your work :)

Posted (edited)

I'm playing with @pocopico's mpt2sas driver and ESXi settings to make the drives show up as drive 1.

But currently I'm missing something...

I currently have :

DiskIdxMap=0C

SataPortMap=4

 

SATA0 for bootloader IMG/VMDK

and directly LSI HBA passthrough (no virtual SATA1 controller)

 


 

Disk shown as drive 5.

I tried DiskIdxMap=0C00 but got the same result.

 

Whereas DiskIdxMap=0C00 with a virtual SATA1 controller and a virtual disk works, and the disk is drive 1.

 

	linux /zImage DiskIdxMap=0C00 mac1=001132XXXXXX netif_num=1 earlycon=uart8250,io,0x3f8,115200n8 syno_hdd_powerup_seq=0 vid=0x46f4 synoboot_satadom=1 syno_hdd_detect=0 pid=0x0001 console=ttyS0,115200n8 elevator=elevator sn=XXXXXX root=/dev/md0 SataPortMap=4 earlyprintk loglevel=15 log_buf_len=32M syno_port_thaw=1 HddHotplug=0 withefi SasIdxMap=0 syno_hw_version=DS3615xs vender_format_version=2 

 

Am I doing something wrong?
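
For reference, here is my understanding of how these two parameters interact (a reasoning sketch using my values; I may be wrong on the details):

    # SataPortMap=4  -> DSM assumes the first SATA controller has 4 ports,
    #                   so drive slots 1-4 are reserved for it
    # DiskIdxMap=0C  -> disks on that controller start at index 0x0C (12),
    #                   which pushes the synoboot SATA DOM out of the visible slots
    # The SAS HBA is not a SATA controller, so its first disk lands after the
    # 4 reserved SATA slots: 4 + 1 = drive 5
    # DiskIdxMap=0C00 would map a *second* SATA controller to index 0x00, but
    # with the HBA passed through there is no second SATA controller, which
    # would explain why the extra byte changes nothing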

 

Edit: changing to SataPortMap=1 makes the SSD drive visible as /dev/sdb.

 

Better than nothing.

 

Edit 2: I can't install DSM with SataPortMap=1; /dev/synoboot* is not seen.

 

Reverting back to SataPortMap=4, I can install DSM 7 (the disk is still seen as drive 5).

 

Edit 3: SMART data is working.


Edited by Orphée
Posted


1 hour ago, Orphée said:

I'm playing with @pocopico's mpt2sas driver and ESXi settings to make the drives show up as drive 1.

[...]

Am I doing something wrong?

Did you try DiskIdxMap=0C SataPortMap=4?

 

 

Posted (edited)
3 hours ago, Orphée said:

I'm playing with @pocopico's mpt2sas driver and ESXi settings to make the drives show up as drive 1.

[...]

@u357

 

 

Edit: lspci result:

root@Xpen70:~# lspci -nn
0000:00:00.0 Host bridge [0600]: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge [8086:7190] (rev 01)
0000:00:01.0 PCI bridge [0604]: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge [8086:7191] (rev 01)
0000:00:07.0 ISA bridge [0601]: Intel Corporation 82371AB/EB/MB PIIX4 ISA [8086:7110] (rev 08)
0000:00:07.1 IDE interface [0101]: Intel Corporation 82371AB/EB/MB PIIX4 IDE [8086:7111] (rev 01)
0000:00:07.3 Bridge [0680]: Intel Corporation 82371AB/EB/MB PIIX4 ACPI [8086:7113] (rev 08)
0000:00:07.7 System peripheral [0880]: VMware Virtual Machine Communication Interface [15ad:0740] (rev 10)
0000:00:0f.0 VGA compatible controller [0300]: VMware SVGA II Adapter [15ad:0405]
0000:00:11.0 PCI bridge [0604]: VMware PCI bridge [15ad:0790] (rev 02)
0000:00:15.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:02:00.0 USB controller [0c03]: VMware USB1.1 UHCI Controller [15ad:0774]
0000:02:01.0 USB controller [0c03]: VMware USB2 EHCI Controller [15ad:0770]
0000:02:03.0 SATA controller [0106]: VMware SATA AHCI controller [15ad:07e0]
0000:03:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
0000:0b:00.0 RAID bus controller [0104]: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] [1000:0072] (rev 03)
0001:07:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller [1b4b:9235] (rev 11)
0001:08:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller [1b4b:9235] (rev 11)
0001:09:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller [1b4b:9235] (rev 11)
0001:0a:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller [1b4b:9235] (rev 11)


Edit again:


Actually, I can remove DiskIdxMap=0C; it seems this option is ignored no matter what.

As long as I keep SataPortMap=4, everything works, and the first SAS/SSD disk is detected as drive 5.

 

Edit again again:

So I confirm,

When I use a virtual disk on a virtual SATA1 controller, it works as expected:

DiskIdxMap=1000

SataPortMap=4

 

The 16 GB virtual disk is shown as drive 1.

 

Disk /dev/sda: 16 GiB, 17179869184 bytes, 33554432 sectors
Disk model: Virtual SATA Hard Drive
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x2d4890c5

Device     Boot   Start      End  Sectors  Size Id Type
/dev/sda1          2048  4982527  4980480  2.4G fd Linux raid autodetect
/dev/sda2       4982528  9176831  4194304    2G fd Linux raid autodetect
/dev/sda3       9437184 33349631 23912448 11.4G fd Linux raid autodetect

 

Whereas with the same settings but the virtual SATA1 controller removed and only the HBA passthrough card, it always appears as drive 5.

 

lspci result with only virtual drives (working as drive 1); note the second VMware SATA AHCI controller at 0000:02:04.0, which is absent from the HBA-only listing above:

root@Xpen_70:~# lspci -nn
0000:00:00.0 Host bridge [0600]: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX Host bridge [8086:7190] (rev 01)
0000:00:01.0 PCI bridge [0604]: Intel Corporation 440BX/ZX/DX - 82443BX/ZX/DX AGP bridge [8086:7191] (rev 01)
0000:00:07.0 ISA bridge [0601]: Intel Corporation 82371AB/EB/MB PIIX4 ISA [8086:7110] (rev 08)
0000:00:07.1 IDE interface [0101]: Intel Corporation 82371AB/EB/MB PIIX4 IDE [8086:7111] (rev 01)
0000:00:07.3 Bridge [0680]: Intel Corporation 82371AB/EB/MB PIIX4 ACPI [8086:7113] (rev 08)
0000:00:07.7 System peripheral [0880]: VMware Virtual Machine Communication Interface [15ad:0740] (rev 10)
0000:00:0f.0 VGA compatible controller [0300]: VMware SVGA II Adapter [15ad:0405]
0000:00:11.0 PCI bridge [0604]: VMware PCI bridge [15ad:0790] (rev 02)
0000:00:15.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:15.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:16.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:17.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.0 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.1 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.2 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.3 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.4 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.5 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.6 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:00:18.7 PCI bridge [0604]: VMware PCI Express Root Port [15ad:07a0] (rev 01)
0000:02:00.0 USB controller [0c03]: VMware USB1.1 UHCI Controller [15ad:0774]
0000:02:01.0 USB controller [0c03]: VMware USB2 EHCI Controller [15ad:0770]
0000:02:03.0 SATA controller [0106]: VMware SATA AHCI controller [15ad:07e0]
0000:02:04.0 SATA controller [0106]: VMware SATA AHCI controller [15ad:07e0]
0000:03:00.0 Ethernet controller [0200]: Intel Corporation 82574L Gigabit Network Connection [8086:10d3]
0001:07:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller [1b4b:9235] (rev 11)
0001:08:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller [1b4b:9235] (rev 11)
0001:09:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller [1b4b:9235] (rev 11)
0001:0a:00.0 SATA controller [0106]: Marvell Technology Group Ltd. 88SE9235 PCIe 2.0 x2 4-port SATA 6 Gb/s Controller [1b4b:9235] (rev 11)

 

Edited by Orphée
Posted (edited)
7 hours ago, ThorGroup said:

Be careful with setting multiple vCPUs - they tinkered with the number of CPUs (not cores) in the kernel. It cannot be trusted to work properly on systems which ship with a single CPU (i.e. all but some rack stations? do they even ship with 2 CPUs?).

Actually, this must be something specific to ESXi/VMware.

You can't set 2 cores per socket with only 1 "CPU" set.


 

If you switch CPU to 1, only 1 core per socket is available.


 

With 2 CPUs set, you can choose:

- 2 sockets / 1 core per socket

- 1 socket / 2 cores per socket (my current setup, with Face Detection working; see the .vmx sketch below)
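
In .vmx terms, the working combination should correspond to something like this (standard VMware keys; the values are my setup):

    numvcpus = "2"
    cpuid.coresPerSocket = "2"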

Edited by Orphée
Posted
14 minutes ago, buggy25200 said:

 

Yeah, fixed that, and trying to fix the other script section :( Do we have to specify and run a script for loading the modules?

Posted
5 minutes ago, pocopico said:

Yeah, fixed that, and trying to fix the other script section :( Do we have to specify and run a script for loading the modules?

It seems to me that ext-manager.sh does the job, starting at line 659.

Posted
1 hour ago, buggy25200 said:

It seems to me that ext-manager.sh does the job, starting at line 659.

 

I think I've got it all sorted out... the extensions are a bit more work than just dumping all the mods into a file like extra.lzma in the past.

Posted
48 minutes ago, pocopico said:

 

I think I've got it all sorted out... the extensions are a bit more work than just dumping all the mods into a file like extra.lzma in the past.

Guys, I have created a repo based on pocopico's repo to add the r8169 driver, but when building the loader via toolchain 0.11 with

./redpill_tool_chain.sh auto apollolake-7.0-41890

I faced this error:

[!] Extension "t-rex-xp.r8169" is not added/installed - did you misspell the name or forgot to do "ext-manager.sh add <URL>" first?

 

I have done the following steps:

1. ./redpill_tool_chain.sh build apollolake-7.0-41890

2. ./redpill_tool_chain.sh run apollolake-7.0-41890

3. cd ./redpill-loader 

4. ./ext-manager.sh add 'https://github.com/T-REX-XP/rp-ext-realtek/blob/main/r8169/rpext-index.json'

The extension was installed via that last command inside the container.

 

So, what is the correct procedure for integrating extensions with the toolkit?

Thanks

 

 

Posted (edited)

 

1 hour ago, T-REX-XP said:

I have done the following steps: [...]

 

 

For the time being this is the approach, until it's sorted out with TTG how to implement the integration.

 

To build the bootloader you then simply have to execute the command `make -C /opt/ build_all` or follow the steps in the redpill-load documentation.

As per my understanding, the bootloader should then be built with the added extension.

 

As a quick solution to get rid of step 3, I could embed a symlink to ext-manager.sh like TTG suggested.
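
So the whole sequence would look roughly like this (untested end to end; the extension URL is the one from the post above):

    ./redpill_tool_chain.sh run apollolake-7.0-41890
    # then, inside the container:
    /opt/redpill-load/ext-manager.sh add 'https://github.com/T-REX-XP/rp-ext-realtek/raw/main/r8169/rpext-index.json'
    make -C /opt/ build_all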

Edited by haydibe
Posted
16 minutes ago, haydibe said:

 

 

For the time being this is the approach, until it's sorted out with TTG how to implement the integration.

[...]

Quick answer:

I have successfully fixed the reported issue by adding the following lines to docker/entrypoint.sh before build_all:

echo "----Add ext modules"
/opt/redpill-load/ext-manager.sh add "https://github.com/T-REX-XP/rp-ext-realtek/raw/main/r8169/rpext-index.json"

 

But it's hardcoded.

Right now toolkit v0.11 does not have a proper way to use the extension loader.

There should be a section inside the platform config with links to the modules; those modules should then be installed before build_all runs (see the sketch below).
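
Something like this in entrypoint.sh would generalize my hardcoded line (just a sketch; neither the list file nor any config key exists in the toolkit yet):

    # read extension index URLs from a user-supplied list
    # and add them all before build_all runs
    while read -r url; do
      /opt/redpill-load/ext-manager.sh add "$url"
    done < /opt/extensions.list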

 

 

Posted
5 minutes ago, T-REX-XP said:

I have successfully fixed the reported issue by adding lines to docker/entrypoint.sh before build_all. [...]

 

 

 

Also, it seems the following repo should eventually contain all available extensions: https://github.com/RedPill-TTG/redpill-extensions. Like a marketplace or repository with links. Then it would be enough to define each extension's unique name inside the extensions section of the config:

"extensions": [ "thethorgroup.virtio", "example_dev.example_extension" ]

 

Posted (edited)

Hello everyone, can you tell me if I can update my XPEnology? I am using the Jun 1.04 bootloader and DSM version DS1019+ 6.2.3-25426 Update 3. Everything is installed on bare metal; I don't use ESXi.

Edited by aportnov
Posted
17 minutes ago, aportnov said:

Hello everyone, can you tell me if I can update my XPEnology? I am using the Jun 1.04 bootloader and DSM version DS1019+ 6.2.3-25426 Update 3. Everything is installed on bare metal; I don't use ESXi.

Hi, are you sure that you have installed the DS1019+ DSM version on your 1.04b loader?

Posted
45 minutes ago, T-REX-XP said:

I have successfully fixed the reported issue by adding lines to docker/entrypoint.sh before build_all. [...]

Uhm, so you didn't ask beforehand what the next commands would be to follow up on the steps you posted earlier?

 

I already had the implementation for configurable ext-manager support finished on Wednesday, but had no extension repos to test with. Now that we have some existing repos, I can test it. But the release will be on hold to make sure my implementation aligns with what @ThorGroup has in mind.

Posted (edited)
8 minutes ago, haydibe said:

I already had the implementation for configurable ext-manager support finished on Wednesday, but had no extension repos to test with.

Please use my repos. They are working:


 

./ext-manager.sh add https://github.com/T-REX-XP/rp-ext-realtek/raw/main/r8169/rpext-index.json

./ext-manager.sh add https://github.com/T-REX-XP/rp-ext-realtek/raw/main/r8152/rpext-index.json

 

Also, I have found that the current version of toolchain 0.11 copies entrypoint.sh into the container instead of passing it from the host machine.

I'm working on an improved version of the toolkit right now.

 

TODO: how about adding the power button modules as an extension?

 

 

Edited by T-REX-XP
Posted
16 minutes ago, haydibe said:

Uhm, so you didn't ask beforehand what the next commands would be to follow up on the steps you posted earlier?

 

1. Modify entrypoint.sh as I mentioned before.

2. Run ./redpill_tool_chain.sh build apollolake-7.0-41890 (needed because entrypoint.sh is copied into the container).

3. Run ./redpill_tool_chain.sh auto apollolake-7.0-41890

4. Check the build logs, then the logs inside the VM during boot. I have already verified on my VM that my custom modules are loaded during boot.

 

Posted (edited)
18 hours ago, vbap said:

From what I have read in this thread, this is fine (I believe @WiteWulf requested that ThorGroup put a better message here, as Jun's loader did).

Have you looked for your DiskStation on the network (find.synology.com) and started the install process?

Thank you @vbap. Very much appreciated.

 

I tried that, and it's not found with find.synology.com. But I do see that an IP address has been assigned. Interestingly, find.synology.com is not finding my other Synology box either, so I need to do some local troubleshooting to figure out what's going on. Maybe Pi-hole is blackholing it. I will do some more digging and post an update.
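
(In case anyone else hits this: a quick fallback is to scan the LAN for the DSM web port instead of relying on find.synology.com; the subnet below is just an example.)

    nmap -p 5000 --open 192.168.1.0/24    # DSM answers on 5000 (HTTP) / 5001 (HTTPS)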

 

Thank you for the help,

Edited by urundai
Posted (edited)
12 hours ago, coint_cho said:

Was hoping for an answer on whether there are any workarounds for old hardware, but since there's no harm in bumping, I would like to ask ThorGroup if this is bypassable :D. Disks work in the 6.1.7 MBR image modified by Genysys, but sadly not in other images.

 

I had a similar issue some weeks ago, with only 1 of 4 disks recognised. My bare-metal box is old, too (Intel SS4200, Core 2 Duo, ICH7R, MBR, no UEFI). I started from scratch with tool-chain 0.11 and DS3615xs. Before starting I cleaned the old tool-chain and cache, then put in pid, vid, s/n and mac, and it worked (it is PID first, then VID; I think I mixed that up on my first try, or the order was changed).
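
For anyone double-checking their own build, the relevant part of redpill-load's user_config.json looks roughly like this (placeholder values; the vid/pid shown are the synoboot values already seen earlier in this thread):

    {
      "extra_cmdline": {
        "pid": "0x0001",
        "vid": "0x46f4",
        "sn": "XXXXXX",
        "mac1": "001132XXXXXX"
      }
    }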

What hardware do you have (chipset/southbridge)?

Edited by paro44
This topic is now closed to further replies.