XPEnology Community

TinyCore RedPill Loader (TCRP)


pocopico

Recommended Posts

On 12/7/2022 at 1:43 PM, xenpology said:

Hi Guys,

 

I have an issue.

I have a Futro S740 with a Gemini Lake Intel Celeron 4105.

 

When I execute

./rploader.sh build geminilake-7.1.0-42661

I get: "Error: Plattform not found!"

 

Any ideas?

I am using the newest image, 0.9.3.

Yes. 

 

@pocopico has changed the names of the Synology models....

 

In v0.8.0.5 you have, for instance:

 

....broadwellnk-7.1.0-42661

 

In v0.9.3 you have 

 

......DS3622xsp-7.1.0-42661 instead

 

Before you build, just type ./rploader.sh into the terminal and execute it... that will list which models (names) the specific loader supports.
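A quick way to see the new names (running the script with no arguments prints usage plus the supported platform IDs; the grep is just an example filter):

./rploader.sh
./rploader.sh | grep 42661    # e.g. list only the platforms targeting build 42661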

Edited by Kamele0N

Hi,

I'm trying to install DSM 7.1 on KVM (RHEL8) by using the following guide:

All of the steps complete successfully, but after step 8 (exitcheck.sh reboot) I don't get a working network card for my guest (I tried virtio and e1000e).

 

If I use 'e1000e' as the H/W type for the NIC, the serial console shows this on bootup:

 


 

Mon Jan  2 22:29:25 2023

SynologyNAS login: [   49.725380] e1000e 0000:01:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[   49.727706] e1000e 0000:01:00.0 eth0: NIC Link is Down
[   66.301763] e1000e 0000:01:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[   66.303984] e1000e 0000:01:00.0 eth0: NIC Link is Down
[   71.602002] e1000e 0000:01:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[   71.614607] e1000e 0000:01:00.0 eth0: NIC Link is Down
[   74.747194] e1000e 0000:01:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[   74.749222] e1000e 0000:01:00.0 eth0: NIC Link is Down
[   79.934335] e1000e 0000:01:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[   79.936743] e1000e 0000:01:00.0 eth0: NIC Link is Down
[   80.564343] e1000e 0000:01:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx

 


If I try to use virtio for the NIC and add the driver, no NIC is detected, and an ls of /lib/modules does not show any virtio NIC driver.

 

 

./rploader.sh ext ds3622xsp-7.1.0-42661 add https://raw.githubusercontent.com/pocopico/redpill-load/master/redpill-virtio/rpext-index.json

./rploader.sh build ds3622xsp-7.1.0-42661

 

 

SynologyNAS> ls
adt7475.ko             leds-atmega1608.ko     syno_hddmon.ko
cdc-acm.ko             leds-lp3943.ko         synobios.ko
dca.ko                 lockd.ko               synofsbd.ko
e1000e.ko              loop.ko                synorbd.ko
ehci-hcd.ko            mpt3sas.ko             udp_tunnel.ko
ehci-pci.ko            mv14xx.ko              usb-common.ko
fat.ko                 nfs.ko                 usb-storage.ko
grace.ko               nfsv3.ko               usbcore.ko
hid.ko                 pgdrv.ko               usbhid.ko
i2c-algo-bit.ko        phy_alloc_0810_x64.ko  vfat.ko
i40e.ko                r8168.ko               vxlan.ko
igb.ko                 sg.ko                  xhci-hcd.ko
ip6_udp_tunnel.ko      sha256_generic.ko      xhci-pci.ko
ixgbe.ko               sunrpc.ko
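
(For anyone checking the same thing, a one-liner sketch:)

ls /lib/modules | grep -i virtio    # empty output means no virtio modules are present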



 

The steps I am using are as follows:

./rploader.sh update
./rploader.sh fullupgrade
./rploader.sh|grep 3622
./rploader.sh identifyusb
./rploader.sh serialgen DS3622xs+
./rploader.sh satamap
tce-load -iw nano
nano user_config.json
./rploader.sh listmods ds3622xsp-7.1.0-42661

./rploader.sh ext ds3622xsp-7.1.0-42661 add https://raw.githubusercontent.com/pocopico/redpill-load/master/redpill-virtio/rpext-index.json (ONLY FOR VIRTIO)

./rploader.sh build ds3622xsp-7.1.0-42661
exitcheck.sh reboot
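
For reference, this is roughly how I pick the NIC model on the KVM side (a sketch; the domain name 'DSM' and the 'default' network are placeholders for my actual setup):

virsh attach-interface --domain DSM --type network --source default \
      --model e1000e --config    # or --model virtio (with the redpill-virtio extension above)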

 


 

Any ideas?

 

I'd like to get DSM 7.1.1 working in a VM so I can test out things without doing it on the real NAS (my only NAS).
Thanks for the help,


V.

 

Edited by ElCoyote_

Interestingly, I found a workaround for e1000e:

 

- Let the KVM DSM guest boot and log in on the serial port.

- On the virtual serial port, log in as 'root'.

- Once logged in, type 'rmmod e1000e' (this unloads the driver).

- Then type 'insmod /lib/modules/e1000e.ko'.

- Then things started working and I could ping my virtual NAS.

 

I was then able to find the NAS in the Synology assistant and install it.

Upon bootup, I was greeted by the same kind of e1000e messages:
 

[  108.413268] e1000e 0000:01:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[  108.416259] e1000e 0000:01:00.0 eth0: NIC Link is Down
[  117.923412] e1000e 0000:01:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[  117.926688] e1000e 0000:01:00.0 eth0: NIC Link is Down
[  119.126404] e1000e 0000:01:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[  119.129814] e1000e 0000:01:00.0 eth0: NIC Link is Down
[  120.003781] e1000e 0000:01:00.0 eth0: NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[  120.007264] e1000e 0000:01:00.0 eth0: NIC Link is Down

 

Again, I did the module unload/reload and had to configure the IP manually, and things started working:

root@SynologyNAS:~# rmmod e1000e
[  160.173250] Module [e1000e] is removed. 
[  160.238040] e1000e 0000:01:00.0 eth0: NIC Link is Down

root@SynologyNAS:~# insmod /lib/modules/e1000e.ko 
[  173.549954] e1000e: Intel(R) PRO/1000 Network Driver - 3.4.2.4-NAPI
[  173.551544] e1000e: Copyright(c) 1999 - 2019 Intel Corporation.
[  173.555129] e1000e 0000:01:00.0: Interrupt Throttling Rate (ints/sec) set to dynamic conservative mode
[  173.601491] e1000e 0000:01:00.0 0000:01:00.0 (uninitialized): registered PHC clock
[  173.657412] e1000e 0000:01:00.0 eth0: (PCI Express:2.5GT/s:Width x1) 52:54:00:8c:0b:bd
[  173.658556] e1000e 0000:01:00.0 eth0: Intel(R) PRO/1000 Network Connection
[  173.659545] e1000e 0000:01:00.0 eth0: MAC: 3, PHY: 8, PBA No: 000000-000

root@SynologyNAS:~# ifconfig eth0 10.0.128.227 netmask 255.255.252.0 up
[  292.929087] e1000e 0000:01:00.0 eth0: MSI interrupt test failed, using legacy interrupt.
[  292.932374] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
[  292.933235] 8021q: adding VLAN 0 to HW filter on device eth0
root@SynologyNAS:~# [  293.312637] e1000e: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: Rx/Tx
[  293.318196] IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready

It really seems as if the e1000e driver does not work the first time it is loaded.

This is what the virtual e1000e Ethernet hardware shows up as:

 

root@SynologyNAS:~# lspci -v -s  0000:01:00.0
0000:01:00.0 Class 0200: Device 8086:10d3
    Subsystem: Device 8086:0000
    Flags: bus master, fast devsel, latency 0, IRQ 22
    Memory at fca40000 (32-bit, non-prefetchable) [size=128K]
    Memory at fca60000 (32-bit, non-prefetchable) [size=128K]
    I/O ports at d000 [size=32]
    Memory at fca80000 (32-bit, non-prefetchable) [size=16K]
    Expansion ROM at fca00000 [disabled] [size=256K]
    Capabilities: [c8] Power Management version 2
    Capabilities: [d0] MSI: Enable- Count=1/1 Maskable- 64bit+
    Capabilities: [e0] Express Endpoint, MSI 00
    Capabilities: [a0] MSI-X: Enable- Count=5 Masked-
    Capabilities: [100] Advanced Error Reporting
    Capabilities: [140] Device Serial Number 52-54-00-ff-ff-8c-0b-bd
    Kernel driver in use: e1000e

 

It also affects the VM itself each time it boots, so I just created a task on bootup that does the following (DSM -> Task Scheduler -> Triggered Task -> Boot-up -> user-defined script, run as root):

/sbin/rmmod e1000e
/sbin/insmod /lib/modules/e1000e.ko
/bin/systemctl restart synoovs-vswitch.service
/sbin/ifconfig eth0 up
/sbin/ifconfig eth1 up
/sbin/ifconfig ovs_bond0 up
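
For clarity, the same task body with comments on what each line does (paths exactly as above; the ovs service name is the one on my install):

#!/bin/sh
# Unload and reload e1000e; the first load never brings the link up reliably under KVM
/sbin/rmmod e1000e
/sbin/insmod /lib/modules/e1000e.ko
# Restart Open vSwitch so DSM re-attaches its bridges, then bring the interfaces up
/bin/systemctl restart synoovs-vswitch.service
/sbin/ifconfig eth0 up
/sbin/ifconfig eth1 up
/sbin/ifconfig ovs_bond0 up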

 


Is it possible to update DSM from within DSM (like you'd do on a regular Synology)?

 

my virtual DSM KVM guest:

1) reports an error when going to DSM -> Control Panel -> Update & Restore: "Connection failed. Please check your internet connection"
2) no longer boots if I try to apply the DSM .pat file manually ("Manual DSM update").


Awesome stuff, got my server upgraded thanks to the fantastic work here. It wasn't hassle-free, however: I had some issues with DSM not installing, but after digging through the forum I found the solution by adding internalportcfg="0xfff", and it worked. Could someone explain in plain English what the correct value is for your particular system? Can someone also tell me how to clear the HDD logs in Storage Manager? Thank you.

@Peter Suh, awesome menu and easy to use. Thank you!!


I'm trying to get an Adaptec 72405 (24 port card) working on DS3622xs+ under Proxmox, with DSM installed on the (mirrored) Proxmox VM storage drive, but there is something odd happening. I've spent days trying to get this to work and really need some help!

 

I can get a basic build working following the tutorial here https://www.wundertech.net/how-to-install-xpenology-on-proxmox-dsm-7/

This gets DSM running on the Proxmox storage drive.

 

The next step (and where I am stuck) has been trying to create a build passing an Adaptec 72405 (24 port card set to HBA mode) through, with DSM still installing on the VM drive and the disks on the Adaptec card then seen as additional storage (to create storage pools etc.).

Essentially the same process as above, but adding the Adaptec driver extension. (./rploader.sh ext broadwellnk-7.1.0-42661 add https://raw.githubusercontent.com/pocopico/rp-ext/master/aacraid/rpext-index.json )

 

 

When booting into DSM and running the Synology installer:

The DSM installer's format step fails, but DSM does install and is available on rebooting the VM.

DSM fails to see the 20G VM disk

The drives on the Adaptec card start from slot 3

Only 14 of the 15 drives I currently have installed are picked up (if I add extra drives they are not listed)*

*I don't understand this as maxdisks was set to 32 in the user_config.json, so I was expecting to at least to see all the drives (just not necessarily in the right order).

 

I suspect I need to provide some manual values for SataPortMap and DiskIdxMap (assuming these will work using the Adaptec card), but I'm guessing this needs to see all the drives to start with?

I'm not sure why the Intel SATA controller is showing 8 drives; my understanding is that it should only show the single 20 GB Proxmox drive.

Based on the satamap output below, I've set maxdisks to 32 (8 Intel + 24 Adaptec).

 

TinyCore satamap output:

tc@box:~$ ./rploader.sh satamap now
Machine is VIRTUAL Hypervisor=KVM
Found SCSI HBAs, We need to install the SCSI modules
scsi-5.10.3-tinycore64 is already installed!
Succesfully installed SCSI modules

Found "00:1f.2 Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)"
Detected 6 ports/0 drives. Mapping KVM q35 bogus controller after maxdisks

Found "06:07.0 Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)"
Detected 6 ports/2 drives. Mapping SATABOOT drive after maxdisks
WARNING: Other drives are connected that will not be accessible!

Found SCSI/HBA "01:00.0 Adaptec Series 7 6G SAS/PCIe 3 (rev 01)" (15 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid bus number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
Found SCSI/HBA "06:07.0 Intel Corporation 82801IR/IO/IH (ICH9R/DO/DH) 6 port SATA Controller [AHCI mode] (rev 02)" (8 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
Found SCSI/HBA "05:03.0 Red Hat, Inc. QEMU PCI-PCI bridge 06:03.0 Red Hat, Inc Virtio memory balloon" (81 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)
lspci: -s: Invalid slot number
Found SCSI/HBA "" (0 drives)

Computed settings:
SataPortMap=11
DiskIdxMap=2020
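
(My reading of those computed values, going by how these parameters are commonly described for RedPill loaders; not authoritative:)

# SataPortMap=11  -> one digit per SATA controller: map 1 port on the first
#                    controller and 1 port on the second
# DiskIdxMap=2020 -> two hex digits per controller: both start their disk
#                    numbering at 0x20 (= 32), i.e. past maxdisks, which matches
#                    the "after maxdisks" lines in the output above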

 

At the point the DSM installer errors during formatting, fdisk (via ssh) shows the following:


fdisk -l |grep Disk | grep sd
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
fdisk: device has more than 2^32 sectors, can't use all of them
Disk /dev/sdc: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdd: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sde: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdf: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdg: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdh: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdi: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdj: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdk: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdl: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdm: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdn: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdo: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdp: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdq: 2048 GB, 2199023255040 bytes, 4294967295 sectors
Disk /dev/sdq doesn't contain a valid partition table

Given the 2^32-sector warnings above, I think the 2048 GB lines are fdisk clamping the real drive sizes rather than the DSM-created partitions.

 

On reboot with DSM running, fdisk from ssh shows 15 drives (but not the 20 GB virtual drive):


Disk /dev/sdc: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdd: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdf: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdg: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors
Disk /dev/sdh: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdi: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdj: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdk: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdl: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdm: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdo: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdp: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdq: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sdn: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Disk /dev/sde: 10.9 TiB, 12000138625024 bytes, 23437770752 sectors

 

(attached screenshot: LBOISYZ.jpg)

Edited by DarkBlade

2 hours ago, DarkBlade said:

I'm trying to get an Adaptec 72405 (24 port card) working on DS3622xs+ under Proxmox, with DSM installed on the (mirrored) Proxmox VM storage drive... Only 14 of the 15 drives I currently have installed are picked up, and based on the satamap output I've set maxdisks to 32 (8 Intel + 24 Adaptec). (full post quoted above)

 

What was your maxdisks setting in user_config.json when you were building the loader? The default is 16. You will not see additional drives unless you modify that and re-create the loader.

 

"internalportcfg": "0xffffff",

"maxdisks": "24",

 


4 minutes ago, pocopico said:

 

What was your maxdisks setting in user_config.json when you were building the loader? The default is 16. (quoted above)

 

 

Before building the loader I changed maxdisks from 16 to 32 (assuming 8 Intel + 24 Adaptec as reported by ./rploader.sh satamap) using the TinyCore GUI text editor.

internalportcfg was left at the default "0xffff"

 

Yet the resulting build appears to be limited to 16 disks? (so any disks over the 16 slots are ignored in DSM)


22 minutes ago, DarkBlade said:

 

Before building the loader I changed maxdisks from 16 to 32; internalportcfg was left at the default "0xffff"... Yet the resulting build appears to be limited to 16 disks? (quoted above)

 

Yes, you need to change internalportcfg as well, to "0xffffffff".

It's the hexadecimal representation of 32 drives, all set to enabled:

 

Drives : 1111 1111 1111 1111 1111 1111 1111 1111

 

It has been discussed several times on this forum; you can look it up.

I've also found this:

 

https://github.com/fbelavenuto/arpl/issues/304#issuecomment-1306796079
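
A quick way to derive the value for any drive count (a sketch; assumes one bit per drive slot):

printf '0x%x\n' $(( (1 << 32) - 1 ))    # 32 drives -> 0xffffffff
printf '0x%x\n' $(( (1 << 24) - 1 ))    # 24 drives -> 0xffffff (the earlier example)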

 

Edited by pocopico

Please let me know if these steps are correct for DS3622xs+ 42962u3 with TinyCore, without Friend. Thank you!

 

install
./rploader.sh update
./rploader.sh fullupgrade
./rploader.sh identifyusb
./rploader.sh serialgen DS3622xs+ realmac
./rploader.sh satamap
./rploader.sh backup
./rploader.sh build ds3622xsp-7.1.0-42962 manual   (I need manual so the build goes without the ixgbe driver module; there is a kernel panic when the ixgbe driver is added from TinyCore because this driver is native in DSM)

exitcheck.sh reboot

 

Install finished.

Then install Update 42962u3 in DSM, reboot, and then boot into TinyCore:

 

./rploader.sh update
./rploader.sh fullupgrade
sudo ./rploader.sh postupdate ds3622xsp-7.1.0-42962 
exitcheck.sh reboot

 

 

Edited by nemesis122

On 1/4/2023 at 12:42 PM, pocopico said:

 

yes you need to change the internalportcfg as well to "0xffffffff"... (full reply quoted above)

 

 

Thanks for this - I can vaguely recall doing this for my old DSM 6 install! I should have remembered really.

All the drives are seen on the raid card now.

However, it's still not seeing the drive on the virtual controller for some reason - but I've given up with that (too much time spent already).

Alongside the new drives, it even sees the DSM 6 volume on the older drives from my old DSM server.

 

With the dual 10GbE NIC, this thing is FAST at moving large files!!! Especially onto another RAID array on a backup server.


On 1/11/2023 at 7:55 PM, nemesis122 said:

Hi,

All is running fine on 42962 u3 with TinyCore only (no Friend), but the loader.img is cleaned during the process and I need to save this file.

How can I do that? Ty

 


Save the TinyCore configuration state as the default, so that the next boot of TinyCore starts with all your settings:

./rploader.sh backup

 

Back up the generated RedPill loader partition to available space on the USB flash drive

./rploader.sh backuploader

 


Can someone please help me with this one?

 

No matter what I do, and no matter which USB device I choose... my problem with the correct PID & VID stays the same.

I am assuming that this is the reason why my installations fail (at around 65-66%: "file is corrupted!").

I am also trying to input the PID & VID manually in user_config.json... but to no avail!
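
For reference, this is where I am putting them. Assuming the usual TCRP user_config.json layout, the values go in the extra_cmdline section (the VID/PID below are placeholders, not my real ones):

"extra_cmdline": {
    "vid": "0x0951",
    "pid": "0x1666"
}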

 

 

(attached screenshot: IMG_20230112_194632.jpg)

Edited by Kamele0N

Tell me: I wanted to update the system from DSM 6.1.7 to something newer, but when using TCRP it does not see the disks.
When configuring the bootloader, it does not correctly determine the parameters (it computes SataPortMap=1, DiskIdxMap=00).
Motherboard: Gigabyte GA-G41M-Combo.

Edited by personany

1 hour ago, personany said:

I wanted to update the system from DSM 6.1.7 to something newer, but when using TCRP it does not see the disks. (quoted above)

What platform are you trying to build? Can you post your output?


26 minutes ago, rojoone2 said:

What platform are you trying to build? Can you post your output?

I tried DS3617xs DSM 7.1.1 and DS3615xs DSM 6.2.4 with TCRP 0.9.4.

Jun's DS3617xs DSM 6.1.7 bootloader is currently installed.

I assume the motherboard does not support AHCI; maybe that's the cause?

Edited by personany
