XPEnology Community

Develop and refine the DS3622xs+ loader


yanjun


4 minutes ago, neonflx said:

Supermicro X10SDV-6C+-TLN4F

You have a board with IPMI, so you should be able to reach it via the dedicated management port, as per the specification.
Via IPMI you can get to a serial console where you can see the debug messages and where it stops.
Try to install without any internet connection.

 

Intel® Node Manager, IPMI (Intelligent Platform Management Interface) v2.0 with KVM support, IPMI2.0, KVM with dedicated LAN, NMI, SUM, SuperDoctor® 5, Watchdog
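If it helps, here is a minimal sketch of opening that serial console over IPMI Serial-over-LAN from another machine with ipmitool installed (the BMC address and credentials are placeholders):

# Sketch only: attach to the Supermicro BMC's Serial-over-LAN console
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> sol activate
# detach with the escape sequence "~." ; if a session is stuck, close it with:
ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> sol deactivate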

 

Edited by Aigor

@pocopico, comparing the shutdown/restart issue with the DS3615xs loader:

 

[  OK  ] Stopped Synology swap.
[  OK  ] Stopped System Logger Daemon.
[  OK  ] Removed slice Synology DSM internal service.
[  OK  ] Stopped Synology task manager.
[  OK  ] Stopped synodsmnotify.
         Stopping synodsmnotify...
[  OK  ] Stopped Create Volatile Files and Directories.
         Stopping Create Volatile Files and Directories...
[  OK  ] Reached target Shutdown.
[   73.949785] systemd-shutdown[1]: Sending SIGTERM to remaining processes...
[   73.953160] systemd-journald[5230]: Received SIGTERM from PID 1 (systemd-shutdow).
[   74.954840] systemd-shutdown[1]: Sending SIGKILL to remaining processes...
[   74.957865] systemd-shutdown[1]: Unmounting file systems.
[   74.958880] systemd-shutdown[1]: Unmounting /config.
[   74.959732] systemd-shutdown[1]: Unmounting /tmp.
[   75.082367] EXT4-fs (md0): re-mounted. Opts: (null)
[   75.085373] EXT4-fs (md0): re-mounted. Opts: (null)
[   75.086235] EXT4-fs (md0): re-mounted. Opts: (null)
[   75.087066] systemd-shutdown[1]: All filesystems unmounted.
[   75.088058] systemd-shutdown[1]: Deactivating swaps.
[   75.088925] systemd-shutdown[1]: All swaps deactivated.
[   75.089810] systemd-shutdown[1]: Detaching loop devices.
[   75.091176] systemd-shutdown[1]: All loop devices detached.
[   75.092111] systemd-shutdown[1]: Detaching DM devices.
[   75.093085] systemd-shutdown[1]: All DM devices detached.
systemd-shutdown: run synobios_uninit OK
[   75.135109] systemd-shutdown[1]: Powering off.
[   76.136658] sd 1:0:0:0: [sda] Stopping disk
[   76.137455] sd 0:0:0:0: [synoboot] Stopping disk
[   76.140596] ACPI: Preparing to enter system sleep state S5
[   76.141719] Power down.
[   76.142403] acpi_power_off called
Connection closed by foreign host.

As you can see, the line right after "[   76.137455] sd 0:0:0:0: [synoboot] Stopping disk" is the ACPI one.

 

So I bet something related to that is missing.


1 hour ago, Aigor said:

I suppose so; I have both entries in GRUB, USB and SATA.

So I installed your IMG on ESXi, after editing grub.cfg to match my ESXi configuration.

I installed it successfully.

 

Shutdown and restart work on this one.

 

[  319.062661] systemd-shutdown[1]: Unmounting file systems.
[  319.063708] systemd-shutdown[1]: Unmounting /config.
[  319.072539] systemd-shutdown[1]: Unmounting /tmp.
[  319.184381] EXT4-fs (md0): re-mounted. Opts: (null)
[  319.186738] EXT4-fs (md0): re-mounted. Opts: (null)
[  319.187595] EXT4-fs (md0): re-mounted. Opts: (null)
[  319.188447] systemd-shutdown[1]: All filesystems unmounted.
[  319.189464] systemd-shutdown[1]: Deactivating swaps.
[  319.190361] systemd-shutdown[1]: All swaps deactivated.
[  319.191286] systemd-shutdown[1]: Detaching loop devices.
[  319.192613] systemd-shutdown[1]: All loop devices detached.
[  319.193649] systemd-shutdown[1]: Detaching DM devices.
[  319.194567] systemd-shutdown[1]: All DM devices detached.
systemd-shutdown: run synobios_uninit OK
[  319.217317] systemd-shutdown[1]: Powering off.
[  320.220243] <redpill/override_symbol.c:239> Obtaining lock for <GetHwCapability+0x0/0x100 [broadwellnk_synobios]/ffffffffa0b14430>
[  320.222261] <redpill/override_symbol.c:246> Writing original code to <ffffffffa0b14430>
[  320.223609] <redpill/override_symbol.c:239> Released lock for <ffffffffa0b14430>
[  320.224892] <redpill/override_symbol.c:207> Obtaining lock for <GetHwCapability+0x0/0x100 [broadwellnk_synobios]/ffffffffa0b14430>
[  320.226852] <redpill/override_symbol.c:217> Writing trampoline code to <ffffffffa0b14430>
[  320.228221] <redpill/override_symbol.c:207> Released lock for <ffffffffa0b14430>
[  320.229479] <redpill/bios_hwcap_shim.c:65> proxying GetHwCapability(id=8)->support => real=0 [org_fout=0, ovs_fout=0]
[  321.145234] sd 1:0:0:0: [sda] Stopping disk
[  321.146068] sd 0:0:0:0: [synoboot] Stopping disk
[  321.147140] e1000e: EEE TX LPI TIMER: 00000000
[  321.182932] e1000e 0000:03:00.0: Refused to change power state, currently in D0
[  321.186261] parameter error. gpiobase=00000000, pin=0, pValue=ffff88007b153da4
[  321.688016] Turned off USB vbus gpio 0 (ACTIVE_LOW)
[  321.688856] parameter error. gpiobase=00000000, pin=0, pValue=ffff88007b153da4
[  322.191100] Turned off USB vbus gpio 0 (ACTIVE_LOW)
[  322.191940] parameter error. gpiobase=00000000, pin=0, pValue=ffff88007b153da4
[  322.694204] Turned off USB vbus gpio 0 (ACTIVE_LOW)
[  322.695057] parameter error. gpiobase=00000000, pin=0, pValue=ffff88007b153da4
[  323.197373] Turned off USB vbus gpio 0 (ACTIVE_LOW)
[  323.198215] parameter error. gpiobase=00000000, pin=0, pValue=ffff88007b153da4
[  323.700347] Turned off USB vbus gpio 0 (ACTIVE_LOW)
[  323.709283] ACPI: Preparing to enter system sleep state S5
[  323.710428] reboot: Power down
[  323.711224] acpi_power_off called
[  323.711885] Confirm SLP_TYP poweroff status 0 pm1a 1 pm1b 1
[  323.722817] Confirm OS poweroff status 0 pm1a 2001 pm1b 2001
Connection closed by foreign host.

 

Your build floods with the following messages at boot:

 

[   98.193601] Copyright(c) 1999 - 2019 Intel Corporation.
[   98.194602] ixgbe: probe of 0001:04:00.0 failed with error -5
[   98.195598] ixgbe: probe of 0001:04:00.1 failed with error -5
[   98.197389] Module [ixgbe] is removed.
[   98.205090] <redpill/bios_shims_collection.c:29> set_gpio pin info 0  4
[   98.206213] <redpill/bios_shims_collection.c:30> set_gpio p

but shutdown/restart works.

 

 



SynologyNAS login: [  124.241961] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  124.394974] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  124.442649] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  124.495216] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  124.593152] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  169.797817] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  169.887183] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  229.989726] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  230.081895] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  230.265004] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  230.332759] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  230.385358] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  230.573577] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  230.637467] <redpill/smart_shim.c:352> ATA_CMD_ID_ATA confirmed SMART support - noop
[  230.675400] <redpill/smart_shim.c:352> ATA_CMD_ID_ATA confirmed SMART support - noop
[  279.714133] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  279.800995] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  296.101481] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  296.196747] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  339.905760] <redpill/smart_shim.c:352> ATA_CMD_ID_ATA confirmed SMART support - noop
[  339.944086] <redpill/smart_shim.c:352> ATA_CMD_ID_ATA confirmed SMART support - noop
[  356.294697] <redpill/smart_shim.c:352> ATA_CMD_ID_ATA confirmed SMART support - noop
[  356.333374] <redpill/smart_shim.c:352> ATA_CMD_ID_ATA confirmed SMART support - noop
[  388.822489] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  388.912093] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  436.468565] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  436.554132] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  448.999431] <redpill/smart_shim.c:352> ATA_CMD_ID_ATA confirmed SMART support - noop
[  449.038142] <redpill/smart_shim.c:352> ATA_CMD_ID_ATA confirmed SMART support - noop
[  496.660067] <redpill/smart_shim.c:352> ATA_CMD_ID_ATA confirmed SMART support - noop
[  496.698494] <redpill/smart_shim.c:352> ATA_CMD_ID_ATA confirmed SMART support - noop
[  496.784944] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  496.879306] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  497.062403] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  497.130310] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  497.182796] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  497.250593] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  497.303221] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  497.370990] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
[  504.881419] ext2: synoboot2 mounted, process=updater
[  504.907038] vfat: synoboot2 mounted, process=updater
[  504.963077] synoboot2 unmounted, process=updater
[  504.997788] ext2: synoboot2 mounted, process=updater
[  505.023036] vfat: synoboot2 mounted, process=updater
[  505.079066] synoboot2 unmounted, process=updater
[  505.113789] ext2: synoboot2 mounted, process=updater
[  505.139415] vfat: synoboot2 mounted, process=updater
[  505.195816] synoboot2 unmounted, process=updater
[  505.230792] ext2: synoboot2 mounted, process=updater
[  505.256414] vfat: synoboot2 mounted, process=updater
[  510.712079] synoboot2 unmounted, process=updater
[  515.746145] ext2: synoboot2 mounted, process=updater
[  515.771893] vfat: synoboot2 mounted, process=updater
[  516.309141] synoboot2 unmounted, process=updater
[  516.351024] ext2: synoboot1 mounted, process=updater
[  516.376780] vfat: synoboot1 mounted, process=updater
[  516.441321] synoboot1 unmounted, process=updater
[  517.857360] <redpill/intercept_execve.c:82> Blocked ./H2OFFT-Lx64 from running
[  517.891909] Module [phy_alloc] is removed.
[  552.155601] EXT4-fs (md0): couldn't mount as ext3 due to feature incompatibilities
[  552.247626] EXT4-fs (md0): mounted filesystem with ordered data mode. Opts: (null)
The system is going down NOW!
Sent SIGTERM to all processes
[  553.401802] md: md0 in immediate safe mode
[  553.401806] md: md1 in immediate safe mode
Sent SIGKILL to[  553.440316] Module [usb_storage] is removed.
 all processes
[  553.461172] usbcore: deregistering interface driver usb-storage
rmmod usb_storage
Requesting system reboot
[  558.511884] <redpill/usb_boot_shim.c:98> Previously shimmed boot device gone away
[  558.547602] <redpill/override_symbol.c:250> Obtaining lock for <GetHwCapability+0x0/0x100 [broadwellnk_synobios]/ffffffffa052e430>
[  558.603538] <redpill/override_symbol.c:250> Writing original code to <ffffffffa052e430>
[  558.641924] <redpill/override_symbol.c:250> Released lock for <ffffffffa052e430>
[  558.677601] <redpill/override_symbol.c:221> Obtaining lock for <GetHwCapability+0x0/0x100 [broadwellnk_synobios]/ffffffffa052e430>
[  558.733769] <redpill/override_symbol.c:221> Writing trampoline code to <ffffffffa052e430>
[  558.772864] <redpill/override_symbol.c:221> Released lock for <ffffffffa052e430>
[  558.808172] <redpill/bios_hwcap_shim.c:66> proxying GetHwCapability(id=8)->support => real=0 [org_fout=0, ovs_fout=0]
[  558.858996] <redpill/override_symbol.c:250> Obtaining lock for <GetHwCapability+0x0/0x100 [broadwellnk_synobios]/ffffffffa052e430>
[  558.915627] <redpill/override_symbol.c:250> Writing original code to <ffffffffa052e430>
[  558.953880] <redpill/override_symbol.c:250> Released lock for <ffffffffa052e430>
[  558.989236] <redpill/override_symbol.c:221> Obtaining lock for <GetHwCapability+0x0/0x100 [broadwellnk_synobios]/ffffffffa052e430>
[  559.045347] <redpill/override_symbol.c:221> Writing trampoline code to <ffffffffa052e430>
[  559.083725] <redpill/override_symbol.c:221> Released lock for <ffffffffa052e430>
[  559.119358] <redpill/bios_hwcap_shim.c:66> proxying GetHwCapability(id=8)->support => real=0 [org_fout=0, ovs_fout=0]
[  559.170064] <redpill/override_symbol.c:250> Obtaining lock for <GetHwCapability+0x0/0x100 [broadwellnk_synobios]/ffffffffa052e430>
[  559.225983] <redpill/override_symbol.c:250> Writing original code to <ffffffffa052e430>
[  559.264414] <redpill/override_symbol.c:250> Released lock for <ffffffffa052e430>
[  559.299473] <redpill/override_symbol.c:221> Obtaining lock for <GetHwCapability+0x0/0x100 [broadwellnk_synobios]/ffffffffa052e430>
[  559.355553] <redpill/override_symbol.c:221> Writing trampoline code to <ffffffffa052e430>
[  559.394677] <redpill/override_symbol.c:221> Released lock for <ffffffffa052e430>
[  559.430073] <redpill/bios_hwcap_shim.c:66> proxying GetHwCapability(id=8)->support => real=0 [org_fout=0, ovs_fout=0]
[  559.481198] sd 0:0:0:0: [sda] Synchronizing SCSI cache
[  559.513587] sd 0:0:0:0: [sda] Stopping disk

This is the log after the first boot from the USB stick and uploading the .pat file.


I have tried several methods. With TC for the DS3622xs+, disabling the onboard 10G NIC, I can build an image, boot it, and start the install.

I also tried the latest redpill loader action, and it boots and starts the install.

In either case it gets to about 56% and then fails to install.

I disabled the internet connection when installing.

I tried the onboard controller only, the LSI card only, and both: same results every time.

I'm using a Supermicro X10SDV-6C+-TLN4F motherboard with an embedded Xeon D-1528, dual 10GbE and dual 1GbE; the 1GbE ports are disabled.

Any suggestions?



Stupid me, I had the PID and VID backwards; it's working now.
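For anyone hitting the same thing, a quick sketch of how to double-check the order before building the loader (run on any Linux box with the stick plugged in; lsusb prints the pair as vendor:product, i.e. VID first):

# Sketch: confirm which value is the VID and which is the PID
lsusb
# Example output line (values here are illustrative only):
#   Bus 001 Device 004: ID 0951:1666 Kingston Technology DataTraveler 100 G3
# 0951 is the VID (vendor) and 1666 is the PID (product); they usually go into
# the "vid" and "pid" fields of user_config.json, in that order.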


2 hours ago, Aigor said:

I know, but I'm not skilled enough to add pocopico's code to disable the 10Gbit card.


You only need to add one line to your user_config.json when you are creating the loader to stop the 10G module from loading:
 

"synoinfo": {

"internalportcfg" : "0xffff",

"maxdisks" : "16",

"support_bde_internal_10g" : "no"

},
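If you want to confirm the override actually landed after installation, a quick check over SSH (assuming the key ends up in synoinfo.conf, as loader synoinfo overrides normally do):

# Sketch: both files should show the overridden value after a successful install
grep support_bde_internal_10g /etc.defaults/synoinfo.conf /etc/synoinfo.conf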

 


On 2/13/2022 at 11:31 PM, yanjun said:

 Caveat: This build is going to look for "certified" Synology drives. If you're using something other than what's on the official list, you're likely to receive the dreaded "Unverified" critical drive warning after logging in.

 

I was able to get around the error by using the tutorial here: https://linustechtips.com/topic/1371655-synology-dsm-7-drive-lock-bypass/

 

TL;DR: SSH into your instance and, using sudo, edit the correct file(s) for your build and drive details under /var/lib/disk-compatibility.

 

In my case, I prepended 

{"model":"ST4000LM024-2AN17V","firmware”:”0001”,”rec_intvl”:[1]},

to the "ds3622xs+_host_v7.db" file, saved, rebooted, and the warning went away. (I didn't have a *.new file listed)

I'm also able to see all the SMART details now as well.
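For reference, a rough sketch of that edit over SSH (file name taken from the post above; back it up first and adjust the model/firmware to your own drive):

# Sketch only: back up and edit the compatibility database for this build
sudo -i
cd /var/lib/disk-compatibility
cp "ds3622xs+_host_v7.db" "ds3622xs+_host_v7.db.bak"
vi "ds3622xs+_host_v7.db"
# add an entry such as
#   {"model":"ST4000LM024-2AN17V","firmware":"0001","rec_intvl":[1]},
# near the start of the drive list, save, then reboot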

 

There is an easier way to do this.  Edit /etc.defaults/synoinfo.conf and change 

support_disk_compatibility="yes"
 

to "no" and reboot.  Then all drives can be used without error messages. Seems like it would be a good option to push with redpill.

Edited by flyride

I migrated my production environment from DS918+ 6.2.3 to DS3622xs+ 7.0.1. I have encountered a very strange problem.

 

1. In version 6.2.3, if there is a problem with the system partition (i.e. md0 and md1), clicking Repair fixes it in one pass. On DS3622xs+, when I put my 12 data hard disks in, I had to click Repair several times; it looks like only two or three disks get repaired per pass.

 

2. After many tests, I found that if I repaired the system partition of 5-6 disks, the system came up with the fresh-installation screen after restarting. After checking the serial port log, the following information may be the reason.

 

Any suggestions? @flyride

xpenology Assemble args: -u fb2a1e5a:ddbddd41:05d949f7:b0bbaec7 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1 /dev/sdi1 /dev/sdj1 /dev/sdk1 /dev/sdl1 /dev/sdm1 
[   26.580114] md: md0 stopped. 
[   26.587142] md: bind<sdb1> 
[   26.587800] md: bind<sdc1> 
[   26.588369] md: bind<sdd1> 
[   26.588868] md: bind<sde1> 
[   26.589543] md: bind<sdf1> 
[   26.590169] md: bind<sdg1> 
[   26.590844] md: bind<sdh1> 
[   26.591857] md: bind<sdi1> 
[   26.592676] md: bind<sdk1> 
[   26.593449] md: bind<sdl1> 
[   26.594239] md: bind<sdm1> 
[   26.594800] md: bind<sda1> 
[   26.595212] md: kicking non-fresh sdm1 from array! 
[   26.595827] md: unbind<sdm1> 
[   26.601041] md: export_rdev(sdm1) 
[   26.601415] md: kicking non-fresh sdl1 from array! 
[   26.601932] md: unbind<sdl1> 
[   26.607042] md: export_rdev(sdl1) 
[   26.607416] md: kicking non-fresh sdk1 from array! 
[   26.607936] md: unbind<sdk1> 
[   26.613042] md: export_rdev(sdk1) 
[   26.614010] md/raid1:md0: active with 9 out of 25 mirrors 
[   26.614778] md: pers->run() failed ... 
[   26.615331] md: md0 stopped. 
[   26.615651] md: unbind<sda1> mdadm: failed to RUN_ARRAY /dev/md0: Invalid argument 
[   26.628082] md: export_rdev(sda1) 
[   26.628456] md: unbind<sdi1> 
[   26.635050] md: export_rdev(sdi1) 
[   26.635436] md: unbind<sdh1> 
[   26.641041] md: export_rdev(sdh1) 
[   26.641418] md: unbind<sdg1> 
[   26.647037] md: export_rdev(sdg1) 
[   26.647455] md: unbind<sdf1> 
[   26.652051] md: export_rdev(sdf1) 
[   26.652433] md: unbind<sde1> 
[   26.659049] md: export_rdev(sde1) 
[   26.659413] md: unbind<sdd1> 
[   26.665046] md: export_rdev(sdd1) 
[   26.665411] md: unbind<sdc1> 
[   26.670016] md: export_rdev(sdc1) 
[   26.670397] md: unbind<sdb1> 
[   26.677051] md: export_rdev(sdb1) Exit on error [12] No raid status in path /sys/block/md0/md/array_state, go to junior mode... Wed Feb 16 21:34:51 UTC 2022 none /sys/kernel/debug debugfs rw,relatime 0 0 
[   26.700104] VFS: opened file in mnt_point: (/dev), file: (/null), comm: (hotplug)
Edited by yanjun

This is showing the resulting state, not how it got there. It would appear that the /dev/md0 array (the Linux root partition) has been corrupted.  When /dev/md0 cannot be mounted, there is no DSM, and it will offer to install, which is consistent with your report.

 

As to why it happened, I have no idea.  Diagnostically, I would probably evaluate the state of the arrays before clicking "Repair" and then see what it was actually doing during recovery (i.e. cat /proc/mdstat before and after).  But this doesn't help you much with the system as it is.  I hope you can reinstall or backrev and that your data arrays are intact.
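In practice that check might look something like this (a sketch; run over SSH or the serial console before clicking Repair and again afterwards):

# Sketch: snapshot the md state before and after a Repair pass
cat /proc/mdstat
sudo mdadm --detail /dev/md0
sudo mdadm --detail /dev/md1
# comparing the snapshots shows which members were actually re-added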

 

For me, redpill testing is nowhere near extensive enough to go to "production", but I realize the definition varies for different people.


7 hours ago, altas said:

As long as you have a version that is working, why not use it in production? Never touch a running system ;) so never try to update DSM.

On DSM 7.0.1 3617xs+:

My prod server has been running for 44 days with no crashes: 10 SATA disks, 9 Docker containers including one MariaDB with a 1.3 GB database for Nextcloud, 1 VM under Ubuntu Server running "mailinabox", and 11 machines on Hyper Backup, 3 of them under Linux!

It rocks!

;)

Edited by buggy25200
