jun

DSM 6.1.x Loader


I am confused. Do you mean that we still need to use a physical USB drive to boot in a VM environment in order to get rid of the 50MB HDD? Back in the 5.2 days, I remember not seeing this boot drive under DSM, yet I was booting from the vmdk directly.

Could it be that rmmod / mod_del doesn't work in this loader?

 

I am okay with leaving this HDD in the system, but it is annoying that it generates a SMART error every few seconds.

 

You don't need a USB drive to boot XPEnology under ESXi. That is just the way I wanted it to work, so as to a) save space on my SSD, even though the loader is only a few hundred MB, b) make use of the internal USB port that is already there (in fact I even run ESXi from the MicroSD slot), and c) following on from the previous point, booting from a USB drive means I can easily back up and replace the bootloader in case anything goes wrong.

 

The entire concept of my setup is to isolate XPEnology as much as possible from the other VMs on the same Gen8 physical machine, to improve reliability and the chance of recovery. To do so I also pass through the entire B120i (run in AHCI mode) for XPEnology's use only; that way the drives connected to the B120i work just like proper Synology drives, without a vmdk layer or the use of RDM. Touch wood, if my Gen8 ever dies, I can still unplug my drives and use Linux to read all the data directly. This is a bit off topic, but I hope you get the idea.

 

Now I know there are ways to overcome the 50MB issue, especially with the bootloader running from a vmdk on SSD and such. Mine is just an approach that I worked out and find suitable for my requirements. I think there are still things I could fine-tune, but my XPEnology is running so smoothly and stably that I would rather stick with it for now. As a side note, I happened to try upgrading my ESXi 6 to 6.5 yesterday and for some reason it didn't work out well. I could not downgrade from 6.5 back to 6 either, so I did a fresh reinstallation of ESXi 6. From this exercise, I was able to add my XPEnology VM back to ESXi easily and everything runs as if I had never upgraded; that proves the reliability I am after.
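In case it helps anyone doing the same after a reinstall, re-registering an existing VM from the ESXi shell is roughly this (just a sketch; the datastore and VM names are placeholders, and the same thing can be done from the host client by registering the existing .vmx):

# find the existing .vmx file on the datastore
find /vmfs/volumes/ -name '*.vmx'

# register it with the host (prints the new VM id)
vim-cmd solo/registervm /vmfs/volumes/datastore1/xpenology/xpenology.vmx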

 

About how to upgrade from ESXi 6.0 to 6.5...

 

1st solution

 

Try the HPE Custom Image for VMware ESXi 6.5 Install CD or the HPE Custom Image for VMware ESXi 6.5 Offline Bundle

https://my.vmware.com/fr/group/vmware/d ... ductId=614

For historical reasons I use the second solution below, since that is what I started with.

Well, at the beginning of the year there were no up-to-date HPE images, so I preferred to build an up-to-date image myself instead of using a one-year-old one...

 

2nd solution

 

Build yourself an up-to-date image

 

PowerCLI

https://my.vmware.com/group/vmware/deta ... ductId=614

VMware Tools

https://my.vmware.com/fr/group/vmware/d ... ductId=615

ESXi Embedded Host Client (esxui, HTML5) & VMware Remote Console (VMRC) VIBs

https://labs.vmware.com/flings/esxi-emb ... ost-client

ESXi-Customizer-PS

https://www.v-front.de/p/esxi-customizer-ps.html

HPE drivers

http://vibsdepot.hpe.com/hpe/nov2016/

http://vibsdepot.hpe.com/hpe/oct2016/

Well, for faster download, I get the offline bundles and then extract the VIBs from them (see the extraction sketch after these links).

http://vibsdepot.hpe.com/hpe/nov2016/esxi-650-bundles/

http://vibsdepot.hpe.com/hpe/nov2016/es ... cedrivers/ (hpdsa, hpvsa & nhpsa)

http://vibsdepot.hpe.com/hpe/oct2016/es ... cedrivers/ (hpsa & nx1_tg3)
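A minimal sketch of pulling the VIBs out of one of those offline bundles (assuming the bundle is a standard zip depot with a vib20/ directory; the file name here is an example, not one of the exact bundles above):

# extract only the VIB payloads from the offline bundle
unzip some-hpe-offline-bundle.zip 'vib20/*' -d extracted

# collect them into the vibs directory used later by ESXi-Customizer-PS
cp extracted/vib20/*/*.vib vibs/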

 

not used, for reference only...

additional drivers

https://vibsdepot.v-front.de/wiki/index ... i_packages

patches (none for 6.5 yet)

https://my.vmware.com/group/vmware/patc ... 241#search

 

Built images; the upload is not finished yet, another 2-3 hours to go...

for offline upgrade

http://cyrillelefevre.free.fr/vmware/ES ... omized.iso

for online upgrade

http://cyrillelefevre.free.fr/vmware/ES ... omized.zip

 

For online installation, enter maintenance mode, then:

Well, I don't remember whether I did install or update... I suppose install :???:

esxcli software profile install -d $PWD/ESXi-6.5.0-4564106-standard-customized.zip -p ESXi-6.5.0-4564106-standard-customized

Also, while the VMware Tools are in the image, they don't get installed! So I installed them manually:

esxcli software vib install -v file://$PWD/VMware_locker_tools-light_6.5.0-0.0.4564106.vib

I don't remember whether -n tools-light is needed or not?
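Put together, the online upgrade sequence would look something like this (a sketch, assuming the bundle and the tools VIB sit in the current directory on a datastore and you are connected over SSH):

# enter maintenance mode
esxcli system maintenanceMode set --enable true

# install the customized image profile
esxcli software profile install -d $PWD/ESXi-6.5.0-4564106-standard-customized.zip -p ESXi-6.5.0-4564106-standard-customized

# the tools VIB did not get installed for me, so add it manually
esxcli software vib install -v file://$PWD/VMware_locker_tools-light_6.5.0-0.0.4564106.vib

reboot

# once the host is back up
esxcli system maintenanceMode set --enable false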

 

to build the images yourself

 

contents of vibs subdirectory

 

amshelper-650.10.6.0-24.4240417.vib

conrep-6.0.0.01-02.00.1.2494585.vib

esxui-signed-4762574.vib

hpbootcfg-6.0.0.02-02.00.6.2494585.vib

hpe-cru_650.6.5.8.24-1.4240417.vib

hpe-esxi-fc-enablement-650.2.6.10-4240417.vib

hpe-ilo_650.10.0.1-24.4240417.vib

hpe-nmi-600.2.4.16-2494575.vib

hpe-smx-limited-650.03.11.00.13-4240417.vib

hpe-smx-provider-650.03.11.00.17-4240417.vib

hponcfg-6.0.0.4.4-2.4.2494585.vib

hptestevent-6.0.0.01-01.00.5.2494585.vib

net-tg3_3.137l.v60.1-1OEM.600.0.0.2494585.vib

nhpsa-2.0.10-1OEM.650.0.0.4240417.x86_64.vib

scsi-hpdsa-5.5.0.54-1OEM.550.0.0.1331820.vib

scsi-hpsa_6.0.0.120-1OEM.600.0.0.2494585.vib

scsi-hpvsa-5.5.0.102-1OEM.550.0.0.1331820.x86_64.vib

ssacli-2.60.18.0-6.0.0.vib

VMware_locker_tools-light_6.5.0-0.0.4564106.vib

VMware-Remote-Console-9.0.0-Linux.vib

VMware-Remote-Console-9.0.0-Windows.vib

 

Under PowerCLI

 

PS D:\vmw> .\ESXi-Customizer-PS-v2.5.ps1 -v65 -pkgDir vibs

This is ESXi-Customizer-PS Version 2.5.0 (visit https://ESXi-Customizer-PS.v-front.de for more information!)
(Call with -help for instructions)

Logging to D:\Users\xxx\AppData\Local\Temp\ESXi-Customizer-PS-8096.log ...

Running with PowerShell version 3.0 and VMware PowerCLI 6.5 Release 1 build 4624819

Connecting the VMware ESXi Online depot ... [OK]

Getting Imageprofiles, please wait ... [OK]

Using Imageprofile ESXi-6.5.0-4564106-standard ...
(dated 10/27/2016 05:43:44, AcceptanceLevel: PartnerSupported,
The general availability release of VMware ESXi Server 6.5.0 brings whole new levels of virtualization performance to datacenters and enterprises.)

Loading Offline bundles and VIB files from vibs ...
  Loading D:\vmw\vibs\amshelper-650.10.6.0-24.4240417.vib ... [OK]
     Add VIB amshelper 650.10.6.0-24.4240417 [OK, added]
  Loading D:\vmw\vibs\conrep-6.0.0.01-02.00.1.2494585.vib ... [OK]
     Add VIB conrep 6.0.0.01-02.00.1.2494585 [OK, added]
  Loading D:\vmw\vibs\esxui-signed-4762574.vib ... [OK]
     Add VIB esx-ui 1.13.0-4762574 [OK, replaced 1.8.0-4516221]
  Loading D:\vmw\vibs\hpbootcfg-6.0.0.02-02.00.6.2494585.vib ... [OK]
     Add VIB hpbootcfg 6.0.0.02-02.00.6.2494585 [OK, added]
  Loading D:\vmw\vibs\hpe-cru_650.6.5.8.24-1.4240417.vib ... [OK]
     Add VIB hpe-cru 650.6.5.8.24-1.4240417 [OK, added]
  Loading D:\vmw\vibs\hpe-esxi-fc-enablement-650.2.6.10-4240417.vib ... [OK]
     Add VIB hpe-esxi-fc-enablement 650.2.6.10-4240417 [OK, added]
  Loading D:\vmw\vibs\hpe-ilo_650.10.0.1-24.4240417.vib ... [OK]
     Add VIB hpe-ilo 650.10.0.1-24.4240417 [OK, added]
  Loading D:\vmw\vibs\hpe-nmi-600.2.4.16-2494575.vib ... [OK]
     Add VIB hpe-nmi 600.2.4.16-2494575 [OK, added]
  Loading D:\vmw\vibs\hpe-smx-limited-650.03.11.00.13-4240417.vib ... [OK]
     Add VIB hpe-smx-limited 650.03.11.00.13-4240417 [OK, added]
  Loading D:\vmw\vibs\hpe-smx-provider-650.03.11.00.17-4240417.vib ... [OK]
     Add VIB hpe-smx-provider 650.03.11.00.17-4240417 [OK, added]
  Loading D:\vmw\vibs\hponcfg-6.0.0.4.4-2.4.2494585.vib ... [OK]
     Add VIB hponcfg 6.0.0.4.4-2.4.2494585 [OK, added]
  Loading D:\vmw\vibs\hptestevent-6.0.0.01-01.00.5.2494585.vib ... [OK]
     Add VIB hptestevent 6.0.0.01-01.00.5.2494585 [OK, added]
  Loading D:\vmw\vibs\net-tg3_3.137l.v60.1-1OEM.600.0.0.2494585.vib ... [OK]
     Add VIB net-tg3 3.137l.v60.1-1OEM.600.0.0.2494585 [OK, replaced 3.131d.v60.4-2vmw.650.0.0.4564106]
  Loading D:\vmw\vibs\nhpsa-2.0.10-1OEM.650.0.0.4240417.x86_64.vib ... [OK]
     Add VIB nhpsa 2.0.10-1OEM.650.0.0.4240417 [OK, replaced 2.0.6-3vmw.650.0.0.4564106]
  Loading D:\vmw\vibs\scsi-hpdsa-5.5.0.54-1OEM.550.0.0.1331820.vib ... [OK]
     Add VIB scsi-hpdsa 5.5.0.54-1OEM.550.0.0.1331820 [OK, added]
  Loading D:\vmw\vibs\scsi-hpsa_6.0.0.120-1OEM.600.0.0.2494585.vib ... [OK]
     Add VIB scsi-hpsa 6.0.0.120-1OEM.600.0.0.2494585 [OK, replaced 6.0.0.84-1vmw.650.0.0.4564106]
  Loading D:\vmw\vibs\scsi-hpvsa-5.5.0.102-1OEM.550.0.0.1331820.x86_64.vib ... [OK]
     Add VIB scsi-hpvsa 5.5.0.102-1OEM.550.0.0.1331820 [OK, added]
  Loading D:\vmw\vibs\ssacli-2.60.18.0-6.0.0.vib ... [OK]
     Add VIB ssacli 2.60.18.0-6.0.0.2494585 [OK, added]
  Loading D:\vmw\vibs\VMware-Remote-Console-9.0.0-Linux.vib ... [OK]
     Add VIB vmrc-linux 9.0.0-0.2 [OK, added]
  Loading D:\vmw\vibs\VMware-Remote-Console-9.0.0-Windows.vib ... [OK]
     Add VIB vmrc-win 9.0.0-0.2 [OK, added]
  Loading D:\vmw\vibs\VMware_locker_tools-light_6.5.0-0.0.4564106.vib ... [OK]
     Add VIB tools-light 6.5.0-0.0.4564106 [IGNORED, already added]

Exporting the Imageprofile to 'D:\vmw\ESXi-6.5.0-4564106-standard-customized.iso'. Please be patient ...

All done.

D:\vmw> .\ESXi-Customizer-PS-v2.5.ps1 -v65 -pkgDir vibs -ozip

(The output is the same as for the ISO build above, apart from the final export step.)

Exporting the Imageprofile to 'D:\vmw\ESXi-6.5.0-4564106-standard-customized.zip'. Please be patient ...

All done.


@gits68: Thank you very much for your detailed upgrade how-to!!

Do you run the built-in B120i controller in RAID mode or AHCI mode?

Do you suffer from bad performance with SSDs or hard disks connected to the B120i?

These questions are the only things holding me back from moving from bare-metal XPEnology to ESXi. :grin:


 

Hi, no, I use RDM, but some optimizations may be attempted.
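For context, the raw device mappings themselves are created on the ESXi side with vmkfstools; a rough sketch (the device identifier and datastore path are placeholders, not my actual values):

# list the local disks to find the device identifier of the drive to map
ls /vmfs/devices/disks/

# create a virtual-mode RDM pointer on a datastore (-z instead of -r gives physical/pass-through mode)
vmkfstools -r /vmfs/devices/disks/t10.ATA_____WDC_WD60EFRX_XXXXXXXX /vmfs/volumes/datastore1/xpenology/wd6tb-rdm.vmdk

The resulting .vmdk pointer is then attached to the VM as an existing disk.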

 

root@xpenology:/usr/local/etc/rc.d# cat S00local-raid

#!/bin/bash

case $1 in

'start')
               : FALLTHROUGH
               ;;

'stop'|'prestart'|'prestop'|'status')
               exit 0
               ;;

*)
               echo "usage: ${0##*/} " >&2
               exit 1
               ;;

esac

shopt -s extglob

function log {
       typeset device=$1 variable=$2 source_value=$3 target_value=$4 path=$5
       typeset tmp=${device}_${variable##*/}

       [[ -s /tmp/${tmp} ]] ||
       echo ${source_value} > /tmp/${tmp}

       if [[ ${source_value} != ${target_value} ]]; then
               echo "${path}: ${source_value} -> ${target_value}"
               return 1
       fi
}

function sys_block {
       typeset device=$1 variable=$2 target_value=$3
       typeset path=/sys/block/${device}/${variable}

       read source_value < ${path}

       log ${device} ${variable} ${source_value} ${target_value} ${path} ||
       echo ${target_value} > ${path}
}
function block_dev {
       typeset device=$1 target_value=$2
       typeset path=/dev/${device}

       source_value=$(blockdev --getra ${path})

       log ${device} read_ahead ${source_value} ${target_value} ${path}/read_ahead ||
       blockdev --setra ${target_value} ${path}
}
function pow2
{
       awk '
       function log2(x) { return log(x) / log(2) }
       function ceil(x) { return x == int(x) ? x : int(x + 1) }
       function pow2(x) { return 2 ^ ceil(log2(x)) }
       BEGIN { print pow2(ARGV[1]); exit }' "$@"
}

physical_volumes= sep=
logical_volumes= _sep=
for model in /sys/block/sd*/device/model; do
       read source_value < ${model}
       [[ ${source_value} = 'Virtual disk' ]] && continue
       [[ ${source_value} = 'LOGICAL VOLUME' ]] && lv=1 || lv=0
       # [[ ${source_value} = *EFRX* ]] || continue

       source=${model%/device/model}
       target=$(readlink ${source})
       # was [[ ${target} = */ata*/host* ]] || continue
       [[ ${target} = */usb*/host* ]] && continue

       disk_device=${source#/sys/block/}
       if (( lv )); then
               logical_volumes=${logical_volumes}${_sep}${disk_device}; _sep=' '
       else
               physical_volumes=${physical_volumes}${sep}${disk_device}; sep=' '
       fi
done

## read_ahead=384 # default
read_ahead=2048
for disk_device in ${physical_volumes} ${logical_volumes}; do
       block_dev ${disk_device} ${read_ahead}
done

## queue_depth=31 # physical
## queue_depth=64 # logical
# queue_depth=1 # disabled
queue_depth=3 # almost disabled
for disk_device in ${physical_volumes}; do
       sys_block ${disk_device} device/queue_depth ${queue_depth}
done

## nr_requests=128 # default
# nr_requests=64
for disk_device in ${physical_volumes} ${logical_volumes}; do
       read queue_depth < /sys/block/${disk_device}/device/queue_depth
       (( nr_requests = $(pow2 ${queue_depth}) * 2 ))
       sys_block ${disk_device} queue/nr_requests ${nr_requests}
done

raid_device='md2'

## read_ahead=1536 # physical
## read_ahead=768 # logical
read_ahead=65536
block_dev ${raid_device} ${read_ahead}

## stripe_cache_size=1024 # physical
## stripe_cache_size=4096 # logical
stripe_cache_size=32768
# stripe_cache_size=8192
sys_block ${raid_device} md/stripe_cache_size ${stripe_cache_size}

# eof

 

This has to be done in maintenance mode:

 

# add dir_index to defaults
sed -i -e '/base_features/s/$/,dir_index/' /etc/mke2fs.conf

syno_poweroff_task -d; vgchange -a y

tune2fs -O dir_index /dev/vg1/volume_#
e2fsck -fDC0 /dev/vg1/volume_#

# parity = raid0 ? 0 : raid1/10 ? ndisks/2 : raid6 ? 2 : 1
# stride = mdstat chunk / block size, stripe_width = stride * (ndisks - parity)

awk '/md2/{disks=NF-4;parity=/raid0/?0:/raid1/?disks/2:/raid6/?2:1
getline;chunk=$7;stride=chunk/4;stripe=stride*(disks-parity)
printf "stride=%d,stripe-width=%d\n", stride, stripe }' /proc/mdstat

# 4/R5
tune2fs -E stride=16,stripe-width=48 /dev/vg1/volume_#
# 8/R6
tune2fs -E stride=16,stripe-width=96 /dev/vg1/volume_#

vgchange -a n; syno_poweroff_task -r
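To make the stride arithmetic explicit for the 4-disk RAID 5 line above (assuming the 64 KiB md chunk that the mdstat example implies and 4 KiB ext4 blocks): stride = 64 / 4 = 16, and stripe-width = stride × (4 disks − 1 parity) = 16 × 3 = 48, which is exactly the stride=16,stripe-width=48 passed to tune2fs; the 8-disk RAID 6 case gives 16 × (8 − 2) = 96.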

 

Regards.


Thank you very much for your modification scripts. :smile:

You change your RAID settings quite heavily, but if it runs stably for you, that's all fine.

 

If I have time, I will test which mode works best for me (all hard disks as single RAID 0 volumes, or putting the B120i controller in plain AHCI mode). I read that a lot of people have performance problems (slow reads and writes) under ESXi 6.5 with the onboard B120i controller (the hard disks/SSDs used for VM storage are affected by this problem, not the disks handed to XPEnology itself).


 

About the RAID settings: it's not my fault if Synology boxes aren't optimized at all... :roll:

 

I forgot about the sysctl tunings.

 

Lines with a double ## are the defaults.

 

kernel.core_pattern = /volume1/@core/%e

# Use the full range of ports
## net.ipv4.ip_local_port_range = 32768 61000
net.ipv4.ip_local_port_range = 32768 65535

# Increase Linux autotuning TCP buffer limits
## net.core.rmem_max = 212992
## net.core.wmem_max = 212992
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
## net.ipv4.tcp_rmem = 4096 16384 4194304
## net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# mdadm resync limits
## dev.raid.speed_limit_max = 200000
## dev.raid.speed_limit_min = 10000
dev.raid.speed_limit_min = 50000

# don't remember
net.ipv4.tcp_window_scaling = 1

# swappiness is a parameter which sets the kernel's balance between reclaiming
# pages from the page cache and swapping process memory. The default value is 60.
# If you want kernel to swap out more process memory and thus cache more file
# contents increase the value. Otherwise, if you would like kernel to swap less
# decrease it.
## vm.swappiness = 60
vm.swappiness = 50

# Contains, as a percentage of total system memory, the number of pages at
# which a process which is generating disk writes will itself start writing
# out dirty data.
## vm.dirty_ratio = 20
vm.dirty_ratio = 30
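A minimal sketch of how these could be applied, assuming the sysctl tooling on DSM behaves as on a stock Linux box (try a value live first, then make it persistent):

# test a value on the fly
sysctl -w vm.swappiness=50

# make it persistent, then reload
echo 'vm.swappiness = 50' >> /etc/sysctl.conf
sysctl -p /etc/sysctl.conf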

 

Pending changes, for when I have time:

 

# Increase number of incoming connections that can queue up before dropping
### net.core.somaxconn = 128
## net.core.somaxconn = 65535

# don't remember
## net.core.optmem_max = 20480
# net.core.optmem_max = 40960

# Disable source routing and redirects
# net.ipv4.conf.all.send_redirects = 0
# net.ipv4.conf.default.send_redirects = 0
## net.ipv4.conf.all.accept_redirects = 0
# net.ipv4.conf.default.accept_redirects = 0
## net.ipv4.conf.all.accept_source_route = 0
# net.ipv4.conf.default.accept_source_route = 0

# Disable TCP slow start on idle connections
## net.ipv4.tcp_slow_start_after_idle = 1
# net.ipv4.tcp_slow_start_after_idle = 0

# Disconnect dead TCP connections after 1 minute
## net.ipv4.tcp_keepalive_time = 7200
# net.ipv4.tcp_keepalive_time = 60

# Increase the number of outstanding syn requests allowed.
## net.ipv4.tcp_syncookies = 1

# Determines the wait time between isAlive interval probes
## net.ipv4.tcp_keepalive_intvl = 75
# net.ipv4.tcp_keepalive_intvl = 15

# Determines the number of probes before timing out
## net.ipv4.tcp_keepalive_probes = 9
# net.ipv4.tcp_keepalive_probes = 5

# don't remember
## net.ipv4.tcp_timestamps = 1

# Increase the length of the network device input queue
## net.core.netdev_max_backlog = 1000
# net.core.netdev_max_backlog = 5000

# Handle SYN floods and large numbers of valid HTTPS connections
## net.ipv4.tcp_max_syn_backlog = 128
# net.ipv4.tcp_max_syn_backlog = 8096

# Wait a maximum of 5 * 2 = 10 seconds in the TIME_WAIT state after a FIN, to handle
# any remaining packets in the network.
## net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
# net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 5

# Allow a high number of timewait sockets
## net.ipv4.tcp_max_tw_buckets = 16384
# net.ipv4.tcp_max_tw_buckets = 65536

# Timeout broken connections faster (amount of time to wait for FIN)
## net.ipv4.tcp_fin_timeout = 60
# net.ipv4.tcp_fin_timeout = 10

# Let the networking stack reuse TIME_WAIT connections when it thinks it's safe to do so
## net.ipv4.tcp_tw_reuse = 0
# net.ipv4.tcp_tw_reuse = 1

# If your servers talk UDP, also up these limits
## net.ipv4.udp_rmem_min = 4096
## net.ipv4.udp_wmem_min = 4096
# net.ipv4.udp_rmem_min = 8192
# net.ipv4.udp_wmem_min = 8192

# don't remember
## kernel.sched_migration_cost_ns = 500000

# Contains, as a percentage of total system memory, the number of pages at
# which the pdflush background writeback daemon will start writing out dirty
# data.
## vm.dirty_background_ratio = 10

# This tunable is used to define when dirty data is old enough to be eligible
# for writeout by the pdflush daemons. It is expressed in 100'ths of a second.
# Data which has been dirty in memory for longer than this interval will be
# written out next time a pdflush daemon wakes up.
## vm.dirty_expire_centisecs = 3000
# vm.dirty_expire_centisecs = 6000

# This is used to force the Linux VM to keep a minimum number of kilobytes
# free. The VM uses this number to compute a pages_min value for each lowmem
# zone in the system. Each lowmem zone gets a number of reserved free pages
# based proportionally on its size.
## vm.min_free_kbytes = 65536

# Controls the tendency of the kernel to reclaim the memory which is used for
# caching of directory and inode objects.
# At the default value of vfs_cache_pressure = 100 the kernel will attempt to
# reclaim dentries and inodes at a "fair" rate with respect to pagecache and
# swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
# to retain dentry and inode caches. Increasing vfs_cache_pressure beyond 100
# causes the kernel to prefer to reclaim dentries and inodes.
## vm.vfs_cache_pressure=100
# vm.vfs_cache_pressure=400

# 64k (sector size) per I/O operation in the swap
## vm.page-cluster=3
# vm.page-cluster=16

 

RAID 0 is really dangerous... if you lose one drive, you lose everything! I hope you have backups?

 

HP Gen8 + ESXi + RDM: 4x6TB in RAID 5

 

root@xpenology:/volume1# dd if=/dev/zero of=zero bs=4k count=1000000

4096000000 bytes (4.1 GB) copied, 15.8706 s, 258 MB/s

root@xpenology:/volume1# dd if=zero of=/dev/null bs=4k count=1000000

4096000000 bytes (4.1 GB) copied, 27.6033 s, 148 MB/s

 

Plain Synology DS1812+: 8x3TB in RAID 6

 

root@synology:/volume1# dd if=/dev/zero of=zero bs=4k count=1000000

4096000000 bytes (4.1 GB) copied, 26.3938 s, 155 MB/s

root@synology:/volume1# dd if=zero of=/dev/null bs=4k count=1000000

4096000000 bytes (4.1 GB) copied, 28.238 s, 145 MB/s
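One caveat with this kind of quick test: the read pass can be served partly from the page cache. A slightly fairer run would drop the caches first (a sketch, assuming root):

sync
echo 3 > /proc/sys/vm/drop_caches
dd if=zero of=/dev/null bs=4k count=1000000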

Update to Update 7 fails too.

Also tried Update 6 first; exactly the same result.

 

So I remain on Update 5, waiting for further progress...

 

BTW: AMD CPU, N54L, ...

 

Have you tried the upgrade from the Control Panel on the DiskStation? I just did the 7th update and everything went fine and straightforward...

 

After the online upgrade failed, I tried downloading the update file first and then doing a manual update. Still no luck.


 

 

Did you upgrade from DSM 5 to DSM 6, or clean install DSM 6?

 

I'm asking because I also have an HP N54L and successfully installed Update 7 with no problem at all.

 

Clean installed DSM 6 with Update 5.

I did not install Update 6, just ran the online upgrade to Update 7; it reached 100% but then prompted 'update failed...'

Tried manual Update 6 / Update 7; they fail too.

 

It's great to know the N54L works with Update 7, I'll try it again. Thanks.


 

I didn't know about the jun loader; I'm using quicknick's one.

However, to install Update 6, I had to symlink both boot partitions to synoboot.

 

Something like this:

 

ssh diskstation
sdef=$(sfdisk -l | awk '/dev/&&/ ef /{print $1}')
ln -s ${sdef#/dev/} /dev/synoboot1
sd83=$(sfdisk -l | awk '/dev/&&/ 83 /{print $1}')
ln -s ${sd83#/dev/} /dev/synoboot2

 

Yes, synoupgrade has to mount /dev/synoboot1 and/or /dev/synoboot2 in order to update the kernel contained in flashupdate from the .pat tar archive.
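A quick sanity check after creating the links might look like this (a sketch):

# both links should point at the EF (vfat) and 83 (Linux) partitions found above
ls -l /dev/synoboot1 /dev/synoboot2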

Worked perfectly on an N54L coming from 5.0 on Nanoboot.

Thank you very much.

 

Just one thing: I have two network cards. The integrated one worked.

The other one, on a PCIe port, nothing...

I see it in the interface, but when I plug in the cable, I cannot ping...

 

It's an RTL8111 chipset...

 

Maybe I forgot something???

 

EDIT:

It works perfectly, I made a mistake...

I used the same MAC as my other NAS, and because of the rule on my router for this MAC, the same IP was assigned...

So the IP address was in conflict and the new Synology was not reachable... I saw the problem when I plugged in both network cards at the same time.

So no problem for me with the N54L and 6x2TB.

 

Congratulations.

 

 

PS: I installed Update 6.

 

This is good to know, as I will be making the same upgrade today.


 

YES! That's correct. Now on Update 7.

The problem was that jun's loader did not present the USB stick as a device with the official synoboot VID 0xf400 / PID 0xf401, so sfdisk -l showed no synoboot device.

Thanks a lot!

 

To add:

In other words, jun's loader protects the USB stick from being changed. ^_^
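For reference, jun's loader normally decides which USB device to treat as synoboot from the VID/PID set in grub.cfg on the loader's first partition; a sketch of the relevant lines (the values here are placeholders and have to match the actual vendor/product IDs of your own stick):

set vid=0x090c
set pid=0x1000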


I have tried to search for it, but how do I edit the img file on a Mac? I seem to remember reading how to do it, but I have forgotten. Please help.


Hi, I have the HP Gen8.

Yesterday I went to install Update 7, but it did not work ("Fail to update the file. The file is probably corrupt").

Then at the boot loader I chose option 2. Everything was installed again, keeping the data, but now I cannot find the Synology. It has no IP.

Any solution?


Does anyone know if there is a way to make the bootloader select the AMD option automatically, so you don't need a keyboard and monitor?


I am trying this on an Intel 82574L NIC. Is that supported? The hardware is a Thecus N5550 NAS. It works fine with DSM 5.2. When I try this loader, the DHCP server does not show an active lease for my MAC address.


Just one question... currently I'm toying around with my new XPEnology bare-metal system and used jun's 1.01 loader.

It seems everything is fine, but I can't create Btrfs volumes... isn't that supposed to work with this loader?

 

I tried:

- single disk with ext4: OK

- single disk with Btrfs: not OK

- disk group with ext4: OK

- disk group with Btrfs: not OK

 

My HBA is an LSI 9301-16i.

 

Any suggestions?

 

Update:

Solved it by a completely fresh install of DSM... I'll stick to the "new" RAID groups; maybe the next DSM version won't support SHR at all anymore... just to stay safe.


 

Don't expect this to happen from Synology's side, as SHR is one of the USPs they use for their systems.

I understand the plain-RAID approach for more commercial hardware and systems with a large number of disks, as SHR adds more CPU/memory load and isn't advised for enterprise use.

But still, this shouldn't go in the direction of removing SHR from DSM entirely; in 6 it is still supported, just not on all types of machines.

 

But OK, that's all assumptions; we will see...


I was on DSM 5.2 and my APC UPS was detected without problems.

On DSM 6.0, nothing... I plug in the USB cable and no UPS is detected.

 

I am on an HP ProLiant.


Is there a hardware list for jun's loader? I cannot get it to see my NIC. If I boot with a 5.2 loader, it says it's using the Intel e1000e driver.


Hi, I am trying to install it on an ESXi 6.5 server using the OVF, but I keep getting "postNFCData failed".

 

Does anyone know a fix or workaround?


Due to the differing reports on successful updates to Update 7, I wanted to share my experience.

 

- N54L / 4GB / 4x 3TB disks (in SHR RAID 5)

- jun's 1.01 loader

- Started from Version: 6.0.2-8451 Update 6

- Used the standard GUI option to update to Update 7

- Using SHR in my setup

 

The update went through successfully.


I am finally able to get the jun loader to see my NIC; I disabled EFI in my BIOS and now it sees it. I'm confused about the next step: do I select a manual install of DSM, or do I get the .PAT file from here? I don't see any of the DSM 6 images.

