XPEnology Community

DSM 6.1.x Loader


jun


@gits68: Thank you very much for your detailed upgrade how-to!!

Do you run the built-in B120i controller in RAID mode or AHCI mode?

Do you suffer from bad performance on SSDs or hard disks connected to the B120i?

These questions are the only things holding me back from going from bare-metal XPEnology to ESXi. :grin:


 

Hi. No, I use RDM, but some optimizations may be attempted.

 

root@xpenology:/usr/local/etc/rc.d# cat S00local-raid

#!/bin/bash

case $1 in

'start')
               : FALLTHROUGH
               ;;

'stop'|'prestart'|'prestop'|'status')
               exit 0
               ;;

*)
               echo "usage: ${0##*/} " >&2
               exit 1
               ;;

esac

shopt -s extglob

function log {
       typeset device=$1 variable=$2 source_value=$3 target_value=$4 path=$5
       typeset tmp=${device}_${variable##*/}

       [[ -s /tmp/${tmp} ]] ||
       echo ${source_value} > /tmp/${tmp}

       if [[ ${source_value} != ${target_value} ]]; then
               echo "${path}: ${source_value} -> ${target_value}"
               return 1
       fi
}

function sys_block {
       typeset device=$1 variable=$2 target_value=$3
       typeset path=/sys/block/${device}/${variable}

       read source_value < ${path}

       log ${device} ${variable} ${source_value} ${target_value} ${path} ||
       echo ${target_value} > ${path}
}
function block_dev {
       typeset device=$1 target_value=$2
       typeset path=/dev/${device}

       source_value=$(blockdev --getra ${path})

       log ${device} read_ahead ${source_value} ${target_value} ${path}/read_ahead ||
       blockdev --setra ${target_value} ${path}
}
function pow2 {
       awk '
       function log2(x) { return log(x) / log(2) }
       function ceil(x) { return x == int(x) ? x : int(x + 1) }
       function pow2(x) { return 2 ^ ceil(log2(x)) }
       BEGIN { print pow2(ARGV[1]); exit }' "$@"
}
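# pow2 rounds its argument up to the next power of two; it is used below to
# size nr_requests at twice the (rounded) queue depth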

physical_volumes= sep=
logical_volumes= _sep=
for model in /sys/block/sd*/device/model; do
       read source_value < ${model}
       [[ ${source_value} = 'Virtual disk' ]] && continue
       [[ ${source_value} = 'LOGICAL VOLUME' ]] && lv=1 || lv=0
       # [[ ${source_value} = *EFRX* ]] || continue

       source=${model%/device/model}
       target=$(readlink ${source})
       # was [[ ${target} = */ata*/host* ]] || continue
       [[ ${target} = */usb*/host* ]] && continue

       disk_device=${source#/sys/block/}
       if (( lv )); then
               logical_volumes=${logical_volumes}${_sep}${disk_device}; _sep=' '
       else
               physical_volumes=${physical_volumes}${sep}${disk_device}; sep=' '
       fi
done

## read_ahead=384 # default
read_ahead=2048
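# note: blockdev --setra counts 512-byte sectors, so 2048 sectors = 1 MiB of readahead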
for disk_device in ${physical_volumes} ${logical_volumes}; do
       block_dev ${disk_device} ${read_ahead}
done

## queue_depth=31 # physical
## queue_depth=64 # logical
# queue_depth=1 # disabled
queue_depth=3 # almost disabled
for disk_device in ${physical_volumes}; do
       sys_block ${disk_device} device/queue_depth ${queue_depth}
done

## nr_requests=128 # default
# nr_requests=64
for disk_device in ${physical_volumes} ${logical_volumes}; do
       read queue_depth < /sys/block/${disk_device}/device/queue_depth
       (( nr_requests = $(pow2 ${queue_depth}) * 2 ))
       sys_block ${disk_device} queue/nr_requests ${nr_requests}
done

raid_device='md2'

## read_ahead=1536 # physical
## read_ahead=768 # logical
read_ahead=65536
block_dev ${raid_device} ${read_ahead}

## stripe_cache_size=1024 # physical
## stripe_cache_size=4096 # logical
stripe_cache_size=32768
# stripe_cache_size=8192
sys_block ${raid_device} md/stripe_cache_size ${stripe_cache_size}

# eof
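If you want to try this on your own box, installing it should amount to something like the following (the path is the one shown in the listing above; DSM runs the start action of /usr/local/etc/rc.d scripts at boot):

chmod 755 /usr/local/etc/rc.d/S00local-raid
# apply immediately instead of waiting for the next boot
/usr/local/etc/rc.d/S00local-raid start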

 

This has to be done in maintenance mode:

 

# add dir_index to defaults
sed -i -e '/base_features/s/$/,dir_index/' /etc/mke2fs.conf

syno_poweroff_task -d; vgchange -a y

tune2fs -O dir_index /dev/vg1/volume_#
e2fsck -fDC0 /dev/vg1/volume_#

# parity = raid0 ? 0 : raid1/10 ? disks/2 : raid6 ? 2 : 1
# stride=mdstat chunk / block, stripe_width=stride * (# disks - parity)

awk '/md2/{disks=NF-4;parity=/raid0/?0:/raid1/?disks/2:/raid6/?2:1
getline;chunk=$7;stride=chunk/4;stripe=stride*(disks-parity)
printf "stride=%d,stripe-width=%d\n", stride, stripe }' /proc/mdstat

# 4/R5
tune2fs -E stride=16,stripe-width=48 /dev/vg1/volume_#
# 8/R6
tune2fs -E stride=16,stripe-width=96 /dev/vg1/volume_#
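# optional: check what actually landed in the superblock
tune2fs -l /dev/vg1/volume_# | grep -iE 'stride|stripe'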

vgchange -a n; syno_poweroff_task -r

 

Regards.


Hi. No, I use RDM, but some optimizations may be attempted.

...

 

Thank you very much for your modifying scripts. :smile:

You change your RAID settings very heavily, but if it works stably for you, it's all fine.

 

If I have time, I will test which mode works best for me (all hard disks as single RAID0 volumes, or putting the B120i controller in plain AHCI mode). I read that a lot of people have performance problems (slow reads and writes) under ESXi 6.5 with the onboard B120i controller (the hard disks/SSDs used for the VM storage are affected by this problem, not the disks passed to XPEnology itself).


 

About the RAID settings: it's not my fault if Synology boxes aren't optimized at all... :roll:

 

I forgot about the sysctl tunings.

 

Lines with a double ## are the defaults:

 

kernel.core_pattern = /volume1/@core/%e

# Use the full range of ports
## net.ipv4.ip_local_port_range = 32768 61000
net.ipv4.ip_local_port_range = 32768 65535

# Increase Linux autotuning TCP buffer limits
## net.core.rmem_max = 212992
## net.core.wmem_max = 212992
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
## net.ipv4.tcp_rmem = 4096 16384 4194304
## net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# mdadm resync limits
## dev.raid.speed_limit_max = 200000
## dev.raid.speed_limit_min = 10000
dev.raid.speed_limit_min = 50000

# don't remember
net.ipv4.tcp_window_scaling = 1

# swappiness is a parameter which sets the kernel's balance between reclaiming
# pages from the page cache and swapping process memory. The default value is 60.
# If you want kernel to swap out more process memory and thus cache more file
# contents increase the value. Otherwise, if you would like kernel to swap less
# decrease it.
## vm.swappiness = 60
vm.swappiness = 50

# Contains, as a percentage of total system memory, the number of pages at
# which a process which is generating disk writes will itself start writing
# out dirty data.
## vm.dirty_ratio = 20
vm.dirty_ratio = 30
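These go in /etc/sysctl.conf; assuming you have appended them there, something like this should apply them without a reboot:

sysctl -p /etc/sysctl.conf
# or try a single value at runtime first
sysctl -w vm.swappiness=50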

 

Pending changes, for when I have time:

 

# Increase number of incoming connections that can queue up before dropping
### net.core.somaxconn = 128
## net.core.somaxconn = 65535

# don't remember
## net.core.optmem_max = 20480
# net.core.optmem_max = 40960

# Disable source routing and redirects
# net.ipv4.conf.all.send_redirects = 0
# net.ipv4.conf.default.send_redirects = 0
## net.ipv4.conf.all.accept_redirects = 0
# net.ipv4.conf.default.accept_redirects = 0
## net.ipv4.conf.all.accept_source_route = 0
# net.ipv4.conf.default.accept_source_route = 0

# Disable TCP slow start on idle connections
## net.ipv4.tcp_slow_start_after_idle = 1
# net.ipv4.tcp_slow_start_after_idle = 0

# Disconnect dead TCP connections after 1 minute
## net.ipv4.tcp_keepalive_time = 7200
# net.ipv4.tcp_keepalive_time = 60

# Send SYN cookies when the SYN backlog overflows (enabled by default)
## net.ipv4.tcp_syncookies = 1

# Determines the wait time between isAlive interval probes
## net.ipv4.tcp_keepalive_intvl = 75
# net.ipv4.tcp_keepalive_intvl = 15

# Determines the number of probes before timing out
## net.ipv4.tcp_keepalive_probes = 9
# net.ipv4.tcp_keepalive_probes = 5

# don't remember
## net.ipv4.tcp_timestamps = 1

# Increase the length of the network device input queue
## net.core.netdev_max_backlog = 1000
# net.core.netdev_max_backlog = 5000

# Handle SYN floods and large numbers of valid HTTPS connections
## net.ipv4.tcp_max_syn_backlog = 128
# net.ipv4.tcp_max_syn_backlog = 8096

# Wait a maximum of 5 * 2 = 10 seconds in the TIME_WAIT state after a FIN, to handle
# any remaining packets in the network.
## net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
# net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 5

# Allow a high number of timewait sockets
## net.ipv4.tcp_max_tw_buckets = 16384
# net.ipv4.tcp_max_tw_buckets = 65536

# Timeout broken connections faster (amount of time to wait for FIN)
## net.ipv4.tcp_fin_timeout = 60
# net.ipv4.tcp_fin_timeout = 10

# Let the networking stack reuse TIME_WAIT connections when it thinks it's safe to do so
## net.ipv4.tcp_tw_reuse = 0
# net.ipv4.tcp_tw_reuse = 1

# If your servers talk UDP, also up these limits
## net.ipv4.udp_rmem_min = 4096
## net.ipv4.udp_wmem_min = 4096
# net.ipv4.udp_rmem_min = 8192
# net.ipv4.udp_wmem_min = 8192

# don't remember
## kernel.sched_migration_cost_ns = 500000

# Contains, as a percentage of total system memory, the number of pages at
# which the pdflush background writeback daemon will start writing out dirty
# data.
## vm.dirty_background_ratio = 10

# This tunable is used to define when dirty data is old enough to be eligible
# for writeout by the pdflush daemons. It is expressed in 100'ths of a second.
# Data which has been dirty in memory for longer than this interval will be
# written out next time a pdflush daemon wakes up.
## vm.dirty_expire_centisecs = 3000
# vm.dirty_expire_centisecs = 6000

# This is used to force the Linux VM to keep a minimum number of kilobytes
# free. The VM uses this number to compute a pages_min value for each lowmem
# zone in the system. Each lowmem zone gets a number of reserved free pages
# based proportionally on its size.
## vm.min_free_kbytes = 65536

# Controls the tendency of the kernel to reclaim the memory which is used for
# caching of directory and inode objects.
# At the default value of vfs_cache_pressure = 100 the kernel will attempt to
# reclaim dentries and inodes at a "fair" rate with respect to pagecache and
# swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
# to retain dentry and inode caches. Increasing vfs_cache_pressure beyond 100
# causes the kernel to prefer to reclaim dentries and inodes.
## vm.vfs_cache_pressure=100
# vm.vfs_cache_pressure=400

# pages read/written per swap I/O, as a power of two (default 3 = 8 pages = 32 KiB)
## vm.page-cluster=3
# vm.page-cluster=16
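Before flipping any of the pending ones, it's worth recording the running values first, e.g.:

# snapshot current values for later comparison
sysctl -a > /tmp/sysctl.before
sysctl net.ipv4.tcp_fin_timeout vm.vfs_cache_pressure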

 

RAID0 is really dangerous... if you lose one drive, you lose everything! I hope you have backups?

 

HP Gen8 + ESXi + RDM: 4x6TB in RAID5

 

root@xpenology:/volume1# dd if=/dev/zero of=zero bs=4k count=1000000

4096000000 bytes (4.1 GB) copied, 15.8706 s, 258 MB/s

root@xpenology:/volume1# dd if=zero of=/dev/null bs=4k count=1000000

4096000000 bytes (4.1 GB) copied, 27.6033 s, 148 MB/s

 

Plain Synology DS1812+: 8x3TB in RAID6

 

root@synology:/volume1# dd if=/dev/zero of=zero bs=4k count=1000000

4096000000 bytes (4.1 GB) copied, 26.3938 s, 155 MB/s

root@synology:/volume1# dd if=zero of=/dev/null bs=4k count=1000000

4096000000 bytes (4.1 GB) copied, 28.238 s, 145 MB/s
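A note on the numbers: plain dd like this goes through the page cache, so the results are flattering. Re-running with a flush on writes and direct I/O on reads would be closer to raw disk speed, if your dd supports these options:

dd if=/dev/zero of=zero bs=4k count=1000000 conv=fdatasync
dd if=zero of=/dev/null bs=4k count=1000000 iflag=direct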


The update to Update 7 fails too.

Also tried Update 6 first; exactly the same results.

So I remain on the Update 5 version, waiting for new progress...

BTW: AMD CPU, N54L, ...

Have you tried the upgrade from the Control Panel on the DiskStation? I just did the 7th update and everything went fine and straightforward...

After the online upgrade failed, I tried downloading the update file first, then updating manually. Still no luck.


Did you upgrade from DSM 5 to DSM 6, or clean-install DSM 6?

 

I'm asking because I also have an HP N54L and successfully installed Update 7 with no problem at all.

 

Clean-installed DSM 6 with Update 5.

I did not install Update 6; I just did the online upgrade to Update 7. It reached 100%, but then prompted 'update failed...'.

Tried manual Update 6 / Update 7; fails too.

It's great to know the N54L works with Update 7; I'll try it again. Thanks.


 

I don't know jun's loader; I'm using quicknick's. However, to install Update 6, I had to symlink both boot partitions to synoboot.

Something like:

 

ssh diskstation
sdef=$(sfdisk -l | awk '/dev/&&/ ef /{print $1}')
ln -s ${sdef#/dev/} /dev/synoboot1
sd83=$(sfdisk -l | awk '/dev/&&/ 83 /{print $1}')
ln -s ${sd83#/dev/} /dev/synoboot2
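# sanity check: both links should now exist
ls -l /dev/synoboot*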

 

Yes, synoupgrade has to mount /dev/synoboot1 and/or /dev/synoboot2 to update the kernel contained in flashupdate from the .pat tar archive.


Worked perfectly on the N54L, coming from 5.0 on Nanoboot.

Thank you very much.

Just one thing: I have two network cards. The integrated one worked.

The other one, on a PCIe port: nothing...

I see it in the interface, but when I plug in the cable, I cannot ping...

It's an RTL8111 chipset...

Maybe I forgot something?

EDIT:

It worked perfectly; I made a mistake...

I used the same MAC as my other NAS, and with the rule on my router for that MAC, the same IP was assigned...

So the IP address was in conflict and the new Synology was not reachable... I saw the problem when I plugged in both network cards at the same time.

So no problem for me with the N54L and 6x2TB.

Congratulations.

PS: I applied Update 6.

This is good to know, as I will be making the same upgrade today.


 


 

YES! That's correct: the synoboot symlinks did it. Now on Update 7.

The problem was that jun's loader did not present the USB stick with the official synoboot VID 0xf400 / PID 0xf401, so sfdisk -l showed no synoboot device.

Thanks a lot!

Addendum:

In other words, jun's loader protects the USB stick from being changed. ^_^
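For reference, jun's loader keeps the stick's USB IDs in its grub.cfg; the fragment looks something like this (the 0x058f/0x6387 pair is the common default there; the values should be whatever your own stick reports, not the official synoboot pair above):

set vid=0x058f
set pid=0x6387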


Hi, I have the HP Gen8.

Yesterday I tried to apply Update 7, but it did not work ("Fail to update the file. The file is probably corrupt").

Then at the boot loader I chose option 2. Everything was installed again, keeping the data. But now I cannot find the Synology; it has no IP.

Any solution?


Just one question... currently I'm toying around with my new XPE bare-metal system using jun's 1.01 loader.

Everything seems fine, but I can't create Btrfs volumes... isn't that supposed to work with this loader?

I tried:

- single disk with EXT4: OK

- single disk with Btrfs: NOK

- disk group with EXT4: OK

- disk group with Btrfs: NOK

My HBA is an LSI 9301-16i.

Any suggestions?

Update:

Solved it with a complete fresh install of DSM. I'll stick to the "new" RAID Groups; maybe the next DSM version won't support SHR at all anymore... just to stay safe.


stick to "new" raid-groups maybe in next DSM version no support of SHR at all anymore...just to "stay" safe

 

Don't expect this to happen from Synology's side, as SHR is one of the USPs they use for their systems.

I understand the plain-RAID part for more commercial hardware and systems with large numbers of disks, as SHR takes more CPU/memory load and is not advised for enterprise use.

But this still should not go in the direction of removing SHR from DSM entirely; in DSM 6 it is still supported, just not on all types of machines.

But OK, that's all assumptions; we will see...


Given the differing reports on successful updates to Update 7, I wanted to share my experience:

- N54L / 4GB / 4x3TB disks (in SHR RAID5)

- jun's 1.01 loader

- Started from Version: 6.0.2-8451 Update 6

- Used the standard GUI option to update to Update 7

- Using SHR in my setup

The update was successful.


OK, the problem is solved. It was my mistake: the BIOS had been reset, and the settings for the modification along with it.

I had a few problems:

- HP N54L with the Russian BIOS mod for five HDDs and hot swap

- Fifth HDD not supported in DSM 6.0.1

- SHR worked in 5.2 (don't know how to activate it in 6.0.1). Is this the problem?

- Not sure about SataPortMap; I've read different things, but tried 411/41/51/42 and nothing works.

- Tried to downgrade (following the tutorial in the forum/on YouTube): "error: file /grub/i386-pc/normal.mod not found. Entering rescue mode... grub rescue>"

I don't know what to do now. I'm very tired...
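For the grub rescue prompt, the generic GRUB recovery steps may get you booting again; the partition below is only a guess for a typical loader stick, so use ls first to find the right one:

grub rescue> ls
grub rescue> set prefix=(hd0,msdos1)/grub
grub rescue> set root=(hd0,msdos1)
grub rescue> insmod normal
grub rescue> normal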


Hello guys,

I'll post my experience with a new motherboard I got: everything worked with the older version of jun's loader just by swapping the board and using the same SATA port numbers for the disks.

The motherboard is amazing, with 8 SATA ports and an LGA1151 socket for E3-1200 series Xeons.

It's the ASRock C236 WSI; you can take a look here: http://asrockrack.com/general/productde ... C236%20WSI

I use the internal USB 3 port for jun's loader, and 8 Barracuda hard disks, all on the C236 onboard controller.

Both NICs work with the old version of jun's loader; I just set up link aggregation (teaming) and everything works fine.

Now I need to find a 4-port SATA 3 controller that works nicely with my config, to be able to expand my storage further...

I only post this in the hope of helping anyone holding back from the C232 or C236 chipsets...

Hey jun! Thanks again!

/wave
