XPEnology Community

gits68

Posts posted by gits68

  1. Here is another one: at boot, I reconfigure syslog-ng to drop those error messages:

     

    b='\033[34m' # setaf 4
    y='\033[33m' # setaf 3
    g='\033[32m' # setaf 2
    r='\033[31m' # setaf 1
    w='\033[m' # sgr0
    tab=${tab:-$(printf '\t')}
    ws="[ ${tab}]"
    # foo: remember the target file and print its name as a progress tag
    foo() { file=$1; tag=$1; echo -n "${tag} ..."; }
    # bar: re-run the verification command and report updated/failure in color
    bar() { eval "$@" && echo -e " ${b}updated${w}" || echo -e " ${r}failure${w}"; }
    # foobar: if test $2 already holds, report success; otherwise run the
    # update command $3 (which reads the heredoc on stdin), then re-verify
    # with $4 (defaulting to $2)
    foobar() {
            foo "$1"
            if eval "$2"; then
                    echo -e " ${g}success${w}"
            else
                    eval "$3"
                    bar "${4:-$2}"
            fi
    }
    
    case $(/bin/uname -u) in *cedarview*) ;; *) # xpenology
    restart=0
    foobar  /etc/syslog-ng/patterndb.d/smart.conf \
            '[ -f "${file}" ]' \
            'cat > "${file}"; restart=1' << EOF
    filter f_smart_failed { (program("synostoraged") or program("sk") or program("SystemInfo.cgi")) and match("^(disk|[Ss]mart).*[Ff]ail" value("MESSAGE")) };
    EOF
    foobar  /etc/syslog-ng/patterndb.d/include/not2msg/smart \
            '[ -f "${file}" ]' \
            'cat > "${file}"; restart=1' << EOF
    and not filter(f_smart_failed)
    EOF
    [ ${restart} = 1 ] && killall -1 syslog-ng
    esac
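
    Before reloading, the merged configuration can be syntax-checked first (a minimal sketch; it assumes DSM's syslog-ng supports the standard --syntax-only option):

    # parse the whole configuration without (re)starting the daemon
    syslog-ng --syntax-only && killall -1 syslog-ng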


     

  2. I got a DSM (6.0.2-8451 Update 7) reboot when setting the MTU to 9000 on an ESXi vmxnet3 NIC. Has anyone faced the same issue?

     

    Hi Hunterok, I am having the same problem: every time I set the MTU to 9000, the server reboots. It seems there is no solution yet.

     

    A subsidiary question: have you enabled MTU 9000 at the vSwitch level?

     

    [root@esxi:~] esxcli network vswitch standard list
    vSwitch0
      Name: vSwitch0
      Class: etherswitch
      Num Ports: 1536
      Used Ports: 11
      Configured Ports: 128
      MTU: 1500
      CDP Status: listen
      Beacon Enabled: false
      Beacon Interval: 1
      Beacon Threshold: 3
      Beacon Required By:
      Uplinks: vmnic1, vmnic0
      Portgroups: VM Network, Management Network
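
    The listing above shows the vSwitch still at the default MTU of 1500. If you want jumbo frames end to end, the vSwitch has to be raised first (a sketch using the standard esxcli options; vSwitch0 is the name taken from the listing above):

    esxcli network vswitch standard set -v vSwitch0 -m 9000
    # the MTU field should now read 9000
    esxcli network vswitch standard list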

     

    https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1007654

     

    PS: I'm not using jumbo frames.

  3. Hi guys,

     

    I keep getting this error: "Segmentation fault (core dumped)" when I try to mount /dev/synoboot1 /mnt

     

    Try the following command before mounting /dev/synoboot*:

     

    echo 1 > /proc/sys/kernel/syno_install_flag
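
    A complete sequence might look like this (a sketch: the flag gates access to the synoboot partitions; resetting it to 0 afterwards is my own precaution, not something DSM requires):

    echo 1 > /proc/sys/kernel/syno_install_flag
    mount /dev/synoboot1 /mnt
    # ... edit the boot partition ...
    umount /mnt
    echo 0 > /proc/sys/kernel/syno_install_flag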

  4. Hi,

     

    I forgot to say that $sataportmap isn't expanded at all, as the kernel command line shows:

     

    cat /proc/cmdline
    ihd_num=0 syno_hdd_powerup_seq=0 HddHotplug=0 syno_hw_version=DS3615xs vender_format_version=2 console=ttyS0,115200n8 withefi root=/dev/md0 sn=XXXXXXXXXX netif_num=2 mac1=001132XXXXXX mac2=001132XXXXXX synoboot_satadom=1 DiskIdxMap=0C $sataportmap SasIdxMap=0 quiet

     

    So I hardcoded it in grub-quicknick-v2.cfg:

     

    set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=4 SasIdxMap=0'
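
    After a reboot, you can confirm the value really reached the kernel:

    grep -o 'SataPortMap=[0-9]*' /proc/cmdline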

  5. Update 7 fails too.

    Also tried Update 6 first: exactly the same results.

    So I remain on the Update 5 version for now, waiting for new progress...

    BTW: AMD CPU, N54L, ...

     

    Did you upgrade from DSM 5 to DSM 6, or clean-install DSM 6?

     

    I'm asking because I also have an HP N54L and successfully installed Update 7 with no problems at all.

     

    Clean-installed DSM 6 with Update 5.

    I did not install Update 6, I just did the online upgrade to Update 7: it reached 100%, but then prompted 'update failed...'.

    Trying manual Update 6 / Update 7 fails too.

    It's great to know the N54L works with Update 7, I'll try it again. Thanks.

     

    I didn't know about Jun's loader; I'm using QuickNick's.

    However, to install Update 6, I had to symlink both boot partitions to synoboot.

     

    Something like:

     

    ssh diskstation
    # link the EFI (type ef) boot partition as synoboot1
    sdef=$(sfdisk -l | awk '/dev/&&/ ef /{print $1}')
    ln -s ${sdef#/dev/} /dev/synoboot1
    # link the Linux (type 83) partition as synoboot2
    sd83=$(sfdisk -l | awk '/dev/&&/ 83 /{print $1}')
    ln -s ${sd83#/dev/} /dev/synoboot2
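
    You can verify that both links resolve before launching the upgrade (note the symlinks live in /dev, which is recreated at boot, so they vanish on reboot):

    ls -l /dev/synoboot1 /dev/synoboot2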

     

    Yes, synoupgrade has to mount /dev/synoboot1 and/or synoboot2 to update the kernel contained in flashupdate from the .pat tar archive.

  6. Hi, no, I use RDM, but some optimizations may be attempted.

    ...

     

    Thank you very much for your modifying scripts. :smile:

    You changed your RAID settings very heavily, but if it works stably for you, that's all fine.

    If I have time I will test which mode works best for me (all hard disks as single RAID0, or putting the B120i controller in plain AHCI mode). I read that a lot of people have performance problems (slow reads and writes) under ESXi 6.5 with the onboard B120i controller (the hard disks/SSDs used for the VM storage are affected by this problem, not the disks given to XPEnology itself).

     

    About the RAID settings: it's not my fault that Synology boxes aren't optimized at all... :roll:

     

    I forgot about the sysctl tunings.

    Lines commented out with a double ## show the defaults.

     

    kernel.core_pattern = /volume1/@core/%e
    
    # Use the full range of ports
    ## net.ipv4.ip_local_port_range = 32768 61000
    net.ipv4.ip_local_port_range = 32768 65535
    
    # Increase Linux autotuning TCP buffer limits
    ## net.core.rmem_max = 212992
    ## net.core.wmem_max = 212992
    net.core.rmem_max = 16777216
    net.core.wmem_max = 16777216
    ## net.ipv4.tcp_rmem = 4096 16384 4194304
    ## net.ipv4.tcp_wmem = 4096 16384 4194304
    net.ipv4.tcp_rmem = 4096 87380 16777216
    net.ipv4.tcp_wmem = 4096 65536 16777216
    
    # mdadm resync limits
    ## dev.raid.speed_limit_max = 200000
    ## dev.raid.speed_limit_min = 10000
    dev.raid.speed_limit_min = 50000
    
    # don't remember
    net.ipv4.tcp_window_scaling = 1
    
    # swappiness is a parameter which sets the kernel's balance between reclaiming
    # pages from the page cache and swapping process memory. The default value is 60.
    # If you want kernel to swap out more process memory and thus cache more file
    # contents increase the value. Otherwise, if you would like kernel to swap less
    # decrease it.
    ## vm.swappiness = 60
    vm.swappiness = 50
    
    # Contains, as a percentage of total system memory, the number of pages at
    # which a process which is generating disk writes will itself start writing
    # out dirty data.
    ## vm.dirty_ratio = 20
    vm.dirty_ratio = 30
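
    To apply these, put them in /etc/sysctl.conf and reload; no reboot is needed (a sketch: sysctl -p re-reads /etc/sysctl.conf, and a single value can be tried first with sysctl -w):

    # try one value without persisting it
    sysctl -w vm.swappiness=50
    # then load everything from /etc/sysctl.conf
    sysctl -p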

     

    Pending changes, for when I have time:

     

    # Increase number of incoming connections that can queue up before dropping
    ### net.core.somaxconn = 128
    ## net.core.somaxconn = 65535
    
    # don't remember
    ## net.core.optmem_max = 20480
    # net.core.optmem_max = 40960
    
    # Disable source routing and redirects
    # net.ipv4.conf.all.send_redirects = 0
    # net.ipv4.conf.default.send_redirects = 0
    ## net.ipv4.conf.all.accept_redirects = 0
    # net.ipv4.conf.default.accept_redirects = 0
    ## net.ipv4.conf.all.accept_source_route = 0
    # net.ipv4.conf.default.accept_source_route = 0
    
    # Disable TCP slow start on idle connections
    ## net.ipv4.tcp_slow_start_after_idle = 1
    # net.ipv4.tcp_slow_start_after_idle = 0
    
    # Disconnect dead TCP connections after 1 minute
    ## net.ipv4.tcp_keepalive_time = 7200
    # net.ipv4.tcp_keepalive_time = 60
    
    # Increase the number of outstanding syn requests allowed.
    ## net.ipv4.tcp_syncookies = 1
    
    # Determines the wait time between isAlive interval probes
    ## net.ipv4.tcp_keepalive_intvl = 75
    # net.ipv4.tcp_keepalive_intvl = 15
    
    # Determines the number of probes before timing out
    ## net.ipv4.tcp_keepalive_probes = 9
    # net.ipv4.tcp_keepalive_probes = 5
    
    # don't remember
    ## net.ipv4.tcp_timestamps = 1
    
    # Increase the length of the network device input queue
    ## net.core.netdev_max_backlog = 1000
    # net.core.netdev_max_backlog = 5000
    
    # Handle SYN floods and large numbers of valid HTTPS connections
    ## net.ipv4.tcp_max_syn_backlog = 128
    # net.ipv4.tcp_max_syn_backlog = 8096
    
    # Wait a maximum of 5 * 2 = 10 seconds in the TIME_WAIT state after a FIN, to handle
    # any remaining packets in the network.
    ## net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 120
    # net.ipv4.netfilter.ip_conntrack_tcp_timeout_time_wait = 5
    
    # Allow a high number of timewait sockets
    ## net.ipv4.tcp_max_tw_buckets = 16384
    # net.ipv4.tcp_max_tw_buckets = 65536
    
    # Timeout broken connections faster (amount of time to wait for FIN)
    ## net.ipv4.tcp_fin_timeout = 60
    # net.ipv4.tcp_fin_timeout = 10
    
    # Let the networking stack reuse TIME_WAIT connections when it thinks it's safe to do so
    ## net.ipv4.tcp_tw_reuse = 0
    # net.ipv4.tcp_tw_reuse = 1
    
    # If your servers talk UDP, also up these limits
    ## net.ipv4.udp_rmem_min = 4096
    ## net.ipv4.udp_wmem_min = 4096
    # net.ipv4.udp_rmem_min = 8192
    # net.ipv4.udp_wmem_min = 8192
    
    # don't remember
    ## kernel.sched_migration_cost_ns = 500000
    
    # Contains, as a percentage of total system memory, the number of pages at
    # which the pdflush background writeback daemon will start writing out dirty
    # data.
    ## vm.dirty_background_ratio = 10
    
    # This tunable is used to define when dirty data is old enough to be eligible
    # for writeout by the pdflush daemons. It is expressed in 100'ths of a second.
    # Data which has been dirty in memory for longer than this interval will be
    # written out next time a pdflush daemon wakes up.
    ## vm.dirty_expire_centisecs = 3000
    # vm.dirty_expire_centisecs = 6000
    
    # This is used to force the Linux VM to keep a minimum number of kilobytes
    # free. The VM uses this number to compute a pages_min value for each lowmem
    # zone in the system. Each lowmem zone gets a number of reserved free pages
    # based proportionally on its size.
    ## vm.min_free_kbytes = 65536
    
    # Controls the tendency of the kernel to reclaim the memory which is used for
    # caching of directory and inode objects.
    # At the default value of vfs_cache_pressure = 100 the kernel will attempt to
    # reclaim dentries and inodes at a "fair" rate with respect to pagecache and
    # swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
    # to retain dentry and inode caches. Increasing vfs_cache_pressure beyond 100
    # causes the kernel to prefer to reclaim dentries and inodes.
    ## vm.vfs_cache_pressure=100
    # vm.vfs_cache_pressure=400
    
    # 64k (sector size) per I/O operation in the swap
    ## vm.page-cluster=3
    # vm.page-cluster=16

     

    RAID0 is really dangerous... if you lose one drive, you lose everything! I hope you have backups?

     

    HP Gen8 + ESXi + RDM: 4x6TB in RAID 5

     

    root@xpenology:/volume1# dd if=/dev/zero of=zero bs=4k count=1000000

    4096000000 bytes (4.1 GB) copied, 15.8706 s, 258 MB/s

    root@xpenology:/volume1# dd if=zero of=/dev/null bs=4k count=1000000

    4096000000 bytes (4.1 GB) copied, 27.6033 s, 148 MB/s

     

    Plain Synology DS1812+: 8x3TB in RAID 6

     

    root@synology:/volume1# dd if=/dev/zero of=zero bs=4k count=1000000

    4096000000 bytes (4.1 GB) copied, 26.3938 s, 155 MB/s

    root@synology:/volume1# dd if=zero of=/dev/null bs=4k count=1000000

    4096000000 bytes (4.1 GB) copied, 28.238 s, 145 MB/s
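
    Note that these dd figures include the Linux page cache. For a fairer measurement, you can force a flush on the write and drop the caches before the read (a sketch, assuming GNU dd and a root shell):

    # write: elapsed time includes the final flush to disk
    dd if=/dev/zero of=zero bs=4k count=1000000 conv=fdatasync
    # read: drop the page cache first so data really comes from the array
    echo 3 > /proc/sys/vm/drop_caches
    dd if=zero of=/dev/null bs=4k count=1000000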

  7. @quicknick

     

    S30acpid.sh may start multiple instances of acpid if relaunched

     

    Fixed script:

    #!/bin/sh
    #
    # This goes in /usr/local/etc/rc.d and gets run at boot-time.
    
    PATH_ACPID=/bin/acpid
    
    status() { killall -q -0 acpid; }
    
    case "$1" in
    
    start)
           if [ -x "$PATH_ACPID" ] ; then
    #       insmod /lib/modules/button.ko
    #       insmod /lib/modules/evdev.ko
                   status && exit
                   echo "start acpid"
                   $PATH_ACPID
                   logger -p daemon.info "$0 started acpid"
           fi
           ;;
    
    stop)
           status || exit 0
           echo "stop acpid"
           killall -q acpid
           logger -p daemon.info "$0 stopped acpid"
           ;;
    
    status)
           status
           ;;
    
    prestart|prestop)
           exit 0
           ;;
    
    *)
           echo "usage: $0 { start | stop | status }" >&2
           exit 1
           ;;
    
    esac
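
    To deploy it (a sketch, following the rc.d convention mentioned in the script header):

    cp S30acpid.sh /usr/local/etc/rc.d/
    chmod +x /usr/local/etc/rc.d/S30acpid.sh
    /usr/local/etc/rc.d/S30acpid.sh start
    /usr/local/etc/rc.d/S30acpid.sh status && echo "acpid is running"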

  8. @gits68: Thank you very much for your detailed upgrade how-to!!

    Do you run the built-in B120i controller in RAID mode or AHCI mode?

    Do you suffer from bad performance with SSDs or hard disks connected to the B120i?

    These questions are the only things holding me back from going from bare-metal XPEnology to ESXi. :grin:

     

    Hi, no, I use RDM, but some optimizations may be attempted.

     

    root@xpenology:/usr/local/etc/rc.d# cat S00local-raid

    #!/bin/bash
    
    case $1 in
    
    'start')
                   : FALLTHROUGH
                   ;;
    
    'stop'|'prestart'|'prestop'|'status')
                   exit 0
                   ;;
    
    *)
                   echo "usage: ${0##*/} " >&2
                   exit 1
                   ;;
    
    esac
    
    shopt -s extglob
    
    function log {
           typeset device=$1 variable=$2 source_value=$3 target_value=$4 path=$5
           typeset tmp=${device}_${variable##*/}
    
           [[ -s /tmp/${tmp} ]] ||
           echo ${source_value} > /tmp/${tmp}
    
           if [[ ${source_value} != ${target_value} ]]; then
                   echo "${path}: ${source_value} -> ${target_value}"
                   return 1
           fi
    }
    
    function sys_block {
           typeset device=$1 variable=$2 target_value=$3
           typeset path=/sys/block/${device}/${variable}
    
           read source_value < ${path}
    
           log ${device} ${variable} ${source_value} ${target_value} ${path} ||
           echo ${target_value} > ${path}
    }
    function block_dev {
           typeset device=$1 target_value=$2
           typeset path=/dev/${device}
    
           source_value=$(blockdev --getra ${path})
    
           log ${device} read_ahead ${source_value} ${target_value} ${path}/read_ahead ||
           blockdev --setra ${target_value} ${path}
    }
    function pow2
    {
           awk '
           function log2(x) { return log(x) / log(2) }
           function ceil(x) { return x == int(x) ? x : int(x + 1) }
           function pow2(x) { return 2 ^ ceil(log2(x)) }
           BEGIN { print pow2(ARGV[1]); exit }' "$@"
    }
    
    physical_volumes= sep=
    logical_volumes= _sep=
    for model in /sys/block/sd*/device/model; do
           read source_value < ${model}
           [[ ${source_value} = 'Virtual disk' ]] && continue
           [[ ${source_value} = 'LOGICAL VOLUME' ]] && lv=1 || lv=0
           # [[ ${source_value} = *EFRX* ]] || continue
    
           source=${model%/device/model}
           target=$(readlink ${source})
           # was [[ ${target} = */ata*/host* ]] || continue
           [[ ${target} = */usb*/host* ]] && continue
    
           disk_device=${source#/sys/block/}
           if (( lv )); then
                   logical_volumes=${logical_volumes}${_sep}${disk_device}; _sep=' '
           else
                   physical_volumes=${physical_volumes}${sep}${disk_device}; sep=' '
           fi
    done
    
    ## read_ahead=384 # default
    read_ahead=2048
    for disk_device in ${physical_volumes} ${logical_volumes}; do
           block_dev ${disk_device} ${read_ahead}
    done
    
    ## queue_depth=31 # physical
    ## queue_depth=64 # logical
    # queue_depth=1 # disabled
    queue_depth=3 # almost disabled
    for disk_device in ${physical_volumes}; do
           sys_block ${disk_device} device/queue_depth ${queue_depth}
    done
    
    ## nr_requests=128 # default
    # nr_requests=64
    for disk_device in ${physical_volumes} ${logical_volumes}; do
           read queue_depth < /sys/block/${disk_device}/device/queue_depth
           (( nr_requests = $(pow2 ${queue_depth}) * 2 ))
           sys_block ${disk_device} queue/nr_requests ${nr_requests}
    done
    
    raid_device='md2'
    
    ## read_ahead=1536 # physical
    ## read_ahead=768 # logical
    read_ahead=65536
    block_dev ${raid_device} ${read_ahead}
    
    ## stripe_cache_size=1024 # physical
    ## stripe_cache_size=4096 # logical
    stripe_cache_size=32768
    # stripe_cache_size=8192
    sys_block ${raid_device} md/stripe_cache_size ${stripe_cache_size}
    
    # eof
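
    After a run, the applied values can be spot-checked (a sketch; md2 and sda are the device names used above, adjust them to your layout):

    blockdev --getra /dev/md2                   # expect 65536
    cat /sys/block/md2/md/stripe_cache_size     # expect 32768
    cat /sys/block/sda/device/queue_depth       # expect 3 on a physical disk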

     

    This has to be done in maintenance mode:

     

    # add dir_index to defaults
    sed -i -e '/base_features/s/$/,dir_index/' /etc/mke2fs.conf
    
    syno_poweroff_task -d; vgchange -a y
    
    tune2fs -O dir_index /dev/vg1/volume_#
    e2fsck -fDC0 /dev/vg1/volume_#
    
    # parity = raid0 ? 0 : raid1/10 ? disks/2 : raid6 ? 2 : 1
    # stride = mdstat chunk / block size, stripe_width = stride * (disks - parity)
    
    awk '/md2/{disks=NF-4;parity=/raid0/?0:/raid1/?disks/2:/raid6/?2:1
    getline;chunk=$7;stride=chunk/4;stripe=stride*(disks-parity)
    printf "stride=%d,stripe-width=%d\n", stride, stripe }' /proc/mdstat
    
    # 4/R5
    tune2fs -E stride=16,stripe-width=48 /dev/vg1/volume_#
    # 8/R6
    tune2fs -E stride=16,stripe-width=96 /dev/vg1/volume_#
    
    vgchange -a n; syno_poweroff_task -r
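
    Once the volume is back online, the applied ext4 options can be verified (volume_1 is an example, substitute your volume number; tune2fs -l prints the RAID stride and stripe width when they are set):

    tune2fs -l /dev/vg1/volume_1 | grep -i raid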

     

    Regards.

  9. I am confused. Do you mean that we still need to use a physical USB drive to boot in a VM environment to get rid of the 50MB HDD? Back in the 5.2 days, I remember I didn't see this boot drive under DSM, yet I was booting from the vmdk directly.

    Could it be that rmmod / mod_del doesn't work in this loader?

    I am okay with leaving this HDD in the system, but it is annoying that it generates a SMART error every few seconds.

     

    You don't need to use a USB drive for bootloading XPEnology under ESXi. That is just the way I wanted it to work, so as to a) save space on my SSD, even though it is only a few hundred MBs, b) make use of the already-there internal USB port (in fact I even use the MicroSD slot for running ESXi), and c) in relation to the previous point, booting from a USB drive means I can easily back up and replace the bootloader in case anything goes wrong.

     

    The entire concept of my setup is to isolate XPEnology as much as possible from the other VMs on the same Gen8 physical machine, so as to improve reliability and increase the chance of revival. To that end I also pass through the entire B120i (run in AHCI mode) for XPEnology's use only; that way the drives connected to the B120i work just like proper Synology drives, without the layer of vmdk or the use of RDM. Touch wood, if my Gen8 decides to stop working, I can still unplug my drives and use Linux to read all the data directly. This is a bit off topic, but I hope you get the idea.

     

    Now I know there are ways to overcome the 50MB issue, especially when the bootloader runs from a vmdk on an SSD and such. Mine is just a way that I worked out and found suitable for my requirements. I think there are still things that I could fine-tune, but my XPEnology is running so smoothly and stably that I would rather stick with it for now. A side note: I happened to try upgrading my ESXi 6 to 6.5 yesterday and for some reason it didn't work out well. I could not downgrade from 6.5 to 6 either, so I did a reinstallation of ESXi 6. From this exercise, I was able to add my XPEnology back to ESXi easily and everything runs as if I had never upgraded; that proves the reliability that I am after.

     

    About how to upgrade from ESXi 6.0 to 6.5...

     

    1st solution

     

    Try the HPE Custom Image for VMware ESXi 6.5 Install CD, or the HPE Custom Image for VMware ESXi 6.5 Offline Bundle:

    https://my.vmware.com/fr/group/vmware/d ... ductId=614

    For historical reasons, I use the solution below, since I began with it.

    Well, at the beginning of the year there were no up-to-date HPE images, so I preferred to build an up-to-date image rather than use a one-year-old one...

     

    2nd solution

     

    Build yourself an up-to-date image.

    PowerCLI:

    https://my.vmware.com/group/vmware/deta ... ductId=614

    VMware Tools:

    https://my.vmware.com/fr/group/vmware/d ... ductId=615

    ESXui (HTML5) & VMware Remote Console (VMRC) VIBs:

    https://labs.vmware.com/flings/esxi-emb ... ost-client

    ESXi-Customizer:

    https://www.v-front.de/p/esxi-customizer-ps.html

    HPE drivers

    http://vibsdepot.hpe.com/hpe/nov2016/

    http://vibsdepot.hpe.com/hpe/oct2016/

    For faster downloads, I grab the offline bundles, then extract the VIBs from them:

    http://vibsdepot.hpe.com/hpe/nov2016/esxi-650-bundles/

    http://vibsdepot.hpe.com/hpe/nov2016/es ... cedrivers/ (hpdsa, hpvsa & nhpsa)

    http://vibsdepot.hpe.com/hpe/oct2016/es ... cedrivers/ (hpsa & nx1_tg3)

     

    Not used, for reference only...

    Additional drivers:

    https://vibsdepot.v-front.de/wiki/index ... i_packages

    patches (none for 6.5 yet)

    https://my.vmware.com/group/vmware/patc ... 241#search

     

    Built images (upload not yet finished, another 2-3 hours to go)...

    For offline upgrade:

    http://cyrillelefevre.free.fr/vmware/ES ... omized.iso

    For online upgrade:

    http://cyrillelefevre.free.fr/vmware/ES ... omized.zip

     

    For online installation, enter maintenance mode, then:

    (I don't remember whether I did install or update... I suppose install :???:)

    esxcli software profile install -d $PWD/ESXi-6.5.0-4564106-standard-customized.zip -p ESXi-6.5.0-4564106-standard-customized
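
    Maintenance mode itself can be toggled from the same shell (standard esxcli; a reboot is needed after the profile install):

    esxcli system maintenanceMode set --enable true
    # ... run the profile install above, then reboot ...
    esxcli system maintenanceMode set --enable false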

    Also, while the VMware Tools are in the image, they don't get installed! So I installed them manually:

    esxcli software vib install -v file://$PWD/VMware_locker_tools-light_6.5.0-0.0.4564106.vib

    I don't remember whether -n tools-light is needed or not.

     

    To build the images yourself:

    Contents of the vibs subdirectory:

     

    amshelper-650.10.6.0-24.4240417.vib

    conrep-6.0.0.01-02.00.1.2494585.vib

    esxui-signed-4762574.vib

    hpbootcfg-6.0.0.02-02.00.6.2494585.vib

    hpe-cru_650.6.5.8.24-1.4240417.vib

    hpe-esxi-fc-enablement-650.2.6.10-4240417.vib

    hpe-ilo_650.10.0.1-24.4240417.vib

    hpe-nmi-600.2.4.16-2494575.vib

    hpe-smx-limited-650.03.11.00.13-4240417.vib

    hpe-smx-provider-650.03.11.00.17-4240417.vib

    hponcfg-6.0.0.4.4-2.4.2494585.vib

    hptestevent-6.0.0.01-01.00.5.2494585.vib

    net-tg3_3.137l.v60.1-1OEM.600.0.0.2494585.vib

    nhpsa-2.0.10-1OEM.650.0.0.4240417.x86_64.vib

    scsi-hpdsa-5.5.0.54-1OEM.550.0.0.1331820.vib

    scsi-hpsa_6.0.0.120-1OEM.600.0.0.2494585.vib

    scsi-hpvsa-5.5.0.102-1OEM.550.0.0.1331820.x86_64.vib

    ssacli-2.60.18.0-6.0.0.vib

    VMware_locker_tools-light_6.5.0-0.0.4564106.vib

    VMware-Remote-Console-9.0.0-Linux.vib

    VMware-Remote-Console-9.0.0-Windows.vib

     

    Under PowerCLI:

     

    PS D:\vmw> .\ESXi-Customizer-PS-v2.5.ps1 -v65 -pkgDir vibs
    
    This is ESXi-Customizer-PS Version 2.5.0 (visit https://ESXi-Customizer-PS.v-front.de for more information!)
    (Call with -help for instructions)
    
    Logging to D:\Users\xxx\AppData\Local\Temp\ESXi-Customizer-PS-8096.log ...
    
    Running with PowerShell version 3.0 and VMware PowerCLI 6.5 Release 1 build 4624819
    
    Connecting the VMware ESXi Online depot ... [OK]
    
    Getting Imageprofiles, please wait ... [OK]
    
    Using Imageprofile ESXi-6.5.0-4564106-standard ...
    (dated 10/27/2016 05:43:44, AcceptanceLevel: PartnerSupported,
    The general availability release of VMware ESXi Server 6.5.0 brings whole new levels of virtualization performance to da
    tacenters and enterprises.)
    
    Loading Offline bundles and VIB files from vibs ...
      Loading D:\vmw\vibs\amshelper-650.10.6.0-24.4240417.vib ... [OK]
         Add VIB amshelper 650.10.6.0-24.4240417 [OK, added]
      Loading D:\vmw\vibs\conrep-6.0.0.01-02.00.1.2494585.vib ... [OK]
         Add VIB conrep 6.0.0.01-02.00.1.2494585 [OK, added]
      Loading D:\vmw\vibs\esxui-signed-4762574.vib ... [OK]
         Add VIB esx-ui 1.13.0-4762574 [OK, replaced 1.8.0-4516221]
      Loading D:\vmw\vibs\hpbootcfg-6.0.0.02-02.00.6.2494585.vib ... [OK]
         Add VIB hpbootcfg 6.0.0.02-02.00.6.2494585 [OK, added]
      Loading D:\vmw\vibs\hpe-cru_650.6.5.8.24-1.4240417.vib ... [OK]
         Add VIB hpe-cru 650.6.5.8.24-1.4240417 [OK, added]
      Loading D:\vmw\vibs\hpe-esxi-fc-enablement-650.2.6.10-4240417.vib ... [OK]
         Add VIB hpe-esxi-fc-enablement 650.2.6.10-4240417 [OK, added]
      Loading D:\vmw\vibs\hpe-ilo_650.10.0.1-24.4240417.vib ... [OK]
         Add VIB hpe-ilo 650.10.0.1-24.4240417 [OK, added]
      Loading D:\vmw\vibs\hpe-nmi-600.2.4.16-2494575.vib ... [OK]
         Add VIB hpe-nmi 600.2.4.16-2494575 [OK, added]
      Loading D:\vmw\vibs\hpe-smx-limited-650.03.11.00.13-4240417.vib ... [OK]
         Add VIB hpe-smx-limited 650.03.11.00.13-4240417 [OK, added]
      Loading D:\vmw\vibs\hpe-smx-provider-650.03.11.00.17-4240417.vib ... [OK]
         Add VIB hpe-smx-provider 650.03.11.00.17-4240417 [OK, added]
      Loading D:\vmw\vibs\hponcfg-6.0.0.4.4-2.4.2494585.vib ... [OK]
         Add VIB hponcfg 6.0.0.4.4-2.4.2494585 [OK, added]
      Loading D:\vmw\vibs\hptestevent-6.0.0.01-01.00.5.2494585.vib ... [OK]
         Add VIB hptestevent 6.0.0.01-01.00.5.2494585 [OK, added]
      Loading D:\vmw\vibs\net-tg3_3.137l.v60.1-1OEM.600.0.0.2494585.vib ... [OK]
         Add VIB net-tg3 3.137l.v60.1-1OEM.600.0.0.2494585 [OK, replaced 3.131d.v60.4-2vmw.650.0.0.4564106]
      Loading D:\vmw\vibs\nhpsa-2.0.10-1OEM.650.0.0.4240417.x86_64.vib ... [OK]
         Add VIB nhpsa 2.0.10-1OEM.650.0.0.4240417 [OK, replaced 2.0.6-3vmw.650.0.0.4564106]
      Loading D:\vmw\vibs\scsi-hpdsa-5.5.0.54-1OEM.550.0.0.1331820.vib ... [OK]
         Add VIB scsi-hpdsa 5.5.0.54-1OEM.550.0.0.1331820 [OK, added]
      Loading D:\vmw\vibs\scsi-hpsa_6.0.0.120-1OEM.600.0.0.2494585.vib ... [OK]
         Add VIB scsi-hpsa 6.0.0.120-1OEM.600.0.0.2494585 [OK, replaced 6.0.0.84-1vmw.650.0.0.4564106]
      Loading D:\vmw\vibs\scsi-hpvsa-5.5.0.102-1OEM.550.0.0.1331820.x86_64.vib ... [OK]
         Add VIB scsi-hpvsa 5.5.0.102-1OEM.550.0.0.1331820 [OK, added]
      Loading D:\vmw\vibs\ssacli-2.60.18.0-6.0.0.vib ... [OK]
         Add VIB ssacli 2.60.18.0-6.0.0.2494585 [OK, added]
      Loading D:\vmw\vibs\VMware-Remote-Console-9.0.0-Linux.vib ... [OK]
         Add VIB vmrc-linux 9.0.0-0.2 [OK, added]
      Loading D:\vmw\vibs\VMware-Remote-Console-9.0.0-Windows.vib ... [OK]
         Add VIB vmrc-win 9.0.0-0.2 [OK, added]
      Loading D:\vmw\vibs\VMware_locker_tools-light_6.5.0-0.0.4564106.vib ... [OK]
         Add VIB tools-light 6.5.0-0.0.4564106 [IGNORED, already added]
    
    Exporting the Imageprofile to 'D:\vmw\ESXi-6.5.0-4564106-standard-customized.iso'. Please be patient ...
    
    All done.
    
    D:\vmw> .\ESXi-Customizer-PS-v2.5.ps1 -v65 -pkgDir vibs -ozip
    
    [second run: output identical to the ISO run above, except for the export step]
    
    Exporting the Imageprofile to 'D:\vmw\ESXi-6.5.0-4564106-standard-customized.zip'. Please be patient ...
    
    All done.

  10. It is running perfectly on my system standalone.

     

    The upgrade from 5 to the latest version of 6 all went okay.

    Unfortunately I lost one drive; my mistake, because I used the diskpart clean command and that made my EX4 empty :oops:

    I did that because the first partition on that disk was my boot disk :wink: but I could not change the PID/VID on that one, so I was hoping it would clean only the first partition, but it wiped everything :roll:

     

    Hi,

     

    All ESXi drives are partitioned the same, so it should be possible to recover the destroyed partition table.

    The question is: is it a DOS partition table or a GPT?

    If it's a DOS partition table, a simple dd from one drive to the other should suffice:

    dd if=/dev/sdX of=/dev/sdY bs=512 count=1

    where sdX is the source disk and sdY the target disk.

    For GPT, it's possible with a recent fdisk:

    fdisk --version => fdisk from util-linux 2.26.2

    sfdisk -d /dev/sdX | sfdisk /dev/sdY

    This solution should work for a DOS partition table too...
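
    To tell which partition table type you have before choosing a method (a sketch; blkid from a recent util-linux reports PTTYPE as "dos" or "gpt"):

    blkid -o value -s PTTYPE /dev/sdX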

     

    Regards

  11. So how did you crack it?

     

    I found the protection code and patched it. I guess this isn't a new protection with v6 and that XPEnology did the same thing for v5, so I don't know why a v6 hasn't already been done.

     

    Hi,

     

    Could you explain what you patched, and how?

     

    thanks in advance.

     

    Regards.
