koroziv

Esxi DSM 6 ovf deploy - Jun's loader

Thank you for help.

I tested the OVF you provided with a static address; unfortunately, it did not work either.

I dug deeper on the router to see what kind of exchange happens between the Syno and the router.

I can see that the Syno sends a DHCPDISCOVER.

The router sends a DHCPOFFER.

But after that, nothing else...

even though the Syno tries several more times to initiate a new DHCP request.

I don't know what is wrong.

If someone has an idea to help debug this, I'd be very happy.


I've found from playing around with this on ESXi 6.5 that I've had the most success when I set my VM to use EFI rather than BIOS. EFI is set in the VMX supplied with Jun's loader zip, but if you are creating your own VM, you probably haven't set it to EFI specifically.
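If you're editing the VM config by hand, the firmware type is a single line in the .vmx file (a minimal sketch; `firmware` is the standard VMware .vmx key, and it defaults to BIOS when absent):

```
firmware = "efi"
```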

 

I have the 50 MB synoboot, an 8 GB "system" drive VMDK on SSD, 2x 750 GB HDDs passed through to the VM as RDMs, and 2x USB 3 drives passed through via ESXi (not Direct I/O).

Added a 2nd 8 GB disk as a VMDK on SSD and added this as an SSD cache in DSM.

 

Network-wise, I added 2x VMXNET3 NICs and bonded them in DSM :smile:

 

Love the way I can mount Windows shares on the Syno and then share them out via NFS/AFP, so it acts as a gateway.

 

Loving this software :smile:


Hi,

 

Thanks for the ovf template, it made deploying DSM 6.1 a breeze.

Tested on ESXi 6.5 U1; updated DSM via the control panel to 6.1.3-15152 Update 3 without any issues.

Please verify you have set the MAC to 00:11:32:2C:A7:85 if you are having DHCP issues.
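For reference, pinning that MAC in the .vmx file looks like this (a sketch; `ethernet0.addressType` and `ethernet0.address` are standard VMware .vmx keys, and ethernet0 is assumed to be the loader's NIC):

```
ethernet0.addressType = "static"
ethernet0.address = "00:11:32:2C:A7:85"
```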

 

However, I noticed SMART status was not working as expected.

I first thought it was due to the disks being in physical RDM mode and incorrect SMART handling in the VMware stack, which I expected.

I installed an IBM M1015 (flashed to 9211-8i IT mode) and set up passthrough to the virtual machine, but still had problems with SMART. My disks are 6 TB IronWolf, but IronWolf health monitoring was not working either.

 

symptoms:

- health info shows as "unavailable" in the Storage Manager UI for every disk, be it physical RDM or connected via a passthrough HBA in IT mode.

- /var/log/messages complains a lot about /dev/sda and /dev/sdb while the Storage Manager UI is open

 

resolution:

- add a SATA controller to the virtual machine

- attach disk 1 as SATA 0:0 (this disk is 50 MB, independent / non-persistent, provided with the ovf)

- attach disk 2 as SATA 0:1 (this disk is 8 GB, independent / persistent, provided with the ovf). Both disks must be attached to the SATA controller or DSM won't boot.
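In .vmx terms, the two attach steps above look roughly like this (a sketch with placeholder .vmdk filenames; the disk modes mirror the independent / non-persistent and independent / persistent settings described above):

```
sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "synoboot.vmdk"
sata0:0.mode = "independent-nonpersistent"
sata0:1.present = "TRUE"
sata0:1.fileName = "system.vmdk"
sata0:1.mode = "independent-persistent"
```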

 

caveat:

the paravirtual SCSI adapter is no longer used (unless you have added disks to it on your own).

Note that the volume and RAID group hosted on the 2nd disk won't be available anymore. You will have to reinstall some packages (such as open-vm-tools-bromolow) to another volume. As I am new to XPenology, that might not be best practice? I have not yet figured it out.

 

 

As a side note, I also have a horror story to share about the 9211-8i:

symptoms:

- disks and the RAID group are recognized, but soon after it is pure chaos as one or more disks are faulted.

- /var/log/messages shows hundreds of SCSI sense code errors

- the RAID group and volume eventually become bricked, as the errors appear at random across all disks connected to the SAS HBA.

- rebooting or rebuilding the RAID is a no-go, as disk errors will show up again very soon after.

- disk manager shows the system partition of one or more disks as faulted

- after erasing the faulty partition (use "benchmark" for a quick erase), the disk is available again and SMART status shows the disk as healthy

 

resolution:

- figure out whether you have an overheating issue. The SAS2008 chip can run very hot in a small enclosure and do bad things to your RAID group.

- do not use firmware 20.00.00.00. Flash it to 20.00.07.00!
 

Edited by Exo7


Hi!
Thanks so much to koroziv! After a failed update from DSM 6.0 to DSM 6.1, and after giving up on trying to restore, I also failed to boot Jun's loader under ESXi to create a new VM.

I was panicking for a while (I have backups of the important stuff, but restoring would take forever); this saved my ass, and it was so easy to set up!

 

After importing the ovf and vmdks and recreating my users and config, everything is working great, but now I'm facing a different issue. I need to increase the max disks, trying to follow the idmedia guide on YouTube (going for 36 disks). The same guide worked perfectly with my previous install.
However, with this template, after I edit the synoinfo.conf file and reboot, the VM pulls a new address via DHCP (I had set a static one previously) and forces me to reinstall DSM. I can reinstall, and my data, config, and packages remain intact, but max drives is reset to 12.

 

This is what I'm doing:

- ssh in to the VM deployed from koroziv's template

- pull the synoinfo.conf file via scp to my desktop

- edit the values following the idmedia guide

- rename synoinfo.conf on the VM as a backup

- scp the edited file into place of the original

- cat the contents to confirm the new file is in the correct place and has the new values

- reboot via the DSM web GUI

 

What is the "proper" way to increase max disks?

 

Thanks for any help!

 

/Config:

 

Running the VM under ESXi 6.5 with 2 cores, 4 GB RAM, and a Dell RAID card flashed to HBA mode

These are the values I'm trying:

maxdisks 36

Binary:
0000 0000 0000 1111 1111 0000 0000 0000 0000 0000 0000 0000 0000 0000   -   esataportcfg
0000 0000 0011 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000   -   usbportcfg
0000 0000 0000 0000 0000 1111 1111 1111 1111 1111 1111 1111 1111 1111   -   internalportcfg

 

Hex (written to file):

ff000000000   -   esataportcfg
300000000000   -   usbportcfg
fffffffff   -   internalportcfg
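For what it's worth, those three masks follow a simple pattern that matches the binary lines above: internal ports occupy the lowest bits, eSATA ports sit just above them, and USB ports above those. A small Python sketch (assuming the 36 internal / 8 eSATA / 2 USB split used here) reproduces the hex values:

```python
# Derive the synoinfo.conf port bitmasks from the desired port counts.
# Bits are assigned from bit 0 upward: internal ports first, then
# eSATA ports, then USB ports.
internal_ports = 36
esata_ports = 8
usb_ports = 2

internalportcfg = (1 << internal_ports) - 1                             # lowest 36 bits set
esataportcfg = ((1 << esata_ports) - 1) << internal_ports               # next 8 bits
usbportcfg = ((1 << usb_ports) - 1) << (internal_ports + esata_ports)   # next 2 bits

print(f"internalportcfg 0x{internalportcfg:x}")   # 0xfffffffff
print(f"esataportcfg    0x{esataportcfg:x}")      # 0xff000000000
print(f"usbportcfg      0x{usbportcfg:x}")        # 0x300000000000
```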


hi guys

I will migrate from 5.2 to 6.1 with koroziv's ovf. I have ESXi 6.5 U1; can anyone help me with what I need to do to avoid losing my files on the RAID1 disks?

The disks are configured in VMDK mode. I created 2 disks, one on datastore1 and one on datastore2.

Thanks in advance


Can anyone share an OVA that works, without the DHCP / IP assignment issue, for VMware ESXi?

 

Thanks.

Can anyone share an OVA that works, without the DHCP / IP assignment issue, for VMware ESXi?
 
Thanks.

What version of ESXi have you got?


Please advise: I have a VM that I deployed with the ovf here, which has Jun's loader version 1.01alpha.

- can the loader be updated to Jun's 1.01b? If not, can anyone share an ovf with 1.01b?

 

Thanks

On 18/11/2017 at 7:42 PM, codedmind said:

Please advise: I have a VM that I deployed with the ovf here, which has Jun's loader version 1.01alpha.

- can the loader be updated to Jun's 1.01b? If not, can anyone share an ovf with 1.01b?

 

Thanks

Correction: I have 1.02alpha, and the question is whether it is possible to upgrade to 1.02b.

