breacH

RAID card passthrough from DSM 5.2 to DSM 6.2.1 (same ESXi host, from one VM to another)


Hello,

 

First of all, thanks to all the contributors of the Xpenology project.
I've been using it for years. It is just great!

 

I need to upgrade my main VM because some packages require it.


So, here is my current setup:

- ESXi 6.7 (recently upgraded from 5.5)

- Xpenology VM with DSM 5.2 (been running for years without any problems)

- The VM OS disk is hosted as a virtual hard disk on the main SSD

- The data is on 5x 3TB WD drives in RAID 5, all connected to an LSI SAS9220-8i (HP M1015)
- The LSI card is in IT mode, passed through directly to the Xpenology VM

 

Today I just installed a new VM with DSM 6.2.1.
Everything went fine; the system is up and running.

Now it is time to move the data onto the new VM...


If I just shut down the DSM 5.2 VM and then pass the LSI card through to the new DSM 6.2.1 VM, do you think it will be OK?
I am a bit nervous about trying this.
 


I would like to know what you think of this move, and whether anyone has already tried this operation.


Thanks in advance for your reply.


Have a nice day!

Edited by breacH


If you take a few precautions, things will play out nicely:

- set up the serial number and MAC in the grub.cfg of synoboot.img

- be sure to set your vNIC to Intel 1000e, otherwise the DS3615xs bootloader 1.03b won't be able to find your NIC on DSM 6.2.1.

- be sure synoboot is connected as SATA0:0

- if you want to attach additional vmdks, add another SATA controller and assign those vmdks to SATA1:x, starting from 0. If you don't use additional vmdks, don't add the additional SATA controller.
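
For reference, the serial/MAC part of the checklist above looks roughly like this inside grub.cfg (all values below are placeholders; the variable names follow the usual layout of Jun's 1.03b loader, so double-check against your own synoboot.img):

```
# grub.cfg inside synoboot.img (Jun's loader 1.03b, DS3615xs) -- placeholder values
set vid=0x058f          # USB vendor ID of the boot device
set pid=0x6387          # USB product ID of the boot device
set sn=XXXXXXXXXXXXX    # your generated serial number
set mac1=001132XXXXXX   # must match the MAC of the VM's vNIC
```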

 

Your LSI controller is supported by DSM 6.2.1. I have two ESXi hosts running DSM 6.2.1 with the same controller passed through.

 

Usually it is not necessary to tinker around with the DiskIdxMap=0C or SataPortMap=1 settings. If the SATA1 controller is added, you might need to append values for the second controller.
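
For reference, the encoding of those two values can be sketched in a few lines of Python (the helper names are made up for illustration; the semantics follow how the loader guides describe SataPortMap and DiskIdxMap, so verify against your own setup):

```python
# Hypothetical helpers (names invented for illustration) showing how the two
# kernel parameters are encoded: SataPortMap is one digit per SATA controller
# (its number of ports); DiskIdxMap is one two-hex-digit starting disk index
# per controller.

def sata_port_map(ports_per_controller):
    """e.g. [1, 4] -> '14': 1 port on controller 0, 4 on controller 1."""
    return "".join(str(n) for n in ports_per_controller)

def disk_idx_map(start_index_per_controller):
    """e.g. [0x0C, 0x00] -> '0C00': controller 0 disks start at index 12."""
    return "".join(f"{i:02X}" for i in start_index_per_controller)

# Single controller, synoboot pushed out of the visible range at index 0x0C:
print(sata_port_map([1]), disk_idx_map([0x0C]))
# Two controllers (synoboot controller + vmdk controller):
print(sata_port_map([1, 4]), disk_idx_map([0x0C, 0x00]))
```

The resulting strings are what gets appended to the kernel command line, e.g. `SataPortMap=14 DiskIdxMap=0C00`.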

Edited by haydibe


Thanks for your reply.

Glad to see this LSI card is compatible with DSM 6.2.1.

All the settings are as you describe.
I will give it a try when I have some time, and keep you informed here.

I just noticed something strange.
In ESXi, the new DSM 6.2.1 VM doesn't show any DHCP-assigned IP address on the vNIC.
But it has one! I can access the NAS without problems, and find.synology.me also finds it.
Any thoughts?

(my old DSM 5.2 VM does show an IP in ESXi, by comparison)


I assume you successfully installed DSM 6.2.1 and are just missing the IP in the built-in web console of ESXi?

 

Did you install the open-vm-tools package? Add http://spk.4sag.ru/ as a package source and install it from there :)

 

Oh, and the vNIC needs to be E1000e; I missed the first E in my earlier post. When set to E1000, it will not work.

Edited by haydibe


I still have not made the move with the RAID card.

 

I'm now struggling with a vNIC problem (and also SATA) on my ESXi build...

 

I will keep you up to date when the RAID card moves!


I'm in the same scenario as you were, with pretty much the exact same setup, and now finally need to upgrade DSM so I can use/upgrade other packages I have.

 

How did you get on in the end?

So I guess this is the best way to go: build a new VM with DSM 6.2.1 on it, then switch the direct passthrough of the LSI card over to the new VM when ready?

 

Anthony


Hello,

Unfortunately I still have not moved the LSI card from one VM to the other..
I had to deal with problems related to the Ethernet, SATA and USB drivers of ESXi. (now solved)

I don't really know when I will do the move, but I will be sure to report here when it's done.
(There are dependencies between VMs; I can't have the NAS VM down for more than half a day.)

Still, I'm 99% sure that yes, you just need to prepare the new VM and then switch the I/O passthrough from the old VM to the new one.

Actually, I think I already sort of tested the passthrough switch between VMs. When I updated ESXi, the LSI card did not show up in the VM after booting (so there weren't any drives).
I had to shut down the VM, disable the I/O passthrough, reboot the host, enable the I/O passthrough again, and put the LSI card back in the VM.
All went fine when I did this, so I guess moving between VMs should work the same way.
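
For anyone following along, the discovery part of that sequence can be scripted from an SSH session on the ESXi host (the Vmid and PCI address below are placeholders; the passthrough toggle itself is done per-device in the host web UI):

```sh
# Find the VM ID of the DSM VM and the LSI card's PCI address
vim-cmd vmsvc/getallvms                 # note the Vmid of the DSM VM
esxcli hardware pci list | grep -i LSI  # note the address, e.g. 0000:03:00.0

# Shut the VM down cleanly (Vmid 12 is a placeholder)
vim-cmd vmsvc/power.off 12

# Then: disable passthrough for the card in the host web UI, reboot the host,
# re-enable passthrough, re-add the PCI device to the VM, and power it on.
```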

Let us know if you try!

Cheers!


Actually, I did it yesterday and all went well.

I upgraded ESXi from 5.5 to 6.7 first, as I hadn't done that yet.

Then I built a brand-new VM as described in the couple of 6.2.1-on-ESXi-6.7 guides on here, and made sure the new DiskStation was working completely. I had been reading lots of posts about possible tweaks to the DiskIdxMap or SataPortMap settings, as I was expecting I might need to change something; however, it turned out I didn't need to change anything. I just shut down the old DiskStation VM, moved the LSI passthrough to the new VM, and the disks were detected without issue. The DiskStation reported an issue with the system partition and said I could run a repair without impacting data, so I did that, and within a few seconds it was all good. It went a lot smoother than I expected.


Hello,

Great news that it went well!
Also, thanks for being a beta tester for me. ;)
Now I know that I can make the move without trouble.

If you have problems with your new ESXi 6.7, like the host being unreachable, VMs crashing or poor SATA performance, try disabling the native ESXi drivers (the defaults on ESXi 6.7) to revert to the legacy Linux drivers. I had so many problems with these for weeks; now they are all gone.
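
As a sketch of what "disabling the native drivers" looks like (which modules apply depends entirely on your hardware; `vmw_ahci` and `ne1000` below are the usual candidates for SATA and Intel NIC issues, so verify the module names against your own host first):

```sh
# Run on the ESXi host via SSH. First, see which native drivers are loaded:
esxcli system module list | grep -i -e ahci -e ne1000

# Disable the native AHCI driver so the legacy sata-ahci driver loads instead:
esxcli system module set --enabled=false --module=vmw_ahci

# Same idea for Intel gigabit NICs (native ne1000 vs. legacy e1000):
esxcli system module set --enabled=false --module=ne1000

reboot   # a host reboot is required for the change to take effect
```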

Cheers!


Cheers for info.

Just to let you know, I had an issue when I installed Open VM Tools; I'm not sure if it's because of a wrong version or something.

Once it was installed and the DiskStation rebooted, it would not fully load and I couldn't connect to it. The only way to get it to boot again was to remove the PCI passthrough. Once the DiskStation was online again, I uninstalled Open VM Tools; then I was able to shut down, re-attach the PCI passthrough, and it was back to normal. I did have to do the system partition repair again, but that all went fine.

 


Hello,

Thanks for the information.
This issue is a bit strange; I don't see how Open VM Tools could interfere with the PCI passthrough..
Anyway, glad you sorted it!

By the way, did you manage to reinstall Open VM Tools afterwards? (It is still practical to have.)


Hello,

 

I finally managed to take some time to do the migration.
And it did not go well... 

 

I had the VMs ready and all set up; I shut down the VMs, moved the RAID card passthrough and powered up the new 6.2.2 VM.

When I wanted to log in, the web GUI went to the Synology configurator, which said it had detected a previous installation of DSM. (I had only data on these drives; the OS was on a separate vmdk.)


I tried to push through the install, and then it didn't boot.
I tried to move the RAID card back to the previous 5.1 VM, but then it didn't boot either...

I decided to tear the whole thing down; now I'm rebuilding the array on the new 6.2.2 VM.


I have all the data backed up, so it's just some time to wait. In the meantime, though, I have no backup if something fails.. 😕
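
For anyone hitting the same "previous installation detected" screen: before reinstalling, it may be worth checking from an SSH shell whether the data array itself still assembles (DSM keeps the OS on a small RAID 1 spanning all disks and the data on a separate md device; the device names below are typical but not guaranteed on your box):

```sh
# On a DSM box, md0 is usually the system partition and md2+ the data volumes
cat /proc/mdstat                  # overview of all md arrays and their state

# Inspect the on-disk RAID metadata of one member (partition name is a placeholder)
mdadm --examine /dev/sda5

# The "previous installation" prompt often means only the system partition is
# missing/foreign on the new VM; the data array may still assemble intact:
mdadm --assemble --scan
```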

