XPEnology Community

breacH

Member · 12 posts · Newbie (1/7) · 0 reputation

  1. Ok, thanks a lot for the information. 👍
  2. Thanks for your reply. So that means even if the first install is on a single disk and a RAID array is added later, DSM will "expand itself" onto these new drives? Thanks again, and have a nice day!
  3. Hello, I would like to know how DSM is installed on a virtual XPEnology setup. On a standard Synology setup, let's say a 4-bay device with 4 disks in RAID 5, the system is spread across all disks to survive a drive failure (every drive gets its own DSM partition). But how does it work in the case mentioned in the title? My setup is the following:
     - ESXi 6.7
     - DSM 6.2
     - 5x3 TB in RAID 5
     I followed the installation tutorial for DSM 6.2 on ESXi, so DSM was first installed on a single VMDK (located on a single SSD). Once this was up, I shut down the system, plugged my hard drives in and booted it up again. Only then did I create a RAID 5 array within DSM. My questions are:
     - Is DSM only installed on the first VMDK?
     - Or has DSM been spread onto the hard drives as well, once I created the RAID 5 array?
     Thanks in advance for any explanation. Kind regards.
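
For reference, DSM keeps its system on a small RAID 1 array (typically /dev/md0, with swap on /dev/md1) mirrored across the first partition of every disk it manages, so this can be checked directly from an SSH session on the box. A minimal sketch, assuming the stock DSM layout where md0 is the system array:

    # Minimal sketch: list the members of DSM's system array by parsing /proc/mdstat.
    # Assumes the stock DSM layout where /dev/md0 is the mirrored system partition;
    # run it as root on the DSM box.
    import re

    def md_members(md_name="md0"):
        """Return the device names (sda1, sdb1, ...) that belong to the given md array."""
        with open("/proc/mdstat") as f:
            for line in f:
                if line.startswith(md_name + " :"):
                    # Example line: "md0 : active raid1 sda1[0] sdb1[1] sdc1[2]"
                    return re.findall(r"(\w+)\[\d+\]", line)
        return []

    if __name__ == "__main__":
        members = md_members("md0")
        print("System partition (md0) is mirrored on:", ", ".join(members) or "not found")

If the drives added after the initial install have been pulled into the system mirror, their first partitions will show up in that list alongside the partition on the original VMDK.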
  4. Hello, I finally managed to find some time to do the migration, and it did not go well... I had the VMs ready and all set up, shut down the VMs, moved the RAID card passthrough and powered up the new 6.2.2 VM. When I tried to log in, the web GUI went to the Synology configurator and said it had detected a previous installation of DSM (I only had data on these drives; the OS was on a separate VMDK). I went ahead with the install anyway, and then it didn't boot. I tried to move the RAID card back to the previous 5.1 VM, but it wouldn't boot either... I decided to tear the whole thing down, and now I'm rebuilding the array on the new 6.2.2 VM. I have all the data backed up, so it's just a matter of waiting. In the meantime, though, I have no backup if something fails.. 😕
  5. Hello, thanks for the information. This issue is a bit strange; I don't see how OpenVMTools could mess with the PCI passthrough.. Anyway, glad you sorted it out! By the way, did you manage to reinstall OpenVMTools afterwards? (It is still handy to have.)
  6. Hello, great news that it went well! Also thanks for being a beta tester for me. ;) Now I know that I can make the move without trouble. If you have problems with your new ESXi 6.7, like the host being unreachable, VMs crashing or poor SATA performance, try disabling the native ESXi drivers (the defaults on ESXi 6.7) to revert to the legacy Linux drivers. I had so many problems with them for weeks; now they are all gone. Cheers!
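
For reference, that driver swap comes down to a couple of esxcli calls followed by a reboot. A minimal sketch, runnable in the ESXi 6.7 shell (which ships a Python interpreter); the module names vmw_ahci (native AHCI/SATA) and ne1000 (native Intel NIC) are only examples and should be checked against the output of esxcli system module list on the actual host:

    # Minimal sketch: disable selected native ESXi driver modules so the legacy
    # vmklinux drivers can claim the devices after the next reboot.
    import subprocess

    # Example module names only -- verify them against `esxcli system module list`.
    NATIVE_MODULES = ["vmw_ahci", "ne1000"]

    def run(cmd):
        print("+ " + " ".join(cmd))
        return subprocess.check_output(cmd).decode()

    if __name__ == "__main__":
        # Show the current module list so the names can be confirmed first.
        print(run(["esxcli", "system", "module", "list"]))
        for module in NATIVE_MODULES:
            run(["esxcli", "system", "module", "set",
                 "--enabled=false", "--module=" + module])
        print("Done - reboot the host so the legacy drivers can take over.")

The same thing can of course be done by typing the esxcli commands by hand; the script only wraps them so the module list can be reviewed first.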
  7. Hello, unfortunately I still haven't moved the LSI card from one VM to the other.. I had to deal with problems related to the Ethernet, SATA and USB drivers of ESXi (now solved). I don't really know when I will do the move, but I will certainly report here when it's done (there are dependencies between VMs; I can't have the NAS VM down for more than half a day). Still, I'm 99% sure that yes, you just need to prepare the new VM and then switch the I/O passthrough from the old VM to the new one. Actually, I think I already sort of tested the passthrough switch between VMs.. When I updated ESXi, the LSI card did not show up in the VM when it booted (so there were no drives). I had to shut down the VM, disable the I/O passthrough, reboot the host, re-enable the I/O passthrough, and put the LSI card back in the VM. All went fine when I did this, so I guess moving it between VMs must be the same. Let us know if you try! Cheers!
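
For anyone who prefers to verify this from a script rather than by clicking through the ESXi UI, here is a minimal pyVmomi sketch (host name and credentials are placeholders) that lists which VMs currently hold a PCI passthrough device, so it is easy to confirm the LSI card has really been released by the old VM before it is attached to the new one:

    # Minimal pyVmomi sketch: list every VM that has a PCI passthrough device attached.
    # ESXI_HOST / ESXI_USER / ESXI_PASS are placeholders for the lab host.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ESXI_HOST = "esxi.local"   # placeholder
    ESXI_USER = "root"         # placeholder
    ESXI_PASS = "password"     # placeholder

    def main():
        ctx = ssl._create_unverified_context()  # lab host with a self-signed certificate
        si = SmartConnect(host=ESXI_HOST, user=ESXI_USER, pwd=ESXI_PASS, sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            for vm in view.view:
                if vm.config is None:
                    continue
                for dev in vm.config.hardware.device:
                    if isinstance(dev, vim.vm.device.VirtualPCIPassthrough):
                        # For a fixed passthrough, backing.id is the PCI address of the device.
                        pci_id = getattr(dev.backing, "id", "unknown")
                        print("%s -> %s (%s)" % (vm.name, dev.deviceInfo.label, pci_id))
        finally:
            Disconnect(si)

    if __name__ == "__main__":
        main()

The reassignment itself stays the manual procedure described in the post above: power off the old VM, remove the PCI device from its settings, and add it to the new VM.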
  8. I still haven't made the move with the RAID card. I'm now struggling with a vNIC problem (and also SATA) on my ESXi build... I will post an update here when the RAID card moves!
  9. Ahh okay, it's because I don't have the VM Tools installed! The vNIC is E1000e, no problem. Thanks again!
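
That matches how ESXi behaves: the address shown in the VM list comes from the guest info reported by VMware Tools / open-vm-tools, so a guest without Tools shows no IP even though it holds a DHCP lease. A minimal pyVmomi sketch (placeholder host and credentials) that prints exactly those fields for every VM:

    # Minimal pyVmomi sketch: show the Tools status and the guest IP that ESXi knows
    # about for each VM. Without Tools running, ipAddress stays empty.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    def report_guest_ips(host="esxi.local", user="root", pwd="password"):  # placeholders
        ctx = ssl._create_unverified_context()
        si = SmartConnect(host=host, user=user, pwd=pwd, sslContext=ctx)
        try:
            content = si.RetrieveContent()
            view = content.viewManager.CreateContainerView(
                content.rootFolder, [vim.VirtualMachine], True)
            for vm in view.view:
                if vm.guest is not None:
                    print(vm.name, vm.guest.toolsRunningStatus, vm.guest.ipAddress)
        finally:
            Disconnect(si)

    if __name__ == "__main__":
        report_guest_ips()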
  10. Thanks for your reply. Glad to see this LSI card is compatible with DSM 6.2.1. All the settings are as you describe. I will give it a try when I have some time and keep you informed here. I just noticed something strange: in ESXi, the new DSM 6.2.1 VM doesn't show any IP assigned by DHCP on the vNIC, but it does have one! I can access the NAS without problems and find.synology.me also finds it. Any thoughts? (My old DSM 5.2 VM does show an IP in ESXi, in comparison.)
  11. Hello, first of all, thanks to all the contributors of the XPEnology project. I've been using it for years and it is just great! I need to upgrade my main VM because some packages require it. So, here is my current setup:
      - ESXi 6.7 (recently upgraded from 5.5)
      - XPEnology VM with DSM 5.2 (been running for years without any problems)
      - The VM OS disk is hosted as a virtual hard disk on the main SSD
      - The data is on 5x3 TB WD drives in RAID 5, all connected to an LSI SAS9220-8i (HP M1015)
      - The LSI card is in IT mode, passed through directly into the XPEnology VM
      Today I installed a new VM with DSM 6.2.1. Everything went fine; the system is up and running. Now it is time to move the data onto the new VM... If I just shut down the DSM 5.2 VM and then pass the LSI card through to the new DSM 6.2.1 VM, do you think it will be OK? I am a bit nervous about trying this. I would like to know what you think of this move, and whether anyone has already tried this manipulation. Thanks in advance for any reply. Have a nice day!
  12. Hello! I'm not that new here, since I have read a lot of topics about XPEnology, etc., but I have finally registered! I have a Synology DS1513+ and an XPEnology box (running on ESXi). The DS1513+ is running 5x3 TB WD AV-GP drives in RAID 5, and the ESXi box is powered by an i7-2600 and currently hosts 3x1 TB WD Green drives plus another 3 TB WD Green. Everything is running well, but I have some projects.. I am planning to move everything onto the XPEnology box, since I have room for all the drives there. I already know that it is possible to migrate between Synology and XPEnology as long as the DSM versions are the same (and they currently are). The thing is that I want to keep my XPEnology system drive on the SSD (where ESXi and its datastore are installed), so I can make snapshots and backups of the system in case of a crash. Here are my questions (there are a lot of them, sorry):
      - What will happen if I just plug my drives from the DS1513+ into the XPEnology box and then boot it? Will it recognise the RAID 5 and just show the data partition?
      - Will my old Synology system be seen? Or will it maybe try to boot the old Synology system?
      - How does Synology manage the system partition in the RAID 5 array I have now (in the DS1513+)?
      That's a lot of questions, I know.. My main goal is to have a clean install with the XPEnology system hosted on the SSD and only the data on the RAID 5 array. I hope it is not too much for a first post hahaha! Thanks in advance for the answers, and have a nice day!
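
On the "same DSM version" prerequisite mentioned above, it is easy to verify before pulling any drives: DSM stores its version in /etc.defaults/VERSION on every unit. A minimal sketch that parses that file (run it over SSH on both the DS1513+ and the XPEnology VM), assuming the stock key names:

    # Minimal sketch: read DSM's version from /etc.defaults/VERSION so the source and
    # target can be compared before the drives are moved.
    def dsm_version(path="/etc.defaults/VERSION"):
        info = {}
        with open(path) as f:
            for line in f:
                if "=" in line:
                    key, _, value = line.strip().partition("=")
                    info[key] = value.strip('"')
        return "{}.{}-{}".format(info.get("majorversion"),
                                 info.get("minorversion"),
                                 info.get("buildnumber"))

    if __name__ == "__main__":
        print("DSM version:", dsm_version())

If the two outputs differ, expect the configurator to ask for a migration or reinstall rather than a plain drive swap.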