Chrisoo1 — Posted June 3, 2016 (#26, edited June 3, 2016)
oops
Chrisoo1 — Posted June 3, 2016 (#27)
Running 13 drives here on ESXi 6.0, with all 13 drives connected to the XPenology guest via VT-d PCIe passthrough of an LSI 9211-8i SAS card in IT mode plus a SAS2 expander. Not seeing any issues here. Oh, and I'm only using a Corsair CS450M PSU. That's it in a nutshell: get yourself a SAS card and set it up as passthrough! For multiple reasons, I would never, ever set up vmdk drives if you are running in a virtual environment. You are asking for trouble.
Rusty — Posted June 4, 2016 (#28)
Multiple vmdk files on top of an existing array is a massive waste of time. Either use one huge vmdk and let the hardware handle the redundancy, or pass the physical disks through and let DSM do it.
Diverge — Posted June 4, 2016 (#29)
I gotta agree with the last two guys. If you pass through your controller and let DSM manage the disks, you can at least mount your array in Linux and get to your data if DSM somehow fails. Or at least you have a chance to rebuild the array outside of DSM, either physically or in another VM by moving the controller to the new VM. And you also get SMART data in DSM.
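Diverge's recovery scenario looks roughly like this on a generic Linux rescue box. This is a dry-run sketch (every command is only echoed; drop the echo in `run` to execute for real), assuming the standard DSM layout where the data volume is an md array with LVM on top at /dev/vg1000/lv:

```shell
#!/bin/sh
# Sketch: assemble a DSM-created array on a plain Linux box after moving
# the disks (or the passed-through controller) out of a dead DSM install.
# Dry run: commands are printed, not executed.
run() { echo "$@"; }   # replace the echo with the real command on a rescue system

run mdadm --assemble --scan --force   # scan disks and assemble the md arrays
run vgchange -ay                      # activate the LVM volume group DSM built (vg1000)
run mkdir -p /mnt/dsm
run mount /dev/vg1000/lv /mnt/dsm     # consider -o ro if you only need to copy data off
run ls /mnt/dsm                       # your shared folders should be visible here
```

This mirrors the mdadm/vgchange/mount sequence used later in this thread; the /mnt/dsm mountpoint is just an example path.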
Ben0 — Posted June 5, 2016 (#30)
I can't pass through the controller because my hardware doesn't support VT-d, so I've just gone the bare-metal approach instead. This didn't work in the past because my NIC wasn't supported (Dell 2950), but it seems to work in the latest release. This is what happens:

1. Power up; it says the drives were either moved from another DiskStation or it has been reset and needs to reinstall.
2. Reinstall and boot; only 12 drives show (new install), so I modify both synoinfo.conf files.
3. Reboot; it tries, crashes, then restarts.
4. Back to step 1.

At step 2, after I modify the synoinfo.conf files, if I don't reboot and instead go into SSH:

1. mdadm -Asf
2. vgchange -ay
3. mount /dev/vg1000/lv /volume1
4. All files are still there.
5. But DSM shows the drives as green - spare.

The data volume's RAID is intact, but no matter what I do I can't get DSM to recognize it, because the synoinfo.conf files keep resetting and only 12 disks show. I could get an 8TB external drive, back everything up, and start again, but they cost $420 here and I want to avoid parting with that much money unless I really have to. I'm going to try installing DSM with only 8 drives, modifying synoinfo.conf, then hot-plugging the other 7 to see if it recognizes the extra disks. What I really need is a boot image and .pat file with synoinfo.conf already modified. I looked into it but was unable to repack the zImage kernel file. I don't know if this makes any sense, but I'd appreciate the feedback. Thanks.

EDIT: I hot-plugged the drives, then did:

1. mdadm --stop /dev/md(x)
2. mdadm -Asf
3. vgchange -ay
4. mount /dev/vg1000/lv /volume1

DSM now sees the volume, though it doesn't show in File Station and still says crashed. Under disk info it says "System Partition Crashed"; I would expect something like this. I know as soon as I reboot it's going to revert back to 12 drives.

EDIT 2: If I create new shared folders with the same names, the same data is in there. For example, I create "Photos", and all my photos are in there. When I click "Repair" in Storage Manager to repair the system partition, a blank message pops up and nothing happens.

EDIT 3: Rebooted, and guess what... yes, back to step 1.
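For reference, the synoinfo.conf edits people make for more than 12 drives are just bitmask edits: maxdisks is a count, while internalportcfg, esataportcfg, and usbportcfg are hex masks with one bit per port. Here is a small sketch of the arithmetic (Python is used only to compute the masks; the field layout, internal ports in the lowest bits with eSATA and USB above them, is the convention commonly posted on these forums, so verify the values against your own synoinfo.conf before writing them):

```python
def port_masks(internal, esata=0, usb=2):
    """Compute synoinfo.conf port bitmasks for `internal` internal disks,
    then `esata` eSATA ports, then `usb` USB ports.
    Assumed convention (verify against your own file): internal ports
    occupy the lowest bits, eSATA the next bits up, USB above those."""
    internal_mask = (1 << internal) - 1                # bits 0 .. internal-1
    esata_mask = ((1 << esata) - 1) << internal        # next `esata` bits
    usb_mask = ((1 << usb) - 1) << (internal + esata)  # next `usb` bits
    return {
        "maxdisks": str(internal),
        "internalportcfg": hex(internal_mask),
        "esataportcfg": hex(esata_mask),
        "usbportcfg": hex(usb_mask),
    }

# 15 internal drives (Ben0's case), no eSATA, 2 USB ports:
for key, value in port_masks(15, esata=0, usb=2).items():
    print(f'{key}="{value}"')
```

For 15 internal drives this yields internalportcfg="0x7fff", which is the value usually quoted in 15-drive threads.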
Rusty — Posted June 5, 2016 (#31)
You don't need VT-d for RDM.
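For anyone trying Rusty's suggestion: a physical-mode RDM is created on the ESXi host with vmkfstools -z (use -r for virtual mode). A dry-run sketch follows; the disk identifier and datastore path are placeholders, not real values — list your actual devices under /vmfs/devices/disks/:

```shell
#!/bin/sh
# Sketch: create a physical-mode RDM pointer file for one disk on an ESXi host.
# Dry run: commands are echoed, not executed. The identifiers below are
# placeholders -- find your real device with: ls /vmfs/devices/disks/
run() { echo "$@"; }

DISK="/vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID"   # placeholder device name
RDM_DIR="/vmfs/volumes/datastore1/rdm"                   # assumed datastore path

run mkdir -p "$RDM_DIR"
# -z = physical-mode (pass-through) RDM; -r would create a virtual-mode RDM
run vmkfstools -z "$DISK" "$RDM_DIR/disk1.vmdk"
# then attach disk1.vmdk to the DSM VM as an existing disk
```

Physical mode passes SCSI commands through to the drive, which is why some people see SMART data this way; virtual mode does not.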
Ben0 — Posted June 6, 2016 (#32)
"You don't need VT-d for RDM." I know, you're right. I had RDM working fine for about two years, until I tried 15 drives in one DSM. In a previous post Kanedo said they had success with 13 drives when doing whole-PCIe-controller passthrough, not RDM. I've given up on having more than 12 drives after reading through pages and pages of threads. Some people can get it to work somehow and others can't; it doesn't seem very reliable in any case. Especially since there is eventually going to be an XPenology DSM 6.0, and I'll have to reinstall anyway given the different file system. Cheers.
NeoID — Posted August 26, 2016 (#33)
I'm experiencing the same issue. I'd been using XPenology without issues for many months, on ESXi 6 with an LSI SAS 9201-16i HBA in passthrough mode. Once I added disk #12, expanded, and then ran a scrub, the volume crashed. It's complete mayhem. No idea yet what causes this, but I'll look into it (or just not use more than 12 drives anymore).
exfiltrate — Posted January 6, 2017 (#34)
Any resolution to this issue at all? I have been able to consistently replicate the exact same issue as Ben0, even with my 9200-16e passed through under ESXi 6.5. I've used both the latest release of the XPenology boot loader and the beta 6.0 loader from the forum here. Basically, if the disks are blank and I set it up (modifying the config to allow the extra drives), it works fine. But whenever I update, the system resets like the others describe and asks me to set it up again.
brantje — Posted January 8, 2017 (#35)
I see everybody editing files in /etc/, but files in /etc are restored after reboot; try editing them in /etc.defaults/ as well.
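brantje's point in script form: edit both copies, because /etc/synoinfo.conf gets regenerated from /etc.defaults/. This sketch is self-contained and only edits sample files in a temp directory; on a real DSM box the two targets would be /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, and the 15-drive values match the ones discussed above:

```shell
#!/bin/sh
# Sketch: bump maxdisks and the internal port mask in BOTH synoinfo.conf
# copies. Demonstrated on sample files in a temp dir; on a real DSM box
# the targets are /etc/synoinfo.conf and /etc.defaults/synoinfo.conf.
set -e
tmp=$(mktemp -d)
mkdir -p "$tmp/etc" "$tmp/etc.defaults"
printf 'maxdisks="12"\ninternalportcfg="0xfff"\n' > "$tmp/etc/synoinfo.conf"
cp "$tmp/etc/synoinfo.conf" "$tmp/etc.defaults/synoinfo.conf"

for f in "$tmp/etc/synoinfo.conf" "$tmp/etc.defaults/synoinfo.conf"; do
    sed -i 's/^maxdisks=.*/maxdisks="15"/' "$f"
    sed -i 's/^internalportcfg=.*/internalportcfg="0x7fff"/' "$f"
done

grep maxdisks "$tmp/etc.defaults/synoinfo.conf"
```

As this thread shows, even editing both copies may not survive a DSM update, which rewrites /etc.defaults/ from the .pat file.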
exfiltrate — Posted January 17, 2017 (#36)
I always edit both of the files, as does everyone else (I'm assuming). It works until the next update is done.
Ben0 — Posted April 17, 2017 (#37)
Bump. I came back to see if there was a solution yet. I still don't understand what exfiltrate and I are doing wrong. It seems that as soon as the system partition is replicated to new disks, the .conf files reset after reboot. Perhaps (only speculating) the .conf files aren't replicated to the system partitions on the other disks; then on reboot it detects the inconsistency and reverts to the original, or thinks it's a new, unrecognized installation. From what I understand, the system partition is a RAID 1 across all disks. I don't think this will work, but when I get time (or if someone else can try it), I will edit the files, expand the volume, then edit the files again. Or maybe there is some way to disable the system file check, or whatever it does during boot. I'm clutching at straws here, hoping it might spark an idea for someone else.
GaryM — Posted May 26, 2017 (#38)
OK, I thought it would be a good idea to share my experience (and frustration) here to help others and save them from losing time. I have been running XPenology for a while now, first native on my HP ProLiant MicroServer G7, then on a Xeon-based HP ProLiant server with ESXi 5.5, and now 6.0 with vmdk files. [...] Thanks for your patience if you read through the whole thing. If you are going to use ESXi, you should be passing the controller through to the VM and letting DSM handle the drives directly. If you use an expander, it might work with SATA drives, but it is not recommended unless you use SAS drives. What is your data worth to you?