Well, I've given up on this. I had this working once upon a time, and everything was perfectly fine until I had to shut down my server to add a PCIe card for another VM. I shut down the VMs gracefully, shut down VMware gracefully, powered off the server, and disconnected all the cables so I could pull it out and open it. I put in the card and plugged all the cables back into the same spots (yes, I made sure the SAS cables went back to the same ports they had been plugged into before and didn't swap them). When I rebooted the VM, my array crashed, and that's when the saga of missing disks began.
After I got it up the day before yesterday with the duplicate disk and got an array set up (not using the duplicate disk), I once again had to reboot the server for other VM stuff. I had to turn on passthrough on another card, and that required an ESXi reboot; I didn't even unplug any cables this time. When I started my Xpenology VM, it said my disks had been moved from another Synology unit and I had to reinstall, and then it failed to format the disk. So I shut the VM down, attached the SAS controller to a Linux VM, formatted the drives, and went back to Xpenology... failed to format the disk again.
I deleted the VM and started over from scratch: formatted and/or wiped the drives in Windows and in Linux, zero-filled the first 5 GB of each drive, and even made them into VMware datastores and then wiped them (that has worked in the past to get me past the "failed to format the disk" error). Nothing. Every time I started the VM and tried to install, it failed to format the disk. I spent more than eight hours trying to get this damn thing to format the disks again. The drives work fine everywhere else I attach them: Linux, Windows, VMware, and now Unraid. They all report a healthy SMART status, and none of them have thrown errors anywhere else. It's only in DSM in my VM that I run into so much trouble.
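For anyone curious what I mean by zero-filling, the step was roughly this: a minimal Python sketch, assuming a Linux block device path like /dev/sdX (hypothetical; substitute your actual device, run as root, and be careful, since this destroys everything on that disk):

```python
import os

# Hypothetical device path -- replace with the disk you actually mean to wipe.
# WARNING: this irreversibly destroys partition tables and old RAID/LVM
# metadata on that device.
DEVICE = "/dev/sdX"
ZERO_SPAN = 5 * 1024**3   # zero the first 5 GiB of the drive
CHUNK = 4 * 1024**2       # write in 4 MiB chunks

with open(DEVICE, "r+b") as dev:
    zeros = bytes(CHUNK)
    written = 0
    while written < ZERO_SPAN:
        dev.write(zeros)
        written += CHUNK
    dev.flush()
    os.fsync(dev.fileno())  # make sure the zeros actually reach the disk
```

Same idea as dd-ing zeros over the start of the drive; the point is just to clear any leftover metadata that might confuse the DSM installer.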
The reason I don't want to let VMware handle the drives and just attach them to the Xpenology VM as RDMs is that I'd lose the ability to use my SSDs for cache, and from what I've read there's a performance penalty to using drives as RDMs versus passing the controller through and letting the VM talk to the hardware directly, since RDM adds another abstraction layer.
From what I can see on here, the mpt2sas driver (which is what the LSI SAS card uses) seems to work fine for people, so I don't think the problem is that I'm using a SAS card and SAS drives. Besides, I've had this working before. I suspect it's something to do with the SAS enclosure (maybe the particular enclosure I'm using does something DSM doesn't like); if I had a 12-bay LFF server it might work better.
In any case, it seems like DSM with the Xpenology loader in a VM just isn't going to work for my situation. I wanted to make this work because I love my DS918+; DSM has so many features and is so user-friendly. I'm an IT guy, so I can deal with less friendly systems, but it's nice to have one that makes doing things quick and simple. But if I can't keep it running stable, and I have to worry about whether it's going to suddenly freak out on me after a graceful VM and server reboot, then I'm going to have to abandon it.