Thanks for the feedback guys.
***But after testing in real production and in test labs, I've found that iSCSI is no match for NFS when it comes to transfer rates. Even with MPIO or LACP set up, iSCSI simply can't keep up, even with multiple clients. I read what other people have experienced, and they concur. Many of them flat out opted to create multiple NFS shares and assign small groups of people to each share in order to get the full performance of their Synology NAS, especially when they've added a read/write cache. If you look around, everyone thinks the cache is a waste, but that's only because iSCSI performance really tops out at around 500 Mb/s, whereas users with NFS configured get the full 1 Gb/s of bandwidth per adapter.

It's actually funny: when I set up MPIO for iSCSI on the Synology, instead of getting at least 500 Mb/s on each adapter, it splits 500 Mb/s across all four, meaning only about 125 Mb/s per adapter. That's unacceptable performance in my book, especially since I have a Synology at home and was really looking forward to using it almost as direct-attached storage for my VMware server. I would love to get 4 Gb/s transfer rates to and from my Synology so I can use it as an elegant datastore and get the most out of my home server environment.

I think there are a lot of people who just don't say anything and take it as it is, but I know better. The capacity is there for the Synology to perform tremendously faster and be a serious competitor to direct-attached storage devices like Dell's PowerVault MD series storage arrays.
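For anyone who wants to reproduce the comparison on ESXi 6.0, this is roughly the setup I'm describing. The hostname, share path, and device identifier below are placeholders for your own environment, not values from my config. Mounting the Synology NFS export as a datastore is a one-liner, while iSCSI additionally needs the round-robin path selection policy set per LUN before MPIO will even try to use all the adapters:

```shell
# Mount a Synology NFS export as an ESXi datastore
# (synology.local and /volume1/datastore1 are example values)
esxcli storage nfs add --host=synology.local --share=/volume1/datastore1 --volume-name=syno-nfs

# For the iSCSI LUN, switch the path selection policy to round robin
# (find your own naa.* identifier with: esxcli storage core device list)
esxcli storage nmp device set --device=naa.6001405deadbeef --psp=VMW_PSP_RR

# Optionally rotate paths every I/O instead of the default 1000,
# which is the usual tweak when round robin isn't spreading load
esxcli storage nmp psp roundrobin deviceconfig set \
    --device=naa.6001405deadbeef --type=iops --iops=1
```

Even with all of that in place, the aggregate iSCSI throughput I saw was the 500 Mb/s split described above, so the config itself isn't the bottleneck.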
*** All tests, whether in a lab or in production, were done with VMware 6.0+ and Synology DSM 6.0+.