About Tuatara

  • Rank
    Regular Member
  1. Fantastic. There is always just one more detail to remember!
  2. OK, I thought you passed through the M1015; I'm curious whether this card is recognized by DSM or not. I wasted two evenings trying to install a SIL3112-based PCIe-to-SATA card. I either got a DOA card or my only PCI slot is broken ... going to ship it back.

     I tried passthrough of the M1015 ages back. Right now I can't remember if I had it working ... I think so, but I probably compiled/added the driver myself. I did put in a number of different manufacturers' cards - a Sil3114 for certain, and also a JMicron JMB363 - for which I compiled and added the driver to the Synology, and also into ESXi (for fun & profit). I then ran a few tests ... and decided that avoiding the performance loss of RDM (less than 1% in my informal tests) wasn't worth the passthrough hassle. RDM works well, is very reliable, and the M1015 is supported by ESXi directly. No messing with kernel drivers in the Synology - other than adding the one Paravirtual driver. With that driver added, I set up all drives as RDM Paravirtual and tested only that one driver to exhaustion (since all other hardware is natively supported in ESXi by VMWare). Rock solid. I haven't looked back. Works great. Regards, Tuatara
  3. Paravirtual. Exactly as I'd specified in the "Idiot's Guide" (found earlier in this thread).

     I boot off [Datastore] Synology/esxi_synoboot_3202_v2.vmdk, configured as IDE (0:0) Hard Disk 1.

     SCSI Controller 0 - Paravirtual - No SCSI Bus Sharing.

     All physical disks are RDM (Mapped Raw LUN), Compatibility Mode (Physical), with the vmdk mapping files stored in the VM directory. For example, /dev/sda maps vml.020000000050014ee25daf1b94574443205744 to [Datastore] Synology/WDC_2.0TB_1.vmdk, configured as SCSI (0:0) Hard Disk 2.

     That's 8 drives RDM on the M1015, with an additional 5 drives RDM off the Intel motherboard SATA - all WD Reds - and the last Intel SATA port holds an Intel SSD as the ESXi DataStore.

     All memory is locked, as I also use VT-d for USB and a PCI-E Hauppauge card (which I haven't set up yet). Regards, Tuatara.
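     For anyone following along, a physical-compatibility RDM mapping like the one described above can be created from the ESXi shell. The vml ID and datastore path below are the ones from this post; your device IDs will differ (list them with `ls /vmfs/devices/disks/`), and the exact datastore path is an assumption.

     ```shell
     # Create a physical-compatibility RDM mapping file for one disk.
     # -z = physical compatibility mode (passes SCSI commands closer to the
     #      hardware); use -r instead for virtual compatibility mode.
     vmkfstools -z \
         /vmfs/devices/disks/vml.020000000050014ee25daf1b94574443205744 \
         "/vmfs/volumes/Datastore/Synology/WDC_2.0TB_1.vmdk"
     ```

     The resulting .vmdk is then attached to the VM as a "Mapped Raw LUN" on the Paravirtual SCSI controller, as described above.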
  4. For interest, I'll give it a go ... shut down Synology ... clear the RDMFilter flag ... restart VM ... Synology running ... SSH login ... and we're ready. Let's check my first RDM drive in the array ...

     [spoiler=smartctl -a /dev/sda]
     mediacat> smartctl -a /dev/sda
     smartctl 5.42 2011-10-20 r3458 [x86_64-linux-3.2.30] (local build)
     Copyright © 2002-11 by Bruce Allen, http://smartmontools.sourceforge.net

     === START OF INFORMATION SECTION ===
     Device Model:     WDC WD20EFRX-68AX9N0
     Serial Number:    WD-WCC1T0773149
     LU WWN Device Id: 5 0014ee 25daf1b94
     Firmware Version: 80.00A80
     User Capacity:    2,000,398,934,016 bytes [2.00 TB]
     Sector Sizes:     512 bytes logical, 4096 bytes physical
     Device is:        Not in smartctl database [for details use: -P showall]
     ATA Version is:   8
     ATA Standard is:  ACS-2 (revision not indicated)
     Local Time is:    Fri Jan 31 21:47:24 2014 NZDT
     SMART support is: Available - device has SMART capability.
     SMART support is: Enabled

     === START OF READ SMART DATA SECTION ===
     SMART overall-health self-assessment test result: PASSED

     General SMART Values:
     Offline data collection status:  (0x00) Offline data collection activity was never started.
                                             Auto Offline Data Collection: Disabled.
     Self-test execution status:      (   0) The previous self-test routine completed without error
                                             or no self-test has ever been run.
     Total time to complete Offline data collection: (27540) seconds.
     Offline data collection capabilities: (0x7b) SMART execute Offline immediate.
                                             Auto Offline data collection on/off support.
                                             Suspend Offline collection upon new command.
                                             Offline surface scan supported.
                                             Self-test supported.
                                             Conveyance Self-test supported.
                                             Selective Self-test supported.
     SMART capabilities:              (0x0003) Saves SMART data before entering power-saving mode.
                                             Supports SMART auto save timer.
     Error logging capability:        (0x01) Error logging supported.
                                             General Purpose Logging supported.
     Short self-test routine recommended polling time:      (   2) minutes.
     Extended self-test routine recommended polling time:   ( 255) minutes.
     Conveyance self-test routine recommended polling time: (   5) minutes.
     SCT capabilities:                (0x70bd) SCT Status supported.
                                             SCT Error Recovery Control supported.
                                             SCT Feature Control supported.
                                             SCT Data Table supported.

     SMART Attributes Data Structure revision number: 16
     Vendor Specific SMART Attributes with Thresholds:
     ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
       1 Raw_Read_Error_Rate     0x002f 200   200   051    Pre-fail Always  -           0
       3 Spin_Up_Time            0x0027 180   176   021    Pre-fail Always  -           5983
       4 Start_Stop_Count        0x0032 100   100   000    Old_age  Always  -           73
       5 Reallocated_Sector_Ct   0x0033 200   200   140    Pre-fail Always  -           0
       7 Seek_Error_Rate         0x002e 200   200   000    Old_age  Always  -           0
       9 Power_On_Hours          0x0032 099   099   000    Old_age  Always  -           1249
      10 Spin_Retry_Count        0x0032 100   253   000    Old_age  Always  -           0
      11 Calibration_Retry_Count 0x0032 100   253   000    Old_age  Always  -           0
      12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           73
     192 Power-Off_Retract_Count 0x0032 200   200   000    Old_age  Always  -           72
     193 Load_Cycle_Count        0x0032 200   200   000    Old_age  Always  -           0
     194 Temperature_Celsius     0x0022 118   114   000    Old_age  Always  -           32
     196 Reallocated_Event_Count 0x0032 200   200   000    Old_age  Always  -           0
     197 Current_Pending_Sector  0x0032 200   200   000    Old_age  Always  -           0
     198 Offline_Uncorrectable   0x0030 100   253   000    Old_age  Offline -           0
     199 UDMA_CRC_Error_Count    0x0032 200   200   000    Old_age  Always  -           0
     200 Multi_Zone_Error_Rate   0x0008 100   253   000    Old_age  Offline -           0

     SMART Error Log Version: 1
     No Errors Logged

     SMART Self-test log structure revision number 1
     No self-tests have been logged. [To run self-tests, use: smartctl -t]

     SMART Selective self-test log data structure revision number 1
      SPAN MIN_LBA MAX_LBA CURRENT_TEST_STATUS
         1       0       0 Not_testing
         2       0       0 Not_testing
         3       0       0 Not_testing
         4       0       0 Not_testing
         5       0       0 Not_testing
     Selective self-test flags (0x0): After scanning selected spans, do NOT read-scan remainder of disk.
     If Selective self-test is pending on power-up, resume after 0 minute delay.
     [/spoiler]

     All good ... no problems at all on DSM 4.2, ESXi 5.1 with the RDMFilter flag cleared. The SMART data is not available in the DSM interface though, so there must be something different about the method by which it gathers/collects that data. I might look into it sometime ... after I migrate to 4.3 and ESXi 5.5.

     Check everything you've done over again (I'm not sure what could have gone wrong - Paravirtual controller? It cannot be LSI!). It worked first time for me. Survived a reboot too. Regards, Tuatara.
  5. yes, i did reboot it.

     Rebooting won't do it. You have to RESTART the Virtual Machine in order for it to pick up the changed settings for the RDM controller. Restarting the Synology from within DSM still uses the same VM configuration!
  6. Thanks for the hint =) I'm keen to try it, even though I'm running 5.5 currently, but I can easily rebuild with 5.1 as I'm not 'live' yet with my home lab.

     VMWare doesn't often remove base features in point updates. I'm positive these options are also available in 5.5 ... 5.1 is just when they were first noted as being available. Regards, Tuatara
  7. SMARTd is running on vSynology? I thought if one does RDM (i.e. created via vmkfstools {-z,-r}), then SMART stats are not exposed to a guest VM - vSynology in this case.

     Absolutely correct. There are NO SMART STATISTICS when using an RDM drive. This is BY DESIGN. However, we all know about pushing design limits! If you are using ESXi 5.1, and are willing to push boundaries, you can follow the guide available here: ESXi 5.1 and SMART monitoring

     In a nutshell (first post in the thread), all you need to do is the following, and then any disk (not USB) you plug in thereafter will be available for RDM:
       • In the ESXi Console window, highlight your server
       • Go to the Configuration tab
       • Under Software, click Advanced Settings
       • Click RdmFilter
       • Uncheck the box for RdmFilter.HbaIsShared
       • Click OK

     This sets the RDM filtering to assume that NO DRIVE IS SHARED, eliminating the sharing protections that RDM normally provides. By doing this, an application with direct access to the drive "hardware" should be able to retrieve the SMART data.

     I have not personally done this (I'm still running ESXi 5.0 and DSM 4.2), but if you absolutely must have it ... AFAIK it's possible now without VT-d passthrough of the controller. YMMV. No guarantee it won't crash/hang/core dump. Regards, Tuatara
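     The same setting can also be toggled from the ESXi shell instead of the vSphere client. The option path below is my assumption for how RdmFilter.HbaIsShared is exposed to esxcli on ESXi 5.1 - verify it with the list command before setting anything:

     ```shell
     # Show the current value of the RDM filter option (assumed path):
     esxcli system settings advanced list -o /RdmFilter/HbaIsShared

     # Equivalent of unchecking the box (0 = assume no HBA is shared):
     esxcli system settings advanced set -o /RdmFilter/HbaIsShared --int-value 0
     ```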
  8. After following the guide for vmware tools I could not get shutdown and reboot to work from the vSphere client. The problem was that the "shutdown" binary did not function on my Synology install. I decided to just create my own "shutdown" script and replace the binary that was in /sbin/ with it:

     You beauty! I remember doing a similar thing now! Umm ... yeah ... works great for me too. The script I had written was much simpler though ... mine only called poweroff, since that was all I cared about doing through vSphere at the time. I've updated my DSM 4.2 to your script - which IMHO is much better, as it provides full shutdown & restart functionality. Cheers! [i'll also update my posts, so people looking in the forum can find your script]

     [EDIT: Link to post for optware bootstrap and open-vm-tools installation] viewtopic.php?f=2&t=558&start=280#p9968

     My upgrade to DSM 4.3 Update 3 will happen sometime soon. I'm certain this will be a smooth update process. Thanks go to everyone for all their hard work!
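     erocm123's actual script is attached later in the thread; purely as an illustration of the idea, a minimal /sbin/shutdown replacement might look like the hypothetical sketch below. The DRY_RUN guard is my addition so the dispatch can be checked without powering anything off; the -r/-h flag convention is what vmtoolsd passes for "Restart Guest" and "Shut Down Guest".

     ```shell
     #!/bin/sh
     # Hypothetical sketch of a /sbin/shutdown replacement for DSM 4.2 --
     # NOT erocm123's actual script (use the attachment for that).
     # Translates the flags vmtoolsd passes into the binaries that do work.
     do_shutdown() {
         case "$1" in
             -r) cmd=/sbin/reboot ;;    # vSphere "Restart Guest"
             *)  cmd=/sbin/poweroff ;;  # -h, -p, or no flag: power off
         esac
         if [ -n "$DRY_RUN" ]; then
             echo "would exec $cmd"     # safe check of the dispatch logic
         else
             exec "$cmd"
         fi
     }
     ```

     The installed script would end with a `do_shutdown "$@"` line, and needs to be executable (`chmod +x /sbin/shutdown`) in place of the original binary.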
  9. Nice Cookin' Doc! I've tried it now on Trantor's 4.3 version. Works like a charm

     Sweet! That's great news! I blame my SFTP transfer program - I had exported the file from the running Synology and saved it to a Windows machine to attach to my post. Fantastic news! Easy fix ... and I'll go update my post with a fixed version immediately. This means that I did not miss anything in my notes, and the Idiot's Guide to VMTools could have been written! However, I hope that for everyone, the above post is good enough. You're (all) very welcome!

     [EDIT: Link to post for optware bootstrap and open-vm-tools installation] viewtopic.php?f=2&t=558&start=280#p9968

     Now, seeing that everyone is having a lot of success with 4.3, and the VMTools installation method I'd worked out before is working on DSM 4.3 and ESXi 5.5, I can see an upgrade happening in the coming week.
  10. can you tell me which logs should contain VMware tools entries, and is there any tools-specific logging which can be enabled?

      I don't have the time to research/debug what it is I've done to get my version working (I do remember spending a reasonable amount of time on it though). I've probably forgotten something somewhere. It may be a few days (weekend?) before I can get back to you on this.
  11. Ok ... I've got my installation working, and the vmtools always start up automatically without issue. Some things to check:
       • Did you set the script to be executable? [chmod +x S22open-vm-tools.sh]
       • Upon startup you should see "Starting VMWare Tools:" in the logfile if the [S22open-vm-tools.sh] script is executed.
       • When starting the VMTools manually, do they start up and become visible in ESXi vSphere? [/opt/bin/vmtoolsd --background /var/run/vmtoolsd.pid]
       • Can you control the running VMTools through vSphere [manually or automatically]?
       • Check the logs to see if there is anything failing, or if the vmtools daemon can't start, etc.
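      The checks above, run over SSH on the virtual DSM, would look roughly like this. The /opt/etc/init.d/ location is my assumption about where the optware bootstrap places rc scripts; adjust the paths to match your install:

      ```shell
      # Make the startup script executable so it runs at boot:
      chmod +x /opt/etc/init.d/S22open-vm-tools.sh

      # Start the tools daemon by hand, as described above:
      /opt/bin/vmtoolsd --background /var/run/vmtoolsd.pid

      # Confirm the daemon is actually running:
      ps | grep '[v]mtoolsd'
      ```

      If vmtoolsd shows up in ps but vSphere still reports tools as "not running", the logs are the next place to look.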
  12. How to install optware bootstrap and open-vm-tools into DSM 4.2 and 4.3 (to Update 3 - tested & confirmed)

      open-vm-tools can be installed after installing the bootstrap for optware. After installing the syno open-vm-tools kernel files, the open-vm-tools themselves are installed as a standard ipkg. Add in a startup script, and - for DSM 4.2 only - replace the shutdown binary with a script, and open-vm-tools are running in ESXi. Good Luck! Tuatara

      UPDATE: Fixed the ^M (carriage return) characters at the end of each line. I blame my SFTP transfer application on Windoze.
      UPDATE 2: erocm123 figured out the step I'd forgotten about and created a better script for DSM 4.2 to handle shutdown and restart properly. [Ref: viewtopic.php?p=10163#p10163]
      NOTE: The shutdown script is NOT required for DSM 4.3, as the existing binary performs a shutdown/restart properly.

      S22open-vm-tools-v1.1.zip
      shutdown-erocm123.zip
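      The installation order described above, sketched as commands. The bootstrap itself varies by CPU (see the linked optware bootstrap post), and the script path is an assumption, so treat these lines as the shape of the process rather than exact commands:

      ```shell
      # After the optware bootstrap and syno kernel files are in place:
      ipkg update                    # refresh the optware package lists
      ipkg install open-vm-tools     # install the tools as a standard ipkg

      # Add the startup script from S22open-vm-tools-v1.1.zip:
      cp S22open-vm-tools.sh /opt/etc/init.d/
      chmod +x /opt/etc/init.d/S22open-vm-tools.sh

      # DSM 4.2 only: replace /sbin/shutdown with the script from
      # shutdown-erocm123.zip so vSphere shutdown/restart work.
      ```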
  13. I have not yet migrated to DSM 4.3 (still watching the threads), and am personally still using DSM 4.2, so I can't really answer for any differences in DSM 4.3. I have VMWare tools compiled and installed under DSM 4.2, and have startup/shutdown/IP monitoring/etc. all working well for me. I'm not experiencing any (undue) long delays during shutdown.
  14. Interesting ... I'm using 4.2, and with this version only SCSI controller 0 can be active (AFAIK). I didn't extensively try SCSI 1 - no need. In any case, why would you want to skip controller 0? (i.e. the first in the scanning order)
  15. Tuatara, I have run out of options. My DSM does not see any disks or volumes. The VM loads and DSM runs, but no storage devices of any kind appear in Storage Manager. Build 4.3 v1.1. Config: HP Proliant Gen8, 16GB DDR, Intel G1610 CPU, 4 x WD-RE4 RDMs on a separate SCSI channel. Would you look at my VMX? Thanks

      Hi NetSpider, No worries ... a quick look indicates that you have created a new SCSI controller as Controller 1 and attached all of the drives to this controller. AFAIK the only controller supported must be SCSI Controller 0. You also have two E1000 network cards assigned to the VM ... not certain what the second one is for, but you could/should remove it in my opinion. It won't bring you any benefits unless you're using your virtual XPEnology as a bridge or router.

      I've (quickly) edited your vmx file to remove the second ethernet, move the SCSI controller to 0 (from 1), and also to place all the VMDK drives in sequential order (0,1,2 not 0,1,3). I don't have the ability to test my changes, but I'm confident in the modifications. You can use WinDiff to see the modifications I made.

      Shutdown the VM. Backup your current VMX. Upload this VMX in its place. Start the VM again. This time the drives should come up and all be visible as Disk 0, 1, 2. Regards, Tuatara.

      Synology-DiskStation-SCSI0.zip
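      For reference, the corrected VMX entries would have roughly this shape. This is a sketch of the relevant keys, not the literal contents of the attached file; "pvscsi" is the Paravirtual controller, and the drive file names here are placeholders:

      ```
      scsi0.present = "TRUE"
      scsi0.virtualDev = "pvscsi"          # Paravirtual, not lsilogic
      scsi0.sharedBus = "none"             # no SCSI bus sharing
      scsi0:0.present = "TRUE"
      scsi0:0.fileName = "WD-RE4_1.vmdk"   # RDM mapping file, placeholder name
      scsi0:0.deviceType = "scsi-hardDisk"
      scsi0:1.present = "TRUE"
      scsi0:1.fileName = "WD-RE4_2.vmdk"   # drives numbered sequentially 0,1,2...
      scsi0:1.deviceType = "scsi-hardDisk"
      ethernet0.present = "TRUE"
      ethernet0.virtualDev = "e1000"
      # ethernet1.* lines removed - the second E1000 served no purpose here
      ```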