benok

Members
  • Content count: 14
Community Reputation

0 Neutral

About benok

  • Rank: Newbie
  1. 5.2 NFS VAAI broken?

    Recently I migrated my system from DSM 5.2 to DSM 6 with NewLoader and checked the VAAI feature. It works fine with NFS again: I can use thick provisioning on NFS, offline cloning works, and there is no problem with the SSD cache. I'm very happy with DSM 6 on NewLoader.

    P.S. I haven't tried btrfs for the datastore yet, because I've heard that btrfs performs badly as a virtual machine datastore:
    ZFS, BTRFS, XFS, EXT4 and LVM with KVM – A Storage Performance Comparison (2015) | Hacker News https://news.ycombinator.com/item?id=11749010
    ZFS, BTRFS, XFS, EXT4 and LVM with KVM – a storage performance comparison http://www.ilsistemista.net/index.php/v ... print=true

    I'm interested in Snapshot Replication, but converting a big datastore and checking its performance is hard for me because of the lack of spare hardware. https://www.synology.com/en-us/knowledg ... ection_mgr If someone has tested btrfs performance, please share the results on the forum. (A quick way to verify VAAI status from the ESXi side is sketched below.)
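    If you want to confirm that the Synology NFS VAAI plugin is actually active, a minimal check from the ESXi shell looks roughly like this (these are the same commands quoted later in this thread; the exact output columns depend on your ESXi version):

        # List installed VIBs and look for the Synology NFS plugin
        esxcli software vib list | grep -i syno

        # Per-datastore hardware acceleration status; "Supported" means VAAI offload is active
        esxcli storage nfs list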
  2. SSD Cache works on 5.2

    I found that the NFS trouble in my 5.2 environment was a hardware problem (viewtopic.php?f=2&t=6513). The VAAI plugin doesn't work on 5.2, but using NFS without VAAI seems to be stable.

    P.S. I found that Time Backup can't be used with an SSD read-write cache; see below. http://forum.synology.com/enu/viewtopic ... 6&t=102391
  3. 5.2 NFS VAAI broken?

    With further investigation, I found that the instability in my environment was not caused by NFS but by another hardware problem. Separating the external ESXi RDM disks (attached through a port multiplier) and the internal Intel ICH disks into different disk groups made my environment stable, and all of the NFS file-open problems went away. A "WARNING: NFS: ###: Got error 13 from mount call" line is still logged every minute, but it doesn't seem to matter. Does this show up in your vmkernel.log too? (A quick way to check is sketched below.) > all

    I'll go with 5.2, an SSD cache, and an NFS datastore without VAAI. Without VAAI, cloning is slow and thick disks can't be used, but since the DSM VM runs inside a single ESXi server, the cloning traffic stays local and the CPU overhead with VMXNet3 should be small(?).
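    To check your own host for the same message, something along these lines from the ESXi shell should be enough (the log path is the standard ESXi location):

        # Count occurrences of the NFS mount error in the live vmkernel log
        grep -c "Got error 13 from mount call" /var/log/vmkernel.log

        # Or watch for new NFS-related messages as they happen
        tail -f /var/log/vmkernel.log | grep NFS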
  4. SSD Cache works on 5.2

    Hmm, I may go back to 5.1 too. I'm using iSCSI with my old servers, but it's inconvenient for tweaking vmx files or backing up VMs. NFS is much more convenient: I can edit the vmx directly over CIFS and back up thin VMs directly from XPEnology with tar's sparse (-S) option (a sketch is below). So I want to migrate the old iSCSI VMs to the new server with NFS and VAAI. I didn't see any kernel module differences between XPEnology 5.1 and 5.2, but something seems to be wrong with NFS in 5.2.
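    For reference, a sparse-aware backup of a VM folder from the XPEnology shell could look like the sketch below. The volume and folder names are only placeholders, and this assumes the tar on DSM is a GNU tar with sparse support, as described above:

        # -S (--sparse) makes tar skip the unallocated regions of thin-provisioned VMDKs,
        # so the archive doesn't grow to the full provisioned size
        tar -cSzf /volume1/backup/myvm.tgz -C /volume1/nfs-datastore myvm/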
  5. SSD Cache works on 5.2

    SSD cache works at last!

    Environment:
        XPenoboot 5.2-5592.1 / DSM 5.2-5592.1
        ESXi 6.0
        Marvell 88SE9230 (passthrough)
        Plextor PX-256M5S

    It was unstable at first. In my case, the second SSD got several errors (no error was shown in the GUI; I confirmed it with tail -f /var/log/kern.log). At first I thought it was a problem with my drive, but it turned out to be a problem with my SATA card: it works on port 1 and port 3, but not on port 2 and port 4. After moving the SSDs to port 1 and port 3, it has been very stable.

    My impression of the SSD cache performance is "not so bad", a little worse than I expected. By the way, did you check Synology's HCL? https://www.synology.com/en-global/comp ... 2+SATA+SSD I was lucky that both of my SSDs are listed there.

    P.S. I'd like to use 5.1, which doesn't have the NFS VAAI problem (see http://xpenology.com/forum/viewtopic.php?f=2&t=6513), but on 5.1 the VM hangs together with ESXi, without a PSOD (it needs a hard power-off). If only we didn't have the NFS problem with 5.2.
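    If you want to watch for the same kind of SSD/SATA errors while the cache is under load, a filter over the kernel log from an SSH session on the DSM box should do; the grep pattern here is just a broad catch-all:

        # Follow the kernel log and show only SATA/disk-related messages
        tail -f /var/log/kern.log | grep -iE "ata|sd[a-z]"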
  6. 5.2 NFS VAAI broken?

    I think not only VAAI but NFS itself is unstable on 5.2. I tried an NFS datastore without VAAI and it didn't look too bad, but under heavy load (a direct CIFS copy from a local Win10 VM's USB3 hard drive, via a passed-through USB3 interface and VMXNet3), ESXi temporarily lost the VM because of an NFS mount error (see the vmkernel.log excerpt below).

        2015-08-21T12:12:19.148Z cpu2:33147)WARNING: NFS: 221: Got error 13 from mount call
        2015-08-21T12:12:50.536Z cpu6:33822)Config: 679: "SIOControlFlag1" = 33822, Old Value: 0, (Status: 0x0)
        2015-08-21T12:12:51.144Z cpu2:36774)WARNING: NFS: 221: Got error 13 from mount call
        2015-08-21T12:13:28.314Z cpu2:51281)FSS: 6146: Failed to open file '************.vmdk'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x4008, world: 36793 [vmx]): Busy
        2015-08-21T12:13:28.316Z cpu2:51281)FSS: 6146: Failed to open file '************.vswp'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x8, world: 36793 [vmx]): Busy
        2015-08-21T12:13:28.378Z cpu2:51281)FSS: 6146: Failed to open file '*****.vmdk'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x4008, world: 36773 [vmx]): Busy
        2015-08-21T12:13:28.382Z cpu2:51281)FSS: 6146: Failed to open file '*****-2375d83f.vswp'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x8, world: 36773 [vmx]): Busy
        2015-08-21T12:13:31.044Z cpu3:51281)FSS: 6146: Failed to open file '*****-000002-delta.vmdk'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x4008, world: 37725 [vmx]): Busy
        2015-08-21T12:13:31.076Z cpu3:51281)FSS: 6146: Failed to open file '*******-1885038055-1.vswp'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x8, world: 34668 [hostd-worker]): Busy

    I may have to go back to iSCSI again...

    P.S. I finally got the SSD cache working on 5.2 (on 5.1 the VM hangs together with ESXi, without a PSOD). I'll post about that later in the SSD cache thread.
  7. 5.2 NFS VAAI broken?

    Thank you for your info. So we have confirmed that XPEnology 5.2's NFS VAAI support is broken. I hope the dev team will take care of this problem.
  8. 5.2 NFS VAAI broken?

    You think DSM 5.2-5565 has the problem but 5592 doesn't, right? Nothing related appears in the 5592 release notes, though. If your DS415+ was on 5.2-5565 and its plugin status was "Not Supported", then we can hope it will be fixed in 5592.
  9. 5.2 NFS VAAI broken?

    vSphere 5.1 & ESXi 6.0. (I also checked a direct connection to ESXi 5.1 from the C# client.)
  10. 5.2 NFS VAAI broken?

    I confirmed that DSM 5.1-5055 works from my ESXi 6 host, so perhaps it is a problem specific to 5.2. The NFS share exported from 5.1 is reported as VAAI-supported (the last entry below):

        [root@sd-esxi3:/var/log] esxcli storage nfs list
        Volume Name  Host         Share                Accessible  Mounted  Read-Only  isPE   Hardware Acceleration
        -----------  -----------  -------------------  ----------  -------  ---------  -----  ---------------------
        datastore1   10.21.100.1  /volume2/datastore1  true        true     false      false  Not Supported
        ssd128x2     10.21.100.1  /volume3/ssd128x2    true        true     false      false  Not Supported
        nfs-test     10.21.0.22   /volume2/iso         true        true     false      false  Supported

    Successful access log from hostd.log:

        2015-06-29T09:20:28.867Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: session request for 10.21.0.22 /volume2/iso /vmfs/volumes/bc0a7e1c-9d181d3b NFS 1
        2015-06-29T09:20:28.870Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: mounted /volume2/iso on host 10.21.0.22
        2015-06-29T09:20:28.870Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
        2015-06-29T09:20:28.871Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
        2015-06-29T09:20:28.871Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
        2015-06-29T09:20:28.875Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: session request for 10.21.0.22 /volume2/iso /vmfs/volumes/bc0a7e1c-9d181d3b NFS 1
        2015-06-29T09:20:28.878Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: mounted /volume2/iso on host 10.21.0.22
        2015-06-29T09:20:28.878Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
        2015-06-29T09:20:28.878Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
        2015-06-29T09:20:28.878Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
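    When comparing a working and a non-working datastore, it helps to pull only the plugin-related lines out of the host daemon log; a filter like this over the standard ESXi log location is enough:

        # Show only the Synology NFS VAAI plugin messages, most recent last
        grep SynologyNasPlugin /var/log/hostd.log | tail -n 20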
  11. 5.2 NFS VAAI broken?

    Found errors in hostd.log:

        hostd.log:2015-06-26T13:23:54.512Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: session request for 10.21.100.1 /volume2/datastore1 /vmfs/volumes/809a7328-3a50ea07 NFS 1
        hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: mounted /volume2/datastore1 on host 172.21.200.241
        hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
        hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Remote machine does not support hardware offload
        hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
        hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Remote machine does not support hardware offload
        hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
        hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Remote machine does not support hardware offload

    Perhaps we have to dump packets with Wireshark and compare the conversations against a version that works (XPEnology 5.1?); a capture sketch is below. But anyway, could you please check the modules behind NFS & VAAI? > Core dev team
    Can we solve this problem like the iSCSI one?
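    A capture for that comparison can be taken directly on the ESXi host with the bundled tcpdump-uw and opened later in Wireshark. The vmkernel interface name and output path below are only examples:

        # Capture NFS traffic (port 2049) on the storage vmkernel interface into a pcap file
        tcpdump-uw -i vmk0 -s 1514 -w /tmp/nfs-dsm52.pcap port 2049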
  12. 5.2 NFS VAAI broken?

    Recently I've set up two new XPEnology servers with XPEnoboot 5.2-5565.2: one bare metal and the other a VM on ESXi. Everything works fine, but NFS VAAI doesn't. I'm new to NFS datastores with VAAI (I used iSCSI before). I installed the plugin following roughly the procedure on the page below, except that I passed the VIB's URL to esxcli directly (the commands are sketched after this post).
    Install Synology NFS VAAI Plug-in for VMware - Mike Tabor https://miketabor.com/synology-nfs-vaai-plugin/

    After a reboot I checked, but it doesn't seem to work in either of the cases below (it says "Not Supported").

    Case 1: ESXi 5.1 & bare-metal 5.2

        ~ # esxcli software vib list | grep -i syno
        esx-nfsplugin  1.0-1  Synology  VMwareAccepted  2015-06-28

        ~ # esxcli storage nfs list
        Volume Name  Host         Share               Accessible  Mounted  Read-Only  Hardware Acceleration
        -----------  -----------  ------------------  ----------  -------  ---------  ---------------------
        ds1-nfs      10.24.100.1  /volume1/datastore  true        true     false      Not Supported

    Case 2: ESXi 6 & virtual 5.2

        [root@esxi3:~] esxcli software vib list | grep -i syno
        esx-nfsplugin  1.0-1  Synology  VMwareAccepted  2015-06-26

        [root@esxi3:~] esxcli storage nfs list
        Volume Name  Host         Share                Accessible  Mounted  Read-Only  isPE   Hardware Acceleration
        -----------  -----------  -------------------  ----------  -------  ---------  -----  ---------------------
        datastore1   10.21.100.1  /volume2/datastore1  true        true     false      false  Not Supported
        ssd128x2     10.21.100.1  /volume3/ssd128x2    true        true     false      false  Not Supported

    So I googled 'xpenology vaai nfs' and found your post at the top. I find NFS datastores very convenient, especially when VAAI works, but it seems to be broken at the moment. Any suggestions?
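    For anyone repeating the install, the direct-from-URL variant I mentioned is roughly the following; the URL is only a placeholder for wherever the Synology NFS VAAI VIB is hosted:

        # Install the Synology NFS plugin VIB straight from a URL, then reboot the host
        esxcli software vib install -v http://example.com/path/to/esx-nfsplugin.vib
        reboot

        # After the reboot, confirm the plugin is installed and re-check acceleration status
        esxcli software vib list | grep -i syno
        esxcli storage nfs list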
  13. I found one below: Synology NAS Server Check Disk [ e2fsck Command ]. Did you recover the file system this way? I'll try it after backing up my files. (The rough shape of the check is sketched below.)
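    In case it helps someone searching later, such a check usually looks roughly like the sketch below. Whether the data volume really sits on /dev/md2 depends on the box, so treat the device name as an assumption, and only run it against an unmounted volume:

        # Unmount the data volume first (services holding it open must be stopped beforehand)
        umount /volume1

        # Check and repair the filesystem; -f forces a full check, -y answers yes to all fixes
        e2fsck -fy /dev/md2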
  14. Hi, I've upgraded a gnoboot system to nanoboot (4493) following the blog post below. http://cyanlabs.net/tutorials/synology- ... o-nanoboot The upgrade completed successfully, but I misconfigured ESXi's RDM settings (around virtual RDM / differencing disks) and my volume crashed. (An RDM mapping sketch is below for reference.)
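    For anyone recreating RDM mappings after a change like this, the two compatibility modes are created with different vmkfstools flags; the device identifier and mapping file path below are only placeholders:

        # Virtual-compatibility RDM: goes through the VMFS layer, so snapshots/delta disks are allowed
        vmkfstools -r /vmfs/devices/disks/naa.XXXXXXXXXXXX /vmfs/volumes/datastore1/dsm/dsm-rdm.vmdk

        # Physical-compatibility (pass-through) RDM: raw SCSI commands, no snapshots
        vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXX /vmfs/volumes/datastore1/dsm/dsm-rdm.vmdk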