benok

Members

  • Content count: 18
  • Joined
  • Last visited
  • Days Won: 1

benok last won the day on June 18

benok had the most liked content!

Community Reputation: 1 Neutral

About benok

  • Rank: Newbie
  1. I've read that the DSM 6.x boot loader has to be stored on a read-write file system. Is it possible to make an ISO-bootable loader using cloop, tmpfs and overlay-fs, the way a Live CD does? (-> Building Your Own Live CD | Linux Journal ) If it's possible, we could run DSM 6 on a VPS hosting server, where we can't easily add a virtual disk. (I think that, as it is, we would have to use qemu to run DSM 6 there.) Is there any information that needs to persist on the "boot loader partition" during the installation process? # Even so, we could replace the ISO image after installation (copy it back from the vmdk of a local VM install). If so, all we have to do is trick the installer into believing it's a writable file system, isn't it? @jun or other wizards here, what do you think? Is it difficult to make? # I think it's too simple, so there must be some pitfalls.
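     Roughly what I have in mind, as a sketch only (the image name, mount points and the overlay module are my assumptions; nothing here has been tested against the real DSM 6 loader):

       # read-only loader image (from the ISO / cloop) plus a tmpfs layer on top
       mkdir -p /mnt/ro /mnt/rw /mnt/boot
       mount -o loop,ro synoboot.img /mnt/ro          # read-only lower layer
       mount -t tmpfs tmpfs /mnt/rw                   # writable upper layer (lost on reboot)
       mkdir -p /mnt/rw/upper /mnt/rw/work
       mount -t overlay overlay \
             -o lowerdir=/mnt/ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work /mnt/boot
       # the installer would then see /mnt/boot as a "writable" boot partition

     Anything written to the overlay would disappear on reboot, which is exactly why I'm asking whether something from the installation has to persist on that partition.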
  2. benok

    Loader and RDM

    I don't know how to support that, but I think it would be hard to analyze and patch DSM, keep up with updates and so on, and the benefit would be very small. I don't recommend using RDM for XPEnology. RDM's only merit is that a system whose CPU doesn't support Vt-d can still get some information from the disks. It's awkward to configure and makes replacing disks difficult: you can't swap a disk after a crash until you've written the RDM config for the new one. I only used RDM on a very old system that didn't support Vt-d. If you want to use ESXi, I recommend passing through a SATA I/O chip (onboard / PCIe) and choosing a Vt-d capable CPU. My setup for an ESXi system is something like the one below. It's a home workstation/server setup, not only a NAS, so I know it's not for everyone...
  3. I don't know of such a limitation. Was there one before 5.2? I've used 240 GB x2 SSDs as a read/write cache for a 3 TB x4 RAID 5 for around two years, since DSM 5.2. Last year I upgraded to another system with a 1 TB x2 cache and it also works fine. I think you should check the recent documentation again; it pretty much only talks about the memory requirements. https://www.synology.com/en-us/knowledgebase/DSM/help/DSM/StorageManager/genericssdcache

     DSM automatically caches frequently used random-access areas within the cache size. (I also recommend not enabling the sequential cache, as the document says; it only runs fast for a while.) The optimal cache size depends on your workload. I'm running 10-15 VMs constantly, but my cache usage is just 45%. (Before the upgrade the cache usage was very high; I can't recall the exact number.) My SSD cache seems to be over-spec for my current workload.

     If we could use a virtual SSD for the SSD cache, we could share one SSD between the cache and the datastore, but I think that's not so good for either complexity or performance. I haven't investigated, but I guess a virtual SSD lacks some command required for SSD cache, or returns a bad response to some command, and so DSM refuses the vSSD. You should log in to the console via SSH and check the logs around enabling the SSD cache (see the sketch below). We might get it to work with a flag tweak around vSSD, if such flags exist... I don't have a good idea for a small-form-factor server like your HP Gen8. I built my system in a mid-tower PC case, since I wanted to use it both as a workstation and a server.
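     What I mean by checking the logs, as a rough example (the hostname is a placeholder and the log paths / grep pattern may differ by DSM version):

       ssh root@your-dsm-host        # hypothetical address of the DSM box
       # watch the logs while you try to create the cache in Storage Manager
       tail -f /var/log/messages /var/log/kern.log | grep -i -E 'ssd|cache|ata'
       # whatever DSM rejects about the vSSD should show up here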
  4. As far as I know, a virtual SSD can't be used as an SSD cache. In my experience, the SSD cache only works when the host SATA I/F and the SSDs are passed through. If you have tried it with success, please post about that. BTW, my recommended configuration for an ESXi system is below. In this config the whole system is hosted with SSD cache and (almost) all drives are managed by DSM. It performs well and notifies me of any disk trouble. I'm satisfied with this configuration: (I think) it's not so complex, but it has enough performance and good flexibility. I hope this helps.

     My recommended XPEnology-based configuration of an ESXi system:
       • Boot ESXi from a USB drive (as you do).
       • Add 1 disk for a VMFS datastore (and use its disk interface directly from ESXi). This datastore is only for booting the "Host DSM" VM. (*1)
       • Create one XPEnology VM as the "Host DSM" VM and pass through all disk interfaces (other than the one above) to it. This VM is used only for the ESXi datastore and the ESXi host.
       • Add all other HDDs/SSDs to that XPEnology VM, set up the SSD cache, and format the disk group with ext4 (for VM performance).
       • Create a shared folder for the ESXi datastore and export it via NFS (better performance and good maintainability; you can also add SMB access for direct maintenance of the datastore from client PCs).
       • Add that NFS-exported datastore to ESXi (see the esxcli sketch below).
       • Add your own VMs on that datastore. (*2)
       • Add another XPEnology VM (the "User DSM" VM) with a thick-provisioned vmdk, formatted with btrfs (for usual file sharing, etc.). Add users & apps only on the "User DSM". (*3)

     *1) If you don't use USB sharing for a VM, you can perhaps use a USB disk for this datastore.
     *2) I can also add a Windows/macOS desktop VM with a passed-through GPU and USB. (Choose ESXi 6.0 for hosting a Mac. You can still use vCenter 6.5 or later.)
     *3) The only thing I wish for but can't have is H/W encoding with DSM 6.1 + a DS916 VM. (Perhaps you have to pass through the host's iGPU; I don't have a system with one.)
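     For the "add the NFS-exported datastore to ESXi" step, it looks roughly like this from the ESXi shell (the IP, export path and datastore name are examples, adjust to your setup):

       # export the shared folder via NFS from the Host DSM first (enable NFS and
       # set NFS permissions on the folder), then mount it as an ESXi datastore
       esxcli storage nfs add --host=10.21.100.1 --share=/volume1/datastore1 --volume-name=dsm-nfs
       esxcli storage nfs list    # check it shows up and whether Hardware Acceleration is "Supported"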
  5. benok

    5.2 NFS VAAI broken?

    Recently I migrated my system from DSM 5.2 to DSM 6 with NewLoader and checked the VAAI feature. It works fine with NFS again: I can use thick provisioning on NFS, and offline cloning also works (see the quick check below). There's no problem with the SSD cache. I'm very happy with DSM 6 and NewLoader.

    p.s. I haven't tried btrfs for the datastore yet, because I've heard btrfs performs very badly as a virtual machine datastore...
    ZFS, BTRFS, XFS, EXT4 and LVM with KVM – A Storage Performance Comparison (2015) | Hacker News https://news.ycombinator.com/item?id=11749010
    ZFS, BTRFS, XFS, EXT4 and LVM with KVM – a storage performance comparison http://www.ilsistemista.net/index.php/v ... print=true
    I'm interested in Snapshot Replication, but converting a big datastore and checking the performance is hard for me because of the lack of available hardware. https://www.synology.com/en-us/knowledg ... ection_mgr
    If someone has tested btrfs performance, please share the results on the forum.
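    If you want to check your own NFS datastore, a quick test like this should do (the datastore and vmdk names are just examples):

      # on the ESXi host: creating an eager-zeroed thick disk on NFS only succeeds
      # when the VAAI NAS plugin (reserve-space primitive) is actually working
      mkdir -p /vmfs/volumes/dsm-nfs/vaai-test
      vmkfstools -c 10G -d eagerzeroedthick /vmfs/volumes/dsm-nfs/vaai-test/test.vmdk
      esxcli storage nfs list    # "Hardware Acceleration" should show "Supported"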
  6. benok

    SSD Cache works on 5.2

    I found that the NFS trouble in my environment on 5.2 is a hardware problem. (viewtopic.php?f=2&t=6513) The VAAI plugin doesn't work on 5.2, but using NFS without VAAI seems to be stable. p.s. I found that Time Backup can't be used with an SSD read-write cache; see below. http://forum.synology.com/enu/viewtopic ... 6&t=102391
  7. benok

    5.2 NFS VAAI broken?

    With further investigation, I found that the instability of my environment is not caused by NFS but by another hardware problem. Separating the external RDM disks of ESXi (behind a port multiplier) and the internal Intel ICH disks into different disk groups made my environment stable. Now all the NFS file-open problems have gone away. The "WARNING: NFS: ###: Got error 13 from mount call" log entry is still generated every minute, but it doesn't seem to matter. Does this exist in your vmkernel.log? (see the grep below) > all

    I'll go with 5.2, with SSD cache and an NFS datastore without VAAI. Without VAAI, cloning is slow and thick disks can't be used, but with the DSM VM inside a single ESXi server the cloning traffic stays local, and the CPU overhead with VMXNet3 should be small(?).
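    To check whether you have it too, on the ESXi host something like:

      # count how often the mount warning appears in the current log
      grep -c "Got error 13 from mount call" /var/log/vmkernel.log
      # or watch it live
      tail -f /var/log/vmkernel.log | grep "mount call"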
  8. benok

    SSD Cache works on 5.2

    Hmm, I may go back to 5.1 too. I'm using iSCSI with my old servers, but it's inconvenient for tweaking vmx files or backing up VMs. With NFS it's easy to edit the vmx directly over CIFS, and to back up thin VMs directly from XPEnology with tar's sparse (-S) option (example below). So I want to migrate the old iSCSI VMs to the new server with NFS & VAAI. I didn't see any kernel module differences between XPE 5.1 and 5.2, but something must be wrong with 5.2 around NFS.
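    The sparse backup I mean is just this kind of thing (the paths and VM name are examples, not my exact layout):

      # on the XPEnology box: back up a thin-provisioned VM folder from the NFS volume;
      # -S (--sparse) archives the sparse (thin) vmdk files efficiently instead of
      # storing every zero block
      tar -cSzf /volume2/backup/myvm-$(date +%Y%m%d).tgz -C /volume2/datastore1 myvm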
  9. benok

    SSD Cache works on 5.2

    SSD cache works at last!
      • XPenoboot 5.2-5592.1
      • DSM 5.2-5592.1
      • ESXi 6.0
      • Marvell 88SE9230 (passthrough)
      • Plextor PX-256M5S

    It was unstable at first. In my case the 2nd SSD got several errors (no error in the GUI; confirmed with tail -f /var/log/kern.log, see below). At first I thought it was a problem with my drive, but it was a problem with my SATA card: it works on port 1 & port 3 but not on port 2 & port 4. After I put my SSDs on port 1 & port 3, it is very stable. My impression of the SSD cache performance is "not so bad", a little worse than I expected. Did you check Synology's HCL? https://www.synology.com/en-global/comp ... 2+SATA+SSD I was lucky that both of my SSDs are listed there.

    p.s. I'd like to use 5.1, which doesn't have the NFS VAAI problem (see http://xpenology.com/forum/viewtopic.php?f=2&t=6513), but with 5.1 the VM hangs ESXi as well, without a PSOD (it needs a hard power-off). If only we didn't have the NFS problem with 5.2.
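    In case it helps anyone debugging a similar SATA card issue, this is roughly how I would map drives to ports and spot the errors that never reach the GUI (device names will differ on your system):

      # see which ATA port / controller each drive sits on
      ls -l /sys/block/sd*/device
      # watch for link resets and errors while the cache is under load
      tail -f /var/log/kern.log | grep -i -E 'ata[0-9]|error|reset'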
  10. benok

    5.2 NFS VAAI broken?

    I think not only VAAI but NFS itself is unstable on 5.2. I tried an NFS datastore without VAAI and it doesn't look so bad, but under heavy load (a direct CIFS copy from a local Win10 VM's USB3 hard drive, via a passed-through USB3 I/F & VMXNet3), ESXi temporarily lost VMs because of an NFS mount error (see vmkernel.log below).

    2015-08-21T12:12:19.148Z cpu2:33147)WARNING: NFS: 221: Got error 13 from mount call
    2015-08-21T12:12:50.536Z cpu6:33822)Config: 679: "SIOControlFlag1" = 33822, Old Value: 0, (Status: 0x0)
    2015-08-21T12:12:51.144Z cpu2:36774)WARNING: NFS: 221: Got error 13 from mount call
    2015-08-21T12:13:28.314Z cpu2:51281)FSS: 6146: Failed to open file '************.vmdk'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x4008, world: 36793 [vmx]): Busy
    2015-08-21T12:13:28.316Z cpu2:51281)FSS: 6146: Failed to open file '************.vswp'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x8, world: 36793 [vmx]): Busy
    2015-08-21T12:13:28.378Z cpu2:51281)FSS: 6146: Failed to open file '*****.vmdk'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x4008, world: 36773 [vmx]): Busy
    2015-08-21T12:13:28.382Z cpu2:51281)FSS: 6146: Failed to open file '*****-2375d83f.vswp'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x8, world: 36773 [vmx]): Busy
    2015-08-21T12:13:31.044Z cpu3:51281)FSS: 6146: Failed to open file '*****-000002-delta.vmdk'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x4008, world: 37725 [vmx]): Busy
    2015-08-21T12:13:31.076Z cpu3:51281)FSS: 6146: Failed to open file '*******-1885038055-1.vswp'; Requested flags 0x4001, world: 51281 [sdrsInjector], (Existing flags 0x8, world: 34668 [hostd-worker]): Busy

    I may have to choose iSCSI again... p.s. I finally got the SSD cache working on 5.2. (With 5.1 the VM hangs ESXi without a PSOD.) I'll post about that later in the SSD cache thread.
  11. benok

    5.2 NFS VAAI broken?

    Thank you for the info. So we have confirmed that XPEnology 5.2's NFS VAAI support is broken. I hope the dev team will take care of this problem.
  12. benok

    5.2 NFS VAAI broken?

    You mean DSM 5.2-5565 has the problem but 5592 doesn't, right? There is nothing related in the release notes for 5592, though. If your DS415+ had been on 5.2-5565 and its plugin status was "Not supported", we could hope it will be fixed in 5592.
  13. benok

    5.2 NFS VAAI broken?

    vSphere 5.1 & ESXi 6.0. (I also checked a direct connection to ESXi 5.1 from the C# client.)
  14. benok

    5.2 NFS VAAI broken?

    I confirmed that DSM 5.1-5055 works from my ESXi 6 host. Perhaps it's a problem specific to 5.2.

    Confirmed that the NFS share from 5.1 is VAAI supported (the last one):

    [root@sd-esxi3:/var/log] esxcli storage nfs list
    Volume Name  Host         Share                Accessible  Mounted  Read-Only  isPE   Hardware Acceleration
    -----------  -----------  -------------------  ----------  -------  ---------  -----  ---------------------
    datastore1   10.21.100.1  /volume2/datastore1  true        true     false      false  Not Supported
    ssd128x2     10.21.100.1  /volume3/ssd128x2    true        true     false      false  Not Supported
    nfs-test     10.21.0.22   /volume2/iso         true        true     false      false  Supported

    Successful access log from hostd.log:

    2015-06-29T09:20:28.867Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: session request for 10.21.0.22 /volume2/iso /vmfs/volumes/bc0a7e1c-9d181d3b NFS 1
    2015-06-29T09:20:28.870Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: mounted /volume2/iso on host 10.21.0.22
    2015-06-29T09:20:28.870Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
    2015-06-29T09:20:28.871Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
    2015-06-29T09:20:28.871Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
    2015-06-29T09:20:28.875Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: session request for 10.21.0.22 /volume2/iso /vmfs/volumes/bc0a7e1c-9d181d3b NFS 1
    2015-06-29T09:20:28.878Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: mounted /volume2/iso on host 10.21.0.22
    2015-06-29T09:20:28.878Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
    2015-06-29T09:20:28.878Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
    2015-06-29T09:20:28.878Z info hostd[43281B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
  15. benok

    5.2 NFS VAAI broken?

    Found errors in hostd.log:

    hostd.log:2015-06-26T13:23:54.512Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: session request for 10.21.100.1 /volume2/datastore1 /vmfs/volumes/809a7328-3a50ea07 NFS 1
    hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: mounted /volume2/datastore1 on host 172.21.200.241
    hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
    hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Remote machine does not support hardware offload
    hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
    hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Remote machine does not support hardware offload
    hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Checking support status
    hostd.log:2015-06-26T13:23:54.513Z info hostd[4CA80B70] [Originator@6876 sub=Libs] SynologyNasPlugin: Remote machine does not support hardware offload

    Perhaps we have to dump packets with Wireshark and compare the conversations with a working version (XPEnology 5.1?), see the capture sketch below. But anyway, could you check the modules behind NFS & VAAI, please? > Core dev team Can we solve this problem like the iSCSI one?
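    For the packet dump, something like this on the ESXi host should be enough to capture the NFS conversation for comparison (tcpdump-uw ships with ESXi; the vmk interface and IP are examples from my setup):

      # capture NFS traffic between the host and the DSM box, then open the capture in wireshark
      tcpdump-uw -i vmk0 -s 1514 -w /tmp/nfs-dsm52.pcap host 10.21.100.1 and port 2049
      # repeat against a working 5.1 box and compare the NFS / VAAI requests side by side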