XPEnology Community

benok

Member
  • Posts

    37
  • Joined

  • Last visited

  • Days Won

    3

Everything posted by benok

  1. OK, I understand. But if it's not included in your (official) repository, I think it would be more confusing than useful (it requires additional download steps and another tutorial variation, like m.sh). I don't want to add yet another unofficial method, so please forget my proposal. Sorry.
  2. Sorry, I confirmed that maxdisks is correctly set to 16 in synoinfo.conf, and the 13th disk can be used. Perhaps this was my mistake. (I don't know why the last drive of my array was not detected at that time, anyway.) But the "Drive Information" figure in the Storage Manager overview still shows 12. Are there any known hacks to fix this?
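     For reference, this is roughly how I checked the current values over SSH. A minimal sketch only, assuming the setting lives in synoinfo.conf as on stock DSM; the sed line is illustrative, and the loader may rewrite these files on boot or update, so a loader-side setting is preferred:

        # check the effective value and the defaults template (run as root over SSH)
        grep -H '^maxdisks' /etc/synoinfo.conf /etc.defaults/synoinfo.conf

        # illustrative only: raise maxdisks in both copies
        sed -i 's/^maxdisks=.*/maxdisks="16"/' /etc/synoinfo.conf /etc.defaults/synoinfo.conf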
  3. Hi @pocopico, first, I want to say thank you for your (and many others' here :-) ) great contribution to this project. I really appreciate your work!

     TL;DR: How about adding a Makefile to TCRP to organize build tasks?

     At first, I downloaded @Peter Suh's "m.sh" and used it a few times. I feel it's very handy for the first installation, but not suitable for repeated runs to tweak parameters. (Of course, that is as intended, I think.) So I extracted what m.sh does and wrote this Makefile. I used it to upgrade 3 XPEnology VMs. It's not complex and please look inside, but let me explain a bit. For a new install, use "make new-install" to configure, build, and install. After tweaking user_config.json manually, run "make rebuild" to update, rebuild, and install. You can specify several partial tasks at once, like "make build install clean reboot".

     I post this Makefile as a basis for discussion because it was written just for me. It's not sophisticated yet. (E.g. PLATFORM could be derived from MODEL, target names like "new-install" are not typing-friendly, rploader.sh unnecessarily backs up several times during new-install because "yes" passes "y" every time...) I think @flyride's recent installation tutorial is a bit complex because of the variety of commands, but a supplemental Makefile like this could organize and simplify the build tasks. I think it's useful, but what do you think about it?

     > @pocopico, @flyride, @Peter Suh Could you consider adding some Makefile like this to TCRP?

     Makefile
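     For readers who just want the flavor of the idea, a rough sketch is below. This is not the attached file; the rploader.sh subcommands are written from memory and may need adjusting for your TCRP version, and recipe lines must be indented with tabs:

        # rough sketch only -- adjust MODEL/PLATFORM/VERSION to your target
        MODEL    ?= DS3622xs+
        PLATFORM ?= ds3622xsp
        VERSION  ?= 7.1.0-42661
        TARGET    = $(PLATFORM)-$(VERSION)

        .PHONY: new-install update serialgen satamap build rebuild backup reboot

        # one-shot first install: refresh loader scripts, generate a serial, map SATA ports, build
        new-install: update serialgen satamap build

        update:
        	./rploader.sh update now
        	./rploader.sh fullupgrade now

        serialgen:
        	./rploader.sh serialgen $(MODEL)

        satamap:
        	./rploader.sh satamap now

        # after editing user_config.json by hand, just run "make rebuild"
        build rebuild:
        	./rploader.sh build $(TARGET)

        backup:
        	./rploader.sh backup now

        reboot:
        	sudo reboot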
  4. Might these be the reason? BTW, I think I'll have to read rploader.sh thoroughly. I want to know all the parameters supported by user_config.json.
  5. Ah, yes. I have to insert a *real* USB stick and attach it to the VM. (I didn't connect the VM to a serial console, but it seems to stop booting after the kernel loads.) OK, I'll try. But I thought rploader.sh (or my own operation mistake) didn't apply user_config.json correctly. (The default maxdisks setting was 16, which I didn't modify, but my DSM 7 VM's maxdisks was 12 and it couldn't find my 13th disk. So I have to rebuild (resync?) the array now.)
  6. Thank you for this report (investigation? study? I'm not sure which word is appropriate :-) ). I really respect and appreciate your contributions, as always.

     Yesterday, I finally upgraded my main machine (ESXi 7, LSI SAS (+ chipset SATA) passthrough, DSM 6.1.x DS3615xs) to DSM 7 DS3622xs+. (I decided to go ahead based on your architecture comparison table post and the successful update of 2 secondary DSM 6.2.x VMs before that.) Today I tried to fix this gap problem and found that SasIdxMap doesn't work anymore; after a lot of tweaking of SataPortMap / DiskIdxMap I concluded I couldn't avoid it. (I'm satisfied that your post confirms my conclusion.)

     Today I even tried booting from USB within an ESXi VM. After changing the boot setting to EFI, TCRP can boot from USB. (Recently, VMware supports USB boot from the EFI firmware.) But, I don't know why, booting DSM from the TCRP boot USB (or EFI?) doesn't work :-( (I also tried Plop Boot Manager, whose USB boot (at least in PBM5) is supposed to work with somewhat older VMware versions, but no luck. I tried all of them.) So it seems the gap can't be mitigated with USB boot. I'm sure booting from USB is not the proper way, but I thought I should post today's struggle.

     p.s. By the way, it seems I should increase maxdisks. Is it safe to edit /etc/synoinfo.conf and /etc.defaults/synoinfo.conf directly? Does this persist after a DSM update? I tried searching around but couldn't find a clear answer yet. (It seems to be OK, but it's not clear.)
  7. I tried this before (with two DSM 6 / Jun's loader / LSI HBA systems) and searched around for workarounds, but no luck. So I wrote a script to confirm a mapped drive's physical location: https://gist.github.com/benok/c117badbfb6af744578d48e2ab1d4d4f (It just does periodic heavy access to the specified drive so you can spot it by its activity LED.) But one of my array enclosure's LEDs has since broken (no light), so it's not completely reliable. You should write down each drive's S/N and paste it on your caddies. I think that's a must with LSI controllers.
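     The idea is just this (a minimal sketch, not the gist itself; the device name is a placeholder, and iflag=direct needs GNU dd):

        #!/bin/sh
        # blink one drive's activity LED by reading from it repeatedly
        # usage: ./blink-drive.sh /dev/sdX   (read-only, but double-check the device name)
        DEV="${1:?usage: $0 /dev/sdX}"
        while true; do
            # read 64 MiB straight from the disk (bypassing cache) to light up its LED
            dd if="$DEV" of=/dev/null bs=1M count=64 iflag=direct 2>/dev/null
            sleep 1
        done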
  8. I've just been in trouble with a crashed btrfs volume, too. I also couldn't mount the volume, and Storage Manager showed the crashed volume as "0 bytes / 0 bytes". I found the page about btrfs recovery below and followed the steps described there. https://lists.opensuse.org/opensuse/2017-02/msg00930.html

     In my case too, "btrfs restore" (Step 8) seemed to solve the problem. I tried to restore a 2.5 TB volume (over 2 TB used), but all I could restore was about 500 GB. I followed the steps further: at Step 11 I could recover almost all chunks (only 1 chunk failed), but the chunk tree itself couldn't be recovered. I tried to restore again at Step 12, but that step didn't recover any files beyond those from Step 8. (btrfs restore doesn't overwrite already-recovered files, so you can choose the same destination to skip files you already have.)

     ------------<excursion>-----------
     By the way, my ESXi setup is a bit complex. (Please read the linked page.) A PSOD crashed the SSD cache, and this made the "User DSM" VM's btrfs volume unmountable. This was caused by a hardware problem. (I moved my server (changed location) recently, and after that my system became unstable...) There were 3 PSODs; two didn't crash the volume at all, but one caused a severe crash of that btrfs volume and of a Win10 VM's NTFS drive. (No user files were lost, but it crashed the Win10 system drive severely!) I had to restore the Win10 VM from a backup. In my setup the disks are virtualized, so I could use snapshots and try any btrfs recovery without risk. (I also copied the 2.5 TB ext4 vmdk to another volume; it's not required, though.) Umm, but there are several trade-offs, and I might not choose the same setup if I built another...
     ------------</excursion>-----------

     I had almost given up, but then I found an answer on Ask Ubuntu which says that a newer kernel's btrfs may be able to mount such a volume. I confirmed with 'uname -a' that the DSM 6.1 kernel is 3.10. So I tried an Ubuntu live CD (18.04 LTS), and it magically mounted my crashed btrfs volume! I could restore almost all files from the crashed volume without losing metadata. (With 'btrfs restore', all the metadata like permissions and timestamps is gone.) I confirmed that 'btrfs chunk-recover' plus mounting with a newer btrfs implementation was the key to recovering from my crash. (I rolled back to the initial state and tried to recover from scratch using the Ubuntu live CD only, but I couldn't mount the volume without running 'btrfs chunk-recover' (Step 11) beforehand. I also tried Step 14 without Step 11, and it failed as well.)

     I recovered by rsync-ing from the crashed btrfs volume to an NFS-exported "Host DSM" volume (ext4). You need to exclude the folders starting with "@" (see the sample after this post). (Most important is excluding "@sharesnap", because it holds snapshot backups with so many duplicated files that you can't restore it if you took many snapshots.) Install GNU screen or tmux to make the session detachable and to have several work windows; without that, you can't shut down your PC while restoring. You can monitor read errors with "tail -f rsync-recover.log | grep '^rsync:'" (rsync side) and "sudo tail -f /var/log/syslog | grep -i btrfs" (btrfs side). (I used script(1) to save the error log, because tee doesn't work; something seems to block the output. The error log is important for finding which files are unreliable.) I felt relieved when I found I could recover almost all the files. I want to follow the "3-2-1 backup" rule from now on, though. :-)
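     The rsync invocation I mean looks roughly like this (a sketch only; the source and destination paths are examples, so adjust them and the exclude list to your share names):

        # run inside screen/tmux; copy from the crashed btrfs mount to the NFS-mounted ext4 share
        rsync -avH --progress \
            --exclude='@sharesnap' \
            --exclude='@*' \
            --log-file=rsync-recover.log \
            /mnt/crashed-volume/ /mnt/hostdsm-nfs/recovered/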
  9. I'm sorry for not replying. I cannot use Proxmox or KVM virtualization on my VPS; that's why I posted this question. I think it's not a bad idea. If it can be implemented, it may also solve the problem of DSM 6.x on Hyper-V...
  10. If you want to accelerate copying/moving files via SMB, you can try SMB multichannel by editing smb.conf manually. I have no experience with it, but some Synology users got double or 4x transfer speed by aggregating 2 or 4 links this way. SMB multi channel working in 6.1 release - Synology Forum https://forum.synology.com/enu/viewtopic.php?t=128482 SMB 3 Multichannel support on DSM 6.2? : synology SMB Multichannel: How It Works & Troubleshooting Guide | Level One Techs https://level1techs.com/video/smb-multichannel-how-it-works-troubleshooting-guide Let us know the result if you try this :-)
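     From those threads, the change amounts to roughly this. A sketch only, assuming DSM's Samba build accepts the standard upstream option; I haven't tested it, and the DSM UI may regenerate smb.conf and undo the edit:

        # add the option to the [global] section of Samba's config, then restart the SMB service
        sudo sed -i '/^\[global\]/a server multi channel support = yes' /etc/samba/smb.conf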
  11. Hi, thank you for posting this very, very useful matrix. I noticed that the working NIC for ESXi is not e1000 but e1000e for me. Could you confirm? (Kernel panic log from the serial console.)
  12. [DSM 6.2 Loader] Do we need to add "disconnect all display cables from the video card" to the checklist? I've read that in a post above, but recently no one has written about display cables. Is that still required for transcoding to work? I have no experience with this because I don't have DS918+-capable hardware, but I guess some of you might miss the display cable disconnection.
  13. [DSM 6.2 Loader] @Tattoofreak, did you check the CPU options of your DSM VM on ESXi? I think you have to set the options below if you haven't yet: enable "Expose hardware assisted virtualization to the guest OS", and set "CPU/MMU Virtualization" to "Hardware CPU and MMU" (or try another setting). Enabling those is generally necessary for nested virtualization to work. # But I'm not confident, because the VDSM error message didn't point to virtualization capability. # So I think you probably did this already...
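     If you prefer editing the VM's .vmx directly (while the VM is powered off), the UI options correspond, as far as I know, to roughly these keys; this is a sketch, and the monitor.* lines in particular may be ignored or differ on newer ESXi versions. The first line is the "Expose hardware assisted virtualization" checkbox; the other two force hardware CPU/MMU virtualization:

        vhv.enable = "TRUE"
        monitor.virtual_exec = "hardware"
        monitor.virtual_mmu = "hardware"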
  14. Ah, really, so that screenshot says it's a "Cache Device". I didn't realize such a limitation exists for NVMe on the DS918+. If NVMe comes to higher models like a DS3619(?), they might remove such a limit. I've had no cache corruption in several years of using an SSD cache, but the heavy access does wear the SSD more, and an SSD cache doesn't accelerate sequential read/write. Umm, I was attracted by the huge datastore and the good random-access performance for fewer than ~10 VMs online. I agree that it is not efficient resource usage. (But it's not such a bad job for an old SATA SSD.) I also use an SSD-only datastore (via DSM NFS) in another environment, but I should reconsider my setup. (I'm just migrating data and apps from a slow DSM 5.2 baremetal NAS to my ESXi server with a new external enclosure.) As far as I remember, several business vSphere users complained heavily about the bad performance of btrfs during the DSM 6 beta (and about Syno's support), though I'm not sure whether it was NFS or iSCSI. That thread continued after the DSM 6 release, so I decided I should not choose btrfs for a VM datastore. I googled Syno's forum again, but no luck. (It might have been deleted by Syno because of its content and because it was a beta thread.)
  15. Hi again. (Your post is very interesting to me. I think I should follow you.) Have you ever tried NVMe passthrough with a 918+ VM? As you know, some 918+ baremetal users have successfully set up NVMe as an SSD cache. I wanted to know whether a 918+ VM can work with NVMe passthrough the way other OSes' VMs do. I think pRDM is a bit more cumbersome than passthrough because of the vmdk setup. (But I appreciate you sharing your experience with pRDM.)

     p.s. My configuration is something like below. I've read on some Synology forum that btrfs performance is very bad for VM workloads, so I run the btrfs VM on top of an ext4 filesystem. I haven't measured performance seriously, but I'm satisfied with it. With this setup you can safely upgrade the child DSM VM using snapshots, so it's easier to follow new DSM releases. It's not recommended for everyone, but for those who use many VMs or who love flexibility more than simplicity. (But recently I've been thinking this design is not so good, because of inefficient energy usage, complexity, and the inability to migrate directly to a real Synology...)

     p.s. 2: This applies only to ESXi, but it might be possible to pass through an iGPU to the child DSM VM for transcoding (with some recent Xeon or Core i CPUs). I don't have the hardware (or budget) to try NVMe or iGPU passthrough now, but I'd like to know if anyone has tried them. Sorry for slightly hijacking the thread...
  16. [DSM 6.2 Loader] You can use the serial console. The EliteDesk has a serial port. Buy a USB serial cable and connect the HP and another PC with it. You can then see some of the boot log and use the serial console with a serial terminal app like PuTTY. I guess that if you disable the serial port in the BIOS, booting might stop; I once experienced that on a VM without a serial port.
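     For reference, connecting looks roughly like this (a sketch; the device name depends on your USB serial adapter, and 115200 8N1 is the rate commonly reported for these loaders):

        # on the second PC (Linux); on Windows, set the same parameters in PuTTY's "Serial" settings
        sudo screen /dev/ttyUSB0 115200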
  17. Wow, thank you for sharing all of this. I don't have enough time to try it now, but I want to someday soon. It's quite interesting and useful. I think it would be better to post this content in a separate thread so that at least DVR users notice it. Currently I feel I have to check every post in this forum because of improper or inaccurate thread titles.
  18. Very interesting. The NanoPi NEO behaves something like a virtual USB (network) mass storage device, right? I think it's an awesome idea. What device do you connect it to? Personally, I don't have a good use for this right now, but someone using a TV's DVR function, or a gamer with a PS4, Xbox, etc., may be interested. Could you share more detail somewhere?
  19. SHR has several drawbacks when upgrading; if you want to use SHR, I recommend you read this article carefully. Don't Roll Into Trouble When Expanding NAS Storage - SmallNetBuilder https://www.smallnetbuilder.com/nas/nas-features/32649-don-t-roll-into-trouble-when-expanding-nas-storage I stopped using SHR after I read it.
  20. I've read that the DSM 6.x boot loader is required to be stored on a read-write file system. Is it possible to make an ISO-bootable boot loader using cloop, tmpfs, and overlayfs, as a Live CD does? (-> Building Your Own Live CD | Linux Journal) If it's possible, we could run DSM 6 on a VPS hosting server. In a VPS hosting environment we can't easily add a virtual disk. (I think if we want to run DSM 6 there, we have to use qemu.) Is there any information that needs to persist on the "boot loader partition" during the installation process? # Even so, we could replace the ISO image after installation (copy it back from the vmdk of a local VM install). If so, all we have to do is cheat the installer into believing it's a writable file system, isn't it? @jun or the other wizards here, what do you think? Is it difficult to make? # I think it's too simple, so there must be some pitfalls
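     What I have in mind is the usual Live CD trick, roughly like this (a sketch only; the paths and image name are made up, and whether DSM's installer accepts such a layered boot partition is exactly the open question):

        # read-only loader image as the lower layer, tmpfs as the writable upper layer,
        # merged view presented as the "boot partition"
        mount -o loop,ro loader-partition.img /mnt/lower
        mount -t tmpfs tmpfs /mnt/rw
        mkdir -p /mnt/rw/upper /mnt/rw/work
        mount -t overlay overlay \
            -o lowerdir=/mnt/lower,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work \
            /mnt/boot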
  21. I don't know how to support that, but I think it would be hard to analyze & patch DSM, keep up with updates, and so on, and I think it has very little merit. I don't recommend using RDM for XPEnology. I think RDM's only merit is that a system whose CPU doesn't support VT-d can still get some information from the disks. It's not easy to configure, it's awkward to replace disks, etc. You can't replace a disk after a crash until you write the RDM config for the new disk. I only used RDM on a very old system that doesn't support VT-d. I recommend passing through a SATA I/O chip (onboard / PCIe) and choosing a VT-d-enabled CPU if you want to use ESXi. My choice of setup for an ESXi system is something like this. It's a home workstation/server setup and not only for NAS, so I know it's not for everyone...
  22. I didn't know such a limitation existed. Was it there before 5.2? I used 240 GB x2 SSDs as a read/write cache for a 3 TB x4 RAID 5 for around 2 years, since DSM 5.2. Last year I upgraded to another system with a 1 TB x2 cache and it also works fine. I think you should check the recent documentation again; it almost only talks about the memory requirements. https://www.synology.com/en-us/knowledgebase/DSM/help/DSM/StorageManager/genericssdcache

     DSM automatically caches frequently used random-access areas within the cache size. (I also recommend not enabling the sequential cache, as the document says; it only runs fast for a while.) The optimal cache size depends on your workload. I'm running 10-15 VMs constantly, but my cache usage is just 45%. (Before the upgrade, cache usage was very high; I can't recall the real number.) My SSD cache seems to be overspecced for my current workload.

     If we could use a virtual SSD for the SSD cache, we could share one SSD between cache and datastore, but I think that's not so good, both for complexity and for the performance loss. I haven't investigated, but I guess a virtual SSD doesn't support some command required for SSD cache, or returns a bad response to it, so DSM refuses the vSSD. You should log in to the console via SSH and check the logs while enabling the SSD cache. We might get it to work by tweaking flags around the vSSD, if such flags exist... I don't have a good idea for a small-form-factor server like your HP Gen8. I built my system in a mid-tower PC case since I wanted to use it both as a workstation and a server.
  23. As far as I know, a virtual SSD can't be used as an SSD cache. In my experience, an SSD cache can only be used by passing through the host SATA I/F and SSDs. If you have tried it with success, please post about it. BTW, my recommended configuration for an ESXi system is as follows. In this config, the whole system is hosted with an SSD cache and (almost) all drives are managed by DSM. It performs well and notifies me of any disk trouble. I'm satisfied with this configuration. (I think) it's not so complex, but it has enough performance and good flexibility. I hope this helps.

     My recommended XPEnology-based configuration of an ESXi system:
     - Boot ESXi from a USB drive (as you do).
     - Add 1 disk for a VMFS datastore (and use its disk interface directly from ESXi). This datastore is just for booting the "Host DSM" VM. (*1)
     - Make one XPEnology VM as the "Host DSM" VM and pass through all disk interfaces (other than the one above) to it. (This VM is used only as the ESXi datastore and for the ESXi host.)
     - Add all other HDDs / SSDs to that XPEnology VM, set up the SSD cache, and format the disk group with ext4 (for VM performance).
     - Create a shared folder for the ESXi datastore using an NFS export. (Better for performance, and good maintainability with SMB access; you can add SMB access for direct maintenance of the datastore from client PCs.)
     - Add that NFS-exported datastore to ESXi (see the sketch after this post).
     - Add your own VMs on that datastore. (*2)
     - Add another XPEnology VM (the "User DSM" VM) with a thick-provisioned vmdk, formatted with btrfs (for usual file sharing, etc.). Add users & apps only on the "User DSM". (*3)

     *1) If you don't use USB sharing for VMs, you can perhaps use a USB disk for this datastore.
     *2) I can also add Windows/macOS desktop VMs with pass-through GPU and USB. (Choose ESXi 6.0 for hosting a Mac; you can still use vCenter 6.5 or later.)
     *3) The only thing I wish for but can't have is H/W encoding with DSM 6.1 + a DS916 VM. (Perhaps you have to pass through the host's iGPU; I don't have such an iGPU system.)
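     Mounting the Host DSM's NFS export as an ESXi datastore can be done from the host shell roughly like this (a sketch; the host address, export path, and datastore name are examples, and the vSphere UI works just as well):

        # on the ESXi host (SSH): mount the NFS share exported by the "Host DSM" VM as a datastore
        esxcli storage nfs add --host=192.168.1.50 --share=/volume1/esxi-datastore --volume-name=dsm-nfs
        esxcli storage nfs list    # verify the datastore is mounted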
  24. Recently I migrated my system from DSM 5.2 to DSM 6 with the new loader and checked the VAAI feature. It works fine with NFS again. I can use thick provisioning over NFS, and offline cloning also works. There's no problem with the SSD cache. I'm very happy with DSM 6 and the new loader. p.s. I haven't tried btrfs for the datastore yet, because I've heard btrfs has very bad performance as a virtual machine datastore... ZFS, BTRFS, XFS, EXT4 and LVM with KVM – A Storage Performance Comparison (2015) | Hacker News https://news.ycombinator.com/item?id=11749010 ZFS, BTRFS, XFS, EXT4 and LVM with KVM – a storage performance comparison http://www.ilsistemista.net/index.php/v ... print=true I'm interested in Snapshot Replication, but it's very hard for me to convert a big datastore and check the performance, because of the lack of available hardware resources. https://www.synology.com/en-us/knowledg ... ection_mgr If someone has tested btrfs performance, please share the result on the forum
  25. I found that the NFS trouble on 5.2 in my environment was a hardware problem. (viewtopic.php?f=2&t=6513) The VAAI plugin doesn't work on 5.2, but using NFS without VAAI seems to be stable. p.s. I found that Time Backup can't be used with an SSD read-write cache; see below. http://forum.synology.com/enu/viewtopic ... 6&t=102391