benok

Members
  • Content Count: 30
  • Days Won: 3
  • Community Reputation: 3 (Neutral)
  • Rank: Junior Member

benok last won the day on April 4 and had the most liked content!


  1. I've just been in trouble with a crashed btrfs volume, too. I also couldn't mount the volume, and Storage Manager showed the crashed volume as "0 bytes / 0 bytes". I found a page about btrfs recovery and followed the steps described there: https://lists.opensuse.org/opensuse/2017-02/msg00930.html

     In my case too, "btrfs restore" (Step 8) seemed to solve the problem. I tried to restore a 2.5TB volume (over 2TB used), but all I could restore was about 500GB. I followed the steps further: at Step 11 I could recover almost all chunks (only 1 chunk failed), but the chunk tree itself couldn't be recovered. I tried to restore again at Step 12, but that step didn't recover any files beyond those from Step 8. (btrfs restore doesn't overwrite already recovered files, so you can choose the same destination to skip files that were already restored.)

     ------------<excursion>-----------
     By the way, my ESXi setup is a bit complex. (Please read the linked page.) A PSOD corrupted the SSD cache, and that made the "User DSM" VM's btrfs volume unmountable. It was caused by a hardware problem. (I moved my server recently, and after that my system became unstable...) There were 3 PSODs; two didn't damage the volume at all, but one caused severe corruption of that btrfs volume and of a Win10 VM's NTFS drive. (No user files were lost, but it corrupted the Win10 system drive badly!) I had to restore the Win10 VM from a backup. In my setup the disk is virtualized, so I could take a snapshot and try any btrfs recovery without risk. (I also copied the 2.5TB ext4 vmdk to another volume; that's not required, though.) Umm, but there are several trade-offs, and I might not choose the same setup if I built another one...
     ------------</excursion>-----------

     I had almost given up, but then I found an answer on Ask Ubuntu suggesting that the btrfs code in newer kernels handles recovery much better. I confirmed with 'uname -a' that the DSM 6.1 kernel is 3.10. So I tried an Ubuntu live CD (18.04 LTS), and it magically mounted my crashed btrfs volume! I could restore almost all files from the crashed volume without losing metadata. (With 'btrfs restore', all the metadata like permissions and timestamps is lost.) I confirmed that 'btrfs rescue chunk-recover' plus mounting with a newer btrfs stack was the key to recovering from my crash. (I rolled back to the initial state and tried to recover from scratch using only the Ubuntu live CD, but I couldn't mount the volume without running 'btrfs rescue chunk-recover' before Step 11. I also tried Step 14 without Step 11, and that failed too.)

     I recovered with rsync, from the crashed btrfs volume to an NFS-exported "Host DSM" volume (ext4). You need to exclude the folders starting with "@" (see the sketch below). (The most important one to exclude is "@sharesnap", because it holds snapshot backups with lots of duplicated files, and you can't restore it if you took many snapshots.) Install GNU screen or tmux so the session is detachable and you have several work windows (without that, you can't shut down your PC while restoring). You can monitor read errors with "tail -f rsync-recover.log | grep '^rsync:'" (rsync) and "sudo tail -f /var/log/syslog | grep -i btrfs" (btrfs). (I used script(1) to save the error log, because tee didn't work; something seemed to block the output. The error log is important for finding out which files are unreliable.)

     I felt relieved when I found I could recover almost all of the files. I do want to follow the "3-2-1 backup" rule from now on, though. :-)
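     Here is a rough sketch of the rsync part, assuming the crashed volume is already mounted read-only at /mnt/crashed (from the Ubuntu live CD) and the Host DSM's ext4 share is NFS-mounted at /mnt/nfs. The device name and paths are just examples from my setup, so adjust them to yours:

         # mount the damaged volume read-only (DSM usually puts the data volume on LVM; device name is an example)
         sudo mkdir -p /mnt/crashed /mnt/nfs
         sudo mount -o ro /dev/vg1000/lv /mnt/crashed
         sudo mount -t nfs host-dsm:/volume1/restore /mnt/nfs

         # copy everything except the DSM-internal "@" folders (@sharesnap etc.), logging errors to a file
         sudo rsync -avh --exclude='/@*' --log-file=rsync-recover.log /mnt/crashed/ /mnt/nfs/recovered/

         # in other screen/tmux windows, watch for read errors
         tail -f rsync-recover.log | grep '^rsync:'
         sudo tail -f /var/log/syslog | grep -i btrfs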
  2. Sorry for not replying sooner. I can't use Proxmox or KVM virtualization on my VPS; that's why I posted this question. I don't think it's a bad idea. If it can be implemented, I think it may also solve the problem for DSM 6.x on Hyper-V...
  3. If you want to accelerate copying/moving files via SMB, you can try SMB multichannel by editing smb.conf manually (see the sketch below). I don't have experience with it myself, but some Synology users got double or quadruple transfer speed by aggregating 2 or 4 links this way. SMB multi channel working in 6.1 release - Synology Forum https://forum.synology.com/enu/viewtopic.php?t=128482 ; SMB 3 Multichannel support on DSM 6.2? : synology ; SMB Multichannel: How It Works & Troubleshooting Guide | Level One Techs https://level1techs.com/video/smb-multichannel-how-it-works-troubleshooting-guide Let us know the result if you try this :-)
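     For reference, the change those threads describe boils down to one standard Samba option in the [global] section (the file path on DSM and the restart method are my assumptions, so double-check them):

         # /etc/samba/smb.conf on the DSM box
         [global]
             server multi channel support = yes

     After editing, restart the SMB service (Control Panel > File Services, or just reboot). The client side also matters: multichannel needs an SMB 3 client (Windows 8/10 or newer) and either multiple NICs or an RSS-capable NIC.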
  4. Hi, thank you for posting this very useful matrix. I noticed that the working NIC for ESXi is not e1000 but e1000e in my case. Could you confirm? (kernel panic log from the serial console)
  5. DSM 6.2 Loader

     Do we need to add "disconnect all display cables from the video card" to the checklist? I read that in the post above, but recently nobody has mentioned display cables. Is that still required for transcoding to work? I have no experience with this myself, because I don't have 918+ capable hardware, but I guess some of you might have missed disconnecting the display cable.
  6. DSM 6.2 Loader

     @Tattoofreak, did you check the CPU options of your DSM VM on ESXi? I think you need to set the options below if you haven't yet: enable "Expose hardware assisted virtualization to the guest OS", and set "CPU/MMU Virtualization" to "Hardware CPU and MMU" (or try another setting). In general, these are required for nested virtualization to work (see the .vmx sketch below). # But I'm not confident, because the VDSM error message didn't point at virtualization capability. # So I suspect you've already done this...
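     If you prefer editing the .vmx file directly (with the VM powered off), these are, as far as I know, the equivalents of those GUI options; treat this as a sketch and double-check against your ESXi version:

         vhv.enable = "TRUE"                  # "Expose hardware assisted virtualization to the guest OS"
         monitor.virtual_exec = "hardware"    # CPU virtualization: hardware
         monitor.virtual_mmu = "hardware"     # MMU virtualization: hardware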
  7. Ah, really, so that screenshot says it's a "Cache Device". I didn't realize such a limitation exists for NVMe on the DS918. If NVMe comes to higher models like a DS3619(?), they will probably remove that limit. I've had no cache corruption in several years of using an SSD cache, but the heavy access does wear the SSD more, and the SSD cache doesn't accelerate sequential reads/writes. Umm, I was attracted by the huge datastore and good random-access performance for fewer than ~10 online VMs. I agree it's not efficient resource usage. (But it's not a bad job for an old SATA SSD.) I also use an SSD-only datastore (via DSM NFS) in another environment, but I should reconsider my setup. (I'm just migrating data and apps from a slow DSM 5.2 baremetal NAS to my ESXi server with a new external enclosure.) As far as I remember, several business vSphere users complained heavily about the poor performance of btrfs in the DSM 6 beta (and about Synology's support), though I'm not sure whether that was over NFS or iSCSI. That thread continued after the DSM 6 release, so I decided not to choose btrfs for a VM datastore. I googled Synology's forum again, but no luck. (It might have been deleted by Synology because of its content and because it was a beta thread.)
  8. Hi again. (Your post is very interesting to me; I think I should follow your approach.) Have you ever tried NVMe passthrough with a 918+ VM? As you know, some 918+ baremetal users have successfully set up NVMe as an SSD cache. I'd like to know whether a 918+ VM works with NVMe passthrough the way other OS VMs do. I think pRDM is a bit more cumbersome than passthrough because of the vmdk setup. (But I appreciate you sharing your experience with pRDM.) p.s. My configuration is something like below. I've read on a Synology forum that btrfs performance is very poor for VM workloads, so I run my btrfs VM on top of an ext4 filesystem. I haven't measured performance seriously, but I'm satisfied with it. With this setup you can safely upgrade the child DSM VM using snapshots, so it's easier to follow new DSM releases. It's not recommended for everyone, but it suits people who run many VMs or who value flexibility over simplicity. (Lately, though, I've been thinking this design isn't so good because of inefficient energy usage, complexity, and the fact that I can't migrate directly to a real Synology...) p.s. 2: This applies only to ESXi, but it might be possible to pass an iGPU through to the child DSM VM for transcoding (with some recent Xeon or Core i CPUs). I don't have the hardware (or budget) to try NVMe or iGPU passthrough right now, but I'd like to know if anyone has tried them. Sorry for slightly hijacking the thread...
  9. DSM 6.2 Loader

     You can use the serial console. The EliteDesk has a serial port. Buy a USB serial cable and connect the HP and another PC with it. You can then see some of the boot log and use the serial console with a terminal app like PuTTY (example below). I suspect that disabling the serial port in the BIOS might stop it from booting; I once experienced that on a VM without a serial port.
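     If the other PC runs Linux or macOS instead of Windows/PuTTY, something like this works (the USB adapter usually shows up as /dev/ttyUSB0; 115200 baud is what the loader normally uses, but check your grub.cfg):

         sudo screen /dev/ttyUSB0 115200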
  10. Wow, thank you for sharing all of this. I don't have enough time to try it now, but I want to someday soon. It's quite interesting and useful. I think it would be better to post this content in a separate thread so that at least DVR users notice it. Currently I feel I have to check every post in this forum, because thread titles are often improper or inaccurate.
  11. Very interesting. So the NanoPi Neo behaves like a virtual USB (network) mass storage device, right? I think it's an awesome idea. What device do you connect it to? Personally I don't have a good use for this right now, but someone using a TV's DVR function, or a gamer with a PS4, Xbox, etc., may be interested. Could you share more details somewhere?
  12. SHR has several drawbacks when upgrading. If you want to use SHR, I recommend reading this article carefully: Don't Roll Into Trouble When Expanding NAS Storage - SmallNetBuilder https://www.smallnetbuilder.com/nas/nas-features/32649-don-t-roll-into-trouble-when-expanding-nas-storage I stopped using SHR after I read it.
  13. I've read that the DSM 6.x boot loader has to be stored on a read-write file system. Would it be possible to make an ISO-bootable loader using cloop, tmpfs and overlayfs, the way a live CD does? (-> Building Your Own Live CD | Linux Journal) If it's possible, we could run DSM 6 on a VPS. In a VPS hosting environment we can't easily add a virtual disk. (I think that if we want to run DSM 6 there, we have to use qemu.) Is there any information that needs to persist on the "boot loader partition" during installation? # Even so, we could replace the ISO image after installation (copy it back from the vmdk of a local VM install). If so, all we have to do is trick the installer into believing it's a writable file system, isn't it? (A rough overlayfs sketch is below.) @jun or the other wizards here, what do you think? Is it difficult to make? # I think it sounds too simple, so there must be some pitfalls.
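      Just to illustrate the idea, a live-CD-style read-write view could look roughly like this (paths and image name are made up, and the real loader image contains partitions, so the lower layer would need the boot partition specifically):

          # read-only loader content as the lower layer
          mkdir -p /mnt/loader-ro /mnt/rw /mnt/loader
          mount -o loop,ro synoboot-part1.img /mnt/loader-ro
          # tmpfs as the writable upper layer (changes are lost at reboot)
          mount -t tmpfs tmpfs /mnt/rw
          mkdir -p /mnt/rw/upper /mnt/rw/work
          mount -t overlay overlay \
                -o lowerdir=/mnt/loader-ro,upperdir=/mnt/rw/upper,workdir=/mnt/rw/work /mnt/loader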
  14. I don't know how to support that, but I think it would be hard to analyze and fix DSM, keep it updated and so on, and it would have very little merit. I don't recommend using RDM for XPEnology. I think RDM's only merit is that a system whose CPU doesn't support VT-d can still get some information from the disk. It's not convenient to configure and it makes disks harder to replace: you can't swap in a new disk after a crash until you have written the RDM config for it (see the example below). I only used RDM on a very old system that didn't support VT-d. If you want to use ESXi, I recommend passing through the SATA I/O chip (onboard or PCIe) and choosing a VT-d capable CPU. My ESXi setup is something like this; it's a home workstation/server setup, not just a NAS, so I know it's not for everyone...
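      For example, on ESXi a physical RDM has to be created by hand for each disk, something like this (device and datastore names are placeholders), and you have to redo it whenever you swap a disk:

          # create a pointer vmdk for the raw disk, then attach it to the VM
          vmkfstools -z /vmfs/devices/disks/t10.ATA_____<your_disk_id> \
                     /vmfs/volumes/datastore1/rdm/disk1-rdm.vmdk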
  15. I didn't know such a limitation exists. Was it there before 5.2? I used 240GB x2 SSDs as a read/write cache for a 3TBx4 RAID5 for around 2 years, since DSM 5.2. Last year I upgraded to another system with a 1TB x2 cache, and it also works fine. I think you should check the recent documentation again; it mostly only talks about memory requirements: https://www.synology.com/en-us/knowledgebase/DSM/help/DSM/StorageManager/genericssdcache DSM automatically caches frequently used random-access areas within the cache size. (I also recommend not enabling the sequential cache, as the document says; it only runs fast for a while.) The optimal cache size depends on your workload. I'm running 10-15 VMs constantly, but my cache usage is just 45%. (Before the upgrade, cache usage was very high; I can't recall the actual number.) My SSD cache seems to be overspecced for my current workload. If we could use a virtual SSD for the SSD cache, we could share one SSD between cache and datastore, but I think that's not great in terms of either complexity or performance. I haven't investigated, but I guess a virtual SSD doesn't support some command required for the SSD cache, or returns a bad response to some command, so DSM refuses the VSSD. You should log in to the console via ssh and check the logs around the time you enable the SSD cache (see below). We might get it to work with a flag tweak around VSSD, if such flags exist... I don't have a good idea for a small form factor server like your HP Gen8; I built my system in a mid-tower PC case, since I wanted to use it both as a workstation and a server.
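      Something like this is what I mean by checking the logs (the exact log file names may differ between DSM versions):

          # on the DSM box, right after trying to enable the SSD cache
          sudo grep -i -E 'ssd|cache' /var/log/messages | tail -n 50
          sudo dmesg | grep -i -E 'ssd|cache'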