XPEnology Community

benok

Member
  • Posts: 37
  • Joined
  • Last visited
  • Days Won: 3

benok last won the day on April 4 2019

benok had the most liked content!

benok's Achievements

Junior Member (2/7)

Reputation: 6

  1. OK, I understand. But if it's not included in your (official) repository, I think it would be confusing rather than useful (it adds extra download steps and yet another tutorial variation, like m.sh). I don't want to add another unofficial method, so please forget my proposal. Sorry.
  2. Sorry, I confirmed that maxdisks is correctly set to 16 in synoboot.conf, and the 13th disk can be used. Perhaps this was my mistake. (I still don't know why the last drive of my array wasn't detected at that time.) However, the "Drive Information" figure on the overview page of Storage Manager still shows 12. Are there any known hacks to fix this?
  3. Hi, @pocopico, First, I want to say thank you for your (and many others' here :-) ) great contribution to this project. I really appreciate your work! TL;DR: How about adding a Makefile to TCRP to organize the build tasks? At first, I downloaded @Peter Suh's "m.sh" and used it a few times. I feel it's very handy for a first installation, but not well suited to repeated parameter tweaking. (Of course, that is as intended, I think.) So I extracted what m.sh does and wrote this Makefile, and I used it to upgrade 3 XPEnology VMs. It's not complex, so please look inside, but let me explain a bit. For a new install, use "make new-install" to configure, build, and install. After tweaking user_config.json manually, run "make rebuild" to update, rebuild, and install. You can also specify several partial tasks at once, like "make build install clean reboot". I'm posting this Makefile as a basis for discussion, because it was written just for me and isn't sophisticated yet. (e.g. PLATFORM could be derived from MODEL, a target name like "new-install" isn't typing friendly, and rploader.sh unnecessarily backs up several times during new-install because "yes" answers "y" every time...) I think @flyride's recent installation tutorial is a bit complex because of the variety of commands involved, and a supplemental Makefile like this could organize and simplify the build tasks. I think it's useful, but what do you think? > @pocopico, @flyride, @Peter Suh Could you consider adding a Makefile like this to TCRP? (A minimal sketch follows this post.) Makefile
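
    A minimal sketch of what I mean, assuming rploader.sh sits in the current directory. The exact rploader.sh sub-commands and the platform string are written from memory and may need adjusting for your TCRP version, and real Makefile recipes must be indented with a tab character (shown here as spaces):

        # illustrative only -- targets follow the post; rploader.sh arguments are from memory
        MODEL    ?= DS3622xs+
        PLATFORM ?= ds3622xsp-7.1.0-42661   # platform string format is an assumption

        .PHONY: new-install rebuild update serialgen satamap build install clean reboot

        new-install: update serialgen satamap build install   # first-time configure, build, install
        rebuild: update build install                         # after hand-editing user_config.json

        update:                # refresh rploader.sh and its helper files
            ./rploader.sh update now
            ./rploader.sh fullupgrade now
        serialgen:             # generate a serial number / MAC for the chosen model
            ./rploader.sh serialgen $(MODEL)
        satamap:               # detect SataPortMap / DiskIdxMap
            ./rploader.sh satamap now
        build:                 # build the loader for the chosen platform
            ./rploader.sh build $(PLATFORM)
        install:               # back up the finished loader configuration
            ./rploader.sh backup now
        clean:                 # remove build leftovers
            ./rploader.sh clean now
        reboot:
            sudo reboot
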
  4. Maybe these are the reason? BTW, I think I'll have to read rploader.sh thoroughly. I want to know all the parameters supported by user_config.json. (A quick way to start is sketched below.)
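
    Two quick ways to get an overview before reading the whole script (a sketch; it assumes jq is available in TinyCore, which rploader.sh itself relies on as far as I know):

        # list the top-level keys currently present in your config
        jq 'keys' user_config.json

        # find every place rploader.sh reads or writes user_config.json
        grep -n 'user_config.json' rploader.sh | less
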
  5. Ah, yes. I have to insert a *real* USB stick and attach it to the VM. (I didn't connect the VM to a serial console, but it seems to stop booting after the kernel loads.) OK, I'll try. But I thought rploader.sh (or my own operational mistake) didn't apply user_config.json correctly. (The default maxdisks setting was 16, which I didn't modify, but my DSM 7 VM's maxdisks was 12 and it couldn't find my 13th disk, so I have to rebuild (resync?) the array now.)
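
    For reference, my understanding (an assumption from reading tutorials and rploader.sh, so the section names may differ in your TCRP version, and the values below are only placeholders) is that maxdisks belongs in the synoinfo section of user_config.json:

        {
          "general":       { "model": "DS3622xs+", "version": "7.1.0-42661" },
          "extra_cmdline": { "SataPortMap": "1", "DiskIdxMap": "00" },
          "synoinfo":      { "maxdisks": "16" }
        }
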
  6. Thank you for this report. (Investigation? Study? I'm not sure which word is appropriate :-) ) I really respect and appreciate your contributions, as always. Yesterday, I finally upgraded my main machine (ESXi 7, LSI SAS (+ chipset SATA) passthrough, DSM 6.1.x DS3615xs) to DSM 7 DS3622xs+. (I decided to do it based on your architecture comparison table post, and on the successful update of 2 secondary DSM 6.2.x VMs before that.) Today, I tried to fix this gap problem and found that SasIdxMap doesn't work anymore. I tried many SataPortMap / DiskIdxMap tweaks and concluded that I couldn't avoid it. (I'm glad I could confirm my conclusion against your post.) Today I even tried booting from USB inside an ESXi VM. After changing the boot setting to EFI, TCRP can boot from USB. (Recent VMware versions support USB boot from the EFI firmware.) But, I don't know why, booting DSM from the TCRP boot USB (or EFI?) doesn't work :-( (I also tried Plop Boot Manager, whose USB boot (at least PBM5) works with somewhat older VMware versions, but no luck; I tried all of them.) So it seems this can't be mitigated with USB boot. I'm sure booting from USB is not the proper way, but I thought I should post my struggle today. p.s. By the way, it seems I should increase maxdisks. Is it safe to edit /etc/{,defaults/}synoboot.cfg directly? Does this persist after a DSM update? I tried to search around, but couldn't find an answer yet. (It seems to persist, but it's not clear.)
  7. I tried this before (with 2 DSM 6 / Jun's loader / LSI HBA systems) and searched around for workarounds, but no luck. So I wrote a script to confirm the mapped drive location: https://gist.github.com/benok/c117badbfb6af744578d48e2ab1d4d4f (it just periodically does heavy access to the specified drive, so you can spot it by its activity LED). But I found that one of my array enclosure's LEDs was broken (no light), so it's not entirely reliable. You should write down each drive's S/N and paste it on your caddies. I think that's a must with LSI controllers.
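
    The idea is roughly this (a sketch of the approach, not the exact gist; the device name is just an example, and iflag=direct may not be supported by every dd build):

        #!/bin/sh
        # Make one drive's activity LED blink by reading from it repeatedly.
        # Usage: ./find-drive.sh /dev/sdm
        DEV=${1:?usage: $0 /dev/sdX}
        while true; do
            # read 64 MB straight from the disk, bypassing the page cache
            dd if="$DEV" of=/dev/null bs=1M count=64 iflag=direct 2>/dev/null
            sleep 1
        done
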
  8. I've just been in trouble with a crashed btrfs volume, too. I also can't mount the volume, and Storage Manager shows the crashed volume as "0 bytes / 0 bytes". I found the page about btrfs recovery below and followed the steps described there. https://lists.opensuse.org/opensuse/2017-02/msg00930.html
    In my case too, "btrfs restore" (Step 8 ) seemed to solve the problem. I tried to restore a 2.5 TB volume (over 2 TB used), but all I could restore was about 500 GB. I followed the steps further: at Step 11 I could recover almost all chunks (only 1 chunk failed), but the chunk tree itself couldn't be recovered. I tried to restore again at Step 12, but that step didn't recover any files beyond those from Step 8. (btrfs restore doesn't overwrite already recovered files, so you can point it at the same destination to skip them.)
    ------------<excursion>-----------
    By the way, my ESXi setup is a bit complex. (Please read the linked page.) A PSOD crashed the SSD cache, and that made the "User DSM" VM's btrfs volume unmountable. This was caused by hardware trouble. (I moved my server (changed location) recently; after that, my system became unstable...) There were 3 PSODs: two didn't crash the volume at all, but one caused a severe crash of that btrfs volume and of a Win10 VM's NTFS drive. (No user files were lost, but it crashed the Win10 system drive severely!) I had to restore the Win10 VM from a backup. In my setup the disk is virtualized, so I could take a snapshot and try any btrfs recovery without risk. (I also copied the 2.5 TB ext4 vmdk to another volume; that's not required, though.) Umm, but there are several trade-offs, and I might not choose the same setup if I were to build another...
    ------------</excursion>-----------
    I had almost given up, but then I found an answer on Ask Ubuntu suggesting a newer kernel's btrfs. I confirmed with 'uname -a' that DSM 6.1's kernel is 3.10. So I tried an Ubuntu live CD (18.04 LTS), and it magically mounted my crashed btrfs volume! I could restore almost all files from the crashed volume without losing metadata. (With 'btrfs restore', all the metadata such as permissions and timestamps is gone.) I confirmed that 'btrfs chunk-recover' plus mounting with a newer btrfs stack is the key to recovering from my kind of crash. (I rolled back to the initial state and tried to recover from scratch using the Ubuntu live CD only, but I couldn't mount the volume without the 'btrfs chunk-recover' from Step 11. I also tried Step 14 without Step 11; that failed too.)
    I recovered using rsync, from the crashed btrfs volume to an NFS-exported "Host DSM" volume (ext4). You need to exclude the folders starting with "@" (see the sample below this post). Most important is excluding "@sharesnap": it holds snapshot backups with many duplicated files, and it can't be restored if you took many snapshots. Install GNU screen or tmux to make the session detachable and to have several work windows. (Without that, you can't shut down the PC you're working from while the restore runs.) You can monitor read errors with "tail -f rsync-recover.log | grep '^rsync:'" (rsync) and "sudo tail -f /var/log/syslog | grep -i btrfs" (btrfs side). (I used script(1) to save the error log, because tee didn't work; something seems to block the output. The error log is important for finding which files are unreliable.)
    I felt relieved when I found I could recover almost all the files. I really should follow the "3-2-1 backup" rule, though :-)
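
    A sketch of the rsync invocation I mean (the source and destination paths are placeholders; adjust the exclude list to your shares, the key point being to skip the @-prefixed system folders, especially @sharesnap):

        rsync -avH --log-file=rsync-recover.log \
            --exclude='@sharesnap' --exclude='@eaDir' --exclude='@*' \
            /mnt/crashed-volume/ /mnt/host-dsm-nfs/

    Run it inside screen or tmux, as mentioned above, so the transfer survives a dropped session.
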
  9. I'm sorry for not replying. I cannot use Proxmox or KVM virtualization on my VPS; that's why I posted this question. I think it's not a bad idea. If it can be implemented, it might also solve the problem for DSM 6.x on Hyper-V...
  10. If you want to accelerate copying/moving files via SMB, you can try SMB multichannel by editing smb.conf manually. I have no experience with it myself, but some Synology users got 2x or 4x the transfer speed by aggregating 2 or 4 links this way. SMB multi channel working in 6.1 release - Synology Forum https://forum.synology.com/enu/viewtopic.php?t=128482 SMB 3 Multichannel support on DSM 6.2? : synology SMB Multichannel: How It Works & Troubleshooting Guide | Level One Techs https://level1techs.com/video/smb-multichannel-how-it-works-troubleshooting-guide Let us know the result if you try this :-)
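
    As far as I know, the relevant smb.conf setting is the one below (assuming DSM's Samba build is new enough to support it; I haven't tried it myself):

        [global]
            # SMB3 multichannel (still marked experimental in older Samba releases)
            server multi channel support = yes
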
  11. Hi, thank you for posting this very, very useful matrix. I noticed that the working NIC for ESXi is not e1000 but e1000e for me. Could you confirm? (kernel panic log from serial console)
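
    For reference, my understanding is that the adapter type for the first NIC ends up in the VM's .vmx file as the line below (normally you would just pick the adapter type in the vSphere UI):

        ethernet0.virtualDev = "e1000e"
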
  12. benok

    DSM 6.2 Loader

    Do we need to add "disconnect all display cables from the video card" to the checklist? I read that in a post above, but recently nobody has mentioned display cables. Is that still required for transcoding to work? I have no experience with this because I don't have 918+-capable hardware, but I guess some of you might be missing the display cable disconnection.
  13. benok

    DSM 6.2 Loader

    @Tattoofreak, did you check the CPU options of your DSM VM on ESXi? I think you have to set the options below if you haven't yet. Enable "Expose hardware assisted virtualization to the guest OS". Set "CPU/MMU Virtualization" to "Hardware CPU and MMU" (or try another setting). Generally, those are necessary for nested virtualization to work. # But I'm not confident, because the VDSM error message didn't point to a missing virtualization capability. # So I think you probably did this already...
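
    For reference, my understanding is that those two UI options correspond roughly to the .vmx entries below (the UI is the supported way to change them, so treat this only as a way to double-check):

        vhv.enable = "TRUE"                 # Expose hardware assisted virtualization to the guest OS
        monitor.virtual_exec = "hardware"   # CPU virtualization: hardware
        monitor.virtual_mmu = "hardware"    # MMU virtualization: hardware
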
  14. Ah, really, so that screenshot says it's a "Cache Device". I didn't notice that such a limitation exists for NVMe on the DS918+. If NVMe comes to higher models like a DS3619(?), they would probably remove such a limit. I have had no cache corruption in several years of using an SSD cache, but the heavy access does wear the SSD more, and an SSD cache doesn't accelerate sequential read/write. Umm, I was attracted by the huge datastore and the good random access performance for fewer than ~10 online VMs. I agree it's not efficient resource usage. (But it's not such a bad job for an old SATA SSD.) I also use an SSD-only datastore (via DSM NFS) in another environment, but I should reconsider my setup. (I'm just now migrating data & apps from a slow DSM 5.2 bare-metal NAS to my ESXi server with a new external enclosure.) As far as I remember, several business vSphere users complained heavily about the bad performance of btrfs during the DSM 6 beta (and about Syno's support), but I'm not sure whether that was over NFS or iSCSI. The thread continued after the DSM 6 release, so I decided not to choose btrfs for a VM datastore. I googled Syno's forum again, but no luck. (It might have been deleted by Synology because of its content and because it was a beta thread.)
  15. Hi again. (Your post is very interesting to me. I think I should follow you.) Have you ever tried NVMe passthrough with a 918+ VM? As you know, some 918+ bare-metal users have successfully set up NVMe as an SSD cache. I wanted to know whether a 918+ VM can work with NVMe passthrough like other OSes' VMs. I think pRDM is a bit more cumbersome than passthrough because of the vmdk setup. (But I appreciate you sharing your experience with pRDM.) p.s. My configuration is something like below. I've read on some Synology forum that btrfs performance is very bad for VM workloads, so I put my btrfs VM on top of an ext4 filesystem. I haven't measured performance seriously, but I'm satisfied with it. With this setup you can safely upgrade the child DSM VM using a snapshot, so it's easier to follow new DSM releases. It's not recommended for everyone, but rather for those who use many VMs or who love flexibility more than simplicity. (But recently I've been thinking this design is not so good, because of inefficient energy usage, complexity, and not being able to migrate directly to a real Synology...) p.s. 2 It's only relevant to ESXi, but it might be possible to pass through an iGPU for transcoding to the child DSM VM (with some recent Xeon or Core i CPUs). I don't have the hardware (or budget) to try NVMe or iGPU passthrough now, but I'd like to know if someone has tried those. Sorry for hijacking the thread a bit...