Balrog

Members
  • Content Count

    125
  • Joined

  • Last visited

  • Days Won

    1

Balrog last won the day on December 3 2017

Balrog had the most liked content!

Community Reputation

15 Good

About Balrog

  • Rank
    Advanced Member


  1. Today I worked a little bit on the WireGuard setup and changed the first setup from the plain "docker run" command to "docker-compose". I have attached the working "docker-compose.yml", as I had some struggles with the current syntax for the host network mode. Just copy the file to the same directory as the wg0.conf (for me it's /volume1/docker/wireguard). Afterwards you can use the following commands to start and stop the container: # cd /volume1/docker/wireguard # docker-compose up -d Building with native build. Learn about native build in Compose here:
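The attached docker-compose.yml is not reproduced in the post itself; a minimal sketch of what such a file could look like, assuming the linuxserver/wireguard image and the /volume1/docker/wireguard path mentioned above (the image name and mount layout are assumptions, not the author's exact file):

```yaml
version: "3"
services:
  wireguard:
    image: linuxserver/wireguard   # assumed image; use the one from this thread
    container_name: wireguard
    network_mode: host             # the host-network-mode syntax discussed above
    cap_add:
      - NET_ADMIN                  # WireGuard needs to manage network interfaces
    volumes:
      - /volume1/docker/wireguard:/config   # directory containing wg0.conf
    restart: unless-stopped
```

With a file like this in place, `docker-compose up -d` from that directory starts the container and `docker-compose down` stops it.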
  2. Hi Ferno! As I wrote: - if the temperature of the NVMe SSD is at 310 K (36.9 °C), the QNAP card is quiet and comfortable. - if the temperature of the NVMe goes up to about 317 K (43.9 °C), the little fan on the QNAP card raises its rpm and has an annoying high-frequency noise. - with the additional Noctua I can keep the temperature of the NVMe at about 307 K (32.9 °C) pretty much the whole time, and the Noctua has a much warmer and more pleasant sound character than the QNAP fan. I have to say that QNAP is not to blame: it is the little space between th
  3. Hi @sebg35! I think I have to make some points clearer: - I use the 2 TB NVMe SSD as local storage for the ESXi host, as I have a bunch of VMs and not only the XPEnology VM itself - I use a 256 GByte vmdk on the NVMe as "volume1" (without redundancy and formatted with btrfs) in XPEnology, e.g. for Docker containers - I do regular backups of the XPEnology VM with Veeam, so I have some sort of redundancy/backup of "volume1" (not in real time, but that's okay for me as the data of my Docker containers does not change that much) - the main data is on the 4 x 14 TB HDDs in RAID10. The risc
  4. As promised, here are some notes I chose to keep. Enable passthrough of the onboard Cannon Lake AHCI controller: - enable SSH access in ESXi and log in as root - edit the file "/etc/vmware/passthru.map" (it is NOT "passthrough.map"): vi /etc/vmware/passthru.map - add this at the end of the file: # Intel Cannon Lake PCH-H Controller [AHCI mode] 8086 a352 d3d0 false - reboot ESXi - log in to the ESXi GUI and enable passthrough of the Cannon Lake AHCI controller - reboot ESXi again - Now you are able to attach the Cannon L
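The passthru.map edit above can also be done without vi; a sketch as shell commands, assuming the ESXi shell. On the real host the file is /etc/vmware/passthru.map; the variable default lets you try the sketch against a local copy first:

```shell
# Append the Cannon Lake AHCI entry from the post to passthru.map.
# On ESXi: run with PASSTHRU_MAP=/etc/vmware/passthru.map as root.
PASSTHRU_MAP="${PASSTHRU_MAP:-./passthru.map}"
printf '%s\n' \
  '# Intel Cannon Lake PCH-H Controller [AHCI mode]' \
  '8086 a352 d3d0 false' >> "$PASSTHRU_MAP"
# Show the lines that were just appended
tail -n 2 "$PASSTHRU_MAP"
```

After rebooting, the controller shows up as toggleable for passthrough in the ESXi GUI, as described above.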
  5. I never used raw device mapping for XPEnology. There is a little trick to enable the possibility of the passthrough. I will have a look at my notes later and write it here. And yes: I am very satisfied with the 2236. It has lots of power (for a home solution, of course). But the MicroServer Gen10+ is at its power supply limits with this kind of CPU, 64 GB RAM, the QNAP card with NVMe SSD and 4 HDDs. But I never run any kind of hard disk benchmark in parallel with a prime95 benchmark over all CPU cores for hours, so this is no problem for me. It is a pretty little powerful server and till now very
  6. Hi ferno, I have done a passthrough of the whole onboard SATA AHCI controller to the XPEnology VM. The local ESXi storage is a 2 TB NVMe SSD on a QNAP QM2-2P10G1TA. Not the fastest solution for the NVMe, but I get about 6.6 Gbit/s via iperf3, which is more than the RAID10 of 4 HDDs is able to deliver. I must add that I had to cut a hole into the right side of the case to be able to attach an additional Noctua fan (otherwise the fan of the QNAP card makes an annoying high-pitched noise). With the additional Noctua fan the MicroServer is cool and silent enough for me.
  7. I have a MicroServer Gen10+ with a Xeon E-2236 CPU under ESXi 7.0b. For me a VM with 4 CPUs as a 918 works fine and fast, with all cores in use. So maybe it is an issue with bare-metal installations.
  8. I have played around with deduplication and btrfs in the past, even with good results. But as dedup & encryption are not compatible, and one must also never run a defragmentation or the deduplication will be undone, I deleted my tests. Deduplication is a very interesting feature for me, but only if it is supported neatly.
  9. I have some news about other log entries which are annoying and useless: The open-vm-tools package logs this every 30 seconds: root@ds:~# tail -f /var/log/vmware-vmsvc.log [Oct 09 19:50:17.681] [ warning] [vmsvc] HostinfoOSData: Error: no distro file found [Oct 09 19:50:17.681] [ warning] [guestinfo] Failed to get OS info. [Oct 09 19:50:17.683] [ warning] [vmsvc] HostinfoOSData: Error: no distro file found Solution: root@ds:~# cat /proc/version > /etc/release If this file exists, the open-vm-tools are happy and do not throw the error anymore.
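The one-liner above as a safe-to-try sketch: it creates the "distro file" that open-vm-tools is looking for. On DSM you would run it as root with RELEASE_FILE=/etc/release; the default writes a local copy so the sketch is harmless to test elsewhere:

```shell
# Create a "distro file" from the kernel version string so open-vm-tools
# stops logging "no distro file found". On DSM: RELEASE_FILE=/etc/release.
RELEASE_FILE="${RELEASE_FILE:-./release}"
cat /proc/version > "$RELEASE_FILE"
# Show what open-vm-tools would now read
cat "$RELEASE_FILE"
```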
  10. Hello! I just expanded a vmdk in a virtual XPEnology (a simple basic volume). The instructions work very well apart from two little differences: 1. After "mdadm --grow /dev/md2 --size=max" I had to use "lvextend -l +100%FREE /dev/vg1/volume_1" to grow the volume itself. 2. "btrfs filesystem resize max ..." did not work for me; I had to use "Storage Manager - Volume - Action - Expand" in DSM itself instead. But after all, the expanding works very well. Thank you very much for the information!
  11. You can't create a snapshot of a powered-on virtual machine if a passthrough controller is attached to it. You must first shut down the VM; then you are able to create a snapshot.
  12. So I have now built your setup myself. Everything went well and only one step was missing; I used some info from this article to get the missing pieces: https://www.stavros.io/posts/how-to-configure-wireguard/ After adding these two lines to the server configuration file I am able to access my LAN and also DSM via my smartphone over WireGuard! I think the procedure itself will run fine even on a Raspberry Pi, as Docker is also available there. Thank you very much again for this fine idea. Edit: I was building
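The two lines themselves are not quoted in the post. For readers following along, the linked stavros.io article adds forwarding/NAT rules of roughly this shape to the `[Interface]` section of the server's wg0.conf — this is an assumption about which lines were meant, and the interface name `eth0` is a placeholder that depends on the host:

```ini
# Hypothetical wg0.conf fragment (server side), in the style of the linked
# article. "%i" expands to the WireGuard interface; "eth0" is an assumed
# LAN-facing interface name — adjust for your host.
[Interface]
PostUp = iptables -A FORWARD -i %i -j ACCEPT; iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
PostDown = iptables -D FORWARD -i %i -j ACCEPT; iptables -t nat -D POSTROUTING -o eth0 -j MASQUERADE
```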
  13. So I just got the time to read through all the information: thank you very much for this!! By the way: we have a similar backup routine for backing up "real" Linux machines (I also use Veeam Linux Free). For now my only issue is that Veeam currently does not support LUKS2 encryption (only LUKS1). But this is much off topic and only a matter of time.
  14. The first thing that comes to my mind while reading your text is: "check speed and duplex settings!". Sometimes there is a mismatch between the NIC and the connected network switch regarding these settings, which results in exactly the issues you are describing. The default setting in DSM is "auto", but maybe it helps if you set speed/duplex fixed to "1000 Mbit/full duplex". You can find this in the NIC settings of DSM. Before testing this I would suggest you configure the second NIC as a fallback with its own fixed IP, in case the change doesn't work and you lose your connection completely.