Balrog

Members

  • Content Count: 130
  • Joined
  • Last visited
  • Days Won: 1

Balrog last won the day on December 3, 2017

Balrog had the most liked content!

Community Reputation: 17 Good

About Balrog

  • Rank: Advanced Member


  1. Ahhh, v0.5.4 did the trick! Thank you very much for the hint! Now the loader will compile. I only get some warnings:
     ... include/linux/kfifo.h:390:37: warning: initialization of 'unsigned char' from 'int *' makes integer from pointer without a cast [-Wint-conversion]
     ... /opt/redpill-lkm/config/cmdline_delegate.c:405:74: warning: value computed is not used [-Wunused-value]
     ... /opt/redpill-lkm/config/runtime_config.c:168:53: warning: passing argument 2 of 'validate_nets' discards 'const' qualifier from pointer target type [-Wdiscarded-qualifiers]
     I will be able to tes
  2. I am currently not able to build the Apollolake 7.0 image for #41890:
     ... > [stage-1 3/9] RUN git clone https://github.com/RedPill-TTG/redpill-lkm.git -b master /opt/redpill-lkm && git clone https://github.com/jumkey/redpill-load.git -b 7.0-41890 /opt/redpill-load:
     #6 0.438 Cloning into '/opt/redpill-lkm'...
     #6 1.081 Cloning into '/opt/redpill-load'...
     #6 1.426 fatal: Remote branch 7.0-41890 not found in upstream origin
     ------
     executor failed running [/bin/sh -c git clone ${REDPILL_LKM_REPO} -b ${REDPILL_LKM_BRANCH} ${REDPILL_LKM_SRC} && git clone ${R
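     Since the build fails because the branch is missing upstream, one quick check (standard git, nothing repo-specific assumed) is to list the branches the remote actually offers before kicking off the build:

         git ls-remote --heads https://github.com/jumkey/redpill-load.git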
  3. Hello everyone! I have never read such a good thread before!! Absolutely awesome! Thanks to @ThorGroup, @haydibe, @flyride, @mcdull and all others I forgot for the very good information! I am very busy at work, so I can't work much in here, but I read a lot. I spun up an Ubuntu 20.04 VM via Vagrant under VMware Workstation (under Windows) and, with the help of the Docker setup from @haydibe, was able to build the loader (Apollolake, 6.2.4 and 7.0) in a few minutes. (Testing of the loaders is not done yet due to lack of time.) If anyone is i
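     A minimal sketch of the VM setup described above; the box name is an assumption, and the post used the VMware Workstation provider rather than the VirtualBox default:

         vagrant init ubuntu/focal64   # Ubuntu 20.04 base box (assumed name)
         vagrant up                    # create and boot the VM
         vagrant ssh                   # log in, then run the Docker-based build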
  4. So, I just went the lazy route:
     - Uninstall Docker 20.10.3-0552 without wiping the config and images
     - Manually download the old Docker package: https://global.download.synology.com/download/Package/spk/Docker/18.09.0-0519/Docker-x64-18.09.0-0519.spk
     - Install Docker 18.09.0-0519
     - All containers including WireGuard are up and running
     For the long term I have to move WireGuard to another machine, but for now everything is running fine.
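     The download step from the list above can also be done over SSH on the DSM box (wget ships with DSM); the install itself then goes through Package Center -> Manual Install:

         wget https://global.download.synology.com/download/Package/spk/Docker/18.09.0-0519/Docker-x64-18.09.0-0519.spk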
  5. I just updated the Docker package in DSM to 20.10.3-0552. Now the WireGuard Docker container won't run anymore:
     root@bla:/volume1/docker/wireguard# docker-compose up
     Removing wireguard
     Starting c3e3ba4b8f24_wireguard ... error
     ERROR: for c3e3ba4b8f24_wireguard Cannot start service wireguard: OCI runtime create failed: sysctl "net.ipv4.ip_forward" not allowed in host network namespace: unknown
     ERROR: for wireguard Cannot start service wireguard: OCI runtime create failed: sysctl "net.ipv4.ip_forward" not allowed in host network namespace: unknown
     ERROR: Encount
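     A hedged note on the error above: with host networking the container shares the host's network namespace, so newer Docker versions refuse per-container sysctls there. A common workaround (an assumption, not taken from this thread) is to set the sysctl on the host and remove the sysctls: entry from the compose file:

         sysctl -w net.ipv4.ip_forward=1   # run as root on the DSM host; not persistent across reboots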
  6. Today I worked a little bit on the WireGuard setup and changed the first setup from the normal "docker run" command to "docker-compose". I attach the working "docker-compose.yml", as I had some struggles with the current syntax for the host network mode. Just copy the file to the same directory as the wg0.conf (for me it's /volume1/docker/wireguard). Afterwards you can use the following commands to start and stop the container:
     # cd /volume1/docker/wireguard
     # docker-compose up -d
     Building with native build. Learn about native build in Compose here:
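     The excerpt cuts off after the start command; the matching stop command in standard Compose (not shown in the post itself) is:

         docker-compose down   # stop and remove the container defined in docker-compose.yml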
  7. Hi Ferno! As I wrote:
     - If the temperature of the NVMe SSD is at 310 K (36.9°C), the QNAP card is quiet and comfortable.
     - If the temperature of the NVMe goes up to about 317 K (43.9°C), the little fan on the QNAP card ramps up its rpm and has an annoying high-frequency noise.
     - With the additional Noctua I can keep the temperature of the NVMe at about 307 K (33.9°C) pretty much the whole time, and the Noctua has a much warmer and more pleasant sound character than the QNAP fan.
     I have to say that QNAP is not to blame: it is the little space between th
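     For reference, the Kelvin figures above convert via:

         °C = K - 273.15   (e.g. 310 K - 273.15 = 36.85 ≈ 36.9°C)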
  8. Hi @sebg35! I think I have to make some points more clear:
     - I use the 2 TB NVMe SSD as local storage for the ESXi host, as I have a bunch of VMs and not only the XPEnology VM itself.
     - I use a 256 GB part of the NVMe as a vmdk for "volume1" (without redundancy and formatted with btrfs) in XPEnology, e.g. for Docker containers.
     - I regularly back up the XPEnology VM with Veeam, so I have some sort of redundancy/backup of "volume1" (not in real time, but it's okay for me as the data of my Docker containers does not change that much).
     - The main data is on the 4 x 14 TB HDDs in RAID10. The risc
  9. As promised, here are some notes I have chosen to remember.
     Enable passthrough of the onboard Cannon Lake AHCI controller:
     - Enable SSH access in ESXi and log in as root.
     - Edit the file "/etc/vmware/passthru.map" (it is NOT "passthrough.map"):
       vi /etc/vmware/passthru.map
     - Add this at the end of the file:
       # Intel Cannon Lake PCH-H Controller [AHCI mode]
       8086 a352 d3d0 false
     - Reboot ESXi.
     - Log in to the ESXi GUI and enable passthrough of the Cannon Lake AHCI controller.
     - Reboot ESXi again.
     - Now you are able to attach the Cannon L
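     The same edit can be scripted over SSH instead of using vi (a sketch, with a backup first; printf is available in ESXi's busybox shell):

         cp /etc/vmware/passthru.map /etc/vmware/passthru.map.bak
         printf '%s\n' '# Intel Cannon Lake PCH-H Controller [AHCI mode]' '8086 a352 d3d0 false' >> /etc/vmware/passthru.map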
  10. I never used raw device mapping for XPEnology. There is a little trick to enable passthrough; I will have a look at my notes later and write it up here. And yes: I am very satisfied with the 2236. It has lots of power (for a home solution, of course). But the Microserver 10+ is at its power-supply limits with this kind of CPU, 64 GB of RAM, the QNAP card with NVMe SSD and 4 HDDs. But I never run a hard disk benchmark in parallel to a prime95 benchmark over all CPU cores for hours, so this is no problem for me. It is a pretty powerful little server and till now very
  11. Hi ferno, I have done a passthrough of the whole onboard SATA AHCI controller to the XPEnology VM. The local ESXi storage is a 2 TB NVMe SSD on a QNAP QM2-2P10G1TA. Not the fastest solution for the NVMe, but I get about 6.6 Gbit/s via iperf3, which is more than the RAID10 of 4 HDDs is able to deliver. I must add that I had to cut a hole in the right side of the case to be able to attach an additional Noctua fan (otherwise the fan on the QNAP card has an annoying high-pitched noise). With the additional Noctua fan the Microserver is cool and silent enough for me.
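     For reference, a throughput measurement like the 6.6 Gbit/s figure above can be taken like this (hostnames are placeholders; iperf3 must be installed on both ends):

         iperf3 -s            # on the XPEnology/DSM side (server)
         iperf3 -c <dsm-ip>   # on the client; reports throughput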
  12. I have a Microserver Gen 10+ with a Xeon E-2236 CPU under ESXi 7.0b. For me a VM with 4 CPUs as a 918 works fine and fast with all cores. So maybe it is an issue with bare-metal installations.
  13. I have played around with deduplication and btrfs in the past, even with good results. But as dedup & encryption are not compatible, and one must also never run a defragmentation or the deduplication will be destroyed, I deleted my tests. Deduplication is a very interesting feature for me, but only if it's supported properly.
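     One way such an offline btrfs dedup experiment can be run is with duperemove; this tool is an assumption here, it is not part of DSM and would need to be installed separately:

         duperemove -dhr /volume1/testdata   # -d deduplicate, -h human-readable sizes, -r recurse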
  14. I have some news about other log entries which are annoying and useless. The open-vm-tools package logs this every 30 seconds:
     root@ds:~# tail -f /var/log/vmware-vmsvc.log
     [Oct 09 19:50:17.681] [ warning] [vmsvc] HostinfoOSData: Error: no distro file found
     [Oct 09 19:50:17.681] [ warning] [guestinfo] Failed to get OS info.
     [Oct 09 19:50:17.683] [ warning] [vmsvc] HostinfoOSData: Error: no distro file found
     Solution:
     root@ds:~# cat /proc/version > /etc/release
     If this file exists, the open-vm-tools are happy and do not throw any errors anymore.
  15. Hello! I just expanded a vmdk in a virtual XPEnology (a simple basic volume). The instructions work very well, besides two little differences:
     1. After "mdadm --grow /dev/md2 --size=max" I had to use "lvextend -l +100%FREE /dev/vg1/volume_1" to grow the volume itself.
     2. "btrfs filesystem resize max ..." did not work for me. I had to use "Storage Manager - Volume - Action - Expand" in DSM itself instead.
     But after all, the expansion works very well. Thanks very much for the information!
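     Put together, the working sequence from the two notes above (device names /dev/md2 and /dev/vg1/volume_1 are taken from this post; adjust to your own layout):

         mdadm --grow /dev/md2 --size=max          # grow the RAID device to the new vmdk size
         lvextend -l +100%FREE /dev/vg1/volume_1   # grow the LVM logical volume into the free space
         # "btrfs filesystem resize max" did not work here; expand via DSM instead:
         # Storage Manager -> Volume -> Action -> Expand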