XPEnology Community

haydibe


Everything posted by haydibe

  1. The equipment is not the issue: I have Connexant 10Gbps NICs and a managed 10Gbps switch that supports an MTU of 9k, which is why I was wondering why it is not available in DSM7. With vmxnet3 and Jun's loader on 6.2.1 it works like a charm, and the same is true for every other Linux VM I am running with virtio drivers. So the problem must be specific to DSM7 on DS918+ somehow... I still need to spin up a DS3615xs and test, and also test whether it behaves differently on DS918+ with a vmxnet3 interface.
  2. It's most likely a problem as @paro44 pointed out a couple of times: DiskIdxMap, SataPortMap and SasIdxMap are relevant when the drive order needs fixing or expected drives are missing.
  3. @devid79 The picture does not tell how many controllers are found. Please share the output of these three commands:
     lspci -k | grep -EiA2 'SATA|SCSI'
     ls /dev/sd?
     cat /proc/scsi/sg/devices
  4. There is no need to perform manual changes in grub.cfg. Just do what @shibby wrote about the setting in your user_config.json (it needs to be set BEFORE the image is built!). But this time try different values. This setting IS system specific: declare as many `SataPortMap` slots as you have controllers in your system (shibby's example shows 1 controller with 4 ports; u357's example shows two controllers, one with 4 ports and a second with 2 ports) and adjust the position of the first hard disk on each controller using `DiskIdxMap` (shibby's example starts with Disk 1 (hexadecimal representation of 0) on the single 4-port controller, which makes the disks of the next existing controller start from 5; u357's example starts with Disk 14 (hex representation of 13) on the first 4-port controller and Disk 1 (hex representation of 0) on the second controller with 2 ports). The DiskIdxMap is the two-digit hex representation of "drive number - 1" for the first disk of each controller. The SataPortMap is a single decimal digit per controller, representing its number of ports. Make sure both values are in sync: if you declare 3 controllers in SataPortMap, make sure to declare 3 DiskIdxMap entries as well. I am quite sure that a short forum search will provide the right documentation that explains SataPortMap and DiskIdxMap in detail.
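For reference, u357's two-controller example described above would translate into a user_config.json fragment roughly like the sketch below. The `extra_cmdline` section name and the exact values are assumptions here; adapt both to your own controllers and check against your redpill-load fork's documentation:

```json
{
  "extra_cmdline": {
    "SataPortMap": "42",
    "DiskIdxMap": "0D00"
  }
}
```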
  5. Hmm. I only see 1300, 1400 and 1500 in the MTU value list. If I enter any value above 1500, it instantly results in an error indicating that the MTU must be between 1300 and 1500. Is it because VirtIO simply lacks that feature (even though I can use it on various other Linux VMs without issues), or is it because Synology withholds jumbo frames from its consumer grade products?! @WiteWulf thank you for sharing the details
  6. This one is easy to spot: your user_config.json is invalid. You need to add a comma after mac1, mac2 and mac3 for it to be valid JSON. Also, you need to make sure the MAC addresses are unique, which they are not right now.
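A quick way to catch this class of mistake before building is to run the file through any JSON parser. The sketch below assumes python3 is available on the build host; the deliberately broken sample file (missing comma after mac1) stands in for the real user_config.json:

```shell
# Create a sample file with the exact mistake described above
# (in practice, point the check at your real user_config.json).
cat > /tmp/user_config.json <<'EOF'
{
  "extra_cmdline": {
    "mac1": "0011322CA785"
    "mac2": "0011322CA786"
  }
}
EOF

# python3 -m json.tool fails on invalid JSON and reports the position.
if python3 -m json.tool /tmp/user_config.json > /dev/null 2>&1; then
  echo "valid JSON"
else
  echo "INVALID JSON - check for missing commas"
fi
```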
  7. @WiteWulf Thanks for checking! Did you have to enable jumbo frames somewhere? Now I am curious why it's not working for me. I wonder if it's Proxmox related?
  8. Did anyone succeed in using an MTU > 1500 with DSM7? My apollolake 7.0.0 is locked between 1300 and 1500. Does it work on bromolow and/or 7.0.1?
  9. Yep, with the built-in forced extension. Works like a charm.
  10. No passthrough, PVE uses the NIC directly. All my VMs use virtio vNICs. The RP DS918+ DSM7 VM uses the built-in virtio driver provided by RP. Speed among VMs with virtio vNICs on the same PVE host is about 25Gbps. Across nodes, it is limited by the 10Gbps connection.
  11. I get stable (close to) 10Gbps with DSM7 on Proxmox 7 with a Mellanox ConnectX-3 card. Though, I only tested it with DS918+.
  12. No problem, I can only repeat myself: it is as open source as it can get. Do whatever you like with it, I don't claim any ownership; in fact, if you push it to GitHub, I would like to ask you not to add any reference to my nick. I am not really keen on having a "pet project" putting my day-to-day job at risk.
  13. @seanone the output of `fdisk -l` is missing the device /dev/synoboot with its three partitions. How are you booting from the bootloader image? Boot from SATA? Boot from a USB stick? Boot from a virtual USB device? You might want to check the logs on serial port 0. I use Proxmox with a virtual USB device. The PID and VID you use match the default values used with Proxmox. Not sure if unRAID uses the same. If you use a USB stick, you must identify and use the VID/PID of the actual stick. If you use SATA DOM boot, the VID/PID should be irrelevant, but you need to make sure that you specifically pick the SATA boot entry in grub when starting the VM.
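When booting from a real USB stick, the VID/PID can be read from `lsusb` output; the grub.cfg expects them as 0x-prefixed hex values. The sketch below extracts them from a made-up example line (the device name and IDs are illustrative; on your system, run `lsusb` and pick the line for your stick):

```shell
# Hypothetical lsusb output line for a USB stick (illustrative values):
line='Bus 001 Device 004: ID 090c:1000 Samsung Flash Drive'

# The "ID vvvv:pppp" token carries vendor id and product id.
vid=$(echo "$line" | sed -E 's/.*ID ([0-9a-f]{4}):([0-9a-f]{4}).*/\1/')
pid=$(echo "$line" | sed -E 's/.*ID ([0-9a-f]{4}):([0-9a-f]{4}).*/\2/')

# These are the values to put into grub.cfg:
echo "set vid=0x$vid"
echo "set pid=0x$pid"
```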
  14. It is like in every good marriage: you have been doing the same things for years, but suddenly they are wrong 😆
  15. Is it possible those drives or other components have a high energy drain? You might want to check if the drives are connected evenly across the power cables. A too-weak power supply can also cause trouble. I had two WD Red 10TB disks with a high `drive reconnection count`, one at just over 20k and the other at just over 40k. The numbers were reached within a timespan of a single day. After replacing the power supply, the numbers have remained static. I must admit, I didn't like it much that 2 drives of a 4-drive array went dark. As 3 drives had the same mdadm event id, I could re-join the array without data loss. In my case the 290W power supply was simply not sufficient for the Xeon E3-1275v5 CPU + LSI HBA + 8 drives when the CPU was going full steam.
  16. You are overriding more settings manually in the "-args" settings than I do. I prefer to keep my configuration as clean and simple as possible, and as complicated as necessary. Observations: I am not limiting the CPU to a specific architecture, I just use the host CPU as is, so as not to artificially deactivate features of the CPU. Even though my config lacks the seabios setting for the BIOS, it still applies, as it is the default. The next thing that catches the eye is that I use the q35 chipset, while you probably use the i440fx? I, for instance, do not add the `nec-usb-xhci` USB 3.1 controller, I simply use the already existing "ehci.0" USB 2.0 controller. Jun injects the NEC USB 3.1 drivers, so this shouldn't be the issue. Your USB drive has the additional attribute "removable=off" and you set an id for that drive (which I don't, as the handle is not required for anything later on). I have never added a network driver using args and as such lack the experience to judge the validity of your configuration for the network part. Since you define a MAC, bus and addr, I would expect that it "nails down" possible dynamic aspects, but again: I have no idea if it does. Also, I don't have hotplug configured at all and I did not specifically set any USB-related stuff in the UI. I really just hook into the default ehci device, declare the boot drive and attach it to the ehci bus. Probably you are right and I added the extra.lzma of IG-88; I am not entirely sure, though it would make sense, as I am sure I at least tried to use a virtio NIC instead. But for whatever reason I ended up using the vmxnet3 driver. It must have had problems with the virtio driver; I don't remember. All my three nodes have a 10Gbps NIC. Of course I was going to aim for a 10Gbps-capable interface in XPE as well.
  17. @titoum: I have some notes on your approach: Instead of doing this: You can simply download the file from jumkey's repo and put it e.g. in the rp-helper folder, then use the custom_bind_mounts as mentioned earlier. The intended approach for rp-helper v0.12 is to create a custom_config.json for your custom configurations. There is no need to butcher the global_config.json... On Linux, you can get the file with `wget https://raw.githubusercontent.com/jumkey/redpill-load/develop/bundled-exts.json -O bundled-exts.json` in your rp-helper folder and use a custom_config.json like this:

      {
        "docker": {
          "use_custom_bind_mounts": "true",
          "custom_bind_mounts": [
            {
              "host_path": "bundled-exts.json",
              "container_path": "/opt/redpill-load/bundled-exts.json"
            }
          ]
        },
        "build_configs": [
          {
            "id": "bromolow-7.0.1-42218",
            "platform_version": "bromolow-7.0.1-42218",
            "user_config_json": "bromolow_user_config.json",
            "redpill_lkm_make_target": "test-v7",
            "docker_base_image": "debian:8-slim",
            "compile_with": "toolkit_dev",
            "downloads": {
              "kernel": {
                "url": "https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/25426branch/bromolow-source/linux-3.10.x.txz/download",
                "sha256": "18aecead760526d652a731121d5b8eae5d6e45087efede0da057413af0b489ed"
              },
              "toolkit_dev": {
                "url": "https://sourceforge.net/projects/dsgpl/files/toolkit/DSM7.0/ds.bromolow-7.0.dev.txz/download",
                "sha256": "a5fbc3019ae8787988c2e64191549bfc665a5a9a4cdddb5ee44c10a48ff96cdd"
              }
            },
            "redpill_lkm": {
              "source_url": "https://github.com/RedPill-TTG/redpill-lkm.git",
              "branch": "master"
            },
            "redpill_load": {
              "source_url": "https://github.com/jimmyGALLAND/redpill-load.git",
              "branch": "develop"
            }
          }
        ]
      }

      Afterwards you can use the auto action for this build config without any manual steps required...
  18. You should really speak to the GitHub repo maintainer of the rp-load fork you use, as it is their responsibility to provide the extensions that work with their repo (or at least point to compatible existing extensions made by someone else for that platform and revision), as TTG only provides extensions for 6.2.4 and 7.0. Everything related to 7.0.1 is in the hands of whoever made the 7.0.1 repo. The forced extensions are declared in the `bundled-exts.json` of the redpill-load repository you declared. I don't believe that removing the offending extensions is the right way to go. I would recommend raising this concern with the repo owner and asking them to take care of this issue. Fun fact: you can use `custom_bind_mounts` to mount a local copy of your own bundled-exts.json into the container at /opt/redpill-load/bundled-exts.json and make it mount on top of the existing file from the repo. On Linux (and, as I recently learned, on Docker Desktop for macOS) you can mount single files(!); on Docker Desktop for Windows this is not possible.
  19. ... and the error message is the same as it was with the build config in custom_config.json, wasn't it? If an extension does not support a platform and DSM revision (in your case ds3615xs_42218), then the extension needs to be removed for a successful build. Currently TTG's repos do not support 7.0.1 builds, and neither do their extensions. Please raise an issue at whoever's rp-load repo you use.
  20. I don't understand WHY it works, but apparently it does! The Synocommunity repository can be added/accessed from the UI by replacing /etc/ssl/certs/ca-certificates.crt with the most recent file from https://curl.se/ca/cacert.pem. Update: So apparently the explanation is that older openssl libs require ALL certificate chains they verify to be valid, while newer openssl libs (> v1.1.0) only require at least one verified certificate chain to be valid. What's the difference with the cacert.pem from curl.se? Unlike the Synology cacert.pem's (regardless of whether DSM 6.x or current 7.x), it does not include the expired CA. Thus, the verification path that led to an invalid chain due to the expired CA certificate is not included. So after all it was not the algorithms that LE used to sign the certificates, but rather an openssl implementation detail in combination with outdated CAs.
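The replacement step above can be sketched as a tiny helper that keeps a backup before overwriting the bundle. `replace_ca` is a hypothetical function, not part of DSM; the real-world usage assumes a root shell on the NAS and wget being available:

```shell
# Back up a CA bundle, then overwrite it with a new one (sketch).
replace_ca() {
  cp "$1" "$1.bak" && cp "$2" "$1"
}

# Real-world usage on DSM (as root; not run here):
#   wget -O /tmp/cacert.pem https://curl.se/ca/cacert.pem
#   replace_ca /etc/ssl/certs/ca-certificates.crt /tmp/cacert.pem
```

Keeping the `.bak` copy makes it trivial to revert if a DSM update later expects the original bundle.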
  21. This update of the redpill_tool_chain helper is long overdue. From now on the name will be redpill-helper, as it really is just a helper for redpill-lkm and redpill-load. You can find redpill-helper v0.12 attached to this post. It now supports an "ext" action, which delegates the commands to the ext-manager.sh script inside a container. The extensions are cached on the host, thus extensions need to be added only once and will apply to all build profiles! This should put an end to the need to modify the script, or to use the run action and add the extensions every time a bootloader image is built. Additionally, a custom_config.json is introduced, which is the place to store your custom configurations - it needs to be created by yourself and won't be overridden by any future updates of the rp-helper. Please read the README.md for usage instructions. Thanks to Pocopico, WiteWulf and Orphée for testing it thoroughly! Special thanks to WiteWulf for helping me transform the README.md into a useful document. redpill-helper-v0.12.zip
  22. This is how I run DS3615 DSM6.2.3u3 with Jun's 1.03b and PVE 7.0.1 with latest updates:

      args: -drive 'if=none,id=synoboot,format=raw,file=/var/lib/vz/images/xxx/synoboot.img' -device 'usb-storage,bus=ehci.0,drive=synoboot,bootindex=5'
      boot: order=sata0
      cores: 8
      cpu: host,flags=+aes
      hostpci0: 0000:04:00,pcie=1
      machine: q35
      memory: 8192
      name: DSM
      net0: vmxnet3=xx:xx:xx:xx:xx:xx,bridge=vmbr0
      numa: 0
      onboot: 1
      ostype: l26
      sata0: local-lvm:vm-xxx-disk-0,discard=on,size=100G,ssd=1
      scsihw: megasas
      serial0: socket
      serial1: socket
      serial2: socket
      smbios1: uuid=${random uuid}
      sockets: 1
      tablet: 0
      vmgenid: ${random uuid}

      Notes:
      - I specifically didn't add a USB device and re-use the pre-existing ehci.0 bus (= USB 2.0).
      - hostpci0 is a PCI passthrough of an LSI 9211 controller with 8 drives - without passthrough, the line is irrelevant to you
      - I use vmxnet3 and can achieve close to full 10Gbps on the NIC.
      - serial ports are there to access log output

      There is an extra.lzma on the 2nd partition of the bootloader image, but I don't recall if I added it myself or it pre-existed. I modified grub/grub.cfg in the 1st partition of the bootloader image:

      set vid=0x46f4
      set pid=0x0001
      set extra_args_3615='DiskIdxMap=00080D SataPortMap=866 SasIdxMap=0'

      The first two are required to hide the USB boot device from DSM, the third to correct the drive order in DSM - though the last line will be individual for each setup.
      SataPortMap settings explained:
      8 = 1st controller has 8 ports -> my PCI passthrough LSI controller with 8 ports
      6 = 2nd controller has 6 ports -> the SATA controller with a 100GB additional disk (6 = max drive count on PVE SATA)
      6 = 3rd controller has 6 ports -> for the additional SATA controller I get listed with `lspci -k | grep -A2 -i -E '(SCSI|SATA) Controller'`

      DiskIdxMap settings explained:
      00 = first drive of the 1st controller starts at drive 1 (up to drive 8)
      08 = first drive of the 2nd controller starts at drive 9 (up to drive 14)
      0D = first drive of the 3rd controller starts at drive 15 (up to drive 20)

      Like I wrote, DiskIdxMap and SataPortMap are individual to each setup.
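The hex conversion behind the DiskIdxMap digits is easy to check on any shell: each pair is just "first drive number - 1" printed as two-digit hex. The drive numbers below are taken from the examples in this thread (the mapping itself is the rule stated in post 4):

```shell
# DiskIdxMap digit pairs: "first drive number - 1" as two-digit hex.
# drive 9 is the first drive of a controller that follows an 8-port one;
# drive 14 is the first drive in u357's example from earlier in the thread.
for drive in 9 14; do
  printf 'drive %d -> %02X\n' "$drive" $((drive - 1))
done
```

Running it prints `drive 9 -> 08` and `drive 14 -> 0D`, matching the values used above.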
  23. I doubt that replacing curl with a statically compiled version will solve many issues, other than maybe with custom scripts that depend on curl. You can fetch prebuilt statically compiled curl binaries from GitHub right away and test it... Without having identified what exact dependency an affected application requires, it is hard to figure out what needs to be replaced... All 3rd party spk packages should come with an updated version of openssl - but those are private to the package. Is it safe to replace the system libraries with those of the packages? I highly doubt that... What might be worth investigating is replacing the runtime environment that serves the DSM UI: I guess nginx could be replaced with a statically compiled version, but what about synocgid - one of the two must be responsible for executing the package manager's certificate checks... If you feel bold, you can download this CLI tool to check the dependencies of a binary:

      wget -L https://github.com/haampie/libtree/releases/download/v2.0.0/libtree_x86_64
      chmod +x libtree_x86_64
      # example usage:
      ./libtree_x86_64 $(which curl)

      It will provide an output like this on DSM6.2.3u3:
  24. If you run Docker Desktop or docker in WSL2, try to shut down the WSL2 base VM with `wsl --shutdown` and retry. If it's another Linux distro, try to restart the docker engine with `service docker restart`. Apparently you are not using Synology Docker, as you showed that a link on the host(?) works without issues. Apart from that: no idea. Not reproducible for me.