haydibe

Members
  • Content Count: 634
  • Joined
  • Last visited
  • Days Won: 33

haydibe last won the day on November 29

haydibe had the most liked content!

Community Reputation: 282 Excellent

About haydibe
  • Rank: Guru

  1. @Hammerfine you have to add a custom config for 7.0.1 yourself. rp-tool-chain/rp-helper supports whatever is supported by the TTG rp-load repository.
  2. Yep, that totally fixes the problem! Jun's loader was and still is not open source, and it was a one-shot contribution. The difference is that TTG open sourced the whole effort. This is a whole different situation. So your argument is that, for the sake of some, we shouldn't care that it risks everything for everyone. I am not sure I am able to understand that spirit. This is not a finished and polished product. At this stage RP targets an audience that brings some skills.. so if someone lacks those skills, then I frankly feel those people should wait until the final product (or at least a b
  3. Guys, are you serious?! It is unwanted that prebuilt bootloader images are shared, as they include proprietary code by Synology and are NOT open source. When I see how carelessly people treat TTG's warning not to share prebuilt bootloader images due to them containing proprietary Synology code, it kind of makes me regret that I ever shared the rp-helper/rp-tool-chain with the public. I don't understand why it's impossible for some people to respect the wishes of the authors. That said, please stop publishing any prebuilt images! Maybe TTG not being around is alr
  4. Thanks for the clarification! At least I know now it is not because of DS918+ or DSM7.0 in general. Ofc the physical 10Gbps interface of the Proxmox host and the Linux bridge (which does what the vSwitch does) were set to mtu9000 - it works without issues for everything else. Probably it's the virtio network driver then. But I kind of lost interest, as I found no way to run my Swarm cluster with mtu9000 - even though the container networks can be set to mtu9000... the docker0 and docker_gwbridge interfaces used for the communication cannot be changed from mtu1500 to some
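     For what it's worth, Docker's default bridge (docker0) MTU can be set in /etc/docker/daemon.json - treat this as a sketch, since it only covers docker0 and I have not verified it solves the Swarm case:

     ```json
     {
       "mtu": 9000
     }
     ```

     docker_gwbridge is created by Swarm itself; as far as I know it has to be removed and recreated with `-o com.docker.network.driver.mtu=9000` while the node is not part of a swarm.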
  5. The equipment is not the issue. I have ConnectX 10Gbps NICs and a managed 10Gbps switch that supports an MTU of 9k; that's why I was wondering why it was only unavailable in DSM7. With vmxnet3 and Jun's loader on 6.2.1 it works like a charm, and the same is true for every other Linux VM I am running with virtio drivers. So the problem must be specific to DSM7 on DS918+ somehow... I still need to spin up a DS3615xs to test, and also test whether it's different with DS918+ on a vmxnet3 interface.
  6. It's most likely a problem as @paro44 pointed out a couple of times. DiskIdxMap, SataPortMap and SasIdxMap are relevant when the drive order needs fixing or expected drives are missing.
  7. @devid79 The picture does not tell how many controllers are found. Please share the output of these three commands: `lspci -k | grep -EiA2 'SATA|SCSI'`, `ls /dev/sd?` and `cat /proc/scsi/sg/devices`
  8. There is no need to perform manual changes in the grub.cfg. Just do what @shibby wrote about the setting in your user_config.json (it needs to be set BEFORE the image is built!): But this time try different values. This setting IS system specific - you should set as many `SataPortMap` slots as you have controllers in your system (shibby's example shows one controller with 4 ports, u357's example shows two controllers, one with 4 ports and a second with 2 ports) and adjust the position of the first harddisk on the controllers using `DiskIdxMap` (shibby's example starts with
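     As a sketch only (the values are illustrative, not a drop-in config): two SATA controllers, the first with 4 ports and the second with 2, disks numbered consecutively, would look roughly like this in the extra_cmdline section of user_config.json, as I understand the parameters:

     ```json
     {
       "extra_cmdline": {
         "SataPortMap": "42",
         "DiskIdxMap": "0004"
       }
     }
     ```

     Here "42" means controller 1 exposes 4 ports and controller 2 exposes 2, while "0004" places the first controller's disks starting at index 0x00 and the second controller's disks starting at index 0x04.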
  9. Hmm. I only see 1300, 1400 and 1500 in the MTU value list. If I enter any value above 1500, it instantly results in an error that indicates the MTU must be between 1300 and 1500. Is it because virtio simply lacks that feature (even though I can use it on various other Linux VMs without issues), or is it because Synology withholds jumbo frames from its consumer grade products?! @WiteWulf thank you for sharing the details
  10. This one is easy to spot: your user_config.json is invalid. You need to add a comma after mac1, mac2 and mac3 for it to be valid JSON. Also make sure the MAC addresses are unique, which they are not right now.
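      A quick way to catch this kind of mistake before building the image is to run the file through a JSON parser; the mac values below are made up for illustration:

      ```python
      import json

      # A fragment with the typical mistake: missing comma between the mac entries.
      broken = '{"mac1": "001132AABB01" "mac2": "001132AABB02"}'
      fixed = '{"mac1": "001132AABB01", "mac2": "001132AABB02"}'

      try:
          json.loads(broken)
      except json.JSONDecodeError as e:
          # The parser reports the position where the comma is missing.
          print(f"invalid JSON: {e.msg}")

      print(json.loads(fixed)["mac2"])  # → 001132AABB02
      ```

      From the shell, `python3 -m json.tool user_config.json` does the same check.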
  11. @WiteWulf Thanks for checking! Did you have to enable jumbo frames somewhere? Now I am curious why it's not working for me. I wonder if it's Proxmox related?
  12. Did anyone succeed in using an MTU > 1500 with DSM7? My apollolake 7.0.0 is locked between 1300 and 1500. Does it work on bromolow and/or 7.0.1?
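      To check whether jumbo frames actually work end to end (and not just appear in the UI), the usual test is a don't-fragment ping with a payload sized to the MTU. The helper below just computes that payload size (MTU minus 20 bytes IPv4 header minus 8 bytes ICMP header); the ping invocation in the comment is for Linux and the target address is an example:

      ```python
      def ping_payload(mtu: int) -> int:
          """Largest ICMP payload that fits into one frame of the given MTU
          (IPv4 header: 20 bytes, ICMP header: 8 bytes)."""
          return mtu - 20 - 8

      # For an MTU of 9000 the payload is 8972 bytes, so a jumbo-frame test
      # against another host on the same switch would be:
      #   ping -M do -s 8972 <target-ip>
      print(ping_payload(9000))  # → 8972
      ```

      If the ping fails with a fragmentation error while smaller payloads work, some hop in between is still at mtu1500.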
  13. Yep, with the built-in forced extension. Works like a charm.
  14. No passthrough, PVE uses the NIC directly. All my VMs use virtio vNICs. The RP DS918+ DSM7 VM uses the built-in virtio driver provided by RP. Speed among VMs with virtio vNICs on the same PVE host is about 25Gbps. Across nodes, it is limited by the 10Gbps connection.
  15. I get stable (close to) 10Gbps with DSM7 on Proxmox 7 with a Mellanox ConnectX-3 card. Though I only tested it with DS918+