XPEnology Community

Everything posted by uxora-com

  1. It still works for me; I think it is something to do with your macvlan being connected directly to the vswitch instead of to an ethX interface. Well, it does not work:
     - POSIX ACLs do not seem to be compiled into the 9p modules, and even if they were, they are not used by DSM, which uses its proprietary syno_acl compiled into its filesystem
     - xattr seems to work, but it is not used by DSM either
     - DSM Office appears to be based on synoacl (for security and share management) and cannot work without it
     So there is not much to do about that. HTH,
  2. Can you provide me with your "docker run" command line and your "9p mount" command line? Then I may give it a try to see whether DSM ACLs work with 9p.
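     Something along these lines is what I mean (the paths, mount tag and variables here are only placeholders, not your actual setup):
       # docker run command used to start the xpenology container
       $ docker run --privileged ... -e VM_9P_PATH="/xpy/share9p" -v /host_dir/data:/xpy/share9p uxora/xpenology
       # 9p mount command run inside DSM
       $ sudo mount -t 9p -o trans=virtio,version=9p2000.L hostdata /volume1/datashare9p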
  3. UPDATE of xpenology-docker:
     - Instead of using BOOTLOADER_URL, you can now pass a local bootloader file with: "-v /local_path/synoboot.tgz:/bootloader"
     - Added a BOOTLOADER_FORCE_REPLACE variable to replace an existing bootloader in DISK_PATH
     A minimal example is below. Well, let me know if something does not work. Enjoy. HTH,
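     For example, a run using a local bootloader could look roughly like this (a sketch only: paths and values are placeholders, and the "Y" value for BOOTLOADER_FORCE_REPLACE is my assumption, following the VM_NET_DHCP convention):
       $ docker run --name="xpenodock" --hostname="xpenodock" \
           --privileged --cap-add=NET_ADMIN \
           --device=/dev/net/tun --device=/dev/kvm \
           --network macvlan0 --ip=192.168.0.50 \
           -v /local_path/synoboot.tgz:/bootloader \
           -e BOOTLOADER_FORCE_REPLACE="Y" \
           -e GRUBCFG_SN="1234ABC012345" \
           -v /host_dir/kvm:/xpy/diskvm \
           uxora/xpenology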
  4. UPDATE of xpenology-docker:
     - DHCP is now available (check the README for more information)
     - It now works without the "--privileged" docker option, for more security

     # On docker host
     # Create a macvlan matching your local network
     $ docker network create -d macvlan \
         --subnet=192.168.0.0/24 \
         --gateway=192.168.0.1 \
         --ip-range=192.168.0.100/28 \
         -o parent=eth0 \
         macvlan0

     # Run xpenology docker (Warning: --device-cgroup-rule number may be different for you)
     $ docker run --name="xpenodock" --hostname="xpenodock" \
         --cap-add=NET_ADMIN --device-cgroup-rule='c 235:* rwm' \
         --device=/dev/net/tun --device=/dev/kvm --device=/dev/vhost-net \
         --network macvlan0 -e VM_NET_DHCP="Y" \
         -e BOOTLOADER_URL="http://myurl/synoboot.tgz" \
         -e RAM="512" -e DISK_SIZE="16G" \
         -e GRUBCFG_SN="1234ABC012345" \
         -e GRUBCFG_SATAPORTMAP="6" -e GRUBCFG_DISKIDXMAP="00" \
         -e DISK_PATH="/xpy/diskvm" -e VM_9P_PATH="/xpy/share9p" \
         -v /host_dir/kvm:/xpy/diskvm -v /host_dir/data:/xpy/share9p \
         uxora/xpenology

     HTH,
  5. Just to let you know that I have updated my xpenology-on-docker for Redpill loader compatibility (and it still works with Jun's loader). I have tested it against DS3615xs bootloaders and it seems to work pretty well. This is not meant to be used as a production system, but it's pretty good for testing things and fast to deploy, with a docker image of only 67MB. For example:
     - You can quickly test your newly built bootloader
     - You can test an upgrade or another risky thing... by taking a snapshot first; if anything goes wrong, restore the snapshot or destroy the container
     - Etc.

     Quickly deploy as follows (check github for more options):

     # On docker host
     # Create a macvlan matching your local network
     $ docker network create -d macvlan \
         --subnet=192.168.0.0/24 \
         --gateway=192.168.0.1 \
         -o parent=eth0 \
         macvlan0

     # Run xpenology docker with ip 192.168.0.50
     $ docker run --name="xpenodock" --hostname="xpenodock" \
         --privileged --cap-add=NET_ADMIN \
         --device=/dev/net/tun --device=/dev/kvm \
         --network macvlan0 --ip=192.168.0.50 \
         -e BOOTLOADER_URL="http://myurl/synoboot.tgz" \
         -e GRUBCFG_SN="1234ABC012345" \
         -e GRUBCFG_DISKIDXMAP="00" -e GRUBCFG_SATAPORTMAP="2" \
         -v /host_dir/kvm:/xpy/diskvm \
         uxora/xpenology

     Open the web interface on 192.168.0.50:5000 and enjoy!

     # Some snapshot commands
     $ docker exec xpenodock vm-snap-create SnapB4Upd
     $ docker exec xpenodock vm-snap-restore SnapB4Upd
     $ docker exec xpenodock vm-snap-delete SnapB4Upd

     # Poweroff
     $ docker exec xpenodock vm-power-down

     Source:
     - https://github.com/uxora-com/xpenology-docker
  6. If you can ping it, then it should work as long as you use the right port to access it and it is not blocked by any firewall! (Sometimes I'm not sure you even tried to find a solution before posting... and again, as I said in my last post, you don't need the port mapping option "-p" any more... and you should find your solution.) Well, what you need to know is that I already got it working... and that is already a big thing to know... so it should normally work for you as well! 9p seems to have POSIX ACL capabilities... but I don't know whether the modules have been compiled with this option, and even if they have, I am not sure it will work with DSM ACLs. You need to try and test it, and if you succeed in making it work, let us know (a sketch of such a test is below). Otherwise you have other filesystem options like SMB/CIFS or NFS. HTH,
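     A quick test inside DSM could look like this (the mount tag and paths are placeholders; posixacl is a standard 9p mount option, but I have not verified that these modules support it):
       $ sudo mount -t 9p -o trans=virtio,version=9p2000.L,posixacl hostdata /volume1/datashare9p
       $ touch /volume1/datashare9p/testfile
       $ setfacl -m u:admin:rw /volume1/datashare9p/testfile   # fails if POSIX ACLs are not supported
       $ getfacl /volume1/datashare9p/testfile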
  7. The answer is in my last post about that:
  8. What are the steps to reproduce this issue? I have done hundreds of restarts of the same containers without any issue.
  9. No, I never get this issue... maybe it is something to do with trying to use the /xpy_syst folder as the 9p fs mountpoint. (I should remove this from my README example.) You can try different VM_9P_OPTS and mount options to see whether ACLs work with xpenology. Maybe you want your xpenology to be more exposed by giving it its own fixed IP. You can do that by creating a macvlan and then adding "--network MyMacvlanName --ip=192.168.x.x" (then you won't need the -p option any more); see the sketch below. Check this: https://docs.docker.com/network/network-tutorial-macvlan/ HTH,
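     A minimal sketch of the fixed-IP setup, assuming your LAN is 192.168.0.0/24 behind eth0 (adjust the subnet, gateway, parent interface and IP to your network):
       $ docker network create -d macvlan \
           --subnet=192.168.0.0/24 \
           --gateway=192.168.0.1 \
           -o parent=eth0 \
           MyMacvlanName
       $ docker run ... --network MyMacvlanName --ip=192.168.0.50 uxora/xpenology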
  10. I just did some tests and I did have an issue editing a file with FileEditor: it could not save the change. This 9p folder is not configured to use DSM ACL permissions; it uses basic Linux filesystem permissions. I just ran these commands over ssh to fix it:
      $ sudo chown -R root:users /volume1/datashare9p
      $ sudo chmod -R g+w /volume1/datashare9p
      It may depend on your qemu and mount options as well; you may want to check this (especially the "access" option, see the rough sketch below): https://wiki.qemu.org/Documentation/9psetup I didn't really try to optimize the qemu and mounting options for 9pfs; I just kept the same command from an older project. I just saw that it works and never checked its reliability or speed. HTH,
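      For reference, the knobs I mean are roughly these (the tag, path and values are placeholders; I have not tuned them myself):
        # qemu side: how host ownership/permissions are mapped onto the shared folder
        -virtfs local,id=hostdata,path=/xpy/share9p,mount_tag=hostdata,security_model=mapped-xattr
        # DSM guest side: the "access" mount option controls which users may use the mount
        $ sudo mount -t 9p -o trans=virtio,version=9p2000.L,access=any hostdata /volume1/datashare9p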
  11. Actually, if it's just to change the mount point, you don't need to delete the bootloader... you can just recreate a new container with the same options, only changing the mount point option (-v). The delete instructions I gave you are for when you want to change an option related to the bootloader... otherwise you just need to recreate the container without deleting anything (except the old container, if you want). Don't worry about that; you should get your DSM xpenology back with all your data in place, even in a new container, as long as it uses the same VM disks.
  12. Congratulations! For SMB, you need to forward the SMB ports on docker with: "-p 137-139:137-139 -p 445:445". Then you can access SMB via its IP (i.e. "\\192.168.0.10"). If you want to name it, you can add it to the hosts file on your Windows/Linux machine (needs administrator privileges). I'm not sure how to change the options (-e and -v) of an existing container... it was possible in old docker versions, maybe it still is... well, I will let you find that on Google; let us know if you find it. Otherwise, as an alternative, you can recreate a new container with new options (keeping the VM disks) as follows:
      - In the "syst/" folder, uncompress: "$ tar -xzf bootloader.img.tar.gz"
      - Delete (or move): "$ rm bootloader.img.tar.gz bootloader.qcow2"
      - Delete (if you want): "$ docker container rm $( docker container ls -qf 'ancestor=uxora/xpenology' )"
      - Then recreate a container with the new options: "$ docker run --privileged [...]"
      And for automounting 9p at boot time, use "Control Panel > Task Scheduler > Create > Triggered Task" to run your command line as root (a sketch of such a task command is below). HTH,
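      The triggered task itself would just run the mount command as root, something like this (the 9p tag and target path are placeholders taken from my earlier examples; adjust them to yours):
        mount -t 9p -o trans=virtio,version=9p2000.L hostdata /volume1/datashare9p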
  13. Here it is, I just pushed the modification to github and the docker server. You can now add the following options to the docker command line:
      -e GRUBCFG_DISKIDXMAP="00" -e GRUBCFG_SATAPORTMAP="1"
      This should fix your disk SATA error. About your iptables thing, I don't know whether it's fixed because I cannot test it (and I don't want your cp/mv stuff because it breaks on other systems), but let me know.
  14. Ok, I think I figured it out. It's something about the bootloader, which has DiskIdxMap and SataPortMap values set too high; xpenology-docker does not seem to like that. There are GRUBCFG_DISKIDXMAP and GRUBCFG_SATAPORTMAP variables to fix that... but they do not seem functional with the way the value is changed in the bootloader. I will fix that later, along with the iptables stuff.
  15. What is the output of the script (just the beginning, without the booting process log)?
  16. I just figured out that this guy's repository contains several bootloaders for proxmox. I tested its 1.03b / redpill for DS3615xs and it seems to be working pretty well with xpenology-docker. Maybe you can give it a try.
  17. In the log you posted, it seems you have some errors, especially with the bootloader... so maybe the bootloader or virtual disk has an issue. And the "df" command you ran just shows mounted partitions... it does not show the disks attached to the machine. Can you give me the full command line you use to run the xpenology docker? Did you remove all the container images before running the docker command? If not, you can do it with the following command:
      docker rmi $( docker image ls --filter 'reference=uxora/*' -q )
      And you need to delete all files in "/xpenology/syst/" as well (except "bootloader.img" if you do not use BOOTLOADER_URL).
  18. Ok, ty. Typo fixed in the source code and pushed to docker. Yes, it seems to work pretty well. Usually that means you updated to a DSM version that the bootloader does not support. Well... you post too much and give too much information about different things... and mix things up... so I am lost about what you are trying, or have managed, to do (especially reading on a phone). Calm down and just go step by step. Delete all docker containers and images related to uxora/xpenology (see the commands below). Then just run the docker command to launch xpenology (it should pull the latest image from the docker server) and post the output so we can see whether there is any error (I mean just the script output, without the log of the booting process). And in your last post, I don't see any issue. You get up to the login prompt without any serious error and can even log in. Just open a web browser on IP:5000 and you should see DSM ready for the installation step.
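      The cleanup commands are the same ones from my other posts (run them on the docker host; the -a flag is added here so stopped containers are caught too):
        $ docker container rm $( docker container ls -aqf 'ancestor=uxora/xpenology' )
        $ docker rmi $( docker image ls --filter 'reference=uxora/*' -q )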
  19. @henry I have just updated the docker base image to Debian bullseye and KVM 5.2 for better compatibility. Maybe you can give it another try. I don't know what you mean by that, and I cannot see any error in your output. I just tried a new container with BOOTLOADER_URL pointing to a redpill bootloader (virtio/9p) and it still works. I don't know... I never tried. And I don't have a QNAP.
  20. @seanone @hendry @s2k7 Hi all, it's been a while... I had some free time, so I have just updated the https://github.com/uxora-com/xpenology-docker repository:
      - to take into account Redpill bootloader compatibility
      - to be able to change the vid, pid and sn just by providing variables on the command line (GRUBCFG_VID, GRUBCFG_PID and GRUBCFG_SN); see the sketch below
      - to be able to run the docker without BOOTLOADER_URL
      To compile your own redpill loader with virtio and 9p, you may want to use the RedPill Tinycore loader img and check this:
      - https://github.com/uxora-com/rpext
      I did a quick test with redpill for DS3615xs with virtio/9p and it seems to be working pretty well. I have not tested it long enough to know whether it is as stable as Jun's loader 1.03b. HTH,
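      For example, the new variables can be passed like this (a rough, untested sketch: the vid/pid/sn values and paths are placeholders, not values you should copy):
        $ docker run --name="xpenodock" --hostname="xpenodock" \
            --privileged --cap-add=NET_ADMIN \
            --device=/dev/net/tun --device=/dev/kvm \
            --network macvlan0 --ip=192.168.0.50 \
            -v /local_path/synoboot.tgz:/bootloader \
            -e GRUBCFG_VID="0x0951" -e GRUBCFG_PID="0x1666" -e GRUBCFG_SN="1234ABC012345" \
            -v /host_dir/kvm:/xpy/diskvm \
            uxora/xpenology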
  21. Hi, can you please provide me with your compiled redpill bootloader img? (In private or here.) I want to do a quick test but I'm too lazy to download and compile all the stuff. Ty.
  22. @s2k7 Hi, actually for 9p the tag has changed a bit... it's now hostdata0, hostdata1, ... (an example mount is below). Thanks for this info, I may give it a try. From proxmox, it is possible to pass through a PCIe GPU to a VM (I tried it and it works well) and it seems possible to pass through some iGPUs as well (but I never tried with an iGPU). Now, in this case of docker kvm, it seems more complicated because you would need to first pass the iGPU through docker and then pass it again through KVM... maybe it can be done, but I never tried... and I'm not sure which iGPUs the DS918+ can work with.
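      With the new tag, the mount inside DSM would then be something like this (the target path is just a placeholder):
        $ sudo mount -t 9p -o trans=virtio,version=9p2000.L hostdata0 /volume1/datashare9p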
  23. I'm not sure what you are missing, but it works on Proxmox and docker kvm with a bootloader that has virtio drivers. Maybe you want to check the docker kvm code project, which uses a kvm command line to run DSM in a VM:
  24. Add a "Serial Port (serial0): socket" to your proxmox VM (a sketch is below). Then, when your VM is running, you will be able to connect to it from your proxmox server terminal with the following command:
      $ qm terminal <vmID>
      HTH
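      If you prefer the CLI over the web UI, the serial socket can, as far as I know, also be added with qm on the proxmox host (restart the VM afterwards):
        $ qm set <vmID> -serial0 socket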