
About segator

  • Rank
    Junior Member


  1. segator

    Xpenology running on Docker

    Yes, it's this one.
  2. segator

    Xpenology running on Docker

  3. Hi guys, I'm trying to add KVM drivers to XPenology (Jun's mod). I successfully loaded some modules using the tutorial on this forum; lsmod shows the modules loaded. The virtio network is working, but I am not able to get 9p working (9p is used to mount a host folder into the guest, with better performance than NFS, CIFS, etc.; it's almost native performance). The driver is loaded, but when I try to mount I get this error in the VM log.

    Command used:

```
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /tmp/host_files
```

```
[  144.451391] BUG: unable to handle kernel paging request at 00007f000183c9ec
[  144.452284] IP: [<ffffffffa0bbcb4c>] p9_fid_create+0x7c/0xe0 [9pnet]
[  144.452284] PGD 0
[  144.452284] Oops: 0000 [#1] SMP
[  144.452284] Modules linked in: 9p bridge stp aufs macvlan veth xt_conntrack xt_addrtype nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ipt_MASQUERADE xt_REDIRECT xt_nat iptable_nat nf_nat_ipv4 nf_nat xt_recent xt_iprange xt_limit xt_state xt_tcpudp xt_multiport xt_LOG nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack iptable_filter ip_tables x_tables cifs udf isofs loop iscsi_target_mod(O) target_core_ep(O) target_core_file(O) target_core_iblock(O) target_core_mod(O) syno_extent_pool(PO) rodsp_ep(O) hid_generic usbhid hid usblp bromolow_synobios(PO) 9pnet_virtio 9pnet virtio_balloon virtio_net button ax88179_178a usbnet tg3 r8169 cnic bnx2 vmxnet3 pcnet32 e1000 sfc netxen_nic qlge qlcnic qla3xxx pch_gbe ptp_pch sky2 skge jme ipg uio alx atl1c atl1e atl1 libphy mii btrfs synoacl_vfs(PO) zlib_deflate hfsplus md4 hmac bnx2x(O) libcrc32c mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) compat(O) r8168(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) igb(O) i2c_algo_bit e1000e(O) dca fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha512_generic sha256_generic sha1_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand mperf processor thermal_sys cpufreq_stats freq_table dm_snapshot crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram(C) sg etxhci_hcd aacraid virtio_scsi virtio_blk virtio_pci virtio_mmio virtio_ring virtio mpt3sas mpt2sas(O) megaraid_sas ata_piix mptctl mptsas mptspi mptscsih mptbase scsi_transport_spi megaraid megaraid_mbox megaraid_mm vmw_pvscsi BusLogic usb_storage xhci_hcd uhci_hcd ohci_hcd ehci_pci ehci_hcd usbcore usb_common el000(O) [last unloaded: bromolow_synobios]
[  144.452284] CPU: 1 PID: 12863 Comm: mount Tainted: P C O 3.10.102 #15152
[  144.452284] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
[  144.452284] task: ffff88007a8b1800 ti: ffff8800790d0000 task.ti: ffff8800790d0000
[  144.452284] RIP: 0010:[<ffffffffa0bbcb4c>] [<ffffffffa0bbcb4c>] p9_fid_create+0x7c/0xe0 [9pnet]
[  144.452284] RSP: 0018:ffff8800790d3c48 EFLAGS: 00010246
[  144.452284] RAX: 00007f000183c9d0 RBX: ffff8800784bf640 RCX: 0000000000000000
[  144.452284] RDX: ffffffffa0bc533c RSI: ffffffffa0bc4060 RDI: ffff88007b55d000
[  144.452284] RBP: ffff88007b55d000 R08: ffff8800787cfb80 R09: ffff880076c4a640
[  144.452284] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88007b55d000
[  144.452284] R13: ffff880067b30c48 R14: ffff88007b62b500 R15: ffff88007bc51e40
[  144.452284] FS:  00007efc7192d7c0(0000) GS:ffff88007fc80000(0000) knlGS:0000000000000000
[  144.452284] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
[  144.452284] CR2: 00007f000183c9ec CR3: 000000007b6de000 CR4: 00000000000407e0
[  144.452284] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
[  144.452284] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
[  144.452284] Stack:
[  144.452284]  ffff880067b30c00 ffff8800790d3cb0 ffffffffa0bbed7d ffffffff8127138b
[  144.452284]  ffff8800790d3ce0 ffffffff7b7ca640 ffff880077260000 0000000000000000
[  144.452284]  ffff880067b30c00 0000000000000000 ffff880067b30c48 ffff88007b7ca640
[  144.452284] Call Trace:
[  144.452284]  [<ffffffffa0bbed7d>] ? p9_client_attach+0x17d/0x1a0 [9pnet]
[  144.452284]  [<ffffffff8127138b>] ? match_token+0x20b/0x220
[  144.452284]  [<ffffffffa0fc6556>] ? v9fs_session_init+0x206/0x6c0 [9p]
[  144.452284]  [<ffffffff810c8baf>] ? pcpu_alloc_area+0x14f/0x360
[  144.452284]  [<ffffffffa0fc12c6>] ? v9fs_mount+0x66/0x340 [9p]
[  144.452284]  [<ffffffff810f9bd1>] ? mount_fs+0x31/0x1b0
[  144.452284]  [<ffffffff8111899d>] ? vfs_kern_mount+0x5d/0xf0
[  144.452284]  [<ffffffff81119efc>] ? do_new_mount+0xac/0x2c0
[  144.452284]  [<ffffffff810296e8>] ? __do_page_fault+0x1b8/0x480
[  144.452284]  [<ffffffff81238c05>] ? apparmor_capable+0x15/0x140
[  144.452284]  [<ffffffff8111b230>] ? do_mount+0x3b0/0x990
[  144.452284]  [<ffffffff8111b8ab>] ? SyS_mount+0x9b/0x110
[  144.452284]  [<ffffffff814adf72>] ? system_call_fastpath+0x16/0x1b
[  144.452284] Code: 08 48 c7 43 10 00 00 00 00 48 89 ef 48 c7 43 18 00 00 00 00 c7 43 0c ff ff ff ff 65 48 8b 04 25 c0 a7 00 00 48 8b 80 80 03 00 00 <8b> 40 1c 48 89 2b 48 c7 43 28 00 00 00 00 89 43 24 e8 9e 09 8f
[  144.452284] RIP  [<ffffffffa0bbcb4c>] p9_fid_create+0x7c/0xe0 [9pnet]
[  144.452284] RSP <ffff8800790d3c48>
[  144.452284] CR2: 00007f000183c9ec
[  144.519077] ---[ end trace 9905cd69f413eb3e ]---
```

    Any idea what could be the problem? Thank you.
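    For reference, a minimal sketch of a working virtio-9p setup on a stock Linux guest (the host path is hypothetical; "hostshare" is the mount tag used above, and the guest-side commands mirror the post):

```shell
# Host side (raw QEMU): add this option to the VM's existing command line to
# export a host directory over virtio-9p. The path /myhost/data is an
# assumption; security_model=mapped-xattr avoids guest permission problems.
#   -virtfs local,path=/myhost/data,mount_tag=hostshare,security_model=mapped-xattr

# Guest side: load the transport modules first, then mount the tag.
modprobe -a 9p 9pnet_virtio
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /tmp/host_files
```

    On a normal kernel this works out of the box; the oops above suggests the out-of-tree 9pnet module was built against headers that do not match the running DSM kernel.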
  4. segator

    Synology on a Docker

    Hi guys, I created a first version of Xpenology on Docker (inside a KVM VM). My goal was to get the unRAID HDD system plus the DSM backup/cloud applications.
  5. segator

    Xpenology running on Docker

    Yes, exactly what you are thinking! But there are some traps. We can run a fully functional Xpenology on Docker, but only inside a KVM VM; it works pretty well, really. Right now what I have is a simple Docker container that runs a clean instance of Xpenology, so the installation page is shown on first boot. I'm thinking about which features would be interesting for you. Support for Docker volumes inside Xpenology, e.g. -v /myhost/data:/Volume1/myshare? Please let me know what you think would be useful. When I have a more stable version I will publish it to GitHub for all of you. The image is based on one I made for my work (Windows Server 2012 inside Docker), which I reused for Xpenology. I built it because I wanted the pretty UI of Xpenology plus unRAID disk management (the HDDs are almost always spun down). You can share your unRAID data using NFS/GlusterFS and, if someone adds the 9p virtio driver to the synoboot image, 9p.
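    The idea above could look roughly like this; the image name "segator/xpenology" is a placeholder (nothing was published yet), and the volume syntax is the one proposed in the post:

```shell
# Hypothetical usage sketch, assuming the image boots a KVM VM internally
# (hence /dev/kvm) and maps a host folder into a DSM share.
docker run -d \
  --device /dev/kvm \
  -v /myhost/data:/Volume1/myshare \
  -p 5000:5000 \
  segator/xpenology
# Port 5000 is DSM's default web UI port.
```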
  6. segator

    DSM 6.1.x Loader

    Hi, thank you for your work; it's working perfectly. I'm running DSM 6 on KVM on an unRAID host, mounting NFS shares from my unRAID server into DSM, so I can use almost all the features DSM has (except the btrfs things). The only problem is that the virtio (paravirtualized) drivers don't work. Would it be possible to add these drivers? For now I'm using SATA emulation and e1000 for the network.
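    A sketch of the fallback described above, and what virtio support would replace (the disk image name, bridge name and memory size are assumptions):

```shell
# Working fallback with the stock loader: SATA (AHCI) disk emulation + e1000 NIC.
qemu-system-x86_64 --enable-kvm -m 2048 \
  -drive file=dsm-data.img,if=none,id=disk0 \
  -device ahci,id=ahci0 \
  -device ide-hd,drive=disk0,bus=ahci0.0 \
  -netdev bridge,id=net0,br=br0 \
  -device e1000,netdev=net0

# If the guest kernel shipped virtio drivers, the paravirtualized
# equivalents would be:
#   -device virtio-blk-pci,drive=disk0
#   -device virtio-net-pci,netdev=net0
```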
  7. segator

    DSM 6.1.x Loader

    Hi guys, I can't help you test, but I was thinking you might need somewhere to keep files and an automatic build system. If any of you can explain to me how to compile the whole system, I will set up a simple continuous integration server (Jenkins).
  8. segator

    Working DSM 6

    I tested your vmdk in VMware and it works pretty well, thanks guys! But I would like to use it on KVM (because I use unRAID as a backend, and I prefer that storage system to a typical mdadm RAID) and share volumes over NFS. Does anyone know how I can install it on KVM?
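    One common way to do this is to convert the VMDK to qcow2 and boot it directly; a sketch, with file names, bridge name and sizes as assumptions:

```shell
# Convert the distributed VMDK into qcow2 for KVM (-p shows progress).
qemu-img convert -p -f vmdk -O qcow2 DSM6.vmdk dsm6.qcow2

# Minimal boot test; e1000 is used since virtio drivers may be
# missing in the guest kernel.
qemu-system-x86_64 --enable-kvm -m 2048 -smp 2 \
  -drive file=dsm6.qcow2,format=qcow2 \
  -netdev bridge,id=net0,br=br0 \
  -device e1000,netdev=net0
```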
  9. segator


    Wait, because with 5.2 Docker is coming, and you will have access to any Linux OS you want. If you don't know what it is, search Google and you'll see it's really good; I've been using it for months.
  10. segator

    XPEnology vs UNRAID?

    I'm a happy owner of an unRAID server. I changed from XPenology to unRAID for four things:
    - Individual HDD hibernation (unRAID only spins up the drive holding the file you are trying to read)
    - More flexibility adding/removing disks (you can add whatever disk you want, with no restrictions when creating the array; the only rule is that the parity drive must be the biggest)
    - More security (with SHR in DSM you have single-disk protection; if two drives fail at the same time you lose ALL the data in the array, while unRAID only loses the data on the two failed drives)
    - The possibility to virtualize (KVM/Xen) and run Docker containers (not virtualization, but it looks like it)
    The problems with unRAID are:
    - Very slow writes (if you need performance you have to install an SSD cache)
    - Read speed equals the speed of the single drive holding the file you want (slower than mdadm-style RAIDs like Synology DSM)
    - A GUI with fewer possibilities compared with DSM (plenty of open source applications do the same things, but you won't have a centralized GUI with everything together)
    - It requires some system knowledge
  11. segator

    End od

    I propose using BitTorrent Sync to share the files.
  12. segator

    Xpenology + unRAID

    I have an Xpenology VM guest running under KVM on an unRAID host. Is it possible to mount 9p virtual shares in XPenology? I want to mount directories from unRAID into Xpenology. I tried NFS, but it's slow because the data has to pass over the (bridged) virtual ethernet.
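    For a libvirt-managed guest (as unRAID uses), a 9p share can be attached like this; the domain name "Xpenology", the source directory and the guest mount point are assumptions:

```shell
# Host side: define a 9p export and attach it to the guest's config.
cat > hostshare.xml <<'EOF'
<filesystem type='mount' accessmode='mapped'>
  <source dir='/mnt/user/data'/>
  <target dir='hostshare'/>
</filesystem>
EOF
virsh attach-device Xpenology hostshare.xml --config

# Guest side, after a reboot (this only works if the DSM kernel has
# the 9p and 9pnet_virtio modules, which stock loaders lack):
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/unraid
```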
  13. If I put it in IDE mode, will my 6TB and 8TB drives work? Is IDE mode slower than AHCI mode? Does changing the mode require formatting the drives? And the serial port you are explaining, does it have no effect in the end? Thanks; I'm having the same problem. For now I run 5.0 perfectly on a C2550D4I.
  14. segator

    Xpenology on Dedicated Server, Perfect Working

    First you need to put your VM's ethernet in NAT mode; Proxmox assigns it an IP. On your Proxmox server you need to forward port 5001 to the VM. Now you can connect with your host server's IP and you will see Synology Assistant. Once it is installed, you need to log in with SSH and change the IP to your failover IP. Now stop your VM and switch the network back to vmbr0. It's done. Sorry for my bad English.
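    The port-forwarding step above can be sketched with iptables on the Proxmox host; the guest IP is an assumption (it depends on what Proxmox's NAT assigned):

```shell
# DNAT incoming port 5001 to the NAT'd guest so Synology Assistant / DSM
# is reachable from outside, then masquerade the guest's replies.
VM_IP=10.0.2.15
iptables -t nat -A PREROUTING -p tcp --dport 5001 \
  -j DNAT --to-destination "${VM_IP}:5001"
iptables -t nat -A POSTROUTING -s "${VM_IP}" -o vmbr0 -j MASQUERADE
```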
  15. segator

    XPenology en Proxmox [SOLVED]

    If you have any problems: I have an Xpenology running in a VM on an OVH server and it works great!