Community Reputation

4 Neutral

1 Follower

About segator

  • Rank
    Junior Member


  1. segator

    NVMe cache support

    ash-4.3# synonvme --m2-card-model-get /dev/nvme0
    Not M.2 adapter card
    ash-4.3# synodiskport -cache
    nvme0n1 nvme1n1
    ash-4.3# fdisk -l /dev/nvme0n1
    Disk /dev/nvme0n1: 232.9 GiB, 250059350016 bytes, 488397168 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0x1dff4e7c

    Device         Boot Start       End   Sectors   Size Id Type
    /dev/nvme0n1p1      2048 488392064 488390017 232.9G fd Linux raid autodetect

    ash-4.3# udevadm info /dev/nvme0n1
    P: /devices/pci0000:00/0000:00:13.1/0000:02:00.0/nvme/nvme0/nvme0n1
    N: nvme0n1
    E: DEVNAME=/dev/nvme0n1
    E: DEVPATH=/devices/pci0000:00/0000:00:13.1/0000:02:00.0/nvme/nvme0/nvme0n1
    E: DEVTYPE=disk
    E: ID_PART_TABLE_TYPE=dos
    E: MAJOR=259
    E: MINOR=0
    E: PHYSDEVBUS=pci
    E: PHYSDEVDRIVER=nvme
    E: PHYSDEVPATH=/devices/pci0000:00/0000:00:13.1/0000:02:00.0
    E: SUBSYSTEM=block
    E: SYNO_ATTR_SERIAL=S465NF0K846478F
    E: SYNO_DEV_DISKPORTTYPE=CACHE
    E: SYNO_INFO_PLATFORM_NAME=apollolake
    E: SYNO_KERNEL_VERSION=4.4
    E: USEC_INITIALIZED=704649
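The key detail in the dump above is the `SYNO_DEV_DISKPORTTYPE=CACHE` property: DSM's storage layer appears to decide cache eligibility from udev properties rather than from the physical slot alone. A minimal sketch of pulling that property out of a captured dump (the sample text is copied from the session above; on a live box you would pipe `udevadm info /dev/nvme0n1` instead of the here-string):

```shell
# Sample udev properties captured from the udevadm dump above (DS918+).
props='E: SYNO_ATTR_SERIAL=S465NF0K846478F
E: SYNO_DEV_DISKPORTTYPE=CACHE
E: SYNO_INFO_PLATFORM_NAME=apollolake'

# Extract the port type DSM uses to classify the device; this parsing is an
# illustrative sketch, not Synology tooling.
port_type=$(printf '%s\n' "$props" | sed -n 's/^E: SYNO_DEV_DISKPORTTYPE=//p')
echo "$port_type"   # prints CACHE
```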
  2. segator

    NVMe cache support

    Hey, I have a Synology DS918+ with two NVMe Samsung 970 EVO 250 GB drives as a read-write cache. Hope it helps.

    ash-4.3# lspci -k
    00:00.0 Class 0600: Device 8086:5af0 (rev 0b)
            Subsystem: Device 8086:7270
    00:02.0 Class 0300: Device 8086:5a85 (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: i915
    00:0e.0 Class 0403: Device 8086:5a98 (rev 0b)
            Subsystem: Device 8086:7270
    00:0f.0 Class 0780: Device 8086:5a9a (rev 0b)
            Subsystem: Device 8086:7270
    00:11.0 Class 0050: Device 8086:5aa2 (rev 0b)
            Subsystem: Device 8086:7270
    00:12.0 Class 0106: Device 8086:5ae3 (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: ahci
    00:13.0 Class 0604: Device 8086:5ad8 (rev fb)
            Kernel driver in use: pcieport
    00:13.1 Class 0604: Device 8086:5ad9 (rev fb)
            Kernel driver in use: pcieport
    00:13.2 Class 0604: Device 8086:5ada (rev fb)
            Kernel driver in use: pcieport
    00:13.3 Class 0604: Device 8086:5adb (rev fb)
            Kernel driver in use: pcieport
    00:14.0 Class 0604: Device 8086:5ad6 (rev fb)
            Kernel driver in use: pcieport
    00:15.0 Class 0c03: Device 8086:5aa8 (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: xhci_hcd
    00:16.0 Class 1180: Device 8086:5aac (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: intel-lpss
    00:18.0 Class 1180: Device 8086:5abc (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: intel-lpss
    00:18.1 Class 1180: Device 8086:5abe (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: intel-lpss
    00:18.2 Class 1180: Device 8086:5ac0 (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: intel-lpss
    00:18.3 Class 1180: Device 8086:5aee (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: intel-lpss
    00:19.0 Class 1180: Device 8086:5ac2 (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: intel-lpss
    00:19.1 Class 1180: Device 8086:5ac4 (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: intel-lpss
    00:19.2 Class 1180: Device 8086:5ac6 (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: intel-lpss
    00:1a.0 Class 0c80: Device 8086:5ac8 (rev 0b)
            Subsystem: Device 8086:7270
    00:1f.0 Class 0601: Device 8086:5ae8 (rev 0b)
            Subsystem: Device 8086:7270
    00:1f.1 Class 0c05: Device 8086:5ad4 (rev 0b)
            Subsystem: Device 8086:7270
            Kernel driver in use: i801_smbus
    01:00.0 Class 0106: Device 1b4b:9215 (rev 11)
            Subsystem: Device 1b4b:9215
            Kernel driver in use: ahci
    02:00.0 Class 0108: Device 144d:a808
            Subsystem: Device 144d:a801
            Kernel driver in use: nvme
    03:00.0 Class 0108: Device 144d:a808
            Subsystem: Device 144d:a801
            Kernel driver in use: nvme
    04:00.0 Class 0200: Device 8086:1539 (rev 03)
            Subsystem: Device 8086:0000
            Kernel driver in use: igb
    05:00.0 Class 0200: Device 8086:1539 (rev 03)
            Subsystem: Device 8086:0000
            Kernel driver in use: igb
    ash-4.3# ls /dev/nvm*
    /dev/nvme0  /dev/nvme0n1  /dev/nvme0n1p1  /dev/nvme1  /dev/nvme1n1  /dev/nvme1n1p1
    ash-4.3# udevadm info /dev/nvme0
    P: /devices/pci0000:00/0000:00:13.1/0000:02:00.0/nvme/nvme0
    N: nvme0
    E: DEVNAME=/dev/nvme0
    E: DEVPATH=/devices/pci0000:00/0000:00:13.1/0000:02:00.0/nvme/nvme0
    E: MAJOR=250
    E: MINOR=0
    E: PHYSDEVBUS=pci
    E: PHYSDEVDRIVER=nvme
    E: PHYSDEVPATH=/devices/pci0000:00/0000:00:13.1/0000:02:00.0
    E: SUBSYSTEM=nvme
    E: SYNO_INFO_PLATFORM_NAME=apollolake
    E: SYNO_KERNEL_VERSION=4.4
    E: USEC_INITIALIZED=704606
    ash-4.3# synonvme --get-location /dev/nvme0
    Can't get the location of /dev/nvme0
    ash-4.3# synonvme --port-type-get /dev/nvme0
    Unknown.
    ash-4.3# ls /sys/block
    dm-0   loop1  loop3  loop5  loop7  md1  md4      nvme1n1  ram1   ram11  ram13  ram15  ram3  ram5  ram7  ram9  sdb  synoboot  zram1  zram3
    loop0  loop2  loop4  loop6  md0    md3  nvme0n1  ram0     ram10  ram12  ram14  ram2   ram4  ram6  ram8  sda   sdd  zram0     zram2
    ash-4.3# ls /run/synostorage/disks
    nvme0n1  nvme1n1  sda  sdb  sdd
    ash-4.3#
  3. Hi guys, I'm trying to add KVM drivers to the XPenology jun mod. I successfully loaded some modules using the tutorial on this forum; lsmod shows the modules loaded. The virtio network is working, but I am not able to get 9p working (9p is used to mount host folders into the guest, with better performance than NFS, CIFS, etc.; it is almost native performance). The driver is loaded, but when trying to mount I get this error in the VM log. Command used:

    mount -t 9p -o trans=virtio,version=9p2000.L hostshare /tmp/host_files

    [  144.451391] BUG: unable to handle kernel paging request at 00007f000183c9ec
    [  144.452284] IP: [<ffffffffa0bbcb4c>] p9_fid_create+0x7c/0xe0 [9pnet]
    [  144.452284] PGD 0
    [  144.452284] Oops: 0000 [#1] SMP
    [  144.452284] Modules linked in: 9p bridge stp aufs macvlan veth xt_conntrack xt_addrtype nf_conntrack_ipv6 nf_defrag_ipv6 ip6table_filter ip6_tables ipt_MASQUERADE xt_REDIRECT xt_nat iptable_nat nf_nat_ipv4 nf_nat xt_recent xt_iprange xt_limit xt_state xt_tcpudp xt_multiport xt_LOG nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack iptable_filter ip_tables x_tables cifs udf isofs loop iscsi_target_mod(O) target_core_ep(O) target_core_file(O) target_core_iblock(O) target_core_mod(O) syno_extent_pool(PO) rodsp_ep(O) hid_generic usbhid hid usblp bromolow_synobios(PO) 9pnet_virtio 9pnet virtio_balloon virtio_net button ax88179_178a usbnet tg3 r8169 cnic bnx2 vmxnet3 pcnet32 e1000 sfc netxen_nic qlge qlcnic qla3xxx pch_gbe ptp_pch sky2 skge jme ipg uio alx atl1c atl1e atl1 libphy mii btrfs synoacl_vfs(PO) zlib_deflate hfsplus md4 hmac bnx2x(O) libcrc32c mdio mlx5_core(O) mlx4_en(O) mlx4_core(O) mlx_compat(O) compat(O) r8168(O) tn40xx(O) i40e(O) ixgbe(O) be2net(O) igb(O) i2c_algo_bit e1000e(O) dca fuse vfat fat crc32c_intel aesni_intel glue_helper lrw gf128mul ablk_helper arc4 cryptd ecryptfs sha512_generic sha256_generic sha1_generic ecb aes_x86_64 authenc des_generic ansi_cprng cts md5 cbc cpufreq_conservative cpufreq_powersave cpufreq_performance cpufreq_ondemand mperf processor thermal_sys cpufreq_stats freq_table dm_snapshot crc_itu_t crc_ccitt quota_v2 quota_tree psnap p8022 llc sit tunnel4 ip_tunnel ipv6 zram(C) sg etxhci_hcd aacraid virtio_scsi virtio_blk virtio_pci virtio_mmio virtio_ring virtio mpt3sas mpt2sas(O) megaraid_sas ata_piix mptctl mptsas mptspi mptscsih mptbase scsi_transport_spi megaraid megaraid_mbox megaraid_mm vmw_pvscsi BusLogic usb_storage xhci_hcd uhci_hcd ohci_hcd ehci_pci ehci_hcd usbcore usb_common el000(O) [last unloaded: bromolow_synobios]
    [  144.452284] CPU: 1 PID: 12863 Comm: mount Tainted: P         C O 3.10.102 #15152
    [  144.452284] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS Ubuntu-1.8.2-1ubuntu1 04/01/2014
    [  144.452284] task: ffff88007a8b1800 ti: ffff8800790d0000 task.ti: ffff8800790d0000
    [  144.452284] RIP: 0010:[<ffffffffa0bbcb4c>]  [<ffffffffa0bbcb4c>] p9_fid_create+0x7c/0xe0 [9pnet]
    [  144.452284] RSP: 0018:ffff8800790d3c48  EFLAGS: 00010246
    [  144.452284] RAX: 00007f000183c9d0 RBX: ffff8800784bf640 RCX: 0000000000000000
    [  144.452284] RDX: ffffffffa0bc533c RSI: ffffffffa0bc4060 RDI: ffff88007b55d000
    [  144.452284] RBP: ffff88007b55d000 R08: ffff8800787cfb80 R09: ffff880076c4a640
    [  144.452284] R10: 0000000000000000 R11: 0000000000000000 R12: ffff88007b55d000
    [  144.452284] R13: ffff880067b30c48 R14: ffff88007b62b500 R15: ffff88007bc51e40
    [  144.452284] FS:  00007efc7192d7c0(0000) GS:ffff88007fc80000(0000) knlGS:0000000000000000
    [  144.452284] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [  144.452284] CR2: 00007f000183c9ec CR3: 000000007b6de000 CR4: 00000000000407e0
    [  144.452284] DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
    [  144.452284] DR3: 0000000000000000 DR6: 00000000ffff0ff0 DR7: 0000000000000400
    [  144.452284] Stack:
    [  144.452284]  ffff880067b30c00 ffff8800790d3cb0 ffffffffa0bbed7d ffffffff8127138b
    [  144.452284]  ffff8800790d3ce0 ffffffff7b7ca640 ffff880077260000 0000000000000000
    [  144.452284]  ffff880067b30c00 0000000000000000 ffff880067b30c48 ffff88007b7ca640
    [  144.452284] Call Trace:
    [  144.452284]  [<ffffffffa0bbed7d>] ? p9_client_attach+0x17d/0x1a0 [9pnet]
    [  144.452284]  [<ffffffff8127138b>] ? match_token+0x20b/0x220
    [  144.452284]  [<ffffffffa0fc6556>] ? v9fs_session_init+0x206/0x6c0 [9p]
    [  144.452284]  [<ffffffff810c8baf>] ? pcpu_alloc_area+0x14f/0x360
    [  144.452284]  [<ffffffffa0fc12c6>] ? v9fs_mount+0x66/0x340 [9p]
    [  144.452284]  [<ffffffff810f9bd1>] ? mount_fs+0x31/0x1b0
    [  144.452284]  [<ffffffff8111899d>] ? vfs_kern_mount+0x5d/0xf0
    [  144.452284]  [<ffffffff81119efc>] ? do_new_mount+0xac/0x2c0
    [  144.452284]  [<ffffffff810296e8>] ? __do_page_fault+0x1b8/0x480
    [  144.452284]  [<ffffffff81238c05>] ? apparmor_capable+0x15/0x140
    [  144.452284]  [<ffffffff8111b230>] ? do_mount+0x3b0/0x990
    [  144.452284]  [<ffffffff8111b8ab>] ? SyS_mount+0x9b/0x110
    [  144.452284]  [<ffffffff814adf72>] ? system_call_fastpath+0x16/0x1b
    [  144.452284] Code: 08 48 c7 43 10 00 00 00 00 48 89 ef 48 c7 43 18 00 00 00 00 c7 43 0c ff ff ff ff 65 48 8b 04 25 c0 a7 00 00 48 8b 80 80 03 00 00 <8b> 40 1c 48 89 2b 48 c7 43 28 00 00 00 00 89 43 24 e8 9e 09 8f
    [  144.452284] RIP  [<ffffffffa0bbcb4c>] p9_fid_create+0x7c/0xe0 [9pnet]
    [  144.452284] RSP <ffff8800790d3c48>
    [  144.452284] CR2: 00007f000183c9ec
    [  144.519077] ---[ end trace 9905cd69f413eb3e ]---

    Any idea what the problem could be? Thank you
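For context, the 9p path needs both sides wired up before the guest mount can succeed. A rough sketch of the usual setup follows: the mount tag `hostshare` and the guest mount command are from the post above, while the QEMU flags and the host path are assumptions for illustration. Note the backtrace points at the guest's 9pnet module itself faulting, so rebuilding that module correctly is the real prerequisite, not the mount options.

```shell
# Host (QEMU) side: export a host directory over virtio-9p (flags/path assumed).
#   qemu-system-x86_64 ... \
#     -fsdev local,id=fsdev0,path=/mnt/user/host_files,security_model=mapped-xattr \
#     -device virtio-9p-pci,fsdev=fsdev0,mount_tag=hostshare

# Guest (DSM) side: mount the exported tag, as in the post above.
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /tmp/host_files
```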
  4. Hi guys, I created a first version of XPenology in Docker (inside a KVM VM). My goal was to get the unRAID HDD system plus DSM's backup/cloud applications.
  5. Yes, it is exactly what you are thinking! But there are some traps. We can run XPenology fully functional in Docker, but only inside a KVM VM; still, it works pretty well. Right now what I have is a simple Docker image that runs a clean instance of XPenology, so the installation page is shown for you to install. I am thinking about which features could be interesting for you. Support for Docker volumes inside XPenology, e.g. -v /myhost/data:/Volume1/myshare? Please let me know what you think would be interesting. When I have a more stable version I will publish it on GitHub for all of you. The image is based on one I made for my work (Windows Server 2012 inside Docker), which I reused for XPenology. I made the image because I wanted the pretty UI of XPenology plus unRAID disk management (it almost always keeps the HDDs spun down). You can share your unRAID data using NFS/GlusterFS and, if someone adds the 9p virtio driver to the synoboot image, 9p.
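As a sketch of how the proposed volume mapping might look from the host side: the image name `segator/xpenology` is a placeholder assumption (the image was never named in the post), `/dev/kvm` is passed through because, as described above, the container boots DSM inside a nested KVM VM, and 5000 is DSM's default web UI port.

```shell
# Hypothetical 'docker run' for the image described above (image name assumed).
# /dev/kvm passthrough is required because the container runs DSM in a KVM VM.
docker run -d \
  --device /dev/kvm \
  -v /myhost/data:/Volume1/myshare \
  -p 5000:5000 \
  segator/xpenology
```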
  6. segator

    DSM 6.1.x Loader

    Hi, thank you for your work, it's working perfectly. I'm running DSM 6 on KVM on an unRAID host, mounting NFS shares from my unRAID server into DSM, so I can use almost all the features DSM has (except the Btrfs things). The only problem is that the virtio (paravirtualized) drivers don't work. Would it be possible to add these drivers? For now I'm using SATA emulation and e1000 for the network.
  7. segator

    DSM 6.1.x Loader

    Hi guys, I can't help you test, but I was thinking you may need somewhere to keep files and an automatic build system. If any of you can explain to me how to compile the whole system, I will set up a simple continuous integration server (Jenkins).
  8. segator

    Working DSM 6

    I tested your vmdk in VMware and it works pretty well, thanks guys! But I would like to use it on KVM (because I use unRAID as a backend, and I prefer that storage system to a typical mdadm RAID) and share volumes over NFS. Does anyone know how I can install it on KVM?
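For anyone attempting this, a KVM invocation along these lines is one way to start. This is a sketch under assumptions: the file names (`synoboot.img`, `disk1.qcow2`) and bridge name are placeholders, and SATA emulation plus e1000 networking are chosen because other posts in this profile note that the loader lacks working virtio drivers.

```shell
# Sketch of a qemu-kvm invocation for booting the DSM loader under KVM.
# File names and the bridge are placeholders; SATA + e1000 avoid virtio.
qemu-system-x86_64 -enable-kvm -m 2048 -smp 2 \
  -device ich9-ahci,id=sata \
  -drive file=synoboot.img,if=none,format=raw,id=boot \
  -device ide-hd,bus=sata.0,drive=boot \
  -drive file=disk1.qcow2,if=none,format=qcow2,id=disk1 \
  -device ide-hd,bus=sata.1,drive=disk1 \
  -netdev bridge,id=net0,br=br0 \
  -device e1000,netdev=net0
```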
  9. segator


    Wait: with 5.2, Docker is coming, and you will have access to any Linux OS you want. If you don't know what that is, search Google; you'll see it's very good. I've been using Docker for months.
  10. I'm a happy owner of an unRAID server. I changed from XPenology to unRAID for four things:
    - Individual HDD hibernation (unRAID only spins up the drive holding the file you are trying to read)
    - More flexibility adding/removing disks (you can add whatever disk you want, with no restrictions on creating the array; the only requirement is that the parity drive must be the biggest)
    - More security (with SHR on DSM you have one-disk protection, so if two drives fail at the same time you lose ALL the data on the array, whereas unRAID only loses the data on the two failed drives)
    - The possibility to virtualize (KVM/Xen) and run Docker (not virtualization, but it feels like it)
    The problems with unRAID are:
    - Very slow writes (if you need performance you have to install an SSD cache)
    - Read speed equal to the speed of the drive holding the file you want to read (slower than mdadm-style RAID like Synology DSM's)
    - A GUI with fewer possibilities compared to DSM (lots of open-source applications do the same things, but you won't have the centralized GUI with everything together)
    - It requires some system knowledge
  11. I propose using BitTorrent Sync to share the files.
  12. I have an XPenology VM guest running under KVM on an unRAID host. Is it possible to mount 9p virtual shares on XPenology? I want to mount directories from unRAID into XPenology. I tried NFS, but it's slow because the data has to pass over the network interface (the VM is bridged).
  13. If I put it in IDE mode, will my 6 TB and 8 TB drives work? Is IDE mode slower than AHCI mode? Does changing the mode require reformatting the drives? And the serial port you were explaining: does it have any effect in the end? Thanks, I'm having the same problem. For now I run 5.0 perfectly on a C2550D4I.