About laris


  1. A good NAS product needs a few essential things: 1) data integrity, to keep the data safe; 2) data services and management, so clients can access the data and it is easy to manage; 3) flexibility: clients on all platforms and easy integration with external services and 3rd-party applications; 4) maintainability: upgrades, a rescue plan, etc.

     Checking DSM item by item: 1) DSM does not provide full end-to-end data integrity, but the rootfs is very well protected (md0/RAID1 across all disks) when no additional HW RAID is used; 2) DSM integrates very good data services, from basic samba/nfs/afp (netatalk) to PS/DS/synoDrive/Moments etc., with a well-documented public API; 3) DSM has clients for almost every platform (PC/Mac/Android/iOS) and good 3rd-party apps; 4) upgrades are a big problem on non-Synology hardware, and rescuing the rootfs/data is not easy.

     Meanwhile, IT services are trending toward microservices and decoupling, and docker/lxc has been even more successful than VMs. DSM 6.x already offers DDSM (Docker DSM) as a service (it has licensing issues, but that is another topic; this post focuses on the technical side). So, would porting DDSM to a plain Linux docker/lxc host make sense in the future? The advantages: 1) data integrity can be provided by the host, where e.g. ZFS would be a good solution; 2) DDSM can mount a host volume into the container and serve that data to clients; 3) the host can easily integrate other 3rd-party services via docker, instead of relying on DSM 3rd-party applications; 4) enabling and maintaining DDSM upgrades will be the big challenge; 5) there are no HW compatibility issues, since plain Linux handles the hardware, so the bootloader is no longer essential.

     Recently some porting work has been done: the container runs and the basic DSM system services can be started manually, but login still fails with an unknown error, possibly s30_synocheckuser failing the user/group check. Is any developer interested in this topic and willing to contribute together? Thanks, -L
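     The host-provides-integrity layout in points 1 and 2 above could be sketched roughly as follows. This is only an illustration of the idea, not a working recipe: the pool/dataset names and especially the "ddsm" image tag are hypothetical (no such published image exists), and the commands assume a host that already has ZFS and docker installed.

     ```shell
     # Hypothetical sketch: the host owns data integrity (ZFS checksums,
     # scrub, redundancy); a containerized DSM only serves the data.
     zfs create tank/nas-data              # host-side dataset (names are examples)

     docker run -d --name ddsm \
       -v /tank/nas-data:/volume1 \        # host volume mounted into the container
       -p 5000:5000 -p 445:445 \           # DSM web UI and SMB exposed to clients
       ddsm:latest                         # hypothetical DDSM container image
     ```

     With this split, a failed DSM upgrade would only affect the container, while the data stays on the host pool.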
  2. Solved it: adjust the kernel module load order like this:

     EXTRA_MODULES="mii mdio libphy atl1 atl1e atl1c alx uio ipg jme skge sky2 ptp_pch pch_gbe qla3xxx qlcnic qlge netxen_nic sfc e1000 pcnet32 vmxnet3 bnx2 libcrc32c bnx2x cnic e1000e igb ixgbe r8101 r8168 r8169 tg3 usbnet ax88179_178a button evdev ohci-hcd virtio virtio_ring virtio_pci virtio_mmio virtio_balloon virtio_net virtio_blk virtio_scsi virtio_console 9pnet 9pnet_virtio fscache 9p aufs"

     But there are still issues with some other modules:

     root@DSMPVE:~# dmesg|grep virtio
     [ 14.131531] virtio-pci 0000:00:11.0: setting latency timer to 64
     [ 14.136262] virtio_balloon: Unknown symbol balloon_mapping_alloc (err 0)
     [ 14.145472] virtio_console: Unknown symbol hvc_remove (err 0)
     [ 14.145487] virtio_console: Unknown symbol hvc_kick (err 0)
     [ 14.145495] virtio_console: Unknown symbol hvc_alloc (err 0)
     [ 14.145503] virtio_console: Unknown symbol hvc_poll (err 0)
     [ 14.145513] virtio_console: Unknown symbol hvc_instantiate (err 0)
     [ 14.145521] virtio_console: Unknown symbol __hvc_resize (err 0)
     [ 14.150363] virtio-pci 0000:00:11.0: irq 44 for MSI/MSI-X
     [ 14.150382] virtio-pci 0000:00:11.0: irq 45 for MSI/MSI-X
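     An "Unknown symbol" line means a module was loaded before the module that exports that symbol (e.g. virtio_console needs the hvc_* symbols from the hvc console driver), which is why the order inside EXTRA_MODULES matters. A small helper like the following can pull the still-failing modules out of a saved dmesg dump; the function name and file path are illustrative, and the sample log is an excerpt of the output above.

     ```shell
     # List modules that reported "Unknown symbol" in a saved dmesg log, so you
     # can see which entries must move after their dependencies in EXTRA_MODULES.
     failed_modules() {
         sed -n 's/^\[[^]]*\] \([a-zA-Z0-9_-]*\): Unknown symbol .*/\1/p' "$1" | sort -u
     }

     # Sample excerpt taken from the dmesg output above.
     cat > /tmp/dmesg_sample.log <<'EOF'
     [ 14.136262] virtio_balloon: Unknown symbol balloon_mapping_alloc (err 0)
     [ 14.145472] virtio_console: Unknown symbol hvc_remove (err 0)
     [ 14.145487] virtio_console: Unknown symbol hvc_kick (err 0)
     EOF

     failed_modules /tmp/dmesg_sample.log   # prints: virtio_balloon, virtio_console
     ```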
  3. Hi @segator, did you test the latest Jun's loader 1.03b for DSM 6.2? I compiled the 9p kernel module but failed to mount the host shared dir in Proxmox. There are also some virtio "Unknown symbol" errors in dmesg:

     root@DSMPVE:/sys/class# dmesg|grep virtio
     [ 11.462819] virtio_pci: Unknown symbol vring_transport_features (err 0)
     [ 11.462831] virtio_pci: Unknown symbol vring_interrupt (err 0)
     [ 11.462837] virtio_pci: Unknown symbol vring_new_virtqueue (err 0)
     [ 11.462848] virtio_pci: Unknown symbol vring_del_virtqueue (err 0)
     [ 11.464985] virtio_mmio: Unknown symbol vring_transport_features (err 0)
     [ 11.464996] virtio_mmio: Unknown symbol vring_interrupt (err 0)
     [ 11.465004] virtio_mmio: Unknown symbol vring_new_virtqueue (err 0)
     [ 11.465039] virtio_mmio: Unknown symbol vring_del_virtqueue (err 0)
     [ 11.467112] virtio_balloon: Unknown symbol virtqueue_get_buf (err 0)
     [ 11.467118] virtio_balloon: Unknown symbol virtqueue_add_outbuf (err 0)
     [ 11.467124] virtio_balloon: Unknown symbol virtqueue_kick (err 0)
     [ 11.467133] virtio_balloon: Unknown symbol balloon_mapping_alloc (err 0)
     [ 11.469214] virtio_net: Unknown symbol virtqueue_enable_cb_prepare (err 0)
     [ 11.469221] virtio_net: Unknown symbol virtqueue_detach_unused_buf (err 0)
     [ 11.469226] virtio_net: Unknown symbol virtqueue_poll (err 0)
     [ 11.469243] virtio_net: Unknown symbol virtqueue_get_vring_size (err 0)
     [ 11.469250] virtio_net: Unknown symbol virtqueue_disable_cb (err 0)
     [ 11.469257] virtio_net: Unknown symbol virtqueue_add_sgs (err 0)
     [ 11.469263] virtio_net: Unknown symbol virtqueue_get_buf (err 0)
     [ 11.469268] virtio_net: Unknown symbol virtqueue_add_outbuf (err 0)
     [ 11.469274] virtio_net: Unknown symbol virtqueue_kick (err 0)
     [ 11.469280] virtio_net: Unknown symbol virtqueue_add_inbuf (err 0)
     [ 11.469287] virtio_net: Unknown symbol virtqueue_enable_cb_delayed (err 0)
     [ 11.473865] 9pnet: Installing 9P2000 support
     [ 11.478168] 9p: Installing v9fs 9p2000 file system support
     [ 331.115294] 9pnet_virtio: no channels available
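     For reference, "9pnet_virtio: no channels available" usually means the guest has no virtio-9p device attached, not a module problem. Once the 9p modules do load cleanly, the usual way to wire a share through is the standard QEMU virtfs arguments plus a 9p mount in the guest; this is a sketch under assumptions, where the share path, mount_tag and VM id 100 are examples:

     ```shell
     # Host side (Proxmox): attach a virtio-9p device to VM 100 via raw QEMU args.
     qm set 100 -args "-fsdev local,security_model=passthrough,id=fsdev0,path=/tank/share \
       -device virtio-9p-pci,id=fs0,fsdev=fsdev0,mount_tag=hostshare"

     # Guest side (DSM): mount the tag once 9p and 9pnet_virtio are loaded.
     mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/hostshare
     ```

     (No test is included here since the commands require a Proxmox host and a running VM.)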
  4. You can use "qm terminal 100" to monitor the DSM VM's serial port and debug any failure. I also got a "mount failed" like others on Proxmox, but it succeeded when using a SATA image disk. I guess there may be some error in the bootloader for 6.2 (Jun's loader 1.03b).
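     For this to work the VM needs a serial device configured, since the loader sends its console output there. A minimal sketch, using VM id 100 as in the post:

     ```shell
     # Add a socket-backed serial port to the VM (takes effect after a restart).
     qm set 100 -serial0 socket

     # Attach to the serial console; press Ctrl+O to detach.
     qm terminal 100
     ```

     (No test is included since these commands only run on a Proxmox host.)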
  5. Hi @segator, did you solve the problem and find the root cause?
  6. - Outcome of the update: SUCCESSFUL - DSM version prior update: DSM 6.1.5-15254 - Loader version and model: Jun's Loader v1.02b - DS3615XS - Using custom extra.lzma: NO - Installation type: VM - Proxmox - Additional comments: the 6.1.5-15254 to 6.1.6-15266 update rebooted automatically // the 6.1.6-15266 to 6.1.6-15266 Update 1 update did not reboot
  7. Thank you! I will try it. But maybe a VM will be better; I will try to compare.
  8. I checked your GitHub and understand the docker VM approach. Proxmox can also run docker, and I will try your published code. Is this what you mentioned, or is there a newer version?
  9. Hi @segator, I searched on GitHub, and I guess this is what you mentioned?
  10. That sounds excellent. I'm running Proxmox on a Gen8 with a ZFS data pool; if XPEnology could run in an LXC container, that would be great, since it avoids creating a VM. Could you share your code on GitHub?
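      The ZFS-pool-plus-LXC setup described here could look roughly like the following sketch. The pool name, device names, dataset and container id are all illustrative, not from the original posts:

      ```shell
      # Create a mirrored ZFS data pool on the Proxmox host (devices are examples).
      zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
      zfs create -o compression=lz4 tank/data

      # Bind-mount the dataset into LXC container 101 as /volume1.
      pct set 101 -mp0 /tank/data,mp=/volume1
      ```

      (No test is included since the commands require a Proxmox host with spare disks.)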