XPEnology Community

What's the point of DSM6?


fonix232

Question

And I really mean it.

 

I've been around for quite some time now (roughly three years, maybe a bit more?), and have been using XPEnology as my NAS OS for most of that time.

 

Now I've re-initialized my server, and... it sucks. Because I installed 6.0, and a good majority of the 3rd-party packages stopped working. Seriously guys, how do you do this? How do you live without almost 90% of all available apps? Is the 6.0 update really worth that much?

 

For me, it isn't.


14 answers to this question


> I don't know what apps you are using, but if they are not compatible with DSM 6, that means they haven't been updated for at least a year...

 

Let's see...

 

Transmission, Sickbeard, Sonarr, CouchPotato, Deluge, Mosquitto - shall I go on? 90% of the SynoCommunity packages won't run, because Synology changed a lot of things, essentially breaking packages without an easy way to fix them.



Apart from VirtualBox, everything else is (more or less) available as Docker images.

 

A dockerized application is highly likely to get a version update way earlier than its Synology package counterpart (if there is one at all).

Turning an application into a Synology package is way harder than creating a Docker image - everybody with basic Linux knowledge is able to create one... though not everything can be turned into a container (apps that need to load kernel drivers or access a special device).
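Just to show how low the bar is, a complete (made-up) Dockerfile for a single-process app can be as small as this:

    # base image, package install, published port, start command
    FROM debian:jessie
    RUN apt-get update && apt-get install -y transmission-daemon
    EXPOSE 9091
    # run in the foreground so docker can supervise the process
    CMD ["transmission-daemon", "--foreground"]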

 

It's perfectly possible to stay on DSM 5.2 and use Docker as well, though lacking the ability to run containers in net=host mode and lacking docker-compose support are unpleasant limitations.
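Just to illustrate what net=host means, with made-up container names (on DSM 5.2 only the second form is available):

    # host networking: the container shares the DSM network stack directly,
    # so no -p port mappings are needed (or possible)
    docker run -d --net=host --name plex linuxserver/plex

    # bridge networking: every port has to be mapped explicitly
    docker run -d -p 32400:32400 --name plex linuxserver/plex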



I guess it depends on your needs and resources.

I'm actually still waiting for quicknick's 2.3, so I haven't updated yet (on 5.2-5967 now).

I'll certainly try to install DSM 6 if it's possible on my machine (Intel Atom 1.6, 945GCLF - or maybe I'll get myself a newer machine).

But for now I still prefer DSM 4.x over DSM 5.x... lower resource usage, and all my apps work there.

IMO, the DSM web management works faster on 4.x than on 5.x... and most likely faster than on 6.x.

But that's just me, and I'm only using Transmission and VirtualBox.



 

For me, Docker is still a bit of alien ground. I understand how it works, but setting it up is often very tedious - especially when you have multiple tools that need to work together (Sonarr - Transmission - Plex, to mention just one trio).

 

I might try it in the future though. For now I'm back to 5.2.


 

I can imagine that working with Docker might appear to be challenging... though once the basics are understood, it really is not.

Writing Dockerfiles is also not that hard if you know how to write bash scripts (and use other Dockerfiles to see how they solve things).

 

You need to figure out the port ( -p {port-dsm-side}:{port-container-side} ) and volume ( -v {path-dsm-side}:{path-container-side} ) mappings, as well as the environment variables ( -e {variablename-inside-container}={value} ) that are required to properly operate an image.
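Put together, a typical run command looks something like this (the ports, paths and variables below are just examples - the real ones depend on the image):

    # -d: run detached; --name: a handle for later docker commands
    docker run -d --name sonarr \
      -p 8989:8989 \
      -v /volume1/docker/sonarr:/config \
      -v /volume1/video/series:/tv \
      -e TZ=Europe/Berlin \
      linuxserver/sonarr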

 

I would recommend using the following images for your purpose: linuxserver/sonarr, linuxserver/transmission, linuxserver/plex.

Take a look at the usage section on Docker Hub to see which port and volume mappings are required and which environment variables are useful.

If you want to access the locally mapped path of a volume from a share, I would suggest setting the PUID and PGID environment variables to a UID and GID that actually have permission to access the share (use 'id {username}' in a shell to figure both out). To keep things easy, you can use http://{your-syno-ip-or-hostname}:{port-dsm-side} in one container to access another one.
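A concrete sketch (the username and IDs are placeholders - look up your own with 'id'):

    # figure out the UID/GID of the user that owns the share
    id mediauser
    # -> uid=1026(mediauser) gid=100(users) groups=100(users)

    # pass them in, so files the container creates stay accessible from the share
    docker run -d --name transmission \
      -p 9091:9091 \
      -e PUID=1026 -e PGID=100 \
      -v /volume1/docker/transmission:/config \
      -v /volume1/downloads:/downloads \
      linuxserver/transmission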



> I can imagine that working with Docker might appear to be challenging... though once the basics are understood, it really is not.
>
> Writing Dockerfiles is also not that hard if you know how to write bash scripts (and use other Dockerfiles to see how they solve things).

 

Dockerfiles aren't the problem. I can learn a new syntax and language in no time, and Dockerfiles are very straightforward.

 

> You need to figure out the port ( -p {port-dsm-side}:{port-container-side} ) and volume ( -v {path-dsm-side}:{path-container-side} ) mappings, as well as the environment variables ( -e {variablename-inside-container}={value} ) that are required to properly operate an image.

 

Yup, this is the main issue. I always specify the ports so that each one maps to the same (usual) port on my DSM. E.g. for Plex, I map the container-internal 32400 to 32400 on DSM. Same for the others.
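In other words, something along these lines (a sketch - the image needs more flags than this, of course):

    # same port on both sides, so clients can keep using the default 32400
    docker run -d --name plex -p 32400:32400 linuxserver/plex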

 

> I would recommend using the following images for your purpose: linuxserver/sonarr, linuxserver/transmission, linuxserver/plex.
>
> Take a look at the usage section on Docker Hub to see which port and volume mappings are required and which environment variables are useful.
>
> If you want to access the locally mapped path of a volume from a share, I would suggest setting the PUID and PGID environment variables to a UID and GID that actually have permission to access the share (use 'id {username}' in a shell to figure both out). To keep things easy, you can use http://{your-syno-ip-or-hostname}:{port-dsm-side} in one container to access another one.

 

I used those images, and they worked somewhat okay. Though with Transmission I usually replace the web UI, which AFAIK is not possible with the Docker version.

 

I also like to refer to my server as "localhost", no matter which container. But the containers get their own little network, so Sonarr can't reach Transmission as "localhost:9091".



>> I can imagine that working with Docker might appear to be challenging... though once the basics are understood, it really is not.
>>
>> Writing Dockerfiles is also not that hard if you know how to write bash scripts (and use other Dockerfiles to see how they solve things).

> Dockerfiles aren't the problem. I can learn a new syntax and language in no time, and Dockerfiles are very straightforward.

Agree on that :smile:

 

Most people forget to add supervisors and/or to make user permissions, timezone and locale (and the keyboard layout) configurable via environment parameters. Without a supervisor you can end up with zombie processes and ungraceful shutdowns, especially if more than a single process is started.

Are those things mandatory? Absolutely not! Would I want to live without those additions? Depends on the use case. Most of the time a supervisor and user permissions are enough.
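As a cheap fallback for images that ship without any supervisor, and assuming your Docker version is recent enough to have the flag, there is also --init ('someapp' and 'some/image' are placeholders):

    # --init runs a minimal init process as PID 1 inside the container;
    # it reaps zombie processes and forwards signals for graceful shutdowns
    docker run -d --init --name someapp some/image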

 

That's why I prefer linuxserver.io images - all the images I've used so far had at least the s6 supervisor and take care of the user permissions.

 

>> You need to figure out the port ( -p {port-dsm-side}:{port-container-side} ) and volume ( -v {path-dsm-side}:{path-container-side} ) mappings, as well as the environment variables ( -e {variablename-inside-container}={value} ) that are required to properly operate an image.

 

> Yup, this is the main issue. I always specify the ports so that each one maps to the same (usual) port on my DSM. E.g. for Plex, I map the container-internal 32400 to 32400 on DSM. Same for the others.

I do the same where possible, though from time to time there are collisions with ports that DSM uses itself.

 

>> I would recommend using the following images for your purpose: linuxserver/sonarr, linuxserver/transmission, linuxserver/plex.
>>
>> Take a look at the usage section on Docker Hub to see which port and volume mappings are required and which environment variables are useful.
>>
>> If you want to access the locally mapped path of a volume from a share, I would suggest setting the PUID and PGID environment variables to a UID and GID that actually have permission to access the share (use 'id {username}' in a shell to figure both out). To keep things easy, you can use http://{your-syno-ip-or-hostname}:{port-dsm-side} in one container to access another one.

 

> I used those images, and they worked somewhat okay. Though with Transmission I usually replace the web UI, which AFAIK is not possible with the Docker version.

Figure out the target folder and use a volume to map a folder containing your own files "over" the existing folder.

A missing VOLUME declaration in a Dockerfile does not mean that you can't map a volume to a folder of your choice anyway.

 

Or create a Dockerfile that replaces/adds the missing bits on top of the existing image.
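For example, assuming the web UI lives at /usr/share/transmission/web inside the image (a guess on my part - check the real path with 'docker exec' first), you could map your own UI "over" that folder at run time:

    docker run -d --name transmission -p 9091:9091 \
      -v /volume1/docker/transmission-web:/usr/share/transmission/web \
      linuxserver/transmission

Or bake it in with a two-line Dockerfile (built with 'docker build -t my/transmission .'):

    FROM linuxserver/transmission
    COPY my-web-ui/ /usr/share/transmission/web/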

 

> I also like to refer to my server as "localhost", no matter which container. But the containers get their own little network, so Sonarr can't reach Transmission as "localhost:9091".

Sure it can't. It has to use the Docker host's hostname or IP and the locally mapped port of the target container. If your DSM 6 box has the hostname dsm, you would use dsm:9091 - or link the containers together and use their link names as references. It makes sense to link containers where an application container relies on other containers, e.g. a database container. Though I would not say that Sonarr relies on Transmission; Transmission is just one of the available options for download agents. Thus the Sonarr and Transmission containers are loosely coupled - I wouldn't link those containers, and would use the dockerhost:localport approach instead.
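Both approaches as a sketch (hostname and ports are examples):

    # loose coupling: point Sonarr's download client settings at the
    # Docker host and the mapped port - host: dsm, port: 9091.
    # This keeps working even when the Transmission container is re-created.

    # tight coupling: link the containers and use the link name instead
    docker run -d --name sonarr --link transmission:transmission \
      -p 8989:8989 linuxserver/sonarr
    # then in Sonarr's settings - host: transmission, port: 9091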


> Transmission works great on DSM 6. I was a long-time 5.x (even 4.x) user back in the day. 6 has been great to me.
>
> Sickbeard is available
> CouchPotato is available
> Mosquitto is available
>
> Looks like you need to add the community sources.
>
> [screenshot: Screen_Shot_2017_01_29_at_7_42_54_PM.png]

 

They work great if you installed them on 5.x and then migrated to 6.0...

 

The issue is with the installer. The community packages used a very specific way of creating users with specific rights (often reaching for sudo to create specific folders during install) as part of the installation process. That part fails on DSM 6, thus the packages can't start.



I would also, like haydibe, suggest starting to use Docker more and to stop waiting for Syno third-party packages to get some attention. Example: http://tools.linuxserver.io/dockers

 

You can find basically everything that is able to run on Linux on Docker Hub already, and the good thing is, you can mess with containers as much as you want and at the end of the day just recreate them, without putting your box or data in danger.
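That throwaway quality is literally just (container name as an example):

    # config and data live on the DSM side via the -v mappings,
    # so the container itself is disposable
    docker stop sonarr
    docker rm sonarr
    docker pull linuxserver/sonarr
    # then re-run the exact same 'docker run' command as before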


 

The key part of using a NAS (remember what a Synology box is?) is storing your data without putting your box or data in danger. All the other attached software, or garbage, can be accomplished in any number of other ways. Secure your data first! The problem is that Synology wants profits to grow, so they have to dream up the next thing a user might want, and they just keep adding to the pile.

If you run XPEnology in an ESXi VM, keep it as a NAS, then run all the other stuff safely in separate VMs.
