fonix232

What's the point of DSM6?

Question

And I really mean it.

 

I've been around for quite some time now (roughly three years, maybe a bit more?), and XPEnology has been my NAS OS for most of that time.

 

Now I've re-initialized my server, and... it sucks. I installed 6.0, and the majority of the third-party packages stopped working. Seriously guys, how do you do this? How do you live without almost 90% of the available apps? Is the 6.0 update really worth that much?

 

For me, it isn't.


14 answers to this question

Recommended Posts

  • 0

I don't know what apps you are using, but if they are not compatible with DSM 6, that means they haven't been updated for at least a year...

  • 0
I don't know what apps you are using, but if they are not compatible with DSM 6, that means they haven't been updated for at least a year...

 

Let's see...

 

Transmission, Sickbeard, Sonarr, CouchPotato, Deluge, Mosquitto; shall I go on? 90% of the SynoCommunity packages won't run because Synology changed a lot under the hood, essentially breaking packages without an easy way to fix them.

  • 0

Use them in Docker (I'd guess they all have ready-made containers on Docker Hub already). If nobody has updated those packages in a year, they have probably been abandoned.

  • 0

Apart from VirtualBox, everything else is (more or less) available as Docker images.

 

A dockerized application is highly likely to get a version update way earlier than its Synology package counterpart (if there is one at all).

Turning an application into a Synology package is way harder than creating a Docker image; everybody with basic Linux knowledge is able to create a Docker image. Though not everything can be turned into a container (apps that need to load kernel drivers or access a special device).

 

It's perfectly possible to stay on DSM 5.2 and use Docker as well. Though the inability to run containers in net=host mode and the lack of docker-compose support are unpleasant limitations.

  • 0

I guess it depends on your needs and resources.

I'm still waiting for quicknick 2.3 actually, so I haven't updated yet (on 5.2-5967 now).

I will certainly try to install DSM 6 if possible on my machine (a 1.6 GHz Intel Atom on a 945GCLF board), or maybe get myself a newer machine.

But for now I still prefer DSM 4.x over DSM 5.x: lower resource usage, and all my apps work there.

IMO the DSM web management works faster on 4.x compared to 5.x, and most likely faster than 6.x.

 

But that's just me, and I'm only using Transmission and VirtualBox.

  • 0
Apart from VirtualBox, everything else is (more or less) available as Docker images.

 

A dockerized application is highly likely to get a version update way earlier than its Synology package counterpart (if there is one at all).

Turning an application into a Synology package is way harder than creating a Docker image; everybody with basic Linux knowledge is able to create a Docker image. Though not everything can be turned into a container (apps that need to load kernel drivers or access a special device).

 

It's perfectly possible to stay on DSM 5.2 and use Docker as well. Though the inability to run containers in net=host mode and the lack of docker-compose support are unpleasant limitations.

 

For me, Docker is still somewhat alien ground. I understand how it works, but setting it up is often tedious, especially when you have multiple tools that need to work together (Sonarr, Transmission and Plex, to mention one trio).

 

I might try it in the future though. For now I'm back to 5.2.

  • 0

For me, Docker is still somewhat alien ground. I understand how it works, but setting it up is often tedious, especially when you have multiple tools that need to work together (Sonarr, Transmission and Plex, to mention one trio).

 

I might try it in the future though. For now I'm back to 5.2.

 

I can imagine that working with Docker might appear to be challenging... though once the basics are understood, it really is not.

Writing Dockerfiles is also not that hard if you know how to write bash scripts (and read other Dockerfiles to see how they solve things).

 

You need to figure out the port (-p {port-dsm-side}:{port-container-side}) and volume (-v {path-dsm-side}:{path-container-side}) mappings, as well as the environment variables (-e {variable-name-inside-container}={value}) that are required to properly operate an image.

 

I would recommend the following images for your purpose: linuxserver/sonarr, linuxserver/transmission, linuxserver/plex.

Take a look at the usage section on Docker Hub to see which port and volume mappings are required and which environment variables are useful.

If you want to access the locally mapped path of a volume from a share, I would suggest setting the PUID and PGID environment variables to a UID and GID (use 'id {username}' in a shell to figure both out) that actually has permission to access the share. To keep things easy, one container can use http://{your-syno-ip-or-hostname}:{port-dsm-side} to access another one.
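Putting those mappings together, a minimal sketch of what a run command for one of those images could look like. The host paths and the PUID/PGID values here are assumptions for illustration; substitute the output of 'id {username}' and your own volume layout:

```shell
# Hypothetical Sonarr container using the -p / -v / -e mappings above.
# 1026/100 stand in for the UID/GID reported by `id {username}`;
# the /volume1/... host paths are placeholders for your own shares.
docker run -d --name sonarr \
  -e PUID=1026 \
  -e PGID=100 \
  -p 8989:8989 \
  -v /volume1/docker/sonarr:/config \
  -v /volume1/video:/tv \
  linuxserver/sonarr
```

Another container could then reach it at http://{your-syno-ip-or-hostname}:8989, as described above.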

  • 0

I can imagine that working with Docker might appear to be challenging... though once the basics are understood, it really is not.

Writing Dockerfiles is also not that hard if you know how to write bash scripts (and read other Dockerfiles to see how they solve things).

 

Dockerfiles aren't the problem. I can learn a new syntax and language in no time, and Dockerfiles are very straightforward.

 

You need to figure out the port (-p {port-dsm-side}:{port-container-side}) and volume (-v {path-dsm-side}:{path-container-side}) mappings, as well as the environment variables (-e {variable-name-inside-container}={value}) that are required to properly operate an image.

 

Yup, this is the main issue. I always specify the ports so that the container maps to the same (usual) port on my DSM. E.g. for Plex, I map container-internal 32400 to 32400 on the DSM side. Same for the others.
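A one-to-one mapping like that simply repeats the number on both sides of -p; a hedged sketch using the linuxserver/plex image mentioned in this thread (the host config path is an assumption):

```shell
# Publish the container's Plex port 32400 on the same DSM-side port.
# /volume1/docker/plex is a placeholder host path.
docker run -d --name plex \
  -p 32400:32400 \
  -v /volume1/docker/plex:/config \
  linuxserver/plex
```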

 

I would recommend the following images for your purpose: linuxserver/sonarr, linuxserver/transmission, linuxserver/plex.

Take a look at the usage section on Docker Hub to see which port and volume mappings are required and which environment variables are useful.

If you want to access the locally mapped path of a volume from a share, I would suggest setting the PUID and PGID environment variables to a UID and GID (use 'id {username}' in a shell to figure both out) that actually has permission to access the share. To keep things easy, one container can use http://{your-syno-ip-or-hostname}:{port-dsm-side} to access another one.

 

I used those images, and they worked somewhat okay. Though with Transmission I usually replace the web UI, which AFAIK is not possible with the Docker version.

 

I also like to refer to my server as "localhost", no matter which container. But the containers get their own little network, so Sonarr can't refer to Transmission as "localhost:9091".

  • 0

I can imagine that working with Docker might appear to be challenging... though once the basics are understood, it really is not.

Writing Dockerfiles is also not that hard if you know how to write bash scripts (and read other Dockerfiles to see how they solve things).

 

Dockerfiles aren't the problem. I can learn a new syntax and language in no time, and Dockerfiles are very straightforward.

Agree on that :smile:

 

Most people forget to add a supervisor and/or to make user permissions, timezone and locale (and the keyboard layout) configurable via environment parameters. Without a supervisor you can end up with zombie processes and ungraceful shutdowns, especially if more than a single process is started.

Are those things mandatory? Absolutely not! Would I want to live without those additions? Depends on the use case; most times a supervisor and user permissions are enough.

 

That's why I prefer linuxserver.io images: all the images I have used so far have at least the s6 supervisor and take care of user permissions.

 

You need to figure out the port (-p {port-dsm-side}:{port-container-side}) and volume (-v {path-dsm-side}:{path-container-side}) mappings, as well as the environment variables (-e {variable-name-inside-container}={value}) that are required to properly operate an image.

 

Yup, this is the main issue. I always specify the ports so that the container maps to the same (usual) port on my DSM. E.g. for Plex, I map container-internal 32400 to 32400 on the DSM side. Same for the others.

I do the same where possible. Though from time to time there are collisions with ports that DSM uses itself.

 

I would recommend the following images for your purpose: linuxserver/sonarr, linuxserver/transmission, linuxserver/plex.

Take a look at the usage section on Docker Hub to see which port and volume mappings are required and which environment variables are useful.

If you want to access the locally mapped path of a volume from a share, I would suggest setting the PUID and PGID environment variables to a UID and GID (use 'id {username}' in a shell to figure both out) that actually has permission to access the share. To keep things easy, one container can use http://{your-syno-ip-or-hostname}:{port-dsm-side} to access another one.

 

I used those images, and they worked somewhat okay. Though with Transmission I usually replace the web UI, which AFAIK is not possible with the Docker version.

Figure out the target folder and use a volume to map a folder containing your own files "over" the existing one.

A missing VOLUME declaration in a Dockerfile does not mean you can't map a volume to a folder of your choice anyway.

 

Or create a Dockerfile that replaces/adds the missing bits on top of the existing image.
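The volume-override trick could be sketched like this; both the in-container web UI path and the host paths are assumptions, so check the actual image before relying on them:

```shell
# Map a folder with a custom web UI "over" the one the image ships.
# /usr/share/transmission/web is a guess at the UI location inside the
# image; the /volume1/... host paths are placeholders.
docker run -d --name transmission \
  -p 9091:9091 \
  -v /volume1/docker/transmission/config:/config \
  -v /volume1/docker/transmission/custom-web:/usr/share/transmission/web \
  linuxserver/transmission
```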

 

I also like to refer to my server as "localhost", no matter which container. But the containers get their own little network, so Sonarr can't refer to Transmission as "localhost:9091".

Sure it can't. It has to use the Docker host's hostname or IP and the locally mapped port of the target container. If your DSM 6 box has the hostname dsm, you would use dsm:9091; or link the containers together and use their link names as references. It makes sense to link containers where an application container relies on other containers, e.g. a database container. Though I would not say that Sonarr relies on Transmission; Transmission is just one of the available options for download agents. Thus the Sonarr and Transmission containers are loosely coupled, so I wouldn't link those containers; I'd use the dockerhost:localport approach instead.
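For completeness, a sketch of the linking approach described above, using the classic --link flag (the container names are made up for illustration):

```shell
# Start Transmission first, then link it into the Sonarr container.
docker run -d --name transmission linuxserver/transmission
docker run -d --name sonarr \
  --link transmission:transmission \
  linuxserver/sonarr
# Inside the sonarr container, the hostname "transmission" now resolves,
# so Sonarr could be pointed at http://transmission:9091.
```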

  • 0

Transmission works great on DSM 6. I was a long-time 5.x, even 4.x, user back in the day. 6 has been great to me.

 

Sickbeard is available

Couch Potato is available

Mosquitto is available

 

Looks like you need to add the community sources.

 

[Screenshot: Screen_Shot_2017_01_29_at_7_42_54_PM.png]

  • 0
Transmission works great on DSM 6. I was a long-time 5.x, even 4.x, user back in the day. 6 has been great to me.

 

Sickbeard is available

Couch Potato is available

Mosquitto is available

 

Looks like you need to add the community sources.

 

They work great if you installed them on 5.x and then migrated to 6.0...

 

The issue is with the installer. The community packages used a very specific way of creating users with specific rights within the installation process (often reaching for sudo to create specific folders during install). That part fails on DSM 6, so the packages can't start.

  • 0

Like haydibe, I would also suggest starting to use Docker more and ceasing to wait for Syno third-party packages to get some attention. Example: http://tools.linuxserver.io/dockers

 

You can find basically everything that runs on Linux on Docker Hub already, and the good thing is you can mess with containers as much as you want and, at the end of the day, just recreate them without putting your box or data in danger.

  • 0
Like haydibe, I would also suggest starting to use Docker more and ceasing to wait for Syno third-party packages to get some attention. Example: http://tools.linuxserver.io/dockers

 

You can find basically everything that runs on Linux on Docker Hub already, and the good thing is you can mess with containers as much as you want and, at the end of the day, just recreate them without putting your box or data in danger.

 

The key part of using a NAS (remember what a Synology box is?) is storing your data without putting your box or data in danger. All the other attached software or garbage can be accomplished in any number of other ways. Secure your data first! The problem is Synology wants profits to grow, so they have to dream up the next thing a user might want, and they just keep adding to the pile.

If you run XPEnology in an ESXi VM, keep it as a NAS and run all the other stuff safely in separate VMs.

  • 0

Agree with the "last speaker":

a NAS is first and foremost a way to secure your files; if that fails, the complete chain fails...

 

Regarding Docker, have they solved all the security issues yet?

