XPEnology Community


Posts posted by flyride

  1. 3 hours ago, veelove said:

     

    How to correctly display the CPU?

     

    It's cosmetic.  cat /proc/cpuinfo will display the correct CPU configuration which is what is actually running.  If you care to "fix" it, see this thread.
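A quick way to confirm what the system is actually running, assuming shell (SSH) access to the NAS:

```shell
# Show the CPU model the kernel actually sees, and count the logical cores
grep -m1 "model name" /proc/cpuinfo
grep -c "^processor" /proc/cpuinfo
```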

  2. - Outcome of the installation/update: SUCCESSFUL

    - DSM version prior update: DSM 6.1.7-15284U1, also repeated with 6.1.7-15284U2

    - Loader version and model: Jun v1.02b - DS3615

    - Using custom extra.lzma: NO

    - Installation type: VM - ESXi 6.5 with C236 SATA passthrough, Mellanox ConnectX-3 passthrough

  3. 57 minutes ago, bearcat said:

    Depending on what you want to achieve, or what you want to do on your NAS while away from home, 

    there is also the option to install TeamViewer, giving you a "Remote Desktop" to your NAS.

     

    But if you want to e.g. stream your home videos using Plex (or any other media server), you will have to "loosen up your security" and use port forwarding and DDNS.

     

    @flyride: how would services like that, or DS Photo etc. work with your Guacamole setup?

     

    TeamViewer is a PC app that would give you the same functionality as RDP, so there's nothing to be gained there.  I don't think you can TeamViewer the NAS console itself (all you will see is the boot screen).

     

    The whole point of using Guacamole is NOT to open ports for remote access (aside from SSL to the proxy device which is separate from the NAS). In order to stream media (Plex, DS Video) to Internet devices, you will need to open ports - there is no other option.  This is also well outside the scope of OP's question about Quick Connect remote access services.

     

  4. Quick Connect is a combination of DDNS and remote proxy through a Synology server.  It uses outbound polling to determine whether a connection request is pending.  Therefore it does not need any ports opened on your firewall/router.  But you are 1) using their cloud service and not paying for it (hence the negative recommendation), 2) providing them your access passwords and 3) giving them access to whatever data you are transferring.  You are also trusting their security model with your access and your data.

     

    You can get the same access by opening ports on your firewall/router for the services you want to consume.  There are any number of free DDNS services that will resolve your home IP if you do not have a static address.  This is what most folks do and there is even a Synology feature you can invoke (External Access | Router Configuration) where it will modify your firewall/router configuration for you, either via uPNP or actually editing the port configuration on your behalf.

     

    Personally, I don't like the security implications of either of the above.  Instead I use Guacamole, an open source remote proxy, for remote RDP access to my PC (and therefore to the NAS as necessary), or SSH directly to the NAS/firewall/ESXi/whatever consoles. It runs on a VM or a Pi-type Linux device, and I am able to 2-factor to my phone for free using Duo Mobile.

     

  5. On 12/30/2018 at 6:20 AM, bearcat said:

    A bit off-topic I guess, but seeing some of you talk so warmly about Docker, before I try it out:

    Some time ago, I read heavy criticism of the lack of security in Docker; has this been taken care of?

     

    It depends on what you are using it for.  If you are comparing it to Syno apps, they all run as root with full access to the filesystem, so there isn't much of a standard to compare with.

     

    Comparing with a full virtualization platform like ESXi, it is not secure.  Docker needs to run as root to work correctly, and many apps need direct access to the O/S hardware (network I/O features, etc.), so they end up running with elevated privilege too.

     

    Outside of this, you can limit file access to Linux groups and users, and it containerizes the app runtime environment (on Synology, inside of a BTRFS partition).  You can map in configuration folders (to make upgrades simple) and any other mountable filesystem directories.
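As a sketch of the folder-mapping idea, a container launch might look like this (the image name, paths, and container name are placeholders, not a tested recipe -- this is a CLI/config sketch, not something runnable outside a Docker host):

```shell
# Hypothetical example: keep the app's config on the NAS filesystem so
# upgrading is just pulling a new image, and mount media read-only
docker run -d --name plex \
  -v /volume1/docker/plex/config:/config \
  -v /volume1/video:/data:ro \
  --network host \
  plexinc/pms-docker
```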

  6. Synology's OEM cards are Mellanox ConnectX-3.

     

    ConnectX-2 and ConnectX-3 work with the DS3615/17 images and are directly supported.  The drivers are not present in DS916/918 images.

  7. The processor is adequate.  I'm not sure of the source/destination of the backup, but your resource graph is showing very little network activity.

    How many volumes do you have?  If you have four volumes, the lower left graph is indicating 100% disk utilization.

     

    I'm not a fan of Hyperbackup.  When I did use it, it was temperamental.  Did you port the existing setup over, or create a new one using the 6.1.7 Hyperbackup binaries?

  8. BTRFS CPU requirements are somewhat higher.  Synology released BTRFS support on the Atom chips (DS412+/1512+/1812+) and performance was measurably worse.  It's not as dramatic with faster CPUs.

     

     I'd hazard that something isn't right.  What CPU?  What disk controller?

     

    10 hours ago, TVJunky said:

    Hello,

    I've updated from 5.2 to 6.1.7 on HP Gen bare metal, loader 1.02b.

     

    On 5.2 I had: 250GB / 1 TB / 1 TB / 2 TB , SHR/ext4 - and the speed was about 100-109 MBit/s

    On 6.1.7 I have: 1 TB / 1 TB / 2 TB / 4 TB, SHR/btrfs - and the speed is max. 80, most times about 60 MBit/s

     

    A complete backup took 18-20 hours on 5.2.

    Now I tried a backup: after 22 hours it was at 49% and I cancelled it. The speed was only a few kB/s.

     

    What can I do? Back to ext4?

     

  9. A few reasons... Docker apps, including Plex, do not modify your DSM folders or configuration in any way, which is not always true with Synology apps.  The Plex releases on Synology lag pretty far behind development, while the Docker image is always a current build.  A Docker app is super easy to migrate to another platform as a backup/redundancy strategy.  You can easily integrate Docker apps and their data repositories as it makes sense.  And on Docker you have full control over the application data location, for performance tuning or other reasons.

     

    Once you Docker, you will never go back to Syno apps.

     

  10. AFAIK if you are not running DS916 or DS918, there will not be Linux driver support for the Intel GPU.  If the GPU is detected by Linux, you should see /dev/dri.

     

    Video Station requires a serial number, and Plex specifically requires Plex Pass for hardware acceleration to be available.

     

    Lastly, not all Intel GPU silicon is equal.  There have been improvements with each processor family update, and the kernel driver and media software may have dependencies on specific features only available in later processors.

     

    Bottom line, a lot of things have to line up for hardware transcoding to work.
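To check the first condition above -- whether Linux detected the GPU at all -- look for the render device nodes:

```shell
# If the kernel GPU driver bound to the hardware, nodes appear under /dev/dri
ls /dev/dri 2>/dev/null || echo "no GPU driver loaded"
```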

  11. 1 hour ago, viper359 said:

    I want to use 13 drives with SHR2. It's over 50TB.

     

    Can I use 1.04b with the DS918 image from Synology, running the latest DSM, with more than the 4 drives? Will it show automatically, or am I going to have to make some edits to the synoinfo.conf file?

     

    1.04b (DS918) has 16 drives preconfigured in synoinfo.conf

    1.03b (DS3615/17) has 12 drives just like earlier loaders

     

    6.2.1 has less hardware support per platform (extra.lzma has limitations or is not available).  You should pre-validate that LSI works on DS918.  If it does, I would expect you to migrate without much trouble.  Otherwise you will need to use 1.03b and DS3615/17, and edit synoinfo.conf as before. Alternatively, you might be able to move to ESXi and RDM all the drives into a DS918 VM. I haven't actually tested that yet though.
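The synoinfo.conf edit itself is a one-line change.  On a real install the file is /etc/synoinfo.conf and you should back it up first; sketched here against a stand-in file rather than the live one:

```shell
# Stand-in for /etc/synoinfo.conf -- the real key is the same
printf 'maxdisks="12"\n' > /tmp/synoinfo.conf
# Raise the drive limit from 12 to 16
sed -i 's/^maxdisks=.*/maxdisks="16"/' /tmp/synoinfo.conf
grep '^maxdisks' /tmp/synoinfo.conf    # maxdisks="16"
```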

     

    I guess the other question to ask is why?  If working well, why not stay on 6.1.x even on new hardware?  It's fully supported for a number of years to come, and there isn't much new in 6.2.1.

  12. Again, just my personal opinion and continued thread drift:

     

    I'm not trying to talk you out of SSD cache, but you seem to want to talk me into it!  I do agree that Synology read cache is less risk than write.

     

    There are going to be 5 HDDs in your RAID5.  If a typical NAS drive is capable of 75MBps sequential read, 4 of them (the net throughput of a 5-disk RAID5) can do 300MBps.  Let's round down to 250MBps for rotational latency and other overhead.  With a Gigabit Ethernet interface, the maximum throughput is about 125MBps (1Gbps divided by 8 bits per byte).  This is half the sequential throughput of your HDDs.
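The arithmetic, for reference:

```shell
# 5-disk RAID5 reads from 4 data spindles at ~75 MB/s each
echo "$(( 75 * 4 )) MB/s"     # 300 MB/s raw array throughput
# Gigabit Ethernet: 1000 Mbit/s over 8 bits per byte
echo "$(( 1000 / 8 )) MB/s"   # 125 MB/s network ceiling
```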

     

    SATA SSD maximum throughput is 550MBps.  550 is more than 250, but if it all has to fit in a 125MBps pipe, it doesn't matter much.  The only benefit is for small, random reads that happen to be cached already.  In that unique case, the SSD cache is probably "faster."  The SSD Cache feature visually markets to you how great the cache is ("90% cache effective!") but it doesn't explain how fast the HDDs would have retrieved the same data without the cache.

     

    If the main workload is single-user, then it is also going to be affected by the performance of the client.  Very often, the small random reads that the cache can improve are workloads the client takes the most time to process, so it can't issue requests fast enough to fill the pipe.  We want to blame the NAS performance, but it is the client PC or OS that is at fault.

     

    So if you have a 10GbE interface and a specific workload (e.g. multi-user) that you are sure the cache can optimize, then by all means do it.  For the general file and media serving activities that 90% of us do on our systems, cache offers little performance benefit and rapidly wears out your SSD.  That SSD can be put to much better use isolated to disk-intensive activities WITHIN the NAS, where all the performance can be leveraged, such as Synology apps, Docker, or virtual machines.

     

    I strongly encourage you to set up some workloads that are meaningful to you, and benchmark both with and without SSD cache. You may be surprised.

  13. 2 hours ago, ilovepancakes said:

    6.2 works on 3615xs 1.03b loader but 6.2.1 does not seem to work. And I have an Ivy Bridge CPU... So did they change the kernel again between 6.2 and 6.2.1 and 6.2.1 requires Haswell?

     

    I just booted up a DS3615 image on DSM 6.2.1 patched to latest.

     

    Kernel version on 6.2 on DS918 - 4.4 (known to require Haswell)

    Kernel version on 6.1 on DS3615 - 3.10 (known to work on Nehalem or later)

    Kernel version on 6.2.1 on DS3615 - 3.10.105

     

    So Synology definitely did recompile the DS3615/17 kernels for 6.2.  3.10.105 is the LTS release of the 3.10 kernel, with many security and core driver enhancements, but it can technically run on any x86 architecture unless specifically compiled to use processor-specific features (I have a DS412+ running 6.2.1 and the 3.10.105 kernel, so 6.2.1 itself doesn't inherently require a Haswell CPU).

     

    Is the FMA/AVX2 Haswell instruction requirement compiled into the DS3615 6.2.1 kernel? You're not the first person to report that earlier CPUs might not be supported.  However, it's hard to tell conclusively without ensuring it's not a network or driver problem. The most direct way to find out would be to set up a serial console and see if booting panics on your Ivy Bridge CPU.
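Short of a serial console, you can at least confirm which side of the line a given CPU falls on from the flags Linux reports (Ivy Bridge will not list the Haswell instruction sets):

```shell
# Haswell introduced the avx2 and fma CPU flags; earlier CPUs won't list them
grep -q avx2 /proc/cpuinfo && echo "AVX2 present (Haswell or later)" \
                           || echo "AVX2 missing (pre-Haswell)"
```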

     
