• Announcements

    • Polanskiman

      DSM 6.2-23739   05/23/2018

      This is a MAJOR update of DSM. DO NOT UPDATE TO DSM 6.2 with Jun's loader 1.02b or earlier. Your box will be bricked. You have been warned.

      https://www.synology.com/en-global/releaseNote/DS3615xs

      Release note: Version 6.2-23739 (2018-05-23)

      Important Note

      The update is expected to be available for all regions within the next few days, although the time of release in each region may vary slightly.

      Compatibility and Installation

      • DSM 6.2 can only be installed on Synology products running DSM 6.0 and above.
      • Starting from DSM 6.2, the core replication function is centrally managed by a new package, Replication Service. Packages with a replication feature, such as Snapshot Replication, must install the Replication Service package.
      • For the following models, DSM 6.2 will be the last upgradable version. DSM 6.2 will continue to receive critical and security updates for a minimum of 2 years after the official release.
        XS Series: RS3412xs, RS3412RPxs, RS3411xs, RS3411RPxs, DS3612xs, DS3611xs
        Plus Series: RS2212+, RS2212RP+, RS2211+, RS2211RP+, RS812+, RS812RP+, DS2411+, DS1812+, DS1512+, DS1511+, DS712+, DS412+, DS411+II, DS411+, DS213+, DS212+, DS211+, DS112+
        Value Series: RS812, RS212, DS413, DS411, DS213, DS212, DS211, DS112, DS111
        J Series: DS413j, DS411j, DS411slim, DS213air, DS212j, DS211j, DS112j
        Others: DDSM

      What's New in DSM 6.2

      DSM
      • For easier management, Key Manager can now be stored locally on the Synology NAS, so encrypted shared folders can be auto-mounted without the need of a USB drive. (For better data protection, users are advised to store Key Manager on an external USB drive.)
      • Standard users can now right-click on shared folders in File Station to view Properties.
      • Added support for IBM WebSphere SSO.
      • Enhanced password strength policy for better account security.
      • Added support for a new SMS provider, SendinBlue, and the Clickatell API (RESTful).
      • Added support for the Thai language.
      • Updated Privacy Statement and related settings in the installation flow.

      iSCSI Manager
      • Brand new iSCSI Manager built for IT administrators, providing a new management user interface for an optimized iSCSI management and monitoring experience.
      • Advanced LUN provides lightning-fast LUN snapshot creation and recovery, LUN cloning within seconds, and VAAI/ODX storage acceleration commands for better VM performance.
      • Since Advanced LUNs can utilize the file system cache for better efficiency, block-level LUNs will no longer be supported on DSM 6.2 and onward. DSM 6.2 will still be compatible with block-level LUNs upgraded from DSM 6.1 and prior.
      • Supports cross-volume LUN clones.
      • Added support for network binding settings to allow each iSCSI target to map to a network interface.
      • Users can now disable storage reclamation for thin-provisioned LUNs to enhance I/O performance.

      Storage Manager
      • The brand new Overview shows the health status of all storage components on your NAS, giving a clearer look at the system as a whole.
      • Introduced Storage Pool, a new storage component that replaces the original Disk Group and RAID Group, and rearranged storage-related functions to provide a more consistent and smoother experience.
      • Smart Data Scrubbing detects the supported file system and RAID type to perform data scrubbing automatically. A new built-in scheduler allows users to run data scrubbing periodically with just a few clicks, improving data integrity and consistency.
      • RAID resync speed can now be adjusted to accommodate IT management needs.
      • Users can remotely deactivate drives via Storage Manager for better management.
      • Added a default monthly S.M.A.R.T. test on both new and existing drives that were not previously configured.
      • After each DSM upgrade, DSM will remind users if a bad sector or lifespan alert is not set up.
      • The health status of disks is now uniform with storage pools and volumes.
      • Log Center now includes disk logs.
      • Provided an option to change the stripe cache size when the RAID type of the storage pool is RAID 5, RAID 6, RAID F1, or SHR (three disks or above).

      High Availability Manager 2.0
      • High Availability Manager has been modularized into a package to ensure better system maintenance and offer greater update flexibility.
      • A new mechanism eliminates unnecessary system reboots during major updates to keep your cluster secure and maintain high service availability.
      • SHA 2.0 handles situations more effectively, especially when the servers are in a vulnerable state.
      • A simpler, more intuitive user interface offers a comprehensive cluster overview and management, with more detailed, visual information and easy-to-follow solutions.
      • A brand new interface displays more details of both the active and passive servers, giving you an in-depth system utilization overview.
      • When first creating an SHA cluster, users new to SHA can choose to replicate only the system configuration to shorten setup time.

      Extended Btrfs File System Coverage
      • The Btrfs file system is now available on more Synology NAS models utilizing ARM platforms. Enjoy the powerful features of the next-generation file system.
      • Applied models: 18-series: DS218, DS418, and more x18-series models to come.

      Azure AD Support
      • Added the capability to join an existing Azure AD as an SSO client. Utilizing the single sign-on feature increases daily productivity by saving time spent on re-authentication.

      Security Advisor
      • User login details and abnormal geolocation information are dissected using intelligent analysis techniques and reported via DSM notifications.
      • Daily and monthly reports provide more comprehensive information, allowing IT administrators to review abnormal activity and security scans regularly.

      TLS/SSL Profile Level
      • An advanced way to configure your security level profile per service, providing flexibility to suit your network security requirements.

      2-Step Authentication
      • Synology NAS administrators must set up an email notification when 2-step authentication is enabled.

      Domain/LDAP Management
      • Flexibly assign specified domain groups admin privileges.

      FTP
      • Added ECDSA certificate support for FTPS connections.

      NFS
      • Added NFS v4.1 multipathing for load balancing and network redundancy to fully support VMware vSphere 6.5.
      • Applied models:
        18-series: DS3018xs, DS418play, DS918+, DS418j, DS418, DS718+, DS218+
        17-series: FS3017, FS2017, RS4017xs+, RS3617xs, RS3617RPxs, DS3617xs, RS3617xs+, RS18017xs+, DS1817+, DS1817, DS1517+, DS1517, RS217
        16-series: RS2416RP+, RS2416+, RS18016xs+, DS416play, RS816, DS916+, DS416slim, DS416j, DS416, DS216play, DS716+II, DS716+, DS216j, DS216+II, DS216+, DS216, DS116

      Package Center
      • The brand new user interface design brings a more intuitive experience and allows users to quickly find everything they need.

      IHM Tool
      • Added Seagate IronWolf Health Management (IHM) support on DS118, DS218play, DS418j, and DS418.

      Known Issues & Limitations
      • DSM 6.2 is the last DSM version supporting IPv6 Tunneling in Network Interface.
      • The Fedora utility will not be supported after DSM 6.2.
      • Starting from DSM 6.2, the USB device drivers, including printers, cloud printers, DAC/speakers, Wi-Fi dongles, DTV dongles, LTE dongles, and Bluetooth dongles, will no longer be updated.
      • A Wi-Fi dongle does not support Parental Control and Device List if set as Bridge Mode.
      • Starting from DSM 6.2 Beta, Virtual Machine Manager will no longer support creating clusters with older DSM versions. Please update each host in the cluster to the same DSM version or above for the Virtual Machine Manager cluster to operate properly.
      • SSH authentication by DSA public key is prohibited due to security concerns.
      • When logging in through a VPN or proxy server, some functionalities may have authentication issues. To fix this, go to Control Panel > Security and click the Trusted Proxies button to add the trusted proxy server to the list.
      • Existing RAID scrubbing scheduled tasks will be migrated to Smart Data Scrubbing scheduled tasks. If the upgrade is performed while scheduled tasks are running, the Data Scrubbing process will automatically be re-executed shortly after the upgrade is completed.
      • Office 2.x and below are not compatible with DSM 6.2.

mervincm

  1. DSM 6.1.3 and sas controller

    Rebranded LSI controllers like the H200 (I have an HPE version) flashed to IT mode are the default best option in my opinion. Inexpensive, TONS of them around, and lots of support available from home-labbers. Most add eight 6G SAS/SATA ports with no complications around SMART, etc.
  2. 10Gbe setup - will this work with 6.1?

    I agree with the MikroTik interface comment for the most part; that said, for a flat switch there's really nothing to configure. Additionally, this model runs SwOS, not the RouterOS you are used to, and is much, much simpler. I have four 10-gig switches in my home lab (a Ubiquiti UniFi 16x10G, a MikroTik CRS-226 with 24x1G and 2x10G, a Quanta LB4 with 48x1G and 2x10G, and a D-Link with 24x1G and 4x10G), so I have seen a range of less common options (no Cisco/HPE/Juniper). At this time I only really use the D-Link enterprise-line switch. A switch lets you have a single IP address and a single name/DNS entry for your NAS. It's a nice-to-have, not a need-to-have. I run 10GbE from my main PC to the physical NAS and to a vSphere host. I also do multichannel SMB3 over multiple NICs to some secondary PCs and a second NAS (a real Synology box). Physical routing/firewall/IDS makes do with 1GbE because my WAN link is limited.
  3. 10Gbe setup - will this work with 6.1?

    I suggest getting a switch. It will cost you one more cable, but it makes your networking so much easier. Managing two networks (one data, one storage) is definitely possible, but it's a mess you don't have to deal with for under $200. I have tried both. You can buy a brand new MikroTik switch with a couple of 10-gig ports for under $150 (CSS326-24G-2S+RM).
  4. Poor SSD Write speeds

    Seems like the right approach to me. Disable sequential caching until NVMe support is available. It should not be too long, as models like the DS918+ have NVMe caching features. Once that is in XPenology, you can LACP your two links from switch to NAS and one day be pushing for 2-gigabyte-per-second transfers!
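    The LACP step above can be sketched with plain Linux tooling. This is a minimal illustration only, assuming two hypothetical interfaces named eth0/eth1; on DSM the equivalent bond is created from Control Panel > Network rather than the shell:

    ```shell
    # Sketch: an 802.3ad (LACP) bond built with iproute2.
    # Interface names and the address are assumptions for illustration.
    ip link add bond0 type bond mode 802.3ad miimon 100
    ip link set eth0 down
    ip link set eth0 master bond0
    ip link set eth1 down
    ip link set eth1 master bond0
    ip link set bond0 up
    ip addr add 192.168.1.10/24 dev bond0
    # The switch ports on the other end must be configured as an 802.3ad LAG too.
    ```

    Remember that LACP hashes each flow to one member link, so a single client still tops out at one link's speed; the aggregate helps with multiple clients.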
  5. Poor SSD Write speeds

    I have a Lenovo XPenology system with a 10-gig Intel 559-based card, a Xeon 1226 v3 CPU, 32 GB RAM, 6 HDDs (WD Red) in RAID 5, and an LSI IT-mode HBA (6G SATA/SAS), as well as a real Synology 1815+ with 5 HDDs in RAID 5. I have not tried adding an SSD cache, as my testing (back in the 5.x days) showed that adding one impacted my sequential performance negatively. My Win10 system is a 6700K with 32 GB RAM, an Intel SSD 750 and a Samsung 960 EVO, on the same 10GbE Intel card. I can do some tests so you have something to compare to.
  6. Poor SSD Write speeds

    Safe to assume you are using a 10GbE network here? When you created your SSD cache, you created a two-SSD read/write cache, and only for sequential transfers, correct? The Atom CPU (a great Atom, but still an Atom) might be impacting performance; did you monitor CPU while this was happening? A 3-HDD RAID 5 supplying a steady 500-600 MB/s? That's not real; there must be caching involved. The most you can hope for when writing to a 3-disk RAID 5 array is 2x a single disk, and I doubt you have HDDs that can go that fast. My really old Agility 3 SSDs give about 120 in the DSM write benchmark on my Xeon system. Maybe you have an issue with the backplane? BTW, very cool motherboard! I would love one!
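    The RAID 5 arithmetic above is easy to sanity-check. A minimal sketch, assuming a hypothetical ~180 MB/s sequential rate per spinning disk (an assumption for illustration, not a measurement):

    ```python
    def raid5_write_ceiling(num_disks: int, per_disk_mb_s: float) -> float:
        """Best-case sequential write for RAID 5: one disk's worth of each
        stripe goes to parity, so only N-1 disks carry data."""
        return (num_disks - 1) * per_disk_mb_s

    # Hypothetical ~180 MB/s per HDD (an assumption, not a measurement):
    print(raid5_write_ceiling(3, 180.0))  # 3 disks -> 360.0 MB/s ceiling
    ```

    With 3 disks that ceiling is 2x one drive, which is why a sustained 500-600 MB/s points to caching somewhere in the path rather than raw disk speed.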
  7. 10Gbe setup - will this work with 6.1?

    I have used a range of 10Gb NICs, optical transceivers as well as DAC cables, and four different switches with 10G. I don't think the Mellanox ConnectX-2 (I tested one in the past) has any built-in driver support in XPenology. I have used Broadcom in the past and now use Intel 82599-based cards. If you just want to build a point-to-point 10GbE link, I had great luck with the IBM BNX2 Broadcom cards, their included transceivers, and a standard OM3 LC/LC fiber patch. I also use the 3615xs version.
  8. This can be a number of things, some of which are not easily fixed: HBA, SATA/SAS backplane, port multiplier, OS. The 6/i looks to be a bad choice for XPenology for two reasons: no IT-mode firmware, and no support for disks over 2 TB. If you are sure you don't have a backplane limitation, this might be as easy to fix as purchasing a replacement SAS/SATA HBA. There are other forums more suited to server-grade parts in the home lab; you might try there for specific advice.
  9. I started with ESXi, but am now running bare metal. ESXi was fine, but I didn't need the extra VM hosting; you can do that inside DSM with Virtual Machine Manager anyway. I wanted to give DSM local access to ALL of the HDDs for SMART testing, and I wanted push-button power support and USB support. One item you should consider: a bare-metal install means DSM needs to include your driver support, whereas with ESXi, vSphere needs to include your driver support. This might be a deal breaker in either direction, depending on what hardware you have.
  10. 6+ functional w 10GBE NICs?

    Update: it's working. I moved the USB boot drive from my test server to the main XPenology NAS (exact same hardware), and it is stable, running at 10 Gbps on the Intel card (the Broadcom is still non-functional). It seems that something in .1 or u4 fixed the problem. Still no luck with the Broadcom, but I would rather use the Intel anyway.
  11. Did a test today. I have a real Synology 1813+, set up (as of today) with a single SHR volume of 8 HDDs on Btrfs. I set up a 4-gig link aggregation across its 4 NICs, removed all packages, and turned off all reports, indexing, etc., creating a best-case scenario (without SSDs). Directly from the 1813+, I ran an rsync copy pulling huge files from my XPenology box; at the same time I ran a CIFS copy (huge files) from a workstation to the 1813+. I was able to get CUMULATIVE averages over 1 Gbps and CUMULATIVE peaks of about 1.6 Gbps. With more clients writing to the 1813+ at the same time, it might have gone even higher. This is a typical scenario where link aggregation can help.
  12. 10-gigabit networking does not have to be expensive. IMO a starter setup is just: two NICs (used ones can be as low as $20-30 each, though not all work with XPenology at this time); two "cables" (for cheap 10-gig networking, a "cable" usually means two optical SFP+ transceivers plus a fiber patch cable of the required length, OR an SFP+ DAC cable), each about $20-75; and one switch with at least two SFP+ ports, new from $200-400, used from $100. It's doable for not too much.
  13. Do SSD Cache's make much difference on a GigE ?

    I have played with read and read/write caches. I found the read cache made no noticeable improvement, but it never seemed to get in the way. The read/write cache actually made recent random reads noticeably better, but it also negatively impacted sequential writes. I pulled both SSDs out and built a dedicated SSD volume instead. MUCH, MUCH better. It was big enough to run my apps and small VMs from, and it worked great.
  14. Before you spend more time on link aggregation, are you sure you understand what advantage it offers? If you are trying to get more than 1 Gbps between your NAS and a single client, IT DOESN'T WORK THAT WAY. No matter how many NICs are in your PC or NAS, and no matter how you create your bonded channel, one client maxes out at 1 Gbps. What you can hope to achieve (and it's hardly easy) is to have multiple clients simultaneously aggregate to over 1 Gbps. And even if you get the networking right, you still need disks that can keep up. Multiple devices accessing your NAS simultaneously is somewhat random activity, and you need A LOT of hard drives to exceed 1 Gbps of random activity unless you create an all-SSD volume. Even if you get everything working, how many minutes a week is everything going to line up perfectly? There are two solutions that do deliver over 1 Gbps somewhat often: (a) 10-Gbps NICs and a 10-Gbps switch; used cards and switches are very affordable, and this is what I do. (b) Multichannel SMB3; right now it only works on Windows systems, though it is available on Linux-based systems using the latest Samba if you enable EXPERIMENTAL features.
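    For what it's worth, the Samba side of option (b) is a single global setting. A minimal smb.conf sketch, assuming a Samba build new enough to carry the (then experimental) multichannel code:

    ```ini
    [global]
        # Experimental in Samba 4.4+: let SMB3 clients open multiple
        # channels across multiple NICs to this server at once.
        server multi channel support = yes
    ```

    The client must also be multichannel-capable (Windows 8/Server 2012 or later) with multiple NICs or an RSS-capable NIC for it to kick in.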
  15. 6+ functional w 10GBE NICs?

    I should have added that I was able to get both the Broadcom 10GbE and the Intel 10GbE NICs working with 5.x, and ran it that way for months, if not over a year. I have upgraded from 6.1 to 6.1.1u4, and there is a significant improvement, if not quite a complete fix. Is it possible Syno tweaked a driver or something? It is MUCH better, at least with the Intel card I have been testing so far. I can read from the NAS at about 300 MB/sec (OK for a 4-disk RAID 5 on Btrfs), but I can only write to the NAS at about 105 MB/sec (not great). The only thing I did differently was testing Btrfs this time; I used ext4 in previous testing. I want to let it settle for a while and make sure it is stable; then I will try the Broadcom card, and then both again with the new RAM drive (assuming that this RAM drive is NOT the same as the one in your 6.02 tutorial, as I used that one in yesterday's testing).