XPEnology Community

Showing results for tags 'cache'.

Found 7 results

  1. I have a four-drive SHR-1 volume (with data protection, of course) and a single-drive SSD cache (read-only, so of course with no data protection). I know that if any single drive of the SHR-1 volume crashes I will not lose any data; that is what the data protection is for. But what happens if my read-only SSD cache drive crashes? Will I lose data on the volume? I believe not, but I would like to get some confirmation :-).
  2. Hello. After the system was restored, the cache disappeared. I also tried switching to a new version, without success — it did not work... What can be done about it? The cache disks are present, but the system does not connect them...

     ash-4.3# cat /proc/mdstat
     Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
     md2 : active raid5 sda3[5] sdd3[4] sde3[3] sdc3[2] sdb3[1]
           7794770176 blocks super 1.2 level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
     md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3] sde2[4] sdf2[5] sdg2[6]
           2097088 blocks [16/7] [UUUUUUU_________]
     md0 : active raid1 sda1[0] sdb1[1] sdc1[2] sdd1[3] sde1[4] sdf1[5] sdg1[6]
           2490176 blocks [16/7] [UUUUUUU_________]

     ash-4.3# mdadm --detail /dev/md0
     /dev/md0:
             Version : 0.90
       Creation Time : Tue Jun 22 19:18:07 2021
          Raid Level : raid1
          Array Size : 2490176 (2.37 GiB 2.55 GB)
       Used Dev Size : 2490176 (2.37 GiB 2.55 GB)
        Raid Devices : 16
       Total Devices : 7
     Preferred Minor : 0
         Persistence : Superblock is persistent

         Update Time : Fri Jun 25 18:59:43 2021
               State : clean, degraded
      Active Devices : 7
     Working Devices : 7
      Failed Devices : 0
       Spare Devices : 0

                UUID : bc535ace:18245e6d:3017a5a8:c86610be
              Events : 0.11337

         Number   Major   Minor   RaidDevice State
            0       8        1        0      active sync   /dev/sda1
            1       8       17        1      active sync   /dev/sdb1
            2       8       33        2      active sync   /dev/sdc1
            3       8       49        3      active sync   /dev/sdd1
            4       8       65        4      active sync   /dev/sde1
            5       8       81        5      active sync   /dev/sdf1
            6       8       97        6      active sync   /dev/sdg1
            -       0        0        7      removed
            -       0        0        8      removed
            -       0        0        9      removed
            -       0        0       10      removed
            -       0        0       11      removed
            -       0        0       12      removed
            -       0        0       13      removed
            -       0        0       14      removed
            -       0        0       15      removed

     ash-4.3# mdadm --detail /dev/md1
     /dev/md1:
             Version : 0.90
       Creation Time : Tue Jun 22 19:18:10 2021
          Raid Level : raid1
          Array Size : 2097088 (2047.94 MiB 2147.42 MB)
       Used Dev Size : 2097088 (2047.94 MiB 2147.42 MB)
        Raid Devices : 16
       Total Devices : 7
     Preferred Minor : 1
         Persistence : Superblock is persistent

         Update Time : Fri Jun 25 17:50:04 2021
               State : clean, degraded
      Active Devices : 7
     Working Devices : 7
      Failed Devices : 0
       Spare Devices : 0

                UUID : 6b7352a0:2dd09c09:3017a5a8:c86610be
              Events : 0.17

         Number   Major   Minor   RaidDevice State
            0       8        2        0      active sync   /dev/sda2
            1       8       18        1      active sync   /dev/sdb2
            2       8       34        2      active sync   /dev/sdc2
            3       8       50        3      active sync   /dev/sdd2
            4       8       66        4      active sync   /dev/sde2
            5       8       82        5      active sync   /dev/sdf2
            6       8       98        6      active sync   /dev/sdg2
            -       0        0        7      removed
            -       0        0        8      removed
            -       0        0        9      removed
            -       0        0       10      removed
            -       0        0       11      removed
            -       0        0       12      removed
            -       0        0       13      removed
            -       0        0       14      removed
            -       0        0       15      removed

     ash-4.3# mdadm --detail /dev/md2
     /dev/md2:
             Version : 1.2
       Creation Time : Wed Dec 9 02:15:41 2020
          Raid Level : raid5
          Array Size : 7794770176 (7433.67 GiB 7981.84 GB)
       Used Dev Size : 1948692544 (1858.42 GiB 1995.46 GB)
        Raid Devices : 5
       Total Devices : 5
         Persistence : Superblock is persistent

         Update Time : Fri Jun 25 17:50:15 2021
               State : clean
      Active Devices : 5
     Working Devices : 5
      Failed Devices : 0
       Spare Devices : 0

              Layout : left-symmetric
          Chunk Size : 64K

                Name : memedia:2 (local to host memedia)
                UUID : ae85cc53:ecc1226b:0b6f21b5:b81b58c5
              Events : 34755

         Number   Major   Minor   RaidDevice State
            5       8        3        0      active sync   /dev/sda3
            1       8       19        1      active sync   /dev/sdb3
            2       8       35        2      active sync   /dev/sdc3
            3       8       67        3      active sync   /dev/sde3
            4       8       51        4      active sync   /dev/sdd3
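     If only the cache array has gone missing while the data volume (md2) is clean, a first step is to check whether the SSDs still carry an md superblock for the old cache device. A minimal diagnostic sketch, run as root like the output above — the device names /dev/sdf and /dev/sdg (the two members present in md0/md1 but absent from md2) and the partition number are assumptions, not taken from the post:

         # Show the partition layout of the suspected cache SSDs.
         fdisk -l /dev/sdf
         fdisk -l /dev/sdg

         # Look for a leftover md superblock describing the old cache array;
         # an intact member prints the array's UUID, level and member count.
         mdadm --examine /dev/sdf3
         mdadm --examine /dev/sdg3

         # List every array that can be assembled from the superblocks found.
         mdadm --examine --scan

     If no superblock for the cache array turns up, the cache most likely cannot be re-attached and has to be removed and recreated in Storage Manager.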
  3. I have been using two second-hand Samsung EVO 256 GB SSDs as SSD cache for about a year and a half. Now DSM has notified me that the SSDs' estimated lifetime is reaching its end. Looking at the values from my SMART tests, I am unsure what to believe. As far as I know, DSM predicts the lifetime based on Wear_Leveling_Count. However, comparing the SMART values of the two SSDs only makes it less clear to me: DSM reports one SSD as OK, although its "raw value" is way higher than the threshold, whereas the other SSD is supposedly due for replacement. So is it possible that DSM or the SMART test isn't reading the values properly, or am I misinformed?
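     A quick way to cross-check what DSM shows is to read the SMART table directly over SSH with smartctl. A minimal sketch — the device names /dev/sdf and /dev/sdg are assumptions, check Storage Manager or fdisk -l for the real ones:

         # Print the Wear_Leveling_Count row from each cache SSD (run as root).
         # On Samsung EVO drives this is attribute 177: the normalized VALUE
         # column counts down towards THRESH, while RAW_VALUE is the number of
         # program/erase cycles and is expected to keep growing.
         smartctl -A /dev/sdf | grep -E 'ID#|Wear_Leveling_Count'
         smartctl -A /dev/sdg | grep -E 'ID#|Wear_Leveling_Count'

     Note that the SMART threshold applies to the normalized VALUE, not to RAW_VALUE, so a raw value far above the threshold is not by itself a sign of failure.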
  4. My curiosity to find further uses for my Xpenology NAS led me to lancache, which makes content (such as games and Windows and macOS updates) that was downloaded once from the Internet available locally (via my NAS) the second time around, saving internet bandwidth. Further information about lancache can be found here: http://lancache.net

     However, since I already run Pi-hole (a tracking and advertising blocker), I had to find a way to let the two communicate with each other. To save resources, I decided against running lancache in a virtual machine and used Docker instead. Below I share my approach for those who are interested.

     ATTENTION: with this procedure, ports 80 and 443 in DSM are reassigned to other ports, since lancache requires those two. This means that if you host a website it will no longer be reachable via "yourdomain.com" but via "yourdomain.com:81". Furthermore, I accept no liability for any data loss or damage caused by following this tutorial.

     So let's start. First, ports 80 and 443 must be freed. Thanks to Tony Lawrence's instructions (see tonylawrence.com), this is relatively easy. Connect to the NAS via SSH; on macOS you can do this in the Terminal app, for example:

         ssh -p 22 tim@192.168.0.100

     Check which applications are currently using the ports:

         sudo netstat -lntup | grep ":80"
         sudo netstat -lntup | grep ":443"

     Now three files have to be edited; the vi editor is used for this. Enter the command

         sudo vi /usr/syno/share/nginx/server.mustache

     Press i once (for insert) and replace the values 80 and 443 with 81 and 444, respectively. Afterwards it should look like this:

         listen 81{{#reuseport}} reuseport{{/reuseport}};
         listen [::]:81{{#reuseport}} reuseport{{/reuseport}};

     and

         listen 444 ssl{{#https.http2}} http2{{/https.http2}}{{#reuseport}} reuseport{{/reuseport}};
         listen [::]:444 ssl{{#https.http2}} http2{{/https.http2}}{{#reuseport}} reuseport{{/reuseport}};

     Then type :wq (for write and quit) and confirm with Enter. Do the same with these two files:

         sudo vi /usr/syno/share/nginx/DSM.mustache
         sudo vi /usr/syno/share/nginx/WWWService.mustache

     (A scripted alternative using sed is sketched after this post.) Next, nginx must be restarted:

         sudo -i synoservice --restart nginx

     Now you can check whether the ports are really no longer in use:

         sudo netstat -lntup | grep ":80"
         sudo netstat -lntup | grep ":443"

     If nothing shows up anymore, we have successfully freed the ports, and the first of three steps is done.

     Next, Docker must be installed from the Package Center in DSM. Pi-hole has to be downloaded, adjusted and started as shown in the attached "Install-pihole.pdf" file. Thanks to Marius Bogdan Lixandru (see https://mariushosting.com) for his instructions, which I slightly adapted for installing Pi-hole alongside lancache. It is important that you create the required folder structure on your NAS; we need the /etc/dnsmasq.d folder later on to put some .conf files into it. Take the password from the log of the Docker container (as described in the PDF), log in to Pi-hole as admin, and set the interface listening behavior under Settings/DNS. IMPORTANT: DO NOT FORWARD PORTS 53 AND 80 FROM YOUR ROUTER TO YOUR NAS!

     Now we have to download the domain lists that should be cached. Download the repository from https://github.com/uklans/cache-domains and copy the folder to your NAS (e.g. /volume2/lancache). Then use SSH and change into the scripts directory:

         cd /volume2/lancache/cache-domain-master/scripts

     Run the script create-dnsmasq.sh, which creates .conf files for your Pi-hole:

         sudo ./create-dnsmasq.sh

     Copy the created files to your Pi-hole's dnsmasq.d folder (the /etc/dnsmasq.d folder mapping created earlier).

     Finally, lancache must be installed. Download the repository from https://github.com/lancachenet/docker-compose and move it, for example, to /volume2/lancache on your NAS. Change into the docker-compose-master folder via SSH:

         cd /volume2/lancache/docker-compose-master

     Edit the .env file within the folder:

         vi .env

     Set the three variables. Next, log in as root via SSH:

         sudo -i

     Change into your docker-compose-master folder, which in my case is:

         cd /volume2/lancache/docker-compose-master

     Run:

         docker-compose up -d

     You will get two successfully started Docker containers and one error. The error occurs because Pi-hole already occupies port 53; we can ignore this message and move on:

         exit
         exit

     Since we use Pi-hole as the DNS service, you can now delete the lancache-dns Docker container. Then change the DNS server on your router to the IP address of your NAS, flush the DNS caches of running PCs (or reboot them), and you should get decent lancache performance, depending on your Xpenology and network setup. (See the sketch after this post for a quick way to verify the containers.) Feel free to correct me if I wrote something incorrectly or didn't explain it well enough. Install-pihole.pdf
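     A note on the port-freeing step above: the same edit can be scripted instead of done by hand in vi. This is only a sketch and assumes the three .mustache files contain exactly the "listen 80" / "listen 443" lines shown in the post (make backups first; a DSM update may overwrite these files again):

         # Back up and patch the three nginx templates named in the post.
         cd /usr/syno/share/nginx
         for f in server.mustache DSM.mustache WWWService.mustache; do
             sudo cp "$f" "$f.bak"    # keep a copy to restore from
             # Move HTTP from 80 to 81 and HTTPS from 443 to 444,
             # mirroring the manual vi edit described above.
             sudo sed -i -e 's/listen 80{{/listen 81{{/' \
                         -e 's/listen \[::\]:80{{/listen [::]:81{{/' \
                         -e 's/listen 443 ssl/listen 444 ssl/' \
                         -e 's/listen \[::\]:443 ssl/listen [::]:444 ssl/' "$f"
         done
         sudo -i synoservice --restart nginx    # same restart step as in the post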
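     And a small verification sketch for the last step. The container name lancache-dns is taken from the post; the "lancache" name filter, the NAS address 192.168.0.100 and the Steam hostname are assumptions, so adjust them to your setup:

         # List the lancache containers and their status.
         sudo docker ps --filter 'name=lancache' --format '{{.Names}}: {{.Status}}'

         # Remove the redundant DNS container; Pi-hole already serves port 53,
         # which is why this container failed to start in the first place.
         sudo docker rm -f lancache-dns

         # Check that a cached domain now resolves to the NAS itself
         # (it should return the NAS IP instead of an upstream CDN address).
         nslookup lancache.steamcontent.com 192.168.0.100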
  5. Hi guys, just a simple question: I'm thinking of buying two SATA SSDs to pass through to Xpenology. I just want to make sure that SSD cache also works with SATA disks and not just NVMe disks. Thank you, and sorry if the question has already been asked.
     ------------
     DSM version: 6.2.2-24922 Update 4
     Loader version and model: JUN'S LOADER v1.04b - DS918+
     Using custom extra.lzma: No
     Installation type: VM - UNRAID 6.8.3 (before ESXi 6.7u3)
     Disks: 3x4TB + 2x6TB
     Controllers: LSI SAS2308 IT mode + Intel Corporation Cannon Lake PCH SATA AHCI Controller
  6. Hello, community! Thank you in advance for your interest. I'm new to this environment and only know part of the basics of Xpenology. What I have in mind: build a powerful NAS with more than 2 Gb/s of throughput, 10 Gbps networking, and RAID 10 for safety. So I have thought about 12 IronWolf 4TB drives, plus some SSDs for cache (I really don't want a slow NAS). My first questions: Is it possible to use my future Xpenology NAS as a DAS too? Which motherboard will I need, and with how many SATA ports? (I don't really know how PCIe controllers work.) It would be awesome if you have some hardware in mind. Thank you for your help.
  7. I bought this card and a 960 EVO NVMe to use as cache. I'm not finding any info on compatibility or on NVMe cache via expansion cards... Can anyone point me in the right direction?