XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 05/19/2020 in all areas

  1. One annoyance when running DSM under ESXi is that virtual disks can't properly handle its SMART interrogations. This is because Synology embedded a custom version of the smartctl binary into its own libraries and utilities, ignoring the standard config files that could generate compatible queries or suppress them. The result is spurious error messages logged to /var/log/messages every few seconds, wasting disk space and SSD life, and making it hard to see what is actually happening. If you use virtual disks and are not familiar with this, monitor the messages logfile with the command below to see how frequently DSM tries (and fails) to query the drives.

# tail -f /var/log/messages

The problem has been around for a long time and is well-documented here. An indirect fix was discovered when the virtual disks were attached to the LSI Logic SAS dialect of the ESXi virtual SCSI controller, but that solution worked reliably only under 6.1.x; on 6.2.x, the virtual SCSI controller tends to result in corrupted (but recoverable) arrays. I recently migrated my 6.1.7 system to 6.2.3, so I had to convert my virtual SCSI controller to SATA, and of course the logfile chatter was back.

I don't really care about SMART on virtual disks (and you probably don't either), so I decided to get rid of the log messages once and for all. Syslog-ng has a lot of capability to manage the log message stream, so I knew it was possible. The results follow.

We need to install two files. The first is a syslog-ng filter:

# ESXiSmart.conf
# edit the [bracket values] with drive slots where SMART should be suppressed
# in this example /dev/sda through /dev/sdl are suppressed
filter fs_disks { match("/sd[a-l]" value("MESSAGE")); };
filter fs_badsec { match("/exc_bad_sec_ct$" value("MESSAGE")); };
filter fs_errcnt { match("disk_monitor\.c:.*Failed\ to\ check" value("MESSAGE")); };
filter fs_tmpget { match("disk/disk_temperature_get\.c:" value("MESSAGE")); };
filter fs_health { match("disk/disk_current_health_get\.c:" value("MESSAGE")); };
filter fs_sdread { match("SmartDataRead.*read\ value\ /dev/.*fail$" value("MESSAGE")); };
filter fs_stests { match("SmartSelfTestExecutionStatusGet.*read\ value\ /dev/.*fail$" value("MESSAGE")); };
filter fs_tstget { match("smartctl/smartctl_test_status_get\.c:" value("MESSAGE")); };
filter fs_allmsgs { filter(fs_badsec) or filter(fs_errcnt) or filter(fs_tmpget) or filter(fs_health) or filter(fs_sdread) or filter(fs_stests) or filter(fs_tstget); };
filter f_smart { filter(fs_disks) and filter(fs_allmsgs); };
log { source(src); filter(f_smart); };

Save this to /usr/local/etc/syslog-ng/patterndb.d/ESXiSmart.conf

You will need to edit the string inside the brackets on the first "fs_disks" line to refer to the disks that should have SMART suppressed. If you want all SMART errors suppressed, just leave it as is. In my system I have both virtual and passthrough disks, and the passthrough disks handle SMART correctly, so I have [ab] selected for the virtuals /dev/sda and /dev/sdb, leaving SMART log messages intact for the passthrough disks.

Please note that the file is extremely sensitive to syntax: a missing semicolon, a slash or backslash error, or an extra space will cause syslog-ng to fail completely and you will have no logging at all. To make sure it doesn't suppress valid log messages, this filter only matches SMART-related error messages that reference the selected disks.
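Because a single syntax slip in that file takes down logging entirely, it is worth validating the configuration before restarting anything. This is a hedged suggestion: the --syntax-only flag is standard syslog-ng, but the path to DSM's merged main configuration file used below is an assumption about the stock layout.

# parse the full configuration, including the new filter file, without starting the daemon;
# a zero exit status means the syntax is valid
# (path to DSM's main syslog-ng config is assumed)
syslog-ng --syntax-only -f /etc/syslog-ng/syslog-ng.conf && echo "syntax OK"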
Matching these messages is not by itself enough to remove them from the log file, however, because a superseding match command is embedded in DSM's syslog-ng configuration. The second file adds our filter to a dynamic exclusion list that DSM's syslog-ng configuration compiles from a special folder. It contains only one line:

and not filter(f_smart)

Save it to /usr/local/etc/syslog-ng/patterndb.d/include/not2msg/ESXiSmart

Reboot to activate the new configuration, or just restart syslog-ng with this command:

# synoservice --restart syslog-ng

If you want to make sure that your syslog-ng service is still working correctly, generate a test log entry:

# logger -t "test" -p error "test"

and then check /var/log/messages as above. If you have made no mistakes in the filter files, you should see the test entry, and the bogus SMART messages should stop.

As this solution only modifies extensible structures under /usr/local, it should survive an upgrade as long as there is no major change to the message syntax.
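For convenience, the steps above can be collected into a short shell sketch; it uses only the paths and commands already given and must be run as root:

# create the one-line exclusion include and activate it
mkdir -p /usr/local/etc/syslog-ng/patterndb.d/include/not2msg
echo 'and not filter(f_smart)' > /usr/local/etc/syslog-ng/patterndb.d/include/not2msg/ESXiSmart
synoservice --restart syslog-ng
# send a test entry and confirm it appears while the SMART chatter stays gone
logger -t "test" -p error "test"
tail -n 20 /var/log/messages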
    1 point
  2. Here I have gathered all the versions, starting from 7.2.0, in one place. All of them are patched. Starting with 8.1.2 there is a reboot once a day. As soon as Вирус finishes with the latest version, I will add it as well. Guys, there is no working version for ARM yet; perhaps there will be one in the future. https://mega.nz/folder/q80zQATS#1VAWvg4Dr0rfSnRjM5X9pQ
    1 point
  3. If we are talking not about uptime specifically, but about how long the server has been in use, here are my modest numbers:
    1 point
  4. There really has not been a good solution to this problem. However, here is a method to suppress spurious SMART syslog messages.
    1 point
  5. A very good person has built Synology installation packages for TorrServer: http://4pda.ru/forum/index.php?showtopic=889960&st=10180#entry96542263 He deserves 100+ karma for that. For XPEnology, only the amd64.spk version is of interest, so I am attaching it: TorrServer_1.1.76_21-linux-amd64.spk
    1 point
  6. No offense... "That kind of share" being which kind of share, exactly? SMB over VPN? Is access to the share for you alone, or for several users?
    1 point
  7. OVAs for DSM 6.2.1-23824 Update 4

• DS918+ (requires a Haswell/Braswell or newer CPU) • VM HW Level 10 (ESXi 5.5 or newer) • PVSCSI • VMXNET3
https://mega.nz/#!slFUCIwT!QHzujgbJeGtMKE5W2pvg8UoK7T6TputqQwFZHuNhxmY

• DS3617xs (requires hardware that's supported by default DSM) • VM HW Level 10 (ESXi 5.5 or newer) • SATA • E1000E
https://mega.nz/#!5wlSQCLK!WHVVNloohedGa_nAB6pgPMvC-twWlR32arZ0JqaFMvM

VMware Tools: https://mega.nz/#!Q1FGyAbY!lmrry2WXNd7Lp7AtSsrduPpnlWPzEpPV9L96jrZn6HQ

Deploy the OVF, add disk(s) or pass through a controller, find the VM with Synology Assistant and click Install. Optionally, if you have an ESXi Enterprise license, you can change the serial port to network (server, telnet://:1024) and get remote telnet access.

If you want to pass through an Intel controller that does not host the ESXi boot device or a datastore, install this VIB and restart: https://mega.nz/#!p0dAhYYb!7AWamOXE6y0z-PBlW4VqtS1gYNuw-uG-dKYTuyI5tQM

By default this will work with any combination of one or two 2-, 4-, 6- or 8-port SATA/SAS controllers. Changes to SasIdxMap/SataPortMap/DiskIdxMap/MaxDisks are only required when using two 8+ port SAS controllers, a single 12+ port SAS/SATA controller, or more than two extra controllers (more than three if you count the SATA controller for synoboot); see the note below.
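For context, these values are kernel boot arguments set in the loader's grub.cfg. The line below is only an illustration (the variable name and defaults vary by loader build, so treat it as an assumption), and the right values depend entirely on your controller layout:

# grub.cfg fragment (illustrative only; adjust to your controllers)
# SataPortMap = ports per SATA controller, DiskIdxMap = starting disk index per controller (hex)
set sata_args='DiskIdxMap=0C SataPortMap=1 SasIdxMap=0'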
    1 point
  8. Thanks for the advice. First, this is a system bash shell script, not node.js or JavaScript. The reason I released the tool as a binary is that most DSM users do not understand shell scripts, so I wanted to minimize the chance of malfunction caused by an accidental edit. Also, I am not a professional developer, and I was embarrassed to publish a script that was put together imprecisely. For those reasons I uploaded it as an executable file, and I have thought about how that could be misunderstood. It is up to you to decide whether or not to trust it. If you like, publishing the source is not difficult; please refer to the following. In return, please do not blame me for how it is made.
    1 point
  9. USING PHYSICAL RDM TO ENABLE NVMe (or any other ESXi-accessible disk device) AS A REGULAR DSM DISK

Summary: Heretofore, XPEnology DSM under ESXi using virtual disks has been unable to retrieve SMART information from those disks; disks connected to passthrough controllers work, however. NVMe SSDs are now verified to work with XPEnology using ESXi physical Raw Device Mapping (RDM). pRDM allows the guest to read and write directly to the device while still virtualizing the controller. NVMe SSDs configured with pRDM are about 10% faster than as a VMDK, and the full capacity of the device is accessible. Configuring pRDM using the ESXi native SCSI controller set specifically to the "LSI Logic SAS" dialect causes DSM to generate the correct smartctl commands for SCSI drives. SMART temperature, life remaining, etc. are then properly displayed in DSM, /var/log/messages is not filled with spurious errors, and drive hibernation should now be possible. EDIT: the SAS controller dialect works on 6.1.7 only (see this post).

Like many other posters, I was unhappy with ESXi filling the logfiles with SMART errors every few seconds, mostly because it made the logs very hard to use for anything else. Apparently this also prevents hibernation from working. I was able to find postings online using ESXi and physical RDM to enable SMART functionality under other platforms, but this didn't seem to work with DSM, which apparently tries to query all drives as ATA devices. This is also confirmed by synodisk --read_temp /dev/sdn returning "-1".

I also didn't believe that pRDM would work with NVMe, but in hindsight I should have known better, as pRDM is frequently used to access SAN LUNs and is always presented as SCSI to the ESXi guests. Here's how pRDM is configured for a local device: https://kb.vmware.com/s/article/1017530

If you try this, understand that pRDM presents the whole drive to the guest - you must have a separate datastore to store your virtual machine and the pointer files to the pRDM disk! By comparison, a VMDK and the VM that uses it can coexist on one datastore. The good news is that none of the disk capacity is lost to ESXi, as it is with a VMDK.

Once configured as a pRDM, the NVMe drive showed up with its native naming and was accessible normally. Now the smartctl --device=sat,auto -a /dev/sda syntax worked fine! Using smartctl --device=test, I found that the pRDM devices were being SMART-detected as SCSI, but, as expected, DSM would not query them. NVMe device performance received about a 10% boost, which was unexpected based on VMware documentation. Here are the mirroring operation results:

root@nas:/proc/sys/dev/raid# echo 1500000 >speed_limit_min
root@nas:/proc/sys/dev/raid# cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
<snip>
md2 : active raid1 sdb3[2] sda3[1]
      1874226176 blocks super 1.2 [3/1] [__U]
      [==>..................]  recovery = 11.6% (217817280/1874226176) finish=20.8min speed=1238152K/sec
<snip>

Once the pRDM drive was mirrored and fully tested, I connected the other drive to my test VM to try a few device combinations. Creating a second ESXi SATA controller has never tested well for me, but I configured it anyway to see if I could get DSM to use SMART correctly. I tried every possible permutation, and the last one was the "LSI Logic SAS" dialect of the virtual SCSI controller... and it worked! DSM correctly identified the pRDM drive as a SCSI device, and both smartctl and synodisk worked!
root@testnas:/dev# smartctl -a /dev/sdb
smartctl 6.5 (build date Jan 2 2018) [x86_64-linux-3.10.102] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:               NVMe
Product:              INTEL SSDPE2MX02
Revision:             01H0
Compliance:           SPC-4
User Capacity:        2,000,398,934,016 bytes [2.00 TB]
<snip>
SMART support is:     Available - device has SMART capability.
SMART support is:     Enabled
Temperature Warning:  Disabled or Not Supported

=== START OF READ SMART DATA SECTION ===
SMART Health Status: OK
Current Drive Temperature:     26 C
Drive Trip Temperature:        85 C
<snip>

root@testnas:/dev# synodisk --read_temp /dev/sdb
disk /dev/sdb temp is 26

Finally, /var/log/messages is now quiet. There is also a strong likelihood that drive hibernation is possible, although I can't really test that with NVMe SSDs.

Postscript PSA: My application for pRDM was to make enterprise NVMe SSDs accessible to DSM. Since DSM recognizes the devices as SSDs, it offers scheduled TRIM support (which I decided to turn on over 18 months later). The TRIM job corrupted the array and flagged the member disks as faulty. A full recovery was possible, but I don't know whether that was due to an incompatibility between NVMe drives and the DSM TRIM implementation, or an unexpected problem with pRDM not supporting TRIM correctly. You've been warned!
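For anyone who prefers the ESXi command line over the KB article's steps, here is a minimal sketch of creating the physical RDM pointer file. The device identifier and datastore path are placeholders (assumptions), and the mapped disk must still be attached to a virtual SCSI controller set to the LSI Logic SAS dialect as described above.

# list local devices to find the NVMe disk's identifier (the name below is a placeholder)
ls /vmfs/devices/disks/
# create a physical-mode RDM pointer file on a datastore separate from the mapped disk
vmkfstools -z /vmfs/devices/disks/t10.NVMe____EXAMPLE_DEVICE_ID /vmfs/volumes/datastore1/testnas/nvme-rdm.vmdk
# then add nvme-rdm.vmdk to the VM as an existing hard disk on the SCSI controller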
    1 point