XPEnology Community

Everything posted by flyride

  1. Wow. Just wow. Try starting here instead of the loader development thread:
  2. Try editing /etc.defaults/synoinfo.conf. You did not post much information about your system. Are your disks passthrough? How many controllers? How many ports on each controller? Did you have custom settings for DiskIdxMap and/or SataPortMap in your loader?
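For reference, those two settings live in the loader's grub.cfg. Here is a sketch with illustrative values only, not a recommendation for your hardware: DiskIdxMap takes two hex digits per controller (the starting disk index for that controller) and SataPortMap takes one digit per controller (the number of ports it presents).
# grub.cfg excerpt - example values, adjust for your own controller layout
# controller 1 (loader) mapped to index 0x0C, controller 2 (data disks) starting at 0x00 with 4 ports
set sata_args='DiskIdxMap=0C00 SataPortMap=14'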
  3. No, DSM is installed to all available block storage devices for redundancy (which is another reason not to present a monolithic device to your VM). All you need is the boot loader on the first Virtual SATA controller as device 0:0. Connect your RDM drives in sequence to the second Virtual SATA controller as devices 1:0, 1:1, 1:2, etc. A second virtual disk isn't required if you have RDM devices online. Every disk has three partitions: Linux (the DSM system), swap, and data. When you run the DSM install, it will mirror the Linux and swap partitions as RAID 1 across the first two partitions of every disk, respectively. Once DSM is fully installed, you can use the data space for whatever array configuration and volume layout you wish.
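If you want to verify that layout after installation, a quick sketch over SSH using standard Linux tools that ship with DSM (disk and array names will vary on your system):
# fdisk -l /dev/sda
# cat /proc/mdstat
# mdadm --detail /dev/md0
md0 is the DSM system partition (RAID 1 across all disks), md1 is swap, and md2 and up are your data arrays.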
  4. Always use DSM RAID; btrfs self-healing features and advanced disk management are the whole point of the system.
  5. RDM is probably your best choice; connect the RDM disks to the second virtual SATA controller for 6.2.x. SMART does not work because the code is custom compiled into Synology's utilities, and you can't override the SMART query strings like on a normal system. You can get rid of the incessant SMART errors in /var/log/messages by following the instructions here:
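If you haven't built the RDM mappings yet, here's a minimal sketch from the ESXi shell; the device identifier and datastore path are placeholders for your own:
# ls /vmfs/devices/disks/
# vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/DSM/disk1-rdm.vmdk
The -z flag creates a physical-mode RDM (use -r for virtual mode), and the resulting .vmdk pointer file is what you attach to the VM's second SATA controller.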
  6. pfSense will need to route between the VLANs (and the IP networks assigned to them). Your PC apparently is bound to both VLANs and IP networks, so it can see both. This is a basic networking problem. If you aren't sure how to fix it, I would question the value of your multiple VLANs in the first place.
  7. Your loader version doesn't make sense. 1.01? It should be 1.03b for DS3615xs and DSM 6.2. Are you using someone's prebuilt VM?
  8. Or even post a screenshot of your Storage Manager HDD page?
  9. Even if you lose the SHR setting, that only affects the ability to build and manage the array. It will still mount and you can go back in and add the setting if necessary. You cannot upgrade from the GUI. You will have to update the loader to 1.03b, change boot mode to BIOS if not already, and then do a migration install. Have backups. It does work but there are a lot of ways for it to go wonky.
  10. I was able to correct this with a migration install from DS3615xs to DS3617xs. This reinstalled enough of the OS that the issue (and a couple of other odd things) was resolved.
  11. This thread is a year old and OP hasn't been back since then. Seems unlikely that he will answer?
  12. That is strange. I tried it, but it did not do anything for me. What versions and patch levels of ESXi are you on?
  13. It won't hurt to install it on your current system. But it can also be installed afterward; it isn't a problem to boot without it, you will just have "ESATA" shares exposing your loader until you install it. If you think about a clean install (or a major version upgrade involving a loader change), there is no way to install the script ahead of time, so it must work without it.
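For reference, a sketch of the usual installation, assuming you have FixSynoboot.sh in your home directory (DSM runs scripts from this directory at boot):
# cp ~/FixSynoboot.sh /usr/local/etc/rc.d/
# chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh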
  14. Did you ever discover any more about this? I'm encountering the same thing, also in a VM.
  15. - Outcome of the update: UNSUCCESSFUL
- DSM version prior update: DSM 6.1.7 Update 3
- Loader version and model: Jun 1.03b - DS3615xs
- Using custom extra.lzma: NO
- Installation type: VM - ESXi 6.7U2
- Additional comments: Error 21 encountered on Migration Install. Attempted a Recovery Install and this also resulted in Error 21. Eventually had to do a clean 6.2.3 install on a new drive and import the arrays manually. This particular system was very complex - RDM NVMe boot drives, RAIDF1, passthrough SATA and 10GbE network, and multiple storage pools. I performed a number of simulated upgrades directly from 6.1.7 to 6.2.3, and they all completed successfully. However, I see that a small number of upgrade attempts are also reporting Error 21. EDIT 2020-05-20 - it seems most, if not all, of these are due to synoboot not being accessible, which is required to complete the update. As I was attempting a migration upgrade, there was no opportunity to load FixSynoboot.sh on a clean 1.03b loader. Potential alternative resolutions might be to 1) edit DiskIdxMap and SataPortMap in the grub runtime environment, 2) use a serial console to manually create synoboot prior to initiating the migration install, or 3) upgrade first to 6.2.1, apply FixSynoboot.sh, and then upgrade to 6.2.3.
  16. Is data still accessible now? Let the RAID transformation finish, then replace the crashed drive. A RAID rebuild is a lot faster than a conversion. What is the SMART status of the crashed drive? Are the sectors pending or reallocated? ESXi does not show you all SMART data. Sometimes if a drive runs outside its temperature spec it will mark sectors for replacement, but if they are later overwritten successfully they will be recovered. If the drive actually has permanent bad sectors, replace it. But if not, once the RAID transformation completes, just deallocate it and reallocate it and see if it recovers.
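If you want to check those attributes directly, here's a minimal sketch using the standard smartctl tool, run from a system that can see the physical disk (ESXi hides most SMART data from guests; /dev/sda is a placeholder):
# smartctl -A /dev/sda | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|Offline_Uncorrectable'
Pending sectors may recover on overwrite; a growing reallocated count means the drive has already retired sectors permanently.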
  17. There really has not been a good solution to this problem. However, here is a method to suppress spurious SMART syslog messages.
  18. The part of this thread that deals with virtual disk SMART errors in syslog (/var/log/messages) by attaching the disks to SCSI controllers is essentially obsolete with 6.2.x, because the 1.03b and 1.04b loaders really work best with virtual SATA controllers. However, here's a different solution to suppress the messages:
  19. While it seems unlikely that we can easily map SMART data into DSM virtual disks (after 18 months nothing has happened), here is a solution that will suppress bogus syslogs.
  20. One annoyance when running DSM under ESXi is that virtual disks can't properly handle its SMART interrogations. This is because Synology embedded a custom version of the smartctl binary into its own libraries and utilities, ignoring the standard config files that could generate compatible queries or suppress them. The result is spurious error messages logged to /var/log/messages every few seconds, wasting disk space and SSD lifecycle, and making it hard to see what's happening. If you use virtual disks and are not familiar with this, monitor the messages logfile with the command below to see how frequently DSM tries (and fails) to query the drives.

# tail -f /var/log/messages

The problem has been around for a long time and is well-documented here. An indirect fix was discovered when the virtual disks were attached to the LSI Logic SAS dialect of the ESXi virtual SCSI controller, but this solution worked reliably only under 6.1.x. On 6.2.x, the virtual SCSI controller tends to result in corrupted (but recoverable) arrays. I recently migrated my 6.1.7 system to 6.2.3, so I had to convert my virtual SCSI controller to SATA, and of course, the logfile chatter was back.

I don't really care about SMART on virtual disks (and you probably don't either), so I decided to get rid of the log messages once and for all. Syslog-ng has a lot of capability to manage the log message stream, so I knew it was possible. The results follow. We need to install two files, the first a syslog-ng filter:

# ESXiSmart.conf
# edit the [bracket values] with drive slots where SMART should be suppressed
# in this example /dev/sda through /dev/sdl are suppressed
filter fs_disks   { match("/sd[a-l]" value("MESSAGE")); };
filter fs_badsec  { match("/exc_bad_sec_ct$" value("MESSAGE")); };
filter fs_errcnt  { match("disk_monitor\.c:.*Failed\ to\ check" value("MESSAGE")); };
filter fs_tmpget  { match("disk/disk_temperature_get\.c:" value("MESSAGE")); };
filter fs_health  { match("disk/disk_current_health_get\.c:" value("MESSAGE")); };
filter fs_sdread  { match("SmartDataRead.*read\ value\ /dev/.*fail$" value("MESSAGE")); };
filter fs_stests  { match("SmartSelfTestExecutionStatusGet.*read\ value\ /dev/.*fail$" value("MESSAGE")); };
filter fs_tstget  { match("smartctl/smartctl_test_status_get\.c:" value("MESSAGE")); };
filter fs_allmsgs { filter(fs_badsec) or filter(fs_errcnt) or filter(fs_tmpget) or filter(fs_health) or filter(fs_sdread) or filter(fs_stests) or filter(fs_tstget); };
filter f_smart    { filter(fs_disks) and filter(fs_allmsgs); };
log { source(src); filter(f_smart); };

Save this to /usr/local/etc/syslog-ng/patterndb.d/ESXiSmart.conf

You will need to edit the string inside the brackets on the first "fs_disks" line to refer to those disks that should be SMART suppressed. If you want all SMART errors suppressed, just leave it as is. In my system, I have both virtual and passthrough disks, and the passthrough disks SMART correctly. So as an example, I have [ab] selected for the virtuals /dev/sda and /dev/sdb, leaving SMART log messages intact for the passthrough disks. Please note that the file is extremely sensitive to syntax. A missing semicolon, slash or backslash error, or an extra space will cause syslog-ng to fail completely and you will have no logging.

To make sure it doesn't suppress valid log messages, this filter matches SMART-related error messages with references to the selected disks. However, it cannot actually remove them from the log file, because there is a superseding match command embedded in DSM's syslog-ng configuration. The second file adds our filter to a dynamic exclusion list that DSM's syslog-ng configuration compiles from a special folder. There is only one line:

and not filter(f_smart)

Save it to /usr/local/etc/syslog-ng/patterndb.d/include/not2msg/ESXiSmart

Reboot to activate the new configuration, or just restart syslog-ng with this command:

# synoservice --restart syslog-ng

If you want to make sure that your syslog-ng service is working correctly, generate a test log:

# logger -t "test" -p error "test"

And then check /var/log/messages as above. If you have made no mistakes in the filter files, you should see the test log entry and the bogus SMART messages should stop. As this solution only modifies extensible structures under /usr/local, it should survive an upgrade as long as there is no major change to message syntax.
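If you prefer to stage everything over SSH in one pass, here's a minimal sketch; it assumes you have already saved the filter above as ESXiSmart.conf in your home directory, and it uses only the paths and commands from this post:
# cp ~/ESXiSmart.conf /usr/local/etc/syslog-ng/patterndb.d/ESXiSmart.conf
# mkdir -p /usr/local/etc/syslog-ng/patterndb.d/include/not2msg
# echo 'and not filter(f_smart)' > /usr/local/etc/syslog-ng/patterndb.d/include/not2msg/ESXiSmart
# synoservice --restart syslog-ng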
  21. If you are not running heavy apps and are mostly seeking file sharing services (if I read your first post correctly), any of these platforms will do. The Xeon D and Atom platforms have the advantage of on-board 10Gbps. Facing a similar decision years ago, I selected the SuperMicro X11SSH platform for 8 SATA ports and added a dual-port Mellanox ConnectX-3 PCIe card for 10Gbps networking. I'm not sure about your reasoning for dual 10Gbps networking? Do you have two workstations to connect at 10Gbps (that's my use case, dual ports to avoid a 10Gbps switch)? Binding two ports to one workstation will not be very helpful performance-wise, and the drives you have specified won't consistently saturate even a single 10Gbps link. For your use case, I would (and did) select enterprise SSDs instead of spindles; a 3-drive SATA SSD array maxes out the 10Gbps interface, and the surplus market is basically US$90/TB now. Load up with a lot of RAM and forgo any SSD cache; it's a waste for your workload. If you use SSDs for your array, you will want RAID F1, which precludes NVMe cache in any case.
  22. I noticed that the eSATA volume removal method left a reference in a volume table, which caused constant errors to be posted to /var/log/messages. Also, under certain circumstances the folder structure is left intact from the eSATA volume mount (this may actually be normal behavior with eSATA/USB shares). Neither issue affects functionality or is visible in normal operations, but I updated the script to clean them both up.
  23. Will update the platform and loader grid with this info, thanks.