XPEnology Community

flyride

Moderator
Everything posted by flyride

  1. https://www.youtube.com/watch?v=h27AcB70Mvc Me: https://webcache.googleusercontent.com/search?q=cache:-tO3NcOuWFgJ:https://www.synology.com/en-global/products/DS918%2B+&cd=2&hl=en&ct=clnk&gl=us I should have said 9 drives, not 8. Yes, they are via external bay. However, this is an example of the facts we are both citing being totally useless and irrelevant to the thread. The OP is potentially making an incorrect decision based on a misunderstanding of how the loader is configured, and also on a faulty understanding of Synology's product line. So from a practical standpoint, for his/her sake, my advice stands. But thanks for retrieving configuration file settings we both know to be there.
  2. ESXi uses read-only storage for its OS and runs from a RAMdisk; in our configurations that storage is typically a USB pen drive (functionally replacing the USB loader for XPEnology) or a DOM. ESXi also needs read/write storage to function internally, which is called "scratch." That is where ESXi stores swap files, logs, VM configuration and state information, and virtual disk files. So you need to plan some sort of physical storage for that, per my prior post. An SSD attached to the C224 per your last post will work fine.
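     By way of illustration, one common way to point ESXi's persistent scratch at a datastore on that SSD is the ScratchConfig advanced option. This is a minimal sketch only; the datastore name and folder are assumptions, and the host needs a reboot for the change to take effect:
     mkdir /vmfs/volumes/ssd-datastore/.locker     # hypothetical datastore name
     vim-cmd hostsvc/advopt/update ScratchConfig.ConfiguredScratchLocation string /vmfs/volumes/ssd-datastore/.locker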
  3. EDIT: Ack, I need to read. You can passthrough your LSI and use your onboard SATA ports for scratch disks. (take or leave this advice below, you have options) So your alternative is to individually RDM each drive and import the ones for DSM, while leaving one of the motherboard connected drives for scratch. I would prefer the simplicity of just passing through your C224 controller to your DSM VM. So for ESXi scratch, consider an NVMe drive connected via PCIe slot, like this: https://www.amazon.com/QNINE-Adapter-Express-Controller-Expansion/dp/B075MDH28Y
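     If you do go the RDM route, a rough sketch of mapping one physical disk into the DSM VM from the ESXi shell looks like this; the device and datastore names below are placeholders, and -z creates a physical-mode RDM pointer file that you then attach to the VM as an existing disk:
     ls /vmfs/devices/disks/                        # identify the physical disk to map
     mkdir -p /vmfs/volumes/scratch-ssd/rdm         # folder for the RDM pointer files
     vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL /vmfs/volumes/scratch-ssd/rdm/dsm-disk1.vmdk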
  4. If you are not changing the DSM version or platform, no migration is required. What are you running now? In short, use the vdisk image of the same loader you are running now, pass through your disk controller, and it will boot right up. With that strategy you will need some other connected storage for VMWare scratch, VM configurations, and vdisks for the non-DSM VM's (if you don't want to NFS from DSM). That's a good role for an M.2 disk if your motherboard has the capability. That said, have a backup in case it goes wrong.
  5. The loader overrides the native hardware device limit. However, by definition the 918+ supports 8 drives; the loader expands that to 16.
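     For reference, the overrides live in synoinfo.conf. A quick way to check the effective limit on a running box; the values shown here are examples only, not guaranteed defaults:
     grep -E "^maxdisks|^internalportcfg" /etc.defaults/synoinfo.conf
     # maxdisks="16"
     # internalportcfg="0xffff"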
  6. The system needs to be shut down for a snapshot. Installing VM-Tools does not matter. There is nothing unique about it, the normal ESXi rules apply.
  7. I don't know that those are "cons." The synoboot problem happens to baremetal installs as well, but not as frequently. I'm not sure what you mean by "different boot process" as it's the same loader... the boot option just tries a different strategy to address/hide the boot device. Most folks select ESXi to help with hardware compatibility issues. You can also create test VM's and run trial upgrades without risking your production system. You have determined that VM flexibility is better (and I'd also suggest that the VM environment with ESXi is more featured/robust). The real cons, in my opinion, are that wrapping everything in a hypervisor takes some system resources (but if you want to run VM's inside of DSM it's a wash) and the recent DSM hardware features (transcoding and NVMe cache) are not really viable in a virtual machine.
  8. The lib file is overwritten by 6.2.3-25426-2 and the current patch no longer works. Anyone using a read/write cache is strongly urged to remove it prior to upgrading.
  9. - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426
     - Loader version and model: Jun's Loader v1.04b DS918+
     - Using custom extra.lzma: No
     - Installation type: VM - ESXi (with FixSynoboot.sh installed)
     Test system

     - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426
     - Loader version and model: Jun's Loader v1.03b DS3615xs
     - Using custom extra.lzma: No
     - Installation type: VM - ESXi (with FixSynoboot.sh installed)
     Test system

     - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426
     - Loader version and model: Jun's Loader v1.03b DS3617xs
     - Using custom extra.lzma: No
     - Installation type: VM - ESXi (with FixSynoboot.sh installed)
     Test system

     - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426
     - Loader version and model: Jun's Loader v1.04b DS918+
     - Using custom extra.lzma: No
     - Installation type: BAREMETAL
     J4105-ITX production system

     - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426
     - Loader version and model: Jun's Loader v1.03b DS3617xs
     - Using custom extra.lzma: No
     - Installation type: VM - ESXi (with FixSynoboot.sh installed)
     Production system
  10. I don't think the J4105 has a speaker/buzzer. If it won't init the embedded video card, there's something wrong.
  11. I'm glad you were able to sort out a solution - great initiative on your part!
  12. What's UEFI + CSM? You want to make sure you are booting into BIOS or Legacy mode. UEFI mode is incompatible with loader 1.03. That is likely to be your problem. Unfortunately the nomenclature is not standard from BIOS to BIOS, so you will have to do some experimenting. If needed, post some relevant BIOS setup screenshots.
  13. I don't think anyone has proven a Ryzen APU for transcoding under DSM yet. And obviously IIS won't work; you'll have to substitute another web server. Everything else should be possible.
  14. Any of the Jxxxx-ITX motherboards will do in regard to your calculation. The J4105 chip is 10W and I doubt the rest of the mobo uses more than a few watts with no real PCIe expansion. I think you have plenty of power margin.
  15. Just do the math based on the maximum wattage rating of all the parts. This website will calculate it for you: https://pcpartpicker.com/ Based on what you posted, you should be fine IMHO.
  16. Lots of people run headless. No keyboard or monitor at all.
  17. What type/how many SSDs are in the NAS? The PC-to-NAS copy looks like the RAM cache on the NAS fills up and then you see the actual write capability, but that seems pretty slow. It wouldn't surprise me if you are using consumer SSDs; they have really poor sustained write performance. They will burst for a short while until their local write cache is full, and then they get really slow. It also takes the NAS a short while to flush its RAM cache before it will return to burst speed (this might affect your NAS-to-PC test too, depending on the timing). You can see this for yourself when the drives keep running for a bit after the copy to the NAS is done.

     However, I'm confused by your video: at 1:28 are you initiating the copy from NAS to PC, or were you running both copies at the same time? It seems to start at 20%, which is weird. If you ran a copy in each direction at the same time, your measurements would be highly skewed. The NAS-to-PC write seems slow regardless; two decent SSDs of any type should be able to fill a 10Gbit pipe, and your NVMe drive should write a lot faster than that. Don't forget you are testing both sides of the connection at all times. The 970 Pro is a good NVMe SSD, but it won't sustain indefinite writes either, and in particular it does thermal throttle. What CPU/motherboard is on your client side?

     Are you using jumbo frames? I'm not sure that's your problem right now, but if you want to max out the 10Gbit link they will be necessary with CIFS/NFS etc. Over time, I've been able to tweak out any variability or drops for reads or writes, and now my system's 10Gbit transfer rate is a flat 1+ GB/s indefinitely. The biggest factors in achieving that were: 1) enterprise-class SSDs on the NAS, 2) jumbo frames, and 3) adequate cooling on the client NVMe SSD. Oh, and I'm using RAID F1 (currently with five disks), but it will handle the write load with three enterprise drives. RAID 0 should not be necessary for the performance you're looking for.

     EDIT: Just curious, are you also using DSM's SSD cache? That will probably give you worse performance with this setup.
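     For reference, a quick way to check and temporarily test jumbo frames from the DSM shell; the interface name here is an assumption, the persistent setting belongs in Control Panel > Network > Network Interface (MTU 9000), and the switch and client must be set to match:
     ifconfig eth2 | grep -i mtu     # confirm the current MTU on the 10GbE interface
     ifconfig eth2 mtu 9000          # temporary test only; reverts at reboot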
  18. If you really think it's the NIC, change the cable first.
  19. I believe you do not need to be concerned. The design of Linux in general is to manage log files in a way that does not exhaust the storage available. Spurious data in logs does use up space, but I did not suggest there was a crash problem solely due to log utilization. Logging is a good thing; zeroing log files is an unnecessary and forensically destructive practice. In DSM, syslog events are logged by default to /var/log/messages. Each Linux installation has syslog rules that split off certain logs to other files. There are also multiple ingress points. Kernel events, for instance, are independently logged to the kernel log and the syslog default. There are a number of unimpactful, essentially useless, and unmanageable error logs due to unsupported system events in XPEnology. My intention was solely to improve the signal to noise ratio in the log files (make them more useful) by suppressing those types of logs.
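     To see that dual routing in practice (paths are the DSM defaults; no output shown since it varies by system), compare the kernel ring buffer with the syslog default target:
     dmesg | tail -n 5                # kernel events straight from the ring buffer
     tail -n 5 /var/log/messages      # kernel events also land here via syslog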
  20. I suggest you read up on logrotate. TL;DR: the system won't run out of space for logs; it self-manages.
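     As a purely illustrative example of what a logrotate rule looks like (DSM's actual rules live under /etc/logrotate.d/ and will differ), a stanza like this caps a log's size and keeps a fixed number of compressed generations:
     /var/log/messages {
         size 1M
         rotate 7
         compress
         missingok
         notifempty
     }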
  21. Actually now that I'm looking at it, you are trying to filter non-disk messages. All the other filters presume and require that there is a /dev/sdx reference in the log entry. You'll need to modify the file in this way:
     filter fs_cachemonitor { match("cache_monitor\.c:.*Can't support DS with cpu number" value("MESSAGE")); };
     filter fs_allmsgs { filter(fs_badsec) or filter(fs_errcnt) or filter(fs_tmpget) or filter(fs_health) or filter(fs_sdread) or filter(fs_stests) or filter(fs_tstget); };
     filter fs_smart { filter(fs_disks) and filter(fs_allmsgs); };
     filter f_smart { filter(fs_smart) or filter(fs_cachemonitor); };
     log { source(src); filter(f_smart); };
  22. A couple of comments, most of which probably don't apply to your situation:
     - Use care not to suppress messages that are actually informative. Logs are there for a reason, so know what you are suppressing and why.
     - If the log entry is visible in dmesg, then it's a kernel event and the stated strategy won't work without some minor modification.
     - Your example only needs one filter (the 'repeat' message is derived from the source line). I'd choose this filter over what you have so that it survives a version upgrade, as long as the text is not changed by Synology:
       cache_monitor\.c:.*Can't support DS with cpu number
     Also, you can test your regular expression in this way:
     $ echo "2020-07-11T18:00:16+09:00 min synostoraged: cache_monitor.c:1557 Can't support DS with cpu number (1)" | grep -e "cache_monitor\.c:.*Can't support DS with cpu number"
     2020-07-11T18:00:16+09:00 min synostoraged: cache_monitor.c:1557 Can't support DS with cpu number (1)
     Did you remember to add the exclusion file in not2msg? Did you restart syslog-ng?
  23. Try writing very large files. There is a lot of OS overhead in writing smaller files, and you probably won't see gig speed on a single spindle with that workload. I'm not sure there is anything wrong; if you really want to know the transfer rate of the drives and system, you'll need to do more synthetic testing under controlled circumstances.
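     For a controlled sequential test on the NAS itself, something like this over SSH takes the network and client out of the equation; the volume path is an assumption, and conv=fdatasync forces the data to disk so the RAM cache doesn't inflate the result:
     dd if=/dev/zero of=/volume1/ddtest.bin bs=1M count=8192 conv=fdatasync
     rm /volume1/ddtest.bin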
  24. If you want NVMe cache capability (or hardware transcoding), you need 1.04b/DS918+. If you want 16-thread capability or RAID F1, you need 1.03b/DS3617xs; it also has better base support for HBAs, but extra.lzma updates extend that capability to DS918+. This information is prominently featured in post #1 of this thread. FWIW, DSM is otherwise the same version; the underlying kernel version is functionally irrelevant.