flyride

Everything posted by flyride

  1. Yes, it should all work. However, that processor cannot be fully utilized (24 threads vs. a maximum supported 16 threads on DS3617xs). ECC is transparent to the Linux OS, so there will be benefit with no specific action on your part. I'm not a huge fan of the Syno VMM implementation; you might consider running DSM in ESXi, and other VM's in parallel as needed (you can set up separate storage for ESXi VM's, or run NFS out of DSM).
  2. Any disk that is "Initialized" has the system partition on it. Just add a VMDK and put a Basic Storage Pool on it. The way I would do this is to add a second SATA Controller (SATA 0:0 is still the loader, SATA1:0 is the new VMDK). Then you need to understand the PCI slot order of each controller. It should be SATA0 with the lowest slot #, SATA1 with the next lowest, and finally your passthrough controller with the highest. If it differs from that, you will need to reorder the SataPortMap and DiskIdxMap parameters accordingly. Assuming that the slot order is SATA0, SATA1, then the passthrough controller, no reordering should be needed; you can confirm the actual ordering with the commands sketched below.
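    A quick, hedged way to check the controller ordering, assuming you have SSH access to a Linux shell on the box (output will vary with your hardware):

      lspci | grep -i -E 'sata|ahci|sas'      # storage controllers and their PCI bus addresses
      ls -l /sys/block/ | grep -E 'sd[a-z]'   # each sdX symlink includes the PCI path of the controller it hangs off

    Controllers at lower PCI addresses generally enumerate first, which is what the SataPortMap/DiskIdxMap digit positions correspond to.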
  3. There is no capability to install a different NIC. If there is a JMicron driver available in @IG-88's extra.lzma, you could try that. Otherwise that device is not suitable for XPe. https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/
  4. This is unlikely. It is much more likely that the boot mode has not been properly set for the 1.03b loader (CSM/Legacy mode is required).
  5. Fan control is via BIOS; there is no fan control in DSM for non-Synology hardware.
  6. If you can see the loader in DSM and you have synoboot devices, you need to adjust grub.cfg for your particular configuration. Post a new question thread (this is not a synoboot problem anymore) with detail on all your controllers (virtual and passthrough), the vmdk's attached to those controllers, and the number of physical disks on the passthrough controller, and we can make suggestions on amending grub.cfg.
  7. Look at some of the other posts to troubleshoot. Any results from ls /dev/synoboot*? Other things that could be factors are your sata args from grub.cfg and your virtual machine configuration (post these if you wish). A few commands that may help are sketched below.
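    For reference, some hedged checks from an SSH session (run as root; exact output depends on your loader and platform):

      ls /dev/synoboot*        # a healthy install usually shows synoboot, synoboot1, synoboot2
      fdisk -l /dev/synoboot   # partition layout of the loader device
      cat /proc/cmdline        # the sata args actually handed to the kernel by grub.cfg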
  8. I believe the Control Panel System Temp hover indication is the temperature of the hottest disk, not the CPU. The cache device(s) are excluded from this assessment of highest temperature.
  9. Unfortunately there is not a simple answer that applies to everyone. It depends on the hardware you have and the platform you have picked (DS3615xs/DS3617xs/DS918+). On my ASRock 4205-ITX baremetal I can see temperature per core. Virtual servers don't show anything within the VM's. Poke around the /sys/class/hwmon, /sys/class/thermal, and /sys/devices/platform directory structures for files referencing temperature (a few example commands are below). Installing lm-sensors or acpi may also help.
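    As a sketch of that poking around (paths are examples and not all of them will exist on every platform):

      grep . /sys/class/thermal/thermal_zone*/temp 2>/dev/null    # ACPI thermal zones, in millidegrees C
      grep . /sys/class/hwmon/hwmon*/temp*_input 2>/dev/null      # hwmon sensor readings, in millidegrees C
      find /sys/devices/platform -iname '*temp*' 2>/dev/null      # anything else that mentions temperature
      sensors                                                     # only after lm-sensors is installed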
  10. There are controllers that behave that way (assign slots based on drive connections) but most map port to slot in a consistent manner. Either way, all your ports are addressable and you can have some confidence in how it will work. I would not modify SataPortMap since all your ports are addressable by the default configuration.
  11. Per the previous post, you have some additional testing to do to make sure that all the drives will be seen. It is very useful to have a backup of your data, as the migration process is not without risk. If you are absolutely sure that all the drives can be reached by the new system install, you can migrate the drives in one of two ways: 1) Leave the test disk installed as the first drive (controller #1, port #1), remove its Storage Pool (but the working DSM will remain intact). Then shut it down and install the 8 drives. When you boot back up, your old Storage Pool and volume should be visible.
  12. One characteristic of XPe is that, because of the hacky nature of what it is, everyone learns by doing - by trial and error. This is why testing is very important. Install the system as a test. Put test drive(s) on controller port(s). See where the drive shows up (if at all) in Storage Manager. Make notes. Move the drive to another controller/port until you understand what is happening. Make any adjustments necessary so that you know which controllers and ports go to which slots. If it seems "wrong" (ports not available, ports skipped, etc.), make adjustments and/or report back with what you find.
  13. Burn a new, clean loader each time you start an install. A partial or failed install alters the loader so it won't behave the same the second time around. If that loader copy tried to install 6.2.4 it also won't let you install 6.2.3, which obviously is needed.
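    If it helps, writing a clean image from a Linux box is roughly the following (the target device is an example only; triple-check it, since dd overwrites whatever you point it at; Rufus in dd mode or Win32DiskImager do the same job on Windows):

      sudo dd if=synoboot.img of=/dev/sdX bs=4M conv=fsync status=progress   # /dev/sdX = your USB stick, not a data disk
      sync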
  14. Regarding QuickConnect: https://xpenology.com/forum/topic/9392-general-faq/?tab=comments#comment-82390 DSM heavily leverages GPL code, which essentially says that source code and use must be freely made available to all. DSM is an "aggregate" program with GPL and a few non-GPL components - particularly those that rely on Synology's own cloud services such as QuickConnect. Synology uses real serial numbers and registrations to determine valid connections to their cloud services. In the interest of maintaining appropriate use of DSM under XPenology, trying to use QuickConnect is discouraged.
  15. "i maintain the alternative of using openmediavault, if dsm breaks i can boot up omv from a disk or usb and have network access to my files and if synology gets nasty i will just switch to omv (btrfs and SHR are no problem, just work when starting omv)" Just a quick comment on this: if folks do feel the need to start reposturing themselves away from DSM because they are concerned with access to security patches, or just have to have the "latest" on whatever platform they want, it would probably make sense to steer away from using SHR - yes, it can be supported on OMV or plain old Linux.
  16. You can do this, but there will be some level of performance impact, and it's another layer of services to get corrupted and damaged. On ESXi my testing indicated something approaching a 10% cost to virtualizing storage. If the storage is fast enough, that probably doesn't matter. My preference is to pass through disks into a DSM VM and allow DSM to manage them completely. Part of the value of DSM is that it is really designed to work directly with the disk hardware for redundancy and data recovery. Of course, you then need separate storage for the hypervisor and other VM's.
  17. "Definitely not recommended. I think everyone agrees that bare metal installations are recommended over virtualized installations. Though, I have been using XPE in my private homelab for ages in ESXi with a direct-I/O attached LSI controller without any issue or complaints. That said, even though it is not recommended, it can be used. In a corporate environment I would always recommend buying a Syno device and living trouble-free when it comes to DSM updates." Hmm, not sure I 100% agree with this. Baremetal is generally simplest.
  18. DDNS is not tied to one service or another. You don't need more than one. When you set up a DDNS, the public name that you choose then is dynamically updated to point to your real (temporary) IP. When your real IP changes, the reference is updated by DDNS. That lets someone outside your network find your outside IP. You still need to make that IP available to specific services, meaning you will need to open port forwarding allowing remote file access: https://www.synology.com/en-us/knowledgebase/DSM/tutorial/File_Sharing/Configure_file_sharing_links https://www.synolo
  19. It's probably possible to disable swap (and it is certainly possible to omit slow drives from swap I/O by modifying the RAID1 array to use the slow drives as hotspares) but the swap space will always be reserved on every disk that is initialized for use with DSM (partition 1). So, if your objective is to recover the space, that is not possible. If your goal is to speed up the swap access and you have certain drives that are better able to handle the I/O, see this: https://xpenology.com/forum/topic/12391-nvme-optimization-baremetal-to-esxi-report
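    Roughly, the hotspare change looks like the following. This is a sketch only: /dev/md1 and /dev/sdXn are placeholders, so check /proc/mdstat for your actual swap array and member partition names, and have a backup before touching md arrays.

      cat /proc/mdstat                                     # identify the swap array (commonly md1) and its members
      mdadm --detail /dev/md1                              # confirm before changing anything
      mdadm /dev/md1 --fail /dev/sdXn --remove /dev/sdXn   # drop the slow drive's swap member (array goes degraded)
      mdadm --grow /dev/md1 --raid-devices=7               # shrink the active member count so the array is clean again (example: 8 -> 7)
      mdadm /dev/md1 --add-spare /dev/sdXn                 # return the slow member as a hot spare, out of the swap I/O path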
  20. Ok, a few things: First, SAS support is better on DS3617xs than on DS918+. You should consider moving to 1.03b and DS3617xs for the best results. It looks like you decided to uncomment the grub command

      set sata_args='SataPortMap=4'

    If you stay on DS918+, I would try the following:

      set sata_args='SataPortMap=1 DiskIdxMap=1000'

    The SataPortMap argument tells DSM to only use 1 slot (the loader) from the first controller, then DiskIdxMap assigns the first controller to slot 17 (effectively hiding the loader) and the second controller (hopefully your passthrough controller) to slot 1. A sketch of how to apply the change is below.
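    If you would rather edit grub.cfg from the running system than re-image the loader, something like this usually works (a sketch; the partition layout and path can vary by loader version, so adapt as needed):

      sudo -i                                 # root shell
      mkdir -p /tmp/synoboot
      mount /dev/synoboot1 /tmp/synoboot      # first partition of the loader holds grub/grub.cfg
      vi /tmp/synoboot/grub/grub.cfg          # change the set sata_args=... line
      umount /tmp/synoboot
      reboot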
  21. Sorry, what does this mean? You should be using the loader file from the official link. Are you using one vmdk or two? Post some relevant screenshots (Overview, HDD/SSD) from Storage Manager.
  22. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  23. 10GbE is always worth it. SSD cache is often not worth it. Cache wears out an SSD quickly, and the write option has many instances of corruption. It has nothing to do with NVMe vs. SATA. I always advocate for more RAM (which is automatically used for cache) rather than SSD cache. Or just run SSD's, period (line-rate 10GbE read/write sustained indefinitely with SSD RAIDF1 and no cache). Most M.2 dual slots will take one port from the SATA controller when in SATA mode, but in NVMe mode they are just a PCIe device with no impact to SATA. You haven't said what disks you are using.
  24. I also run ESXi and pass through my on-board SATA controller to the DSM VM. I have an NVMe drive that is used for the ESXi datastore and all the other VM's. How is the scratch storage/other VM datastores attached on your planned system? You can use the default serial number in the loader. No need to change it unless you are going to run multiple DSM instances at the same time. The MAC is only critical if you are going to use Wake-On-LAN - unlikely with a VM. Set up your loader VMDK on SATA Controller 0 (0:0) and nothing else on that controller. Don't add another vmdk to that controller.
  25. DSM and MD are software solutions. No hardware RAID is desired or required. You cannot change RAID 5 to RAID 6 using the UI. If you are not using SHR (you really have a RAID5), it can technically be done via command line. Remove your cache before trying anything like this. Have a complete backup. Be advised, it will take an extremely long time (4-5 days) for the conversion to complete, and performance will be worse using RAID 6.
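    For the record, the command-line conversion is roughly as follows. This is a sketch only with placeholder names: /dev/md2 is typically the first data array, /dev/sdXn is the data partition of the disk you are adding for the extra parity, and N is the new total member count. Do not attempt it without a full backup.

      cat /proc/mdstat                              # identify the data array and current member count
      mdadm --add /dev/md2 /dev/sdXn                # add the new disk's data partition as a spare first
      mdadm --grow /dev/md2 --level=6 --raid-devices=N --backup-file=/root/md2-reshape.bak   # backup-file must live outside the array being reshaped
      cat /proc/mdstat                              # reshape progress; expect it to run for days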