Everything posted by flyride

  1. Loader 1.03b, DS3617xs, DSM 6.2.3-25426; after install, upgrade to Update 3. Depending on the NIC you are using, you may need the updated drivers in extra.lzma. You must set the boot mode to CSM/Legacy if the BIOS supports UEFI (that server may predate UEFI, however, in which case the default boot mode may work). FMI: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  2. You might consider using the FAQ and downloads from this site, and not elsewhere. VID/PID is clearly covered in the install procedures. https://xpenology.com/forum/topic/9394-installation-faq/
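     For concreteness, a hedged sketch of where the VID/PID end up (the IDs shown are placeholders; use the values reported for your own USB stick, e.g. by lsusb on Linux or Device Manager on Windows):

         # lsusb on a Linux box reports something like: "ID 1234:5678 ..."
         # Those two values go into grub.cfg on the loader's first partition:
         set vid=0x1234
         set pid=0x5678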
  3. I think you have the latest. Regardless, it's a network driver, so if you have one that works, go with it; there is not much value in chasing a more recent rev, as newer versions usually only add support for newer silicon that you don't have.
  4. For most systems, Legacy == CSM https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/
  5. Why this: And not this: It's odd that you don't repeat what you have done for the 3020 that is working. In that vein, you already know you have to change from UEFI. The boot mode needs to go from UEFI to CSM; otherwise you must rewrite the loader for MBR. You might need extra.lzma for the NIC, but the only way to tell is to load it or run down the PCI ID of the embedded card (see the sketch below).
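     As a hedged illustration of running down that PCI ID (assuming SSH or a Linux live environment on the box; the output shown is made up):

         # List Ethernet-class PCI devices with their vendor:device IDs
         lspci -nn | grep -i ethernet
         # e.g. "02:00.0 Ethernet controller [0200]: ... [1234:5678]"
         # Compare that ID against the devices covered by the extra.lzma driver set.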
  6. DS414 is a 32-bit ARM model and cannot be migrated directly to the platforms that XPe supports. Regardless, you should install DSM clean on an XPe box and then copy your data instead of migrating, so that you are not hit later with the 16TB volume limit carried over from the 32-bit platform. FMI: https://www.synology.com/en-global/knowledgebase/DSM/tutorial/Backup/How_to_migrate_between_Synology_NAS_DSM_6_0_and_later
  7. Technically more threads are better. In reality the difference will be imperceptible. However, the i3 has much newer silicon, which will benefit in 1) cooler operation and 2) better transcoding support, should you wish to do that.
  8. If you are connecting SATA disks to the motherboard, they need to be visible in BIOS. Solve that problem and try again.
  9. DSM is not particularly CPU limited. There is little to no impact from running as a VM, particularly if you pass through your SATA controller. But to each their own.
  10. Yes, it should all work. However, that processor cannot be fully utilized (24 threads vs. a maximum of 16 supported threads on DS3617xs). ECC is transparent to the Linux OS, so there will be benefit with no specific action on your part. I'm not a huge fan of Synology's VMM implementation; you might consider running DSM in ESXi instead, with other VMs in parallel as needed (you can set up separate storage for the ESXi VMs, or run NFS out of DSM).
  11. Any disk that is "Initialized" has the system partition on it. Just add a VMDK and put a Basic Storage Pool on it. The way I would do this is to add a second SATA controller (SATA0:0 is still the loader, SATA1:0 is the new VMDK). Then you need to understand the PCI slot order of each controller. It should be SATA0 with the lowest slot number, SATA1 with the next lowest, and finally your passthrough controller with the highest. If it differs from that, you will need to reorder the SataPortMap and DiskIdxMap parameters accordingly (see the sketch below). Assuming that the slot order is SATA0, S
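     To make that concrete, a hedged sketch of the relevant grub.cfg arguments for such a layout (the values are illustrative only and depend on your actual controller order and port counts):

         # SataPortMap: one digit per controller = number of ports on that controller
         #   118 -> SATA0 has 1 port, SATA1 has 1 port, the passthrough controller has 8
         # DiskIdxMap: one hex byte per controller = first drive slot (0-based) for that controller
         #   000102 -> SATA0 maps to slot 1, SATA1 to slot 2, passthrough starts at slot 3
         set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=000102 SataPortMap=118 SasIdxMap=0'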
  12. There is no capability to install a different NIC. If there is a JMicron driver available in @IG-88's extra.lzma, you could try that. Otherwise that device is not suitable for XPe. https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/
  13. This is unlikely. Much more likely that the boot mode has not been properly set for the 1.03b loader (CSM/Legacy mode is required).
  14. Fan control is via BIOS, there is no fan control in DSM for non-Synology hardware.
  15. If you can see the loader in DSM and you have synoboot devices, you need to adjust grub.cfg for your particular configuration. Post a new question thread (this is not a synoboot problem anymore) with detail on all your controllers (virtual and passthrough), the vmdk's attached to those controllers, and the number of physical disks on the passthrough controller, and we can make suggestions on amending grub.cfg.
  16. Look at some of the other posts to troubleshoot. Any results from ls /dev/synoboot* ? Other things that could be factors are your SATA args from grub.cfg and your virtual machine configuration (post these if you wish).
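     A quick sketch of that check (run over SSH as root; the output shown is just what a healthy result tends to look like):

         ls /dev/synoboot*
         # expected: /dev/synoboot  /dev/synoboot1  /dev/synoboot2
         # no output at all means the loader device is not being mapped to synoboot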
  17. I believe the Control Panel System Temp hover indication is the temperature of the hottest disk, not the CPU. The cache device(s) are excluded from this assessment of highest temperature.
  18. Unfortunately there is not a simple answer that applies to everyone. It depends on the hardware you have and the platform you have picked (DS3615xs/DS3617xs/DS918+). On my ASRock 4205-ITX baremetal I can see temperature per core. Virtual servers don't show anything within the VMs. Poke around the /sys/class/hwmon, /sys/class/thermal, and /sys/devices/platform directory structures for files referencing temperature; installing lm-sensors or acpi may also help.
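     As an illustration of that poking around (these paths may or may not exist on any given box):

         # list whatever sensor entries the kernel exposes
         ls /sys/class/hwmon/ /sys/class/thermal/ 2>/dev/null
         # print any temperature readings found (values are in millidegrees C)
         cat /sys/class/thermal/thermal_zone*/temp 2>/dev/null
         cat /sys/class/hwmon/hwmon*/temp*_input 2>/dev/null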
  19. There are controllers that behave that way (assign slots based on drive connections) but most map port to slot in a consistent manner. Either way, all your ports are addressable and you can have some confidence in how it will work. I would not modify SataPortMap since all your ports are addressable by the default configuration.
  20. Per the previous post, you have some additional testing to do to make sure that all the drives will be seen. It is very useful to have a backup of your data, as the migration process is not risk-free. If you are absolutely sure that all the drives can be reached by the new system install, you can migrate the drives in one of two ways: 1) Leave the test disk installed as the first drive (controller #1, port #1) and remove its Storage Pool (the working DSM install will remain intact). Then shut it down and install the 8 drives. When you boot back up, your old Storage Pool and volume should be visible
  21. One characteristic of XPe is that, because of the hacky nature of what it is, everyone learns by doing - by trial and error. This is why testing is very important. Install the system as a test. Put test drive(s) on controller port(s). See where each drive shows up (if at all) in Storage Manager. Make notes. Move the drive to another controller/port until you understand what is happening. Make any adjustments necessary so that you know which controllers and ports go to which slots. If it seems "wrong" (ports not available, ports skipped, etc.), make adjustments and/or report back with what you find.
  22. Burn a new, clean loader each time you start an install. A partial or failed install alters the loader so it won't behave the same the second time around. If that loader copy tried to install 6.2.4 it also won't let you install 6.2.3, which obviously is needed.
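     For example, re-imaging the stick from a Linux machine might look like this (a hedged sketch; /dev/sdX is a placeholder for your USB device and synoboot.img for whichever loader image you downloaded; Win32 Disk Imager or similar does the same job on Windows):

         # write a fresh copy of the loader image to the USB stick (this wipes it)
         sudo dd if=synoboot.img of=/dev/sdX bs=4M conv=fsync status=progress
         sync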
  23. Regarding QuickConnect: https://xpenology.com/forum/topic/9392-general-faq/?tab=comments#comment-82390 DSM heavily leverages GPL code, which essentially says that source code and use must be freely made available to all. DSM is an "aggregate" program with GPL and a few non-GPL components - particularly those that rely on Synology's own cloud services such as QuickConnect. Synology uses real serial numbers and registrations to determine valid connections to their cloud services. In the interest of maintaining appropriate use of DSM under XPenology, trying to use QuickConnect
  24. "I maintain the alternative of using OpenMediaVault; if DSM breaks I can boot up OMV from a disk or USB and have network access to my files, and if Synology gets nasty I will just switch to OMV (btrfs and SHR are no problem, they just work when starting OMV)." Just a quick comment on this: if folks do feel the need to start reposturing themselves away from DSM because they are concerned with access to security patches, or just have to have the "latest" on whatever platform they want, it would probably make sense to steer away from using SHR - yes, it can be supported on OMV or plain o
  25. You can do this but there will be some level of performance impact, and it's another layer of services to get corrupted and damaged. On ESXi my testing indicated something approaching 10% cost to virtualizing storage. If the storage is fast enough that probably doesn't matter. My preference is to pass through disks into a DSM VM and allow DSM to manage them completely. Part of the value of DSM is that it is really designed to work directly with the disk hardware for redundancy and data recovery. Of course, you then need separate storage for the hypervisor and other VM's (techni