flyride

Members
  • Content Count: 1,786
  • Days Won: 93
Everything posted by flyride

  1. Definitely not recommended. I think everyone agrees that bare metal installations are recommended over virtualized installations. Though I have been using XPE in my private homelab for ages on ESXi with a direct-I/O attached LSI controller, without any issues or complaints. That said, even though it is not recommended, it can be used. In a corporate environment I would always recommend buying a Syno device and living trouble-free when it comes to DSM updates. Hmm, not sure I 100% agree with this. Baremetal is generally simplest
  2. DDNS is not tied to one service or another. You don't need more than one. When you set up a DDNS, the public name that you choose is then dynamically updated to point to your real (temporary) IP. When your real IP changes, the reference is updated by DDNS. That lets someone outside your network find your outside IP. You still need to make that IP available to specific services, meaning you will need to set up port forwarding to allow remote file access: https://www.synology.com/en-us/knowledgebase/DSM/tutorial/File_Sharing/Configure_file_sharing_links https://www.synolo
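     A quick way to sanity-check a DDNS setup from a command line (a sketch; the hostname is an example, use whatever name you registered):
       nslookup myhome.synology.me      # should return your router's current public IP
       curl -s https://ifconfig.me      # shows the public IP you currently have, for comparison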
  3. It's probably possible to disable swap (and it is certainly possible to omit slow drives from swap I/O by modifying the RAID1 array to use the slow drives as hotspares) but the swap space will always be reserved on every disk that is initialized for use with DSM (partition 2 on each disk). So, if your objective is to recover the space, that is not possible. If your goal is to speed up the swap access and you have certain drives that are better able to handle the I/O, see this: https://xpenology.com/forum/topic/12391-nvme-optimization-baremetal-to-esxi-report
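     A hedged sketch of the hotspare approach, assuming the swap RAID1 is /dev/md1, it currently spans three drives, and /dev/sdd2 is the slow drive's swap partition (check /proc/mdstat for your own layout first):
       cat /proc/mdstat                                     # confirm which md device is the swap array
       mdadm /dev/md1 --fail /dev/sdd2 --remove /dev/sdd2   # drop the slow member from the active set
       mdadm --grow /dev/md1 --raid-devices=2               # shrink the active member count to the fast drives
       mdadm /dev/md1 --add /dev/sdd2                       # re-added to an already-full array, it becomes a hotspare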
  4. Ok, a few things: First, SAS support is better on DS3617xs than DS918+. You should consider moving to 1.03b and DS3617xs for the best results. It looks like you decided to uncomment the grub command set sata_args='SataPortMap=4' If you stay on DS918+, I would try the following: set sata_args='SataPortMap=1 DiskIdxMap=1000' The SataPortMap argument tells DSM to only use 1 slot (the loader) from the first controller, then DiskIdxMap assigns the first controller to slot 17 (effectively hiding the loader), and the second controller (hopefully your p
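     For reference, this is roughly how it would look in the loader's grub.cfg (values as suggested above; adjust to your actual controller layout):
       set sata_args='SataPortMap=1 DiskIdxMap=1000'
       # SataPortMap=1   -> first controller (the loader's) exposes only 1 port
       # DiskIdxMap=1000 -> first controller starts at disk index 0x10 (slot 17, hidden),
       #                    second controller starts at index 0x00 (slot 1 onward)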
  5. Sorry, what does this mean? You should be using the loader file from the official link? Are you using one vmdk or two? Post some relevant screenshots (Overview, HDD/SSD) from Storage Manager.
  6. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  7. 10GbE is always worth it. SSD cache is often not worth it. Cache wears out an SSD quickly, and the write-cache option has produced many instances of corruption. It has nothing to do with NVMe vs SATA. I always advocate for more RAM (which is automatically used for cache) rather than SSD cache. Or just run SSD's, period (line-rate 10GbE read/write sustained indefinitely with SSD RAIDF1 and no cache). Most dual-purpose M.2 slots will take one port from the SATA controller when in SATA mode, but in NVMe mode they are just a PCIe device with no impact on SATA. You haven't said what disks you are usin
  8. I also run ESXi and pass through my on-board SATA controller to the DSM VM. I have an NVMe drive that is used for the ESXi datastore and all the other VM's. How are the scratch storage/other VM datastores attached on your planned system? You can use the default serial number in the loader. No need to change it unless you are going to run multiple DSM instances at the same time. The MAC is only critical if you are going to use Wake-On-LAN - unlikely with a VM. Set up your loader VMDK on SATA Controller 0 (0:0) and nothing else on that controller. Don't add another v
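     For illustration, the relevant entries in the VM's .vmx file would look something like this, assuming the loader disk is named synoboot.vmdk (the ESXi UI writes these for you when you place the VMDK at SATA 0:0):
       sata0.present = "TRUE"
       sata0:0.present = "TRUE"
       sata0:0.fileName = "synoboot.vmdk"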
  9. DSM and MD are software solutions. No hardware RAID is desired or required. You cannot change RAID 5 to RAID 6 using the UI. If you are not using SHR (you really have a RAID5), it can technically be done via command line. Remove your cache before trying anything like this. Have a complete backup. Be advised, it will take an extremely long time (4-5 days) for the conversion to complete, and performance will be worse using RAID 6.
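     For illustration only, a sketch of the command-line conversion, assuming a four-disk RAID5 at /dev/md2 being reshaped to a five-disk RAID6 with a newly added partition /dev/sde3 (device names and counts are examples; back up first):
       mdadm /dev/md2 --add /dev/sde3                     # RAID6 needs one more member than the existing RAID5
       mdadm --grow /dev/md2 --level=6 --raid-devices=5 --backup-file=/root/md2-reshape.bak
       cat /proc/mdstat                                   # the reshape runs for days; monitor progress here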
  10. The default configuration for the loader supports one virtual SATA controller for the boot loader VMDK, plus one additional SATA controller. If you are following the ESXi install guide, it suggests creating a second virtual SATA controller to support an example virtual disk for a Storage Pool/Volume. If you add a passthrough controller, this will cause you some issues as you may run out of disk slots (12 total: with 8 by default dedicated to virtual SATA controller #2, that leaves you only 4 on the passthrough). If you don't need virtual disk support and are only installing
  11. AFAIK VID/PID tells real Syno code what the bootloader device is so that it hides it properly. Since it is hardcoded and the VID/PID is Synology's it is a simple way for Syno to ensure DSM doesn't run on non-Synology hardware (except if hacked). VID/PID error is essentially their code rejecting non-Syno hardware. Anytime you attempt a (6.2.4) install on a loader device, it will write those files to the loader and then it cannot install a version earlier than that. It doesn't matter if it worked or not. So just write a new clean loader from a fresh download and this proble
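     For context, the VID/PID is just a pair of variables in the loader's grub.cfg; the values below are placeholders, substitute the ones your own USB stick reports (via lsusb or Device Manager):
       set vid=0x058f   # USB vendor ID of the boot stick (example value)
       set pid=0x6387   # USB product ID of the boot stick (example value)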
  12. This is the base install and "official" instruction set here: https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/ It's written for 6.1, but the install for 6.2 is the same; just substitute the 6.2 loaders and DSM PAT files. See this for more info: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  13. "Most all of your problems with going to 6.2.4 is because VMWare pushed out patches with deprecation to linuxkernel." VMWare may use a variant of Linux as its core hypervisor OS, but what it supplies as a hardware platform has nothing to with it - the VM doesn't even know what OS is going to run on it. Yes, we select "Linux 3.x 64-bit blah blah" as a profile, but that is not linking to anything that is actually Linux, it's just defining a standard virtual hardware configuration that is presented to the client OS installation. Furthermore, if this were somehow true, how
  14. Nice trick. Does this work to make 1.03b loader usable on UEFI-only systems?
  15. That message is usually because you are trying to migrate an existing system. You cannot backrev in that case. All installs back to (I think) DSM 4.0 are still on their website for download.
  16. Well, aside from it actually being linked in the post, there is this: https://xpenology.com/forum/topic/9394-installation-faq/
  17. For those who come across this thread, this can be a byproduct of a particular Unix/Linux feature. Filesystems are mounted to specific folders in the directory tree. In this case, whatever /dev/mdX or /dev/mapper device corresponds to the volume1 array is what gets mounted to /volume1. As long as that volume is mounted, all writes within the /volume1 branch of the filesystem go to the array. If the array is unmounted, /volume1 still exists but has no files in it (for obvious reasons). That does not mean that files cannot be written to the /volume1 location. I
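     An illustration of that shadowing behavior (a sketch; the device name is an example and will differ depending on your array/cache layout):
       umount /volume1                  # the array is no longer mounted
       ls /volume1                      # the mountpoint directory still exists, but appears empty
       echo test > /volume1/stray.txt   # this write lands on the root filesystem, not the array
       mount /dev/md2 /volume1          # remounting the array hides stray.txt underneath it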
  18. You have a failing drive. You need to replace it, and have a backup. Then by your logic your data is lost already. You must have a backup. No. Don't make changes to your system until it is stable. I don't personally recommend a write cache using SSD at all; with modest SOHO-type workloads, it is of little benefit and you can get more value by adding RAM to your system instead. What is causing you to want to change to DS3617xs? You would need a good reason to do that if you have a working system (those reasons ex
  19. Yes, it will say SATA or NVMe. This discussion is starting to go sideways. Let's backtrack. Your NUC has the ability to connect to a single SATA drive with a SATA interface. It does not have an M.2 slot (required for an SSD on a card). M.2 slots can support either NVMe or SATA SSD's depending upon the chipset and motherboard capabilities. Again, you don't have one. So you cannot use an adapter either. So the discussion of NVMe drives is academic. You cannot use one with your NUC in any case. The Ironwolf page you linked has both SATA SSD and NVMe drives
  20. If the question is: "can you use an SSD as a regular disk for XPe/DSM" the answer is yes if it is a SATA SSD. NVMe SSD's cannot easily be used as DSM really only supports them for cache.
  21. --write-mostly only biases the reads to the fast drive; all writes continue to be dispatched to all member drives. So I didn't pursue it for my use case, and I suppose it wouldn't quite be what OP was looking for. But it might have some uses, if someone wanted to play with it and see if DSM gets upset about it. https://raid.wiki.kernel.org/index.php/Write-mostly This will result in the following idiosyncrasies: Write performance will be equal to the slowest participant in the RAID-1 array. This can be mitigated with the --write-behind option (which caches write
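     If someone does want to play with it, a hedged sketch (device names are examples; whether DSM tolerates the flag is untested here):
       # create a RAID1 with the slow member flagged write-mostly
       mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/sdb3 --write-mostly /dev/sdc3
       # or toggle the flag on an existing member at runtime via sysfs
       echo writemostly > /sys/block/md9/md/dev-sdc3/state
       echo -writemostly > /sys/block/md9/md/dev-sdc3/state   # clears it again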
  22. All that @IG-88 explained is correct. DSM installs a /root and /swap partition on each drive, period, implementing each with its own RAID1 array across all the disks. It is not possible to remove those partitions (there is no way to get that space back). But it is possible to force DSM to make them hotspares instead of active in each respective RAID1: https://xpenology.com/forum/topic/12391-nvme-optimization-baremetal-to-esxi-report It is not inherently unsafe to do this. EXCEPT that as you describe your system, there would be no redundancy for DSM, so if something happened
  23. ASUS CPU is 1 GHz. DS1518+ is 2.4 GHz. Yes, buffering and DMA and all that, but IMHO this is one of those times that clock rate matters, given the millions of packets and individual transactions that are trying to occur at line rate. I think you are getting all you can out of that CPU.
  24. Should we be concerned? Long term, with the intermediate updates not working? (assuming we already have backups and have all the functionality we need) Define concern? Is there anything that can be done with 6.2.4 that cannot be done with 6.2.3? The assumption has always been that any update has the possibility of breaking the loader. Could DSM 6.2.3-25426 Update 3 possibly be the last safe Xpen version for us using Jun's loaders? Maybe. Is Jun still kicking around to come up with a possible solution? Or anyone else? Jun is the only one that can ans