flyride


  1. CPU is "Prestonia," which is NetBurst architecture. Unlikely to work, as the DSM Linux kernel build requires Nehalem or later. That chip is more than 15 years old!
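A quick way to sanity-check a CPU before attempting an install: Nehalem and later expose the sse4_2 flag, which NetBurst-era chips like Prestonia lack. This is an illustrative sketch (the `check_flags` helper name is mine, not anything from the loader); on a real machine you would feed it the flags line from /proc/cpuinfo via any Linux live environment.

```shell
# Sketch: test whether a CPU "flags" line includes the Nehalem-era sse4_2 flag.
check_flags() {
  case " $1 " in
    *" sse4_2 "*) echo "supported" ;;   # Nehalem or later: DSM kernel should run
    *)            echo "unsupported" ;; # NetBurst-era chip: DSM 6.x kernel won't
  esac
}

# On a live Linux host you could feed it the real flags line, e.g.:
#   check_flags "$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
check_flags "fpu vme sse sse2 ssse3 sse4_1 sse4_2 popcnt"
check_flags "fpu vme sse sse2"
```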
  2. 6.2 has received a few security patches that 6.1 has not. However, Synology lags so badly on critical updates compared with a normal Linux distro that I personally won't rely on 6.2.2's security state. My recommendation is to assume it's hackable and never expose your NAS to the Internet; a VPN or a two-factor encrypted proxy service are really the only safe options for remote access. If you subscribe to that opinion, the difference between 6.1 and 6.2's security state is irrelevant.
  3. https://xpenology.com/forum/topic/9392-general-faq/?do=findComment&comment=113011 Read the last paragraph in particular...
  4. Yes, if you do a DSM migration install with the new VM. Yes. They have to be seen as SSD by the DSM instance. Don't use cheap SSD for write cache or you risk your data. Better yet, don't use write cache at all.
  5. DS918 and DS3615: 8 threads (4 cores + 4 hyperthreads, or 8 cores if you turn off HT in the BIOS). DS3617: 16 threads (8 cores + 8 hyperthreads, or 16 cores if you turn off HT in the BIOS), but it is the most finicky with regard to hardware compatibility. You can add as many threads as you want to the VM, but DSM won't use more than stated; it's a kernel compile limitation.
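If you want to confirm how many threads the kernel actually brought up (versus what you assigned to the VM), a quick check over SSH works; this is a generic Linux command, not DSM-specific.

```shell
# Count the CPU threads the running kernel has online.
# e.g. if you assigned 12 vCPUs to a DS918 VM, expect this to report only 8.
grep -c '^processor' /proc/cpuinfo
```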
  6. It depends on which loader you choose. 1.03b requires BIOS.
  7. "Can you @flyride tell us exactly what files to copy?" That information is in the thread you quoted. My first response explained three different options to you. If you pick the third option (1.03b loader, DS3615xs and DSM 6.2.2 without an Intel NIC), you will probably need the drivers in extra.lzma. @zzgus just posted the link; you can follow that thread.
  8. I don't know if you can assume anything, and you need to know how your server works before you do anything else. ICH10R (Intel chipset) connected ports may have a simple BIOS-accessible RAID 0/1 option which you should turn "off." However, some server motherboards can have an embedded third-party RAID controller such as an LSI/MegaRAID which would not work well unless flashed to "IT" mode as previously described. Embedded controllers may not be flashable so ports connected to those controllers would not be good candidates for XPEnology.
  9. Yes, that's what passthrough is: independent from the virtual host. But at least with ESXi, you can't pass through a disk, only a controller, in which case the attached disks come along with it. Alternatively, RDM on ESXi handles the driver and connectivity and presents a raw translated interface to the VM without managing the disk in any way; technically not a passthrough, but it works the same as far as what DSM sees. This is the same question and answer as above, just restated. If you don't use DSM to control your disk devices, you may as well run a generic Linux instance and NFS-mount remote storage to that; DSM offers no advantage if it cannot control the disks. The virtual disk on the host which is your bootloader image does not have DSM on it. As with DSM 5, DSM 6 stores a copy of the OS and swap partitions on every disk allocated to it. Those partitions are ext4, not btrfs.
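For reference, RDM pointer files on ESXi are created with vmkfstools; the device identifier and datastore path below are illustrative assumptions, so substitute your own. (No test included; these commands only run in an ESXi shell.)

```shell
# Run in an ESXi shell. List raw devices to find the disk identifier:
ls /vmfs/devices/disks/

# Create a physical-mode RDM pointer file (-z); -r instead gives virtual mode.
# Device name and datastore path here are examples only.
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL \
  /vmfs/volumes/datastore1/xpenology/rdm-disk1.vmdk

# Then attach rdm-disk1.vmdk to the DSM VM as an existing disk.
```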
  10. Passthrough disks can be moved to a baremetal system or connected to another virtual system, no issue. But you should always have a backup method. Always.
  11. One tutorial; don't reinvent the wheel. The install should be the same. ICH10R is an AHCI-compliant controller, so no special treatment is required. Do you also have a RAID controller on the system? Many of those don't work or need to be flashed to "IT" mode, which basically defeats their RAID configuration. Again, pretty standard stuff. The 1.02b loader and DS3615xs DSM 6.1.7 should work with no changes out of the box. The 1.03b loader and DS3615xs DSM 6.2.2 will work if you add an Intel NIC or the extra.lzma to support the Broadcom NICs.
  12. This is what I'm using for mine. The case fan is near-silent, and the power supply fan does not turn at the loads presented by the J4105 motherboard. http://www.u-nas.com/xcart/product.php?productid=17636
  13. I'm passing through my C236 Sunrise Point SATA controller with no issues. I ran it this way on 6.5 and also now on 6.7.
  14. I don't think there is a way to consume external network storage in the UI. You might be able to manually set up an initiator from the command line, then format and mount similar to the NVMe strategy of spoofing in a volume. You'd need a script to reinitialize it on each boot and I think it would have a high likelihood of breaking. But you should be able to NFS mount into an active share via command line, and that could easily be scripted to start at boot with no stability concerns.
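As a sketch of that boot-time NFS mount: the server address, export, and share path below are assumptions, and it relies on DSM 6 running executable scripts placed in /usr/local/etc/rc.d/ with start/stop arguments at boot/shutdown. (No test included; this is a boot-script fragment that needs a live NFS server.)

```shell
#!/bin/sh
# Illustrative boot script, e.g. /usr/local/etc/rc.d/S99nfsmount.sh (chmod +x).
REMOTE="192.168.1.50:/export/media"    # assumed NFS server and export
MOUNTPOINT="/volume1/share/remote"     # subfolder inside an active DSM share

case "$1" in
  start)
    mkdir -p "$MOUNTPOINT"
    mount -t nfs "$REMOTE" "$MOUNTPOINT"
    ;;
  stop)
    umount "$MOUNTPOINT"
    ;;
esac
```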
  15. There really isn't much CPU work for running the software RAID. These links may give you some confidence. I'm using a mix of RDM and controller passthrough. https://xpenology.com/forum/topic/12391-nvme-optimization-baremetal-to-esxi-report/?tab=comments#comment-88690 https://xpenology.com/forum/topic/13368-benchmarking-your-synology/?do=findComment&comment=137430