Everything posted by flyride

  1. Just for others that search and find this thread: 9th-gen Intel CPUs (and really all Haswell and later Intel CPUs) work fine with XPEnology. OP's problem is that his particular motherboard NIC is too new for the driver, and there is no effective way to add a driver with the latest DSM versions. One solution would have been to add a compatible add-in Intel NIC, as many have done. Virtualization would also solve the NIC compatibility problem.
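     For anyone in the same spot, here is a minimal check (a sketch only - run it from a Linux live USB, or from the DSM shell if lspci is present; the 00:1f.6 address is just an example) to see whether a driver is bound to the NIC:

       lspci -nn | grep -i ethernet   # identify the NIC and its vendor:device ID
       lspci -k -s 00:1f.6            # "Kernel driver in use:" shows the bound driver, if any

     If no "Kernel driver in use" line appears, the running kernel has no driver for that NIC.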
  2. @Jastsai, unless someone has EXACTLY the hardware you do (i.e. you can find it in the DSM upgrade threads), you have to make the determination for yourself. Maybe try this:
     1. Set aside your old loader and disks, and build up a new loader on a new USB on your AMD platform, with a spare drive (see the sketch after this post for writing the loader image to USB). Install DSM 6.1.x like you already have running.
     2. Install 1.03b on the new USB and boot it. It should prompt you to migrate. Once done, verify that it boots into DSM OK and your test configuration is intact. If so, congratulations: your hardware works with the new loader on DSM 6.1.
     3. Now use Control Panel to try to upgrade to the latest 6.2, or download the 6.2 PAT file of your preference and install that specific version manually.
     4. If it works and boots back into DSM, congratulations: your hardware works with DSM 6.2.x, and you should expect the same result on your production system.
     To upgrade your production system (or return to the 6.1 version of your production system), just swap the test USB and disk for the ones you removed. Regardless, it would be wise to have a backup of your data prior to the upgrade. If the above doesn't work at some step, you can bring your results back to the forum and folks will be more than willing to help.
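     For step 1, a minimal sketch of writing the loader image to the USB stick from a Linux machine (synoboot.img and /dev/sdX are placeholders - double-check the device name, since dd overwrites it):

       sudo dd if=synoboot.img of=/dev/sdX bs=1M status=progress && sync

     Remember to set your USB stick's VID/PID in the loader's grub.cfg first, per the installation tutorials.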
  3. Hmm, calling people out as arrogant isn't the way to curry favor for help. Anyone you could possibly reach with that comment has spent countless hours trying to help many, many people such as yourself. Based on your questions, both you and OP present with little or no evidence of effort to understand DSM and how the XPenology loader works. And only slightly below the veneer of that request is a challenge to be assured you won't lose data, which you cannot possibly get from someone on an online forum. It's your responsibility, nobody else's, to make sure your data is safe.
     Let's spell it out: all the loader does is let us boot DSM on non-Synology hardware. Nothing more, nothing less. Any other behavior is attributable to DSM, Synology's operating system. Yes, it's based on Linux, but that's not a limiting factor. Many XPenology users never launch the shell, nor do they need to. If you want to be successful running DSM on XPenology, it will be in your interest to know something about DSM. There are many, many places to learn how to do things with DSM, not the least of which are Synology's forums.
     So here are a couple of key points that ARE, literally, embedded in the tutorials. Hopefully they will help steer you in the right direction.
     • Upgrading DSM from 6.1 to 6.2 is a function of DSM, not the loader.
     • If you want to upgrade from 6.1 to 6.2, you'll need to install a 6.2-compatible version of the loader (either 1.03b or 1.04b); otherwise DSM will crash once upgraded.
     • The 6.2-compatible loader must also work with your hardware, which isn't guaranteed even if you were successfully running DSM 6.1.
     • Installing a new loader is analogous to moving your disks to a new Synology DiskStation: DSM will prompt for migration or upgrade.
     • Migrating and/or upgrading DSM isn't inherently a data-destroying process, if done properly. Again, this is DSM behavior.
     • Any upgrade or migration operation can fail for many reasons, including loader incompatibility (ref hardware issues above) or user mistake. Those who attempt an upgrade or migration operation without a data backup plan are, bluntly, foolish.
     To you, OP, and anyone else who wants to upgrade: it's very much in your interest to build up a test environment and validate your upgrade plan each and every time before subjecting your production system and data to risk. This is repeated again and again in the tutorials. It is one of the benefits of a virtualized (i.e. ESXi) environment - it makes it very easy to test without extra hardware. Good luck to you and OP. Your "arrogant" friends online will be waiting to help if you run into trouble.
  4. There are literally entire tutorials dedicated to answering this question. Start here: https://xpenology.com/forum/forum/36-tutorials-and-guides/
  5. Your limitation on disk bays, combined with your procedure, is making this much harder than it needs to be. Synology will automatically expand your volumes when you grow in a "normal" fashion (i.e. mirror in a larger HDD, etc.). Your idea of the RAID1 migration would have accomplished this. The only way to expand now is to edit manually via the command line. There are a million different permutations between RAID/LVM/non-LVM/etc. that Synology's scripts handle without fuss. It's not so easy to address all those permutations manually... the solution is probably pretty simple, but be sure you know what you have before blindly executing commands that edit your partitions and filesystems (a survey sketch follows below). See: https://xpenology.com/forum/topic/14091-esxi-unable-to-expand-syno-volume-even-after-disk-space-increased/
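     A read-only survey sketch for that purpose (assuming SSH access as root; md2 and /volume1 are typical names on DSM, but substitute your own):

       cat /proc/mdstat                 # which md arrays exist and their member partitions
       mdadm --detail /dev/md2          # array level, size, and state of the data array
       pvs && vgs && lvs                # only present/meaningful if the volume sits on LVM
       df -hT /volume1                  # filesystem type (ext4 or btrfs) and current usage

     Expansion then generally proceeds bottom-up (partition, then md device, then PV/LV if LVM is present, then the filesystem), but the exact commands depend on what this survey shows.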
  6. This is what I did in a similar situation:
       set sata_args='SataPortMap=1 DiskIdxMap=0C00'
     Also see this: https://xpenology.com/forum/topic/7613-disablehide-50mb-vmware-virtual-disk/
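     For context, a hedged reading of those parameters (based on common community documentation of Jun's loader - verify against the grub.cfg on your own loader's boot partition):

       # grub.cfg sketch; surrounding lines omitted
       set sata_args='SataPortMap=1 DiskIdxMap=0C00'
       # SataPortMap=1   -> the first SATA controller exposes 1 port
       # DiskIdxMap=0C00 -> disks on the first controller start at slot 0x0C,
       #                    disks on the second controller start at slot 0x00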
  7. Make sure you select "ESXi" from the boot loader option menu.
  8. The real answer is VMware or another hypervisor outside of XPenology. But while you are contemplating that, please read this and this.
  9. Legacy boot and MBR are two different things... MBR (master boot record) is a partitioning scheme, and should only be required if you are using really old hardware that cannot support a GPT partition table. Legacy boot is the BIOS boot model that preceded the current UEFI implementation.
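     If you're not sure which partition table a disk carries, a quick check from any Linux shell (the device name is a placeholder):

       sudo fdisk -l /dev/sdX | grep 'Disklabel type'   # prints "gpt" or "dos" (dos = MBR)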
  10. - Outcome of the update: SUCCESSFUL
      - DSM version prior to update: DSM 6.2.1-23824 Update 6
      - Loader version and model: Jun v1.04b - DS918+ with real3x mod
      - Using custom extra.lzma: YES, see above
      - Installation type: BAREMETAL - ASRock J4105-ITX
  11. Tested functional with my J4105-ITX, except I did not check transcoding, which I don't use. Well done!
  12. Unless you have snapshot-aware storage (i.e. not local disk), you should shut down your VM prior to taking a snapshot to ensure its integrity.
  13. I realize that there are difficulties with language translation and subtleties in meaning here, but this is audacious. You don't have time to follow through but expect other folks to volunteer their time to help, and are instantly critical when they don't drop everything to research your own obscure hardware for you inside of 24 hours? A little courtesy and humility goes a long way.
  14. The loader doesn't have any knowledge of whether you are using a cache. So this is working as Synology (or more specifically, the Facebook/flashcache developers) designed.
  15. FWIW, it is possible to use a USB loader boot drive in a VM (if you map the USB device into the VM), but there really isn't a good reason to do so.
  16. I've done this successfully. But have a backup of your data.
  17. DSM recognizes physical RDM drives as SSDs under ESXi (a sketch of creating one is below). I realize that switching hypervisors is not your plan, but it would accomplish your functional goal.
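     A minimal sketch of creating a physical-mode RDM from the ESXi shell (the device ID and datastore path are placeholders - list yours with ls /vmfs/devices/disks/), then attach the resulting .vmdk to the DSM VM as an existing disk:

       # -z = physical compatibility mode; -r would create a virtual-mode RDM instead
       vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/dsm/disk1-rdm.vmdk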
  18. You might read this from the FAQ:
  19. Not initialized just means that the drive is foreign to DSM and the DSM partition structure has not been built yet. As soon as you add the drive to an array, or create an array using it, it will be initialized automatically. A drive will not automatically be joined to the volume - you must initiate that with a Repair operation. However, you cannot replace a 4TB drive with a 500GB drive and restore redundancy. You can only replace a drive with another one of equal or larger capacity. Because of this, DSM is not offering you the option to Repair the array.
  20. AFAIK the limit is in the kernel itself, as compiled by Synology. The platforms enabled by the current loaders have the following compute characteristics:
      • DS3617xs - Xeon D-1527 - 4C/8T
      • DS3615xs - Core i3-4130 - 2C/4T
      • DS918 - J3455 - 4C/4T
      There has been some confusion about cores vs. threads. I think that 16 threads is the kernel limit. As you can see, 16 threads covers all these CPUs, and we have evidence that 16 threads are supported on all three platforms. If you have more than 8 cores, you will get better performance by disabling SMT. @levifig, you are already doing this. I don't think there is any way to support @Yossi1114's 10C/20T processor other than to disable SMT (see the check below). If someone wants to develop a loader against a platform with more thread support, may I suggest investigating the FS3017 (E5-2620v3 x 2 = 12C/24T), FS2017 (D-1541 = 8C/16T) or RS3618xs (D-1521 = 8C/16T). It would stand to reason that the kernel thread limits might be higher for those platforms.
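     A quick hedged check (assuming SSH access) of how many threads the DSM kernel actually brought up:

       grep -c ^processor /proc/cpuinfo   # logical CPUs (threads) visible to the kernel

     If this number is lower than the CPU's advertised thread count, the kernel limit (or a disabled-SMT BIOS setting) is capping it.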
  21. Your BIOS may not support hot plugging, or you may need to enable it. What you want to avoid is booting the wrong DSM copy (the one from the drive you removed). Do you have a computer that you could use to wipe the WD disk not currently in the array? If you can install it in another computer and delete all the partitions (a sketch follows below), then you can install it in your NAS, boot normally, and rebuild the clean drive back into the array.
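     On the other computer, a minimal sketch for clearing the partition structure (assuming a Linux machine; /dev/sdX is a placeholder - triple-check the device name, as this is destructive):

       lsblk                       # confirm which device is the WD disk
       sudo wipefs -a /dev/sdX     # erase all partition-table and filesystem signatures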
  22. Maybe, but I don't know how to do that without mounting it. It will take a long time, and it will write regular files to the destination, so you could probably move things off in time to make room if you don't have quite enough space.
  23. The restore only needs enough room to save the files stored on the volume, not the entire volume size, so potentially good news there. "Used Dev Size" in the mdadm output on the previous post refers to the size of the parity device in the array... it has nothing to do with the usage of the volume.
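     For reference, a sketch of where that field appears (md2 is the usual first data array on DSM; substitute your own):

       mdadm --detail /dev/md2 | grep -E 'Array Size|Used Dev Size'
       #  Array Size : total capacity of the md device
       #  Used Dev Size : capacity consumed on each member device, not volume usage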
  24. Did you switch platforms? There is better SCSI support on DS3615xs vs. DS918, but in any case I wouldn't be surprised if 6.2.1 and 6.2.2 break things that worked before, since that seems to be the way things go now.