flyride

Members
  • Content Count: 419
  • Joined
  • Last visited
  • Days Won: 33

flyride last won the day on August 13

flyride had the most liked content!

Community Reputation

160 Excellent

4 Followers

About flyride

  • Rank: Super Member


  1. Your disk-bay limitations and your procedure are making this much harder than it needs to be. Synology will automatically expand your volumes when you grow in a "normal" fashion (i.e. mirror a larger HDD, etc.). Your idea of the RAID1 migration would have accomplished this. The only way to expand now is to edit manually via the command line. There are a million different permutations between RAID/LVM/non-LVM/etc. that Synology's scripts handle without fuss. Not so easy to address all those permutations manually... the solution is probably pretty simple, but be sure you know what you have before blindly executing commands that edit your partitions and filesystems (a rough sketch of the typical steps follows after this list). See: https://xpenology.com/forum/topic/14091-esxi-unable-to-expand-syno-volume-even-after-disk-space-increased/
  2. flyride

    DSM 6.2 Loader

    This is what I did in a similar situation:

        set sata_args='SataPortMap=1 DiskIdxMap=0C00'

    Also see this: https://xpenology.com/forum/topic/7613-disablehide-50mb-vmware-virtual-disk/
  3. flyride

    DSM 6.2 Loader

    Make sure you select "ESXi" from the boot loader's option menu.
  4. The real answer is VMware or other hypervisor outside of XPenology. But while you are contemplating that, please read this and this.
  5. flyride

    DSM 6.2 Loader

    Legacy boot and MBR are two different things... MBR (Master Boot Record) is a partition table format, and should only be required if you are using really old hardware that cannot support GPT partitioning. Legacy boot is the BIOS boot model that preceded the current UEFI implementation.
  6. - Outcome of the update: SUCCESSFUL
     - DSM version prior to update: DSM 6.2.1-23824U6
     - Loader version and model: Jun v1.04b - DS918+ with real3x mod
     - Using custom extra.lzma: YES, see above
     - Installation type: BAREMETAL - ASRock J4105-ITX
  7. Tested and functional with my J4105-ITX, except I did not check transcoding, which I don't use. Well done!
  8. Unless you have snapshot-aware storage (i.e. not local disk), you should shut down your VM prior to taking a snapshot to ensure its integrity (see the ESXi sketch after this list).
  9. I realize that there are difficulties with language translation and subtleties in meaning here, but this is audacious. You don't have time to follow through but expect other folks to volunteer their time to help, and are instantly critical when they don't drop everything to research your own obscure hardware for you inside of 24 hours? A little courtesy and humility goes a long way.
  10. The loader doesn't have any knowledge of whether you are using cache. So this is working as Synology (or more specifically the Facebook/flashcache developers) designed.
  11. FWIW, it is possible to use a USB loader boot drive in a VM (if you map the USB device into the VM), but there really isn't a good reason to do so.
  12. I've done this successfully. But have a backup of your data.
  13. DSM recognizes physical RDM drives as SSDs under ESXi. I realize that switching hypervisors is not your plan, but it will accomplish your functional goal (see the vmkfstools sketch after this list).
  14. You might read this from the FAQ:
  15. Not initialized just means that the drive is foreign to DSM and the DSM partition structure has not been built yet. As soon as you add to, or create an array using the drive, it will be initialized automatically. A drive will not automatically be joined to the volume - you must initiate it with a Repair operation. However, you cannot replace a 4TB drive with a 500GB drive and restore redundancy. You can only replace a drive with another one equal or larger in capacity. Because of this, DSM is not offering you the option to Repair the array.
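
For item 1 above, here is a minimal sketch of the sort of manual expansion DSM's scripts would otherwise do for you, assuming the common single-volume, LVM-on-md layout. The device, volume group, and logical volume names (md2, vg1, volume_1) and the ext4 filesystem are assumptions, not a prescription; confirm your own layout before running anything.

    # Confirm the layout before touching anything (names below are assumptions)
    cat /proc/mdstat                       # identify the data array, e.g. /dev/md2
    pvs; lvs                               # check whether LVM is in use and what the LV is called

    # Grow the md array to use the newly enlarged disk/partition
    mdadm --grow /dev/md2 --size=max

    # If LVM is in use, grow the physical volume and then the logical volume
    pvresize /dev/md2
    lvextend -l +100%FREE /dev/vg1/volume_1

    # Grow the filesystem last (ext4 shown; for btrfs: btrfs filesystem resize max /volume1)
    resize2fs /dev/vg1/volume_1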
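For item 8, assuming the VM runs on ESXi, one way to guarantee it is powered off before the snapshot is taken is from the ESXi shell. The VM name and snapshot label below are placeholders.

    # Find the VM's ID (the name "xpenology" is just an example)
    vim-cmd vmsvc/getallvms | grep -i xpenology

    # Shut the guest down cleanly; use power.off only if the guest cannot shut itself down
    vim-cmd vmsvc/power.shutdown <vmid>

    # Snapshot with the VM powered off (no memory, no quiescing), then power back on
    vim-cmd vmsvc/snapshot.create <vmid> "pre-change" "taken while powered off" 0 0
    vim-cmd vmsvc/power.on <vmid>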
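For item 13, physical RDMs under ESXi are created with vmkfstools and then attached to the DSM VM as existing disks. The device identifier and datastore path below are placeholders.

    # List the physical disks ESXi can see
    ls /vmfs/devices/disks/

    # Create a physical-mode RDM mapping file (-z) on a datastore
    vmkfstools -z /vmfs/devices/disks/<naa.or.t10.identifier> /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk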