XPEnology Community

Benoire

Member
  • Posts

    239
  • Joined

  • Last visited

  • Days Won

    1

Everything posted by Benoire

  1. Generally, the RackStations with larger expansion capacity use SAS HBAs and are therefore not limited by the sd(x) issue; that reliance on SAS HBAs is one of the reasons why we do not have a RackStation image... For what it's worth, a RackStation image with native LSI SAS drivers would probably allow for proper drive sequencing on LSI cards as well as large expansion potential.
  2. As no one seemed able to answer this, I did a full local backup and hit update... So far so good: latest DSM with drives and systems still running and connected to my AD infrastructure. Will see how it fares over the next week!
  3. Hi. I appreciate there is the very large topic on the Jun loader, but I've asked a number of times and so far had no response. I want to upgrade from DSM 6.1-15047 Update 2 to the newest version on my ESXi install. I've got the latest loader running, but I recall reading that I would be unable to update directly due to needing to format the drives etc. Can someone clarify that I can simply hit update and all the data, including the system setup, will remain intact on the latest version? Thanks, Chris
  4. There you go. Raw disk mapping is as close as you'll get to passthrough, but DSM won't report SMART errors as it can't read this data... You might have to create/find a script for ESXi that can read the SMART data.
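On the ESXi side, SMART data for a physical disk can be read from the host shell with esxcli. A minimal sketch, assuming an ESXi 6.x host; the naa. device ID below is a placeholder and must be replaced with one from the device list:

```shell
# List the host's physical devices, then query SMART attributes for one.
# The device identifier below is a made-up placeholder.
if command -v esxcli >/dev/null 2>&1; then
    # Enumerate device identifiers known to the host
    esxcli storage core device list | grep -E '^(naa|t10)\.'
    # Read SMART attributes for a specific device (substitute a real ID)
    esxcli storage core device smart get -d naa.5000c500a1b2c3d4
    status="queried"
else
    status="esxcli not found: run this on an ESXi host"
fi
echo "$status"
```

This only reports on the host, not inside DSM, so it would need to be run (or scheduled) on ESXi itself.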
  5. There are two different ways: one is to use Raw Disk Mapping (RDM) within ESXi, but this requires your controller to allow it; the other is simple PCI passthrough using DirectPath I/O, if your motherboard supports it and your ESXi datastore is not on the controller you're passing through. RDM is an abstracted approach whereby the disk is still managed by ESXi, but the disk can only be used by the VM you've passed it through to. It has the same full-speed access as passthrough but often won't pass the SMART data through. RDM drives are added in a similar manner to virtual drives, and you can make them appear in DSM in the order they're loaded into the machine.
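Creating the RDM pointer file is done with vmkfstools from the ESXi host shell. A sketch only; the device ID and datastore path are placeholder assumptions:

```shell
# Create an RDM mapping file that a VM can attach as an existing disk.
# Both paths below are placeholders -- substitute your own device/datastore.
DISK=/vmfs/devices/disks/naa.5000c500a1b2c3d4
MAP=/vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk
if command -v vmkfstools >/dev/null 2>&1; then
    # -r creates a virtual-compatibility RDM (ESXi mediates I/O, no SMART);
    # -z would create a physical-compatibility mapping instead
    vmkfstools -r "$DISK" "$MAP"
    status="created $MAP"
else
    status="vmkfstools not found: run this on an ESXi host"
fi
echo "$status"
```

The resulting .vmdk is then added to the DSM VM as an existing disk; the order the mappings are attached controls the order DSM sees them in.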
  6. Firstly, you must remember that unless you're passing the entire drive through for DSM to manage, you're effectively creating virtual disks with all the overhead that ESXi adds. Performance will be somewhat lower than native as the virtual drive has to go through multiple layers before it finally hits the real drive. With that out of the way, your best option is, as wer suggested, to use single 'drives' of the maximum size you want per drive and then add them. DSM likely won't recognise the increased size of a grown drive as it has already partitioned it and installed DSM on it. You could create 2 x 100 GB virtual drives and JBOD them, then expand the array later by adding an additional 100 GB virtual drive.
  7. Benoire

    DSM 6.1.x Loader

    So my VM has now been stable and running for 3 days and 3 hours. I completely backed up the entire storage array to CrashPlan using the Docker image and there have been no issues with it whatsoever. I would still like to know what happens if I directly update DSM to the latest version with the newest bootloader? I presume it will break? Is the only way to rebuild the OS partitions on the array?
  8. Benoire

    DSM 6.1.x Loader

    So my vSAN rebuild and upgrade to 6.6 has been completed which caused the failure in the VM. With it all synced and running on the cluster, the DSM VM is now up and running. New bootloader, but same DSM version as before (not latest). Will see how long it lasts!
  9. Benoire

    DSM 6.1.x Loader

    Not sure; I've just had to update my vSAN cluster so the VM was taken down. I can't remember when I updated my boot loader, but it will be back up and running tonight so I'll know more in the next few days! It was certainly running for more than 24 hours, and I thought more than 48. It's the same loader as everyone is using on baremetal; I hadn't updated the DSM version though, as I'm not sure whether I have to wipe my data in order for it to upgrade; no one seems to be able to confirm that!
  10. Yes, every time there is an update to install it will open the control panel first... No way to stop it from happening.
  11. Benoire

    DSM 6.1.x Loader

    Sorry, the arg ihd_num=0 has to go in the grub.cfg. You can edit the grub file inside the img using PowerISO before conversion to the VMDK.
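If you'd rather not use PowerISO, the same edit can be made on a Linux box by loop-mounting the image. A rough sketch under several assumptions: you have root, synoboot.img is in the current directory, grub.cfg lives on the first partition of the image, and the path inside it may differ between loader versions:

```shell
# Loop-mount the loader image and prepend ihd_num=0 to the common args line.
# Paths and the grub.cfg location are assumptions -- verify before running.
IMG=synoboot.img
if [ -f "$IMG" ] && [ "$(id -u)" -eq 0 ]; then
    LOOP=$(losetup --show -f -P "$IMG")   # partitions appear as ${LOOP}p1, p2...
    mkdir -p /mnt/synoboot
    mount "${LOOP}p1" /mnt/synoboot
    # Insert ihd_num=0 right after the opening quote of the args line
    sed -i "s/^set common_args_3615='/&ihd_num=0 /" /mnt/synoboot/grub/grub.cfg
    umount /mnt/synoboot
    losetup -d "$LOOP"
    status="grub.cfg updated"
else
    status="need root and a synoboot.img in the current directory"
fi
echo "$status"
```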
  12. Benoire

    DSM 6.1.x Loader

    set common_args_3615='syno_hdd_powerup_seq=0 HddHotplug=0 ihd_num=0 syno_hw_version=DS3615xs vender_format_version=2 console=ttyS0,115200n8 withefi elevator=elevator quiet syno_port_thaw=1' The bit you need is ihd_num=0; if you then boot with the ESXi boot option it will no longer show the IDE drives.
  13. Benoire

    DSM 6.1.x Loader

    I think you can, but I've read that the latest update to 6.1 will not load properly unless you have clean-built the installation; I'm cautious about moving my 6.1 to the latest for this reason! You'd be better off using the new VM, attaching whatever storage you have (PCI passthrough etc.) to it, and then getting DSM to repair the drives to fix their boot partitions. Once you've done that, you can remove the virtual HDD you used to install DSM on for the 6.1 install... There is a thread on updating 5.2 to 6...
  14. Benoire

    DSM 6.1.x Loader

    All I did was use StarWind V2V Converter to convert the img to a VMDK. I edited the grub first, though, to change the serial, the timeout and which boot line to default to, and finally added the command to hide the IDE drives so DSM only shows the sd(x) drives. Once you have synoboot.vmdk, upload it to the datastore and replace the current 50MB VMDK in your VM... If in doubt, create a new VM to test.
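On Linux or macOS, qemu-img can do the same raw-to-VMDK conversion as StarWind V2V. A minimal sketch, assuming qemu-img is installed and synoboot.img is in the working directory:

```shell
# Convert the raw loader image into a VMDK that ESXi can attach.
if command -v qemu-img >/dev/null 2>&1 && [ -f synoboot.img ]; then
    # monolithicFlat writes a raw extent plus a small descriptor file;
    # vmkfstools -i on the host can re-clone it to thin later if wanted
    qemu-img convert -f raw -O vmdk -o subformat=monolithicFlat \
        synoboot.img synoboot.vmdk
    status="wrote synoboot.vmdk"
else
    status="need qemu-img and a synoboot.img in the current directory"
fi
echo "$status"
```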
  15. Benoire

    DSM 6.1.x Loader

    Ok, thanks. I do know that I can create a new install from the loader, let it install to a virtual disk and then re-add my current disks to repair, and it will work, but then I lose ALL my settings, configs, Active Directory setup, etc. What I want to know is how to upgrade without losing any configuration data.
  16. Benoire

    DSM 6.1.x Loader

    To update an existing ESXi 6.5 install (DSM 6.1-15047 Update 2) to the latest, I've read that a clean install is required due to a changed filesystem? The data partitions are already BTRFS, but is there something else that needs doing in order to upgrade? I've got a lot of data which I don't want to have to add again, so avoiding a clean install would be grand. @Polanskiman you're very aware of the issues at the moment, and this thread has lots of snippets of information as well as people asking for advice... Perhaps updating the OP again with details for ESXi and baremetal installs, and what is required to upgrade, would be very valuable. Thanks, Chris
  17. Benoire

    DSM 6.1.x Loader

    Is someone able to answer this directly? Can I use the latest loader from Jun and simply update to the latest version of DSM, or does it require a completely clean install? If it requires a clean install, can I snapshot the VM, add a new virtual HDD and let the OS be mirrored, then remove the proper drives, update the boot loader, boot from the virtual drive on the new version, and then add the existing drives and repair the DSM partition for it all to work?
  18. Benoire

    DSM 6.1.x Loader

    I'm running DSM 6.1-15047 Update 2 on an ESXi 6.5 install. Am I able to simply install the new version b loader over my old boot image, let DSM reboot, and then allow it to update to the latest version without a clean install? I've got a large amount of data that I would rather not have to re-download using Hyper Backup...
  19. An SSD cache serves a primary function: to keep recently and continuously accessed data in a hot, online state. The outcome is twofold: high-speed throughput, which would saturate a 1GbE network (no different really to RAID arrays with multiple drives), and low latency. On 1GbE you can forget about the bandwidth, as already mentioned, but it would help with access latency, e.g. the time to start transferring, as well as continuous transfer speeds and consistency of reads/writes. Unless you're running VMs from the machine, I personally don't think it serves much purpose that a standard array of spinners can't already cover (or an all-flash array if you're lucky)!
  20. The other option is to run ESXi 6.5, have a really small spare drive installed on the standard SATA connector for the XPEnology VM images, and then run two instances of DSM with either PCI passthrough of controllers to each one (e.g. 2 + 2 SAS HBAs etc.) or RDM mapping to create 'virtual' drives which are just maps to the real drives; this way you can utilise a single-controller setup within ESXi and pass individual drives to each instance. The advantage is two separate systems, each with 12-drive capacity, with fast copying to each other and without the headaches of a large number of drives... Or get a smaller chassis with fewer bays, e.g. a Dell R510, an SM 12-bay 2U chassis, etc.
  21. I use SM X8 motherboards quite a bit and I think they're supported without issue. How are you going to connect the 24 drives up? I don't see a SAS HBA there. Synology also expects 12 drives and no more; you have to edit the config files to support more drives, and with every update there is a chance of it breaking the drive mapping. To support 24 drives you would have to manually download the PAT file, re-edit the config to support more drives, then recompress it and use that; I don't think there are checksums on the PAT files yet. Your other option, to ensure complete compatibility, is to load vSphere Hypervisor 6.5 and create virtual hardware, which works fine with the latest loaders for DSM 6.1; you still have to edit the config for more than 12 drives, but you can at least create snapshots and roll back your loader if something messes up.
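For reference, the config edit people make is to synoinfo.conf: raise maxdisks and widen the internalportcfg bitmask, which has one bit per port. A sketch under the assumption of a DS3615xs-style config with all 24 ports treated as internal (24 set bits = 0xffffff); the sed lines are illustrative, not a tested upgrade-safe procedure:

```shell
# Compute the internal-port bitmask for a given drive count: one bit per
# port, so 24 drives -> (1<<24)-1 = 0xffffff.
MAXDISKS=24
INTERNAL=$(printf '0x%x' $(( (1 << MAXDISKS) - 1 )))
echo "maxdisks=\"$MAXDISKS\""
echo "internalportcfg=\"$INTERNAL\""
# On a live DSM box the values would be written to both copies of the file:
#   sed -i 's/^maxdisks=.*/maxdisks="24"/' \
#       /etc.defaults/synoinfo.conf /etc/synoinfo.conf
#   sed -i 's/^internalportcfg=.*/internalportcfg="0xffffff"/' \
#       /etc.defaults/synoinfo.conf /etc/synoinfo.conf
```

As the post notes, updates can overwrite these edits, so they may need re-applying after each DSM update.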
  22. Benoire

    DSM 6.1.x Loader

    I'm not sure of the difference between the boot options, but I had to edit the image to increase the timeout to 5 seconds!
  23. Benoire

    DSM 6.1.x Loader

    With respect to powerful CPUs, I can't say... I run a single Xeon L5630 as my ESXi CPU and pass 3 vCores through. For memory, however, ECC, while not required, will be better than non-ECC simply because it catches and corrects bit errors in RAM. FreeNAS benefits from ECC due to ZFS checking the filesystem for errors as it goes; however, as DSM doesn't have this function yet, ECC serves little purpose apart from maybe reducing one area of bit corruption before data is written to the hard disk. Personally, ECC > no ECC, but don't go out of your way for DSM.
  24. Benoire

    DSM 6.1.x Loader

    It's hardcoded in DSM as the i3; the speed will be correct, and if you use the CLI to report CPU cores it will be correct... just not in the information screen.
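A quick way to confirm the real core count from the DSM shell (it is a standard Linux userland, so /proc/cpuinfo is available), regardless of the hardcoded model name in the information screen:

```shell
# Count the logical cores the kernel actually sees.
CORES=$(grep -c '^processor' /proc/cpuinfo)
echo "visible cores: $CORES"
```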
  25. No, SHR1/2 is only allowed at initial disk group setup within the interface, as it requires repartitioning the drives to get it working.