XPEnology Community

Benoire

Member
  • Posts

    239
  • Joined

  • Last visited

  • Days Won

    1

Posts posted by Benoire

  1. Generally the RackStations, which have larger expansion capacity, use SAS HBAs and are therefore not limited by the sd(x) issue; the fact that RackStations use SAS HBAs is one of the reasons why we do not have a RackStation image... For what it's worth, a RackStation image with native LSI SAS drivers would probably allow proper drive sequencing on LSI cards as well as large expansion potential.

    • Like 2
  2. Hi

     

    I appreciate there is the very large topic on the Jun loader, but I've asked a number of times and so far had no response.  I want to upgrade from DSM 6.1-15047 Update 2 to the newest version on my ESXi install.  I've got the latest loader running, but I recall reading that I would be unable to update directly due to needing to format the drives etc.  Can someone confirm that I can simply hit update and all the data, including the system setup, will remain intact on the latest version?

     

    Thanks,

     

    Chris

  3. 11 hours ago, hpk said:


    Interesting ... How do you pass through a drive to DSM on ESXi 6.5?

    There are two different ways.  One is to use Raw Disk Mapping (RDM) within ESXi, which requires your controller to allow it; the other is simple PCI passthrough using DirectPath I/O, provided your motherboard supports it and your ESXi datastore is not on the controller you're passing through.

     

    RDM is an abstracted approach whereby the disk is still managed by ESXi but can only be used by the VM you've passed it through to.  It gives the same full-speed access as DirectPath I/O but often won't pass the SMART data through.  RDM drives are added in a similar manner to virtual drives, and you can make them appear in DSM in the order they're loaded into the machine.
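
    As a rough sketch of the RDM route (the device identifier and datastore paths below are placeholder examples, not taken from the original post), a physical-compatibility RDM descriptor can be created from the ESXi shell with vmkfstools and then attached to the DSM VM as an existing hard disk:

    ```shell
    # Sketch only: create a physical-mode RDM pointer file for a local disk.
    # List candidate devices first with:  ls /vmfs/devices/disks/
    # Substitute your own device identifier and datastore path.
    vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL \
      /vmfs/volumes/datastore1/DSM/rdm-disk1.vmdk
    ```

    (-z creates a physical-compatibility mapping; -r creates a virtual-compatibility one, which is the mode more likely to hide SMART data, as noted above.)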

  4. Firstly, you must remember that unless you're passing the entire drive through for DSM to manage, you're effectively creating virtual disks with all the overhead that ESXi adds.  Performance will be somewhat lower than native, as every I/O to the virtual drive has to pass through multiple layers before it finally hits the real drive.

     

    With that out of the way, however, your best option is, as wer suggested, to use single 'drives' of the maximum size you want per drive and then add them.  DSM won't likely recognise the increased size of a grown drive, as it has already partitioned it and installed DSM on it.  You could create 2 x 100 GB virtual drives and JBOD them, then expand the array later by adding an additional 100 GB virtual drive.
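
    As a minimal sketch of that approach (datastore paths are examples), the fixed-size virtual disks can be created on the ESXi host with vmkfstools:

    ```shell
    # Sketch: create thin-provisioned 100 GB virtual disks for DSM to JBOD.
    vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/DSM/data1.vmdk
    vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/DSM/data2.vmdk
    # Later, expand the pool by attaching another disk of the same size
    # rather than trying to grow an existing one:
    vmkfstools -c 100G -d thin /vmfs/volumes/datastore1/DSM/data3.vmdk
    ```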

    • Like 1
  5. On 30/06/2017 at 6:13 PM, mcdull said:

    I have done 2 updates on my ESXi box.  Both required fresh install.

    i.e. I need to manually format the first 2 partitions in the data disks and it will then allow me to install the new pat files.

    Or else the recovery process will kill the DSM.

     

    But it is weird that the DSM stop working in around 3 days.

     

    Is there anything I can check if I managed to get back to the DSM screen or SSH console?

    There are some chances that I can get it connected and then the instances will be stable.  But new connection cannot be made.

    So my VM has been stable and running for 3 days and 3 hours now.  I've completely backed up the entire storage array to CrashPlan using the Docker image and there have been no issues with it whatsoever.

     

    I would still like to know what happens if I directly update the DSM version to the latest with the newest bootloader?  I presume it will break?  Is the only fix to rebuild the OS partitions on the array?

  6. 2 hours ago, mcdull said:

    I have done 2 updates on my ESXi box.  Both required fresh install.

    i.e. I need to manually format the first 2 partitions in the data disks and it will then allow me to install the new pat files.

    Or else the recovery process will kill the DSM.

     

    But it is weird that the DSM stop working in around 3 days.

     

    Is there anything I can check if I managed to get back to the DSM screen or SSH console?

    There are some chances that I can get it connected and then the instances will be stable.  But new connection cannot be made.

    So my vSAN rebuild and upgrade to 6.6 is complete; the rebuild is what caused the failure in the VM.  With it all synced and running on the cluster, the DSM VM is now up and running: new bootloader, but the same DSM version as before (not the latest).  Will see how long it lasts!

  7. Not sure; I've just had to update my vSAN cluster, so the VM was taken down.  I can't remember when I updated my boot loader, but it will be back up and running tonight, so I'll know more in the next few days!  It was certainly running for more than 24 hours, and I thought more than 48.  It's the same loader as everyone is using on bare metal; I hadn't updated the DSM version, though, as I'm not sure whether I have to wipe my data in order for it to upgrade; no one seems able to confirm that!

  8. 6 hours ago, crookedview said:

    I've not been able to figure this out either.  I have tried IDE and SATA disk types in ESXI 6, disk mode is independent non-persistent, but the DSM installer still wants to format my boot disk.

     

    I have tried adding rmmod=ata_piix to the end of the loadlinux line in the grub config.

    set common_args_3615='syno_hdd_powerup_seq=0 HddHotplug=0 ihd_num=0 syno_hw_version=DS3615xs vender_format_version=2 console=ttyS0,115200n8 withefi elevator=elevator quiet syno_port_thaw=1'

     

    I've bolded/italicised the bit you need; if you then boot with the ESXi boot option, it will not show the IDE drives.

     

  9. 2 hours ago, TmRNL said:

    Do you also know how i could update my 5.2 to 6.1 with the new vmdk if i used an ISO to boot from? :S

    I think you can, but I've read that the latest update to 6.1 will not load properly unless you have clean-built the installation; I'm cautious about moving my 6.1 to the latest for this reason!

     

    You'll be better off using the new VM and attaching whatever storage you have (PCI passthrough etc.) to it, then getting DSM to repair the drives to fix their boot partitions.  Once you've done that, you can remove the virtual HDD you used to install DSM on for the 6.1 install... There is a thread on updating 5.2 to 6...

    • Like 1
  10. 7 minutes ago, TmRNL said:

    How? :o How the hell do you get it to boot of the 1.02b .img file?

    All I did was use StarWind V2V Converter to convert the img to a VMDK.  I edited the grub config first, though, to change the serial number, the timeout and which boot line to default to, and then finally added the command to hide the IDE drives so DSM only shows the sd(x) drives.

     

    Once you have synoboot.vmdk, upload it to the datastore and then replace the current 50 MB VMDK in your VM... If in doubt, create a new VM to test.
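
    As an alternative sketch of the same conversion (the poster used StarWind V2V Converter; filenames here are examples), qemu-img can do the raw-to-VMDK step, and vmkfstools on the ESXi host can then clone it to an ESXi-native format:

    ```shell
    # Convert the raw loader image to a VMDK on your workstation:
    qemu-img convert -f raw -O vmdk synoboot.img synoboot.vmdk
    # After uploading to the datastore, clone it on the ESXi host so it is
    # in a format ESXi fully supports:
    vmkfstools -i /vmfs/volumes/datastore1/DSM/synoboot.vmdk \
      -d thin /vmfs/volumes/datastore1/DSM/synoboot-esxi.vmdk
    ```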

    • Like 1
  11. 2 hours ago, Polanskiman said:

     

    I am not very aware of ESXI unfortunately so I'll leave that to someone with the appropriate expertise.

    Ok, thanks.  I do know that I can create a new install from the loader, let it install to a virtual disk and then re-add my current disks to repair, and it will work, but then I lose ALL my settings, configs, Active Directory setup, etc.  What I want to know is how to upgrade without losing any configuration data.

  12. To update an existing ESXi 6.5 install (DSM 6.1-15047 Update 2) to the latest version, I've read that a clean install is required due to a changed filesystem?  The data partitions are already Btrfs, but is there something else that needs doing in order to upgrade?  I've got a lot of data which I don't want to have to add again, so avoiding a clean install would be grand. @Polanskiman you're very aware of the current issues, and this thread has lots of snippets of information as well as people asking for advice... Perhaps updating the OP again with details for ESXi and bare-metal installs, and what is required to upgrade, would be very valuable.

     

    Thanks,

     

    Chris

  13. On 2017-6-18 at 9:30 AM, Benoire said:

    I'm running DSM 6.1-15047 Update 2 on an ESXi 6.5 install.  Am I able to simply install the new version b loader over my old boot image, let DSM reboot and then allow it to update to the latest version without a clean install? I've got a large amount of data that I would rather not have to re-download using Hyper Backup...

     

    Is someone able to answer this directly?  Can I use the latest loader from Jun and then simply update to the latest version of DSM, or does it require a completely clean install?  If it requires a clean install, can I snapshot the VM, add a new virtual HDD and let the OS be mirrored, then remove the proper drives, update the boot loader, boot from the virtual drive on the new version, and then add the existing drives and repair the DSM partition for it all to work?

  14. I'm running DSM 6.1-15047 Update 2 on an ESXi 6.5 install.  Am I able to simply install the new version b loader over my old boot image, let DSM reboot and then allow it to update to the latest version without a clean install? I've got a large amount of data that I would rather not have to re-download using Hyper Backup...

  15. An SSD cache serves a primary function: to keep recently and continuously accessed data in a hot, online state.  The outcome of this is twofold: high-speed throughput, which would saturate a 1 GbE network (no different, really, from RAID arrays with multiple drives), and also low latency.  On 1 GbE you can forget about the bandwidth, as already mentioned, but it would help with access latency, e.g. the time to start transferring, as well as continuous transfer speeds and consistency of reads/writes.  Unless you're running VMs from the machine, I personally don't think it serves much purpose that a standard array of spinners can't cover (or an all-flash array if you're lucky)!

  16. The other option is to run ESXi 6.5, have a really small spare drive installed on the standard SATA connector for the ESXi VM images for the XPEnology instances, and then run two instances of DSM with either PCI passthrough of a controller to each one (e.g. 2 + 2 SAS HBAs etc.) or RDM mapping to create 'virtual' drives which are just maps to the real drives; this way you can utilise a single-controller setup within ESXi and pass individual drives to each instance.

     

    The advantage is two separate systems, each with 12-drive capacity, with fast copying between them without the headaches of a large number of drives...

     

    Or get a smaller chassis with less bays; e.g. Dell R510, SM 12 bay 2u chassis etc.

  17. I use SM X8 motherboards quite a bit and I think they're supported without issue. How are you going to connect the 24 drives, though? I don't see a SAS HBA there. Synology is also expecting 12 drives and no more; you have to edit the config files to support more drives, and on every update there is a chance of the drive mapping breaking. To support 24 drives you would have to manually download the PAT file, re-edit the config to support more drives, then recompress it and use that; I don't think there are checksums on the PAT files yet.

     

    Your other option, to ensure complete compatibility, is to load vSphere Hypervisor 6.5 and create virtual hardware, which works fine with the latest loaders for DSM 6.1; you still have to edit the config for more than 12 drives, but you can at least create snapshots and roll back your loader if something messes up.
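
    For reference, the config edit mentioned above is the commonly cited community tweak to /etc/synoinfo.conf; the values below are for 24 internal disks and are drawn from community posts rather than the original text, so treat them as an assumption and back the file up first (DSM updates are reported to overwrite it):

    ```
    # /etc/synoinfo.conf fragment for 24 internal disks (community-reported
    # values, not official):
    maxdisks="24"
    internalportcfg="0xffffff"   # bitmask: 24 one-bits = ports 0-23 internal
    ```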

  18. When you first boot the VM, you select starting the image with ESXi instead of bare metal in the boot menu...

    Does that make any difference? And how do you do that? It's so fast here I can't even select it...

    I'm not sure of the difference between the boot options, but I had to edit the image to increase the timeout to 5 seconds!
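
    The timeout change is a one-line edit to grub.cfg on the loader image; the value below mirrors the 5 seconds mentioned above:

    ```
    # In grub.cfg on the loader image, raise the menu timeout (seconds) so
    # the ESXi entry can be selected before the default boots:
    set timeout='5'
    ```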

  19. With respect to powerful CPUs, I can't say... I run a single Xeon L5630 as my ESXi CPU and pass 3 vCores through.

     

    For memory, however: ECC, while not required, is better than non-ECC simply because it does extra verification to catch bit errors in memory.  FreeNAS benefits from ECC because of ZFS's on-the-fly filesystem error checking; as DSM doesn't have this function yet, ECC serves little purpose apart from perhaps reducing one source of bit corruption before data is written to the hard disk.  Personally, ECC > no ECC, but don't go out of your way for DSM.

  20. Question.. I have a Dell Precision T7910

    With 2 x Xeon E5-2698 v3 (16 cores each) @ 2.3 GHz (max 3.6 GHz), plus 128 GB of ECC RAM

     

    Why can I only see my CPUs as an i3?

     

    It's hardcoded in DSM as an i3; the speed will be correct, and if you use the CLI to report CPU cores it will be correct... just not in the information screen.
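
    The CLI check mentioned can be done over SSH; the kernel's own view in /proc/cpuinfo reflects the real topology even though the info screen shows the hardcoded model:

    ```shell
    # Count the logical processors the kernel actually sees (run over SSH
    # on the DSM box); this reflects the real vCPU count, not the "i3" label.
    grep -c ^processor /proc/cpuinfo
    # The real model string that the information screen ignores:
    grep -m1 'model name' /proc/cpuinfo
    ```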
