• Announcements

    • Polanskiman

      DSM 6.2-23739 - WARNING   05/23/2018

      This is a MAJOR update of DSM. DO NOT UPDATE TO DSM 6.2 with Jun's loader 1.02b or earlier. Your box will be bricked.  You have been warned.   https://www.synology.com/en-global/releaseNote/DS3615xs
    • Polanskiman

      Email server issues - FIXED   06/14/2018

      We have been experiencing email server issues. We are working on it. Bear with us. Thank you. ------------- Problem has been fixed. You might receive duplicate emails. Sorry for that.

Benoire

Members
  • Content count

    223
  • Joined

  • Last visited

  • Days Won

    1

Benoire last won the day on July 7 2017

Benoire had the most liked content!

Community Reputation

6 Neutral

About Benoire

  • Rank
    Super Member
  1. Hello, I've been toying around with VMM on a spare machine and got the cluster to work by setting the NIC MAC to the real MAC address, but I've hit an error whereby the virtual machine will not start, with the message: "Failed to power on the virtual machine '<name of my VM>' on the host '<name of my NAS>' due to insufficient RAM on the host". The test machine is a Westmere-based Intel Xeon L5630 on a Supermicro X8DTL-iF with 8 GB RAM. Virtualisation is enabled in the BIOS (VT-d/VT-x) and the machine is booting fine... but trying to create any VM in VMM displays the above error. The VMs all have less than 4 GB of RAM assigned, so it's not a real RAM issue. Anyone use Westmere-based chips with VMM? Any pointers? Thanks, Chris
  2. VMM cannot create cluster

    So I changed the MAC addresses to the real MACs of the NICs and VMM now works fine.
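For anyone searching later: the MACs (and serial) live in the loader's grub.cfg on the USB stick or synoboot image. The variable names below are as I recall them from Jun's loader, so verify against your own copy before editing; the values shown are placeholders, not real ones:

```
set sn=XXXXXXXXXXXXX        # your generated serial (placeholder)
set netif_num=1             # number of NICs the loader should expect
set mac1=001122AABBCC       # replace with the NIC's REAL MAC for VMM clustering
```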
  3. VMM cannot create cluster

    Did anyone resolve this? I'm moving from ESXi-hosted to baremetal but would like VMM to enable local hosting of certain VMs that are integral to the safe shutdown of my homelab... I'm just playing with this and I'm getting the same outputs: that it fails to get the first MAC. I was using a generated MAC for the four NICs, but should I use the actual MACs of the quad 1 GbE network card? Do I need a corresponding serial, and if so, how do you generate it based on the MAC rather than the other way around? Thanks! Chris
  4. Cool, and that was the plan. I have the image set up and a spare drive, so I'll migrate across and that should work. The entire vSphere ESXi machine is going to baremetal, and I've run these cards in Xpenology/DSM before, so it should be fine.
  5. Hi, I'm in the process of moving back to baremetal but I've hit a snag with reboots and shutdowns: the system won't actually do anything when requested by DSM... Instead, I have to go through the IPMI interface to carry out the request. The motherboard was used with ESXi 6.5 until recently and that was fine, but it's never had DSM on it before. Are there any issues with Supermicro X8-class motherboards and reboots/shutdowns? Thanks, Chris
  6. @sbv3000 thanks, and resurrecting this! I've created the USB and it's running on a spare machine with a different HDD, so it's now at the same latest level as my vDSM. The new baremetal setup uses a quad 1 GbE NIC on a Supermicro motherboard, and the USB grub is set up for four NICs with MAC addresses linked to the serial. My vDSM is configured with only 1 NIC; are there going to be any issues moving from virtual with 1 NIC to baremetal with 4 NICs? Do I need to use the reinstall option or just let it boot up?
  7. Thanks. 1) I've got an Intel quad-port card so will use that, and disable the Supermicro onboard NICs as I won't need them. 2) I'm on full passthrough of the drive controllers, so DSM is on real HDDs. I moved the other way and essentially created a new loader VM disk, but I presume I would need to build the USB stick with the PID/VID set up and the MAC addresses of the NICs (as above)? Do this and then select the reinstall option? 3) It's a rebranded APC, just reasonable power capacity and rack mounted. The management card does SNMP, so that should be fine. What functions do you have over SNMP?
  8. I currently run Xpenology in an ESXi environment and I wish to move to baremetal for various reasons. I have a number of questions: 1) I intend to increase the number of NICs using Intel 1 GbE cards. Is there a maximum number of NICs that DSM supports? My install currently has 1, stupidly, and I want to push to 4. 2) If I move my current VM to baremetal, how can I get the loader to re-detect the hardware? I presume I use the reinstall option in the loader on boot and then re-install DSM? 3) Does Xpenology/DSM work with UPSes for shutdown? I use a 1920 W Dell UPS with a management card, and I want to ensure the machine shuts down when power becomes critical. Thanks, Chris
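On question 3, one common approach when DSM's built-in UPS support doesn't talk to the card directly is a small poll-and-shutdown script over SNMP. A minimal sketch, assuming net-snmp's snmpget is installed and that the card answers the APC PowerNet-MIB battery-capacity OID; the host IP, community string, OID, and threshold are all assumptions to verify against your management card's MIB:

```shell
#!/bin/sh
# Hypothetical UPS watchdog sketch -- host, community string, OID and
# threshold are placeholders; check them against your card's MIB.

# Return success when battery percentage ($1) is at or below threshold ($2).
should_shutdown() {
    [ "$1" -le "$2" ]
}

# Query the card for remaining battery capacity (percent).
poll_ups() {
    # OID shown is APC PowerNet-MIB upsAdvBatteryCapacity -- verify it.
    snmpget -v1 -c public -Oqv "$1" .1.3.6.1.4.1.318.1.1.1.2.2.1.0
}

# Example use from cron on the DSM box (uncomment to enable):
# cap=$(poll_ups 192.168.1.50)
# should_shutdown "$cap" 20 && poweroff
```

Run it every minute or so from the task scheduler; the threshold check is separated out so you can tune it without touching the SNMP side.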
  9. I believe the ports on this are InfiniBand connections, which will be attached differently from SATA, but I'm not sure how.
  10. Generally, the RackStations with larger expansion capacity use SAS HBAs and are therefore not limited by the sd(x) issue; the fact that RackStations use SAS HBAs is one of the reasons why we do not have a RackStation image... For what it's worth, a RackStation image with native LSI SAS drivers would probably allow for proper drive sequencing on LSI cards as well as large expansion potential.
  11. So, as no one appeared able to answer this, I did a full local backup and hit update... So far so good: latest DSM with drives and systems still running and connected to my AD infrastructure. Will see how it fares over the next week!
  12. Hi. I appreciate there is the very large topic on the Jun loader, but I've asked a number of times and so far had no response. I want to upgrade from DSM 6.1-15047 Update 2 to the newest version on my ESXi install. I've got the latest loader running, but I recall reading that I would be unable to update directly due to needing to format the drives etc. Can someone confirm that I can simply hit update and all the data, including system setup, will remain intact through to the latest version? Thanks, Chris
  13. [DSM 6.1.2] Question about disk management in ESXi

    There you go. Raw disk mapping is as close as you'll get to passthrough, but DSM won't report SMART errors as it can't read that data... You might have to create/find a script for ESXi that can read the SMART data.
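As a starting point for such a script, ESXi's own esxcli can read SMART for locally attached devices from the host shell. A sketch (the awk pattern for device identifiers is a guess covering common naa./t10./mpx. names; list your devices first and adjust):

```shell
#!/bin/sh
# Sketch: dump SMART data for local devices from the ESXi shell.
# Only useful on an ESXi host; device-ID pattern is an assumption.

# Print the device identifiers esxcli knows about.
list_devices() {
    esxcli storage core device list | awk '/^(naa|t10|mpx)\./ {print $1}'
}

# Print the SMART table for each device.
report_smart() {
    for d in $(list_devices); do
        echo "=== $d ==="
        esxcli storage core device smart get -d "$d"
    done
}

# report_smart    # uncomment when running on the ESXi host
```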
  14. [DSM 6.1.2] Question about disk management in ESXi

    There are two ways: one is to use Raw Disk Mapping (RDM) within ESXi, which requires your controller to allow it; the other is simple PCI passthrough using DirectPath I/O, if your motherboard supports it and your ESXi datastore is not on the controller you're passing through. RDM is an abstracted approach whereby the disk is still managed by ESXi but can only be used by the VM you've passed it through to. It has the same full-speed access as DirectPath I/O but often won't pass the SMART data through. RDM drives are added in a similar manner to virtual drives, and you can make them appear in DSM in the order they're loaded into the machine.
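For reference, an RDM pointer file is created on the ESXi host with vmkfstools and then attached to the VM as an existing disk. A sketch, where the device ID and datastore path are placeholders (-r creates a virtual-compatibility RDM, -z a physical one; check the docs for your ESXi version):

```shell
#!/bin/sh
# Sketch: create an RDM pointer for a local disk on the ESXi host.
# $1 = device ID (e.g. naa.xxxx), $2 = pointer .vmdk path on a datastore.
make_rdm() {
    vmkfstools -r "/vmfs/devices/disks/$1" "$2"
}

# Example (placeholders -- substitute your own device and path):
# make_rdm naa.5000c500a1b2c3d4 /vmfs/volumes/datastore1/dsm/disk1-rdm.vmdk
```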
  15. [DSM 6.1.2] Question about disk management in ESXi

    Firstly, you must remember that unless you're passing the entire drive through to DSM to manage, you're effectively creating virtual disks with all the overhead that ESXi adds. Performance will be somewhat lower than native, as the virtual drive has to go through multiple layers before I/O finally hits the real drive. With that out of the way, your best option is, as wer suggested, to use single 'drives' of the maximum size you want per drive and then add them. DSM won't recognise the increased size of a grown drive, as it has already partitioned it and installed DSM on it. You could instead create 2 x 100 GB virtual drives and JBOD them, then expand the array later by adding an additional 100 GB virtual drive.
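The virtual drives themselves can be made in the vSphere client or from the host shell with vmkfstools. A sketch of the 2 x 100 GB approach above, with the datastore path as a placeholder (thin provisioning is my assumption; use -d zeroedthick or eagerzeroedthick if you prefer):

```shell
#!/bin/sh
# Sketch: create 100 GB thin-provisioned virtual disks to JBOD in DSM.
# $1 = .vmdk path on a datastore (placeholder below).
make_disk() {
    vmkfstools -c 100G -d thin "$1"
}

# Example (placeholders -- substitute your own datastore paths):
# make_disk /vmfs/volumes/datastore1/dsm/data1.vmdk
# make_disk /vmfs/volumes/datastore1/dsm/data2.vmdk
```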