XPEnology Community

jaggerdude


Posts posted by jaggerdude

  1. Hello,

     

     Was wondering whether it's possible for multiple compatible DSM VM configurations to run simultaneously and share the same storage system?

     

    Thanks in advance.

    Jag

  2. I have managed to install on the bare-metal board Intel D2700MUD (http://ark.intel.com/products/56457/Intel-Desktop-Board-D2700MUD) and everything is working very well. I even managed to fix the problem with the long boot. It seems that the hotplugd service for some reason is not starting, so we get a time-out in the script serv-bootup-timeout.sh. Also, without the hotplugd service started, no USB devices are visible in DSM. My quick fix/hack was to edit the script /usr/syno/etc/rc.sysv/serv-bootup-timeout.sh and insert the command "synoservicectl --start hotplugd" like this:

    # Increase bootup timeout from 180 to 600 when pgsql is update database
    time=0
    synoservicectl --start hotplugd
    while true; do

    It's not a nice hack, but hey, it's working! :smile:
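For what it's worth, the insertion can be scripted instead of done by hand in an editor. A minimal sketch using sed on a local stand-in copy of the file (on a real box the script lives at /usr/syno/etc/rc.sysv/serv-bootup-timeout.sh; keep a backup before touching it):

```shell
# Work on a local stand-in; on DSM, target the script under
# /usr/syno/etc/rc.sysv/ and make a backup copy first.
script=./serv-bootup-timeout.sh
printf '    time=0\n    while true; do\n' > "$script"   # stand-in for the real file

# Append the service start right after the "time=0" line:
sed -i '/time=0/a synoservicectl --start hotplugd' "$script"

cat "$script"
```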

    Now I will try to play with BTRFS and other fixes, like booting from SSD and using an SSD partition as temporary storage for MySQL (MariaDB), Plex and other 3rd-party apps so the main HDD can enter standby, etc...

     

     

     


     

    The hack/fix works on DSM 6 in ESXi 6. Now I can see the USB devices using PCIe passthru.

     

    Thanks! :mrgreen:

  3. Hello everyone!

     

    Currently running XPEnoboot 5.2-5967 on ESXi 6.

    Drives are connected to LSI 2008 directly presented to VM using PCI passthrough.

     

    Already tested DSM6 on ESXi and it works, but without PCI passthrough and LSI (using a virtual disk only).

    Any experience with LSI and passthrough on DSM 6 ?

     

    BTW, it seems that current use of DSM6 on ESXi is still not very widely tested yet. Any thoughts on that? So not really for 'production' for now?

     

    Thanks in advance !

    KR,

    Titi

     

    The required driver for your LSI2008 chip based controller is included in DSM6 (at least in the boot/pat combination that oktisme created).

    I assume that it would work.

     

    Ready for production? Depends on your expectations.

    I do trust my application data (mostly from docker-containers) to dsm6 using a vmdk disk.

    I don't trust my hard disks to dsm6... they are still assigned to an XPE VM with direct I/O using a LSI2008 chip based controller.

     

    We have the same setup, and PCIe passthru works--except USB. That's the only issue I have so far....

  4. On a separate issue, I have an old DS211+ machine which I use for backup and which I upgraded to DSM6. But I found out that only certain models were enabled by Synology for BTRFS. That didn't make sense to me: if the VM can be btrfs-enabled, I don't see why the physical machine should be disabled. I'm trying to compare the /etc/synoinfo.conf files between the VM and the physical machine to see if there is any setting that enables the btrfs functionality. But I'm not familiar enough with all the ins & outs and didn't want to mess up my backup machine. I do think it's in that configuration file, though.

     

    Does anyone here know?

     

    TIA.

    I tried the following--changed the defaultfs to btrfs and added systemfs="ext4"

     

    defaultfs="btrfs"

    systemfs="ext4"

     

    Added:

     

    support_btrfs="yes"

     

     

    Rebooted the machine and created a volume. Before the modification I wasn't given the btrfs choice at all--it just defaulted to ext4. But this time it did offer it and created the btrfs volume. Immediately afterward it crashed, just like the VM. I tried to issue the mount command but I got the same error as before....
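The edits described above can also be applied from a shell. A sketch on a stand-in file (the real file is /etc/synoinfo.conf; the key names are the ones from the post, and whether a given DSM build actually honors them is not guaranteed):

```shell
conf=./synoinfo.conf                       # real file: /etc/synoinfo.conf
printf 'defaultfs="ext4"\n' > "$conf"      # stand-in content

# Switch the default filesystem and declare btrfs support:
sed -i 's/^defaultfs=.*/defaultfs="btrfs"/' "$conf"
printf 'systemfs="ext4"\nsupport_btrfs="yes"\n' >> "$conf"

cat "$conf"
```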

  5. On a separate issue, I have an old DS211+ machine which I use for backup and which I upgraded to DSM6. But I found out that only certain models were enabled by Synology for BTRFS. That didn't make sense to me: if the VM can be btrfs-enabled, I don't see why the physical machine should be disabled. I'm trying to compare the /etc/synoinfo.conf files between the VM and the physical machine to see if there is any setting that enables the btrfs functionality. But I'm not familiar enough with all the ins & outs and didn't want to mess up my backup machine. I do think it's in that configuration file, though.

     

    Does anyone here know?

     

    TIA.

  6. Hi all!

     

    I have DSM6 running on ESXi6 with SHR-BTRFS. I just noticed that my USB3 external drive is not being recognized by the system, and my MFC printer hooked up to USB3 is no longer recognized either. However, in Control Panel under Info, they are both 'seen' as connected USB devices.

     

    Any ideas why?

     

    TIA.

  7. Good. Try to use BTRFS functionality, and see if it works... :?:

     

    The functionality seems to work. I've only tried Hyper Backup and Cloud Station ShareSync....

     

    But it's still bugging me why, during the bare-metal install, the mount command to fix the crashed btrfs volume1

    $ mount /dev/lg1000/lv/volume1

    gives me an error saying it can't find volume1 in /etc/fstab???

     

    Does anyone know what I did wrong there?

     

    TIA.
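For anyone hitting the same error: when mount(8) is given only one argument, it looks that argument up in /etc/fstab and fails if no entry exists, which matches the message quoted above. Passing both the device and the mount point skips the fstab lookup entirely. The paths below assume DSM's usual LVM layout and are an untested sketch, not verified against this exact setup:

```shell
# One-argument form consults /etc/fstab and fails without an entry:
#   mount /dev/vg1000/lv          -> "can't find ... in /etc/fstab"
# Two-argument form needs no fstab entry (assumed DSM LVM paths):
mount /dev/vg1000/lv /volume1
```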

  8. Hey y'all,

     

    Someone said he'd rather be lucky than good. When it comes to 'results' I suppose some great discoveries happened by accident, and I'm not saying that this is one of those. Heck, I don't even want to repeat the process in order to say it's repeatable, so I'm just going to post it here and leave it up to someone who's bored enough to give it a try. I'm sure the techies in here could easily explain the why....

     

    You see--the main reason I had wanted to upgrade to DSM6 was to see btrfs at work in my ESXi setup, but no matter what I tried I was unsuccessful. I have a stock Lenovo TS140 to which I added a cheap Marvell SATA controller with a couple of 3TB drives, plus a couple of SSDs hooked up to the onboard ATA controller. I had been using ESXi6 as my only OS with several VMs, including DSM5 and the recently installed DSM6 courtesy of this thread (no real significant data stored, just media files managed by Plex running in DSM).

     

    Seeing a post of a successful bare metal install here, I decided to see if I could do the same. So I disabled in BIOS the SSD where ESXi6 and all the VMs are installed (so I didn't have to re-install everything again), and sure enough I was able to configure DSM6 (by that I mean using the 'INSTALL' option only). I proceeded to create a SHR-BTRFS volume out of the pair of 3TB drives, and as expected it crashed the volume immediately after. I tried to manually mount the volumes as suggested in the posts and it didn't work for me; for some reason it couldn't find the volume1 folder as a mount point.

    Long story short, I gave up, re-enabled the ESXi boot drive, rebooted back to ESXi6 and powered up the VMs. Now the DSM6 VM had that same pair of 3TB drives in a SHR-EXT4 configuration (using PCIe passthrough), and I fully anticipated a crashed volume because of the failed bare metal install, which is what I got; but lo and behold, this time it gave me a REPAIR option! Which I promptly clicked on, and voila--I have a fully working SHR-BTRFS volume in my DSM6 VM, which is what I had been wanting in the first place! I guess the VM just mounted the volume correctly since it was passthrough....

     

    :mrgreen:

  9. I was able to create the SHR using PCIe passthrough with three 3TB SATA drives and two 64GB SSD drives I used for cache.

     

    Do you have better performance with your SSD cache ? Does it work well ? Last question, do you use two identical SSD ? Thank you !

    The sync between servers seems to go much faster, and on-the-fly conversion of media files is definitely faster.

  10. Has anyone been able to create a SHR volume with this?

    I don't see the option. I'm forced to create a raid group and cannot just create Volumes from the start.

     

    Is this normal?

    I was able to create the SHR using PCIe passthrough with three 3TB SATA drives and two 64GB SSD drives I used for cache.

    Can I ask what sata card you have for PCIe pass through?

    Also, did you have to create a RAID group first before creating the Volume?

     

    It seems I have to create a raid group and then can create a volume from there. But there is no SHR option.

     


    I just used a cheap 4-port Marvell card. No, I did not have to create a RAID group; the volume manager wizard created the SHR.

  11. Has anyone been able to create a SHR volume with this?

    I don't see the option. I'm forced to create a raid group and cannot just create Volumes from the start.

     

    Is this normal?

    I was able to create the SHR using PCIe passthrough with three 3TB SATA drives and two 64GB SSD drives I used for cache.

  12. I’ve found it in disk-0.vmdk - thanks

     

    My solution:

    1. Set disk-0.vmdk to independent-persistent
    2. Export disk-0.vmdk to Windows
    3. Use ImDisk Virtual Disk Driver to mount the vmdk
    4. Edit syslinux.cfg
    5. In ImDisk Virtual Disk Driver (as Administrator!), remove the mounted disk
    6. Import the vmdk files to ESXi
    7. Set disk-0.vmdk to independent-non-persistent

     

    I'm able to download the existing VMDK and map it to a drive in Windows using VMware Workstation without having to convert, so I could edit syslinux.cfg directly and then save, disconnect and re-upload to the datastore. The DSM then assumed the MAC ID and serial number of my existing Synology box. However, I'm still getting a system error whenever I try to set up QuickConnect to log in to Synology. In the prior version I was able to use QuickConnect, but it seems DSM 6 probably requires flashing the network adapter to change the MAC ID.

  13. I’ve found it in disk-0.vmdk - thanks

     

    My solution:

    1. Set disk-0.vmdk to independent-persistent
    2. Export disk-0.vmdk to Windows
    3. Use ImDisk Virtual Disk Driver to mount the vmdk
    4. Edit syslinux.cfg
    5. In ImDisk Virtual Disk Driver (as Administrator!), remove the mounted disk
    6. Import the vmdk files to ESXi
    7. Set disk-0.vmdk to independent-non-persistent

    VMware Workstation can mount the vmdk itself for editing, right?

  14. @koroziv @haydibe @otisme

     

    Deployed the OVF you shared on ESXi 6U2 and everything worked, plus updates and PCIe passthrough. So I'm thinking of moving my 'production' disks over to DSM6. But I wanted to ask: could I start fresh, then modify the syslinux.cfg (is it the one in the syslinux folder?) in VMDK-0 to use my real Synology DS MAC ID and serial number?

     

    Thanks for the hard work and for sharing your finds with us... Much appreciated!
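On the serial/MAC question: in the loaders discussed in this thread, those values are passed as kernel parameters on the append line of syslinux.cfg. A purely illustrative fragment with placeholder values (the exact parameter names and the rest of the append line should be checked against your own loader before editing; nothing here is copied from a real config):

```
# Illustrative only -- placeholder values; verify parameter names
# against the syslinux.cfg shipped with your loader:
append initrd=/rd.gz root=/dev/md0 sn=XXXXXXXXXXXXX mac1=001132XXXXXX
```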

  15. Oh, I forgot to mention--I have an IOQUEST 4-port card which I set up as passthru instead of defining virtual disks. They're dedicated to the DSM anyway, and I think you get the benefit of bypassing the ESXi overhead plus the capability of retaining your SMART features. Just a thought. :smile:

     

    Thanks again!

  16. Thanks for this! I now have a working build on a Lenovo TS140. A question regarding packages--I'm able to install the original Synology packages like Video Station, Plex, Surveillance Station etc. Are there any reasons for the community packages aside from the obvious customization features? I mean, does installing original packages break the custom build itself?
