Everything posted by ferno

  1. Well, got it working with SATA controller passthrough on my HP MicroServer Gen10 Plus. Now I'm just wondering whether I can safely upgrade to the newer releases.
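For anyone searching for the same thing, the controller passthrough mentioned above can be sketched roughly like this on a Proxmox host. This is a hedged sketch, not a definitive guide: the PCI address and VM ID 100 are assumptions, so check your own hardware with lspci first.

```shell
# Rough sketch: passing the onboard AHCI/SATA controller through to an
# Xpenology VM on Proxmox. IOMMU must be enabled in the BIOS and on the
# kernel command line (e.g. intel_iommu=on) before this works.

# 1. Find the PCI address of the SATA controller:
lspci -nn | grep -i 'sata'

# 2. Attach it to the VM (VM ID 100 and address 0000:00:17.0 are examples):
qm set 100 -hostpci0 0000:00:17.0
```

Once passed through, the disks on that controller disappear from the host and show up natively inside the VM, which is what lets DSM manage them directly.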
  2. Hi everybody, I have been playing with this tutorial just to see if Proxmox is the way to go for Xpenology. I already have Xpenology running on bare metal and ESXi, so I wanted to try this. The back-up is brilliant and works really well! Thank you for that! I am just wondering, what is the best way to attach storage to the VM? I already have a ZFS pool on the Proxmox host and I could create a virtual disk, but I am not sure how the performance and reliability are. In ESXi I got some instability when I created a volume based on 4 VMDKs, so I opted to just create a small VMDK and just mount
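As a hedged sketch of the virtual-disk route asked about above (the storage name "tank-vm", pool name "tank", disk size and VM ID 100 are all assumptions, not from the original post), attaching a ZFS-backed disk on Proxmox looks roughly like:

```shell
# Register an existing ZFS pool as Proxmox storage (pool "tank" is an example):
pvesm add zfspool tank-vm --pool tank --content images

# Allocate a 200 GB zvol on that storage and attach it to VM 100 as a SCSI disk:
qm set 100 -scsi1 tank-vm:200
```

A zvol-backed disk avoids the extra file-system layer a file-based image would add, but whether that is robust enough for a DSM volume is exactly the reliability question the post raises.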
  3. Hi balrog, this is pretty much what I am trying to accomplish with my setup. But I think I need the redundancy of RAID10; most data is pretty much backed up on other NAS drives (TrueNAS gets synced each Sunday and the important files are synced with my OneDrive Business in almost real time). Thank you for the how-to. I am still in discovery mode, switching back and forth between Proxmox and ESXi, and in the meantime fiddling with bhyve on TrueNAS and KVM on OMV. I will probably end up going back to ESXi; I want to run ESXi from an attached SSD on the USB port and t
  4. Hi Balrog, yep, I am going to install it tomorrow; the 64GB is already in. I will just run it with 4 HD drives, a USB stick and a Seagate One Touch attached to the USB3 port, like they showed on the STH site. The only expansion I would love is an Nvidia P400 for transcoding, but I don't want to stress the PSU, and since this server will run 24/7 I don't want it to draw more than 70W when not handling heavy workloads. My Gen8 now draws 50W with 2 DIMMs, 4 internal drives, internal USB and an external USB drive. My ESXi ML310 Gen8 also the same with 32GB and 4 10TB drives some 720
  5. Hi UXORA-COM, I am still fiddling and trying things out. I am having some trouble starting the LXC, but I probably did something wrong and have to go over it again. There are some parts of the tutorial that I might have interpreted wrongly. One question: does the boot loader have to be loaded from a URL? Also, I have to read up on the virtio 9p driver etc.
  6. Hi balrog, how did you do this: "I have done a passthrough of the whole onboard SATA ahci-controller to the xpenology-vm." Does ESXi see the SATA controller when it is in AHCI mode? I thought the only way to pass the hardware on the MicroServers was by adding a RAID card like the P222 (that is the model that comes to mind, but it might be a different one) or by doing raw device mappings. Are you satisfied with the E-2236 Xeon, no issues so far?
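For reference, one way to check whether ESXi will offer the onboard AHCI controller for passthrough is roughly the following. These commands are a sketch under assumptions; the exact device naming and whether the controller is eligible depend on the host.

```shell
# On the ESXi host shell: list PCI devices and look for the SATA/AHCI controller.
lspci | grep -i ahci
esxcli hardware pci list | less

# If the controller appears, passthrough is toggled in the host client under
# Manage > Hardware > PCI Devices, followed by a host reboot; the controller
# can then be added to the VM as a PCI device.
```

Note that if the host itself boots from a disk on that controller, passing it through is not an option, which is why some setups boot ESXi from USB or an add-in card instead.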
  7. Hi UXORA-COM, this is exactly what I need! I have a working Xpenology (which I have been using for years now) on bare metal (Gen8, 16GB, Xeon E3-1230v2) and several test installs on ESXi as VMs pulling content from NFS shares on the other Xpenology and OMV NAS. In the past I have tried virtualising Xpenology on ESXi with VMDK disks, but that was not a success: after a while disks got into an unreadable state etc., so I stopped with that approach. Also, the good thing about virtualisation is that if you want to do an upgrade you can snapshot etc. But with a lot of data on the system that is not an optio
  8. Hi Balrog, I just bought the exact same setup to replace my bare-metal setup on an HPE MicroServer Gen8 with an E3-1230v2 and 16GB RAM. For the Gen10+ I have installed 64GB of ECC Micron DDR4-3200 EUDIMMs (MTA18ASF4G72AZ-3G2B1). How is your setup holding up? I have not installed the new Xeon yet (just got it today); it is now running with the new memory and the Pentium G54XX that came with it. So before I install the CPU I would love to hear what your experience is. Also, did you pass the HDs through to the Xpenology? In the past I ran some tests with VMDKs but got issues with several set
  9. Great topic!! It should be pinned, and there should be one for 6.x as well. I have also updated my 5.x install to the latest update on an HP Gen8 without problems. No reboot yet.
  10. OK, since nobody answered I went ahead and did the upgrade. Works fine!
  11. Hi everybody, I know for sure this question has been asked before, but I have been searching through the forum and can't find it, so I am just going to ask it again. I have been running DSM 5.2-5967 Update 1 for a while now and am quite content with it; I don't want to move to DSM 6 just yet, especially because there is no working VirtualBox package for DSM 6. So for the 5.2 version I am missing one update, DSM 5.2-5967 Update 2, and now I am wondering if I can just safely install this update through the DSM update feature, or if I have to look into a new boot loader and go through safe PAT down
  12. ferno

    DSM 6.1.x Loader

    yes and yes. Don't forget the serial, ID and SATA settings in the grub.cfg and the ramdisk copy.
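The grub.cfg edits mentioned above look roughly like the fragment below. All values are placeholders/assumptions, not from the original post; use the VID/PID of your own USB stick, a generated serial, and your NIC's MAC.

```cfg
# Hypothetical fragment of the DSM 6.1 loader's grub.cfg; values are examples only.
set vid=0x058f              # USB vendor ID of the boot stick
set pid=0x6387              # USB product ID of the boot stick
set sn=XXXXXXXXXXXXX        # generated DSM serial number
set mac1=001132XXXXXX       # MAC address of the first NIC
set sata_args='DiskIdxMap=0C SataPortMap=1'
```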
  13. ferno

    DSM 6.1.x Loader

    Hmm, I can't get it installed on my N40L. The loader gets to booting the kernel, but the server doesn't show up in Synology Assistant and no IP address is being assigned by the router. It seems like the NIC isn't supported or something? I used the img linked on page 10 and the ramdisk file from the page-one EFI img (6688 KB), made the grub config changes, but no joy. Which Gen7 model did you get it running on? Hi, on the N40L with 8 GB RAM. Running fine, even the upgrade to Update 2 worked. The only issue I have is the shutdown that hangs sometimes and the improper-shutdown error in the
  14. ferno

    DSM 6.1.x Loader

    Hi, I have it working on a G7. I did not update to Update 2 though. I did not do anything special other than using the bare-metal mixed boot loader (viewtopic.php?f=2&t=20216&start=90#p73344) and a copy of the fixed ramdisk. I will try the upgrade to Update 2 shortly and report back. OK, just did the update to Update 2 and it went fine; the only issue I have is the server not shutting down properly and the error reporting it after the reboot. Has anyone found a fix for that yet?
  15. ferno

    DSM 6.1.x Loader

    Hi, I have it working on a G7. I did not update to Update 2 though. I did not do anything special other than using the bare-metal mixed boot loader (viewtopic.php?f=2&t=20216&start=90#p73344) and a copy of the fixed ramdisk. I will try the upgrade to Update 2 shortly and report back.
  16. Hi, I have been having this issue also; it's driving me crazy as there is no consistency to it. At a certain point I thought I had figured it out and that the culprit was filling up all 12 bays, but after a while my new setup (with only 7 VMDK files) ended up with a crashed volume as well.
  17. Hi, we have almost the same setup. Do you have the 40-degrees-Celsius bug, where the iLO always shows 40 degrees Celsius for the CPU temperature? I also have some issues, but not the same ones you are experiencing; mine are more related to volumes crashing, even though I have had the same setup running in the past without any issues. Mine started when I upgraded to the new XPEnoboot, but even when I went back I was still having the same issues. BTW, I use VMDK files, not RDM. I am now running on a new host (HP ML310e Gen8 v2) and without RAID, just a couple of big volumes, to see if that
  18. Hi, I just spent half a day solving this issue while pulling out the few hairs I have left, so I thought I'd share this. After installing my Xpenology on an ESXi 6 host everything seemed to go smoothly, but after a while my Package Center stopped working. For all packages, the community list kept loading for ages until it timed out, and Synology packages got the error "Synology server not available". After searching for hours and trying all the fixes I came across that had some resemblance, I finally found it here: https://xpenology.us/forum/general-disc ... nter-error The time
  19. OK, I have tried the expansion to 20 drives. At first I did not manage to get it running, but after some fiddling I got it working. Same issue though: when I use all 20 drives the volume crashes, and when I leave one drive empty everything works fine. Even though I can live with this situation, it seems like there is a real issue with Xpenology, at least when using ESXi and VMDKs.
  20. Hi, good point. Since I still want to have (SHR) RAID in Synology, I opted for a smaller-size VMDK so I won't lose too much space to the parity drive. AFAIK, DSM has no awareness of which physical drives the VMDKs reside on. So, with multiple VMDKs per physical drive, I think you're at risk of SHR reporting that it's resilient when in fact it isn't (because data and parity reside in separate VMDKs on the same physical drive). I think that, in this scenario, you need to mirror the physical drive topology in the logical topology and have a single VMDK per physical drive. Hi,
  21. Hi, good point. Since I still want to have (SHR) RAID in Synology, I opted for a smaller-size VMDK so I won't lose too much space to the parity drive.
  22. How much power does this bad boy draw? Power is one of the things I pay attention to, as it runs 24x7. My ProLiant server, now with 4 3TB WD Red drives and 16GB RAM, draws only 59 watts running ESXi 6 with an Xpenology VM (with loads of services on it), a vCenter VM and one Windows 2012 R2 Server Core edition. I think that is pretty sweet, since my original DS1512+ drew 50 watts with 5 drives but did not have the punch to transcode several streams in Plex, and offered no hypervisor options.
  23. Hi guys, I feel we are getting off track here. I can almost certainly rule out PSU problems: my NAS is running on an HP ProLiant server with a top-notch PSU, and since the problems affect the VM and not the host, I think it is safe to rule that out as the cause. One thing I have to rule out is whether starting with the 12 drives right away causes the same issues; until now I have filled all available slots afterwards. Problems start during the rebuild to expand the volume, or when I start copying large amounts of data to it. Also, I will have to test whether the size of the VMDK files can affect the
  24. Hi, exactly, that was what I experienced: at first it looks good, but when you expand or try to copy large amounts of data etc., the volume crashes and one disk keeps failing. After a reboot the drive is green again, but then the repair fails at 32%.