XPEnology Community

ferno
Member · 44 posts
Everything posted by ferno

  1. ferno

    DSM 6.1.x Loader

    Hi, I have it working on a G7. I did not update to Update 2 yet, though. I did not do anything special other than using the bare-metal mixed boot loader (viewtopic.php?f=2&t=20216&start=90#p73344) and a copy of the fixed ramdisk. I will try the upgrade to Update 2 shortly and report back. OK, I just did the update to Update 2 and it went fine. The only issue I have is the server not shutting down properly and the error reported after the reboot. Has anyone found a fix for that yet?
  2. ferno

    DSM 6.1.x Loader

    Hi, I have it working on a G7. I did not update to Update 2 yet, though. I did not do anything special other than using the bare-metal mixed boot loader (viewtopic.php?f=2&t=20216&start=90#p73344) and a copy of the fixed ramdisk. I will try the upgrade to Update 2 shortly and report back.
  3. Hi, I have been having this issue too, and it is driving me crazy because there is no consistency to it. At a certain point I thought I had figured it out and that the culprit was filling up all 12 bays, but after a while my new setup (with only 7 vmdk files) ended up with a crashed volume as well.
  4. Hi, we have almost the same setup. Do you have the 40 degrees Celsius bug, where iLO always shows 40 degrees Celsius for the CPU temperature? I also have some issues, but not the same ones you are experiencing. Mine are more related to volumes crashing, even though I have had the same setup running in the past without any issues. Mine started when I upgraded to the new XPEnoboot, but even after I went back I kept having the same issues. BTW, I use vmdk files, not RDM. I am now running on a new host (HP ML310e Gen8 v2) without RAID, just a couple of big volumes, to see if that stays intact.
  5. Hi, I just spent half a day solving this issue while pulling out the few hairs I have left, so I thought I would share it. After installing Xpenology on an ESXi 6 host everything seemed to go smoothly, but after a while my Package Center stopped working. For all packages: community packages kept loading for ages until they timed out, and Synology packages gave the error that the Synology server was not available. After searching for hours and trying all the fixes I came across that had some resemblance, I finally found it here: https://xpenology.us/forum/general-disc ... nter-error The time setting on my Synology host was set to 2014. After I fixed that and the regional settings, Package Center started working fine again. Thought I should share, maybe it helps someone out. (A rough clock-drift check is sketched after this post list.)
  6. OK, I have tried the expansion to 20 drives. At first I did not manage to get it running, but after some fiddling I got it going. Same issue though: when I use all 20 drives the volume crashes, and when I leave one drive empty everything works fine. Even though I can live with this situation, it seems like there is a real issue with Xpenology, at least when using ESXi and vmdks.
  7. Hi, good point. [Quoting my earlier post:] Since I still want to have (SHR) RAID in Synology, I opted for a smaller-size VMDK so I won't lose too much space because of the parity drive. [Quoted reply:] AFAIK, DSM has no awareness of which physical drives the vmdks reside on. So, with multiple vmdks per physical drive, I think you're at risk of SHR reporting that it's resilient when in fact it isn't (because data and parity reside in separate vmdks on the same physical drive). I think that, in this scenario, you need to mirror the physical drive topology in the logical topology and have a single vmdk per physical drive. [My reply:] Hi, yes, I am aware of that. I just use the RAID to create a larger volume, and even knowing that, since I do not have real redundancy, I should opt for no spare drive (RAID 0), I thought (and my thought proved right a couple of times) that this way, if I somehow mess up one of the vmdks, I won't be screwed right away. (See the vmdk-topology sketch after this post list.)
  8. Hi, good point. Since I still want to have (SHR) RAID in Synology, I opted for a smaller-size VMDK so I won't lose too much space because of the parity drive.
  9. How much power does this bad boy draw? Power is one of the things I pay attention to, as it runs 24x7. My ProLiant server, now with four 3 TB WD Red drives and 16 GB of RAM, draws only 59 watts running ESXi 6, the Xpenology VM (with loads of services on it), a vCenter VM and one Windows Server 2012 R2 Core edition. I think that is pretty sweet, since my original DS1512+ drew 50 watts with 5 drives but did not have the punch to transcode several Plex streams, and offered no hypervisor options.
  10. Hi guys, I feel we are getting off track here. I can almost certainly rule out PSU problems: my NAS is running on an HP ProLiant server with a top-notch PSU, and since the problems affect the VM and not the host, I think it is safe to rule that out as the cause. One thing I still have to rule out is whether starting with all 12 drives right away causes the same issues; until now I have always filled the remaining slots afterwards. The problems start during the rebuild to expand the volume, or when I start copying large amounts of data to it. I also have to test whether the size of the VMDK files affects the outcome; this second part is harder to test as I do not have much space left on my datastores.
  11. Hi, that is exactly what I experienced: at first it looks good, but when you expand or try to copy large amounts of data, the volume crashes and one disk keeps failing. After a reboot the drive is green again, but then the repair fails after 32%.
  12. Hi, thanks for trying, I really want to find out what is causing this. Maybe it is related to using large drives with all bays filled, so if possible try with 1 TB drives. How did you get 20 bays? The DS3615xs only supports 12; did you manage to get a drive expansion bay virtualised? And if so, how?
  13. Hi Brantje, What do you mean by brand? Hard disks? Well, the hardware is an HP ProLiant MicroServer G8 with a Xeon E3-1230 v2 processor, four 3 TB WD Red drives and 16 GB of RAM. But what is more relevant (I think) is the hypervisor: ESXi 6.0, with the VM set up as follows: NIC = E1000, SCSI = LSI Logic Parallel, memory = 4 GB, CPU = 8 cores, hard disks = VMDK lazy zeroed (tried eager zeroed with the same result). But when I do not fill up all slots it works like a charm.
  14. OK, I thought it would be a good idea to share my experience (and frustration) here to help others and prevent them from losing time. I have been running Xpenology for a while now, first natively on my HP ProLiant MicroServer G7, then on an HP ProLiant Xeon-based server with ESXi 5.5, and now on 6.0 with vmdk files. All was running super stable on 5.2-5592.x until I decided to upgrade to the latest version and a new XPEnoboot; then the problems started. First I ran into issues with the upgrade, and after I was able to fix them I started having storage issues.

      My first assumption was that something got corrupted during the upgrade that went wrong, so I started over after a backup, and this time I decided to fill up all bays to use all the space possible. I had never done that before; I always left some bays free for future expandability, but this time I used them all. And after the install all the problems started: one drive kept crashing, after a reboot the drive was normal again but I had to repair the volume, and during the repair the same drive kept crashing. Shutting down the VM, removing the vmdk and recreating it did not solve the issue; same error.

      So at this point I was pulling the hair out of my head and decided to blame 5644.x for the problems, which was weird since my backup test Xpenology station was running fine on 5644.x. But you probably guessed it already: with 4 bays empty, not filled up like the production one I was trying to set up. So I went back to the previous version I had been using for more than a year without problems, and the only difference this time was a filled drive bay, and once again, after a while, one drive crashed.

      So I started eliminating options: VMware Tools, with and without = same issue; eager-zeroed drives vs lazy-zeroed = same issue; copying data in smaller amounts = same issue; etc. At a certain point it dawned on me that the only big difference was the filled-up drive bays. So I started again with all the options I needed but with one bay free, and BINGO! Xpenology has been running fine for more than 2 weeks now, no problems whatsoever.

      So my conclusion is that there is an issue with XPEnoboot when you use all bays of the DS3615xs. Is anybody else having these issues? I could reproduce them in my test environment, so I am pretty sure this is a real issue, but I would like to hear from other people. Is nobody using all 12 bays? I mean the 12 bays for storage, with the USB XPEnoboot drive hidden as an IDE drive. I would like to hear from people so we can report this as a real issue. Thanks for your patience if you read through the whole thing.
  15. Hi, I have the exact same problem. Smaller-size vmdks seem to work fine, though. This issue is driving me crazy. Can you post the exact setup of your VM, and which ESXi version you are running? Have you found a fix yet? The odd thing is that after a reboot the disk shows as initialised and I can start rebuilding the volume, but at around 35% it crashes again.
  16. Hi NeoID, Yep, that did the trick. The stupid part is that I had the tag in there, but in the wrong place. Now it works as it is supposed to.
  17. Hi everybody, First of all, thanks to Leebeer for the great tutorial. One question though: I have been using this tutorial for a while now and it works perfectly, but one thing that does not seem to work is the hiding of the boot drive. I change the config file as explained and save it, but the first slot still shows as used and available for initialisation; my first actual drive is drive 4. I am not sure what I am doing wrong; the previous vmdk file I downloaded from xpenology.com just worked, without any adaptation, so it is definitely vmdk-related. I also extracted the config file from the working vmdk and it looks the same as the one I am using now (same adaptation). Any tips on what I am doing wrong? (See the port-bitmask sketch after this post list.)
  18. ferno

    Hardware advice

    Hi Xpenology aficionados. I have been running Xpenology for 6 months now and am so pleased with it that I demoted my Synology DS1512+ to backup server status. I am now running the latest Xpenology 5.0 on an HP ProLiant MicroServer ML40 with 8 GB of RAM and five WD Red 3 TB drives. All is well and it runs fine, but I notice it sometimes struggles with Plex Media Server. Plex Media Server is crucial for me, so now I am planning an upgrade. I recently bought an HP MicroServer Gen8 with the Celeron G1610T (similar to a low-power i3), which is faster, but I am planning to upgrade the CPU on this baby to a Xeon E3-1230 V2 and install 16 GB of memory. I want to run ESXi 5.5 on it and run a virtualised Xpenology on top. My question is whether this setup will be powerful enough to run Plex transcoding in a VM. Also, what else will I be missing when running virtualised? I will be using VT-d passthrough for disk access, so I am not expecting a big performance hit there. Will SMART info work with VT-d passthrough in ESXi? Wake-on-LAN is not important, since it is virtualised and I can access the iLO of the server and use VMware Tools, etc. Hope to get your input, dos and don'ts, and tips. Thank you in advance!
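
For the Package Center problem in post 5: a minimal Python sketch that compares the local clock against an NTP server. The server name and the 300-second tolerance are arbitrary choices for illustration, not anything DSM itself uses; the point is only that a clock stuck years in the past is easy to detect, and is enough to break package downloads and certificate checks.

    # Sketch: measure how far the local clock is from NTP time.
    import socket
    import struct
    import time

    NTP_SERVER = "pool.ntp.org"        # any reachable NTP server (arbitrary choice)
    NTP_EPOCH_OFFSET = 2208988800      # seconds between the 1900 and 1970 epochs

    def ntp_time(server=NTP_SERVER):
        """Return current UTC time (Unix seconds) reported by an NTP server."""
        packet = b"\x1b" + 47 * b"\0"  # minimal SNTP client request
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(5)
            sock.sendto(packet, (server, 123))
            data, _ = sock.recvfrom(48)
        seconds_since_1900 = struct.unpack("!I", data[40:44])[0]
        return seconds_since_1900 - NTP_EPOCH_OFFSET

    if __name__ == "__main__":
        drift = abs(time.time() - ntp_time())
        print(f"clock drift: {drift:.0f} seconds")
        if drift > 300:
            print("System clock is far off; fix the date/time (or enable NTP) "
                  "before blaming Package Center.")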
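For the SHR-on-vmdks discussion in post 7: a small Python sketch of the redundancy argument. The vmdk-to-datastore mapping below is made up for illustration (it is not read from ESXi); it only shows how to flag array members whose vmdks sit on the same physical disk, which is exactly the case where SHR looks resilient but is not.

    # Sketch: warn when two array members share one physical drive.
    from collections import defaultdict

    # vmdk name -> physical disk / datastore it lives on (hypothetical values)
    vmdk_to_physical = {
        "disk1.vmdk": "datastore_wd1",
        "disk2.vmdk": "datastore_wd1",   # same spindle as disk1!
        "disk3.vmdk": "datastore_wd2",
        "disk4.vmdk": "datastore_wd3",
    }

    def shared_spindles(mapping):
        """Return physical disks that back more than one array member."""
        by_disk = defaultdict(list)
        for vmdk, physical in mapping.items():
            by_disk[physical].append(vmdk)
        return {disk: members for disk, members in by_disk.items() if len(members) > 1}

    for disk, members in shared_spindles(vmdk_to_physical).items():
        print(f"WARNING: {', '.join(members)} share {disk}; "
              "losing that drive takes out data AND parity at once.")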
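For the boot-drive hiding in post 17 (and the 20-bay expansion in posts 6 and 12): a Python sketch of the slot-bitmask arithmetic. It assumes the commonly described synoinfo.conf mechanism (maxdisks plus the internalportcfg / esataportcfg / usbportcfg hex masks, one bit per slot), which may not match exactly what the Leebeer tutorial patches; treat the key names as an assumption and check them against the loader you actually use.

    # Sketch: compute synoinfo.conf-style slot bitmasks (assumed mechanism).

    def port_mask(slots):
        """Build a hex bitmask where bit N-1 is set for each slot N in `slots`."""
        mask = 0
        for slot in slots:
            mask |= 1 << (slot - 1)
        return hex(mask)

    # Example 1: 12-bay box, but keep slot 1 (the boot vmdk) out of the
    # internal-disk mask so DSM does not offer it for initialisation.
    print("internalportcfg =", port_mask(range(2, 13)))   # slots 2..12 -> 0xffe

    # Example 2: expand to 20 data slots, all internal.
    print("maxdisks = 20, internalportcfg =", port_mask(range(1, 21)))  # 0xfffff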