kasteleman
Posts posted by kasteleman
-
And there are a few people who claim they got ESXi running on it. Unfortunately, no luck on this side. Tried many options, still no go for ESXi!
-
No measurements, but I had some drops in throughput, so I decided to go for the Intel card.
-
I use an Intel gigabit card with a riser card in the mini PCI Express slot.
No problems at all. I've disabled the Realtek card, but if I enable it, both cards are seen.
-
Yes, just followed the instructions on xpenology.nl. After the first install there was a syslog error, but that was gone after the next reboot. Volumes and programs intact! Don't know about WOL, because I don't use it. But I have to say that I'm running XPEnology within VMware Workstation on a Linux box. Even installed the latest update. Scared the hell out of me because my volume was gone, but that was also fixed after a reboot. Keep in mind to use IDE disks, because SCSI isn't supported (yet)...
-
Yes, upgraded a few days ago. No problems. Up and running just fine!
-
Yes indeed.
-
Did you change the default driver that is loaded during boot within Nanoboot? By default, WOL won't work. It's a driver issue with the Realtek network card. It is mentioned somewhere in this topic.
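As an aside, a quick way to test whether the machine wakes at all, independent of which driver Nanoboot loads, is to send a magic packet from another box on the LAN. A minimal Python sketch (the MAC address in the usage note is a placeholder, not a real device):

```python
import socket

def build_magic_packet(mac: str) -> bytes:
    # A WOL magic packet is 6 bytes of 0xFF followed by
    # the target MAC address repeated 16 times (102 bytes total).
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    return b"\xff" * 6 + mac_bytes * 16

def send_magic_packet(mac: str, broadcast: str = "255.255.255.255", port: int = 9) -> None:
    # Magic packets are conventionally sent as a UDP broadcast to port 9 (or 7).
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(build_magic_packet(mac), (broadcast, port))
```

`send_magic_packet("aa:bb:cc:dd:ee:ff")` would then broadcast the packet; the NIC still has to have WOL enabled in the BIOS and in the loaded driver for the box to actually wake.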
-
Read the usual upgrade procedure for Nanoboot in this forum. I used the exact same one. First download the upgrade file, but don't install it. Then you have to execute some command with sed in it, and then upgrade. As said before: read the upgrade procedure for Nanoboot.
-
Used the same procedure as with Nanoboot. Use the sed part etc. Did not test update 5 and above!
-
Bare metal? What disks are you using?
-
What is the power consumption (bare metal / ESXI) using the Asrock Q1900?
I'm going to build a second 24/7 XPENology server.
Currently I'm wondering whether a Celeron J1900 will save some money compared to a Haswell Celeron or not.
The J1900 Bay Trail Intel chipset can't (yet) be used for VMware ESX-based installations, as the setup crashes.
My suggestion is to use a J1900 and go for either Hyper-V or bare metal. You can also switch along the way. Why the J1900? Because it has a strong quad-core CPU that can handle and parallelize a lot of tasks, and because it has a TDP of 10 W.
It's silent, strong and stingy with electrical power.
There is a big difference between WD Red disks and other disks when talking about power consumption. Choose Red disks. You can also go for SSDs, as their power consumption is near zero in comparison to a WD Red drive, but they're a bit pricey for their size.
Take into consideration that (in theory) each PCI slot consumes on average between 10 and 15 W of power (just with something plugged in). If you want a 24/7 XPEnology box that consumes few watts, don't use PCI slots.
For info: I use an HP NIC in the PCIe slot and it raises the power consumption by 1 watt. But a TV card or other card will use more!
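The wattage figures above translate into yearly energy use with simple arithmetic. A minimal Python sketch (the 10 W and 25 W inputs are just the example figures from this post, not measurements):

```python
def yearly_kwh(watts: float, hours_per_day: float = 24.0) -> float:
    # kWh per year = W * hours/day * days/year / 1000
    return watts * hours_per_day * 365 / 1000.0

# J1900 board at its ~10 W TDP, running 24/7:
base = yearly_kwh(10)       # 87.6 kWh/year
# Same board plus a ~15 W PCI card:
with_card = yearly_kwh(25)  # 219.0 kWh/year
```

Multiply by your local price per kWh to see what an extra PCI card costs over a year.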
-
1.5 TB, but a mix of ISOs with sizes up to 12 GB and small files (photos) etc. Disks are 3 years old and SATA 300.
I've switched to the Intel card because of intermittent drops in network speed. Did not measure download speed!
-
From Ubuntu to XPEnology with rsync: +/- 40 MB/s. But that's with only one SATA disk, so with a multiple-disk configuration it will be higher if the uploading machine can handle it. I have to mention that I'm using an HP NIC with an Intel gigabit chip and disabled the onboard one.
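If you want actual numbers instead of a rough "+/- 40", you can time a transfer yourself. A minimal Python sketch that measures the throughput of a file copy (the paths are placeholders; for a NAS you would copy to the mounted share):

```python
import os
import shutil
import time

def copy_throughput_mb_s(src: str, dst: str) -> float:
    """Copy src to dst and return the observed rate in MB/s."""
    size = os.path.getsize(src)
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    return size / elapsed / 1_000_000
```

Note that for a local copy this also measures disk speed on both ends, so run it against the network mount you actually care about.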
-
That was also on my mind, but XBMC in a chroot environment complained about an X server or something like that. Could not figure out how to do it, so I aborted that project.
-
Running XBMC and XPEnology simultaneously. Installed Linux on an SSD with XBMC standalone. XPEnology, however, is running in VMware Workstation. The disk (SATA WD) is passed directly through to the virtual machine running XPEnology. The disk was first running on the same hardware box with a gnoboot USB boot disk. Migration was plug and play; all settings remained, including my data. However, I have no disk spindown active. It does not work because of the installed programs within XPEnology (syslog, database, webserver etc.). Configured the VM to run with 2 CPUs. In XBMC I even connect to the XPEnology share for my content.
-
Did not find the answer for that, so I'm still using gnoboot. Up and running with update 4!
-
If that does not help, boot from the USB again, but in install/upgrade mode. Do the same first steps as for a normal install and choose migration. Then pay attention to the next step when choosing the option for the volume. Select the one that suggests you are moving your volume from one Synology to a new Synology, thus keeping your data and programs. After a reboot you are good to go again and all the programs etc. are still there, but without update 4. That worked like a charm for me. I had the same issue even after multiple reboots.
-
It all works flawlessly for me: VMware Workstation 10 inside Mint 17. The hard disk is attached via SCSI, although the disk hangs off the SATA controller. But make sure you pass through the entire disk, and the whole thing only works if you don't run the VM as a "shared" VM.
-
Had exactly the same problem. In the end I chose upgrade/install mode at boot, then selected "Migration" instead of a fresh installation. A few minutes later everything was under control again...
-
HDDs can be passed through in Workstation 10 by presenting the HD as SCSI. However, spindown does not work. I don't know whether this also works under Microsoft Windows (currently I have Linux installed with VMware Workstation; maybe use hdparm?).
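Should you want to experiment with hdparm anyway, note that its `-S` flag does not take minutes directly. A small Python helper to compute the raw value (a hypothetical helper for illustration, not part of any XPEnology tooling):

```python
def hdparm_spindown_value(minutes: float) -> int:
    """Map a desired idle timeout to the raw value `hdparm -S` expects.

    0 disables spindown; values 1-240 count in units of 5 seconds
    (5 s to 20 min); values 241-251 count in units of 30 minutes
    (30 min to 5.5 h)."""
    seconds = minutes * 60
    if seconds <= 0:
        return 0
    if seconds <= 20 * 60:
        return max(1, round(seconds / 5))
    return 240 + min(11, max(1, round(seconds / 1800)))
```

For example, `hdparm -S 120 /dev/sdX` would set a 10-minute timeout, though any DSM service that keeps writing (syslog, database, webserver) will wake the disk again right away.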
-
WOL always remains tricky. Realtek LAN adapters, for example, are problematic when using Nanoboot, while gnoboot works flawlessly for me. The difference is in the driver/firmware.
-
Awesome, will gnoboot boot 4493 update 3 off a USB, do you know?
Yes it will. Running it as I write this message.
-
+1 for that in combination with an ASRock Q1900-ITX. Would love it!
-
Is that with real QNAP hardware?
Asrock Q1900-ITX/Q1900DC-ITX
in DSM 5.2 and earlier (Legacy)
Option 2 did not work for me. Unfortunately I don't have a PC to try option 1. Well, that is, no PC that is ESXi compatible...
Anybody with a working image wanting to share?