Need advice for a 1000€ config



Hello,

 

I'm looking to build a home server under XPEnology with certain criteria, but I'm pretty bad at hardware, so I'd like some help please.
It won't be storage-oriented but rather application-oriented: mainly running MySQL databases and several other applications in Docker (so not really a NAS).
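For context, the "MySQL plus other applications in Docker" workload could look like this minimal Compose sketch (the image tag, password, and volume name are illustrative placeholders, not details from this thread; DSM's Docker package can run the same containers):

```yaml
# docker-compose.yml sketch
version: "3"
services:
  db:
    image: mariadb:10.5               # any MySQL-compatible image works
    restart: unless-stopped
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder; use a real secret
    volumes:
      - dbdata:/var/lib/mysql         # named volume so data survives upgrades
volumes:
  dbdata:
```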

 

 

-I'm looking for a Micro- or Mini-ITX motherboard so that it fits in a small case like the InWin IW-MS04 Mini; I don't want a tower chassis.

(Maybe one of ASRock Industrial's Mini-ITX boards?)


 

-A recent and powerful single-socket Xeon CPU less than 2 years old.

 

-32 GB DDR4 RAM (ECC?)

 

-2x 128 GB NVMe SSD storage in RAID 1

 

-2x 1 Gbps minimum LAN ports (one LAN for the network and the second for high availability with a second identical setup)

 

 

My budget would be about 1000€ / $1200 (x2 for high availability).

 

Do you think that such a configuration could exist while being compatible with Xpenology (DSM 6.2.3), and if so, do you have any references?

 

PS: If you're wondering why I don't just install a Debian server if I want an application-oriented setup rather than a storage-oriented one, it's because I'm just too much of a fan of the web interface and "simplicity" of DSM 😅

1 hour ago, Avogadro said:

-I'm looking for a Micro- or Mini-ITX motherboard so that it fits in a small case like the InWin IW-MS04 Mini; I don't want a tower chassis.

Mini-ITX brings some limitations when it comes to expansion, as it doesn't have many PCIe slots (usually just one).

 

Did you read about the general limitations of XPEnology?

https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/

 

1 hour ago, Avogadro said:

-32 GB DDR4 RAM (ECC?)

When doing Docker and VMs, RAM is more important. Very few people use ECC, as most build low-cost systems with desktop hardware that doesn't support it; it can't hurt to have it when RAM matters more.

Is it worth the money? That depends on how much more it costs and how you see it.

 

2 hours ago, Avogadro said:

-A recent and powerful single-socket Xeon CPU less than 2 years old.

-2x 128 GB NVMe SSD storage in RAID 1

 

Here we have the first problem: if you look at the limits, you will see that only the 918+ supports NVMe (and only as cache, with manual patching), but its kernel is limited to 8 threads.

Even the 3617 "only" has 16 threads, so that's at most an 8-core with HT, or more cores with HT disabled (if the BIOS can do that), like when having a 12-core with HT.

 

You might be better off with ESXi or Proxmox and a DSM VM; that way you can handle resources more flexibly (CPU power for other VMs), and the NVMe drives can be made into virtual SSDs, which 3617 can handle as normal drives or as cache. But you lose a lot of simplicity compared to a bare-metal install.

 

2 hours ago, Avogadro said:

-2x 1 Gbps minimum LAN ports (one LAN for the network and the second for high availability with a second identical setup)

 

That sounds a little low for HA; consult Synology's design guides on that. Maybe plan an added 10G NIC for the PCIe slot (x4 or better) and use a direct connection between the two HA units over the 10G link (you don't need a switch just to connect the two units).

 

Also check the forum whether HA is available without a valid serial.

XPEnology is still a hacked DSM appliance, and some things don't work without "tweaking". You can lose functions when updating if Synology changes things, and if you can't install all security updates there are risks involved: for example, you can't install 6.2.4, which already contains new security fixes. It's also easy to "semi-brick" or damage the system on updates when doing things wrong (like not disabling the write cache before updates when using 918+ with NVMe cache, or installing "too new" updates like 6.2.4), especially when people who aren't aware of these specialties handle the system. Keep that in mind.

 

2 hours ago, Avogadro said:

PS: If you're wondering why I don't just install a Debian server if I want an application-oriented setup rather than a storage-oriented one, it's because I'm just too much of a fan of the web interface and "simplicity" of DSM

Have you given OpenMediaVault a thought? (I guess it has no equivalent to HA, but a web GUI, NAS and Docker will be there too.)

 

I don't know how close to production that system is supposed to be, but it can be riskier than you expect (if you don't have long experience with XPEnology).

 


Thank you for helping me, these comments are very valuable to me.

 

49 minutes ago, IG-88 said:

you did read about the general limitations of xpenology?

 

I read them, yes, but I didn't understand some things, like the fact that NVMe only works for cache. And for the max CPU threads, for example, I had told myself that I just had to pick a CPU with 8 or 16 threads, but I didn't understand all the subtleties.

 

49 minutes ago, IG-88 said:

that sounds a little low for HA, consult synologys design guides for that, maybe plan a added 10G nic for the pcie slot

You are probably right. My current setup works in high availability (without a valid serial) with a 1 Gbps USB adapter, but it is not very stable; from time to time there are desynchronizations, so 10G is certainly better.

 

 

49 minutes ago, IG-88 said:

i don't know how close to production that system is supposed to be but it can be riskier to do that then you expect it to be (if you dont have longer experience with xpenology)

I have been experimenting with XPEnology for several years now (I am far from being an expert), but I have a setup that has worked quite well in production for a while on low-cost PCs (2 small NUCs in high availability).

 

To sum up, if I want something "powerful" I should drop XPEnology? I can manage without it if it's really complicated for my use case, but if I can keep the DSM interface I'm willing to make some compromises. (I will check OpenMediaVault, however.)

9 minutes ago, Avogadro said:

To sum up, if I want something "powerful" I should drop XPEnology? I can manage without it if it's really complicated for my use case, but if I can keep the DSM interface I'm willing to make some compromises.

If it's more about RAM and CPU power, you can use 3617 bare-metal and just use normal SSDs as data volumes (RAID F1 mode) or as cache drives instead of NVMe.

Using an 8-core with HT, or a beefier one with 12 or 16 cores without HT (disabled in the BIOS), is no problem.

Whether SATA SSDs in RAID 1, RAID 10 or RAID F1 (the SSD equivalent of RAID 5) will be OK depends on your requirements.

So 3617 with SATA SSDs might be a simple-to-handle solution (you can still add normal HDDs for slower but bigger storage as long as you have SATA ports; if the 10G NIC blocks your single PCIe slot, you can't extend for more SATA ports). Security doesn't seem to be your concern when you still run a 5.2 system, and as long as you are doing operations and maintenance yourself (and/or write good documentation) it should be fine; your 5.2 did not get updated to 6.x by accident, so no problem from that side, I guess.


OK, so my biggest problem is my choice of NVMe, if I understand correctly. I can forget about NVMe if it complicates things too much.
If I choose the best SATA 3 SSD that I can afford, that should still be good, I think. (The difference in performance for MariaDB should not be that big... I hope 😅)

Otherwise, would an M.2 SATA SSD (without NVMe) work too?

 

For the motherboard and Xeon CPU, do you have some suggestions? (Within my total budget of about 1000€.) This compatibility stuff worries me.


As IG-88 said, it would be better to use virtualization. For example, I am using Unraid, in which you can create VMs and install Docker images if you mainly use your server for productivity. If you like, you can set up XPEnology as a VM on Unraid if you want to use DSM's Docker... Using virtualization, I believe you are more flexible and not bound to all those limitations of XPEnology...


 

19 minutes ago, gadreel said:

As IG-88 said, it would be better to use virtualization. [...]

 

Yes, why not, but as I have never tested this, I have some questions:

In terms of performance and reliability, do I lose anything if I run XPEnology in a VM, or is it the same?
And if my Unraid machine (or any other OS) burns out, won't that be a problem for DSM high availability?

No firewall problems, or with the nginx reverse proxy, certificates, DNS server, etc.?


I am not that techy to answer all your questions, but all I can say is that if you run it as a VM you might lose maybe 5-10% performance. If you ask other people around here, they might say VM and bare-metal have the same performance, or that you won't notice any difference.

 

I do not want to mess with your plan, but with 1000€ you might be able to get better hardware if you decide to go with virtualization. For example, an AMD 3700X 65W (8 cores/16 threads) in the EU will cost around 300€, and a Micro-ATX board with more PCIe slots gives you the flexibility to add an LSI SAS controller for additional drives and pass them directly through to your VMs (such as XPEnology), with no need to create vDisks.

 

Like I said, I am not that techy and I don't know the answer to your high-availability question; maybe this is something you can ask the Unraid people or others around here.


Along with its flexibility, the hypervisor adds a certain amount of complexity; if you already have enough experience with a certain hypervisor, it will not cost too much time.

The hypervisor is also an instance that needs updates/maintenance in addition to the system that is the real thing; that complexity can add up to the point where you have more work with the hypervisor than with the DSM VM.

If it's really about having a single-purpose DSM install without bigger plans to extend to additional VMs (DSM can do Docker, and VMs with VMM, on a smaller scale), and if it's "put it in place and forget about it" rather than tinkering all the time, then carefully chosen hardware for two bare-metal systems might be the better choice.

The sole purpose of having a DSM (Synology) appliance is often to not care too much about it once it's set up and running; adding a hypervisor adds complexity in a way that can be a bother.

I don't know if the gain from NVMe in a VM (as a virtual SSD) is so much that it's worth the effort.

Also, when thinking about bare-metal with NVMe (918+, 8-thread limit), DSM needs two NVMe drives to use them as read/write cache, and most Mini-ITX boards will have just one NVMe slot.

 

21 hours ago, Avogadro said:

Otherwise, would an M.2 SATA SSD (without NVMe) work too?

It will be the same as a normally connected SSD, but I would not do that, because it is much more difficult to handle and to replace than just normal 2.5" SATA drives.

It's easier to have 2 or 4 drives of the same type and one (cold) spare: if one fails, it's just one part to replace. If it's M.2 SATA as well as normal SATA, then it's two different spare parts.

 

 

One point we have not touched on yet is power consumption. If it's 2x NUC at the moment, it will not draw much power; oversized hardware may result in a nice additional fee for power, so it might be worth a thought and a few minutes with a calculator to see what it will cost in a year.
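That back-of-the-envelope calculation looks like this (the 65 W average draw and 0.17 EUR/kWh price below are illustrative assumptions, not figures from this thread):

```shell
# cost per year = average draw (W) * 8760 h / 1000 * price per kWh
awk -v watts=65 -v price=0.17 'BEGIN {
  kwh = watts * 8760 / 1000    # kWh consumed per always-on year
  printf "%.0f kWh/year, about %.2f EUR/year\n", kwh, kwh * price
}'
```

Two always-on boxes double that, which is why an oversized Xeon can quietly cost more per year than the price gap between two CPU choices.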

 

 

17 hours ago, Avogadro said:

2x Samsung SSD 870 EVO 250 GB (RAID 1)

That's a TLC SSD (and OK). If you look at different models, don't use QLC; it's way slower once the drive's internal cache is exceeded.

Also, it might be better to use 4 drives in RAID 10 for more throughput (but you can start with 2 drives in RAID 1 and extend if you feel you want more speed).

 

BTW, my last build was Mini-ITX and I changed to Micro-ATX last year; way more options with its 4 PCIe slots, and there are also models with 2x NVMe (if you want to keep that an option).

 


Everything you guys say is very interesting, and it makes me think...
I also read on the forum what was said about XPEnology in a VM, and it obviously simplifies some things, but not necessarily everything (high availability, for example).
On the other hand, @IG-88, you told me about OpenMediaVault, so I had a quick look at it since yesterday, and even if it doesn't seem to be on the same level as DSM in terms of ease of use and features, it could still do the job, I think. (I have to test it for real now.)
And with an AMD configuration as proposed by @gadreel I could have a more powerful and easier-to-manage configuration (anyone know how a Ryzen 7 compares against a Xeon E-2200 for server use?). Unfortunately I lose my dear DSM, but I can't have everything.

 

22 minutes ago, IG-88 said:

most Mini-ITX boards will have just one NVMe slot

 

That's right, I found only a few with 2x M.2 slots. Otherwise, I would need a PCIe-to-M.2 adapter.

 

 

25 minutes ago, IG-88 said:

One point we have not touched on yet is power consumption. If it's 2x NUC at the moment, it will not draw much power; oversized hardware may result in a nice additional fee for power, so it might be worth a thought and a few minutes with a calculator to see what it will cost in a year.

 

Indeed, these are things that are too often forgotten, but fortunately my electricity is not very expensive and is low-carbon (nuclear).

 

 

29 minutes ago, IG-88 said:

BTW, my last build was Mini-ITX and I changed to Micro-ATX last year; way more options with its 4 PCIe slots, and there are also models with 2x NVMe (if you want to keep that an option).

 

I know, but "size" is an important criterion and the one on which I have the least flexibility for this configuration. It should be kept as small as possible.

BTW, thanks for the QLC tip, I was totally unaware of this.


About possible hardware:

GIGABYTE W480M Vision W (Micro-ATX, 2x PCIe x16-size slots (1x16, 1x8), 2x M.2 NVMe, 8x SATA)

Intel Core i5-10500T, 6C/12T (TDP 35W)

The CPU can be different; this example is low-power and affordable at about 200 bucks.

The board keeps everything open, as it can take NVMe and more disks (and 8x SATA is a comfy start).

It might be overkill in options; if you go with Mini-ITX you would have fewer options, but it might be good enough.

 

The only negative with the board would be that you can't use the 2nd NIC, as the 2.5 GBit NIC from Intel has no driver outside kernel 5.x, and DSM is based on kernels 3.10 and 4.4.

But you are already planning a 10G NIC anyway.

 

An ASUS XG-C100C will work, but if the systems are not too far apart, SFP+ might be a better choice (cheap DAC cables up to 7.5 m, and affordable 4- or 8-port switches).

(SFP+ would be my choice now because of the switch option; multiport 10G RJ45 is way more expensive and power hungry, and SFP+ also has lower latency.)

4 minutes ago, Avogadro said:

[...] OpenMediaVault, so I had a quick look at it since yesterday, and even if it doesn't seem to be on the same level as DSM in terms of ease of use and features, it could still do the job, I think. (I have to test it for real now.)

It's a full Linux, and it's open to much more hardware than DSM.

 

5 minutes ago, Avogadro said:

Unfortunately I lose my dear DSM but I can't have everything.

No, DSM works with Ryzen too; you would just need an additional GPU.

 

7 minutes ago, Avogadro said:

That's right, I found only a few with 2x M.2 slots. Otherwise, I would need a PCIe-to-M.2 adapter.

When you already have a 10G NIC in the one slot...

That's one of the reasons to think about Micro-ATX: more options if the new build still needs something more after a while.

Also, going hypervisor or bare-metal is not final and can be changed if needed. When the disks are under direct control of DSM (like a controller passed to the VM, or RDM mapping), it's still possible to just remove the hypervisor and use the whole install (disks) bare-metal without reinstalling DSM or losing the data on the disks (or the other way around, from bare-metal to hypervisor).

 

18 minutes ago, Avogadro said:

I know, but "size" is an important criterion and the one on which I have the least flexibility for this configuration. It should be kept as small as possible.

Then Mini-ITX is the better choice; just choose more wisely before buying.

BTW, there were more expensive server Mini-ITX boards with a 10G NIC onboard; that way you can keep the PCIe slot open.

But it's over 2 years since I looked into that, so it might be outdated (the market moves, and compact servers might not be that interesting anymore).

 


If you decide to go with bare-metal DSM, read carefully what @IG-88 is saying, because not all hardware works out of the box... For example, new Ethernet chips are not compatible at all, and other hardware might require extra effort from your end to make it work. What I said about Ryzen etc. applies if you follow the virtualization road. As IG-88 said, Ryzen CPUs like the 3XXX or 5XXX do not have an integrated GPU, so if in the future you are considering installing Plex and you need hardware transcoding, this can be a lot of hassle to set up on DSM.

As IG-88 mentioned, it all comes down to what YOU want, based on your needs. A hypervisor is flexible; you can install it on almost any hardware you want, but you have to maintain it, updates etc. If you decide to go with bare-metal DSM, you are bound to a range of hardware and the capabilities DSM has to offer...

 

This is my setup and this is what I have installed:

I have a Ryzen9 3900X 12C/24T, 64GB Ram Non-ECC on an ASUS B550M (Micro ATX) with 2 x 6TB Red, 2 x 500GB 970 EVO (NVME), 2 x 1TB 870 QVO (SSD), 3 x 480GB SSD.

 

On the Unraid's docker I installed Plex and Qbittorent.

I have 3 VMs on my Unraid:

1) A Windows Server 2016 VM, which is for development; I remotely connect to it and do all my coding (8 vCores, 16 GB memory).

2) Another similar VM on which I installed MS SQL Server 2016 (2 vCores, 8 GB memory).

3) Lastly, a VM with DS918+ on which I store my personal files (photos/videos). I create LUN drives for the development server I mentioned above, I use DSM's Docker to install PostgreSQL, MariaDB, Redis, etc., I use DSM's Synology Directory Server to authenticate users, plus the VPN server and DNS server, and I am using Active Backup for Business to back up my VMs and my personal gaming PC (4 vCores and 8 GB RAM).

 

Because I have a Micro-ATX motherboard with 4x PCI Express slots, I installed an LSI SAS controller in order to add all those drives; the 2x 6TB Red, 2x 1TB 870 QVO, and 1x 480GB SSD are passed through to DSM.

 

This is what suits me...

 


Your comments are making me rethink my whole architecture.
Since high availability and other things might be complicated on XPEnology (even in a VM), I'm thinking about using Portainer on OpenMediaVault in a cluster instead of Docker in DSM high availability. The 2 machines would be at two different physical locations (my home / my office), plus a VPS for load balancing.

 

- ASUS ROG STRIX X570-I (2x M.2 Gen4, 1x 1 Gbps LAN, Mini-ITX)
- AMD Ryzen 7 3800X (on sale at my place)
- 2x Samsung SSD 980 PRO M.2 PCIe NVMe 250 GB (RAID)
- 2x 16 GB Corsair Vengeance DDR4 3200 MHz CL16 (32 GB total, non-ECC, unbuffered)

 

I haven't found an AMD AM4 Mini-ITX motherboard (I'm restricted to the Mini-ITX format because of very limited space) with 2x M.2 Gen4 that accepts ECC RAM. Otherwise there is Intel, but as you pointed out, @gadreel, for the same price AMD seems much more interesting; it looks a lot like a gamer setup, though.

What do you think about this idea and this configuration?


Well, regarding the AMD part: at the moment what AMD offers is much better than Intel, and specifically for 2 machines that will be online 24/7, power consumption is very important. Intel's 11th-generation CPU (Core i9-11900K) can reach almost 300W, which is a lot, when AMD with more cores will use half of that. The 3800X is 105W; the 3700X and below are 65W, if that matters to you.

 

As for the motherboard, I do not know if you can find an AMD ITX board that supports 2x NVMe Gen 4 and ECC memory. I am quoting something I remember from an article:

Quote

In a recent AMA on Reddit, AMD confirmed that their new Ryzen CPUs support ECC error-correcting memory, allowing motherboards to support the memory standard despite the fact that AMD does not officially support/verify the functionality.

 

As for Portainer and OpenMediaVault, I do not really know them :(.


Right, it seems that ECC memory is compatible, but not officially. And the ROG STRIX B550-I motherboard is marked compatible with some CPUs on the ASUS website. Since the Ryzen 7 3700X/3800X are also ECC-unbuffered compatible, I modified my candidate configuration a bit.

 

- ASUS ROG STRIX B550-I (2x M.2 Gen4, 1x 2.5 Gbps LAN, Mini-ITX) (instead of the X570-I)

- AMD Ryzen 7 3800X (on sale at my place)

- 2x Samsung SSD 980 PRO M.2 PCIe NVMe 250 GB (RAID)

- 2x 16 GB Micron/Crucial Enterprise DDR4 2666 MT/s CL19 288-pin (ECC, unbuffered) (instead of the non-ECC Corsair Vengeance)

 

 

On 3/31/2021 at 7:04 AM, gadreel said:

I have a Ryzen9 3900X 12C/24T, 64GB Ram Non-ECC on an ASUS B550M (Micro ATX) with 2 x 6TB Red, 2 x 500GB 970 EVO (NVME), 2 x 1TB 870 QVO (SSD), 3 x 480GB SSD. [...]

 

Hello,

You used "2 x 500GB 970 EVO (NVME)"... does your DSM see that storage, for use as SSD cache for example?

I have two 2280 M.2 PCI Express (up to Gen3 x4) 500 GB ADATA modules, but my system does not see them!!

Do I need to do something special?

Thanks,
On 3/31/2021 at 9:29 PM, Avogadro said:

- ASUS ROG STRIX B550-I (2x M.2 Gen4, 1x 2.5 Gbps LAN, Mini-ITX)

A problematic choice, as it uses the "Intel I225-V 2.5Gb Ethernet", and that "igc" driver is only part of kernel 5.x; there is no standalone driver for older kernels (and Intel has stated they have no intention of making one). So it's just dead weight, and you'd need a PCIe NIC card for DSM.

 

15 minutes ago, mbarac said:

You used "2 x 500GB 970 EVO (NVME)"... does your DSM see that storage, for use as SSD cache for example?

The problem with NVMe is that it can only be used as cache in the normal configuration; the easiest way is to use a hypervisor and make the drives available as normal SSDs in a VM.

But you can't have a DSM system with just two NVMe drives; you would not even be able to install DSM, as cache drives don't hold the system and swap partitions.

Just using them as cache needs patching, and the cache will be "lost" on some updates, so it needs to be disabled before updates:

https://xpenology.com/forum/topic/13342-nvme-cache-support/page/3/

 


My build, for what it's worth:

Silverstone DS380 (you will need to do the airflow mod; easy if you have a 3D printer)
ASRock Rack C246 WSI (make sure you get an OCuLink-to-4x-SATA cable too)
Intel i3-9100
2x 16 GB unbuffered ECC
Silverstone 450W SFX
Samsung FIT thumb drive (fits in the front of the case with the door closed)
Also got some of the thin SATA cables
Bunch of shucked 14TB drives, most from Seagate but some from WD



Some points of note, in no particular order:
- Case is a tight fit, but it's a fit
- Case has air filters that work reasonably well
- Drive temps are kept in the 40-50°C range. Other cases seem to keep cooler, but I've been unable to locate any data that indicates cooler is better within the spec, so YMMV. I just chose the case because it's small for being able to fit 8+4 drives
- Mini-ITX will limit expansion. I'm happy with the form factor, but in hindsight I should have picked a board with 10GbE built in
- Stock Intel cooler fits
- There is a known issue with the case regarding airflow. tl;dr: you need to put an air barrier between the drive cage and the fans. I 3D-printed mine, but you could get the same effect with some card stock
- Turbo on the CPU works, though it's not accurately reported in DSM. Geekbench scores reflect appropriate turbo frequencies
- Dual 1G LAN works with 1.04b / 6.2.3 + updated lzma, on 918+
- You cannot get the MAC addresses from the BIOS. You'll need to boot a Linux USB or similar and pull the MAC addresses from there
- Board does not come with a beeper, and a third-party one doesn't seem to work out of the box with DSM. Need to look into why, but I haven't gotten around to it
- Plex hardware transcoding does work
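The MAC-address point above boils down to a one-liner from any Linux live USB (interface names vary per machine; this just reads the kernel's sysfs entries):

```shell
# list the MAC address of every network interface the kernel sees
cat /sys/class/net/*/address
# a single known interface can be read the same way, e.g. the loopback
cat /sys/class/net/lo/address
```

Note the addresses down before installing the loader, since DSM's own UI won't help you recover them later.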

