XPEnology Community

Showing results for tags 'esxi', 'serial port' or 'vm'.

  1. I'm out of ideas for how to debug this, but perhaps someone else knows what's going on. This is probably not an XPEnology-specific issue, but I'm sure others here have experience using the LSI 9211-8i and ESXi together. I recently did a new build for my NAS with the following hardware:
     • Ryzen 5 1600
     • ASRock B350 Pro4 motherboard
     • WD Green M.2 SATA SSD (used as VM datastore only)
     • LSI 9211-8i SAS controller
     I am running ESXi 6.5u1 on this setup, which was perfectly stable until I added the LSI 9211-8i. Since adding this card, the system crashes if left idle for more than 30 minutes. Timeline of events:
     • Initially used an LSI 3081E-R controller in pass-through mode (with test drives, since I didn't realise >2TB drives were unsupported when I picked up the card for $30). Online for several weeks without issue until my LSI 9211-8i arrived.
     • Added the LSI 9211-8i to the DSM VM in pass-through mode with 4x 4TB WD Red HDDs. Ran a parity check on the new SHR array, which took about 8 hours.
     • Offline: less than an hour after the parity check completed, the system hard-locked. Thought it might be a one-off incident. Rebooted the system and ran another parity check to make sure everything was OK.
     • Offline: again, less than an hour after the parity check completed, the system hard-locked.
     • Updated the firmware on the LSI 9211-8i to v20 (IT mode). Thinking the problem was solved now that I was using ESXi-supported firmware, I started a data transfer of 8TB; it took about 18 hours and the system stayed online the entire time without a hitch. After the transfer finished, feeling pretty confident the firmware upgrade had worked, I spent a few hours doing benchmarks and setting things up. The system was online for over 24 hours.
     • Offline: an hour or so after I stopped 'doing stuff' with the system, it hard-locked again.
     • Thinking the issue might be IOMMU pass-through support, I disabled pass-through mode, installed the official driver from the VMware site for the LSI 9211-8i, and mounted the disks into DSM using ESXi raw disk mappings.
     • Offline: an hour or so after I stopped 'doing stuff' with the system, it hard-locked again.
     • Realised it only crashes when the system is idle. Looked into power-saving modes: disabled "C-State" and "Cool 'n' Quiet" in the host BIOS, and also disabled all power-saving features in ESXi.
     • Offline: an hour or so after I stopped 'doing stuff' with the system, it hard-locked again.
     • This morning I hooked up a screen to the host in an attempt to see whether there was a PSOD or any message before it locks. There was not - just a black screen. The system was not responsive to a short press of the power button or to the keyboard.
     • Offline: less than 40 minutes of sitting idle from boot, and it hard-locked again.
     In addition to the steps above, I have tried to find a reason for the lockups in the ESXi logs. There is seemingly nothing: one minute it's working fine, then nothing until it's booting up again after I power-cycle the system. There are no core dumps, and each time nothing was displayed on the host's screen. It just seems odd that the system would crash only when sitting idle - it would make more sense if it were crashing under load. These issues only started to occur once I added the LSI 9211-8i to the mix; the LSI 3081E-R did not cause them. Do I just have a dodgy card? I don't want to buy another LSI 9211-8i if I'm going to have the same issues. Is there another card I should get instead? Are there any other settings I should try to make this system stable?
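When a host dies with no PSOD and no core dump, about the only clues left are the driver and PCI state recorded before the crash. A hedged sketch of some read-only ESXi-shell checks, assuming ESXi 6.5 with the stock mpt2sas driver claiming the 9211-8i (the grep patterns are illustrative, not exhaustive):

```shell
# Run from the ESXi shell (SSH enabled). All commands are read-only.

# Which driver has claimed the SAS controller?
esxcli storage core adapter list

# Is the card still visible on the PCI bus, and is passthrough flagged?
esxcli hardware pci list | grep -B 2 -A 10 -i 'LSI'

# Scan the kernel log for controller resets or IOMMU faults logged
# before the last lockup (note that vmkernel.log rotates).
grep -iE 'mpt2sas|iommu|reset' /var/log/vmkernel.log | tail -n 50
```

If the last vmkernel.log entries before each lockup are routine, that points more toward a hardware or BIOS-level hang (power delivery, PCIe power management) than a driver crash.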
  2. Hi everyone. I am a little bit lost reading all the forums and tutorials, where people take certain assumptions for granted and don't bother to explain the basics. So first let me explain where I am right now and how I got here. Initially I was considering buying a NAS device off the shelf.
     #SYNOLOGY At first I was considering buying a Synology NAS. What I liked about it was the simplicity of DSM, the flexibility of SHR, and the idea of a private cloud available from everywhere, with mobile apps to support it. I was considering a 4+ bay NAS, as anything less seems like a huge sacrifice in the ratio of drives used for redundancy to drives available for use. What I didn't like was the price tag on those multi-bay devices, and the fact that to utilise the power of multiple HDDs in RAID you would need 2-4 aggregated wired network connections, which would require further investment in a special network switch and infrastructure, a special card for the PC, and multi-wire absurdity all over the place. So this is when I started to consider another brand.
     #QNAP I liked QNAP especially for the idea of adding DAS functionality on top of NAS functionality similar to what Synology offers. Unfortunately, the USB solution provided in some of the "cheaper" devices is limited to 100 MB/s, so there is no improvement over a single 1 Gb/s network connection. They also offer 10 Gb/s network cards in higher-spec models, but that would again require a big investment in a network card, not to mention the lack of connectivity on a laptop - a Mac in particular, which has a Thunderbolt 2 port instead, the same as some QNAP models. That solution is really tempting, as some people have reported performance better than a standard SSD, reaching 1900 MB/s read and 900 MB/s write, but the price tag on those models is not acceptable: 2-3 thousand for an empty case.
     #unRAID Then I discovered this Linus video https://www.youtube.com/watch?v=dpXhSrhmUXo and I loved the idea from the beginning, as it seems to solve the performance bottleneck problem: everything sits under the same hood and lets you put money where it is really needed - into better and bigger hard disks, maximising the storage and horsepower of the machine, GPU, RAM, etc. So this is where I am right now: I've bought better hard disks (6x 3TB HGST 7200 RPM) and a fresh, powerful gaming rig for less than the cheapest QNAP with a Thunderbolt adapter. I followed the setup... and this is where my doubts showed up; I started to look for alternatives to unRAID and began to consider XPE. And here is why. Unlike Linus in the YouTube video, I chose a new hardware setup based on an AMD solution: Ryzen + Vega GPU. The first problem is that Ryzen, unlike Intel processors, doesn't give you a second (integrated) video output, even though the motherboard is equipped with an HDMI port. That might become possible when a new generation of Ryzen is released. I may have to add a cheap extra video card to solve that problem; my attempts to work with just one video card have failed so far, even though such an option seems to be available. Then there are possible poor-performance symptoms: right now I am waiting for the second day for the Parity-Sync to finish. It has been 2 days already and it has covered only 77% of the space (2.3 TB out of 3 TB in 1 day, 18 hours and 18 minutes). It was promising at the beginning, as a single drive was showing above 150 MB/s read/write speed, the combined read speed from all 5 drives was 750 MB/s, and the predicted time to finish the process was just 4 hours.
     #XPE So I can see some people, like me right now, considering going the same way as me, from unRAID to XPE https://xpenology.com/forum/topic/5270-reliability-and-performance-vs-unraid/ but at the same time other people are considering going the opposite way https://xpenology.com/forum/topic/3591-xpenology-vs-unraid/ So I do have doubts about performance, reliability, and the functionality to access the NAS from outside - as I understand it, Synology is trying to block some functionality, so special network-card hacks are required; the website linked in one of those posts from a few months ago, https://myxpenology.com, is already gone, which doesn't sound promising, and so on.
     #VM On top of this, I do not quite understand the virtualisation approach here. I understand that I can install the XPE bootloader, which would allow me to install DSM as the first OS, right? Then, like on Synology (and unRAID), I can create a VM running, let's say, Windows 10, right? But I've not seen anything like Linus's video done with XPE yet, so I'm not sure whether it's possible at all to set this up the same way as with unRAID, as a gaming rig on top of XPE+DSM - and if yes, would the performance be the same or similar to unRAID's? There also seems to be another installation path: installing XPE in a VM. That would mean running on Windows already, right? So it would be XPE+DSM running on top of a native Windows OS. But is it possible to utilise all the functionality and performance of a native NAS system in that case? Windows would have to run all the time to keep the server available, I'm not sure how the performance of each drive would be affected, and I would have to dedicate one drive just to running Windows instead of having the full space of the array available to a virtualised Windows. I wasn't sure, but it seems XPE also doesn't require any RAID hardware - the same as unRAID, right? I had some doubts, as AMD motherboards support only RAID 0, 1, and 10 as I recall, whereas I would like to go with RAID 5 or 6, or an SHR equivalent if possible. Any comments and answers would be appreciated, especially around the virtualisation subject.
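A quick sanity check on the Parity-Sync figures quoted in the post above (assuming decimal terabytes, i.e. 10^12 bytes):

```shell
# 2.3 TB synced in 1 day, 18 hours and 18 minutes.
awk 'BEGIN {
  bytes   = 2.3e12
  seconds = 24*3600 + 18*3600 + 18*60   # 152280 s total
  printf "effective rate: %.1f MB/s\n", bytes / seconds / 1e6
}'
```

That works out to roughly 15 MB/s, an order of magnitude below the ~150 MB/s a single 7200 RPM drive was showing at the start, so something (drives dropping out of step, a controller bottleneck, or background I/O) is throttling the sync rather than the disks being the limit.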
  3. Hello, I would like to know how to create a VM on VMware starting from the loader in .img format - either with this loader, or with this other one. Thanks in advance.
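One common route (a hedged sketch, not the only way: the filenames are placeholders, and it assumes qemu-img is installed on a workstation) is to convert the raw .img loader to VMDK format and attach it to the VM as its first disk:

```shell
# On a Linux/macOS workstation with qemu-img installed:
# convert the raw loader image to a VMDK container.
qemu-img convert -f raw -O vmdk synoboot.img synoboot.vmdk

# After uploading synoboot.vmdk to the datastore, optionally clone it
# into a native ESXi thin-provisioned disk from the ESXi shell:
vmkfstools -i /vmfs/volumes/datastore1/synoboot.vmdk \
           /vmfs/volumes/datastore1/synoboot-esxi.vmdk -d thin
```

The VM then boots from that VMDK, with the data disks attached separately (e.g. via a second controller or RDM).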
  4. I have installed multiple XPEnology DS3617 instances on an ESXi server and set up file sync using DSM's "Shared Folder Sync". Recently there has been a problem in "Shared Folder Sync": an "Operation failed" error is prompted when I press the [Create] or [Edit] button in the task list. This feature worked before; I'm not sure whether a recent update in Package Center caused the issue. Recently I updated "File Station" in Package Center, and a new version, DSM 6.1.3-15152 Update 3, is ready to install, but I have not updated yet. Has anyone here encountered such a problem in "Shared Folder Sync"? Is there anything I can do to solve this issue? Thanks in advance.
  5. Hello, I am currently in the middle of a server migration. I was on ESXi 5.5 with DSM 5.x. Today I am moving to ESXi 6.5, and I would like to take advantage of this migration to move to DSM 6.x. I have tried quite a few things I found on the net, here and elsewhere. Nothing has worked out - VMs that won't boot, and so on. I would like to start again from scratch, following your advice and your quality links. Thanks in advance for your help.