XPEnology Community

Everything posted by mervincm

  1. In the process of resetting my admin password, I somehow lost it. I am not sure how, as I immediately put it into my LastPass, but in any case that's where I find myself. I tried to guess what it was but only managed to lock my IP out. When I built this NAS I disabled the built-in admin account and made myself a replacement admin-level account, and this replacement is the account whose password I no longer know. I am using Jun's loader for the 918+ and I have physical access (keyboard, display, etc.). With earlier bootloaders you could reset an account at boot... is that still possible? Edit: found it
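     For anyone else who ends up here: if you can still get a root shell (for example over SSH from another working admin account), something like the following should reset the lost account's password. This is only a sketch; synouser is the standard DSM user tool, and "myadmin" is just a placeholder for whatever you named your replacement admin account:
         # reset the password for the replacement admin account (placeholder name and password)
         sudo synouser --setpw myadmin 'NewPassword123'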
  2. From what I was able to determine, it actually is boosting. You can tell by looking for it running at 3601 MHz; that extra 1 indicates that turbo is active. Since that sounded like a load of BS to me, I decided to run a benchmark. The benchmark came back with a value I would have expected only with turbo active, so I am now convinced that turbo is actually kicking in.
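     If anyone wants to check this on their own box, watching the reported clocks under load is the quick way; this is just the standard Linux /proc interface, nothing DSM-specific:
         # print the per-core clock once per second; values above the base clock mean turbo is engaging
         while true; do grep "cpu MHz" /proc/cpuinfo; echo "---"; sleep 1; done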
  3. I have a 9900 on a Z390 system board. It's been a while since I tried M.2 in XPEnology, but when I tested, both slots were working. Heck, I use an NVMe drive on a PCIe card now and it works fine.
  4. This issue is bewildering to me. I built a test XPEnology system and it worked on the first try. I tried it on my main system once again, and it worked this time. I moved a couple of VMs over to NFS storage on my main system, and everything was working perfectly. A week later I went to use a VM and it had read errors and could not be restarted. Sure enough, the NFS share is no longer available to vSphere and cannot be re-established. I didn't change any network config or NFS share permissions, nothing at all on vSphere or Synology that I can recall. I can still connect to the same NFS share from my backup Synology, so the share is actually available; I just can't use it from vSphere...
  5. I have not had any luck yet. I wonder if it is an issue with the 918+? I have not been able to scare up any spare hardware to build a test XPEnology yet either.
  6. It doesn't think the CPU is a J3455; the info screen you are looking at is not built via a query, it is just a textual representation of the CPU that Synology knows they installed in that model. It is cosmetic. I am surprised you have issues with your LSI SAS card. Mine works without issue.
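     If you want to see what the box actually has (as opposed to what the cosmetic info screen claims), the kernel will tell you; standard Linux, nothing Synology-specific:
         # the real CPU model, regardless of what the DSM info screen shows
         grep "model name" /proc/cpuinfo | sort -u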
  7. I can confirm that on my 918+ XPEnology system with a 2-port 10GbE Intel card I see both NICs (my motherboard NIC is disabled).
  8. The JMB585 worked without issue in ESXi 6.7u3(?) and 7.0u1 in my limited testing.
  9. Thank you for testing! I used a 918+ image, same for you? I will build a test NAS on the weekend.
  10. I am trying to use remote/shared storage to facilitate vMotion between two ESXi hosts, so I am trying to mount an NFS share hosted on my XPE box as a datastore on both of my hosts. I believe I worded that correctly, but I admit that I don't understand your comment, as I don't understand how else ESXi can access an NFS share.
      XPEnology side:
      - Control Panel > File Services: NFS and NFSv4.1 checked to enable; NFS advanced settings at default.
      - Control Panel > Shared Folder: created a share "NFS". NFS permissions: added 2 entries, one for each host, by single IP, Read/Write, Squash: no mapping, Asynchronous: Yes, Non-privileged port: Denied, Cross-mount: Allowed.
      - Control Panel > Info Center > Service: confirmed NFS is enabled and the firewall exception is in place.
      VMware side:
      - HTML5 client, logged in as root to the ESXi host.
      - Navigate to Storage > Datastores > New datastore > Mount NFS datastore.
      - Name: DS2_NFS; NFS server: 10.0.0.11 (IP of my XPEnology system); NFS share: /volume2/NFS (case-sensitive path to my NFS share as shown in DSM > Control Panel > Shared Folder > NFS share, NFS permissions at the bottom); NFS 3 or NFS 4 (tried both).
      - The "create NFS datastore" task starts up, runs, then: Failed - an error occurred during host configuration.
      - Error bar message: failed to mount NFS datastore DS2_NFS - Operation failed, diagnostics report: Mount failed: unable to complete Sysinfo operation. Please see the vmkernel log file for more details.: unable to connect to NFS server.
      - Host > Monitor > Logs: 2021-01-02T23:38:54.839Z cpu0:2100027 opID=9c4e526e)NFS: 162: Command: (mount) Server: (10.0.0.11) IP: (10.0.0.11) Path: (/volume2/NFS) Label: (DS2_NFS) Options: (None)
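      For what it's worth, trying the same mount from the ESXi shell sometimes gives a more useful error than the UI does. This is just a sketch using the same server, path, and datastore name as above:
          # confirm the vmkernel interface can reach the NAS at all
          vmkping 10.0.0.11
          # attempt the NFSv3 mount from the CLI and watch the error it returns
          esxcli storage nfs add --host=10.0.0.11 --share=/volume2/NFS --volume-name=DS2_NFS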
  11. Is anyone using XPEnology to supply NFS for ESXi 7? I was not able to make it work. It does work as expected on my real Synology 1815+ running the DSM 7.0 beta, but no matter what I tried, I could not get the ESXi host to add a datastore on the XPEnology NAS. I can use NFS v3 and v4 to my real Synology. If you are able to make it work, I would appreciate some feedback.
  12. I ran this exact system on XPEnology for years, rock solid. The internal NIC was fine in my experience.
  13. Thanks IG-88. I used the supplied i915 to get Plex transcoding working on my 6.2.3 U3 system. The only thing not working is HDR transcoding, but that has requirements not yet available in Synology: OpenCL and Beignet.
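      If anyone wants a quick sanity check that the i915 driver actually bound to the iGPU, this is the generic way to look (nothing Plex-specific, and assuming SSH access to the box):
          # the render nodes should exist if hardware transcoding is possible
          ls -l /dev/dri
          # confirm the i915 module is loaded
          lsmod | grep i915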
  14. I didn't realize it was well known that Update 3 was a new kernel and replaced all these drivers. I understand that this is always a possibility, and I was looking to point out that it did happen. I really do appreciate your assistance and will provide feedback on whether the modified U3 version of i915.ko works. PS: I am sure you noticed, but in case you didn't, the U3 build notes mention fixing an issue with SAS and shutdown. Not necessarily exactly the issue you were working on earlier (SAS and drive sleep), but I wonder if the changes in U3 help there? The newer JMB585 cards largely make SAS drivers obsolete for XPEnology, but many folks still have them (I have both). Thanks again IG-88!
  15. This is the i915.ko (from /usr/lib/modules), dated much later than the Update 2 files, from my 918+ system after upgrading to Version: 6.2.3-25426 Update 3. I am not sure if I can simply replace it in /usr/lib/modules with your modified file from Update 2? i915.ko
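      In case it helps anyone attempting the same swap, this is roughly how I would go about it; the module path is the stock location mentioned above, the backup name is arbitrary, and /path/to/modified/i915.ko is just a placeholder for wherever you put the modified file:
          # back up the Update 3 module before overwriting it with the modified one
          sudo cp /usr/lib/modules/i915.ko /usr/lib/modules/i915.ko.u3.bak
          sudo cp /path/to/modified/i915.ko /usr/lib/modules/i915.ko
          sudo reboot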
  16. I believe that Version: 6.2.3-25426 Update 3 broke hardware transcoding on my i9-9900K. I was using IG-88's modified i915.ko file. Perhaps Update 3 included a newer i915.ko?
  17. I have run many XPEnology-based NAS boxes with Plex hardware transcoding. The i3-8100, i3-8350K, and i5-8600K were absolutely straightforward, no messing around with drivers, etc. The i9-9900K and i5-9600K were a bit more of a challenge; I had to load a driver pack and a modified Intel video driver as well.
  18. Yes, you can bond multiple NICs in DSM.
  19. If you want to see it graphically, install Docker and the netdata/netdata container. Here are my 8 cores.
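      If you would rather skip the Docker UI, the same container can be started from SSH with something like this; the image is the standard netdata/netdata, and the read-only /proc and /sys mounts are the usual ones the project suggests:
          # run netdata and expose its dashboard on port 19999
          docker run -d --name=netdata \
            -p 19999:19999 \
            -v /proc:/host/proc:ro \
            -v /sys:/host/sys:ro \
            netdata/netdata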
  20. I can only back up what the guru master has to say. I also picked up a couple of JMB585-based cards to add 5 SATA ports each. I don't have them in my full-time system yet, still in my lab/test box, but so far they work very well. Their ability to use PCIe 3.0 for performance, the fact that they only require 2 lanes (which happens quite often on boards for these CPUs), and their low price make them easy to recommend. If you don't plan on adding a bunch of cards, almost anything has worked for me (so far). If you do want to add a bunch of cards (GPU, NICs, HBA for more ports, AIC NVMe drives), then pay attention to (a) the physical slot size, (b) how the slots are wired electrically, (c) how lanes are spread around when multiple slots are in use, and (d) which features are either/or. Most boards have features that you must choose between and can't use at the same time. Some boards are much better than others in this regard. As an example, a pile of 1x slots is useless, whereas a board with a bunch of physical 4x/8x/16x slots is very handy, if they can be used at the same time with enough active lanes. The only useful thing you can put into a PCIe 2.0 1x slot is a 1/2.5Gbit Ethernet NIC. When I buy a board I just make an Excel spreadsheet and track it. Z390 boards seem the best to me so far.
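      One way to double-check how a slot is actually wired once a card is in it, rather than trusting the manual, is to compare the advertised and negotiated link widths; this assumes lspci is available on your build:
          # LnkCap is what the device supports, LnkSta is what it actually negotiated (width and speed)
          lspci -vv | grep -E "LnkCap:|LnkSta:"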
  21. I seriously recommend the i3-8100 and the i3-8350K on the used market. They go for very little, and "used" is not a scary word when it comes to a CPU. They are super easy to cool, and there are tons of inexpensive system boards you can get for very little as well. The idle power usage is about the same for nearly every model in a given series, so the power costs will be the same as the Pentiums you mention. They have exceptional hardware transcoding with no driver fuss if you want to use that (it's amazing if you use it). And they have 4 honest-to-goodness fast Intel cores for muscle when you want to do anything locally. They also do not rely on turbo mode. Turbo mode may or may not function in XPEnology; only base clocks and power saving modes are certain. "Turbo boost is managed by the intel_pstate scaling driver, which is not available with the 918+ kernel." The high base clock on the i3 models means they really kick ass compared to the lower-end models and, frankly, some of the higher-end models. I ran XPEnology servers with an i3-8100, i3-8350K, i5-8600K, i5-9600K, and even an i9-9900K. The J5040 is not anywhere near as powerful as the others below, as it is a Pentium "Silver" based on the Atom core, not the Skylake core. Also, although it is a 4-core part, ignoring turbo mode it is only a 2.0GHz CPU. Consider these CPUs (what you will see without turbo):
      - Pentium 6400T: 2 cores w/ HT, 4MB cache, 2.3/3.4GHz
      - Pentium 6400: 2 cores w/ HT, 4MB cache, 4.0GHz
      - i3-8100T: 4 cores, 6MB cache, 2.4/3.1GHz (BIOS set)
      - i3-8100: 4 cores, 6MB cache, 3.6GHz
      - i3-8300T: 4 cores, 8MB cache, 2.5/3.2GHz (BIOS set)
      - i3-8300: 4 cores, 8MB cache, 3.7GHz
      - i3-8350K: 4 cores, 8MB cache, 4.0GHz
      - i5-8400T: 6 cores, 9MB cache, 1.2/1.7GHz
      - i5-8400: 6 cores, 9MB cache, 2.8GHz
      - i5-8500T: 6 cores, 9MB cache, 2.1GHz
      - i5-8500: 6 cores, 9MB cache, 3.0GHz
      - i5-8600T: 6 cores, 9MB cache, 2.3GHz
      - i5-8600: 6 cores, 9MB cache, 3.1GHz
      - i5-8600K: 6 cores, 9MB cache, 3.6GHz
      Then consider that both single-thread and multi-thread performance are important. In my mind the 8100 is a rock star because you can get them so cheap, at times only about $40 more than a used 6400. The 8350K is also a standout because of its comparatively excellent single-core and multi-core performance. The 8600K is the next model I like because its multithreaded performance is enough over the i3-8350K to finally more than make up for the drop in single-core performance. Don't make the mistake of thinking the low-end models will save you appreciable money in power; they will not. Neither will the low-power models. They don't burn meaningfully less power. When you are looking at a NAS from a power-cost perspective, the most important thing to look at is the idle power usage. Once you add in the power saving script to schedule power savings, every single one of these CPUs runs at the same 800 MHz, has the same power-gating tech, supports the same power saving modes, and so on. They idle at the same power usage (or close enough that it doesn't matter). Then you look at the relatively rare (for a NAS) power burn when the CPU is busy. This is where the lower frequency of the T models will burn less power over time. The important thing to remember is that in a NAS your CPU work is fixed, and the T models, being slower, spend a greater amount of time in the high-burn mode. The regular CPUs are already back in idle mode while the low-power CPUs are still in their high-burn mode. So, while it's not exactly the same, even when you need to do work, the overall power used is a lot closer than you would think when looking at TDP numbers. So, why choose the low-power models? Because they are easier to cool. You can use them in all sorts of places where the heat generated by a high-power CPU would overheat or damage nearby caps, etc. In my opinion, T models are never worth looking at unless you can't use even a small heatsink and fan with at least some airflow. If turbo mode works (and the fact that I see CPU rates at 36,000,001 means it might), things change a bit.
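      If you want to see which scaling driver and governor your box actually ended up with (which is what decides whether turbo is even on the table), the standard sysfs files will tell you, assuming the cpufreq directory exists on your build:
          # scaling driver (intel_pstate vs acpi-cpufreq), governor, and current frequency for core 0
          cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_driver
          cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
          cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_cur_freq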
  22. Yes, I am running bare metal, also with an LSI card. I didn't change anything to allow for 16 HDDs.
  23. I hate to fix what isn't broken, but I might have the issue:
      admin@DSM:~$ ls /dev/synoboot*
      /dev/synoboot  /dev/synoboot1  /dev/synoboot2  /dev/synoboot3
      Given that I have the extra /dev/synoboot3, I appear to have a problem. I am running 6.2.3 U2 918+ with 16 visible drive slots, 13 used (I have 6 SSDs and 7 HDDs). I also have an AIC NVMe Intel SSD 750-400 used as a read cache on the HDD volume. My volume 1 / storage pool 1 is SHR-1, 7x 8TB HDD using slots 1, 2, 3, 4, 5, 6, 12, BTRFS, Healthy. My volume 2 / storage pool 2 is SHR-1, 6x 1TB SSD using slots 7, 8, 9, 10, 11, 13, BTRFS, Healthy. My NVMe is cache device 1, Healthy. Other than the strangeness that my HDDs and SSDs are not sequential in their slot numbers, and the fact that the two misordered drives (12 and 13) have had a few reconnects (none since at least March, and none on any other disk), I don't see any issues with the storage.
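      For anyone comparing against their own loader, the partition layout DSM sees on the loader device is easy to dump; standard fdisk, run over SSH:
          # list the partitions on the synoboot loader device
          sudo fdisk -l /dev/synoboot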