jimmyjin1978
Posts posted by jimmyjin1978
-
-
9 hours ago, IG-88 said:
really? there is no DSM 6.3.1 and afaik 6.2.4 does not boot at all with the current loader (at least on baremetal and esxi)
so i have doubts about your reasoning
Here I mean the PVE version; the DSM version is always DS918+ 6.2.3.
Quote
you would need to use PCI pass-through to have it directly in the vm, but that would be an unusual way and partly a waste of resources
usually you would use the nic directly in the hypervisor like esxi and use the virtual vmxnet3 adapter (10G capable) inside the vm, the driver is there and already part of jun's default driver set
Yes, I'm using PCIe passthrough. As I said, I also used an RTL8125B 2.5G NIC, and passing that NIC through works just fine with the DSM 6.2.3 VM. I also have an Intel Gigabit NIC card and it works fine, too. An Intel X550-T1 10G NIC also works. Only with the AQC107 and AQC108 did I find the "restart VM" issue.
-
On 6/7/2020 at 4:43 AM, IG-88 said:
good, anything above 1GBit networking is really a must-have for a NAS nowadays (i prefer 10G but even 2.5G would be a noticeable step up for most people)
Sorry, IG-88, another multi-gig NIC issue needs to bother you:
I have an AQC107 10G NIC (ASUS brand) and an AQC108 5G NIC. They both work perfectly in my baremetal machines (ASUS B250I or Gigabyte B365M mATX), but when I use one in a VM with your 0.13.3 ext driver, it always works OK the first time I start the DSM 6.2.3 VM after boot. However, when I shut off the DSM 6.2.3 VM and restart it, the NIC's driver seems to go wrong and it can't obtain an IP. When I add another virtual NIC like an E1000E, I can get into Synology's management page and see the AQC NIC in the network list, but the DHCP area is grey, i.e. the AQC NIC cannot get an IP address.
My VM environment is Proxmox VE; I tried both version 6.2.4 and 6.3.1, same issue. Only the AQC NICs (AQC107 or AQC108) have this issue; an Intel NIC or even the RTL8125 2.5G NIC works just fine, and you can stop and restart the VM many times. So far, what I have found is that an AQC NIC cannot survive a VM restart; baremetal, or the first VM start after the server boots, is OK.
Could you give some advice from the AQC NIC driver point of view?
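One host-side workaround I plan to try (my own assumption, not something confirmed for the atlantic driver) is forcing the PVE host to remove and rescan the passed-through AQC device before each VM restart, so the card gets a clean re-init. This is only a sketch; SYSFS and DEV are placeholders for my setup:

```shell
#!/bin/sh
# Sketch of a host-side workaround (untested assumption): detach and
# rescan the passed-through AQC NIC on the PVE host before restarting
# the VM, so the device is re-enumerated from scratch.
SYSFS=${SYSFS:-/sys}
DEV=${DEV:-0000:xx:00.0}   # placeholder: replace xx with your NIC's PCI slot
if [ -e "$SYSFS/bus/pci/devices/$DEV" ]; then
  echo 1 > "$SYSFS/bus/pci/devices/$DEV/remove"   # detach device from the host
  echo 1 > "$SYSFS/bus/pci/rescan"                # re-enumerate the PCI bus
  echo "rescanned $DEV"
else
  echo "device $DEV not present, skipping"
fi
```

With the placeholder address left in, the script just prints the "not present" message, so it is safe to dry-run before filling in the real slot.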
-
On 6/7/2020 at 4:43 AM, IG-88 said:
good, anything above 1GBit networking is really a must-have for a NAS nowadays (i prefer 10G but even 2.5G would be a noticeable step up for most people)
no, i don't see a reason to put any more work into this (it would be better to spend more time on documentation about the state of the drivers for 6.2.3)
kind of a closed chapter now as synology is back to "normal" with 6.2.3 (and having three different extra versions for 918+ was not really that nice; having a "universal" i915 driver makes things much easier)
i will do the same realtek drivers for 3615 and 3617 and 6.2.3 this weekend
I have an Intel X550-T1 10G NIC which works perfectly; however, a 10G NIC's power consumption is a bit high. I use an MS04 case with a 4-HDD bay (WD Red 3T in RAID 5) in the DS918+ VM, so 10G is kind of overkill, while 2.5G matches the disk speed and needs only PCIe x1; the whole idle power dropped from 30W to 24W without performance loss.
It seems Intel made some mistake on its own 2.5G NIC with the Gen10 CPU/MB, so most motherboard vendors provide an RTL8125 2.5G NIC instead of Intel's on Z490/B460 boards.
-
5 hours ago, IG-88 said:
i checked the realtek website and they even had a slightly newer version
look in the 1st post for 13.2 for 918+
You are light speed!
The new version works just fine in my PVE VM; SMB speed can sustain 250+ MB/s.
Thanks for the brilliant work!
Will this driver also come to 6.2.2 for DS918+ and DS3617?
-
On 6/5/2020 at 12:50 AM, IG-88 said:
i guess you write about 918+?
the files are in all three extras but i forgot to add the entry for 918+ in the rc.modules, so it does not get loaded, and the copy process for /lib/modules/updates only compares against what is in rc.modules, so it does not get copied to disk (even if it's in extra.lzma) - that only applies to 918+, in 3615/17 everything looks ok
check the 1st thread in a few minutes, i will upload a new 13.1 for 918+
Thanks for the quick fixing!
However, with the new 13.1 version for DS918+, I can't boot with the RTL8125 NIC alone. After adding an E1000 virtual NIC, DSM can boot, but only the E1000 NIC gets an IP address, while for the RTL8125 I can see the rate is 2500M in DSM but no IP address is assigned; if I manually assign an IP address, the address won't work for the management page or SMB. Reinstalling DSM 6.2.3 gives the same result.
Then I changed to the DS3617 version 11.1 driver, same result: I can't boot with the RTL8125 standalone; adding an E1000 lets it boot and I can see the RTL8125 in DSM, but no IP address.
Today I installed this RTL8125 in my Win10 desktop PC, and now I am writing this reply through that NIC.
I checked the TP-Link website and there is a Linux driver released on 2020-03-07, as attached; I'm not sure, but maybe it is useful for you.
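If I understand your description of the copy logic correctly, it behaves roughly like this (my own simplified sketch with temp dirs standing in for the real paths, not the actual loader code):

```shell
#!/bin/sh
# Simplified sketch (my understanding, not the real loader script):
# only modules listed in rc.modules are copied from the extra into
# /lib/modules/updates, so a .ko missing from rc.modules never
# reaches disk even though it sits inside extra.lzma.
EXTRA=$(mktemp -d)      # stands in for the unpacked extra.lzma contents
UPDATES=$(mktemp -d)    # stands in for /lib/modules/updates on disk
touch "$EXTRA/r8125.ko" "$EXTRA/r8152.ko"
printf 'r8152\n' > "$EXTRA/rc.modules"          # r8125 entry was forgotten
while read -r mod; do
  [ -f "$EXTRA/$mod.ko" ] && cp "$EXTRA/$mod.ko" "$UPDATES/"
done < "$EXTRA/rc.modules"
ls "$UPDATES"   # only r8152.ko got copied; r8125.ko never reaches disk
```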
-
Platform: ASUS strix B250 ITX, i3 8100T, 16G RAM
PVE v6.2 , 1.04b boot loader for 6.2.3-25426 virtualized
physical passthrough TPlink 2.5G NIC ( RTL8125 chip)
I saw the RTL8125 is in the supported driver list, but I had no way to get my 2.5G NIC recognized when booting with another NIC (a virtualized E1000E NIC).
I checked /usr/lib/modules/update/*, and there is no r8125.ko.
Is it perhaps a typo for r8152?
Since a lot of Intel 10th-gen CPU motherboards will use a 2.5G NIC, mainly with the RTL8125 chip, could the RTL8125 also be supported in the future?
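For anyone else checking their own box: a quick way (over SSH on the DSM machine) to see which Realtek modules actually landed on disk. MODDIR defaults to the path I checked on DSM 6.2; override it if your layout differs:

```shell
#!/bin/sh
# List which Realtek .ko files are actually present on disk.
# MODDIR defaults to the DSM 6.2 location I looked at; it can be
# overridden (MODDIR=/some/path) for checking other layouts.
MODDIR=${MODDIR:-/usr/lib/modules/update}
RESULT=""
for m in r8125 r8152 r8168 r8169; do
  if [ -f "$MODDIR/$m.ko" ]; then
    RESULT="$RESULT $m:present"
  else
    RESULT="$RESULT $m:missing"
  fi
done
echo "$RESULT"
```

On my box this showed r8152 present but r8125 missing, matching what I describe above.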
-
13 hours ago, jimmyjin1978 said:
My setup is a Gigabyte B365M Aorus with an E3-1235L v5 CPU, 8GB DDR4, and Intel X550 10Gb LAN. Volume1 is a WD6002FRYZ 6T HDD, with two Toshiba RC500 250GB NVMe SSDs as read-write cache. Below is the write speed with the cache on (not skipping continuous I/O):
The first snap is a large-file (30+ GB) write test and the second is hundreds of JPG and RAW images.
The little problem is that DSM's TRIM ability seems poor: after several write tests (60+ GB of data), the cached write speed dropped to around 300 MB/s, and it takes a long time (days) to recover the full 10Gb speed. In Win10, these Toshiba RC500 SSDs will also drop to a 300 MB/s write speed after a continuous write exceeds 30GB; maybe this is the SSD's SLC cache being exhausted.
BTW: the subsequent flushing of the written cache is an extremely long procedure; from DSM you can see it writes to your volume at only 10 MB/s.
But I think that's all official DSM behavior; first of all, we have an NVMe-cache-capable XPEnology!
Thanks to @The Chief @flyride! Cheers and merry Christmas!
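For what it's worth, a crude way to separate SMB/network effects from the SSD cache itself is a local write test on the box over SSH. OUTFILE is a placeholder path, and the 64 MB size is only illustrative (far too small to exhaust an SLC cache; scale it up to tens of GB for a real test):

```shell
#!/bin/sh
# Crude local sequential-write probe (placeholder path and toy size;
# a real SLC-cache test needs tens of GB). conv=fdatasync makes dd
# include the flush in its timing, so the reported MB/s is realistic.
OUTFILE=${OUTFILE:-$(mktemp)}
LAST=$(dd if=/dev/zero of="$OUTFILE" bs=1M count=64 conv=fdatasync 2>&1 | tail -n 1)
echo "$LAST"     # final dd line: bytes copied, elapsed time, throughput
rm -f "$OUTFILE"
```

Running it before and after a 60+ GB burst should show the same drop to ~300 MB/s locally if the SSDs, not the network, are the bottleneck.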
-
On 12/5/2019 at 12:08 AM, T-REX-XP said:
Thanks for the good news, guys. So, can you share screenshots (before/after) of a speed test after setting up the cache? Thanks.
My setup is a Gigabyte B365M Aorus with an E3-1235L v5 CPU, 8GB DDR4, and Intel X550 10Gb LAN. Volume1 is a WD6002FRYZ 6T HDD, with two Toshiba RC500 250GB NVMe SSDs as read-write cache. Below is the write speed with the cache on (not skipping continuous I/O):
The first snap is a large-file (30+ GB) write test and the second is hundreds of JPG and RAW images.
The little problem is that DSM's TRIM ability seems poor: after several write tests (60+ GB of data), the cached write speed dropped to around 300 MB/s, and it takes a long time (days) to recover the full 10Gb speed. In Win10, these Toshiba RC500 SSDs will also drop to a 300 MB/s write speed after a continuous write exceeds 30GB; maybe this is the SSD's SLC cache being exhausted.
-
I've had this post bookmarked for two months and my NVMe SSDs (Intel 760p) are waiting. Do we have any conclusion on using NVMe cache on XPEnology?
Driver extension jun 1.03b/1.04b for DSM6.2.3 for 918+ / 3615xs / 3617xs
in Additional Compiled Modules
Posted
My XPEnology box is an ASUS B250I with the same TP-NG421 2.5G NIC as you have, DS918+ 6.2.3-25426 U2; it works OK both on baremetal and in a VM, extra driver 0.13.3.