Posts posted by test4321

  1. On 5/27/2018 at 8:15 AM, Davidov said:

    Please advise - I can't decide what to buy.

    I'm considering these options:

    An ASRock J4205 motherboard

    A motherboard plus a G4560 CPU

    What are the pros and cons?

    Will transcoding only work on the J4205?

     

    I would go for something more powerful... Turbo boost doesn't work on Synology - I've tested it. So you need a CPU with a high base clock.

    I had a J3455 (almost the same as the J4205) - it could barely handle transcoding.

    Better to get a decent CPU with a separate motherboard, so you can swap the CPU later if needed.

    I recently bought:

    i5-6400T 2.2GHz
    MSI H110I Pro AC Mini ITX

    ^^ Took a while to find, but the CPU is excellent and draws only 35W.

    The motherboard isn't great, but it's cheap.

    Performance is about twice that of the J4205-ITX:

    https://www.cpubenchmark.net/cpu.php?cpu=Intel+Pentium+J4205+%40+1.50GHz&id=2877

     

    https://www.cpubenchmark.net/cpu.php?cpu=Intel+Core+i5-6400T+%40+2.20GHz&id=2668

     

    The price of both should be about the same - the 6400T is probably even cheaper.

     

     

  2. 11 hours ago, mseidler said:

    DSM 6.2 is now officially released!

     

    BR,

     

    Michael

     

    Have you tried running it? I just got it offered as an update, but after looking at the new features I decided not to upgrade until I know for sure it's safe.

  3. 14 hours ago, IG-88 said:

    Not sure what your disk configuration is, but even with 12 disks I "only" get ~500 MB/s. There is more (900 MB/s) for a few seconds while data written to the NAS goes to RAM (the source PC for this has an NVMe SSD, so the SATA3 limit of ~550 MB/s can be broken).

    If you want to test the network speed without the possible limits of the disks on both sides, you can try netio - there are versions for Windows and Linux (netio-x86_64). You just copy the binary to your system(s) and run it on both sides to measure.

    Even if you get a nice number, it doesn't help you much in getting more speed out of the system. For more write speed you can use 2 SSDs as a write cache for a volume in DSM (on bare metal there is the SATA3 limit at the moment), and for reading (uncached) data there is only adding more disks or going all-SSD (6 SSDs in a RAID config might be able to reach 1000 MB/s?).

    There is also the approach of using NVMe with ESXi and virtual DSM like @flyride did, but I guess that's a very unique scenario.

     

     


    No, I'm OK with it. 220 MB/s on average is good!!!

    I'll probably stuff 2 SSDs in there when I get a better motherboard, but that's probably a project for next year.

     

    Thanks for all the help!
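
    Side note on IG-88's netio suggestion above: the same idea - measuring raw link speed with no disks involved - can be sketched in a few lines of Python by pushing zeroed buffers over a plain TCP socket and timing them. This is not netio itself, and the port, chunk size and total amount below are arbitrary assumptions:

    # throughput_test.py - rough point-to-point TCP throughput check (not netio).
    # Run "python3 throughput_test.py server" on the NAS, then
    # "python3 throughput_test.py client <nas-ip>" on the PC.
    import socket, sys, time

    PORT = 5201        # assumed free port on both machines
    CHUNK = 1 << 20    # 1 MiB per send
    TOTAL = 2048       # number of chunks, i.e. 2 GiB in total

    def server():
        with socket.create_server(("", PORT)) as srv:
            conn, addr = srv.accept()
            received, start = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:           # client closed the connection
                    break
                received += len(data)
            secs = time.time() - start
            print(f"received {received / 2**20:.0f} MiB in {secs:.1f}s "
                  f"= {received / 2**20 / secs:.0f} MiB/s from {addr[0]}")

    def client(host):
        buf = b"\x00" * CHUNK
        start = time.time()
        with socket.create_connection((host, PORT)) as conn:
            for _ in range(TOTAL):
                conn.sendall(buf)
        secs = time.time() - start
        print(f"sent {TOTAL} MiB in {secs:.1f}s = {TOTAL / secs:.0f} MiB/s")

    if __name__ == "__main__":
        server() if sys.argv[1] == "server" else client(sys.argv[2])

    On a healthy 10GbE link this should report well above the ~110 MiB/s that gigabit tops out at, regardless of how slow the disks on either end are.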

  4. 11 hours ago, IG-88 said:

    ???

     

    I'd assume you'd use a DAC cable or a 10G-SR SFP+ module for 10G Ethernet. FC is different and not used with DSM.

    Also, if you can't find a Win7 driver, then Windows Server 2008 R2 would be the better choice (same code base as Win7):

    http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/SearchByProduct.aspx?ProductCategory=322&Product=1214&Os=190

     

    Win7 seems to be supported:

    https://forums.servethehome.com/index.php?threads/brocade-1020-cna-10gbe-pcie-cards.2955/

     

     

     

    Ah thanks! The 2008 R2 version worked!

     

    After that I had an issue adding extra.lzma - I ended up redoing the USB stick. Not the smoothest of processes, but it works!

    Speeds are about 180-300 MB/s - probably limited by the hard drives in the NAS.

     

    Thanks for the help @IG-88, @quicknick, @Polanskiman!!!!!

     

    The last part I have to figure out is the Brocade BIOS hijacking my PC's BIOS - it's weird that it blocks the motherboard BIOS even when I tell the motherboard BIOS not to load the card's ROM at all.

    PS: Sorry I didn't reply sooner, guys - I was away for a bit.

     

    [attachment: 2018-03-31 02_31_49-Copying 1 item (6.54 GB).jpg]

  5. 19 hours ago, test4321 said:

    2) The NIC itself loads before the boot devices.

    I get an error while its BIOS screen is loading:

     

    Adapter 1/1/1 (Brocade-1020): Link initialization failed. Disabling BIOS

    Adapter 1/0/0 (Brocade-1020): Link initialization failed. Disabling BIOS

     

    I enabled both of the NICs through options / settings:

     

    I'm not sure what I'm supposed to enable here. Any ideas?

     

    Thanks!

     

    I think I figured out issue #2:

     

    This is some kind of boot-over-LAN setting, which I don't need, so I'll turn it off.

     

    So I guess the only thing left is the Win7 issue.

  6. Got all the devices in today. 

     

    Connected the cards.

     

    Having a couple of issues:

     

    1) Windows 7 driver - non-existent. I downloaded these two:

     

    http://driverdownloads.qlogic.com/QLogicDriverDownloads_UI/SearchByProduct.aspx?ProductCategory=322&Product=1214&Os=194

     

    -  Driver package for Windows Server 2012 R2 (x64)

    - Adapter Software Installer for Windows

     

    Installing the device through Device Manager fails too:

     

    [screenshot]

     

    ^^^ these adapters installed but with an error: 

     

    [screenshot]

     

    Code 39 (haven't looked into it yet).

     

     

    Fibre Channel Controller

     

    The bigger problem is the Fibre Channel Controller - I can't install its drivers without bricking Win7: complete bluescreen. Only a restore point gets the system back to normal.

     

    [screenshot]

     

    Any ideas on how to get this device working on Win 7 Pro?

     

    2) The NIC itself loads before the boot devices.

    I get an error while its BIOS screen is loading:

     

    Adapter 1/1/1 (Brocade-1020): Link initialization failed. Disabling BIOS

    Adapter 1/0/0 (Brocade-1020): Link initialization failed. Disabling BIOS

     

    I enabled both of the NICs through options / settings:

     

    [screenshot]

     

    I'm not sure what I'm supposed to enable here. Any ideas?

     

    Thanks!

     

     

  7. 17 minutes ago, Xepnewbie2018 said:

    FYI....for testing purposes only....

    I bought two MNPA19-XTR 10GB Mellanox ConnectX-2 PCIe x8 10GbE SFP+ network cards with cables from eBay - pretty cheap ($48.00 including two SFP+ cables) - and they worked out of the box. I'm using an older motherboard (MSI G41TM-E43) with only 2GB of RAM on an LGA775 Core 2 Duo, and I'm getting 398.7 write and 452.4 read when transferring files from my Mac Pro 3,1. The Mellanox doesn't work with my Mac Pro, so I had to get a SolarFlare SFN5122F dual-port 10GbE PCIe adapter, SF329-9021-R6 ($44.00). Setup was straightforward and speeds are very close to my OWC ThunderBay IV in RAID 0.

    My current setup only has 4 SATA II ports with transfer rates up to 3Gb/s (4 x 3TB drives), so maybe once I move to a faster motherboard, speeds will increase.

    Can anyone comment or share what could contribute to a speed bottleneck as the number of drives increases? 8 drives, 12 drives, etc. My goal is to build a 12-drive system (36TB).

     

    A motherboard change is definitely the first thing to do. They are the dirt-cheap part of the build - the more expensive stuff is RAM and CPU. You could probably try a build like mine - LGA1151 is cheap on eBay because of the whole Intel fuckup where they changed the socket.

    As far as hard drives go - I think you are probably better off just buying 2 SSDs and using them as a write cache instead of getting 12 drives for speed. I don't know if anybody has attempted this on XPE though.

    Also, Synology's write cache is suspect - I have seen conflicting videos where it does improve the speed and where it doesn't at all.

     

     

     

    EDIT: I also noticed Synology has started to use RAM as fast storage for its databases. This applies to Universal Search and something else (I don't remember right now). So maybe in the future they will just use a RAM drive for every application?

     

     

     

     

     

  8. 1 minute ago, IG-88 said:

    He doesn't have much time and wants to mod it further to incorporate all 3 versions of Jun's loader, so it will take a little longer.

    But don't worry - if you really need that 10G driver, I will add it shortly to my extra.lzma and you can go on with Jun's loader for now.

     

    Sweet! Hopefully my eBay seller comes through and hurries the shipping up.

     

    Thanks!

  9. 2 hours ago, IG-88 said:

    BTW, the Brocade 1020 is part of the driver list for my extra.lzma, but its color code is yellow, so no one has requested it yet (I try to find someone to test so I can see at least one positive result when adding a driver):

    "bna: Brocade 1010/1020 10Gb Ethernet"

     

    So as long as we can compile drivers (not sure about 6.2, which seems to bring signed drivers into play) it should be possible.

     

    If you want to be independent of external drivers, you can check which *.ko files are in DSM (the hardware list provided by Synology lists only specific cards, but usually the chip is supported by the kernel module, so all cards with that chip should work).

    If you are looking for cheap, then Tehuti-based (tn40xx.ko) cards are an option (10GBase-T and SFP+ available).

    My route was to use a 2-port card in the NAS (ASUS PEB-10G/57840-2T, bnx2x.ko based, driver is part of DSM) and two Tehuti-based cards for the backup destination (also XPEnology, tn40xx.ko is part of DSM) and the desktop PC. All three also have a 1G connection on the main switch; 10G is just used by IP or with a hosts file for name resolution.

    But that was 2 years ago; now 4- or 8-port 10GBase-T switches are starting to get affordable (though 10GBase-T has higher latency than SFP+).

     

     

    I will be able to test around March 10-15. My items have already shipped.

     

    I don't know much about adding drivers to a loader though (I'll look into it this weekend).

    Can you point me in the right direction on how to do that?

     

    Otherwise we can wait for Quicknick's loader :)
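
    On IG-88's point about checking which *.ko files DSM already ships: that check is easy to script once you have SSH access to the box and Python available. A minimal sketch - the module directories below are my assumption and may differ between DSM builds:

    # check_nic_modules.py - list which of the NIC kernel modules mentioned in
    # this thread are present on the DSM/XPEnology box. Run it over SSH.
    import glob, os

    MODULE_DIRS = ["/lib/modules", "/usr/lib/modules"]       # assumed locations
    WANTED = ["bna", "bnx2x", "tn40xx", "ixgbe", "mlx4_en"]  # drivers discussed in the thread

    found = {}
    for d in MODULE_DIRS:
        for path in glob.glob(os.path.join(d, "**", "*.ko"), recursive=True):
            name = os.path.splitext(os.path.basename(path))[0]
            if name in WANTED:
                found.setdefault(name, path)

    for name in WANTED:
        print(f"{name:10s} {found.get(name, 'not found')}")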

  10. 10 minutes ago, SteinerKD said:

    It was a perfectly running, rock-solid system until the update somehow killed the network connection on it (and maybe the drives as well - can't tell yet). I think the USB stick is just fine, since at one point I actually managed to create a new install on a separate disk, which promptly died when I tried reconnecting the other drives.
     

     

    Did you try the Force method (2nd from the top when you boot up)? Is this a bare-metal or ESXi machine? AMD or Intel?

     

    PS: I would just recommend reinstalling. It sucks, but it should be seamless - from my experience, all files stay intact.

     

  11. Found a compatibility list:

     

    https://community.brocade.com/dtscp75322/attachments/dtscp75322/FibreChannel/248/1/External Brocade Qual and Support Transceiver Matrix.pdf

     

    Brocade 1020 and 58-1000023-01 (10G-SFPP-TWX-0501) seem to be compatible. 

     

    PS: The more I look into fiber networking, the more confused I get by its terminology and compatibility issues. Why wouldn't they settle on a standard? It seems like fiber transceivers are made specifically to lock customers into one company's products and nothing else.

  12. 10 minutes ago, quicknick said:

    It all depends on the use case. I deduced that test4321 just needed a connection between 2 nodes; if it were different, then switches would be the way to go.

    Even with ESXi hosts or KVM hypervisors, traffic can flow between them because switching happens on the hypervisor, even with multiple port groups.

    Or you can get really complicated with VMware and do VXLANs, but that is going into the weeds.

    I keep my traffic east-west, and everything north of my firewalls (i.e. workstations/laptops/WiFi) lives in a 1GbE world. Everything at the firewalls, IDS/IPS and below is 10GbE and all virtual.

     

     

    Yeah, my setup is super simple. I only have 2 machines in my bedroom - one PC, one NAS (right next to each other). I sometimes need to do 3-4TB backups, which is a pain at ~100 MB/s speeds. So 10GbE is the way to go.

    Current spinning-drive speeds are around 150-250 MB/s. SSDs are getting cheaper too (some 1TB drives are going below the $300 mark), so sitting at 100 MB/s makes no sense.

     

     

     

  13. 3 minutes ago, mervincm said:

    I agree with the MikroTik interface comment for the most part; that being said, for a flat switch there's really nothing to do. Additionally, this model runs SwOS, not the RouterOS you are used to - much, much simpler. I have four 10-gig switches in my home lab (Ubiquiti UniFi 16x10G, MikroTik CRS-226 24x1G/2x10G, Quanta L4B 48x1G/2x10G, and a D-Link 24x1G/4x10G), so I have seen a range of less common options (no Cisco/HPE/Juniper). At this time I only really use the D-Link Enterprise line switch.

    A switch allows you to have a single IP address and a single name/DNS entry for your NAS. It's a nice-to-have, :) not a need-to-have.

     

     

     

    I don't have that many needs for my backups - point-to-point should be good enough.

     

    So for now, no need for a switch. 

  14. 23 minutes ago, mervincm said:

    I suggest getting a switch. It will cost you one more cable, but it makes your networking so much easier. Managing 2 networks (1 data and 1 storage) is definitely possible, but it's a mess you don't have to deal with for under $200. I have tried both.

    You can buy a brand-new switch from MikroTik with a couple of 10-gig ports for under $150 (CSS326-24G-2S+RM).

     

     

     

     

     

    Thanks, but I don't like the MikroTik interface... I worked with a MikroTik router and it was like pulling teeth without lidocaine :)

     

    I'll probably invest in a UniFi switch/router, but not right now.

     

     

  15. Have you tried checking the ports?

     

    https://www.yougetsignal.com/tools/open-ports/

     

    You need to open them both on the router and in the firewall on XPEnology.

     

    PS: It could also be that your dynamic IP changed and wasn't updated in KeenDNS - check the IP in KeenDNS: is it the same as the one shown by https://www.google.ca/search?q=myip ?
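
    If you'd rather script that check than use the website, here is a rough sketch: run it from a machine outside your LAN (phone hotspot, VPS). The hostname is a placeholder, and 5000/5001 are just DSM's default web ports - adjust to whatever you actually forward:

    # port_check.py - rough stand-in for the yougetsignal test above.
    import socket

    HOST = "your-name.keenetic.link"   # placeholder KeenDNS name
    PORTS = [5000, 5001]               # DSM default HTTP/HTTPS ports

    for port in PORTS:
        try:
            with socket.create_connection((HOST, port), timeout=5):
                print(f"{HOST}:{port} is reachable")
        except OSError as err:
            print(f"{HOST}:{port} is NOT reachable ({err})")

    If the ports show closed from outside but open from inside the LAN, the port forwarding on the router or the XPEnology firewall is the usual suspect.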

  16. 2 minutes ago, quicknick said:

    Cheap route: for 10GbE it's the Brocade (QLogic) BR-1020s, and they will always be supported by me because they are what I use.

    Expensive route: you can't go wrong with most Intel cards because they are easily supported, and most already are.

     

    So Brocade drivers are already in the standard loader 1.02a/b?

  17. Thanks guys! I got the box set up; now I'm thinking about which way I want to go with the network cards.

     

    1st way - built-in drivers, but expensive cards.

     

    2nd way - cheap network cards, but a lot of time spent building drivers and tinkering.

     

    I'll write up a post on what I did.

     

    Thanks again!

  18. 10 hours ago, mervincm said:

    I have used a range of 10G NICs, optical transceivers as well as DAC cables, and 4 different switches with 10G.

    I don't think the Mellanox ConnectX-2 (I tested one in the past) has any sort of built-in driver support in XPEnology. I have used Broadcom in the past, and now use Intel 82599-based cards.

    If you just want to build a point-to-point 10GbE link, I had great luck with the IBM BNX2 Broadcom cards, their included transceivers, and a standard OM3 LC/LC fiber patch cable.

    I also use the 3615xs version.

     

     

     

     

    Thanks! My next step is to find those on eBay for a good price.

    Which cable did you run between them? Are you running device-to-device or device-to-switch?
