XPEnology Community

zzarac

Transition Member
  • Posts: 11


  1. Well, Threadrippers are great and I was thinking of getting one, but it's realistically out of my budget. The cheapest 3000-series part (24 cores) runs around 1400 Euro, compared to 750 for the 3950X (16 cores), and beyond the CPU I would need a much more expensive motherboard too. For that extra money I would gain only faster encoding times; the day-to-day editing experience would remain basically the same. The budget for this NAS comes out of what was going to go into an external RAID disk array connected over fibre. Since solutions like that are quite expensive, I decided to think outside the box and build a NAS that could do what I need (not as well as the fibre option, of course), but in theory a lot more. I do not expect 2000 MB/s, far from it. Since the theoretical maximum for 8 SSDs (at 550 MB/s each) in RAID5 is around 1700 MB/s, I would be more than happy with 1300-1500 MB/s peak transfer rates. The "heaviest" video footage I regularly use is RED 6K WS, which needs around 180 MB/s per stream. Experience tells me I need 6x the bitrate of one stream to work comfortably and smoothly with no drops, because I use more than one stream at a time, up to four. That works out to about 1080 MB/s, right around the limit of a single 10GbE link, so unless there is some kind of overhead I am not aware of, dual 10G should be almost perfect for my case. Latency-wise, some of the motherboards I mentioned earlier use the SFP+ interface (e.g. the A2SDi-TP8F has both dual RJ-45 and dual SFP+).
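    A quick sanity check of the arithmetic in the post above (a minimal sketch; the per-stream bitrate and the 6x headroom factor come from the post itself, while the 90% link-efficiency figure is my own assumption for TCP/SMB overhead):

      # Required NAS throughput vs. 10GbE link capacity.
      stream_mbps = 180            # one RED 6K WS stream, MB/s (from the post)
      headroom = 6                 # comfort factor: 6x one stream's bitrate (from the post)
      required = stream_mbps * headroom             # 1080 MB/s

      link_mbps = 10 * 1000 / 8    # raw 10GbE capacity: 1250 MB/s
      usable = link_mbps * 0.9     # ~1125 MB/s after protocol overhead (assumed)

      print(f"required {required} MB/s; 1x10GbE ~{usable:.0f} MB/s; 2x10GbE ~{2 * usable:.0f} MB/s")

    With those numbers, one link sits right at the requirement and two links leave comfortable margin, which matches the conclusion above.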
  2. Thanks IG-88. I feel that 40G would be overkill, since 8 drives in RAID5 still theoretically top out below the combined bandwidth of dual 10G. My workstation will have a 2TB local NVMe drive (~5000 MB/s), but that will be used for application cache, and there are simply not enough PCIe lanes (and M.2 slots) to cover my needs in terms of total capacity. I am going with a Ryzen 3950X and an NVIDIA 2080 Super (still a consumer-grade workstation with a low PCIe lane count).
  3. Thank you flyride for your input. Yes, you are right: it would serve exclusively as a file server for a single connected machine, though under heavy usage. Concerning the dual 10GbE, I think you misread my original post. I did state I would use SATA SSDs, not spindles. In this case (8 drives) the math works out to ~1700 MB/s, which would most certainly saturate a single link. Of course I am not expecting that kind of performance in real life, but it certainly warrants link aggregation. All of the mentioned motherboards have dual 10GbE NICs, dual 1GbE NICs and an IPMI port as well, so the only variable is the CPU.
  4. OK, I can see that I may have come across a bit needy here, as if I wanted someone else to do my research for me. That is not the case; I have been looking into this for a long time. The only thing I am really unsure about is how much CPU horsepower is needed, as I have no hands-on experience with this kind of setup. I have found several Supermicro motherboards that seem to tick all the boxes (granted, they have only up to 6 SATA ports, but I still have a full x16 PCIe slot for a controller):
    - X10SDV-4C+-TLN4F (Xeon D-1518, 4x 2.20GHz, 6MB cache, 35W TDP)
    - X10SDV-4C-TLN2F (Xeon D-1520, 4x 2.20GHz, 6MB cache, 45W TDP)
    - X11SDV-4C-TLN2F (Xeon D-2123IT, 4x 2.20GHz, 8MB cache, 60W TDP)
    - X10SDV-6C+-TLN4F (Xeon D-1528, 6x 1.90GHz, 9MB cache, 35W TDP)
    - A2SDi-H-TF (Atom C3758, 8x 2.20GHz, 16MB cache, 25W TDP)
    - A2SDi-TP8F (Atom C3858, 12x 2.00GHz, 12MB cache, 25W TDP)
    Would any of these CPUs be sufficient for the job at hand? If not, what should I be looking for?
    EDIT: I guess I didn't mention one really important bit: this NAS will be used by only one machine. It is for my home workstation, not a studio with multiple computers.
  5. I already have one working XPEnology setup that has served me well for the past five years. I use it mostly for home media and, recently, as an FTP server. I am a freelance video editor, so being able to receive large amounts of video footage without physical contact has become quite important these days. I am currently in the process of getting a new editing workstation, so naturally I started pondering my storage options. I would be thrilled if I could put together a super-fast NAS running XPEnology as working storage for 4K editing. What I have in mind is a system with 8x 2TB SATA SSDs in RAID5, connected to my workstation over 2x 10GbE NICs, with one more 1GbE NIC for my home network. I would prefer a mini-ITX build that could fit inside a case like the SilverStone CS280, but I could live with something larger. What components would you recommend: CPU, motherboard, memory amount, cache drives, NICs and SATA/SAS controller? Thank you very much.
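    For reference, the raw numbers behind that layout (a back-of-the-envelope sketch; the ~1700 MB/s array figure is the theoretical estimate quoted in the replies above, and real-world results will land lower):

      # 8x 2TB SATA SSDs in RAID5 behind dual 10GbE: capacity and bandwidth check.
      n_drives, drive_tb = 8, 2
      usable_tb = (n_drives - 1) * drive_tb   # RAID5 spends one drive's worth of space on parity
      array_mbps = 1700                       # theoretical array estimate from the posts above
      link_mbps = 10 * 1000 / 8               # one raw 10GbE link: 1250 MB/s

      print(f"usable capacity: {usable_tb} TB")
      print(f"array ~{array_mbps} MB/s vs 1x10GbE {link_mbps:.0f} / 2x10GbE {2 * link_mbps:.0f} MB/s")

    That is 14 TB usable, with an array estimate that exceeds a single 10GbE link but not two, which is why the dual-NIC setup is part of the plan.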
  6. Yep, exactly the same procedure I linked above, but in Spanish.... Sent from my XT1063 using Tapatalk
  7. Read this guide to see what you need to control the fans: viewtopic.php?t=2565
    In my case it didn't work because DSM does not include the driver for my Super I/O chip, so I kindly asked Trantor to include it in the next XPEnology build: viewtopic.php?f=2&t=1361&start=780#p58195
    He answered a few days ago: my Super I/O will be supported in the next build. Yeah! viewtopic.php?f=2&t=1361&start=790#p61011
    Sent from my XT1063 using Tapatalk
  8. I would not recommend using 3TB drives, as they turned out to be unreliable (all but HGST): https://www.backblaze.com/blog/hard-dri ... y-q4-2015/
    Sent from my XT1063 using Tapatalk
  9. Hello community, I'd like to ask for a Fintek F71808A Super I/O kernel driver for controlling the fans/temperatures on my motherboard, an MSI H61i-E35 (B3). After reading some posts about enabling fan control in DSM, I installed ipkg (for bash, perl, mktemp and lm-sensors) and followed THIS GUIDE, but it turns out that lm-sensors doesn't play nice with my Super I/O chip, so the fans run at 100% constantly.

    'sensors' output:

      coretemp-isa-0000
      Adapter: ISA adapter
      Physical id 0:  +41.0°C  (high = +82.0°C, crit = +102.0°C)
      Core 0:         +36.0°C  (high = +82.0°C, crit = +102.0°C)
      Core 1:         +41.0°C  (high = +82.0°C, crit = +102.0°C)

    As you can see, the "sensors" command did not show any info about my Super I/O, so I ran "sensors-detect".

    'sensors-detect' output:

      # sensors-detect revision 5946 (2011-03-23 11:54:44 +0100)
      # System: MSI MS-7677
      # Board: MSI H61I-E35 (MS-7677)

      This program will help you determine which kernel modules you need to load to use lm_sensors most effectively. It is generally safe and recommended to accept the default answers to all questions, unless you know what you're doing.

      Some south bridges, CPUs or memory controllers contain embedded sensors. Do you want to scan for them? This is totally safe. (YES/no):
      Silicon Integrated Systems SIS5595...       No
      VIA VT82C686 Integrated Sensors...          No
      VIA VT8231 Integrated Sensors...            No
      AMD K8 thermal sensors...                   No
      AMD Family 10h thermal sensors...           No
      AMD Family 11h thermal sensors...           No
      AMD Family 12h and 14h thermal sensors...   No
      Intel digital thermal sensor...             Success! (driver `coretemp')
      Intel AMB FB-DIMM thermal sensor...         No
      VIA C7 thermal sensor...                    No
      VIA Nano thermal sensor...                  No

      Some Super I/O chips contain embedded sensors. We have to write to standard I/O ports to probe them. This is usually safe. Do you want to scan for Super I/O sensors? (YES/no):
      Probing for Super-I/O at 0x2e/0x2f
      Trying family `National Semiconductor'...       No
      Trying family `SMSC'...                         No
      Trying family `VIA/Winbond/Nuvoton/Fintek'...   No
      Trying family `ITE'...                          No
      Probing for Super-I/O at 0x4e/0x4f
      Trying family `National Semiconductor'...       No
      Trying family `SMSC'...                         No
      Trying family `VIA/Winbond/Nuvoton/Fintek'...   Yes
      Found `Fintek F71808A Super IO Sensors'
          Success! (address 0x290, driver `to-be-written')

      Some systems (mainly servers) implement IPMI, a set of common interfaces through which system health data may be retrieved, amongst other things. We first try to get the information from SMBIOS. If we don't find it there, we have to read from arbitrary I/O ports to probe for such interfaces. This is normally safe. Do you want to scan for IPMI interfaces? (YES/no):
      # DMI data unavailable, please consider installing dmidecode 2.7
      # or later for better results.
      Probing for `IPMI BMC KCS' at 0xca0...    No
      Probing for `IPMI BMC SMIC' at 0xca8...   No

      Some hardware monitoring chips are accessible through the ISA I/O ports. We have to write to arbitrary I/O ports to probe them. This is usually safe though. Yes, you do have ISA I/O ports even if you do not have any ISA slots! Do you want to scan the ISA I/O ports? (yes/NO):

      Lastly, we can probe the I2C/SMBus adapters for connected hardware monitoring devices. This is the most risky part, and while it works reasonably well on most systems, it has been reported to cause trouble on some systems. Do you want to probe the I2C/SMBus adapters now? (YES/no):
      Using driver `i2c-i801' for device 0000:00:1f.3: Intel Cougar Point (PCH)
      modprobe: chdir(3.10.35): No such file or directory
      Failed to load module i2c-i801.

      Now follows a summary of the probes I have just done. Just press ENTER to continue:
      Driver `to-be-written':
        * ISA bus, address 0x290
          Chip `Fintek F71808A Super IO Sensors' (confidence: 9)
      Driver `coretemp':
        * Chip `Intel digital thermal sensor' (confidence: 9)
      Note: there is no driver for Fintek F71808A Super IO Sensors yet. Check http://www.lm-sensors.org/wiki/Devices for updates.
      Do you want to overwrite /etc/sysconfig/lm_sensors? (YES/no): n
      To load everything that is needed, add this to one of the system initialization scripts (e.g. /etc/rc.d/rc.local):
      #----cut here----
      # Chip drivers
      modprobe coretemp
      /usr/local/bin/sensors -s
      #----cut here----
      If you have some drivers built into your kernel, the list above will contain too many modules. Skip the appropriate ones! You really should try these commands right now to make sure everything is working properly. Monitoring programs won't work until the needed modules are loaded.

    It did find my Super I/O chip, but the note says: "there is no driver for Fintek F71808A Super IO Sensors yet". Of course, that meant the 'pwmconfig' command would fail.

    'bash pwmconfig' output:

      # pwmconfig revision 5857 (2010-08-22)
      This program will search your sensors for pulse width modulation (pwm) controls, and test each one to see if it controls a fan on your motherboard. Note that many motherboards do not have pwm circuitry installed, even if your sensor chip supports pwm.

      We will attempt to briefly stop each fan using the pwm controls. The program will attempt to restore each fan to full speed after testing. However, it is ** very important ** that you physically verify that the fans have been set to full speed after the program has completed.

      pwmconfig: There are no pwm-capable sensor modules installed

    However, while googling for a solution, I found evolver56k's Synology DSM 5.2 kernel sources for XPEnology with Fintek F71808A support included, at least according to the documentation for that driver:

      Kernel driver f71882fg
      ======================
      Supported chips (for all of them, addresses scanned: none, address read from Super I/O config space):
        * Fintek F71808E, prefix 'f71808e' (datasheet not public)
        * Fintek F71808A, prefix 'f71808a' (datasheet not public)
        * Fintek F71858FG, prefix 'f71858fg' (datasheet available from the Fintek website)
        * Fintek F71862FG and F71863FG, prefix 'f71862fg' (datasheet available from the Fintek website)
        * Fintek F71869F and F71869E, prefix 'f71869' (datasheet available from the Fintek website)
        * Fintek F71869A, prefix 'f71869a' (datasheet not public)
        * Fintek F71882FG and F71883FG, prefix 'f71882fg' (datasheet available from the Fintek website)
        * Fintek F71889FG, prefix 'f71889fg' (datasheet available from the Fintek website)
        * Fintek F71889ED, prefix 'f71889ed' (datasheet should become available on the Fintek website soon)
        * Fintek F71889A, prefix 'f71889a' (datasheet should become available on the Fintek website soon)
        * Fintek F8000, prefix 'f8000' (datasheet not public)
        * Fintek F81801U, prefix 'f71889fg' (datasheet not public; this is the 64-pin variant of the F71889FG, they have the same device ID and are fully compatible as far as hardware monitoring is concerned)
        * Fintek F81865F, prefix 'f81865f' (datasheet available from the Fintek website)
      Author: Hans de Goede

    As I am new to all this, would someone please point me in the right direction? Can this kernel driver be added? Should I compile the kernel myself (I have never done that and I cannot find any easy-to-follow instructions)? Please help, the fan noise is driving me insane. Thank you so much.
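    For context, once a driver that recognizes the chip is loaded, fan control comes down to the standard Linux hwmon sysfs interface that tools like pwmconfig drive. A minimal sketch of that idea (the hwmon name 'f71808a' is taken from the driver documentation above; the exact attribute paths and the 50% duty-cycle value are illustrative assumptions, not tested on DSM):

      import glob, os

      # Locate the hwmon device registered by the Super I/O driver; per the
      # f71882fg documentation above, the F71808A registers as 'f71808a'.
      # (On older kernels the attributes may sit under hwmonN/device/ instead.)
      for hwmon in glob.glob("/sys/class/hwmon/hwmon*"):
          with open(os.path.join(hwmon, "name")) as f:
              if f.read().strip() == "f71808a":
                  break
      else:
          raise SystemExit("Super I/O hwmon device not found; is the driver loaded?")

      # Temperatures are reported in millidegrees Celsius.
      with open(os.path.join(hwmon, "temp1_input")) as f:
          temp_c = int(f.read()) / 1000

      # Switch pwm1 to manual mode (1) and set a 0-255 duty cycle (~50% here).
      with open(os.path.join(hwmon, "pwm1_enable"), "w") as f:
          f.write("1")
      with open(os.path.join(hwmon, "pwm1"), "w") as f:
          f.write("128")

      print(f"temp1: {temp_c:.1f} C; fan duty set to 128/255")

    This is essentially what the fan-control guides linked in these posts automate: read a temperature, pick a duty cycle, write it to pwm1.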
  10. Unas Case

    A month ago I bought the 4-bay model (NSC-400) and I couldn't be more satisfied with it. It wasn't expensive either: $150 including a 250W PSU. Just make sure your ITX board is low profile, because there is not much room...
  11. SW:
    BootLoader: XPEnoboot_DS3615xs_5.2-5644.5.img
    DSM: DSM_5.2-5644 Update 5
    HW:
    HDD: 4 x 4TB WD40EFRX
    Memory: 2 x 2GB Crucial DDR3 1066MHz
    Motherboard: MSI H61i-E35 (B3)
    Processor: Intel® Celeron® Processor 550
    Case: U-NAS NSC-400

    I was about to buy an entry-level Synology DS, but then I found out about XPEnology, so naturally I tried it right away. No problems whatsoever. I have had this little ITX board for several years as my Ubuntu server/HTPC, and it has served me well. Seeing that installation and setup went flawlessly, I hope it will keep making me happy.