XPEnology Community

PnoT
Member · 34 posts

Everything posted by PnoT

  1. DSM 6.1.x Loader

     I've searched the thread and found a lot of people asking for Hyper-V support (specifically the networking), and was curious whether anyone has been able to compile or add the proper drivers yet?
  2. I agree that the development side of XPEnology seems to have dropped off a bit, but you can't hold anyone at fault. I'm still on 5.x at the moment and will probably jump to FreeNAS 10 when it's released on 03/06/2017.
  3. LSI

     You need to flash whatever card you want to use into IT mode so it acts as a basic HBA. If you attach your current drives to it and boot up, you'll be fine; the worst-case scenario is that you'll have to do a migration, and you can keep all your settings. Once you're up and running on the new HBA, simply add your other drives into the mix and expand like normal.
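     For reference, the crossflash itself is usually done with LSI's sas2flash utility. A rough sketch only, not a verified procedure for any specific card: the firmware and boot-ROM filenames below are assumptions for a 9211-8i IT crossflash and vary by card and firmware release.

     ```shell
     # Sketch only -- run from LSI's sas2flash (EFI/DOS/Linux builds exist);
     # the 2118it.bin / mptsas2.rom filenames are assumptions for a 9211-8i.
     sas2flash -listall                         # confirm the controller is visible
     sas2flash -o -e 6                          # erase the existing (IR) flash
     sas2flash -o -f 2118it.bin -b mptsas2.rom  # write IT firmware (+ optional boot ROM)
     ```

     Don't power-cycle the card between the erase and the flash; if the SAS address is lost it has to be re-programmed with `sas2flash -o -sasadd`.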
  4. Thanks for all of the advice, but 8 cores just don't cut it with my current usage while keeping the fan noise at a level that doesn't run my wife out of the house. I was hoping there was a setting somewhere we could adjust, but from the replies it doesn't look like it.
  5. I have 2 x X5675s in my XPEnology box and it's only seeing 8 cores total. Is there a way to utilize all of the cores, as these are 6 cores (12 threads) each?

     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 8/0x24 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 9/0x30 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 10/0x32 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 11/0x34 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 12/0x1 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 13/0x3 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 14/0x5 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 15/0x11 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 16/0x13 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 17/0x15 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 18/0x21 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 19/0x23 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 20/0x25 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 21/0x31 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 22/0x33 ignored.
     [    0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 23/0x35 ignored.
     [    0.000000] smpboot: 24 Processors exceeds NR_CPUS limit of 8

     This post talks about the same thing but has no resolution or recommendations: viewtopic.php?f=2&t=7812&p=43584&hilit=max+cpu#p43584
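     Worth noting for anyone hitting the same wall: the messages above come from NR_CPUS, a limit fixed when the kernel is compiled (8 in this kernel build, per the log), not a BIOS or DSM setting, so no runtime tweak raises it. A quick sanity check of what the running kernel actually brought online:

     ```shell
     # How many logical CPUs the kernel brought up (capped by NR_CPUS at build time):
     grep -c ^processor /proc/cpuinfo
     # Same question answered via coreutils:
     nproc
     ```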
  6. I updated the original post with a ton of information, as I felt it was lacking quite a bit initially.
  7. Wow, thank you for helping out. I've bookmarked those sites for future use; that's pretty amazing. My fix was to remove the Samsung drives, as there are known issues with them dropping out of RAID sets on LSI cards, and since then I haven't had a single problem. I will swap the cable out, try a different slot, and give the batch of drives another try. I should have been more specific and said I was on P19, as I've seen the issues revolving around P20, but thank you for pointing it out.
  8. I'm seeing slow performance on the SSDs in my SM chassis and am unable to pinpoint the source of the problem. The whole array sits in the sub-200MB/sec range; today those numbers have increased a bit, but they're still below what I would expect from a RAID 0 of all SSDs with this setup. I've pieced together a bunch of information and tried to supply everything I can think of in this post, in the hopes that someone can help me diagnose the problem. One thing I couldn't find is a reliable way to determine the link speed of each drive, as most of the commands I found on the net came back with "".

     Current Setup
     • X8SIL-F
     • Xeon 3450
     • 16GB ECC RAM
     • SuperMicro SC846 chassis, SAS2 backplane
     • IBM M1015 flashed to an LSI 9211-8i in IT mode, running P19 firmware
     • Single cable from M1015 P0 to PRI_J0 on the backplane
     • 5592.2 Update 3

     Drive / RAID layout
     • 8 x 4TB WD Red + 2 x 5TB WD Red in SHR
     • 4 x 256GB Samsung 850 Pro in RAID 0

     Drive Info:

     /dev/sdq:
     ATA device, with non-removable media
         Model Number:       Samsung SSD 850 PRO 256GB
         Serial Number:      S1SUNSAFC81422B
         Firmware Revision:  EXM02B6Q
         Transport:          Serial, ATA8-AST, SATA 1.0a, SATA II Extensions, SATA Rev 2.5, SATA Rev 2.6, SATA Rev 3.0
     Standards:
         Used: unknown (minor revision code 0x0039)
         Supported: 9 8 7 6 5
         Likely used: 9
     Configuration:
         Logical        max     current
         cylinders      16383   16383
         heads          16      16
         sectors/track  63      63
         --
         CHS current addressable sectors:   16514064
         LBA    user addressable sectors:  268435455
         LBA48  user addressable sectors:  500118192
         Logical  Sector size:                   512 bytes
         Physical Sector size:                   512 bytes
         Logical Sector-0 offset:                  0 bytes
         device size with M = 1024*1024:      244198 MBytes
         device size with M = 1000*1000:      256060 MBytes (256 GB)
         cache/buffer size  = unknown
         Nominal Media Rotation Rate: Solid State Device
     Capabilities:
         LBA, IORDY(can be disabled)
         Queue depth: 32
         Standby timer values: spec'd by Standard, no device specific minimum
         R/W multiple sector transfer: Max = 1   Current = 1
         DMA: mdma0 mdma1 mdma2 udma0 udma1 udma2 udma3 udma4 udma5 *udma6
              Cycle time: min=120ns recommended=120ns
         PIO: pio0 pio1 pio2 pio3 pio4
              Cycle time: no flow control=120ns  IORDY flow control=120ns
     Commands/features:
         Enabled Supported:
            *    SMART feature set
                 Security Mode feature set
            *    Power Management feature set
            *    Write cache
            *    Look-ahead
            *    Host Protected Area feature set
            *    WRITE_BUFFER command
            *    READ_BUFFER command
            *    NOP cmd
            *    DOWNLOAD_MICROCODE
                 SET_MAX security extension
            *    48-bit Address feature set
            *    Device Configuration Overlay feature set
            *    Mandatory FLUSH_CACHE
            *    FLUSH_CACHE_EXT
            *    SMART error logging
            *    SMART self-test
            *    General Purpose Logging feature set
            *    WRITE_{DMA|MULTIPLE}_FUA_EXT
            *    64-bit World wide name
                 Write-Read-Verify feature set
            *    WRITE_UNCORRECTABLE_EXT command
            *    {READ,WRITE}_DMA_EXT_GPL commands
            *    Segmented DOWNLOAD_MICROCODE
            *    Gen1 signaling speed (1.5Gb/s)
            *    Gen2 signaling speed (3.0Gb/s)
            *    Gen3 signaling speed (6.0Gb/s)
            *    Native Command Queueing (NCQ)
            *    Phy event counters
            *    unknown 76[15]
            *    DMA Setup Auto-Activate optimization
                 Device-initiated interface power management
            *    Asynchronous notification (eg. media change)
            *    Software settings preservation
                 unknown 78[8]
            *    SMART Command Transport (SCT) feature set
            *    SCT LBA Segment Access (AC2)
            *    SCT Error Recovery Control (AC3)
            *    SCT Features Control (AC4)
            *    SCT Data Tables (AC5)
            *    reserved 69[4]
            *    DOWNLOAD MICROCODE DMA command
            *    SET MAX SETPASSWORD/UNLOCK DMA commands
            *    WRITE BUFFER DMA command
            *    READ BUFFER DMA command
            *    Data Set Management TRIM supported (limit 8 blocks)
     Security:
         Master password revision code = 65534
         supported, not enabled, not locked, not frozen, not expired: security count
         supported: enhanced erase
         2min for SECURITY ERASE UNIT. 2min for ENHANCED SECURITY ERASE UNIT.
     Logical Unit WWN Device Identifier: 50025388a08df889
         NAA       : 5
         IEEE OUI  : 002538
         Unique ID : 8a08df889
     Checksum: correct

     { /volume3}-> dmesg | grep "Write cache"
     [    8.714585] sd 0:0:15:0: [sdp] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.714698] sd 0:0:8:0: [sdi] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.715425] sd 0:0:9:0: [sdj] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.716042] sd 0:0:10:0: [sdk] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.716334] sd 0:0:12:0: [sdm] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.716425] sd 0:0:11:0: [sdl] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.716564] sd 0:0:14:0: [sdo] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.769484] sd 0:0:13:0: [sdn] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.772838] sd 0:0:5:0: [sdf] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.773529] sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.773881] sd 0:0:6:0: [sdg] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.775287] sd 0:0:2:0: [sdc] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.778207] sd 0:0:3:0: [sdd] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.778531] sd 0:0:1:0: [sdb] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.779942] sd 0:0:4:0: [sde] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [    8.790654] sd 0:0:7:0: [sdh] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [   66.494905] sd 7:0:0:0: [synoboot] Write cache: disabled, read cache: enabled, doesn't support DPO or FUA
     [236816.271293] sd 0:0:17:0: [sdq] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [236830.021295] sd 0:0:18:0: [sdr] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [236838.521456] sd 0:0:19:0: [sds] Write cache: enabled, read cache: enabled, supports DPO and FUA
     [236927.771667] sd 0:0:20:0: [sdt] Write cache: enabled, read cache: enabled, supports DPO and FUA

     The hdparm results per drive, and for the overall array, look right on the money, so why are the dd tests so horrible?

     Disk /dev/sdq: 256GB
     Disk /dev/sdr: 256GB
     Disk /dev/sds: 256GB
     Disk /dev/sdt: 256GB

     hdparm -tT --direct /dev/sdr
     /dev/sdr:
      Timing O_DIRECT cached reads:   946 MB in 2.00 seconds = 472.71 MB/sec
      Timing O_DIRECT disk reads:    1468 MB in 3.00 seconds = 489.13 MB/sec
     hdparm -tT --direct /dev/sds
     /dev/sds:
      Timing O_DIRECT cached reads:   966 MB in 2.00 seconds = 482.10 MB/sec
      Timing O_DIRECT disk reads:    1476 MB in 3.00 seconds = 491.79 MB/sec
     hdparm -tT --direct /dev/sdt
     /dev/sdt:
      Timing O_DIRECT cached reads:   962 MB in 2.00 seconds = 480.54 MB/sec
      Timing O_DIRECT disk reads:    1464 MB in 3.00 seconds = 487.95 MB/sec
     hdparm -tT --direct /dev/sdq
     /dev/sdq:
      Timing O_DIRECT cached reads:   964 MB in 2.00 seconds = 481.62 MB/sec
      Timing O_DIRECT disk reads:    1466 MB in 3.00 seconds = 488.58 MB/sec
     hdparm -tT --direct /dev/vg3/volume_3
     /dev/vg3/volume_3:
      Timing O_DIRECT cached reads:  2880 MB in 2.00 seconds = 1439.44 MB/sec
      Timing O_DIRECT disk reads:    4570 MB in 3.00 seconds = 1523.11 MB/sec

     Here is the query on my LSI controller to determine link speed, which looks like x8:

     lspci -vvv -d 1000:0072
     02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
         Subsystem: Device 1028:1f1c
         Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
         Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- SERR-
         Latency: 0, Cache Line Size: 32 bytes
         Interrupt: pin A routed to IRQ 16
         Region 0: I/O ports at c000 [size=11]
         Region 1: Memory at fb3b0000 (64-bit, non-prefetchable) [size=64K]
         Region 3: Memory at fb3c0000 (64-bit, non-prefetchable) [size=256K]
         Expansion ROM at fb400000 [disabled] [size=1M]
         Capabilities: [50] Power Management version 3
             Flags: PMEClk- DSI- D1+ D2+ AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
             Status: D0 NoSoftRst+ PME-Enable- DSel=0 DScale=0 PME-
         Capabilities: [68] Express (v2) Endpoint, MSI 00
             DevCap: MaxPayload 4096 bytes, PhantFunc 0, Latency L0s <64ns, L1 <1us
                 ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset+
             DevCtl: Report errors: Correctable+ Non-Fatal+ Fatal+ Unsupported+
                 RlxdOrd+ ExtTag- PhantFunc- AuxPwr- NoSnoop+ FLReset-
                 MaxPayload 256 bytes, MaxReadReq 512 bytes
             DevSta: CorrErr+ UncorrErr- FatalErr- UnsuppReq+ AuxPwr- TransPend-
             LnkCap: Port #0, Speed 5GT/s, Width x8, ASPM L0s, Exit Latency L0s <64ns, L1 <1us
                 ClockPM- Surprise- LLActRep- BwNot-
             LnkCtl: ASPM Disabled; RCB 64 bytes Disabled- CommClk-
                 ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
             LnkSta: Speed 5GT/s, Width x8, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
             DevCap2: Completion Timeout: Range BC, TimeoutDis+, LTR-, OBFF Not Supported
             DevCtl2: Completion Timeout: 50us to 50ms, TimeoutDis-, LTR-, OBFF Disabled
             LnkCtl2: Target Link Speed: 5GT/s, EnterCompliance- SpeedDis-
                 Transmit Margin: Normal Operating Range, EnterModifiedCompliance- ComplianceSOS-
                 Compliance De-emphasis: -6dB
             LnkSta2: Current De-emphasis Level: -6dB, EqualizationComplete-, EqualizationPhase1-
                 EqualizationPhase2-, EqualizationPhase3-, LinkEqualizationRequest-
         Capabilities: [d0] Vital Product Data
             Unknown small resource type 00, will not decode more.
         Capabilities: [a8] MSI: Enable- Count=1/1 Maskable- 64bit+
             Address: 0000000000000000  Data: 0000
         Capabilities: [c0] MSI-X: Enable+ Count=15 Masked-
             Vector table: BAR=1 offset=0000e000
             PBA: BAR=1 offset=0000f800
         Capabilities: [100 v1] Advanced Error Reporting
             UESta: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
             UEMsk: DLP- SDES- TLP- FCP- CmpltTO- CmpltAbrt- UnxCmplt- RxOF- MalfTLP- ECRC- UnsupReq- ACSViol-
             UESvrt: DLP+ SDES+ TLP- FCP+ CmpltTO- CmpltAbrt- UnxCmplt- RxOF+ MalfTLP+ ECRC- UnsupReq- ACSViol-
             CESta: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
             CEMsk: RxErr- BadTLP- BadDLLP- Rollover- Timeout- NonFatalErr+
             AERCap: First Error Pointer: 00, GenCap+ CGenEn- ChkCap+ ChkEn-
         Capabilities: [138 v1] Power Budgeting <?>
         Kernel driver in use: mpt2sas

     If you look at the performance of volume2, which consists of the NAS Reds (SHR), it's not bad when using dd, and the speeds are very consistent. Keep in mind that this image only shows 7 of the 9 drives in the array due to a limitation of the resource-monitor tool, so if you add another 2 drives @ 90MB/sec, that's well over 1GB/sec.

     dd if=/dev/zero of=/volume2/test.bin bs=1M count=500M

     The same command on the RAID 0 of Samsung 850 Pros nets some pretty crappy results. If you look closely you can see huge swings in performance, from 50MB/sec to almost 200MB/sec per drive, which is completely the opposite of how the SHR spinning disks are performing.

     dd if=/dev/zero of=/volume3/test.bin bs=1M count=500M
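     On the open link-speed question above, a sketch of two checks that usually answer it; smartctl is an assumption here, since it isn't bundled with DSM. The dd line also shows a cache-bypassing variant, since writes of /dev/zero through the page cache mostly measure RAM:

     ```shell
     # Sketch, assuming smartctl is installed (it is not part of stock DSM):
     smartctl -i /dev/sdq | grep -i 'sata version'    # negotiated speed, e.g. "current: 6.0 Gb/s"
     # Drives behind the LSI HBA expose their PHY rate via sysfs (mpt2sas):
     cat /sys/class/sas_phy/*/negotiated_linkrate 2>/dev/null

     # For write tests, bypass the page cache so dd measures the disks, not RAM.
     # dd's count is in blocks: bs=1M count=4096 writes 4 GiB (count=500M would
     # request 500 TB and never finish).
     dd if=/dev/zero of=/volume3/test.bin bs=1M count=4096 oflag=direct conv=fdatasync
     ```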
  9. I've finally figured out what was happening, and it has to do with the 2TB Samsung F3s. These drives have been rock solid since the dawn of time, but apparently they time out the controller under XPEnology for some odd reason. I've had no issues with them in my 1812/1815, so I'm not sure at this point whether it's due to the LSI driver that was recently updated in XPEnology or some odd incompatibility between the controller and the drives. The firmware on the controller and the drives is the latest, and there are no failures on the drives themselves.
  10. I've been running opkg since it was released, and the developer is always fast to respond to issues and even compiles off-the-wall apps as well. I HIGHLY recommend it if you're still on ipkg and can use the updated binaries. A few apps I needed updates for were LFTP and stunnel (when I was running a 1812+).
  11. I'm on the latest and greatest and am seeing what looks like timeouts while expanding a volume. During these timeouts all of my volumes are inaccessible and the NAS pretty much freezes until it's over, which takes about 10-45 seconds.

      Aug 15 11:06:00 SYN kernel: [686987.368171] cdb[0]=0x28: 28 00 d3 8c 03 e0 00 03 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857817] cdb[0]=0x28: 28 00 d3 8c 07 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857847] cdb[0]=0x28: 28 00 d3 8c 07 e0 00 01 00 00
      Aug 15 11:06:02 SYN kernel: [686987.857861] cdb[0]=0x28: 28 00 d3 8c 08 e0 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857874] cdb[0]=0x28: 28 00 d3 8c 09 60 00 02 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857887] cdb[0]=0x28: 28 00 d3 8c 0b e0 00 02 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857900] cdb[0]=0x28: 28 00 d3 8c 0e 60 00 02 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857914] cdb[0]=0x28: 28 00 d3 8c 10 e0 00 01 00 00
      Aug 15 11:06:02 SYN kernel: [686987.857927] cdb[0]=0x28: 28 00 d3 8c 11 e0 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857941] cdb[0]=0x28: 28 00 d3 8c 12 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857954] cdb[0]=0x28: 28 00 d3 8c 12 e0 00 00 78 00
      Aug 15 11:06:02 SYN kernel: [686987.857967] cdb[0]=0x28: 28 00 d3 8c 13 58 00 00 08 00
      Aug 15 11:06:02 SYN kernel: [686987.857980] cdb[0]=0x28: 28 00 d3 8c 13 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857994] cdb[0]=0x28: 28 00 d3 8c 13 e0 00 01 00 00
      Aug 15 11:06:02 SYN kernel: [686987.858037] cdb[0]=0x28: 28 00 d3 8c 14 e0 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858050] cdb[0]=0x28: 28 00 d3 8c 15 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858064] cdb[0]=0x28: 28 00 d3 8c 15 e0 00 02 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858077] cdb[0]=0x28: 28 00 d3 8c 18 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858091] cdb[0]=0x28: 28 00 d3 8c 18 e0 00 02 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858104] cdb[0]=0x28: 28 00 d3 8c 1b 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858117] cdb[0]=0x28: 28 00 d3 8c 1b e0 00 01 00 00
      Aug 15 11:06:02 SYN kernel: [686987.858130] cdb[0]=0x28: 28 00 d3 8c 1c e0 00 01 00 00
      Aug 15 11:06:02 SYN kernel: [686987.858144] cdb[0]=0x28: 28 00 d3 8c 1d e0 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858157] cdb[0]=0x28: 28 00 d3 8c 1e 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858170] cdb[0]=0x28: 28 00 d3 8c 1e e0 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858184] cdb[0]=0x28: 28 00 d3 8c 1f 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858197] cdb[0]=0x28: 28 00 d3 8c 1f e0 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858210] cdb[0]=0x28: 28 00 d3 8c 20 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858224] cdb[0]=0x28: 28 00 d3 8c 20 e0 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858237] cdb[0]=0x28: 28 00 d3 8c 21 60 00 01 00 00
      Aug 15 11:06:02 SYN kernel: [686987.858250] cdb[0]=0x28: 28 00 d3 8c 22 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.858263] cdb[0]=0x28: 28 00 d3 8c 22 e0 00 04 00 00

      This is also showing up, which looks like some type of timeout:

      Aug 15 05:15:53 SYN kernel: [665966.766989] mpt2sas0: log_info(0x31120436): originator(PL), code(0x12), sub_code(0x0436)

      It looks like the LSI SAS card is having issues. Any ideas?
  12. I can get the InfiniBand portion to work properly, but it's not recognizing 10Gb Ethernet on the other port. The device shows up from the command line but not in the GUI (which it did before), and the protocol remains "UNSPEC" instead of Ethernet. Any ideas?
  13. su -l postgres -c "exec /usr/syno/pgsql/bin/pg_ctl stop -s -m fast"
  14. Look at the README file in his zip, as it tells you exactly what to do to get things working. I will say that after upgrading to the latest build these modules don't seem to work anymore, as my card is no longer detected. When trying to insert the module manually:

      insmod /lib/modules/mlx4_ib.ko
      insmod: can't insert '/lib/modules/mlx4_ib.ko': invalid module format
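      For what it's worth, "invalid module format" usually means the .ko was built against a different kernel version than the one now running, which would fit a loader upgrade. A quick check (module path as in the post above):

      ```shell
      # Kernel the box is currently running:
      uname -r
      # Kernel the module was built for -- the vermagic string must match the above:
      modinfo /lib/modules/mlx4_ib.ko 2>/dev/null | grep -i vermagic
      ```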
  15. Thanks for reporting back; I was just about to pull the trigger on 5.2 later this afternoon. You saved me some frustration, so thanks.
  16. I haven't made the plunge to 5.2 yet, so you guys are specifically talking about 5.2 + Update 1, correct? I'd hate to upgrade to 5.2 and lose my iSCSI. Sorry to hear you're having issues; hopefully they have a fix in the works.
  17. How hard would it be to compile the applications that go with this InfiniBand package?
  18. Oh man, check out my post. You rock!
  19. First of all I'd like to thank Trantor for compiling the modules here: You've seriously made my week... thank you! Here are some screenies of it in action, and my system specs: Supermicro X8SIL-F with an Intel® Xeon® X3450 @ 2.67GHz and 16GB of RAM.

      First up is me querying the switch; it reports 4x10Gb detected and enabled for "MT25408", which is apparently the name the driver reports. In reality this should be the hostname, but I'm not picky. This is my initial test from a Windows Server 2012 R2 server with 4x256GB SSDs in RAID 10 to the XPEnology box. Yes sir, 1GB/sec! I should be able to get close to 3-4GB/sec once I can get a RAM drive set up on my XPEnology box; I'm having some difficulty getting that done, so if anyone has tips, let me know. Here are a few graphs of the interface and what it's showing. As you can see, my writes to disk are the bottleneck. If that wasn't enough, here is my system utilization while all of this is going on, which is pretty low. Next steps are to create a RAM drive on the Windows machine and the XPEnology box and test again to see what the upper limits of this setup are.
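      On the RAM-drive question for the Linux side, a tmpfs mount is usually enough for throughput testing. A minimal sketch, with the mount point and size as assumptions (requires root; contents vanish on unmount or reboot):

      ```shell
      # RAM-backed scratch area for benchmarking (size=8g is an assumption):
      mkdir -p /mnt/ramdisk
      mount -t tmpfs -o size=8g tmpfs /mnt/ramdisk
      # Then point the benchmark at it, e.g.:
      dd if=/dev/zero of=/mnt/ramdisk/test.bin bs=1M count=1024
      ```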
  20. You sir are a rockstar! I'm going to try to test these tonight and report back. The instructions are inside the README file in the zip.
  21. I rebooted my box this afternoon after installing a Mellanox 10GbE card, and all hell broke loose. The machine failed to boot into the OS properly, and even after removing the 10GbE NIC the same behaviour persisted. I've tried the following:

      1. Removing the NIC
      2. Booting off a clean XPEnoboot ISO
      3. Swapping network cables around

      The issues I'm seeing now that I don't recall seeing before are:

      :: Loading module adt7475 [ OK ]
      Can not detect ADT device, Retry: 1...
      :: Unloading module adt7475 [ OK ]
      :: Unloading module i2c-i801 [ OK ]
      :: Loading module i2c-i801 [ OK ]
      :: Loading module adt7475
      :: Checking upgrade file
      Exit on error [2] .noroot exists...

      There are a few more errors; here are links to videos that show the whole process.

      Normal Boot: https://www.dropbox.com/s/9qwm0noon0lo3fx/xpenology.avi?dl=0
      Debug Boot: https://www.dropbox.com/s/ugvs8x9d7ye3ioa/xpenoboot_error_debug.avi?dl=0

      In Synology Assistant it shows "migratable", but I don't want to do anything at this point that would jeopardize my data. I'm also attaching a debug video in which there seem to be some superblock issues, but according to this that's normal? http://forum.synology.com/enu/viewtopic.php?f=19&t=42615

      *UPDATE* I can manage to do a migration install and get the system back up and running, but when it boots all of my volumes are crashed. I do a "repair" and let it sync up swap and root, then edit /etc/synoinfo.cfg to accommodate my other drives (20 total), and upon reboot it starts up and then reboots on its own. Once the system comes up after the second boot, it's back to square one and the errors above persist.
  22. I know you're a busy man but have you had time to see how hard it would be to incorporate those InfiniBand drivers?
  23. Unfortunately this didn't work for me, as the resize command fails with "The combination of flex_bg and !resize_inode features is not supported by resize2fs". This guide also did not work for me because of an issue with e2fsprogs 1.42.9 where it does not do anything or show progress. A bug has been submitted, and users have reported that going back to 1.42.8 works, but I couldn't find any howto on accomplishing that, so I was stuck. *EDIT* I was originally using the live CD for Ubuntu 14.04, whose e2fsprogs is 1.42.9, but decided to try the newer Ubuntu 15.04, which has 1.42.12, and that worked flawlessly. I hope this helps someone out.
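      For anyone hitting the same wall, it's worth checking which e2fsprogs a live environment ships before starting the resize. A sketch; the device name is a placeholder:

      ```shell
      # e2fsprogs version -- resize2fs prints it on its first line of output:
      resize2fs 2>&1 | head -n 1
      # Whether the filesystem carries the flex_bg / resize_inode features named
      # in the error above (/dev/sdX1 is a placeholder for the data partition):
      dumpe2fs -h /dev/sdX1 2>/dev/null | grep -i 'filesystem features'
      ```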