PnoT

Members
  • Content Count: 34
  • Joined
  • Last visited

Community Reputation

0 Neutral

About PnoT

  • Rank: Junior Member


  1. PnoT

    DSM 6.1.x Loader

    I've searched the thread and found a lot of people asking for Hyper-V support (specifically the networking). Has anyone been able to compile or add the proper drivers yet?
  2. I agree that the development side of XPEnology seems to have dropped off a bit, but you can't hold it against anyone. I'm still on 5.x at the moment and will probably jump to FreeNAS 10 when it's released on 03/06/2017.
  3. PnoT

    LSI

    You need to flash whatever card you want to use into IT mode so it acts as a basic HBA; if you attach your current drives to it and boot up, you'll be fine. The worst-case scenario is that you'll have to do a migration, and you can keep all your settings. Once you're up and running on the new HBA, simply add your other drives into the mix and expand as normal. (A rough outline of the IT-mode flash is sketched after this list.)
  4. Thanks for all of the advice, but 8 cores just doesn't cut it for my current usage while keeping fan noise at a level that doesn't run my wife out of the house. I was hoping there would be a setting somewhere we could adjust, but from the replies it doesn't look like there is.
  5. I have 2 x X5675s in my XPEnology box and it's only seeing 8 cores total. Is there a way to utilize all cores, as these are 6 cores each? (A quick way to confirm the limit is sketched after this list.)
     [ 0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 8/0x24 ignored.
     [ 0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 9/0x30 ignored.
     [ 0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 10/0x32 ignored.
     [ 0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 11/0x34 ignored.
     [ 0.000000] ACPI: NR_CPUS/possible_cpus limit of 8 reached. Processor 12/0x1 ignored. [
  6. I updated the original post with a ton of information as I felt there was a lot lacking in it initially.
  7. Wow, thank you for helping out; I've bookmarked those sites for future use, and that's pretty amazing. My fix was to remove the Samsung drives, as there are known issues with them dropping out of RAID sets with LSI cards, and since then I haven't had a single problem. I will swap the cable out, try a different slot, and give the batch of drives another try. I should have been more specific and said I was on P19; I've seen the issues revolving around P20, but thank you for pointing it out.
  8. I seem to be having slow performance from the SSDs in my SM chassis and am unable to pinpoint the source of the problem. The entire array is in the sub-200 MB/sec range; today those numbers have increased a bit, but they are still below what I would expect from a RAID 0 of all SSDs with this setup. I've pieced together a bunch of information and tried my best to supply everything I can think of in this post to describe my issue, in the hopes that someone can help me diagnose the problem. (A simple per-drive throughput check is sketched after this list.) One thing I couldn't find is a reliable way to determ
  9. I've finally figured out what was happening, and it has to do with the 2TB Samsung F3s. These drives have been rock solid since the dawn of time, but apparently they time out the controller in XPEnology for some odd reason. I've had no issues with them in my 1812/1815, so I'm not sure, at this point, whether it's due to the LSI driver that was recently updated in XPEnology or some odd incompatibility between the controller and the drives. The firmware on both the controller and the drives is the latest, and there are no failures on the drives themselves.
  10. I've been running opkg since the developer released it; he is always fast to respond to issues and even compiles off-the-wall apps as well. I HIGHLY recommend it if you're still on ipkg and can use the updated binaries. A few apps that I needed updated were LFTP and stunnel (when I was running an 1812+). (A minimal usage example is sketched after this list.)
  11. I'm on the latest and greatest and am seeing what look like timeouts when expanding a volume. During these timeouts none of my volumes are accessible and the NAS pretty much freezes until it's over, which takes about 10-45 seconds.
      Aug 15 11:06:00 SYN kernel: [686987.368171] cdb[0]=0x28: 28 00 d3 8c 03 e0 00 03 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857817] cdb[0]=0x28: 28 00 d3 8c 07 60 00 00 80 00
      Aug 15 11:06:02 SYN kernel: [686987.857847] cdb[0]=0x28: 28 00 d3 8c 07 e0 00 01 00 00
      Aug 15 11:06:02 SYN kernel: [686987.857861] cdb[0]=0x28: 28 00 d3 8c 08 e0 00 00 80 00
      Aug 15 11:06
  12. I can get the InfiniBand portion to work properly, but it's not recognizing 10Gb Ethernet on the other port. The device shows up from the command line but not in the GUI (which it did before), and the protocol remains "UNSPEC" instead of Ethernet. Any ideas? (One thing to try is sketched after this list.)
  13. su -l postgres -c "exec /usr/syno/pgsql/bin/pg_ctl stop -s -m fast"   # silently stops DSM's bundled PostgreSQL with a fast shutdown
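
For the IT-mode flash mentioned in post 3, here is a minimal sketch, assuming an LSI SAS2008-family card (9211-8i, M1015, etc.) and the sas2flash utility run from a DOS or EFI boot stick; the firmware file names are placeholders that vary by card model:

  sas2flash -listall                          # note the controller and its SAS address first
  sas2flash -o -e 6                           # erase the existing (IR) firmware
  sas2flash -o -f 2118it.bin -b mptsas2.rom   # write the IT firmware and optional boot ROM

Flashing the wrong image can brick the card, so follow a crossflash guide for your exact model.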
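
For the core-count question in post 5, a quick check, assuming shell access to the box, to confirm how many cores the kernel actually brought up and that the cap comes from the kernel build rather than the BIOS:

  grep -c ^processor /proc/cpuinfo        # cores the running kernel is using
  dmesg | grep -i nr_cpus                 # the 'limit of 8 reached ... ignored' messages
  zcat /proc/config.gz | grep NR_CPUS     # compiled-in limit, only if the kernel exposes its config

NR_CPUS is a compile-time constant, so there is no runtime setting that can raise it above the value the DSM kernel was built with.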
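
For the SSD troubleshooting in post 8, a crude, read-only way to measure per-drive sequential throughput and rule out a single slow member, assuming the members show up as /dev/sda through /dev/sdd (adjust the device names to your system):

  for d in /dev/sd[a-d]; do echo "$d"; hdparm -t "$d"; done   # buffered sequential read per drive
  dd if=/dev/sda of=/dev/null bs=1M count=2048 iflag=direct   # direct reads that bypass the page cache

If hdparm isn't present on the box, the dd line alone is enough to compare the drives against each other.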
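
A minimal opkg example for post 10, assuming the bootstrap is already installed and that the feed uses the package names lftp and stunnel:

  opkg update                 # refresh the package lists
  opkg install lftp stunnel   # install the updated binaries mentioned above
  opkg upgrade                # upgrade everything already installed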
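
For the UNSPEC port in post 12, one thing worth trying, assuming a ConnectX VPI card driven by mlx4_core (the PCI address and port number below are placeholders): the per-port link type can be read and forced through sysfs, and the mlx4_en module has to be loaded for the Ethernet side to appear.

  cat /sys/bus/pci/devices/0000:03:00.0/mlx4_port2            # current link type: ib, eth, or auto
  echo eth > /sys/bus/pci/devices/0000:03:00.0/mlx4_port2     # force the second port to Ethernet
  lsmod | grep mlx4_en                                        # confirm the Ethernet driver is loaded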