Eeso

Members
  • Content Count

    22
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Eeso

  • Rank
    Junior Member


  1. The H730 didn't support having the card in JBOD/HBA mode with disks attached, giving the errors I've reported earlier, so that is what I want to test. Dell releases its own drivers for the card; I think these are the ones that will work, as people I've found on the internet hint that it works after recompiling some drivers for the kernel. I will confirm this with the technical support at Dell. They only release the drivers and the source for Red Hat and SUSE, but I know the kernel well enough to do the backporting; I've also done this in the past for a bunch of other dr
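For reference, backporting a vendor driver like this usually follows the kernel's standard external-module (kbuild) layout; a minimal sketch, with the module name and paths as placeholders (the real Makefile ships with Dell's driver source package):

```shell
# Sketch: the usual kbuild scaffold for building a vendor driver
# out of tree. Module name and paths here are illustrative.
mkdir -p /tmp/megaraid-backport && cd /tmp/megaraid-backport

# Minimal external-module Makefile (recipes must be tab-indented):
cat > Makefile <<'EOF'
obj-m += megaraid_sas.o

all:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	$(MAKE) -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
EOF

# Actually compiling needs the matching kernel headers installed:
#   make                      # builds megaraid_sas.ko for the running kernel
#   insmod megaraid_sas.ko    # load it for a quick test
cat Makefile
```

Backporting then mostly means resolving API differences between the vendor's target kernel (RHEL/SUSE) and the loader's kernel version.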
  2. I know you are very busy on this forum, which is very nice of you. I don't know if you don't have time to answer any of my questions or just don't have anything to answer. Anyway, you said earlier we could try newer drivers. I have ordered an HBA330, but I can still try experimenting with experimental drivers if you have time for that; I will probably be able to do some of that for a few more weeks
  3. Okay, so after some more research it seems that there are driver patches that could make all these cards work in HBA mode; people are talking about this on other forums around the internet. I believe it is the drivers provided by Dell for RHEL 6/7 and SUSE ES 12/15, and that these drivers are not in the mainline Linux kernel. I'm waiting for confirmation from Dell about this, and also about which card can work as a replacement for the server; I guess a PERC card has better integration with the BIOS and the server. If it is in those drivers provided by Dell, can you just help me w
  4. Okay, same with Ubuntu 20.04, running in BIOS mode, not UEFI mode. I can see that it installed correctly and shut down, however, so I believe this problem has another origin. I can also confirm DSM finds and installs on the disks, as I had to erase them multiple times, and there I can see the partitions and the data. If I run without disks I get error 38 as expected, and dmesg also just says it cannot recover from a faulty FW state. Would you believe the controller might be broken? I believe not, however. Everything was working well in RAID mode and before I removed the old OS. I guess
  5. Here is some logging after starting the installation of DSM: https://pastebin.com/ePCiuUqz Extended: https://pastebin.com/zunNz0aa I've tried a Linux live CD and it seems to work. I think DSM has a problem hibernating the disks and restarting the system; the 120s hung-task message confirms this, I think. It finds the controller FW in a fault state and can't handle it from there. I've found others on this forum with the same problem, and they also confirmed that it works on a normal Linux distribution. I tried to install Ubuntu 14.04; it found the drive and installed on the drive bu
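A quick way to pull the two symptoms discussed here out of a saved dmesg dump; the sample log lines below are illustrative stand-ins, not the actual output from the pastebins:

```shell
# Sketch: triaging a saved dmesg dump for the two symptoms above.
# These sample lines are stand-ins modeled on typical kernel output.
cat > /tmp/dmesg.txt <<'EOF'
[  120.001] INFO: task scemd:4321 blocked for more than 120 seconds.
[  121.002] megaraid_sas 0000:02:00.0: FW in FAULT state!!
[  121.003] sd 0:0:0:0: [sda] Attached SCSI disk
EOF

# Hung-task watchdog hits (the "120s" message):
grep 'blocked for more than' /tmp/dmesg.txt

# Controller firmware fault state:
grep -i 'fault state' /tmp/dmesg.txt
```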
  6. Okay, I removed the .ko files from the package as well, but you are saying it's the same driver in both your extra and Jun's vanilla loader? I hope so. In the RAID controller configuration utility, under Controller Management, it says the package version is 25.5.7.0005, the firmware version is 4.300.00-8364 and the NVDATA version is 3.1511.00-0028. When I set the controller in RAID mode it works; I had this configuration at first, and everything installed and worked well, but I didn't want the hardware RAID or the controller to handle the disks in such a way. That is why I
  7. Correct. Oh okay, I'll try to disable the megaraid driver from your package to try Jun's base driver; maybe it's as simple as removing all megaraid entries from DISK_MODULES in rc.modules inside the archive? I actually didn't catch that Jun's loader had already packed that driver, but I still needed your igb driver, so many thanks for that. I did this. I read that the controller needs a reset right after it has been switched to HBA mode; I'll check if I did this correctly. I will report back on this ASAP. About the 6.11 minimum requirement for
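The rc.modules edit mentioned here can be sketched with sed; a minimal example, assuming the file has already been extracted from the loader's extra ramdisk (the sample DISK_MODULES line below is illustrative, not the real contents, and extraction/repacking steps differ per loader version):

```shell
# Sketch: stripping megaraid entries from DISK_MODULES in rc.modules.
# A sample file stands in for the real one extracted from the loader.
cat > /tmp/rc.modules <<'EOF'
DISK_MODULES="mpt2sas mpt3sas megaraid megaraid_mbox megaraid_mm megaraid_sas ahci"
EOF

# Remove every megaraid* token from the DISK_MODULES list, then
# squeeze leftover double spaces and trim spaces next to the quotes.
sed -i -E '/^DISK_MODULES=/{s/megaraid[a-z_]*//g; s/  +/ /g; s/" /"/; s/ "/"/}' /tmp/rc.modules

cat /tmp/rc.modules
# DISK_MODULES="mpt2sas mpt3sas ahci"
```

After the edit, the file goes back into the archive the same way it came out, and the loader then falls back to whatever megaraid driver (if any) its base image carries.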
  8. Hi again. Got a Dell PowerEdge T630; it has a PERC H730 Adapter (1028:1F43) using an LSI SAS3108 chipset. The megaraid driver is not working (as expected) in 6.2.2 or 6.2.3. It actually worked (at least to install) when the controller was in RAID mode. Now I've set the controller in HBA mode, as I don't want to use the HW RAID at all; I've read that it might be possible to flash it to IT mode (Initiator Target). I've updated the controller with the latest firmware from Dell; I have not yet updated the BIOS, as I don't think that will make any difference. The BIOS SATA settings
  9. DSM 6.2 Loader

    All good now; the 3617 loader required the BIOS to be switched from UEFI to legacy mode
  10. So all seems sorted now and the drivers are working here as well. Thank you very much for your work and patience
  11. Okay, actually you were right, it boots now; it seems to have been when I changed from UEFI to legacy as well. I didn't test 3617 on that configuration until now; I thought it wasn't the problem when all the others were working. I'm assuming too much. I can see it is a great deal of hacking to get it to run; well, that is good. I will try to load the drivers now. Ah, alright
  12. Aha, okay, such a shame. I guess I have to get 3617 running then. Do you know if any GRUB debugging can be enabled as it tries to unpack and load the kernel and initrd? What is it with the different zImage I changed on my USB then? I mean, if it can't be changed, or there's no point in changing it; it's a bit confusing
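For reference, stock GRUB 2 does expose a debug facility through its `debug` environment variable; whether Jun's patched GRUB still honours it is an assumption:

```
# At the GRUB console (press 'c' at the boot menu), before booting:
set pager=1              # page the output so it doesn't scroll away
set debug=linux,loader   # trace kernel/initrd loading; 'all' is far more verbose
```

The `linux` facility covers the kernel and initrd load path, which is the stage in question here.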
  13. Where do I find the source and toolchain for the kernels? CONFIG_X86_X2APIC seems to be valid for 4.4 as well. Would you mind helping me and trying to compile 918+ with that flag set? I was in the BIOS but I couldn't find anything regarding any such thing
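Setting that flag follows the usual kernel config workflow; a sketch operating on a stand-in .config (the real one comes with the DSM kernel source; Synology publishes its GPL sources, though the exact package per DSM version is something to look up):

```shell
# Sketch: flipping CONFIG_X86_X2APIC on in a kernel .config.
# A tiny sample .config stands in for the real one from the
# DSM kernel source tree.
cat > /tmp/.config <<'EOF'
CONFIG_X86_LOCAL_APIC=y
# CONFIG_X86_X2APIC is not set
EOF

# In a real tree: scripts/config --enable X86_X2APIC
# Equivalent manual edit:
sed -i 's/^# CONFIG_X86_X2APIC is not set/CONFIG_X86_X2APIC=y/' /tmp/.config

cat /tmp/.config
# Afterwards run `make olddefconfig` so dependent symbols get resolved.
```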
  14. Okay, so I found some interesting things with 1.04b which made me wonder. This comes up in dmesg:
      [ 0.208866] ACPI: NR_CPUS/possible_cpus limit of 1 reached. Processor 1/0x0 ignored.
      [ 0.208867] ACPI: Unable to map lapic to logical cpu number
      [ 0.208887] ACPI: NR_CPUS/possible_cpus limit of 1 reached. Processor 2/0x2 ignored.
      [ 0.208888] ACPI: Unable to map lapic to logical cpu number
      [ 0.208909] ACPI: NR_CPUS/possible_cpus limit of 1 reached. Processor 3/0x4 ignored.
      [ 0.208910] ACPI: Unable to map lapic to logical cpu number
      [ 0.208926]
  15. Yeah, I know, but only 1.04b / 918+ doesn't recognize the other 3 cores; I will recheck. Still, it seems to be some kind of problem; I remember the 6.1 loader for 3615/3617 does this. It also worked well with the other distributions I've tried on this server, but I will recheck this as well. cpuinfo, exactly. Yes, gonna do that and document it better this time, and I'll report back on my findings. Yeah, I thought it was only that; turns out it wasn't, thanks for pointing me in the right direction. Yeah, I tried and all looks good (I thought), except that the
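For the core-count recheck mentioned here, a quick sketch of two ways to see how many logical CPUs the kernel actually brought online:

```shell
# Count the processor entries the kernel exposes:
grep -c ^processor /proc/cpuinfo

# CPUs available to this process (coreutils):
nproc
```

If these report 1 while the hardware has 4 cores, that matches the NR_CPUS/possible_cpus messages in the dmesg above.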