XPEnology Community

Eeso

Member
  • Posts: 22
  • Joined
  • Last visited


Eeso's Achievements

Junior Member (2/7)

Reputation: 0

  1. The H730 didn't support having the card in JBOD/HBA mode with disks attached, giving the errors I reported earlier, so that is what I want to test. Dell releases its own drivers for the card, and I think those are the ones that will work: people around the internet hint that it works after recompiling some drivers for the kernel. I will confirm this with Dell's technical support. They only release the drivers and source for Red Hat and SUSE, but I know the kernel well enough to do backporting; I've done this in the past for a bunch of other drivers, so I thought I could try to contribute something to this forum. If you have some quick links on how to compile the modules you build for XPEnology / Jun's loader, that would help a lot (see the sketch below for what I have in mind). I have no need for nvidia drivers or transcoding, thanks though.
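     If it helps the discussion along, this is roughly the shape of the build I have in mind (a sketch only: the kernel version, platform name and toolchain path are my guesses based on Synology's open-source release at sourceforge.net/projects/dsgpl, and I haven't run this against a loader yet):

         # prepare Synology's published kernel source for out-of-tree builds;
         # DS3617xs should be the "broadwell" platform on a 3.10 kernel (assumption)
         export CROSS=/usr/local/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu-
         cd linux-3.10.x
         cp synoconfigs/broadwell .config
         make ARCH=x86_64 CROSS_COMPILE=$CROSS oldconfig modules_prepare
         # then point the backported driver at the prepared tree
         make -C $PWD M=/path/to/backported/megaraid_sas \
              ARCH=x86_64 CROSS_COMPILE=$CROSS modules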
  2. I know you are very busy on this forum, which is very nice of you. I don't know if you simply haven't had time to answer my questions or just don't have an answer. Anyway, you said earlier we could try newer drivers; I have ordered an HBA330, but I can still experiment with experimental drivers if you have time for that. I will probably be able to do some of that for a few more weeks.
  3. Okay, so after some more research it seems there are driver patches that could make all these cards work in HBA mode; people are talking about this on other forums. I believe it is the drivers provided by Dell for RHEL 6/7 and SUSE ES 12/15, and that these drivers are not in the mainline Linux kernel. I'm waiting for confirmation from Dell about this, and also about which card could work as a replacement in the server; I guess a PERC card has better integration with the BIOS and server. If the fix is in those Dell-provided drivers, could you point me at resources on how to compile them, or maybe try compiling them yourself? I can check the version of the Dell driver, as they should be based on LSI's drivers. What version are you using? I'd be happy to try a newer driver. I noticed the logs say "resetting fusion adapter", but isn't it really an Invader adapter?
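     For comparing versions, this is what I plan to run (standard module introspection; nothing here is DSM-specific as far as I know):

         # version of the currently loaded megaraid_sas driver
         cat /sys/module/megaraid_sas/version
         # or, on a full distro with module tools installed:
         modinfo megaraid_sas | grep -i '^version'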
  4. Okay, same with Ubuntu 20.04 running in BIOS mode rather than UEFI mode: I can see that it installed correctly and shut down, so I believe this problem has another origin. I can also confirm DSM finds and installs on the disks, as I had to erase them multiple times and could see the partitions and the data. If I run without disks I get error 38, as expected, and dmesg just says it cannot recover from a faulty firmware state. Would you believe the controller might be broken? I don't think so, though: everything worked well in RAID mode and before I removed the old OS. I guess I can either set the controller to RAID mode and mark the individual disks as Non-RAID, or just have it in HBA mode, but the latter doesn't seem to work. It worked fine when I initially had it in RAID mode; as soon as I switched, it stopped working. I can retry the factory reset directly after setting the controller to HBA mode; several people on the internet have reported that this must be done. Might the controller just be too old to work well with Linux in HBA mode? I'm looking at maybe buying an LSI SAS 9300-8i SGL, which has a SAS3008 chipset; would you recommend that one? The PERC H730 has a SAS3108, but since Dell has rebranded it, it might not behave the same. SAS3108 might be an older chipset to begin with, or one that Dell does not update the way LSI cards are.
  5. Here is some logging after starting the DSM installation: https://pastebin.com/ePCiuUqz Extended: https://pastebin.com/zunNz0aa I've tried a Linux live CD and it seems to work. I think DSM has a problem hibernating the disks and restarting the system; the 120 s hung-task message confirms this, I think. It finds the controller firmware in a fault state and can't recover from there. I've found others on this forum with the same problem, and they also confirmed that it works on a normal Linux distro. I tried to install Ubuntu 14.04: it found the drive and installed on it, but didn't want to boot afterwards, dumping me in the GRUB rescue console. That could also be a BIOS/UEFI problem, though.
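     For anyone wanting to collect the same data, this is all I did to pull those lines out (plain dmesg filtering over the serial console; the exact message wording may differ between kernel versions):

         # controller and hung-task messages from the kernel log
         dmesg | grep -iE 'megaraid|fw state|hung task'
         # the timeout behind the 120 s hung-task warning
         cat /proc/sys/kernel/hung_task_timeout_secs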
  6. Okay, I removed the .ko files from the package as well, but you are saying it's the same driver in both your extra and Jun's vanilla loader? I hope so. In the RAID controller configuration utility, under Controller Management, it says the package version is 25.5.7.0005, the firmware version is 4.300.00-8364 and the NVDATA version is 3.1511.00-0028. When I set the controller to RAID mode it works; I had that configuration at first, and everything installed and worked fine, but I didn't want the hardware RAID or the controller handling the disks that way, which is why I switched to HBA mode and ran into problems. There is no JBOD mode; there are only HBA and RAID modes in Controller Management. In Physical Disk Management I can only choose RAID or Non-RAID mode, and it's in Non-RAID, which I suppose is what you mean by JBOD. As of now, it seems my best option may be to change to another SAS controller card. I wonder how that plays out with an OEM server like this, but it should probably work, and I've got plenty of spare PCI slots.
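     If it's useful, the same version strings should also be readable from a running Linux system with Broadcom's storcli (or Dell's perccli rebrand), assuming the tool talks to this Dell firmware, which I haven't verified:

         # controller 0 summary includes the FW package, firmware and NVDATA lines
         storcli /c0 show all | grep -iE 'package|firmware|nvdata'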
  7. Correct. Oh okay, I'll try disabling the megaraid driver from your package to try Jun's base driver; maybe it's as simple as removing all the megaraid entries from DISK_MODULES in rc.modules inside the archive (see the sketch below)? I actually didn't catch that Jun had already packed that driver, but I still needed your igb driver, so many thanks for that. I did this; I read that the controller needs a reset right after it has been switched to HBA mode, so I'll check whether I did that correctly and report back ASAP. About the 6.11 minimum requirement for the firmware: I updated the controller with Dell's version 25.5.6.0009, A14, which is the latest. I've tried to read the changelog but I can't find any reference to which Broadcom firmware they pack in it; would you possibly have a clue where I can find this?
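     For the record, this is how I intend to attempt that edit (a sketch: I'm assuming extra.lzma is an lzma-compressed cpio archive like the rest of Jun's ramdisk, and I'm guessing at the rc.modules path inside it):

         mkdir extra && cd extra
         lzma -dc ../extra.lzma | cpio -idm                   # unpack
         # drop megaraid* entries from the DISK_MODULES line only
         sed -i '/^DISK_MODULES=/s/megaraid[a-z_]*//g' etc/rc.modules
         find . | cpio -o -H newc | lzma -9 > ../extra.lzma   # repack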
  8. Hi again. I got a Dell PowerEdge T630. It has a PERC H730 Adapter (1028:1F43) using an LSI SAS3108 chipset, and the megaraid driver is not working (as expected) on 6.2.2 or 6.2.3. It actually worked (at least for installing) when the controller was in RAID mode; now I've set the controller to HBA mode, as I don't want to use the HW RAID at all. I've read that it might be possible to flash it to IT mode (Initiator Target). I've updated the controller with the latest firmware from Dell but have not yet updated the BIOS, as I don't think that will make any difference; the BIOS SATA setting is set to AHCI.
     I could maybe get this working under 6.1.x; however, IIRC 6.1 lacks some of the hardware virtualization acceleration I need to run my VMs smoothly. I've gathered some logs if they would be of any help, but it seems no one is lacking logs, and that the real problem is backporting drivers. Can I help in any way? I have a fair amount of practice doing exactly this in my work.
     I've only tried DS3617xs, as I have a Xeon E3 processor, 8 drive slots and need HW-accelerated virtualization. Would DS918+ be worth a shot? I think there was something problematic about DS3615xs with what I was trying to do last time.
     So, has anyone successfully flashed an LSI-based PERC card to IT mode? Can I help backport drivers? Could I completely remove the PERC card and put in something else? The motherboard has no connectors for the two cables (0H3Y5T) coming from the disks. I think they are mini-SAS connectors; they are marked "CTRL SAS A" on the controller side and "PB_A" on the disk side (the backplane?). I've never seen cables like these, as I'm pretty new to OEM servers to begin with. If anyone knows what connector that is, can confirm it is mini-SAS, or knows of a PCI card (ideally supported and tested) with that connector so I can swap the PERC out or just add a card to the chassis, I'd appreciate it. BR
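     For anyone wanting to compare cards, this is how I read out that PCI ID (plain lspci, nothing loader-specific):

         # vendor:device IDs plus the kernel driver bound to each controller
         lspci -nnk | grep -iA3 'raid\|sas'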
  9. Eeso

    DSM 6.2 Loader

    All good now; the 3617 loader required the BIOS to be switched from UEFI to legacy mode.
  10. So all seems sorted now and the drivers are working here as well. Thank you very much for your work and patience.
  11. Okay, you were actually right: it boots now. It seems to have been the change from UEFI to legacy here as well; I hadn't tested 3617 on that configuration until now. I thought it wasn't the problem since all the others were working; I was assuming too much. I can see it takes a great deal of hacking to get this running. Well, that is good; I will try to load the drivers now. Ah, alright.
  12. Aha okay, such a shame. I guess I have to get 3617 running then. Do you know if any GRUB debugging can be enabled while it tries to unpack and load the kernel and initrd (see below for what I mean)? And what is the deal with the different zImage I changed on my USB stick then? If it can't be changed, or there's no point in changing it, I mean; it's a bit confusing.
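     What I have in mind is something like this at the GRUB prompt (standard GRUB 2 debug facilities; I don't know how much of this survives in Jun's patched build):

         # at the grub> prompt, before booting the entry
         set debug=all     # very noisy; file-specific facilities like "linux" may also work
         set pager=1       # page the output so it doesn't scroll off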
  13. Where do I find the source and toolchain for the kernels? CONFIG_X86_X2APIC seems to be valid for 4.4 as well. Would you mind helping me and trying to compile the 918+ kernel with that flag set? I looked in the BIOS but couldn't find any setting related to this.
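     To be concrete, this is the build I'm imagining (a sketch: I'm assuming DS918+ is the apollolake platform on a 4.4 kernel and that the source and toolchain come from Synology's open-source release at sourceforge.net/projects/dsgpl):

         cd linux-4.4.x
         cp synoconfigs/apollolake .config        # Synology's per-platform config (assumption)
         echo 'CONFIG_X86_X2APIC=y' >> .config    # the flag in question; last entry wins
         make ARCH=x86_64 oldconfig
         make ARCH=x86_64 \
              CROSS_COMPILE=/usr/local/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu- bzImage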
  14. Okay, so I found some interesting things with 1.04b that made me wonder. This comes up in dmesg:
         [ 0.208866] ACPI: NR_CPUS/possible_cpus limit of 1 reached. Processor 1/0x0 ignored.
         [ 0.208867] ACPI: Unable to map lapic to logical cpu number
         [ 0.208887] ACPI: NR_CPUS/possible_cpus limit of 1 reached. Processor 2/0x2 ignored.
         [ 0.208888] ACPI: Unable to map lapic to logical cpu number
         [ 0.208909] ACPI: NR_CPUS/possible_cpus limit of 1 reached. Processor 3/0x4 ignored.
         [ 0.208910] ACPI: Unable to map lapic to logical cpu number
         [ 0.208926] ACPI: NR_CPUS/possible_cpus limit of 1 reached. Processor 4/0x6 ignored.
         [ 0.208927] ACPI: Unable to map lapic to logical cpu number
     This seems to be a common thing on Linux when it doesn't find all the cores. The solution varies: for some it's switching the BIOS from UEFI to legacy mode, for others it's enabling x2APIC in the kernel. I tried switching the BIOS to legacy (I had changed it to UEFI for another distro a while back), but that didn't do the trick. Do you know if the kernel has the x2APIC config enabled (CONFIG_X86_X2APIC, to be exact)?
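     If anyone wants to check their own build, this is how I'd inspect the kernel's config (assuming either option is available for the DSM kernel, which I haven't verified):

         # works if the kernel was built with CONFIG_IKCONFIG_PROC
         zcat /proc/config.gz | grep X2APIC
         # otherwise, extract the embedded config from the image (kernel source script)
         scripts/extract-ikconfig /path/to/zImage | grep X2APIC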
  15. Yeah, I know, but only 1.04b / 918+ fails to recognize the other 3 cores; I will recheck. It still seems to be some kind of problem; I remember the 6.1 loader for 3615/3617 doing this too, and the server also worked fine with the other distros I've tried, but I will recheck this as well. cpuinfo, exactly (see the check below). Yes, I'm going to do that, document it better this time, and report back on my findings. Yeah, I thought it was only that; it turns out it wasn't, thanks for pointing me in the right direction. Yeah, I tried and everything looked good (I thought), except that 1.03b explicitly says it only supports the Intel e1000 NIC, which I don't have. My reasoning was that it was only a matter of the wrong driver missing, since everything else I've tried had worked; I never checked the VSP before this, my bad. Unfortunately I need 6.2: 6.1 runs fine for all the simulated models, but one virtualization option is missing in VMM prior to 6.2, or rather you need a Pro license to enable it, which is free in 6.2. It seems a bad idea to try to buy such a license for this; I'm going to check if something can be done about that.
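     For completeness, this is the check I mean by cpuinfo (plain procfs, nothing loader-specific):

         # count the logical CPUs the kernel actually brought up
         grep -c ^processor /proc/cpuinfo
         nproc    # should agree with the count above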