XPEnology Community

Recommended Posts

I just completed the build of my XPEnology box. Following the available tutorials it was a relatively flawless experience (even with a lot of playing around to try different options). Unfortunately the last step I took seems to have been a mistake, and now I can't recover...

To keep a long story short: my box has 3 NICs, one on board (Realtek) and 2 PCIe cards (Intel). My intention was to use the onboard one for user/management access and to bond the other two for the iSCSI connection to my home server, using link aggregation. During testing everything worked fine; all 3 NICs were presented to the XPEnology box and worked properly (also in bonded mode).

Then I decided to make some final touches, and in the process I realized I had made a mistake while configuring the network: the NICs assigned to the bonded link were mixed up (the Realtek and one of the Intels). So I decided to correct it; I deleted the bond connection... and to my surprise that also deleted the Intel NIC associated with it. Whatever I try now, I can only see the Realtek and one of the Intel NICs. The second Intel has disappeared from DSM permanently, so I can't recreate the bond. I tried to reinstall the system (from the loader start menu), but it did not help. Any idea how to get this card visible again?

 


The short way is to delete the old installation (remove all partitions from the disks) and start from scratch; this can be done by booting a live/rescue Linux.
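A minimal sketch of that wipe from a live/rescue Linux, assuming the DSM disks appear there as /dev/sda and /dev/sdb (the device names are an assumption; check with lsblk first, and note this destroys everything on the disks):

# identify the DSM disks first, the names below are placeholders
lsblk
# wipe all partition-table, RAID and filesystem signatures (DESTROYS ALL DATA)
sudo wipefs --all /dev/sda
sudo wipefs --all /dev/sdb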

 

The other way would be to check the logs and edit the config files in the underlying (DSM) Linux system.

The first step would be to connect via SSH (e.g. with PuTTY), log in as admin and see what "sudo ifconfig" tells you about the network interfaces.

In the end it's a Linux system with some extensions on top to provide a nice and easy GUI,

so you can learn (google) how to manually configure the network on Linux and fix it yourself (and learn something on the way).
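As a hedged starting point over that SSH session, these standard commands list the interfaces the kernel knows about, including any that are down and therefore hidden by plain ifconfig (nothing DSM-specific is assumed here):

sudo ifconfig -a      # -a also shows interfaces that are down
ls /sys/class/net     # every network device the kernel created
dmesg | grep -i eth   # driver messages from interface detection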

 

Edited by IG-88

I've done exactly the same thing, except in my case it's the onboard LAN that was lost.

It's annoying me because it's the only one that supports WOL.

I've had a quick play with ifconfig | grep "eth",

which has shown me some interesting stuff, but it shows that the other driver is not being used.

Although lsmod does list it, and if you redo the USB boot disk it will use it until the installation is recovered, then it disappears.
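A hedged way to dig into that: check whether the module is merely loaded or has actually bound to the hardware (the module names below are just examples; adjust them to whatever lsmod lists for your NICs):

# is the driver module loaded at all?
lsmod | grep -iE 'e1000|r8168'
# did it claim a device and create an interface? look for driver messages
dmesg | grep -iE 'e1000|r8168|eth'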

I need to do more digging.

Sent from my D5803 using Tapatalk


I've used the 'clean drive' method to repair system partitions, avoiding the ubuntu/mdadm process (because I am lazy :) ); a rough sketch of that route follows the steps below.

You can reconfigure the network at an appropriate point (between steps 3 and 4?).

 

1) Disconnect your RAID drives

2) Connect a spare HDD to SATA port 1

3) Install DSM 6.x, create an admin user and server name matching the live system, DON'T create a volume/RAID

4) Shut down, reconnect the RAID drives to SATA ports 2-n

5) Boot, log in, repair the system partitions on the RAID, shut down

6) Remove the spare HDD and reconnect the RAID drives to SATA ports 1-n

7) Boot, repair packages etc.

I've used this method several times and it works fine
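For reference, a minimal sketch of the ubuntu/mdadm route being avoided above, assuming DSM's usual layout where the system partition is a small RAID1 spanning all disks at /dev/md0 (it may assemble under another md number on a live system, so verify with --detail before touching anything; /dev/sda1 below is a placeholder):

# from an Ubuntu live session
sudo mdadm --assemble --scan          # assemble any arrays found on the disks
sudo mdadm --detail /dev/md0          # inspect the DSM system partition RAID
sudo mdadm /dev/md0 --add /dev/sda1   # re-add a partition that fell out
cat /proc/mdstat                      # watch the rebuild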


OK, so I did the ifconfig check and it shows 3 interfaces: 2 physical (eth0, eth1) and the local loopback (lo). So the 3rd NIC is definitely lost somehow.

I was wondering whether the problem was too many resources being used by the expansion cards on the mobo (not enough resources). From the mobo manual I read that some of the PCIe slots share lanes, so if add-on cards are placed in the 8x slots, some 1x slots become unavailable. So I carefully moved the cards around to make sure that was not the case. No change. Then I made another swap: I replaced one of the PCIe cards with an old PCI 1 Gb Intel card. Then it became interesting again; now I can see the PXE message twice in the BIOS (during boot), which indicates to me that at least both cards are initialized and there is no resource conflict (previously only one card showed that message, I believe). But ifconfig still shows only 2 interfaces... I also tried a 2-port Realtek card, but there seems to be some PCIe version incompatibility with that one, as the system does not boot at all with it installed (not a single message from the BIOS even). So no success so far...

P.S. Not sure how to read the lsmod output... not a Linux guy.

P.S.2: I have a spare SATA drive lying around; if time allows I'll try the clean drive method suggested by sbv3000... Will report the outcome.

 


1 hour ago, mirekmal said:

now I can see the PXE message twice in the BIOS (during boot), which indicates to me that at least both cards are initialized and there is no resource conflict [...]

 

That's a good find.

Check the BIOS and disable any 'unnecessary' devices (audio/parallel/second serial/USB3?). Also disable every boot device except the XPE USB stick and disable 'boot from other devices'.


If you want to know whether a device (hardware) is present you can use

sudo lspci -v

It will list all devices and show the driver in use for each, so you can see whether there is a hardware/driver problem (device present or not, driver bound or not) or a configuration problem with the network.
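Since network controllers are PCI class 0200 (shown numerically when, as on DSM, no pci.ids database is installed), a hedged one-liner to show just those entries with some context (the -A line count is a guess at how much detail you need):

sudo lspci -v | grep -A 10 'Class 0200'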

The network config is stored in

/etc/sysconfig/network-scripts
in files like ifcfg-eth0, and its default content would look like this:

DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes

 

I guess it might take too long if you have to ask about every step (no Linux experience); it will be faster to get rid of the old installation and start a new one (as the problem occurred with bonding and with deleting the bond, I'd expect it to be a configuration problem).

 

Depending on whether there is any data on your disks, you can either delete everything on the 2 disks you used, or, if there is already a data RAID configured that you want to keep, use sbv3000's suggestion with a 3rd disk.

 


Thanks for the info :-)

I don't have a spare 3rd disk, and I have spent a few too many days organising a lot of data on the disks.

So I'll be trying to fix it via SSH; I'll Google for answers first and we'll see how it goes.

I'll update here when I find the fix.

Sent from my D5803 using Tapatalk


OK, so here is the progress...

All unused devices were already disabled; this is something I do by default on all of my boxes.

Per IG-88's suggestions I did two checks:

- First I took a look at the network configuration files. I found nothing special for eth0 and eth1, but eth2 was somehow different; it kept some traces of having been bonded. There was an additional line MASTER=bond1 (or something very similar). Since that was something I did not want there, I deleted it, so the file started to look like any other ethX config. Unfortunately, after a system restart, nothing had changed...

- sudo lspci -v gave me the following output:

.....

0000:04:00.0 Class 0200: Device 8086:10d3
    Subsystem: Device 8086:a01f
    Flags: bus master, fast devsel, latency 0, IRQ 46
    Memory at fd8c0000 (32-bit, non-prefetchable)
    Memory at fd800000 (32-bit, non-prefetchable)
    I/O ports at cf00
    Memory at fd8fc000 (32-bit, non-prefetchable)
    [virtual] Expansion ROM at fd700000 [disabled]
    Capabilities: <access denied>
    Kernel driver in use: e1000e

0000:05:00.0 Class 0200: Device 10ec:8168 (rev 02)
    Subsystem: Device 1458:e000
    Flags: bus master, fast devsel, latency 0, IRQ 18
    I/O ports at be00
    Memory at fdfff000 (64-bit, prefetchable)
    Memory at fdfe0000 (64-bit, prefetchable)
    [virtual] Expansion ROM at fdf00000 [disabled]
    Capabilities: <access denied>
    Kernel driver in use: r8168

0000:06:07.0 Class 0200: Device 8086:107c (rev 05)
    Subsystem: Device 8086:1376
    Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 21
    Memory at fd6c0000 (32-bit, non-prefetchable)
    Memory at fd6a0000 (32-bit, non-prefetchable)
    I/O ports at 9f00
    [virtual] Expansion ROM at fd500000 [disabled]
    Capabilities: <access denied>
    Kernel driver in use: e1000

.....

So to me it looks like all 3 cards are somehow visible to the kernel, but not to DSM :-(
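One hedged way to narrow that down over SSH: check whether the kernel actually created a third interface, and which PCI device sits behind each one (standard sysfs paths, nothing DSM-specific assumed). If an eth2 exists here but DSM doesn't show it, the hardware side is fine and it's purely a configuration problem:

ls /sys/class/net                     # does an eth2 exist at all?
readlink /sys/class/net/eth0/device   # PCI address behind each interface
readlink /sys/class/net/eth1/device
sudo ifconfig -a                      # includes interfaces that are down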

So for now the clean drive option still seems to be the one to follow...

 

EDIT:

I rechecked the network config files after a restart, and surprisingly the lines I deleted from the eth2 file are back again:

SLAVE=yes

MASTER=bond0

USERCTL=no

 

So it seems that somehow/somewhere the information about this interface being part of a bond is persistent.
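A hedged cleanup sketch for that: if a leftover ifcfg-bond0 file still lists eth2 as a slave, the system may be rewriting those lines on boot, so it is worth checking for and removing the bond remnants (the bond0 file name is an assumption based on the MASTER=bond0 line above; keep backups of anything you change):

cd /etc/sysconfig/network-scripts
ls                                          # is there an ifcfg-bond0 left over?
cat ifcfg-bond0                             # if so, see which slaves it claims
sudo mv ifcfg-bond0 /root/ifcfg-bond0.bak   # remove it, keeping a backup
sudo vi ifcfg-eth2                          # drop the SLAVE/MASTER/USERCTL lines
sudo reboot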

 

Edited by mirekmal
