mirekmal

Transition Members
  1. Marvell storage driver: mvsas: Marvell PCIe 88SE64XX (3Gb/s) and 88SE94XX (6Gb/s) SAS/SATA support, HighPoint RocketRAID 2710/2720/2721/2722/2740/2744/2760. Thank you
  2. mirekmal

    DSM 6.2 Loader

    Have you played around with the HDD controller configuration via grub.cfg or synoinfo.conf? Such a change could eventually cause the one-port controller to be visible to DSM as the first disk slot...
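For context on what that grub.cfg change looks like: in Jun's loader the SATA mapping is usually set via the sata_args line, and each digit of SataPortMap is the number of ports DSM maps for one SATA controller, in PCI enumeration order. A minimal sketch of the digit semantics (the value 48 is illustrative, not taken from this box):

```shell
# Illustrative grub.cfg fragment (Jun's loader keeps it in a sata_args line):
#   set sata_args='SataPortMap=48'
# One digit per SATA controller, in PCI enumeration order:
map=48
for ((i = 0; i < ${#map}; i++)); do
  echo "controller $((i + 1)): ${map:$i:1} ports"
done
```

So SataPortMap=48 asks DSM to map 4 ports for the first controller and 8 for the second.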
  3. mirekmal

    DSM 6.2 Loader

    Outcome of the update: PARTIALLY SUCCESSFUL
    - DSM version prior to update: none - clean install on HW previously used with DSM 6.1/1.02b loader (all disks and the original startup USB drive replaced with spare HW)
    - Loader version and model: JUN'S LOADER v1.03b - DS3615xs
    - Using custom extra.lzma: NO
    - Installation type: BAREMETAL
      - Asus P7H55/USB3 (H55 Express chipset)
      - Intel Core i3 550 (Clarkdale)
      - 8GB RAM (DDR3 1333)
      - 1x onboard Realtek 8112L
      - 2x Intel Gigabit PRO/1000 CT PCIe cards
      - HighPoint RocketRAID 2720SGL (flashed to JBOD mode)
    - Additional comments:
      - REBOOT: OK
      - RocketRAID not working: no disks visible in DSM:
        - not working natively
        - Tried extra.lzma from 918+ (as advised in a previous post): system starts but then reboots itself 30 seconds after the loader screen appears
        - Tried extra.lzma for 3615 1.02b (the one working with the RocketRAID on the original setup): system does not load any network drivers, so no connection available to check the RocketRAID
    Learnings:
    - Despite the very old architecture of the i3 550, it seems to work properly. I did some tests installing several packages (DS Video, Mail Server Plus, Mail Plus, Download Station, etc.) and everything seems OK (as far as I could test within 2 hours).
    - Lack of support for the Marvell HDD controller - no way to supplement it, as no dedicated extra.lzma exists yet for 1.03b/DSM 6.2.x and packages for other versions do not work properly.
    - Overall very pleased, given the old HW it was tested on! Good work! Now only waiting for additional HW support to migrate it to production.
  4. Hello there! After several experiments I just finished building what should be my 'production' box. The trick is that, moving from the test bed to the new hardware, some things I thought I had previously 'mastered' are not working on the new box:
     - It is based on an Asus P7H55 USB3 board. It has 6 internal SATA ports (part of the H55 Express chipset, so I understand they can either all be switched off or all be active).
     - I added a RocketRAID 2720SGL as the main HDD controller.
     - Since the case I use has a max capacity of 12 drives (4x2.5" and 8x3.5"), I set SataPortMap=48, expecting to see the first 4 ports from onboard and 8 ports from the RocketRAID.
     - I connected 2 drives to the internal controller and 8 drives to the RocketRAID.
     XPEnology does show 12 drive slots, but filled as 2 occupied, 4 empty, 6 occupied. So it looks like it is showing all 6 ports of the internal controller. Looking at the disks reported for the RocketRAID, it shows only its first 6 ports. Then I tried to modify synoinfo.conf and increase the number of drives to 14, hoping that if I can't disable 2 ports on the onboard controller, at least I could use all 8 drives on the RocketRAID... Bad luck: now I see 14 slots, but as 2 occupied, 4 empty, 6 occupied, 2 empty... So the 2 added slots are not connected to ports on the RocketRAID. Any idea how to get this fixed? Ideally I'd like to see a 4+8 config... but anything that would allow using all disks on the RocketRAID would be great! BTW, the 2 disks connected to the internal controller show as eSATA... can this be changed?
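On the eSATA labelling and slot counts: besides maxdisks, DSM decides which slots count as internal vs eSATA from the internalportcfg and esataportcfg bitmasks in synoinfo.conf, one bit per slot with the least-significant bit being slot 1. A sketch of the arithmetic only, with illustrative values for a 12-slot box and no eSATA slots (values are assumptions for illustration, not a tested config for this board):

```shell
# One bit per disk slot, least-significant bit = slot 1.
# For 12 internal slots and no eSATA slots (illustrative values only):
maxdisks=12
printf 'internalportcfg=0x%x\n' $(( (1 << maxdisks) - 1 ))
printf 'esataportcfg=0x%x\n' 0
```

This prints internalportcfg=0xfff and esataportcfg=0x0; slots whose bit is set in esataportcfg instead of internalportcfg are the ones DSM shows as eSATA.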
  5. mirekmal

    Weird SMB connectivity problem from OS X

    OK, for anyone interested in what the problem turned out to be - this might be useful for troubleshooting similar issues in other setups. My setup for the connection between the Mac and the XPEnology box was: Mac -> Cisco SLM2008 switch -> Cisco SG 200-08 switch -> XPEnology box. In theory all components along the way support jumbo frames, so while fine-tuning the setup I enabled this gradually on all of them. It turned out that the SLM2008, despite its specification, does not always play well with jumbo frames. It was somehow blocking traffic from the Mac on the way to XPE. Note that all other traffic was flowing fine! Only this very specific configuration was affected. After turning off jumbo frames on the switch, everything started to work again. As I have a few similar switches in my network, I replaced it with another SG 200-08 with jumbo frames enabled, and again everything works fine. So this points to the conclusion that there is something wrong with the SLM2008... Weird...
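A quick way to confirm whether jumbo frames actually survive a given switch path is a ping with the don't-fragment bit set and a payload sized just under the jumbo MTU: 9000 bytes minus the 20-byte IP header and 8-byte ICMP header leaves 8972. A sketch (the host name is a placeholder; the ping flags differ between Linux and macOS):

```shell
MTU=9000
PAYLOAD=$(( MTU - 20 - 8 ))   # subtract IP header (20) + ICMP header (8)
echo "$PAYLOAD"
# Linux:  ping -M do -s "$PAYLOAD" nas.local   # fails if any hop drops jumbo frames
# macOS:  ping -D    -s "$PAYLOAD" nas.local
```

If the large ping fails while a default-size ping works, some device on the path (like the SLM2008 here) is dropping or mishandling jumbo frames.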
  6. So, I have a new, fresh XPEnology box with 3 NICs. All 3 are 1Gb cards and, from what I can see in the port status on the switch and in DSM itself, they connect at full speed. One NIC is used for general user access (on the 1st subnet) and the other 2 NICs are bonded and used as a sort of SAN (on the 2nd subnet). From Windows machines, all connectivity to SMB shares via the user network works fine (download speed in the range of ~100MB/s), so OK. From the Mac (using the user subnet) I can connect to SMB shares on the other Windows server and I get up to ~100MB/s download, so OK. From the Mac (again on the user subnet) connecting to the XPEnology box I get only ~200KB/s... definitely not OK... When I reconfigure the Mac IP to use the storage subnet (the only change I make) I instantly get 120~140MB/s of transfer (since it is bonded, I guess disk speed is the limit), so OK. General SMB connectivity problems (that many Macs are facing) can be ruled out, since the storage subnet to XPEnology and the user subnet connection to the Windows server both work fine. So from the above observations the issue seems to be specific to the Mac connecting to this particular NIC on the XPEnology box. BTW, the same applies to AFP... Any idea what might be wrong, how to troubleshoot, and eventually fix it?
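As a sanity check on those numbers: a single 1 Gb/s link tops out around 119 MiB/s before protocol overhead, so ~100-140 MB/s readings are healthy and ~200 KB/s clearly is not a bandwidth limit. The arithmetic, plus a Mac-side command worth trying (smbutil availability and output vary by OS X version, so treat it as a suggestion, not a guaranteed diagnostic):

```shell
# Theoretical ceiling of one 1 Gb/s link, in MiB/s (before SMB/TCP overhead):
echo $(( 1000000000 / 8 / 1048576 ))
# On the Mac, smbutil can show the negotiated SMB dialect and signing state
# per mounted share, which often explains large SMB speed differences:
#   smbutil statshares -a
```

If the slow path negotiated an older dialect or forced signing while the fast path did not, that difference is a good lead.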
  7. OK, so another update; I finally got some time to go through the clean-disk process and I'm happy to report that my 3rd NIC is back online. Thanks a lot, guys, for guiding me through the troubleshooting!
  8. OK, so here is the progress... All unused devices were already disabled; this is what I do by default on all of my boxes. Per IG-88's suggestions I did two checks:
     - First I took a look at the network configuration files. I found nothing special for eth0 and eth1. eth2 was somewhat different; it kept some traces of having been bonded. There was an additional line MASTER=bond1 (or something very similar). Since that was something I did not want to be there, I deleted it, so the file looked like any other ethX config. Unfortunately, after a system restart it did not change anything...
     - sudo lspci -v gave me the following output:
       .....
       0000:04:00.0 Class 0200: Device 8086:10d3
               Subsystem: Device 8086:a01f
               Flags: bus master, fast devsel, latency 0, IRQ 46
               Memory at fd8c0000 (32-bit, non-prefetchable)
               Memory at fd800000 (32-bit, non-prefetchable)
               I/O ports at cf00
               Memory at fd8fc000 (32-bit, non-prefetchable)
               [virtual] Expansion ROM at fd700000 [disabled]
               Capabilities: <access denied>
               Kernel driver in use: e1000e
       0000:05:00.0 Class 0200: Device 10ec:8168 (rev 02)
               Subsystem: Device 1458:e000
               Flags: bus master, fast devsel, latency 0, IRQ 18
               I/O ports at be00
               Memory at fdfff000 (64-bit, prefetchable)
               Memory at fdfe0000 (64-bit, prefetchable)
               [virtual] Expansion ROM at fdf00000 [disabled]
               Capabilities: <access denied>
               Kernel driver in use: r8168
       0000:06:07.0 Class 0200: Device 8086:107c (rev 05)
               Subsystem: Device 8086:1376
               Flags: bus master, 66MHz, medium devsel, latency 64, IRQ 21
               Memory at fd6c0000 (32-bit, non-prefetchable)
               Memory at fd6a0000 (32-bit, non-prefetchable)
               I/O ports at 9f00
               [virtual] Expansion ROM at fd500000 [disabled]
               Capabilities: <access denied>
               Kernel driver in use: e1000
       .....
     So to me it looks like all 3 cards are somehow visible to the kernel, but not to DSM. For now the clean-drive option still seems to be the one to follow...
EDIT: I rechecked the network config files after restart and, surprisingly, the lines I deleted from the eth2 file are back again: SLAVE=yes MASTER=bond0 USERCTL=no. So it seems that somehow/somewhere the information about this interface being part of a bond is persistent.
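If the SLAVE=/MASTER= lines keep reappearing, the bond definition itself is probably still around: with ifcfg-style configs the slave entries get rewritten as long as the bond file exists, so the leftover bond config has to be removed too. A sketch of the cleanup, run here against a throwaway copy (the /etc/sysconfig/network-scripts path and the exact file names are assumptions about DSM's layout, not verified on this box):

```shell
# Work on a copy for illustration; on the real box the file would be
# something like /etc/sysconfig/network-scripts/ifcfg-eth2 (assumed path).
cfg=/tmp/ifcfg-eth2
printf 'DEVICE=eth2\nBOOTPROTO=dhcp\nSLAVE=yes\nMASTER=bond0\nUSERCTL=no\n' > "$cfg"

# Strip the bond-slave remnants:
sed -i '/^SLAVE=/d; /^MASTER=/d; /^USERCTL=/d' "$cfg"
cat "$cfg"

# Then also remove the stale bond definition itself before rebooting:
#   rm /etc/sysconfig/network-scripts/ifcfg-bond0   # assumed file name
```

After the sed, only the DEVICE and BOOTPROTO lines remain; deleting the slave lines alone, without removing the bond file, is likely why they came back.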
  9. OK, so I did an ifconfig check and it shows 3 interfaces: 2 physical (eth0, eth1) and the local loopback (lo). So the 3rd NIC is definitely lost somehow. I was wondering whether this was a problem of too many resources being used by the expansion cards on the mobo (not enough resources). From what I read in the mobo manual, some of the PCIe slots share lanes, so if add-on cards are placed in the 8x slots, some 1x slots become unavailable. So I carefully moved the cards around to make sure this was not the case. No change. Then I made another swap: I replaced one of the PCIe cards with an old PCI 1Gb Intel card. Then it became interesting again; now I can see 2x the PXE message in the BIOS (during boot), which indicates to me that at least both cards are initialized and there is no resource conflict (previously only one card showed such a message, I believe). But ifconfig still shows only 2 interfaces... I also tried a 2-port Realtek card, but there seems to be some PCIe version incompatibility with that one, as the system does not boot at all with it (not a single message from the BIOS even). So no success so far... P.S. Not sure how to read lsmod output... not a Linux guy. P.S.2 I have a spare SATA drive laying around; if time allows I'll try the clean-drive method suggested by sbv3000... Will report the outcome.
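Since reading lsmod came up: its columns are the module name, its size in bytes, and a use/reference count followed by the names of any modules that depend on it. A small demo of picking those columns apart (the sample lines below are illustrative, not output from the poster's box):

```shell
# lsmod columns: Module | Size | Used by (ref count, then dependents)
cat <<'EOF' > /tmp/lsmod.sample
Module                  Size  Used by
e1000e                245760  0
r8168                 491520  0
e1000                 163840  0
EOF
# A ref count of 0 means the driver is loaded but nothing holds it open;
# for a NIC driver that is normal when the interface is down or unconfigured.
awk 'NR > 1 { print $1 ": refs=" $3 }' /tmp/lsmod.sample
```

So a NIC driver appearing in lsmod only proves the module loaded; it does not prove DSM created an ethX interface for the card.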
  10. I just completed the build of my XPEnology box. Basically, following the available tutorials, it was a relatively flawless experience (even with a lot of playing around to try different options). Unfortunately the last step I did seems to have been a mistake and now I can't recover... So, to keep a long story short: my box has 3 NICs, one onboard (Realtek) and 2 PCIe cards (Intel). My intention was to use the onboard one for user/management access and have the 2 others bonded for the iSCSI connection to my home server, also using link aggregation. While I was testing, everything worked fine: all 3 NICs were presented to the XPEnology box and working properly (also in bonded mode). Then I decided to make some final touches, and during the process I realized that I had made a mistake while configuring the network: the NICs assigned to the bonded link were mixed (the Realtek and one of the Intels). So I decided to correct it; I deleted the bond connection... and to my surprise it also deleted the Intel NIC associated with it. Whatever I try now, I can only see the Realtek and one of the Intel NICs. The second Intel has disappeared permanently from DSM, so I can't recreate the bond. I tried to reinstall the system (from the loader start menu), but it did not help. Any idea how to get this card to be visible again?