
gnoboot 10.5-alpha within ESXi 5.5 and DSM 5.0 - SMART etc.


Dark-Sider


Hello,

 

I got myself a Supermicro A1SAM-2750F (http://www.supermicro.com/products/motherboard/Atom/X10/A1SAM-2750F.cfm) and put the board into a 24-bay 19" case. The SAS/SATA backplane is fed through an LSI 9240-8i SAS HBA and a port expander.

 

Since 8 cores seem a little much for just a NAS, I installed ESXi 5.5 onto a hard drive that is directly connected to the mainboard's onboard SATA (which worked out of the box).

 

I downloaded both the gnoboot 10.3 (ESXi) and 10.5 versions, patched the 10.3 vmdk to point to the 10.5 .ext2.img file, and gnoboot booted just fine.
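
For reference, the only thing to touch is the extent line in the vmdk descriptor; it looks roughly like this (sector count and file name are illustrative, adjust to the actual 10.5 image):

# flat extent pointing at the raw gnoboot image
RW 49152 FLAT "gnoboot-alpha_10.5.ext2.img" 0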

 

The 9240 is running the stock firmware without any RAID devices configured, so it simply presents the individual disks to the ESXi host.

 

After that I set up six 4 TB SATA drives, attached to ports 0 through 5 on the HBA, as physical RDMs within ESXi. I can access and see the drives from within the booted XPEnology, although with the suggested "Paravirtual SCSI" HBA my drives were reported as SSDs. Using smartctl I was able to pull all SMART values from the drives, but the disk manager within the DSM web interface refused to show me any SMART values, including the serial numbers.
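
For anyone who wants to reproduce the smartctl check, this is the kind of call I mean (/dev/sdc is where my first data disk ended up; -d sat forces ATA-over-SCSI tunneling and may not be needed with every virtual HBA):

# dump all SMART info from the first data disk
smartctl -d sat -a /dev/sdc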

 

So I tried all the different virtual HBAs VMware would let me choose. Finally, the LSI SAS HBA option displays all my drives as "HDD" and I get SMART readings (serial number, self-test possible, etc.). Unfortunately, the temperature of the drives shows as 0 °C. Not sure why...

 

Network performance seems very good at ~110 MB/s.

 

The gnoboot vmdk is at LUN 0:6; the data drives reside at LUNs 0:0 through 0:5.

 

There are still some other questions that are bothering me:

 

1) My first 4 TB drive is listed as /dev/sdc (drive 3), and I can't figure out why. What occupies sda and sdb? fdisk can't open those devices. I'd prefer to have my first data disk sitting at /dev/sda. dmesg | grep sda also returns nothing :sad:
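
For anyone wanting to dig: the standard kernel interfaces should show what claims the first two slots, nothing gnoboot-specific needed:

# every device the SCSI midlayer knows about, host by host
cat /proc/scsi/scsi
# which kernel driver sits behind each scsi host number
for h in /sys/class/scsi_host/host*; do echo "$h: $(cat $h/proc_name)"; done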

 

2) DSM reports a total of 12 internal disks (since it's using the DS3612 software). I think I need to change the internalportcfg value in synoinfo.conf to 0xffffff if I want to have 24 internal disks, maybe even 0x3ffffff if I can't figure out what occupies the first two slots. As far as I remember, there is a copy of synoinfo.conf within /etc.defaults. Which file gets copied from where to where, and at what time?
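
The masks are just one bit per disk slot, so they are easy to compute in a shell (assuming the bits count upward from sda):

# 24 internal disks -> lowest 24 bits set
printf '0x%x\n' $(( (1 << 24) - 1 ))    # prints 0xffffff
# 26 slots, in case the two phantom devices have to stay counted
printf '0x%x\n' $(( (1 << 26) - 1 ))    # prints 0x3ffffff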

 

3) Maybe the RS10613 software would contain some nice stuff for SAS drives, since the RS10613 probably uses a SAS HBA to connect up to 106 disks :smile:

 

4) While browsing through the synoinfo file I found the eunitseq="sdm,sdq" line. Any idea what that's about? An expansion unit?

 

bye+thanks,

Darky


So I've probably found what those two devices at the front of the device list are...

 

dmesg shows:

[Sun Apr 20 03:22:14 2014] ata_piix 0000:00:07.1: version 2.13
[Sun Apr 20 03:22:14 2014] scsi0 : ata_piix
[Sun Apr 20 03:22:14 2014] scsi1 : ata_piix
[Sun Apr 20 03:22:14 2014] ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
[Sun Apr 20 03:22:14 2014] ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
[Sun Apr 20 03:22:15 2014] stex: Promise SuperTrak EX Driver version: 4.6.0000.4
[Sun Apr 20 03:22:15 2014] Fusion MPT base driver 3.04.20
[Sun Apr 20 03:22:15 2014] Copyright (c) 1999-2008 LSI Corporation
[Sun Apr 20 03:22:15 2014] Fusion MPT SPI Host driver 3.04.20
[Sun Apr 20 03:22:15 2014] Fusion MPT SAS Host driver 3.04.20
[Sun Apr 20 03:22:15 2014] mptsas 0000:03:00.0: PCI INT A -> GSI 18 (level, low) -> IRQ 18
[Sun Apr 20 03:22:15 2014] mptbase: ioc0: Initiating bringup
[Sun Apr 20 03:22:15 2014] ioc0: LSISAS1068 B0: Capabilities={Initiator}
[Sun Apr 20 03:22:15 2014] mptsas 0000:03:00.0: setting latency timer to 64
[Sun Apr 20 03:22:15 2014] scsi2 : ioc0: LSISAS1068 B0, FwRev=01032920h, Ports=1, MaxQ=128, IRQ=18

So ata_piix gets added as scsi0 and scsi1. How do I get rid of them?
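
My first thought would be to just unload the module from the running system; no idea yet whether DSM copes with that, and the drive letters are handed out at probe time anyway, so a boot-time solution would be cleaner (untested):

# ata_piix claims scsi0/scsi1 but has no disks attached here, so
# unloading it should be safe; it fails if something still uses it
rmmod ata_piix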

 

Also, changing internalportcfg to 0xfffffff (and adjusting the usb and esata masks accordingly) did not increase the number of available drives. The disk overview still shows only six free slots, with disks 3 to 8 inserted.
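
For clarity, these are the kinds of lines I mean in /etc/synoinfo.conf (values illustrative; the three masks must not overlap, and possibly maxdisks has to be raised as well, I haven't verified that yet):

maxdisks="24"
internalportcfg="0xffffff"      # bits 0-23: internal slots
esataportcfg="0xff000000"       # bits above that: eSATA (illustrative)
usbportcfg="0xf00000000"        # remaining bits: USB (illustrative)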

 

bye,

Darky


While googling another, smaller issue, I found the solution to the problem of the first data disk not starting as drive 1.

 

After adding "kernel /zImage rmmod=ata_piix,r8169,r8169_new" (you could probably leave the Realtek modules out) to /boot/grub/menu_alpha.lst in the boot image, I was able to get rid of the two phantom disks.
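
In context, the entry in menu_alpha.lst ends up looking roughly like this (title/root lines from memory, only the rmmod=... parameters were added):

title gnoboot-alpha
root (hd0,0)
kernel /zImage rmmod=ata_piix,r8169,r8169_new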

 

The persisting problem is that the boot image vmdk (SCSI 1:0) still gets recognized as /dev/sdg (sda through sdf are my six data disks), which leads to the following log file entries:

 

/var/log/messages

Apr 21 01:16:39 Dark-NAS2 SystemInfo.cgi: disk_info_get.c:82 Failed to open /dev/sdg, errno=No such file or directory
Apr 21 01:16:39 Dark-NAS2 SystemInfo.cgi: disk_info_enum.c:42 Failed to get disk information sdg
Apr 21 01:16:46 Dark-NAS2 storagehandler.cgi: disk_info_get.c:82 Failed to open /dev/sdg, errno=No such file or directory
Apr 21 01:16:46 Dark-NAS2 storagehandler.cgi: disk_info_enum.c:42 Failed to get disk information sdg

 

/var/log/scemd.log

Apr 21 01:17:59 Dark-NAS2 scemd: disk_temperature_get.c:28 open /dev/sdg failed.
Apr 21 01:17:59 Dark-NAS2 scemd: external/external_fan_table_type_disk_temperature_ops.c:230 Invalid Disks temperature 0
Apr 21 01:17:59 Dark-NAS2 scemd: external/external_fan_config_table_lookup.c:148   Temperature Get fail

 

Logging those events every few seconds prevents the drives from spinning down and also makes the drives click every other second... (this is what got me started looking into what's accessing my drives...)

 

Any ideas on how to get rid of "sdg"?
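
The only idea I have so far (untested): mask the slot out via the port bitmaps again. With the six data disks on bits 0 through 5, an internalportcfg of 0x3f should tell DSM to stop treating slot 7 (sdg) as an internal disk at all:

# six data disks only -> bits 0-5 set
printf '0x%x\n' $(( (1 << 6) - 1 ))    # prints 0x3f
# then set internalportcfg="0x3f" in /etc/synoinfo.conf (and the
# etc.defaults copy), keeping the esata/usb masks clear of those bits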

 

bye,

Darky

