XPEnology Community

dark alex

Member
  • Posts

    63

Posts posted by dark alex

  1. Hi

     

    Feedback for Alpha 10 with Trantor's DSM Beta 5.0 on ESXi 5.0

     

    I dd'ed the image onto sda (a 64 MB LSILogic SCSI drive - did I understand correctly that this version should support that?) and modified menu.lst to default 1 (menu_me).

    Then I modified menu_me.lst to default 2 (the DSM 5.0 entry).
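
    In shell terms the whole change is roughly this (a minimal sketch; the image file name is a placeholder, and the sed lines assume the usual "default N" syntax in the grub menu files):

    # write the boot image, then point the grub defaults at the DSM 5.0 entry
    dd if=gnoboot.img of=/dev/sda
    mount /dev/sda1 /mnt
    sed -i 's/^default.*/default 1/' /mnt/boot/grub/menu.lst     # -> menu_me
    sed -i 's/^default.*/default 2/' /mnt/boot/grub/menu_me.lst  # -> DSM 5.0 entry
    umount /mnt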

     

    After the system booted I tried to install the .pat file using DS Assistant, and later once again with Web Assistant.

    Neither way worked. The installation runs very fast (less than two minutes). The VM reboots, but boots back into the install environment...

     

    If I start a rescue system I can see that every disk now has a RAID partition, which tells me it did do something...

     

    I also did not have to "reflash" gnoboot!

     

    //Edit: I tried to boot gno-alpha and it worked. Did I misunderstand something?

  2. Hi!

     

    I hope I did not make a mistake myself that is causing the following issue:

    I got the DSM 5.0 beta up and running on ESXi 5.0 using a 256 MB disk as the primary master IDE device for booting, plus 4 VMware paravirtualized disks (20, 1, 1, 1 GB), just for testing before migrating my real data :smile: ...

    So now, when I try to create an SHR volume over disks 2, 3 and 4 (20 GB, 1 GB, 1 GB), nothing happens after finishing the wizard. I just end up back at the overview in Storage Manager.

    Note: the third 1 GB disk (disk 5 overall) is for later testing of expanding the existing SHR.

    No volume or disk group gets created. And when I try the same with just a disk group, it does not work either; the symptoms are the same.

     

    I did the following to get DSM 5.0 running:

    I downloaded the gnoboot alpha5 image as gnoboot.img,

    extracted the alpha8 update into the folder gno8,

    and downloaded the alpha9 zImage as gno9z.

    #Copy the image onto the boot disk
    dd if=gnoboot.img of=/dev/sda
    
    #Resize the partition: delete it, then recreate it spanning the whole disk
    fdisk -c=dos /dev/sda
      d        # delete the existing partition
      n        # create a new partition
         p     # primary
               # (two empty inputs: accept the default start and end
               #  of the partition, i.e. span the whole disk)
      w        # write the new partition table and exit
    
    #FilesystemCheck is required before expanding - the y accepts creating the lost+found dir
    e2fsck -f /dev/sda1
      y
    
    #Expand the FS to the new partition size
    resize2fs /dev/sda1
    
    #Mount the drive
    mount /dev/sda1 /mnt
    
    #Upgrade to the alpha8 grub config
    cp -R gno8/boot/grub/* /mnt/boot/grub/
    
    #Install the alpha9 zImage
    rm /mnt/zImage
    cp gno9z /mnt/zImage

     

    Then I edited the grub configs to boot into 5.0 mode (grub.cfg default set to 1, and the menu_me.lst default set to - I think it was - 2, the entry that says 5.0 in the title. I checked this while booting - it works correctly).
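
    A quick way to double-check those defaults without rebooting, assuming the boot partition is still mounted at /mnt as in the steps above:

    # print every line mentioning "default" in the grub configs
    grep -rn default /mnt/boot/grub/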

     

    I created a snapshot to return to after DS Assistant overwrites the boot disk.

    Note: my data disks are set up as "Independent", so snapshots do not affect them. Only the drive gnoboot lives on is covered by the snapshot.
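
    For reference, "Independent" is just a per-disk mode flag in the VM's .vmx file; a sketch of the relevant lines (the scsi0:N slots match my layout below, everything else as VMware writes it):

    scsi0:0.mode = "independent-persistent"
    scsi0:1.mode = "independent-persistent"
    scsi0:2.mode = "independent-persistent"
    scsi0:3.mode = "independent-persistent"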

    Then I installed the 5.0 beta, which I downloaded from this forum.

     

    After the installer rebooted the VM, I stopped it, reverted to the snapshot and started it again.

    It booted, and DS Assistant finished successfully. I can now log on via the DSM web interface, and I have the problem described above!

     

    Any ideas?

     

    //Edit: System Log:

    Information	System	2014/03/03 00:13:30	admin	System failed to create [Volume 1] (Device Type is [SHR]) with disk [3, 4, 5].
    Information	System	2014/03/03 00:13:30	admin	System starts to create [Volume 1] (Device Type is [SHR]) with disk [3, 4, 5].
    Information	System	2014/03/03 00:12:34	admin	System failed to create [Disk Group 1] (Device Type is [SHR]) with disk [3, 4, 5].
    Information	System	2014/03/03 00:12:34	admin	System starts to create [Disk Group 1] (Device Type is [SHR]) with disk [3, 4, 5].

    I guess it counts them as disks 3-5 because it expects a disk on the secondary IDE channel as disk 2? (A quick shell check for this follows my setup list below.)

    My setup is:

    CDROM at IDE 0:0

    256MB at IDE 0:1

    Nothing at IDE 1:0

    Nothing at IDE 1:1

    20GB PV at SCSI0:0

    1GB PV at SCSI0:1

    1GB PV at SCSI0:2

    1GB PV at SCSI0:3
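
    To verify that numbering theory, here is a quick check from an SSH/telnet shell on the box (a sketch; what you actually see depends on how DSM enumerated the controllers):

    # list every block device the kernel sees - gaps in the
    # sda..sde sequence reveal reserved or empty slots
    cat /proc/partitions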

     

    //Edit2:

    Oh! When I try to initialize the 20 GB disk as a single disk, it works... Probably 1 GB is just too small?

     

     

    //Edit 3:

    I solved it myself...

    1 GB is indeed too small for a disk to work properly. If you try to use it as a "Basic" volume, it complains that your volume size has to be less than about 180 TB, so it is clearly misreading the size. I tried with 8 GB and it worked properly!
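
    In case anyone wants to rebuild undersized test disks at a workable size, that can be done from the ESXi shell (a sketch; datastore path and file names are placeholders for your environment):

    # recreate the 1 GB test disks as 8 GB thin-provisioned vmdks
    vmkfstools -c 8G -d thin /vmfs/volumes/datastore1/xpe/data2.vmdk
    vmkfstools -c 8G -d thin /vmfs/volumes/datastore1/xpe/data3.vmdk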

     

     

    But I still have one open question... Is it possible to hide the gnoboot partition in DSM?

    I just do not want to see it as an "unused disk", or ideally not see it anywhere at all.

  3. Hi

     

    I have the following problem:

    I am running XPE under VMware on an N40L.

    Everything works well except for one problem:

     

    Every time I plug a USB HDD into the server and pass it through to the XPE VM, it detects the HDD as an internal drive.

    Well... when I then format it and create a share on it, after the next reboot it is considered uninitialized.
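
    (For context, the passthrough itself is the usual .vmx USB autoconnect setup; a sketch with placeholder vendor/product IDs, and note the exact key syntax can vary between ESXi versions:)

    usb.present = "TRUE"
    usb.autoConnect.device0 = "vid:0x1234 pid:0x5678"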

     

    So... I can neither set it up as an external drive nor use it as an internal one.

    What's the cause, and what can I do?

     

    I need to perform backups to this drive.

     

    I have one SATA port free, so I could build the HDD into the device permanently, but that would require being able to use some kind of SCSI controller (like PVSCSI).

    But as soon as I attach my HDDs to a PVSCSI controller, XPE shuts down immediately after booting...

    Well, with a power-off that is perfectly logical: when switching off, you may have flipped a single bit on one of the disks in the RAID (e.g. an interrupted write; with RAID 5, Data1 and Data2 might already be written but the parity not yet => inconsistency).

    Because of that it has to reconstruct its RAID.
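
    A toy illustration of that inconsistency, with made-up byte values, purely to show the parity relation:

    # RAID 5 keeps parity = data1 XOR data2; if a write updates the data
    # blocks but is cut off before the parity block, this equation no
    # longer holds and the array has to resynchronise
    printf 'data1=0x%02x data2=0x%02x parity=0x%02x\n' \
        $((0xA5)) $((0x3C)) $((0xA5 ^ 0x3C))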

     

    However, I did a normal shutdown. And on top of that, it explicitly told me for two disks that the "system partitioning is faulty".

    In the meantime it has started checking the RAID and should be done with it by now... All a bit strange.

  5. Hello

     

    I just restarted XPE, and now it says my volume has crashed.

     

    For a RAID 5 (disks 2, 3, 4) it reports "system partitioning failed" on two(!) of the disks.

    But I can still access my data normally, so that report is probably wrong... What does this mean for me, and how do I get rid of it?

    I mean, if two disks had really failed, my data would be gone...
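
    One way to see what is really going on underneath the GUI is the raw md state from an SSH shell (a sketch; on DSM the data array is usually /dev/md2, but check /proc/mdstat first):

    # show all software RAID arrays, then the member states
    # (active/failed) of the data array
    cat /proc/mdstat
    mdadm --detail /dev/md2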

  6. Hi all,

     

    I would like to know what you would do in my situation:

    I have XPE running on an HP MicroServer N40L under VMware ESXi 5.0.

    The reason for VMware is that I have to run another small machine on it.

     

    Now my question is: how would you set up the data storage if you were me?

    I have three hard disks dedicated to NAS data storage: 2 TB, 2 TB and 3 TB.

    A fourth HDD of 1 TB is used as the VMware datastore that the system VMs are placed on.

    I see three ways to set up the storage:

     

    Option 1: Semi-HW-RAID

    Use the onboard (HP) RAID controller to create a RAID 5 over all three disks, and hand the resulting logical disk over to XPE as a raw device; XPE will then see a single data disk connected.

    Advantages: the RAID is assembled before any OS sees the drives, so VMware only has to deal with one drive.

    Disadvantages: a) If my board dies I might not find a replacement... oh! It's HP :wink: You know what I mean...

    b) If I ever decide to move to another device, I will need to back up and restore all data.
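
    (For what it's worth, handing the controller's logical volume to the VM raw is done with a raw device mapping from the ESXi shell; a sketch, with the naa.* identifier and datastore path as placeholders:)

    # find the device's naa.* name, then wrap it in a physical-mode
    # RDM vmdk that can be attached to the XPE VM
    ls /vmfs/devices/disks/
    vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXX \
        /vmfs/volumes/datastore1/xpe/raid5-rdm.vmdk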

     

    Option 2: SW-RAID5

    Pass all three disks to XPE and let DSM create a software RAID.

    Adv: I can pull the disks out and put them into a real DiskStation at any time without being forced to back up and restore.

    Disadv: VMware has to deal with three disks.

     

    Option 3: Container

    Create the semi-HW RAID as in option 1, but instead of passing the raw "device" to XPE, use it as an ESX datastore and create one big vHD on it for XPE.

    In addition to the advantages and disadvantages of option 1, there is the plus that I could also use the storage for other vHDs.

    Another minus is that there is one more layer between the software and the block actually being written...

     

    Which way would you choose?
