romansoft

Members

  • Content count: 12
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About romansoft

  • Rank: Newbie

  1. I found a solution which lets me boot from my USB stick. I simply created a CD image (.iso) with Plop (https://www.plop.at/en/bootmanager/index.html) whose sole function is to boot from USB (similar to "chainloading" with grub2). I've attached the .iso here in case somebody needs it; it should also work -not tested, though- even if your BIOS doesn't support USB boot at all (old BIOSes). Now simply add a CDROM device to your Proxmox VM and select this .iso as its image. Finally, configure: Options -> Boot order -> CDROM. You are done (see the CLI sketch below). Whenever you start the VM, it will boot from the CDROM, which in turn will jump into USB boot. Simple.

     The only (minor) drawback I've found with this method is that Jun's loader cannot hot-update its grub config/env (for some reason I still don't understand, grub cannot write to disk [*]), so you cannot "save" the default choice in Jun's loader menu. To be honest, it doesn't hurt too much (just edit Jun's loader .img "offline" and select another default if you need it).

     [*] You'll get an error from grub but you can ignore it. You'll also be prompted to press a key, which you can ignore too (after ~5 secs, booting will resume automagically :-)). Btw, you can get rid of both the error and the press-key prompt by editing Jun's image and simply deleting (or renaming) the grub/grubenv file ("ren grub/grubenv grub/grubenv_OFF" will do the trick). After doing so, Jun's loader (grub) won't autosave the chosen menu option.

     plpbt-usbboot.iso
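     For reference, here is a minimal sketch of the same setup done from the Proxmox shell instead of the GUI. It assumes VM id 101, an ISO storage named "local" and a free ide2 slot (all placeholders; the "--boot order=" syntax belongs to newer Proxmox releases, older ones use the letter-based syntax):

     # copy plpbt-usbboot.iso to the ISO storage first (e.g. /var/lib/vz/template/iso/)
     qm set 101 --ide2 local:iso/plpbt-usbboot.iso,media=cdrom   # attach the Plop ISO as a CDROM
     qm set 101 --boot order=ide2                                # boot the CDROM first; Plop then chainloads the USB stick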
  2. Loader version and type: 1.03b / 3617XS
     DSM version in use (including critical update): None (kernel crash when booting from the loader, so I cannot install DSM)
     Using custom modules/ramdisk? If yes, which one?: None
     Hardware details: Proxmox on HP Gen8.
     Problem: the 1.03b loader fails to boot (kernel crash) when the OVMF (UEFI) BIOS is selected. It only works when SeaBIOS is selected (which is Proxmox's default but lacks features); a quick CLI note on switching the BIOS type follows below. Detailed info & logs in the following post:
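     (Aside: for anyone who wants to reproduce this or switch back, the BIOS type can be changed from the Proxmox CLI as well as from the GUI; VM id 101 is a placeholder.)

     qm set 101 --bios seabios   # Proxmox default; 1.03b boots fine with this
     qm set 101 --bios ovmf      # UEFI firmware; this is the configuration that crashes for me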
  3. I've been idle for some time but I'm coming back with some answers I found by researching this a little.

     1/ In order to get a *full* serial console, you need to uncomment/comment these lines in grub.cfg:

     set extra_args_3617='earlycon=uart8250,io,0x3f8,115200n8 earlyprintk loglevel=15'
     #set extra_args_3617=''

     Then you can observe the complete boot process with a simple "qm terminal <vmid>" (the Proxmox-side setup is sketched below).

     2/ When doing so, I discovered what was happening:

     [ 0.000000] NX (Execute Disable) protection: active
     [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.10.105 #23739
     [ 0.000000] ffffffff814b6c18 ffffffff8187b1a1 0000000000000001 0000000000000001
     [ 0.000000] 00000000366b1000 000000000194f000 0000000000000bff fffffffeffbcd000
     [ 0.000000] 0000000100000000 0000000000000000 0000000000000000 000000000000000e
     [ 0.000000] Call Trace:
     [ 0.000000] [<ffffffff814b6c18>] ? dump_stack+0xc/0x15
     [ 0.000000] [<ffffffff8187b1a1>] ? early_idt_handler_common+0x81/0xa8
     [ 0.000000] [<ffffffff8188c20e>] ? efi_init+0x238/0x476
     [ 0.000000] [<ffffffff8188c1fb>] ? efi_init+0x225/0x476
     [ 0.000000] [<ffffffff8187f091>] ? setup_arch+0x43d/0xc50
     [ 0.000000] [<ffffffff814b5f8b>] ? printk+0x4a/0x52
     [ 0.000000] [<ffffffff8187b957>] ? start_kernel+0x7b/0x3b0
     [ 0.000000] RIP 0xfffffffeffbcd000

     I.e., I was getting a kernel crash at the very beginning of the kernel-loading stage. I compared it with a normal kernel load, which looks like this:

     [ 0.000000] NX (Execute Disable) protection: active
     [ 0.000000] SMBIOS 2.8 present.
     [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
     [ 0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
     [ 0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
     [ 0.000000] e820: last_pfn = 0x1ffdf max_arch_pfn = 0x400000000
     ...

     So in the first case the kernel is crashing at the firmware check phase (efi_init). I reviewed the Proxmox VM config and came across the root cause: the 1.03b loader fails to boot when the OVMF BIOS is selected. It works only when SeaBIOS is selected (which is Proxmox's default). Why am I using OVMF instead of SeaBIOS? Because only OVMF lets me permanently define a USB device as the primary boot device, and I need that in order to boot from my USB stick. I've just emailed Jun with this info to find out whether this is a known issue with 1.03b (the issue doesn't exist in 1.02b) and whether we can expect a fix. I'll update this post if I get positive feedback. Cheers, -r
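     In case it helps anybody reproduce this, the Proxmox side of the serial console is just the following (VM id 101 is a placeholder; the kernel arguments themselves come from the grub.cfg change quoted above):

     qm set 101 --serial0 socket   # add a virtual serial port backed by a unix socket
     qm start 101
     qm terminal 101               # attach to serial0 and watch the full boot log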
  4. Thanks. Basically the difference from my setup is that your boot disk is sata0 instead of USB. It sounds weird that the loader might not work properly when run from USB. Or maybe that's known behaviour for 1.03b? Another point: simply booting 1.02b in Proxmox showed the menu on the serial console, but that doesn't happen with 1.03b. Shouldn't it be the same?!? Cheers, -r
  5. My Proxmox VM's disks are already SATA (physical disks passed through). The boot/loader is a USB stick (my config looks roughly like the snippet below). Could you paste your current config (or a screenshot)? Thx.
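     For comparison, the relevant part of a VM config along these lines (/etc/pve/qemu-server/<vmid>.conf) looks more or less like this; disk ids and the USB vendor:product pair are placeholders:

     bios: ovmf                                         # OVMF so the USB stick can be set as boot device
     sata0: /dev/disk/by-id/ata-WDC_WD40EFRX-xxxxxxxx   # physical disk passed through to DSM
     sata1: /dev/disk/by-id/ata-WDC_WD40EFRX-yyyyyyyy
     usb0: host=xxxx:yyyy                               # USB stick holding Jun's loader
     serial0: socket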
  6. So are you confirming that Proxmox's serial console works for you with 1.03b? (It didn't work for me.) My VM already has a serial console defined (so your first command isn't necessary). Could you run "qm terminal 101" and tell me whether that works for you? I think it should be equivalent to the socat command (see below). For me, neither "qm terminal" nor socat works with 1.03b (both work if I boot 1.02b, as I already mentioned in my post). Cheers, -r
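     For reference, this is the socat invocation I consider equivalent to "qm terminal"; the socket path is assumed from the usual Proxmox layout and the VM id 101 is a placeholder:

     # via the Proxmox tooling
     qm terminal 101
     # raw connection to the same serial socket
     socat UNIX-CONNECT:/var/run/qemu-server/101.serial0 STDIO,raw,echo=0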
  7. Hi, I have DSM 6.1 (1.02b loader) on an HP Gen8. I'm using a USB stick (with synoboot.img burnt onto it; see the dd sketch below), which works both on bare metal and in Proxmox, so I can switch between starting DSM on the HP Gen8 without any virtualization layer, or starting DSM from Proxmox (running on the HP Gen8). I'd recommend this setup: if your Proxmox fails, you can pick up your physical disks (I'm currently passing them through to Proxmox) and the USB stick and boot directly on the server (even on another, new server!).

     What's the procedure to update to DSM 6.2? I thought the new 1.03b loader would support both DSM 6.2 and 6.1, so my idea was to burn the 1.03b loader, start DSM 6.1, test the new loader in that environment, and finally upgrade DSM (via the GUI). But I tried it and it isn't working, so maybe I'm wrong and 1.03b only supports 6.2. Could somebody confirm? And a second question: I was successfully using the serial console on Proxmox to monitor the boot process (qm terminal <vmid>) with 1.02b, but it doesn't seem to work with the 1.03b loader. Is this a known issue? Any workaround? Thank you for your help.
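     In case it's useful to someone, this is the kind of command I use to burn the loader image onto the stick from a Linux box (/dev/sdX is a placeholder; double-check the device name or you'll wipe the wrong disk):

     dd if=synoboot.img of=/dev/sdX bs=1M conv=fsync status=progress
     sync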
  8. romansoft

    ESXi 6.5 + DSM 6.1 + WD Red. Right performance?

    And you're probably right. This is especially true for FreeNAS/ZFS (where low-level optimizations are done by the OS; the hypervisor may interfere). In the case of XPenology, well, it's simply Linux with software RAID (then LVM + btrfs). If you monitor disk health (SMART) from outside the VM (e.g. as sketched below), you're done. Cheers, -r
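    As an example of what I mean by monitoring SMART from outside the VM: on the ESXi host something along these lines should work (the device identifier is a placeholder):

    esxcli storage core device list                                        # find the identifier of each WD Red
    esxcli storage core device smart get -d t10.ATA_____WDC_WD40EFRX_xxxx  # dump that disk's SMART attributes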
  9. romansoft

    ESXi 6.5 + DSM 6.1 + WD Red. Right performance?

    Please open a new thread for installation/upgrade issues. This thread was supposed to be about *PERFORMANCE*.

    === Btw, quick responses/ideas:

    1/ You can import Jun's ovf first into VMware Workstation (a lot more flexible than ESXi) on your laptop, play with it there (change VM compatibility, etc.), and finally "upload" it to ESXi directly from VMware Workstation (there's an option for that). This way the single .vmdk file gets "translated" into vmdk + flat-vmdk files (which ESXi 6.5 uses).

    2/ For the migration (DSM 6.0.2 -> 6.1), once you have the full DSM 6.1 VM uploaded to ESXi 6.5, pick up the 1st disk (2 files: vmdk + flat) and copy it to the "legacy DSM" VM dir, then create a new disk with "open existing disk" (a vmkfstools alternative is sketched below). Finally change the boot priority or, better, simply edit the VM hardware and change the port number so your new boot disk starts the VM instead of the old DSM boot disk (you can destroy the old boot disk if you prefer). Upon starting the VM, you'll get into the DSM migration wizard (DSM will detect you're migrating).

    3/ The DSM migration wizard has 2 choices: 1/ try to preserve config & data; or 2/ only preserve data but not config. I did a quick test with option 1, on another VM (with fake data), and it failed (the migration seemed ok but after rebooting I didn't get an IP address). So in the end I upgraded using the "only preserve data" choice (you'll lose your configs!!!!!) and it worked.

    === Again, please be so kind as to open a NEW thread for installation issues. That was not the purpose of this thread and I don't have any valid answer regarding the original question: performance. Thanks.
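    As an alternative to copying the vmdk + flat files by hand, the boot disk can also be cloned into the legacy VM's directory with vmkfstools from the ESXi shell; datastore and directory names below are placeholders:

    vmkfstools -i /vmfs/volumes/datastore1/dsm61-new/dsm61-new.vmdk /vmfs/volumes/datastore1/dsm-legacy/synoboot-61.vmdk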
  10. Hi, I have an HP Gen8 Microserver with upgraded CPU & RAM (Xeon + 16GB RAM) running ESXi 6.5, so I installed XPenology/DSM 6.1 (yes, the new version) in a VM (with Jun's 1.02a loader). I've attached 3 WD Red disks, in *AHCI* mode, and I've configured them in the VM as RDM disks (I don't have an extra SATA card, so I can't use VT-d/passthrough). After solving a first huge performance problem (a horrible 2 MB/s!!!) by disabling the VMware AHCI driver and enabling the legacy one (full story here: http://www.nxhut.com/2016/11/fix-slow-d ... wahci.html), these are the current speeds I'm getting *inside* DSM:

      Read:
      - iops: 109
      - performance: 154 MB/s
      - latency: 16.2 ms

      Write:
      - iops: 94
      - performance: 35 MB/s
      - latency: 17.6 ms

      (data obtained using DSM 6.1: Storage admin -> HDD/SSD -> Test bank; a quick command-line cross-check is sketched below)

      For me, 154 MB/s read is ok and in line with WD Red specs. But isn't 35 MB/s write quite slow???? Are these values correct in your experience? Which values are you getting on WD Red disks? Please could you contribute to this thread by providing your values, specifying:
      - virtualization / baremetal (e.g. esxi, vbox, baremetal)
      - hw specs or server model
      - disk model

      Thank you. Cheers, -r
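      If you want to double-check the numbers DSM reports, a rough sequential test can be run over SSH inside DSM, assuming its dd supports direct I/O flags (paths are examples; direct I/O bypasses the page cache):

      # sequential write, ~1 GB
      dd if=/dev/zero of=/volume1/ddtest bs=1M count=1024 oflag=direct
      # sequential read of the same file
      dd if=/volume1/ddtest of=/dev/null bs=1M iflag=direct
      rm /volume1/ddtest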
  11. romansoft

    System failed to create [RAID Group 1]

    Bingo! Thanks a lot, billat29. Cheers, -r
  12. Hi, I've just created a VM in VMware Workstation for testing XPenology (with the idea of moving to ESXi if the test is ok):
      - hw type: Workstation 11.x (ESXi compatible)
      - sata disk: synoboot.vmdk (DS3615xs 6.0.2 Jun's Mod V1.01)
      - sata disk 2: new disk (8gb)
      - serial port: named pipe (only to have a look at the console)

      With this config I can boot XPenology and do a basic install (DSM_DS3615xs_8451.pat) / config. I had to use option 1 from Jun's menu (yes, baremetal, even though it's a VM!!); if the ESXi option is used, DSM doesn't detect any disk and it's impossible to proceed with the DSM install. After the basic install, I shut down the VM and added virtual hw:
      - sata disk 3: new disk (2gb)
      - sata disk 4: new disk (2gb)
      - sata disk 5: new disk (2gb)

      The problem is that I cannot initialize any of those 3 disks in Storage Manager. In particular, if I try to create a RAID group, I get this error: "System failed to create [RAID Group 1](Device Type is [basic]) with disk [5]." I also noticed that, during the RAID creation process, DSM shows a wrong capacity (tons of TBs instead of the 2gb capacity). I tried all kinds of different RAID levels (5, etc.) but the problem is with those 3 disks, which btw are similar to disk 2 (8gb) that I created for testing (same kind of virtual SATA device, as the .vmx snippet below shows). I can only create a RAID group using disk 2 (which is where DSM resides, and was initialized during the DSM installation). Any ideas? What's going on? I'd appreciate some help, please. Cheers, -r
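      For completeness, the extra disks are declared in the VM's .vmx more or less like this (file names are placeholders); they are plain virtual SATA disks on the same controller as disk 2:

      sata0:2.present = "TRUE"
      sata0:2.fileName = "disk3.vmdk"
      sata0:3.present = "TRUE"
      sata0:3.fileName = "disk4.vmdk"
      sata0:4.present = "TRUE"
      sata0:4.fileName = "disk5.vmdk"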