XPEnology Community

romansoft

Everything posted by romansoft

  1. Even more difficult... did you (or somebody else) manage to swap wires (in an easy way) in the HP Gen8, so that the "5th SATA" port (the one we usually plug an SSD into) could be exchanged with the "1st SATA" port (one of the four normal HD slots)? This would bring two benefits: 1/ (main) Take advantage of full SSD speed (the 1st SATA port is SATA3 while the 5th is SATA2, so running an SSD on the 5th port runs it at half speed). 2/ Direct boot from the SSD without nasty tricks (currently I have to boot an SD card with a special boot loader that in turn jumps into the SSD boot loader). Cheers, -r
  2. I suspect that, in my case, the root cause is some dynamic parameter that changes between boots (XPEnology and/or Proxmox reboots), probably due to some recent change in Proxmox, which finally leads DSM to "detect" that the hardware has changed (though that doesn't explain why a restored backup does work; it shouldn't either). In your experience, which kinds of hardware changes do you think DSM could be detecting? See the (Proxmox) config I previously posted (1st post in this thread). Regarding @haydibe's config, it seems simpler than mine, but he took a different approach: I bet he included updated drivers in extra.lzma (so, for instance, vmxnet3 works). My approach was to keep the very same synoboot img (no modifications apart from the initial config), valid across different DSM versions. Maybe my approach is worse, since I have to maintain some hacks like the use of the e1000e driver, which is not supported by Proxmox. I'll think about it and maybe I'll go for @haydibe's config. Thx!
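     One hedged idea (an illustration, not a confirmed fix): the identifiers DSM could plausibly key on are the NIC MAC address, the SMBIOS UUID and the emulated machine type, and all of these can be pinned explicitly in the Proxmox VM config so they cannot drift across a QEMU upgrade. Sketch of the relevant keys in /etc/pve/qemu-server/<vmid>.conf (the machine value below is a placeholder; the UUID and MAC are the ones already in my posted config):

        # pin the i440fx machine version so a Proxmox/QEMU upgrade does not change the emulated chipset
        machine: pc-i440fx-5.2
        # keep the SMBIOS UUID stable
        smbios1: uuid=9ba1da8f-1321-4a5e-8b00-c7020e51f8ee
        # keep the NIC MAC stable (in my case it is set inside the args line: mac=00:XX:XX:XX:XX:XX)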
  3. Hi all, DSM gets "continually broken" (I mean, I fix it but then it breaks again) after a Proxmox 6 to 7 upgrade. Jun's loader shows (serial terminal): "lz failed 9 / alloc failed / Cannot load /boot/zImage". But please let me explain all the details, and read until the end, because the situation is very strange...

     I'm using JUN'S LOADER v1.03b - DS3617xs in a Proxmox VM (on an HP Gen8), with DSM 6.2.3-25426. I had been using this config for months without any problem on Proxmox **6.x** (this seems relevant). The other day I decided to upgrade Proxmox to **7.x**. I upgraded Proxmox and rebooted. All my VMs booted up perfectly... except the DSM one. I noticed I couldn't reach my network shares on DSM, and after a quick investigation I discovered that DSM had booted in "installation mode", showing this message in the web interface: "We've detected that the hard drives of your current DS3617xs has been moved from a previous DS3617xs, and installing a newer DSM is required before continuing." I thought the DSM partition might have been corrupted somehow, or (more likely) that Proxmox 7 introduced some kind of "virtual hardware change", so DSM now thinks it has been booted on different hardware. This last option is very plausible because Proxmox 7 uses QEMU 6.0, while the latest Proxmox 6 uses QEMU 5.2. Other changes may also have come with the new Proxmox version (for instance, I've read something about the MAC assigned to a bridge interface being different).

     What I did was: 1/ Power off the DSM VM. 2/ Back up partition 1 of all 4 disks (i.e. the md0 array, which contains the DSM OS); a sketch of this backup/restore step is included after the config below. 3/ Power on the DSM VM. 4/ Follow the instructions in the web interface: I chose "Migrate" (which basically keeps my data and config untouched), selected a manual installation of DSM and uploaded the .pat corresponding to the very same version I was already running before the problem, i.e. DSM_DS3617xs_25426.pat (DSM 6.2.3-25426). I didn't want to downgrade, and of course I shouldn't upgrade, because the next version is 6.2.4, which is incompatible with Jun's loader. 5/ The migration finished, DSM rebooted and... FIXED!!! DSM was working again with no loss of data or config.

     *But* another problem arose later... When my server (Proxmox) was rebooted again, the DSM VM broke again, this time in a very different way: I couldn't ping my DSM VM, and after investigating I concluded the DSM kernel was not being loaded at all. Indeed, I attached a serial terminal to the DSM VM and could see Jun's loader stuck at the very beginning with these messages: "lz failed 9 / alloc failed / Cannot load /boot/zImage". No idea why this is happening nor what these messages really mean (well, it seems obvious the kernel is not being loaded, but I don't know why)!!

     I managed to fix it again (yeah xD) by: 1/ Powering off the DSM VM. 2/ Restoring partition 1 of all my disks from the backup I took when solving the former problem. 3/ Powering on the DSM VM. 4/ Confirming the loader worked again and that I got to the same point where DSM required a migration. 5/ "Migrating" exactly the same way I had done minutes before :). FIXED!!

     What's the problem then? Easy... every time I reboot my server (so Proxmox reboots), my DSM VM breaks again with the second error ("lz failed..." etc.), i.e. the loader's kernel is not loaded. I can fix it temporarily, but sooner or later I'll need to reboot Proxmox again and... boom again. Are any of these problems familiar to you? Any clue about how to solve this or, at least, some ideas I should focus my investigation on? PLEASE, help!!
:_( PS: My Proxmox VM config (a.k.a. QEMU config), with some info redacted:

     args: -device 'nec-usb-xhci,id=usb-ctl-synoboot,addr=0x18' -drive 'id=usb-drv-synoboot,file=/var/lib/vz/images/100/synoboot_103b_ds3617_roman.img,if=none,format=raw' -device 'usb-storage,id=usb-stor-synoboot,bootindex=1,removable=off,drive=usb-drv-synoboot' -netdev type=tap,id=net0,ifname=tap100i0 -device e1000e,mac=00:XX:XX:XX:XX:XX,netdev=net0,bus=pci.0,addr=0x12,id=net0
     bios: seabios
     boot: d
     cores: 4
     cpu: IvyBridge
     hotplug: disk,network,usb
     memory: 2048
     name: NAS-Synology
     numa: 0
     onboot: 1
     ostype: l26
     sata0: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4NXXXXXXX,size=2930266584K
     sata1: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4NYYYYYYY,size=2930266584K
     sata2: /dev/disk/by-id/ata-WDC_WD30EFRX-68EUZN0_WD-WCC4NZZZZZZZ,size=2930266584K
     sata3: /dev/disk/by-id/ata-WDC_WD40EFRX-68N32N0_WD-WCC7KAAAAAAA,size=3907018584K
     scsihw: virtio-scsi-pci
     serial0: socket
     smbios1: uuid=9ba1da8f-1321-4a5e-8b00-c7020e51f8ee
     sockets: 1
     startup: order=1
     usb0: host=5-2,usb3=1
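     A minimal sketch of the partition-1 backup/restore step described above, run on the Proxmox host with the DSM VM powered off. The device names /dev/sda..sdd are placeholders; check which block devices correspond to the passed-through disks (lsblk, or the by-id paths in the config) before copying anything:

        # back up partition 1 (the DSM system partition, member of md0) of each disk
        mkdir -p /root/dsm-backup
        for d in sda sdb sdc sdd; do
            dd if=/dev/${d}1 of=/root/dsm-backup/${d}1.img bs=1M status=progress
        done

        # restoring is the same copy in the opposite direction
        for d in sda sdb sdc sdd; do
            dd if=/root/dsm-backup/${d}1.img of=/dev/${d}1 bs=1M status=progress
        done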
  4. @IG-88 I didn't find any useful info in the provided link, but thank you anyway for trying to help. Finally, after some research, I managed to get a similar config *working* (it was a little bit tricky). 1/ I guessed the kernel panic was due to some USB driver problem (related to my USB stick being loaded), so I simply changed the synoboot device from the USB stick to an emulated USB disk (a QEMU storage disk). 2/ Also, in order to install/upgrade DSM (avoiding the infamous "error 13"), I had to set the proper vid/pid in synoboot.img (grub.cfg), which in the case of a QEMU storage disk is:

     # QEMU STORAGE DISK
     set vid=0x46f4
     set pid=0x0001

     So not exactly what I wanted (I'd have preferred to keep using the USB stick, but I couldn't find any QEMU option to alter the USB driver/properties when the device is passed through from host to VM), but it works. PS: I don't like the e1000e trick (via args) either, but it seems Proxmox won't support e1000e (according to the Proxmox forum, although I don't have the exact link at hand). Btw, this is my final config for the Proxmox VM: For editing grub.cfg inside synoboot.img, you can use any Linux shell (a sketch of the commands follows below). Done!
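     A hedged sketch of one way to edit grub.cfg inside synoboot.img from a Linux shell (an illustration under the usual assumption that the loader's grub config lives on the image's first partition; mount point and editor are placeholders):

        # map the image with partition scanning; --show prints the loop device, e.g. /dev/loop0
        losetup -fP --show synoboot.img
        mkdir -p /mnt/synoboot
        mount /dev/loop0p1 /mnt/synoboot

        # edit vid/pid (or anything else) in the loader's grub config
        vi /mnt/synoboot/grub/grub.cfg

        # clean up
        umount /mnt/synoboot
        losetup -d /dev/loop0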
  5. Hi, I booted from loader 1.03b-ds3617xs, entered Syno's web GUI and chose to install the latest DSM (6.2.2-24922-4). Then I rebooted, and now I always get a kernel panic when booting. The Proxmox host is Ivy Bridge: CPU model name: Intel(R) Xeon(R) CPU E3-1265L V2 @ 2.50GHz. This is my Proxmox config, which basically includes: the e1000e hack. Since Proxmox may only boot from USB when using UEFI, and 1.03b doesn't support UEFI, I selected SeaBIOS but added another hack: booting from a CD image which then chain-loads to USB. The 1.03b loader is embedded on a physical USB stick. Boot process log (ends with a kernel panic): Any idea why it's failing? Can anybody post a full working Proxmox config (/etc/pve/qemu-server/XXX.conf)? Thanks in advance for your help. Cheers. -r
  6. I found a solution which lets me boot my USB stick. I simply created a CD image (.iso) with Plop (https://www.plop.at/en/bootmanager/index.html) whose sole function is to boot USB (similar to "chainloading" with grub2). I've attached the .iso here (just in case somebody needs it; it should also work -not tested, though- even if your BIOS doesn't support USB boot at all -old BIOSes-). Now simply add a CDROM device to your Proxmox VM and select this .iso as the image. Finally, configure: Options -> Boot order -> CDROM. You are done. Whenever you start the VM, it will boot from the CDROM, which in turn will jump into USB boot. Simple. The only (minor) drawback I've found with this method is that Jun's loader cannot hot-update its grub config/env (for a reason I still don't get, grub cannot write to disk [*]), so you cannot "save" the default choice in Jun's loader menu. But to be honest, it doesn't hurt too much (just edit Jun's loader .img "offline" and select another default if you need it). [*] You'll get an error from grub, but you can ignore it. You'll also be prompted to press a key, but you can ignore that too (after ~5 secs, booting resumes automagically :-)). Btw, you can get rid of both the error and the press-key prompt by editing Jun's image and simply deleting (or renaming) the grub/grubenv file ("ren grub/grubenv grub/grubenv_OFF" will do the trick). After doing so, Jun's loader (grub) won't autosave the chosen menu option. plpbt-usbboot.iso
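     The same CDROM setup can also be done from the Proxmox CLI instead of the GUI; a brief sketch, where the VM id (100) and the ISO storage path are placeholders:

        # copy the Plop ISO to the local ISO storage first, e.g. /var/lib/vz/template/iso/
        qm set 100 --ide2 local:iso/plpbt-usbboot.iso,media=cdrom
        # boot from CD-ROM first; Plop then chain-loads the USB stick
        qm set 100 --boot d    # legacy syntax, matches "boot: d" in the config posted above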
  7. Loader version and type: 1.03b / 3617XS
     DSM version in use (including critical update): None (kernel crash when booting from the loader, so I cannot install DSM)
     Using custom modules/ramdisk? If yes, which one?: None
     Hardware details: Proxmox on HP Gen8
     Problem: the 1.03b loader fails to boot (kernel crash) when the OVMF (UEFI) BIOS is selected. It works only when SeaBIOS is selected (which is Proxmox's default, but it lacks features). Detailed info & logs in the following post:
  8. I've been idle for some time, but I'm coming back with some answers I found by researching this a little.

     1/ In order to have a *full* serial console, you need to uncomment/comment these lines in grub.cfg:

     set extra_args_3617='earlycon=uart8250,io,0x3f8,115200n8 earlyprintk loglevel=15'
     #set extra_args_3617=''

     Then you can observe the complete boot process with a simple "qm terminal <vmid>".

     2/ When doing so, I discovered what was happening:

     [ 0.000000] NX (Execute Disable) protection: active
     [ 0.000000] CPU: 0 PID: 0 Comm: swapper Not tainted 3.10.105 #23739
     [ 0.000000] ffffffff814b6c18 ffffffff8187b1a1 0000000000000001 0000000000000001
     [ 0.000000] 00000000366b1000 000000000194f000 0000000000000bff fffffffeffbcd000
     [ 0.000000] 0000000100000000 0000000000000000 0000000000000000 000000000000000e
     [ 0.000000] Call Trace:
     [ 0.000000] [<ffffffff814b6c18>] ? dump_stack+0xc/0x15
     [ 0.000000] [<ffffffff8187b1a1>] ? early_idt_handler_common+0x81/0xa8
     [ 0.000000] [<ffffffff8188c20e>] ? efi_init+0x238/0x476
     [ 0.000000] [<ffffffff8188c1fb>] ? efi_init+0x225/0x476
     [ 0.000000] [<ffffffff8187f091>] ? setup_arch+0x43d/0xc50
     [ 0.000000] [<ffffffff814b5f8b>] ? printk+0x4a/0x52
     [ 0.000000] [<ffffffff8187b957>] ? start_kernel+0x7b/0x3b0
     [ 0.000000] RIP 0xfffffffeffbcd000

     I.e., I was getting a kernel crash at the very beginning of the kernel-loading stage. I compared the crash with a normal kernel load, which looks like this:

     [ 0.000000] NX (Execute Disable) protection: active
     [ 0.000000] SMBIOS 2.8 present.
     [ 0.000000] DMI: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.11.0-0-g63451fca13-prebuilt.qemu-project.org 04/01/2014
     [ 0.000000] e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
     [ 0.000000] e820: remove [mem 0x000a0000-0x000fffff] usable
     [ 0.000000] e820: last_pfn = 0x1ffdf max_arch_pfn = 0x400000000
     ...

     So in the first case the kernel is crashing during early firmware setup (note the efi_init frames in the call trace). I reviewed the Proxmox VM config and came across the root cause: the 1.03b loader fails to boot when the OVMF BIOS is selected. It works only when SeaBIOS is selected (which is Proxmox's default). Why am I currently using OVMF over SeaBIOS? Because only OVMF lets me permanently define a USB device as the primary boot device, and I need this in order to boot from my USB stick. I've just emailed Jun with this info, to find out whether this is a known issue with 1.03b (the issue doesn't exist in 1.02b) and whether we can expect a fix. I'll update this post if I get positive feedback. Cheers, -r
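     For anyone who wants to reproduce the comparison, a minimal sketch of toggling the firmware type from the Proxmox CLI (VM id 100 is a placeholder):

        # SeaBIOS (Proxmox's default; the setting that boots the 1.03b loader for me)
        qm set 100 --bios seabios

        # OVMF/UEFI (the setting that crashes for me); OVMF also expects an EFI vars disk, e.g.:
        # qm set 100 --efidisk0 local-lvm:1
        qm set 100 --bios ovmf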
  9. Thanks. Basically the difference from my setup is that your boot disk is sata0 instead of USB. It sounds weird that the loader wouldn't work properly when run from USB? Or maybe that's known behaviour for 1.03b? Another point: simply booting 1.02b in Proxmox showed the menu in the serial console, but that doesn't happen with 1.03b. Shouldn't it be the same?!? Cheers, -r
  10. My VM (Proxmox) disks are already SATA (physical disks being passed through). The boot/loader is a USB stick. Could you paste your current config (or a screenshot)? Thx.
  11. So are you confirming that Proxmox's serial console works for you with 1.03b? (It didn't work for me.) My VM already has a serial console defined (so your 1st cmd isn't necessary). Could you run "qm terminal 101" and tell me if that works for you? I think it should be equivalent to the socat cmd. For me, neither "qm terminal" nor socat works with 1.03b (both work if I boot 1.02b, as I already mentioned in my post). Cheers, -r
  12. Hi, I have DSM 6.1 (1.02b loader) on an HP Gen8. I'm using a USB stick (with synoboot.img burnt onto it) which works both on bare metal and in Proxmox, so I can switch between starting DSM on the HP Gen8 without any virtualization layer, or starting DSM from Proxmox (running on the HP Gen8). I'd recommend this setup: if your Proxmox fails, you can pick up your physical disks (I'm currently passing them through to Proxmox) and the USB stick and boot directly on the server (even on another, new server!). What's the procedure to update to DSM 6.2? I thought the new 1.03b loader would support both DSM 6.2 (and 6.1), so my idea was to burn the 1.03b loader, start DSM 6.1, test the new loader in that environment, and finally upgrade DSM (via the GUI). But I tried it and it isn't working, so maybe I'm wrong about that and 1.03b only supports 6.2. Could somebody confirm? And a second question: I was successfully using the serial console on Proxmox to monitor the boot process (qm terminal <vmid>) with 1.02b, but it seems not to work with the 1.03b loader. Is this a known issue? Any workaround? Thank you for your help.
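     A minimal sketch of the "burn the loader to a USB stick" step from a Linux shell, assuming the stick shows up as /dev/sdX (a placeholder; double-check with lsblk, since dd overwrites the whole target device):

        # write the loader image to the USB stick (destructive for /dev/sdX!)
        dd if=synoboot.img of=/dev/sdX bs=1M conv=fsync status=progress
        sync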
  13. And you're probably right. This is especially true for FreeNAS/ZFS (where low-level optimizations are done by the OS; the hypervisor may interfere). In the case of XPEnology, well, it's a plain Linux with software RAID (then LVM + btrfs). If you monitor the disks' health (SMART) from outside the VM, you're done. Cheers, -r Sent from my iPhone using Tapatalk
  14. Please open a new thread for installation/upgrade issues. This thread was supposed to be about *PERFORMANCE*. === Btw, quick responses/ideas: 1/ You can import Jun's OVF first into VMware Workstation (a lot more flexible than ESXi) on your laptop, play with it there (change VM compatibility, etc.), and finally "upload" it to ESXi directly from VMware Workstation (there's an option for that). This way the .vmdk-only file gets "translated" into vmdk + flat-vmdk files (which ESXi 6.5 uses). 2/ For the migration (DSM 6.0.2 -> 6.1), once you have the full DSM 6.1 VM uploaded to ESXi 6.5, pick up the 1st disk (2 files: vmdk + flat), copy it to the "legacy DSM" VM dir, then attach it by adding an existing disk (a command-line alternative to the datastore-browser copy is sketched below). Finally change the boot priority or, better, simply edit the VM hardware and change the port number so your new boot disk starts the VM instead of the old DSM boot disk (you can delete the old boot disk if you prefer). Upon starting the VM, you'll get into the DSM migration wizard (DSM will detect that you're migrating). 3/ The DSM migration wizard has 2 choices: 1/ try to preserve config & data; or 2/ preserve only data but not config. I did a quick test with option 1 on another VM (with fake data) and it failed (the migration seems ok, but after rebooting I don't get an IP address). So I finally upgraded using the "only preserve data" choice (you'll lose your configs!!!!) and it worked. === Again, please be so kind as to open a NEW thread for installation issues. This was not the purpose of this thread, and I don't have any valid answer regarding the original question: performance. Thanks.
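     A hedged sketch of that copy step done from the ESXi shell (datastore, directory and file names are placeholders; vmkfstools clones the descriptor and the -flat file in one go):

        # clone the DSM 6.1 boot disk into the legacy DSM VM's directory
        vmkfstools -i /vmfs/volumes/datastore1/dsm61/dsm61-boot.vmdk \
                   /vmfs/volumes/datastore1/legacy-dsm/dsm61-boot.vmdk -d thin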
  15. Hi, I have an HP Gen8 Microserver with upgraded CPU & RAM (Xeon + 16 GB RAM), running ESXi 6.5, so I installed XPEnology/DSM 6.1 (yes, the new version) in a VM (with Jun's 1.02a loader). I've attached 3 WD Red disks, in *AHCI* mode, and configured them in the VM as RDM disks (I don't have an extra SATA card to use VT-d/passthrough). After solving a first huge performance problem (a horrible 2 MB/s!!!) by disabling the VMware AHCI driver and enabling the legacy one (full story here: http://www.nxhut.com/2016/11/fix-slow-d ... wahci.html), these are the current speeds I'm getting *inside* DSM:

     Read: - IOPS: 109 - throughput: 154 MB/s - latency: 16.2 ms
     Write: - IOPS: 94 - throughput: 35 MB/s - latency: 17.6 ms

     (data obtained with DSM 6.1's built-in disk benchmark: Storage Manager -> HDD/SSD) For me, 154 MB/s read is ok and in line with the WD Red specs. But isn't 35 MB/s write quite slow???? Do these values look right to you? Which values are you getting on WD Red disks? Please, could you contribute to this thread with your values, specifying: - virtualization / baremetal (e.g. ESXi, VBox, baremetal) - hw specs or server model - disk model. Thank you. Cheers, -r
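     For anyone who wants to post comparable numbers, a hedged sketch of a sequential read/write test with fio, assuming fio is available (for instance on a Linux client writing to a mounted share, or installed on the box; the file path and sizes are placeholders):

        # 1 GiB sequential write with 1 MiB blocks, bypassing the page cache
        fio --name=seqwrite --filename=/volume1/fio-test.bin --rw=write \
            --bs=1M --size=1G --direct=1 --ioengine=libaio --iodepth=4

        # same idea for sequential read
        fio --name=seqread --filename=/volume1/fio-test.bin --rw=read \
            --bs=1M --size=1G --direct=1 --ioengine=libaio --iodepth=4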
  16. Bingo! Thanks a lot, billat29. Cheers, -r
  17. Hi, I just created a VM in VMware Workstation to test XPEnology (with the idea of moving to ESXi if the test goes ok):
     - hw type: Workstation 11.x (ESXi compatible)
     - sata disk: synoboot.vmdk (DS3615xs 6.0.2 Jun's Mod V1.01)
     - sata disk 2: new disk (8 GB)
     - serial port: named pipe (only to have a look at the console)

     With this config I can boot XPEnology and do a basic install (DSM_DS3615xs_8451.pat) / config. I had to use option 1 from Jun's menu (yes, baremetal, even though it's a VM!!); if the ESXi option is used, DSM doesn't detect any disk and it's impossible to proceed with the DSM install. After the basic install I shut down the VM. Then I added virtual hw:
     - sata disk 3: new disk (2 GB)
     - sata disk 4: new disk (2 GB)
     - sata disk 5: new disk (2 GB)

     The problem is that I cannot initialize any of those 3 disks in Storage Manager. In particular, if I try to create a RAID group, I get this error: "System failed to create [RAID Group 1](Device Type is [basic]) with disk [5]." I also noticed that, during the RAID creation process, DSM shows a wrong capacity (tons of TBs instead of the 2 GB capacity). I tried all kinds of RAID levels (5, etc.), but the problem is with those 3 disks, which btw are similar to disk 2 (8 GB), the one I created for testing (same kind of virtual SATA device). I can only create a RAID group using disk 2 (which is where DSM resides, and which was initialized during the DSM installation). Any ideas? What's going on? I'd appreciate some help, please. Cheers, -r