XPEnology Community

WiteWulf

Contributor
  • Posts: 423
  • Joined
  • Last visited
  • Days Won: 25

Posts posted by WiteWulf

  1. 6 minutes ago, pocopico said:

     

    Actually, since I have the PCIe slot occupied with an old SAS HBA, I had to use the embedded LAN (tg3.ko) and add the mptsas drivers. Other than that, and the usual VID/PID, SN/MAC etc., pretty much nothing.

    That's great news! Since I got 6.2.4 onto my Gen8 today, 7.x is my next step, but I wasn't sure from the posts so far whether the transposition of SATA and SAS devices was unique to ESXi installs or affected baremetal too. This gives me hope 👍

     

    Oh, I also meant to mention: regarding the two reboots I had after migrating to 6.2.4, the server has now been stable and up for over an hour with no further reboots. I'll keep an eye on it, though.

    • Like 1
  2. 37 minutes ago, ilyas said:

    Great! It would be helpful to create a step-by-step guide, from loader to complete migration.

    With respect, I'm not going to do that. As pointed out repeatedly elsewhere in this thread, this software/method isn't ready for general release yet and is only suitable for people who can follow the guides and assemble the software for themselves at this point. I've described everything I've done to get this far, or at least referenced and linked to other people's guides, so go and put it together yourself. If you can't figure it out this probably isn't ready for you yet.

     

    I'll be more than happy to do a full write-up when ThorGroup announce a release candidate or full release 👍

    • Like 4
  3. Okay, I've managed to get my HP Gen8 Microserver to 6.2.4 on baremetal 🥳

     

    I encountered two problems while running the migration:
    - on the first reboot after installing the 25556 .pat file, it tried to boot off the internal HDD rather than the USB stick. I seem to recall this being a common problem with firmware updates in the past anyway, and it's likely not unique to this release/boot method. A reboot fixed it: the machine booted off the USB stick properly and completed the install

    - after the install the NIC had changed to DHCP rather than static, for some reason. This was easily changed in the Control Panel in DSM

     

    Thanks for all the hard work and help getting this far!

     

    Now, unfortunately, it's rebooted itself twice since the upgrade completed (in the last ten minutes or so). I'll look into this further and see if I can figure out what the problem is.

    • Like 1
  4. 32 minutes ago, sanyi65 said:

    My system didn't boot from the USB key. I set the USB stick's first partition active (with MiniTool Partition Wizard) and then I could boot.

    That worked perfectly! I feel stupid now for missing something so simple. @Rikk, this should work for you, too.

     

    Now attempting to install 6.2.4 baremetal...

     

    *edit*

     

    Failed "file is probably corrupted", so I need to check vid/pid, etc. At least I can get it to boot now.

  5. 19 hours ago, Rikk said:

    Hello to all,

       I am trying to test the loader on my current hardware and I am stuck at a stupid step: my system doesn't find the USB key to boot from. With the previous setup (Jun's loader with the 3615xs model, DSM 6.2.3) it was working well.

     

    My hardware is :

    Gen8 HP Microserver with a Xeon E3-1220L CPU upgrade

    6GB of RAM

     

    I am using the latest environment (redpill-tool-chain_x86_64_v0.7.2) to build the image on macOS Big Sur, with Docker installed.

    Hi Rikk, I can't help, I'm afraid, beyond commenting that I'm in the same situation as you: the build runs successfully on macOS Big Sur in Docker, and I try booting from the same internal USB port that I use for my Jun's loader USB stick, but the Microserver apparently doesn't see the stick as bootable and tries to netboot.

     

    I've confirmed that the same redpill USB stick successfully boots on other hardware (an old Toshiba R600 laptop).

     

    More details here. It's very odd...

  6. Quote

    It would also release some of my compute to run other ESXi guests/VMs (rather than using Synology VMM as I am at the moment, which doesn't have the flexibility of ESXi).

    (You can use a newer DSM version and still migrate, btw.)

    I'd also like to make better use of the CPU in my box. I considered running DSM's VMM but I'm not using btrfs, so that's out, and there's some stuff that Docker just isn't right for.

     

    Quote

    One thing to note: I'm not sure if you are using the onboard SATA/RAID B120i controller. If you are, and you pass that through in ESXi, you would not be able to add further VMs without an additional card. I was going to pass my PCIe HBA card through and then use my onboard SATA for other ESXi VMs.

    Otherwise you could RDM your xpenology disks (though you don't get SMART data).

    I am using the internal SATA controller, but in AHCI mode so that DSM sees the disks separately and has them in an SHR volume. I was hoping to be able to pass through the AHCI controller and see the controller and disks in the xpenology VM. I tried running xpenology on ESXi with RDM disks in the past and performance was poor (this was on an "Early 2009" Mac Pro with twin quad-core Xeons and 32GB RAM). If RDM performance is better these days I may give it another go.

     

    Either way, all the space on the 4x3TB disks in it at the moment is allocated to DSM, so I would definitely need more storage for additional VMs. Good point.

     

    *edit* this is an interesting thread on the topic, although it's a few years (and several ESXi revisions) old now:
    https://homeservershow.com/forums/topic/14778-esxi-65-ahci-bad-write-performance/
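
    *edit 2* For anyone wanting to try RDM, on ESXi the mapping is created with vmkfstools. A minimal sketch (run from the ESXi shell; the device name and datastore path are placeholders):

    # create a physical-mode RDM stub (-z) that points a .vmdk at the whole physical disk
    vmkfstools -z /vmfs/devices/disks/naa.XXXXXXXXXXXX /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk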

  7. 7 minutes ago, scoobdriver said:

    @WiteWulf I've not tried on bare metal yet, not quite ready to take the jump: my G8 baremetal xpenology runs most of my home dockers and a couple of VMs. I also use an HBA H220 card in my G8, which I need to check works, so I need to schedule some downtime, lol. I've only tested on my ESXi box so far, with 6.7 and 7.0, which boot on legacy BIOS and not UEFI.

     

    Okay, cool, gotcha. I'm in the same boat: I've got a lot of stuff running on my Gen8 (Plex, Domoticz, LibreNMS, some website stuff, energy monitoring, Ubooquity), so I'm keen to ensure this goes smoothly. With that in mind I've just ordered a 6TB external HDD off Amazon to do some backups 🤣

     

    I'm interested in the suggestion that I could install ESXi on a USB stick, create an xpenology VM with the HDDs passed through (or the controller itself passed through) and install xpenology to the VM. It should see the existing HDDs and offer to do a migration. So long as I install the same DSM version as I'm currently on, I should be able to safely switch back to booting the Microserver with Jun's boot loader and have everything still in place. I imagine passing through the controller would be more efficient than four HDDs in raw device mapping mode?

  8. 9 minutes ago, scoobdriver said:

    @WiteWulf I don't see your signature? Perhaps because I'm on an iPad. What is the CPU?
    I'm on a G8 with an E3-1265L V2. Also reluctant to go virtual... I run ESXi on another box.

    Same CPU as you 👍

     

    Have you successfully installed 6.2.4/7.0 with redpill on your Gen8, bare metal, then? If so, I'm guessing this could be a peculiarity with Amoureux's build environment for macOS 🤔

  9. Just now, scoobdriver said:


    UEFI doesn't need to be enabled. Are you using DS3615xs? The CPU on the Gen8 does not support DS918.

    Thanks, yes, I'm generating a toolchain and image for the bromolow target using Amoureux's macOS/Docker build environment and scripts. FYI, my Gen8 has an upgraded CPU (see my sig), although I'm aware that it still doesn't support DS918.
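
    For anyone following along, the build itself boils down to a couple of commands. A sketch only: the exact target IDs come from the tool-chain's bundled config and may differ from what I show here:

    # build the toolchain container, then generate the loader image for the bromolow target
    ./redpill_tool_chain.sh build bromolow-6.2.4-25556
    ./redpill_tool_chain.sh auto bromolow-6.2.4-25556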

     

    The problem is that when the Gen8 boots, it completely ignores the USB stick with the redpill image on it and attempts to netboot. The same stick boots successfully on an old laptop (Toshiba R600), however, and mounts two partitions on the same machine when it's booted into Linux Mint. The redpill stick also mounts when the Gen8 is booted into DSM 6.2.3 from a Jun's bootloader stick. I just can't figure out why the Gen8 is refusing to boot from it.

  10. Well, I've got the .img file created now, but I can't get it to boot baremetal on my HP Gen8 Microserver. I've tested it on another machine (an old Toshiba R600) and it boots, but the Gen8 just ignores it and tries to netboot.

     

    According to this thread, the Gen8 can't boot from a GUID partitioned disk, which would explain why my old MBR partitioned Jun boot loader stick works but this doesn't.

     

    Is it possible to get the toolchain to output an MBR partitioned image? If not, I guess I'll need to look into setting this up with ESXi instead...

     

    Hang on... I got that the wrong way around. I checked the Jun's bootloader stick, and that's GPT, while the redpill stick is MBR. FWIW, if I boot the Gen8 into DSM 6.2.3 with the Jun's bootloader stick, then insert the redpill stick, it mounts two partitions and is identified as an MBR/DOS partitioned disk (/dev/sdu is the redpill stick, /dev/synoboot is Jun's boot loader):

    sudo fdisk -l /dev/sdu
    Disk /dev/sdu: 14.9 GiB, 16018046976 bytes, 31285248 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: dos
    Disk identifier: 0xf110ee87
    Device     Boot  Start    End Sectors Size Id Type
    /dev/sdu1         2048 100351   98304  48M 83 Linux
    /dev/sdu2       100352 253951  153600  75M 83 Linux
    /dev/sdu3       253952 262143    8192   4M 83 Linux
    
    sudo fdisk -l /dev/synoboot
    GPT PMBR size mismatch (102399 != 7915519) will be corrected by w(rite).
    Disk /dev/synoboot: 3.8 GiB, 4052746240 bytes, 7915520 sectors
    Units: sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disklabel type: gpt
    Disk identifier: AFB38D11-BCEA-4409-B348-F4FEEE602114
    Device         Start    End Sectors Size Type
    /dev/synoboot1  2048  32767   30720  15M EFI System
    /dev/synoboot2 32768  94207   61440  30M Linux filesystem
    /dev/synoboot3 94208 102366    8159   4M BIOS boot

     

    *edit* I've now read in two different places online (here is the other report) that the Gen8 will not boot from a GPT disk, but I'm seeing the opposite: it boots from GPT and not from MBR. FYI, I am using the internal USB port, not the external ones.

     

    *edit* I've also seen comments in this thread that UEFI needs to be enabled for USB stick booting of this image, but the Gen8 doesn't have UEFI. Is this another reason to go ESXi? (They're starting to stack up now...)
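
    *edit* If it really is GPT that the Gen8 wants, sgdisk can convert the stick's MBR table to GPT in place. A sketch, assuming /dev/sdX is the redpill stick (double-check with lsblk, and no promises the loader's boot code survives the conversion):

    # save the first MiB (MBR + embedded boot code) so this can be rolled back
    sudo dd if=/dev/sdX of=redpill-mbr.bak bs=1M count=1
    # convert the MBR partition table to GPT in place (-g is --mbrtogpt)
    sudo sgdisk -g /dev/sdX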

  11. 16 hours ago, WiteWulf said:

    Thanks, good to see you're also running macOS. I did my builds in Debian VMs before, but I'll see if I can get your guide to work on my Big Sur machine.

    Quick follow up: this method works very well for Mac users. I had a bit of a problem, but it was entirely down to a typo in my config json file that @Amoureux quickly helped me fix. I can highly recommend using it.

  12. 25 minutes ago, gadreel said:

    Yes, it will support baremetal too, but as ThorGroup have said before, virtualisation has a lot less hassle!

    To be honest, I think it's time I bit the bullet and went virtual. I've been running xpenology for years now and have lost count of the number of problems I've had caused by running baremetal :D

    It's really just the time involved in backing everything up, rebuilding the machine, then restoring data that's holding me back. It's just so much simpler to do an in-place upgrade when it works, and not have to reconfigure all the services (mainly Plex) that are running on it.

    • Like 1
  13. Hi folks, I've been sitting and patiently keeping up with this thread since shortly after it started, keenly aware that this is in early development stages, not beta, not intended for use with production data and only for people who know how to compile, test and feedback, etc. 😀

     

    I set up a build environment and successfully compiled everything a week or so ago, but didn't want to test it out yet, as the vast majority of talk on here seems to be around virtualised installs, while I'm running bare metal on an HP Gen8 Microserver upgraded with an Intel Xeon E3-1265L 2.50GHz 4-core (Ivy Bridge).

     

    Has anyone successfully used RedPill on a baremetal Microserver?

    Is RedPill intending to support baremetal, or just virtualisation platforms?

     

    If I need to migrate my install to esxi I'm gonna need a big HDD to back up all my data first 😬

  14. Chances are your onboard NIC is not supported by the version of DSM you're trying to install. Older versions of DSM had drivers in the kernel that supported the onboard NICs in HP Microservers, but after a certain point (I'm afraid I can't recall exactly when, but around 6.2) Synology dropped the driver from the kernel. I have a Gen8 Microserver that suffered from this and had to buy a compatible PCIe NIC to continue being able to run later releases. Have a search of the forum and you'll find more details. FWIW, I bought an "HP NC360T Dual-Port Gigabit NIC PCI-E"; you should be able to find one on fleabay for ~£15. Your other option is to go the virtualisation route and not install on bare metal.
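
    If you want to confirm exactly which NIC you have before buying anything, boot a live Linux (or SSH in, if you still can) and list the PCI vendor:device IDs to search the forum for:

    lspci -nn | grep -i ethernet    # the [xxxx:yyyy] pairs are the vendor and device IDs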

     

    From the spec you gave, it looks like you're running a Gen 10 Microserver. There are plenty of threads on here from other folk running the same hardware as you, so it's certainly possible.

  15. - Outcome of the update: SUCCESSFUL

    - DSM version prior update: DSM 6.2-23739

    - Loader version and model: JUN'S LOADER v1.03b - DS3615xs

    - Using custom extra.lzma: NO

    - Installation type: BAREMETAL - HP Gen8 Microserver

    - Additional comment: internal NIC disabled and compatible Intel PCIe NIC installed

  16. Hey all, a belated follow-up to this, but I just wanted to let interested parties know I successfully updated to 6.2.2-24922 today 😎

     

    Previously I was on 6.2-23739 on an HP Gen8 using the internal NICs. To upgrade I carried out the following steps:

    • installed a PCIe dual-GigE Intel NIC matching the PCI IDs listed above
    • created a new USB boot stick using Jun's 1.03b loader for DS3615xs with the relevant MAC addresses for my new NIC, the VID and PID of the USB stick, and a generated DS3615xs serial number (see the grub.cfg sketch after this list)
    • disabled the onboard NICs in the BIOS
    • rebooted to check that the new USB stick and NICs were working properly
    • upgraded to 6.2.2-24922 from a local PAT file
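
    For reference, the grub.cfg edits in the second step look something like this (a sketch from memory of Jun's 1.03b loader; every value below is a placeholder, not one to copy):

    set vid=0x058f            # USB stick vendor ID, from lsusb or Device Manager (example value)
    set pid=0x6387            # USB stick product ID (example value)
    set sn=XXXXXXXXXXXXX      # generated DS3615xs serial number (placeholder)
    set mac1=0015175A2B3C     # MAC address of the first NIC port (placeholder)
    set mac2=0015175A2B3D     # MAC address of the second NIC port (placeholder)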

    Overall it was a far less painful exercise than the upgrade to 6.2 from 6.1, besides having to install some new hardware (and the minimal expense of that).

     

    Thanks to everyone contributing in this thread (and elsewhere) for the help and relevant information 👍🏻

  17. I can confirm this is almost exactly the same as I was seeing with my failed 6.2 upgrade, and the first log entry is identical. It all starts with "chmod failed for /var/packages (Operation not permitted)" and goes downhill from there. There is nothing logged to /var/log/messages before this that relates to the login or to what is triggering the shutdown of services.

     

    Have a look in "/var/log/apparmor.log" as well, I was getting loads of entries in there at the same time.

     

    In fact, rather than just looking at individual log files, tail everything in /var/log when you login and see what comes up:

     

    tail -f /var/log/*
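    # each file's output is prefixed with "==> /var/log/<file> <==" so you can tell the sources apart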

     

    That'll show you all the activity and what file it's occurring in.

     

    (Here's where I was trying to fix my box.)

  18. You can't run 6.2.1 on a stock Gen8 at the moment as the Synology software doesn't include compatible drivers for the onboard Broadcom NICs.

     

    You should be able to take a stock Gen8 to 6.2 simply by creating a new boot stick with the correct MAC addresses in your grub.cfg, rebooting and installing the correct PAT file (I'd recommend using the one for a DS3615xs).

     

    I say *should* as my update to 6.2 had some problems and needed manual fixing of a few shell scripts; ymmv.

  19. Just wanted to quickly bump this: I needed to be able to run 'lsof' against the Synology (not a docker container) as part of some debugging and the optware-ng repo at:

     

    https://github.com/Optware/Optware-ng

    ...still functions fine on my system using the 'x86_64' architecture files.

     

    But yeah, for 99% of other cmdline tasks I use a debian docker container these days (great for running mkvtoolnix and ffmpeg jobs on files that I want to mangle in my Plex library).
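
    In case it's useful to anyone, that setup is just something along these lines (a sketch; the volume path is an example from a typical DSM box):

    # throwaway Debian container with a DSM share mounted; /volume1/video is an example path
    sudo docker run -it --rm -v /volume1/video:/media debian bash
    # then, inside the container:
    apt-get update && apt-get install -y ffmpeg mkvtoolnix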

  20. I've been looking for a replacement for Plex as a media streaming server (long story, but I'm far from alone) and came across Emby (http://emby.media). I was very pleasantly surprised to see that they explicitly support Synology and XPEnology separately. The rationale is that, regardless of what CPU you actually have in your hardware, the Synology software always reports the CPU model for whatever .pat file you're running. This prevents the regular Synology Emby packages from utilising any CPU features that might aid in, for example, transcoding.
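
    You can see this for yourself over SSH: DSM's UI reports the CPU of the model your .pat file targets, while the kernel still sees the real hardware. A quick check:

    grep "model name" /proc/cpuinfo | sort -u    # shows the actual CPU, not the one DSM's UI claims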

     

    To work around this, they create specific builds for different architectures that you can specify as a parameter in your repo URL. More info here:

     

    https://github.com/MediaBrowser/Wiki/wiki/Synology-:-Custom-Package-Architectures-for-XPEnology

     

    FYI, I'm presently running with the stock G1610T Celeron and there are *no* optimisations available for this, so I use the default package source URL.

    • Like 1