XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 06/05/2019 in all areas

  1. Intro/Motivation

This seems like hard-to-find information, so I thought I'd write up a quick tutorial. I'm running XPEnology as a VM under ESXi, now with a 24-bay Supermicro chassis. The Supermicro world has a lot of similar-but-different options. In particular, I'm running an E5-1225v3 Haswell CPU with 32GB of memory on an X10SLM+-F motherboard, in a 4U chassis with a BPN-SAS2-846EL1 backplane. This means all 24 drives are connected to a single LSI 9211-8i based HBA, flashed to IT mode. That should be enough Google-juice to find everything you need for a similar setup!

The various Jun loaders default to 12 drive bays (3615/3617 models) or 16 drive bays (DS918+). This presents a problem when you update if you've increased maxdisks after install: you either have to design your volumes around those default numbers (so whole volumes drop off after an update until you re-apply the settings), or just deal with volumes being sliced and check their integrity afterwards. Since my new hardware supports the 4.x kernel, I wanted to use the DS918+ loader, but update the patching so that 24 drive bays is the new default. Here's how. Or, just grab the files attached to the post.

Locating extra.lzma/extra2.lzma

This tutorial assumes you've messed with the synoboot.img file before. If not, a brief guide on mounting:

- Install OSFMount.
- Click the "Mount new" button and select synoboot.img.
- On the first dialog, choose "Use entire image file".
- On the main settings dialog, select the "Mount all partitions" radio button under volume options, and uncheck "Read-only drive" under mount options.
- Click OK.

You should now have three new drives mounted. Exactly where will depend on your system, but if you had a C/D drive before, probably E/F/G. The first readable drive has an EFI/grub.cfg file; this is what you usually customize, e.g. for the serial number. The second drive should have an extra.lzma and an extra2.lzma file, alongside some other things. Copy these somewhere else.

Unpacking, Modifying, Repacking

To be honest, I don't know why the patch exists in both of these files. Maybe one is applied during updates and one at normal boot time? I never looked into it. But the patch that sets the max disk count exists in both, so we'll need to unpack them first. Some of these tools exist on macOS, and likely have Windows ports, but I just did this on a Linux system. Spin up a VM if you need to. On a fresh system you likely won't have lzma or cpio installed, but apt-get should suggest the right packages.

Copy extra.lzma to a new, temporary folder. Run:

    lzma -d extra.lzma
    cpio -idv < extra

In the new ./etc/ directory, you should see:

    jun.patch
    rc.modules
    synoinfo_override.conf

Open up jun.patch in the text editor of your choice. Search for maxdisks. There should be two instances: one in the patch delta up top, and one in a larger script below. Change the 16 to a 24. Search for internalportcfg. Again, two instances. Change the 0xffff to 0xffffff for 24 bays. This is a bitmask; there's more info elsewhere on the forums. Open up synoinfo_override.conf and make the same changes: 16 to 24, and 0xffff to 0xffffff. (A scripted version of this last edit appears further down, after the Preparing synoboot.img section.)

To repack, in a shell at the root of the extracted files, run:

    (find . -name modprobe && find . \! -name modprobe) | cpio --owner root:root -oH newc | lzma -8 > ../extra.lzma

Note that the resulting file sits one directory up (../extra.lzma). Repeat the same steps for extra2.lzma.

Preparing synoboot.img

Just copy the updated extra.lzma/extra2.lzma files back where they came from, still mounted under OSFMount. While you're in there, you might need to update grub.cfg, especially if this is a new install.
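An aside on the synoinfo_override.conf edit above: if you'd rather script the two value changes than edit by hand, something along the lines of the sed sketch below should work. This is only a convenience sketch, not part of the original procedure; it assumes the keys appear quoted exactly as maxdisks="16" and internalportcfg="0xffff" in your extracted file, so check first. jun.patch still needs its manual edit, since the same numbers there sit inside a patch delta.

    # Hedged sketch: apply both edits to the extracted synoinfo_override.conf in one go.
    # Assumes GNU sed and the exact quoting shown; adjust the patterns to match your file.
    sed -i \
        -e 's/maxdisks="16"/maxdisks="24"/' \
        -e 's/internalportcfg="0xffff"/internalportcfg="0xffffff"/' \
        etc/synoinfo_override.conf

The 0xffffff value is simply the 16-bay mask 0xffff extended to 24 set bits, one bit per internal drive bay.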
For the hardware mentioned at the very top of the post, with a single SAS expander providing 24 drives, and with synoboot.img as a SATA disk for a VM under ESXi 6.7, I use these sata_args:

    # for 24-bay sas enclosure on 9211 LSI card (i.e. 24-bay supermicro)
    set sata_args='DiskIdxMap=181C SataPortMap=1 SasIdxMap=0xfffffff4'

Close any explorer windows or text editors, and click Dismount all in OSFMount. The image is ready to use.

If you're using ESXi and having trouble getting the image to boot, you can attach a network serial port to the VM and telnet in to see what's happening at boot time. You'll probably need to disable the ESXi firewall temporarily, or open port 23. It's super useful. (A hedged configuration sketch follows at the end of this post.) Be aware that the 4.x kernel no longer supports extra hardware, so the network card will have to be officially supported. (I gave the VM a real network card via hardware passthrough.)

Attached Files

I attached extra.lzma and extra2.lzma to this post. They are both from Jun's loader 1.04b, with the above procedure applied to change the default drive count from 16 to 24.

extra2.lzma
extra.lzma
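A footnote on the network serial port tip above: the snippet below is my hedged reconstruction of how that's typically wired up on ESXi, not something from the original post, so treat the exact .vmx keys and the firewall ruleset name as assumptions to verify against your ESXi version.

    # On the VM: add a network-backed serial port (these .vmx settings, or the
    # equivalent "Serial Port" device in the vSphere UI pointing at telnet://:23).
    serial0.present = "TRUE"
    serial0.fileType = "network"
    serial0.fileName = "telnet://:23"
    serial0.network.endPoint = "server"

    # On the ESXi host: open the relevant firewall ruleset (name assumed; check
    # "esxcli network firewall ruleset list"), or temporarily disable the firewall.
    esxcli network firewall ruleset set --ruleset-id remoteSerialPort --enabled true

    # From another machine: watch the loader's console output during boot.
    telnet <esxi-host-ip> 23

Once the VM starts booting, the loader's kernel messages show up in the telnet session, which makes driver and disk-mapping problems much easier to spot.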
    1 point
  2. As is often the case, it all depends on what you expect. I wanted a fixed lens, but a high-quality one, PoE-powered, with decent night vision and weather resistance. In the end I went with a Foscam https://www.amazon.fr/dp/B00NCR4DQA/ref=cm_sw_em_r_mt_dp_U_2G69CbJ6MX5X8 (the review with the video is mine ^^). I bought it on 4/12/2015, it's still outdoors as I write these lines, and it still works flawlessly! My only regret (and this goes for every camera): none of them come with an anti-spider feature! It's annoying to watch those little critters constantly pass in front of the lens whenever one decides to spin its web from the camera 😅
    1 point
  3. https://www.synology.com/fr-fr/compatibility/camera ?
    1 point
  4. grub.cfg doesn't have much to do with whether a NIC driver works. I don't currently run DSM 6.2.2 either. Read on for why.

Folks, Jun's loader is fundamentally trying to fool the Synology code into thinking that the booted hardware is Synology's platform hardware. How it does this is a mystery to most everyone except Jun. Whatever it does has varying effects on the OS environment: some are innocuous (spurious errors in the system logs), and some cause instability, driver crashes, and kernel crashes.

By far the most stable combination (meaning compatibility with the largest inventory of hardware) is Jun's loader 1.02b and DSM 6.1.x. Jun stated that the target DSM platform for 1.02b was DS3615, and DS3617 was compatible enough to use the same loader. However, there are issues with DS3617 on certain combinations of hardware that cause kernel crashes for undetermined reasons. I can consistently demonstrate a crash and lockup with the combination of ESXi, DS3617 and 1.02b on the most recent DSM 6.1 release (6.1.7). There is otherwise no functional difference between DS3615 and DS3617, which is why DS3615 is recommended.

DSM 6.2 introduced new Synology hardware checks, and Jun came up with a new approach to bypass them so that we can run those versions, implemented in loaders 1.03 and 1.04. Whatever he did to defeat the DSM 6.2 Synology code results in more frequent kernel loadable module crashes, kernel panics, and generally poorer performance (the performance difference may be inherent to 6.2, however). DSM 6.2.1/6.2.2 ships a large library of NIC drivers that is effectively useless, because on 6.2.1 and later the 1.03b loader crashes all but a select few NIC modules. Similarly, the 1.04b loader supports the Intel Graphics (i915) driver, but going from 6.2.1 to 6.2.2 causes it to crash on some revisions of the Intel Graphics hardware.

Personally, I am running ESXi 1.02b/DS3615 6.1.7 for mission-critical work and have no intention of upgrading. I do have a baremetal system running 6.2.1 right now, but it's really only for test and archive. Nowadays, I strongly advise against a baremetal XPEnology installation if you have any desire to keep pace with Synology patches. That said, there really isn't much of a reason to stay current. DSM 7.0 is imminent and the available loaders are virtually guaranteed not to work with it, and each new 6.2 point release is causing new issues and crashes for little or no functional benefit.
    1 point
  5. Hello @FOXBI. Is it possible to update your tool so that it is fully functional with the latest release of DSM 6.2? Thanks.
    1 point