XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 05/13/2022 in all areas

  1. https://github.com/jumkey/redpill-load

Familiar link, but a little different this time: it supports automatic updates like Jun's mod.

Features:
- Add BRP_JUN_MOD=1
- Add DS2422+
- No need for bsp patch files anymore
- Support online installation of the latest DSM
- Support 7.0.1 upgrade to the latest DSM

Build:
BRP_JUN_MOD=1 BRP_DEBUG=1 BRP_USER_CFG=user_config-ds918.json ./build-loader.sh 'DS918+' '7.0.1-42218'

If you want to upgrade to 7.1 you may need to add redpill-misc:
./ext-manager.sh add 'https://github.com/jumkey/redpill-load/raw/develop/redpill-misc/rpext-index.json'
    4 points
  2. This was a previous attempt, but it failed. The way it works is to patch vmlinux from the first kernel and then boot the result with kexec.

Why use buildroot and not TinyCore? Because I didn't find kexec and php available for TC.

Recently I found a way to rebuild zImage without compiling, so I tried using the first kernel to patch vmlinux, rebuild zImage, and start it with kexec. This time I successfully started the loader and it can be installed normally.
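For anyone curious what the kexec hand-off looks like, here is a minimal illustrative sketch, assuming kexec-tools is present in the first-stage buildroot environment; the file names and command line are placeholders, not the loader's actual values:

# Load the patched DSM kernel and its ramdisk into memory
# (paths are placeholders for illustration only)
kexec -l ./zImage_patched --initrd=./rd.gz --command-line="$(cat /proc/cmdline)"
# Replace the running buildroot kernel with the loaded one
kexec -e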
    3 points
  3. Thanks goes to @IG-88 for getting this sorted for me in the 6.2 build, as the DS918+ only has 2 LAN ports.

Edit synoinfo.conf located at /etc.defaults/synoinfo.conf: change maxlanport="2" to maxlanport="8". Also edit /etc/synoinfo.conf the same way: change maxlanport="2" to maxlanport="8". Reboot and you now have all MAC addresses showing.

Thanks also to Pocopico and Peter Suh for helping me try to get this sorted.

Edit: Now I've got my 10Gb NIC working.
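The same edit can be scripted from an SSH shell. This is just a convenience sketch of the steps above (same two files, same maxlanport key); the sed invocation is one possible way to do it and assumes the stock value is still "2":

# Raise maxlanport from 2 to 8 in both copies of synoinfo.conf, keeping backups
sudo sed -i.bak 's/^maxlanport="2"/maxlanport="8"/' /etc.defaults/synoinfo.conf
sudo sed -i.bak 's/^maxlanport="2"/maxlanport="8"/' /etc/synoinfo.conf
# Reboot so DSM picks up the new port count
sudo reboot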
    2 points
  4. Use any of your previous build methods, remembering to add BRP_JUN_MOD=1. Back up your previous img file and replace just the new custom.gz. You can't simply replace the whole img file, because the zImage and rd.gz on the second partition are already the latest version 😅
    2 points
  5. Before installing XPEnology using DSM 7.x, you must select a DSM platform and loader. XPEnology supports a variety of platforms that enable specific hardware and software features. All platforms support a minimum of 4 CPU cores, 64GB of RAM, 10GbE network cards and 16 drives. Each can run "baremetal" as a stand-alone operating system OR as a virtual machine within a hypervisor. A few specific platforms are preferred for typical installs. Review the table and decision tree below to help you navigate the options.

NOTE: DSM 6.x is still a viable system and is the best option for certain types of hardware. See this link for more information.

DSM 7.x LOADERS ARE DIFFERENT: A loader allows DSM to install and run on non-Synology hardware. The loaders for DSM 5.x/6.x were monolithic; i.e. a single loader image was applicable to all installs. With DSM 7.x, a custom loader must be created for each DSM install. TinyCore RedPill (TCRP) is currently the most developed tool for building 7.x loaders. TCRP installs with a two-step process. First, a Linux OS (TinyCore) boots and evaluates your hardware configuration. Then, an individualized loader (RedPill) is built and written to the loader device. After that, you can switch between starting DSM with RedPill and booting back into TinyCore to adjust and rebuild as needed.

TCRP's Linux boot image (indicated by the version, i.e. 0.8) changes only when a new DSM platform or version is introduced. However, you can and should update TCRP itself prior to each loader build, adding fixes, driver updates and new features contributed by many different developers. Because of this ongoing community development, TCRP capabilities change rapidly. Please post new or divergent results when encountered, so that this table may be updated.

7.x Loaders and Platforms as of 06-June-2022 (all platforms: DSM versions 7.0.1 through 7.1.0-42661, loader TCRP 0.8), ranked:

1a. DS918+ (apollolake) - slot mapping: sataportmap/diskidxmap; QuickSync transcoding: Yes; NVMe cache: Yes; RAIDF1: No; oldest CPU supported: Haswell *; max CPU threads: 8; key note: currently best for most users
1b. DS3622xs+ (broadwellnk) - slot mapping: sataportmap/diskidxmap; QuickSync transcoding: No; NVMe cache: Yes; RAIDF1: Yes; oldest CPU supported: any x86-64; max CPU threads: 24; key note: best for very large installs
2a. DS920+ (geminilake) - slot mapping: device tree; QuickSync transcoding: Yes; NVMe cache: Yes; RAIDF1: No; oldest CPU supported: Haswell **; max CPU threads: 8; key note: see slot mapping topic below
2b. DS1621+ (v1000) - slot mapping: device tree; QuickSync transcoding: No; NVMe cache: Yes; RAIDF1: No; oldest CPU supported: any x86-64; max CPU threads: 16; key note: AMD Ryzen, see slot mapping topic
2c. DS3617xs (broadwell) - slot mapping: sataportmap/diskidxmap; QuickSync transcoding: No; NVMe cache: Yes (as of 7.0); RAIDF1: Yes; oldest CPU supported: any x86-64; max CPU threads: 24 (as of 7.0); key note: obsolete, use DS3622xs+
3a. DVA3221 (denverton) - slot mapping: sataportmap/diskidxmap; QuickSync transcoding: No; NVMe cache: Yes; RAIDF1: No; oldest CPU supported: Haswell *; max CPU threads: 16; key note: AI/Deep Learning with nVIDIA GPU
3b. DS3615xs (bromolow) - slot mapping: sataportmap/diskidxmap; QuickSync transcoding: No; NVMe cache: No; RAIDF1: Yes; oldest CPU supported: any x86-64; max CPU threads: 16; key note: obsolete, use DS3622xs+

* FMA3 instruction support required. All Haswell Core processors or later support it. Very few Pentiums/Celerons do (J-series CPUs are a notable exception). Piledriver is believed to be the minimum AMD CPU architecture equivalent to Intel Haswell.
** Based on history, DS920+ should require Haswell. There is anecdotal evidence gradually emerging that DS920+ will run on any x86-64 hardware.

NOT ALL HARDWARE IS SUITABLE: DSM 7 has a new requirement for the initial installation. If drive hotplug is supported by the motherboard or controller, all AHCI SATA ports visible to DSM must either be configured for hotplug or have an attached drive during the initial install. Additionally, if the motherboard or controller chipset supports more ports than are physically implemented, DSM installation will fail unless they are mapped out of visibility. On some hardware, it may be impossible to install (particularly on baremetal) while retaining access to the physical ports. The installation tutorial has more detail on the causes of this problem, and possible workarounds.

DRIVE SLOT MAPPING CONSIDERATIONS: On most platforms, DSM evaluates the boot-time Linux parameters SataPortMap and DiskIdxMap to map drive slots from disk controllers to a usable range for DSM. Much has been written about how to set up these parameters. TCRP's satamap command determines appropriate values based on the system state during the loader build. It is also simple to manually edit the configuration file if your hardware is unique or misidentified by the tool.

On the DS920+ and DS1621+ platforms, DSM uses a Device Tree to identify the hardware and ignores SataPortMap and DiskIdxMap. The device tree hardcodes the SATA controller PCI devices and drive slots (and also NVMe slots and USB ports) prior to DSM installation. Therefore, an explicit device tree that matches your hardware must be configured and stored within the loader image. TCRP automatic device tree configuration is limited:
- any disk ports left unpopulated at loader build time will not be accessible later
- VMware ESXi is not currently supported
- host bus adapters (SCSI, SAS, or SATA RAID in IT mode) are not currently supported
- manually determining correct values and updating the device tree is complex
Device Tree support is being worked on and will improve, but presently you will generally be better served by choosing platforms that support SataPortMap and DiskIdxMap (see Tier 1 below).

CURRENT PLATFORM RECOMMENDATIONS AND DECISION TREE:

VIRTUALIZATION: All the supported platforms can be run as a virtual machine within a hypervisor. Some use case examples:
- virtualize an unsupported network card
- virtualize SAS/NVMe storage and present it to DSM as SATA
- run other VMs in parallel on the same hardware (as an alternative to Synology VMM)
- share a 10GbE network card with other non-XPEnology VMs
- testing and rollback of updates
Prerequisites: ESXi (requires a paid or free license) or an open-source hypervisor (QEMU, Proxmox, XenServer). Hyper-V is NOT supported.
Preferred configurations: pass through the SATA controller and disks, and/or configure RDM/RAW disks.

This post will be updated as more documentation is available for the various TCRP implementations.
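For reference, a hedged sketch of what the SataPortMap/DiskIdxMap route looks like under TCRP. The satamap command is the one mentioned above; the configuration file name and field layout are assumptions based on common TCRP builds, so verify them against your own loader before relying on this:

# Ask TCRP to propose values from the hardware visible at build time
./rploader.sh satamap now

# Or edit the loader's configuration file by hand (commonly user_config.json);
# illustrative values for a board with a single 4-port SATA controller:
#   "extra_cmdline": {
#       "SataPortMap": "4",
#       "DiskIdxMap": "00"
#   }
# Then rebuild the loader so the new values are written to the boot device.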
    1 point
  6. I don't know. That's why I am anxious to see how people use this... are you saying you can issue this command in an SSH shell?
    1 point
  7. I'm not familiar with the build command, but will ./build-loader.sh work if I use it like this?
BRP_JUN_MOD=1 BRP_DEBUG=1 BRP_USER_CFG=user_config-ds918.json ./build-loader.sh 'DS918+' '7.0.1-42218'
    1 point
  8. You are right, the best way is to update manually and to check compatibility before updating. This is just a solution for me to avoid repeated bsp patches and the trouble of decompressing encrypted pat files.
    1 point
  9. As I saw, jumkey removes the synoinfo.conf patches for example.com, so yes.
    1 point
  10. php-cli is available for TC. As for buildroot, I don't know; I'll have to look into that.
    1 point
  11. @jumkey please remember to sync your redpill-misc to the one I've modified in my repo; it takes care of several other issues. redpill-load/include/loader-ext/target_exec.sh_ is also modified in my repo to allow extensions with colliding dependencies to load correctly.
    1 point
  12. That must be because of the HBA? I have used TCRP on 2 baremetal NAS boxes, a Western Digital NAS box and a TerraMaster NAS box, one with 4 bay slots and one with 5, each running DSM 7.1. Those 2 baremetal boxes correctly identify disks inserted into the correct slot. In other words, when I put a drive in slot 3, it shows graphically in slot 3 of DSM. It does not group them together like it does on the HBA card. This is true on both systems. I guess in that sense SATA controllers work better?
    1 point
  13. Change both to:
./rploader.sh ext broadwellnk-7.1.0-42661 add https://github.com/pocopico/redpill-load/raw/develop/redpill-virtio/rpext-index.json
./rploader.sh ext broadwellnk-7.1.0-42661 add https://github.com/pocopico/redpill-load/raw/develop/redpill-acpid/rpext-index.json
    1 point
  14. If the dtb was not patched correctly by rploader, then your next best option would be to manually patch model.dtb. The process is not that difficult, but it is a two-step process. Boot DSM and log in with SSH, or via http://<yourip>:7681 if you are in junior mode, and provide the following info:
fdisk -l
ls /sys/block
cat /sys/block/*/device/*block*info
    1 point
  15. It's my impression that, on older platforms (DS3615/918/3617/3622/DVA3221), there is no way to configure the slot number. The disks are presented to the GUI by their device name, e.g. sda will always be the 1st drive, sde the fifth, and so on. So the disks get their slots in a sequential way. Here SataPortMap, DiskIdxMap and sata_remap come to assist. So on my system with mptsas (LSI 3Gb/s SAS HBA):
ls -l /sys/block/sd[a-z]/device
lrwxrwxrwx 1 root root 0 May 12 03:13 /sys/block/sdc/device -> ../../../2:0:0:0
lrwxrwxrwx 1 root root 0 May 12 03:13 /sys/block/sdd/device -> ../../../2:0:1:0
lrwxrwxrwx 1 root root 0 May 12 03:13 /sys/block/sde/device -> ../../../2:0:2:0
lrwxrwxrwx 1 root root 0 May 12 03:13 /sys/block/sdf/device -> ../../../2:0:3:0
lrwxrwxrwx 1 root root 0 May 12 03:13 /sys/block/sdg/device -> ../../../2:0:4:0
lrwxrwxrwx 1 root root 0 May 12 03:13 /sys/block/sdh/device -> ../../../2:0:5:0
lrwxrwxrwx 1 root root 0 May 12 03:13 /sys/block/sdi/device -> ../../../2:0:6:0
lrwxrwxrwx 1 root root 0 May 12 03:13 /sys/block/sdj/device -> ../../../2:0:7:0
I need to test on my test VM to see if there is a way to fix that. On newer platforms with device tree files the slot number is hardcoded into the model.dtb file and enforced in a specific place on the system, so every HBA port will be represented as a specific slot.
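To make those parameters concrete, here is a hedged illustration of how the boot arguments are usually read. The values are invented for a hypothetical box with a 2-port and a 4-port SATA controller, and the sata_remap syntax is the form seen in community posts, so treat this as a sketch rather than a recipe:

# Hypothetical kernel cmdline fragment (values are examples only):
#   SataPortMap=24      -> first controller exposes 2 ports, second exposes 4
#   DiskIdxMap=0002     -> first controller's disks start at slot 0x00 (Disk 1),
#                          second controller's disks start at slot 0x02 (Disk 3)
#   sata_remap=0>4:4>0  -> swap the disk at index 0 with the disk at index 4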
    1 point
  16. With DS920+, if the second drive was not physically present in the machine when you built the loader, it probably won't be recognized. I just posted some contextual information in the Tutorials section here: https://xpenology.com/forum/topic/61634-dsm-7x-loaders-and-platforms/ You can try to rebuild the DS920+ loader with both drives connected, or you can go back to DS918+. The rebuild is probably a better choice for you so that you don't deal with the difficulty of backrevving.
    1 point
  17. - Outcome of the update: SUCCESSFUL
- DSM version prior update: RedPill 0.4.4 DS3615xs v7.0.1-42218 update 3
- Loader version and model: RedPill 0.4.6 DS3615xs v7.1.0-42661 update 1
- Using custom extra.lzma: NO
- Installation type: BAREMETAL - HP MediaSmart Server EX490 (Xeon E3110 / 2Gb RAM)
- Additional comments:
    1 point
  18. This may be useful to someone, for RedPill core. All conclusions come from my own experiments. For the command ./rploader.sh satamap now to work correctly, an HDD or DVD-ROM device must be connected to one of the ports. My motherboard supports HDD hot-swap, which is a very important parameter when determining the map, as the "English-speaking friends" write. We have a GA-N3150N with 4 SATA ports: 0, 1, 2, 3. You need to connect a device to port 0 or 1, and another to port 2 or 3. Then you get a correct map: SataPortMap=22 DiskIdxMap=0002. On a test build I connected an HDD only to SATA port 0 and got a map where, accordingly, 2 ports (SATA 2 and 3) did not work: SataPortMap=2 DiskIdxMap=00
    1 point
  19. I'm trying to boot with buildroot as the first kernel, then use php to dynamically patch the syno kernel, and finally start the patched vmlinux with kexec, as jun did before. Now these steps all execute successfully and I can enter the installation interface, but when I install I get a kernel panic. Due to the dynamic patching, I did not add the bsp kernel patch file to the configuration of DS2422+, so it is currently unavailable.
    1 point
  20. Due to the lack of kernel sources for DSM 7, the driver modules are compiled by pocopico, to the best of his knowledge and ability, against the corresponding vanilla kernels. Syno's kernels are quite bent out of shape (e.g. wild patches and backports that do not appear in the vanilla kernels), so driver modules built against a vanilla kernel do not always match the Syno kernel.
    1 point
  21. I found the solution for how to make the bottom drive work on the EX485 (hopefully the other models as well) with DSM 6.2-23739 as a DS3617xs:
- prepare the USB drive and other things as described in the video
- DO NOT insert any HDD into the trays
- turn on your HP with the prepared USB drive in the bottom USB port
- find your server with Synology Assistant
- when it says that there is no HDD installed, put your primary HDD into the 1st (bottom) tray and install the .pat file manually as described
- VOILA. You may restart or switch off the server with the bottom drive installed and it will load again anytime
    1 point
  22. 15 - What is RAIDF1 and why would I want to use it?

RAIDF1 is a modification of RAID5, implemented with a customization of MDRAID (the disk array manager used by DSM). It is specifically tuned to minimize the likelihood of SSDs wearing out at the same time. SSDs have a finite lifespan based on the number of times they are written. This information is usually presented as a "wear indicator" or "life remaining" number from 100 (new) counting down to 0 (end of service life). Most operating systems, including DSM, monitor SSD health using SMART and will alert when devices near the end of their service lives, and prior to failure. An array of brand new SSDs is consumed evenly because of how RAID5 intentionally distributes workloads evenly to the array members. Eventually, the SSDs all wear out together, which can result in multiple, simultaneous drive failures and subsequent data loss.

How does RAIDF1 work?

RAIDF1 attempts to avoid this by biasing writes to a specific drive in the array. To understand this, consider how the DSM btrfs and ext4 filesystems store data. By default, DSM filesystems save data in 4K blocks. Even a 1-byte file uses 4K as a minimum amount of space. Modern HDDs/SSDs also store data in 4K sectors. When a byte must be changed, all the other bytes within the sector are read, then rewritten at the same time. This read/write requirement is called write amplification and it affects the performance of all parts of the storage ecosystem, from HDDs and SSDs to filesystems to RAID arrays.

MDRAID also works with blocks, but they are called chunks to differentiate them from filesystem blocks. The default chunk size for DSM RAID5/6/10 is 64K. A stripe is the logical grouping of adjacent chunks spanning the array members horizontally. Using the example of a RAID5 with three drives, two of the chunks in the stripe contain data and the third chunk is parity. When DSM performs data scrubbing, it reads all three chunks, then validates all the data and parity in each stripe for mathematical consistency (and corrects if necessary). Each stripe rotates the position of the parity chunk successively through the array members. In the three-drive example, stripe 1's parity chunk is on drive 1, stripe 2's parity chunk is on drive 2, stripe 3's parity chunk is on drive 3, stripe 4's parity chunk is back on drive 1, and so on. This results in an even distribution of data and parity across all array members.

Note that many files (filesystem blocks) may be stored in one chunk. The highest density case is 16 files of 4K or smaller in a single chunk. Consider that when one of those files changes, only two of the three chunks in the stripe must be rewritten: first, the chunk containing the block containing the file, and then the parity chunk (since the parity calculation must be updated).

RAIDF1 subtly modifies the RAID5 implementation by picking one of the array members (let's call it the F1-drive), and sequencing two consecutive stripes in the stripe parity rotation for it. This is NOT additional parity (each stripe still only has one parity chunk), so there is no loss of space or read/write performance.

The table below compares parity distribution (how much of the total parity is stored on specific array members) between RAID5 and RAIDF1:

Array configuration: Drive #1 / Drive #2 / Drive #3 / Drive #4 / Drive #5 parity
- 3-drive RAID5: 33.33% / 33.33% / 33.33%
- 4-drive RAID5: 25% / 25% / 25% / 25%
- 3-drive RAIDF1: 25% / 25% / 50% (F1-drive)
- 4-drive RAIDF1: 20% / 20% / 20% / 40% (F1-drive)
- 5-drive RAIDF1: 16.66% / 16.66% / 16.66% / 16.66% / 33.33% (F1-drive)

With RAIDF1, anytime a full stripe is written, I/O is evenly distributed among the drives, just like RAID5. When a small file or file fragment (one that does not span a stripe) is written, on average the F1-drive will be used about twice as often as the other drives. Thus, the F1-drive will experience accelerated wear and will reach its life limit first. Then it can be replaced with minimal risk of one of the remaining members failing at the same time. Upon replacement, DSM selects the SSD that is closest to being worn out and designates it as the new F1-drive. The array sync then rewrites the array to achieve the desired RAIDF1 parity distribution. Note that the total number of write events is not increased with RAIDF1. "Total cost of ownership" does not change, as the extra writes to the F1-drive are avoided on the other array members, so they last longer.

Caveats and other notable issues

As a RAID5 variant, RAIDF1 creates arrays based on the smallest member device. For best results, all the drives should be the same size and type (a larger drive can be used, but the extra space will be ignored). RAIDF1 can theoretically be "defeated" by installing dissimilar drives, with one drive having significantly higher capacity and/or a high DWPD (drive writes per day) rating. If this drive were then selected as the F1-drive, it might have enough write capacity to outlast the other array members, which could then fail together. Always using identical SSDs for the array will avoid this freak occurrence.

SHR (Synology Hybrid RAID) allows drives of different sizes to be used in a redundant array while maximizing the space available. This is done by creating a series of arrays, including a small one compatible with the smallest drive, and a large one using the available space common to the largest drives, and possibly some in between depending upon the makeup and complexity of the SHR. The arrays are then concatenated into a single logical volume (using LVM) available for use within DSM. For redundancy, the large SHR drives must be members of all the arrays. The small SHR drives contain only one array and not much of the overall data, and are accessed much less frequently than the larger drives. For RAIDF1's algorithm to produce expected results, array write patterns must be simple and predictable. In summary, RAIDF1 and SHR array behaviors are not compatible with each other, which is reflected in the Synology DiskStation product lines. The Synology models that support RAIDF1 are the same as those that do not officially support SHR. This includes the XPEnology-enabled DS3615xs and DS3617xs platforms. Note that SHR can be enabled on these platforms by modifying /etc.defaults/synoinfo.conf, with no impact to RAIDF1 functionality (a sketch of that edit follows after the references).

The MDRAID modifications that enable RAIDF1 are compiled into the DSM kernel. The consumer-oriented DSM platforms do not contain those changes, including the XPEnology-enabled DS916+ and DS918+ platforms. Creation and maintenance of a RAIDF1 is not possible on those systems. However, just like SHR, an established RAIDF1 array is completely functional and behaves like any other RAID5 array when migrated to a platform that does not support it. Brilliant!

TRIM helps minimize the impact of write amplification on SSDs. Because the F1-drive is written to more frequently, it will be affected by write amplification more severely than the other array members, and performance of both the drive and the array will degrade over time unless TRIM support is enabled.

Finally, there is no RAID6-based, SSD-optimized choice yet. Perhaps RAIDF2 will be an option in DSM 7.0.

References

If you want to install RAIDF1 on XPEnology, you will find a simple tutorial here.
https://en.wikipedia.org/wiki/Standard_RAID_levels
https://global.download.synology.com/download/Document/Software/WhitePaper/Firmware/DSM/All/enu/Synology_RAID_F1_WP.pdf
http://wiki.linuxquestions.org/wiki/Block_devices_and_block_sizes
https://raid.wiki.kernel.org/index.php/RAID_setup#Chunk_sizes
https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm
https://www.synology.com/en-sg/knowledgebase/DSM/tutorial/Storage/Which_Synology_NAS_models_support_RAID_F1
https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/Which_models_have_limited_support_for_Synology_Hybrid_RAID_SHR
https://en.wikipedia.org/wiki/Trim_(computing)
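Regarding the SHR note above, a hedged sketch of the synoinfo.conf edit that is usually shared in the community; the key names are the commonly cited ones and may vary by DSM version, so double-check your own file before applying:

# Enable SHR on a RAIDF1-capable platform (DS3615xs/DS3617xs class):
# back up, comment out the raid-group flag, then add the SHR flag
sudo sed -i.bak 's/^supportraidgroup="yes"/#supportraidgroup="yes"/' /etc.defaults/synoinfo.conf
echo 'support_syno_hybrid_raid="yes"' | sudo tee -a /etc.defaults/synoinfo.conf
# Reboot; Storage Manager should then offer SHR when creating a storage pool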
    1 point
  23. Hi. Unfortunately no. It's not relevant for me right now. I left my home due to the war :(
    0 points