XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 06/21/2022 in all areas

  1. As I announced in a previous message, it has been decided to split the development and stable TCRP releases, in an effort to properly document the deployment.

=> The stable release will remain at version 0.8 and can be accessed through the main repo: https://github.com/pocopico/tinycore-redpill/tree/stable. No new features will come through the stable release; only fixes will be applied as they come along.

=> The development release 0.9 will bring new features, refinement of old processes, etc. So far the development release includes the following:

Current:
- New images containing the latest development rploader.sh version 0.9.0.2
- New background wallpaper when landing on the TCRP desktop
- New system status landing page when you start the desktop. You can now easily find your IP address on screen.
- rploader.sh development release 0.9.0.2
- Several packages previously missing from Tinycore (ntpclient/file/php-8-cli/scsi modules, etc.)

Roadmap:
- User manual for the module compilation process
- User manual for extension creation
- Custom.gz re-engineering

The Tinycore image will remain at version 12, as the binutils in 13.1 have gone past the maximum version required for module compilation.

You can head to the development repo to download new images if you would like to test: https://github.com/pocopico/tinycore-redpill/tree/main

Please report any development additions and improvements you would like included on this thread. Do not post install issues here.
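If you would rather track the two releases with git instead of downloading the images, a minimal sketch follows; the branch names stable and main are taken from the two repo URLs above, and the version grep simply assumes rploader.sh carries its version string near the top of the file:

# Stable release (0.8.x, fixes only)
git clone --depth 1 --branch stable https://github.com/pocopico/tinycore-redpill.git tinycore-redpill-stable

# Development release (0.9.x, new features)
git clone --depth 1 --branch main https://github.com/pocopico/tinycore-redpill.git tinycore-redpill-dev

# Quick check of which rploader.sh version you ended up with
grep -m1 -i "version" tinycore-redpill-dev/rploader.sh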
    1 point
  2. Hi everyone, I have made this video tutorial on YouTube for anyone interested in running XPEnology DSM 7.1 on a VMware ESXi server. We will also activate:
- Active Backup for Business
- Active Backup for Google Workspace
- Active Backup for Microsoft 365
Link with more information (files you will need to download): https://labsmac.es/synology-dsm-7-1-en-vmware-esxi-7/
I hope you like it. Regards.
    1 point
  3. @pocopico, are you the "culprit"?
    1 point
  4. TCRP mainly takes care of creating the loader. Once you have everything working, there is no need to recreate it unless it is needed again after an upgrade.
    1 point
  5. The built-in NIC uses the Broadcom tg3 driver; igb appears to be the external NIC you have installed. If so, I think it is normal for the system to operate with tg3 alone, without igb. How about rebuilding the loader with only tg3?

./rploader.sh clean
./rploader.sh ext broadwellnk-7.1.0-42661 add https://raw.githubusercontent.com/pocopico/rp-ext/master/tg3/rpext-index.json
./rploader.sh build broadwellnk-7.1.0-42661 manual

Anyway, he used the same HP N54L as you, and the NIC problem was resolved.
    1 point
  6. I resolved it by editing the shutdown script. By default, it specifically references LAN port 0, the one I don't want to use. So I edited the script to reference port 1 (my embedded NIC) instead, and everything worked as it should. Hope this helps.
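For illustration only, since the post does not name the script: assuming it is a shell script that hard-codes the first port's interface name (eth0), the edit amounts to pointing it at the second port (eth1) instead. The path below is a placeholder, not the real location:

# Hypothetical path and interface names -- adjust to your actual shutdown script.
cp /path/to/shutdown-script.sh /path/to/shutdown-script.sh.bak    # keep a backup first
sed -i 's/eth0/eth1/g' /path/to/shutdown-script.sh                # reference port 1 instead of port 0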
    1 point
  7. System Partition Failed means that the RAID1 for DSM that spans all disks is no longer consistent. Here are the instructions from the Synology website to correct it.

Failed to access the system partition

To repair the system partition:
1. Launch Storage Manager.
2. Go to Overview and click the Repair link. The system should start repairing the system partition on the drives.
3. Wait for the system to complete the repair.
4. Go to HDD/SSD. The allocation status of the drives should return to Normal.

If one or more drives still show the System Partition Failed status, they might be defective. You can do the following:
1. Replace the defective drives one by one.
2. Depending on what status is shown on the Overview page, repair the storage pool or the system partition. For detailed instructions on repairing a storage pool, refer to the respective help articles for DSM 7.0 and DSM 6.2. For detailed instructions on repairing the system partition, refer to the respective help articles for DSM 7.0 and DSM 6.2.

Follow these best practice tips to keep your data safe and your system operational:
- Regularly back up your data and system configurations. For detailed instructions, refer to the respective help articles for DSM 7.0 and DSM 6.2.
- Run S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) tests on your drives to monitor the drive health status. For detailed instructions, refer to the respective help articles for DSM 7.0 and DSM 6.2.
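If you want to see that inconsistency from a shell before or after the GUI repair, the kernel's software-RAID status shows it directly. This sketch assumes DSM's usual layout where the small system partition is the md0 array; the device name on your box may differ:

# List all mdraid arrays; the DSM system partition is normally md0
cat /proc/mdstat

# More detail on the system array (assumes it is /dev/md0)
mdadm --detail /dev/md0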
    1 point
  8. 15 - What is RAIDF1 and why would I want to use it?

RAIDF1 is a modification of RAID5, implemented with a customization of MDRAID (the disk array manager used by DSM). It is specifically tuned to minimize the likelihood of SSDs wearing out at the same time.

SSDs have a finite lifespan based on the number of times they are written. This information is usually presented as a "wear indicator" or "life remaining" number from 100 (new) counting down to 0 (end of service life). Most operating systems, including DSM, monitor SSD health using SMART and will alert when devices near the end of their service lives, prior to failure. An array of brand-new SSDs is consumed evenly because of how RAID5 intentionally distributes workloads evenly to the array members. Eventually, the SSDs all wear out together, which can result in multiple, simultaneous drive failures and subsequent data loss.

How does RAIDF1 work?

RAIDF1 attempts to avoid this by biasing writes to a specific drive in the array. To understand this, consider how the DSM btrfs and ext4 filesystems store data. By default, DSM filesystems save data in 4K blocks; even a 1-byte file uses 4K as a minimum amount of space. Modern HDDs/SSDs also store data in 4K sectors. When a byte must be changed, all the other bytes within the sector are read, then rewritten at the same time. This read/write requirement is called write amplification, and it affects the performance of all parts of the storage ecosystem, from HDDs and SSDs to filesystems to RAID arrays.

MDRAID also works with blocks, but they are called chunks to differentiate them from filesystem blocks. The default chunk size for DSM RAID5/6/10 is 64K. A stripe is the logical grouping of adjacent chunks spanning the array members horizontally. Using the example of a RAID5 with three drives, two of the chunks in each stripe contain data and the third chunk is parity. When DSM performs data scrubbing, it reads all three chunks, then validates all the data and parity in each stripe for mathematical consistency (and corrects it if necessary).

Each stripe rotates the position of the parity chunk successively through the array members. In the three-drive example, stripe 1's parity chunk is on drive 1, stripe 2's parity chunk is on drive 2, stripe 3's parity chunk is on drive 3, stripe 4's parity chunk is back on drive 1, and so on. This results in an even distribution of data and parity across all array members. Note that many files (filesystem blocks) may be stored in one chunk; the highest-density case is 16 files of 4K or smaller in a single chunk. Consider that when one of those files changes, only two of the three chunks in the stripe must be rewritten: first the chunk containing the block containing the file, and then the parity chunk (since the parity calculation must be updated).

RAIDF1 subtly modifies the RAID5 implementation by picking one of the array members (let's call it the F1-drive) and sequencing two consecutive stripes in the stripe parity rotation for it. This is NOT additional parity (each stripe still only has one parity chunk), so there is no loss of space or read/write performance.
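To make that concrete, here is the arithmetic implied by the parity distribution table below, assuming the F1-drive simply takes two slots in every pass of the parity rotation (drive 1, drive 2, drive 3, drive 3, then repeat), which is one way to read "sequencing two consecutive stripes" above:

\[
P_{\text{F1}} = \frac{2}{n+1}, \qquad P_{\text{other}} = \frac{1}{n+1} \quad \text{for an } n\text{-drive RAIDF1}
\]
\[
n=3:\ \tfrac{2}{4}=50\%,\ \tfrac{1}{4}=25\% \qquad
n=4:\ \tfrac{2}{5}=40\%,\ \tfrac{1}{5}=20\% \qquad
n=5:\ \tfrac{2}{6}=33.33\%,\ \tfrac{1}{6}=16.66\%
\]

These values match the table below exactly.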
The table below compares parity distribution (how much of the total parity is stored on specific array members) between RAID5 and RAIDF1:

Array configuration    Drive #1 parity    Drive #2 parity    Drive #3 parity    Drive #4 parity    Drive #5 parity
3-drive RAID5          33.33%             33.33%             33.33%
4-drive RAID5          25%                25%                25%                25%
3-drive RAIDF1         25%                25%                50% (F1-drive)
4-drive RAIDF1         20%                20%                20%                40% (F1-drive)
5-drive RAIDF1         16.66%             16.66%             16.66%             16.66%             33.33% (F1-drive)

With RAIDF1, anytime a full stripe is written, I/O is evenly distributed among the drives, just like RAID5. When a small file or file fragment (one that does not span a stripe) is written, on average the F1-drive will be used about twice as often as the other drives. Thus, the F1-drive will experience accelerated wear and will reach its life limit first. It can then be replaced with minimal risk of one of the remaining members failing at the same time. Upon replacement, DSM selects the SSD that is closest to being worn out and designates it as the new F1-drive. The array sync then rewrites the array to achieve the desired RAIDF1 parity distribution.

Note that the total number of write events is not increased with RAIDF1. "Total cost of ownership" does not change, as the extra writes to the F1-drive are avoided on the other array members, so they last longer.

Caveats and other notable issues

As a RAID5 variant, RAIDF1 creates arrays based on the smallest member device. For best results, all the drives should be the same size and type (a larger drive can be used, but the extra space will be ignored). RAIDF1 can theoretically be "defeated" by installing dissimilar drives, with one drive having significantly higher capacity and/or a high DWPD (drive writes per day) rating. If this drive were then selected as the F1-drive, it might have enough write capacity to outlast the other array members, which could then fail together. Always using identical SSDs for the array will avoid this freak occurrence.

SHR (Synology Hybrid RAID) allows drives of different sizes to be used in a redundant array while maximizing the space available. This is done by creating a series of arrays, including a small one compatible with the smallest drive, a large one using the available space common to the largest drives, and possibly some in between depending upon the makeup and complexity of the SHR. The arrays are then concatenated into a single logical volume (using LVM) available for use within DSM. For redundancy, the large SHR drives must be members of all the arrays. The small SHR drives contain only one array and not much of the overall data, and are accessed much less frequently than the larger drives. For RAIDF1's algorithm to produce the expected results, array write patterns must be simple and predictable. In summary, RAIDF1 and SHR array behaviors are not compatible with each other, which is reflected in the Synology DiskStation product lines. The Synology models that support RAIDF1 are the same as those that do not officially support SHR. This includes the XPEnology-enabled DS3615xs+ and DS3617xs+ platforms. Note that SHR can be enabled on these platforms by modifying /etc.defaults/synoinfo.conf, with no impact to RAIDF1 functionality (see the sketch after the references below).

The MDRAID modifications that enable RAIDF1 are compiled into the DSM kernel. The consumer-oriented DSM platforms do not contain those changes, including the XPEnology-enabled DS916+ and DS918+ platforms. Creation and maintenance of a RAIDF1 array is not possible on those systems.
However, just like SHR, an established RAIDF1 array is completely functional and behaves like any other RAID5 array when migrated to a platform that does not support it. Brilliant!

TRIM helps minimize the impact of write amplification on SSDs. Because the F1-drive is written to more frequently, it will be affected by write amplification more severely than the other array members, and the performance of both the drive and the array will degrade over time unless TRIM support is enabled.

Finally, there is no RAID6-based, SSD-optimized choice yet. Perhaps RAIDF2 will be an option in DSM 7.0.

References

If you want to install RAIDF1 on XPEnology, you will find a simple tutorial here.
https://en.wikipedia.org/wiki/Standard_RAID_levels
https://global.download.synology.com/download/Document/Software/WhitePaper/Firmware/DSM/All/enu/Synology_RAID_F1_WP.pdf
http://wiki.linuxquestions.org/wiki/Block_devices_and_block_sizes
https://raid.wiki.kernel.org/index.php/RAID_setup#Chunk_sizes
https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm
https://www.synology.com/en-sg/knowledgebase/DSM/tutorial/Storage/Which_Synology_NAS_models_support_RAID_F1
https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/Which_models_have_limited_support_for_Synology_Hybrid_RAID_SHR
https://en.wikipedia.org/wiki/Trim_(computing)
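The SHR note above mentions editing /etc.defaults/synoinfo.conf but does not show the change. Here is a minimal sketch of the edit commonly described in XPEnology guides; the key names support_syno_hybrid_raid and supportraidgroup are assumptions taken from those guides, not from this post, so confirm them in your own file before touching anything:

# Assumed key names -- verify against your own /etc.defaults/synoinfo.conf first.
cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak                         # back up the original
sed -i 's/^supportraidgroup=.*/supportraidgroup="no"/' /etc.defaults/synoinfo.conf     # disable RAID Group support
grep -q '^support_syno_hybrid_raid=' /etc.defaults/synoinfo.conf || \
  echo 'support_syno_hybrid_raid="yes"' >> /etc.defaults/synoinfo.conf                 # enable SHR if the key is absent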
    1 point