XPEnology Community

Everything posted by IG-88

  1. that's about ssd's. on a single ssd the controller evens that out by distributing the write access between cells (wear leveling); something like that is not needed with conventional magnetic recording, but it might become a thing with heat or microwave assisted magnetic recording in the next years.

     when combining ssd's in a raid5 set you might face the effect that all disks fail at the same time, as wear-out is something that will hit an ssd at some point (there is usually a tool or s.m.a.r.t. to monitor this). synology has raid f1 for this as an alternative to raid5
     https://xpenology.com/forum/topic/9394-installation-faq/?tab=comments#comment-131458
     https://global.download.synology.com/download/Document/Software/WhitePaper/Firmware/DSM/All/enu/Synology_RAID_F1_WP.pdf

     on a synology system that is built into the kernel, and as we use syno's original kernel some units lack that support
     https://xpenology.com/forum/topic/61634-dsm-7x-loaders-and-platforms/#comment-281190

     you can also see if it is supported by looking at the state of mdadm with "cat /proc/mdstat": the "Personalities" line tells you what raid types are possible. in general don't expect a consumer unit to be able to use raid f1, but there is a list from synology
     https://kb.synology.com/en-ro/DSM/tutorial/Which_Synology_NAS_models_support_RAID_F1
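     a quick check on the shell (the personalities shown are just an example output; a raid f1 entry only appears on platforms whose kernel was built with synology's raid f1 support):

         cat /proc/mdstat
         # the first line lists the raid types the running kernel supports, e.g.
         # Personalities : [raid1] [raid6] [raid5] [raid4] ...
         # if raid f1 is supported it shows up as an additional personality there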
  2. i came up with this (should survive updates?)

     create file: /etc/apparmor/usr.syno.bin.synowedjat-exec

         /usr/syno/bin/synowedjat-exec {
           deny network,
           deny capability net_raw,
           deny capability net_admin,
         }

     create file: /usr/local/bin/apparmor_add_start.sh (needs to be executable)

         #!/bin/sh
         apparmor_parser -r /etc/apparmor/usr.syno.bin.synowedjat-exec

     create file: /usr/local/bin/apparmor_add_stop.sh (needs to be executable)

         #!/bin/sh
         # apparmor_parser -R /etc/apparmor/usr.syno.bin.synowedjat-exec
         # no plan to remove that as long as the system is running

     create file: /usr/local/lib/systemd/system/apparmor_add.service

         # Service file for apparmor_add
         # copy this file to /usr/local/lib/systemd/system/apparmor_add.service
         [Unit]
         Description=Add AppArmor profile on boot

         [Service]
         Type=oneshot
         ExecStart=/bin/bash /usr/local/bin/apparmor_add_start.sh
         ExecStop=/bin/bash /usr/local/bin/apparmor_add_stop.sh
         RemainAfterExit=yes
         Restart=no

         [Install]
         WantedBy=syno-low-priority-packages.target

     test it: "systemctl start apparmor_add" to start it now, then check with "aa-status" that the new apparmor profile is active -> /usr/syno/bin/synowedjat-exec

     "systemctl enable apparmor_add" to enable it at system start; should result in this: "Created symlink from /etc/systemd/system/syno-low-priority-packages.target.wants/apparmor_add.service to /usr/local/lib/systemd/system/apparmor_add.service."

     reboot and check again with "aa-status" -> /usr/syno/bin/synowedjat-exec
  3. it's way easier than you think: you sum up the space of all disks in your array and subtract the largest disk - that's shr-1; for shr-2 you subtract the two largest disks. a raid5 or raid6 (same disk size) would be a sub-case of this. since dsm 7 all created volumes are shr1 or 2 if you look closely. shr was always mdadm software raid sets (same size partitions of the disks put together as raid sets) and these "glued" together by LVM2 into a volume; in older dsm versions you could leave out LVM2 if you had same size disks only.

     mainly it's the question how many disks max for 1 disk of redundancy. if you are willing to have 8 disks as raid5 then you "lose" one disk; if you create two raid5 sets it will be two disks for redundancy. if you take it as not more than 6 disks in a raid5, then with 8 disks there is the need for raid6, and in that scenario there is not much difference to two raid5 sets (raid6 has the edge, as it's not important which disks fail if two disks fail; with two raid5 sets of 4 disks each, two disks failing in the same 4 disk set kills that set). also it's more convenient to have just one volume, no juggling space between two volumes. so two small arguments for all disks in raid6 (you can replace raid5 with shr1 and raid6 with shr2 here; there can be some differences with shr in what's "lost" for redundancy depending on the size difference of the biggest disks).

     the need for more writing with two redundancies might have been the weaker point of raid6, but there is also the argument that having more disks is better to split the needed IOPS and transfer between more disks (older phrase: more spindles, more speed - gets clearer if you think of a raid0 made of raid1 sets aka raid10). there is also some correlation with the number of disks in a raid5 set, like sets of 3 or 5 having better performance, as the number of disks taking the data (not redundancy) is 2 and 4 in these cases; a 9 disk raid5 would be next in this line (8 disks taking the data, so it's about two to the power of n: 2, 4, 8). but that's kind of too many disks for just one redundancy, so a 10 disk raid6 would be in that place.

     see my comparison above: the odds on a two disk failure are better with a raid6 of 8 disks than with two raid5 sets of 4 disks each - with the 8 disk raid6 any two disks can fail, and that's not the case with the two raid5 sets.

     there are more things you could take into account when building a system and deciding about redundancy, and there are a lot more options to handle this with a normal linux/bsd system than dsm can offer; dsm is pretty much limited to mdadm raid. depending on how important these things are there can be other solutions than dsm with its mdadm, like systems doing ZFS or UnRAID. examples of other things that might be important: constant guaranteed write speed, IOPS, scaling to a larger number of disks, caching, higher level of redundancy, ...
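     the rule of thumb above as a small shell sketch (the disk sizes are made-up values; real shr results can differ a bit when mixed sizes get split into several mdadm sets):

         #!/bin/sh
         # estimate usable shr space: total minus the largest disk (shr-1)
         # or minus the two largest (shr-2); sizes in TB
         disks="4 4 8 8"
         total=0
         for d in $disks; do total=$((total + d)); done
         sorted=$(echo "$disks" | tr ' ' '\n' | sort -n)
         largest=$(echo "$sorted" | tail -1)
         second=$(echo "$sorted" | tail -2 | head -1)
         echo "shr-1 usable: $((total - largest)) TB"
         echo "shr-2 usable: $((total - largest - second)) TB"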
  4. https://www.synology.com/de-de/dsm/7.1/software_spec/synology_photos
     "... HEIC files and Live Photos require the Advanced Media Extensions package to be displayed ..." aka AME
     https://xpenology.com/forum/topic/30552-transcoding-and-face-recognitionpeople-and-subjects-issue-fix-in-once/?do=findComment&comment=441007
     https://xpenology.com/forum/topic/65643-ame-30-patcher/
  5. https://xpenology.com/forum/topic/8057-physical-drive-limits-of-an-emulated-synology-machine/?do=findComment&comment=122819
  6. dva1622 is device tree; i guess that goes with the platform, and dva1622 is geminilake like the 920+ and has more in common with the 920+ than with the dva3122 (denverton). the denverton platform dates back to 2018; device tree came up around 2020 and is used in newer platforms like v1000, while older ones like broadwell/broadwellnk (ds3617 or ds3622) are still old style
  7. that was for "old" style disk handling (sataportmap/diskidxmap, like 918+ or 3622); doing that with device tree based disk handling might not be tested yet
  8. besides using another usb flash drive you could try arpl as loader https://github.com/fbelavenuto/arpl/releases if the hardware is new and untested, try a rescue/live linux to boot and see if it runs without problems
  9. no, and lately more inclined to yes https://xpenology.com/forum/topic/67961-use-nvmem2-hard-drives-as-storage-pools-in-synology/ it's a feature that's about to come out (not sure why synology is not making it available as a general feature for all models), but under normal conditions on dsm 7.1 it's usually a no; dsm's only use for it would be as cache drive. as a normal volume it's not supported under normal conditions: it will appear as an external usb single disk, with no integration into a raid set in a normal volume (for good reasons).

     that scenario is usually used when you want to use VM's a lot in general - kvm inside dsm (the VMM package) is not that good - or when you have way more cpu cores/threads than the baremetal dsm could handle, or you want to use m.2 nvme without the dsm-typical constraints (m.2 nvme as virtual ssd in a vm; that way dsm will use the nvme as data volume without any tweaking), or you want to use a hardware raid that is not usable in dsm (you can have a single virtual disk in dsm that in the hypervisor is on a raid set).

     easier install in a vm with dva's? not sure why, i use my dva1622 baremetal
  10. not sure if you want to use dsm baremetal or as a vm. as baremetal there might be two issues

      1. cpu count https://ark.intel.com/content/www/de/de/ark/products/64616/intel-xeon-processor-e52430-15m-cache-2-20-ghz-7-20-gts-intel-qpi.html 6 cores / 12 threads x2 = 12/24, and not all platforms of dsm support such high counts. i'd suggest ds3622 or ds3617 (sa6400 might also be a thing when available, it has kernel 5 and seems to perform better on the same hardware)

      2. P4xx - hardware raid is not supported in general, dsm is built around having single disks; afaik the P420 can be switched into such a mode. most dsm 7.1 loaders will have a hpsa.ko driver, but be aware that it might not work - lots of trouble was seen with it in dsm 6.2 and it never really worked. so try it in single disk mode, and if it does not work, replace it. i'd suggest replacing it with a lsi controller in IT mode, same mechanical connections to the backplane and <100$; lsi 9211-8i is the name of the "original" but any LSI SAS2008 or 2108 should work nicely with ds3622 or ds3617

      or switch to a hypervisor and use dsm as a vm; that removes the troubles above, as you can add cpu's at will to the vm and storage can also be passed as a "single" disk, i.e. a virtual disk to the vm (raidX implemented on the hypervisor level, so you don't need to care about raid in dsm, only use a single disk per volume)
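      a quick way to see after boot whether the controller was actually picked up (a sketch, assuming lspci/lsmod are available as on typical dsm installs):

          lsmod | grep hpsa                    # is the hpsa driver loaded at all
          lspci -k | grep -i -A 2 'raid\|sas'  # which kernel driver bound to the controller
          cat /proc/mdstat                     # do the disks show up in raid sets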
  11. if the driver in arpl is not that one, it's still possible to replace the i915.ko file with the one from that thread. i've seen setups like an old gaming board/cpu going into a nas, and that's often a high power consumption cpu (no one cares about that in a gaming setup, you just use a potent cooler and you are done).

      did you look at the list of disks (the graphic only shows the original state of the housing with its possible disks)? if the disks are listed in hdd you should be able to add them. depending on the configuration of the hardware it might be needed to renew the device tree files to get the disks; going into the loader and using build loader might be needed. there is also an option to show disks detected in arpl (in advanced options?) - if they are not detected at this point it's not going to work in dsm
  12. if the nvme is cache then it's just read cache, no use in case of backup. afair all 2.5" hdd's of 3TB and above are shingled recording - not much use if you want to write bigger amounts of data at 1G speed (the cmr buffer area will be full at some point and the disk falls back to native smr speed aka ~25-30MB/s). just wanting to make you aware of that limit; if you did not read about that and switch to other hardware last minute (often the old hardware is used for backup and the new hardware as main, and your old microserver can only take 2nd/3rd gen cpu's) you might run into trouble. also the 4th gen minimum is about basic functioning like booting into the OS; face recognition in photo station or AI stuff might need a newer gen, can't remember exactly but i think it was 6th gen as minimum for that
  13. low default count on cams, and you can't buy a license for use with xpenology https://xpenology.com/forum/topic/13597-tutorial-4-camera-on-surveillance-station-legally/ or use a DVA1622/3221 unit, they come with an 8 cam license by default but need newer cpu's (having the MOVBE feature, like intel 4th gen cpu's). install yes, and as it's a 7th gen cpu the limit mentioned above about movbe should be no issue; dva1622 would be the way to go with that hardware.

      in case of backup i can't answer as i'm not using syno's active backup, but as it is the same as on original units you can read about that anywhere, not just here in the forum. in general the speed will be limited to what a 1G nic can do, and with that kind of pcie-less hardware there will be no option for a 2.5G/5G or 10G nic or more disks (more disks usually means more speed - at least to a certain degree; i have 4 x 16TB and can use ~450MB/s, and before that 12 x 4TB only had slightly more).

      afair dsm as xpenology tends to run the cpu at more than the idle load other systems would imply; you might read further about that if it's important https://xpenology.com/forum/topic/19846-cpu-frequency-scaling-for-ds918/
  14. 3615/17 don't come with the i915 driver needed for intel qsv, and there are parts missing in the kernel so you can't just compile additional drivers (like it's done for network and storage drivers). any platform with built-in i915 driver needs at least a haswell (4th gen) cpu, and the microserver gen8 has 2nd/3rd gen cpu's.

      the only ray of light might be sa6400 (the only unit with kernel 5.x, epyc based), where Jim Ma added an i915 driver by also adding the things missing for i915 in the kernel config https://xpenology.com/forum/topic/68067-develop-and-refine-sa3600broadwellnk-and-sa6400epyc7002-thread/ 1st, i can't say for sure if sa6400 will need 4th gen, it's still a beta release (i did not test that and i can't remember seeing anyone writing about it). 2nd, in theory someone could apply the same technique to other future 5.x based units (dsm 7.2 will keep all kernels as they are now, so we talk about new units with a new platform in 2023 and dsm 7.3 in 2024). so in 2024 there might be a ds3622 based on dsm 7.3 and kernel 5.x that gets the same treatment as sa6400 got now, but that's a lot of time and if's (we don't know if dsm 7.3 will bring kernel 5.x to most units as it was done with 7.0, where most units changed from 3.x to 4.x - also 7.3 is a placeholder, it might be dsm 8.0 next year instead).

      beside trying jim ma's sa6400 arpl loader on "older" cpu's there is nothing that can be done atm - at least with dsm/xpenology. you could switch to open media vault and have i915 working immediately; it's just creating an (additional) boot disk (or usb) and replacing the usb xpenology loader with that. the data raid partitions will be recognized and should be usable ootb (there might be a problem with volume naming but that can be solved) https://xpenology.com/forum/topic/42793-hp-gen8-dsm-623-25426-update-3-failed/#comment-200475 you could remove the OMV boot media at any time and re-add the xpenology usb loader and have DSM back; the dsm partitions (1st and 2nd on every disk) will not be touched, so dsm with its config will stay put until it's used again
  15. the real 10th gen driver was made for 7.x here https://xpenology.com/forum/topic/59909-i915ko-backported-driver-for-intel-10th-gen-ds918-ver-701-up3/ i never checked, but i guess that driver is in arpl as 10th gen support (at least it would be the best option); the old method of just changing the id of a 9th gen unit into a 10th gen has limits (see the link below about what was done in 6.2.3), while the newer driver has real backported code for 10th gen. arpl comes with a gui/menu and there is a section for addons.

      yes, syno's own ootb driver in the platforms apollolake (918+) and geminilake (920+/dva1622) can do up to 9th gen (when the needed firmware files are added). the i915 driver in dsm 7.1 is still the same as in 6.2.3 (synology backported a newer i915 driver in 2020 for the 920+ and has not released any intel qsv hardware since; dva1622 is also just geminilake). i documented the device numbers here https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/ "iGPU device ID's supported by synology's i915 driver". you can check the gpu id of an intel cpu on intel ark (just google "ark intel" and the cpu), or locally as in the sketch below.

      for about 10 years all "BIOS" in new hardware have been uefi with an added module named Compatibility Support Module aka "CSM" for classic BIOS support https://en.wikipedia.org/wiki/UEFI#CSM_booting csm was only needed with dsm 6.2.3 (3615/3617). you might have used uefi already: for using CSM you need to enable that "option" in the bios but also need to boot from the "non-uefi" (or legacy) usb boot device; when using the uefi usb boot device, csm is not used (as it is just an option). some systems of the last 2-3 years might not even have a CSM module anymore (seen on some NUC's from intel).

      if you have a real need like that, don't bother about my comment; lots of people put in way too much cpu power just in case.

      no real difference, same kernel/driver/software (the kernel config for 920+/geminilake is slightly different). the main difference is in sata disk handling: the 920+ uses device tree, and that made some issues in the beginning with the loaders. as the 920+ will get 2 years longer updates from synology it would be the starting point; if there are problems that can't be solved or it gets too frustrating, 918+ is an option that's easy to try https://en.wikipedia.org/wiki/Devicetree#Linux https://xpenology.com/forum/topic/61634-dsm-7x-loaders-and-platforms/

      support the coders of the loaders, they do the heavy lifting https://github.com/pocopico https://github.com/fbelavenuto https://github.com/jim3ma (he is doing the sa6400 stuff and the i915 driver for sa6400)
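      a local alternative to looking the cpu up on ark (a sketch, assuming lspci is present, as on typical linux systems); the igpu's pci device id can then be compared against the supported-id list from the thread above:

          # list vga/display devices with their vendor:device ids
          lspci -nn | grep -i 'vga\|display'
          # the igpu id is the second hex number in the bracket pair,
          # e.g. a made-up line ending in [8086:3e92] -> device id 0x3e92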
  16. yes, there is an extended driver (for kernel 4.x based units like 918/920/dva1622) that can do up to 10th gen.

      you can have more disks than the default number of disks; i use a dva1622 installation with 5 disks (also a device tree model like the 920). the loader should factor that in when building the loader.

      you would need to activate the 10th gen i915 driver addon in the arpl loader, that should make the 10th gen gpu work.

      bios would be seen as exotic, uefi is the normal way.

      should be working; you might consider a smaller cpu with lower power consumption (less heat equals less noise for cooling and also less money per year).

      you could also use 918+ if you want; with dsm 7.1 and 7.2 (still beta atm) there is no real difference as they use the same kernel, and no difference with plex. 918+ might not get dsm 7.3 in the future, but if you are going for sa6400 anyway that might not make any difference for now. so if you have any trouble with the 920 then try the 918
  17. SA6400 can do up to 24 threads https://xpenology.com/forum/topic/68067-develop-and-refine-sa3600broadwellnk-and-sa6400epyc7002-thread/?do=findComment&comment=441436

      11600k https://ark.intel.com/content/www/us/en/ark/products/212275/intel-core-i511600k-processor-12m-cache-up-to-4-90-ghz.html GPU ID -> 0x4C8A https://xpenology.com/forum/topic/68067-develop-and-refine-sa3600broadwellnk-and-sa6400epyc7002-thread/?do=findComment&comment=440411 or jim ma's blog (the table with gpu id's is not in chinese, and it's possible to use google translate to read most of what's there) https://blog.jim.plus/blog/post/jim/synology-sa6400-with-i915 4c8a is in the list (rocket lake).

      as plex does not need anything else beside the i915 driver there should be no limitations with that (synology's own videostation would need a synology codec pack that has protections that need to be circumvented).

      8 disks is even below the 12 default disks the original hardware has, and even >>64GB is a supported scenario on the original (i don't recall any limit on ram size coming from synology's kernel config, so no limit imho; the original comes with 1x32GB and supports extending with 32GB and 64GB DIMM's with up to 16 DIMM slots, so up to 1TB RAM might be synology's official support for the SA6400)
  18. for 12th gen? maybe later https://xpenology.com/forum/topic/68067-develop-and-refine-sa3600broadwellnk-and-sa6400epyc7002-thread/?do=findComment&comment=440534 for now it's up to 11th gen
  19. rp refers to the redpill mechanism all recent loaders are built on; i guess it's tcrp you had in mind (the tinycore aka tc based rp loader solution). you could try this one https://github.com/PeterSuh-Q3/tinycore-redpill/releases tcrp with addons and a menu. or maybe don't change anything with the loader and just add it "the old way", not that much to do, more or less just running a script once https://xpenology.com/forum/topic/13030-dsm-5x6x7x-cpu-name-cores-infomation-change-tool/page/15/
  20. up to 6.2.3 update3 with jun's loader (6.2.3 = 25426); in theory redpill loaders should be able to use 6.2.4 up to the recent update level. there were changes in the kernel that did not work with jun's loader anymore, preventing parts of his code from loading. with that old cpu only 3615 or 3617 are possible, not much difference between these two https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/

      it's different than that: your bios is uefi, and you can enable CSM mode for older BIOS compatibility; if you disable CSM it will be uefi only. CSM is an option, so even when CSM is enabled you could still boot in UEFI mode - in that scenario you usually see the usb boot device two times in the boot selection, as uefi and "normal" (without uefi in the name). this gets important if you have to use CSM, as was the case with dsm 6.2.x for 3615/3617 (-> link above).

      you could use jun's loader for up to 6.2.3.u3 (security updates up to ~12/2020) or the rp loader for 6.2.4 u6 (last update 5/2022). i'd suggest trying the rp loader and 6.2.4 as it comes with way more security fixes than 6.2.3. if you plan for 6.2.3 then you should at least download dsm, the update and some spk files now: afaik synology plans to remove all older files 6.2.3 and downward (including old packages) on 01.05.2023. also 6.2.x has EOL 6/2023, no security updates for 6.2.4 after that, and with 6.2.4 you might also not be able to update some of the packages to the latest version
  21. you could also try the arpl or arc loader with 920+, they come with mlx4/5 drivers for 920+ https://github.com/fbelavenuto/arpl-modules/tree/main/geminilake-4.4.180 https://github.com/AuxXxilium/arc-modules/tree/main/geminilake-4.4.180
  22. nothing special, just a 9th gen cpu, B365 chipset and onboard sata (ahci). what hardware did you try?
  23. it would probably be interesting and accessible to a lot more people if it were included here https://synocommunity.com
  24. both are kernel 4; denverton has a 16 thread and geminilake an 8 thread kernel limit. similar to HT and the core/thread limits, it might be worth a try to disable the E cores in the bios. not sure how kernel 5.10 deals with that by default, but it might be better than kernel 4. maybe a hypervisor can help to only assign P cores to a dsm vm, see the sketch below
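      a way to see which cpu ids are P and which are E cores before pinning a vm (a sketch, assuming a recent hybrid-aware kernel on the hypervisor host - dsm's kernel 4 does not expose this):

          # on alder lake and newer with a recent kernel, the hybrid pmu
          # devices list the cpu ids per core type, e.g. 0-15 and 16-23
          cat /sys/devices/cpu_core/cpus   # P cores
          cat /sys/devices/cpu_atom/cpus   # E cores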
  25. the kernel config looks like this

          CONFIG_NR_CPUS_RANGE_BEGIN=2
          CONFIG_NR_CPUS_RANGE_END=512
          CONFIG_NR_CPUS_DEFAULT=64
          CONFIG_NR_CPUS=24

      the "official" table from synology can be deceiving sometimes, as the value will be about the max. config in this platform (one kernel config per platform); as there is only the sa6400 in epyc7002 it is 24 (at least for now - if they release something more potent in that platform the value can go up). but i guess there might be performance inconsistencies with syno's old kernels/configs when it comes to E and P cores of newer intel cpu's
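      checking what the running system actually uses (a sketch; /proc/config.gz only exists if the kernel was built with ikconfig support, which not all dsm kernels are):

          nproc                                             # threads the kernel brought up
          grep -c ^processor /proc/cpuinfo                  # same info from cpuinfo
          zcat /proc/config.gz 2>/dev/null | grep NR_CPUS   # compiled-in limit, if exposed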