XPEnology Community

Everything posted by IG-88

  1. On an original system the USB vid/pid would always be the same (f400/f400); not sure if they check that when upgrading to a different system, since it is the same by default on all original systems. We can start guessing and try to check and test: in /etc/synoinfo.conf there is an entry "dsm_installtion_id" that looks like it could be the culprit. When the new loader re-creates (changes) synoinfo.conf from its loader config, that value might get lost. In the best case it might solve the problem to just place the old value back after installing and reboot, but it could also be a value that is created as a fingerprint on every boot, and as we don't know how it's created it would end up different on a system with a changed loader (a hedged sketch of preserving it is below).
     Also a reminder: there are things like "synowedjat" https://xpenology.com/forum/topic/68080-synology-backdoor/ so Synology is identifying and tracking every system individually, and that needs fingerprinting and IDs; the SN and MACs are for sure not the only thing. @AuxXxilium or @Peter Suh might know more about that individual ID? I guess that kind of scenario is not part of the loaders (migration from a different loader and reading stuff from the old installation; I'd guess they look for their own loader config file on the boot media and what the user inputs there).
     That rabbit hole can be pretty deep in the worst case. For simple use it might be easier to just document it and know that when using Hyper Backup and replication scenarios the ID of systems will change and these things will have to start from scratch.
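     A minimal sketch of the save/restore idea, assuming the key really is named "dsm_installtion_id" and is static rather than regenerated on boot (both unverified):

       # before re-installing with the new loader, save the current line
       grep '^dsm_installtion_id' /etc/synoinfo.conf > /root/dsm_id.bak
       # after the migration: compare, and if it changed put the old line back, then reboot
       grep '^dsm_installtion_id' /etc/synoinfo.conf
       sed -i "s|^dsm_installtion_id=.*|$(cat /root/dsm_id.bak)|" /etc/synoinfo.conf

     Then re-test whether Hyper Backup / replication still treat it as the same system.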
  2. As my information on 7.1 long-term support was older I looked it up, and it looks like there is a catch to that (at least now): only older units (presumably kernel 3.10, mostly 2015 units and older) will get one more year, up to 6/2025, namely the DS3615xs. The other commonly used types like 3617 or 3622 will lose update support 6/2024 and will have to be updated to 7.2 (if security updates are important). https://kb.synology.com/en-global/WP/Synology_Security_White_Paper/3 Looks like I will have to upgrade my DVA1622 to 7.2 in the next 3 months.
     There was also a "Synology E10G18-T1" with that performance problem, so it might stretch to more NICs, or it's something different; but if exchanging the driver on the already running system clears the speed problem then it would clearly be that issue.
     Edit: I looked up the Syno NIC. It's a "newer" model, not Tehuti based; it's Marvell Aquantia (aka AQtion Aquantia AQC) and the driver module to look for is "atlantic.ko". Sadly "modinfo" is not part of DSM, but you would see driver-related information when the driver is loaded (dmesg | grep atlantic). Also a sign of an original driver would be that it's signed by Synology. You can check this by looking in raw/hex at the end of the driver: "xxd /lib/modules/8021q.ko" for one that usually is still original, and "xxd /lib/modules/atlantic.ko"; you will see "Module signature appended" at the very end of the driver when it's a driver provided by Synology (quick check below).
     Forgot to mention it, but my test system was using Arc loader. As most loaders have their own driver set, it's worth checking the driver version and comparing to the original driver from Synology (if there is one). In some cases using Synology's drivers might not work, as they might not cover other OEM versions of NICs; seen that with ixgbe drivers on 6.2 (but also Tehuti and other Intel drivers and the Realtek 2.5G NIC driver). So "downgrading" the driver to Syno's original might not work in some cases and might result in a non-working NIC: it might find the device, but if the PHY chip is not supported then it will not show any connection, or the driver might not load at all. So in case of experiments like that it's handy to keep a 2nd NIC inside that can be used if the 10G NIC fails to work after changing the driver.
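     The two checks in one place (paths as on a stock DSM install; a sketch, untested across versions):

       # is the atlantic driver loaded, and what does it report?
       dmesg | grep -i atlantic
       # synology-signed modules end with the plain-text marker "Module signature appended"
       tail -c 100 /lib/modules/atlantic.ko | xxd | tail -n 4
       tail -c 100 /lib/modules/8021q.ko | xxd | tail -n 4    # usually still original, for comparison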
  3. 6 MBit, like 1000 times less? That's like not working at all. I have my main system with 7.1.1 and a backup system with 7.2.1; the latter does 250 MByte/s with a bunch of older disks and Hyper Backup rsync (not tested with iperf), so from 7.1.1 to 7.2.1 is no problem. The main 7.1.1 from a Win11 system makes 1 GByte/s as long as it fills up RAM and then drops to 650 MByte/s; can't ask for more (good NVMe SSD on Win11 and a 20GB single file). Just tested it for 7.2.1 from Win11 and it's the same as with 7.1.1: 1 GByte/s as long as it's RAM and then ~600 MByte/s (no iperf needed here if it looks that good already). (Win11 and 7.1 use an SFP+ Mellanox, and 7.2 an RJ-45 Tehuti TN9210 based card, which might be similar to your Syno card.)
     There are some base differences even between systems having the same DSM version: 3615 is kernel 3.10 but 3617 and 3622 are both 4.4, so it looks like it's not related to that, as 3615 and 3617/22 both seem to perform badly. But anyway, try Arc loader arc-c (SA6400) with 7.2.1; that's kernel 5.x and might be different.
     Any switch involved? There might be differences between connecting directly and through a switch in how speed is negotiated. What speed does the system show (ethtool eth0 | grep Speed)? A quick way to separate link problems from disk/SMB problems is below.
     When it's only 6 Mbit/s it's interesting, but if you want to save time consider just keeping it on 7.1: it's an LTS version and afair will get the same support as 7.2, and if there are no features you need from 7.2 you don't miss anything by just using 7.1.
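     To rule out a bad negotiation before blaming disks or SMB (iperf3 is not part of DSM, it would have to come from a package or entware; a sketch):

       # what did the link actually negotiate?
       ethtool eth0 | grep -i speed
       # raw tcp throughput, independent of disks and smb
       iperf3 -s                  # on the nas
       iperf3 -c <nas-ip> -t 30   # on the client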
  4. No, there is no such thing that tricks DSM (XPEnology or original) into accepting a "normal" SATA (or SAS) enclosure as one of their original expansion units. Afaik there is some tinkering in specific drivers' kernel code and firmware checks for the external unit involved (introduced ~2012/2013; before that, any eSATA unit that presented every disk as a single disk worked).
     Also worth mentioning: technically it's a simple SATA port multiplier, and you use the amount of disks in that external unit through that one 6 Gbit SATA connection, all drives in the external unit sharing this bandwidth. As long as you just have to handle a 1 Gbit NIC you won't see much difference, but if you want to max out what, let's say, 4 + 4 10TB drives can do and use a 10G NIC, you will see some differences, and RAID rebuild speed might also suffer in eSATA connection scenarios.
     Also a general problem with that kind of scenario is reliability: when accidentally cutting power to the external unit there will be massive RAID problems afterwards, usually resulting in loss of the RAID volume, and when manually forcing repairs it's about how much data is lost and how to know what data (files) is involved. I don't know if Synology has any code in place to "soften" that for their own external units (like caching to RAM or the system partition when sensing that "loss" by a heartbeat from the external unit, and bringing the RAID to a read-only mode to keep the mdadm RAID in working condition).
     As you can use up to 24 internal drives with XPEnology and only your hardware is the limit (like having room for disks internally and enough SATA ports), there is only limited need for even connecting drives externally, and some people doing this have seen RAID problems. If you don't have a backup of your main NAS then don't do that kind of stuff; it's way better to sink some money into hardware than to learn all about LVM and mdadm data recovery to make things work again (an external company for recovery is most often out of the question because of pricing). Maybe a scenario with a RAID1 with one internal and one external disk might be a carefree thing, but anything that goes beyond the used RAID level's spec for losing disks is very dangerous and not suggested.
     And to bridge to the answer from above: in theory USB and eSATA external drives are handled the same, so it should be possible to configure eSATA ports the same way as USB ports to work as internal ports. As eSATA is old technology and mostly in the way when it comes to XPEnology config files, it's most often set to 0 and not in use. I used one eSATA port as an internal port years back with DSM 6.x (a sketch of that old config-file method is below). But as an ootb solution, external USB as internal drives is the common thing now, and with 5 or 10 Gbit USB is just as capable as eSATA for a single disk (and you will have a good amount of USB ports on most systems, where eSATA, if there is any, is usually just one port). If you want to use USB drives as "internal" drives you can look here (Arc loader wiki): https://github.com/AuxXxilium/AuxXxilium/wiki/Arc:-Choose-a-Model-|-Platform - it's listed specifically that it's usable that way.
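     For reference, the old 6.x-era manual way: the port classes are hex bitmasks in /etc/synoinfo.conf, one bit per port. Values below are examples for 8 internal ports plus 1 eSATA port; DT-based models don't use these masks, and a loader may overwrite manual edits when its config is re-run:

       internalportcfg="0xff"    # bits 0-7 = 8 internal sata ports
       esataportcfg="0x100"      # bit 8 = the esata port
       usbportcfg="0xe00"        # bits 9-11 = usb ports
       # to make the esata port internal instead, move its bit over:
       internalportcfg="0x1ff"
       esataportcfg="0x0"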
  5. No, all the DSM versions are the same and are just compiled with different kernel options and different settings from Syno's config files (a little simplified). The kernel in x64 based units comes with different settings but will run anyway, even if an AMD based unit like SA6400 is used with an Intel CPU; it's not that much different. The loaders pretty much do exactly that: Synology intends a model-based DSM version to run only on that hardware, and the loader circumvents that by "emulating" or spoofing that hardware and making it run on whatever hardware you have, for more or less most x64 based models (it does not make much sense to support a 2-slot unit when there is a bigger one with the same hardware and features).
     As for changing models, it's pretty much whatever you can see in the loader; you can change from one to another. It's even an "official" Synology DSM feature, and you will find lists of hardware you can migrate between. The use case is like: your 918+ is out of support or the hardware is dead, and you buy a new or bigger unit like a DS923+. You can just insert the disks of your old unit into the new hardware, and the loader in that unit (USB DOM) will recognize the old, different system on the disks, and the installation (config files) will be converted to the new DSM installation that ends up on the disks when "migrating" to the new hardware. That's called drive or HDD migration: https://kb.synology.com/en-global/DSM/tutorial/How_to_migrate_between_Synology_NAS_DSM_6_0_HDD
  6. It's about DSM being able to "convert" its settings from an older DSM version to a more recent one without additional steps (recommended by Synology). The loader is not much involved here, as long as it can make DSM run (the redpill kernel module on all loaders we use for 6.2.4 and any 7.x) and as long as the loader has enough drivers to support the hardware in question ... (AHCI SATA is part of Syno's kernel, so any AHCI compatible controller will work; in most cases it's about network driver support).
     As far as I have seen, yes, but I use DVA1622 for now (as SA6400 is pretty new). SA6400 might have better Intel QSV support as it's kernel 5.x based and its i915 driver supports newer hardware ootb than what comes with kernel 4.x. Try arc or arc-c for SA6400: https://github.com/AuxXxilium/AuxXxilium/wiki/Arc:-Choose-a-Model-|-Platform#epyc7002---dt The only downside might be not being able to use VMM with an Intel CPU (I guess that; not tested it myself and might never do it, as if I really need VMs to run I would choose a "real hypervisor" system like Proxmox (ESXi by now seems to be out of the race because of Broadcom ditching the free version)).
     Why choose this model? More free surveillance cams by default, native Intel QSV built in by Synology as the original system uses Gemini Lake (with the right CPU that is supported ootb it's also easy to use Jellyfin or Plex), and "AI" stuff ootb in Surveillance Station. In the end I do not use any of that, but they are still options I "could" use (lucky me, I did not have to spend all the money Synology asks for a real 1622). And with the loaders you can switch models any time and DSM will be able to convert everything (usually), as all is x64 based (with some specifics about SAS HBAs or CPU dependent features like Intel QSV and AMD VMM support).
     arpl is on hold for at least a year (and might never come back); arc is well supported, has lots of features and is easy to use as it's menu based (and the wiki is helpful too; people should use it more often). It might get confusing with all the options, but there is the wiki, YouTube, this forum and Discord ...
     Nothing different in the result: as long as the loader is configured the right way (model) you will not see much difference, as DSM is doing your jobs, and some things like handling of special problems might not differ between maintained loaders, as it's open source and new knowledge about how to fix quirks is shared and might find its way into your specific loader over time (but in most cases you find the loader that works for you now and look for new stuff two years later, or even later).
  7. Might add some additional thinking about the 8-thread limit in 918/920: 6c/12t usually results in half real cores and half HT virtual cores, and as an HT "virtual" core has about 25% of a real core, what you get in an 8-thread-limit scenario is the performance of ~5 cores (4 real + 4 x 25% of a real core); disabling HT in BIOS will give 6-core performance (arithmetic spelled out below). So without a different model having a higher thread limit baked into the kernel, you will see no gain from the better CPU, and often those models don't have Intel Quick Sync support (i915 driver). DVA1622 also has an 8-thread limit; not sure about others, but Arc loader as SA6400 has an i915 driver. Usually i915 can't easily be added by just compiling some modules for the kernel, as most Synology kernels miss the part that needs to be in the kernel to load i915 as a module. Arc seems to have i915 drivers for SA6400, so I guess you might be able to use Arc as SA6400 with your CPU.
     If you don't need Intel QSV then there are a lot of options for the model, and 3622 is the most common as it's the successor of 3615/3617.
     Can't say much about this as I never did it that way (I used Syno's Migration Assistant as I had the old and new system in parallel). I don't think 6.2.4 would be needed; most people here went 6.2.3 to 7.0 or even 7.1 or 7.2. 6.2.4 never was attractive in any way, as it already needed the redpill loader to work, and most people willing to risk some new loader would have used 7.0, as it offered some new stuff for the risk of using a new loader. The update report section is a source of information for that; this entry is 6.2.3 to 7.2.1: https://xpenology.com/forum/topic/69680-dsm-721-69057/?do=findComment&comment=451137
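     The core-equivalent arithmetic spelled out (the ~25% HT contribution is a rule of thumb, not a measured number):

       6c/12t CPU, 8-thread limit, HT on:
         8 threads = 4 real cores + 4 HT siblings
         ~ 4 + 4 x 0.25 = ~5 core equivalents
       HT off in BIOS:
         6 threads, all real cores, within the 8-thread limit
         = ~6 core equivalents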
  8. I had a (cheap) JMB585 m.2 and it never worked stably. I also had concerns about the flimsy PCB (it might crack or parts might get damaged when pressing too hard, like inserting cables when already mounted in the m.2 slot; also the force of 5 or 6 SATA cables on that flimsy thin board can be a problem, and that needs some adjustment too, so as to not run into problems when working inside the system after placing the m.2 adapter). I had more success with an m.2 cable-based adapter that terminated in a PCIe 4x slot, but even there was no universal solution, as one with a slightly longer cable did not work stably with one specific controller. I ended up only using these m.2 contraptions for a 10G NIC (or not at all) and spread the needed SATA ports over the PCIe 4x and 1x slots of the m-ATX board I use (PCIe 1x slot with JMB582). A few bucks more for the controllers is better than a shredded btrfs volume (which is often hopeless beyond repair in a situation like this; learned that the hard way, but I also do backups of my NAS ...).
     Most normal cases can't hold more than 12-14 3.5" HDDs, and that often can be achieved with an m-ATX or ATX board and additional AHCI adapters for small money (like 6 x onboard, 5-6 SATA from one JMB585 or ASM1166 in a 4x slot, and one or two 2-port adapters in 1x slots; the 16x slot or one 4x slot might already be used for a 10G NIC in my scenarios. If a 16x and a 4x slot are free, then two x2/x4 cards like JMB585 or ASM1166 can add 10-12 SATA disks to the 6 SATA onboard ...)
  9. I use arpl with a DVA1622 and 6 disks (the original has 2) and Arc with 3622 and 13 disks (original 12); no problems. In the graphics you see a box with the original amount of slots, but in use there is just the "normal" old 26-disk limit; you will see all disks in the HDD/SSD listing of disks.
     You might want to change from 918+ to something newer, as 918+ might lose its support and might not get updates as long as newer models (the guarantee is about 5 years; anything above that depends). Depending on the features you need (like Intel Quick Sync Video) there might be some limits on the models you can choose in the loader. There is also a model-specific CPU thread limit in the kernel, but as you use a low-spec CPU for your new system that won't be much of a problem. The only thing with newer Intel CPUs might be that the old 4.x kernel in its original form does not support 12th gen Intel QSV, and it depends on the loader how far that support goes, as it needs extra drivers from the loader; so you might need to read up on that in the loader's docs or here in the forum. (I use an older Intel CPU with the DVA1622 that works with Syno's original i915 driver, so I'm not that much up to date on what the best solution is now. DVA1622 comes with a nice feature set ootb when the i915 supports the CPU, but there was also some interesting stuff going on with SA6400 and its 5.x kernel with extended i915 drivers, initially here https://github.com/jim3ma but I guess some of it might have found its way into other loaders by now.)
     In general it does not matter whether the original unit has an AMD or Intel CPU for just the basic NAS stuff; only when using the KVM-based VMM from Synology or specific things like Intel QSV does it become important (as the kernels from Synology are tailored to CPUs to some degree, and the most obvious part is the thread limit).
     I'd suggest using a different USB thumb drive and a single empty disk (maybe two, to connect to the last SATA port to see how far it gets) to do some tests. You can keep the original USB and the disks you use now offline (just disconnect the disks), play with the loader's models until you find your sweet spot, and then use that configured loader to upgrade to the new model and DSM version (7.1 is still fine, and as it's an LTS version it will get updates at least as long as 7.2). When creating a system from scratch with empty disks, the partition layout for system and swap will be different with 7.1/7.2, but upgrading from 6.2 and keeping the older, smaller partitions is supported by Synology, so there is no real need to start from scratch for 7.x. https://kb.synology.com/en-global/DSM/tutorial/What_kind_of_CPU_does_my_NAS_have
  10. You just forgot about the fun we had, as it's been quite some time since you updated your 6.2.3: 6.2.2 with its different PCIe kernel options, or 6.2.3-25xxx (and everything after that up to 6.2.4), just to name a few.
     arc-a and arc-c are about a fixed DSM type like RS4021xs+ and SA6400, more automated, and arc-c even has a custom kernel option to make more CPU cores usable (also it's the one DSM x64 version that has kernel 5.x). Just try the normal arc (and read about the limitations and special things of models in his wiki, https://github.com/AuxXxilium/AuxXxilium/wiki/Arc:-Choose-a-Model-|-Platform). But if you want to use 6.2.3 and jun's loader, just try it; disks should be no problem with AHCI controllers, and if the last extra gets your NIC working then it should be fine. You should never expose such an old, unpatched system directly to the internet, as 6.2.3 is missing a lot of security fixes by now.
  11. https://github.com/AuxXxilium/AuxXxilium/wiki https://auxxxilium.tech/redpill/ And he also has a lot of stuff on YouTube: https://www.youtube.com/@AuxXxiliumTech
  12. There are links in the 1st post of this thread, also for 918+, and one of the links is still working. As it has an i219 (Intel e1000e driver) and an i211 (Intel igb driver), at least one should work (if you really want to torture yourself with that old 6.2.3 stuff). ASM1166 and JMB585 are both AHCI compatible and will work even without any extra drivers, as AHCI support is a fixed part of Synology's kernel (which is always used with jun's or newer loaders).
     I'd suggest using a newer loader like Arc: it supports DSM 7.1 (LTS) or 7.2, has plenty of drivers and loads of fixes for special conditions. Arc gets a lot of effort and is well maintained. https://github.com/AuxXxilium/AuxXxilium/wiki
  13. Arc loader's wiki can also be a good source of information: https://github.com/AuxXxilium/AuxXxilium/wiki (the notice and limitations pages linked from the start page) https://github.com/AuxXxilium/AuxXxilium/wiki/Arc:-Notice-&-Workarounds https://github.com/AuxXxilium/AuxXxilium/wiki/Arc:-SataPortMap-&-SataRemap
  14. If the module is not loaded it can't detect the press of the power button. The loader should (at least when activating the acpid add-on) load that module. In theory an insmod /lib/modules/button.ko should fix that, and after that the shutdown might work as expected (quick check below). If you found something that's working for you, then I guess it's a solved problem.
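     The quick check, assuming the module path from above (the exact dmesg wording depends on the kernel):

       grep button /proc/modules          # already loaded?
       insmod /lib/modules/button.ko      # if not, load it
       dmesg | tail                       # should now mention the acpi power button device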
  15. Check what mdadm has to say about that: cat /proc/mdstat (example below). At least there should be something if disks from a RAID are missing.
     My (untested) assumption was that these things might not work on DT models; they might use different mechanisms now, like changes in the device tree? Maybe change to a non-DT model for your install, or, as suggested earlier, change to JMB585/582 cards to get the port count you are aiming for. You can try to dive deep into DT, Syno's kernel (the kernel source is available), the mods they have done and the shimming in the loader ... the less time-consuming, non-developer way is to just circumvent problems, and using the ASM1166 only as the last controller in the system is that way (or not using it at all, or, if you have not already bought disks, just lowering the needed port count with bigger disks; I reduced my system from 12 to 5 disks that way).
     That might have been the way with jun's loader, but the new loader (rp) works differently; you would need to edit a config file of the loader for that. The loader now has its own boot and menu system to do that and re-writes the config file when saving the loader config (if you change the resulting config file manually, your changes might get lost when re-running the loader config later, like having DT and needing to renew the device tree after changing hardware).
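     What to run (the md device name is an example; on DSM the first data volume is typically /dev/md2):

       cat /proc/mdstat            # degraded arrays show up like [UU_] with missing members
       mdadm --detail /dev/md2     # per-array view listing missing/failed disks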
  16. evdev might now be part of Syno's kernel with 7.x (like AHCI). You can check on button.ko with: cat /proc/modules | grep button That should show if it's loaded.
  17. sata_remap=9>0:0>9 https://xpenology.com/forum/topic/32867-sata-and-sas-config-commands-in-grubcfg-and-what-they-do/ https://gugucomputing.wordpress.com/2018/11/11/experiment-on-sata_args-in-grub-cfg/ Did you see this? https://xpenology.com/forum/topic/52094-how-to-config-sataportmap-sata_remap-and-diskidxmap/ (an example of where these settings end up is below)
     Also, a possible solution might be to just use one ASM1166 and place it as the last card; that way the 32 ports are no problem, like 6 x SATA onboard, 5 x SATA with JMB585, 6 x SATA with ASM1166. If needed, another JMB585 or JMB582 card can be placed in the middle to keep the ASM1166 last. JMB582 will be a PCIe 1x card, and sometimes all the good slots are already used, but even a 1x slot can be useful (afair there are even JMB585 cards with PCIe 1x, but using too many of the ports might result in some performance degradation).
     There is also a newer firmware from 11/2022 for the ASM1166 (at least newer than the one from Silverstone), but it does not fix the 32-port problem: https://winraid.level1techs.com/t/latest-firmware-for-asm1064-1166-sata-controllers/98543/18
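     For context, these settings end up on the kernel command line in the loader's grub.cfg (jun-style shown; rp-based loaders generate this from their own config menu). The values below are examples for the 6 + 5 + 6 layout above, not something to copy blindly:

       set sata_args='SataPortMap=656 DiskIdxMap=00060B sata_remap=9>0:0>9'
       # SataPortMap=656    -> 6 ports on ctrl 1, 5 on ctrl 2, 6 on ctrl 3
       # DiskIdxMap=00060B  -> controllers start at disk index 0, 6 and 11 (hex bytes)
       # sata_remap=9>0:0>9 -> swap ports 9 and 0, as in the post above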
  18. looks like it when reading here https://www.servethehome.com/hpe-proliant-microserver-gen10-plus-ultimate-customization-guide/2/ "... hi again, I can confirm i5-9400 works, didn’t try i5-9400f ... I can confirm the i7-9700f works. ..."
  19. Maybe a video helps: "How to install QNAP NAS on VMWare in pc." https://www.youtube.com/watch?v=VCElcA6CdBI There once was a qnap.zip with both images (img/vmdk) that also came with a PDF guide on how to use them. It's from 2020, but maybe it helps you; I'll attach it here: Anleitung-DE.7z
  20. My old documentation about the mod is this (mvsas kernel 3.10.105 patch backport):

diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c
index 7b7381d..83fa5f8 100644
--- a/drivers/scsi/mvsas/mv_init.c
+++ b/drivers/scsi/mvsas/mv_init.c
@@ -729,6 +729,15 @@ static struct pci_device_id mvs_pci_table[] = {
 		.class_mask	= 0,
 		.driver_data	= chip_9485,
 	},
+	{
+		.vendor		= PCI_VENDOR_ID_MARVELL_EXT,
+		.device		= 0x9485,
+		.subvendor	= PCI_ANY_ID,
+		.subdevice	= 0x9485,
+		.class		= 0,
+		.class_mask	= 0,
+		.driver_data	= chip_9485,
+	},
 	{ PCI_VDEVICE(OCZ, 0x1021), chip_9485}, /* OCZ RevoDrive3 */
 	{ PCI_VDEVICE(OCZ, 0x1022), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
 	{ PCI_VDEVICE(OCZ, 0x1040), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */

------------------------------------------------------------
Delete the new and old 9485 section and replace it with the following (as in kernel 5.x):

	{ PCI_VDEVICE(MARVELL_EXT, 0x9485), chip_9485 }, /* Marvell 9480/9485 (any vendor/model) */

     I might need to have a look at the old kernel source to figure out what "delete new and old 9485 section" was about; I will do this if needed (and there would also be the patched kernel in my old VM I was using to build the modules; a rough sketch of applying and rebuilding such a patch is below). There was also an old todo list with a point "new mvsas fix", but I can't remember what that was about. I also have 2 patches for adding the alx Killer E2400 and E2500 to the old kernels.

     Edit: the code in my mv_init.c from kernel 3.10 looks like this:

mv_init.c
...
	{ PCI_VDEVICE(MARVELL, 0x6485), chip_6485 },
	{ PCI_VDEVICE(MARVELL, 0x9480), chip_9480 },
	{ PCI_VDEVICE(MARVELL, 0x9180), chip_9180 },
	{ PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1300), chip_1300 },
	{ PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1320), chip_1320 },
	{ PCI_VDEVICE(ADAPTEC2, 0x0450), chip_6440 },
	{ PCI_VDEVICE(TTI, 0x2710), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2720), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2721), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2722), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2740), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2744), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2760), chip_9480 },
	{
		.vendor		= PCI_VENDOR_ID_MARVELL_EXT,
		.device		= 0x9480,
		.subvendor	= PCI_ANY_ID,
		.subdevice	= 0x9480,
		.class		= 0,
		.class_mask	= 0,
		.driver_data	= chip_9480,
	},
	{
		.vendor		= PCI_VENDOR_ID_MARVELL_EXT,
		.device		= 0x9445,
		.subvendor	= PCI_ANY_ID,
		.subdevice	= 0x9480,
		.class		= 0,
		.class_mask	= 0,
		.driver_data	= chip_9445,
	},
	{ PCI_VDEVICE(MARVELL_EXT, 0x9485), chip_9485 }, /* Marvell 9480/9485 (any vendor/model) */
	{ PCI_VDEVICE(OCZ, 0x1021), chip_9485}, /* OCZ RevoDrive3 */
	{ PCI_VDEVICE(OCZ, 0x1022), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1040), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1041), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1042), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1043), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1044), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1080), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1083), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1084), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ }	/* terminate list */
};

static struct pci_driver mvs_pci_driver = {
	.name		= DRV_NAME,
...
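     For reference, applying such a diff against the Synology GPL kernel source and rebuilding just the mvsas module would look roughly like this (a sketch: the patch file name is made up, "bromolow" is the config name Synology uses for the 3615xs tree, and the cross-toolchain prefix depends on the platform):

       cd linux-3.10.x
       patch -p1 < mvsas-9485-any-subdevice.patch      # the diff from above
       cp synoconfigs/bromolow .config                 # platform kernel config
       make ARCH=x86_64 oldconfig modules_prepare
       make ARCH=x86_64 M=drivers/scsi/mvsas modules   # builds mvsas.ko only
       # add CROSS_COMPILE=<toolchain-prefix> to the make calls when not building natively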
  21. YES, you are right, I did not read carefully enough. But the required driver is correct, and arpl seems to have that one. As arpl is unmaintained atm, I'd suggest using the ARC loader https://github.com/AuxXxilium/arc; that one comes with a wider selection of drivers, and to be sure I just checked for that driver and it is present there, so it should work with this loader.
  22. The HPE spec sheet lists it as "QLogic cLOM8214" https://www.hpe.com/psnow/doc/c04111574.pdf?jumpid=in_lit-psnow-getpdf The driver would be "qlcnic.ko", and that driver was part of the 3615/17 extra package. The PCI vendor and device IDs in the driver are 1077:8430, 1077:8030, 1077:8020. Check if the PCI IDs match, and check the log (dmesg) for "qlogic" or "qlcnic" (quick commands below). That driver was also part of jun's original extra.lzma, so most likely it's not about the driver being present. Afair 6.2 update support ended 6/2023, so you might be better off trying out a newer loader and DSM 7.x.
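     The two checks as commands (a sketch; if lspci is missing, the IDs can also be read from /sys/bus/pci/devices/*/vendor and .../device):

       lspci -nn | grep -i 1077           # qlogic's pci vendor id
       dmesg | grep -iE 'qlogic|qlcnic'   # did the driver bind, or complain?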
  23. Why do you think you could be "banned" from your own Google account when you copy/sync data?
     Also, XPEnology IS DSM: it uses the original kernel from DSM and the original install files and updates; the loader tries to make things look like an original system, and for that reason you can use and do most of the stuff the way it's done with an original Synology system (and can usually use the KB from Synology). There are some differences when it comes to things that need extra added licenses (like extra cams for Surveillance Station) or that enforce serial number and MAC validity (like QuickConnect), but in most situations you can use it ootb as if it were an original system. The extra checks for serial/MAC usually are enforced when extra services are used that cost Synology money, like QuickConnect (they need to have resources in the cloud/internet to provide them), and likely Synology has to cough up money to the MPEG LA (and similar license holders) when stuff like extra codecs or hardware en-/decoding is used (I guess they save money by only paying when that part is actually used/installed, and a lot of sold units never use that stuff, so it saves money that way; as long as a bunch of freeloaders are not "misusing" it and producing extra cost by that, so that is where the extra protection by SN/MAC is used).
  24. Did you ever hear of something like a search engine? https://www.startpage.com/do/dsearch?q=synology+sync+google+photos&cat=web&language=english And even Synology has something in its own knowledge base: https://kb.synology.com/vi-vn/DSM/tutorial/How_do_I_migrate_photos_from_Google_Photos