XPEnology Community

IG-88

Developer
  • Posts

    4,628
  • Joined

  • Last visited

  • Days Won

    210

Posts posted by IG-88

  1. 2 hours ago, giacomoleopardo said:
    cat /proc/modules | grep button

    produces zero result

     

     

    if the module is not loaded it can't detect the press of the power button

    the loader should (at least when activating the acpid add on) load that module

    in theory a

    insmod /lib/modules/button.ko 

    should fix that and after that the shutdown might work as expected
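a sketch of that check-and-load, assuming the module file sits at /lib/modules/button.ko (the path is an assumption, it may differ per loader):

```shell
# load the button module only if it is not already loaded
if ! grep -q '^button ' /proc/modules; then
    insmod /lib/modules/button.ko   # path is an assumption, adjust as needed
fi
# verify it is there now; acpid can only see the power button with this loaded
grep '^button ' /proc/modules
```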

     

    55 minutes ago, giacomoleopardo said:

    Another hint: I tried the latest AuxXxilium's ARC with DS3622xs+and the acpi service does work!

if you found something that's working for you then i guess it's a solved problem

  2. On 3/8/2024 at 10:06 PM, Kanst said:

    I try reconnection this RAID to another two 1166 controllers without any problems.
    Then insert in user_config SataPortMap=6666 and DiskIdxMap=00060c12 and after DSM recovering lost first 2 hdds from 6. 

    But RAID is not degrade!!!

    check what mdadm has to say about that

cat /proc/mdstat

    at least there should be something if disks from a raid are missing
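checking the raid state could look like this (md2 as the data array is an assumption, check /proc/mdstat for the real names):

```shell
# overview of all md arrays; a degraded array shows [U_] style output
# with an underscore for every missing disk
cat /proc/mdstat
# details for one array, e.g. the data volume
mdadm --detail /dev/md2
```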

     

     

    On 3/8/2024 at 10:06 PM, Kanst said:

    PS. sata_remap no work in any variants.

    my (untested) assumption was that these things might not work on DT models

    they might use different things now like changes in device tree?

maybe change to a non DT model for your install, or as suggested earlier change to jmb585/582 cards to get the port count you are aiming for

you can try to dive deep into DT, syno's kernel (kernel source is available), the mods they have done and the shimming in the loader ... the less time consuming and non-developer way is just to circumvent problems, and using the asm1166 only as the last controller in the system is that way (or not using it at all)

or if you have not already bought disks, just lower the needed port count with bigger disks (i reduced my system from 12 to 5 disks that way)

     

    On 3/8/2024 at 10:06 PM, Kanst said:

    Editing grub (pressing "e" in boot menu) unswer, that grub has no sata_remap command.

that might have been the way with jun's loader, but the new loader (rp) works differently; you would need to edit a config file of the loader for that. the loader now has its own boot and menu system to do that and re-writes the config file when saving the loader config (if you change the resulting config file manually, your changes might get lost when re-running the loader config later, like when having DT and needing to renew the device tree after changing hardware)

  3. On 3/5/2024 at 1:08 AM, Kanst said:

    What about a right sintacs?

    sata_remap=9>0:0>9

    https://xpenology.com/forum/topic/32867-sata-and-sas-config-commands-in-grubcfg-and-what-they-do/

    https://gugucomputing.wordpress.com/2018/11/11/experiment-on-sata_args-in-grub-cfg/

     

did you see this?

    https://xpenology.com/forum/topic/52094-how-to-config-sataportmap-sata_remap-and-diskidxmap/

     

also a possible solution might be to just use one asm1166 and place it as the last card, that way the 32 ports are no problem

like 6 x sata onboard, 5 x sata with jmb585, 6 x sata with asm1166

if needed, another jmb585 or jmb582 card can be placed in the middle to keep the asm1166 last; jmb582 will be a pcie 1x card, but sometimes all the good slots are already used and even a 1x slot can be useful (afair there are even jmb585 cards with pcie 1x, but using too many of the ports might result in some performance degradation)
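as a hypothetical example for exactly that layout (6 onboard + 5 jmb585 + 6 asm1166), the user_config values could look like this (the DiskIdxMap offsets are hex: 0x00, 0x06, 0x0b — adjust to your real controllers):

```
SataPortMap=656
DiskIdxMap=00060b
```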

     

there is also a newer firmware from 11/2022 for the asm1166 (at least newer than the one from silverstone) but it does not fix the 32 port problem

    https://winraid.level1techs.com/t/latest-firmware-for-asm1064-1166-sata-controllers/98543/18

     

  4. my old documentation about the mod is this (mvsas kernel 3.10.105 patch backport)

     

    diff --git a/drivers/scsi/mvsas/mv_init.c b/drivers/scsi/mvsas/mv_init.c
    index 7b7381d..83fa5f8 100644
    --- a/drivers/scsi/mvsas/mv_init.c
    +++ b/drivers/scsi/mvsas/mv_init.c
    @@ -729,6 +729,15 @@ static struct pci_device_id mvs_pci_table[] = {
             .class_mask    = 0,
             .driver_data    = chip_9485,
         },
    +    {
    +        .vendor        = PCI_VENDOR_ID_MARVELL_EXT,
    +        .device        = 0x9485,
    +        .subvendor    = PCI_ANY_ID,
    +        .subdevice    = 0x9485,
    +        .class        = 0,
    +        .class_mask    = 0,
    +        .driver_data    = chip_9485,
    +    },
         { PCI_VDEVICE(OCZ, 0x1021), chip_9485}, /* OCZ RevoDrive3 */
         { PCI_VDEVICE(OCZ, 0x1022), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
         { PCI_VDEVICE(OCZ, 0x1040), chip_9485}, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
    
    
    
    ------------------------------------------------------------
    
    delete new and old 9485 section and replace with the following (as from kernel 5.x)
    
    { PCI_VDEVICE(MARVELL_EXT, 0x9485), chip_9485 }, /* Marvell 9480/9485 (any vendor/model) */

     

i might need to have a look at the old kernel source to figure out what "delete new and old 9485 section" was about, i will do this if needed

    (and there would also be the patched kernel in my old vm i was using to build the modules)

there was also an old todo list having a point "new mvsas fix" but i can't remember what that was about

     

    i also have 2 patches for adding alx killer 2400 and 2500 to the old kernels

     

    edit:

    code in my mv_init.c from kernel 3.10 looks like this

    mv_init.c
    ...
	{ PCI_VDEVICE(MARVELL, 0x6485), chip_6485 },
	{ PCI_VDEVICE(MARVELL, 0x9480), chip_9480 },
	{ PCI_VDEVICE(MARVELL, 0x9180), chip_9180 },
	{ PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1300), chip_1300 },
	{ PCI_VDEVICE(ARECA, PCI_DEVICE_ID_ARECA_1320), chip_1320 },
	{ PCI_VDEVICE(ADAPTEC2, 0x0450), chip_6440 },
	{ PCI_VDEVICE(TTI, 0x2710), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2720), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2721), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2722), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2740), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2744), chip_9480 },
	{ PCI_VDEVICE(TTI, 0x2760), chip_9480 },
	{
		.vendor		= PCI_VENDOR_ID_MARVELL_EXT,
		.device		= 0x9480,
		.subvendor	= PCI_ANY_ID,
		.subdevice	= 0x9480,
		.class		= 0,
		.class_mask	= 0,
		.driver_data	= chip_9480,
	},
	{
		.vendor		= PCI_VENDOR_ID_MARVELL_EXT,
		.device		= 0x9445,
		.subvendor	= PCI_ANY_ID,
		.subdevice	= 0x9480,
		.class		= 0,
		.class_mask	= 0,
		.driver_data	= chip_9445,
	},
	{ PCI_VDEVICE(MARVELL_EXT, 0x9485), chip_9485 }, /* Marvell 9480/9485 (any vendor/model) */
	{ PCI_VDEVICE(OCZ, 0x1021), chip_9485 }, /* OCZ RevoDrive3 */
	{ PCI_VDEVICE(OCZ, 0x1022), chip_9485 }, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1040), chip_9485 }, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1041), chip_9485 }, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1042), chip_9485 }, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1043), chip_9485 }, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1044), chip_9485 }, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1080), chip_9485 }, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1083), chip_9485 }, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */
	{ PCI_VDEVICE(OCZ, 0x1084), chip_9485 }, /* OCZ RevoDrive3/zDriveR4 (exact model unknown) */

	{ }	/* terminate list */
};

static struct pci_driver mvs_pci_driver = {
	.name		= DRV_NAME,
...

     

  5. On 1/8/2024 at 2:43 PM, Davide03gm said:

    I use arpl for boot DSM

     

     

    On 1/11/2024 at 9:31 PM, Orphée said:

    He is probably using DSM 7 already as he referred to ARPL.

     

YES you are right, i did not read carefully enough, but the driver required is correct and arpl seems to have that one; as arpl is unmaintained atm i'd suggest using ARC loader

    https://github.com/AuxXxilium/arc

    that one comes with a wider selection of drivers and to be sure i just checked for that driver and it is present there, so it should work with this loader

     

  6.  

     

the HPE spec lists it as "QLogic cLOM8214"

    https://www.hpe.com/psnow/doc/c04111574.pdf?jumpid=in_lit-psnow-getpdf

     

    the driver would be "qlcnic.ko" and that driver was part of the 3615/17 extra package

    pci vendor and device id in the driver are

    1077:8430

    1077:8030

    1077:8020

     

check if the pci id's match, check the dmesg log for "qlogic" or "qlcnic"
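the two checks could be sketched like this:

```shell
# show nics with numeric pci ids; look for 1077:8020, 1077:8030 or 1077:8430
lspci -nn | grep -i ethernet
# kernel log lines from the qlogic driver, if it was loaded
dmesg | grep -iE 'qlogic|qlcnic'
```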

that driver was also part of jun's original extra.lzma, so most likely the problem is not a missing driver

     

    afair 6.2 update support ended 6/2023 so you might be better off trying out a newer loader and dsm 7.x

why do you think you could be "banned" from your own google account when you copy/sync data?

also xpenology IS dsm, it uses the original kernel from dsm and the original install files and updates; the loader tries to make things look like an original system, and for that reason you can use and do most of the stuff the way it's done with an original synology system (and can usually use the KB from synology)

there are some differences when it comes to things that need extra added licenses (like extra cams for surveillance station) or that enforce serial number and mac validity (like quick connect), but in most situations you can use it ootb as if it's an original system. the extra checks for serial/mac are usually enforced when extra services are used that cost synology money, like quick connect (they need resources in the cloud/internet to realize them), and likely synology has to cough up money to the MPEG LA (and similar license holders) when stuff like extra codecs or hardware en- and decoding is used (i guess they save money by only paying when that part is actually used/installed, and a lot of sold units never use that stuff, so it saves money that way - as long as a bunch of freeloaders are not "mis-using" it and producing extra cost; so that is where the extra protection by SN/mac is used)

     

     

     

the advantage is more that more runs ootb with the kernel drivers, e.g. it is easier to add the i915 driver afterwards since less backporting is needed, and should kernel 5.x come for the gemini lake based units (e.g. 920+) then much more modern intel cpus can be used there without extra effort, as the i915 driver is already present ootb

another advantage is the support for more cpu cores (although 3617/3622 are at least at 16)

since that one is designed for amd, VMM will definitely run with it too, which would not work with the versions for intel cpus like 3622

since you are planning an intel cpu, the sa6400 may even be at a slight disadvantage, as VMM possibly won't run with an intel cpu (the kernels are often compiled with limited features for the cpu)

since your cpu only has 4 cores (8 threads) you are not necessarily at the limit there, and if you e.g. want to use nvme as a data volume that might be a factor; not all versions support nvme as a data volume ootb, and if you "retrofit" that you run the risk of accidents on (bigger) updates and the volume is gone (in the good case you re-enable nvme afterwards, in the bad case the data is gone and you need your backup) - you should read up a bit here in the forum on that. if you want it simple (and that is the point of taking DSM over other free NAS OS) then you should not hack too much into it and make sure the important things work ootb and survive updates; many have run an update a few months after the initial setup and had a rude awakening because something stopped working and they had completely forgotten that one mod or another was "special" and needs manual rework (this has gotten better with the current loaders, and they usually also bring a function to update themselves over the internet - but you should know and document that, as well as the settings you chose in the loader)

kernel 5 can have advantages, but in normal use there are, depending on how you use it, better ways; with more cameras you'd better grab a DVAxxxx, that one brings more ootb

the apollo and gemini lake units have imho "CONFIG_NR_CPUS=8" configured in the kernel, so your cpu would be fully used, and if you take the dva1622 (gemini lake) you get, besides more cameras, also more features in SS and you can use i915 (intel qsv) ootb; in your case that would be the unit with the most features. what it does not have is raid f1 (a raid mode specifically for ssd's); that feature is usually not compiled into the kernel of the consumer versions, for something like that it would rather have to be a 3622

the sa6400 would imho only be an option if you specifically want kernel 5.x, or if you have an AMD cpu, or if you need more than 24 threads (up to 24 works imho with 3622 and 3617)

but this is also always a bit down to the personal experience one has made; i argued rather conservatively here. if you know your way around and constantly stay on top of things (here in the forum) you can do it differently; it is always also a question of how much time you want/can sink into it and how long that lasts. if you let the box run for 1.5-2 years without keeping up, you should always first read up on what currently works and what to avoid

dsm/xpenology is a bit special; no know-how of your own and then just bringing in someone who "only" knows linux won't cut it, you have to know/consider quite a bit and sink time into it (which is why some have bought themselves a "real" DSM box from synology; those often have somewhat low-end hardware but in the end are usually sufficient for the NAS job with a few extras)

when "sata hub" means a sata port multiplier (https://en.wikipedia.org/wiki/Port_multiplier) (i guess so) then no, there is no usable option in DSM, as synology has disabled that and we use the kernel of the original systems

you could still use "non appliance" NAS systems like open media vault, those have "normal" kernel configurations and would support it (even if, from a performance point of view, it's not a good option for raid systems)

     

depending on what dsm system you need you could also choose one that comes with that driver from synology, like 920+ or dva1622; all apollo and gemini lake systems come with that driver ootb, that way you could use 7.2 now

maybe just wait until it's fixed, 7.1 is fine as it gets updates from synology

  11. On 12/5/2023 at 3:01 PM, canzone said:

    Before messing up with the real deal, I've installed ARPL boot loader in a new USB stick and installed it with a new HDD just to see what happens. I've heard that one cannot move directly from 6.x to 7.2, so I did this first. Installed a DS3622xs+ model 42962 and DSM 7.1.1-42962 Update 6, the last one available. Couldn't be easier. Runs perfectly.

at this point you would have been finished, as 7.1 gets updates and everything; no need to do more, 7.2 is optional, most likely there's nothing in it that you need or might be able to use (with that old hardware)

(when coming from 6.x you would need to check about packages, as 7.x is different in that regard and needs newer packages, so if you use 3rd party packages check if they are available for 7.x)

     

    On 12/5/2023 at 3:01 PM, canzone said:

    Tried ARC with a lower DSM version (7.2.1 64570) and did not work.

     

    Also tried to use ARC with 7.1, did not work too.

    there are alternatives

    https://github.com/PeterSuh-Q3/tinycore-redpill

     

for an estimate you can look into what synology suggests or says is possible for a certain model and check the hardware that's built in

    https://www.synology.com/en-global/products/DVA#specs

    https://kb.synology.com/en-global/DSM/tutorial/What_kind_of_CPU_does_my_NAS_have

     

    i guess DVA3221 will do it but it needs a NVIDIA GeForce GTX 1650 (afair synology has hard coded that into some parts of the software)

according to the specs from above a dva1622 (just a gemini lake with intel qsv in active use) can do 8 x 4k, so with some newer/beefier hardware with qsv working in dsm it might be possible to use dva1622

"80 FPS @ 4K (3840x2160), 8 Channels" - i guess that's 8 channels with 4k and 10 fps each; it might be more cams if you lower the fps or use better hardware than the original gemini lake in the dva1622 (capturing a lower resolution on some cameras might also help)

i guess you will need the intel qsv support to have h264/h265 decoded for that amount of data

     

there is also an 8 camera limit in the default config of the dva's (and extending that needs some extra "license" you can't buy from synology)

     

    On 11/28/2023 at 9:11 PM, WowaDriver said:

    I don't think the network is a problem here as the cameras only have a 100MB interface. I think it's more likely to be the hard disk. Do you think it would be better to record on a large SSD or NVMe?

     

if you take the compute power synology specifies for the dva1622 you can calculate a little bit

3840x2160 x 3 (base resolution x3 because of 24bit color) and 80 fps over all cams is ~2000 MB/s; as it's h.264 encoded, let's say 0.07 for the compression, then it's still 140 MByte/s

if that's correct then a normal single hdd will be at its limit, as it's in different files on the disks and it also needs iops performance (and that's low on hdd's); i guess it would at least need some raid to have more performance

beside this, a single 1G nic will max out at ~110 MByte/s, so a single 1G nic might also be a problem

as the original dva1622 only comes with two disk slots and one 1G port, my "estimate" might be too pessimistic and synology may have evaluated from practical tests that a normal hdd (or raid1) and 1G is enough for 80fps@4k
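the back-of-the-envelope numbers above, redone with awk (the 0.07 compression factor is just the guess from the text):

```shell
# raw stream: 3840 x 2160 pixels x 3 byte (24 bit color) x 80 fps total,
# then x0.07 as a rough h.264 compression factor
awk 'BEGIN {
    raw = 3840 * 2160 * 3 * 80 / 1e6     # MByte/s uncompressed
    printf "raw: %.0f MB/s, h.264: %.0f MB/s\n", raw, raw * 0.07
}'
# prints: raw: 1991 MB/s, h.264: 139 MB/s
```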

     

    imho when taking some headroom into account it should be more like a raid5 and at least 2x1G nic to handle 9 x 4k (10fps) as h.264 stream

    digging into the recorded data while still recording all channels ... yes might need some additional headroom to still keep things smooth and running

     

    maybe some people with systems already running can add here?

but i guess 9 cams @ 4k might be a little over the usual size

     

as commented above, look up what specs/system a regular synology or qnap system would need and don't forget to read the small print, as there might be some marketing involved (like only recording, or needing ssd/raid to reach full spec)

    also check google/youtube for people commenting about that kind of system (multi 4k cam's and using regular synology/qnap for it)

     

  13. 23 hours ago, apejovic said:

Could someone maybe test the dva1622 with the arpl loader from auxillum?

     

well, if the cpu can do it and sata is present it should work, and that would apply to the futro

     

    23 hours ago, apejovic said:

Would connecting a normal HDD be possible?

since the thing has m.2 sata you can use an adapter (similar to how you can turn m.2 nvme into a pcie 4x slot, it is purely a mechanical rerouting of the signals to a different connector)

    https://www.m-ware.de/adapter-und-konverter/adapter-ngff-m-2/m-2-adapter-sata

but if the m.2 can also do nvme then of course an m.2 adapter with 2-5 sata ports that has a chip for it (asm or jmb) works too

     

you will then have to tap the power somewhere in the case, but that will certainly work

detailed info here:

    https://github.com/R3NE07/Futro-S740

     

there it even has pictures of exactly that (2.5" ssd in the case with m.2 adapter)

    https://www.mydealz.de/deals/refurbished-fujitsu-futro-s740-raspberry-pi-alternative-2041563#comments

(apparently there are also m.2 sata adapters that have a power connector for the hdd right on them)

     

  14. 14 hours ago, Peter Suh said:

    If I use Quickboot in BIOS, when the PC reboots, it will not find any disks mounted on this device.
    So, as an alternative, I disabled quickboot and added a slight boot delay. After that, these phenomena no longer occur.

interesting, i would have expected it to be enough to have the linux kernel (and its ahci driver) loaded to find the disks (we don't need them to boot, so no need to have them ready that early, as we are booting from usb and the kernel is loaded from usb)

my first guess would be that the sata ports are not in ahci mode and are therefore not detected; boot a live/recovery linux and check with lspci which driver is used

probably ata_piix

if that is the case you would switch to native sata/ahci in the bios

a little internet searching on that atom cpu:

Calistoga chipset, https://en.wikipedia.org/wiki/List_of_Intel_chipsets, so ICH7-M and with that at least theoretically ahci capable, if the bios lets you set it
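on a live linux the check could look like this:

```shell
# list storage controllers and the kernel driver bound to them;
# "ata_piix" means legacy ide mode, "ahci" is what dsm needs
lspci -k | grep -iEA3 'sata|ide'
```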

     

     

     

  16. 54 minutes ago, Peter Suh said:

    Is this possible for Ryzen GPU as well? I wonder.

it would need some api's in dsm/linux to make any use of a running amd driver, and afaik there is no generic use of an amd gpu by synology yet

if jellyfin or plex could use some amd gpu driver/api support it might be more interesting, but as it is now i would check how to use it before sinking a lot of time into adding that

even adding nvidia support (way easier than the route jim ma had to take for i915 on the sa6400 kernel 5) got not that much love, even though its api's are common and supported by 3rd party packages like plex

beside this, it's not that common for amd hardware based systems to have an integrated gpu (at least yet)

    • Like 1
  17. On 9/27/2023 at 3:09 PM, Kashiro said:

    1. Is the LSI HBA supported?

seeing this i'd say yes, at least by now

    https://github.com/pocopico/redpill-lkm/commit/787faf5ece6afe851421dabb15c924757a3c60cc

    // by jim3ma:
        // dsm 7.2 check syno_port_type in sd_probe, when syno_port_type == 1(SYNO_PORT_TYPE_SATA), the disk is sata
        // for disks from hba card like Microsemi Adaptec HBA 1000-8i with aacraid driver, syno_port_type is always 0, we need change it to 1(SYNO_PORT_TYPE_SATA), otherwise sd_probe will return error with -22
        // solution: update syno_port_type in hba driver

     

    On 9/27/2023 at 3:09 PM, Kashiro said:

    2. Is the Mellanox NIC supported?

the 1st thing can be to check if it's supported by dsm itself (on the dsm types synology supports for mellanox)

    https://www.synology.com/en-global/compatibility?search_by=category&category=network_interface_cards&filter_brand=Mellanox&display_brand=other

yours is supported when checking the compatibility list from the link above

and as long as the loader does not break support (not likely), it should at least work in the officially supported systems; often the drivers in the loader are newer and stretch over more systems than synology supports (like having mlx support in units that originally don't even have a pcie slot, like the dva1622 as a gemini lake system)

    ConnectX-4 Lx EN MCX4121A-XCAT (14.20.1010)
    Compatible Models
    
        FS series:
        FS3400, FS3017, FS2017, FS1018
        SA series:
        SA3600, SA3400
        22 series:
        DS3622xs+
        21 series:
        RS4021xs+, RS3621xs+, RS3621RPxs, DS1621xs+
        19 series:
        RS1619xs+, DS2419+II, DS2419+, DS1819+
        18 series:
        RS2818RP+, RS2418RP+, RS2418+, DS3018xs, DS1618+
        17 series:
        RS18017xs+, RS4017xs+, RS3617xs+, RS3617RPxs, RS3617xs, DS3617xsII, DS3617xs
        16 series:
        RS18016xs+
        15 series:
        RC18015xs+, DS3615xs
        14 series:
        RS3614xs+, RS3614RPxs, RS3614xs
        13 series:
        RS10613xs+, RS3413xs+
        12 series:
        RS3412RPxs, RS3412xs, DS3612xs
        11 series:
        RS3411RPxs, RS3411xs, DS3611xs
    
    Incompatible Models
    
        19 series:
        RS1219+
        18 series:
        RS818RP+, RS818+
        17 series:
        DS1817+, DS1517+
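independent of the model list above, a quick check on the box whether the card and its driver are picked up could be (mlx5_core is what i'd expect for a connectx-4 lx):

```shell
# look for the mellanox nic and the kernel driver bound to it
lspci -k | grep -iA3 mellanox
# kernel log lines from the mlx5 driver
dmesg | grep -i mlx5
```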

     

    On 9/27/2023 at 3:09 PM, Kashiro said:

    3. Can I migrate my existing SHR1 array without formatting? I'm reading that SHR isn't enabled/supported on newer models by default?

yes; you can even check synology's faq and it should say this, shr is just a disabled option when creating a new array (and even that can be manually changed)

existing arrays will work as it's a general dsm feature, and as all systems come with (more or less) the same dsm version the code is present (exceptions will be code that is part of the kernel; if that is left out when compiling the kernel it's not present, an example can be raid f1, not present in consumer units and not easily added - when needing it you have to choose a unit in the loader that comes from synology with support for raid f1, as we use syno's original kernel)

     

    On 9/27/2023 at 3:09 PM, Kashiro said:

    4. Can I use the M.2 NVME drives to create a new volume? Not looking to use these for caching but planning to create an all-flash volume as datastore for vmware. 

synology has enabled that for some units and it was "extended" by xpenology users to more (all?) units; if you want to be on the safe side, look up which units synology supports it on and choose one of those in the loader, that prevents problems when updating (like the m.2 volume being gone/missing after an update) - when using a non official unit, thoroughly read about the things to keep in mind and the risks

     

    On 9/27/2023 at 3:09 PM, Kashiro said:

    5. I'll soon be running into the 108TB volume limit with this array. Any way to convert to 200TB without formatting? Synology says that you have to recreate the volume, but maybe there's some magic that can be done with Xpenology?

that's all about dsm itself and you can just use the information synology has about it

    https://kb.synology.com/en-in/DSM/tutorial/Why_does_my_Synology_NAS_have_a_single_volume_size_limitation

keep to that and you won't have problems; DS... units in general don't seem to come with 1PB support right now, but that might change when bigger disks become more common and synology's customers with DS units stumble over it more often; it's a software and marketing thing i guess

i did not look that up in the forum, it might be as simple as enabling shr after installing to get 1PB support; it's still a corner case but will get more important over the next two years

     

booting the hypervisor from m.2 is the right way, but you misunderstand how dsm/xpenology boots and where the system is loaded from

short: boot the hypervisor from m.2 and pass through the onboard sata controller to the dsm vm (both old disks connected to that controller - or use an added controller and pass that one through); use the howto section to find out how to configure proxmox with a dsm vm to use the boot loader (arc, tcrp, ...)

     

the loader for xpenology is the replacement for the usb dom module of an original system; it just loads grub, the kernel file and a few basic kernel modules to get access to network and the disks (it's just a few hundred megabyte over all)

the dsm operating system is a raid1 partition going over all disks in the system (except cache drives)

so your DSM operating system is on your two disks, and when using the newer xpenology loader it's like having a new/updated usb dom with a new kernel

dsm's boot loader (with "red pill" help) will come up and see disks with an older dsm operating system; it can't boot into that, as the kernel from the (fake) usb dom does not match the stuff on disk = upgrade of dsm on disk needed - it's the same process as in synology's knowledge base description, so you can look up that process and there is even a list from source to destination (DSM versions and dsm hardware from->to); using xpenology gives you more flexibility in what source hardware can go to which new xpenology "hardware" (aka the dsm hardware type chosen in the xpenology red pill loader)
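you can see that system raid1 on any running dsm box (md0 as the name of the system partition array is the usual convention, check the output for the real names):

```shell
# the dsm system partition is a raid1 spanning the first partition of every data disk
cat /proc/mdstat
mdadm --detail /dev/md0
```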

     

it's a drive migration scenario when using drives from the old system in a newer one

    https://kb.synology.com/en-global/DSM/tutorial/How_to_migrate_between_Synology_NAS_DSM_6_0_HDD

     
