XPEnology Community

IG-88

Developer
  • Posts

    4,645
  • Joined

  • Last visited

  • Days Won

    212

Everything posted by IG-88

  1. As long as you can't use your own boot image (loader), I don't see how that would be possible. I do remember there were some discussions in the German section about using cloud-based virtual servers; it might have been this one: https://xpenology.com/forum/topic/27798-xpenology-auf-einem-root-server-kvm-qemu/
  2. Before creating a new USB it is good to know which DSM version you had. First check whether the USB is still recognized on a PC and inspect grub.cfg on the first partition: near the bottom you can see the menu entries that tell grub what you have in front of you (6.0, 6.1, 6.2), i.e. which loader to pick when you create a new USB. You can determine exactly what you had by unpacking the rd.gz from the 2nd partition; under /etc you will then find the VERSION file, which spells out precisely which DSM version is installed on disk. Normally it should be enough to match the main version line with the right loader, and if you boot and it turns out that the kernel on the USB is too old, a repair should be offered that copies the kernel from disk back onto the USB (this assumes the loader fundamentally matches the installed DSM version, e.g. 1.02b for DSM 6.1). Important with a new USB: remember that from DSM 6.0 and Jun's loader onward you have to enter the VID/PID of the USB drive you are using into grub.cfg.
  3. That sounds like the driver is producing a kernel panic; you would be able to monitor it with iLO and the virtual COM port (DSM does everything on the serial console). I can only test with what I have, and a P410 was the best I got for free lately (and it's kind of hard to use on a non-HPE system, as it needs the BIOS to support switching off the storage BIOS extension to boot the system at all). We struggled with the P4xx for a long time and gave up about 2 years ago, and I just added the driver from the kernel (that should at least give some basic support, and it did with the P410 in RAID mode). Maybe a newer driver from an external source can do the trick; it will be in the next extra.lzma if it passes the test with the P410 in RAID mode. No, if it's working it should just recognize the drives without configuring anything (besides HBA mode); if it does not, the driver is not working properly.
  4. The driver in 0.13.3 supports the following: 1000:0064 1000:0065 1000:006E 1000:0070 1000:0072 1000:0074 1000:0076 1000:0077 1000:007E 1000:0080 1000:0081 1000:0082 1000:0083 1000:0084 1000:0085 1000:0086 1000:0087 1000:0090 1000:0091 1000:0094 1000:0095 1000:0096 1000:0097. Here is the list: https://pci-ids.ucw.cz/read/PC/1000 and yours is this one? https://pci-ids.ucw.cz/read/PC/1000/00c4 The LSI SAS drivers in 918+ are problematic, as you can see in the 1st post; I would recommend using 3617, as it comes with newer drivers from Synology. This is what is supported in 3617 right now (with SMART and serial, and without the issue that it might kill your RAID if disk hibernation is on), so your 1000:00c3 would be supported in 3617: 1000:0064 1000:0065 1000:006E 1000:0070 1000:0072 1000:0074 1000:0076 1000:0077 1000:007E 1000:0080 1000:0081 1000:0082 1000:0083 1000:0084 1000:0085 1000:0086 1000:0087 1000:0090 1000:0091 1000:0094 1000:0095 1000:0096 1000:0097 1000:00AA 1000:00AB 1000:00AC 1000:00AD 1000:00AE 1000:00AF 1000:00C0 1000:00C1 1000:00C2 1000:00C3 1000:00C4 1000:00C5 1000:00C6 1000:00C7 1000:00C8 1000:00C9 1000:00D0 1000:00D1 1000:00D2 1000:02B0

Please use the forum search to find the attempts in that direction; it's not that easy, as more than just a driver (*.ko file) would be needed. I have this one on my list, but there might be one or two more threads: https://xpenology.com/forum/topic/22272-nvidia-runtime-library/ It looked like it was possible to just use the drivers from a "compatible" DSM version, but as already stated it gets way more complicated, as ffmpeg and others are involved; way more than I'm willing to do, as I don't need it. I provided drivers and ways to build them, so other people with enough time to spare can push this forward if there is enough will to do it.

Every case where the loader does not match the installed version (on disk, RAID1 over all disks) is an issue; usually DSM will not boot and offers a repair that copies the kernel files from disk back to the loader. I do mention this in the 1st post (afaik); as I'm only offering the extra/extra2, it's up to the user to copy the (6.2.3) files to the loader before using it. How would you want to do that automatically? In theory it's possible to offer a pre-made (modded) loader with newer kernel files and a matching extra/extra2. I'm not doing this for legal reasons, but anyone is free to offer it; neither Jun nor I have copyrights on that stuff, and as long as there is a decent description with that loader making clear in what way it differs from Jun's original and giving the version of the extra/extra2 that's used, I don't think there is anything to complain about. I don't mind if anyone uses the extra/extra2; I would prefer a version number, not my name, to be used with it, so people have a way to see if it's the recent version and can read about problems and quirks. I've seen my extra/extra2 used in some stuff from China (like a 1019+ loader) and I don't mind that; it's free for everyone to use, and if anyone tells people it's his own work I don't mind either. At some point I started adding checksums to the download so people can check whether it's the version from me or something modded, and I may add a version text file to the extra/extra2 to make it easier to check what version is in use (extra/extra2 can easily be unpacked with 7zip to check a text file). So if you feel the need to offer loaders pre-configured for 6.2.3, feel free to do so and open a thread in the loader section (it might need to be approved by a mod, so it can take a few days before everyone can see it). Btw, before doing so, PM me: I have a way to overcome the read/write problems in Win10, so the offered loader would also be easier to handle, as both partitions could be used freely in Win10, with auto-mounted drive letters and working without "as administrator" (as in older Win10 versions).
  5. If the SATA controller is configured for 4 drives, then it's normal that the following controller (SCSI or whatever, as long as DSM has a driver) starts at 5 and up; DSM counts the ports, used or not, and places the controllers one after another. I never tested this, as I only use it for tests and compiling drivers, but according to the documentation VirtualBox can have up to 30 SATA drives in a VM (https://www.virtualbox.org/manual/ch05.html), and loader 1.04b 918+ has a preconfigured max of 16. In ESXi it is suggested to have the boot image on the 1st SATA controller and the data disks (where DSM is installed) on the 2nd or higher controller; that seems not to be needed in VirtualBox on my system, but it can't hurt to try.
  6. It's a full Linux and is open to much more hardware than DSM. No, DSM works with Ryzen too; you would need an additional GPU when you already have a 10G NIC in the one slot... that's one of the reasons to think about microATX: more options if the new build still needs something more after a while. Also, going hypervisor or baremetal is not final and can be changed if needed; when the disks are under direct control of DSM (like a controller passed into the VM, or RDM mapping) it's still possible to just remove the hypervisor and use the whole install (disks) baremetal without reinstalling DSM or losing the data on the disks (or the other way around, from baremetal to hypervisor). In that case mini-ITX is the better choice, but choose more wisely before buying. Btw, there were more expensive server mini-ITX boards with a 10G NIC onboard; that way you can keep the PCIe slot open. But it's over 2 years since I looked into that, so it might be outdated (the market moves, and compact servers might not be that interesting anymore).
  7. About possible hardware: GIGABYTE W480M Vision W (microATX, 2 x PCIe x16 slots (1x16, 1x8), 2 x M.2 NVMe, 8 x SATA) with an Intel Core i5-10500T, 6C/12T (TDP 35W). The CPU can be different; the example is low power and affordable at around 200 bucks. The board keeps everything open, as it can take NVMe and more disks (and 8 x SATA is a comfy start); it might be overkill on options. If you go with mini-ITX you have fewer options, but it might be good enough. The only negative with the board is that you can't use the 2nd NIC, as the 2.5GBit NIC from Intel has no driver outside kernel 5.x and DSM is based on kernels 3.10 and 4.4. But you are already planning a 10G NIC: an ASUS XG-C100C will work, but if the systems are not too far apart, SFP+ might be a better choice (cheap DAC cables up to 7.5 m and affordable 4- or 8-port switches). SFP+ would be my choice now because of the switch option; multiport 10G RJ45 is way more expensive and power hungry, and SFP+ also has lower latency.
  8. Along with its flexibility, a hypervisor adds a certain amount of complexity; if one already has enough experience with a certain hypervisor it will not cost too much time. The hypervisor is also an instance that needs updates/maintenance in addition to the system that is the real point; that complexity can add up until you have more work with the hypervisor than with the DSM VM. If it's really about a single-purpose DSM install without bigger plans to extend to additional VMs (DSM can do Docker and VMs with VMM on a smaller scale), if it's "put it in place and forget about it" rather than tinkering all the time, then carefully chosen hardware for two baremetal systems might be the better choice. The sole purpose of having a DSM (Synology) appliance is often to not care too much about it once it's set up and running; adding a hypervisor adds complexity in a way that can be a bother. I don't know if the gain of NVMe in a VM (as a virtual SSD) is so much faster that it's worth the effort. Also, when thinking about baremetal with NVMe (918+, 8-thread limit), it needs two NVMe drives in DSM to use them as read/write cache, and most mini-ITX boards have just one NVMe slot. An M.2 SATA drive will perform the same as a normally connected SSD, but I would not do that, because it's much more difficult to handle and to replace than normal 2.5" SATA drives; it's easier to have 2 or 4 drives of the same type and one (cold) spare. If one fails it's just one part to replace; if there is M.2 SATA beside normal SATA, then that's two different spare parts. One point we have not touched yet is power consumption: if it's 2 x NUC atm then it will not draw much power, and using oversized hardware may result in a nice additional fee for power, so it might be worth a thought and a few minutes with a calculator to see what it will cost in a year.

     That's a TLC SSD (and OK); if you look at different models, don't use QLC, which is way slower once the internal cache of the drive is exceeded. Also it might be better to use 4 drives in RAID10 for more throughput (but you can start with 2 drives in RAID1 and extend if you feel you want more speed). Btw, my last build was mini-ITX and I changed to microATX last year; way more options with its 4 PCIe slots, and there are also models with 2 x NVMe (if you want to keep that as an option).
  9. I tested the hpsa.ko driver in my last extra.lzma with a P410 (which does not support HBA mode) using a RAID1 single disk, and the driver seemed to work: it was loaded (did not crash) and found the disk. The P420 should be switchable into HBA mode and should then work with disks the same way as the LSI SAS HBAs in IT mode. You might need to boot a service Linux to switch the P420 into HBA mode, as (afaik) that one does not have a switch in the controller BIOS (later P4xx models might have that).
  10. If it's more about RAM and CPU power, you can use 3617 baremetal and just use normal SSDs as data volumes (RAID F1 mode) or as cache drives instead of NVMe. Using an 8-core with HT, or a beefier one with 12 or 16 cores without HT (disabled in BIOS), is no problem. SATA SSDs in RAID1, RAID10 or RAID F1 (the SSD equivalent of RAID5) will be OK; it depends on your requirements. So 3617 with SATA SSDs might be a simple-to-handle solution (you can still add normal HDDs for slower but bigger storage as long as you have SATA ports; if the 10G NIC blocks your single PCIe slot, you can't extend for more SATA ports). Security seems not to be your concern when you still run a 5.2 system, and as long as you are doing operations and maintenance yourself (and/or make good documentation) it should be fine; your 5.2 did not get updated to 6.x by accident, so no problem from that side, I guess.
  11. Yes, XPEnology uses the original DSM kernel (for various reasons), so you are bound to the limits of the DSM type you are using: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ Atm you use 8 real cores and 8 HT "virtual cores" (~20-25% of a real core per HT core), so it's roughly 10-core performance atm. When disabling HT in the BIOS you will have 12 real cores (out of the ~15-core performance you could expect from that 12+12 CPU). If you want to stay with DSM you could use a hypervisor like ESXi or Proxmox and run 2 DSM VMs to make use of all the CPU power. Depending on your use case, you could also consider a baremetal install of OpenMediaVault.
  12. Mini-ITX brings some limitations when it comes to extensions, as it does not have many PCIe slots (usually just one). Did you read about the general limitations of XPEnology? https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ When doing Docker and VMs, RAM is more important. Very few people use ECC, as most build low-cost systems with desktop hardware that does not support it; it can't hurt to have it when it matters to you. Is it worth the money? Depends on how much more it costs and how you see it. Here we have the 1st problem: if you look at the limits you will see that only 918+ supports NVMe (and only as cache, and with manual patching), but its kernel is limited to 8 threads; even 3617 "only" has 16 threads, so that's a max of 8 cores with HT, or more cores with HT disabled (if the BIOS can do that), as with a 12-core with HT. You might be better off with ESXi or Proxmox and a DSM VM; that way you can handle resources more flexibly (CPU power for other VMs) and the NVMe drives can be made into virtual SSDs that 3617 can handle as normal drives or as cache, but you lose a lot of simplicity compared to a baremetal install. That sounds a little low for HA; consult Synology's design guides for that. Maybe plan an added 10G NIC for the PCIe slot (4x or better) and use a direct connection between the two HA units over the 10G link (you don't need a switch just to connect the two units). Also check the forum whether HA is available without a valid serial. XPEnology is still a hacked DSM appliance; there are things that don't work without "tweaking", you can lose functions when updating if Synology changes things, and if you can't install all security updates there are risks involved, like not being able to install 6.2.4, which already contains new security fixes. It's also easy to "semi-"brick or damage the system on updates when doing things wrong (like not disabling the write cache on updates when using 918+ with NVMe cache, or installing "too new" updates like 6.2.4), for example when other people unaware of the specialties are handling the system; keep that in mind. Have you given OpenMediaVault a thought (I guess they have no equivalent to HA, but web GUI, NAS and Docker will be there too)? I don't know how close to production that system is supposed to be, but it can be riskier to do this than you expect (if you don't have longer experience with XPEnology).
  13. If you try loader 1.03b for DSM 6.2, use a SATA controller (ICH9 as chipset) as seen in my screenshot above. I also use 3615/17 as a VM; that way there is not much difference besides the hardware transcoding support and the support for NVMe SSDs (SHR will work ootb, but you already figured out how to activate that on 3615/17; there is also something about that in the FAQ here in the forum). Also, the 918+ loader already has a default max of 16 disks; the 3615/17 loader comes with a max of 12 disks, but as your plans are based on more recent HDD sizes, that will not make a difference for you.
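For reference, the disk limits mentioned above end up in DSM's synoinfo.conf. A sketch of the relevant lines, assuming the usual /etc.defaults location; the bitmask shown for internalportcfg is an illustrative value, not taken from the post:

```shell
# /etc.defaults/synoinfo.conf (fragment; loader defaults as described above)
maxdisks="16"                # 918+ loader default; 3615/17 ships with "12"
internalportcfg="0xffff"     # bitmask, one bit per internal sata slot
```

Editing these by hand is possible but can be overwritten by DSM updates, so treat it as a last resort.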
  14. Just to make sure, please post the grub.cfg you used.
  15. An original USB module has f400/f400 as VID/PID; to completely replace an original module you would also need to change a USB flash drive to have these IDs, and that part needs some reading, work and the right source material (a USB flash drive with the right hardware that you can get tools for).
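The usual case (mentioned in several posts above) is the reverse: putting your own drive's VID/PID into the loader's grub.cfg. A sketch of the relevant fragment; the IDs 0x0781/0x5567 are placeholder examples, read the real ones with `lsusb` on any Linux box first:

```shell
# grub.cfg on the loader's first partition (fragment)
# lsusb shows e.g.: Bus 002 Device 003: ID 0781:5567 ...
set vid=0x0781    # vendor id of your usb flash drive
set pid=0x5567    # product id of your usb flash drive
```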
  16. Sorry, too much lost in translation here, I can't figure out what you are on about. A VM? We were talking about a normal baremetal install. Also, the loader you use defines the main DSM version you have to use and also the type of DSM (like 3615 or 3617). I'd suggest using loader 1.03b for 3617 with DSM 6.2.3; if you have problems with CSM mode you can try loader 1.02b with DSM 6.1, which can run both legacy (CSM) and UEFI: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ You should read the normal install thread about 6.1; it's the same when doing a 6.2 install: https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/ Maybe look for a YouTube video in your native language to get a better picture.
  17. I'm not doing anything with WiFi, as Synology dropped that themselves (among other things). lspci_syno.txt shows it: 0000:00:1f.6 Class 0200: Device 8086:15fa (rev 11). Not unexpected, as the board uses a new chipset for 10th-gen Intel; they also used a newer onboard NIC revision, so it will need a newer driver for sure. As we got a new 3rd-wave shutdown at Easter, I guess I will have time to do something in that direction.
  18. Where did you pick that up? ICH6 and upward can do AHCI according to my sources: https://ata.wiki.kernel.org/index.php/SATA_hardware_features Maybe you should really read it? Page 2-18; you could even do a text search for "ahci" in the PDF and find about 10 references to it. I'm not going to check any further; if you don't bother to even search in a manual you are referencing... good luck with your further efforts.
  19. Even in your 2nd post you don't give any details about your hardware (board, CPU, added cards); it's kind of hard to give advice without any specific information. In general there is less support in DSM 6.1/6.2, as Synology made changes to their custom kernel code and broke drivers; without someone fixing them, there are lots of SATA and PATA drivers that don't work anymore: https://xpenology.com/forum/topic/9508-driver-extension-jun-102bdsm61x-for-3615xs-3617xs-916/ Afair it did work in 6.0, so you might try that version (DS3615xs 6.0.2 Jun's Mod V1.01): https://xpenology.com/forum/topic/7848-links-to-loaders/
  20. The chipset might be MCP78, and that's supported by the Linux ahci driver that is part of DSM. Any log, like /var/log/dmesg? Check the BIOS, maybe reset it to defaults, and make sure SATA is in AHCI mode. I don't see much difference in ahci and kernel between 6.1 and 6.2; it should work the same in both.
  21. Afaics that's MCP78 (https://de.wikipedia.org/wiki/Nvidia_nForce_700), and that's AHCI-compatible in Linux (https://ata.wiki.kernel.org/index.php/SATA_hardware_features), so check your BIOS settings to make sure SATA is set to AHCI.
  22. Even if it's working, one PCIe 2.0 lane still has only 500 MB/s, and all cards/devices will have to share that bandwidth.
  23. You might read the FAQ: hardware RAID is not supported, DSM is built around software RAID with single disks. Use AHCI mode in the BIOS for the onboard controller (not IDE mode or RAID mode); whether an extra controller works depends on what it is, but in most cases it's not supported, so look into using the onboard SATA in AHCI mode. Re-create the USB from scratch, check the BIOS settings (if it's a UEFI BIOS, check that CSM mode is on and that you boot from the non-UEFI USB boot device) and the network cable. Also try loader 1.02b for 3615/17 with DSM 6.1.
  24. What's your point? You say 6.2.4 is *** but you had to update your original system to it. Btw, that's no option for XPEnology users, as the current loader no longer boots with 6.2.4 (presumably new protection of the same kind as in 7.0). 6.2.3 U3 can't be that bad; besides my own system there is a good number of people who have done the U3 update without problems, so I would not issue a warning for U3: https://xpenology.com/forum/topic/37652-dsm-623-25426-update-3/