IG-88

Members
  • Content count

    447
  • Joined

  • Last visited

  • Days Won

    11

IG-88 last won the day on October 11

IG-88 had the most liked content!

Community Reputation

53 Excellent

3 Followers

About IG-88

  • Rank
    Super Member
  1. if you use a live linux (like ubuntu) you can also just use dd to overwrite the mbr/beginning of the disk with zeros, that will also get rid of the partition information
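
    a minimal sketch of what that could look like from a live system (the target disk /dev/sdX is a placeholder - double-check the device name, this is destructive):

      # wipe the first 1 MiB (MBR, partition table and boot code remnants)
      dd if=/dev/zero of=/dev/sdX bs=1M count=1
      # tell the kernel to re-read the now-empty partition table
      partprobe /dev/sdX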
  2. Looking for NAS options

    if ZFS is needed then dsm/xpenology is not for you, ext4 or btrfs are the options; maybe FreeNAS (http://www.freenas.org/zfs/) is more what you are looking for, but be aware of the huge amount of RAM that's needed if you have something with 20+ TB in mind. open media vault might also be an option (also no ZFS). DSM is a little problematic when it comes to drivers (it's an appliance, not a linux distribution), as synology only packs in what is used by synology hardware. jun's loader brings some drivers, but not comparable to what was (in the end) possible with 5.x. if you choose your hardware in a way that it only needs drivers already packed by synology inside the dsm*.pat you might be more future proof; at least you will have to check what hardware is supported atm by dsm + jun-loader (and maybe an additional extra.lzma with extra drivers)
  3. hi, (afaik) jun's loader still contains the original bzImage, and the trick to overcome the protection synology put in place is to insert additional kernel modules into the already running base system when booting. the source and toolchain can be downloaded here https://sourceforge.net/projects/dsgpl/files/Synology NAS GPL Source/ i think if you build your own kernel from source it will not work, as it will trigger the protection synology builds into dsm; the loader mechanism jun uses is kind of stealthy to keep things looking like the original state
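
    as a rough, hypothetical sketch of that mechanism (not jun's actual code, just the general idea of injecting extra modules into the stock kernel at boot):

      # inside an initramfs hook, before DSM's own init takes over:
      for mod in /exts/modules/*.ko; do
          insmod "$mod" || echo "failed to load $mod"
      done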
  4. as the outcome is "open" it could be dangerous to let it lie around, i will send you the link and you can report back what happened
  5. no, like NO. if you want dsm 6.1 and esxi is no option then a different controller is your only way; used LSIs (reflashed to IT mode) or new marvell (syba brand?) seem to be used often. just search the forum, there are some people who ditched their hp sa controllers to use dsm 6.1 (like cseb) PS: maybe when dsm 6.2 comes out next year with kernel 4.4, the hp sa driver from that kernel might work? but there are new protections in the dsm 6.2 beta, so it might not happen, who knows
  6. did you read the thread here about hpsa.ko? both options (kernel and external driver) do not work, so no, my statement two posts above is still valid: no hp smart array driver support as long as nobody proves that he has found a way (working driver). the only way might be to use esxi as base (dsm in a vm) and either use a raid5 volume over all disks as vmfs datastore (using hardware raid) with a basic volume (vmdk file) in dsm, or keep the disks (single disk as single raid volume in the hp controller) and in esxi hand them over as rdm to the dsm vm. in both scenarios you use the hp sa driver of esxi and dsm only needs its existing drivers
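
    for the rdm variant, the usual esxi shell step could look roughly like this (device path and datastore name are placeholders; vmkfstools -z creates a physical-mode rdm pointer file you then attach to the dsm vm):

      # map a raw disk into a .vmdk pointer file (physical compatibility RDM)
      vmkfstools -z /vmfs/devices/disks/t10.ATA_____<your_disk_id> \
          /vmfs/volumes/datastore1/dsm/disk1-rdm.vmdk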
  7. looks like someone already built these drivers for 916+, you might ask him for the extra.lzma
  8. DSM 6 Boot Image for Hyper-V

    even with a DEC nic driver, what use will a 100MBit nic have for a NAS?
  9. you don't have to guess, just open the dsm*.pat file with 7zip, open the hda1.tgz and look in /usr/lib/modules/ there; i think mlx*.ko will be mellanox. 3617 has it, 916+ does not. source for 916+ to compile drivers: https://sourceforge.net/projects/dsgpl/files/DSM 6.1 Tool Chains/Intel x86 Linux 3.10.102 (Braswell)/ https://sourceforge.net/projects/dsgpl/files/Synology NAS GPL Source/15152branch/braswell-source/linux-3.10.x.txz/download if you read this https://xpenology.com/forum/topic/7187-how-to-build-and-inject-missing-drivers-in-jun-loader-102a/ and a little bit of this (toolchain) https://xpenology.club/compile-drivers-xpenology-with-windows-10-and-build-in-bash/ you might be able to help yourself. i still plan to do an extra.lzma for 916+ and 3617 but i guess not in the next two weeks
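
    if you prefer a command line over the 7zip gui, the same check could look roughly like this on linux (the .pat file name below is just an example):

      # pull hda1.tgz out of the .pat archive and list mellanox modules
      7z e DSM_DS3617xs_15217.pat hda1.tgz
      tar -tzf hda1.tgz | grep 'modules/mlx'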
  10. DSM 6.x.x Loader

    it does; as there are two possible drivers you might try both versions (you do not write much about what you did so it's hard to tell where the problem is, i'd expect something other than a r8168/r8169 driver problem - damn it, i should have been more serious when we had crystal ball gazing at school )
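
    a quick way to see which of the two drivers actually got loaded (assuming you can reach a shell, e.g. via serial console or ssh):

      # check whether r8168 or r8169 is loaded and what the kernel said about the nic
      lsmod | grep -E 'r8168|r8169'
      dmesg | grep -iE 'r8168|r8169|eth0'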
  11. Any data recovery advice?

    just as a quick note (not much time atm): first use dd_rescue to copy as much data as possible (sector wise, 1 try per sector) from the disk(s) with defective sectors to a working disk; without that there will never be a repair in any way. after this you can try to repair the raid, but be warned: that's quick and dirty and there is no second try, because once you start changing data by repairing the array it's final. real data recovery would not work with the original disks, so if you have time and money you would make an image of every disk (like one created with dd_rescue) and work with this. also wait for what other people here might suggest, don't rush into something that will change data on any disk
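
    a sketch of that first copy step with GNU ddrescue (a close relative of the dd_rescue mentioned above; device and file names are placeholders, and the image must go to a different, healthy disk):

      # pass 1: copy everything readable, 1 retry per bad sector, direct disk access
      ddrescue -d -r1 /dev/sdX /mnt/backup/sdX.img /mnt/backup/sdX.map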
  12. well, at least the driver from the external source did not crash with the p420, but not detecting any drives doesn't help much, so it's still the same and final: no hpsa support for 6.1 unless someone comes up with a solution (i can't see any atm); it will be removed in the next version. at least the new tigon3 driver seems to work better than before
  13. just use a usb flash drive you can spare and install jun's loader 1.02b, edit the vid/pid of the usb stick and the mac address in grub (-> tutorial at the beginning of the thread) and try it; if 6.1 is already installed it doesn't change much if you try. if you give more information like board, storage controller and nic (only when onboard is not used) then we might predict if it will work (as the bootloader for 6.1 does not contain as many drivers as it did for 5.2)
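
    the lines to adjust in the loader's grub.cfg look roughly like this (the values below are placeholders, use your stick's vid/pid and your own serial/mac as per the tutorial):

      set vid=0x058F
      set pid=0x6387
      set sn=XXXXXXXXXXXXX
      set mac1=001132XXXXXX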
  14. i'm at 11 disks from 12, but there is a point where it might be better to throw the money at bigger disks (8TB) instead of investing in ports (and hotswap bays); also more disks consume more energy and produce more heat. investing in small disks will also limit the second use for backup, as you will also need lots of ports (and a board with 2x pcie slots) - at least i try to use the old disks for backup, as selling 4-5 year old disks won't bring much money
  15. Recommendation: 2nd NIC and system backup?

    no, the "system" (i.e. DSM) partition goes onto EVERY disk in the system, so it is also on the disks that were set up as raid5 (the "missing" 4.4GB per disk if you do the math). what will only exist once is the data partition of the ssd; that is presumably where you put the plugin data, and if you set it up it is only basic, but that is just a guess, maybe you did not create a volume on the ssd at all (which would surprise me though). the "problem" with adding another disk is that if it is not an ssd things will get slower again, since the raid1 then has to be kept in sync, and as soon as the cache is full the ssd waits for the conventional disk and the "ssd effect" may fizzle out. so if you are comfortable with dd, then use a usb stick with a live linux and you can easily back up the dsm partition; synology always creates it with the same size, so such a dd image can be restored later. if you have lost absolutely everything, you would simply do a default install once (that creates the partitions and the raid) and then you could write the backup back to /dev/md0 with dd. the important thing is to understand that these 2 partitions really exist on every disk, so you cannot simply treat a single disk on its own but first have to assemble the raid1 and then ...
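
    a sketch of that backup from a live linux (assuming the dsm system raid1 shows up as /dev/md0 after assembly; paths are placeholders):

      # assemble the dsm system raid1 from all member disks, then image it
      mdadm --assemble --scan
      dd if=/dev/md0 of=/mnt/usb/dsm-system.img bs=1M
      # to restore, swap if/of after a fresh default install has recreated md0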