XPEnology Community

Everything posted by IG-88

  1. Technically it's not a difference between those three; it's more about the missing support from SPK maintainers and wrong SPK options. If they built x64 packages instead, like Synology does, they would work on DSM on all x64 platforms. It can help with the decision that there seem to be more packages for 3615 (longer on the market). I wrote about that in May here.
  2. I remember some people reporting restart problems with 6.1; shutdown and start did work for them. Maybe that's the case here?
  3. The added modules from the bootloader should be found in /usr/lib/modules/update/; the location in question is the one where DSM stores its "original" files. Did you copy anything manually at some point? What are the time/date/size of the files? Can you create an MD5 hash?
     md5sum /usr/lib/modules/bnx2x.ko
     31c19459ba1854e623eec89dc75f6e98 /usr/lib/modules/bnx2x.ko
     md5sum /usr/lib/modules/usbnet.ko
     c62816846ca4bc83eb0723153043a015 /usr/lib/modules/usbnet.ko
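     A minimal sketch for collecting dates, sizes and hashes of both locations in one go (bnx2x.ko and usbnet.ko are just the examples from above; substitute your own *.ko names):
     ls -l /usr/lib/modules/bnx2x.ko /usr/lib/modules/usbnet.ko   # DSM's "original" files
     ls -l /usr/lib/modules/update/                               # files added by the loader
     md5sum /usr/lib/modules/*.ko /usr/lib/modules/update/*.ko    # hashes to compare against a known-good loader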
  4. It's in the eye of the beholder. I find my solution arguably misses the basic idea: Docker is supposed to offer a simple platform, and instead a VM on which the software (or then again Docker) runs is clearly more complex. However, I see modding the DSM "appliance" the same way: it can fail after every update and you have to figure out how to fix it, if you can even manage that on your own because a piece of software is missing. VMM is basically KVM, and a VM inside it should survive every DSM update; if necessary the VM can also be run somewhere else (easy portability).
  5. You can try to make the VMDK file from the IMG yourself by using "StarWind V2V Image Converter", also provided on this page. Edit: how about this:
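     As an alternative, a minimal command-line sketch (assuming qemu-img from the QEMU package is available and the loader image is a raw disk image named synoboot.img):
     qemu-img convert -f raw -O vmdk synoboot.img synoboot.vmdk   # convert the raw img to a VMDK disk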
  6. I forgot to mention how to do it:
     sudo lspci -k
     After you know the driver, like e1000 (Intel network card), you can check if e1000.ko is part of the extra.lzma. In Polanskiman's howto there is a section "Included default modules & firmwares in Jun's Loader", or you open the synoboot.img with 7-Zip, open the image.img, then the extra.lzma, then extra, and in there you open /usr/lib/modules/; there are the additional *.ko files you can check.
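     For the same check on Linux instead of 7-Zip, a small sketch, assuming extra.lzma is an LZMA-compressed cpio archive (as in Jun's loader) and has already been copied out of the image:
     mkdir extra_unpacked && cd extra_unpacked
     lzma -dc ../extra.lzma | cpio -idmv      # unpack the archive
     ls usr/lib/modules/                      # the additional *.ko files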
  7. Depends on what hardware is used. If an extra.lzma was used with 6.0.2, then the migration might fail because of missing drivers. It's good to know which kernel modules were loaded for network and storage; then you can check if these *.ko files are present in 6.1.
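     A small sketch for recording what the running 6.0.2 system has loaded before migrating (saving to /volume1 is just an assumption; keep the list anywhere off the box):
     lsmod > /volume1/loaded_modules.txt       # all currently loaded modules
     lspci -k >> /volume1/loaded_modules.txt   # maps each controller to the driver in use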
  8. IG-88

    DSM 6.1.x Loader

     OK, I assumed shutting down DSM and plugging the USB stick into another computer; I didn't take the "running" part that seriously.
  9. The first partition (grub.cfg) is always shown (any Windows version with USB and FAT support); the second partition (extra.lzma) is shown since the Windows 10 Creators Update (1703 or 10.0.15063, aka Redstone 2), and this now seems mandatory for all Win10 consumer installations (as far as Microsoft is concerned, if you let them do it via Windows Update).
  10. Did you read the tutorial about it on that site? https://xpenology.club/setup-xpenology-vmware-workstation-12/ I'd expect it to work, as the author has tested it. Most people here use ESXi or (under Windows) VirtualBox, as both are free to use.
  11. I was referring to "StarWind V2V Image Converter".
  12. With his 1.02a, Jun delivered not just the *.img file; he also had a VMDK file (text) that referenced the IMG file. The later releases did not contain this. I'm still using this VMDK "template" for VirtualBox VMs (with 1.02b); the same way works for this (ESXi is kind of common here). There is also a converter for VMDK files, but I can't remember the name or find the topic where it's mentioned.
  13. IG-88

    DSM 6.1.x Loader

     The files and their dates in the image are a hint, e.g. on the first partition (easy to access when the USB stick is in another system like Windows or Linux), folder "grub", the file "grubenv":
     27.02.2017 - 1.02a
     10.04.2017 - 1.02a2
     17.06.2017 - 1.02b
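     A quick sketch for checking that date on Linux (that the first partition shows up as /dev/sdb1 is an assumption; adjust to your system):
     mkdir -p /mnt/synoboot
     mount /dev/sdb1 /mnt/synoboot
     ls -l /mnt/synoboot/grub/grubenv    # the file date hints at the loader version
     umount /mnt/synoboot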
  14. The serial is only important for some functions like hardware transcoding (on 916+), and one or two wrote that they have used the SN of a real hardware 916+ to get it working. If you read the forum rules you will find that tossing around serials and generators is not allowed, so discussion about that is limited. (Besides, I don't care if Synology is messing around too much; there are other solutions than DSM, like OpenMediaVault, and they have much fewer problems with hardware support, which makes up about 70%-90% of the traffic here.)
  15. What hardware? Storage and network controllers have to be supported by the bootloader you are using. Did you try the additional extra.lzma?
  16. Just look for topics containing HP Gen8 or use the search function. I don't think you will find a more specific/detailed upgrade description than in this topic for your specific hardware; just read what people say and what (specific) problems they had with that model. It is often used, so you will find information, and you can also PM an owner of such a system for advice.
  17. If you don't have a backup and want to know for sure, I suggest you test it with a VM and some virtual disks. Set up a VM with 3 virtual disks, install 5.2 and create a RAID5 after setup, copy some data to the new volume (or just create a text file, something that will be recognized later), shut down the VM, "unplug" the disks, copy the disk files to your new VM with 6.x and "plug" them in. You should see the same notification as with your real disks and can try what happens; if it works you will see your test data (or text file) as the data volume.
  18. IG-88

    DSM6 and SAS HP410

     As there is no cciss.ko in DSM or the boot image (Jun's or my extension), how could there be a conflict? We will see what the new driver brings. BTW, it's from here: https://sourceforge.net/projects/cciss/files/hpsa-3.0-tarballs/
  19. As I understand it, it will only overwrite the 5.2 installation on the added disk, as in: add the first two partitions to the RAID1 sets md0 and md1. After this I would expect that DSM finds the data on the disk (if it was a single disk without RAID). There is no way back to 5.2, because the system (first two partitions) is replaced with 6.x; plugins and plugin configuration are stored in the (3rd) data partition. If you want an upgrade, you should have the 5.2 system booting, shut down, replace the boot media (in the case of VMware not a USB stick, it will be a VMDK I guess) with the one for 6.x and boot; the Synology Assistant should recognize the old system, update it and take over as many settings as possible to the 6.x installation. I'm not sure what you want, a clean fresh 6.x install or an update; maybe read this about migration.
  20. No, those are too old and for kernel 2.4/2.6; the kernel of DSM 6.1.x is 3.10.102.
  21. How about this? https://www.dsebastien.net/2015/05/19/recovering-a-raid-array-in-e-state-on-a-synology-nas/
  22. Please try the new 3.2 version with the driver compiled from external source (the former one was the default driver from kernel 3.10.102). https://xpenology.com/forum/topic/7967-driver-extension-for-jun-102b3615xsdsm613/?do=findComment&comment=77381
  23. IG-88

    DSM6 and SAS HP410

     Please try the new 3.2 version with the driver compiled from external source (the former one was the default driver from kernel 3.10.102).
  24. There is a new test version 3.2 with the following changes:
      - external source - Realtek RTL8152/RTL8153 based USB Ethernet adapters, r8152.ko (0008-r8152.53-2.09.0.tar.bz2) -> already tested and OK
      - external source - Broadcom Tigon3, tg3.ko (tg3_linux-3.137k.tar.gz) -> ???
      - external source - HP Smart Array SAS, hpsa.ko (hpsa-3.4.20-100.tar.bz2) -> ???
      http://s000.tinyupload.com/?file_id=91168375327128238944
  25. Yes, an mdadm RAID1 device is created over all disks. If a new empty disk is initialized, DSM will create 2 partitions, one for DSM and one for swap, and add them to /dev/md0 and /dev/md1.
      cat /proc/mdstat
      will show you the existing configuration. To make the system start from one single specific disk and ignore the other, one way could be to format /dev/md0 and /dev/md1 (not deleting the partitions, just wiping the file system of the old md0/md1) and leave the 3rd, /dev/md2, as it is. When the system boots there is then just one system to use, and DSM can expand the system to the already existing (empty) partitions and also recover the old /dev/md2. Is that going to work? I don't know! You can try this scenario with a VM and some virtual disks before you do it with the real disks. But there are different ways possible to do what you want; maybe someone comes up with a way he already tested.
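      A rough sketch of how that test in the VM could start; the wipe step is untested (as said above), and wipefs may have to be run from a rescue Linux if DSM does not ship it, so treat it as an assumption and use virtual disks only:
      cat /proc/mdstat             # list the md arrays and their member partitions
      mdadm --detail /dev/md2      # confirm which partitions hold the data array that must stay untouched
      wipefs -a /dev/md0           # wipe only the file system signature of the old system array
      wipefs -a /dev/md1           # same for the old swap array; the partitions stay in place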