Leaderboard


Popular Content

Showing content with the highest reputation since 01/14/2020 in all areas

  1. 2 points
This is the first (experimental?) test version of the driver extension for loader 1.04b and 918+ DSM 6.2.2.

edit 1/2020: at the moment I would not recommend using the 918+ 0.6_test anymore; there will be a package with newer drivers shortly, and it will also take the different cpu/igpu types into account (in some cases jun's new i915 driver can prevent systems from booting, seen on J1xxx and N42xx).

Additional information and packages for 1.03b and 3615/3617 are in the lower half, under a separate topic.

Tested as a fresh install with the 1.04b loader and DSM 6.2.2. There are extra.lzma and extra2.lzma in the zip file - you need both. The "extra2" file is used when booting the 1st time, and under normal working conditions the extra.lzma is used (I guess also for normal updates - jun left no notes about that, so I had to find out and guess). Hardware in my test system that used additional drivers: r8168, igb, e1000e, bnx2x, tn40xx, mpt2sas. The rest of the drivers just load without any comment on my system; I've seen drivers crash only when real hardware is present, so be warned.

extra.lzma/extra2.lzma for loader 1.04b ds918+ DSM 6.2.2 v0.6_test
http://s000.tinyupload.com/?file_id=29784352988385987676

!!! still a network limit in the 1.04b loader for 918+ !!!
At the moment 918+ has a limit of 2 nics (same as the original hardware). If more than 2 nics are present and you can't find your system on the network, then after boot you will have to try which nic is "active" (not necessarily the onboard one), or remove the additional nics and look into this after installation. You can change synoinfo.conf after install to support more than 2 nics (with 3615/17 it was 8; keep in mind that a major update will reset it to 2 and you will have to change it manually again, just as when you configure more disks than jun's default setting provides) - more information is already in the old thread about 918+ DSM 6.2.(0). I might change that later so it is set the same way jun's patch sets the higher disk count - syno's max disk default for this hardware was 4 disks, but jun's patch changes it on boot to 16!!! (so if you have 6+8 sata ports you should not have the update problems you used to have with 3615/17).

I will extend here what was in the old thread for 6.2, but at the moment I'm more inclined to add 3615/17 support. Basically what is on the old page is still valid, so no sata_* or pata_* drivers. Here are the drivers in the test version listed as kernel modules:

The old thread as reference - !!! especially read "Other things good to know about DS918+ image and loader 1.03a2:", it is still valid for the 1.04b loader !!!

This section is about drivers for the ds3615xs and ds3617xs image/dsm version 6.2.2 (v24922). Both use the same kernel (3.10.105) but have different kernel options, so don't swap or mix; some drivers might work on the other system, some don't at all (kernel oops). It is a test version and it has limits when it comes to storage support - read carefully and only use it when you know how to recover/downgrade your system.

!!! Do not use this to update if you have a storage controller other than AHCI, LSI MPT SAS 6Gb/s Host Adapters SAS2004/SAS2008/SAS2108/SAS2116/SAS2208/SAS2308/SSS6200 (mpt2sas) or LSI MPT SAS 12Gb/s Host Adapters SAS3004/SAS3008/SAS3108 (mpt3sas - only in 3617). Instead, you can try a fresh "test" install with a different usb flash drive and an empty single disk on the controller in question, to confirm whether it works (most likely it will not, reason below) !!!

The reason why the 1.03b loader from usb does not work when updating from 6.2.0 to 6.2.2 is that the kernel from 6.2.2 has different options set, which makes the drivers built before that change useless (it is not a protection measure or anything). The dsm update process extracts the new files for the update to HDD, writes the new kernel to the usb flash drive and then reboots - resulting (on USB) in a new kernel plus an extra.lzma (jun's original from loader 1.03b for dsm 6.2.0) that now contains incompatible drivers. The only drivers working reliably in that state are the drivers that come with dsm from synology.

Besides the different kernel options there is another thing: nearly none of the newly compiled scsi and sas drivers worked. They only load as long as no drive is connected to the controller. At the moment I assume there were some changes in the kernel source about counting/indexing the drives for scsi/sas, and as we only have the 2.5-year-old dsm 6 beta kernel source there is hardly a way to compensate. People with 12Gbit SAS controllers from LSI/Avago are in luck: the 6.2.2 of 3617 comes with a much newer mpt3sas driver than 6.2.0 and 6.2.1 (13.00 -> 21.00), confirmed to install with a SAS3008 based controller (ds3617 loader).

Drivers not in this release: ata_piix, mptspi (aka lsi scsi), mptsas (aka lsi sas) - these are drivers for extremely old hardware and mainly important for vmware users. Also, vmw_pvscsi is confirmed not to work, bad for vmware/esxi too. The only alternative scsi driver is buslogic; the "normal" choice for vmware/ESXi would be SATA/AHCI.

I removed all drivers confirmed not to work from rc.modules so they will not be loaded, but the *.ko files are still in the extra.lzma and will be copied to /usr/modules/update/, so people who want to test can load a driver manually after booting. These drivers will be loaded and are not tested yet (likely to fail when a disk is connected): megaraid, megaraid_sas, sx8, aacraid, aic94xx, 3w-9xxx, 3w-sas, 3w-xxxx, mvumi, mvsas, arcmsr, isci, hpsa, hptio (for some explanation of what hardware this means, look into the old thread for loader 1.02b).

virtio drivers: I added virtio drivers; they will not load automatically (for now). The drivers can be tested, and once confirmed working we will see whether there are any problems when they are loaded by default along with the other drivers. They should be in /usr/modules/update/ after install.

To get a working loader for 6.2.2 you need the new kernel (zImage and rd.gz) and a (new) extra.lzma containing new drivers (*.ko files). zImage and rd.gz are copied to usb when updating DSM, or can be extracted manually from the 6.2.2 DSM *.pat file and copied to usb manually - and that's the point where it splits into two cases:
case 1: update from 6.2.0 to 6.2.2
case 2: fresh install with 6.2.2, or "migration" (aka upgrade) from 6.0/6.1

Case 1: update from 6.2.0 to 6.2.2
Basically you semi-brick your system on purpose by installing 6.2.2, and when booting fails you just copy the new extra.lzma to your usb flash drive, either by plugging it into a windows system (which can only mount the 2nd partition, the one that contains the extra.lzma) or by mounting the 2nd partition of the usb on a linux system. Restart, and it will then finish the update process; when internet is available it will (without asking) install the latest update (at the moment update4) and reboot, so check the DSM web interface to see what's going on, or if in doubt wait 15-20 minutes, check whether the hdd leds are active and check the web interface or synology assistant. If there is no activity for that long, then power off and start the system again; it should work now.

Case 2: fresh install with 6.2.2 or "migration" (aka upgrade) from 6.0/6.1
Pretty much the normal way as described in the tutorial for installing 6.x (jun's loader, osfmount, Win32DiskImager), but in addition to copying the extra.lzma to the 2nd partition of the usb flash drive you need to copy the new kernel of dsm 6.2.2 too, so that the kernel (booted from usb) and the extra.lzma "match". You can extract the 2 files (zImage and rd.gz) from the DSM *.pat file you download from synology:
https://archive.synology.com/download/DSM/release/6.2.2/24922/DSM_DS3615xs_24922.pat
or
https://archive.synology.com/download/DSM/release/6.2.2/24922/DSM_DS3617xs_24922.pat
These are basically zip files, so you can extract the two files in question with 7zip (or other programs). You replace the files on the 2nd partition with the new ones and that's it - install as in the tutorial. In the case of a "migration" the dsm installer will detect your former dsm installation and offer to upgrade (migrate) the installation; usually you will lose plugins, but keep users/shares and network settings.

DS3615: extra.lzma for loader 1.03b_mod ds3615 DSM 6.2.2 v0.5_test
http://s000.tinyupload.com/?file_id=87576629927396429210
DS3617: extra.lzma for loader 1.03b_mod ds3617 DSM 6.2.2 v0.5_test
http://s000.tinyupload.com/?file_id=80273327432412263889
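For Case 2, pulling the two kernel files out of the 6.2.2 PAT file can be scripted; here is a minimal sketch on a Linux box with 7-Zip (the 7z command usually comes from the p7zip package - on Windows the 7-Zip GUI does the same job, since the .pat is essentially a zip archive):

wget https://archive.synology.com/download/DSM/release/6.2.2/24922/DSM_DS3615xs_24922.pat
7z e DSM_DS3615xs_24922.pat zImage rd.gz     # extract just the kernel and initrd
# then copy zImage, rd.gz and the new extra.lzma onto the loader's 2nd (30MB) partition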
  2. 2 points
Mount Partition 1 in OSFMount and overwrite the extra.lzma file (and for the 918, extra2.lzma as well). Those of you whose Intel 219 NIC wasn't working, give it a try.
  3. 2 points
15 - What is RAIDF1 and why would I want to use it?

RAIDF1 is a modification of RAID5, implemented with a customization of MDRAID (the disk array manager used by DSM). It is specifically tuned to minimize the likelihood of SSDs wearing out at the same time. SSDs have a finite lifespan based on the number of times they are written. This information is usually presented as a "wear indicator" or "life remaining" number from 100 (new) counting down to 0 (end of service life). Most operating systems, including DSM, monitor SSD health using SMART and will alert when devices near the end of their service lives, and prior to failure. An array of brand-new SSDs is consumed evenly because of how RAID5 intentionally distributes workloads evenly to the array members. Eventually, the SSDs all wear out together, which can result in multiple, simultaneous drive failures and subsequent data loss.

How does RAIDF1 work?

RAIDF1 attempts to avoid this by biasing writes to a specific drive in the array. To understand this, consider how the DSM btrfs and ext4 filesystems store data. By default, DSM filesystems save data in 4K blocks; even a 1-byte file uses 4K as a minimum amount of space. Modern HDDs/SSDs also store data in 4K sectors. When a byte must be changed, all the other bytes within the sector are read, then rewritten at the same time. This read/write requirement is called write amplification, and it affects the performance of all parts of the storage ecosystem, from HDDs and SSDs to filesystems to RAID arrays.

MDRAID also works with blocks, but they are called chunks to differentiate them from filesystem blocks. The default chunk size for DSM RAID5/6/10 is 64K. A stripe is the logical grouping of adjacent chunks spanning the array members horizontally. Using the example of a RAID5 with three drives, two of the chunks in the stripe contain data and the third chunk is parity. When DSM performs data scrubbing, it reads all three chunks, then validates all the data and parity in each stripe for mathematical consistency (and corrects it if necessary). Each stripe rotates the position of the parity chunk successively through the array members. In the three-drive example, stripe 1's parity chunk is on drive 1, stripe 2's parity chunk is on drive 2, stripe 3's parity chunk is on drive 3, stripe 4's parity chunk is back on drive 1, and so on. This results in an even distribution of data and parity across all array members.

Note that many files (filesystem blocks) may be stored in one chunk. The highest-density case is 16 files of 4K or smaller in a single chunk. Consider that when one of those files changes, only two of the three chunks in the stripe must be rewritten: first, the chunk containing the block containing the file, and then the parity chunk (since the parity calculation must be updated).

RAIDF1 subtly modifies the RAID5 implementation by picking one of the array members (let's call it the F1-drive) and sequencing two consecutive stripes in the stripe parity rotation for it. This is NOT additional parity (each stripe still has only one parity chunk), so there is no loss of space or read/write performance.

The table below compares parity distribution (how much of the total parity is stored on specific array members) between RAID5 and RAIDF1:

Array configuration | Drive #1 parity | Drive #2 parity | Drive #3 parity   | Drive #4 parity   | Drive #5 parity
3-drive RAID5       | 33.33%          | 33.33%          | 33.33%            |                   |
4-drive RAID5       | 25%             | 25%             | 25%               | 25%               |
3-drive RAIDF1      | 25%             | 25%             | 50% (F1-drive)    |                   |
4-drive RAIDF1      | 20%             | 20%             | 20%               | 40% (F1-drive)    |
5-drive RAIDF1      | 16.66%          | 16.66%          | 16.66%            | 16.66%            | 33.33% (F1-drive)

With RAIDF1, anytime a full stripe is written, I/O is evenly distributed among the drives, just like RAID5. When a small file or file fragment (one that does not span a stripe) is written, on average the F1-drive will be used about twice as often as the other drives. Thus, the F1-drive will experience accelerated wear and will reach its life limit first. It can then be replaced with minimal risk of one of the remaining members failing at the same time. Upon replacement, DSM selects the SSD that is closest to being worn out and designates it as the new F1-drive. The array sync then rewrites the array to achieve the desired RAIDF1 parity distribution. Note that the total number of write events is not increased with RAIDF1; "total cost of ownership" does not change, as the extra writes to the F1-drive are avoided on the other array members, so they last longer.

Caveats and other notable issues

As a RAID5 variant, RAIDF1 creates arrays based on the smallest member device. For best results, all the drives should be the same size and type (a larger drive can be used, but the extra space will be ignored). RAIDF1 can theoretically be "defeated" by installing dissimilar drives, with one drive having significantly higher capacity and/or a high DWPD (drive writes per day) rating. If that drive were then selected as the F1-drive, it might have enough write capacity to outlast the other array members, which could then fail together. Always using identical SSDs for the array will avoid this freak occurrence.

SHR (Synology Hybrid RAID) allows drives of different sizes to be used in a redundant array while maximizing the space available. This is done by creating a series of arrays, including a small one compatible with the smallest drive and a large one using the available space common to the largest drives, and possibly some in between depending upon the makeup and complexity of the SHR. The arrays are then concatenated into a single logical volume (using LVM) available for use within DSM. For redundancy, the large SHR drives must be members of all the arrays. The small SHR drives contain only one array and not much of the overall data, and are accessed much less frequently than the larger drives. For RAIDF1's algorithm to produce expected results, array write patterns must be simple and predictable. In summary, RAIDF1 and SHR array behaviors are not compatible with each other, which is reflected in the Synology DiskStation product lines. The Synology models that support RAIDF1 are the same as those that do not officially support SHR. This includes the XPEnology-enabled DS3615xs and DS3617xs platforms. Note that SHR can be enabled on these platforms by modifying /etc.defaults/synoinfo.conf, with no impact to RAIDF1 functionality.

The MDRAID modifications that enable RAIDF1 are compiled into the DSM kernel. The consumer-oriented DSM platforms do not contain those changes, including the XPEnology-enabled DS916+ and DS918+ platforms. Creation and maintenance of a RAIDF1 array is not possible on those systems.

However, just like SHR, an established RAIDF1 array is completely functional and behaves like any other RAID5 array when migrated to a platform that does not support it. Brilliant!

TRIM helps minimize the impact of write amplification on SSDs. Because the F1-drive is written to more frequently, it will be affected by write amplification more severely than the other array members, and the performance of both the drive and the array will degrade over time unless TRIM support is enabled.

Finally, there is no RAID6-based, SSD-optimized choice yet. Perhaps RAIDF2 will be an option in DSM 7.0.

References

If you want to install RAIDF1 on XPEnology, you will find a simple tutorial here
https://en.wikipedia.org/wiki/Standard_RAID_levels
https://global.download.synology.com/download/Document/Software/WhitePaper/Firmware/DSM/All/enu/Synology_RAID_F1_WP.pdf
http://wiki.linuxquestions.org/wiki/Block_devices_and_block_sizes
https://raid.wiki.kernel.org/index.php/RAID_setup#Chunk_sizes
https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm
https://www.synology.com/en-sg/knowledgebase/DSM/tutorial/Storage/Which_Synology_NAS_models_support_RAID_F1
https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/Which_models_have_limited_support_for_Synology_Hybrid_RAID_SHR
https://en.wikipedia.org/wiki/Trim_(computing)
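The percentages in the table above follow from the modified rotation: if an n-drive RAIDF1 cycles parity through n+1 stripe positions and two of those land on the F1-drive, the F1-drive holds 2/(n+1) of the parity and each other drive 1/(n+1). A quick sketch to reproduce the table values (plain shell + awk, nothing DSM-specific; the n+1 cycle length is my reading of the description above):

for n in 3 4 5; do
  awk -v n="$n" 'BEGIN {
    printf "%d-drive RAIDF1: F1-drive %.2f%%, each other drive %.2f%%\n", n, 200/(n+1), 100/(n+1)
  }'
done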
  4. 1 point
Kernel modules/drivers are compiled specifically for a kernel (version) and even for a distribution. It's not like windows, where you can download a driver somewhere and just drop it in. So don't take any random *.ko file, stick it in and expect it to work - if you haven't built the *.ko yourself or don't know exactly where it came from, expect it to fail.

I'm no expert, but as there is no how-to here in the forum, let's start one; hopefully others will correct and help refine it, or take over and rewrite it. Some steps are done in windows (osfmount) but are also possible in the chroot environment on linux. Basic knowledge of a linux console and command-line tools (or midnight commander) is needed; if you have never used these you should not start with this how-to - choose something easier or invite someone who is able to help (do a workshop). Doing all this from scratch will take at least 1-2 hours, in most cases (re-read, google, try, google, try again, ...) much longer, so maybe plan a weekend of text-adventure fun.

edit: I think it will also do for 6.0.2 and loader 1.01 (not tested); kernel sources are available: https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/8451branch/bromolow-source/linux-3.10.x.txz/download. extra.lzma is placed a little differently (boot.img\image\DS3615xs\) but the steps will be the same.

1. building the kernel module (driver)

1.1 what driver/module do I need
You will have to find out (google) what the name of the driver/module is that your hardware needs, or you will have to know where to find the right option in the kernel's menu system when configuring it. Example: nForce 630 chipset with RTL8211E - you might expect it to be a realtek driver like rtl*.ko, but it's not; it's "forcedeth.ko", because the RTL8211 is not a fully working PCIe network chip. In some cases you might be forced to find out by booting a linux distribution and looking in /var/log/, using lspci or other tools. It also helps if the hardware vendor has already compiled packages for specific distributions like redhat; you can look inside these packages for *.ko files. You can also look in the kernel's .config file (more below) with a text editor to find the section where the module is mentioned, which will also give you a hint where to find it in the menu system when configuring the kernel.

1.2 you need the kernel source
In the case of synology that sometimes seemed difficult, but at the moment there is kernel source for dsm 6.1: https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/
15047 is the synology build version; it tells you what dsm version it is (15047 = dsm 6.1) and what kernel was used to build the modules. It !!!might!!! change in a later version, so always check what version the bootloader on your usb stick is made for (jun 1.02 is for 15047). edit: dsm 6.1.1 has a new build number, 15101, but seems to use the same kernel 3.10.102 as 6.1, so it should work with 6.1.1 too.
As I write this for the ds3615xs, the platform is bromolow; for ds3617xs it's broadwell and for ds916+ it's braswell (you usually see that name in the update files for a synology system, like "synology_broadwell_3617xs.pat"). So for ds3615xs we use this: https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/15047branch/bromolow-source/linux-3.10.x.txz/download
edit: it looks like there is no difference between kernel modules built for 3615 and 3617; even though https://sourceforge.net/projects/dsgpl/files/Synology%20NAS%20GPL%20Source/15047branch/ has separate kernel sources for bromolow and broadwell, they are all intel x86_64. The same might go for the 916+ (not tested yet); at least an evdev.ko built from the 3615 kernel source loaded without problems in a vm with the 3617 image.

1.3 setting up a DSM 6.1 ds3615xs test environment
Use virtualbox (it's free) or whatever works with jun's loader - look in the forum to find something that works. Basics for virtualbox are:
- the mac of the nic (intel pro 1000 desktop) in the vm and in grub.cfg need to be the same
- the boot controller for jun's image (vmdk with a reference to the img file) is ide, the controller for the dsm disks is scsi lsi (!!!)
- choose the esx server option in the grub menu

1.4 installing chroot
Add http://packages.synocommunity.com as a source for custom packages and change the setting so that, besides synology packages, packages from trusted publishers can also be installed. Install the debian-chroot plugin (https://synocommunity.com/package/debian-chroot) from the community section (some info about it: https://github.com/SynoCommunity/spksrc/wiki/Debian-Chroot#configure-services). You might also install midnight commander while you're at it; it makes things easier if you're not a command-line junkie and are more used to a graphical environment that gives you clues.
Activate ssh in dsm, connect with ssh/putty to your dsm, log in as user admin (and if you want to be root use "sudo -i"). Start the chroot with:
/var/packages/debian-chroot/scripts/start-stop-status chroot
After that you are inside the chroot; check with ls and you won't see "/volume1" or other synology-specific directories from the dsm environment. You can leave the chroot environment with "exit" later if you want. Now we have to update and install tools:
apt-get update
apt-get upgrade
apt-get install locales
dpkg-reconfigure locales
dpkg-reconfigure tzdata
apt-get install mc make gcc build-essential kernel-wedge libncurses5 libncurses5-dev libelf-dev binutils-dev kexec-tools makedumpfile fakeroot linux-kernel-devel lzma bc
After that we create a directory (let's say "test").
1.5 copying kernel files and creating kernel modules
Copy the downloaded kernel (linux-3.10.x.txz) to a share on the dsm, open a 2nd putty session and copy the linux-3.10.x.txz (/volume1/...) to /volume1/@appstore/debian-chroot/var/chroottarget/test/ (that's where the "test" directory of the chroot environment is located on your real system). Change back to your first putty session where you are in the chroot (the same way can be used to get the created files back to your shared folder on volume1, which can be accessed from windows).
Change into "test", extract the linux-3.10.x.txz to a directory named "linux-3.10.x" and change into it. The following copies the kernel config file from synology to the right place for the build:
cp synoconfigs/bromolow .config
Make a fallback copy of .config if you like, then run:
make ARCH="x86_64" oldconfig
Now we start the ascii-art menu and search for the missing driver to activate. Cursor/return are your friends for navigating, space selects; set the driver to "M" so it is built as a module (the *.ko file we need). There are tons of descriptions of how to do this, just google if needed:
make ARCH="x86_64" menuconfig
On exit we save the configuration, and with the following we build the modules (this will take a while):
make ARCH="x86_64" modules
Now you have to find your *.ko file (use some nice ls options, to be expanded later); usually you will have to look in /test/linux-3.10.x/drivers/scsi or drivers/block. Copy that file to /test for easy access when we put it into the boot image.
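To keep section 1.5 in one place, here is the command sequence condensed into a sketch (the /test path and the "test" directory follow the example above; adjust to your own setup, and note that tar needs xz/lzma support for the txz archive):

cd /test
mkdir linux-3.10.x
tar -xf linux-3.10.x.txz -C linux-3.10.x     # adjust if the archive already contains a top-level directory
cd linux-3.10.x
cp synoconfigs/bromolow .config              # synology's kernel config for ds3615xs
cp .config .config.bak                       # optional fallback copy
make ARCH="x86_64" oldconfig
make ARCH="x86_64" menuconfig                # set the driver you need to "M" (module)
make ARCH="x86_64" modules                   # builds the *.ko files
find drivers -name '*.ko'                    # locate the freshly built module (often under drivers/scsi or drivers/block)
# copy the .ko you found to /test for easy access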
2. modifying the "synoboot.img"
Use osfmount (windows) to extract the "extra.lzma" (see the dsm 5.2 to 6.0 guide, where osfmount is used to edit grub.cfg in synoboot.img). "extra.lzma" contains the additional *.ko files and a config file that names the files to be loaded on boot -> see the forum thread "dsm 5.2 to 6.0" with the howto on modifying jun's loader for usb vid/pid and mac; it's basically the same, you just open the other partition (30MB) and extract the "extra.lzma".
Copy the "extra.lzma" to a share on the dsm so we have local access in a putty session on the dsm. In putty session #2 (a "normal" session without chroot) we copy the "extra.lzma" to the "test" directory in the chroot environment. Go to putty session #1 (in the chroot) and decompress "extra.lzma" to "extra" ("extra.lzma" is a compressed cpio file) with:
lzma -d extra.lzma
With ls we can check that "extra.lzma" is now just "extra" (a cpio file without the .cpio extension). Create a new directory, copy the "extra" there, change into it and extract it with:
cpio -idv < extra
Delete the remaining file "extra". Inside this directory we copy the *.ko file into usr/lib/modules/, and in etc/ we edit the file rc.modules (easy with midnight commander: go to the file, press F4 for the internal editor). Network drivers seem to be added under EXTRA_MODULES, storage drivers under DISK_MODULES; just go to the end of the line and fill in the name of the *.ko file without the ".ko" - what you add is basically a space and the name. rc.modules looks like this:
EXTRA_MODULES="mii mdio libphy atl1 atl1e atl1c alx uio ipg jme skge sky2 ptp_pch pch_gbe qla3xxx qlcnic qlge netxen_nic sfc e1000 pcnet32 vmxnet3 bnx2 libcrc32c bnx2x cnic e1000e igb ixgbe r8101 r8168 r8169 tg3 usbnet ax88179_178a button evdev ohci-hcd"
DISK_MODULES="BusLogic vmw_pvscsi megaraid_mm megaraid_mbox megaraid scsi_transport_spi mptbase mptscsih mptspi mptsas mptctl ata_piix megaraid_sas mpt2sas mpt3sas"
EXTRA_FIRMWARES="bnx2/bnx2-rv2p-09ax-6.0.17.fw bnx2/bnx2-rv2p-09-6.0.17.fw bnx2/bnx2-rv2p-06-6.0.15.fw tigon/tg3_tso5.bin tigon/tg3_tso.bin tigon/tg3.bin"
If your controller or nic needs a firmware, you add the file under usr/lib/modules/firmware/ and add the appropriate entry in EXTRA_FIRMWARES; if an extra directory inside "firmware" is used, it has to be added to the name - see the bnx2 firmware files.
After everything is in place we recreate the cpio file, re-compress it as lzma and write it to the directory above as "extra.lzma". The command is used inside the directory where we extracted the file "extra" (command line taken from https://github.com/kref/scripts, it's what jun uses to create it):
(find . -name modprobe && find . \! -name modprobe) | cpio --owner root:root -oH newc | lzma -8 > ../extra.lzma
In putty session #2 (without the chroot) we copy "extra.lzma" from its location in the chroot filesystem to the location where we can access it from windows. If you still have osfmount open on the "synoboot.img", replace the "extra.lzma" with the new one, dismount and close osfmount - our new "synoboot.img" is ready to test.
ps: I was asked to make a video - that's much harder to change, and I'm too old for this
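For reference, the whole extra.lzma round trip from section 2 in one condensed sketch (run inside the chroot in the "test" directory; "unpacked" and the module file name are just placeholders):

lzma -d extra.lzma                            # leaves an uncompressed cpio archive named "extra"
mkdir unpacked && cp extra unpacked/ && cd unpacked
cpio -idv < extra                             # unpack the ramdisk contents
rm extra
cp /test/your_module.ko usr/lib/modules/      # placeholder name - use your *.ko file
vi etc/rc.modules                             # or mcedit: add the module name to EXTRA_MODULES or DISK_MODULES
(find . -name modprobe && find . \! -name modprobe) | cpio --owner root:root -oH newc | lzma -8 > ../extra.lzma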
  5. 1 point
The vid and pid are written incorrectly:
set vid=13FE
set pid=3623
It should be:
set vid=0x13FE
set pid=0x3623
  6. 1 point
    Hmm, it turns out norecovery is not a valid option in Syno's compile of btrfs. Investigating.
  7. 1 point
That topic comes from the international side of the forum, and they don't spoil us with translations. This seems to be the main topic.
  8. 1 point
    Please do NOT repair, that would be catastrophic at this point.
  9. 1 point
Grab a USB stick and an empty disk and just try it - you can always wipe the disk or the USB stick and start over (for the USB stick you may need a different tool than the Windows disk management before you can write a new image to it with Win32DiskImager).
  10. 1 point
I meant something slightly different. DS918+ DSM 6.1.2 23739 - I updated from this DSM version to the latest one. As I recall, 6.2.2 didn't come up right away either.
  11. 1 point
Look in the BIOS for something like Integrated Video or Primary Video Controller and set it to Disable. The entry may be named differently, but along the same lines. BIOSes vary.
  12. 1 point
99.9999999999% you entered the VID/PID incorrectly in grub.cfg - check it.
  13. 1 point
They don't change. The main ones don't change. Strange things are happening on your end, though... Let me ask - it may be a silly question, but this has happened before... Did you do this? "Be sure to untick the option: read only. Mount the image and open the grub.cfg file we need." Well, then it's probably worth trying this.
  14. 1 point
Yes, the files on the boot USB stick do get changed; this has already been discussed somewhere.
  15. 1 point
I decided to make life a bit easier for newcomers and for those who have forgotten where things are.
1. Link to the loaders from 5.0 to 6.2
2. How to install, using loader 1.04b for DSM 6.2 (918+) as an example
3. Compatibility of loaders 6.0-6.2 with hardware
4. Testing, and how to check whether transcoding works, using the Asrock J4105-itx as an example; building an extra.lzma with disk hibernation is covered there as well
5. How to edit grub.cfg and replace extra.lzma on a running box
6. Package to enable the hardware power-off button on the box's case (latest version 6.2-0002, which is what the link points to)
7. Correct display of the CPU in the Info Center
8. Librusec on the box via COPS (downloading fb2 and mobi to a Wi-Fi e-reader straight from the box)
9. Torrent TV via Ace Stream in docker (current commands in post ID 273, instructions in the one after it)
Please don't discuss the links here; add your own if you think they'd be useful.
  16. 1 point
It boots on the first try.
1. Format the disk
2. Download the loader with the modified extra.lzma v1.04b: https://mega.nz/#!IAsmwSBL!9xpLSlkxl-jkWsCeN-f-Zsm4qPXLLGE-Yj-MNigyiWk - change the vid/pid and write it to a USB stick
3. Install the system
  17. 1 point
Good evening. DSM 5.2-5644 Update 5
  18. 1 point
You may need to add some drivers... And what made you choose the 918+? After all, 3615 is the most universal version.
  19. 1 point
To start with, I would check whether the config was edited correctly, specifically the VID/PID.
  20. 1 point
I got an HP NC360T. I just installed it and I now have an IP. The nvidia chipset is not recognized by xpenology, but the PERC card is. That's perfect.
  21. 1 point
@diqipib It seems like there are many revisions of that mainboard... But, first things first: have you identified the onboard SATA controller and verified the ports are set to AHCI? Edit: Never mind... It seems like your MB's chipset does not support AHCI. It has the ICH7 southbridge, not the ICH7R or ICH7M.
  22. 1 point
  23. 1 point
    Between the last mdstat and your current one, your /dev/sdp went offline - that is one of your 10TB drives. Check all your connections and cables, if they are "stretched" or not stable, secure them. Reboot. Post another mdstat. If you can't get your hardware stable, this is a lost cause. Standing by for status.
  24. 1 point
Have a look at these threads, they might help:
https://mariushosting.com/how-to-install-nextcloud-on-your-synology-nas/
https://luvis.se/software/install-nextcloud-on-synology-dsm-6/
  25. 1 point
@RangaWal Yes, the N40L has a Gb interface; make sure you have a good cable and that it's well connected at both ends. Check the network status on your NAS, and also check any link LEDs on your switch to see if it says 1Gb active - you might try another port?
  26. 1 point
    /dev/md2 has a drive with an error state so do a mdadm --detail for each array when it's done, along with the /proc/mdstat. We aren't out of the woods yet. If everything is healthy, it probably doesn't matter. The point of booting to Ubuntu is when there is a boot problem on DSM - you can get to the files to fix whatever's wrong with DSM. I haven't ever heard of someone transporting a broken array over to Ubuntu for the purposes of fixing it. Not sure how that would be better than working with the same MDRAID in DSM.
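If it helps, those checks boil down to something like this (md0/md1 are typically DSM's system and swap arrays; run --detail against whatever data arrays your mdstat lists):

cat /proc/mdstat
mdadm --detail /dev/md2    # repeat for each array shown in mdstat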
  27. 1 point
You might want to do this:
# echo 1000000 >/proc/sys/dev/raid/speed_limit_min
# echo 1000000 >/proc/sys/dev/raid/speed_limit_max
  28. 1 point
    Things are promising, yes. All three arrays are set to repair. Don't do anything from Storage Manager. You can monitor progress with cat /proc/mdstat. Post another mdstat when it's all done. It might be awhile.
  29. 1 point
Don't forget to mention that these are only for 6.2.2 and for the dsm type (3615?) you made them for; for 3617 you would need a different kernel config (broadwell) to compile, and for 918+ you would even need a different kernel source (4.4.59+). Anything special done to get the drivers to compile? How big are those additional drivers? Maybe I can add the dvbsky source to my build environment and add them to the "normal" extra.lzma. That is going to cause problems, as jun's original files are not compiled with the right kernel settings and some will crash when used with 6.2.2. If you publish something "mixed", include a proper warning with it - any driver that's not tested might crash.
  30. 1 point
If that is the case you would see the message from a little further up in the log, "pic_disable_link_state(_locked) error". It's in the 1st post here, the tutorial itself. Optional - as far as I recall I wanted to get rid of warnings when compiling; at least they are not used when loading drivers with jun's loader. I also attached the last kernel config for 3615 (bromolow) from the synology 6.2 toolkit, which already contains the "CONFIG_PCIEASPM" change.
edit: I also use the 6.2.2 toolchain, copied to the same directory as the kernel:
/test/linux-3.10.x/ - where the kernel source is (bromolow 22259 linux-3.10.x.txz from Synology NAS GPL Source)
/test/linux-3.10.x/x86_64-pc-linux-gnu/ - toolchain for 6.2.2 (bromolow-gcc493_glibc220_linaro_x86_64-GPL.txz from Synology NAS GPL Source)
So the make for the kernel looks like this when using the menu system for configuring the kernel:
make ARCH=x86_64 CROSS_COMPILE=/test/x86_64-pc-linux-gnu/bin/x86_64-pc-linux-gnu- menuconfig .config
  31. 1 point
    Kconfig option https://www.kernel.org/doc/Documentation/kbuild/kconfig-language.txt
  32. 1 point
Please put your questions in the section you took the content from; discussing it here does not help (off topic). Other people interested in compiling drivers might be interested too, or might profit from it later, once it is in the place where it belongs and has the right context.
  33. 1 point
An example for 44 drives (too lazy to go further 😂) - I think you get the algorithm by now:
0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 ==> eSATA ports (0 drives)  -> 0
0000 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 1111 ==> SATA ports (44 drives)  -> fffffffffff
0011 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 ==> USB ports (2 usb)       -> 300000000000
  34. 1 point
For example: each block of 4 binary digits is converted to HEX and written as the parameter value. The maximum number of disks is taken from the internalportcfg value.
maxdisks="14"
...
usbportcfg="0xff000"
...
internalportcfg="0xfff"
...
esataportcfg="0x300000"
0000 0000 0000 0000 0000 0011 0000 0000 0000 0000 0000 ==> eSATA ports (2 drives)  -> 300000
0000 0000 0000 0000 0000 0000 0000 0000 1111 1111 1111 ==> SATA ports (12 drives)  -> fff
0000 0000 0000 0000 0000 0000 1111 1111 0000 0000 0000 ==> USB ports (8 usb)       -> ff000
P.S. Find and change the parameter values in the files.
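A quick way to double-check such masks: bash can convert the binary strings directly (2#... is a base-2 literal). The lines below reproduce the example values above:

printf 'esataportcfg:    0x%x\n' "$((2#001100000000000000000000))"   # -> 0x300000 (2 eSATA)
printf 'internalportcfg: 0x%x\n' "$((2#111111111111))"               # -> 0xfff    (12 SATA)
printf 'usbportcfg:      0x%x\n' "$((2#11111111000000000000))"       # -> 0xff000  (8 USB)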
  35. 1 point
  36. 1 point
  37. 1 point
On 6.1 people managed to get 58 disks working. I played around with this on 6.2.2, admittedly in a VM, but the trick didn't work: after a reboot DSM demanded a migration, and after that it complained that the system partition was corrupted. Naturally, all the disk-configuration changes were lost.
  38. 1 point
I did indeed use dd, or otherwise the geekbench docker container, but I can't remember whether it included a disk speed test - I think it did, but I don't recall. A benchmark topic was created right here on the forum. EDIT: for speed tests between the NAS and my hackintosh I use Blackmagic Disk Speed Test, which has the advantage of being free.
  39. 1 point
Your hardware will do fine.
  40. 1 point
There are a lot of differences. Home Mode just matters to me. The way I see it, why install the old version when there's a new one?! Install 8.2.2, configure it as I described above, and everything will work fine.
  41. 1 point
Read up there on the differences between 8.0.3 and 8.2.2, and draw your own conclusion about which version you need.
  42. 1 point
https://www.synology.com/ru-ru/knowledgebase/Surveillance/tutorial/General/What_do_I_need_to_know_about_Home_Mode_in_Surveillance_Station - have a read here.
  43. 1 point
DVB drivers are not part of my package - please open a new thread for this kind of problem. You should ask the person who made these modules (presumably for dsm 6.2.0) whether he is willing to redo the drivers for 6.2.2. I've already documented the change that needs to be made when compiling new drivers for 6.2.2: https://xpenology.com/forum/topic/7187-how-to-build-and-inject-missing-drivers-in-jun-loader-102a/?do=findComment&comment=122631
  44. 1 point
Just a quick guide from my installation of DSM 6.2.1 Update 4, before I forget it. The process was much easier than I expected, thanks to all the contributions and sharing in this forum.
*** This guide requires a MODDED BIOS and an HP NC360T NIC ***
Hardware Setup
- Flash kamzata's modded BIOS: use the attached file to burn a USB stick with the modded BIOS, boot the server from it and let it complete the BIOS flashing
- Install the HP NC360T NIC (I bought a used one on Taobao for 14 bucks)
- Load "Optimal Defaults" from the BIOS, then: in the Advanced page, disable C1E Support; in the Chipset page, disable the onboard Atheros AR8132M NIC
DSM Installation
- Download Jun's Loader v1.03b DS3615xs (Synoboot_3615.zip | 17.3 MB | MD5 = e145097bbff03c767cc59b00e60c4ded)
- Download the PAT file DS3615xs DSM 6.2.1-23824 Update 4 (263MB)
- Prepare the USB boot drive: I followed Polanskiman's tutorial to write the generated SN and the USB vid and pid into grub.cfg in the boot image, then burned it to my USB drive (the steps below also come from Polanskiman's tutorial - really recommended to read it through)
- Boot the USB drive, select the 1st option (or just let it time out in a second), wait 10 minutes, then run Synology Assistant; your MicroServer should be found as a "Not Installed" DSM
- Right-click on the found DSM and select Install, then browse to the PAT file downloaded above. Installation will take some time (30 minutes with my 8TB drives)
Post Installation
- Enable SSH login
- Update packages
- Add the SynoCommunity package source - http://packages.synocommunity.com/
WIP
- Using a webcam as an IP cam with Surveillance Station
Kamzata Modded BIOS - run HPQUSB.rar
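For reference, the grub.cfg lines that need editing per Polanskiman's tutorial look like the following (hypothetical placeholder values - substitute your own generated SN and your USB drive's VID/PID/MAC):

set vid=0x058f          # your USB drive's VID
set pid=0x6387          # your USB drive's PID
set sn=C7LWN09761       # generated serial number
set mac1=0011322CA785   # MAC address of NIC #1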
  45. 1 point
You should step back and read your post again - you have given absolutely no information about what you used:
- what's the hardware (cpu, mainboard, network and storage controller if not onboard)?
- what loader (version number, type)?
- what additional files, like the *.pat file used to install dsm, or any extra.lzma for drivers?
- what tutorial did you use as a base?
- what version did you have before trying to reinstall?
Did you read the FAQ and the normal update tutorial here?
https://xpenology.com/forum/forum/83-faq-start-here/
https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/
  46. 1 point
- Outcome of the update: SUCCESSFUL
- DSM version prior update: Disk migration from DS3615xs/6.1.2/loader 1.02b
- Loader version and model: JUN'S LOADER v1.03b - DS3617xs
- Using custom extra.lzma: YES
- Installation type: BAREMETAL - HP Gen8 Micro 12Gb i5 3470T - New install - Dell H310 w/LSI 9211-8i P20 IT Mode + extension rack of SAS/SATA mixed HDD
- Additional comments: Gen8 onboard dual Broadcom NIC working (no need for an additional NIC thanks to native drivers from IG-88)
  47. 1 point
In addition to bricked boxes due to inattentive upgrades, there seems to be a surge of questions regarding how to select a DSM platform, version and loader. This table should help navigate the options and the current state of the loaders. While situations change rapidly, it should be correct as of the listed date.

6.x Loaders and Platforms as of 15-Jan-2020

Loader | DSM Platform | DSM Versions | Kernel | Boot Method | /dev/dri supported | NVMe cache supported | RAIDF1 supported | Required CPU Architecture | Max CPU Threads | Notes
1.04b | DS918 | 6.2 to 6.2.2 | 4.4.x | EFI or Legacy BIOS | Yes | Yes | No | Haswell or later, AMD Piledriver (unverified) | 8 | recommended; 6.2.2 fails on ASRock Qxxxx/Jxxx, fix with real3x mod or extra.lzma
1.03b | DS3617 | 6.2 to 6.2.2 | 3.10.x | Legacy BIOS only | No | No | Yes | Nehalem or later | 16 | 6.2.1+ panics without Intel e1000e NIC or extra.lzma
1.03b | DS3615 | 6.2 to 6.2.2 | 3.10.x | Legacy BIOS only | No | No | Yes | Nehalem or later | 8 | 6.2.1+ panics without Intel e1000e NIC or extra.lzma
1.02b | DS916 | 6.0.3 to 6.1.7 | 3.10.x | EFI or Legacy BIOS or MBR (Genesys) | Yes | No | No | Haswell or later, AMD Piledriver (unverified) | 8 |
1.02b | DS3617 | 6.0.3 to 6.1.6 | 3.10.x | EFI or Legacy BIOS or MBR (Genesys) | No | No | Yes | Nehalem or later | 16 | 6.1.7/ESXi failures reported
1.02b | DS3615 | 6.0.3 to 6.1.7 | 3.10.x | EFI or Legacy BIOS or MBR (Genesys) | No | No | Yes | Nehalem or later | 8 | recommended
1.01 | DS916 or DS3615 or DS3617 | 6.0 to 6.0.2 | 3.10.x | EFI or Legacy BIOS or MBR (Genesys) | No | No | - | Nehalem or later | - | obsolete
  48. 1 point
If you use the onboard network adapter (Broadcom), then for this update you need a new extra.lzma (that's where the drivers live): https://xpenology.com/forum/topic/21663-driver-extension-jun-103b104b-for-dsm622-for-3615xs-3617xs-918/
  49. 1 point
This tutorial is an update of the one I already wrote last year. The tutorial below lets you install/migrate from DSM 5.2 to DSM 6.1.7 directly, without having to go through DSM 6.0.2. If for one reason or another you want to migrate to DSM 6.0.2 first, or you simply don't want to migrate to DSM 6.1.7 but only to DSM 6.0.2, then use the link above. To update from DSM 6.0.2 to DSM 6.1.7, see here. As most of you already know, Jun pulled off quite a feat by creating a loader that allows DSM 6 to be installed on non-Synology hardware. Here is the discussion thread for those interested: https://xpenology.com/forum/topic/6253-dsm-6xx-loader/.

You will need a few tools. I assume you are running Windows 10, 8, 7, Vista or XP. If you are on macOS and understand English, refer to the post I made on how to write and mount the image on a USB key; you can then come back to this tutorial after doing the necessary steps on macOS. If you have any doubts, don't hesitate to leave a comment. If you are on DSM 5.1 or an earlier version, you first have to update DSM to version 5.2. If you are doing a fresh install of DSM 6.1 then you are good to go - simply ignore any reference to DSM 5.2.

Here is what you need:
- Win32 Disk Imager. Application used to make a USB key bootable.
- A USB key that will hold the loader. 4GB is more than enough. I recommend a brand-name key (Kingston, SanDisk or other); it will save you trouble down the road.
- How to find the VID and PID of your USB key >>> see here
- An advanced text editor. Notepad++ will do. The Notepad editor included with Windows is not recommended.
- DSM 6.1.4. Download one of the files for one of the following 3 models: DS3615xs, DS3617xs or DS916+. Download only the file with the ".pat" extension, not the one ending in ".pat.md5". The PAT files come straight from Synology's servers, so they have not been altered.
- Jun's official loader v1.02b (mirror). This loader is hybrid, i.e. it works in EFI or BIOS, so it should work on most machines able to read GPT. For older machines that can only read MBR this loader will not work; in that case use the v1.02b loader image made by @Genesys, built from Jun's loader but with an MBR partition table. Note: Jun's v1.02b loader is compatible with Intel CPUs. For AMD CPUs this is not entirely the case, but several people have mentioned that it is possible to use this loader; according to them it is necessary to disable the C1E function in the BIOS (applicable to HP machines such as the N40L or N54L, for example). If you have another AMD model/brand, don't ask me, I don't know; you will have to look in the BIOS and run tests yourself. Share your experience, it will surely help others.
- The custom extra.lzma ramdisk. This ramdisk is optional and should only be used if the default ramdisk included in the loader does not detect your hardware. I provide it for those who might have problems with network detection or unrecognized disk controllers. This custom ramdisk contains additional modules (drivers) that were compiled by @IG-88 against the DSM 6.1.3 source code. I do not guarantee that they all work. You will need to replace (or rename, just in case!) the default extra.lzma ramdisk with this one. If you have questions specifically related to IG-88's ramdisk, please post them directly in IG-88's thread, not here.
- Be careful to connect your HDDs consecutively, starting with the first SATA port, normally labelled SATA0 on motherboards. Check with your motherboard manufacturer. If you are migrating from DSM 5.2, leave things as they are.
- OSFMount. Application used to modify the grub.cfg file directly on the image. This is not strictly necessary, because Jun made it possible to configure the VID/PID, S/N and MAC directly from the Grub Boot Menu. If you prefer to use that newer method, just skip Point 4, read Note 4 instead, and resume the tutorial at Point 5.

PLEASE READ THE WHOLE TUTORIAL BEFORE DOING ANYTHING. Using this loader is entirely at your own risk. Do not hold me responsible if you lose your data or your NAS goes up in smoke. Also be aware that this loader contains fewer drivers than DSM 5.2, so if having a machine running 24/7 is essential for you, I advise you to read the list of available drivers at the bottom of the tutorial. If your module is not included, you will have to compile it yourself or use the custom ramdisk above. Do not ask me to compile modules for you - I will not do it. DO NOT UPDATE DSM BEYOND VERSION 6.1.7 WITH LOADER v1.02b. IN OTHER WORDS, DO NOT UPDATE DSM TO VERSION 6.2. You have been warned.

Now that you have everything you need, let's get down to business:
1 - Back up your data and your configuration before anything else; that will save the whining later. Print this tutorial if necessary.
2 - Shut down your NAS. Disconnect the USB key holding your 5.2 loader. I recommend you set aside the USB key you currently use with DSM 5.2 and take a new USB key for DSM 6.1. That way you won't have to redo it if the upgrade doesn't work for you and you need to go back to DSM 5.2.
3 - Go to your PC, plug in your USB key and launch your application of choice to read the VID and PID of the USB key. Write them down somewhere, you will need them shortly.
4 - Now launch OSFMount. Select "Mount New", then choose your loader (the .img file) in "Image File". Another window opens. Select partition 0 (the 15 MB one). Click Ok. In the main window untick the "Read only drive" box. Click Ok. The image partition should now be mounted in your file explorer. You can now go to the /grub folder and replace (or rename) the default extra.lzma ramdisk with the one I provided above. Then go back, enter the /grub folder and edit the grub.cfg file with your advanced text editor. If you need to replace the default ramdisk with the custom extra.lzma ramdisk, you will also have to mount partition 1 (the 30MB one) with OSFMount.
The content of the grub.cfg file is as follows (I only include the portion of the code that matters for this tutorial):
[...]
set extra_initrd="extra.lzma"
set info="info.txt"
set vid=0x058f
set pid=0x6387
set sn=C7LWN09761
set mac1=0011322CA785
set rootdev=/dev/md0
set netif_num=1
set extra_args_3615=''
set common_args_3615='syno_hdd_powerup_seq=0 HddHotplug=0 syno_hw_version=DS3615xs vender_format_version=2 console=ttyS0,115200n8 withefi elevator=elevator quiet'
set sata_args='sata_uid=1 sata_pcislot=5 synoboot_satadom=1 DiskIdxMap=0C SataPortMap=1 SasIdxMap=0'
set default='0'
set timeout='1'
set fallback='1'
[...]
The values to modify are the following:
vid=0x090C -> vid=0x[the vid of your USB key]
pid=0x1000 -> pid=0x[the pid of your USB key]
sn=C7LWN09761 -> sn=[generate your sn here with the model DS3615xs, DS3617xs or DS916+, depending on which loader you chose]
mac1=0011322CA785 -> mac1=[the MAC address of network port #1]. You can add set mac2=[the MAC address of network port #2] if you have a 2nd network port, and so on up to mac4, but this is not necessary.
Tip: change timeout='1' to timeout='4' - this extends how long the Grub Boot Menu stays on screen when it appears.
Once you have finished editing the grub.cfg file, save the changes and close the text editor. In OSFMount click "Dismount all & Exit". You are now ready to write the image to your USB key.
5 - Use Win32 Disk Imager to make your USB key bootable with the image you have just edited.
6 - Properly eject your freshly written USB key. Plug it into your NAS (avoid USB 3.0 ports). Start the machine and go straight into your BIOS to reconfigure it so that it boots from the new USB key. Make the changes needed to boot in UEFI or legacy BIOS - that is your choice. Also, in the BIOS, the HDDs must be configured as AHCI and not IDE. Finally, and if possible, enable the serial port in the BIOS if it isn't already; not all motherboards have a serial port, and if that's your case it's not a big deal, the loader will deal with it. Save the BIOS changes and reboot.
7 - Once rebooted, if you have a screen connected to the NAS you will see the following Grub Boot Menu: TIP: even before the Grub Boot Menu appears, repeatedly press the up or down arrow keys. This stops the countdown and gives you time to choose the line you want. You will see the following screen after pressing Enter: If you booted the USB key in EFI mode you will normally not see the last 3 lines. Nothing to worry about.
8 - Go back to your PC and preferably launch Synology Assistant, or go to http://find.synology.com. Normally, if you followed everything correctly, your NAS should be detected on your local network within about a minute (I tested with a virtual machine and it took ~55 seconds). Then just follow the instructions to either do a fresh install or migrate from DSM 5.2 to DSM 6.1. At some point DSM will ask you for the PAT file (DSM_DS3615xs_15217.pat, DSM_DS3617xs_15217.pat or DSM_DS916+_15217.pat) that you should already have downloaded.
9 - Once the update or fresh install is finished, access your NAS as usual. You will probably have to update several applications. You can then update DSM 6.1 up to DSM 6.1.7-15284. You may be forced to do a hard reboot, and some people have also had to redo the USB key. Disable automatic updates in DSM. If needed, here is where to download the individual files (DSM and updates): https://xpenology.com/forum/topic/7294-links-to-dsm-and-critical-updates/
10 - That's it, you're done. If you have questions, search the forum/Google first. If you are still stuck, ask your question and give the specifications of your hardware (motherboard model, LAN controller, disk controller, etc.), otherwise your post will be deleted or knowingly ignored.
--------------
Note 1: If, after following the tutorial, your NAS is not reachable via http://find.synology.com or Synology Assistant, the most likely reason is that the drivers for your network card are not included in the loader. Make an effort and use Google to find out which module your network card and/or your disk controller use under Linux, then check whether those modules are included in the custom ramdisk. If you see them there, use the custom ramdisk. If nothing works, then ask your question.
Note 2: Once on DSM 6.1, be aware that you will no longer have ssh access to the NAS with the root account. You can, however, log in with your administrator account and elevate privileges with sudo -i. This is completely normal; it is Synology's way of securing access to DSM.
Note 3: Double-check the VID/PID of your USB key before starting the update. If during the migration you get the following error: "Failed to install the file. The file is probably corrupted. (13)" (or its equivalent in your language), it means the VID/PID does not match your USB key. If you still have problems after double-checking the VID/PID, try another USB key.
Note 4: The changes to the grub.cfg file can also be made directly from the Grub Boot Menu, so in principle it is entirely possible to skip Point 4 and write the synoboot.img image to your USB key without modifying anything (just keep reading from Point 6). To make the changes, press the letter 'C' when the Grub Boot Menu appears. You have to be quick, as you only have one second before the menu disappears. After pressing C you will find yourself at a grub command prompt. To change the VID you type the following:
vid 0xTHE 4-DIGIT VID OF YOUR USB KEY
Do the same for pid, sn and mac1, pressing Enter after each command. The commands are as follows:
pid 0xTHE 4-DIGIT PID OF YOUR USB KEY
sn THE SERIAL NUMBER OF YOUR NAS
mac1 THE MAC1 ADDRESS OF YOUR NAS
If you have several network cards you can add them the same way, up to mac4 at most. See below:
mac2 THE MAC2 ADDRESS OF YOUR NAS
mac3 THE MAC3 ADDRESS OF YOUR NAS
mac4 THE MAC4 ADDRESS OF YOUR NAS
If you think you made a mistake, just type the command again. When you are finished, press Esc and select the appropriate menu line.
Below is an example of what the grub command prompt session looks like with these commands:
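(A sketch using the sample values from the grub.cfg excerpt above - substitute your own values, then press Esc and choose the normal boot entry.)

vid 0x058f
pid 0x6387
sn C7LWN09761
mac1 0011322CA785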
Note 5: If during the installation you get an error message along the lines of "We have detected errors on the disks [disk numbers] and the SATA ports have also been disabled; replace the disks and try again", then you need to add SataPortMap at the Grub command prompt (or in the grub.cfg file). Press the letter C at the Grub Boot Menu and type this:
append SataPortMap=XX
where XX is the number of HDDs present. Don't forget to update this parameter if you add more HDDs. Also, if you ever have to use Reinstall, don't forget to select the normal mode (first line of the grub menu) during the automatic reboot after the installation; otherwise the loader will select Reinstall again and that will cause problems later on.
@@@@@@@@ Details on what SataPortMap= means @@@@@@@@
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
############## Known issues #####################
- On machines with a single, slow core the "patcher" loads too late.
- Some network card drivers crash when the MTU is above 4096 (Jumbo frames).
############# Modules included in Jun's loader by default #############
############### Tutorial change log ###################
  50. 1 point
Hi! I made a little tool which can help you to get your XPEnology up & running without installing any software. It contains (as portable versions):
- Nirsoft's USB device view (helps to identify the VID & PID of your USB boot media) - V2.76
- XPEnology Serial Generator for DS3615XS, DS3617XS and DS916+ (a converted version of the HTML site)
- Win32 DiskImager (to write your modified synoboot.img to your USB boot media) - V1.0 (only available in V1.4.1)
- OSFMount x64 (to mount the synoboot.img and modify it) - V1.5
- Notepad++ (best editor for changing values inside grub.cfg) - V7.5.3
- Synology Assistant (useful tool from Synology to find your XPEnology and install DSM) - V6.2-23733
- TFTP/DHCP portable (a small TFTP, DHCP and Syslog server by Ph. Jounin) - V4.6.2
- MiniTool Partition Wizard 10 (helps assigning already formatted/written USB devices to modify an existing grub.cfg) - V10.3
- SoftPerfect Network Scanner - V6.2.1
- USB Image Tool - V1.75
- New: Rufus - V3.3
In the "Downloads" section all links open the corresponding websites to download the files. For beginners I added a small HowTo for bare-metal installation.
Update
New link for download: https://mega.nz/#F!BtViHIJA!uNXJtEtXIWR0LNYUEpBuiA
The download link/folder also contains @IG-88's extra.lzma (V0.6) for the DS918+.
You'll have to run it "As Administrator" because some of these tools (like Win32 DiskImager) need to be executed with higher rights. It's possible that the SmartScreen filter will give you a warning, because the EXE isn't signed.
Bug reports and comments are welcome
Cheers
Current version: V1.4.2 (2018-11-19)