Leaderboard


Popular Content

Showing content with the highest reputation since 05/21/2020 in all areas

  1. 1 point
    @pmchan got me to buy a whole pallet of them! So now I flash them one after another and resell them for double the price. PM me if you're interested.
  2. 1 point
    Hehe, did you happen to buy it on Rakuten like @EVOTk and me a few days ago?
  3. 1 point
    No, DSM is installed to all available block storage devices for redundancy (which is another reason not to present a monolithic device to your VM). All you need is the boot loader on the first Virtual SATA controller as device 0:0. Connect your RDM drives in sequence to the second Virtual SATA controller as devices 1:0, 1:1, 1:2, etc. A 2nd virtual disk isn't required if you have RDM devices online. Every disk has three partitions: Linux (system), swap, and data. When you run the DSM install, it will RAID 1 DSM (Linux and swap) to the first two partitions, respectively. Once DSM is fully installed, you can use the data space for whatever array configuration and volume layout you wish.
  4. 1 point
    If you want to modify the grub.cfg of your written boot stick or want to access its partitions (e.g. adding the extra.lzma from @IG-88), you can use a free portable tool.
    1.) Download MiniTool Partition Wizard Free Edition (portable): https://www.partitionwizard.com/download/v12-portable/pw12-free-64bit.zip
    2.) Unzip it and launch "partitionwizard.exe" with administrative rights.
    3.) Plug in your boot stick. Partition Wizard will automatically recognize the new drive. In this example it is drive no. 4. Now select the 1st partition of your stick (15 MB), right-click, and select "Change Letter" in the context menu.
    4.) Select a desired drive letter and click "OK".
    5.) In the lower left pane of the tool, click "Apply" and confirm the pending changes. If everything went OK you should see a success message. Now you have full access to the 1st partition with Explorer or your favourite file manager.
    When you're done with your modifications, it's advisable to unmount the drive letter. The steps are nearly the same:
    6.) Right-click the 1st partition again -> "Change Letter" -> select "New Drive Letter: none" -> "OK" -> click "Apply" in the lower left and confirm the changes.
  5. 1 point
    Always use DSM RAID; btrfs self-healing and advanced disk management features are the whole point of the system.
  6. 1 point
    Hi all :) After a bit of reverse engineering I was able to successfully bypass the license-checking mechanism introduced in DSM 6 with a simple two-line binary patch of synocodectool, and thereby enable transcoding without a valid serial number. I wrote a little script to make it easier for everyone. For more information please check the GitHub repo: https://github.com/likeadoc/synocodectool-patch HOWTO: 1. wget https://raw.githubusercontent.com/likeadoc/synocodectool-patch/master/patch.sh 2. chmod +x patch.sh 3. ./patch.sh Done :) If things go wrong, simply restore the original file: ./patch.sh -r Cheers
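    For the curious, a patch like this boils down to an in-place byte substitution. Below is a hypothetical Python sketch of that general technique; the real target file, offsets, and byte values live in the GitHub repo above, and everything here (file name, bytes, function name) is made up for illustration:

```python
from pathlib import Path

# Hypothetical sketch of an in-place binary patch like the one patch.sh
# performs on synocodectool. The byte values below are illustrative only;
# the real patch bytes are defined in the repo linked above.
def patch_bytes(path: Path, old: bytes, new: bytes) -> None:
    data = path.read_bytes()
    if old not in data:
        raise ValueError("expected byte sequence not found; refusing to patch")
    path.write_bytes(data.replace(old, new, 1))

# Demonstration on a throwaway file (0x74 0x05 is an x86 JE; 0xEB 0x05 is
# an unconditional JMP, the classic way a conditional check gets bypassed):
demo = Path("demo.bin")
demo.write_bytes(b"\x74\x05" + b"\x90" * 6)
patch_bytes(demo, b"\x74\x05", b"\xeb\x05")
print(demo.read_bytes()[:2])  # b'\xeb\x05'
```

    Refusing to patch when the expected bytes are absent is what makes such a script safe to re-run (and is also why patch.sh offers a restore option).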
  7. 1 point
    Hello, I'm posting here a solution (or rather two, depending on your situation) that is well known and works legally to run Surveillance Station with more than 2 cameras. The concept relies on the CMS feature of Surveillance Station (SS) and on running n virtualized DSM instances. With CMS, SS aggregates all the SS instances into a single interface, with ultimately 2 cameras per SS instance. It follows that you can have n cameras this way, but you need n/2 DSM instances running in parallel. That's the price to pay to avoid paying the exorbitant price of additional camera licenses. Common procedure:
    1. Install Surveillance Station in the new VM (see the prerequisites above)
    2. Log in to Surveillance Station on that VM
    3. Declare the additional camera(s)
    4. Click Application Center, search for CMS and launch it. Click 'Yes' in the popup:
    5. After the page reloads, launch CMS from the main menu. Make sure it is in 'Recording server' mode. If it isn't, set it in the Advanced tab
    6. If you need more than 4 cameras, repeat the prerequisite steps through step 5 of this section, according to how many additional cameras you need!
    7. On the host machine (not the new VM!!!), log in to the other Surveillance Station instance
    8. Click Application Center, search for CMS and launch it. Click 'Yes' in the popup:
    9. After the page reloads, in the main menu click CMS, then Advanced.
    10. Select the option "Host server mode", then click Save. Confirm the popup with 'Yes':
    11. After the page refreshes again, declare the other Surveillance Station server(s) by clicking Add > Add server:
    12. Fill in the parameters matching the instance created at step 5 in Virtual Machine Manager. Then click Next:
    13. Set the pairing parameters as you see fit and click Finish
    14. To verify that it worked, open "IP Camera" and confirm that the other camera(s) appear! For my test I deliberately reused the same camera:
    Finally, centralized management of the recording files is up to you! Feel free to set up an NFS or CIFS mount from the VM(s) to the main host.
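    The licensing arithmetic above can be sketched in a few lines (assuming, as the post states, two free camera licenses per Surveillance Station instance):

```python
import math

# Each DSM instance ships with two free Surveillance Station camera
# licenses, so n cameras need ceil(n / 2) virtualized DSM instances
# federated through CMS.
def instances_needed(cameras: int) -> int:
    return math.ceil(cameras / 2)

print(instances_needed(5))  # 3 DSM instances for 5 cameras
```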
  8. 1 point
    I made an English version of the tutorial mentioned by @IG-88 in the appropriate section:
  9. 1 point
    For one, you would need an additional driver as extra.lzma (you can open this with 7-Zip; the file that needs to be there is hpsa.ko). Also, it's best to set the P420 to non-RAID or HBA mode. AFAIR that can be done in the SSA utility: reset the RAID configuration and then choose HBA mode. You can google it or look on YouTube; there should be detailed information. I can't remember anyone here doing that and using the hpsa.ko driver successfully; most people go with a cheap 9211-8i (reflashed OEM clone) with IT firmware. It has two SAS connectors, so you should be able to connect the backplane of the server. I guess you used 3617? (918+ needs a 4th-gen Intel CPU, and that model seems to have only Ivy Bridge.) You can use this extra.lzma for 3615/3617 for testing: https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/
  10. 1 point
    It depends on what you want and on your hardware configuration. SataPortMap=42 means you have 4 ports on the first controller and 2 on the second. If you have a single controller with four ports, then SataPortMap=4 is enough.
    Your desired end result isn't entirely clear here either. If you want to roll back to 6.1.7, you need loader 1.02 and then either:
    1. Format the first partition of every disk and then reinstall the system, restoring the configuration from backup afterwards, or
    2. If you have so-called "root" access (sudo -i), delete /etc.defaults (equivalent to option 1) and then reinstall the system, restoring the configuration from backup afterwards.
    If instead you want to update to the latest version: loader 1.03, the same first-partition steps if the system won't boot, and ..... https://mega.nz/#F!yQpw0YTI!DQqIzUCG2RbBtQ6YieScWg!7AoyySoS
    The latest files of the latest DSM (your choice): https://archive.synology.com/download/DSM/release/
    And the useful guides from this section: https://xpenology.com/forum/forum/99-добро-пожаловать/
    Well, that's roughly it, give or take..... )))
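    The digit-per-controller reading of SataPortMap described above can be sketched like this (a simplified illustration; it ignores any extended notations a particular loader may accept):

```python
# Each digit of SataPortMap is the number of SATA ports on one controller,
# in controller order. "42" = 4 ports on the first controller and 2 on the
# second; "4" = a single 4-port controller.
def parse_sata_port_map(value: str) -> list[int]:
    return [int(digit) for digit in value]

print(parse_sata_port_map("42"))       # [4, 2]
print(sum(parse_sata_port_map("42")))  # 6 addressable ports in total
```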
  11. 1 point
    - Outcome of the update: SUCCESSFUL - DSM version prior update: DSM 6.2.3-25426 - Loader version and model: Jun's v1.04 - DS918+ - Using custom extra.lzma: extra v10 - Installation type: BAREMETAL - J1900 NAS - Additional comment: Everything works (both HW and SW transcoding work).
  12. 1 point
    Welcome! Well, it really all depends on what you're going to use the NAS for. In my case I built a few with i7s, since I have Plex installed on them and they can handle up to 12 or 13 simultaneous Plex streams. The main problem with that setup right now is the impossibility of moving to version 6.2, due to compatibility problems with the network card and the modified maximum-disk-count configuration. As for your specific case, whether the M.2 works or not will depend on the controller the motherboard uses and on the loader you're going to use; the latest ones require adding driver modules to make them compatible with certain controllers. My advice: almost any motherboard will be compatible with the last 6.1.7 version of DSM, as well as Realtek network cards (which tend to be the ones integrated in almost all current motherboards). You can also add extra SATA adapters in the future to add hard drives as needed. That said, look for a good case that holds a large number of disks in case you plan to keep adding them. I hope this helps you get started...
  13. 1 point
    I can't help you with the M.2 card, but do you mean installing the bootloader there, or the DSM operating system itself? As for the i3, my xpenology server runs on that processor; although mine is an old 3220T with only 2 cores, it has plenty of headroom, so I expect yours will be even more comfortable. But it also depends on the number of disks you're going to install and how many people will access the server at the same time. In mine, the motherboard is an ASRock H77M (6 SATA ports). I sold the GeForce 1030 graphics card and with the money bought two SATA cards with 2 and 4 ports, an ENERMAX Platimax D.F. 600 power supply (the previous one burned out, so I took the opportunity), and a Silverstone GD04 case, all inherited from an HTPC that saw little use. First I bought 3 refurbished 6 TB Seagate IronWolf hard drives with a 2-year warranty, but they were very noisy and ran extremely hot. Since there isn't much room in the case, I replaced them with 7 Maxtor M3 2.5-inch external hard drives (I removed their enclosures) plus the SSD I already had for Windows 10 in case I need it for other things, with a GRUB menu at boot to choose between DSM and Windows. The Maxtors have a 4 TB Seagate inside (externals with an enclosure come out cheaper than bare internals, a strange thing to discover, although you do lose the warranty). And then I bought this: https://es.aliexpress.com/item/32778843469.html?spm=a2g0s.9042311.0.0.268d63c05zIYFI&cv=12_Deeplink&af=179029&aff_platform=aaf&sk=Y7bAZbY&aff_trace_key=332e9094fa05457c9f27ea8faa343dae-1590299154135-02277-Y7bAZbY&cn=15716&dp=12%3A%3A179029%3A%3A%3A%3A%3A%3A1590299153&terminal_id=030ee5a389464578aceba83dad9364e8&tmLog=new_Detail&aff_request_id=332e9094fa05457c9f27ea8faa343dae-1590299154135-02277-Y7bAZbY (to add 4 more disks in the future for a total of 11 disks, 44 TB) plus the SSD, which in my case would be external, connected to the computer.
  14. 1 point
    Yes, it is - all Intel integrated CPU graphics from Sandy Bridge (ix-2xxx) to ix-9xxx inclusive, and presumably the ix-10xxx as well, as they are still UHD 620 interfaces. Excluding the 'F' parts, of course (F = disabled GPU on chip). DeadS
  15. 1 point
    Since the guru IG-88 is already answering you, anything we tell you he'll surely explain better.... Good luck with the board! Sent from my iPhone using Tapatalk
  16. 1 point
    The resolution should be changeable. Which camera is it?
  17. 1 point
    Via ONVIF there's no way. Only cameras natively supported in SS.
  18. 1 point
    It's all quite simple...... If you want to get some practice in, any flash drive and any hard disk will do. Write a patched 1.03 loader (with your own VID and PID), install DSM (better to go straight to 6.2.3), look at the result, and once it succeeds - and it certainly will - you can swap in the original disks and feed it the DSM file manually. You should be offered a migration. And from there it's the well-trodden path... There is an opinion that after installation DSM slightly modifies the loader, and the sages advise rewriting the loader afterwards. I haven't noticed any changes myself, but here it's better to be over-careful than under-careful. Don't forget to save a backup of your configuration. Naturally, it doesn't matter which flash drive you use; the loader for the newer DSM versions will be 1.03.
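    In Jun's loaders, the VID and PID live in the grub.cfg on the first partition of the stick; a hedged sketch of the relevant lines is below (the hex values are placeholders - substitute the vendor and product IDs of your own USB stick, as reported by Device Manager or lsusb):

```
# grub.cfg excerpt (placeholder values - use your own stick's IDs)
set vid=0x1234
set pid=0x5678
```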
  19. 1 point
    CPU? From the HPE specs it seems to be Ivy Bridge, too old for 918+ (loader 1.04b). Loader version and type? I guess you will have to use 1.03b 3615 or 3617 (DSM 6.2), and keep in mind that it will not work as UEFI; it needs CSM/legacy mode. You can test with loader 1.02b (DSM 6.1); that one will work with both UEFI and CSM. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ Not when you do it the right way. That is an Intel-based card: HPE lists the internal 4-port adapter as "HP Ethernet 1Gb 4-port 366i Adapter" in the specs, and from the vendor ID (8086) that's Intel.
  20. 1 point
    In step 3, rewrite the flash drive from the image you originally wrote to it. As for the test: it will either fly or it won't.
  21. 1 point
    TL;DR: When running DSM 6.2.3 under ESXi, Jun's 1.03b and 1.04b bootloaders fail to build /dev/synoboot (this can be fixed by installing a script extracted from the loader and re-running it after the boot has completed). DSM 6.2.3 displays SATA devices (i.e. the bootloader on 1.04b) that are mapped beyond the MaxDisks limit, when previous versions did not. DSM 6.2.3 also updates the synoinfo.cfg disk port bitmasks, which may break some high-disk-count arrays and cause odd behavior with the bootloader device.
    Background: Setting the PID/VID for a baremetal install allows Jun's loader to pretend that the USB key is a genuine Synology flash loader. On an ESXi install, there is no USB key - instead, the loader runs a script to find its own boot device, and then remakes it as /dev/synoboot. This was very reliable on 6.1.x and Jun's loader 1.02b. But moving to DSM 6.2.x and loaders 1.03b and 1.04b, there are circumstances when /dev/synoboot is created and the original boot device is not suppressed. The result is that sometimes the loader device is visible in Storage Manager. Someone found that if the controller was mapped beyond the maximum number of disk devices (MaxDisks), any errant /dev/sd boot device was suppressed. Adjusting DiskIdxMap became an alternative way to "hide" the loader device on ESXi, and Jun's latest loaders use this technique.
    Now, DSM 6.2.3: The upgrade changes at least two fundamental DSM behaviors. SATA devices that are mapped beyond the MaxDisks limit are no longer suppressed, including the loader (appearing as /dev/sdm if DiskIdxMap is set to 0C). The disk port configuration bitmasks in synoinfo.cfg (internalportcfg, usbportcfg and esataportcfg) are rewritten, and on 1.04b they no longer match up with the default MaxDisks=16. NOTE: If you have more than 12 disks, this will probably break your array and you will need to edit them back (and that's not just an ESXi issue)! Also, when running under ESXi, DSM 6.2.3 breaks Jun's loader synoboot script such that /dev/synoboot is not created at all.
    Negative impacts: The loader device might be accidentally configured in Storage Manager, which will crash the system. The loader partitions may inadvertently be mapped as USB or eSATA folders in File Station and become corrupted. Absence of the /dev/synoboot devices may cause future upgrades to fail, when the upgrade wants to modify rd.gz in the loader.
    Unpacking Jun's synoboot script reveals that it harvests the device nodes, deletes the devices altogether, and remakes them as /dev/synoboot. It tries to identify the boot device by looking for a partition smaller than the smallest array partition allowed. It's an ambiguous strategy for identifying the device, and something new in 6.2.3 is causing it to fail during the early-boot system state. There are a few technical configuration options that can cause the script to select the correct device, but they are difficult and dependent upon loader version, DSM platform, and BIOS/EFI boot. However, if Jun's script is re-run after the system is fully started, everything is as it should be. So extracting the script from the loader and adding it to the post-boot actions is a universal solution to this problem: Download the attached FixSynoboot.sh script. Copy the file to /usr/local/etc/rc.d. chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh. Thus, Jun's own code will re-run after the initial boot, once whatever system initialization parameters that break the first run of the script no longer apply. This solution works with either 1.03b or 1.04b and is simple to install. This should be considered required for ESXi running 6.2.3, and it won't hurt anything if installed or ported to another environment. FixSynoboot.sh
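    The DiskIdxMap arithmetic mentioned above (0C placing the loader at /dev/sdm) can be sketched as follows; the helper name is ours, and this simplification ignores device names past /dev/sdz:

```python
# Each pair of hex digits in DiskIdxMap is the zero-based starting disk
# index for one SATA controller; index 0x0C = 12 maps to /dev/sdm
# (sda being index 0).
def diskidx_devices(diskidxmap: str, ports_per_controller: list[int]) -> list[list[str]]:
    starts = [int(diskidxmap[i:i + 2], 16) for i in range(0, len(diskidxmap), 2)]
    return [
        [f"/dev/sd{chr(ord('a') + start + port)}" for port in range(nports)]
        for start, nports in zip(starts, ports_per_controller)
    ]

# A single-device loader controller mapped at 0x0C shows up as /dev/sdm:
print(diskidx_devices("0C", [1]))  # [['/dev/sdm']]
```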
  22. 1 point
    Well, it seems nobody in Spanish is contributing much on the topic. I got some information from the English forum. I'd like to know the following: 1) Can I install xpenology on a Corsair M.2 SSD? 2) For NAS use (no monitor, server only), if I use a 4-core i3 instead of a 2-core Celeron, wouldn't I be underusing xpenology? Thanks!
  23. 1 point
    Be careful when opening your NAS to the internet. Things to do: - enable DoS protection in DSM - enable account blocking (block IP after X login failures within XX minutes) - for your admin account enabling 2FA is also recommended - use strong passwords for your user accounts - redirect HTTP to HTTPS - change the standard port for HTTPS (5001) to a higher, non-standard port (for example: 55001)
  24. 1 point
  25. 1 point
    Yes, since early 2020 Win10 has been fussy and no longer assigns a drive letter. Install "Ext2 Volume Manager" (http://www.ext2fsd.com) and assign a letter to the first partition; then you can read the stick again with "Win32DiskImager 1.0". Recovery tools that can create image files would also work (e.g. DMDE, https://dmde.com, free edition); there you have to limit the read range so that it stops reading after the third partition (nothing comes after it - for the 918+ image that would be sector 102366). And of course dd under Linux works too. If you take the grub.cfg from the old stick (1st partition) and the extra/extra2 plus zImage+rd.gz from the 2nd partition, you should also be able to use the default image (and then copy the files onto it with OSFMount). By the way, you can also mount the stick in DSM (or another Linux): mkdir -p /mnt/synoboot1 mkdir -p /mnt/synoboot2 echo 1 > /proc/sys/kernel/syno_install_flag mount /dev/synoboot1 /mnt/synoboot1 mount /dev/synoboot2 /mnt/synoboot2
  26. 1 point
    - Outcome of the update: SUCCESSFUL - DSM version prior update: DSM 6.2.3-25423 - Loader version and model: Jun's v1.04b - DS918+ - Using custom extra.lzma: YES (v0.11 for 6.2.3 by IG-88) - Installation type: BAREMETAL - Asrock B250M-HDV / i5 7500T crossflashed Dell Perc H200 to LSI 9211-8i - Additional comments: HW transcode is still working in Video Station, Plex and Jellyfin, thanks to extra.lzma v0.11.
  27. 1 point
    Mine looks like this (and make sure to unmount before installing): mkdir -p /mnt/synoboot1 mkdir -p /mnt/synoboot2 echo 1 > /proc/sys/kernel/syno_install_flag mount /dev/synoboot1 /mnt/synoboot1 mount /dev/synoboot2 /mnt/synoboot2 Yes, when using 6.2.2 it should be 0.8 standard for 918+; the CPU should support it.
  28. 1 point
    In this tutorial we are going to place the bootloader alongside the DSM OS and the remaining storage. Keep in mind that I use DSM 6.1 and not 6.2! What you need: Win32DiskImager, Jun's loader v1.02b DS3615xs with MBR partition, DSM 6.1 (15047), grub2 (I used grub2-for-windows), a partition/hard disk manager (I used Paragon Hard Disk Manager 15), a USB stick for the bootloader, and an empty SSD.
    First we are going to put the bootloader on a USB stick using Win32DiskImager. Then we are going to install DSM normally. After installing DSM and configuring your device name, username, etc., go to Storage Manager and create a RAID Group for only the SSD in Basic mode, then click Apply. Go to Volume and create one (it doesn't matter which file system you use, but I use ext4), then click OK and shut down the server.
    Now we are going to do some fun things with the SSD drive that DSM and the storage are installed on. Take the SSD drive and the USB stick with the bootloader on it (we need some files later) out of the DiskStation machine and put them in your main PC. Start up your partition/hard disk manager and look for the SSD that you installed DSM on; it should look like this: Look at the last unallocated partition; it should be 100 MB. That's plenty for the loader, so we are going to make a new partition (50 MB is enough); make sure you put it at the very end of the drive. Don't forget to mark the partition as Active and assign a drive letter to it.
    Now we are going to install grub2 on that 50 MB partition. I used this website to make one (because I use Windows). After you have installed grub2 on that partition, we need to copy all the files from the 2nd partition of the USB drive and place them in the root directory of the 50 MB partition. Don't forget to place the grub.cfg (from the 1st partition) in the /grub folder of the 50 MB partition.
    Now unplug the SSD from your PC, place it in your DiskStation PC, and boot it up (you don't need to edit anything, just let it boot). You can use Synology Assistant to find your DiskStation PC, and you should see a normal welcome page where you can log in. After you have logged in you will see an error message; that is because of the small FAT16 partition, and you will get it every time you start up that machine. Storage Manager should say that your system is healthy. That's it, you can now use it normally without a USB bootloader. You can also update to the latest 6.1 version if you want (make sure you don't install 6.2, I haven't tested that one).
  29. 1 point
    - Outcome of the update: SUCCESSFUL (but see comments) - DSM version prior update: DSM 6.2.2-24922 Update 6 - Loader version and model: Jun's v1.04b - DS918+ - Using custom extra.lzma: YES - real3x mod (but see comments) - Installation type: BAREMETAL - ASRock J4105-ITX - Comments: no /dev/dri (missing Gemini Lake firmware) NVMe code is new. The original NVMe patch does not work. I uninstalled NVMe cache as a prerequisite prior to upgrade and recommend that you do too. NVMe cache is reinstalled and working fine after applying the updated patch here. ASRock J/Q-series motherboards require extra.lzma or real3x mod to boot DSM 6.2.2. 6.2.3 is verified to install on J4105-ITX with no mods. So I chose to revert real3x mod immediately prior to invoking the upgrade with the following procedure:
  30. 1 point
    Figured it out myself ) For anyone interested: extra.lzma with a single rtc-cmos module for the ASRock J3455-ITX. HW transcoding works in VS and Plex, shutdown works, boot takes about 2 minutes, network speed is 110 MB/s. In the BIOS: CSM - Enable (PXE not used, Storage - not used, Video UEFI only !!!). Boot in Legacy mode with the monitor disconnected. You can get into the BIOS without the reset jumper )) You can disable CPU power saving. extra.lzma extra2.lzma
  31. 1 point
    - Outcome of the update: SUCCESSFUL - DSM version prior update: DSM 6.2.2-24922-3 - Loader version and model: JUN'S LOADER v1.04b - DS918+ - Using custom extra.lzma: NO - Installation type: BAREMETAL - Asrock B250M-HDV / i5 7500T / 8Gb DDR4 / crossflashed Dell Perc H200 to LSI 9211-8i / 6x4TB RAID6 + 60GB SSD for VM and 240GB SSD cache / HP NC360T (dual ports nic) - Additional comments: reboot required