XPEnology Community

Leaderboard

Popular Content

Showing content with the highest reputation on 04/24/2021 in all areas

  1. I thought I'd post here since I didn't see a thread explaining what the deal is with the latest intermediate updates failing and so few people testing them. My curiosity got the best of me, unless I'm just missing something completely, which has been known to happen. 😉 Though not the newest member, I've been on XPEnology for maybe 2 years. I do have some noob-type questions, since I've noticed the two latest updates failing and not much discussion of it. In my short time here I can't recall new updates like 25554/25556 failing so quickly, and it does concern me. But again, I'm relatively new, and I'm sure this isn't the first time this has happened, where Jun or others have had to fix loaders.

My hardware: 2x identical baremetal DS918+ XPEnology boxes on loader 1.04b, with Gigabyte B360-HD3 boards and i3-8100 CPUs, both working fantastically on DSM 6.2.3-25426 Update 3.

My questions regarding this are:
- Should we be concerned, long term, about the intermediate updates not working (assuming we already have backups and all the functionality we need)?
- Could DSM 6.2.3-25426 Update 3 possibly be the last safe version for those of us using Jun's loaders?
- Is Jun still kicking around to come up with a possible solution? Or anyone else?
- How often in the past have 2 consecutive updates failed right out of the gate like the latest intermediate updates have?
- Is there any particular reason, such as a major OS change, or Synology wanting to rid itself of us "open sourcers", that is causing these updates not to work?
- Have XPEnology users been migrating to alternatives to Jun's loader, or to other solutions noobs may not be aware of? If so, what's out there for baremetal?
    1 point
  2. Hello, I suggest you read my previous replies to these posts: Forgotten admin password: https://xpenology.com/forum/topic/9608-oubli-mot-de-passe-admin/?tab=comments#comment-82913 Unable to reconnect after a password change: https://xpenology.com/forum/topic/6376-reconnexion-impossible-après-changement-de-pass/?tab=comments#comment-56929 Password problem: https://xpenology.com/forum/topic/6881-problème-mot-de-passe/?tab=comments#comment-115697 Since the admin account is disabled by default in 6.x, you can use the character string given in one of my messages to replace the password of an account with sufficient rights, or replace the password of a basic account and of admin so you can then use su - admin to obtain the rights needed for the changes. To summarize: boot the NAS with a LiveCD (RescueCD includes everything needed), mount the DSM partition (in principle /dev/md0, though it may be identified as /dev/md127; it is the smallest ext4 one) and go into the DSM etc folder as described. Edit the shadow file and replace the password, and while you're at it you can set up SSH key authentication (which assumes SSH was enabled beforehand), then unmount the RAID disks, remove the CD or USB key and reboot the NAS. There are many tutorials on the web about SSH key login and the shadow file (to reactivate the admin account, just delete the 1 at the end of the "admin:.........." line in /etc/shadow). Jacques
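A minimal sketch of those steps from a live Linux environment (device names are examples; check yours with cat /proc/mdstat, and note the array may come up as /dev/md127 instead):

    # Assemble and mount the DSM system partition (RAID1 across the first partition of each disk)
    mdadm --assemble --run /dev/md0 /dev/sda1 /dev/sdb1
    mount /dev/md0 /mnt

    # Generate a replacement hash (assuming the system accepts MD5-crypt) and paste it
    # into the second field of the target account's line in the shadow file
    openssl passwd -1 'NewPassword'
    vi /mnt/etc/shadow    # to re-enable admin, also delete the trailing 1 on the admin line

    # Clean up, remove the live media, and reboot
    umount /mnt
    mdadm --stop /dev/md0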
    1 point
  3. First, welcome to the forum. Second, there is no need to upgrade to 6.2.4, as it doesn't bring anything new and 6.2.3 is stable. Going forward it's unlikely there will be any effort given to making 6.2.4 work: Synology will be releasing DSM 7 later this year, so it would make sense for Jun or anyone else to concentrate on that.
    1 point
  4. Hello, it is asking you to update to the latest version available for the NAS; under no circumstances apply that update or your NAS will die. You can work your way up through versions, but you have to download them from Synology's website, and they are loaded from the NAS's own web interface. I repeat: do not update to version 6.2.4, only earlier versions, and do it with a backup of all the data you have stored. Regards.
    1 point
  5. AFAIK VID/PID tells the real Syno code what the bootloader device is so that it hides it properly. Since it is hardcoded and the VID/PID is Synology's, it is a simple way for Syno to ensure DSM doesn't run on non-Synology hardware (except if hacked). The VID/PID error is essentially their code rejecting non-Syno hardware. Any time you attempt a 6.2.4 install on a loader device, it will write those files to the loader, and then it cannot install a version earlier than that; it doesn't matter whether the install worked or not. So just write a new clean loader from a fresh download and this problem you have will be gone. This is the way it's always been.
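For reference, a minimal sketch of matching a freshly written Jun's loader to your own USB stick (the IDs below are placeholders; read yours from lsusb and edit grub/grub.cfg on the loader's boot partition):

    # On any Linux machine, find the stick's vendor/product IDs
    lsusb                      # e.g. "Bus 002 Device 003: ID 090c:1000 ..."

    # Then set the matching values in grub/grub.cfg on the loader
    set vid=0x090c
    set pid=0x1000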
    1 point
  6. "Most all of your problems with going to 6.2.4 is because VMWare pushed out patches with deprecation to linuxkernel." VMWare may use a variant of Linux as its core hypervisor OS, but what it supplies as a hardware platform has nothing to with it - the VM doesn't even know what OS is going to run on it. Yes, we select "Linux 3.x 64-bit blah blah" as a profile, but that is not linking to anything that is actually Linux, it's just defining a standard virtual hardware configuration that is presented to the client OS installation. Furthermore, if this were somehow true, how would VMware's purported actions affect Proxmox and KVM and all the other hypervisors at the exact same moment? So I'm gently calling foul on this assertion, but if someone can present a working baremetal configuration, that would be awesome and worth investigating further. "[VM] has been proven to be bad from the get go way back with DSM 5.2.xxxx" Um, yeah sure.
    1 point
  7. @bearcat Sorry for the misunderstanding; the integrated graphics card's /dev/dri made me write PCI, my bad. I have the M93p Tiny running OpenMediaVault, Docker and KVM, and it performs well.
    1 point
  8. Well, that's not quite right. PCIe 3.0, which is exactly what I'm using, carries 8 GT/s per lane. So for PCIe 3.0 x8 we get 7.88 GB/s, or 63 Gbit/s - meaning you can comfortably build a 40 Gbit Infiniband network, and with the card in an x16 slot, a full 100 Gbit. But that's not home-budget territory. My 10Gb switch with 8 SFP ports cost me 17K rubles, plus 3K rubles per card.

SSDs and the like have nothing to do with network throughput, since they aren't involved when you push traffic through iperf (see the sketch at the end of this post). Heh - I have the most ordinary desktop 2TB Toshibas, 12 of them in RAID5, and in DiskSpeed tests they easily deliver 850 and 980 MB/s over the network respectively. Even a NAS based on a Gen8 with five 4TB Toshibas in RAID5 delivers around 600-650 MB/s.

Oh come on - in what way are they not comparable? PCI buses and lane counts don't differ with how fancy the server is. At work there's a Supermicro server with a pair of Xeon 26xx CPUs, yet my home machine with a 6950 beats it in video editors. There is no mystical server-grade performance - in most tasks desktop parts will be head and shoulders faster at the same price. But you can put two server CPUs in one box and get 80 PCIe lanes, or 96 lanes with the new Silver/Gold processors.

As I said, I have macOS on the very same hardware, and with the very same NIC the iperf test, specifically between the PC and the NAS, shows higher throughput. This is about the network, not the disk subsystem. By the way, I managed to raise the speed of the Intel X520 by setting Interrupt Moderation Rate to Low; otherwise it wouldn't go above 4-5 Gbit. Meanwhile the NC550SFP gave 7.8-8.5 Gbit/s right away on default settings in Win10.

That is actually the average size of the files I work with - typically 130-200 GB, lately often 500-700 GB, and there are files as large as 980 GB. But I built the NAS precisely for fast network work, to get away from local storage, for many reasons: 1. Working from several computers at once - create a job on one, send it to render on a second, and free up the first. 2. Btrfs is convenient with its symlinks - you copy a file and instead of a second physical copy a link is created; NTFS can do that too, but not over the network and not with built-in tools. 3. I can create a file and immediately share it from the XPEnology box to the outside via file sharing, FTP, torrent and so on, without keeping the main computer on.
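To reproduce that kind of network-only measurement (disks out of the picture), a minimal iperf3 sketch; the hostname is a placeholder, and some DSM builds ship iperf rather than iperf3:

    # On the NAS (server side)
    iperf3 -s

    # On the client: parallel streams help saturate 10/40 Gbit links
    iperf3 -c nas.local -P 4 -t 30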
    1 point
  9. So, IG-88 created an extra.lzma for me. Tutorial:
1. Download Jun's loader 1.03b - https://mega.nz/#!zcogjaDT!qIEazI49daggE2odvSwazn3VqBc_wv0zAvab6m6kHbA
2. In the 1.03b loader, replace «rd.gz» and «zImage» with the ones from https://archive.synology.com/download/DSM/release/6.2.2/24922/DSM_DS3617xs_24922.pat
3. In the 1.03b loader, replace extra.lzma with the one for loader 1.03b_mod ds3617 DSM 6.2.2 v0.4_Beta - http://s000.tinyupload.com/?file_id=81158484589811846693
4. After starting, you need to get root rights and open access for WinSCP - https://suboshare.blogspot.com/2019/02/synology-nas-dsm-62-root-access.html Restart.
5. Open synoinfo.conf, search for maxdisks and change the 12 to 24; search for internalportcfg and change 0xffff to 0xffffff for 24 drives (see the sketch below). Restart.
Results:
Ethernet on the dual-port 1G Intel i350 NIC - ready
Mellanox 40G (check in Data Center) - ready
12 drives extended to 24 drives - ready
12 SSDs in RAID F1 - it works
CPU - sees 16 threads (check with cat /proc/cpuinfo)
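A minimal sketch of step 5 from an SSH session, assuming your build keeps the stock DSM 6.x paths; edit both copies, otherwise DSM can revert the change on update:

    # Raise the disk limit in the live config and the defaults copy
    sudo sed -i 's/^maxdisks=.*/maxdisks="24"/' /etc/synoinfo.conf /etc.defaults/synoinfo.conf

    # internalportcfg is a bitmask, one bit per internal port: 24 set bits = 0xffffff
    sudo sed -i 's/^internalportcfg=.*/internalportcfg="0xffffff"/' /etc/synoinfo.conf /etc.defaults/synoinfo.conf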
    1 point
  10. Got all the disks running. It turned out I have 2 backplanes installed, and they only needed 2 SAS controllers; I had plugged a third one into a free port, so in the end I disconnected it. Disabled all SATA ports in the BIOS, eSATA too. The first physical disk in the server now corresponds to the first disk in the web interface, and all 16 SSDs came up. What's left is to activate RAID F1 and the Mellanox 40G network card. Clear progress.
    1 point
  11. Jun's Loader v1.04b DS918+ works without any song and dance; I'll keep fine-tuning it from here.
    1 point
  12. Should the system show that two processors are installed? I added the extra.lzma and extra2.lzma files from the topic. The disks sit in the server in slots 1-16, but in DSM they are displayed as 11-25 (14 disks); somewhere 2 disks disappeared.
    1 point
  13. The chip is a SAS3008, driver mpt3sas. The ds3617 image from Synology comes with this driver in DSM (3615 does not), so in theory it should work. What's the vendor ID when listing it with lspci?

The F1 RAID mode might be present; you can try to switch off SHR support by reverting what's written about enabling SHR: https://xpenology.com/forum/topic/9394-installation-faq/?do=findComment&comment=81094 - so disable SHR support and use RAID groups instead, and maybe the F1 option will appear; Synology seems to use the same base for all models.

Getting a network driver working seems much less hassle than getting mpt SAS drivers working; I did try to add newer drivers last year and it did not work. That's a ConnectX-3 card - can you give the vendor ID from lspci for it as well?

Taking into account that even the newly compiled mpt2sas driver from 3615 (kernel 3.10.105) did not work (it loads, but crashes when devices are found; this also happens with other scsi/sas drivers, and might be a kernel source problem - Synology uses a newer, modified kernel, while we only have a more than 2-year-old beta kernel source), we will have to use the drivers coming with DSM when it comes to scsi/sas. You might be better off with the 918+ once the storage is working; I see more hope of getting a NIC driver to work than the scsi/sas ones.

Not sure why you chose the SAS controller; when it comes to XPEnology (hacked DSM), AHCI is the first choice (imho). The board has 10x SATA already and plenty of PCIe slots for 4-port or 8-port AHCI cards (in your case just two 4-port cards with a Marvell 9215 might be enough).
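A quick way to pull those vendor/device IDs together with the driver in use (the grep patterns are just examples for these two cards):

    # -nn prints numeric [vendor:device] IDs, -k shows the kernel driver bound to each device
    lspci -nnk | grep -iA3 -e lsi -e mellanox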
    1 point
  14. - Outcome of the update: SUCCESSFUL - DSM version prior update: DSM 6.2-23739 Update 2 - Loader version and model: JUN'S LOADER v1.03b - DS3617xs - Using custom extra.lzma: NO - Installation type: VM - VMware ESXi 6.7 - Additional comments: VMXNET 3 did not work anymore, so I changed the virtual NIC type from VMXNET 3 to E1000E. I connected all 12 SSDs to an LSI SAS3008 (LSI 9300-8i), and I use an Intel X710-DA2 (10Gbit SFP+); I think they work perfectly. All additional devices (SAS card and NIC) were passthrough devices.
    1 point
  15. - Outcome of the installation/update: SUCCESSFUL - DSM version prior update: DSM 6.2-23739 Update 2 - Loader version and model: jun v1.04b (DS918+) - Using custom extra.lzma: NO - Installation type: BAREMETAL - AMD X370 CPU: AMD Ryzen 7 1700X Mobo: ASUS ROG Strix X370-F Gaming LAN: Intel I350-T2 HBA: LSI 9300-8i (IT mode) - Additional comments: Chipset SATA and LSI card both work fine.
    1 point
  16. I'm sorry for my wrong information. I think 1.03b works fine on VMware ESXi 6.7 with an LSI 9300-8i (host bus adapter, SAS3008, passthrough device). - Outcome of the installation/update: SUCCESSFUL - DSM version prior update: DSM 6.2-23739 Update 2 - Loader version and model: JUN'S LOADER v1.03b - DS3617xs - Using custom extra.lzma: NO - Installation type: Virtual Machine (VM version 14, VMware ESXi 6.7); all storage devices are connected to the LSI 9300-8i (host bus adapter, SAS3008, passthrough device). - Additional comments: Update from 6.1.7 to 6.2
    1 point