Hiromatsu Posted January 24, 2020 #1 (edited)
Hello there, I am at a loss on how to fix my problem. I have the following hardware: Supermicro X10SRi-F, Intel E5-1620 v3, 64GB DDR4 (https://www.supermicro.com/en/products/motherboard/X10SRi-F). Using loader v1.04b it works like a charm. Since I would like to use 10GbE networking, I tried loader v1.03b (DS3617xs). With cleared HDDs I am able to complete the installation process, but the system doesn't come online afterwards. Does anybody have an idea why, and how to fix it? Can I somehow provide logs? Kind regards
Hiromatsu Posted January 24, 2020 Author #3
I would prefer the Intel X520-DA2, as I have some spares and I know they run out of the box with the DS3617xs (I have a real one). But the type of 10GbE adapter isn't really my question, because I have tried with and without the 10GbE card - no luck either way.
flyride Posted January 24, 2020 #4
I would expect that motherboard to run 1.03b and the associated DSM platform well. Make sure you are building your system and connecting with the gigabit port first. Also, try DS3615xs instead of DS3617xs; you don't need the extra core support of DS3617xs.
flyride Posted January 24, 2020 #5
Also be sure you are setting up Legacy BIOS boot (not UEFI) with 1.03b. See more here: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
Hiromatsu Posted January 24, 2020 Author #6
Thank you very much. I had already set the BIOS to Legacy, and the strange thing is that the system connects with cleanly formatted HDDs and I can start the installation process... just afterwards it no longer responds and is no longer found via find.synology.com or the Assistant tool. Could that be because of the "wrong" type of onboard NICs? I have read about kernel panics if you don't have an Intel e1000e NIC or a matching extra.lzma. That's why I bought an Intel PRO/1000 PT Server Adapter (EXPI9400PTBLK), but that didn't change a thing. I will try the DS3615xs installation instead... Thank you again
flyride Posted January 24, 2020 #7
I am pretty sure the onboard i350 NIC is e1000e, so that should be OK. If you wish to prove it with another NIC, get yourself a cheap ~$20 Intel CT adapter and try that.
Hiromatsu Posted January 26, 2020 Author #8
@flyride Thank you very much! Using the 1.03b loader for the DS3615xs worked out of the box. The system has been up and running since yesterday! I guess I would never have tried the 3615xs on my own, so again: thank you!!!
IG-88 Posted January 26, 2020 #9
1 hour ago, Hiromatsu said:
Using the 1.03b loader for the DS3615xs worked out of the box. The system has been up and running since yesterday! I guess I would never have tried the 3615xs on my own, so again: thank you!!!
If you use the new 0.6 extra.lzma for 6.2.2 from here, https://xpenology.com/forum/topic/21663-driver-extension-jun-103b104b-for-dsm622-for-3615xs-3617xs-918/, there shouldn't be a difference between 3615 and 3617 when it comes to drivers. If you still want to use 918+, you can try it with the new 0.8 extra.lzma (ETA next weekend) - but I don't see any added value with your hardware (no GPU, no M.2 NVMe); it's only useful to test if you have problems with 3615.
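For reference, a minimal sketch of swapping a downloaded extra.lzma into Jun's loader image from a Linux machine. The partition number, mount point, and file paths below are assumptions based on the usual 1.03b/1.04b image layout, not details from this thread, so verify against your own image before writing anything back:

sudo losetup -Pf synoboot.img              # map the image, exposing /dev/loop0p1, /dev/loop0p2, ...
sudo mkdir -p /mnt/synoboot
sudo mount /dev/loop0p2 /mnt/synoboot      # partition that normally holds extra.lzma (plus zImage/rd.gz)
sudo cp ~/Downloads/extra.lzma /mnt/synoboot/extra.lzma   # replace the driver pack
sudo umount /mnt/synoboot
sudo losetup -d /dev/loop0                 # detach, then write the image back to the USB stick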
Hiromatsu Posted January 26, 2020 Author #10 (edited)
9 hours ago, IG-88 said:
If you use the new 0.6 extra.lzma for 6.2.2 from here, https://xpenology.com/forum/topic/21663-driver-extension-jun-103b104b-for-dsm622-for-3615xs-3617xs-918/, there shouldn't be a difference between 3615 and 3617 when it comes to drivers. If you still want to use 918+, you can try it with the new 0.8 extra.lzma (ETA next weekend) - but I don't see any added value with your hardware (no GPU, no M.2 NVMe); it's only useful to test if you have problems with 3615.
Thank you, I would really like to do that, but I think I am kind of too stupid for that. In particular I would like to make use of SHR, which the 3615xs and 3617xs can't do. I would really like to experiment a little, because I have two NVMe devices here (not M.2, but PCIe) and an Avago SAS3 controller plus a 36-bay 4U chassis, but I wasn't able to make the config changes work for more drives than the 1.04b loader came with... I will keep an eye on the release of the 0.8 extra.lzma for when I have an hour or two to tinker around.
flyride Posted January 26, 2020 #11 (edited)
You can turn on SHR with 3615/3617 by changing two lines in /etc.defaults/synoinfo.conf
IG-88 Posted January 26, 2020 #12
5 minutes ago, Hiromatsu said:
Thank you, I would really like to do that, but I think I am kind of too stupid for that.
Maybe try this one: https://www.youtube.com/watch?v=2PSGAZy7LVQ
Keep in mind that 24 is the maximum as long as you have not tested higher counts yourself (with a RAID set). quicknick claimed that certain higher drive counts work, but as far as I know no one has tested that: 12, 16, 20, 24, 25, 26, 28, 30, 32, 35, 40, 45, 48, 50, 55, 58, 60, 64
https://xpenology.com/forum/topic/8057-physical-drive-limits-of-an-emulated-synology-machine/?do=findComment&comment=87213
If you need NVMe SSDs on bare metal, then 918+ is your goal.
15 minutes ago, Hiromatsu said:
Avago SAS3 controller
If you can give me the PCI ID, I can check in the driver whether it will work. SAS2 is no problem; SAS3 depends on how new the chip is - older SAS3 will work.
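For reference, these are the synoinfo.conf entries that drive-count guides like the video above walk through. The values shown are an illustrative 24-bay example, not taken from this thread; the bitmask layout in particular is an assumption and has to be adapted to your own port arrangement:

# illustrative 24-drive example for /etc.defaults/synoinfo.conf (mirror the change in /etc/synoinfo.conf)
maxdisks="24"                 # total number of internal drive slots DSM will present
internalportcfg="0xffffff"    # bitmask, one bit per internal port: 24 bits set = 24 drives
esataportcfg="0x0"            # remove any bits now claimed as internal from the eSATA mask
usbportcfg="0x3000000"        # USB bits shifted above the internal range (assumed layout)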
flyride Posted January 26, 2020 #13
7 minutes ago, Hiromatsu said:
I would really like to experiment a little, because I have two NVMe devices here (not M.2, but PCIe) and an Avago SAS3 controller plus a 36-bay 4U chassis
At some point you have to make some decisions on which features you want to support. NVMe direct support is only in DS918+. RAIDF1 support (if you ever need it) is only in 3615/3617. 10GbE and SAS support is better in 3615/3617, but that is somewhat mitigated if you experiment with add-in drivers from IG-88. My main system is similar to yours - 10GbE, and I use enterprise SATA SSD drives now, so RAIDF1 is important. I also have two U.2 NVMe drives, and the only way I found to get everything to work was to use ESXi to run DS3615 (for RAIDF1 and 10GbE support) and present the NVMe devices via RDM.
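For anyone curious how the RDM part of such a setup looks, here is a minimal sketch from the ESXi host shell; the device identifier and datastore path are placeholders, not values from this thread:

ls /vmfs/devices/disks/                      # locate the NVMe device (e.g. a t10.NVMe____... entry)
vmkfstools -z /vmfs/devices/disks/t10.NVMe____EXAMPLE_DEVICE \
    /vmfs/volumes/datastore1/xpenology/nvme-rdm.vmdk   # -z creates a physical compatibility mode RDM pointer
# then attach nvme-rdm.vmdk to the DSM VM as an existing hard disk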
Hiromatsu Posted January 26, 2020 Author #14
30 minutes ago, flyride said:
You can turn on SHR with 3615/3617 by changing two lines in /etc.defaults/synoinfo.conf
With this line: support_syno_hybrid_raid = "yes" above supportphoto="yes"? Cool! Thanks for this super cool hint!
18 minutes ago, IG-88 said:
Maybe try this one: https://www.youtube.com/watch?v=2PSGAZy7LVQ
Thank you for that video, I will definitely give it a try.
18 minutes ago, IG-88 said:
Keep in mind that 24 is the maximum as long as you have not tested higher counts yourself (with a RAID set). quicknick claimed that certain higher drive counts work, but as far as I know no one has tested that: 12, 16, 20, 24, 25, 26, 28, 30, 32, 35, 40, 45, 48, 50, 55, 58, 60, 64
https://xpenology.com/forum/topic/8057-physical-drive-limits-of-an-emulated-synology-machine/?do=findComment&comment=87213
Not a problem, 24 would be enough; I will just leave the rear backplane unattached! Is there still the limitation that it does not work if all HDDs are attached to a SAS controller? Last time I tried, one HDD had to be attached to the onboard SATA controller.
18 minutes ago, IG-88 said:
If you need NVMe SSDs on bare metal, then 918+ is your goal. If you can give me the PCI ID, I can check in the driver whether it will work. SAS2 is no problem; SAS3 depends on how new the chip is - older SAS3 will work.
Does this contain anything you would need? (It is an older card, and the backplane is a Supermicro BPN-SAS3-846EL1.)
<node id="storage" claimed="true" class="storage" handle="PCI:0000:06:00.0">
  <description>Serial Attached SCSI controller</description>
  <product>SAS3008 PCI-Express Fusion-MPT SAS-3</product>
  <vendor>Broadcom / LSI</vendor>
  <physid>0</physid>
  <businfo>pci@0000:06:00.0</businfo>
  <logicalname>scsi7</logicalname>
  <version>02</version>
  <width units="bits">64</width>
  <clock units="Hz">33000000</clock>
  <configuration>
    <setting id="driver" value="mpt3sas" />
    <setting id="latency" value="0" />
  </configuration>
  <capabilities>
    <capability id="storage" />
14 minutes ago, flyride said:
At some point you have to make some decisions on which features you want to support. NVMe direct support is only in DS918+. RAIDF1 support (if you ever need it) is only in 3615/3617. 10GbE and SAS support is better in 3615/3617, but that is somewhat mitigated if you experiment with add-in drivers from IG-88. My main system is similar to yours - 10GbE, and I use enterprise SATA SSD drives now, so RAIDF1 is important. I also have two U.2 NVMe drives, and the only way I found to get everything to work was to use ESXi to run DS3615 (for RAIDF1 and 10GbE support) and present the NVMe devices via RDM.
You are absolutely right, flyride! I have had an ESXi server running in a similar config for a year now, and the electricity bill made me think of running it on bare metal, because then I can use the power on/off schedule. Besides that: I love to tinker around and try things, so I will experiment with the drivers from IG-88.
flyride Posted January 26, 2020 #15
2 minutes ago, Hiromatsu said:
With this line: support_syno_hybrid_raid = "yes" above supportphoto="yes"? Cool! Thanks for this super cool hint!
That's the one. I think you also must comment out supportraidgroup="yes" (i.e. change it to #supportraidgroup="yes").
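Putting the two changes from this exchange together, a minimal sketch of the relevant lines in /etc.defaults/synoinfo.conf might look like the following. The placement next to supportphoto="yes" follows the posts above; backing the file up first and mirroring the edit in /etc/synoinfo.conf are assumptions, not instructions from the thread:

# sketch of the SHR change discussed above; edit via SSH as root and back up the file first
support_syno_hybrid_raid="yes"    # add this line (e.g. just above supportphoto="yes")
#supportraidgroup="yes"           # comment out the existing RAID Group entry
supportphoto="yes"                # existing line, shown only for placement context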
IG-88 Posted January 26, 2020 #16
20 minutes ago, Hiromatsu said:
Does this contain anything you would need?
I was thinking of something lspci would output, like this (for a NIC): Class 0200: Device 10ec:8168 (rev 16). I guess it might be 1000:0097 in your case: https://pci-ids.ucw.cz/read/PC/1000/0097 - this one would be OK with 918+ and 3617.
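For anyone following along, a quick way to pull that vendor:device ID from a Linux shell on the box (including DSM over SSH, if lspci is available there); the grep pattern and the example output line are illustrative, not copied from this system:

lspci -nn | grep -i 'sas\|lsi'
# a SAS3008 card would show up roughly like:
# 06:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02)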
Hiromatsu Posted January 26, 2020 Author #17
8 minutes ago, IG-88 said:
I was thinking of something lspci would output, like this (for a NIC): Class 0200: Device 10ec:8168 (rev 16). I guess it might be 1000:0097 in your case: https://pci-ids.ucw.cz/read/PC/1000/0097 - this one would be OK with 918+ and 3617.
[1000:0097] is correct, I just found it. Great to know that it will work. Thank you again
-iliya- Posted May 6, 2021 #18
On 26.01.2020 at 12:16, IG-88 said:
If you use the new 0.6 extra.lzma for 6.2.2 from here, https://xpenology.com/forum/topic/21663-driver-extension-jun-103b104b-for-dsm622-for-3615xs-3617xs-918/, there shouldn't be a difference between 3615 and 3617 when it comes to drivers. If you still want to use 918+, you can try it with the new 0.8 extra.lzma (ETA next weekend) - but I don't see any added value with your hardware (no GPU, no M.2 NVMe); it's only useful to test if you have problems with 3615.
Hi, I am considering upgrading my 3617 to new hardware: changing from the Z87X to a server board, X10SRA-F + E5-2628L + 32GB ECC Reg. But I have questions: will this board run the 3617 image, and will USB 3.0 work in Syno? Basically I plan to use it as fast storage + VMM for Win10 (Davinci Resolve database + qBittorrent) + an OS with NextCloud. Will ECC Reg RAM be useful in Syno?
flyride Posted May 6, 2021 #19
10 minutes ago, -iliya- said:
Hi, I am considering upgrading my 3617 to new hardware: changing from the Z87X to a server board, X10SRA-F + E5-2628L + 32GB ECC Reg. But I have questions: will this board run the 3617 image, and will USB 3.0 work in Syno? Basically I plan to use it as fast storage + VMM for Win10 (Davinci Resolve database + qBittorrent) + an OS with NextCloud. Will ECC Reg RAM be useful in Syno?
Yes, it should all work. However, that processor cannot be fully utilized (24 threads vs. a maximum of 16 threads supported on DS3617xs). ECC is transparent to the Linux OS, so there will be a benefit with no specific action on your part. I'm not a huge fan of Syno's VMM implementation; you might consider running DSM in ESXi, and other VMs in parallel as needed (you can set up separate storage for ESXi VMs, or run NFS out of DSM).
-iliya- Posted May 6, 2021 #20
I just do not really like ESXi, because it is important for me that DSM runs on bare metal and works at maximum speed with an array of 16 disks. For Win10 I do not need great performance, but with a VM you can quickly and easily take snapshots of the guest OS. Right now everything works for me on the Z87X with an i7-4771, but there are not enough PCIe lanes to add 2-3 10G network cards - that's why I thought of getting the server board.
flyride Posted May 6, 2021 #21
DSM is not particularly CPU limited. There is little to no impact from running as a VM, particularly if you pass through your SATA controller. But to each their own.
-iliya- Posted May 7, 2021 #22
I tried DSM on ESXi - HDD speed was lower than with DSM on bare metal. I work with video and need maximum HDD speed.
-iliya- Posted May 13, 2021 #23
Hi, today I successfully transferred my Syno to the new MoBo X10SRA-F - everything works fine. VMM can assign all 16 cores to a guest OS; I assigned 6 cores to Win10. One question remains: how do I redirect the console to COM1 so I can watch the system boot in the IPMI viewer, or via the COM1 port and PuTTY?