XPEnology Community

Kanedo

Member
  • Posts: 89
  • Joined
  • Last visited
  • Days Won: 4

Everything posted by Kanedo

  1. Does anyone have any updates on this issue? I have a server that is now unusable with XPEnoboot and DSM 5.1 because of this.
  2. Glad it's working out for you. For future reference, you don't need to modify /etc/rc if it's included in the bootloader image. It should just work as is.
  3. You're in luck! I happen to have two Mellanox ConnectX-2 EN cards myself.
     First, the ConnectX-2 EN kernel modules are included with the XPEnoboot 5.1-5022.3 bootloader img, so if you use it with DSM 5.1, support is already built in. However, if you must have it for DSM 5.0, I've built this from source: Mellanox ConnetX Kernel Modules.zip. This zip contains the two kernel modules, mlx4_core.ko and mlx4_en.ko, needed to enable Mellanox EN cards.
     1) Extract the contents of the above zip, and you'll find mlx4_core.ko and mlx4_en.ko.
     2) Copy mlx4_core.ko and mlx4_en.ko into /lib/modules/
     3) Edit both /etc/rc and /etc.defaults/rc with the following change. Find this around line 327:
     NET_DRIVERS="dca e1000e i2c-algo-bit igb be2net ixgbe tn40xx"
     and modify it to:
     NET_DRIVERS="dca e1000e i2c-algo-bit igb be2net ixgbe tn40xx mlx4_core mlx4_en"
     4) Reboot your system. (A shell sketch of steps 2 and 3 follows this post.)
     If done correctly, you should see your Mellanox ConnectX-2 show up as an extra network interface. If your ConnectX-2 card is connected to another card or a 10Gbit switch, you can look at Network Interfaces to see which interface reports 10000 Mbps. Alternatively, you can use ifconfig and look at the MAC address to determine which one is your Mellanox card.
     NOTE: I've only tested this with Nanoboot and DSM 5.0.
     NOTE 2: While I've been able to achieve 9.89 Gbits/sec using iperf between two systems, it's pretty difficult to get even close to this speed via rsync and smb/cifs unless you have a fast CPU and enough disks in a RAID configuration on both servers. It's really taxing on the CPU to process all this data. The fastest I've been able to achieve so far with my hardware, copying over smb/cifs, is around 350MB/sec. I may be able to increase this if I upgrade my second server's CPU.
     Server 1: Xeon E3-1275v3, 10 disks in RAID5
     Server 2: Celeron G550, 12 disks in RAID5 <-- currently CPU bottlenecked with the Celeron G550. Need to upgrade to a dual or quad-core CPU with very fast single-threaded performance, since rsync is single-threaded.
     Good luck!
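     A minimal shell sketch of steps 2 and 3 above, assuming the zip was extracted into the current directory and that your NET_DRIVERS line matches the stock one quoted (back up and edit by hand if yours differs):

     # Copy the Mellanox modules into place
     cp mlx4_core.ko mlx4_en.ko /lib/modules/
     # Back up both rc files before touching them
     cp /etc/rc /etc/rc.bak
     cp /etc.defaults/rc /etc.defaults/rc.bak
     # Append the two modules to the end of the NET_DRIVERS list in both files
     sed -i 's/ tn40xx"/ tn40xx mlx4_core mlx4_en"/' /etc/rc /etc.defaults/rc
     reboot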
  4. I have a Supermicro H8DME-2, which has a fairly old nVidia chipset. Connecting drives to any of the onboard nForce SATA ports, I get the dreaded hang on "trigger device plug event". Then I tried a Supermicro AOC-SAT2-VM8 PCI-X 8-port SATA card based on a Marvell chipset; connecting any drives to this SATA card, I also get the hang. Finally, I tried my trusty LSI 9211-8i PCIe 8-port SAS card, and it worked right away. Also, there is no menu for me to configure the nForce or Marvell card to select IDE, SATA, or AHCI; I think they're just too old. Similar to what others have mentioned, there is definitely something different with respect to SATA controller compatibility in XPEnoboot/DSM 5.1. Both the onboard nForce SATA and the Marvell PCI-X SATA work just fine with Nanoboot/DSM 5.0. Hopefully this can get resolved soon.
  5. The number of disks you can use in a volume depends on the type of volume you're creating. From what I vaguely remember, you cannot exceed 12 disks per volume for RAID5 and RAID6 with Synology. Don't quote me on this. However, as you can see below, I've successfully created a single SHR1 volume with 24 drives.
  6. You can look into the mod_proxy module for Apache to see if it fits the bill (a minimal example follows this post). If this is for personal access from a remote computer, you're much better off just setting up a VPN server than trying to do all these port forwards.
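     For reference, a minimal mod_proxy reverse-proxy sketch — the hostname, IP, and port here are illustrative, and mod_proxy plus mod_proxy_http must be enabled in your Apache build:

     <VirtualHost *:80>
         ServerName nas.example.com
         # Forward everything to the NAS web UI and rewrite redirects on the way back
         ProxyPass        / http://192.168.1.10:5000/
         ProxyPassReverse / http://192.168.1.10:5000/
     </VirtualHost>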
  7. Bare metal
     Mobo: Supermicro X10SLM-F
     CPU: Intel Xeon E3-1275v3
     RAM: 4GB ECC DDR3 UDIMM
     SAS Card: LSI 9211-8i in IT mode
     Nanoboot 5.0.3.1
     DSM 5.0-4493 update 7
     All 6 onboard SATA ports work
     All 8 LSI SAS ports work
     Both onboard Intel network ports work, with WOL
     Pretty much everything works.
  8. All four cores will be available to the system. I have had no problems using nanoboot with up to 8 cores with DS3612xs firmware. You will be fine with the Q6600. No need to change firmware.
  9. Show us the output from the following command:
     fdisk -l | grep ^Disk
  10. There is a really simple fix for this problem.
      1) ssh into your XPEnology system
      2) edit /etc.defaults/synoinfo.conf
      3) modify the internalportcfg and esataportcfg fields
      Original values:
      internalportcfg="0xfff"
      esataportcfg="0xff000"
      Change to something like this:
      internalportcfg="0xfffff"
      esataportcfg="0x0"
      Save the file and reboot. You should see all of your drives now.
      The above fields are device-letter masks. The original value of 0xfff is a hexadecimal mask indicating that up to 12 devices, from /dev/sda to /dev/sdl, are internal drives. However, if your device assignments fall outside of that range (for example /dev/sdm and /dev/sdn), then you'll need to increase it to accommodate them. The new value of 0xfffff signifies that there can be up to 20 devices, from /dev/sda through /dev/sdt. Hopefully this is enough to cover all of your internal disk letters.
      NOTE: On a real Synology DS3612xs, there are exactly 12 internal ports that get enumerated from /dev/sda to /dev/sdl. However, on your system, you may have additional ports (SATA or PATA) that are unused, but they will still occupy a device letter. So let's say you have 16 ports on your server, but you're only using 12 of them. Some of your drives' enumeration may very well fall outside of the original 12 from /dev/sda through /dev/sdl. Therefore, you need to modify your internalportcfg value to cover your system's device enumeration.
      If you want to know exactly which drive letters are being enumerated on your system, run the following command:
      fdisk -l | grep ^Disk
      If you search for internalportcfg in this forum, you'll see it being discussed multiple times.
      I have 14 SATA ports (6 onboard SATA + 8 SAS ports) on my server, 12 of which are currently populated. The values I use are:
      esataportcfg="0x0"        # I have no esata drives
      internalportcfg="0xfffff" # up to 20 internal device letters, /dev/sda - /dev/sdt
      usbportcfg="0xf00000"     # USB will get enumerated from /dev/sdu - /dev/sdx
      Check my other post to see how I got 24 drives to show up in VMware: http://xpenology.com/forum/viewtopic.php?f=2&t=3529&p=21475&hilit=internalportcfg#p21475
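      A small shell sketch of the mask arithmetic (values mirror the examples above) — each bit, starting from the least significant, claims one device letter beginning at /dev/sda:

      printf '0x%x\n' $(( (1 << 12) - 1 ))   # 0xfff   -> 12 devices, /dev/sda - /dev/sdl
      printf '0x%x\n' $(( (1 << 20) - 1 ))   # 0xfffff -> 20 devices, /dev/sda - /dev/sdt
      # usbportcfg: bits 20-23 set, everything below clear
      printf '0x%x\n' $(( ((1 << 24) - 1) ^ ((1 << 20) - 1) ))   # 0xf00000 -> /dev/sdu - /dev/sdx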
  11. You absolutely can have 24 disks, as long as you modify /etc.defaults/synoinfo.conf with the correct values. To demonstrate this, I am running Nanoboot 5.0.3.1 + DSM 5.0-4493 update 3 in VMware Fusion, and I added 24 x 20GB virtual SATA drives to this VM instance just to show that it is indeed possible. These are the values I have modified in /etc.defaults/synoinfo.conf. Your actual values may differ depending on how your installation enumerates your drives, so the following settings are just an example.
      maxdisks="24"
      esataportcfg="0x0"
      internalportcfg="0xffffff"
      usbportcfg="0x0"
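      As a quick sanity check, 0xffffff is 24 ones in binary — one bit per internal drive letter, /dev/sda through /dev/sdx — which agrees with maxdisks="24":

      printf '0x%x\n' $(( (1 << 24) - 1 ))   # 0xffffff -> covers 24 devices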
  12. For faster writes, use /dev/rdisk# instead of /dev/disk# for the dd command.
      https://github.com/abock/image-usb-stick/issues/5
      http://superuser.com/questions/631592/m ... n-dev-disk
      From "man hdiutil":
      DEVICE SPECIAL FILES
      Since any /dev entry can be treated as a raw disk image, it is worth noting which devices can be accessed when and how. /dev/rdisk nodes are character-special devices, but are "raw" in the BSD sense and force block-aligned I/O. They are closer to the physical disk than the buffer cache. /dev/disk nodes, on the other hand, are buffered block-special devices and are used primarily by the kernel's filesystem code.
      Example:
      sudo dd if=~/Desktop/NanoBoot-5.0.2.4-fat.img of=/dev/rdisk2 bs=1m
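      For completeness, the typical macOS flow around that dd (the disk number is illustrative — identify your USB stick first, since dd will happily overwrite whatever you point it at):

      diskutil list                      # find the USB stick, e.g. /dev/disk2
      diskutil unmountDisk /dev/disk2    # the volumes must be unmounted before dd can write
      sudo dd if=~/Desktop/NanoBoot-5.0.2.4-fat.img of=/dev/rdisk2 bs=1m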
  13. XPEnology is a superset of the original Synology firmware. Care must be taken to verify that all the hacks, additional drivers, etc. work with each new DSM release. The original Synology hardware uses a USB DOM (drive) for boot, so no emulation is at work. However, you don't see the USB boot drive exposed to the user on a Synology because the kernel hides it based on a VID:PID combination that matches the USB DOM in the Synology system.
  14. Trantor, I've finally gotten a chance to turn on my AMD server again. Using XPEnology DS3612xs DSM 4.3 build 3810++ (repack v1.0), I'm happy to report that the Marvell SATA driver is working beautifully now that it's a built-in module. I'm also happy to report that powernow-k8.ko is working great as well. My AMD Opteron 8384 shows all four P-states after insmod powernow-k8.ko, and I can change the frequency governor to ondemand with cpufreq-set. I have 12 drives in the system so far and will be adding 12 more soon to max out my Supermicro server. Thanks for incorporating these in the last several releases, and please do continue to include them in future releases.
  15. I own a LSI 9211-4i, unfortunately for now it's not possible to install DSM on drives attached to it, but disks are detected in DSM when it's installed on a drive connected to the motherboard. Yesterday I built a test version with the mpt2sas driver built into the kernel and not loaded as a module... not yet tested on my system ^^
      Trantor, what is the reason we cannot install DSM to LSI-attached drives? Is it because mpt2sas.ko is being loaded too late in the boot process? I would assume that if it's a built-in module instead of an LKM, it should work just fine. Sorry, I don't have my LSI 9211-8i at the moment to test this theory; it's currently in storage. What I do know is that sata_mv started behaving correctly once you made it a built-in module. When it was an LKM, it got loaded so late in the boot-up process that it left the RAID-1 partition in degraded mode after each reboot. When it became a built-in module, all problems went away. I'm looking forward to your next build, when both the patched sata_mv and the generic mpt2sas are built-in modules.
  16. This is definitely due to the internalportcfg and esataportcfg values. You need to modify /etc.defaults/synoinfo.conf and reboot. See this post: viewtopic.php?f=2&t=1028&start=80#p5413
      That did it, thanks Kanedo. All 3 disks are getting recognized. Another issue popped up: after each reboot, the system partition "goes bad". After a click and about 2 seconds, it's back to normal.
      This is due to either incorrect removal of the drive during a shutdown/reboot or, more likely, the LSI mpt2sas.ko LKM being loaded too late in the boot-up process. I wish I could test this, but my other server with the LSI card is unavailable at the moment, and I can't help you troubleshoot this issue until I regain access to it in November.
      Trantor, is the LSI driver being loaded during the USB boot and prior to booting off the HDDs? I had a similar problem with sata_mv.ko being loaded too late in the boot-up process and got the same system partition error after each reboot. Would it be possible to build mpt2sas into the kernel as a built-in module?
  17. I built with the official sata_mv.c from kernel.org, but the system starts and then shuts down after 30 sec. I saw an error about GPIO when booting, so I added the syno_sata_mv_gpio_write function to sata_mv.c and, like your voodoo magic... it boots. Everything seems to work fine. Patched kernel for Marvell; diff file between the original sata_mv.c and syno's one. Feedback please. Is this the possible fix for your issue with DSM 4.3?
      Trantor, your zImage-4.2-marvell is working beautifully. No hangs, no shutdowns, and all drives on the Marvell controller are now working at the correct mode and speed. They are properly detected even after reboots. Thank you so much for helping me get this working. Please include this in your next release. Could you post a copy of your modified sata_mv.c that is making this work?
  18. Trantor, unfortunately I can't get the network up and running using your 4.3 test build. I have an Nvidia NIC. Did you add support for that in .config? Thanks.
  19. Wow, great. What did you patch/modify? Is this the sata_mv.c from the 4.3 branch? DSM is not shutting down with sata_mv as a module? Yes, of course, if this fixes the issue it will be included in the next repack.
      Trantor, I've gotten kernel.org's 3.2.30 sata_mv.c to build as a built-in module, and it works just fine. Disregard the previous attachment. Instead, all you have to do is replace drivers/ata/sata_mv.c with the one (3.2.30) from kernel.org. It will build just fine. Link to the 3.2.30 source tree: https://www.kernel.org/pub/linux/kernel ... .30.tar.xz
      Could you make a test zImage with this for me to test? (A sketch of the swap follows this post.)
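      For anyone following along, the swap itself is just a file replacement before rebuilding. A rough sketch, assuming the DSM kernel source lives in ./dsm-kernel and the cross toolchain is already configured (paths are illustrative):

      # Fetch vanilla 3.2.30 and drop its sata_mv.c into the DSM kernel tree
      wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.2.30.tar.xz
      tar xf linux-3.2.30.tar.xz
      cp linux-3.2.30/drivers/ata/sata_mv.c dsm-kernel/drivers/ata/sata_mv.c
      # Rebuild the kernel image as usual (sata_mv built in, i.e. CONFIG_SATA_MV=y)
      cd dsm-kernel && make zImage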
  20. This is definitely due to the internalportcfg and esataportcfg values. You need to modify /etc.defaults/synoinfo.conf and reboot. See this post: viewtopic.php?f=2&t=1028&start=80#p5413
  21. Wow, great. What did you patch/modify? Is this the sata_mv.c from the 4.3 branch? DSM is not shutting down with sata_mv as a module? Yes, of course, if this fixes the issue it will be included in the next repack.
      So I just randomly found a v1.28 sata_mv.c source file at the above link. This source didn't initially build due to some unresolved functions, so I simply replaced the offending lines with lines from syno's sata_mv.c. After building, I was able to insmod it, and it loaded all the drives on the Marvell controller without the errors I had previously noted. Let me play with this source a bit more later today. So far it's kind of voodoo magic that I got this working at all. Let me go and dig around for a vanilla sata_mv.c for comparison.
      Anyhow, the bottom line is that syno's sata_mv.c is heavily modified, to the point of being broken for my card. As I've demonstrated, it is possible to have a working driver; I just need to spend some more time figuring out exactly what changed in the syno version for it to be so broken. I'll post updates as I progress.
      In the meantime, could you detail your build toolchain, source, and environment? Perhaps a recipe file that someone like myself can follow, so we have fewer problems when merging our work. How about a source code repo? Also, do you know anything about the latest 4.3 GPL source and toolchain? Is it good enough to use for anything?
  22. Trantor,
      EDIT: please disregard this attachment. I've come up with a much cleaner build recipe using the vanilla sata_mv.c from kernel.org. See post: viewtopic.php?f=2&t=1028&start=140#p5556
      I've been able to successfully build a custom version of sata_mv.ko and can load it properly using my Marvell 88SX6081 PCI-X card.
      Original source: https://raw.github.com/robclark/kernel- ... /sata_mv.c
      My modification to the above source (sata_mv.c) and the kernel module (sata_mv.ko).
  23. I built the kernel, but it sends a SIGTERM halfway through booting and shuts down. EDIT: never mind, I had incompatible modules in rd.gz.
  24. You can modify it directly on the system:
      1) ssh into the system
      2) cd /volumeUSB1/usbshare1-1/boot/grub
      3) edit menu.lst
      I have an ESXi install, not on a dedicated computer.
      If you can boot the XPEnology VM instance, shouldn't you have access to the syno vdi?
  25. You can modify it directly on the system:
      1) ssh into the system
      2) cd /volumeUSB1/usbshare1-1/boot/grub
      3) edit menu.lst