XPEnology Community

audiophile20

Transition Member · 8 posts
  1. I have checked the Supported Devices list for 12G SAS HBA drivers and did not see these cards on the list. I am now running 16-port and 24-port SAS cards in two separate builds, and I would greatly appreciate the addition of the drivers. Please let me know if that would be possible. Hopefully I did not make a mistake and duplicate the request; if this is a duplicate request, my apologies - please do let me know. Thanks in advance for your assistance. Oops! Details of the builds: one system is running DSM 5.2 and the other DSM 6.1, and both are stable. CPU: Xeon; mobo: Supermicro X10 series; LSI SAS controller; Intel 5xx 10G copper NIC.
  2. I have been running this system with 8 drives and added 4 more. The system was operating normally: the new drives were immediately recognized and started functioning normally. Now, trying to add an additional 4 drives, I am having problems! Once seated, the drives are powering up and running, but XPEnology is not seeing them! I have checked the config files and the MaxDrive var is set to 24 disks. I have power-cycled the unit after installing them, and that has made no difference. It is the same hardware I started with: an LSI controller card that can handle 24 drives. Maybe I need to get a different controller card? That does not make sense to me, since I thought the card could handle the full set of 24 drives. I am lost at this moment. Thank you in advance for your ideas.
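One thing worth checking beyond the MaxDrive setting: DSM also gates slot visibility with the internalportcfg bitmask in synoinfo.conf, where by common convention the 1-based slot N maps to bit N-1 of the mask. A minimal Python sketch of that check (the mask values here are illustrative, not taken from this build):

    # Sketch: test whether a disk slot is enabled by DSM's internalportcfg
    # bitmask. Assumption: 1-based slot N corresponds to bit N-1.
    internalportcfg = 0xfff  # illustrative 12-slot mask; 24 slots would be 0xffffff

    def slot_enabled(mask: int, slot: int) -> bool:
        """True if the 1-based slot number falls inside the bitmask."""
        return bool((mask >> (slot - 1)) & 1)

    for slot in (8, 12, 13, 16):
        print(slot, slot_enabled(internalportcfg, slot))

With a 12-slot mask like 0xfff, slots 13-16 report False even when maxdisks is 24, which would match drives that spin up but never appear in DSM.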
  3. Thanks to Jun for the v1 loader - I am running DSM 6.0 with update 9, and the build is stable on the test machine. The test machine is a bare-metal install; hardware: a Dell Dimension (re-purposed as a test NAS). The production unit is a Norco 24-bay hot-swap 4U with a Supermicro mobo, an Intel Xeon CPU and an LSI controller card; all the mobo controllers are disabled in the BIOS, plus 2x 1GbE NICs. It runs DSM 5.0 and the machine is stable. 1. Now I would love to upgrade the production unit to DSM 6.0 as well. I think I will have to wait for the DSM 6.0 v2.xx loader to be available to make this transition - am I correct in that assumption? 2. Can I use Jun's v1 loader and mod it to see 24 drives? I tried playing with the MaxDrive variable on the test bed and I am not seeing the 24-slot display in DSM 6.0 on the test machine. Can someone please point me in the right direction with answers? My apologies if this has already been asked and answered. Thanks in advance, and again thanks to all the coders on this effort.
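On question 2, one common stumbling block is that the change has to land in both copies of synoinfo.conf, because DSM can restore /etc/synoinfo.conf from the defaults copy on reboot or update. A minimal sketch of that edit in Python, assuming the entry has the form maxdisks="12" (paths as named in the posts on this page; try it on a test box first):

    # Sketch: raise maxdisks in both copies of synoinfo.conf so DSM does not
    # revert the change from /etc.defaults. Requires root on the DSM box.
    import re

    for path in ("/etc/synoinfo.conf", "/etc.defaults/synoinfo.conf"):
        with open(path) as f:
            text = f.read()
        # Assumes an entry of the form: maxdisks="12"
        text = re.sub(r'^maxdisks=.*$', 'maxdisks="24"', text, flags=re.M)
        with open(path, "w") as f:
            f.write(text)
        print("patched", path)

Note that maxdisks alone only sets the slot count; the internalportcfg bitmask (see the sketch above) still has to cover the extra slots.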
  4. Edited: I have now found the FAQ, which had been updated since I last checked the forum, and I am reading through it. If anyone has a functioning config similar to mine, please do let me know how your set-up went and whether I should watch for anything during the upgrade. Thank you. ORIGINAL POST: Hi, as the title says, I have DSM 5.xx running on bare metal in a housing with 24 hot-swap bays, all connected to an LSI controller. I am currently running 8 drives and will expand to 16 next week. Thanks to Jun's build, I have Xpeno DSM 6.0 running on bare metal as a test system; it is stable, no problems. Now I am debating whether I should upgrade my production box to the Xpeno DSM 6.0 build as well... But prior to upping the count to 16 drives, I wanted to see if anyone is running 16 or more drives in a single chassis on Xpeno DSM 6.0. If yes, would it be possible to share config steps, please? Else, can someone please coach me on how I might update/configure my setup to enable DSM 6.0 to see 24 drives? Thank you!
  5. Update: the system is now up and running with 8 disks. I am running this system on a Supermicro LGA 2011-v3 board that has 10 SATA ports. Initially I was trying to keep those ports functioning while controlling my array via the LSI 9305-24i. Having tried for a week, off and on, and being unsuccessful, I changed tactics and disabled all my SATA ports by disabling the two onboard controllers. Reboot, make the mods as suggested in the forums, and voila! The system came back up correctly. No challenges with the USB ports or the location of the drives: since DSM lists the disk serial numbers, you can determine the location of a failing disk when the time comes, as I am sure it will at some future date. DSM is an amazing and user-friendly interface. This buys me time to learn Linux, and maybe I will still switch to FreeNAS simply for the FS; but given that DSM supports Btrfs, I will wait and see how that progresses. Thanks again, Chris, for the schooling on the controller behavior - that started me thinking about disabling the onboard controllers! Now I will have to monitor the forums and determine the right time to migrate to DSM 6.0.
  6. Thanks Chris. I will look for the Adaptec 1000-8i and see if I can source a couple; maybe that is the way to skin the cat. I did load the disks one at a time so that the load was sequential - glad I did that! So cross-flashing the Adaptec firmware will not do the trick then. Going the Adaptec route, I can get breakout cables and will need to wire them sequentially, right?
  7. Before I begin, thank you to the XPEnology team; I am grateful to the knowledgeable forum members who have shared info freely. I am in the process of building a 24-bay NAS box. I have the box working on FreeNAS, but I have also been playing with XPEnology. I love the DSM UI and would love to switch. I have been playing with the build, and I can get it working for 12 HDDs or fewer, but I cannot get DSM to recognize HDDs 13 through 24. Sorry for the long post, but I wanted to document my efforts; if we can solve this, maybe others will find it useful. Since I am coming from a FreeNAS build, my hardware will be overkill, but I already have the gear, so what the heck. Here you go: Supermicro LGA 2011 mobo, Xeon E5-1xxx CPU, ECC memory (crazy to count), LSI 9305-24i HBA, Norco-4224 case with backplanes connected to the controller via 6 SAS cables (4 HDDs per backplane per cable = 6x4 HDDs = 24), and 16x 4TB NAS drives to start, with 8x more to be added later to complete the build.

Based on a post from back in 2014 by Stanza, I built the following bitmap (4-bit blocks, 48 slots total):

    BLOCK     1    2    3    4    5    6    7    8    9    10   11   12
    TEMPLATE  0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000 0000  (options for 48 slots)
    USB       0000 1111 1111 1100 0000 0000 0000 0000 0000 0000 0000 0000  (internal headers + back panel)
    HDD       0000 0000 0000 0011 1111 1111 1111 1111 1111 1111 1111 1111  (10 on mobo + 24 drives through the LSI 9305-24i)

Number of ports:
. 0x eSATA
. 10x USB (internal headers + back-panel ports)
. 10x SATA on the motherboard
. 24x Norco-4224 backplane, via the LSI 9305-24i (an HBA only, NOT a RAID card), connected to the backplanes by 6 SAS cables (4 HDDs per backplane per cable)

Thanks to a great post from Stanza going back to 04/Jan/2014, I worked out the following:
. maxdisks="44" (total ports needed, includes USB)
. internalportcfg="0x3ffffffff" (34 ports)
. esataportcfg="0x0" (0 eSATA ports)
. usbportcfg="0xffc" (10 internal headers + ports)

I edited both of these files and modified the data:
. /etc/synoinfo.conf
. /etc.defaults/synoinfo.conf

Observations/questions:

Q1. Slots 1-10 are showing empty, just as they should, since those ports are not connected to anything, and the GUI shows the correct number of slots. Anything else I should be looking for here?

Q2. HDDs are present in the first 4 slots of the Norco-4224, connected to the LSI controller via the SAS cable controlling drives 0-3, so they should appear as GUI slots 11-14. The GUI shows slots 11 & 12 populated, but slots 13 & 14 show empty even though drives are present and should show as populated. What is the problem? What am I missing? Are there additional files I need to modify?

Q3. Every time I reboot, the config gets reset to factory settings and I am presented with the reinstall-software option. Does that wipe out my mods? Is this correct? Is there any way I can back up the files and load them before boot-up?

Hope someone can help me solve these issues. I really love the software/GUI and would love to use XPEnology + Btrfs, if at all possible.
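For reference, the arithmetic behind Stanza's bitmap can be reproduced mechanically. A sketch in Python, under the common assumption that the internal disk bits occupy the low end of the mask, the eSATA bits sit directly above them, and the USB bits above those; note that this layout yields a USB mask of 0xffc00000000 (0xffc shifted above the 34 disk bits) rather than the unshifted 0xffc used above, so treat both as values to verify against a working config:

    # Sketch: derive the synoinfo.conf port masks from port counts.
    # Assumed bit layout: [USB | eSATA | internal], internal in the low bits.
    internal = 34  # 10 mobo SATA ports + 24 ports on the LSI 9305-24i
    esata = 0
    usb = 10       # internal headers + back-panel ports

    internalportcfg = (1 << internal) - 1                 # 0x3ffffffff
    esataportcfg = ((1 << esata) - 1) << internal         # 0x0
    usbportcfg = ((1 << usb) - 1) << (internal + esata)   # 0xffc00000000

    print(f'internalportcfg="{internalportcfg:#x}"')
    print(f'esataportcfg="{esataportcfg:#x}"')
    print(f'usbportcfg="{usbportcfg:#x}"')

Whether maxdisks should count the USB ports (44) or only the internal disk slots (34) also varies between write-ups, so that value too is an assumption to check.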