XPEnology Community

Posts posted by Battlebengt

  1. 2 hours ago, flyride said:

    The values in /etc.defaults/synoinfo.conf are copied to /etc/synoinfo.conf each reboot.

     

    You will probably need SataPortMap=188 and DiskIdxMap=180008 for your drives to be properly visible spanning controllers.  If you are asking whether this is new for version 6, you need to go to the FAQs and spend some time reviewing the 6.x loader instructions and how to implement those settings into the loader.

     

    Thanks. I will study.

    In the meantime, is it possible to downgrade back to 5.2-5967?
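
    From what I can see so far, those two values go into the loader's grub.cfg rather than synoinfo.conf. A rough sketch of the relevant line, assuming Jun's 1.02b layout (the real line carries additional arguments that should be left untouched):

    # sketch of the sata_args line in grub.cfg on the loader image (layout assumed)
    set sata_args='DiskIdxMap=180008 SataPortMap=188'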

  2. 18 hours ago, flyride said:

    try editing /etc.defaults/synoinfo.conf.

     

    You did not post much information about your system.  Are your disks passthrough?  How many controllers?  How many ports on those controllers? Did you have a custom setting for DiskIdxMap and/or SataPortMap in your loader?

     

     

    Thank you for your answer.

    I did try editing /etc.defaults/synoinfo.conf; that was the first thing I did. But even though I changed the values for 24 disks, only 8 disks are visible in DSM Storage Manager. I think this must be the problem, right? Why isn't the change in synoinfo.conf reflected in DSM?
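
    For what it's worth, this is a quick way to compare the edited defaults with the live copy after a reboot (a sketch, assuming SSH access as root):

    # both copies should show the same values if the edit survived the reboot
    grep -E 'maxdisks|internalportcfg|esataportcfg|usbportcfg' /etc.defaults/synoinfo.conf /etc/synoinfo.conf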

     

    About my system:

    I have a Supermicro motherboard with a built-in LSI controller providing 8 SATA ports. That controller works as a regular PCI HBA and is passed through to the Synology VM. I also have a regular 8-port PCI LSI HBA that is passed through in the same way.

     

    I don't recognize DiskIdxMap or SataPortMap; I have never changed either of those values before. Is that something new in version 6?

     

  3. Hi,

     

    I need your help.

    I have resisted updating to DSM 6 for a very long time, but I finally reached the point where I had to take the plunge.

     

    I have been running XPEnology DSM 5.2-5644 Update 3 (DS3615xs) on ESXi 5.5.0. I have 16 drives, and to support them I modified my synoinfo.conf to maxdisks=24 with:

    esataportcfg="0xff000000"
    usbportcfg="0x300000000"
    internalportcfg="0xffffff"
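
    (My understanding of those masks, in case I have it wrong: each bit marks one disk slot, counted from bit 0, so the values above break down like this.)

    internalportcfg="0xffffff"     # bits 0-23  -> 24 internal slots (matches maxdisks=24)
    esataportcfg="0xff000000"      # bits 24-31 -> 8 eSATA slots
    usbportcfg="0x300000000"       # bits 32-33 -> 2 USB slots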

     

    I made a test VM first and brought it to the same spec as my real VM, with the exception of the 16 drives.

    The update of the test VM went fine, except that the 50 MB boot drive was visible in Storage Manager. I don't know if that is an issue or not.

     

    Anyway, I proceeded to update my real VM with the RAID disks still connected.

    The installation went fine, but after logging in for the first time DSM reported that the RAID had crashed.

    Checking Storage Manager, I can see that the maximum number of disks has reverted to 8.

    After that I edited synoinfo.conf to maxdisks=24 again and restarted the VM, but maxdisks is still 8.
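
    For completeness, the edit I have been making is equivalent to something like this (a sketch; quoting matched to the lines already in the file, and assuming nothing else rewrites it before the reboot):

    # re-apply the 24-disk settings to the defaults copy (copied to /etc/synoinfo.conf at boot)
    sed -i 's/^maxdisks=.*/maxdisks="24"/' /etc.defaults/synoinfo.conf
    sed -i 's/^internalportcfg=.*/internalportcfg="0xffffff"/' /etc.defaults/synoinfo.conf
    sed -i 's/^esataportcfg=.*/esataportcfg="0xff000000"/' /etc.defaults/synoinfo.conf
    sed -i 's/^usbportcfg=.*/usbportcfg="0x300000000"/' /etc.defaults/synoinfo.conf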

     

    I really hope my RAID is not actually crashed and that this is fixable somehow, but I need your help, gentlemen.

    Do you have any suggestion for me that I can try on my test VM?

    Do you need any more information to give a good answer?

     

    Thanks!

     

     
