XPEnology Community

Showing results for 'maxdrives synoinfo.conf'.

Found 8 results

  1. Also, it seems that the maxdrives and internalportcfg settings in /etc/synoinfo.conf keep reverting to their original values. I have made the change in both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf. The usbportcfg setting is retained, but the others revert to maxdrives=16 and internalportcfg=0xffff.
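
A quick way to see which copy of the file is winning after a reboot is to dump the drive-related keys from both locations. This is only a sketch (run as root); the key names are the ones discussed in this thread:

```shell
# Sketch: compare the drive-related keys in the live (/etc) and default
# (/etc.defaults) copies of synoinfo.conf to see which edits survived.
show_key() {                     # usage: show_key KEY FILE
  grep "^$1=" "$2" 2>/dev/null | cut -d= -f2
}
for key in maxdrives internalportcfg esataportcfg usbportcfg; do
  printf '%-16s /etc: %-12s /etc.defaults: %s\n' "$key" \
    "$(show_key "$key" /etc/synoinfo.conf)" \
    "$(show_key "$key" /etc.defaults/synoinfo.conf)"
done
```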
  2. Happy new year, everyone. I have a Z97 Gaming 5 with 6 SATA ports. I test-installed DS918+ with DSM 6.2 and DS916+ with DSM 6.1.7, both with the correct drivers and loader from Jun. However, neither version seems to work with the Dell PERC H310, because I can't see the eSATA pop-up in the top-right corner. So I modified synoinfo.conf in both /etc.defaults and /etc to set up the SATA and eSATA ports. The Z97 Gaming 5 has 6 SATA ports and the Dell PERC H310 has 8. I know both 918+ and 916+ only support 12 disks, so I set things up as below: 4 internal ports stay SATA, 2 internal ports plus 6 ports from the PERC H310 become eSATA, and the last 2 H310 ports are disabled by the default maxdrives limit.

    esataportcfg: 111111110000 = 0xff0
    internalportcfg: 000000001111 = 0xf

    After a reboot, HOWEVER, the synoinfo.conf in /etc.defaults rolled my edit back to esataportcfg="0x0" and internalportcfg="0xff", while the synoinfo.conf in /etc still has my edited esataportcfg="0xff0" and internalportcfg="0xf". And I can't see any eSATA drives pop up when I log in to the system. Before this I was using DS3617xs and DS3615xs with 6.1.7 and 6.2, and both worked without any issue, so I believe it's not an H310 problem. Can anyone help me with this?
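
The port values in the post above are plain bitmasks: bit 0 is the first drive slot, and each config value is just the binary mask written in hex. A small sketch to double-check such masks (assumes bash for the `2#` base-2 literals):

```shell
# Each bit in internalportcfg/esataportcfg selects one drive slot
# (bit 0 = the first slot). Recomputing the masks from the post:
internal=$((2#000000001111))    # 4 onboard ports kept as internal SATA
esata=$((2#111111110000))       # 2 onboard + 6 H310 ports as eSATA
printf 'internalportcfg="0x%x"\n' "$internal"   # 0xf
printf 'esataportcfg="0x%x"\n' "$esata"         # 0xff0
# sanity check: a slot must never be both internal and eSATA
[ $((internal & esata)) -eq 0 ] && echo 'masks do not overlap'
```

This prints 0xf and 0xff0, matching the values quoted above; the overlap check is a useful habit, since overlapping masks are a common cause of drives not appearing.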
  3. The reason you see 12 drives is that "maxdrives" in "synoinfo.conf" is set to 12. As discussed in many places on the forums, this value can be changed, but it may cause you problems if you update your system.
  4. NeoID

    DSM 6.2 Loader

    What I mean is that I don't understand how Jun is able to override synoinfo.conf and make it survive upgrades (I've never had any issues with it, at least), but when I change /etc.defaults/synoinfo.conf it may or may not survive an upgrade... It sounds to me like I should put the "Maxdrives=24" (or whatever) somewhere else, in another file, that in turn makes sure synoinfo.conf always has the correct value. For example, I see that there's also /etc/synoinfo.conf, which contains the same values. Is that file persistent, and does it override /etc.defaults/synoinfo.conf? It seems, by the way, that I'm still missing something regarding the sata_args or so, as I now get the following message when trying to log in to my test VM: "You cannot login to the system because the disk space is full currently. Please restart the system and try again." I have never seen that error message before, and the system is obviously not out of disk space from just sitting there idling for 5 hours. It works again after a reboot, but something still isn't right. I'll update my post when I've figured out what's causing this.
  5. flyride

    DSM 6.2 Loader

    Maxdrives=16 is already overriding synoinfo.conf. This is no different from the Maxdrives=12 we have had up until now, with no more or less risk from upgrades. You'd have to get Jun to speak about its user configurability, however. I suspect it's not easy to do, as the changes in grub affect arguments passed to the kernel at boot time, and Maxdrives is not a kernel parameter.
  6. NeoID

    DSM 6.2 Loader

    I see... how will this work in terms of future upgrades? I know Synology sometimes overwrites synoinfo.conf. If this implementation survives upgrades... Is there a way to change it/make it configurable through grub? I currently have two VMs, each with a 16-port LSI HBA, but it would be much nicer to merge them into one with 24 drives. However, I'm not a fan of changing synoinfo.conf and risking a crashed volume if it's suddenly overwritten. I guess I could make a script that runs on boot and makes sure the variable is set before DSM is started, but it would be much nicer if @jun would take the time to implement this or suggest a "best practice" method of changing maxdrives and the two other related (usb/esata) variables.
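
The boot-script idea mentioned in the post above could look roughly like the sketch below. This is not an official mechanism; the path (/usr/local/etc/rc.d is one option on DSM 6) and the value 24 are assumptions to adapt to your own setup:

```shell
#!/bin/sh
# Hypothetical boot script sketch (e.g. placed under /usr/local/etc/rc.d
# on DSM 6; path and value are assumptions): re-apply maxdrives to both
# copies of synoinfo.conf so an upgrade that rewrites the file cannot
# silently shrink the visible array.
apply_maxdrives() {              # usage: apply_maxdrives VALUE FILE...
  want="$1"; shift
  for f in "$@"; do
    [ -f "$f" ] || continue      # skip copies that do not exist
    sed -i "s/^maxdrives=.*/maxdrives=\"$want\"/" "$f"
  done
}
apply_maxdrives 24 /etc/synoinfo.conf /etc.defaults/synoinfo.conf
```

Note the caveat raised elsewhere in this thread still applies: if an upgrade rewrites the file before the script runs, the array stays hidden until the script has run again, so this reduces the risk rather than removing it.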
  7. NeoID

    DSM 6.2 Loader

    I will answer my own post, as it might help others or at least shine some light on how to configure the sata_args parameters. As usual, I modified grub.cfg and added my serial and MAC before commenting out all menu items except the one for ESXi. When booting the image, Storage Manager gave me the setup above, which is far from ideal: the first entry is the bootloader attached to virtual SATA controller 0, and the other two are my first and second drives connected to my LSI HBA in pass-through mode. In order to hide the bootloader drive I added DiskIdxMap=1F to the sata_args; this pushed the bootloader out of the 16-disk boundary, so I was left with the two data drives. In order to move them to the beginning of the slots I also added SasIdxMap=0xfffffffe. I tested multiple values, decreasing one by one, until all drives aligned correctly. The reason you see 16 drives is that maxdrives in /etc.defaults/synoinfo.conf was set to 16. I'm not exactly sure why it is set to that value and not 4 or 12; maybe it's changed based on your hardware/SATA ports and the fact that my HBA has 16 ports? No idea about that last part, but it seems to work as intended.

    set sata_args='DiskIdxMap=1F SasIdxMap=0xfffffffe'
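
For reference, the quoted line belongs in the loader's grub.cfg; annotated below with the interpretation the post gives (the comments are my reading, not loader documentation):

```shell
# Fragment for Jun's loader grub.cfg (values exactly as in the post).
# DiskIdxMap=1F        -> map the first SATA controller (the loader's
#                         virtual boot disk) to slot 0x1F = 31, past
#                         the visible drive slots, hiding the bootloader.
# SasIdxMap=0xfffffffe -> offset applied to SAS/HBA disk indexes so the
#                         HBA's data drives start at slot 0.
set sata_args='DiskIdxMap=1F SasIdxMap=0xfffffffe'
```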
  8. pigr8

    DSM 6.1.x Loader

    Tested, and everything is working as expected: I upgraded my Gen8 baremetal from 5.2 to 6.0.2-u3 with all data and config intact, no problems whatsoever. The only thing is that after the upgrade DSM changed my local IP to DHCP instead of leaving it static, but that's not an issue; I reverted back to static in 10 seconds. All packages are working fine, update 3 applied and rebooted with no issues, and I changed my filesystem to btrfs, so I'm pretty happy.

    Edit: one last thing, how can I change the max drives shown in the disk manager? It shows 12 slots; I wanted to limit it to 5.

    Edit 2: never mind, synoinfo.conf had maxdrives set to 12; fixed.