NeoID

Members
  • Content Count

    204
Community Reputation

6 Neutral

About NeoID

  • Rank
    Super Member


  1. NeoID

    DSM 6.2 Loader

    As mentioned before, when using the following sata_args the system crashes after a little while. I have given my VM a serial port so I can see what's going on, but I have a hard time understanding the issue. Can anyone explain what's going on here and how I can get these sata_args to work? I mean, everything looks right in DSM/Storage Manager...

        set sata_args='DiskIdxMap=1F SasIdxMap=0xfffffffe'
  2. NeoID

    DSM 6.2 Loader

    I know this is only a cosmetic thing, but no matter how I try to hide the boot drive using DiskIdxMap in the sata_args, DSM stops working after a few hours: at first I'm only presented with a white page when trying to log in, and then I get a message saying the disk space is full. Any ideas? I guess the best choice is to stick with the default "set sata_args='SataPortMap=4'" and just ignore the bootloader and the four empty slots before the PCI card's drives (see the grub.cfg excerpt below). This way you still get to use 12 drives.
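
    For reference, the sata_args line lives in grub.cfg on the loader's first partition, next to the serial and MAC you set; a minimal excerpt with placeholder values (the surrounding lines may differ between loader versions):

        # grub.cfg on the synoboot image (excerpt, placeholder sn/mac values)
        set sn=XXXXXXXXXXXXX
        set mac1=XXXXXXXXXXXX
        set sata_args='SataPortMap=4'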
  3. NeoID

    DSM 6.2 Loader

    Anyone seen this before? Occurs after 5-10 hours and requires a reboot to get past. I can't see anything in dmesg or logs and can't get access to SSH either once the error arises.
  4. NeoID

    DSM 6.2 Loader

    Apparently the latest bootloader or version of DSM does something silly. When doing cat /proc/mdstat I get:

        md1 : active raid1 sda2[0] sdb2[1]
              2097088 blocks [16/2] [UU______________]

    But that doesn't seem right... a RAID 1 on the previous loader looked like this:

        md3 : active raid1 sdi3[0] sdj3[1]
              483564544 blocks super 1.2 [2/2] [UU]

    Netdata doesn't like the 16/2 status, as it assumes drives are missing from the array and that it is therefore degraded. DSM doesn't seem to care, but it's still not perfect (see the mdadm check below).
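
    If anyone wants to compare, this is roughly how I check the declared versus active member count of md1 (the mirror over each drive's second partition). mdadm --detail is standard; whether it's safe to shrink the declared count on a DSM system array I can't say, so treat the --grow line as an untested idea rather than a fix:

        # Inspect the declared vs. active members of the md1 mirror
        mdadm --detail /dev/md1 | grep -E 'Raid Devices|Active Devices|State'
        # Untested idea: shrink the declared member count from 16 back to 2
        # mdadm --grow /dev/md1 --raid-devices=2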
  5. NeoID

    DSM 6.2 Loader

    I may have found a bug. I'm using Netdata for system monitoring on both my "production" and my test XPenology box (see the Storage Manager screenshot in my previous post). Using synoboot 1.03b/ds3615xs everything is fine, but on 1.04b with two drives it tells me that my array is degraded. DSM, on the other hand, says everything is normal. As mentioned, this isn't an issue on the previous loader.
  6. NeoID

    DSM 6.2 Loader

    What I mean is that I don't understand how jun is able to override synoinfo.conf and make it survive upgrades (I've never had any issues with it at least), while when I change /etc.defaults/synoinfo.conf myself it may or may not survive an upgrade... It sounds to me like I should put the "Maxdrives=24" (or whatever) somewhere else, in another file, that in turn makes sure synoinfo.conf always has the correct value. For example, I see that there's also /etc/synoinfo.conf, which contains the same values. Is that file persistent, and does it override /etc.defaults/synoinfo.conf (see the quick check below)? By the way, it seems I'm still missing something regarding the sata_args, as I now get the following message when trying to log into my test VM: "You cannot login to the system because the disk space is full currently. Please restart the system and try again." I have never seen that error message before, and the system obviously isn't out of disk space after just sitting there idling for 5 hours. It works again after a reboot, but there is still something that isn't right. I'll update my post when I've figured out what's causing this.
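
    A quick way to see whether the two copies actually differ; I'm assuming the drive-related keys are named maxdisks, internalportcfg, usbportcfg and esataportcfg here, so adjust the grep to whatever your file actually uses:

        # Compare the drive-related keys in both copies of synoinfo.conf
        grep -E '^(maxdisks|internalportcfg|usbportcfg|esataportcfg)=' /etc.defaults/synoinfo.conf
        grep -E '^(maxdisks|internalportcfg|usbportcfg|esataportcfg)=' /etc/synoinfo.conf
        # Or diff the two files to see every difference at once
        diff /etc.defaults/synoinfo.conf /etc/synoinfo.conf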
  7. NeoID

    DSM 6.2 Loader

    I see... how will this work in terms of future upgrades? I know Synology sometimes overwrites synoinfo.conf. If this implementation survives upgrades... is there a way to change it/make it configurable through grub? I currently have two VMs with two 16-port LSI HBAs, but it would be much nicer to merge them into one VM with 24 drives. However, I'm not a fan of changing synoinfo.conf and risking a crashed volume if it's suddenly overwritten. I guess I could make a script that runs on boot and makes sure the variable is set before DSM is started (rough sketch below), but it would be much nicer if @jun would take the time to implement this or suggest a "best practice" method of changing maxdrives and the two other related (usb/esata) variables.
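
    Something along these lines is what I have in mind; it's only a sketch, assuming the key is called maxdisks and that it gets run early on every boot (for example as a triggered boot task), and I haven't verified that re-applying it this way is actually safe:

        #!/bin/sh
        # Re-apply the wanted drive count to both copies of synoinfo.conf at boot
        WANTED=24
        for f in /etc.defaults/synoinfo.conf /etc/synoinfo.conf; do
            # Only touch the file if it already has a maxdisks line
            if grep -q '^maxdisks=' "$f"; then
                sed -i "s/^maxdisks=.*/maxdisks=\"$WANTED\"/" "$f"
            fi
        done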
  8. NeoID

    DSM 6.2 Loader

    I will answer my own post, as it might help others or at least shine some light on how to configure the sata_args parameters. As usual I modified grub.cfg and added my serial and MAC before commenting out all menu items except the one for ESXi. When booting the image, Storage Manager gave me the above setup, which is far from ideal: the first drive is the bootloader attached to virtual SATA controller 0, and the other two are my first and second drives connected to my LSI HBA in pass-through mode. In order to hide the bootloader I added DiskIdxMap=1F to the sata_args. This pushed the bootloader out of the 16-disk boundary, so I was left with just the two data drives. In order to move them to the beginning of the slots I also added SasIdxMap=0xfffffffe; I tested multiple values and decreased the value one by one until all drives aligned correctly. The reason you see 16 drive slots is that maxdrives in /etc.defaults/synoinfo.conf was set to 16. I'm not exactly sure why it is set to that value and not 4 or 12, but maybe it's changed based on your hardware/SATA ports and the fact that my HBA has 16 ports? No idea about that last part, but it seems to work as intended (a commented version of the line is below).

        set sata_args='DiskIdxMap=1F SasIdxMap=0xfffffffe'
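
    My current understanding of the two values, written out as comments; this is based on trial and error rather than documentation, so take the exact semantics with a grain of salt:

        # DiskIdxMap: two hex digits per SATA controller, giving the first disk
        # index for that controller. 1F (= 31) pushes the loader's virtual SATA
        # controller past the visible slots, so the bootloader disappears from
        # Storage Manager.
        # SasIdxMap: an offset applied to the SAS/HBA disk numbering. 0xfffffffe
        # is -2 as a 32-bit value, which moved my two HBA drives back to the
        # first slots; I found it by decreasing the value until they lined up.
        set sata_args='DiskIdxMap=1F SasIdxMap=0xfffffffe'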
  9. NeoID

    DSM 6.2 Loader

    This is how it looks right after a new install with two WD Red drives attached to my LSI HBA in pass-through mode. I can see the bootloader (configured as SATA) taking up the first slot. I have no additional SATA controller added to the VM (I have removed everything I can remove). For some reason I have four empty slots before the two drives on my HBA show up. I guess this has something to do with the SataPortMap variable, but I can't find much info on it. Considering that it shows me 16 slots (more than the default 4 or 12 of the original Synology) without modifying any system files, I guess I can somehow support all 16 drives, or even pass through two HBAs in order to get 24 drives?
  10. NeoID

    DSM 6.2 Loader

    Is anyone running 1.04b on ESXi with an HBA in pass-through? I would love some input on which settings to change in the grub config in order to only show the drives connected to the HBA and hide the bootloader itself. Is it possible to increase the number of drives by only changing settings in the grub file now? I'm using an LSI 9201-16i, so 16 ports.
  11. NeoID

    DSM 6.2 Loader

    Does anyone know what syno_hdd_powerup_seq and syno_hdd_detect do? They were set to 0 in the older bootloaders, but now they're set to 1 by default. Is anyone using ESXi who has successfully migrated from ds3615xs to ds918+? Are there any particular VM settings that should be changed? I'm still struggling to get the latest loader to list all my disks (using an LSI HBA in passthrough mode). Can anyone explain what the sata_args are and how to figure out what values to use?
  12. NeoID

    DSM 6.2 Loader

    Pretty sure. I'm also running the 3615 image without issues, but I couldn't get the DS918+ image to work until I tried it on a 6th-gen Intel CPU. I guess the boot image requires instructions that aren't present on earlier processors (see the quick comparison below).
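
    If anyone wants to compare their own machines, the instruction-set flags are easy to dump and diff from any Linux shell on each box; I'm not certain which flag the DS918+ image actually needs, this just shows what differs between a working and a non-working CPU:

        # On each machine, dump the CPU flags to a sorted list
        grep -m1 '^flags' /proc/cpuinfo | tr ' ' '\n' | sort > cpu-flags.txt
        # Copy both lists to one place and compare them, e.g.:
        # diff cpu-flags-i7-4770.txt cpu-flags-6th-gen.txt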
  13. NeoID

    DSM 6.2 Loader

    I could be wrong, but I think you'll need a newer CPU, something from the same generation as the Intel Celeron J3455 or newer.
  14. NeoID

    DSM 6.2 Loader

    What CPU do you have? It doesn't show up on the network with my i7-4770, but it works on a newer 6th-gen Intel CPU.
  15. NeoID

    DSM 6.2 Loader

    Has anyone got it to work on ESXi 6.7 yet? I've added the boot image as a SATA drive and tried both the e1000e and the VMXNET3 network adapter. I've tried BIOS and EFI, but I can't seem to find it on the network, neither with Synology Assistant nor by looking at the router. Any tips on what I might be missing? I can see the splash screen at the beginning, but then the CPU spikes to 100% and stays there. If that means anything, I'm using an i7-4770 in my ESXi server (serial-console idea below).
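
    One thing worth trying to see what happens after the splash screen: give the VM a serial port over the network and watch the loader's console output. This is from memory, so the exact ESXi dialog wording may differ, and esxi-host.local is just a placeholder for your host's address:

        # In the VM's settings, add a serial port that "uses network" in server
        # mode, with a port URI along the lines of:
        #   telnet://:2001
        # Then connect to the ESXi host on that port from another machine:
        telnet esxi-host.local 2001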