NeoID

Members
  • Content Count

    209
  • Joined

  • Last visited

Community Reputation

6 Neutral

About NeoID

  • Rank
    Super Member


  1. I've accidentally discovered that the DS Photo app (at least on Android) by default overwrites images with the same name when moving them from one album to another. So if you move test.jpg from album A to album B and album B already contains a test.jpg (even if it isn't the same photo), it's overwritten WITHOUT notice. There is supposed to be a setting to change the default behaviour from overwrite to keep, but I can't find it anywhere. Anyone else familiar with this?
  2. Not that I know of. I have not used snapshots or quotas before. "Enable shared folder quota" is off for every shared folder I have, and the user quotas in the profiles are all set to "No Limit".
  3. I've searched around the web, but can't seem to find any information on this. Has anyone seen these errors in /var/log/messages before, and do you know what they mean?

     2019-12-11T11:15:53+01:00 x kernel: [13651.708876] BTRFS error (device md2): cannot find qgroup item, qgroupid=15084 !
     2019-12-11T11:15:53+01:00 x kernel: [13651.944144] BTRFS error (device md2): cannot find qgroup item, qgroupid=14903 !
     2019-12-11T11:15:54+01:00 x kernel: [13652.795072] BTRFS error (device md2): cannot find qgro
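
     If anyone else hits this, a starting point (assuming the volume is mounted at /volume1; adjust the path) might be to ask btrfs about its quota groups directly. A sketch, not a known fix:

         # list the quota groups the filesystem knows about
         btrfs qgroup show /volume1
         # force a full rescan of quota accounting, which may recreate missing qgroup items
         btrfs quota rescan /volume1
         # check whether the rescan is still running
         btrfs quota rescan -s /volume1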
  4. I'm running ESXI, HBA in passthrough mode and Jun's Loader v1.03b DS3617xs without any customization. I use a INTEL Xeon D-1527, 4 cores and 25 GB with ECC RAM. I also use a Intel X540-T2 network card in passthrough. My Xpenology server feels a bit sluggish, but I have a hard time to pinpoint the issue. I run quite a few Docker images, but there is hardly any CPU utilization. Disk and Volume utilization stays around 50% each and I/O wait at ~25%. I'm not sure, but doesn't that sound much? Does anyone have any tips on how to get to the bottom of this? I've read
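
     For anyone digging into the same thing, this is roughly what I'd run over SSH (iostat comes from the sysstat package and may not be present by default):

         # per-device utilization and wait times, refreshed every 5 seconds
         iostat -x 5
         # per-container CPU/memory/IO, to rule the Docker images in or out
         docker stats
         # the 'wa' value in the CPU line is the share of time spent waiting on I/O
         top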
  5. NeoID

    DSM 6.2 Loader

    Anyone struggling with SMART with the DS918 image on ESXi? I'm using an LSI00244 9201-16i with 3615 and 3617 and everything works great, but with 918 I get the following message when trying to read the SMART data for any hard drive in Storage Manager. Since SHR and vmxnet3 work in 918, it would be awesome to continue to use that image. Edit: Is missing kernel support the reason for this?
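
    For what it's worth, querying a drive directly over SSH should show whether the SMART data itself is reachable or whether it's only Storage Manager that fails (assuming smartctl is present on the box and /dev/sda is one of the HBA's drives):

        # ask for SMART data through the SAT translation layer
        smartctl -d sat -a /dev/sda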
  6. NeoID

    DSM 6.2 Loader

    As mentioned before, when using the following sata_args, the system crashes after a little while. I have given my VM a serial port so I can see what's going on, but I have a hard time understanding the issue. Can anyone explain what's going on here and how I can get the sata_args to work? I mean, everything looks right from DSM/Storage Manager...

        set sata_args='DiskIdxMap=1F SasIdxMap=0xfffffffe'
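
    My understanding of those values, for what it's worth (a best guess, not documentation):

        # DiskIdxMap=1F -> two hex digits per controller: the first controller's
        #                  disks start at index 0x1F = 31, i.e. slot 32, beyond
        #                  maxdisks, which is what should hide the loader
        # SasIdxMap=0xfffffffe -> as far as I can tell, an offset applied to SAS
        #                  controller numbering; this is the part I'm least sure about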
  7. NeoID

    DSM 6.2 Loader

    I know this is only a cosmetic thing, but no matter how I try to hide the boot drive using DiskIdxMap in the sata_args, DSM stops working after a few hours: at first I'm only presented with a white page when trying to log in, and after a while I get a message saying that the disk space is full. Any ideas? I guess the best choice is to stick to the default set sata_args='SataPortMap=4' and just ignore the bootloader and the four empty slots before the PCI card's drives. This way you still get to use 12 drives.
  8. NeoID

    DSM 6.2 Loader

    Anyone seen this before? It occurs after 5-10 hours and requires a reboot to get past. I can't see anything in dmesg or the logs, and I can't access SSH either once the error arises.
  9. NeoID

    DSM 6.2 Loader

    Apparently the latest bootloader or version of DSM does something silly. When doing cat /proc/mdstat I get:

        md1 : active raid1 sda2[0] sdb2[1]
              2097088 blocks [16/2] [UU______________]

    But that doesn't seem right... a RAID 1 on the previous loader looked like this:

        md3 : active raid1 sdi3[0] sdj3[1]
              483564544 blocks super 1.2 [2/2] [UU]

    Netdata doesn't like the 16/2 status, as it assumes drives are missing from the array and that it is therefore degraded. DSM doesn't seem to care, though, but it's still not perfect.
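
    For anyone wanting to compare, the array can also be inspected directly. My reading (unverified) is that [16/2] means the array was created with 16 slots of which 2 are populated; DSM builds its system/swap RAID 1 across every possible disk slot on purpose, so the "missing" members are by design, while netdata interprets them as failed drives:

        # 'Raid Devices' vs 'Total Devices' and 'State' show what md itself thinks
        mdadm --detail /dev/md1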
  10. NeoID

    DSM 6.2 Loader

    I may have found a bug. I'm using netdata for system monitoring on both my "production" and test xpenology (see storage manager screenshot in my previous post). Using synoboot 1.03b/ds3615xs everything is fine, but using 1.04b with two drives it tells me that my array is degraded. DSM on the other hand says everything is normal. As mentioned, this isn't an issue on the previous loader.
  11. NeoID

    DSM 6.2 Loader

    What I mean is that I don't understand how jun is able to override synoinfo.conf and make it survive upgrades (I've never had any issues with it, at least), but when I change /etc.defaults/synoinfo.conf it may or may not survive upgrades... It sounds to me like I should put the "Maxdrives=24" (or whatever) somewhere else, in another file that in turn makes sure synoinfo.conf always has the correct value. For example, I see that there's also /etc/synoinfo.conf, which contains the same values... is that file persistent, and does it override the /etc.defaults/synoinfo.conf file? Seems by the way
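
    A quick way to see whether the two copies have drifted apart (my assumption being that /etc/synoinfo.conf is regenerated from /etc.defaults/synoinfo.conf rather than the other way around):

        diff /etc.defaults/synoinfo.conf /etc/synoinfo.conf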
  12. NeoID

    DSM 6.2 Loader

    I see... how will this work in terms of future upgrades? I know Synology sometimes overwrites synoinfo.conf. If this implementation survives upgrades... is there a way to change it/make it configurable through grub? I currently have two VMs with two 16-port LSI HBAs, but it would be much nicer to merge them into one with 24 drives. However, I'm not a fan of changing synoinfo.conf and risking a crashed volume if it's suddenly overridden. I guess I could make a script that runs on boot that makes sure the variable is set before DSM is started (a sketch below), but it would be much nicer if @jun would take the time to implem
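
    Something like this is what I have in mind as a boot-time task. It's only a sketch: maxdisks is the key I'd expect to patch, and 24 is the value from my own setup:

        #!/bin/sh
        # re-apply the drive count in case an upgrade has reset it
        for f in /etc.defaults/synoinfo.conf /etc/synoinfo.conf; do
            grep -q '^maxdisks="24"$' "$f" || sed -i 's/^maxdisks=.*/maxdisks="24"/' "$f"
        done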
  13. NeoID

    DSM 6.2 Loader

    I will answer my own post, as it might help others or at least shine some light on how to configure the sata_args parameters. As usual, I've modified grub.cfg and added my serial and MAC before commenting out all menu items except the one for ESXi (see the example below). When booting the image, Storage Manager gave me the above setup, which is far from ideal. The first entry is the bootloader attached to the virtual SATA controller 0, and the other two are my first and second drives connected to my LSI HBA in pass-through mode. In order to hide the drive I added DiskIdxMap=1F to the sata_args. This p
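
    For reference, the relevant part of my grub.cfg ends up looking roughly like this (the serial number and MAC below are placeholders, not real values):

        set sn='XXXXNNNNNNNNN'         # placeholder serial
        set mac1='001132XXXXXX'        # placeholder MAC
        set sata_args='DiskIdxMap=1F'  # push the loader's controller out of view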
  14. NeoID

    DSM 6.2 Loader

    This is how it looks right after a new install with two WD Red drives attached to my LSI HBA in pass-through mode. I can see the bootloader (configured as SATA) taking up the first slot. I have no additional SATA controller added to the VM (I have removed everything I can remove). For some reason I have four empty slots before the two drives of my HBA show up. I guess this has something to do with the SataPortMap variable, but I can't find much info on it. Considering that it shows me 16 slots (more than the default 4 or 12 of the original Synology) without modifying any system files, I guess I somehow can
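
    From what I've been able to piece together (a best guess, not documentation): SataPortMap takes one decimal digit per SATA controller, in PCI order, giving the number of drive slots DSM reserves for that controller, and drives on a SAS HBA are enumerated after those reserved slots. If the loader's virtual SATA controller reserves several slots by default, that would explain the gap, and something like this might close it:

        set sata_args='SataPortMap=1'  # reserve a single slot for the loader's
                                       # virtual SATA controller (an assumption)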
  15. NeoID

    DSM 6.2 Loader

    Is anyone running 1.04b on ESXi with an HBA in pass-through? I would love some input on what settings to change in the grub config in order to only show the drives connected to the HBA and ignore the bootloader itself. Is it possible to increase the number of drives by only changing the settings in the grub file now? I'm using an LSI 9201-16i, so 16 ports.