shrabok

Members
  • Content Count: 49
  • Joined
  • Last visited
  • Days Won: 4

shrabok last won the day on November 15 2018

shrabok had the most liked content!

Community Reputation

12 Good

About shrabok

  • Rank: Junior Member

  1. Hi @StifflersMoM, review the details here: https://github.com/bitwarden/core/issues/253
  2. The Bitwarden documentation has additional details on configuring SMTP, as does the main post:
  3. Hi @StifflersMoM, sorry to hear about your issues. Could you try the following command and post your results: `docker ps` will list all running containers and also show their port forwarding (a couple of useful variations are sketched below). Are you also using a unique domain name for your Bitwarden instance and proxying it to Bitwarden?
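     For reference, the variations I tend to reach for when checking this kind of thing (plain docker CLI flags, nothing Bitwarden-specific):

     ```
     # running containers with their port mappings
     docker ps

     # include stopped/exited containers, which helps spot one that is crash-looping
     docker ps -a

     # narrow the output to the bitwarden containers (the name filter is a substring match)
     docker ps -a --filter "name=bitwarden"
     ```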
  4. @Binkem, as a side note, it sounds like your model (DS216+) supports RAM upgrades: https://forum.synology.com/enu/viewtopic.php?t=114782
  5. Hi @Binkem, this could very well be a possibility. There are multiple containers used by Bitwarden, and mssql is quite large as well. Here are my current docker stats:

     ```
     CONTAINER      CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O   PIDS
     370126b59277   0.00%   14MiB / 15.63GiB      0.09%   1.85MB / 1.28MB   0B / 0B     0
     291a000cfd52   0.93%   746.6MiB / 15.63GiB   4.66%   701kB / 537kB     0B / 0B     0
     c4f8e956a4ae   0.03%   27.76MiB / 15.63GiB   0.17%   139kB / 0B        0B / 0B     0
     0911a92c79e6   0.06%   39.08MiB / 15.63GiB   0.24%   726kB / 148kB     0B / 0B     0
     0ba98ce071b3   0.02%   59.76MiB / 15.63GiB   0.37%   607kB / 520kB     0B / 0B     0
     e3e8ac74eea8   0.02%   42.95MiB / 15.63GiB   0.27%   1.28MB / 449kB    0B / 0B     0
     49b58a990b7f   0.02%   17.88MiB / 15.63GiB   0.11%   139kB / 0B        0B / 0B     0
     7ca297b1174c   0.02%   35.82MiB / 15.63GiB   0.22%   214kB / 60.8kB    0B / 0B     0
     75ddff907b44   0.01%   16.79MiB / 15.63GiB   0.10%   139kB / 0B        0B / 0B     0
     ```

     You can also try `docker logs bitwarden-mssql` to see the logs and what is causing the restart.
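     If the mssql container keeps restarting, these are the follow-up commands I'd try next (plain docker CLI; the container name bitwarden-mssql is the one referenced above):

     ```
     # last 100 log lines from the database container, then keep following them
     docker logs --tail 100 -f bitwarden-mssql

     # one-shot snapshot of resource usage, handy for spotting memory pressure
     docker stats --no-stream
     ```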
  6. Hi @Binkem, it seems the Bitwarden setup has changed over time. I had a look at my config.yml and it looks like this (FYI, I've excluded all the comments):

     ```
     url: https://bw.domain.com
     generate_compose_config: true
     generate_nginx_config: true
     http_port: 8123
     https_port:
     compose_version:
     ssl: false
     ssl_versions:
     ssl_ciphersuites:
     ssl_managed_lets_encrypt: false
     ssl_certificate_path:
     ssl_key_path:
     ssl_ca_path:
     ssl_diffie_hellman_path:
     push_notifications: true
     database_docker_volume: false
     ```

     It also sounds like you can reconfigure your deployment using the commands here: https://help.bitwarden.com/article/install-on-premise/#post-install-environment-configuration I've not attempted an install since my original post. Please let me know if this is helpful with regard to your setup.
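     If you do edit config.yml by hand, something along these lines should regenerate the generated assets and restart the stack. This is a sketch from memory, assuming the standard bitwarden.sh installer script sits next to bwdata, so check `./bitwarden.sh help` for the exact commands in your version:

     ```
     # regenerate docker-compose/nginx config from config.yml, then restart with the new settings
     ./bitwarden.sh rebuild
     ./bitwarden.sh restart
     ```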
  7. Here are a few screenshots of what happens after a reboot. As you can see below, my SHR1 volume is Normal and the system is healthy, but there was a failure accessing the "system partition". You can see all the USB drives had a system partition failure but not the SATA drive, which is why I included it as part of my SHR1 RAID. Once Repair is clicked everything shows as Normal again, but in the background it's still repairing the volumes. If you try to reboot at that point you'll see
  8. Hi @Balrog,
     - My configuration is SHR1 with all 4 drives (1 x 2TB internal and 3 x 4TB external/USB combined). I like to think of the 2TB internal drive as the rock that maintains stability within the array. Not sure if that's valid logic, but it seems to work well, and I think we also get a little boost in read/write speed from the performance of the SATA drive.
     - When I reboot, the array survives but I need to perform a rescan of the drives. The data is accessible and there is no issue reaching it in that state. I believe it's the boot/OS partition that requires the rescan, not the data partition used by SHR. My thinking is that Synology puts an OS partition on each disk attached to the device so it has a backup if a drive fails, but since the USB HDDs don't come up as quickly as a SATA HDD, it sees missing disks and requests a rebuild/rescan/parity check. This is just my assumption.
     - Currently my HDD hibernation settings are set to never, so I don't think my drives ever sleep. I have not experienced any issues or delays pulling files, etc.

     Hi @mysy, I currently use a UPS with my primary xpenology device, which runs as a Synology UPS Server; if power goes out it will also tell this device to go into safe mode. I wouldn't recommend using my configuration for a primary xpenology device with important data. This is my secondary device, built to get some use out of old hardware, and it gives me a huge capacity at a low cost. The data on it I'm willing to lose, and I also find it useful for testing xpenology/Synology updates and as a large backup location (say, for your primary xpenology device). I've had a single power outage on this device; the RAID needed rebuilding but recovered properly. I think as long as you're not writing at that moment the risk of corruption is low, and if you use a UPS (which I implemented after this event) you can avoid significant failures.
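     For anyone wanting to see what state the array is actually in after one of these reboots, a couple of read-only commands over SSH are enough. The device name /dev/md2 is an assumption; check the /proc/mdstat output for the array names on your box:

     ```
     # show all md arrays and whether any member is missing or resyncing
     cat /proc/mdstat

     # detailed view of the data array (replace /dev/md2 with the array listed above)
     mdadm --detail /dev/md2
     ```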
  9. Upgrading to version 1.24 requires creating two new log directories in your Bitwarden location: bwdata/logs/notifications and bwdata/logs/icons. I've edited the original post with the additional changes for new deployments.
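     If your layout matches the one in the original post, this should cover it (run from the directory that contains bwdata):

     ```
     # create the new log directories expected by 1.24; -p makes this a no-op if they already exist
     mkdir -p bwdata/logs/notifications bwdata/logs/icons
     ```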
  10. Thanks for sharing @GKay, I have not come across an IPv6 connection yet, so that's a good thing to keep in mind.
  11. GeoIP Region Blocking using the Synology Firewall

      I noticed internet performance issues today and was checking my router logs, where I found excessive entries like:

      ```
      Jun 18 20:55:48 dropbear[5405]: Child connection from <My Synology IP>:40894
      Jun 18 20:55:49 dropbear[5405]: Exit before auth: Exited normally
      Jun 18 20:55:49 dropbear[5411]: Child connection from <My Synology IP>:40896
      Jun 18 20:55:51 dropbear[5411]: Exit before auth: Exited normally
      ```

      I searched and found this was related to numerous invalid login attempts against the Synology login page. That led me to log in to the CLI of my Synology and check the logs for failed attempts (a quick log-scan sketch is included after this post). The most concerning log was /var/log/httpd/apache22-error_log:

      ```
      2018-06-18T19:28:42-06:00 nas [Mon Jun 18 19:28:42 2018] [error] [client 193.106.30.99] File does not exist: /var/services/web/wp-rdf.php
      2018-06-18T20:11:16-06:00 nas [Mon Jun 18 20:11:16 2018] [error] [client 27.29.158.10] script not found or unable to stat: /var/services/web/login.cgi
      2018-06-18T21:51:26-06:00 nas [Mon Jun 18 21:51:26 2018] [error] [client 172.18.0.2] File does not exist: /var/services/web/apple-touch-icon-precomposed.png
      2018-06-18T21:51:26-06:00 nas [Mon Jun 18 21:51:26 2018] [error] [client 172.18.0.2] File does not exist: /var/services/web/apple-touch-icon.png
      2018-06-18T21:51:26-06:00 nas [Mon Jun 18 21:51:26 2018] [error] [client 172.18.0.2] File does not exist: /var/services/web/apple-touch-icon-precomposed.png
      ```

      This led me to consider blocking all geographical regions except my own. Most brute-force attempts and vulnerability scans come from outside my home country, so this reduces the attack surface significantly. My first attempt at implementing the GeoIP blocking was problematic: I added a "deny all" entry after the "allow local network range" and "allow my region" rules, but that ended up blocking all access to the services I had running. I thought I'd share how I implemented it for others wanting to reduce their attack surface.

      Enable the firewall
      - Open Control Panel
      - Select Connectivity -> Security
      - Go to the Firewall tab
      - Check Enable firewall

      Add "Allow" rules for the internal network
      - Select Edit Rules for the default Firewall Profile (disregard the existing rules in the screenshot; these will be created in the following steps)
      - Create a rule to allow your internal/home network range

      Add "Allow" rules for your country/countries
      - Create a rule to allow the specific locations

      Set the network interface to deny when no rules match
      - Select the network interface that is the default on your Synology (mine is LAN 1; you can find yours under Connectivity -> Network -> Network Interface)
      - ***This was the secret to getting the deny-all after the allow rules to work:*** set "If no rules were matched: Deny access"
      - Click OK and Apply

      Test reaching your Synology from your internal network and from external networks in your region. You can also validate that the firewall is blocking by using a Tor browser to send traffic from a different country and checking that your rules behave as expected.
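      The quick log-scan sketch mentioned above, assuming the same /var/log/httpd/apache22-error_log path (standard shell tools only; adjust the path if your DSM version logs elsewhere):

      ```
      # count web-error hits per client IP, busiest offenders first
      grep -oE 'client [0-9.]+' /var/log/httpd/apache22-error_log \
        | awk '{print $2}' | sort | uniq -c | sort -rn | head
      ```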
  12. There are no real questions here, just sharing my experience of trying to repurpose old hardware and breathe new life into it. Hopefully you can too! I came across an old HP Mini 5103 netbook I had sitting unused. It wasn't capable of running lightweight Linux OSes very well, but I didn't want to throw it out. I checked the specs:
      - Intel Atom N455 processor (1.66 GHz, 512KB L2 cache, 667 MHz FSB)
      - 2GB DDR3 1333 MHz SDRAM
      - Dedicated 10/100/1000 NIC
      - 250GB 2.5" 7200RPM HDD
      - 3 USB ports
      - 1 SD card slot
      - 6-cell (66 WHr) high-capacity Li-Ion battery
      - Low power consumption (approximately 15W on average)
      ...

      I realized this is really a close match for most entry-level Synology NAS devices. My only concern was the lack of additional SATA ports (one hard drive). I thought I would give it a test with the latest xpenology, and it works remarkably well. I found it responsive in the UI, and strangely more responsive than my current (higher performance) device. This led me to the question: can I get a couple of hard drives on here? Well, we only have one 250GB SATA 2.5" 9.5mm drive. I had a 500GB USB drive lying around and thought: this drive is USB, but could USB drives be added to an SHR RAID? With a Google search I found this video of xpenology interpreting USB drives as regular HDDs by changing some settings in /etc.defaults/synoinfo.conf. I found it useful in proving the point that you can add USB drives and treat them as internal SATA drives. I made the following changes to my config, /etc.defaults/synoinfo.conf (a quick way to confirm the settings stuck is sketched after this post):

      ```
      ### Increase disk capacity
      #maxdisks="12"
      maxdisks="24"
      ###
      ### Disable esata port discovery
      #esataportcfg="0xff000"
      esataportcfg="0"
      ###
      ### Increase internal drive discovery
      #internalportcfg="0xfff"
      internalportcfg="0xffffffffff"
      ###
      ### Enable synology SHR
      # Ref https://xpenology.com/forum/topic/9281-synology-hybrid-raid-shr/?page=0#comment-79472
      #supportraidgroup="yes"
      support_syno_hybrid_raid="yes"
      ###
      ### Disable usb discovery
      #usbportcfg="0x300000"
      usbportcfg="0"
      ###
      ```

      Once I made these changes I could restart and see that any USB drives plugged in were treated as internal drives. I then created a disk group and an SHR volume across the mixed drive sizes to get the most out of them. I found that after a restart I would have a degraded RAID; I think this may be because the USB drives are slower to be discovered, so it looks as though a drive was removed and re-added on boot. I can rebuild the RAID and it's good again, but this is something to consider regarding stability. My next attempt is to get two external hard drives and RAID them without the internal SATA drive; being on the same type of connection they may be more consistent and handle a reboot without a degraded state. If that's the case, I think it's a real winner for repurposing old hardware. I also came across an old 4GB SD card and thought: instead of booting from USB, why not use the SD card for booting? That would leave 3 free USB ports for external HDDs. I found this post, followed the same process, inserted the SD card into the Mini, copied the PID and VID, and set up the boot loader on the SD card. After that, I set the BIOS to boot from the SD card and restarted. When going to the URL it sees the device as "new hardware" and you "recover" the device. After that, SD card boot was all good. If the USB HDD RAID can handle a reboot cleanly, I may consider getting some big external USB HDDs and RAIDing them.
      Some use cases for this would be:
      - a backup location for my primary xpenology box (good for trying upgrades on before touching the primary)
      - storing network-shared media files (movie streaming, music, etc.) that aren't a major concern to lose and don't need massive disk performance
      - with a working battery (mine is dead, but I may get a replacement) it has its own built-in UPS, so it could ride out power outages and be used in remote places like your parents' house for remote access and other centralized services you manage for others
      - a home automation box
      - the list goes on.

      This setup is probably not recommended for stable use; it's more about stretching the boundaries of what can be done with xpenology when your hardware is limited or your use case is experimental.

      *** Edit *** I ended up going with an internal 2TB HDD and 3 external 4TB HDDs for a total of 9TB usable space with SHR. I velcroed the drives to the lid, and strangely enough their brown metal finish matched the HP case; that was a coincidence.
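      The quick check mentioned above for confirming the synoinfo.conf edits are still in effect after a reboot or DSM update (both copies of the file exist on a stock install; the value list simply mirrors the settings changed in this post):

      ```
      # print the edited settings from both the defaults template and the active config
      grep -E '^(maxdisks|esataportcfg|internalportcfg|usbportcfg|support_syno_hybrid_raid)=' \
        /etc.defaults/synoinfo.conf /etc/synoinfo.conf
      ```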
  13. Hi @ebell, in my current installation I can reach the admin site using https://bitwarden.domain.com/admin/login/ without a dedicated port. There is an open issue regarding https://bitwarden.domain.com/admin redirecting to a non-HTTPS port here: https://github.com/bitwarden/core/issues/253. Your approach could be an interesting alternative as a dedicated admin URL that is internal only and not public facing. Also, I have edit capability on the original post and will attempt to keep it relevant and up to date.
  14. Hi @ebell, thanks for mentioning the corrections. It seems I'm no longer able to edit the original post. Referencing the Bitwarden docs for all the latest changes is highly recommended, as they do change frequently. Hopefully Synology can get Docker updated to the point where we no longer need to manually create the missing folders as well. Also, for clarity, the line `command: mkdir bwdata/log bwdata/log/admin bwdata/log/api bwdata/logs/identity bwdata/logs/mssql bwdata/logs/nginx` should be changed to `command: mkdir bwdata/logs bwdata/logs/admin bwdata/logs/api bwdata/logs/identity bwdata/logs/mssql bwdata/logs/nginx`
  15. Hi @latonita, I have a feeling the server block is clashing with the existing configuration in /etc/nginx/nginx.conf; I have a server block there for a port 80 listener, which is the Synology web service. Have a look in that config to see if there is already a server listening on port 80. Also, just for clarity: is tank.local your synology/xpenology device, and is path-based routing (using /portainer) your only option? Do you have an internal DNS server where you could create a CNAME record pointing at tank.local, like "portainer.local CNAME tank.local", and then create a host-based reverse proxy in the DSM Application Portal, like "portainer.local:80 -> localhost:9000"? I'd recommend against path-based routing in nginx because of the number of Synology location blocks already in use, the chance that an upgrade could lose the configuration, and just for simplicity's sake. I can provide additional assistance regardless of method, so if you have any further questions I can try to help some more. (A quick way to sanity-check the CNAME plus reverse-proxy approach is sketched below.)
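      A minimal verification sketch, assuming the hypothetical names above ("portainer.local" as the CNAME and a DSM reverse-proxy rule mapping portainer.local:80 to localhost:9000):

      ```
      # the CNAME should resolve to the same address as tank.local
      nslookup portainer.local

      # port 80 on that hostname should now be answered by Portainer via the DSM reverse proxy
      curl -I http://portainer.local/
      ```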