About bughatti

  1. I get what you are saying. We had one lying around in our datacenter that a customer left behind. I decided to see if I could get it to work; it did, and it was easy. I have installed DSM on many different hardware combinations over the years, and this appliance was by far the easiest and appears to be the fastest. We are only using it for a Veeam backup over 10Gb iSCSI. In retrospect, it really comes down to which management interface you prefer. We have a bunch of EqualLogics that I am replacing with these because I hate the administrative nightmare of the EQ software. I just found a few on eb…
  2. I appreciate the information. I have been searching for "nimble array gen 4", and nothing pops up except that in 2018 they announced gen 5, so I can safely assume anything made prior to 2018 would be usable. But I have found no document that says the CS220 is gen X or the CS460 is gen X. I will do more digging, though.
  3. Thank you for this information. Could you provide model numbers of controller arrays that are gen 4, and also the largest SATA hard drive size they will recognize?
  4. Just curious if anyone else has tried this. We had an old Nimble CS220 lying around at work; I was able to get 6.2 Update 2 loaded on it, and it runs like a champ with SSD cache and two 10Gb Ethernet links. I am interested in buying more used ones, maybe newer models, and I am curious if anything has changed in the newer ones that would prevent me from installing.
  5. Yes, 80 and 443 are both open in my router to my xpenology. I have verified with an open-port checker, and Web Station responds with a page on both from outside my network.

     root@LiquidXPe:~# sudo syno-letsencrypt new-cert -d domain.com -m email@gmail.com -v
     DEBUG: ==== start to new cert ====
     DEBUG: Server: https://acme-v01.api.letsencrypt.org/directory
     DEBUG: Email:email@gmail.com
     DEBUG: Domain: domain.com
     DEBUG: ==========================
     DEBUG: setup acme url https://acme-v01.api.letsencrypt.org/directory
     DEBUG: GET Request: https://acme-v01.api.letsencrypt.org/direc…
  6. All, I am trying to issue a Let's Encrypt certificate on my NAS, and it does not want to work. Below is the error:

     2019-12-09T14:57:58-06:00 LiquidXPe synoscgi_SYNO.Core.Certificate.LetsEncrypt_1_create[5038]: certificate.cpp:957 syno-letsencrypt failed. 200 [new-req, unexpect httpcode]
     2019-12-09T14:57:58-06:00 LiquidXPe synoscgi_SYNO.Core.Certificate.LetsEncrypt_1_create[5038]: certificate.cpp:1359 Failed to create Let'sEncrypt certificate. [200][new-req, unexpect httpcode]

     I am running DSM 6.1.7-15284 Update 3. I have found a few articles and tried all the fixes that worked for…
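     The DEBUG output above shows syno-letsencrypt talking to the acme-v01 endpoint. Let's Encrypt began retiring ACMEv1 around that time (new account registrations on v1 were disabled in November 2019), which would line up with a "new-req" failure in December 2019. A quick way to compare the two directory endpoints from the NAS shell (a sketch, assuming curl is installed and outbound HTTPS works; it only reports, it changes nothing):

     ```shell
     # Sketch: probe both ACME directory endpoints and report the HTTP status.
     # Assumes curl is available and the box has outbound HTTPS; if curl is
     # missing or the network is down, the function just says so.
     probe_acme() {
         url="$1"
         if command -v curl >/dev/null 2>&1; then
             code=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$url") || code="unreachable"
             echo "$url -> $code"
         else
             echo "$url -> curl not available, skipped"
         fi
     }

     probe_acme "https://acme-v01.api.letsencrypt.org/directory"  # legacy endpoint older DSM builds point at
     probe_acme "https://acme-v02.api.letsencrypt.org/directory"  # current endpoint
     ```

     If v01 errors out while v02 answers, the DSM build is simply too old to know about the current endpoint.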
  7. Hello all, and thanks in advance for any help or assistance. I think I am pretty much screwed, but I figured I would ask first before I make things worse. I have a system with 12 drives: one RAID 6 that correlates to volume 2, and one RAID 5 that correlates to volume 1. I moved my setup a few days ago, and when I plugged it back in, my RAID 5 had lost 2 of its 4 drives. One drive was completely hosed, not readable in anything else. The other drive seemed to just be empty and no longer in the raid like it was previously. I think part of the reason for the drive removing itself from the raid is that…
  8. Here is some more info; it seems something is up with the superblocks. Any idea how to fix it?

     LiquidXPe> mdadm --assemble --scan --verbose
     mdadm: looking for devices for further assembly
     mdadm: cannot open device /dev/sdu1: Device or resource busy
     mdadm: cannot open device /dev/sdu: Device or resource busy
     mdadm: no recogniseable superblock on /dev/dm-0
     mdadm: cannot open device /dev/md5: Device or resource busy
     mdadm: cannot open device /dev/md3: Device or resource busy
     mdadm: cannot open device /dev/md2: Device or resource busy
     mdadm: cannot open device /dev/md4: Device or resour…
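     For anyone following along, this is the inspection sequence to run before forcing anything. It is written as a dry run that only prints the commands, so nothing touches the arrays; the device names (/dev/sdu1, /dev/md2, the sd[a-l]5 member pattern) are examples taken from the log above and will differ on another box:

     ```shell
     # Dry-run sketch: print, rather than execute, the superblock inspection
     # steps. Device names are examples from the mdadm log above.
     print_inspect_steps() {
         cat <<'EOF'
     cat /proc/mdstat                                  # which arrays are assembled, and their state
     mdadm --examine /dev/sdu1                         # dump the RAID superblock on one member partition
     mdadm --detail /dev/md2                           # array-level view: members, state, event counts
     mdadm --stop /dev/md2                             # release a "Device or resource busy" array
     mdadm --assemble --force /dev/md2 /dev/sd[a-l]5   # re-assemble from the surviving members
     EOF
     }
     print_inspect_steps
     ```

     The "Device or resource busy" lines usually just mean the array is already (partially) assembled, which is why stopping it first is normally required before a forced re-assembly.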
  9. Thanks for the reply; I am running SHR on it. Unfortunately I have not had much luck yet. As far as I got today: after adding all partitions from the new drive back into each /dev/mdX, and after each rebuild completed, the GUI showed Healthy at the top right under System Health. It says healthy, but the volume still says crashed. When I reboot, the new drive goes back to being an available spare, and when I look at mdadm --detail /dev/md0 and md1, the partitions from the new drive are in them, but the new drive's sdk5, sdk6 and sdk7 are not in md2, md3 and md5. I can add them back in, but as soon as I reboo…
  10. So I wanted to post this here, as I have spent 3 days trying to fix my volume. I am running XPEnology on a JBOD NAS with 11 drives: DS3615xs, DSM 5.2-5644. A few months ago I had a drive go bad and the volume went into degraded mode; I failed to replace the bad drive at the time because the volume still worked. A few days ago I had a power outage and the NAS came back up as crashed. I searched many Google pages on what to do to fix it and nothing worked. The bad drive was not recoverable at all. I am no Linux guru, but I have had similar issues before on this NAS with othe…
  11. Alright, so here is where I am at. I decided to attach the drives to the 5644 installation. On first bootup, DSM came up and complained about degraded volumes and wanted to rescan and re-import, I am guessing because the drives on the new board are in different SATA slots than on the old board. After a reboot DSM would not come online, but I could SSH in, and from a few commands I could see I had all my volumes, even though they are degraded. It took about 30 minutes before DSM would let me log in. I now only see 2 drives missing, and I think a reboot can fix that. I am now able to see all my shares fr…
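      For reference, the re-add cycle described in these posts boils down to a short sequence. Again written as a dry run that only prints the commands; sdk5 and /dev/md2 are from my setup and will differ elsewhere:

      ```shell
      # Dry-run sketch: print the partition re-add / rebuild-watch sequence.
      # Partition and array names (sdk5, md2) are examples from my setup.
      print_readd_steps() {
          cat <<'EOF'
      mdadm --manage /dev/md2 --add /dev/sdk5   # put the replacement partition back into the array
      cat /proc/mdstat                          # watch the rebuild progress
      mdadm --detail /dev/md2                   # after the rebuild, confirm sdk5 is listed as active
      EOF
      }
      print_readd_steps
      ```

      If the member drops out again on every reboot, as it did for me, the problem is usually in how DSM re-scans the disks at boot rather than in mdadm itself.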
  12. So here are my thoughts. I have disabled all unnecessary hardware on the new motherboard; still no good. I just took a new USB stick and an extra HD and installed 5644 perfectly fine. Comparing settings on the old MB to the new:

      old MB: not UEFI; SATA was set to Legacy vs Native; was using the onboard NIC for mgmt
      new MB: is UEFI; the only options for SATA are AHCI vs RAID, no Legacy

      Also, the old install with the new MB does not detect the onboard NIC. Since I have 5644 running, if I power off and plug all the hard drives into the new install, is it possible to i…
  13. Well, I swapped out my motherboard, processor and memory, and my NAS will not come back online. I am able to ping the IP, and I can log in as root on the console, but the web admin and SSH are not operating. Before I get too crazy with this, does anyone know some console commands I can try to get things working? If not, I will have to put it back on the original MB combo. Forgot to add that XPEnoboot does come up at boot.
  14. Team, I have been using XPEnology for some time now, the 5565 version. It is very stable and great. I have around 18TB of usable storage on it. My question here is: can I change out the motherboard and CPU and still keep all my settings?