XPEnology Community

DaWorm




  1. I found this link that showed a fairly easy way to update. Since I was so far back (pre-March 2018) I had to go through a few extra steps, and I only went to ESXi 6.5 (I may go to 7.0 later, but for now that's good enough). The 6.0 licence I had should work on all 6.x releases. That did the trick, and I have the UI up and running. A few config updates and I'll be ready to test whether this one survives a power cycle. Thanks!
  2. No, I did not. The lowest I saw was something like $600, with an annual support subscription required. I'll look to see where I can get that.

Jeff
  3. I only have ESXi 6.0 and don't have the $$$ to buy 6.7 or 7.x, so I'm kind of stuck with it. 6.0 doesn't have the web interface, and almost all of the guides seem to use that.

One of the main things that differs when using the vSphere Client against 6.0 to create the VMs, instead of the web interface, is that it only ever offers me IDE or SCSI disks. All the guides have the boot disk image (synoboot.vmdk) going to SATA 0:0 (moving it off of SCSI as part of the process), but for me the only option is SCSI 0:0. I can't find any way to make ESXi 6.0 create a SATA controller or SATA disk to move the boot disk image to.

I had DSM 5.x working but needed to update to 6.x so that my Plex server could be updated. I managed to get 6.1.7 to install and work, but after the first power loss it no longer loads nginx, so no GUI (I have SSH access, though, and it is functioning). I have another post here on that topic.

I can't get 6.2.x to even come up beyond the initial splash screen; it never shows the boot messages that 6.1.7 does (Decompressing Linux... etc.). All my RAID disks are on a RAID card via PCI passthrough, and the only datastore is the SSD that ESXi boots from, but I'm not even using that in my 6.2 attempts. For those, I'm just using the boot disk image and a small virtual disk, just to see if I can get it to boot far enough for the discovery tool to see it, but it never gets that far. If it did, I could then remove the virtual disk, turn on the PCI passthrough, and hopefully that would work.

I've followed the tutorials as best I can, with the only difference being that the boot disk is SCSI instead of SATA. Is this likely to be the cause of my problem? If so, any idea how to fix it, if that's possible at all? As a last resort I could go bare metal, but I do have a couple of other VMs on the box that I fire up on occasion, and it would be nice to keep them; the NAS/Plex combo is the primary purpose, though.

Thanks,
Jeff
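For what it's worth, other ESXi users report that a SATA controller the old C# client can't create can still be added by hand-editing the VM's .vmx file (with the VM powered off, e.g. over SSH to the host, and the VM at virtual hardware version 10 or later). This is only an illustrative sketch of the standard .vmx keys involved; the slot assignments and file name are assumptions, not something verified on this setup:

```
# Illustrative .vmx fragment (assumption: hardware version >= 10).
# Edit with the VM powered off; reload the VM afterwards so ESXi
# picks up the change.
sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "synoboot.vmdk"
```

After adding these lines, the existing SCSI 0:0 entry for the boot image would need to be removed so the loader attaches only at SATA 0:0.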
  4. I'm having a very similar problem to the one in this topic. I responded to that thread but was directed to start a new one; I hope this is the right place for it.

I have a custom-built NAS box that I put together a few years back using ESXi 6.0 and DSM 5.x. It ran fine, but the Plex package no longer supports DSM 5.x, so I needed to upgrade. I tried DSM 6.2 with Jun's bootloader 1.03 and could never get the virtual machine to boot properly: in the vSphere console window I'd see the splash screen and nothing would happen after that. So I went to DSM 6.1.7 and bootloader 1.02b, and this seemed to work. This was a new VM; nothing from the DSM 5 setup was reused except that my RAID card is set to PCI passthrough, so my storage disks were the same. The virtual disk that DSM runs in is completely new.

Everything worked fine. I could browse into DSM, I installed the latest Plex server, installed the Ubiquiti package, and all looked good... until the first power outage. When the unit came back up, all of the shares appeared correctly, Plex loaded properly, and I can SSH into the box just fine, but I cannot access DSM via the browser. The browser reports:

This site can't be reached. 192.168.254.120 refused to connect. ERR_CONNECTION_REFUSED

Here's my DSM VERSION file:

root@WormNAS:~# more /etc.defaults/VERSION
majorversion="6"
minorversion="1"
productversion="6.1.7"
buildphase="GM"
buildnumber="15284"
smallfixnumber="3"
builddate="2018/12/26"
buildtime="08:39:07"

I can try starting nginx manually, but it fails to start, and I see the error below in the error log:

2020/11/06 09:05:00 [emerg] 17636#17636: socket() [::]:5000 failed (97: Address family not supported by protocol)

Everything I can find about this error says that IPv6 is either not enabled or not working.
I can't find where to fix IPv6 or, better yet since I'm not using it, disable it, so that whichever script keeps putting the

listen [::]:5000 default_server;

lines into /etc/nginx/nginx.conf would stop doing that. You can edit that line out of the various server sections, but as soon as you try to restart nginx, it comes back and nginx fails to start again. Very frustrating.

If anyone has any idea of how I can recover this using just SSH, I'd appreciate it. I could always reinstall everything from scratch in a new VM if I needed to, but I'd also like to ensure it wouldn't happen again on the next power cycle.

Alternatively, if anyone has tips on DSM 6.2 and bootloader 1.03, I could try that route again. I spent four hours trying last time with no luck, when 6.1.7 and 1.02b worked in about five minutes, but if that config is preferred, I'd like to know how to make it work as well. If I haven't provided enough info, let me know.

Thanks,
Jeff
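On the "the listen [::] lines keep coming back" point: since DSM regenerates /etc/nginx/nginx.conf on restart, deleting them by hand is only a stopgap, but the edit itself can be scripted. A hypothetical helper (the path and the regeneration behaviour described in the post are the only assumptions here):

```shell
# Hypothetical stopgap: delete every "listen [::]:..." line from an
# nginx config file. On DSM the file is regenerated on service restart,
# so this alone won't survive a restart; it only automates the manual
# edit described above.
strip_ipv6_listen() {
  conf="$1"
  # delete any line containing "listen [::]" (brackets escaped for sed)
  sed -i '/listen \[::\]/d' "$conf"
}
```

Usage would be something like `strip_ipv6_listen /etc/nginx/nginx.conf` followed by starting nginx directly, without going through whatever DSM service wrapper rewrites the file.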
  5. 1 - Are you running a HP N40L (or other G7 microserver)?

No. But this was the only thread the site search turned up with my exact error, and I hoped that someone who had solved it on one set of hardware could help out. I doubt this particular error is hardware specific (but it could be).

2 - Did you update to DSM 6.x from DSM 5.x?

Yes and no. Since I'm on ESXi, the old XPEnology is in a different VM, on a different virtual disk. Only the passed-through PCI RAID controller and hard drives were carried over, not the datastore with the old XPEnology bootloader or DSM version.

3 - If No, make a new thread and fill in all the details.

Will do.

4 - If Yes on 2: sudo rm -r .xpenoboot

See #2.

Thanks,
Jeff
  6. I'm having the same problem running 6.1.7. I don't know if I should respond here or make a new post...

root@WormNAS:~# more /etc.defaults/VERSION
majorversion="6"
minorversion="1"
productversion="6.1.7"
buildphase="GM"
buildnumber="15284"
smallfixnumber="3"
builddate="2018/12/26"
buildtime="08:39:07"

I'm running bootloader 1.02b on ESXi 6.0.0. I can access my files, my Plex server is running and I can get to the Plex web page, but I cannot get to DSM. I could when I first installed everything, but after a power loss, when it came back up, I could no longer get to the web interface.

Everything I can find about the error...

2020/11/06 09:05:00 [emerg] 17636#17636: socket() [::]:5000 failed (97: Address family not supported by protocol)

... says that IPv6 is either not enabled or not working. I can't find where to fix IPv6 or, better yet since I'm not using it, disable it, so that whichever script keeps putting the

listen [::]:5000 default_server;

lines into /etc/nginx/nginx.conf would stop doing that. You can edit that line out of the various server sections, but as soon as you try to restart nginx, it comes back and nginx fails to start again. Very frustrating.

Jeff
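One diagnostic worth running over SSH: nginx's errno 97 ("Address family not supported by protocol") on an [::] listen almost always means the running kernel has no IPv6 support at all, and on Linux /proc/net/if_inet6 only exists when IPv6 is up. A small sketch of that check (generic Linux behaviour, nothing DSM-specific assumed):

```shell
# If /proc/net/if_inet6 is missing, the kernel has no IPv6, and any
# "listen [::]:5000" directive in nginx.conf will fail with errno 97.
has_ipv6() {
  [ -e /proc/net/if_inet6 ]
}

if has_ipv6; then
  echo "IPv6 available - nginx [::] listen lines should bind"
else
  echo "IPv6 missing - nginx [::] listen lines will fail (errno 97)"
fi
```

If the file is missing, the power cycle may have left the box booting a kernel or module set without IPv6, which would explain why a config that worked before suddenly fails.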