HP ProLiant Microserver Gen8 No WebUI anymore



Hello,

 

Today I started my Synology and was greeted with "The site cannot be found". After restarting the Microserver it doesn't load the page at all. Unfortunately I have not enabled SSH... Is there any way I can recover it without reinstalling?


[screenshot: mdadm output]

 

Running mdadm --detail /dev/md/2 on an Ubuntu live CD shows only 2 of the 3 drives.

 

[screenshot: mdadm output]

 

Running mdadm --detail /dev/md0 (the Synology system partition) shows the third data disk (the one missing above) and the 2nd disk, but is missing the first data disk.
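
For reference, the inspection steps look roughly like this (a sketch; partition numbers are assumed from the usual DSM layout, sdX1 for the system partition and sdX5 for data):

sudo mdadm --assemble --scan                         # try to assemble every array found
cat /proc/mdstat                                     # overview of all array states
sudo mdadm --detail /dev/md0                         # DSM system partition (RAID1)
sudo mdadm --detail /dev/md2                         # data volume
sudo mdadm --examine /dev/sda5 /dev/sdb5 /dev/sdc5   # per-disk superblock view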

 

The whole system works with only the 2nd volume disk (which is a separate SSD). With 1 or 2 of the data disks inserted, the login doesn't work: it doesn't find my user, and the admin user is disabled for some reason. With all 3 data disks inserted, the web UI doesn't work at all.

 

This started after a period of not using the NAS and having it shut down for a month.


if you are able to mount md0 read/write then you could activate ssh manually

you would also be able to see the logs

that might give you some insight into what's going on

one failing disk does not prevent the dsm system (and its webgui) from starting; it's a raid1 across all disks and should start as long as at least one disk is operational
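
roughly like this from a rescue linux (just a sketch; the device names are assumed, check yours with lsblk; sdX1 are the dsm system partitions):

sudo mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1   # assemble the system raid1
sudo mount /dev/md0 /mnt                                       # mount it read/write
sudo less /mnt/var/log/messages                                # dsm logs live on the system partition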

54 minutes ago, IG-88 said:

if you are able to mount md0 read/write then you could activate ssh manually

 

What is the DSM config entry for this? I've looked for it a few times but haven't found it.

38 minutes ago, flyride said:

What is the DSM config entry for this? I've looked for it a few times but haven't found it.

not tested but my guess was

/usr/syno/etc/synoservice.override/ssh-shell.cfg

{
  "auto_start":"yes"
}
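
if you only have the rescue linux, you could create it from there with md0 mounted at /mnt, something like this (a sketch, paths as above):

sudo mkdir -p /mnt/usr/syno/etc/synoservice.override
sudo tee /mnt/usr/syno/etc/synoservice.override/ssh-shell.cfg <<'EOF'
{
  "auto_start":"yes"
}
EOF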

 

1 hour ago, IG-88 said:

not tested but my guess was

/usr/syno/etc/synoservice.override/ssh-shell.cfg


{
  "auto_start":"yes"
}

 

Okay thanks, but if I cannot log in to DSM, I won't be able to log in over SSH either, will I?

2 minutes ago, Subtixx said:

Okay thanks, but if I cannot log in to DSM, I won't be able to log in over SSH either, will I?

if the problem is related to the webgui then ssh will still work, it's an independent service
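
once sshd is running you can connect directly, e.g. (ip and user are placeholders; on dsm 6 the user has to be in the administrators group by default):

ssh youruser@192.168.1.100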


 

7 hours ago, Subtixx said:

Did this but SSH is still not working

 

also delete


/usr/share/init/ssh-shell.override

/usr/share/init/sshd.override

 

if those are not present, sshd.conf/ssh-shell.conf will run at boot; the *.override files prevent them from starting automatically

i tested that with a fresh dsm 6.2.3 installation together with the /usr/syno/etc/synoservice.override/ssh-shell.cfg above; it worked as intended and i was able to use ssh even though it was never configured in the webgui
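
from the rescue system with md0 mounted at /mnt that would be:

sudo rm -f /mnt/usr/share/init/ssh-shell.override
sudo rm -f /mnt/usr/share/init/sshd.override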


Hmm, that didn't work for me; SSH is still not enabled and I cannot log in. I'm now going to remove the first drive and try to boot that way.

 

EDIT:

Removing the first (sda) drive results in a boot and I can log in

 

[screenshots: DSM after booting without the first drive]

On 11/9/2020 at 11:21 AM, Subtixx said:

Hmm, that didn't work for me; SSH is still not enabled and I cannot log in.

did you mount the whole raid1 with all drives?

maybe there is something else wrong

i tested this with a fresh dsm 6.2.3 install: after the 1st login in the webgui i did a shutdown, mounted the raid1 with a rescue linux (mdadm ..., mount, ...), changed the files, and after this i was able to log in and it also showed ssh as enabled in the webgui

 

On 11/9/2020 at 11:21 AM, Subtixx said:

Removing the first (sda) drive results in a boot and I can log in

a crashed raid5 is not so good; if you only removed one disk it should just be degraded, because only one redundancy disk failed

also the message about system partitions is unusual; it's not a problem in itself, as it can be repaired from the running system. but before you do anything else you should reinsert the removed disk, so dsm recognizes its system partition as invalid; on the next boot that partition should not be used for booting until you repair the system partitions (raid1 copy from a running partition).

hopefully the reboot with the missing disk brings the raid5 up again as degraded. if not, it might be possible to repair it by forcing a failed drive back into the raid and ignoring the (small) data loss; that needs a good estimation of the state, and @flyride is the one with the most experience here with that kind of task. you can also look for threads with mdadm and recovery where he helped out; if you are familiar with linux and the technical workings of a mdadm raid set, it's not so difficult to understand the process from reading a thread where a recovery was done.
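
just to give a rough idea of what such a forced assemble looks like (a sketch only, do not run it blindly: compare the event counters first and ideally image the disks; sdX5 as the raid5 data partitions is an assumption):

sudo mdadm --examine /dev/sda5 /dev/sdb5 /dev/sdc5 | grep -iE 'event|state'   # compare event counts first
sudo mdadm --stop /dev/md2                                                    # stop the half-assembled array
sudo mdadm --assemble --force /dev/md2 /dev/sda5 /dev/sdb5 /dev/sdc5          # force the members back together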

 

