XPEnology Community

HP N40L upgrade to 6.1.7 not showing up on network


Knarf

Question

I'm currently running 5.2-5644.3 and I'd like to update to at least 5.2-5967, which my machine is offering me on the update page.  There was a time when that worked with certain installations, but I mostly remember reading warnings not to update from within the machine's own update page.

 

So I followed Polanskiman's tutorial on migrating from 5.2 to 6.1.x (Jun's loader).  Once I boot I get the same screen shown in the tutorial, but Synology Assistant never finds the NAS.  Would replacing the NIC solve this problem?  I've seen a reference to that, but I think it applies to a later version.

 

One way or another, I'd appreciate any direction on either getting 6.1 to work or which bootloader and .pat file I'll need for 5.2-5967.  I've searched and worked on this all week and just can't find the answers I'm looking for, so I hope someone here has some info.

 

Thanks

 

Frank

 

Edited by Knarf

Recommended Posts

4 minutes ago, bearcat said:

First things first: congrats :-) I know how frustrating these things can be, and sometimes you just need some time away from it to be able to "reboot the brain".

Now that you have it running, you at least have a good starting point.

 

Update 3 is fine for your system, as can be seen right here (among other places).

Notice that he did not use any custom extra.lzma.

 

I have scratched my head over your BIOS resets, as I recall some problems from long ago where one common factor was the use of power-saving settings together with some old bootloaders.

If you have a look here or here you will see some old stuff in regards to that.

One "easy fix" seems to be password protecting the BIOS after setting it the way you want it.

 

btw: What BIOS did you finally use to flash your system?

 

Regarding your "System Partition Failed" message, that is more problematic...

 

How does it look from the "Storage Manager"?

Anything like this?

 

According to Synology there should be a link offering to "Repair the system partition". Do you see that?

If you don't get the link (or it doesn't help), you might want to run fsck manually.

 

You may want to do some reading here for basic info and here and here for more Syno related info.
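If it comes to the manual fsck, one thing worth checking first (over SSH) is whether ext4 itself considers the partition dirty. The sketch below is purely illustrative, not a DSM tool: it assumes the DSM system partition is the usual ext4 array at `/dev/md0`, and parses the `Filesystem state:` line that `dumpe2fs -h /dev/md0` (from e2fsprogs) prints.

```python
def needs_fsck(dumpe2fs_header):
    """Return True if the ext4 superblock state is anything other than 'clean'.
    Input is the text printed by `dumpe2fs -h /dev/md0`."""
    for line in dumpe2fs_header.splitlines():
        if line.startswith("Filesystem state:"):
            state = line.split(":", 1)[1].strip()
            return state != "clean"
    raise ValueError("no 'Filesystem state:' line found")

# Sample header fragments, as dumpe2fs would print them:
clean = "Filesystem volume name:   1.42.6-5644\nFilesystem state:         clean\n"
dirty = "Filesystem state:         clean with errors\n"

print(needs_fsck(clean))  # False
print(needs_fsck(dirty))  # True
```

A state like "clean with errors" or "not clean" is the hint that running `fsck.ext4` on the unmounted partition is worthwhile.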

 

 

 

Bearcat,

 

I used the mod from this website, since it specifically addresses my HP N40L.  https://www.nathanielperez.us/blog/hp-proliant-n40l-bios-modification-guide  

 

I'll look at your other links and get back.

 

Thanks

 

15 minutes ago, bearcat said:

 

 

How does it look from the "Storage Manager"?

Anything like this?

 

According to Synology there should be a link offering to "Repair the system partition". Do you see that?

If you don't get the link (or it doesn't help), you might want to run fsck manually.

 

You may want to do some reading here for basic info and here and here for more Syno related info.

 

 

 

Bearcat,

 

I went to that page and it was just as he described: all was well until the HDD/SSD section, which showed one HDD with a partition problem.  I hit Repair and all is well.

 

Thanks for the links.

10 hours ago, Knarf said:

Does anyone know why the old bootloader would have been resetting my BIOS to default? 

That rings a bell. I remember problems with microservers and BIOS resets, but it was usually on shutting down, and you seemed to lose settings on reboot.

On HP systems (desktops have this option too) you can save your settings as the defaults; that way you don't lose them if the BIOS falls back to default.

 

 

Problems with system partitions like the one you had can come up when one or more disks are not present at boot. The next time all disks are online again, you need to repair in Storage Manager; you are basically re-syncing the dropped-out disk back into your RAID.
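As a rough illustration of what that repair acts on: the kernel's md driver reports each array's member status in `/proc/mdstat`, and a `_` in the `[UU_U]`-style status string marks the dropped-out member that gets re-synced. This hypothetical Python sketch (not a DSM tool; the sample text mimics a typical mdstat) picks out the degraded arrays:

```python
import re

def degraded_arrays(mdstat_text):
    """Map each md array name to its member status string (e.g. 'UUU_')
    when a member has dropped out -- '_' marks the disk needing a re-sync."""
    result = {}
    current = None
    for line in mdstat_text.splitlines():
        name = re.match(r"^(md\d+)\s*:", line)
        if name:
            current = name.group(1)  # status appears on a following line
            continue
        status = re.search(r"\[([U_]+)\]", line)
        if status and current:
            if "_" in status.group(1):
                result[current] = status.group(1)
            current = None
    return result

# Sample /proc/mdstat-style text: md0 (system partition) is missing a member.
sample = """\
Personalities : [raid1] [raid5]
md0 : active raid1 sda1[0] sdb1[1] sdc1[2]
      2490176 blocks [4/3] [UUU_]
md2 : active raid5 sda3[0] sdb3[1] sdc3[2] sdd3[3]
      8776305792 blocks level 5, 64k chunk [4/4] [UUUU]
"""

print(degraded_arrays(sample))  # {'md0': 'UUU_'}
```

On a real box you would just `cat /proc/mdstat` over SSH; Storage Manager's repair is what brings the `_` back to `U`.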

 

The onboard NIC (AFAIR a Broadcom) should work with the additional extra.lzma for 6.1, but you can use a different PCIe NIC if you want or have to.

 

If you can use 6.1 without extra.lzma, then 6.2(.0) should work too, but 6.2.1 and above will not: the kernel settings changed, and most of the drivers in Jun's extra.lzma that comes with loader 1.03b will not work. Only the drivers provided directly by DSM will work. That will be fixed with the new extra.lzma I'm still working on.

The safest way is still 6.1, IMHO.

 

Edited by IG-88
14 minutes ago, IG-88 said:

That rings a bell. I remember problems with microservers and BIOS resets, but it was usually on shutting down, and you seemed to lose settings on reboot.

On HP systems (desktops have this option too) you can save your settings as the defaults; that way you don't lose them if the BIOS falls back to default.

 

 

Problems with system partitions like the one you had can come up when one or more disks are not present at boot. The next time all disks are online again, you need to repair in Storage Manager; you are basically re-syncing the dropped-out disk back into your RAID.

 

The onboard NIC (AFAIR a Broadcom) should work with the additional extra.lzma for 6.1, but you can use a different PCIe NIC if you want or have to.

 

If you can use 6.1 without extra.lzma, then 6.2(.0) should work too, but 6.2.1 and above will not: the kernel settings changed, and most of the drivers in Jun's extra.lzma that comes with loader 1.03b will not work. Only the drivers provided directly by DSM will work. That will be fixed with the new extra.lzma I'm still working on.

The safest way is still 6.1, IMHO.

 

That post about the BIOS issue would have saved me days as well.  Better late than never.  I repaired the disk in Storage Manager, but on the next boot it now says the drive health status is degraded, and I'm getting notifications that disk 4 is trying to re-identify.  Any ideas about a repair short of a new HDD?

If you think of it, let me know when you have the new extra.lzma ready.  I'll hold off updating any further until then.

 

Thanks,

Frank

Edited by Knarf
15 minutes ago, Knarf said:

Any ideas about a repair short of a new HDD?

Did you wait until the repair was complete (there is a percentage counter) before rebooting or shutting down?

In Storage Manager, under the disks section, there is a "Log" tab. Did you check whether anything useful is in there?

12 minutes ago, IG-88 said:

Did you wait until the repair was complete (there is a percentage counter) before rebooting or shutting down?

In Storage Manager, under the disks section, there is a "Log" tab. Did you check whether anything useful is in there?

Yes, I waited until it was complete and received a nice big green check mark in my disk health widget.  It was after I then upgraded to Update 3 that I got the new bad news.  Unfortunately I wouldn't know whether there is anything useful in the log or not; I'll have a look and see.

The log is below; not sure if it tells anything useful.

 

Thanks for your help.

disk_log-2019-10-27-19-33-30.html

Edited by Knarf

@Knarf Ok, I'm back ;-)

 

First question: Did you ever see any HDD/Volume related problems before you started your upgrade adventure?

Second Q: Do you see any SMART errors from your drive?

 

If this problem first appeared after you updated your BIOS and DSM, it might be related to that.

 

1 - Shut down your NAS, then unplug and re-plug both the HDDs and their cables, in case there is a bad connection.

 

2 - The BIOS you flashed I don't "know", but I have followed some links, and it seems to enable "all hidden options". Some of those options might interfere with your HDDs' stability and cause corruption.

Reading some of the comments where you found your BIOS, there seem to be some questions regarding its origin. It seems the BIOS you used might be the "Kamzata" one that is used by many users here, but as I have mentioned before, I only have personal experience with "TheBay", and have used it on 4 * N54L and 1 * N40L, starting with DSM 4.x and upgrading to 5.x and then 6.2 (before having to add an external NIC).

On my "Media Server NAS", I'm sticking to DS3615 / DSM 6.1.7-15284 Update 3.

 

If you want to use the latest official BIOS from HP, you might want to have a look here.

You would only "need" a modded BIOS if you want to use more than the 4 HDDs in the bays.

 

On my "Backup NAS" I run 6.2, and have been for a while:

[screenshot: DSM info showing 6.2 running on the backup NAS]

 

 

3 - Regarding the custom extra.lzma file you used: is it using the same driver for your drive controller as the "original" file? Just thinking out loud that it might be a driver conflict causing your HDD problems.

In none of the above-mentioned G7 microservers have I used a custom extra.lzma, and as long as I don't push past the DSM 6.2.x limit, the internal NIC does its job.

 

 

 

Edit:

Make sure your BIOS settings for the HDDs have not been altered or changed in any way.
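To follow up on the SMART question above: from an SSH shell, `smartctl -A /dev/sdX` (smartmontools) prints the attribute table, and a few raw values should stay at 0 on a healthy drive. The helper below is hypothetical, just to show which rows to look at; the sample text mimics typical smartctl output for a drive that is growing bad sectors.

```python
# Attributes whose raw value should be 0 on a healthy drive.
CRITICAL = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def smart_warnings(report):
    """Scan `smartctl -A`-style output; return {attribute: raw_value}
    for critical attributes with a non-zero raw value."""
    warnings = {}
    for line in report.splitlines():
        parts = line.split()
        # Attribute rows: ID# NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
        if len(parts) >= 10 and parts[0].isdigit() and parts[1] in CRITICAL:
            raw = int(parts[9].split()[0])
            if raw > 0:
                warnings[parts[1]] = raw
    return warnings

sample = """\
  5 Reallocated_Sector_Ct   0x0033   092   092   036    Pre-fail  Always       -       344
194 Temperature_Celsius     0x0022   116   097   000    Old_age   Always       -       34
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       12
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       0
"""

print(smart_warnings(sample))  # {'Reallocated_Sector_Ct': 344, 'Current_Pending_Sector': 12}
```

Non-zero reallocated or pending sectors, as in the sample, are the usual sign that a replacement drive is the safer bet.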

Edited by bearcat
19 minutes ago, bearcat said:

@Knarf Ok, I'm back ;-)

 

First question: Did you ever see any HDD/Volume related problems before you started your upgrade adventure?

Second Q: Do you see any SMART errors from your drive?

 

If this problem first appeared after you updated your BIOS and DSM, it might be related to that.

 

1 - Shut down your NAS, then unplug and re-plug both the HDDs and their cables, in case there is a bad connection.

 

2 - The BIOS you flashed I don't "know", but I have followed some links, and it seems to enable "all hidden options". Some of those options might interfere with your HDDs' stability and cause corruption.

Reading some of the comments where you found your BIOS, there seem to be some questions regarding its origin. It seems the BIOS you used might be the "Kamzata" one that is used by many users here, but as I have mentioned before, I only have personal experience with "TheBay", and have used it on 4 * N54L and 1 * N40L, starting with DSM 4.x and upgrading to 5.x and then 6.2 (before having to add an external NIC).

On my "Media Server NAS", I'm sticking to DS3615 / DSM 6.1.7-15284 Update 3.

 

If you want to use the latest official BIOS from HP, you might want to have a look here.

You would only "need" a modded BIOS if you want to use more than the 4 HDDs in the bays.

 

On my "Backup NAS" I run 6.2, and have been for a while:

[screenshot: DSM info showing 6.2 running on the backup NAS]

 

 

3 - Regarding the custom extra.lzma file you used: is it using the same driver for your drive controller as the "original" file? Just thinking out loud that it might be a driver conflict causing your HDD problems.

In none of the above-mentioned G7 microservers have I used a custom extra.lzma, and as long as I don't push past the DSM 6.2.x limit, the internal NIC does its job.

 

 

 

Edit:

Make sure your BIOS settings for the HDDs have not been altered or changed in any way.

 

Bearcat,

 

No problems before my big upgrade adventure.  Afterwards, there were suddenly lots of bad clusters, and then the RAID was listed as degraded and the drive was no longer in play.

 

I removed all the drives and put them back, and it tried again to initialize but failed.

 

Since I don't have the original BIOS I'm not able to compare, and that is probably beyond my skills to boot.

 

I took the drive out, placed it in a Windows machine, and used MiniTool Partition Wizard to try to format it and start over.  After trying to wipe the drive, it came back with a "Bad Drive" message.

 

My takeaway from all this is that somehow the upgrade process killed that drive.  I don't know how that is possible, but it did.  A new drive will arrive tomorrow; I'll install it and hopefully my array will rebuild successfully.

 

Thanks for your reply.  Is there a way to mark this thread closed on this forum?  I've looked but didn't see one; most forums want you to do that when your questions have been answered.

 

Frank

 

This topic is now closed to further replies.