
Running 6.2.3 on ESXi? Synoboot is BROKEN, fix available


flyride


On 6/13/2020 at 4:20 PM, flyride said:

The issue is a matter of timing in the first place. Something in your system is causing the eSATA initialization to happen later than on other systems, or perhaps there needs to be more time between unmounting and device deactivation. The moment when /etc/local/rc.d executes seems to work for most, but on your system it must be too soon.

 

I'd try modifying the script to add pauses in the following locations and see if it helps:

 

Add a line containing sleep 5 immediately before echo "remove" >/sys/class/block/$basename/uevent

 

Add a line containing sleep 15 between start) and FixSynoboot
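
(For readers following along, here is a minimal sketch of where those two pauses would sit. The structure around them is paraphrased from the instructions above, not copied from the published FixSynoboot.sh, so treat it purely as an illustration:)

#!/bin/bash
# Illustrative sketch only - the real script discovers the wrongly-detected
# loader partitions itself; placeholder names are used here to keep the
# sketch self-contained.

FixSynoboot() {
    for basename in sdm1 sdm2; do                          # placeholder device names for the sketch
        sleep 5                                            # extra pause before deactivating the device
        echo "remove" >/sys/class/block/$basename/uevent   # detach the bogus block device
    done
    # ... recreation of the /dev/synoboot device nodes is omitted ...
}

case "$1" in
    start)
        sleep 15        # give a slower system time to finish eSATA initialization
        FixSynoboot
        ;;
esac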

 

Unfortunately these two changes did not solve my problem; the log message still remains in the WebUI. Thanks a lot for your support, flyride, and for all the time you have spent helping me.

The changes are listed in the picture below:

 

dsm.JPG


  • 1 month later...

Thank you for the script!

I had the issue that my config was changed back to the defaults, which caused a degraded array (only 8 of 10 disks visible). However, I had used the script before figuring out that the config was not up to date, and I guess your script did part of the job ;)

Now it's working and I can start repairing my array. Thanks for your work!

 

Edited by IceBoosteR

Hey Guys,

 

My last XPEnology install ran DSM 5.x. Now I have run some tests on my ESXi 6.7 and was happy with how stable it is. I use Jun's Loader 1.04b with 918+. I added synoboot.vmdk as SATA 0:0 and all other drives as RAW using the SCSI controller.

 

The /dev/syno* devices are all fine. Just one issue: DSM shows a 50 MB disk under Disks. I tried this script, but nothing changed, as expected since it's a fresh install. Any idea what I can do to hide this one drive?

 

Thanks for this great community!

Bildschirmfoto 2020-07-30 um 13.29.12.png

Bildschirmfoto 2020-07-30 um 14.34.33.png

Bildschirmfoto 2020-07-30 um 14.34.16.png

Bildschirmfoto 2020-07-30 um 13.29.29.png


6 hours ago, flyride said:

Two probable reasons: 1) you failed to pick the VMware option at the grub boot menu, or 2) your grub.cfg needs to include DiskIdxMap=0C00, which should also fix the slot placement of your live hard disks.

@flyride thanks for the fast answer. 1) is impossible, I have commented out all the baremetal options. ;-) 2) I tried it and it looks different, but now the 50 MB drive shows up at the end (please find my grub.cfg attached).

 

@StanG I found that suggested here in the forum as a solution for my mounting problem; it is possible to switch it. But I have switched back now.

 

 

Bildschirmfoto 2020-07-30 um 23.37.07.png

Bildschirmfoto 2020-07-30 um 23.37.39.png

 

grub.cfg

Edited by etests
Old grub replaced

It is taking a few minutes because it now starts in repair mode. I will have to repair it or install fresh. Thanks for your extremely fast answers. :)

 

Edit: @flyride you made my day - DiskIdxMap=1000 solved it. The next fight is getting hibernation working. Maybe I will have to ask in a new thread.
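
(For anyone hitting the same 50 MB drive, a hedged example of where that parameter typically lives in a Jun 1.04b grub.cfg; the variable name can differ between loader builds, so check your own file rather than copying this line verbatim:)

# illustrative grub.cfg fragment, not the file attached above
set sata_args='DiskIdxMap=1000'    # first SATA controller (the loader vmdk) starts at hex 0x10, i.e. past the visible slots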

Edited by etests

The UI pop-up message is a cosmetic problem, not a functional one. But the current version of the script is intended to suppress the pop-up error message. Reading back through the thread, there was an individual who for some reason received the UI notification and, to my knowledge, never resolved it. I was unable to duplicate this reliably in testing.

 

Can you be very specific as to what you are doing when you do and don't receive the message?  Are you using ESXi or baremetal?  Do you mean restarting ESXi?


Hello flyride,

 

I'm using ESXi 6.7 on an HP ProLiant MicroServer Gen8 (upgraded with a Xeon and 16 GB of RAM), with a VM that has 2 CPUs and 4 GB of RAM.

Sorry for the unclear explanation, let me rephrase:

if I click to "Restart" in the DSM WebUI I get the error message.

If I click to "Shutdown" in the DSM WebUI and then restarting the VM through ESXI I did NOT get the error message.

 

Thanks

Edited by MaTTrs

Unfortunately I still can't duplicate this behavior. The issue is probably a resource timing issue in the first place, so I suspect it comes down to a difference in boot-up time between the systems.

 

Your system is somewhat slower than the system I am using to test. You might want to try the modifications suggested in this post here. If you do and it works for you, please report back.

https://xpenology.com/forum/topic/28183-running-623-on-esxi-synoboot-is-broken-fix-available/?do=findComment&comment=150336

 

Again, even if you receive the pop-up messages, your synoboot should be working correctly and your system ought to be fully functional.

 


Hello community. What if the output of ls /dev/synoboot* on my XPEnology box (HP MicroServer Gen8, bare metal, Jun's loader 1.03b with no extra lzma, DS3615xs, DSM 6.2.3-25426, no PCIe network card, just the stock ones) looks like this?

/dev/synoboot

/dev/synoboot1

/dev/synoboot2

/dev/synoboot3

Is it safe to install the normal way via the Control Panel?

Or does the script have to be executed first?

Thanks in advance.

Edited by sandisxxx

On 4/15/2020 at 11:00 PM, flyride said:

This solution works with either 1.03b or 1.04b and is simple to install. This should be considered required for ESXi running 6.2.3, and it won't hurt anything if installed or ported to another environment.

 

Quoted is the last sentence in post #1.  The script won't do anything if synoboot is properly configured, but it won't hurt anything either.
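
(A quick, non-destructive way to check exactly that before running anything; these are just standard commands, nothing specific to the fix script:)

ls -l /dev/synoboot*          # the loader device node plus its partition nodes
sudo fdisk -l /dev/synoboot   # on a correctly mapped system this reports the ~50 MB loader image and its partitions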


Can anyone help me out here? I've noticed that one of my volumes has crashed after updating from 6.2.2 to 6.2.3. So I checked with the command and saw that I needed to run this script. The two SATA drives (or something like that) are now gone, so that's good. However, my volume is still crashed. The pool and the disk that the volume is on are still healthy according to DSM.

 

Is there anything I can do, or am I in trouble? Most of my packages were on that volume... I can still cd to /volume1, but all I see there is a `@database` folder.
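
(Not a fix, but a few generic read-only checks that can help tell whether the volume is really gone or simply not mounted; standard Linux commands only, nothing specific to DSM:)

df -h /volume1            # is a filesystem actually mounted at /volume1, or is it just the empty mount point?
mount | grep volume1      # which device and filesystem type (if any) back /volume1
cat /proc/mdstat          # state of the underlying md arrays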

Edited by Bose321

I hate to fix what isn't broken, but I might have the issue.

 

admin@DSM:~$ ls /dev/synoboot*
/dev/synoboot  /dev/synoboot1  /dev/synoboot2  /dev/synoboot3
 

Given that I have the extra device (/dev/synoboot3), I appear to have a problem.

 

I am running 6.2.3-U2 918+ with 16 visible drive slots, 13 of them used (I have 6 SSDs and 7 HDDs).
I also have an AIC NVMe Intel SSD 750-400 used as a read cache on the HDD volume.

My volume 1, storage pool 1, is SHR1, 7x 8TB HDD using slots 1,2,3,4,5,6,12 BTRFS - Healthy

My volume 2, storage pool 2, is SHR1, 6x 1TB SSD using slots 7,8,9,10,11,13 BTRFS - Healthy

My NVME is cache device 1 - Healthy

 

Other than the strangeness that my HDDs and SSDs are not sequential in their slot numbers, and the fact that the two misordered drives (12 and 13) have had a few reconnects (none since at least March), I don't see any issues with the storage on the other disks.
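
(If you want to see why the slot numbers come out non-sequential, one generic and read-only way is to look at which controller and port each disk sits on, since slot assignment follows the controller mapping set in the loader:)

ls -l /sys/block/sd?      # the symlink targets show the controller/port path behind each sdX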

 

 

 

