reziel84 Posted June 14, 2020 #51

On 6/13/2020 at 4:20 PM, flyride said:
The issue is a matter of timing in the first place. Something in your system is causing the eSATA initialization to happen later than on other systems, or perhaps there needs to be more time between unmounting and device deactivation. The moment when /usr/local/etc/rc.d executes seems to work for most, but on your system it must be too soon. I'd try modifying the script to add pauses in the following locations and see if it helps: add a line containing sleep 5 immediately before echo "remove" >/sys/class/block/$basename/uevent, and add a line containing sleep 15 between start) and FixSynoboot.

Unfortunately these two changes did not solve my problem; the message still remains in the web UI. Thanks a lot for your support, flyride, and for all the time you have spent helping me. The changes are shown in the picture below:
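For readers following along, here is a minimal sketch of where those two pauses land in FixSynoboot.sh. The original script's detection logic is elided, and $basename stands for whatever variable the real script uses for the loader's block device, so treat this as an illustration of the placement only, not the full script:

```sh
#!/bin/sh
# Sketch only: the original FixSynoboot.sh logic is abbreviated to show
# where the two suggested "sleep" pauses go.

FixSynoboot() {
    # ... original logic that locates the loader device ($basename) ...
    sleep 5                                          # first suggested pause
    echo "remove" >/sys/class/block/$basename/uevent # deactivate the device
    # ... original logic that re-creates /dev/synoboot* ...
}

case "$1" in
    start)
        sleep 15   # second suggested pause, before the fix runs at boot
        FixSynoboot
        ;;
esac
```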
RobbieT Posted June 18, 2020 #52
Hi, currently the FixSynoboot.sh file can't be downloaded. Can you reupload the file, please?
Edit: it can be downloaded after logging in.
Edited June 18, 2020 by RobbieT
anibalin Posted July 18, 2020 #53
Thanks, flyride, for the instructions. I uploaded the file, changed its permissions, and installed the 6.2.3 update; the system restarted automatically after the update and everything went smoothly.
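For anyone replicating this, a sketch of the install steps being described (assuming the script was downloaded to /tmp, and that DSM's standard boot-script directory /usr/local/etc/rc.d is the destination, as in the first post of this thread):

```sh
# Copy the script into DSM's boot-script directory and make it executable.
# /tmp/FixSynoboot.sh is an assumed download location; adjust as needed.
sudo cp /tmp/FixSynoboot.sh /usr/local/etc/rc.d/
sudo chmod 0755 /usr/local/etc/rc.d/FixSynoboot.sh
```

Once in place, the script runs automatically at each boot, which is why the post-update restart here came up clean.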
xqdong Posted July 22, 2020 #54
Thank you so much for the fix. I just installed it as instructed, and I successfully upgraded to 6.2.3-25426 Update 2. Previously, it always prompted that the software was broken.
IceBoosteR Posted July 26, 2020 #55
Thank you for the script! I had an issue where my config was changed back to the defaults, which caused a degraded array (only 8 of 10 disks visible). However, I had used the script before figuring out that the config was not up to date, and I guess your script did part of the job. Now it's working and I can start to repair my array. Thanks for your work!
Edited July 26, 2020 by IceBoosteR
etests Posted July 30, 2020 #57
Hey guys, my last XPEnology was on DSM 5.x. Now I have run some tests on ESXi 6.7 and am happy with how stable it's working. I use Jun's loader 1.04b with DS918+. I added synoboot.vmdk as SATA 0:0 and all other drives as raw (RDM) disks on a SCSI controller. /dev/syno* all look fine. Just one issue: DSM shows a 50 MB disk under Disks. I tried this script, but nothing changed, which is expected since it's a fresh install. Any idea what I can do to hide this one drive? Thanks for this great community!
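For context, a sketch of the .vmx entries behind a layout like the one described (the key names are standard ESXi configuration keys, but the RDM pointer filename is a placeholder, since the real pointer vmdks are created separately with vmkfstools):

```sh
# Loader image on the virtual SATA controller, port 0:0
sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "synoboot.vmdk"

# Data disks as RDM pointer vmdks on the virtual SCSI controller
scsi0.present = "TRUE"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "rdm-disk1.vmdk"   # placeholder RDM pointer name
```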
flyride Posted July 30, 2020 Author #58
Two probable reasons: 1) you failed to pick the VMware option at the grub boot menu, or 2) your grub.cfg needs to include DiskIdxMap=0C00, which should also fix the slot placement of your live hard disks.
StanG Posted July 30, 2020 #59
Off topic: does the 50 MB bootloader disk need to be set to Independent - Non-persistent?
flyride Posted July 30, 2020 Author #60
It's a provided img file; you aren't creating it.
etests Posted July 30, 2020 #61

6 hours ago, flyride said:
Two probable reasons: 1) you failed to pick the VMware option at the grub boot menu, or 2) your grub.cfg needs to include DiskIdxMap=0C00, which should also fix the slot placement of your live hard disks.

@flyride thanks for the fast answer. 1) is impossible; I disabled all the baremetal options. 2) I tried it and it looks different, but now the 50 MB drive is at the end (please find my grub.cfg attached). @StanG I found that here in the forum as a solution for my mounting problem; it's possible to switch it, but I went back now.
grub.cfg
Edited July 30, 2020 by etests (old grub.cfg replaced)
flyride Posted July 30, 2020 Author #62
Oh, you are using DS918+ with the original maxdisks=16. So DiskIdxMap should be 1000; that will make the 50 MB disk disappear. You also have SataPortMap=4, which probably should be "1".
Edited July 30, 2020 by flyride
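Putting the corrected values together, here is a sketch of how the relevant line in Jun's 1.04b grub.cfg might read (the variable name is the one that loader's grub.cfg uses; any other arguments already on the line would be kept):

```sh
# DiskIdxMap=1000 maps the first (loader) SATA controller to disk index
# 0x10, past the 16 visible DS918+ slots, and the second controller to
# index 0x00, so the data disks start at slot 1. SataPortMap=1 declares
# a single port on the first controller.
set extra_args_918='DiskIdxMap=1000 SataPortMap=1'
```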
etests Posted July 30, 2020 #63
It's taking a few minutes, because it now starts in repair mode. I'll have to fix it or install fresh. Thanks for your extremely fast answers.
Edit: @flyride you made my day - DiskIdxMap=1000 solved it. The next fight is getting hibernation working. Maybe I'll have to ask in a new thread.
Edited July 30, 2020 by etests
MaTTrs Posted August 3, 2020 #64
Hello! First, thanks for this fix! I have a question: is it normal that the fix works well when I restart the computer, whereas if I only reboot DSM I get the message in notifications? Thanks!
flyride Posted August 3, 2020 Author #65
The UI pop-up message is a cosmetic problem, not a functional one, but the current version of the script is intended to suppress the pop-up error message. Reading back through the thread, there was an individual who for some reason received the UI notification and, to my knowledge, never resolved it. I was unable to duplicate this reliably in testing. Can you be very specific about what you are doing when you do and don't receive the message? Are you using ESXi or baremetal? Do you mean restarting ESXi?
MaTTrs Posted August 4, 2020 #66
Hello flyride, I'm using ESXi 6.7 on an HP ProLiant MicroServer Gen8 (customised with a Xeon and 16 GB of RAM), with a VM that has 2 CPUs and 4 GB of RAM. Sorry for the earlier explanation; let me rephrase: if I click "Restart" in the DSM web UI, I get the error message. If I click "Shutdown" in the DSM web UI and then restart the VM through ESXi, I do NOT get the error message. Thanks
Edited August 4, 2020 by MaTTrs
flyride Posted August 4, 2020 Author #67
Unfortunately, I still can't duplicate this behavior. The issue is probably a resource timing issue in the first place, so I suspect it's down to a difference in boot-up time between the systems; yours is somewhat slower than the system I am using to test. You might want to try the modifications suggested in the post linked here, and if they work for you, please report back.
https://xpenology.com/forum/topic/28183-running-623-on-esxi-synoboot-is-broken-fix-available/?do=findComment&comment=150336
Again, even if you receive the pop-up messages, your synoboot should be working correctly and your system ought to be fully functional.
MaTTrs Posted August 4, 2020 #68
So, I changed the script and it was worse: an error notification at each restart or reboot... I put back your original script, added CPU and RAM to my VM, and now it works in every case. But yeah, I know it is only cosmetic. Thanks
sandisxxx Posted August 6, 2020 #69
Hello community. What if the output of ls /dev/synoboot* on my XPEnology box (HP MicroServer Gen8, baremetal, Jun's loader 1.03b with no extra.lzma, DS3615xs 6.2.3-25426, no PCIe network card, just the stock ones) looks like this?
/dev/synoboot
/dev/synoboot1
/dev/synoboot2
/dev/synoboot3
Is it safe to install the normal way via the Control Panel, or does the script have to be executed first? Thanks in advance.
Edited August 6, 2020 by sandisxxx
flyride Posted August 6, 2020 Author #70

On 4/15/2020 at 11:00 PM, flyride said:
This solution works with either 1.03b or 1.04b and is simple to install. This should be considered required for ESXi running 6.2.3, and it won't hurt anything if installed or ported to another environment.

Quoted is the last sentence of post #1. The script won't do anything if synoboot is properly configured, but it won't hurt anything either.
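For readers checking their own systems, a quick sketch of that health check (the device names are the ones quoted in the posts above; per flyride's reply, a synoboot3 partition showing up does not by itself indicate a problem):

```sh
# A working loader exposes the synoboot device plus its partitions.
ls /dev/synoboot*
# Expected output looks like:
#   /dev/synoboot  /dev/synoboot1  /dev/synoboot2
# (some loader images also show /dev/synoboot3)

# If nothing is listed, synoboot is broken and FixSynoboot.sh should be
# installed before upgrading via the Control Panel.
```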
Jbur Posted August 7, 2020 #71
@flyride I had the same issue as MaTTrs and reziel84, but in my case the "sleep" lines added to the script helped. The notifications don't appear any more. Thanks a lot.
Edited August 7, 2020 by Jbur (new info)
Bose321 Posted August 14, 2020 #72
Can anyone help me out here? I noticed that one of my volumes crashed after updating from 6.2.2 to 6.2.3. So I checked the command and saw that I needed to run this script. The two SATA drives (or something like that) are now gone, so that's good. However, my volume is still crashed, even though the pool and the disk the volume is on are still healthy according to DSM. Is there anything I can do, or am I in trouble? Most of my packages were on that volume... I can still cd to /volume1, but all I see there is a `@database` folder.
Edited August 14, 2020 by Bose321
flyride Posted August 14, 2020 Author #73
Please repost here and don't thread-jack, thanks.
https://xpenology.com/forum/forum/82-general-questions/
mervincm Posted August 28, 2020 #75
I hate to fix what isn't broken, but I might have the issue:
admin@DSM:~$ ls /dev/synoboot*
/dev/synoboot /dev/synoboot1 /dev/synoboot2 /dev/synoboot3
Given I have the extra /dev/synoboot3, I appear to have a problem. I am running 6.2.3-U2 on DS918+ with 16 visible drive slots, 13 of them used (6 SSDs and 7 HDDs). I also have an add-in-card NVMe Intel SSD 750 400 GB used as a read cache on the HDD volume.
Volume 1 / Storage Pool 1: SHR-1, 7x 8 TB HDD in slots 1-6 and 12, Btrfs, healthy.
Volume 2 / Storage Pool 2: SHR-1, 6x 1 TB SSD in slots 7-11 and 13, Btrfs, healthy.
The NVMe is cache device 1, healthy.
Other than the strangeness that my HDD and SSD slot numbers are not sequential, and the fact that the two misordered drives (12 and 13) have had a few reconnects (none since at least March, and none on any other disks), I don't see any issues with the storage.