Forrest81 Posted May 24, 2019 #1

Hi guys,

Although I have read several posts in the past, I'm new to the community as an active member. I spent half a day yesterday reading several topics, but none of them actually described a solution for my problem.

I built my own NAS a few years ago. It's an AIMB-272 Mini-ITX board with an Intel i7 CPU and 16GB of RAM. It started on DSM 5.2 but was eventually upgraded to 6.0.1 and onwards. Thanks to the 'DSM Updates Reporting' topic I did some successful updates through the DSM menu.

MY PROBLEM: I did an update to 6.2.2-24922 through the DSM menu after checking the forum. I'm not sure about the DSM version prior to the update; I think it was 6.1.2.xxx. After the update I could not reach my NAS at the port number I had assigned, so I used Synology Assistant to find my NAS and connect. I was able to find it and connect to it, and it displayed 'No Hard Disk found on DS3615xs'. Screenshot is attached.

I have tried booting with a freshly installed 1.03b (Jun) loader, but got the same result. I then tried booting a freshly installed 1.02b: again, the same result. Next I tried booting with the 1.02b install option (2nd option), which gave me the option to repair the NAS; it took about 20 seconds and then rebooted, with the same result.

Could you please tell me what to do? Although I have backed up most of my important files (pictures and music), I'm not looking forward to reinstalling all my packages and settings and recovering all my data.

Thanks in advance!
Olegin Posted May 24, 2019 #2

Maybe you have a problem with the data or power cable on an HDD.
Forrest81 (Author) Posted May 24, 2019 #3

Hi Olegin, no, the cables are fine. All 4 disks are recognized in the BIOS, so it would be very unlikely that they stopped working correctly after boot...

Cheers, F
jensmander Posted May 24, 2019 #4

Do you have your old boot stick? If so, take a look at your grub.cfg and check whether you previously modified the SataPortMap value. You can assign a drive letter to the stick's first partition with tools like MiniTool Partition Wizard (free edition). If your SataPortMap differs, try setting that value on your 1.03b boot stick.
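For anyone who would rather compare the two sticks from a Linux box, a minimal sketch of the check (the grub.cfg excerpt below is made up for illustration; the real file sits on the stick's first FAT partition, so mount that and point the grep at it instead):

```shell
# Hypothetical excerpt of a Jun loader grub.cfg, stood up as a sample file.
# On a real stick you would mount the first partition and grep that file.
cat > /tmp/grub.cfg <<'EOF'
set sata_args='sata_uid=1 sata_pcislot=5 DiskIdxMap=0C SataPortMap=4 SasIdxMap=0'
EOF

# Pull out the SataPortMap value so the old and new sticks can be compared.
grep -o 'SataPortMap=[0-9]*' /tmp/grub.cfg
```

If the old stick carries a different SataPortMap than the freshly written one, copy the old value across before booting.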
bearcat Posted May 24, 2019 #5

Could it be that the 6.2.2 update broke compatibility with the drivers for your SATA controller?
nemesis122 Posted May 24, 2019 #6

Hi, I'm not 100 percent sure, but with 1.03b only the official PCI/LAN drivers work; correct me if I'm wrong. Check with another loader: the 1.02b loader for DS3615xs has the broadest hardware support. Which CPU do you have exactly: Sandy Bridge, Ivy Bridge or Haswell? If the CPU is Haswell or newer, check with loader 1.04b. In my experience with an ASUS H87I-Plus and a Xeon E3-1245 v3 (Haswell), loaders 1.02b and 1.04b work on this mainboard, but 1.03b does not ('no network adapter found'). In short: check with 1.04b, and if that does not work, check with loader 1.02b. The highest DSM version with 1.02b is 6.1.7-15284 Update 3.

Edited May 24, 2019 by nemesis122
Forrest81 (Author) Posted May 24, 2019 #7

Hi folks, thank you for your ideas. I just got the same feeling: unsupported hardware / lack of drivers. I have attached the spec sheet of my motherboard; it has the i7 on it. I think the step from 6.1.2.x to 6.2.2.x was simply too big. I'm happy to downgrade, by the way, if the new DSM does not (yet) support my hardware... Thanks for sharing your thoughts!

AIMB-272_DS(01.15.14)20140122142059.pdf
Polanskiman Posted May 27, 2019 #8

Looking at the boot logs should give you a clear understanding of the problem. That said, I would do a downgrade. It seems like a driver issue due to updates in the kernel. I can't say with certainty what the cause of the problem is, but if you want to be up and running ASAP, a downgrade will be the fastest route in my opinion. Good luck.
Forrest81 (Author) Posted July 1, 2019 #9

Hi folks, sorry for my late reply. I was abroad for work and on holiday directly afterwards. I am preparing for the downgrade. However, I cannot get my head around this one question: why does my NAS not function properly with the 1.04b bootloader? And the related question: how can I ever upgrade to 6.2.x?

@Polanskiman, thanks for your advice about the boot logs. Should I check them before the downgrade? And how do I do that? Thanks again!

Cheers, Frank

Edited July 1, 2019 by Forrest81
Polanskiman Posted July 1, 2019 #10

In order to see the boot logs you would need to get them through a serial console. There is no other way around it. I believe your issue is due to the SATA controller not being recognized by the version of DSM you installed. If I were you I would downgrade and stay at DSM 6.1.7, or DSM 6.2 at most. Nothing beyond that.
Forrest81 (Author) Posted July 1, 2019 #11

Thanks again. Forget the boot logs, I'm doing the downgrade. I have followed the tutorial and it works up to step 5. I can find my NAS through the Assistant, and after installing 6.1 from a PAT file I can create a login and so on. When I connect 3 of the 4 drives with data on them (I only have 4 SATA ports), nothing pops up, but after a reboot from the menu I can see 1 healthy but unused disk (the blank one I started with) and 3 'used but damaged' disks. Please see the attached picture. When I go to the volume to repair it, it only gives me the option to select the healthy disk. That would erase the data on the other 3 disks, so I did not proceed. Please tell me what to do next...
Polanskiman Posted July 3, 2019 #12

Try updating to DSM 6.1.7. Hopefully during the update DSM will fix the system partitions of these 3 disks. @flyride, what do you recommend?
Forrest81 (Author) Posted July 3, 2019 #13

I updated to 6.1.7-15284, but no luck. The HDD status has now changed to 'Initialized, Normal' (Disk 1) and 'Normal' (Disks 2, 3, 4); see the attached screenshot. I had a 'Repair' option in the overview screen; I clicked it, it said 'Repairing' for 3 seconds, then the message disappeared and the status is still 'Degraded'... The 2nd screenshot is from the RAID Group screen. I do not have an option to repair the volume; the 'Manage' button is greyed out. Is the order of the SATA connections maybe an issue? Any other tips?

Edited July 3, 2019 by Forrest81
flyride Posted July 3, 2019 #14

This doesn't make a ton of sense. Did you have 5 drives to start with? What was the exact configuration of the array and volumes before the upgrade? The 160GB Hitachi drive can't be part of your array, yet it says the array is intact and degraded. So something isn't adding up. Along with the answers to the above, go to the command line, run cat /proc/mdstat and post the output here.
Forrest81 (Author) Posted July 3, 2019 #15

Hi flyride, thank you for your comments. The configuration of the array before the upgrade: 4x Western Digital Red 3TB, the exact same disks as shown on disks 2, 3 and 4 in the 2nd screenshot. The Hitachi (disk 1) is the blank disk mentioned in the tutorial; since I only have 4 SATA ports, I added 3 out of the 4 existing disks at step 6. Here is the output from the command line:

    Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
    md2 : active raid5 sdb5[1] sdd5[3] sdc5[2]
          8776306368 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
    md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
          2097088 blocks [12/4] [UUUU________]
    md0 : active raid1 sdd1[2] sdc1[3] sdb1[1] sda1[0]
          2490176 blocks [12/4] [UUUU________]
    unused devices: <none>

And now the funny part: I can see and access all the data. Since the volume was degraded, I assumed the data would not be there. But it is; I'm just missing all my packages and settings (which is still a lot of work to redo anyway). Hope you have an idea to fix the volume and get my NAS back to its pre-update state...

Cheers, Frank

Edited July 4, 2019 by Polanskiman (Added code tag.)
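For readers picking this apart later: the brackets at the end of each md line tell the story. A small sketch that pulls the expected/active member count for the data array out of a saved copy of the output above (note that the [12/4] on md0/md1 is normal, since DSM always builds its system partitions as 12-slot mirrors):

```shell
# Save the mdstat output posted above so it can be inspected offline.
cat > /tmp/mdstat <<'EOF'
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4] [raidF1]
md2 : active raid5 sdb5[1] sdd5[3] sdc5[2]
      8776306368 blocks super 1.2 level 5, 64k chunk, algorithm 2 [4/3] [_UUU]
md1 : active raid1 sda2[0] sdb2[1] sdc2[2] sdd2[3]
      2097088 blocks [12/4] [UUUU________]
md0 : active raid1 sdd1[2] sdc1[3] sdb1[1] sda1[0]
      2490176 blocks [12/4] [UUUU________]
unused devices: <none>
EOF

# [4/3] on md2 means 4 members expected but only 3 active, and the leading
# underscore in [_UUU] shows it is slot 0 (the unplugged disk) that is missing.
grep -A1 '^md2 ' /tmp/mdstat | grep -o '\[[0-9]*/[0-9]*\]'
```

This is also why the data is still readable: RAID 5 tolerates exactly one missing member, so a [4/3] array is degraded but intact.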
flyride Posted July 3, 2019 #16

So you deliberately degraded your array in order to backrev. Boggle... At this point, remove the Hitachi and re-install your fourth WD drive. You really ought to do this while the system is running (don't reboot or power it down). The system should recognize the WD as a new drive. You should then be able to repair your array, and the array may be back to normal. You'll need to noodle through your packages and settings (hopefully you did a settings backup?). Packages are lost when the system cannot see the volume the packages are installed on after bootup. Is your volume named the same as it was before the upgrade (i.e. volume1)?

Edited July 3, 2019 by flyride
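For reference, what Storage Manager's Repair does under the hood is roughly the following. This is a sketch only: the synopartition invocation and the /dev/sde device name are assumptions that must be checked against your own fdisk -l and /proc/mdstat output, so the DRY_RUN guard below only prints the commands instead of executing them.

```shell
# Guarded runner: with DRY_RUN=1 the commands are only printed, not executed.
DRY_RUN=1
run() { if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi; }

# 1. Lay out Synology's standard partition scheme on the blank replacement
#    disk (hypothetical device name /dev/sde; verify with fdisk -l first).
run synopartition --part /dev/sde 12

# 2. Add the new data partition back into the degraded RAID 5 and let md
#    resync it (progress can be watched in /proc/mdstat).
run mdadm --manage /dev/md2 --add /dev/sde5
```

In practice the GUI Repair button is the safer route; the commands are shown only to demystify what 'Repair' means.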
Polanskiman Posted July 4, 2019 #17

7 hours ago, flyride said: "So you deliberately degraded your array in order to backrev. Boggle..."

I believe the intention was not to deliberately degrade the array; he did it unknowingly by not plugging in all the drives (3 instead of 4) due to only having 4 SATA ports. I should have seen this coming! Thanks for stepping in.
Forrest81 (Author) Posted July 4, 2019 #18

2 hours ago, Polanskiman said: "I believe the intention was not to deliberately degrade the array but he did it unknowingly by not plugging all drives (3 instead of 4) due to only having 4 sata ports."

Exactly! Anyway, I removed the Hitachi and re-installed my 4th WD Red drive while the NAS was running, but nothing happens. I refreshed the browser; it's still showing the Hitachi. Should I do something in DSM, scan for new drives or something similar? Or just wait? It's been plugged in for 10 minutes now...

Edited July 4, 2019 by Forrest81
Forrest81 (Author) Posted July 5, 2019 #19

Any update, guys? I'm considering saving the data to external drives, wiping the disks and starting from scratch... Please share any thoughts. Thx!
flyride Posted July 6, 2019 #20

Your BIOS may not support hot plugging, or you may need to enable it. What you want to avoid is booting the wrong DSM copy (the one from the drive you removed). Do you have a computer that you could use to wipe the WD disk not currently in the array? If you can install it in another computer and delete all the partitions, then you can install it in your NAS, boot normally, then rebuild the clean drive back into the array.
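On the wiping step: on a Linux machine, blanking the partition tables is enough for DSM to treat the disk as new. A sketch using a 10 MiB image file as a stand-in for the disk, since running this against a real /dev/sdX is destructive (triple-check the device name before adapting it):

```shell
# Stand-in 'disk': a 10 MiB image file instead of a real /dev/sdX.
truncate -s 10M /tmp/fake-disk.img

# Plant a fake partition signature so there is something to wipe.
printf 'FAKEPART' | dd of=/tmp/fake-disk.img bs=1 seek=512 conv=notrunc status=none

# Zero the first MiB: removes the MBR and primary GPT header.
dd if=/dev/zero of=/tmp/fake-disk.img bs=1M count=1 conv=notrunc status=none

# GPT keeps a backup table at the end of the disk, so zero the last MiB too.
dd if=/dev/zero of=/tmp/fake-disk.img bs=1M count=1 seek=9 conv=notrunc status=none

# Confirm the start of the 'disk' is now all zeros.
cmp -s -n 1048576 /tmp/fake-disk.img /dev/zero && echo "partition table wiped"
```

On most modern distros, wipefs -a /dev/sdX does the same job in one command.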
Forrest81 (Author) Posted July 6, 2019 #21

I will check the BIOS, good point! But, if I understand correctly, when I reboot with all 4 WDs connected I would end up having the original problem: booting 6.2.2 with the error 'no disks found'. So I would need to reattach the Hitachi in order to boot the 6.1.4, and then wipe the 4th WD and swap it with the Hitachi...??
Forrest81 (Author) Posted July 15, 2019 #22

Guys, a short update: I was able to fix the problem! I don't know how, but I did a reboot from the menu and the NAS rebooted with 6.1.4 (unlike earlier tries, where the system was not found by the Syno Assistant). I wiped the 4th WD Red and reattached it: not found. I did another reboot with the disk connected and the disk was found, and I was able to repair the volume. I did a full backup of all my data before the wipe, just to be safe, but all the data is still in place. However, all my programs are gone. I restored a settings backup so I have most of my settings like before, but I still need to reinstall a lot of stuff. But at the end of the day, nothing that can't be repaired, so nothing is lost. Thanks ever so much for all your help guys, much appreciated!

Cheers, Frank
Polanskiman Posted July 15, 2019 #23

Glad you were able to fix your issue.