NooL

Members
  • Content Count

    38
  • Joined

  • Last visited

Community Reputation

5 Neutral

About NooL

  • Rank
    Junior Member


  1. For others checking the topic - if you get an error that the site cannot be opened when you open QuickConnect in Control Panel, or when you try to set up a synology.me DDNS, then it's a serial number/MAC issue. If you check tail /var/log/messages when this occurs, you will see an error stating an invalid serial/MAC combination or the like.
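A quick way to check for this over SSH (as root). The exact wording of the log message varies, so the grep pattern below is just a guess - adjust it if nothing shows up:

```shell
# Look through recent system log entries for serial/MAC validation
# errors around the time the QuickConnect/DDNS attempt failed.
tail -n 200 /var/log/messages | grep -iE 'serial|mac'
```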
  2. SUCCESS AGAIN! synocheckshare was the magic command to get my shares back up and running. Everything is now as if nothing had happened.
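For reference, it was just run over SSH as root - no arguments were needed in my case:

```shell
# Have DSM re-check/re-register the shared folders after the
# volume came back, so they show up in the GUI again.
synocheckshare
```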
  3. And the RAID parity check finished with no problems. BUT - I am still not seeing the "Shared Folders" in the GUI. They are there and I can copy data via SSH, but since they don't show in the GUI, everything is a bit harder - I cannot use the built-in tools to transfer (Windows share, FTP, etc.). Does anybody have an idea how I can restore the Shared Folder metadata that was lost in the crash/reboot?
  4. Same deal basically with md3 - md4 was fine already. mdadm --assemble --force --run /dev/md3 /dev/sdk6 /dev/sdl6 /dev/sdm6 /dev/sdj6 /dev/sdn6 -v I tried to --re-add at first with md3 since it was only missing 1 drive, but the event counts differed too much, so I had to re-assemble it with the above.
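In case it helps others in the same spot: you can compare the event counters of the member partitions first to decide between --re-add and a forced assemble. The device names below are from my md3 - substitute your own:

```shell
# Print the mdadm event counter of each member partition.
# If the counts are close, --re-add may work; if they differ a lot,
# a forced --assemble is usually needed (at your own risk).
for d in /dev/sdk6 /dev/sdl6 /dev/sdm6 /dev/sdj6 /dev/sdn6; do
  printf '%s: ' "$d"
  mdadm --examine "$d" | awk '/Events/ {print $3}'
done
```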
  5. SUCCESS!!! I now have access to my data! The magic commands were: mdadm --assemble --force --run /dev/md2 /dev/sdg5 /dev/sdh5 /dev/sdi5 /dev/sdk5 /dev/sdl5 /dev/sdm5 /dev/sdj5 /dev/sdn5 -v then vgchange -a y vg1000 then lvm lvchange -ay /dev/mapper/vg1000-lv then mount /volume1. Any ideas on how to re-create the metadata of the Shared Folders? I can see in the log that it was removed after the reboot following the crash.
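For readability, here is the same sequence one command per line, with what each step does. The device and volume group/LV names are from my box - check cat /proc/mdstat and your own LVM names before running anything:

```shell
# 1. Force-assemble the crashed md array from its member partitions
#    (verbose output so you can see what mdadm decides).
mdadm --assemble --force --run /dev/md2 /dev/sdg5 /dev/sdh5 /dev/sdi5 \
      /dev/sdk5 /dev/sdl5 /dev/sdm5 /dev/sdj5 /dev/sdn5 -v
# 2. Activate the LVM volume group that sits on top of the array.
vgchange -a y vg1000
# 3. Activate the logical volume inside it.
lvm lvchange -ay /dev/mapper/vg1000-lv
# 4. Mount the volume (DSM already has an fstab entry for /volume1).
mount /volume1
```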
  6. Yeah, I've been reading up on that, and that's the one that gave me hope. From what I read, it should be possible to assemble the RAID with mdadm in the correct order and get access to the data again.
  7. Unfortunately my friend has not had time to look into it yet, but after some intense googling I found that the space_history files contain the disk/partition order, I believe. From before the crash it looked like this: is this any help?
  8. Yeah, the RAID was expanded, as I used one of the 8TB disks as temp storage while I built the NAS. They are not on the same cable, as the 8TB disks are on different connectors. So it's like the screenshot: 2,2,2,8 and 4,4,4,8 on the SAS controller. My theory, but I'm not sure - do correct me if it's not plausible: I already had one disk (Disk 9) with 2 bad sectors (this flags the disk as degraded?). This morning I then got a disk write failure (the one from the log) on disk 8, which caused the RAID to crash since it's SHR1? I'm not sure this is it, though, as the RAID was marked as healthy in the UI - even with the 2 bad sectors on one of the disks.
  9. I have a friend who works professionally with Unix/Linux who will take a look at it Monday; hopefully he can work some magic. If there are any suggestions or help, please keep it coming. Thanks a lot for the help so far - I will keep you guys updated even if I don't hear more in here.
  10. I believe one other disk is in a degraded state, as it has 1 bad sector, so that's probably what triggered it.
  11. @IG-88 Thanks for your reply - I apologize for the Danish in the screenshots. And yes, you are absolutely correct regarding the 26TB SHR1 volume. From what I can see online and in a few YouTube videos, mdadm is the way to go; from what I read, there is a strong chance that the RAID can be rebuilt (if nothing else, then so I can take a backup of the data). I am in no way familiar with mdadm though, so I am hoping somebody in here is. In the log I am seeing this: Internal disks woke up from hibernation. Write error at internal disk [8] sector 10094696. Storage Pool [1] was crashed.
  12. Hi guys. So this morning I woke up to a "Volume has crashed" email; this is the first time this has happened. My setup: 8 SATA HDDs attached to a SAS controller. At first I had access to the data, but I was foolish enough to reboot the box - after this I no longer had access to the data. I now have 4 out of 8 disks showing as "Initialized" after clicking the "Repair system partition" link it presented me with, but still no access to data. What are my next steps? How do I go about getting the volume up and running again successfully? I really hope for help here, since I had about 13TB of data on that volume. I have attached pictures showing the current state.
  13. Very nice, thank you. Maybe you could make a few with themes for the different login screens, like: one for Download Station with a downloads/downloading theme, one for Video Station with a videos/TV/movies theme, and one for File Station with a files theme. And a small suggestion would be to make them friendlier to the Synology-style login boxes - I'm finding it very hard to see the outline of the login box on some of the wallpapers. Could be awesome.
  14. Sounds good. In my case I think I accidentally overwrote the backup file too, so I can't restore or anything. When do you expect the new version to be ready? Awesome work!
  15. What am I doing wrong? I keep getting: Invalid entry length (3). DMI table is broken! Stop. (repeated four times). Also it now seems broken in Control Panel.