
About mmedeiro

  • Rank
    Junior Member


  1. Hi all, I'm performing a repair on the volume after it became un-writeable due to a disk-full error in Windows. I should have 16TB free, but this is no longer the case. I logged in to DSM via the web and was met with this: 16777140.96TB used out of 36.96TB. It should read 16.78TB used out of 36.96TB...
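
     When DSM's reported usage looks that corrupted, it can help to cross-check what the kernel itself reports over SSH against what the web UI shows. A minimal sketch; `/volume1` is an assumption based on where DSM usually mounts its first volume:

     ```shell
     # Compare the kernel's view of usage with what the DSM UI claims.
     # /volume1 is the usual DSM mount point; fall back to / if absent.
     vol=/volume1
     [ -d "$vol" ] || vol=/
     usage=$(df -h "$vol" | tail -1)
     echo "$usage"
     ```

     If `df` agrees with the expected ~16.78TB, the bogus number is a DSM display/metadata problem rather than real consumption.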
  2. That's the best part. I decided to run the disk check because I had just purchased 2 UPSs for the NAS, lol. Luckily the NAS came back online, but it says I have 2 bad disks, while all 11 show as healthy. I have replacement drives ready to install, but I don't know which ones to remove. Today I backed up all my important data to cloud servers in preparation for a catastrophic failure. It's a 34TB volume with 21TB of data, so there's no way to back all that up, unfortunately.
  3. Even if the volume crashed, why in the world am I not able to see the NAS at all in DS Assistant? This particular NAS has been up for years; I have never had an issue like this. I'm also seeing this: "sbin/e2fsck returns 1". I'm guessing that's the disk check running before the NAS boots.
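
     For what it's worth, e2fsck's exit status is a bitmask, so a return of 1 actually means it found and *corrected* filesystem errors, not that the check failed. A small sketch decoding the common values (taken from the e2fsck(8) man page):

     ```shell
     # Decode an e2fsck exit status (values per the e2fsck man page).
     explain_e2fsck() {
       case "$1" in
         0) echo "no errors found" ;;
         1) echo "filesystem errors were corrected" ;;
         2) echo "errors corrected; system should be rebooted" ;;
         4) echo "filesystem errors left uncorrected" ;;
         *) echo "operational or usage error (status $1)" ;;
       esac
     }
     explain_e2fsck 1
     ```

     So "returns 1" on its own is a sign the check did its job, not a reason the box should vanish from DS Assistant.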
  4. Hi all, I have an 11-disk DSM 5.2 Xpenology NAS which recently had another unexpected power-down... not the first time. It has been reporting an issue with the file system for a while now. I had begun the file system check via the volume manager on the DSM web page in the past, but canceled and rebooted the NAS out of impatience. Today I decided to run the full check, but upon restart I got an "F1 to boot" BIOS error because I'm watercooled; usual stuff, nothing new. So I bypass that by hitting F1, then the LSI controllers initialize as usual. Then it moves on to boot into xpen
  5. OK, now here is the log. It looks like the disk crashed. Can I now confirm the HDD was on its way out? A SMART quick check comes back fine.
  6. Hi, I've been running my 11-disk Xpenology DS for roughly a month with excellent consistency and performance, until one of my disks, disk 3, reported "Disk Plugged Out" and then "Disk Plugged In" 12 minutes later. This resulted in having to rebuild. It took roughly 26 hours to rebuild, which is far superior to the 7 days it took on the Synology hardware. I'm just wondering if this is common? I was using a 500-watt EVGA 80+ PSU. I ordered a second PSU to run disks 8 - 16 going forward. Disk 3 has no bad sectors and has 7500 hours on the clock, so I don't think this was a disk issue. Opinions on
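
     Since DSM builds its volumes on Linux md RAID underneath, a rebuild like that can also be watched from an SSH shell instead of the web UI. A sketch, assuming shell access is enabled on the box:

     ```shell
     # /proc/mdstat shows per-array resync/recovery progress (with a
     # percentage and ETA) on md-based systems such as DSM.
     mdstat=$(cat /proc/mdstat 2>/dev/null || echo "no md status available here")
     echo "$mdstat"
     ```

     Polling that file during a rebuild also shows the speed in KB/s, which is handy for judging whether a flaky PSU or cable is throttling the resync.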
  7. OK. It wasn't recognizing the server name as "NAS" any longer. I used the local IP to map it and now it's fine. Weird, since the server is still named NAS.
  8. Hi, I set up link aggregation using my Cisco SG 200-8. Everything is fine in the setup, but I can no longer connect to the NAS in Windows. Syno Assistant recognizes the NAS on the network, but when I try to map it to a drive letter it cannot connect.
  9. Please PM me your PayPal so I can at least buy you a beer and a lunch. You got me up and running with my SHR-2 transferred!
  10. So it's denying the migration because I was on a newer DSM on my previous Syno box where the array was housed. EDIT: Migration completed, rebooting now... fingers crossed.
  11. Finally, all disks recognized! There is an extra disk recognized; I think it's the USB thumb drive being detected as a hard disk. Now how do I import the SHR RAID?
  12. With the disk in the grey port on the mobo, it shows up as the 9th disk, both before and after I edit the max disks from 12 to 32. Editing the eSATA and USB port configs and then rebooting has been fine. Now I will attempt to edit the internal config, which is what asks for a migration.
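
     The edits described above are typically made to synoinfo.conf. The key names below (`maxdisks`, `internalportcfg`) come from common Xpenology write-ups, and the right bitmask values depend on your controller layout, so treat this as a sketch practiced on a throwaway copy rather than a recipe:

     ```shell
     # Practice the edit on a temp copy first; on the NAS the real files are
     # /etc/synoinfo.conf and /etc.defaults/synoinfo.conf (back both up!).
     conf=$(mktemp)
     printf 'maxdisks="12"\ninternalportcfg="0xfff"\n' > "$conf"

     # Raise the disk limit; internalportcfg is a bitmask of internal ports
     # and must be widened to match, with values specific to your hardware.
     sed -i 's/^maxdisks=.*/maxdisks="32"/' "$conf"

     newmax=$(grep '^maxdisks=' "$conf")
     echo "$newmax"
     rm -f "$conf"
     ```

     Note that DSM can overwrite /etc/synoinfo.conf from /etc.defaults on reboot, which is why guides suggest editing both copies.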