68rustang · Posted February 25, 2016 · #1

Is there anybody willing to help guide a Linux noob through attempting to recover data from a fubar'd SHR volume? Last weekend my normally reliable XPenology box decided to blow up. It might have been power related; I am not sure, because I was not home. The majority of my issues are explained here: http://xpenology.com/forum/viewtopic.php?f=2&t=12414 and here: http://xpenology.com/forum/viewtopic.php?f=2&t=12458

Where I am at right now: I have the computer booted from an Ubuntu LiveUSB stick, and I tried following the Synology tutorial for recovering data using Ubuntu that can be found here: https://www.synology.com/en-us/knowledgebase/DSM/tutorial/Storage/How_can_I_recover_data_from_my_DiskStation_using_a_PC

The SHR array shows up as 1.42.6-5022, but when I try to access it through Ubuntu I get an unable-to-access error:

Error mounting /dev/dm-0 at /media/ubuntu/1.42.6-5022: Command-line `mount -t "ext4" -o "uhelper=udisks2,nodev,nosuid" "/dev/dm-0" "/media/ubuntu/1.42.6-5022"' exited with non-zero exit status 32: mount: wrong fs type, bad option, bad superblock on /dev/mapper/vg1000-lv, missing codepage or helper program, or other error
In some cases useful info is found in syslog - try dmesg | tail or so.

I know only enough about Linux to be dangerous and don't really know what this means. I have been searching the web and I see quite a few people have had success rescuing their volumes, but since I don't understand what the commands they are issuing mean, I am afraid to just start typing. Nothing on the volume is super critical, just all of our media files, but to re-rip, re-download and re-mooch everything would take a looooong time, so I am willing to invest some time in trying to rescue what I can. Poking around the web and running some commands, it looks like four of the five disks are showing up as part of the array, with the fifth showing as removed.
I do have a Windows copy of UFS Explorer that I have read good things about, but I do not have a Windows computer available that I can plug the HDDs into. What info does somebody need to help point me in the right direction? PLEASE HELP!
68rustang · Posted February 25, 2016 · Author · #2

Running dmesg | tail gives me:

ubuntu@ubuntu:~$ dmesg | tail
[  571.526516] JBD2: no valid journal superblock found
[  571.526521] EXT4-fs (dm-0): error loading journal
[31114.861827] JBD2: no valid journal superblock found
[31114.861833] EXT4-fs (dm-0): error loading journal
[31118.935355] JBD2: no valid journal superblock found
[31118.935359] EXT4-fs (dm-0): error loading journal
[31119.654014] JBD2: no valid journal superblock found
[31119.654018] EXT4-fs (dm-0): error loading journal
[31176.915385] JBD2: no valid journal superblock found
[31176.915391] EXT4-fs (dm-0): error loading journal
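Those JBD2 lines mean the kernel cannot read the ext4 journal on the logical volume, so a normal mount (which replays the journal) aborts. A common non-destructive next step, not something anyone in this thread has confirmed yet, is to mount read-only with the `noload` option, which skips journal replay entirely. The sketch below only builds the command string, so it is safe to run anywhere; the device and mount-point names are taken from the error above and would need adjusting on another system.

```shell
# Build (but do not execute) a read-only, journal-skipping mount command.
# 'noload' tells ext4 not to replay the journal, which is exactly the step
# failing in the dmesg output above.
ro_mount_cmd() {
  printf 'mount -t ext4 -o ro,noload %s %s' "$1" "$2"
}

# On the live system you would run the result as root, e.g.:
#   sudo $(ro_mount_cmd /dev/mapper/vg1000-lv /mnt/recovery)
ro_mount_cmd /dev/mapper/vg1000-lv /mnt/recovery
```

Mounting with ro,noload cannot make things worse, which is why it is usually tried before any fsck.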
shrabok · Posted February 25, 2016 · #3

Hi 68rustang,

I would try these commands first to find out the state of what the system is currently seeing.

List partitions:

fdisk -l

or

parted -l

You can also list block devices with:

lsblk

That should get a list of what's currently there. I have never tried any of this and am making assumptions based on my reading about SHR, but you have to check for LVM partitions as well as mdadm arrays. I would try the following to show whether any LVM physical volumes exist:

pvdisplay

I have a feeling you will have to rebuild all the LVM config to match what was on the NAS, and rebuild the array as well. I'll do my best to assist with what you get back from the commands above, but I don't think it will be a simple task.

Also, do you have any information on what your disk group and volume config looked like: how many groups and volumes, and their sizes? Since SHR is complex, if you had different disk sizes it could be really hard to know what is what.

Hope this will help
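The commands above can be bundled into one read-only survey script. This is my sketch, not something posted in the thread: run_probe skips any tool that is not installed, and every probe only reads, never writes. The /dev/md2 name is an assumption based on the array device that appears later in the thread.

```shell
# Read-only survey of disks, partitions, md arrays and LVM state.
# run_probe only attempts a command if its tool is installed.
run_probe() {
  tool=${1%% *}                       # first word of the command line
  if command -v "$tool" >/dev/null 2>&1; then
    echo "== $1 =="
    sudo -n $1 || true                # -n: never prompt; ignore failures
  else
    echo "missing: $tool"
  fi
}

run_probe "fdisk -l"
run_probe "parted -l"
run_probe "lsblk"
run_probe "pvdisplay"
run_probe "vgdisplay"
run_probe "mdadm --detail /dev/md2"
```

On an Ubuntu live session, lvm2 and mdadm may need installing first (sudo apt-get install -y lvm2 mdadm).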
68rustang · Posted February 26, 2016 · Author · #4

List partitions: fdisk -l

ubuntu@ubuntu:~$ sudo fdisk -l
Disk /dev/ram0: 64 MiB, 67108864 bytes, 131072 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

(/dev/ram1 through /dev/ram15 report the same 64 MiB geometry)

Disk /dev/loop0: 4 GiB, 4287627264 bytes, 8374272 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/loop1: 1.1 GiB, 1130688512 bytes, 2208376 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sda: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B2637025-73FA-47C9-AD5D-8E0AF999E7AE

Device        Start        End    Sectors  Size Type
/dev/sda1      2048    4982527    4980480  2.4G Linux RAID
/dev/sda2   4982528    9176831    4194304    2G Linux RAID
/dev/sda5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdb: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 7B22BC62-2BE5-4CD8-97F3-1BF0F12D69CE

Device        Start        End    Sectors  Size Type
/dev/sdb1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdb2   4982528    9176831    4194304    2G Linux RAID
/dev/sdb5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdc: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: D53778A8-1525-4B33-B161-E75A157451F6

Device        Start        End    Sectors  Size Type
/dev/sdc1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdc2   4982528    9176831    4194304    2G Linux RAID
/dev/sdc5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdd: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: AE5A3D73-77F6-4ED6-889B-E144C1D5BA38

Device        Start        End    Sectors  Size Type
/dev/sdd1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdd2   4982528    9176831    4194304    2G Linux RAID
/dev/sdd5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/md2: 10.9 TiB, 11982582841344 bytes, 23403482112 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 262144 bytes

Disk /dev/sde: 1.4 TiB, 1500301910016 bytes, 2930277168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00032380

Device     Boot   Start        End    Sectors  Size Id Type
/dev/sde1          2048    4982527    4980480  2.4G fd Linux raid autodetect
/dev/sde2       4982528    9176831    4194304    2G fd Linux raid autodetect
/dev/sde3       9437184 2930263007 2920825824  1.4T  f W95 Ext'd (LBA)
/dev/sde5       9453280 2930070239 2920616960  1.4T fd Linux raid autodetect

Disk /dev/sdf: 2.7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F70A9DB6-6448-4A39-9802-6CC5A1F1F0E4

Device        Start        End    Sectors  Size Type
/dev/sdf1      2048    4982527    4980480  2.4G Linux RAID
/dev/sdf2   4982528    9176831    4194304    2G Linux RAID
/dev/sdf5   9453280 5860326239 5850872960  2.7T Linux RAID

Disk /dev/sdg: 7.5 GiB, 8086618112 bytes, 15794176 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x04030201

Device     Boot Start      End  Sectors  Size Id Type
/dev/sdg1  *      144 15794175 15794032  7.5G  c W95 FAT32 (LBA)

Disk /dev/mapper/vg1000-lv: 10.9 TiB, 11982581268480 bytes, 23403479040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 65536 bytes / 262144 bytes

parted -l:

ubuntu@ubuntu:~$ sudo parted -l
Model: ATA WDC WD30EFRX-68A (scsi)
Disk /dev/sda: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 5      4840MB  3000GB  2996GB                        raid

Model: ATA WDC WD30EFRX-68E (scsi)
Disk /dev/sdb: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 5      4840MB  3000GB  2996GB                        raid

Model: ATA WDC WD30EFRX-68E (scsi)
Disk /dev/sdc: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 5      4840MB  3000GB  2996GB                        raid

Model: ATA WDC WD30EFRX-68E (scsi)
Disk /dev/sdd: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 5      4840MB  3000GB  2996GB                        raid

Model: ATA WDC WD15EARS-00S (scsi)
Disk /dev/sde: 1500GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system     Flags
 1      1049kB  2551MB  2550MB  primary   ext4            raid
 2      2551MB  4699MB  2147MB  primary   linux-swap(v1)  raid
 3      4832MB  1500GB  1495GB  extended                  lba
 5      4840MB  1500GB  1495GB  logical                   raid

Model: ATA WDC WD30EFRX-68A (scsi)
Disk /dev/sdf: 3001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system     Name  Flags
 1      1049kB  2551MB  2550MB  ext4                  raid
 2      2551MB  4699MB  2147MB  linux-swap(v1)        raid
 5      4840MB  3000GB  2996GB                        raid

Model: Linux device-mapper (linear) (dm)
Disk /dev/mapper/vg1000-lv: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: loop
Disk Flags:

Number  Start  End     Size    File system  Flags
 1      0.00B  12.0TB  12.0TB  ext4

Error: /dev/md2: unrecognised disk label
Model: Linux Software RAID Array (md)
Disk /dev/md2: 12.0TB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

Model: PNY USB 2.0 FD (scsi)
Disk /dev/sdg: 8087MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      73.7kB  8087MB  8087MB  primary  fat32        boot, lba

Show if any LVM physical volumes exist: pvdisplay

ubuntu@ubuntu:~$ sudo pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1000
  PV Size               10.90 TiB / not usable 960.00 KiB
  Allocatable           yes (but full)
  PE Size               4.00 MiB
  Total PE              2856870
  Free PE               0
  Allocated PE          2856870
  PV UUID               bshIBa-lAzW-0fJD-rEXF-5LIo-UUKV-QzTNuB

Any information on what your disk group and volume config looked like?

The drives in the box when it blew up were one 1.5 TB WD Green that was a single-disk volume (volume 2 or 3, I can't remember) and five 3 TB WD Red HDDs that made up an SHR volume (#1). I am only concerned about the disks that make (or made) up Volume 1. I don't think I ever used any disk groups, unless that is something that is created when you make a volume?

I really appreciate any and all help or information.
68rustang · Posted February 26, 2016 · Author · #5

FWIW, mdadm --detail:

ubuntu@ubuntu:~$ sudo mdadm --detail /dev/md2
/dev/md2:
        Version : 1.2
  Creation Time : Sun Feb  8 04:13:39 2015
     Raid Level : raid5
     Array Size : 11701741056 (11159.65 GiB 11982.58 GB)
  Used Dev Size : 2925435264 (2789.91 GiB 2995.65 GB)
   Raid Devices : 5
  Total Devices : 4
    Persistence : Superblock is persistent
    Update Time : Thu Feb 25 02:19:45 2016
          State : clean, degraded
 Active Devices : 4
Working Devices : 4
 Failed Devices : 0
  Spare Devices : 0
         Layout : left-symmetric
     Chunk Size : 64K
           Name : DiskStation:2
           UUID : a63e9d9c:d0e186cc:a525d249:9249f306
         Events : 43531

    Number   Major   Minor   RaidDevice State
       0       8        5        0      active sync   /dev/sda5
       1       8       21        1      active sync   /dev/sdb5
       2       8       37        2      active sync   /dev/sdc5
       6       0        0        6      removed
       5       8       85        4      active sync   /dev/sdf5

Clean but degraded is a positive sign, right?
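For what it's worth, "clean, degraded" with 4 of 5 members active means the RAID5 parity can still reconstruct every block, so the array should be fully readable even with one disk missing. A hedged sketch of the usual next steps follows; the names (/dev/md2, vg1000) come from the output in this thread, and the whole thing is guarded so it does nothing on a machine without that array.

```shell
# Sketch: activate the LVM volume group on top of the degraded array and
# attempt a read-only mount. No-op if /dev/md2 does not exist here.
if [ -b /dev/md2 ]; then
  sudo vgchange -ay vg1000            # activate the LVM volume group
  ls /dev/mapper/                     # vg1000-lv should now exist
  sudo mkdir -p /mnt/volume1
  # ro,noload: read-only, and skip the (broken) journal replay
  sudo mount -t ext4 -o ro,noload /dev/mapper/vg1000-lv /mnt/volume1 || true
else
  echo "no /dev/md2 on this machine; nothing to do"
fi
```

If the read-only mount succeeds, copy the data off before attempting any repair that writes to the volume.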
sbv3000 · Posted February 26, 2016 · #6

I think you are 90% there. As an aside, some questions:

- Have you set up the drives in new hardware?
- Have you tried booting XPE/DSM with the new hardware and 4 (or 5) drives to see if you get your volume back, but 'degraded'? You might be lucky and your volume is read-only.
- Have you tried to access the volume/folders in the Ubuntu file browser, or (crazy thought) telnet/WinSCP?
68rustang · Posted February 26, 2016 · Author · #7

I had tried rebooting XPenology with different hardware/HDD combos, but that just seemed to make things worse. After some more reading last night I ran fsck on the LV, and after answering "y" a few hundred times I had READ ONLY access to Volume 1. Woohoo!

I then rebooted the box with XPenology and was met with an orange "degraded" warning, but I still have READ ONLY access to the volume. It looks like I may have lost some files, as a couple of directories are showing as empty, but most of the things I care about are still there. The new DS showed up yesterday and the HDDs should be here today.

Thank you for the pointers; the different commands gave me enough info to search Google for answers.
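For anyone following along, the fsck run described above probably looked something like the sketch below. This is my reconstruction, not the poster's exact commands, and it is worth stressing that e2fsck modifies the volume, so ideally run it against a dd image of the LV, or only after copying off whatever mounts read-only. The guard makes the sketch a no-op on machines without the volume.

```shell
# Reconstruction of the repair sequence described above; device names are
# taken from this thread. e2fsck WRITES to the volume, so image it first
# if you can (e.g. with dd to a spare disk).
LV=/dev/vg1000/lv
if [ -e "$LV" ]; then
  sudo vgchange -ay vg1000     # make sure the volume group is active
  sudo e2fsck -f -y "$LV"      # -f force check, -y answer yes to every prompt
  sudo mkdir -p /mnt/volume1
  sudo mount -o ro "$LV" /mnt/volume1
else
  echo "no $LV present; skipping"
fi
```

e2fsck -y is what replaces "answering y a few hundred times" by hand.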
shrabok · Posted February 26, 2016 · #8

Great to hear you made some progress. I was also wondering about the health of the USB stick used for booting into XPenology, but it sounds like you were able to get back to a reasonable state.

Just out of curiosity, is one of your drives considered bad? I think the degraded state is due to a failed HDD, in which case you could attempt a rebuild or replace the bad drive. This post can walk you through it: https://www.synology.com/en-global/know ... oup_repair

I'm actually impressed to see so much was still visible (mdadm RAID and LVM partitions). I see the RAID was /dev/md2 and was added to the LVM physical group. It's good to know that's what's going on under the hood with SHR.

As for preventative measures, I have a UPS connected to my XPenology box that will power off the unit gracefully before the battery runs out. I really do think it's a great way to keep your data safe, especially if you're storing important data you don't want to lose. Also consider SHR-2 (dual parity) in the RAID array for extra fault tolerance; there is a chance that during a rebuild, the amount of data being read can cause an additional disk failure. All these things cost extra, but provide peace of mind.

Be sure to post how your recovery goes or if you need additional assistance.
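On the rebuild option mentioned above: once a known-good replacement disk is partitioned to match the other members, re-adding its data partition lets md rebuild parity onto it. A hedged sketch; the partition name /dev/sde5 is an assumption for illustration (check lsblk for the real one on your system), and the guard makes it a no-op where the array does not exist.

```shell
# Sketch: re-add a replacement member so md rebuilds the degraded RAID5.
# /dev/sde5 is an assumed example name, NOT taken from a verified layout.
NEW=/dev/sde5
if [ -b /dev/md2 ] && [ -b "$NEW" ]; then
  sudo mdadm --manage /dev/md2 --add "$NEW"
  cat /proc/mdstat               # shows rebuild progress as a percentage
else
  echo "array or replacement partition not present; skipping"
fi
```

The rebuild reads every sector of the surviving disks, which is exactly why shrabok's warning about a second failure during rebuild (and the case for dual parity) matters.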
68rustang · Posted February 26, 2016 · Author · #9 (edited)

The boot drive seems to be OK, as I never had an issue starting the system; it was just once it was up and running that I was seeing failures, degraded volumes, and sometimes no drives at all. The 1.5 TB WD Green that was not part of the array was showing as bad, but I haven't tested it yet. The 3 TB WD Red that was part of the array is now showing as removed, and I am not sure what caused that. I will test it once I have safely removed all the data.

I actually have a large APC UPS that this PC and all my connected network gear are plugged into, so the power issues being the root cause is only speculation. I am planning on SHR-2 when I set up the new DS1815+ but haven't looked into the specifics of it yet.

Thanks again for everyone's input.

Edited February 27, 2016 by Guest
pastrychef · Posted February 26, 2016 · #10

Just out of curiosity... What type of hardware were you originally running XPEnology on that caused this?
68rustang · Posted February 27, 2016 · Author · #11

Asus MB, i3 CPU, G.Skill RAM, Corsair PSU, WD HDDs; unsure of the SATA card. Initially, after first getting errors, I checked all the SATA and power connections on the MB and SATA card. This seemed to fix it at first, then I started getting more errors and everything went downhill. I do not think my problems had anything to do with XPenology itself. In my experience over the last year or so it has been very stable and indistinguishable from the real Synology DS415+ I have at the office, the only difference being that my XPenology build was way more capable than the 415+ for about the same money spent.