CrazyFin

Members
  • Content count

    16
  • Joined

  • Last visited

Community Reputation

1 Neutral

About CrazyFin

  • Rank
    Newbie


  1. Wohoooo! After almost giving up on the issue of not being able to disable the motherboard SATA ports (so that my 8 SATA ports on the LSI card would start numbering at 1) I finally found it!! Stupid setup in the BIOS, but one has to FIRST choose IDE for the SATA ports and then voilaaaa! A submenu option where the SATA ports can be disabled suddenly shows up! This is NOT visible if you have configured SATA as RAID or AHCI!? See this: After changing my internalportcfg setting from 0x3cf0 to 0x00ff all works fine now. All 8 disks connected to my add-on LSI card are now detected as drives numbered 1-8 and I will no longer have the problem of losing disks on an Xpenology/Synology system update.... Aaaahhh, finally!
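The mask arithmetic behind the post above can be sketched in a few lines. This is a hedged illustration (the helper name `internal_mask` is my own, and the exact DSM semantics may vary between versions): each bit of `internalportcfg` marks one disk slot as internal, with bit 0 = slot 1, bit 1 = slot 2, and so on, which is how the posts in this thread use the value.

```python
# Sketch of the internalportcfg bitmask convention used in this thread.
# Bit 0 corresponds to disk slot 1, bit 1 to slot 2, etc. The helper below
# is a hypothetical name introduced for illustration only.

def internal_mask(first_slot: int, count: int) -> int:
    """Mask with `count` consecutive bits set, starting at `first_slot` (1-based)."""
    return ((1 << count) - 1) << (first_slot - 1)

# 8 disks on the LSI card, numbered from slot 1 after disabling the
# onboard SATA ports in the BIOS:
print(hex(internal_mask(1, 8)))   # 0xff  -> internalportcfg="0x00ff"

# The same 8 disks when they started at slot 7 (onboard ports enabled):
print(hex(internal_mask(7, 8)))   # 0x3fc0
```

With the onboard controller disabled, the LSI disks shift down to slots 1-8 and the mask collapses from 0x3fc0 to 0x00ff, which is exactly the change described above.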
  2. Alright, sorry for this late reply. I am quite embarrassed, but the solution became pretty clear when I opened the case to replace the PSU... No need to replace the PSU at all. While reconnecting the PSU to see whether the whole server was going down or just the PSU shutting itself off, I opened the case to prepare for the replacement and realised that there was a dust filter at the bottom of the chassis that I had forgotten about. I clean the dust filters on the chassis 1-2 times per month, but I had TOTALLY forgotten about this one at the bottom... and guess what... it was totally clogged with dust, so it is pretty clear the PSU was shutting itself down due to overheating. After cleaning it and then testing the server several times over a couple of days (with, for example, a scrubbing operation) it is running perfectly well now.
  3. Nope, no UPS. I have a couple of them waiting to be installed though... Hmmm, maybe it would be better to test with a UPS first to see if it actually is the PSU and not something else. In fact, I'll install the UPS tomorrow and start a scrub of my disk volume, which usually triggers the sporadic shutdowns.
  4. Since DSM 6.1.2-15132 I have started to experience random shutdowns of my Xpenology server. My feeling is that this happens when the disks (I have 8 disks in total, 4TB each) are doing a recovery scrub after an upgrade or some other very heavy file operation. It might be my PSU that is starting to give up, and I'll most likely try replacing the PSU this coming weekend. The PSU is a Corsair RM1000 and it has been installed for approx. a year in my barebone Xpenology server (Asus P7F-X with X3440 CPU, LSI 9211 controller card with 8 x 4TB WD Green disks). I cannot see any particular events in the kern.log file and there is nothing that indicates a PSU problem or a CPU heating problem. The CPU seems to run at approx. 40-44 degrees Celsius when the shutdown happens. These are the last lines I see in kern.log. There is no message about powering down, and the interrupt errors shown below can be seen at other times as well without Xpenology shutting down:
     2017-07-30T23:45:39+02:00 CrazyServer kernel: [214024.761531] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-30T23:45:40+02:00 CrazyServer kernel: [214026.041864] sky2 0000:03:00.0: error interrupt status=0x8
     2017-07-30T23:45:44+02:00 CrazyServer kernel: [214029.548165] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-30T23:45:59+02:00 CrazyServer kernel: [214045.313551] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-30T23:46:01+02:00 CrazyServer kernel: [214046.695887] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-30T23:46:01+02:00 CrazyServer kernel: [214047.351521] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-30T23:46:02+02:00 CrazyServer kernel: [214047.559463] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-30T23:46:11+02:00 CrazyServer kernel: [214056.790955] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-30T23:46:12+02:00 CrazyServer kernel: [214058.117275] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-30T23:46:16+02:00 CrazyServer kernel: [214062.056383] sky2 0000:03:00.0: error interrupt status=0x8
     2017-07-30T23:46:17+02:00 CrazyServer kernel: [214063.124915] sky2 0000:03:00.0: error interrupt status=0x8
     2017-07-30T23:46:20+02:00 CrazyServer kernel: [214065.942479] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-30T23:47:33+02:00 CrazyServer kernel: [214139.442902] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-31T00:00:28+02:00 CrazyServer kernel: [214914.051214] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-31T00:10:25+02:00 CrazyServer kernel: [215510.528919] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-31T00:13:13+02:00 CrazyServer kernel: [215678.491358] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-31T00:20:10+02:00 CrazyServer kernel: [216095.276985] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-31T00:20:34+02:00 CrazyServer kernel: [216119.265336] sky2 0000:03:00.0: error interrupt status=0x8
     2017-07-31T00:28:23+02:00 CrazyServer kernel: [216587.629802] sky2 0000:03:00.0: error interrupt status=0x40000008
     2017-07-31T00:44:50+02:00 CrazyServer kernel: [217574.558506] sky2 0000:03:00.0: error interrupt status=0x40000008
     And here is where I powered on the server again:
     2017-07-31T07:20:52+02:00 CrazyServer kernel: [ 0.000000] Linux version 3.10.102 (root@build1) (gcc version 4.9.3 20150311 (prerelease) (crosstool-NG 1.20.0) ) #15152 SMP Thu Jul 13 04:20:59 CST 2017
     My gut feeling is that it happens only on highly disk-intensive operations such as scrubbing, repairing or copying many large files, but anyway, my first step will be to replace the PSU.
  5. Ah, good idea! I didn't think the raid group would still work if I moved the 2 of my 8 drives numbered 13 and 14 away from the LSI 9211 controller and connected them to SATA ports. If it works, it is a much better workaround than having to do a rebuild/repair that takes 16-20 hours after every update. Thanks for the tip. Will try it out.
  6. Alright, I have now tested various settings and the only SATA chip that I can completely disable in the BIOS is the Marvell SATA chip (for 4 drives); the main Intel SATA chip cannot be disabled... So this means the Xpenology loader will always find the first 6 SATA ports and then my LSI 9211 card, i.e. my raid group will always start with disk no 7 and go up to no 14 no matter what I try. Well, I can live with this for now since I know why and when it happens. My only solution would be to try another motherboard where I can actually turn off the onboard SATA controllers.
  7. I started a repair, as I usually do after installing the latest DSM 6.1 patch/update. (See this post where I discuss my problem with the raid always being degraded after an update due to my 8-disk SHR-2 raid starting at no 7 and going up to 14.) This time a 2nd repair started directly when the first one was done. Any idea why this is happening? I don't think I have seen this before, and I tried to search the forum for it but can't find any other posts with this issue. I see this in kern.log:
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50877.963147] md: md2: recovery done.
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157924] md: md2: set sdn5 to auto_remap [0]
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157931] md: md2: set sdm5 to auto_remap [0]
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157934] md: md2: set sdg5 to auto_remap [0]
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157936] md: md2: set sdl5 to auto_remap [0]
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157939] md: md2: set sdk5 to auto_remap [0]
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157941] md: md2: set sdi5 to auto_remap [0]
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157944] md: md2: set sdj5 to auto_remap [0]
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.157946] md: md2: set sdh5 to auto_remap [0]
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.261429] md: md2: current auto_remap = 0
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.261433] md: md2: flushing inflight I/O
     2017-07-18T05:55:02+02:00 CrazyServer kernel: [50878.289637] md: recovery of RAID array md2
  8. Yep, that's my next step to test. In fact I thought I had already disabled the SATA controller in the BIOS setup, and on a reboot I can see that it is indeed disabled, but I found another setting in the BIOS that I will try on the next reboot (as soon as my RAID repair is done). My motherboard, the ASUS P7F-X server motherboard, has 6 SATA ports in total and the SATA chip is an Intel® 3420 with 6 x SATA2 300MB/s ports. I have tried searching the forum for posts that could help me out, but none of them discusses the sataportmap setting in the GRUB file and the internalportcfg setting in /etc.defaults/synoinfo.conf for the case where one disables the onboard SATA ports completely and only uses, for example, an LSI 9211 controller attached to the motherboard. As soon as my RAID repair is done (approx. 60% at the moment) I'll do some new testing with different BIOS, sataportmap and internalportcfg settings.
  9. On some DSM updates my RAID becomes degraded after the system update reboot. It does not happen on all system updates, but with the latest one, DSM 6.1.3-15152 from July 13th, it happened again. I know what the root cause is: my custom settings in /etc.defaults/synoinfo.conf get overwritten with Synology defaults. My setting of internalportcfg="0x3fc0" is reset back to internalportcfg="0xfff" and this causes my 8-drive SHR-2 RAID to become degraded since it loses 2 of the 8 disks. The reason I have internalportcfg="0x3fc0" is that I am not using the 6 SATA ports on my motherboard at all; all 8 of my disks are connected to my LSI 9211 controller. Not a big problem in itself, since I just edit /etc.defaults/synoinfo.conf back to my own setting and then run a time-consuming scrub/repair of my raid volume. This is very time-consuming though, since my RAID volume is approx. 21TB, and having 2 disks degraded is quite risky during the repair. It looks like this is a problem only if one has more than a certain number of disks and/or the attached disks start at a higher port number. In my case I am not using the first 6 SATA ports on the motherboard, so my disk slots in use start at disk number 7 and go up to disk number 14 (8 disks in total). I have another server with only 4 disks and that server does not have these problems on Synology DSM updates. Is there ANY way to have the Synology update NOT overwrite the internalportcfg setting in /etc.defaults/synoinfo.conf with new Synology default values, or can I replace this file BEFORE the Synology restart?
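The manual fix described above (editing the value back after every update) can be sketched as a small script. This is a hypothetical sketch, not a Synology-supported mechanism: the function name `patch_internalportcfg` is my own, the "0x3fc0" value comes from the post, and one would still need to run the script manually (or from a startup task) after an update and then repair the volume as usual.

```python
# Hypothetical sketch: re-apply a custom internalportcfg after a DSM update
# has reset it to the default. Not an official mechanism; use at your own risk.
import re

def patch_internalportcfg(conf_text: str, value: str = "0x3fc0") -> str:
    """Replace whatever internalportcfg is currently set to with the custom value."""
    return re.sub(r'internalportcfg="0x[0-9a-fA-F]+"',
                  f'internalportcfg="{value}"', conf_text)

# Apply to both copies of the config file, as the posts in this thread do.
for path in ("/etc/synoinfo.conf", "/etc.defaults/synoinfo.conf"):
    try:
        with open(path) as f:
            text = f.read()
        with open(path, "w") as f:
            f.write(patch_internalportcfg(text))
    except (FileNotFoundError, PermissionError):
        pass  # not running on DSM, or not running as root
```

When run on a non-DSM machine the loop simply does nothing, since the files do not exist.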
  10. Aaah excellent! Thx for the explanation.
  11. Yesterday I also took the step of adding 3x3TB SATA disks to 3 of the motherboard's internal SATA connectors and changed the internalportcfg setting from binary 0011 1111 1100 0000 = 0x3fc0 (i.e. zeroes in the 6 first positions, since I had no disks connected to the mobo Asus P7F-X SATA ports) to binary 0011 1111 1111 1000 = 0x3ff8 (i.e. zeroes in the 3 first positions only, since I now have 3 SATA disks connected and the other 8 connected to my LSI controller). Rebooted the system and the 3 new disks on the SATA ports were visible as disks in Synology. When I started to create a separate disk group of the 3 new disks I could not choose SHR as the raid type, but some searching here on the forum quickly enlightened me that I needed to comment out / delete support_raid_group = "yes" and then add support_syno_hybrid_raid = "yes" in /etc.defaults/synoinfo.conf and /etc/synoinfo.conf and reboot and voilaaaa! My 2nd disk group is now also up and running in SHR mode with the BTRFS filesystem. Cool!
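The mask change in the post above checks out arithmetically. A minimal sketch, assuming (as the posts in this thread do) that bit n of internalportcfg corresponds to disk slot n+1:

```python
# Sketch of the 0x3fc0 -> 0x3ff8 change: OR-ing in three more slots for the
# disks added to the motherboard SATA ports. Bit 0 = slot 1, bit 1 = slot 2, etc.
lsi_disks  = ((1 << 8) - 1) << 6   # 8 disks in slots 7-14 on the LSI card
mobo_disks = ((1 << 3) - 1) << 3   # 3 new disks on mobo SATA slots 4-6

print(hex(lsi_disks))               # 0x3fc0 (old setting)
print(hex(lsi_disks | mobo_disks))  # 0x3ff8 (new setting)
```

Note that the three new disks land in slots 4-6 rather than 1-3 here, which matches the "zeroes in the 3 first positions" wording in the post.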
  12. When moving from DSM 5.2 up to DSM 6.1 (yes, I'm brave... system working fine now though...) I learned a lot about Xpenology and its boot sequence, settings and drivers, and also about Linux. I spent the whole weekend doing the upgrade, with various problems, but all is working fine now. (See my posts Some config Settings and getting SHR as a RAID choice enabled again.) During my upgrade I made quite a lot of edits to both the grub file and the two files /etc/synoinfo.conf and /etc.defaults/synoinfo.conf. I am a little bit confused though about the different impact /etc/synoinfo.conf and /etc.defaults/synoinfo.conf have, since they (usually?) are identical. What happens if I, for example, change a setting only in /etc.defaults/synoinfo.conf, or only in /etc/synoinfo.conf?
  13. I am using Jun's loader v1.02a for DSM 6.1 on my baremetal setup and it is working nicely indeed. Had some trouble with 2 of my 8 disks being seen as eSata disks until I figured out how to mod the internalportcfg setting in /etc/synoinfo.conf and /etc.defaults/synoinfo.conf (see my thread http://xpenology.com/forum/viewtopic.php?f=2&t=30258 on how I solved it). Working really nicely now! I also took the step of adding 3x3TB SATA disks to 3 of the motherboard's internal SATA connectors and changed the internalportcfg setting from binary 0011 1111 1100 0000 = 0x3fc0 (i.e. zeroes in the 6 first positions, since I had no disks connected to the mobo Asus P7F-X SATA ports) to binary 0011 1111 1111 1000 = 0x3ff8 (i.e. zeroes in the 3 first positions only, since I now have 3 SATA disks connected and the other 8 connected to my LSI controller). Rebooted the system, and when I started to create a separate disk group of the 3 new disks I could not choose SHR as the raid type, but some searching here on the forum quickly enlightened me that I needed to comment out / delete support_raid_group = "yes", then add support_syno_hybrid_raid = "yes" in /etc.defaults/synoinfo.conf and /etc/synoinfo.conf and reboot and voilaaaa! My 2nd disk group is now also up and running in SHR mode with the BTRFS filesystem. Cool!
  14. [sOLVED!] Alright, Saturday and Sunday were spent trying to figure out what had happened when I went from DSM 5.2 to DSM 6.1 (yeah, yeah, I know I made a mistake by testing 6.1 directly even though it is still pure alpha, but hey, I'm brave and I have backups...). I first tried to downgrade to 6.0.2 but still had the same problem: 2 of my 8 disks were shown as eSata disks and the control center was yelling about a degraded raid. I then tried a downgrade to 5.2, but it always went back to "migratable" mode, so even though I rebooted several times with Xpenology 5.2 (which I initially had) I was not able to complete the roll-back to 5.2. (I never tried a FULL RE-INSTALL though, where all settings are wiped but data is kept.) So I decided to take the hard path and just get DSM 6.1 working... I now decided to skip all my settings but keep my data, so I did a full re-install, and that didn't help either... I tried modifying the sataportmap setting in the grub file and also played around with the internalportcfg and usbportcfg settings in /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, but I was not able to get it to work properly, and I realised there was something I was doing wrong, since whatever setting I tried, 2 of my 8 disks (connected to the LSI card; I have no SATA drives connected) were always showing up as externally connected eSata drives. My disk setup is 6 SATA connectors on the mobo with NO SATA drives connected to them. Instead I have the "standard" LSI 9211-8i with 8 disks connected to it via fan-out SATA cables.
     So, Google, google, google and google, and finally, after reading a BUNCH of different posts here at the Xpenology forum as well as other posts/blogs/comments, I realised that I most likely had: 1. missed modifying the esataportcfg setting, 2. used the wrong values, and 3. missed INCREASING the maxdisks setting from 12 to 14 (6 SATA ports with NO disks connected + my 8 disks connected to the LSI card).
     Looking in the dmesg output I could see that my LSI-connected disks were displayed as slots 7 to 14, so I figured out that the internalportcfg value should therefore be binary 0011 1111 1100 0000 = 0x3fc0 (i.e. zeroes in the 6 first positions, since I have no disks connected to the mobo Asus P7F-X SATA ports). I set esataportcfg = 0x0 and kept the current setting usbportcfg = 0x300000. I made the change in both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, rebooted, waited anxiously.... and voilaaaaa! The 2 external eSata disks were now back, available as two internal disks within the raid, but degraded of course. I just did "Repair volume" from the GUI and it started its parity checking; now, 10 hours later, parity checking is 90% done and I am happy and confident it will work as intended from now on...
     So the conclusions are: 1. sataportmap=1, which is the default setting, works fine for me and didn't seem to help/affect my problems. 2. It is VERY important to understand how many SATA ports one has and, if one has extra controller cards (like I have), to figure out the proper value to use in BOTH files for the internalportcfg setting. In my case I had 6 mobo SATA ports (which I do not use) and the LSI controller card with 8 disks, i.e. 14 disks in total (and remember, the default setting in the conf files seems to be maxdisks=12). 3. It is also important to set a proper esataportcfg value (in my case I forced it to zero since I have no eSata drives/ports; I use USB 3.0 ports for backing up data to external USB 3.0 disks). In my case it was this post that pointed me in the right direction (as well as the many other posts here at xpenology.com): Sata and eSata port configuration at the xpenology.eu forum, as well as this one here at xpenology.com.
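The three conclusions above boil down to a consistency check between the port masks and maxdisks. A minimal sketch using the values the post settles on (the check itself is my own framing, not something DSM performs):

```python
# Sanity-check sketch for the three port masks from the post above.
# Internal, eSATA and USB masks should not overlap, and maxdisks should
# cover the highest internal slot in use. Bit n = disk slot n+1.
internalportcfg = 0x3fc0    # slots 7-14: the 8 disks on the LSI 9211-8i
esataportcfg    = 0x0       # no eSATA ports in use
usbportcfg      = 0x300000  # USB slots, well above the disk range
maxdisks        = 14        # raised from the default of 12

assert internalportcfg & esataportcfg == 0   # no slot claimed twice
assert internalportcfg & usbportcfg == 0
assert internalportcfg.bit_length() <= maxdisks  # highest internal slot is 14
print("port masks look consistent")
```

With the default maxdisks=12 the last assertion would fail, since 0x3fc0 reaches up to slot 14; that mismatch is exactly why the two LSI disks were misclassified until maxdisks was raised.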
  15. Nope... all those 8 disks were part of my raid while running DSM 5.2. While on DSM 5.2 everything ran normally with all 8 disks being part of my RAID6 array, and all disks were checked with a smartctl long test before the upgrade, with no errors shown. No, I did not attach them to eSata ports. I did NOT change ANYTHING in my hardware configuration before or after the upgrade. All of these disks are attached to the SAS9211-8i card (physically the card is an "IBM ServeRAID M1015 8-CH SAS-SATA PCI-E - 46M0861"). Anyway, I'll revert back to DSM 5.2 and then do a "proper" upgrade to 6.0 (bootloader version DS3615xs 6.0.2 Jun's Mod V1.01.zip) instead of the 6.1 (bootloader DS3615xs 6.1 Jun's Mod V1.02-alpha.zip) that I installed "accidentally".