CrazyFin

Members

  • Content Count: 16
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About CrazyFin

  • Rank: Newbie

  1. Wohoooo! After almost giving up on not being able to disable the motherboard SATA ports so that the 8 SATA ports on my LSI card would start numbering at 1, I finally found it!! Stupid setup in the BIOS: you have to FIRST set the SATA ports to IDE, and then voilaaaa, a submenu option where the SATA ports can be disabled suddenly shows up! This is NOT visible if you have configured SATA as RAID or AHCI!? See this: After changing my internalportcfg setting from 0x3cf0 to 0x00ff (see the bitmask sketch after this post list) everything works fine now and all 8 disks connected to my add-on LSI c
  2. Alright, sorry for the late reply. I am quite embarrassed, but the solution became pretty clear when I started to open up the case to replace the PSU... No need to replace the PSU... While I was reconnecting the PSU to see whether the server was shutting down completely or if it was just the PSU cutting out, I also opened up the case to prepare for a PSU replacement and realised that there was a dust filter at the bottom of the chassis that I had forgotten about. I always clean the dust filters on the chassis 1-2 times per month but I had TOTALLY forgotten about
  3. Nope, no UPS. I have a couple of them waiting to be installed though... Hmmm, maybe it would be better to test with a UPS first to see if it actually is the PSU and not something else... In fact, I'll install the UPS tomorrow and start a scrub of my disk volume, which usually triggers the sporadic shutdowns.
  4. Since DSM 6.1.2-15132 I have started to experience random shutdowns of my Xpenology server. My feeling is that this happens when the disks (I have 8 disks of 4TB each in total) are doing a recovery scrub after an upgrade, or during any other very heavy file operation. It might be my PSU that is starting to give up, and I'll most likely try replacing it this coming weekend. The PSU is a Corsair RM1000 and it has been installed for approximately a year in my barebone Xpenology server (Asus P7F-X with X3440 CPU, LSI 9211 controller card with 8 x 4TB WD Green disks). I cannot see a
  5. Ah, good idea! I didn't think the RAID group would still work if I moved the 2 of the 8 drives numbered 13 and 14 away from the LSI 9211 controller and connected them to SATA ports. If it works, it is a much better workaround than having to do a rebuild/repair that takes 16-20 hours after every update. Thanks for the tip. Will try it out.
  6. Alright, I have now tested various settings, and the only SATA chip that I can completely disable in the BIOS is the Marvell SATA chip (for 4 drives); the main Intel SATA chip cannot be disabled... So the Xpenology loader will always find the first 6 SATA ports and then my LSI 9211 card, i.e. my RAID group will always start with disk no. 7 and go up to no. 14 no matter what I try. Well, I can live with this for now since I know why and when it happens. My only solution would be to try another motherboard where I can actually turn off the onboard SATA control
  7. I started a repair, as I usually do after installing the latest DSM 6.1 patch/update. (See this post where I discuss my problem with the RAID always being degraded after an update because my 8-disk SHR-2 RAID starts at no. 7 and goes up to 14.) This time a second repair started directly after the first one was done. Any idea why this is happening? I don't think I have seen this before, and I tried to search the forum for it but can't find any other posts with this issue. I see this in the kern.log: 2017-07-18T05:55:02+02:00 CrazyServer kernel: [50877.963147] m
  8. Yepp, that's my next step to test. In fact I thought I had already disabled the SATA controller in the BIOS setup, and on a reboot I can see that it is indeed disabled, but I found another setting in the BIOS that I will try on the next reboot (as soon as my RAID repair is done). My motherboard, the ASUS P7F-X server board, has 6 SATA ports in total and the SATA chip is an Intel® 3420 with 6 x SATA2 300 MB/s ports. I have tried searching the forum for posts that could help me out, but none of them discusses the sataportmap setting in the GRUB file (there is a small SataPortMap sketch after this post list) or the internalport
  9. On some DSM updates my RAID becomes degraded after the system update reboot. It does not happen on every update, but with the latest one, DSM 6.1.3-15152 from July 13th, it happened again. I know the root cause of the problem: my custom settings in /etc.defaults/synoinfo.conf get overwritten with Synology defaults. My setting of internalportcfg="0x3fc0" is reset back to internalportcfg="0xfff" (see the post-update check sketch after this post list), and this causes my 8-drive SHR-2 RAID to become degraded since it loses 2 of the 8 disks. The reason why I have internalportcf
  10. Yesterday I also went ahead and added 3 x 3TB SATA disks to 3 of the motherboard's internal SATA connectors and changed the internalportcfg setting from binary 0011 1111 1100 0000 = 0x3fc0 (i.e. zeroes in the first 6 positions, since I do not have any disks connected to the mobo Asus P7F-X SATA ports) to binary 0011 1111 1111 1000 = 0x3ff8 (i.e. zeroes in the first 3 positions only, since I now have 3 SATA disks connected and the other 8 are connected to my LSI controller). (These conversions are double-checked in a short sketch after this post list.) I rebooted the system and the 3 new disks on the SATA ports were visible as disks in Synology. When I started to create a
  11. When moving from DSM 5.2 up to DSM 6.1 (yes, I'm brave... the system is working fine now though...) I learned a lot about Xpenology and its boot sequence, settings and drivers, and also about Linux. I spent the whole weekend on the upgrade and ran into various problems, but everything is working fine now. (See my posts Some config Settings and getting SHR as a RAID choice enabled again.) During my upgrade I made quite a few edits to both the GRUB file and the two files /etc/synoinfo.conf and /etc.defaults/synoinfo.conf. I am a little bit confused, though, about the different impact of /etc/synoinfo.conf and /etc.defaults/syn
  12. I am using Jun's loader v1.02a for DSM 6.1 on my baremetal setup and it is working nicely indeed. I had some trouble with 2 of my 8 disks being seen as eSATA disks until I figured out that I needed to modify the internalportcfg setting in /etc/synoinfo.conf and /etc.defaults/synoinfo.conf. (See my thread http://xpenology.com/forum/viewtopic.php?f=2&t=30258 on how I solved it.) Working really nicely now! I also went ahead and added 3 x 3TB SATA disks to 3 of the motherboard's internal SATA connectors and changed the internalportcfg setting from binary 0011 1111 1100 0000 = 0x3fc0 (i.e. zeroes in the first 6 po
  13. [SOLVED!] Alright, Saturday and Sunday were spent trying to figure out what had happened when I went from DSM 5.2 to DSM 6.1 (yeah, yeah, I know I made a mistake by testing 6.1 directly even though it is still pure alpha, but hey, I'm brave and I have backups...). I first tried to downgrade to 6.0.2, but still the same problem: 2 of my 8 disks were shown as eSATA disks and the control center was complaining about a degraded RAID. I then tried downgrading to 5.2, but it always went back to "migratable" mode, so even though I rebooted several times with Xpenology 5.2 (which I initially had)
  14. Nope... all of those 8 disks were part of my RAID while running DSM 5.2. On DSM 5.2 everything was running normally, with all 8 disks being part of my RAID6 array, and all disks were checked with a smartctl long test before the upgrade with no errors shown. No, I did not attach them to eSATA ports. I did NOT change ANYTHING in my hardware configuration before or after the upgrade. All of these disks are attached to the SAS9211-8i card (the card is physically an "IBM ServeRAID M1015 8-CH SAS-SATA PCI-E - 46M0861"). Anyway, I'll revert back to DSM 5.2 and then do a "proper upgrade to 6.0 (bootload
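
The internalportcfg values quoted in posts 1, 9 and 12 follow a bit-per-slot reading: each set bit in the mask marks one internal disk slot, with bit 0 as slot 1 (my interpretation of the binary expansions given in post 10, not anything stated by Synology). A minimal sketch of that decoding; the decode_mask helper is illustrative only and not part of DSM:

    # Illustrative helper (not part of DSM): decode an internalportcfg bitmask
    # into the 1-based internal disk slots it enables, assuming bit 0 = slot 1.
    def decode_mask(mask):
        return [bit + 1 for bit in range(mask.bit_length()) if mask & (1 << bit)]

    # Value from posts 9 and 12: the first 6 slots (onboard ports with no disks)
    # are masked out, so the 8 LSI disks sit at slots 7-14.
    print(decode_mask(0x3FC0))   # -> [7, 8, 9, 10, 11, 12, 13, 14]

    # Value from post 1, after the onboard SATA ports were disabled in the BIOS:
    # the same 8 LSI disks now occupy slots 1-8.
    print(decode_mask(0x00FF))   # -> [1, 2, 3, 4, 5, 6, 7, 8]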
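
Post 8 mentions the sataportmap setting in the loader's GRUB file. As far as I understand the convention (an assumption on my part, since the post itself is cut off), each character of the value gives the port count of one SATA controller, in the order the kernel enumerates them. A small sketch under that assumption:

    # Assumed SataPortMap convention: one digit per SATA controller, giving that
    # controller's port count in enumeration order. Illustration only.
    def ports_per_controller(sata_port_map):
        return [int(ch) for ch in sata_port_map]

    # Hypothetical value for the hardware described in these posts (6 onboard
    # Intel 3420 ports followed by the 8-port LSI 9211); not the poster's actual
    # GRUB line.
    print(ports_per_controller("68"))   # -> [6, 8]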
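
Post 9 describes DSM updates overwriting /etc.defaults/synoinfo.conf and resetting internalportcfg from "0x3fc0" back to "0xfff". A hedged sketch of a check one could run after an update; the read_internalportcfg helper and the expected value are illustrative, not a tool DSM or the poster provides:

    # Illustrative post-update check (not a DSM tool): warn if an update has
    # overwritten the customised internalportcfg value described in post 9.
    import re

    EXPECTED = "0x3fc0"                      # value the poster relies on
    CONF = "/etc.defaults/synoinfo.conf"     # file the updates overwrite

    def read_internalportcfg(path=CONF):
        with open(path) as fh:
            for line in fh:
                match = re.match(r'\s*internalportcfg="([^"]*)"', line)
                if match:
                    return match.group(1)
        return None

    current = read_internalportcfg()
    if current != EXPECTED:
        print(f"internalportcfg is {current!r}, expected {EXPECTED!r} - "
              f"the RAID may come up degraded until the value is restored "
              f"and the box is rebooted.")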
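
Post 10 converts the two port masks by hand; the arithmetic checks out, and the same conversion in plain Python may help anyone adapting the mask to a different port layout:

    # Double-checking the hand-converted masks from post 10.
    old_mask = 0b0011_1111_1100_0000   # onboard ports 1-6 masked out, 8 LSI slots
    new_mask = 0b0011_1111_1111_1000   # 3 onboard SATA disks added, 11 slots enabled

    print(hex(old_mask), bin(old_mask).count("1"))   # 0x3fc0 8
    print(hex(new_mask), bin(new_mask).count("1"))   # 0x3ff8 11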