SteinerKD

  1. Access FS before boot.

    Would this (as I hoped) be a viable preemptive solution to the broken-RAIDs-on-upgrade problem?
  2. Access FS before boot.

    Just a thought, not sure how possible/viable it is. Would it be possible for someone with the skills to create a USB stick you can switch in that gives you access to the filesystem (mainly /etc/synoinfo.conf) without booting the system? The reason I wonder is that I use an increased maxdrive setting on my system, and major updates reset it, crashing my RAIDs until I can get in and re-edit the setting. This hasn't been a problem yet, as the RAID groups reassemble themselves once the config is edited again, but it does require a parity check and scrubbing, which takes time. If you could switch out the USB stick whenever you do an upgrade you know will reset the config, and edit it before the updated system boots, you could probably avoid this. Not sure how viable this idea is, though, which is why I put it out here for discussion. Another extended thought: would it be possible to create a tool that takes any .pat file, checks that it's OK (MD5), and then pre-patches it with whatever maxdrives value you prefer (maybe even the esata/usb/drive values; surely the conf files must exist in there somewhere)?
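    The MD5-verification half of that tool idea is straightforward with standard tools. A minimal sketch (the filename and expected hash below are placeholders, not real values):

    ```shell
    #!/bin/sh
    # Verify a downloaded .pat file against its published MD5 before
    # doing anything else with it. PAT_FILE and EXPECTED_MD5 are
    # placeholders - substitute the real file and the hash from the
    # download page.
    PAT_FILE="DSM_example.pat"
    EXPECTED_MD5="0123456789abcdef0123456789abcdef"

    ACTUAL_MD5=$(md5sum "$PAT_FILE" | awk '{print $1}')
    if [ "$ACTUAL_MD5" = "$EXPECTED_MD5" ]; then
        echo "MD5 OK"
    else
        echo "MD5 mismatch: got $ACTUAL_MD5" >&2
        exit 1
    fi
    ```

    The pre-patching half would be more involved, since the .pat contents would have to be unpacked, edited, and repacked so the installer still accepts them.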
  3. I hope this is the right place to ask. I've installed ESXi on my XPE NAS and it seems to be running fine, but when I tried to migrate a VM to it I got a CPU-incompatible error, and when I check my virtual ESXi machine in VMM it says that CPU Compatibility Mode is disabled. I can't find any way to turn it on, however. Anyone know how to enable it? Is it related to the hardware virtualization setting in the BIOS? (I remember seeing a complaint about it when I installed ESXi.)
  4. DSM 6.1.4 - 15217

    Updated bare metal, Jun's 1.02b loader. It came back with my largest RAID 5 volume crashed (4*4TB). (BIG scare!) Luckily, before doing anything drastic, I realized that maxdrive had been reset from my 18 back to 12 (leaving half of the RAID 5 outside the limit); a quick config edit and reboot, and the RAID volume was back in working order. Other than that, everything seems to work just fine.
  5. NAS present but not working, help

    I've now also realized I installed using the "wrong" bootloader (1.02a2). Is there any way I can "start over" with the newest bootloader (1.02b) without losing ANY of the files on the two RAIDs on the current NAS? (Losing users and such is OK; I can set that up again quickly enough as long as all the files and folders remain.) @IG-88 You seem to be the guru here, so I hope you can give me some of your usual good advice and bear with my noobness.
  6. NAS present but not working, help

    Well, hopefully I won't have to experience this again, so it won't be a problem, but next time I will avoid rebooting if it seems to be working. What worries me, though, is how completely the computer locked up: refusing to boot, losing all USB, failing to initialize, etc. Something serious must have happened, but was it the hardware or the software?
  7. NAS present but not working, help

    Just changed routers, from an old Netgear WNDR3700 to a Ubiquiti EdgeRouter Lite. Not sure what could have spiked so badly that my computer went into total lockdown. I still think the HBA card might have been the culprit.
  8. NAS present but not working, help

    Not sure what happened; for a while there my computer wouldn't boot at all. (I first got the "the page you are looking for" error, but all drives were still available; when I rebooted, I ended up with an unbootable machine.) It got stuck on a 99 code, had no USB, and I couldn't enter the BIOS. I had to rip everything out and do dozens of CMOS resets, etc. In the end I had to connect an old drive to an internal SATA port, and that got me into the BIOS so I could configure a bootable system. Naturally that still didn't help, as by then the system partitions were ruined.

    As you said, I booted with the old drive, repaired with it, and then removed it (I think I'd better keep it around in case something like this happens again). Things seem to be working now, and most of the settings and the DSM version are the same, so probably no big harm done, but it was a REAL scare! Not sure if the LSI 9207 was the culprit or if it was something else? (It happened while I was installing and configuring a new router.)
  9. NAS present but not working, help

    I've now connected all the drives but am booting from the old NAS install. All RAID volumes show up, and it looks like the files are all there. In the HDD/SSD section, all drives (except the old boot one) say "System partition failed, normal"; is this a good or a bad sign? Could a system partition repair and then a reboot without the old drive work?
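    If you have SSH access, the array state can also be checked directly. A minimal sketch, assuming the common DSM layout where /dev/md0 is the RAID1 system partition mirrored across the data drives (an assumption; your layout may differ):

    ```shell
    #!/bin/sh
    # Show the state of all md arrays; on DSM the system partition is
    # typically /dev/md0 (assumption). In each status line, "[UU]"
    # means all members are up; an underscore such as "[U_]" marks a
    # failed or missing member, which matches a "failed" system
    # partition in the GUI.
    MDSTAT=/proc/mdstat
    cat "$MDSTAT"
    if grep -q '\[U*_' "$MDSTAT"; then
        echo "at least one array has a failed/missing member"
    fi
    ```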
  10. Well, after a long while testing and running virtual XPEnology, I took the plunge and sacrificed a computer for a bare-metal install. All was well; I built some RAIDs, moved all my files over (14TB of data), and it was working well. Today, after changing my router, I got "Sorry, the page you are looking for is not found," and neither Assistant nor "findmysynology" finds any NAS on my LAN (it does show up as a connected device in the router and in "Networks" in Windows, and I can SSH to it, though). Thinking it was a firewall problem or similar, I switched back to my old router, but no go. Another thing I tried was connecting a single HD from one of the test NASes, and that one boots fine and becomes accessible, so the hardware side of things seems to be working. Any ideas or advice? Is this fixable? (Or have I lost ALL my data?)
  11. DSM 6.x.x Loader

    Haha, if you check the comments under that video, he and I are having a few discussions there.
  12. Looking for SATA controller card for baremetal

    Thank you for the suggestion, but the case I'm building the NAS in already has nine 3.5" bays, and I'm also getting a 3x5.25" to 5x3.5" IcyDock HD cage to add to it (so there's room for two HBA cards in the end, for some expandability). I will have 6*4TB and 2*3TB disks to throw at it for now.
  13. Looking for SATA controller card for baremetal

    These were old PCIe 1.0 cards, one Fujitsu-branded, the other OEM. I think one of them might use an LSI chip, but neither had more than 4 internal ports, and I want a card with 8 internal ports. If I get them I might give them away for free here, I guess, but for myself I will go with the mentioned LSI card.
  14. Looking for SATA controller card for baremetal

    I'm picking up an LSI 9207-8i today (PCIe 3.0 x8, SATA III/SAS 2, 8 internal ports). I was offered a few SAS HBAs by a tech friend, but they were all SATA II only (one 4i/4e and one 8i).
  15. I've gone through a number of upgrades with no issues from the changed config as long as I don't go over 26 drives; you just have to remember to edit the copy in both /etc and /etc.defaults (or am I wrong about that? It seems to have worked for me). I bow to your greater expertise here, but I still think that just editing the config in /etc to see if it makes any difference could be worth a try (though by now the damage is likely already done). My guess is he added an 8-port HBA to a motherboard with 6 or more native SATA ports, which would immediately take the port count above 12.
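    That dual-edit step can be scripted so it's a single command after each upgrade. A minimal sketch, assuming root SSH access and 18 as the desired value (adjust to taste; edit system configs at your own risk):

    ```shell
    #!/bin/sh
    # Set maxdisks in both copies of synoinfo.conf. DSM restores
    # /etc/synoinfo.conf from the copy in /etc.defaults on upgrade,
    # which is why both files need the edit (per the discussion above).
    WANTED=18
    for f in /etc/synoinfo.conf /etc.defaults/synoinfo.conf; do
        sed -i "s/^maxdisks=.*/maxdisks=\"$WANTED\"/" "$f"
    done
    # Show the result so you can confirm before rebooting.
    grep '^maxdisks=' /etc/synoinfo.conf /etc.defaults/synoinfo.conf
    ```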