XPEnology Community

Everything posted by SteinerKD

  1. It's a barebone Intel system. As I said above, I DID do a reinstall on a separate drive; when I connected the drives for one of my volumes the discs were detected, but DSM said there were no volumes in the system, and when I rebooted I lost access to that install as well and haven't been able to regain it since.
  2. It was a perfectly running, rock-solid system until the update somehow killed its network connection (and maybe the drives as well; I can't tell yet). I think the USB stick is just fine, since at one point I actually managed to create a new install on a separate disk, which promptly died when I tried reconnecting the other drives.
  3. Update: the first hurdle was a no-input-device error code. I attached a keyboard and got past that, then got stuck at an A2 error code (SATA init). After powering down and a few restarts I finally got back to the normal XPEnology loader screen on the monitor. From my computer, however, there is no connection to the NAS whatsoever: no Assistant, no find.synology.com, no direct IP, no domain name access, no SSH. I disconnected the drives and inserted another HD; now the NAS is found and I could do an upgrade/repair install on that drive. I then connected the two disks from one volume of my pre-update NAS; the new system detects the disks but says there is no volume on them. Restarting with those two volume disks connected again gives an unreachable system, with no access in any way. I'm getting seriously nervous now, as I have four more disks to hook up for my big volume, 12 TB of data (much of it an iSCSI drive). Any thoughts or advice would be welcome at this point.
  4. So, after checking reports on DSM 6.1.4-15217 Update 5 I decided to run the update on my NAS. It got to the reloading screen, and that was the end of it. After a while I reloaded the web page and got a connection error. http://find.synology.com/ and Synology Assistant no longer detect any NAS on my LAN, nor does loading the IP directly work. The machine is still running, but I need to reboot it to get a picture, as I didn't have a monitor connected when this happened (rebooting is needed to activate the screen). The question is: do I dare reboot it if it's in the middle of some kind of update process? How long should I wait before forcing a hardware reboot?
  5. Would this (as I hoped) be a viable preemptive solution to the problem of RAIDs breaking on upgrade?
  6. Just a thought; I'm not sure how possible or viable it is. Would it be possible for someone with the skills to create a USB stick you can swap in that gives you access to the filesystem (mainly /etc/synoinfo.conf) without booting the system? The reason I wonder is that I use an increased maxdrive setting on my system, and major updates reset it, crashing my RAIDs until I can get in and re-edit the setting. This hasn't been a real problem yet, as the RAID groups reassemble themselves once the config is edited again, but it does require a parity check and scrubbing, which takes time. If you could swap in such a USB stick whenever you do an upgrade you know will reset the config, and edit it before the updated system boots, you could probably avoid all this. I'm not sure how viable the idea is, though, which is why I put it out here for discussion. A further thought: would it be possible to create a tool that takes any .pat file, checks that it's OK (MD5), and then pre-patches it for whatever max-drives value you prefer (maybe even the esata/usb/drive values; surely the conf files must exist in there somewhere)?
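The two pieces of that tool idea can be sketched in a few lines. This is only a sketch under assumptions: the drive limit lives in a `maxdisks="N"` key (as in DSM 6.x's /etc/synoinfo.conf), and the actual unpacking/repacking of a .pat archive is left out entirely.

```python
import hashlib
import re

def patch_maxdisks(conf_text: str, new_max: int) -> str:
    """Rewrite the maxdisks value in synoinfo.conf-style text, leaving other keys alone."""
    return re.sub(r'maxdisks="\d+"', f'maxdisks="{new_max}"', conf_text)

def md5_of(path: str) -> str:
    """MD5 of a file, for checking a downloaded .pat against a published checksum."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

sample = 'internalportcfg="0xfff"\nmaxdisks="12"\n'
print(patch_maxdisks(sample, 18))  # maxdisks becomes "18"; internalportcfg untouched
```

The regex-on-text approach is deliberate: synoinfo.conf is a flat shell-style key="value" file, so there is no need for a real parser.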
  7. I hope this is the right place to ask. I've installed ESXi on my XPE NAS and it seems to be running fine, but when I tried to migrate a VM to it I got a CPU-incompatible error, and when I check my virtual ESXi machine in VMM it says that CPU Compatibility Mode is disabled. I can't find any way to turn it on, however. Does anyone know how? Is it related to the hardware virtualization setting in the BIOS? (I remember seeing a complaint about it when I installed ESXi.)
  8. Updated bare metal, Jun's 1.02b loader. It came back with my largest RAID 5 volume crashed (4x4 TB). (BIG scare.) Luckily, before doing anything drastic I realized that the maxdrive setting had been reset from my 18 back to 12 (leaving half of the RAID 5 outside the limit); a quick config edit and reboot and the RAID volume was back in working order. Other than that, everything seems to work just fine.
  9. I've now also realized I installed using the "wrong" bootloader (1.02a2). Is there any way I can start over with the newest bootloader (1.02b) without losing ANY of the files on the two RAIDs of the current NAS? (Losing users and such is OK; I can set that up again quickly enough as long as all the files and folders remain.) @IG-88 You seem to be the guru here, so I hope you can give me some of your usual good advice and bear with my noobness.
  10. Well, hopefully I won't have to experience this again, so it won't be a problem, but next time I will avoid rebooting if it seems to be working. What worries me, though, is how completely the computer locked up: refusing to boot, losing all USB, failing to initialize, etc. Something serious must have happened, but was it the hardware or the software?
  11. I just changed routers, from an old Netgear WNDR3700 to a Ubiquiti EdgeRouter Lite. I'm not sure what could have spiked so badly that my computer went into total lockdown. I still think the HBA card might have been the culprit.
  12. Not sure what happened. For a while my computer wouldn't boot at all (I first got the "the page you are looking for" error, but all drives were still available; when I rebooted, I ended up with an unbootable machine), got stuck with a 99 code, had no USB, and couldn't enter the BIOS. I had to rip everything out and do dozens of CMOS resets, etc. In the end I connected an old drive to an internal SATA port, and that got me into the BIOS so I could configure a bootable system. Naturally that still didn't help, as by then the system partitions were ruined. As you said, I booted with the old drive, repaired from it, and then removed it (I think I'd better keep it around in case something like this happens again). Things seem to be working now, and most of the settings and the DSM version are the same, so probably no big harm done, but it was a REAL scare! Not sure if the LSI 9207 was the culprit or if it was something else (it happened while I was installing and configuring a new router).
  13. I've now connected all the drives but am booting from the old NAS install. All RAID volumes show up, and it looks like the files are all there. In the HDD/SSD section, every drive (except the old boot one) says "System Partition Failed, Normal". Is this a good or a bad sign? Could a system partition repair and then a reboot without the old drive work?
  14. Well, after a long while testing and running virtual XPEnology, I took the plunge and sacrificed a computer for a bare-metal install. All was well: I built some RAIDs, moved all my files over (14 TB of data), and it was working fine. Today, after changing my router, I got "Sorry, the page you are looking for is not found", and neither Assistant nor find.synology.com finds any NAS on my LAN (it does show up as a connected device in the router and in "Network" in Windows, and I can SSH to it, though). Thinking it was a firewall problem or similar, I switched back to my old router, but no go. Another thing I tried was connecting a single HD from one of the test NASes; that one boots fine and becomes accessible, so the hardware side of things seems to be working. Any ideas or advice? Is this fixable? (Or have I lost ALL my data?)
  15. (In the thread "DSM 6.1.x Loader") Haha, if you check the comments under that video, he and I have been having a few discussions.
  16. Thank you for the suggestion, but the case I'm building the NAS in already has nine 3.5" bays, and I'm also getting a 3x5.25" to 5x3.5" IcyDock HD cage to add to it (so there's room to fit two HBA cards eventually for some expandability). I'll have 6x4 TB and 2x3 TB disks to throw at it for now.
  17. These were old PCIe 1.0 cards, one Fujitsu-branded, the other OEM. I think one of them might use an LSI chip, but neither had more than 4 internal ports, and I want a card with 8 internal ports. If I get them I might give them away for free here, I guess, but for myself I will go with the mentioned LSI card.
  18. I'm picking up an LSI 9207-8i today (PCIe 3.0, 8 SATA III/SAS 2 ports). I was offered a few SAS HBAs by a tech friend, but they were all SATA II only (one 4i/4e and one 8i).
  19. I've gone through a number of upgrades with no issues from the changed config, as long as I don't go over 26 drives; I just have to remember to edit both the copy in /etc and the one in /etc.defaults (or am I wrong about that? It seems to have worked for me). I bow to your greater expertise here, but I still think that just editing the config in /etc to see if it makes any difference could be worth a try (though by now the damage is likely already done). My guess is he added an 8-port HBA to a motherboard with 6 or more native SATA ports, which immediately takes the port count above 12.
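The both-copies point can be sketched as a small helper. This is a hypothetical sketch, not DSM tooling: it assumes the DSM 6.x layout where the same `maxdisks="N"` key appears in both /etc/synoinfo.conf and /etc.defaults/synoinfo.conf, and that you run it as root and reboot afterwards.

```python
import re
from pathlib import Path

# DSM keeps two copies of synoinfo.conf; an update can restore /etc from
# /etc.defaults, so the raised limit should be written to both.
SYNOINFO_PATHS = ("/etc/synoinfo.conf", "/etc.defaults/synoinfo.conf")

def set_maxdisks(paths, new_max: int) -> None:
    """Rewrite the maxdisks value in every given synoinfo.conf-style file."""
    for p in map(Path, paths):
        text = p.read_text()
        p.write_text(re.sub(r'maxdisks="\d+"', f'maxdisks="{new_max}"', text))

# Usage on the box (as root), then reboot:
# set_maxdisks(SYNOINFO_PATHS, 18)
```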
  20. What IG-88 said: driver compatibility beats top speed, especially if you're on a budget. Anyway, my original numbers were way off (SORRY), as a single PCIe 2.0 lane is capable of 5 Gbit/s, or about 500 MB/s. A single SATA III HD works at 100-150 MB/s or so, so this card should not bottleneck you in any significant way. And I agree with you: an up-to-date hardware and driver support list would be super nice.
  21. The 88SE9235 supports slightly better bandwidth: https://www.marvell.com/storage/system-solutions/assets/Marvell-88SE92xx-002-product-brief.pdf Basically, as both are PCIe 2.0 cards, the 9215 version supports 2.5 Gbit/s transfer between the card and the computer and the 9235 version 5 Gbit/s, so either will be a bottleneck if you use 4 SATA III disks (4x6 Gbit/s = 24 Gbit/s theoretical).
  22. Yes, it's listed in the compatibility list at http://xpenology.me/compatible/ (look for the chipset number 88SE92xx on page 2 of "SATA / SAS / SCSI Adapters").
  23. While the real DS3615 might only support 12 drives, XPEnology supports 26 drives with a simple config edit. You might try raising the drive limit and seeing if that brings anything back. Here's one guide; there are plenty more if you Google. Hope it helps!
  24. You can grab the LSI 9207-8i HBA cheap off eBay: 8 SATA/SAS 6 Gbit/s ports on a PCIe 3.0 x8 card. (Not to be confused with the 9211-8i, which is a RAID card that needs to be flashed to IT mode, and is PCIe 2.0.)
  25. I'm not sure I understand what you're saying here. Why did you create datastores on each individual drive? The logical route would be to add the disks, or pass them through, to the XPEnology VM, create the RAID 5 volume, share it via iSCSI (or NFS if you prefer), and use that shared volume to back a datastore.