XPEnology Community

SteinerKD

Everything posted by SteinerKD

  1. Ok, this has me beaten for now. I created a new Ubuntu 20 machine just for the task, got the extra.lzma in, copied it to an empty folder and unpacked it as per the instructions. I go into the new etc dir but only see jun.patch and rc.modules, not synoinfo_override.conf. Opening jun.patch, there is no "maxdisk" or "internalportcfg" anywhere in the file (searching for it or stepping through manually). The difference in my case is that it's an extra.lzma from 1.03b/DS3617xs; does this guide not work for that one?
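     For reference, this is roughly what I did on the Ubuntu box (a minimal sketch, assuming extra.lzma is an lzma-compressed cpio archive as the guide describes; the paths are just examples from my setup):
        # unpack extra.lzma into an empty working folder (paths are examples)
        mkdir ~/extra_work && cd ~/extra_work
        lzma -dc /path/to/extra.lzma | cpio -idmv    # extract the cpio archive inside
        ls etc/                                      # jun.patch and rc.modules show up here
        # repack after editing (a sketch only -- I have not verified that these
        # compression settings match what the loader expects)
        find . | cpio -o -H newc | xz --format=lzma -9 -c > /path/to/extra.lzma.new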
  2. This sounds interesting. Is there a good guide or explanation of how/where to set extra_args_3617? I will have the 6 motherboard SATA ports and an 8-port LSI HBA (might add a second) and would prefer to let the LSI ports take priority and use the motherboard ones last (16+6=22, so still under the 24-disk threshold). For starters, though, I would like to put the 8 LSI ports first, then the 6 Intel ones, along the lines of the sketch below. (I also saw the guide for setting these numbers as the default, but I will have to read that a few more times to figure it out; I've previously edited them in the running machine, but that resets with updates.)
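     To make the question concrete, here is the kind of line I imagine in the loader's grub.cfg (only a sketch: the SataPortMap / DiskIdxMap / SasIdxMap names come from Jun's loader, but the values for my particular controller order are my own guess and exactly what I'd like confirmed):
        # grub.cfg sketch, NOT a tested config -- values are my assumption for
        # "8 LSI ports first, then the 6 Intel ones":
        #   SasIdxMap=0    -> disks on the LSI HBA start counting from slot 1
        #   SataPortMap=6  -> the Intel controller exposes 6 ports
        #   DiskIdxMap=08  -> Intel SATA disks start at slot 9 (hex 08 = disk index 8)
        set extra_args_3617='SasIdxMap=0 SataPortMap=6 DiskIdxMap=08'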
  3. Never mind, you just answered what I needed to hear in another post, @flyride. I will go with 1.03b/DS3617xs, which decides the UEFI vs BIOS question as well.
  4. I think you just answered all I wanted/needed to know with one short message there. 1.03b/DS3617xs it is!
  5. Thank you kindly for answering. I guess I might be overthinking things a bit. Some of the information just makes your head spin (I've been reading various discussions like the one mentioned), and although I *think* I have an OK grasp on it, I just want to make the best choice I can before I do the build. I guess the main concern is the kernel choice: the 3617 and 918+ should represent the most modern hardware, with the 918+ and its 4.4 kernel being the very latest emulated gear. I think I read something about an issue with SMART data on the 3617 (but maybe that was something older that has already been resolved)?
  6. With a Haswell CPU (i5-4670K, which I might upgrade to an i7 later), an H97 chipset (later Z97) and an LSI 9207 HBA, would there be any issues doing my build with 1.04b/918+ over 1.03b/3617? Would I actually gain anything from the 4.4 kernel, or just buy myself potential issues? Note that I will not be doing any video encoding; I just want the most up-to-date DSM and the best performance for the hardware.
  7. Did I post this in the wrong place? What would be the most appropriate section to post in to get the needed feedback?
  8. I host my NAS in a desktop tower case, so I'm not restricted for space. I used one of these: https://gelidsolutions.com/thermal-solutions/accessories-pci-slot-fan-holder/ Unfortunately I don't think that's an option for you, but since the 9211 is a PCIe gen 2 card, maybe it runs cooler? Depending on how much you load it, it might not get that hot either, I guess.
  9. One thing to keep in mind is that these are industrial cards meant for a server environment; they run quite hot and need airflow over them. I added a fan, attached to the rear PCI brackets, that blows directly at the card.
  10. I had some issues with my old mainboard (it lost all USB and didn't detect the card every few boots), but that was a hardware problem. As for XPEnology, it worked flawlessly with nothing needing to be done from day 1. The difference between the 9207 and the 9211 is that the 9207 supports PCIe 3.0 instead of 2.0, so it can handle higher throughput.
  11. I've used an LSI 9207-8i for a few years and I'm quite happy with it. My motherboard didn't like it at first, but I don't think XPE has ever had any issues with it.
  12. About to do a build and it immediately went a bit wrong. I thought I had bought a Z97-Pro motherboard, but it turns out it was an H97-Pro. I still have the USB and SATA ports I was planning to use, but instead of 2*8x + 1*4x PCIe 2 I will now have 1*16x PCIe 3 + 3*1x PCIe 2, which spoils my later upgrade plans a bit (I have one LSI 9207-8i SAS/SATA HBA and was planning on maybe adding a second, but that's out the window now). Anyway, I have some choices to make, but first the hardware:
      Asus H97-Pro motherboard with Intel I218-V gigabit NIC
      Intel i5-4670K CPU
      LSI SAS/SATA 9207-8i HBA
      4* Seagate IronWolf 4TB HDDs
      3* Seagate IronWolf 12TB HDDs
      2* Seagate 4TB HDDs
      1* 128GB SSD
      (Each set of the same type will be a separate volume: two RAID 5 and one RAID 1.)
      I think everything should be compatible without much hassle, right? In the BIOS, are there any particular settings I should make extra sure are correct? CSM on or off? Does UEFI matter, given that some loaders support it while others require legacy mode? (I want to be able to run the latest DSMs, so 1.03b and 1.04b are the main options, and 1.03b is legacy only, I think.) What about the kernel version: the options are 3.10 or 4.4; any benefits or disadvantages in going for the 4.4 one (limiting me to 918+ and 1.04b)? I won't have more than 4 cores/4 threads; does a choice between the DS3615xs and DS3617xs make any difference (I think I read something about SMART data etc.) based on the listed hardware? Naturally I want to get the most out of what I have. The NAS will be connected to a UPS via USB; there should be no issues there I think, but will it also allow shutting down the ESXi host if the NAS is the UPS controller? The PSU is 550W and the UPS is 600W/1000VA, which I think (at least I hope) will be enough to shut down the NAS (and possibly the ESXi host) safely. For the SSD I was thinking of using it as a read-only cache for the datastore volume; that is doable, right? The NAS will be used strictly for storage and file serving (media for Plex and a datastore for ESXi), so throughput is the main concern; it will not do any form of media transcoding or virtualization itself, if that matters. Ok, I think those were my thoughts for now. I'm sure I forgot something I wanted/needed to ask, but this will keep the planning rolling at least. Thank you in advance! Thomas
  13. Yes, but setting maxdisks is just part of the problem; you still have the limit of 26 physical disks, or is this being addressed as well?
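     For context, the settings I mean live in synoinfo.conf (or the synoinfo_override.conf from the extra.lzma guide). A rough sketch of how a 24-slot setup is usually described, with the caveat that the values here are illustrative and the 26-disk ceiling is exactly what I'm asking about:
        # synoinfo.conf sketch (illustrative values, not verified on 1.03b/DS3617xs)
        maxdisks="24"                # number of internal disk slots DSM will present
        internalportcfg="0xffffff"   # bitmask, one bit per internal slot (24 bits set)
        # usbportcfg and esataportcfg are bitmasks too and must not overlap internalportcfg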
  14. Thank you, and I greatly appreciated your advice; it was a variation of it that saved me.
  15. I'm back! In pure desperation I reconnected the new system drive, then all the old drives. All my settings and old users are gone, and the iSCSI target was also gone (but the LUN was left). One volume came up as degraded with the missing disk as a free spare (luckily a RAID 1, so no actual loss), while the second and larger volume was entirely intact. I've got some configuring to do, but the data all looks intact, and after fixing the iSCSI I seem to have access to everything from everywhere in my LAN/domain. I guess no more updates for me for a LONG time.
  16. Ah, crap, none of my systems actually have a serial port. Going to try the Ubuntu thing to see if the files are actually still there at least.
  17. Thanks. Unfortunately, even if I could, I have nowhere to recover the files to, as the drives the data resides on represent my entire storage pool. A USB cable shouldn't be a problem; any chance you can point me to a guide on how to attach my Win PC to the NAS and analyze it via USB?
  18. Again no luck :/ Clean USB and disk, completely clean install of the system. I shut the system down, removed that disk and attached the old volume disks. It boots to a recoverable system (no other options); after recovery and a reboot the system is inaccessible, invisible to Assistant and find.synology, and not grabbing a new IP via DHCP or using the old system's static IP. It's starting to look more and more like my NAS is a complete write-off and all 6*4TB of data a complete loss. If anyone has any more ideas of things to try, I'm all ears.
  19. May I ask another question? What's the rationale for removing the newly installed system disk before attaching the old disks? What will be the effect if you just add the old disks to the new system? (My understanding is that the system on the lowest-numbered disk will be the master and will be written to any other disks if there are discrepancies; is this incorrect? See the sketch below for what I mean.) I've tried several installs so far with no luck; the old disks keep appearing as empty and prevent access to the system. I'm now doing another run after having completely cleaned an HDD (wiped all partitions) and done a completely clean install of 6.1.4-15217 Update 1. The disk is installed in slot 1 of 18 in my system (the 6 Intel motherboard ports first, then the 4 ASMedia motherboard ports, followed by an LSI 8-port PCIe card).
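     To explain what I mean by "master": as I understand it, DSM keeps the system partition as a small RAID 1 (md0) mirrored across every disk, and if I could get a shell on the box I assume something like the following would show which members are in sync (a sketch only, since I currently have no SSH access at all):
        # standard mdadm tools, run from an SSH shell on the NAS
        cat /proc/mdstat           # md0 = system partition, md1 = swap, md2+ = data volumes
        mdadm --detail /dev/md0    # lists which disk members are active or out of sync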
  20. Meanwhile, though, any thoughts or ideas on why just attaching the disks to a working system instantly disables all network access (after a reboot), or why no volumes are detected on these disks when they are attached to a working system?
  21. Ok, point made, no further arguments from me. I apologize for the inconvenience. I will report back once I've tried it.
  22. Thank you, I will try that after some sleep; I've been at this for far too many hours. Not sure about the ramdisk part, I have no idea about that. Unfortunately I don't currently have any clean HDDs; might a forced reinstall on my current spare HDDs work (they have all had DSM on them at some point)?
  23. Although they are related, I considered them two different topics. The first was only about whether I needed to wait with the reboot because I was worried about the ongoing update process; the second post was about a specific issue that occurred after the system became inaccessible and concerns both the crashed system and a newly installed one. I would delete the first post if I could, as only this second post is of any relevance at this time. It wasn't my intention to spam or break any rules, but in general the rules tend to be not to hijack threads with unrelated issues but rather to start another.
  24. Tried burning a new loader USB and it seemed to work at first. The system booted and Assistant found a recoverable system (on my old drives). I selected to recover, it ran through its process, and BAM, no access whatsoever again.
  25. Ok, this is getting very weird now. As described earlier I had some issues booting (hardware), but then I managed to get to where I see the normal Jun loader screen. However, I have absolutely no access to the NAS: no Assistant, no find.synology, no SSH, no direct IP or domain name.
      I removed all drives and did an install on another drive; it's detected and installed, so there is no hardware or network issue. So why is there no access to the old system if there is no IP conflict or hardware issue?
      I then connected 1 or 2 of the drives from my smallest volume on the defunct system. The new system detects the discs immediately but says they are not initialized and detects no volumes or shared folders on them. Now the weird stuff starts: if I reboot with any of these discs connected, I again end up with a system that is not accessible (the new install is in the lowest-numbered slot, so it should be the master). Even if somehow the update attempt had wiped the drives (which would explain the missing volumes), simply connecting them shouldn't completely disable access to the working install, right?
      Running those two discs alone (2*4TB RAID 1), the system seems to load (you get the normal NAS loaded screen), but alas, no access in any way; it's invisible on the network (although the switch seems to indicate traffic). I tried to use the reinstall option during boot, but again, no access to the system, so no help.
      This is a bare-metal Intel install using Jun's 1.02b loader and DSM 6.1.x. Before the DSM update attempt it had been rock solid with 2 months of uptime. Does anyone have any thoughts on what's going on, why the discs show up as empty, or why they somehow disable all network connections, and maybe some solutions?