XPEnology Community

Everything posted by flyride

  1. Luck has nothing to do with it. This is due to a lack of preparation and understanding. There is a Realtek NIC driver in 6.2.3. The 1.02b loader is not compatible. Properly prepare a 1.04b loader and boot with that. Install DS918+ platform DSM 6.2.3 as a Migration Install, and your settings should be retained. You likely won't be able to continue using the DS3615xs DSM platform, because your J3710-ITX cannot select CSM/Legacy boot mode for the USB stick, and that boot mode is required by loader 1.03b.
  2. Your CPU will do more work with six non-hyperthreaded cores active (6 threads) than with four hyperthreaded cores (8 threads).
  3. Nobody has done it yet. It would require compiling the kernel (the kernel source was just released by Synology). Patching/hacking is unlikely, because each of these issues is a compile-time option that builds complex structures within the kernel. The best option for your use case is to turn off hyperthreading in your BIOS so that you can use all six cores. You can also try ESXi so that you can offload some workload to the extra cores (passthrough of the GPU to the DSM VM may be difficult, but not impossible).
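     If you want to confirm what DSM actually sees after changing the BIOS setting, counting the entries in /proc/cpuinfo from an SSH shell is a standard Linux check (nothing DSM-specific about it):
     # grep -c ^processor /proc/cpuinfo
     With hyperthreading off, the six-core CPU should report 6; with it on, the DS918+ 8-thread limit means you'd see 8, i.e. only four physical cores in use.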
  4. Glad to hear that you could sort it out. Looking at the datasheet for the GS110EMX, it's definitely a managed switch that supports VLAN tagging and port assignment, but it does not have Layer 3 (routing) capabilities. If you want multiple IP networks, you will still need a routing service.
  5. Are you using the resources on this website or instructions you found elsewhere? https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/
  6. Set it in grub.cfg per the procedure here: https://xpenology.com/forum/topic/7973-tutorial-installmigrate-dsm-52-to-61x-juns-loader/
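     For reference, the values live in grub.cfg on the loader's first partition and look something like the lines below. These are placeholders, not working values; substitute your own USB stick's VID/PID and your generated serial and MAC per the tutorial:
     set vid=0x058f
     set pid=0x6387
     set sn=XXXXXXXXXXXX
     set mac1=XXXXXXXXXXXX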
  7. XPe supports 16 drives natively on DS918+ with no need for expansion units.
  8. (re: Hyper-V) I'm not clear how this would work. There is no suitable virtual NIC presented to DSM by Hyper-V. You could boot it, but not connect to it.
  9. A switch does not route between IP networks unless it is a routing switch (most consumer/low-end switches do not have this ability), AND one of the following is true:
     • the correct IP network is configured on each physical port, so the switch knows how to divide the networks, or
     • overlapping IP networks are configured on all interfaces (in which case, what is the point of having multiple networks), or
     • VLANs are configured, assigned to IP networks, and mapped to physical ports.
     The only way I think it can be working is if it is not using TCP/IP (a Layer 3 protocol) and is instead using a Layer 2 protocol, i.e. AppleTalk or NetBEUI. That traffic is forwarded by the switch to all ports regardless of how the IP networks are configured. But it will not allow you to access the web UI, which is TCP/IP only.
     Not necessarily. If you need to configure advanced features like VLANs, you will need to assign an IP to the switch so that you can get access. It probably took one via DHCP already and you aren't using it. However, I asked the question to see if you were trying to route with the switch, which you are not, so it's really irrelevant at the moment. What model is the switch? Then we can confirm whether it has routing capabilities.
     IP networks are defined by the network number and mask; they go together. There are 32 bits available (four 8-bit numbers). The mask decides how many bits are assigned to the network number, and how many are assigned to identify devices on the network. You cannot have different masks on the same IP network. Your current physical network has no way to route between multiple IP networks, so the solution is to stop using the odd (10.10.0.0/16) IP network and assign the only device using it (the NAS 10Gbe port) to the 192.168.1.0/24 network. Your NAS will then have two IP addresses on the same network, one for the 1Gbe port and the other for the 10Gbe port. This is fine.
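     A quick way to see whether a host believes a destination is directly reachable or needs a gateway is the standard Linux route lookup below (10.10.0.5 is just an example address on the second network):
     # ip route get 10.10.0.5
     If it reports the network as unreachable, or points at a gateway that doesn't exist, traffic to that network is going nowhere.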
  10. There are actually 16 drive slots available on DS918+ so that should not be a consideration for you. DS3617xs is only needed if your CPU cannot handle it (not your concern) or if you need 16 CPU threads, or RAIDF1.
  11. An IP network is a logical entity defined by the address and mask; it's not the physical connection between devices, but it is governed by physical connectivity rules. You have two networks defined right now:
     • 192.168.1.0/24 (/24 is a 24-bit mask, the same as 255.255.255.0) - this network has 254 usable device addresses
     • 10.10.0.0/16 (/16 is a 16-bit mask, the same as 255.255.0.0) - this network has 65,534 usable device addresses
     Your modem can talk with your PC and with the NAS Intel port because everyone is on the same IP network 192.168.1.0/24 and it is all connected on the switch. One IP network cannot talk with the other IP network unless there is a router involved, which you say you do not have (technically your modem is a router, as it has a public IP lease on the Internet port, but it does not factor into this problem). How did you verify that 10Gbe traffic was working? I don't think it is (unless you are using a Layer 2 protocol like AppleTalk). The NAS is not a router, so it will not pass traffic from the 10.10.0.0/16 network to 192.168.1.0/24, and unless the switch has a 10.10.0.0/16 address and is also routing, it will not pass the 10Gbe traffic to your PC. I think you should change your 10Gbe port to a 192.168.1.0/24 address and start using that IP to connect to the NAS instead of 192.168.1.40, which then becomes just a backup IP.
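     For illustration only (DSM expects interface settings to be made in the Network control panel, and shell changes do not persist across reboots), moving the 10Gbe port onto the /24 from the command line would look something like this, assuming the 10Gbe interface is eth1 and .41 is unused on your network:
     # ip addr add 192.168.1.41/24 dev eth1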
  12. What are the IP, netmask and gateway of each device (modem, PC, NAS, switch)? Is there a second NIC in the NAS or PC? Is there an IP address for the switch? Is any routing being done by the switch?
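     From a shell on the NAS or PC, the standard Linux commands below will answer most of that (the switch will need to be checked from its own management UI):
     # ip addr show
     # ip route show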
  13. So far so good. Ignore the disk Failing status for now, hopefully that is a red herring. Either Drive 3 is going to perform and be fully functional to recreate a full parity set for Drive 4, or it will completely fail and we will still have a broken array. Then we will go on to the Insurance drive (old #4) which has a mostly-intact parity set of your data. This is going to take a long time (hours). Don't interrupt it. It might report errors on #3 but it will retry, let it do that. You can monitor with cat /proc/mdstat or watch the parity consistency % increase in Storage Manager.
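     The watch utility may not be present on DSM, but a plain shell loop does the same job if you want the status to refresh on its own:
     # while true; do cat /proc/mdstat; sleep 60; done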
  14. Yep, that's what we need. Repair the Storage Pool with Disk #4
  15. Ugh. I got confused between the screenshot and the description of what happened with Drive #4. You unfortunately actually did create an array failure by replacing #4 and booting the NAS. I did ask if you had booted it and you did not answer. In any case, this means we need the "failing" drive (which hopefully is not actually failing) to be functional in order to restore redundancy. Option 2 is now invalid, except as a last-ditch emergency method of recovering your data. Power off your NAS, remove your spare drive, restore the original disk #3 and remove disk #4. Set #4 aside as insurance for your data. Then install the spare into #4 slot. Then boot up the NAS and you should again see the array as Degraded and drive #4 as Not Initialized. If that isn't the case, stop and report back. Otherwise, repair the array per instructions. Don't bother with the SMART Extended test for now.
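     Once it is booted, you can also sanity-check the array state from the shell before repairing. On most DSM installs the data array is /dev/md2, but confirm the name against the mdstat output first:
     # cat /proc/mdstat
     # mdadm --detail /dev/md2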
  16. Post a current cat /proc/mdstat and let's figure out what was missed.
  17. Everything can be done from the UI. Replace the drive, then go to Storage Manager and ensure that the Storage Pool for the enclosure is still in a Degraded state. If it shows Crashed, don't do anything else; take screenshots and report back. The replacement drive should be visible in the HDD list in Storage Manager as Not Initialized. Then, from the Storage Pool window, select Action, then Repair, and select your replacement drive. Wait several hours for the array to resync. Monitor progress from Storage Manager or cat /proc/mdstat. When everything is done and the array is Healthy, then Fix System Partition from the Storage Manager.
  18. Ok. This tells us a few things, mostly positive. Your sata2 device (which is physical disk #3 of 4) has a SMART status indicating seek failures at some point, but that is not flagging the drive as SMART failed. DSM has determined that the drive has failed because there was a problem completing a SMART Extended test in the past. The drive may be fine, but it requires further testing for DSM to unflag it. Whatever has happened to the array has caused sata2 (physical disk #3) to drop out of the array, but only very recently. Whatever you did with disk #4 happened while the array was offline, so no harm done (good news). You have two options to clear this up:
     • Attempt to run a SMART Extended test on physical disk #3 to see if it will clear the flag. If it does, just resync the array with disk #3.
     • Replace disk #3 with your spare, and resync the array.
     After you restore your array redundancy, correct disk #4's System Partition error by going to the Storage Pool and clicking "Fix System Partition".
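     If the UI won't start the test, the same Extended (long) self-test can be kicked off from the shell with smartctl; the device name and -d sat flag are the same ones used for the other smartctl commands in this thread. The test runs in the background for several hours:
     # smartctl -t long -d sat /dev/sata2
     # smartctl -l selftest -d sat /dev/sata2
     The second command prints the self-test log, so you can check the result once it completes.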
  19. Ok, I got it now. Partitions are labeled p1, p2, p3 rather than with a bare number. Run:
     # smartctl -x -d sat /dev/sata2
     and
     # mdadm --examine /dev/sata[1234]p3 | egrep 'Event|/dev/sata'
  20. The enclosure causes the devices to be named (and maybe classified) differently, which changes the results. Try:
     # find /dev -name sata1p
     and
     # fdisk -l
  21. There is a SMART detail page in the UI that might be helpful. In lieu of that, post the output of:
     # smartctl -x -d sat /dev/sata2p
     Also, I'm not clear whether you ran the system with the replaced drive #4, or whether the array is really still safely usable. Post the results of this command:
     # mdadm --examine /dev/sata[1234]p3 | egrep 'Event|/dev/sata'
     You may need to elevate to root before running at least the smartctl command. Note that the drive sequence 1,2,3,4 is actually 4,1,2,3, and logical drive #2 is the one with the actual issue.
  22. System Partition failed does not mean that a drive is failing - it means that the copy of the DSM OS on that particular drive is inconsistent with the others, so it is not being used. You have three (maybe two) other copies of it. It is not a big deal. Unfortunately, replacing drive #4 was the wrong thing for your data. Did you try to boot up with the replacement drive in place? DSM aggressively reports a drive as "failing" whenever there is a SMART failure. It may or may not be critical. Find out what is actually happening before doing anything else. First post the SMART status of drive #3. Then go to the command line and execute cat /proc/mdstat, which will show the actual status of your array, and post the result.
  23. PCIe x4 is adequate for a 10Gbe network card, but there is no reason not to use the Broadcom. You need to be on 6.2.3, however, to get Broadcom support without a custom extra.lzma. I get full wire speed (10Gbe through my ConnectX-3), but I have a very high-performance array, which may actually be the limitation cited. You don't need to edit the MAC addresses unless you want to be able to wake on LAN, or if you plan to use the Intel NIC elsewhere on your local network (to avoid a MAC collision). You don't have to do it in sequence, though; you can change the USB before or after as you wish.
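     If you do decide to match the loader's MAC to the real NIC, you can read the hardware address from a shell first and then set it via mac1 in grub.cfg (eth0 here is just an example; substitute the interface in question):
     # cat /sys/class/net/eth0/address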