XPEnology Community

flyride

Moderator
Everything posted by flyride

  1. Best chance for that is to do a passthrough of the motherboard's SATA controller to your VM so that DSM can directly manage the disks. There are also issues with spurious/unimportant messages being frequently written to the logs, which inhibits hibernation. It's a bit of a chore to fix, but see these links: https://xpenology.com/forum/topic/29581-suppress-virtual-disk-smart-errors-from-varlogmessages/ https://xpenology.com/forum/topic/34121-hibernation-doesn’t-work
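     A quick way to confirm that logging is what keeps the disks awake (a rough check of my own, not taken from the linked threads) is to watch the log file while the NAS should be idle:
        sudo tail -f /var/log/messages    # entries that keep appearing while the system is idle are the hibernation blockers
     If messages keep arriving with nothing using the NAS, those are the entries the linked threads show you how to suppress.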
  2. I'm running it on ESXi test on all three platforms, no issues so far. I posted an update thread but it has to be moderator-approved.
  3. - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426 Update 2
     - Loader version and model: Jun's Loader v1.04b DS918+
     - Using custom extra.lzma: No
     - Installation type: VM - ESXi (with FixSynoboot.sh installed), Test system

     - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426 Update 2
     - Loader version and model: Jun's Loader v1.03b DS3615xs
     - Using custom extra.lzma: No
     - Installation type: VM - ESXi (with FixSynoboot.sh installed), Test system

     - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426 Update 2
     - Loader version and model: Jun's Loader v1.03b DS3617xs
     - Using custom extra.lzma: No
     - Installation type: VM - ESXi (with FixSynoboot.sh installed), Test system

     - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426 Update 2
     - Loader version and model: Jun's Loader v1.04b DS918+
     - Using custom extra.lzma: No
     - Installation type: BAREMETAL, J4105-ITX production system
     - Additional comments: NVMe cache present, no issues

     - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3-25426 Update 2
     - Loader version and model: Jun's Loader v1.03b DS3617xs
     - Using custom extra.lzma: No
     - Installation type: VM - ESXi (with FixSynoboot.sh installed), production system
  4. If referring to the 3.3v power line used as a reset line on many shucked disks, it can be taped over or just removed entirely.
  5. As long as there is no 6.2.2 modification to the 1.03b loader currently in use, yes ... we don't know the history on it though.
  6. I would upgrade the HP to the latest 6.2.3 before I attempted this. Once you are on 6.2.3, it will probably work. But I wouldn't do it with my production data until tested (install new USB and DSM to another drive on HP, then move the test install to SuperMicro as planned).
  7. A couple of observations:
     1. "Because of using a microsd" [there was an effect]: MicroSD vs. USB pendrive does not generally cause the issue you describe.
     2. "[The issue was fixed] with 3617xs": This probably isn't accurate either, given the number of success reports on your exact hardware. It's likely that a reinstall of DSM, or discontinuing use of an intermittently faulty pendrive, was the resolving factor.
  8. Migration IS a reinstallation. This is a problem with 6.2.3 that is mostly encountered with virtualized DSM instances, but occasionally with baremetal. Here's how to fix it: https://xpenology.com/forum/topic/28183-running-623-on-esxi-synoboot-is-broken-fix-available/ FixSynoboot should also correct your Error 21 corruption problem.
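     As a quick sanity check after running FixSynoboot (an illustration of my own, not part of the linked guide), the loader should be visible as synoboot devices once the script has done its job:
        ls /dev/synoboot*    # typically /dev/synoboot, /dev/synoboot1 and /dev/synoboot2 on a healthy install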
  9. So since you want to re-install DSM, I would consider doing this:
     1. Install the new USB loader and a clean DSM to your new 6TB drive. Don't create a Storage Pool or a Volume.
     2. When you are happy with the new install, power down your old system normally.
     3. Physically move your array disks over to the new platform. Hotplugging is okay if your system can do it.
     4. Follow the process here to import the array into the new system (a rough command sketch follows below). The procedure will have you:
        - Start all the arrays (including the superfluous ones for your old DSM boot and swap).
        - Mount the data array to /volume1, not /volume2 (since you have no other volumes online), and verify your data is present.
        - Stop the superfluous boot and swap arrays.
        - Complete the procedure to create the Storage Pool and Volume in DSM.
        - After everything is done, ONLY THEN fix the System Partition in Storage Manager. If you try to fix it before this step and you need to go back for some reason, you will have to reinstall DSM on your old system.
        - Reboot and verify that the array got assigned to a normal device (should be /dev/md3).
     5. After you are fully satisfied that the system and its data are complete and stable, you can add the new 6TB drive into the array normally.
     You said you have a backup. That's a really good idea with this procedure, but it's mostly harmless. Good luck!
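     For step 4, the shell commands end up looking roughly like the sketch below. This is only an outline with placeholder device names (mdX/mdY/mdZ), not the linked procedure itself; check /proc/mdstat carefully before stopping anything.
        sudo mdadm --assemble --scan           # start every array the kernel can find (data plus the old boot/swap)
        cat /proc/mdstat                       # note which /dev/mdX is the large data array
        sudo mount /dev/mdX /volume1           # mount the data array to /volume1 and verify your files are present
        sudo mdadm --stop /dev/mdY /dev/mdZ    # stop the superfluous old boot/swap arrays (not the running md0/md1)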
  10. The DS1812+ has an earlier-generation Atom chip (Cedarview D2700) than the chips that had the boot clock problem (C25xx), so that probably won't apply. The PSU is usually the culprit there, but you already tried that...
  11. You can activate the mobo NIC without adjusting the loader. I'm not sure what your last question means. You always were able to install whatever DSM dialect you wanted that was supported by your hardware. "Going" to 3617xs (presumably from 3615xs) involves a reinstallation of DSM, but I think you know this. You'll need to burn a new, clean loader in order to do it though. However, if I may borrow your quote, "I don't believe it will change anything."
  12. The only thing I can think to do is have you set a static IP on a PC and then remove all other devices (including the firewall) except the NAS and the PC from the network switch. If it then worked reliably, at least you would know the problem was related to one of the removed devices. If it continues to fail, there are still many failure possibilities: DSM, NIC, or the server hardware itself. I'll also point out that you switched NICs during this process and the problem persisted, so that doesn't suggest an intermittent NIC to me. But on 6.2.3 you should be able to use your onboard NIC, so you could swap the Intel CT for the onboard since it seems you want to try everything you can.
  13. Not guaranteed. The system may be booting but not in a state where it can generate that log. No, I mean connecting to a COM port. The DSM console (all the boot messages and direct Linux access) is only accessible via serial, not the hardware console.
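     As an example of what connecting to a COM port looks like in practice (a sketch only; device names, COM numbers and hypervisor settings will differ on your setup): attach a null-modem/USB serial cable to the motherboard COM header, or add a virtual serial port to the VM, then open it at 115200 baud:
        sudo screen /dev/ttyUSB0 115200    # from a Linux client with a USB serial adapter
     On Windows, PuTTY with connection type "Serial", a line such as COM3, and speed 115200 does the same thing.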
  14. There isn't one right way to do things, but it might help to have a little more information. Please answer:
      1. What DSM hardware platform?
      2. What array type are your 5 x 6TB drives (SHR, SHR2, RAID5, RAID6)?
      3. What size is your new drive?
      4. Are you planning on just adding the new drive to the existing array to increase the space?
      5. Do you have a backup of all your data somewhere else?
  15. I don't really understand this statement, and don't think your logic is correct here. You posted a boot screen when you cited that it was not booting at all. That is all that is ever displayed from the loader, so we know the boot loader has executed. DSM does not post a notification to the VGA screen during a reset, and if the network wasn't working there would be no notification via any other means. So you probably don't really know the state of things when you do not have a network connection. The only way to properly troubleshoot this is via serial, and you will be able to monitor DSM's state without any network connection. How to set up and use a serial connection is well chronicled on this forum.... Also if you are trying to minimize all variability, you should reconfigure back to static IP.
  16. I just have to point out that it's the same hardware and network and you've experienced this problem across three different DSM versions. This is not a normal behavior. Something in your local environment is causing it.
  17. Some enclosures:
      https://www.u-nas.com/xcart/cart.php?target=product&product_id=17640
      https://www.amazon.com/SilverStone-Technology-Mini-Itx-Computer-DS380B-USA/dp/B07PCH47Z2/ref=sr_1_2?dchild=1&keywords=u-nas&qid=1606234858&sr=8-2
      https://www.amazon.com/Antec-Three-Hundred-Two-Pre-Installed/dp/B006TVQTHW/ref=sr_1_2?dchild=1&keywords=8-bay+case+computer&qid=1606234801&s=electronics&sr=1-2
      There are many more options besides these. I personally use the NSC-810A and it is extremely compact, virtually the size of your DS1812+ while still supporting a M-ATX motherboard, but it's a bit fiddly because of the limited internal layout. And to echo @bearcat, you can port your drives into the new system and do a migration install and all data will be intact. It would be a Good Idea to trial run this with a plain vanilla XPEnology install, then a simulated drive-set migration install, before putting your real data in play.
  18. Work has kept me away from monitoring what happens here, please DM me if you decide to bring the disks back to DSM for recovery.
  19. It's cosmetic and does not affect performance. The DSM UI displays the CPU that originally came with the host system (you are using a DS3615xs image). Someone built a patch to change the text string if it really bothers you.
  20. Be sure to take out your loader USB with the drives and burn another one for the test, then when you restore your old drives, put your old loader back at the same time.
  21. You have one virtual SATA controller, with a 20GB virtual data disk in slot 0, the loader in slot 1, and a 50GB virtual data disk in slot 2. You can see you are booting from the loader on sata1. Best practice is to have two virtual SATA controllers; this is easy in ESXi but I don't know Proxmox. vsata0 should have only the loader in slot 0 (sata0) and boot from that device. vsata1 should have your virtual data disks in sequence: slot 0 = 20GB, slot 1 = 50GB, etc. If it is hard to set up two virtual SATA controllers, one vsata controller should work, with the loader in slot 0 and the virtual disks following in slot 1, slot 2, etc. DSM is generating a confusing error message about the non-standard placement of the loader device, but the script is correctly unmounting it and hotspare-ejecting it as required.
  22. It does look a little strange. Can you share a list of the virtual controllers and drives you are using? Do you possibly have two loader devices mounted to your VM by accident? The script (which is just part of the Jun bootloader code) tries to identify the loader partition by its size, which is smaller than any valid RAID partition. If a targeted partition is mounted, that is characteristic of the eSATA automap problem, so it quietly dismounts it for you. If there are two block devices that present as a loader, it could get confused. This is a potential problem with Jun's loader with or without the FixSynoboot script, although there would be no error message displayed; it would manifest later as an upgrade crash, since the wrong loader would be modified during the upgrade.
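     To illustrate what identifying the loader by size means (a rough manual check of my own, not the script's actual code), listing the partitions makes the loader stand out, since its partitions are tiny compared with any RAID member:
        sudo fdisk -l /dev/sd? 2>/dev/null | grep '^/dev/sd'    # the loader's partitions are far smaller than any RAID partition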
  23. Your NAS is not infected, just your files. MARS has no ability to attack a Linux system and your system files are not exposed, only the files within your shares that are accessible from Windows (where the infection occurred). This would be a great time to tell us you are using btrfs and creating regular snapshots, in which case the entire filesystem could be rolled back to before the ransomware event.
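     If you do rebuild on btrfs afterwards, snapshots can be scheduled from Snapshot Replication in the GUI; at the command line the idea looks roughly like this (the paths are placeholders of my own, not DSM's exact layout):
        sudo btrfs subvolume snapshot -r /volume1/share /volume1/snapshots/share-2020-11-24    # read-only snapshot of a shared folder
        sudo cp -a /volume1/snapshots/share-2020-11-24/. /volume1/share/                       # restore clean copies after an infection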
  24. The /dev/sdf3 error should not happen. Did you reboot the system since the last time you posted the drive stats? It looks like your drives may have been reassigned. Please repeat "sudo fdisk -l /dev/sd*"