Everything posted by flyride

  1. The host machine must be Haswell or later to run DVA3221. Your E5-2420 is too old. https://xpenology.com/forum/topic/61634-dsm-7x-loaders-and-platforms/
  2. Ok, let's do this too:
     vgcfgrestore --file /etc/lvm/backup/vg1 vg1
     Then run pvs again.
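     If the restore reports success, a minimal follow-up sketch (assuming vg1 is the volume group named above) is to reactivate and verify:
       vgchange -ay vg1   # activate the restored volume group
       lvs                # confirm the logical volumes are visible again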
  3. Yes, something is wrong. You'll need to post more information about your hardware (motherboard, any disk controllers) and the output from satamap if you'd like some advice. Be sure that you "update" and "fullupgrade" prior to running satamap; a typical sequence is sketched below.
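     For reference, the usual TCRP sequence looks like this (run from the TinyCore console; paths assume the stock image):
       ./rploader.sh update now        # fetch the latest rploader.sh
       ./rploader.sh fullupgrade now   # refresh the supporting data files
       ./rploader.sh satamap now       # probe the SATA controllers and suggest a map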
  4. Ok, if you are still waiting on this: the problem identified by @IG-88 is that the /dev/md4 array is missing its LVM UUID. This can be re-created without damaging data, although we don't know yet whether the data is intact. If you want to try this, the command will be:
     pvcreate --uuid C6GTjV-xqi0-e1XG-045q-hCjs-1dCS-fsbzhK --restorefile /etc/lvm/backup/vg1 /dev/md4
     Then verify what has happened with pvs and post the results.
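     For reference, if the physical volume was re-created successfully, the default pvs output should list /dev/md4 again, along these lines (sizes here are placeholders):
       PV         VG   Fmt  Attr PSize  PFree
       /dev/md4   vg1  lvm2 a--  ...    ...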
  5. 0.8.0 is the TCRP "stable release"; 0.9.0 is the TCRP development release. Use the stable release unless you are testing something for development.
  6. As @IG-88 says, your loader is very old. Your options:
     1. Keep your loader as-is and install the same DSM version PAT file, 6.1-15047.
     2. Use the latest 6.1 version: replace your loader with a more current one (Jun 1.02b) and install DSM 6.1.7-15284.
     3. Jun 1.03b and DSM 6.2.3-25426 (not 6.2.4; that version is not supported).
     4. The TCRP loader and DSM 7.1-42661.
     I recommend options #1 or #2, since you seem unfamiliar with the current install process and your data is in the balance. FMI:
     https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
     https://xpenology.com/forum/topic/61634-dsm-7x-loaders-and-platforms/
  7. You didn't mention that part. The write penalty on a QLC NAND drive is very high. https://www.firstpost.com/tech/news-analysis/samsung-870-qvo-sata-ssd-review-possibly-the-best-qlc-drive-you-can-get-but-it-isnt-for-everyone-9145371.html "The Samsung 870 QVO is a QLC NAND SSD with a SATA interface. The unit I received is the 1 TB variant. It has 1 GB of DRAM cache and about 42 GB of SLC cache. Samsung rates the read/write speeds of the drive at around 550 MB/s, but neglects to mention that you'll only get these speeds within that 42 GB cache. Once the cache is saturated, speeds drop to about 50 MB/s for mixed data, and 110 MB/s for large files."
  8. What are you copying from? Many small files will not get a drive up to rated speed. Are you sure you are supplying from 10GbE? Check the raw read on the device first with hdparm -t /dev/sdX. Then maybe look at a disk-to-disk copy from the command line without using the network, as sketched below.
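     One way to approximate that from a shell (sdX is a placeholder; both commands only read, so they are non-destructive, and status=progress needs GNU dd, so drop it if your dd doesn't support it):
       hdparm -t /dev/sdX                                             # raw sequential read benchmark
       dd if=/dev/sdX of=/dev/null bs=1M count=4096 status=progress   # ~4 GB sequential read, no network involved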
  9. That looks equivalent to ESXi's RDM (raw device mapping) service. Based on what you have posted, I would expect DSM not to see any difference for the 870 QVO drives when connected to a passthrough physical SATA controller. Agreed.
  10. If the drives are accessible to the hypervisor as backing storage (you format them within the hypervisor, create a filesystem, then a virtual disk file within that, and finally assign that virtual disk to a virtual controller), that is not a drive passthrough. It is just a virtual disk you have attached to a VM, and it will not survive moving to a physical SATA controller passthrough.
  11. That support is not available on the DS3622xs+ platform. https://xpenology.com/forum/topic/61634-dsm-7x-loaders-and-platforms/
  12. System Partition Failed means that the RAID1 for DSM that is on all disks is no longer consistent. Here are the instructions from the Synology website to correct it:
      "Failed to access the system partition"
      To repair the system partition:
      1. Launch Storage Manager.
      2. Go to Overview and click the Repair link. The system should start repairing the system partition on the drives.
      3. Wait for the system to complete the repair.
      4. Go to HDD/SSD. The allocation status of the drives should return to Normal.
      If one or more drives still show the System Partition Failed status, they might be defective. You can do the following:
      1. Replace the defective drives one by one.
      2. Depending on what status is shown on the Overview page, repair the storage pool or the system partition.
      Follow these best-practice tips to keep your data safe and your system operational:
      • Regularly back up your data and system configurations.
      • Run S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) tests on your drives to monitor the drive health status.
      For detailed instructions on each of the above, refer to the respective help articles for DSM 7.0 and DSM 6.2.
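      If you prefer to check the state from a shell first, the DSM system partition is an md RAID1 spanning all drives, usually /dev/md0 (that device name is an assumption; confirm it in /proc/mdstat). A read-only look:
        cat /proc/mdstat          # overview of all md arrays and any missing members
        mdadm --detail /dev/md0   # per-member status of the system partition array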
  13. Ok, so you are indeed describing attaching physical drives to a virtual controller, then presenting the same drives connected to a passthrough SATA controller. I would expect the individual drives to be recognized with no issue. The only reason that would not be the case is if the virtualization translated the disk in some way (i.e. a virtualization wrapper or size change). A Migration prompt occurs when the version/SN/loader does not match the last boot. Moving the drives from one functional controller to another won't prompt a migration; it should just boot.
  14. The 970 EVO is an NVMe SSD, not SATA, so I don't understand how passing through a motherboard SATA controller will be helpful. There is no way to physically connect an NVMe disk to a SATA controller, so the question of whether the drives will need to be reformatted is irrelevant. I'm not sure how Unraid works, but with ESXi, publishing an NVMe drive via RDM can emulate whatever is needed: attaching it via a virtual SATA controller works, and it will perform at rated speed. Maybe that is essentially what you are doing with Unraid. If so, maybe consider ESXi as an alternative, purely for performance reasons.
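      For reference, on ESXi a physical-compatibility RDM pointer can be created with vmkfstools (the device and datastore paths below are made-up examples; list /vmfs/devices/disks/ to find the real NVMe identifier):
        vmkfstools -z /vmfs/devices/disks/t10.NVMe____Samsung_SSD_970_EVO_example /vmfs/volumes/datastore1/rdm/970evo-rdm.vmdk
      The resulting vmdk can then be attached to the VM's virtual SATA controller like any other disk.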
  15. Each install modifies the USB stick. What you did was not ideal as it broke the connection between your loader and your installed system. So the DSM stub on the loader now no longer recognizes the system you have and is offering to migrate it so that it can work with it again. Your option is to Migrate. You can also burn a brand new loader and boot that and it will try to Migrate that one as well.
  16. FWIW, satamap does not ask about LSI controller ports because an LSI HBA is not an AHCI SATA controller. LSI ignores SataPortMap and DiskIdxMap; those only apply to AHCI SATA. It just tacks its ports on at the end of whatever SATA ports are defined.
  17. Post the build output. If there are errors with the build it will say so and the grub menu entries will not be generated.
  18. An HBA does not pay any attention to DiskIdxMap, so I don't believe you will be able to do this.
  19. There is nothing to do to migrate RDM disks. They attach to a virtual SATA controller exactly as before.
  20. What you have described suggests that you have set the SATA1 controller up with two ports, i.e. SataPortMap=12 and DiskIdxMap=1000 (or 100002). With RedPill, there will always be a gap from the last SATA controller to the first HBA device. You could delete your SATA1:0 controller and its attached virtual disk(s), then rebuild the loader with SataPortMap=1; your HBA would then start at port 2 (see the sketch below). That's the best you can do on ESXi, unless you want to pass through a real USB flash drive and boot from that.
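      If you are on TCRP, those values live in user_config.json; a single-port first controller would look roughly like this excerpt (other fields omitted, values illustrative):
        "extra_cmdline": {
          "SataPortMap": "1",
          "DiskIdxMap": "00"
        }
      followed by a rebuild, e.g. ./rploader.sh build <your-model-and-version>.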
  21. I'm heading out on a business trip for several days. But I think you can follow the steps starting with the cited comment in this thread and restore your volume. https://xpenology.com/forum/topic/41307-storage-pool-crashed/?do=findComment&comment=195345
  22. There are many examples of this in the dev threads. But you can build a new loader, then in the same session edit the dts file and rebuild again, to the same effect (see the sketch below).
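      Roughly, under TCRP (the model/version string and the dts filename are examples only; your build may name them differently):
        ./rploader.sh build ds920p-7.1.0-42661   # first build generates the device tree source
        vi ds920p.dts                            # hypothetical filename; edit the dts as needed
        ./rploader.sh build ds920p-7.1.0-42661   # rebuild picks up the edited dts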
  23. Since inception, all the Quicksync-enabled products have required 4th gen. This was determined over time by inference and a large sample set. To the best of anyone's knowledge, the Quicksync DSM builds utilize the FMA3 family of instructions, which was introduced with 4th gen/Haswell. But we don't know why FMA3 instructions are compiled into the Linux kernel, and it would be odd to think it had anything to do with the boot loader. If Synology has moved away from whatever required the FMA3 support, then DS920+ should potentially install on any x86-64 machine. We do know that DS1621+ (AMD v1000 platform) does seem to require the FMA3 instructions. So it's not a Device Tree oddity. Maybe someone else can confirm this on Nehalem/Sandy Bridge/Ivy Bridge and post it to a new thread. Sorry for the topic drift.
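      For anyone testing that, the fma flag in /proc/cpuinfo corresponds to FMA3 support, so a quick check from any Linux shell is:
        grep -m1 -wo fma /proc/cpuinfo   # prints "fma" once if the CPU supports FMA3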
  24. @pocopico FWIW, update fetches the latest rploader.sh file and fullupgrade updates all the supporting data files. I don't think fullupgrade will re-download the TinyCore binaries, so if those get damaged or deleted, it's best to just start over with a new img file (see below).
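      A rough sketch of the start-over path (the release filename and URL are examples; grab the current image from the pocopico/tinycore-redpill releases page, and triple-check the target device before writing):
        wget https://github.com/pocopico/tinycore-redpill/releases/download/v0.8.0.0/tinycore-redpill.v0.8.0.0.img.gz
        gunzip tinycore-redpill.v0.8.0.0.img.gz
        dd if=tinycore-redpill.v0.8.0.0.img of=/dev/sdX bs=1M   # /dev/sdX is your USB stick; this erases it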
  25. To be fair, the part number is MBD-X10SDV-TLN4F, and the CPU is a Xeon D-1541. https://www.supermicro.com/products/Product_Naming_Convention/Naming_MBD_Intel_UP.cfm