XPEnology Community

flyride

Moderator
Everything posted by flyride

  1. There isn't much documentation on how to recover btrfs. The thread follows a recovery using commands I compiled over several years. btrfs is supposed to self-heal (which it does most of the time, often without your knowledge), and when it doesn't, something is usually significantly wrong. That doesn't always mean data is lost, but the filesystem can rarely be restored to a healthy operating state. There are a few failure modes where btrfs won't automatically invoke redundancy, which is the purpose of the find-root and tree commands; sometimes those need to be executed for anything to work at all. Almost always, the long-term solution is to recover the files to another device, then delete and rebuild the btrfs filesystem.
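A last-resort recovery along those lines usually looks something like the following sketch. The device name /dev/md2, the mount point /mnt/recovery, and the <bytenr> value are placeholders; run the read-only commands first and always restore to a different device:

```shell
# Read-only diagnostics first; never start with repair commands.
sudo btrfs check --readonly /dev/md2           # report filesystem errors without writing
sudo btrfs-find-root /dev/md2                  # list candidate tree roots (bytenr values)

# Copy files off the damaged filesystem to another device (no writes to the source).
sudo btrfs restore /dev/md2 /mnt/recovery              # try the default tree root
sudo btrfs restore -t <bytenr> /dev/md2 /mnt/recovery  # retry with a root from find-root
```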
  2. I agree, your array seems okay. cat /etc/fstab will tell you how DSM thinks your array should be mounted. Current versions do not mount directly to an array dev unless you go through some significant effort. Are you certain you don't have a volume group setup? This thread should help you with troubleshooting and data recovery on btrfs: https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/?do=findComment&comment=107931
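Checking the mount configuration and the volume group can be done with something like this sketch (the commands are standard, but what they print depends entirely on your system):

```shell
cat /etc/fstab      # how DSM expects the volume to be mounted
sudo vgdisplay      # any LVM volume groups (SHR layers LVM on top of md)
sudo lvs            # logical volumes inside those groups
```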
  3. I think you'll have to do it with --assemble, and then you can specify each disk in order and the RAID type
  4. Force it as a RAID5. Now that the superblock has been modified, you may need to specify the array member order.
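A forced assemble with an explicit member order might look like this sketch (the md device name, partitions, and order are placeholders; substitute your actual array members, and verify before mounting anything):

```shell
sudo mdadm --stop /dev/md2
sudo mdadm --assemble --force /dev/md2 /dev/sda5 /dev/sdb5 /dev/sdc5 /dev/sdd5
cat /proc/mdstat    # confirm the array came up before touching the data
```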
  5. We don't actually know the answer to this question. The issue is some sort of instruction requirement compiled into the Linux kernel for this (and several other) platforms. If your CPU doesn't meet it, the boot crashes very early. This has been the case since DSM 6.1 and DS916+, and it generally correlates to Intel QuickSync support starting with the Haswell microarchitecture. There is a strong crowdsourced correlation with the presence or absence of the FMA3 instruction set, and this has been a good rule of thumb, but it could be something more obscure like SSSE3. To confuse the issue, it appears that some Intel chips support some FMA3 instructions even when not explicitly branded or flagged to do so. And it is possible for a new build or DSM version to remove whatever compile-time requirement is in play. For example, it seems that DS920+ (the refresh of the DS918+) runs on any x86-64 platform, even while supporting QuickSync. Occasionally there are reports such as yours that say things are working on X or Y CPU. Great! Post your results on the upgrade threads. Better yet, figure out what is really going on and advise!
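You can at least check what your CPU advertises to the Linux kernel. The flag names fma and ssse3 below are as reported in /proc/cpuinfo; per the above, a present flag is a good sign but not a guarantee that the platform will boot:

```shell
# Report whether the instruction-set flags discussed above are advertised
for f in fma ssse3; do
    if grep -qw "$f" /proc/cpuinfo; then
        echo "$f: present"
    else
        echo "$f: absent"
    fi
done
```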
  6. An HBA is NOT an AHCI device; therefore it needs a driver. If TinyCore can see your P212, it will only be as a SCSI device. Therefore you will need to figure out and load a TinyCore (TC) extension for the controller driver in order for it to work in DSM via passthrough. You should not see ESXi storage devices on the controller when it is in passthrough mode. There are a lot of posts on this. Picking the DS3622xs+ platform seems to offer the best chance of success, as it has the best HBA support. If you can't figure it out, take the controller out of passthrough, and if you can see the disk devices in ESXi, you should be able to RDM as a backup option.
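If you do fall back to RDM, the ESXi side is typically done with vmkfstools. The device identifier and datastore path below are placeholders; get the real device name from ls /vmfs/devices/disks:

```shell
# Create a physical-compatibility RDM pointer file for one disk on the controller
vmkfstools -z /vmfs/devices/disks/<naa.id> /vmfs/volumes/datastore1/rdm/disk1.vmdk
```

The resulting .vmdk is then attached to the DSM VM like any other virtual disk.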
  7. It's in the original post: the second paragraph following the table, under the heading "DRIVE SLOT MAPPING CONSIDERATIONS"
  8. Correct. A real Synology has an embedded USB drive with the loader installed on it. We are just replicating that by using a plug-in drive
  9. It's installed on ALL of the HDDs. A tiny bit of corresponding information is updated and linked on the USB as well.
  10. This is not 100% correct. SataPortMap/DiskIdxMap are not consulted for HBA disk assignment, but the HBA assignment begins wherever the SataPortMap assignments end. If there are no SATA ports of any kind mapped, it is required to have SataPortMap=00 and DiskIdxMap=00 so that the HBA assignments occur in the port namespace. If a virtual SATA port is being mapped out of the port namespace (Sataboot or the Proxmox false controller) and there are no other SATA ports mapped, then use SataPortMap=1000 and DiskIdxMap=00 for proper HBA assignment. TCRP's rploader.sh satamap attempts to calculate this if it detects only SCSI/HBA devices, even though it is technically not configuring HBA ports with SataPortMap.
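As a concrete illustration of the two cases above, the loader's kernel command line would carry values like these (example settings for hypothetical builds, not universal answers):

```shell
# Case 1: no SATA ports of any kind mapped (HBA-only build)
SataPortMap=00 DiskIdxMap=00

# Case 2: one virtual SATA port (loader disk) mapped out of the namespace,
# all data disks on the HBA
SataPortMap=1000 DiskIdxMap=00
```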
  11. I believe that I tried this early on, and either firmware option works with RedPill. But there is little advantage: why would you boot a USB stick on a DSM client VM when a VMDK will do? No harm in trying/verifying on a test machine yourself. If this is during the TinyCore boot, you can ignore it. But even the DSM Linux boot has a lot of spurious error messages that are usually not visible to the user, and should be ignored.
  12. There is no interrelation between the graphic and the MaxDisks and the internalportcfg, esataportcfg, usbportcfg bitmasks. These only need to be changed if you plan to use more than 16 disks. Otherwise just leave them alone.
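If you ever do go past 16 disks, the three bitmasks are just contiguous bit ranges, one bit per slot, allocated internal-first, then eSATA, then USB. A sketch of computing them for a hypothetical layout of 24 internal slots and 4 USB slots (these counts are made up for illustration):

```shell
internal=24; esata=0; usb=4
printf 'internalportcfg=0x%x\n' $(( (1 << internal) - 1 ))
printf 'esataportcfg=0x%x\n'    $(( ((1 << esata) - 1) << internal ))
printf 'usbportcfg=0x%x\n'      $(( ((1 << usb) - 1) << (internal + esata) ))
```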
  13. The number of drives displayed in the graphic has to do with the DSM platform used and cannot be easily changed. You can safely ignore it.
  14. Even with a real Synology, they are so far behind the industry on critical vulnerabilities that patch timing should not be a source of urgency. Anyone who is serious about security and is also making DSM accessible to the Internet should be doing so via VPN, a third-party authenticated proxy, or better yet, not at all.
  15. The output in Control Panel is cosmetic and tied to the underlying platform you chose. This will show you how many threads are actually in use: cat /proc/cpuinfo | grep cores | wc -l
  16. SataPortMap does not apply to LSI drives. You have an onboard controller with 6 SATA ports, and you are reserving slots for them. The LSI ports are tacked on to the end automatically (in your case, starting with slot #7). If no LSI drives are visible during the DSM installation (migration), then your LSI controller is not being recognized by DSM. Please don't confuse that with it being recognized by TinyCore - two different operating systems. You need to look into another driver for the LSI, or another platform that supports LSI better.
  17. Yes, I would click the link on the overview status page; the automatic routine should fix it for you. Sometimes DSM doesn't handle the repair when only one of the partitions on the drive is affected, which is what happened to you for whatever reason.
  18. I am not aware of any successful use of Hyper-V to host DSM.
  19. Well, we haven't done anything yet. First thing to do is fix the broken /dev/md3: sudo mdadm --manage /dev/md3 -a /dev/sde6. You can monitor its progress from Storage Manager or by repeatedly running cat /proc/mdstat. Post the final cat /proc/mdstat output when it is finished. At that point you probably will be able to use the link to fix the System Partition, but we should review the state first.
  20. In addition to the system partition problem (inconsistent /dev/md1 and /dev/md0), it appears that part of your SHR is broken (/dev/md3). What does mdadm say about the broken array? sudo mdadm -D /dev/md3
  21. Repairing the system partition should not require another disk. It almost looks as if you somehow added another disk to your array, but that is not something that happens as part of the upgrade. It's always better to post an mdstat; you might want to search for other data recovery threads to see some of the commands involved. cat /proc/mdstat from the command line is the place to start.