XPEnology Community

Everything posted by flyride

  1. Once you are on the Intel platform, you have many options - migration install, btrfs replication, etc. But coming from ARM, those features are not available. You can keep your settings by saving the config (.dss file) from the DSM UI and restoring it to your new system once it is installed. It's not perfect, but it does restore user accounts and the like. You may have to remake permissions, so take good notes.

     Side note: the 14TB drive is too different in size from the 6TB disks to get extra space from SHR. You will only be able to use 6TB of the 14TB available, for 12TB usable, because of the space disparity. That said, you MUST initiate your new array with a 6TB disk if you want it to interoperate with a 14TB drive at all.

     Assuming you have a 2-disk RAID 1, you already have two copies of your data. So just pull one of the drives out and use that to build the new system. Your Synology will complain that the array is "critical," but it's fully functional and your data is intact on the remaining drive. No need to break out the 14TB drive unless you want to use it to make another copy of all your data. If you want to do THAT, you could pull one of the drives and let the system rebuild the array using the 14TB drive (be sure it makes a RAID 1 and not an SHR - i.e. only 6TB usable storage).

     Here are a few data migration ideas, somewhat dependent upon your level of technical ability:
     • Build up a new XPe DSM with the removed 6TB drive, and just copy folders from your old Synology using your PC client
     • Copy all your data off to the external disk, build up a new XPe DSM with the removed 6TB drive, then attach the external and copy folders using File Station
     • Build up a new XPe DSM with the removed 6TB drive, and rsync folders from your old Synology (via UI or command line)
     • Build up a new XPe DSM with the removed 6TB drive, then connect your Synology 6TB drive (or the 14TB copy if you did that), manually mount the filesystem from the command line, and move the folders from the Linux command line (a rough command sketch for these last two options follows below)
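     The commands for those last two options might look something like the following (run as root over SSH; the IP address, share name, mount point, and device names are placeholders and will differ on your system). rsync from the old Synology, assuming it is reachable at 192.168.1.50 and the shared folder is "photos":
     # rsync -avh root@192.168.1.50:/volume1/photos/ /volume1/photos/
     Manually mounting the old data disk after attaching it to the new box, assuming a plain array assembles as /dev/md2 (a degraded array may need --run to start; an SHR volume would instead sit on an LVM logical volume such as /dev/vg1000/lv):
     # mdadm --assemble --scan
     # mkdir -p /mnt/olddata
     # mount /dev/md2 /mnt/olddata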
  2. @SnowDrifter you are correct. The issue is that the ARM versions identified with "X" in that table have a system partition that is too small to accommodate the Intel code. So a migration install is not possible for those units. Whatever you do, do NOT subject your only copy of your data to an upgrade. At minimum that is a good way to increase your stress level, and at worst, if you mess up badly enough, a good way to lock yourself out of your data or lose it entirely.
  3. I think it should work fine. If you have not reviewed this information, you may find it helpful: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  4. A couple of comments. System Partition is Synology's generic term for the OS partition. If you do a migration install, you will have a damaged System Partition, since the RAID 1 across all the disks is not consistent. You could have just clicked "Fix" and all would be fine, as it would have propagated your booted DSM. Depending on your hardware, there may be normal SMART-related errors in the syslogs; see "suppressing SMART errors" for a workaround. Glad you got yourself up and running on the new platform. Congrats
  5. Which platform and loader are you installing? CSM/Legacy boot = non-UEFI. UEFI = CSM/Legacy boot off. CSM/Legacy boot is required for 1.03b and DS3615xs/DS3617xs. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ Since you say you tried a supported Intel CT card, I suspect you have a CSM/Legacy boot mismatch. There are numerous posts on this. The newest Intel chipsets may not have NIC support unless you load extra.lzma from @IG-88. Also, there are reports that a functioning 1GbE card must be present for some 10GbE cards to work.
  6. Your DSM configuration settings will migrate just fine so no need to delete that folder unless you want to remove them deliberately. Just boot the USB normally. Migrate from find.synology.com
  7. Ok, don't mind your Docker for the moment. Before trying to troubleshoot the filesystem further, I would remove the cache from the Storage Pool.
  8. Before getting started, I would do a test install to your new motherboard, a scratch drive and a clean loader to make sure all your hardware is working. I don't know what you would accomplish by step #1 - that seems unnecessary. You should not reuse a stick when migrating, reburn a clean loader. Be sure to select Migrate install. The migration should work okay.
  9. USB must remain connected. It is used for each boot and for upgrades. I don't see a question regarding SSD vs HDD; SATA SSDs are fully supported. DSM on XPenology and DSM on regular Synology are fully interoperable. Also, aside from licensing limits, there are no issues running apps, packages, etc. on DSM on XPenology.
  10. Well that tells us that the cache is intact and in use and is being used as the target to mount your volume. Can you post your Storage Pool status screen? This is one of those situations where the problem is harder to address because of the use of cache. And let's try to get your Docker shut down, use the manual command: # synoservice --stop pkgctl-Docker I would be interested to see if your volume error stops with Docker. If it does not, we can look into whether you can use the UI to remove the cache or try it manually as well.
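      For reference, a minimal sequence to do that from the shell and then re-check the array and mount status (run as root over SSH; md device names and volume paths vary by system, so treat these as illustrative):
      # synoservice --stop pkgctl-Docker
      # cat /proc/mdstat
      # df -h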
  11. mdadm should allow you to reshape without unmounting, and it is easier if you don't have to. I would stop Docker from Package Manager before trying a mounted reshape, however. The only command I believe you should have to run is the reshape itself (decreasing the array size, as in the article, would not apply to you), i.e.: # mdadm --grow /dev/md2 --level=0 I expect that DSM will automatically increase your Storage Pool and volumes when more space is available, but if it does not, it's easy to do. Obviously, have a backup before doing anything like this. The reshape is dependent upon kernel support, and Synology modifies mdadm for their own purposes, so no guarantee it will work. If it does work, you save time. If it doesn't, you are back to the manual process.
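      A hedged sketch of the whole sequence, assuming the data array is /dev/md2, the volume is mounted at /volume1, and the filesystem is btrfs (use resize2fs instead for ext4); as noted above, none of this is guaranteed to work against Synology's modified mdadm:
      # cat /proc/mdstat                       (confirm current level and member disks)
      # mdadm --grow /dev/md2 --level=0        (the reshape itself)
      # cat /proc/mdstat                       (watch reshape progress)
      # btrfs filesystem resize max /volume1   (only needed if DSM does not expand the volume on its own)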
  12. I think you are right, but I am surprised to see a chip without these microarchitecture features introduced in 2012. Bobcat lacks the FMA3 instructions that seem to be required for DS918+. https://en.wikipedia.org/wiki/Bobcat_(microarchitecture) So OP, you will need to use the 1.03b loader and DS3615xs or DS3617xs. And the main reason folks are not successful with this combination is that they are unable to get the USB to boot in legacy/CSM mode, which is an absolute requirement. All this said, you aren't missing all that much. 6.1.7 is fully functional, and really there is little that 6.2.x does that 6.1.x does not.
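      If you want to check this yourself, the CPU's reported instruction-set flags can be read from a Linux shell (illustrative only, not an official compatibility test; FMA3 shows up as the "fma" flag):
      # grep -m1 -o -w fma /proc/cpuinfo
      If nothing is printed, the CPU does not advertise FMA3.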
  13. Sorry I haven't been paying much attention to this thread. I don't fully understand the German screenshots so I am a little challenged. I am completely confused by the reports on the volume mount, however. Based on the mdstat, it looks like @Subtixx has a straight RAID 5 (with a read cache), and not SHR. So there should be no /dev/vg1 or /dev/vg1000, and the data volume must be directly contained within /dev/md2. Yet you have posted volume group and logical volume screenshots. It was reported that your files are accessible right now, so the volume must be presently mounted. We need to be sure how things are set up. Please post the output of: # cat /etc/fstab and # df Also, Synology Docker creates and loads btrfs subvolumes. Most of that is temporary/transient data, and the subvolumes are destroyed when the container is ended. If you go into Package Manager and stop Docker, do you see any different reports from your filesystems?
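      For completeness, those commands plus a quick way to see Docker's btrfs subvolumes (assuming the volume is mounted at /volume1; adjust the path to match your system):
      # cat /etc/fstab
      # df
      # btrfs subvolume list /volume1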
  14. I recommend loader 1.04b and DS918+ 6.2.3 for the best results with new hardware such as this. I don't use AMD, but the only thing I know of that may be an issue is to ensure that C1/C1E is disabled in the BIOS. Your CPU is new and is an integrated platform; you have not posted the motherboard type. It is POSSIBLE that the Realtek NIC is a new silicon rev and is not supported by the drivers available. You can try the extra.lzma for DS918+ 6.2.3 and see if that helps, or you can try buying an Intel PCIe CT network card (about $20), which is definitely supported.
  15. How are the SATA drive bays connected? The T320 has a lot of controller options. How is your system configured? While you are at it, it would be helpful - as always - to provide information about the loader and DSM versions you are running. EDIT: I see you are on 5.2, the final version of which was released six years ago. You may want to upgrade to a newer version of the software and the loader in order to resolve this problem. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/
  16. No problem, I can run Google for you. https://www.google.com/search?q=docker+on+synology&oq=docker+on+synology&aqs=chrome.0.0l3j0i395l3j69i60l2.1735j1j7&sourceid=chrome&ie=UTF-8 https://www.baitando.com/it/2019/09/22/using-docker-on-synology-nas https://forums.plex.tv/t/official-plex-media-server-docker-images-getting-started/172291 https://hub.docker.com/r/plexinc/pms-docker/
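      As a hedged starting point based on the plexinc/pms-docker image linked above (the timezone, claim token, and host paths are placeholders; check the image's current README for the full list of supported variables and mounts):
      # docker run -d --name plex --network=host -e TZ=Europe/Berlin -e PLEX_CLAIM=claim-XXXXXXXX -v /volume1/docker/plex:/config -v /volume1/video:/data plexinc/pms-docker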
  17. Is there a reason you don't use Docker to run Plex? It is superior to the SPK in my opinion and you get updates in a much more timely manner.
  18. Well, you can just remove one of the disks and what is left will technically be RAID0 (a critical RAID5 is a RAID0), with space lost for parity. Manual reshape of the array with mdadm ought to permanently convert it to RAID0. You might have to unmount your volume and stop the array first. See this: https://wysotsky.info/mdadm-convert-raid5-raid10/
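      A rough sketch of that reshape, assuming the data array is /dev/md2 (confirm with the first command; whether --raid-devices must also be specified depends on the state of the array, so follow the linked article and have a backup first):
      # mdadm --detail /dev/md2
      # mdadm --grow /dev/md2 --level=0
      # cat /proc/mdstat                  (watch the conversion progress)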
  19. Synology natively uses a custom circuit that is connected via serial port. So it is unlikely that you can use any of the internal Synology code to control your fans. BIOS is probably the best option but as you found, that is dependent upon motherboard implementation. I have also done what you have - install lm-sensors etc. and that can be made to work with some fiddling. fancontrol and pwmconfig are scripts that are usually installed with lm-sensors. If you are not able to execute them, something is wrong with your installation.
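      For reference, the usual sequence with those tools once lm-sensors and fancontrol are installed (this assumes the motherboard's sensor/PWM chips are actually exposed to the kernel, which is the part that varies):
      # sensors-detect        (probe for hardware monitoring chips, answer the prompts)
      # sensors               (confirm temperature and fan readings appear)
      # pwmconfig             (interactively map PWM outputs to fans and write /etc/fancontrol)
      # fancontrol            (run the control loop using /etc/fancontrol)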
  20. @bateau please post your question in the questions forum, and please include details of your system - what loader and software versions you were running before, etc.
  21. I'm using the U-NAS 4-bay and 8-bay cases. They are very compact but hard to build.
  22. I was under the impression that this message was part of Synology's hardware validation feature, and that if validation failed (meaning the loader hack did not work), this message was the result. I don't believe it means there is a data integrity problem. However, I would start by looking at your arrays to see if they are damaged, then start documenting the system. OP seems to be wandering through online tutorials (he has already tried filesystem checks for both ext4 and btrfs, when a volume can only be one or the other), and that will only result in damage. If the arrays look intact, I'd consider regenerating a loader and doing a migration install to see if that fixes the issue.
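      A minimal, read-only check of the arrays from the shell before attempting any repair (md0/md1 are the system/swap arrays and md2 and up hold data on a typical DSM install, but confirm on the actual box):
      # cat /proc/mdstat
      # mdadm --detail /dev/md2
      # mount | grep volume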
  23. It is not required, but it often is helpful for troubleshooting, since there is little that can go wrong with its configuration. If you haven't done XPe with ESXi before, configure with a virtual disk (but don't provision a Storage Pool on it), then delete it once your RDM drives are up and running correctly.
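      For the RDM part, a hedged example of creating a physical (pass-through) RDM pointer on the ESXi host, which is then added to the VM as an existing disk (the device identifier and datastore path are placeholders; use -r instead of -z for a virtual RDM):
      # vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_SERIAL /vmfs/volumes/datastore1/xpenology/disk1-rdm.vmdk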