flyride

  1. I would add that if DSM were in a best-practice configuration (passthrough of disks to DSM, and use of RAID), btrfs may very well have fixed this problem itself, given the log output.
  2. There is some btrfs corruption that is preventing the volume mount (if not obvious from the log dump). btrfs tried to self-heal but could not in this case. My advice is to try to recover all the files to new storage, then delete and recreate the volume. Here's a thread with some methods of extracting the data from a damaged volume. Start with post #9, read through the rest before doing anything, and ignore any "vgchange" commands, which do not apply to you. There are a few other threads around on repairing/recovering btrfs corruption if you search for them. https://xpenology.com/forum/topic/14337-volume-crash-after-4-months-of-stability/?do=findComment&comment=107979 Very likely you can either mount read-only or restore (recover) the files to other storage; a minimal sketch follows below.
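     As a rough sketch only (assuming the array device is /dev/md2, as elsewhere in this thread; substitute your own), a read-only recovery mount, or a direct file extraction to other storage if the mount fails, would look like:
     # mount -o ro,recovery /dev/md2 /volume1
     # btrfs restore -v /dev/md2 /mnt/recovery
     On kernels 4.5 and newer the "recovery" mount option is named "usebackuproot", and /mnt/recovery is a placeholder for wherever your new storage is mounted.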
  3. So this tells us a few things:
     • Confirms your use of virtual disks, not physical
     • Simple volumes, no RAID (aside from DSM's normal RAID1 for DSM and swap)
     • btrfs filesystems
     • /volume1 is NOT being mounted
     So let's see what the error message is when the system tries to mount your volume:
     # mount -v /dev/md2 /volume1
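     If that mount fails without a useful message, the specific btrfs complaint usually lands in the kernel log; a quick way to pull it up is:
     # dmesg | tail -20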
  4. Your volume might not be mounted, as the underlying root filesystem exists and a process might have tried to reinitialize a mariadb file. Please run the following commands from ssh as root:
     # synodisk --enum
     # synodisk --detectfs /volume1
     # cat /etc/fstab
     # cat /proc/mdstat
     Post the results.
  5. Please repost here and don't thread jack, thanks. https://xpenology.com/forum/forum/82-general-questions/
  6. You are the only person who can determine "safety." @bearcat gave you a link that demonstrates what others are doing. Also, see this: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ For what it's worth, there are really no new features in 6.2.x compared to 6.1.7 on the DS3615xs platform you are using.
  7. If continuously writing spurious errors to log files is in fact the reason hibernation can't occur, there are two fairly feasible solutions: 1) repoint scemd.log to a ramdisk, or 2) adapt the log filter that I posted for SMART error suppression. Either way, take a look and see if it can help. https://xpenology.com/forum/topic/29581-suppress-virtual-disk-smart-errors-from-varlogmessages/ A sketch of option 1 follows below.
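     As an illustrative sketch only, not something from the linked post (and assuming /tmp is a tmpfs mount on your build):
     # mv /var/log/scemd.log /tmp/scemd.log
     # ln -s /tmp/scemd.log /var/log/scemd.log
     The symlink won't survive a reboot, so it would need to be recreated by a boot-time task.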
  8. Again, there is nothing wrong with 6.1.7, as you will gain no additional features with a newer version/loader unless you switch platforms. But if you want to try, see post #3 in this thread and review the link there.
  9. The hypervisor is probably using it. https://community.spiceworks.com/topic/1146570-how-much-free-ram-does-the-hypervisor-need
  10. https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ Use DSM version 6.2.3 with the 1.03b loader. That also requires you to select Legacy/CSM boot for the USB stick (though that may be the only boot mode your motherboard supports anyway).
  11. 6.2.1 and 6.2.2 had driver compatibility problems and were difficult to make work. Stay on 6.1.7 or go to 6.2.3; you'll probably have a different result than with 6.2.2.
  12. You never said which DSM version you tried, only "newer." Should you want 6.2.3, try this combination: loader 1.03b, DSM DS3615xs version 6.2.3-25426. You MUST set Legacy Boot/CSM in your BIOS for your USB stick. The install (VID/PID, etc.) is otherwise the same as what you have done for 6.1.7. That said, there is nothing wrong with 6.1.7; on the DS3615xs platform you have chosen, there really is no difference between 6.1.7 and 6.2.3. As someone decided to hijack your post, I will, without criticism or judgment, state for his/her sake that this exact advice has been posted on average once per day for at least the last month.
  13. Given all your constraints, you should probably ask what you want to use 16 threads for. There is no file-sharing workload that requires more than 8 cores for maximum throughput (my system can max out 10GbE with 4C/8T at well under 100%). Is it to run other VM workloads on the same server? I don't know if anyone has actually done this, but it is technically possible to pass the iGPU through under ESXi 6.7. It is also possible (although it can be hardware-limited) to pass through NVMe devices. You might consider testing a switch to ESXi: run DS918+ DSM as a VM and pass through the iGPU and NVMe. If NVMe passthrough doesn't work on your hardware, you can use RDM or raw device access to present the disk to the VM as SATA and use it as cache; the performance will be the same. That would give you 4C/8T for DSM and whatever you have left for VMs. A sketch of the RDM step follows below.
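     As a hedged sketch (the device path is a placeholder; list yours with "ls /vmfs/devices/disks/" from the ESXi shell):
     # vmkfstools -z /vmfs/devices/disks/<your-nvme-device-id> /vmfs/volumes/datastore1/dsm/nvme-rdm.vmdk
     Attach the resulting nvme-rdm.vmdk to the DSM VM on its SATA controller. The -z flag creates a physical-mode RDM; -r would create a virtual-mode one instead.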
  14. Amazing functionality for the cost: an 8C/16T CPU and 32GB RAM for US$190? Wow. I wonder if they are reclaiming old chipsets or purchased enough new old stock to make a run of boards...