Jseinfeld

Members
  • Content Count: 20
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About Jseinfeld
  • Rank: Junior Member

Recent Profile Visitors
  509 profile views
  1. - Outcome of the update: SUCCESSFUL - DSM version prior update: DSM 6.2.3-25426 - Loader version and model: Jun's Loader v1.04b DS918+ - Using custom extra.lzma: Yes, v0.13.3 for 6.2.3 by IG-88 - Installation type: BAREMETAL - J4105-ITX + 8GB RAM - Additional Notes: With the powerbutton fix installed, I didn't encounter any trouble with the reboot; it came up just fine after the update. HW accel was also working on first boot.
  2. - Outcome of the update: SUCCESSFUL - DSM version prior update: DSM 6.2.1-23824 Update 6 - Loader version and model: Jun's Loader v1.04b DS918+ - Using custom extra.lzma: Yes, v0.13.3 for 6.2.3 by IG-88 - https://xpenology.com/forum/topic/28321-driver-extension-jun-103b104b-for-dsm623-for-918-3615xs-3617xs/ - Installation type: BAREMETAL - J4105-ITX + 8GB RAM - Additional Notes: Updated the "extra.lzma & extra2.lzma" by SSH'ing in and mounting /dev/synoboot2. Replaced the old ones with the new "extra.lzma & extra2.lzma" files and unmounted the USB from DSM. Af…
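The mount-and-replace procedure described in that post might look roughly like the sketch below. This is a hypothetical outline, not the poster's exact commands: the staging directory and the location of the new files are assumptions. Run with `APPLY=1` as root over SSH; by default the commands are only printed.

```shell
#!/bin/bash
# Dry-run-safe sketch of swapping extra.lzma/extra2.lzma on the loader's
# second partition. /dev/synoboot2 is from the post; paths below it are
# assumptions for illustration.

BOOT=/tmp/synoboot2
NEW=/volume1/homes/admin/extra   # hypothetical: where the new files sit

# Print each command unless APPLY=1 is set in the environment.
run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

run mkdir -p "$BOOT"
run mount /dev/synoboot2 "$BOOT"                       # second loader partition
run cp "$NEW/extra.lzma" "$NEW/extra2.lzma" "$BOOT/"   # drop in the new files
run umount "$BOOT"
```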
  3. The script is designed to be automatic and only needs one small change initially: change the "Highload" value to 65 (more on why that needs to match the "midload" value below). - Save it to your home folder - Make it executable - `chmod +x <scriptname.sh>` - e.g.: `chmod +x scaling.sh` - Run the script (in the background) - `./<scriptname.sh> &` - e.g.: `./scaling.sh &` - By default the script is set to do this ---- Set CPU speed to the lowest value it can support at below 50% load ---- Set CPU speed to the 50% value it can support above 50% &…
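The thresholds described above suggest a simple load-tiered decision. This is a hypothetical sketch of that logic, not the actual scaling.sh: the function name, the example frequencies, and the three-tier split around "midload"/"Highload" are assumptions based on the post's description.

```shell
#!/bin/bash
# Sketch of a load-based frequency pick. Thresholds follow the post:
# below midload run lowest, between midload and Highload run mid,
# above Highload run max. Frequencies are illustrative kHz values.

MIDLOAD=50    # percent load below which we run at the lowest frequency
HIGHLOAD=65   # per the post, set "Highload" to 65 to match "midload"

# Decide a target frequency (kHz) for a given load percentage.
pick_freq() {
  local load=$1 min=$2 mid=$3 max=$4
  if   [ "$load" -lt "$MIDLOAD" ];  then echo "$min"
  elif [ "$load" -lt "$HIGHLOAD" ]; then echo "$mid"
  else                                   echo "$max"
  fi
}

# Dry run: show what a 40% / 60% / 80% load would select.
for load in 40 60 80; do
  echo "load=${load}% -> $(pick_freq "$load" 800000 1500000 2500000) kHz"
done
```

A real script would read the current load (e.g. from /proc/stat) in a loop and write the chosen value into the cpufreq sysfs interface.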
  4. I would be more than happy to volunteer my J4105-ITX & my time to do this, if you have any ideas. I tried the exact instructions in the signature of real3x, but on 6.2.1-23824 they don't appear to work: I lose the "/dev/dri" folders if I replace the stock "extra.lzma"
  5. So wait do we need to copy some files from Fedora to get transcoding working? The extra.lzma in your signature has NO i915 files at all
  6. Nope, even all of that is wrong. Somehow, the more aggressive I get with my changes, the less things work. I reverted everything back to stock and started going through the "/var/log/messages" files. It seems that "/usr/syno/bin/spacetool" is the culprit; on every other boot it will output a line like this: `2019-09-15T02:40:43-07:00 pi-hole spacetool.shared: space_unused_block_device_clean.c:50 Remove [/dev/vg3]` If this line is not present, then further down I'll see confirmation of the "volume" mounting: `2019-09-15T02:43:39-07:00 pi-hole sy…`
  7. Nope, that ^^ is not the solution either. Even after the "migration", my volumes vanished when I rebooted some 2-3 hours later. At this point, I checked "s00_synocheckfstab" and it was back to hosing the "fstab" file (removing the volume1/volume2 entries). This time, I replaced the "s00_synocheckfstab" file (symlink) with my own BASH script (see post above), and that seems to survive extended reboots. Rebooted some 9 hours later and volumes 1 & 2 came back just fine.
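Swapping the symlink for a custom script, as that post describes, could be done along these lines. A dry-run-safe sketch under stated assumptions: the backup name and the replacement script's location are hypothetical; only the s00_synocheckfstab path comes from the posts. Run with `APPLY=1` to execute.

```shell
#!/bin/bash
# Replace the s00_synocheckfstab symlink with a custom script, keeping
# the original around. Commands are printed unless APPLY=1 is set.

TARGET=/usr/syno/cfgen/s00_synocheckfstab
REPLACEMENT="${1:-/root/fix-fstab.sh}"   # hypothetical script location

run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

run mv "$TARGET" "${TARGET}.orig"        # keep the original symlink
run cp "$REPLACEMENT" "$TARGET"
run chmod +x "$TARGET"
```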
  8. Ok, so I did some more digging in the "/etc.defaults/rc" file and found that "/usr/syno/cfgen/s00_synocheckfstab" is the process wiping my fstab. Checking further, that "s00_synocheckfstab", along with all the other "sXX_XXXXXXXXXX" files, is just a link to "/usr/syno/bin/synocfgen". So I wiped that "/usr/syno/bin/synocfgen" and replaced it with the script I had written above. As you may have guessed, this hosed my DSM: on reboot I was getting "Permission Denied" errors on SSH & "You are not authorized to use this service." on the Web Console. Re-insta…
  9. I have a solution; I do not know why it works or how reliable it is, but for the past 3 reboots my disks have come back up. I have found that re-writing the fstab with the correct ext4 entries ensures the volumes are auto-mounted correctly. Made a lil script for this and set it to run at boot: `#!/bin/bash #Echo into fstab echo -e "none /proc proc defaults 0 0\n/dev/root / ext4 defaults 1 1\n/dev/vg3/volume_1 /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0\n/dev/vg3/volume_2 /volume2 ext4 usrjquota=aquota.user,grpjquota=aquo…`
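The boot script quoted in that post is flattened and cut off. A cleaned-up sketch of the same idea might look like this; the volume_2 mount options are an assumption, mirrored from the fully visible volume_1 entry, and the target path is parameterised so the sketch can be dry-run safely (the real target would be /etc/fstab).

```shell
#!/bin/bash
# Rewrite fstab with the ext4 volume entries at boot, per the post.
# FSTAB defaults to a scratch file; point it at /etc/fstab for real use.

FSTAB="${FSTAB:-/tmp/fstab.test}"

# volume_1 line is from the post; volume_2 is assumed to mirror it.
cat > "$FSTAB" <<'EOF'
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg3/volume_1 /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0
/dev/vg3/volume_2 /volume2 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl,relatime 0 0
EOF

echo "wrote $(wc -l < "$FSTAB") entries to $FSTAB"
```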
  10. I am using 2 disks in my NAS - 8TB & 6TB - with SHR RAID-1 set up on them. Using the standard process, I created a 6TB BTRFS volume, which is now mirrored & managed by DSM. This, however, leaves 2TB of space unused on the larger disk. Wanting to make use of this unused space, I used "fdisk", "mdadm", "vgcreate" & "lvcreate" to manually create a partition, RAID array, volume group & logical volume on it. Then, by leaving it unformatted, I was able to get DSM to pick it up as a Crashed Volume. Now, using Storage Manager, I deleted my manually created…
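The four tools named in that post map onto steps roughly like the outline below. This is a speculative sketch, not the poster's commands: every device name, partition number, and array number is an assumption, and the commands are only printed unless `APPLY=1` is set, since they are destructive.

```shell
#!/bin/bash
# Outline of manually claiming the unused 2TB tail: partition, RAID
# array, volume group, logical volume. Dry run by default.

run() { if [ "${APPLY:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi; }

# 1. Partition the unused tail of the 8TB disk (interactive; type fd).
run fdisk /dev/sda
# 2. Build a single-device RAID1 array on the new partition (only one
#    disk has spare space, so --force is needed for raid-devices=1).
run mdadm --create /dev/md3 --level=1 --force --raid-devices=1 /dev/sda6
# 3. Put an LVM volume group and logical volume on the array.
run vgcreate vg4 /dev/md3
run lvcreate -l 100%FREE -n volume_3 vg4
# Leaving /dev/vg4/volume_3 unformatted is what makes DSM report it as
# a Crashed Volume that Storage Manager can then take over.
```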
  11. Thanks. Works great on my Asrock J4105 with the 918+ image.