Kanedo last won the day on November 20 2020

Kanedo had the most liked content!

Community Reputation

10 Good

About Kanedo

  • Rank
    Regular Member


  1. If you use tinycore-redpill: boot as IDE, and after you finish building the bootloader via ./rploader.sh build ... you can switch back to SATA. Here's how I got SATA working: use the SATA DOM boot arg. For this to work, you must choose a model that supports SATA DOM, such as the DS3615xs. All you have to do is add the following boot arg: synoboot_satadom=1 You don't even need a VID or PID.
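With tinycore-redpill, extra kernel args usually go into user_config.json before you run ./rploader.sh build. A hedged sketch of what the addition might look like — the "extra_cmdline" key name is an assumption based on common tinycore-redpill configs, so verify it against your own file:

```json
{
  "extra_cmdline": {
    "synoboot_satadom": "1"
  }
}
```

This merges into whatever other extra_cmdline entries (vid, pid, sn, etc.) your config already has.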
  2. No problem. Glad it worked. You'll have to run this after every reboot though. Or you can put it in some startup script.
  3. Try this and see if it works. sudo find /sys/devices/ -type f -name locate -exec sh -c 'echo 0 > "$1"' -- {} \;
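A hedged sketch of the same find command wrapped in a function, demonstrated against a scratch directory that mimics the sysfs layout, so the logic can be checked without touching real hardware. On a real DiskStation you would point it at /sys/devices (as root), and could call it from a startup script so it survives reboots:

```shell
# disable_locate_leds: write 0 to every "locate" attribute under the given root.
# Against the real /sys/devices this needs root; the scratch demo below does not.
disable_locate_leds() {
    find "$1" -type f -name locate -exec sh -c 'echo 0 > "$1"' -- {} \;
}

# Demo against a fake sysfs tree (these paths are made up for illustration):
tmp=$(mktemp -d)
mkdir -p "$tmp/pci0000:00/ata1"
echo 1 > "$tmp/pci0000:00/ata1/locate"

disable_locate_leds "$tmp"
cat "$tmp/pci0000:00/ata1/locate"   # prints 0
```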
  4. It's because the enumeration of your disks is not contiguous. Notice that /dev/sde and /dev/sdf are missing from your list. When you set internalportcfg to 0xfff, you're assigning /dev/sda - /dev/sdl as disks 1-12. Because you have a gap in your enumeration (missing /dev/sde and /dev/sdf), the remaining 8 disks are pushed to /dev/sdg - /dev/sdn. Since /dev/sdm and /dev/sdn fall outside of /dev/sda - /dev/sdl, two disks won't show up. The easiest way to solve your problem is simply to increase the number of enumerated slots: maxdisks=16 internalportcfg=0xffff
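To see why 0xfff means twelve slots, count its set bits: each bit in internalportcfg marks one /dev/sdX position as internal. A quick illustrative sketch (popcount here is just a throwaway helper, not a DSM tool):

```shell
# popcount: count set bits; each set bit of internalportcfg is one disk slot.
popcount() {
    v=$(($1)); n=0
    while [ "$v" -gt 0 ]; do
        n=$((n + (v & 1)))
        v=$((v >> 1))
    done
    echo "$n"
}

popcount 0xfff    # prints 12  -> slots /dev/sda .. /dev/sdl
popcount 0xffff   # prints 16  -> slots /dev/sda .. /dev/sdp
```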
  5. The proper way to move synoboot to a much higher enumeration is to change DiskIdxMap in grub.cfg. On Jun's 1.03b, the default is DiskIdxMap=0C (0C = Disk 13). Change it to something much higher, e.g. DiskIdxMap=1F (1F = Disk 32). For more info: https://github.com/evolver56k/xpenology/blob/master/synoconfigs/Kconfig.devices#L245
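The hex value is the zero-based index of the first slot for the controller, so the boot device shows up as disk (value + 1). Quick arithmetic check:

```shell
# DiskIdxMap holds a zero-based starting slot, so the device lands at
# disk number (hex value + 1).
for hex in 0C 1F; do
    echo "DiskIdxMap=$hex -> Disk $((0x$hex + 1))"
done
# prints:
# DiskIdxMap=0C -> Disk 13
# DiskIdxMap=1F -> Disk 32
```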
  6. This is a repost of an archive I posted in 2015. This method works for DSM 6.2 using Jun's loader 1.03b for me.
     1) Enable SSH and ssh into your DiskStation
     2) Become root ( sudo -i )
     3) Make a mount point ( mkdir -p /tmp/mountMe )
     4) cd into /dev ( cd /dev )
     5) Mount synoboot1 at your mount point ( mount -t vfat synoboot1 /tmp/mountMe )
     6) Profit!

     admin@DiskStation:~# sudo -i
     root@DiskStation:~# mkdir -p /tmp/mountMe
     root@DiskStation:~# cd /dev
     root@DiskStation:/dev# mount -t vfat synoboot1 /tmp/mountMe
     root@DiskStation:/dev# ls -l /
  7. Yes, I have that exact card and it works with the current xpenology boot image.
  8. Terrific. Please report back your findings.
  9. You can choose the default adapter in the network management section of DSM. Try to set the default adapter to your 1Gb NIC.
  10. Totally agree! It all depends on the SATA controller chips used. There are a few Marvell chips that don't play too well with the latest XPE builds. Very curious which chips these come with. Official product page: http://www.hcipctech.com/Home/ProductCo ... &english=2 I believe the 19NVR3 uses the quad-core J1900; OP linked to the 18NVR3, which is the dual-core J1800 model. At over $150 plus shipping for the J1900 model, I'm not sure this really is an ideal solution. Neither the J1800 nor the J1900 is a particularly powerful processor. I think you can get more for your money.
  11. neuro1, there are two ways you can expose an Intel X520-DA2 to your XPE VM in ESXi. 1) IOMMU (VT-d). Basically this is PCIe passthrough, if your CPU and motherboard support it. It lets you assign the X520 for exclusive use by the XPE VM; XPE has built-in drivers for the X520, so it will see it as a physical card. The downside to this approach is that only your XPE VM gets 10Gb access outside the host. 2) Assign your X520 to a vSwitch, then set one of your XPE VM's virtual network adapters to the VMXNET3 adapter type on that 10Gb vSwitch. This is the method I
  12. Basically, if you want to load-balance across multiple clients, a multi-1GbE link-aggregation/bond is a perfectly acceptable solution. However, if your goal is to achieve more than 1Gb to a single client, I'm afraid the only practical, software-agnostic solution is a faster interface such as 10GbE.
  13. I have this exact card. It worked with Nanoboot 5.1 via a kernel option. However, I cannot get it to work with XPEnoboot 5.2 at all. You'll be better off just forking out the cash for an LSI SAS2 card.
  14. I have the same card. Apparently XPEnoboot broke support for some Marvell cards. It used to work with Nanoboot 5.1. So basically, I can't use those cards either until it's fixed in a future version of the bootloader.