Kanedo

Members
  • Content count: 78

Community Reputation

2 Neutral

About Kanedo

  • Rank
    Regular Member

  1. Let us know if it works in a startup script.
  2. No problem. Glad it worked. You'll have to run this after every reboot, though, or you can put it in a startup script (see the sketch under the next post).
  3. Try this and see if it works:

      sudo find /sys/devices/ -type f -name locate -exec sh -c 'echo 0 > "$1"' -- {} \;
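    A minimal sketch of such a startup script, assuming your DSM build still runs /usr/local/etc/rc.d/*.sh scripts at boot (the path, the script name, and the start/stop convention are assumptions on my part, not from the posts above):

      #!/bin/sh
      # /usr/local/etc/rc.d/ledoff.sh (hypothetical path and name)
      # DSM calls rc.d scripts with "start" on boot and "stop" on shutdown.
      case "$1" in
        start)
          # Re-run the command from the post above; no sudo needed since rc.d runs as root
          find /sys/devices/ -type f -name locate -exec sh -c 'echo 0 > "$1"' -- {} \;
          ;;
        stop)
          ;;
      esac
      # Remember to make it executable: chmod +x /usr/local/etc/rc.d/ledoff.sh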
  4. DSM 6 - Missing 2 Disks

    It's because the enumeration of your disks is not contiguous. Notice that /dev/sde and /dev/sdf are missing from your list. When you set internalportcfg to 0xfff, you're assigning /dev/sda - /dev/sdl as disks 1-12. Because you have a gap in your enumeration (missing /dev/sde and /dev/sdf), the remaining 8 disks are pushed to /dev/sdg - /dev/sdn. Since /dev/sdm and /dev/sdn fall outside /dev/sda - /dev/sdl, two disks won't show up. The easiest way to solve your problem is simply to increase the number of enumerated slots:

      maxdisks=16
      internalportcfg=0xffff
      usbportcfg=0x1f0000
      esataportcfg=0x0

    With this config, you're allocating disks 1-16 (/dev/sda - /dev/sdp) for internal use. A sketch of applying the change follows below.
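    A minimal sketch of applying those values, assuming the keys live in /etc.defaults/synoinfo.conf with quoted values as on stock DSM (the file path and the quoting are assumptions; back up the file first):

      # Run as root; edit the defaults file so DSM regenerates /etc/synoinfo.conf from it
      cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak
      sed -i 's/^maxdisks=.*/maxdisks="16"/' /etc.defaults/synoinfo.conf
      sed -i 's/^internalportcfg=.*/internalportcfg="0xffff"/' /etc.defaults/synoinfo.conf
      sed -i 's/^usbportcfg=.*/usbportcfg="0x1f0000"/' /etc.defaults/synoinfo.conf
      sed -i 's/^esataportcfg=.*/esataportcfg="0x0"/' /etc.defaults/synoinfo.conf
      # Reboot for the new slot mapping to take effect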
  5. Disable/Hide 50MB VMware Virtual Disk?

    The proper way to move synoboot to a much higher enumeration is to change DiskIdxMap in grub.cfg. On Jun's 1.03b, the default is DiskIdxMap=0C (0x0C = disk 13). Change it to something much higher, e.g. DiskIdxMap=1F (0x1F = disk 32); a sketch of the edit follows below. For more info: https://github.com/evolver56k/xpenology/blob/master/synoconfigs/Kconfig.devices#L245
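    A minimal sketch of making that edit from a running box, assuming grub.cfg sits in the grub/ directory of the synoboot1 partition (as in the mount walkthrough in the next post) and that DiskIdxMap=0C appears verbatim on one line; exact contents vary by loader version, so check the file before running sed on it:

      sudo -i
      mkdir -p /tmp/mountMe
      mount -t vfat /dev/synoboot1 /tmp/mountMe
      # Inspect first: grep DiskIdxMap /tmp/mountMe/grub/grub.cfg
      sed -i 's/DiskIdxMap=0C/DiskIdxMap=1F/' /tmp/mountMe/grub/grub.cfg
      umount /tmp/mountMe
      # Reboot so the loader picks up the new mapping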
  6. This is a repost of an archive I posted in 2015. This method works for DSM 6.2 using Jun's loader 1.03b for me.

    1) Enable SSH and ssh into your DiskStation
    2) Become root ( sudo -i )
    3) Make a mount point ( mkdir -p /tmp/mountMe )
    4) cd into /dev ( cd /dev )
    5) Mount synoboot1 on your mount point ( mount -t vfat synoboot1 /tmp/mountMe )
    6) Profit!

    admin@DiskStation:~# sudo -i
    root@DiskStation:~# mkdir -p /tmp/mountMe
    root@DiskStation:~# cd /dev
    root@DiskStation:/dev# mount -t vfat synoboot1 /tmp/mountMe
    root@DiskStation:/dev# ls -l /tmp/mountMe
    total 2554
    -rwxr-xr-x 1 root root 2605216 Aug  1 10:40 bzImage
    drwxr-xr-x 3 root root    2048 Aug  1 10:40 EFI
    drwxr-xr-x 6 root root    2048 Aug  1 10:40 grub
    -rwxr-xr-x 1 root root     103 Jul  3 15:09 GRUB_VER
    -rwxr-xr-x 1 root root     225 Aug  1 10:40 info.txt
    root@DiskStation:/dev#
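    One follow-up worth adding (not part of the original post): unmount the boot partition when you're done, so any edits are flushed to disk before you reboot:

      root@DiskStation:/dev# umount /tmp/mountMe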
  7. Yes, I have that exact card and it works with the current xpenology boot image.
  8. Possible 'ideal' XPE MOBO

    Terrific. Please report back your findings.
  9. Transfer Speeds >1Gb/s

    You can choose the default adapter in the network management section of DSM. Try setting the default adapter to your 1Gb NIC.
  10. Possible 'ideal' XPE MOBO

    Totally agree! It all depends on the SATA controller chips used. There are a few Marvell chips that don't play too well with the latest XPE builds. Very curious which chips these come with. Official product page: http://www.hcipctech.com/Home/ProductCo ... &english=2

    I believe the 19NVR3 uses the quad-core J1900; the OP linked to the 18NVR3, which is the dual-core J1800 model. At over $150 + shipping for the J1900 model, I'm not sure this really is an ideal solution. Neither the J1800 nor the J1900 is a particularly powerful processor, and I think you can get more for your money elsewhere. By the time you max out the 13 SATA ports on your server, you might be concerned with a faster processor, ECC memory, and possibly a 10Gb network.

    However, if all you want is the maximum number of SATA ports with low power consumption, this does seem like an option. Although I would say XPE may not be the most ideal OS if you're going after low power; something like UnRAID would typically use less power, since it doesn't spin up all the drives.
  11. Advice on 10gb network card

    neuro1, there are two ways you can expose an Intel X520-DA2 to your XPE VM in ESXi.

    1) IOMMU (VT-d). Basically this is PCIe passthrough, if your CPU and motherboard support it. This lets you assign the X520 for exclusive use by the XPE VM. XPE has built-in drivers for the X520, so it will see it as a physical card. The downside to this approach is that only your XPE VM gets 10Gb access outside the host.

    2) Assign your X520 as the uplink on a vSwitch, then set one of your XPE VM's virtual network adapters to the VMXNET3 adapter type on that 10Gb vSwitch. This is the method I employ, so multiple VMs can all talk out over the 10Gb adapter. A rough sketch follows below.
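    A rough sketch of option 2 from the ESXi shell, assuming the X520 port shows up as vmnic2 (the vmnic, vSwitch, and port group names here are hypothetical; the same setup can be done in the vSphere UI):

      # Create a vSwitch and attach the 10Gb port as its uplink
      esxcli network vswitch standard add --vswitch-name=vSwitch10G
      esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch10G
      # Add a port group for VMs, then point the XPE VM's VMXNET3 adapter at it
      esxcli network vswitch standard portgroup add --portgroup-name=10G-VMs --vswitch-name=vSwitch10G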
  12. Advice on 10gb network card

    Basically, if you want to load balance across multiple clients, link aggregation/bonding of multiple 1GbE links is a perfectly acceptable solution. However, a bond hashes each connection onto a single member link, so if your goal is to achieve higher than 1Gb/s to a single client, I'm afraid the only practical and software-agnostic solution is a faster interface such as 10GbE.
  13. Marvell 88SX6081 / SuperMicro SAT2-MV8 SATA

    I have this exact card. It worked with Nanoboot 5.1 via a kernel option. However, I cannot get it to work with XPEnoboot 5.2 at all. You'll be better off just forking over the cash for an LSI SAS2 card.
  14. Bios requirements for XPEnoboot?

    I have the same card. Apparently XPEnoboot broke support for some Marvell cards; it used to work with Nanoboot 5.1. So basically, I can't use those cards either until it's fixed in a future version of the bootloader.
  15. ECC Ram worth it?

    If you're most concerned with how your board looks, I'm not sure any argument for ECC or other server-board features will sway you. I personally feel uptime and data integrity are more important than anything else in a file server. File servers should be functional and reliable; looks should be a secondary concern in my book.

    People who don't recommend ECC are the ones who just haven't lost data YET. Once you've lost data, or had corrupt data due to bad RAM, you'll swear by ECC no matter the cost. If you don't have the money, or don't want to spend it on a dedicated setup, then by all means use whatever you have. However, if you're building from scratch, definitely get a server board + ECC. Things like IPMI, ECC memory, IOMMU, and Intel NICs are features you'll only appreciate once you've had them.

    Bottom line: if your board and CPU support ECC, get it. If they don't, make sure you get it for your next full setup.