About b0fh

  • Rank
    Regular Member
  1. DSM 6.1.4 - 15217

    Bare-metal N54L, random Athlon Phenom II hardware, random Intel E6850 hardware, and an ESXi guest all upgraded just fine and seem to be humming along.
  2. No. You need to disable transport mode encryption. Synology has confirmed that is the proper workaround at this time.
  3. And do you have SHR enabled in synoinfo.conf? It will certainly "import" existing SHR volumes, but I am not sure you can make any changes without it enabled.
  4. One additional note: my system is reporting less than 4 GB of RAM available but has 8 GB installed. It reported all 8 GB before the upgrade, but now shows something like 3.5 GB, as if it had defaulted back to x86 from x64, yet the kernel still reports x86_64 (uname -a). No biggie, as I am hoping to retire this server soon (and move to a VM), but I thought I would report it. Sorry for spamming up the thread.
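    For what it's worth, a quick way to sanity-check the symptom above: a 32-bit non-PAE kernel typically tops out around 3.2-3.5 GB, which matches what is being reported. A minimal sketch of the two checks, with the MemTotal parsing demonstrated on a captured sample line (the value shown is illustrative; on the box you would read /proc/meminfo directly):

```shell
# Confirm the running kernel architecture; on the 64-bit DSM kernel this
# should print x86_64, not i686.
uname -m

# Convert a MemTotal line from /proc/meminfo (reported in kB) to GB.
# The sample value here is illustrative, not from the poster's box.
sample="MemTotal:        3563816 kB"
echo "$sample" | awk '{printf "%.1f GB\n", $2/1048576}'
```

    If uname reports x86_64 but MemTotal is still low, the limit is more likely a BIOS memory-remap setting than the kernel.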
  5. OK, so the issue was C1E. I could have sworn I had disabled it, but I must have exited the BIOS without saving. It is working fine so far after the upgrade (and it was pretty painless from there on out).
  6. I missed the part about the MBR bootloader. I'll try that. Machine is older to be sure.
  7. I have searched but come up empty. I am using an Intel dual-port NIC that works with the e1000e driver in a bare-metal 5.2 install. I tried the 1.02b loader from Mega that was posted here, and it does not load the NIC drivers. Here is what lspci -v shows under 5.2:

     02:00.1 Class 0200: Device 8086:105e (rev 06)
       Subsystem: Device 8086:115e
       Flags: bus master, fast devsel, latency 0, IRQ 43
       Memory at fea80000 (32-bit, non-prefetchable)
       Memory at fea60000 (32-bit, non-prefetchable)
       I/O ports at e880
       Expansion ROM at fea40000 [disabled]
       Capabilities: [c8] Power Management version 2
       Capabilities: [d0] MSI: Enable+ Count=1/1 Maskable- 64bit+
       Capabilities: [e0] Express Endpoint, MSI 00
       Capabilities: [100] Advanced Error Reporting
       Capabilities: [140] Device Serial Number 00-15-17-ff-ff-c5-ad-1a
       Kernel driver in use: e1000e

     When I try to boot 1.02b, everything looks normal, but I cannot find the NAS with the utilities and it is not pulling a DHCP address. Do I need to explicitly call the extras? It looks like it should work on its own. Sorry to have to ask.
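    One way to narrow this down: pull the vendor:device ID out of the lspci output above, then check whether the loader's e1000e module actually lists that ID among its aliases. A sketch of the ID extraction (the modinfo path in the comment is an assumption; it varies by loader):

```shell
# Extract the PCI vendor:device ID (two 4-digit hex groups) from an
# lspci line like the one in the post above.
line="02:00.1 Class 0200: Device 8086:105e (rev 06)"
id=$(echo "$line" | grep -oE '[0-9a-f]{4}:[0-9a-f]{4}')
echo "$id"

# On the loader's shell you could then check driver coverage with
# something like (path is an assumption, varies per loader build):
#   modinfo /lib/modules/e1000e.ko | grep -i "${id#*:}"
```

    If the device ID does not appear in the module's aliases, that would explain why no link comes up even though the same NIC works under 5.2's driver.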
  8. Still random shutdown on DSM 6.1

    What bootloader are you using?
  9. Single volume vs multiple volume

    No, it must be a volume. You can use a USB drive to transfer apps for backup or while moving them, but it's easiest to just have a separate volume for those things, imho.
  10. Single volume vs multiple volume

    Multi-volume setups can provide a little more flexibility with installed apps, though, if you ever need to rebuild a volume. I like to include an old drive of some sort that is not part of any RAID volume, so I can move installed apps to it if I need to reconfigure the main volume for some reason.
  11. EXPI9404PTLBLK should be supported. I am using the dual-port cards in my Xpenology hardware instances and the quad-port cards in my ESXi instances (which doesn't guarantee the quads are supported in Xpenology hardware instances, but they use the same chipsets).
  12. GaryM - that is an Intel chipset. Seriously - keep up.
  13. Should I go for bare metal or ESXi?

    > You're talking about virtualising an operating system designed to manage low level RAID. I personally would much prefer to run that on bare metal hardware. Sent from my iPad using Tapatalk

    No - I am talking about a general-purpose OS that, in this case, has been slightly customized to run Synology apps. The apps were written to run on Linux. The only thing that really gimps the packages is Synology's own version of DRM (if that is even the right word here), which exists so people cannot do exactly what the Xpenology project does. Don't assume that IoT devices or NAS devices run custom, purpose-built operating systems. They don't. They take busybox, Linux, whatever is already built, and bolt on their customizations. Very little is a custom OS anymore.
  14. Should I go for bare metal or ESXi?

    Designed to run on bare metal? So were Unix, Linux, FreeBSD, OpenBSD, Solaris, whatever, yet they all work fine in VMs as well. Did you know a LOT of embedded systems run inside a VM of sorts? There is no magic about a VM that will make the Linux the NAS is built on run like s**t. Unsupported bare-metal hardware will, though. If everyone relied solely on what software was "designed" to run on, we would all be limited to the very expensive hardware created by a certain three-letter company and be happy with whatever they decided we needed. We would not have been able to buy our own, different hardware to run the software on, because it wasn't designed to run on anything else.
  15. If you used all of your disks to create the volume, then you can only have 1 volume. Think of a volume as a collection of hard drives rather than a share.