Everything posted by flyride

  1. The synoboot.vmdk referencing the bootloader img should be connected via the default virtual SATA controller, and it will be automatically hidden when you boot with the ESXi option. No other devices should be attached to that SATA controller. I have had the best results with VMDKs for data drives by adding a separate virtual SCSI controller to the VM and connecting the virtual disks to that (see the sketch below). If you plan to pass through a physical SATA controller and all the drives attached to it, that generally works without much fanfare, assuming DSM has driver support for the hardware.
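     A rough .vmx excerpt illustrating that layout (file names and the SCSI controller type are only examples; the same arrangement can be built entirely in the ESXi web UI). The loader sits alone on the default SATA controller and the data disk hangs off the separate virtual SCSI controller:

        sata0.present = "TRUE"
        sata0:0.present = "TRUE"
        sata0:0.fileName = "synoboot.vmdk"
        scsi0.present = "TRUE"
        scsi0.virtualDev = "lsisas1068"
        scsi0:0.present = "TRUE"
        scsi0:0.fileName = "datadisk0.vmdk"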
  2. This is the same problem I'm encountering specifically with the 6.1.6 upgrade. Try using 6.1.5 for now until we figure out what's going on.
  3. Is that after attempting the 6.1.6 update? If so, you should be posting in the 6.1.6 thread in the Critical Updates section.
  4. Verified a unique serial number and MAC address and repeated the test, but it still fails so far.
  5. - Outcome of the update: UNSUCCESSFUL
     - DSM version prior update: DSM 6.1.5-15254 Update 1
     - Loader version and model: JUN'S LOADER v1.02b - DS3617xs
     - Using custom extra.lzma: NO
     - Installation type: VM - ESXi 6.5 (test VM)
     - Additional comments: Does not come back up after reboot and shows up in Synology Assistant as Not Installed, without a disk installed. Replacing the boot loader makes the disk visible and the VM come up as "Recoverable." Attempting to recover ends up rebooting back into the Not Installed state. 6.1.6 installs successfully on the same config.
  6. I'm seeing a problem where both the boot loader and the boot drive are corrupted by the upgrade. I tried a few times, each with the same result. But I realized that somewhere along the line I accidentally substituted my production boot loader, so the same serial number was active on the network twice at the same time, and I am wondering if maybe that is the issue. I will test later today to prove or disprove this idea. But is a duplicate serial number on the same network a possibility for you?
  7. USING PHYSICAL RDM TO ENABLE NVMe (or any other ESXi accessible disk device) AS REGULAR DSM DISK Summary: Heretofore, XPEnology DSM under ESXi using virtual disks is unable to retrieve SMART information from those disks. Disks connected to passthrough controllers work, however. NVMe SSDs are now verified to work with XPEnology using ESXi physical Raw Device Mapping (RDM). pRDM allows the guest to directly read/write to the device, while still virtualizing the controller. NVMe SSDs configured with pRDM are about 10% faster than as a VMDK, and the full
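     A hedged sketch of how the physical RDM pointer can be created from the ESXi shell (the device identifier, datastore and file names below are placeholders, not taken from the post):

        ls /vmfs/devices/disks/
        vmkfstools -z /vmfs/devices/disks/t10.NVMe____YOUR_NVME_DEVICE_ID /vmfs/volumes/datastore1/DSM/nvme-prdm.vmdk

     The resulting nvme-prdm.vmdk is then attached to the DSM VM as an existing hard disk. The -z flag requests physical compatibility mode, which is what lets the guest talk to the device directly (and is why SMART queries work).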
  8. Solution to the logfile spam problem:
     1) cat /proc/version > /etc/arch-release
     2) edit /var/packages/open-vm-tools/scripts/start-stop-status, and comment out these lines:
        # if [ -e ${PIDFILE} ]; then
        #     echo "$(date) vmtoolsd ($(cat ${PIDFILE})) is running..." >> ${LOGFILE}
        # else
        #     echo "$(date) vmtoolsd is not running..." >> ${LOGFILE}
        # fi
     Any chance to make the SSH port configurable?
  9. Ah, I didn't see that you were running VMware Workstation. I don't have any experience with that, as all my working knowledge comes from ESXi. I'm guessing that the "physical hardware" option in VMware Workstation isn't a true passthrough, so XPEnology doesn't have unfettered access to the hardware. Maybe someone else who knows the product can comment. Most of us build up a server expressly to run XPEnology. I started with DSM on a baremetal server, but switched to ESXi when I couldn't get NVMe functional. As far as running DSM on a VM (and other VMs side-by-side with DSM) vers
  10. I have never edited the ssd db to have SATA SSD recognized on Synology or XPEnology. I don't think that the ssd db even matches the current compatibility list? Honestly, I really don't know what the db is for. For what it's worth, every Intel, Samsung and VM SSD I've tried has been recognized as SSD. Obviously there are a lot of other SSD products on the market. This is in /etc/rc:
      if [ "$PLATFORM" != "kvmx64" -a -f /usr/syno/bin/syno_hdd_util ]; then
          syno_hdd_util --ssd_detect --without-id-log 2>/dev/null
      fi
      I can only guess that syno_hdd_util eval
  11. By default, if ESXi is virtualizing any type of SSD, it will present to the guest as SSD. This can be overridden (SSD->HDD or HDD->SSD) as needed, and maybe some drives ESXi can't determine SSD status for. From the ESXi console (results are trimmed for clarity):
      [root@esxi:/] esxcli storage core device list
      t10.NVMe____INTEL_SSDPE2MX020T4_CVPD6114003E2P0TGN__00000001
         Display Name: Local NVMe Disk (t10.NVMe____INTEL_SSDPE2MX020T4_CVPD6114003E2P0TGN__00000001)
         Size: 1907729
         Device Type: Direct-Access
         Vendor: NVMe
         Model: INTEL SSDPE2MX02
         Is SSD: true
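      If a local SSD is not auto-detected, one way to force it (a sketch based on the stock esxcli SATP rule mechanism; the device identifier is a placeholder and the device needs to be reclaimed or the host rebooted afterward):

         esxcli storage nmp satp rule add --satp=VMW_SATP_LOCAL --device=t10.ATA_____YOUR_DEVICE_ID --option="enable_ssd"
         esxcli storage core claiming reclaim -d t10.ATA_____YOUR_DEVICE_ID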
  12. A faster CPU is going to transcode faster than a slower CPU, unless the slower CPU has hardware acceleration like Quicksync, AND you have software that will use it. So despite your reluctance to use PassMark or another CPU benchmark site, you will only confirm what is already known. If you must test yourself, you should use a standardized transcoding workload: pick a file to transcode and run it on each of the platforms. If you are using Synology Videostation for transcoding, you'll probably have to test it with a stopwatch and the GUI. If you are using Plex, just use the instal
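      One possible standardized workload (hedged; this assumes ffmpeg is available on each platform, for example via a Docker container, and that the identical source file is used everywhere):

         time ffmpeg -i sample_1080p.mkv -c:v libx264 -preset veryfast -c:a aac -f null /dev/null

      Comparing elapsed times (or the encode fps that ffmpeg reports) gives a like-for-like software transcoding number across CPUs.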
  13. I don't think this has anything to do with your update. Your Disk 1 disconnected from the RAID momentarily, which caused your array to go critical (non-redundant). The disk reconnected and appears to be working, but SMART (the drive self-test information) is now reporting a hardware fail state or pending failure on the drive. You should replace it. It is in warranty, so you should be able to print the details of the SMART status (under "Health Info") and WD will send you a new drive. Once you install the new drive, manage your RAID Group 1 and add it to the array to repair it.
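      If you prefer the raw SMART details over the Health Info screen, something like this usually works from the DSM shell (hedged; smartctl availability and the need for the -d flag vary by build and controller, and /dev/sda is a placeholder):

         smartctl -a -d sat /dev/sda

      The reallocated and pending sector counters are the values to quote in a warranty claim.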
  14. This doesn't get any space back; it just avoids disk access. The system drive partition structure is intact on all drives even after the adjustment, so if DSM "reclaims" a disk via hot spare activity or otherwise, it only operates within the preallocated system partition. There is no possibility of damage to any other existing RAID partition on the drives. If the system or swap partitions are deleted on any disk, DSM will call the drive Not Initialized. Any activity that initializes a drive will create them, no exceptions
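      For reference, the layout can be confirmed from the shell (illustrative; run as root, and /dev/sda stands in for any member disk):

         fdisk -l /dev/sda

      On a standard install the first partition is the DSM system partition, the second is swap, and the remainder are the data/RAID members, which is why removing either of the first two makes DSM report the drive as Not Initialized.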
  15. Just to clarify the example layout:
      /dev/md0 is the system partition, /dev/sda.../dev/sdd. This is a 4-disk RAID1
      /dev/md1 is the swap partition, /dev/sda.../dev/sdd. This is a 4-disk RAID1
      /dev/md2 is /volume1 (on my system /dev/sda.../dev/sdd RAID5)
      Failing RAID members manually in /dev/md0 will cause DSM to initially report that the system partition is crashed as long as the drives are present and hotplugged. But it is still functional and there is no risk to the system, unless you fail all the drives of course. At that point cat /proc/mdstat w
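      A hedged illustration of what that manual failure looks like (sdd1 is just an example member of the system RAID; re-adding it afterwards lets DSM resync the RAID1):

         mdadm --manage /dev/md0 --fail /dev/sdd1      # mark one system-partition member failed
         cat /proc/mdstat                              # md0 now shows that member flagged (F)
         mdadm --manage /dev/md0 --remove /dev/sdd1    # take it out of the array
         mdadm --manage /dev/md0 --add /dev/sdd1       # add it back and let it resync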
  16. USE NVMe AS VIRTUAL DISK / HOW TO LIMIT DSM TO SPECIFIC DRIVES NOTE: if you just want to use NVMe drives as cache and can't get them to work on DSM 6.2.x, go here. Just thought to share some of the performance I'm seeing after converting from baremetal to ESXi in order to use NVMe SSDs. My hardware: SuperMicro X11SSH-F with E3-1230V6, 32GB RAM, Mellanox 10GbE, 8-bay hotplug chassis, with 2x WD Red 2TB (sda/sdb) in RAID1 as /volume1 and 6x WD Red 4TB (sdc-sdh) in RAID10 as /volume2. I run a lot of Docker apps installed on /volume1. This worked the 2TB Reds (which are n
  17. Not if everything you have is working now. If you are having trouble with some 10GbE network cards, 3617xs has some additional native driver support. See:
  18. Answers: 1) Don't pass through the USB controller; just virtualize the USB devices you need in the VM. This also works for a physical synoboot key if you want to do that. A related hint: don't use two USB keys with the same VID/PID (obvious in hindsight). 2) The ESXi option on the grub boot menu hides the boot drive/controller (not affected by SataPortMap, etc.). I got better results by making sure that the drive and controller order is correct in the VM (boot drive and controller first).
  19. Again, it doesn't work that way. You can create a volume1 on a new disk group that is mirrored across all the HDDs, but you can't save packages on the system or swap RAIDs. You can't install packages until you create at least a volume1, and that volume will be on a different partition than system/swap.
  20. I have been running XPEnology for many months in a bare-metal configuration. I started experimenting with enterprise NVMe drives a few weeks ago to address Docker/system IOPS limits, and despite some success getting DSM utilities to recognize them, it's apparent that the core udev functionality Synology uses for hotplug support will prevent any reliable hacking of NVMe volumes into the system. Hopefully better NVMe drive support will come soon from Synology themselves. In the meantime, I have drives with 1.5 GB/s write rates and a 1024-deep command queue, so I had better find a way to use them.
  21. ds3615xs is older x86 architecture and has the most package support.
      ds3617xs is slightly newer x86 architecture and has better native add-in card support since it has an exposed PCI slot, but extra.lzma mostly covers the bases on all platforms.
      ds916 has the most recent x86 instruction set and QuickSync support, if you want to hardware transcode and have compatible hardware.