XPEnology Community

flyride
Moderator
Posts: 2,438
Days Won: 127

Everything posted by flyride

  1. RS is a different platform altogether and you somehow selected an older version as well.
  2. RS3617xs+ won't work. I suggest you use this image here: https://archive.synology.com/download/DSM/release/6.2.3/25426/DSM_DS3617xs_25426.pat
  3. You can't make the console do more than that; it's a function of DSM 6. If you want to monitor console messages, connect a serial port and run a terminal app; the boot messages are available there.
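     For example, a minimal way to attach from a Linux machine with screen, assuming the serial adapter shows up as /dev/ttyUSB0 and the console runs at the usual 115200 baud (both are assumptions; check your hardware):

         # attach to the serial console; Ctrl-A then K detaches
         screen /dev/ttyUSB0 115200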
  4. Ok, it failed a quick test. It doesn't really say why. The SMART stats look fine. Try manually running another quick test through the UI, and if it passes, the errors will be gone.
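     If you'd rather do it from the shell, a rough equivalent with smartctl (assuming the drive in question is /dev/sda; substitute your device):

         # start a short self-test, wait a few minutes, then review the results
         smartctl -t short /dev/sda
         smartctl -a /dev/sda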
  5. I don't think it matters much; there isn't anything unique to the vmdk definition between versions. Here's the one running on my test DS918+ system:

         $ cat synoboot.vmdk
         # Disk DescriptorFile
         version=1
         CID=8b9950bd
         parentCID=ffffffff
         createType="vmfs"

         # Extent description
         RW 102400 VMFS "synoboot.ds918.104b.img"

         # The Disk Data Base
         #DDB
         ddb.adapterType = "lsilogic"
         ddb.deletable = "true"
         ddb.encoding = "UTF-8"
         ddb.longContentID = "bf1ed85c590a19a0c8db34278b9950bd"
         ddb.thinProvisioned = "1"
         ddb.uuid = "60 00 C2 9a ee da ca 33-df 5e 04 3f 80 55 f9 62"
         ddb.virtualHWVersion = "10"
  6. From the posted FAQ: https://xpenology.com/forum/topic/9394-installation-faq/?tab=comments#comment-81092 Go to the DSM 6.2 section, then find DS918+; it's in the downloaded zip file.
  7. Nothing wrong with DSM 6.2.3, just some concerns (largely resolved now) for upgraders from previous versions. You don't show how the problem drive is being used. Can you post screenshots of your Storage Pool and Volume configuration in Storage Manager? If it isn't being used, it isn't much of a risk. In any case, DSM is installed on all drives, so failure of this drive is unlikely to make you lose data immediately unless you have provisioned it as a single non-redundant Disk Group. Furthermore, it looks like the drive in question failed a SMART test but isn't showing any bad sectors. Look at the SMART Attributes in the Health Info page for the drive and see if it tells you anything useful.
  8. https://xpenology.com/forum/search/?q=HP N40L&quick=1
  9. If you don't want anything to go to Synology at all, you can trace and block Synology's IPs in your router. DSM phones home a great deal; just ignore the log failures. If you really don't want to see such periodic information in your log, the log ingress can be filtered, as sketched below.
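     As an illustration only: DSM's logging runs through syslog-ng, so a drop rule along these lines could discard the noise before it lands in the log. The source name, match pattern, and file location are all assumptions here; check your DSM version's syslog-ng layout first.

         # hypothetical syslog-ng include: match the noisy messages
         # and consume them with no destination
         filter f_drop_phonehome { message("connection failed"); };
         log { source(src); filter(f_drop_phonehome); flags(final); };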
  10. You're asking a lot of general questions that need your own research. Suffice it to say that it is all possible. USB devices are addressable in Linux via /dev. To get you started: df will show you space stats, and ifconfig will show you IP address information. You're going to need to learn some Linux shell scripting and some utilities; I suggest spending some time learning grep, awk, and cut in particular. See the sketch below.
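     To make that concrete, a small sketch that pulls a volume's usage and the primary IP with the tools mentioned (the volume path and interface name are assumptions; adjust for your box):

         #!/bin/sh
         # percent used on /volume1: 5th column of df's last line
         USED=$(df -h /volume1 | tail -1 | awk '{print $5}')
         # IPv4 address of eth0; the cut handles older "inet addr:x.x.x.x" output too
         IP=$(ifconfig eth0 | grep 'inet ' | awk '{print $2}' | cut -d: -f2)
         echo "volume1 used: $USED, ip: $IP"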
  11. Something like this? You'd write scripts to push data to it over USB: http://www.yoctopuce.com/EN/products/usb-displays/yocto-display
  12. Synology does not make their own cards. Intel, Mellanox, Tehuti, and Aquantia are all OEMs for Syno 10GbE. DS3615xs and DS3617xs have native support for these. Understand that there is a base "driver pack" that is part of the loader. DS918+ natively has no 10GbE card support at all; it's added by the loader. IG-88's extra.lzma adds further hardware support and/or newer drivers.
  13. Hmm, not sure how that happened. Can verify it's working correctly now with your libfile.
  14. root@archive:/usr/lib# sha1sum -b /lib64/libsynonvme.so.1
      8c39cdda125b02688c0fb06f5c9aaaf7e06b5295 */lib64/libsynonvme.so.1
  15. root@archive:/usr/lib# nvme list
      Node             SN                   Model                 Namespace Usage                Format       FW Rev
      ---------------- -------------------- --------------------- --------- -------------------- ------------ --------
      /dev/nvme0n1     PHBT721608GH016D     INTEL MEMPEK1W016GA   1         14.40 GB / 14.40 GB  512 B + 0 B  K3110310
      /dev/nvme0n1p1   PHBT721608GH016D     INTEL MEMPEK1W016GA   1         14.40 GB / 14.40 GB  512 B + 0 B  K3110310
      root@archive:/usr/lib# udevadm info /dev/nvme0
      P: /devices/pci0000:00/0000:00:13.0/0000:01:00.0/nvme/nvme0
      N: nvme0
      E: DEVNAME=/dev/nvme0
      E: DEVPATH=/devices/pci0000:00/0000:00:13.0/0000:01:00.0/nvme/nvme0
      E: MAJOR=250
      E: MINOR=0
      E: PHYSDEVBUS=pci
      E: PHYSDEVDRIVER=nvme
      E: PHYSDEVPATH=/devices/pci0000:00/0000:00:13.0/0000:01:00.0
      E: SUBSYSTEM=nvme
      E: SYNO_INFO_PLATFORM_NAME=apollolake
      E: SYNO_KERNEL_VERSION=4.4
      E: USEC_INITIALIZED=212638
      root@archive:/usr/lib#
  16. root@archive:/usr/lib# ls -la *nvme*
      lrwxrwxrwx 1 root root    16 May 19 19:54 libsynonvme.so -> libsynonvme.so.1
      -rw-r--r-- 1 root root 37642 Jul 18 11:15 libsynonvme.so.1
      -rw-r--r-- 1 root root 37642 Jul 18 11:15 libsynonvme.so.1.bak
      root@archive:/usr/lib# diff libsynonvme.so.1 libsynonvme.so.1.bak
      Binary files libsynonvme.so.1 and libsynonvme.so.1.bak differ
      root@archive:/usr/lib# tail -5 /var/log/messages
      2020-07-21T09:03:41-07:00 archive synostoraged: nvme_slot_info_get.c:119 Fail to strip the address of device from PHYSDEVDRIVER=nvme
      2020-07-21T09:03:41-07:00 archive synostoraged: nvme_slot_info_get.c:119 Fail to strip the address of device from PHYSDEVDRIVER=nvme
      2020-07-21T09:04:41-07:00 archive synostoraged: SYSTEM: Last message 'nvme_slot_info_get.c' repeated 1 times, suppressed by syslog-ng on archive
      2020-07-21T09:04:41-07:00 archive synostoraged: nvme_slot_info_get.c:119 Fail to strip the address of device from PHYSDEVDRIVER=nvme
      2020-07-21T09:04:41-07:00 archive synostoraged: nvme_slot_info_get.c:119 Fail to strip the address of device from PHYSDEVDRIVER=nvme
      root@archive:/usr/lib#
  17. No, I never use r/w cache, and really I only have cache for a test system.
  18. Be sure you are using an isolated virtual SATA controller for your loader and then another controller for all the data drives.
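     On ESXi, that separation might look like this in the VM's .vmx file: the loader alone on sata0, and the data disks on sata1. Controller numbers and file names here are illustrative assumptions, not a drop-in config:

         sata0.present = "TRUE"
         sata0:0.fileName = "synoboot.vmdk"
         sata0:0.present = "TRUE"
         sata1.present = "TRUE"
         sata1:0.fileName = "datadisk1.vmdk"
         sata1:0.present = "TRUE"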
  19. Weird, mine doesn't. I'll check into it.
  20. SFP+ to RJ45 transceivers are $18 each at FS.COM. But better yet, next time buy SFP+ network cards and just use DACs (direct-attach copper transceiver cables) at $15 each for direct connect.
  21. Direct connect. Two devices on a wire are faster than a switch (nobody pipe in about collision domains please). If it's not obvious, you will need a (unique) IP network for EACH connection.
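     For instance, addressing two direct-connect links from the shell; the interface names and subnets are made up, and any two non-overlapping networks will do:

         # link 1: desktop eth1 <-> NAS eth1
         ifconfig eth1 10.10.1.1 netmask 255.255.255.0   # desktop side
         ifconfig eth1 10.10.1.2 netmask 255.255.255.0   # NAS side
         # link 2: desktop eth2 <-> NAS eth2, a different IP network
         ifconfig eth2 10.10.2.1 netmask 255.255.255.0   # desktop side
         ifconfig eth2 10.10.2.2 netmask 255.255.255.0   # NAS side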
  22. Understood; based on your stated use case there's really nothing to be gained by moving to DS918+. The decision factors are explained here: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ The number of available disk slots isn't a concern; if you want more than 12 (DS3615xs/17xs) or 16 (DS918+), it's a config entry to override it anyway (see the sketch below).
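     That override lives in synoinfo.conf. As a sketch only: maxdisks and internalportcfg are the actual keys, but the values below are illustrative and must match your port layout.

         # /etc.defaults/synoinfo.conf (mirrored in /etc/synoinfo.conf)
         maxdisks="16"
         internalportcfg="0xffff"

     Note that DSM updates can rewrite this file, so keep a record of any change you make.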
  23. Alternatively, get a 2-port 10GbE card for the desktop, direct connect the 10GbE ports to the NAS, and don't use the 10GbE switch at all. 10GbE switches are still noisy and hot. Why use them if you don't have to?
  24. My point to you is that you aren't limited to 4 bays by the loader (which is coincidentally IG-88's point). It doesn't matter what the original Synology hardware supported; nobody is installing a loader that only supports four drives. The number of drives supported is not a decision point for which loader to use. Spend some time looking through the XPEnology documentation and that will be apparent. TL;DR: DS918+ running via loader 1.04b supports 16 "internal" devices. NVMe cache drives do not count as part of those 16. Non-NVMe SSDs used for cache do count.
  25. You can't use the same device for boot and scratch. So yes, either you need multiple SSDs connected to the C224, or a USB boot and an SSD for scratch.