flyride
Moderator
Posts: 2,438
Days Won: 127
Everything posted by flyride
-
Ok, it failed a quick test. It doesn't really say why. The SMART stats look fine. Try manually running another quick test through the UI, and if it passes, the errors will be gone.
-
I don't think it matters much; there isn't anything unique to the vmdk definition between versions. Here's the one running on my test DS918+ system:

$ cat synoboot.vmdk
# Disk DescriptorFile
version=1
CID=8b9950bd
parentCID=ffffffff
createType="vmfs"

# Extent description
RW 102400 VMFS "synoboot.ds918.104b.img"

# The Disk Data Base
#DDB

ddb.adapterType = "lsilogic"
ddb.deletable = "true"
ddb.encoding = "UTF-8"
ddb.longContentID = "bf1ed85c590a19a0c8db34278b9950bd"
ddb.thinProvisioned = "1"
ddb.uuid = "60 00 C2 9a ee da ca 33-df 5e 04 3f 80 55 f9 62"
ddb.virtualHWVersion = "10"
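For context, a hedged sketch of how a loader vmdk like this is typically referenced from the VM's .vmx file, with the loader on its own SATA controller and the data disks on a second one (controller numbers and data-disk file names here are examples, not taken from an actual config):

```
sata0.present = "TRUE"
sata0:0.present = "TRUE"
sata0:0.fileName = "synoboot.vmdk"
sata1.present = "TRUE"
sata1:0.present = "TRUE"
sata1:0.fileName = "data1.vmdk"
```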
-
From the posted FAQ: https://xpenology.com/forum/topic/9394-installation-faq/?tab=comments#comment-81092 Go to the DSM 6.2 section, then find DS918+; it's in the downloaded zip file.
-
Nothing wrong with DSM 6.2.3, just some concerns (largely resolved now) for upgraders from previous versions. You don't show how the problem drive is being used. Can you post screenshots of your Storage Pool and Volume configuration in Storage Manager? If it isn't being used, it isn't much of a risk. In any case, DSM is installed on all drives, so failure of this drive is unlikely to make you lose data immediately unless you have provisioned it as a single non-redundant Disk Group. Furthermore, it looks like the drive in question failed a SMART test but isn't showing any bad sectors. Look at the SMART attributes on the Health Info page for the drive and see if they tell you anything useful.
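If you'd rather check from a shell, smartctl can dump the same attribute table; a hedged example (the device path is an example, and availability of smartctl can vary by DSM version):

```shell
# Dump SMART attributes and pick out the counters most often tied to
# real media failure: reallocated, pending, and uncorrectable sectors.
# /dev/sda is an example device path; substitute your drive.
smartctl -A /dev/sda | grep -Ei 'reallocated|pending|uncorrect'
```

Non-zero values on those attributes are the strongest sign of actual surface damage; a failed quick test with all-zero counters is worth re-running before condemning the drive.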
-
https://xpenology.com/forum/search/?q=HP N40L&quick=1
-
You're asking a lot of general questions that need your own research. Suffice it to say that it is all possible. USB devices are addressable in Linux via /dev. To get you started: df will show you space stats, and ifconfig will show you IP address information. You're going to need to learn some Linux shell scripting and some utilities; I suggest spending some time learning grep, awk, and cut in particular.
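A minimal sketch of the kind of script involved, using the utilities mentioned (the mount point and the ifconfig output format are assumptions; adjust for your system):

```shell
#!/bin/sh
# Free space (KB) on the root filesystem: df prints a header row,
# so awk takes the 4th field (Available) of the second line.
free_kb=$(df -k / | awk 'NR==2 {print $4}')
echo "Free on /: ${free_kb} KB"

# First IPv4 address in ifconfig output; handles both the older
# "inet addr:x.x.x.x" and newer "inet x.x.x.x" output styles.
ip_addr=$(ifconfig 2>/dev/null | grep -Eo 'inet (addr:)?[0-9.]+' | head -1 | grep -Eo '[0-9.]+' | head -1)
echo "IP: ${ip_addr}"
```

From there it's a short step to writing the values out to whatever /dev node your USB display exposes.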
-
Something like this, write scripts to push data to USB? http://www.yoctopuce.com/EN/products/usb-displays/yocto-display
-
Synology does not make their own cards. Intel, Mellanox, Tehuti, and Aquantia are all OEMs for Syno 10GbE. DS3615xs and DS3617xs have native support for these. Understand that there is a base "driver pack" that is part of the loader. DS918+ natively has no 10GbE card support at all; it's added by the loader. IG-88's extra.lzma adds further hardware support and/or newer drivers.
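If you're unsure whether your card was actually picked up, sysfs shows which kernel module claimed each NIC; a minimal sketch (no extra tools assumed, interface names will vary):

```shell
#!/bin/sh
# List each network interface and the kernel module driving it.
# Interfaces without a backing device (e.g. lo) report "none".
for dev in /sys/class/net/*; do
  mod=$(basename "$(readlink -f "$dev/device/driver/module" 2>/dev/null)")
  echo "$(basename "$dev"): ${mod:-none}"
done
```

A 10GbE card that shows "none" here was not matched by any driver in the loader or extra.lzma.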
-
Hmm, not sure how that happened. Can verify it's working correctly now with your libfile.
-
root@archive:/usr/lib# sha1sum -b /lib64/libsynonvme.so.1
8c39cdda125b02688c0fb06f5c9aaaf7e06b5295 */lib64/libsynonvme.so.1
-
root@archive:/usr/lib# nvme list
Node             SN                   Model                                    Namespace Usage                      Format           FW Rev
---------------- -------------------- ---------------------------------------- --------- -------------------------- ---------------- --------
/dev/nvme0n1     PHBT721608GH016D     INTEL MEMPEK1W016GA                      1         14.40 GB / 14.40 GB        512 B + 0 B      K3110310
/dev/nvme0n1p1   PHBT721608GH016D     INTEL MEMPEK1W016GA                      1         14.40 GB / 14.40 GB        512 B + 0 B      K3110310
root@archive:/usr/lib# udevadm info /dev/nvme0
P: /devices/pci0000:00/0000:00:13.0/0000:01:00.0/nvme/nvme0
N: nvme0
E: DEVNAME=/dev/nvme0
E: DEVPATH=/devices/pci0000:00/0000:00:13.0/0000:01:00.0/nvme/nvme0
E: MAJOR=250
E: MINOR=0
E: PHYSDEVBUS=pci
E: PHYSDEVDRIVER=nvme
E: PHYSDEVPATH=/devices/pci0000:00/0000:00:13.0/0000:01:00.0
E: SUBSYSTEM=nvme
E: SYNO_INFO_PLATFORM_NAME=apollolake
E: SYNO_KERNEL_VERSION=4.4
E: USEC_INITIALIZED=212638
root@archive:/usr/lib#
-
root@archive:/usr/lib# ls -la *nvme*
lrwxrwxrwx 1 root root    16 May 19 19:54 libsynonvme.so -> libsynonvme.so.1
-rw-r--r-- 1 root root 37642 Jul 18 11:15 libsynonvme.so.1
-rw-r--r-- 1 root root 37642 Jul 18 11:15 libsynonvme.so.1.bak
root@archive:/usr/lib# diff libsynonvme.so.1 libsynonvme.so.1.bak
Binary files libsynonvme.so.1 and libsynonvme.so.1.bak differ
root@archive:/usr/lib# tail -5 /var/log/messages
2020-07-21T09:03:41-07:00 archive synostoraged: nvme_slot_info_get.c:119 Fail to strip the address of device from PHYSDEVDRIVER=nvme
2020-07-21T09:03:41-07:00 archive synostoraged: nvme_slot_info_get.c:119 Fail to strip the address of device from PHYSDEVDRIVER=nvme
2020-07-21T09:04:41-07:00 archive synostoraged: SYSTEM: Last message 'nvme_slot_info_get.c' repeated 1 times, suppressed by syslog-ng on archive
2020-07-21T09:04:41-07:00 archive synostoraged: nvme_slot_info_get.c:119 Fail to strip the address of device from PHYSDEVDRIVER=nvme
2020-07-21T09:04:41-07:00 archive synostoraged: nvme_slot_info_get.c:119 Fail to strip the address of device from PHYSDEVDRIVER=nvme
root@archive:/usr/lib#
-
No, I never use r/w cache, and really I only have cache for a test system.
-
DSM 6.2.3 -- does not see more than one HDD
flyride replied to vadimax's question in Answered Questions
Be sure you are using an isolated virtual SATA controller for your loader, and then another controller for all of the data drives.
-
Weird, mine doesn't. I'll check into it.
-
Understood; based on your stated use case there's really nothing to be gained by moving to DS918+. The decision factors are explained here: https://xpenology.com/forum/topic/13333-tutorialreference-6x-loaders-and-platforms/ The number of available disk slots isn't a concern; if you want more than 12 (DS3615xs/17xs) or 16 (DS918+), it's a config entry to override anyway.
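For reference, that override lives in DSM's synoinfo.conf; a hedged read-only check (key names are from common XPEnology guidance, and /etc.defaults holds the template DSM re-reads on boot, so don't change values blindly):

```shell
# Inspect the current internal-drive limit and the SATA port bitmask
# that accompanies it.
grep -E '^(maxdisks|internalportcfg)=' /etc.defaults/synoinfo.conf
```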
-
My point to you is that you aren't limited to 4 bays by the loader (which is coincidentally IG-88's point). It doesn't matter what the original Synology hardware supported; nobody is installing a loader that only supports four drives. The number of drives supported is not a decision point for which loader to use. Spend some time looking through the XPEnology documentation and that will be apparent. TL;DR: DS918+ running via loader 1.04b supports 16 "internal" devices. NVMe caches do not count as part of those 16; non-NVMe SSDs used for cache do count.
-
You can't use the same device for boot and scratch. So yes, you either need multiple SSDs connected to the C224, or a USB boot device plus an SSD for scratch.