Community Reputation

4 Neutral

About nadiva

  • Rank
    Junior Member


  1. Yes, it's not about the chip shortage, it's greed. They always put outlet hardware inside: really old, discounted, junky chips, only slightly faster than router chips. That's why I will not buy their box. XPenology is a Milky Way better than FreeNAS, which is way better than OMV/Xigma. I do love the technicalities of FreeNAS - jails, native FDE encryption, TCG OPAL support, better NFS and Samba with "Previous versions" support. In DSM I'm getting lousy FBE encryption which cripples tons of functionality (can't replicate some; can't Hyper Backup some; can't encrypt emails; Drive is eating disk space b
  2. Not really, I didn't fill up the 2nd slot (one NVMe cost me half of the NAS microserver), but I plan to upgrade to RAID0 as it fills up. If what you suspect is true (and RAID0 is also at risk), I'd consider an adapter like the Asus Hyper M.2, which I guess has a controller to present the server with a bootable standalone RAID drive. Hopefully! If not, some shares will move back to spinning drives. Still, the "intended purpose" for most people is the single-member array setup, I reckon; the speed and extra redundancy are worth it imo.
  3. This is a barebone setup with the NVMe set up as a standalone drive, same as the SSD:

     Device          Boot Start    End         Sectors     Size Id Type
     /dev/nvme0n1p1       2048     4982527     4980480     2.4G fd Linux raid autodetect
     /dev/nvme0n1p2       4982528  9176831     4194304     2G   fd Linux raid autodetect
     /dev/nvme0n1p3       9437184  4000795469  3991358286  1.9T fd Linux raid autodetect

     md4 : active raid1 nvme0n1p3[0]
           1995678080 blocks super 1.2 [1/1] [U]

     ls /volume3
     @eaDir  Share1  @Share1@  homes  @homes@  @quarantine  @sharesnap  @sharesnap_restoring  @SnapshotReplication  @synology
  4. Yes, they have nomenclatures, but only after the patch (the stock drivers are what hide the NVMes). I think this will vary per model. My patched file wasn't overwritten by updates and is still patched. The second limitation is in the UI: Syno isn't confident about NVMe arrays on their cheap hardware, so they don't offer volume creation. We just do exactly what DSM would do during volume creation: use the same tools to create the volume, and from then on it's treated equally.
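For anyone curious what "use the same tools" looks like, here is a rough command sketch of the kind of procedure discussed in this thread. The device name, the partition-layout number, and the filesystem choice are all assumptions on my part, not confirmed by the post, and these commands destroy everything on the target drive:

```shell
# SKETCH ONLY - device names and options are assumptions; wipes the drive.

# 1. Lay down the standard Synology partition scheme (system / swap / data)
synopartition --part /dev/nvme0n1 12

# 2. Wrap the data partition in a single-member RAID1 so DSM sees a normal md array
mdadm --create /dev/md4 --level=1 --raid-devices=1 --force /dev/nvme0n1p3

# 3. Format it the way Storage Manager would (btrfs assumed here)
mkfs.btrfs -f /dev/md4

# 4. Reboot; DSM should pick the array up as an ordinary volume
```

The point of step 2 is exactly what the post says: publishing the partition as an md array is what makes DSM treat it like any other volume.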
  5. Since being formatted with synopartition, the NVMe acts exactly like the other arrays: it has the same small SYNO partition, and every UI/CLI disk-related command works on it, from creating volumes to monitoring, trimming, replicating, and share transfers back and forth. It has had plenty of time to prove itself, and it became the most reliable drive with the highest availability, along with the SSD (even the HDD RAID had to be rebuilt for no reason, just an internal controller hiccup). Not bad for a cheap external PCIe 3.0 x1-4 adapter. Once NVMes are cheap, I will build big arrays from PCIe 5.0 NVMes in order to utilize multi-40+gbit NICs In futur
  6. Interesting topic. I never could make drives sleep, and it's been frustrating from day one. I never saw "ioctl device failed" errors, so my scemd is quiet. Perhaps apparmor.log is the busy one, constantly dropping dmidecode messages:

     dmidecode: AppArmor: /sys/firmware/dmi/tables/smbios_entry_point denied by profile /usr/sbin/dmidecode

     It's hard to test, since hdparm -Y /dev/sdX will spin down the volume, but it wakes up immediately:

     ata5.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6
     ata5.00: waking up from sleep
     ata5: hard resetting link
     ata5: SATA link up 6.0 Gbps (S
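A rough way to narrow this down, sketched from the commands already mentioned in the post (sdX is a placeholder, and the block_dump knob is an old kernel feature that I'm assuming is still present in DSM 6's kernel):

```shell
# SKETCH - run as root; /dev/sdX is a placeholder for the disk under test.

hdparm -Y /dev/sdX        # force the drive into sleep
sleep 60
hdparm -C /dev/sdX        # "standby" = still asleep; "active/idle" = something woke it

# If it woke up, log block-device activity for a while to find the culprit:
echo 1 > /proc/sys/vm/block_dump
sleep 60
dmesg | grep -E 'READ|WRITE' | tail
echo 0 > /proc/sys/vm/block_dump
```

If fileindexd or a monitoring daemon shows up in that dmesg output, that's the process defeating the spindown.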
  7. My intended purpose too, but this was solved earlier in the thread. I've been running an NVMe as a standalone drive for a second year without issue (not even patching anymore with updates). The cache isn't a cool idea for me; just like with VMs, I believe it gets reset after a restart, and it allocates only around 400GB of a multi-TB drive. I prefer accessing all hot data from the NVMe and using the RAID as a backup/replication target and cold-data storage, and I strongly prefer max speeds via the 10Gbit NIC, where the RAID seriously lags. Once you do the patch, you already win; then you just publish it into an MD array to make it visible to DSM. T
  8. - Outcome of the update: SUCCESSFUL
     - DSM version prior update: DSM 6.2.3 25426-0
     - Loader version and model: JUN'S LOADER v1.04b - DS918+
     - Using custom extra.lzma: YES v0.11
     - Installation type: BAREMETAL - HP ProLiant Microserver Gen10 X3421, 32GB ECC RAM, SSD, HDD, Intel NVMe 2TB, Aquantia 10gbit NIC, DVD, USB Audio, UPS
     - Additional comment: these updates are always easy, no patch for NICs or the NVMe main drive needed; just heard "system shutdown" and "system started" while the badusb typed the TCG-Opal password in between.
  9. CALM DOWN. Coming back to the script as my NAS is busier and busier; with the CPU load now around 0.5, it can get noisy. The noise is sinuous and therefore annoying. There's no real load; it's just that the number of services is high, and fileindexd makes sure to produce repeated load pops. I attach my fork for the HP Gen10 (non-Plus):
     - core count is reported as 1 on AMD even though four processors are there, so I use nproc
     - there are only 3 frequencies, so I had to remove the unused coolfreq
     - didn't like an IF for each core with double conditions; added it to the same block to save some
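The "one block instead of per-core IFs" idea can be illustrated like this. This is not the actual fork, just a minimal sketch; the thresholds, the three step names, and the hwmon path in the comment are placeholder assumptions:

```shell
# Illustrative sketch of single-block threshold logic for a 3-step fan.
# Thresholds and step names are made-up placeholders, not the real fork's values.

fan_step() {
    temp="$1"                     # highest core temperature in degrees C
    if   [ "$temp" -ge 70 ]; then echo high
    elif [ "$temp" -ge 55 ]; then echo medium
    else                          echo low
    fi
}

# On the real box the input would come from something like
#   temp=$(cat /sys/class/hwmon/hwmon0/temp1_input)   # path is an assumption
# and the result would be written to the fan PWM control.
fan_step 72   # -> high
```

One chain of conditions covering the hottest core replaces four nearly identical per-core blocks, which is what saves the lines the post mentions.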
  10. Yeah, so you install transcode in ikpgui and set it up in the script above: then insert another one... wait... another one... wait... just an example
  11. Goal: have the CD-ROM / DVD reader working, ready to serve files asap without going to the terminal; auto-mount.
      Hardware: for example the Microserver Gen10, with its internal USB port and cute thin CD slot; it'd be a shame to leave it empty.
      Software: find cdrom.ko and sr_mod.ko from someone nice who can build them.
      Initial setup: here the work can end, if you like to operate it via the CLI. Automount is crippled like on other appliances, but we can hook into udev, e.g. monitor actions here: normally udev would do the job, but here events are limited if
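A minimal sketch of the udev hook-up described above, assuming DSM's udev delivers the disc-change event at all (the post warns events are limited there). The rule file location, script path, and mount point are my assumptions, not from the post:

```shell
# /usr/lib/udev/rules.d/99-cdrom-automount.rules  (path is an assumption)
#   KERNEL=="sr0", ACTION=="change", ENV{ID_CDROM_MEDIA}=="1", \
#     RUN+="/usr/local/bin/cdrom-mount.sh"

# /usr/local/bin/cdrom-mount.sh  (hypothetical helper, mount point assumed)
#!/bin/sh
mkdir -p /volume1/cdrom
mount -t iso9660 -o ro /dev/sr0 /volume1/cdrom 2>/dev/null
```

If the change event never fires on DSM, a cron job polling for media and running the same mount script is the fallback.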
  12. nadiva: openvpn -> wireguard for breathtaking speeds
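For reference, the WireGuard side of such a switch is just a tiny config; this is a generic minimal client sketch with placeholder keys, address, and endpoint (nothing here is from the status update itself):

```shell
# /etc/wireguard/wg0.conf - minimal client sketch, all values are placeholders
# [Interface]
# PrivateKey = <client-private-key>
# Address    = 10.0.0.2/24
#
# [Peer]
# PublicKey           = <server-public-key>
# Endpoint            = vpn.example.com:51820
# AllowedIPs          = 0.0.0.0/0
# PersistentKeepalive = 25

# Bring it up with:
#   wg-quick up wg0
```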
  13. - Outcome of the update: SUCCESSFUL
      - DSM version prior update: DSM 6.2.2-24922 Update 5
      - Loader version and model: JUN'S LOADER v1.04b - DS918+
      - Using custom extra.lzma: YES v0.11
      - Installation type: BAREMETAL - HP ProLiant Microserver Gen10 X3421, 32GB ECC RAM, SSD, HDD, Intel NVMe 2TB, Aquantia 10gbit NIC, DVD, USB Audio, UPS
      - Additional comment: semibrick install (PAT in the UI + copying the extras in File Manager before boot). Lost a few settings and the root folder. Both the NVMe standalone drive and the 10gbit 3rd NIC worked after the patches and another reboot
  14. Good to know. I am now rsyncing these folders (what a cheap small backup). Maybe mirroring the boot drive would be helpful too. I was scared to do a "migration" instead of the semibricking method because you say, "In case of a "migration" the dsm installer will detect your former dsm installation and offer you to upgrade (migrate) the installation, usually you will loose plugins, but keep user/shares and network settings". I don't know what "plugins" are in the DSM world, but it sounds scary; I hope you don't mean packages. This version 6.2.3 hasn't solved any problem so far - CloudStation/Drive ch
  15. I waited a bit, and after the 6.2.3 25426-0 PAT became available (today) I started the upgrade. HP Gen10 with a 10gbit NIC, a standalone NVMe drive, DVD, and USB sound. The upgrade from 6.2.2 extra918plus_v0.7_test -> 6.2.3 extra918plus_v0.11 was very easy:
      - upgrade manually in the UI (reboot)
      - copy the extras to a USB drive in Windows and boot
      - patch_LAN; activation patch (the newest, not yet covering this version, so just add the "6.2.3 25426-0" string as the last entry); libNVMEpatch (newest)
      - patch and reboot
      Super quick, no Rufus / OSFMount / imager, lovely. NVMe standalone drive - didn't