XPEnology Community

dragosmp

Member
  • Posts: 19
  • Joined
  • Last visited
  • Days Won: 1

dragosmp last won the day on September 22 2020

dragosmp had the most liked content!


dragosmp's Achievements

Newbie (1/7)

Reputation: 2

  1. Hi IG-88, is there a way to include these files in DSM 6.2.3 for the 918+? I have an Nvidia card on a shelf and maybe it would do a better job than the G5400 CPU.
  2. Loader: Jun 1.04b (DS918+)
     DSM: DSM 6.2.3-25423 with Driver extension 0.11 for HW transcoding
     Hardware/CPU: Gigabyte B365M DS3H / Pentium Gold G5400
     HDD: 1x Seagate ST2000DM008-2FR102 (Btrfs), 2x WD WD20EFRX-68EUZN0 in RAID 1
     SSD: 2x Kingston A400 as a RAID 1 cache for the Seagate volume, 2x ADATA SX6000LNP as a RAID 1 cache for the WD RAID 1

     CPU:
     root@DiskStation:/volume2/public# time $(i=0; while (( i < 9999999 )); do (( i ++ )); done)
     real    0m25.304s
     user    0m25.301s
     sys     0m0.001s
     root@DiskStation:/volume2/public# dd bs=1M count=1k if=/dev/zero of=/dev/null | md5sum
     d41d8cd98f00b204e9800998ecf8427e  -
     1024+0 records in
     1024+0 records out
     1073741824 bytes (1.1 GB) copied, 0.0485021 s, 22.1 GB/s

     Disk:
     root@DiskStation:/volume2/public# sudo dd bs=1M count=256 if=/dev/zero of=/volume2/public/testx conv=fdatasync
     256+0 records in
     256+0 records out
     268435456 bytes (268 MB) copied, 1.62424 s, 165 MB/s
     root@DiskStation:/volume2/public# sudo dd bs=1M count=256 if=/dev/zero of=/volume3/data/testx conv=fdatasync
     256+0 records in
     256+0 records out
     268435456 bytes (268 MB) copied, 2.55855 s, 105 MB/s

     Very disappointed in the disk speed.

     [ 273.370385] pcieport 0000:00:1d.0: AER: Corrected error received: id=00e8
     [ 273.370402] pcieport 0000:00:1d.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e8(Receiver ID)
     [ 273.380798] pcieport 0000:00:1d.0: device [8086:a298] error status/mask=00000001/00002000
     [ 273.389296] pcieport 0000:00:1d.0: [ 0] Receiver Error
     The error remains after the update.
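     The 165 MB/s and 105 MB/s figures above are sequential writes that go through the page cache with a single fdatasync at the end, so they say more about the filesystem and cache flush than about the drives themselves. As a rough cross-check, not taken from the original post and with the test paths as placeholders, direct-I/O writes and reads can be measured with dd as well:

       # sequential write, bypassing the page cache entirely (O_DIRECT)
       sudo dd bs=1M count=256 if=/dev/zero of=/volume2/public/testx oflag=direct
       # sequential read of the same file, also bypassing the cache
       sudo dd bs=1M count=256 if=/volume2/public/testx of=/dev/null iflag=direct
       # remove the test file afterwards
       sudo rm /volume2/public/testx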
  3. Hi, after patching to 6.2.3 and adding the so.1 file and the patch, everything is visible in Storage Manager, but I have the following error:
     [ 486.646943] pcieport 0000:00:1d.0: device [8086:a298] error status/mask=00000001/00002000
     [ 486.655445] pcieport 0000:00:1d.0: [ 0] Receiver Error (First)
     [ 488.015090] pcieport 0000:00:1d.0: AER: Corrected error received: id=00e8
     [ 488.015097] pcieport 0000:00:1d.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e8(Receiver ID)
     [ 488.025522] pcieport 0000:00:1d.0: device [8086:a298] error status/mask=00000001/00002000
     [ 488.034230] pcieport 0000:00:1d.0: [ 0] Receiver Error
     [ 491.950039] pcieport 0000:00:1d.0: AER: Corrected error received: id=00e8
     [ 491.950045] pcieport 0000:00:1d.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e8(Receiver ID)
     [ 491.960446] pcieport 0000:00:1d.0: device [8086:a298] error status/mask=00000001/00002000
     [ 491.968918] pcieport 0000:00:1d.0: [ 0] Receiver Error (First)
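     These are corrected Physical Layer Receiver Errors, i.e. link retries the hardware has already recovered from, so they mostly just pollute the log. If they flood dmesg, one common workaround, not something from this thread, is to silence AER reporting with a kernel boot parameter in the loader's grub.cfg (the file sits on the first FAT partition of the Jun 1.04b USB stick; the exact name of the argument line may differ in your copy):

       # append pci=noaer to the existing kernel argument line, for example:
       set common_args_918='... existing arguments kept as-is ... pci=noaer'
       # pcie_aspm=off is another parameter people try when the corrected errors come from ASPM link transitions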
  4. Outcome of the installation/update: SUCCESSFUL
     - DSM version prior to update: DSM 6.2.2-24922 Update 6
     - Loader version and model: Jun's Loader v1.04b - DS918+
     - Using custom extra.lzma: YES (v0.10 for 6.2.3)
     - Installation type: BAREMETAL - Gigabyte B365M DS3H with Intel Gold G5400, 8 GB RAM; 3x 2 TB WD Red; cache: 2x 128 GB ADATA PCIe SSD, 2x 256 GB Kingston A400 SATA SSD; 500 W PSU; UPS
     - Additional comments: /dev/dri/* finally exists after the extra.lzma was added. H/W transcoding OK! /Plex Transcoder -codec:0 h264 -hwaccel:0 vaapi -hwaccel
     Recurrent error on the SSD cache:
     [ 273.370385] pcieport 0000:00:1d.0: AER: Corrected error received: id=00e8
     [ 273.370402] pcieport 0000:00:1d.0: PCIe Bus Error: severity=Corrected, type=Physical Layer, id=00e8(Receiver ID)
     [ 273.380798] pcieport 0000:00:1d.0: device [8086:a298] error status/mask=00000001/00002000
     [ 273.389296] pcieport 0000:00:1d.0: [ 0] Receiver Error
  5. Thank you!
     root@DiskStation:~# nvme list
     Node            SN            Model            Namespace  Usage                   Format       FW Rev
     --------------- ------------- ---------------- ---------- ----------------------- ------------ --------
     /dev/nvme0n1    2J1820020553  ADATA SX6000LNP  1          128.04 GB / 128.04 GB   512 B + 0 B  V9001c01
     /dev/nvme0n1p1  2J1820020553  ADATA SX6000LNP  1          128.04 GB / 128.04 GB   512 B + 0 B  V9001c01
     /dev/nvme1n1    2J1820018995  ADATA SX6000LNP  1          128.04 GB / 128.04 GB   512 B + 0 B  V9001c01
     /dev/nvme1n1p1  2J1820018995  ADATA SX6000LNP  1          128.04 GB / 128.04 GB   512 B + 0 B  V9001c01
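     Since the nvme CLI is evidently available, the health of the two cache drives can be checked with its standard smart-log subcommand (a suggestion, not part of the original post; device names taken from the listing above):

       # temperature, percentage_used and media error counters for each cache SSD
       sudo nvme smart-log /dev/nvme0n1
       sudo nvme smart-log /dev/nvme1n1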
  6. Does transcoding in Plex work only with a Plex Pass?
     12796 ?  Sl  0:01 \_ /volume1/@appstore/Plex Media Server/Plex Transcoder -codec:0 h264 -hwaccel:0 vaapi -hwaccel_fallback_threshold:0 10 -hwaccel_output_format:0 vaapi -codec:1 dca -ss 5.9519989999999989 -analyzeduration 20000000 -probesize 20000000 -i /volume1/public/download/Dark Phoenix (2019)/Dark.Phoenix.2019.1080p.BluRay.x264-GECKOS[rarbg].mkv
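     A quick way to confirm that a running transcode is actually using the iGPU (a generic check, not from the original post) is to look for the VAAPI flags on the Plex Transcoder process while something is playing:

       # print the full command line of any running Plex Transcoder processes
       ps aux | grep "[P]lex Transcoder"
       # hardware transcoding is active if "-hwaccel:0 vaapi" appears in that command line;
       # a software fallback shows up without the hwaccel flags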
  7. I don't think setting the same SN and MAC on several devices will still work OK; try to find a different SN, @mervincm. I have Intel® UHD Graphics 610 on the G5400; the Gold series is a 9th-gen CPU, so there is a good probability that a 9th-gen CPU will work.
  8. root@DiskStation:~# cat /usr/syno/etc/codec/activation.conf
     {"success":true,"activated_codec":["hevc_dec","aac_dec","aac_enc","h264_dec","h264_enc","mpeg4part2_dec","ac3_dec","vc1_dec","vc1_enc"],"token":"
     root@DiskStation:~# ls -all /dev/dri/
     total 0
     drwxr-xr-x  2 root root     80 Sep 21 22:49 .
     drwxr-xr-x 12 root root  18920 Sep 21 23:01 ..
     crw-rw----  1 root video 226,   0 Sep 21 22:49 card0
     crw-rw----  1 root video 226, 128 Sep 21 22:49 renderD128
     New setup:
     Intel Gold G5400
     Gigabyte GA-B365M-DS3H
     USB: Kingston
     Loader 1.04b with the extra.lzma attached in post 4 or 5
     DSM 6.2.2-24922 Update 3
     The codec list worked only after I installed Photo Station and added a video; ffmpeg started, and only after that was activation.conf available.
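     With /dev/dri present, an end-to-end hardware transcode can also be tested directly with a VAAPI ffmpeg invocation (a sketch only: the input/output paths are placeholders, and whether the ffmpeg build on the box includes the VAAPI encoders depends on the DSM/extra.lzma version):

       # hardware-accelerated H.264 transcode of the first 30 seconds of a test clip via the render node
       ffmpeg -hwaccel vaapi -hwaccel_device /dev/dri/renderD128 -hwaccel_output_format vaapi \
         -i /volume1/public/test.mkv -t 30 -c:v h264_vaapi -b:v 4M -c:a copy /volume1/public/test-vaapi.mkv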
  9. Hi everyone, can I get some help?
     root@DiskStation:~# cd /dev/dri
     -ash: cd: /dev/dri: No such file or directory
     root@DiskStation:~# cat /usr/syno/etc/codec/activation.conf
     {"success":true,"activated_codec":["hevc_dec","aac_dec","aac_enc","h264_dec","h264_enc","mpeg4part2_dec","ac3_dec","vc1_dec","vc1_enc"],"token":"
     Running:
     [ 0.000000] DMI: HP ProLiant ML10 v2, BIOS J10 05/21/2018
     CPU: Intel i3-4150
     DSM 6.2.2, loader 1.04b
     I don't know how to enable it.
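     /dev/dri only appears when the i915 driver binds to an Intel iGPU it supports; the i915 build shipped with the DS918+ image targets much newer graphics than the Haswell-era iGPU in an i3-4150, so it may simply never claim the device. A few generic diagnostics (suggestions, not from the original post) show whether the GPU is visible and whether the driver loaded:

       # is an Intel iGPU visible on the PCI bus at all?
       lspci | grep -i vga
       # did the i915 kernel module load?
       lsmod | grep i915
       # any DRM / i915 messages from boot?
       dmesg | grep -iE "drm|i915"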
  10. - Outcome of the update: SUCCESSFUL
      - DSM version prior to update: clean install of DSM 6.2.2-24922
      - Loader version and model: Jun v1.04b - DS918+
      - Using custom extra.lzma: NO
      - Installation type: BAREMETAL - HP ProLiant ML10 v2
      - Additional comment: reboot required
  11. - Outcome of the installation/update: SUCCESSFUL
      - DSM version prior to update: DSM 6.2-23739 Update 2
      - Loader version and model: Jun's Loader v1.03a - DS3615xs
      - Using custom extra.lzma: NO
      - Installation type: BAREMETAL
      - Additional comments: Normal boot does not work; using the Reinstall option, the system boots OK.
  12. 2017-03-20T10:51:48+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_check_status[28880]: ccc/service.cpp:892 Failed to write /tmp/ccc/cache.etcd.state.json
      2017-03-20T10:51:55+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/host.cpp:1492 Start to create cluster
      2017-03-20T10:51:55+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/host.cpp:1295 Start check before host init
      2017-03-20T10:51:55+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: extent_pool_rodsp_key_get.c:16 open(/config/rodsys/local_key, O_RDONLY) failed, err=No such file or directory
      2017-03-20T10:51:55+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/host.cpp:127 GetLocalStorageKey(): SYNOExtentPoolRODKeyGet(0x7ffe682441e0, 13) failed, err=Failed to open file
      2017-03-20T10:51:55+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/host.cpp:1313 Failed to get storage key
      2017-03-20T10:51:55+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/host.cpp:1638 Rollback create cluster
      2017-03-20T10:51:55+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/service.cpp:246 service pkg-synohostcmdd ...[1]
      2017-03-20T10:51:56+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/service.cpp:246 service pkg-synohostcommd ...[1]
      2017-03-20T10:51:56+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/service.cpp:246 service pkg-synocccd ...[1]
      2017-03-20T10:51:56+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/service.cpp:246 service pkg-libvirtd ...[1]
      2017-03-20T10:51:56+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/service.cpp:246 service pkg-etcd_hyper ...[1]
      2017-03-20T10:51:56+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/service.cpp:246 service pkg-etcd ...[1]
      2017-03-20T10:51:56+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: core/utils.cpp:390 Failed to get register_ovs_by_ccc[0x0900 file_get_key_value.c:29]
      2017-03-20T10:51:56+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: core/utils.cpp:390 Failed to get register_ovs_by_ccc[0x2000 file_get_key_value.c:81]
      2017-03-20T10:51:56+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: ccc/host.cpp:1391 OVS is not enabled by virtualization, skip unregister
      2017-03-20T10:51:56+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create_SYNO.CCC.Cluster_1_create[29036]: Cluster/cluster.cpp:231 Failed to create cluster, [401]
      2017-03-20T10:51:56+02:00 ITData synoscgi_SYNO.Virtualization.Cluster_1_create[29036]: Cluster/Cluster.cpp:56 Bad response [{"error":{"code":401},"httpd_restart":false,"success":false}]/ request [{"addr":"172.16.16.1","api":"SYNO.CCC.Cluster","method":"create","version":1,"volume_paths":["/volume5"]}]
      2017-03-20T10:54:56+02:00 ITData kernel: [ 294.790449] hpilo 0000:01:00.2: Closing, but controller still active

      ovs_eth0  Link encap:Ethernet  HWaddr 00:11:32:D8:09:76
                inet addr:192.168.1.240  Bcast:192.168.1.255  Mask:255.255.255.0
      ovs_eth1  Link encap:Ethernet  HWaddr 00:11:32:FF:AD:E8
                inet addr:172.16.16.1  Bcast:172.16.16.255  Mask:255.255.255.0

      I have the same error, with Open vSwitch on or off. Using the XPEnology bootloader for DSM 6.0.2-8451.5.
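      The cluster create rolls back right after GetLocalStorageKey() fails because /config/rodsys/local_key is missing, which looks like the actual root cause rather than the Open vSwitch setting. Two quick checks along those lines (suggestions only; the path and package names are the ones the log above mentions, and synoservice behaviour varies between DSM releases):

        # does the key the cluster create expects exist at all?
        ls -l /config/rodsys/ 2>/dev/null || echo "/config/rodsys is missing"
        # state of the VMM-related packages the rollback stopped
        synoservice --status 2>/dev/null | grep -E "pkg-synocccd|pkg-etcd|pkg-libvirtd"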
  13. Thank you so much. It works. https://s22.postimg.org/9yl1rrmld/ilo4.jpg
  14. Thank you for the link, but I'm looking for a new version of hp-ams, not iLO. I already installed the iLO 2.50 firmware, which works pretty well and keeps my fans at 18%. But with hp-ams I get all the info reported from Syno to iLO (e.g. fans, temperature, the IPs on the NICs, controller and drive information). DSM 5.2 with hp-ams worked pretty well, but now when I try to compile a new version for this kernel I get stuck on a few things.
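      For reference, the usual route for rebuilding a userland package such as hp-ams for DSM is Synology's published cross toolchain. The steps below are only a sketch: the toolchain archive name, install path, tool prefix and source directory are all assumptions that depend on the DSM version and platform (e.g. bromolow for a DS3615xs build), so substitute your own.

        # 1. unpack the matching DSM cross toolchain (downloaded from Synology's toolchain releases)
        sudo tar -xf dsm-toolchain-x86_64.txz -C /usr/local/    # archive name is a placeholder
        # 2. put the cross tools on PATH (the prefix below is typical for the x86_64 toolchains)
        export PATH=/usr/local/x86_64-pc-linux-gnu/bin:$PATH
        # 3. build hp-ams with the cross compiler instead of the host gcc
        cd hp-ams-src                                            # placeholder source directory
        make CC=x86_64-pc-linux-gnu-gcc CXX=x86_64-pc-linux-gnu-g++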