
About Vaifranz

  • Rank
    Junior Member


  1. Great job guys, congratulations! I can help somehow: I installed bare-metal DSM 7.0.1-42218 DS3615xs, unstable due to kernel panics (it returns to the login page every time), on a Supermicro X10SLH-F with the Intel C226 chipset and an Intel Xeon E3-1245 v3 CPU.
  2. If you can, try a clean installation, i.e. on a formatted HDD. If it completes successfully, I think the problem is the DSM version you currently have installed.
  3. Vaifranz

    DSM 7.0

    Do not update: Xpenology is compatible, at most, with the version you have installed, DSM 6.2.3. Updating renders the NAS unusable. For later versions there is a group on this forum working on it.
  4. Maybe your slot is a bit dirty; I recommend cleaning it with isopropyl alcohol, a good product for cleaning electronics, both contacts and everything else. I also use it to remove thermal-paste residue from CPUs when I replace them. Sent from my iPhone using Tapatalk
  5. [ 98.256360] md: md2: current auto_remap = 0
     [ 98.256363] md: requested-resync of RAID array md2
     [ 98.256366] md: minimum _guaranteed_ speed: 10000 KB/sec/disk.
     [ 98.256366] md: using maximum available idle IO bandwidth (but not more than 600000 KB/sec) for requested-resync.
     [ 98.256370] md: using 128k window, over a total of 483564544k.
     [ 184.817938] ata5.00: exception Emask 0x0 SAct 0x7fffffff SErr 0x0 action 0x6 frozen
     [ 184.825608] ata5.00: failed command: READ FPDMA QUEUED
     [ 184.830757] ata5.00: cmd 60/00:00:00:8a:cf/02:00:00:00:00/40 tag 0 ncq 262144 in
  6. I did some tests, my system works fine without the 24 bay case backplane, I will test it with other systems like OMV or Unraid or FreeNAS to understand if the problem is hardware or software. Here I took a photo of the only code present. I searched the net but found nothing about it, the case has this code CSE-S46524. I hope it will be useful to someone.
  7. Yes, a great idea, I can also try the system that is installed there without a backplane, so only mobos, disks and HBA cards. I will update you, in the meantime thanks for your time.
  8. I don't know exactly, I should take it apart, but it doesn't seem very old to me, the strange thing is that it is now running, not with JMB585 but with HBA, Dell 200 in IT mode (9211), it works fine, but it happens that every 4 / 5 days freezes, remains on but not accessible.
  9. "same kernel version as 3617, no reason to assume a change in central ahci code would not be in both" That is what I thought; I'm not very well versed in computer systems, just a bit of an enthusiast. Just add it at the end of the line as an additional parameter to the others:
     set common_args_918='syno_hdd_powerup_seq=1 HddHotplug=0 syno_hw_version=DS918+ vender_format_version=2 console=ttyS0,115200n8 withefi elevator=elevator quiet syno_hdd_detect=0 syno_port_thaw=1'
     A space separates entries, and the line is "closed" with the '. For 3617 it's
  10. Interesting, I have all WD disks, except the ones I use for tests! Does the 3615 kernel also have this "flaw"? Could you be more precise and tell me where to insert the "libata.force=noncq" parameter in the grub.cfg file? I would also like to run some tests on the systems I have. Thank you.
  11. However it only happened once, the other times it happened continuously and repeatedly, the difference is that now I am not using the 24 bay case, but another case where the HDDs are connected individually and not via a backplane as in the 24 bay. Maybe my problem is the backplane.
  12. [ 3361.874311] ata14: device unplugged sstatus 0x0
      [ 3361.874339] ata14.00: exception Emask 0x10 SAct 0x1c0000 SErr 0x9b0000 action 0xe frozen
      [ 3361.882474] ata14.00: irq_stat 0x00400000, PHY RDY changed
      [ 3361.887987] ata14: SError: { PHYRdyChg PHYInt 10B8B Dispar LinkSeq }
      [ 3361.894350] ata14.00: failed command: WRITE FPDMA QUEUED
      [ 3361.899726] ata14.00: cmd 61/80:90:e0:fa:01/00:00:08:00:00/40 tag 18 ncq 65536 out
                     res 40/00:98:e0:fd:01/00:00:08:00:00/40 Emask 0x10 (ATA bus error)
      [ 3361.915280] ata14.00: status: { DRDY }
      [ 3361.919039] ata14.00: failed command: WRI
  13. Thank you for taking the time to address this issue, which apparently is mine alone. The card I use is like the first photo you posted, unfortunately today I did not have the opportunity to do some tests to post the messages that result from dmesg, I hope tomorrow. Meanwhile, I am grateful to you.
  14. The reconnections did not occur on all the disks but only on one or two. While testing, after replacing the disks that were disconnecting, others showed the same problem. The problem occurred with both 918+ and 3615/17. I'll try again because I have no output to post, maybe tonight. Basically it disconnected the disk, or disks, and could not create the RAID; I generally tried SHR since I have disks of different sizes, and I think SHR works the drives harder.
  15. Thanks for your interest, I will wait for the results of your tests. I assume you are not using a particular extra.lzma file.
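The grub.cfg edit discussed in the posts above (appending libata.force=noncq to the common_args line) could look like the sketch below. This is illustrative only: the exact set of existing arguments depends on your loader image, so edit the line you actually have rather than pasting this one.

```shell
# grub.cfg fragment (Jun-style loader) — a sketch, assuming the argument
# list quoted in the post above. Append the new parameter inside the
# single quotes, separated by a space, before the closing '.
set common_args_918='syno_hdd_powerup_seq=1 HddHotplug=0 syno_hw_version=DS918+ vender_format_version=2 console=ttyS0,115200n8 withefi elevator=elevator quiet syno_hdd_detect=0 syno_port_thaw=1 libata.force=noncq'
```

The same pattern applies to the 3617 line: keep the existing parameters intact and add libata.force=noncq as one more space-separated entry inside the quotes.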