Everything posted by cooledspirit

  1. Hi, a while ago I expanded my 8-disk RAID 6 config to 10 disks (3 TB each). At that time I was given the possibility to "unfold" to 21.8 TB, which I accepted. This worked well for quite a while... At some point one disk failed; I have now replaced it, but DSM refuses to repair, as there appears to be a 16 TB limit. I get a popup stating that the limit is 16 TB, so repair is not possible. My volume is stuck, I can't retrieve data, and I can't repair either. How on earth can I fix this? Any ideas? Many thanks!
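     A minimal sketch of what one might check from an SSH session to see the actual array and filesystem sizes behind that error, assuming the data array is /dev/md2 and the volume is mounted at /volume1 (both assumptions; adjust to your layout):

        cat /proc/mdstat                 # array state and any rebuild in progress
        mdadm --detail /dev/md2          # reported array size and which members are missing
        df -h /volume1                   # filesystem size the volume actually uses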
  2. Hi, I have XPenology (6.1.4 Update 5) booting from a USB drive, but I also have a separate HDD running Windows 10 on the same machine. I would also like to add an Ubuntu instance so I can run file system repairs should I ever need to. I was wondering if I can combine the three options on a single disk (with a boot menu, of course), so that upon boot I can select Jun's boot loader, Ubuntu, or Windows 10. If yes, can you indicate how to do that, in terms of partitioning the disk carrying all this and getting the menu set up?
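     One way this is often done (a sketch, not tested on this exact setup): install Ubuntu last and let its GRUB act as the boot menu; os-prober normally picks up the Windows 10 install by itself, and a custom entry can chainload the drive holding Jun's loader. Device names below are assumptions:

        # appended to /etc/grub.d/40_custom on the Ubuntu install:
        menuentry "Jun's XPenology loader" {
            set root=(hd1)        # assumption: the drive/stick holding Jun's loader
            chainloader +1
        }
        # then run:  update-grub   (regenerates grub.cfg; os-prober normally adds the Windows 10 entry)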
  3. I replaced the RocketRAID SATA controller and can report that all seems to be working fine now. I managed to upgrade to Update 5 without any problem. Would it be OK to enable "write cache" again? Or is that a bad idea?
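     DSM toggles this per drive in Storage Manager, but if hdparm is available on the box the state can also be checked from the shell; a small sketch, with /dev/sda as a hypothetical device name:

        hdparm -W /dev/sda       # show whether the drive's write cache is currently on
        hdparm -W1 /dev/sda      # enable write cache
        hdparm -W0 /dev/sda      # disable it again if the controller misbehaves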
  4. I disabled write cache again (re-installing DSM had overwritten the setting and I forgot to disable it afterwards). I can't find any way in the RocketRAID BIOS to modify what you indicated. I ordered the following non-RAID SATA controller (Syba SD-PEX40099 4 Port SATA III PCI-Express 2.0 x) in the hope this will solve everything. By now, I assume it has something to do with the controller or its driver. I read elsewhere on this forum that the controller above appears to work. What I notice is: a parity check with DSM 6.1 Update 1 completes successfully. The same wi...
  5. This is what happens... the last addition of the 2 disks worked (the consistency check passed successfully) on 6.1.4 Update 1, but as soon as I dare to install 6.1.4 Update 5, the system partition appears to fail and the disks are rejected. That, or simply rebooting, is a problem... Any ideas?
  7. So this time I have a log that indeed mentions timeouts (see attachment). Can somebody interpret it and indicate what might be going on here? Many thanks in advance! Cooled Spirit dmesg.log
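     For anyone skimming the attachment, a quick way to pull out the relevant lines (plain grep, nothing DSM-specific):

        grep -i -E 'timeout|link reset|exception|i/o error' dmesg.log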
  8. In the meantime, Synology Assistant is detecting the machine again. This topic can be closed.
  9. Attached are the dmesg results when booting with only the 2 disks in the lowest SATA slots. dmesg - only the 2 disks.txt
  10. I notice that Synology Assistant no longer finds the station, even when booting the regular way from the USB. I can still access the admin web page and use telnet when booted normally, though...
  11. Please find the dmesg log file attached (dmesg-log.txt). Please also take a look at the questions in my prior reply about running a re-install on the disks in the lowest 2 channels. I disabled the serial port (the USB with Jun's boot loader complained afterwards that 3F8 was disabled), and I disabled AMD "PowerNow" as well. Perhaps running an fsck might help? But how? If I run syno_poweroff_task -d, my session gets closed.
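     The usual approach is roughly the following; a hedged sketch, assuming the data array is /dev/md2 with ext4 and the volume is /volume1 (adjust to your layout), since syno_poweroff_task will indeed drop the current session:

        syno_poweroff_task -d      # stops services and unmounts volumes; the session drops
        # reconnect over telnet/SSH once the box responds again, then:
        umount /volume1            # only if it is somehow still mounted
        e2fsck -nvf /dev/md2       # read-only dry run first; rerun with -y only after reviewing the output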
  12. I see on my router that "diskstation" has been given an IP address, but there appears to be no way to connect to it: neither via the Assistant, telnet, nor SSH (connection refused).
  13. I rebooted my machine, selecting the "reinstall" option of Jun's loader, but Synology Assistant can't find the machine so far. Given that I experienced some problems lately, is it possible that a file system check is running before it becomes available? Is there a way to verify that (i.e. what could be causing the delay in becoming available)? Or is something else potentially wrong? I just want to avoid forcing a shutdown in case a file system check is actually running; in that case I can wait until the machine becomes available again. In any other case, I...
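     If a shell can be reached (telnet/SSH, or a serial console), a couple of quick checks tell whether a check or rebuild is actually running; a sketch, nothing here is DSM-specific:

        cat /proc/mdstat           # shows any RAID resync/rebuild and its progress
        ps | grep -E 'fsck|e2fsck' # shows whether a filesystem check process is active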
  14. Thank you very much for the suggestions. Currently another attempt is running, so I have to wait a few more hours before I can check what you suggested.
  15. Hello, I'm running DSM 6.1 (Jun's loader) on an Asus A88X-Plus/USB 3.1 board with an AMD A-series A10-7860K CPU and 8 GB RAM, expanded from 8 disks to 10; the system is configured as RAID 6. As the motherboard natively only supports 8 disks, I added a PCI-E SATA controller (Highpoint Rocket 640L) and connected the 2 extra disks to it. The system appears to have successfully switched from 8 to 10 disks, but it never fully worked, as the system partition appears to be faulty on the 2 added disks. Re-installing DSM did not work either: everything is OK until I get a timeout when wr...
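     On DSM the system partition is a small RAID 1 spanning all disks (usually /dev/md0, with swap on /dev/md1); a sketch of how one might see which members dropped out, assuming those device names:

        cat /proc/mdstat           # md0 should list one member per installed disk
        mdadm --detail /dev/md0    # shows exactly which partitions are failed or removed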
  16. Anybody got a chance to test DSM 5.1-5021 with Nanoboot?
  17. I'm using Nanoboot 5.0.3.1 on an ASRock B75 Pro3-M motherboard, which has 8 SATA connectors: 6 from the Intel B75 chipset and 2 from an onboard ASMedia ASM1061 controller. I'm using 8 disks in a RAID 6 setup and everything worked fine until I added a Highpoint Rocket 640L (not RocketRAID) in an attempt to scale up from 8 to 12 SATA connections. For some reason, the disks connected to the ASMedia controller are no longer detected by DSM (at all), even though they previously worked fine. The disks are visible in the BIOS, so I assume there must be some conflict at the Linux level. Luckily I have a workaround...
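     A sketch of how to confirm whether the ASM1061 and its disks are seen by the kernel at all (lspci may or may not be present on the DSM build; dmesg and /sys always are):

        lspci | grep -i sata                  # SATA controllers the kernel sees
        dmesg | grep -i -E 'ahci|asm|sata'    # driver probe messages per controller
        ls /sys/block                         # block devices (sda, sdb, ...) actually created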
  18. I'd like to put Nanoboot and a Windows install on one hard drive. If possible, I'd even like to add Ubuntu. Windows (and Ubuntu) would only have to be started when explicitly selected in the boot menu; the normal boot is Nanoboot. Can this be done? And if yes, can you point me in the right direction? Nanoboot is available as an .img; how can it be transferred to a hard drive (leaving room for a Windows install)? Many thanks!
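     A rough sketch of the image-to-disk part only (assumes the target drive is /dev/sdX and that its start may be overwritten; the boot-menu part is a separate exercise):

        fdisk -l nanoboot.img                            # see how much of the disk the image's partitions claim
        dd if=nanoboot.img of=/dev/sdX bs=1M conv=fsync  # write the image to the start of the drive
        fdisk /dev/sdX                                   # then create Windows/Ubuntu partitions in the remaining free space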
  19. Update: I ran e2fsck -v -f -y /dev/md2 to make sure. I notice that my volume is available (apart from the 2 replaced disks); it is only degraded. However, DSM does not "see" the volume. The mount is present in /etc/fstab, but mdadm.conf seems to be missing. Could that be the cause? What can I do to make DSM see the volume again and allow me to repair it by adding my new disks? Cooled.
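     For what it's worth, a config file can be regenerated from the running arrays if one is really needed (whether DSM itself consults mdadm.conf is another question); a sketch:

        mdadm --detail --scan                      # prints ARRAY lines for the currently assembled arrays
        mdadm --detail --scan > /etc/mdadm.conf    # writes them out as a minimal mdadm.conf
        mdadm --assemble --scan                    # (re)assembles any arrays found in the on-disk superblocks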
  20. I had a volume with 8 disks, 2 of them redundant. 2 disks crashed. When I replaced them, I noticed that 2 other disks (worst luck ever) had problems with the system partition, meaning that the volume crashed. Raah... I never expected 4 disks to fail at once! Luckily, I was able to bring those 2 disks back to life using SeaTools from Seagate, which means there were some bad sectors, but the Seagate tool reallocated them. To recover the system partition, I installed 4493 using Synology Assistant, which succeeded. All disks are now reported as "healthy". However, my volume...
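     Before forcing anything, the md superblocks show which members the array still agrees on; a cautious, read-only sketch (the data partition is /dev/sdX5 on many installs of that era and /dev/sdX3 on others; check with fdisk -l first):

        cat /proc/mdstat                                    # what is currently assembled, and in what state
        mdadm --examine /dev/sda5 | grep -E 'Event|State'   # repeat per member and compare the Events counters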
  21. I feared that... I won't be able to use nanoboot then. As soon as I use the nanoboot image (instead of gnoboot 10.5), the system doesn't work anymore... even without upgrading.
  22. How can I perform a clean install of 4482 (through Synology Assistant), without losing my data of course? 4482 is installed, but it appears the installation didn't go well, and I'd like to redo it completely... Thanks!
  23. I'd like to do a clean install of 4482 with Nanoboot, without losing data. For some reason I can't install 4482 (it always returns an error); I assume something went wrong earlier. Having to re-configure everything is not a problem, but I don't want to lose data. How can I approach this (safely)? I assume Synology Assistant needs to be tricked into believing nothing is installed yet, and then be allowed to remove and overwrite the previous system and configuration files? PS, additional background: I upgraded to 4482 using Nanoboot, coming from Trantor, then GNOboot, and ultimatel...