Global Moderators
sbv3000 last won the day on October 14

sbv3000 had the most liked content!

Community Reputation

18 Good


About sbv3000

  1. Still random shutdown on DSM 6.1

    Change the MAC address in the loader to the 'real' one for your network card. Also, check the DHCP settings on your router: set a long lease time and give the NAS an IP reservation from the router (rather than a static IP on the NAS). See if these changes make a difference, as it sounds like there is a networking issue somewhere.
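    As a sketch of that MAC change (the example address, the eth0 interface name, and the `set mac1=` variable name follow Jun's loader grub.cfg convention; adjust for your loader), the address the NIC reports has to be converted to the colon-free uppercase form grub expects:

    ```shell
    # Example MAC; on the real NAS read yours with: ip link show eth0
    mac="aa:bb:cc:dd:ee:ff"
    # grub.cfg wants uppercase hex with no separators
    grub_mac="$(printf '%s' "$mac" | tr -d ':' | tr 'a-f' 'A-F')"
    echo "set mac1=$grub_mac"
    ```

    Put the resulting `set mac1=...` line in the loader's grub.cfg on the boot stick.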
  2. Still random shutdown on DSM 6.1

    Can you telnet or SSH to the system when it's frozen? What are your MAC address settings in grub, e.g. a 'fake' Syno MAC, the real MAC address of your NIC, or blank? You could connect a serial console to look at the status when it freezes, and if the system is up but has lost the network, try another network adapter, e.g. an Intel one. Have you replaced the extra.lzma file with one of the alternatives that have been compiled? Some of these have different Realtek drivers and may perform better. It could also be a good idea to create a clean install of XPE 6/DSM 6 on a spare HDD and test that for a few days; there may be some bugs left over from your upgrade from 5.2 to 6.
  3. DSM 6 Boot Image for Hyper-V

    Interesting and clever hack! What is the virtual hardware configuration of the VM? I've not played with Hyper-V for a long time, but if I recall correctly you allocate an internal NIC to the VM and it's a passthrough of whatever brand is installed in the hypervisor, so if you have a physical Intel NIC and allocate that, it might work.
  4. PSU should be OK then. I use WinSCP for running command windows; it allows better handling of the full dmesg output for copying etc. Have you tried creating an array with the two IDE drives only? Maybe create two JBOD single volumes, one on each drive, and see if that survives the build/reboot, then delete them and create a two-drive volume, etc. If this works but things go bad once the SATA drives are mixed in, maybe that points to a conflict somewhere. I also wonder whether the different UDMA modes of the different drive types might somehow be involved. As an aside, mask out the USB drives in your config; probably totally irrelevant, but it is another variable, seeing that the boot drive is being mounted, I think.
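  To make that dmesg comparison easier, something like this helps (the function name and file names are mine, not from any tool): it pulls just the drive/controller lines out of a saved dmesg dump so runs from different configurations can be diffed side by side.

  ```shell
  # Keep only lines about ATA links, disks, DMA modes and resets.
  drive_lines() {
      grep -iE 'ata[0-9]+|sd[a-z]|dma|reset' "$1"
  }
  # usage: drive_lines dmesg-ide-only.txt > ide.txt
  #        drive_lines dmesg-mixed.txt   > mixed.txt
  #        diff ide.txt mixed.txt
  ```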
  5. Did you strip back the number of drives for testing, i.e. only 2-4 connected to the HPT? If you have a lot of drives connected, check that the power supply can deliver what's needed on the 12V/5V rails continuously (noting that dmesg says some drives are requesting full power mode). Also, a possibly dumb suggestion, but if the drives can be set to master/slave with jumpers, do that rather than using 'cable select' mode.
  6. That is an old loader, so there is limited support. I would test as follows: disconnect your RAID drives; use a live Ubuntu boot disk to check your hardware, network, etc.; if that's OK, test your boot USB drive on another PC to see if it boots and appears on the network; reflash the USB boot drive with the same boot image; boot the NAS with the reflashed boot image and see if it appears on the LAN, and if yes, power off and reconnect the drives. You should consider upgrading/migrating to XPE 5.2/DSM 5.2 and then to 6.x (if your hardware supports it).
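  The reflash step from a Linux machine is typically done with dd; this is a template only (the image name and /dev/sdX are placeholders), so identify the stick with lsblk first, because dd overwrites whatever it is pointed at:

  ```shell
  # Identify the USB stick first -- dd offers no safety net:
  lsblk -o NAME,SIZE,MODEL
  # Then write the loader image back to it (substitute both names):
  sudo dd if=XPEnoboot.img of=/dev/sdX bs=1M conv=fsync status=progress
  ```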
  7. The lost interrupts and CHS errors might point to some hardware configuration settings needing changes. For diagnostic purposes I'd suggest stripping back the system, doing a set of stability tests, then adding components back: disable onboard parallel, serial and sound; disable onboard SATA/PATA; remove the SIL card; boot/test the HPT card with drives (I'm presuming you have set up the HPT as JBOD, that IDE master/slave settings are all correct, and that the jumpers on the drives match). Check dmesg and see if there is any change. If stable, enable the onboard PATA, retest, and see if it 'breaks' at any point. As an aside, your total potential drive capacity with all controllers seems to be 16 (6+2+4+4), presuming again that you have altered the drive configuration in synoinfo.conf to match.
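  As a worked example of that synoinfo.conf change (the interpretation of the port masks follows the usual XPE guidance: each drive slot is one bit), 16 internal slots come out as:

  ```shell
  # 16 internal drive slots -> the low 16 bits set in internalportcfg.
  maxdisks=16
  internalportcfg=$(printf '0x%x' $(( (1 << maxdisks) - 1 )))
  echo "maxdisks=\"$maxdisks\""
  echo "internalportcfg=\"$internalportcfg\""
  # usbportcfg/esataportcfg must then use bits above these 16 so the
  # ranges do not overlap.
  ```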
  8. Any data recovery advice?

    I suspect that through a bad combination of events (a mismatched XPE loader/DSM update) and a bad drive, your volume may be badly damaged, beyond recovery. The bad upgrade might have left the DSM partitions out of sync, in addition to the change of architecture (SHR/volumes to RAID groups/volumes) from 5.2 to 6.1, hence the Disk 1 status. Disk 3 looks like a genuine failure, maybe brought about by the RAID migration/conversion process hitting the disk hard. If you now have a correctly booting XPE 6.1/DSM 6.1 system, I would boot with all 4 drives attached; DSM/SHR might allow read-only access to your volume in File Station, or via SSH. @IG-88 is right: try to raw-copy your data first, as the more you play around, the greater the chance of causing more damage.
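    For that raw copy, GNU ddrescue from a live Ubuntu session is the usual tool; the device and output paths below are placeholders, and the map file lets a copy from a failing disk be interrupted and resumed:

    ```shell
    # Read-only image of one member disk; repeat per disk, writing to a
    # DIFFERENT physical drive with enough free space.
    sudo ddrescue -d /dev/sdb /mnt/backup/disk3.img /mnt/backup/disk3.map
    ```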
  9. It looks like your IDE drives are somehow losing their DSM partition information. Can you give some more information about your mobo/IDE controllers and the number of drives? Did you try the 5.2 loader 'as is' with your IDE controllers, to see if the built-in modules covered your hardware? You could look at the dmesg RAID/drive output for the various scenarios ('as is' 5.2, with your modules added, after a reboot with the lost array) to compare and see if there are errors in the disks/drivers, the RAID build process, or the system partitions.
  10. You can create an SHR2 array in DSM 5.2 if you do not want to upgrade: you would need to back up your data, delete the current SHR1 volume and create a new SHR2 volume. In-place conversion from SHR1 to SHR2 is possible in DSM 6.1, which would be your reason to upgrade. You should be able to migrate/update directly to 6.1 by carefully following the tutorial from @Polanskiman. I would recommend that you first try a test install on a spare HDD to check your hardware compatibility; make sure your USB VID/PID are correct and that you can install DSM. Disconnect your current RAID drives first, of course! Also pay close attention to the onboard SATA settings in your BIOS (IDE or AHCI); if they are currently IDE then your existing volume will probably crash if you try to migrate. You should always back up your data before any upgrade anyway!
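  For the VID/PID check, the relevant lines in Jun's loader grub.cfg look like this (the values here are only examples; read the real ones from your USB stick with lsusb on Linux or Device Manager on Windows):

  ```shell
  # grub.cfg fragment -- must match the USB stick used to boot:
  set vid=0x058f
  set pid=0x6387
  ```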
  11. A 'Supported Hardware' list for 6.x would be useful, but as there are lots of different devices (brands and models), perhaps a simpler option would be to list the modules/drivers provided in the default and extended ramdisks created by @IG-88. An internet search should turn up the supported hardware for a given module, which is good enough to answer most questions.
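  Listing them is straightforward, since extra.lzma is just an lzma-compressed cpio archive; the helper name here is mine, and the file path is whatever your loader uses:

  ```shell
  # Print the .ko kernel modules bundled in a loader ramdisk.
  list_modules() {
      lzma -dc "$1" | cpio -it 2>/dev/null | grep '\.ko$' | sort
  }
  # usage: list_modules extra.lzma | grep -i rtl   # any Realtek drivers?
  ```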
  12. From FreeNAS to Xpenology

    As an observation, FreeNAS is a large-scale, community-supported NAS O/S, whereas XPE is a 'hack' of the open source elements of DSM that allows the Synology DSM O/S to run on other hardware. The XPE/DSM combination is a 'hobby project' by some clever dev guys and could change at any time if Synology removes the open source components or 'secures' the system (DSM 6.2 seems to move in that direction). If you are running a business-critical production data NAS, be aware of the risks with XPE/DSM, and make sure the company managers know what you are proposing. As an aside, DSM 6.x does offer several cloud backup options (Amazon etc.), and if your data is as important as you say, you should have an offsite backup regardless of your NAS O/S.
  13. Check the BIOS of the Lenovo and see if you can disable one of the SATA ports, so that your maximum is 12. That means you will not have to edit your synoinfo.conf file for more than 12 drives when there is a new boot loader, and it will avoid 'degrading' your array when you use a new boot loader for the first time.
  14. You may have damaged your array beyond repair; however, you could try the Ubuntu live recovery process outlined by @Polanskiman. You could also try a WinSCP session first and see if that lets you access the folders in 'volume1' and download them.
  15. Most of the Marvell cards work OK with the 6.x loader. I use this 4-port one.