yabba235 Posted November 7, 2014 #1
Time to try the new DSM version. Link to the .pat files: http://ukdl.synology.com/download/DSM/5.1/5004/
Poechi Posted November 7, 2014 #2
(edited July 10, 2015 by Guest)
spectre Posted November 7, 2014 #3
Verify!
Poechi Posted November 7, 2014 #4
(edited July 10, 2015 by Guest)
martva Posted November 7, 2014 #5
"init: Unable to mount /dev filesystem: No such device". In other words, no volume is available. I think we need Sancome (nanoboot) and other experienced users to have a look, try to fix this, and troubleshoot the problems that come up in testing. I'm willing to help, but my Linux skills are somewhat outdated and not sufficient to address the problem on my own. I think spectre verified the above. @Sancome, I would be happy to donate for a solution.
Poechi Posted November 7, 2014 #6
(edited July 10, 2015 by Guest)
jagwaugh Posted November 8, 2014 #7
As far as I could tell, Synology started implementing some sort of checksum procedure in the patch mechanism, which depends on the boot drive image (non-existent in XPEnology). If there is a workaround, I couldn't find it. Andrew
Jotouriste Posted November 8, 2014 #8
Warning !!!! ça plante le HD, plus de connexion réseau !!! (Translation: it crashes the HDD, and there is no more network connection.)
Schmill Posted November 8, 2014 #9
que? (Even Google won't translate it for me - sorry!)
civilmann Posted November 8, 2014 #10
I think this one needs a little work. Let's all take a step back and see if Synology will fix the issues; as far as I can tell, real DiskStations are having major issues. I own a 713+ and I'm not going near this release until I see some fixes in the forums! See http://forum.synology.com/enu/viewforum.php?f=250 for the issues being reported. I own a bricked 209+ that was the victim of a buggy update (i.e. 4.2-3246); when I called Synology they tried to help me, but after about 15 minutes they suggested I buy a new one. I did buy the new 713+, but I also got my HP N54L and found Xpenology.com. Having both is nice because I can be good and bad at the same time!
martysport Posted November 8, 2014 #11
"que? (Even google won't translate it for me - sorry!)"
Not difficult to translate: he says the HDD stops working, and there are network problems as well.
Poechi Posted November 8, 2014 #12
(edited July 10, 2015 by Guest)
Poechi Posted November 9, 2014 #13
(edited July 10, 2015 by Guest)
lowietje Posted November 9, 2014 #14
Hi Poechi, the problem is that the /dev/sd* nodes exist at boot, but at some point they are renamed to hd* (check /dev). I once wrote a script to create the symlinks, but they get deleted every time. If the sd* nodes are in /dev all is good and everything works, but after some time they are deleted, I guess by some device daemon. Cheers, Louis
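For illustration, a minimal sketch of the kind of symlink script lowietje describes. The disk letters and partition numbers are assumptions, and as he notes above, DSM may remove the links again after a while.
#!/bin/sh
# Re-create /dev/sdX nodes as symlinks when DSM has renamed them to /dev/hdX.
# Adjust the disk letters and partition numbers to match your own system.
for letter in a b c d e; do
    if [ -b "/dev/hd${letter}" ] && [ ! -e "/dev/sd${letter}" ]; then
        ln -s "/dev/hd${letter}" "/dev/sd${letter}"
    fi
    for part in 1 2 3; do
        if [ -b "/dev/hd${letter}${part}" ] && [ ! -e "/dev/sd${letter}${part}" ]; then
            ln -s "/dev/hd${letter}${part}" "/dev/sd${letter}${part}"
        fi
    done
done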
Poechi Posted November 9, 2014 #15
(edited July 10, 2015 by Guest)
civilmann Posted November 9, 2014 #16
Responding to what some are saying about mounting existing volumes: can't you use PuTTY or a terminal and some Linux command-line tools? For example, "mdadm --examine --scan" to find the volume info, then maybe something like "mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1" for a typical RAID (not Synology Hybrid RAID). That should work as long as you can find the original mount point; then something like "mount /dev/md0 /mnt/raid" to mount it? Come on, one of you Linux geeks chime in here and bail me out.
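Putting civilmann's pieces together, a hedged sketch of the sequence he suggests. The md number, member partitions and mount point are taken from his example and are only illustrative; an SHR/LVM volume would also need the volume-group steps shown later in this thread.
# 1. See which arrays mdadm can detect from the on-disk superblocks
mdadm --examine --scan
# 2. Assemble the array from its member partitions (illustrative device names)
mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1
# 3. Mount it somewhere and check that the data is there
mkdir -p /mnt/raid
mount /dev/md0 /mnt/raid
ls /mnt/raid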
lowietje Posted November 9, 2014 #17
@Poechi I think I've already read it somewhere: when entering DSM it moves /dev/sd* to /dev/hd*, so it doesn't seem to be the daemon, but something in DSM itself. The question is what causes it. I read something about a checksum somewhere.
lowietje Posted November 9, 2014 #18
@civilmann I tried several options, but still only volume1 will be available. The other volumes can be mounted, but they are not available in DSM. The only option is to have a boot script which reassembles the RAIDs, then activates the volume groups and remounts the logical volumes. Also, to use these other volumes you will first have to create the shared folders on volume1, then create the folder on, say, volume2, remove the folder on volume1, and create a symbolic link from volume1 to the folder on volume2. Or you might use a mount --bind to do it (or from fstab). A lot of problems to get it to run, and an upgrade will probably kill it all. By the way, on a clean install I couldn't get it to work yet. Cheers, Louis
Example of the script:
mdadm --assemble --force --no-degraded /dev/md3 /dev/hdd3 /dev/hde3   # vg1
mdadm --assemble --force --no-degraded /dev/md2 /dev/hdc3             # vg1000
vgchange -a y vg1
vgchange -a y vg1000
cp /root/fstab /etc/fstab
mount -a
Example of my test system's fstab:
none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl 0 0
/dev/vg1/volume2 /volume2 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl 0 0
/volume2/test /volume1/test bind defaults,bind 0 0   # this is the "linked" shared folder
lowietje Posted November 10, 2014 #19
Hmm, I guess I forgot to mention: after DSM has started, volume1 is still removed, so it's not really a great solution.
jagwaugh Posted November 10, 2014 #20
The weird thing is, the volume (i.e. md2) is there. If you boot with a recovery CD (e.g. Ubuntu) you can assemble the RAID and the files are all still there (I installed 5.0 and populated the volume with some files before trying 5.1). To me this means that the partitions and superblocks are all OK. I wonder if the problem is related to the "unable to mount /dev" message during boot: if /dev isn't there in time, how can the kernel populate it at boot? Andrew
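A rough sketch of that kind of check from an Ubuntu live environment. The md number and the vg1000/lv names are the usual DSM defaults and are assumed here, not confirmed for any particular box.
sudo apt-get install -y mdadm lvm2     # the live CD usually lacks these tools
sudo mdadm --assemble --scan           # should bring up the data array, e.g. /dev/md2
sudo vgchange -a y                     # activate the volume group if the volume is LVM-backed
sudo mkdir -p /mnt/dsm
sudo mount /dev/vg1000/lv /mnt/dsm || sudo mount /dev/md2 /mnt/dsm
ls /mnt/dsm                            # the files from volume1 should still be visible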
yud Posted November 11, 2014 #21
I am totally new to Synology/XPEnology, so please be gentle with me... I'm currently on 5.0-4493 Update 7. Given all the problems reported by some people here, should I upgrade to 5.1? If so, what is the method for upgrading (given that it's a major version)? Sorry, I only know how to apply a minor update... A link to a step-by-step guide would be highly appreciated!
Schmill Posted November 11, 2014 #22
Hi yud - put simply, no. Don't touch 5.1 yet; there are various issues with it that the great minds here are still figuring out. You can always keep an eye on the status table at the bottom of the page (above the comments) on xpenology.nl to see the overall status of a release: http://www.xpenology.nl/synology-released-dsm-5-1-5004/ Welcome to XPEnology.
Hetfield Posted November 11, 2014 #23
This specific error can be caused (on a "normal" Linux installation) by a missing CONFIG_DEVTMPFS=y in the kernel configuration. Does anybody know if the kernel configuration in the Nanoboot image has this setting?
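A quick, hedged way to check this from a shell on a running system. Whether /proc/config.gz exists depends on the kernel being built with CONFIG_IKCONFIG_PROC, so treat these commands as a sketch rather than something guaranteed to work on the Nanoboot kernel.
zcat /proc/config.gz | grep DEVTMPFS   # only works if the kernel exposes its config
grep devtmpfs /proc/filesystems        # listed here when CONFIG_DEVTMPFS=y
mount | grep devtmpfs                  # shows whether /dev is actually devtmpfs-mounted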
lowietje Posted November 11, 2014 #24
Hi Hetfield, as mentioned before, the problem is in a different part of the system; it has to do with all kinds of checks DSM does (checksumming, device checking, etc.). As Schmill said, don't touch this until it's sorted. Just my 2 cents. Cheers, Louis
Hetfield Posted November 12, 2014 #25
Many thanks lowietje. Indeed, let's hope that some developer can figure out what's happening here. I'd be more than happy to beta test it.