yabba235

NEW DSM 5.1-5004 AVAILABLE !!


"init: Unable to mount /dev filesystem: No such device"

 

In other words, there is no volume available.

 

I think we need Sancome (nanoboot) and other experienced users to have a look and try to fix this and troubleshoot the problems that arise from testing. I'm willing to help but my Linux skills are somewhat outdated and not sufficient to address the problem.

 

 

I think spectre verified the above... :grin:

 

@Sancome. Would be happy to donate for the solution...


As far as I could tell, Synology started implementing some sort of checksum procedure in the patch mechanism that depends on the boot drive image (which doesn't exist in XPEnology). If there is a workaround, I couldn't find it. :oops:

 

Andrew


I think this one needs a little work; let's all take a step back and see if Synology will fix the issues. As far as I can tell, real DiskStations are having major issues. I own a 713+ and I'm not going near this one until I see some fixes in the forums! See http://forum.synology.com/enu/viewforum.php?f=250 for issues being reported. I own a bricked 209+ that was a victim of a buggy update (i.e. 4.2-3246); when I called Synology they tried to help me, but after about 15 minutes they suggested I buy a new one. I did buy the new 713+, but I also got my HP N54L and found Xpenology.com. :grin: Having both is nice because I can be good and bad at the same time!

Que? (Even Google won't translate it for me - sorry! :sad: )

 

Not difficult to translate :lol:

He says the HDD won't work, and that there are network problems as well :wink:


Hi Poechi,

 

The problem is that at boot they exist, but at some point they are renamed to hda; check /dev. I once wrote a script to create a symlink ....

But they are deleted every time :wink:

 

If sda is in /dev all is good and everything works, but after some time the entries are deleted ... I guess by some dev daemon :sad:
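Something along those lines might look like this (just a sketch to illustrate, not the original script, and untested on 5.1 - the drive letters are examples):

#!/bin/sh
# recreate the /dev/sdX entries as symlinks to the renamed /dev/hdX devices
for d in a b c d e; do
    [ -e /dev/sd$d ] || ln -s /dev/hd$d /dev/sd$d
done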

 

Cheers Louis


Responding to what some are saying about mounting existing volumes: can't you use PuTTY or a terminal and some Linux command-line tools? For example, "mdadm --examine --scan" to find out volume info, then maybe something like "mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1" for a typical RAID (not Synology Hybrid RAID). This would work as long as you can find the original mount point; then something like "mount /dev/md0 /mnt/raid" to mount it? Come on, one of you Linux geeks chime in here and bail me out. :geek:
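Putting those together, the sequence would be something like this (a sketch only - the md number, partitions and mount point are examples and will differ per box):

mdadm --examine --scan                                  # list the arrays found on the disks
mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1
mkdir -p /mnt/raid                                      # example mount point
mount /dev/md0 /mnt/raid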


@Poechi

 

I think I've already read it somewhere ... when entering DSM it moves /dev/sd* to /dev/hd*

So it doesn't seem to be the daemon ... but something in DSM ....

The question is what causes it .... I read something about a checksum somewhere.


@civilmann

 

Tried several options .... but still only volume1 is available...

The other volumes can be mounted but are not available in the DSM

 

The only option is to have a boot script which will reassemble the RAIDs, then activate the volume groups and remount the logical volumes.

 

Also to use these other volumes you will first have to create the shared folders on volume1.

Then you'll have to create the folder on, let's say, volume2, remove the folder on volume1, and then create a symbolic link from volume1 to the folder on volume2.
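For example, for a share called "test" (just a placeholder name), roughly:

mkdir /volume2/test                 # create the folder on the other volume
rm -rf /volume1/test                # careful: remove only the freshly created, empty folder on volume1!
ln -s /volume2/test /volume1/test   # link it back so DSM still sees it under volume1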

 

Or you might use a mount --bind to do it (or from fstab)
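The one-shot equivalent of the bind line in the fstab below would be something like:

mount --bind /volume2/test /volume1/test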

 

A lot of problems to get it to run ... and an upgrade will probably kill it all :wink:

 

Btw, on a clean install I couldn't get it to work yet .....

 

cheers Louis

 

example of script:

 

mdadm --assemble --force --no-degraded /dev/md3 /dev/hdd3 /dev/hde3 # vg1
mdadm --assemble --force --no-degraded /dev/md2 /dev/hdc3 # vg1000
vgchange -a y vg1           # activate the volume groups
vgchange -a y vg1000
cp /root/fstab /etc/fstab   # restore the saved fstab with the extra volumes
mount -a

 

Example of my test system's fstab:

 

none /proc proc defaults 0 0
/dev/root / ext4 defaults 1 1
/dev/vg1000/lv /volume1 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl 0 0
/dev/vg1/volume2 /volume2 ext4 usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0,synoacl 0 0
/volume2/test /volume1/test bind defaults,bind 0 0 # this is the "linked" shared folder


Hmmm, guess I forgot to tell you ... after DSM is started, volume1 is still removed.

So not really a great solution


The weird thing is, the volume (i.e. md2) is there. If you boot with a recovery CD (e.g. Ubuntu) you can assemble the RAID and the files are all still there (I installed 5.0 and populated the volume with some files etc. before trying 5.1). To me this means that the partitions and superblocks are all OK. I wonder if the problem is related to the "unable to mount /dev" message during boot - if /dev isn't there in time, then how can the kernel populate it at boot?
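For anyone who wants to check their data the same way, the rough sequence from an Ubuntu live CD would be something like this (a sketch - md numbers and volume names will vary per system, and you may need "sudo apt-get install mdadm lvm2" first):

sudo mdadm --assemble --scan          # find and assemble the arrays from the disks
sudo vgchange -a y                    # activate any LVM volume groups, if present
sudo mkdir -p /mnt/volume1
sudo mount /dev/md2 /mnt/volume1      # or /dev/vgX/lv if the volume sits on LVM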

 

Andrew


I am totally new to Synology/Xpenology so please be gentle on me...

 

Currently I'm on 5.0-4493 Update 7

 

Given all the problems reported by some people here, should I upgrade to 5.1?

If so - what is the method of upgrading (given that it's a major version)?

Sorry, I only know how to update a minor version...

 

A link to a step by step guide will be highly appreciated!


Hi yud - put simply, no. Don't touch 5.1 yet; there are various issues with it that the great minds here are still figuring out :smile:

 

You can always keep an eye on the status table at the bottom of the page (above the comments) on xpenology.nl to see the overall status of a release:

http://www.xpenology.nl/synology-released-dsm-5-1-5004/

 

Welcome to xpenology :smile:


This specific error can be caused (on a "normal" Linux installation) by a missing CONFIG_DEVTMPFS=y in the kernel configuration. Does anybody know if the kernel configuration in the Nanoboot image has this setting?
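If someone with a running Nanoboot box can check: assuming the kernel was built with /proc config support (CONFIG_IKCONFIG_PROC - it may well not be), something like this would show it; otherwise dmesg gives a hint, since devtmpfs logs a line at boot when it's built in:

zcat /proc/config.gz | grep DEVTMPFS
# or, if /proc/config.gz doesn't exist:
dmesg | grep -i devtmpfs    # "devtmpfs: initialized" appears when it's enabled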


Hi Hetfield,

 

As mentioned before, the problem is in a different part of the system; it has to do with all kinds of checks DSM does ... checksumming ... device checking ... etc. As mentioned by schmill, don't touch this until ....

 

just my 2 cents :smile:

 

cheers Louis


Many thanks lowietje, indeed let's hope that some developer can figure out what's happening here. I'd be more than happy to beta test it.

