that was what you already did with flyride, so nothing new to expect if you repeat it
in my edit from the last post i suggested a slightly different assemble attempt: it gives the /dev/sdX devices in the same order as they appear in the examine output, sdc as the 1st and the other two after it. i'm not sure if that makes a difference when using --force. it also contains --verbose, so maybe we get more information if it fails to assemble
it's just a slightly different variation of what you already tried, it can't make things worse
so you should try this next
mdadm --stop /dev/md2
mdadm --assemble --force --verbose /dev/md2 /dev/sdc5 /dev/sdb5 /dev/sdd5
mdadm --detail /dev/md2
the --create command would only come into play if the above does not work and we can't figure out why; i would like to know why --assemble --force does not work as it should
for the --create command: yes, the other try with --verbose should be ok
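just to illustrate what such a --create would roughly look like, this is a sketch only: the level, chunk size, metadata version, number of devices and the slot order (including a missing placeholder for the dead sda5) are assumptions here, every single value has to be taken from your mdadm --examine output, and --assume-clean is essential so the data itself is not touched

mdadm --create --verbose --assume-clean /dev/md2 --level=5 --raid-devices=4 --metadata=1.2 --chunk=64 missing /dev/sdc5 /dev/sdb5 /dev/sdd5

but do not run anything like this as long as we have not seen why --assemble --force fails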
only when you are sure the problem is located and the reason why sdc also dropped out has been found and removed
sda seems to be a bad drive already and is not used anymore
if the s.m.a.r.t. info of the other three drives is ok it might be safe to shut down, but if i were in your place i would leave it running in the state it is in now. if there were indications that ram, board, controller or psu are the source of the problem i would shut down to get a stable system, that's key for a recovery
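you can read the s.m.a.r.t. info with smartctl from smartmontools, the device names here are just examples for your three remaining drives, look especially at reallocated and pending sectors

smartctl -a /dev/sdb
smartctl -a /dev/sdc
smartctl -a /dev/sdd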
even if the assemble or create is successful you would not shut down, maybe a reboot at most
to me it is still unclear why sdc dropped out of the raid
did you check the logs to see when sda and sdc dropped out? did they drop at the same time, or had sda already failed much earlier without you noticing it?
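a quick way to look for that, the log path and keywords are just a guess (on most systems the md messages end up in dmesg or /var/log/messages, the kernel logs a line like "kicking non-fresh sdc5 from array" when it drops a member)

dmesg | grep -i -E 'md2|raid|kicking'
grep -i -E 'md2|kicking|fail' /var/log/messages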
in a more professional recovery environment (much more money involved) i guess one would make an image file from every disk and work with these (on a tested, stable system)
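such an image is normally made with dd or ddrescue onto a separate disk that is big enough, the target path here is just an example and must never be on one of the raid members

dd if=/dev/sdc of=/mnt/backup/sdc.img bs=1M conv=noerror,sync status=progress

or, better for a failing drive, ddrescue, which keeps a map file and can resume

ddrescue /dev/sdc /mnt/backup/sdc.img /mnt/backup/sdc.map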