Hi
A reply to an old posting, but someone may benefit.
You cannot recreate a realistic 3d model from a single photograph without supplying the software with
additional information. Intensity (black, grey, white) or colour does not give depth information because
other factors are involved: intensity may depend on depth, but it will also depend on colour (black reflects
less light) and on shadows (obscuration).
Information is lost when 3d is converted to 2d in the camera, and any attempt to get back to 3d must either
use additional information or guess, thereby producing mixed and inconsistent results.
Very advanced software can use cues: buildings, for instance, may be understood through vanishing points and
lines of perspective by intelligent algorithms, but this requires automatic (and therefore intelligent)
feature recognition, or else the user must help.
It is possible that focus and depth of field might be interpreted, but in general, unless the software understands
something about the scene (here we are talking pretty smart software) and unless some other cue is available (stereoscopic vision),
intensity or colour alone cannot recapture the lost third dimension.
The reliefs people are talking about here may approximate a 3d scene when viewed from one angle,
but typically parts that should be convex come out concave and vice versa. Each region may just about hang
together with some level of visual consistency, but taken as a whole the scene will usually contain contradictions.
I am sure everyone has seen the optical illusion where a convex carving of Einstein's face suddenly seems to pop out
concave as the carving slowly rotates - well, reliefs are often like that, but with mixed regions of concave and
convex at the same time. It might look right from one angle, but it's not a great 3d representation.
Some 3d software works with faces because it can work out (often with help) the positions of the nose,
mouth and cheekbones. It also understands facial structure, so a combination of smart analysis and stored
knowledge can flesh out a 3d face from a 2d one, but this involves more information than the photo itself contains.
In general, a dark area may be dark because of its angle relative to the light source and viewer, because
the material does not reflect light, or because it is hidden in shadow - none of these cues directly tells you
how close the dark region is to the camera lens. Reliefs of faces are often misleading: black hair may be pushed back when in fact it
is bouffant out front, and gleaming white teeth may extrude towards the camera, well in front of the darker lips, when the teeth are
actually behind the lips.
Simple, cheap software will generally work better with simple shapes - the ideal photo would be one of
a scene where intensity is proportional to distance, with the brightest points nearest the camera.
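To make the "intensity proportional to distance" assumption concrete, here is a minimal sketch of what such simple software effectively does. The function name and the tiny grayscale grid are illustrative, not taken from any particular product; the point is that brightness is mapped straight to height, which only happens to be right when the lighting cooperates.

```python
def intensity_to_height(image, max_height=10.0):
    """Map pixel intensity (0-255) linearly to height: brightest = nearest."""
    return [[(v / 255.0) * max_height for v in row] for row in image]

# A bright patch on a dark background maps to a raised plateau, which
# looks plausible - but a dark object under strong side lighting would
# have its shadows mapped to depth and come out wrong.
image = [
    [0,   0,   0,   0],
    [0, 255, 255,   0],
    [0, 255, 255,   0],
    [0,   0,   0,   0],
]
heights = intensity_to_height(image)
```

Note the mapping uses no scene knowledge at all, which is exactly why it fails on photos where darkness comes from colour or shadow rather than distance.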
Relief software can be useful for capturing simple graphic artwork, for instance a cartoon character: the low
number of colours and constant intensities (areas tend to be one colour and one intensity) mean that each region
will be correctly identified. It may come out at the wrong Z height, but this can be post-corrected (subject to the software)
relatively easily.
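The cartoon case can be sketched in a few lines. This is a hypothetical illustration, not any specific program: each flat colour is treated as one region and assigned an arbitrary Z level, and because the regions are cleanly separated, a wrong level can be post-corrected by remapping just that one colour.

```python
def regions_to_heights(image, level_map):
    """Replace each flat-colour label with the Z height assigned to it."""
    return [[level_map[colour] for colour in row] for row in image]

# Three flat colours: background "B", body "Y", eyes "K".
cartoon = [
    ["B", "Y", "Y", "B"],
    ["B", "Y", "K", "B"],
]
levels = {"B": 0.0, "Y": 2.0, "K": 1.0}
relief = regions_to_heights(cartoon, levels)

# Post-correction: the eye region came out too low, so raise only
# that colour's level and regenerate - no other region is disturbed.
levels["K"] = 3.0
relief = regions_to_heights(cartoon, levels)
```

This is why constant-intensity artwork suits these tools: region identification is trivial, and the only thing that can go wrong (the Z assignment) is easy to fix afterwards.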
The replies listed here that suggest "use software X" are referring to unsophisticated methods which will, in general,
not recapture the 3d information.