In this article we introduce the concepts needed to understand how one of these images is generated. The technique is presented through a sequence of "intermediate images" that illustrate, step by step, the properties of synthetic lighting models.
Given the space limitations and the introductory aim of this article, no algorithm is explored in depth; the focus is on presenting the main ideas. Readers interested in the details can consult, for example, the article on Raytracing (Wikipedia).
Ray tracing as a calculation model
Suppose we want to take a picture of a simple object, a sphere. As in conventional photography, the position of the photographer with respect to the object, called viewpoint "A" in our case, determines the view obtained. If we move the viewpoint, we get a different picture, projected onto the plane of the film or camera sensor.
We can assume that rays are generated from the viewpoint "A" that may intersect the objects in the scene or the projection plane (image plane). In the case of photography, the projection plane is located between the viewpoint (the photographer) and the objects to be photographed. In the case of a synthesized image, it is the objects of the scene that are assumed to lie between the viewpoint and the projection plane.
In the figure, a ray from "A" intersects at "B" when it encounters an object in front of it (the sphere in this case). Point "B" can be considered a "pixel" of our image; its color will therefore depend on the objects in the scene and the lighting conditions.
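The ray-sphere test behind this idea can be sketched as follows: a minimal Python version that solves the usual quadratic for the intersection distance. The function name and the sample values are illustrative, not part of any particular renderer.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest intersection
    with the sphere, or None if the ray misses it."""
    # Vector from the ray origin to the sphere center.
    oc = [o - c for o, c in zip(origin, center)]
    # Coefficients of the quadratic a*t^2 + b*t + c = 0.
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray escapes to "infinity": background color
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# Ray from viewpoint "A" toward a sphere of radius 1 centered 5 units away:
# it hits the front of the sphere at distance 4.
t = intersect_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

A negative discriminant means the ray misses the sphere entirely, which is exactly the "background" case discussed below.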
Suppose now that our scene is a bit more complex: we introduce several simple elements, such as a plane and several spheres, to incorporate effects such as the reflections of some elements on others.
The plane has an image associated with it. To keep things simple, the image is a white-and-green grid, which makes the results easier to interpret.
The calculation scheme can be structured in basic steps, each more or less complex to implement in a "rendering" program.
To explain it, I present a sequence of growing complexity that approximates the difficulty of the problem.
The rays that intersect any of the surfaces in the scene (represented in white) delimit them from the areas left with the background color (black in this case).
The first calculation step is thus to discriminate between the rays that hit the geometry directly and those that escape to "infinity" and therefore determine the background, the farthest plane of the scene.
The color of the object is used to tell apart the various surfaces making up the scene. The color is part of the concept of "material" associated with the object, along with other properties that are described below.
We must identify the intersection between the ray and the objects of the scene that is closest to the camera position. This is what solves the visibility problem: which objects are seen, and which are hidden behind them.
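A sketch of this visibility step, under the assumption that each object has already reported its candidate hit distance: keep only the hits in front of the camera and pick the nearest one. The helper name and the object labels are made up for illustration.

```python
def closest_hit(hits):
    """From a list of (distance, object) candidate intersections,
    discard those behind the camera and return the nearest one,
    or None if every candidate missed."""
    visible = [(t, obj) for t, obj in hits if t > 0]
    return min(visible, key=lambda h: h[0], default=None)

# The sphere at distance 4 hides the plane at distance 7.2;
# the negative distance is an intersection behind the camera.
hits = [(7.2, "plane"), (4.0, "sphere"), (-1.0, "behind camera")]
nearest = closest_hit(hits)  # (4.0, "sphere")
```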
Incorporating the "Lambert" lighting model (Wikipedia) brings a sense of volume, enhancing the sense of depth in the scene.
With a large number of lights, saturation toward white occurs when the different components that affect the objects are added together.
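Lambertian shading reduces to one dot product: brightness proportional to the cosine of the angle between the surface normal and the light direction. The sketch below, with illustrative names, also clamps each channel at 1.0 to show the saturation toward white just mentioned.

```python
def lambert(normal, light_dir, base_color, light_intensity=1.0):
    """Diffuse (Lambertian) shading: brightness proportional to the
    cosine of the angle between the surface normal and the light
    direction.  Both vectors are assumed normalized."""
    cos_angle = sum(n * l for n, l in zip(normal, light_dir))
    factor = max(0.0, cos_angle) * light_intensity
    # Clamp each channel at 1.0 (white): this is the saturation
    # that appears when many light contributions add up.
    return tuple(min(1.0, c * factor) for c in base_color)

# Light hitting the surface head-on keeps the full base color;
# a very intense light saturates the pixel to pure white.
head_on = lambert((0, 0, 1), (0, 0, 1), (0.2, 0.8, 0.2))
blown_out = lambert((0, 0, 1), (0, 0, 1), (0.2, 0.8, 0.2), light_intensity=10.0)
```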
Incorporating the "Phong" highlight model (Wikipedia) adds texture to the objects. Plastic-like objects produce concentrated, intense highlights, while rougher surfaces give fainter, more diffuse glare.
The Phong model computes the highlight from the angle of incidence of the light on the object, combined with the relative position of the observer with respect to the object being imaged.
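The Phong specular term can be sketched as follows: reflect the light direction about the normal and compare it with the viewing direction, raising the result to a "shininess" exponent. High exponents give the small, intense highlight of plastic; low ones give the faded glare of rough surfaces. The function name is illustrative.

```python
def phong_specular(normal, light_dir, view_dir, shininess):
    """Phong specular term.  All vectors are assumed normalized and
    pointing away from the surface; `shininess` controls how
    concentrated the highlight is."""
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    # Reflection of the light direction: R = 2(N.L)N - L.
    reflected = [2 * n_dot_l * n - l for n, l in zip(normal, light_dir)]
    # The highlight is strongest when the reflected ray points at the viewer.
    r_dot_v = max(0.0, sum(r * v for r, v in zip(reflected, view_dir)))
    return r_dot_v ** shininess

# Viewer exactly on the reflected ray: maximum highlight.
peak = phong_specular((0, 0, 1), (0, 0, 1), (0, 0, 1), 32)
# Viewer at 90 degrees from the reflected ray: no highlight at all.
none = phong_specular((0, 0, 1), (0, 0, 1), (1, 0, 0), 32)
```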
Shadows complete the depth information of the scene.
As long as shadows are not rendered, objects appear to "float" in the scene. Adding them helps identify the relative positions of the different objects, the distance to the "ground" of the scene being of special interest.
Special mention goes to the shadows cast by translucent elements, which lose their "hardness", and to the blurring that appears at their edges or contours.
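Hard shadows come from a simple secondary ray: from the surface point toward the light, checking whether any object blocks it. The sketch below assumes a scene made only of spheres, with an inline intersection test and a small epsilon to avoid a surface shadowing itself; all names are illustrative.

```python
import math

def sphere_hit(origin, direction, center, radius):
    """Distance to the nearest sphere intersection, or None.
    `direction` is assumed normalized, so the quadratic has a = 1."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-6 else None  # epsilon avoids self-shadowing

def in_shadow(point, light_pos, spheres):
    """Cast a shadow ray from `point` toward the light; the point is
    shadowed if any sphere blocks the ray before it reaches the light."""
    to_light = [l - p for l, p in zip(light_pos, point)]
    dist = math.sqrt(sum(v * v for v in to_light))
    direction = [v / dist for v in to_light]
    for center, radius in spheres:
        t = sphere_hit(point, direction, center, radius)
        if t is not None and t < dist:
            return True
    return False

# A sphere of radius 1 sits midway between a ground point and the light,
# so that point lies in shadow; a point off to the side does not.
spheres = [((0, 5, 0), 1.0)]
shadowed = in_shadow((0, 0, 0), (0, 10, 0), spheres)
lit = in_shadow((5, 0, 0), (0, 10, 0), spheres)
```

Soft shadows from translucent objects, mentioned above, would require attenuating rather than simply blocking the shadow ray.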
Reflection effects improve the overall brilliance, bringing more realism to the scene.
The reflection or mirror effect, which typically appears on polished surfaces, is particularly striking in hyper-realistic images, where it is usually used profusely.
The number of bounces computed for each ray used in the rendering is a parameter that can blow up the calculation process: the number of mathematical operations to perform grows exponentially with it, and the memory required in the process rises significantly in parallel.
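The exponential growth is easy to quantify with a toy count. If every hit spawns a fixed number of secondary rays (say one reflected and one refracted), the rays per primary ray form a geometric series in the bounce limit. This is only a back-of-the-envelope model, not the cost formula of any specific renderer.

```python
def rays_traced(max_bounces, branches=2):
    """Rays processed per primary ray when each hit spawns `branches`
    secondary rays (e.g. one reflected, one refracted):
    1 + b + b^2 + ... + b^max_bounces."""
    if max_bounces == 0:
        return 1
    return 1 + branches * rays_traced(max_bounces - 1, branches)

for n in (1, 2, 4, 8):
    print(n, rays_traced(n))  # 3, 7, 31, 511 rays per primary ray
```

This is why renderers cut the recursion off at a fixed depth instead of following every bounce.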
The effect of transparency on objects shows up, especially, as a reduction in the "hardness" of the shadows they cast.
It also influences the light thrown on the remaining elements visible through the transparent object, changing its intensity and color.
Together with reflections, these are among the most "expensive" effects from a computational point of view.
Refraction is the distortion seen when looking through semi-transparent surfaces, caused by the change of direction of rays passing through different media such as air, water, glass, etc.
If we look at a pencil partially submerged in water, it appears "broken". This shows up as a loss of continuity, with displacements and deformations in the resulting image.
This effect is therefore observable when there are media with different refractive indices, at whose boundaries the trajectories of the light rays are modified.
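The bending of the rays follows Snell's law, n1·sin(θ1) = n2·sin(θ2). A minimal sketch in terms of angles (a full renderer would work with direction vectors, but the angle form shows the idea), with an illustrative function name:

```python
import math

def refract_angle(theta_incident, n1, n2):
    """Snell's law: n1*sin(theta1) = n2*sin(theta2).  Returns the
    refracted angle in radians, or None when the ray cannot pass
    (total internal reflection)."""
    s = n1 * math.sin(theta_incident) / n2
    if abs(s) > 1.0:
        return None  # total internal reflection
    return math.asin(s)

# Light entering water (n ~ 1.33) from air (n = 1.0) at 45 degrees
# bends toward the normal, to roughly 32 degrees.
angle = refract_angle(math.radians(45), 1.0, 1.33)

# Going the other way at a grazing 80 degrees, the ray is trapped.
trapped = refract_angle(math.radians(80), 1.33, 1.0)
```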
This term refers to the blur effects produced by the superimposition of images, which are particularly useful for generating motion effects.
It is something we know from the world of conventional photography: when shooting a moving subject (or with a moving camera) at slow shutter speeds, multiple copies of the object overlap.
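One common way to simulate this is temporal sampling: render the same pixel at several instants within the shutter interval and average the results, so a moving object leaves superimposed copies just as with a slow shutter. The sketch below assumes a hypothetical `render_at(t)` callback standing in for rendering the pixel at time t.

```python
def motion_blur_pixel(render_at, t0, t1, samples=8):
    """Average a pixel's color over `samples` instants of the shutter
    interval [t0, t1].  `render_at(t)` is a stand-in for whatever
    routine renders the pixel at time t."""
    times = [t0 + (t1 - t0) * i / (samples - 1) for i in range(samples)]
    colors = [render_at(t) for t in times]
    n = len(colors)
    # Average each color channel across the samples.
    return tuple(sum(channel) / n for channel in zip(*colors))

# Toy example: an object covers the pixel only during the second half
# of the shutter interval, so the blurred pixel comes out half-bright.
pixel = motion_blur_pixel(
    lambda t: (1.0, 1.0, 1.0) if t >= 0.5 else (0.0, 0.0, 0.0),
    0.0, 1.0)
```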
This brief introduction gives some idea of the possibilities and complexity of the synthetic image calculation model known as "raytracing", a technique I will return to in future articles.
Some examples of the resulting images can be found in these links. They are variations on a theme.