Lights, and the shadows they cast on objects, are fundamental elements of an animation scene. Their use should be adjusted carefully, since they are a source of significant computational complexity.
Some systems limit the number of light sources because they can saturate the processing power and memory of the computers that perform the rendering. For each new light added to a scene, the renderer must calculate its influence on every object present, as well as the shadow each object casts on the rest; the cost therefore grows roughly with the number of lights times the number of objects.
Shadows provide valuable information to the scene: they help define the shape of volumes and objects, but they also affect other perceptual aspects, such as the spatial layout and the apparent size of the elements. All of this influences the sense of reality the scene conveys.
Among their main contributions we can cite:
- They indicate the position of the light sources relative to the object.
- They convey the spatial position of the object relative to other objects.
- They give volume to objects.
- They create the perceptual atmosphere of the scene.
An object without a shadow can appear to be floating in space.
The shadow projected on the ground lets us sense the distance from the object to the ground. For example, a shadow attached to an object indicates that it is resting on the ground; we thus use the position of the shadow relative to the object to estimate its proximity to the ground, as well as to other elements of the scene.
Sometimes identical objects of different sizes can create a false perception of their relative positions, that is, an optical illusion.
An object that casts its shadow onto another element must be closer to the light source, which provides an important reference for determining its size and position. This reasoning assumes that we know the direction or position of the lights, which is not always trivial, especially when there are multiple light sources.
In the accompanying figure, rendered as an example without calculating the cast shadows, the objects, being identical, can be assumed to be the same size.
The “Gestalt” laws lead us to assume sizes and positions for the objects; they characterize the perceptual aspects of human vision and the corresponding pattern recognition.
In this simple scene we have no criteria other than shape on which to base our analysis, so we assume, supported by the laws of perspective, that the two cubes are the same shape and size and that the smaller one is simply farther away than the larger one. Nothing confirms this, yet we apply the reasoning automatically in our image recognition.
By casting shadows onto a third element, in this case the floor, we obtain more information about the elements of the scene.
We must distinguish between the shadow cast on the object itself (self-shadowing, like the shadow an arm casts on our own body) and the shadows cast on other objects in the scene. The computation involved is different in each case, although for the moment that is beyond the scope of this analysis.
If we see that one object casts its shadow over another, as in the figure where the small cube projects its shadow over the larger one, we begin to locate both its position and its size correctly. This new information adds criteria for determining the relative positions of the elements with some precision, although certain “distractors” can still be misleading.
For example, we have cropped the ground plane to create the impression, because of the perspective, that each of the cubes sits at one of its ends.
Again, based on the “Gestalt” laws, the effect seems even more pronounced: the apparently distant position of one of the objects makes us see it as smaller, even though nothing indicates that their sizes are different. What may seem like an intellectual game with simple volumes is reinforced when everyday objects are used.
Two trees should have similar heights, and so should two people. A person should be shorter than a tree. Everyday life gives us rules that shape perception and that need not be correct in every case.
By including shadows in the scene, it appears to fall into order and we perceive, or what amounts to the same thing, we see, how the small cube “floats” in front of the large one, helping us to interpret their sizes and relative positions correctly.
We therefore have a clear illustration of the great influence lighting has on our visual cognitive model.
In Blender all objects cast shadows, but in order to see them in the scene their calculation must be enabled. To do this we also indicate which objects “receive” them, that is, onto which other surfaces they are “projected”.
If we disable the shadows received on an object, they are not generated, which simplifies the calculation.
This activation is performed in the object's “Material” menus, in particular in the “Shaders” tab, which controls the parameters affecting the shading algorithms.
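For readers who prefer to script these settings rather than click through the GUI, here is a minimal sketch. It assumes the later bpy Python API of Blender Internal (2.6x/2.7x); the 2.49b release covered by this tutorial exposes a different Python API, and the material name is hypothetical.

```python
# Sketch: enabling cast and received shadows on a material
# (assumed 2.6x/2.7x bpy names; in 2.49b this is done in the GUI "Shaders" tab).
import bpy

mat = bpy.data.materials["CubeMaterial"]  # hypothetical material name

mat.use_cast_shadows = True   # this material casts shadows onto other objects
mat.use_shadows = True        # this material receives shadows cast by others
```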
We can see the difference between ordinary shadows and those produced by semi-transparent objects. The latter are less “hard”, creating only a slight dimming of the regions they affect.
As an example, see the difference on a cube to which a texture has been applied that partially affects its opacity, defining fully transparent strip-shaped regions on its surface.
If we only enable “Shadow”, the shadow is calculated assuming the surfaces are completely opaque. This may seem pointless, but it proves relevant when generating certain visual effects.
If we activate the option to calculate shadows through transparent surfaces, “TraShadow”, we see an effect similar to the shadows cast by window blinds.
In the case studied here the non-transparent part of the object is completely opaque; the result would be different if it were only partially opaque, because the shadows would then be softer.
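Scripted, the relevant toggle lives on the material of the object that receives the shadow. Again a sketch under the same bpy assumption (2.6x/2.7x property names, a hypothetical material called “FloorMaterial”), equivalent to pressing “TraShadow” in 2.49b:

```python
# Sketch: the RECEIVING material must accept shadows from transparent casters.
import bpy

floor_mat = bpy.data.materials["FloorMaterial"]  # hypothetical receiving material
floor_mat.use_shadows = True               # receive shadows at all
floor_mat.use_transparent_shadows = True   # receive shadows from transparent objects
```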
To create this effect we must first define the material properties of the casting object. In this case, as discussed, a texture has been used to give the object its transparency. We will modify the value of its “Alpha” transparency channel.
In the example we have used a null value (zero), indicating that the material is completely transparent. You can experiment with this value to see its influence on the object and on the shadow it produces.
The reader can also experiment with the use of different textures to complete the study.
In the “Map To” tab of the texture mapping settings we indicate that the texture is used both for the object's color (the “Col” option) and for determining its opacity (the “Alpha” option).
We can see an approximation of the result in real time in the preview tab, although the final result will depend on the lighting of the scene and the influence of surrounding objects.
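As a scripted counterpart to the GUI steps above, the following sketch sets the base alpha to zero and maps the texture to both color and alpha. It assumes the 2.6x/2.7x bpy property names and a hypothetical material called “StripedCube” whose striped texture sits in the first texture slot.

```python
# Sketch: base alpha plus texture-driven color and opacity
# (equivalent to the "Alpha" slider and the "Col"/"Alpha" buttons of "Map To").
import bpy

mat = bpy.data.materials["StripedCube"]  # hypothetical caster material
mat.use_transparency = True              # enable transparency for the material
mat.alpha = 0.0                          # base alpha: fully transparent

slot = mat.texture_slots[0]              # the striped texture, assumed in slot 0
slot.use_map_color_diffuse = True        # "Col": the texture drives the color
slot.use_map_alpha = True                # "Alpha": the texture drives the opacity
```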
The algorithm for calculating shading and shadows can be based on two completely different techniques. To modify these parameters we edit the settings associated with the lights themselves.
Once again we emphasize that analysing the programming techniques is not the subject of this study, although we should know that we can choose between a “simple” calculation based on projecting the scene from the point of illumination (shadow buffer) and a calculation based on ray tracing (“raytracing”), which is more complex and gives more realistic results.
In the latter case the shadow can be refined to different degrees at the cost of increasing computational complexity, which results in softening and blurring of the shadow edges.
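The choice between the two techniques is made per lamp. A sketch, again with assumed 2.6x/2.7x bpy names and a hypothetical spot lamp called “Spot”; in 2.49b this corresponds to the “Buf.Shadow” / “Ray Shadow” buttons in the lamp panel:

```python
# Sketch: choosing the shadow technique on a Spot lamp (Blender Internal).
import bpy

lamp = bpy.data.lamps["Spot"]            # hypothetical spot lamp

lamp.shadow_method = 'BUFFER_SHADOW'     # "simple": project the scene from the light
# lamp.shadow_method = 'RAY_SHADOW'      # raytraced: slower, more realistic results
```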
In the “Shadow and Spot” tab we modify the number of samples (“Samples”); higher values approximate the final result more closely but slow down image production.
In this panel you can also change certain other parameters of the shadow, among which we want to highlight its color. Examples are shown below.
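As an illustration of those parameters in script form, the sketch below raises the sample count and tints the shadow. Property names are again assumed from the 2.6x/2.7x bpy API, and the values are arbitrary examples, not recommendations.

```python
# Sketch: refining a raytraced shadow and tinting it, mirroring the
# "Samples" and shadow-color controls of the "Shadow and Spot" panel.
import bpy

lamp = bpy.data.lamps["Spot"]            # same hypothetical lamp as above

lamp.shadow_method = 'RAY_SHADOW'        # raytraced shadows support soft sampling
lamp.shadow_ray_samples = 8              # more samples: smoother shadow, slower render
lamp.shadow_soft_size = 1.0              # apparent light size, controls shadow spread
lamp.shadow_color = (0.2, 0.0, 0.3)      # a tinted (purple) shadow, for illustration
```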
Incidentally, have you ever stopped to think about what color the shadow of an object is? Does it depend on the color of the object? On the light that illuminates it? On the surrounding objects? On the object onto which it is projected?
An interesting question to discuss at another time.
Tutorial made for version 2.49b