Blender notes to self: Rendering and animation related
As I use Blender only occasionally, I’ve written down quite a few hints to myself for getting back to business. If this helps anyone else, so much the better.
I’ve also written two similar posts on this matter: A general post on Blender and a post on 3D printing.
Bones
- For a simple beginner’s example, see this page.
- A bone is simply a handle on which one can do the Grab / Rotate / Scale trio. It has a pivot point and a handle. The manipulations on the bone apply to all vertices in the bone’s Vertex Group, relative to the bone’s pivot point, and in proportion to their weight for that group.
- The Vertex Groups are listed under the object’s properties, under Object Data (icon is an upside down triangle of dots). In Weight Paint mode, this is where the group to paint weights for is selected.
- The Vertex Groups’ names are taken from the bones’ when weights are assigned automatically.
- The Armature modifier is added (automatically) to the object subject to the bones. Be sure that it’s the first modifier (uppermost in the stack), in particular before Subdivision Surface: we want to move the original mesh, not tear pieces off the subdivided one. Corollary: the bones’ deformations can be applied, like any modifier.
- Always check each bone’s motion alignment with its parent bone, and set the bone’s Roll parameter (in the bone properties, the tab with the bone icon) if necessary (in particular if the previous segment has been resized). The Roll sets the axis in space around which the bone rotates, and has to be adjusted manually in Edit Mode. It’s crucial for intuitive motion, and a wrong Roll is easy to miss: the bones seem to move right, but just a little off the desired direction. Just align the square of the bone symbol with the previous segment’s direction.
- The automatic weights aren’t all that good. In the end, there’s no way out but to assign the weights manually.
- Weight Paint mode is good for getting a picture of what’s going on, but assigning weights with it is really bad, in particular because it’s easy to mistakenly paint a completely unrelated vertex, leading to weird things happening.
- Instead, set the weights manually under the Object Data tab (just mentioned): select the vertices in Edit Mode, write the desired weight in the dedicated place under the Object Data tab, and click “Assign” (see the script sketch after this list).
- The Armature must be a parent of the object to be distorted. Extruded bones are children of the bones they’re extruded from.
- To move the bones around (in particular rotate), enter Pose mode (or just click “Pose” for the relevant armature in the Object Outliner).
- Zero the pose: Change to Pose mode, select all (A) and Pose > Clear Transform > All
- The bones’ influence is disabled only in Edit mode (unless enabled in the Armature modifier).
- When an object controlled by bones is duplicated, the vertex groups are duplicated as well, but not the bones. So both objects are controlled by the same bones, in an unnatural way (center of rotation on the original’s bones, etc.).
- If a vertex belongs only to one group, the weight is meaningless: If it belongs to the group, it will move 100% anyhow.
- If a vertex belongs to more than one vertex group, the weights are normalized to a total of 1.0. So it’s fine to have an overlap on the joints, but be careful with pushing it too far. Note that the bone after the joint is moved by virtue of parenting, so there’s no reason to assign weights after the joint; doing so only weakens the effect of the bone that is supposed to move that part.
- Note that vertex groups that have no bone don’t count for the proportional motion of the vertex. For a vertex that moves less than 100% on a single bone, also assign a second vertex group that belongs to a bone that will never move. This is good for a transition with a fixed part.
- Rotate bones with Individual Origins pivot point.
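For the record, the manual weight assignment can also be scripted. A minimal sketch with bpy, assuming the mesh is the active object; the group name and vertex indices are arbitrary placeholders:

```python
import bpy

# Put vertices 0..9 of the active mesh in a vertex group at weight 0.4,
# the scripted equivalent of selecting them and clicking “Assign”.
# Run in Object Mode; “UpperArm” is just a made-up group name.
obj = bpy.context.object
vg = obj.vertex_groups.new(name="UpperArm")
vg.add(list(range(10)), 0.4, 'REPLACE')
```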
Textures etc.
- Each face is related to a material. The first material is assigned to all faces by default; additional ones need to be assigned explicitly.
- Once a material is selected in the Material property button-tab, the Texture tab relates to it.
- Projecting an image: First mark a seam in Edit Mode: select a set of edges and Mesh > Edges > Mark Seam. Then select the faces to project (possibly all) and go Mesh > UV Unwrap… > Unwrap (or possibly Project from View or some other choice). A scripted version appears after this list.
- When using UV projection, the Type is “Image or Movie”, the Source is the file, and under “Mapping” it says Coordinates: UV (otherwise the mapping in Material view will be wrong).
- UV/Image Editor: Maps pieces of the image onto faces. Use it side-by-side with a 3D view in Edit Mode. Enable “Keep UV and Edit mode mesh selection in sync” for easy selection (somewhere in the middle of the bottom bar). Middle mouse button + drag pans the image view (instead of Shift-scroll or something).
- Multiple images can be sources for a single object, by virtue of generating multiple materials and assigning them to different faces. Each material is then linked to separate textures, each based upon a different image.
- Texture Paint: A little GIMP, just in 3D. The changes are updated in the source image(s). The big upper box is the brush selector. Most notable is “Clone”, which works like GIMP’s, with CTRL-click to select the source. Excellent for hiding seams.
- Careful with overlapping UV mappings on a single image with Texture Paint: One stroke will affect all mapped regions.
- Texture paint may manipulate several images in a single stroke, if this stroke covers regions sourced from different images.
- If Texture Paint is responding slowly and eating a lot of CPU, try reducing the Subdivision Surface level, if used. Too many faces aren’t good.
- Don’t forget to save the 2D images in the end!
- For copying a 3D shape from a 2D image, use Global mapping on the texture, along with a Top Orthographic view. The texture remains in place no matter how the object is twisted and turned, so it’s fairly easy to drag it along the image’s edges.
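The seam-and-unwrap sequence above can also be driven from a script. A sketch, assuming the mesh is the active object and the seam edges are already selected:

```python
import bpy

# Mark the selected edges as a seam, then unwrap the entire mesh:
# the scripted Mesh > Edges > Mark Seam and Mesh > UV Unwrap… > Unwrap.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.mark_seam(clear=False)
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.unwrap(method='ANGLE_BASED', margin=0.001)
bpy.ops.object.mode_set(mode='OBJECT')
```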
Rendering
- F12: Render Image (“Quick Render”). Also from top menu Render > Render Image. Return to 3D view with F11.
- Shading Smooth / Flat at the Tool Shelf doesn’t change the shape, only the way light is reflected.
- If the rendering result suffers from weird shadows and/or unexplained edge lines on a surface that’s supposed to be smooth, try Mesh > Normals > Recalculate Outside in Edit Mode, which may fix normals that were messed up by edits. A scripted version follows below.
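The scripted version of that normal fix, for reference (a sketch, assuming the broken mesh is the active object):

```python
import bpy

# Recalculate all normals to point outside: the scripted equivalent of
# selecting everything and Mesh > Normals > Recalculate Outside.
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.mesh.normals_make_consistent(inside=False)
bpy.ops.object.mode_set(mode='OBJECT')
```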
Cycles: How it works
If a realistic rendering result is desired, forget about Blender’s native render engine. It’s a lost battle of dirty tricks. The obvious way to reach a natural appearance is to simulate the light rays, which is what the Cycles engine does.
This is a very simplistic description of Cycles. In reality, it’s far more clever and efficient, so the real engine’s results are better than you would expect from the description below.
For each sample (i.e. an iteration of improving the rendered image), and for each pixel to be generated on the rendered image, the render engine traces the light ray, backwards. That is, from the camera to the source of light.
The initial leg is simple, as the angle of view is known and deterministic. By hitting something, I mean the first intersection between the ray’s line and some face in the mesh. If the ray hits nothing, we get black. If it hits a face, the engine examines the face’s material data.
When a face is hit, the face’s material’s shader is activated. If it’s a pure emission of light, that’s the final station, and the pixel’s value can be calculated. If it’s any other shader, it tells the render engine at what angle to continue, and how to modify the light source’s color once it’s reached. This modification is the material’s color, or the texture at the specific point that was hit.
And so it goes on, until a ray hits nothing, or a purely emitting light source is reached. Once the final station has been reached, the aggregated color modifications are applied, and there’s the final pixel value.
So why is randomness involved?
A diffusing surface collects light from all directions, and reflects it towards the camera. Since the light tracing can only follow one direction, it’s picked at random by the shader. So each sample consists of one such ray trace for each pixel, and each time a diffusing surface is reached, there’s a lottery. Hence the randomness. The exceptions are pass-through and purely reflective shaders (i.e. Glossy with Roughness 0), which have a deterministic ray bending pattern. A toy sketch of this loop follows below.
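To make this concrete, here’s a toy sketch in Python of the trace-backwards-and-tint loop. It has nothing to do with Blender’s actual code or API; the “scene” is a fake one-bounce stand-in, just to show how the tint aggregation and the lottery produce a converging average:

```python
import random
from dataclasses import dataclass

@dataclass
class Hit:
    color: tuple = None     # material/texture color at the hit point
    emission: tuple = None  # set if the face has a pure emission shader

def intersect(depth):
    """Fake stand-in for mesh intersection: the first leg always hits a
    diffuse orange face; after the bounce, half the rays (the lottery)
    reach a white emitter and the other half escape to nothing."""
    if depth == 0:
        return Hit(color=(0.8, 0.5, 0.3))
    return Hit(emission=(1.0, 1.0, 1.0)) if random.random() < 0.5 else None

def sample_pixel():
    """One sample: trace from the camera, tint at each diffuse hit,
    resolve once an emitter (or nothing) is reached."""
    throughput = (1.0, 1.0, 1.0)
    for depth in range(8):                  # cap the number of bounces
        hit = intersect(depth)
        if hit is None:
            return (0.0, 0.0, 0.0)          # hit nothing: black
        if hit.emission:                    # final station: apply the tints
            return tuple(t * e for t, e in zip(throughput, hit.emission))
        # A diffuse face: aggregate its color, continue with the next leg
        throughput = tuple(t * c for t, c in zip(throughput, hit.color))
    return (0.0, 0.0, 0.0)

# Averaging many samples converges to half the face’s color; the noise
# dies down as the sample count grows, just like Cycles’ samples do.
n = 20000
print([sum(sample_pixel()[i] for _ in range(n)) / n for i in range(3)])
```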
When the “Mix” shader is used, the mix rate is a real mix: Each shader gets its go, and the result is mixed. Try to mix an emission shader with a black diffusion.
So God may not play dice, but Cycles surely does.
Light is Everything
- DON’T use Blender’s Lamps unless you want everything to look like plastic. There’s a huge difference between lamps and objects (e.g. planes) with an emission shader (both in results and render time). Use the latter for a realistic look.
- In particular, a skin texture will never look right with lamp light. See below.
- Creating an invisible light source: Create any object, set its shader to Emission, and go to the “Object” properties (the icon is a yellow cube). At the bottom, there’s “Cycles Settings”. Disable the “Camera” checkbox in the Ray Visibility section (or script it; see the one-liner after this list).
- To avoid seeing these emission objects when editing (they get in the way all the time), put them in a different layer. Use Ctrl-click on the relevant layer to view it along with the current one when switching to render view.
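The camera-visibility trick boils down to a single property, so it’s easily scripted. A sketch, assuming the emitter is the active object and Cycles is the render engine (the property belongs to the 2.7x/2.8x-era Cycles addon):

```python
import bpy

# Hide the active object (e.g. an emission plane) from camera rays only;
# it keeps lighting the scene through the other ray types.
bpy.context.object.cycles_visibility.camera = False
```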
Node Editor
- For a texture image: Add an Image Texture element, and open the file. Then do the UV mapping (nothing will be visible before that). If there are multiple texture files, they are all mapped with the same UV map by default (or at all?).
- Bump map: Image Texture > Bump (input Height) > Diffuse BSDF (input Normal) > Material Output (input Surface). Displaces the position along the normal; “Distance” says how much. With “Invert” unchecked, a high image value means outwards. See the node-wiring sketch after this list.
- Use an image’s transparency: Generate a Transparent BSDF shader, and connect it to a Mix Shader’s upper input. The lower input goes to the regular (Diffuse BSDF?) shader. The Image Texture’s Color goes as usual to the regular shader, but its Alpha output goes to the Mix Shader’s Fac.
- Glossy BSDF: Mirror-like reflection when Roughness is set to zero, otherwise it’s diffusing the reflection.
- Velvet BSDF: Low angles between incidence and reflection yield low reflection, so it emphasizes smooth contours. Good in combination with a Diffuse shader for simulating human skin (compensates for the latter’s too-dark edges).
- Emission: Not just as a light source, but also a way to fake fill light.
- Color Ramp: Useful to turn an image into a one-dimensional range of colors, including Alpha, instead of manipulating the texture’s range.
- The Geometry input supplies Normal (which is after smoothing; pick True Normal to go without it) and Incoming (which is the direction of the light ray). Along with Converter > Vector Math set to Dot Product or Cross Product, the value output with these two combined depends on the angle between the incident ray and the normal. Together with Color Ramp, this allows an arbitrary reflection pattern (use it as Fac on some Mix Shader).
- The Voronoi texture (using “Cells”) is great for simulating an uneven, grainy surface.
- To get a generally misty atmosphere, go to the World tab in Properties, and under Volume select Volume scatter with white color and Density of 0.1 to 0.2. Anisotropy should be 0.
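Since node trees can be built from a script, here’s a sketch wiring the bump-map chain from the list above. It assumes the 2.7x-era default node tree (Diffuse BSDF + Material Output), and “//skin.png” is a placeholder path:

```python
import bpy

# Image Texture > Bump (Height) > Diffuse BSDF (Normal) > Material Output.
mat = bpy.data.materials.new("bumped")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
diffuse = nodes["Diffuse BSDF"]
tex = nodes.new("ShaderNodeTexImage")
tex.image = bpy.data.images.load("//skin.png")
bump = nodes.new("ShaderNodeBump")
bump.inputs["Distance"].default_value = 0.05  # how much displacement
links.new(tex.outputs["Color"], diffuse.inputs["Color"])
links.new(tex.outputs["Color"], bump.inputs["Height"])
links.new(bump.outputs["Normal"], diffuse.inputs["Normal"])
links.new(diffuse.outputs["BSDF"], nodes["Material Output"].inputs["Surface"])
```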
Achieving human skin appearance
Making a model look human and alive is the worst struggle of all. I’ve seen a lot of crazy attempts to add complicated shaders and stuff to reach a natural skin appearance. Even though I haven’t managed to make a face look natural (good luck with that), these are the insights I have reached.
- Rule zero: Use Cycles. Should be obvious.
- Rule number one: DO NOT USE LAMPS. All generation of light should be done with objects (most likely flat planes) with (white) emission shaders. Any inclusion of lamp objects makes everything look like plastic. Rendering convergence is indeed faster with lamps, but the result is disastrous, even when lamps are used for just fill light. In short, create real studio lighting.
- There’s no need for subsurface scattering and all those crazy shaders. These are a result of the impossible attempt to tweak the reflection into something realistic in response to the plastic feel of lamp light. When the light is done properly, plain shaders are enough. Actually, Subsurface Scattering makes a marginal difference, and for the worse (it deepens shadows, while actual skin somehow reflects in all directions).
- The Glossy part of flat skin (e.g. a leg) should be GGX (default) with roughness ~0.5, mixed 50/50 with Diffuse at roughness 0.4 (the latter doesn’t matter so much). Use the texture’s color for the Glossy shader as well (or mix it partly with white). A node sketch appears after this list.
- And here’s the really important part: Natural skin is full of small bruises and other uneven coloring that we barely notice with the naked eye. It’s when this uneven coloring is gone (a woman wearing tons of makeup, or a 3D model) that it looks like plastic. Therefore, the texture applied on the skin area (i.e. the coloring of the faces) should be aggressively uneven, with speckles and also wide areas of slight discoloring. Adding a leathery bump texture and/or wrinkles adds to the realistic look, but won’t fix the plastic feel unless the lighting is done right and the texture is alive.
- For the depth pattern of the skin, either use the Voronoi texture (see this page) for a leather-like pattern, or consider looking for images of elephant skin or something (the cell texture is similar). This is mainly relevant if closeups are made on the skin.
- Realistic eye: Be sure to add a cornea to the eye, mixing 90% transparent and 10% glossy shaders. The cornea’s ball should be 66% of the size of the eyeball, and brought to cover a little more than the iris. The reflection of the cornea brings the eye to life.
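The 50/50 Diffuse/Glossy mix from the flat-skin item above, as a node-building sketch (same 2.7x-era default node tree assumption; the texture wiring into both Color inputs is left out for brevity):

```python
import bpy

# Mix Diffuse (roughness 0.4) and GGX Glossy (roughness 0.5) 50/50.
mat = bpy.data.materials.new("skin")
mat.use_nodes = True
nodes, links = mat.node_tree.nodes, mat.node_tree.links
diffuse = nodes["Diffuse BSDF"]
diffuse.inputs["Roughness"].default_value = 0.4
glossy = nodes.new("ShaderNodeBsdfGlossy")
glossy.distribution = 'GGX'
glossy.inputs["Roughness"].default_value = 0.5
mix = nodes.new("ShaderNodeMixShader")
mix.inputs["Fac"].default_value = 0.5         # the 50/50 mix
links.new(diffuse.outputs["BSDF"], mix.inputs[1])
links.new(glossy.outputs["BSDF"], mix.inputs[2])
links.new(mix.outputs["Shader"], nodes["Material Output"].inputs["Surface"])
```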
Animation
- Animation adds an Animation object to the controlled object’s hierarchy (with an ArmatureAction sub-object for Armatures).
- Key = Nailing down some properties of some object for a given frame.
- Don’t expect to change the pose and have all changes recorded.
- Rather, in the Timeline Editor, select the desired bones in the armature for keying (all bones of the armature, probably), pick which properties are being keyed (possibly just Rotation for plain motion) and click the key icon (“Insert Keyframe”).
- Keying Set = The set of objects whose properties are being keyed.
- Dope Sheet: Accurate, concise, and gives control. Each channel is a property, each diamond is a key for that property. Thick lines between diamonds show that the property hasn’t changed along that interval.
- Selection of keys: With right-click. Selecting the top diamond (“Dope Sheet Summary”) selects all keys of a frame (the Armature’s diamond selects all keys of an armature etc.)
- It’s possible to Copy-Paste keys with the clipboard icons at the bottom (or simply CTRL-C, CTRL-V). “Copy” relates to just the selected keys.
- In the Dope Sheet, use Shift-D and then G (grab) to copy all keys to another frame. Also possible to just Grab keys to adjust the timing etc.
- “Insert Keyframe” = store the properties of the current pose at the current position. In the Timeline Editor, this adds diamonds in the channels that correspond to the selected bones (or adds these channels). It doesn’t change or delete keys for bones not selected.
- Workflow: First, select the properties that are going to be involved (all bones of an armature?), and create a key for them in the Timeline Editor. The rest of the work is done in the Dope Sheet: Scrub to the desired frame, change the pose, and Key > Insert Keyframe > All Channels (or with I). Or possibly just selected channels, to leave the other channels interpolating as before. A scripted equivalent appears after this list.
- Note the difference between how the Timeline and the Dope Sheet store the pose: the Timeline stores the properties of the selected bones only, while the Dope Sheet allows storing “All Channels”. Assuming that all relevant properties have channels in the Dope Sheet (it’s a good idea), “All Channels” captures the entire pose (and marks those that haven’t changed).
- Careful with jumping in time by accidentally clicking in the Timeline / Dope Sheet: it discards all unkeyed changes to the pose. To avoid this, “save your work” by inserting keyframes often.
- Don’t forget to move to a new frame before working on the next pose. If you forgot, copy the current frame’s keys to the clipboard, create a new keyframe with the current pose, and paste the previous keys into a slightly earlier frame. Then move (grab) the keys in time to their correct places.
- It’s possible (but usually pointless) to set the interpolation mode in the Dope Sheet (Key > Interpolation Mode). This controls the interpolation of the selected key until the next one. The default (set in User Preferences > Editing) is Bezier, which gives a natural feel.
- However, the “Constant” interpolation can be useful for camera properties, when it’s desired to hold the camera still and then jump to other parameters (i.e. a “cut”).
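And the scripted counterpart of inserting keyframes, for reference. A sketch, assuming an armature object named “Armature” with a bone named “Bone” (both placeholders):

```python
import bpy

# Key a bone’s rotation on three frames; each keyframe_insert() call is
# the scripted “Insert Keyframe” for that channel at that frame.
arm = bpy.data.objects["Armature"]
pbone = arm.pose.bones["Bone"]
pbone.rotation_mode = 'XYZ'
for frame, angle in [(1, 0.0), (25, 0.6), (50, 0.0)]:
    bpy.context.scene.frame_set(frame)
    pbone.rotation_euler.x = angle
    pbone.keyframe_insert(data_path="rotation_euler", frame=frame)
```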
Simulation
- Plain Physics fluid (simple example): It’s a 3D-grid-based simulation running in a limited space, which is enclosed by the object to which Physics > Fluid is attached with the “Domain” type (it forms the walls of the container as well as the limits of the simulated region). The Physics properties of this object are those determining the simulation (in particular the time scale in seconds, via the End time, and the real-life size in meters), and the baking is done on this object. Other objects which have Physics > Fluid attached participate according to their Types, e.g. Fluid (the object turns into fluid) or Obstacle (which limits the motion of the fluid). A script sketch follows.
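A sketch of the domain setup in script form, assuming the pre-2.80 fluid simulator (the API changed completely with Mantaflow later on) and that the container object is active; property names are per the 2.7x-era API, to the best of my memory:

```python
import bpy

# Attach Physics > Fluid with type Domain to the active object and set
# the main simulation parameters, then bake from the domain object.
mod = bpy.context.object.modifiers.new(name="fluidsim",
                                       type='FLUID_SIMULATION')
mod.settings.type = 'DOMAIN'
mod.settings.end_time = 4.0            # simulated time, in seconds
mod.settings.simulation_scale = 1.0    # real-world size, in meters
mod.settings.resolution = 100          # final grid resolution
bpy.ops.fluid.bake()
```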