To reduce the large number of strands that must be modeled, key/guide-hair techniques explicitly model a portion of all hair strands (the key hairs) and populate the rest based on the explicitly modeled ones.
Single Strand Interpolation creates hair strands around each explicitly modeled curve based on the shape of the curve. Wisps and generalized cylinder based techniques use this approach for generating a complete hair model.
Multi Strand Interpolation creates hair strands in-between a number of explicitly modeled curves by interpolating their shapes.
cons: limited range of hairstyles; lacks fine detail
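As a concrete illustration, multi-strand interpolation can be sketched as a barycentric blend of guide strands; the function name and array layout here are hypothetical, assuming every guide strand has the same number of sample points:

```python
import numpy as np

def interpolate_strand(guides, weights):
    """Blend several guide strands (same sample count) into one new strand.

    guides:  list of (n_samples, 3) arrays, the explicitly modeled key hairs
    weights: barycentric weights, one per guide, summing to 1
    """
    guides = np.asarray(guides, dtype=float)               # (n_guides, n_samples, 3)
    weights = np.asarray(weights, dtype=float).reshape(-1, 1, 1)
    return (weights * guides).sum(axis=0)                  # weighted sum per sample

# Three toy guide strands, each with 4 sample points
g1 = np.zeros((4, 3))
g2 = np.ones((4, 3))
g3 = 2 * np.ones((4, 3))
new = interpolate_strand([g1, g2, g3], [0.5, 0.3, 0.2])
```

Single-strand interpolation is the degenerate case of one guide with positional offsets added around it.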
curve
curve + thickness, e.g. sine waves, B-spline, Catmull-Rom spline
It ignores the details of the tubular shape of hair fibers, which is usually acceptable because, in most cases, the projected thickness of a hair strand is less than the size of a pixel on the screen.
The illumination is given by ambient, diffuse, and specular terms; θ is the angle between the incoming light and the normal, and ϕ is the angle between the normal and the view direction:
Ch = Ka La + ∑i Li [Kd cosθ + Ks sin^n(θ+ϕ−π)]
for dry and black hair, reduce the weight of Ks and Ka but emphasize Kd.
This lighting model still has its limitations, however, such as the lack of a back light visual effect.
compute shading for the three control lines of each wisp, then interpolate for the strands within the wisp.
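A minimal sketch of the shading formula above, transcribed directly; the function name and signature are made up, n is assumed to be an integer exponent, and no clamping of negative terms is applied:

```python
import math

def shade_segment(Ka, Kd, Ks, n, La, lights, theta, phi):
    """Evaluate Ch = Ka*La + sum_i Li * (Kd*cos(theta) + Ks*sin(theta+phi-pi)**n)
    for a single line segment, given the angles defined above."""
    c = Ka * La  # ambient term
    for Li in lights:
        diffuse = Kd * math.cos(theta)
        specular = Ks * math.sin(theta + phi - math.pi) ** n
        c += Li * (diffuse + specular)
    return c
```

For dry, black hair the note above suggests a small Ks and Ka with a dominant Kd.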
Modified shadow Z-buffering method (shadow mask).
Store 3D data (x, y, z) instead of depth values. Assigning an effective radius to every point, we take the union of these balls, forming a mask-like volume.
Complexity: O(# objects) → O(size of 2D array).
Artificial objects (e.g. hair pins, hair bands, ponytails, and braids)
impose a restriction on some wisps: these wisps are required to pass through a region of 3D space.
For example, we can get a ponytail by letting the control lines of a group of wisps pass through a small circle behind the CG character's head.
Handling a braid is a little tricky. A braid is heavier and stiffer than a single hair strand or wisp. It basically consists of three wisps, whose positions are exchanged in a repeated loop. (However, they do not really have to be three individual wisps: they may be three branches of one wisp, or each may consist of several wisps.)
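A toy parametric sketch of three braid strand centerlines (an illustration, not the method described here): each strand follows a phase-shifted sinusoid while descending along the braid axis:

```python
import math

def braid_strand(i, n_samples=100, amplitude=1.0, frequency=2.0):
    """Toy centerline for strand i (0, 1, or 2) of a three-strand braid:
    a phase-shifted sinusoid in x/z, descending in y."""
    phase = 2.0 * math.pi * i / 3.0
    pts = []
    for k in range(n_samples):
        t = k / (n_samples - 1)
        x = amplitude * math.sin(2.0 * math.pi * frequency * t + phase)
        z = 0.5 * amplitude * math.cos(2.0 * math.pi * frequency * t + phase)
        y = -t  # descend along the braid axis
        pts.append((x, y, z))
    return pts

strands = [braid_strand(i) for i in range(3)]
```

A real braid additionally needs the over/under crossing order resolved, which this sinusoidal approximation does not capture.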
With a space curve C(t) and a series of contours R(θ,t) along the curve, the GC defines the boundary of a hair cluster when r = 1:
S(r,θ,t) = C(t) + r R(θ,t)[N(t)cosθ + B(t)sinθ]
where the coordinate frame of the curve is given by T(t), the tangent vector; N(t), the principal normal vector; and B(t), the bi-normal vector. θ is the angle around the tangent vector T, starting from the principal normal N.
auxiliary scale: add scales sN, sB in the two orthogonal directions.
auxiliary twist: θ̂ = θ + W(t), updating the rotation angles of the contours along the curve.
S(r,θ,t) = C(t) + r R(θ,t)[sN N(t)cosθ̂ + sB B(t)sinθ̂]
When t=0, the hair strand is on the scalp (head model). Catmull-Rom spline curve for smoothness.
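The generalized-cylinder surface point, including the auxiliary scale and twist, could be evaluated roughly like this; the callables C, T, N and the defaults are assumptions for illustration:

```python
import numpy as np

def gc_point(C, T, N, r, theta, t, R=lambda th, t: 1.0,
             sN=1.0, sB=1.0, W=lambda t: 0.0):
    """Evaluate S(r, theta, t) = C(t) + r*R(theta,t) *
       [sN*N(t)*cos(th_hat) + sB*B(t)*sin(th_hat)], with th_hat = theta + W(t).

    C, T, N are callables returning the centerline point, unit tangent,
    and unit principal normal at parameter t; B is their cross product.
    """
    th = theta + W(t)                      # auxiliary twist
    B = np.cross(T(t), N(t))               # bi-normal from the frame
    return (np.asarray(C(t)) +
            r * R(th, t) * (sN * np.asarray(N(t)) * np.cos(th) +
                            sB * B * np.sin(th)))

# Straight vertical cluster: centerline along z, circular unit contour
p = gc_point(C=lambda t: (0.0, 0.0, t),
             T=lambda t: (0.0, 0.0, 1.0),
             N=lambda t: (1.0, 0.0, 0.0),
             r=1.0, theta=0.0, t=0.5)
```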
multi-resolution: after defining the GC, the artist subdivides each hair cluster, adding more detail until the desired appearance is achieved.
Rendering:
OpenGL poly line
shading: use the model for line segment (see above) and disable the lighting calculation in OpenGL.
The shaded color is computed at each point of the line segments and colors are interpolated with OpenGL.
Other shading models such as that in [Goldman 1997] could be equally applicable.
self-shadowing: use opacity shadow maps algorithm [Kim and Neumann 2001], a fast approximation of deep shadow maps [Lokovic and Veach 2000].
since shadows are view-independent, they can be computed once and cached for reuse while the user interactively changes views.
antialiasing: drawing order based on the distance from the camera, inspired by [Levoy and Whitted 1985].
OpenGL antialiased line drawing option alone is not sufficient.
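The distance-based drawing order might be sketched as a simple back-to-front sort of segment midpoints (a simplification: correct visibility for mutually crossing strands needs finer granularity):

```python
import numpy as np

def back_to_front_order(segment_midpoints, camera_pos):
    """Return indices of line segments sorted far-to-near from the camera,
    so that alpha-blended antialiased strokes composite correctly."""
    mids = np.asarray(segment_midpoints, dtype=float)
    cam = np.asarray(camera_pos, dtype=float)
    dist = np.linalg.norm(mids - cam, axis=1)
    return np.argsort(-dist)  # farthest segment drawn first

mids = [(0, 0, 1), (0, 0, 5), (0, 0, 3)]
order = back_to_front_order(mids, (0, 0, 0))
```

Since shadows are view-independent but this ordering is not, the sort must be redone whenever the camera moves.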
Layers of polygonal meshes {Fk}, k = 0…N, which typically contain quadrilateral or triangular faces, with no restrictions on the types of polygons used. Each face at layer k has a one-to-one correspondence with a face at layer k+1.
support Face Delete, Layer Insert/Remove, Edge/Vertex Separate.
Volumetric based
basic element: voxel grid or bounding box.
pros: an efficient, spatially aware intermediate representation for encoding hair, useful for all sorts of downstream tasks.
cons: when hair animates, such volumes should be updated for every frame, making pre-filtering inefficient.
3D volumetric field on a grid of shape nx×ny×nz = 128×192×128; each hair model is aligned to a unified bust model within a volumetric bounding box.
occupancy field (binary True and False), confidence map for reverse work
flow vector field / orientation field
see above
hierarchical (top-down approach)
see above multi-resolution
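A hedged sketch of building the occupancy and orientation fields by rasterizing strand polylines into a voxel grid; the grid size, bounding box, and per-voxel averaging are illustrative choices, not an exact published procedure:

```python
import numpy as np

def voxelize_strands(strands, grid_shape=(16, 16, 16),
                     bbox_min=(0, 0, 0), bbox_max=(1, 1, 1)):
    """Rasterize polyline strands into a binary occupancy field and an
    orientation field (mean unit tangent per occupied voxel)."""
    lo, hi = np.asarray(bbox_min, float), np.asarray(bbox_max, float)
    occ = np.zeros(grid_shape, dtype=bool)
    ori = np.zeros(grid_shape + (3,), dtype=float)
    cnt = np.zeros(grid_shape, dtype=int)
    for strand in strands:
        pts = np.asarray(strand, float)
        tangents = np.diff(pts, axis=0)            # segment direction vectors
        for p, tg in zip(pts[:-1], tangents):
            norm = np.linalg.norm(tg)
            if norm == 0:
                continue
            idx = np.clip(((p - lo) / (hi - lo) * grid_shape).astype(int),
                          0, np.array(grid_shape) - 1)
            i, j, k = idx
            occ[i, j, k] = True
            ori[i, j, k] += tg / norm              # accumulate unit tangents
            cnt[i, j, k] += 1
    nonzero = cnt > 0
    ori[nonzero] /= cnt[nonzero][:, None]          # average per voxel
    return occ, ori

occ, ori = voxelize_strands([[(0.5, 0.5, 0.1), (0.5, 0.5, 0.2)]])
```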
Vector field (points)
For each grid vertex vi, the Laplace operator Δ(ti) = ∑_{j∈N(i)} (tj − ti), as adopted in mesh editing for smooth deformation. Minimize, with constraint index set C,
E(t1, …, tn) = ∑i ||Δ(ti)||² + ω ∑_{i∈C} ||ti − ci||²
weighted tangent vectors, grad(f)=∇f
iteratively re-weighted least squares on a linear system
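The energy above can be minimized as a single least-squares solve; this dense sketch handles a scalar field on a small graph (a real system would be sparse, and the re-weighting loop is omitted):

```python
import numpy as np

def smooth_field(neighbors, constraints, n, omega=10.0):
    """Minimize E(t) = sum_i |Delta(t_i)|^2 + omega * sum_{i in C} |t_i - c_i|^2
    for a scalar field t on a graph, stacked as one least-squares system.

    neighbors:   dict i -> list of neighbor indices
    constraints: dict i -> target value c_i
    """
    rows, rhs = [], []
    for i in range(n):
        row = np.zeros(n)
        for j in neighbors.get(i, []):
            row[j] += 1.0
            row[i] -= 1.0          # Laplacian row: sum_j (t_j - t_i)
        rows.append(row)
        rhs.append(0.0)
    w = np.sqrt(omega)             # soft constraints, weighted by sqrt(omega)
    for i, c in constraints.items():
        row = np.zeros(n)
        row[i] = w
        rows.append(row)
        rhs.append(w * c)
    t, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return t

# Path graph 0-1-2 with endpoints constrained to 0 and 2
t = smooth_field({0: [1], 1: [0, 2], 2: [1]}, {0: 0.0, 2: 2.0}, 3, omega=1000.0)
```

A large ω makes the constraints nearly hard; the interior value settles at the average of its neighbors.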
(achieved) First, when hair roots are close on the scalp surface, corresponding strands tend to form wisps with similar geometry.
(aim to solve) Second, every hair strand has a curved trajectory in the hair volume, and spatially close portions of trajectories, even those from different hair roots, tend to share similar tangent vectors.
density modulation functions for soft regions D(x)∈(0,1).
bias to push up/down, variance gain (gradient flatter or steeper), noise, turbulence, ∣x∣, sin(x).
Hypertexture = fn(…f2(f1(f0(D(x)))))
Pros: a significant reduction of storage requirements + naturalness of the transparent property of hair regions.
Cons: relatively slow rendering performance, O(n³), typically a few hours as originally reported (1999); despite its power, one has no explicit control over the hair's shape.
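A small sketch of composing density modulation functions in the hypertexture style, using Perlin's standard bias/gain definitions; the soft-region density D here is a made-up 1D example:

```python
import math

def clamp01(x):
    return max(0.0, min(1.0, x))

def bias(b):
    """Push density values up (b > 0.5) or down (b < 0.5); bias(b)(0.5) == b."""
    return lambda d: clamp01(d ** (math.log(b) / math.log(0.5)))

def gain(g):
    """Make the density ramp around 0.5 flatter or steeper."""
    def f(d):
        if d < 0.5:
            return bias(1.0 - g)(2.0 * d) / 2.0
        return 1.0 - bias(1.0 - g)(2.0 - 2.0 * d) / 2.0
    return lambda d: clamp01(f(d))

def hypertexture(density, *mods):
    """Hypertexture = f_n(...f_1(f_0(D(x)))): compose modulation functions."""
    def D(x):
        d = density(x)
        for f in mods:
            d = f(d)
        return d
    return D

base = lambda x: clamp01(1.0 - abs(x))   # toy soft-region density D(x)
tex = hypertexture(base, bias(0.7))
```

Noise, turbulence, |x|, and sin(x) plug in the same way as additional modulation stages.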
Embed a volume density model into a generalized cylinder.
Each cluster is first approximated by a polygonal boundary (mask).
When a ray hits the polygonal surface, predefined density functions are used to accumulate density. By approximating the high frequency detail with volume density functions, the method produces antialiased images of hair clusters.
Cons: this method does not allow changes in the density functions, making hairs appear as if they always stay together.
Ideal flow with elements: stream + source / vortex
incompressible (density won't change)
steady, inviscid (no viscosity)
irrotational
Laplace equation (PDE) ∇2ϕ=0, where potential ϕ and velocity V=∇ϕ.
complex flow, i.e. a linear combination ∑i Vi defined by the user (since the sum of solutions to the PDE is also a valid solution)
small sources along the boundary panels pi, each contributing λj: ∑j λj Sj (obstacle avoidance)
V(pi)·n̂i = bi, so the streamline will be parallel to the boundary surface, except at the starting point.
solve for λj, given that the other variables are constant.
compute the flow on a coarse grid and subdivide for points within cells (cheaper computation); combine with a volume density function (noise / perturbations).
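The flow elements can be sketched as closed-form solutions of Laplace's equation that superpose linearly; these are the textbook 2D point-source and point-vortex fields, not any paper's exact panel formulation:

```python
import numpy as np

def source_velocity(p, center, strength):
    """Radial velocity of a 2D point source (a solution of Laplace's equation)."""
    d = np.asarray(p, float) - np.asarray(center, float)
    r2 = np.dot(d, d)
    return strength * d / (2.0 * np.pi * r2)

def vortex_velocity(p, center, strength):
    """Velocity of a 2D point vortex: pure rotation around `center`."""
    d = np.asarray(p, float) - np.asarray(center, float)
    r2 = np.dot(d, d)
    return strength * np.array([-d[1], d[0]]) / (2.0 * np.pi * r2)

def combined_velocity(p, elements):
    """Linear combination of flow elements is still a valid potential flow."""
    return sum(f(p) for f in elements)

v = combined_velocity((1.0, 0.0),
                      [lambda p: source_velocity(p, (0.0, 0.0), 2.0 * np.pi),
                       lambda p: vortex_velocity(p, (0.0, 0.0), 2.0 * np.pi)])
```

Solving for the panel strengths λj so that V(pi)·n̂i = bi is then a linear system in these same superposed fields.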
Rendering
volume rendering / visualization techniques
Texture based
Texture-map the surfaces with hair images and use alpha mapping to create the illusion of hair strands.
Pose transformation (to a fixed standard head model) via translation, rotation, and/or scaling (with a symmetric flip at the end).
Mesh → uniform sampling → hair strands → 3D orientation field → hair strands
Each strand is represented as a set of equally spaced sample points. The 3D orientation field is smoothly diffused into the entire 3D volume, as proposed by Paris et al. [2008], and the diffused field guides the strands.
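The orientation-field diffusion could be approximated by Jacobi-style neighbor averaging with known voxels held fixed; this is a sketch, not Paris et al.'s exact solver (np.roll wraps at the boundary, which is acceptable here):

```python
import numpy as np

def diffuse_orientation(field, known_mask, iterations=200):
    """Smoothly diffuse sparse orientation vectors into the whole volume by
    repeated neighbor averaging (a Jacobi-style Laplace solve), keeping
    known voxels fixed.

    field:      (nx, ny, nz, 3) array, nonzero only where known_mask is True
    known_mask: (nx, ny, nz) boolean array of voxels with known orientation
    """
    f = field.astype(float).copy()
    for _ in range(iterations):
        avg = np.zeros_like(f)
        for axis in range(3):                     # average the 6 spatial neighbors
            avg += np.roll(f, 1, axis=axis) + np.roll(f, -1, axis=axis)
        avg /= 6.0
        f = np.where(known_mask[..., None], field, avg)  # re-pin known voxels
    return f

# Tiny 4x1x1 volume: orientation known at both ends, diffused into the middle
field = np.zeros((4, 1, 1, 3))
field[0, 0, 0] = [1.0, 0.0, 0.0]
field[3, 0, 0] = [1.0, 0.0, 0.0]
mask = np.zeros((4, 1, 1), dtype=bool)
mask[0, 0, 0] = mask[3, 0, 0] = True
out = diffuse_orientation(field, mask)
```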
opacity shadow maps algorithm [Kim and Neumann 2001]
density clustering [Mertens et al. 2004]
deep opacity maps [Yuksel et al. 2008]
self-shadowing is essential for volumetric hair.
antialiasing
since hair strands are very thin, it is important to draw them smoothly with correct filtering.
the result depends on the drawing order [McReynolds 1997]
color and texture
artificial objects (e.g. hair pins, hair bands, ponytails, and braids)
interpolated colors
level of detail (LOD)
Blender conversion steps
Verify the Converted Hair
After converting hair (via Object > Convert > Mesh or similar), inspect the result in Edit Mode.
You will see vertices along the strands but no faces or thickness.
Convert to Curve
Select the hair mesh in Object Mode.
Go to the Object menu: Object > Convert To > Curve from Mesh/Text.
This converts the vertex strands into curves, making them easier to manipulate.
Add Thickness to the Curves
Go to the Object Data Properties (the green curve icon in the Properties Panel).
Under the Geometry section, locate the Bevel subsection.
Adjust the Depth value to add thickness to the strands.
Use the Resolution slider to refine the shape.
Convert Back to Mesh (Optional)
Once the curves have the desired thickness, convert them back to a mesh:
Object > Convert To > Mesh from Curve/Meta/Surf/Text.