Ray Marching Volume Cloud Rendering (I): Theory

I came across the video Coding Adventure: Clouds on YouTube. Since I have been wanting to do a small 3D graphics practice project recently, I will record the principle and implementation of Ray Marching volume cloud rendering here.

SDF & Ray Marching

Ray Marching is often translated literally as "light stepping". It solves the problem of reconstructing spatial information and shading fragments for SDFs in 3D space.

SDF

We can use a formula to describe a circle whose center is at the origin and whose radius is r:

 length(pos - vec2(0.0, 0.0)) <= r

A formula like this is generally called an SDF (Signed Distance Field). Because the distance it returns is signed, it is usually written in the following form:

 // Judge whether the point is inside the circle by the sign of the return value
 float sdCircleTest(vec2 pos) {
     return length(pos - vec2(0.0, 0.0)) - r; // r is the circle's radius
 }

When extended to 3D, it describes a sphere:

 float sdSphereTest(vec3 pos) {
     return length(pos - vec3(0.0, 0.0, 0.0)) - r;
 }

In short, the conventional approach describes a model with vertices/triangles/meshes, while an SDF describes it with a formula. The implementations differ, but the purpose is the same.

Tip: multiple SDFs can be combined to easily achieve certain effects, such as the Boolean operations in 3D modeling:

 subtract = max(-sdfA, sdfB)
 intersection = max(sdfA, sdfB)
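As a concrete illustration, here is a minimal GLSL-style sketch that carves one sphere out of another with the subtraction operator above (the function names, centers and radii are invented for this example):

 // Sphere SDF centered at `center` with radius `r`.
 float sdSphere(vec3 pos, vec3 center, float r) {
     return length(pos - center) - r;
 }

 // Boolean subtraction: keep B, remove the part that overlaps A.
 float opSubtract(float sdfA, float sdfB) {
     return max(-sdfA, sdfB);
 }

 // Scene SDF: a large sphere with a smaller, offset sphere carved out of it.
 float sdScene(vec3 pos) {
     float big   = sdSphere(pos, vec3(0.0, 0.0, 0.0), 1.0);
     float small = sdSphere(pos, vec3(0.8, 0.0, 0.0), 0.6);
     return opSubtract(small, big);
 }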

Many impressive works on ShaderToy are built on SDFs.

Ray Marching

The missing Z

There is a problem when SDFs are extended to 3D space: if SDF shapes are drawn directly, the fragments have no "depth", so the occlusion relationships, lighting, shadows, and so on between multiple SDF shapes cannot be resolved.

Although the 3D coordinates of fragments in the opaque queue can be recovered from the depth map, that only solves occlusion between the opaque queue and the SDF; it does not solve occlusion between multiple SDF shapes.

Reconstructing 3D information for fragments in the SDF system

In order to truly "render" 3D models, we need to know the "depth", "color", and other basic information of each fragment under the SDF system. This is exactly what Ray Marching does.

This is not hard to understand if you are familiar with the 3D rendering process: in a sense, the final screen image can be regarded as a picture pasted onto the camera's near clipping plane. Based on this, we construct a vector that points from the camera origin to a fragment of the near-plane image:

Use this vector to cast a ray into the scene and advance along it, evaluating the SDF over each short segment until the ray hits an object or reaches the maximum length. This advancing is called "stepping" (marching), and the distance travelled so far is the depth:
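Here is a minimal sketch of that stepping loop, assuming a scene SDF named sdScene like the one sketched earlier (the step count, maximum distance and hit threshold are arbitrary illustration values):

 // March from the ray origin along rayDir until the SDF reports a hit
 // or the maximum distance is exceeded. Returns the travelled distance ("depth").
 float rayMarch(vec3 rayOrigin, vec3 rayDir) {
     const int   MAX_STEPS   = 256;
     const float MAX_DIST    = 100.0;
     const float HIT_EPSILON = 0.001;

     float depth = 0.0;
     for (int i = 0; i < MAX_STEPS; i++) {
         vec3 pos = rayOrigin + rayDir * depth;
         float d = sdScene(pos);       // distance from pos to the nearest SDF surface
         if (d < HIT_EPSILON) break;   // hit: close enough to the surface
         depth += d;                   // advance; the SDF value is a safe step size
         if (depth > MAX_DIST) break;  // miss: the ray left the scene
     }
     return depth;
 }

In this sketch each step advances by the SDF value itself, which is the largest step guaranteed not to skip over a surface (often called sphere tracing); advancing by a fixed short segment as described above also works, it is just slower.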

The stepping process above is carried out for every fragment. In the end, the 3D information of each fragment under the SDF system is reconstructed, and some formulas can then be used to compute the normal and other parameters. The subsequent occlusion and lighting calculations are the same as in the traditional rendering pipeline.
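For example, one common way to obtain the normal (again a sketch, using the hypothetical sdScene from above) is to take the gradient of the SDF with central differences:

 // Approximate the surface normal as the normalized gradient of the SDF,
 // estimated with central differences around the hit point.
 vec3 estimateNormal(vec3 p) {
     const float e = 0.001;
     return normalize(vec3(
         sdScene(p + vec3(e, 0.0, 0.0)) - sdScene(p - vec3(e, 0.0, 0.0)),
         sdScene(p + vec3(0.0, e, 0.0)) - sdScene(p - vec3(0.0, e, 0.0)),
         sdScene(p + vec3(0.0, 0.0, e)) - sdScene(p - vec3(0.0, 0.0, e))
     ));
 }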

Volume cloud

Volume clouds mainly depend on two things:

  • Worley Noise: Data
  • Texture3D: interpolation

Worley Noise

Worley Noise is a fast way of generating Cellular Noise, a type of noise whose pattern resembles cells.

The generation process of Cellular Noise can be briefly summarized as follows (a small code sketch follows the list):

  1. Randomly generate a set of feature points
  2. For every point in space, compute the distance from that point to each feature point, take the minimum of those distances, and store it
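Here is a minimal 2D GLSL-style sketch of this process, in the style of The Book of Shaders article linked below (the hash function is just one common choice, and the search is limited to the 3x3 neighboring cells, which already anticipates the grid optimization mentioned next):

 // Hash a grid cell id into a pseudo-random feature point inside that cell (step 1).
 vec2 randomPoint(vec2 cell) {
     return fract(sin(vec2(dot(cell, vec2(127.1, 311.7)),
                           dot(cell, vec2(269.5, 183.3)))) * 43758.5453);
 }

 // Cellular noise at p: distance to the nearest feature point (step 2).
 float cellularNoise(vec2 p) {
     vec2 cell = floor(p);
     vec2 frac = fract(p);
     float minDist = 1.0;
     // Only the 3x3 neighborhood of cells can contain the nearest feature point.
     for (int y = -1; y <= 1; y++) {
         for (int x = -1; x <= 1; x++) {
             vec2 neighbor = vec2(float(x), float(y));
             vec2 feature  = neighbor + randomPoint(cell + neighbor);
             minDist = min(minDist, length(feature - frac));
         }
     }
     return minDist;
 }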

Steven Worley made some optimizations to Cellular Noise that greatly reduce the amount of computation, which is why it is so widely used. If you are interested, you can look up his paper, titled A Cellular Texture Basis Function.

After Worley, newer algorithms have further improved the result and reduced the amount of computation. The article on cellular noise in The Book of Shaders is very well written and worth reading:

Cellular Noise

Texture3D

Texture3D is a special texture format in Unity. It is composed of multiple 2D images and can be sampled and interpolated along all three axes (X, Y and Z).

A vivid analogy is a CT scan: by stacking the cross-sectional images of a 64-slice CT scan and interpolating the gaps between them, a 3D organ can be reconstructed:

Worley Noise, Texture3D, and volume clouds

So, what is the relationship between the three?

Think about Worley Noise carefully and look at the shape it produces. Doesn't it look like a cross-section of a cloud? Don't its "nuclei" look like the dense cores of a cloud?

If we bake Worley Noise into a Texture3D, clip away the small values with some threshold, and map the Texture3D into a region of 3D space, won't that form a volume cloud?
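A rough sketch of that idea, assuming the noise has already been baked into a 3D texture and the cloud occupies an axis-aligned box (the texture, box and threshold names are invented for illustration):

 uniform sampler3D cloudNoiseTex;  // Worley noise baked into a Texture3D
 uniform float densityThreshold;   // clips away the small noise values

 // Sample the cloud "density" at a world-space position inside the cloud box.
 float sampleCloudDensity(vec3 worldPos, vec3 boxMin, vec3 boxMax) {
     // Map the position into the [0,1]^3 texture coordinates of the cloud volume.
     vec3 uvw = (worldPos - boxMin) / (boxMax - boxMin);
     float noise = texture(cloudNoiseTex, uvw).r;
     // Worley noise is small near the feature points, so invert it and
     // clip everything below the threshold to carve out cloud shapes.
     return max(0.0, (1.0 - noise) - densityThreshold);
 }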

What is the relationship between volume clouds and Ray Marching?

We treat the Texture3D as the SDF, take the color value (or one color channel) as the "density" of the cloud (that is, the output value of the SDF), and then use Ray Marching to sample and shade each fragment.

Since a cloud is translucent, Ray Marching in volume cloud rendering is not limited to hitting a surface: it keeps stepping into the interior of the cloud and accumulates the diffuse lighting along the way.
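A minimal sketch of that translucent marching, using a fixed step size and simple Beer-Lambert absorption (the lighting term is left as a placeholder, and sampleCloudDensity is the hypothetical function sketched above):

 // March through the cloud volume, accumulating scattered light and transmittance
 // instead of stopping at the first hit.
 vec4 marchCloud(vec3 rayOrigin, vec3 rayDir, float maxDist, vec3 boxMin, vec3 boxMax) {
     const int STEPS = 64;
     float stepSize = maxDist / float(STEPS);

     float transmittance = 1.0;  // fraction of background light still passing through
     vec3  scattered     = vec3(0.0);

     for (int i = 0; i < STEPS; i++) {
         vec3 pos = rayOrigin + rayDir * (float(i) + 0.5) * stepSize;
         float density = sampleCloudDensity(pos, boxMin, boxMax);
         if (density <= 0.0) continue;

         float lightEnergy = 1.0;  // placeholder for the diffuse lighting/scattering term
         scattered += density * stepSize * transmittance * lightEnergy * vec3(1.0);
         transmittance *= exp(-density * stepSize);  // Beer-Lambert absorption
         if (transmittance < 0.01) break;            // early exit: effectively opaque
     }
     return vec4(scattered, 1.0 - transmittance);    // rgb = cloud color, a = opacity
 }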

Summary

At this point, the rendering process and overall theory of the volume cloud in the tutorial have been covered.

As for the cloud diffuse-lighting formula, atmospheric scattering, light calculations, and so on that are used in development, most of them are empirical formulas accumulated from various experts' papers. We stand on the shoulders of giants; it is hard to say which are right or wrong, or which are better or worse.

If I have time later, I will write up the concrete program implementation.

