Analysis of the Shader Principle of a Unity Reflective Red Dot Sight

One of the features to be implemented in the project is a realistic red dot sight in Unity. The shader analyzed in this article achieves a more convincing effect by simulating the optical principle of a reflex collimator.

Result

This figure shows the effect of viewing the red dot sight from different angles.

(Figure: a realistic holographic red dot sight in Unity)

Principle of the red dot sight

The special feature of a reflex red dot sight is that the eye, rear sight, and front sight do not need to be aligned in the classic "three points, one line" manner: as long as the red dot rests on the target, the target is aimed at.

First, a rough schematic:

The figure above shows the simplest form of a reflex red dot collimator: a light source plus a concave mirror. The principle is as follows:

  1. The concave mirror is transparent, so light reflected from the scene in front can pass through it. At the same time, the mirror has a special coating that reflects the light of the internal light source back toward the shooter.

  2. The light source sits at the focus of the concave mirror. By basic optics, its light becomes a bundle of parallel rays after reflection off the mirror.

  3. When the eye receives some of these parallel rays, it perceives a red dot located at infinity.

  4. Since the reflected light is directional, the red dot shifts when viewed from the side; when the viewing angle is too large, the red dot disappears entirely.
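The focal-point property in step 2 can be checked numerically. The sketch below is illustrative only and not part of the shader: it reflects rays emitted from the focus of the parabolic mirror y = x^2 / (4f) and shows that every reflected ray leaves parallel to the mirror's axis, regardless of where it hits the mirror.

```python
import math

def reflected_dir(x, f=1.0):
    """Reflect a ray from the focus (0, f) off the parabola y = x^2 / (4 f)."""
    px, py = x, x * x / (4 * f)
    # Incoming ray direction: focus -> mirror point, normalized
    dx, dy = px - 0.0, py - f
    d_len = math.hypot(dx, dy)
    dx, dy = dx / d_len, dy / d_len
    # Surface normal of y - x^2/(4f) = 0 is (-x/(2f), 1), normalized
    nx, ny = -x / (2 * f), 1.0
    n_len = math.hypot(nx, ny)
    nx, ny = nx / n_len, ny / n_len
    # Mirror reflection: r = d - 2 (d . n) n
    d_dot_n = dx * nx + dy * ny
    return dx - 2 * d_dot_n * nx, dy - 2 * d_dot_n * ny

# Rays hitting different points of the mirror all leave parallel to the axis
for x in (0.5, 1.0, 2.0, 3.0):
    rx, ry = reflected_dir(x)
    print(f"x = {x:>3}: reflected direction = ({rx:+.6f}, {ry:+.6f})")
```

Every printed direction is (0, 1), i.e. parallel to the mirror axis, which is exactly why the eye sees the dot at infinity.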

Reflex red dot sight shader

Code

The shader's core code is only seven statements long, of which the first four are the most critical:

    void surf (Input IN, inout SurfaceOutput o) {
        // Perpendicular (shortest) distance from the camera to the lens plane
        float shortestDistanceToSurface = dot(_WorldSpaceCameraPos - IN.worldPos, IN.worldNormal);
        // Foot of the perpendicular from the camera onto the lens plane
        float3 closestPoint = _WorldSpaceCameraPos - (shortestDistanceToSurface * IN.worldNormal);
        // Offset of this fragment from the projected camera point, in object space (x/y only)
        float2 uv_Delta = (mul((float3x3)unity_WorldToObject, IN.worldPos)
                         - mul((float3x3)unity_WorldToObject, closestPoint)).xy * _uvScale;
        // Sample around the texture center; dividing by the distance keeps the
        // apparent size of the red dot constant regardless of eye-to-lens distance.
        // Note: float2(0.5f, 0.5f) is written explicitly; a bare (0.5f, 0.5f)
        // collapses to a scalar via the comma operator.
        half4 col = tex2D(_reticleTex, float2(0.5f, 0.5f) + uv_Delta / shortestDistanceToSurface);
        o.Emission = col.a * _reticleColour.rgb * _reticleBright;
        o.Albedo = max(col.a * _reticleColour.rgb, _glassTrans * _glassColour.rgb);
        o.Alpha = max(col.a, _glassTrans);
    }

Analysis

  1. Find the shortest distance to the plane and the closest point
    First, compute the shortest (perpendicular) distance from the camera to the lens surface via the dot product: shortestDistanceToSurface. Then compute closestPoint, the foot of that perpendicular on the lens plane (the z component is not needed later).
  2. Calculate the red dot texture sampling offset
    According to the collimator principle above, the reflected red dot light in the physical world is directional, so if the perpendicular foot point falls within the lens area, that point must be the center of the red dot pattern.
    In object coordinates, compute the offset of the current fragment's position from closestPoint, taking only x and y. This offset is the fragment's displacement relative to the center of the red dot texture.
    Multiply the offset by _uvScale, an adjustable variable that scales the red dot pattern, then sample with tex2D, where (0.5, 0.5) is the center of the reticle texture and the offset is added to it to sample this fragment's texel.
    The division uv_Delta/shortestDistanceToSurface serves to keep the pattern size roughly constant with distance: no matter how far the eye is from the lens, the virtual red dot should appear the same size, with no near-large/far-small perspective effect.
    Of course, this division is not rigorous: the angular size an object occupies in the field of view depends not only on its distance from the camera, but also on its offset from the camera's optical axis and on the field of view itself. For practical purposes, however, it is sufficient; within normal viewing range the change in pattern size is barely noticeable.
  3. Set the surface outputs
    Finally, apply any desired processing to the sampled color, then write the fragment's color and the other surface outputs (Emission, Albedo, Alpha).
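To make the analysis steps above concrete, here is a small Python sketch of the same math. Variable names mirror the shader's; the texture lookup is replaced by simply returning the UV coordinate, and as a simplifying assumption of this sketch the lens's object space is taken to coincide with world space (an identity unity_WorldToObject).

```python
def dot(a, b):
    """Dot product of two equal-length vectors."""
    return sum(x * y for x, y in zip(a, b))

def reticle_uv(camera_pos, world_pos, world_normal, uv_scale=1.0):
    """Mirror the shader's per-fragment math. All inputs are 3-tuples."""
    cam_to_frag = tuple(c - p for c, p in zip(camera_pos, world_pos))
    # Perpendicular (shortest) distance from the camera to the lens plane
    shortest = dot(cam_to_frag, world_normal)
    # Foot of the perpendicular on the lens plane
    closest = tuple(c - shortest * n for c, n in zip(camera_pos, world_normal))
    # Offset of the fragment from the projected camera point, x/y only
    du = (world_pos[0] - closest[0]) * uv_scale
    dv = (world_pos[1] - closest[1]) * uv_scale
    # Dividing by the distance keeps the apparent dot size constant
    return (0.5 + du / shortest, 0.5 + dv / shortest)

# The fragment directly "under" the camera samples the texture center;
# doubling the camera distance halves every other fragment's UV offset,
# which is exactly what keeps the dot's apparent size constant.
print(reticle_uv((0, 0, 2), (0, 0, 0), (0, 0, 1)))
print(reticle_uv((0, 0, 2), (0.2, 0, 0), (0, 0, 1)))
print(reticle_uv((0, 0, 4), (0.2, 0, 0), (0, 0, 1)))
```

Moving the camera from distance 2 to distance 4 halves the UV offset of the off-center fragment, so the sampled pattern keeps the same apparent size.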

About VR

If a ShaderLab surface shader is used, Unity generally handles left- and right-eye rendering automatically: the value of _WorldSpaceCameraPos changes according to the eye currently being rendered, so there is no need to handle the two eyes manually.

If you use a plain vertex/fragment shader instead, you may need to determine which eye is currently being rendered and set parameters accordingly. Unity provides built-in values for the current eye and the corresponding interpupillary distance, which can be used for this judgment and calculation.

Zimiao haunting blog (azimiao.com). All rights reserved. Please include the source link when reposting: https://www.azimiao.com/7125.html
