Advanced Outdoor Photogrammetry and Large-Scale Scene Modeling for VR

This article is a translation of the Valve Developer Community article Advanced Outdoors Photogrammetry, originally written by Valve employee Cargo Cult (creator of the Harborough Rocks, Derbyshire, UK scene). The translation was completed independently by the Zimiao haunting blog.

Introduction

Most photogrammetry tutorials focus on a single object, and the few that cover large scenes usually deal with indoor or otherwise enclosed areas.

This tutorial describes a practical workflow for digitizing a vast outdoor scene so that it feels seamlessly immersive in virtual reality (VR), including distant geometry and a surrounding sky dome. Although it is aimed at advanced users (with skills in photography, 3D modeling, texture editing, and so on), it can still help anyone capture the real world.

This tutorial only briefly covers photogrammetry in SteamVR Home; for a comprehensive introduction to VR photogrammetry concepts, I suggest reading these articles.

I demonstrate with a small set of photos taken in the UK at Christmas. The country was in the middle of a storm at the time; I took this set while walking my dog during one of the few sunny spells.

The workflow covers high-detail close-range geometry, low-detail mid-range geometry, and a skybox (distant view) made from projected photos. The demonstration photos were taken in a post-industrial area next to a railway. Although it is not a picturesque tourist attraction, the digitization process is the same one used for the Mars and English church scenes in SteamVR Home.

Although this scene was produced with specific software, most of the ideas and advice are universal and still apply if you use different tools. Experiment and find the workflow that suits you best.

Taking photos and post-processing

I shot with a Canon EOS 7D and an EF-S 10-22mm lens at the 10mm wide-angle end. The camera was set to manual exposure at f/8, 1/60 s, ISO 100; manual exposure ensures consistent lighting information across all the photos. The photos were saved in RAW format for the best post-processing latitude.
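Locking the camera to manual exposure means every photo shares the same exposure value (EV). As a quick sanity check, the standard EV formula can be evaluated for the settings above (the function name and presentation here are mine, not from the article):

```python
import math

def exposure_value(aperture: float, shutter_s: float, iso: float = 100.0) -> float:
    """EV referenced to ISO 100: EV = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(aperture ** 2 / shutter_s) - math.log2(iso / 100.0)

# f/8, 1/60 s, ISO 100 -> roughly EV 11.9, a typical overcast-daylight exposure
ev = exposure_value(8.0, 1 / 60)
```

Any photo shot at these settings lands at the same EV, which is exactly the consistency the reconstruction software wants.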

Before exporting the images, some post-processing is needed; most of the basics can be done in Lightroom. Mine included setting an appropriate white balance, adjusting shadows and highlights, and correcting chromatic aberration and vignetting. Note that all photos should use the same post-processing parameters (to keep the images consistent).

8-bit TIFF images can be used directly, but to keep file transfers manageable I used smaller files (converted to JPEG).

Close-range view

Rebuilding 3D information in Reality Capture

I used RealityCapture to generate the high-precision foreground mesh. Although some of its features are still in development, it is more than capable of extracting fine geometric detail from photos! I ran camera alignment, then set the reconstruction region to cover the area near the camera positions. I did a normal-detail reconstruction first to test the parameters, then a high-detail one. The reconstructed model contained about 79 million triangles, which is far too many to render reliably in VR. I also ran Colorize (a built-in function) to generate vertex colors, useful for quick preview renders while testing parameters.

Next I used the Simplify tool to decimate the mesh, reducing it to 1.75 million triangles. Around 3 million triangles is a reasonable maximum for VR, but this scene's limited size and simple features meant I did not need that many. I then used the Unwrap tool to generate UVs; in this example there is only one texture map (not suitable for a final render, but fine for a preview). After generating the texture, I exported an OBJ file (the model) and a PNG file (the texture).
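RealityCapture's Simplify tool uses its own decimation algorithm; purely to illustrate the idea of cutting a mesh's triangle count, here is a minimal vertex-clustering sketch (all names are hypothetical, and real tools use far more sophisticated error metrics than a uniform grid):

```python
def cluster_decimate(vertices, triangles, cell=1.0):
    """Crude vertex-clustering decimation: snap each vertex to a grid of
    spacing `cell`, merge vertices that land in the same grid cell, and
    drop triangles that collapse to a line or point."""
    key_to_new = {}
    new_vertices = []
    remap = []
    for x, y, z in vertices:
        key = (round(x / cell), round(y / cell), round(z / cell))
        if key not in key_to_new:
            key_to_new[key] = len(new_vertices)
            new_vertices.append((key[0] * cell, key[1] * cell, key[2] * cell))
        remap.append(key_to_new[key])
    new_tris = []
    for a, b, c in triangles:
        ta, tb, tc = remap[a], remap[b], remap[c]
        if ta != tb and tb != tc and ta != tc:  # skip degenerate triangles
            new_tris.append((ta, tb, tc))
    return new_vertices, new_tris
```

The coarser the cell size, the more vertices merge and the more triangles disappear, which is the basic trade-off any simplifier makes between fidelity and face count.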

Two tips:

  1. When unwrapping UVs in older versions of RealityCapture, make sure the mesh has no existing UVs beforehand, otherwise replacing them causes problems; newer versions fix this bug. The maximum texture count setting is not strictly enforced, and the final number of textures may be much smaller than the value you set. To get enough texture information where geometry is interpolated, set the Large triangle removal threshold as high as possible; I usually set it to 95.
  2. For scenes with very high contrast (shadows approaching pure black next to highlights approaching pure white), increase the Gutter size value to widen the UV padding and reduce distant texture bleeding caused by mipmaps. Such bleeding can produce faint shimmering lines or unexplained bright spots in shadowed areas. For this example I used 8 pixels.
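The reason the gutter must be generous is that every mip level halves the texture resolution, halving the padding along with it. A small sketch of that arithmetic (the function names are mine, and the 1-pixel safety threshold is an illustrative assumption):

```python
def gutter_px_at_mip(gutter: int, mip: int) -> float:
    """Padding width (in texels) remaining at a given mip level; each
    mip level halves the texture, halving the gutter with it."""
    return gutter / (2 ** mip)

def deepest_safe_mip(gutter: int, min_px: float = 1.0) -> int:
    """Deepest mip level at which the gutter still spans at least min_px,
    i.e. still keeps neighbouring UV charts from bleeding into each other."""
    mip = 0
    while gutter_px_at_mip(gutter, mip + 1) >= min_px:
        mip += 1
    return mip
```

An 8-pixel gutter survives down to mip 3; beyond that, distant (heavily mipped) surfaces start sampling across chart boundaries, which is where the strange lines and highlights come from.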

Repairing broken geometry in MeshLab

The mesh exported from RealityCapture may contain holes, broken faces, and similar defects, ranging from a few small triangles to whole areas visible to the naked eye. Rather than repairing them by hand, I prefer to repair them automatically with the free, open-source MeshLab, which is very good at handling extremely high-polygon meshes.

I used the filter Remeshing, Simplification and Reconstruction : Close Holes, with all parameters at their defaults.
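Under the hood, a hole is a loop of boundary edges, that is, edges referenced by exactly one triangle. A minimal sketch of how such edges can be detected (this is not MeshLab's actual implementation, just the underlying idea):

```python
from collections import Counter

def boundary_edges(triangles):
    """Return the edges used by exactly one triangle. In a watertight
    mesh every edge is shared by two triangles, so any edge counted
    once lies on the rim of a hole that a close-holes filter would fill."""
    counts = Counter()
    for a, b, c in triangles:
        for e in ((a, b), (b, c), (c, a)):
            counts[tuple(sorted(e))] += 1
    return [e for e, n in counts.items() if n == 1]
```

Two triangles forming a quad share one interior edge; the four outer edges are all reported as boundary, exactly as you would expect for a flat patch with an open rim.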

After repairing the broken faces, I saved the mesh as an OBJ file and opened it in Modo (any other 3D modeling software will also do).

Cleaning the mesh in 3D software

Since RealityCapture and Modo (the 3D software I use) define the up axis differently, the first thing I did in Modo was fix the orientation.

Meshes from RealityCapture are usually closed, meaning each piece is a complete, watertight solid. For VR scene rendering we do not need the side and bottom faces, so the first cleanup step is to delete them.

I kept deleting unnecessary geometry and filling in the occasionally visible broken faces and holes that MeshLab had missed; parts that are never visible can be left alone. The OBJ then goes back into RealityCapture.

Because RealityCapture is very strict about imported OBJ files, I reset the rotation on the locator, deleted the mesh's UVs and vertex normals from Modo's List tab, assigned all faces a single material, set the smoothing angle to 180°, and exported the model to a new OBJ file. (The latest versions of RealityCapture seem to fix most OBJ import problems; UVs and smoothing angles no longer need this special treatment.)

After importing into RealityCapture, I ran the Unwrap tool again (maximum texture count set to 10) to regenerate the texture and exported the mesh once more. I went back and forth between Modo and RealityCapture several times until I was satisfied with the close-range geometry and textures.

Mid-range view

Reconstructing 3D information in PhotoScan

For a complete VR scene, the key to immersion is surrounding the player entirely, with both geometry and a distant skybox. For some scenes, close-range geometry plus a skybox is enough; for others, low-precision, low-detail mid-range geometry is also essential. An alpha channel lets the mid-range geometry's textures blend into the skybox, so the seam between model and skybox is almost invisible (improving immersion).

For the mid-range stage I used Agisoft PhotoScan, because I needed to create the skybox texture from specific camera positions later in the process, and PhotoScan's workflow is better suited to single-camera texture projection.

The PhotoScan workflow is similar to RealityCapture's. I added all the photos exported from Lightroom, generated a sparse point cloud (deleting some misaligned and ambiguous points before running camera optimization), then generated a low-detail dense point cloud.

The low-detail dense cloud had many unnecessary points in the sky and below the ground, so I deleted them manually before generating the mesh.

Tips:

  1. PhotoScan's Dense Cloud : Select Points by Color tool is very useful, especially for cleaning up sky points. Tune the parameters (hue and saturation toward light blue) and you can delete a large number of unwanted points at once, leaving a very tidy result.
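Conceptually, selecting by color is a threshold test in HSV space. A rough stdlib sketch of the idea, with entirely hypothetical threshold values (PhotoScan's real parameters and ranges differ):

```python
import colorsys

def is_sky_point(r, g, b, hue_range=(0.5, 0.7), min_sat=0.15, min_val=0.5):
    """Treat bright, bluish points as sky. The hue window (roughly cyan
    through blue) and the saturation/value floors are illustrative
    guesses, not PhotoScan's actual defaults."""
    h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return hue_range[0] <= h <= hue_range[1] and s >= min_sat and v >= min_val

def filter_sky(points):
    """Keep points whose (r, g, b), stored at indices 3-5 of each
    (x, y, z, r, g, b) tuple, do not match the sky criteria."""
    return [p for p in points if not is_sky_point(*p[3:6])]
```

A typical sky blue passes the test while a muddy ground color fails it, which is why a single color-based sweep removes so many stray sky points in one go.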

The result may look like a strange, chaotic jumble of objects. Don't worry, this is normal: the goal is only to create basic geometry for texture projection, giving correct texture scale and parallax from the player's viewpoint. I then generated a texture map (newly generated UVs and a single 8K map) and exported the model as OBJ for easy editing in Modo. Note that this texture is only for visual reference rather than something to preserve; I discard it soon afterwards.

I deleted distant disconnected chunks, then removed as much of the (automatically generated) blue sky geometry as possible. I kept a strip of triangles between the sky and the ground so I could later use the skybox texture's alpha channel to create a clean edge transition.

The automatically generated mesh of the factory area looks terrible. I could have built some simple low-poly buildings to replace it, but I wanted to show just how bad automatically generated distant geometry can be; single-camera texture projection still works on it regardless. The only geometry I added was a simple cylinder serving as a chimney, which adds parallax as the player moves and improves immersion.

Merging the meshes

RealityCapture and PhotoScan produced two separate meshes. Although they depict some of the same areas and were built from the same photos, their scales and rotations differ greatly. The high-precision close-range mesh and the low-precision mid-range mesh need to be aligned so that they read as one continuous scene in VR. I imported the close-range mesh into my 3D software and carefully adjusted its scale, position, and rotation until it matched the low-detail mid-range geometry. I deleted the overlapping region and created a subtle, gap-free transition (welding vertices and so on). The boundary between the two is slightly distorted, but harmlessly so.

With the extra sky and stray vertices deleted, the mid-range mesh looks very clean. I moved the most distant edges outward slightly and filled the holes with a few planes. Some distant UVs overlapped; I adjusted them slightly and gave them more UV space.

Tips:
1. Reconstruction "bubbles" and holes in the mesh can cause problems such as excessive UV density or overlaps. Delete the problematic parts and repair the resulting holes, or unwrap those parts separately.
2. When exporting models for SteamVR Home, make sure "Face Smoothing" is at its maximum for all assigned materials and there are no extra vertex normals. SteamVR Home will import them anyway, but may take much longer to process them (causing hitches and so on).

Sky Box (Sky Dome)

The indispensable dome shape

With a good sense of the scene's layout, I set out to create a sky dome covering the whole scene and project photos onto it at a consistent scale.

The sky dome is a simple sphere surrounding the whole scene. I cut away roughly the bottom third of the sphere, leaving slightly more than a hemisphere. I unwrapped the hemisphere's UVs onto a flat disc, then adjusted them slightly to give the texture a fisheye layout, with the horizon at the edge and the zenith at the center. Don't worry about the missing image data; it will be generated in later steps.
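The fisheye layout is essentially an equidistant mapping from a view direction onto a disc: straight up lands at the centre of the texture, the horizon at the rim. A small sketch of that mapping, assuming +z is up and a normalized direction (the function and its conventions are mine, not the article's):

```python
import math

def dome_uv(x, y, z):
    """Equidistant 'fisheye' UV for a direction on the upper hemisphere.
    The angle from the zenith maps linearly to the radius of the disc,
    so the zenith hits (0.5, 0.5) and the horizon hits the unit circle."""
    theta = math.acos(max(-1.0, min(1.0, z)))   # angle from zenith, 0..pi/2
    r = theta / (math.pi / 2)                   # 0 at centre, 1 at horizon
    phi = math.atan2(y, x)                      # bearing around the dome
    u = 0.5 + 0.5 * r * math.cos(phi)
    v = 0.5 + 0.5 * r * math.sin(phi)
    return u, v
```

This also shows why the horizon ends up curved when editing the flat texture: a straight horizon in the world becomes the disc's circular rim in UV space.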

Because the texture wraps seamlessly around the circle, editing the edges can be slightly awkward (the horizon is curved), but this UV layout makes editing the sky itself simple. The missing parts of the sky are easily filled in with Photoshop's clone and gradient tools.

Tips:
1. Before exporting to OBJ or FBX, convert quad-based geometry to triangles, because different software may triangulate quads differently. For example, importing an untriangulated model into the SteamVR Workshop tools produced subtle wavy lines in the sky texture. Converting all faces to triangles on export guarantees the geometry is identical across applications.
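One way to guarantee a consistent triangulation is to fan-triangulate every face yourself before export. A minimal sketch (splitting quads along the 0-2 diagonal is one common convention, not necessarily what your particular exporter does):

```python
def triangulate_faces(faces):
    """Fan-triangulate polygon faces (lists of vertex indices): each
    n-gon becomes n-2 triangles sharing its first vertex, so a quad
    (0, 1, 2, 3) splits along the 0-2 diagonal into two triangles."""
    tris = []
    for face in faces:
        for i in range(1, len(face) - 1):
            tris.append((face[0], face[i], face[i + 1]))
    return tris
```

Triangles pass through unchanged, so running this over a whole mesh is harmless; the point is that the diagonal choice is made once, by you, rather than differently by each importer.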

Generating the initial sky texture in PhotoScan

Most of the preparation was already done for this step: I assigned one material to the mid-range model and another to the sky dome, and gave the foreground model a duplicate material with planar-projected UVs. The point is simply to stop the cameras from projecting unwanted detail onto the distant model. (An alternative is to keep the existing materials but squash the unwanted UVs into an unused corner; or leave the mesh and materials untouched, at the cost of computing extra textures that are then thrown away.)

I reset the model rotation, deleted the unnecessary parts, and exported the combined mesh to OBJ for texturing in PhotoScan.

The resulting texture works well for nearby objects, but distant areas look blurry, dark, or even overlapped, so they need a different technique.

Single camera projection

I now had a solid working base. In Modo I reassigned the materials to use the newly generated texture, and exported the scene (skybox, mid-range geometry, and foreground geometry) to an FBX for import into SteamVR Home. I placed everything in a basic map and roughly scaled it so texture work could begin. That work mainly meant creating new PSD textures for the mid-range model and the skybox, and reassigning materials in the Material Editor.

Back in PhotoScan, I disabled every camera except one pointing in a useful direction, toward the reservoir and wind turbine. I rebuilt the texture and exported it to PNG, then used this new, much sharper texture on the sky and mid-range geometry, carefully blending it in with Photoshop.

I repeated the process with different cameras, gradually building up the skybox and mid-range textures. Each newly generated texture was edited and saved in Photoshop, and the view updated automatically in Hammer. Testing in VR reveals areas that need more careful adjustment, and also shows that some parts that look bad on a flat screen look perfectly fine in VR.

Alpha Blending

To blend the mid-range geometry into the sky, I added a second texture used as the translucency mask in the Material Editor. I started from the alpha channel of PhotoScan's first exported texture and carefully edited it (trimming, blurring, and so on) so that it gives the geometry properly fitted edges. The factory buildings got sharp, carefully traced outlines, while parts of the field are faded out to hide an otherwise visible transition. I spent a long time on this; it matters enormously to the final look of the scene.

To improve transparency sorting, I duplicated the mid-range mesh in Modo, perfectly aligned with the original, assigned it a different material, and re-imported it into the SteamVR Workshop Tools. I pointed the material at the same PSD textures, giving the mid-range geometry a second material, this time set to 'Alpha Test' instead of 'Translucent', with the 'Alpha Test Reference' set quite high to hide the hard edges under the translucent layer. Soft mesh edges can still sort incorrectly, but the errors are far less obvious than on the opaque parts. (I used the same trick on the trees and other foliage in the English church scene, where it works surprisingly well.)

Real-world scale

To get the scale right, measure the distance between landmarks with Google Earth's measurement tools, then scale the model to match.

If your scanned scene is not visible on a map (indoors, underground, and so on), use any object of known length to determine the scale.
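Either way, the uniform scale factor is just the ratio of the real-world distance to the same distance measured in the model. For example (the numbers below are illustrative, not from the article):

```python
def scale_factor(real_distance_m: float, model_distance: float,
                 target_units_per_m: float = 1.0) -> float:
    """Uniform scale to apply to the whole model so a measured landmark
    distance matches its real-world length. `target_units_per_m` lets
    you convert into whatever unit your engine expects."""
    return (real_distance_m * target_units_per_m) / model_distance

# A landmark pair measured at 120 m in Google Earth that spans
# 34.5 units in the unscaled model: scale everything by ~3.48.
s = scale_factor(120.0, 34.5)
```

Multiply every vertex (or set the model's uniform scale) by this factor and distances in VR will match the real world.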

Finalizing the scene

Bake to World

After importing the model and placing it in the Hammer map, I selected Bake to World. The map editor splits the large mesh into smaller parts, reducing the rendering load.

Lighting

For a full description of the dynamic lighting in the scene, see the photogrammetry scene lighting setup tutorial, which uses this same map as its example.

Entities

Besides the lighting entities mentioned above, I added a spawn point and some circling crows.

In a recent update, I enabled directional light shadows and set all terrain materials to receive but not cast shadows, preventing self-shadowing artifacts. This makes player avatars and props look more grounded (they now cast shadows). I also added a simple teleport-area mesh so players can roam freely.

Final scene

I have uploaded the scene to the Workshop. If you have any questions, feel free to ask here or in the Discussion tab on the SteamVR community site.

Translator's postscript

After three evenings of translation, I have finally finished this tutorial on large-scale VR scanning and modeling. It was written by Valve staff and is excellent starting material for anyone who wants to learn about large-space scan-based modeling.

Zimiao haunting blog (azimiao.com), all rights reserved. Please include the link when reprinting: https://www.azimiao.com/7427.html
Welcome to join the Zimiao haunting blog chat group: 313732000
