The Unity Shaders Bible 1.0.9 | Geometry Processing Stage


Translation

After the application stage, the CPU requests the image we see on the computer screen from the GPU in two main steps:

  1. Configure the render state, i.e. the set of stages running from geometry processing up to pixel processing (see the ShaderLab sketch after this list).
  2. Draw the image on the screen.
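
In Unity, part of that render state is visible directly in a ShaderLab pass. The following is only a minimal sketch of where those state commands live; the shader name and the particular commands are illustrative, not taken from the book:

    Shader "Examples/RenderStateSketch"
    {
        SubShader
        {
            Pass
            {
                // Render-state commands, configured by the CPU before the draw call.
                Cull Back      // discard triangles facing away from the camera
                ZWrite On      // write depth values to the depth buffer
                ZTest LEqual   // standard depth comparison
                Blend Off      // no alpha blending

                // The HLSL program executed during geometry and pixel processing
                // would follow here, inside a CGPROGRAM ... ENDCG block.
            }
        }
    }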

The geometry processing stage runs on the GPU and is mainly responsible for processing the model's vertices. It is divided into four subprocesses: vertex shading, projection, clipping and screen mapping.

(Fig. 1.0.9 a: vertex transformation → projection → clipping → screen mapping)

Once the primitives have been assembled in the application stage, the vertex shading stage (more commonly known as the vertex shader stage) handles two main tasks:

  1. It calculates the position of the model's vertices.
  2. It transforms those positions into different coordinate spaces so they can be projected onto the computer screen.

In addition, within the vertex shader stage we can choose which attributes to pass on to the following stages; this means we can include normals, tangents, UV coordinates and so on.
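
To make these two tasks and the attribute selection concrete, here is a minimal vertex-shader sketch in Unity's HLSL. The struct and variable names are illustrative, and it assumes the code sits inside a CGPROGRAM block that includes UnityCG.cginc:

    struct appdata
    {
        float4 vertex : POSITION;   // object-space vertex position
        float3 normal : NORMAL;     // object-space normal
        float2 uv     : TEXCOORD0;  // UV coordinates
    };

    struct v2f
    {
        float4 pos    : SV_POSITION; // clip-space position used by the next stages
        float3 normal : TEXCOORD1;   // attribute we chose to pass on
        float2 uv     : TEXCOORD0;   // attribute we chose to pass on
    };

    v2f vert (appdata v)
    {
        v2f o;
        // Tasks 1 and 2: take the object-space position and transform it through
        // world and view space into clip space, ready to be projected on screen.
        o.pos = UnityObjectToClipPos(v.vertex);

        // Optional attributes selected for the following stages.
        o.normal = UnityObjectToWorldNormal(v.normal);
        o.uv = v.uv;
        return o;
    }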

Projection and clipping are part of this geometry processing stage, and their outcome varies according to the properties of the camera in the scene. It is worth noting that the whole rendering process only applies to models that lie within the camera frustum, also known as view space.

How projection and clipping behave depends on whether the scene camera is perspective or orthographic (parallel). To understand this, assume there is a sphere in the scene with half of it lying outside the camera frustum. Only the part of the sphere inside the frustum is projected, clipped and finally mapped to the screen; the part that lies outside the frustum is discarded during rendering.
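
The clipping itself is fixed-function hardware, but we can sketch what it does with the positions the vertex shader produced. This is only an illustration; the exact depth range depends on the graphics API Unity is running on:

    // Homogeneous clip-space position produced by the vertex shader.
    float4 clipPos = UnityObjectToClipPos(v.vertex);

    // The GPU clips every primitive against the frustum expressed in clip space:
    //   -w <= x <= w,  -w <= y <= w,  plus a z range that depends on the API
    //   (0 <= z <= w on Direct3D-style platforms, -w <= z <= w on OpenGL-style ones).
    // A triangle that is only partly inside, like our half-visible sphere, is cut
    // at the frustum boundary and the outside part is discarded.

    // After clipping, the hardware divides by w (the perspective divide):
    float3 ndc = clipPos.xyz / clipPos.w;   // normalized device coordinates in [-1, 1]

    // With a perspective camera, w encodes view-space depth, so distant geometry
    // shrinks toward the centre of the image; with an orthographic camera, w is 1
    // and no foreshortening occurs.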

(Fig. 1.0.9 b: without clipping (left) and with clipping (right))

Once the clipped model has been written to memory, it is passed to the screen mapping stage, where the three-dimensional objects in the scene are transformed into screen coordinates, also known as window coordinates.
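
As a rough illustration of the arithmetic involved, assuming the normalized device coordinates from the previous sketch and Unity's built-in _ScreenParams variable (whose x and y components hold the viewport size in pixels):

    // Map NDC x and y from [-1, 1] to window (screen) coordinates in pixels.
    float2 screenPos;
    screenPos.x = (ndc.x * 0.5 + 0.5) * _ScreenParams.x;
    screenPos.y = (ndc.y * 0.5 + 0.5) * _ScreenParams.y;
    // Some graphics APIs place the window origin at the top-left and flip y;
    // Unity accounts for this difference internally.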


Original Text (for comparison)

The CPU requests the images that we see on our computer screen from the GPU. These requests are carried out in two main steps:

  1. The render state is configured, which corresponds to the set of stages from geometry processing up to pixel processing.
  2. And then, the object is drawn on the screen.

The geometry processing phase occurs on the GPU and is responsible for the vertex processing of our object. This phase is divided into four subprocesses which are: vertex shading, projection, clipping and screen mapping.

(Fig. 1.0.9 a)

When the primitives have already been assembled in the application stage, the vertex shading, more commonly known as the vertex shader stage, handles two main tasks:

  1. It calculates the position of the vertices of the object.
  2. Transforms its position to different space coordinates so that they can be projected onto the computer screen.

Also, within this subprocess, we can select the properties that we want to pass on to the following stages. It means that within the vertex shader stage, we can include normals, tangents, UV coordinates etc.

Projection and clipping occur as part of the process, which varies according to the properties of our camera in the scene. It is worth mentioning that the whole rendering process occurs only for those elements that are within the camera frustum, also known as the view-space.

The projection and the clipping will depend on our camera, if it is set to perspective or orthographic (parallel). To understand this process, we are going to assume that we have a Sphere in our scene, where half of it is outside the frustum of the camera, so only the area of the Sphere that lies within the frustum will be projected and subsequently clipped on the screen, that is, the area of the Sphere that is out of sight will be discarded in the rendering process.

(Fig. 1.0.9 b)

Once we have our clipped objects in the memory, they are subsequently sent to the screen map (screen mapping). At this stage, the three-dimensional objects that we have in our scene are transformed into screen coordinates, also known as window coordinates.
