The Unity Shaders Bible 3.3.2 | Cg/HLSL Vertex Shader

Translation

The vertex shader corresponds to a programmable stage of the rendering pipeline, in which vertices are transformed from 3D space into a two-dimensional projection on the screen. Its smallest unit of computation is a single, independent vertex.

In the USB_simple_color shader there is a function called “vert”, which corresponds to our vertex shader stage. We know that “vert” is the vertex shader because it has been declared as such with the #pragma vertex directive.

#pragma vertex vert
...
v2f vert (appdata v)
{
   ...
}

Before continuing with the explanation of this stage, recall that USB_simple_color is an Unlit shader (it is not affected by lighting); that is why it includes one function for the vertex shader and another for the fragment shader (#pragma fragment frag). This is worth mentioning because Unity also offers a faster way to write shaders in the form of Surface Shaders (surf), which generate the Cg code automatically and are intended specifically for materials affected by lighting. That approach shortens development time, but it does not help us understand how a shader works, since many functions and calculations happen internally. This is why we created an Unlit shader at the beginning of this book: so that the reader can study its structure in detail.
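
For context, both pragma directives sit together inside the CGPROGRAM block of the pass, next to the include that provides Unity's helper functions. A minimal skeleton of such an Unlit pass (a sketch based on Unity's default Unlit template, which USB_simple_color is assumed to follow) looks like this:

Pass
{
    CGPROGRAM
    // declare which functions act as the vertex and fragment shaders
    #pragma vertex vert
    #pragma fragment frag

    // provides UnityObjectToClipPos, TRANSFORM_TEX and other helpers
    #include "UnityCG.cginc"

    // ... the appdata / v2f structs and the vert / frag functions go here ...
    ENDCG
}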

Let us look at the structure of the vertex shader stage. The vert function begins with the word “v2f”, which stands for “vertex to fragment”. v2f will later be used as an argument in the fragment shader stage, hence its name. The name may look odd at first, but it makes a lot of sense once we understand the internal flow of the shader program. Since vert starts with v2f, v2f is its output (return) type, so the function must return a value of this data type.

Following v2f comes “vert”, the name of our vertex shader stage, and then the arguments in parentheses, where appdata acts as the vertex input.

v2f vert (appdata v) { ... }
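
For reference, here is roughly what the two structs look like in a default Unlit shader (a sketch assuming USB_simple_color follows Unity's Unlit template, with the fog macros omitted; the exact fields may differ in your own shader):

struct appdata                      // vertex input
{
    float4 vertex : POSITION;       // object-space vertex position
    float2 uv : TEXCOORD0;          // mesh UV coordinates
};

struct v2f                          // vertex output / fragment input
{
    float2 uv : TEXCOORD0;          // UVs passed on to the fragment shader
    float4 vertex : SV_POSITION;    // clip-space position
};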

If we continue the analysis, we will notice that the v2f struct has been initialized inside the function as the variable “o”. Through “o” we can therefore access all of the properties previously declared in v2f:

v2f o;
o.vertex...
o.uv...
return o;

The first operation inside the vertex shader stage is the transformation of the model's vertices from object space to clip space through the “UnityObjectToClipPos” function. Our models live in a three-dimensional scene, and those coordinates must eventually be turned into a two-dimensional projection of pixels on the screen; that transformation happens precisely inside “UnityObjectToClipPos”. The function multiplies the vertex position first by the current model matrix (unity_ObjectToWorld) and then by the current view-projection matrix (UNITY_MATRIX_VP).

float4 UnityObjectToClipPos(float3 pos)
{
    return mul(
        UNITY_MATRIX_VP,                            // world space -> clip space
        mul(unity_ObjectToWorld, float4(pos, 1.0))  // object space -> world space
    );
}
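
Note that this single call is equivalent to transforming the vertex by the combined model-view-projection matrix. In older shaders the same step was often written out by hand; recent Unity versions automatically upgrade that form to UnityObjectToClipPos, so the line below is shown only for comparison:

// older, equivalent idiom (auto-upgraded by recent Unity versions):
o.vertex = mul(UNITY_MATRIX_MVP, v.vertex);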

This operation is declared in the “UnityShaderUtilities.cginc” file, which is included as a dependency of UnityCG.cginc; that is why we can use it inside our shader. So we take the vertex input of our model (v.vertex), transform it from object space to clip space (UnityObjectToClipPos), and store the result in the vertex output (o.vertex).

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
}

One factor we must keep in mind when working with inputs and outputs is that matching properties must have the same number of dimensions. For example, “float4 vertex” in the appdata struct (the vertex input) has the same number of dimensions (4) as “float4 vertex” in the v2f struct (the vertex output).

If one property is of type float4 and the other is float3, Unity may return an error, because in most cases a four-dimensional vector cannot be converted into a vector of three or fewer dimensions.
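
If we do need to move between dimensions, it should be done explicitly, either with a swizzle or by constructing a new vector. A small sketch (the variable names are illustrative only):

float4 pos4 = v.vertex;            // the full four-component position
float3 pos3 = v.vertex.xyz;        // explicit truncation via a swizzle
float4 back = float4(pos3, 1.0);   // explicit reconstruction, with w set to 1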

Next we come to TRANSFORM_TEX, a macro that takes two arguments:

  1. The input UV coordinates of the object (v.uv).
  2. The texture we are going to place over those coordinates (_MainTex). Its role is to apply the texture's “tiling and offset” settings to the UV coordinates (see the sketch after this list).
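
Under the hood, TRANSFORM_TEX is a small macro from UnityCG.cginc that scales the UVs by the texture's tiling values and adds its offset, both stored in the _MainTex_ST property. Roughly, it is defined and expands as follows (a sketch of the library macro, not code we write ourselves):

// from UnityCG.cginc: scale by the tiling values (xy) and add the offset (zw)
#define TRANSFORM_TEX(tex, name) (tex.xy * name##_ST.xy + name##_ST.zw)

// so the call in our shader expands to (assuming float4 _MainTex_ST is declared):
o.uv = v.uv.xy * _MainTex_ST.xy + _MainTex_ST.zw;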

Finally, we pass these values to the UV output (o.uv), since they will be used later in the fragment shader stage.
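
Putting the pieces together, the complete vertex shader function of a default Unlit shader, which USB_simple_color is assumed to follow (Unity's fog macros omitted), reads:

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);   // object space -> clip space
    o.uv = TRANSFORM_TEX(v.uv, _MainTex);        // apply the tiling and offset of _MainTex
    return o;                                    // handed on to the fragment shader stage
}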


Original Text

The vertex shader corresponds to a rendering pipeline’s programmable stage, where the vertices are transformed from a 3D space to a two-dimensional projection on the screen. Its smallest unit of calculation corresponds to an independent vertex.

Inside the USB_simple_color shader, there is a function called “vert” which corresponds to our vertex shader stage. The reason why we know that it is our vertex shader is that it was declared as such in the #pragma vertex.

#pragma vertex vert
...
v2f vert (appdata v)
{
   ...
}

Before continuing with the explanation of this stage, we must remember that our shader USB_simple_color is an Unlit type (it does not have light) that’s why it includes a function for the vertex shader and another for the fragment shader (#pragma fragment frag). It is essential to mention this since Unity provides a quick way to write shaders in the form of “Surface Shader” (surf) that generates Cg code automatically, exclusively for materials that are affected by lighting. This allows optimization of development time but does not help us understand it because many functions and calculations occur internally in the program. This is why we created an Unlit shader at the beginning of this book; to understand its operation in detail.

We are going to analyze the structure of the vertex shader stage. Our function begins with the word “v2f” which means “vertex to fragment”. This name makes a lot of sense when we understand the internal process that is happening inside the program. V2f will be used later as an argument in the fragment shader stage, hence its name. So, as our function starts with v2f that means it is a vertex output type, therefore, we will have to return a value associated with this data type.

Continue with “vert” which is the name of our vertex shader stage and then the arguments in parentheses where appdata fulfills the function of vertex input.

v2f vert (appdata v) { ... }

If we continue our analysis, we will notice that the struct v2f has been initialized with the letter “o” inside the function. Therefore, inside this variable, we will find all the previously declared properties in v2f.

v2f o;
o.vertex...
o.uv...
return o;

So the first operation that occurs within the vertex shader stage is the transformation of the object vertices from object-space to clip-space through the “UnityObjectToClipPos” method. Let’s remember that our objects are within a three-dimensional space in the scene, and we must transform those coordinates into a two-dimensional projection of pixels on the screen. That transformation occurs precisely within the “UnityObjectToClipPos” function. What this function does is multiply the matrix of the current model (unity_ObjectToWorld) by the factor of the multiplication between the current view and the projection matrix (UNITY_MATRIX_VP).

UnityObjectToClipPos(float3 pos)
{
    return mul(
        UNITY_MATRIX_VP,
        mul(unity_ObjectToWorld, float4(pos, 1.0))
    );
}

This operation is declared in the “UnityShaderUtilities.cginc” file which has been included as a dependency in UnityCG.cginc and that is why we can use it inside our shader. So, we take the vertex input from our object (v.vertex), transform the matrix from object-space to clip-space (UnityObjectToClipPos) and save the result in the vertices output (o.vertex).

v2f vert (appdata v)
{
    v2f o;
    o.vertex = UnityObjectToClipPos(v.vertex);
}

A factor that we must consider when working with inputs or outputs is that both properties must have the same number of dimensions, e.g., if we go to the vertex input appdata we will see that the vector “float4 vertex” has the same number of dimensions as “float4 vertex” in the vertex output (v2f).

If one property is of type float4 and the other float3, then Unity may return an error because in most cases you cannot transform from a four-dimensional vector to a three-dimensional vector or less.

Then we can find the TRANSFORM_TEX function. This function asks for two arguments, which are:

  1. The input UV coordinates of the object (v.uv).
  2. And the texture that we are going to position over those coordinates (_MainTex). It fulfills the function of controlling the “tiling and offset” in the UV coordinates of the texture.

Finally, we pass these values to the UV output (o.uv) since later they will be used in the fragment shader stage.
