Input: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509609(v=vs.85).aspx

Output: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509661(v=vs.85).aspx

# Author: Ming Wai Chan (CMW Dexint)

# Geometry Shader streams

# Fishes

# ComputeShader.SetFloats()

# Deform MeshCollider with Compute Shader

# Indirect Compute Shader

# Maths Link

# SRP studies

**Overview**

# Metallic Specular texture conversion

# w component

###### (Thanks Kemal for the tutorial links!)

# HLSL,Cg,GLSL lib and mapping

A Step into Graphics

I use **Graphics.DrawMeshInstancedIndirect** so that I can calculate the fish positions with a compute shader.

The moving tails are just vertex displacement in the shader. Rotation is also done in the rendering shader.

Below is the look-at matrix that takes the normalized velocity as the rotation, using it as the forward axis.

```
float4 ApplyRotation (float4 v, float3 rotation)
{
    // Create look-at matrix from the forward axis (normalized velocity)
    float3 up    = float3(0, 1, 0);
    float3 zaxis = rotation;
    float3 xaxis = normalize(cross(up, zaxis));
    float3 yaxis = cross(zaxis, xaxis);
    float4x4 lookatMatrix = {
        xaxis.x, yaxis.x, zaxis.x, 0,
        xaxis.y, yaxis.y, zaxis.y, 0,
        xaxis.z, yaxis.z, zaxis.z, 0,
        0,       0,       0,       1
    };
    return mul(lookatMatrix, v);
}
```

Originally I had, in

**C#**

`_floatArray = new float[2]; _floatArray[0] = 1f; _floatArray[1] = 0.5f;`

**Compute Shader**

`float FloatArray[2];`

and used **ComputeShader.SetFloats()** to pass the values from C# to the compute shader.

Reading the values in the compute shader, I found that…

Unity dev (Marton E.) replied to me:

If you want to do something like this:

In **C#**: you need a for loop which

-> iterates over a few tens of thousands of vertices, and then

-> for each vertex, calculates the distance to each bead…

All these instructions run one by one on the CPU. You can imagine the time needed for this.

——–

But with a **compute shader**, those tens of thousands of iterations can be done “at the same time” on the GPU (depending on how you set up the data). The result data is then transferred back to the CPU and applied directly to the Mesh.vertices array.

This is what you can see in this video: the FPS stays above 70.

**Update:** This can be done much faster using AsyncGPUReadback. See here for an example: https://github.com/cinight/MinimalCompute

**Direct** means the CPU tells the GPU to execute work, and the amount of work is given by the CPU.

**Indirect** means the CPU tells the GPU to execute work, but the amount of work is calculated on the GPU.

Opengl Math Cheatsheet

http://www.opengl-tutorial.org/miscellaneous/math-cheatsheet/

Visualization of Math Physics

http://www.falstad.com/mathphysics.html

A very good tutorial about linear algebra; it visualises the concepts!

Matrix (GLSL)

http://www.opengl-tutorial.org/beginners-tutorials/tutorial-3-matrices/

https://docs.unity3d.com/2017.2/Documentation/Manual/ScriptableRenderPipeline.html#APIOverview

**Unity C++ code**

- Culling
- Render set of objects with filter/sort/params/batch
- Internal graphics platform abstraction

**C#/shader code (MIT open source)**

- Camera setup
- Light setup
- Shadows setup
- Frame render pass structure & logic
- Shader/compute code

I had been so confused about the w component for ages, until I read this:

**Explaining Homogeneous Coordinates & Projective Geometry**

So I quickly made a simple shader to test it out:

To summarize, the W dimension is the distance from the projector to the screen (object).

- When w > 1, the object looks far away (smaller).
- When w = 1, the size remains the same.
- When w = 0, it actually covers the whole screen.

This is the reason why we have to make sure the w component is correct when we are doing **_Object2World** and **_World2Object** transforms.

Also an interesting point to note from the blog post:

If W=1, then it is a point light. If W=0, then it is a directional light.