# Fishes

I am using Graphics.DrawMeshInstancedIndirect so that the fish positions can be calculated in a compute shader.

The moving tails are just vertex displacements in the shader. Rotation is also done in the rendering shader.
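The tail displacement can be sketched like this (the property names `_WiggleAmp`, `_WiggleFreq`, `_WiggleSpeed` and the facing convention are assumptions, not the post's actual shader):

```hlsl
// Assumed material properties (not names from the post)
float _WiggleAmp;
float _WiggleFreq;
float _WiggleSpeed;

// Sideways sine offset, weighted so only the tail half of the fish moves.
// Assumes the fish faces +z in object space, tail toward -z.
float4 DisplaceTail (float4 vertex)
{
    float tailWeight = saturate(-vertex.z);
    vertex.x += sin(_Time.y * _WiggleSpeed + vertex.z * _WiggleFreq)
                * _WiggleAmp * tailWeight;
    return vertex;
}
```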

Below is the look-at matrix that takes the normalized velocity as the rotation, using it as the forward (z) axis.

```hlsl
float4 ApplyRotation (float4 v, float3 rotation)
{
    // Build a look-at (orientation) matrix whose forward (z) axis
    // is the normalized velocity passed in as "rotation".
    float3 up = float3(0, 1, 0);

    float3 zaxis = rotation;
    // Note: this degenerates when zaxis is parallel to up.
    float3 xaxis = normalize(cross(up, zaxis));
    float3 yaxis = cross(zaxis, xaxis);

    // Columns are the basis axes, so mul(lookatMatrix, v)
    // maps object-space +z onto the velocity direction.
    float4x4 lookatMatrix = {
        xaxis.x, yaxis.x, zaxis.x, 0,
        xaxis.y, yaxis.y, zaxis.y, 0,
        xaxis.z, yaxis.z, zaxis.z, 0,
        0,       0,       0,       1
    };

    return mul(lookatMatrix, v);
}
```

Originally I had, in C#:

```csharp
_floatArray = new float[2];
_floatArray[0] = 1f;
_floatArray[1] = 0.5f;
```

and in the compute shader:

```hlsl
float FloatArray[2];
```

and I used ComputeShader.SetFloats() to pass the values from C# to the compute shader.
Reading the values back in the compute shader, I found that only FloatArray[0] had a value, while FloatArray[1] equaled 0.

Unity dev (Marton E.) replied that this comes from constant buffer packing: in HLSL, each array element is padded out to a full float4 (16 bytes), while SetFloats uploads the floats contiguously, so only the first float of each 16-byte slot receives a value.
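A common workaround is to pad each value out to four floats on the C# side, since each cbuffer array element occupies a full 16-byte slot (a sketch; the variable names are assumptions, not the post's code):

```csharp
// Each cbuffer array element occupies a full float4 (16-byte) slot,
// so pad every value to 4 floats before calling SetFloats().
float[] padded = new float[2 * 4];
padded[0] = 1f;    // lands in FloatArray[0]
padded[4] = 0.5f;  // lands in FloatArray[1]
computeShader.SetFloats("FloatArray", padded);
```

Alternatively, declare the array as `float4 FloatArray[...]` in HLSL, or use a ComputeBuffer, which has no such per-element padding.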

# Deform MeshCollider with Compute Shader

If you want to do something like this, in C# you need a for loop which

- iterates a few ten-thousand vertices, and then
- for each vertex, calculates the distance to each bead…

All these instructions run one by one on the CPU. You can imagine the time needed for this.

---

But with a compute shader, those several ten-thousand iterations can be done “at the same time” on the GPU (depending on how you set up the data). The resulting data is then transferred back to the CPU and applied directly to the Mesh.vertices array.
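The per-vertex work described above, moved into a compute kernel, might look roughly like this (a sketch; the buffer and variable names are assumptions, not the project's actual code):

```hlsl
#pragma kernel DeformVertices

RWStructuredBuffer<float3> _Vertices;     // mesh vertices, read-modify-write
StructuredBuffer<float3> _BeadPositions;  // positions of the beads
int _BeadCount;
float _Radius;
float _PushStrength;

[numthreads(64,1,1)]
void DeformVertices (uint3 id : SV_DispatchThreadID)
{
    float3 v = _Vertices[id.x];

    // One thread per vertex: the distance check against every bead still
    // happens per vertex, but thousands of vertices run in parallel.
    for (int i = 0; i < _BeadCount; i++)
    {
        float3 toBead = v - _BeadPositions[i];
        float d = length(toBead);
        if (d < _Radius)
            v += normalize(toBead) * (_Radius - d) * _PushStrength;
    }

    _Vertices[id.x] = v;
}
```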

And this is what you can see in the video: the FPS stays above 70.

Update: This can be done much faster using AsyncGPUReadback. See here for an example: https://github.com/cinight/MinimalCompute
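A minimal AsyncGPUReadback pattern looks something like this (a sketch of the API usage, assuming a vertex ComputeBuffer like above; not code from the linked repo):

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class ReadbackExample : MonoBehaviour
{
    // Request the GPU data without stalling the main thread;
    // the callback fires a few frames later when the copy is done.
    public void RequestVertices(ComputeBuffer vertexBuffer, Mesh mesh)
    {
        AsyncGPUReadback.Request(vertexBuffer, request =>
        {
            if (request.hasError) return;
            var data = request.GetData<Vector3>(); // NativeArray<Vector3>
            mesh.SetVertices(data);
            mesh.RecalculateNormals();
        });
    }
}
```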

Direct means the CPU tells the GPU to execute work, and the amount of work is given by the CPU.

Indirect means the CPU tells the GPU to execute work, and the amount of work is calculated on the GPU.
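The difference shows up in how the draw arguments are supplied: with indirect draws, the five draw arguments live in a GPU buffer that a compute shader is free to overwrite. A sketch (the args layout is the documented IndirectArguments format; the surrounding names are assumptions):

```csharp
using UnityEngine;

public static class IndirectDrawExample
{
    // Indirect args layout: index count per instance, instance count,
    // start index location, base vertex location, start instance location.
    public static ComputeBuffer CreateArgsBuffer(Mesh mesh, int instanceCount)
    {
        uint[] args = { mesh.GetIndexCount(0), (uint)instanceCount, 0, 0, 0 };
        var buffer = new ComputeBuffer(1, 5 * sizeof(uint),
                                       ComputeBufferType.IndirectArguments);
        buffer.SetData(args);
        return buffer;
    }
}

// Direct:   Graphics.DrawMeshInstanced(...) — the instance count is passed from C#.
// Indirect: Graphics.DrawMeshInstancedIndirect(mesh, 0, material, bounds, argsBuffer);
//           the GPU reads the count from the buffer, so a compute shader
//           (e.g. a culling kernel) can change it without a CPU round trip.
```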

# w component

I had been confused about the w component for ages, until I read this:
Explaining Homogeneous Coordinates & Projective Geometry

So I quickly made a simple shader to test it out.

To summarize, quoting the article: “The W dimension is the distance from the projector to the screen (object).”

- when w > 1, the object looks far away (smaller)
- when w = 1, the size remains the same
- when w = 0, it actually covers the whole screen
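The test can be sketched as a vertex shader that scales only w, so the GPU's perspective divide (x/w, y/w, z/w) shrinks or grows the object (a sketch; `_W` is an assumed material property, not the post's shader):

```hlsl
// Assumed material property controlling the test (not from the post)
float _W;

// After the divide by w that the hardware performs, clip-space
// (x, y, z, w) becomes (x/w, y/w, z/w): larger w => smaller on screen.
float4 vert (float4 vertex : POSITION) : SV_POSITION
{
    float4 clipPos = UnityObjectToClipPos(vertex);
    clipPos.w *= _W; // w > 1 shrinks, w = 1 unchanged, w -> 0 fills the screen
    return clipPos;
}
```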

This is the reason why we have to make sure the w component is correct when transforming with _Object2World and _World2Object (w = 1 for positions, w = 0 for directions).

Also, an interesting point to note from the blog post:

If W=1, then it is a point light. If W=0, then it is a directional light.