Monday, January 6, 2014

The most basic shaders

So our program doesn't always draw triangles because we're missing shaders.  The code was updated so that it can load and use shaders, there just aren't any for it to load.  Let's fix that.

I'm not going to create anything fancy, just the most simple shaders to demonstrate the absolute basics.  I should mention that I just started learning about shaders a week and a half ago, so you can't really expect digital magic quite yet.

We're going to need two files, one for the vertex shader and the other for the fragment shader.  The program from last time expected them in Vertex.glsl and Fragment.glsl, so let's use those names.

Let's start with the vertex shader.  Its job is to manipulate each vertex one at a time: applying transformations, lighting, and mapping textures.  Right now all we want to do is apply the transformation necessary to make our triangles visible.

Transformations are the methods by which we manipulate polygons in 3D space.  Moving, rotating, stretching, and shrinking are all done in this way.  I don't generally look at the math because A) OpenGL has handy functions to do it for me and B) it's all matrix based and that's something I never really learned.  At some point I expect I'll need to get a handle on it, but I'm putting that off for right now.

Despite my mathematical inadequacies I need to tell OpenGL to adjust the triangles listed in the VBO so that they'll appear in the correct position on the screen.  When OpenGL gets set up at the beginning of the program, a projection matrix gets created to define the camera's vision.  Along with the coordinates for each vertex, that's all that's needed to position our triangles in the vertex shader.

The vertex shader isn't compiled into our program; the source code is given to OpenGL and compiled at run time.  There are other ways to get information into the shaders, but for now I'm going to rely on the built-in attributes, specifically gl_ModelViewProjectionMatrix and gl_Vertex.  As its name suggests, gl_ModelViewProjectionMatrix holds the matrix for the model view projection.  gl_Vertex holds the coordinates for the current vertex.

I should mention that the attributes are deprecated in newer versions of OpenGL, so a better way to get this information to the shaders will be needed eventually.  This data transfer trick will be needed for a bunch of other stuff, so I'm going to have to address it anyway.

Here's the initial vertex shader:
1:  #version 130  
2:    
3:  void main() {  
4:      gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;  
5:  }  

Not a whole lot in it, but then we don't need it to do too much either.

Line 1 defines what version of the OpenGL Shading Language (GLSL) this code will use.  The latest version is 4.4; this code requests version 1.3.  Like I said, deprecated attributes.  At some point I'll rewrite this using a more modern version of GLSL.
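Just to give an idea of where this is headed, here's a sketch of what a more modern version of this shader might look like, with the matrix and vertex passed in explicitly instead of coming from the deprecated built-ins.  The mvp and position names are ones I made up, and the program would have to be changed to supply them, so don't drop this in as-is:

```glsl
#version 330 core

// The model view projection matrix, supplied by the program
// (hypothetical uniform name).
uniform mat4 mvp;

// The vertex coordinates, supplied from the VBO
// (hypothetical attribute name).
in vec4 position;

void main() {
    gl_Position = mvp * position;
}
```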

Line 4 transforms the vertex coordinates to conform with the defined view.  GLSL knows how to do matrix math, so we don't need to put in much effort to do the matrix multiplication.  It puts the result into gl_Position, which is where we need to store the processed vertex position.

I should mention that gl_Position and gl_Vertex have four dimensions to them, which is one more than the coordinates we had in our vertex VBO.  Along with the standard X, Y, and Z coordinates there's a W one, the homogeneous vertex coordinate.  The W coordinate is used for scaling and some other math; I'm not going to need to worry about it now since OpenGL gives it a default value of 1.0.  Later on, when we're using the data in our VBO, we'll need to add in the W coordinate ourselves.
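To sketch what that will look like: if the vertex data came into the shader as a three-component vector, we'd build the four-component one ourselves before multiplying.  The position attribute name here is hypothetical, just for illustration:

```glsl
// If the attribute only carried X, Y, and Z...
in vec3 position;

void main() {
    // ...we supply W ourselves; 1.0 is the usual value for a point in space.
    gl_Position = gl_ModelViewProjectionMatrix * vec4(position, 1.0);
}
```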

Vertexes are now handled; let's have a look at the fragment shader.  I definitely need to read a lot more about this one, as I'm not entirely sure what a fragment is.  I know they are generated when OpenGL rasterises the image, but I have no idea how to predict the number or position of the fragments.  This has made it really hard for me to get the fragment shader to accomplish much.  I don't think this will prevent me from applying colors and textures to polygons, just that more advanced topics such as lighting will require a whole bunch more research.

In the meantime the fragment shader still needs to fill our polygons with some color.  We don't have any color data in our program anywhere so the shader is going to have to supply it.  Here's the code for it:
1:  #version 130  
2:    
3:  void main() {  
4:      gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);  
5:  }  

Again it's pretty minimal.  All it does is say that the fragments should be colored green.

Just like how the vertex shader needs to set the final vertex position in gl_Position, the fragment shader needs to put the final color in gl_FragColor.

To set the color we're creating a 4D vector to store in gl_FragColor on line 4.  We're using the vec4() constructor to build it and supply it with the red, green, blue, and alpha values in that order.
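Playing with those four values is an easy way to confirm the shader really is being reloaded each launch.  A few variations to try in place of line 4 (note that the alpha value won't have a visible effect unless blending is enabled in the program):

```glsl
gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);  // solid red
gl_FragColor = vec4(1.0, 1.0, 0.0, 1.0);  // yellow (red plus green)
gl_FragColor = vec4(0.5, 0.5, 0.5, 1.0);  // medium grey
```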

There's no need to recompile our program when we change the shaders.  Just launch it again and it will reload the new source and try to use it.  Assuming all of this is working, we should finally end up with triangles on the screen regardless of which video drivers are in use.


What we need to do next time is supply some color information to our shaders.  This will set the groundwork for supplying all the information the shaders will need.  
