Thanks for all the information, I haven't had the chance to look into all of it yet; I'm still learning. The openGL tutorial is very good and clear. I didn't finish it yet, but it is close to what I had in mind about how this is done: the manipulation is all done at the normal level. You get the normals from the geometry, which are fed to the vertex shader, and pass them on to the fragment shader, because the fragment shader only knows the texture the geometry of the object is wrapped in, and it generates the color of each pixel on that texture.
If I got it right then this is what's happening:
first I upload the normals to the vertex shader:
context3D.setVertexBufferAt(2, obj.normals, 0, Context3DVertexBufferFormat.FLOAT_3);
in the vertex shader it can then be referred to as va2 (the va registers are the vertex attribute registers on the GPU; the index matches the first argument of setVertexBufferAt). Now let's see what I do with it; here is the code in the vertex shader that deals with the normals array:
"mov vt1.xyz, va2.xyz\n"+
"mov vt1.w, va2.w\n" +
"mov v1, vt1\n";
the first line moves the xyz values of each item in va2, which is the normals array, into the temporary register vt1.
the second line moves the fourth value of this vector3D, the w, into vt1 as well.
the third line moves vt1 into v1. What is the point of that? We need v1 because the fragment shader can't read va2 or vt1: vertex attributes and vertex temporaries exist only inside the vertex shader. The v registers are varying registers; whatever the vertex shader writes into them gets interpolated across each triangle and handed to the fragment shader per pixel, so this line is what carries the normal over to the fragment shader.
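To keep the register names straight, my understanding of the AGAL register families is:

```
va0-va7    vertex attributes, from setVertexBufferAt (vertex shader only)
vc0-vc127  vertex constants, from setProgramConstantsFrom... (vertex shader only)
vt0-vt7    vertex temporaries
op         output position in clip space
v0-v7      varyings: written by the vertex shader, interpolated, read by the fragment shader
ft0-ft7    fragment temporaries
fs0-fs7    texture samplers, from setTextureAt (fragment shader only)
fc0-fc27   fragment constants
oc         output color
```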
Now on the fragment shader we have the first 2 lines:
"tex ft0, v0, fs0 <2d,linear,nomip>\n" + // read from texture
"nrm ft1.xyz, v1.xyz\n" + // renormalize normal
the first line is dealing with the texture: fs0 is the texture itself, bound to sampler slot 0, and va1 contains the texture coordinates:
context3D.setTextureAt(0, obj.texture);
context3D.setVertexBufferAt(1, obj.texCoords, 0, Context3DVertexBufferFormat.FLOAT_3);
The texture coordinates were passed through the vertex shader to create v0, which carries the object's UV mapping:
"m44 op, va0, vc0\n" + // pos to clipspace
"mov v0, va1 \n" + // copy uv
now the first line of the fragment shader samples the texture: fs0 is the sampler that setTextureAt(0, obj.texture) bound, v0 holds the interpolated texture coordinates, and the sampled color lands in the temporary ft0. The temporary registers the fragment shader uses have the prefix ft.
then comes the crucial line which involves the normals, the second line of the fragment shader:
"nrm ft1.xyz, v1.xyz\n" + // renormalize normal
ft1 will later be used to determine the amount of lighting of every fragment, meaning every pixel, by calculating the dot product between the light direction vector and the normal of the fragment.
"dp3 ft3, fc2.xyz, ft1.xyz \n" + // directional light contribution
this line of the fragment shader takes the light direction I have uploaded into fc2:
context3D.setProgramConstantsFromVector(Context3DProgramType.FRAGMENT, 2, Vector.<Number>([obj.LightPos.x,obj.LightPos.y,obj.LightPos.z,1])); // Light Direction
and ft1, which is the renormalized per-pixel normal from the line above (not really a normal map, just the interpolated vertex normal), and puts the dot-product result in a third variable called ft3.
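For completeness, a diffuse shader would then typically clamp that term and multiply it into the texture color; the last lines would be something like (a sketch, not necessarily my exact code):

```actionscript
"sat ft3, ft3\n" +    // clamp the dot product to [0, 1]
"mul oc, ft0, ft3\n"; // output color = texture color scaled by the light term
```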
so, what I need to do is to upload a normal map, just as I've uploaded the texture, like so:
context3D.setTextureAt(3, obj.normal_texture);
One thing I had wrong at first: textures bound with setTextureAt never show up in the vertex shader as va registers (those only come from setVertexBufferAt); a texture at index 3 appears in the fragment shader as the sampler fs3. So there is nothing to copy through the vertex shader, and I can sample the normal map directly in the fragment shader with the same UVs:
"tex ft2, v0, fs3 <2d,linear,nomip>\n" + // sample the normal map
and then renormalize that sampled value instead of the interpolated vertex normal:
"nrm ft1.xyz, ft2.xyz\n" + // renormalize normal
that would be a start. Of course, just as the openGL tutorial mentions, I need to combine the normal direction of the surface with the normals from the texture before I use them (properly this means transforming the sampled normal by a tangent-space basis, but as a crude first pass), something like:
"tex ft2, v0, fs3 <2d,linear,nomip>\n" + // sample the normal map into a temporary
"mul ft2.xyz, ft2.xyz, ft1.xyz\n" + // combine with the surface normal
and then renormalize the result into ft1:
"nrm ft1.xyz, ft2.xyz\n" + // renormalize normal
this seems to me the way to go. I haven't tested it yet, but I would like to hear your opinions on whether I'm getting it right at the theory level.