
GLSL Support

void

Friday, January 22, 2010

[0.0] Index
====================

[1.0] Introduction
[2.0] Download
[2.1] Links
[3.0] CVARS
[3.1] r_ext_vertex_shader
[4.0] Shaders (Quake 3)
[4.1] program <name>
[4.2] vertexProgram <path1> ... <path8>
[4.3] fragmentProgram <path1> ... <path8>
[4.4] (anim|clamp|clampanim|video)map[2-7] <path>
[4.5] Notes
[5.0] Programs (GLSL)
[5.1] Recognized Keywords
[APP] Upcoming features
[LOG] Changelog


[1.0] Introduction
====================


Hi guys!

As this is my first post here, I might as well introduce myself quickly. Some people may remember me going by the names Son-Goku or Yondaime on the Bid For Power/Saviour of Strength/NND and Freedom's forums. I used to be part of several projects in the scene until I had to take some time off to focus on university.

Anyway, now that I've graduated and have more time on my hands again, I thought I'd improve my OpenGL skills a bit and went back to finish some of the features initially meant for inclusion in NND. Since I figured some people might have use for those, I offered Alex and Zeth some help implementing them into the ZEQ2Lite (and therefore NNDLite) engine, and, well, here I am.


[2.0] Download
====================

So, the first feature I have more or less finished is support for GLSL programs.

If you want to take a look at the source code (or want to integrate it into your own engine), you can download the source-code changes against one of the more recent ioquake3 revisions (1772) from this link:

[2.1] Links
[Sorry, need to update these, developers can find the latest code in svn]


[3.0] CVARS
====================

A new cvar has been added to enable/disable the GLSL rendering path (default: disabled):
[3.1] r_ext_vertex_shader (0|1) - 0 disable, 1 enable
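To toggle it in a running game, the cvar can be set from the console like any other renderer cvar; as usual with renderer changes, a vid_restart is needed for it to take effect:

```
/r_ext_vertex_shader 1
/vid_restart
```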


[4.0] Shaders (Quake 3)
====================

This section will be especially interesting for developers. To enable rendering of a surface with a GLSL program, the following keywords have been introduced:

[4.1] program <name>
Name used to reference the program. To be used in combination with 'vertexProgram' and 'fragmentProgram'.
NOTE:
There is a "magic program" called "skip" which will cause a shader's stage to be ignored when rendering.


[4.2] vertexProgram <path1> ... <path8>
One to eight files with GLSL code, making up the vertex stage of the program (remember that exactly one file needs a main() function as an entry point). To be used in combination with 'program' and 'fragmentProgram'.

[4.3] fragmentProgram <path1> ... <path8>
One to eight files with GLSL code, making up the fragment stage of the program (remember that exactly one file needs a main() function as an entry point). To be used in combination with 'program' and 'vertexProgram'.

[4.4] (anim|clamp|clampanim|video)map[2-7] <path>
This will load any texture supplied in <path> and map it to texture unit [2-7].

[4.5] Notes
- map <path> in the first stage of a shader equals texture unit 0.
- map <path> in the second stage of a shader equals texture unit 1 whenever Quake 3 would switch to multi-texturing.
- If vertex shaders are disabled or unsupported, all related keywords are ignored and rendering works exactly as in stock Quake 3.


[5.0] Programs (GLSL)
====================

So now you know how to activate your program. However, you may need access to variables which are not automatically forwarded to GLSL programs (unlike the built-in gl_* variables).

For this purpose I have defined a list of keywords. Whenever one of these keywords is found as a variable name (of the proper type) within your GLSL shader, the engine will automatically supply you with useful data.

[5.1] Recognized Keywords
uniform int u_AlphaGen;
uniform vec3 u_AmbientLight;
uniform int u_ColorGen;
uniform vec4 u_ConstantColor;
uniform vec3 u_DirectedLight;
uniform vec4 u_EntityColor;
uniform vec4 u_FogColor;
uniform int u_Greyscale;
uniform float u_IdentityLight;
uniform vec3 u_LightDirection;
uniform mat4 u_ModelViewMatrix;
uniform mat4 u_ModelViewProjectionMatrix;
uniform mat4 u_ProjectionMatrix;
uniform int u_TCGen0;
uniform int u_TCGen1;
uniform int u_TexEnv;
uniform sampler2D u_Texture0;
uniform sampler2D u_Texture1;
uniform sampler2D u_Texture2;
uniform sampler2D u_Texture3;
uniform sampler2D u_Texture4;
uniform sampler2D u_Texture5;
uniform sampler2D u_Texture6;
uniform sampler2D u_Texture7;
uniform float u_Time;
uniform vec3 u_ViewOrigin;
... to be continued ...
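As an illustrative sketch (my own example, not taken from the demo package), declaring one of these names with the matching type is all it takes; for instance, a minimal fragment program that scrolls its base texture over time:

```glsl
// Hypothetical fragment program: scrolls the base texture horizontally.
// u_Texture0 and u_Time are filled in by the engine because the names
// and types match the recognized keywords above.
uniform sampler2D u_Texture0;
uniform float u_Time;

void main(void)
{
	vec2 texCoord = gl_TexCoord[0].st + vec2(u_Time * 0.1, 0.0);
	gl_FragColor = texture2D(u_Texture0, texCoord);
}
```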


[APP] Upcoming features
====================

As of today the GLSL system in itself is fully working. There may however be requests for additional keywords, so feel free to make requests.

Additionally, my personal plan is to write shaders capable of all standard Quake 3 shader features (as far as applicable), so that stuff like deformVertexes is moved from CPU to GPU. Work on this is still pretty early (see the generic program in the GLSL demo package). This shader is meant to speed up regular Quake 3 rendering and also serve as a demo.

Once the GLSL stuff has matured a bit (which should be soonish), I'll focus on support for FBOs and cube maps, which should in turn allow for some pretty shiny post-process effects etc.

[LOG] Changelog
====================

- GLSL is now part of ZEQ2
- program skip support added
- clampanimMap support added

Zeth The Admin

Friday, January 22, 2010

Looking pretty good, Yondaime.

Although I'm personally used to alternative implementations where you manually specify which keywords each GPU shader needs, auto-mapping particular shader keywords isn't a bad idea in itself. I would, however, mention that most of the code we've written uses a standard camelCase naming convention. While it's not that big of a deal, I'll just be upfront about my abhorrence of underscore symbols in variable names. Incidentally, it's something that could be fixed pretty easily anyhow.

I see most of the keywords we would need in your list already, but if you want/need a basis of comparison or just a sense of thoroughness, I would suggest binding any practical variables from this listing. Some manner of sending custom variable data from the running game code to a shader would be needed, of course.

When declaring a shader program in the Quake 3 shader file, can I recommend that you allow independent vertex/fragment program pathing rather than automatic presumed joint _vp/_fp separations? This would allow the author to make use of independent vertex/fragment programs with unique names, as well as not force a particular naming convention or design paradigm. Additionally, optional initial function specification would be a great feature to have if the user has a complex multi-function shader or stores both vertex/fragment programs in the same file (as our current shader library for Zios is set up).

Something as simple as this might do fine:

vertexShader "cel.glsl" "vertexMain"
pixelShader "cel.glsl" "pixelMain"


Where the 3rd parameter is the entry-point function.

Overall though my changes are just of a minor semantic nature. Great work so far and I'm certainly looking forward to seeing future frame buffer support and especially an active implementation with our current codebase!

void

Friday, January 22, 2010

Shader suggestion
Are you sure you aren't thinking HLSL here? :p

As far as I know the separation of vertex and fragment programs (in GLSL) is a necessity and the entry function _HAS_ to be main.

As such, your example Quake 3 shader code wouldn't work unless some parser manually separated the single cel.glsl file into two separate strings before uploading them as GLSL shader objects. Additionally, the individual scripts would have to be re-parsed to set up all the variable data properly for each stage. Sure, it's possible, but it sounds like debugging heck to me, and your average GLSL coder will certainly be confused by such a system.

Additionally keep in mind that a vertex and a fragment shader are not really two separate entities (at least within GLSL).

Of course something like
vertexProgram vertex.glsl
fragmentProgram fragment.glsl
would be possible, but GLSL-wise this would just result in the creation of a "vertex-fragment.glsl" program, so you don't gain anything over "program vertex-fragment.glsl" as it pretty much is right now.

Just to clarify:


Currently:
program cel --loads--> cel_vp.glsl --linked to--> cel (program)
                       cel_fp.glsl

Gets used as:
glUseProgram(cel);



I chose this way so you really have a simple and foolproof way to identify a program.

Your idea would have to look like this:


Proposed:
vertexProgram a.glsl --loads--> a.glsl
fragmentProgram b.glsl --loads--> b.glsl --linked to--> a+b (program)

Gets used as:
glUseProgram(a+b);



In my honest opinion, it'd get rather messy keeping track of all the a+X combinations you end up with; codewise you don't gain anything, and in a folder with ~20 programs not following a specific naming scheme it'll be a mess to make out what is what.

If cel.glsl is your vertex program, what would be your fragment program for it? That's why I forced _vp and _fp upon users.

The only disadvantage of this approach I see is that you might want to use the same vertex shader but a different fragment shader under some circumstances, which would require creating a 1:1 copy of the vertex shader with just a different name, but that's just ~20 kb (unzipped) of hard drive space lost, so it's negligible.

Custom variable data
That's a great idea. I suppose you mean exposing the system to the gamecode in the form of trap calls? To what extent would you want to see this happen?

The minimum requirements would obviously be along the lines of this:
phandle = trap_R_GetProgramHandle(<name>)
uhandle = trap_R_GetUniformLocation(<phandle>, <uniform>)
trap_R_SetUniform{}(<uhandle>, <value>) (where {} corresponds to the values just like in OpenGL so 3f for 3 float values etc)

I'm wondering, though: would there also be situations where you'd want to load a program from gamecode? Currently I can't really think of one, since remapping/loading a shader with a program line would automatically do that for you anyway, but one might want to at least register post-process effects within the gamecode?

Keywords
Nice list, I'll see what can be done, thanks.

Zeth The Admin

Friday, January 22, 2010

Shader suggestion
No, I'm not thinking of a specific language construct. However, I am basing my suggestions on existing implementations I've worked with in the past (primarily that of Ogre, which abstracts and provides a neutral material interface for working with HLSL/CG/GLSL shader programs without complex parsing schemes).

Given that I usually work with pre-abstracted rendering APIs with higher-level designs, I'm not familiar with the actual to-the-bone GL function calls to accomplish these tasks.

In my honest opinion, it'd get rather messy keeping track of all the a+X combinations you end up with; codewise you don't gain anything, and in a folder with ~20 programs not following a specific naming scheme it'll be a mess to make out what is what.


Codewise, this may entirely be the case, but as far as the storage system goes, we'd actually end up with fewer files and a more proper/clean focus on program function rather than separation of procedure. I apologize if it's a bother, but you'll find that I'm VERY focused on cleanliness, modularity, and structure over many other design aspects when it comes to programming/data storage.

The problem with forcing a particular vertex/fragment program naming scheme is that you won't be able to use placeholder/generic shader procedures for these steps uniquely. For example, a VERY large portion of our current shader library has shaders that have identical vertex or pixel shader steps (depending on what the shader's focus is).

By having the capability to specify unique scripts for each step, you can actually reuse a "common.glsl" vertex or fragment shader step in programs and save a TON of space and purposeless redeclaration as we actually have done in practice.

To be quite honest, ideally one would want support for include calls inside the GLSL files to fire off the proper general/shared functions when needed (to really keep things as modular as we have on our engine), but I don't entirely see the purpose of investing too much time in creating a largely robust GLSL-only system for Quake 3 when we can get by with much less.

Custom variable data
Implementation of the actual trap functions needed to grab a shader handle and associated parameters is entirely in your hands. I'd say that fewer calls would be better when possible, of course.

I'm wondering, though: would there also be situations where you'd want to load a program from gamecode? Currently I can't really think of one, since remapping/loading a shader with a program line would automatically do that for you anyway, but one might want to at least register post-process effects within the gamecode?


NO! Embedding scripts/media inside the code is a huge blunder a lot of games (commercial and otherwise) make. Without full segregated exposure to the files, most shader artists/programmers won't be able to easily maintain/modify and alter such effects without burying themselves in unrelated code.

I know several shader programmers that don't know a LICK of C/C++, but are still very adept at finding their way around GPU programs and getting tasks done as long as the files are exposed to do so (especially when you consider tools like RenderMonkey / Shaderworks that can streamline the process).

Alex Al Knows

Tuesday, January 26, 2010

Ah, great work on getting it released :)

Been a bit busy with a website project over the weekend but I had a bit of a fiddle last night and knocked up some test shaders. It certainly works well so far, I'll post my thoughts and ideas after work.

Alex Al Knows

Tuesday, January 26, 2010

Played a bit more and I like the way it's shaping up; just knocked up a cheap rim-lighting shader, as everyone's doing it :D

Here's my thoughts on what's already been discussed:

camelCase
I'd just like to note that it would be incredibly useful to keep the u_ and v_ prefixes on the uniform and varying variables to highlight what type of variables they are. The underscore is especially clear as it's not used anywhere else, much the same as Quake 3's _t postfix for typedefs.

Vertex/Fragment Program split

I agree with Brad here about it being better to be able to define the vertex and fragment programs separately. Even if, engine-side, they're handled as a single program, being able to reuse the same vertex program with numerous different fragment programs would save a lot of code duplication and make maintaining the vertex programs a lot simpler.

Chances are most shaders will have identical vertex programs, so it would likely be very handy. I wouldn't have it as separate keywords in the Quake 3 shader script, though; instead I'd probably keep the current program keyword and allow a second string parameter, as such:


program myFunkyShader <--- loads myFunkyShader_vp.glsl and myFunkyShader_fp.glsl
program standard myFunkyShader <--- loads standard_vp.glsl and myFunkyShader_fp.glsl



As I said, engine-side it could still be identified as it currently is, with the shader being identified by the fragment program's name, though that is acting under the assumption that a fragment program won't be used with different vertex programs.

Thinking about it, if 'static' uniform variables defined in the Quake 3 shader script were involved, would it not require tracking individual instances of shader programs anyway?

Custom Variable Data
I think the "dictionary" of uniform variables idea which I explained before in a PM pretty much covers what Brad's asking for concerning this.

I'm not sure I can think of an occasion where the gamecode would be aware of the shader program that's being used, though, except maybe for debug or fallback shaders, so I'm not sure how much use the suggested traps would be. Also, doing it that way would mean some mess of repeated code where gamecode variables are used in many shaders.

What would probably be a better way of doing it is a variation on the dictionary idea: rather than having a function entry point to grab the information, we just store the variable's value and use traps to set it.


//in the renderer:
typedef struct {
   int id;
   char name[MAX_QPATH];
   float fValue;
   vec2_t v2Value;
   vec3_t v3Value;
   // ... etc ...
} uniformDictionary_t;

uniformDictionary_t uniformDictionary[] = {
   {0, "u_playerPower", 0.0f, {0.0f, 0.0f}, {0.0f, 0.0f, 0.0f} /* ...etc... */},
   {1, "u_lockedPlayerLocation", 0.0f, {0.0f, 0.0f}, {0.0f, 0.0f, 0.0f} /* ...etc... */},
   // ...etc...
};


// The trap
trap_SetShaderUniform{type}(int uniformID, {type} value)   // Where {type} is the variable type

// In a gamecode header
#define UNIFORM_PLAYERPOWER 0
#define UNIFORM_LOCKEDPLAYERLOCATION 1



// In the gamecode somewhere
...
trap_SetShaderUniformf(UNIFORM_PLAYERPOWER, ps->powerLevel);
if (ps->lockedPlayer) trap_SetShaderUniformv3(UNIFORM_LOCKEDPLAYERLOCATION, ps->lockedPlayer->pos);



Then, renderer-side, they're automagically linked to the program's uniforms when they're needed, in the same manner you set the current uniforms.

Passes/Scriptability
I have some ideas regarding handling multiple passes in a scriptable fashion, which could be extended to things such as post-process chains and other shader features. Looking at the time though (2:45am :( ) I think I'll save writing them up until tomorrow.

Again, great work, I'm impressed with this version so far :)

void

Wednesday, January 27, 2010

camelCase
Might as well reply to this one if it's considered important. :P

Basically the system is meant like this:
- a_ = attributes (not yet implemented; more appropriate once VBOs are in)
- gl_ = opengl standard
- u_ = uniform
- v_ = varying

You can't do much about things like gl_Position, gl_FragColor, gl_Vertex (at least not before GLSL >= 1.30). I actually prefer camelCase myself, but there's one thing I find even worse, and that's being inconsistent.

On a side note, I'd still consider this to be camelCase, actually. ModelViewProjectionMatrix looks camelCase to me. The type-identifier letter and the underscore aren't part of the name, after all.


Vertex/Fragment Program split
I'll look into this, but I still don't have an idea what number of programs we're talking about here. As of right now I've mostly worked under the assumption that there'd be around 10, maybe 20 combinations of vertex/fragment programs, and actually set the limit to 64 because I thought that wouldn't be reached anyway (MAX_PROGRAMS, I think it was).

I've also mostly assumed the average shader to be around 5 KB in size; zip that up and you're at 1 KB. Even if every program I myself have planned were done, there wouldn't be any duplicate code yet, though I can see it happening for a simple passthrough shader.

The thing is, a passthrough shader is about 100 bytes unzipped, so that's obviously not what you're talking about when you talk about saving space.

Help me out here guys and tell me what you actually intend to do. :)

Also, the split Zeth mentioned, into vertexProgram and fragmentProgram, would make more sense in this regard. Just splitting the program name would be somewhat half-assed, since it still hides some "control" away from the user.

vertexProgram math.glsl vertexmain.glsl
fragmentProgram math.glsl sobel.glsl depth.glsl fragmain.glsl

Would then be possible, with the main files keeping the main() function while the other two files have math functions or whatever.

As for static uniforms:
You're thinking too complicated. You already have those in the system right now.

map[2-7] already adds a "static" value in the form of the surface's texture.

To supply a program with a static value you don't really have to keep it static in GLSL space; it's only static in Quake 3 shader space. You then update it as necessary.

Imagine a keyword like map being called floatVal, holding 0.5 instead of a path, and 0.8 in another shader. It ends up being static data to the Quake 3 shader system but is still efficient, using only one program.
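As a purely hypothetical sketch (a floatVal keyword doesn't exist in the current code), the idea would look something like this in two different Quake 3 shaders:

```
textures/example/shaderA
{
	{
		program cel
		floatVal 0.5
	}
}

textures/example/shaderB
{
	{
		program cel
		floatVal 0.8
	}
}
```

Both shaders share the one cel program; only the static input differs.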


Custom variable data
Remember that uniform data is per program, not global. The dictionary approach in your PM (which was pretty good), as I understood it, was meant to work on a per-program basis (after all, it's meant to replace the current parser). Your example doesn't reference any program, though, so you wouldn't have any control over what actually changes.

For something like u_Powerlevel it's of course trivial, as this value is probably one you would want to have the same "global" value across all shaders.

But something like u_Texture, u_Color etc. would most likely NOT be meant to change for all shaders. There'd be a lot of those spread across many shaders, but all with different values. For shadowing it might be some greyish color, while for the railgun it's the player's railgun color of choice, and somewhere else again it is completely different. When you switch your railgun to blue you certainly don't want blue shadows; when you darken the shadows a bit you don't want the railgun to darken, though, etc.

Also keep in mind that the gamecode doesn't need to be aware of the running programs to be able to reference them. The way the system is implemented is in line with similar systems for models, shaders etc., so you'd use the program's name to actually MAKE the gamecode aware of it and could subsequently use it to change uniforms in it.

It's not really different from
handle = trap_R_RegisterShader(name)
trap_R_AddPolyToScene(handle, ...)

Zeth The Admin

Wednesday, January 27, 2010

Hehe. It seems we're all over the place in mixed agreement on several of the concerns (and sub-issues). Hopefully we'll get closer to the same line of thought in the next few responses!

camelCase
I disagree with the notion of embedding type/relationship information INSIDE the variable name/style itself (a practice still common to this day in many major languages/productions). A good, clean, semantic variable should convey what it holds without the need to flood it with extraneous information and stylization (such as capitalization) changes.

Even so, I realize that at times suffixes/prefixes will be necessary. When using these, however, you CAN still use camelCase, and you should most certainly not use presumptuous single-letter abbreviations! Not only do these obfuscate the variable's meaning, they require context studying to figure out what each "symbol" means. Whether it's another programmer viewing the code, you revisiting it later, or a non-programmer looking to understand the gist of the operation at hand, keeping variable names full and non-abbreviated is a very important aspect of quick code comprehension.

However, if the naming conventions are out of our hands in this case, I do fully respect the idea of keeping code consistent. Even I realize that sometimes you just have to close your eyes, hum a little tune, and press the linguistically-vile shift + minus combination.

Vertex/Fragment Program split
[Shader Amount]
While I didn't really consider a hard limit on the number of shader programs, I would say it's not unreasonable to assume that at least 128 shader programs should be able to exist at a time once you get into variations of combined shaders. We have 100+ shaders in our library (that I was planning to port), but many of them are derivatives from our modular shader approach.

For instance, we have a standard diffuse shader that can be mixed and matched with our normal-map shader to form a new shader. Or you could take that one step further and throw in the rim-highlight shader to create a FURTHER new shader. These aren't combined by hand. Using clean shader-language structuring, consistent naming schemes, and our own sub-idea of further broken-down stages for the vertex/pixel passes, things can be automated to work in real time based on the user's individual selections of various graphic options.

While I'm not sure if we'd actually go as far as to implement the realtime shader combiner in Quake 3, it is something to consider in terms of what may be needed for varying graphic option settings for different users' needs.

[File/Memory/Storage]
I don't think there's really a file-size/memory concern in any regard. You make a lot of mention of zipping the files up, but our current file structure has actually been discouraging the use of archives, for SVN and developer maintainability. They'd likely remain extracted anyhow for easy editing.

[Keywords]
Although I can see the programmatic merits of bundling the vertex/fragment program declaration in parameters, I still stand behind the call for separated keywords. From a scripting/usability point of view, it's often much easier for the user to have clear and concise individual keywords for particular actions rather than a strand of parameters which will, after time, require lookup for meaning/usage.

I really dig Jens' idea of an optional list of shaders to "include" in a single program; however, wouldn't this be possible/easier through the shader script itself? HLSL/CG actually has an include keyword for doing as such, so I'd assumed GLSL naturally would as well, no?

Custom variable data
I don't fully comprehend the solution being discussed, but as far as I understand it, Alex's idea of each shader instance having a dictionary's worth of custom data that the gamecode sets through trap calls is pretty acceptable. It's not as flexible, and it would likely accumulate a lot of unused junk data per instance, but it does give a baseline for custom variable slots.

In actuality, a value like powerLevel being passed would be dynamic on an instance basis, since each individual's/object's power level will vary greatly to achieve the intended results. A prime example of this would be the planned aura shader, which uses the existing power level to determine things like aura size, strength, fluxing, etc.

Implementation
This is probably one of the more important subjects that I feel has not been touched on enough. While the existing ioquake3 base is well and good, it very likely will not port that easily to our existing ZEQ2-lite engine codebase (at least not through a simple SVN patch).

I recall one instance where Nightz actually submitted a working MD5 implementation patch for us based on the exact same ioquake3 base. All in all the patch rattled off hundreds of merge errors and proved to be quite time consuming to clip into our existing codebase by hand due to some of the core function changes that we had.

I know Arnold was somewhat into attempting to integrate the GLSL code, but I've not heard from his progress thus far. Any volunteers/others offering to help out as well?

void

Thursday, January 28, 2010

I'm kind of on the run right now, so I'll probably write more later, but one thing may have been fully misunderstood here, sorry for that. ;)

Implementation
My intention wasn't that you'd have to integrate it into ZEQ2 yourself; I can do that, of course. I only wanted to get the code reviewed before merging it, so that the changes to Quake 3 are understood, discussion on the implementation and possible feature requests can start, and maybe even some bugs can be caught if/when someone fiddles around with it a bit (or even by just looking at it). Just wanted to get some more input before finally merging it. :)

If you're okay with the current implementation, I can merge it right away, although I suggest leaving r_ext_vertex_shader at 0 unless you're actively developing programs, since the generic program isn't complete and stuff like deformVertexes won't work (yet).

Vertex/fragment split
I didn't test anything of the following but that's what I read about it:

As far as I know, NVIDIA has some hacked-up way of doing GLSL which turns it into a Cg/GLSL hybrid. As such, an #include call does work on NVIDIA drivers; most shader generators are said to specifically error out on it though, as it is not officially part of the spec, and it won't work on any non-NVIDIA hardware.

Using several files basically looks like this:
char **sourcecode; // array of strings, one per source file
for (i = 0; i < numVertexPrograms; i++)
sourcecode[i] = readFile(vertexProgram[i]);
glShaderSourceARB(vertexShader, numVertexPrograms, sourcecode, 0);

Suppose we have a.glsl, b.glsl and main.glsl. Then main.glsl would be the entry point to the vertex shader.

If you now add the prototype of a function from a.glsl or b.glsl at the top of main.glsl, you'll be able to call it from main.glsl.
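A minimal illustration of the prototype trick (hypothetical file contents, just to show the idea):

```glsl
// a.glsl -- "library" file, no main()
float brightness(vec3 color)
{
	return dot(color, vec3(0.299, 0.587, 0.114));
}

// main.glsl -- vertex-stage entry point
float brightness(vec3 color); // prototype for the function living in a.glsl

void main(void)
{
	gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
	gl_FrontColor = vec4(vec3(brightness(gl_Color.rgb)), 1.0);
}
```

The linker resolves brightness() across the two compilation units when the program is linked, which is exactly what the multi-file vertexProgram/fragmentProgram keywords rely on.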

Zeth The Admin

Friday, January 29, 2010

That's really great to hear on both subjects, Jens. Looking forward to absolutely everything! :D

void

Friday, January 29, 2010

Three new keywords (to be used in combination, Quake 3 will throw a warning if one's missing):
program <name>
vertexProgram <file1> .... <fileN>
fragmentProgram <file1> .... <fileN>

program <name>
==============
Sets a human-readable identifier for the program.
This may prove useful in the future if the gamecode needs to modify input data, and it is necessary to prevent identical programs from being created.

vertexProgram/fragmentProgram
=============================
Pretty much explained those before. Since Quake 3 isn't exactly great with dynamic memory, there's a constant similar to the one for animMap, called MAX_PROGRAM_SOURCES, limiting the number of source-file parameters. I'm not sure if there's a limit OpenGL-wise, or what it would be, but the default value of 8 I set sounds like a good number. There's no real danger in changing it.

MAX_PROGRAMS
============
As said before, the number of programs was previously limited to 64. I've raised this value to 256 as the default for the next patch (and the ZEQ2 implementation); there's no danger in changing this either, it'll just use a bit more memory.

A little example:
=================

models/players/homer3d/homer
{
   drawOutline
   {
      map models/players/homer3d/homer.tga
      program cel
      vertexProgram glsl/math.glsl glsl/cel_vp.glsl
      fragmentProgram glsl/cel_fp.glsl
      rgbGen identityLighting
   }
   {
      map textures/effects/cel.tga
      blendfunc filter
      rgbGen identityLighting
      tcGen cel
   }
}



Good practice
=============
It is no longer necessary to have _vp and _fp suffixes on files, and the shaders don't need to be in the glsl directory.

I strongly suggest the following though:
- Just like shaders, all files should remain in the glsl directory; it's just a lot easier to find things that way.
- A vertex program's main() function should be in a file with the _vp suffix. It's almost impossible to know which stage a main() is actually meant for otherwise.
- The same goes for a fragment program's main() function: keep it in a file with the _fp suffix.
- "Libraries" (files without a main() function) obviously don't need a suffix and probably shouldn't have one, especially if their functions are useful for both stages.



I have already finished the code for this stuff and am currently in the process of testing it. If everything goes as planned you can expect it to show up in SVN on Sunday.

Alex Al Knows

Reply with quote Saturday, January 30, 2010

I did have a large reply missing a closing paragraph or two but it seems to have been mostly covered already. I'll rip out the relevant stuff and get it posted in a bit, though that's mostly regarding control over multiple passes & post processing so not particularly necessary right now Smile

I did compile a glsl version of zeq2lite as a test using some modified source file Arnold passed along (I think he said he got them off you, Jens?) and thought I'd best report some findings.

Firstly, it still seems to be rendering two passes if there's a second stage of the Quake 3 shader defined. Here is a shot without the second stage and here is one with it. This is using my own cel shader program rather than the generic shader program, so the second stage is definitely being rendered separately.

Also, the outline doesn't seem to be rendered, though I would guess that's due to the outline relying on render-state trickery which isn't replicated through the programmable pipeline. I'm guessing these two issues are due to it being a quick port of the code, though.

Another couple of issues are the texture coordinates on the misc_model geometry being stretched and the tcGen cel keywords not generating correct coordinates for the second stage. I'm guessing the former is due to tcMod scale not being supported in the generic shader and the latter because it discards most attributes defined in the second stage. Examples can be seen here without glsl enabled and with them enabled.

Hang on, scrap that about the tcMod scale. It seems the level shader scripts use second, third and sometimes fourth stages with their own tcMod attributes, so I guess, much like the tcGen cel, these attributes are getting discarded. Both are easily solvable with dedicated shaders for such things.

The most important issue, though, is that the hud, console and other 2D elements don't render when glsl is enabled as can be seen in the previous shots as well as in the menu shots here without and here with (the black player model is due to the second stage). I would guess this issue is due to some render states being set differently when shader programs are in use.

I hope this info is useful Smile

void

Saturday, January 30, 2010

I'm not sure who Arnold is (sorry for that!), but I didn't release anything other than what's in the first post yet.

I'm not sure about all issues you reported, but I'll try:

Multiple passes
It's mostly out of my control. You can check out the conditions under which Quake 3 will switch to multi-texturing (and as such combine two passes into a single one) in tr_shader.c, in a function called CollapseMultitexture.

The stars really need to align for Quake 3 to do multi-texturing by itself.

This is pretty much working as intended, though. Just write a regular shader with two stages as a fallback for non-GLSL mode. Then add the second stage's texture to the first stage (map2 => u_Texture2) and write a program there which ignores the shader's old-school keywords. In the second stage you could add some sort of dummy program that discards fragments (so it doesn't do anything).

Good call though, I'll implement a new shader keyword or add a magic "program none" which ignores a stage when in GLSL mode for some extra performance.
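A minimal engine-side sketch of what such a stage-ignoring keyword could look like: in GLSL mode, a stage whose program matches the magic name is simply left out of the GLSL stage iterator instead of wasting a pass on a dummy program. Struct and function names here are illustrative, not the actual implementation.

```c
#include <assert.h>
#include <string.h>

/* Illustrative sketch of the proposed keyword: in GLSL mode a stage
 * marked "skip"/"none" would simply be left out of the GLSL stage
 * iterator, instead of wasting a pass on a dummy program that
 * discards every fragment.  Struct and function names are made up. */
typedef struct {
    char programName[32];
} stageDef_t;

int CountRenderedStagesGLSL(const stageDef_t *stages, int numStages) {
    int i, rendered = 0;
    for (i = 0; i < numStages; i++) {
        if (strcmp(stages[i].programName, "skip") == 0)
            continue;            /* fallback-only stage: ignored in GLSL mode */
        rendered++;
    }
    return rendered;
}
```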

On a sidenote:
There is a bug with u_TexEnv in the patch I uploaded above which leads to errors with the generic program's multitexturing: u_TexEnv's initial value does not correspond to the uniform firewall's (the firewall keeps unnecessary uniform calls from happening), so u_TexEnv never gets updated to the correct value. This may be the reason for this and the menu issue.
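For readers unfamiliar with the term, a uniform firewall is just a cached-value check in front of the glUniform* call; the bug described is the cache starting out with a value that doesn't match GL's actual state, so the first real update gets skipped. A sketch with the GL call stubbed out (all names here are illustrative):

```c
#include <assert.h>

/* Sketch of a uniform firewall for u_TexEnv: cache the last value
 * pushed and skip the GL call when it hasn't changed.  glUniform1i is
 * stubbed with a call counter so the behavior is visible. */
int uniformCalls = 0;
void StubGlUniform1i(int location, int value) {
    (void)location; (void)value;
    uniformCalls++;                    /* stands in for the real glUniform1i */
}

typedef struct {
    int lastValue;
    int valid;                         /* 0 until the first real upload */
} uniformCache_t;

void SetTexEnv(uniformCache_t *cache, int location, int value) {
    if (cache->valid && cache->lastValue == value)
        return;                        /* firewall: redundant call avoided */
    StubGlUniform1i(location, value);
    cache->lastValue = value;
    cache->valid = 1;
}
```

Marking the cache invalid until the first upload avoids the described bug, where an initial cache value that happens to equal the requested one suppresses the upload entirely.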

Outline
RiO added the outline drawing code to each stage iterator. Since GLSL has its own stage iterators, the function is never called.

Two options here:
1. Either you move the outline drawing call to EndSurface (which should work fine; NND's implementation of outlines is similar to "r_showtris 1" and as such just calls the outline drawing right before the r_showtris function).
2. You add a call to the outline drawing code to the GLSL-based stage iterators (which in reality is only the GLSL generic one, since the others are placeholders in the above patch).

Texture coordinates/tcGen cel
That's simple. If you don't supply your own program the generic program takes care of rendering. The generic program in the first post is a bit outdated and the final one doesn't exist yet. If you open it you'll see that the only texture coordinates it is capable of generating are lightmap ones and those stored within the model.

Unless you make it aware of tcGen cel (similar to how it handles tcGen lightmap right now) it can't generate cel-shaded texcoords.

There are two ways cel-shading can be implemented in GLSL:
1. Either you hack up the generic program to be tcGen cel aware and call cel-shading routines from within.
2. Or you create a "program cel" which automatically assumes that you want a surface cel-shaded and doesn't rely on a tcGen cel value (which would then only be used in the fallback non-GLSL path).

I'm not even sure which way I'd go about it, probably 2, because you save yourself a lot of expensive uniform calls.
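For illustration, the math such a cel program boils down to is usually just a clamped N.L used as the s coordinate into the cel ramp texture (textures/effects/cel.tga in the example shader earlier in the thread). A hedged C sketch of that computation; ZEQ2's actual tcGen cel math is RiO's code and may well differ.

```c
#include <assert.h>

/* Hedged sketch: compute the lighting intensity N.L per vertex and
 * use it, clamped to [0,1], as the s texture coordinate into the cel
 * ramp.  The ramp texture then quantizes the smooth value into bands. */
float CelTexCoordS(const float normal[3], const float lightDir[3]) {
    float d = normal[0] * lightDir[0]
            + normal[1] * lightDir[1]
            + normal[2] * lightDir[2];
    if (d < 0.0f) d = 0.0f;            /* back-facing: darkest band of the ramp */
    if (d > 1.0f) d = 1.0f;
    return d;
}
```

On the GPU the same three lines live in the vertex (or fragment) program, which is exactly why moving them there saves the per-vertex CPU work and the uniform traffic mentioned above.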


Just to clarify what I said above:
As soon as GLSL is enabled there's no turning back. Every stage will always have a program, you can just specify your own if the generic one doesn't suit your needs.

Again, this is working as intended (aside from the unfinished generic program, which will be done within the week). It speeds up regular Quake 3 a lot, since everything RB_DeformTessGeometry, ComputeColors and ComputeTexCoords (which handles the tcGens) do will be done on the GPU instead of the CPU.

Menu
Can't answer this one for sure; I know that issue, but I already fixed it on my end. Several possibilities here:
1. I uploaded something older than I had intended.
2. Something went wrong with the implementation.
3. It's ZEQ2 specific which I'll find out tomorrow when I port everything over and fix it then.
4. As said above it might be u_TexEnv related.

The following is the current state of the generic program and glsl system, so I'm confident the issue is already fixed in the code that's going to be merged.




Tomorrow's update should fix all of the issues and will introduce updated versions of all default programs, as well as the vertex/fragment split, which works really well; I've already tested it by starting a texturing.glsl library as recommended by the Orange Book.

Alex Al Knows

Saturday, January 30, 2010

Sorry, I meant I got the modified source files off JadenKorn. He sent me a zip with the relevant source files of the ZEQ2 codebase modified for your glsl support; I thought he said you'd sent them to him, though I might have read wrong.

Multiple Passes
Ah, fair enough. Yeah, having a magic "none" program would be handy, as the screenshot was an attempt to use the clampMap from the second stage in the first stage's program, which gave the undesirable effect shown.

Outlines
That makes sense, pretty trivial to fix in that case.

TexCoords
Definitely, creating shader-specific programs for these cases is the best way. As for the texture coordinate scaling: I mistakenly thought the levels were using basic texturing rather than multi-stage blending, and the tcGen attributes were still working in the first stage because I forgot to do a vid_restart. Oops >_<

Menu
I'd say it's likely a ZEQ2-specific bug, as the menu & hud are fine in the vanilla IOQ3 build. The console does have some additional blending going on; I'm not sure if that's done renderer-side or is just a shader script effect, though. If it's the latter then it should just be a case of sorting out shaders for them, however the former may need more work tracking down. Hopefully, as you said, it's been fixed by the updates since the first version you released.

void

Saturday, January 30, 2010

Quick question:
Do you want me to add it to Unstable or Engine?

JadenKorn Totally Explicit

Saturday, January 30, 2010

Ah, glad you asked that question. Let me introduce myself: I am the Arnold who was mentioned before; I quickly helped Alex back then, since I had MSVS compilation issues at the time. Razz
The Unstable folder is my experimental work, in which I'm trying to bring the engine and game codebases together into one, just like in vanilla ioquake3's SVN. My intention is to re-structure it so it can compile the .so files (shared libraries for Linux-based systems), which are mostly intended for the OpenPandora system. It's also a great way to compile a working Windows build via msys on MinGW toolsets.

Alright, a quick answer to your question: apply the patch to the Engine folder for now; the unstable codebase is still under testing. It will become the active source code base once it's capable of compiling the client, the dedicated server binary and the shared library files.

Amazing progress you've made with the GLSL implementation; looking forward to more. Smile

void

Saturday, January 30, 2010

Ah, I thought it'd be you until Alex cleared it up but I'm not sure who's around nowadays. Very Happy

I'm actually a Linux (and MinGW for windows binaries) user myself, so what you're doing sounds pretty good. Smile

EDIT:
Just a mini todo list for myself:
clampanimap2-7
program none

EDIT 2:
- GLSL is now upstream ZEQ2!
- r_ext_vertex_shader is disabled by default, if you want to mess with GLSL support set it to 1
- Please report any bugs/GLSL questions/feature requests in this thread. I'm specifically looking for someone to compile it in Visual Studio, as I had to hack up a makefile just to get a minimal exe to build with MinGW for testing (I love you, Arnold, for working on getting a proper one made). There shouldn't be any problems with Visual's compiler, but you never know.

RE Alex:
- I've added the outline drawing code to the stage iterator
- I was right about the menu issue already having been fixed, unless it's for some other weird reason (like ATI drivers). See screenshot. Smile
- On a sidenote, I've found out what the problem with ZEQ2 is regarding multitexturing. Your cel-shader uses rgbGen identityLighting for the first stage and rgbGen lightingUniform for the second stage. Multitexturing of course only works on identical rgbGens. In other words: ZEQ2's cel-shading never used multitexturing to begin with. There are two ways you guys can fix this: either replace lightingUniform with identityLighting, which will make the shader collapse to multitexturing (a big performance increase for what may be no visual difference; I haven't actually tried it, and I'm wondering how it even compiles considering it's accessing the fourth field of an array of size 3 (vec3_t)), or use program skip.
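To make the collapse condition concrete, here is a simplified sketch of the rgbGen check; the real CollapseMultitexture in tr_shader.c tests far more than this (compatible blendfuncs, matching tcMods, a free texture unit), and the names below are illustrative.

```c
#include <assert.h>
#include <string.h>

/* Simplified sketch of one CollapseMultitexture condition: among many
 * other checks, the two stages' rgbGens must be identical for the two
 * passes to collapse into a single multitextured one. */
typedef struct {
    char rgbGen[32];
} stageGen_t;

int CanCollapseRgbGen(const stageGen_t *s0, const stageGen_t *s1) {
    return strcmp(s0->rgbGen, s1->rgbGen) == 0;
}
```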

Screenshot:


Expect some more updates in the next few days.

JadenKorn Totally Explicit

Sunday, January 31, 2010

I would really appreciate it if you could provide us a precompiled ZEQ2.exe with GLSL support until I manage to fix the client binary compilation error, caused by a missing static link library (libcurl in this case).

The dedicated part of the game compiled successfully; I tried compiling it both under Windows and Linux via the MinGW toolsets/msys toolchain, and only the client causes some minor hiccups regarding the libcurl library.

Naming it ZEQ2glsl.exe is recommended, until a working MinGW build can be fully compiled.

Again, awesome job with the GLSL implementation, looking forward to the upcoming updates. Very Happy

void

Sunday, January 31, 2010

The first post and the one two posts above have been updated.


As for ZEQ2glsl.exe:
I don't think the ZEQ2.exe I compiled is worth uploading, at least not to SVN. As you said yourself, the MinGW toolchain is broken; I actually had to change the makefile, add some libraries and modify parts of the code just to make MinGW compile it, and for some reason sound doesn't work.

It'd probably be much better if the person who usually makes the ZEQ2.exe (presumably with Visual?) compiles the GLSL one, so the feature sets are identical (I turned off a lot of stuff in Makefile.local).

JadenKorn Totally Explicit

Sunday, January 31, 2010

Heh, sorry about that, I thought you had the necessary tools (MSVS) to compile the game with GLSL support. Anyway, I'm going to reinstall my Windows installation, then try setting up MinGW/msys from scratch and see if I can get the ioquake3-based "/code/lib/win32/" contents in there.

Hopefully, libcurl.a will be present, so the curl implementation inside the engine can be compiled against the MinGW Windows development libraries. A simple apt-get install libcurl4-dev resolves the curl/libcurl-related problem under Linux (Ubuntu in my case).

I will try to finish it up later today, hopefully getting the sound to work as well (haven't reached the state to test it, though).

For now, I can test it under my Linux installation.
The one reason I can think of why the libcurl static library link doesn't work is that it's either missing or pretty much MSVS/MSVE-exclusive, thus not working with the MinGW toolsets/codebase.

void

Sunday, January 31, 2010

Actually for me the curl error was a variable declaration within an #ifdef block (I think #ifdef CURL_USE_DLOPEN, no time to look it up right now).

Moving that before the block fixed the curl issues I had, but then some OpenAL related stuff came up etc. which probably explains why sound doesn't work in my MinGW exe. Razz

Here's a little package for you:
http://home.arcor.de/jens.loehr/glsl.zip

It contains the Makefile and Makefile.local I used to build, the libraries I had to add and the exe which resulted. As I said, I wouldn't recommend committing this to SVN; I'm not sure sound or cURL works, but maybe it'll help you get MinGW working. Smile

EDIT:
Okay, turns out I just forgot to add USE_CODEC_VORBIS=1 to Makefile.local, which is why sound didn't work. My ZEQ2glsl.exe should be fine now and will find its way onto the SVN in a few minutes. Smile

EDIT 2:
An updated ZEQ2GLSL.exe which should work just like the regular ZEQ2.exe + GLSL support is now on SVN.

Alex Al Knows

Sunday, January 31, 2010

Awesome news. If I get time later I'll do a compile in MSVC and commit the binary to the svn Smile EDIT: Ah, two posts while I was replying, I'm slow. I'll skip uploading the MSVC binary if you've got the MinGW one sorted Smile

void wrote :
On a sidenote, I've found out what the problem with ZEQ2 is regarding multitexturing. Your cel-shader uses rgbGen identityLighting for the first stage and rgbGen lightingUniform for the second stage. Multitexturing of course only works on identical rgbGens. In other words: ZEQ2's cel-shading never used multitexturing to begin with. There are two ways you guys can fix this: either replace lightingUniform with identityLighting, which will make the shader collapse to multitexturing (a big performance increase for what may be no visual difference; I haven't actually tried it, and I'm wondering how it even compiles considering it's accessing the fourth field of an array of size 3 (vec3_t)), or use program skip.



That makes sense. All the shaders need changing to take advantage of glsl anyway, so it's no issue changing them around, but it is useful to know of such caveats Smile

Semi-related offhand question: what does lightingUniform actually do? I know identityLighting is a constant full lighting value, but I can't see lightingUniform in the Heppler shader manual, and the only Google search results I've turned up of any relevance are on this forum Confused I would presume it's an addition by RiO to work alongside tcGen cel. If that is the case, then changing the rgbGen to identityLighting may not render the cel-shading correctly.

Also, if that is the case, I think we should probably change it around to have an rgbGen cel attribute, have tcGen cel apply to the second set of texcoords, and have it do the work in the primary stage, so the second stage is only used to supply the shading map and meets the requirements to collapse into multitexturing. Does that sound logical or am I barking up the wrong tree? Smile

void

Sunday, January 31, 2010

I'm working on a program cel implementation right now to help you get started. It's not a fancy program but just an example on how to translate ZEQ2's cel-shading to hardware.

lightingUniform and lightingDynamic are additions by RiO and MDave it seems.

I'm not sure what it's supposed to do; if you replace it with identityLighting you'll see if there's any difference. There actually shouldn't be one, but maybe Dave can explain what exactly it's doing.

Anyway it doesn't really matter since you can just use program skip now which should work around the multitexturing problem. As I said, I'm preparing a ZIP which details exactly how to do it too.

I'd also still like to see an MSVC update to the code, so I know it compiles fine. There also seems to be some static linking going on in MinGW while VC seems to use dynamic linking; at least my exe is twice the size of the original, which definitely is not GLSL's fault. :p

MDave ZEQ2-lite Ninja

Sunday, January 31, 2010

Alex wrote :
Semi-related offhand question: what does lightingUniform actually do? I know identityLighting is a constant full lighting value, but I can't see lightingUniform in the Heppler shader manual, and the only Google search results I've turned up of any relevance are on this forum Confused I would presume it's an addition by RiO to work alongside tcGen cel. If that is the case, then changing the rgbGen to identityLighting may not render the cel-shading correctly.

Also, if that is the case, I think we should probably change it around to have an rgbGen cel attribute, have tcGen cel apply to the second set of texcoords, and have it do the work in the primary stage, so the second stage is only used to supply the shading map and meets the requirements to collapse into multitexturing. Does that sound logical or am I barking up the wrong tree? Smile



Yep, that is indeed an addition by RiO. It applies uniform lighting taken from the lightgrid in maps, plus dynamic lights. The problem with this is that it tends to darken the model with colors instead of adding bright additive color, so I came up with a solution:

I created rgbGen lightingDynamic, which is currently used for dynamic lights only (like those from attacks), for extra additive lighting on the characters. It's in effects.shader and is used by the game code, which makes a copy of the character model (like Quake 3's quad damage effect) and applies the GlobalCelLighting shader. So I get the effect we need by using both lightingUniform and lightingDynamic for the character Smile

This is probably something we don't need to do anymore now that GLSL is available, but what should we do as a fallback in case GLSL isn't supported or enabled by the user?

EDIT: Yes, GlobalCelLighting is a misleading name for the shader, it should be called DynamicCelLighting or something instead.

Alex Al Knows

Sunday, January 31, 2010

Definitely; on the GLSL side it doesn't matter. I was thinking in terms of keeping the fixed-function cel-shader working as a fallback and having it make use of multitexturing rather than requiring multiple passes. As it uses an additional shader with lightingDynamic, wouldn't that be rendering the characters three times (base + uniform + dynamic)? Surely that can be done cheaper?

I can't test anything right now as I don't have it set up on my Linux system, which I'm doing some website work on at the moment. When I get on Windows I'll do some fiddling Smile


And sure, I'll commit a MSVC binary later too then.
