
GLSL Support

void

Sunday, January 31, 2010

There's a small error in tr_shader.c right now which prevents shader generation; I'll upload a fix in a few minutes.


As for the cel-shading discussion:
I don't think the redrawing in the game code is necessary; an additional stage with the contents of GlobalCelLighting should have the same effect.

But this stuff can also be solved by adding program skip to the GlobalCelLighting shader so it's not much of a problem.


When it comes to multi-texturing:
I have non-GLSL cel-shading code which uses only a single draw (as in multitexturing); that might be good enough for the fallback (not sure how different it looks visually; I'll take a screenshot when I've sorted out some issues).

MDave ZEQ2-lite Ninja

Sunday, January 31, 2010

Alex wrote: Definitely, on the GLSL side it doesn't matter. I was thinking in terms of keeping the fixed-function cel-shader working as a fallback and having it make use of multitexturing rather than requiring multiple passes. As it uses an additional shader with lightingDynamic, wouldn't that be rendering the characters three times (base + uniform + dynamic)? Surely that can be done cheaper?

I can't test anything right now as I don't have it set up on my Linux system, which I'm doing some website work on at the moment. When I get on Windows I'll do some fiddling. :)


And sure, I'll commit a MSVC binary later too then.



The reason I decided to go with multiple passes instead of multitexturing is so I can toggle the dynamic cel lighting effect on/off if the user wants to increase performance, as well as remove the effect for players that are off in the distance. :) Making it a stage in each and every character's shader would not only have taken up a free shader stage, but made it permanent. I also did tests to compare the FPS difference between doing it the multitexturing way and the multi-pass way, and it seemed to be the same from what I can remember. Maybe it would have been better if I had more than 2 characters on screen to test, though. :)

void

Sunday, January 31, 2010

Here's a little guide on how to write a cel-shader.
Changes to files are marked in bold (okay, they aren't actually bold, just surrounded by the [b] tag, because the formatting doesn't work here :P).

Modify effects.shader like this:

GlobalCelLighting
{
   //nomipmaps
   {
      clampmap effects/shading/celShadeGlobalLighting.png
      [b]program skip[/b]
      blendFunc add
      rgbGen lightingDynamic
      tcGen cel
   }
}



Modify playerGoku.shader like this:


gokuBody
{
   outlines
        {
      map players/Goku/gokuBody.png
      [b]clampMap2 effects/shading/celShadeFlesh.png
      program cel
      vertexProgram glsl/cel_vp.glsl
      fragmentProgram glsl/cel_fp.glsl glsl/texturing.glsl[/b]
      rgbGen identityLighting
   }
   {
      clampMap effects/shading/celShadeFlesh.png
      [b]program skip[/b]
      blendfunc filter
      rgbGen lightingUniform
      tcGen cel
   }
}

gokuLegs
{
   outlines
        {
      map players/Goku/gokuLegs.png
      [b]clampMap2 effects/shading/celShadeFlesh.png
      program cel
      vertexProgram glsl/cel_vp.glsl
      fragmentProgram glsl/cel_fp.glsl glsl/texturing.glsl[/b]
      rgbGen identityLighting
   }
   {
      clampMap effects/shading/celShadeFlesh.png
      [b]program skip[/b]
      blendfunc filter
      rgbGen lightingUniform
      tcGen cel
   }
}

gokuEmblemBody
{
   outlines
        {
      map players/Goku/gokuEmblemBody.png
      [b]clampMap2 effects/shading/celShadeFlesh.png
      program cel
      vertexProgram glsl/cel_vp.glsl
      fragmentProgram glsl/cel_fp.glsl glsl/texturing.glsl[/b]
      rgbGen identityLighting
   }
   {
      clampMap effects/shading/celShadeFlesh.png
      [b]program skip[/b]
      blendfunc filter
      rgbGen lightingUniform
      tcGen cel
   }
}

gokuHead
{
   outlines
        {
      map players/Goku/gokuHead.png
      [b]clampMap2 effects/shading/celShadeFlesh.png
      program cel
      vertexProgram glsl/cel_vp.glsl
      fragmentProgram glsl/cel_fp.glsl glsl/texturing.glsl[/b]
      rgbGen identityLighting
   }
   {
      clampMap effects/shading/celShadeFlesh.png
      [b]program skip[/b]
      blendfunc filter
      rgbGen lightingUniform
      tcGen cel
   }
}

gokuHeadEyes
{
        {
      map players/Goku/gokuHead.png
      rgbGen identityLighting
   }
}



New shader file cel_vp.glsl:

#version 120

uniform float u_IdentityLight;
uniform vec3 u_LightDirection;

vec4 IdentityLighting(vec4 color) {
   color.r = u_IdentityLight;
   color.g = u_IdentityLight;
   color.b = u_IdentityLight;
   color.a = u_IdentityLight;
   
   return color;
}

vec4 CelShade(vec4 texCoord) {
   vec3 lightDir = u_LightDirection;
   lightDir = normalize(lightDir);
   
   float d = dot(gl_Normal, lightDir);
   texCoord.s = 0.5 + d * 0.5;
   texCoord.t = 0.5;
   
   return texCoord;
}

/*
 * main
 * Entry point for generic vertex program
 */
void main(void) {
   /* vertex position */
   gl_Position = ftransform();
   
   /* vertex color */
   gl_FrontColor = IdentityLighting(gl_Color);
   gl_BackColor = gl_FrontColor;
   
   /* texture coordinates */
   gl_TexCoord[0] = gl_MultiTexCoord0;
   gl_TexCoord[2] = CelShade(gl_MultiTexCoord0);
}



New shader file cel_fp.glsl:

#version 120

/* uniform variables */
uniform sampler2D u_Texture0;
uniform sampler2D u_Texture2;

/* texturing.glsl */
#define REPLACE      0
#define MODULATE   1
#define DECAL      2
#define BLEND      3
#define ADD         4
#define COMBINE      5

vec4 applyTexture2D(sampler2D textureUnit, int type, int index, vec4 color);

/*
 * main
 * Entry point for generic fragment program
 */
void main (void) {
   vec4 color = gl_Color;
   
   /* fragment color */
   color = applyTexture2D(u_Texture0, MODULATE, 0, color);
   color = applyTexture2D(u_Texture2, MODULATE, 2, color);
   
   /* finish */   
   gl_FragColor = color;
}
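
For reference, texturing.glsl itself isn't shown here; this is only a rough guess at what its applyTexture2D helper could look like (a sketch to show the idea, assuming the same #defines as above are visible -- not the actual file):

/*
 * Hypothetical sketch only -- not the real texturing.glsl,
 * just the shape of the helper declared above.
 */
vec4 applyTexture2D(sampler2D textureUnit, int type, int index, vec4 color) {
   /* callers pass literal indices, so pick the matching coordinate set */
   vec2 st = (index == 0) ? gl_TexCoord[0].st :
             (index == 1) ? gl_TexCoord[1].st : gl_TexCoord[2].st;
   vec4 texel = texture2D(textureUnit, st);
   
   if (type == REPLACE) {
      color = texel;
   } else if (type == MODULATE) {
      color *= texel;
   } else if (type == ADD) {
      color.rgb += texel.rgb;
      color.a *= texel.a;
   }
   /* DECAL / BLEND / COMBINE left out of this sketch */
   return color;
}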





As you can see this is close to a copy of the cel-shader ZEQ2 uses in software currently, but instead of drawing three times it only draws once (program skip disabling all other passes).


This is a comparison shot: :)

MDave ZEQ2-lite Ninja

Sunday, January 31, 2010

Awesome! Now I see how it all works. :) Great work.

Feel free to commit that; then I can work on the shaders for the other characters. :)

Zeth The Admin

Sunday, January 31, 2010

Correct me if I'm mistaken on this, but it seems by your examples (and mentioning) that all Quake 3 shaders have GPU shaders assumed to be on whether or not the material actually uses one. This seems a bit cluttered and without semantic purpose. It's especially strange if you have GLSL support on your card but would RATHER use a simple material instead for testing.

Additionally, we cannot expect our users to go in and replace each and every vintage material or every map/character already made to include a keyword that effectively just nulls a feature.

If you MUST specify some GLSL program data, why don't you just force this "program skip" keyword by default on all materials, then just have those with actual programs replace it?

If a GPU shader program is specified in a material and GLSL is supported/enabled, then any Quake 3 shader steps implicitly defined SHOULD be FULLY ignored while a shader is at the helm. That is, let the GPU program handle its own shader-specific keywords rather than try to mesh Quake 3's material system in.

If a GPU shader program is not specified or GLSL is not supported/enabled, any Quake 3 material keywords would, of course, take over.

void

Sunday, January 31, 2010

You are indeed mistaken. :) You don't have to add "program skip" to any shader.

I'll get into a bit more detail on the underlying issue:

Quake 3
1. By default Quake 3 will render each stage (as in open bracket inside a shader) as one step. So four stages will redraw the same model with 4 different textures / settings.

2. If it determines multitexturing is possible for a given shader, the first two of any number of stages will collapse into a single stage and the following stages will move up in position (so 4 stages become 3, etc.). This changes the blendFunc, among other things, of the newly joined stage.

To remain compatible with NON-GLSL machines this obviously needs to stay the way it is.

GLSL
Now we have GLSL. GLSL can do anything between 1 and 8 Quake 3 stages (well, the 8 actually depends on the number of texture units) in one iteration, so the model is drawn only once.

So effectively a GLSL shader would never need more than one stage. It can use more, though; that means you redraw your object at least once when you have 2+ stages that don't collapse, which may even happen with only 2 stages when multitexturing doesn't work (as in this case).

So what happens?

3 scenarios:

Single-stage
There won't be ANY difference to how Quake 3 renders this one, as it'll run through either the generic program (which does Quake 3 rendering in hardware) or its own program.

Dual-stage shader, with multitexturing
We have a shader with two stages which collapsed into a single one. There's no problem with that; the program used on it needs to be multitexturing-aware, though (the generic program, for instance, is). Again, nothing out of the ordinary will happen.

Dual- or n-stage shader, which doesn't collapse
This one DOES cause problems in that it wastes performance. Basically Quake 3 will want to draw this n times. You can of course draw it n times with GLSL and again you won't see a difference to how Quake 3 does it, so again, you do remain fully compatible with Quake 3.


The reason for program skip arises from this very scenario. You don't need to draw n times; instead you most likely only need to draw once.

So we have a situation where Quake 3 needs to run through ~8 stages, but GLSL only needs to run through 1 or maybe 2 stages.

So you need "program skip" to make GLSL aware of those stages it can ignore cause it has already done those in a previous stage.


I hope that wasn't too confusing.


It boils down to old/abandoned shaders being redrawn the same number of times as they were before GLSL existed.

NEW or adjusted shaders can however make use of a single draw as long as you make GLSL aware of which stage to draw.



I guess you assumed "program skip" would be a standard program, but it's not. "program skip" means the stage isn't drawn, so automatically assuming "program skip" everywhere would make ZEQ2 completely disappear. It's a special switch saying "don't draw this", not "draw this the regular way" (the latter equals not specifying a program).

In the above example program skip is needed because I specifically want to draw only once in GLSL mode but have to draw three times in non-GLSL mode.


A few examples:


gokuHead
{
   outlines
   {
      map players/Goku/gokuHead.png
      clampMap2 effects/shading/celShadeFlesh.png
      program cel
      vertexProgram glsl/cel_vp.glsl
      fragmentProgram glsl/cel_fp.glsl glsl/texturing.glsl
      rgbGen identityLighting
   }
   {
      clampMap effects/shading/celShadeFlesh.png
      program skip
      blendfunc filter
      rgbGen lightingUniform
      tcGen cel
   }
}



In GLSL mode:
Will draw a single stage.

In Quake 3 mode:
Will draw the two specified stages + the additional one from effects.shader.


gokuHead
{
   outlines
   {
      map players/Goku/gokuHead.png
      rgbGen identityLighting
   }
   {
      clampMap effects/shading/celShadeFlesh.png
      blendfunc filter
      rgbGen lightingUniform
      tcGen cel
   }
}



This is the original version, I think, unless I edited something out by accident. Anyway:

In GLSL mode:
This will draw three times.

In Quake 3 mode:
This will draw three times.


gokuHead
{
   outlines
   {
      map players/Goku/gokuHead.png
      clampMap2 effects/shading/celShadeFlesh.png
      program cel
      vertexProgram glsl/cel_vp.glsl
      fragmentProgram glsl/cel_fp.glsl glsl/texturing.glsl
      rgbGen identityLighting
   }
}



This is a single stage version.

In GLSL:
Everything will work fine.

In Quake 3:
No cel-shading at all.


gokuHead
{
   outlines
   {
      map players/Goku/gokuHead.png
      clampMap2 effects/shading/celShadeFlesh.png
      program cel
      vertexProgram glsl/cel_vp.glsl
      fragmentProgram glsl/cel_fp.glsl glsl/texturing.glsl
      rgbGen identityLighting
   }
   {
      clampMap effects/shading/celShadeFlesh.png
      blendfunc filter
      rgbGen lightingUniform
      tcGen cel
   }
}



This is the same shader without program skip.

In GLSL:
It'll draw the cel-shaded model (three stages in one), but then draw two additional stages reapplying the cel-shading we already have; no use in that.

In Quake 3:
It'll still be the same cel-shading.

Another option, which you might have meant with (part of) your post, would be to use the same program as above but never draw any stage beyond the first at all.



This was something I did consider but it is a really bad idea and Dave's cel-shader is actually a prime example.

The third "hidden" stage (in effects.shader) is a single stage shader, so even if I went on to assume program skip for every stage after the first, it would still get drawn. Not three times in that case but then Quake 3 is still doing 3 redraws while GLSL is suddenly doing 2 redraws.



Another note which might clear this up if there's still any confusion:
program <name> - draw with the specified program (in GLSL)
program skip - don't draw (in GLSL)
no program line - draw the usual way, so old shaders aren't affected

Zeth The Admin

Monday, February 01, 2010

My bad on the misunderstanding. It doesn't help that I wrote the post while nearing that point of drunk-like insomnia.

Shader Setup
Overall, it seems like we're really mixing the traditional Quake 3 shader system's design with that of GLSL. Rather than use some structure/ordering trickery to preserve backward-functioning shaders, wouldn't it just be easier to extend the shader syntax to allow multiple "technique" blocks while still allowing vintage syntax for existing materials?

Consider the following suggestion:

gokuHead
{
   technique GLSL{
      {
         map players/Goku/gokuHead.png
         clampMap2 effects/shading/celShadeFlesh.png
         program cel
         vertexProgram glsl/cel_vp.glsl
         fragmentProgram glsl/cel_fp.glsl glsl/texturing.glsl
      }
   }
   technique Standard{
      outlines
      {
         map players/Goku/gokuHead.png
         rgbGen identityLighting
      }
      {
         clampMap effects/shading/celShadeFlesh.png
         blendfunc filter
         rgbGen lightingUniform
         tcGen cel
      }
   }
}



By subdividing the Quake 3 shader/material into an extra "technique" block set, you can easily define clear distinctions in approaches without extraneous keywords and cautious behaviour with texture stages/passes.

If the first technique is supported, it would be used. If not, the shader would go to the next technique and so forth until a usable approach was found.

I believe this suggestion would definitely be more organized as far as the distinction of material concepts goes.

Folder Structure
Although I should have made mention of this earlier, I'm actually opposed to a generic "GLSL" folder for scripts. If you take a look at the ZEQ2-lite folder scheme in place, you'll see that most of the standard Quake 3 folder structures have been completely nulled out and standardized to form a more meaningful relationship of data.

Therefore (if possible), my suggestion is that the GLSL scripts be allowed to remain sprinkled about in various locations relative to their actual usage. For instance, effects/shading/cel.glsl would be a practical place to put the cel-shader, given that it's the same location the textures for the shader are in. This would also be where a Quake 3 cel.shader file would go, rather than the generic "scripts" folder, if someone could set things up so all folders/subfolders are scanned for the .shader extension as well.

void

Monday, February 01, 2010

Shader setup
That effectively breaks every shader written to date; wasn't that exactly what you wanted to prevent? :p

It's a lot of work too, and I think you may still be overestimating the situation. program skip wouldn't be used on a regular basis; it's just that your cel-shading algorithm is a worst-case situation here, as it's spread over two different materials.

You don't really need to be cautious either. If you're unsure about the second stage collapsing properly, then use [bla]map2-7 (= texture units 2-7) in your shader and set program skip in the second stage; everything will end up as expected.

I'll give you an example of two ways to do a collapsible shader (NND's cel-shading does collapse, so it's a good example, I guess):

"lacking way" as in you don't know if it collapses:

models/players/homer3d
drawOutline
{
 {
  map models/players/homer3d/homer.tga
  map2 textures/effects/cel.tga
  program cel
  vertexProgram cel_vp.glsl
  fragmentProgram cel_fp.glsl
  rgbGen identityLighting
 }
 {
  map textures/effects/cel.tga
  program skip
  rgbGen identityLighting
  tcGen cel
 }
}



This will work and you're on the safe side if you don't know what collapses and what doesn't. In this case your glsl program would use texture unit 2 for the cel texture.

models/players/homer3d
drawOutline
{
 {
  map models/players/homer3d/homer.tga
  program cel
  vertexProgram cel_vp.glsl
  fragmentProgram cel_fp.glsl
  rgbGen identityLighting
 }
 {
  map textures/effects/cel.tga
  rgbGen identityLighting
  blendFunc filter
  tcGen cel
 }
}



This is exactly the same shader; all that was added was the program lines. How does it work? I know it'll collapse, so I don't need to play it fail-safe; instead of texture unit 2 I'll use texture unit 1 for cel.tga (counting starts at 0).
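
Roughly, the matching fragment program for that collapsed case could look something like this (just a simplified sketch, reusing the u_Texture naming from cel_fp.glsl above, sampling directly instead of through applyTexture2D, and assuming the vertex program writes the cel coordinate to gl_TexCoord[1]):

#version 120

uniform sampler2D u_Texture0;   /* homer.tga, unit 0 */
uniform sampler2D u_Texture1;   /* cel.tga, unit 1 (the collapsed stage) */

void main(void) {
   vec4 color = gl_Color;
   
   /* base texture, then the cel band as a filter (modulate) */
   color *= texture2D(u_Texture0, gl_TexCoord[0].st);
   color *= texture2D(u_Texture1, gl_TexCoord[1].st);
   
   gl_FragColor = color;
}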

This case will be true for 95%+ of all shaders you ever write so program skip will be rarely seen.


Additionally keep in mind:
The current cel-shader wouldn't be fixed by your technique suggestion, so you'd still need "program skip" or a similar method (an empty technique GLSL block?), because if it only had a technique Standard block it'd fall back to that one and still get drawn. Like I said before: we DO need a method to prevent something from being drawn altogether. It's not about switching between GLSL and Standard but between draw and nodraw; it's not really a GLSL problem (GLSL will work fine either way, but you aren't pushing it to its limits then).


EDIT:
Something worth adding which I forgot before.
You should also keep in mind that there's not really a good way to separate GLSL from standard rendering shaderwise.

Your stuff is a simple example. But not all shader keywords of Quake 3 map to something that can/should be done in GLSL.

Everything that's related to GL_State (which off the top of my head would be alphaGen_t and blendFuncs) is shared between both GLSL and Standard rendering. So you will probably never be able to achieve the clean result you'd like to have.

MDave ZEQ2-lite Ninja

Monday, February 01, 2010

Alrighty then. I've managed to compile the engine with MSVC no problem, and even got a smaller ZEQ2.exe than the one in the build folder! I've been messing around a little bit, trying to get the dynamic cel shader to work.

I've added in support for u_UniformLight and u_DynamicLight in the engine and in the generic.glsl file. Unfortunately, I can't seem to get the results I want with multitexturing the dynamic cel lighting effect in the player shader files.

So, I've had to make do with making a GLSL shader for the GlobalCelLighting shader. (I will rename this shader in my next commit! :P) Got that working fine, but I have a problem with Goku's Kaioken cel shading effect.

Currently I have 3 different GLSL cel shading scripts to get this all working and looking the same as the software shaders. I was thinking about committing what I've done so far, but if the shader spec changes then I'll hold on for a bit. :)

void

Tuesday, February 02, 2010

Well, it's your call, but I think it'd actually be good to commit it. The system won't mature if it's not seeing some testing, :) and to be honest I'm not that interested in rewriting the Quake 3 shader system right now myself (so the shader spec won't change, at least for a while, unless someone else does it). It's stable, it works, and chances are the other systems I had planned require additional changes at a later point anyway (post-process effects, to name one feature that'll need some scriptability added).

I consider it a challenge to see if I can get it done in a single pass too. :P

Zeth The Admin

Tuesday, February 02, 2010

Shader Setup
Actually, my suggestion was two-fold to prevent breakage of existing shaders. Basically, any existing parsing would function fine and as normal. The introduction of "technique" separation would be completely optional and work as a means of building the shader base itself. If the first technique was viable/supported, it would be parsed as if that's all the shader was comprised of. If not, the next technique would be attempted.

I agree that our cel-shading (stepped shading, to be proper) approach is a bit clunky at the moment in comparison to the ease of implementing it in a shader, but the emphasis was primarily on dynamics and aesthetics in the fixed pipeline.

Even with your collapsing shader examples, I do not believe the final result is as clearly represented as a technique separated one. I'm having some difficulty understanding what you mean about my example not resorting to the proper standard/glsl technique.

Essentially, the idea was that techniques are defining unique and completely separate materials internally as far as Quake 3 goes -- with only one being chosen based on support/priority.

Should be quite easy to implement in that regard since you'd only have to use a specific technique block as if it was right under the shader declaration.

I'd like to point out that almost all of my design change suggestions are based on Ogre 3D, a highly respected/mature abstract rendering layer. The idea of using techniques in a separated fashion is strictly from their rendering engine and material framework (which is incidentally derived from Quake 3's shader structure).

One of my points is that I don't believe that we SHOULD be trying to bind many Quake 3 shader keywords into GLSL-passed attributes (aside from those texture oriented). The entire idea of offering solid GPU shader support is that we can offset almost all operations to the shader program itself. Texture blending modes (excluding scene blends), tiling, scaling, UV wrapping types (clamp,wrap,mirror), lighting/shading handling, etc. are all accomplished easily in a GPU program.

I'm by no means suggesting we fully plagiarize their system (it wouldn't be worth it time-wise on implementation, for one), but at least a gentle nudge in the direction of borrowing from their refined design wouldn't hurt.

Dave wrote:

Alrighty then. I've managed to compile the engine with MSVC no problem, and even got a smaller ZEQ2.exe than the one in the build folder! I've been messing around a little bit, trying to get the dynamic cel shader to work.


UPX the files and you should see sizes reduced even more. I typically try to do this with any dll/exe that we have, but occasionally a non-UPX'd version slips by when someone else commits.

I've added in support for u_UniformLight and u_DynamicLight in the engine and in the generic.glsl file. Unfortunately, I can't seem to get the results I want with multitexturing the dynamic cel lighting effect in the player shader files.


You shouldn't need these attributes at all. To get the effect we need for a solid stepped shader, you really don't even need a texture band anymore either (unless we come across instances where we need more control than just a 2 color stepping sequence).

Basically, just get the averaged light normal/position/intensity/color, feed in your default start/end vector sequence for your colors, and from that you should be able to easily derive whether to have the normal colors display or have the "global dynamic lighting" color be used instead.
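
In GLSL terms, something like this would be the whole fragment side of a two-color step (only a rough sketch; the uniform and varying names are invented here, not values the engine currently feeds in):

#version 120

uniform sampler2D u_Texture0;        /* base character texture */
uniform vec3  u_AvgLightDirection;   /* invented: averaged light direction */
uniform vec3  u_AvgLightColor;       /* invented: averaged light color */
uniform float u_StepThreshold;       /* where the light/shade boundary sits */

varying vec3 v_Normal;               /* normal passed in by the vertex program */

void main(void) {
   vec4 base = texture2D(u_Texture0, gl_TexCoord[0].st);
   
   /* two-color stepping: full color on the lit side of the threshold,
      the darkened "global dynamic lighting" color on the other side */
   float d     = dot(normalize(v_Normal), normalize(u_AvgLightDirection));
   float lit   = step(u_StepThreshold, d);
   vec3  shade = base.rgb * u_AvgLightColor * 0.5;
   
   gl_FragColor = vec4(mix(shade, base.rgb, lit), base.a);
}

The engine would only have to average the contributing lights once per entity and upload those few uniforms.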

Additionally, this will provide a much more accurate result than the previous method of blending on top of the existing texture (especially if we go through with actually exporting Goku differently based on specific materials for each area -- much better results from our old shader tests).

So, I've had to make do with making a GLSL shader for the GlobalCelLighting shader. (I will rename this shader in my next commit!) Got that working fine, but I have a problem with Goku's Kaioken cel shading effect.


We should also be able to make use of a simple stepped rim shader (or rather just a sequence in our stepped+rim shader) to get crisp white inner edge lines on things like the Kaioken render we had years ago.

I think we already have the component shaders for this in our currently unconverted CG shader library that would get the job done fine.
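
The rim term itself is only a couple of lines in GLSL; a rough sketch (view-space normal and view direction assumed to come from the vertex program, names invented):

#version 120

uniform sampler2D u_Texture0;    /* base texture */
uniform float u_RimThreshold;    /* invented: how wide the edge band is */

varying vec3 v_Normal;           /* view-space normal from the vertex program */
varying vec3 v_ViewDir;          /* view-space direction towards the eye */

void main(void) {
   vec4 base = texture2D(u_Texture0, gl_TexCoord[0].st);
   
   /* stepped rim: a crisp white band where the surface turns away from the eye */
   float rim = 1.0 - max(dot(normalize(v_Normal), normalize(v_ViewDir)), 0.0);
   vec3 color = mix(base.rgb, vec3(1.0), step(u_RimThreshold, rim));
   
   gl_FragColor = vec4(color, base.a);
}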

Currently I have 3 different GLSL cel shading scripts to get this all working and looking the same as the software shaders. I was thinking about committing what I've done so far, but if the shader spec changes then I'll hold on for a bit.


I'd be interested in seeing what you have anyhow, as I've not had any luck yet testing GLSL in the engine.

Postscript: Why do we have a separate exe for GLSL support rather than just putting it in the main executable?

void

Tuesday, February 02, 2010

Shader setup

One of my points is that I don't believe that we SHOULD be trying to bind many Quake 3 shader keywords into GLSL-passed attributes (aside from those texture oriented). The entire idea of offering solid GPU shader support is that we can offset almost all operations to the shader program itself.


I disagree completely here. :P
This contradicts the whole point of uniform data.

You're effectively suggesting creating a new program every time where normally only one variable would be set to a different value (not to mention that it's much easier for the average user to change rgbGen identity to rgbGen identityLighting than to delve deep into GLSL, create a new program and set it up properly, including the right function calls).

That's neither space efficient nor good practice.


As for your suggestion:
GLSL can't do cull/sort; it can do deformVertexes, though.

Look at this made-up shader:

lacking in identifiable function
{
 sort portal
 deformVertexes wave 93589353
 {
  map texture.tga
  program lacking in identifiable function
  vertexProgram deformVertexes.glsl tcGen.glsl vp.glsl
  fragmentProgram fp.glsl
  blendFunc filter
  alphaFunc GT0
  blendFunc filter
  depthFunc equal
  tcGen cel
  depthWrite
 }
}



This uses two parsers (the current ones), one doing sort and deform and the other doing the individual stage-based keywords.

The programs would obviously read tcGen and deformVertexes and act accordingly.

When doing GLSL this translates to the following:
sort is a Quake 3 job
deformVertexes is a vertex program job
tcGen is a vertex program job
map is a fragment program job
blendFunc is a Quake 3 job
alphaFunc is a Quake 3 job
depthFunc is a Quake 3 job
depthWrite is a Quake 3 job

Your suggestion leaves us with something along the lines of this:

lacking in identifiable function
{
 sort portal
 technique GLSL {
  program lacking in identifiable function
  vertexProgram deformWave93589353.glsl tcgen_cel.glsl vp.glsl
  fragmentProgram  fp.glsl
  map texture.tga
  blendFunc filter
  alphaFunc GT0
  blendFunc filter
  depthFunc equal
  depthWrite
 }
 technique Standard {
  deformVertexes ...
  {
   map texture.tga
   blendFunc filter
   alphaFunc GT0
   blendFunc filter
   depthFunc equal
   tcGen cel
   depthWrite
  }
 }
}



or this:

lacking in identifiable function
{
 technique GLSL {
  sort portal
  {
   program lacking in identifiable function
   vertexProgram deformWave93589353.glsl tcgen_cel.glsl vp.glsl
   fragmentProgram  fp.glsl
   map texture.tga
   blendFunc filter
   alphaFunc GT0
   blendFunc filter
   depthFunc equal
   depthWrite
  }
 }
 technique Standard {
  sort portal
  deformVertexes ...
  {
   map texture.tga
   blendFunc filter
   alphaFunc GT0
   blendFunc filter
   depthFunc equal
   tcGen cel
   depthWrite
  }
 }
}



This is just messy: two almost identical parsers, yet there are some minor changes and a lot of duplicate lines in a shader for the technique blocks.

Additionally, you'd need an all-new third parser that is able to distinguish which block is being used and can still determine whether it uses deformVertexes etc. on the first level (old-school shader) or the third level (technique Standard).

Additionally, since the deformVertexes parameters aren't passed to the program, you would now have to write a completely new program just because you changed the 3 at the end to a 4.
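
To make that concrete: if the engine keeps parsing deformVertexes and hands the parameters over (a sketch with invented uniform names, not something the engine exposes today), one vertex program covers every wave; bake the numbers into the .glsl file instead and every parameter tweak means a new program:

#version 120

/* invented uniforms for the deformVertexes wave parameters the engine would
   pass in (base, amplitude, phase, frequency) plus the shader time */
uniform float u_DeformBase;
uniform float u_DeformAmplitude;
uniform float u_DeformPhase;
uniform float u_DeformFrequency;
uniform float u_Time;

void main(void) {
   /* simplified sine deform along the normal (the real deformVertexes wave
      also spreads the phase across the surface with its div parameter) */
   float wave = u_DeformBase + u_DeformAmplitude *
                sin(6.2831853 * (u_DeformPhase + u_Time * u_DeformFrequency));
   vec4 position = gl_Vertex + vec4(gl_Normal * wave, 0.0);
   
   gl_Position = gl_ModelViewProjectionMatrix * position;
   gl_FrontColor = gl_Color;
   gl_TexCoord[0] = gl_MultiTexCoord0;
}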


Don't get me wrong here: I'm not saying a technique block is generally a bad idea; it just doesn't work well with how Quake 3 shaders are set up, in my honest opinion. Doing all the work to get technique blocks done properly while still remaining backward compatible just seems like a lot of effort after which you haven't really accomplished anything: the game still looks the same, and I'm convinced it'll actually be harder for users when they have to modify shaders (as in adding an all-new program) plus GLSL to change a little effect than to just change a single shader line.

I'd consider it much more useful to get something like FBOs, maybe cubemaps and VBOs, in before something like an overhauled shader system is tackled.

MDave ZEQ2-lite Ninja

Tuesday, February 02, 2010

Just committed what I've done so far. I tried to get all the cel shaders to work and function just like the old ones; all the characters bar Freeza have been updated. His shiny armor beats me. :P

I guess you can think of it as a temporary fix until we get real cel shading in there.

void

Friday, February 19, 2010

Just a quick question:
What would be our stance on raising system requirements (and how far)?

I didn't have much time the last two weeks, but I did a lot of planning and research (even some implementation). The current rendering architecture isn't really suitable for some of the features (not even necessarily the most advanced ones), and it's not realistic to maintain x rendering paths of a size which is somewhat like
if (ogl30) ogl30renderer();
else if (...) ...renderer();
else if (ogl11) ogl11renderer();

We can discuss this in greater detail on IRC tomorrow.

Just making a clean cut there, breaking everything once and canning the non-programmable pipeline in the process, would allow us to greatly simplify and streamline the renderer and would solve a lot of the problems with multitexture-related collapsing in shaders etc.; then a lot of the "legacy uniforms" could be removed, and so on.

Of course that'd mean old content needs to be updated once, but it's probably better than at least doubling the amount of work required all the time (imagine adding a new attack and writing a regular shader, a GLSL shader + program, and probably even a second, third and fourth program depending on the available shader model, framebuffer and NPOT texture support, as opposed to writing one shader + program).

NV4x hardware costs no more than 40 bucks and that's SM3.0. Working under the assumption that this type of hardware exists makes life a lot easier (and NV3x could still work too for the majority of shaders).

MDave ZEQ2-lite Ninja

Sunday, February 21, 2010

Raising system requirements, hmm.

I was hoping to port ZEQ2-Lite to the Pandora. :P Though that port would need low-poly, small-size maps to run at any playable speed anyway. It supports OpenGL ES 2.0, and there is a wrapper for OpenGL 2.0. Can we ... do 2.0 instead of 3.0? :P I have no idea what the difference is. If it's too much work to go down to 2.0 (or there are much greater benefits with 3.0), then stick with 3.0? :)

void

Sunday, February 21, 2010

The main reason I mentioned OpenGL 3.0 is that that's where framebuffer objects were officially added. Sorry if it caused any confusion; it's just pretty difficult to say exactly where hardware would need to be when it comes to OpenGL. :P

OpenGL ES 2.0 has support for framebuffer objects too, and of course they have been available as an extension in most major drivers since OpenGL 2.0 or 2.1 anyway, so it's more like an OpenGL 2.0 renderer using extensions which became core in 3.0 but had been around for a long time already.

It's not really like raising system requirements either; most of the changes will drastically improve performance and scale much better the more polys you throw at them (vertex buffers and vertex skinning should be between 500% and 1000%).

Actually the changes would even make it easier to port to OpenGL ES 2.0, I think; if I remember correctly it doesn't allow immediate mode rendering anymore, which has also been deprecated in OpenGL 3.0, so the "modern-only renderer" I proposed would take care of that.

Alex Al Knows

Sunday, February 21, 2010

Personally I wouldn't be against raising the requirements to need a GPU with shader support, though I would guess it's Brad's decision on that.

These days all graphics cards manufactured have some kind of shader support (even the Intel GMAs do), so as long as we keep it tweakable, so we can scale the quality of expensive effects and turn them on and off, I don't see it causing much of a problem.

Very few games of this complexity maintain fixed function pipelines these days and while it would be good to technically support pre-programmable-pipeline cards, the game's getting to a point where computers that old aren't going to run the game very well anyway.

The standard requirements these days seem to be SM3.0 recommended, SM2.0 as a minimum. We could go further and have a basic set of SM1.x fallback shaders for people with pre NV30 cards.

And yeah, OpenGL|ES 2.0 doesn't have a fixed function pipeline; it requires shaders, so as Jens mentioned, it probably would make things easier to port over, especially if the reimplementation of the renderer could be designed with a later OpenGL|ES port in mind. Correct me if I'm wrong, but wouldn't porting to OpenGL|ES require substantial alterations to the current renderer anyway?

void

Sunday, February 21, 2010

Yup, as said in my last paragraph, you don't have immediate mode (also known as glBegin...glEnd) in OpenGL ES. As such, VAs or, better yet, VBOs need to be used for all drawing. This is also the main part I want to change.

Basically this is a list of extensions and what I'd do with them (minus some special textures, which depend on the level of framebuffer support used; but since I marked that as unsure, those would end up pretty much the same):
- = required
+ = optional
/ = not sure

- GL_ARB_multitexture
- GL_ARB_texture_cube_map (both have been around since Quake 3 release)
- GL_ARB_vertex_buffer_object (has been around forever)
- GL_ARB_shader_objects
- GL_ARB_vertex_shader
- GL_ARB_fragment_shader
- GL_ARB_shading_language_100 (last 4 make up GLSL)
/ GL_EXT_framebuffer_object (relatively new)
+ GL_ARB_texture_non_power_of_two
+ GL_EXT_draw_range_elements

MDave ZEQ2-lite Ninja

Monday, February 22, 2010

Alex wrote:
And yeah, OpenGL|ES 2.0 doesn't have a fixed function pipeline; it requires shaders, so as Jens mentioned, it probably would make things easier to port over, especially if the reimplementation of the renderer could be designed with a later OpenGL|ES port in mind. Correct me if I'm wrong, but wouldn't porting to OpenGL|ES require substantial alterations to the current renderer anyway?



Not if we use this! http://code.google.com/p/gl-wes-v2/

:) Thanks for explaining it all to me, both of you.

Zeth The Admin

Monday, February 22, 2010

Personally I wouldn't be against raising the requirements to need a GPU with shader support, though I would guess it's Brad's decision on that.


I'm not opposed to that idea at all, but I've been under the impression that Dave really wanted to preserve vintage card support as much as possible to broaden the range of compatible systems.

Even so, I think we'll only be excluding a VERY small audience (relating to range of minimum compatibility) by requiring at least shader model 2 support (Radeon 9500 / Nvidia 5 series). From the past workings Tom and I did before with the Zios shader library, I'd have to say that a shader model 1.* base is really not worth investing in as you'd be dabbling with assembly shaders.

All in all, in perfect honesty, I do NOT see the merit in making too many significant engine improvements concerned with extending Quake 3 to support features we've been working with in other libraries as far back as four years ago. The overall structure and design of Quake 3, while mostly functional, is far from ideal and often time-consuming to work with and/or debug compared to alternative solutions.

Re-writing/re-structuring the Quake 3 engine to support various new aspects to the existing rendering pipeline seems a lot more counter-productive when compared to proper implementation of such features on their own. Basically, I'm hesitant about performing surgery to disembowel an aging design just to have glittering portions of gold tied together amongst a bulk of garbage.

Backwards compatible GLSL shader support extending on top of the existing Quake 3 shader/material system is one thing, but if we're really getting to the point of making a major time commitment with heavier-duty alterations, I think we should first get together and actually discuss the overall purpose, goals, benefits, alternatives, and implications all-around -- for comparative sake if nothing else.

MDave ZEQ2-lite Ninja

Friday, February 26, 2010

There might not be much merit in upgrading an old engine for our sake, but others outside of ZEQ2-Lite that want an easy-to-use and (to an extent) configurable game may want it. :)

Alex Al Knows

Sunday, February 28, 2010

Personally, I'm interested in how far we can actually take the engine. If this were a "serious" project then I'd agree that replacing too much is counter-productive, but as a hobby project done for the enjoyment of development and experience, as this is, I can't see a problem with making major alterations. Though I'm much more interested in working on the engine and tech features than ZEQ2-Lite itself, and I like working with Quake 3 despite its age, so I could be biased here. :)

Given how scriptable it already is, it would be interesting to turn it into effectively the MUGEN of 3D multiplayer games. Extend the scriptability more to cover gametypes, further attack/projectile types, more control over stats/character config, etc.; carry on updating the renderer; implement the new animation system; add in some level scripting; create some basic visual tools for configuring it all. Quake engines are quite popular open-source engines and I think something like that would get a fair bit of interest, especially if it were blinged around the various Quake dev and game dev communities. It could get more actual developer interest like that too, even devs from other Quake 3 projects contributing their code for features.

Most Quake 3 projects are generally the same base type and similar concepts, just with different characters/classes, weapons/attacks, view/camera, player stats, objectives and movement physics, and maybe a couple of other things. Why not give a simple scriptable interface to these things and avoid the need to mess with the messy guts of the code? Even now it's capable of quite a lot in that respect, but as it's currently just the engine for a Dragon Ball Z mod, that's all people are noticing it for, if you get what I mean. Making major updates to the engine would be very productive if that were the goal, in my opinion.

Anyway, getting back closer to the original topic, it seems we're all pretty much agreed that we shouldn't worry about breaking the fixed function pipeline and requiring shader support on GPUs. That'll definitely make things simpler overall.

JadenKorn Totally Explicit

Thursday, March 11, 2010

Just partially on-topic news: the OpenGL 4.0 specification was released today.

Its newest features:
* OpenGL Shading Language revision 4.00
* Two new shader stages that enable the GPU to offload geometry tessellation from the CPU.
* Per-sample fragment shaders and programmable fragment shader input positions for increased rendering quality and anti-aliasing flexibility.
* Drawing of data generated by OpenGL or external APIs such as OpenCL, without CPU intervention.
* Shader subroutines for significantly increased programming flexibility.
* Separation of texture state and texture data through the addition of a new object type called sampler objects.
* 64-bit double precision floating point shader operations and inputs/outputs for increased rendering accuracy and quality.
* Performance improvements, such as instanced geometry shaders, instanced arrays and a new timer query.

Also, OpenGL 3.3 was released at the same time, backporting some of 4.0's features to older GPUs that can support them.

JadenKorn Totally Explicit

Thursday, March 11, 2010

From the looks of it, it's a "direct confrontation" with Microsoft's latest DirectX graphics library line, Direct3D 11.

Now, it's up to NVidia and ATI to release their newer drivers with the newest OpenGL specification support.

void

Tuesday, March 16, 2010

Just a question:
There are high-poly versions of Goku and co. for Zios, right? Any possibility of using NVIDIA Melody (or whatever else there is) on one of these (or any other model suitable for this) to get a low-poly version + normal map?

If someone could get me something like that in either md3 or zMesh that'd be awesome.

I've been playing around with normal mapping the last few days, but you couldn't even imagine what the result looks like when I draw a texture and run a GIMP filter over it to get the normal map. :P
