Monday, March 05, 2007

stl in gnu / gcc

Listening to : Werefrogs, "Forest of Doves"


So I wanted to use the STL pair and couldn't remember how to include it ( #include < pair > doesn't compile, because there is no such header ). The standard header for std::pair is < utility >; in gcc 3.3.4 it also works via the internal header #include < bits/stl_pair.h >, but that's an implementation detail and not portable.


Then use it with the std namespace as per usual.
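
For example, a minimal snippet using the standard header:

#include <utility>  // the standard home of std::pair

int main()
{
    std::pair<int, double> p = std::make_pair(1, 2.5);
    return p.first;  // 1
}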


The other one that's slightly different is hash_map, which is part of the GNU extensions.
Include it like this:


#include <ext/hash_map>


and use it with the __gnu_cxx namespace, like this:


__gnu_cxx::hash_map<int, double> X;
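
A slightly fuller sketch (non-string keys hash out of the box; string keys need a hash function, not shown here):

#include <ext/hash_map>

int main()
{
    __gnu_cxx::hash_map<int, double> X;
    X[42] = 3.14;                          // insert
    return X.find(42) != X.end() ? 0 : 1;  // lookup
}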


voila!


(Oh, and by the way, these headers are found automatically by gcc — no need for extra -I flags in your makefile or anything.)

Thursday, June 15, 2006

Rib attributes in a Slim Template

Listening to : Blue Jam

Here's the syntax for adding a RIB attribute to your Slim template, so that it appears as a built-in control.



collection void RIB {
    state closed
    ribattribute string "shade:transmissionhitmode" {
        label "shadeTransmissionHitMode"
        default "primitive"
        subtype selector
        range { "primitive" "primitive"
                "shader"    "shader"
        }
    }
}

Wednesday, May 31, 2006

Rotation Matrix from Axis Vectors

Listening to : The Decemberists (various)

This is an oldie but a goodie, and I haven't posted it anywhere else, so here it is before I forget.

If you want to make a rotation matrix ( 4x4 ) so that you can orient objects the same way as, say, a normal and a tangent on a surface, then as long as you have two of the three destination "axes" you can build the matrix directly by copying the values of those vectors into it. (If you only have two, you get the third by taking their cross product.)

I'm using row-major notation here, so anyone who needs this column-major can just transpose it.
I'm also assuming that the space of the sought rotation matrix is the same space the vectors are in (e.g. world space).
Lastly, we're assuming that the normal, tangent and binormal are all normalized.

So, you have a normal and a tangent in (say) world space, and you want a rotation matrix so you can apply the same orientation to other objects. This is the same as making a new coordinate system and transforming into it, with no rotation at all in that new coordinate space.

It helps to think of the normal, the tangent, and a third vector called the 'binormal' (which is the cross product of the other two) as the axes of the space into which we're transforming.
The assignment can be in any order really, so let's pick:
Normal = +Y axis
Tangent = +X axis
Binormal = +Z axis

So we have (for example)
+X vector is ( 0.670014, 0.541446, 0.507856 )
+Y vector is ( -0.609193, 0.791979, -0.0406534 )
+Z vector is ( -0.424223, -0.282144, 0.860482 )

So our 4x4 matrix will be
| Xx Xy Xz 0 |
| Yx Yy Yz 0 |
| Zx Zy Zz 0 |
| 0 0 0 1 |

which in our example gives us the following 4x4 rotation matrix!
| 0.670014 0.541446 0.507856 0 |
| -0.609193 0.791979 -0.0406534 0 |
| -0.424223 -0.282144 0.860482 0 |
| 0 0 0 1 |

So all that any rotation matrix consists of is the three axis vectors that define it as a coordinate system.
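
Here's a minimal C++ sketch of the idea (the struct and function names are just for illustration; it assumes normalized, orthogonal inputs and the row-major, right-handed convention above):

struct Vec3 { float x, y, z; };

Vec3 cross(const Vec3 &a, const Vec3 &b)
{
    Vec3 c = { a.y * b.z - a.z * b.y,
               a.z * b.x - a.x * b.z,
               a.x * b.y - a.y * b.x };
    return c;
}

// Rows: tangent -> +X, normal -> +Y, binormal -> +Z.
void rotationFromAxes(const Vec3 &tangent, const Vec3 &normal,
                      float m[4][4])
{
    const Vec3 binormal = cross(tangent, normal); // +Z = +X x +Y

    const Vec3 rows[3] = { tangent, normal, binormal };
    for (int r = 0; r < 3; ++r) {
        m[r][0] = rows[r].x;
        m[r][1] = rows[r].y;
        m[r][2] = rows[r].z;
        m[r][3] = 0.0f;
    }
    m[3][0] = 0.0f; m[3][1] = 0.0f; m[3][2] = 0.0f; m[3][3] = 1.0f;
}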

To rotate a plane by -45 degrees about X in MEL, you'd do this:
// -45 degrees in X
xform -m 1.0 0.0 0.0 0.0 0 0.707107 -0.707107 0.0 0 0.707107 0.707107 0.0 0.0 0.0 0.0 1.0 pPlane1;

where 0.707107 is, of course, 1/sqrt(2).

Tuesday, April 18, 2006

Accented Characters on Linux

Alt Gr + ; then e = é
Alt Gr + ; then a = á
Alt Gr + ' then e = ê
Alt Gr + ' then a = â
Alt Gr + # then e = è

etc.

Friday, February 24, 2006

Faking depth in a cyc arena


Listening To : Prince, Raspberry Beret

A problem that came up during Flyboys was that we had a tiled arena floor 10 km square, but beyond that we had a matte painting 'cyc', cylindrically projected onto bowl-shaped geometry with vertical sides.
When rendering a depth pass as a secondary output, if a camera following the action got close to the bowl, the colour information in the matte told you there were things far away in the distance, but the depth (as in length(I)) didn't: the depth values decrease rapidly as you approach the sides of the cyc.

Not what we want at all.

So, if it's safe to assume the ground continues in a near-planar fashion in all directions out to the horizon, we can use a very simple, quite elegant cheat to work out what the depth would be if the cyc weren't there.


Briefly (see diagram also):

float S = 1;
float depth = 65535;

// Work on world-space copies (leave the P, E and I globals alone).
point  Ew = transform("world", E);
vector Iw = vtransform("world", I);

if ( ycomp(Iw) < 0 ) // I.e. the point being shaded is closer to the ground than the camera
{
    // S is the scale that extends I until it hits the ground plane:
    //   P' = E + I*S, and ycomp(P') we know is always zero,
    //   so ycomp(E) + ( ycomp(I) * S ) = 0 and we can solve for S
    //   (similar triangles).
    S = -ycomp(Ew) / ycomp(Iw);

    vector Iext = vtransform("world", "current", Iw * S);

    // Voila, our new fake (clamped) depth:
    depth = min( length(Iext), 65535 );
}
else
{
    // E.g. the plane is upside down, etc.
    // Just set depth to the max ( 65535, the largest value our EXRs can store ).
    depth = 65535;
}

Friday, January 20, 2006

occlusion, subset and the minus sign

Listening To : 'The Now Show', Radio4.

I didn't know that you could negate trace subsets the same way as you can light categories, but it totally works. You need to assign the group to the geometry via RIB attributes.
e.g.
Attribute "grouping" "string membership" ["noRflOccl"]

Cool.

Monday, January 16, 2006

Current Project Path via MEL

Listening To : "Its Art, Dad!", The Clientele

Here's a handy tip. How do you get the current project path from MEL?


string $ws = `workspace -q -rd`;


Voila!

Thursday, December 22, 2005

Under the hood

Listening To : 'Sensitive Artist' by King Missile.

To paraphrase an email from a work colleague...

Pixel samples have no effect on when and where the shader gets executed. The shading rate and other dicing settings determine how the geometry is tessellated into micropolygons. Then the shader is executed at each vertex on the micropolygon. The pixel samples then just determine how this pre-shaded micropolygonal geometry is sampled for each pixel.

Not only do pixel samples have nothing to do with shading, they also have nothing to do with raytracing. Shooting rays into a cone with an angle of 0 will always hit the same spot regardless of samples; the Pixar docs appear to be incorrect about this, based on experimentation. Also, it's not necessary to specify the cone angle at all when using the simpler environment() and trace() routines — I believe renderman will compute an optimal cone angle automatically based on shading rate and surface derivatives. Although in tests, geometric aliasing still occurred with trace, and only computing a cone angle based on the change in incident angle across a micropolygon, with gather, seemed to remove it.
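
A sketch of that last trick (the sample count and accumulation are my own; the idea is just to derive the cone angle from how much the incident direction changes across the micropolygon):

vector In = normalize(I);
// Approximate angle subtended by the change in I across the
// micropolygon (valid for small angles, since In is normalized).
float coneangle = max( length(Du(In) * du), length(Dv(In) * dv) );

color acc = 0, hitc = 0;
float n = 0;
gather("illuminance", P, reflect(In, normalize(N)), coneangle, 16,
       "surface:Ci", hitc)
{
    acc += hitc;
    n += 1;
}
if (n > 0)
    acc /= n;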

Thursday, December 15, 2005

Watch those param orderings!

Listening To : Shooby Taylor, the human horn.

Refract and faceforward have their I and N parameters specified in different orders! How many times has that caught me out?!

     vector refract( vector I, N; float eta )
     vector faceforward( vector N, I )
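
A quick usage reminder (assuming eta is defined somewhere above):

     vector In = normalize(I);
     normal Nf = faceforward( normalize(N), In );  /* N comes first */
     vector R  = refract( In, normalize(N), eta ); /* I comes first */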


Also, clamp and mix/smoothstep have different orderings.

     type clamp( type a, min, max )
vs
     type mix( type x, y; float alpha )
     type smoothstep( type min, max, value )

Saturday, November 05, 2005

MEL message attributes

Listening To : Bill Bailey , Jimmy Cliff

MEL today.
Just found out about message attributes and connecting things together.
connectAttr and listConnections. Neato.
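
A minimal sketch (the node and attribute names are made up):

// Add a message attribute to one node, connect another node's
// message plug to it, then query the connection.
addAttr -longName "myLink" -attributeType "message" node1;
connectAttr node2.message node1.myLink;
listConnections node1.myLink; // returns: node2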

Also, thought for today: in games, the artists always seem obsessed with the characters' animation, run cycles etc., and don't really care about lights or cameras too much. In post, lights and cameras are so much more important (duh). Yeah, well, not a fabulous insight — but I was just thinking about the differences earlier today.

DNeg free beer and pizza tonight, and my contract's just been renewed. Yippee!

Wednesday, November 02, 2005

the grids

Listening to: Nothing, I've lost my iPod charger cable

So to wrap up my previous post about the mysterious grid lines..

It turns out that by multiplying the derivatives used in the texture map lookup (in order to blur the map), I had magnified the usually unnoticeable discontinuities in the derivatives around shading-grid boundaries.

I was doing this:

duds * 25 = ...
dvds * 25 = 0.03 // for argument's sake; it's there or thereabouts
dudt * 25 = ...  // similar order of result for the others
dvdt * 25 = ...

but sometimes the derivatives were very different on either side of a shading-grid boundary, so the tiny aliasing that normally goes unnoticed was amplified and appeared as a grid.

The solution was to add a number to the derivatives to get roughly the 0.03 I wanted (so not multiplying by 25 — adding something much smaller instead). That way the aliasing stays small, but I still get the blurring I wanted.
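
In shader terms the before/after is roughly this, using texture()'s built-in filter controls rather than hand-scaled derivatives (mapname and the values are assumptions):

/* Before: "width" multiplies the filter size, so it also multiplies
   any derivative discontinuity at grid boundaries. */
color before = texture(mapname, s, t, "width", 25);

/* After: "blur" widens the filter additively, so the footprint is
   roughly constant and the discontinuities stay tiny. */
color after = texture(mapname, s, t, "blur", 0.03);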

So that's that.
Hurray!

Tuesday, November 01, 2005

brickmaps, pointclouds.

Listening to : Fiery Furnaces 'EP' and The Crimea's new CD 'Tragedy Rocks'

So it looks like the best thing to do for my st bake is to set "interpolate" to 1 in bake3d, to get micropolygon midpoints (computed from the four corner sample points) instead of the raw grid points, and also to avoid duplicate points being written out at the boundaries. Then, in texture3d, set "lerp" to 1 so that it looks up the two nearest mip levels and interpolates between them.
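
In shader terms, that's roughly this (the file and channel names are made up):

/* Bake pass: "interpolate" 1 writes micropolygon midpoints and
   avoids duplicates at grid boundaries. */
bake3d("st_bake.ptc", "", P, N, "interpolate", 1, "s", s, "t", t);

/* Lookup pass (after making a brickmap): "lerp" 1 blends the two
   nearest levels. */
float bs = 0, bt = 0;
texture3d("st_bake.bkm", P, N, "lerp", 1, "s", bs, "t", bt);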

Curiously, I found that baking at 1200x1200 with shading rate 1 gave a quarter of the points (2 million instead of 8 million) and was 80Mb instead of 345Mb, but had much, much better results and a more consistent point cloud than the earlier bake at PAL res with the same shading rate, which had been patchy.
Maybe this was due to adding "interpolate" 1 as mentioned above? If so: way-hey! Interpolate is cooool!

(Note to the newbies - generally to increase point cloud density you lower the shading rate and/or increase the render res, which is what made this result so odd).

In the beauty render, I was seeing dark grid lines aligned with what looked like the shading grid. At first I guessed I'd made the classic mistake somewhere of doing texture lookups or area functions inside a branch conditioned on a varying variable instead of a uniform one — such a common mistake when you're used to writing real-time code with efficiency in mind. However, after fixing these in the shader, it's still there, so it's probably something else.

Other problems I have at the moment are:
- the baked average ray direction looks grossly incorrect. Not really a problem, but it would be nice to get to the bottom of the cause.
- I'm getting some squares flickering.
- I still don't know why my brickmap is okay down to level 7, but is mostly black beyond that. It means I have to set maxdepth to 7 before I render the beauty pass... kind of a pain.

Thursday, October 27, 2005

Cheap global illumination

Listening to: Gustavo Santaolalla "Motorcycle Diaries", and MYLO "Destroy Rock N Roll"

So I had this problem: how do you do efficient diffuse light spill (sometimes referred to as global illumination) when the light source is an area light playing an image sequence that updates every frame?

Baking illuminance or using indirectdiffuse() in a light shader might seem like the way to go, except they are both really slow. What I need is a solution that renders quickly when it comes to the beauty pass; it doesn't matter so much if the setup is slow (in the same ballpark as occlusion is slow, anyway).

So I settled on a shader that shoots rays from geometry in the scene towards the image-sequence light source and averages the texture coordinates that get hit, then bakes this to a point cloud (along with a scalar ratio of hits to misses and an average direction vector). This way, so long as the scene is static, we can bake the spatial relationship of the geometry with respect to the light source, and then use those baked texture coordinates for a texture lookup that's cheap to do. We can change the image sequence each frame, and the texture-coordinate lookup is still valid.
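
A sketch of the bake pass (everything here is an assumption: the output-variable names on the image card's surface shader, the file and channel names, the cone angle and sample count):

/* Average the image-card st that each point "sees". Assumes the
   card's surface shader exposes its texture coords as
   "output varying float out_s, out_t" (hypothetical names). */
float hits = 0, misses = 0;
float avg_s = 0, avg_t = 0;
vector avg_dir = 0;

float hs = 0, ht = 0;
vector raydir = 0;
gather("illuminance", P, normalize(N), PI / 2, 256,
       "surface:out_s", hs, "surface:out_t", ht,
       "ray:direction", raydir)
{
    avg_s += hs;
    avg_t += ht;
    avg_dir += raydir;
    hits += 1;
}
else
{
    misses += 1;
}

if (hits > 0) {
    avg_s /= hits;
    avg_t /= hits;
    avg_dir = normalize(avg_dir);
}
float ratio = hits / max(hits + misses, 1);
bake3d("spill.ptc", "", P, N, "avg_s", avg_s, "avg_t", avg_t,
       "ratio", ratio, "avg_dir", avg_dir);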

They are a bit like shadow maps, though: you need one per light source, and if the scene is non-static then you need to re-bake for every frame in which geometry has moved. But for a static scene they're a winner, because they're view independent.

So why didn't they work at first when I used the bake3d and texture3d calls?

Well, if you have a cloud of points, then you need to know what space they are stored in and what space is used in the retrieval.

So the answer, and the tip for today, is that bake3d and texture3d will by default transform the P and N you give them into world space before doing the lookup. That means you don't usually have to care what space the P and N you pass in are in. (Not counting Pref etc., of course. But my scene doesn't have any deformations, so no problem.)
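
And for completeness, the beauty-pass lookup is then roughly this (again, all the names are assumptions):

/* Read back the baked coords and sample the current frame of the
   image sequence. */
float bs = 0, bt = 0, ratio = 0;
if (texture3d("spill.bkm", P, N,
              "avg_s", bs, "avg_t", bt, "ratio", ratio) != 0)
{
    color spill = ratio * color texture(framemap, bs, bt);
    Ci += spill * Cs;
}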

Wednesday, October 26, 2005

First post!

Listening to : CD of Mighty Boosh radio show.

Wow, first post! I guess I wanted to set this blog up ages ago. In fact, back in 2003, when I first started using renderman in film vfx for my living, I wanted to keep a log of all the things I had to learn while re-training from being a 3D programmer in the games industry. The opportunity to chronicle that initial learning curve has probably passed, rats, but hey-ho, I can still post things as I learn them...