Thursday, December 22, 2005

Under the hood

Listening To : 'Sensitive Artist' by King Missile.

To paraphrase an email from a work colleague..

Pixel samples have no effect on when and where the shader gets executed. The shading rate and other dicing settings determine how the geometry is tessellated into micropolygons. Then the shader is executed at each vertex on the micropolygon. The pixel samples then just determine how this pre-shaded micropolygonal geometry is sampled for each pixel.
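As a rough illustration (a hypothetical RIB fragment, not from the email) - the two controls live in different places and can be tuned independently:

     # ShadingRate controls dicing, and hence where the shader runs:
     ShadingRate 1        # roughly one micropolygon per pixel
     # PixelSamples only controls how the pre-shaded micropolygons
     # are sampled - raising it adds no extra shading work:
     PixelSamples 4 4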

Not only do pixel samples have nothing to do with shading, they also have nothing to do with raytracing. Shooting rays into a cone with an angle of 0 will always hit the same spot regardless of the number of samples - based on experimentation, the Pixar docs appear to be incorrect about this. Also, it isn't necessary to specify the cone angle at all when using the simpler environment() and trace() routines; I believe RenderMan will compute an optimal cone angle automatically from the shading rate and surface derivatives. In my tests, though, geometric aliasing still occurred with trace(), and only building a cone angle from the change in incident angle across a micropolygon, used with gather(), removed it.
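A rough sketch of that gather() workaround, with the cone angle derived from the change in the incident direction across a micropolygon (all variable names here are mine, not from the shader in question):

     vector In  = normalize(I);
     /* how far does the incident direction swing across the grid? */
     vector In2 = normalize(I + Du(I)*du + Dv(I)*dv);
     float coneangle = acos( clamp( In . In2, -1, 1 ) );
     uniform float samples = 4;
     color hitc = 0, sum = 0;
     gather( "illuminance", P, In, coneangle, samples,
             "surface:Ci", hitc )
     {
         sum += hitc;
     }
     sum /= samples;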

Thursday, December 15, 2005

Watch those param orderings!

Listening To : Shooby Taylor, the human horn.

refract() and faceforward() have their I and N parameters specified in different orders! How many times has that caught me out?!

     vector refract( vector I, N; float eta )
     vector faceforward( vector N, I )


Also, clamp and mix/smoothstep have different orderings.

     type clamp( type a, min, max )
vs
     type mix( type x, y; float alpha )
     type smoothstep( type min, max, value )
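For reference, here are the orderings the right way round in use (variable names are mine):

     vector In = normalize(I);
     normal Nf = faceforward( normalize(N), In );  /* N first, I second */
     vector R  = refract( In, Nf, eta );           /* I first, N second */

     float c = clamp( x, minval, maxval );         /* value first */
     color m = mix( a, b, alpha );                 /* weight last */
     float s = smoothstep( minval, maxval, x );    /* value last */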

Saturday, November 05, 2005

MEL message attributes

Listening To : Bill Bailey , Jimmy Cliff

MEL today.
Just found out about message attributes and connecting things together.
connectAttr and listConnections. Neato.

Also, thought for today is that in games, the artists always seem obsessed with the characters' animation, run cycles etc. and don't really care about lights or cameras too much. In post, lights and cameras are so much more important (duh). Yeah, well, not a fabulous insight - but I was just thinking about the differences earlier today.

DNeg free beer and pizza tonight, and my contract's just been renewed. Yippee!

Wednesday, November 02, 2005

the grids

Listening to: Nothing, I've lost my iPod charger cable

So to wrap up my previous post about the mysterious grid lines..

It turns out that by multiplying the derivatives used in the texture map lookup (in order to blur the map), I had magnified the usually unnoticeable discontinuities in the derivatives around shading grid boundaries.

I was doing this:-

duds * 25 = ...
dvds * 25 = 0.03 // for argument's sake - it's there or thereabouts
dudt * 25 = ... // similar order of result for the others
dvdt * 25 = ...

but sometimes the derivatives were very different on either side of a shading grid boundary, so the tiny aliasing that normally goes unnoticed was being amplified and appeared as a grid.

The solution was to add a small number to the derivatives to get roughly the 0.03 I wanted (so not multiplying by 25, but adding something much smaller instead). That way the aliasing stays small, but I still get the blurring I wanted.
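A sketch of the fix in shader terms - the map name and the 0.002 constant are illustrative, not the values from the actual shot:

     float widen = 0.002;  /* small additive widening, not a x25 multiply */
     float sw = abs(Du(s)*du) + abs(Dv(s)*dv) + widen;
     float tw = abs(Du(t)*du) + abs(Dv(t)*dv) + widen;
     /* four-corner texture() call using the widened footprint */
     color c = texture( "mymap.tex",
                        s - sw, t - tw,  s + sw, t - tw,
                        s - sw, t + tw,  s + sw, t + tw );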

So that's that.
Hurray!

Tuesday, November 01, 2005

brickmaps, pointclouds.

Listening to : Fiery Furnaces 'EP' and The Crimea's new CD 'Tragedy Rocks'

So it looks like the best thing to do for my st bake is to have "interpolate" set to 1 in bake3d to get micropolygon midpoints (from four sample points) instead of absolute points and also to avoid duplicate points being written out at the boundaries. Then, in texture3d, use "lerp" set to 1 so that it looks up from the two nearest mip levels and interpolates between them.
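In code, that combination looks something like this (file and channel names are made up):

     /* bake pass */
     bake3d( "st.ptc", "", P, N,
             "interpolate", 1,   /* midpoints; no duplicated boundary points */
             "s_avg", s, "t_avg", t );

     /* beauty pass, after brickmake converts st.ptc to st.bkm */
     float s_l = 0, t_l = 0;
     texture3d( "st.bkm", P, N,
                "lerp", 1,       /* blend the two nearest brickmap levels */
                "s_avg", s_l, "t_avg", t_l );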

Curiously, I found that baking at 1200x1200 with shading rate 1 gave a quarter of the points (2 million instead of 8 million) and an 80MB file instead of 345MB, but had much, much better results and a more consistent point cloud than my previous bake at PAL res with the same shading rate, which had been patchy..
Maybe this was due to adding "interpolate" 1 as mentioned above? If so: way-hey! Interpolate is cooool!

(Note to the newbies - generally to increase point cloud density you lower the shading rate and/or increase the render res, which is what made this result so odd).

In the beauty render, I was seeing dark grid lines aligned with what looked like the shading grid. At first I guessed I had somewhere made the classic mistake of doing texture lookups or area functions inside a branch conditioned on a varying variable instead of a uniform one. This is such a common mistake when you're used to writing real-time code with efficiency in mind. However, after fixing these in the shader, the grid is still there, so it's probably something else..
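For the record, the classic mistake looks something like this sketch - texture() needs derivatives, which go wrong when only some points on the grid take the branch (map name is made up):

     float mask = noise(P);          /* varying */
     color c = 0;
     if (mask > 0.5)                 /* varying condition - bad! */
         c = texture( "mymap.tex", s, t );

     /* safer: do the lookup unconditionally, then select */
     color tex = texture( "mymap.tex", s, t );
     c = tex * step( 0.5, mask );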

Other problems I have at the moment are:
- the baked average ray direction looks grossly incorrect. Not really a problem, but it would be nice to get to the bottom of the cause.
- I'm getting some squares flickering.
- I still don't know why my brickmap is okay down to level 7, but is mostly black beyond that. It means I have to set maxdepth to 7 before I render the beauty pass... kind of a pain.

Thursday, October 27, 2005

Cheap global illumination

Listening to: Gustavo Santaolalla "Motorcycle Diaries", and MYLO "Destroy Rock N Roll"

So I had this problem: how do you do efficient diffuse light spill (sometimes referred to as global illumination) when the light source is an area light playing an image sequence that updates every frame?

Baking illuminance or using 'indirectdiffuse' in a lightshader might seem like the way to go, except they are both really slow. What I need is a solution that will be quick to render when it comes to the beauty pass. It doesn't matter so much if the setup is slow (in the same ballpark as occlusion is slow anyway).

So I settled on a shader that shoots rays from geometry in the scene towards the image sequence light source, and averages the texture coordinates that get hit, then bakes this to a pointcloud (along with a scalar ratio of hits to misses and an average direction vector). This way, so long as the scene is static, we can bake the spatial relationship of geometry wrt the light source and then use these baked texture coordinates as a texture lookup that's cheap to do. We can change our image sequence each frame, and the texture coordinate lookup is still valid.
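Very roughly, the setup pass looks something like this sketch (the names, sample count and trace subset are all mine - it assumes the light card has been tagged into a trace group so the rays only consider it):

     uniform float nsamples = 64;
     float hits = 0, hs = 0, ht = 0;
     float s_sum = 0, t_sum = 0;
     vector dir = 0, d_sum = 0;
     gather( "illuminance", P, normalize(N), PI/2, nsamples,
             "subset", "lightcard",
             "surface:s", hs, "surface:t", ht,
             "ray:direction", dir )
     {
         s_sum += hs;  t_sum += ht;  d_sum += dir;  hits += 1;
     }
     if (hits > 0) {
         float s_avg = s_sum / hits;
         float t_avg = t_sum / hits;
         float ratio = hits / nsamples;
         vector avgdir = normalize(d_sum);
         bake3d( "spill.ptc", "", P, N,
                 "s_avg", s_avg, "t_avg", t_avg,
                 "ratio", ratio, "avgdir", avgdir );
     }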

They are a bit like shadow maps though: you need one per light source, and if the scene is non-static you need to re-bake for every frame in which geometry has moved. But for a static scene they're a winner, because they are view independent.

So why didn't they work at first when I used the bake3d and texture3d calls?

Well, if you have a cloud of points, then you need to know what space they are stored in and what space is used in the retrieval.

So the answer, and the tip for today, is that bake3d and texture3d will by default transform the P and N you give them into world space before doing the write or the lookup. That means you don't usually have to care what space your P and N are in. (Not counting Pref etc., of course - but my scene doesn't have any deformations, so no problem.)
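If you do want a different space, both calls accept an explicit "coordsystem" parameter - the value and names here are just an example:

     float s_l = 0, t_l = 0;
     texture3d( "spill.bkm", P, N,
                "coordsystem", "object",
                "s_avg", s_l, "t_avg", t_l );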

Wednesday, October 26, 2005

First post!

Listening to : CD of Mighty Boosh radio show.

Wow, first post! I guess I wanted to set this blog up ages ago. In fact, back in 2003 when I first started using renderman in film vfx for my living, I wanted to keep a log of all the things I had to learn to re-train from being a 3D Programmer in the games industry. That opportunity to chronicle the initial learning curve involved has probably passed, rats, but hey-ho I can still post things as I learn them....