Joseph Bennett


Scalable Global Illumination using Sparse Radiance Probes


For my Master’s thesis, I wanted to find a global illumination algorithm that would scale across a range of hardware. I implemented the method from the paper “Real-time global illumination by precomputed local reconstruction from sparse radiance probes”.

More details and a link to the thesis itself are available here. I also plan to write a blog post soon covering the basic implementation details of SRP. Stay tuned!

Interdimensional Llama (2016)


Interdimensional Llama is a perspective-switching puzzle adventure game where you play as Lou, an adorable llama. I worked on this project as part of a university course with Pheobe Zeller, Casey Garnock-Jones, Cameron Hopkinson, Jiheng Wang, and Thomas Roughton.

During the course, I mainly contributed as a programmer working in C# within Unity. My main contributions were seamless level transitioning and grid-based navigation for movement in both 2D and 3D.
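The navigation code itself isn’t shown here, but the core idea of grid-based navigation can be sketched as a breadth-first search over walkable cells; because cells are plain coordinate tuples, the same search serves both the 2D and 3D perspectives. This is an illustrative Python sketch, not the game’s actual C#/Unity code, and all names are made up:

```python
from collections import deque

def grid_path(walkable, start, goal):
    """Shortest axis-aligned path between grid cells via BFS.
    Cells are (x, y) or (x, y, z) tuples, so the same routine
    handles both the 2D and 3D views. Returns None if no path."""
    dims = len(start)
    came_from = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            # Walk the parent links back to the start.
            path = []
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        # Expand the 2*dims axis-aligned neighbours.
        for axis in range(dims):
            for step in (-1, 1):
                nxt = tuple(c + step if i == axis else c
                            for i, c in enumerate(cell))
                if nxt in walkable and nxt not in came_from:
                    came_from[nxt] = cell
                    queue.append(nxt)
    return None
```

BFS keeps the sketch short; a real game would likely use A* with a distance heuristic, but the grid-as-graph structure is the same.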

The game was also a finalist in the 2016 KiwiGameStarter funding competition. For the competition, I helped write the business proposal for our game and pitched the game to the judging panel.

Atmospheric Llama (2017)


Atmospheric Llama was a fourth-year computer graphics project I worked on with Thomas Roughton. Its aim was to implement some graphics techniques within our own engine to improve the visual quality of Interdimensional Llama.

I implemented a physically based real-time volumetric fog algorithm based on a presentation by Sebastien Hillaire from Frostbite.
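As a rough illustration of the idea behind that approach (not the project’s actual code), here is a Python sketch that marches slices along the view ray, analytically integrating the in-scattered light within each slice and attenuating it by Beer–Lambert transmittance. All constants and names are made up for the example:

```python
import math

def integrate_fog(slices, sigma_s, sigma_a, light, depth_step):
    """March `slices` segments of length `depth_step` through a
    homogeneous fog with scattering/absorption coefficients
    sigma_s/sigma_a, accumulating light scattered toward the camera.
    Returns (in-scattered light, remaining transmittance)."""
    sigma_t = sigma_s + sigma_a  # extinction
    transmittance = 1.0
    scattered = 0.0
    for _ in range(slices):
        # Light scattered toward the camera within this slice,
        # integrated analytically so thick slices stay stable
        # (the energy-conserving form, rather than scatter * step).
        slice_scatter = (sigma_s * light
                         * (1.0 - math.exp(-sigma_t * depth_step)) / sigma_t)
        scattered += transmittance * slice_scatter
        # Attenuate everything behind this slice (Beer-Lambert).
        transmittance *= math.exp(-sigma_t * depth_step)
    return scattered, transmittance
```

In a real renderer this integration runs per froxel (view-frustum voxel) with spatially varying coefficients and shadowed light, but the per-slice accumulation is the same.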

The video below shows my specific technical contributions to the project:

And this next video is a joint demo Thomas and I created to showcase the techniques we implemented:

Real-time Human Vision Rendering (2017)

Near-eye displays like virtual reality headsets have recently become popular, but they do not produce images accurate to human vision. My honours project in 2017 aimed to render depth of field with the characteristics of human vision.

To do this, I rendered images by tracing rays through Navarro’s schematic eye model. Unfortunately, that is too expensive for real-time use. Instead, I created a real-time approximation by using machine learning to fit a function that captures how the offline method blurs an image. The original implementation of this method by Tang and Xiao [29] used a neural network as the machine learning technique, but I showed that genetic programming can also be used and is just as effective.
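The fitted eye-model blur function itself isn’t reproduced here, but the textbook thin-lens circle-of-confusion formula gives a feel for the blur-versus-depth relationship such a function has to capture: zero blur at the focal plane, growing as objects move away from it. A simple Python sketch (standard optics, not the project’s eye-specific model; parameter values below are illustrative):

```python
def coc_diameter(z, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter for a point at
    distance z (all quantities in metres). Blur is zero at the
    focal plane and grows with distance from it."""
    return (aperture * focal_len * abs(z - focus_dist)
            / (z * (focus_dist - focal_len)))
```

A depth-of-field post-process would map this diameter to a per-pixel blur radius; the project’s contribution was replacing this simple model with a function fitted to the ray-traced eye-model results.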

Here’s the final report if you’re interested.