Monday, December 3, 2012

Midpoint - Final Project

Here's a link to my midpoint presentation:

https://docs.google.com/open?id=0B21htwSnbMf0M0hvb2RBMUdPaUk

Wednesday, November 21, 2012

Final Project Pitch




I propose to develop a WebGL-based real-time volumetric renderer and, if time permits, use it to render a physically based smoke simulation. Volume rendering is a well-known technique for rendering data that is typically stored in the form of a grid. Some problems are naturally suited to, and can be solved very efficiently by, partitioning space into an imaginary uniform grid structure; common examples are rendering clouds, smoke, and fire.

Images based on this method have traditionally been generated offline, i.e. not as soon as the data is available; more precisely, they cannot be generated at 30 fps or greater. However, the volumetric rendering technique is “embarrassingly parallel”, meaning it can be parallelized with little effort. The first step in the process is to generate a ray for each pixel of the image we want to produce. We can then gather densities along each ray independently, one fragment shader invocation per ray. The process can be further optimized by ray marching only within the region that actually contains voxels.
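To make the idea concrete, here is a minimal ray-marching fragment shader sketch. The varying names, the fixed step count, and the procedural placeholder density are all my own assumptions, not the final renderer:

precision mediump float;

varying vec3 v_rayOrigin;   // ray start in volume-local [0,1]^3 space (from the vertex shader)
varying vec3 v_rayDir;      // unnormalized ray direction

// Placeholder density: a soft sphere at the volume center. The real
// renderer would fetch this from the voxel texture instead.
float sampleDensity(vec3 p) {
    return clamp(1.0 - 2.0 * length(p - vec3(0.5)), 0.0, 1.0);
}

void main() {
    vec3 dir = normalize(v_rayDir);
    const int STEPS = 64;                     // GLSL ES loops need constant bounds
    float stepSize = 1.732 / float(STEPS);    // diagonal of the unit cube
    float transmittance = 1.0;
    vec3 color = vec3(0.0);
    for (int i = 0; i < STEPS; i++) {
        vec3 p = v_rayOrigin + dir * (float(i) * stepSize);
        float absorb = exp(-sampleDensity(p) * stepSize * 8.0);  // Beer-Lambert absorption
        color += transmittance * (1.0 - absorb) * vec3(0.9);     // grey-white smoke
        transmittance *= absorb;
    }
    gl_FragColor = vec4(color, 1.0 - transmittance);
}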

I will use 3D textures for storing the voxel data if they are supported in the latest WebGL specification; if not, I will have to simulate 3D textures with 2D textures.
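If 3D textures are not available, the usual workaround is to pack the volume's Z slices into one 2D texture atlas and interpolate between adjacent slices by hand. A fragment-shader sketch of the lookup, in which the volume size and atlas layout are assumptions of mine:

// Assumed layout: a 64^3 volume stored as an 8x8 grid of 64x64 slices
// inside a single 512x512 texture.
uniform sampler2D u_volumeAtlas;
const float SLICES = 64.0;
const float SLICES_PER_ROW = 8.0;

// Top-left corner (in uv space) of a given slice in the atlas.
vec2 sliceOffset(float slice) {
    float row = floor(slice / SLICES_PER_ROW);
    float col = slice - row * SLICES_PER_ROW;
    return vec2(col, row) / SLICES_PER_ROW;
}

// Emulates a 3D texture lookup at p with two 2D fetches and a manual lerp in Z.
vec4 sampleVolume(vec3 p) {
    float z = p.z * (SLICES - 1.0);
    float lo = floor(z);
    float hi = min(lo + 1.0, SLICES - 1.0);
    vec2 uv = p.xy / SLICES_PER_ROW;          // position inside one slice
    vec4 a = texture2D(u_volumeAtlas, sliceOffset(lo) + uv);
    vec4 b = texture2D(u_volumeAtlas, sliceOffset(hi) + uv);
    return mix(a, b, fract(z));               // interpolate between the two slices
}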

The main challenge of the project is doing all of the above in WebGL. As a graphics person, I have worked mainly in C++ and C++-like languages, so using JavaScript for graphics is going to be a steep learning curve for me.

Once I am done with the rendering, I will start working on a physically based smoke simulation. I propose to use the semi-Lagrangian method, meaning I will use parts of both the particle-based (Lagrangian) and grid-based (Eulerian) approaches. The Eulerian approach lets us parallelize the smoke simulation conveniently on the GPU: every grid element stores both scalars (density, pressure, temperature) and vectors (velocity), which get modified during the simulation. The densities computed in each simulation step are then handed to the volumetric renderer for visualization.
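The heart of the Eulerian update is semi-Lagrangian advection: for each grid cell, trace backwards along the velocity field and sample the advected quantity at the point the material came from. A fragment-shader sketch of the 2D case, with texture and uniform names of my own choosing (the real simulation is 3D, as in the GPU Gems 3 chapter):

precision mediump float;

uniform sampler2D u_velocity;   // velocity field, one texel per grid cell
uniform sampler2D u_quantity;   // quantity being advected (density, temperature, ...)
uniform float u_dt;             // time step
uniform vec2 u_texelSize;       // 1.0 / grid resolution

varying vec2 v_uv;

void main() {
    // Backtrace: where did the material now in this cell come from?
    vec2 vel = texture2D(u_velocity, v_uv).xy;
    vec2 prev = v_uv - u_dt * vel * u_texelSize;
    // Hardware bilinear filtering interpolates at the backtraced point for free,
    // which is one reason this step maps so well to the GPU.
    gl_FragColor = texture2D(u_quantity, prev);
}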

I will be referring to Chapter 30 of the book GPU Gems 3, “Real-Time Simulation and Rendering of 3D Fluids”, for my implementation.

Friday, November 9, 2012

Starting WebGL


Wave simulation in WebGL using Sin and Cos functions
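The wave effect amounts to displacing a flat grid of vertices in the vertex shader with a sum of sine and cosine terms; a minimal sketch (attribute and uniform names are mine, not the demo's actual code):

attribute vec3 a_position;   // vertex of a flat grid in the xz-plane
uniform mat4 u_mvpMatrix;    // combined model-view-projection matrix
uniform float u_time;        // elapsed time in seconds

void main() {
    vec3 p = a_position;
    // A sine and a cosine at different frequencies and speeds give a
    // simple rolling height field.
    p.y = 0.1 * sin(4.0 * p.x + u_time)
        + 0.05 * cos(6.0 * p.z + 1.3 * u_time);
    gl_Position = u_mvpMatrix * vec4(p, 1.0);
}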


Simplex Wave


Rotating Ellipsoid

With these mini projects, I am getting started with WebGL.

Image Processing


Video of all the different image processing done using GLSL

Implemented the following (in the same order as shown in the video; the box blur is sketched after the list):
  • Box Blur
  • Image Negative
  • Gaussian Blur
  • Color to Grayscale
  • Edge Detection
  • Toon Shading
  • Brightness Enhancement
  • Old TV look
  • Pixelated look
  • Image Rotation
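All of these filters are single-pass fragment shaders over the source image. As one example, here is a 3x3 box blur sketch; the uniform names are my own, not the ones in my code:

precision mediump float;

uniform sampler2D u_image;
uniform vec2 u_texelSize;   // 1.0 / image resolution
varying vec2 v_uv;

void main() {
    // 3x3 box blur: average the pixel with its eight neighbors.
    vec4 sum = vec4(0.0);
    for (int dy = -1; dy <= 1; dy++) {
        for (int dx = -1; dx <= 1; dx++) {
            sum += texture2D(u_image, v_uv + vec2(float(dx), float(dy)) * u_texelSize);
        }
    }
    gl_FragColor = sum / 9.0;
}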

Image Swirling
I also added this feature, which I particularly like.
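The swirl rotates each pixel's sampling coordinate around the image center by an angle that falls off with distance from the center; a sketch, with parameter names and falloff of my own choosing:

precision mediump float;

uniform sampler2D u_image;
uniform float u_strength;   // swirl amount, e.g. 3.0
uniform float u_radius;     // falloff radius in uv units, e.g. 0.5
varying vec2 v_uv;

void main() {
    vec2 offset = v_uv - vec2(0.5);
    float dist = length(offset);
    // Rotation angle decreases linearly to zero at the swirl radius.
    float angle = u_strength * max(u_radius - dist, 0.0) / u_radius;
    float s = sin(angle), c = cos(angle);
    vec2 rotated = vec2(c * offset.x - s * offset.y,
                        s * offset.x + c * offset.y);
    gl_FragColor = texture2D(u_image, rotated + vec2(0.5));
}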

Tuesday, November 6, 2012

CUDA Rasterizer


CUDA Rasterizer - featuring Bovine

Implemented some core parts of the raster pipeline:
  • Vertex Shader
  • Primitive Assembly
  • Geometry Shader
  • Rasterization stage (the covering-test math is sketched after these lists)
  • Fragment Shader
Other features:
  • Mouse interaction
  • Back Face Culling
  • Multiple Lights
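The rasterization stage boils down to a barycentric covering test per fragment. The actual implementation is a CUDA kernel; this GLSL-style sketch just shows the math, with names of my own choosing:

// Barycentric weights of point p with respect to triangle (a, b, c),
// after projection to screen space. A fragment at p is covered when all
// three weights are non-negative, and the same weights interpolate the
// vertex attributes (color, normal, depth) for the fragment shader.
vec3 barycentric(vec2 p, vec2 a, vec2 b, vec2 c) {
    vec2 v0 = b - a, v1 = c - a, v2 = p - a;
    float d = v0.x * v1.y - v1.x * v0.y;          // twice the signed triangle area
    float w1 = (v2.x * v1.y - v1.x * v2.y) / d;
    float w2 = (v0.x * v2.y - v2.x * v0.y) / d;
    return vec3(1.0 - w1 - w2, w1, w2);
}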
During the initial phases of the project, I used a nifty trick shown to me by Shehzan: using Maya to get an approximate preview of the rasterizer's expected output, which was useful for testing.


Sunday, September 30, 2012

GPU Ray Tracer Submission

Video:


The top, bottom, front, and back walls are all reflective.
traceDepth = 10
GPU: NVIDIA GeForce GTX 560M

Link to repository:
https://github.com/aparajithsairam/Project1-Raytracer

Functions Implemented:
  • cudaRaytraceCore() handles kernel launches and memory management
  • raycastFromCameraKernel() casts the initial rays from the camera
  • raytraceRay() is the core raytracing CUDA kernel
  • boxIntersectionTest() takes a box and a ray and performs an intersection test
  • getRandomPointOnSphere() takes a sphere and returns a random point on its surface with an even probability distribution (the sampling math is sketched after this list)
  • getRandomDirectionInSphere() generates a random direction in a sphere with uniform probability
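The sampling in getRandomPointOnSphere() comes down to a standard trick: pick the z-coordinate uniformly in [-1, 1] and the azimuth uniformly in [0, 2*pi), which gives an even distribution over the surface because a sphere's area is uniform in z (Archimedes' hat-box theorem). Sketched here as a GLSL-style function for readability (the real code is a CUDA device function; the two random inputs are assumed supplied by the caller):

// Uniform point on a sphere from two uniform random numbers u1, u2 in [0,1).
vec3 randomPointOnSphere(vec3 center, float radius, float u1, float u2) {
    float z = 2.0 * u1 - 1.0;              // cosine of the polar angle, uniform in [-1,1]
    float phi = 6.2831853 * u2;            // azimuth
    float r = sqrt(max(1.0 - z * z, 0.0)); // radius of the circle at height z
    return center + radius * vec3(r * cos(phi), r * sin(phi), z);
}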

Features Implemented:
  • Raycasting from a camera into a scene through a pixel grid
  • Phong lighting for one point light source (a sketch follows this list)
  • Diffuse Lambertian surfaces
  • Raytraced shadows
  • Cube intersection testing
  • Sphere surface point sampling
  • Reflection
  • Interactive camera
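A sketch of the Phong model for a single point light, written GLSL-style (the real version lives in the CUDA kernel; the names and the ambient constant are mine). Note the exponent: with specExp = 0, the specular term is 1 wherever the reflection term is positive, which lights the whole facing hemisphere and is why the rim shows up in the last snapshot below:

// Phong lighting for a single point light: ambient + Lambert diffuse
// + a specular highlight concentrated around the mirror direction.
vec3 phong(vec3 p, vec3 n, vec3 eye, vec3 lightPos, vec3 albedo, float specExp) {
    vec3 l = normalize(lightPos - p);      // direction to the light
    vec3 v = normalize(eye - p);           // direction to the viewer
    vec3 r = reflect(-l, n);               // mirror reflection of the light
    float diff = max(dot(n, l), 0.0);
    float spec = pow(max(dot(r, v), 0.0), specExp);
    return albedo * (0.1 + diff) + vec3(spec);   // 0.1 = assumed ambient term
}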

Snapshots of the GPU ray tracer over time



Lambert Shading & Phong Shading with Exponent = 0 (Rim Visible)