As someone who enjoys reading and watching “making-of” stories, I thought it’d be interesting for developers and players of Polaris to see how the game evolved over time.
I’m an aficionado of block-falling puzzle games such as Tetris and Lumines, so it seemed only natural for me to make a game in that vein. At the same time, I wanted to make a game that was different from the vast number of puzzle games already out there. You may be surprised to hear that there are over 50 variants of Tetris alone! So coming up with something new turned out to be quite challenging.
Polaris was born from the idea of transforming the game board into polar coordinates. The name and celestial theme of the game naturally followed from there. To convince myself that this would work on the tiny iPhone screen I made this concept design in Photoshop:
Polaris is a block-falling puzzle game for the iPhone and iPod touch that I worked on for about 3 months as a side project. The goal of the game is to form complete rings of colored tiles, which you then deactivate to clear them from the board.
About 95% of the code is written in pure C++ (for portability and familiarity reasons), with some hooks into the Cocoa framework using Objective-C++ for system-specific functions (application start-up, text drawing, and sound support). Polaris was a very fun project to work on, not only because writing games is fun, but also because of the technical and artistic challenges.
The first- and second-generation iPhones are only capable of OpenGL ES 1.1, so effects must be realized without any use of shaders. Instead one must rely on clever use of multiple textures, texture transformations, and various blending modes.
The game’s logo was created in Inkscape—an open-source vector graphics editor. Other in-game graphics were created in Photoshop. For the nebulous background I loosely followed these two tutorials: Space Lightning Effects and Creating Clouds. For artistic inspiration I like to browse Smashing Magazine, which I find to be an excellent aggregator of top-notch graphics design.
Polaris is available in the App Store for the iPhone and iPod touch! Check out the game’s website for more information.
One of the assignments that I worked on during my undergraduate studies at the University of Toronto was a raytracer, which I wrote for a 4th-year computer graphics course. My raytracer took first place at the Fall 2006 Wooden Monkey competition. The prize: fame, glory, and a wooden monkey.
Some features of the raytracer are:
- Constructive solid geometry (CSG) and arbitrary polygonal objects (loaded from Wavefront OBJ files)
- Phong-interpolated normals for smooth shading
- Texture mapping with bilinear interpolation, applicable to any channel of the Phong model as well as to luminosity
- Reflections and refractions
- Diffuse (blurry) reflections and refractions (diffusivity can be set)
- Point lights with shadows
- Area lights: Polygonal geometry can serve as an area light. Texture can be applied to the luminosity channel of the geometry; this way the mesh can cast light with varying color and intensity over its surface.
The images below show how these features can be applied to a scene in order to enhance its visual appearance:
For my M.A.Sc. research I’ve been working with half a terabyte of video data. Here are some screenshots of the machine that’s been doing most of the heavy lifting.
I’ve had this setup for about a year now. It’s running Ubuntu 8.04 and 64-bit Matlab on a quad-core 2.33 GHz Xeon with 4 × 8 GB DDR2-667 modules. My algorithms are data-bound, hence the fairly low CPU clock rate.
Only one question remains: Will it run Crysis?
Automatic face recognition has a large number of interesting applications. For example, Picasa and iPhoto use face recognition to help you tag people in your photo collections. Face recognition is also used for authentication (though not always successfully), and it is a crucial component in realizing SkyNet or an Orwellian future (whichever you prefer). However, despite steady progress, the accuracy of face recognition by humans remains unmatched by machines.
A study conducted by MIT researchers Pawan Sinha, Benjamin Balas, Yuri Ostrovsky, and Richard Russell looks at the ways people recognize each other in order to help guide computer vision research towards building better face recognizers. Their study “Face Recognition by Humans: Nineteen Results All Computer Vision Researchers Should Know About” identifies several characteristics that humans use to recognize faces.
One of the results is that humans can recognize faces even in very low-resolution images. Eyebrows are among the most important features for recognition, and our ability to tolerate degradations increases with familiarity. Read the full study here. The researchers’ website is found here.
Well, I decided to get myself a blog. I haven’t really decided what to write about, but I imagine it might have to do with software development, computer vision research, and anything else that may be on my mind.