This project, “An Amusement Park”, uses OpenGL to draw a virtual amusement park. The park includes a Ferris wheel, a Columbus ship ride and a roller coaster, all drawn using basic OpenGL primitives.
The primary goal of this project was to apply the different OpenGL techniques I had learnt in a project ready for demonstration. When I thought about what scene to build, an amusement park came to mind, as it involves different kinds of movement associated with different kinds of objects.
### Giant wheel
The giant wheel, or Ferris wheel, was the first object I attempted to recreate. A combination of a torus, discs and thickened lines formed the skeleton of the wheel, while each trolley was formed by placing multiple cubes and rotating each about its centre. Every trolley then had to stay upright with respect to the wheel, which I achieved by counter-rotating each trolley by the angle it makes at the centre of the torus: as the wheel rotates, each trolley is rotated back by the same angle so that all of them remain upright. With a slight distortion applied using trigonometric functions, I was also able to create a swinging effect for each trolley, which can be observed in the video.
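A minimal sketch of this idea in legacy OpenGL (names like `NUM_CARS`, `WHEEL_RADIUS`, `CAR_SIZE` and `wheelAngle` are my assumptions, not the project's actual identifiers); the counter-rotation keeps each trolley upright, and the sine term adds the swing:

```cpp
// Draw the trolleys around the rim of the wheel.
void drawTrolleys(float wheelAngle, float timeSec) {
    for (int i = 0; i < NUM_CARS; ++i) {
        float base  = 360.0f * i / NUM_CARS;            // fixed slot on the rim
        float swing = 5.0f * sinf(2.0f * timeSec + i);  // slight trigonometric distortion
        glPushMatrix();
        glRotatef(wheelAngle + base, 0, 0, 1);          // carry the trolley around the wheel
        glTranslatef(WHEEL_RADIUS, 0, 0);               // move out to the rim
        glRotatef(-(wheelAngle + base), 0, 0, 1);       // counter-rotate: trolley stays upright
        glRotatef(swing, 0, 0, 1);                      // swinging effect
        glutSolidCube(CAR_SIZE);                        // stand-in for the multi-cube trolley
        glPopMatrix();
    }
}
```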
This was the output:
### Ship Ride
The ship ride, also known as the Columbus ship, was the most straightforward object to recreate. The base of the ship came from a half-cut, elongated ellipsoid. The ship swings about the point where it is attached to the stand. This swinging action was created using a trigonometric function with a specific period.
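The swing reduces to one line of per-frame math; `MAX_ANGLE` and `PERIOD` here are hypothetical constants, not the project's actual values:

```cpp
// Pendulum-style swing: the angle oscillates with a fixed period.
float shipAngle = MAX_ANGLE * sinf(2.0f * (float)M_PI * timeSec / PERIOD);
glRotatef(shipAngle, 0, 0, 1);  // rotate about the pivot where the ship meets the stand
```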
### Roller Coaster
The roller coaster was one of the most challenging parts of this project. The challenges were to:
- Draw a predetermined track along a specified path
- Move the roller coaster car along it
- Place the camera on one of the seats in the car and follow it as it moves
- Change the car's orientation according to the orientation of the track
- Continually vary the speed of the car based on whether it is climbing or descending
A set of multiple Bezier curves was joined together to form a smooth, long and continuous track along which the roller coaster car could move. This also involved logic to smooth out the jerky transition from one Bezier curve to the next. By calculating the normal, bi-normal and tangent at every position of the car, I was able to control its pitch, yaw and roll to make the ride look realistic.
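A sketch of the underlying math, assuming a small `Vec3` helper of my own: the derivative of a cubic Bezier segment gives the tangent, and crossing it with a reference up vector yields the bi-normal and normal used to orient the car.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 operator+(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3 operator*(float s, Vec3 a) { return {s * a.x, s * a.y, s * a.z}; }
static Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static Vec3 normalize(Vec3 v) {
    float l = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return (1.0f / l) * v;
}

// Position on one cubic Bezier segment, t in [0, 1].
Vec3 bezier(const Vec3 p[4], float t) {
    float u = 1 - t;
    return (u*u*u) * p[0] + (3*u*u*t) * p[1] + (3*u*t*t) * p[2] + (t*t*t) * p[3];
}

// Derivative of the segment: the (unnormalized) tangent direction.
Vec3 bezierDeriv(const Vec3 p[4], float t) {
    float u = 1 - t;
    return (3*u*u) * (p[1] - p[0]) + (6*u*t) * (p[2] - p[1]) + (3*t*t) * (p[3] - p[2]);
}

// Tangent / bi-normal / normal at t, used to set the car's yaw, roll and pitch.
void trackFrame(const Vec3 p[4], float t, Vec3& T, Vec3& B, Vec3& N) {
    T = normalize(bezierDeriv(p, t));
    Vec3 up = {0, 1, 0};          // reference up vector -- an assumption
    B = normalize(cross(up, T));
    N = cross(T, B);
}
```

Joining segments smoothly then amounts to making the end point and end tangent of one segment match the start of the next, which is presumably what the smoothing logic enforces.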
Eventually, this is how it turned out to be:
### Skybox
The realistic sky background is achieved using the skybox technique: the scene is placed inside a very large cube whose inner walls are textured with images of the sky. The box also moves with the first-person camera, but at a much slower rate, so that a parallax effect is achieved.
The above image shows the skybox with the scene it contains.
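A sketch of the draw pass in legacy OpenGL, under a few assumptions of mine (`camX`/`camY`/`camZ` for the camera position, `skyTex[]` for the six wall textures, `SIZE` for the half-extent of the cube); the `PARALLAX` factor is the "slower rate" mentioned above:

```cpp
const float PARALLAX = 0.9f;  // hypothetical: the box trails the camera slightly

void drawSkybox() {
    glPushMatrix();
    glTranslatef(camX * PARALLAX, camY * PARALLAX, camZ * PARALLAX);
    glDepthMask(GL_FALSE);      // sky never occludes scene geometry
    glDisable(GL_LIGHTING);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, skyTex[0]);  // +X wall; the other five are analogous
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex3f(SIZE, -SIZE, -SIZE);
    glTexCoord2f(1, 0); glVertex3f(SIZE, -SIZE,  SIZE);
    glTexCoord2f(1, 1); glVertex3f(SIZE,  SIZE,  SIZE);
    glTexCoord2f(0, 1); glVertex3f(SIZE,  SIZE, -SIZE);
    glEnd();
    // ...remaining five walls omitted...
    glDepthMask(GL_TRUE);
    glPopMatrix();
}
```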
### Other options
There are several other options, including:
Hand Gesture Recognition uses computer vision, image processing and machine learning to detect hand gestures that can trigger specific actions. The aim of this project was “to create a dynamic hand gesture recognition system that recognizes a set of gestures and performs a corresponding action”.
The following diagram gives a high-level overview of this project's architecture.
At a high level, the movement of the hand is tracked, the path it traverses is matched against trained gestures, and the best-matching gesture is selected and its associated action performed.
## Detection of Hand Contour
In order to detect the various gestures performed by a hand, the hand must first be detected as a contour, that is, as an outline or silhouette. The input image frames from the web camera are processed using Mixture of Gaussians background subtraction, and the resulting mask is then used to detect contours and identify fingers.
*(Images: background model, foreground object, foreground mask)*
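With OpenCV, this stage can be sketched roughly as follows; this is a minimal sketch using OpenCV's MOG2 variant of Mixture-of-Gaussians subtraction, not the project's actual code:

```cpp
#include <opencv2/opencv.hpp>

int main() {
    cv::VideoCapture cap(0);  // web camera
    cv::Ptr<cv::BackgroundSubtractorMOG2> mog = cv::createBackgroundSubtractorMOG2();
    cv::Mat frame, mask;
    while (cap.read(frame)) {
        mog->apply(frame, mask);  // update the background model, get the foreground mask
        cv::threshold(mask, mask, 200, 255, cv::THRESH_BINARY);  // drop shadow pixels (127)
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
        // The largest contour is taken as the hand and passed on to finger detection.
        cv::imshow("foreground mask", mask);
        if (cv::waitKey(30) == 27) break;  // Esc quits
    }
    return 0;
}
```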
## Finger detection
Next, fingers are detected by looking for peaks and defects along the contour. The peaks and defects found are then validated to check whether they really represent fingers; if so, they are marked, and the movement of the topmost finger is tracked.
*(Images: potential peak, potential defect)*
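One common way to find those peaks and defects is via the contour's convex hull and OpenCV's convexity defects; a sketch, assuming `contour` is the hand contour from the previous step and `MIN_DEPTH` is a hypothetical validation threshold:

```cpp
std::vector<int> hullIdx;
cv::convexHull(contour, hullIdx, false, false);  // hull as indices into the contour
std::vector<cv::Vec4i> defects;
if (hullIdx.size() > 3)
    cv::convexityDefects(contour, hullIdx, defects);

for (const cv::Vec4i& d : defects) {
    cv::Point peakA  = contour[d[0]];  // potential peak (fingertip)
    cv::Point peakB  = contour[d[1]];  // potential peak
    cv::Point valley = contour[d[2]];  // potential defect (gap between fingers)
    float depth = d[3] / 256.0f;       // the depth is stored in fixed point
    if (depth > MIN_DEPTH) {
        // Validate (e.g. the angle at the valley) before accepting the peaks as
        // fingers; the topmost accepted peak is the finger that gets tracked.
    }
}
```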
As this movement data is collected incrementally, we get a sequence of coordinates through which the tip of the finger has moved.
*(Image: tracked path as points)*
## Feature extraction
The obtained path, or set of points, is very specific to the canvas on which the gesture is made. If we used this representation as-is, a “small” and a “large” version of the same gesture would look fundamentally different and could not be matched to each other. So the points are subjected to feature extraction: a known list of size-independent features is extracted and used from then on to represent the gesture. For each point in the gesture path, the following features are extracted:
- Location (distance) relative to the centre of gravity of the gesture path
- Angle with the centre of gravity of the gesture path
- Angle with the initial point of the path
- Angle with the final point of the path
- Angle with the bottom-left corner of the path
- Angle with the top-left corner of the path
- Angle with the bottom-right corner of the path
- Angle with the top-right corner of the path
Thus, for N points in a gesture path, an N×2 array is transformed into an N×8 array.
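A sketch of this transformation in plain C++, with hypothetical names; taking the corner points from the path's bounding box, and normalizing the distance by the bounding-box diagonal to keep it size-independent, are both assumptions about the original implementation:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// N x 2 gesture path -> N x 8 feature array (the eight features listed above).
std::vector<std::array<double, 8>> extractFeatures(const std::vector<Pt>& path) {
    Pt cog{0, 0};
    double minX = path[0].x, maxX = path[0].x, minY = path[0].y, maxY = path[0].y;
    for (const Pt& p : path) {
        cog.x += p.x; cog.y += p.y;
        minX = std::min(minX, p.x); maxX = std::max(maxX, p.x);
        minY = std::min(minY, p.y); maxY = std::max(maxY, p.y);
    }
    cog.x /= path.size(); cog.y /= path.size();
    double diag = std::hypot(maxX - minX, maxY - minY);  // scale factor
    Pt first = path.front(), last = path.back();
    // Image coordinates: y grows downward, so "bottom" is maxY.
    Pt bl{minX, maxY}, tl{minX, minY}, br{maxX, maxY}, tr{maxX, minY};
    auto angle = [](Pt a, Pt b) { return std::atan2(b.y - a.y, b.x - a.x); };

    std::vector<std::array<double, 8>> features;
    for (const Pt& p : path) {
        features.push_back({std::hypot(p.x - cog.x, p.y - cog.y) / diag,
                            angle(p, cog), angle(p, first), angle(p, last),
                            angle(p, bl), angle(p, tl), angle(p, br), angle(p, tr)});
    }
    return features;
}
```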
Example:
*(Images: an example gesture path; the gesture represented as points; the gesture represented as feature vectors)*
## Training gestures
For training, the set of all feature vectors obtained from every training gesture is subjected to k-means clustering to obtain a set of cluster centroids, which are saved as the “codebook”. This codebook later acts as a ready reckoner for identifying gestures. Here we select k = 64, so that 64 centroid locations are obtained, each given an index from 0 to 63.
*(Images: quantization; k-means clustering showing the centroid locations)*
After this, the feature-vector representation of each training gesture is quantized: each point in the multidimensional feature space is replaced with the index of the nearest cluster centroid, essentially transforming a gesture into a sequence of integer indices.
*(Image: a gesture represented as a sequence of cluster indices)*
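A sketch of both steps using OpenCV's k-means; the function and variable names here are mine, not the project's:

```cpp
#include <opencv2/core.hpp>
#include <cfloat>
#include <vector>

// allFeatures: every 8-D feature vector from every training gesture,
// one per row, type CV_32F. Returns the 64 x 8 codebook of centroids.
cv::Mat buildCodebook(const cv::Mat& allFeatures) {
    cv::Mat labels, codebook;
    cv::kmeans(allFeatures, 64, labels,
               cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::MAX_ITER, 100, 1e-4),
               5, cv::KMEANS_PP_CENTERS, codebook);
    return codebook;
}

// Quantization: map each feature vector of one gesture to the index of the
// nearest centroid, turning the gesture into a sequence of ints in [0, 63].
std::vector<int> quantize(const cv::Mat& gesture, const cv::Mat& codebook) {
    std::vector<int> sequence;
    for (int i = 0; i < gesture.rows; ++i) {
        double best = DBL_MAX;
        int bestIdx = 0;
        for (int c = 0; c < codebook.rows; ++c) {
            double d = cv::norm(gesture.row(i), codebook.row(c), cv::NORM_L2SQR);
            if (d < best) { best = d; bestIdx = c; }
        }
        sequence.push_back(bestIdx);
    }
    return sequence;
}
```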
This sequence is then fed, as an observation sequence, to train a Hidden Markov Model using the Baum-Welch algorithm, which populates the parameters of an empty model by estimating the probability distributions between its states. The Baum-Welch algorithm works by alternately “expecting” and “maximizing” the model parameters for a given sequence of observations. The obtained Hidden Markov Model parameters are then saved.
The following Hidden Markov Model parameters are saved:
- N - the number of states in the model
- M - the number of distinct observation symbols per state, also called the discrete alphabet size
- A - the state transition probability distribution, i.e. the probability of transitioning from one state to another
- B - the observation symbol probability distribution, i.e. the probability of emitting a particular observation symbol in a given state
- Pi - the initial state distribution
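In code, the saved model boils down to a small record mirroring the list above (a sketch; the project may store these differently):

```cpp
#include <vector>

struct HiddenMarkovModel {
    int N;                               // number of hidden states
    int M;                               // alphabet size; 64 here, one symbol per centroid
    std::vector<std::vector<double>> A;  // N x N: transition probabilities between states
    std::vector<std::vector<double>> B;  // N x M: probability of emitting each symbol per state
    std::vector<double> Pi;              // length N: initial state distribution
};
```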
More details about Hidden Markov Models can be found here. In a nutshell, given a sequence of observations, the Baum-Welch algorithm computes the above parameters.
## Recognizing a gesture
To recognize a gesture, its quantized representation is fed to the Viterbi algorithm, which computes the probability of the sequence with respect to every saved Hidden Markov Model: given a model and an observation sequence, it calculates the probability that the sequence was generated by that model. The model with the highest probability identifies the recognized gesture.
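A log-space sketch of the scoring step, reusing the HiddenMarkovModel struct above; the recognizer would run this against every saved model and pick the best score:

```cpp
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>

// Viterbi score: log-probability of the best state path that could have
// produced the observation sequence under one model.
double viterbiLogScore(const HiddenMarkovModel& hmm, const std::vector<int>& obs) {
    const double NEG_INF = -std::numeric_limits<double>::infinity();
    auto lg = [&](double p) { return p > 0.0 ? std::log(p) : NEG_INF; };

    std::vector<double> v(hmm.N);
    for (int s = 0; s < hmm.N; ++s)                 // initialization step
        v[s] = lg(hmm.Pi[s]) + lg(hmm.B[s][obs[0]]);

    for (std::size_t t = 1; t < obs.size(); ++t) {  // recursion over the sequence
        std::vector<double> next(hmm.N, NEG_INF);
        for (int s = 0; s < hmm.N; ++s)
            for (int prev = 0; prev < hmm.N; ++prev)
                next[s] = std::max(next[s],
                                   v[prev] + lg(hmm.A[prev][s]) + lg(hmm.B[s][obs[t]]));
        v = next;
    }
    return *std::max_element(v.begin(), v.end());   // best final state wins
}
```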
## Screenshots
This was originally intended to help specially-abled people communicate, but anyone can use it on any x86 or amd64 machine with a camera device.
*(Screenshots: codebook generation; Hidden Markov Model training using the Baum-Welch algorithm; gesture started; gesture in progress; gesture completed and recognized as Line)*
The following document provides more details if required:
Open Learn is an open-source school management system that transforms the way educational organizations are managed. Currently used by about 12 schools in rural Karnataka, it allows one to manage student profiles, including attendance, exam reports (via SMS) and fee payment tracking (via SMS). Its vision is to transform the way students are taught in schools, using interactive learning techniques that can generate illustrations with no additional effort.
Save to Google Drive is an extension for Mozilla Firefox with which you can save files you find on the web directly to your Google Drive account - no need to download and re-upload them. It can also detect all the images on a web page and batch-save them to your Google Drive account. It was featured on SoftPedia, MakeTechEasier and several other review websites. Unfortunately, it is now deprecated, as API support was withdrawn by Google. Learn more.
*(Screenshots: adding a link; saving files)*
MapMe for s60v5 is one of the very first mobile applications I created, built with the Qt SDK in C++ and QML to run on Symbian s60 devices. It can track or report your location without GPS, using the Cell ID triangulation technique: it requests the network Cell IDs of the nearest transmitting cell towers and calculates an approximate user location. Apart from the Cell ID, it also uses the Mobile Country Code (MCC), Mobile Network Code (MNC) and Location Area Code (LAC) for better accuracy.
*(Screenshots: requesting data; position on the map; current position without GPS)*
PHP for Google App Engine is what I worked on before Google introduced PHP as an officially supported language for Google App Engine. It is a wrapper around the GAE SDK that allows running PHP using Quercus, which uses Java under the hood.
Omniscient is a fun project I created that accepts user queries in natural language and responds with relevant answers, using Wolfram Alpha. Omniscient can be accessed using this link.
*(Screenshots: asking a query; the response)*
I have also worked on several OpenGL Projects in my free time to learn Computer Graphics in OpenGL.
Omniscient Android is the Android version of Omniscient for Web. This application can take user queries and respond with relevant information.
I used Captcha2Text to decode some basic captcha images for my page-scraping applications. It does not use any advanced image processing algorithms or other OCR techniques; instead, it requires you to build a hash list of known letters, after which any re-occurrence of those letters is automatically detected. Please be sure to ask the site owner’s permission before using it.
*(Images: decoding steps 1-4; the decoded text is piuqd8)*
e-Odyssey is the website (script!) I created for e-Odyssey, the computer science club of our college. Written entirely in Google Apps Script, it needs no hosting space or server. The script also has an XML import function that can fetch posts from a blog and act as an index for all of its posts.