
Friday, 18 March 2011

Line-Following Autonomous Vehicle


The car follows a non-predetermined path by tracking a line against a luminance-contrasting surface, which is detected by an array of sensors.


An Atmel AVR8515 microcontroller gives the car complete control over the drive and steering servos. As the microcontroller senses the position of the line relative to the car, it steers accordingly. A sixth light sensor detects wheel rotation so that a constant speed can be maintained: a digital feedback algorithm reads the wheel data and holds the desired speed by adjusting the PWM signal to the motor.

Each of the five infrared LEDs in the array illuminates the floor, and the sensors detect the reflected light. The detector outputs are converted to either 0 V or 5 V: a low output indicates a high-reflectance (white) surface, while a high output indicates a black line beneath the sensor.

A sixth LED-sensor pair is mounted facing a wheel; it detects the wheel's position and hence the speed of the car.
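The speed-regulation loop described above can be sketched as a simple feedback adjustment of the PWM duty cycle. This is an illustrative sketch, not the project's actual code; the function name, the one-step-per-tick adjustment, and the use of RPM units are all assumptions.

```c
#include <stdint.h>

/* Hypothetical sketch of the digital feedback loop: nudge the PWM duty
   cycle toward the target speed by one step per control tick. The wheel
   sensor supplies measured_rpm; the return value is the new duty cycle. */
static uint8_t adjust_pwm(uint8_t duty, uint16_t measured_rpm, uint16_t target_rpm)
{
    if (measured_rpm < target_rpm && duty < 255)
        duty++;                /* too slow: widen the PWM pulse */
    else if (measured_rpm > target_rpm && duty > 0)
        duty--;                /* too fast: narrow the PWM pulse */
    return duty;
}
```

Calling this once per wheel-sensor sample gives a crude integral-style controller that settles on whatever duty cycle holds the desired speed.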

Virtual Pool in a Box

Introduction to Virtual Pool in a Box

The virtual pool trainer interfaces with the user through a real cue stick. The cuestick is attached to a linear potentiometer, which in turn is attached to a rotary potentiometer. The rotary potentiometer, simulating the position of the white cueball on the table, detects the direction of alignment of the cuestick, and hence determines the direction of the cueball's initial thrust. The linear potentiometer determines the position of the cuestick relative to the cueball, so that the moment the cuestick strikes the cueball can be known. An accelerometer, also attached to the cuestick and measuring up to 10 G, records the acceleration of the cuestick on impact with the cueball. The whole setup, involving the cuestick, potentiometers, and the simulated pool table, is shown in the picture below.

The LCD screen shows the cuestick in real time as it rotates and adjusts. In addition, the LCD displays the pool table and balls in real-life proportions. At the start of the game, the player follows on-screen instructions by pressing several buttons attached to the box containing the LCD. Using these buttons, the player can also place the cueball anywhere on the table after it enters a pocket. Seamless integration of the user inputs and the output on the LCD screen is handled by the ATmega32 MCU.

On startup, the player can choose 1 of 2 modes: Training and Full Game. Training allows the user to choose from 1 up to 5 object balls for the purpose of practicing. Full game pits 2 players against each other, in an 8-ball pool game, with a total of 15 object balls on the table, numbered 1 to 15. Complicated background mathematical computations allow the balls to move as they should in real life, including the effects of friction with the table top, wall collision and ball collision. In the 8-ball pool game, several rules are followed, such as allowing the opponent to place the cueball freely upon “scratch” by one of the players. When 1 player has pocketed 7 of his balls as well as the number 8 black ball, the game ends.


Airmouse

Many tasks performed on the computer require both a keyboard and a mouse, and many people find it frustrating and awkward to switch back and forth between them. So we've created what we believe to be the solution to this dilemma: you can wear the glove on your hand while you're typing, and when you want to move the mouse cursor, simply push a button that is already on your hand, point your finger at the screen, and move it around. The Airmouse is comfortable to wear and does not significantly inhibit typing. It functions completely as a two-button serial mouse, and even implements variable-rate vertical scrolling.

High Level Design

Our project was inspired by two sources. Firstly, when Professor Land exclaimed, "I have free accelerometers, and they [can do cool things]," we felt as if we should explore the possibility of using them. We thought for a while about what kind of awe-inspiring device we could construct with acceleration sensors, and then we found our second fountain of inspiration: the mouse-glove that Tom Cruise used in Minority Report would not only be very neat were it real, it also seemed entirely possible to build. Accelerometers seemed to be the ideal sensor for constructing a hand-mounted pointing device.

Our original plan used accelerometers to measure the acceleration of a user's hand and integrate that acceleration into a change in position. The math behind this is very easy: using Verlet integration, we would approximate the integral of the acceleration to the second degree. Verlet integration interpolates between two measured accelerations, using the average slope between them to derive velocity. This is sometimes called "trapezoidal integration." Here are the equations, courtesy of Professor Land's CS417 webpage:

Verlet Equations
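The referenced figure is not reproduced here. In standard notation, and consistent with the trapezoidal scheme described above, the update equations take the form:

```latex
x_{n+1} = x_n + v_n\,\Delta t + \tfrac{1}{2}\,a_n\,\Delta t^2, \qquad
v_{n+1} = v_n + \tfrac{1}{2}\,(a_n + a_{n+1})\,\Delta t
```

Here $a_n$ is the accelerometer sample at step $n$; averaging $a_n$ and $a_{n+1}$ is exactly the trapezoidal velocity update the text describes.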

This approach, however, turned out to be a practical impossibility. While we successfully implemented the Verlet scheme and watched a mouse cursor controlled by the scheme move on the screen as we expected it to, the glove had to be held exactly at 0g (or 1g for earth-normal) in all three directions. If gravity were allowed to affect the accelerometers at all, that acceleration was added into the integral, and with no negative acceleration to remove it, the change in position we calculated grew without bound. We needed gyroscopes to measure gravity so that we could remove its effect from our calculations, but gyroscopes are not inexpensive, and the cheapest ones cannot be soldered by human hands.

Because of this, we decided to construct a tilt mouse instead. While not as impressive as a position-tracking device, the tilt mouse is easy to use (after a bit of practice) and almost as neat as a position-tracking mouse. The math behind this scheme is very easy: measure the acceleration due to gravity on the mouse, and multiply it by some constant to scale the output to a desirable level. The Microsoft serial mouse protocol uses an 8-bit two's complement number scheme to send data to the computer, and the numbers output by the Atmel Mega32's analog-to-digital converter can be conveniently represented in the same number of bits. However, the numbers output by the accelerometer had to be altered to normalize the voltage extremes to -128 and 127 and the accelerometer's neutral output to 0. We discuss in the next section, Program/Hardware Design, exactly how this was done.

In addition to scaling the output, we also used a step filter on the data to make the mouse easier to use. Our accelerometer was so sensitive that the slightest motion of one's hand would cause the device to output a nonzero acceleration. To give the user a more stable region around the zero point, we quantized all outputs below a certain level to 0, then shifted any outputs beyond this cutoff toward zero by the breadth of the cutoff range. For example, if the cutoff were abs(10), all readings greater than -10 and less than 10 would map to 0, while a reading of 15 would be output as 5.
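The step filter described above can be sketched in a few lines. The function name and the assumption that the filtered value fits in a signed 8-bit mouse report are ours, not the original code's.

```c
#include <stdint.h>

/* Dead-zone ("step") filter: readings inside (-cutoff, cutoff) map to 0,
   and readings outside the band are shifted toward zero by the cutoff,
   so the output is continuous at the band edges. */
static int8_t step_filter(int16_t reading, int16_t cutoff)
{
    if (reading > -cutoff && reading < cutoff)
        return 0;
    return (int8_t)(reading > 0 ? reading - cutoff : reading + cutoff);
}
```

With a cutoff of 10, a jittery hand producing readings of ±5 yields no cursor motion at all, while a deliberate tilt producing 15 moves the cursor at rate 5.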

In addition to position changes, mice detect button presses. We implemented 4 buttons in our mouse - output on/off, left click, right click, and scroll enable. Each pushbutton is connected to a port pin on our microcontroller. When the output on/off button is pressed, the serial mouse output to the computer is either [re]enabled or disabled. This allows the user to move his or her hand and not have the motion affect the mouse pointer. Left click functions as a mouse left click, right click functions as a mouse right click, and scroll enable disables all motion and other button outputs while it is held down, and the y-output of the accelerometer is translated into a scroll-wheel output to the computer.

The only hardware/software tradeoff we had to deal with was the sensitivity of our analog-to-digital converter (ADC). The granularity of the ADC is approximately 19.5 mV per step. When we used accelerometers with sensitivities of approximately 50 mV/g for our position mouse, the outputs of the devices had to be normalized to the full ADC range of 0-5 V so that we would have good sensitivity. We explain in the next section exactly how this was done. For our tilt mouse, we used accelerometers that output 1000 mV/g, so the sensitivity was much higher, but we decided to normalize the range to 2000 mV/g to gain even higher sensitivity, especially since we already had the necessary circuits designed and constructed. Another hardware/software issue one must usually take into consideration is the "bouncing," or inconsistent, state some pushbuttons can be in after a transition, but our buttons were internally debounced, as can be seen in the figure below, so we did not have to worry about polling them at certain intervals.


kaOS: A Real-Time, Multithreaded, Preemptive Operating System

We have created a real-time, multithreaded, preemptive operating system called kaOS for the Atmel Mega32 microcontroller, which loads and executes programs from a Secure Digital or MMC card.

We wrote this OS and created the SD/MMC card reader as a final project for Cornell's ECE 476 class taught by Professor Bruce Land. Our reason for choosing this particular project is that we were unable to find any OS for the Mega32 that doesn't have to be statically compiled in with a user program. In contrast, kaOS waits for a card to be inserted and a reset button to be pressed, at which point a program is loaded from the card and executed. At any time, a new card with a new program can be inserted and run. Executing a new program doesn't require reprogramming the Atmel processor.
The design of kaOS was broken up into two major components: the operating system itself and the card reader and program loader.

The program loader accesses the card reader via the Atmel's SPI interface and places the program into flash memory. These programs can be written like a standard Atmel Mega32 program, except that they must include the kaos.h header file, which provides an interface to the OS's threading and messaging calls. The program loader resides in the bootloader section of the Mega32's flash. This gives it write access to other portions of flash memory so that it can write executables to program space. Once kaOS loads a program, it creates a thread for it and jumps to its main() method.

The operating system is real-time, multithreaded, and preemptive. It supports creation of up to 8 threads, which can be prioritized. Threads with the same priority are alternately preempted to give both equal processing time. kaOS also supports messaging between threads as a means of inter-thread communication.
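The scheduling policy described above — highest priority wins, equal priorities alternate — can be sketched as follows. This is an illustration of the policy, not kaOS's actual scheduler; the type, field, and function names are all assumptions.

```c
#include <stdint.h>

#define MAX_THREADS 8

/* Minimal thread control block for the sketch. */
typedef struct { uint8_t priority; uint8_t runnable; } tcb_t;

/* Pick the next thread: highest priority wins; the scan starts just
   after the current thread so that equal-priority threads rotate
   round-robin and share processing time equally. */
static uint8_t next_thread(const tcb_t t[MAX_THREADS], uint8_t current)
{
    uint8_t best = current, best_pri = 0, found = 0;
    for (uint8_t i = 1; i <= MAX_THREADS; i++) {
        uint8_t idx = (uint8_t)((current + i) % MAX_THREADS);
        if (t[idx].runnable && (!found || t[idx].priority > best_pri)) {
            best = idx;
            best_pri = t[idx].priority;
            found = 1;
        }
    }
    return best;
}
```

Because ties are broken in favor of the thread encountered first after the current one, two runnable threads of equal priority are alternately preempted rather than one starving the other.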

While the idea for this project was born out of the lack of a true OS for the Mega32 that dynamically loads programs, we must give some credit to aOS, created by Anssi Ylätalo. In particular, we used his context-switching code to swap processor states when changing threads. Also, portions of our code used to read from SD/MMC cards were adapted from code written by Radig Ulrich. Additional information about SD card communication and the SPI protocol was obtained from SanDisk's SD card manual.

In designing the operating system, we considered various library functions that we could add to increase functionality, such as LCD and keypad drivers, events, and support for multiple programs on a single card with a menu system to choose between them. These features, although useful bells and whistles, were cut from the design because they were extraneous to our core vision of creating a dynamically-loading OS, added unnecessary complexity, and consumed extra flash and RAM, which might be needed for some user programs.

Self-Powered Solar Data Logger


My project is a self-powered solar data logger. Put out in the sunlight, it will measure the light level and log this to memory to be later downloaded to a computer. The system is powered by a small solar panel and battery.

The solar logger I built uses a photodiode to measure the solar insolation level. It converts the analog signal from the photodiode to a digital value that is stored in flash memory. Every time the system logs a data point, it also logs the time and date so that the data can be analyzed in the future. The logged data is available for a user to download to a computer for analysis.

While the system is logging, real-time data is displayed on a small LCD screen, along with the battery voltage, the length of time the system has been logging, and the length of time it can continue to log before running out of memory.

The logger has a dedicated solar charging system to provide the needed power. A very simple charge controller regulates charging of a small, sealed gel-cell lead-acid battery by a small solar panel. This charge controller is a simple on-off switch that disconnects the PV panel when the battery voltage rises too high.

The user sets the time and date each time the system is reset, as well as the frequency with which data is stored. The user can also clear the memory or continue appending data to the previously logged data, and extract logged data to a computer for analysis.
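The on-off charge controller described above amounts to a comparison with hysteresis. This is a sketch of that behavior only; the threshold voltages are typical values for a 12 V gel-cell assumed for illustration, not values taken from the project.

```c
/* On-off charge control with hysteresis. Returns 1 to connect the PV
   panel to the battery, 0 to disconnect it. The thresholds are assumed
   example values for a 12 V sealed gel-cell, not the project's settings. */
static int charger_switch(double battery_v, int connected)
{
    const double v_high = 14.1;   /* disconnect above this voltage  */
    const double v_low  = 13.2;   /* reconnect below this voltage   */
    if (connected && battery_v >= v_high)
        return 0;
    if (!connected && battery_v <= v_low)
        return 1;
    return connected;             /* inside the band: hold state    */
}
```

The gap between the two thresholds prevents the switch from chattering when the battery voltage hovers near the cutoff.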

High level design:

Rationale and sources of project idea:
There are many reasons for logging solar insolation data for solar electric systems. When planning to install a solar electric power system, one must be able to predict the power output in order to determine the financial costs and benefits of the installation.

When planning an off-grid solar electric power system, the output must be closely matched with the load so as to provide sufficient power without considerable waste. The power output is directly related to the insolation level. While seasonal and annual average insolation levels for most major cities in the United States are available, cloud cover and other weather effects can be very localized depending upon the topography, making extrapolations from the large cities with established data to neighboring areas unreliable. In addition, this data is not available for every part of the world.

For a large installed solar electric system, one would typically install an insolation monitor to determine whether the system is performing as expected. If the system output drops below the level expected given the insolation, an alarm would be set off to indicate that the system needs servicing.

The Reflow Soldering Oven with LCD Display:


Our project is a reflow soldering device built from a normal toaster oven, with a graphical LCD display for control and GUI. Soldering is an important and difficult task in custom printed circuit board design, especially for integrated circuits that come in chip packages that are impossible to solder by hand. This is particularly true for ball grid arrays (BGAs) and small-pitch quad flat packs. If one chooses to design a custom printed circuit board around these chips, the designer may also wish to purchase a stencil of the designed board that allows him to squeegee solder paste precisely onto the SMD pads. The designer would then carefully place the components on the board and heat the solder paste with a heat gun or a reflow soldering oven.

The problem with commercial reflow soldering ovens is that they are expensive, costing thousands of dollars. We decided to come up with a cheap, working solution by using a normal toaster oven controlled through a microcontroller, along with an LCD display that guides the user through the soldering process and constantly provides feedback on the state of the system while reflow soldering.

Input to the system is via a conventional keypad and consists of target temperature points at specific times, which the user enters based on the solder paste's recommended temperature profile. The system interpolates the temperatures for the in-between time intervals and follows the curve generated by the input. The system also fulfills the appropriate safety requirements and can abort the process in case of a mishap.
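The interpolation of user-entered set-points into a continuous temperature curve can be sketched with simple linear interpolation. The arrays and function name below are assumptions for illustration, not the project's actual code.

```c
/* Linearly interpolate the target temperature at time `now` from n
   user-entered (time, temperature) set-points. Times in t[] must be
   ascending; before the first point the first temperature is held,
   and after the last point the final temperature is held. */
static double target_temp(const double t[], const double temp[], int n, double now)
{
    if (now <= t[0])
        return temp[0];
    for (int i = 1; i < n; i++) {
        if (now <= t[i]) {
            double frac = (now - t[i - 1]) / (t[i] - t[i - 1]);
            return temp[i - 1] + frac * (temp[i] - temp[i - 1]);
        }
    }
    return temp[n - 1];
}
```

For example, with set-points (0 s, 25 °C), (60 s, 150 °C), (120 s, 220 °C), the target at 30 s is halfway up the first ramp, 87.5 °C.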

High Level Design

Rationale and source:
The source of the project was one of the team members, Ko Ihara, who worked on designing a circuit board as part of his M.Eng. project and came up with the idea of using a toaster oven for reflow soldering. The project idea fully complies with the requirements of the course, and the challenging part is coming up with a workable PID feedback control that makes the toaster oven follow the temperature curve and heat up or cool down at the appropriate times. The Atmel Mega32 chip is sufficient for this project, as its speed and number of I/O pins suffice for our needs. The project would also benefit future students in the course, since they would be able to solder boards with less pain and with high efficiency and safety.

Logical structure:
The project can be divided into three parts namely the LCD display, the oven control and lastly the temperature sensor inside the oven. The following block diagram shows the logical structure of the system.

Figure 1: High-level system diagram

The microcontroller sends a digital signal to the solid-state relay switch, which controls the on/off state of the oven for appropriate heating and cooling. The temperature sensor inside the oven forms a voltage divider with a series resistor, and the divider's voltage signal is connected to the Atmel Mega32's ADC (analog-to-digital converter) input. The program uses the ADC output as feedback to measure the temperature inside the oven and to control the relay state. At the same time, the state of the system is updated in real time on the LCD so the user can see the progress. The keypad is used for input, and the LCD provides step-by-step instructions to the user during the input process. The oven control hardware containing the relay switch sits inside a metal box with a fuse circuit in place. The box has three-pin plug connections for the oven and the AC voltage supply, as well as harness plugs for connecting the digital input from the microcontroller.
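Recovering the sensor resistance from the divider reading is a short calculation. This sketch assumes a 10 kΩ series resistor and a 10-bit ADC referenced to the divider's supply; both values are illustrative assumptions, as is the function name.

```c
#include <stdint.h>

/* Given a 10-bit ADC reading of the divider tap (sensor on the low
   side, assumed 10 kohm resistor on the high side), return the sensor
   resistance in ohms: V_tap/Vcc = R_s / (R_s + R_series). */
static double sensor_ohms(uint16_t adc10)
{
    const double r_series = 10000.0;
    double v_frac = adc10 / 1023.0;    /* fraction of Vcc at the tap */
    if (v_frac >= 1.0)
        return 1e9;                    /* guard: open-circuit sensor */
    return r_series * v_frac / (1.0 - v_frac);
}
```

The firmware would then map this resistance to a temperature using the sensor's characteristic curve or a lookup table.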

FPGA Ray Tracer


Our final project in ECE576 was implementing a ray tracer capable of rendering, rotating, and moving spheres.

Our initial goal was to realistically render and shade spheres with reflections while being able to navigate through the scene and allow the spheres to bounce and roll around. Once we met this goal, we added anti-aliasing and the ability to render planes as well. Because spheres are particularly hard to draw accurately with non-ray-tracing 3-D accelerators, and because spheres can be used as bounding objects for polygon tracing, we chose to implement spheres first. Planes were added because they can be used as polygons to render more complicated objects with sufficient bounding. We used a NIOS II processor to update the sphere table in hardware so that we could rotate spheres about any axis and have them move without adding significant complexity to the hardware. Rotations and motion were done in floating point on the CPU, making the calculations more accurate than the 24-bit 12.12 fixed-point representation used in the hardware. All input switches and keys were used to allow motion of the light source, motion of the origin, rotation of the scene, selection of the resolution, selection of the scene if the CPU was not used, the level of reflections, the level of anti-aliasing, reset, the option to render planes, and finally telescoping/widening the camera.
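The 12.12 fixed-point arithmetic mentioned above works by keeping 12 fractional bits in an integer; multiplication then needs a widening multiply followed by a shift. This is a generic sketch of that format, not the project's hardware implementation; the type and macro names are assumptions.

```c
#include <stdint.h>

/* 12.12 fixed point: the low 12 bits are the fraction, so 1.0 is
   represented by 1 << 12 = 4096. Stored in an int32 for headroom. */
typedef int32_t fix12;
#define FIX12_ONE (1 << 12)

/* Multiply two 12.12 values: widen to 64 bits so the intermediate
   product cannot overflow, then shift out the extra 12 fraction bits. */
static fix12 fix12_mul(fix12 a, fix12 b)
{
    return (fix12)(((int64_t)a * b) >> 12);
}
```

For example, 3.0 times 0.5 in this format is (12288 * 2048) >> 12 = 6144, which is 1.5 as expected; ray-sphere intersection in the hardware pipeline reduces to chains of such multiplies and adds.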

A Wearable Wireless Sensor System


In this digital age, new interfaces for musical expression provide much broader musical possibilities than have ever existed before. There is a constant quest to be in harmony with one’s instrument so that music can flow freely from the imagination and take form effortlessly. This sparks an interest in new ways to interact with instruments, because we may be able to achieve more fluid methods for creating music. There are many new digital musical interfaces, but most are based on traditional musical instruments or are at least designed as a tangible object. This project aims to eliminate the physical “instrument” altogether. The sensor system enables the use of one’s own body as a musical instrument through detection of movement, freeing the artist from traditional requirements of producing live music. The ability to create and manipulate sound through movement provides the potential for immediate intuitive control of musical pieces.

Dance and music are quite obviously intertwined. One seems empty without the other. Dance and music deserve a close relationship, with no strings attached. The goal of this project is to blur the line between the two, and open up an avenue for them to mingle more intimately. Instead of dancing to music, it is now possible to create music by dancing. This project is a tool, an interface between motion and music, a new musical instrument. It is designed to be highly configurable to allow the artist as free a form of expression as possible. It is also designed to be fun and comfortable to use. This is a powerful tool and a fun toy.

High Level Design

This project implements a wireless, wearable sensor array which operates fast enough to generate MIDI (Musical Instrument Digital Interface) data in real-time based on the user's motions. Four sensors attached to one's body wirelessly send data to a MIDI host. The system is designed to be flexible, so it can be attached to a personal computer (PC), digital audio workstation (DAW), or stand-alone synthesizer, sampler, etc, and configured at will as an air guitar, input to an algorithmic composition, moshing interpreter, a live soundtrack to your daily life, or any other scheme you can imagine.
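A MIDI Note On message of the kind the host generates is three bytes: a status byte of 0x90 OR'd with the channel, then a note number and a velocity, each 0-127. The following sketch packs such a message; the function name is an assumption, though the byte layout follows the MIDI 1.0 specification.

```c
#include <stdint.h>

/* Pack a MIDI Note On message into a 3-byte buffer: status byte
   0x90 | channel, then 7-bit note number and 7-bit velocity. */
static void midi_note_on(uint8_t channel, uint8_t note,
                         uint8_t velocity, uint8_t out[3])
{
    out[0] = (uint8_t)(0x90 | (channel & 0x0F));
    out[1] = (uint8_t)(note & 0x7F);
    out[2] = (uint8_t)(velocity & 0x7F);
}
```

A sensor mapping might, for instance, translate an accelerometer magnitude into the velocity byte so that sharper movements produce louder notes.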


Customizable Virtual Keyboard

It is becoming increasingly difficult for users to interact with the slew of portable gadgets they carry, especially in the area of text entry. Although miniature displays and keyboards make some portable devices, such as cell phones and PDAs, amazingly small, users’ hands do not shrink accordingly.

To solve this problem, we proposed a Virtual Keyboard. This device replaces a physical keypad with a customizable keyboard printed on a standard A3 size paper whose "keystrokes" are read and translated to real input. This virtual keyboard can be placed on any flat surface, such as desktops, airplane tray tables, kitchen counters, etc., and can theoretically be interfaced with any computing device that requires text entry. This eliminates the need to carry anything around, and a simple lamination protects the paper from mechanical damage in harsh environments. In addition, buttons on this device can be reconfigured on the fly to give a new keyboard layout, using a GUI we built in Java and transferring that data to the device over a computer's serial port.

First, we initialize PORTA on the Mega32 to take UV input from the camera and PORTC to communicate with the camera over the I2C interface. The baud rate is set to 19,200 bps for serial communication. We then run the calibrate function on the camera, which looks at a black keyboard to determine a distinguishable red color threshold. We then call a function, init_cam, which performs a soft reset on the camera before writing the required values to the corresponding camera registers. These registers change the frame size to 176x144, turn on auto white balance, set the frame rate to 6 fps, and set the output format to 16-bit in the Y/UV mode with Y = G G G G and UV = B R B R. The code then enters an infinite loop which checks the status of the PS2 transmit queue and tries to process the next captured frame if the queue is empty. If not, the queue is updated and the PS2 transmission is allowed to continue.

Image Processing

The getRedIndex function captures rows of data from the camera and processes each of them. We first wait until a neg edge on the VSYNC, which indicates the arrival of new frame data on the UV and Y lines. We then wait for one HREF to go by since the first row of data is invalid. At this point, we can clock in 176 pixels of data for a given vertical line in the Bayer format.

Bayer format

Figure 4: Bayer color pattern

In the mode where the UV line receives BR data, the output is given by B11 R22 B13 R24, and so on. Since we only needed red data, we stored an array of 88 values in which we captured the data on the UV line every 2 PCLKs. The OV6630 also repeats the same set of pixels for consecutive rows, so two processed vertical lines would contain data about the same pixels. We considered optimizing this by completely dropping the even rows, but this would not have saved us anything, since all our processing could be done between one negative edge and the next positive edge (when data becomes valid again) of HREF.

Since we don’t have enough memory to store entire frames of data to process, we do the processing after each vertical line. After each vertical line of valid data, HREF stays negative for about 0.8ms and the camera data becomes invalid; this gives us ample time to process one line worth of data. After each vertical line was captured, we looped through each pixel to check if it exceeded the red threshold found during calibration. For every pixel that met this threshold, we then checked if the pixel was part of a contiguous line of red pixels, which would indicate the presence of a key press. If such a pixel was found, we then mapped this pixel to a scan code by binary searching through an array of x, y values. If this scan code was found to be valid, we debounced the key by checking for 4 continuous presses, and then added the detected key to the queue of keys to send to the PC.
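The per-line check described above — scan for a contiguous run of pixels over the red threshold — can be sketched as follows. The function name, the run-length requirement, and the threshold value in the example are assumptions; the actual thresholds come from the calibration step.

```c
#include <stdint.h>

/* Scan one captured line for a contiguous run of at least min_run
   pixels brighter than the red threshold. Returns the start index of
   the run (a key-press candidate), or -1 if no run is found. */
static int find_red_run(const uint8_t line[], int len,
                        uint8_t threshold, int min_run)
{
    int run = 0;
    for (int i = 0; i < len; i++) {
        if (line[i] > threshold) {
            if (++run >= min_run)
                return i - run + 1;
        } else {
            run = 0;
        }
    }
    return -1;
}
```

A candidate index found this way would then be mapped to a scan code by the binary search over (x, y) values, and debounced across consecutive frames as described above.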
