Saturday, 21 August 2010
2.Ultrasound as a Biochemical Diagnostic Tool
3.Anesthesia Machine Model With AEP Simulator
4.Neuropathic Foot Exerciser
5.Digital pulmonary function test using Microcontroller(DPFT)
6.Human Skin Colour Sensor
7.Safeguard for the Blind
9.Role of Biomedical Engineering in a Hospital
10.Monitoring system for premature babies
11.EEG Biofeedback and Stress Management
12.Snore Alarm Electronic Device
13.Tan Timer: six timing positions suited to different skin types; timing affected by sunlight intensity
14.Highway Hospital Connecting System
15.Plaque Identification using
16.Total Quality Management in Healthcare
18.Microcontroller based temperature recorder and controlling system
19.Eye-Ball Movement Detector
21.Wireless telemetry system for the transmission of ECG signals
22.Speaker and speech recognition
23.Dual Parameter Monitoring Continuous Passive Movement machine
24.X-Ray C-Arm Drive and Control for Imaging Blood Line
25.Braille watch
2.Development Of A Tool Measurement System
3.The Development and Construction Of A Tele-operated Anthropomorphic Robotic Hand With Force Feedback
4.Laser Material Deposition On Prehardened Steel
5.Development Of A New Autonomous Guided Vehicle Platform
6.Improvement of Fatigue Properties by Surface Engineering
7.Supporting Software For EMAS For Manufacturing Sector
9.Laser Interferometer Application: Gauge Block Comparator
10.Modelling the Aerodynamics of a Horizontal-Axis Wind Turbine Using A Tip And Root Helical Vortex Model
11.Evaluation of a Mecanum Wheel-Based Mobile Robotic Platform
12.Design of a Towfish
13.Solar Heat Cooling
14.Testing Of Membranes With A Flat Sheet Membrane Test Rig
15.Laser Interferometer Applications: CNC Testing
16.FMEA Software Including LCA Considerations Of A Product
17.An Investigation On The Heat Transfer Of Two Phase Flow In Coiled Pipes
18.A study On Boiling Process
19.Design For An Automated Flexible Part Transfer System
21.Camshaft Tuning Of A Restricted Formula SAE Engine Through Engine Simulation Software
22.A web-based collaborative Learning Environment In Engineering Design Education
23.Enhancing The Retaining Capacity Of Abrasive Particles In The AFM
24.Design of a Universal Cosmetic Case Lacquering Fixture
25.Development Of A Haptic Glove For Teleoperation
26.Laser Material Deposition Of Ceramic Reinforced Steel
27.Enhancement of the Actuation and Sensing Capabilities of the IAL Dexterous Robot Finger
28.3D Design Data for Customised Hearing Aids
29.Measurement And Analysis Of The Pressure Distribution Around A Model Wind Turbine Operating In A Wind Tunnel Closed Test-Section
30.Study On The Possible Use Of Waste Rubber For Cooling Purposes
31.ADI As A Material For Gears
32.Vertebra Solid Model via 3D Scan
34.Solar Space Heating Of An Office Using Flat Plate Collectors
35.Setting Up Of Carburizing Facility
36.A Study On The Buckling Phenomenon
37.Improving the Performance of An Air Curtain Inside A Freezer
38.Manufacturing System Evaluation And Improvements Definition
39.Computerized Equipment Maintenance Management
40.Analysis Of The Tribological Behaviour Of AISI O1 And AISI D2 Tool Steels
41.A Methodology For Analysing The Energy Efficiency of Buildings (Using The Administrative Wasteserv Building As a Test Case)
42.The Design And Construction Of A Scratch Testing Head For a Tribotester
43.Testing Standard For Degradable Plastics
44.Investigating The Effect Of Alloying Elements In Zinc Bath On The Physical Properties Of Galvanizing Coats
45.The Construction Of A Pneumatic Comparator
46.Disassembly of Electronic waste
2.Electricity Distribution System.
3.Energy Saver: Intelligent Street Lighting System (detects traffic & turns lights ON)
4.Intelligent Street Lighting System. (All lights connected to central PC)
5.PC based room lighting control.
6.Wireless room lighting control.
8.Intelligent Room Lighting (counts whether a person is in the room & turns the light ON)
9.Maximum Demand Controller.
10.Power monitoring & logging System.
11. Railway Coach Automation.
12. Bus Stop Monitoring System.
13. Electronic Objective Test System.
14. Reflex Action Measurement.
15. Path Tracking Robot.
16. Electronic Inventory Control
16. Remote Sales Terminal.
17. Remote Irrigation control and monitoring
18. Weather monitoring using GSM for hazardous areas.
19. Water reservoir monitoring and PUMP station control
20. Wild life monitoring and location indicators for visitors
21. Hotel Power Management.
22. Wireless Robotic Crane.
23. Aquarium Automation.
2.The Gastrointestinal Tract.
3.Child monitoring using RF.
4.MesoScaled Aerial Robot.
5.Article tracking system .
6.GSM based data monitoring system.
7.Remote Temperature Monitoring through GSM.
8.Intelligent TAXI metering.
9.Mine Security System with wireless connectivity.
11.Railway Coach Automation.
12.Bus Stop Monitoring System.
13.Electronic Objective Test System.
14.Reflex Action Measurement.
15.Path Tracking Robot.
16.Electronic Inventory Control
16.Remote Sales Terminal.
17.Remote Irrigation control and monitoring
18.Weather monitoring using GSM for hazardous areas.
19.Water reservoir monitoring and PUMP station control
20.Wild life monitoring and location indicators for visitors
21.INTELLIGENT LINE FOLLOWER ROBOT FOR INDUSTRIES
22.DTMF SYSTEM TO CONTROL INDUSTRIAL MACHINES (DSCIM)
23.VEHICLE ANTI-COLLISION SYSTEM For Automobiles
24.MesoScaled Aerial Robot
25.KEYLESS INFORMATION ENTRY
26.RF BASED CAR CONTROLLING AND TRACKING SYSTEM
27.Electronic Hire Meter for Automobile.
29.Railway Track Parameter Measurement.
30.Unmanned Railway Crossing.
31.Unmanned Petrol Pump.
32.GSM Based Wildlife monitoring System
33.Vehicle Speed Controlling system
34.SOLAR ENERGY BASED INTELLIGENT TRAIN
35.A Model ECU for Nano Car
36.RF-ID based Hospital Automation System
37.Wireless Controlled Green House Automation
38.Intelligent Speed Breaker
39. Home Security System.
40. Controller based bank token display.
41. Lift control System.
42. Production Line Monitoring System.
43. Fingerprint based Door latch.
44. Fingerprint based attendance system.
45. Fingerprint based vehicle security System.
46. Fingerprint based home security system.
47. Hotel Power Management.
48. Virtual Arm
49. Under Water Vehicle
50. Wireless Robotic Crane.
51. Heart Beat Monitoring System.
52. Person Tracker.
53. Patient Monitoring System.
2. DTMF SYSTEM TO CONTROL INDUSTRIAL MACHINES (DSCIM)
3. VEHICLE ANTI-COLLISION SYSTEM For Automobiles
4. MesoScaled Aerial Robot
5. KEYLESS INFORMATION ENTRY
6. RF BASED CAR CONTROLLING AND TRACKING SYSTEM
7. Electronic Hire Meter for Automobile.
8. Solar Tracking.
9. Railway Track Parameter Measurement.
10. Unmanned Railway Crossing.
11. Intelligent Petrol Bunk.
12. GSM Based Wildlife monitoring System
13. Vehicle Speed Controlling system
14. SOLAR ENERGY BASED INTELLIGENT TRAIN
15. A Model ECU for Nano Car
16. RF-ID based Hospital Automation System
17. Wireless Controlled Green House Automation
18. Intelligent Speed Breaker
19. Home Security System.
20. Controller based bank token display.
21. Lift control System.
22. Production Line Monitoring System.
23. Fingerprint based vehicle security System.
24. Virtual Arm
25. Under Water Vehicle
26. Heart Beat Monitoring System.
27. Patient Monitoring System.
The TigerSHARC processor is the newest and most powerful member of this family, incorporating mechanisms such as SIMD, VLIW and short-vector memory access in a single processor. This is the first time that all these techniques have been combined in a real-time processor.
The TigerSHARC DSP is an ultra-high-performance static superscalar architecture optimized for telecommunications infrastructure and other computationally demanding applications. This unique architecture combines elements of RISC, VLIW, and standard DSP processors to provide native support for 8-, 16-, and 32-bit fixed-point, as well as floating-point, data types on a single chip.
Large on-chip memory, extremely high internal and external bandwidths and dual compute blocks provide the capabilities needed to handle a vast array of computationally demanding, large signal-processing tasks.
As has been demonstrated in several application spaces, most notably the 3G telecoms infrastructure equipment market, TigerSHARC is the only DSP solution with the performance and instruction set to enable an 'all-software' approach. This means a TigerSHARC-based solution is better equipped to address manufacturers' requirements for flexibility, high performance, reduced bill-of-materials cost and added capacity than traditional hardware approaches that rely heavily on ASICs (application-specific integrated circuits), FPGAs (field-programmable gate arrays) and/or ASSPs (application-specific standard products).
Through this combination, the TigerSHARC Processor gains the unique ability to process 1, 8, 16 and 32-bit fixed-point as well as floating-point data types on a single chip. This proprietary architecture establishes it in a leading position in the critical areas of performance, integration, flexibility and scalability. Optimising throughput, not just clock speed, drives a balanced DSP architecture and with throughput as the metric, the TigerSHARC Processor is the highest performance DSP for communications infrastructure and multiprocessing applications currently available.
While providing high system performance, it also retains the highest possible flexibility in software and hardware development - flexibility without compromise. For general-purpose multiprocessing applications, the TigerSHARC Processor's balanced architecture optimises system cost, power and density.
A single TigerSHARC Processor, with its large on-chip memory, zero overhead DMA engine, large I/O throughput, and integrated multiprocessing support, has the necessary integration to be a complete node of a multiprocessing system. This enables a multiprocessor network exclusively made up of TigerSHARCs without any expensive and power consuming external memories or logic.
The latest members of the TigerSHARC family are the ADSP-TS201S, ADSP-TS202S and ADSP-TS203S. The ADSP-TS201S operates at 600 MHz with 24 Mbits of on-chip memory and can execute 4.8 billion MACs per second while achieving high floating-point DSP performance. The ADSP-TS202S operates at 500 MHz with 12 Mbits, and the ADSP-TS203S at 500 MHz with 4 Mbits.
The TigerSHARC Processor's parallelism allows up to four 32-bit instructions per cycle, while an enhanced communication instruction set reduces the demanding signal-processing workloads associated with wireless to a manageable level. The TigerSHARC also provides an unmatched level of both internal and external bandwidth, enabling high computation rates and high-data-rate processing.
The combination of all the above mentioned features positions the TigerSHARC Processor as an excellent candidate for applications requiring extremely high throughput such as the channel decoding algorithms of wireless communications.
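The throughput figures quoted above can be cross-checked with simple arithmetic. The sketch below divides the quoted MAC rate by the clock rate; the per-cycle interpretation is our own back-of-envelope reading, not an Analog Devices specification.

```python
# Sanity check of the ADSP-TS201S figures quoted above: 4.8 billion
# MACs per second at a 600 MHz clock implies 8 MACs retired per cycle.
clock_hz = 600e6          # ADSP-TS201S core clock (from the text)
mac_rate = 4.8e9          # quoted 16-bit MAC throughput (from the text)

macs_per_cycle = mac_rate / clock_hz
print(macs_per_cycle)     # 8.0
```

Eight MACs per cycle is consistent with the dual compute blocks described above, each completing several 16-bit MACs per cycle via SIMD.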
In summer 1977, ESA placed the first technological study contract in the domain of intersatellite optical links. Now, twenty years later, a major milestone has been reached, with the SILEX laser terminals having been qualification-tested and made ready for integration with their host spacecraft. At the same time, ESA is preparing itself for a new challenge: the potential massive use of optical cross-links in satellite constellations for mobile communications and global multimedia services. This is an opportune moment to look back at the past twenty years of ESA effort in laser communications, to take stock of the results achieved and to reflect on ways to face the challenges of the future.
Twenty years ago, in summer 1977, ESA placed a technological research contract for the assessment of modulators for high-data-rate laser links in space. This marked the beginning of a long and sustained ESA involvement in space optical communications. A large number of study contracts and preparatory hardware developments followed, conducted under various ESA R&D and support technology programmes. In the mid-1980s, ESA took an ambitious step by embarking on the SILEX (Semiconductor laser Intersatellite Link Experiment) programme, to demonstrate a pre-operational optical link in space.
SILEX, which will be in operation in the year 2000, has put ESA in a world-leading position in civilian optical intersatellite links. While SILEX formed the backbone of ESA's optical communications activities in the recent past, additional R&D activities were undertaken to develop attractive second-generation systems, particularly for the commercial satellite market. Indeed, at the turn of the century, literally thousands of intersatellite links - radio-frequency (RF) and optical - are expected to be in operation in commercial multi-satellite constellations providing mobile communications, video conferencing and multimedia services. The race is on for the European laser communication industry to enter this lucrative market. Optical technology offers too many advantages in terms of mass, power, system flexibility and cost to leave the field entirely to RF. With the heritage of twenty years of technological preparation, European industry is well positioned to face this burgeoning demand for commercial laser terminals.
The early days
When ESA started to consider optics for intersatellite communications, virtually no component technology was available to support space system development. The available laser sources were rather bulky and primarily laboratory devices. ESA selected the CO2 gas laser for its initial work. This laser was the most efficient and reliable laser available at the time and Europe had a considerable background in CO2 laser technology for industrial applications. ESA undertook a detailed design study of a CO2 laser communication terminal and proceeded with the breadboarding of all critical subsystems which were integrated and tested in a complete laboratory breadboard transceiver model.
This laboratory system breadboarding enabled ESA to get acquainted with the intricacies of coherent, free-space optical communication. However, it soon became evident that the 10-micron CO2 laser was not the winning technology for use in space because of weight, lifetime and operational problems. Towards the end of the 1970s, semiconductor diode lasers operating at room temperature became available, providing a very promising transmitter source for optical intersatellite links. In 1980, therefore, ESA placed the first studies to explore the potential of this new device for intersatellite links. At the same time, the French national space agency, CNES, started to look into a laser-diode-based optical data-relay system called Pastel. This line of development was subsequently followed and resulted in the decision, in 1985, to embark on the SILEX pre-operational, in-orbit optical link experiment.
SILEX is a free-space optical communication system consisting of two optical communication payloads, to be embarked on the ESA Artemis (Advanced Relay and TEchnology MIssion Satellite) spacecraft and on the French Earth-observation spacecraft SPOT-4. It will allow data transmission at 50 Mbps from low Earth orbit (LEO) to geostationary orbit (GEO) using GaAlAs laser diodes and direct detection.
The SILEX Phase A and B studies were conducted around 1985, followed by technology breadboarding and predevelopment of the main critical elements which were tested on the so-called System Test Bed to verify the feasibility of SILEX. A detailed design phase was carried out in parallel with the System Test Bed activities up to July 1989. At that time, the development of SPOT-4 Phase C/D was agreed with an optical terminal as passenger. This was an important decision since it made a suitable partner satellite available for the ESA data-relay satellite project; the stage was therefore set to start the main SILEX development effort in October 1989.
In March 1997, a major milestone was reached in the SILEX programme: both terminals underwent a stringent environmental test programme and are now ready for integration with their host spacecraft. However, due to the agreed SPOT-4 and Artemis launch dates, it is likely that the in-orbit demonstration of the overall system will not start before mid-2000; consequently, the GEO terminal will need to be stored after completion of the spacecraft testing. The first host spacecraft (SPOT-4) is planned for launch in February 1998. The launch of Artemis on a Japanese H2A is delayed for non-technical reasons until February 2000. Apart from launching Artemis, Japan is participating in the SILEX programme with its own laser terminal, LUCE (Laser Utilizing Communications Equipment), to be carried onboard the Japanese OICETS satellite (Optical Inter-orbit Communications Engineering Test Satellite), set for launch in summer 2000.
Optical ground station on Tenerife
As part of the SILEX in-orbit check-out programme, ESA started to construct an optical ground station in the Canary Islands in 1993 (Fig. 2). This station, which will be completed by the end of 1997, simulates a LEO optical terminal using a 1 m telescope, allowing the performance of the GEO optical terminal on Artemis to be verified. The optical ground station will receive and evaluate the data transmitted from Artemis and will simultaneously transmit data at optical wavelengths towards Artemis. In addition to its primary objective as the SILEX in-orbit check-out facility, the optical ground station will also be used for space-debris tracking, lidar monitoring of the atmosphere and astronomical observations.
Labels: Computer Topics
Wireless Integrated Network Sensors (WINS) now provide a new monitoring and control capability for guarding the borders of a country. Using this concept we can easily identify a stranger or terrorists crossing the border. The border area is divided into a number of nodes; each node is in contact with the others and with the main node.
The noise produced by the footsteps of the stranger is collected using the sensor. The sensed signal is then converted into a power spectral density and compared with a reference value chosen in advance. The comparison result is processed by a microprocessor, which sends appropriate signals to the main node; the stranger is thus identified at the main node. A micropower spectrum analyzer has been developed to enable low-power operation of the entire WINS system.
WINS thus requires only microwatts of power, and it is much cheaper than other security systems in use, such as RADAR. It can also be used for short-distance communication (less than 1 km) and introduces very little delay, so it is reasonably fast. On a global scale, WINS will permit monitoring of land, water and air resources for environmental purposes. On a national scale, transportation systems and borders will be monitored for efficiency, safety and security.
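The detection chain described above - sample the acoustic sensor, estimate the power spectral density, compare band power against a preset reference - can be sketched as follows. The signal shapes, sample rate, frequency band and threshold are illustrative assumptions, not WINS design values.

```python
# Sketch of footstep detection via power spectral density (PSD).
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Power in [f_lo, f_hi] Hz from a one-sided windowed periodogram."""
    n = len(signal)
    spectrum = np.fft.rfft(signal * np.hanning(n))
    psd = (np.abs(spectrum) ** 2) / (fs * n)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    mask = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[mask].sum()

def intruder_detected(signal, fs=1000.0, threshold=1e-4):
    # Footsteps concentrate energy at low frequencies; 1-50 Hz assumed.
    return band_power(signal, fs, 1.0, 50.0) > threshold

# Quiet background noise vs. a synthetic low-frequency "footstep" tone.
rng = np.random.default_rng(0)
t = np.arange(0, 1.0, 1e-3)
quiet = 1e-4 * rng.standard_normal(t.size)
steps = quiet + 0.5 * np.sin(2 * np.pi * 5 * t)
print(intruder_detected(quiet), intruder_detected(steps))  # False True
```

In a real node the threshold would be set from measured background statistics rather than fixed by hand.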
Smart materials are materials which have the capability to respond to changes in their own condition, or in the environment to which they are exposed, in a useful and usually repetitive manner. They are also known by other names such as:
· Intelligent materials,
· Active materials and
· Adaptive materials.
The devices made using smart materials are called “smart devices”. Similarly, the systems and structures that incorporate smart materials are called “smart systems” and “smart structures”. In other words, the complexity increases from smart materials to smart structures.
Stimulus Response System
A smart material, or active material, gives a unique output for a well-defined input. The input may be in the form of mechanical stress/strain, an electrical/magnetic field or a change in temperature. Based on input and output, smart materials are classified as follows.
1. Shape Memory Alloys (SMAs)
These are smart materials which have the ability to return to some previously defined shape or size when subjected to the appropriate thermal change.
Eg.: Titanium-Nickel Alloys.
2. Magnetostrictive Materials
These are smart materials which undergo deformation when subjected to a magnetic field.
Eg.: Terfenol-D (an alloy of iron and terbium).
3. Piezoelectric Materials
These are materials which produce a voltage when a surface strain is introduced. Conversely, the material undergoes deformation when an electric field is applied across it.
4. Electrorheological Fluids
They are the colloidal suspensions that undergo changes in viscosity when subjected to an electric field. Such fluids are highly sensitive and respond instantaneously to any change in the applied electric field.
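The piezoelectric behaviour in item 3 can be made concrete with a small calculation covering both directions of the effect. The d33 coefficient below is an assumed, typical value for a PZT-type ceramic, chosen purely for illustration.

```python
# Illustrative piezoelectric calculation. d33 is an assumed typical
# value for a PZT-like ceramic (units: C/N, equivalently m/V).
d33 = 500e-12

# Direct effect: charge generated by a force along the poling axis.
force_newton = 10.0
charge_coulomb = d33 * force_newton          # ~5e-9 C, i.e. 5 nC

# Converse effect: free strain from an applied electric field,
# e.g. 100 V across a 100-micrometre-thick layer -> 1e6 V/m.
field_v_per_m = 100.0 / 100e-6
strain = d33 * field_v_per_m                 # ~5e-4, i.e. 0.05 % strain

print(charge_coulomb, strain)
```

These small but fast, repeatable responses are why piezoelectric elements suit the sensing and vibration-control applications listed below.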
1. Smart materials are used in aircrafts and spacecrafts to control vibrations and excessive deflections.
2. Smart concrete is used in smart structures. Smart concrete (a composite of carbon fibres and concrete) is capable of sensing minute structural cracks / flaws.
3. Smart materials have good potential to be used in health care markets. Active control drug delivery devices such as Insulin Pump is a possibility.
Smart materials also have applications in the design of smart buildings and state-of-the-art vehicles, where they are used for vibration control, noise mitigation, safety and performance.
WiMAX is a telecommunications technology that provides wireless transmission of
data using a variety of transmission modes, from point-to-point links
to portable internet access. The technology provides
up to 75 Mbit/s symmetric broadband speed without the need for cables.
The technology is based on the IEEE 802.16 standard (also called
Broadband Wireless Access). The name “WiMAX” was created by
the WiMAX Forum, which was formed in June 2001 to promote conformity
and interoperability of the standard. The forum describes WiMAX as
“a standards-based technology enabling the delivery of last mile
wireless broadband access as an alternative to cable and DSL”.
The terms “fixed WiMAX”, “mobile WiMAX”,
“802.16d” and “802.16e” are frequently used
incorrectly. Correct definitions are the following:
• 802.16-2004 is often called 802.16d, since that was the working
party that developed the standard. It is also frequently referred to as
“fixed WiMAX” since it has no support for mobility.
• 802.16e-2005 is an amendment to 802.16-2004 and is often
referred to in shortened form as 802.16e. It introduced support for
mobility, amongst other things, and is therefore also known as
“mobile WiMAX”.
The concept of a light-tree is introduced
in a wavelength-routed optical network. A light-tree is a
point-to-multipoint generalization of a lightpath. A lightpath is a
point-to-point all-optical wavelength channel connecting a transmitter
at a source node to a receiver at a destination node. Lightpath
communication can significantly reduce the number of hops (or
lightpaths) a packet has to traverse; and this reduction can, in turn,
significantly improve the network’s throughput. We extend the
lightpath concept by incorporating an optical multicasting capability
at the routing nodes in order to increase the logical connectivity of
the network and further decrease its hop distance. We refer to such a
point-to-multipoint extension as a light-tree. Light-trees can not only
provide improved performance for unicast traffic, but they can also
naturally better support multicast traffic and broadcast traffic. In this
study, we shall concentrate on the application and advantages of
light-trees to unicast and broadcast traffic. We formulate the
light-tree-based virtual topology design problem as an optimization
problem with one of two possible objective functions: for a given
traffic matrix, (i) Minimize the network-wide average packet hop distance, or,
(ii) Minimize the total number of transceivers in the network. We
demonstrate that an optimum light-tree-based virtual topology has clear
advantages over an optimum lightpath-based virtual topology with
respect to the above two objectives.
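A toy calculation makes objective (ii) concrete. Suppose one source must broadcast to D destinations: with lightpaths the source needs one transmitter per destination, while a light-tree needs a single transmitter whose signal is split optically at the routing nodes. The counts below are an illustrative assumption, not results from the study.

```python
# Transceiver count for broadcasting from one source to d destinations.
def transceivers_lightpath(d):
    # d point-to-point channels: d transmitters at the source + d receivers.
    return d + d

def transceivers_lighttree(d):
    # One transmitter drives an optical multicast tree; still d receivers.
    return 1 + d

for d in (3, 10):
    print(d, transceivers_lightpath(d), transceivers_lighttree(d))
```

The saving grows linearly with the number of destinations, which is why light-trees help most for broadcast-heavy traffic.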
Normally we use software in pairs, e.g. the SCADA-PLC pair. Here SCADA acts as the master and the PLC as the slave: SCADA gives instructions and the PLC obeys them. In short, control rests safely in SCADA's hands. It is more convenient to use such pairs even though SCADA alone could carry out all the operations.
SCADA operations fall under three categories: industrial, infrastructure and facility-based processes. Industrial processes include manufacturing, production, power generation, etc. Infrastructure-based processes cover water treatment, wastewater collection, refining, etc. Facility processes include buildings, airports and other public-sector installations.
SCADA mainly consists of five subsystems. Each is assigned to perform a particular task. The five subsystems are:
+ HMI or Human-Machine Interface - the apparatus that presents process data to the human operator, through which the operator monitors and controls the process.
+ A supervisory computer system - acquires data and sends control commands to the process.
+ RTU or Remote Terminal Unit - connected to sensors in the process; it converts sensor signals to digital data and sends the data to the supervisory system.
+ PLC or Programmable Logic Controller - used as a field device in the process, since PLCs are more economical, flexible, versatile and configurable than general-purpose RTUs.
+ Communication infrastructure - many modes of communication are available; it connects the supervisory system to the RTUs.
Three generations of SCADA software exist: first generation (monolithic), second generation (distributed) and third generation (integrated). The third generation is now in use.
SCADA has applications in energy management systems, multitasking, automation, industrial control systems, data exchange, pipeline transport, graphical design, etc. SCADA brings almost every section tactically under its control, and it is used mainly for automation. Much SCADA software is freely available, and another advantage is that little programming is needed, so even a person who is not skilled in programming can handle SCADA efficiently.
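The master/slave relationship among the subsystems above can be sketched as a single supervisory scan: the supervisory computer polls an RTU for a sensor reading and commands a PLC when the value leaves its band. All class names and the 80-unit setpoint are illustrative assumptions, not a real SCADA API.

```python
# Minimal sketch of one SCADA scan cycle (all names are hypothetical).
class RTU:
    """Remote Terminal Unit: digitizes a field sensor reading."""
    def __init__(self, sensor):
        self.sensor = sensor
    def read(self):
        return self.sensor()

class PLC:
    """Programmable Logic Controller: executes commands on field devices."""
    def __init__(self):
        self.pump_on = False
    def command(self, pump_on):
        self.pump_on = pump_on

def supervisory_scan(rtu, plc, setpoint=80.0):
    """One scan: acquire data, decide, send a command, report back."""
    level = rtu.read()
    plc.command(level < setpoint)   # run the pump while below setpoint
    return level, plc.pump_on

plc = PLC()
print(supervisory_scan(RTU(lambda: 65.0), plc))   # (65.0, True)
print(supervisory_scan(RTU(lambda: 92.0), plc))   # (92.0, False)
```

In a deployed system the same scan would run continuously over a communication link, with the HMI displaying each reading and command.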
In the engineering field, actuators have many applications across different sectors. Here actuators are used as mechanisms to produce motion, and also to stop the motion of a particular device. So it is clear that an actuator simply actuates motion - starting or stopping it, depending on the machine.
In electronics engineering, actuators are a subdivision of transducers. They are used to transform an input signal into motion; normally the input signal is electrical. Examples include electric motors, pistons, relays, pneumatic actuators and piezoelectric actuators. Actuators can sometimes be used as hardware components. Different types of actuators are available, such as plasma, pneumatic, electric and linear actuators.
An actuator is a type of tool used to put something into automatic action. Actuators work with a number of power sources, and depending on the type of actuator being used, different tools assist in putting the device into motion. Many are found in work areas, since most of them are used to move valves or doors in systems.
Sometimes they are used to maneuver certain mechanical devices at work. Depending on shape and style, actuators are divided into different classes; linear, valve and hydraulic actuators are the best known and most used among them. Each has its assigned functions - hydraulic actuators, for example, can deliver greater pressure, size and movement to the object.
Choosing among these types comes down to knowing what kind of motion control you need; linear actuators, for instance, provide controlled speed, acceleration and accuracy, often via a belt drive. No matter what type of actuator is needed, there is one that makes it easier to maneuver a given object or space in your work area.
Thursday, 12 August 2010
The Electric Motor Control with Regenerative Braking (EMCRB) project will develop a test system to investigate electric vehicular drive systems and regenerative braking. A three phase permanent magnet synchronous motor, flywheel, and control electronics will comprise the test system. Data collected from the test system will be used to develop a model that will establish the efficiency of regenerative braking. An ongoing Bradley University Mechanical Engineering project will utilize the efficiency data to design an ultra light electric vehicle. A future Bradley University Electrical Engineering project may expand upon the test bench system developed in the EMCRB project.
Figure 1: Regeneration to batteries. Channel 1 in dark blue represents the voltage across the batteries; note that the zero point for voltage is off the page to the bottom. The apparent zero for voltage is actually at 23.8 volts relative to ground. Channel 2 in light blue is the current into the positive terminal of the batteries. The math function in red is equal to CH1*CH2 and represents the power delivered to the batteries. The area below this represents the energy delivered to the batteries.
This project is the first phase of a multi-phase effort.
Phase I Goals:
• Design and implement a prototype electric vehicle test platform for testing with the following specifications:
– Maximum speed of 30 mph
– Curb weight of 800 to 1800 lbs
– Regenerative braking capabilities
– Create drive model
• Determine vehicle properties
• Select optimal components for test platform
• Acquire and display data from the motor controller and sensors:
– Distance traveled and time
– Maximum forward and regenerative brake current
– Percent of extra distance gained from regenerative braking
– Total amp-hours remaining
• Analyze and evaluate drive model.
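The Figure 1 math trace (CH1*CH2) is the instantaneous power into the batteries, and the recovered energy is its time integral. The sketch below integrates a handful of synthetic oscilloscope samples; the waveform values are illustrative, not measured EMCRB data.

```python
# Energy recovered during regeneration = integral of V(t)*I(t) dt,
# approximated here as a sum over evenly spaced scope samples.
dt = 0.01                                   # s between samples (assumed)
volts = [23.8, 24.0, 24.1, 24.0, 23.9]      # CH1: battery voltage
amps = [0.0, 2.0, 3.0, 2.0, 0.5]            # CH2: current into batteries

power = [v * i for v, i in zip(volts, amps)]   # W, the red "math" trace
energy = sum(p * dt for p in power)            # J, area under the trace

print(energy)
```

With real captures, the same accumulation over the whole braking event yields the regeneration efficiency figure the drive model needs.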
Side impact crashes are particularly dangerous because there is little room for large deformation to protect an occupant from the crash forces; the side impact collision is the second largest cause of death in the United States after the frontal crash. The steadily rising cost of fuel and the exhaust emissions of the automobile industry are also major concerns, so the safety, fuel efficiency and emission regulation of passenger cars are important contemporary issues. The best way to increase fuel efficiency without sacrificing safety is to employ composite materials in car bodies, because composites have higher specific strength than steel. Increased use of composite material directly reduces the total weight of the car and its gas emissions. In this research, Carbon/Epoxy AS4/3501-6 is used as the material for the side impact beam; it has adequate load-carrying capacity and absorbs more strain energy than steel.
The finite element models of a Ford Taurus car and of the Moving Deformable Barrier (MDB), as developed by the National Crash Analysis Center (NCAC), have been utilized for the analysis in this thesis. The current side impact beam is removed from the car, and the new beam, developed using CATIA and MSC.Patran, is merged onto the driver side of the front door of the car model.
The total energy absorption of the new beam, in steel and in composite, is compared with that of the current beam. The intrusion of the beam is evaluated using the FMVSS 214 and IIHS side impact safety methods. The new composite impact beam has high impact energy absorption capability compared to the current beam and to the new steel beam, with a 65% reduction in weight.
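The specific-strength argument above can be checked with a rough scaling: for a strength-governed member, mass scales with density/strength. The property values below are typical textbook numbers for mild steel and unidirectional carbon/epoxy, assumed for illustration only (not AS4/3501-6 datasheet values).

```python
# Specific strength (strength per unit density) comparison.
steel_density, steel_strength = 7850.0, 500e6      # kg/m^3, Pa (assumed)
cfrp_density, cfrp_strength = 1600.0, 1500e6       # kg/m^3, Pa (assumed)

steel_specific = steel_strength / steel_density    # ~6.4e4 (Pa*m^3/kg)
cfrp_specific = cfrp_strength / cfrp_density       # ~9.4e5 (Pa*m^3/kg)

print(cfrp_specific / steel_specific)   # carbon/epoxy ahead by >10x
```

The 65% weight reduction reported in the thesis is smaller than this idealized scaling would suggest, which is expected: the real beam is sized by stiffness, geometry and crash energy absorption, not by static strength alone.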
The rollers used in a rice mill are also called friction rollers; their purpose is to remove the husk from the rice. In practice there are two rollers rotating in opposite directions, and the paddy flows between them, separating husk from rice. During this separation a large amount of friction is induced in the rollers, giving them a chance of breaking at exactly the centre of the roller. The life of these rollers is no more than two to seven days.
The present work is carried out to overcome these problems. We approach the problem in two ways, aiming to increase both the life and the strength of the rollers. Since the rollers have a circular cross-section and many boundary conditions, analytical methods fail to analyze them; hence we employ numerical methods, which provide an approximate but acceptable solution. The Finite Element Method is well suited to such boundary value problems and, combined with a high-speed digital computer, analyzes complex domains with relative ease.
For this analysis the component is modeled in Pro/E, imported into HyperMesh for pre-processing, and then exported to ANSYS for structural, linear, and rotational stress analysis.
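As a rough illustration of why the failure concentrates at mid-span, the roller can be idealized as a simply supported shaft with a central load; the load, span, and diameter below are assumptions for illustration, not the mill's actual figures.

```python
import math

# Hedged sketch: bending stress of a simply supported solid circular
# shaft under a central point load peaks at mid-span, which matches
# where the rollers are reported to break.
def max_bending_stress(load_n, span_m, dia_m):
    """sigma = M*c/I for a solid circular shaft with a central load."""
    m_max = load_n * span_m / 4.0      # maximum bending moment, at mid-span
    c = dia_m / 2.0                    # distance to outer fibre
    i = math.pi * dia_m**4 / 64.0      # second moment of area
    return m_max * c / i

# e.g. an assumed 2 kN load over a 0.3 m span on a 0.2 m roller core
sigma = max_bending_stress(2000.0, 0.3, 0.2)
```

The point is qualitative: both the moment and the stress are largest at the center, so fatigue cracks initiate there.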
The project “DESIGN AND ANALYSIS OF 4-STROKE S.I. ENGINE PISTON” is about designing the piston according to the forces acting on it from the gases released during combustion.
The piston head is treated as a particular load case, and the piston is analyzed for the stresses developed under these conditions.
First, the piston is designed to the specifications. The model is then subjected to the specified conditions, the resulting stresses are checked, and the model is checked for failure. After the analysis, changes are made to the model if required.
In the analysis, a model of the piston is generated in Pro/E and the finite element model is built in ANSYS. Loads and boundary conditions are applied, and the model is solved for the engine response.
The results are calculated and tabulated below, and the stresses acting on the body are shown.
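As a rough illustration of the gas load the piston is analyzed against, the force on the crown can be estimated as peak cylinder pressure times bore area; the bore and pressure values below are assumptions for illustration, not the engine's actual data.

```python
import math

# Hedged back-of-envelope for the combustion gas load on the piston:
# axial force on the crown = peak gas pressure * bore area.
BORE = 0.068          # cylinder bore, m (assumed)
P_PEAK = 5.0e6        # peak combustion gas pressure, Pa (assumed)

def gas_force(bore_m, pressure_pa):
    """Axial force on the piston crown from cylinder pressure."""
    area = math.pi * bore_m**2 / 4.0
    return pressure_pa * area

print(round(gas_force(BORE, P_PEAK)))   # roughly 18 kN for these figures
```

This single force, applied over a crank cycle, is the dominant input to the stress analysis described above.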
The casing determines turbine clearances and the relative positions of the nozzles to the turbine buckets; this positioning is critical to gas turbine performance.
The main objective of the present investigation is to analyze the temperature distribution and the stresses developed throughout the casing using FEM. In this project, steady-state thermal analysis, coupled thermal and structural analysis, and optimization of the casing are carried out. The thermal and structural analysis of the casing is performed with gas temperatures higher than the existing operating conditions.
In this project we designed and analyzed the crankshaft of a 4-stroke S.I. engine using ANSYS, software that works on the basis of the finite element method.
First, we created a solid model of the crankshaft using the design software Pro/E. The model was then imported into ANSYS and analyzed by applying the conditions considered in the design, and checked for strength and life. The specifications required for the design are taken from the drafted design.
In the analysis, the initial design was found to produce excessive stresses; some modifications were made to the design, it was analyzed again, and the stresses developed were lower than in the previous design.
The engine used is a four-stroke, twin-cylinder, multi-utility S.I. engine with a horizontal shaft. Its cylinder volume is 196 cc, and it is used in cold countries for snow cutting as well as for grass cutting.
The connecting rod is a structural member of the engine which converts the reciprocating motion of the piston into the rotary motion of the crankshaft.
While transferring power from the piston to the crankshaft, the connecting rod carries the load imposed on the piston by the combustion process in the combustion chamber.
This load reaches its maximum at a particular crank angle; hence the connecting rod is analyzed for the stresses developed under these load conditions.
In this analysis, a model of the connecting rod is generated in Pro/E and analyzed in ANSYS using the finite element method by applying loads and boundary conditions, then solved for the engineering responses.
The present scenario in the automotive industry is an increasing demand on trucks, not only in cost and weight but also in complete-vehicle features and overall performance. The chassis plays an important role in the design of any truck.
Chassis design in general is a complex task, and arriving at a solution that yields good performance is tedious. Since the chassis has complex geometry and loading patterns, there is no well-defined analytical procedure to analyze it, so a numerical method is adopted, of which the Finite Element Technique is the most widely used.
The main objective of this work is to evaluate the static characteristics of a truck chassis under different load conditions. Geometric modeling of the various components of the chassis has been carried out in part mode as 3-D models using Pro/ENGINEER. The properties, viz. cross-sectional area, beam height, and area moments of inertia of these 3-D modeled parts, are estimated in Pro/ENGINEER. These properties are then used as input for the finite element analysis in ANSYS Workbench.
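The kind of section properties described above can be sketched for an idealized C-channel chassis rail, modeled as an outer rectangle minus an inner one; the dimensions below are illustrative assumptions, not the actual truck's values.

```python
# Hedged sketch of the beam section properties fed to the FE analysis:
# area and area moment of inertia of a C-channel rail about its
# horizontal centroidal axis (outer rectangle minus inner rectangle).
def channel_properties(B, H, t_web, t_flange):
    """Area and Ixx of a C-channel; B = flange width, H = depth."""
    b = B - t_web                 # width of the removed inner rectangle
    h = H - 2.0 * t_flange        # height of the removed inner rectangle
    area = B * H - b * h
    ixx = (B * H**3 - b * h**3) / 12.0   # both rectangles share the axis
    return area, ixx

# e.g. an assumed 75 mm x 200 mm rail with 6 mm walls (metres)
area, ixx = channel_properties(0.075, 0.200, 0.006, 0.006)
```

In the actual workflow these values come from the CAD system, but the subtraction-of-rectangles form shows what the solver consumes.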
The present work is carried out on a two-station, two-spindle cam bore roughing/finishing SPM. This special purpose machine, used exclusively by Hero Honda, performs fine boring to reduce the oil consumption of the two-wheeler.
The objective of this project is to generate a parametric assembly drawing of the spindle assembly using Pro/E, and to analyze the spindle assembly using ANSYS.
First, we created a solid model of the spindle assembly using the design software Pro/E. The model was then imported into ANSYS and analyzed by applying the conditions considered in the design, and checked for strength and life. The specifications required for the design are taken from the drafted design.
The analysis of spindle assembly includes:
- Analysis of spindle deflection due to tensioning of belt
- Spindle analysis for deflection due to axial and radial forces
- Based on the deflections, deciding on an optimum positioning of the spindle bearings
Much of the fatigue damage in the tools and equipment of an auto shop can be attributed to the compressive forces acting on them. Buckling under axial load is one of the most common failures, appearing before crushing. This project is about buckling analysis of a two-post screw auto lift. The compression members we come across do not fail entirely by crushing: members that are considerably long compared with their lateral dimensions start bending, i.e. buckling, when the axial load reaches a certain critical value, and the screw auto lift is one such member. Both experimental and analytical work has been performed on a screw auto lift, employing the commercial ANSYS program, based on the finite element method, on a 3-D solid model developed in Pro/E.
The possible use of alloy (nickel) steels in place of mild steel offers improved properties, such as an increase in strength and elastic limit. The results suggest that nickel steels improve strength, ductility, and corrosion resistance over the specifications of the two-post screw auto lift, and also increase the durability of the equipment.
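The critical-load idea behind the analysis can be sketched with the classical Euler buckling formula; the material, end-condition factor, and dimensions below are assumptions for illustration, not the lift's actual design data.

```python
import math

# Hedged Euler-buckling sketch for a slender screw column:
# P_cr = pi^2 * E * I / (K*L)^2.
E_STEEL = 200e9       # Young's modulus, Pa (mild steel, approximate)

def euler_critical_load(dia_m, length_m, k=1.0, e=E_STEEL):
    """Critical axial load for a solid circular column (pin-pin, K=1)."""
    i = math.pi * dia_m**4 / 64.0      # second moment of area
    return math.pi**2 * e * i / (k * length_m) ** 2

# e.g. an assumed 60 mm screw with 2 m between supports
p_cr = euler_critical_load(0.060, 2.0)
```

Loads approaching `p_cr` are what the finite element buckling analysis is checking for; a nickel steel's higher elastic limit raises the crushing margin, while the Euler load itself depends mainly on stiffness and geometry.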
The thesis combines analytical calculations with experimental measurements. By the use of modal theory, an analytical model is updated to resemble the experimental data.
It is designed to run at an extremely slow speed, yet be able to increase its speed when it is about to stall under a heavy load, thereby increasing the torque delivered to the load.
[How it works]:
The motor and its drive circuitry (drain and source of MOSFET Q6) are on their own supply, V1; V2 powers the electronics. Both supplies share a common ground.
Under normal running conditions the motor current is driven from V1 through Q6 and R8, with the gate voltage of Q6 applied by the gate driver Q3. The voltage drop across R8 is held at a determined value which keeps the motor running at a constant slow speed; this is accomplished by the value of R7, which helps bias Q3. Q1 and the trio Q2a, Q2b, Q2c, with their bias resistors, form a typical feedback voltage regulator. The output voltage of Q1 is predetermined so as to bias Q3 to the value needed, as well as biasing the base of Q4 to cutoff. Q7 acts as a variable resistance across R23.
With the motor at slow speed, transistors Q7, Q8, Q10, and Q11 are biased at cutoff.
When the motor is heavily loaded it begins to slow down; this decreases its apparent resistance, so more current flows through it and more voltage is dropped across R8, bringing the emitter of Q5 lower until Q4 and Q5 conduct. Q5 then conducts the excess current from the stalling motor into the voltage amplifier consisting of Q8, Q9, Q10, Q11 and its bias circuitry. The amplified voltage is fed to the base of Q7, causing it to conduct and bypass current around R23. This brings the base of Q2c lower, making it conduct less, which allows the output of Q1 to rise more positive; this increases the base voltage of Q3, and ultimately the gate of Q6 rises more positive to drive the motor at a higher voltage.
The base of Q4 is tied to the output of Q1 so that, as long as there is a need to increase the motor voltage during heavy loading, the base of Q4 rides along with the increase in voltage, keeping its base higher than its emitter and thus continuing to conduct as long as stall current flows.
Once the motor load is released and stall current is no longer present, the voltage amplifier shuts down, which cuts off Q7. The output of Q1 then automatically drops back to its predetermined value, causing the base of Q4 to drop below its emitter (cutoff), and the motor runs at its normal slow speed again.
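The current-sensing role of R8 described above can be sketched in a few lines; the resistor value and the threshold at which the boost stage wakes up are assumptions for illustration, not the actual circuit's values.

```python
# Hedged sketch of the R8 sensing idea: the voltage dropped across the
# sense resistor rises with motor current, and the boost path (Q4/Q5
# and the voltage amplifier) conducts once it crosses a threshold.
R8 = 0.47                  # sense resistor, ohms (assumed)
STALL_THRESHOLD_V = 1.4    # drop at which Q4/Q5 start conducting (assumed)

def sense_drop(motor_current_a):
    """Ohm's law: voltage developed across the sense resistor."""
    return motor_current_a * R8

def boost_active(motor_current_a):
    """True once the stall current pulls the drop past the threshold."""
    return sense_drop(motor_current_a) >= STALL_THRESHOLD_V

# a lightly loaded motor stays slow; a stalling one triggers the boost
print(boost_active(1.0), boost_active(4.0))
```

The real circuit does this continuously and in analog form, of course; the sketch only captures the threshold behaviour.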
The HVAC thermostat is one of the most common devices used in residential and industrial buildings to control the temperature of a space, be it a warehouse, a room, a hall, or an office. This thermostat project focuses on heating control of a space that uses an electric heater as its heat source. It basically consists of a comparator that switches the electric heater ON and OFF based on the sensed temperature.
Fan speed control is usually hardwired with two- or three-speed motors and incorporated into the thermostat. The temperature range of this thermostat is 5 °C to 30 °C with a tolerance of approximately 3 °C, so it is suitable only for non-critical temperature control, such as a room.
The circuit diagram shows the configuration of the HVAC thermostat. An LM358 op-amp is used as a comparator, sensing the reference voltage (pin 3) and the room temperature (pin 2). The thermistor is an NTC (negative temperature coefficient) type: its resistance drops when the temperature rises and vice versa, and it measures 20 kΩ at 25 °C. When the room temperature drops, the thermistor resistance rises and the output of the op-amp goes low. This causes the relay to turn OFF, and the heater conducts until the room temperature rises again.
The circuit is calibrated using variable resistor VR1. Set the lever of the slide or rotary potentiometer VR2 to the 25 °C position and place the thermistor in a space at 25 °C. By varying VR1, set the resistance at the point between relay ON and OFF. Use a relay with a contact rating suitable for the heater load.
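The sensing and switching logic described above can be sketched as follows. The 20 kΩ @ 25 °C rating comes from the text; the beta constant, divider resistor, and supply voltage are assumptions for illustration.

```python
import math

# Hedged sketch of the thermostat's sensing logic using the standard
# NTC beta model and a resistive divider feeding the comparator.
BETA = 4000.0          # assumed NTC beta constant, kelvin
R25 = 20_000.0         # thermistor resistance at 25 C (from the text)
R_FIXED = 20_000.0     # assumed fixed divider resistor
VCC = 12.0             # assumed supply voltage

def thermistor_resistance(temp_c):
    """NTC beta model: R = R25 * exp(B * (1/T - 1/T25))."""
    t = temp_c + 273.15
    return R25 * math.exp(BETA * (1.0 / t - 1.0 / 298.15))

def sense_voltage(temp_c):
    """Divider output at the comparator input (thermistor on top)."""
    r_t = thermistor_resistance(temp_c)
    return VCC * R_FIXED / (R_FIXED + r_t)

def heater_on(temp_c, setpoint_c=25.0):
    """Comparator decision: heat when the room is below the setpoint."""
    return sense_voltage(temp_c) < sense_voltage(setpoint_c)
```

A cold room raises the thermistor resistance, pulling the sense voltage below the reference and calling for heat, which mirrors the comparator behaviour in the circuit.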
Tuesday, 10 August 2010
The history of antimatter begins with a young physicist named Paul A. M. Dirac (1902-1984) and the strange implications of a mathematical equation. This British physicist formulated a theory for the motion of electrons in electric and magnetic fields. Such theories had been formulated before, but what was unique about Dirac's was that it included the effects of Einstein's Special Theory of Relativity. He formulated this theory in 1928, writing down an equation that combined quantum theory and special relativity to describe the behavior of the electron. Dirac's equation won him a Nobel Prize in 1933, but also posed a problem: just as the equation x² = 4 has two solutions (x = 2 and x = -2), Dirac's equation has two solutions, one for an electron with positive energy and one for an electron with negative energy. This theory led to the surprising prediction that the electron must have an "antiparticle" with the same mass but a positive electric charge.
In 1932, Carl Anderson observed this new particle experimentally, and it was named the "positron". This was the first known example of antimatter. In 1955 the antiproton was produced at the Berkeley Bevatron, and in 1995 scientists created the first antihydrogen atom at the CERN research facility in Europe by combining an antiproton with a positron. Dirac's equation predicted that every fundamental particle in nature must have a corresponding antiparticle. In each case the masses of the particle and antiparticle are identical and other properties are nearly identical, but the mathematical signs of some property are reversed: antiprotons, for example, have the same mass as a proton but the opposite electric charge. Since Dirac's time, scores of these particle-antiparticle pairings have been observed. Even particles with no electric charge, such as the neutron, have antiparticles.
Antiprotons do not exist in nature and currently are produced only by energetic particle collisions at large accelerator facilities (e.g. the Fermi National Accelerator Laboratory, Fermilab, in the US, or CERN in Geneva, Switzerland). The process typically involves accelerating protons to relativistic velocities (very near the speed of light) and slamming them into a metal target (e.g. tungsten). The high-energy protons are slowed or stopped by collisions with nuclei of the target; the kinetic energy of the rapidly moving protons is converted into matter in the form of various subatomic particles, some of which are antiprotons. Finally, the antiprotons are electromagnetically separated from the other particles, captured and cooled (slowed) by a Radio-Frequency Quadrupole (RFQ) linear accelerator operated as a decelerator, and stored in a storage cell called a Penning trap.
Note that antiprotons annihilate spontaneously when brought into contact with normal matter, so they must be stored and handled carefully. Currently the highest antiproton production level is on the order of nanograms per year.
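To get a feel for why even nanogram quantities are interesting, the energy released when antimatter annihilates with an equal mass of ordinary matter follows directly from E = mc²; a minimal sketch:

```python
# Hedged sketch: annihilating a mass m of antiprotons with an equal
# mass of ordinary matter converts both masses entirely to energy,
# releasing E = 2*m*c^2. The sample mass below is illustrative.
C = 2.998e8                      # speed of light, m/s

def annihilation_energy_j(antimatter_kg):
    """Energy released when antimatter meets an equal mass of matter."""
    return 2.0 * antimatter_kg * C**2

# one nanogram of antiprotons yields roughly 180 kJ
print(annihilation_energy_j(1e-12))
```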
Digital Subscriber Lines (DSL) are used to deliver high-rate digital data over existing ordinary phone lines. A modulation technology called Discrete Multitone (DMT) allows the transmission of high-speed data. DSL facilitates the simultaneous use of normal telephone services, ISDN, and high-speed data transmission, e.g. video. DMT-based DSL can be seen as the transition from existing copper lines to future fiber cables, which makes DSL economically interesting for the local telephone companies: they can offer customers high-speed data services even before switching to fiber optics.
DSL is a newly standardized transmission technology facilitating the simultaneous use of normal telephone services, data transmission of up to 6 Mbit/s downstream, and Basic Rate Access (BRA). DSL can be seen as an FDM system in which the available bandwidth of a single copper loop is divided into three parts. The baseband occupied by POTS is split from the data channels by a method that guarantees POTS service in the case of ADSL system failure (e.g. passive filters).
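The DMT idea of carrying data on many subcarriers of one copper loop can be sketched as follows; the tone count and QAM values are illustrative only, not an actual ADSL tone plan.

```python
import cmath

# Hedged sketch of how a DMT modulator builds one time-domain symbol:
# QAM values are assigned to subcarriers (tones), mirrored with
# Hermitian symmetry, and an inverse DFT yields a real-valued signal
# suitable for the copper line.
def dmt_symbol(tone_values):
    """tone_values: complex QAM points for tones 1..K (tone 0 unused)."""
    k = len(tone_values)
    n = 2 * (k + 1)                      # IDFT length with mirrored tones
    spectrum = [0j] * n
    for i, v in enumerate(tone_values, start=1):
        spectrum[i] = v
        spectrum[n - i] = v.conjugate()  # Hermitian symmetry -> real output
    samples = []
    for t in range(n):
        s = sum(spectrum[f] * cmath.exp(2j * cmath.pi * f * t / n)
                for f in range(n)) / n
        samples.append(s)
    return samples

symbol = dmt_symbol([1 + 1j, -1 + 1j, 1 - 1j])
# every sample should be (numerically) real
assert all(abs(s.imag) < 1e-9 for s in symbol)
```

Real ADSL uses hundreds of tones and loads more bits onto tones with better SNR, but the mirror-and-IDFT structure is the same.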
The past decade has seen extensive growth of the telecommunications industry, with the increased popularity of the Internet and other data communication services. While these offer the world many more services than were previously available, they are limited by the fact that they run on technology that was not designed for that purpose.
The majority of Internet users access their service via modems connected to the Plain Old Telephone System (POTS). In the early stages of the technology modems were extremely slow by today's standards, but this was not a major issue: a POTS connection provided an adequate medium for the relatively small amounts of data that required transmission, and so the existing system was the logical choice over special cabling.
Technological advances have seen these rates increase to the point where the average Internet user can now download at rates approaching 50 Kbps and send at 33.6 Kbps. However, POTS was designed for voice transmission at frequencies below 3 kHz, and this severely limits the obtainable data rates. To increase the performance of new online services, such as streaming audio and video, and to improve general access speed, the bandwidth-hungry public must therefore consider other alternatives. Technologies such as ISDN or cable connections have been in development for some time but require special cabling; this makes them expensive to set up, so they have not been a viable alternative for most people.
The seminar is about polymers that can emit light when a voltage is applied to them. The structure comprises a thin film of semiconducting polymer sandwiched between two electrodes (cathode and anode). When electrons and holes are injected from the electrodes, these charge carriers recombine, leading to the emission of light. The band gap, i.e. the energy difference between the valence band and the conduction band, determines the wavelength (colour) of the emitted light.
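The band-gap/colour relationship can be made concrete with a small calculation: the emitted photon energy roughly equals the band gap, so the wavelength is h·c/E. The example gap value is illustrative.

```python
# Hedged sketch of the band-gap -> colour relationship mentioned above:
# wavelength = h*c / E_gap, with the gap expressed in electron-volts.
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron-volt

def emission_wavelength_nm(band_gap_ev):
    """Photon wavelength (nm) for a given band gap (eV)."""
    return H * C / (band_gap_ev * EV) * 1e9

# a gap near 2.4 eV sits in the green part of the spectrum (~517 nm)
print(round(emission_wavelength_nm(2.4)))
```

Chemically tuning the polymer's band gap is how red, green, and blue emitters are obtained.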
They are usually made by an inkjet printing process, in which red, green, and blue polymer solutions are jetted into well-defined areas on the substrate; this is possible because the polymers are soluble in common organic solvents like toluene and xylene. Film thickness uniformity is obtained by multi-passing (slow) heads with drive-per-nozzle technology, and the pixels are controlled using an active or passive matrix. The advantages include low cost, small size, no viewing-angle restrictions, low power requirements, and biodegradability. They are poised to replace the LCDs used in laptops and the CRTs used in desktop computers today. Future applications include flexible displays that can be folded, wearable displays with interactive features, camouflage, etc.
Imagine these scenarios
- After watching the breakfast news on TV, you roll up the set like a large handkerchief, and stuff it into your briefcase. On the bus or train journey to your office, you can pull it out and catch up with the latest stock market quotes on CNBC.
- Somewhere in the Kargil sector, a platoon commander of the Indian Army readies for the regular satellite updates that will give him the latest terrain pictures of the border in his sector. He unrolls a plastic-like map and hooks it to the unit’s satellite telephone. In seconds, the map is refreshed with the latest high resolution camera images grabbed by an Indian satellite which passed over the region just minutes ago.
Don’t just imagine these scenarios, at least not for too long. The current 40-billion-dollar display market, dominated by LCDs (standard in laptops) and cathode ray tubes (CRTs, standard in televisions), is seeing the introduction of full-colour LEP-driven displays that are more efficient, brighter, and easier to manufacture. It is possible that organic light-emitting materials will replace older display technologies much as compact discs have relegated cassette tapes to storage bins.
The origins of polymer OLED technology go back to the discovery of conducting polymers in 1977, which earned the co-discoverers Alan J. Heeger, Alan G. MacDiarmid, and Hideki Shirakawa the 2000 Nobel Prize in Chemistry. Following this discovery, researchers at Cambridge University, UK discovered in 1990 that conducting polymers also exhibit electroluminescence, and the light-emitting polymer (LEP) was born.
A heat pipe is a device that efficiently transports thermal energy from one of its ends to the other. It utilizes the latent heat of the vaporized working fluid instead of sensible heat; as a result, its effective thermal conductivity may be several orders of magnitude higher than that of good solid conductors. A heat pipe consists of a sealed container, a wick structure, and a small amount of working fluid, just sufficient to saturate the wick, in equilibrium with its own vapor. The operating pressure inside the heat pipe is the vapor pressure of its working fluid. The length of the heat pipe can be divided into three parts: the evaporator section, the adiabatic section, and the condenser section. In a standard heat pipe the inside of the container is lined with a wicking material, with space for vapor travel provided inside the container.
Basic components of a heat pipe
The basic components of a heat pipe are
1. The container
2. The working fluid
3. The wick or capillary structure
The function of the container is to isolate the working fluid from the outside environment. It has to be leak-proof, maintain the pressure differential across its walls, and enable the transfer of thermal energy to and from the working fluid.
The prime requirements are:
1. Compatibility (both with the working fluid and the external environment)
2. Ease of fabrication, including weldability, machinability, and ductility
3. Thermal conductivity
4. Strength-to-weight ratio
The working fluid
The first consideration in the identification of the working fluid is the operating vapor temperature range. Within the approximate temperature band, several possible working fluids may exist and a variety of characteristics must be examined in order to determine the most acceptable of these fluids for the application considered.
The prime requirements are:
1. Compatibility with wick and wall materials
2. Good thermal stability
3. Wettability of wick and wall materials
4. High latent heat
5. High thermal conductivity
6. Low liquid and vapor viscosities
7. High surface tension
The wick structure in a heat pipe facilitates the return of liquid from the condenser to the evaporator. The main purposes of the wick are to generate capillary pressure and to distribute the liquid around the evaporator section of the heat pipe. The most commonly used wick structure is a wrapped screen wick.
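Two quick figures often used when sizing the wick and comparing working fluids can be sketched as follows, using rough property values for water near 100 °C; both the pore radius and the properties are illustrative assumptions.

```python
# Hedged sketch of two heat-pipe sizing figures: the capillary pressure
# a wick can generate (2*sigma/r_eff) and the liquid-transport "merit
# number" (rho*sigma*h_fg/mu) used to rank candidate working fluids.
SIGMA = 0.059      # surface tension, N/m (approx., water near 100 C)
RHO = 958.0        # liquid density, kg/m^3
H_FG = 2.257e6     # latent heat of vaporization, J/kg
MU = 2.8e-4        # liquid viscosity, Pa*s

def capillary_pressure(r_eff):
    """Maximum capillary pressure for an effective pore radius (m)."""
    return 2.0 * SIGMA / r_eff

def merit_number():
    """Liquid transport factor; higher means a better working fluid."""
    return RHO * SIGMA * H_FG / MU

# a 50-micron screen pore gives roughly 2.4 kPa of pumping head
print(capillary_pressure(50e-6), merit_number())
```

The capillary pressure must overcome the liquid and vapor pressure drops plus any gravitational head, which is why finer pores and high-surface-tension fluids are preferred.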
The traditional role of conditional access is to ensure that viewers see only those programs they have paid to view. In the digital environment, conditional access has evolved far beyond this role. Today's conditional access systems still support traditional pay-TV revenue generation; in addition, they enable TV operators to create and protect a unique gateway to the enhanced TV experience, a world of interactive services including home shopping, games, sports, interactive advertising, and pay-per-view programming.
Using today’s conditional access systems, you can target programming, advertisements, and promotions to subscribers by geographic area, by market segment, or according to subscribers’ personal preferences. You can take advantage of conditional access features to implement flexible program packaging options and support new ways of generating revenue.
Shunt-connected controllers at distribution and transmission levels usually fall under two categories: Static Synchronous Generators (SSG) and Static VAr Compensators (SVC).
A Static Synchronous Generator (SSG) is defined by IEEE as a self-commutated switching power converter supplied from an appropriate electric energy source and operated to produce a set of adjustable multiphase voltages, which may be coupled to an AC power system for the purpose of exchanging independently controllable real and reactive power. When the active energy source (usually a battery bank, superconducting magnetic energy storage, etc.) is dispensed with and replaced by a DC capacitor, which cannot absorb or deliver real power except for short durations, the SSG becomes a Static Synchronous Compensator (STATCOM). A STATCOM has no long-term energy support on the DC side and cannot exchange real power with the AC system; however, it can exchange reactive power, and in principle harmonic power too. But when a STATCOM is designed to handle reactive power and harmonic currents together it gets a new name, the Shunt Active Power Filter; a STATCOM proper handles only fundamental reactive power exchange with the AC system.
STATCOMs are employed at both distribution and transmission levels, though for different purposes. When a STATCOM is employed at the distribution level, or at the load end, for power factor improvement and voltage regulation alone, it is called a DSTATCOM; when it also (or exclusively) performs harmonic filtering it is called an Active Power Filter. In the transmission system, STATCOMs handle only fundamental reactive power and provide voltage support to buses. STATCOMs in transmission systems are also used to modulate bus voltages during transient and dynamic disturbances in order to improve transient stability margins and to damp dynamic oscillations.
IEEE defines the second kind of shunt-connected controller, the Static VAr Compensator (SVC), as a shunt-connected static VAr generator or absorber whose output is adjusted to exchange capacitive or inductive current so as to maintain or control specific parameters of the electrical power system (typically bus voltage). Thyristor-switched or thyristor-controlled capacitors/inductors, and combinations of such equipment with fixed capacitors and inductors, come under this heading. These were covered in an earlier lecture; this lecture focuses on STATCOMs at distribution and transmission levels.
PWM Voltage Source Inverter based static VAr compensators began to be considered a viable alternative to the existing passive shunt compensators and Thyristor Controlled Reactor (TCR) based compensators from the mid-eighties onwards. The disadvantages of capacitor/inductor compensation are well known. TCRs could overcome many of the disadvantages of passive compensators, but they suffered from two major drawbacks: slow response to a VAr command, and injection of a considerable amount of harmonic current into the power system, which had to be cancelled by special transformers and filtered by heavy passive filters.
It became clear in the early eighties that, apart from the mundane job of pumping lagging/leading VArs into the power system at chosen points, VAr generators could assist in enhancing the stability of the power system during large-signal and small-signal disturbances, if only they were faster in the time domain. They could also provide reactive support against a fluctuating load to maintain bus voltage regulation and reduce flicker, provide reactive support to control bus voltages against sag and swell conditions, and correct voltage unbalance in the source, again if only they were fast enough. The PWM STATCOMs covered in this lecture are capable of delivering lagging or leading VArs to a load, or to a bus in the power system, in a rapidly controlled manner.
High-power STATCOMs of this type essentially consist of a three-phase PWM inverter using GTOs, thyristors, or IGBTs; a DC-side capacitor which provides the DC voltage required by the inverter; filter components to remove the high-frequency components of the inverter output voltage; a link inductor which couples the inverter output to the AC supply side; interface magnetics (if required); and the related control blocks. The inverter generates a three-phase voltage from the DC-side capacitor, synchronized with the AC supply, and the link inductance couples this voltage to the AC source. The current drawn by the inverter from the AC supply is controlled to be mainly reactive (leading or lagging as required), with a small active component needed to supply the losses in the inverter and link inductor (and in the magnetics, if any). The DC-side capacitor voltage is maintained constant (or allowed to vary with a definite relationship between its value and the reactive power to be delivered) by controlling this small active current component.

The currents are controlled indirectly, by controlling the phase angle of the inverter output voltage with respect to the AC-side source voltage, in the “synchronous link based control scheme”, whereas they are controlled directly by current feedback in the “current controlled scheme”. In the latter case the inverter is current-regulated: its switches are controlled so that the inverter delivers a commanded current at its output rather than a commanded voltage (the voltage required to make the commanded current flow is automatically synthesized by the inverter). The current control scheme results in a very fast STATCOM which can adjust its reactive output within tens of microseconds of a sudden change in the reactive demand.
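The synchronous-link behaviour described above can be sketched with the standard two-voltage-sources-over-a-reactance relations; all numeric values below are illustrative assumptions, not ratings of a real installation.

```python
import math

# Hedged sketch of the synchronous-link relationship: with source
# voltage Vs, inverter voltage Vi, link reactance X and phase angle
# delta, the per-phase power exchange is
#   P = Vs*Vi*sin(delta)/X,   Q = (Vs^2 - Vs*Vi*cos(delta))/X.
def link_power(vs, vi, x, delta_rad):
    p = vs * vi * math.sin(delta_rad) / x
    q = (vs**2 - vs * vi * math.cos(delta_rad)) / x
    return p, q

VS, X = 230.0, 1.5          # assumed source voltage (V) and reactance (ohm)

# with delta ~ 0 almost no real power flows; raising Vi above Vs
# exports VArs (capacitive), lowering it below Vs absorbs VArs.
p_cap, q_cap = link_power(VS, 250.0, X, 0.0)
p_ind, q_ind = link_power(VS, 210.0, X, 0.0)
print(q_cap, q_ind)   # negative Q -> delivering VArs, positive -> absorbing
```

A small non-zero delta is what lets the inverter draw the active power needed to cover its own losses and regulate the DC-side capacitor voltage.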
The genetic biochip is designed to freeze into place structures of many short strands of DNA (deoxyribonucleic acid), the basic chemical instruction that determines the characteristics of an organism. Effectively, it is used as a kind of test tube for real chemical samples. A specially designed microscope can determine where a sample has hybridized with the DNA strands on the biochip. Biochips helped dramatically accelerate the identification of the estimated 80,000-plus genes in human DNA, part of the ongoing worldwide research collaboration known as the Human Genome Project. The chip has been described as a sort of word-search function that can quickly sequence DNA.
In addition to genetic applications, biochips are being used in protein, toxicological, and biochemical research. Biochips can also be used to rapidly detect the chemical agents used in biological warfare so that defensive measures can be taken.
An Augmented Reality system supplements the real world with virtual (computer-generated) objects that appear to coexist in the same space as the real world. While many researchers broaden the definition of AR beyond this vision, we can generally consider an Augmented Reality system to have the following properties: 1) it combines real and virtual objects in a real environment; 2) it runs interactively, and in real time; 3) it aligns real and virtual objects with each other.
Augmented Reality can be thought of as a “middle ground” between virtual environments (completely synthetic) and telepresence (completely real).
Augmented Reality Vs Virtual Reality
VR was defined as a “computer-generated interactive 3-D environment in which a person is immersed”. The user is completely immersed in an artificial world and is divorced from the real environment.
VR - Strives For Totally Immersive Environment.
AR – Augmenting Real World Scenes.
Real Desk with Virtual Lamp and Two Virtual Chairs
• It shows a real desk with a real phone.
• Inside this room are also a virtual lamp and two virtual chairs.
• Objects are combined in 3-D, so that the virtual lamp covers the real table, and the real table covers parts of the two virtual chairs.
Milgram’s Reality Virtuality Continuum
Taxonomy for Mixed Reality
Extent of Presence Metaphor
Extent of World Knowledge
Characteristics of Augmented Reality
• Optical vs. Visual
• Focus and Contrast
• Comparison Against Virtual Environments
Applications of Augmented Reality
1. Medical
• Virtual fetus inside the womb of a pregnant patient.
• Mockup of tumor biopsy.
2. Military Training
• Two views of a combined Augmented Reality virtual system.
3. Maintenance & Repair
• Prototype laser printer maintenance application, displaying how to remove the paper tray.
• Printer maintenance application.
4. Robotics and Telerobotics
• Virtual lines show a planned motion of a robot arm.
CONCLUSION: Augmented Reality is far behind VE in maturity. The first deployed HMD-based Augmented Reality systems will probably be in aircraft manufacturing; for example, Boeing has made several trial runs with workers using a prototype system but has not yet made any deployment decisions. The next generation of combat aircraft will have helmet-mounted sights with graphics registered to targets in the environment. These displays, combined with short-range steerable missiles that can shoot at targets off-boresight, give a tremendous combat advantage to pilots in a dogfight. One area where a breakthrough is required is tracking an HMD outdoors at the accuracy required by Augmented Reality; if this is achieved, several interesting applications will become possible.
Cocoa is Apple Inc.'s native object-oriented application programming environment for the Mac OS X operating system. It is one of five major APIs available for Mac OS X; the others are Carbon, POSIX (for the BSD environment), X11, and Java.
Cocoa applications are typically developed using the development tools provided by Apple, specifically Xcode (formerly Project Builder) and Interface Builder, using the Objective-C language. However, the Cocoa programming environment can be accessed from other languages, such as Object Pascal, Python, Perl, and Ruby, with the aid of bridging mechanisms such as PasCocoa, PyObjC, CamelBones, and RubyCocoa, respectively. Apple is also developing an implementation of the Ruby language, called MacRuby, which does away with the need for a bridging mechanism. It is also possible to write Objective-C Cocoa programs in a simple text editor and build them manually with GCC or GNUstep's makefile scripts. For end users, Cocoa applications are considered to be those written using the Cocoa programming environment. Such applications usually have a distinctive feel, since the Cocoa environment automates many aspects of an application to comply with Apple's human interface guidelines.
Cocoa consists primarily of two Objective-C object libraries called frameworks. Frameworks are functionally similar to shared libraries, a compiled object that can be dynamically loaded into a program’s address space at runtime, but frameworks add associated resources, header files, and documentation.
Foundation Kit, or more commonly simply Foundation, first appeared in OpenStep. On Mac OS X, it is based on Core Foundation. Foundation is a generic object-oriented library providing string and value manipulation, containers and iteration, distributed computing, run loops, and other functions that are not directly tied to the graphical user interface. The “NS” prefix, used for all classes and constants in the framework, comes from Cocoa’s NeXTSTEP heritage.
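The Foundation classes named above are real; as a minimal sketch of their use (the program itself is illustrative, and building it requires Apple's Foundation framework or GNUstep), note how every class carries the "NS" prefix inherited from NeXTSTEP:

```objc
#import <Foundation/Foundation.h>

int main(void) {
    @autoreleasepool {
        // Foundation containers and strings all use the "NS" prefix.
        NSArray *names = @[@"Foundation", @"AppKit", @"Quartz"];
        NSMutableString *joined = [NSMutableString string];
        for (NSString *name in names) {     // fast enumeration over a container
            [joined appendFormat:@"%@ ", name];
        }
        NSLog(@"%@", joined);
    }
    return 0;
}
```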
Application Kit or AppKit is directly descended from the original NeXTSTEP Application Kit. It contains code with which programs can create and interact with graphical user interfaces. AppKit is built on top of Foundation, and uses the same “NS” prefix. A key part of the Cocoa architecture is its comprehensive views model. This is organized along conventional lines for an application framework, but is based on the PDF drawing model provided by Quartz. This allows creation of custom drawing content using PostScript-like drawing commands, which also allows automatic printer support and so forth. Since the Cocoa framework manages all the clipping, scrolling, scaling and other chores of drawing graphics, the programmer is freed from implementing basic infrastructure and can concentrate only on the unique aspects of an application’s content.
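The views model described above can be sketched with a custom NSView subclass: AppKit invokes -drawRect: whenever the view needs repainting, with clipping and coordinate transforms already set up. The class name BadgeView below is invented for illustration; NSView, NSColor, and NSBezierPath are real AppKit classes, and the sketch assumes Apple's Cocoa headers:

```objc
#import <Cocoa/Cocoa.h>

// Hypothetical custom view: AppKit calls -drawRect: when the view
// needs repainting; clipping, scrolling, and scaling are handled
// by the framework, so the method only describes the content.
@interface BadgeView : NSView
@end

@implementation BadgeView
- (void)drawRect:(NSRect)dirtyRect {
    [[NSColor blueColor] setFill];
    NSBezierPath *circle =
        [NSBezierPath bezierPathWithOvalInRect:NSInsetRect(self.bounds, 4, 4)];
    [circle fill];   // Quartz-backed drawing, so printing works automatically
}
@end
```

Because the drawing goes through Quartz's PDF model, the same code renders on screen and on a printer without change.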
Cocoa helps you create commercial-grade applications quickly and efficiently. It is an advanced, mature object- oriented development environment that enables you to create complex software with surprisingly few lines of code. Through a seamless integration of tools and Cocoa API, the design and construction of a user interface is largely a matter of dragging windows, buttons, and other objects from palettes, initializing their attributes, and connecting them to other objects. Cocoa also defines a model for applications and implements most aspects of application behavior; you simply fit into this model the code that makes your application unique.
The programmatic interfaces of the core Cocoa frameworks, Foundation and Application Kit, simplify access to most of the technologies on which Mac OS X is based, such as Quartz, Bonjour networking, Core Text, and the printing system. Although these interfaces are in Objective-C, you can integrate code written in other languages into your Cocoa projects, including C++ code and C code. Because Objective-C is a superset of ANSI C, frameworks with C APIs are compatible with Objective-C.
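The C compatibility noted above means plain C calls mix freely with Objective-C message sends in the same function; a small sketch (requires Foundation to build):

```objc
#import <Foundation/Foundation.h>
#include <math.h>

int main(void) {
    @autoreleasepool {
        double r = sqrt(2.0);   // plain ANSI C library call
        // Objective-C message send consuming the C result:
        NSString *s = [NSString stringWithFormat:@"sqrt(2) = %.4f", r];
        NSLog(@"%@", s);
    }
    return 0;
}
```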
The Cocoa frameworks are written in Objective-C, and hence Objective-C is the preferred language for developing Cocoa applications. Java bindings for the Cocoa frameworks (known as the "Java bridge") are also available but have not proven popular among Cocoa developers. Further, the need for runtime binding means many of Cocoa's key features are not available from Java. In 2005, Apple announced that the Java bridge was to be deprecated, meaning that features added to Cocoa in Mac OS X versions later than 10.4 would not be added to the Cocoa-Java programming interface. AppleScript Studio, part of Apple's Xcode Tools, makes it possible to write (less complex) Cocoa applications using AppleScript. Third-party bindings available for other languages include PyObjC (Python), RubyCocoa (Ruby), CamelBones (Perl), Cocoa#, Monobjc (C#), and NObjective (C#). There are also open-source implementations of major parts of the Cocoa framework, such as GNUstep and Cocotron, that allow cross-platform (including Microsoft Windows) Cocoa application development.