Sunday 9 June 2013

Cellular Digital Packet Data

Introduction

Cellular Digital Packet Data (CDPD) systems offer what is currently one of the most advanced means of wireless data transmission. Generally used as a tool for business, CDPD holds promise for improving law enforcement communications and operations. As technologies improve, CDPD may represent a major step toward making our nation a wireless information society. While CDPD technology is more complex than most of us care to understand, its potential benefits are obvious even to technological novices.

In this so-called age of information, no one needs to be reminded of the importance of speed, and also accuracy, in the storage, retrieval and transmission of data. The CDPD network is little over a year old and is already proving to be a hot digital enhancement to the existing phone network. CDPD transmits digital packet data at 19.2 Kbps, using idle times between cellular voice calls on the cellular telephone network.
CDPD technology represents a way for law enforcement agencies to improve how they manage their communications and information systems. For over a decade, agencies around the world have been experimenting with placing Mobile Data Terminals (MDTs) in their vehicles to enhance officer safety and efficiency.

Early MDTs transmitted their information using radio modems. In this case, data could be lost in transmission during bad weather or when mobile units were not properly located in relation to transmission towers. More recently, MDTs have transmitted data using analog cellular telephone modems. This shift represented an improvement in mobile data communications, but these systems still had flaws which limited their utility.

Since the mid-1990s, computer manufacturers and the telecommunication industry have been experimenting with the use of digital cellular telecommunications as a wireless means to transmit data. The result of their effort is the CDPD system. These systems allow users to transmit data with a higher degree of accuracy, fewer service interruptions, and stronger security, and they give law enforcement agencies a further means of improving how they manage their communications and information systems. The result is the capacity for mobile users to enjoy almost instantaneous access to information.

WHAT IS CDPD?

CDPD is a specification for supporting wireless access to the Internet and other public packet-switched networks. Data transmitted on CDPD systems travel several times faster than data sent over analog networks.

Cellular telephone and modem providers that offer CDPD support make it possible for mobile users to get access to the Internet at up to 19.2 Kbps. Because CDPD is an open specification that adheres to the layered structure of the Open Systems Interconnection (OSI) model, it has the ability to be extended in the future. CDPD supports both the Internet Protocol (IP) and the ISO Connectionless Network Protocol (CLNP).
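For a sense of scale (a back-of-the-envelope calculation added here, not from the original text), at the full 19.2 Kbps rate a 4 KB query response, such as a records lookup, would take

$t = \frac{4 \times 1024 \times 8\ \text{bits}}{19{,}200\ \text{bits/s}} \approx 1.7\ \text{s}$

which is why CDPD suits short, bursty transactions far better than bulk transfers.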

CDPD also supports IP multicast (one-to-many) service. With multicast, a company can periodically broadcast company updates to sales and service people on the road, or a news subscription service can transmit its issues as they are published. CDPD will also support the next version of IP, IPv6. With CDPD, we are assigned our very own address. With this address, we are virtually always connected to our host without having to keep a constant connection.

There are currently two methods for sending data over cellular networks: cellular digital packet data (CDPD) and cellular switched-circuit data (CSCD). Each has distinct advantages depending on the type of application, amount of data to send or receive, and geographic coverage needs.

E-Waste



Abstract of E-Waste



Electronic waste or e-waste is any broken or unwanted electrical or electronic appliance. E-waste includes computers, entertainment electronics, mobile phones and other items that have been discarded by their original users. E-waste is the inevitable by-product of a technological revolution. Driven primarily by faster, smaller and cheaper microchip technology, society is experiencing an evolution in the capability of electronic appliances and personal electronics. For all its benefits, innovation brings with it the by-product of rapid obsolescence. According to the EPA, nationally, an estimated 5 to 7 million tons of computers, televisions, stereos, cell phones, electronic appliances and toys, and other electronic gadgets become obsolete every year. According to various reports, electronics comprise approximately 1-4 percent of the municipal solid waste stream. The electronic waste problem will continue to grow at an accelerated rate.

Introduction of E-Waste


• E-waste is the most rapidly growing waste problem in the world.

• It is not a crisis of quantity alone but also a crisis born of toxic ingredients, posing a threat to occupational health as well as the environment.

• Rapid technology change, low initial cost and a high obsolescence rate have resulted in a fast-growing problem around the globe.

• A legal framework and a proper collection system are missing.

• Imports regularly arrive in the recycling markets.

• Recycling is often done under inhuman working conditions.

• Between 1997 and 2007, nearly 500 million personal computers became obsolete, almost two computers for each person.

• 750,000 computers expected to end up in landfills this year alone.

• In 2005, 42 million computers were discarded

• 25 million in storage

• 4 million recycled

• 13 million landfilled

• 0.5 million incinerated

IT and telecom are the two fastest-growing industries in the country.

• India, by 2008, should achieve a PC penetration of 65 per 1,000, up from the existing 14 per 1,000 (MAIT).

• At present, India has 15 million computers; the target is 75 million computers by 2010.

• Over 2 million old PCs ready for disposal in India.

• Life of a computer reduced from 7 years to 3-5 years.

• E-waste growth: over 75 million current mobile users, expected to increase to 200 million by the end of 2007.

• Memory devices, MP3 players, iPods etc. are the newer additions.

• Preliminary estimates suggest that total WEEE generation in India is approximately 1,46,000 (146,000) tonnes per year.

E-waste: Its implications


• Electronic products often contain hazardous and toxic materials that pose environmental risks if they are landfilled or incinerated.

• Televisions, video and computer monitors use cathode ray tubes (CRTs), which have significant amounts of lead.

• Printed circuit boards contain primarily plastic and copper, and most have small amounts of chromium, lead solder, nickel, and zinc.

• In addition, many electronic products have batteries that often contain nickel, cadmium, and other heavy metals. Relays and switches in electronics, especially older ones, may contain mercury.

• Also, capacitors in some types of older and larger equipment that is now entering the waste stream may contain polychlorinated biphenyls (PCBs).

You can reduce the environmental impact of your e-waste by making changes in your buying habits and by looking for ways to reuse, including donating or recycling. Preventing waste in the first place is the preferred waste management option. Consider, for example, upgrading or repairing instead of buying new equipment, to extend the life of your current equipment and perhaps save money. If you must buy new equipment, consider donating your still-working, unwanted electronic equipment.

This reuse extends the life of the products and allows non-profits, churches, schools and community organizations to have equipment they otherwise may not be able to afford. In South Carolina, for example, Habitat for Humanity Resale Stores, Goodwill and other similar organizations may accept working computers. When buying new equipment, check with the retailer or manufacturer to see if they have a "take-back program" that allows consumers to return old equipment when buying new equipment. Dell Computers, for example, became the first manufacturer to set up a program to take back any of its products anywhere in the world at no charge to the consumer. And, when buying, consider products with longer warranties as an indication of long-term quality.

Friday 7 June 2013

Adaptive Missile Guidance Using GPS



Abstract of Adaptive Missile Guidance Using GPS



In the modern-day theatre of combat, the need to be able to strike at targets that are on the opposite side of the globe has strongly presented itself. This has led to the development of various types of guided missiles. These guided missiles are self-guiding weapons intended to maximize damage to the target while minimizing collateral damage. The buzzword in modern-day combat is "fire and forget". GPS guided missiles, using the exceptional navigational and surveying abilities of GPS, could, after being launched, deliver a warhead to any part of the globe via the interface of the onboard computer in the missile with the GPS satellite system. Many modern-day laser weapons were also designed under this principle.

Laser guided missiles use a laser of a certain frequency bandwidth to acquire their target. GPS/inertial weapons are oblivious to the effects of weather, allowing a target to be engaged at the time of the attacker's choosing. GPS allows accurate targeting of various military weapons including ICBMs, cruise missiles and precision-guided munitions. Artillery projectiles with embedded GPS receivers able to withstand accelerations of 12,000 g have been developed for use in 155 mm howitzers. GPS signals can also be affected by multipath issues, where the radio signals reflect off surrounding terrain: buildings, canyon walls, hard ground, etc. These delayed signals can cause inaccuracy.

A variety of techniques, most notably narrow correlator spacing, have been developed to mitigate multipath errors. Multipath effects are much less severe in moving vehicles. When the GPS antenna is moving, the false solutions using reflected signals quickly fail to converge and only the direct signals result in stable solutions. Multiple independently targetable re-entry vehicles (MIRVs), ICBMs with many sub-missiles, were developed in the late 1960s. The cruise missile has wings like an airplane, making it capable of flying at low altitudes. In summary, GPS-INS guided weapons are not affected by harsh weather conditions or restricted by a wire, nor do they leave the gunner vulnerable to attack. GPS guided weapons, with their technological advances over previous systems, are the superior weapon of choice in modern-day warfare.

Introduction of Adaptive Missile Guidance Using GPS


Guided missile systems have evolved at a tremendous rate over the past four decades, and recent breakthroughs in technology ensure that smart warheads will have an increasing role in maintaining our military superiority. On ethical grounds, one prays that each warhead deployed during a sortie will strike only its intended target, and that innocent civilians will not be harmed by a misfire. From a tactical standpoint, our military desires weaponry that is reliable and effective, inflicting maximal damage on valid military targets and ensuring our capacity for lightning-fast strikes with pinpoint accuracy. Guided missile systems help fulfill all of these demands.

Many of the early guidance systems used in missiles were based on gyroscope models. Many of these models used magnets in their gyroscopes to increase the sensitivity of the navigational array. In modern-day warfare, the inertial measurements of the missile are still controlled by a gyroscope in one form or another, but the method by which the missile approaches the target bears a technological edge. On the battlefield of today, guided missiles are guided to or acquire their targets by using:

· Radar signals
· Wires
· Lasers, or
· Most recently, GPS.

The central idea behind the design of DGPS/GPS/inertial guided weapons is that of using a 3-axis gyro/accelerometer package as an inertial reference for the weapon's autopilot, and correcting the accumulated drift error in the inertial package by using GPS PPS/P-code. Such weapons are designated "accurate" munitions, as they will offer CEPs (Circular Error Probable) of the order of the accuracy of GPS P-code signals, typically about 40 ft.
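For reference (a standard definition, added here for clarity), CEP is the radius of the circle around the aim point within which half of the weapons are expected to land. For a circular bivariate normal impact error with per-axis standard deviation $\sigma$,

$\text{CEP} = \sigma\sqrt{2\ln 2} \approx 1.18\,\sigma$

so a 40 ft CEP corresponds to a per-axis error of roughly $\sigma \approx 34$ ft.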


Global Positioning System used in ranging, navigation and guidance:

The next incremental step is to update the weapon before launch with a DGPS-derived position estimate, which will allow it to correct its GPS error as it flies to the target. Such weapons are designated "precise" and will offer accuracies better than laser or TV guided weapons, potentially CEPs of several feet. For an aircraft to support such munitions, it will require a DGPS receiver, a GPS receiver and interfaces on its multiple ejector racks or pylons to download target and launch point coordinates to the weapons. The development of purely GPS/inertial guided munitions will produce substantial changes in how air warfare is conducted.



Unlike a laser-guided weapon, a GPS/inertial weapon does not require that the launch aircraft remain in the vicinity of the target to illuminate it for guidance. GPS/inertial weapons are true fire-and-forget weapons which, once released, are wholly autonomous and all-weather capable, with no degradation in accuracy. Existing precision weapons require an unobscured line of sight between the weapon and the target for the optical guidance to work.

The proliferation of GPS and INS guidance is a double-edged sword. On the one hand, this technology promises a revolution in air warfare not seen since the laser guided bomb, with single bombers being capable of doing the task of multiple aircraft packages. In summary, GPS-INS guided weapons are not affected by harsh weather conditions or restricted by a wire, nor do they leave the gunner vulnerable to attack. GPS guided weapons, with their technological advances over previous systems, are the superior weapon of choice in modern-day warfare.

Hydrogen Super Highway



Definition

The Interstate Hydrogen Highway was proposed by Justin Eric Sutton. This highway depends mainly on hydrogen and water. Hydrogen is obtained in a basic process: electricity is produced when sunlight strikes EPV (electro-photovoltaic) panels, and this electricity is then used to convert distilled water into hydrogen and oxygen. While the oxygen could be bottled and sold cheaply, the hydrogen would serve as a "battery", stored in compressed form in cooling tanks adjacent to the traveler system in utility centers. Electricity is produced from the hydrogen using hydrogen fuel cell technology. Electricity generated in the hydrogen highway by Magnetic Levitation (MAGLEV) technology may be used to provide for other power needs such as utility stations, access station lighting and maintenance, and the rest can be used for domestic purposes.


A certain amount of hydrogen would be stored each day to cover night-time travel and weather-related burdens. The speed of the trailblazer on the hydrogen highway is 250-300 MPH. All it takes is $15,000,000 per mile and $250,000 per rail car. An eventual system size of nearly 54,000 miles would yield as much as 45 billion watts of continuous electrical power.
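As a quick check of that figure (arithmetic added here, not in the original), 45 billion watts spread over the full build-out works out to

$\frac{45 \times 10^{9}\ \text{W}}{54{,}000\ \text{miles}} \approx 0.83\ \text{MW per mile}$

of track available for utility and domestic use.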

Magnetic Levitation Travel

MAGLEV vehicles of the kind proposed by Mr. Sutton would travel at nearly 250 miles per hour on cushions of vibration-free, magnetically induced air, in effect flying a few centimeters off the MAGLEV track. Travel between Penn Station and famed Atlantic City, 116 miles away, is normally a trip of between 2 and 3 hours by car (plus the commensurate tolls, fuel stops and bathroom/food breaks); at 250 miles per hour it would take considerably less, in fact about 1/5th the time. Travel to and from the resort would have none of the hassles normally associated with a short-hop jet flight. It would be accomplished without wasting all that petroleum, fuel that could be better used to explore the Atlantic City seashore community, visit casinos and take in the night sights. And with another 15-20 minute hop on the Interstate Traveler a day or two later, one could travel from AC to Philadelphia to see the birthplace of our country, Independence Hall!
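The arithmetic behind that one-fifth figure (worked out here for clarity):

$t = \frac{116\ \text{miles}}{250\ \text{mph}} \approx 0.46\ \text{h} \approx 28\ \text{minutes}$

versus roughly 2.5 hours by car.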





Why even leave your car behind? The design parameters of the Interstate Traveler System not only call for passenger ready MAGLEV vehicles, they also call for "auto carriers".

Such carriers could take the family's Ford 500 for an ultra-fast trip at 250 mph to any location on the system, offloading them to allow the family to motor around the area. A short two week vacation could visit a wide variety of areas covering nearly 2/3 of the Nation, stopping for a day or two to explore each area, then hopping on the Interstate Traveler System and moving on to the next.

Wireless Charging Of Mobile Phones Using Microwaves



Abstract of Wireless Charging Of Mobile Phones Using Microwaves



With mobile phones becoming a basic part of life, the recharging of mobile phone batteries has always been a problem. Mobile phones vary in their talk time and battery standby according to their manufacturer and battery. All these phones, irrespective of their manufacturer and battery, have to be put on charge after the battery has drained out.

The main objective of this proposal is to make the recharging of mobile phones independent of their manufacturer and battery make. In this paper a new proposal has been made so that mobile phones are recharged automatically as you talk on them. This is done by the use of microwaves. The microwave signal is transmitted from the transmitter along with the message signal, using a special kind of antenna called a slotted waveguide antenna, at a frequency of 2.45 GHz. There are minimal additions to be made in the mobile handsets: a sensor, a rectenna, and a filter. With the above setup, the need for separate chargers for mobile phones is eliminated and charging becomes universal. Thus the more you talk, the more your mobile phone is charged! With this proposal the manufacturers would be able to remove the talk time and battery standby figures from their phone specifications.

Introduction of Wireless Charging Of Mobile Phones Using Microwaves


The basic addition to the mobile phone is going to be the rectenna. A rectenna is a rectifying antenna, a special type of antenna that is used to directly convert microwave energy into DC electricity.

Its elements are usually arranged in a mesh pattern, giving it a distinct appearance from most antennae. A simple rectenna can be constructed from a Schottky diode placed between antenna dipoles. The diode rectifies the current induced in the antenna by the microwaves.

Rectennas are highly efficient at converting microwave energy to electricity. In laboratory environments, efficiencies above 90% have been observed with regularity. Some experimentation has been done with inverse rectennas, converting electricity into microwave energy, but efficiencies are much lower -- only in the area of 1%. With the advent of nanotechnology and MEMS, the size of these devices can be brought down to the molecular level.

It has been theorized that similar devices, scaled down to the proportions used in nanotechnology, could be used to convert light into electricity at much greater efficiencies than what is currently possible with solar cells. This type of device is called an optical rectenna.

Theoretically, high efficiencies can be maintained as the device shrinks, but experiments funded by the United States National Renewable Energy Laboratory have so far obtained only roughly 1% efficiency while using infrared light. Another important part of our receiver circuitry is a simple sensor.


Receiver Design:

The basic addition to the mobile phone is going to be the rectenna. A rectenna is a rectifying antenna, a special type of antenna that is used to directly convert microwave energy into DC electricity.



A rectenna rectifies received microwaves into DC current: it comprises a mesh of dipoles and diodes that absorb microwave energy from a transmitter and convert it into electric power. Its elements are usually arranged in a mesh pattern, giving it a distinct appearance from most antennae. A simple rectenna can be constructed from a Schottky diode placed between antenna dipoles.

The diode rectifies the current induced in the antenna by the microwaves. Rectennas are highly efficient at converting microwave energy to electricity.

In laboratory environments, efficiencies above 90% have been observed with regularity. In the future, rectennas may be used to generate large-scale power from microwave beams delivered from orbiting SPS (solar power satellite) systems.

The sensor circuitry is a simple circuit which detects whether the mobile phone is receiving any message signal. This is required, as the phone has to be charged only as long as the user is talking. Thus a simple F-to-V (frequency-to-voltage) converter would serve our purpose. In India the operating frequency of the mobile phone operators is generally 900 MHz or 1800 MHz for the GSM system for mobile communication.
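To get a feel for the power levels involved, the power arriving at the rectenna can be estimated with the standard Friis free-space equation, $P_r = P_t G_t G_r (\lambda / 4\pi d)^2$. The C sketch below runs the numbers at 2.45 GHz; the transmit power, antenna gains, distance and 90% rectenna efficiency are illustrative assumptions, not figures from the paper.

```c
/* Rough link-budget estimate for 2.45 GHz power transfer using the
   Friis free-space equation: Pr = Pt*Gt*Gr*(lambda/(4*pi*d))^2.
   All parameter values below are illustrative assumptions. */
#include <stdio.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    const double c   = 3.0e8;   /* speed of light, m/s            */
    const double f   = 2.45e9;  /* carrier frequency, Hz          */
    const double Pt  = 10.0;    /* transmit power, W (assumed)    */
    const double Gt  = 100.0;   /* tx antenna gain, ~20 dBi       */
    const double Gr  = 10.0;    /* rectenna gain, ~10 dBi         */
    const double d   = 100.0;   /* distance to handset, m         */
    const double eff = 0.9;     /* rectenna RF-to-DC efficiency   */

    double lambda = c / f;                                  /* ~0.122 m */
    double Pr = Pt * Gt * Gr * pow(lambda / (4 * M_PI * d), 2);

    printf("wavelength      : %.3f m\n", lambda);
    printf("received power  : %.2e W\n", Pr);
    printf("DC output power : %.2e W\n", eff * Pr);
    return 0;
}
```

With these assumed numbers the result is only on the order of tens of microwatts at 100 m, which shows why antenna gain and proximity to the transmitter dominate the feasibility of such a scheme.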

Thursday 6 June 2013

Clap counter using 8051 microcontroller

This article explains the concept behind interfacing a sound sensor with the 8051 microcontroller (AT89C51). This project increases the count by one every time a sound is produced. It works well with the sound of a clap. The number of claps is displayed on an LCD module. The circuit consists of four modules, namely, a sound sensor, an amplifying circuit, a control circuit and a display module. The code for interfacing the sound sensor with the MCU is written in C language.



The connections of the different modules are shown in the circuit diagram. The data pins of the LCD are connected to port P2, while the control pins (RS, R/W & EN) are connected to pins 1-3 of port P1 of the AT89C51, respectively. The microcontroller receives sound pulses through the first pin of port P0.

A condenser microphone is used to sense the sound produced by the clap. This mic is connected to a two stage transistor amplifier. The mic output is thus amplified to a suitable level so that it can be detected at the TTL logic. A switching circuit made from a single transistor is also employed after the amplifier. The purpose of this circuit is to convert the analog signals into discrete digital signals, which are used as input for the MCU.
The output of the amplifier is coupled with a transistor switch. Whenever a high voltage output is received from the amplifier, it generates a pulse. The transistor switching circuit also ensures that a high TTL logic is not received at the microcontroller due to noise signals.
The pulse from the switching circuit is fed to the microcontroller, which is programmed to detect the pulses. Every time a pulse is detected, the count value is increased by one. The output is displayed on a 16x2 LCD screen.
The sensitivity of the sound sensor can be increased by improving the circuit of the amplifying unit. An op-amp can be used to increase the sensitivity.
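For readers who want to see the control flow end to end, here is a minimal firmware sketch in Keil-style 8051 C, following the wiring described above (LCD data on P2, control lines on P1, conditioned clap pulse on P0.0). The exact pin mapping, delay constants and three-digit display format are illustrative assumptions, not the project's actual code.

```c
/* Minimal clap-counter sketch for AT89C51 (Keil C51 style).
   Assumed wiring: LCD data bus on P2, RS=P1.0, RW=P1.1, EN=P1.2,
   conditioned clap pulse on P0.0 (active high). */
#include <reg51.h>

sbit RS    = P1^0;
sbit RW    = P1^1;
sbit EN    = P1^2;
sbit PULSE = P0^0;

void delay_ms(unsigned int ms)           /* crude software delay */
{
    unsigned int i, j;
    for (i = 0; i < ms; i++)
        for (j = 0; j < 120; j++);       /* ~1 ms near 12 MHz */
}

void lcd_write(unsigned char byte, unsigned char is_data)
{
    RS = is_data ? 1 : 0;                /* 0 = command, 1 = data */
    RW = 0;                              /* write cycle */
    P2 = byte;
    EN = 1; delay_ms(1); EN = 0;         /* latch on falling edge */
    delay_ms(2);
}

void lcd_init(void)
{
    lcd_write(0x38, 0);                  /* 8-bit bus, 2 lines */
    lcd_write(0x0C, 0);                  /* display on, cursor off */
    lcd_write(0x01, 0);                  /* clear display */
    delay_ms(2);
}

void show_count(unsigned int n)
{
    lcd_write(0x80, 0);                  /* cursor to line 1, col 0 */
    lcd_write('0' + (n / 100) % 10, 1);  /* print three digits */
    lcd_write('0' + (n / 10) % 10, 1);
    lcd_write('0' + n % 10, 1);
}

void main(void)
{
    unsigned int count = 0;

    PULSE = 1;                           /* latch high: P0.0 as input */
    lcd_init();
    show_count(count);

    while (1) {
        if (PULSE) {                     /* rising edge of clap pulse */
            count++;
            show_count(count);
            while (PULSE);               /* wait for pulse to end */
            delay_ms(50);                /* debounce gap */
        }
    }
}
```

Polling with a debounce delay keeps the sketch simple; an edge-triggered external interrupt would be the natural refinement.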

Plug and play sensors

Introduction to Plug and play sensors

Most sensors put out some sort of analog signal (a voltage, current, or resistance) that varies in a fixed relationship to a physical parameter. Instrumentation that converts the signal to engineering units must know a few things about what parameters to expect from the sensor. That's because full-scale range, sensitivity, connection schemes, and other factors all vary greatly from one kind of sensor to another. The end result is that it takes some effort to match up sensors with monitoring electronics. This configuration process may be no big deal when there are only a few sensors involved. But it can be problematic when gangs of sensors get wired into multichannel measurement systems.


1.1 Sensor

A sensor is a device that measures a physical attribute or a physical event. It outputs a functional reading of that measurement as an electrical, optical or digital signal. That signal is data that can be transformed by other devices into information. The information can be used by either intelligent devices or monitoring individuals to make intelligent decisions and maintain or change a course of action.

1.2 Smart Sensors

A smart sensor is simply one that handles its own acquisition and conversion of data into a calibrated result in the units of the physical attribute being measured. For example, a traditional thermocouple simply provided an analog voltage output. The voltmeter was responsible for taking this voltage and transforming it into a meaningful temperature measurement through a set of fairly complex algorithms as well as an analog to digital acquisition. A smart sensor would do all that internally and simply provide a temperature number as data. Smart sensors do not make judgments on the data collected unless that data goes out of range for the sensor.

IEEE 1451.4 defines a relatively simple, straightforward mechanism for adding smart, plug and play capabilities to traditional analog sensors. Without adding any new hardware to the system, these plug and play sensors can bring real, immediate benefits in ease of use and productivity to any measurement and automation system that uses sensors. Additionally, IEEE 1451.4 defines a standard framework for sensor description, embodied specifically in the TEDS, which can scale from today's traditional analog sensors to tomorrow's smart networked sensors.
Two factors have promoted widespread adoption of plug-and-play sensors: the IEEE P1451.4 Smart Transducer Interface draft standard and the Internet. IEEE P1451.4 is a proposed standard for self-describing analog sensors using standardized Transducer Electronic Data Sheets (TEDS). The Internet can bring the plug-and-play concept to legacy sensors and systems via distribution of so-called virtual TEDS. The next generation of measurement and automation systems will use these concepts to become even more automated, robust, and smarter.
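To make the TEDS idea concrete, here is a simplified, hypothetical sketch of the kind of identification data a basic TEDS carries; the field names and sizes are illustrative only, not the normative bit-level layout of IEEE 1451.4.

```c
/* Simplified illustration of the identification data a basic
   IEEE 1451.4 TEDS carries. Field names/sizes are indicative,
   not the standard's normative bit-level layout. */
#include <stdio.h>

struct basic_teds {
    unsigned int  manufacturer_id;   /* who made the sensor    */
    unsigned int  model_number;      /* which product          */
    char          version_letter;    /* hardware revision      */
    unsigned int  version_number;
    unsigned long serial_number;     /* unique unit identifier */
};

/* On plug-in, the measurement system reads the TEDS and
   configures the channel automatically. */
static void configure_channel(const struct basic_teds *t)
{
    printf("Detected sensor: mfr=%u model=%u rev=%c%u s/n=%lu\n",
           t->manufacturer_id, t->model_number,
           t->version_letter, t->version_number, t->serial_number);
    /* ...look up full-scale range, sensitivity, etc. from the
       template associated with this model... */
}

int main(void)
{
    struct basic_teds t = { 17u, 2075u, 'A', 1u, 9000123ul };
    configure_channel(&t);
    return 0;
}
```

In a real 1451.4 system these bits are read from a small memory in the sensor over the mixed-mode interface described below.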

The 1451.4 standard calls for a mixed-mode interface: digital and analog signals are transferred back and forth between the signal conditioning hardware and the sensor, making it more compatible with the legacy sensors in place. The first three versions called for sending only digital data back to the computer. The next version of the standard being drafted now is P1451.5, which adds wireless capability to sensors. Right now, P1451.4 is still a proposed standard, which is expected to be ratified and issued soon.

Plug-and-play sensors address the labor involved in connection and configuration. Based on open industry standards, plug-and-play sensors incorporate ways of automatically identifying themselves. Benefits include quicker, more automated system setup; better diagnostics; less downtime for sensor repair and replacement; and an easier time keeping track of sensors themselves as well as the data they generate.


The IEEE P1451.4 plug-and-play sensor concept appears to be one of those rare technologies whose strength and value come from its simplicity and focus. Although it doesn't fit many of the typical definitions of a smart sensor, it does provide real, tangible benefits in a way that builds on, not replaces, existing systems and technologies.

Razor Technology



A codesign methodology incorporates timing speculation into a low-power microprocessor pipeline and shaves energy levels far below the point permitted by worst-case computation paths.

An old adage says, "If you're not failing some of the time, you're not trying hard enough." To address the power challenges that current on-chip densities pose, we adapted this precept to circuit design. Razor [1], a voltage-scaling technology based on dynamic detection and correction of circuit timing errors, permits design optimizations that tune the energy in a microprocessor pipeline to typical circuit operational levels. This eliminates the voltage margins that traditional worst-case design methodologies require and allows digital systems to run correctly and robustly at the edge of minimum power consumption.

Occasional heavyweight computations may fail and require additional time and energy for recovery, but the overall computation in the optimized pipeline requires significantly less energy than traditional designs.

Razor supports timing speculation through a combination of architectural and circuit techniques, which we have implemented in a prototype Razor pipeline in 0.18-micrometer technology. Simulation results of the SPEC 2000 benchmarks showed energy savings for every benchmark, up to a 64 percent savings with less than 3 percent performance impact for error recovery.

2. SPEED, ENERGY, AND VOLTAGE SCALING

Both circuit speed and energy dissipation depend on voltage. The speed, or clock frequency $f$, of a digital circuit is proportional to the supply voltage $V_{dd}$:

$f \propto V_{dd}$

The energy $E$ necessary to operate a digital circuit for a time duration $T$ is the sum of two energy components:

$E = S C V_{dd}^{2} + V_{dd} I_{leak} T$

where the first term models the dynamic power lost from charging and discharging the capacitive loads within the circuit, and the second term models the static power lost to passive leakage current, that is, the small amount of current that leaks through transistors even when they are turned off.

The dynamic power loss depends on the total number of signal transitions, $S$, the total capacitance load of the circuit wires and gates, $C$, and the square of the supply voltage. The static power loss depends on the supply voltage, the rate of current leakage through the circuit, $I_{leak}$, and the duration of operation during which leakage occurs, $T$.

2.1 Dynamic Voltage Scaling
Dynamic voltage scaling has emerged as a powerful technique to reduce circuit energy demands. In a DVS system, the application or operating system identifies periods of low processor utilization that can tolerate reduced frequency. With reduced frequency, similar reductions are possible in the supply voltage. Since dynamic power scales quadratically with supply voltage, DVS technology can significantly reduce energy consumption with little impact on perceived system performance.
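To make the quadratic saving concrete (a textbook illustration, not taken from the article): cutting $V_{dd}$ to 70% of nominal scales the dynamic energy term by

$(0.7)^2 = 0.49$

roughly halving dynamic energy, at the cost of a proportional drop in clock frequency, since $f \propto V_{dd}$.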

2.2 Error-Tolerant DVS

Razor is an error-tolerant DVS technology. Its error-tolerance mechanisms eliminate the need for the voltage margins that designing for "always correct" circuit operation requires. The improbability of the worst-case conditions that drive traditional circuit design underlies the technology.

Automatic Vehicle Locator



Definition


Automatic vehicle location (AVL) is a computer-based vehicle tracking system. For transit, the actual real-time position of each vehicle is determined and relayed to a control center. Actual position determination and relay techniques vary, depending on the needs of the transit system and the technologies employed. Transit agencies often incorporate other advanced system features in conjunction with AVL system implementation. Simple AVL systems include computer-aided dispatch software, mobile data terminals, emergency alarms, and digital communications. More sophisticated AVL systems may integrate real-time passenger information, automatic passenger counters, and automated fare payment systems. Other components that may be integrated with AVL systems include automatic stop annunciation, automated destination signs, vehicle component monitoring, and traffic signal priority. AVL technology allows improved schedule adherence and timed transfers, more accessible passenger information, increased availability of data for transit management and planning, and efficiency/productivity improvements in transit services.

What is AVL technology?

Automated Vehicle Locator (AVL) systems use satellite and land communications to display each vehicle's location, status, heading, and speed on the computer's screen. AVL systems use one of four types of navigation technology, or may combine two of these technologies to compensate for inevitable shortcomings of any one technology. The four principal technologies employed for AVL systems are:

1. Global Positioning System
2. Dead-Reckoning System
3. Signpost/Odometer Systems
4. Radio Navigation/Location

Buses equipped with AVL offer many possibilities for transit interface with highway and traffic organizations or transportation management centers. Opportunities include: providing transit buses with traffic signal priority; obtaining traffic congestion data at the dispatch center to allow rerouting of buses or informing customers of delay; incorporating transit information in traveler information systems; developing multi-application electronic payment systems; using buses to automatically communicate traffic speed; and reporting of roadway incidents by transit vehicle operators.

Humanoid Robots



Definition


Human-robot interaction (HR) has recently received considerable attention in the academic community, in labs, in technology companies, and through the media. Because of this attention, it is desirable to present a survey of HR to serve as a tutorial to people outside the field and to promote discussion of a unified vision of HR within the field. The goal of this review is to present a unified treatment of HR-related problems, to identify key themes, and to discuss challenge problems that are likely to shape the field in the near future. Although the review follows a survey structure, the goal of presenting a coherent "story" of HR means that there are necessarily some well-written, intriguing, and influential papers that are not referenced. Instead of trying to survey every paper, we describe the HR story from multiple perspectives with an eye toward identifying themes that cross applications.

A humanoid robot is a robot whose overall appearance is based on that of the human body, allowing interaction with made-for-human tools or environments. In general, humanoid robots have a torso with a head, two arms and two legs, although some forms of humanoid robots may model only part of the body, for example, from the waist up. Some humanoid robots may also have a "face", with "eyes" and "mouth". Androids are humanoid robots built to aesthetically resemble a human. A humanoid robot is an autonomous robot, because it can adapt to changes in its environment or itself and continue to reach its goal. This is the main difference between humanoid and other kinds of robots. In this context, some of the capacities of a humanoid robot may include, among others:
self-maintenance (like recharging itself)
autonomous learning (learn or gain new capabilities without outside assistance, adjust strategies based on the surroundings and adapt to new situations)
avoiding harmful situations to people, property, and itself
safe interaction with human beings and the environment. Like other mechanical robots, humanoids also comprise the basic components of sensing, actuating, and planning and control. Since they try to simulate the human structure and behavior, and they are autonomous systems, humanoid robots are most of the time more complex than other kinds of robots.

Nanomachines



Nanomachines are machines with dimensions in the range of nanometers. They include micro-scale replicas of present-day machines, like nanogears, nanoarms or nanorobots, as well as futuristic machines which have no present-day analogs, like the assembler, which can assemble atoms to produce further machines or other assemblers.

Though there can be analogs of today's mechanical components, the way in which these two categories are manufactured stands in complete contrast with regard to what we can call the direction of manufacturing. While today's machines are manufactured by the top-down approach, in which they are machined down from larger components or from the bulk of the component material, nanomachines will be manufactured by the bottom-up approach, in which they are built atom by atom, placing the individual atoms precisely at the required positions.

No nanomachines in the true sense have yet been manufactured, although the feasibility of producing several of them has been confirmed by various means. The main difficulty lies in the inability of today's handling facilities to manipulate such small particles. Producing them by means of chemical reactions is the most apt approach for controlling atomic positions. Scientists have been able to obtain virtual machines by means of computer-simulated chemical reactions, and this proves their feasibility. The distance from actually making them will be bridged by finding ways to control and predict the outcome of chemical reactions more quickly and precisely.

Synthesis Of Nanomachines

The present generation of micromachines, which fall into the category of nanomachines in the sense that they are made by molecular technology, are currently synthesized by means of chemical reactions. As of now, chemical synthesis is conducted almost exclusively in solution, where reagent molecules move by diffusion and encounter one another in random positions and orientations.

The prominent synthesizing techniques can be classified as follows:

• Solution-phase synthesis

• Enzymatic synthesis

• Mechanosynthesis

• Biosynthesis

Solution-phase synthesis poses familiar problems of reaction specificity. Although many small-molecule reactions proceed cleanly and have high yields, large molecules with many functional groups present multiple potential reaction sites and hence, can be converted into multiple products.

Although a spectrum of intermediate cases can be identified, enzymatic synthesis differs significantly from the standard solution-phase model. Enzymatic reactions begin with reagent binding, which places molecules in well-controlled positions and orientations. The resulting high effective concentrations produce high reaction rates.

Mechanosynthesis differs from enzymatic catalysis, yet many of the same principles apply. One can perform mechanosynthesis by using macroscopic devices, such as scanning tunneling and atomic force microscope (STM and AFM) mechanisms. The first clear example of a mechanically controlled synthesis was the arrangement of 35 xenon atoms on a nickel crystal to spell "IBM".

Biosynthesis involves the synthesis of biological materials.

Wednesday 5 June 2013

Illumination with Solid State lighting



Light emitting diodes (LEDs) have gained broad recognition as the ubiquitous little lights that tell us that our monitors are on, the phone is off the hook or the oven is hot. The basic principle behind the emission of light is that when charge-carrier pairs recombine in a semiconductor with an appropriate energy band gap, light is generated. In a forward-biased diode, little recombination occurs in the depletion layer; most occurs within a few microns of either the P-region or the N-region, depending on which one is lightly doped. LEDs produce narrow-band radiation, with wavelength determined by the energy band gap of the semiconductor.
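The band gap fixes the emission wavelength through the standard photon-energy relation (added here for clarity):

$\lambda = \frac{hc}{E_g} \approx \frac{1240\ \text{nm}\cdot\text{eV}}{E_g}$

For example, a red GaAsP LED with $E_g \approx 1.9\ \text{eV}$ emits near $\lambda \approx 650\ \text{nm}$.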

Solid-state electronics have been replacing their vacuum tube predecessors for almost five decades. However, in the next decade solid-state lighting devices will be bright enough, efficient enough and inexpensive enough to replace conventional lighting sources (i.e. incandescent bulbs, fluorescent tubes).

Recent developments in AlInGaP and blue/green InGaN semiconductor growth technology have enabled applications where several, to several millions, of these indicator LEDs can be packed together to be used in full-color signs, automotive tail lamps, traffic lights, etc. Still, the preponderance of applications requires that the viewer look directly into the LED. This is not "SOLID STATE LIGHTING".

Artificial lighting sources share three common characteristics:
- They are rarely viewed directly: light from the source is seen as a reflection off the illuminated object.
- The unit of measure is the kilolumen or higher, not the millilumen or lumen as is the case with LEDs.
- Lighting sources are predominantly white, with CIE color coordinates producing excellent color rendering.
Today there is no commercially available "SOLID STATE LAMP". However, high-power LED sources are being developed, which will evolve into lighting sources.





EVOLUTION OF LEDs

The first practical LED was developed in 1962 and was made of a compound semiconductor alloy, gallium arsenide phosphide, which emitted red light. From 1962, compound semiconductors would provide the foundation for the commercial expansion of LEDs. From 1962, when the first LEDs were introduced at 0.001 lm/LED using GaAsP, until the mid-1990s, commercial LEDs were used exclusively as indicators. In terms of the number of LEDs sold, indicators and other small-signal applications in 2002 still consumed the largest volume of LEDs, with annual global consumption exceeding several LEDs per person on the planet.

Analogous to the famous Moore's law in silicon, which predicts a doubling of the number of transistors on a chip every 18-24 months, LED luminous output has been following Haitz's law, doubling every 18-24 months for the past 34 years.
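Compounded over that period (arithmetic added here for scale), 34 years at one doubling every 18-24 months gives roughly $2^{17}$ to $2^{22}$ doublings' worth of growth, a $10^{5}$- to $10^{6}$-fold improvement, which would take the 0.001 lm devices of 1962 into the range of 100 lm or more per LED.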

Hybrid Electric Vehicle



Have you pulled your car up to the gas pump lately and been shocked by the high price of gasoline? As the pump clicked past $20 or $30, maybe you thought about trading in that SUV for something that gets better mileage. Or maybe you are worried that your car is contributing to the greenhouse effect. Or maybe you just want to have the coolest car on the block.

Currently, there is a solution for all these problems: the hybrid electric vehicle. The vehicle is lighter and roomier than a purely electric vehicle, because there is less need to carry as many heavy batteries. The internal combustion engine in a hybrid electric vehicle is much smaller, lighter and more efficient than the engine in a conventional vehicle. In fact, most automobile manufacturers have announced plans to manufacture their own hybrid versions.

How does a hybrid car work? What goes on under the hood to give you 20 or 30 more miles per gallon than the standard automobile? And does it pollute less just because it gets better gas mileage? In this seminar we will study how this amazing technology works and also discuss TOYOTA & HONDA hybrid cars.







WHAT IS A "HYBRID ELECTRIC VEHICLE"?

Any vehicle is hybrid when it combines two or more sources of power. In fact, many people have probably owned a hybrid vehicle at some point. For example, a mo-ped (a motorized pedal bike) is a type of hybrid because it combines the power of a gasoline engine with the pedal power of its rider.

Hybrid electric vehicles are all around us. Most of the locomotives we see pulling trains are diesel-electric hybrids. Cities like Seattle have diesel-electric buses -- these can draw electric power from overhead wires or run on diesel when they are away from the wires. Giant mining trucks are often diesel-electric hybrids. Submarines are also hybrid vehicles -- some are nuclear-electric and some are diesel-electric. Any vehicle that combines two or more sources of power that can directly or indirectly provide propulsion power is a hybrid.

The most commonly used hybrid is the gasoline-electric hybrid car, which is just a cross between a gasoline-powered car and an electric car. A "gasoline-electric hybrid car" or "hybrid electric vehicle" is a vehicle which relies not only on batteries but also on an internal combustion engine, which drives a generator to provide the electricity and may also drive a wheel. In a hybrid electric vehicle the engine is the final source of the energy used to power the car. All-electric cars use batteries charged by an external source, leading to the problem of limited range, which the hybrid electric vehicle solves.






HYBRID STRUCTURE

You can combine the two power sources found in a hybrid car in different ways. One way, known as a parallel hybrid, has a fuel tank, which supplies gasoline to the engine. But it also has a set of batteries that supplies power to an electric motor. Both the engine and the electric motor can turn the transmission at the same time, and the transmission then turns the wheels.

Digital Testing of High Voltage Circuit Breaker



With the advancement of power system, the lines and other equipment operate at very high voltages and carry large currents. High-voltage circuit breakers play an important role in transmission and distribution systems. A circuit breaker can make or break a circuit, either manually or automatically under all conditions viz. no-load, full-load and short-circuit conditions. The American National Standard Institute (ANSI) defines circuit breaker as: "A mechanical switching device capable of making, carrying and breaking currents under normal circuit conditions and also making, carrying for a specified time, and breaking currents under specified abnormal circuit conditions such as those of short circuit". A circuit breaker is usually intended to operate infrequently, although some types are suitable for frequent operation.


ESSENTIAL QUALITIES OF HV CIRCUIT BREAKER

High-voltage circuit breakers play an important role in transmission and distribution systems. They must clear faults and isolate faulted sections rapidly and reliably. In short, they must possess the following qualities.

" In closed position they are good conductors.
" In open position they are excellent insulators.
" They can close a shorted circuit quickly and safely without unacceptable contact erosion.
" They can interrupt a rated short-circuit current or lower current quickly without generating an abnormal voltage.

The only physical mechanism that can change in a short period of time from a conducting to insulating state at a certain voltage is the arc.



HISTORY

The first circuit breaker was developed by J.N. Kelman in 1901. It was the predecessor of the oil circuit breaker and was capable of interrupting a short-circuit current of 200 to 300 amperes in a 40 kV system. The circuit breaker was made up of two wooden barrels containing a mixture of oil and water in which the contacts were immersed. Since then, circuit breaker design has undergone remarkable development. Nowadays one pole of a circuit breaker is capable of interrupting 63 kA in a 550 kV network with SF6 gas as the arc-quenching medium.



THE NEED FOR TESTING

Almost all people have experienced the effects of protective devices operating properly. When an overload or a short circuit occurs in the home, the usual result is a blown fuse or a tripped circuit breaker. Fortunately few have the misfortune to see the results of a defective device, which may include burned wiring, fires, explosions, and electrical shock.

It is often assumed that the fuses and circuit breakers in the home or industry are infallible, and will operate safely when called upon to do so ten, twenty, or more years after their installation. In the case of fuses, this may be a safe assumption, because a defective fuse usually blows too quickly, causing premature opening of the circuit, and forcing replacement of the faulty component. Circuit breakers, however, are mechanical devices, which are subject to deterioration due to wear, corrosion and environmental contamination, any of which could cause the device to remain closed during a fault condition. At the very least, the specified time delay may have shifted so much that proper protection is no longer afforded to devices on the circuit, or improper coordination causes a main circuit breaker or fuse to open in an inconvenient location.

Proteomics



Definition


Proteomics is something new in the field of biotechnology. It is basically the study of the proteome, the collective body of proteins made by a person's cells and tissues. Since it is proteins, and to a much lesser extent other types of biological molecules, that are directly involved in both normal and disease-associated biochemical processes, a more complete understanding of disease may be gained by directly looking at the proteins present within a diseased cell or tissue, and this is achieved through the study of the proteome: proteomics. For proteomics, we need 2-D electrophoresis equipment to separate the proteins, mass spectrometry to identify them, and X-ray crystallography to learn more about the structure and function of the proteins. These pieces of equipment are essential in the study of proteomics.

From The Genome To The Proteome

Genomics has provided a vast amount of information linking gene activity with disease. It is now recognized that gene sequence information and the pattern of gene activity in a cell do not provide a complete and accurate profile of a protein's abundance or its final structure and state of activity. The days of the spotlight on the human genome are now coming to an end. Researchers are now concentrating on the human proteome, the collective body of all the proteins made by a person's cells and tissues. The genome, the full set of information in the body, contains only the recipes for making proteins; it is the proteins that constitute the bricks and mortar of cells and that do most of the work. Moreover, it is the proteins that distinguish the various types of cells: although all cells have essentially the same genome, they can vary in which genes are active and thus in which proteins are made. Likewise, diseased cells often produce proteins that healthy cells don't, and vice versa. Proteome research permits the discovery of new protein markers for diagnostic purposes and of novel molecular targets for drug discovery.

Proteins

All living things contain proteins. The structure of a cell is largely built of proteins. Proteins are complex, three-dimensional substances composed of one or more long, folded polypeptide chains. These chains, in turn, consist of small chemical units called amino acids. There are twenty kinds of amino acids involved in protein production, and any number of them may be linked in any order to form the polypeptide chain. The order of the amino acids in the polypeptide chain is decided by the information contained in the DNA of the cell's genes. Following translation, most proteins are chemically changed through post-translational modification (PTM), mainly through the addition of carbohydrate and phosphate groups. Such modification plays an important role in modulating the function of many proteins, but the genes do not code for it. As a consequence, the information from a single gene can encode as many as fifty different protein species. It is clear that genomic information often does not provide an accurate profile of protein abundance, structure and activity.
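The scale of that combinatorial freedom is easy to state (a standard calculation, not in the original): with 20 amino acids, the number of distinct polypeptide chains of length $n$ is $20^{n}$; even a modest 100-residue chain admits $20^{100} \approx 10^{130}$ possible sequences.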

Palm Vein Technology



Definition


An individual first rests his wrist, and on some devices the middle of his fingers, on the sensor's supports, such that the palm is held centimeters above the device's scanner, which flashes a near-infrared ray onto the palm. Unlike the skin, through which near-infrared light passes, deoxygenated hemoglobin in the blood flowing through the veins absorbs near-infrared rays, making the veins visible to the scanner. Arteries and capillaries, whose blood contains oxygenated hemoglobin, which does not absorb near-infrared light, are invisible to the sensor. The still image captured by the camera, which photographs in the near-infrared range, appears as a black network, reflecting the palm's vein pattern against the lighter background of the palm.

An individual's palm vein image is converted by algorithms into data points, which are then compressed, encrypted, and stored by the software and registered along with the other details in his profile as a reference for future comparison. Then, each time a person attempts to gain access by a palm scan to a particular bank account or secured entryway, etc., the newly captured image is likewise processed and compared to the registered one, or to the bank of stored files, for verification, all in a period of seconds. The numbers and positions of veins and their crossing points are all compared and, depending on verification, the person is either granted or denied access.
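The matching algorithms used by commercial palm-vein systems are proprietary; purely as an illustration of the compare-data-points step described above, the hypothetical C sketch below counts how many captured feature points (e.g., vein crossings) fall within a tolerance of a stored template and grants access above a threshold. All names, values and the matching rule are invented for illustration.

```c
/* Toy illustration of template matching on extracted vein feature
   points. Real palm-vein algorithms are proprietary; this only
   shows the general verify-against-template idea. */
#include <stdio.h>
#include <math.h>

struct point { double x, y; };

/* A captured point matches if some template point lies within tol. */
static int matches(struct point p, const struct point *tmpl, int n, double tol)
{
    int i;
    for (i = 0; i < n; i++)
        if (hypot(p.x - tmpl[i].x, p.y - tmpl[i].y) <= tol)
            return 1;
    return 0;
}

/* Grant access when enough captured points agree with the template. */
static int verify(const struct point *cap, int nc,
                  const struct point *tmpl, int nt,
                  double tol, double threshold)
{
    int i, hits = 0;
    for (i = 0; i < nc; i++)
        hits += matches(cap[i], tmpl, nt, tol);
    return (double)hits / nc >= threshold;
}

int main(void)
{
    struct point tmpl[] = { {10, 12}, {25, 30}, {40, 18}, {55, 44} };
    struct point cap[]  = { {10.4, 11.8}, {24.7, 30.2}, {41, 17.5}, {70, 70} };
    int ok = verify(cap, 4, tmpl, 4, 1.5, 0.7);
    printf("access %s\n", ok ? "granted" : "denied");
    return 0;
}
```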

Contactless Palm Vein Authentication Device:

The completely contactless feature of this device makes it suitable for use where high levels of hygiene are required. It also eliminates any hesitation people might have about coming into contact with something that other people have already touched.

In addition to being contactless and thereby hygienic and user-friendly in that the user does not need to physically touch a surface and is free of such hygiene concerns, palm vein authentication is highly secure in that the veins are internal to the body and carry a wealth of information, thereby being extremely difficult to forge.

What happens if the registered palm gets damaged?

There is a chance that the palm we registered may get damaged, in which case we cannot use this technology. So, at the time of registration, we capture the veins of both hands, so that if one is damaged we can authenticate with the second hand. Even when a hand is damaged to a large extent we can still obtain the veins, because the veins lie deep within the hand. With this method we can maintain complete privacy.

Blue Brain



Definition


Blue brain " -The name of the world's first virtual brain. That means a machine that can function as human brain. Today scientists are in research to create an artificial brain that can think, response, take decision, and keep anything in memory. The main aim is to upload human brain into machine. So that man can think, take decision without any effort. After the death of the body, the virtual brain will act as the man .So, even after the death of a person we will not loose the knowledge, intelligence, personalities, feelings and memories of that man that can be used for the development of the human society.

No one has ever fully understood the complexity of the human brain. It is more complex than any circuitry in the world. So the question may arise: "Is it really possible to create a human brain?" The answer is "Yes", because whatever man has created today, he has done so by following nature. When man did not yet have the device called the computer, it was a big question for all. But today it is possible thanks to technology. Technology is growing faster than everything else. IBM is now researching how to create a virtual brain, called the "Blue brain". If possible, this would be the first virtual brain of the world.

How is it possible?

First, it is helpful to describe the basic manner in which a person may be uploaded into a computer. Raymond Kurzweil recently provided an interesting paper on this topic. In it, he describes both invasive and noninvasive techniques. The most promising is the use of very small robots, or nanobots. These robots will be small enough to travel throughout our circulatory systems. Traveling into the spine and brain, they will be able to monitor the activity and structure of our central nervous system. They will be able to provide an interface with computers that is as close as our mind can be while we still reside in our biological form. Nanobots could also carefully scan the structure of our brain, providing a complete readout of the connections between each neuron. They would also record the current state of the brain. This information, when entered into a computer, could then continue to function as us. All that is required is a computer with large enough storage space and processing power. Is the pattern and state of neuron connections in our brain truly all that makes up our conscious selves? Many people believe firmly that we possess a soul, while some very technical people believe that quantum forces contribute to our awareness. But we have to think technically now. Note, however, that we need not know how the brain actually functions to transfer it to a computer. We need only know the media and contents. The actual mystery of how we achieved consciousness in the first place, or how we maintain it, is a separate discussion.

Uploading the human brain:

The uploading is made possible by the use of small robots known as nanobots. These robots are small enough to travel throughout our circulatory system. Traveling into the spine and brain, they will be able to monitor the activity and structure of our central nervous system. They will be able to provide an interface with computers that is as close as our mind can be while we still reside in our biological form. Nanobots could also carefully scan the structure of our brain, providing a complete readout of the connections. This information, when entered into a computer, could then continue to function as us. Thus the data stored in the entire brain will be uploaded into the computer.