Friday, March 20, 2009

VLSI Test Group





With the advancement of fabrication technology, post-manufacturing test of VLSI circuits is facing new challenges. The test team members of the Advanced VLSI Design Laboratory have been working on various projects and novel approaches to circuit testing. Testing of the chips designed and fabricated in-house is carried out by this group. Design activities related to ATPG and DFT for both analog and digital chips have been taken up. Research in the areas of DFT-compliant analog and digital chips with BIST capabilities has been undertaken. In addition, practical training of students in advanced electronic circuit testing has been carried out. Areas of strength span DFT, ATPG, and BIST for both analog and digital designs.

DEALING WITH VLSI CIRCUITS

Digital VLSI circuits are predominantly CMOS based. The way common blocks like latches and gates are implemented differs from what students have seen so far, but the behaviour remains the same. Miniaturisation brings new factors to consider, and a lot of thought has to go into actual implementation as well as design. Let us look at some of the factors involved ...

1. Circuit Delays. Large, complicated circuits running at very high frequencies have one big problem to tackle - the delays in propagation of signals through gates and wires ... even across areas only a few micrometres wide! Operating frequencies are now so high that, as these delays add up, they can become comparable to the clock period itself.

2. Power. Another effect of high operating frequencies is increased power consumption. This has a two-fold effect - devices drain batteries faster, and heat dissipation increases. Coupled with the fact that surface areas have shrunk, heat poses a major threat to the stability of the circuit itself.

3. Layout. Laying out the circuit components is a task common to all branches of electronics. What is special in our case is that there are many possible ways to do this: there can be multiple layers of different materials on the same silicon, there can be different arrangements of the smaller parts that make up the same component, and so on.


Power dissipation and speed in a circuit present a trade-off; if we try to optimise one, the other is affected. The balance between the two is determined by how we choose to lay out the circuit components. Layout also affects the fabrication of VLSI chips, making the components easier or harder to implement on the silicon.

What is VLSI?

VLSI stands for "Very Large Scale Integration". This is the field which involves packing more and more logic devices into smaller and smaller areas. Thanks to VLSI, circuits that would once have filled whole boards can now fit into a space a few millimeters across! This has opened up a big opportunity to do things that were not possible before. VLSI circuits are everywhere ... your computer, your car, your brand new state-of-the-art digital camera, your cell phone, and what have you. All this involves a lot of expertise on many fronts within the same field, which we will look at in later sections.

VLSI has been around for a long time; there is nothing new about it. But as a side effect of advances in the world of computers, there has been a dramatic proliferation of tools that can be used to design VLSI circuits. Alongside, obeying Moore's law, the capability of an IC has increased exponentially over the years in terms of computation power, utilisation of available area, and yield. The combined effect of these two advances is that people can now put diverse functionality into ICs, opening up new frontiers. Examples are embedded systems, where intelligent devices are put inside everyday objects, and ubiquitous computing, where small computing devices proliferate to such an extent that even the shoes you wear may do something useful like monitoring your heartbeat! These two fields are closely related, and describing them could easily fill another article.

Government Regulations on Emission and Safety Standards Boost the Automotive Electronic Control Units Market in India

MUMBAI, India, March 18, 2009 /PRNewswire via COMTEX/ -- Egged on by factors such as strong consumer demand for enhanced safety features, the need for compliance with emission regulations, and original equipment manufacturers' (OEMs) commitment to introduce novel products, the automotive electronic control units (ECUs) market in India is witnessing steady growth. Comfort and convenience top the agenda for customers, triggering growth especially in the body control and steering systems segment. Emission norms and safety regulations will further drive the uptake of ECUs.

New analysis from Frost & Sullivan (http://www.automotive.frost.com), Automotive Electronic Control Units Market in India, finds that the OEM market in India, which stood at 2.5 million units in 2007, is expected to see a rise of 23.8 percent annually.

If you are interested in a virtual brochure, which provides a brief synopsis of the research and a table of contents, then send an e-mail to Ravinder Kaur / Nimisha Iyer, Corporate Communications, at ravinder.kaur@frost.com / niyer@frost.com, with your full name, company name, title, telephone number, company e-mail address, company website, city, state and country. Upon receipt of the above information, a brochure will be sent to you by e-mail.

"Government regulations aimed at making the vehicle fleet safer and more environment friendly will increase the number of ECUs installed on mid-level to economy level vehicles in India," says Frost & Sullivan Research Analyst. "Introduction of BS IV in passenger and utility vehicles and BS III in commercial vehicles is also likely to boost demand for ECUs after 2010."

The availability of better highways and intra-city roads has facilitated quick access to destinations, and customers are more inclined to own fast yet safe cars. Fierce competition characterizes the market, and manufacturers are focusing on product differentiation to gain the upper hand.

With regard to passenger cars and utility vehicles, the market will grow threefold in the near future; newer models will be launched offering higher technology content, thereby driving the demand for ECUs. Increased safety awareness among consumers is fueling the demand for anti-lock braking systems (ABS) and airbags, while demand for convenience features is driving the uptake of body control systems. The engine management system holds a major share of the ECU market, making huge inroads in the passenger vehicle segment. Automatic transmission and climate control are still at a fledgling stage, with indications pointing to market growth of around 30.0 percent by 2012.

"In light of the sheer number of new electronic systems incorporated into vehicles, continuous restructuring of the R&D of supplier companies is necessitated," explains the Analyst. "Electronic systems suppliers are required to keep pace with the technological advancements and roll out smaller, lighter, and more intelligent systems to stay on top of the competition."

However, trends indicate that automakers prefer to limit the number of key suppliers, opting for suppliers that offer unparalleled products and services. Sourcing electronic components from a supplier that can offer a wider range of expertise and products is far easier and less costly.

As electronics contracts are worth millions of dollars, suppliers cannot afford to underestimate these criteria. ECU volumes supplied to OEMs tend to be large. By employing cost-cutting strategies, suppliers are targeting contracts on smaller margins as OEMs continually demand lower prices. Along with price, quality, delivery, and service are prerequisites for manufacturers to successfully garner contracts.

Automotive Electronic Control Units Market in India is part of the Automotive and Transportation Growth Partnership Service program, which also includes research in the following markets: commercial vehicles market in India, logistics industry benchmarking in India, automotive engineering services market in India. All research services included in subscriptions provide detailed market opportunities and industry trends that have been evaluated following extensive interviews with market participants. Interviews with the press are available.

Frost & Sullivan, the Growth Partnership Company, enables clients to accelerate growth and achieve best in class positions in growth, innovation and leadership. The company's Growth Partnership Service provides the CEO and the CEO's Growth Team with disciplined research and best practice models to drive the generation, evaluation and implementation of powerful growth strategies. Frost & Sullivan leverages over 45 years of experience in partnering with Global 1000 companies, emerging businesses and the investment community from 31 offices on six continents. To join our Growth Partnership, please visit http://www.frost.com.

Automotive Electronic Control Units Market in India
P235

Contact:
Ravinder Kaur
Corporate Communications - South Asia
P: +91-44-4204-4760
F: +91-44-2431-4264
E: ravinder.kaur@frost.com

Tanu Chopra
Corporate Communications - Middle East
P: +91-22-4001-3437
F: +91-22-2832-4713
E: tanuc@frost.com

Nimisha Iyer
Corporate Communications - South Asia & Middle East
P: +91-22-4001-3404
F: +91-22-2832-4713
E: niyer@frost.com
http://www.frost.com

Tuesday, March 10, 2009

wind energy


Wind: an important source of power in China

Wind power in China currently accounts for only 1.3% of total power output – compared with coal-fired power at 75% – but just three years ago wind power accounted for only one thousandth of total power production in China.

At that staggering pace of development, the contribution of wind to total capacity will continue to increase, as the domestic wind industry matures and the cost per kW decreases. At the same time, the restructuring of the power industry will result in a more sustainable mix of power sources in the future.

China’s wind energy potential is enormous. Chinese sources estimate that exploitable ‘wind resources’ that are available on land in China may be as high as 600–1000 GW, and that close-in offshore exploitable wind power potential accounts for another 700 GW.

Since the Renewable Energy Law took effect on 1 January 2006, China's installed wind capacity has increased from 2300 MW at the end of 2005, to in excess of 3200 MW at the end of 2006, and to 5900 MW at the end of 2007, by which time China had built more than 100 wind farms in 22 provinces and cities. As of mid-2008 the country had installed more than 7000 MW of wind capacity and, according to the latest available figures, was on track to reach the symbolically important milestone of 10 GW by the end of the year – two years ahead of the revised goal.

By 2010 cumulative installed wind capacity may reach 15–20 GW, and after 2011 China is expected to be adding new capacity at the rate of 7–10 GW per year. Analysts currently predict that China's base of wind power installations will total 50–60 GW by 2015, and that by 2020 it will account for 80–100 GW. The goal for 2020 was revised upward four-fold from the 30 GW goal set by the Mid to Long-Term Development Plan for Renewable Energy, promulgated by Beijing in September 2007 (see Figure 1, which shows actual and revised projections for wind installations in China, in MW).

professor


‘The future of wind energy in Europe will be found to a large extent on the open seas’

Per Hornung Pedersen, CEO of REpower Systems

REpower Systems AG

Developer of 2–3.3 MW onshore turbines and a large offshore machine, this major manufacturer is majority-held by fellow wind turbine player Suzlon.

REpower is one of the leading turbine producers in the German wind energy sector where it is the third largest manufacturer.

Founded in 2001, REpower offers a product range of turbines between 2 MW and 5 MW, and to date its machines have been installed at more than 1500 wind projects. The REpower 5M 5 MW turbine – currently one of the largest in the world with a rotor diameter of 126 metres – has been designed primarily for offshore wind farms, and the company is currently installing three of its new 6 MW 6M turbines onshore in Germany. In December 2008 the company completed the assembly of these first three prototype turbines in Bremerhaven.

With approximately 1600 employees, REpower has offices in Germany along with subsidiaries and associated companies in France, Spain, the UK, Greece, Australia, China, Portugal, Italy and elsewhere and the company is actively pursuing an internationalization strategy.

However, due to an expected slow-down of the wind energy market in 2009 – which might be reflected in a stagnating or even slightly declining number of new installations – REpower has adjusted its sales growth forecast for 2009-2010 from the previous estimated 40%–50% to 30%–35%.

According to its results to 30 September 2008, the company's order backlog amounted to 683 turbines with a total rated power of more than 1434 MW, corresponding to around €1.5 billion (up from €1.2 billion in the previous year's figures). For the current fiscal year REpower is expecting an increase in sales to €1.1 billion and an earnings margin of 5.5%–6.5%. Sales in the current reporting period stand at €529.8 million, compared with €275.3 million in the corresponding period of 2007. Net profit rose from €5.0 million to €14.4 million.

REpower expects further strong growth in the global wind energy market in the coming years, driven primarily by Europe and America. Furthermore, the company expects its offshore business to grow, citing 1500 MW installed by 2011 as ‘reasonable’.

In November, REpower and Deutsche Offshore Testfeld und Infrastrukturgesellschaft mbH & Co. KG (DOTI) signed a contract for the supply and installation of six 5M wind turbines for the Alpha Ventus project. The installation of the turbines is expected to begin in the middle of July.

Friday, March 6, 2009

bus bars



Front Side Bus (FSB)

Overall processor performance relies on several internal and external factors, one of which is the processor's front side bus (FSB) speed; two common figures for the Intel Pentium 4 are 533MHz and 800MHz.

The front side bus consists of two channels, one for transferring data, and one for indicating the memory address where the data is to be retrieved from or stored.

The front side bus transfers data between the processor and the computer's other components such as memory, hard drives, etc. The FSB has a certain width (measured in bits) which dictates how many bits can be transferred at any one time. As the 533MHz and 800MHz figures suggest, the FSB also has a clock frequency indicating how fast the data can be transferred. For example, a processor with an FSB width of 32 bits running at 533MHz can transfer a set of 32 bits of data 533,000,000 times a second.
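As a quick back-of-the-envelope check of that figure, here is a minimal sketch in Python using the 32-bit width and 533MHz clock from the example above (real FSBs such as the quad-pumped Pentium 4 bus transfer several times per clock cycle, which this deliberately ignores):

# Rough peak FSB bandwidth: width (bits) x clock frequency (Hz).
fsb_width_bits = 32
fsb_clock_hz = 533_000_000

bits_per_second = fsb_width_bits * fsb_clock_hz
bytes_per_second = bits_per_second / 8

print(f"{bits_per_second:,} bits/s (~{bytes_per_second / 1e6:.0f} MB/s)")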

alu


Arithmetic Logic Unit (ALU)

The ALU is an internal part of the processor which is used for all mathematical and logical operations. The basic operations of an ALU include adding and multiplying binary values, as well as performing logical operations such as AND, OR and XOR. The algorithms for performing these mathematical and logical operations are hard coded (stored permanently) within the ALU.

Cache (L2)

L2 cache (pronounced "cash") is a special block of memory inside the processor (on the same chip) which offers faster data retrieval; typical sizes are 128KB, 256KB and 512KB.
note: Some processors (generally older) utilise external L2 cache.
The data that the processor stores in its cache memory will be data that is frequently used (such as a certain algorithm). The processor will also guess what data may be required next and store this data in its cache. This guessing may or may not be successful; the success rate is known as the hit rate. For instance, a hit rate of 94% would mean that in 94 out of every 100 attempts the processor correctly identified and stored a block of data that was needed; the other 6 times the data was never used.
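To make the hit-rate idea concrete, here is a minimal, hypothetical sketch in Python: a tiny least-recently-used cache model and an invented access pattern, with the hit rate computed exactly as described above (hits divided by total accesses). It is an illustration only, not how a real hardware cache is organised.

from collections import OrderedDict

def simulate(accesses, capacity=4):
    # Toy LRU cache: remembers the last 'capacity' distinct addresses.
    cache = OrderedDict()
    hits = 0
    for addr in accesses:
        if addr in cache:
            hits += 1
            cache.move_to_end(addr)        # mark as most recently used
        else:
            cache[addr] = True
            if len(cache) > capacity:
                cache.popitem(last=False)  # evict the least recently used
    return hits / len(accesses)

pattern = [0, 1, 2, 0, 1, 3, 0, 1, 2, 4, 0, 1] * 10   # made-up, with reuse
print(f"hit rate: {simulate(pattern):.0%}")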

instruction sets


Instruction Sets

The type of work a processor carries out is defined by its instructions, and these instructions are coded in binary. All modern processors have their own instructions built in for common tasks.

Having these instruction sets built in allows the processor to carry out certain operations much faster. The instruction sets that are built in depend on the processor's architecture; there are two main types of processor architecture on the market, CISC and RISC.

CISC (Complex Instruction Set Computer)
CISC processors have more internal instructions than their RISC counterparts, allowing a more diverse set of operations. Although this may sound like the best option, CISC processors are generally slower due to the complexity of the instructions. Some people think the benefit of having more complex instructions built in outweighs the performance loss, but it ultimately depends on the applications the processor is going to run.

RISC (Reduced Instruction Set Computer)
RISC processors, as the name suggests, have fewer built-in instructions. This can add to the overall speed of the processor due to the simplicity of the instructions, but again the performance depends on the type of applications the processor is to be used for.

Most modern processors have built-in instructions specifically designed for certain applications such as 3D graphics, audio manipulation, etc. One example of this is the MMX (MultiMedia eXtension) technology which Intel built into its Pentium architecture in the late nineties. This was a special set of internal instructions that allowed faster processing of audio and visual algorithms.

processors clock speed



Processor Clock Speed

Every processor has its own built-in clock, and this clock dictates how fast the processor can process data (0's and 1's). You will see processors advertised as having a speed of, say, 2GHz; this measurement refers to the internal clock.

If a processor is advertised as having a speed of 2GHz, this means that it can process data internally 2 billion times a second (once every clock cycle). If the processor is a 32-bit processor running at 2GHz, then it can potentially process 32 bits of data simultaneously, 2 billion times a second!

processors functions



What is a processor and what does it do?

The processor (often called the CPU) is the brain of your PC and is where the majority of the work is performed.

As its name suggests, a processor processes something; that something is data, and this data is made up of 0's and 1's (zeroes and ones).

To understand a processor we first need to take a quick look at the way digital systems function. All of the work that goes on inside your PC is carried out by means of voltages, or more accurately the difference between two voltage levels.

Processor Architecture

A processor (as stated earlier) processes bits (binary digits) of data. In its simplest form, the processor will retrieve some data, perform some operation on that data, and then store the result in either its own internal memory (cache) or the system's memory.

You may have seen processors advertised as 32-bit or 64-bit; this basically means that the processor can process either 32 bits or 64 bits of data internally at any one time.

This would theoretically make a 64-bit processor twice as fast as its 32-bit counterpart.

Software can also be defined as 16-bit, 32-bit or 64-bit. You can probably see that, theoretically, if you are running 64-bit software on a 32-bit processor, it would take two clock cycles (32 bits at a time) to process any one set of 64 bits; this is referred to as a bottleneck.
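Here is a minimal Python sketch of that "two passes" idea: a 64-bit value has to be handled as two 32-bit halves, the way a 32-bit datapath would process it over two cycles (the value below is arbitrary).

value = 0x1122334455667788                 # a 64-bit quantity

low32 = value & 0xFFFFFFFF                 # first 32-bit chunk
high32 = (value >> 32) & 0xFFFFFFFF        # second 32-bit chunk

reassembled = (high32 << 32) | low32
assert reassembled == value
print(hex(high32), hex(low32))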

veda in vlsi design



VEDA is striving to bridge the University Industry Gap through innovative programs in the field of VLSI Engineering

VEDA IIT (VLSI Engineering and Design Automation) is an industry-driven, state-of-the-art training institute of excellence in the field of VLSI design and development. VEDA IIT, a professional teaching organisation, conducts an MS Program in VLSI Engineering, which is a collaborative program of Jawaharlal Nehru Technological University, Hyderabad, India and a consortium of VLSI design houses.

Qualification:


BE/BTech/Equivalent in Electronics, Electrical, Computer Science, Information Technology, Instrumentation, Communications

MS/ME/MTech in Electronics, Electrical, Computer Science, Information Technology, Instrumentation, Communications.

Contact Us

VEDA Institute of Information Technology Pvt. Ltd., (VEDA IIT)
4th Floor
Plot No. 90, Road No 2,
Banjara Hills, Hyderabad - 500 034
Andhra Pradesh, INDIA
Tel: +91-40-30615555
Fax: +91-40-30615560
Email: admin@vedaiit.com

VLSI / System Design
In today's high-technology business, cost efficiency and faster time to market are challenges that all organizations face. This is even more so in the VLSI and system design arena, where reliability, quality and first-time-right solutions are of paramount importance because of the very high cost of iterations.

Having executed its first VLSI design in 1991 and taped out its first SoC in 1995, Wipro understands what it takes to consistently deliver successful designs to customers. As one of the largest independent third party design services providers in the world, Wipro's 1800+ member strong VLSI and system design group provides pure-play design services meeting critical time-to-market demands of customers with design reliability, scalability and flexibility.

Combined with its best-in-class methodology, EagleWision™, Wipro has one of the best first-pass success rates for the 170+ silicon and 200+ system designs delivered over the last two years. These include engagements in 65nm, 90nm, 0.13µ and 0.18µ for the telecommunication, storage, avionics, consumer electronics, medical electronics, automotive and industrial control domains.

A Multi-Format High Performance Audio Codec


The VS1053 is an advanced slave audio processor. In addition to being able to decode the MP1, MP2, MP3, WMA, WAV, IMA ADPCM, General MIDI 1, Ogg Vorbis, LC-AAC and HE-AAC file formats, it can also record in the CD-quality, licence-free Ogg Vorbis format. The device includes high-quality analog interfaces and is ideal for streaming applications because its sample-rate converter can be fine-tuned "on the fly" to synchronize sample rates.

Image processing



In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as photographs or frames of video; the output of image processing can be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.

Image processing usually refers to digital image processing, but optical and analog image processing are also possible. This article is about general techniques that apply to all of them.


Typical operations

Among many others, typical image processing operations include:

* Geometric transformations such as enlargement, reduction, and rotation
* Color corrections such as brightness and contrast adjustments, quantization, or conversion to a different color space (a small sketch of a brightness/contrast adjustment follows this list)
* Digital compositing or Optical compositing (combination of two or more images). Used in filmmaking to make a "matte"
* Interpolation, demosaicing, and recovery of a full image from a raw image format using a Bayer filter pattern
* Image editing (e.g., to increase the quality of a digital image)
* Image registration (alignment of two or more images), differencing and morphing
* Image segmentation
* Extending dynamic range by combining differently exposed images
* 2-D object recognition with affine invariance
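As one concrete example of the colour-correction operation listed above, here is a minimal sketch (assuming NumPy is available) that treats a greyscale image as a 2-D array and applies a gain (contrast) and offset (brightness) to every pixel; the gain, offset and sample image are arbitrary.

import numpy as np

def adjust_brightness_contrast(image, gain=1.2, offset=10):
    # Point-wise correction: scale then shift each pixel, clamp to the 8-bit range.
    out = image.astype(np.float32) * gain + offset
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.array([[10, 50, 90, 130],
                [20, 60, 100, 140],
                [30, 70, 110, 150],
                [40, 80, 120, 160]], dtype=np.uint8)   # made-up 4x4 image
print(adjust_brightness_contrast(img))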

Thursday, March 5, 2009

applications of dsp


Digital camera images

Digital cameras generally include dedicated digital image processing chips to convert the raw data from the image sensor into a color-corrected image in a standard image file format. Images from digital cameras often receive further processing to improve their quality, a distinct advantage digital cameras have over film cameras. The digital image processing is typically done by special software programs that can manipulate the images in many ways.

Many digital cameras also enable viewing of histograms of images, as an aid for the photographer to better understand the rendered brightness range of each shot.

Digital image processing



Many of the techniques of digital image processing, or digital picture processing as it was often called, were developed in the 1960s at the Jet Propulsion Laboratory, MIT, Bell Labs, University of Maryland, and a few other places, with application to satellite imagery, wirephoto standards conversion, medical imaging, videophone, character recognition, and photo enhancement.[1] But the cost of processing was fairly high with the computing equipment of that era. In the 1970s, digital image processing proliferated, when cheaper computers and dedicated hardware became available. Images could then be processed in real time, for some dedicated problems such as television standards conversion. As general-purpose computers became faster, they started to take over the role of dedicated hardware for all but the most specialized and compute-intensive operations.

With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method, but also the cheapest.

Challenges


As microprocessors become more complex due to technology scaling, microprocessor designers have encountered several challenges which force them to think beyond the design plane, and look ahead to post-silicon:

* Power usage/Heat dissipation – As threshold voltages have ceased to scale with advancing process technology, dynamic power dissipation has not scaled proportionally. Maintaining logic complexity when scaling the design down only means that the power dissipation per unit area goes up. This has given rise to techniques such as dynamic voltage and frequency scaling (DVFS) to minimize overall power; a small numerical sketch of the idea follows this list.
* Process variation – As lithography techniques tend closer to the fundamental laws of optics, achieving high accuracy in doping concentrations and etched wires is becoming more difficult and prone to errors due to variation. Designers now have to simulate across multiple fabrication process corners before the chip is certified ready for production.
* Stricter design rules – Due to lithography and etch issues with scaling, design rules for layout have gotten much more stringent. Designers have to keep more of these rules in mind while laying out custom circuits. The overhead for custom design is now reaching a tipping point, with many design houses now opting to switch to electronic design automation (EDA) tools to automate their design process.
* Timing/design closure – As clock frequencies tend to scale up, designers are finding it more difficult to distribute and maintain low clock skew between these high frequency clocks across the entire chip. This has led to a rising interest in multicore and multiprocessor architectures, since an overall speedup can be obtained by lowering the clock frequency and distributing processing.
* First-pass success – As die sizes shrink (due to scaling), and wafer sizes go up (to lower manufacturing costs), the number of dies per wafer increases. Wafers in modern technologies cost several million dollars. This deters the old, iterative philosophy involving several "spin-cycles" to find errors in silicon, and encourages first-pass silicon success. Several design philosophies have been developed to aid this new design flow, including design for manufacturing (DFM), design for test (DFT), and many others.
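As mentioned under the power item above, dynamic switching power roughly follows P = α·C·V²·f, which is why scaling voltage and frequency down together (as DVFS does) pays off so strongly. A minimal numerical sketch in Python, with purely illustrative numbers:

def dynamic_power(alpha, c_farads, v_volts, f_hz):
    # Classic dynamic-power relation: activity factor x capacitance x V^2 x f.
    return alpha * c_farads * v_volts ** 2 * f_hz

nominal = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=1.2, f_hz=2.0e9)
scaled = dynamic_power(alpha=0.2, c_farads=1e-9, v_volts=0.9, f_hz=1.2e9)

print(f"nominal: {nominal:.2f} W, scaled: {scaled:.2f} W "
      f"({scaled / nominal:.0%} of nominal)")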

Overview of vlsi


The first semiconductor chips held one transistor each. Subsequent advances added more and more transistors, and, as a consequence, more individual functions or systems were integrated over time. The first integrated circuits held only a few devices, perhaps as many as ten diodes, transistors, resistors and capacitors, making it possible to fabricate one or more logic gates on a single device. These early circuits are now known retrospectively as "small-scale integration" (SSI). Improvements in technique led to devices with hundreds of logic gates, known as large-scale integration (LSI), i.e. systems with at least a thousand logic gates. Current technology has moved far past this mark, and today's microprocessors have many millions of gates and hundreds of millions of individual transistors.

At one time, there was an effort to name and calibrate various levels of large-scale integration above VLSI. Terms like Ultra-large-scale Integration (ULSI) were used. But the huge number of gates and transistors available on common devices has rendered such fine distinctions moot. Terms suggesting greater than VLSI levels of integration are no longer in widespread use. Even VLSI is now somewhat quaint, given the common assumption that all microprocessors are VLSI or better.

As of early 2008, billion-transistor processors are commercially available, an example being Intel's Montecito Itanium chip. This is expected to become more commonplace as semiconductor fabrication moves from the current generation of 65 nm processes to the next 45 nm generations (while experiencing new challenges such as increased variation across process corners). Another notable example is Nvidia's 280 series GPU, which is unusual in that its 1.4 billion transistors, capable of a teraflop of performance, are almost entirely dedicated to logic (Itanium's transistor count is largely due to its 24MB L3 cache). Current designs, as opposed to the earliest devices, use extensive design automation and automated logic synthesis to lay out the transistors, enabling higher levels of complexity in the resulting logic functionality. Certain high-performance logic blocks, like the SRAM cell, however, are still designed by hand to ensure the highest efficiency (sometimes by bending or breaking established design rules to obtain the last bit of performance by trading stability).


Structured design

Structured VLSI design is a modular methodology originated by Carver Mead and Lynn Conway for saving microchip area by minimizing the interconnect fabric area. This is achieved by repetitive arrangement of rectangular macro blocks which can be interconnected using wiring by abutment. An example is partitioning the layout of an adder into a row of equal bit-slice cells. In complex designs this structuring may be achieved by hierarchical nesting.

Structured VLSI design was popular in the early 1980s, but lost its popularity later with the advent of placement and routing tools, which waste a lot of area on routing; this is tolerated because of the progress of Moore's Law. When introducing the hardware description language KARL in the mid-1970s, Reiner Hartenstein coined the term "structured VLSI design" (originally "structured LSI design"), echoing Edsger Dijkstra's structured programming approach, which uses procedure nesting to avoid chaotic, spaghetti-structured programs.

Very-large-scale integration

Very-large-scale integration (VLSI) is the process of creating integrated circuits by combining thousands of transistor-based circuits into a single chip. VLSI began in the 1970s when complex semiconductor and communication technologies were being developed. The microprocessor is a VLSI device. The term is no longer as common as it once was, as chips have increased in complexity into the hundreds of millions of transistors.


Programming environments

Microcontrollers were originally programmed only in assembly language, but various high-level programming languages are now also in common use to target microcontrollers. These languages are either designed specially for the purpose, or versions of general purpose languages such as the C programming language. Compilers for general purpose languages will typically have some restrictions as well as enhancements to better support the unique characteristics of microcontrollers. Some microcontrollers have environments to aid developing certain types of applications. Microcontroller vendors often make tools freely available to make it easier to adopt their hardware.

Many microcontrollers are so quirky that they effectively require their own non-standard dialects of C, such as SDCC for the 8051, which prevent using standard tools (such as code libraries or static analysis tools) even for code unrelated to hardware features. Interpreters are often used to hide such low level quirks.

Interpreter firmware is also available for some microcontrollers. For example, BASIC on the early microcontrollers Intel 8052[4]; BASIC and FORTH on the Zilog Z8[5] as well as some modern devices. Typically these interpreters support interactive programming.

Simulators are available for some microcontrollers, such as in Microchip's MPLAB environment. These allow a developer to analyse what the behaviour of the microcontroller and their program should be if they were using the actual part. A simulator will show the internal processor state and also that of the outputs, as well as allowing input signals to be generated. Most simulators are limited in that they cannot simulate much other hardware in a system, but they can exercise conditions that may otherwise be hard to reproduce at will in the physical implementation, and can be the quickest way to debug and analyse problems.

Recent microcontrollers often integrate on-chip debug circuitry that, when accessed by an in-circuit emulator via JTAG, allows debugging of the firmware with a debugger.

microcontroller Volumes


About 55% of all CPUs sold in the world are 8-bit microcontrollers and microprocessors. According to Semico, over 4 billion 8-bit microcontrollers were sold in 2006.[3]

A typical home in a developed country is likely to have only four general-purpose microprocessors but around three dozen microcontrollers. A typical mid-range automobile has as many as 30 or more microcontrollers. They can also be found in many electrical devices: washing machines, microwave ovens, telephones, etc.


Manufacturers have often produced special versions of their microcontrollers in order to help with the hardware and software development of the target system. Originally these included EPROM versions that have a "window" on the top of the device through which program memory can be erased by ultraviolet light, ready for reprogramming after a programming ("burn") and test cycle. Since 1998, EPROM versions have been rare; they have been replaced by EEPROM and flash, which are easier to use (they can be erased electronically) and cheaper to manufacture.

Other versions may be available where the ROM is accessed as an external device rather than as internal memory; however, these are becoming increasingly rare due to the widespread availability of cheap microcontroller programmers.

The use of field-programmable devices on a microcontroller may allow field update of the firmware or permit late factory revisions to products that have been assembled but not yet shipped. Programmable memory also reduces the lead time required for deployment of a new product.

Where hundreds of thousands of identical devices are required, using parts programmed at the time of manufacture can be an economical option. These 'Mask Programmed' parts have the program laid down in the same way as the logic of the chip, at the same time.

Higher integration



In contrast to general-purpose CPUs, microcontrollers may not implement an external address or data bus as they integrate RAM and non-volatile memory on the same chip as the CPU. Using fewer pins, the chip can be placed in a much smaller, cheaper package.

Integrating the memory and other peripherals on a single chip and testing them as a unit increases the cost of that chip, but often results in decreased net cost of the embedded system as a whole. Even if the cost of a CPU that has integrated peripherals is slightly more than the cost of a CPU + external peripherals, having fewer chips typically allows a smaller and cheaper circuit board, and reduces the labor required to assemble and test the circuit board.

A microcontroller is a single integrated circuit, commonly with the following features:

* central processing unit - ranging from small and simple 4-bit processors to complex 32- or 64-bit processors
* discrete input and output bits, allowing control or detection of the logic state of an individual package pin
* serial input/output such as serial ports (UARTs)
* other serial communications interfaces like I²C, Serial Peripheral Interface and Controller Area Network for system interconnect
* peripherals such as timers, event counters, PWM generators, and watchdog
* volatile memory (RAM) for data storage
* ROM, EPROM, EEPROM or Flash memory for program and operating parameter storage
* clock generator - often an oscillator for a quartz timing crystal, resonator or RC circuit
* many include analog-to-digital converters
* in-circuit programming and debugging support

This integration drastically reduces the number of chips and the amount of wiring and circuit board space that would be needed to produce equivalent systems using separate chips. Furthermore, and on low pin count devices in particular, each pin may interface to several internal peripherals, with the pin function selected by software. This allows a part to be used in a wider variety of applications than if pins had dedicated functions. Microcontrollers have proved to be highly popular in embedded systems since their introduction in the 1970s.

Some microcontrollers use a Harvard architecture: separate memory buses for instructions and data, allowing accesses to take place concurrently. Where a Harvard architecture is used, instruction words for the processor may be a different bit size than the length of internal memory and registers; for example: 12-bit instructions used with 8-bit data registers.

The decision of which peripheral to integrate is often difficult. The microcontroller vendors often trade operating frequencies and system design flexibility against time-to-market requirements from their customers and overall lower system cost. Manufacturers have to balance the need to minimize the chip size against additional functionality.

Microcontroller architectures vary widely. Some designs include general-purpose microprocessor cores, with one or more ROM, RAM, or I/O functions integrated onto the package. Other designs are purpose built for control applications. A microcontroller instruction set usually has many instructions intended for bit-wise operations to make control programs more compact.[2] For example, a general purpose processor might require several instructions to test a bit in a register and branch if the bit is set, where a microcontroller could have a single instruction to provide that commonly-required function.

Microcontrollers typically do not have a math coprocessor, so fixed-point or floating-point arithmetic is performed by program code.
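A minimal sketch of what that software arithmetic can look like, using a Q16.16 fixed-point format (an assumed but common choice), written here in Python for clarity: values are stored as integers scaled by 2^16, and a multiplication needs a shift afterwards to renormalise the result.

FRAC_BITS = 16
ONE = 1 << FRAC_BITS

def to_fixed(x):
    return int(round(x * ONE))       # float -> Q16.16 integer

def from_fixed(x):
    return x / ONE                   # Q16.16 integer -> float

def fx_mul(a, b):
    return (a * b) >> FRAC_BITS      # renormalise after multiplying

a, b = to_fixed(3.25), to_fixed(1.5)
print(from_fixed(fx_mul(a, b)))      # -> 4.875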

Loudspeaker system design



Crossover

Used in multi-driver speaker systems, the crossover is a device that separates the input signal into different frequency ranges suited to each driver. Each driver, therefore, receives the frequency range it was designed for, so the distortion in each driver, and interference between the drivers, is reduced.

Crossovers can be passive or active. A passive crossover is an electronic circuit using a combination of one or more resistors, inductors and non-polar capacitors. These parts are formed into carefully designed networks, and placed between the amplifier and the loudspeaker drivers to divide the amplifier's signal into the necessary frequency bands before being delivered to the individual drivers. Passive crossover circuits need no external power beyond the audio signal itself. An active crossover is an electronic filter circuit which divides the complete signal into individual frequency bands before amplification, thus requiring one amplifier for each bandpass. The active crossover requires an external power supply.

Passive crossovers are generally installed inside speaker boxes and are by far the most common type of crossover for home and low-power use. In car audio systems, passive crossovers may be in a small separate box, necessary to accommodate the size of the components used. Passive crossovers may be simple or quite elaborate, although steep slopes such as 24dB per octave require components of unusually close tolerances. Passive crossovers, like the driver units that they feed, have power handling limits and introduce a modest amount of insertion loss, since they convert a small portion of the amplifier power into heat. So, when the highest output levels are required, active crossovers may be preferable. Active crossovers may be simple circuits which emulate the response of a passive network, or may be more complex, allowing audio adjustments. Active crossovers marketed as digital loudspeaker management systems may include facilities for precise alignment of phase and time between frequency bands, equalization, and dynamics (compression and/or limiting) control.
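To give a feel for the passive case, here is a minimal sketch (Python) of first-order crossover component values, treating each driver as a purely resistive 8-ohm load, a simplification that real designs cannot make: the high-pass (tweeter) series capacitor is C = 1/(2πfR) and the low-pass (woofer) series inductor is L = R/(2πf). The 2.5 kHz crossover point is arbitrary.

import math

def first_order_crossover(f_cross_hz, r_ohms=8.0):
    c_farads = 1.0 / (2 * math.pi * f_cross_hz * r_ohms)   # high-pass capacitor
    l_henries = r_ohms / (2 * math.pi * f_cross_hz)        # low-pass inductor
    return c_farads, l_henries

c, l = first_order_crossover(2500)
print(f"tweeter series C ≈ {c * 1e6:.1f} µF, woofer series L ≈ {l * 1e3:.2f} mH")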

Some hi-fi and professional loudspeaker systems now include an active crossover circuit as part of an onboard amplifier system. These designs are identifiable by their need for AC power in addition to a signal cable. This 'active' topology may also include driver protection circuits and other features of a digital loudspeaker management system. Powered speaker systems are common in computer sound (for a single listener) and, at the other end of the size spectrum, in concert sound systems. Powered speaker systems for concert sound, by virtue of having no external adjustments, have the potential to provide predictable, if not necessarily good, sound quality by removing control of crossover, delay and limiter settings from the concert sound engineer.

Driver types


An audio engineering rule of thumb is that individual electrodynamic drivers provide quality performance over at most about 3 octaves. Multiple drivers (e.g., subwoofers, woofers, mid-range drivers, tweeters) are generally used in a complete loudspeaker system to provide performance beyond 3 octaves.


Full range drivers

A full-range driver is designed to have the widest frequency response possible, despite the rule of thumb cited above. These drivers are small, typically 3 to 8 inches (7 to 20 cm) in diameter to permit reasonable high-frequency response, and carefully designed to give low-distortion output at low frequencies, though with reduced maximum output level. Full-range (or more accurately wide-range) drivers are most commonly heard in public address systems and in televisions, although some models are suitable for hi-fi listening. In hi-fi speaker systems, the use of wide-range drive units can avoid undesirable interaction between multiple drivers, caused by non-coincident driver location or crossover network issues. Fans of wide-range-driver hi-fi speaker systems claim a coherence of sound, said to be due to the single source and a resulting lack of interference, and likely also to the absence of crossover components. Detractors typically cite these drivers' limited frequency response and modest output abilities, together with their requirement for large, elaborate, expensive enclosures, such as transmission lines or horns, to approach optimum performance.

Full range drivers often employ an additional cone called a whizzer: a small, light cone attached to the joint between the voice coil and the primary cone. The whizzer cone extends the high frequency response of the driver and broadens its high frequency directivity, which would otherwise be greatly narrowed due to the outer diameter cone material failing to keep up with the central voice coil at higher frequencies. The main cone in a whizzer design is manufactured so as to flex more in the outer diameter than in the center. The result is that the main cone delivers low frequencies and the whizzer cone contributes most of the higher frequencies. Since the whizzer cone is smaller than the main diaphragm, output dispersion at high frequencies is improved relative to an equivalent single larger diaphragm.

Limited-range drivers are typically found in computers, toys, and clock radios. These drivers are less elaborate and less expensive than wide-range drivers, and they may be severely compromised to fit into very small mounting locations. In these applications, sound quality is a low priority. The human ear is remarkably tolerant of poor sound quality, and the distortion inherent in limited-range drivers may even enhance their output at high frequencies, increasing clarity when listening to spoken-word material.

Driver design


The most common type of driver uses a lightweight diaphragm, or cone, connected to a rigid basket, or frame, via a flexible suspension that constrains a coil of fine wire to move axially through a cylindrical magnetic gap. When an electrical signal is applied to the voice coil, a magnetic field is created by the electric current in the coil, which thus becomes an electromagnet. The coil and the driver's magnetic system interact, generating a mechanical force which causes the coil, and so the attached cone, to move back and forth and thereby reproduce sound under the control of the applied electrical signal coming from the amplifier. The following is a description of the individual components of this type of loudspeaker.

The diaphragm is usually manufactured with a cone or dome shaped profile. A variety of different materials may be used, but the most common are paper, plastic and metal. The ideal material would be stiff (to prevent uncontrolled cone motions), light (to minimize starting force requirements) and well damped (to reduce vibrations continuing after the signal has stopped). In practice, all three of these criteria cannot be met simultaneously using existing materials, and thus driver design involves tradeoffs. For example, paper is light and typically well damped, but not stiff; metal can be made stiff and light, but it is not usually well damped; plastic can be light, but typically the stiffer it is made, the less well-damped it is. As a result, many cones are made of some sort of composite material. This can be a matrix of fibers including Kevlar or fiberglass, a layered or bonded sandwich construction, or simply a coating applied to stiffen or damp a cone.

The basket or frame must be designed for rigidity to avoid deformation, which will change the magnetic conditions in the magnet gap, and could even cause the voice coil to rub against the walls of the magnetic gap. Baskets are typically cast or stamped metal, although molded plastic baskets are becoming common, especially for inexpensive drivers. The frame also plays a considerable role in conducting heat away from the coil.

The suspension system keeps the coil centered in the gap and provides a restoring force to return the speaker cone to a neutral position after moving. A typical suspension system consists of two parts: the "spider", which connects the diaphragm or voice coil to the frame and provides the majority of the restoring force, and the "surround", which helps center the coil/cone assembly and allows free pistonic motion aligned with the magnetic gap. The spider is usually made of a corrugated fabric disk, generally with a coating of a material intended to improve its mechanical properties. The name "spider" derives from the shape of early suspensions, which were two concentric rings of bakelite material joined by six or eight curved "legs". Variations of this topology included adding a felt disc to provide a barrier to particles that might otherwise cause the voice coil to rub. One German company currently offers a spider made of wood. The surround can be a roll of rubber or foam, or a ring of corrugated fabric (often coated), attached to the outer circumference of the cone and to the frame. The choice of suspension materials affects driver lifetime, especially in the case of foam surrounds, which are susceptible to aging and environmental damage.

The wire in a voice coil is usually made of copper, though aluminium, and rarely silver, may be used. Voice coil wire cross-sections can be circular, rectangular, or hexagonal, giving varying amounts of wire volume coverage in the magnetic gap space. The coil is oriented coaxially inside the gap, a small circular volume (a hole, slot, or groove) in the magnetic structure within which it can move back and forth. The gap establishes a concentrated magnetic field between the two poles of a permanent magnet, the outside of the gap being one pole and the center post (the pole piece) being the other. The pole piece and backplate are often a single piece called the poleplate or yoke.
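The motor force that drives the cone follows the usual F = B·l·i relation: flux density in the gap, times the length of wire immersed in that field, times the coil current. A minimal Python sketch with purely illustrative numbers (a Bl product of about 7 T·m is plausible for a woofer, but is not taken from the text above):

def coil_force(b_tesla, l_metres, i_amperes):
    # Force on a current-carrying wire in a magnetic field: F = B * l * i.
    return b_tesla * l_metres * i_amperes

print(f"force on the coil: {coil_force(1.0, 7.0, 2.0):.1f} N")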

Loudspeaker


A loudspeaker, speaker, or speaker system is an electroacoustic transducer that converts an electrical signal to sound. The term loudspeaker can refer to individual transducers (known as drivers) or to complete systems consisting of an enclosure incorporating one or more drivers and electrical filter components. Loudspeakers (and other electroacoustic transducers) are the most variable elements in a modern audio system and are usually responsible for most audible differences when comparing systems.

To adequately reproduce a wide range of frequencies, most loudspeaker systems require more than one driver, particularly for high sound pressure level or high accuracy. Individual drivers are used to reproduce different frequency ranges. The drivers are named subwoofers (very low frequencies), woofers (low frequencies), mid-range speakers (middle frequencies), tweeters (high frequencies) and sometimes supertweeters optimized for the highest audible frequencies.

The terms for different speaker drivers differ depending on the application. In 2-way loudspeakers, there is no "mid-range" driver, so the task of reproducing the midrange sounds falls upon the woofer and tweeter. Home stereos use the designation "tweeter" for high frequencies whereas professional audio systems for concerts may designate high frequency drivers as "HF" or "highs" or "horns".

When multiple drivers are used in a system, a "filter network", called a crossover, separates the incoming signal into different frequency ranges, and routes them to the appropriate driver. A loudspeaker system with n separate frequency bands is described as "n-way speakers": a 2-way system will have woofer and tweeter speakers; a 3-way system is either a combination of woofer, mid-range and tweeter or subwoofer, woofer and tweeter.

Notable Sony products, technologies and proprietary formats





Sony has historically been notable for creating its own in-house standards for new recording and storage technologies, instead of adopting those of other manufacturers and standards bodies. The most infamous of these was the videotape format war of the early 1980s, when Sony marketed the Betamax system for video cassette recorders against the VHS format developed by JVC. In the end, VHS gained critical mass in the marketplace and became the worldwide standard for consumer VCRs, and Sony adopted the format. While Betamax is for all practical purposes an obsolete format, a professional-oriented component video format called Betacam, derived from Betamax, is still used today, especially in the film and television industry.

In 1968 Sony introduced the Trinitron brand name for its line of aperture-grille cathode ray tube televisions and (later) computer monitors. Trinitron displays are still produced, but only for markets such as India and China. Sony discontinued its last Trinitron-based television set in the USA in the spring of 2007. Trinitron computer monitors were discontinued in 2005.

Sony launched the Betamax videocassette recording format in 1975. In 1979 the Walkman brand was introduced, in the form of the world's first portable music player.

1982 saw the launch of Sony's professional Betacam videotape format and the collaborative Compact Disc format. In 1983 Sony introduced 90mm micro diskettes (better known as 3.5-inch (89 mm) floppy disks), which it had developed at a time when 4-inch floppy disks and a host of variations from different companies were competing to replace the then-dominant 5.25-inch floppy disks. Sony had great success and the format became dominant; 3.5-inch floppy disks became obsolete only gradually, as they were replaced by more recent media formats. In 1983 Sony also launched the MSX, a home computer system, and introduced the world (with its counterpart Philips) to the Compact Disc, or CD. In 1984 Sony launched the Discman series, which extended the Walkman brand to portable CD products. In 1985 Sony launched its Handycam products and the Video8 format. Video8 and the follow-on hi-band Hi8 format became popular in the consumer camcorder market. In 1987 Sony launched the 4mm DAT, or Digital Audio Tape, as a new digital audio tape standard.

Sony



Sony Corporation (ソニー株式会社, Sonī Kabushiki Gaisha) is a multinational conglomerate corporation headquartered in Minato, Tokyo, Japan, and one of the world's largest media conglomerates, with revenue exceeding US$99.1 billion (as of 2008).[1] Sony is one of the leading manufacturers of electronics, video, communications, video game consoles, and information technology products for the consumer and professional markets. Its name is derived from sonus, the Latin word for sound.[4]

Sony Corporation is the electronics business unit and the parent company of the Sony Group, which is engaged in business through its five operating segments—electronics, games, entertainment (motion pictures and music), financial services and other. These make Sony one of the most comprehensive entertainment companies in the world. Sony's principal business operations include Sony Corporation (Sony Electronics in the U.S.), Sony Pictures Entertainment, Sony Computer Entertainment, Sony Music Entertainment, Sony Ericsson, and Sony Financial Holdings. As a semiconductor maker, Sony is among the Worldwide Top 20 Semiconductor Sales Leaders. The company's slogan is Sony. Like no other.[5]

In 1945, after World War II, Masaru Ibuka started a radio repair shop in a bombed-out building in Tokyo. The next year, he was joined by his colleague Akio Morita and they founded a company called Tokyo Tsushin Kogyo K.K.,[6] which translates in English to Tokyo Telecommunications Engineering Corporation. The company built Japan's first tape recorder called the Type-G.[6]

In the early 1950s, Ibuka traveled in the United States and heard about Bell Labs' invention of the transistor.[6] He convinced Bell to license the transistor technology to his Japanese company. While most American companies were researching the transistor for its military applications, Ibuka looked to apply it to communications. Although the American companies Regency and Texas Instruments built the first transistor radios, it was Ibuka's company that made them commercially successful for the first time. In August 1955, Tokyo Telecommunications Engineering released the Sony TR-55, Japan's first commercially produced transistor radio.[7] They followed up in December of the same year by releasing the Sony TR-72, a product that won favor both within Japan and in export markets, including Canada, Australia, the Netherlands and Germany. Featuring six transistors, push-pull output and greatly improved sound quality, the TR-72 continued to be a popular seller into the early sixties.

In May 1956, the company released the TR-6, which featured an innovative slim design and sound quality capable of rivaling portable tube radios. It was for the TR-6 that Sony first contracted "Atchan", a cartoon character created by Fuyuhiko Okabe, to become its advertising character. Now known as "Sony Boy", the character first appeared in a cartoon ad holding a TR-6 to his ear, but went on to represent the company in ads for a variety of products well into the mid-sixties.[6] The following year, 1957, Tokyo Telecommunications Engineering came out with the TR-63 model, then the smallest (112 × 71 × 32 mm) transistor radio in commercial production. It was a worldwide commercial success.[6]

University of Arizona professor Michael Brian Schiffer, Ph.D., says, "Sony was not first, but its transistor radio was the most successful. The TR-63 of 1957 cracked open the U.S. market and launched the new industry of consumer microelectronics." By the mid-1950s, American teens had begun buying portable transistor radios in huge numbers, helping to propel the fledgling industry from an estimated 100,000 units in 1955 to 5,000,000 units by the end of 1968. However, this huge growth in portable transistor radio sales, which saw Sony rise to become the dominant player in the consumer electronics field,[8] was driven not by the consumers who had bought the earlier generation of tube radio consoles, but by a distinctly new American phenomenon of the time: rock and roll.

Sony's headquarters moved to Minato, Tokyo from Shinagawa, Tokyo around the end of 2006.

Direct Stream Digital


Direct-Stream Digital (DSD) is the trademark name used by Sony and Philips for their system of recreating audible signals using pulse-density modulation encoding, a technology for storing audio signals on digital storage media that is used for the Super Audio CD (SACD).

The signal is stored as delta-sigma modulated digital audio: a sequence of single-bit values at a sampling rate of 64 times the 44.1 kHz CD Audio sampling rate, i.e. 2.8224 MHz (1 bit × 64 × 44.1 kHz). Noise shaping applied to the 64×-oversampled signal reduces the noise and distortion caused by quantizing the audio signal to a single bit. Whether distortion can be eliminated entirely in 1-bit sigma-delta conversion remains a topic of discussion (see Audio Engineering Society Convention Paper 5395, discussed below).
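
To get a feel for how this works, here is a minimal Python sketch of a first-order delta-sigma (1-bit) modulator. Real DSD modulators are of much higher order and carefully designed for noise shaping; the rates and names below are purely illustrative.

    import numpy as np

    def delta_sigma_1bit(x):
        """First-order delta-sigma modulation of x (values in [-1, 1]).
        Returns a +1/-1 bitstream whose local pulse density follows x."""
        y = np.empty(len(x))
        integrator = 0.0
        feedback = 0.0
        for i, sample in enumerate(x):
            integrator += sample - feedback          # accumulate the quantization error
            feedback = 1.0 if integrator >= 0 else -1.0
            y[i] = feedback
        return y

    # Toy example: 1 ms of a 1 kHz sine, oversampled 64x relative to 44.1 kHz as in DSD.
    fs = 64 * 44100                                  # 2.8224 MHz
    t = np.arange(0, 0.001, 1.0 / fs)
    bits = delta_sigma_1bit(0.5 * np.sin(2 * np.pi * 1000 * t))
    print(bits[:16])                                 # a run of +/-1 values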

There has been much controversy between proponents of DSD and PCM over which encoding system is superior. Professors Stanley Lipshitz and John Vanderkooy of the University of Waterloo stated in Audio Engineering Society Convention Paper 5395 (2001) that 1-bit converters (as employed by DSD) are unsuitable for high-end applications because of their high distortion; even 8-bit, four-times-oversampled PCM with noise shaping and proper dithering, at half the data rate of DSD, has a better noise floor and frequency response. In 2002, however, Philips published Convention Paper 5616 arguing against this. Lipshitz and Vanderkooy's paper was also criticized in detail by Professor James Angus in an Audio Engineering Society presentation, Convention Paper 5619, and Lipshitz and Vanderkooy responded in Convention Paper 5620.

Practical DSD converter implementations were pioneered by Ed Meitner, an Austrian sound engineer and owner of EMM Labs. The DSD technology itself was developed by Sony and Philips, the designers of the audio CD. Philips' DSD tool division was transferred to Sonic Studio, LLC in 2005 for ongoing design and development.

DSD technology may also have potential for video applications. A similar structure based on pulse-width modulation, which is decoded in the same way as DSD, has been used in Laserdisc video.

Audio effects and amplification


PWM is sometimes used in sound synthesis, in particular subtractive synthesis, as it gives a sound effect similar to chorus or to slightly detuned oscillators played together. (In fact, PWM is equivalent to the difference of two sawtooth waves.[1]) The ratio between the high and low level is typically modulated with a low-frequency oscillator (LFO).
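
The equivalence with two sawtooth waves is easy to demonstrate. The Python sketch below subtracts two naive (non-band-limited) sawtooths whose phase offset is swept by a slow LFO, which yields a pulse wave whose width follows the LFO (plus a DC offset); the rates and depths are arbitrary choices for illustration.

    import numpy as np

    def saw(phase):
        """Naive (non-band-limited) sawtooth in [-1, 1] from a phase given in cycles."""
        return 2.0 * (phase - np.floor(phase + 0.5))

    fs = 44100
    t = np.arange(0, 1.0, 1.0 / fs)                  # one second of audio
    f0 = 220.0                                       # oscillator frequency in Hz
    lfo = 0.25 + 0.2 * np.sin(2 * np.pi * 0.5 * t)   # duty cycle swept slowly by a 0.5 Hz LFO

    # Difference of two phase-offset sawtooths = pulse wave with duty cycle `lfo`
    # (plus a DC offset that a DC-blocking filter would remove in practice).
    pwm = saw(f0 * t) - saw(f0 * t + lfo)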

A new class of audio amplifiers based on the PWM principle is becoming popular. Called "Class-D amplifiers", they produce a PWM equivalent of the analog input signal, which is fed to the loudspeaker via a suitable filter network to block the carrier and recover the original audio. These amplifiers are characterized by very good efficiency (≥ 90%) and by compact size and light weight for large power outputs.

Historically, a crude form of PWM has been used to play back PCM digital sound on the PC speaker, which is only capable of outputting two sound levels. By carefully timing the duration of the pulses, and by relying on the speaker's physical filtering properties (limited frequency response, self-inductance, etc.) it was possible to obtain an approximate playback of mono PCM samples, although at a very low quality, and with greatly varying results between implementations.

In more recent times, the Direct Stream Digital sound encoding method was introduced, which uses a generalized form of pulse-width modulation called pulse-density modulation at a sampling rate high enough (typically on the order of MHz) to cover the whole acoustic frequency range with sufficient fidelity. This method is used in the SACD format, and reproduction of the encoded audio signal is essentially similar to the method used in Class-D amplifiers.

A low-frequency oscillation (LFO) is a signal, usually below 20 Hz, that creates a pulsating rhythm rather than an audible tone. The term predominantly refers to a technique used in the production of electronic music, and the abbreviation is also often used for the low-frequency oscillators themselves, which produce the effects described here.

Power delivery


PWM can be used to reduce the total amount of power delivered to a load without the losses normally incurred when a power source is limited by resistive means. This is because the average power delivered is proportional to the modulation duty cycle. With a sufficiently high modulation rate, passive electronic filters can be used to smooth the pulse train and recover an average analog waveform.
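
A quick back-of-the-envelope sketch in Python shows the proportionality. With an assumed 12 V supply, a 6 Ω resistive load and a 30% duty cycle, the average power is simply 0.3 × 12²/6 = 7.2 W, and averaging a simulated pulse train gives the same answer; all the values are illustrative.

    import numpy as np

    V_supply = 12.0        # assumed supply voltage
    R_load = 6.0           # assumed resistive load, ohms
    duty = 0.3             # 30 % duty cycle

    # Average power with ideal switching is simply duty * V^2 / R:
    p_avg = duty * V_supply**2 / R_load
    print(p_avg)           # 7.2 W, versus 24 W at 100 % duty

    # The same result, obtained by averaging a simulated pulse train:
    samples = 10000
    pulse = np.where(np.arange(samples) % 100 < duty * 100, V_supply, 0.0)
    print(np.mean(pulse**2 / R_load))   # ~7.2 W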

High-frequency PWM power control systems are easily realisable with semiconductor switches. The discrete on/off states of the modulation are used to control the state of the switch(es), which correspondingly control the voltage across or current through the load. The major advantage of this approach is that the switches are either off and conducting no current, or on with (ideally) no voltage drop across them. Since the power dissipated by a switch at any instant is the product of the current through it and the voltage across it, (ideally) no power is dissipated in the switches. Realistically, semiconductor switches such as MOSFETs or BJTs are non-ideal, but high-efficiency controllers can still be built.

PWM is also often used to control the supply of electrical power to another device, such as in speed control of electric motors, volume control of Class-D audio amplifiers or brightness control of light sources, and many other power electronics applications.

For example, light dimmers for home use employ a specific type of PWM control. Home-use light dimmers typically include electronic circuitry which suppresses current flow during defined portions of each cycle of the AC line voltage. Adjusting the brightness of light emitted by a light source is then merely a matter of setting at what voltage (or phase) in the AC cycle the dimmer begins to provide electrical current to the light source (e.g. by using an electronic switch such as a triac). In this case the PWM duty cycle is defined by the frequency of the AC line voltage (50 Hz or 60 Hz depending on the country). These rather simple types of dimmers can be effectively used with inert (or relatively slow reacting) light sources such as incandescent lamps, for which the additional modulation in supplied electrical energy caused by the dimmer produces only negligible additional fluctuations in the emitted light.

Some other types of light sources such as light-emitting diodes (LEDs), however, turn on and off extremely rapidly and would perceivably flicker if supplied with low-frequency drive voltages. Perceivable flicker effects from such rapid-response light sources can be reduced by increasing the PWM frequency. If the light fluctuations are sufficiently rapid, the human visual system can no longer resolve them and the eye perceives the time-average intensity without flicker (see flicker fusion threshold).

Spectrum


The resulting spectra of the different PWM variants are similar: each contains a DC component, a base sideband containing the modulating signal, and phase-modulated carriers at each harmonic of the pulse frequency. The amplitudes of the harmonic groups are restricted by a sin(x)/x (sinc) envelope and extend to infinity.
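
One can verify the sinc-shaped envelope numerically. The Python sketch below builds a fixed 25% duty-cycle pulse train, takes its FFT, and compares the measured harmonic amplitudes against 2·d·sinc(k·d); the specific numbers are just illustrative.

    import numpy as np

    N, period, duty = 4096, 64, 0.25
    pulse = (np.arange(N) % period < duty * period).astype(float)   # 25 % duty pulse train

    spectrum = np.abs(np.fft.rfft(pulse)) / (N / 2)   # single-sided amplitudes
    for k in range(1, 6):
        measured = spectrum[k * (N // period)]        # amplitude at the k-th harmonic bin
        predicted = 2 * duty * abs(np.sinc(k * duty)) # np.sinc(x) = sin(pi*x)/(pi*x)
        print(k, round(measured, 4), round(predicted, 4))
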
Applications

Telecommunications

In telecommunications, the widths of the pulses correspond to specific data values encoded at one end and decoded at the other.

Pulses of various lengths (the information itself) will be sent at regular intervals (the carrier frequency of the modulation).
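
As a toy illustration of the idea (not any particular telecom standard), here is a Python sketch that encodes small integers as pulse widths within fixed-length frames and recovers them at the receiving end; the function names are made up for the example.

    def encode_pwm(values, frame_len=10):
        """Encode integer values 0..frame_len-1 as pulse widths in fixed-length frames."""
        bits = []
        for v in values:
            bits.extend([1] * v + [0] * (frame_len - v))
        return bits

    def decode_pwm(bits, frame_len=10):
        """Recover the values by counting the high samples in each frame."""
        return [sum(bits[i:i + frame_len]) for i in range(0, len(bits), frame_len)]

    data = [3, 7, 1, 9]
    assert decode_pwm(encode_pwm(data)) == data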

Digital




Many digital circuits can generate PWM signals (e.g. many microcontrollers have PWM outputs). They normally use a counter that increments periodically (it is connected directly or indirectly to the clock of the circuit) and is reset at the end of every period of the PWM. When the counter value exceeds the reference value, the PWM output changes state from high to low (or low to high).[1]

The incremented and periodically reset counter is the discrete version of the intersective method's sawtooth. The analog comparator of the intersective method becomes a simple integer comparison between the current counter value and the digital (possibly digitized) reference value. The duty cycle can only be varied in discrete steps, as a function of the counter resolution.
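
As a sketch of the scheme just described (hypothetical names, with Python standing in for what would normally be a hardware counter and comparator):

    def pwm_counter(reference, period=256, cycles=2):
        """Counter-based PWM: output is high while the counter is below the
        reference value, then low for the rest of the period."""
        out = []
        for _ in range(cycles):
            for count in range(period):
                out.append(1 if count < reference else 0)
        return out

    wave = pwm_counter(reference=64)          # 64/256 = 25 % duty cycle
    print(sum(wave) / len(wave))              # 0.25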

Pulse-width modulation



Pulse-width modulation (PWM) of a signal or power source involves the modulation of its duty cycle, to either convey information over a communications channel or control the amount of power sent to a load.

Principle

The simplest way to generate a PWM signal is the intersective method, which requires only a sawtooth or triangle waveform (easily generated using a simple oscillator) and a comparator. When the value of the reference signal (for example a sine wave to be encoded) is greater than that of the modulation waveform, the PWM signal is in the high state; otherwise it is in the low state.
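
Here is a small Python simulation of the intersective method, using a 50 Hz sine as the reference, a 2 kHz sawtooth as the carrier, and a crude moving-average filter to recover the signal; all the numbers are arbitrary choices for illustration.

    import numpy as np

    fs = 100_000                               # sample rate of the simulation
    t = np.arange(0, 0.02, 1.0 / fs)           # 20 ms window

    reference = 0.8 * np.sin(2 * np.pi * 50 * t)          # signal to encode
    carrier_freq = 2000.0
    carrier = 2.0 * (carrier_freq * t % 1.0) - 1.0        # sawtooth in [-1, 1]

    pwm = np.where(reference > carrier, 1.0, 0.0)         # the comparator

    # A simple moving-average low-pass filter recovers a scaled copy of the reference:
    kernel = np.ones(fs // 1000) / (fs // 1000)           # 1 ms averaging window
    recovered = np.convolve(pwm, kernel, mode="same")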

Other microcontroller features


Since embedded processors are usually used to control devices, they sometimes need to accept input from the device they are controlling. This is the purpose of the analog-to-digital converter. Since processors are built to interpret and process digital data, i.e. 1s and 0s, they cannot do anything with the analog signals that a device may be sending them. The analog-to-digital converter is therefore used to convert the incoming data into a form that the processor can recognize. There is also a digital-to-analog converter that allows the processor to send data to the device it is controlling.

In addition to the converters, many embedded microprocessors include a variety of timers as well. One of the most common types of timers is the Programmable Interval Timer, or PIT for short. A PIT just counts down from some value to zero. Once it reaches zero, it sends an interrupt to the processor indicating that it has finished counting. This is useful for devices such as thermostats, which periodically test the temperature around them to see if they need to turn the air conditioner on, the heater on, etc.
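
As a rough illustration (in Python rather than on real hardware), a PIT behaves something like the toy model below; the function name and the callback standing in for the interrupt are made up for the example.

    import time

    def programmable_interval_timer(initial_count, tick_seconds, on_expire):
        """Toy model of a PIT: count down from initial_count and fire a callback
        (standing in for the interrupt) when the count reaches zero."""
        count = initial_count
        while count > 0:
            time.sleep(tick_seconds)
            count -= 1
        on_expire()

    programmable_interval_timer(5, 0.01, lambda: print("interrupt: timer expired"))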

A Time Processing Unit (TPU) is essentially just another, more sophisticated timer. In addition to counting down, it can detect input events, generate output events, and perform other useful operations.

A dedicated Pulse Width Modulation (PWM) block makes it possible for the CPU to control power converters, resistive loads, motors, etc., without spending lots of CPU resources in tight timer loops.

A Universal Asynchronous Receiver/Transmitter (UART) block makes it possible to receive and transmit data over a serial line with very little load on the CPU.

For those wanting Ethernet, one can use an external chip like the Crystal Semiconductor CS8900A, Realtek RTL8019, or Microchip ENC28J60. All of them allow easy interfacing with a low pin count.


An analog-to-digital converter (abbreviated ADC, A/D or A to D) is a device which converts continuous signals to discrete digital numbers. The reverse operation is performed by a digital-to-analog converter (DAC).

Typically, an ADC is an electronic device that converts an input analog voltage (or current) to a digital number. However, some non-electronic or only partially electronic devices, such as rotary encoders, can also be considered ADCs. The digital output may use different coding schemes, such as binary, Gray code or two's complement binary.
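
To make the idea concrete, here is a tiny Python sketch of ideal n-bit quantization, with the result shown both as plain binary and as the equivalent Gray code; the reference voltage and resolution are assumed values.

    def adc_sample(voltage, v_ref=3.3, bits=8):
        """Quantize a voltage in [0, v_ref) to an n-bit code (ideal, no noise)."""
        code = int(voltage / v_ref * (1 << bits))
        return max(0, min(code, (1 << bits) - 1))

    def to_gray(code):
        """Convert a binary code to its Gray-code equivalent."""
        return code ^ (code >> 1)

    code = adc_sample(1.65)                    # roughly mid-scale for a 3.3 V reference
    print(format(code, "08b"), format(to_gray(code), "08b"))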