
Assignment: 1

Que.:- Introduction to DSP architectures and programming: Sampling Theory, Analog-to-Digital Converter (ADC), Digital-to-Analog Converter (DAC) and Quantization?

Ans.: Digital Sampling Theory: A digital signal processor (DSP) is an integrated circuit designed for high-speed data manipulation, and is used in audio, communications, image manipulation, and other data-acquisition and data-control applications. Digital signal processing (DSP) is the mathematical manipulation of an information signal to modify or improve it in some way. It is characterized by the representation of discrete-time, discrete-frequency, or other discrete-domain signals by a sequence of numbers or symbols, and by the processing of these signals. The goal of DSP is usually to measure, filter and/or compress continuous real-world analog signals. The first step is usually to convert the signal from analog to digital form by sampling and then digitizing it using an analog-to-digital converter (ADC), which turns the analog signal into a stream of numbers. Often, however, the required output is another analog signal, which requires a digital-to-analog converter (DAC). Even though this process is more complex than analog processing and has a discrete value range, the application of computational power to digital signal processing allows for many advantages over analog processing, such as error detection and correction in transmission as well as data compression. Digital signal processing and analog signal processing are subfields of signal processing.

DSP applications include: audio and speech signal processing, sonar and radar signal processing, sensor array processing, spectral estimation, statistical signal processing, digital image processing, signal processing for communications, control of systems, biomedical signal processing, seismic data processing, etc. DSP algorithms have long been run on standard computers, as well as on specialized processors called digital signal processors and on purpose-built hardware such as application-specific integrated circuits (ASICs).
Today there are additional technologies used for digital signal processing, including more powerful general-purpose microprocessors, field-programmable gate arrays (FPGAs), digital signal controllers (mostly for industrial applications such as motor control), and stream processors, among others. Digital signal processing can involve linear or nonlinear operations. Nonlinear signal processing is closely related to nonlinear system identification [3] and can be implemented in the time, frequency, and spatiotemporal domains. A "linear" filter is a linear transformation of input samples; other filters are "non-linear". Linear filters satisfy the superposition condition: if an input is a weighted linear combination of different signals, the output is an equally weighted linear combination of the corresponding output signals. A "causal" filter uses only previous samples of the input or output signals, while a "non-causal" filter also uses future input samples. A non-causal filter can usually be changed into a causal filter by adding a delay to it. A "time-invariant" filter has constant properties over time; other filters, such as adaptive filters, change in time.

A "stable" filter produces an output that converges to a constant value with time, or remains bounded within a finite interval. An "unstable" filter can produce an output that grows without bounds, with bounded or even zero input. A "finite impulse response" (FIR) filter uses only the input signal, while an "infinite impulse response" (IIR) filter uses both the input signal and previous samples of the output signal. FIR filters are always stable, while IIR filters may be unstable.

DSP Architecture: DSPs typically use the Harvard architecture, although von Neumann DSPs also exist. Many signal and image processing applications require fast, real-time machines. The drawback to using a true Harvard architecture is that, since it uses separate program and data memories, it needs twice as many address and data pins on the chip and twice as much external memory. Unfortunately, as the number of pins or chips increases, so does the price.

DSP Chip: A DSP chip can contain many hardware elements; some of the more common ones are listed below.

Central Arithmetic Unit: This part of the DSP performs major arithmetic functions such as multiplication and addition. It is the part that makes the DSP so fast in comparison with traditional processors.

Auxiliary Arithmetic Unit: DSPs frequently have an auxiliary arithmetic unit that performs pointer arithmetic, mathematical calculations, or logical operations in parallel with the main arithmetic unit.

Serial Ports: DSPs normally have internal serial ports for high-speed communication with other DSPs and data converters. These serial ports are directly connected to the internal buses to improve performance, to reduce external address decoding problems, and to reduce cost.
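The filter taxonomy above (linear, causal, time-invariant, FIR) can be made concrete with a small sketch. This is a minimal illustration in Python, with hypothetical names; a real DSP would implement the same sum in hardware multiply-accumulate units.

```python
# Causal FIR filter: y[n] = sum over k of h[k] * x[n-k].
# Only current and past inputs are used, so the filter is causal;
# the weighted-sum form makes it linear and time-invariant.
def fir_filter(x, h):
    y = []
    for n in range(len(x)):
        acc = 0.0
        for k, coeff in enumerate(h):
            if n - k >= 0:              # ignore samples before the signal starts
                acc += coeff * x[n - k]
        y.append(acc)
    return y

# A 2-tap averaging filter applied to a step input settles at the step height.
print(fir_filter([1, 1, 1, 1], [0.5, 0.5]))  # [0.5, 1.0, 1.0, 1.0]
```

Because the impulse response h has finite length, this filter is FIR and therefore unconditionally stable.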

Architecture of DSP

Analog-to-Digital Converter (ADC): An analog-to-digital converter (abbreviated ADC, A/D or A to D) is a device that converts a continuous physical quantity (usually voltage) to a digital number that represents the quantity's amplitude. The conversion involves quantization of the input, so it necessarily introduces a small amount of error. Instead of doing a single conversion, an ADC often performs the conversions ("samples" the input) periodically. The result is a sequence of digital values that have converted a continuous-time, continuous-amplitude analog signal into a discrete-time, discrete-amplitude digital signal.

An ADC is defined by its bandwidth (the range of frequencies it can measure) and its signal-to-noise ratio (how accurately it can measure a signal relative to the noise it introduces). The actual bandwidth of an ADC is characterized primarily by its sampling rate, and to a lesser extent by how it handles errors such as aliasing. The dynamic range of an ADC is influenced by many factors, including the resolution (the number of output levels to which it can quantize a signal), linearity and accuracy (how well the quantization levels match the true analog signal) and jitter (small timing errors that introduce additional noise). If an ADC operates at a sampling rate greater than twice the bandwidth of the signal, then perfect reconstruction is possible given an ideal ADC and neglecting quantization error. The presence of quantization error limits the dynamic range of even an ideal ADC; however, if the dynamic range of the ADC exceeds that of the input signal, its effects may be neglected, resulting in an essentially perfect digital representation of the input signal.

An ADC may also provide an isolated measurement, such as an electronic device that converts an input analog voltage or current to a digital number proportional to the magnitude of the voltage or current. However, some non-electronic or only partially electronic devices, such as rotary encoders, can also be considered ADCs. The digital output may use different coding schemes. Typically the digital output will be a two's complement binary number that is proportional to the input, but there are other possibilities; an encoder, for example, might output a Gray code. The resolution of the converter indicates the number of discrete values it can produce over the range of analog values. The resolution determines the magnitude of the quantization error, and therefore determines the maximum possible average signal-to-noise ratio for an ideal ADC without the use of oversampling. The values are usually stored electronically in binary form, so the resolution is usually expressed in bits. In consequence, the number of discrete values available, or "levels", is assumed to be a power of two. For example, an ADC with a resolution of 8 bits can encode an analog input to one of 256 different levels, since 2^8 = 256. The values can represent the range from 0 to 255 (i.e. unsigned integer) or from -128 to 127 (i.e. signed integer), depending on the application.
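The resolution arithmetic above can be sketched in a few lines of Python. Here adc_code is a hypothetical helper modelling an ideal unsigned ADC; real converters add error sources such as jitter and nonlinearity that this sketch ignores.

```python
# Ideal n-bit ADC: map a voltage in [0, v_ref) to one of 2**bits levels.
def adc_code(v, v_ref, bits):
    levels = 2 ** bits                    # e.g. 8 bits -> 256 levels
    code = int(v / v_ref * levels)        # truncating quantization
    return max(0, min(levels - 1, code))  # clamp to the valid code range

print(adc_code(2.5, 5.0, 8))   # mid-scale input -> code 128
print(adc_code(5.0, 5.0, 8))   # full-scale input clamps to code 255
```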

Digital-to-Analog Converter (DAC): In electronics, a digital-to-analog converter (DAC, D/A, D2A or D-to-A) is a device that converts digital data (usually binary) into an analog signal (current, voltage, or electric charge). An analog-to-digital converter (ADC) performs the reverse function. Unlike analog signals, digital data can be transmitted, manipulated, and stored without degradation, albeit with more complex equipment. But a DAC is needed to convert the digital signal to analog to drive an earphone or loudspeaker amplifier in order to produce sound (analog air pressure waves). DACs and their inverse, ADCs, are part of an enabling technology that has contributed greatly to the digital revolution. To illustrate, consider a typical long-distance telephone call. The caller's voice is converted into an analog electrical signal by a microphone, then the analog signal is converted to a digital stream by an ADC. The digital stream is then divided into packets, where it may be mixed with other digital data, not necessarily audio. The digital packets are then sent to the destination, but each packet may take a completely different route and may not even arrive at the destination in the correct time order. The digital voice data is then extracted from the packets and assembled into a digital data stream. A DAC converts this into an analog electrical signal, which drives an audio amplifier, which in turn drives a loudspeaker, which finally produces sound. There are several DAC architectures; the suitability of a DAC for a particular application is determined by six main parameters: physical size, power consumption, resolution, speed, accuracy and cost. Due to the complexity and the need for precisely matched components, all but the most specialist DACs are implemented as integrated circuits (ICs). Digital-to-analog conversion can degrade a signal, so a DAC should be specified so that its errors are insignificant for the intended application.
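The DAC performs the inverse mapping of the ADC. This is a minimal Python sketch of an ideal unsigned DAC (dac_voltage is a hypothetical helper); in a real signal chain a zero-order hold and a reconstruction filter would follow this step.

```python
# Ideal n-bit DAC: map an unsigned code back to a voltage in [0, v_ref).
def dac_voltage(code, v_ref, bits):
    return code / (2 ** bits) * v_ref

print(dac_voltage(128, 5.0, 8))   # code 128 of 256 -> half of full scale, 2.5
```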

DACs are commonly used in music players to convert digital data streams into analog audio signals. They are also used in televisions and mobile phones to convert digital video data into analog video signals which connect to the screen drivers to display monochrome or color images. These two applications use DACs at opposite ends of the speed/resolution trade-off. The audio DAC is a low-speed, high-resolution type, while the video DAC is a high-speed, low-to-medium-resolution type. Discrete DACs are typically extremely high-speed, low-resolution, power-hungry types, as used in military radar systems. Very high-speed test equipment, especially sampling oscilloscopes, may also use discrete DACs. A DAC converts an abstract finite-precision number (usually a fixed-point binary number) into a physical quantity (e.g., a voltage or a pressure). In particular, DACs are often used to convert finite-precision time series data to a continually varying physical signal. A typical DAC converts the abstract numbers into a concrete sequence of impulses that are then processed by a reconstruction filter using some form of interpolation to fill in data between the impulses. Other

DAC methods (e.g., methods based on delta-sigma modulation) produce a pulse-density modulated signal that can then be filtered in a similar way to produce a smoothly varying signal. As per the Nyquist-Shannon sampling theorem, a DAC can reconstruct the original signal from the sampled data provided that its bandwidth meets certain requirements (e.g., a baseband signal with bandwidth less than the Nyquist frequency). Digital sampling introduces quantization error that manifests as low-level noise added to the reconstructed signal.

Quantization: Quantization, in mathematics and digital signal processing, is the process of mapping a large set of input values to a (countable) smaller set, such as rounding values to some unit of precision. A device or algorithmic function that performs quantization is called a quantizer. The round-off error introduced by quantization is referred to as quantization error. In analog-to-digital conversion, the difference between the actual analog value and the quantized digital value is called quantization error or quantization distortion. This error is due to either rounding or truncation. The error signal is sometimes modeled as an additional random signal called quantization noise because of its stochastic behaviour. Because quantization is a many-to-few mapping, it is an inherently non-linear and irreversible process (i.e., because the same output value is shared by multiple input values, it is impossible in general to recover the exact input value when given only the output value). The set of possible input values may be infinitely large, and may possibly be continuous and therefore uncountable (such as the set of all real numbers, or all real numbers within some limited range). The set of possible output values may be finite or countably infinite. The input and output sets involved in quantization can be defined in a rather general way.
For example, vector quantization is the application of quantization to multi-dimensional (vector-valued) input data. There are two substantially different classes of applications where quantization is used. The first type, which may simply be called rounding quantization, is employed in many applications to enable the use of a simple approximate representation for some quantity that is to be measured and used in other calculations. This category includes the simple rounding approximations used in everyday arithmetic. It also includes analog-to-digital conversion of a signal for a digital signal processing system (e.g., using the sound card of a personal computer to capture an audio signal) and the calculations performed within most digital filtering processes. Here the purpose is primarily to retain as much signal fidelity as possible while eliminating unnecessary precision and keeping the dynamic range of the signal within practical limits (to avoid signal clipping or arithmetic overflow). In such uses, substantial loss of signal fidelity is often unacceptable, and the design often centers around managing the approximation error to ensure that very little distortion is introduced. The second type, which can be called rate-distortion optimized quantization, is encountered in source coding for "lossy" data compression algorithms, where the purpose is to manage distortion within the limits of the bit rate supported by a communication channel or storage medium. In this second setting, the amount of introduced distortion may be managed carefully by sophisticated techniques, and introducing some significant amount of distortion may be unavoidable. A quantizer designed for this purpose may be quite different and more elaborate in design than an ordinary rounding operation. It is in this domain that substantial rate-distortion theory analysis is likely to be applied. However, the same concepts actually apply in both use cases.
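The rounding form of quantization described above can be sketched as follows (Python, with a hypothetical step size q). For a uniform rounding quantizer, the round-off error never exceeds half the step size.

```python
# Uniform rounding quantizer: map x to the nearest multiple of the step size q.
def quantize(x, q):
    return q * round(x / q)

x, q = 0.37, 0.1
xq = quantize(x, q)        # 0.4, the nearest multiple of 0.1
error = abs(x - xq)        # quantization error, at most q/2 = 0.05
print(xq, error)
```

The mapping is many-to-few: every input between 0.35 and 0.45 produces the same output 0.4, which is why the exact input cannot be recovered from the quantized value.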

Assignment: 2
Que.:- Decimation, Interpolation, Convolution, Simple Moving Average?

Ans.: Decimation: Loosely speaking, "decimation" is the process of reducing the sampling rate of a signal. In practice, this usually means low-pass filtering the signal and then throwing away some of its samples; decimation reduces the original sampling rate of a sequence to a lower rate, and is the opposite of interpolation. MATLAB's decimate function, for example, low-pass filters the input to guard against aliasing and then downsamples the result. Its syntax is:

y = decimate(x,r)
y = decimate(x,r,n)
y = decimate(x,r,'fir')
y = decimate(x,r,n,'fir')

Decimation thus consists of low-pass filtering followed by downsampling. The filtering part can be implemented with either FIR or IIR filters. To implement the downsampling part (by a downsampling factor of M), simply keep every Mth sample and throw away the M-1 samples in between. For example, to decimate by 4, keep every fourth sample and throw three out of every four samples away.

Interpolation: Polynomial interpolation methods have been studied quite extensively in the signal and image processing literature of the past three decades [7, 8, 14]. An example of such methods is Lagrange central interpolation of given, fixed degree, which is known to yield interpolants that are not continuously differentiable [10, 11]. In order to obtain smoother interpolants, as may be required for some applications, several alternative interpolation methods have been proposed. Popular examples are the so-called cubic convolution interpolation methods, of which the ones proposed by Keys [5] are the most well known. It is probably less well known that methods for obtaining smooth interpolants have been developed in other areas of applied mathematics since the second half of the nineteenth century.
In this brief note we establish a link between classical osculatory interpolation and modern convolution-based interpolation, and use it to show that both of Keys' cubic convolution schemes are formally equivalent to particular osculatory interpolation schemes proposed around the beginning of the twentieth century. We also discuss their computational differences and give explicit forms of the kernels that follow from other cubic osculatory interpolation schemes.
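Returning to decimation, the two-step procedure (low-pass filter, then keep every Mth sample) can be sketched in Python. As an assumption for illustration only, a simple M-point moving average stands in for a properly designed anti-aliasing filter; decimate_by is a hypothetical helper, not MATLAB's decimate.

```python
# Decimation by a factor m: filter first, then keep every mth sample.
def decimate_by(x, m):
    filtered = []
    for n in range(len(x)):
        window = x[max(0, n - m + 1):n + 1]   # crude m-point moving average
        filtered.append(sum(window) / len(window))
    return filtered[::m]                      # discard m-1 of every m samples

print(decimate_by(list(range(8)), 2))   # [0.0, 1.5, 3.5, 5.5]
```

Skipping the filtering step and keeping every mth raw sample would alias any signal content above half the new sampling rate, which is exactly what the low-pass stage guards against.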

In many DSP applications there is a need to know the value of a signal also between the existing discrete-time samples x(n). Special interpolation filters can be used to compute new sample values y(l) = y_a(t_l) at arbitrary points t_l = (n_l + mu_l) * T_in. Here y_a(t) approximates the original continuous-time signal x_a(t). The output sample time is thus determined by the fractional interval (or delay) mu_l in [0, 1) and the integer index n_l. The terms "interpolator" and "fractional delay (FD) filter" are also used in the literature.
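The simplest fractional-delay filter is first-order (linear) interpolation between the two neighbouring samples. This Python sketch (frac_delay is a hypothetical helper) estimates the value a fraction mu of the way between x[n] and x[n+1]:

```python
# Linear fractional-delay interpolation between two existing samples:
# value at t = (n + mu) * T_in, with fractional interval mu in [0, 1).
def frac_delay(x, n, mu):
    return (1 - mu) * x[n] + mu * x[n + 1]

x = [0.0, 2.0, 4.0, 6.0]          # samples of a ramp
print(frac_delay(x, 1, 0.25))     # a quarter of the way from x[1]=2 to x[2]=4
```

Higher-order schemes such as Lagrange or cubic convolution interpolation refine the same idea with longer kernels and smoother interpolants.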

Convolution: In this correspondence we have derived a general expression for the kernels implicitly involved in classical osculatory interpolation schemes. Using this formula we have shown that the still popular cubic convolution kernels described by Keys [5] twenty years ago are precisely the kernels involved in the osculatory interpolation schemes proposed by Karup and King [4, 6] and Henderson [3] around 1900. We have also discussed their computational differences, from which we conclude that the osculatory versions are computationally cheaper, but require additional memory. Finally we have given the explicit forms and properties of other cubic convolution interpolation kernels implicitly used in the actuarial literature for a long time now, but which to the best of our knowledge have not been investigated before in the context of signal and image processing. Further study will be required to reveal the suitability of these kernels and the optimal values of their free parameters for specific applications.

Simple Moving Average: A simple, or arithmetic, moving average is calculated by adding the closing price of the security over a number of time periods and then dividing the total by the number of time periods. Short-term averages respond quickly to changes in the price of the underlying, while long-term averages are slow to react. In other words, this is the average stock price over a certain period of time; keep in mind that equal weighting is given to each daily price. On a price chart, many traders watch for short-term averages to cross above longer-term averages to signal the beginning of an uptrend. Short-term averages (e.g. a 15-period SMA) can also act as levels of support when the price experiences a pullback, and support levels become stronger and more significant as the number of time periods used in the calculation increases. A moving average is commonly used with time series data to smooth out short-term fluctuations and highlight longer-term trends or cycles. The threshold between short-term and long-term depends on the application, and the parameters of the moving average will be set accordingly. For example, it is often used in technical analysis of financial data, like stock prices, returns or trading volumes. It is also used in economics to examine gross domestic product, employment or other macroeconomic time series. Mathematically, a moving average is a type of convolution, and so it can be viewed as an example of a low-pass filter as used in signal processing. When used with non-time-series data, a moving average filters higher-frequency components without any specific connection to time, although typically some kind of ordering is implied. Viewed simplistically, it can be regarded as smoothing the data.
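The calculation described above is short in Python; sma is a hypothetical helper that averages each sliding window of p closing prices, giving equal weight to every price in the window.

```python
# Simple moving average: mean of the last p values at each position.
def sma(prices, p):
    return [sum(prices[i - p + 1:i + 1]) / p
            for i in range(p - 1, len(prices))]

print(sma([10, 11, 12, 13, 14], 3))   # [11.0, 12.0, 13.0]
```

Because each output is an equal-weight sum of p inputs, this is exactly the convolution of the price series with a length-p kernel of 1/p values, i.e. a low-pass filter.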

Assignment: 3
Que.:- Periodic Signals and harmonics?

Ans.: Periodic Signals and harmonics: In this experiment you can select among different periodic continuous-time signals, choose their fundamental frequency in Hz, and listen to them. You will notice that both the frequency and the shape of the signal affect the resulting sound. Two periodic signals with identical periods but different shapes will sound different. This effect is not surprising, since different periodic signals have different Fourier series representations and, consequently, different content in terms of harmonic frequencies, as explained in the experiment Fourier Series and Gibbs Phenomenon. You can listen here to different signals with the same frequency (1 kHz) but with different periodic waveform shapes.

A harmonic of a wave is a component frequency of the signal that is an integer multiple of the fundamental frequency, i.e. if the fundamental frequency is f, the harmonics have frequencies 2f, 3f, 4f, etc. The harmonics have the property that they are all periodic at the fundamental frequency; therefore the sum of harmonics is also periodic at that frequency. Harmonic frequencies are equally spaced by the width of the fundamental frequency and can be found by repeatedly adding that frequency. For example, if the fundamental frequency (first harmonic) is 25 Hz, the frequencies of the next harmonics are 50 Hz (2nd harmonic), 75 Hz (3rd harmonic), 100 Hz (4th harmonic), etc. Many oscillators, including the human voice, a bowed violin string, or a Cepheid variable star, are more or less periodic, and so composed of harmonics, also known as harmonic partials. Most passive oscillators, such as a plucked guitar string or a struck drum head or struck bell, naturally oscillate at not one but several frequencies, known as partials.
When the oscillator is long and thin, such as a guitar string or the column of air in a trumpet, many of the partials are integer multiples of the fundamental frequency; these are called harmonics. Sounds made by long, thin oscillators are for the most part arranged harmonically, and these sounds are generally considered to be musically pleasing. Partials whose frequencies are not integer multiples of the fundamental are referred to as inharmonic partials. Instruments such as cymbals, pianos, and strings plucked pizzicato create inharmonic sounds. You will also notice in this experiment that phase changes in a signal do not affect the way it sounds, because the human ear is insensitive to phase offsets. You can listen here to different sound signals composed of the same fundamental frequency (440 Hz) and the same harmonic frequencies (880 Hz, 1320 Hz, 1760 Hz, 2200 Hz) but with different harmonic amplitudes.
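The claim that a sum of harmonics is periodic at the fundamental frequency is easy to verify numerically. This Python sketch sums a 25 Hz fundamental with its 50 Hz and 75 Hz harmonics (amplitudes chosen arbitrarily for illustration) and checks that the waveform repeats after one fundamental period 1/f0:

```python
import math

# Sum a fundamental at f0 Hz with harmonics at 2*f0, 3*f0, ...;
# harmonic k gets amplitude amplitudes[k-1].
def harmonic_sum(t, f0, amplitudes):
    return sum(a * math.sin(2 * math.pi * k * f0 * t)
               for k, a in enumerate(amplitudes, start=1))

f0 = 25.0                     # harmonics fall at 50, 75, 100 Hz ...
amps = [1.0, 0.5, 0.25]
t = 0.013
# Every harmonic is periodic at f0, so the sum repeats every 1/f0 seconds.
same = abs(harmonic_sum(t, f0, amps) - harmonic_sum(t + 1 / f0, f0, amps)) < 1e-9
print(same)   # True
```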

Assignment: 4
Que.:- Fourier Transform (DFT/FFT), Spectral Analysis, and time/spectrum representations?

Ans.: Fourier Transform (DFT/FFT): Spectral analysis is the process of identifying component frequencies in data. For discrete data, the computational basis of spectral analysis is the discrete Fourier transform (DFT). The DFT transforms time- or space-based data into frequency-based data. The DFT of a vector x of length n is another vector y of length n:

y(p+1) = sum for j = 0 to n-1 of w^(j*p) * x(j+1)

where w is a complex nth root of unity:

w = e^(-2*pi*i/n)

This notation uses i for the imaginary unit, and p and j for indices that run from 0 to n-1. The indices p+1 and j+1 run from 1 to n, corresponding to the ranges associated with MATLAB vectors. Data in the vector x are assumed to be separated by a constant interval in time or space, dt = 1/fs or ds = 1/fs, where fs is the sampling frequency. The DFT y is complex-valued. The absolute value of y at index p+1 measures the amount of the frequency f = p*(fs/n) present in the data. The discrete Fourier transform (DFT) is the equivalent of the continuous Fourier transform for signals known only at N instants separated by sample times T (i.e. a finite sequence of data). The time taken to evaluate a DFT on a digital computer depends principally on the number of multiplications involved, since these are the slowest operations. With the DFT, this number is directly related to N^2 (matrix multiplication of a vector), where N is the length of the transform. For most problems, N is chosen to be at least 256 in order to get a reasonable approximation of the spectrum of the sequence under consideration; hence computational speed becomes a major consideration.
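The DFT can be evaluated directly from its definition. This Python sketch implements the O(N^2) sum (an FFT computes the same result in O(N log N)); the test signal is a sinusoid with exactly one cycle over four samples, so its energy concentrates in single frequency bins.

```python
import cmath

# Direct DFT: y[p] = sum over j of x[j] * w**(j*p), with w = exp(-2*pi*i/n).
def dft(x):
    n = len(x)
    w = cmath.exp(-2j * cmath.pi / n)
    return [sum(x[j] * w ** (j * p) for j in range(n)) for p in range(n)]

# One full sine cycle over four samples: sin(2*pi*j/4) for j = 0..3.
y = dft([0, 1, 0, -1])
print([round(abs(v), 6) for v in y])   # [0.0, 2.0, 0.0, 2.0]
```

All the energy appears in bins 1 and 3 (bin 3 is the mirror of bin 1 for a real-valued input), which is the frequency-domain picture of a pure tone.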

Spectral Analysis and time/spectrum representations: A spectrum analyzer measures the magnitude of an input signal versus frequency within the full frequency range of the instrument. Its primary use is to measure the power of the spectrum of known and unknown signals. The input signal a spectrum analyzer measures is electrical; however, spectral compositions of other signals, such as acoustic pressure waves and optical light waves, can be considered through the use of an appropriate transducer. Optical spectrum analyzers also exist, which use direct optical techniques such as a monochromator to make measurements. By analyzing the spectra of electrical signals, dominant frequency, power, distortion, harmonics, bandwidth, and other spectral components of a signal can be observed that are not easily detectable in time-domain waveforms. These parameters are useful in the characterization of electronic devices, such as wireless transmitters. Spectrum analyzer types are dictated by the methods used to obtain the spectrum of a signal; there are swept-tuned and FFT-based spectrum analyzers:

An FFT spectrum analyzer computes the discrete Fourier transform (DFT), a mathematical process that transforms a waveform into the components of its frequency spectrum, of the input signal. Some spectrum analyzers, such as real-time spectrum analyzers, use a hybrid technique in which the incoming signal is first down-converted to a lower frequency using superheterodyne techniques and then analyzed using fast Fourier transform (FFT) techniques.

Form factor: Spectrum analyzers tend to fall into three form factors: benchtop, portable and handheld.

Benchtop: This form factor is useful for applications where the spectrum analyzer can be plugged into AC power, which generally means a lab environment or a production/manufacturing area. Benchtop spectrum analyzers have historically offered better performance and specifications than the portable or handheld form factors. They normally have multiple fans (with associated vents) to dissipate the heat produced by the processor and, due to their architecture, typically weigh more than 30 pounds (14 kg). Some benchtop spectrum analyzers offer optional battery packs, allowing them to be used away from AC power; this type of analyzer is often referred to as a "portable" spectrum analyzer.

Portable: This form factor is useful for applications where the spectrum analyzer needs to be taken outside to make measurements, or simply carried while in use. Attributes that contribute to a useful portable spectrum analyzer include: optional battery-powered operation to allow the user to move freely outside; a clearly viewable display so the screen can be read in bright sunlight, darkness or dusty conditions; and light weight (usually less than 15 pounds, or 6.8 kg).

Handheld: This form factor is useful for applications where the spectrum analyzer needs to be very light and small. Handheld analyzers offer limited capability relative to larger systems. Attributes that contribute to a useful handheld spectrum analyzer include: very low power consumption; battery-powered operation in the field so the user can move freely outside; very small size; and light weight (usually less than 2 pounds, or 0.91 kg).

Assignment: 5
Que.:- FIR and IIR Filters?

Ans.: FIR and IIR Filters: FIR filter diagrams are often simplified as shown in Figure 6.12. The summations are represented by arrows pointing into the dots, and the multiplications are indicated by placing the h(k) coefficients next to the arrows on the lines. The z^-1 delay element is often shown by placing the label above or next to the appropriate line.

IIR filters are difficult to control and do not in general have linear phase, whereas FIR filters can always be designed with exactly linear phase. IIR filters can be unstable, whereas FIR filters are always stable. IIR filters can exhibit limit cycles, whereas FIR filters have no limit cycles. IIR designs are derived from analog prototypes, whereas FIR filters have no analog history. IIR filters make polyphase implementation possible, whereas FIR filters can always be made causal. FIR filters are helpful for achieving fractional constant delays. #MAD stands for the number of multiplications and additions, and is used as a criterion for comparing IIR and FIR filters: to meet a given specification, an FIR filter needs a higher order than an IIR filter, so it requires more #MADs, although polyphase structures can reduce this cost. FIR filters are chosen where linear-phase characteristics are required, whereas IIR filters are used for applications that do not require linear phase. FIR filters' delay characteristics are much better, but they require more memory. An IIR filter's output depends on both its input and its previous output, whereas an FIR filter's output depends on its input only. IIR filters consist of zeros and poles and require less memory than FIR filters, whereas FIR filters consist only of zeros. IIR filters can become difficult to implement, and delay and distortion adjustments can alter the poles and zeros and make the filter unstable, whereas FIR filters remain stable. FIR filters are used for higher-order designs (many taps), and IIR filters are better at lower orders, since IIR filters may become unstable at higher orders.
FIR stands for finite impulse response, whereas IIR stands for infinite impulse response; both filter types are used for filtering in digital systems, and they differ in their responses. An FIR filter's transfer function has only a numerator, whereas an IIR filter's has both a numerator and a denominator. Where an infinite impulse response is acceptable we use IIR filters, and where the impulse response must settle exactly to zero after a finite number of samples we use FIR filters. FIR filters are also preferred over IIR filters because they can have a linear phase response and are non-recursive, whereas IIR filters are recursive and involve feedback. FIR filters cannot closely simulate analog filter responses, but IIR filters are designed to do exactly that. An IIR filter's impulse response, compared to an FIR filter's, is infinite. The high computational efficiency of IIR filters, together with their short delays, often makes them a popular alternative. In digital feedback systems and other delay-sensitive applications, FIR filters can become too long and cause problems.
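The finite-vs-infinite distinction is easy to see by driving each filter type with a unit impulse. This Python sketch uses hypothetical example filters (a 2-tap FIR and a one-pole IIR with feedback coefficient 0.5, chosen for illustration):

```python
N = 6
impulse = [1.0] + [0.0] * (N - 1)     # unit impulse input

# FIR: y[n] = 0.5*x[n] + 0.5*x[n-1] -> response is the taps, then exactly zero.
fir = [0.5 * impulse[n] + (0.5 * impulse[n - 1] if n > 0 else 0.0)
       for n in range(N)]

# IIR: y[n] = x[n] + 0.5*y[n-1] -> feedback keeps the response nonzero forever.
iir = []
for n in range(N):
    prev = iir[-1] if iir else 0.0
    iir.append(impulse[n] + 0.5 * prev)

print(fir)   # [0.5, 0.5, 0.0, 0.0, 0.0, 0.0]
print(iir)   # [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125]
```

The FIR response is identically zero after the last tap, which is why FIR filters are unconditionally stable; the IIR response only decays geometrically and never actually reaches zero.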
