Saturday, October 10, 2009

Signals and Systems

A signal is a description of how one parameter varies with another parameter. For instance, voltage changing over time in an electronic circuit, or brightness varying with distance in an image. A system is any process that produces an output signal in response to an input signal. This is illustrated by the block diagram in Fig. 5-1. Continuous systems input and output continuous signals, such as in analog electronics. Discrete systems input and output discrete signals, such as computer programs that manipulate the values stored in arrays.
Several rules are used for naming signals. These aren't always followed in DSP, but they are very common and you should memorize them. The mathematics is difficult enough without a clear notation. First, continuous signals use parentheses, such as: x(t) and y(t), while discrete signals use brackets, as in: x[n] and y[n]. Second, signals use lower case letters. Upper case letters are reserved for the frequency domain, discussed in later chapters. Third, the name given to a signal is usually descriptive of the parameters it represents. For example, a voltage depending on time might be called: v(t), or a stock market price measured each day could be: p[d].


Signals and systems are frequently discussed without knowing the exact parameters being represented. This is the same as using x and y in algebra, without assigning a physical meaning to the variables. This brings in a fourth rule for naming signals. If a more descriptive name is not available, the input signal to a discrete system is usually called: x[n], and the output signal: y[n]. For continuous systems, the signals: x(t) and y(t) are used.
There are many reasons for wanting to understand a system. For example, you may want to design a system to remove noise in an electrocardiogram, sharpen an out-of-focus image, or remove echoes in an audio recording. In other cases, the system might have a distortion or interfering effect that you need to characterize or measure. For instance, when you speak into a telephone, you expect the other person to hear something that resembles your voice. Unfortunately, the input signal to a transmission line is seldom identical to the output signal. If you understand how the transmission line (the system) is changing the signal, maybe you can compensate for its effect. In still other cases, the system may represent some physical process that you want to study or analyze. Radar and sonar are good examples of this. These methods operate by comparing the transmitted and reflected signals to find the characteristics of a remote object. In terms of system theory, the problem is to find the system that changes the transmitted signal into the received signal.
At first glance, it may seem an overwhelming task to understand all of the possible systems in the world. Fortunately, most useful systems fall into a category called linear systems. This fact is extremely important. Without the linear system concept, we would be forced to examine the individual characteristics of many unrelated systems. With this approach, we can focus on the traits of the linear system category as a whole. Our first task is to identify what properties make a system linear, and how they fit into the everyday notion of electronics, software, and other signal processing systems.


Logic signals

Most digital systems use the simplest possible type of signal, which has just two values. This type of signal is called a logic signal because the two values (or states) can be called true and false. Normally the positive supply voltage +Vs represents true and 0V represents false. Other labels for the true and false states are shown in the table of logic states below.
Noise is relatively easy to eliminate from digital signals because it is easy to distinguish from the desired signal which can only have particular values. For example: if the signal is meant to be +5V (true) or 0V (false), noise of up to 2.5V can be eliminated by treating all voltages greater than 2.5V as true and all voltages less than 2.5V as false.



Logic states

True False
1 0
High Low
+Vs 0V
On Off
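The thresholding rule described two paragraphs above is easy to state in code. A minimal sketch in Python (the 2.5 V threshold and the +5 V/0 V levels are the ones from the text; the function name is illustrative):

```python
def to_logic(voltage, threshold=2.5):
    """Recover the logic level from a noisy voltage.

    Any voltage above the threshold is read as true (+5 V intended),
    anything below as false (0 V intended), so noise of up to 2.5 V
    is rejected completely.
    """
    return voltage > threshold

# A +5 V "true" and a 0 V "false", each corrupted by 2 V of noise:
noisy_true = 5.0 - 2.0   # 3.0 V, still above the threshold
noisy_false = 0.0 + 2.0  # 2.0 V, still below the threshold
print(to_logic(noisy_true), to_logic(noisy_false))  # True False
```

This is the whole reason digital signals resist noise: the receiver only has to decide between two widely separated levels.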

Digital systems

Digital systems process digital signals which can take only a limited number of values (discrete steps), usually just two values are used: the positive supply voltage (+Vs) and zero volts (0V).
Digital systems contain devices such as logic gates, flip-flops, shift registers and counters. A computer is an example of a digital system.
A digital meter can display many values, but not every value within its range. For example a display might show 6.25 and 6.26 but not a value between them. This is not a problem because digital meters normally have sufficient digits to show values more precisely than it is possible to read an analogue display.

Analogue systems

Analogue systems process analogue signals which can take any value within a range, for example the output from an LDR (light sensor) or a microphone.
An audio amplifier is an example of an analogue system. The amplifier produces an output voltage which can be any value within the range of its power supply.
An analogue meter can display any value within the range available on its scale. However, the precision of readings is limited by our ability to read them. For example a meter might show 1.25V because the pointer is estimated to be halfway between 1.2 and 1.3. The analogue meter can show any value between 1.2 and 1.3 but we are unable to read the scale more precisely than about half a division.
All electronic circuits suffer from 'noise' which is unwanted signal mixed in with the desired signal, for example an audio amplifier may pick up some mains 'hum' (the 50Hz frequency of the UK mains electricity supply). Noise can be difficult to eliminate from analogue signals because it may be hard to distinguish from the desired signal.

Advantages and Disadvantages of Analog Signal

Advantages

The main advantage is the fine definition of the analog signal which has the potential for an infinite amount of signal resolution. Compared to digital signals, analog signals are of higher density.
Another advantage with analog signals is that their processing may be achieved more simply than with the digital equivalent. An analog signal may be processed directly by analog components, though some processes aren't available except in digital form.

Disadvantages

The primary disadvantage of analog signaling is that any system has noise – i.e., random unwanted variation. As the signal is copied and re-copied, or transmitted over long distances, these apparently random variations become dominant. Electrically, these losses can be diminished by shielding, good connections, and several cable types such as coaxial or twisted pair.
The effects of noise create signal loss and distortion. This is impossible to recover, since amplifying the signal to recover attenuated parts of the signal amplifies the noise (distortion/interference) as well. Even if the resolution of an analog signal is higher than a comparable digital signal, the difference can be overshadowed by the noise in the signal.

Analog Signal

An analogue signal is any continuous signal for which the time-varying feature (variable) of the signal is a representation of some other time-varying quantity, i.e., analogous to another time-varying signal. It differs from a digital signal in that small fluctuations in the signal are meaningful. Analog is usually thought of in an electrical context; however, mechanical, pneumatic, hydraulic, and other systems may also convey analog signals.
An analog signal uses some property of the medium to convey the signal's information. For example, an aneroid barometer uses rotary position as the signal to convey pressure information. Electrically, the property most commonly used is voltage followed closely by frequency, current, and charge.
Any information may be conveyed by an analog signal; often such a signal is a measured response to changes in physical phenomena, such as sound, light, temperature, position, or pressure, and is achieved using a transducer.
For example, in sound recording, fluctuations in air pressure (that is to say, sound) strike the diaphragm of a microphone, which induces corresponding fluctuations in the current produced by a coil in an electromagnetic microphone, or the voltage produced by a condenser microphone. The voltage or the current is said to be an "analog" of the sound.
An analog signal has a theoretically infinite resolution. In practice an analog signal is subject to noise and a finite slew rate. Therefore, both analog and digital systems are subject to limitations in resolution and bandwidth. As analog systems become more complex, effects such as non-linearity and noise ultimately degrade analog resolution to such an extent that the performance of digital systems may surpass it. Similarly, as digital systems become more complex, errors can occur in the digital data stream. A comparable performing digital system is more complex and requires more bandwidth than its analog counterpart.[citation needed] In analog systems, it is difficult to detect when such degradation occurs. However, in digital systems, degradation can not only be detected but corrected as well.



Digital Signal


The term digital signal is used to refer to more than one concept. It can refer to discrete-time signals that have a discrete number of levels, for example a sampled and quantized analog signal, or to the continuous-time waveform signals in a digital system that represent a bit-stream. A signal that is generated by means of a digital modulation method is considered as converted to an analog signal in the first case, while it is considered as a digital signal in the second case.



An analog signal is a datum that changes over time—say, the temperature at a given location; the depth of a certain point in a pond; or the amplitude of the voltage at some node in a circuit—that can be represented as a mathematical function, with time as the free variable (abscissa) and the signal itself as the dependent variable (ordinate). A discrete-time signal is a sampled version of an analog signal: the value of the datum is noted at fixed intervals (for example, every microsecond) rather than continuously.
If individual time values of the discrete-time signal, instead of being measured precisely (which would require an infinite number of digits), are approximated to a certain precision—which, therefore, only requires a specific number of digits—then the resultant data stream is termed a digital signal. The process of approximating the precise value within a fixed number of digits, or bits, is called quantization.
In conceptual summary, a digital signal is a quantized discrete-time signal; a discrete-time signal is a sampled analog signal.
In the Digital Revolution, the usage of digital signals has increased significantly. Many modern media devices, especially the ones that connect with computers, use digital signals to represent signals that were traditionally represented as continuous-time signals; cell phones, music and video players, personal video recorders, and digital cameras are examples.
In most applications, digital signals are represented as binary numbers, so their precision of quantization is measured in bits. Suppose, for example, that we wish to measure a signal to two significant decimal digits. Since seven bits, or binary digits, can record 128 discrete values (viz., from 0 to 127), those seven bits are more than sufficient to express a range of one hundred values.
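The bit-counting argument above can be checked with a small sketch. Below is a hedged Python illustration of quantizing a value to 7 bits (128 levels); the function and the [0, 1) interval are illustrative, not from the original text:

```python
def quantize(value, lo, hi, bits):
    """Map a value in [lo, hi] to the nearest of 2**bits levels,
    returning the integer code and the reconstructed value."""
    levels = 2**bits - 1
    code = round((value - lo) / (hi - lo) * levels)
    return code, lo + code * (hi - lo) / levels

# Two significant decimal digits over [0, 1): 100 distinct values,
# so 7 bits (128 levels) are more than sufficient.
code, approx = quantize(0.57, 0.0, 1.0, 7)
print(code, approx)
```

The quantization step here is 1/127, so the worst-case error is about 0.004 — small enough to resolve two decimal digits.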


In computer architecture and other digital systems, a waveform that switches between two voltage levels representing the two states of a Boolean value (0 and 1) is referred to as a digital signal, even though it is an analog voltage waveform, since it is interpreted in terms of only two levels.
The clock signal is a special digital signal that is used to synchronize digital circuits. The image shown can be considered the waveform of a clock signal. Logic changes are triggered either by the rising edge or the falling edge.
The given diagram is an example of a practical pulse, which introduces two new terms:
Rising edge: the transition from a low voltage (level 1 in the diagram) to a high voltage (level 2).
Falling edge: the transition from a high voltage to a low one.
Although in a highly simplified and idealised model of a digital circuit we may wish for these transitions to occur instantaneously, no real world circuit is purely resistive and therefore no circuit can instantly change voltage levels. This means that during a short, finite transition time the output may not properly reflect the input, and indeed may not correspond to either a logically high or low voltage.


Analog v/s Digital Signal and Recording

An analog signal can be termed a continuous signal whose value is defined at every instant of time, while a digital signal is a discrete-time signal whose value is defined only at discrete instants. The comparison between analog and digital signals is best understood with a figure such as the one above.

Analog recording versus digital recording compares the two ways in which sound is recorded and stored. Actual sound waves consist of continuous variations in air pressure. Representations of these signals can be recorded using either digital or analog techniques.
An analog recording is one where a property or characteristic of a physical recording medium is made to vary in a manner analogous to the variations in air pressure of the original sound. Generally, the air pressure variations are first converted (by a transducer such as a microphone) into an electrical analog signal in which either the instantaneous voltage or current is directly proportional to the instantaneous air pressure (or is a function of the pressure). The variations of the electrical signal in turn are converted to variations in the recording medium by a recording machine such as a tape recorder or record cutter—the variable property of the medium is modulated by the signal. Examples of properties that are modified are the magnetization of magnetic tape or the deviation (or displacement) of the groove of a gramophone disc from a smooth, flat spiral track. The key aspect which makes the recording analog is that a physical quality of the medium (e.g., the intensity of the magnetic field or the path of a record groove) is directly related, or analogous, to the physical properties of the original sound (e.g., the amplitude, phase, etc.), or of the virtual sound in the case of artificially produced analog signals (such as the output from a guitar amp, a synthesizer, or tape recorder effects playback.)
A digital recording is produced by converting the physical properties of the original sound into a sequence of numbers, which can then be stored and read back for reproduction. Usually (virtually always), the sound is transduced (as by a microphone) to an analog signal in the same way as for analog recording, and then the analog signal is digitized, or converted to a digital signal, through an Analog-to-Digital converter (an electronic device) either integrated into the digital audio recorder or separate and connected between the recorder and the analog source. An electrical digital signal has variations in voltage and/or current which represent discrete numbers instead of being continuously mathematically related as a function to the air pressure variations of sound.
There are two chief distinctions between an analog and a digital signal. The first is that the analog signal is continuous in time, meaning that it varies smoothly over time no matter how short a time period you consider; the digital signal, in contrast, is discrete in time, meaning it has distinct parts that follow one after another with definite, unambiguous division points (called signal transitions) between them. The second distinction is that analog signals are continuous in amplitude, whereas digital signals are quantized. That analog signals are continuous means that they have no artificially set limit on possible instantaneous levels—no signal processing is used to "round off" the number of signal levels.
Fundamental laws of physics require the quantization of all analog signals (LeSurf 2007), though this fact is not commonly a limiting factor in system performance. This is because differences in quantum energy level spacing are so small as to be unimportant with typical analog signal intensities.
Digitally-processed quantized signals have a precise, limited number of possible instantaneous values, called quantization levels, and it is impossible to have a value in between two adjacent quantization levels. Almost paradoxically, it is precisely this limitation that gives digital signals their main advantages.
Each numerical value measured at a single instant in time for a single signal is called a sample; samples are measured at a regular periodic rate to record a signal. The accuracy of the conversion process depends on the sampling rate (how often the sound is sampled and a related numerical value is recorded) and the sampling depth, also called the quantization depth (how much information each sample contains, which can also be described as the maximum numerical size of each sampled value). However, unlike analog recording in which the quality of playback depends critically on the "fidelity" or accuracy of the medium and of the playback device, the physical medium storing digital samples may somewhat distort the encoded information without degrading the quality of playback so long as the original sequence of numbers can be recovered.

Continuous Time Signal

A continuous-time signal is one whose value is defined at every instant of time, i.e., it takes values continuously, unlike a discrete-time signal.



A continuous signal or a continuous-time signal is a varying quantity (a signal) whose domain, which is often time, is a continuum (e.g., a connected interval of the reals). That is, the function's domain is an uncountable set. The function itself need not be continuous. To contrast, a discrete time signal has a countable domain, like the natural numbers.
The signal is defined over a domain, which may or may not be finite, and there is a functional mapping from the domain to the value of the signal. The continuity of the time variable, in connection with the law of density of real numbers, means that the signal value can be found at any arbitrary point in time.
A typical example of an infinite duration signal is:
f(t) = sin(t)

Discrete Time Signal

A signal is said to be a discrete-time signal if it is defined only at discrete values of time, i.e., it does not have a value at every instant of time but only at discrete instants.

A discrete-time signal is a time series consisting of a sequence of quantities; in other words, it is a function over a domain of discrete integers. Each value in the sequence is called a sample.
Unlike a continuous-time signal, a discrete-time signal is not a function of a continuous argument; however, it may have been obtained by sampling a continuous-time signal. When a discrete-time signal is a sequence corresponding to uniformly spaced times, it has an associated sampling rate. The sampling rate is not apparent in the data sequence itself, so it may be stored as a separate data item.
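A discrete-time signal obtained by uniform sampling can be sketched in a few lines of Python; the frequency and sampling rate below are illustrative values, not taken from the text:

```python
import math

f = 100.0   # signal frequency, Hz (illustrative)
fs = 800.0  # sampling rate, Hz -- kept as a separate data item,
            # since it cannot be recovered from the samples alone

# The discrete-time signal: samples of sin(2*pi*f*t) taken at t = n/fs
x = [math.sin(2 * math.pi * f * n / fs) for n in range(8)]
print(x)
```

The list `x` is the entire discrete-time signal; the value 800 Hz has to travel alongside it if anyone later needs to relate sample index n back to real time.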

Different Definitions of Signal


- any nonverbal action or gesture that encodes a message; "signals from the boat suddenly stopped"
- sign: communicate silently and non-verbally by signals or signs; "He signed his disapproval with a dismissive hand gesture"; "The diner signaled the waiters to bring the menu"
- any incitement to action; "he awaited the signal to start"; "the victory was a signal for wild celebration"
- bespeak: be a signal for or a symptom of; "These symptoms indicate a serious illness"; "Her behavior points to a severe neurosis"; "The economic indicators signal that the euro is undervalued"
- an electric quantity (voltage or current or field strength) whose modulation represents coded information about the source from which it comes
- notably out of the ordinary; "the year saw one signal triumph for the Labour party"



Introduction to the z-transform

The z-transform is useful for the manipulation of discrete data sequences and has acquired a new significance in the formulation and analysis of discrete-time systems. It is used extensively today in the areas of applied mathematics, digital signal processing, control theory, population science, and economics. These discrete models are solved with difference equations in a manner that is analogous to solving continuous models with differential equations. The role played by the z-transform in the solution of difference equations corresponds to that played by the Laplace transform in the solution of differential equations.

The function notation for sequences is used in the study and application of z-transforms. Consider a function x(t) defined for t >= 0 that is sampled at times t = nT, where T is the sampling period. We can write the samples as a sequence using the notation x[n] = x(nT) for n = 0, 1, 2, .... Without loss of generality we will set T = 1 and consider real sequences such as x[n]. The z-transform is defined as an infinite series in the reciprocals of z:

X(z) = sum over n = 0, 1, 2, ... of x[n] z^(-n)
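The z-transform series can be checked numerically. A sketch in Python, assuming the causal definition X(z) = sum of x[n] z^(-n); the sequence x[n] = a^n and the values of a and z are illustrative:

```python
def z_transform(x, z, terms=200):
    """Partial sum of X(z) = sum_{n>=0} x(n) * z**(-n)."""
    return sum(x(n) * z**(-n) for n in range(terms))

# For x[n] = a**n the geometric series sums to z/(z - a) when |z| > |a|,
# so the partial sum should agree closely with the closed form.
a, z = 0.5, 2.0
approx = z_transform(lambda n: a**n, z)
exact = z / (z - a)
print(approx, exact)
```

With |a/z| = 0.25 the truncation error after 200 terms is negligible, so the two numbers agree to machine precision.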

Discrete Fourier Transform

Suppose an audio signal is sampled at 8 kHz. This means that every successive eighth of a millisecond one makes a measurement of the intensity of the signal. For the remainder of this text, we will assume we are working on a sample of eight measurements. Here is an example of such a sample: 50 mV, 206 mV, -100 mV, -65 mV, -50 mV, -6 mV, 100 mV, -135 mV. These eight values could be represented as points on a graph.
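The eight-point transform of this sample can be computed directly with NumPy's FFT routine (numpy is assumed to be available; this is a numerical illustration, not the hand derivation the text is working toward):

```python
import numpy as np

# The eight measurements from the text, in millivolts
x = np.array([50, 206, -100, -65, -50, -6, 100, -135], dtype=float)

X = np.fft.fft(x)  # 8-point DFT: one complex coefficient per frequency bin
print(np.round(X, 2))

# Bin k corresponds to frequency k * fs / N = k * 1000 Hz
# for fs = 8 kHz and N = 8 samples.
```

Note that these particular measurements happen to sum to zero, so the DC bin X[0] comes out zero; the inverse FFT recovers the original samples exactly.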



Multiplication and convolution

Using the tool, review the transforms of the unit pulse function and the cosine function. For the moment it is best to view these using the magnitude and phase representation of the frequency domain.
Now switch to an 8 ms segment of a cosine or sine waveform. You should observe that the frequency domain plot is some form of combination of the two types of signal. Strictly speaking, the time domain signal is the multiplication of a unit pulse of 8 ms duration delayed by 4 ms, and a cosinusoid or sinusoid waveform of the selected frequency. The frequency domain transform is then the addition of two sa functions which have been shifted in frequency. Notice where the highest peaks are and you should observe that these correspond with the frequency of the sine or cosine signal. What has happened is that in the frequency domain the sa function from the unit pulse and the two impulses from the sine or cosine function have been convolved together. This is an example of the general rule that multiplication in the time domain equates to convolution in the frequency domain.
You can reconstruct the two constituent waveforms by shifting the frequency response of the 8 ms unit pulse to 500 Hz and to -500 Hz. You should find that the real components of the two shifted signals are the same, but that the quadrature components are the complement of each other. Thus when they are summed together, the result is a signal with a real component and a zero quadrature component.
In fact an equivalent rule also holds that convolution in the time domain equates to multiplication in the frequency domain. Thus, for example, a complex phasor in the frequency domain multiplied by a given signal's transform produces a time domain function where an impulse is convolved with the signal. This is precisely what is happening when the delay value is being altered.
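The multiplication-convolution rule has a discrete counterpart that can be verified numerically: the DFT of a pointwise product equals the circular convolution of the two DFTs divided by N. A sketch in Python (numpy assumed; the signals are arbitrary random test vectors, not from the text):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 16
x = rng.standard_normal(N)
y = rng.standard_normal(N)

# DFT of the pointwise product ...
lhs = np.fft.fft(x * y)

# ... equals the circular convolution of the two DFTs, scaled by 1/N:
# (X (*) Y)[k] = sum_m X[m] * Y[(k - m) mod N]
X, Y = np.fft.fft(x), np.fft.fft(y)
rhs = np.array([np.sum(X * np.roll(Y[::-1], k + 1)) for k in range(N)]) / N
print(np.allclose(lhs, rhs))
```

The `np.roll(Y[::-1], k + 1)` indexing produces Y[(k - m) mod N] as m runs over the array, which is exactly the circular-convolution sum.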

Fourier Transforms

The Fourier transform defines a relationship between a signal in the time domain and its representation in the frequency domain. Being a transform, no information is created or lost in the process, so the original signal can be recovered from knowing the Fourier transform, and vice versa.
The Fourier transform of a signal is a continuous complex valued signal capable of representing real valued or complex valued continuous time signals.
The tool allows you to view these complex valued signals as either their real and quadrature (also known as imaginary) components separately, or by a magnitude and phase representation. You may switch between these two representations at any point. Mathematically, switching between the two representations for a given complex value a + jb can be expressed as M = sqrt(a^2 + b^2) and phi = atan2(b, a), so that a + jb = M e^(j phi).

Fourier Series

Fourier series are made up of sinusoids, all of which have frequencies that are integer multiples of some fundamental frequency. The trick, as with Taylor series, is to figure out what the coefficients are. In summation notation, we say (for odd functions of period 2, but that's just being picky in this context):

f(t) = sum over k = 1, 2, 3, ... of a_k sin(k pi t)

. . . and the trick is finding the coefficients a_k. You can find those coefficients by using calculus on complex exponentials, or you can use NuCalc and just build your function out of sines.
A great thing about using Fourier series on periodic functions is that the first few terms often are a pretty good approximation to the whole function, not just the region around a special point. Fourier series are used extensively in engineering, especially for processing images and other signals. Finding the coefficients of a Fourier series is the same as doing a spectral analysis of a function.
In mathematics, a Fourier series decomposes a periodic function (or periodic signal) into a sum of simple oscillating functions, namely sines and cosines (or complex exponentials). The study of Fourier series is a branch of Fourier analysis. Fourier series were introduced by Joseph Fourier (1768–1830) for the purpose of solving the heat equation in a metal plate.
The heat equation is a partial differential equation. Prior to Fourier's work, there was no known solution to the heat equation in a general situation, although particular solutions were known if the heat source behaved in a simple way, in particular, if the heat source was a sine or cosine wave. These simple solutions are now sometimes called eigensolutions. Fourier's idea was to model a complicated heat source as a superposition (or linear combination) of simple sine and cosine waves, and to write the solution as a superposition of the corresponding eigensolutions. This superposition or linear combination is called the Fourier series.
Although the original motivation was to solve the heat equation, it later became obvious that the same techniques could be applied to a wide array of mathematical and physical problems. The basic results are very easy to understand using the modern theory.
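As a concrete illustration of the partial sums discussed above, the Python sketch below approximates an odd square wave of period 2 from its sine series; the example and coefficients (a_k = 4/(pi k) for odd k) are standard but are an addition, not from the original text:

```python
import math

def square_partial_sum(t, n_terms=50):
    """Partial Fourier series of the odd square wave of period 2:
    f(t) = (4/pi) * sum over odd k of sin(k*pi*t)/k."""
    return (4 / math.pi) * sum(
        math.sin(k * math.pi * t) / k for k in range(1, 2 * n_terms, 2)
    )

# Even a handful of terms sits close to the true plateau value f(0.5) = 1
print(square_partial_sum(0.5, 5), square_partial_sum(0.5, 50))
```

This shows the point made earlier: away from the discontinuities, the first few terms are already a decent approximation to the whole function.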

Definition of Matrices

MATLAB is based on matrix and vector algebra; even scalars are treated as 1x1 matrices. Therefore, vector and matrix operations are as simple as common calculator operations.
Vectors can be defined in two ways. The first method is used for arbitrary elements:
v = [1 3 5 7];
creates a 1x4 vector with elements 1, 3, 5 and 7. Note that commas could have been used in place of spaces to separate the elements. Additional elements can be added to the vector:
v(5) = 8;
yields the vector v = [1 3 5 7 8]. Previously defined vectors can be used to define a new vector. For example, with v defined above
a = [9 10];
b = [v a];
creates the vector b = [1 3 5 7 8 9 10].
The second method is used for creating vectors with equally spaced elements:
t = 0:.1:10;
creates a 1x101 vector with the elements 0, .1, .2, .3,...,10. Note that the middle number defines the increment. If only two numbers are given, then the increment is set to a default of 1:
k = 0:10;
creates a 1x11 vector with the elements 0, 1, 2, ..., 10.
Matrices are defined by entering the elements row by row:
M = [1 2 4; 3 6 8];
creates the 2x3 matrix M with first row 1 2 4 and second row 3 6 8.
There are a number of special matrices that can be defined:
null matrix:
M = [ ];
nxm matrix of zeros:
M = zeros(n,m);
nxm matrix of ones:
M = ones(n,m);
nxn identity matrix:
M = eye(n);
A particular element of a matrix can be assigned:
M(1,2) = 5;
places the number 5 in the first row, second column.
In this text, matrices are used only in Chapter 12; however, vectors are used throughout the text. Operations and functions that were defined for scalars in the previous section can also be used on vectors and matrices. For example,
a = [1 2 3];
b = [4 5 6];
c = a + b

yields:
c = 5 7 9
Functions are applied element by element. For example,
t = 0:10;
x = cos(2*t);
creates a vector x with elements equal to cos(2t) for t = 0, 1, 2, ..., 10.
Operations that need to be performed element-by-element can be accomplished by preceding the operation by a ".". For example, to obtain a vector x that contains the elements of x(t) = tcos(t) at specific points in time, you cannot simply multiply the vector t with the vector cos(t). Instead you multiply their elements together:
t = 0:10;
x = t.*cos(t);

MATLAB Basics

MATLAB is started by clicking the mouse on the appropriate icon and is ended by typing exit or by using the menu option. After each MATLAB command, the "return" or "enter" key must be depressed.
A. Definition of Variables
Variables are assigned numerical values by typing the expression directly, for example, typing
a = 1+2
yields: a = 3
The answer will not be displayed when a semicolon is put at the end of an expression, for example type a = 1+2;.

MATLAB utilizes the following arithmetic operators:
+    addition
-    subtraction
*    multiplication
/    division
^    power operator
'    transpose
A variable can be assigned using a formula that utilizes these operators and either numbers or previously defined variables. For example, since a was defined previously, the following expression is valid
b = 2*a;
To determine the value of a previously defined quantity, type the quantity by itself:
b
yields: b = 6
If your expression does not fit on one line, use an ellipsis (three or more periods at the end of the line) and continue on the next line.
c = 1+2+3+...
5+6+7;

There are several predefined variables which can be used at any time, in the same manner as user-defined variables:
i    sqrt(-1)
j    sqrt(-1)
pi   3.1416...
For example,
y= 2*(1+4*j)
yields: y = 2.0000 + 8.0000i
There are also a number of predefined functions that can be used when defining a variable. Some common functions that are used in this text are:
abs      magnitude of a number (absolute value for real numbers)
angle    angle of a complex number, in radians
cos      cosine function, assumes argument is in radians
sin      sine function, assumes argument is in radians
exp      exponential function

For example, with y defined as above,
c = abs(y)
yields: c = 8.2462
c = angle(y)
yields: c = 1.3258
With a=3 as defined previously,
c = cos(a)
yields: c = -0.9900
c = exp(a)
yields: c = 20.0855
Note that exp can be used on complex numbers. For example, with y = 2+8i as defined above,
c = exp(y)
yields: c = -1.0751 + 7.3104i
which can be verified by using Euler's formula:
c = exp(2)*cos(8) + j*exp(2)*sin(8)
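As a quick check, both sides of Euler's formula can be evaluated directly in MATLAB using only the functions introduced above (a minimal sketch):

```matlab
% Verify Euler's formula numerically for y = 2 + 8j
y = 2 + 8j;
c1 = exp(y);                               % direct evaluation of e^y
c2 = exp(2)*cos(8) + j*exp(2)*sin(8);      % Euler's formula: e^2(cos(8) + j*sin(8))
% both yield -1.0751 + 7.3104i
```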

Difference Equation

Introduction
One of the most important concepts of DSP is to be able to properly represent the input/output relationship of a given LTI system. A linear constant-coefficient difference equation (LCCDE) serves as a way to express just this relationship in a discrete-time system. Writing the sequence of inputs and outputs, which represent the characteristics of the LTI system, as a difference equation helps in understanding and manipulating a system.

Definition 1: difference equation
An equation that shows the relationship between consecutive values of a sequence and the differences among them. They are often rearranged as a recursive formula so that a system's output can be computed from the input signal and past outputs.

Example

y[n] + 7y[n-1] + 2y[n-2] = x[n] - 4x[n-1]
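Rearranged as a recursion, y[n] = x[n] - 4x[n-1] - 7y[n-1] - 2y[n-2], which is exactly the form MATLAB's filter function implements. A minimal sketch, using a unit impulse as an arbitrary test input:

```matlab
% Coefficient vectors follow the LCCDE: a for outputs, b for inputs
a = [1 7 2];          % y[n] + 7y[n-1] + 2y[n-2]
b = [1 -4];           % x[n] - 4x[n-1]
x = [1 0 0 0 0];      % unit impulse as a test input
y = filter(b, a, x);  % first few samples of the impulse response
```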

General Formulas from the Difference Equation
As stated briefly in the definition above, a difference equation is a very useful tool for describing and calculating the output of the system described by the formula for a given sample n. The key property of the difference equation is its ability to help easily find the transfer function, H(z), of a system. In the following two subsections, we will look at the general form of the difference equation and the general conversion to a z-transform directly from the difference equation.
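As a worked instance of this conversion, taking the z-transform of each term of the example equation above (using the time-shift property) and solving for Y(z)/X(z) gives:

```latex
\begin{aligned}
Y(z)\left(1 + 7z^{-1} + 2z^{-2}\right) &= X(z)\left(1 - 4z^{-1}\right) \\
H(z) = \frac{Y(z)}{X(z)} &= \frac{1 - 4z^{-1}}{1 + 7z^{-1} + 2z^{-2}}
\end{aligned}
```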

Understanding Pole/Zero Plots on the Z-Plane

Once the Z-transform of a system has been determined, one can use the information contained in the function's polynomials to graphically represent the function and easily observe many defining characteristics. The Z-transform will have the below structure, based on rational functions:
X(z) = P(z) / Q(z)   (1)

The two polynomials, P(z) and Q(z), allow us to find the poles and zeros of the Z-transform.

Definition 1: zeros
1. The value(s) of z where P(z) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function zero.


Definition 2: poles
1. The value(s) of z where Q(z) = 0.
2. The complex frequencies that make the overall gain of the filter transfer function infinite.
Example 1
Below is a simple transfer function with the poles and zeros shown below it.
H(z) = (z + 1) / ((z - 1/2)(z + 3/4))
The zeros are: -1
The poles are: 1/2 and -3/4
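These poles and zeros can be checked numerically in MATLAB by expanding the factored polynomials and calling roots (a small sketch; conv multiplies the two denominator factors' coefficient vectors):

```matlab
% H(z) = (z + 1) / ((z - 1/2)(z + 3/4))
num = [1 1];                      % z + 1
den = conv([1 -1/2], [1 3/4]);    % (z - 1/2)(z + 3/4) = z^2 + (1/4)z - 3/8
zeros_H = roots(num)              % -1
poles_H = roots(den)              % 0.5 and -0.75
```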
The Z-Plane
Once the poles and zeros have been found for a given Z-transform, they can be plotted onto the Z-plane. The Z-plane is a complex plane with an imaginary and a real axis referring to the complex-valued variable z. The position on the complex plane is given by re^(jθ), where r is the distance from the origin and θ is the angle from the positive real axis. When mapping poles and zeros onto the plane, poles are denoted by an "x" and zeros by an "o". The below figure shows the Z-plane, and examples of plotting zeros and poles onto the plane can be found in the following section.
Figure 1: Z-Plane (zplane.jpg)
Examples of Pole/Zero Plots
This section lists several examples of finding the poles and zeros of a transfer function and then plotting them onto the Z-plane.
Example 2: Simple Pole/Zero Plot
H(z) = z / ((z - 1/2)(z + 3/4))
The zeros are: 0
The poles are: 1/2 and -3/4
Figure 2: Using the zeros and poles found from the transfer function, the one zero is mapped to zero and the two poles are placed at 1/2 and -3/4. Pole/Zero Plot (zp_eg1.jpg)
Example 3: Complex Pole/Zero Plot
H(z) = ((z - j)(z + j)) / ((z + 1)(z - (1/2 - 1/2 j))(z - (1/2 + 1/2 j)))
The zeros are: j and -j
The poles are: -1, 1/2 + 1/2 j, and 1/2 - 1/2 j
Figure 3: Using the zeros and poles found from the transfer function, the zeros are mapped to ±j, and the poles are placed at -1, 1/2 + 1/2 j, and 1/2 - 1/2 j. Pole/Zero Plot (zp_eg2.jpg)
MATLAB - If access to MATLAB is readily available, then you can use its functions to easily create pole/zero plots. Below is a short program that plots the poles and zeros from the above example onto the Z-Plane.
% Set up vector for zeros
z = [j ; -j];
% Set up vector for poles
p = [-1 ; .5+.5j ; .5-.5j];
figure(1);
zplane(z,p);
title('Pole/Zero Plot for Complex Pole/Zero Plot Example');

Pole/Zero Plot and Region of Convergence
The region of convergence (ROC) for X(z) in the complex Z-plane can be determined from the pole/zero plot. Although several regions of convergence may be possible, where each one corresponds to a different impulse response, there are some choices that are more practical. A ROC can be chosen to make the transfer function causal and/or stable depending on the pole/zero plot.
Filter Properties from ROC
* If the ROC extends outward from the outermost pole, then the system is causal.
* If the ROC includes the unit circle, then the system is stable.
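For a causal filter, the stability condition reduces to checking that every pole lies strictly inside the unit circle. A quick numerical check of the Simple Pole/Zero Plot example (a small MATLAB sketch):

```matlab
% Poles of H(z) = z / ((z - 1/2)(z + 3/4))
p = [1/2 ; -3/4];
stable = all(abs(p) < 1)   % true: both poles lie inside the unit circle
```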

Below is a pole/zero plot with a possible ROC of the Z-transform in the Simple Pole/Zero Plot discussed earlier. The shaded region indicates the ROC chosen for the filter. From this figure, we can see that the filter will be both causal and stable since the above listed conditions are both met.
Example 4
H(z) = z / ((z - 1/2)(z + 3/4))
Figure 4: The shaded area represents the chosen ROC for the transfer function. Region of Convergence for the Pole/Zero Plot (zp_roc.jpg)

Filter Design using the Pole/Zero Plot of a Z-Transform

One of the motivating factors for analyzing the pole/zero plots is due to their relationship to the frequency response of the system. Based on the position of the poles and zeros, one can quickly determine the frequency response. This is a result of the correspondence between the frequency response and the transfer function evaluated on the unit circle in the pole/zero plots. The frequency response, or DTFT, of the system is defined as:
H(w) = H(z)|_(z = e^(jw)) = ( Σ_(k=0)^M b_k e^(-jwk) ) / ( Σ_(k=0)^N a_k e^(-jwk) )   (1)
Next, by factoring the transfer function into poles and zeros and multiplying the numerator and denominator by e^(jw), we arrive at the following equation:

H(w) = (b_0 / a_0) · ( Π_(k=1)^M (e^(jw) - c_k) ) / ( Π_(k=1)^N (e^(jw) - d_k) )   (2)
From Equation 2 we have the frequency response in a form that can be used to interpret physical characteristics about the filter's frequency response. The numerator and denominator contain a product of terms of the form e^(jw) - h, where h is either a zero, denoted by c_k, or a pole, denoted by d_k. Vectors are commonly used to represent the term and its parts on the complex plane. The pole or zero, h, is a vector from the origin to its location anywhere on the complex plane, and e^(jw) is a vector from the origin to its location on the unit circle. The vector connecting these two points, e^(jw) - h, connects the pole or zero location to a place on the unit circle dependent on the value of w. From this, we can begin to understand how the magnitude of the frequency response is a ratio of the distances to the poles and zeros present in the z-plane as w goes from zero to pi. These characteristics allow us to interpret H(w) as follows:

H(w) = (b_0 / a_0) · Π("distances from zeros") / Π("distances from poles")   (3)
In conclusion, using the distances from the unit circle to the poles and zeros, we can plot the frequency response of the system. As w goes from 0 to 2π, the following two properties, taken from the above equations, specify how one should draw |H(w)|.
While moving around the unit circle...
1. If close to a zero, then the magnitude is small. If a zero is on the unit circle, then the frequency response is zero at that point.
2. If close to a pole, then the magnitude is large. If a pole is on the unit circle, then the frequency response goes to infinity at that point.




Drawing Frequency Response from Pole/Zero Plot
Let us now look at several examples of determining the magnitude of the frequency response from the pole/zero plot of a z-transform. If you have forgotten or are unfamiliar with pole/zero plots, please refer back to the Pole/Zero Plots module.
Example 1
In this first example we will take a look at the very simple z-transform shown below:
H(z) = z + 1 = z(1 + z^(-1))
H(w) = 1 + e^(-jw)
(The extra factor of z has unit magnitude on the unit circle, so it does not affect |H(w)|.)
For this example, some of the vectors represented by e^(jw) - h, for random values of w, are explicitly drawn onto the complex plane shown in the figure below. These vectors show how the amplitude of the frequency response changes as w goes from 0 to 2π, and also show the physical meaning of the terms in Equation 2 above. One can see that when w = 0, the vector is the longest and thus the frequency response will have its largest amplitude here. As w approaches π, the length of the vectors decreases, as does the amplitude of H(w). Since there are no poles in the transform, there is only this one vector term rather than a ratio as seen in Equation 2.
Figure 1: The first figure represents the pole/zero plot with a few representative vectors graphed, while the second shows the frequency response with a peak at +2, graphed between plus and minus π. (a) Pole/Zero Plot (filt_eg1_pz.jpg) (b) Frequency Response: H(w) (filt_eg1_fig.jpg)
Example 2
For this example, a more complex transfer function is analyzed in order to represent the system's frequency response.
H(z) = z / (z - 1/2) = 1 / (1 - (1/2) z^(-1))
H(w) = 1 / (1 - (1/2) e^(-jw))
Below we can see the two figures described by the above equations. Figure 2(a) represents the basic pole/zero plot of the z-transform, H(z). Figure 2(b) shows the magnitude of the frequency response. From the formulas and statements in the previous section, we can see that when w = 0 the frequency response will peak, since it is at this value of w that the pole is closest to the unit circle. The ratio from Equation 2 helps us see the mathematics behind this conclusion and the relationship between the distances from the unit circle and the poles and zeros. As w moves from 0 to π, we see how the zero begins to mask the effects of the pole and thus force the frequency response closer to 0. Figure 2: The first figure represents the pole/zero plot, while the second shows the frequency response with a peak at +2, graphed between plus and minus π. (a) Pole/Zero Plot (b) Frequency Response: H(w)
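This magnitude response can also be computed and plotted directly in MATLAB with freqz, taking the coefficient vectors from H(z) written in negative powers of z (a short sketch):

```matlab
% H(z) = 1 / (1 - 0.5 z^-1), i.e. b = 1, a = [1 -0.5]
b = 1;
a = [1 -0.5];
[H, w] = freqz(b, a, 512);   % evaluate H at 512 points on the upper unit circle
plot(w, abs(H));             % peaks at |H(0)| = 2 and falls off toward w = pi
xlabel('w (radians)');
ylabel('|H(w)|');
```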

Useful Mathematical Identities

sin^2(θ) + cos^2(θ) = 1
1 + tan^2(θ) = sec^2(θ)
1 + cot^2(θ) = csc^2(θ)
sin(-θ) = -sin(θ)
cos(-θ) = cos(θ)
tan(-θ) = -tan(θ)
sin(2θ) = 2 sin(θ) cos(θ)
cos(2θ) = cos^2(θ) - sin^2(θ) = 2cos^2(θ) - 1 = 1 - 2sin^2(θ)
e^(jθ) = cos(θ) + j sin(θ)