March 15
Basic
Classification
Deterministic & random
Periodic/non-periodic
Continuous/Discrete(time)
Analog/Digital(Amplitude & time)
Operations
Shifting
Reflection
Scaling
Differentiation
Integration
Addition
Multiplication
Convolution
Singularity Signals
Unit ramp function
Unit step function
Rectangular pulse
Sign function
Unit impulse function
Dirac definition
Impulse doublet function
$\delta^\prime(t)$
A pair of mirror-imaged impulses at $t=0$, each with infinite amplitude.
Signal Decomposition
Pulse Component
We also have orthogonal function decomposition(Chap.3, Chap.6).
System modeling and Classification
A system model can be represented by mathematical equations (including the input-output description and the state variables / state equations), graphic symbols, and block diagrams.
We mostly use the input-output description. If something internal needs to be controlled, the state equations are useful.
Block diagram:
System classification
Linear or Non-linear
Time-variant or Time-invariant
Memory or Memoryless
with memory: dynamic system, differential equation
without memory: instant system, algebraic equation
Continuous or Discrete
Continuous Differential equation
Discrete Difference equation
Lumped- or Distributed-Parameter
Lumped: constant coefficient differential equation
Distributed: partial equation
Causal or Non-Causal
when $t<0, e(t)=0 \Rightarrow t<0, r(t)=0$ (generic definition?)
The future state cannot affect the present state: the state of a causal system is determined only by present and past states.
Reversible or irreversible
Different inputs produce different outputs; otherwise the system is irreversible.
LTI System
Linearity
Linearity leads to superposition and homogeneity.
Time-Invariant
A time shift in the input results in the same time shift in the output.
If every coefficient is time-independent, the system is time-invariant.
Time-Domain(TD) Analysis
Three Steps
- Homogeneous
- Particular
- Calculation on coefficients
Determining Coefficients
If the functions are continuous, we can get the boundary conditions by evaluating the derivatives.
Then the coefficients can be solved by multiplying the inverse of the Vandermonde matrix with the boundary-condition vector.
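The Vandermonde step can be sketched numerically. A minimal example, assuming hypothetical characteristic roots $-1, -3$ and boundary conditions $r(0_+)=1$, $r'(0_+)=0$:

```python
import numpy as np

# Hypothetical homogeneous solution r(t) = C1*e^{-t} + C2*e^{-3t}
# with boundary conditions r(0+) = 1, r'(0+) = 0 (values chosen for illustration).
alphas = np.array([-1.0, -3.0])   # characteristic roots
bc = np.array([1.0, 0.0])         # [r(0+), r'(0+)]

# Row k of the Vandermonde matrix holds alpha_i^k, because r^{(k)}(0+) = sum_i C_i * alpha_i^k
V = np.vander(alphas, increasing=True).T
C = np.linalg.solve(V, bc)        # equivalent to multiplying by the inverse Vandermonde

def r(t):
    return C @ np.exp(alphas * t)
```

Here `np.linalg.solve` plays the role of multiplying by the inverse Vandermonde matrix; for this example the solution is $r(t)=1.5e^{-t}-0.5e^{-3t}$.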
Zero-input and -state Responses
Zero-input response The response caused by the initial state (i.e., energy originally stored in the system), and it is denoted by $r_{zi}(t)$
Zero-state response $r(0_-)\equiv 0$, the response caused only by the external excitation and it is denoted by $r_{zs}(t)$
The sum of the zero-input and zero-state responses is not necessarily a linear function of the excitation, because of the constant zero-input term. If one of them vanishes, the remaining response is linear.
Impulse and Step Responses
Impulse Response the zero-state response $h(t)$ to $\delta (t)$; the impulse can be treated equivalently as an initial condition.
Note: normally $n>m$.
Unit Step Response The zero-state response $g(t)$ to $u(t)$
There might be a forced term in $g(t)$.
Convolution
Zero-state required
the definition of convolution:
Integral interval: since $e(t)=0$ for $t<0$ and $h(t)=0$ for $t<0$, we have $r(t)=\int_0^t{e(\tau)h(t-\tau)\mathrm d\tau}$
The condition for applying convolution:
- For linear system ONLY
- For time-variant systems, $h(t, \tau)$ is the response at time $t$ to an impulse applied at time $\tau$, so $r(t)=\int_0^th(t,\tau)e(\tau)\mathrm d \tau$; the time-invariant system is a special case with $h(t,\tau)=h(t-\tau)$.
The Properties of Convolution The commutative property, the distributive property, the associative property
Differential:
Integral
Convolution with $\delta (t)$ or $u(t)$
(1) $f(t) * \delta(t) = f(t)$
(2) $f(t) * \delta(t - t_0) = f(t-t_0)$
(3) $f(t) * u(t) = \int_{-\infty}^{t}f(\tau)\mathrm d\tau$
(4) $f(t) * \delta^\prime(t) = f^\prime(t)$
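Properties (1) and (3) can be sanity-checked numerically, approximating $\delta(t)$ by a single sample of height $1/\mathrm dt$ (a crude but standard discretization; the signal $e^{-t}$ is an arbitrary example):

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 2, dt)
f = np.exp(-t)                        # example signal f(t) = e^{-t} u(t)

delta = np.zeros_like(t)
delta[0] = 1/dt                       # unit area: delta[0]*dt = 1
u = np.ones_like(t)                   # u(t) sampled on t >= 0

def conv(a, b):
    return np.convolve(a, b)[:len(t)] * dt   # Riemann-sum approximation of the integral

f_delta = conv(f, delta)              # property (1): reproduces f
f_step = conv(f, u)                   # property (3): running integral, here 1 - e^{-t}
```

The discretization error of the running integral shrinks with `dt`.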
Fourier Transform
Fourier Series
requirements:
- has finite number of discontinuities
- has finite number of maxima and minima
- $\int_{t_0}^{t_0+T_1} |f(t)|\mathrm dt < \infty$
note when $b_n=0$, $\varphi_n = 0$ if $a_n > 0$, and $\varphi_n = \pi$ otherwise
In the last part, negative frequency is introduced for the convenience of signal analysis; accordingly the amplitude is halved.
FS for special functions
- Even function $c_n=a_n, \varphi_n = 0, F_n=F_{-n}=\frac 12 a_n$
- Odd function $a_0=0, a_n=0, \varphi_n=-\frac{\pi}{2}, F_n=-F_{-n}=-\frac{1}{2}jb_n$
- Half-wave Odd (odd harmonic) function, $f(t)=-f\left(t\pm \frac{T_1}2{}\right)$, contains only odd harmonics(both sine and cosine)
- Finite term series
FS for typical periodic signals
Periodic square wave
- Spectrum is discrete with frequency spacing $\omega_1 = \frac{2\pi}{T_1}$. When $T_1 \rightarrow \infty$, the spectrum will be continuous.
- Amplitude: $\text{Sa}\left(\frac{n\pi\tau}{T_1}\right)$ or $\text{Sa} \left(\frac{n\omega_1\tau}{2}\right)$, crossing zero when $n\omega_1 = \frac{2m\pi}{\tau}$
- The non-zero FS coefficients of a periodic signal are infinite in number, with most energy concentrated in the low-frequency components (within $\left(-\frac{2\pi}{\tau},\frac{2\pi}{\tau}\right)$). Thus we define the bandwidth $B_{\omega} = \frac{2\pi}{\tau}$
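The $\text{Sa}$-envelope claim can be checked by computing one FS coefficient numerically (the values of $E$, $\tau$, $T_1$ below are arbitrary; with $\tau = T_1/5$, the first zero falls at $n=5$):

```python
import numpy as np

E, tau, T1 = 1.0, 0.2, 1.0
w1 = 2*np.pi/T1

def Fn(n, num=200000):
    """Numerical F_n = (1/T1) * integral of f(t) e^{-j n w1 t} over one period (midpoint rule)."""
    dt = T1/num
    t = -T1/2 + (np.arange(num) + 0.5)*dt
    f = np.where(np.abs(t) <= tau/2, E, 0.0)
    return np.sum(f*np.exp(-1j*n*w1*t))*dt/T1

def Sa(x):
    return np.sinc(x/np.pi)           # Sa(x) = sin(x)/x; np.sinc computes sin(pi x)/(pi x)

F3 = Fn(3)                            # expect (E*tau/T1) * Sa(3*w1*tau/2)
F5 = Fn(5)                            # first zero: here 5*w1 = 2*pi/tau
```

`F3` should match the envelope value and `F5` should vanish, consistent with the zeros of $\text{Sa}(n\omega_1\tau/2)$.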
Periodic symmetric square wave
Since the spectrum crosses zero when $n\omega_1 = \frac{2m\pi}{\tau}$, the even harmonics vanish. The sine components vanish as well.
Periodic Serrated Pulse
Periodic Triangular Pulse
Cosine restricted to non-negative values (half-wave rectified cosine)
Absolute value of cosine (full-wave rectified cosine)
where $\omega_0 = 2\omega_1$
Fourier Transform
The case where $T_1\rightarrow \infty$: the signal becomes aperiodic.
Also, $\omega_1\rightarrow 0$ gives a continuous frequency axis. For the square wave, the magnitude $\frac{E\tau}{T_1}\rightarrow 0$.
We use spectral density instead of the spectrum, so that the magnitude, though dropping to zero, keeps its meaning.
The Fourier transform is a continuous waveform in which each single frequency carries no energy but an energy density; it is used to analyze aperiodic functions.
Sufficient condition, but not necessary.
FT for typical aperiodic signals
Rectangular pulses
Raised Cosine Signal
More compact than the spectrum of the square signal ($|F(\omega)|\propto \frac 1{\omega^3}$). An explanation is that the raised cosine has no discontinuities.
Generally:
- $f(t)$ has discontinuities, $|F(\omega)|\propto \frac 1{\omega}$
- $\frac{d}{dt}f(t)$ has discontinuities, $|F(\omega)|\propto \frac 1{\omega^2}$
- $\frac{d^2}{dt^2}f(t)$ has discontinuities, $|F(\omega)|\propto \frac 1{\omega^3}$
The width $\tau$ of the raised cosine signal is defined at $\frac E2$ rather than at the bottom, making it easy to compare with the rectangular pulse of the same width: the first zeros of the frequency spectrum are identical.
The raised cosine is energy-concentrative and has been widely used in digital communications.
Single-sided exponential signal
Two-sided, anti-symmetric exponential signal
Sign function
Gaussian signal
Sinc Function
FT on impulse and step functions
The spectrum of impulse function covers the entire frequency range. The interferences caused by a variety of electric sparks always cover the full frequency range.
Due to the DC component in u(t), an impulse exists.
Properties of FT
Symmetry $\mathcal F[F(t)]= 2\pi f(-\omega)$; if $f(t)$ is an even function, $\mathcal F[F(t)]= 2\pi f(\omega)$
Linearity $\mathcal{F}[\Sigma_{i=1}^{n}a_if_i(t)] = \Sigma_{i=1}^{n}a_iF_i(\omega)$
Odd-Even, Imaginary-Real $f(t) = f_e(t)+f_o(t)$, then
$R(\omega)$ is an even function of $\omega$, $X(\omega)$ is an odd function of $\omega$.
$|F(\omega)| = \sqrt{R^2(\omega)+X^2(\omega)}$ is an even function.
$\varphi(\omega) = \tan^{-1}\frac{X(\omega)}{R(\omega)}$
if $f(t)$ is real and even, then $f(t)=f_e(t), F(\omega)=R(\omega)$, the phase shift is $0$ or $\pi$.
if $f(t)$ is real and odd, $f(t) = f_o(t)$, then $F(\omega)=jX(\omega)$, $F(\omega)$ has only imaginary part and is odd, the phase shift is $\pm \frac{\pi}{2}$
Scaling $\mathcal{F}[f(at)]=\frac 1{|a|}F\left(\frac{\omega}a\right)$ Expansion in TD results in Compression in FD.
Time Shifting $\mathcal{F}[f(t\pm t_0)] = F(\omega)e^{\pm j\omega t_0}$
Frequency Shifting $\mathcal F[f(t)e^{\pm j\omega_0t}] = F(\omega\mp\omega_0)$
Differentiation property$\mathcal F\left[\frac{\mathrm d}{\mathrm dt}f(t)\right] = j\omega F(\omega)$
$\mathcal F\left[\frac{\mathrm d^n}{\mathrm dt^n}f(t)\right] = (j\omega)^n F(\omega)$
$\frac{\mathrm d^n}{\mathrm d\omega^n}F(\omega) = \mathcal F\left[(-jt)^nf(t)\right]$
Integration Property $\mathcal{F}\left[\int_{-\infty}^t f(\tau)\mathrm{d} \tau\right] = \frac{F(\omega)}{j\omega} + \pi F(0)\delta(\omega)$
Convolution theorem
FT for Periodic Signals
FT for periodic of $T_1$ & $\omega_1=2\pi/T_1$
Where $F_0(\omega)$ is the FT considering waveform of $f(t)$ only in $|t|\le T_1/2$.
example:
In the same way:
FT for periodically sampled signals
Then,
For the frequency-domain sampling:
The Sampling Theorem
A band-limited signal whose spectrum is strictly within $[0, f_m]$ could be uniquely determined by the samples on itself, if and only if the sampling interval $T_s \le 1/(2f_m)$.
$T_s = \frac{1}{2f_m}$ is called the Nyquist interval.
$2f_m$ is called the Nyquist frequency.
For a single-frequency signal, sampling exactly at the Nyquist rate can fail. For example, a sine signal may be sampled exactly at its zero crossings, in which case the signal cannot be reconstructed. A single-frequency signal has no well-defined bandwidth.
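The zero-crossing failure mode is easy to reproduce (the tone frequency below is arbitrary):

```python
import numpy as np

# Sampling a 5 Hz sine exactly at the Nyquist interval Ts = 1/(2*fm) can hit
# every zero crossing, losing the signal entirely.
fm = 5.0
Ts = 1/(2*fm)
k = np.arange(20)

nyquist_samples = np.sin(2*np.pi*fm*k*Ts)        # sin(pi*k) = 0 for every k
faster_samples = np.sin(2*np.pi*fm*k*(0.9*Ts))   # slightly faster sampling keeps the tone
```

Every Nyquist-rate sample is (numerically) zero, while sampling slightly faster preserves the waveform.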
An FD version:
L Transform
Unilateral L-transform
$F(s) = \mathcal{L}[f(t)]$ is called image function, $f(t) = \mathcal{L}^{-1}[F(s)]$ is called primitive function.
assuming that $f(t)$ is causal and always 0 if $t<0$.
The initial state is automatically included in differential equation.
We define the unilateral L-Transform as:
Conditions for L-Transform:
- Limited discontinuities
- Exponential order
The strong attenuation factor can make the function convergent.
Region of Convergence (ROC)
Axis of convergence
Coordinate of convergence $\sigma_0$
Common LT Pairs
Properties of LT
Linearity
Differentiation
Integration
Time Shifting
Use $u(t-t_0)$ to prevent the negative-$t$ part of $f(t)$ from emerging.
Frequency Shifting
Scaling
s-Domain Differentiation
s-Domain Integration
Initial value
Final value
Generalized limit: $\lim_{t\rightarrow\infty} \sin(\omega t)=0$
Convolution
Applications
Differential Equations
(assume that $m<n$)
The roots of the numerator are called zeros, while the roots of the denominator are called poles.
The unknown function $F(s)$ can be represented as the ratio of two polynomials if all initial states are 0.
- real poles
- complex conjugate poles
- Multiple poles
Circuit model:
Use initial value and final value to verify it.
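For simple real poles, the expansion can be sketched with the residue formula $K_i = N(p_i)/D'(p_i)$; the rational function below is a made-up example:

```python
import numpy as np

# Hypothetical F(s) = (s + 3) / (s^2 + 3s + 2) = 2/(s+1) - 1/(s+2)
num = np.array([1.0, 3.0])             # N(s) = s + 3
den = np.array([1.0, 3.0, 2.0])        # D(s) = (s+1)(s+2)

poles = np.roots(den)
# residue at a simple pole p: K = N(p) / D'(p)
K = np.polyval(num, poles) / np.polyval(np.polyder(den), poles)

# f(t) = sum_i K_i e^{p_i t}; initial value f(0+) = sum(K_i) = lim_{s->inf} s F(s)
f0_plus = K.sum()
```

The initial-value check ($f(0_+)=\sum K_i=1$ here) is exactly the kind of verification suggested above.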
System Function
Driving point function & transfer function
L-transform can be used in the following analysis:
- TD characteristics (response decomposition)
- FD characteristics (steady-state with sine signal input,applications such as filtering)
- Stability (active network, feedback, oscillator, control system)
TD characteristics by 0-point distribution
Three cases:
- Real poles
- Complex conjugate poles
- Real pole of high-order
- Zeros of $H(s)$
Zeros only affect the phase and amplitude, while the shape and type of the waveform are determined by the poles.
- pole distribution $\Leftrightarrow$ corresponding natural/forced responses
The natural response of $r(t)$ is only related to $p_{hk}$, while the forced response is only related to $p_{ek}$.
$K_{hk}$, $K_{ek}$ are related to both $H(s)$ and $E(s)$.
However natural and forced responses could not be completely separated, if there exists $k, k^\prime$ satisfying $p_{hk}=p_{ek^\prime}$.
$p_{hi}$ are called natural frequency of the system.
However, some common factors may be eliminated:
All the poles of $H(s)$ are the natural frequencies of the system, but $h(t)$ may not include all the natural frequencies(but the root of $\Delta$ contains all natural frequencies).
In most cases:
Thus the natural response is transient, while the forced response is steady-state.
However, some natural responses can be steady-state (conjugate poles with $\operatorname{Re}(p_{hi})=0$), while some forced responses can be transient.
For a band-pass filter, BW is the range where the gain is within 3 dB of the peak.
For a low-pass filter, BW = $f_{\text{cut-off}}$.
According to the sampling theorem, the signal bandwidth is often determined by the first zero of the spectrum.
All Pass Systems
The Amplitude is const., while the phase can change.
Minimum-phase system/function
Definition: A stable system whose poles are on the left-half s-plane is called a minimum-phase system/function if all of its zeros are also on the left-half s-plane or on the $j\omega$-axis. Otherwise it is a non-minimum-phase system/function.
Property: A non-minimum-phase function can be represented as the product of a minimum-phase function and an all-pass function.
Stability of Linear System
A system is considered to be stable if bounded input always leads to bounded output.
Bounded-input, Bounded-output(BIBO)
The necessary & sufficient conditions for BIBO:
Poles are:
- on the left half-plane: $\lim_{t\rightarrow \infty}[h(t)] = 0$, stable system
- on the right half-plane, or at $j\omega$-axis with order of more than one: $\lim_{t\rightarrow \infty}[h(t)] = \infty$, unstable system
- on the $j\omega$-axis with order one: $h(t)$ is non-zero or oscillates with constant amplitude, critically stable system
Two-sided (Bilateral) LT
- $t$ starts from $-\infty$, i.e., a non-causal signal as the input, or the initial condition is regarded as part of the input.
- Easily associated with the Fourier and Z-transforms
We determine the ROC by:
NOTE:
- If no overlap between the two constraints, then $F_B(s)$ does not exist.
- $F_B(s)$ and $f(t)$ are not uniquely corresponding to each other.($\int_{-\infty}^\infty u(t)e^{-st}\mathrm{d} t = \frac{1}{s}$, $\int_{-\infty}^\infty -u(-t)e^{-st}\mathrm{d} t=\frac{1}{s}$)
- Two-sided L-Transform shares almost all the properties with its single-sided counterpart except for the initial-value theorem.
- Two-sided L-Transform has very limited applications as most continuous-time systems are causal.
**Relationship between LT and FT **
- $\sigma_0 > 0$, $F(\omega)$ does not exist
- $\sigma_0 = 0$, impulse appears in $F(\omega)$
- $\sigma_0 < 0$, $F(\omega)$ exists, $F(\omega) = F(s)|_{s=j\omega}$
(The LT above is unilateral LT.)
Extra Attention
$1 + e^{-s}$ also has zeros (infinitely many!). Be careful when it appears in the denominator.
FT in Telecom. Systems
Systems discussed in this chapter are strictly stable:
even for a critically stable system, the FT is not the same as the LT (it contains $\delta$ terms), so there would be ambiguity between $H(j\omega)$ and $H(s)|_{s=j\omega}$.
Every frequency component is reshaped in phase and amplitude by the system function when passing through the system, depending on its frequency. Thus the system can distort the original signal.
Distortion
2 types of distortion:
- Non-linear distortion (new frequency components)
- Linear distortion (without new frequency components), just the amplitude and/or phase distortion.
Distortionless transmission
So, $H(j\omega) = Ke^{-j\omega t_0}$, $h(t)=K\delta(t - t_0)$.
The Amplitude is frequency independent, $BW\rightarrow \infty$.
Phase response is linear at negative slope.
The impulse response of a distortionless linear system is also an impulse.
The physical scenario: group delay.
Condition for phase distortionless property: the group delay remains a constant.
Filter
Ideal Low pass (LP) Filter
The Impulse response of Ideal LP
- Severe distortion: $BW_{\delta(t)}\rightarrow \infty$ but $BW_{\text{Lowpass}}=\omega_c$, so the higher frequencies are eliminated.
- Non-causal. When $t\lt 0$, $h(t)\ne 0$.
Unit-step response of Ideal LP
The response is similar to the input if the rise time $\frac{\pi}{\omega_c}\ll \tau$.
Gibbs phenomenon: 9% overshoot at the discontinuity. Using other window functions can mitigate this, e.g., the raised-cosine window.
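The roughly 9% overshoot can be observed by summing a truncated Fourier series of a $\pm 1$ square wave (the number of harmonics is arbitrary; the overshoot fraction barely changes with it):

```python
import numpy as np

# Partial Fourier series of a +-1 square wave: f(t) = (4/pi) * sum_{odd k} sin(k t)/k.
# The overshoot next to the jump stays near 9% of the jump, however many harmonics are kept.
def partial_sum(N, t):
    k = np.arange(1, N + 1, 2)                      # odd harmonics 1, 3, ..., <= N
    return (4/np.pi) * (np.sin(np.outer(t, k)) / k).sum(axis=1)

t = np.linspace(1e-4, 0.5, 5000)                    # fine grid just after the jump at t = 0
peak = partial_sum(199, t).max()
overshoot = (peak - 1.0) / 2.0                      # fraction of the jump height (jump = 2)
```

The peak approaches $\frac{2}{\pi}\operatorname{Si}(\pi)\approx 1.179$, i.e., about 9% of the jump of height 2.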
Modulation and demodulation
Means of modulation:
Spectrum shifting
$f(t) = g(t)\cos(\omega_0t)$, $F(\omega) =\frac{1}{2\pi} G(\omega) * \pi[\delta(\omega - \omega_0) + \delta (\omega + \omega_0)] = \frac{1}{2}[G(\omega + \omega_0) + G(\omega - \omega_0)]$.
Demodulation:
coherent demodulation
$g_0(t)=[g(t)\cos(\omega_0 t)]\cos(\omega_0t) = \frac{1}{2}g(t) + \frac{1}{2}g(t)\cos2\omega_0t$
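The identity above can be demonstrated end to end; the message and carrier frequencies are hypothetical, and the ideal LPF is implemented by zeroing FFT bins:

```python
import numpy as np

# Coherent demodulation sketch: a 3 Hz message on a 100 Hz carrier,
# demodulated by re-multiplying with the carrier and ideal low-pass filtering.
fs = 1000.0
t = np.arange(0, 1, 1/fs)
g = np.cos(2*np.pi*3*t)                     # message g(t)
carrier = np.cos(2*np.pi*100*t)

rx = (g*carrier) * carrier                  # = g/2 + (g/2) cos(2*w0*t)

F = np.fft.fft(rx)
freqs = np.fft.fftfreq(len(t), 1/fs)
F[np.abs(freqs) > 50] = 0                   # ideal LPF removes the 2*w0 component
recovered = 2*np.real(np.fft.ifft(F))       # factor 2 restores the 1/2 amplitude
```

After filtering out the $2\omega_0$ term, the scaled output reproduces $g(t)$.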
Envelope Detection
Applications of BPF
Window Function
(Page 304)
Recover Continuous Time signal from its Samples
Analysis on signal after band-pass filter
Page 301
Sampling with impulse func.
FD analysis:
Sampled signal(By impulse function):
Ideal LP Filter:
Recovered signal:
TD analysis:
Sampled signal:
Ideal LP Filter:
Recovered signal:
Sampling with a zero-order hold
LP Filter for compensation
Linear phase response is OK! No need for delay compensation.
1st-order hold Sampling
Multiplexing: FDM and TDM
Transmit multiple signals over a single channel concurrently.
Frequency Division Multiplexing (FDM) - OFDM (Orthogonal FDM)
Time Division Multiplexing (TDM)-sharing slot, statistical multiplexing
Code Division Multiplexing (CDM)- Code division, logical multiplexing
Wavelength Division Multiplexing (WDM)- Optical carrier
Vector Analysis of Signals
Vector Space
Objective of signal decomposition
Basics
Orthogonal Vector
Orthogonal Function
Represent $f_1(t)$ in terms of $f_2(t)$ (both real), for $t_1<t<t_2$
Residual error $\overline{\varepsilon^2} = \overline{f_e^2(t)} = \frac{1}{t_2 - t_1}\int_{t_1}^{t_2}[f_1(t) - c_{12}f_2(t)]^2\mathrm dt$
Let $\frac{\mathrm d \overline{\varepsilon^2}}{\mathrm d c_{12}} = 0$, then $\overline{\varepsilon^2}$ is minimized.
The coefficient can be determined as
If $c_{12} = 0$, then $f_1(t), f_2(t)$ are called Orthogonal Functions.
And
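The projection coefficient $c_{12}=\frac{\int f_1 f_2\,\mathrm dt}{\int f_2^2\,\mathrm dt}$ can be checked numerically; as a made-up example, projecting a square wave onto $\sin t$ over one period should give $c_{12}=4/\pi$:

```python
import numpy as np

M = 200000
dt = 2*np.pi/M
t = (np.arange(M) + 0.5)*dt             # midpoints over one period (0, 2*pi)
f1 = np.sign(np.sin(t))                 # square wave
f2 = np.sin(t)

# optimal coefficient c12 = <f1, f2> / <f2, f2>, minimizing the mean-square error
c12 = np.sum(f1*f2)*dt / (np.sum(f2*f2)*dt)

# sin and cos are orthogonal on the same interval, so their coefficient is ~0
c_sin_cos = np.sum(np.sin(t)*np.cos(t))*dt / (np.sum(np.cos(t)**2)*dt)
```

The vanishing `c_sin_cos` illustrates the orthogonality case $c_{12}=0$.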
Orthogonal Function Set
Any real function $f(t)$ can be represented as the sum of $n$-D orthogonal real functions.
According to the minimal mean square error, the coefficient can be determined as
If $g_1(t), g_2(t), …, g_n(t)$ are orthogonal to each other, i.e.
Then $f(t)$ can be represented as the sum of $n$-D orthogonal real functions.
Then $g_1(t), g_2(t), …, g_n(t)$ are called Orthogonal Function Set.
If $\int_{t_1}^{t_2}g_i^2(t)\mathrm dt = 1$, the orthogonal function set is called Orthonormal Function Set.
Orthogonality of Complex Function
Orthogonal Function Set satisfies
The definition of Orthogonal is
NOTE:
If two signals are orthogonal within a given interval, they are not necessarily orthogonal within other intervals.
If two signals are not orthogonal, they must be correlated.
Complete Orthogonal Function and Parseval’s Theorem
Complete Orthogonal Function Set
If $\lim_{n \to \infty}\overline{\varepsilon^2} = 0$ as the number of basis functions $n$ grows, then $\{g_r(t)\}$ is said to be a Complete Orthogonal Function Set.
Alternative definition of complete orthogonal set
Other than the elements in ${g_r(t)}$, there is no finite-energy signal $x(t)$, which satisfies
Trigonometric Set
Complex exponential set
Parseval’s Theorem
Physical interpretation:
The energy (power) of a signal always equals the sum of the energies (powers) of all its components in a complete orthogonal function set.
Mathematical interpretation:
The norm of vector signals keeps invariant under orthogonal transform.
Correlation
Physical interpretation:
Gauge of the similarity of two signals
Energy and Power Signals
Instantaneous Power $p(t) = i^2(t) R$
The energy consumed by $R$ in a period
Average Power:
The energy signals and power signals:
Correlation Coefficient
If $f_1(t)$ is a linear function of $f_2(t)$, then $\rho_{12} = \pm1$, $\overline{\varepsilon^2} = 0$.
If $f_1(t)$ is orthogonal to $f_2(t)$, then $\rho_{12} = 0$, $\overline{\varepsilon^2}$ is maximized.
- Describe the correlation of two signals from the perspective of energy difference.
- Quantitatively measure the correlation of two signals in terms of inner product.
Correlation Function
The similarity between one signal with a delayed version of another signal.
(1) $f_1(t)$ and $f_2(t)$ are both real and energy signals
(2) $f_1(t)$ and $f_2(t)$ are both complex and energy signals
If $f_1(t) = f_2(t) = f(t)$
Autocorrelation:
(2) $f_1(t)$ and $f_2(t)$ are both complex and energy signals
Autocorrelation:
(3) $f_1(t)$ and $f_2(t)$ are both real and power signals
Autocorrelation
(4) $f_1(t)$ and $f_2(t)$ are both complex and power signals
Autocorrelation
Correlation Theorem
If $x(t) = y(t)$, the FT of the autocorrelation function is $\mathcal F[R_{xx}(\tau)] = |X(\omega)|^2$
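A discrete (DFT) sanity check of this relation, using the circular autocorrelation of a random test sequence:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(64)
N = len(x)

# circular autocorrelation R_xx(m) = sum_n x(n) x(n+m), indices mod N
Rxx = np.array([np.dot(x, np.roll(x, -m)) for m in range(N)])

lhs = np.fft.fft(Rxx)                  # transform of the autocorrelation
rhs = np.abs(np.fft.fft(x))**2         # squared magnitude spectrum
```

`lhs` and `rhs` agree to machine precision, the DFT counterpart of $\mathcal F[R_{xx}] = |X(\omega)|^2$.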
If $y(t)$ is a real and even function: $Y^*(\omega) = Y(\omega)$
Then the correlation theorem is equivalent to the convolution theorem
Generally,
Energy & Power Spectral Density
Energy Spectral Density
Power Spectral Density
It is called Power Spectral Density (PSD).
Wiener-Khinchin Theorem
ESD/PSD of the System Response
The last line of this table is wrong. The correct one is:
Matched Filters
$t_m$ is the signal width in TD.
Discrete time signals
Discrete time-axis, but continuous amplitude-axis
Sequence operation
Addition $z(n) = x(n) + y(n)$
Multiplication $z(n) = x(n) \cdot y(n)$
Scalar multiplication $z(n) = a\,x(n)$
Shift $z(n) = x(n - m)$ right shift($m>0$), $z(n) = x(n +m)$ left shift
Reflection $z(n) = x(-n)$
Difference $\Delta x(n) = x(n + 1) - x(n)$ forward difference,
$\nabla x(n) = x(n) - x(n - 1)$ Backward difference
$\nabla^mx(n) = \nabla(\nabla^{m-1}x(n))$
Summation $z(n) = \sum_{k = -\infty}^{n}x(k)$
Scaling $z(n) = x(2n)$ compress,
$z(n) = x(n/2)$ expand
Typical sequences
Relations of several signal waveforms
Signal Decomposition
Difference equations
Numerical solution of difference equations
General form of difference equation:
Methods:
- Recursive method
- Intuitive, difficult to formulate the closed-form solutions
- Time-domain classical method
- Obtain homogeneous and particular solutions and using the boundary condition to determine the coefficients.
- The sum of the zero-input and zero-state responses
- Convolution (next class)
- Z-transform (Chapter 8)
- State variable method (Chapter 11)
Homogeneous Solution
The characteristic root $\alpha_k$ satisfies:
The homogeneous solution is:
Particular Solutions:
General steps
- Obtain homogeneous solutions from characteristic equation $c_1\alpha_1^n + c_2\alpha_2^n + \cdots + c_N\alpha_N^n$
- Determine the form of the particular solution $D(n)$
- The complete solution $c_1\alpha_1^n + c_2\alpha_2^n + \cdots + c_N\alpha_N^n + D(n)$
- Introduce the boundary condition and set up equations$$ y(0) = C_1 +C_2 + \cdots + C_N + D(0)\\ y(1) = C_1\alpha_1 +C_2\alpha_2 + \cdots + C_N\alpha_N + D(1)\\ \vdots\\ y(N - 1) = C_1\alpha_1^{N - 1} + C_2\alpha_2^{N - 1} + \cdots + C_N\alpha_N^{N - 1} + D(N - 1)\\ \Rightarrow\\ Y(k) - D(k) = VC\\ C = V^{-1}(Y(k) - D(k))\\ $$
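The steps above can be sketched on a made-up first-order example, comparing the closed-form complete solution against a direct recursion:

```python
import numpy as np

# Hypothetical difference equation: y(n) - 0.5*y(n-1) = 1 for n >= 1, with y(0) = 0.
# Homogeneous: C*(0.5)^n; particular: D = 1/(1 - 0.5) = 2; y(0) = 0 gives C = -2,
# so the complete solution is y(n) = 2 - 2*(0.5)^n.
def closed(n):
    return 2 - 2*(0.5)**np.asarray(n, dtype=float)

y = [0.0]
for n in range(1, 20):                 # recursive method for comparison
    y.append(0.5*y[-1] + 1)
y = np.array(y)
```

The recursion and the closed form agree term by term, which is how the boundary-condition step can be verified in practice.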
Zero-input and zero-state responses
Zero-Input Response
$D(k) = 0 \Rightarrow C_{zi} = V^{-1}Y_{zi}(k)$
Zero-State Response
Natural Response $\sum_{k = 1}^NC_k\alpha_k^n$
Forced Response $D(n)$
Characteristics of the boundary condition for difference equations
N-th order difference equation should have N independent boundary conditions.
Compared with continuous systems, there are no big differences between $0_+$ and $0_-$ in discrete systems.
$y(-1), y(-2), \dots, y(-N)$ are the system memory (storage) before the excitation is added: $0_-$
Derive (together with the excitation) $y(0), y(1), …, y(N-1): 0_+$
Using Z-transform can avoid mistakes-similar to the Laplace transform in continuous systems.
Impulse response of DT systems
Similar to CT System, h(n) reflects system’s property
Causality $h(n) = h(n) u(n)$ (unilateral; no response for $n\lt 0$)
Stability $\sum_{n=-\infty}^\infty |h(n)| \lt \infty$ (absolutely summable)
NOTE: critical stability can be considered as either stable or unstable, e.g., system whose impulse response is a sine sequence
Not all practical discrete systems are necessarily causal:
- Variable is not time, like image processing
- Variable is time, but data has been recorded and processed, like voice processing, meteorology, stock systems.
Example: Smooth windowing
Discrete non-causal system
Convolution Sum
Similar to CT system, also satisfies both distributive and associative laws
Calculation of convolution:
Four steps: reflection, shift, multiplication and summation.
Calculation of correlation:
Cross- & auto-correlation: shift, multiplication & summation.
Example:
if $n < 0$, then $y(n) = 0$
if $0 \le n \le N - 1$, $y(n) = \sum_{m = 0}^na^{n-m} = \frac{1}{1 - a}[1 - a^{n+1}]$
if $n \ge N - 1$, $y(n) = \frac{1 - a^{-N}}{1 - a^{-1}}a^n$
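The piecewise closed forms of this example can be verified with a direct convolution sum, taking $x(n)=u(n)-u(n-N)$ and $h(n)=a^n u(n)$ with arbitrary $a$ and $N$:

```python
import numpy as np

a, N = 0.8, 5
n = np.arange(30)
x = (n < N).astype(float)              # u(n) - u(n-N)
h = a**n.astype(float)                 # a^n u(n)

y = np.convolve(x, h)[:len(n)]

rise = (1 - a**(n[:N] + 1.0)) / (1 - a)                         # 0 <= n <= N-1
tail = a**n[N:].astype(float) * (1 - a**(-N)) / (1 - a**(-1.0)) # n >= N
```

Both branches of the worked example match the numerically computed convolution.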
Deconvolution
Signal retrieval y(n) and h(n) are known, how to derive x(n)?
Measurement equipment (linear system), like sensor for measuring blood pressure
System identification y(n) and x(n) are known, how to derive h(n)?
Earthquake signal, like geological survey, oil exploration, etc.
Thus,
Important Concepts
- Symbol rate : clock period is $T$, signal symbol rate is $f = 1/T$.
- Information rate: information rate equals to symbol rate for binary encoding, otherwise, equal to multiplication between symbol rate and number of information bits per symbol.
- Signal bandwidth: the first zero of non-return-to-zero (NRZ) signal’s spectrum is $1/T$, so the signal bandwidth is $B=1/T =f$.
This describes how the bandwidth of a non-return-to-zero (NRZ) signal is computed.
In signal processing and communications, the spectrum shows how a signal's energy is distributed over frequency. For an NRZ signal, the first zero of the spectrum occurs at frequency $1/T$, where $T$ is the duration of one pulse; the underlying square wave of period $T$ has its lowest harmonic at $f=\frac{1}{T}$. Therefore the bandwidth of the NRZ signal is defined as $B = f = 1/T$.
In short, the bandwidth of an NRZ signal is determined by the pulse duration: $B = 1/T$.
NOTE:
When NRZ code is used, signal bandwidth = symbol rate
When return-to-zero (RZ) code is used, signal bandwidth > symbol rate
Using the NRZ code saves bandwidth, but the high-frequency components of the rectangular signal suffer from severe inter-symbol interference (ISI). So the raised cosine or Sa-function pulse is preferred.
ChatGPT:
Return-to-zero (RZ) and non-return-to-zero (NRZ) codes are both common encodings in digital communication.
In return-to-zero (RZ) coding, each bit is represented by a pulse: a high level for binary "1" and a low level for binary "0", and the signal returns to the zero level before the end of each bit interval. Because every pulse ends at zero, RZ is stable and reliable, but it requires a relatively large transmission bandwidth.
In non-return-to-zero (NRZ) coding, each bit is instead represented by a constant level held for the whole bit interval: high for "1", low for "0". NRZ needs less bandwidth, but long runs of consecutive "1"s can be misinterpreted.
In summary, RZ is less prone to transmission errors but demands more bandwidth, while NRZ saves bandwidth at the cost of possible misinterpretation when the same symbol is sent continuously.
Z-Transform
Similar to the L-Transform.
Definition
Z-T of Typical Series
$z \in \mathbb{C}$
The Region of Convergence
Inverse Z-Transform
Method
Contour Integration(residue method)
Right-sided sequence
Left-sided sequence
Power series expansion(Long division)
If it is right sided,
If it is left sided,
Partial Fraction Expansion
Properties of Z-T
Linearity
Addition and homogeneity
ROC may change!
i.e., poles may be cancelled in the addition, in which case the ROC enlarges.
Time shifting
(a) bilateral: If $\mathcal Z[x(n)] = X(z), R_{X_1} < |z| < R_{X_2}$, then $\mathcal{Z}[x(n-m)] = z^{-m}X(z), R_{X_1} < |z| < R_{X_2}$.
(b) unilateral: if $\mathcal{Z}[x(n)] = X(z), R_{X_1} < |z|$, then $\mathcal{Z}[x(n-m)] = z^{-m}[X(z) + \sum_{k = -m}^{-1}x(k)z^{-k}], R_{X_1}\lt |z|$, and $\mathcal{Z}[x(n+m)] = z^{m}[X(z) - \sum_{k = 0}^{m-1}x(k)z^{-k}], R_{X_1}\lt |z|$
For a causal sequence ($x(n) = 0$ for $n < 0$), the unilateral result is also $\mathcal{Z}[x(n-m)] = z^{-m}X(z)$.
The reason is that the unilateral Z-transform does not contain the $n<0$ part of the sequence; after shifting, that part must sometimes be counted (right shift) and sometimes discarded (left shift).
Linear weighting on sequence(Z domain differentiation)
Generalization:
Geometric progression(Z-domain scaling)
Initial-value theorem
Final-value theorem
condition: when $n \rightarrow \infty$, $x(n)$ converge
Thus, the poles of $X(z)$ are inside the unit circle, the radius of ROC is less than 1.
For $a^nu(n), |a| \lt 1$, the final value is 0.
Or, a pole on the unit circle is allowed only at $z = 1$, and only of first order.
$u(n)$’s final value is 1.
Time-domain convolution theorem
If $\mathcal{Z}[x(n)] = X(z)\ (R_{x1} \lt |z| \lt R_{x2})$ and $\mathcal{Z}[h(n)] = H(z)\ (R_{h1} \lt |z| \lt R_{h2})$
If poles are cancelled in multiplication, ROC is enlarged.
Conclusion: (Z Transform) convolution in time-domain is equivalent to multiplication (of Z Transform) in Z-domain.
Z domain convolution theorem
or
where $C$ is a closed contour in the intersection of ROCs of $X(\frac{z}{v})$ and $H(v)$ or $X(v)$ and $H(\frac z v)$.
let $v = \rho e^{j\theta}, z = r e^{j\varphi}$,
then
Mapping of ZT and LT
then,
when $\sigma$ is constant,
a vertical line in the $s$-plane maps to a circle in the $z$-plane;
the imaginary axis of the $s$-plane maps to the unit circle in the $z$-plane.
when $\omega$ is constant,
Correspondence of ZT and LT
Solving difference equation by Z-T
Two methods:
- TD method
- Z-T method (notice the ROC)
ZT method
- perform unilateral Z-T on both sides.
$x(n-r), y(n-k)$ are both right-shifted sequences
- Derive $Y(z)$
- Perform inverse transform on $Y(z)$ to get $y(n)$(ROC!)
Zero input response
Zero state response
System function of DT system
Unit Impulse/sample response $h(n)$ and system function H(z)
Factorization
We can draw the conclusions directly from the relationship between Z-T and L-T
| $s$-plane | $\sigma$ / $\omega$ | Behavior | $r$ / $\theta$ | $z$-plane |
| --- | --- | --- | --- | --- |
| Imaginary axis | $\sigma=0$ | Constant amplitude | $r = 1$ | Unit circle |
| Right half plane | $\sigma>0$ | Growing | $r > 1$ | Outside the unit circle |
| Left half plane | $\sigma<0$ | Decaying | $r < 1$ | Inside the unit circle |
| Real axis | $\omega=0$ | No oscillation | $\theta = 0$ | Positive real axis |
Stability and Causality
Stable: iff
The condition is ROC of stable system includes the unit circle.
Causal:
Condition is ROC includes $\infty$: $R_{X_1}\lt |z|$
Stable and causal
Discrete-time Fourier Transform(DTFT)
Definition
take $T = 1$
The relationship with the Z-T:
Inverse transform
Frequency Response of DT system
The steady-state response to sine sequence
The FT of $h(n)$, $H(e^{j\omega})$ is a periodic function with period of $\omega_s = 2\pi /T = 2\pi$.
If $h(n)$ is real, then the amplitude/phase response is even/odd function.
The amplitude is determined within $[0, \omega_s/2]$
NOTE:
- We can derive the frequency response (function of $\omega$) by letting $D$ move along the unit circle once.
- $H(e^{j\omega})$ is periodic. The frequency response from 0 to $\omega_s/2$ can be determined by letting $D$ move along the half circle.
- If pole $p_i$ is close to the unit circle, there will be a peak in the frequency response. If zero $z_i$ is close to the unit circle, there will be a notch in the frequency response.
- For stable systems, $p_i$ should be inside the unit circle, while $z_i$ could be inside or outside the unit circle.
- poles and zeros at origin have no influence on amplitude.
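These pole/zero rules can be sketched by evaluating $H(z)$ on the unit circle for a made-up filter with a pole pair near the circle at angle $\pm\pi/4$ and zeros at $z=\pm 1$:

```python
import numpy as np

p = 0.95*np.exp(1j*np.pi/4)
poles = np.array([p, np.conj(p)])
zeros = np.array([1.0, -1.0])

w = np.linspace(0, np.pi, 2000)
z = np.exp(1j*w)                       # point D moving along the upper half circle
H = (np.prod(z[:, None] - zeros, axis=1)
     / np.prod(z[:, None] - poles, axis=1))

peak_w = w[np.argmax(np.abs(H))]       # peak expected near the pole angle pi/4
notch = np.abs(H[0])                   # zero at z = 1 notches omega = 0
```

The response peaks near the pole angle and is exactly zero where a zero sits on the circle, matching the notes above.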
Analog and digital Filter
Fundamental Principles
The spectrum of $x(t)$ is strictly inside $\pm \omega_m$.
We choose the sampling frequency:$\omega_s = \frac{2\pi}{T} \ge 2\omega_m$
Classifications of digital filters
In terms of structure
recursive: $a_k\ne 0$ at least for one $k$
non-recursive: $a_k=0$, for all $k$
In terms of the characteristics of $h(n)$
Infinite impulse response(IIR): recursive, non-linear phase
Finite impulse response(FIR): non-recursive, linear phase.
IIR filter
Impulse invariance
Based on the s-domain analog filters.
Design method 1: the impulse invariance method
Replace $\frac{1}{s-s_k}$ with $\frac{1}{1-e^{s_kT}z^{-1}}$. Then $H_a(s)$ becomes $H(z)$.
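The substitution can be checked on a single pole: sampling $h_a(t)=e^{-at}u(t)$ (i.e., $H_a(s)=\frac{1}{s+a}$, $s_k=-a$) should give $H(z)=\frac{1}{1-e^{-aT}z^{-1}}$. The values of $a$, $T$, and the test point are arbitrary:

```python
import numpy as np

a, T = 2.0, 0.1
n = np.arange(50).astype(float)
h = np.exp(-a*n*T)                     # h(n) = h_a(nT), sampled impulse response

z0 = 1.5*np.exp(1j*0.3)                # any point well inside the ROC |z| > e^{-aT}
Hz_sum = np.sum(h * z0**(-n))          # truncated Z-transform sum (tail is negligible)
Hz_closed = 1/(1 - np.exp(-a*T)/z0)    # closed form from the substitution rule
```

The truncated sum and the closed form agree to high precision.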
The relationship between the continuous and discrete filters:
The result simply repeats the original filter response at multiples of the sampling frequency, so it attenuates more slowly.
NOTE: the digital filter implemented this way suffers from aliasing.
The frequency response of analog filter must be attenuated enough within $\omega_s$.
This approach can only realize LP and BP filter, but not HP and band-stop one.
Method 2: the bilinear transformation addresses this problem (you can study it by yourself).
The bilinear transformation is a non-linear transformation.
To implement digital filter, A/D and D/A are required, along with ROM, RAM, ALU, delay units (shift registers), etc.
FIR filter
Poles are at $z=0$. $N - 1$ zeros.
FIR filter has linear-phase iff
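Assuming the standard symmetry condition $h(n)=h(N-1-n)$ (the coefficients below are arbitrary), linear phase can be verified by stripping the delay factor $e^{-j\omega(N-1)/2}$ and checking that the remainder is purely real:

```python
import numpy as np

h = np.array([1.0, 2.0, 3.0, 2.0, 1.0])      # symmetric FIR taps: h(n) = h(N-1-n)
N = len(h)

w = np.linspace(0.0, np.pi, 400)
n = np.arange(N)
H = np.exp(-1j*np.outer(w, n)) @ h           # H(e^{jw}) = sum_n h(n) e^{-jwn}
A = H * np.exp(1j*w*(N - 1)/2)               # strip the linear-phase term
```

`A` has a (numerically) zero imaginary part, so the phase of `H` is exactly $-\omega(N-1)/2$ up to sign changes of the real amplitude.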
Feedback System: Signal Flow Graphs
Operator and Transfer Operator:
Rules:
- Common factors can’t be eliminated.
- Be careful when changing the order of operation($\frac{d}{dt}\int_{-\infty}^tx(\tau)d\tau = x(t)$, $\int_{-\infty}^t\frac{d}{d\tau}x(\tau)d\tau = x(t) - x(-\infty)$)
transfer operator:
Brief introduction to the signal flow graphs(SFG)
Terminologies in SFG
Node, Transfer function, Branch(The branch gain is the transfer function), Source node, Sink node, Mixed node.
Properties of SFG
- Signal only passes through a branch with the direction indicated by the arrowhead.
- Signals of incoming branches are added at a node, and the added signal appears on all outgoing branches.
- A sink node can be separated from a mixed node.
- For a given system, the SFGs can be different.(equations for a system can be different)
- After the SFG is inverted, the transfer function remains invariant, but the signals represented by the inner nodes will be different.
Note: Inversion is done by reversing the transfer direction of each branch and exchanging the source and sink nodes.
Algebra of SFG
Simplify:
NOTE: The SFG can be simplified using the following steps:
a. Merge the cascaded branches to decrease the number of nodes;
b. Merge the parallel branches to decrease the number of branches;
c. Eliminate all the loops.
Then, the system function can be readily derived.
Mason’s Formula
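The notes name Mason's formula without stating it; the standard form (a known result) is

$$H = \frac{1}{\Delta}\sum_k G_k\,\Delta_k,\qquad \Delta = 1 - \sum_i L_i + \sum_{\text{non-touching } i,j} L_i L_j - \cdots$$

where $G_k$ is the gain of the $k$-th forward path, the $L_i$ are the loop gains, the higher-order sums run over sets of mutually non-touching loops, and $\Delta_k$ is $\Delta$ with all loops touching path $k$ removed.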
State-variable analysis of system
Features of the state-variable analytical method
- (1)Provide internal characteristics of the system
- (2) Convenient to represent and analyze the multi-input, multi-output (MIMO) cases
- (3) Easy to be extended to time-variant or nonlinear cases
- (4) Introduce two important concepts: controllability and observability
- (5) Convenient for numerical computation
General form and setup method (CT)
$r$ responses, $k$ state variables, $m$ inputs.
For a time-variant system, $\mathbf{A, B, C, D}$ are functions of $t$.
Direct methods:
- observation
- Topological analysis
Used in circuit analysis.
Indirect methods:
- From block diagram or flow graph
- From input-output equation
- From transfer function
Used in controlled system analysis.
From input-output equation
NOTE : under the zero-state condition, $p$ is equivalent to $s$
The SFG is:
If the order of differential equation on the left side is higher than that on the right side:
If the derivatives of the excitation on the right side are absent,
Factorizing Transfer operator
Solving CT system’s state equations
Time domain method using computer.
Transform-domain(Laplace-tranform) method
Let $\Psi(s) = (s\mathbf I - \mathbf A)^{-1}$, which is called the characteristic matrix.
Time-domain method
properties
output
Correspond to LT:
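A numeric sketch of the zero-input time-domain solution $x(t) = e^{\mathbf At}x(0)$, with an assumed diagonalizable $2\times2$ $\mathbf A$ and $e^{\mathbf At}$ computed by eigendecomposition:

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])    # assumed A; eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])       # assumed initial state

lam, P = np.linalg.eig(A)       # A = P diag(lam) P^{-1}

def x(t):
    """Zero-input response x(t) = e^{At} x0 via eigendecomposition."""
    return (P @ np.diag(np.exp(lam * t)) @ np.linalg.inv(P)) @ x0

assert np.allclose(x(0.0), x0)           # e^{A*0} = I
assert np.linalg.norm(x(10.0)) < 1e-3    # stable: both eigenvalues < 0
```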
Derive System Functions
That for DT system
State equation setup
Solving:
The Impulse response is:
Calculate $A^n$: Cayley-Hamilton Theorem
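A sketch of the Cayley–Hamilton computation of $\mathbf A^n$ for an assumed $2\times2$ $\mathbf A$ with distinct eigenvalues: write $\mathbf A^n = \alpha_0\mathbf I + \alpha_1\mathbf A$ and solve for the $\alpha_i$ from $\lambda_i^n = \alpha_0 + \alpha_1\lambda_i$ at each eigenvalue.

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [-0.125, 0.75]])   # assumed A; eigenvalues 0.5 and 0.25
lam = np.array([0.5, 0.25])
n = 6

# solve [1, lam_i] @ [alpha_0, alpha_1] = lam_i^n for each eigenvalue
V = np.vander(lam, 2, increasing=True)
alpha = np.linalg.solve(V, lam ** n)

A_n = alpha[0] * np.eye(2) + alpha[1] * A
assert np.allclose(A_n, np.linalg.matrix_power(A, n))
```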
ZT Solution
then
Comparing this with CT solution, we find
Linear Transform on state vectors
The equations become:
Similarity transform doesn’t change the eigenvalues.
Transform function matrix keeps invariant under linear transformation.
We can diagonalize the matrix A.
Calculate eigenvalues $\alpha_i$ -> calculate eigenvectors $\xi_i$ -> $\mathbf P^{-1} = [\xi_i]$, $\hat A = \text{diag}(\alpha_i)$
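A sketch of the diagonalization step with an assumed $2\times2$ $\mathbf A$ (note: `numpy` returns the matrix whose columns are the eigenvectors, which plays the role of $\mathbf P^{-1}$ in the notes' convention):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [0.0, 3.0]])        # assumed A
lam, M = np.linalg.eig(A)         # columns of M are the eigenvectors xi_i
A_hat = np.linalg.inv(M) @ A @ M  # similarity transform diagonalizes A

assert np.allclose(A_hat, np.diag(lam))
# similarity preserves eigenvalues:
assert np.allclose(sorted(np.linalg.eigvals(A_hat)), sorted(lam))
```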
Controllable & Observable
Controllability: the input can drive every state variable.
Uncontrollability: the input cannot change some state variable.
Observability: every state variable can be determined from the output.
Unobservability: some state variable has no effect on the output.
After the diagonalization of A:
$\mathbf B$ has no zero rows $\Leftrightarrow$ completely controllable. Otherwise, the zero rows correspond to the uncontrollable state variables.
$\mathbf C$ has no zero columns $\Leftrightarrow$ completely observable. Otherwise, the zero columns correspond to the unobservable state variables.
In fact:
$H(s)$ contains only the controllable and observable state variables, so the state and output equations contain more information than $H(s)$.
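The diagonal-form criterion can be cross-checked with the standard Kalman rank tests (rank of $[\mathbf B, \mathbf{AB}, \dots]$ and $[\mathbf C; \mathbf{CA}; \dots]$, a known result not stated in the notes); matrices assumed for illustration:

```python
import numpy as np

A = np.array([[-1.0, 0.0],
              [0.0, -2.0]])      # already diagonal (assumed)
B = np.array([[1.0],
              [0.0]])            # zero row -> second state uncontrollable
C = np.array([[1.0, 1.0]])      # no zero column -> observable

ctrb = np.hstack([B, A @ B])    # controllability matrix [B, AB]
obsv = np.vstack([C, C @ A])    # observability matrix [C; CA]

assert np.linalg.matrix_rank(ctrb) == 1  # rank < 2: not completely controllable
assert np.linalg.matrix_rank(obsv) == 2  # full rank: completely observable
```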
CDMA
Use a set of orthogonal codes to support multiple users via orthogonal multiplexing.
Example
Assume K users need to connect with the base station simultaneously for CDMA system.
- Design a set of orthogonal codes
- Design on the transmitter
- Design on the receiver
An example of Code 4:
Must satisfy:
Second, at the transmitter:
(1) frequency shifting: $d_k(t)\cos(\omega t)$
(2) spreading: $s_k(t) = d_k(t)c_k(t)\cos(\omega t)$
Third, at the receiver: coherent detection / de-spreading
Core:
- Orthogonal code design(signal design)
- Code capturing and tracking(signal processing and system design)
- Multi-user detection and channel estimation (signal processing and system design)
Code design
requirements:
- sharp auto-correlation curve
- zero cross-correlation
- largest possible orthogonal code set
- highest possible complexity, for security
Commonly used codes:
- Walsh code
- PN sequence
- GOLD codes
Code capturing:
- sliding-window capturing
- multiple correlators to detect phase match
The chip period of the address (spreading) code is much shorter than the symbol period of the data code, $T_c \ll T_d$, so the modulated signal is much wider in the frequency domain; its spectrum is called a spread spectrum.
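A sketch of the code design and de-spreading ideas above, using Walsh codes built from the Hadamard recursion $H_{2N} = \begin{bmatrix}H_N & H_N\\ H_N & -H_N\end{bmatrix}$ (the per-user data bits are assumed example values):

```python
import numpy as np

def walsh(order):
    """Walsh/Hadamard codes: 2**order mutually orthogonal rows."""
    H = np.array([[1]])
    for _ in range(order):
        H = np.block([[H, H], [H, -H]])
    return H

H4 = walsh(2)                     # 4 codes of length 4
assert np.allclose(H4 @ H4.T, 4 * np.eye(4))   # zero cross-correlation

d = np.array([1, -1, 1, 1])       # one data bit per user (assumed)
tx = d @ H4                       # superpose the users' spread signals
rx = (tx @ H4.T) / 4              # correlate with each code (de-spread)
assert np.allclose(rx, d)         # every user's bit is recovered
```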