Biomedical Engineering 2


2.2.3 Spectral Analysis

The various methods to estimate the power spectral density (PSD) of a signal may be classified as nonparametric and parametric.

2.2.3.1 Nonparametric Estimators of PSD

This is the traditional approach to frequency analysis, based on the Fourier transform, which can be evaluated efficiently through the fast Fourier transform (FFT) algorithm [Marple, 1987]. The PSD as a function of frequency, P(f), can be obtained directly from the time series y(n) through the periodogram expression

P(f) = \frac{1}{N T_s} \left| T_s \sum_{k=0}^{N-1} y(k)\, e^{-j 2\pi f k T_s} \right|^2 = \frac{1}{N T_s}\, |Y(f)|^2    (2.26)

where Ts is the sampling period, N is the number of samples, and Y(f) is the discrete-time Fourier transform of y(n). On the basis of the Wiener-Khintchin theorem, the PSD is also obtainable in two steps from the FFT of the autocorrelation function R̂yy(k) of the signal, where R̂yy(k) is estimated by means of the following expression:

\hat{R}_{yy}(k) = \frac{1}{N} \sum_{i=0}^{N-k-1} y(i)\, y^{*}(i+k)    (2.27)

where * denotes the complex conjugate. Thus the PSD is expressed as

P(f) = T_s \sum_{k=-N}^{N} \hat{R}_{yy}(k)\, e^{-j 2\pi f k T_s}    (2.28)

based on the available lag estimates R̂yy(k), where −(1/2Ts) ≤ f ≤ (1/2Ts).

FFT-based methods are widely used for their easy applicability, computational speed, and direct interpretation of the results. Quantitative parameters are obtained by evaluating the power contribution in different frequency bands. This is achieved by dividing the frequency axis into ranges of interest and integrating the PSD over such intervals. The area under each portion of the spectrum is the fraction of the total signal variance due to the corresponding frequencies. However, the autocorrelation function and the Fourier transform are theoretically defined on infinite data sequences. Thus errors are introduced by the need to operate on finite data records in order to obtain estimators of the true functions.
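The periodogram of Equation 2.26 can be sketched directly in a few lines. The following is an illustrative pure-Python implementation evaluated at the DFT frequencies (the test signal and its frequency are invented for the example; in practice the FFT form the text mentions would be used for speed):

```python
import cmath
import math

def periodogram(y, ts):
    """Periodogram PSD estimate per Equation 2.26:
    P(f_m) = |Ts * sum_k y(k) e^{-j 2 pi f_m k Ts}|^2 / (N*Ts),
    evaluated at the DFT frequencies f_m = m / (N*Ts)."""
    n = len(y)
    psd = []
    for m in range(n):
        yf = ts * sum(y[k] * cmath.exp(-2j * math.pi * m * k / n)
                      for k in range(n))
        psd.append(abs(yf) ** 2 / (n * ts))
    return psd

# A 10-Hz sinusoid sampled at 100 Hz for 1 sec: the estimate peaks at 10 Hz.
ts = 0.01
y = [math.sin(2 * math.pi * 10 * k * ts) for k in range(100)]
p = periodogram(y, ts)
peak_hz = max(range(50), key=lambda m: p[m]) / (len(y) * ts)  # positive freqs only
```

Since the record length is exactly 1 sec here, each DFT bin m corresponds to m Hz, and the peak falls in bin 10.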
In addition, for a finite data set it is necessary to make assumptions, sometimes unrealistic, about the data outside the recording window; commonly they are assumed to be zero. This implicit rectangular windowing of the data results in spectral leakage in the PSD. Windows that smoothly taper the side samples to zero are most often used to alleviate this problem, even if they may reduce the frequency resolution [Harris, 1978]. Furthermore, the estimators of the signal PSD are not statistically consistent, and various techniques are needed to improve their statistical performance. Various methods are mentioned in the literature; those of Daniell [1946], Bartlett [1948], and Welch [1970] are the most widely used. Of course, all these procedures cause a further reduction in frequency resolution.

2.2.3.2 Parametric Estimators

Parametric approaches assume the time series under analysis to be the output of a given mathematical model, and no drastic assumptions are made about the data outside the recording window. The PSD is calculated as a function of the model parameters according to appropriate expressions. A critical point in this approach is the choice of an adequate model to represent the data sequence. The model is completely independent of the physiologic, anatomic, and physical characteristics of the biologic system; it simply provides the input-output relationship of the process in the so-called black-box approach. Among the numerous modeling possibilities, linear models, characterized by a rational transfer function, are able to describe a wide variety of processes.
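The leakage-reduction effect of smooth windows can be demonstrated numerically. The sketch below (illustrative only; the Hann window, record length, and tone frequency are chosen arbitrarily) compares the periodogram of an off-bin sinusoid under rectangular and Hann windowing: far from the tone, the Hann estimate contains much less leaked power.

```python
import cmath
import math

def windowed_periodogram(y, w, ts):
    """Periodogram of the windowed sequence y(k)*w(k), normalized
    by the window power so that peak heights stay comparable."""
    n = len(y)
    x = [yk * wk for yk, wk in zip(y, w)]
    u = sum(wk * wk for wk in w) / n
    p = []
    for m in range(n):
        xf = sum(x[k] * cmath.exp(-2j * math.pi * m * k / n) for k in range(n))
        p.append(ts * abs(xf) ** 2 / (n * u))
    return p

n, ts = 128, 0.01
# 10.5 cycles over the record: the tone falls between DFT bins (worst case
# for the implicit rectangular window).
y = [math.sin(2 * math.pi * 10.5 * k / n) for k in range(n)]
rect = [1.0] * n
hann = [0.5 - 0.5 * math.cos(2 * math.pi * k / (n - 1)) for k in range(n)]
p_rect = windowed_periodogram(y, rect, ts)
p_hann = windowed_periodogram(y, hann, ts)
# Leakage measured well away from the tone, e.g., at bin 40:
less_leakage = p_hann[40] < p_rect[40]
```

The price of the smoother window is a wider main lobe around bin 10-11, that is, the loss of frequency resolution the text mentions.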
In the most general case, they are represented by the following linear equation, which relates the input driving signal w(k) and the output of an autoregressive moving average (ARMA) process:

y(k) = -\sum_{i=1}^{p} a_i\, y(k-i) + \sum_{j=1}^{q} b_j\, w(k-j) + w(k)    (2.29)

where w(k) is the input white noise with zero mean and variance λ², p and q are the orders of the AR and MA parts, respectively, and a_i and b_j are the corresponding coefficients. The ARMA model reduces to an AR or an MA model if the coefficients b_j or a_i, respectively, are set to zero. Since the estimation of the AR parameters results in linear equations, AR models are usually employed in place of ARMA or MA models, also on the basis of the Wold decomposition theorem [Marple, 1987], which establishes that any stationary ARMA or MA process of finite variance can be represented by a unique AR model of appropriate (even infinite) order; likewise, any ARMA or AR process can be represented by an MA model of sufficiently high order. The AR PSD is then obtained from the following expression:

P(f) = \frac{\lambda^2 T_s}{\left|1 + \sum_{i=1}^{p} a_i z^{-i}\right|^2}\Bigg|_{z=\exp(j 2\pi f T_s)} = \frac{\lambda^2 T_s}{\left|\prod_{l=1}^{p} (z - z_l)\right|^2}\Bigg|_{z=\exp(j 2\pi f T_s)}    (2.30)

The right-hand side of the relation makes explicit the poles z_l of the transfer function, which can be plotted in the z-plane. Figure 2.16b shows the PSD of the HRV signal depicted in Figure 2.16a, while Figure 2.16c displays the corresponding pole diagram obtained according to the procedure described in the preceding section.
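As a numeric illustration of Equation 2.30 (the pole position, model order, and sampling rate are invented for the example, not taken from the HRV data of Figure 2.16), an AR(2) model with a conjugate pole pair near the unit circle produces a spectral peak close to the pole angle:

```python
import cmath
import math

ts, lam2 = 0.01, 1.0
r, theta = 0.95, 2 * math.pi * 10 * ts            # pole pair near 10 Hz
a = [-2 * r * math.cos(theta), r * r]             # 1 + a1*z^-1 + a2*z^-2

def ar_psd(f):
    """Equation 2.30: P(f) = lam2*Ts / |1 + sum_i a_i z^-i|^2,
    evaluated on the unit circle at z = exp(j*2*pi*f*Ts)."""
    z = cmath.exp(2j * math.pi * f * ts)
    den = 1 + sum(ai * z ** (-(i + 1)) for i, ai in enumerate(a))
    return lam2 * ts / abs(den) ** 2

freqs = [0.5 * i for i in range(100)]             # 0 to 49.5 Hz grid
peak = max(freqs, key=ar_psd)                     # near the 10-Hz pole angle
```

Moving the pole radius r closer to 1 sharpens the peak, which is how pole positions in the z-plane map onto spectral components of the signal.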


Biologic signals carry information that is useful for comprehension of the complex pathophysiologic mechanisms underlying the behavior of living systems. Nevertheless, such information is often not directly available from the raw recorded signals: it can be masked by other biologic signals detected at the same time (endogenous effects) or buried in additive noise (exogenous effects). For such reasons, additional processing is usually required to enhance the relevant information and to extract from it parameters that quantify the behavior of the system under study, mainly for physiologic studies, or that define the degree of pathology for routine clinical procedures (diagnosis, therapy, or rehabilitation). Several processing techniques (also called preprocessing techniques) can be used for such purposes: time- or frequency-domain methods including filtering, averaging, spectral estimation, and others. Even if it is possible to deal with continuous-time waveforms, it is usually convenient to convert them into numerical form before processing. The recent progress of digital technology, in terms of both hardware and software, makes digital processing more efficient and flexible than analog processing. Digital techniques have several advantages: they are generally powerful, since even complex algorithms can be implemented easily, and their accuracy depends only on truncation and round-off errors, whose effects can be predicted and controlled by the designer and are largely unaffected by unpredictable variables such as component aging and temperature, which can degrade the performance of analog devices. Moreover, design parameters can be changed more easily because they involve software rather than hardware modifications. A few basic elements of signal acquisition and processing are presented in the following; the aim is mainly to stress the aspects connected with the acquisition and analysis of biologic signals, leaving the more detailed treatment to the cited references.


FIGURE 2.2 Effect of sampling frequency (fs) on a band-limited signal (up to frequency fb). Fourier transform of the original time signal (a), of the sampled signal when fs < 2fb (b), and when fs > 2fb (c). The dark areas in part (b) indicate the aliased frequencies.

values. This is known as the sampling theorem (or Shannon theorem) [Shannon, 1949]. It states that a continuous-time signal can be completely recovered from its samples if, and only if, the sampling rate is greater than twice the signal bandwidth. To understand the assumptions of the theorem, consider a continuous band-limited signal x(t) (up to fb) whose Fourier transform X(f) is shown in Figure 2.2a, and suppose it is uniformly sampled. The sampling procedure can be modeled by the multiplication of x(t) with an impulse train

i(t) = \sum_{k=-\infty}^{\infty} \delta(t - kT_s)    (2.1)

where δ(t) is the delta (Dirac) function, k is an integer, and Ts is the sampling interval. The sampled signal becomes

x_s(t) = x(t) \cdot i(t) = \sum_{k=-\infty}^{\infty} x(t)\, \delta(t - kT_s)    (2.2)

Taking into account that multiplication in the time domain implies convolution in the frequency domain, we obtain

X_s(f) = X(f) * I(f) = X(f) * \frac{1}{T_s} \sum_{k=-\infty}^{\infty} \delta(f - kf_s) = \frac{1}{T_s} \sum_{k=-\infty}^{\infty} X(f - kf_s)    (2.3)

where fs = 1/Ts is the sampling frequency. Thus Xs(f), that is, the Fourier transform of the sampled signal, is periodic and consists of a series of identical replicas of X(f) centered around multiples of the sampling frequency, as depicted in Figure 2.2b,c.
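The periodicity of Xs(f) in Equation 2.3 means that two tones separated by exactly fs are indistinguishable once sampled. A small numerical check (the frequencies are arbitrary):

```python
import math

fs = 100.0
ts = 1.0 / fs
f0 = 13.0
# cos(2*pi*(f0+fs)*k*Ts) = cos(2*pi*f0*k*Ts + 2*pi*k): the extra 2*pi*k is
# invisible at the sample instants, so the two tones yield identical samples.
x1 = [math.cos(2 * math.pi * f0 * k * ts) for k in range(32)]
x2 = [math.cos(2 * math.pi * (f0 + fs) * k * ts) for k in range(32)]
max_diff = max(abs(a - b) for a, b in zip(x1, x2))
```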


It is worth noting in Figure 2.2b that the frequency components of X(f) placed above fs/2 appear, when fs < 2fb, folded back, summing with the lower-frequency components. This phenomenon is known as aliasing (higher-frequency components appear as, or "alias," lower-frequency ones). When aliasing occurs, the original information (Figure 2.2a) cannot be recovered, because the frequency components of the original signal are irreversibly corrupted by the overlapping shifted versions of X(f). A visual inspection of Figure 2.2 shows that such frequency contamination can be avoided when the original signal is band-limited (X(f) = 0 for f > fb) and sampled at a frequency fs ≥ 2fb. In this case, shown in Figure 2.2c, no overlaps exist between adjacent replicas of X(f), and the original waveform can be retrieved by low-pass filtering the sampled signal [Oppenheim and Schafer, 1975]. These observations are the basis of the sampling theorem reported above.

The hypothesis of a band-limited signal is rarely verified in practice, owing to the signal characteristics or to the effect of superimposed wideband noise. It is worth noting that filtering before sampling is always needed, even if we assume the incoming signal to be band-limited. Consider the following example of an EEG signal whose frequency content of interest ranges between 0 and 40 Hz (the usual diagnostic bands are δ, 0 to 3.5 Hz; θ, 4 to 7 Hz; α, 8 to 13 Hz; β, 14 to 40 Hz). We may decide to sample it at 80 Hz, thus literally respecting the Shannon theorem. If we do so without prefiltering, we may find some unpleasant results. Typically, the 50-Hz mains noise will replicate itself in the signal band (at 30 Hz, i.e., in the β band), thus irreversibly corrupting information that is of great interest from a physiologic and clinical point of view. The effect is shown in Figure 2.3a (before sampling) and Figure 2.3b (after sampling).
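The 50-Hz example above is easy to verify numerically: sampled at fs = 80 Hz, a 50-Hz cosine produces exactly the same sample sequence as a 30-Hz one (it folds back to |50 - 80| = 30 Hz), so the interference lands in the β band. A minimal check:

```python
import math

fs = 80.0                      # sampling rate from the EEG example
mains = [math.cos(2 * math.pi * 50.0 * k / fs) for k in range(16)]
beta = [math.cos(2 * math.pi * 30.0 * k / fs) for k in range(16)]
# Sample by sample, the 50-Hz interference is indistinguishable from a
# genuine 30-Hz (beta-band) component.
max_diff = max(abs(a - b) for a, b in zip(mains, beta))
```

No post-hoc processing can undo this, which is why the antialiasing filter must act before the sampler.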
Generally, it is advisable to sample at a frequency greater than 2fb [Gardenhire, 1964] in order to take into account the nonideal behavior of the filter and of the other preprocessing devices. Therefore, the prefiltering block of Figure 2.1 is always required to band-limit the signal before sampling and to avoid aliasing errors.

2.1.2 The Quantization Effects

Quantization produces a discrete signal whose samples can assume only certain values, according to the way they are coded. Typical step functions for a uniform quantizer are reported in Figure 2.4a,b, where the quantization interval between two quantization levels is shown for two cases: rounding and truncation, respectively. Quantization is a heavily nonlinear procedure, but fortunately its effects can be modeled statistically. As shown in Figure 2.4c,d, the nonlinear quantization block is replaced by a statistical model in which the error induced by quantization is treated as an additive noise e(n) (quantization error) added to the signal x(n).
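The additive-noise model of rounding quantization can be checked numerically: the error never exceeds half a quantization step, and for a busy input its variance is close to q²/12, the standard result for a uniform error model. An illustrative sketch (the sinusoidal input and step size q are arbitrary choices for the example):

```python
import math

def quantize(x, q):
    """Uniform rounding quantizer with step q."""
    return q * round(x / q)

q = 0.01
xs = [math.sin(0.1 * k) for k in range(2000)]
errs = [quantize(x, q) - x for x in xs]
max_err = max(abs(e) for e in errs)          # rounding keeps |e(n)| <= q/2
var = sum(e * e for e in errs) / len(errs)   # empirically close to q*q/12
```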


It should be noted that Equation 2.18 imposes geometric constraints on the zero locus of H(z). Taking into account Equation 2.12, we have

z^{M} H(z) = H\!\left(\frac{1}{z^{*}}\right)    (2.19)

Thus, both z_m and 1/z_m^* must be zeroes of H(z). Hence the zeroes of linear-phase FIR filters must either lie on the unit circle or appear in pairs with reciprocal moduli.

2.2.1.4 Design Criteria

In many cases, the filter is designed to satisfy some requirements, usually on the frequency response, which depend on the characteristics of the particular application the filter is intended for. It is known that ideal filters, like those reported in Figure 2.7, are not physically realizable (they would require an infinite number of impulse-response coefficients); thus we can design FIR or IIR filters that only approximate, with an acceptable error, the ideal response. Figure 2.10 shows the frequency response of a nonideal low-pass filter. Here there are ripples in the passband and in the stopband, and there is a transition band from passband to stopband, defined by the interval ωs − ωp. Several design techniques are available, some of which require heavy computation, that are capable of producing filters meeting specified requirements. They include the window technique, the frequency-sampling method, and equiripple design for FIR filters; Butterworth, Chebyshev, and elliptic designs, together with the impulse-invariant or bilinear transformation, are instead employed for IIR filters. For a detailed analysis of digital filter techniques, see Antoniou [1979], Cerutti [1983], and Oppenheim and Schafer [1975].

2.2.1.5 Examples

A few examples of different kinds of filters will be presented in the following, showing some applications to ECG signal processing.
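As a sketch of the window technique mentioned above (using a Hamming window and arbitrary cutoff, sampling rate, and filter length, not one of the book's designs), an ideal low-pass impulse response is truncated and tapered; the symmetric coefficients give the linear phase discussed earlier:

```python
import cmath
import math

def fir_lowpass(fc, fs, n_taps):
    """Window-method FIR design: ideal sinc impulse response truncated to
    n_taps coefficients and tapered by a Hamming window (odd n_taps and
    symmetric taps give linear phase)."""
    m = (n_taps - 1) / 2
    h = []
    for k in range(n_taps):
        t = k - m
        ideal = (2 * fc / fs if t == 0
                 else math.sin(2 * math.pi * fc * t / fs) / (math.pi * t))
        w = 0.54 - 0.46 * math.cos(2 * math.pi * k / (n_taps - 1))
        h.append(ideal * w)
    return h

def gain(h, f, fs):
    """Magnitude of the frequency response at f (Hz)."""
    return abs(sum(hk * cmath.exp(-2j * math.pi * f * k / fs)
                   for k, hk in enumerate(h)))

h = fir_lowpass(fc=40.0, fs=500.0, n_taps=101)
dc = gain(h, 0.0, 500.0)                     # ~1 in the passband
stop = gain(h, 150.0, 500.0)                 # strongly attenuated
symmetric = all(abs(h[k] - h[-1 - k]) < 1e-12 for k in range(len(h)))
```

Shortening the filter widens the transition band and raises the ripples, which is the trade-off between specification tightness and computational cost noted in the text.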
It is shown that the ECG contains relevant information over a wide range of frequencies; the lower-frequency content should be preserved for correct measurement of the slow ST displacements, while the higher-frequency content is needed to correctly estimate the amplitude and duration of the faster contributions, mainly at the level of the QRS complex. Unfortunately, several sources of noise are present in the same frequency band, such as higher-frequency noise due to muscle contraction (EMG noise), lower-frequency noise due to motion artifacts (baseline wandering), the effect of respiration, low-frequency noise at the skin-electrode interface, and others. In the first example, the effect of two different low-pass filters will be considered. An ECG signal corrupted by EMG noise (Figure 2.11a) is low-pass filtered by two different low-pass filters whose frequency responses are shown in Figure 2.11b,c. The two FIR filters have cutoff frequencies at 40 and 20 Hz, respectively, and were designed through window techniques (Weber-Cappellini window, filter length = 256 points) [Cappellini et al., 1978]. The output signals are shown in Figure 2.11d,e. Filtering drastically reduces the superimposed noise but at the same time alters the original ECG waveform. In particular, the R-wave amplitude is progressively reduced by decreasing the cutoff frequency, and the QRS width is progressively increased as well. On the other hand, P waves appear almost unaffected, having frequency components generally lower than 20 to 30 Hz. At this point, it is worth noting that an increase in QRS duration is generally associated with various pathologies, such as ventricular hypertrophy or bundle-branch block. It is therefore necessary to check that an excessive band limitation does not introduce a false-positive indication in the diagnosis of the ECG signal. An example of an application of stopband filters (notch filters) is presented in Figure 2.12.
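The trade-off just described (a lower cutoff attenuates the R-wave peak and widens the QRS) can be reproduced with a toy model. The sketch below uses a crude moving-average low-pass and an idealized unit spike in place of the actual Weber-Cappellini filters and ECG of Figure 2.11:

```python
def moving_average(x, n):
    """Crude low-pass filter: n-point moving average (longer n = lower cutoff)."""
    out = []
    for i in range(len(x)):
        window = x[max(0, i - n + 1): i + 1]
        out.append(sum(window) / len(window))
    return out

# A unit spike stands in for the R wave.
x = [0.0] * 50
x[25] = 1.0
y_mild = moving_average(x, 5)        # higher cutoff, gentler smoothing
y_strong = moving_average(x, 15)     # lower cutoff, heavier smoothing
peak_mild, peak_strong = max(y_mild), max(y_strong)

def width(y):
    """Number of samples visibly above baseline."""
    return sum(1 for v in y if v > 0.01)
```

Here peak_strong < peak_mild and width(y_strong) > width(y_mild): the stronger filter lowers the peak and spreads it out, mirroring Figure 2.11d,e.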
It is used to reduce the 50-Hz mains noise on the ECG signal, and it was designed by placing a zero at the frequency to be suppressed. Finally, an example of a high-pass filter is shown for the detection of the QRS complex. Detecting the time occurrence of a fiducial point in the QRS complex is indeed the first task usually performed in ECG signal analysis. The QRS complex usually contains higher-frequency components than the other ECG waves, and thus such components will be enhanced by a high-pass filter. Figure 2.13 shows how QRS complexes (Figure 2.13a) can be identified by a derivative high-pass filter, with a cutoff frequency chosen to decrease the effect of noise contributions at high frequencies (Figure 2.13b). The filtered signal (Figure 2.13c) presents sharp and well-defined peaks that are easily recognized by a threshold value.

2.2.2 Signal Averaging

Traditional filtering performs very well when the frequency contents of signal and noise do not overlap. When the noise bandwidth is completely separated from the signal bandwidth, the noise can easily be decreased by means of a linear filter according to the procedures described earlier. On the other hand, when the signal and noise bandwidths overlap and the noise amplitude is large enough to seriously corrupt the signal, a traditional filter designed to cancel the noise will also introduce signal cancellation or, at least, distortion. As an example, consider the brain potentials evoked by a sensory stimulation (visual, acoustic, or somatosensory), generally called evoked potentials (EP). Such a response is very difficult to determine because its amplitude is generally much lower than that of the background EEG activity. Both EP and EEG signals contain information in the same frequency range; thus the problem of separating the desired response cannot be approached via traditional digital filtering [Aunon et al., 1981].
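A minimal version of such a notch can be sketched by placing a conjugate zero pair on the unit circle at the mains frequency (the 500-Hz sampling rate is an assumption made for the illustration, not taken from Figure 2.12):

```python
import cmath
import math

fs = 500.0                              # assumed sampling rate
w0 = 2 * math.pi * 50.0 / fs
# Zeros at z = exp(+/- j*w0) give H(z) = 1 - 2*cos(w0)*z^-1 + z^-2,
# here scaled to unit gain at DC.
b = [1.0, -2.0 * math.cos(w0), 1.0]
dc = sum(b)
b = [bk / dc for bk in b]

def gain(f):
    """Magnitude response of the FIR notch at frequency f (Hz)."""
    z = cmath.exp(2j * math.pi * f / fs)
    return abs(sum(bk * z ** (-k) for k, bk in enumerate(b)))

g50 = gain(50.0)    # ~0: the mains component is suppressed
g0 = gain(0.0)      # 1 by construction
```

A pure FIR zero gives a wide notch; practical designs often add a pole just inside the unit circle at the same angle to narrow it.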
Another typical example is the detection of ventricular late potentials (VLP) in the ECG signal. These potentials are very small in amplitude, comparable with the noise superimposed on the signal, and overlap it in frequency content as well [Simson, 1981]. In such cases, an increase in the SNR may be achieved on the basis of the different statistical properties of signal and noise. When the desired signal repeats identically at each iteration (i.e., the EP at each sensory stimulus, the VLP at each cardiac cycle), the averaging technique can satisfactorily solve the problem of separating signal from noise.
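The gain from averaging repeated sweeps can be illustrated with synthetic data (signal shape, noise level, and sweep count are invented for the example): averaging M sweeps of an identical signal plus independent zero-mean noise reduces the residual noise power by roughly a factor M, that is, an amplitude SNR improvement of sqrt(M).

```python
import math
import random

random.seed(0)                           # reproducible noise
n, m = 200, 100                          # samples per sweep, number of sweeps
signal = [math.sin(2 * math.pi * k / n) for k in range(n)]  # repeats identically
sweeps = [[s + random.gauss(0.0, 1.0) for s in signal] for _ in range(m)]
avg = [sum(sweep[k] for sweep in sweeps) / m for k in range(n)]

def residual_noise_power(x):
    """Mean squared deviation from the known noise-free signal."""
    return sum((a - b) ** 2 for a, b in zip(x, signal)) / len(x)

single = residual_noise_power(sweeps[0])    # ~1 (the noise variance)
averaged = residual_noise_power(avg)        # ~1/m after averaging
```

The key assumptions are exactly those stated in the text: the response repeats identically at each iteration, and the noise is independent from sweep to sweep.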

It should be noted that Equation 2.18 imposes geometric constrains to the zero locus of H(z). Taking into account Equation 2.12, we have zM H(z) = H 1 z∗ (2.19) Thus, both zm and 1/z∗ m must be zeroes of H(z). Then the zeroes of linear phase FIR filters must lie on the unitary circle, or they must appear in pairs and with inverse moduli. 2.2.1.4 Design Criteria In many cases, the filter is designed in order to satisfy some requirements, usually on the frequency response, which depend on the characteristic of the particular application the filter is intended for. It is known that ideal filters, like those reported in Figure 2.7, are not physically realizable (they would require an infinite number of coefficients of impulse response); thus we can design FIR or IIR filters that can only mimic, with an acceptable error, the ideal response. Figure 2.10 shows a frequency response of a not ideal low-pass filter. Here, there are ripples in passband and in stopband, and there is a transition band from passband to stopband, defined by the interval ωs − ωp. Several design techniques are available, and some of them require heavy computational tasks, which are capable of developing filters with defined specific requirements. They include window technique, frequency-sampling method, or equiripple design for FIR filters. Butterworth, Chebychev, elliptical design, and impulse-invariant or bilinear transformation are instead employed for IIR filters. For detailed analysis of digital filter techniques, see Antoniou [1979], Cerutti [1983], and Oppenheim and Schafer [1975]. Bronz: "2122_c002" — 2006/2/9 — 21:41 — page 12 — #12 2-12 Medical Devices and Systems 2.2.1.5 Examples A few examples of different kinds of filters will be presented in the following, showing some applications on ECG signal processing. 
It is shown that the ECG contains relevant information over a wide range of frequencies; the lower-frequency contents should be preserved for correct measurement of the slow ST displacements, while higher-frequency contents are needed to correctly estimate amplitude and duration of the faster contributions, mainly at the level of the QRS complex. Unfortunately, several sources of noise are present in the same frequency band, such as, higher-frequency noise due to muscle contraction (EMG noise), the lower-frequency noise due to motion artifacts (baseline wandering), the effect of respiration or the low-frequency noise in the skin-electrode interface, and others. In the first example, the effect of two different low-pass filters will be considered. An ECG signal corrupted by an EMG noise (Figure 2.11a) is low-pass filtered by two different low-pass filters whose frequency responses are shown in Figure 2.11b,c. The two FIR filters have cutoff frequencies at 40 and 20 Hz, respectively, and were designed through window techniques (Weber-Cappellini window, filter length = 256 points) [Cappellini et al., 1978]. The output signals are shown in Figure 2.11d,e. Filtering drastically reduces the superimposed noise but at the same time alters the original ECG waveform. In particular, the R wave amplitude is progressively reduced by decreasing the cutoff frequency, and the QRS width is progressively increased as well. On the other hand, P waves appear almost unaffected, having frequency components generally lower than 20 to 30 Hz. At this point, it is worth noting that an increase in QRS duration is generally associated with various pathologies, such as ventricular hypertrophy or bundle-branch block. It is therefore necessary to check that an excessive band limitation does not introduce a false-positive indication in the diagnosis of the ECG signal. An example of an application for stopband filters (notch filters) is presented in Figure 2.12. 
It is used to reduce the 50-Hz mains noise on the ECG signal, and it was designed by placing a zero in correspondence with the frequency we want to suppress. Finally, an example of a high-pass filter is shown for the detection of the QRS complex. Detecting the time of occurrence of a fiducial point in the QRS complex is indeed the first task usually performed in ECG signal analysis. The QRS complex usually contains the higher-frequency components with respect to the other ECG waves, and thus such components will be enhanced by a high-pass filter. Figure 2.13 shows how QRS complexes (Figure 2.13a) can be identified by a derivative high-pass filter with a cutoff frequency chosen to decrease the effect of the noise contributions at high frequencies (Figure 2.13b). The filtered signal (Figure 2.13c) presents sharp and well-defined peaks that are easily recognized by a threshold value.

2.2.2 Signal Averaging

Traditional filtering performs very well when the frequency contents of signal and noise do not overlap. When the noise bandwidth is completely separated from the signal bandwidth, the noise can be decreased easily by means of a linear filter according to the procedures described earlier. On the other hand, when the signal and noise bandwidths overlap and the noise amplitude is large enough to seriously corrupt the signal, a traditional filter designed to cancel the noise will also introduce signal cancellation or, at least, distortion. As an example, let us consider the brain potentials evoked by a sensory stimulation (visual, acoustic, or somatosensory), generally called evoked potentials (EP). Such a response is very difficult to determine because its amplitude is generally much lower than the background EEG activity. Both EP and EEG signals contain information in the same frequency range; thus the problem of separating the desired response cannot be approached via traditional digital filtering [Aunon et al., 1981].
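The zero-placement idea behind such a notch filter can be sketched as follows (a minimal numpy illustration, with an assumed sampling rate of 500 Hz, not taken from the text): placing a conjugate zero pair exactly at the 50-Hz angle on the unit circle gives a simple FIR filter that nulls the mains component.

```python
import numpy as np

fs, f0 = 500.0, 50.0                     # assumed sampling rate, mains frequency
w0 = 2 * np.pi * f0 / fs                 # mains angle on the unit circle

# Conjugate zero pair at z = e^{+/- j w0}:
# H(z) = (1 - e^{jw0} z^-1)(1 - e^{-jw0} z^-1) = 1 - 2 cos(w0) z^-1 + z^-2
b = np.array([1.0, -2.0 * np.cos(w0), 1.0])
b /= b.sum()                             # normalize for unity gain at DC

def gain(b, f, fs):
    """|H(f)| of an FIR filter with coefficients b."""
    z = np.exp(-2j * np.pi * f / fs * np.arange(len(b)))
    return abs(np.dot(b, z))
```

With only two zeroes the notch is wide; sharper notches are usually obtained by adding poles just inside the unit circle at the same angle, at the cost of a longer transient.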
Another typical example is the detection of ventricular late potentials (VLP) in the ECG signal. These potentials are very small in amplitude and are comparable with the noise superimposed on the signal, whose frequency content they also share [Simson, 1981]. In such cases, an increase in the SNR may be achieved on the basis of the different statistical properties of signal and noise. When the desired signal repeats identically at each iteration (i.e., the EP at each sensory stimulus, the VLP at each cardiac cycle), the averaging technique can satisfactorily solve the problem of separating signal

Parametric methods are methodologically and computationally more complex than the nonparametric ones, since they require an a priori choice of the structure and of the order of the model of the signal-generation mechanism. Some tests are required a posteriori to verify the whiteness of the prediction error, such as the Anderson test (autocorrelation test) [Box and Jenkins, 1976], in order to test the reliability of the estimation. Postprocessing of the spectra can be performed, as for the nonparametric approaches, by integrating the P(f) function in predefined frequency ranges; however, AR modeling has the advantage of allowing a spectral decomposition for a direct and automatic calculation of the power and frequency of each spectral component. In the z-transform domain, the autocorrelation function (ACF) R(k) and the P(z) of the signal are related by the following expression:

R(k) = (1/2πj) ∮|z|=1 P(z) z^(k−1) dz   (2.31)

If the integral is calculated by means of the residue method, the ACF is decomposed into a sum of damped sinusoids, each one related to a pair of complex conjugate poles, and of damped exponential functions, related to the real poles [Zetterberg, 1969]. The Fourier transform of each of these terms gives the expression of the spectral component corresponding to the relevant pole or pole pair. The argument of the pole gives the central frequency of the component, while the power of the ith spectral component is the residue γi in the case of real poles and 2Re(γi) in the case of conjugate pole pairs. γi is computed from the following expression:

γi = (z − zi) P(z) z^(−1) |z=zi

It is advisable to point out the basic characteristics of the two approaches that have been described above: the nonparametric and the parametric.
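A minimal sketch of the parametric (AR) route, assuming Python/numpy (the function names and the synthetic AR(2) series are illustrative, not from the text), estimates the model coefficients via the Yule-Walker (autocorrelation) equations and evaluates the resulting spectrum:

```python
import numpy as np

def yule_walker_ar(y, order):
    """Estimate AR coefficients a_k and the prediction-error variance
    from the Yule-Walker equations (biased autocorrelation estimates)."""
    y = y - y.mean()
    n = len(y)
    r = np.array([np.dot(y[:n - k], y[k:]) / n for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:])          # prediction coefficients
    sigma2 = r[0] - np.dot(a, r[1:])       # prediction-error (whiteness) variance
    return a, sigma2

def ar_psd(a, sigma2, f, fs):
    """AR spectrum P(f) = sigma2*Ts / |1 - sum_k a_k e^{-j2pi f k Ts}|^2."""
    k = np.arange(1, len(a) + 1)
    den = np.abs(1 - np.sum(a * np.exp(-2j * np.pi * f / fs * k))) ** 2
    return sigma2 / fs / den

# Demonstration on a synthetic AR(2) series (assumed example):
# y(n) = 1.5 y(n-1) - 0.9 y(n-2) + e(n), poles at 0.95 e^{+/- j 0.66}
rng = np.random.default_rng(0)
y = np.zeros(6000)
e = rng.standard_normal(6000)
for n in range(2, 6000):
    y[n] = 1.5 * y[n - 1] - 0.9 * y[n - 2] + e[n]
a, s2 = yule_walker_ar(y[1000:], 2)        # discard the startup transient
```

The complex-conjugate pole pair of the fitted model shows up as a spectral peak near the pole angle, which is exactly what the residue-based decomposition above quantifies automatically.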
The latter (parametric) has evident advantages with respect to the former, which can be summarized as follows:

• It has greater statistical consistency, even on short segments of data; that is, under certain assumptions, a spectrum estimated through autoregressive modeling is a maximum entropy spectrum (MES).
• The spectrum is more easily interpretable, with an "implicit" filtering of what is considered random noise.
• An easier and more reliable calculation of the spectral parameters (postprocessing of the spectrum) is possible through the spectral decomposition procedure. Such parameters are directly interpretable from a physiologic point of view.
• There is no need to window the data in order to decrease the spectral leakage.
• The frequency resolution does not depend on the number of data points.

On the other hand, the parametric approach

• Is more complex from a methodologic and computational point of view.
• Requires an a priori definition of the kind of model (AR, MA, ARMA, or other) to be fitted and, mainly, of its complexity (i.e., the number of parameters). Some figures of merit introduced in the literature may be of help in determining their value [Akaike, 1974]. Still, this procedure may be difficult in some cases.

2.2.3.3 Example

As an example, let us consider the frequency analysis of the heart rate variability (HRV) signal. In Figure 2.16a, the time sequence of the RR intervals obtained from an ECG recording is shown. The RR intervals are expressed in seconds as a function of the beat number in the so-called interval tachogram. It is worth noting that the RR series is not constant but is characterized by oscillations of up to 10% of

The following hypotheses are considered in order to deal with a simple mathematical problem:

1. e(n) is supposed to be a white noise with uniform distribution
2. e(n) and x(n) are uncorrelated

First of all, it should be noted that the probability density of e(n) changes according to the adopted coding procedure. If we decide to round the real sample to the nearest quantization level, we have −Δ/2 ≤ e(n) < Δ/2, while if we decide to truncate the sample amplitude, we have −Δ ≤ e(n) < 0, where Δ is the quantization interval. The two probability densities are plotted in Figure 2.4e,f. The two ways of coding yield processes with different statistical properties. In the first case, the mean and variance of e(n) are

me = 0,  σe² = Δ²/12

while in the second case me = −Δ/2, and the variance is still the same. As expected, the variance is reduced when the quantization interval is reduced. Finally, it is possible to evaluate the signal-to-noise ratio (SNR) for the quantization process:

SNR = 10 log10(σx²/σe²) = 10 log10(σx²/(2^(−2b)/12)) = 6.02b + 10.79 + 10 log10(σx²)   (2.4)

having set Δ = 2^(−b), and where σx² is the variance of the signal and b is the number of bits used for coding. It should be noted that the SNR increases by almost 6 dB for each added bit of coding. Several forms of quantization are usually employed: uniform, nonuniform (preceding the uniform quantizer with a nonlinear block), or rough (small number of quantization levels and high quantization step). Details can be found in Carassa [1983], Jaeger [1982], and Widrow [1956].

2.2 Signal Processing

A brief review of different signal-processing techniques will be given in this section. They include traditional filtering, averaging techniques, and spectral estimators. Only the main concepts of analysis and design of digital filters are presented, and a few examples are illustrated in the processing of the ECG signal.
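The 6.02b + 10.79 + 10 log10(σx²) rule can be checked numerically; the sketch below (Python/numpy) uses a unit-range uniform test signal, an assumption made so that σx² = 1/12 and its 10 log10 term cancels the 10.79:

```python
import numpy as np

rng = np.random.default_rng(1)
b = 10                                # bits used for coding
delta = 2.0 ** (-b)                   # quantization step for a unit-range signal

x = rng.uniform(-0.5, 0.5, 200_000)   # test signal, sigma_x^2 = 1/12
xq = np.round(x / delta) * delta      # rounding quantizer
e = xq - x                            # quantization error, uniform in +/- delta/2

snr_meas = 10 * np.log10(x.var() / e.var())
snr_theo = 6.02 * b + 10.79 + 10 * np.log10(x.var())
# Both come out near 6.02 * 10 = 60.2 dB for this signal.
```

Repeating the run with b = 11 raises both figures by about 6 dB, as the text states.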
Averaging techniques will then be described briefly, and their usefulness evidenced when noise and signal have similar frequency contents but different statistical properties; an example of evoked-potential enhancement from the EEG background is illustrated. Finally, different spectral estimators will be considered and some applications shown in the analysis of RR fluctuations (i.e., the heart rate variability (HRV) signal).

2.2.1 Digital Filters

A digital filter is a discrete-time system that operates some transformation on a digital input signal x(n), generating an output sequence y(n), as schematically shown by the block diagram in Figure 2.5. The characteristics of the transformation T[·] identify the filter. The filter is said to be time-variant if T[·] is a function of time, and time-invariant otherwise, while it is said to be linear if, and only if, having x1(n) and x2(n) as inputs producing y1(n) and y2(n), respectively, we have

T[ax1 + bx2] = aT[x1] + bT[x2] = ay1 + by2   (2.5)

In the following, only linear, time-invariant filters will be considered, even if several interesting applications of nonlinear [Glaser and Ruchkin, 1976; Tompkins, 1993] or time-variant [Huta and Webster, 1973; Widrow et al., 1975; Cohen, 1983; Thakor, 1987] filters have been proposed in the literature for the analysis of biologic signals. The behavior of a filter is usually described in terms of input-output relationships. They are usually assessed by exciting the filter with different inputs and evaluating the response (output) of the system. In particular, if the input is the impulse sequence δ(n), the resulting output, the impulse response, has a relevant role in describing the characteristics of the filter. Such a response can be used to determine the response to more complicated input sequences. In fact, let us consider a generic input sequence x(n) as a sum of weighted and delayed impulses,

x(n) = Σk x(k) · δ(n − k),  k = −∞, …, +∞   (2.6)

and let us identify the response to δ(n − k) as h(n − k).
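The decomposition of Equation 2.6 can be turned directly into code: superposing shifted, weighted copies of the impulse response reproduces the convolution sum (a Python/numpy sketch; the 5-point moving-average h and the input values are assumed examples):

```python
import numpy as np

h = np.array([0.2, 0.2, 0.2, 0.2, 0.2])    # impulse response (5-point moving average)
x = np.array([1.0, 3.0, 2.0, -1.0, 0.5, 4.0])

# Eq. 2.6: x(n) is a sum of weighted, delayed impulses x(k) * delta(n - k);
# by linearity and time invariance, the output is the sum of the
# correspondingly weighted, delayed impulse responses x(k) * h(n - k).
y = np.zeros(len(x) + len(h) - 1)
for k, xk in enumerate(x):
    y[k:k + len(h)] += xk * h

assert np.allclose(y, np.convolve(x, h))   # identical to the convolution sum
```

This superposition is exactly why the impulse response fully characterizes a linear, time-invariant filter.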
If the filter is time-invariant, each delayed impulse will produce the same response, but time-shifted; due to the linearity property, such responses will be

FIGURE 2.5 General block diagram of a digital filter. The output digital signal y(n) = T[x(n)] is obtained from the input x(n) by means of a transformation T[·] which identifies the filter.

The transfer function can be expressed in a more useful form by finding the roots of both numerator and denominator:

H(z) = b0 z^(N−M) Πm (z − zm) / Πk (z − pk),  m = 1, …, M;  k = 1, …, N   (2.14)

where the zm are the zeroes and the pk are the poles. It is worth noting that H(z) presents N − M zeroes at the origin of the z plane and M zeroes elsewhere (N zeroes in total) and N poles. The pole-zero form of H(z) is of great interest because several properties of the filter are immediately available from the geometry of poles and zeroes in the complex z plane. In fact, it is possible to easily assess stability and to roughly estimate the frequency response by visual inspection, without making any calculations. Stability is verified when all poles lie inside the unit circle, as can be proved by considering the relationship between the z-transform and the Laplace s-transform and by observing that the left half of the s plane is mapped inside the unit circle [Jackson, 1986; Oppenheim and Schafer, 1975]. The frequency response can be estimated by noting that (z − zm)|z=e^jωTs is a vector joining the mth zero to the point on the unit circle identified by the angle ωTs. Defining

Bm = (z − zm)|z=e^jωTs,  Ak = (z − pk)|z=e^jωTs   (2.15)

we obtain

|H(ω)| = b0 Πm |Bm| / Πk |Ak|,  ∠H(ω) = Σm ∠Bm − Σk ∠Ak + (N − M)ωTs   (2.16)

Thus the modulus of H(ω) can be evaluated at any frequency ω° by computing the distances between the poles and zeroes and the point on the unit circle corresponding to ω = ω°, as evidenced in Figure 2.8, where a filter with two pairs of complex poles and three zeroes is considered.
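Equation 2.16 lends itself to a quick numerical check (a Python/numpy sketch with an assumed pole-zero geometry, not the one of Figure 2.8): the product of distances from the zeroes divided by the product of distances from the poles matches the direct evaluation of H(z) on the unit circle.

```python
import numpy as np

Ts = 1.0
zeros = np.array([1.0, -1.0])                       # assumed example geometry
poles = np.array([0.9 * np.exp(1j * np.pi / 4),
                  0.9 * np.exp(-1j * np.pi / 4)])
b0 = 1.0

def H_geometric(w):
    """|H(w)| from the distances between z = e^{jwTs} and each zero/pole,
    as in Eq. 2.16 (the z^(N-M) factor has unit modulus on the circle)."""
    z = np.exp(1j * w * Ts)
    B = np.abs(z - zeros)                           # vectors from the zeroes
    A = np.abs(z - poles)                           # vectors from the poles
    return b0 * np.prod(B) / np.prod(A)

def H_direct(w):
    """|H(w)| by evaluating the rational transfer function directly."""
    z = np.exp(1j * w * Ts)
    num = b0 * z ** (len(poles) - len(zeros)) * np.prod(z - zeros)
    return abs(num / np.prod(z - poles))
```

With the poles near the circle at ±π/4, |H(ω)| peaks near ω = π/4, illustrating the visual-inspection rule discussed next.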

To obtain the estimate of H(ω), we move around the unit circle and roughly evaluate the effect of poles and zeroes by keeping in mind a few rules [Challis and Kitney, 1982]: (1) when we are close to a zero, |H(ω)| will approach zero, and a positive phase shift will appear in ∠H(ω) as the vector from the zero reverses its angle; (2) when we are close to a pole, |H(ω)| will tend to peak, and a negative phase change is found in ∠H(ω) (the closer the pole to the unit circle, the sharper the peak, until it becomes infinite and the filter becomes unstable); and (3) near a close pole-zero pair, the response modulus will tend to zero or infinity according to whether the zero or the pole is closer, while far from this pair the modulus can be considered unitary. As an example, it is possible to compare the modulus and phase diagrams of Figure 2.8b,c with the relative geometry of the poles and zeroes of Figure 2.8a.

2.2.1.3 FIR and IIR Filters

A common way of classifying digital filters is based on the characteristics of their impulse response. For finite impulse response (FIR) filters, h(n) is composed of a finite number of nonzero values, while for infinite impulse response (IIR) filters, h(n) oscillates up to infinity with nonzero values. It is clearly evident that, in order to obtain an infinite response to an impulse in input, the IIR filter must contain some feedback that sustains the output as the input vanishes. The presence of feedback paths requires paying particular attention to filter stability. Even if FIR filters are usually implemented in a nonrecursive form and IIR filters in a recursive form, the two classifications do not coincide. In fact, as shown by the following example, an FIR filter can be expressed in a recursive form:

H(z) = Σk z^(−k) = [Σk z^(−k)] (1 − z^(−1)) / (1 − z^(−1)) = (1 − z^(−N)) / (1 − z^(−1)),  k = 0, …, N − 1   (2.17)

for a more convenient computational implementation. As shown previously, two important requirements for filters are stability and a linear phase response.
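Equation 2.17 can be verified by implementing the same N-point moving sum both ways (a Python sketch; numpy is used only for the arrays):

```python
import numpy as np

def moving_sum_direct(x, N):
    """Nonrecursive (FIR) form: y(n) = x(n) + x(n-1) + ... + x(n-N+1)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = sum(x[max(0, n - N + 1):n + 1])
    return y

def moving_sum_recursive(x, N):
    """Recursive form of the same FIR filter, from
    H(z) = (1 - z^-N)/(1 - z^-1):  y(n) = y(n-1) + x(n) - x(n-N)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        y[n] = (y[n - 1] if n > 0 else 0.0) \
               + x[n] - (x[n - N] if n >= N else 0.0)
    return y
```

The recursive form costs two additions per sample regardless of N, which is the computational convenience the text refers to; the pole at z = 1 is exactly cancelled by a zero, so the filter remains FIR and stable.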
FIR filters can be easily designed to fulfill such requirements; they are always stable (having no poles outside the origin), and the linear phase response is obtained by constraining the impulse response coefficients to be symmetric around their midpoint. Such a constraint implies

bm = ±b*M−m   (2.18)

where the bm are the M coefficients of an FIR filter and * denotes complex conjugation. The sign + or − holds according to the symmetry (even or odd) and the value of M (even or odd). This is a necessary and sufficient condition for FIR filters to have a linear phase response. Two cases of impulse response that yield a linear phase filter are shown in Figure 2.9.
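The symmetry condition of Equation 2.18 can be checked numerically for real, even-symmetric coefficients (a Python/numpy sketch with randomly chosen coefficients): a symmetric impulse response yields H(ω) = A(ω)e^(−jω(M−1)/2) with A(ω) real, i.e., a constant group delay of (M−1)/2 samples.

```python
import numpy as np

rng = np.random.default_rng(2)
half = rng.standard_normal(6)
b = np.concatenate([half, half[::-1]])     # even symmetry: b_m = b_{M-1-m}
M = len(b)

w = np.linspace(0.1, np.pi - 0.1, 200)     # frequency grid (Ts = 1)
k = np.arange(M)
H = np.exp(-1j * np.outer(w, k)) @ b       # frequency-response samples

# Remove the pure delay e^{-jw(M-1)/2}; what is left should be real,
# which is exactly the linear-phase property.
A = H * np.exp(1j * w * (M - 1) / 2)
```

An antisymmetric response (odd symmetry) gives the same result with A(ω) purely imaginary, corresponding to the ± sign in Equation 2.18.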

diminishing sidelobe amplitude implies that the leakage is not relevant at high frequencies. It is important to recall that averaging is based on the hypotheses of a broadband distribution of the noise and a lack of correlation between signal and noise. Unfortunately, these assumptions are not always verified in biologic signals. For example, the assumption of independence between the background EEG and the evoked potential may not be completely realistic [Gevins and Remond, 1987]. In addition, much attention must be paid to the alignment of the sweeps; in fact, slight misalignments (fiducial-point jitter) will produce a low-pass filtering effect on the final result.

2.2.2.1 Example

As mentioned previously, one of the fields in which the signal-averaging technique is employed extensively is the evaluation of cerebral evoked responses after sensory stimulation. Figure 2.15a shows the EEG recorded from the scalp of a normal subject after a somatosensory stimulation released at time t = 0. The evoked potential (N = 1) is not visible because it is buried in the background EEG (upper panel). The successive panels show the same evoked potential after averaging different numbers of sweeps, corresponding to the frequency responses shown in Figure 2.14. As N increases, the SNR is improved by a factor √N (in rms value), and the morphology of the evoked potential becomes more recognizable, while the EEG contribution is markedly diminished. In this way it is easy to evaluate quantitative indices of clinical interest, such as the amplitude and latency of the relevant waves.
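The √N improvement can be illustrated with a small simulation (a Python/numpy sketch; the Gaussian bump standing in for the evoked potential, and the noise level, are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(256)
ep = np.exp(-((t - 80) / 15.0) ** 2)       # assumed evoked-potential template
sigma = 3.0                                # background "EEG" noise, std >> signal

def averaged_sweeps(N):
    """Average N sweeps, each the same template plus independent noise."""
    sweeps = ep + sigma * rng.standard_normal((N, len(t)))
    return sweeps.mean(axis=0)

# Residual noise power after averaging N sweeps drops as sigma^2 / N,
# so the rms SNR improves by sqrt(N): about 10x for N = 100.
res1 = averaged_sweeps(1) - ep
res100 = averaged_sweeps(100) - ep
```

Introducing a random jitter in the sweep alignment before averaging visibly smooths the recovered template, which is the low-pass effect of fiducial-point jitter mentioned above.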

evaluated as follows: assuming a complex sinusoid x(n) = e^jωnTs as input, the corresponding filter output will be

y(n) = Σk h(k) e^jωTs(n−k) = e^jωnTs Σk h(k) e^(−jωkTs) = x(n) · H(z)|z=e^jωTs,  k = 0, …, ∞   (2.11)

Thus a sinusoid at the input is still the same sinusoid at the output, but multiplied by a complex quantity H(ω). Such a complex function defines the response of the filter for each input sinusoid of angular frequency ω, and it is known as the frequency response of the filter. It is evaluated in the complex z plane by computing H(z) for z = e^jωTs, namely, on the point locus that describes the unit circle in the z plane (|e^jωTs| = 1). As a complex function, H(ω) is defined by its modulus |H(ω)| and its phase ∠H(ω), as shown in Figure 2.6 for a moving average filter of order 5. The figure indicates that the lower-frequency components pass through the filter almost unaffected, while the higher-frequency components are drastically reduced. It is usual to express the horizontal axis of the frequency response from 0 to π: since only frequencies up to ωs/2 are reconstructable (due to the Shannon theorem), the horizontal axis reports the value of ωTs, which goes from 0 to π. Furthermore, Figure 2.6b demonstrates that the phase is piecewise linear, and in correspondence with the zeros of |H(ω)| there is a phase change of π. According to their frequency response, filters are usually classified as (1) low-pass, (2) high-pass, (3) bandpass, or (4) bandstop filters. Figure 2.7 shows the ideal frequency responses for such filters with the proper low- and high-frequency cutoffs.
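The order-5 moving average discussed around Figure 2.6 is easy to evaluate directly (a Python/numpy sketch, taking Ts = 1):

```python
import numpy as np

h = np.full(5, 1 / 5)                      # order-5 moving average
k = np.arange(5)

def H(w):
    """Frequency response H(w) = H(z) evaluated at z = e^{jw} (Ts = 1)."""
    return np.sum(h * np.exp(-1j * w * k))

# |H| is 1 at w = 0, falls with frequency, and has exact zeros at
# w = 2*pi*m/5 (m = 1, 2), where the phase jumps by pi.
```

Plotting abs(H(w)) and np.angle(H(w)) over 0 ≤ w ≤ π reproduces the low-pass modulus and piecewise-linear phase described in the text.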
For a large class of linear, time-invariant systems, H(z) can be expressed in the following general form:

H(z) = [Σm bm z^(−m)] / [1 + Σk ak z^(−k)],  m = 0, …, M;  k = 1, …, N   (2.12)

which describes in the z domain the following difference equation in the discrete time domain:

y(n) = −Σk ak y(n − k) + Σm bm x(n − m)   (2.13)

When at least one of the ak coefficients is different from zero, some past output values contribute to the current output. The filter contains some feedback, and it is said to be implemented in a recursive form. On the other hand, when the ak values are all zero, the filter output is obtained only from the current or previous inputs, and the filter is said to be implemented in a nonrecursive form.
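Equation 2.13 translates almost line by line into code (a Python/numpy sketch; the coefficient sets at the bottom are illustrative, not from the text):

```python
import numpy as np

def difference_equation(x, b, a):
    """y(n) = -sum_k a_k y(n-k) + sum_m b_m x(n-m)   (Eq. 2.13),
    with a = [a_1, ..., a_N] (a_0 = 1 implicit) and b = [b_0, ..., b_M];
    samples before n = 0 are taken as zero (zero initial conditions)."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = sum(b[m] * x[n - m] for m in range(len(b)) if n - m >= 0)
        acc -= sum(a[k - 1] * y[n - k]
                   for k in range(1, len(a) + 1) if n - k >= 0)
        y[n] = acc
    return y

# a_k all zero -> nonrecursive (FIR); any nonzero a_k -> recursive (feedback).
x = np.array([1.0, 0.0, 0.0, 0.0, 0.0])                 # impulse input
y_fir = difference_equation(x, b=[0.5, 0.5], a=[])      # finite response
y_iir = difference_equation(x, b=[1.0], a=[-0.5])       # y(n) = 0.5 y(n-1) + x(n)
```

Feeding an impulse, as above, returns the impulse response directly: two nonzero samples for the FIR case, a geometric decay (never exactly zero) for the recursive one.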


from noise. This technique sums a set of temporal epochs of the signal together with the superimposed noise. If the time epochs are properly aligned, through efficient trigger-point recognition, the signal waveforms sum directly. Assume the signal and the noise have the following statistical properties:

1. All the signal epochs contain a deterministic signal component x(n) that does not vary across the epochs.

2. The superimposed noise w(n) is a broadband stationary process with zero mean and variance σ², so that

E[w(n)] = 0,  E[w²(n)] = σ²  (2.20)

3. Signal x(n) and noise w_i(n) are uncorrelated, so that the recorded signal y(n) at the ith iteration can be expressed as

y_i(n) = x(n) + w_i(n)  (2.21)

Then the averaging process yields y_t:

y_t(n) = (1/N) Σ_{i=1}^{N} y_i(n) = x(n) + (1/N) Σ_{i=1}^{N} w_i(n)  (2.22)

The noise term is an estimate of the mean obtained by averaging N realizations. Such an average is a new random variable that has the same mean as the summed terms (zero in this case) and variance σ²/N. The effect of the coherent averaging procedure is thus to maintain the amplitude of the signal and reduce the variance of the noise by a factor of N. The improvement in the SNR (in rms value) with respect to the SNR_i at the generic ith sweep is

SNR = SNR_i · √N  (2.23)

Thus signal averaging improves the SNR by a factor of √N in rms value. A coherent averaging procedure can be viewed as a digital filtering process, and its frequency characteristics can be investigated. From Equation 2.17, through the z-transform, the transfer function of the filtering operation results in

H(z) = (1 + z^{−h} + z^{−2h} + ⋯ + z^{−(N−1)h}) / N  (2.24)

where N is the number of elements in the average, and h is the number of samples in each response.
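The σ/√N reduction in residual noise can be checked with a small simulation. This is a hedged sketch (Python/NumPy; the sinusoidal choice of x(n), σ = 1, and N = 100 sweeps are arbitrary illustrative assumptions): averaging N = 100 aligned epochs should leave a residual noise standard deviation near σ/√N = 0.1.

```python
import numpy as np

rng = np.random.default_rng(0)

n = np.arange(256)
x = np.sin(2 * np.pi * n / 64)   # deterministic component x(n), same in every epoch
sigma = 1.0                       # noise standard deviation
N = 100                           # number of sweeps

# y_i(n) = x(n) + w_i(n): N aligned epochs with independent zero-mean noise
sweeps = x + rng.normal(0.0, sigma, size=(N, len(n)))

# Coherent average across sweeps (Equation 2.22)
yt = sweeps.mean(axis=0)

residual = yt - x
print(residual.std())   # close to sigma / sqrt(N) = 0.1
```

The signal amplitude is preserved (the average of identical copies of x(n) is x(n) itself), while the noise variance drops by the factor N, as derived above.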
An alternative expression for H(z) is

H(z) = (1/N) · (1 − z^{−Nh}) / (1 − z^{−h})  (2.25)

This is a moving average low-pass filter as discussed earlier, where the output is a function of the preceding values with a lag of h samples; in practice, the filter operates not on the time sequence but on corresponding samples across the sweep sequence. The frequency response of the filter is shown in Figure 2.14 for different values of the parameter N. In this case, the sampling frequency fs is the repetition frequency of the sweeps, and we may assume it to be 1 without loss of generality. The frequency response is characterized by a main lobe whose first zero corresponds to f = 1/N and by successive secondary lobes separated by zeros at intervals of 1/N. Both the width of each lobe and the amplitude of the secondary lobes decrease as the number N of sweeps increases. The desired signal is sweep-invariant and is unaffected by the filter, while the broadband noise is reduced. Some leakage of noise energy takes place at the centers of the sidelobes and, of course, at zero frequency. Under the hypothesis of zero-mean noise, the dc component has no effect, and the


its mean value. These oscillations are not random but are the effect of the action of the autonomic nervous system in controlling heart rate. In particular, the frequency analysis of such a signal (Figure 2.16b shows the PSD obtained by means of an AR model) has evidenced three principal contributions to the overall variability of the HRV signal. A very low frequency (VLF) component is due to long-term regulation mechanisms that cannot be resolved by analyzing a few minutes of signal (3 to 5 min are generally studied in the traditional spectral analysis of the HRV signal); other techniques are needed for a complete understanding of such mechanisms. The low-frequency (LF) component is centered around 0.1 Hz, in a range between 0.03 and 0.15 Hz; an increase in its power has consistently been observed in relation to sympathetic activation. Finally, the high-frequency (HF) component, in synchrony with the respiration rate, is due to the respiratory activity mediated by the vagus nerve; thus it can be a marker of vagal activity. In particular, LF and HF power, both in absolute and in normalized units (i.e., as a percentage of the total power minus the VLF contribution), and their ratio LF/HF are quantitative indices widely employed for quantifying the sympathovagal balance in controlling heart rate [Malliani et al., 1991].

2.3 Conclusion

The basic aspects of signal acquisition and processing have been illustrated, intended as fundamental tools for the treatment of biologic signals. A few examples were also reported relative to the ECG signal, as well as EEG signals and EPs. Particular processing algorithms have been described that use digital filtering techniques, coherent averaging, and power spectrum analysis as reference examples of how traditional or innovative techniques of digital signal processing may impact the phase of informative parameter extraction from biologic signals.
They may improve the knowledge of many physiologic systems as well as help clinicians in dealing with new quantitative parameters that could better discriminate between normal and pathologic cases.

Defining Terms

Aliasing: Phenomenon that takes place when, in A/D conversion, the sampling frequency fs is lower than twice the frequency content fb of the signal; frequency components above fs/2 are folded back and summed onto the lower-frequency components, distorting the signal.

Averaging: Filtering technique based on the summation of N stationary waveforms buried in random broadband noise. The SNR is improved by a factor of √N.

Frequency response: A complex quantity that, multiplied by a sinusoid input of a linear filter, gives the output sinusoid. It completely characterizes the filter and is the Fourier transform of the impulse response.

Impulse response: Output of a digital filter when the input is the impulse sequence δ(n). It completely characterizes linear filters and is used for evaluating the output corresponding to different kinds of inputs.

Notch filter: A stopband filter whose stopped band is very sharp and narrow.

Parametric methods: Spectral estimation methods based on the identification of a signal-generating model. The power spectral density is a function of the model parameters.

Quantization error: Error added to the signal during the A/D procedure, due to the fact that the analog signal is represented by a digital signal that can assume only a limited and predefined set of values.

Region of convergence: In the z-transform plane, the set of complex z points that makes a series converge to a finite value.


literature a deeper insight into the various subjects, for both the fundamentals of digital signal processing and the applications.

2.1 Acquisition

A schematic representation of a general acquisition system is shown in Figure 2.1. Several physical magnitudes are usually measured from biologic systems. They include electromagnetic quantities (currents, potential differences, field strengths, etc.), as well as mechanical, chemical, or generally nonelectrical variables (pressure, temperature, movements, etc.). Electric signals are detected by sensors (mainly electrodes), while nonelectric magnitudes are first converted by transducers into electric signals that can be easily treated, transmitted, and stored. Several books on biomedical instrumentation give detailed descriptions of the various transducers and the hardware requirements associated with the acquisition of the different biologic signals [Tompkins and Webster, 1981; Cobbold, 1988; Webster, 1992].

An analog preprocessing block is usually required to amplify and filter the signal (in order to make it satisfy the requirements of the hardware, such as the dynamic range of the analog-to-digital converter), to compensate for some unwanted sensor characteristics, or to reduce the portion of undesired noise. Moreover, the continuous-time signal should be bandlimited before analog-to-digital (A/D) conversion. Such an operation is needed to reduce the effect of aliasing induced by sampling, as will be described in the next section. Here it is important to remember that the acquisition procedure should preserve the information contained in the original signal waveform. This is a crucial point when recording biologic signals, whose characteristics may often be considered by physicians as indices of some underlying pathologies (e.g., the ST-segment displacement on an ECG signal can be considered a marker of ischemia, the spike-and-wave pattern on an EEG tracing can be a sign of epilepsy, and so on).
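The folding effect that the analog antialiasing filter prevents can be seen in a short numerical check. In this sketch (Python/NumPy; the 100 Hz sampling rate and 70 Hz tone are illustrative assumptions), a component above fs/2 produces exactly the same samples as a lower-frequency one, so the two become indistinguishable after sampling:

```python
import numpy as np

fs = 100.0       # sampling frequency, Hz
f_true = 70.0    # tone above fs/2 = 50 Hz
n = np.arange(32)
t = n / fs       # sampling instants

# Sampling a 70 Hz cosine at 100 Hz...
x_high = np.cos(2 * np.pi * f_true * t)
# ...yields the same samples as a cosine folded back to fs - f = 30 Hz.
x_alias = np.cos(2 * np.pi * (fs - f_true) * t)

print(np.allclose(x_high, x_alias))  # the two sampled tones coincide
```

Once sampled, no processing can recover which of the two frequencies was present; hence the bandlimiting must happen in the analog domain, before the sampler.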
Thus the acquisition system should not introduce any form of distortion that can be misleading or can destroy real pathologic alterations. For this reason, the analog prefiltering block should be designed with a constant-modulus and linear-phase (or zero-phase) frequency response, at least in the passband, over the frequencies of interest. Such requirements ensure that the signal arrives undistorted at the A/D converter. The analog waveform is then A/D converted into a digital signal; that is, it is transformed into a series of numbers, discretized both in time and amplitude, that can be easily managed by digital processors. The A/D conversion can ideally be divided into two steps, as shown in Figure 2.1: the sampling process, which converts the continuous signal into a discrete-time series, whose elements are named samples, and a quantization procedure, which assigns to each sample an amplitude value within a set of predetermined discrete values. Both processes modify the characteristics of the signal, and their effects will be discussed in the following sections.

2.1.1 The Sampling Theorem

The advantages of processing a digital series instead of an analog signal have been reported previously. Furthermore, the basic property of using a sampled series instead of its continuous waveform lies in the fact that the former, under certain hypotheses, is completely representative of the latter. When this happens, the continuous waveform can be perfectly reconstructed just from the series of sampled


summed at the output:

y(n) = Σ_{k=−∞}^{∞} x(k) · h(n − k)  (2.7)

This convolution product links input and output and defines the properties of the filter. Two of them should be recalled: stability and causality. The former ensures that bounded (finite) inputs produce bounded outputs. This property can be deduced from the impulse response; it can be proved that the filter is stable if and only if

Σ_{k=−∞}^{∞} |h(k)| < ∞  (2.8)

Causality means that the filter does not respond to an input before the input is applied. This is in agreement with our physical concept of a system, but it is not strictly required for a digital filter, which can be implemented in a noncausal form. A filter is causal if and only if

h(k) = 0 for k < 0  (2.8a)

Even though Equation 2.7 completely describes the properties of the filter, it is most often necessary to express the input-output relationships of linear discrete-time systems through the z-transform operator, which allows one to express Equation 2.7 in a more useful, operative, and simpler form.

2.2.1.1 The z-Transform

The z-transform of a sequence x(n) is defined by [Rainer et al., 1972]

X(z) = Σ_{k=−∞}^{∞} x(k) · z^{−k}  (2.9)

where z is a complex variable. This series will converge or diverge for different z values. The set of z values which makes Equation 2.9 converge is the region of convergence, and it depends on the series x(n) considered. Among the properties of the z-transform, we recall:

• The delay (shift) property: if w(n) = x(n − T), then W(z) = X(z) · z^{−T}  (2.9a)

• The product of convolution: if w(n) = Σ_{k=−∞}^{∞} x(k) · y(n − k), then W(z) = X(z) · Y(z)  (2.9b)

2.2.1.2 The Transfer Function in the z-Domain

Thanks to the latter property, we can express Equation 2.7 in the z-domain as a simple multiplication:

Y(z) = H(z) · X(z)  (2.10)

where H(z), known as the transfer function of the filter, is the z-transform of the impulse response. H(z) plays a relevant role in the analysis and design of digital filters. The response to input sinusoids can be


