Cohen EEG


Uncertainty in time frequency analysis

When analyzing data there is inherent uncertainty in either time or frequency; improving precision in one worsens it in the other, so you must choose which of the two is more important for your analysis

Resolution

the number of data samples per unit time

Problems with FT

- EEG data are not 'well-behaved' (they have changing variance, mean, and frequency structure), and so violate the assumptions needed for the FT. This affects the size of peaks and thus dirties the readings. The solution is to use temporally localized decomposition methods (e.g., wavelet convolution, filter-Hilbert, short-time FFT)
- Changes in frequency structure over time are hard to visualize, because the Fourier transform uses sine waves, which have no temporal localization

Eye tracker

- It will facilitate preprocessing and cleaning the dataset by allowing you to remove trials in which the subject looked away from a fixation spot
- EEG data during a saccade will contain oculomotor artifacts that must be carefully removed or avoided before the data can be interpreted
- Information on pupil dilation can also be used, which might be helpful

Considerations in preprocessing

- Many preprocessing choices and steps depend on details of the experiment design, the equipment used to collect the data, the analyses you plan on performing, and idiosyncratic protocols and preferences that you or your research group have developed
- It is a good idea to keep track of all the details of preprocessing for each subject, such as which trials were rejected, which electrodes were interpolated, and which independent components were removed from the data. This way, your results can be replicated from the raw data if necessary
- Some preprocessing choices may introduce biases to the data, so you should use the same preprocessing procedures for all conditions

Response EMG or force grips

- Recording simultaneous EMG from the muscles used to indicate responses will allow you to examine cortical-muscular connectivity, and will allow you to identify trials in which subjects twitched the muscles of the incorrect hand and then pressed the button of the correct hand
- Force grips can also be used and provide largely consistent findings compared to EMG

Limitations of ERPs

- Interpretational issues, particularly with regard to interpreting null results
- Task-related information can be lost during ERP averaging
- They provide limited opportunities for linking results to physiological mechanisms
- The neurophysiological mechanisms that produce ERPs are less well understood than the neurophysiological mechanisms that produce oscillations

How Many Cycles Should Be Used for the Gaussian Taper?

- The number of cycles of the Gaussian taper defines its width, which in turn defines the width of the wavelet
- This parameter controls the trade-off between temporal and frequency precision (a larger number of cycles gives you better frequency precision at the cost of worse temporal precision, and a smaller number of cycles gives you better temporal precision at the cost of worse frequency precision; the Heisenberg uncertainty principle)
- If you are looking for transient changes in activity, a smaller number of cycles (around three or four) will be better
- If you have a relatively long trial period in which you expect frequency-band-specific activity (this could be the case during an extended visual presentation or a working memory delay), a larger number of cycles (seven to ten) will facilitate identifying temporally sustained activity
- If you would like to distinguish activity at frequencies within a narrow range (for example, separating lower alpha from upper alpha), you should use a larger number of cycles and accept the decreased temporal precision
- If you have hypotheses about how quickly a neural response can dissociate condition A from condition B, you should use a smaller number of cycles and accept the decreased frequency precision
- If both temporal precision and frequency precision are important for your study, you can perform the analysis twice: once with a smaller number of cycles to determine the timing of the neural events and once with a larger number of cycles to determine the frequency bands of the neural events

Oculomotor activity

- These artifacts can be minimized through experiment design by presenting visual stimuli at a central location on the experiment monitor
- Having an eye tracker is ideal for trial rejection based on eye movements
- If you do not have an eye tracker, you can use horizontal and vertical electrooculogram (EOG) electrodes
- Microsaccades can be more difficult to detect without an eye tracker, because the small eye movements may be indistinguishable from noise in all but the cleanest EOG data
- If your main hypotheses concern anterior frontal or lateral frontal regions, EOG artifacts are a serious concern; if your main hypotheses concern midcentral electrodes, EOG artifacts are less of a concern because they are less likely to be measured at these electrodes
- Some spatial filtering techniques can help isolate potential EOG artifacts

dot product calculation

- To compute the dot product, you simply multiply each element in one vector by the corresponding element in the other vector and then sum all the resulting products
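The calculation above can be sketched in a few lines of Python (the source references MATLAB; this pure-Python version is only an illustration, not from the source):

```python
def dot_product(v, w):
    """Multiply corresponding elements of two equal-length vectors, then sum."""
    assert len(v) == len(w), "vectors must have the same length"
    return sum(a * b for a, b in zip(v, w))

# Identical vectors give a large positive dot product; orthogonal vectors give zero.
print(dot_product([1, 2, 3], [1, 2, 3]))   # 14
print(dot_product([1, 0], [0, 1]))         # 0
```

This is the same operation performed at every time step of convolution, so it is the core building block for the convolution entries below.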

Electromyography (EMG) artifacts

- Trials with excessive EMG activity in the EEG channels should be removed
- EMG is noticeable as bursts of 20- to 40-Hz activity, often has relatively large amplitude, and is typically maximal in electrodes around the face, neck, and ears
- EMG bursts are deleterious for EEG data if you plan on analyzing activity above 15 Hz

Should Frequencies Be Linearly or Logarithmically Spaced?

- Usually logarithmically, unless your analysis is concerned with higher-frequency activity

How Long Should Wavelets Be?

- Wavelets should be long enough that the lowest-frequency wavelet tapers to zero (or extremely close to zero) at both the negative and positive ends of its time window; otherwise edge artifacts will be present in the results
- Also remember to center the wavelets in their time windows

decibel (dB)

- A ratio between the strength of one signal (frequency-band-specific power) and the strength of another signal (a baseline level of power in that same frequency band)
- Convenient because it is robust to many limitations of raw power, such as 1/f power scaling and subject-specific and electrode-specific idiosyncratic characteristics
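The standard conversion is dB = 10 * log10(power / baseline power). A minimal Python sketch (illustrative; the values are made up):

```python
import math

def to_decibels(power, baseline_power):
    """Decibel conversion: 10 * log10(signal power / baseline power)."""
    return 10 * math.log10(power / baseline_power)

# Doubling the baseline power is about +3 dB; equal power is exactly 0 dB.
print(round(to_decibels(2.0, 1.0), 2))   # 3.01
print(to_decibels(5.0, 5.0))             # 0.0
```

Because the output is a ratio on a log scale, the same dB value means the same relative change regardless of the raw power magnitude, which is what makes it robust to 1/f scaling and electrode idiosyncrasies.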

Intertrial intervals (ITI)

- Because of some temporal smoothing introduced by time-frequency decomposition, and partly because time-frequency responses may linger for hundreds of milliseconds after an experiment event has ended, it is ideal to have experiment events within a trial separated by at least several hundred milliseconds
- For some kinds of experiments this can be difficult to achieve, for example if a button press quickly follows a stimulus, or if the stimuli need to appear closely spaced in time, as in the attentional blink task
- If the approximate duration of the electrophysiological response to a stimulus is not known, you could pilot the task using long interevent durations to see how long the electrophysiological response to each event lasts

dot product definitions

- Can be thought of as a sum of elements in one vector weighted by elements of another vector (a signal-processing interpretation)
- As a covariance or similarity between two vectors (a statistical interpretation)
- As a mapping between vectors: the product of the magnitudes of the two vectors scaled by the cosine of the angle between them (a geometric interpretation)
- Pretty much a projection of one vector onto another, which makes the size of the dot product a measure of how similar the vectors are

Considerations in epoching

- Choosing which point to call time "0" may be difficult in some cases, such as when a stimulus is followed by a response or by several further stimuli
- It can be advantageous to epoch as widely as possible, since the data can be re-epoched later
- Epochs can be fairly short for ERPs, with only the period of interest plus a baseline period being needed. They should be longer for time-frequency analysis, however, to avoid contaminating your results with edge artifacts

Convolution calculation

- Convolution is performed by computing the dot product between two vectors, shifting one vector in time relative to the other, computing the dot product again, and so on
- In convolution, one vector is considered the signal and the other vector is considered the kernel. This assignment is only for convenience
Steps:
1. Flip the kernel
2. Drag the kernel along the signal, multiplying it point-by-point with the overlapping signal values
3. Sum all the products; this sum is the result at the time point under the center of the kernel
For this reason it is convenient (but not necessary) to have a kernel with an odd number of points
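The steps above can be sketched in pure Python. This is a 'same'-length, zero-padded variant (the padding choice is an assumption for illustration, not from the source):

```python
def convolve_same(signal, kernel):
    """'Same'-length convolution: flip the kernel, slide it along the signal,
    and at each step sum the element-wise products; the sum is assigned to the
    time point under the kernel's center."""
    assert len(kernel) % 2 == 1, "odd-length kernel keeps the center unambiguous"
    flipped = kernel[::-1]                   # step 1: flip the kernel
    half = len(kernel) // 2
    out = []
    for center in range(len(signal)):        # step 2: drag the kernel along
        total = 0.0
        for k in range(len(flipped)):
            idx = center - half + k
            if 0 <= idx < len(signal):       # zero-pad outside the signal
                total += signal[idx] * flipped[k]
        out.append(total)                    # step 3: sum -> value at center
    return out

# A 3-point moving-average kernel smears a single spike across its neighbors:
print(convolve_same([0, 0, 3, 0, 0], [1/3, 1/3, 1/3]))
```

With a symmetric kernel the flip has no visible effect, but it is what distinguishes convolution from cross-covariance (see that entry below).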

The Purpose of Convolution for EEG Data Analyses

- Convolution is used to isolate frequency-band-specific activity and to localize that frequency-band-specific activity in time
- This is done by convolving wavelets (time-limited sine waves) with EEG data. As the wavelet (the convolution kernel) is dragged along the EEG data (the convolution signal), it reveals when and to what extent the EEG data contain features that look like the wavelet

Using convolution with wavelets as a filter

- Convolving EEG data with a wavelet at a specific frequency is similar to bandpass filtering the data around that same frequency
- A Morlet wavelet is a special case of a bandpass filter in which the frequency response is Gaussian-shaped

TF decomposition with wavelets

- As with the FT, you use several wavelets of differing frequencies to get a more complete picture of the power at each frequency
- However, the frequencies of the wavelets can be specified by you, rather than being determined by the number of time points
- You can use as many wavelets as you want and specify any frequencies for those wavelets

Reference

- Common choices: mastoids, grid average, grand average
- The use of a reference is tricky and much discussed in the literature; the 'safest' reference is the mastoids
- Everything can be re-referenced offline

Non-phase-locked task related activity (induced)

- Phase is different on each trial, even if the activity is still time-locked to the trial events
- Can be observed in time-frequency-domain averaging but not in time-domain averaging (thus, it has no representation in the ERP)
- Non-phase-locked activity is taken as stronger evidence for the presence of oscillations

Interpolating

- The process by which data from missing electrodes are estimated based on the activity and locations of other electrodes
- Most interpolation algorithms use a weighted distance metric such as nearest-neighbor, linear, or spline
- The more electrodes you have, the more accurate the estimation will be
- It is preferable not to have to interpolate

Trial rejection

- Criteria really depend on the person; more on this later
- Time-frequency analyses are more affected than ERPs by sharp edges, so keep this in mind

Preprocessing

- Refers to any transformations or reorganizations that occur between collecting the data and analyzing the data:
1. organizing the data to facilitate analyses without changing any of the data (e.g., extracting epochs from continuous data)
2. removing bad or artifact-ridden data without changing clean data (e.g., removing bad electrodes or rejecting epochs with artifacts)
3. modifying otherwise clean data (e.g., applying temporal filters or spatial transformations)

The flicker effect

- Refers to entrainment of brain activity by a rhythmic extrinsic driving factor: for example, if you look at a strobe light that flickers at 20 Hz, there will be rhythmic activity in your visual cortex at 20 Hz
- Also referred to as the steady-state evoked potential, frequency tagging, SSVEP (steady-state visual evoked potential), or SSAEP (steady-state auditory evoked potential)
- Lets you obtain an effectively high spatial resolution, because you can isolate the regions that respond to the flickering
- Has poor temporal resolution, as it takes several hundred milliseconds for the flicker response to stabilize
- Can be elicited up to about 100 Hz, although lower frequencies show stronger effects

Edge artifacts

- Result from applying temporal filters to sharp edges such as a step function, and produce a high-amplitude broadband power artifact that can last hundreds of milliseconds
- Will always be present at discontinuities, including the start and end of the recording
- The lower the frequency you are trying to extract, the longer the buffer needed to get rid of edge artifacts

Baseline time period

- Should end before the onset of the stimulus, because some post-stimulus information might leak into the baseline if it ends too close to the time = 0 point
- This is different from ERPs, for which the baseline period typically ends at the time = 0 event
- Intertrial intervals can be constant or variable

Experiment event markers, or triggers

- Square-wave pulses that are sent from the stimulus-delivering computer to the EEG amplifier and are usually recorded as a separate channel in the raw data file (or sometimes, multiple channels)
- The amplitude of the pulse is used to encode specific events such as stimulus onset or response
- During data importing, the markers are converted to labeled time stamps that indicate when different events in the experiment occurred
- The temporal duration of each marker (the length of time in which the marker channel has a nonzero value) should be at least a few samples: too short and the marker doesn't register; too long and markers might overlap. About 5 ms is good

Tapering signal in time segment for FFT

- Taper to get rid of edge artifacts that can appear in the data
- There are many types of tapers to use (Hann/Hanning, Hamming, Gaussian), but the Hann/Hanning is the best one to use, as it tapers to zero at the beginning and at the end

Which sampling rate to use?

- Technically, only twice the highest frequency you want to measure. However, more will mean a better signal-to-noise ratio, up to a point
- Usually 500~2000 Hz will suffice
- For convenience, 1000 Hz is optimal, as there is a one-to-one conversion from samples to milliseconds. Not too important, but keep it in mind

Limitations of P-Episode

- The choice of threshold will influence the results
- Because noise will produce transient increases in power, excessively noisy data might not be appropriate for the p-episode technique. Computing condition differences should help in these situations, because the noise would ideally be equally distributed across conditions
- Increases in power that have a modest amplitude but are consistent across trials might not be detected by the p-episode technique, although they would be apparent in trial averaging of, for example, wavelet convolution

information that can be extracted from the complex dot product

- The projection onto the real axis is the bandpass-filtered signal (the projection onto the imaginary axis is usually not used in EEG analysis)
- The magnitude of the vector from the origin to the point in complex space defined by the result of the dot product, which gives the amount of overlap between the kernel and the signal
- The angle of that vector with respect to the positive real axis, which is an estimate of the phase angle at the point in time corresponding to the center of the wavelet and at the peak frequency of the wavelet (use the function 'angle' in MATLAB to extract this)
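The three quantities above can be pulled out of a single complex number. A Python sketch with an arbitrary example value (the source uses MATLAB; `cmath.phase` plays the role of MATLAB's `angle`):

```python
import cmath

# Pretend this is the result of one complex wavelet-signal dot product
# (the value 3 + 4i is chosen arbitrarily for illustration):
z = 3 + 4j

real_part = z.real          # bandpass-filtered signal at this time point
magnitude = abs(z)          # kernel-signal overlap (power would be magnitude**2)
phase = cmath.phase(z)      # phase angle estimate, like MATLAB's angle()

print(real_part, magnitude, round(phase, 4))   # 3.0 5.0 0.9273
```

Repeating this at every time point of the convolution result yields the filtered signal, the power time series, and the phase angle time series.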

global field power

- The standard deviation of activity over all electrodes
- Obtained simply by computing the standard deviation over all electrodes at each point in time; best interpreted when using the average reference
- Topographical variance accentuates the global field power and thus facilitates visual inspection
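The definition above is one line of math per time point. A pure-Python sketch over a toy electrodes-by-time matrix (the values are invented for illustration):

```python
import math

def global_field_power(values):
    """Population standard deviation over electrodes at one time point."""
    n = len(values)
    mean = sum(values) / n
    return math.sqrt(sum((v - mean) ** 2 for v in values) / n)

# Rows = electrodes, columns = time points (toy data).
eeg = [[1.0, 2.0, 0.0],
       [3.0, 2.0, 0.0],
       [5.0, 2.0, 0.0]]

# GFP time course: across-electrode std at each time point.
gfp = [global_field_power([channel[t] for channel in eeg]) for t in range(3)]
print(gfp)   # large at t=0 (electrodes disagree), zero where all agree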

Inverse Fourier Transform

- to compute the inverse Fourier transform, you build sine waves of specific frequencies, multiply them by the respective Fourier coefficients at those frequencies, sum all of these sine waves together, and then divide by the number of sine waves
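The recipe above can be verified with a naive DFT round trip in pure Python (complex exponentials stand in for the "sine waves"; this sketch is for illustration, not an efficient FFT):

```python
import cmath

def dft(x):
    """Naive forward discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def inverse_dft(coeffs):
    """Build a complex sine wave per frequency, weight it by its Fourier
    coefficient, sum over frequencies, and divide by N."""
    n = len(coeffs)
    return [sum(coeffs[f] * cmath.exp(2j * cmath.pi * f * t / n)
                for f in range(n)) / n
            for t in range(n)]

signal = [1.0, 3.0, -2.0, 0.5]
recovered = inverse_dft(dft(signal))
print([round(z.real, 10) for z in recovered])   # the original signal returns
```

The round trip recovering the input is exactly the statement that the inverse transform undoes the forward transform.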

Electrode localization equipment

- uncertainties decrease the spatial precision of EEG, which may be detrimental when some spatial filters are applied such as the surface Laplacian or beamforming.

Transition zones in firls

- used to avoid harsh edge artifacts in the time domain - should be 10~25% of the lower and upper frequency bounds

ERP image uses

- Useful as single-subject data inspection tools, because trials with large-amplitude data (which likely contain artifacts) can easily be seen
- Can also be used to link trial-varying task parameters or behaviors to the time-domain EEG signal by sorting the EEG trials according to values of the aligning event, such as the reaction time or the phase of a frequency-band-specific signal at a certain time point

How many electrodes?

-For now, 64 is usually 'good enough' unless going for source localization, then more is usually better

ERP Images

- A 2-D representation of the EEG data from a single electrode
- Single-trial EEG traces are stacked vertically and then color coded to show changes in amplitude as changes in color
- Often smoothed, for example by convolving the image with a 2-D Gaussian, to facilitate interpretation and to minimize the influence of noise or other nonrepresentative single-trial fluctuations

FWHM

- Refers to the frequency width at which power is at 50% of the peak, measured on the left and right sides of the peak
- Equation: FWHM = 2*sqrt(2*ln(2))*sigma, where sigma is the standard deviation of the frequency response
- To estimate the FWHM, first normalize the power spectrum of the wavelet so that it has a minimum value of 0 and a maximum value of 1
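The formula is a one-liner; a quick Python sketch (illustrative):

```python
import math

def fwhm_from_sigma(sigma):
    """Full width at half-maximum of a Gaussian frequency response:
    FWHM = 2 * sqrt(2 * ln 2) * sigma."""
    return 2.0 * math.sqrt(2.0 * math.log(2.0)) * sigma

# A Gaussian with sigma = 1 has FWHM of about 2.355 (in the same units as sigma).
print(round(fwhm_from_sigma(1.0), 4))   # 2.3548
```

Since the constant 2*sqrt(2*ln 2) is about 2.355, FWHM is always roughly 2.4 standard deviations wide, whatever the wavelet's peak frequency.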

Hand EMG

1. Consider rejecting trials in which the subject twitched the muscles of the wrong hand but pressed the correct button. These trials can elicit signals that are more similar to error trials

Filtering ERPs

1. While mass averaging already has the effect of a low-pass filter, an additional low-pass filter is usually applied
2. Poorly designed filters, such as filters with very narrow transition zones, can introduce ripples in the time domain that may be misread as oscillations
3. Other filters can introduce systematic biases, such as forward-only (causal) filters
4. Applying a low-pass filter reduces temporal precision, because the voltage value at each time point becomes a weighted average of voltage values from previous and subsequent time points: the lower the cutoff, the more temporal precision is lost
5. ERPs are often filtered using a frequency cutoff of around 20 or 30 Hz

Procedure for the Hilbert transform

1. Compute the Fourier transform of the signal and create a copy of the Fourier coefficients that has been multiplied by the complex operator (i). This turns M cos(2πft) into iM cos(2πft)
2. Identify the positive and negative frequencies. The positive frequencies are those between, but not including, the zero and Nyquist frequencies, and the negative frequencies are those above the Nyquist frequency (throughout the Hilbert transform, the zero and Nyquist frequencies are left untouched)
3. Convert iM cos(2πft) to iM sin(2πft). Cosine and sine are offset from each other by one-quarter cycle; thus, to convert a cosine to a sine, you rotate the positive-frequency coefficients one-quarter cycle clockwise in complex space (-90°, or -π/2)
4. Take the inverse Fourier transform of the modulated Fourier coefficients. The result is the analytic signal, which can be used in the same way that you use the result of complex Morlet wavelet convolution

Limitations of Time-Frequency-Based Approaches

1. Decreased temporal precision resulting from time-frequency decomposition; lower frequencies generally suffer more loss of temporal precision
2. The large number of analyses that can be applied to EEG data, and the seeming complexity of those analyses, can be intimidating

Brain rhythmic activity

1. The frequency bands most typically associated with cognitive processes in the literature are between 2 Hz and 150 Hz
2. Brain activity contains multiple frequencies simultaneously, which can be separated through signal-processing techniques
3. Changes in rhythmic activity correlate with task demands, including perceptual, cognitive, motor, linguistic, social, emotional, mnemonic, and other functional processes

Reasons for spatial filtering

1. help localize a result 2. isolate a topographical feature of the data by filtering out low-spatial-frequency features 3. as a preprocessing step for connectivity analyses

The 1/f phenomenon entails five important limitations to interpreting and working with time-frequency power data

1. it is difficult to visualize power across a large range of frequency bands
2. it is difficult to make quantitative comparisons of power across frequency bands
3. aggregating effects across subjects can be difficult with raw power values
4. task-related changes in power can be difficult to disentangle from background activity
5. raw power values are not normally distributed, because they cannot be negative and they are strongly positively skewed; this limits the ability to apply parametric statistical analyses to time-frequency power data

Advantages of Time-Frequency-Based Approaches

1. many results from time-frequency-based analyses can be interpreted in terms of neurophysiological mechanisms of neural oscillations
2. at present, oscillations are arguably the most promising bridge that links findings from multiple disciplines within neuroscience and across multiple species
3. there may be many task-relevant dynamics in EEG data that are retrievable only with time-frequency-based approaches, owing to their multidimensionality

Spatial scales

1. Microscopic scale:
- refers to spatial areas of less than a few cubic millimeters and comprises neural columns, neurons, synapses, and so forth
- dynamics happening at this scale are most likely invisible to EEG, either because events at this scale do not produce electrical field potentials or because the field potentials they produce are not powerful enough to be recorded from the scalp
2. Mesoscopic scale:
- refers to patches of cortex of several cubic millimeters to a few cubic centimeters
- dynamics occurring at this spatial scale can be resolved with EEG, although doing so may require high-density recordings (64 or more electrodes) and spatial filtering techniques such as the surface Laplacian or source-space imaging
3. Macroscopic scale:
- refers to relatively large regions of cortex that span many cubic centimeters
- this spatial scale is easily measurable with EEG, even with only a few electrodes

Time slice

1. one frequency band is selected, and activity at that frequency band is plotted over time 2. useful for comparing activity across multiple conditions or electrodes and when there is an a priori reason to focus on a specific frequency band

a family of wavelets

A group of wavelets that share the same properties but differ in frequency

complex morlet wave (cmw) equation

cmw(t) = A * e^(-t^2/(2*s^2)) * e^(i*2*pi*f*t)
t = time
s = standard deviation of the Gaussian (defined already)
f = peak frequency of the wavelet
A = frequency-band-specific scaling factor, A = 1/(s*sqrt(pi))^(1/2)
- If you plan to apply a baseline normalization such as percentage change or decibel, this scaling factor is not necessary
- If you are performing complex wavelet convolution only to obtain the phase angle time series, the scaling factor is also not necessary
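The equation can be built directly in Python. The sampling rate, frequency, and time range below are made-up example parameters, and s = n/(2*pi*f) follows the Gaussian-window entry at the end of these notes:

```python
import cmath, math

def complex_morlet(f, n_cycles, srate, t_range=1.0):
    """Complex Morlet wavelet: a Gaussian-tapered complex sine wave.
    f: peak frequency in Hz; n_cycles sets the Gaussian width via
    s = n_cycles / (2*pi*f); A = 1/sqrt(s*sqrt(pi)) is the scaling factor."""
    s = n_cycles / (2 * math.pi * f)
    A = 1.0 / math.sqrt(s * math.sqrt(math.pi))
    n = int(2 * t_range * srate) + 1            # odd length, centered on t = 0
    times = [(i - n // 2) / srate for i in range(n)]
    return [A * math.exp(-t ** 2 / (2 * s ** 2)) * cmath.exp(2j * math.pi * f * t)
            for t in times]

w = complex_morlet(f=6.0, n_cycles=4, srate=500)
# The wavelet peaks at its center and tapers toward zero at the edges:
print(abs(w[len(w) // 2]) > 100 * abs(w[0]))    # True
```

Because the taper reaches (essentially) zero at both ends here, this wavelet satisfies the "How Long Should Wavelets Be?" requirement above.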

How Many Frequencies Should Be Used?

About 20~30 frequencies are usually pretty good. Also, due to frequency smoothing inherent in convolution there is little advantage to choosing frequencies that are too close to each other.

How High Should the Highest Frequency Be?

Again, it depends on things such as the sampling rate (as it limits the highest frequency you can analyze due to Nyquist), and the time it takes to analyze data

ERP theory: Amplitude asymmetry or baseline shift

Although the electrical currents generated by neurons are polarity balanced, it is possible that outward-going currents are less detectable from the scalp, producing an asymmetry in the oscillations measured by scalp EEG electrodes

Plateau width trade-off in filters

As the plateau becomes narrower, the frequency precision increases, but this decreases the temporal precision because narrow frequency filters require longer kernels to resolve

Finite and infinite impulse response filters (FIR, IIR filters)

A finite impulse response filter's response to an impulse (a signal with a value at only a single time point) eventually ends, while an infinite impulse response filter's does not

Appropriate resolution

For most analyses, sampling rates between 250 and 1000 Hz are sufficient and appropriate.

How to observe flicker effect

In general, a peak in the frequency domain at the flicker frequency should be readily observed in a spectral plot, and the magnitude of this frequency peak can be compared to the same frequency before the flickering stimulus began, or it can be compared to the power of neighboring frequencies for which there was no flicker

complex dot product between cmw and complex signal

It is the dot product between the complex signal and the complex notation of the kernel. This gets rid of the problem of having negative values from the dot product, as this just gives another complex number as an answer

EEG precision

Precision depends on the analysis method (e.g. ERP will be quite high as each point is just one point, but t-f analysis will be low as each point will be a weighted average)

Matching Pursuit

Rather than convolve all wavelets with the data, matching pursuit involves finding which template from the "dictionary" best matches the data at that time window and computing the residuals between the data and all templates

Convolution versus Cross-Covariance

Similar, however, in convolution the kernel is reversed and in cross-covariance it is kept as is

How to View and Interpret Time-Frequency Results

Step 1: Determine what is shown in the plot
Step 2: Inspect the ranges and limits of the plot
Step 3: Inspect the results
Step 4: Link the results to the experiment (or to patient groups, gene or drug treatment, or whatever the independent variable is)
Step 5: Understand the statistical procedures used to support the interpretations

How Low Should the Lowest Frequency Be?

The lowest frequency should be:
1. low enough to capture the activity that you are trying to measure
2. high enough that several cycles (at least four) fit within the epoch that you have selected

ERP theory: additive

This model proposes that the ERP reflects a signal that is elicited by an external stimulus such as a picture or a sound or by an internal event such as the initiation of a manual response and is added to ongoing background oscillations

Hilbert transform

An alternative approach for extracting the imaginary part, iM sin(2πft), of a real-valued signal, M cos(2πft). This is done by creating the phase quadrature component and adding it to the M cos(2πft) part

Reason for EEG temporal accuracy

brain electrical activity travels instantaneously (within measurement possibilities) from the neurons generating the electrical field to the electrodes that are measuring those fields

precision

certainty of the measurement at each time point

Very fast color changes over time or frequency

could be a mistake in the analysis (in this case, the real part of the analytic signal was plotted rather than power). Fast changes in lower frequencies are more suspect than fast changes in higher frequencies because of increased temporal smoothing at lower frequencies.

Brief and large-power effects at high frequencies

could be driven by EEG artifacts such as amplifier saturation or a noise spike from a bad electrode

space slice

data are shown at one time-frequency point, or the average over multiple adjacent time-frequency points, over electrodes in a topographical plot

P-Episode

Detects the occurrence and duration of oscillatory events. This is done by filtering the data into frequency bands (with wavelet convolution or filter-Hilbert) and detecting, on a trial-by-trial basis, whether a power fluctuation exceeds an amplitude threshold
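The detection step can be sketched as a threshold-crossing run finder. The `min_samples` duration criterion below is a simplified stand-in for the method's duration threshold, and the power values are invented for illustration:

```python
def detect_oscillatory_episodes(power, threshold, min_samples):
    """Return (start_index, length) of runs where band-specific power
    exceeds an amplitude threshold for at least min_samples samples."""
    episodes, start = [], None
    for i, p in enumerate(power):
        if p > threshold and start is None:
            start = i                                  # run begins
        elif p <= threshold and start is not None:
            if i - start >= min_samples:               # keep only long runs
                episodes.append((start, i - start))
            start = None
    if start is not None and len(power) - start >= min_samples:
        episodes.append((start, len(power) - start))   # run reaches the end
    return episodes

power = [0.1, 0.2, 0.9, 1.1, 1.3, 0.8, 0.2, 1.5, 0.1]
print(detect_oscillatory_episodes(power, threshold=0.7, min_samples=3))
# [(2, 4)]  -- the brief single-sample burst at index 7 is rejected
```

This directly illustrates the first limitation listed above: shifting `threshold` or `min_samples` changes which episodes survive.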

Strange topographical distributions

due to noisy or bad electrodes or to an incorrect mapping between electrode label and physical location

Horizontal or vertical stripes

in a time-frequency plot may reflect ripple artifacts from poor filter construction. This can happen if the filter widths are too narrow or if the filter was applied to too little data, thus causing edge artifacts.

wavelet convolution

pass the EEG data through a set of filters (wavelets) that are tuned for specific frequencies, and the result of convolution is the frequency-band intersection between the EEG data and the wavelet.

Topographical localization

refers to identifying the electrodes that show the maximum effect under investigation

Brain localization

refers to identifying the locations in the brain that generated the activity measured from the scalp

Frequency slice

shows power (energy at each frequency band, y-axis) as a function of frequency (x-axis), collapsing over a period of time that could be hundreds of milliseconds to tens of minutes long. Time information is lost

convolution theorem

states that convolution in the time domain is the same as multiplication in the frequency domain
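The theorem can be checked numerically with a naive DFT. For the finite discrete case the statement holds exactly for circular convolution; the signals below are toy values for illustration:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[f] * cmath.exp(2j * cmath.pi * f * t / n) for f in range(n)) / n
            for t in range(n)]

def circular_convolve(x, h):
    """Time-domain circular convolution."""
    n = len(x)
    return [sum(x[(t - k) % n] * h[k] for k in range(n)) for t in range(n)]

x = [1.0, 2.0, 0.0, -1.0]
h = [0.5, 0.25, 0.0, 0.0]

time_domain = circular_convolve(x, h)
# Frequency domain: multiply the spectra pointwise, then inverse-transform.
freq_domain = [z.real for z in idft([a * b for a, b in zip(dft(x), dft(h))])]
print(all(abs(a - b) < 1e-9 for a, b in zip(time_domain, freq_domain)))   # True
```

This equivalence is why wavelet convolution is usually implemented as FFT, pointwise multiplication, then inverse FFT: it is much faster than sliding the kernel in the time domain.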

Determining the Frequency Smoothing of Wavelets

the extent to which neighboring frequencies contribute to the result of wavelet convolution can be reported in terms of full width at half-maximum (FWHM)

Details to include in methods section if using cmw

the minimum, maximum, and number of frequencies of the wavelets; whether frequencies increased linearly or logarithmically; and the number of wavelet cycles and whether this changed as a function of frequency

ERP theory: Phase reset

when a stimulus appears, the ongoing oscillation at a particular frequency band is reset to a specific phase value, which may reflect a return to a specific neural network configuration

Beta-wave oscillations

- 20-30Hz

Gamma-wave oscillations

- 30-80Hz

Theta-band oscillations

- 4-8Hz - implicated in several cognitive functions, including memory and cognitive control

Epoching

- Cutting the EEG data into manageable chunks, usually centered around stimulus onset

Advantages of Event Related Potentials(ERPs)

- ERPs are simple and fast to compute and require few analysis assumptions or parameters
- High temporal precision and accuracy
- For latency, ERPs provide a more accurate estimate than time-frequency results, because time-frequency decomposition requires temporal smoothing
- There is an extensive, decades-long literature of ERP findings in which to contextualize and interpret your results
- They provide a quick and useful data-quality check of single-subject data

Filtering

- Filtering data can help remove high-frequency artifacts and low-frequency drifts, and notch filters at 50 Hz or 60 Hz help attenuate electrical line noise
- For time-frequency analyses there may be no need to apply filters, as they give you power per frequency anyway
- Applying a high-pass filter at 0.1 or 0.5 Hz to the continuous data is useful and recommended to minimize slow drifts
- High-pass filters should be applied only to continuous data and not to epoched data, as edge artifacts could be problematic

Microstates

- In EEG as well as ERP map series, for brief, subsecond time periods, map landscapes typically remain quasi-stable, then change very quickly into different landscapes
- The durations of these "landscapes," as well as their topographical characteristics, vary over time and as a function of task demands
- Durations tend to be around the alpha range (70-130 ms), and topographical distributions tend to fit into four or five distinct patterns

Advantages of Fast Fourier Transform

- Computationally much faster
- Computation time can be further decreased by using vectors whose number of elements is a power of two

Matlab function: firls

- a function in MATLAB for constructing filter kernels via least squares - there are three inputs to firls: 1. the order parameter: determines the precision of the filter's frequency response; larger orders produce kernels with better frequency precision but increase computation time. To resolve activity at a particular frequency, the filter kernel must be long enough to contain at least one cycle at that frequency; in practice, a length of two to five cycles of the lower frequency bound is good 2. a vector of frequencies that defines the shape of the response: for a bandpass filter, you can use six numbers: the zero frequency, the start of the lower transition zone, the lower bound of the bandpass, the upper bound of the bandpass, the end of the upper transition zone, and finally the Nyquist frequency 3. the "ideal" filter response amplitude at each of those frequencies
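
As a sketch of these three inputs, an equivalent kernel can be built with SciPy's `scipy.signal.firls` (which takes the kernel length in taps rather than MATLAB's order). The sampling rate, band edges, and transition-zone width below are illustrative assumptions, not values from the text:

```python
# Sketch of a least-squares band-pass FIR kernel, mirroring MATLAB's firls
# with scipy.signal.firls. All numeric choices here are assumptions.
import numpy as np
from scipy.signal import firls

srate = 256                      # sampling rate in Hz (assumed)
nyquist = srate / 2
lower, upper = 10, 20            # bandpass bounds in Hz (assumed)
trans = 0.15                     # transition zone as fraction of each bound

# kernel length: ~3 cycles of the lower frequency bound, forced to be odd
order = int(3 * srate / lower)
order += (order % 2 == 0)        # firls requires an odd number of taps

# six frequencies defining the "ideal" response shape (in Hz)
freqs = [0, lower * (1 - trans), lower, upper, upper * (1 + trans), nyquist]
ideal = [0, 0, 1, 1, 0, 0]       # desired amplitude at each frequency

kernel = firls(order, freqs, ideal, fs=srate)
print(kernel.shape)              # one weight per tap
```

The kernel's FFT should be near 1 inside the 10-20 Hz band and near 0 well outside it, which is a quick sanity check on any filter kernel.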

Topographical maps

- made by interpolating activity at many points in space between electrodes - visual inspection of such maps offers a quick quality check

Independent component analysis(ICA)

- a source-separation technique that decomposes the EEG time series into a set of components that attempt to isolate independent sources of variance in the data - imagine many voices talking and many microphones placed around the room: by taking weighted combinations of the microphones' recordings, you could isolate the sound coming from individual voices - the maximum number of components that can be isolated equals the number of electrodes; with more than 100 electrodes it can be better to request somewhat fewer components for speed, since the scalp data are unlikely to contain 100 truly independent components

Background activity

- activity that is present in the data but is unmodulated by task events - baseline correction will help get rid of it

Gaussian window

- also called the bell-shaped or normal curve - used to make the Morlet wavelet - the equation is: a*e^(-(t-m)^2 / (2*s^2)), where a = amplitude, t = time, m = x-axis offset (can just be 0 in EEG analysis), and s = standard deviation/width of the Gaussian - the standard deviation is computed as s = n/(2*pi*f), where n = number of wavelet cycles and f = frequency
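
These two equations can be combined in a few lines. The sketch below (sampling rate, frequency, and cycle count are assumed values) builds the Gaussian with s = n/(2*pi*f) and windows a sine wave with it to produce a Morlet wavelet:

```python
# Minimal sketch: a Morlet wavelet as a sine wave windowed by a Gaussian,
# with the Gaussian width set by the number of cycles n. Parameter values
# are illustrative assumptions.
import numpy as np

srate = 256                               # sampling rate in Hz (assumed)
f = 6                                     # peak frequency of the wavelet, Hz
n = 4                                     # number of wavelet cycles
t = np.arange(-1, 1 + 1/srate, 1/srate)   # time axis centered on zero

s = n / (2 * np.pi * f)                   # standard deviation of the Gaussian
gaussian = np.exp(-t**2 / (2 * s**2))     # a = 1, m = 0 as in EEG analysis
sine = np.sin(2 * np.pi * f * t)
wavelet = sine * gaussian                 # point-by-point multiplication

# the wavelet tapers to ~0 at both ends and has (near-)zero mean
print(abs(wavelet.mean()))
```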

Visualizing Fourier results

- the Fourier transform provides a 3-D representation of the data in terms of frequency, power, and phase, but very often the phase information is ignored when results are shown, leaving two dimensions to plot (phase here refers to the position of the sine wave at each frequency when it crosses time = 0) - this is usually done with a line plot, as a bar plot would be impractical for so many data points

Cognitive artifacts

- artifacts that have no visible signature in the EEG itself but instead reflect problematic behavior (e.g., error trials or lapses)

Blinks

- blinks add massive noise on top of the signal - ICA does a good job of isolating blink activity into components that can be subtracted from the signal, along with other oculomotor artifacts

Easy definition of convolution

- can think of convolution as a time series of one signal weighted by another signal that slides along the first signal (slide and multiply) - cross-covariance (the similarity between two vectors over time; a statistical interpretation) - a time series of mappings between two vectors (a geometric interpretation) - as a frequency filter
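
The "slide and multiply" view can be made concrete: flip one signal, slide it along the other, and take a dot product at each step. This toy sketch (numbers are arbitrary) reproduces NumPy's built-in convolution:

```python
# "Slide and multiply": at each step, line the flipped kernel up with the
# signal, multiply point by point, and sum. The loop reproduces np.convolve.
import numpy as np

signal = np.array([0., 1., 2., 3., 2., 1., 0.])
kernel = np.array([0.5, 0.3, 0.2])      # an arbitrary asymmetric kernel

flipped = kernel[::-1]                  # convolution flips the kernel
padded = np.concatenate([np.zeros(2), signal, np.zeros(2)])  # len(kernel)-1 zeros
manual = np.array([padded[i:i + 3] @ flipped
                   for i in range(len(signal) + len(kernel) - 1)])

assert np.allclose(manual, np.convolve(signal, kernel))
print(manual)
```

Without the flip this same loop computes cross-covariance, which is why convolution also has the statistical interpretation mentioned above.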

Considerations of ICA

- the decomposition is based purely on statistical properties of the data; in practice, components are likely to contain both signal and noise - in general, if a component's time course shows a task-related, ERP-like deflection, it may contain signal - however, non-phase-locked signal would not be apparent in the ERP, so the absence of an ERP is not proof that a component contains no signal - be cautious when removing components

Advantages of P-Episode

- detects the temporal duration of a band-specific increase in power (unique among time-frequency decomposition methods, which otherwise characterize the strength of the signal at each time point rather than the duration of events) - particularly useful if you have hypotheses concerning the duration of power increases, particularly at the single-trial level

The Hilbert-Huang method

- developed for detecting time-frequency events in nonstationary data - acts as an adaptive filter by using a data reduction technique called empirical mode decomposition to decompose the raw EEG signal into a series of fundamental components

A good response device

- responses should be unambiguous so subjects are not confused - the device should also have very little time lag between the response and its registration

A comfortable chair for the subject to sit in

- don't want subjects to move around

Why use EEG?

- fast - directly measures neural activity - multidimensional: time, space, frequency, power, phase

peak frequency/center frequency

- the frequency of the Morlet wavelet (the same as the frequency of the sine wave used to create it)

If trial count is different

- take the smallest trial count and select that many trials from every condition with more trials. This can be done randomly, or you can select the trials most similar across conditions (e.g., in reaction time) to match the conditions more closely
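
A minimal sketch of this trial-matching step, with made-up condition names and counts; the random draw uses a fixed seed so the selection is reproducible:

```python
# Equate trial counts: find the smallest count across conditions and draw
# that many trials (without replacement) from each condition. Condition
# names and counts are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
trials = {'congruent': np.arange(80),      # hypothetical trial indices
          'incongruent': np.arange(55)}

min_count = min(len(idx) for idx in trials.values())
matched = {cond: rng.choice(idx, size=min_count, replace=False)
           for cond, idx in trials.items()}

print({cond: len(idx) for cond, idx in matched.items()})
```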

Epoch trick for buffer

- if you are concerned that the epochs are too short and the time-frequency results might be contaminated by edge artifacts, you can use a "reflection" approach, whereby the EEG data from each trial and electrode are reversed and appended to the beginning and end of the trial, making the epoch longer and giving it more buffer - this should be used only when necessary, since there can be some temporal smoothing and information leakage - tapering the reflected data does not help
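
A sketch of the reflection trick on a toy epoch: mirror the data, append the mirrored copies as buffers, run the analysis on the longer series, and keep only the middle third:

```python
# "Reflection" buffer: mirrored copies of the epoch are appended to the
# front and back, so edge artifacts fall in the buffers rather than in the
# data of interest. The toy epoch is an arbitrary example.
import numpy as np

epoch = np.sin(np.linspace(0, 4 * np.pi, 200))     # one toy trial
reflected = np.concatenate([epoch[::-1], epoch, epoch[::-1]])

# ...run the time-frequency analysis on `reflected`, then keep only the
# middle third, which corresponds to the original epoch:
middle = reflected[len(epoch):2 * len(epoch)]
assert np.array_equal(middle, epoch)
```

Because the reflected copies join the epoch at its own boundary values, there are no sharp discontinuities at the seams for filters or wavelets to ring against.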

Importance of the number of wavelet cycles

- is a non-trivial number that defines the trade-off between temporal precision and frequency precision - should be carefully selected

Trial counts

- it is better to have the same number of trials per condition, because differences in trial count can themselves affect the results; small differences, however, won't matter too much - in general, analyses based on phase are more sensitive to trial count than analyses based on power or on the ERP: a small number of trials will introduce a positive bias in the results - power-based analyses may also have some positive bias because raw power values can only be positive, so noise is more likely to increase than decrease power

Rules of wavelets

- must have values at or very close to zero at both ends - must have a mean value of zero

autoregressive model

- one in which values of a signal are predicted from previous values of that signal - the number of previous values is called the order and must be carefully selected, because models with too small or too large an order may fit the data poorly - one advantage of autoregressive modeling is that, unlike the FFT, its frequency resolution is not limited by the number of data points in a time segment (wavelet and Hilbert methods share this advantage) - infrequently used, having largely been replaced by methods such as wavelet convolution
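
A minimal sketch of fitting such a model by least squares: simulate an AR(2) process, build a design matrix of lagged copies of the signal, and recover the coefficients. The order, true coefficients, and seed are arbitrary choices for illustration:

```python
# Autoregressive fit: predict each sample from the previous `order` samples
# via least squares. The simulated AR(2) coefficients (0.6, -0.3) are
# arbitrary illustrative values.
import numpy as np

rng = np.random.default_rng(1)
n, order = 500, 2
x = np.zeros(n)
for t in range(order, n):                 # simulate an AR(2) process
    x[t] = 0.6 * x[t-1] - 0.3 * x[t-2] + rng.standard_normal()

# design matrix: row for time t holds [x[t-1], x[t-2], ...]
X = np.column_stack([x[order - k - 1 : n - k - 1] for k in range(order)])
coeffs, *_ = np.linalg.lstsq(X, x[order:], rcond=None)
print(coeffs)   # estimates should land near the true values (0.6, -0.3)
```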

Phase-locked task related activity (evoked)

- phase is the same or very similar on each trial with respect to time 0 (stimulus onset) - will be observed both in time-domain averaging (the ERP) and in time-frequency-domain averaging

butterfly plot

- shows the ERP from all electrodes overlaid in the same figure - Butterfly plots are useful for detecting bad or noisy electrodes.

Fourier transform

- shows you the power of each frequency in the signal - this is done by taking the dot product of the signal with sine waves created at several frequencies; the sine wave with the most in common with the signal will have the highest 'power'
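
This dot-product picture can be written directly and checked against the FFT. The test signal and its frequencies are arbitrary; the loop below is the discrete Fourier transform computed one frequency at a time:

```python
# The DFT as dot products: for each frequency, dot the signal with a complex
# sine wave; the magnitude gives the amplitude at that frequency. Compared
# against np.fft.fft on an arbitrary two-component test signal.
import numpy as np

srate, n = 100, 100
t = np.arange(n) / srate
signal = np.sin(2 * np.pi * 8 * t) + 0.5 * np.sin(2 * np.pi * 13 * t)

dft = np.array([signal @ np.exp(-2j * np.pi * f * np.arange(n) / n)
                for f in range(n)])

assert np.allclose(dft, np.fft.fft(signal))
amps = np.abs(dft[:n // 2]) * 2 / n          # amplitude per frequency in Hz
print(amps[8], amps[13])                     # peaks at the two components
```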

time-frequency slice

- time is on the x-axis and frequency is on the y-axis - the color of the plot (also known as the z-axis or depth) reflects some feature of the time-frequency data

How many trials needed?

- variable according to signal-to-noise ratio - however, a minimum of 50 is recommended

Morlet wavelet

- a sine wave windowed with a Gaussian (in other words, a short wave that is maximal in the center and tapers off in both directions) - to make a Morlet wavelet, create a sine wave, create a Gaussian, and multiply them point by point

FIR or IIR for hilbert filter?

- FIR - FIR filters are more stable and less likely to introduce nonlinear phase distortions
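
A sketch of the filter-Hilbert pipeline using SciPy: band-pass filter with an FIR kernel (`filtfilt` applies it forward and backward, avoiding phase distortion), then take the analytic signal to read off instantaneous power and phase. The band, kernel length, and test signal are illustrative assumptions:

```python
# Filter-Hilbert sketch: FIR band-pass (here via firwin rather than a
# least-squares design), zero-phase filtering, then the Hilbert transform.
# All numeric choices are assumptions for illustration.
import numpy as np
from scipy.signal import firwin, filtfilt, hilbert

srate = 256
t = np.arange(0, 4, 1 / srate)
signal = np.sin(2 * np.pi * 10 * t) \
       + np.random.default_rng(2).standard_normal(t.size)   # 10 Hz + noise

numtaps = int(3 * srate / 8)       # ~3 cycles of the lower band edge
numtaps += (numtaps % 2 == 0)      # make the tap count odd
fir = firwin(numtaps, [8, 12], pass_zero=False, fs=srate)   # 8-12 Hz band

filtered = filtfilt(fir, 1, signal)    # zero-phase band-pass
analytic = hilbert(filtered)           # analytic (complex) signal

power = np.abs(analytic) ** 2          # instantaneous power
phase = np.angle(analytic)             # instantaneous phase
print(power.mean())
```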

FIR shapes

- the frequency response of an FIR filter can take almost any shape; a plateau shape (a bandpass flanked by transition zones) is recommended

Short time FFT

- combats the limitation that the Fourier transform assumes the signal is stationary by computing the FFT over short segments of time, within which the signal is closer to stationary
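
A sketch of a short-time FFT on a signal whose frequency changes halfway through (which a single FFT would blur together). Window length, overlap, and taper are illustrative choices:

```python
# Short-time FFT: slide a tapered window along the signal and FFT each
# segment, giving a time-by-frequency power map. The frequency jump in the
# test signal is an arbitrary example of nonstationarity.
import numpy as np

srate = 200
t = np.arange(0, 4, 1 / srate)
# frequency jumps from 6 Hz to 20 Hz halfway through: nonstationary
signal = np.where(t < 2, np.sin(2*np.pi*6*t), np.sin(2*np.pi*20*t))

win, step = 100, 50                       # 0.5 s windows, 50% overlap
taper = np.hanning(win)                   # taper reduces edge leakage
starts = range(0, len(signal) - win + 1, step)
tf = np.array([np.abs(np.fft.rfft(taper * signal[s:s+win]))**2
               for s in starts])          # rows: time windows, cols: freqs

freqs = np.fft.rfftfreq(win, 1 / srate)   # frequency bin centers in Hz
print(tf.shape)
```

The dominant bin of the first window sits at 6 Hz and of the last window at 20 Hz, which a whole-signal FFT could not localize in time.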

The problem of signal and noise

- trying to get rid of all the noise may lose a lot of signal, while keeping all the signal might mean retaining a lot of noise - also, there are different things going on in different frequencies, so one researcher's noise might be another researcher's signal

S-transform

- developed as an adaptation of the short-time FFT, in particular to address the limitation that the short-time FFT may have limited sensitivity for detecting transient events shorter than the FFT time segment - the S-transform works the same way as a wavelet convolution but uses a slightly different kernel: a tapered sine wave, similar to the Morlet wavelet

EEG and MEG

1. EEG can detect both radial and tangential sources 2. MEG is maximally sensitive to tangential sources and has low sensitivity to radial sources 3. for cognitive experiments in which a larger patch of cortex is activated, that patch is likely to extend over cortical folding and thus be measured by both EEG and MEG 4. MEG is better than EEG at detecting high-frequency activity 5. EEG has several advantages over MEG: EEG equipment is portable and far less expensive

Spatial accuracy of EEG

1. quite poor: EEG cannot resolve depth very well, and each electrode picks up signals from all over the brain (volume conduction)

Rejecting cognitive artifacts

1. If your task involves responses that can be accurate or not, you will likely want to remove or at least separate error trials. 2. Post-error trials can also be removed if post-error effects are of concern to you 3. also consider removing trials in which subjects do not make a response if they were instructed to do so, trials with more responses than were required, trials with very fast reaction times, and trials with very slow reaction times

Spatial resolution of EEG

1. more electrodes will yield more accurate spatial readings 2. some analyses, such as solving the inverse problem for source localization, benefit from more electrodes, while others, such as the P3 oddball effect, will not be affected as much 3. the spatial precision of EEG is fairly low but can be improved by spatial filters such as the surface Laplacian or adaptive source-space-imaging techniques

EEG oscillations

1. rhythmic activity visible in EEG readings reflects neural oscillations 2. oscillations are described by frequency, power, and phase

theoretical limitations on constructing families wavelets

1. You cannot use frequencies that are slower than your epochs. That is, if you have 1 s of data, you cannot analyze activity lower than 1 Hz. In practice, you should have several cycles of activity (for example, if you have 1 s of data, use wavelets that are 4 Hz and faster). 2. The frequencies of the wavelets cannot be above the Nyquist frequency (one-half of the sampling rate). 3. Because of frequency smoothing from time-frequency precision trade-offs, frequencies that are very close to each other will likely provide similar or nearly identical results. For example, if you have a wavelet at 15.0 Hz, a wavelet at 14.9 Hz is unlikely to provide any unique information. More frequency bins may produce smoother-looking plots but will also increase computation time without increasing the information in the results. In general, between 15 and 30 frequencies (spanning, say, 3 Hz to 60 Hz) should be sufficient for most experiments.

Uses of ICA

1. clean EEG data by identifying components that isolate artifacts and then subtracting those components from the data 2. a data reduction technique by analyzing component time series instead of electrode time series

Limitations of Wavelet Convolution

1. convolution with a Morlet wavelet acts as a bandpass filter, but for time-frequency analyses, power and phase information are needed, and these features are not readily apparent in the bandpass-filtered signal 2. the result of convolution with a real-valued Morlet wavelet depends on phase offsets between the wavelet and the data - to resolve both limitations, EEG data are convolved with complex Morlet wavelets: wavelets that have both a real component and an imaginary component
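
A sketch of the fix: build a complex Morlet wavelet (a complex exponential times a Gaussian), convolve it with a toy 10 Hz signal, and extract power and phase from the complex result. All parameters are illustrative:

```python
# Complex Morlet wavelet convolution: the complex result carries both power
# (squared magnitude) and phase (angle) at each time point, which a real-
# valued wavelet cannot provide. Parameter values are assumptions.
import numpy as np

srate, f, n_cycles = 256, 10, 5
tw = np.arange(-0.5, 0.5, 1 / srate)          # wavelet time axis
s = n_cycles / (2 * np.pi * f)                # Gaussian width
cmw = np.exp(2j * np.pi * f * tw) * np.exp(-tw**2 / (2 * s**2))

time = np.arange(0, 2, 1 / srate)
signal = np.cos(2 * np.pi * 10 * time)        # toy 10 Hz "EEG"

conv = np.convolve(signal, cmw, mode='same')  # complex-valued result
power = np.abs(conv) ** 2                     # power at each time point
phase = np.angle(conv)                        # phase at each time point
print(power[len(power) // 2])
```

Away from the edges, power is nearly flat (the signal's 10 Hz rhythm never changes strength) and the phase advances at 10 cycles per second, which is exactly the information lost in a real-valued convolution.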

