Electroencephalography (EEG)
What are the four steps in analysing EEG data?
1. The signal
2. Processing of the data
3. Artefacts
4. Measurements
What is the inverse problem in relation to EEG signals?
Mathematically, if the sources are known, the resulting scalp configuration of signals can be computed (the forward problem); however, the reverse is not true: one given scalp configuration of signals can have multiple dipole solutions!
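The non-uniqueness can be made concrete with a toy linear-algebra sketch. The 2×3 "leadfield" matrix below is made up purely for illustration (real leadfields come from detailed head models): any vector from its null space can be added to the sources without changing the scalp pattern.

```python
import numpy as np
from scipy.linalg import null_space

# Made-up leadfield: maps 3 dipole sources to 2 scalp electrodes
L = np.array([[1.0, 0.5, 0.2],
              [0.3, 0.8, 0.6]])

# Forward problem: given sources, the scalp pattern is unique
s1 = np.array([1.0, 0.0, 0.0])
scalp = L @ s1

# Inverse problem: a second, different source configuration
# (s1 plus a null-space vector of L) yields the SAME scalp pattern
s2 = s1 + null_space(L)[:, 0]
```

Here `L @ s2` equals `L @ s1` even though `s1` and `s2` differ, which is exactly why one scalp recording cannot pin down the sources.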
Describe the research application of EEGs looking at ERPs in cognitive science.
One reason for using ERPs to investigate cognition is that many components are very well studied; hence, finding that a specific component is modulated by the experimental task might shed light on which cognitive process is involved.
Describe the neurophysiology of an EEG.
The EEG activity does not reflect action potentials but originates mostly from post-synaptic potentials: voltages that arise when neurotransmitters bind to receptors on the membrane of the post-synaptic cell. This causes ion channels to open or close, leading to graded changes in the potential across the membrane, which can be understood as a small "dipole". Signals from single cells are not strong enough to be recorded outside of the head, but if many neurons are spatially aligned, their summed potentials add up and create the signals we can record. This pooled activity from groups of similarly oriented neurons comes mostly from large cortical pyramidal cells.
Describe the second step of analysing EEG data (the processing of the data).
The EEG signal is very noisy. When looking at frequency information, for example in sleep research, the raw signal might show systematic variations, such as a predominance of a specific frequency. However, when studying cognitive processes, the raw signal has to be "cleaned" before it can be interpreted. The most relevant steps are related to finding all the artefacts that are not brain signals.
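A minimal sketch of one common cleaning step, band-pass filtering, is shown below on synthetic data (the 250 Hz sampling rate, the 0.5-40 Hz pass band, and the signal itself are all illustrative assumptions, using scipy's Butterworth filter):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 250.0                      # assumed sampling rate in Hz
t = np.arange(0, 2, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic "EEG": a 10 Hz alpha-like oscillation, plus slow drift
# and broadband noise (illustration only, not real data)
signal = (np.sin(2 * np.pi * 10 * t)
          + 2 * np.sin(2 * np.pi * 0.2 * t)   # slow drift
          + 0.5 * rng.standard_normal(t.size))  # broadband noise

# Band-pass 0.5-40 Hz, a range often used for cognitive EEG;
# filtfilt applies the filter forwards and backwards (zero phase shift)
b, a = butter(4, [0.5, 40], btype="bandpass", fs=fs)
cleaned = filtfilt(b, a, signal)
```

The slow drift falls below the pass band and is strongly attenuated, while the 10 Hz oscillation survives.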
How does an EEG work?
The electrodes placed on the scalp pick up small fluctuations of electrical signals, originating from activity of (mostly cortical) neurons. While the raw signals recorded are very noisy and might not look like much, they are systematically related to cognitive processes.
What are some limitations of EEGs?
EEG is biased to signals generated in superficial layers of cerebral cortex on the gyri (ridges) directly bordering the skull. Signals in the sulci are harder to detect than from gyri, and may additionally be masked by the signals from the gyri. The meninges, cerebrospinal fluid (CSF) and skull "smear" the EEG signal, making it difficult to localise the source.
Describe the first step of analysing EEG data (the signal).
EEG signals are measured from the scalp in relation to a reference electrode. The reference should be a neutral point (e.g., tip of the nose, mastoids), but some researchers reference to the average of all scalp electrodes. EEG signals have a tiny typical amplitude (on the order of microvolts), so they need to be amplified, typically by a factor of 1,000 to 100,000. The signal is then (typically) digitised. It is band-pass filtered to remove very low and very high frequencies that cannot reflect brain activity. The EEG is the sum of signals originating from many different neural units.
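Average referencing can be sketched in a few lines; the data here are random numbers standing in for a recording, and the channels × samples layout is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical recording: 4 scalp channels x 1000 samples (values in µV)
data = rng.standard_normal((4, 1000))

# Re-reference to the average of all scalp electrodes:
# subtract the mean across channels at every time point
avg_ref = data - data.mean(axis=0, keepdims=True)
```

After average referencing, the channels sum to zero at every sample, which is the defining property of this reference choice.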
What is an EEG (electroencephalograph)?
Electroencephalography (EEG) is a method of detecting neural activity by placing electrodes on the scalp. EEG recorded at the scalp is non-invasive - however, it is also possible to record intra-cranial EEG by measuring activity directly at the exposed cortex. EEG is cheap and (relatively) easy to conduct. The temporal resolution of an EEG is great, but the spatial resolution is poor.
Describe the final step of analysing EEG data (measurements).
Event-Related Potentials (ERPs): we want to know whether we can find brain activity that is reliably related to the cognitive processes of interest. For this, the single-trial EEG is far too noisy: there is a lot of variance between sessions from the same participant, but also between participants. ERPs are therefore obtained by averaging the EEG over many trials, time-locked to an event, so that random noise cancels out. This is where measurements come in. Different aspects of the ERP component of interest can be analysed:
- Peak amplitude (used in 70% of studies)
- Area under the curve (used in 20%)
- Peak-to-peak (used in 10%)
However, there is no clear rule, and results might differ between measures, e.g.:
- Peak amplitude: A > B
- Area under the curve: B > A
- Peak-to-peak: A = B
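The three measures can be computed directly from an averaged waveform. The sketch below uses a synthetic negative-going component and an illustrative 150-250 ms measurement window (sampling rate, window, and component shape are all assumptions):

```python
import numpy as np

fs = 250.0                          # assumed sampling rate (Hz)
t = np.arange(-0.1, 0.5, 1 / fs)    # time relative to stimulus (s)

# Synthetic ERP: a negative component peaking near 200 ms (illustration only)
erp = -3.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))

# Measurement window, e.g. 150-250 ms post-stimulus
win = (t >= 0.15) & (t <= 0.25)

peak_amplitude = erp[win].min()                 # most negative point
area_under_curve = np.sum(erp[win]) / fs        # signed area (rectangle rule)
peak_to_peak = erp[win].max() - erp[win].min()  # max minus min in window
```

Note that for a negative component the "peak" is the minimum; for positive components one would take the maximum instead.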
Describe the third step of analysing EEG data (artefacts).
Eye blinks and movements have a strong impact on the EEG signal because the eye can be regarded as a dipole itself. Other artefacts include sweating and electrical noise (from the room). Signals originating from the eye will contaminate the signal of interest, and unfortunately will be much larger! These signals can be recorded by placing electrodes next to and under the eye to capture horizontal and vertical eye movements. Eye-related signals can then be removed by excluding contaminated trials, or by mathematical algorithms such as Independent Component Analysis (ICA).
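Excluding contaminated trials can be sketched as a simple amplitude-threshold rejection on synthetic epoched data; the 100 µV peak-to-peak threshold is a common rule of thumb, not a fixed standard, and the simulated blinks are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical epoched data: 20 trials x 300 samples (values in µV)
trials = rng.standard_normal((20, 300)) * 10

# Simulate eye blinks on two trials: large, slow positive deflections
for i in (3, 11):
    trials[i] += 150 * np.exp(-((np.arange(300) - 150) ** 2) / (2 * 20 ** 2))

# Reject any trial whose peak-to-peak amplitude exceeds the threshold
threshold = 100.0
ptp = trials.max(axis=1) - trials.min(axis=1)
keep = ptp <= threshold
clean_trials = trials[keep]
```

In practice, ICA is often preferred because it removes the ocular component while keeping the trial, rather than discarding the whole epoch.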
Who initially invented EEGs?
Hans Berger detected the first EEG signal in 1924 with electrodes attached to the scalp of a human and reported the results in 1929. Berger initially studied medicine because he was convinced that there is "psychic energy", which might allow for telepathy. Berger also first described the alpha rhythm - when people closed their eyes, the electrical signal was not constant, but it varied with a characteristic frequency of 8-13 Hz. Initially, he used two electrodes, one attached to the front of the head and one to the rear and recorded the potential (i.e. voltage) difference between them.
Describe the Woodman and Luck (1999) study, and how they used ERPs.
They asked their participants to search for a target, e.g., a coloured square that is open to the left. They were interested in whether people search all locations in parallel (all at the same time) or in a serial fashion (one after the other). Their idea was that the N2pc could help answer this question: only if people search in a serial fashion should attention switch from one hemifield to the other until the target is found; if search is parallel, nothing should change. In order to get people to attend to one hemifield first, they manipulated the probability that a specific colour was the target. One colour had a 75% probability (C75), and another had a 25% probability (C25). This prompted participants to very quickly attend to the more likely colour first, and the researchers could monitor attention while participants scanned the visual field. All findings supported the serial-search hypothesis.
Describe the Gehring et. al (1993) study, and how they used ERPs.
They investigated the ERN (error-related negativity) and asked whether there is a cognitive mechanism for the detection of, and compensation for, errors. The ERN is a negative deflection of up to 10 μV in amplitude observed at central electrodes ~80-100 ms after an erroneous response. They asked their participants to emphasise accuracy or speed in a simple flanker task in which participants had to respond to the central letter on the screen. Overall, they found a clear ERN on incorrect trials in comparison to correct trials. The ERN was strongest when people emphasised accuracy and weakest when they emphasised speed. But is the ERN indicative of compensating for errors?
- The greater the ERN, the lower the response force (trying to correct the error)
- The greater the ERN, the higher the probability of getting it right on the next trial (successful learning from errors)
- The greater the ERN, the slower the response on the next trial (successful learning from errors)
What is the EEG used for?
We can use the signals from EEGs to learn something about cognition when people perform tasks.