Approximate time needed to complete the tutorial: 60 min.
The commands on this page ensure that the following exercise has all the necessary scripts and does not interfere with variables or graphics from previous exercises. If you come directly from another exercise, you may in principle skip certain commands, but it is safe to execute them again.
Start MATLAB and find your current working directory using 'pwd'.
>> pwd
In case you have just been working on another exercise, clean up your workspace and remove the figures:
>> clear all % Deletes all variables in your workspace.
>> close all % Closes all figure windows.
Go to the drive 'C:\' using 'cd'. If you do not have write access there, try another drive, e.g., 'H:\'.
>> cd C:\
Make a directory 'Tutorial' using 'mkdir'.
>> mkdir('Tutorial')
Go to this directory using
>> cd('Tutorial')
and make a directory 'Scripts'.
>> mkdir('Scripts')
To run the rest of this exercise, you have to download some MATLAB functions from:
nbtwiki.net/tutorial.power_spectra_wavelet_analysis_and_coherence.zip
And unzip the file to your folder: 'C:\Tutorial\Scripts'
Also download the file 'MEG_data.mat' from: http://sourceforge.net/projects/nbtoolbox/files/Tutorials/MEG_data.mat/download
>> addpath C:\Tutorial\Scripts
Go to the directory with the data and load the data using the 'load' command.
>> load MEG_data
The variables 'RSM' and 'LSM' (“right and left sensorimotor cortex”) are vectors containing the 360,000 samples obtained with an MEG sensor over the right and left sensorimotor cortex, respectively, during a 20 min recording at a sampling frequency of 300 Hz.
We now turn to the classical analysis of longer segments of ongoing activity. To a large extent, this is the activity that we averaged away in the tutorial on averaged evoked responses: oscillations that are not phase-locked to the stimulus onset.
Let’s quantify the frequency distribution of 'power' in the brain and noise signals using 'pwelch.m'. The 'pwelch' function is a high-level built-in MATLAB function based on the FFT. Its inputs are the signal, the window length, the number of overlapping samples, the number of FFT points (nFFT), and the sampling frequency.
The nFFT is a valuable parameter especially for long data, because (as we learned in the tutorial on The Discrete Fourier Transformation (DFT): Definition and numerical examples) you get as many frequency components in the frequency domain as you have samples in your input data in the time domain. Thus, for signals with many samples (hundreds of thousands or millions) your frequency resolution in a “raw” FFT is unreasonably (or unphysiologically!) high, which leads to a very noisy power spectrum.
Thus, a suitable frequency resolution in neurophysiological data with oscillations is ~0.3 Hz, which is what the parameters below give (300 Hz / 1024 points ≈ 0.29 Hz):
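To make the arithmetic explicit: the frequency resolution of an FFT-based spectrum is simply the sampling frequency divided by the number of FFT points. A quick check of the values used in this exercise (shown in Python purely for illustration):

```python
Fs = 300       # sampling frequency (Hz), as used throughout this tutorial
nFFT = 2**10   # 1024 FFT points

resolution = Fs / nFFT
print(resolution)         # 0.29296875 Hz: close to the ~0.3 Hz target

# A "raw" FFT over all 360,000 samples would instead give
raw_resolution = Fs / 360_000
print(raw_resolution)     # ~0.0008 Hz: needlessly fine, hence a noisy spectrum
```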
>> Fs = 300;
>> win_PSD = 5*Fs;
>> noverlap_PSD = []; % Default of 50% overlap is taken with 'noverlap' empty.
>> nFFT = 2^10;
>> [PSD_RSM, f] = pwelch(RSM, win_PSD, noverlap_PSD, nFFT, Fs);
>> [PSD_RSM_noise, f] = pwelch(RSM_noise, win_PSD, noverlap_PSD, nFFT, Fs);
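If you prefer to follow along outside MATLAB, SciPy's `welch` is the closest analogue to 'pwelch'. One difference to be aware of: SciPy requires the FFT length to be at least the window length, so this sketch uses a 1024-sample window rather than the 5-second window above, and white noise stands in for the MEG data:

```python
import numpy as np
from scipy.signal import welch

Fs = 300
rng = np.random.default_rng(0)
x = rng.standard_normal(60_000)   # stand-in signal (white noise)

nFFT = 2**10
# window length equals nFFT here (SciPy needs nperseg <= nfft);
# noverlap=None gives the default 50% overlap, as in pwelch
f, PSD = welch(x, fs=Fs, nperseg=nFFT, noverlap=None, nfft=nFFT)

print(f[1] - f[0])   # frequency resolution: Fs/nFFT, about 0.29 Hz
print(len(f))        # nFFT/2 + 1 = 513 one-sided frequency bins
```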
The PSD function ('pwelch') outputs the spectral power 'PSD_RSM' at the frequencies 'f'. Let’s plot the power spectra:
>> plot(f, PSD_RSM)
>> hold on % Keep the curve when plotting subsequent curves.
>> plot(f, PSD_RSM_noise, 'r')
>> grid % Add gridlines to the plot.
>> axis([0 60 0 5e3]) % Scale the x and y axes.
>> xlabel('Frequency [Hz]') % Add labels to axes.
>> ylabel('Power spectrum')
The blue line is the power spectrum of the RSM brain data and the red line is the noise recording. The meaning of 'hold on', 'grid', 'axis', 'xlabel' and 'ylabel' should be clear from inspection of the figure window as you give the commands.
The amplitude spectrum is the square-root of the power spectrum and has the advantage over power spectra that large peaks are not “over-emphasized”. Use the “arrow-up” key to quickly plot the amplitude spectrum in a new window by slight modifications to the commands above:
>> figure
>> plot(f, PSD_RSM_noise.^0.5, 'r')
>> hold on
>> plot(f, PSD_RSM.^0.5)
>> axis([0 60 0 75])
>> xlabel('Frequency [Hz]')
>> ylabel('Amplitude spectrum')
Amplitude and power spectra are frequently plotted in logarithmic scales. This you can do by replacing 'plot' with 'loglog'.
A very high frequency resolution makes the power spectrum noisy, which is particularly inconvenient when measuring peak amplitudes or frequencies. To see this, try setting nFFT = 2^18 and using a long window (recall: a large number of samples allows for many frequency components):
>> nFFT = 2^18;
>> [PSD_RSM, f] = pwelch(RSM, length(RSM), [], nFFT, Fs);
>> f % AND one time “arrow-up”.
>> p % AND two times “arrow-up”.
Note how few key strokes are needed to perform new calculations and visualizations!
Question: Do you see the problem with determining peak frequencies?
Power spectra clearly suffer from a “time localization” problem: the frequency content of a signal may change over time, which leads to a smearing or broadening of the power spectrum; and if oscillations are present only during parts of the recording, the power spectrum will not alert you to such important changes. An ideal method would allow different window sizes depending on the time scales one is interested in.

Wavelet analysis attempts to solve these problems by decomposing a time series into time-frequency space. With wavelet analysis, you get information on both the amplitude and the phase of any oscillatory signal within the series, and on how these amplitudes and phases vary with time.

Wavelet transformation sounds fancier and more complicated on first encounter than it needs to. You may think of it as repeated band-pass filtering at multiple overlapping frequencies. To visualize the result compactly, it is usually plotted as a 2D color image, a so-called time-frequency representation (or “TF-plot”).
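To make the band-pass-filtering picture concrete, here is a minimal Python sketch of a complex Morlet wavelet transform at a single frequency. This illustrates the principle only; it is not the code inside 'Wavelet_1ch'. For a unit-amplitude 10 Hz sinusoid, the amplitude 'abs(W)' comes out flat at ~1 away from the edges, and 'angle(W)' tracks the oscillation's phase.

```python
import numpy as np

fs = 300
t = np.arange(0, 4, 1/fs)
sig = np.sin(2*np.pi*10*t)   # a pure 10 Hz test oscillation

def morlet_transform(x, fs, f0, n_cycles=7):
    """Convolve x with a complex Morlet wavelet centred on f0 (Hz)."""
    sigma_t = n_cycles / (2*np.pi*f0)              # temporal width of the Gaussian
    tw = np.arange(-4*sigma_t, 4*sigma_t, 1/fs)    # wavelet support
    wavelet = np.exp(2j*np.pi*f0*tw) * np.exp(-tw**2 / (2*sigma_t**2))
    # normalise so a unit-amplitude sinusoid at f0 yields abs(W) of ~1
    wavelet /= np.abs(wavelet).sum() / 2
    return np.convolve(x, wavelet, mode='same')

W = morlet_transform(sig, fs, 10.0)
amp = np.abs(W)       # amplitude envelope: ~1 away from the edges
phase = np.angle(W)   # instantaneous phase of the 10 Hz component
```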
Let's compute the time-frequency representation of the spontaneous data just used in the power spectrum analysis. The 20 min of data is too much for our computers to work with, because the wavelet transformed data take up a lot of memory (we make a copy of our data for each new frequency being analyzed). Therefore, we focus on just 20 s of data:
>> Frq_low = 0.5;
>> Frq_high = 30;
>> Frq_step = 0.1;
>> [W,t,frq] = Wavelet_1ch(RSM(30*Fs:50*Fs), Fs, Frq_low, Frq_high, Frq_step);
The output of Wavelet_1ch is a matrix of complex numbers 'W' with a row for each frequency and a column for each sample (time point); the amplitude at a given frequency and time is obtained with 'abs(W)' and the phase with 'angle(W)'. For now we focus on the amplitude, which we plot in color because the matrix essentially has three dimensions (time, frequency and amplitude).
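As a reminder of what 'abs' and 'angle' extract from a complex number (Python/NumPy illustration with a made-up coefficient):

```python
import numpy as np

w = 3 + 4j            # one (made-up) complex wavelet coefficient
print(abs(w))         # 5.0 -> the amplitude of the oscillation
print(np.angle(w))    # ~0.927 rad -> the phase of the oscillation
```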
>> figure
>> imagesc(t, frq, flipud(abs(W))); axis xy
>> colorbar
'flipud' is just to flip the matrix upside down to plot the frequencies from low to high instead of high to low.
Note the burst-like appearance of the oscillations. Clearly, the wavelet gives a richer picture of the oscillatory dynamics than the power spectrum, which lumps the ~10 Hz activity into a single time-averaged peak. To bring out the bursts more clearly, fix the color scaling:
>> imagesc(t, frq, flipud(abs(W)),[0 1e3]); axis xy
Alternatively, you may want to plot the time-frequency plot of power instead of the amplitude (cf. the differences between power and amplitude spectra) by squaring the wavelet transformed data:
>> imagesc(t, frq, flipud(abs(W).^2),[0 3e6]); axis xy
which emphasizes the time-frequency location of large oscillations.
Time-frequency (TF) analysis is nowadays (since around 1999) also widely used as a complement to classical averaged evoked response waveforms. Let's try this by applying the wavelet analysis to a somatosensory evoked response (which you may just have computed in a previous tutorial). Compute and plot the evoked response using the following commands:
>> Pre = 0.5;
>> Post = 1.5;
>> [M] = Single_trial_matrix(RSM, Trig, Fs, Pre, Post);
>> Baseline_interval = 0.4; % I suggest a baseline interval of 400 ms.
>> [M_baseline] = Single_trial_baseline(M, Fs, Pre, Baseline_interval);
>> t_ER = -Pre:1/Fs:Post-1/Fs; % Time vector for evoked response (ER).
>> figure
>> subplot(2,1,1)
>> plot(t_ER, mean(M_baseline)) % Plot the classical evoked response.
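For intuition, the trial matrix that 'Single_trial_matrix' presumably builds can be sketched in a few lines of NumPy: cut out a window from -Pre to Post around each trigger sample and stack the windows as rows. The signal and trigger positions below are made up for the sake of the example:

```python
import numpy as np

fs = 300
pre, post = 0.5, 1.5                     # seconds, as in the tutorial
rng = np.random.default_rng(0)
signal = rng.standard_normal(60_000)     # stand-in continuous recording
trig = np.array([3_000, 9_000, 15_000])  # hypothetical trigger samples

n_pre, n_post = int(pre*fs), int(post*fs)
# one row per trial, spanning -pre .. post around each trigger
M = np.stack([signal[s - n_pre : s + n_post] for s in trig])
print(M.shape)    # (3, 600): trials x samples
```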
We now wavelet transform/filter the evoked response.
>> Frq_low = 0.1;
>> Frq_high = 70;
>> Frq_step = 0.1;
>> [W,t,frq] = Wavelet_1ch(mean(M), Fs, Frq_low, Frq_high, Frq_step);
Whereas the broad-band evoked response had just one baseline, it is now natural to baseline each individual frequency band to see the changes in oscillatory amplitude at all frequencies relative to the baseline activity at those frequencies.
>> [W_baseline] = Single_trial_baseline(abs(W), Fs, Pre, Baseline_interval);
[The script was made to baseline individual trials in a “trial x time” matrix, but the procedure is the same for baselining individual frequencies in the “frequency x time” matrix].
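Per-frequency baselining amounts to subtracting, from each frequency row, that row's mean amplitude over the pre-stimulus baseline samples. In NumPy terms (a sketch of the idea with a toy matrix, not the actual 'Single_trial_baseline' code):

```python
import numpy as np

# Toy "frequency x time" amplitude matrix: 2 frequency rows, 4 samples.
W_amp = np.array([[1.0, 1.0, 5.0, 5.0],
                  [2.0, 2.0, 2.0, 2.0]])
n_base = 2   # the first two samples are the pre-stimulus baseline

# Subtract each frequency row's own baseline mean from the whole row.
W_base = W_amp - W_amp[:, :n_base].mean(axis=1, keepdims=True)
print(W_base)
# [[0. 0. 4. 4.]   <- amplitude rises by 4 after the "stimulus"
#  [0. 0. 0. 0.]]  <- no change relative to baseline
```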
>> subplot(2,1,2)
>> imagesc(t_ER, frq, flipud(abs(W))); axis xy
One of the benefits of this analysis over the classical evoked response waveform is that you do not need to decide a priori the frequency content of interest and, thus, your filter settings. Here, you just filter at all frequencies and one may apply a statistical analysis to amplitudes in the time-frequency plane instead of to a waveform where the amplitude of distinct frequencies may overlap in time and blur the component of interest.
Stimulus-related changes in neuronal activity are not necessarily strictly time locked to the onset of a stimulus and the dynamics of oscillatory components may not be reflected in classical averaged evoked response waveforms nor in their time-frequency representation because a slight jitter in the onset of the oscillation will make negative and positive phases from different trials cancel out. See, e.g., Figure 1 in Tallon-Baudry & Bertrand (1999), Oscillatory gamma activity in humans and its role in object representation, Trends in Cognitive Sciences, 3(4), 151-161
However, if you first compute the amplitude in each frequency band and then average the amplitudes (which are all positive!) across trials, you will identify changes in amplitude of different oscillations relative to the baseline irrespective of their phases. We will do this, but it requires a wavelet transformation of the entire signal (1200 s) instead of just the evoked response (~2 s), so we increase the frequency steps and decrease the range of frequencies in order to fit the matrix in the memory (RAM) of the computer:
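The phase-cancellation argument is easy to demonstrate: two trials containing the same oscillation in opposite phase average to nothing, while their amplitudes average to the oscillation's amplitude. A Python sketch (using rectified raw signals for simplicity; in the tutorial the amplitudes come from the wavelet transform):

```python
import numpy as np

fs, f0 = 300, 10
t = np.arange(0, 1, 1/fs)
# two "trials" with the same 10 Hz oscillation but opposite phase
trial1 = np.sin(2*np.pi*f0*t)
trial2 = np.sin(2*np.pi*f0*t + np.pi)

evoked = (trial1 + trial2) / 2    # average the signals first: cancels to ~0
print(np.abs(evoked).max())       # ~0: the oscillation vanishes

# average the (always positive) amplitudes instead: the oscillation survives
mean_amp = (np.abs(trial1) + np.abs(trial2)) / 2
print(mean_amp.max())             # ~1: the amplitude is preserved
```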
>> Frq_low = 3;
>> Frq_high = 30;
>> Frq_step = 1;
>> [W,t,frq] = Wavelet_1ch(RSM, Fs, Frq_low, Frq_high, Frq_step);
We now build a 3D matrix of the dimensions (#trials x #frequencies x #samples) containing the TF responses of individual trials. This is done with a slightly modified version of the script from the tutorial on evoked response waveforms:
>> [WM] = Single_trial_matrix_TF(W, Trig, Fs, Pre, Post);
>> whos
Compute the mean amplitude across trials for each time-frequency sample and baseline correct each frequency line:
>> [W_base] = Single_trial_baseline(squeeze(mean(WM)),Fs,Pre,Baseline_interval);
'mean' takes the average along the 1st dimension of the 3D matrix 'WM', which essentially makes the 3D matrix 2D. MATLAB, however, only treats the matrix as 2D after the command 'squeeze'.
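The same step in NumPy terms, for readers keeping a mental model of the shapes (random numbers stand in for the wavelet amplitudes, and the sizes are made up):

```python
import numpy as np

n_trials, n_freqs, n_samples = 40, 28, 600
WM = np.random.rand(n_trials, n_freqs, n_samples)   # trials x freq x time

# Average across trials (the 1st dimension). NumPy drops the averaged
# axis automatically, so no 'squeeze' is needed here; MATLAB's mean(WM)
# keeps a singleton 1st dimension until you call squeeze.
mean_TF = WM.mean(axis=0)
print(mean_TF.shape)    # (28, 600): a plain frequency x time matrix
```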
Let’s compare the result with the TF-plot of the averaged evoked response.
>> figure
>> imagesc(t_ER, frq, flipud(W_base), [-300 100]); axis xy
>> colorbar
Note the values and sign on the color scale.
Questions
The phenomenon most clearly seen in the ~10 Hz range is a decrease in amplitude relative to baseline after the stimulus. What is this phenomenon called?
Coherence is a measure of phase correlations between oscillatory signals of identical frequency. The coherence between two oscillatory signals with a stable phase relation attains the maximum value of 1, whereas the coherence of oscillations with highly random phase relations is 0. Spatially distinct neuronal oscillations are unlikely to align their phases with each other if there is no interaction between the neuronal populations. Coherence is therefore a popular method for detecting neuronal sources that work together in a spatially distributed network and for determining how strongly they couple (“collaborate”). It should be mentioned, however, that in many recording situations (especially scalp EEG or MEG) different channels may pick up activity from the same source, which leads to significant coherence that should nevertheless not be interpreted as an interaction between distinct neuronal sources. In real data, you cannot expect coherence around 1 over long periods of time and, conversely, the coherence will only approach zero when the signals come from neuronal populations that are not interacting.
To see whether left and right sensorimotor regions communicate, we compute the coherence between the signals from left and right sensorimotor regions.
>> nFFT = 2^9;
>> overlap = nFFT/2;
>> [Coherence_LSM_RSM, f] = mscohere(LSM, RSM, hanning(nFFT), overlap, nFFT, Fs);
>> plot(f, Coherence_LSM_RSM, 'k')
>> axis([0 60 0 0.1])
>> xlabel('Frequency [Hz]')
>> ylabel('Coherence')
Note the peak at around 12 Hz.
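An analogous computation with SciPy's `coherence` (the counterpart of 'mscohere'), using two synthetic signals that share a phase-locked 12 Hz component plus independent noise: the coherence comes out high only near 12 Hz.

```python
import numpy as np
from scipy.signal import coherence

Fs = 300
t = np.arange(60_000) / Fs              # 200 s of synthetic data
rng = np.random.default_rng(0)

shared = np.sin(2*np.pi*12*t)           # phase-locked 12 Hz component
x = shared + 0.5*rng.standard_normal(t.size)
y = np.sin(2*np.pi*12*t + 1.0) + 0.5*rng.standard_normal(t.size)

nFFT = 2**9
f, Cxy = coherence(x, y, fs=Fs, nperseg=nFFT, noverlap=nFFT//2)

print(Cxy[np.argmin(np.abs(f - 12))])   # near 1: stable phase relation
print(Cxy[(f > 30) & (f < 60)].mean())  # near 0: independent noise
```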
To verify that the 12 Hz coherence peak reflects brain activity and not the recording system, compute the coherence between the noise signals:
>> [Coherence_LSM_RSM_noise, f] = mscohere(LSM_noise, RSM_noise, hanning(nFFT), overlap, nFFT, Fs); % “Arrow-up” and replace LSM/RSM with LSM_noise/RSM_noise.
Hold the previous plot ('hold on') and plot the noise coherence in the same figure for comparison.
Not surprisingly, the coherence at 50 Hz (the power-line frequency) is high even in the noise recordings.
Also try to shift the data of the two channels in time (e.g., compute the coherence between the first half of the data in RSM with the second half of the data in LSM).
Question:
What do you see and why?
Ask if you are in doubt how to shift the samples in the two signals relative to each other.