NACS642 SPRING 2017 MEG LAB

In this lab, you will explore MEG data from an auditory paradigm that was collected at UMD. This 'localizer' scan elicits robust M100 auditory responses between 100 and 200 ms. Low-frequency tones (125 Hz) and high-frequency tones (1000 Hz) were presented in random order, each with a duration of 400 ms and a variable interstimulus interval. Since the M100 response is more robust when attention is engaged, participants are usually asked to silently count the tones, which are presented at unpredictable times against a quiet background.
The data for this project can be downloaded via ling.umd.edu/~ellenlau/courses/nacs642/NACS642_MEG_Lab_2tone_data.zip
The software:
You will use the MNE-python software package, which is free MEG analysis software developed at the Martinos Center at Massachusetts General Hospital. You will also be using Eelbrain, a Python package maintained by Christian Brodbeck (in Jonathan Simon’s lab) that is built on top of MNE. The MNE-python package is an open-source, evolving successor to the original (also free) MNE C package developed by Matti Hamalainen. Although this lab is written to be doable with just the Python tools, a few functions are still easier with the original C package.

Tech Support:
During the MEG lab tour you will meet our MEG lab manager, Anna Namyst. Feel free to contact her () if you run into trouble with the preprocessing steps or if you have questions about setting up the MEG lab project.

Technical Note:

You have the option of completing this assignment on your own computer or via one of the analysis computers.

  1. Personal Laptop

If you’re reasonably comfortable with computers, we recommend trying this first, because it will put MEG analysis ‘at your fingertips’ for the class experiment. Installing the software requires executing terminal commands, but all the instructions are provided. MNE was originally developed for Mac and Linux operating systems. MNE-python may be compatible with Windows, but this is still buggy, so we wouldn’t recommend it unless you are feeling adventurous.

  2. Account on Cephalopod
    You may complete this lab on the Cephalopod analysis machine in MMH 3416, which already has the relevant software installed. You can get a user account by emailing Anna ().

On any Mac, you should be able to easily log into the machine remotely using Screen Sharing, so that you do not have to come to MMH to work on the assignment. This could be your own laptop, or the student laptop in the MEG Lab (this will also put you in easy proximity to the MEG lab in case you have questions – coordinate use with Anna).

Setting up remote access (Mac only)

Note: You need to have Screen Sharing enabled. Go to Applications->System Preferences->Sharing and turn it on by ticking the box for Screen Sharing on the left panel.

After receiving a new user account, open Finder. Click on the ‘Go’ tab, then ‘Connect to server’. Enter the server address vnc://files.ling.umd.edu , and after entering your new credentials, select ‘Connect to a virtual display’. You should be directed to a new virtual desktop on the cephalopod machine.


Important note: Do not select “share screen with linguser”. This can create connection issues for all users. Be careful to always select “continue to my profile”.

On most Mac operating systems, you can always log in remotely. However, in a few recent versions, you need to be logged in to the physical machine in MMH first. If you have trouble logging in, email Anna and she can log you in to the physical machine.

Occasionally Cephalopod needs to be restarted. This is fairly rare, but when it is necessary, it will happen at either 9am, 12pm, or 3pm. Therefore, you should get in the habit of saving your work right before those times, just in case.

Setting up your computer for analysis:
This lab requires Python 2.7, managed by Anaconda. If you are working on the MEG lab PC, Python, Anaconda, and Eelbrain are already installed for you, and you may skip ahead to step 3 (updating Eelbrain).
‘$’ indicates a command to run in terminal.

  1. Install Anaconda


Be sure to download the Python 2.7 version of Anaconda, and the correct version for your operating system. During installation, check the box confirming that you want Anaconda to become your default Python 2.7 interpreter.

  2. Install Eelbrain via Anaconda

From the terminal/command line (‘$’ indicates a command to type; leave out the ‘$’ itself):
$ conda config --append channels conda-forge

$ conda config --append channels christianbrodbeck
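The two commands above only add the conda channels that host the packages. Assuming the standard conda workflow (this install step appears to have been dropped from the handout), the actual install command would then be:

$ conda install eelbrain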

  3. If you return to this project later, update Eelbrain in case your installed version has become outdated:

$ conda update eelbrain

  4. Install other tools needed by MNE-python

$ conda install wxpython numpy scipy matplotlib mayavi h5py pyqt=4.10 pygments=1 pip
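Once everything has installed, here is a quick sanity check you can optionally run (if the import succeeds without an error, the packages are available to Python):

$ python -c "import mne, eelbrain; print(mne.__version__)"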

After installation of the software is complete, follow the steps below to examine the data.

PART A: SENSOR DATA

In the Terminal, navigate to your subject’s data directory:

$ cd /Users/Anna/Documents/NACS-S17_Project/meg/R20xx

(If you are working on your own computer, cd to the corresponding directory wherever you unzipped the assignment data.)

Open your filtered data file with the following command:

$ mne browse_raw --raw ./raw/R20XX_Localizer-Filtered_raw.fif

1. The raw data should now appear on the screen. This is the raw data recorded during the localizer run. It was denoised using the de Cheveigne/Simon algorithm in Matlab and then converted from the native .sqd format used by the UMD system to the .fif format assumed by the MNE software. When you record data for the class project, you’ll need to do these extra steps. But for right now we won’t worry about them so you can focus on getting comfortable with MEG data.

a) The blue lines are the MEG data recorded at different sensors. Eyeblinks create big peaks in the data due to the electrical activity created by muscle movement, and these are of much greater amplitude than neural activity (so they need to be excluded from data analysis to increase the signal-to-noise ratio). Try to identify some eyeblinks in the data and take a screenshot.

b) Scroll along the data in time using the cursors at the bottom. Do you notice temporal regularities in the data? Include a screenshot (or several) for illustration. Do you have hypotheses about the source(s) of the effect(s)?

c) Anything else you find interesting about the data?

2. Toggle through the selections to examine the data in all the sensors.

a) How many sensors does the UMD system have in total?

b) What sensor is carrying information about the timing of the auditory stimuli?

3. Go back to the command line, enter the ipython environment, and load mne:

$ ipython

$ import mne

$ %matplotlib

You need to run the above commands every time you start a new session in the terminal (the last command may only be necessary for Mac OS).
Troubleshooting tip: If you do this assignment immediately after installation, you shouldn’t have any problems, because the commands above will set everything up for you. But if the MNE GUIs (such as kit2fiff, which you will use later in this lab) do not open, reset the matplotlib backend with these commands:

$ export QT_API=pyqt

$ ipython --matplotlib=qt
On Mac OS X, if you see black fields in the GUI, add this line to your bash profile:

$ export ETS_TOOLKIT=qt4

(For a brief explanation on editing your bash profile:
If you are using a Windows OS and still experiencing problems, talk to Anna about downloading the development version of Eelbrain, which has more support for Windows-related bugs.

Next, load the raw MEG data and plot it. Illustrate with a screenshot.

$ raw=mne.io.read_raw_fif('R2218_two-tones-raw.fif')

$ raw.load_data()

$ raw.plot()

a) Change the low-pass filter cutoff to 10 Hz:

$ raw_10 = raw.filter(l_freq=None, h_freq=10.0)
$ raw_10.plot()

How does the data change? Illustrate with a screenshot.

b) Change the low-pass filter cutoff to 40 Hz, which is a good default.

$ raw_40 = raw.filter(l_freq=None, h_freq=40.0)

$ raw_40.plot()
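As an aside, l_freq=None means that no high-pass filter is applied. If you ever wanted a band-pass filter instead (e.g. 1-40 Hz), something like the sketch below would work; the .copy() keeps the original raw object untouched. This is just a side note, not a required step for this lab.

$ raw_band = raw.copy().filter(l_freq=1.0, h_freq=40.0)

$ raw_band.plot()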

4. This localizer experiment sent an event code trigger for each tone presented (100 each of the two frequencies). You can see these codes in the raw plot by moving the channel scrollbar on the right all the way down to the channels labelled ‘MISC’. Unfortunately we have a couple of bad trigger channels which dominate the plot (we don’t use their codes), but if you scroll out in time you should be able to see some vertical lines, which are the triggers. Each trigger is a quick pulse that raises the voltage on the line and then lowers it again, so you see two lines for each event (the line going up and the line going down). Of course, we want to align the brain data to the onset of the event, which is the line going up.

Now we will run the mne-python command to extract these events into a variable that we can use for constructing the evoked average.

$ events = mne.find_events(raw_40)

Use the len() command to confirm that you found the right number of events:

$ len(events)

Now take a look at the first 50 rows of the event timing data

$ print(events[:50])

Based on the first few rows, about how much time separated the presentation of subsequent tones in this experiment?
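If you would rather compute this than read it off the printout, here is a quick sketch using numpy (the first column of the events array is the sample index of each trigger, so dividing the differences by the sampling rate converts them to seconds):

$ import numpy as np

$ isis = np.diff(events[:, 0]) / raw_40.info['sfreq']

$ print(np.median(isis))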

5. Now we want to create the evoked responses for each condition by averaging across all of the events from that condition. As you should be able to tell from the event data you just looked at, the two codes we’re interested in are 9 and 17, so we tell the software which codes these are and give them labels.

$ event_id = dict(tone1=9, tone2=17)
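You can also double-check that these really are the only two codes present, and that there are 100 events of each (a quick check using numpy, which was installed with the other tools):

$ import numpy as np

$ print(np.unique(events[:, 2], return_counts=True))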

We want to average across a window that includes the relevant time right around the tone onset. A standard choice might be to start from 100ms before the tone and extract all the data up to 500ms after the tone.

$ tmin = -.1

$ tmax = .5

We want to shift the data on the y-axis so that the activity prior to the stimulus of interest is centered around zero. This is called baselining the data: we are saying that we want to describe how the post-stimulus activity differs from whatever ongoing brain activity was happening beforehand. The code below sets the baseline period to run from the left edge of the time-window up to 0, the point where the stimulus was presented.

$ baseline = (None,0)
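Concretely, baselining subtracts the mean of the pre-stimulus samples from every channel of every epoch. Here is a toy numpy illustration of the idea, using a fake single-channel ‘epoch’ with a constant offset of 5 (for intuition only; this is not part of the analysis pipeline):

$ import numpy as np

$ times = np.arange(-0.1, 0.5, 0.001)

$ data = np.random.randn(times.size) + 5.0

$ data_corrected = data - data[times < 0].mean()

After this, the pre-stimulus portion of data_corrected is centered around zero, which is exactly what the baseline argument does to the real epochs below.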

We want to identify the sensors to plot. For the moment, we can pick all of the MEG data sensors.

picks=mne.pick_types(raw_40.info, meg=True)

Now that we’ve set all of our parameters, here is the code that actually extracts all of those time-windows of interest (or ‘epochs’) from the continuous raw data, and baselines them. This is called ‘epoching’.

epochs = mne.Epochs(raw_40, events, event_id, tmin, tmax, proj=False, picks=picks, baseline=baseline)

If you now type ‘epochs’, it will return some basic information about the epoch data structure. You should confirm that you got 100 epochs for each condition.
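One way to check the counts directly (a small optional sketch; indexing the epochs object by condition name returns just that condition’s epochs):

len(epochs['tone1'])

len(epochs['tone2'])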

Now this is the code that averages those epochs together for each condition and plots them.

evoked1 = epochs['tone1'].average()

evoked2 = epochs['tone2'].average()

evoked1.plot()

evoked2.plot()

This shows the average response at all sensors in one picture. You can also add fancier options to match the y-axes, tag the sensors with colors, and plot the global field power (GFP; this is basically the standard deviation of all the sensor values at each time point, an unsigned measure of how far the values tend to be from baseline at that time). Illustrate this with a screenshot.

evoked1.plot(spatial_colors=True, gfp=True, ylim=dict(mag=[-300, 300]))

evoked2.plot(spatial_colors=True, gfp=True, ylim=dict(mag=[-300, 300]))
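If the GFP idea feels abstract, here is a rough by-hand version for intuition only (evoked1.data is a sensors-by-timepoints array; use the built-in gfp=True option for your actual figures):

import numpy as np

import matplotlib.pyplot as plt

gfp1 = evoked1.data.std(axis=0)

plt.figure()

plt.plot(evoked1.times, gfp1)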

Approximately at what time does the M100 response appear to be peaking?

We can also generate topographical field maps for selected time-points to see what the distribution looks like. For example, here we can plot the field maps at 50ms, 100ms, and 150ms. Include a screenshot.

evoked1.plot_topomap(times=[.05, .10, .15],vmin=-300,vmax=300)

evoked2.plot_topomap(times=[.05, .10, .15],vmin=-300,vmax=300)

And we’d probably like to plot the waveforms of the two conditions against each other. This takes just a few extra steps because we need to get the two evoked data objects into a single object to give to the plotting function. This is a ‘dictionary’ object in python.

evoked_dict = dict()

evoked_dict['low'] = evoked1

evoked_dict['high'] = evoked2

colors=dict(low="Crimson",high="CornFlowerBlue")

If we want to get a birds-eye view of all the data here, we can plot the GFP of the two conditions against each other with the following command. Include a screenshot of the output. At approximately what timepoint do the two waveforms diverge?

mne.viz.plot_compare_evokeds(evoked_dict, colors=colors, picks=picks, gfp=True)

We can also plot the two hemispheres separately by creating a set of ‘picks’ that includes the sensors from each hemisphere. We won’t do that here, but ask me or Anna Namyst if you’d like the list of left and right hemisphere sensors for your project.
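For reference, here is a sketch of what that would look like once you have the channel names. The names below are made-up placeholders, not the real UMD left-hemisphere list:

left_chs = ['MEG 001', 'MEG 002', 'MEG 003']

left_picks = mne.pick_channels(evoked1.ch_names, include=left_chs)

mne.viz.plot_compare_evokeds(evoked_dict, colors=colors, picks=left_picks, gfp=True)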

Check out this page if you’re interested in other ways to visualize the data, e.g. for full sensor plots.

We will not be focusing on statistical methods in this class, especially because you won’t be collecting enough data for most standard methods to be appropriate. However, there are many tools in the mne-python and eelbrain packages for doing statistical analysis, which you can explore if you are interested!

PART B: COREGISTRATION AS A PREREQUISITE FOR SOURCE LOCALIZATION
In the next part of this lab you will combine the raw data file with the marker coil measurements and digitized head shape data, which is necessary for doing source localization. In the course of doing this we are going to ‘jump back’ to an earlier point in the process and show you how to convert the .sqd file that you get from the MEG lab into the .fif format expected by the MNE software. The simplest way to do this is to use the eelbrain tools, which we illustrate in the following instructions.

Remember that we will tape marker coils on participants’ heads and then send a signal through them before starting the experiment, which will tell us where their head is positioned in the machine. After our experiment is over but before the participant leaves, we can take another measurement to try to evaluate whether there was net movement during the experiment. Remember that we will also digitize the shape of the participant’s head. All these files are what you will be providing to the software in the following conversion step.

  1. Open the MNE kit2fiff GUI from the terminal (not ipython!):
    $ mne kit2fiff
    On Windows:

$ ipython

In [1]: import mne

In [2]: mne.gui.kit2fiff()

  2. Load your marker measurements:

Navigate to your assignment data directory. The necessary files are in /data/raw.

In the Source Marker 1 section, load the pre-test marker .sqd file in the File field.

In the Source Marker 2 section, load the post-test marker .sqd file in the File field.

Click the central grey window and the markers should appear. You might need to click on ‘Front’ or change the scale to see better.

3. Load your subject data and digitization files in the Sources section:

In the Data field, load subject_experiment.sqd.

In the Dig Head Shape field, load subject_experiment.hsp.

In the Dig Points field, load subject_experiment.elp.

4. Under the dark grey box, click Front so that the head shape appears. Click Right and Left to see that the head moves in the correct direction.
What do you notice about this data?

5. Search for your trials in the Events section and save the file.
The triggers for this data are 163 (low frequency tones) and 164 (high frequency tones). Enter your two triggers, separated by a comma, and click “Find Events”. Save your transformed .sqd file as a .fif file with a different name, and make sure to use this file in the following steps.
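For completeness: the same conversion can also be scripted with mne-python instead of the GUI. Below is a rough sketch, with placeholder filenames that you would replace with your actual marker, data, and digitization files (the GUI route above is the one we recommend for this lab):

import mne

raw_kit = mne.io.read_raw_kit('subject_experiment.sqd', mrk=['marker_pre.sqd', 'marker_post.sqd'], elp='subject_experiment.elp', hsp='subject_experiment.hsp')

raw_kit.save('subject_experiment-raw.fif')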

Coregistration – adapted from Phoebe Gaston’s Eelbrain Guide
Reference Phoebe’s guide if you want to explore source localization in greater detail.
Now you will coregister the MEG data with the subject’s structural MRI data to use for source localization later. In the future, if you do not have MRI data for your subject, you can use the average brain. For this assignment, the subject’s MRI data is available to you, in the /mri/ directory.

1. Open the coregistration GUI (from regular terminal, not ipython):

a. Mac OS only:

$ mne coreg

b. Windows (or Mac OS):

$ ipython

$ import mne

$ mne.gui.coregistration()
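If the GUI opens without your data loaded, you can also point it at the files directly. Here is a sketch, where the arguments are placeholders for the .fif file you saved in the kit2fiff step and the assignment’s /mri/ directory:

$ mne.gui.coregistration(inst='your_subject-raw.fif', subjects_dir='./mri')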