NACS642 SPRING 2018 MEG LAB

In this lab, you will explore MEG data from an auditory paradigm that was collected at UMD. This 'localizer' scan elicits robust M100 auditory responses between 100 and 200 ms. Low-frequency tones (125 Hz) and high-frequency tones (1000 Hz) were presented in random order, each with a duration of 400 ms and a variable interstimulus interval. Since the M100 response is more robust when attention is engaged, participants are usually asked to silently count the tones, which are presented at unpredictable times against a quiet background.
The data for this project can be downloaded via ling.umd.edu/~ellenlau/courses/nacs642/NACS642_MEG_Lab_2tone_data.zip
The software:
You will use the MNE-python software package, which is free MEG analysis software developed at the Martinos Center at Massachusetts General Hospital. MNE-python is an open-source, actively evolving descendant of the original (also free) MNE package developed by Matti Hamalainen. This lab is written to be doable with just the Python tools.

Tech Support:
During the MEG lab tour you will be meeting our MEG lab manager, Anna Namyst. Feel free to contact her () if you run into trouble with the preprocessing steps or for questions about setting up the MEG lab project.

Technical Note:

You have the option of completing this assignment on your own computer or via one of the analysis computers.

  1. Personal Laptop

This is highly recommended, because it will put MEG analysis ‘at your fingertips’ for the class experiment. Installation of the software requires executing terminal commands, but all the instructions are provided.

  2. Account on Cephalopod
    If you run into challenges running python and/or the MNE toolbox on your own computer, you may complete this lab on the Cephalopod analysis machine in MMH 3416, which already has the relevant software installed. You can get a user account by emailing Anna ().

From any Mac, you should be able to log into the machine remotely using Screen Sharing, so that you do not have to come to MMH to work on the assignment. The Mac could be your own laptop or the student laptop in the MEG Lab (working there will also put you in easy proximity to the MEG lab in case you have questions; coordinate use with Anna).

Setting up remote access (Mac only)

Note: You need to have Screen Sharing enabled. Go to Applications->System Preferences->Sharing and turn it on by ticking the box for Screen Sharing on the left panel.

After receiving a new user account, open Finder. Click on the ‘Go’ tab, then ‘Connect to server’. Enter the server address vnc://files.ling.umd.edu, and after entering your new credentials, select ‘Connect to a virtual display’. You should be directed to a new virtual desktop on the cephalopod machine.


Important note: Do not select “share screen with linguser”. This can create connection issues for all users. Be careful to always select “continue to my profile”.

On most Mac operating systems, you can always log in remotely. However, in a few recent versions, you need to be logged in to the physical machine in MMH first. If you have trouble logging in, email Anna and she can log you in to the physical machine.

Occasionally Cephalopod needs to be restarted. This is fairly rare, but if it is necessary it would happen at either 9am, 12pm, or 3pm. Therefore, you should get in the habit of saving your work right before those times just in case.

Setting up your computer for analysis:
This lab requires Python 2.7, managed by Anaconda. If you are working on the MEG lab PC, Python, Anaconda, and Eelbrain are already installed for you, and you may skip the installation steps below and go straight to Part A.
‘$’ indicates a command to run in the terminal.

  1. Install Anaconda


Be sure to download the Python 2.7 version, and the correct version for your operating system. During installation, check the box confirming that you want Anaconda to become your default interpreter for Python 2.7.
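
Optional sanity check (not part of the original instructions): after the installer finishes, open a new terminal and confirm that the Anaconda Python 2.7 is the one on your path. The exact output will depend on where Anaconda was installed.

$ which python
$ python --version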

  2. Install the tools needed by MNE-python

$ conda install wxpython numpy scipy scikit-learn matplotlib mayavi jupyter spyder h5py pyqt=4.10 pygments=1 pip

  3. Download and install MNE-python (for further support: ):

$ pip install Pysurfer mne

After installation of the software is complete, follow the steps below to examine the data. Note: there may be a few hiccups with version changes in MNE. If something fails, let me know!
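
Before moving on, you can also quickly confirm that MNE-python imports correctly (the version number printed will depend on what pip installed):

$ python -c "import mne; print(mne.__version__)"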

PART A: SENSOR DATA

In the Terminal, navigate to your subject’s data directory:

Example (your path will vary, depending on where you put your data):

$ cd ./Documents/NACS-S17_Project/raw/R20xx

Enter the ipython environment and load mne:

$ ipython

Check the matplotlib backend first:

In [1]: %matplotlib

 If the backend is set to “MacOSX”, exit ipython (command: “exit()”) and reopen ipython with the following command:

$ ipython --matplotlib=qt

 If the backend is set to “Qt4Agg”, you may proceed.

In [#]: import mne

You need to run the above commands every time you start a new session in the terminal.

1. Load the raw MEG data and plot it. Illustrate with a screenshot.

In [#]: raw=mne.io.read_raw_fif('R2218_two-tones-raw.fif')

In [#]: raw.load_data()

In [#]: raw.plot()
The raw data should now appear on the screen. This is the raw data recorded during the localizer run. It was denoised using the de Cheveigne/Simon algorithm in Matlab and then converted from the native .sqd format used by the UMD system to the .fif format assumed by the MNE software. When you record data for the class project, you’ll need to do these extra steps. But for right now we won’t worry about them so you can focus on getting comfortable with MEG data.
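
For reference only (you do not need it for this lab): in MNE-python, the .sqd-to-.fif conversion for the UMD KIT system can be done with the KIT reader. A minimal sketch with hypothetical filenames; the denoising itself is a separate Matlab step:

In [#]: raw_kit = mne.io.read_raw_kit('R20xx_two-tones.sqd', preload=True)  # read the native KIT .sqd recording
In [#]: raw_kit.save('R20xx_two-tones-raw.fif')  # write it back out in .fif format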

Toggle through the selections to examine the data in all the sensors.

a) How many sensors does the UMD system have in total?

b) What sensor is carrying information about the timing of the auditory stimuli?

c) The blue lines are the MEG data recorded at different sensors. Eyeblinks create big peaks in the data due to the electrical activity created by muscle movement, and these are of much greater amplitude than neural activity (therefore they need to be excluded from data analysis to increase signal-to-noise ratio). Try to identify some eyeblinks in the data and take a screen shot.

d) Scroll along the data in time with the cursors at the bottom. Do you notice temporal regularities in the data? Include one or more screenshots for illustration. Do you have hypotheses about the source(s) of the effect(s)?

e) Anything else you find interesting about the data?

2. Close the plot and now filter the data:

a) Change the lowpass filter value to 10 Hz:

In [#]: raw_10 = raw.filter(l_freq=None, h_freq=10.0)
In [#]: raw_10.plot()

How does the data change? Illustrate with a screenshot.

b) Change the lowpass filter value to 40 Hz, which is a good default.

In [#]: raw_40 = raw.filter(l_freq=None, h_freq=40.0)

In [#]: raw_40.plot()

3. This localizer experiment sent event code triggers for each tone presented (100 each of two frequencies). You can see these codes in the raw plot by moving the channel scrollbar on the right all the way down to the channels labelled ‘MISC’. Unfortunately, a couple of bad trigger channels dominate the plot (we don’t use those codes), but if you scroll along in time you should be able to see some vertical lines, which are the triggers. Each trigger is a quick pulse that raises the voltage on the line and then lowers it again, so you see two lines for each event (the line going up and the line going down). Of course, we want to align the brain data to the onset of the event, which is the line going up.

Now we will run the mne-python command to extract these events into a variable that we can use for constructing the evoked average.

In [#]: events = mne.find_events(raw_40)

Make sure you get the right number of events. Now take a look at the first 50 rows of the event timing data:

In [#]: print(events[:50])

Based on the first few rows, about how much time separated the presentation of subsequent tones in this experiment?
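
If you want to check your answer numerically, you can use numpy on the events array (a sketch; the first column of events holds sample indices, the last column holds the event codes, and the sampling rate is stored in the Raw object's info):

In [#]: import numpy as np
In [#]: np.unique(events[:, 2], return_counts=True)   # which event codes occur, and how many of each
In [#]: np.diff(events[:, 0]) / raw_40.info['sfreq']  # seconds between consecutive event onsets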

4. Now we want to move on to creating the evoked responses for each condition by averaging across all of the events from each condition. As you should be able to see from the event data you just looked at, there is one event code for each tone type, so we want to tell the software which codes these are and give them condition labels.

In [#]: event_id = dict(tone1=9, tone2=17)

We want to average across a window that includes the relevant time right around the tone onset. A standard choice might be to start from 100ms before the tone and extract all the data up to 500ms after the tone.

In [#]: tmin = -.1

In [#]: tmax = .5

We want to shift the data on the y-axis such that the activity prior to the stimulus of interest is centered around zero. This is called baselining the data: we are saying that we want to talk about how the post-stimulus activity differs from whatever ongoing brain activity was happening before. The code below sets the baseline from the left edge of the time window to 0, the point where the stimulus was presented.

In [#]: baseline = (None,0)
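
Conceptually, baseline correction just subtracts each channel's mean pre-stimulus value from the entire epoch. A rough numpy sketch of the idea (illustrative only; 'data' and 'times' are hypothetical arrays, and MNE does this for you when it epochs the data):

In [#]: # data: (n_channels, n_times) array for one epoch; times: the corresponding time points in seconds
In [#]: baseline_mean = data[:, times <= 0].mean(axis=1, keepdims=True)  # average over the pre-stimulus samples
In [#]: data_baselined = data - baseline_mean                            # center pre-stimulus activity on zero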

We want to identify the sensors to plot. For the moment, we can pick all of the MEG data sensors.

In [#]: picks = mne.pick_types(raw_40.info, meg=True)

Now that we’ve set all of our parameters, here is the code that actually extracts all of those time-windows of interest (or ‘epochs’) from the continuous raw data, and baselines them. This is called ‘epoching’.

In [#]: epochs = mne.Epochs(raw_40, events, event_id, tmin, tmax, proj=False, picks=picks, baseline=baseline)

If you now type ‘epochs’, it will return some basic information about the epoch data structure. You should confirm that you got 100 epochs for each condition.
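
As an aside, this is also the step where you could automatically drop epochs contaminated by eyeblinks (question 1c). One common approach is a peak-to-peak amplitude threshold on the magnetometers; the cutoff below is an illustrative value borrowed from the MNE tutorials, not a lab standard, and with rejection enabled you may end up with fewer than 100 epochs per condition:

In [#]: reject = dict(mag=4e-12)  # drop any epoch where a magnetometer exceeds 4000 fT peak-to-peak
In [#]: epochs_clean = mne.Epochs(raw_40, events, event_id, tmin, tmax, proj=False, picks=picks, baseline=baseline, reject=reject)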

Now this is the code that averages those epochs together for each condition and plots them.

In [#]: evoked1 = epochs['tone1'].average()

In [#]: evoked2 = epochs['tone2'].average()

In [#]: evoked1.plot()

In [#]: evoked2.plot()

This illustrates the average response at all sensors in one picture. You can also add fancier options to match the y-axes, tag the sensors with colors, and plot the global field power (GFP) as well. The GFP is the standard deviation of all the sensor values at each time point, which is an unsigned measure of how far the values tend to be from baseline at that time. You can think of the GFP as a way to average across both positive and negative values without them cancelling out (another very similar measure is the root mean square, or RMS, where we square all the values to make them positive, compute the average, and then take the square root). Illustrate the new plots with a screenshot.

In [#]: evoked1.plot(spatial_colors=True, gfp=True, ylim=dict(mag=[-300,300]))

In [#]: evoked2.plot(spatial_colors=True, gfp=True, ylim=dict(mag=[-300,300]))
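
If you want to see exactly where the GFP comes from, you can compute it (and the RMS) yourself from the evoked data with numpy (a sketch; evoked1.data is a sensors-by-timepoints array):

In [#]: import numpy as np
In [#]: gfp1 = evoked1.data.std(axis=0)                   # standard deviation across sensors at each time point
In [#]: rms1 = np.sqrt((evoked1.data ** 2).mean(axis=0))  # root mean square across sensors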

Approximately at what time does the M100 response appear to be peaking?

We can also generate topographical field maps for selected time-points to see what the distribution looks like. For example, here we can plot the field maps at 50ms, 100ms, and 150ms. Include a screenshot.

In [#]: evoked1.plot_topomap(times=[.05, .10, .15],vmin=-300,vmax=300)

In [#]: evoked2.plot_topomap(times=[.05, .10, .15],vmin=-300,vmax=300)

And we’d probably like to plot the waveforms of the two conditions against each other. This takes just a few extra steps because we need to get the two evoked data objects into a single object to give to the plotting function. This is a ‘dictionary’ object in python.

In [#]: evoked_dict = dict()

In [#]: evoked_dict['low'] = evoked1

In [#]: evoked_dict['high'] = evoked2

In [#]: colors=dict(low="Crimson",high="CornFlowerBlue")

If we want to get a birds-eye view of all the data here, we can plot the GFP of the two conditions against each other with the following command. Include a screenshot of the output. At approximately what timepoint do the two waveforms diverge?

In [#]: mne.viz.plot_compare_evokeds(evoked_dict, colors=colors,

picks=picks, gfp=True)

We can also plot the two hemispheres separately by creating a set of ‘picks’ that includes the sensors from each hemisphere. Our lab has created four ‘quadrants’ of roughly 27 sensors each, covering the lateral-most ~108 sensors on the helmet (out of a total of 157; the rest are in the midline). The definitions follow:

la=[0,1,2,3,39,41,42,43,44,52,58,67,71,80,82,83,84,85,108,130,131,132,133,134,135,136,151]

lp=[4,5,6,7,8,9,34,36,37,38,40,45,46,47,48,49,50,75,76,77,79,87,88,90,127,129,137]

ra=[20,22,23,24,26,59,60,61,62,63,65,89,92,95,99,100,114,115,116,117,118,145,147,148,152,155]

rp=[14,15,16,17,18,19,25,27,28,30,53,54,56,57,66,68,69,70,94,96,97,119,121,122,143,144]

lh=[0,1,2,3,39,41,42,43,44,52,58,67,71,80,82,83,84,85,108,130,131,132,133,134,135,136,151,4,5,6,7,8,9,34,36,37,38,40,45,46,47,48,49,50,75,76,77,79,87,88,90,127,129,137]

rh=[20,22,23,24,26,59,60,61,62,63,65,89,92,95,99,100,114,115,116,117,118,145,147,148,152,155, 14,15,16,17,18,19,25,27,28,30,53,54,56,57,66,68,69,70,94,96,97,119,121,122,143,144]

Now, plot the left hemisphere and the right hemisphere separately by specifying the selected set in the picks argument, e.g.

In [#]: mne.viz.plot_compare_evokeds(evoked_dict, colors=colors,

picks=lh, gfp=True, ylim=dict(mag=[0,100]))

What differences do you see between the hemispheres? (Make sure you check the scales of your two plots before comparing.)

Check out the MNE-python website if you’re interested in other ways to visualize the data, especially for your class project.

We will not be focusing on statistical methods in this class, especially because you won’t be collecting enough data for most standard methods to be appropriate. However, there are many tools in the MNE-python and Eelbrain packages for doing statistical analysis, which you can explore if you are interested in going further! The same goes for source localization; both packages have sample datasets that you can download and play with.