1. General remarks

1.1. The IBB Staff – Who is who? …does what? … will help you?

1.2. Further References

1.3. Rooms and resources

2. Data reduction

2.1. Downloading the Data

2.2. Average

2.2.1. Loading the data

2.2.2. The ERP Module for preprocessing and averaging.

2.3. Export BESA-EMEGS

2.3.1. Exporting Averages from BESA

2.3.2. Exporting sensor positions from BESA

2.3.3. Importing sensor positions to EMEGS

2.3.4. Importing averages into EMEGS

2.4. Grand Mean

3. Data analysis

3.1. Sensor correction

3.2. First visual analysis with EMEGS2D

3.2.1. Loading Files

3.3. L2-Minimum Norm

3.4. Samplewise ANOVA

3.4.1. Why samplewise ANOVA ?

3.4.2. Menu navigation

3.5. Selection of sensor groups

3.6. ANOVAs for time-intervals and sensors of interest

4. Supplementary Information

4.1. Epoch-wise data recording

4.2. Transferring your rawdata from Megserver

4.3. Installing the latest emegs version

4.4. Control of headposition

4.5. The CTF File formats

4.6. Defining Time in EMEGS and BESA

4.7. F-values

4.8. Batch-driven file renaming

4.9. Generating Batchfiles

4.10. Execution of EMEGS Scripts under MATLAB

4.11. Graphics in EMEGS

4.11.1. Correct Usage of Graphics

4.11.2. Making graphs with the GUI

4.11.3. How to export figures

4.12. Preprocessing with EMEGS

5. References

1. General remarks

This tutorial was originally meant to provide external graduate students and doctoral candidates with an introduction to the most frequently used technique at the IBB and, in particular, to guide them through the procedure of data evaluation. Since the larger part of it deals with our in-house software EMEGS, the manual has also become interesting for researchers who are not affiliated with our institute but use EMEGS. For a better overview, information that is only useful for our doctoral candidates or undergraduates is highlighted in grey; the other passages are of general interest.

Addressed here are questions such as “What do I have to click on in order to …?”. Questions that remain open may be put to the staff at any time and without hesitation (see Section 1.1).

The tutorial guides you through the data evaluation chronologically. General explanations that pertain to several stages are moved to the appendix chapters and referenced where appropriate; this avoids redundancy. In addition, further background information is given there that is not mandatory for making progress but enhances insight. Merely executing the work stages by following the instructions does not require this insight.

Here we describe the most common manner of data processing, which is appropriate for 90 % of study designs. We assume that you recorded MEG data, but perhaps you recorded EEG data, too. In general this does not make much of a difference; however, in case you process EEG data, check our suggestions for plausibility. In some cases it may be advisable to ignore them.

Basic beginner's knowledge of MEG or EEG as imaging techniques in general is a prerequisite. For this, the IBB offers both official and informal courses. If you feel lost there, you are not the only one: don't panic and keep attending. You grasp more than it subjectively feels like, and progress will accelerate from a certain point that comes soon. Besides, not all of the information is crucial. Also, dare to ask questions frequently!

As an introduction, we suggest Luck (2005), which has the best choice of topics, the right degree of detail, and a comprehensible writing style. Another reading suggestion is Seifert (2008).

This tutorial has some gaps that will be filled as time goes by. Besides, our technical setup changes occasionally. To keep this manual up to date, we need your suggestions. Ideally, write them as comments into the manual's electronic version whenever you encounter obscure or outdated passages while working through it. Please write them into the local copy you downloaded from biomag.uni-muenster.de and send that copy back to us when your project is finished.

At present, the tutorial starts with the analysis of data that have already been recorded. Prior steps such as writing a PRESENTATION script or a CTF runtime protocol are not addressed here.

1.1. The IBB Staff – Who is who? …does what? … will help you?

Helga Janutta. She will provide you with a key to Room 027, the graduands' office, and with books from the institute's library. She maintains an up-to-date email and phone list of the staff and can enrol you in our mailing list. A schedule of the institute's courses and colloquia is also available from her.

Andreas Wollbrink. He can give you administrator privileges on your computer, which you will need frequently; in particular, BESA will not work properly without them. Andreas takes care of all technical devices and computer troubleshooting (our engineer “Scotty” on board).

Markus Junghöfer. He is the author of the EMEGS application, which he constantly develops further and which does not yet have an exhaustive help file. For the most common tasks in EMEGS there is the present tutorial, and some help files are included in the program folder, too. For everything else, you may refer to him. He also gives a regular tutorial on EMEGS. Do attend it! To learn the respective dates, just have yourself enrolled in his mailing list.

Markus' doctoral candidates, who are already familiar with EMEGS, are always approachable and can help you out in most cases. Before addressing Andreas or Markus, ask them first; if they do not know either, see above. Not all of the staff are used to EMEGS or BESA, which you in turn will use predominantly; we have purchased a lot of alternative software, too. For EMEGS, ask Markus and his coworkers.

The technicians have a database of subjects in case you want to know some demographic facts about them. They also provide blank DVDs.

Find the phone number, email address, and a “wanted” photo of our staff members on

1.2. Further References

The EMEGS meeting, held twice a month, in which all questions concerning MEG data evaluation may be addressed. The topics are your choice or, in case no questions are submitted, chosen by the lecturer.

Some individual mailing lists. It is advisable to send your email address to all members of Markus' team, since meetings are frequently cancelled or change their purpose at the last minute.

The EMEGS user mailing list. EMEGS is the data processing program that you will use predominantly. New functions in updated versions and recently fixed bugs are announced by the developers via this mailing list, and users post questions about problems that may be yours, too. If you are not affiliated with the Institute for Biomagnetism and thus unable to ask us directly, this is an option for sending your questions to us; replies rarely take more than a few hours. If you are affiliated with the IBB, it may be more convenient to ask us directly. You may subscribe to the list on

1.3. Rooms and resources

The student assistants' office is in the basement of the Institute of Experimental Audiology, Room 027. Here you will find a computer with a DVD R/W drive and a desk.

EMEGS is an application for MEG/EEG data evaluation written by Markus. No matter whether or not it is already installed on the computer at hand, install and use it in your individual folder! See Section 4.3.

Software from third-party manufacturers, for which the institute has purchased licenses, is stored in the office of Andreas and Thomas (Room 10), in a closed shelf opposite the window. You will need:

BESA

MATLAB

Occasionally, drivers for printers and the like

However, in most cases, the software is already installed by previous users of the respective computer.

2. Data reduction

2.1. Downloading the Data

For each of your subjects and for each run, the technicians store a so-called dataset folder containing quite a lot of files, which is generated by the recording device (the CTF). All of your datasets are stored on Megserver under the path /data/megserver1/[Your Name].proc/[Title of your Study]. There you will find folders named after the subject IDs. For further evaluation, download them to your local drive. You will need quite a lot of storage; depending on your study, a disk of ~30 GB may be required. If storage capacity at your workstation is lacking, there are two options:

- Download and process the data subject by subject. After calculation of the averages (see Section 2.2.2), they are much smaller. Local copies of the continuous data (files with the extension *.meg4) may be removed thereafter.

- Borrow an external hard disk from Andreas.

You should avoid altering the data on Megserver in order to keep an untouched version of them. Data on Megserver are deleted after some time, but you will receive a warning mail in advance. Besides, the technicians keep a backup on DVD.
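If the Megserver volume is mounted on, or otherwise visible from, your workstation, the download can also be scripted. The following MATLAB sketch is only an illustration of that folder copy; the source path, study name and local target directory are placeholders you have to adapt, and copying with your file manager works just as well.

% Illustrative sketch only: copy all subject dataset folders of one study
% from Megserver to a local working directory. All paths are placeholders.
srcRoot = '/data/megserver1/YourName.proc/YourStudy';  % adapt to your study
dstRoot = fullfile(pwd, 'YourStudy');                  % local working copy

if ~exist(dstRoot, 'dir')
    mkdir(dstRoot);
end

entries = dir(srcRoot);                    % one folder per subject ID
for k = 1:numel(entries)
    name = entries(k).name;
    if entries(k).isdir && name(1) ~= '.'  % skip '.' and '..'
        fprintf('Copying %s ...\n', name);
        copyfile(fullfile(srcRoot, name), fullfile(dstRoot, name));
    end
end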

2.2. Average

You can do the data reduction either with BESA or with EMEGS. Data processing via EMEGS currently requires some study-specific programming of a preprocessing script in MATLAB. This has to be done by Markus and will take some three hours to three days, depending on the complexity of your design. The advantage of this option, however, is a more sophisticated preprocessing; moreover, the data will allow for analyses on a single-trial basis. To get the data reduction done without Markus' assistance, BESA is the only option. If you choose EMEGS, please jump to the appendix Section 4.12 and then continue reading at Section 3. If you choose the standard option, BESA, just continue here.

We presuppose that the data were recorded continuously (see Section 4.1).

The latest version of BESA should be installed on your computer.

2.2.1. Loading the data

Start BESA > Browse to a dataset folder > Open the file with the extension *.meg4

Some queries appear; here are the correct answers:

1st Window

Check: „Treat simultaneous markers in .... as single event“

Check: „Data recorded continuously“ (Irrespective of whether they are.)

2nd Window:

There is nothing to insert. Check whether the number of trials found for each condition matches the number of trials in your experimental setup. Deviations, even by just one, must not be tolerated without knowing the reason.

3rd Window

Define head centre

Check: „Head center midway between left and right“

Check: „Display Besa coordinates“

Check: „Ear fiducials used earplugs“ (in general, the former two are already checked by default)

4th Window:

Channel and digitized head...

Commonly, this is all right, as indicated by the green checkmark.

If the error warning 'Can't read index files' occurs, you do not have administrator privileges. Ask Markus' doctoral candidates to provide you with an appropriate account.

For the time being, you can ignore it and continue working; the problem does not impair your computation. It may become bothersome in the long run, however.

Make sure that the noise reduction in your data is correctly chosen. This is the case when the value for CTF-Order is set to 1. See the screenshot below, which depicts a correct setting.

If your data recording included a Polhemus scan, you should also include the corresponding *.sfp file by checking the radio button “Digitized head surface points”.

(Note: the *.sfp file should be located one folder level above the individual recording runs. Since it is altered by BESA and required for all individual recordings, do not put it into the subfolders of individual runs.)

2.2.2. The ERP Module for preprocessing and averaging.

Choose „ERP“ from the menu bar and click „edit paradigm“. A window with several tabs appears. Enter the settings below:

Tab: Trigger

Here you see your triggers and their names. In general, defining further attributes is not useful.

Tab: Condition:

How complex your settings are here depends on the complexity of your design. Any choices you make here should be double-checked. Under adverse circumstances you will cause fatal coding errors that either distort or eliminate all of your study's results. Such coding errors will mix up conditions or shift the labelling of single epochs by one. Since a lack of significant effects, or results opposite to your hypothesis, is common in research, you will not become aware of the error. How to check for such errors is explained below.

Technically, what is happening here is:

When opening a *.meg4 file, you are looking at your continuous data. These do not yet provide information on the timing of your stimulus onsets; averages, however, are calculated “time-locked” to your stimuli. During the recording in the CTF, visual triggers were sent, and the respective records provide information about the actual onsets. (Depending on the modality of your stimulation, it may be an acoustic rather than a visual trigger; technically, there is no difference to our visual example.) This timing information is given in the editable *.mrk file.

In experimental designs it is quite common to present different categories of stimuli, say green squares and red circles. By means of the triggers alone, they are indistinguishable.

Therefore, along with each trigger, a so-called portcode is sent in close temporal proximity. Portcodes can take on values in the range [1:255] and may thus be used to tell stimulus categories apart. They are, however, related to the stimuli with far less temporal accuracy.

Irrespective of other terminologies, such as in the BESA help, we will here call the visual signal used for timing purposes the trigger, as opposed to the portcode used for identification purposes, which we call the marker. Tag will be a collective term for both.

Now you want to average the data time-locked to the triggers and separately for different markers. Therefore, you define logical conditions like “a visual trigger, succeeded (or preceded?) by a marker, which in turn has the value xy”. Later on, individual averages will be calculated for all epochs that match such a description.
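BESA evaluates such conditions for you when it scans the recording. Purely to make the underlying logic transparent, here is a minimal MATLAB sketch of the same idea; the vectors triggerTimes, markerTimes and markerCodes are made-up illustration values (BESA reads the real ones from the *.mrk file), and the direction of the pairing has to match what you observe in your own recording.

% Conceptual sketch: label every trigger with the portcode of the marker
% found within a short window after it. All values below are invented.
triggerTimes = [1.000 3.000 5.000];           % trigger onsets in seconds
markerTimes  = [1.020 3.030 5.010];           % portcode onsets in seconds
markerCodes  = [11 12 11];                    % portcode values (1..255)

maxLag        = 0.040;                        % markers rarely lag by > 40 ms
condOfTrigger = zeros(size(triggerTimes));    % 0 = no marker found

for t = 1:numel(triggerTimes)
    lag = markerTimes - triggerTimes(t);      % positive if the marker follows
    idx = find(lag >= 0 & lag <= maxLag, 1, 'first');
    if ~isempty(idx)
        condOfTrigger(t) = markerCodes(idx);  % assign the portcode as condition
    end
end
% If in your recording the marker precedes the trigger, test for negative
% lags instead. Epochs are later cut time-locked to triggerTimes and
% averaged separately for each value in condOfTrigger.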

Close the ERP module temporarily. Since you have not made any changes so far, there is nothing to save.

In the status bar of the main window you see a timeline displaying your tags as black strokes on a grey background; here, the recording run is depicted at full length. Above it, there is an area showing a few seconds of your MEG data as purple plots. There is a button in the bottom right corner to alter this time range. Choose a segment of the whole recording run by left-clicking on the grey timeline.

At the lower border of this area, your tags are shown as '⊥'. Each tag is labelled with a code number and a name. The code is printed to the right of the tag; to see the name, right-click on the '⊥'. In general, you will find closely adjacent pairs of tags referring to a common stimulus. Check whether the marker precedes or follows the trigger.

Now open the ERP menu again and choose the tab Condition. Type a name for each condition into the field ‘name’ and compose the corresponding condition via the Boolean functions. ‘Name’ will be used as a label for the average to be saved later on, so choose a concise and meaningful one.

Within the Boolean functions, you will commonly choose the trigger first (e.g. current name is acoustic trg > insert). It is important to use the trigger as the current name: it will be your time point zero later on, whereas the markers, as already mentioned, have only a rough temporal relationship to the stimulus. To restrict your definition further, use AND to add one of the tags that precedes or follows (in case there are several, you will have to use brackets to combine them; in this case, click on “and” or “or”, and the combination will be displayed comprehensibly). With the field ‘attribute’ you may choose either the code or the name of the tag for the logical term; do not use the code.

Earlier, we mentioned the risk of coding errors. This danger increases with the complexity of your design, and the same holds for the complexity of your tag labelling.

A likely error scenario: you define your markers as preceding tags, but in the experiment these tags actually follow. Now your marker will be associated with the preceding epoch, and this holds for all epochs in the recording. To be on the safe side, additionally define a short permissible time lag between marker and tag (field: attribute > interval); portcode and trigger of the same epoch rarely deviate by more than 40 ms.

Alternatively, you may check whether the number of epochs found matches your experiment (window: condition, column: count); do not ignore even minor deviations. Besides, there are always several ways to define your conditions, for instance via attribute > condition in case other conditions are already defined. Check whether these other options yield the same result.
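If you want to repeat this count outside of BESA, the arithmetic is simple. The sketch below continues the previous one (it reuses the condOfTrigger vector) and compares the counts with a hand-filled table of planned trial numbers; the portcodes and counts shown are invented examples, not values from your study.

% Sanity check: compare the number of epochs found per portcode with the
% number of trials your experiment was supposed to deliver.
planned = [11 80;                         % [portcode, planned trial count]
           12 80];                        % invented example values

for r = 1:size(planned, 1)
    code  = planned(r, 1);
    found = sum(condOfTrigger == code);   % condOfTrigger from the sketch above
    if found ~= planned(r, 2)
        warning('Portcode %d: found %d epochs, expected %d.', ...
                code, found, planned(r, 2));
    end
end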

Tab: Epoch:

Averaging Epoch: -500 to 800 ms (as an all-purpose suggestion)

Baseline Definition: -200 to 0 ms

Artifact Rejection: -200 ms to slightly more than your time of interest.

Important: Assign to all!
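To make explicit what these numbers mean, the following MATLAB sketch performs the epoching and baseline correction on a made-up continuous recording (275 channels and 600 Hz are arbitrary example values, and artifact rejection is left out). BESA does all of this internally; the sketch is for understanding only.

% Conceptual sketch of the Epoch tab settings: cut -500..800 ms epochs
% around each trigger and subtract the mean of the -200..0 ms baseline.
% Everything below is invented example data.
fs          = 600;                           % sampling rate in Hz (example)
data        = randn(275, 60 * fs);           % channels x samples (example)
trigSamples = [6000 12000 18000];            % trigger positions in samples

epochWin = round([-0.500 0.800] * fs);       % epoch borders in samples
baseWin  = round([-0.200 0.000] * fs);       % baseline borders in samples
nSamp    = epochWin(2) - epochWin(1) + 1;
epochs   = zeros(size(data, 1), nSamp, numel(trigSamples));

for e = 1:numel(trigSamples)
    idx   = trigSamples(e) + (epochWin(1):epochWin(2));
    epoch = data(:, idx);
    % per-channel mean of the -200..0 ms interval, subtracted from the epoch
    baseIdx  = (baseWin(1):baseWin(2)) - epochWin(1) + 1;
    baseline = mean(epoch(:, baseIdx), 2);
    epochs(:, :, e) = epoch - baseline;      % implicit expansion (R2016b+)
end

erf = mean(epochs, 3);                       % average of this condition; in
                                             % BESA only artifact-free epochs
                                             % would enter this mean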

Tab: Filter:

The filter settings depend on your interest and are a matter of debate. A common selection is 0.01 Hz (low cutoff) and 48 Hz (high cutoff). The stronger your filter, the higher the risk of distorting the data (see Luck, 2005). A low cutoff is mandatory in any case.

It is advisable to set only this filter (the low cutoff) at first. If too few epochs are considered artefact-free in the next step, you may still add the high cutoff, too. As opposed to low-cutoff filtering, high-cutoff filtering may also be postponed to the processing of the averages, so omitting it here can still be made up for later. Once the data have been filtered before averaging, a change of the filter settings will require repeating all work stages from here on.
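For orientation only, here is what such a band limitation looks like when done by hand in MATLAB (Signal Processing Toolbox required). The sampling rate, the data matrix and the Butterworth design are assumptions made for the illustration; BESA uses its own filter implementation, which is not numerically identical to this zero-phase variant.

% Illustrative sketch of a 0.01 Hz low cutoff and a 48 Hz high cutoff.
fs   = 600;                                      % sampling rate in Hz (example)
data = randn(275, 60 * fs);                      % example continuous data

[bHi, aHi] = butter(2, 0.01 / (fs / 2), 'high'); % low cutoff  (high-pass)
[bLo, aLo] = butter(4, 48   / (fs / 2), 'low');  % high cutoff (low-pass)

% filtfilt filters along columns, hence the transposes; it runs the filter
% forwards and backwards, so the result has no phase shift.
filtered = filtfilt(bHi, aHi, data')';
filtered = filtfilt(bLo, aLo, filtered')';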

Note that these recommendations are a subjective view on a topic where every new advisor will suggest something different.