Dr. Adrian KC Lee's course this quarter - SPHSC 594 - represents a first in the United States: to date, no other course on MEG technology has been taught in the U.S. The curriculum not only focuses on MEG-related subject matter, but is also intended to reinforce the unique opportunity MEG technology offers in fostering interdisciplinary research and collaboration. (Film and blog entries have been edited by Pam Kahl.)
Preamble - The seed for this interdisciplinary course was planted way back when I wrote my teaching statement applying for this position. It is interesting now to read back over this paragraph:
Multimodal imaging for brain sciences is truly multidisciplinary. The breadth of knowledge required to understand the processes and analyses involved in neuroimaging spans many disciplines, including psychology, neuroscience, physics, and engineering. The depth in each topic required to understand a routine neuroimaging procedure can be daunting to students and, potentially, to an educator. For example, the principal component analysis used to project the heartbeat out of recorded MEG signals (as is used, for instance, in functional connectivity studies) requires in-depth knowledge of linear algebra and signal processing, as well as an appreciation for the existence and potential importance of brain rhythms.
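(To make that example concrete for readers from outside engineering: the heartbeat projection can be sketched in a few lines of NumPy. This is a minimal illustration under the assumption that the recording and some heartbeat-contaminated segments are already available as matrices; the function and variable names below are mine, not those of any particular analysis package.)

```python
import numpy as np

def project_out_cardiac(meg, cardiac_segments, n_components=1):
    """Remove the dominant heartbeat pattern from an MEG recording.

    meg              : (n_channels, n_samples) sensor data
    cardiac_segments : (n_channels, n_cardiac_samples) data concatenated
                       around detected heartbeats (e.g., via an ECG channel)
    """
    # Principal components of the heartbeat-dominated segments give the
    # spatial pattern(s) of the cardiac artifact across the sensors.
    u, _, _ = np.linalg.svd(cardiac_segments, full_matrices=False)
    artifact_basis = u[:, :n_components]            # (n_channels, n_components)

    # Project every time sample onto the orthogonal complement of that
    # subspace (the same idea underlies signal-space projection).
    projector = np.eye(meg.shape[0]) - artifact_basis @ artifact_basis.T
    return projector @ meg
```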
In addition to the focus on attracting an interdisciplinary group of students, I've spent significant time thinking about the appropriate grading scheme. The UW Teaching Fellows week offered significant guidance on this topic. As I started writing the course description, I came to realize that the most important skill to develop is that of effective communication.
Sure, MEG is cool (or else I wouldn't be using it as a tool for my own research) and I want to inspire students to use this tool appropriately to answer a host of interesting neuroscience questions and develop new engineering solutions. But above all, I want to teach students skills that can be carried forward – whether they end up in academia or the business world. I hope that students from different backgrounds will leverage their own expertise to teach their classmates something new. If I can foster an environment that encourages cross-pollination of ideas – breaking down traditional departmental boundaries – I think it will be useful for everyone's career development. To that end, communication is the primary reference point for grading: how effectively students can communicate with each other by teaching tutorials, leading discussions, and conveying research ideas through grant proposal writing.
Note to self: Developing an interdisciplinary experience with intention is no easy task. We have been working on enrollment logistics for the past three weeks! Setting up a course that is cross-listed across departments in the College of Arts and Sciences (Speech & Hearing Sciences, Psychology, Linguistics), the College of Engineering (Electrical Engineering, Computer Science & Engineering), and the Graduate Program in Neurobiology & Behavior has proven to be an administrative challenge. Despite the fact that the course only showed up in three departmental listings, 20 students showed up to class. The crowd is diverse - just what I was hoping for.
Because this is the first MEG course ever to be taught (I looked online for a syllabus I could model mine on and couldn't find any), it gives me the opportunity to test-drive some ideas for formally cultivating interdisciplinary research. My graduate alma mater (Harvard-MIT Health Sciences & Technology) is one of the most interdisciplinary departments in the world. Even so, the traditional approach tends to focus on pre-determined intersection points (e.g., neuroscience and engineering). I really enjoyed (and in hindsight, benefited most from) the more organic interaction with fellow students from very different undergraduate backgrounds. This organic dynamic is what I want to foster in SPHSC 594. It is a big opportunity to be able to introduce this idea to undergraduates (and hopefully attract bright young minds to embark on interdisciplinary research).
But how do we get this terrifically diverse group to do more than just sit together in a lecture hall? With this in mind, I'm taking an experimental approach to fostering conversation and collaboration. First, office hours are the hour between the lecture and tutorial. This may have seemed like "dead time" to students at first, but since it's been arranged as an open "coffee" hour it has developed into an opportunity to hang out, talk and share ideas. Second, class performance includes a communication/teaching component and a collaborative grant-writing project. All students are assessed on their ability to teach each other the basics of their own field (e.g., basic programming skills may be considered rudimentary for computer scientists, yet they are a desired skill for some psychologists to acquire). The grant-writing project is intended to be the first step in developing proper NIH or NSF proposals relevant to students' career development (e.g., a pre-doctoral or post-doctoral training grant). Each proposal group will have to work together and draw on each other's strengths to propose studies that would incorporate the use of MEG in their research. Each grant will be critiqued by the other students in a mock study section. The most meritorious grant will get pilot MEG recording time so the team can gather preliminary data and turn it into a real grant application. Hopefully, in a few years' time, this competition will become a big draw for the best young scientists and engineers on campus interested in vying for the coolest neuroscience research idea - much like the 6.270 Robot Competition at MIT.
Let's see how these efforts will pan out in the next few weeks...
Oh... a big shout out to Hansen, Kringelbach and Salmelin for publishing the book MEG: An Introduction to Methods. I'm basing my lectures on this textbook. It's great!!!
The eagerness to start using the MEG is palpable in class. Students want to record measurements and do experiments. However, I feel it's important to spend more time building the foundational knowledge of neurophysiological signals and instrumentation. I realize subjects such as electromagnetism and electrophysiology can be a bit dry, but I liken it to driving a Ferrari – you don't need to be a mechanic to drive such a car, but the more you know about the engine, the more you can appreciate the driving experience. Luckily the students are not opposed to the process – on the way to office hours last week, a student said, "If you just tell me how to do an experiment and teach me how to push buttons to analyze signals, I'd probably not be interested in the course and be skeptical of the technology." The MEG is our Ferrari.
The office hour experiment seems to be working out nicely. I notice the younger grad students from Speech & Hearing are chatting with the engineering and psychology undergrads. More senior grad students are batting around ideas with post-docs. All good.
The student-led tutorial series is also starting to take shape. Students are volunteering for time slots, which tells me they are really interested in sharing their field of study with others.
Guest speaker of the week is Erik Edwards, who will be talking about the history of the discovery of cortical rhythms – an important component of sleep studies and of understanding how different parts of the brain communicate with each other.
What does it take to design a good MEG experiment? This was the focus of this week's lectures. Not surprisingly, MEG experiments require myriad considerations – from what needs to be covered in the subject's consent form, to the interpersonal skills needed to prepare a subject before pushing the "record" button at the MEG console. And because timing is EVERYTHING in MEG, it is also important to talk about the software programming issues that need to be taken into account in order to capitalize on the millisecond resolution of the MEG. For those who have never been involved in a neuroimaging study before, there are a lot of little things to cover.
To impress upon students how important it is to consider all the factors involved in designing a good behavioral experiment, I asked everyone to do a cognitive memory task (N-back) during class. Digits were presented one at a time, and everyone was asked to clap whenever a digit repeated. With an inter-stimulus interval (ISI) of 1 second in the first iteration, the task was easy. The next version of the task increased the ISI to 10 seconds. The goal was to demonstrate that 10 seconds is an eternity in terms of mental focus. Minds wander very quickly, and it's important that students take this into account both when 1) designing an MEG experiment and 2) assessing the results. This will have a bearing on how we talk about resting-state studies later on as well.
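(For the curious, here is a hypothetical console version of that in-class demo in Python – nothing more than a sketch to show how the ISI manipulation works; real stimulus-delivery software would handle timing far more carefully.)

```python
import random
import time

def run_one_back(n_trials=20, isi=1.0, repeat_prob=0.3):
    """Console sketch of the in-class 1-back demo: clap when a digit repeats."""
    prev = None
    for _ in range(n_trials):
        if prev is not None and random.random() < repeat_prob:
            digit = prev                              # deliberate repeat -> clap!
        else:
            digit = random.choice([d for d in range(10) if d != prev])
        print(digit, "<-- clap!" if digit == prev else "")
        prev = digit
        time.sleep(isi)                               # 1 s feels easy; 10 s feels endless

# run_one_back(isi=1.0)    # first iteration
# run_one_back(isi=10.0)   # minds wander very quickly...
```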
I also spent some time talking about fMRI (which measures the change in blood flow relative to a baseline condition), from MR physics to our current understanding of neurovascular coupling. It is important for students to understand the assumptions made in an fMRI study (such as using blood flow and oxygenation as surrogate measures of neuronal activity in the brain). The dynamics of the BOLD response are also vastly different from those of the postsynaptic neuronal activity we capture using M/EEG. For example, the BOLD signal is slow (i.e., sampling every 2 seconds is adequate to capture its dynamics), while the signals we measure using M/EEG require us to sample 1,000 times per second to adequately capture the millisecond dynamics of our brain. A side note to ponder: as you read this paragraph, would sampling your thought process every 2 seconds adequately capture your brain dynamics?
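(A toy calculation of that side note – the numbers are purely illustrative, not from any real recording: a 40 Hz rhythm sampled on an fMRI-like grid is essentially invisible, while a 1000 Hz grid follows it easily.)

```python
import numpy as np

freq, duration = 40.0, 10.0                    # a 40 Hz "gamma" rhythm, 10-second window
t_meg = np.arange(0, duration, 1 / 1000.0)     # M/EEG-style sampling: 1000 Hz
t_fmri = np.arange(0, duration, 2.0)           # fMRI-style sampling: one volume every 2 s

oscillation_fmri = np.sin(2 * np.pi * freq * t_fmri)

# Nyquist: to follow a 40 Hz rhythm you need more than 80 samples per second.
print(f"M/EEG grid: {t_meg.size} samples; fMRI grid: {t_fmri.size} samples")
print("fMRI-rate samples of the 40 Hz rhythm:", np.round(oscillation_fmri, 3))
```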
It is even more important to understand the fundamental neurophysiology behind these techniques as we consider the merits of a multimodal approach to brain research (e.g., combining M/EEG with fMRI). It's interesting to see the reactions of the students as I explain fMRI to them. I think contrasting the different temporal resolutions of these techniques really brings home why MEG is such an important tool for illuminating brain dynamics in a systems neuroscience approach.
The course is moving into a more technical phase. Teaching the concepts behind all the pre-processing techniques is no easy feat. Linear algebra is key, so I have set up tutorials to ensure everyone in the class understands vectors and linear algebra. This knowledge will be used to open up the geometric interpretation of many complicated concepts such as projection, principal component analysis and signal-space projection. It's amazing how much good ol' Pythagorean theorem can be used to explain difficult concepts (well, at least up to three dimensions).
I think I've taken for granted that it is easy to generalize ideas in vector geometry from three dimensions (tangible in our physical world) to higher N-dimensional spaces (hyperspaces that are hard to convey in our 3-D world). Seeing some students struggle with this point reminds me of the challenges of interdisciplinary classes. But it is rewarding to see the engineering students not only take the challenge head-on but also help their fellow students grasp the concept.
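(Here is the point in a few lines of NumPy – made-up numbers, but it shows that the Pythagorean relationship between a vector, its projection onto a subspace, and the orthogonal residual holds in 306 dimensions just as it does in three.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 306                                   # e.g., the number of MEG channels

x = rng.standard_normal(n)                # an arbitrary "signal" vector
u = rng.standard_normal(n)
u /= np.linalg.norm(u)                    # unit vector spanning a 1-D subspace

proj = (u @ x) * u                        # projection of x onto that subspace
resid = x - proj                          # the orthogonal remainder

# Pythagoras in N dimensions: ||x||^2 = ||proj||^2 + ||resid||^2
assert np.isclose(x @ x, proj @ proj + resid @ resid)
```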
Thus far, I've been avoiding equations. It is important for students to learn how to intuitively interpret MEG signals using abstract ideas like vector geometry. Inevitably, though, the subject of inverse imaging with MEG requires the use of equations.
Back in Week 2, I talked very briefly about the four Maxwell equations. I had no intention of teaching the entire class the vector calculus of electromagnetism, but I did want to reinforce the importance of those four equations to MEG research and the magnetostatic assumptions we make. This week, however, I wanted to go through the concept of the minimum mean-square error estimator and walk the class through some basic equations. I'm sure this was not easy for some students who were exposed to matrices for the first time during last week's linear algebra tutorial, but going through the derivation is important – especially for the engineers and the mathematically inclined students.
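(For the record, and for the mathematically inclined, the kind of expression we walked through looks roughly like the standard minimum-norm estimate below. The notation is mine and the derivation is left out; it is only meant as a pointer, not a substitute for the lecture.)

```latex
% Measurement model: y = G j + n, where y are the sensor data, G is the
% gain (lead-field) matrix, j the source amplitudes, and n the noise with
% covariance C; the prior source covariance is R.
\hat{\mathbf{j}} = \mathbf{R}\,\mathbf{G}^{\top}
  \left( \mathbf{G}\mathbf{R}\mathbf{G}^{\top} + \lambda^{2}\mathbf{C} \right)^{-1} \mathbf{y}
% \lambda^2 is a regularization parameter that trades off fitting the data
% against the prior assumption on source power.
```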
But the key point I want students to understand is that the researcher must appreciate the model used in inverse imaging. Each choice the researcher makes has implications for the interpretation of their data. They need to work closely with MEG experts, express the fundamental scientific questions they are addressing, and judiciously choose the most suitable model for that particular question. There is no single silver bullet (if there were, my job would be much easier). The class is a smart bunch, and this intro course is just the beginning of, what I expect will be, their interest in MEG techniques.
This week's lectures focus on the material covered in Matti Hämäläinen's MEG software manual. Dr. Hämäläinen is director of the MEG Center at the Martinos Center for Biomedical Imaging at Harvard and one of my postdoctoral advisors. He is one of the leaders in the MEG community championing the development of the computation and visualization of cortically constrained L2 minimum-norm estimates (MNE) of current activity in our brain. Analyzing MEG data is not a cakewalk – it requires the researcher to have a strong math and physics background, especially if we want to estimate the current activity originating from the cortex. The MNE software (used around the world to compute M/EEG current estimates) provides useful tools for researchers to analyze MEG data and eventually produce brain movies – seeing how different parts of the brain dynamically react to and process information. In the MNE software manual, he provides a nice, succinct summary of how to process M/EEG data incorporating anatomical information, and he appropriately named this part of the manual the "MNE cookbook." But to be a master chef MEG-style, you really need to: 1) understand the key ingredients – the nature of M/EEG signals (covered in Week 2); 2) take proper care during the process – subject preparation, behavioral understanding and timing (covered in Week 3); and 3) have the proper skills – in our case, the mathematical knowledge of linear algebra and the minimum mean-square error estimator (covered in Weeks 4 and 5). The MNE cookbook has become not just a flow chart that students can follow to process data; it now serves as a succinct reminder of all the important points we have discussed in the past six weeks relevant to neuroscience research using M/EEG.
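(For readers who want to see what the cookbook looks like in practice: the same steps can be sketched with the MNE-Python package, a descendant of the MNE software discussed above. The file names, event IDs and pre-computed forward solution below are placeholders, not real data, and many details are omitted.)

```python
import mne

# 1) The ingredients: the raw M/EEG recording.
raw = mne.io.read_raw_fif("sample_raw.fif", preload=True)
raw.filter(l_freq=1.0, h_freq=40.0)                       # basic band-pass

# 2) Proper care during the process: events, epochs, averaging.
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"tone": 1},
                    tmin=-0.2, tmax=0.5, baseline=(None, 0))
evoked = epochs.average()

# 3) The proper skills: noise covariance, forward model, minimum-norm inverse.
noise_cov = mne.compute_covariance(epochs, tmax=0.0)
fwd = mne.read_forward_solution("sample_fwd.fif")
inv = mne.minimum_norm.make_inverse_operator(evoked.info, fwd, noise_cov)
stc = mne.minimum_norm.apply_inverse(evoked, inv, method="MNE")
# stc (a source estimate on the cortical surface) is what eventually
# becomes the "brain movie".
```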
Knowledge of MEG has traditionally been disseminated in workshop formats. These sessions target neuroscientists who are interested in using this new technique to answer questions specific to their own research programs. The challenge in these workshops is to deliver the necessary material in just one or two days. The beauty of a 10-week graduate course is that it allows more time for the ideas to simmer, which I expect will result in greater sophistication with MEG concepts – not only relative to students' own research interests but also in the context of other scientific questions. It is very satisfying to see concepts finally click together (you can tell by the facial expressions during class!).
Even more impressive, the students now seem to grasp the important differences between fMRI and M/EEG. I am now fielding a number of astute questions on the merits of combining these techniques in a multimodal imaging approach. Does a multimodal imaging approach simply mean scanning the same subject with fMRI and then with M/EEG using the same experiment? How does one take the slow BOLD signals captured in fMRI (with a temporal resolution of seconds) and integrate this information with the M/EEG signals, which have a temporal resolution of milliseconds?
Writing from Banff, Canada, where I'm attending a meeting co-sponsored by the International Society for Noninvasive Functional Source Imaging and the International Society for Bioelectromagnetism. During the sessions, I had the opportunity to catch up with one of the authors of the first book devoted to MEG techniques (published just last year) – Riitta Salmelin. MEG: An Introduction to Methods has been an excellent resource for motivating class discussion. I've also recommended that students obtain the book to use as a reference.
Back to class... This week, we talked extensively about different statistical analysis techniques and the associated cautionary tales. It is our responsibility as scientists to report findings that add to our understanding of brain mechanisms. But if we are not careful, we can accidentally report something that could have happened just by chance (beautifully illustrated in this comic strip). During tutorials, we also discussed the techniques used in MEG research to study different questions (e.g., the cortical involvement in auditory attention, and the differences in speech production mechanisms between fluent and stuttering speakers). Students now appreciate the finer points of the methods sections reported in these papers and raise interesting questions about the different statistical approaches. I'm glad to see that the lectures and the tutorial series are definitely paying off.
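(The cautionary tale in one small simulation – a hypothetical sketch with no real data: run enough tests on pure noise and a handful will come out "significant" just by chance, exactly the trap the comic strip pokes fun at.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_tests, n_subjects, alpha = 100, 20, 0.05

# Pure noise: there is no true effect anywhere in these "data".
data = rng.standard_normal((n_tests, n_subjects))
p_values = stats.ttest_1samp(data, 0.0, axis=1).pvalue

print("'significant' results at p < .05:", int(np.sum(p_values < alpha)))        # ~5 expected
print("after Bonferroni correction:    ", int(np.sum(p_values < alpha / n_tests)))
```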
This week concludes the planned seven-week lecture series. Over the upcoming two weeks, the students will have an opportunity to highlight how MEG has been used in their particular fields of study. This all leads up to the grant proposal presentations scheduled for the final week. Most students have already led a discussion in the tutorial series, and (as part of the class assessment) each has received constructive critiques on how to improve their teaching style. I'm eager to see how they have integrated the peer comments into their upcoming presentations!
BINGO!! At least that's what I really wanted to yell out loud in this week's tutorial. I asked an engineering student to prepare a tutorial on the topic of Linear Time-Invariant (LTI) systems. In engineering, it is often convenient to make certain assumptions. In particular, when a system (a mathematical description of how a process predictably transforms an input signal) is both linear and time-invariant, it greatly simplifies our analysis.
What makes a system linear and time-invariant? Consider a microphone as a system: if I speak into the microphone, it will pick up my speech; if another person also speaks into the same microphone, it will pick up my speech and the other person's speech added (linearly) together. This makes the microphone a linear system. A microphone can also behave non-linearly: if I yell at it so loudly that the signal clips, it can no longer track my speech properly because its capacity for sound has been exceeded (think of the distorted sound you hear when you turn up your speakers too high). A microphone is also a time-invariant system: whether you speak into it today or tomorrow, it will record the same way. An example of a system that is not time-invariant (yes, this is engineering-speak… another way of phrasing it would be a time-varying system) is a Tempur-Pedic mattress: when you first sleep on the mattress, it slowly deforms to the shape of your body. When you get up and lie on the mattress again, the material already has a "memory" of your body shape, so the mattress does not deform the same way the second time.
The beauty of an LTI system is that once its "impulse response" is known, we can anticipate how the system will react to every possible input imaginable (think of the "impulse response" as the "signature" of a system). When the engineering student heard this in class, he was extremely excited – he realized why we want to apply the LTI system approach in brain sciences. Meanwhile, the psychologist in the class looked puzzled: she was concerned that the brain is neither linear nor time-invariant. She pointed out that the brain is not exactly linear. For example, in a crowded environment, listening to a friend's voice amongst all the other sounds, or simply looking at their lip movements, may not on its own be enough to help you understand them. If the brain were a linear system, the combination of degraded auditory input and poor lip-reading would still add up to inadequate input – making it nearly impossible to understand what is being said. But experience tells us that when we watch lip movements in situations where speech is degraded, intelligibility improves enormously. Therefore, our brain should be considered a non-linear system. She also argued that our brain is certainly not time-invariant, because we form thoughts in time and use experience to shape our subsequent responses. For example, a joke might be funny the first time, but repeating the same joke the next day will probably not elicit the same response. Therefore, our brain should not be considered time-invariant.
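(To make the impulse-response point concrete, here is a tiny NumPy sketch with toy numbers – nothing to do with real brain data – showing what linearity and time-invariance buy you for a simple system.)

```python
import numpy as np

# For an LTI system, the impulse response h is its "signature": the response
# to ANY input x is simply the convolution of x with h.
h = np.array([0.5, 0.3, 0.15, 0.05])          # a made-up impulse response

def lti_response(x):
    return np.convolve(x, h)

x1 = np.array([1.0, 0.0, 0.0, 2.0, 0.0])
x2 = np.array([0.0, 1.0, 0.0, 0.0, 1.0])

# Linearity: the response to (x1 + x2) equals the sum of the individual responses.
assert np.allclose(lti_response(x1 + x2), lti_response(x1) + lti_response(x2))

# Time-invariance: delaying the input by one sample simply delays the output.
delayed = np.concatenate(([0.0], x1))
assert np.allclose(lti_response(delayed)[1:], lti_response(x1))
```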
BINGO! Our brain in general is neither linear nor time-invariant. Yet the assumptions baked into the engineering tools we use persuade us to think of the brain as an LTI system.
There are certainly occasions when the brain acts like an LTI system. When we analyze brain imaging data, we must be cognizant of all the assumptions made by the tools we use. Hopefully this discourse in the tutorial opened up a dialogue between the engineering and psychology students, and such conversations will be the first of many they'll have with their colleagues in their future scientific careers.
Our class moved venue this week – the 161st Meeting of the Acoustical Society of America is happening in Seattle! There was a special session organized by the co-director of I-LABS, Prof. Patricia Kuhl, to highlight the technological, methodological and theoretical advances in neuroimaging and speech perception. I was invited to discuss how the latest technological advancements in MEG help us better understand the brain mechanisms associated with speech perception. It was great to see so many students show up to this session to see how MEG technology is applied to answer some of the most interesting neuroscience questions, e.g., how do infants acquire the knowledge to distinguish speech sounds, and what brain dynamics are associated with this amazing developmental stage? One student also commented that he is finally getting the key points when, in conversations with my international colleagues in speech and hearing sciences, I speak about the special place M/EEG imaging technology holds in our field of study.
Students are also busy preparing for their proposal presentations next week. From the specific proposals I've received and the seminars they've led in the last two weeks highlighting the advancement of neuroimaging in their own fields, I can see the hard work that has gone into designing experiments, and the breadth of what they have learned about MEG as a brain imaging tool. I'm looking forward to seeing their research proposal presentations next week!
It has been very exciting to listen to the student presentations on their MEG project proposals this week. Five grant proposals were submitted, representing the work of the 15 students who took the course. Some chose to team up with students from other departments whose backgrounds complemented their own, while others chose the solo approach. Topics ranged from understanding how a tonal language such as Cantonese (my mother tongue) is processed, to mapping the differences in motor preparation between stutterers and fluent speakers. In addition to the neuroscience topics, one undergraduate also submitted an NSF graduate fellowship application proposing to explore auditory brain-computer interfaces. Each group presented their proposal and scientific approach to the class (e.g., how the data will be processed, justification of the specific inverse imaging technique chosen, and what statistics are planned to test their hypotheses). There was great diversity in ideas, but one common thread ties them together – all leverage the power of MEG technology vis-à-vis other neuroimaging techniques.
The grand finale of the course was an NIH-style mock study section. Not only did students have to present their own research proposals, they were also responsible for reviewing their peers' grants. In this process, students had the opportunity to critically assess others' work and learn from the strengths and weaknesses of those proposals. In return, they received constructive criticism from their peers to help them improve their own proposals. None of this was intended as busy work – students were encouraged to treat this effort as a preliminary step toward submitting a real grant (e.g., an NIH NRSA pre-doctoral / post-doctoral fellowship or an NSF graduate fellowship award).
After 1.5 hours of deliberation, Susan McLaughlin's NRSA proposal, "A MEG Study of Asymmetries in Human Auditory Cortical Functional Connectivity Associated with the Processing of Interaural Time Differences," emerged as the winning grant. As part of the reward for winning this annual competition, she will be able to experience first-hand how to collect and analyze MEG data and eventually incorporate these results into her grant application. (Pilot hours for neuroimaging scanning are often hard to come by; as a reference, researchers commonly pay around $600/hr to access MRI / MEG facilities throughout the U.S.) Each team now has one week to incorporate all the comments provided by the class and turn in their final proposal. I look forward to seeing what the final products will look like!
There is no doubt this course was challenging. It is not easy to learn MEG from scratch and come up with a cogent proposal incorporating the technology in less than 10 weeks. However, we often don't know what we're capable of achieving until we're pushed to our limits, and those experiences are often the most memorable. I hope students found SPHSC 594 both demanding and rewarding. Exposure to the grant writing and review process will no doubt help them in their future academic endeavors. Above all, I hope they had the opportunity to learn from each other and discovered how exciting and dynamic interdisciplinary research can be. I look forward to teaching this course again in the near future.