Call: ICMI Grand Challenge Workshop on Multimodal Learning Analytics (at ICMI 2013)

Call for Participation

ICMI Grand Challenge Workshop on Multimodal Learning Analytics
ICMI Multimodal Learning Analytics Workshop 2013

The workshop will take place in Sydney, Australia, as part of the 2013 International Conference on Multimodal Interaction (ICMI 2013).

http://tltl.stanford.edu/mla2013

Important Dates:

June 15, 2013: MMLA database available for grand challenge participants
August 21, 2013: Paper submission deadline
September 15, 2013: Notification of acceptance
October 8, 2013: Camera-ready papers due
December 9, 2013: ICMI Workshop

Multimodal learning analytics, learning analytics, and educational data mining are emerging disciplines concerned with developing techniques to more deeply explore the unique data that arise in learning settings, and with using the results of these analyses to understand how students learn. Among other things, this includes how students communicate, collaborate, and use digital and non-digital tools during learning activities, and the impact of these interactions on developing new skills and constructing knowledge. Advances in learning analytics are expected to contribute new empirical findings, theories, methods, and metrics for understanding how students learn, and to improve pedagogical support for students’ learning through new digital tools, teaching strategies, and curricula.

The most recent direction within this area is multimodal learning analytics, which emphasizes the analysis of natural, rich modalities of communication during situated interpersonal and computer-mediated learning activities. This includes students’ speech, writing, and nonverbal interaction (e.g., gestures, facial expressions, gaze, sentiment). The First International Conference on Multimodal Learning Analytics (http://tltl.stanford.edu/mla2012/) represented the first intellectual gathering of multidisciplinary scientists interested in this new topic.

Grand Challenge Workshop and Participation Levels

The Second International Workshop on Multimodal Learning Analytics will bring together researchers in multimodal interaction and systems, cognitive and learning sciences, educational technologies, and related areas to advance research on multimodal learning analytics. Following the First International Workshop on Multimodal Learning Analytics in Santa Monica in 2012, this second workshop will be organized as a data-driven “Grand Challenge” event, to be held at ICMI 2013 in Sydney, Australia, on December 9, 2013. There will be three levels of workshop participation, for attendees who wish to:

  1. Participate in the grand challenge dataset competition and report results (using your own dataset, or the Math Data Corpus described below, which is available upon request)
  2. Submit an independent research paper on MMLA, on topics including learning-oriented behaviors related to the development of domain expertise, prediction techniques, data resources, and others
  3. Observe and discuss new topics and challenges in MMLA with other attendees; for this level of participation, a position paper should be submitted

Those wishing to participate in the competition using the Math Data Corpus will be asked to contact the workshop organizers and sign a “collaborator agreement” for IRB purposes in order to access the dataset (see the data corpus section below). The dataset used for the competition is well structured to support investigating different aspects of multimodal learning analytics: it involves high school students collaborating while solving mathematics problems.

The dataset will be available for a six-month period so researchers can participate in the competition. The competition will involve identifying one or more factors and demonstrating that they can predict domain expertise: (1) with high reliability, and (2) as early in a session as possible. Researchers will be asked to accurately identify: (1) which of three students in each session is the dominant domain expert, and (2) which of 16 problems in each session is solved correctly versus incorrectly using their predictor(s).
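
To make the prediction task concrete, the sketch below shows one possible evaluation loop for the expert-identification part of the challenge. It is a minimal illustration only: the speaking-time feature, the field names, and the example data are all assumptions for demonstration, not part of the challenge specification or the corpus format.

```python
# Minimal sketch of the expert-identification task (Python).
# ASSUMPTION: the speaking-time feature and all field names below are
# illustrative only; the challenge does not prescribe any predictor.

def predict_dominant_expert(speaking_seconds):
    """Toy baseline: guess that the student who speaks the most
    during a session is the dominant domain expert."""
    return max(speaking_seconds, key=speaking_seconds.get)

def expert_accuracy(sessions):
    """Fraction of sessions where the predicted expert matches the
    ground-truth label provided with the scored corpus."""
    hits = sum(
        predict_dominant_expert(s["speaking_seconds"]) == s["expert"]
        for s in sessions
    )
    return hits / len(sessions)

# Hypothetical example data: two sessions, three students each.
sessions = [
    {"speaking_seconds": {"s1": 310.0, "s2": 120.5, "s3": 95.0}, "expert": "s1"},
    {"speaking_seconds": {"s1": 80.0, "s2": 240.0, "s3": 150.0}, "expert": "s2"},
]
print(f"Expert-identification accuracy: {expert_accuracy(sessions):.2f}")
```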

Available Data Corpus and Multimodal Analysis Tools

Existing Dataset: A data corpus is available for analysis during the multimodal learning analytics competition. It involves 12 sessions in which small groups of three students collaborated while solving mathematics problems (i.e., geometry and algebra). Data were collected on their natural multimodal communication and activity patterns during these problem-solving and peer tutoring sessions, including students’ speech, digital pen input, facial expressions, and physical movements. In total, approximately 15-18 hours of multimodal data are available from these situated problem-solving sessions.

Participants were 18 high school students, organized into three-person male and female groups. Each group of three students met for two sessions. The groups varied in performance characteristics, with some low-to-moderate performers and others high-performing students. During the sessions, students engaged in authentic problem solving and peer tutoring as they worked on 16 mathematics problems, four apiece representing easy, moderate, hard, and very hard difficulty levels. Each problem had a canonical correct answer, and students were motivated to solve problems correctly because one student was randomly called upon to explain the answer after it was solved. During each session, natural multimodal data were captured from 12 independent audio, visual, and pen signal streams. These included high-fidelity: (1) close-up camera views of each student, showing face and hand movements at the table (waist-up view), as well as a wide-angle view for context and a top-down view of students’ writing and artifacts on the table; (2) close-talking microphone capture of each student’s speech, plus a room microphone for recording group discussion; (3) digital pen input from each student, who used an Anoto-based digital pen and a large sheet of digital paper for streaming written input. Software was developed for accurate time synchronization of all twelve of these media streams during collection and playback. The data have been segmented by the start and end time of each problem, scored for solution correctness, and also scored for which student solved each problem correctly. The data available for analysis include students’:

  1. Speech signals
  2. Digital pen signals
  3. Video signals showing activity patterns (e.g., gestures, facial expressions)

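For researchers planning their analysis pipelines, the following sketch suggests one way the segmented, time-synchronized corpus might be represented in code. All class and field names are assumptions for illustration; the actual file layout is described in the documentation distributed with the Math Data Corpus.

```python
# Sketch of one possible in-memory organization of the corpus (Python).
# ASSUMPTION: class and field names are illustrative; consult the
# documentation shipped with the Math Data Corpus for the real layout.

from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ProblemSegment:
    """One of the 16 problems in a session, segmented and scored."""
    problem_id: int
    start_s: float            # onset, seconds from session start
    end_s: float              # offset
    solved_correctly: bool    # scored against the canonical answer
    solver: Optional[str]     # student who solved it correctly, if any

@dataclass
class Session:
    """One recording session: three students, 12 synchronized streams."""
    session_id: str
    students: list            # three student identifiers
    close_talk_audio: dict    # student id -> close-talk mic recording
    room_mic: str             # room microphone recording
    videos: dict              # camera name -> video file
                              # (close-up per student, wide-angle, top-down)
    pen_streams: dict         # student id -> Anoto digital pen stream
    problems: list = field(default_factory=list)  # ProblemSegment entries
```
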
In addition, for each student group, one session of digital pen data has been coded for written representations, including (1) the type of written representation (e.g., linguistic, symbolic, numeric, diagrammatic, marking), (2) the meaning of the representation, (3) the start/end time of each representation, and (4) the presence of written disfluencies. Note that lexical transcriptions of speech will not be distributed with the dataset, but participants are free to transcribe the speech themselves if they wish to analyze its content.
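
The coded pen data lend themselves to a simple record type. The sketch below is one hypothetical shape for such records, with field names invented for illustration; the distributed coding files may differ.

```python
# Sketch of a record type for the coded pen annotations (Python).
# ASSUMPTION: field names are invented for illustration and may not
# match the distributed coding files.

from dataclasses import dataclass

REPRESENTATION_TYPES = {"linguistic", "symbolic", "numeric",
                        "diagrammatic", "marking"}

@dataclass
class PenAnnotation:
    student: str        # which student wrote the representation
    rep_type: str       # one of REPRESENTATION_TYPES
    meaning: str        # coder's description of the representation
    start_s: float      # onset, seconds from session start
    end_s: float        # offset
    disfluent: bool     # written disfluency present?

    def __post_init__(self):
        if self.rep_type not in REPRESENTATION_TYPES:
            raise ValueError(f"unknown representation type: {self.rep_type!r}")
```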

For more information visit http://tltl.stanford.edu/mla2013
