Multimodal Interactions
What Are Multimodal User Interactions?
The term “multimodal interaction” refers to a person’s ability to interact with information using multiple sensory modalities. For example, a student who views an image in a book while manipulating a 360-degree model is using sight and touch to interact with the material. In math, young students use counting blocks in addition to worksheets when learning addition and subtraction, but multimodal interactions should grow more complex as the content does. The use of simulated experiments in addition to videos, for example, can help physics students learn how to calculate the force and velocity of an object without needing to manipulate objects that are dangerous or heavy.
While we addressed multimodal interactions in the 2017 report, we are addressing the topic again this year because we believe it is still an area worth watching. For a more in-depth explanation of multimodal interactions, please refer to the chapter on the topic in the 2017 DIAGRAM Report. This chapter explores why multimodal interactions are important and how different types of modalities are being used in the classroom to augment standard curriculum such as textbooks or lectures, as we assume this is how most educators will incorporate multimodal interactions in their classrooms or learning environments. It is worth noting, however, that educators shouldn’t feel limited to any specific combination of modalities.
Why Are Multimodal User Interactions Important?
The use of multimodal interactions can motivate and engage students by enabling access to information or simply by addressing their interests and strengths. It allows learners to use more of their senses at once when taking in information, lending complexity to their perception and giving them a multifaceted way to interact with new information, which may in turn stimulate new ideas and associations. Multimodal interfaces give teachers flexibility in how they teach to the different learning styles in their classrooms, especially for students with disabilities (Tzovaras, 2008). Multiple forms of representation can also communicate information more meaningfully than a single modality can and help students display their understanding. These benefits are especially valuable in learning environments, the most successful of which use multimodal presentations of concepts (Moreno and Mayer, 2007).
How Are Multimodal User Interfaces Applied in Education?
Readers who went to school in the United States might recall learning about frog anatomy, reading about the makeup of the frog and even viewing a video of the frog’s biological functions. Some might have also had the opportunity to dissect a frog. These multiple representations of the same concept are a perfect example of multimodal instruction, which combines multiple ways of learning (reading, visual, and kinesthetic) to enhance a learner’s comprehension.
In higher education, a recent study found that most professors not only engage in multimodal practice but also find that it works for their students (Reid et al., 2016). Beyond resources that combine visual (pictures) and verbal (text) modes in science, work also exists on other modes and combinations of modes (Reid et al., 2016). What follows is not an exhaustive list of alternate modalities, but examples of the types of formats readily available for use in the classroom for those wishing to enhance current teaching methods.
Ways to Augment Curriculum
2D Tactile Graphics and 3D Objects
Tactile graphics, including tactile pictures, diagrams, maps, graphs, and 3D objects, have historically been used to convey non-textual information to those who are blind or visually impaired. In fact, tactiles have been used for decades in classrooms supporting students with visual impairments. The Braille Authority of North America supported literacy for tactile readers by releasing guidelines in 2010 that set standards for adapting visual information to the sense of touch. While the technology itself has been around for a long time, it has experienced rapid improvements, and its uses have expanded to include students of all learning types, not just those with visual disabilities.
2D Tactile Graphics
2D (or, more precisely, 2.5D) tactile graphics are slightly raised shapes, dots, or lines on a piece of material that can be felt through touch and convey information about the object. 2D tactile graphics allow students who are blind to access information in a way that is not possible using a book alone. However, for students with other disabilities, access to the embossers that create tactile materials may be limited. One popular, low-tech solution that has shown academic efficacy for students with and without disabilities in learning letters, numbers, and basic shapes is Wikki Stix: non-toxic, maneuverable, wax-covered pieces of yarn that students can twist and turn to match the shapes of letters when learning how to write. Objects such as Wikki Stix give students a tactile experience in conjunction with the visual experience of a textbook or the auditory experience of a lecture, which reinforces the concepts they are learning.
Lately, companies have been building on the traditional models of 2D tactile graphics to increase access to information. One such company is Orbit Research®, which has begun to incorporate real-time 2D graphical displays, giving students the opportunity to have interactive experiences when using tactiles. Take, for example, the Graphiti project. Instead of a single line of braille text that refreshes like a traditional refreshable braille device, the Graphiti tablet provides a full-page display that allows students to interact in real time with things such as graphs and charts.
Recently, CNET featured the American Printing House for the Blind using the Graphiti tablet to let people who are blind or visually impaired experience the solar eclipse that took place on August 21, 2017. During the eclipse, a photographer captured one image of the eclipse every thirty seconds and shared the files with the Graphiti tablet, which refreshed its screen with an updated tactile representation of the moon blocking the sun. When used in conjunction with a lecture on eclipses, this allowed blind and visually impaired students to experience the scientific phenomenon in a multimodal way that fully immersed them in an astronomical event they couldn’t have experienced otherwise.
3D Objects
3D objects have physical form in all three dimensions: height, width, and depth. While 3D objects can be used in much the same way as their 2D counterparts to reinforce a complex idea or concept, they can serve other purposes as well. Students with autism, sensory processing disorders, or PTSD may need extra tactile stimulation to help them focus or to calm them in situations they find stressful. Fidget toys can be added to a 504 plan or Individualized Education Program (IEP) and have been helpful in relieving stress, curbing distraction, refocusing the mind, easing test anxiety, and releasing nervous energy quietly in class.
With 3D printing, a parent, teacher, or even a student can produce these devices on their own and customize them to their specific needs and preferences. One example of using a model in conjunction with a textbook occurred in a class about DNA sequencing. The teacher circulated a 3D representation of DNA around the room, and it eventually reached a student with visual impairments. After exploring the model, the student announced that she finally understood what the books had meant by “rotating double helix.” The same approach can help students with other types of disabilities as well. Students with learning disabilities have reported that pictures of double helices look like a collection of squiggly lines, and it can be difficult for them to perceive what the back of the structure looks like. Giving them a 3D model of DNA helps them fully grasp the structure of the molecule and how all the elements fit together.
While 3D printing technology has come a long way since Benetech explored the use of these tactiles in the classroom, many schools cannot afford the technology or do not have staff who can maintain it. The 3D printing process requires the creation or acquisition of a print-ready object as well as someone who can make modifications and knows how to use a 3D printer. While 3D printing has great potential, most classrooms will not be ready for it until it becomes easier to introduce.
In 2015, Benetech and members of the DIAGRAM community created a quick start guide for people interested in using 3D technology in the classroom. This guide discusses, among other topics, considerations for using 3D manipulatives with different populations, finding objects for production, and creating a maker lab in a school. One thing we learned from talking to educators about using 3D technology is that they often do not know where to find materials. Local libraries, museums, and even the National Institutes of Health have vast collections of files ready for production. Resources include:
- Libre 3D – An open-source repository for 3D enthusiasts that offers free downloads, as well as printers and other hardware for purchase.
- LibraryLyna – A free library of tested 3D models for the visually impaired as well as a free request service for models that are not available in any other medium.
- Pinshape – An online community and marketplace for designers and users with free images and images for sale. Designers can share and sell their 3D printable designs, and users can download designs to print on their own.
- Shapeways – A 3D printing service and marketplace.
- Smithsonian X3D – An iconic collection of Smithsonian objects.
- Thingiverse – A repository of free, open-source downloads. Several files can be customized and remixed.
- 3D Hubs – An online service that allows users to upload a design and have it created and delivered.
- YouMagine – An online repository of open-source designs that users can download and print on their own.
Multimedia Interactives
Multimedia interactives are applications that allow end users to manipulate their experience in real time by interacting directly with the digital media. For example, users can navigate through a forest in a Minecraft world, dissect a digital frog in science class, or use virtual turntables to mix records in a Google doodle celebrating the 44th anniversary of hip-hop.
In this increasingly digital age, more and more multimedia interactives are appearing in a wide range of applications. Web browsers such as Chrome and Safari, standalone computer software, gaming consoles such as Xbox and PlayStation, and tablets such as the iPad, Amazon Fire, and Samsung Galaxy are just a few examples of the software and hardware people use to access multimedia-rich experiences. While gaming and web browsing are often the most well-known types of multimedia interactives, there are plenty designed specifically for educational purposes.
Examples include:
- zSpace Learning Lab — hardware, software, and educational content designed to increase the use of experiential learning in the classroom using augmented reality (AR) and virtual reality (VR), especially for STEM (science, technology, engineering, and math) content.
- Peer — a mixed reality educational experience that uses Internet-enabled sensors along with digital headsets to make abstract concepts in STEAM (science, technology, engineering, art and math) more tangible.
- McGraw-Hill’s Connect® — an interactive, online learning environment that allows teachers and students to customize the learning experience according to learning preferences.
- Pearson’s Revel™ — an immersive experience for students combining subject matter, interactive media, and assessments.
- PhET Interactive Simulations (University of Colorado Boulder) — a game-like environment where students learn through exploration and discovery. For example, in an interactive lesson about static electricity, students can experiment to see if John Travoltage will get shocked as he rubs his foot on the rug and moves his arm toward the doorknob.
- Minecraft Education Edition — layers tools to support collaboration and structured learning into a gaming environment.
- Disney’s Infinity Learning — teaches game making and development to primary students.
For those interested in learning more about multimedia interactives and other ways they can be used in the classroom, please see the 2017 DIAGRAM Report chapter on this topic.
Sonification
Sonification is the use of non-speech audio to convey information or perceptualize data. Researchers are increasingly experimenting with sonification in education, as initial research has demonstrated that auditory perception has advantages in temporal, spatial, amplitude, and frequency resolution. Sonification can be used as an alternative or complement to the visualization techniques already in common use.
For example, the Sonification Lab at the Georgia Institute of Technology developed the Sonification Sandbox, a graphical interface that allows users to map data to auditory parameters and add context (Walker & Cothran, 2003). Using Excel or other spreadsheet software, users can upload data into the Sandbox. Once the data is displayed visually on screen, users can map the graph’s coordinates to variations in timbre, pitch range, volume, and panning. Any computer with Java (2.0 or greater) installed and a General MIDI-enabled sound card can run the application.
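To make the underlying idea concrete, the minimal sketch below maps a data series to pitch and renders it as a WAV file using only Python’s standard library. The frequency range, note length, and sine-wave timbre are illustrative assumptions; this shows the general data-to-pitch technique, not the Sonification Sandbox’s actual implementation.

```python
# Minimal auditory-graph sketch: each data point becomes a sine tone
# whose pitch is linearly interpolated across an assumed frequency range.
import math
import struct
import wave

SAMPLE_RATE = 44100                  # mono, 16-bit audio
NOTE_SECONDS = 0.25                  # tone length per data point (assumed)
FREQ_LOW, FREQ_HIGH = 220.0, 880.0   # assumed pitch range, A3 to A5

def sonify(data, path="auditory_graph.wav"):
    """Render each value in `data` as a tone; higher values sound higher."""
    lo, hi = min(data), max(data)
    span = (hi - lo) or 1.0          # guard against flat data
    frames = bytearray()
    for value in data:
        freq = FREQ_LOW + (value - lo) / span * (FREQ_HIGH - FREQ_LOW)
        for n in range(int(SAMPLE_RATE * NOTE_SECONDS)):
            sample = 0.5 * math.sin(2 * math.pi * freq * n / SAMPLE_RATE)
            frames += struct.pack("<h", int(sample * 32767))
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)          # mono
        wav.setsampwidth(2)          # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

# A rising-then-falling series produces a pitch contour the ear can follow.
sonify([1, 2, 4, 8, 16, 12, 6, 3])
```

Played back, a rising series is heard as a rising pitch contour, which is the essence of an auditory graph.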
Another educational tool is MathTrax, developed by NASA. It is aimed at middle and high school students who want to graph equations, run physics simulations, or plot data files. The graphs include text descriptions as well as sound, so users can see, read, and/or hear the graphs they have produced.
Alternative (Alt) Text
Alt text is a description that accompanies an image and conveys the same essential information as the image. In situations where an image is not available to the reader, perhaps because they have turned off images in their web browser or are using a screen reader due to a visual impairment, the alternative text ensures that no information or functionality is lost.
For many of us, the use of alt text isn’t a foreign concept; however, the idea that teachers or parents can incorporate their own alt text when creating content often is. Furthermore, companies like Google and Microsoft have been expanding the accessibility features of their products. For example, a teacher who is creating a Word document or PowerPoint presentation can use Microsoft’s built-in accessibility checker to identify where she might have to insert alt text to make the document readable by all users.
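As a concrete illustration of what such a checker looks for, here is a minimal sketch, using only Python’s standard library, that scans HTML for images missing alt text. It illustrates the concept only; it is not Microsoft’s checker or any product’s actual implementation.

```python
# Minimal alt-text audit: flag <img> tags with no alt attribute,
# and note those whose alt text is empty (decorative, or an oversight).
from html.parser import HTMLParser

class AltTextAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.missing = []   # images with no alt attribute at all
        self.empty = []     # images with alt=""

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        attrs = dict(attrs)
        src = attrs.get("src", "(no src)")
        if "alt" not in attrs:
            self.missing.append(src)
        elif not (attrs["alt"] or "").strip():
            self.empty.append(src)

audit = AltTextAudit()
audit.feed('<p><img src="cell.png">'
           '<img src="dna.png" alt="A rotating double helix of DNA"></p>')
print("Missing alt text:", audit.missing)  # -> ['cell.png']
print("Empty alt text:", audit.empty)      # -> []
```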
Although Microsoft provides some options for adding alt text to documents, sometimes alt text is not enough. Benetech, the DIAGRAM Center, the National Center for Accessible Media (NCAM), and Touch Graphics created the Poet Training Tool for content creators such as professional publishers, their service providers, and individual authors who make their material available electronically. This web-based, image description resource teaches those who are creating accessible digital documents how and when to describe various types of images frequently found in educational content. Poet provides best-practice guidelines and exercises for writing effective image descriptions.
Audio Description
Most videos contain both sound and images and therefore need additional accessibility considerations. In 1976, PBS developed a system for including captions on prerecorded programming. Despite advances in the technology at the time, it wasn’t until 1990 that a bill passed requiring the Federal Communications Commission to regulate closed captioning. Since then, video platforms have been required to provide closed captioning and often offer it in multiple languages. Netflix includes closed captioning on all of its digital offerings. YouTube uses speech recognition technology to create captions automatically for videos and also provides users with a video manager tool for adding custom captions. Additionally, it is in the process of rolling out automatic captions for livestreams to English-language channels with over 10,000 subscribers.
In most videos, not all of the action is verbalized. While captioning can open up access to videos for people who are deaf or hard of hearing, it does not make videos fully accessible to those who are visually impaired, who may miss details that are never spoken aloud. Consider, for example, a chemistry video in which a scientist is adding chemicals to a solution while stirring it slowly, and the color of the mixture is changing from clear to pink to red. Unless the scientist is actively narrating the action, a non-sighted person would have no idea what was taking place. This scenario is not only unfair, but also potentially dangerous.
To address this problem, an additional audio track can be added in which a narrator describes what is happening during the natural pauses in the audio, and sometimes during dialogue. This additional track is called an audio description. Other terms include video description and described video; “visual description” is arguably the most precise, since audio descriptions aren’t limited to videos and can also be used for museum tours and theater performances.
In early 2009, the American Council of the Blind (ACB) established the Audio Description Project (ADP) to boost levels of description activity and disseminate information on that work throughout the United States and worldwide. In August of the same year, BBC iPlayer became the first video-on-demand service to offer audio description. In December 2015, The Wiz Live! became the first live television program in the United States to include audio description. As audio description becomes more widely adopted, more tools are being developed that make creating audio description easier than ever. YouDescribe, for example, is a free, web-based platform for adding audio descriptions to YouTube content, developed by the Smith-Kettlewell Eye Research Institute. Since the 2017 DIAGRAM Report, YouDescribe has been rewritten in more current JavaScript, allowing for easier maintenance and implementation of new features and enabling it to work on a greater number of devices and browsers. It now supports inline and extended descriptions, lets users request descriptions for videos, and allows users to see and rate the quality of the descriptions provided.
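The distinction between inline and extended descriptions can be thought of as a simple scheduling problem: a description either fits within a natural pause in the soundtrack or requires pausing the video. The Python sketch below illustrates that logic with invented pause and cue timings; it is a conceptual illustration, not YouDescribe’s actual implementation.

```python
# Classify description cues: "inline" if the cue fits inside a natural
# pause in the soundtrack, "extended" if playing it requires pausing
# the video. All timings here are invented for illustration.
def schedule_descriptions(pauses, descriptions):
    """pauses: list of (start, end) seconds with no dialogue.
    descriptions: list of (time, duration, text) cues.
    Returns (inline, extended) lists of (time, text)."""
    inline, extended = [], []
    for time, duration, text in descriptions:
        fits = any(start <= time and time + duration <= end
                   for start, end in pauses)
        (inline if fits else extended).append((time, text))
    return inline, extended

pauses = [(12.0, 16.5), (40.0, 41.0)]
descriptions = [
    (12.5, 3.0, "The scientist stirs the clear solution."),
    (40.2, 4.0, "The mixture turns from pink to deep red."),
]
inline, extended = schedule_descriptions(pauses, descriptions)
print("Inline:", inline)      # first cue fits the 12.0-16.5 pause
print("Extended:", extended)  # second cue is too long for a 1-second pause
```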
Other Resources for Augmenting Curriculum
The DIAGRAM Center, funded by the Department of Education, is developing Imageshare, a resource for content creators to obtain multiple forms of an image that can be incorporated into the curriculum. Although still in its early design stage, Imageshare promises to provide access to multiple expressions of STEAM concepts for use in the classroom. Its developers have spoken with educators and understand how difficult it is to quickly find multimodal options that are accessible and affordable. Imageshare is both a repository and a registry: the repository houses a collection of 2D graphics, 3D files, alt text, and described videos that have been vetted and confirmed as accessible, while the registry links to external collections like the American Printing House for the Blind, Thingiverse, the National Institutes of Health, and more. Items in the collection will be tagged with metadata so that educators can look in one place to find what they need rather than searching a number of sites. For example, a teacher covering the circulatory system who needs alternate modes of expression can go to the site and find a captioned video to show the class as well as download a 3D file to be printed at the school, giving the teacher options to provide additional modalities while teaching the subject matter.
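As a rough illustration of how metadata tagging enables that kind of one-stop search, the hypothetical Python sketch below models a tagged collection and a topic lookup. The records, fields, and sources are invented for illustration; Imageshare’s actual schema is still being designed.

```python
# Hypothetical metadata records; not Imageshare's real schema or API.
RESOURCES = [
    {"topic": "circulatory system", "modality": "captioned video",
     "source": "external collection"},
    {"topic": "circulatory system", "modality": "3D printable model",
     "source": "repository"},
    {"topic": "plant cell", "modality": "tactile graphic",
     "source": "repository"},
]

def find_modalities(topic):
    """One-stop lookup: every vetted modality tagged with this topic."""
    return [r for r in RESOURCES if r["topic"] == topic]

for record in find_modalities("circulatory system"):
    print(record["modality"], "via", record["source"])
# -> captioned video via external collection
# -> 3D printable model via repository
```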
For students who are deaf or hard of hearing, Gallaudet University has created tools to help increase visual literacy. Visual Language and Visual Learning (VL2) is a story app that supports bilingual education. Users of VL2 can alternate between American Sign Language (ASL) and written English, and they can also view fingerspelling of the written words. Researchers have found that increased access to both languages leads to greater literacy skills in both.
Additional resources and libraries of educational videos with captioning (both manual and automatic) can be found on platforms like YouTube. With the increased use of multimedia for instruction, technology such as CaptionSync by Automatic Sync Technologies (AST) provides low-cost automated captioning for increased accessibility of educational material. Content accompanied by American Sign Language video, such as that available from the Center for Accessible Technology in Sign (CATS) at Georgia Institute of Technology, gives deaf students whose primary language is ASL access to more appropriate bilingual educational materials.
There are two major challenges in providing opportunities for multimodal interactions for students with disabilities: discoverability and production. Both may create significant barriers to getting materials into the classroom. Many educators simply do not know where to find tactile or sonification resources that complement their curriculum. For example, an educator may augment a lecture with a physical model of a plant cell that was purchased to accompany the textbook. While this model may represent the major components of a plant cell for someone who can look at it, it may not be useful to a blind student unless it is designed for a tactile user. In this case, the teacher would need to find or produce a modified version of the model in order for the tactile to convey meaningful information. Unless the teacher knows where to look for accessible versions of this model, the student will likely go without.
Another example of the discovery and production challenges in providing multimodal access is sonification. Basic sonification tools are built into most personal computers (e.g., audio recording, processing, and playback), and these common tools work in conjunction with free and low-cost software to allow synthesis, editing, and analysis (Stock & Zancanaro, 2005). According to Bruce Walker at the Sonification Lab, however, “there are very few widely available software tools for creating sonifications and auditory graphs,” and most of the creation tools are not designed for use by a teacher or student.
Final Thoughts
This is the second year that technology capable of multimodal use has been identified as an important technology with the potential to shape the way students learn. With all the different modalities becoming available, the ways an educator can combine them or incorporate them into lesson planning are vast and varied. Still, even with the multitude of possibilities, there are a few key guidelines to keep in mind.
Educators
- Understand your students’ individual needs for accessing information before deciding how you will augment the curriculum. Don’t be afraid to experiment with different modalities and combinations of modalities.
- Before choosing a specific program or device for an educational setting, consult the Universal Design for Learning guidelines. They can assist in planning lessons, units of study, and curricula that reduce barriers to learning.
- Familiarize yourself with emerging technologies and their implications in supporting diverse learners; they may influence what modalities you choose to use.
- Repeat lessons in multiple modes to reinforce the learning.
- If you create a modality that students respond especially well to, share what you’ve done and learned with others. Email your files and a description of how you used them and what you learned to info@diagramcenter.org and we will include them in our Imageshare resource.
Parents
- Discuss with your children how they think they learn best so that you can advocate for their needs.
- Work with your child’s teacher to find the tools that help your child learn.
- Familiarize yourself with the technology so you can help your child at home.
- Make sure all options are considered when developing IEP goals and resources.
Students
- Talk to your teacher about the ways that you learn best. Tell her if it is easier to understand information by hearing about it, seeing it, or interacting with it.
- Ask your teacher if you can complete assignments in non-traditional ways that will help demonstrate your ability.
References:
- Moreno, R., & Mayer, R. (2007). Interactive multimodal learning environments. Educational Psychology Review, 19(3), 309–326. https://link.springer.com/article/10.1007/s10648-007-9047-2
- Reid, G., Snead, R., Pettiway, K., & Simoneaux, B. (2016). Multimodal communication in the university: Surveying faculty across disciplines. Across the Disciplines, 13(1).
- Stock, O., & Zancanaro, M. (Eds.). (2005). Multimodal intelligent information presentation (Vol. 27). Springer Science & Business Media. http://bit.ly/2vQyjzi
- Tzovaras, D. (2008). Multimodal User Interfaces: From Signals to Interaction. Springer Science & Business Media.
- Walker, B. N., & Cothran, J. T. (2003). Sonification Sandbox: A graphical toolkit for auditory graphs. Georgia Institute of Technology.
Published: 2018-08-31