Artificial Intelligence in Inclusive Education


Clayton Lewis

Coleman-Turner Professor of Computer Science

University of Colorado Boulder


What is Artificial Intelligence

The term artificial intelligence (AI) was coined in 1956. It refers to computer systems that can perform tasks normally requiring human intelligence, mimicking cognitive functions typically associated with the natural intelligence displayed by humans.

The field was founded on the claim that human intelligence “can be so precisely described that a machine can be made to simulate it.” This claim raises philosophical questions about the nature of the mind and the ethics of creating artificial beings endowed with human-like intelligence, issues that have been explored in myth, fiction, and philosophy since antiquity. This is a central idea of Pamela McCorduck's Machines Who Think. She writes: “I like to think of artificial intelligence as the scientific apotheosis of a venerable cultural tradition.” (McCorduck 2004, p. 34) “Artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized.” (McCorduck 2004, p. xviii) “Our history is full of attempts - nutty, eerie, comical, earnest, legendary and real - to make artificial intelligences, to reproduce what is the essential us - bypassing the ordinary means. Back and forth between myth and reality, our imaginations supplying what our workshops couldn’t, we have engaged for a long time in this odd form of self-reproduction.” (McCorduck 2004, p. 3) She traces the desire back to its Hellenistic roots and calls it the urge to “forge the Gods.” (McCorduck 2004, pp. 340-400)

As technology that is considered “AI” becomes routine, such as optical character recognition (the electronic conversion of images of printed text into machine-readable text), the scope of AI shifts; this phenomenon is known as the AI effect. While Hollywood movies and science fiction novels often depict AI as human-like robots with general intelligence that take over the world, the current state of AI technology is neither as frightening nor as advanced.

As of 2017, computer functions generally regarded as AI include understanding simple human speech, highly skilled and competitive play in strategic games such as chess, the technology behind advances in self-driving cars, intelligent routing in content delivery networks, and military simulations. These functions rely on versions of search and mathematical optimization, artificial neural networks, and methods based on statistics, probability, and economics. The AI field draws upon computer science, mathematics, psychology, linguistics, philosophy, and many other disciplines, and it is constantly advancing. It is even possible that one day we will have AI technology with general intelligence. This chapter will step away from AI as science fiction and explore why AI is an important technology and how it can be used in the classroom to assist students with disabilities.

 

Why is Artificial Intelligence Important

In this digital age, where our day-to-day lives rely on technology, AI is incredibly important, although for those not in the AI field it is easy to overlook how much impact it has. Take, for example, interactions with the iPhone’s Siri, Amazon’s Alexa, or even Google search. All three use AI technology to understand commands, recommend products, provide search results, and so on. In addition, the more people use tools like Alexa, the more accurate they become at predicting our needs and preferences.

While AI applications in mainstream devices make our lives easier, the impact of AI has much broader implications for positive change. For example, AI techniques from deep learning, image classification, and object recognition can be used to find cancer on MRI scans with accuracy comparable to that of highly trained radiologists, which can reduce costs and expand access to testing. AI technology can improve online security and investment analysis, and it can also support students with disabilities in their learning efforts.
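To give a concrete, if greatly simplified, sense of what image classification involves, the sketch below trains a classifier to separate two groups of synthetic “scans,” where the abnormal group contains a slightly brighter patch. The data, the simulated lesion, and the use of a plain logistic regression model are all invented stand-ins; real medical imaging systems use deep neural networks trained on large collections of labeled scans.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic 32x32 "scans": the abnormal class has a brighter central patch.
normal = rng.normal(size=(200, 32, 32))
abnormal = rng.normal(size=(200, 32, 32))
abnormal[:, 10:16, 10:16] += 1.5   # simulated bright region

X = np.concatenate([normal, abnormal]).reshape(400, -1)
y = np.array([0] * 200 + [1] * 200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

classifier = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", classifier.score(X_test, y_test))

The principle is the same at scale: the system learns, from labeled examples, which visual patterns distinguish one class of images from another.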

 

How is Artificial Intelligence Applied to Education

When we think about AI, it’s likely that scenes from a science fiction thriller come to mind: robots fighting humans, men falling in love with a computer that learns to feel, iPhones outsmarting their users. But what about a classroom? If you think AI and chalkboards don’t go hand-in-hand, this section may just change your mind.

Classrooms can have twenty, thirty, forty, or sometimes even fifty students who are required to pass the same standardized tests regardless of their learning styles. Educators would probably agree that addressing the individual needs of all these students is extremely challenging. The use of AI has the potential to make it easier to meet the needs of individual students, thus helping to make education more inclusive.

Using AI in education is not a new idea. In the past, “intelligent tutoring systems” were the focus of active research and development, using earlier AI techniques (for example, comparing student work to a collection of rules that represented skilled behavior) to diagnose specific learning gaps and deliver individualized instruction. While such systems are still on the market (MATHia learning software, for example), they no longer emphasize AI as an enabling technology.
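As a rough illustration of that earlier rule-based style, the sketch below checks a student's two-digit subtraction answer against the correct procedure and against one well-known “buggy” rule (subtracting the smaller digit from the larger in each column while ignoring borrowing). The problems and diagnostic labels here are invented for this example; real tutoring systems used much larger rule collections.

def diagnose_subtraction(minuend, subtrahend, student_answer):
    """Compare a student's answer to the correct rule and to one known bug."""
    correct = minuend - subtrahend
    if student_answer == correct:
        return "correct"
    # "Smaller-from-larger" bug: in each column, subtract the smaller digit
    # from the larger one, ignoring borrowing.
    buggy = 0
    for place in (1, 10):
        a = (minuend // place) % 10
        b = (subtrahend // place) % 10
        buggy += abs(a - b) * place
    if student_answer == buggy:
        return "gap identified: borrowing (smaller-from-larger bug)"
    return "error not matched by any rule"

print(diagnose_subtraction(52, 37, 15))   # correct
print(diagnose_subtraction(52, 37, 25))   # matches the borrowing bug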

One reason may be a shift in pedagogy since the original AI tutoring systems were developed. Those systems aimed to emulate the success of human tutors in one-on-one interactions (Roll and Wylie, 2016). Current pedagogy emphasizes students working in groups, which raises the question of how AI in education can shift to support different pedagogical models.

Work by VanLehn and colleagues (Viswanathan & VanLehn, 2017) shows one way to approach this question. They use machine-learning techniques to train an artificial observer that can identify useful student discourse in a collaborative learning activity in real time, as well as determine which students understand the material and which do not. As a result, teachers can devote more attention to the students who need it most. Relatedly, Linn and colleagues (Tansomboon et al., 2017) are using Natural Language Processing (NLP) to analyze students’ written work and provide guidance on creating science explanations. They report that their system can provide personalized advice that helps students plan their writing and use evidence effectively.
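The sketch below gives a toy flavor of such an “artificial observer”: a text classifier trained to separate on-task from off-task utterances. The example utterances, the labels, and the simple bag-of-words model are all invented for illustration; the published systems draw on much richer speech and log features.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

utterances = [
    "I think the force is larger because the mass doubled",
    "what page are we supposed to be on",
    "the graph shows the acceleration staying constant",
    "can I borrow a pencil",
    "if we change the angle the distance should increase",
    "is it almost lunch time",
]
labels = ["on-task", "off-task", "on-task", "off-task", "on-task", "off-task"]

observer = make_pipeline(TfidfVectorizer(), MultinomialNB())
observer.fit(utterances, labels)

print(observer.predict(["the slope of the line tells us the speed"]))

With enough labeled examples, a classifier of this general kind can flag in real time which groups are engaged in productive discussion.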

Another example of machine learning that could be useful in the classroom is the work being done on translating sign languages such as ASL to English and vice versa. Currently such systems cannot replace human interpreters, and the technical challenges are considerable, but the promise is there, and the ability of machine learning techniques to capture meaning in complex situations is advancing. Take, for example, a video demonstration released in July 2018 showing an experiment that used a laptop’s camera to capture a person signing a question to Amazon’s Alexa. AI converted the signs to speech, allowing Alexa to respond with answers. In the month following this demonstration, Amazon upgraded its Echo Show device, which has both a screen and a camera, to employ AI technology that enables communication with virtual assistants without having to speak aloud.

Researchers at MIT are using machine learning to develop a system that detects speech-related muscle movements without requiring the speaker to make audible sounds, thus enabling silent speech. This technology can benefit many kinds of learners, including students with speech-related disabilities or motor limitations, students who are soft-spoken or non-verbal, and even students suffering from laryngitis. It can give them a way to make themselves heard when asking questions or communicating with their peers.

For students who are blind or visually impaired, machine learning has huge implications for accessing math content in digital formats. For a non-visual reader, math can be represented in different forms of braille, in two forms of MathML (a markup language for math, analogous to HTML), and in LaTeX. Machine learning techniques, such as those fueling advances in Natural Language Processing, may allow the creation of tools able to translate between these different representations automatically, making it easier for students to access math in their preferred format.
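As a very small illustration of translating between math notations, the sketch below converts one narrow LaTeX pattern, a simple fraction, into Presentation MathML. Production tools cover vastly more notation, and a machine-learning approach would learn such mappings from examples rather than relying on hand-written rules like this one.

import re

def frac_to_mathml(latex):
    """Convert a simple \\frac{a}{b} expression to Presentation MathML."""
    match = re.fullmatch(r"\\frac\{(\w+)\}\{(\w+)\}", latex.strip())
    if not match:
        raise ValueError("only simple \\frac{..}{..} expressions are supported")
    def token(t):
        # Numbers and identifiers use different MathML tags.
        return f"<mn>{t}</mn>" if t.isdigit() else f"<mi>{t}</mi>"
    numerator, denominator = match.groups()
    return f"<math><mfrac>{token(numerator)}{token(denominator)}</mfrac></math>"

print(frac_to_mathml(r"\frac{x}{2}"))
# <math><mfrac><mi>x</mi><mn>2</mn></mfrac></math>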

AI techniques are also being applied to making images, graphs, and charts accessible. Technology from Facebook and Microsoft can now identify objects in images and provide a simple form of alt text, though full image descriptions expressing complex relationships among objects, such as those needed for many textbook illustrations, are not yet possible.

In the case of quantitative charts and graphs, fully automated tools for extracting content from common chart and graph types will likely be available in the next few years. Multiple research groups are reporting results for parts of a complete processing flow, including identifying the type of a chart or graph, extracting descriptive text such as legends (Poco & Heer, 2017), and extracting the quantitative data that the chart or graph contains (Jung et al., 2017). Some human intervention is still necessary, for example to interpret overlapping lines (Jung et al., 2017), but these challenges do not seem beyond the reach of available machine vision approaches.
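The sketch below lays out that processing flow as a skeleton: classify the chart type, extract descriptive text, and extract the underlying data. The stage names mirror the research pipeline, but the stub implementations and returned values are placeholders for illustration, not the published methods.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ChartDescription:
    chart_type: str = "unknown"
    labels: List[str] = field(default_factory=list)
    data: List[Tuple[str, float]] = field(default_factory=list)

def classify_chart_type(image):
    # In practice: a trained image classifier (bar, line, pie, scatter, ...).
    return "bar"

def extract_text(image):
    # In practice: OCR plus layout analysis to find titles, axes, and legends.
    return ["2016", "2017", "2018"]

def extract_data(image, chart_type):
    # In practice: detect marks and map pixel positions to values using the axes.
    return [("2016", 12.0), ("2017", 15.5), ("2018", 18.2)]

def describe_chart(image):
    chart_type = classify_chart_type(image)
    return ChartDescription(chart_type=chart_type,
                            labels=extract_text(image),
                            data=extract_data(image, chart_type))

print(describe_chart(image=None))

The output of such a pipeline, a structured description of the chart's type, labels, and values, is exactly what a screen reader or tactile display needs in order to present the content non-visually.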

In addition to machine learning, there is another type of AI that addresses the need to support different pedagogical models, specifically those that rely on peer collaboration in the classroom. These tools are based on advances in speech processing that allow automatic transcription of speech. Current methods of automatic transcription are more applicable to lecture-like presentations than to wider classroom discourse, but progress is being made on handling multiple speakers. Google, for example, has already demonstrated the ability to transcribe speech with multiple speakers talking simultaneously (Ephrat et al., 2018). The approach uses visual information about speakers’ mouth movements from video to identify who is saying what regardless of background noise and multiple voices. Machine learning analyzes the connections between the information in the video and the sounds of speech so that the speech can be attributed to the person speaking. Google’s AI takes the input audio track (the raw audio) and produces separate speech tracks with little or no background noise for each person speaking.
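A toy version of the attribution step might look like the sketch below: given a transcript and invented per-speaker “mouth activity” scores for each segment, each utterance is assigned to the speaker who appears to be talking. The real system learns the audio-visual association end to end rather than using hand-supplied scores like these.

segments = [
    {"text": "What did you get for question three?"},
    {"text": "I got twelve, but I'm not sure about the units."},
]

# Invented per-speaker visual "mouth activity" for each segment; in the real
# system this association is learned from video and audio together.
mouth_activity = {
    "Student A": [0.9, 0.1],
    "Student B": [0.2, 0.8],
}

for i, segment in enumerate(segments):
    speaker = max(mouth_activity, key=lambda name: mouth_activity[name][i])
    print(f"{speaker}: {segment['text']}")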

Slightly more complicated and possibly futuristic technology incorporating AI is still just conceptual, but it could have a lot of potential for supporting learners of all types. The Magic Lens concept (Bier et al., 1993) is based on a software tool, called a lens, through which content is viewed and transformed in some way to make it more understandable for a particular learner. Sina Bahram, president of Prime Access Consulting and an accessibility expert, has pointed out that the lens concept has the advantage of supporting a teacher’s desire to be spontaneous in choosing learning materials to present. When such materials aren’t already in inclusive form, lens technology could make them accessible on demand. For example, a magic lens could produce a version of a text with headings tagged, as needed by screen reader users, even if the headings are not tagged in the original. More ambitiously, a magic lens tool used to view a passage written at high school level could produce a version that would be understandable to a reader with less background knowledge and a smaller vocabulary.
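A minimal lens of the heading-tagging kind might look like the sketch below, which uses simple heuristics (short lines without ending punctuation, in title case or all capitals) to wrap likely headings in HTML tags that a screen reader can navigate. The heuristics are invented for illustration; a practical lens would need far more robust analysis of the document, and the more ambitious simplification lens would need the NLP advances discussed below.

import html

def heading_lens(text):
    """Wrap lines that look like headings in <h2> tags; other lines in <p>."""
    rendered = []
    for line in text.splitlines():
        stripped = line.strip()
        looks_like_heading = (
            stripped
            and len(stripped.split()) <= 6
            and not stripped.endswith((".", ",", ";", ":", "?"))
            and (stripped.isupper() or stripped.istitle())
        )
        if looks_like_heading:
            rendered.append(f"<h2>{html.escape(stripped)}</h2>")
        elif stripped:
            rendered.append(f"<p>{html.escape(stripped)}</p>")
    return "\n".join(rendered)

print(heading_lens("Photosynthesis\nPlants convert light into chemical energy."))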

Another perspective on this idea is Vanderheiden’s Infobot vision: technology capable of interpreting any content that most people can understand and rendering that content in a form appropriate to a given learner. An AI Infobot would be an invaluable tool for students with disabilities, since it would deliver information in the modality they need. For example, if a student with a hearing impairment is viewing a video on a website, the Infobot would present a transcription of the video and turn on closed captions.

This work is still conceptual, and we are far from having such technology in hand. However, AI is making progress on some aspects of the requirements. Natural Language Processing is developing rapidly in four important ways. First, new machine architectures are able to handle more of the dependencies that tie different parts of a text together, such as pronouns and their antecedents (Vaswani et al., 2017). Second, better corpora of simplified text and ways of using paraphrase data are producing more comprehensible summaries of complicated text (Xu et al., 2015, 2016). Third, some NLP systems can now manage some connections between a text and a body of background knowledge to which the text refers (He et al., 2017). Finally, methods for unsupervised machine learning of language structure, that is, machine learning that does not require human-annotated examples, are emerging and may therefore develop more rapidly (Palangi et al., 2018).
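To make the first of these developments a little more concrete, the sketch below implements the scaled dot-product attention at the heart of the architectures cited above (Vaswani et al., 2017): every position in a sequence computes a weighted mix of every other position, which is one way such models keep track of long-range dependencies like a pronoun and its antecedent. The toy dimensions and random inputs are for illustration only; real models stack many such layers with learned projections and train them on large text collections.

import numpy as np

def attention(queries, keys, values):
    """Scaled dot-product attention over (sequence_length, dimension) arrays."""
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)            # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over positions
    return weights @ values                           # weighted mix of values

rng = np.random.default_rng(0)
tokens = rng.normal(size=(5, 8))                 # 5 token vectors of dimension 8
print(attention(tokens, tokens, tokens).shape)   # (5, 8): one context-aware vector per token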

 

Potential Challenges

Today’s proponents of using AI in the classroom favor methods from data science that identify students at risk and propose targeted instruction. While these systems share key goals with the earlier generation of AI systems, they use different techniques (statistical generalization rather than rule-based representations of course content and students’ knowledge). There are two challenges in using this technology to support inclusive learning.

First, data-driven assessment must reflect the needs of the whole spectrum of students, not just typical learners. Statistical learning techniques have the potential to “average out” differences shown by just a few people in a sample, as Jutta Treviranus has suggested about other uses of statistical learning, in particular hiring decisions (Treviranus, 2017). The AI research community has increasingly recognized this difficulty; see, for example, the work of the Google People and AI Research group. However, the research community focused on data science in education may not yet be addressing the issue: for example, no articles in the Journal of Educational Data Mining have the words disability, inclusion, accessible, or special in their titles.
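A small synthetic example can show how this “averaging out” happens. In the sketch below, a regression model fit to a sample dominated by typical students predicts scores well for that majority but badly for a small group whose relationship between study time and score is different. All of the numbers are invented; the point is only that a model tuned to the average can miss a minority pattern entirely.

import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# 95 "typical" students: more study time predicts a higher score.
hours_typical = rng.uniform(1, 10, 95)
scores_typical = 50 + 4 * hours_typical + rng.normal(0, 3, 95)

# 5 students whose extra time goes into working around inaccessible materials,
# so their scores do not follow the majority pattern.
hours_small = rng.uniform(8, 12, 5)
scores_small = 60 + rng.normal(0, 3, 5)

X = np.concatenate([hours_typical, hours_small]).reshape(-1, 1)
y = np.concatenate([scores_typical, scores_small])
model = LinearRegression().fit(X, y)

typical_error = np.mean(np.abs(model.predict(hours_typical.reshape(-1, 1)) - scores_typical))
small_error = np.mean(np.abs(model.predict(hours_small.reshape(-1, 1)) - scores_small))
print("mean error, typical students:", round(typical_error, 1))
print("mean error, small group:", round(small_error, 1))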

Second, appropriate interventions must be available once a gap is identified. Interventions that work for typical students who have a particular gap may not work for all students. In the Handbook of Research on Educational Communications and Technology (Spector et al., 2014), the needs of learners with disabilities are mentioned only in the context of ethics and policy.

 

Final Thoughts/Learn More

Although artificial intelligence has its supporters and detractors, there is no denying its place in modern teaching. And while there is no replacing the human aspect of teachers, we believe educational technology and AI will only help the overworked educators and underfunded classrooms of tomorrow. This technology is still in its infancy, and there are not yet many resources that teachers, students, and parents can consult to learn how it can increase access to information.

Recommendations

People interested in inclusive learning should monitor research progress in Natural Language Processing and seek opportunities to collaborate with NLP researchers to create techniques for producing more comprehensible content.

People supporting inclusive learning must require that data science-based programs do not overlook learners with disabilities and learning differences and do not apply analytic techniques that filter out differences that reflect the needs of these learners.

People supporting inclusive learning must ask that individualized learning frameworks, whether based on data science or on traditional AI, include learners with disabilities and learning differences in their scope.

 

 

Acknowledgements

Sina Bahram, Dan Cogan-Drew, Josh Lovejoy, Owen Lewis, Vincent Vanhoucke, and Adam Wilton suggested sources for this review.

 


References

  • Bier, E. A., Stone, M. C., Pier, K., Buxton, W., & DeRose, T. D. (1993, September). Toolglass and magic lenses: the see-through interface. In Proceedings of the 20th annual conference on Computer graphics and interactive techniques (pp. 73-80). ACM.
  • Ephrat, A., Mosseri, I., Lang, O., Dekel, T., Wilson, K., Hassidim, A., Freeman, W.T. & Rubinstein, M. (2018). Looking to Listen at the Cocktail Party: A Speaker-Independent Audio-Visual Model for Speech Separation. arXiv preprint arXiv:1804.03619.
  • He, H., Balakrishnan, A., Eric, M., & Liang, P. (2017). Learning symmetric collaborative dialogue agents with dynamic knowledge graph embeddings. arXiv preprint arXiv:1704.07130.
  • Jung, D., Kim, W., Song, H., Hwang, J. I., Lee, B., Kim, B., & Seo, J. (2017, May). ChartSense: Interactive data extraction from chart images. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (pp. 6706-6717). ACM.
  • McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1.
  • Palangi, H., Smolensky, P., He, X., & Deng, L. (2018). Question-answering with grammatically-interpretable representations. In Proceedings of the 32nd AAAI Conference on Artificial Intelligence, New Orleans, LA.
  • Poco, J., & Heer, J. (2017, June). Reverse‐Engineering Visualizations: Recovering Visual Encodings from Chart Images. In Computer Graphics Forum (Vol. 36, No. 3, pp. 353-363).
  • Roll, I., & Wylie, R. (2016). Evolution and revolution in artificial intelligence in education. International Journal of Artificial Intelligence in Education, 26(2), 582-599.
  • Spector, J. M., Merrill, M. D., Elen, J., & Bishop, M. J. (Eds.). (2014). Handbook of research on educational communications and technology. New York, NY: Springer.
  • Tansomboon, C., Gerard, L. F., Vitale, J. M., & Linn, M. C. (2017). Designing Automated Guidance to Promote Productive Revision of Science Explanations. International Journal of Artificial Intelligence in Education, 27(4), 729-757.
  • Treviranus, J. (2017) Transcript
  • Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L. & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 6000-6010).
  • Viswanathan, S. A., & VanLehn, K. (2017). High Accuracy Detection of Collaboration From Log Data and Superficial Speech Features. Philadelphia, PA: International Society of the Learning Sciences.
  • Xu, W., Callison-Burch, C., & Napoles, C. (2015). Problems in current text simplification research: New data can help. Transactions of the Association for Computational Linguistics, 3, 283-297.
  • Xu, W., Napoles, C., Pavlick, E., Chen, Q., & Callison-Burch, C. (2016). Optimizing statistical machine translation for text simplification. Transactions of the Association for Computational Linguistics, 4, 401-415.

Published: 2018-08-31

The DIAGRAM Center is a Benetech initiative supported by the U.S. Department of Education, Office of Special Education Programs (Cooperative Agreement #H327B100001). Opinions expressed herein are those of the authors and do not necessarily represent the position of the U.S. Department of Education.
