Image: A man holds a phone that wirelessly connects to window shades, a computer, a stereo system, lights, a security system, a washing machine, a clock, a camera, and a digital thermostat.

Internet of Things


Clayton Lewis
Co-director for Technology, Coleman Institute for Cognitive Disabilities
University of Colorado Boulder

And

Sidney D’Mello
Fellow at the Institute of Cognitive Science at the University of Colorado
University of Colorado Boulder

Introduction by

Lisa Wadors Verne
DIAGRAM Project Director
Director of Education and R&D
Benetech

Recommendations by

Amaya Webster
DIAGRAM Community Manager
Benetech

What is the Internet of Things?

The Internet of Things (IoT) refers to everything connected to the internet: not only computers, but everyday items like refrigerators and hairbrushes. Simply defined, the IoT is the way connected devices talk to each other over the internet. This communication occurs through sensors embedded in products such as wearables or smartphones, which send and receive data over the internet (Burgess, 2018). One application is sensor networks, in which sensors of different kinds, in different places, collect data and send it to a central site for interpretation.
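The "central site" idea can be sketched in a few lines of code. This is an illustrative, in-memory stand-in, not any particular IoT protocol; the CentralCollector class and the device names are invented for the example.

```python
import time
from collections import defaultdict

class CentralCollector:
    """A central site that receives readings from distributed sensors.
    A minimal in-memory stand-in for a real networked service."""

    def __init__(self):
        # sensor_id -> list of (timestamp, value) readings, oldest first
        self.readings = defaultdict(list)

    def receive(self, sensor_id, value, timestamp=None):
        """Record one reading sent by a sensor."""
        self.readings[sensor_id].append((timestamp or time.time(), value))

    def latest(self, sensor_id):
        """Most recent value from a sensor, or None if nothing received."""
        return self.readings[sensor_id][-1][1] if self.readings[sensor_id] else None

# Simulated household devices reporting over "the network"
hub = CentralCollector()
hub.receive("thermostat-livingroom", 21.5)
hub.receive("washer-basement", "cycle_done")
hub.receive("thermostat-livingroom", 22.0)

print(hub.latest("thermostat-livingroom"))  # most recent temperature: 22.0
```

In a real deployment the `receive` call would arrive over a network protocol such as MQTT or HTTP rather than a direct method call, but the shape of the data flow is the same.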

What potential does this technology have for education, and specifically for students with disabilities?

To explore this question, longtime DIAGRAM contributor Clayton Lewis interviewed Dr. Sidney D’Mello, Professor of Computer Science and Fellow at the Institute of Cognitive Science at the University of Colorado, Boulder, and a leading researcher on sensor technology in a variety of settings.

Interview

DIAGRAM: Are you researching the Internet of Things in the classroom?

SD: We work on part of the IoT vision, the sensing part. We’re interested in what kind of information can be gathered in classrooms and how it can be interpreted in useful ways for teachers and students.

In one of our projects, we’re recording what teachers are saying using a simple, unobtrusive microphone. We can get good enough recordings to support automatic speech recognition, so we can give teachers useful feedback on their pedagogy. For example, we can tell when a class session includes discussing authentic, open-ended questions with students and when it doesn’t. We can also tell how often teachers are using the topic-specific language students need to hear. We’re working towards an app that can give teachers real-time feedback on things like this.
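As a rough illustration of one kind of feedback such a system could compute (a toy keyword heuristic invented for this article, not the project's actual classifier), one could estimate what fraction of a teacher's questions in an ASR transcript look open-ended:

```python
# Question stems that often signal authentic, open-ended questions.
# This word list is invented for illustration only.
OPEN_STEMS = ("why", "how", "what do you think", "what if", "explain")

def open_question_rate(transcript):
    """Fraction of the teacher's questions that look open-ended.

    transcript: list of utterance strings from speech recognition.
    Returns 0.0 if the transcript contains no questions at all.
    """
    questions = [u for u in transcript if u.strip().endswith("?")]
    open_qs = [u for u in questions if u.lower().strip().startswith(OPEN_STEMS)]
    return len(open_qs) / len(questions) if questions else 0.0

lesson = [
    "What is the capital of France?",           # closed, factual
    "Why do you think the author chose that?",  # open-ended
    "How would you test that idea?",            # open-ended
    "Open your books to page 12.",              # not a question
]
print(open_question_rate(lesson))  # 2 of the 3 questions are open-ended
```

Real systems in this area use trained language models rather than keyword lists, but the output, a simple actionable rate, is the kind of feedback a teacher-facing app could display.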

DIAGRAM: What about other sensors?

SD: Other groups have tried to do something similar with video of classroom activities, but it’s hard to make sense of what you get. Also, we’d like to be able to process what students are saying, but we’ve not been able to develop a microphone setup that can give us interpretable data. I don’t see a breakthrough on that for at least five years.

On the other hand, we are making good progress with eye tracking. Lab studies have shown that eye-tracking data that tells what learners are looking at when they are working on an activity on a computer can tell you when a student’s mind is wandering.
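To make the gaze-data idea concrete, here is a toy heuristic, not the models used in these studies: flag possible mind wandering when too large a fraction of gaze samples falls outside the on-screen text region. The function, threshold, and coordinates are all invented for illustration.

```python
def mind_wandering_score(gaze_samples, text_region, threshold=0.5):
    """Fraction of gaze samples falling outside the on-screen text region.

    gaze_samples: list of (x, y) screen coordinates from an eye tracker
    text_region:  (left, top, right, bottom) bounding box of the text
    Returns (score, flagged); flagged=True suggests possible mind wandering.
    """
    left, top, right, bottom = text_region
    off_text = sum(1 for x, y in gaze_samples
                   if not (left <= x <= right and top <= y <= bottom))
    score = off_text / len(gaze_samples) if gaze_samples else 0.0
    return score, score > threshold

# Five gaze samples; the last three are far from the text area
samples = [(100, 200), (120, 210), (500, 900), (510, 880), (530, 910)]
score, flagged = mind_wandering_score(samples, text_region=(50, 150, 400, 600))
print(score, flagged)  # 0.6 True
```

The published work uses richer gaze features (fixation durations, saccade patterns) and machine-learned models, but the input and output are the same in spirit: gaze coordinates in, an attention estimate out.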

DIAGRAM: Is that important?

SD: It can be very important. If a student hasn’t paid attention to one part of a lesson, and they try to push on to new material, they can get lost. Letting them know they need to look at something more carefully can help them stay on track.

DIAGRAM: You mentioned lab studies. Can you get the right kind of data in a classroom setting?

SD: It looks as if you can. Inexpensive eye trackers are now on the market, and we’ve shown that students can set these up and calibrate them on their own, following instructions we provide. We’ve also shown, though so far with adults, rather than children, that we can collect useful data over periods of as much as two weeks without needing to intervene or recalibrate the equipment. So, we think we’re close to being able to do this in regular classrooms.

DIAGRAM: What about cost?

SD: We think it’s going to be inexpensive. Eye tracking has become popular among gamers, so eye tracking might be built into laptops in a few years.

DIAGRAM: Sensors for lots of other things are now available, like heart rate monitors in smart watches and much more. Do you see possibilities there?

SD: We’re working with adult information workers, as well as with students, and we’re using a wider range of sensors, including smart watches and location beacons.

DIAGRAM: What are location beacons?

SD: They are devices that software on someone’s phone can detect when nearby. Participants place these in key locations in the workplace, or on something portable, like a laptop. Recording starts when the phone detects a beacon; this lets us map people’s movements during the day and other aspects of their activities. In addition, a wrist-worn wearable device can give us measures of physical activity, sleep, and stress.

DIAGRAM: Would these kinds of data be relevant in school settings?

SD: Very possibly. Information about sleep could be quite important, for example. If students are having trouble focusing on what they are reading, it may be that they are tired, not uninterested. Similarly, we may be able to use the data to help students get a healthy amount of physical activity.

DIAGRAM: Are there concerns about these kinds of sensors being too invasive?

SD: Absolutely! Some previous investigations have received a lot of pushback, with teachers feeling that their privacy was being violated and technology was being used to evaluate them. And parents are justifiably concerned about what data are being gathered on their children and who has access to it.

DIAGRAM: Are there ways to deal with those issues?

SD: We think so. One approach is to arrange settings so that data are given to the person it’s about and not to others. So, teachers could get feedback on their classroom conversations, but the feedback wouldn’t be seen by anyone else. Similarly, students could get prompts that they need to go back over something that they’ve just read, without that information being reported to anyone else. Another approach is to let people opt out of any data gathering they aren’t comfortable with.

Incidentally, in our classroom work we’ve found ways to process video so that footage from which someone could be recognized isn’t stored once the facial-expression information has been extracted. That limits the amount of identifiable data that has to be protected.
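The privacy-preserving pattern described here, extract the derived features and then drop the raw footage, can be sketched as follows. The feature extractor and feature names are placeholders standing in for a real facial-expression model:

```python
def process_frame(frame_pixels, extract_features):
    """Extract non-identifying expression features from one video frame,
    then release the raw frame so only the derived summary is retained.

    frame_pixels:     raw image data (identifiable; must not be stored)
    extract_features: function mapping a frame to summary numbers,
                      e.g. {"smile": 0.8, "brow_raise": 0.1}
    """
    features = extract_features(frame_pixels)
    # Drop the local reference to the raw frame; a real pipeline would
    # also make sure the frame is never written to disk in the first place.
    del frame_pixels
    return features

# Hypothetical extractor standing in for a real facial-expression model
fake_extractor = lambda frame: {"smile": 0.8, "brow_raise": 0.1}
stored = process_frame([[0, 1], [1, 0]], fake_extractor)
print(stored)  # only {'smile': 0.8, 'brow_raise': 0.1} is kept
```

The design choice is that identifiable data exists only transiently inside the processing step; everything persisted downstream is an aggregate that cannot be reversed into a face.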

DIAGRAM: Does any of your work have particular value for learners with disabilities?

SD: Yes, in two ways. First, our work on assessing how well students are attending to what they are reading is likely to be more useful for students for whom the material is more challenging. And because there is often a kind of multiplier effect, where falling a little behind now gets you farther behind later, a little boost can make a big difference if it helps you keep up with the material.

The second point is something I didn’t mention earlier, when we were talking about analyzing what teachers are saying in the classroom. We are looking at the language teachers are using when referring to their students. We think we can help teachers talk in a more inclusive way about students from historically underrepresented groups, or students with learning differences.

I’ll also mention some new work we’re excited about. As you know, a lot of assessments are being done using tablets. This technology makes it possible to gather data not just on the answers students provide, but also on the process they go through in completing the assessment. We’ve been invited to consult on how this process data can be analyzed and interpreted, specifically for students with disabilities. The focus is on math assessments, and we think there are some real opportunities.

DIAGRAM: Do you have any further thoughts on what’s coming regarding the role of technology in education?

SD: Continuing that last point about assessments, the trend is to move assessment into the learning process, so that we’re able to measure how well students are progressing as they learn, rather than having a separate testing process. The better picture sensors can give us of what’s happening during learning, the better we can make the assessment process work.

There’s more than sensor technology involved here. If we can track the content of students’ work, we can ensure that they get enough practice to solidify their basic knowledge and skills without having to give them concentrated drill on these basics. Avoiding concentrated drill can help maintain engagement.

Also, we can provide more individualized instruction, the more we know about what a student is doing. Kids with differences are the kids for whom standardized, one-size-fits-all instruction works least well. So, these kids stand to gain the most from progress on individualized instruction.

In the longer term, we’re looking forward to being able to provide students with their own learning assistants. The work we and others are doing on understanding what’s being said in a classroom will be important in a couple of ways. First, not everything kids need to learn to do can be done with paper and pencil or a tablet; talking about something is a crucial skill. Second, the most natural way to interact with a learning assistant will be a conversation, in many cases. As I said earlier, we have a lot of challenges in being able to process student conversations, even at the level of just recording them, but we think we can get useful results along the way.

DIAGRAM: Thank you for sharing your thoughts with us!

Note: After the conversation, Prof. D’Mello provided some references so readers can further explore the ideas he discussed.

Recommendations

For Teachers

Sensor networks and eye-tracking technology are on the horizon. With increased scrutiny on inclusivity, the ability to receive real-time, actionable feedback on pedagogy and language, specifically unintentionally marginalizing language, has the potential to become a key tool in building a fully inclusive classroom. Also promising is the fact that these technologies will become increasingly affordable as they become more prevalent.

For Parents

While the idea of tracking students can certainly be alarming, we are already seeing privacy protection strategies being implemented. As the technology advances, not only will it be possible to ensure that collected data is shared only with the person being tracked, but also to opt out of any data gathering a user isn’t comfortable with. This, paired with benefits such as real-time performance feedback, sleep reports, and assessments, makes it a development worth taking note of.

For Students

Falling even a bit behind in class today can make it nearly impossible to catch up later. Internet connected devices, such as sensor networks, can make a big difference in keeping up with the class. Maybe you aren’t having as much trouble focusing as you—or a teacher or parent—might think. Maybe you’re not sleeping well and are too tired to pay attention. Maybe you read something too fast and need to go over it one more time, but slower. Wearable devices and eye tracker technology can collect and analyze this information and provide feedback on what is going on behind the scenes, making these developments worth keeping on your radar.

References

  • Burgess, M. (2018). What is the Internet of Things? WIRED explains. WIRED UK. Retrieved July 6, 2020, from https://www.wired.co.uk/article/internet-of-things-what-is-explained-iot
  • Jensen, E., Dale, M., Donnelly, P., Stone, C., Kelly, S., Godley, A., & D’Mello, S. K. (2020). Toward Automated Feedback on Teacher Discourse to Enhance Teacher Learning. Proceedings of the ACM CHI Conference on Human Factors in Computing Systems (CHI 2020).
  • Mills, C., Gregg, J., Bixler, R., & D’Mello, S. K. (in press). Eye-Mind Reader: An Intelligent Reading Interface that Promotes Long-term Comprehension by Detecting and Responding to Mind Wandering. Human-Computer Interaction.
  • Hutt, S., Mills, C., Bosch, N., Krasich, K., Brockmole, J. R., & D’Mello, S. K. (2017). Out of the Fr-“Eye”-ing Pan: Towards Gaze-Based Models of Attention during Learning with Technology in the Classroom. In M. Bielikova, E. Herder, F. Cena, & M. Desmarais (Eds.), Proceedings of the 25th ACM International Conference on User Modeling, Adaptation, and Personalization (UMAP 2017) (pp. 94-103). New York: ACM.
  • Stewart, A., Vrzakova, H., Sun, C., Yonehiro, J., Stone, C., Duran, N., Shute, V., & D’Mello, S. K. (2019). I Say, You Say, We Say: Using Spoken Language to Model Socio-Cognitive Processes during Computer-Supported Collaborative Problem Solving. Proceedings of the ACM on Human-Computer Interaction, 3 (CSCW 2019), 1-19.
  • Bosch, N., D’Mello, S. K., Ocumpaugh, J., Baker, R., & Shute, V. (2016). Using Video to Automatically Detect Learner Affect in Computer-Enabled Classrooms. ACM Transactions on Interactive Intelligent Systems (TiiS), 6(2), 17.1-17.31.

Published: 2020-08-31

Ideas that work. The DIAGRAM Center is a Benetech initiative supported by the U.S. Department of Education, Office of Special Education Programs (Cooperative Agreement #H327B100001). Opinions expressed herein are those of the authors and do not necessarily represent the position of the U.S. Department of Education.
