The NoRILLA system has been developed based on years of research.


Intelligent Science Stations Bring AI Tutoring into the Physical World

You can find our book chapter here. This chapter was published in the new Artificial Intelligence in STEM Education book.

Intelligent Tutoring Systems have been shown to be effective for education. However, so far they have been limited to flat-screen computer interactions. We introduce a new genre of Intelligent Science Stations that bring intelligent tutoring into the physical world. Intelligent Science Stations are mixed-reality systems that bridge the physical and virtual worlds to improve children's inquiry-based STEM learning. Automated reactive guidance, made possible by a specialized AI computer vision algorithm, provides personalized interactive feedback to children as they experiment and make discoveries in their physical environment, fostering their curiosity and 21st-century skills like critical thinking, collaboration, and persistence.


Background

Museum exhibits encourage exploration with physical materials, typically with minimal signage or guidance. Ideally, children would get interactive support as they explore, but it is not always feasible to have knowledgeable staff regularly present. Technology-based interactive support can provide guidance to help learners achieve scientific understanding of how and why things work, as well as engineering skills for designing and constructing useful artifacts and for solving important problems. We have developed an innovative AI-based technology, Intelligent Science Exhibits, that provides interactive guidance to visitors at an inquiry-based science exhibit.

Findings

We found evidence that the Intelligent Science Exhibit produces substantially better learning for both scientific and engineering outcomes, equivalent levels of self-reported enjoyment, and higher levels of engagement as measured by the length of time voluntarily spent at the exhibit.

Contributions

These findings show potential for transforming hands-on museum exhibits with intelligent science exhibits and more generally indicate how providing children with feedback on their predictions and scientific explanations enhances their learning and engagement.



Active learning: “Hands-on” meets “minds-on”

You can find the full paper here. This paper was published in the Science journal.

Improving science, technology, engineering, and mathematics (STEM) teaching is crucial for improving STEM learning. Yet improvements to teacher training progress slowly. Even the best teachers are challenged to hold the attention of new cohorts of “digital natives” and feel the need to find innovative ways to engage them. Less focus on scientific facts and more experience with scientific inquiry better engage the natural curiosity of children. But many elementary teachers lack the background or curriculum materials to teach science from an inquiry perspective.

Addressing these challenges, we have been developing mixed-reality Intelligent Science Stations to engage children in active, inquiry-based experimentation and learning experiences in the physical world while providing interactive guidance that supports teachers as well as students. Children perform and interpret real-world experiments in a given physical apparatus (e.g., an earthquake table, ramps, a balance scale). Artificial intelligence (AI) computer vision algorithms reconstruct the physical scene and provide input to pedagogical algorithms that track the children’s progress and provide adaptive, automated feedback to guide them in scientific inquiry, producing a powerful form of active-learning support.
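The loop described above — a vision module reconstructs the physical scene, and a pedagogical module turns that reconstructed state into adaptive feedback — can be illustrated with a minimal sketch. Everything below (the `SceneState` fields, the thresholds, the feedback strings) is hypothetical and greatly simplified; it is not NoRILLA's actual implementation, only the general shape of a sense-then-tutor pipeline.

```python
# Illustrative sketch only: names and thresholds are hypothetical, not the
# system's real implementation. A vision module would produce SceneState from
# camera input; here we construct it by hand.

from dataclasses import dataclass

@dataclass
class SceneState:
    """Simplified output of a vision module watching the apparatus."""
    tower_height: float   # e.g., blocks stacked on an earthquake table
    base_width: float
    still_standing: bool  # did the structure survive the last shake?

def pedagogical_feedback(state: SceneState) -> str:
    """Toy rule-based tutor: map the reconstructed scene to guidance."""
    if not state.still_standing:
        return "Your tower fell! Why do you think that happened?"
    if state.base_width < state.tower_height / 3:
        return "Nice! What might happen if the base were wider?"
    return "It survived the shake. Can you predict which tower falls next?"

# One frame of reconstructed state driving a feedback prompt.
frame = SceneState(tower_height=30.0, base_width=8.0, still_standing=True)
print(pedagogical_feedback(frame))
```

A real system would replace the hand-built `SceneState` with continuous computer-vision tracking and would track progress across many interactions, but the division of labor — perception feeding pedagogy — is the same.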



Along with the substantial consensus around the power of active learning comes some imprecision about its essential ingredients. New educational technologies offer vehicles for systematically exploring the benefits of alternative techniques for supporting active learning. We introduce a new genre of Intelligent Science Station technology that uses Artificial Intelligence (AI) to support children in learning science by doing science in the real world. We use this system in a randomized controlled trial that investigates whether active learning is best when it is implemented as guided deliberate practice, as constructive “hands-on” activity, or as a combination of both. Automated, reactive guidance is made possible by a specialized AI computer vision algorithm we developed to track what children are doing in the physical environment as they conduct experiments and make discoveries with physical objects.

The results support deliberate practice and indicate that guided discovery based on effective learning mechanisms such as self-explanation, contrasting cases, and personalized interactive feedback produces more robust learning than exploratory construction alone. Children learning through guided discovery achieve greater understanding of the scientific principles than children learning through hands-on construction alone (four times more pre-to-post test improvement). Importantly, a combined guided-discovery and hands-on-construction condition leads to better learning of the very construction skills that are the sole focus of the hands-on constructive learning condition (more than ten times more pre-to-post improvement). These results suggest ways to achieve powerful active learning of science and engineering that go beyond the widespread temptation to equate hands-on activity with effective learning.


Learning from Mixed-Reality Games: Is Shaking a Tablet as Effective as Physical Observation?

You can read the full paper here. This research was published at CHI '15, a top conference in Human-Computer Interaction.

The possibility of leveraging technology to support children's learning in the real world is both appealing and technically challenging. We have been exploring factors in tangible games that may contribute to both learning and enjoyment with an eye toward technological feasibility and scalability. Previous research found that young children learned early physics principles better when interactively predicting and observing experimental comparisons on a physical earthquake table than when seeing a video of the same. Immersing children in the real world with computer vision-based feedback appears to evoke embodied cognition that enhances learning.

In the current experiment, we replicated this intriguing effect of merely observing the real world rather than a flat screen. Further, we explored whether a simple and scalable addition of physical control (such as shaking a tablet) would increase learning and enjoyment. Our 2x2 experiment found no evidence that adding simple forms of hands-on control enhances learning, while demonstrating a large impact of physical observation. A general implication for educational game design is that affording physical observation in the real world accompanied by interactive feedback may matter more than affording simple hands-on control on a tablet.
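A 2x2 design like the one above crosses two factors (observation: physical vs. screen; control: hands-on vs. none) and compares each factor's marginal effect on learning gains. The sketch below illustrates only that comparison logic; the gain scores, cell labels, and function names are made up for illustration and are not the study's data.

```python
# Hypothetical sketch of summarizing a 2x2 experiment. The numbers are
# invented; only the marginal-effect computation reflects the analysis
# structure described above.

from statistics import mean

# gains[(observation, control)] = pre-to-post test gains for that cell
gains = {
    ("physical", "control"):    [4, 5, 3, 4],
    ("physical", "no_control"): [5, 4, 4, 5],
    ("screen",   "control"):    [1, 2, 1, 2],
    ("screen",   "no_control"): [2, 1, 2, 1],
}

def marginal_mean(factor_index: int, level: str) -> float:
    """Average gain over all cells at one level of a factor."""
    return mean(g for key, cell in gains.items() if key[factor_index] == level
                for g in cell)

# Main effect of each factor: difference between its two marginal means.
observation_effect = marginal_mean(0, "physical") - marginal_mean(0, "screen")
control_effect = marginal_mean(1, "control") - marginal_mean(1, "no_control")
print(f"observation effect: {observation_effect:+.2f}")  # large in this toy data
print(f"control effect:     {control_effect:+.2f}")      # near zero here
```

With these invented numbers, the observation factor shows a large positive marginal effect while the control factor shows almost none, mirroring the pattern of results reported above (a real analysis would of course add significance tests).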



Can experimenting with three-dimensional (3D) physical objects in mixed-reality environments produce better learning and enjoyment than flat-screen two-dimensional (2D) interaction? We explored this question with EarthShake: a mixed-reality game bridging physical and virtual worlds via depth-camera sensing, designed to help children learn basic physics principles. In this paper, we report on a controlled experiment with 67 children, 4–8 years old, that examines the effects of observing physical phenomena and of collaboration (pairs vs. solo). A follow-up experiment with 92 children tests whether adding simple physical control, such as shaking a tablet, improves learning and enjoyment. Our results indicate that observing physical phenomena in the context of a mixed-reality game leads to significantly more learning and enjoyment compared to screen-only versions. However, there were no significant effects of adding simple physical control or of having students play in pairs vs. alone. These results and our gesture analysis provide evidence that children's science learning can be enhanced through experiencing physical phenomena in a mixed-reality environment.