Project: Educational Theory Practice Analysis

Project Overview

Project Description

Project Requirements

The peer-reviewed project will include five major sections, with relevant sub-sections to organize your work using the CGScholar structure tool.

BUT! Please don’t use these boilerplate headings. Make them specific to your chosen topic, for instance: “Introduction: Addressing the Challenge of Learner Differences”; “The Theory of Differentiated Instruction”; “Lessons from the Research: Differentiated Instruction in Practice”; “Analyzing the Future of Differentiated Instruction in the Era of Artificial Intelligence;” “Conclusions: Challenges and Prospects for Differentiated Instruction.”

Include a publishable title, an Abstract, Keywords, and Work Icon (About this Work => Info => Title/Work Icon/Abstract/Keywords).

Overall Project Word Length – at least 3,500 words (the concentration of words should be on theory/concepts and educational practice)

Part 1: Introduction/Background

Introduce your topic. Why is this topic important? What are the main dimensions of the topic? Where in the research literature and other sources do you need to go to address this topic?

Part 2: Educational Theory/Concepts

What is the educational theory that addresses your topic? Who are the main writers or advocates? Who are their critics, and what do they say?

Your work must be in the form of an exegesis of the relevant scholarly literature that addresses and cites at least 6 scholarly sources (peer-reviewed journal articles or scholarly books).

Media: Include at least 7 media elements, such as images, diagrams, infographics, tables, embedded videos, (either uploaded into CGScholar, or embedded from other sites), web links, PDFs, datasets, or other digital media. Be sure these are well integrated into your work. Explain or discuss each media item in the text of your work. If a video is more than a few minutes long, you should refer to specific points with time codes or the particular aspects of the media object that you want your readers to focus on. Caption each item sourced from the web with a link. You don’t need to include media in the references list – this should be mainly for formal publications such as peer reviewed journal articles and scholarly monographs.

Part 3: Educational Practice Exegesis

You will present an educational practice example, or an ensemble of practices, as applied in clearly specified learning contexts. This could be a reflection on a practice in which you have been involved, one you have read about in the scholarly literature, or a new or unfamiliar practice which you would like to explore. While not as detailed as in the Educational Theory section of your work, this section should be supported by scholarly sources. There is no minimum number of scholarly sources; six more scholarly sources in addition to those for Section 2 is a reasonable target.

This section should include the following elements:

Articulate the purpose of the practice. What problem were they trying to solve, if any? What were the implementers or researchers hoping to achieve and/or learn from implementing this practice?

Provide detailed context of the educational practice applications – what, who, when, where, etc.

Describe the findings or outcomes of the implementation. What occurred? What were the impacts? What were the conclusions?

Part 4: Analysis/Discussion

Connect the practice to the theory. How does the practice that you have analyzed in this section of your work connect with the theory that you analyzed on the previous section? Does the practice fulfill the promise of the theory? What are its limitations? What are its unrealized potentials? What is your overall interpretation of your selected topic? What do the critics say about the concept and its theory, and what are the possible rebuttals of their arguments? Are its ideals and purposes hard, easy, too easy, or too hard to realize? What does the research say? What would you recommend as a way forward? What needs more thinking in theory and research of practice?

Part 5: References (as a part of and subset of the main References Section at the end of the full work)

Include citations for all media and other curated content throughout the work (below each image and video)

Include a references section of all sources and media used throughout the work, differentiated between your Learning Module-specific content and your literature review sources.

Include a References “element” or section using APA 7th edition with at least 10 scholarly sources and media sources that you have used and referred to in the text.

Be sure to follow APA guidelines, including sentence-case article titles, title-case journal titles (first letter of each major word capitalized), and italicized journal titles and volume numbers.

Developments for Improving Teacher Understanding of Learner Data in a Blended SEL Classroom

Introduction

Healthy social and emotional learning (SEL) skills have been correlated with various personal and societal benefits, including higher school achievement and educational attainment, greater workforce participation and success, increased civic participation, reduced crime, greater well-being, and other positive life outcomes (Durlak et al., 2011; Greenberg, 2023; Kautz et al., 2014; OECD, 2015). Currently, 27 states have prioritized SEL in school curricula (CASEL, n.d.) and have aligned state standards and policies to encourage SEL teaching.

In response to this mandate, my research team proposed digitally enhancing a validated SEL curriculum (Social Skills Improvement System Classwide Intervention Program [SSIS-CIP]; DiPerna et al., 2015, 2016; Elliott & Gresham, 2007) to increase the quality and quantity of the skill practice opportunities already built into the curriculum, which has been shown to increase the likelihood of skill mastery (Bälter et al., 2018), as well as to provide more structured reflection and skill generalization opportunities. During each lesson, students participate in (1) a coaching event, where the SEL skill (e.g., doing nice things for others) is introduced or reviewed; (2) a modeling event, where the whole class demonstrates positive examples of the skill; (3) a discussion event, to reflect on the class model and prepare for small group role plays; (4) a small group role-play event, in which students gather in groups, act out two practice scenarios, and rotate roles; (5) a personal assessment event, in which the student reflects on their skill use during the role plays; and (6) a generalized event, in which students discuss the different places and people with whom they could use this skill and are presented with a situational judgment question about using that skill in another environment (e.g., “what would you do if…”). The original SEL curriculum, designed for presentation-style instruction, was adapted into a digital lesson, where students follow along on their Chromebooks (see Figure 1). With both a face-to-face and a digital component, this program is considered a “blended” learning environment, which has been demonstrated to be an effective and impactful educational model for the future (Watson, 2008; Eryilmaz, 2015; Li & Wang, 2022; Halverson et al., 2023).

Figure 1

Format Changes from Original to Digitally Enhanced Lessons

With the digital enhancements shown in Figure 1, students continue to engage in all six phases of the lesson and in role plays in class with their peers; in addition, students can now chat with the other students in their role-play group (2-4 students) during the lesson using a chat box (on the right side of the screen) when prompted by the teacher. This enhancement is intended to bring more students into the class discussion, such as quiet or shy students who may not feel comfortable voicing their opinions or raising their hands during a large classroom discussion. Also, asking students to enter into a small group discussion (and having the teacher follow up with, “What did you discuss in your groups?”) gives every student the opportunity to engage with the question, share their answer, and respond to those of their peers. Further, on-screen questions from the original lesson are now presented to students with either a text-entry response box or a multiple-choice question.

How to support teachers in making sense of this system-collected data is the focus of this paper. One possibility is to summarize the classroom response data, small group chats, and individual student data over the course of a unit, as well as to provide system alerts and messages when something in the collected data should be addressed (either positively or negatively identified behaviors). In this paper, we describe the process of designing an extension to the ETS Platform for Collaborative Assessment and Learning (EPCAL; Hao et al., 2017) to support teachers as researchers (generating data and system feedback) in using the SSIS-CIP curriculum for a middle-school SEL course.

In the next section, we look at the benefits of digitally enhancing a SEL curriculum and how to do so effectively. We then lay out the development cycle, describing two studies and a co-design process meant to deeply incorporate teacher input about how teachers use the system and the most effective ways to display learner information. In the final section, we critically review teachers’ opinions on system feedback content and present the final display designs.

Why is going digital an enhancement?

Teaching and evaluating SEL skill development is challenging because the skills are rooted in interpersonal engagement and collaboration. Knowledge of social and emotional skills may be acquired without formal education, but skill mastery may be more art than science, requiring practice in different settings and situations, with endless permutations. Demonstrations of a student’s ability with the various components of SEL, such as those in the CASEL framework (Borowski, 2019; Jagers et al., 2019) shown in Figure 2 (self-awareness, self-management, social awareness, relationship skills, and responsible decision making), frequently happen outside of the classroom and are thereby unobservable by the teachers responsible for ensuring student learning. Learning growth in those areas is typically assessed via self-report and observational inventories (e.g., Anthony et al., 2020) taken once or twice a year, which makes it difficult to monitor skill progress, gain insight into how individual students are developing, or determine how best to improve classroom instruction. However, learning analytics, the ongoing collection and analysis of data from learning experiences (e.g., answers to questions in the lesson and student conversations about the lesson’s SEL skill), can offer deeper insight into SEL skill acquisition throughout the learning process.

Figure 2

CASEL framework

Taken from: https://casel.org/fundamentals-of-sel/what-is-the-casel-framework/

Learning analytics is a student-centered approach in which educators (and other stakeholders) use data to inform decisions about how best to optimize learning for individuals and groups of students. Tsai and colleagues provide a helpful overview in their video, Learning Analytics in a Nutshell (Video 1). There are four types of learning analytics: (a) descriptive (what happened?); (b) diagnostic (why did it happen?); (c) predictive (what will happen?); and (d) prescriptive (how can we make it happen?); see Hernández-de-Menéndez et al. (2022) for an overview. In this study, we focus solely on descriptive and diagnostic analytics to provide the teacher with indicators of student action, with which they can make their own predictions about what will happen and prescribe or take action given their prediction. However, while digital delivery is promising, simply taking a paper-based curriculum and putting the content online is a common shortcoming among digital learning environment creators switching from in-person to online or hybrid learning. An online course that retains a traditional lecture-style format is not optimized to collect useful data about student learning.

Video 1

Learning Analytics in a Nutshell

Taken from Society for Learning Analytics Research (2019): https://www.youtube.com/watch?v=XscUZ8dIa-8

For best practices in digital learning, we turn to experts in e-learning, Cope and Kalantzis (2017), who offer several suggestions that the EPCAL-enhanced SSIS-CIP can leverage to improve learner outcomes. Primarily, the authors emphasize the importance of incorporating structures that support students and teachers in reflecting on learning. For students, this is a self-awareness of their own learning, while teachers stand to understand not only what students are learning but also why they are learning. To this end, Cope and Kalantzis recommend seven design and assessment principles when creating an e-learning environment, four of which are age- and context-appropriate for our project: (1) support active knowledge-making by allowing students to discover knowledge on their own; (2) support multi-modal learning, such as the incorporation of text, video, data, and sound; (3) enable peer-to-peer learning structures for students to practice and build knowledge together, socially; and (4) provide recursive feedback as formative assessment, frequently, throughout the process of learning. Overall, these authors provide clearly framed development standards and an optimistic outlook that such modifications will improve learner outcomes while encouraging teachers to actively reflect on student learning. While the first three principles are already built into the lessons (e.g., free-form role-play opportunities and independent skill generalization and discovery, incorporation of various data and media into lesson content, and an open chat window for facilitating small group collaboration), this paper discusses the development of recursive feedback and formative assessment in the EPCAL platform.

Rather than relying on one summative assessment, data captured in each lesson of the EPCAL-enhanced SSIS-CIP environment can be used to support students in their learning, as well as teachers in learning about how students are learning. For example, automatically generated system messages about the student’s communications and responses from the lesson can provide the student with guidance on how well they are communicating with peers and suggestions for how to communicate more effectively. Teachers can also review this information in the form of data overviews, flags, summaries, and alerts so that they can follow up with the student, gain insight into the small-group conversations happening throughout the SEL lesson, and improve instruction. The impact of formative feedback on student learning is well documented and powerful, with John Hattie’s (2008) famous meta-analysis awarding feedback an effect size of d = .70 across educational settings. In a SEL program specifically, performance feedback has been shown to increase student SEL knowledge (Gueldner & Merrell, 2011). Ideally, system feedback will provide teachers with specific, actionable information.

While, of course, teachers can observe what is happening in their classroom, take action, and make judgments (such that additional system feedback may not be necessary), it is important to keep in mind the practical challenges related to SEL instruction. For example, in the intermediate school district (ISD) with whom we have partnered (and which is very supportive of the SEL state mandate), SEL teachers are not typically content experts, nor are they trained to understand what SEL skill development looks like, and SEL lessons may not be seen as a priority by the teacher or that school’s leadership. Here, SEL is taught in fifth and sixth grade, generally, as an “extra” lesson, meaning that it is not a regular class but is rather tucked into the school day and taught whenever the teacher finds the time to pull out the lesson. This teacher will primarily teach another main subject, like language arts, math, or science, and will, at best, get through two or three lessons in a week. In this type of setting, in which SEL lessons are short (about 30 minutes) and intermittent, using data displays and learning summaries to offer specific and targeted feedback to teachers (and to students) about SEL skill development and progression, as well as other characteristics of student behavior in small groups, can be highly valuable for supporting student learning.

Several studies have outlined expectations for creating data displays for teacher users, as well as cataloged teachers’ reactions to learning analytic environments. Brown (2020) found that faculty in five different large-lecture physics courses expressed frustration with data displays during a “peer instruction” learning approach. They felt that the assembled data did not paint a clear picture of actionable information and seemed to undermine their autonomy in the classroom (impacting their existing pedagogical strategies, decision-making, and lesson planning). In a similar vein, Tsai and colleagues (2021) found three major areas of educator distrust surrounding learning analytics at a large UK university: (a) the analytics, being based on numbers, may not account for everything that a human teacher would consider; (b) teachers were concerned that the analytics might be viewed as more valuable than their own insight and would diminish their power; and (c) there were also concerns over possible improper approaches to system design and implementation, unnecessarily adding to their workload.

Another issue with displaying learner data is that it may be approached from the perspective of trained researchers, which the average teacher is not. Chiappe and Rodriguez (2017) found that teachers do not typically have the same digital and statistical literacy as the researchers who design learning analytic dashboards, so they may not be able to accurately interpret a statistical overview and take action. For example, it is not intuitive for teachers to read a data dashboard displaying easily captured process data, like time spent on an activity, and then know how to interpret that information as it applies to student learning. Further, it seems likely that many teachers are unprepared to teach with technology (Klein, 2023), potentially jeopardizing the regular use of a digital learning platform in their classrooms. This makes it all the more important to have a user-friendly and intuitive system design.

To address these educator concerns, designers generally advocate for including teachers (and learners) in the development of learning analytic tools (Dollinger et al., 2019; Dollinger & Lodge, 2018; Sanders & Stappers, 2008) so that the resulting product will be useful to and well received by these users. Again, the limited technical knowledge of teachers and other users is seen as a challenge to this cooperative effort, as there are only so many basic recommendations and non-technical contributions they can make. However, even in highly technical spaces, teachers can still offer key advice on captured data and its display (as recommended in Macfadyen & Dawson, 2012). These authors make two important recommendations: (a) be mindful of user limitations with data and support processes and developments that ease interpretation; and (b) include teachers in the development of data displays and alerts about learning activities for the best outcomes.

With this in mind, the next section describes the process of working with teachers in the development of recursive system feedback in the EPCAL-enhanced SSIS-CIP.

Development cycle in two studies and co-design

When developing system feedback for teachers about student learning, who better to ask what is desired than the teachers themselves? Teacher insight on the type of feedback they wanted and how they wanted it displayed was captured in three distinct efforts throughout 2024: a cognitive lab (or focus group), a usability study, and group design meetings where teachers were brought on as co-designers on the project. A project timeline of these events is shown in Figure 3 alongside the overall system design and development timeline. Note that we also hosted a cognitive lab for students around the same time as one for teachers, and we also gained valuable insight from students during the three-week usability study. However, because our focus is on feedback for teachers and how teachers interpret and make use of data, student feedback is not explicated in this paper.

Figure 3

Timeline for Gathering Teacher Input Alongside the System Design (and Redesign)

  • Study 1: Cognitive lab

Three teachers were recruited for the teacher cognitive lab from the ETS Educator Panel, a mailing list of thousands of educators interested in supporting research. The project team filtered participant candidates by grade and subject taught, and selected candidates based on the number of years teaching, familiarity with learning management systems (LMS), and availability to join an online, 2-hour cognitive lab. The selected participants were two female teachers and one male teacher from the east coast of the U.S. (NC, TN, and MD). Their teaching experience ranged from 4 to 17 years, and while they all teach at least one section of SEL, they primarily teach another subject. One teacher had little to no LMS experience, one had some experience with one LMS, and the third had extensive experience with various LMS platforms.

For this cognitive lab, an interactive wireframe of the SEL Teacher Portal, designed for initiating lessons and reviewing student work, was built and filled with dummy data for teachers to review. Researchers wanted to know the degree to which teachers would be able and willing to make judgments about student performance based on data alone. This process was intended to investigate the types of data displays teachers were most comfortable with, the types of data teachers wanted, and to set a benchmark for how much data teachers would want available. An example of this displayed data is shown in Figure 4. About one hour into the lab (after a review of the larger program and curriculum content), teachers entered breakout rooms for individual review of the data display. Teachers used the “Think Aloud” method (van Someren, Barnard, & Sandberg, 1994), in which they navigated the system themselves and vocalized what came to mind while moving through different pages and reviewing the information in the Teacher Portal. Following these breakout sessions, teachers came back together with researchers to discuss their experiences.

Figure 4

Example Dummy Data Shown to Teachers during the Cognitive Lab

Data interpretation was immediately called out as an area for improvement. While one teacher (a mathematics teacher with the most experience with different LMS platforms) liked the type and volume of data (“there’s a lot of good information”), all three teachers expressed concerns over the average teacher being able to interpret the different types of data presented in the dummy display. One commented, “the graphs were pretty, but I don't have time to figure out what they all mean.” The third teacher made a practical point from their perspective as a special education teacher with many students on their caseload: “I don't wanna get stuck [interpreting] this for every kid.”

When we asked teachers what they would like to be able to find out from the data, teachers mentioned wanting to know what students were doing and how they were progressing. One teacher gave the example, “it’s been a whole week and Susan still not doing this and ... [I] just want to, you know, make sure that Susan's doing what she's supposed to be doing.” Another teacher pointed out that they would want to monitor student progress toward a goal, “I think it would be a little better to have individual progress… like progress monitoring on a social or behavioral goal, that'll be right there, and I don't have to spend time looking at a graph… [it’s] a little more efficient than a pie chart or something that tells me numbers.”

On the topic of how best teachers could take action based on the system-generated data, teachers seemed to like the idea of a built-in feedback page that would allow them to send messages to students. “It wouldn't take me long because I can look at quick feedback in a few seconds and know what to give him back,” said one teacher. Another teacher followed up noting that, “if the feedback could be automated, I think that would make it like I wouldn't have to spend 2 minutes… automation is gonna be a kind of a time saver, and I think that we would use it all the time.”

The main takeaways researchers took from this cognitive lab were that (a) the statistics needed to be simplified, (b) the interpretation of data could be better supported, especially as it relates to student progress, and (c) the more automated the process for taking action on the data, the more likely teachers would be to reach out to students through the system. With these thoughts in mind, we headed into the usability study.

  • Study 2: Usability study

Following the cognitive labs, a usability study was held at the site of the planned intervention, a socio-economically diverse ISD in northern Michigan, for a real-world look at the feasibility of incorporating system feedback, among other aspects of the study.

Because system bugs and general difficulties were expected, we chose to test the intervention in a low-stakes environment outside of the academic year, opting for the ISD’s summer school program. Three (female) teachers taught nine digitally enhanced SEL lessons to 31 students (18 female, 13 male) over the course of three weeks, with each lesson taking approximately 20-40 minutes per day. Students were in grades 4-8, and classes typically had 6-10 students join the program on any given day; these classes were much smaller than what would be expected during an academic year, and participation tended to be spotty, since summer school attendance was not mandatory. Students and teachers used the system developed from the interactive wireframe.

One of the first things called out as a potential threat to the implementation was, in fact, student use of chat, a piece of process data that our team was relying on to gather information about student behavior. All teachers noticed that students were spending a lot of time chatting in their small groups rather than paying attention to the teacher's instruction. One teacher noticed that during the lesson, “it seems like [students are] spending a lot of time chatting and then forgetting to hit next,” and after a review of the chat conversations, this teacher noted, “I saw a lot of emojis.” In another classroom, the teacher was so concerned about students being off task during the lesson that they elected to change the team settings so that students participated individually, rather than in small groups, effectively removing chat. The third teacher accepted the students' use of chat early on as them just getting used to a novel technology, saying, “they've never done this before, and so, you know, they had a bunch of, you know, just quotation marks.”

To follow up on this point, we reviewed log data from students during the lesson. In Figure 5, we can see that, indeed, students were frequently using the chat, sometimes inappropriately. For example, when looking at a particularly large body of chats from lesson S11873, we see that a student simply held down a key for one of the chat messages. Not shown here, but later in this lesson, another student repeatedly entered a single digit in a chat message and hit enter, which accounted for the majority of the chats in that lesson. We also found that reactions to messages (like, love) were included in the chat count total, which inflated this count.

Figure 5

Log File Summary Data Showing Student Chat and Question Events

However, just from looking at this graph, it seemed that the volume of chats per lesson decreased over time. When we investigated, we found that, apart from a spike in the second week of July, which corresponded to the first lesson taught following the Independence Day holiday break, there was a fairly steady decline in the number of lines of chat made by each person during the lesson (see Figure 6). Even with one teacher indicating that she had effectively removed chat, students in the other two classrooms averaged fewer than three chats per person by the final lesson. Since we hope to use chat as a valuable piece of process and behavior data, we learned that we need to incentivize appropriate student communication so that students continue to use chat effectively throughout the semester, while simultaneously providing teachers with sufficient oversight and visibility to address concerns over student use of chat.

Figure 6

Average Lines of Chat Per Person by Lesson

Note: Due to technical issues, teachers taught only seven out of nine lessons through the online platform.
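To make the chat-volume summary behind Figures 5 and 6 concrete, the sketch below shows one way the system log data could be aggregated into average lines of chat per person per lesson. It is a minimal illustration under stated assumptions: the file name and column names (lesson_events.csv, event_type, lesson_id, student_id) are placeholders, not the actual EPCAL log schema.

```python
# Minimal sketch (pandas): average lines of chat per person per lesson.
# File and column names below are illustrative assumptions, not the EPCAL schema.
import pandas as pd

log = pd.read_csv("lesson_events.csv")  # hypothetical export of the system log data

# Keep typed chat lines only; message "reactions" (like, love) were found to
# inflate the raw chat counts, so they are excluded here.
chats = log[log["event_type"] == "chat_message"]

# Lines of chat per student within each lesson, then the per-lesson average
per_student = chats.groupby(["lesson_id", "student_id"]).size()
avg_per_person = per_student.groupby(level="lesson_id").mean()
print(avg_per_person)  # one value per lesson, as plotted in Figure 6
```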

When asked about the types of feedback teachers would like to have following the lesson, most teacher comments were around classroom management, monitoring student behavior, and engagement with the lesson. Some suggestions included knowing if a student navigated away from the lesson to another browser tab, if a student was off-topic during small group discussions, and if they used profanity. Teachers liked being able to look at the entire group conversation, and despite a general concern for student misuse of chat, when reviewing the chat logs, one teacher pointed to a particularly thoughtful comment made by a student who rarely raises his hand to speak, saying, “I never would have guessed he was so insightful.” This was an encouraging observation, as this is exactly the type of student communication we had hoped the system would promote.

However, teachers rarely commented on the data displays or the built-in tools for taking action. Looking at the number of teacher actions across different system pages in Figure 7, teachers spent far more time assigning teams and assigning lessons than they did on any of the review activities (looking at class statistics or reviewing chats and responses); teachers rarely visited the feedback and notes page, and none ever used it. This tells us that there was a serious design flaw in the system, and it is possible that teachers were spending so much time just getting things set up that they did not want to spend any more time using the system to look at student data. It is also possible that because classes were so small in the summer school program, and lessons were presented almost daily (unlike a classroom setting during the school year), checking student activity was simply not a priority, as teachers were able to closely monitor students without system support. In either case, it was clear that the Teacher Portal required revision.

Figure 7

Number of Teacher Actions on Various Pages of the Teacher Portal

Note that the number-of-actions axis uses a logarithmic scale, meaning there were vastly more actions on assigning teams and lessons than the figure suggests.

Considering the teacher comments and log files together, the main takeaway from the usability study was that teachers most wanted support in keeping students on-topic during the lesson. All three teachers were concerned over the early misuse of chat, and each teacher reacted in a different way (one removing the communication option, another frequently suggesting opportunities to build in classroom management features, and the third seemingly accepting the chat as a novelty). Given the gradual decline in the number of chats per student over the course of the study, we expect that some of the initial chattiness can be attributed to students exploring a new learning environment, and we accepted this as the likely explanation. (If the use of chat were to continue to be a problem, it is possible to have students work independently in the system, effectively removing the chat feature.) However, keeping students on-topic and focused on the task throughout the semester-long curriculum is in everyone’s best interest. This became a major focus of our third development study, with two teachers from the usability study staying on as co-designers on the project.

  • Co-design revision

Using a co-design approach recommended by experts in the field (Dollinger, et al., 2019; Dollinger & Lodge, 2018; Sanders & Stappers, 2008), we invited two teachers from the summer school usability study and a school administrator from the district to join our research team as co-designers on the project. Under a co-design model, researchers and designers work closely with users to create a final product, as shown in Figure 8.

Figure 8

Difference Between Classical Design and Co-Design (Sanders & Stappers, 2008)

Co-design meetings between designers, researchers, and users/teachers are ongoing as of September 2024 and will occur monthly through the end of the year. Meetings consist of one hour of synchronous discussion and a half hour of asynchronous homework for users (and more for developers, UI/UX designers, and content experts), typically a survey or work in preparation for the following meeting. The sole focus of the most recent co-design meeting was feedback automation, including the development of system-generated alerts for students and simpler ways for teachers to act on alerts. For this meeting, a SEL content developer, a UI/UX designer, and an expert in the development of conversation coding frameworks also joined the discussion. With each teacher, we reviewed a few conversation- and SEL-based coding frameworks to think through what types of communication could be flagged and for what purpose. We came up with a series of lesson-level team flags and unit-level individual flags that could be displayed in a footnote on the team conversation, with specific lines flagged (essentially automatically coding lines of chat and then generating a summative footnote), or in a table when looking at student behavior over time. An example is shown in Figure 9, and the types of conversation flags are noted in Figure 10. The next step will be to meet with an NLP and LLM expert to try to operationalize these flags. We will also need to confirm the appropriate application of these flags, either by having teachers confirm the correct placement of flags throughout the semester with real data, or by having human raters code training data.

Figure 9

Example Display of Conversation Flags

Note: This is dummy text, for example only.

Figure 10

Proposed Lesson- and Unit-Level Conversation Flags

Note that these observable occurrences will need to be mapped to specific indicators. For example, long wait times are likely indicative of a student being inattentive during the lesson and possibly on another screen. Other observations will ideally map to the SEL framework or project objectives; taking turns in chat, for example, will hopefully map to the relationship skills dimension of the SEL framework or indicate engagement, an outcome measure of the project.
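As a rough illustration of how such flags might be represented and summarized, the sketch below pairs each observable occurrence with the indicator it is hoped to map to and composes the summative footnote shown under a team conversation (Figure 9). The class, flag names, and mappings are assumptions made for this example, not the project’s finalized coding framework.

```python
# Illustrative sketch of lesson- and unit-level conversation flags and the
# summative footnote attached to a team conversation (see Figures 9 and 10).
# Flag names and indicator mappings are assumptions for illustration only.
from dataclasses import dataclass
from typing import List

@dataclass
class ConversationFlag:
    name: str       # observable occurrence detected in the chat log
    level: str      # "lesson" (team-level) or "unit" (individual-level)
    indicator: str  # SEL dimension or project objective it is hoped to map to

PROPOSED_FLAGS = [
    ConversationFlag("long_wait_time", "lesson", "inattention (possibly on another screen)"),
    ConversationFlag("turn_taking", "unit", "relationship skills / engagement"),
    ConversationFlag("off_topic_line", "lesson", "classroom management follow-up"),
]

def summative_footnote(triggered: List[ConversationFlag]) -> str:
    """Compose the footnote displayed under a flagged team conversation."""
    return "; ".join(f"{flag.name}: {flag.indicator}" for flag in triggered)

# Example: summative_footnote(PROPOSED_FLAGS[:2])
# -> "long_wait_time: inattention (possibly on another screen); turn_taking: relationship skills / engagement"
```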

The final topic for co-designers was how to make it easier for teachers to act on this sort of information. For this, we again employed the Think Aloud method. Teachers went through the (mental) process of finishing a lesson on the Teacher Portal, reviewing stats, responses, and conversations. They explained where it felt most natural to give students feedback, which was usually when they were reviewing student conversations, and likened the process to writing a comment next to a student’s response on a paper test. Our designer also came prepared with a set of “stickers” that we thought might be a catchy way to give quick feedback without needing to type long messages back to students. Teachers reviewed these stickers and made suggestions of their own, including a school mascot sticker. An example of these stickers is shown in Figure 11.

Figure 11

Stickers for Teachers to Give Quick Student Feedback

Design and revision processes are cyclical, and in the upcoming pilot study, we plan to continue listening to teacher feedback and incorporating it into the system.

 

Discussion and limitations

Including teachers in the design process was possibly the best decision we could have made. Other researchers were absolutely right that developing a learning analytic system in partnership with the user reduces redundancy in development (making something and then having to remake it) and improves researchers’ understanding of how the data will be used, without needing to set up additional studies and force unnecessary lag between development and revision. Such studies will, of course, be necessary before we have a final product, but this approach has certainly been helpful in making sure that the final product has high utility, and it will hopefully limit any big surprises in the near future. Through the three studies described here, we were able to address several concerns raised in the learning analytics research.

However, before continuing further, it is important to note one limitation of this research: because it is early-stage development, only six teachers and one administrator have so far been able to provide their insight in support of the new system developments. With the addition of content and design experts to the research team, we have tried to deeply and holistically address teachers’ points of concern and develop a system tailored to their perceived needs.

To address teachers’ primary concern, the misuse of chat, we created a rule for identifying off-topic chats (a basic string-matching rule), which looks for three repeated characters, a line of chat consisting of only a single character, or a line of chat consisting only of digits, all of which were identified as off-topic in observations from the usability study. To verify these rules, we will continue monitoring the lines of chat that are identified as off-topic to make sure that the rule is being applied correctly. For transparency, how off-topic chats are identified is noted in an info popup accessible directly from the data display. We also make it easy for teachers to investigate (or drill down into) the conversation data. By clicking a student’s chat bar, for example, the team conversation to which the student belonged will pop up. When hovering over a pie chart displaying student answer selections, student names appear in a hover window.
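A minimal sketch of such a string-matching rule is shown below. The function name and the exact regular expression are illustrative assumptions, not the production code running in the platform.

```python
# Sketch of the basic off-topic rule described above: a chat line is flagged if it
# contains the same character repeated three times in a row, consists of a single
# character, or consists only of digits. Illustrative only.
import re

def is_off_topic(line: str) -> bool:
    text = line.strip()
    if not text:
        return False
    if re.search(r"(.)\1\1", text):  # three repeated characters in a row
        return True
    if len(text) == 1:               # a single character only
        return True
    return text.isdigit()            # digits only

# Lines like "aaaaaaa", "k", or "7777" would be flagged; "I liked your idea" would not.
```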

However, we have yet to test whether or not teachers will use the data to take action. To encourage teachers to investigate and respond to student chats or answer responses, we have tried to direct teachers to areas where feedback can be given to students and to make giving that feedback easy. In addition to a comment field, in which teachers can compose personal messages to students, teachers can also opt to award a “sticker” carrying a common feedback sentiment to chat comments and answer responses (see Figure 12). When a sticker is awarded, the system will automatically trigger a message to the student, displaying the sticker and the associated teacher comment. Not only is a sticker a quick way to offer student feedback, but stickers can also be treated as rewards that students can collect, adding a slightly gamified component to the system, which has been shown to promote specific desired behaviors and motivation in other digital learning environments (e.g., in Duolingo; Corthout, n.d.).

Figure 12

Example of “Sticker” Feedback on Student Chats and Responses

Note: This is dummy text, for example only.
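The sketch below illustrates the intended sticker-award flow under stated assumptions: awarding a sticker composes an automatic message to the student carrying the sticker and any teacher comment. The type and function names are hypothetical, not the actual EPCAL API.

```python
# Illustrative sketch of the sticker-award flow (see Figure 12): awarding a sticker
# automatically generates a message to the student containing the sticker and the
# teacher's comment. All names here are hypothetical, not the actual EPCAL API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StudentMessage:
    student_id: str
    sticker: str    # e.g., "great_teamwork" or the school-mascot sticker
    comment: str    # optional teacher comment attached to the sticker
    sent_at: datetime

def award_sticker(student_id: str, sticker: str, comment: str = "") -> StudentMessage:
    """Called from the chat/response review page; returns the message the system queues for the student."""
    return StudentMessage(student_id, sticker, comment, datetime.now())

# Example: award_sticker("s_042", "great_teamwork", "Loved how you included everyone!")
```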

One thing we have yet to address, mentioned in both the literature and the cognitive lab, is the ability to set personal growth milestones and show self-referenced progress. This would be a great addition to the data analytics dashboard, not only allowing students to set personal goals and track progress toward them, but also allowing teachers to monitor the progress of individual students and of the classroom as a whole. Right now, student progress on the lesson’s SEL skill (e.g., listening to others, taking turns) is captured in a self-report question at the end of each lesson and displayed. However, we hope to improve on this.

To build trust with teachers (as recommended by Tsai et al., 2021), we aim for transparency in all data displays. Fortunately, we can keep data representations fairly simple, displaying non-latent observations and descriptive data, such as responses to questions, ratings, and numbers of chat communications, as shown in Figure 13. We do not currently employ any learning analytic algorithms, only classification rules in limited cases; however, we hope to eventually incorporate LLMs to code and flag data (Hao et al., 2024). Those outputs should be fairly clear, and the teacher will be able to offer feedback on whether or not they agree with a flag. With such data visibility, we do not need to make prescriptive recommendations with the data, which further preserves the teacher’s sense of autonomy in the classroom.

Figure 13

Updated Data Display for Teachers

However, as with any complex system, training and efficacious implementation will be key. Ahead of the pilot study, and again ahead of the implementation, we will hold a full day of in-person training with the curriculum expert, with four hours devoted to the curriculum and two to three hours devoted to teaching with the digital system. Additionally, to further improve the functionality of the Teacher Portal and make it a “one-stop shop” for teachers, we have decided to incorporate all teacher training materials, including material on the SEL curriculum, lecture videos meant to review the in-person training content (or substitute for it, in case the teacher cannot attend the training), and a searchable FAQ section to support teachers in unique circumstances or on the fly. By building this information into the system, we also learn the extent to which teachers visit (and revisit) specific content and gain a better understanding of where pain points in the training, the curriculum, or the system may be.

We hope that teachers will see the system as a support for a course that is not likely to be their main area of expertise, that does not meet daily (so tracking student progress and activity is neither a priority nor particularly easy to remember), and whose skills are difficult to observe and measure. We do not expect that system messaging will ever replace a teacher, but it may have a limited capacity to act as a teaching assistant, supporting both the student and the teacher. Through continued revision and testing in an upcoming pilot study, we will continue gathering evidence about how system displays and messaging are received in the classroom and investigate the extent to which these enhancements have made a positive impact on learning and teaching.


References

Anthony, C. J., Elliott, S. N., DiPerna, J. C., & Lei, P.-W. (2020). Initial development and validation of the social skills improvement system - social and emotional learning brief scales - teacher form (SSIS SEL-b). Journal of Psychoeducational Assessment, 1-16. https://doi.org/10.1177/0734282920953240

Bälter, O., Zimmaro, D., & Thille, C. (2018). Estimating the minimum number of opportunities needed for all students to achieve predicted mastery. Smart Learning Environments, 5(15). https://doi.org/10.1186/s40561-018-0064-z

Borowski, T. (2019). CASEL’s framework for systemic social and emotional learning.  Collaborative for Academic, Social, and Emotional Learning. https://measuringsel.casel.org/wp-content/uploads/2019/08/AWG-Framework-Series-B.2.pdf

Brown, M. (2020). Seeing students at scale: How faculty in large lecture courses act upon learning analytics dashboard data. Teaching in Higher Education, 25(4), 384-400. https://doi.org/10.1080/13562517.2019.1698540

CASEL (n.d.). SEL policy at the state level. https://casel.org/systemic-implementation/sel-policy-at-the-state-level

Chiappe, A. & Rodriguez, L. P. (2017). Learning analytics in 21st century education: A review [Report]. SciELO Brazil. https://doi.org/10.1590/S0104-40362017002501211

Cope, B. & Kalantzis, M. (2017). Conceptualizing e-learning. In B. Cope & M. Kalantzis (Eds.), e-learning ecologies: Principles for new learning and assessment (pp. 1-45). Routledge. https://doi.org/10.4324/9781315639215-1

Corthout, J. (n.d.). Duolingo: How they nailed their product and became the #1 education app: Iconic products episode 008. Salesflare. https://blog.salesflare.com/duolingo-iconic-product-e3df449017df

DiPerna, J. C., Lei, P., Bellinger, J., & Cheng, W. (2016). Effects of a universal positive classroom behavior program on student learning. Psychology in the Schools, 53(2), 189–203. https://doi.org/10.1002/pits.21891

DiPerna, J. C., Lei, P., Bellinger, J., & Cheng, W. (2015). Efficacy of the social skills improvement system classwide intervention program (SSIS-CIP) primary version. School Psychology Quarterly, 30(1), 123–141. https://doi.org/10.1037/spq0000079

Dollinger, M., Liu, D., Arthars, N., & Lodge, J. M. (2019). Working together in learning analytics towards the co-creation of value. Journal of Learning Analytics, 6(2), 10-26. http://dx.doi.org/10.18608/jla.2019.62.2

Dollinger, M. & Lodge, J. M. (2018). Co-creation strategies for learning analytics. In LAK '18: Proceedings of the 8th International Conference on Learning Analytics and Knowledge (pp. 97-101). https://doi.org/10.1145/3170358.3170372

Durlak, J. A., Weissberg, R. P., Dymnicki, A. B., Taylor, R. D., & Schellinger, K. B. (2011). The impact of enhancing students' social and emotional learning: A meta-analysis of school-based universal interventions. Child Development, 82(1), 405-432. https://doi.org/10.1111/j.1467-8624.2010.01564.x

Elliott, S. N., & Gresham, F. M. (2007). Social skills improvement system: Classwide intervention program guide. Pearson Assessments.

Eryilmaz, M. (2015). The effectiveness of blended learning environments. Contemporary Issues in Education Research, 8(4). https://files.eric.ed.gov/fulltext/EJ1077330.pdf

Greenberg, M. T. (2023). Evidence for social and emotional learning in schools. Learning Policy Institute. https://doi.org/10.54300/928.269

Gueldner, B. & Merrell, K. (2011). Evaluation of a social-emotional learning program in conjunction with the exploratory application of performance feedback incorporating motivational interviewing techniques. Journal of Educational and Psychological Consultation, 21(1), 1-27. https://doi.org/10.1080/10474412.2010.522876

Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. Routledge.

Halverson, L. R., Spring, K. J., Huyett, S., Henrie, C. R., & Graham, C. R. (2023). Blended learning research in higher education and K-12 settings. In J. M. Spector et al. (Eds.), Learning, design, and technology. https://doi.org/10.1007/978-3-319-17461-7_31

Hao, J., Liu, L., Lederer, N., Zapata-Rivera, D., Jakl, P., & Bakkenson, M. (2017). EPCAL: ETS platform for collaborative assessment and learning. ETS Research Report Series, 2017(1), 1-14. https://doi.org/10.1002/ets2.12181

Hernández-de-Menéndez, M., Morales-Menendez, R., Escobar, C. A., & Ramírez Mendoza, R. A. (2022). Learning analytics: State of the art. International Journal on Interactive Design and Manufacturing. https://doi.org/10.1007/s12008-022-00930-0

Jagers, R. J., Rivas-Drake, D., & Williams, B. (2019). Transformative social and emotional learning (SEL): Toward SEL in service of educational equity and excellence. Educational Psychologist, 54(3), 162-184.

Kautz, T., Heckman, J. J., Diris, R., ter Weel, B., & Borghans, L. (2014). Fostering and measuring skills: Improving cognitive and non-cognitive skills to promote lifetime success (OECD Education Working Papers, No. 110). OECD Publishing, Paris. https://doi.org/10.1787/5jxsr7vr78f7-en

Li, S. & Wang, W. (2022). Effect of blended learning on student performance in K-12 settings: A meta-analysis. Journal of Computer Assisted Learning. https://doi.org/10.1111/jcal.12696

Macfadyen, L. P. & Dawson, S. (2012). Numbers are not enough: Why e-learning analytics failed to inform an institutional strategic plan. Educational Technology & Society, 15(3), 149-163.

OECD (2015). Skills for social progress: The power of social and emotional skills [OECD Skills Study]. http://dx.doi.org/10.1787/9789264226159-en

Sanders, E. B.-N. & Stappers, P. J. (2008). Co-creation and the new landscapes of design. Co-Design, 4(1), 5-18. https://doi.org/10.1080/15710880701875068

Tsai, Y.-S., Whitelock-Wainwright, A., & Gasevic, D. (2021). More than figures on your laptop: (Dis)trustful implementation of learning analytics. Journal of Learning Analytics, 8(3), 81-100. https://doi.org/10.18608/jla.2021.7379

van Someren, M. W., Barnard, Y. F., & Sandberg, J. A. C. (1994). The think aloud method: A practical guide to modelling cognitive processes. Academic Press.

Watson, J. (2008). Blending learning: The convergence of online and face-to-face education [Report]. Promising Practices in Online Learning Series. North American Council for Online Learning. https://files.eric.ed.gov/fulltext/ED509636.pdf