Thoughts from the 1EdTech Learning Impact Conference
-Bart Pursel, CEO
Recently, I joined several Unizin colleagues and members in Indianapolis for the annual Learning Impact conference, hosted by 1EdTech. This event always attracts a diverse group of attendees, including K-12, higher education, and edtech solution providers discussing and debating the current and future states of learning technologies.
As you might guess, AI was on everyone’s mind – particularly GenAI. While not surprising, it was interesting to observe the gap in perceived value between GenAI and ‘older’ AI applications like machine learning, especially given that some of the most tangible examples of AI at work in education today are grounded in that ‘old’ AI.
Case in point: Unizin’s Kyle Unruh joined Elisabeth Seidle, a data scientist from Penn State, for a session featuring one of Penn State’s newest applications, Performance Outlook, which combines descriptive statistics about student activity with predictive analytics. It was interesting to see the various features that feed the predictive models, and their relative importance. What’s more, the sheer volume of code required to run machine learning at this scale is remarkable.
Elisabeth shared an astute observation (one I’ve heard echoed by other Unizin members in other forums): the Unizin Data Platform (UDP) and the data engineering behind it allow members to build complex models like Performance Outlook and deliver data to decision makers, often in real time – a task that would be nearly impossible for any institution that ALSO had to build and operate all the data infrastructure and processing that feeds those models.
Penn State’s Performance Outlook is a great example of ‘old’ AI – in this case, machine learning – providing valuable insights into student activity and trajectory today. As a field, higher-ed still hasn’t quite found the machine learning application that truly changes the game for student success.
GenAI is in the same space: lots of interesting use cases, but no ‘killer app’ in higher-ed yet. On the whole, it was encouraging to observe the balance in the AI conversation compared to similar conferences last year. Applying Gartner’s Hype Cycle to contextualize technical innovation, I wonder if we’ve finally made it over the “Peak of Inflated Expectations” and are working our way down toward the “Trough of Disillusionment.”
I enjoyed the opportunity to present alongside Dr. Ben Hellar, manager of Penn State’s Data Empowered Learning team, and Steven Williams, Principal Product Manager at UCLA’s Academic Technology division. Our session focused on using real-time operational data to support student retention.
We discussed the LMS as the ‘forgotten’ ERP, since many stakeholders in higher education don’t see the LMS as a rich source of data. Comparatively speaking, the LMS generates orders of magnitude more data every day than the SIS, Finance, and HR ERPs combined. Dr. Hellar shared examples of how Penn State uses this real-time event data to help advisers identify students who appear disengaged early in the semester. Steven shared how UCLA is working through various aspects of data governance to unlock the power of real-time data.
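As a toy illustration of the kind of signal advisers might receive from real-time event data, one could flag students whose most recent LMS activity falls outside a recency window. This is a deliberately simplified sketch, not Penn State’s actual model; the student names, dates, and seven-day threshold are all invented:

```python
from datetime import date, timedelta

# Hypothetical per-student "last seen in the LMS" dates derived
# from clickstream events (values invented for illustration).
last_activity = {
    "student_a": date(2024, 9, 10),
    "student_b": date(2024, 9, 2),
    "student_c": date(2024, 8, 28),
}

def flag_disengaged(last_seen: dict, today: date, days: int = 7) -> list:
    """Return students whose most recent LMS event is older than `days` days."""
    cutoff = today - timedelta(days=days)
    return sorted(s for s, d in last_seen.items() if d < cutoff)

print(flag_disengaged(last_activity, today=date(2024, 9, 12)))
# → ['student_b', 'student_c']
```

A production system would of course use richer features (assignment submissions, page views, discussion posts) rather than a single recency cutoff, but the shape of the problem – turning an event stream into an actionable adviser-facing flag – is the same.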
I also had the chance to participate in an Empirical Educator Project (EEP) miniconference hosted by Michael Feldstein, 1EdTech’s Chief Strategy Officer. The miniconference was designed for members of the community to pitch ideas that could possibly become future 1EdTech standards, making it easier for researchers to study various aspects of teaching and learning. While several ideas were shared, two common themes emerged:
- It’s exceptionally difficult to harmonize data for research (from both a technical and a policy perspective) when that data originates in different environments and in different formats.
- Not enough edtech suppliers have adopted Caliper, forcing researchers into large-scale data transformation. When an analyst is confronted with huge datasets from multiple sources, the struggle to harmonize them is often why research projects are abandoned.
I am proud to note that Unizin has made significant progress in both of these areas. In 2024, 13.8 billion clickstream events from 19 different edtech tools landed in the UDP. By applying common identifiers, we bring that data together and allow our members to aggregate and crosswalk it across silos. Each of the 19 edtech tools also emits Caliper data, which we tabularize, making it easier to work with across our membership. Our growing suite of datamarts is further streamlining data access.
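To make the “tabularize” step concrete: Caliper events arrive as nested JSON, and flattening them into relational rows keyed on common identifiers is what lets analysts join activity across tools with ordinary SQL. The sketch below is my own minimal illustration, not Unizin’s actual pipeline; the field names follow the 1EdTech Caliper event structure (actor, action, object, eventTime), while the identifier values are invented:

```python
import json

# A minimal Caliper-style event. Field names follow the Caliper spec;
# the identifier values are invented for illustration.
raw_event = json.dumps({
    "id": "urn:uuid:3a648e68-f00d-4c08-aa59-8738e1884f2c",
    "type": "NavigationEvent",
    "action": "NavigatedTo",
    "eventTime": "2024-09-15T10:15:00.000Z",
    "actor": {"id": "https://example.edu/users/554433", "type": "Person"},
    "object": {"id": "https://example.edu/courses/201/pages/2", "type": "WebPage"},
})

def tabularize(event_json: str) -> dict:
    """Flatten one nested Caliper event into a single flat row."""
    e = json.loads(event_json)
    return {
        "event_id": e["id"],
        "event_type": e["type"],
        "action": e["action"],
        "event_time": e["eventTime"],
        "actor_id": e["actor"]["id"],    # common identifier: the join key
        "object_id": e["object"]["id"],  # across tools and datamarts
    }

row = tabularize(raw_event)
print(row["actor_id"])  # → https://example.edu/users/554433
```

Once every tool’s events land in this flat shape, a crosswalk is just a join on `actor_id` – which is precisely the aggregation across silos described above.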
As we contemplate the next higher-ed moonshot, let’s not overlook the fact that data is the rocket fuel for any AI ambition. The lessons we are learning, and the foundation we are building to enable AI implementations at scale that inform teaching and learning and improve student success, will help us navigate the hype cycles and disappointments of adoption to achieve what we all want: productivity, insight, and impact.