For the past three months, I participated in the Spring 2017 xAPI Cohort. The Cohort is a hands-on project learning experience for people who are learning about xAPI and those who are looking to move the xAPI standard forward. Participants form teams around projects they wish to work on together to get a deeper understanding of how xAPI works, the possibilities it creates, and, in my experience this spring, the obstacles it has yet to overcome.
My team’s learning in the cohort came from confronting obstacle after obstacle and looking for the lesson to be drawn from each roadblock. Our final product was a set of lessons learned and a list of recommendations for L&D practitioners about working with Big Data and a few for the xAPI powers that be.
I signed on as a member of Team Analytics. The initial idea behind the team was to explore the possibilities that data visualizations have for reporting results of L&D learning experiences. But that requires one big assumption – having access to a large, quality dataset composed of xAPI statements.
After 2 weeks of administrative stuff and team formation, Team Analytics spent the next 6 weeks of the cohort trying to find a dataset we could use. Issues of governance, ownership, privacy, irrelevance, and control of data kept us at bay.
We finally decided to use the xAPI statements being generated by the Cohort’s activities in Slack. 8 weeks into a 12-week course, we were ready to rock-and-roll.
Or so we thought.
Turns out that while the xAPI statements were valid, they were not well formed; the outgoing webhook Slack provides generates only minimal data; and converting that data into xAPI statements is manual programming work.
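To give a feel for what that manual conversion involves, here is a rough sketch of mapping a Slack webhook payload into an xAPI statement. This is not the code our team used; the field names (`user_name`, `channel_name`, `text`, `timestamp`, etc.) are assumptions based on Slack's legacy outgoing-webhook payload, and the verb IRI comes from ADL's published verb list. Every piece of the mapping is a design decision someone has to make and script by hand.

```python
import json
from datetime import datetime, timezone

def slack_to_xapi(payload):
    """Map a Slack outgoing-webhook payload (assumed shape) to a
    minimal xAPI statement: actor / verb / object plus a result."""
    return {
        "actor": {
            "objectType": "Agent",
            "name": payload["user_name"],
            # Use an account IFI rather than an email, since the
            # webhook payload only exposes the Slack user ID.
            "account": {
                "homePage": "https://slack.com",
                "name": payload["user_id"],
            },
        },
        "verb": {
            "id": "http://adlnet.gov/expapi/verbs/commented",
            "display": {"en-US": "commented"},
        },
        "object": {
            "objectType": "Activity",
            # Treat the channel as the activity being acted on.
            "id": f"https://{payload['team_domain']}.slack.com/channels/{payload['channel_name']}",
            "definition": {"name": {"en-US": f"#{payload['channel_name']}"}},
        },
        "result": {"response": payload["text"]},
        "timestamp": datetime.fromtimestamp(
            float(payload["timestamp"]), tz=timezone.utc
        ).isoformat(),
    }

# Hypothetical payload for illustration only.
sample = {
    "user_id": "U123", "user_name": "pat",
    "team_domain": "xapicohort", "channel_name": "team-analytics",
    "text": "Finally found a dataset!", "timestamp": "1494000000.000000",
}
print(json.dumps(slack_to_xapi(sample), indent=2))
```

Even in this toy version, the choices pile up: which verb to use, whether the channel or the message is the object, and how to identify the actor. None of that is standardized by the webhook itself.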
Thanks to Will Hoyt from Yet Analytics and Matt Kliewer from Torrance Learning, we were able to figure this out and reconfigure the xAPI statements and actually do a bit of the work we initially thought we were going to be doing. But more importantly, because of the project-based, unscripted approach of the Cohort, we discovered issues we needed to learn about to overcome each obstacle.
The Spring 2017 xAPI Cohort was a tremendous learning experience. If it sounds interesting, learn more about the Cohort and register for the Fall 2017 cohort at http://www.torrancelearning.com/xapi-cohort/
Here are the “Lessons Learned” from our experiences in the xAPI Cohort:
- Ownership and Control of Data
- Determine who owns the data and/or who controls the business data you wish to use.
- Protocols and approval processes are in place to protect the quality of existing data and control its use.
- Clearing these hurdles requires stakeholder partnership, upper leadership buy-in, and clear planning of how data will be used.
- All of which takes time.
- Privacy and Accessibility
- An EULA (End User License Agreement) or employment agreement may dictate the usage of data.
- Does your company have a data usage policy?
- Access to data may be limited, controlled, or data may be off limits.
- Accessibility of Data
- If a tool is not natively programmed with code to trigger the creation of xAPI statements or in the cases where xAPI statements are a limited subset of all activities, you’ll be limited to the data the tool provides via a Webhook and/or APIs.
- Accuracy & Usability
- The manual scripting process is not standardized, so errors can be introduced
- Poor data planning can lead to useless data
- Resources Required
- A programmer competent in writing API scripts and xAPI statements
- Time availability of said programmer
- Data Mining vs Learning Analytics
- Data for data’s sake only creates noise that can overwhelm your efforts to clarify the impact of learning upon business results. Collecting data without knowing why you are collecting it is a waste of time and resources – especially with the work required to implement xAPI.
- Visualizations
- Poorly chosen visual components like size, color, and positioning can render even the best visualization useless by making it illegible.
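One takeaway worth making concrete: "valid" and "well formed" are not the same thing, and a few cheap sanity checks in your conversion script can catch shape problems before statements ever reach your LRS. This is a hypothetical sketch, not our team's actual code; the rules it checks are drawn from the basic xAPI statement structure (required properties and exactly one actor identifier).

```python
def check_statement(stmt):
    """Flag common shape problems in an xAPI statement dict.
    Returns a list of human-readable issues (empty = looks sane)."""
    issues = []

    # xAPI requires actor, verb, and object on every statement.
    for key in ("actor", "verb", "object"):
        if key not in stmt:
            issues.append(f"missing required property: {key}")

    # The verb id must be an absolute IRI, not a bare word.
    verb = stmt.get("verb", {})
    if "id" in verb and not verb["id"].startswith(("http://", "https://")):
        issues.append("verb.id should be an absolute IRI")

    # The actor needs exactly one inverse functional identifier.
    actor = stmt.get("actor", {})
    ifis = [k for k in ("mbox", "mbox_sha1sum", "openid", "account")
            if k in actor]
    if len(ifis) != 1:
        issues.append("actor must have exactly one inverse functional identifier")

    return issues

# A minimal statement that passes the checks above.
ok = {
    "actor": {"mbox": "mailto:pat@example.com"},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/commented"},
    "object": {"id": "https://example.com/activity"},
}
print(check_statement(ok))
print(check_statement({"actor": {"mbox": "mailto:pat@example.com"}, "object": {}}))
```

A check like this will not make your data well planned, but it does standardize a small part of an otherwise manual process, which is exactly where our errors crept in.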