Designing with AI
Problem
Faculty don’t enjoy compliance work.
It’s often seen as a high-stakes, low-reward burden. They spend hours staring at spreadsheets and reports, trying to manufacture a narrative that satisfies accreditation standards. The result is "Compliance Fatigue": a system where educators spend more time reporting on the work than doing it.
As the first AI initiative within the Watermark suite, the goal wasn't just to add a feature; it was to fix a broken motivation loop. We needed to bridge the gap between staring at raw data and taking meaningful action.
Learning
I spoke with Program Leads who felt vulnerable and frustrated. They were overwhelmed by the cognitive load of translating complex data into the specific administrative language required for accreditation.
I also uncovered skepticism toward AI in academia. If the tool felt like a "black box" or a "cheat code," we would lose the trust of the people we were trying to help. We had to prove that the AI wasn't replacing their expertise, but helping to unlock it.
Defining the Context: Academic Programs
Solutions
We launched the Assessment Catalyst Toolpack as an AI feature set. With it, we established the UX standards for AI across the suite, focusing on data privacy, academic integrity, and giving users control over the final content.
The Narrative Engine
Analyzes raw assessment data and generates a structured summary of findings, saving hours of manual drafting.
The Action Plan Generator
Suggests continuous improvement initiatives based on the gaps identified in the data, so Program Leads start from options rather than a blank slate.
Shifting the Mental Model
The transition from analysis to action is a heavy one for Program Leads. We made it lighter by:
Shifting from Black Box to Glass Box: We designed for transparency. By showing the "why" behind an AI narrative, we moved the user from a place of vulnerability to a place of authority.
Reframing from "Cheating" to "Kickstarting": We framed the AI outputs as a first draft. This small shift in the mental model gave faculty the permission to use the tool as a starting point for their expertise, rather than a replacement for it.
Lowering the Barrier to Entry: We replaced blank-page syndrome with a guided experience. By providing a jumping-off point, we reduced the cognitive load and let faculty move straight into the higher-value work of program improvement.
Results
We didn't just launch a feature; we shifted the departmental culture. By automating the narrative analysis and kickstarting the action plans, Admins saw a measurable increase in the quality of action plans submitted, and reported a significant reduction in time spent in the feedback-loop phase.
This project set a blueprint for how Watermark approaches AI. It’s a tool that respects the user's expertise while stripping away the administrative friction that prevents real progress.
“This type of AI feels less scary to me. My Programs want to improve for their students but don’t know where to start sometimes. This gets them there.”