2024 | GOVT TECH AI | SHIPPED
Generative AI for Govt Tech
During the summer of 2023, as a product design intern at OpenGov, I led the design of the company's first AI feature: a writing assistant for government documentation that helps users write faster and better.

PROBLEM & CONTEXT
SOWs Are Complex; Previous Automation Attempts Were High-Effort
Writing statements of work (SOWs) has traditionally been a tedious, time-consuming process, with significant time and cost impacts for governments. While past solutions such as libraries, templates, and references offered some relief, users still faced high effort and lengthy drafting cycles.
Image: Editor (Before Redesign)
RESEARCH FINDINGS
Blank page syndrome: users struggle to get started
Heuristic Evaluation
BLANK CANVAS PARALYSIS
Struggled most with initial articulation
Fear of making mistakes or writing something “wrong”; pressure for immediate perfection
High cognitive load to organize thoughts, structure sections, and select the right language
Difficulty mapping raw ideas to expected document formats or templates
ISSUES WITH TEMPLATE LIBRARY
Finding the right template among existing ones took effort
Effort-intensive content mapping: templates required interpretation and reworking to fit the user’s actual project
Complex structure with heavy detail
Formal business/legal language to avoid ambiguity
Multiple sections, each with its own specificity needs
TESTING
Insights From Iterations
WHAT WORKED
1
Suggestive prompts helped guide users
2
Output formatting matched what users were already familiar with
3
The “Copy to Scope” CTA mimicked other library and template workflows
4
Users had the ability to edit or discard AI-generated output
WHAT DIDN'T WORK
5
Users weren’t prompt engineers and needed assistance to get good results.
6
UI elements that felt “whimsical” didn’t fit the product
7
Modals were restrictive when AI took too long to generate a response, leaving users stuck on the page
FINAL DESIGNS
To accommodate constraints, two versions were designed
OpenGov had an existing (old) project editor and a new redesign in development.
We had a tight 4-week timeline—from initial concept to production—for an MVP launch, as OpenGov aimed to present the feature at NIGP ’23. To meet this deadline and work within technical constraints, I developed two versions.
FINAL DESIGN WITH OLD EDITOR
Built on the old editor for technical feasibility and easy implementation, while incorporating most feedback from earlier testing.
WHAT WORKED
1
High Completion Rate & Discoverability
2
Reduced time to complete tasks in user testing
3
Optimized focus and minimized distractions during input
OPPORTUNITY AREAS
4
In practice, all content often appeared in one block instead of separate sections
5
This led to extra copy-and-paste effort for users
FINAL DESIGN WITH NEW EDITOR
Built on the new editor still in development, aligning with the updated UI design and adding improvements for better discoverability, output quality, and user guidance.
WHAT WORKED
1
Section-level AI writing solves the formatting issues of the MVP/old editor
2
Introduced naturally into the workflow
3
Users can freely edit and update sections
4
Maintains user and system context
5
Clear distinction between AI-generated and user content
OPPORTUNITY AREAS
6
Short user study with proxy users; need real-world testing with actual users for deeper insights
OUTCOMES
Saving Hours of Government Time
Time on task dropped from an average of 25 minutes to 7.4 minutes, a 70.4% reduction, in a within-subject study with 5 participants. In real-world scenarios this task often takes several hours, so the true efficiency gains delivered by the AI assistant are likely much higher.
Shipped in September 2023 as OpenGov's first AI feature, and debuted at NIGP '23, a national government conference.
TEAM CREDITS
Design
Jen Elhers (Mentor) / Gauri (Me)
Product Owners
Toah Jones Hill
Dev
Matt H