AI Image Generator

Role
UX Designer
Interaction Designer
Tools
Figma
Project Duration
5 weeks
Challenge
The Educator-Focused Image Generator is an AI-powered tool designed to help educators quickly create classroom-appropriate images using natural language prompts. The tool leverages existing state-of-the-art AI models (Flux, Google Gemini, and additional open-source solutions) to generate safe, high-quality images tailored to educational contexts.
Results
I created a fully prototyped set of high-fidelity desktop wireframes for the Educator-Focused Image Generator.
Project Brief
Getting tailored results from existing AI models is not as easy as one might think. Most teachers are not AI prompt experts, which can make it difficult to get specific results, especially when AI generators make assumptions to fill in the gaps. The Educator-Focused Image Generator therefore aims to stand apart from these models by providing educator-centric presets and built-in prompt suggestions designed specifically for non-technical users.
Goals
Empower educators
Enable teachers, instructional designers, and educators to generate custom visuals without needing advanced technical skills.
Save time
Replace time-consuming manual image searches or edits with a fast, AI-driven process so that educators can spend time engaging with their students.
Ensure tailored results
Provide outputs that are appropriate for classroom use, incorporating robust content filters and educator-specific style presets.
Example User Flow
After identifying pain points and goals, we consulted an AI model about the typical steps that a user might take to generate an image. This helped us outline the types of interactions we would have to design and the logical flow of the final prototype.
The following example outlines how a 7th-grade chemistry teacher might interact with the Educator-Focused Image Generator (a rough data-model sketch of this flow follows the list):
Welcome & Grade Level
Subject & Topic
Purpose of the Image
Style/Look & Feel
Key Details in Plain English
Generate the First Draft
Refinement Options
Review Final Images
Download & Integration
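To make the flow above concrete, here is a minimal sketch of how these steps could be represented as a simple data model, written in TypeScript purely for illustration. Nothing here is part of the shipped product: the type names, fields, and the sample chemistry-lesson answers are all assumptions, and the hypothetical buildPrompt helper only shows how the answered input steps might be combined into one plain-language prompt.

```typescript
// Illustrative sketch only: assumes the guided flow is a fixed, ordered list of
// steps, where the first five collect educator input and the rest handle
// generation and review. Names, fields, and answers are hypothetical.

type StepKind = "input" | "generation" | "review";

interface FlowStep {
  label: string;   // step title shown to the educator
  kind: StepKind;  // whether the step collects input, generates, or reviews
  answer?: string; // hypothetical answer for the 7th-grade chemistry example
}

const guidedFlow: FlowStep[] = [
  { label: "Welcome & Grade Level", kind: "input", answer: "7th grade" },
  { label: "Subject & Topic", kind: "input", answer: "Chemistry: states of matter" },
  { label: "Purpose of the Image", kind: "input", answer: "Slide illustration for a short lecture" },
  { label: "Style/Look & Feel", kind: "input", answer: "Simple labeled diagram" },
  { label: "Key Details in Plain English", kind: "input", answer: "Water shown as ice, liquid, and vapor" },
  { label: "Generate the First Draft", kind: "generation" },
  { label: "Refinement Options", kind: "review" },
  { label: "Review Final Images", kind: "review" },
  { label: "Download & Integration", kind: "review" },
];

// Combine the answered input steps into one plain-language prompt string.
function buildPrompt(flow: FlowStep[]): string {
  return flow
    .filter((step) => step.kind === "input" && step.answer)
    .map((step) => `${step.label}: ${step.answer}`)
    .join("; ");
}

console.log(buildPrompt(guidedFlow));
```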
Competitive Analysis
After understanding the goals and vision for this project, I researched competing products. This helped me understand how popular AI image generators work, what kinds of results they produce, and where I could improve the image generation experience.
Most image generator models had a similar text-based experience that produced between one and four images.

Iteration 1
The competitive analysis helped me understand the typical mental model that I should follow for my designs. It also presented an opportunity to change the way that the model collects information. So for my first iteration, I wanted to create a card-based interface for users to input information about their image.
Sketches
Based on the steps outlined earlier, I sketched a step-by-step process for how the AI model might collect information through a card-based interface.
Wireframes
I then created mid-fidelity wireframes from the sketches using the Marvel UI kit.
Feedback
I presented the following wireframes to my team's Project Manager for review. During our discussion, we identified potential pain points with the cards: spreading them across different screens could cause more fatigue for users. We also discussed how the cards might not be specific enough to provide targeted insights.
The goal for the next iteration was to integrate these functions into the chat space and provide users with more AI-driven suggestions and insights.
Iteration 2
In my second iteration, I focused more on the interactions between the AI agent and the user to provide more tailored suggestions. Additionally, I wanted to demonstrate how the AI agent might provide insights based on user input.
Sketches
I created sketches to outline how a user might interact with the AI agent to provide basic information about the image. With this model, the AI agent asks the user for information and offers suggestions based on their input.
Wireframes
Based on my sketches, I created wireframes to model the interactions between the AI agent and the user. I also created flows for how a user might approve or reject the AI agent's suggestions.
Feedback
I went through another round of critique with my product manager to discuss the flow and wireframes. While this flow was closer to what we wanted to achieve, we identified a few pain points. First, splitting up the steps in this way might make the interaction longer and cause fatigue for the user. Second, the text input box wasn't getting much use with this model because the input functions are built into the message boxes.
Our goal for the next iteration was to let users enter the most important information upfront and to transform how they use the text input box.
Final Design
In the final design, I implemented a modal input at the beginning of the flow and added cards to the text input area. Additionally, I prototyped the interactions to model how a user might work with the AI agent.