Beyond The Hype: What AI Can Really Do For Product Design
These days, it’s easy to find curated lists of AI tools for designers, galleries of generated illustrations, and countless prompt libraries. What’s much harder to find is a clear view of how AI is actually integrated into the everyday workflow of a product designer — not for experimentation, but for real, meaningful outcomes.
I’ve gone through that journey myself: testing AI across every major stage of the design process, from ideation and prototyping to visual design and user research. Along the way, I’ve built a simple, repeatable workflow that significantly boosts my productivity.
In this article, I’ll share what’s already working and break down some of the most common objections I’ve encountered — many of which I’ve faced personally.
Stage 1: Idea Generation Without The Clichés
Pushback: “Whenever I ask AI to suggest ideas, I just get a list of clichés. It can’t produce the kind of creative thinking expected from a product designer.”
That’s a fair point. AI doesn’t know the specifics of your product, the full context of your task, or many other critical nuances. The most obvious fix is to “feed it” all the documentation you have. But that’s a common mistake, and it often leads to even worse results: the context gets flooded with irrelevant information, and the AI’s answers become vague and unfocused.
Current-gen models can technically process thousands of words, but the longer the input, the higher the risk of missing something important, especially content buried in the middle. This is known as the “lost in the middle” problem.
To get meaningful results, AI doesn’t just need more information — it needs the right information, delivered in the right way. That’s where the RAG approach comes in.
How RAG Works
Think of RAG as a smart assistant working with your personal library of documents. You upload your files, and the assistant reads each one, creating a short summary — a set of bookmarks (semantic tags) that capture the key topics, terms, scenarios, and concepts. These summaries are stored in a kind of “card catalog,” called a vector database.
When you ask a question, the assistant doesn’t reread every document from cover to cover. Instead, it compares your query to the bookmarks, retrieves only the most relevant excerpts (chunks), and sends those to the language model to generate a final answer.
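To make the mechanics concrete, here is a minimal retrieval sketch in Python. It assumes the sentence-transformers library and an in-memory list instead of a real vector database; the principle, though, is the same: embed the library once, then fetch only the closest chunks for each query.

```python
# pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# The "card catalog": one embedding per chunk of the knowledge base.
chunks = [
    "Users can create personal savings goals with a target amount and date.",
    "Group gift contributions let several users chip in toward one gift.",
    "Exit surveys run in seven languages and log the reason for leaving.",
]
chunk_embeddings = model.encode(chunks)  # shape: (n_chunks, dim)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k chunks most similar to the query."""
    q = model.encode([query])[0]
    # Cosine similarity between the query and every chunk.
    sims = chunk_embeddings @ q / (
        np.linalg.norm(chunk_embeddings, axis=1) * np.linalg.norm(q)
    )
    best = np.argsort(sims)[::-1][:top_k]
    return [chunks[i] for i in best]

# Only the retrieved chunks, not the whole library, go into the prompt.
print(retrieve("How do shared gifts work?"))
```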
How Is This Different from Just Dumping a Doc into the Chat?
Let’s break it down:
Typical chat interaction
It’s like asking your assistant to read a 100-page book from start to finish every time you have a question. Technically, all the information is “in front of them,” but it’s easy to miss something, especially if it’s in the middle. This is exactly what the “lost in the middle” issue refers to.
RAG approach
You ask your smart assistant a question, and it retrieves only the relevant pages (chunks) from different documents. It’s faster and more accurate, but it introduces a few new risks:
- Ambiguous question: You ask, “How can we make the project safer?” and the assistant brings you documents about cybersecurity, not finance.
- Mixed chunks: A single chunk might contain a mix of marketing, design, and engineering notes. That blurs the meaning, so the assistant can’t tell what the core topic is.
- Semantic gap: You ask, “How can we speed up the app?” but the document says, “Optimize API response time.” For a human, that’s obviously related. For a machine, not always.

These aren’t reasons to avoid RAG or AI altogether. Most of these risks can be mitigated with better preparation of your knowledge base and more precise prompts. So, where do you start?
Start With Three Short, Focused Documents
These three short documents will give your AI assistant just enough context to be genuinely helpful:
- Product Overview & Scenarios: A brief summary of what your product does and the core user scenarios.
- Target Audience: Your main user segments and their key needs or goals.
- Research & Experiments: Key insights from interviews, surveys, user testing, or product analytics.
Each document should focus on a single topic and ideally stay within 300–500 words. This makes it easier to search and helps ensure that each retrieved chunk is semantically clean and highly relevant.
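If you want to automate that preparation, a very simple chunker is enough to start with. The sketch below is a rough illustration rather than a production pipeline: it splits on paragraph boundaries and caps each chunk at roughly 400 words, in line with the guideline above.

```python
def chunk_document(text: str, max_words: int = 400) -> list[str]:
    """Split text into chunks of at most ~max_words, on paragraph boundaries."""
    chunks, current, count = [], [], 0
    for para in text.split("\n\n"):
        words = len(para.split())
        # Flush the current chunk before it grows past the word budget.
        if current and count + words > max_words:
            chunks.append("\n\n".join(current))
            current, count = [], 0
        current.append(para)
        count += words
    if current:
        chunks.append("\n\n".join(current))
    return chunks
```

A real pipeline would also split on headings so each chunk stays on a single topic, but even this rough cut keeps retrieved chunks semantically cleaner than one giant document.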
Language Matters
In practice, RAG works best when both the query and the knowledge base are in English. I ran a small experiment to test this assumption, trying a few different combinations:
- English prompt + English documents: Consistently accurate and relevant results.
- Non-English prompt + English documents: Quality dropped sharply. The AI struggled to match the query with the right content.
- Non-English prompt + non-English documents: The weakest performance. Even though large language models technically support multiple languages, their internal semantic maps are mostly trained in English. Vector search in other languages tends to be far less reliable.
Takeaway: If you want your AI assistant to deliver precise, meaningful responses, do your RAG work entirely in English, both the data and the queries. This advice applies specifically to RAG setups; for regular chat interactions, you’re free to use other languages. The same challenge is also highlighted in a 2024 study on multilingual retrieval.
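If you’d like to run a similar check yourself, the sketch below compares how close an English query and a non-English query land to the same English chunk. It assumes the sentence-transformers library; the model name and the Spanish wording are just illustrative.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # an English-focused model

chunk = "Optimize API response time to reduce latency."
queries = {
    "English": "How can we speed up the app?",
    "Spanish": "¿Cómo podemos acelerar la aplicación?",
}

chunk_emb = model.encode(chunk, convert_to_tensor=True)
for lang, query in queries.items():
    q_emb = model.encode(query, convert_to_tensor=True)
    # Lower similarity means the retriever is less likely to find this chunk.
    score = util.cos_sim(q_emb, chunk_emb).item()
    print(f"{lang} query similarity: {score:.3f}")
```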
From Outsider to Teammate: Giving AI the Context It Needs
Once your AI assistant has proper context, it stops acting like an outsider and starts behaving more like someone who truly understands your product. With well-structured input, it can help you spot blind spots in your thinking, challenge assumptions, and strengthen your ideas — the way a mid-level or senior designer would.
Here’s an example of a prompt that works well for me:
Your task is to perform a comparative analysis of two features: "Group gift contributions" (described in group_goals.txt) and "Personal savings goals" (described in personal_goals.txt).
The goal is to identify potential conflicts in logic, architecture, and user scenarios and suggest visual and conceptual ways to clearly separate these two features in the UI so users can easily understand the difference during actual use.
Please include:

- Possible overlaps in user goals, actions, or scenarios;
- Potential confusion if both features are launched at the same time;
- Any architectural or business-level conflicts (e.g. roles, notifications, access rights, financial logic);
- Suggestions for visual and conceptual separation: naming, color coding, separate sections, or other UI/UX techniques;
- Onboarding screens or explanatory elements that might help users understand both features.

If helpful, include a comparison table with key parameters like purpose, initiator, audience, contribution method, timing, access rights, and so on.
AI Needs Context, Not Just Prompts
If you want AI to go beyond surface-level suggestions and become a real design partner, it needs the right context. Not just more information, but better, more structured information.
Building a usable knowledge base isn’t difficult. And you don’t need a full-blown RAG system to get started. Many of these principles work even in a regular chat: well-organized content and a clear question can dramatically improve how helpful and relevant the AI’s responses are. That’s your first step in turning AI from a novelty into a practical tool in your product design workflow.
Stage 2: Prototyping and Visual Experiments
Pushback: “AI only generates obvious solutions and can’t even build a proper user flow. It’s faster to do it manually.”
That’s a fair concern. AI still performs poorly when it comes to building complete, usable screen flows. But for individual elements, especially when exploring new interaction patterns or visual ideas, it can be surprisingly effective.
For example, I needed to prototype a gamified element for a limited-time promotion. The idea was to give users a lottery ticket they could “flip” to reveal a prize. I couldn’t recreate the 3D animation I had in mind in Figma, either manually or with any available plugins. So I described the idea to Claude 4 in Figma Make, and within a few minutes, without writing a single line of code, I had exactly what I needed.
At the prototyping stage, AI can be a strong creative partner in two areas:
- UI element ideation: It can generate dozens of interactive patterns, including ones you might not think of yourself.
- Micro-animation generation: It can quickly produce polished animations that make a concept feel real, which is great for stakeholder presentations or as a handoff reference for engineers.
AI can also be applied to multi-screen prototypes, but it’s not as simple as dropping in a set of mockups and getting a fully usable flow. The bigger and more complex the project, the more fine-tuning and manual fixes are required. Where AI already works brilliantly is in focused tasks — individual screens, elements, or animations — where it can kick off the thinking process and save hours of trial and error.
A quick UI prototype of a gamified promo banner created with Claude 4 in Figma Make. No code or plugins needed.
Here’s another valuable way to use AI in design — as a stress-testing tool. Back in 2023, Google Research introduced PromptInfuser, an internal Figma plugin that allowed designers to attach prompts directly to UI elements and simulate semi-functional interactions within real mockups. Their goal wasn’t to generate new UI, but to check how well AI could operate inside existing layouts — placing content into specific containers, handling edge-case inputs, and exposing logic gaps early.
The results were striking: designers using PromptInfuser were up to 40% more effective at catching UI issues and aligning the interface with real-world input — a clear gain in design accuracy, not just speed.
That closely reflects my experience with Claude 4 and Figma Make: when AI operates within a real interface structure, rather than starting from a blank canvas, it becomes a much more reliable partner. It helps test your ideas, not just generate them.
Stage 3: Finalizing The Interface And Visual Style
Pushback: “AI can’t match our visual style. It’s easier to just do it by hand.”
This is one of the most common frustrations when using AI in design. Even if you upload your color palette, fonts, and components, the results often don’t feel like they belong in your product. They tend to be either overly decorative or overly simplified.
And this is a real limitation. In my experience, today’s models still struggle to reliably apply a design system, even if you provide a component structure or JSON files with your styles. I tried several approaches:
- Direct integration with a component library: I used Figma Make (powered by Claude) and connected our library. This was the least effective method: although the AI attempted to use components, the layouts were often broken, and the visuals were overly conservative. Other designers have run into similar issues, noting that library support in Figma Make is still limited and often unstable.
- Uploading styles as JSON: Instead of a full component library, I tried uploading only the exported styles — colors, fonts — in a JSON format; the sketch below shows what such a file might contain. The results improved: layouts looked more modern, but the AI still made mistakes in how styles were applied.
- Two-step approach (structure first, style second): What worked best was separating the process. First, I asked the AI to generate a layout and composition without any styling. Once I had a solid structure, I followed up with a request to apply the correct styles from the same JSON file. This produced the most usable result — though still far from pixel-perfect.
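To make the second approach concrete, here is a hypothetical example of the kind of stripped-down styles file I mean. The token names and values are purely illustrative, not from a real design system; the point is that a small, flat file is easier for a model to apply correctly than a full component library.

```python
# Build and export a minimal styles file. All tokens here are made up
# for illustration; replace them with your own exported design tokens.
import json

styles = {
    "colors": {
        "primary": "#2A5BD7",
        "surface": "#FFFFFF",
        "textPrimary": "#1A1A2E",
        "textSecondary": "#6B7280",
    },
    "typography": {
        "heading": {"font": "Inter", "size": 24, "weight": 600},
        "body": {"font": "Inter", "size": 16, "weight": 400},
    },
    "radius": {"card": 12, "button": 8},
}

# This flat styles.json is what gets uploaded alongside the prompt.
with open("styles.json", "w") as f:
    json.dump(styles, f, indent=2)
```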

So yes, AI still can’t help you finalize your UI. It doesn’t replace hand-crafted design work. But it’s very useful in other ways:
- Quickly creating a visual concept for discussion.
- Generating “what if” alternatives to existing mockups.
- Exploring how your interface might look in a different style or direction.
- Acting as a second pair of eyes by giving feedback, pointing out inconsistencies or overlooked issues you might miss when tired or too deep in the work.
Stage 4: Product Feedback And Analytics: AI As A Thinking Exosuit
Product designers have come a long way. We used to create interfaces in Photoshop based on predefined specs. Then we delved deeper into UX: mapping user flows, conducting interviews, and understanding user behavior. Now, with AI, we gain access to yet another level: data analysis, which used to be the exclusive domain of product managers and analysts.
As Vitaly Friedman rightly pointed out in one of his columns, trying to replace real UX interviews with AI can lead to false conclusions, as models tend to generate an average experience, not a real one. The strength of AI isn’t in inventing data but in processing it at scale.
Let me give a real example. We launched an exit survey for users who were leaving our service. Within a week, we collected over 30,000 responses across seven languages.
Simply counting the percentages for each of the five predefined reasons wasn’t enough. I wanted to know:
- Are there specific times of day when users churn more?
- Do the reasons differ by region?
- Is there a correlation between user exits and system load?
The real challenge was… figuring out what cuts and angles were even worth exploring. The entire technical process, from analysis to visualizations, was done “for me” by Gemini, working inside Google Sheets. This task took me about two hours in total. Without AI, not only would it have taken much longer, but I probably wouldn’t have been able to reach that level of insight on my own at all.
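For designers who prefer a notebook over Sheets, the same cuts can be sketched in pandas. The file name and columns below are hypothetical, purely to show the shape of the analysis:

```python
# pip install pandas
# Assumed export columns: timestamp, region, reason, system_load.
import pandas as pd

df = pd.read_csv("exit_survey.csv", parse_dates=["timestamp"])

# 1. Are there specific times of day when users churn more?
exits_by_hour = df.groupby(df["timestamp"].dt.hour)["reason"].count()

# 2. Do the reasons differ by region? Share of each reason per region.
reasons_by_region = pd.crosstab(df["region"], df["reason"], normalize="index")

# 3. Is there a correlation between exits per hour and average system load?
hourly = df.set_index("timestamp").resample("h").agg(
    exits=("reason", "count"),
    load=("system_load", "mean"),
)
print(hourly["exits"].corr(hourly["load"]))
```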

A few practical notes: Working with large data sets is still challenging for models without strong reasoning capabilities. In my experiments, I used Gemini embedded in Google Sheets and cross-checked the results using ChatGPT o3. Other models, including the standalone Gemini 2.5 Pro, often produced incorrect outputs or simply refused to complete the task.
AI Is Not An Autopilot But A Co-Pilot
AI in design is only as good as the questions you ask it. It doesn’t do the work for you. It doesn’t replace your thinking. But it helps you move faster, explore more options, validate ideas, and focus on the hard parts instead of burning time on repetitive ones. Sometimes it’s still faster to design things by hand. Sometimes it makes more sense to delegate to a junior designer.
But increasingly, AI is becoming the partner that suggests, sharpens, and accelerates. Don’t wait to build the perfect AI workflow. Start small. That might be the first real step in turning AI from a curiosity into a trusted tool in your product design process.
Let’s Summarize
- If you just paste a full doc into chat, the model often misses important points, especially things buried in the middle. That’s the “lost in the middle” problem.
- The RAG approach helps by pulling only the most relevant pieces from your documents. So responses are faster, more accurate, and grounded in real context.
- Clear, focused prompts work better. Narrow the scope, define the output, and use familiar terms to help the model stay on track.
- A well-structured knowledge base makes a big difference. Organizing your content into short, topic-specific docs helps reduce noise and keeps answers sharp.
- Use English for both your prompts and your documents. Even multilingual models are most reliable when working in English, especially for retrieval.
- Most importantly: treat AI as a creative partner. It won’t replace your skills, but it can spark ideas, catch issues, and speed up the tedious parts.
Further Reading
- “AI-assisted Design Workflows: How UX Teams Move Faster Without Sacrificing Quality”, Cindy Brummer
This piece is a perfect prequel to my article. It explains how to start integrating AI into your design process, how to structure your workflow, and which tasks AI can reasonably take on — before you dive into RAG or idea generation.
- “8 essential tips for using Figma Make”, Alexia Danton
While this article focuses on Figma Make, the recommendations are broadly applicable. It offers practical advice that will make your work with AI smoother, especially if you’re experimenting with visual tools and structured prompting.
- “What Is Retrieval-Augmented Generation aka RAG”, Rick Merritt
If you want to go deeper into how RAG actually works, this is a great starting point. It breaks down key concepts like vector search and retrieval in plain terms and explains why these methods often outperform long prompts alone.
