
I’d vibe code any day, but I only trust these models with UI design

Vibe coding has changed how I build things. I can describe an app, website, or program in plain text and watch an AI assemble the project bit by bit to hand me a working prototype in no time. There are certain projects I won’t vibe code, but for just about everything else, this is a great way to speed up development.

However, speeding up development and delivering a finished product are two different things. Sure, most AI assistants will be able to spin up code for whatever app or program you want, but not all of them can make it look good. Not every AI model deserves to touch your UI, and after burning enough hours fixing ugly, soulless interfaces, I only trust a few with UI design.


The problem with vibe-coded UI

Basic screens, broken UX, and inconsistent logic

Most vibe-coded apps look like they were built by an underpaid developer working well beyond their office hours. The same fonts and icons, a blue-to-purple gradient background, centered cards with way too much padding: these are the tell-tale signs of a vibe-coded UI, and they show up across projects. The AI hasn’t done anything wrong per se, and the UI in most cases turns out to be perfectly functional. It has simply defaulted to every popular pattern baked into its training data.

Vibe coding tools also focus more on whether something works than on whether it looks right. A button can be functional while also feeling like it floats slightly above the rest of the page. Spacing can be valid in the code while still looking off to the human eye. None of it is technically wrong, and it won’t fail any tests, but your users will feel it the second they interact with your project.

You see, designing a good UI isn’t just code generation. It’s a combination of aesthetic decisions about hierarchy, rhythm, contrast, and feel. When you ask an AI to build a dashboard, you’re not really asking it to write divs and flex containers. You’re asking it to have an innate design sense, and most AI models aren’t there yet.

Some models will give you something that looks fine in screenshots, but will break the second it’s opened on another screen size. Better models will make decisions like choosing appropriate layering for dark mode, maintaining consistent corner radius across components, and picking color relationships that make sense to the human brain. This difference won’t show up on a benchmark test, but it becomes obvious when you look at the finished product.
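To make that concrete, here’s a minimal, hand-written sketch of what those decisions look like in a React component. It’s my own illustration, not output from any of the models discussed here, and every token name and value in it is made up for the example. The point is simply that radius, spacing, and colors come from one shared set of tokens instead of being re-guessed for each component.

import * as React from "react";

// Shared tokens (illustrative values): every component reads its radius,
// spacing, and colors from here, which is what keeps corner radii and
// dark-mode layering consistent across a dashboard.
const tokens = {
  radius: 12,
  space: { sm: 8, md: 16 },
  color: {
    surface: "#161B22", // one layer above the page background
    border: "#30363D",
    text: "#E6EDF3",
    muted: "#8B949E",
  },
};

// A single KPI card; a real dashboard would render a row of these.
export function KpiCard({ label, value }: { label: string; value: string }) {
  return (
    <div
      style={{
        background: tokens.color.surface,
        border: `1px solid ${tokens.color.border}`,
        borderRadius: tokens.radius,
        padding: tokens.space.md,
        display: "flex",
        flexDirection: "column",
        gap: `${tokens.space.sm}px`, // vertical rhythm from the same scale as the padding
        fontFamily: "Inter, system-ui, sans-serif",
      }}
    >
      <span style={{ color: tokens.color.muted, fontSize: 13 }}>{label}</span>
      <span style={{ color: tokens.color.text, fontSize: 28, fontWeight: 600 }}>
        {value}
      </span>
    </div>
  );
}

A model with good design sense effectively does this on its own; a weaker one hard-codes slightly different values into every component, which is where the floating buttons and uneven spacing come from.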

Only a few models actually get UI right

The ones that understand layout, spacing, and usability

For a simple demonstration of how different models handle UI, I gave Claude Sonnet 4.6, Gemini, and ChatGPT the following identical prompt:

Build a SaaS analytics dashboard in React with a dark mode UI. Include a sidebar nav, a KPI card row, a line chart, and a recent activity feed. Use a modern design system with consistent spacing, typography, and color relationships.

Not only was Claude’s code free of any bugs and the UI design the most detailed, but it also incorporated animations and small UX improvements most designers wouldn’t consider until they reached the final stage of development. Gemini and ChatGPT both provided rather generic-looking dashboards, but Gemini did remember to add some UX polish to make it feel like a finished product.

Claude is where I’ve landed for most of my UI work. Compared to Gemini and ChatGPT, it takes a more ambitious approach to layouts, makes better style choices across the board, and is more thoughtful about layering and contrast. The result is an interface that actually feels designed rather than stitched together from a pile of borrowed patterns.

Claude models, both Sonnet 4.6 and Opus 4.7, ask fewer clarifying questions and don’t resort to safe defaults as much. They don’t just dump a generic layout on you either. These models actually use the context you provide, pick up naming conventions and architectural patterns in your code, and try to match the design language you’ve already established. They’re also great at sticking to a design system or style guide if you provide one.

Claude
Developer: Anthropic PBC
Price model: Free, subscription available


Google’s new agentic AI might be able to do your chores for you, but it’s not as good at making UIs unless you come prepared. It likes to come up with three-column layouts and chat-style components that look great, and it’s genuinely hard to beat when you’ve got enough context for the kind of UI you want. If you’ve got multiple screenshots, reference images, or existing designs you want to improve, Gemini can do a great job.

However, if you’re looking to build entire UIs from scratch without strong visual reference material, Gemini can be quite inconsistent. It often gets tripped up by edge cases, and there’s a big gap between how confident it sounds and how well the output holds up in regular use.

Gemini
OS: Android
Developer: Google
Price model: Subscription


Last but not least, ChatGPT will get the job done, but don’t expect ground-breaking designs from it. If you’re working in a tool like Cursor and need fast, immediately runnable component output, ChatGPT will stay disciplined within the spec you give it. The tradeoff is that it ends up generating interfaces that are clean and functional, but rarely inspired. And if your design or project gets a bit too complex, it tends to lose track of context midway.

ChatGPT
OS: Android, iOS, Web
Developer: OpenAI
Price model: Free with optional subscription


This is the workflow that finally clicked

Where AI helps—and where I take over

Claude Code running in Windows 11. Credit: Yadullah Abidi / MakeUseOf

The models matter, but how you use them can also make a difference. If you’re simply telling an AI model to design a dashboard, you’re giving it an instruction to follow as it sees fit, not a design brief. You need to know your typography, color palette, and the general feeling you want in your UI before you even start prompting.

It’s best to first come up with a general idea of what you want your UI to look like and create a rough design brief as a JSON or Markdown file. Then, when you’re working on your project, you can hand these files off to the model you’re working with and ask it to stay within the defined bounds. It sounds like extra overhead, but it can be the difference between starting from scratch every session and having the AI actually remember what your interface is supposed to look like.
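For illustration, a rough design brief might look something like the JSON below. The file name and every value here are placeholders I’ve made up for the example; swap in your own typography, palette, and rules.

design-brief.json

{
  "product": "Example analytics dashboard (placeholder values)",
  "tone": "calm, professional, dense but not cramped",
  "typography": {
    "fontFamily": "Inter",
    "sizeScale": [13, 16, 20, 28],
    "headingWeight": 600
  },
  "color": {
    "mode": "dark",
    "background": "#0D1117",
    "surface": "#161B22",
    "accent": "#3B82F6",
    "text": "#E6EDF3",
    "muted": "#8B949E"
  },
  "layout": {
    "spacingScale": [4, 8, 16, 24, 32],
    "cornerRadius": 12,
    "maxContentWidth": 1280
  },
  "rules": [
    "Only use colors and spacing values defined in this file",
    "No gradients",
    "Sidebar navigation, not a top bar"
  ]
}

Hand the model this file at the start of each session and ask it to treat the values as hard constraints rather than suggestions. The goal is that a card generated on day ten uses the same radius, spacing, and palette as one generated on day one.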

So which model should you trust?

Picking the right tool for UI that actually works

If I had to pick one on the basis of pure UI design quality, it’s Claude. You’re good to go with either Sonnet 4.6 or Opus 4.7. Both models produce great results with slight variations, so it’s best if you experiment with both and pick what works best for you. Claude also has the best combination of aesthetic judgment, instruction-following, and contextual memory of the three models I use. Apart from that, Gemini works best if you’ve got plenty of reference material on hand, and ChatGPT is excellent for fast iteration workflows.


What you need to understand is that vibe coding is only as good as the design decisions you bring to the table before the AI starts generating code. You can’t vibe your way to a great UI by letting the model guess what you want. You need to have a clear visual direction, and you need to communicate it clearly to the model you’re working with. AI can do a lot for you, but it can’t think in your place.
