<rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Design on Smashing Magazine — For Web Designers And Developers</title><link>https://www.smashingmagazine.com/category/design/index.xml</link><description>Recent content in Design on Smashing Magazine — For Web Designers And Developers</description><generator>Hugo -- gohugo.io</generator><language>en-us</language><lastBuildDate>Tue, 14 Oct 2025 04:02:41 +0000</lastBuildDate><item><author>Frederick O’Brien</author><title>The Grayscale Problem</title><link>https://www.smashingmagazine.com/2025/10/the-grayscale-problem/</link><pubDate>Mon, 13 Oct 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/10/the-grayscale-problem/</guid><description>From A/B tests to AI slop, the modern web is bleeding out its colour. Standardized, templated, and overoptimized, it’s starting to feel like a digital Levittown. But it doesn’t have to be.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/10/the-grayscale-problem/" />
              <title>The Grayscale Problem</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>The Grayscale Problem</h1>
                  
                    
                    <address>Frederick O’Brien</address>
                  
                  <time datetime="2025-10-13T10:00:00&#43;00:00" class="op-published">2025-10-13T10:00:00+00:00</time>
                  <time datetime="2025-10-14T04:02:41&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>Last year, a study found that <a href="https://www.forbes.com/sites/kbrauer/2024/07/16/where-have-all-the-colorful-cars-gone-study-shows-them-vanishing/">cars are steadily getting less colourful</a>. In the US, around 80% of cars are now black, white, gray, or silver, up from 60% in 2004. This trend has been attributed to cost savings and consumer preferences. Whatever the reasons, the result is hard to deny: a big part of daily life isn’t as colourful as it used to be.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/the-grayscale-problem/1-car-color-market-share.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="420"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/1-car-color-market-share.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/the-grayscale-problem/1-car-color-market-share.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/the-grayscale-problem/1-car-color-market-share.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/the-grayscale-problem/1-car-color-market-share.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/the-grayscale-problem/1-car-color-market-share.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/1-car-color-market-share.png"
			
			sizes="100vw"
			alt="Chart of car colour market share in the US, with white, black, gray, and silver dominating"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/the-grayscale-problem/1-car-color-market-share.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The colourfulness of mass consumer products is hardly the bellwether for how vibrant life is as a whole, but the study captures a trend a lot of us recognise &mdash; offline and on. From colour to design to public discourse, a lot of life is getting less varied, more grayscale.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/the-grayscale-problem/2-grayscale-car-models.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="580"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/2-grayscale-car-models.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/the-grayscale-problem/2-grayscale-car-models.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/the-grayscale-problem/2-grayscale-car-models.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/the-grayscale-problem/2-grayscale-car-models.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/the-grayscale-problem/2-grayscale-car-models.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/2-grayscale-car-models.png"
			
			sizes="100vw"
			alt="Grid of car models shown in grayscale colours"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/the-grayscale-problem/2-grayscale-car-models.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The web is caught in the same current. There is plenty right with it &mdash; it retains plenty of its founding principles &mdash; but its state is not healthy. From AI slop to shoddy service providers to enshittification, the digital world faces its own <strong>grayscale problem</strong>.</p>

<p>This bears talking about. One of life’s great fallacies is that things get better over time on their own. They can, but it’s certainly not a given. I don’t think the moral arc of the universe bends towards justice on its own; I think it bends wherever it is dragged, kicking and screaming, by those with the will and the means to do so.</p>

<p>Much of the modern web, and the forces of optimisation and standardisation that drive it, bear an uncanny resemblance to the trend of car colours. Processes like market research and A/B testing &mdash; <a href="https://hbr.org/2017/06/a-refresher-on-ab-testing">the process by which two options are compared to see which ‘performs’ better on clickthrough, engagement, etc.</a> &mdash; have their value, but they don’t lend themselves to particularly stimulating design choices.</p>

<p>The spirit of free expression that made the formative years of the internet so exciting &mdash; think GeoCities, personal blogging, and so on &mdash; is on the slide.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/the-grayscale-problem/3-geocities.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="416"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/3-geocities.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/the-grayscale-problem/3-geocities.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/the-grayscale-problem/3-geocities.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/the-grayscale-problem/3-geocities.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/the-grayscale-problem/3-geocities.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/3-geocities.png"
			
			sizes="100vw"
			alt="Screenshot from the Geocities Gallery"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Image source: <a href='https://geocities.restorativland.org/'>The Geocities Gallery</a>. (<a href='https://files.smashing.media/articles/the-grayscale-problem/3-geocities.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The ongoing transition to a more decentralised, privacy-aware <a href="https://aws.amazon.com/what-is/web3/">Web3</a> holds some promise. Two-thirds of the world’s population now has online access &mdash; <a href="https://www.weforum.org/stories/2024/01/digital-divide-internet-access-online-fwa/">though that still leaves plenty of work to do</a> &mdash; with a wealth of platforms allowing billions of people to connect. The dream of a digital world that is open, connected, and flat endures, but is tainted.</p>


<h2 id="monopolies">Monopolies</h2>

<p>One of the main sources of concern for me is that although more people are online than ever, they are concentrating on fewer and fewer sites. A study <a href="https://www.sciencealert.com/we-re-going-to-fewer-and-fewer-websites-and-that-could-be-a-problem">published in 2021</a> found that <strong>activity is concentrated in a handful of websites</strong>. Think Google, Amazon, Facebook, Instagram, and, more recently, ChatGPT:</p>

<blockquote>“So, while there is still growth in the functions, features, and applications offered on the web, the number of entities providing these functions is shrinking. [...] The authority, influence, and visibility of the top 1,000 global websites (as measured by network centrality or PageRank) is growing every month, at the expense of all other sites.”</blockquote>

<p>Monopolies by nature <strong>reduce variance</strong>, both through their domination of the market and through their (understandable, in fairness) internal preference for consistency. And, let’s be frank, they have a vested interest in crushing any potential upstarts.</p>

<ul>
<li>“<a href="https://www.smashingmagazine.com/2020/05/readability-algorithms-tools-targets/">Readability Algorithms Should Be Tools, Not Targets</a>”</li>
<li>“<a href="https://www.smashingmagazine.com/2021/01/towards-ad-free-web-diversifying-online-economy/">Towards An Ad-Free Web: Diversifying The Online Economy</a>”</li>
</ul>

<p>Dominant websites often fall victim to what I like to call <strong>Internet Explorer Syndrome</strong>, where their dominance breeds a certain amount of complacency. Why improve your <a href="https://www.smashingmagazine.com/2025/05/what-zen-art-motorcycle-maintenance-teach-web-design/">quality</a> when you’re sitting on 90% market share? No wonder <a href="https://www.standard.co.uk/news/tech/google-search-worse-quality-spam-study-b1133559.html">the likes of Google are getting worse</a>.</p>

<p>The most immediate sign of this is obviously how sites are designed and how they look. A lot of the big players look an awful lot like each other. Even personal websites are built atop third-party website builders. Millions of people wind up using the same handful of templates, and that’s if they have their own website at all. On social media, we are little more than a profile picture and a pithy tagline. The rest is boilerplate.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/the-grayscale-problem/4-grayscale-minimalist-layout.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="728"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/4-grayscale-minimalist-layout.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/the-grayscale-problem/4-grayscale-minimalist-layout.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/the-grayscale-problem/4-grayscale-minimalist-layout.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/the-grayscale-problem/4-grayscale-minimalist-layout.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/the-grayscale-problem/4-grayscale-minimalist-layout.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/4-grayscale-minimalist-layout.png"
			
			sizes="100vw"
			alt="Grayscale minimalist layout example"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/the-grayscale-problem/4-grayscale-minimalist-layout.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Should there be sleek, minimalist, ‘grayscale’ design systems and websites? Absolutely. But there should be colourful, kooky ones too, and if anything, they’re fading away. Do we really want to spend our online lives in the digital equivalent of Levittowns? Even logos are contriving to be less eye-catching. It feels like a matter of time before every major logo is a circle in a pastel colour.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/the-grayscale-problem/5-levittown.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="508"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/5-levittown.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/the-grayscale-problem/5-levittown.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/the-grayscale-problem/5-levittown.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/the-grayscale-problem/5-levittown.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/the-grayscale-problem/5-levittown.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/5-levittown.png"
			
			sizes="100vw"
			alt="Rows of near-identical houses in Levittown"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/the-grayscale-problem/5-levittown.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The arrival of Artificial Intelligence into our everyday lives (and a decent chunk of the digital services we use) has put all of this into overdrive. Amalgamating &mdash; and hallucinating from &mdash; content that was already trending towards a perfect average, it is grayscale in its purest form.</p>

<p>Mix all the colours together, and what do you get? A muddy gray gloop.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/the-grayscale-problem/6-mix-colors-muddy-gray-gloop.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="424"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/6-mix-colors-muddy-gray-gloop.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/the-grayscale-problem/6-mix-colors-muddy-gray-gloop.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/the-grayscale-problem/6-mix-colors-muddy-gray-gloop.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/the-grayscale-problem/6-mix-colors-muddy-gray-gloop.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/the-grayscale-problem/6-mix-colors-muddy-gray-gloop.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/6-mix-colors-muddy-gray-gloop.png"
			
			sizes="100vw"
			alt="Colors mixed together into a muddy gray gloop"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/the-grayscale-problem/6-mix-colors-muddy-gray-gloop.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>I’m not railing against best practice. A lot of conventions have become the standard for good reason. One could just as easily shake their fist at the sky and wonder why all newspapers look the same, or all books. I hope the difference here is clear, though.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aThe%20web%20is%20a%20flexible%20enough%20domain%20that%20I%20think%20it%20belongs%20in%20the%20realm%20of%20architecture.%20A%20city%20where%20all%20buildings%20look%20alike%20has%20a%20soul-crushing%20quality%20about%20it.%20The%20same%20is%20true,%20I%20think,%20of%20the%20web.%0a&url=https://smashingmagazine.com%2f2025%2f10%2fthe-grayscale-problem%2f">
      
The web is a flexible enough domain that I think it belongs in the realm of architecture. A city where all buildings look alike has a soul-crushing quality about it. The same is true, I think, of the web.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>In the Oscar Wilde play <em><a href="https://www.gutenberg.org/files/790/790-h/790-h.htm">Lady Windermere’s Fan</a></em>, a character quips that a cynic <em>“knows the price of everything and the value of nothing.”</em> In fairness, another quips back that a sentimentalist <em>“sees an absurd value in everything, and doesn’t know the market price of any single thing.”</em></p>

<p>The sweet spot is somewhere in between. Structure goes a long way, but life needs a bit of variety too.</p>

<p>So, how do we go about bringing that variety? We probably shouldn’t hold our breath waiting for the big players to lead the way. They have the most to lose, after all. Why risk being colourful or dynamic if it impacts the bottom line?</p>

<p>We, the citizens of the web, have more power than we realise. This is the web, remember, a place where if you can imagine it, odds are you can make it. And at zero cost. No materials to buy and ship, no shareholders to appease. A place as flexible &mdash; and limitless &mdash; as the web has no business being boring.</p>

<p>There are plenty of ways, big and small, of keeping this place colourful. Whether our digital footprints are on third-party websites or ones we build ourselves, we needn’t toe the line.</p>

<p><strong>Colour</strong> seems an appropriate place to start. When given the choice, try something audacious rather than safe. The worst that can happen is that it doesn’t work. It’s not like the sunk cost of painting a room; if you don’t like the palette, you simply change the hex codes. The same is true of <a href="https://www.smashingmagazine.com/2023/03/free-fonts-interface-designers/">fonts</a>, <a href="https://www.smashingmagazine.com/2021/08/open-source-icons/">icons</a>, and other building blocks of the web.</p>
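<p>That “simply change the hex codes” idea can be made concrete by treating a palette as data and serialising it to CSS custom properties. This is a minimal sketch, not the article’s own code; the property names (<code>--bg</code>, <code>--ink</code>, <code>--accent</code>) and the palettes are illustrative.</p>

```javascript
// Sketch: a palette as plain data, serialised to a CSS :root rule.
// Swapping the whole look of a site becomes a one-line change.
function paletteToCss(palette) {
  const declarations = Object.entries(palette)
    .map(([name, hex]) => `  --${name}: ${hex};`)
    .join("\n");
  return `:root {\n${declarations}\n}`;
}

// A safe palette and a more audacious one — trying the bold option
// costs nothing but editing hex codes.
const safe = { bg: "#ffffff", ink: "#222222", accent: "#777777" };
const bold = { bg: "#fff8e7", ink: "#1a1a40", accent: "#e63946" };

console.log(paletteToCss(bold));

// In a browser, individual properties can be swapped live:
//   document.documentElement.style.setProperty("--accent", bold.accent);
```

<p>Because components reference <code>var(--accent)</code> rather than a literal hex code, the experiment is fully reversible: if the audacious palette doesn’t work, you change three lines, not three hundred.</p>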

<p>As an example, a couple of friends and I listen to and review albums occasionally as a hobby. On the website, the palette of each review page reflects the album artwork:</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/the-grayscale-problem/8-audioxide-screenshot.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="800"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/8-audioxide-screenshot.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/the-grayscale-problem/8-audioxide-screenshot.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/the-grayscale-problem/8-audioxide-screenshot.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/the-grayscale-problem/8-audioxide-screenshot.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/the-grayscale-problem/8-audioxide-screenshot.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/the-grayscale-problem/8-audioxide-screenshot.png"
			
			sizes="100vw"
			alt="Screenshot of an Audioxide review page with a palette matching the album artwork"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/the-grayscale-problem/8-audioxide-screenshot.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>I couldn’t tell you if reviews ‘perform’ better or worse than if they had a grayscale palette, because I don’t care. I think it’s a lot nicer to look at. And for those wondering, yes, I have tried to make every page meet <a href="https://www.w3.org/WAI/WCAG2AA-Conformance">AA Web Accessibility standards</a>. Vibrant and accessible aren’t mutually exclusive.</p>
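<p>“Vibrant and accessible aren’t mutually exclusive” is checkable: WCAG 2.x defines a contrast ratio between any two colours, and AA asks for at least 4.5:1 for normal body text (3:1 for large text). This is a sketch of that standard calculation, so you can verify a kooky palette passes before shipping it.</p>

```javascript
// WCAG 2.x relative luminance of a #rrggbb colour.
function relativeLuminance(hex) {
  const channels = [1, 3, 5].map((i) => {
    const c = parseInt(hex.slice(i, i + 2), 16) / 255;
    // Linearise the sRGB channel value, per the WCAG definition.
    return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
  });
  return 0.2126 * channels[0] + 0.7152 * channels[1] + 0.0722 * channels[2];
}

// Contrast ratio between two colours: (lighter + 0.05) / (darker + 0.05).
function contrastRatio(hexA, hexB) {
  const [dark, light] = [relativeLuminance(hexA), relativeLuminance(hexB)]
    .sort((a, b) => a - b);
  return (light + 0.05) / (dark + 0.05);
}

// Black on white is the maximum possible ratio, 21:1.
console.log(contrastRatio("#000000", "#ffffff").toFixed(1)); // "21.0"
```

<p>Running a candidate palette through this check takes seconds, and it’s often a pleasant surprise how many saturated colour pairs clear the AA bar comfortably.</p>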

<p>Another great way of bringing vibrancy to the web is a <strong>degree of randomisation</strong>. Bruno Simon of <a href="https://threejs-journey.com/">Three.js Journey</a> and <a href="https://bruno-simon.com/">awesome portfolio</a> fame weaves random generation into a lot of his projects, and the results are gorgeous. What’s more, they feel familiar, natural, because life is full of wildcards.</p>

<figure><a href="https://files.smashing.media/articles/the-grayscale-problem/7-3d-model.gif"><img src="https://files.smashing.media/articles/the-grayscale-problem/7-3d-model.gif" width="800" height="520" alt="3D model" /></a></figure>

<p>This needn’t be in fancy 3D models. You could lightly rotate images to create a more informal, photo album mood, or chuck in the occasional random link in a list of recommended articles, just to shake things up.</p>
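<p>Both of those light-touch ideas fit in a few lines. This sketch is illustrative, not from the article: the tilt range and the notion of an “archive wildcard” slotted into a recommended list are made-up parameters.</p>

```javascript
// A small random tilt, in degrees, for an informal photo-album feel.
function randomTilt(maxDegrees = 3) {
  // Uniform value in [-maxDegrees, +maxDegrees].
  return (Math.random() * 2 - 1) * maxDegrees;
}

// Replace one entry in a recommended-articles list with a random
// pick from the archive, just to shake things up.
function pickWildcard(recommended, archive) {
  const wildcard = archive[Math.floor(Math.random() * archive.length)];
  const slot = Math.floor(Math.random() * recommended.length);
  return recommended.map((link, i) => (i === slot ? wildcard : link));
}

const links = pickWildcard(
  ["/new-article-a", "/new-article-b", "/new-article-c"],
  ["/old-gem-1", "/old-gem-2"]
);
console.log(links); // three links, one swapped for an archive wildcard

// In a browser, the tilt could then be applied per image:
//   img.style.transform = `rotate(${randomTilt()}deg)`;
```

<p>The point isn’t the mechanism; it’s that a couple of cheap wildcards make a page feel a little less templated on every visit.</p>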

<p>In a lot of ways, it boils down to an attitude of just trying stuff out. Make your own font, give the site a sepia filter, and add that easter egg you keep thinking about. Just because someone, somewhere has already done it doesn’t mean you can’t do it your own way. And who knows, maybe your way stumbles onto someplace wholly new.</p>

<p>I’m wary of being too prescriptive. I don’t have the keys to a colourful web. No one person does. A vibrant community is the sum total of its people. What keeps things interesting is individuals trying wacky ideas and putting them out there. Expression for expression’s sake. Experimentation for experimentation’s sake. Tinkering for tinkering’s sake.</p>

<p>As users, there’s also plenty of room to be adventurous and try out <a href="https://openalternative.co/">open source alternatives to the software monopolies</a> that shape so much of today’s Web. Being active in the communities that shape those tools helps to sustain <strong>a more open, collaborative digital world</strong>.</p>

<p>Although there are lessons to be taken from it, we won’t get a more colourful web by idealising the past or pining to get back to the ‘90s. Nor is there any point in resisting new technologies. AI is here; the choice is whether we use it or it uses us. We must have the courage to carry forward what still holds true, drop what doesn’t, and explore new ideas with a spirit of play.</p>


<p>Here are a few more <em>Smashing</em> articles in that spirit:</p>

<ul>
<li>“<a href="https://www.smashingmagazine.com/2020/11/playfulness-code-supercharge-fun-learning/">Playfulness In Code: Supercharge Your Learning By Having Fun</a>” by Jhey Tompkins</li>
<li>“<a href="https://www.smashingmagazine.com/2025/08/psychology-color-ux-design-digital-products/">The Psychology Of Color In UX And Digital Products</a>” by Rodolpho Henrique</li>
<li>“<a href="https://www.smashingmagazine.com/2020/12/creativity-technology/">Creativity In A World Of Technology: Does It Exist?</a>” By Maggie Mackenzie</li>
<li>“<a href="https://www.smashingmagazine.com/2025/05/what-zen-art-motorcycle-maintenance-teach-web-design/">What Zen And The Art Of Motorcycle Maintenance Can Teach Us About Web Design</a>”</li>
<li>“<a href="https://www.smashingmagazine.com/2025/01/ode-to-side-project-time/">An Ode To Side Project Time</a>”</li>
</ul>

<p>I do think there’s a broader discussion to be had about the extent to which A/B tests, bottom lines, and focus groups seem to dictate much of how the modern web looks and feels. With sites being squeezed tighter and tighter by dwindling advertising revenues, and <a href="https://www.forbes.com/sites/torconstantino/2025/04/14/the-60-problem---how-ai-search-is-draining-your-traffic/">AI answers muscling in on search traffic</a>, the corporate entities behind larger websites can’t justify doing anything other than what is safe and proven, for fear of shrinking their slice of the pie.</p>

<p>Lest we forget, though, most of the web isn’t beholden to those types of pressure. From pet projects to wikis to forums to community news outlets to all manner of other things, there are countless reasons for websites to exist, and they needn’t take design cues from the handful of sites slugging it out at the top.</p>

<p>Connected with this is the dire need for <a href="https://tcg.uis.unesco.org/wp-content/uploads/sites/4/2021/08/Metadata-4.4.2.pdf">digital literacy</a> (PDF) &mdash; ‘the confident and critical use of a full range of digital technologies for information, communication and basic problem-solving in all aspects of life.’ For as long as using third-party platforms is a necessity rather than a choice, the needle’s only going to move so much.</p>

<p>There’s a reason why <a href="https://www.bbc.co.uk/news/technology-67105983">Minecraft is the world’s best-selling game</a>. People are creative. When given the tools &mdash; and the opportunity &mdash; that creativity will manifest in weird and wonderful ways. That game is a lot of things, but gray ain’t one of them.</p>

<p>The web has all of that flexibility and more. It is a <strong>manifestation of imagination</strong>. Imagination trends towards colour, not grayness. It doesn’t always feel like it, but where the internet goes is decided by its citizens. The internet is ours. If we want to, we can make it technicolor.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Andy Clarke</author><title>Smashing Animations Part 5: Building Adaptive SVGs With `&lt;symbol>`, `&lt;use>`, And CSS Media Queries</title><link>https://www.smashingmagazine.com/2025/10/smashing-animations-part-5-building-adaptive-svgs/</link><pubDate>Mon, 06 Oct 2025 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/10/smashing-animations-part-5-building-adaptive-svgs/</guid><description>SVGs, they scale, yes, but how else can you make them adapt even better to several screen sizes? Web design pioneer &lt;a href="https://stuffandnonsense.co.uk">Andy Clarke&lt;/a> explains how he builds what he calls “adaptive SVGs” using &lt;code>&amp;lt;symbol&amp;gt;&lt;/code>, &lt;code>&amp;lt;use&amp;gt;&lt;/code>, and CSS Media Queries.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/10/smashing-animations-part-5-building-adaptive-svgs/" />
              <title>Smashing Animations Part 5: Building Adaptive SVGs With `&lt;symbol&gt;`, `&lt;use&gt;`, And CSS Media Queries</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Smashing Animations Part 5: Building Adaptive SVGs With `&lt;symbol&gt;`, `&lt;use&gt;`, And CSS Media Queries</h1>
                  
                    
                    <address>Andy Clarke</address>
                  
                  <time datetime="2025-10-06T13:00:00&#43;00:00" class="op-published">2025-10-06T13:00:00+00:00</time>
                  <time datetime="2025-10-06T13:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>I’ve written quite a lot recently about how I <a href="https://www.smashingmagazine.com/2025/06/smashing-animations-part-4-optimising-svgs/">prepare and optimise</a> SVG code to use as static graphics or in <a href="https://www.smashingmagazine.com/2025/05/smashing-animations-part-1-classic-cartoons-inspire-css/">animations</a>. I love working with SVG, but there’s always been something about them that bugs me.</p>

<p>To illustrate how I build adaptive SVGs, I’ve selected an episode of <em>The Quick Draw McGraw Show</em> called “<a href="https://yowpyowp.blogspot.com/2012/06/quick-draw-mcgraw-bow-wow-bandit.html">Bow Wow Bandit</a>,” first broadcast in 1959.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/1-quick-draw-mcgraw-show.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/1-quick-draw-mcgraw-show.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/1-quick-draw-mcgraw-show.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/1-quick-draw-mcgraw-show.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/1-quick-draw-mcgraw-show.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/1-quick-draw-mcgraw-show.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/1-quick-draw-mcgraw-show.png"
			
			sizes="100vw"
			alt="Bow Wow Bandit illustration"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Quick Draw McGraw Show © Warner Bros. Entertainment Inc. (<a href='https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/1-quick-draw-mcgraw-show.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>In it, Quick Draw McGraw enlists his bloodhound Snuffles to rescue his sidekick Baba Looey. Like most Hanna-Barbera title cards of the period, the artwork was made by Lawrence (Art) Goble.</p>

<div class="refs">
  <ul><li><a href="https://www.smashingmagazine.com/2025/05/smashing-animations-part-1-classic-cartoons-inspire-css/">Smashing Animations Part 1: How Classic Cartoons Inspire Modern CSS</a></li><li><a href="https://www.smashingmagazine.com/2025/05/smashing-animations-part-2-css-masking-add-extra-dimension/">Smashing Animations Part 2: How CSS Masking Can Add An Extra Dimension</a></li><li><a href="https://www.smashingmagazine.com/2025/05/smashing-animations-part-3-smil-not-dead/">Smashing Animations Part 3: SMIL’s Not Dead Baby, SMIL’s Not Dead</a></li><li><a href="https://www.smashingmagazine.com/2025/06/smashing-animations-part-4-optimising-svgs/">Smashing Animations Part 4: Optimising SVGs</a></li></ul>
</div>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/2-andy-clarke-bow-wow-bandit-toon-title-recreation.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/2-andy-clarke-bow-wow-bandit-toon-title-recreation.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/2-andy-clarke-bow-wow-bandit-toon-title-recreation.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/2-andy-clarke-bow-wow-bandit-toon-title-recreation.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/2-andy-clarke-bow-wow-bandit-toon-title-recreation.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/2-andy-clarke-bow-wow-bandit-toon-title-recreation.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/2-andy-clarke-bow-wow-bandit-toon-title-recreation.png"
			
			sizes="100vw"
			alt="Quick Draw McGraw character pulling back on a dog leash attached to his bloodhound, Snuffles."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Andy Clarke’s Bow Wow Bandit Toon Title recreation (16:9). (<a href='https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/2-andy-clarke-bow-wow-bandit-toon-title-recreation.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Let’s say I’ve designed an SVG scene based on that Bow Wow Bandit artwork, with a 16:9 aspect ratio and a <code>viewBox</code> size of 1920×1080. This SVG scales up and down (the clue’s in the name), so it looks sharp whether it’s gigantic or minute.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/3-svgs-aspect-ratio.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/3-svgs-aspect-ratio.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/3-svgs-aspect-ratio.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/3-svgs-aspect-ratio.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/3-svgs-aspect-ratio.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/3-svgs-aspect-ratio.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/3-svgs-aspect-ratio.png"
			
			sizes="100vw"
			alt="16:9 aspect ratio vs. 3:4."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Left: 16:9 aspect ratio loses its impact. Right: 3:4 format suits the screen size better. (<a href='https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/3-svgs-aspect-ratio.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>But on small screens, the 16:9 aspect ratio (<a href="https://stuffandnonsense.co.uk/toon-titles/quick-draw-3a.html">live demo</a>) might not be the best format, and the image loses its impact. Sometimes, a portrait orientation, like 3:4, would suit the screen size better.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/4-bow-wow-bandit-toon-title-recreation-portrait.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="729"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/4-bow-wow-bandit-toon-title-recreation-portrait.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/4-bow-wow-bandit-toon-title-recreation-portrait.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/4-bow-wow-bandit-toon-title-recreation-portrait.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/4-bow-wow-bandit-toon-title-recreation-portrait.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/4-bow-wow-bandit-toon-title-recreation-portrait.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/4-bow-wow-bandit-toon-title-recreation-portrait.png"
			
			sizes="100vw"
			alt="Andy Clarke’s Bow Wow Bandit Toon Title recreation (3:4)."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Andy Clarke’s Bow Wow Bandit Toon Title recreation (3:4). (<a href='https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/4-bow-wow-bandit-toon-title-recreation-portrait.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>But herein lies the problem: it’s not easy to reposition internal elements for different screen sizes using just <code>viewBox</code>. In SVG, internal element positions are locked to the coordinate system of the original <code>viewBox</code>, so you can’t easily change their layout between, say, desktop and mobile. That matters because animations and interactivity often rely on element positions, which break when the <code>viewBox</code> changes.</p>
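<p>To make this concrete, here’s a minimal sketch of what goes wrong (the coordinates are illustrative, not the real artwork). The group keeps its original coordinates, so simply swapping the <code>viewBox</code> crops or letterboxes the scene rather than rearranging it:</p>

<pre><code class="language-svg">&lt;!-- Landscape: the element sits where it was designed to --&gt;
&lt;svg viewBox="0 0 1920 1080"&gt;
  &lt;g transform="translate(350,270)"&gt;&lt;!-- artwork --&gt;&lt;/g&gt;
&lt;/svg&gt;

&lt;!-- Portrait: same coordinates, but they no longer suit the framing --&gt;
&lt;svg viewBox="0 0 1080 1440"&gt;
  &lt;g transform="translate(350,270)"&gt;&lt;!-- artwork --&gt;&lt;/g&gt;
&lt;/svg&gt;
</code></pre>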














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/5-svg-smaller-larger-screens.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/5-svg-smaller-larger-screens.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/5-svg-smaller-larger-screens.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/5-svg-smaller-larger-screens.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/5-svg-smaller-larger-screens.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/5-svg-smaller-larger-screens.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/5-svg-smaller-larger-screens.png"
			
			sizes="100vw"
			alt="Left: 16:9 for larger screens. Right: 3:4 for smaller screens."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Left: 16:9 for larger screens. Right: 3:4 for smaller screens. (<a href='https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/5-svg-smaller-larger-screens.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>My challenge was to serve a 1080×1440 version of Bow Wow Bandit to smaller screens and a different one to larger ones. I wanted the position and size of internal elements &mdash; like Quick Draw McGraw and his dawg Snuffles &mdash; to change to best fit these two layouts. To solve this, I experimented with several alternatives.</p>

<p><strong>Note:</strong> Why not just use the <a href="https://www.smashingmagazine.com/2014/05/responsive-images-done-right-guide-picture-srcset/"><code>&lt;picture&gt;</code> element</a> with external SVGs? It’s brilliant for responsive images, but it only works with raster formats (like JPEG or WebP) and external SVG files treated as images. That means you can’t animate or style internal elements using CSS.</p>
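<p>For reference, here’s a sketch of the <code>&lt;picture&gt;</code> approach being ruled out (the file names are hypothetical). Each SVG arrives as an opaque external image, so its internal elements are off-limits to the page’s CSS:</p>

<pre><code class="language-html">&lt;picture&gt;
  &lt;source media="(min-width: 64rem)" srcset="bow-wow-bandit-large.svg"&gt;
  &lt;img src="bow-wow-bandit-small.svg" alt="Bow Wow Bandit title card"&gt;
&lt;/picture&gt;
</code></pre>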

<div data-audience="non-subscriber" data-remove="true" class="feature-panel-container">

<aside class="feature-panel" style="">
<div class="feature-panel-left-col">

<div class="feature-panel-description"><p>Meet <strong><a data-instant href="https://www.smashingconf.com/online-workshops/">Smashing Workshops</a></strong> on <strong>front-end, design &amp; UX</strong>, with practical takeaways, live sessions, <strong>video recordings</strong> and a friendly Q&amp;A. With Brad Frost, Stéph Walter and <a href="https://smashingconf.com/online-workshops/workshops">so many others</a>.</p>
<a data-instant href="smashing-workshops" class="btn btn--green btn--large" style="">Jump to the workshops&nbsp;↬</a></div>
</div>
<div class="feature-panel-right-col"><a data-instant href="smashing-workshops" class="feature-panel-image-link">
<div class="feature-panel-image">
<img
    loading="lazy"
    decoding="async"
    class="feature-panel-image-img"
    src="/images/smashing-cat/cat-scubadiving-panel.svg"
    alt="Feature Panel"
    width="257"
    height="355"
/>

</div>
</a>
</div>
</aside>
</div>

<h2 id="showing-and-hiding-svg">Showing And Hiding SVG</h2>

<p>The most obvious choice was to include two different SVGs in my markup, one for small screens, the other for larger ones, then show or hide them using <a href="https://www.smashingmagazine.com/2018/02/media-queries-responsive-design-2018/">CSS and Media Queries</a>:</p>

<pre><code class="language-svg">&lt;svg id="svg-small" viewBox="0 0 1080 1440"&gt;
  &lt;!-- ... --&gt;
&lt;/svg&gt;

&lt;svg id="svg-large" viewBox="0 0 1920 1080"&gt;
  &lt;!--... --&gt;
&lt;/svg&gt;


#svg-small { display: block; }
#svg-large { display: none; }

@media (min-width: 64rem) {
  #svg-small { display: none; }
  #svg-mobile { display: block; }
}
</code></pre>

<p>But using this method, both SVG versions are loaded, which, when the graphics are complex, means downloading lots and lots and lots of unnecessary code.</p>

<h2 id="replacing-svgs-using-javascript">Replacing SVGs Using JavaScript</h2>

<p>I thought about using JavaScript to swap in the larger SVG at a specified breakpoint:</p>

<pre><code class="language-javascript">// 'svgContainer', 'desktopSVG', and 'mobileSVG' are defined elsewhere
if (window.matchMedia('(min-width: 64rem)').matches) {
  svgContainer.innerHTML = desktopSVG;
} else {
  svgContainer.innerHTML = mobileSVG;
}
</code></pre>

<p>Leaving aside the fact that JavaScript would now be critical to how the design is displayed, both SVGs would usually be loaded anyway, which adds DOM complexity and unnecessary weight. Plus, maintenance becomes a problem as there are now two versions of the artwork to maintain, doubling the time it would take to update something as small as the shape of Quick Draw’s tail.</p>

<h2 id="the-solution-one-svg-symbol-library-and-multiple-uses">The Solution: One SVG Symbol Library And Multiple Uses</h2>

<p>Remember, my goal is to:</p>

<ul>
<li>Serve one version of Bow Wow Bandit to smaller screens,</li>
<li>Serve a different version to larger screens,</li>
<li>Define my artwork just once (DRY), and</li>
<li>Be able to resize and reposition elements.</li>
</ul>

<p>It doesn’t get written about nearly enough, but the <code>&lt;symbol&gt;</code> element lets you define reusable SVG elements that stay hidden until referenced, which improves maintainability and reduces code bloat. Symbols are like components for SVG: <a href="https://css-tricks.com/svg-symbol-good-choice-icons/">create once and use wherever you need them</a>:</p>

<pre><code class="language-svg">&lt;svg xmlns="http://www.w3.org/2000/svg" style="display: none;"&gt;
  &lt;symbol id="quick-draw-body" viewBox="0 0 620 700"&gt;
    &lt;g class="quick-draw-body"&gt;[…]&lt;/g&gt;
  &lt;/symbol&gt;
  &lt;!-- ... --&gt;
&lt;/svg&gt;

&lt;use href="#quick-draw-body" /&gt;
</code></pre>

<p>A <code>&lt;symbol&gt;</code> is like storing a character in a library. I can reference it as many times as I need, to keep my code consistent and lightweight. Using <code>&lt;use&gt;</code> elements, I can insert the same symbol multiple times, at different positions or sizes, and even in different SVGs.</p>
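<p>For example (the positions and sizes here are illustrative), the same symbol can appear twice in one scene at different scales without duplicating any path data:</p>

<pre><code class="language-svg">&lt;svg viewBox="0 0 1920 1080"&gt;
  &lt;!-- Full-size hat --&gt;
  &lt;use href="#quick-draw-hat" width="294" height="182" /&gt;
  &lt;!-- The same hat at half size, elsewhere in the scene --&gt;
  &lt;use href="#quick-draw-hat" width="147" height="91" transform="translate(600,0)" /&gt;
&lt;/svg&gt;
</code></pre>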

<p>Each <code>&lt;symbol&gt;</code> must have its own <code>viewBox</code>, which defines its internal coordinate system. That means paying special attention to how SVG elements are exported from apps like Sketch.</p>

<div class="partners__lead-place"></div>

<h2 id="exporting-for-individual-viewboxes">Exporting For Individual Viewboxes</h2>

<p>I wrote before about <a href="https://www.smashingmagazine.com/2025/06/smashing-animations-part-4-optimising-svgs/">how I export elements</a> in layers to make working with them easier. That process is a little different when creating symbols.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/6-exporting-elements-from-sketch.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/6-exporting-elements-from-sketch.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/6-exporting-elements-from-sketch.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/6-exporting-elements-from-sketch.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/6-exporting-elements-from-sketch.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/6-exporting-elements-from-sketch.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/6-exporting-elements-from-sketch.png"
			
			sizes="100vw"
			alt=""
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      My usual process of exporting elements from Sketch. (<a href='https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/6-exporting-elements-from-sketch.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Ordinarily, I would export all my elements using the same <code>viewBox</code> size. But when I’m creating a <code>symbol</code>, I need it to have its own specific <code>viewBox</code>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/7-exporting-elements-sketch-individual-svgs-files.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/7-exporting-elements-sketch-individual-svgs-files.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/7-exporting-elements-sketch-individual-svgs-files.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/7-exporting-elements-sketch-individual-svgs-files.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/7-exporting-elements-sketch-individual-svgs-files.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/7-exporting-elements-sketch-individual-svgs-files.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/7-exporting-elements-sketch-individual-svgs-files.png"
			
			sizes="100vw"
			alt="Exporting elements from Sketch as individual SVG files."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Exporting elements from Sketch as individual SVG files. (<a href='https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/7-exporting-elements-sketch-individual-svgs-files.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>So I export each element as an individually sized SVG, which gives me the dimensions I need to convert its content into a <code>symbol</code>. Let’s take the SVG of Quick Draw McGraw’s hat, which has a <code>viewBox</code> size of 294×182:</p>

<pre><code class="language-svg">&lt;svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 294 182"&gt;
  &lt;!-- ... --&gt;
&lt;/svg&gt;
</code></pre>

<p>I swap the SVG tags for <code>&lt;symbol&gt;</code> and add its artwork to my SVG library:</p>

<pre><code class="language-svg">&lt;svg xmlns="http://www.w3.org/2000/svg" style="display: none;"&gt;
  &lt;symbol id="quick-draw-hat" viewBox="0 0 294 182"&gt;
    &lt;g class="quick-draw-hat"&gt;[…]&lt;/g&gt;
  &lt;/symbol&gt;
&lt;/svg&gt;
</code></pre>

<p>Then, I repeat the process for all the remaining elements in my artwork. Now, if I ever need to update any of my symbols, the changes will be automatically applied everywhere they’re used.</p>

<h2 id="using-a-symbol-in-multiple-svgs">Using A <code>&lt;symbol&gt;</code> In Multiple SVGs</h2>

<p>I wanted my elements to appear in both versions of Bow Wow Bandit, one arrangement for smaller screens and an alternative arrangement for larger ones. So, I create both SVGs:</p>

<pre><code class="language-svg">&lt;svg class="svg-small" viewBox="0 0 1080 1440"&gt;
  &lt;!-- ... --&gt;
&lt;/svg&gt;

&lt;svg class="svg-large" viewBox="0 0 1920 1080"&gt;
  &lt;!-- ... --&gt;
&lt;/svg&gt;
</code></pre>

<p>…and insert links to my symbols in both:</p>

<pre><code class="language-svg">&lt;svg class="svg-small" viewBox="0 0 1080 1440"&gt;
  &lt;use href="#quick-draw-hat" /&gt;
&lt;/svg&gt;

&lt;svg class="svg-large" viewBox="0 0 1920 1080"&gt;
  &lt;use href="#quick-draw-hat" /&gt;
&lt;/svg&gt;
</code></pre>

<h2 id="positioning-symbols">Positioning Symbols</h2>

<p>Once I’ve placed symbols into my layout using <code>&lt;use&gt;</code>, my next step is to position them, which is especially important if I want alternative layouts for different screen sizes. Symbols behave like <code>&lt;g&gt;</code> groups, so I can scale and move them using attributes like <code>width</code>, <code>height</code>, and <code>transform</code>:</p>

<div class="break-out">
<pre><code class="language-svg">&lt;svg class="svg-small" viewBox="0 0 1080 1440"&gt;
  &lt;use href="#quick-draw-hat" width="294" height="182" transform="translate(-30,610)"/&gt;
&lt;/svg&gt;

&lt;svg class="svg-large" viewBox="0 0 1920 1080"&gt;
  &lt;use href="#quick-draw-hat" width="294" height="182" transform="translate(350,270)"/&gt;
&lt;/svg&gt;
</code></pre>
</div>

<p>I can place each <code>&lt;use&gt;</code> element independently using <code>transform</code>. This is powerful because rather than repositioning elements inside my SVGs, I move the <code>&lt;use&gt;</code> references. My internal layout stays clean, and the file size remains small because I’m not duplicating artwork. A browser only loads it once, which reduces bandwidth and speeds up page rendering. And because I’m always referencing the same <code>symbol</code>, their appearance stays consistent, whatever the screen size.</p>

<h2 id="animating-use-elements">Animating <code>&lt;use&gt;</code> Elements</h2>

<p>Here’s where things got tricky. I wanted to animate parts of my characters &mdash; like Quick Draw’s hat tilting and his legs kicking. But when I added CSS animations targeting internal elements inside a <code>&lt;symbol&gt;</code>, nothing happened.</p>

<p><strong>Tip:</strong> You can animate the <code>&lt;use&gt;</code> element itself, but not elements inside the <code>&lt;symbol&gt;</code>. If you want individual parts to move, make them their own symbols and animate each <code>&lt;use&gt;</code>.</p>

<p>Turns out, you can’t style or animate a <code>&lt;symbol&gt;</code>, because <code>&lt;use&gt;</code> creates shadow DOM clones that aren’t easily targetable. So, I had to get sneaky. Inside each <code>&lt;symbol&gt;</code> in my library SVG, I added a <code>&lt;g&gt;</code> element around the part I wanted to animate:</p>

<pre><code class="language-svg">&lt;symbol id="quick-draw-hat" viewBox="0 0 294 182"&gt;
  &lt;g class="quick-draw-hat"&gt;
    &lt;!-- ... --&gt;
  &lt;/g&gt;
&lt;/symbol&gt;
</code></pre>

<p>…and animated the <code>&lt;use&gt;</code> element that references it, using an attribute selector that targets its <code>href</code>:</p>

<pre><code class="language-css">use[href="#quick-draw-hat"] {
  animation-delay: 0.5s;
  animation-direction: alternate;
  animation-duration: 1s;
  animation-iteration-count: infinite;
  animation-name: hat-rock;
  animation-timing-function: ease-in-out;
  transform-origin: center bottom;
}

@keyframes hat-rock {
  from { transform: rotate(-2deg); }
  to { transform: rotate(2deg); }
}
</code></pre>

<div class="partners__lead-place"></div>

<h2 id="media-queries-for-display-control">Media Queries For Display Control</h2>

<p>Once I’ve created my two visible SVGs &mdash; one for small screens and one for larger ones &mdash; the final step is deciding which version to show at which screen size. I use CSS Media Queries to hide one SVG and show the other. I start by showing the small-screen SVG by default:</p>

<pre><code class="language-css">.svg-small { display: block; }
.svg-large { display: none; }
</code></pre>

<p>Then I use a <code>min-width</code> media query to switch to the large-screen SVG at <code>64rem</code> and above:</p>

<pre><code class="language-css">@media (min-width: 64rem) {
  .svg-small { display: none; }
  .svg-large { display: block; }
}
</code></pre>

<p>This ensures there’s only ever one SVG visible at a time, keeping my layout simple and the DOM free from unnecessary clutter. And because both visible SVGs reference the same hidden <code>&lt;symbol&gt;</code> library, the browser only downloads the artwork once, regardless of how many <code>&lt;use&gt;</code> elements appear across the two layouts.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/8-final-adaptive-svg.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/8-final-adaptive-svg.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/8-final-adaptive-svg.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/8-final-adaptive-svg.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/8-final-adaptive-svg.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/8-final-adaptive-svg.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/8-final-adaptive-svg.png"
			
			sizes="100vw"
			alt="The final adaptive SVG."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      View the final adaptive SVG on my <a href='https://stuffandnonsense.co.uk/toon-titles/quick-draw-3.html'>Toon Titles website</a>. (<a href='https://files.smashing.media/articles/smashing-animations-part-5-building-adaptive-svgs/8-final-adaptive-svg.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="wrapping-up">Wrapping Up</h2>

<p>By combining <code>&lt;symbol&gt;</code>, <code>&lt;use&gt;</code>, CSS Media Queries, and specific transforms, I can build <strong>adaptive SVGs</strong> that reposition their elements without duplicating content, loading extra assets, or relying on JavaScript. I need to define each graphic only once in a hidden symbol library. Then I can reuse those graphics, as needed, inside several visible SVGs. With CSS doing the layout switching, the <strong>result is fast and flexible</strong>.</p>

<p>It’s a reminder that some of the most powerful techniques on the web don’t need big frameworks or complex tooling &mdash; just a bit of SVG know-how and a clever use of the basics.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Yegor Gilyov</author><title>Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)</title><link>https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/</link><pubDate>Fri, 03 Oct 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/</guid><description>Ready to move beyond static mockups? Here is a practical, step-by-step guide to Intent Prototyping &amp;mdash; a disciplined method that uses AI to turn your design intent (UI sketches, conceptual models, and user flows) directly into a live prototype, making it your primary canvas for ideation.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/" />
              <title>Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Intent Prototyping: A Practical Guide To Building With Clarity (Part 2)</h1>
                  
                    
                    <address>Yegor Gilyov</address>
                  
                  <time datetime="2025-10-03T10:00:00&#43;00:00" class="op-published">2025-10-03T10:00:00+00:00</time>
                  <time datetime="2025-10-03T10:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>In <strong><a href="https://www.smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/">Part 1</a></strong> of this series, we explored the “lopsided horse” problem born from mockup-centric design and demonstrated how the seductive promise of vibe coding often leads to structural flaws. The main question remains:</p>

<blockquote>How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap?</blockquote>

<p>In other words, we need a way to build prototypes that are both fast to create and founded on a clear, unambiguous blueprint.</p>

<p>The answer is a more disciplined process I call <strong>Intent Prototyping</strong> (kudos to Marco Kotrotsos, who coined <a href="https://kotrotsos.medium.com/intent-oriented-programming-bridging-human-thought-and-ai-machine-execution-3a92373cc1b6">Intent-Oriented Programming</a>). This method embraces the power of AI-assisted coding but rejects ambiguity, putting the designer’s explicit <em>intent</em> at the very center of the process. It receives a holistic expression of <em>intent</em> (sketches for screen layouts, conceptual model description, boxes-and-arrows for user flows) and uses it to generate a live, testable prototype.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="491"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg"
			
			sizes="100vw"
			alt="Diagram showing sketches, a conceptual model, and user flows as inputs to Intent Prototyping, which outputs a live prototype."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Intent Prototyping workflow. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/1-intent-prototyping.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>This method addresses the concerns we discussed in Part 1 in the best way possible:</p>

<ul>
<li><strong>Unlike static mockups,</strong> the prototype is fully interactive and can be easily populated with a large amount of realistic data. This lets us test the system’s underlying logic as well as its surface.</li>
<li><strong>Unlike a vibe-coded prototype</strong>, it is built from a stable, unambiguous specification. This prevents the conceptual model failures and design debt that happen when things are unclear. The engineering team doesn’t need to reverse-engineer a black box or become “code archaeologists” to guess at the designer’s vision, as they receive not only a live prototype but also a clearly documented design intent behind it.</li>
</ul>

<p>This combination makes the method especially suited for designing complex enterprise applications. It allows us to test the system&rsquo;s most critical point of failure, its underlying structure, with a speed and flexibility that were previously impossible. Furthermore, the process is built for iteration. You can explore as many directions as you want simply by changing the intent and evolving the design based on what you learn from user testing.</p>

<div data-audience="non-subscriber" data-remove="true" class="feature-panel-container">

<aside class="feature-panel" style="">
<div class="feature-panel-left-col">

<div class="feature-panel-description"><p>Meet <strong><a data-instant href="https://www.smashingconf.com/online-workshops/">Smashing Workshops</a></strong> on <strong>front-end, design &amp; UX</strong>, with practical takeaways, live sessions, <strong>video recordings</strong> and a friendly Q&amp;A. With Brad Frost, Stéph Walter and <a href="https://smashingconf.com/online-workshops/workshops">so many others</a>.</p>
<a data-instant href="smashing-workshops" class="btn btn--green btn--large" style="">Jump to the workshops&nbsp;↬</a></div>
</div>
<div class="feature-panel-right-col"><a data-instant href="smashing-workshops" class="feature-panel-image-link">
<div class="feature-panel-image">
<img
    loading="lazy"
    decoding="async"
    class="feature-panel-image-img"
    src="/images/smashing-cat/cat-scubadiving-panel.svg"
    alt="Feature Panel"
    width="257"
    height="355"
/>

</div>
</a>
</div>
</aside>
</div>

<h2 id="my-workflow">My Workflow</h2>

<p>To illustrate this process in action, let’s walk through a case study. It’s the very same example I’ve used to illustrate the vibe coding trap: a simple tool to track tests to validate product ideas. You can find the complete project, including all the source code and documentation files discussed below, in this <a href="https://github.com/YegorGilyov/reality-check">GitHub repository</a>.</p>

<h3 id="step-1-expressing-an-intent">Step 1: Expressing An Intent</h3>

<p>Imagine we&rsquo;ve already done proper research. Having reflected on the problem we defined, I begin to form a vague idea of what the solution might look like. I need to capture this idea immediately, so I quickly sketch it out:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="583"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png"
			
			sizes="100vw"
			alt="A rough sketch of screens to manage product ideas and reality checks."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A low-fidelity sketch of the initial idea. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/2-low-fidelity-sketch-initial-idea.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>In this example, I used Excalidraw, but the tool doesn’t really matter. Note that we deliberately keep it rough, as visual details are not something we need to focus on at this stage. And we are not going to be stuck here: we want to make a leap from this initial sketch directly to a live prototype that we can put in front of potential users. Polishing those sketches would not bring us any closer to achieving our goal.</p>

<p>What we need to move forward is to add just enough detail to those sketches for them to serve as sufficient input for a junior frontend developer (or, in our case, an AI assistant). This requires explaining the following:</p>

<ul>
<li>Navigational paths (clicking here takes you to).</li>
<li>Interaction details that can’t be shown in a static picture (e.g., non-scrollable areas, adaptive layout, drag-and-drop behavior).</li>
<li>What parts might make sense to build as reusable components.</li>
<li>Which components from the design system (I’m using <a href="https://ant.design/">Ant Design Library</a>) should be used.</li>
<li>Any other comments that help understand how this thing should work (while sketches illustrate how it should look).</li>
</ul>

<p>Having added all those details, we end up with such an annotated sketch:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="399"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png"
			
			sizes="100vw"
			alt="The initial sketch with annotations specifying components, navigation, and interaction details."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The sketch annotated with details. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/3-sketch-annotated-details.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>As you see, this sketch covers both the Visualization and Flow aspects. You may ask, what about the Conceptual Model? Without that part, the expression of our <em>intent</em> will not be complete. One way would be to add it somewhere in the margins of the sketch (for example, as a UML Class Diagram), and I would do so in the case of a more complex application, where the model cannot be simply derived from the UI. But in our case, we can save effort and ask an LLM to generate a comprehensive description of the conceptual model based on the sketch.</p>

<p>For tasks of this sort, the LLM of my choice is Gemini 2.5 Pro. What is important is that this is a multimodal model that can accept not only text but also images as input (GPT-5 and Claude 4 also meet those criteria). I use Google AI Studio, as it gives me enough control and visibility into what&rsquo;s happening:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="579"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png"
			
			sizes="100vw"
			alt="Screenshot of Google AI Studio with an annotated sketch as input."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Generating a conceptual model from the sketch using Google AI Studio. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/4-google-ai-studio.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p><strong>Note</strong>: <em>All the prompts that I use here and below can be found in the <a href="#appendices">Appendices</a>. The prompts are not custom-tailored to any particular project; they are supposed to be reused as they are.</em></p>

<p>As a result, Gemini gives us a description and the following diagram:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="480"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg"
			
			sizes="100vw"
			alt="UML class diagram showing two connected entities: “ProductIdea” and “RealityCheck”."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      UML class diagram. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/5-uml-class.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The diagram might look technical, but I believe that a clear understanding of all objects, their attributes, and relationships between them is key to good design. That’s why I consider the Conceptual Model to be an essential part of expressing <em>intent</em>, along with the Flow and Visualization.</p>
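<p>To make that concrete, here is a rough sketch of the two entities from the diagram expressed as TypeScript types. The attribute names and statuses are my illustrative guesses, not the actual contents of <code>Model.md</code>:</p>

<div class="break-out">
<pre><code class="language-typescript">// Hypothetical shape of the two entities from the class diagram
interface ProductIdea {
  id: string;
  title: string;
  createdAt: string;
}

// Each RealityCheck belongs to exactly one ProductIdea
interface RealityCheck {
  id: string;
  ideaId: string; // the relationship: a reference to a ProductIdea
  hypothesis: string;
  status: "planned" | "running" | "validated" | "invalidated";
}

// The one-to-many relationship, expressed as a lookup
function checksForIdea(checks: RealityCheck[], ideaId: string): RealityCheck[] {
  return checks.filter((c) => c.ideaId === ideaId);
}
</code></pre>
</div>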

<p>As a result of this step, our <em>intent</em> is fully expressed in two files: <code>Sketch.png</code> and <code>Model.md</code>. This will be our durable source of truth.</p>

<h3 id="step-2-preparing-a-spec-and-a-plan">Step 2: Preparing A Spec And A Plan</h3>

<p>The purpose of this step is to create a comprehensive technical specification and a step-by-step plan. Most of the work here is done by AI; you just need to keep an eye on it.</p>

<p>I separate the Data Access Layer and the UI layer, and create specifications for them using two different prompts (see <a href="#appendices">Appendices 2 and 3</a>). The output of the first prompt (the Data Access Layer spec) serves as an input for the second one. Note that, as an additional input, we give the guidelines tailored for prototyping needs (see <a href="#appendices">Appendices 8, 9, and 10</a>). They are not specific to this project. The technical approach encoded in those guidelines is out of the scope of this article.</p>
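<p>To give a feel for the kind of thing a prototype-grade Data Access Layer describes, here is a deliberately tiny sketch of a CRUD-style store. The names and the in-memory storage choice are assumptions of mine; the real shape is whatever <code>DAL.md</code> and the guidelines specify:</p>

<div class="break-out">
<pre><code class="language-typescript">// Illustrative only: a minimal in-memory store with list/create/remove
type Entity = { id: string };

function createStore(initial: Entity[]) {
  let items = [...initial];
  return {
    list: () => items,
    create: (item: Entity) => { items = [...items, item]; return item; },
    remove: (id: string) => { items = items.filter((i) => i.id !== id); },
  };
}
</code></pre>
</div>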

<p>As a result, Gemini provides us with content for <code>DAL.md</code> and <code>UI.md</code>. Although this output is usually quite reliable, you might still want to scrutinize it. You don&rsquo;t need to be a real programmer to make sense of it, but some level of programming literacy would be really helpful. However, even if you don&rsquo;t have such skills, don&rsquo;t get discouraged. The good news is that if you don&rsquo;t understand something, you always know who to ask. Do it in Google AI Studio before refreshing the context window. If you believe you&rsquo;ve spotted a problem, let Gemini know, and it will either fix it or explain why the suggested approach is actually better.</p>

<p>It&rsquo;s important to remember that, by their nature, <strong>LLMs are not deterministic</strong> and, to put it simply, can be forgetful, especially when it comes to small details in sketches. Fortunately, you don&rsquo;t have to be an expert to notice that the &ldquo;Delete&rdquo; button, which is in the upper right corner of the sketch, is not mentioned in the spec.</p>

<p>Don’t get me wrong: Gemini does a stellar job most of the time, but there are still times when it slips up. Just let it know about the problems you’ve spotted, and everything will be fixed.</p>

<p>Once we have <code>Sketch.png</code>, <code>Model.md</code>, <code>DAL.md</code>, <code>UI.md</code>, and we have reviewed the specs, we can grab a coffee. We deserve it: our technical design documentation is complete. It will serve as a stable foundation for building the actual thing, without deviating from our original intent, and ensuring that all components fit together perfectly, and all layers are stacked correctly.</p>

<p>One last thing we can do before moving on to the next steps is to prepare a step-by-step plan. We split that plan into two parts: one for the Data Access Layer and another for the UI. You can find prompts I use to create such a plan in <a href="#appendices">Appendices 4 and 5</a>.</p>

<h3 id="step-3-executing-the-plan">Step 3: Executing The Plan</h3>

<p>To start building the actual thing, we need to switch to another category of AI tools. Up until this point, we have relied on Generative AI. It excels at creating new content (in our case, specifications and plans) based on a single prompt. I’m using Google Gemini 2.5 Pro in Google AI Studio, but other similar tools may also fit such one-off tasks: ChatGPT, Claude, Grok, and DeepSeek.</p>

<p>However, at this step, this wouldn’t be enough. Building a prototype based on specs and according to a plan requires an AI that can read context from multiple files, execute a sequence of tasks, and maintain coherence. A simple generative AI can’t do this. It would be like asking a person to build a house by only ever showing them a single brick. What we need is an agentic AI that can be given the full house blueprint and a project plan, and then get to work building the foundation, framing the walls, and adding the roof in the correct sequence.</p>

<p>My coding agent of choice is Google Gemini CLI, simply because Gemini 2.5 Pro serves me well, and I don’t think we need any middleman like Cursor or Windsurf (which would use Claude, Gemini, or GPT under the hood anyway). If I used Claude, my choice would be Claude Code, but since I’m sticking with Gemini, Gemini CLI it is. But if you prefer Cursor or Windsurf, I believe you can apply the same process with your favourite tool.</p>

<p>Before tasking the agent, we need to create a basic template for our React application. I won’t go into this here. You can find plenty of tutorials on how to scaffold an empty React project using Vite.</p>
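<p>For reference, one common way to do that with Vite (the project name here is arbitrary):</p>

<div class="break-out">
<pre><code class="language-bash">npm create vite@latest reality-check -- --template react-ts
cd reality-check
npm install
npm run dev
</code></pre>
</div>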

<p>Then we put all our files into that project:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="666"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png"
			
			sizes="100vw"
			alt="A file directory showing the docs folder containing DAL.md, Model.md, Sketch.png, and UI.md."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Project structure with design intent and spec files. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/6-project-structure-design-intent-spec-files.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Once the basic template with all our files is ready, we open Terminal, go to the folder where our project resides, and type “gemini”:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="419"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png"
			
			sizes="100vw"
			alt="Screenshot of a terminal showing the Gemini CLI."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Gemini CLI. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/7-gemini.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>And we send the prompt to build the Data Access Layer (see <a href="#appendices">Appendix 6</a>). That prompt implies step-by-step execution, so upon completion of each step, I send the following:</p>

<div class="break-out">
<pre><code class="language-markdown">Thank you! Now, please move to the next task.
Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec. 
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.
</code></pre>
</div>

<p>As the last task in the plan, the agent builds a special page that exposes all the capabilities of our Data Access Layer, so that we can test it manually. It may look like this:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="572"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png"
			
			sizes="100vw"
			alt="A basic webpage with forms and buttons to test the Data Access Layer’s CRUD functions."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The AI-generated test page for the Data Access Layer. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/8-ai-generated-test-page-data-access-layer.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>It doesn’t look fancy, to say the least, but it allows us to ensure that the Data Access Layer works correctly before we proceed with building the final UI.</p>

<p>And finally, we clear the Gemini CLI context window to give it more headspace and send the prompt to build the UI (see <a href="#appendices">Appendix 7</a>). This prompt also implies step-by-step execution. Upon completion of each step, we test how it works and how it looks, following the &ldquo;Manual Testing Plan&rdquo; from <code>UI-plan.md</code>. I have to say that although the sketch has been uploaded to the model context and Gemini generally tries to follow it, attention to visual detail is not one of its strengths (yet). Usually, a few additional nudges are needed at each step to improve the look and feel:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="320"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png"
			
			sizes="100vw"
			alt="A before-and-after comparison showing the UI&#39;s visual improvement."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Refining the AI-generated UI to match the sketch. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/9-refined-ai-generated-ui.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Once I’m happy with the result of a step, I ask Gemini to move on:</p>

<div class="break-out">
<pre><code class="language-markdown">Thank you! Now, please move to the next task.
Make sure you build the UI according to the sketch; this is very important. Remember that you must not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch.  
After each task, stop so that I can test it. Don’t move to the next task before I tell you to do so.
</code></pre>
</div>

<p>Before long, the result looks like this, and in every detail it works exactly as we <em>intended</em>:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="486"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png"
			
			sizes="100vw"
			alt="Screenshots of the final, polished application UI."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The final interactive prototype. (<a href='https://files.smashing.media/articles/intent-prototyping-practical-guide-building-clarity/10-final-interactive-prototype.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The prototype is up and running and looking nice. Does that mean our work is done? Surely not; the most fascinating part is just beginning.</p>


<h3 id="step-4-learning-and-iterating">Step 4: Learning And Iterating</h3>

<p>It’s time to put the prototype in front of potential users and learn whether this solution relieves their pain.</p>

<p>And as soon as we learn something new, we iterate. Based on that new input, we adjust or extend the sketches and the conceptual model, update the specifications, create plans to make changes according to the new specifications, and execute those plans. In other words, for every iteration, we repeat the steps I’ve just walked you through.</p>

<h3 id="is-this-workflow-too-heavy">Is This Workflow Too Heavy?</h3>

<p>This four-step workflow may give the impression of a heavy process, one that requires too much thinking upfront and leaves little room for creativity. But before jumping to that conclusion, consider the following:</p>

<ul>
<li>In practice, only the first step, along with the learning in the last, requires real effort. AI does most of the work in between; you just need to keep an eye on it.</li>
<li>Individual iterations don’t need to be big. You can start with a <a href="https://wiki.c2.com/?WalkingSkeleton">Walking Skeleton</a>: the bare minimum implementation of the thing you have in mind, and add more substance in subsequent iterations. You are welcome to change your mind about the overall direction in between iterations.</li>
<li>And last but not least, maybe the idea of “think before you do” is not something you need to run away from. A clear and unambiguous statement of intent can prevent many unnecessary mistakes and save a lot of effort down the road.</li>
</ul>

<h2 id="intent-prototyping-vs-other-methods">Intent Prototyping Vs. Other Methods</h2>

<p>No method fits all situations, and Intent Prototyping is no exception. Like any specialized tool, it has a specific purpose. The most effective teams are not those who master a single method, but those who understand which approach to use to mitigate the most significant risk at each stage. The table below makes this choice clearer: it puts Intent Prototyping next to other common methods and tools and explains each one in terms of the primary goal it helps achieve and the specific risks it is best suited to mitigate.</p>

<table class="tablesaw break-out" style="grid-column: 3 / 18; font-size: 13pt;">
    <thead>
        <tr>
            <th>Method/Tool</th>
            <th>Goal</th>
            <th>Risks it is best suited to mitigate</th>
            <th width="300">Examples</th>
            <th>Why</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Intent Prototyping</td>
            <td>To rapidly iterate on the fundamental architecture of a data-heavy application with a complex conceptual model, sophisticated business logic, and non-linear user flows.</td>
            <td>Building a system with a flawed or incoherent conceptual model, leading to critical bugs and costly refactoring.</td>
            <td><ul><li>A CRM (Customer Relationship Management system).</li><li>A Resource Management Tool.</li><li>A No-Code Integration Platform (admin’s UI).</li></ul></td>
            <td>It enforces conceptual clarity. This not only de-risks the core structure but also produces a clear, documented blueprint that serves as a superior specification for the engineering handoff.</td>
        </tr>
        <tr>
            <td>Vibe Coding (Conversational)</td>
            <td>To rapidly explore interactive ideas through improvisation.</td>
            <td>Losing momentum because of analysis paralysis.</td>
            <td><ul><li>An interactive data table with live sorting/filtering.</li><li>A novel navigation concept.</li><li>A proof-of-concept for a single, complex component.</li></ul></td>
            <td>It has the smallest loop between an idea conveyed in natural language and an interactive outcome.</td>
        </tr>
        <tr>
            <td>Axure</td>
            <td>To test complicated conditional logic within a specific user journey, without having to worry about how the whole system works.</td>
            <td>Designing flows that break when users don’t follow the “happy path.”</td>
            <td><ul><li>A multi-step e-commerce checkout.</li><li>A software configuration wizard.</li><li>A dynamic form with dependent fields.</li></ul></td>
            <td>It’s made to create complex <code>if-then</code> logic and manage variables visually. This lets you test complicated paths and edge cases in a user journey without writing any code.</td>
        </tr>
        <tr>
            <td>Figma</td>
            <td>To make sure that the user interface looks good, aligns with the brand, and has a clear information architecture.</td>
            <td>Making a product that looks bad, doesn’t fit with the brand, or has a layout that is hard to understand.</td>
            <td><ul><li>A marketing landing page.</li><li>A user onboarding flow.</li><li>Presenting a new visual identity.</li></ul></td>
            <td>It excels at high-fidelity visual design and provides simple, fast tools for linking static screens.</td>
        </tr>
        <tr>
            <td>ProtoPie, Framer</td>
            <td>To make high-fidelity micro-interactions feel just right.</td>
            <td>Shipping an application that feels cumbersome and unpleasant to use because of poorly executed interactions.</td>
            <td><ul><li>A custom pull-to-refresh animation.</li><li>A fluid drag-and-drop interface.</li><li>An animated chart or data visualization.</li></ul></td>
            <td>These tools let you manipulate animation timelines, physics, and device sensor inputs in great detail. Designers can carefully work on and test the small things that make an interface feel really polished and fun to use.</td>
        </tr>
        <tr>
            <td>Low-code / No-code Tools (e.g., Bubble, Retool)</td>
            <td>To create a working, data-driven app as quickly as possible.</td>
            <td>The application will never be built because traditional development is too expensive.</td>
            <td><ul><li>An internal inventory tracker.</li><li>A customer support dashboard.</li><li>A simple directory website.</li></ul></td>
            <td>They put a UI builder, a database, and hosting all in one place. The goal is not merely to make a prototype of an idea, but to make and release an actual, working product. This is the last step for many internal tools or MVPs.</td>
        </tr>
    </tbody>
</table>

<p><br /></p>

<p>The key takeaway is that each method is a <strong>specialized tool for mitigating a specific type of risk</strong>. For example, Figma de-risks the visual presentation. ProtoPie de-risks the feel of an interaction. Intent Prototyping is in a unique position to tackle the most foundational risk in complex applications: building on a flawed or incoherent conceptual model.</p>


<h2 id="bringing-it-all-together">Bringing It All Together</h2>

<p>The era of the “lopsided horse” design, sleek on the surface but structurally unsound, is a direct result of the trade-off between fidelity and flexibility. This trade-off has led to a process filled with redundant effort and misplaced focus. Intent Prototyping, powered by modern AI, eliminates that conflict. It’s not just a shortcut to building faster &mdash; it’s a <strong>fundamental shift in how we design</strong>. By putting a clear, unambiguous <em>intent</em> at the heart of the process, it lets us get rid of the redundant work and focus on architecting a sound and robust system.</p>

<p>There are three major benefits to this renewed focus. First, by going straight to live, interactive prototypes, we shift our validation efforts from the surface to the deep, testing the system’s actual logic with users from day one. Second, the very act of documenting the design <em>intent</em> forces us to clarify our ideas, ensuring that we fully understand the system’s underlying logic. Finally, this documented <em>intent</em> becomes a durable source of truth, eliminating ambiguous handoffs and the redundant, error-prone work of having engineers reverse-engineer a designer’s vision from a black box.</p>

<p>Ultimately, Intent Prototyping changes the object of our work. It allows us to move beyond creating <strong>pictures of a product</strong> and empowers us to become architects of <strong>blueprints for a system</strong>. With the help of AI, we can finally make the live prototype the primary canvas for ideation, not just a high-effort afterthought.</p>

<h3 id="appendices">Appendices</h3>

<p>You can find the full <strong>Intent Prototyping Starter Kit</strong>, which includes all those prompts and guidelines, as well as the example from this article and a minimal boilerplate project, in this <a href="https://github.com/YegorGilyov/intent-prototyping-starter-kit">GitHub repository</a>.</p>

<div class="js-table-accordion accordion book__toc" id="TOC" aria-multiselectable="true">
    <dl class="accordion-list" style="margin-bottom: 1em" data-handler="Accordion">
          <dt tabindex="0" class="accordion-item" id="accordion-item-0" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 1: Sketch to UML Class Diagram
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-0" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Software Architect specializing in Domain-Driven Design. You are tasked with defining a conceptual model for an app based on information from a UI sketch.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the sketch carefully. There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Generate the conceptual model description in the Mermaid format using a UML class diagram.

&#35;&#35; Ground Rules

- Every entity must have the following attributes:
    - `id` (string)
    - `createdAt` (string, ISO 8601 format)
    - `updatedAt` (string, ISO 8601 format)
- Include all attributes shown in the UI: If a piece of data is visually represented as a field for an entity, include it in the model, even if it's calculated from other attributes.
- Do not add any speculative entities, attributes, or relationships ("just in case"). The model should serve the current sketch's requirements only. 
- Pay special attention to cardinality definitions (e.g., if a relationship is optional on both sides, it cannot be `"1" -- "0..*"`, it must be `"0..1" -- "0..*"`).
- Use only valid syntax in the Mermaid diagram.
- Do not include enumerations in the Mermaid diagram.
- Add comments explaining the purpose of every entity, attribute, and relationship, and their expected behavior (not as a part of the diagram, in the Markdown file).

&#35;&#35; Naming Conventions

- Names should reveal intent and purpose.
- Use PascalCase for entity names.
- Use camelCase for attributes and relationships.
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).

&#35;&#35; Final Instructions

- &#42;&#42;No Assumptions:** Base every detail on visual evidence in the sketch, not on common design patterns. 
- &#42;*Double-Check:** After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- &#42;&#42;Do not add redundant empty lines between items.&#42;&#42; 

Your final output should be the complete, raw markdown content for `Model.md`.
</code></pre>
</div>
</p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-1" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 2: Sketch to DAL Spec
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-1" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a comprehensive technical specification for the development team in a structured markdown document, based on a UI sketch and a conceptual model description. 

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- `Model.md`: the conceptual model
- `Sketch.png`: the UI sketch

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
- `Zustand-guidelines.md`: Zustand Best Practices

&#42;&#42;Step 3:&#42;&#42; Create a Markdown specification for the stores and entity-specific hook that implements all the logic and provides all required operations.

---

&#35;&#35; Markdown Output Structure

Use this template for the entire document.

```markdown

&#35; Data Access Layer Specification

This document outlines the specification for the data access layer of the application, following the principles defined in `docs/guidelines/Zustand-guidelines.md`.

&#35;&#35; 1. Type Definitions

Location: `src/types/entities.ts`

&#35;&#35;&#35; 1.1. `BaseEntity`

A shared interface that all entities should extend.

[TypeScript interface definition]

&#35;&#35;&#35; 1.2. `[Entity Name]`

The interface for the [Entity Name] entity.

[TypeScript interface definition]

&#35;&#35; 2. Zustand Stores

&#35;&#35;&#35; 2.1. Store for `[Entity Name]`

&#42;&#42;Location:&#42;&#42; `src/stores/[Entity Name (plural)].ts`

The Zustand store will manage the state of all [Entity Name] items.

&#42;&#42;Store State (`[Entity Name]State`):&#42;&#42;

[TypeScript interface definition]

&#42;&#42;Store Implementation (`use[Entity Name]Store`):&#42;&#42;

- The store will be created using `create&lt;[Entity Name]State&gt;()(...)`.
- It will use the `persist` middleware from `zustand/middleware` to save state to `localStorage`. The persistence key will be `[entity-storage-key]`.
- `[Entity Name (plural, camelCase)]` will be a dictionary (`Record&lt;string, [Entity]&gt;`) for O(1) access.

&#42;&#42;Actions:&#42;&#42;

- &#42;&#42;`add[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
- &#42;&#42;`update[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
- &#42;&#42;`remove[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
- &#42;&#42;`doSomethingElseWith[Entity Name]`&#42;&#42;:  
    [Define the operation behavior based on entity requirements]
    
&#35;&#35; 3. Custom Hooks

&#35;&#35;&#35; 3.1. `use[Entity Name (plural)]`

&#42;&#42;Location:&#42;&#42; `src/hooks/use[Entity Name (plural)].ts`

The hook will be the primary interface for UI components to interact with [Entity Name] data.

&#42;&#42;Hook Return Value:&#42;&#42;

[TypeScript interface definition]

&#42;&#42;Hook Implementation:&#42;&#42;

[List all properties and methods returned by this hook, and briefly explain the logic behind them, including data transformations, memoization. Do not write the actual code here.]

```

--- 

&#35;&#35; Final Instructions

- &#42;&#42;No Assumptions:&#42;&#42; Base every detail in the specification on the conceptual model or visual evidence in the sketch, not on common design patterns. 
- &#42;&#42;Double-Check:&#42;&#42; After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- &#42;&#42;Do not add redundant empty lines between items.&#42;&#42; 

Your final output should be the complete, raw markdown content for `DAL.md`.
</code></pre>
</div>
</p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-2" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 3: Sketch to UI Spec
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-2" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a comprehensive technical specification by translating a UI sketch into a structured markdown document for the development team.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully: 

- `Sketch.png`: the UI sketch
  - Note that red lines, red arrows, and red text within the sketch are annotations for you and should not be part of the final UI design. They provide hints and clarification. Never translate them to UI elements directly.
- `Model.md`: the conceptual model
- `DAL.md`: the Data Access Layer spec

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices

&#42;&#42;Step 3:&#42;&#42; Generate the complete markdown content for a new file, `UI.md`.

---

&#35;&#35; Markdown Output Structure

Use this template for the entire document.

```markdown

&#35; UI Layer Specification

This document specifies the UI layer of the application, breaking it down into pages and reusable components based on the provided sketches. All components will adhere to Ant Design's principles and utilize the data access patterns defined in `docs/guidelines/Zustand-guidelines.md`.

&#35;&#35; 1. High-Level Structure

The application is a single-page application (SPA). It will be composed of a main layout, one primary page, and several reusable components. 

&#35;&#35;&#35; 1.1. `App` Component

The root component that sets up routing and global providers.

-   &#42;&#42;Location&#42;&#42;: `src/App.tsx`
-   &#42;&#42;Purpose&#42;&#42;: To provide global context, including Ant Design's `ConfigProvider` and `App` contexts for message notifications, and to render the main page.
-   &#42;&#42;Composition&#42;&#42;:
  -   Wraps the application with `ConfigProvider` and `App as AntApp` from 'antd' to enable global message notifications as per `simple-ice/antd-messages.mdc`.
  -   Renders `[Page Name]`.

&#35;&#35; 2. Pages

&#35;&#35;&#35; 2.1. `[Page Name]`

-   &#42;&#42;Location:&#42;&#42; `src/pages/PageName.tsx`
-   &#42;&#42;Purpose:&#42;&#42; [Briefly describe the main goal and function of this page]
-   &#42;&#42;Data Access:&#42;&#42;
  [List the specific hooks and functions this component uses to fetch or manage its data]
-   &#42;&#42;Internal State:&#42;&#42;
    [Describe any state managed internally by this page using `useState`]
-   &#42;&#42;Composition:&#42;&#42;
    [Briefly describe the content of this page]
-   &#42;&#42;User Interactions:&#42;&#42;
    [Describe how the user interacts with this page] 
-   &#42;&#42;Logic:&#42;&#42;
  [If applicable, provide additional comments on how this page should work]

&#35;&#35; 3. Components

&#35;&#35;&#35; 3.1. `[Component Name]`

-   &#42;&#42;Location:&#42;&#42; `src/components/ComponentName.tsx`
-   &#42;&#42;Purpose:&#42;&#42; [Explain what this component does and where it's used]
-   &#42;&#42;Props:&#42;&#42;
  [TypeScript interface definition for the component's props. Props should be minimal. Avoid prop drilling by using hooks for data access.]
-   &#42;&#42;Data Access:&#42;&#42;
    [List the specific hooks and functions this component uses to fetch or manage its data]
-   &#42;&#42;Internal State:&#42;&#42;
    [Describe any state managed internally by this component using `useState`]
-   &#42;&#42;Composition:&#42;&#42;
    [Briefly describe the content of this component]
-   &#42;&#42;User Interactions:&#42;&#42;
    [Describe how the user interacts with the component]
-   &#42;&#42;Logic:&#42;&#42;
  [If applicable, provide additional comments on how this component should work]
  
```

--- 

&#35;&#35; Final Instructions

- &#42;&#42;No Assumptions:&#42;&#42; Base every detail on the visual evidence in the sketch, not on common design patterns. 
- &#42;&#42;Double-Check:&#42;&#42; After composing the entire document, read through it to ensure the hierarchy is logical, the descriptions are unambiguous, and the formatting is consistent. The final document should be a self-contained, comprehensive specification. 
- &#42;&#42;Do not add redundant empty lines between items.&#42;&#42; 

Your final output should be the complete, raw markdown content for `UI.md`.
</code></pre>
</div>
</p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-3" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 4: DAL Spec to Plan
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-3" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with creating a plan to build a Data Access Layer for an application based on a spec.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- `DAL.md`: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter.

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices
- `Zustand-guidelines.md`: Zustand Best Practices

&#42;&#42;Step 3:&#42;&#42; Create a step-by-step plan to build a Data Access Layer according to the spec. 

Each task should:

- Focus on one concern
- Be reasonably small
- Have a clear start + end
- Contain clearly defined Objectives and Acceptance Criteria

The last step of the plan should include creating a page to test all the capabilities of our Data Access Layer, and making it the start page of this application, so that I can manually check if it works properly. 

I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to review results in between.

&#35;&#35; Final Instructions
 
- Note that we are not starting from scratch; the basic template has already been created using Vite.
- Do not add redundant empty lines between items.

Your final output should be the complete, raw markdown content for `DAL-plan.md`.
</code></pre>
</div></p>
             </div>
         </dd>
          <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 5: UI Spec to Plan
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with creating a plan to build a UI layer for an application based on a spec and a sketch.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- `UI.md`: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- `Sketch.png`: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- `TS-guidelines.md`: TypeScript Best Practices
- `React-guidelines.md`: React Best Practices

&#42;&#42;Step 3:&#42;&#42; Create a step-by-step plan to build a UI layer according to the spec and the sketch. 

Each task must:

- Focus on one concern.
- Be reasonably small.
- Have a clear start + end.
- Result in a verifiable increment of the application. Each increment should be manually testable to allow for functional review and approval before proceeding.
- Contain clearly defined Objectives, Acceptance Criteria, and Manual Testing Plan.

I will hand this plan over to an engineering LLM that will be told to complete one task at a time, allowing me to test in between.

&#35;&#35; Final Instructions

- Note that we are not starting from scratch, the basic template has already been created using Vite, and the Data Access Layer has been built successfully.
- For every task, describe how components should be integrated for verification. You must use the provided hooks to connect to the live Zustand store data—do not use mock data (note that the Data Access Layer has been already built successfully).
- The Manual Testing Plan should read like a user guide. It must only contain actions a user can perform in the browser and must never reference any code files or programming tasks.
- Do not add redundant empty lines between items.

Your final output should be the complete, raw markdown content for `UI-plan.md`.
</code></pre>
</div>
</p>
             </div>
         </dd>         
         <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 6: DAL Plan to Code
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and Zustand. You are tasked with building a Data Access Layer for an application based on a spec.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. Follow it carefully and to the letter. 

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices
- @docs/guidelines/Zustand-guidelines.md: Zustand Best Practices

&#42;&#42;Step 3:&#42;&#42; Read the plan:

- @docs/plans/DAL-plan.md: The step-by-step plan to build the Data Access Layer of the application.

&#42;&#42;Step 4:&#42;&#42; Build a Data Access Layer for this application according to the spec and following the plan. 

- Complete one task from the plan at a time. 
- After each task, stop so that I can test it. Don’t move to the next task until I tell you to do so.
- Do not do anything else. At this point, we are focused on building the Data Access Layer.

&#35;&#35; Final Instructions

- Do not make assumptions based on common patterns; always verify them against the actual data from the spec.
- Do not start the development server; I'll do it myself.
</code></pre>
</div></p>
             </div>
         </dd>
         <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 7: UI Plan to Code
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">You are an expert Senior Frontend Developer specializing in React, TypeScript, and the Ant Design library. You are tasked with building a UI layer for an application based on a spec and a sketch.

&#35;&#35; Workflow

Follow these steps precisely:

&#42;&#42;Step 1:&#42;&#42; Analyze the documentation carefully:

- @docs/specs/UI.md: The full technical specification for the UI layer of the application. Follow it carefully and to the letter.
- @docs/intent/Sketch.png: Contains important information about the layout and style, complements the UI Layer Specification. The final UI must be as close to this sketch as possible.
- @docs/specs/DAL.md: The full technical specification for the Data Access Layer of the application. That layer is already ready. Use this spec to understand how to work with it. 

There should be no ambiguity about what we are building.

&#42;&#42;Step 2:&#42;&#42; Check out the guidelines:

- @docs/guidelines/TS-guidelines.md: TypeScript Best Practices
- @docs/guidelines/React-guidelines.md: React Best Practices

&#42;&#42;Step 3:&#42;&#42; Read the plan:

- @docs/plans/UI-plan.md: The step-by-step plan to build the UI layer of the application.

&#42;&#42;Step 4:&#42;&#42; Build a UI layer for this application according to the spec and the sketch, following the step-by-step plan: 

- Complete one task from the plan at a time. 
- Make sure you build the UI according to the sketch; this is very important.
- After each task, stop so that I can test it. Don’t move to the next task until I tell you to do so.

&#35;&#35; Final Instructions

- Do not make assumptions based on common patterns; always verify them with the actual data from the spec and the sketch. 
- Follow Ant Design's default styles and components. 
- Do not touch the data access layer: it's ready and it's perfect. 
- Do not start the development server; I'll do it myself.
</code></pre>
</div></p>
             </div>
         </dd>
         <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 8: TS-guidelines.md
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">&#35; Guidelines: TypeScript Best Practices

&#35;&#35; Type System & Type Safety

- Use TypeScript for all code and enable strict mode.
- Ensure complete type safety throughout stores, hooks, and component interfaces.
- Prefer interfaces over types for object definitions; use types for unions, intersections, and mapped types.
- Entity interfaces should extend common patterns while maintaining their specific properties.
- Use TypeScript type guards in filtering operations for relationship safety.
- Avoid the 'any' type; prefer 'unknown' when necessary.
- Use generics to create reusable components and functions.
- Utilize TypeScript's features to enforce type safety.
- Use type-only imports (import type { MyType } from './types') when importing types, because verbatimModuleSyntax is enabled.
- Avoid enums; use maps instead.
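
For illustration, the map-instead-of-enum and type-guard rules might combine like this (a sketch; `BlogPost` and the status values are made-up examples, not part of any spec):

```typescript
// A const map instead of an enum, with the union type derived from it.
const POST_STATUS = {
  draft: "draft",
  published: "published",
} as const;

type PostStatus = (typeof POST_STATUS)[keyof typeof POST_STATUS];

interface BlogPost {
  id: string;
  status: PostStatus;
}

// A type guard that narrows 'unknown' input, used in a filtering operation.
function isBlogPost(value: unknown): value is BlogPost {
  if (typeof value !== "object" || value === null) {
    return false;
  }
  const candidate = value as Record<string, unknown>;
  return (
    typeof candidate.id === "string" &&
    typeof candidate.status === "string" &&
    candidate.status in POST_STATUS
  );
}

const mixed: unknown[] = [{ id: "1", status: "draft" }, "noise", { id: "2", status: "published" }];
const posts: BlogPost[] = mixed.filter(isBlogPost);
```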

&#35;&#35; Naming Conventions

- Names should reveal intent and purpose.
- Use PascalCase for component names and types/interfaces.
- Suffix interfaces for React props with 'Props' (e.g., ButtonProps).
- Use camelCase for variables and functions.
- Use UPPER_CASE for constants.
- Use lowercase with dashes for directories, and PascalCase for files with components (e.g., components/auth-wizard/AuthForm.tsx).
- Use descriptive variable names with auxiliary verbs (e.g., isLoading, hasError).
- Favor named exports for components.

&#35;&#35; Code Structure & Patterns

- Write concise, technical TypeScript code with accurate examples.
- Use functional and declarative programming patterns; avoid classes.
- Prefer iteration and modularization over code duplication.
- Use the "function" keyword for pure functions.
- Use curly braces for all conditionals for consistency and clarity.
- Structure files appropriately based on their purpose.
- Keep related code together and encapsulate implementation details.

&#35;&#35; Performance & Error Handling

- Use immutable and efficient data structures and algorithms.
- Create custom error types for domain-specific errors.
- Use try-catch blocks with typed catch clauses.
- Handle Promise rejections and async errors properly.
- Log errors appropriately and handle edge cases gracefully.
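
As a sketch of the custom-error and typed-catch rules above (`EntityNotFoundError` and `requireEntity` are illustrative names, not part of any spec):

```typescript
// A domain-specific error type carrying the failing entity's id.
class EntityNotFoundError extends Error {
  constructor(public readonly entityId: string) {
    super(`Entity not found: ${entityId}`);
    this.name = "EntityNotFoundError";
  }
}

function requireEntity(dict: Record<string, { id: string }>, id: string): { id: string } {
  const entity = dict[id];
  if (!entity) {
    throw new EntityNotFoundError(id);
  }
  return entity;
}

let message = "";
try {
  requireEntity({}, "missing-id");
} catch (error: unknown) {
  // Narrow the 'unknown' catch value before using it; rethrow anything unexpected.
  if (error instanceof EntityNotFoundError) {
    message = error.message;
  } else {
    throw error;
  }
}
```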

&#35;&#35; Project Organization

- Place shared types in a types directory.
- Use barrel exports (index.ts) for organizing exports.
- Structure files and directories based on their purpose.

&#35;&#35; Other Rules

- Use comments to explain complex logic or non-obvious decisions.
- Follow the single responsibility principle: each function should do exactly one thing.
- Follow the DRY (Don't Repeat Yourself) principle.
- Do not implement placeholder functions, empty methods, or "just in case" logic. Code should serve the current specification's requirements only.
- Use 2 spaces for indentation (no tabs).
</code></pre>
</div></p>
             </div>
         </dd>
         <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 9: React-guidelines.md
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">&#35; Guidelines: React Best Practices

&#35;&#35; Component Structure

- Use functional components over class components
- Keep components small and focused
- Extract reusable logic into custom hooks
- Use composition over inheritance
- Implement proper prop types with TypeScript
- Structure React files: exported component, subcomponents, helpers, static content, types
- Use declarative TSX for React components
- Ensure that UI components use custom hooks for data fetching and operations rather than receive data via props, except for simplest components

&#35;&#35; React Patterns

- Utilize useState and useEffect hooks for state and side effects
- Use React.memo for performance optimization when needed
- Utilize React.lazy and Suspense for code-splitting
- Implement error boundaries for robust error handling
- Keep styles close to components

&#35;&#35; React Performance

- Avoid unnecessary re-renders
- Lazy load components and images when possible
- Implement efficient state management
- Optimize rendering strategies
- Optimize network requests
- Employ memoization techniques (e.g., React.memo, useMemo, useCallback)

&#35;&#35; React Project Structure

```
/src
- /components - UI components (every component in a separate file)
- /hooks - public-facing custom hooks (every hook in a separate file)
- /providers - React context providers (every provider in a separate file)
- /pages - page components (every page in a separate file)
- /stores - entity-specific Zustand stores (every store in a separate file)
- /styles - global styles (if needed)
- /types - shared TypeScript types and interfaces
```
</code></pre>
</div></p>
             </div>
         </dd>
         <dt tabindex="0" class="accordion-item" id="accordion-item-4" aria-expanded="false">
              <div class="book__toc__accordion-text">
                <div class="book__toc__chapter-col chapter__title">
                  Appendix 10: Zustand-guidelines.md
                </div>
              </div>
              <div class="accordion-expand-btn-wrapper">
                  <span class="accordion-expand-btn js-accordion-expand-btn">+</span>
              </div>
          </dt>
          <dd style="max-height: none;" class="accordion-desc" id="accordion-desc-4" aria-hidden="true">
              <div class="book__toc__chapter-col chapter__summary">
                <p><div class="break-out">
<pre><code class="language-markdown">&#35; Guidelines: Zustand Best Practices

&#35;&#35; Core Principles

- &#42;&#42;Implement a data layer&#42;&#42; for this React application following this specification carefully and to the letter.
- &#42;&#42;Complete separation of concerns&#42;&#42;: All data operations should be accessible in UI components through simple and clean entity-specific hooks, ensuring state management logic is fully separated from UI logic.
- &#42;&#42;Shared state architecture&#42;&#42;: Different UI components should work with the same shared state, despite using entity-specific hooks separately.

&#35;&#35; Technology Stack

- &#42;&#42;State management&#42;&#42;: Use Zustand for state management with automatic localStorage persistence via the `persist` middleware.

&#35;&#35; Store Architecture

- &#42;&#42;Base entity:&#42;&#42; Implement a BaseEntity interface with common properties that all entities extend:
```typescript 
export interface BaseEntity { 
  id: string; 
  createdAt: string; // ISO 8601 format 
  updatedAt: string; // ISO 8601 format 
}
```
- &#42;&#42;Entity-specific stores&#42;&#42;: Create separate Zustand stores for each entity type.
- &#42;&#42;Dictionary-based storage&#42;&#42;: Use dictionary/map structures (`Record<string, Entity>`) rather than arrays for O(1) access by ID.
- &#42;&#42;Handle relationships&#42;&#42;: Implement cross-entity relationships (like cascade deletes) within the stores where appropriate.

&#35;&#35; Hook Layer

The hook layer is the exclusive interface between UI components and the Zustand stores. It is designed to be simple, predictable, and follow a consistent pattern across all entities.

&#35;&#35;&#35; Core Principles

1.  &#42;&#42;One Hook Per Entity&#42;&#42;: There will be a single, comprehensive custom hook for each entity (e.g., `useBlogPosts`, `useCategories`). This hook is the sole entry point for all data and operations related to that entity. Separate hooks for single-item access will not be created.
2.  &#42;&#42;Return reactive data, not getter functions&#42;&#42;: To prevent stale data, hooks must return the state itself, not a function that retrieves state. Parameterize hooks to accept filters and return the derived data directly. A component calling a getter function will not update when the underlying data changes.
3.  &#42;&#42;Expose Dictionaries for O(1) Access&#42;&#42;: To provide simple and direct access to data, every hook will return a dictionary (`Record<string, Entity>`) of the relevant items.

&#35;&#35;&#35; The Standard Hook Pattern

Every entity hook will follow this implementation pattern:

1.  &#42;&#42;Subscribe&#42;&#42; to the entire dictionary of entities from the corresponding Zustand store. This ensures the hook is reactive to any change in the data.
2.  &#42;&#42;Filter&#42;&#42; the data based on the parameters passed into the hook. This logic will be memoized with `useMemo` for efficiency. If no parameters are provided, the hook will operate on the entire dataset.
3.  &#42;&#42;Return a Consistent Shape&#42;&#42;: The hook will always return an object containing:
    &#42;   A &#42;&#42;filtered and sorted array&#42;&#42; (e.g., `blogPosts`) for rendering lists.
    &#42;   A &#42;&#42;filtered dictionary&#42;&#42; (e.g., `blogPostsDict`) for convenient `O(1)` lookup within the component.
    &#42;   All necessary &#42;&#42;action functions&#42;&#42; (`add`, `update`, `remove`) and &#42;&#42;relationship operations&#42;&#42;.
    &#42;   All necessary &#42;&#42;helper functions&#42;&#42; and &#42;&#42;derived data objects&#42;&#42;. Helper functions are suitable for pure, stateless logic (e.g., calculators). Derived data objects are memoized values that provide aggregated or summarized information from the state (e.g., an object containing status counts). They must be derived directly from the reactive state to ensure they update automatically when the underlying data changes.
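
Stripped of the React and Zustand plumbing, the filter-and-shape step of this pattern (the derivation a hook would wrap in `useMemo`) might look like this, with illustrative types and names:

```typescript
interface BlogPost {
  id: string;
  title: string;
  categoryId: string;
  createdAt: string; // ISO 8601
}

// Pure derivation: filter the full dictionary by the hook's parameters,
// then return both a sorted array (newest first) and a filtered dictionary.
function deriveBlogPosts(
  all: Record<string, BlogPost>,
  filter?: { categoryId?: string },
): { blogPosts: BlogPost[]; blogPostsDict: Record<string, BlogPost> } {
  const blogPosts = Object.values(all)
    .filter((post) => !filter?.categoryId || post.categoryId === filter.categoryId)
    .sort((a, b) => b.createdAt.localeCompare(a.createdAt));
  const blogPostsDict = Object.fromEntries(blogPosts.map((post) => [post.id, post]));
  return { blogPosts, blogPostsDict };
}
```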

&#35;&#35; API Design Standards

- &#42;&#42;Object Parameters&#42;&#42;: Use object parameters instead of multiple direct parameters for better extensibility:
```typescript
// ✅ Preferred
add({ title, categoryIds })

// ❌ Avoid
add(title, categoryIds)
```
- &#42;&#42;Internal Methods&#42;&#42;: Use underscore-prefixed methods for cross-store operations to maintain clean separation.

&#35;&#35; State Validation Standards

- &#42;&#42;Existence checks&#42;&#42;: All `update` and `remove` operations should validate entity existence before proceeding.
- &#42;&#42;Relationship validation&#42;&#42;: Verify both entities exist before establishing relationships between them.

&#35;&#35; Error Handling Patterns

- &#42;&#42;Operation failures&#42;&#42;: Define behavior when operations fail (e.g., updating non-existent entities).
- &#42;&#42;Graceful degradation&#42;&#42;: Define how helper functions should handle missing related entities.

&#35;&#35; Other Standards

- &#42;&#42;Secure ID generation&#42;&#42;: Use `crypto.randomUUID()` for entity ID generation instead of custom implementations for better uniqueness guarantees and security.
- &#42;&#42;Return type consistency&#42;&#42;: `add` operations return generated IDs for component workflows requiring immediate entity access, while `update` and `remove` operations return `void` to maintain clean modification APIs.
</code></pre>
</div></p>
             </div>
         </dd>    
    <span></span></dl>
</div>
                

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Lyndon Cerejo</author><title>From Prompt To Partner: Designing Your Custom AI Assistant</title><link>https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/</link><pubDate>Fri, 26 Sep 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/</guid><description>What if your best AI prompts didn’t disappear into your unorganized chat history, but came back tomorrow as a reliable assistant? In this article, you’ll learn how to turn one-off “aha” prompts into reusable assistants that are tailored to your audience, grounded in your knowledge, and consistent every time, saving you (and your team) from typing the same 448-word prompt ever again. No coding, just designing, and by the end, you’ll have a custom AI assistant that can augment your team.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/from-prompt-to-partner-designing-custom-ai-assistant/" />
              <title>From Prompt To Partner: Designing Your Custom AI Assistant</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>From Prompt To Partner: Designing Your Custom AI Assistant</h1>
                  
                    
                    <address>Lyndon Cerejo</address>
                  
                  <time datetime="2025-09-26T10:00:00&#43;00:00" class="op-published">2025-09-26T10:00:00+00:00</time>
                  <time datetime="2025-09-26T10:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>In “<a href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/">A Week In The Life Of An AI-Augmented Designer</a>”, Kate stumbled her way through an AI-augmented sprint (coffee was chugged, mistakes were made). In “<a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/">Prompting Is A Design Act</a>”, we introduced WIRE+FRAME, a framework to structure prompts like designers structure creative briefs. Now we’ll take the next step: packaging those structured prompts into AI assistants you can design, reuse, and share.</p>

<p>AI assistants go by different names: CustomGPTs (ChatGPT), Agents (Copilot), and Gems (Gemini). But they all serve the same function &mdash; allowing you to customize the default AI model for your unique needs. If we carry over our smart intern analogy, think of these as interns trained to assist you with specific tasks, eliminating the need for repeated instructions or information, and who can support not just you, but your entire team.</p>

<h2 id="why-build-your-own-assistant">Why Build Your Own Assistant?</h2>

<p>If you’ve ever copied and pasted the same mega-prompt for the n<sup>th</sup> time, you’ve experienced the pain. An AI assistant turns a one-off “great prompt” into a dependable teammate. And if you’ve used any of the publicly available AI Assistants, you’ve realized quickly that they’re usually generic and not tailored for your use.</p>

<p>Public AI assistants are great for inspiration, but nothing beats an assistant that solves a repeated problem for you and your team, in <strong>your voice</strong>, with <strong>your context and constraints</strong> baked in. Instead of writing new prompts each time, copy-pasting your structured prompts over and over, or spending cycles trying to bend a public AI Assistant to your needs, your own AI Assistant lets you and others get better, more repeatable, more consistent results faster.</p>

<h3 id="benefits-of-reusing-prompts-even-your-own">Benefits Of Reusing Prompts, Even Your Own</h3>

<p>Some of the benefits of building your own AI Assistant over writing or reusing your prompts include:</p>

<ul>
<li><strong>Focused on a real repeating problem</strong><br />
A good AI Assistant isn’t a general-purpose “do everything” bot that you need to keep tweaking. It focuses on a single, recurring problem that takes a long time to complete manually and often results in varying quality depending on who’s doing it (e.g., analyzing customer feedback).</li>
<li><strong>Customized for your context</strong><br />
Most large language models (LLMs, such as ChatGPT) are designed to be everything to everyone. An AI Assistant changes that by allowing you to customize it to automatically work like you want it to, instead of a generic AI.</li>
<li><strong>Consistency at scale</strong><br />
You can use the <a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/#anatomy-structure-it-like-a-designer">WIRE+FRAME prompt framework</a> to create structured, reusable prompts. An AI Assistant is the next logical step: instead of copy-pasting that fine-tuned prompt and sharing contextual information and examples each time, you can bake it into the assistant itself, allowing you and others to achieve the same consistent results every time.</li>
<li><strong>Codifying expertise</strong><br />
Every time you turn a great prompt into an AI Assistant, you’re essentially bottling your expertise. Your assistant becomes a living design guide that outlasts projects (and even job changes).</li>
<li><strong>Faster ramp-up for teammates</strong><br />
Instead of new designers starting from a blank slate, they can use pre-tuned assistants. Think of it as knowledge transfer without the long onboarding lecture.</li>
</ul>


<h3 id="reasons-for-your-own-ai-assistant-instead-of-public-ai-assistants">Reasons For Your Own AI Assistant Instead Of Public AI Assistants</h3>

<p>Public AI assistants are like stock templates. They serve a more specific purpose than the generic AI platform and are useful starting points, but if you want something tailored to your needs and your team, you should build your own.</p>

<p>A few reasons for building your AI Assistant instead of using a public assistant someone else created include:</p>

<ul>
<li><strong>Fit</strong>: Public assistants are built for the masses. Your work has quirks, tone, and processes they’ll never quite match.</li>
<li><strong>Trust &amp; Security</strong>: You don’t control what instructions or hidden guardrails someone else baked in. With your own assistant, you know exactly what it will (and won’t) do.</li>
<li><strong>Evolution</strong>: An AI Assistant you design and build can grow with your team. You can update files, tweak prompts, and maintain a changelog &mdash; things a public bot won’t do for you.</li>
</ul>

<p>Your own AI Assistants allow you to take your successful ways of interacting with AI and make them repeatable and shareable. And while they are tailored to your and your team’s way of working, remember that they are still based on generic AI models, so the usual AI disclaimers apply:</p>

<p><em>Don’t share anything you wouldn’t want screenshotted in the next company all-hands. Keep it safe, private, and user-respecting. A shared AI Assistant can potentially reveal its inner workings or data.</em></p>

<p><strong><em>Note</em></strong>: <em>We will be building an AI assistant using ChatGPT, aka a CustomGPT, but you can try the same process with any decent LLM sidekick. As of publication, a paid account is required to create CustomGPTs, but once created, they can be shared and used by anyone, regardless of whether they have a paid or free account. Similar limitations apply to the other platforms. Just remember that outputs can vary depending on the LLM model used, the model’s training, mood, and flair for creative hallucinations.</em></p>

<h3 id="when-not-to-build-an-ai-assistant-yet">When Not To Build An AI Assistant (Yet)</h3>

<p>An AI Assistant is great when the <em>same</em> audience has the <em>same</em> problem <em>often</em>. When the fit isn’t there, the risk is high; you should skip building an AI Assistant for now, as explained below:</p>

<ul>
<li><strong>One-off or rare tasks</strong><br />
If it won’t be reused at least monthly, I’d recommend keeping it as a saved WIRE+FRAME prompt. For example, something for a one-time audit or creating placeholder content for a specific screen.</li>
<li><strong>Sensitive or regulated data</strong><br />
If you need to build in personally identifiable information (PII), health, finance, legal, or trade secrets, err on the side of not building an AI Assistant. Even if the AI platform promises not to use your data, I’d strongly suggest using redaction or an approved enterprise tool with necessary safeguards in place (company-approved enterprise versions of Microsoft Copilot, for instance).</li>
<li><strong>Heavy orchestration or logic</strong><br />
Multi-step workflows, API calls, database writes, and approvals go beyond the scope of an AI Assistant into Agentic territory (as of now). I’d recommend not trying to build an AI Assistant for these cases.</li>
<li><strong>Real-time information</strong><br />
AI Assistants may not be able to access real-time data like prices, live metrics, or breaking news. If you need these, you can upload near-real-time data (as we do below) or connect with data sources that you or your company controls, rather than relying on the open web.</li>
<li><strong>High-stakes outputs</strong><br />
For cases related to compliance, legal, medical, or any other area requiring auditability, consider implementing process guardrails and training to keep humans in the loop for proper review and accountability.</li>
<li><strong>No measurable win</strong><br />
If you can’t name a success metric (such as time saved, first-draft quality, or fewer re-dos), I’d recommend keeping it as a saved WIRE+FRAME prompt.</li>
</ul>

<p>Just because these are signs that you should not build your AI Assistant now doesn’t mean you shouldn’t ever. Revisit this decision when you notice that you’re using the same prompt weekly, multiple teammates ask for it, or the manual time spent copy-pasting and refining starts exceeding ~15 minutes. Those are signs that an AI Assistant will pay back quickly.</p>

<p>In a nutshell, build an AI Assistant when you can name the problem, the audience, frequency, and the win. The rest of this article shows how to turn your successful WIRE+FRAME prompt into a CustomGPT that you and your team can actually use. No advanced knowledge, coding skills, or hacks needed.</p>

<h2 id="as-always-start-with-the-user">As Always, Start With The User</h2>

<p>This should go without saying to UX professionals, but it’s worth a reminder: if you’re building an AI assistant for anyone besides yourself, start with the user and their needs before you build anything.</p>

<ul>
<li>Who will use this assistant?</li>
<li>What’s the specific pain or task they struggle with today?</li>
<li>What language, tone, and examples will feel natural to them?</li>
</ul>

<p>Building without doing this first is a sure way to end up with clever assistants nobody actually wants to use. Think of it like any other product: before you build features, you understand your audience. The same rule applies here, even more so, because AI assistants are only as helpful as they are useful and usable.</p>

<h2 id="from-prompt-to-assistant">From Prompt To Assistant</h2>

<p>You’ve already done the heavy lifting with WIRE+FRAME. Now you’re just turning that refined and reliable prompt into a CustomGPT you can reuse and share. You can use MATCH as a checklist to go from a great prompt to a useful AI assistant.</p>

<ul>
<li><strong>M: Map your prompt</strong><br />
Port your successful WIRE+FRAME prompt into the AI assistant.</li>
<li><strong>A: Add knowledge and training</strong><br />
Ground the assistant in <em>your</em> world. Upload knowledge files, examples, or guides that make it uniquely yours.</li>
<li><strong>T: Tailor for audience</strong><br />
Make it feel natural to the people who will use it. Give it the right capabilities, but also adjust its settings, tone, examples, and conversation starters so they land with your audience.</li>
<li><strong>C: Check, test, and refine</strong><br />
Test the preview with different inputs and refine until you get the results you expect.</li>
<li><strong>H: Hand off and maintain</strong><br />
Set sharing options and permissions, share the link, and maintain it.</li>
</ul>

<p>A few weeks ago, we invited readers to share their ideas for AI assistants they wished they had. The top contenders were:</p>

<ul>
<li><strong>Prototype Prodigy</strong>: Transform rough ideas into prototypes and export them into Figma to refine.</li>
<li><strong>Critique Coach</strong>: Review wireframes or mockups and point out accessibility and usability gaps.</li>
</ul>

<p>But the favorite was an AI assistant to turn tons of customer feedback into actionable insights. Readers replied with variations of: <em>“An assistant that can quickly sort through piles of survey responses, app reviews, or open-ended comments and turn them into themes we can act on.”</em></p>

<p>And that’s the one we will build in this article &mdash; say hello to <strong>Insight Interpreter.</strong></p>


<h2 id="walkthrough-insight-interpreter">Walkthrough: Insights Interpreter</h2>

<p>Having lots of customer feedback is a nice problem to have. Companies actively seek out customer feedback through surveys and studies (solicited), but also receive feedback that may not have been asked for through social media or public reviews (unsolicited). This is a goldmine of information, but making sense of it all can be messy and overwhelming, and it’s nobody’s idea of fun. Here’s where an AI assistant like the Insights Interpreter can help. We’ll turn the example prompt created using the WIRE+FRAME framework in <a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/">Prompting Is A Design Act</a> into a CustomGPT.</p>

<p>When you start building a CustomGPT by visiting <a href="https://chat.openai.com/gpts/editor?utm_source=chatgpt.com">https://chat.openai.com/gpts/editor</a>, you’ll see two paths:</p>

<ul>
<li><strong>Conversational interface</strong><br />
Vibe-chat your way &mdash; it’s easy and quick, but as with unstructured prompts, your inputs get baked in a little messily, so you may end up with vague or inconsistent instructions.</li>
<li><strong>Configure interface</strong><br />
The structured form where you type instructions, upload files, and toggle capabilities. Less instant gratification, less winging it, but more control. This is the option you’ll want for assistants you plan to share or depend on regularly.</li>
</ul>

<p>The good news is that MATCH works for both. In conversational mode, you can use it as a mental checklist, and we’ll walk through using it in configure mode as a more formal checklist in this article.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="451"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png"
			
			sizes="100vw"
			alt="CustomGPT Configure Interface"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      CustomGPT Configure Interface. (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/1-customgpt-configure-interface.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="m-map-your-prompt">M: Map Your Prompt</h3>

<p>Paste your full WIRE+FRAME prompt into the <em>Instructions</em> section exactly as written. As a refresher, I’ve included the mapping and snippets of the detailed prompt from before:</p>

<ul>
<li><strong>W</strong>ho &amp; What: The AI persona and the core deliverable (<em>“…senior UX researcher and customer insights analyst… specialize in synthesizing qualitative data from diverse sources…”</em>).</li>
<li><strong>I</strong>nput Context: Background or data scope to frame the task (<em>“…analyzing customer feedback uploaded from sources such as…”</em>).</li>
<li><strong>R</strong>ules &amp; Constraints: Boundaries (<em>“…do not fabricate pain points, representative quotes, journey stages, or patterns…”</em>).</li>
<li><strong>E</strong>xpected Output: Format and fields of the deliverable (<em>“…a structured list of themes. For each theme, include…”</em>).</li>
<li><strong>F</strong>low: Explicit, ordered sub-tasks (<em>“Recommended flow of tasks: Step 1…”</em>).</li>
<li><strong>R</strong>eference Voice: Tone, mood, or reference (<em>“…concise, pattern-driven, and objective…”</em>).</li>
<li><strong>A</strong>sk for Clarification: Ask questions if unclear (<em>“…if data is missing or unclear, ask before continuing…”</em>).</li>
<li><strong>M</strong>emory: Memory to recall earlier definitions (<em>“Unless explicitly instructed otherwise, keep using this process…”</em>).</li>
<li><strong>E</strong>valuate &amp; Iterate: Have the AI self-critique outputs (<em>“…critically evaluate…suggest improvements…”</em>).</li>
</ul>

<p>If you’re building Copilot Agents or Gemini Gems instead of CustomGPTs, you still paste your WIRE+FRAME prompt into their respective <em>Instructions</em> sections.</p>

<h3 id="a-add-knowledge-and-training">A: Add Knowledge And Training</h3>

<p>In the knowledge section, upload up to 20 clearly labeled files that will help the CustomGPT respond effectively. Keep files small and versioned: <em>reviews_Q2_2025.csv</em> beats <em>latestfile_final2.csv</em>. For this prompt, which analyzes customer feedback, generates themes organized by customer journey stage, and rates them by severity and effort, files could include:</p>

<ul>
<li>Taxonomy of themes;</li>
<li>Instructions on parsing uploaded data;</li>
<li>Examples of real UX research reports using this structure;</li>
<li>Scoring guidelines for severity and effort, e.g., what makes something a 3 vs. a 5 in severity;</li>
<li>Customer journey map stages;</li>
<li>Customer feedback file templates (not actual data).</li>
</ul>
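<p>If your feedback lives in one big export, producing those small, versioned files can be scripted. The sketch below is an illustration rather than part of the workflow itself: it assumes your export is a CSV with an ISO-formatted <code>date</code> column (your columns may differ) and splits it into quarterly files named like <em>reviews_Q2_2025.csv</em>.</p>

```python
import csv
from collections import defaultdict
from datetime import datetime

def split_feedback_by_quarter(path):
    """Split one large feedback export into small, versioned CSV files
    (e.g., reviews_Q2_2025.csv) that are easier to label and maintain."""
    rows_by_quarter = defaultdict(list)
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.DictReader(f)
        fieldnames = reader.fieldnames
        for row in reader:
            # Assumes an ISO-formatted "date" column, e.g., 2025-04-02.
            d = datetime.fromisoformat(row["date"])
            quarter = f"Q{(d.month - 1) // 3 + 1}_{d.year}"
            rows_by_quarter[quarter].append(row)
    written = []
    for quarter, rows in rows_by_quarter.items():
        out_name = f"reviews_{quarter}.csv"
        with open(out_name, "w", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=fieldnames)
            writer.writeheader()
            writer.writerows(rows)
        written.append(out_name)
    return written
```

<p>Each resulting file stays small and self-describing, which also makes it easy to refresh a single quarter later without touching the rest.</p>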

<p>An example of a file to help it parse uploaded data is shown below:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="447"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png"
			
			sizes="100vw"
			alt="GPT file parsing instructions"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/2-gpt-file-parsing-instructions.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="t-tailor-for-audience">T: Tailor For Audience</h3>

<ul>
<li><strong>Audience tailoring</strong><br />
If you are building this for others, your prompt should have addressed tone in the “Reference Voice” section. If you didn’t, do it now, so the CustomGPT can be tailored to the tone and expertise level of users who will use it. In addition, use the <em>Conversation starters</em> section to add a few examples or common prompts for users to start using the CustomGPT, again, worded for your users. For instance, we could use “Analyze feedback from the attached file” for our Insights Interpreter to make it more self-explanatory for anyone, instead of “Analyze data,” which may be good enough if you were using it alone. For my Designerly Curiosity GPT, assuming that users may not know what it could do, I use “What are the types of curiosity?” and “Give me a micro-practice to spark curiosity”.</li>
<li><strong>Functional tailoring</strong><br />
Fill in the CustomGPT name, icon, description, and capabilities.

<ul>
<li><em>Name</em>: Pick one that will make it clear what the CustomGPT does. Let’s use “Insights Interpreter &mdash; Customer Feedback Analyzer”. If needed, you can also add a version number. This name will show up in the sidebar when people use it or pin it, so make the first part memorable and easily identifiable.</li>
<li><em>Icon</em>: Upload an image or generate one. Keep it simple so it can be easily recognized in a smaller dimension when people pin it in their sidebar.</li>
<li><em>Description</em>: A brief, yet clear description of what the CustomGPT can do. If you plan to list it in the GPT store, this will help people decide if they should pick yours over something similar.</li>
<li><em>Recommended Model</em>: If your CustomGPT needs the capabilities of a particular model (e.g., needs GPT-5 thinking for detailed analysis), select it. In most cases, you can safely leave it up to the user or select the most common model.</li>
<li><em>Capabilities</em>: Turn off anything you won’t need. We’ll turn off “Web Search” so the CustomGPT focuses only on uploaded data without searching online, and turn on “Code Interpreter &amp; Data Analysis” so it can understand and process uploaded files. “Canvas” lets users edit writing tasks with the GPT on a shared canvas, and “Image generation” is needed only if the CustomGPT should create images.</li>
<li><em>Actions</em>: These make <a href="https://platform.openai.com/docs/actions/introduction">third-party APIs</a> available to the CustomGPT; it’s advanced functionality we don’t need here.</li>
<li><em>Additional Settings</em>: This option is tucked away and opted in by default; I opt out of letting conversation data train OpenAI’s models.</li>
</ul></li>
</ul>

<h3 id="c-check-test-refine">C: Check, Test &amp; Refine</h3>

<p>Do one last visual check to make sure you’ve filled in all applicable fields and the basics are in place: Is the concept sharp and focused (not a do-everything bot)? Are the roles, goals, and tone clear? Do we have the right assets (docs, guides) to support it? Is the flow simple enough that others can get started easily? Once those boxes are checked, move into testing.</p>

<p>Use the <em>Preview</em> panel to verify that your CustomGPT performs as well, or better, than your original WIRE+FRAME prompt, and that it works for your intended audience. Try a few representative inputs and compare the results to what you expected. If something worked before but doesn’t now, check whether new instructions or knowledge files are overriding it.</p>

<p>When things don’t look right, here are quick debugging fixes:</p>

<ul>
<li><strong>Generic answers?</strong><br />
Tighten <em>Input Context</em> or update the knowledge files.</li>
<li><strong>Hallucinations?</strong><br />
Revisit your <em>Rules</em> section. Turn off web browsing if you don’t need external data.</li>
<li><strong>Wrong tone?</strong><br />
Strengthen <em>Reference Voice</em> or swap in clearer examples.</li>
<li><strong>Inconsistent?</strong><br />
Test across models in preview and set the most reliable one as “Recommended.”</li>
</ul>

<h3 id="h-hand-off-and-maintain">H: Hand Off And Maintain</h3>

<p>When your CustomGPT is ready, you can publish it via the “Create” option. Select the appropriate access option:</p>

<ul>
<li><strong>Only me</strong>: Private use. Perfect if you’re still experimenting or keeping it personal.</li>
<li><strong>Anyone with the link</strong>: Exactly what it means. Shareable but not searchable. Great for pilots with a team or small group. Just remember that links can be reshared, so treat them as semi-public.</li>
<li><strong>GPT Store</strong>: Fully public. Your assistant is listed and findable by anyone browsing the store. <em>(This is the option we’ll use.)</em></li>
<li><strong>Business workspace</strong> (if you’re on GPT Business): Share with others within your business account only &mdash; the easiest way to keep it in-house and controlled.</li>
</ul>

<p>But handoff doesn’t end when you hit publish; you should maintain the assistant to keep it relevant and useful:</p>

<ul>
<li><strong>Collect feedback</strong>: Ask teammates what worked, what didn’t, and what they had to fix manually.</li>
<li><strong>Iterate</strong>: Apply changes directly or duplicate the GPT if you want multiple versions in play. You can find all your CustomGPTs at: <a href="https://chatgpt.com/gpts/mine">https://chatgpt.com/gpts/mine</a></li>
<li><strong>Track changes</strong>: Keep a simple changelog (date, version, updates) for traceability.</li>
<li><strong>Refresh knowledge</strong>: Update knowledge files and examples on a regular cadence so answers don’t go stale.</li>
</ul>

<p>And that’s it! <a href="https://go.cerejo.com/insights-interpreter">Our Insights Interpreter is now live!</a></p>

<p>Since we used the WIRE+FRAME prompt from the previous article to create the Insights Interpreter CustomGPT, I compared the outputs:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="325"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png"
			
			sizes="100vw"
			alt="Results of the structured WIRE&#43;FRAME prompt from the previous article"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Results of the structured WIRE+FRAME prompt from the previous article. (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/3-results-structured-wire-frame-prompt.png'>Large preview</a>)
    </figcaption>
  
</figure>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="276"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png"
			
			sizes="100vw"
			alt="Results of the Insights Interpreter CustomGPT based on the same prompt"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Results of the Insights Interpreter CustomGPT based on the same prompt. (<a href='https://files.smashing.media/articles/from-prompt-to-partner-designing-custom-ai-assistant/4-results-insights-interpreter-customgpt.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The results are similar, with slight differences, and that’s expected. If you compare them carefully, the themes, issues, journey stages, frequency, severity, and estimated effort match, with some differences in the wording of the theme, issue summary, and problem statement. The opportunities and quotes show more visible differences. Most of the difference comes from the CustomGPT’s knowledge and training files (instructions, examples, and guardrails), which now act as always-on guidance.</p>

<p>Keep in mind that Generative AI is non-deterministic by nature, so outputs will vary. Even with the same data, you won’t get identical wording every time. In addition, the underlying models and their capabilities change rapidly. If you want to keep things as consistent as possible, recommend a model (though people can change it), track versions of your data, and compare results for structure, priorities, and evidence rather than exact wording.</p>
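<p>That kind of structural comparison can itself be scripted. The sketch below is an illustration, not part of the CustomGPT: it assumes each run has been parsed into a list of theme dicts, and the field names (<code>journey_stage</code>, <code>severity</code>, <code>frequency</code>) are placeholders for whatever your own output format uses.</p>

```python
def structural_overlap(run_a, run_b):
    """Score how closely two analysis runs agree on structure and
    priorities (journey stage, severity, frequency), ignoring wording.
    Returns a ratio between 0.0 and 1.0."""
    def signature(themes):
        # Reduce each theme to its structural facts and sort, so the
        # comparison doesn't depend on the order themes were listed in.
        return sorted(
            (t["journey_stage"], t["severity"], t["frequency"]) for t in themes
        )
    sig_a, sig_b = signature(run_a), signature(run_b)
    matches = sum(1 for a, b in zip(sig_a, sig_b) if a == b)
    return matches / max(len(sig_a), len(sig_b), 1)
```

<p>Two runs that surface the same themes with the same scores, in any order and any wording, score 1.0; a run that changes a severity rating or drops a theme scores lower, which is the kind of drift worth investigating.</p>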

<p>While I’d love for you to use Insights Interpreter, I strongly recommend taking 15 minutes to follow the steps above and create your own, tailored to what you or your team actually needs, including the tone, context, and output formats, so you end up with the real AI assistant you need!</p>


<h2 id="inspiration-for-other-ai-assistants">Inspiration For Other AI Assistants</h2>

<p>We just built the Insights Interpreter and mentioned two contenders: Critique Coach and Prototype Prodigy. Here are a few other realistic uses that can spark ideas for your own AI assistant:</p>

<ul>
<li><strong>Workshop Wizard</strong>: Generates workshop agendas, produces icebreaker questions, and drafts follow-up surveys.</li>
<li><strong>Research Roundup Buddy</strong>: Summarizes raw transcripts into key themes, then creates highlight reels (quotes + visuals) for team share-outs.</li>
<li><strong>Persona Refresher</strong>: Updates stale personas with the latest customer feedback, then rewrites them in different tones (boardroom formal vs. design-team casual).</li>
<li><strong>Content Checker</strong>: Proofs copy for tone, accessibility, and reading level before it ever hits your site.</li>
<li><strong>Trend Tamer</strong>: Scans competitor reviews and identifies emerging patterns you can act on before they reach your roadmap.</li>
<li><strong>Microcopy Provocateur</strong>: Tests alternate copy options by injecting different tones (sassy, calm, ironic, nurturing) and role-playing how users might react, especially useful for error states or calls to action.</li>
<li><strong>Ethical UX Debater</strong>: Challenges your design decisions and deceptive designs by simulating the voice of an ethics board or concerned user.</li>
</ul>

<p>The best AI assistants come from carefully inspecting your workflow and looking for the regular, repetitive areas where AI can augment your work. Then follow the steps above to build a team of customized AI assistants.</p>

<h2 id="ask-me-anything-about-assistants">Ask Me Anything About Assistants</h2>

<ul>
<li><strong>What are some limitations of a CustomGPT?</strong><br />
Right now, the best parallel for AI is a very smart intern with access to a lot of information. CustomGPTs still run on LLMs that are trained on vast amounts of data and programmed to predictively generate responses based on that data, including any bias, misinformation, or incomplete information it contains. Keeping that in mind, you can help that intern deliver better, more relevant results by treating your uploads as onboarding docs, your guardrails as a job description, and your updates as retraining.</li>
<li><strong>Can I copy someone else’s public CustomGPT and tweak it?</strong><br />
Not directly, but if another CustomGPT inspires you, you can look at how it’s framed and rebuild your own using WIRE+FRAME &amp; MATCH. That way, you make it your own and have full control of the instructions, files, and updates. Google’s equivalent, Gemini Gems, works differently: shared Gems behave like shared Google Docs, so once a Gem is shared, any instructions and files you have uploaded can be viewed by any user with access to it, and any user with edit access can also update or delete it.</li>
<li><strong>How private are my uploaded files?</strong><br />
The files you upload are stored and used to answer prompts to your CustomGPT. If your CustomGPT is not private or you didn’t disable the hidden setting to allow CustomGPT conversations to improve the model, that data could be referenced. Don’t upload sensitive, confidential, or personal data you wouldn’t want circulating. Enterprise accounts do have some protections, so check with your company.</li>
<li><strong>How many files can I upload, and does size matter?</strong><br />
Limits vary by platform, but smaller, specific files usually perform better than giant docs. Think “chapter” instead of “entire book.” At the time of publishing, CustomGPTs allow up to 20 files, Copilot Agents up to 200 (if you need anywhere near that many, chances are your agent is not focused enough), and Gemini Gems up to 10.</li>
<li><strong>What’s the difference between a CustomGPT and a Project?</strong><br />
A CustomGPT is a focused assistant, like an intern trained to do one role well (like “Insights Interpreter”). A Project is more like a workspace where you can group multiple prompts, files, and conversations together for a broader effort. CustomGPTs are specialists; Projects are containers. If you want something reusable, shareable, and role-specific, build a CustomGPT. If you want to organize broader work with multiple tools, outputs, and shared knowledge, Projects are the better fit.</li>
</ul>

<h2 id="from-reading-to-building">From Reading To Building</h2>

<p>In this AI x Design series, we’ve gone from messy prompting (“<a href="https://www.smashingmagazine.com/2025/08/week-in-life-ai-augmented-designer/">A Week In The Life Of An AI-Augmented Designer</a>”) to a structured prompt framework, WIRE+FRAME (“<a href="https://www.smashingmagazine.com/2025/08/prompting-design-act-brief-guide-iterate-ai/">Prompting Is A Design Act</a>”). And now, in this article, your very own reusable AI sidekick.</p>

<p>CustomGPTs don’t replace designers but augment them. The real magic isn’t in the tool itself, but in <em>how</em> you design and manage it. You can use public CustomGPTs for inspiration, but the ones that truly fit your workflow are the ones you design yourself. They <strong>extend your craft</strong>, <strong>codify your expertise</strong>, and give your team leverage that generic AI models can’t.</p>

<p>Build one this week. Even better, today. Train it, share it, stress-test it, and refine it into an AI assistant that can augment your team.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Yegor Gilyov</author><title>Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)</title><link>https://www.smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/</link><pubDate>Wed, 24 Sep 2025 17:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/</guid><description>Yegor Gilyov examines the problem of over-reliance on static high-fidelity mockups, which often leave the conceptual model and user flows dangerously underdeveloped. He then explores whether AI-powered prototyping is the answer, questioning whether the path forward is the popular “vibe coding” approach or a more structured, intent-driven approach.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/intent-prototyping-pure-vibe-coding-enterprise-ux/" />
              <title>Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Intent Prototyping: The Allure And Danger Of Pure Vibe Coding In Enterprise UX (Part 1)</h1>
                  
                    
                    <address>Yegor Gilyov</address>
                  
                  <time datetime="2025-09-24T17:00:00&#43;00:00" class="op-published">2025-09-24T17:00:00+00:00</time>
                  <time datetime="2025-09-24T17:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>There is a spectrum of opinions on how dramatically all creative professions will be changed by the coming wave of agentic AI, from the very skeptical to the wildly optimistic and even apocalyptic. I think that even if you are on the “skeptical” end of the spectrum, it makes sense to explore ways this new technology can help with your everyday work. As for my everyday work, I’ve been doing UX and product design for about 25 years now, and I’m always keen to learn new tricks and share them with colleagues. Right now, I’m interested in <strong>AI-assisted prototyping</strong>, and I’m here to share my thoughts on how it can change the process of designing digital products.</p>

<p>To set your expectations up front: this exploration focuses on a specific part of the product design lifecycle. Many people know about the Double Diamond framework, which shows the path from problem to solution. However, I think it’s the <a href="https://uxdesign.cc/why-the-double-diamond-isnt-enough-adaa48a8aec1">Triple Diamond model</a> that makes an important point for our needs. It explicitly separates the solution space into two phases: Solution Discovery (ideating and validating the right concept) and Solution Delivery (engineering the validated concept into a final product). This article is focused squarely on that middle diamond: <strong>Solution Discovery</strong>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="593"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png"
			
			sizes="100vw"
			alt="Diagram of the Triple Diamond model: Problem Discovery, Solution Discovery, and Solution Delivery."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Triple Diamond model and the prototyping sweet spot. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/01-diagram-triple-diamond-model.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>How AI can help with the preceding (Problem Discovery) and the following (Solution Delivery) stages is out of the scope of this article. Problem Discovery is less about prototyping and more about research, and while I believe AI can revolutionize the research process as well, I’ll leave that to people more knowledgeable in the field. As for Solution Delivery, it is more about engineering optimization. There’s no doubt that software engineering in the AI era is undergoing dramatic changes, but I’m not an engineer &mdash; I’m a designer, so let me focus on my “sweet spot”.</p>

<p>And my “sweet spot” has a specific flavor: <strong>designing enterprise applications</strong>. In this world, the main challenge is taming complexity: dealing with complicated data models and guiding users through non-linear workflows. This background has had a big impact on my approach to design, putting a lot of emphasis on the underlying logic and structure. This article explores the potential of AI through this lens.</p>

<p>I’ll start by outlining the typical artifacts designers create during Solution Discovery. Then, I’ll examine the problems with how this part of the process often plays out in practice. Finally, we’ll explore whether AI-powered prototyping can offer a better approach, and if so, whether it aligns with what people call “vibe coding,” or calls for a more deliberate and disciplined way of working.</p>


<h2 id="what-we-create-during-solution-discovery">What We Create During Solution Discovery</h2>

<p>The Solution Discovery phase begins with the key output from the preceding research: <strong>a well-defined problem</strong> and <strong>a core hypothesis for a solution</strong>. This is our starting point. The artifacts we create from here are all aimed at turning that initial hypothesis into a tangible, testable concept.</p>

<p>Traditionally, at this stage, designers can produce artifacts of different kinds, progressively increasing fidelity: from napkin sketches, boxes-and-arrows, and conceptual diagrams to hi-fi mockups, then to interactive prototypes, and in some cases even live prototypes. Artifacts of lower fidelity allow fast iteration and enable the exploration of many alternatives, while artifacts of higher fidelity help to understand, explain, and validate the concept in all its details.</p>

<p>It’s important to <strong>think holistically</strong>, considering different aspects of the solution. I would highlight three dimensions:</p>

<ol>
<li><strong>Conceptual model</strong>: Objects, relations, attributes, actions;</li>
<li><strong>Visualization</strong>: Screens, from rough sketches to hi-fi mockups;</li>
<li><strong>Flow</strong>: From the very high-level user journeys to more detailed ones.</li>
</ol>

<p>One can argue that those are layers rather than dimensions, and each of them builds on the previous ones (for example, according to <a href="https://www.interaction-design.org/literature/article/the-magic-of-semantic-interaction-design?srsltid=AfmBOoq4-4YG8RR7SDZn7CX1GJ1ZKNdiZx-trER7oKCefud3V2TjeumD">Semantic IxD</a> by Daniel Rosenberg), but I see them more as different facets of the same thing, so the design process through them is not necessarily linear: you may need to switch from one perspective to another many times.</p>
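<p>To make the conceptual-model dimension concrete: even before any screen exists, the objects, attributes, relations, and actions can be captured as plain type definitions. The sketch below borrows the test-tracker domain that appears later in this article; all names, and the ICE formula shown (here, the product of the three scores), are illustrative assumptions rather than any real system’s code:</p>

```typescript
// A minimal sketch of a conceptual model captured as types:
// objects, attributes, one relation, and one action.
// All names and the ICE formula are illustrative assumptions.

type TestStatus = "New" | "Planned" | "In Progress" | "Proven" | "Disproven";

interface ProductIdea {
  id: string;
  name: string;
  impact: number;      // assessed score, e.g. 1-10
  confidence: number;  // assessed score, e.g. 1-10
  ease: number;        // assessed score, e.g. 1-10
}

interface ValidationTest {
  id: string;
  ideaId: string;      // relation: every test belongs to one product idea
  hypothesis: string;  // "we believe that..."
  experiment: string;  // "to verify that, we will..."
  status: TestStatus;
}

// An action, expressed as a pure function over the model.
function iceScore(idea: ProductIdea): number {
  return idea.impact * idea.confidence * idea.ease;
}
```

<p>A dozen lines like these force decisions (Is the relation mandatory? Where does the score live?) that a mockup lets you postpone.</p>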

<p>This is how different types of design artifacts map to these dimensions:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="596"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png"
			
			sizes="100vw"
			alt="Diagram mapping design artifacts to dimensions of Conceptual Model, Visualization, and Flow."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Mapping design artifacts to dimensions of Conceptual Model, Visualization, and Flow. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/02-mapping-design-artifacts.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>As Solution Discovery progresses, designers move from the left part of this map to the right, from low-fidelity to high-fidelity, from ideating to validating, from diverging to converging.</p>

<p>Note that at the beginning of the process, different dimensions are supported by artifacts of different types (boxes-and-arrows, sketches, class diagrams, etc.), and only closer to the end can you build a live prototype that encompasses all three dimensions: conceptual model, visualization, and flow.</p>

<p>This progression shows a classic trade-off, like the difference between a pencil drawing and an oil painting. The drawing lets you explore ideas in the most flexible way, whereas the painting has a lot of detail and overall looks much more realistic, but is hard to adjust. Similarly, as we go towards artifacts that integrate all three dimensions at higher fidelity, our ability to iterate quickly and explore divergent ideas goes down. This inverse relationship has long been an accepted, almost unchallenged, limitation of the design process.</p>

<h2 id="the-problem-with-the-mockup-centric-approach">The Problem With The Mockup-Centric Approach</h2>

<p>Faced with this difficult trade-off, often teams opt for the easiest way out. On the one hand, they need to show that they are making progress and create things that appear detailed. On the other hand, they rarely can afford to build interactive or live prototypes. This leads them to over-invest in one type of artifact that seems to offer the best of both worlds. As a result, the neatly organized “bento box” of design artifacts we saw previously gets shrunk down to just one compartment: creating static high-fidelity mockups.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="388"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png"
			
			sizes="100vw"
			alt="The artifact map diagram, with “Hi-fi Mockup” enlarged to show an over-reliance on it."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The mockup-centric approach. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/03-artifact-map-diagram.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>This choice is understandable, as several forces push designers in this direction. Stakeholders are always eager to see polished pictures, while artifacts representing user flows and conceptual models receive far less attention and priority: they are too abstract to be used for validation, and not every stakeholder can read them.</p>

<p>On the other side of the fidelity spectrum, interactive prototypes require too much effort to create and maintain, and creating live prototypes in code used to require special skills (and again, effort). And even when teams make this investment, they do so at the end of Solution Discovery, during the convergence stage, when it is often too late to experiment with fundamentally different ideas. With so much effort already sunk, there is little appetite to go back to the drawing board.</p>

<p>It’s no surprise, then, that many teams default to the perceived safety of <strong>static mockups</strong>, seeing them as a middle ground between the roughness of the sketches and the overwhelming complexity and fragility that prototypes can have.</p>

<p>As a result, validation with users doesn’t provide enough confidence that the solution will actually solve the problem, and teams are forced to make a leap of faith to start building. To make matters worse, they do so without a clear understanding of the conceptual model, the user flows, and the interactions, because from the very beginning, designers’ attention has been heavily skewed toward visualization.</p>

<p>The result is often a design artifact that resembles the famous “horse drawing” meme: beautifully rendered in the parts everyone sees first (the mockups), but dangerously underdeveloped in its underlying structure (the conceptual model and flows).</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="541"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg"
			
			sizes="100vw"
			alt="The “horse drawing” meme, where the front is detailed and the back is a simple sketch."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The “lopsided horse” problem. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/04-lopsided-horse-problem.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>While this is a familiar problem across the industry, its severity <strong>depends on the nature of the project</strong>. If your core challenge is to optimize a well-understood, linear flow (like many B2C products), a mockup-centric approach can be perfectly adequate. The risks are contained, and the “lopsided horse” problem is unlikely to be fatal.</p>

<p>However, it’s different for the systems I specialize in: complex applications defined by intricate data models and non-linear, interconnected user flows. Here, the biggest risks are not on the surface but in the underlying structure, and a lack of attention to the latter would be a recipe for disaster.</p>


<h2 id="transforming-the-design-process">Transforming The Design Process</h2>

<p>This situation makes me wonder:</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aHow%20might%20we%20close%20the%20gap%20between%20our%20design%20intent%20and%20a%20live%20prototype,%20so%20that%20we%20can%20iterate%20on%20real%20functionality%20from%20day%20one?%0a&url=https://smashingmagazine.com%2f2025%2f09%2fintent-prototyping-pure-vibe-coding-enterprise-ux%2f">
      
How might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one?

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="397"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png"
			
			sizes="100vw"
			alt="Diagram showing bridging the gap between “Design Intent” and “Live Prototype.”"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      How might we bridge the gap between design intent and a live prototype? (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/05-design-intent-live-prototype.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>If we were able to answer this question, we would:</p>

<ul>
<li><strong>Learn faster.</strong><br />
By going straight from intent to a testable artifact, we cut the feedback loop from weeks to days.</li>
<li><strong>Gain more confidence.</strong><br />
Users interact with real logic, which gives us more proof that the idea works.</li>
<li><strong>Enforce conceptual clarity.</strong><br />
A live prototype cannot hide a flawed or ambiguous conceptual model.</li>
<li><strong>Establish a clear and lasting source of truth.</strong><br />
A live prototype, combined with a clearly documented design intent, provides the engineering team with an unambiguous specification.</li>
</ul>

<p>Of course, the desire for such a process is not new. This vision of a truly <strong>prototype-driven workflow</strong> is especially compelling for enterprise applications, where the benefits of faster learning and forced conceptual clarity are the best defense against costly structural flaws. But this ideal remained out of reach because prototyping in code demanded too much effort and too many specialized skills. Now, the rise of powerful AI coding assistants changes this equation in a big way.</p>

<h2 id="the-seductive-promise-of-vibe-coding">The Seductive Promise Of “Vibe Coding”</h2>

<p>And the answer seems obvious: <strong>vibe coding</strong>!</p>

<blockquote>“Vibe coding is an artificial intelligence-assisted software development style popularized by Andrej Karpathy in early 2025. It describes a fast, improvisational, collaborative approach to creating software where the developer and a large language model (LLM) tuned for coding is acting rather like pair programmers in a conversational loop.”<br /><br />&mdash; <a href="https://en.wikipedia.org/wiki/Vibe_coding">Wikipedia</a></blockquote>

<p>The original tweet by Andrej Karpathy:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://x.com/karpathy/status/1886192184808149383">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="552"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png"
			
			sizes="100vw"
			alt="Screenshot of Andrej Karpathy&#39;s tweet defining Vibe Coding."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Andrej Karpathy’s tweet that popularized the term “vibe coding”. (Image source: <a href='https://x.com/karpathy/status/1886192184808149383'>X</a>) (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/06-andrej-karpathy-tweet.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The allure of this approach is undeniable. If you are not a developer, you are bound to feel awe when you describe a solution in plain language, and moments later, you can interact with it. This seems to be the ultimate fulfillment of our goal: a direct, frictionless path from an idea to a live prototype. But <strong>is this method reliable enough</strong> to build our new design process around it?</p>

<h3 id="the-trap-a-process-without-a-blueprint">The Trap: A Process Without A Blueprint</h3>

<p>Vibe coding mixes up a description of the UI with a description of the system itself, resulting in a <strong>prototype based on changing assumptions rather than a clear, solid model</strong>.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aThe%20pitfall%20of%20vibe%20coding%20is%20that%20it%20encourages%20us%20to%20express%20our%20intent%20in%20the%20most%20ambiguous%20way%20possible:%20by%20having%20a%20conversation.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fintent-prototyping-pure-vibe-coding-enterprise-ux%2f">
      
The pitfall of vibe coding is that it encourages us to express our intent in the most ambiguous way possible: by having a conversation.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>This is like hiring a builder and telling them what to do one sentence at a time, without ever showing them a blueprint. They might build a wall that looks great, but you can’t be sure it will bear weight.</p>

<p>I’ll give you one example of the problems you may face if you try to leap across the chasm between your idea and a live prototype relying on pure vibe coding in the spirit of Andrej Karpathy’s tweet. Imagine I want to prototype a solution for keeping track of the tests that validate product ideas. I open my vibe coding tool of choice (I intentionally don’t disclose its name, as I believe they are all excellent yet prone to similar pitfalls) and start with the following prompt:</p>

<div class="break-out">
<pre><code class="language-markdown">I need an app to track tests. For every test, I need to fill out the following data:
- Hypothesis (we believe that...) 
- Experiment (to verify that, we will...)
- When (a single date, or a period) 
- Status (New/Planned/In Progress/Proven/Disproven)
</code></pre>
</div>

<p>And in a minute or so, I get a working prototype:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="610"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png"
			
			sizes="100vw"
			alt="Screenshot of a simple Test Tracker app."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The initial prototype. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/7-test-tracker.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Inspired by success, I go further:</p>

<div class="break-out">
<pre><code class="language-markdown">Please add the ability to specify a product idea for every test. Also, I want to filter tests by product ideas and see how many tests each product idea has in each status.
</code></pre>
</div>

<p>And the result is still pretty good:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="610"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png"
			
			sizes="100vw"
			alt="The Test Tracker app screenshot, now with filtering by product ideas."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The prototype updated to include filtering tests by product ideas. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/8-test-tracker-updated.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>But then I want to extend the functionality related to product ideas:</p>

<div class="break-out">
<pre><code class="language-markdown">Okay, one more thing. For every product idea, I want to assess the impact score, the confidence score, and the ease score, and get the overall ICE score. Perhaps I need a separate page focused on the product idea, with all the relevant information and related tests.
</code></pre>
</div>

<p>And from this point on, the results are getting more and more confusing.</p>

<p>The flow of creating tests hasn’t changed much. I can still create a bunch of tests, and they seem to be organized by product ideas. But when I click “Product Ideas” in the top navigation, I see nothing:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="518"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png"
			
			sizes="100vw"
			alt="Screenshot of the app’s blank Product Ideas page."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Product Ideas page is empty. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/9-product-ideas-page.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>I need to create my ideas from scratch, and they are not connected to the tests I created before:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="519"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png"
			
			sizes="100vw"
			alt="Screenshot of the Product Ideas page with newly created ideas not connected to tests."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The newly created product ideas are disconnected from existing tests. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/10-product-ideas-disconnected-tests.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Moreover, when I go back to “Tests”, I see that they are all gone. Clearly, something went wrong, and my AI assistant confirms it:</p>

<blockquote>No, this is not expected behavior &mdash; it’s a bug! The issue is that tests are being stored in two separate places (local state in the Index page and App state), so tests created on the main page don’t sync with the product ideas page.</blockquote>

<p>To be fair, the assistant eventually fixed that bug, but note that we hit it on only the third step, after asking to slightly extend the functionality of a very simple app. The more layers of complexity we add, the more roadblocks of this sort we are bound to face.</p>
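<p>The bug the assistant described is a classic single-source-of-truth failure: two parts of the app each held their own copy of the test list. The fix it converged on can be sketched in a few lines of plain TypeScript (all names here are hypothetical, not the tool’s generated code): keep one shared store and make every page a thin view over it.</p>

```typescript
// Sketch: one shared store of tests instead of two per-page copies
// of the state. All names are hypothetical.

interface TestRecord {
  id: string;
  hypothesis: string;
  status: string;
}

class TestStore {
  private tests = new Map<string, TestRecord>();

  add(test: TestRecord): void {
    this.tests.set(test.id, test);
  }

  all(): TestRecord[] {
    return Array.from(this.tests.values());
  }
}

// Both "pages" read and write through the same store, so a test
// created on the index page is visible on the product-ideas page too.
const store = new TestStore();
const indexPage = { create: (t: TestRecord) => store.add(t) };
const ideasPage = { listTests: () => store.all() };
```

<p>The point is not the pattern itself, which any engineer knows, but that a conversational prompt never stated it, so the model had to guess, and guessed wrong.</p>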

<p>Also note that this specific problem, a relationship between two entities (product ideas and tests) that was never fully thought through, is not confined to the technical level, and therefore it didn’t go away once the technical bug was fixed. The underlying conceptual model is still broken, and that brokenness manifests in the UI as well.</p>
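<p>Had the conceptual model been made explicit up front, this flaw would have surfaced immediately. As a sketch of the alternative (all names are illustrative, not the tool’s generated code), the relation between tests and product ideas can be encoded in the model and enforced at the single point of creation:</p>

```typescript
// Sketch: an explicit, mandatory Test -> ProductIdea relation,
// enforced at creation time. All names are illustrative.

interface ProductIdea {
  id: string;
  name: string;
}

interface TrackedTest {
  id: string;
  ideaId: string; // required link to a parent idea
  hypothesis: string;
}

class Tracker {
  private ideas = new Map<string, ProductIdea>();
  private tests = new Map<string, TrackedTest>();

  addIdea(idea: ProductIdea): void {
    this.ideas.set(idea.id, idea);
  }

  addTest(test: TrackedTest): void {
    // The relation is validated here, so a test without
    // an existing parent idea simply cannot be created.
    if (!this.ideas.has(test.ideaId)) {
      throw new Error(`Unknown product idea: ${test.ideaId}`);
    }
    this.tests.set(test.id, test);
  }

  testsFor(ideaId: string): TrackedTest[] {
    return Array.from(this.tests.values()).filter((t) => t.ideaId === ideaId);
  }
}
```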

<p>For example, you can still create “orphan” tests that are not connected to any item from the “Product Ideas” page. As a result, you may end up with different numbers of ideas and tests on different pages of the app:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="305"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png"
			
			sizes="100vw"
			alt="Diagram showing conflicting data between the Tests page and the Product Ideas page."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A poorly defined conceptual model leads to data inconsistencies across the app. (<a href='https://files.smashing.media/articles/intent-prototyping-pure-vibe-coding-enterprise-ux/11-conflicting-data-tests-product-ideas-page.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Let’s diagnose what really happened here. The AI’s response that this is a “bug” is only half the story. The true root cause is a <strong>conceptual model failure</strong>. My prompts never explicitly defined the relationship between product ideas and tests. The AI was forced to guess, which led to the broken experience. For a simple demo, this might be a fixable annoyance. But for a data-heavy enterprise application, this kind of structural ambiguity is fatal. It demonstrates <strong>the fundamental weakness of building without a blueprint</strong>, which is precisely what vibe coding encourages.</p>
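<p>For illustration, here is one way that missing relationship could have been stated explicitly up front &mdash; a hypothetical sketch, not the app’s actual code, showing a model in which every test must reference the product idea it validates, so “orphan” tests become impossible to construct:</p>

```typescript
// Hypothetical explicit model: a Test carries a foreign key (ideaId)
// to the ProductIdea it belongs to, and creation validates it.

type ProductIdea = { id: string; title: string };
type Test = { id: string; ideaId: string; name: string };

function createTest(ideas: ProductIdea[], test: Test): Test {
  // Reject tests whose ideaId points at no existing idea.
  if (!ideas.some((idea) => idea.id === test.ideaId)) {
    throw new Error(`Unknown product idea: ${test.ideaId}`);
  }
  return test;
}

const ideas: ProductIdea[] = [{ id: "i1", title: "One-click reorder" }];
createTest(ideas, { id: "t1", ideaId: "i1", name: "Reorder CTR test" }); // ok
// createTest(ideas, { id: "t2", ideaId: "i9", name: "Orphan" }); // would throw
```

Had a constraint like this been part of the prompt, the AI would not have had to guess how the two entities relate.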

<p>Don’t take this as a criticism of vibe coding tools. They are creating real magic. However, the fundamental truth about “garbage in, garbage out” is still valid. If you don’t express your intent clearly enough, chances are the result won’t fulfill your expectations.</p>

<p>Another problem worth mentioning is that even if you wrestle it into a state that works, <strong>the artifact is a black box</strong> that can hardly serve as a reliable specification for the final product. The initial meaning is lost in the conversation, and all that’s left is the end result. This turns the development team into “code archaeologists” who have to reconstruct what the designer was thinking by reverse-engineering the AI’s code, which is frequently very convoluted. Any speed gained at the start is lost to this friction and uncertainty.</p>


<h2 id="from-fast-magic-to-a-solid-foundation">From Fast Magic To A Solid Foundation</h2>

<p>Pure vibe coding, for all its allure, encourages building without a blueprint. As we’ve seen, this results in <strong>structural ambiguity</strong>, which is not acceptable when designing complex applications. We are left with a seemingly quick but fragile process that creates a black box that is difficult to iterate on and even more so to hand off.</p>

<p>This leads us back to our main question: how might we close the gap between our design intent and a live prototype, so that we can iterate on real functionality from day one, without getting caught in the ambiguity trap? The answer lies in a more methodical, disciplined, and therefore trustworthy process.</p>

<p>In <a href="https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/"><strong>Part 2</strong></a> of this series, “A Practical Guide to Building with Clarity”, I will outline the entire workflow for <strong>Intent Prototyping.</strong> This method places the explicit <em>intent</em> of the designer at the forefront of the process while embracing the potential of AI-assisted coding.</p>

<p>Thank you for reading, and I look forward to seeing you in <a href="https://www.smashingmagazine.com/2025/10/intent-prototyping-practical-guide-building-clarity/"><strong>Part 2</strong></a>.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Andy Clarke</author><title>Ambient Animations In Web Design: Principles And Implementation (Part 1)</title><link>https://www.smashingmagazine.com/2025/09/ambient-animations-web-design-principles-implementation/</link><pubDate>Mon, 22 Sep 2025 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/ambient-animations-web-design-principles-implementation/</guid><description>Creating motion can be tricky. Too much and it’s distracting. Too little and a design feels flat. Ambient animations are the middle ground &amp;mdash; subtle, slow-moving details that add atmosphere without stealing the show. In this article, web design pioneer &lt;a href="https://stuffandnonsense.co.uk">Andy Clarke&lt;/a> introduces the concept of ambient animations and explains how to implement them.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/ambient-animations-web-design-principles-implementation/" />
              <title>Ambient Animations In Web Design: Principles And Implementation (Part 1)</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Ambient Animations In Web Design: Principles And Implementation (Part 1)</h1>
                  
                    
                    <address>Andy Clarke</address>
                  
                  <time datetime="2025-09-22T13:00:00&#43;00:00" class="op-published">2025-09-22T13:00:00+00:00</time>
                  <time datetime="2025-09-22T13:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>Unlike <em>timeline-based</em> animations, which tell stories across a sequence of events, or <em>interaction</em> animations, which are triggered when someone touches something, <strong>ambient animations</strong> are the kind of passive movements you might not notice at first. But they make a design look alive in subtle ways.</p>

<p>In an ambient animation, elements might subtly transition between colours, move slowly, or gradually shift position. They can appear and disappear, change size, or rotate slowly.</p>

<p>Ambient animations aren’t intrusive; they don’t demand attention, aren’t distracting, and don’t interfere with what someone’s trying to achieve when they use a product or website. They can be playful, too, making someone smile when they catch sight of them. That way, ambient animations <strong>add depth to a brand’s personality</strong>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/1-quick-draw-mcgraw-comic-book.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="399"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/1-quick-draw-mcgraw-comic-book.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/1-quick-draw-mcgraw-comic-book.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/1-quick-draw-mcgraw-comic-book.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/1-quick-draw-mcgraw-comic-book.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/1-quick-draw-mcgraw-comic-book.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/1-quick-draw-mcgraw-comic-book.png"
			
			sizes="100vw"
			alt="A three-page spread of a Quick Draw McGraw comic book including the animated cover and first two pages."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Hanna-Barbera’s Quick Draw McGraw © Warner Bros. Entertainment Inc. (<a href='https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/1-quick-draw-mcgraw-comic-book.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p class="c-pre-sidenote--left">To illustrate the concept of ambient animations, I’ve recreated the cover of a <a href="https://en.wikipedia.org/wiki/Quick_Draw_McGraw"><em>Quick Draw McGraw</em></a> <a href="https://dn720005.ca.archive.org/0/items/QuickDrawMcGrawCharlton/Quick%20Draw%20McGraw%20%233%20%28Charlton%201971%29.pdf">comic book</a> (PDF) as a CSS/SVG animation. The comic was published by Charlton Comics in 1971, and, being printed, these characters didn’t move, making them ideal candidates to transform into ambient animations.</p>
<p class="c-sidenote c-sidenote--right"><strong>FYI</strong>: Original cover artist <a href="https://www.lambiek.net/artists/d/dirgo_ray.htm">Ray Dirgo</a> was best known for his work drawing Hanna-Barbera characters for Charlton Comics during the 1970s. Ray passed away in 2000 at the age of 92. He outlived Charlton Comics, which went out of business in 1986, and DC Comics acquired its characters.</p>

<p><strong>Tip</strong>: You can view the complete ambient animation <a href="https://codepen.io/malarkey/pen/NPGrWVy">code on CodePen</a>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/2-quick-draw-mcgraw-ambient-animations.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="484"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/2-quick-draw-mcgraw-ambient-animations.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/2-quick-draw-mcgraw-ambient-animations.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/2-quick-draw-mcgraw-ambient-animations.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/2-quick-draw-mcgraw-ambient-animations.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/2-quick-draw-mcgraw-ambient-animations.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/2-quick-draw-mcgraw-ambient-animations.png"
			
			sizes="100vw"
			alt="Quick Draw McGraw ambient animations."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Quick Draw McGraw ambient animations. (<a href='https://codepen.io/malarkey/pen/NPGrWVy'>Live Demo</a>) (<a href='https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/2-quick-draw-mcgraw-ambient-animations.png'>Large preview</a>)
    </figcaption>
  
</figure>


<h2 id="choosing-elements-to-animate">Choosing Elements To Animate</h2>

<p>Not everything on a page or in a graphic needs to move, and part of designing an ambient animation is <strong>knowing when to stop</strong>. The trick is to pick elements that lend themselves naturally to subtle movement, rather than forcing motion into places where it doesn’t belong.</p>

<h3 id="natural-motion-cues">Natural Motion Cues</h3>

<p>When I’m deciding what to animate, I look for natural motion cues and think about when something would move naturally in the real world. I ask myself: <em>“Does this thing have weight?”</em>, <em>“Is it flexible?”</em>, and <em>“Would it move in real life?”</em> If the answer’s <em>“yes,”</em> it’ll probably feel right if it moves. There are several motion cues in Ray Dirgo’s cover artwork.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/3-pipe-feathers-toon-title-card.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="484"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/3-pipe-feathers-toon-title-card.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/3-pipe-feathers-toon-title-card.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/3-pipe-feathers-toon-title-card.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/3-pipe-feathers-toon-title-card.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/3-pipe-feathers-toon-title-card.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/3-pipe-feathers-toon-title-card.png"
			
			sizes="100vw"
			alt="Vibrantly illustrated pipe adorned with two feathers on the end against a silhouetted toon title card."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Pipe and feathers swing slightly. (<a href='https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/3-pipe-feathers-toon-title-card.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>For example, the peace pipe Quick Draw’s puffing on has two feathers hanging from it. They swing slightly left and right by three degrees as the pipe moves, just like real feathers would.</p>

<div class="break-out">
<pre><code class="language-css">&#35;quick-draw-pipe {
  animation: quick-draw-pipe-rotate 6s ease-in-out infinite alternate;
}

@keyframes quick-draw-pipe-rotate {
  0% { transform: rotate(3deg); }
  100% { transform: rotate(-3deg); }
}

&#35;quick-draw-feather-1 {
  animation: quick-draw-feather-1-rotate 3s ease-in-out infinite alternate;
}

&#35;quick-draw-feather-2 {
  animation: quick-draw-feather-2-rotate 3s ease-in-out infinite alternate;
}

@keyframes quick-draw-feather-1-rotate {
  0% { transform: rotate(3deg); }
  100% { transform: rotate(-3deg); }
}

@keyframes quick-draw-feather-2-rotate {
  0% { transform: rotate(-3deg); }
  100% { transform: rotate(3deg); }
}
</code></pre>
</div>

<h3 id="atmosphere-not-action">Atmosphere, Not Action</h3>

<p>I often choose elements or decorative details that add to the vibe but don’t fight for attention.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aAmbient%20animations%20aren%e2%80%99t%20about%20signalling%20to%20someone%20where%20they%20should%20look;%20they%e2%80%99re%20about%20creating%20a%20mood.%20%0a&url=https://smashingmagazine.com%2f2025%2f09%2fambient-animations-web-design-principles-implementation%2f">
      
Ambient animations aren’t about signalling to someone where they should look; they’re about creating a mood. 

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>Here, the chief slowly and subtly rises and falls as he puffs on his pipe.</p>

<pre><code class="language-css">&#35;chief {
  animation: chief-rise-fall 3s ease-in-out infinite alternate;
}

@keyframes chief-rise-fall {
  0% { transform: translateY(0); }
  100% { transform: translateY(-20px); }
}
</code></pre>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/4-chief-toon-title-card.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="484"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/4-chief-toon-title-card.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/4-chief-toon-title-card.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/4-chief-toon-title-card.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/4-chief-toon-title-card.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/4-chief-toon-title-card.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/4-chief-toon-title-card.png"
			
			sizes="100vw"
			alt="An illustrated Indian chief seated and puffing on a pipe against a silhouetted toon title card."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The chief rises and falls as he puffs on his pipe. (<a href='https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/4-chief-toon-title-card.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>For added effect, the feather on his head also moves in time with his rise and fall:</p>

<div class="break-out">
<pre><code class="language-css">&#35;chief-feather-1 {
  animation: chief-feather-1-rotate 3s ease-in-out infinite alternate;
}

&#35;chief-feather-2 {
  animation: chief-feather-2-rotate 3s ease-in-out infinite alternate;
}

@keyframes chief-feather-1-rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(-9deg); }
}

@keyframes chief-feather-2-rotate {
  0% { transform: rotate(0deg); }
  100% { transform: rotate(9deg); }
}
</code></pre>
</div>

<h3 id="playfulness-and-fun">Playfulness And Fun</h3>

<p>One of the things I love most about ambient animations is how they bring fun into a design. They’re an opportunity to <strong>demonstrate personality</strong> through playful details that make people smile when they notice them.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/5-closeup-illustrated-chief-head.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="484"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/5-closeup-illustrated-chief-head.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/5-closeup-illustrated-chief-head.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/5-closeup-illustrated-chief-head.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/5-closeup-illustrated-chief-head.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/5-closeup-illustrated-chief-head.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/5-closeup-illustrated-chief-head.png"
			
			sizes="100vw"
			alt="Closeup of the illustrated chief’s head and face."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The chief’s eyebrows rise and fall, and his eyes cross. (<a href='https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/5-closeup-illustrated-chief-head.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Take a closer look at the chief, and you might spot his eyebrows raising and his eyes crossing as he puffs hard on his pipe. Quick Draw’s eyebrows also bounce at what look like random intervals.</p>

<pre><code class="language-css">&#35;quick-draw-eyebrow {
  animation: quick-draw-eyebrow-raise 5s ease-in-out infinite;
}

@keyframes quick-draw-eyebrow-raise {
  0%, 20%, 60%, 100% { transform: translateY(0); }
  10%, 50%, 80% { transform: translateY(-10px); }
}
</code></pre>


<h2 id="keep-hierarchy-in-mind">Keep Hierarchy In Mind</h2>

<p>Motion draws the eye, and even subtle movements carry visual weight. So, I reserve the most obvious animations for the elements that need to make the biggest impact.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/6-illustrated-duick-draw-mcgraw.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="484"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/6-illustrated-duick-draw-mcgraw.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/6-illustrated-duick-draw-mcgraw.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/6-illustrated-duick-draw-mcgraw.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/6-illustrated-duick-draw-mcgraw.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/6-illustrated-duick-draw-mcgraw.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/6-illustrated-duick-draw-mcgraw.png"
			
			sizes="100vw"
			alt="Illustrated Quick Draw McGraw holding the feather-adorned pipe with dizzy eyes veering right."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Quick Draw McGraw wobbles under the influence of his pipe. (<a href='https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/6-illustrated-duick-draw-mcgraw.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Smoking his pipe clearly has a big effect on Quick Draw McGraw, so to demonstrate this, I wrapped his elements &mdash; including his pipe and its feathers &mdash; within a new SVG group, and then I made that wobble.</p>

<pre><code class="language-css">&#35;quick-draw-group {
  animation: quick-draw-group-wobble 6s ease-in-out infinite;
}

@keyframes quick-draw-group-wobble {
  0% { transform: rotate(0deg); }
  15% { transform: rotate(2deg); }
  30% { transform: rotate(-2deg); }
  45% { transform: rotate(1deg); }
  60% { transform: rotate(-1deg); }
  75% { transform: rotate(0.5deg); }
  100% { transform: rotate(0deg); }
}
</code></pre>

<p>Then, to emphasise this motion, I mirrored those values to wobble his shadow:</p>

<pre><code class="language-css">&#35;quick-draw-shadow {
  animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
}

@keyframes quick-draw-shadow-wobble {
  0% { transform: rotate(0deg); }
  15% { transform: rotate(-2deg); }
  30% { transform: rotate(2deg); }
  45% { transform: rotate(-1deg); }
  60% { transform: rotate(1deg); }
  75% { transform: rotate(-0.5deg); }
  100% { transform: rotate(0deg); }
}
</code></pre>

<h2 id="apply-restraint">Apply Restraint</h2>

<p>Just because something can be animated doesn’t mean it should be. When creating an ambient animation, I study the image and note the elements where subtle motion might add life. I keep in mind the questions: <em>“What’s the story I’m telling? Where does movement help, and when might it become distracting?”</em></p>

<p>Remember, restraint isn’t just about doing less; it’s about doing the right things less often.</p>

<h2 id="layering-svgs-for-export">Layering SVGs For Export</h2>

<p>In “<a href="https://www.smashingmagazine.com/2025/06/smashing-animations-part-4-optimising-svgs/">Smashing Animations Part 4: Optimising SVGs</a>,” I wrote about the process I rely on to <em>“prepare, optimise, and structure SVGs for animation.”</em> When elements are crammed into a single SVG file, they can be a nightmare to navigate. Locating a specific path or group can feel like searching for a needle in a haystack.</p>

<blockquote>That’s why I develop my SVGs in layers, exporting and optimising one set of elements at a time &mdash; always in the order they’ll appear in the final file. This lets me build the master SVG gradually by pasting in each cleaned-up section.</blockquote>

<p>I start by exporting background elements, optimising them, adding class and ID attributes, and pasting their code into my SVG file.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/7-toon-title-card.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="484"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/7-toon-title-card.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/7-toon-title-card.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/7-toon-title-card.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/7-toon-title-card.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/7-toon-title-card.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/7-toon-title-card.png"
			
			sizes="100vw"
			alt="The toon title card with the chief and Quick Draw characters cut out with their shapes remaining."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Exporting background elements. (<a href='https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/7-toon-title-card.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Then, I export elements that often stay static or move as groups, like the chief and Quick Draw McGraw.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/8-quick-draw-pasted-toon-title-card.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="484"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/8-quick-draw-pasted-toon-title-card.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/8-quick-draw-pasted-toon-title-card.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/8-quick-draw-pasted-toon-title-card.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/8-quick-draw-pasted-toon-title-card.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/8-quick-draw-pasted-toon-title-card.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/8-quick-draw-pasted-toon-title-card.png"
			
			sizes="100vw"
			alt="Showing Quick Draw pasted to the toon title card’s foreground, minus details including the pipe he is holding and his eyeballs."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Exporting larger groups. (<a href='https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/8-quick-draw-pasted-toon-title-card.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Finally, I export and name the details, like Quick Draw’s pipe, his eyes, and his stoned sparkles.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/9-quick-draw-toon-title-card-details.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="484"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/9-quick-draw-toon-title-card-details.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/9-quick-draw-toon-title-card-details.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/9-quick-draw-toon-title-card-details.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/9-quick-draw-toon-title-card-details.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/9-quick-draw-toon-title-card-details.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/9-quick-draw-toon-title-card-details.png"
			
			sizes="100vw"
			alt="Showing Quick Draw in the same toon title card but including the details that were left out before."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Adding details. (<a href='https://files.smashing.media/articles/ambient-animations-web-design-principles-implementation/9-quick-draw-toon-title-card-details.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Since I export each layer from the same-sized artboard, I don’t need to worry about alignment or positioning; every layer slots into place automatically.</p>

<h2 id="implementing-ambient-animations">Implementing Ambient Animations</h2>

<p>You don’t need an animation framework or library to add ambient animations to a project. Most of the time, all you’ll need is a well-prepared SVG and some thoughtful CSS.</p>

<p>But, let’s start with the SVG. The key is to group elements logically and give them meaningful class or ID attributes, which act as animation hooks in the CSS. For this animation, I gave every moving part its own identifier like <code>#quick-draw-tail</code> or <code>#chief-smoke-2</code>. That way, I could target exactly what I needed without digging through the DOM like a raccoon in a trash can.</p>
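<p>As a rough sketch, that grouped-and-named structure might look something like this. The group names beyond <code>#quick-draw-tail</code> and <code>#chief-smoke-2</code> are illustrative, not the exact ones from my export:</p>

<div class="break-out">
<pre><code class="language-html">&lt;svg viewBox="0 0 800 484"&gt;
  &lt;!-- Static background exported as one group --&gt;
  &lt;g id="background"&gt;...&lt;/g&gt;

  &lt;!-- Each moving part gets its own animation hook --&gt;
  &lt;g id="chief"&gt;
    &lt;g id="chief-smoke-1"&gt;...&lt;/g&gt;
    &lt;g id="chief-smoke-2"&gt;...&lt;/g&gt;
  &lt;/g&gt;
  &lt;g id="quick-draw"&gt;
    &lt;g id="quick-draw-tail"&gt;...&lt;/g&gt;
  &lt;/g&gt;
&lt;/svg&gt;
</code></pre>
</div>

<p>The exact nesting matters less than the naming: once every moving part has a stable <code>id</code>, the CSS never needs to care how deep it sits in the document.</p>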

<p>Once the SVG is set up, CSS does most of the work. I can use <code>@keyframes</code> for more expressive movement, or <code>animation-delay</code> to simulate randomness and stagger timings. The trick is to keep everything subtle and to remember that I’m not animating for attention; I’m animating for atmosphere.</p>
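<p>Here’s a minimal sketch of that staggering idea. The keyframe values and the second smoke puff’s delay are illustrative starting points, not the exact numbers from my demo:</p>

<div class="break-out">
<pre><code class="language-css">/* Slow, looping drift for the smoke puffs */
@keyframes smoke-drift {
  from { transform: translateY(0); opacity: 0.8; }
  to   { transform: translateY(-12px); opacity: 0; }
}

&#35;chief-smoke-1,
&#35;chief-smoke-2 {
  animation: smoke-drift 8s ease-in-out infinite;
}

/* Offset the second puff so the loop feels less mechanical */
&#35;chief-smoke-2 {
  animation-delay: 2.5s;
}
</code></pre>
</div>

<p>A handful of delays like this, scattered across layers, is usually all it takes to stop a loop from feeling metronomic.</p>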

<p>Remember that most ambient animations loop continuously, so they should be <strong>lightweight</strong> and <strong>performance-friendly</strong>. And of course, <a href="https://www.smashingmagazine.com/2021/10/respecting-users-motion-preferences/">it’s good practice to respect users who’ve asked for less motion</a>. You can wrap your animations in a <code>prefers-reduced-motion</code> media query so they only run when they’re welcome.</p>

<div class="break-out">
<pre><code class="language-css">@media (prefers-reduced-motion: no-preference) {
  &#35;quick-draw-shadow {
    animation: quick-draw-shadow-wobble 6s ease-in-out infinite;
  }
}
</code></pre>
</div>

<p>It’s a small touch that’s easy to implement, and it makes your designs more inclusive.</p>


<h2 id="ambient-animation-design-principles">Ambient Animation Design Principles</h2>

<p>If you want your animations to feel ambient, more like atmosphere than action, it helps to follow a few principles. These aren’t hard and fast rules, but rather things I’ve learned while animating smoke, sparkles, eyeballs, and eyebrows.</p>

<h3 id="keep-animations-slow-and-smooth">Keep Animations Slow And Smooth</h3>

<p>Ambient animations should feel relaxed, so use <strong>longer durations</strong> and choose <strong>easing curves that feel organic</strong>. I often use <code>ease-in-out</code>, but <a href="https://www.smashingmagazine.com/2022/10/advanced-animations-css/">cubic Bézier curves</a> can also be helpful when you want a more relaxed feel and the kind of movements you might find in nature.</p>
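<p>For example, a custom curve can ease in and out even more gently than the built-in keyword. These values are just a starting point to tweak by eye:</p>

<div class="break-out">
<pre><code class="language-css">&#35;quick-draw-tail {
  /* Softer than the built-in ease-in-out keyword */
  animation-timing-function: cubic-bezier(0.45, 0.05, 0.55, 0.95);
}
</code></pre>
</div>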

<h3 id="loop-seamlessly-and-avoid-abrupt-changes">Loop Seamlessly And Avoid Abrupt Changes</h3>

<p>Hard resets or sudden jumps can ruin the mood, so if an animation loops, ensure it cycles smoothly. You can do this by <strong>matching start and end keyframes</strong>, or by setting the <code>animation-direction</code> value to <code>alternate</code> so the animation plays forward, then back.</p>
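<p>Both approaches fit in a few lines. The selector and values here are illustrative:</p>

<div class="break-out">
<pre><code class="language-css">/* Option 1: the loop ends exactly where it began */
@keyframes feather-sway {
  0%, 100% { transform: rotate(-2deg); }
  50% { transform: rotate(2deg); }
}

/* Option 2: define one direction and let it bounce back */
@keyframes tail-swish {
  from { transform: rotate(-2deg); }
  to { transform: rotate(2deg); }
}

&#35;quick-draw-tail {
  animation: tail-swish 3s ease-in-out infinite alternate;
}
</code></pre>
</div>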

<h3 id="use-layering-to-build-complexity">Use Layering To Build Complexity</h3>

<p>A single animation might be boring. Five subtle animations, each on separate layers, can feel rich and alive. Think of it like building a sound mix &mdash; you want <strong>variation in rhythm, tone, and timing</strong>. In my animation, sparkles twinkle at varying intervals, smoke curls upward, feathers sway, and eyes boggle. Nothing dominates, and each motion plays its small part in the scene.</p>

<h3 id="avoid-distractions">Avoid Distractions</h3>

<p>The point of an ambient animation is that it doesn’t dominate. It’s a <strong>background element</strong> and not a call to action. If someone’s eyes are drawn to a raised eyebrow, it’s probably too much, so dial back the animation until it feels like something you’d only catch if you’re really looking.</p>

<h3 id="consider-accessibility-and-performance">Consider Accessibility And Performance</h3>

<p>Check <code>prefers-reduced-motion</code>, and don’t assume everyone’s device can handle complex animations. SVG and CSS are light, but things like blur filters, drop shadows, and complex CSS animations can still tax lower-powered devices. When an animation is purely decorative, consider adding <code>aria-hidden=&quot;true&quot;</code> to keep it from cluttering up the accessibility tree.</p>
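<p>Hiding a decorative scene is a one-attribute change. The wrapper shown here is illustrative:</p>

<div class="break-out">
<pre><code class="language-html">&lt;!-- Purely decorative, so hide it from screen readers --&gt;
&lt;svg id="toon-title-card" aria-hidden="true" focusable="false" viewBox="0 0 800 484"&gt;
  ...
&lt;/svg&gt;
</code></pre>
</div>

<p>The <code>focusable="false"</code> is only needed for older browsers that put SVGs in the tab order, but it costs nothing to include.</p>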

<h2 id="quick-on-the-draw">Quick On The Draw</h2>

<p>Ambient animation is like seasoning on a great dish. It’s the pinch of salt you barely notice, but you’d miss when it’s gone. It doesn’t shout, it whispers. It doesn’t lead, it lingers. It’s floating smoke, swaying feathers, and sparkles you catch in the corner of your eye. And when it’s done well, ambient animation <strong>adds personality to a design without asking for applause</strong>.</p>

<p>Now, I realise that not everyone needs to animate cartoon characters. So, in part two, I’ll share how I created animations for several recent client projects. Until next time, if you’re crafting an illustration or working with SVG, ask yourself: <strong>What would move if this were real?</strong> Then animate just that. Make it slow and soft. Keep it ambient.</p>

<p>You can view the complete ambient animation <a href="https://codepen.io/malarkey/pen/NPGrWVy">code on CodePen</a>.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Victor Yocco</author><title>The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence</title><link>https://www.smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/</link><pubDate>Fri, 19 Sep 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/</guid><description>When AI “hallucinates,” it’s more than just a glitch — it’s a collapse of trust. As generative AI becomes part of more digital products, trust has become the invisible user interface. But trust isn’t mystical. It can be understood, measured, and designed for. Here is a practical guide for designing more trustworthy and ethical AI systems.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/psychology-trust-ai-guide-measuring-designing-user-confidence/" />
              <title>The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>The Psychology Of Trust In AI: A Guide To Measuring And Designing For User Confidence</h1>
                  
                    
                    <address>Victor Yocco</address>
                  
                  <time datetime="2025-09-19T10:00:00&#43;00:00" class="op-published">2025-09-19T10:00:00+00:00</time>
                  <time datetime="2025-09-19T10:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>Misuse and misplaced trust of AI is becoming an unfortunate <a href="https://www.damiencharlotin.com/hallucinations/">common event</a>. For example, lawyers trying to leverage the power of generative AI for research submit court filings citing multiple compelling legal precedents. The problem? The AI had confidently, eloquently, and completely fabricated the cases cited. The resulting sanctions and public embarrassment can become <a href="https://www.lawnext.com/2025/05/ai-hallucinations-strike-again-two-more-cases-where-lawyers-face-judicial-wrath-for-fake-citations.html">a viral cautionary tale</a>, shared across social media as a stark example of AI’s fallibility.</p>

<p>This goes beyond a technical glitch; it’s a catastrophic <strong>failure of trust in AI tools</strong> in an industry where accuracy and trust are critical. The trust issue here is twofold &mdash; the law firms submitted briefs in which they blindly over-trusted the AI tool to return accurate information, and the subsequent fallout can lead to a strong distrust of AI tools, to the point where platforms featuring AI might not be considered for use until trust is reestablished.</p>

<p>Issues with trusting AI aren’t limited to the legal field. We are seeing the impact of fictional AI-generated information in critical fields such as <a href="https://apnews.com/article/ai-artificial-intelligence-health-business-90020cdf5fa16c79ca2e5b6c4c9bbb14">healthcare</a> and <a href="https://mitsloanedtech.mit.edu/ai/basics/addressing-ai-hallucinations-and-bias/">education</a>. On a more personal scale, many of us have had the experience of asking Siri or Alexa to perform a task, only to have it done incorrectly or not at all, for no apparent reason. I’m guilty of sending more than one out-of-context hands-free text to an unsuspecting contact after Siri mistakenly pulls up a completely different name than the one I’d requested.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="410"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png"
			
			sizes="100vw"
			alt="Cartoon illustration split into two panels. On the left, a man in a blue hoodie speaks into his phone, saying, “Siri, text Dave, I’m waiting outside of your door.” On the right, a cheerful cartoon phone with a face and arms replies, “I have just texted Martha, I am standing outside of your door.”"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 1: Siri and Alexa often tend to confuse the recipient of my message, causing me to distrust using them when accuracy matters. Image generated using Gemini Pro. (<a href='https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/1-siri-confuse-recipient-message.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>With digital products moving to incorporate generative and agentic AI at an increasingly frequent rate, <strong>trust has become the invisible user interface</strong>. When it works, our interactions are seamless and powerful. When it breaks, the entire experience collapses, with potentially devastating consequences. As UX professionals, we’re on the front lines of a new twist on a common challenge. How do we build products that users can rely on? And how do we even begin to measure something as ephemeral as trust in AI?</p>

<p>Trust isn’t a mystical quality. It is a psychological construct built on predictable factors. I won’t dive deep into academic literature on trust in this article. However, it is important to understand that trust is a concept that can be <strong>understood</strong>, <strong>measured</strong>, and <strong>designed for</strong>. This article will provide a <strong>practical guide</strong> for UX researchers and designers. We will briefly explore the psychological anatomy of trust, offer concrete methods for measuring it, and provide actionable strategies for designing more trustworthy and ethical AI systems.</p>


<h2 id="the-anatomy-of-trust-a-psychological-framework-for-ai">The Anatomy of Trust: A Psychological Framework for AI</h2>

<p>To build trust, we must first understand its components. Think of trust like a four-legged stool. If any one leg is weak, the whole thing becomes unstable. Based on classic <a href="https://pmc.ncbi.nlm.nih.gov/articles/PMC10083508/#:~:text=The%20model%20of%20interpersonal%20trust,in%20human%20interpersonal%20trust%20development.">psychological models</a>, we can adapt these “legs” for the AI context.</p>

<h3 id="1-ability-or-competence">1. Ability (or Competence)</h3>

<p>This is the most straightforward pillar: Does the AI have the <strong>skills</strong> to perform its function accurately and effectively? If a weather app is consistently wrong, you stop trusting it. If an AI legal assistant creates fictitious cases, it has failed the basic test of ability. This is the functional, foundational layer of trust.</p>

<h3 id="2-benevolence">2. Benevolence</h3>

<p>This moves from function to <strong>intent</strong>. Does the user believe the AI is acting in their best interest? A GPS that suggests a toll-free route even if it’s a few minutes longer might be perceived as benevolent. Conversely, an AI that aggressively pushes sponsored products feels self-serving, eroding this sense of benevolence. This is where user fears, such as concerns about job displacement, directly challenge trust—the user starts to believe the AI is not on their side.</p>

<h3 id="3-integrity">3. Integrity</h3>

<p>Does AI operate on predictable and ethical principles? This is about <strong>transparency</strong>, <strong>fairness</strong>, and <strong>honesty</strong>. An AI that clearly states how it uses personal data demonstrates integrity. A system that quietly changes its terms of service or uses dark patterns to get users to agree to something violates integrity. An AI job recruiting tool that has subtle yet extremely harmful social biases, existing in the algorithm, violates integrity.</p>

<h3 id="4-predictability-reliability">4. Predictability &amp; Reliability</h3>

<p>Can the user form a <strong>stable and accurate mental model</strong> of how the AI will behave? Unpredictability, even if the outcomes are occasionally good, creates anxiety. A user needs to know, roughly, what to expect. An AI that gives a radically different answer to the same question asked twice is unpredictable and, therefore, hard to trust.</p>

<h2 id="the-trust-spectrum-the-goal-of-a-well-calibrated-relationship">The Trust Spectrum: The Goal of a Well-Calibrated Relationship</h2>

<p>Our goal as UX professionals shouldn’t be to maximize trust at all costs. An employee who blindly trusts every email they receive is a security risk. Likewise, a user who blindly trusts every AI output can be led into dangerous situations, like the lawyers who submitted the fabricated legal briefs referenced at the beginning of this article. The goal is <em>well-calibrated</em> trust.</p>

<p>Think of it as a spectrum where the upper-mid level is the ideal state for a truly trustworthy product to achieve:</p>

<ul>
<li><strong>Active Distrust</strong><br />
The user believes the AI is incompetent or malicious. They will avoid it or actively work against it.</li>
<li><strong>Suspicion &amp; Scrutiny</strong><br />
The user interacts cautiously, constantly verifying the AI’s outputs. This is a common and often healthy state for users of new AI.</li>
<li><strong>Calibrated Trust (The Ideal State)</strong><br />
This is the sweet spot. The user has an accurate understanding of the AI’s capabilities—its strengths and, crucially, its weaknesses. They know when to rely on it and when to be skeptical.</li>
<li><strong>Over-trust &amp; Automation Bias</strong><br />
The user unquestioningly accepts the AI’s outputs. This is where users follow flawed AI navigation into a field or accept a fictional legal brief as fact.</li>
</ul>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aOur%20job%20is%20to%20design%20experiences%20that%20guide%20users%20away%20from%20the%20dangerous%20poles%20of%20Active%20Distrust%20and%20Over-trust%20and%20toward%20that%20healthy,%20realistic%20middle%20ground%20of%20Calibrated%20Trust.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fpsychology-trust-ai-guide-measuring-designing-user-confidence%2f">
      
Our job is to design experiences that guide users away from the dangerous poles of Active Distrust and Over-trust and toward that healthy, realistic middle ground of Calibrated Trust.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="307"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png"
			
			sizes="100vw"
			alt="The trust spectrum"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 2: Build user trust in your AI product, avoiding both distrust and over-reliance. Image generated using Gemini Pro. (<a href='https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/2-trust-spectrum.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="the-researcher-s-toolkit-how-to-measure-trust-in-ai">The Researcher’s Toolkit: How to Measure Trust In AI</h2>

<p>Trust feels abstract, but it leaves measurable fingerprints. Academics in the social sciences have done much to define both what trust looks like and how it might be measured. As researchers, we can capture these signals through a mix of <strong>qualitative</strong>, <strong>quantitative</strong>, and <strong>behavioral</strong> methods.</p>

<h3 id="qualitative-probes-listening-for-the-language-of-trust">Qualitative Probes: Listening For The Language Of Trust</h3>

<p>During interviews and usability tests, go beyond <em>“Was that easy to use?”</em> and listen for the underlying psychology. Here are some questions you can start using tomorrow:</p>

<ul>
<li><strong>To measure Ability:</strong><br />
<em>“Tell me about a time this tool’s performance surprised you, either positively or negatively.”</em></li>
<li><strong>To measure Benevolence:</strong><br />
<em>“Do you feel this system is on your side? What gives you that impression?”</em></li>
<li><strong>To measure Integrity:</strong><br />
<em>“If this AI made a mistake, how would you expect it to handle it? What would be a fair response?”</em></li>
<li><strong>To measure Predictability:</strong><br />
<em>“Before you clicked that button, what did you expect the AI to do? How closely did it match your expectation?”</em></li>
</ul>

<h3 id="investigating-existential-fears-the-job-displacement-scenario">Investigating Existential Fears (The Job Displacement Scenario)</h3>

<p>One of the most potent challenges to an AI’s Benevolence is the fear of job displacement. When a participant expresses this, it is a critical research finding. It requires a specific, ethical probing technique.</p>

<p>Imagine a participant says, <em>“Wow, it does that part of my job pretty well. I guess I should be worried.”</em></p>

<p>An untrained researcher might get defensive or dismiss the comment. An ethical, trained researcher validates and explores:</p>

<blockquote>“Thank you for sharing that; it’s a vital perspective, and it’s exactly the kind of feedback we need to hear. Can you tell me more about what aspects of this tool make you feel that way? In an ideal world, how would a tool like this work <strong>with</strong> you to make your job better, not to replace it?”</blockquote>

<p>This approach respects the participant, validates their concern, and reframes the feedback into an actionable insight about designing a collaborative, augmenting tool rather than a replacement. Similarly, your findings should reflect the concern users expressed about replacement. We shouldn’t pretend this fear doesn’t exist, nor should we pretend that every AI feature is being implemented with pure intention. Users know better than that, and we should be prepared to argue on their behalf for how the technology might best co-exist within their roles.</p>

<h3 id="quantitative-measures-putting-a-number-on-confidence">Quantitative Measures: Putting A Number On Confidence</h3>

<p>You can quantify trust without needing a data science degree. After a user completes a task with an AI, supplement your standard usability questions with a few simple Likert-scale items:</p>

<ul>
<li><em>“The AI’s suggestion was reliable.”</em> (1-7, Strongly Disagree to Strongly Agree)</li>
<li><em>“I am confident in the AI’s output.”</em> (1-7)</li>
<li><em>“I understood why the AI made that recommendation.”</em> (1-7)</li>
<li><em>“The AI responded in a way that I expected.”</em> (1-7)</li>
<li><em>“The AI provided consistent responses over time.”</em> (1-7)</li>
</ul>

<p>Over time, these metrics can track how trust is changing as your product evolves.</p>
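<p>If you want a single number to track, a plain average of the items is a reasonable starting point. The sketch below assumes responses on the 1&ndash;7 scale described above, and the item labels are just illustrative names for my made-up questions:</p>

<div class="break-out">
<pre><code class="language-javascript">// One object of 1-7 Likert responses per completed session
const sessions = [
  { reliable: 6, confident: 5, understood: 4, expected: 6, consistent: 5 },
  { reliable: 3, confident: 2, understood: 3, expected: 4, consistent: 3 },
];

// Average the items into a single 1-7 composite trust score
function trustScore(responses) {
  const values = Object.values(responses);
  return values.reduce((sum, v) => sum + v, 0) / values.length;
}

sessions.forEach((s, i) => {
  console.log(`Session ${i + 1}: trust = ${trustScore(s).toFixed(1)}`);
});
</code></pre>
</div>

<p>Tracked release over release, a falling average on the same tasks is an early warning that trust is eroding before users say so out loud.</p>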

<p><strong>Note</strong>: <em>If you want to go beyond these simple questions that I’ve made up, there are numerous scales (measurements) of trust in technology that exist in academic literature. It might be an interesting endeavor to measure some relevant psychographic and demographic characteristics of your users and see how that correlates with trust in AI/your product. <a href="#table-1-published-academic-scales-measuring-trust-in-automated-systems">Table 1 at the end of the article</a> contains four examples of current scales you might consider using to measure trust. You can decide which is best for your application, or you might pull some of the items from any of the scales if you aren’t looking to publish your findings in an academic journal, yet want to use items that have been subjected to some level of empirical scrutiny.</em></p>

<h3 id="behavioral-metrics-observing-what-users-do-not-just-what-they-say">Behavioral Metrics: Observing What Users Do, Not Just What They Say</h3>

<p>People’s true feelings are often revealed in their actions. You can use behaviors that reflect the specific context of use for your product. Here are a few general metrics that might apply to most AI tools that give insight into users’ behavior and the trust they place in your tool.</p>

<ul>
<li><strong>Correction Rate</strong><br />
How often do users manually edit, undo, or ignore the AI’s output? A high correction rate is a powerful signal of low trust in its Ability.</li>
<li><strong>Verification Behavior</strong><br />
Do users switch to Google or open another application to double-check the AI’s work? This indicates they don’t trust it as a standalone source of truth. Early on, though, it can also be a healthy sign that users are calibrating their trust in the system.</li>
<li><strong>Disengagement</strong><br />
Do users turn the AI feature off? Do they stop using it entirely after one bad experience? This is the ultimate behavioral vote of no confidence.</li>
</ul>
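<p>Most analytics setups can capture these signals with a handful of events. As a rough sketch of the first metric, with a hypothetical event shape rather than any particular tool’s API:</p>

<div class="break-out">
<pre><code class="language-javascript">// One entry per AI suggestion shown to the user
const suggestions = [
  { accepted: true, edited: false },
  { accepted: true, edited: true },   // kept, but manually corrected
  { accepted: false, edited: false }, // ignored outright
  { accepted: true, edited: false },
];

// Correction rate: share of suggestions the user edited, undid, or ignored
const corrected = suggestions.filter(s => !s.accepted || s.edited).length;
const correctionRate = corrected / suggestions.length;

console.log(`Correction rate: ${(correctionRate * 100).toFixed(0)}%`); // 50%
</code></pre>
</div>

<p>A rising correction rate on the same tasks is one of the clearest behavioral signals that trust in the AI’s Ability is slipping.</p>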


<h2 id="designing-for-trust-from-principles-to-pixels">Designing For Trust: From Principles to Pixels</h2>

<p>Once you’ve researched and measured trust, you can begin to design for it. This means translating psychological principles into tangible interface elements and user flows.</p>

<h3 id="designing-for-competence-and-predictability">Designing for Competence and Predictability</h3>

<ul>
<li><strong>Set Clear Expectations</strong><br />
Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle. A simple <em>“I’m still learning about [topic X], so please double-check my answers”</em> can work wonders.</li>
<li><strong>Show Confidence Levels</strong><br />
Instead of just giving an answer, have the AI signal its own uncertainty. A weather app that says <em>“70% chance of rain”</em> is more trustworthy than one that just says <em>“It will rain”</em> and is wrong. An AI could say, <em>“I’m 85% confident in this summary,”</em> or highlight sentences it’s less sure about.</li>
</ul>
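To make the second point concrete, here is a minimal sketch of surfacing confidence in the interface. The 0.6 threshold and the exact phrasing are arbitrary illustrations, not recommendations.

```typescript
// Render an answer with the model's confidence surfaced to the user.
// The 0.6 threshold below is an arbitrary illustration, not a recommendation.
function renderWithConfidence(answer: string, confidence: number): string {
  const pct = Math.round(confidence * 100);
  return confidence < 0.6
    ? `${answer} (I’m only ${pct}% confident — please double-check this.)`
    : `${answer} (I’m ${pct}% confident in this summary.)`;
}
```

The design point is that low confidence should change the wording, not just the number: a nudge to verify is more useful than a bare percentage.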

<h3 id="the-role-of-explainability-xai-and-transparency">The Role of Explainability (XAI) and Transparency</h3>

<p>Explainability isn’t about showing users the code. It’s about providing a <em>useful, human-understandable rationale</em> for a decision.</p>

<blockquote><strong>Instead of:</strong><br />“Here is your recommendation.”<br /><br /><strong>Try:</strong><br />“Because you frequently read articles about UX research methods, I’m recommending this new piece on measuring trust in AI.”</blockquote>

<p>This addition transforms the AI from an opaque oracle into a transparent, logical partner.</p>
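The rationale pattern above can be generated mechanically once the recommendation carries its strongest signal. The record shape below is hypothetical; the point is that the evidence travels with the recommendation rather than being discarded.

```typescript
// Attach the behavioral evidence that produced a recommendation.
// Field names are illustrative, not from any particular system.
interface Recommendation {
  item: string;      // what is being recommended
  topSignal: string; // the strongest behavioral signal behind it
}

function explainRecommendation(rec: Recommendation): string {
  return `Because you frequently read articles about ${rec.topSignal}, ` +
         `I’m recommending “${rec.item}”.`;
}
```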

<p>Many popular AI tools (e.g., ChatGPT and Gemini) show the thinking that went into their responses. Figure 3 shows the steps Gemini went through to explain why it couldn’t help me generate the masterpiece displayed above in Figure 2. While this might be more information than most users care to see, it gives users a resource for auditing how a response came to be. In this case, it also gave me instructions on how I might proceed with my task.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="740"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png"
			
			sizes="100vw"
			alt="Gemini explains its process and why it can’t complete a task"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 3: Gemini shows its process and why it can’t complete a task I’ve asked it to perform. Smartly, it suggests an alternative way to achieve what I’ve requested. (<a href='https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/3-gemini-explains-response.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Figure 4 shows an example of a <a href="https://openai.com/index/gpt-4o-system-card/">scorecard</a> OpenAI makes available in an effort to increase users’ trust. These scorecards are available for each ChatGPT model and detail how the models perform in key areas such as hallucinations, health-based conversations, and much more. Reading the scorecards closely, you will see that no AI model is perfect in any area. The user must remain in a “trust but verify” mode to make the relationship between human reality and AI work in a way that avoids potential catastrophe. There should never be blind trust in an LLM.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="363"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png"
			
			sizes="100vw"
			alt="Example of OpenAI scorecard for GPT-4o"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Figure 4: Example of OpenAI scorecard for GPT-4o. (<a href='https://files.smashing.media/articles/psychology-trust-ai-guide-measuring-designing-user-confidence/4-openai-scorecard-gpt-4o.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="designing-for-trust-repair-graceful-error-handling-and-not-knowing-an-answer">Designing For Trust Repair (Graceful Error Handling) And Not Knowing An Answer</h3>

<p>Your AI will make mistakes.</p>

<blockquote>Trust is not determined by the absence of errors, but by how those errors are handled.</blockquote>

<ul>
<li><strong>Acknowledge Errors Humbly.</strong><br />
When the AI is wrong, it should be able to state that clearly. <em>“My apologies, I misunderstood that request. Could you please rephrase it?”</em> is far better than silence or a nonsensical answer.</li>
<li><strong>Provide an Easy Path to Correction.</strong><br />
Make feedback mechanisms (like thumbs up/down or a correction box) obvious. More importantly, show that the feedback is being used. A <em>“Thank you, I’m learning from your correction”</em> can help rebuild trust after a failure, provided it is actually true.</li>
</ul>

<p>Likewise, your AI can’t know everything. You should acknowledge this to your users.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aUX%20practitioners%20should%20work%20with%20the%20product%20team%20to%20ensure%20that%20honesty%20about%20limitations%20is%20a%20core%20product%20principle.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fpsychology-trust-ai-guide-measuring-designing-user-confidence%2f">
      
UX practitioners should work with the product team to ensure that honesty about limitations is a core product principle.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>This can include the following:</p>

<ul>
<li><strong>Establish User-Centric Metrics:</strong> Instead of only measuring engagement or task completion, UXers can work with product managers to define and track metrics like:

<ul>
<li><strong>Hallucination Rate:</strong> The frequency with which the AI provides verifiably false information.</li>
<li><strong>Successful Fallback Rate:</strong> How often the AI correctly identifies its inability to answer and provides a helpful, honest alternative.</li>
</ul></li>
<li><strong>Prioritize the “I Don’t Know” Experience:</strong> UXers should frame the “I don’t know” response not as an error state, but as a critical feature. They must lobby for the engineering and content resources needed to design a high-quality, helpful fallback experience.</li>
</ul>
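The two user-centric metrics above can be computed from human-reviewed response audits. The audit record below is a hypothetical shape for illustration; in practice, these labels come from a manual or semi-automated review process.

```typescript
// Hypothetical per-response audit record produced by human review.
interface AuditedResponse {
  verifiablyFalse: boolean;    // counted toward the hallucination rate
  couldNotAnswer: boolean;     // the model lacked a reliable answer…
  gaveHonestFallback: boolean; // …and said so, offering a helpful alternative
}

function hallucinationRate(audits: AuditedResponse[]): number {
  if (audits.length === 0) return 0;
  return audits.filter(a => a.verifiablyFalse).length / audits.length;
}

function successfulFallbackRate(audits: AuditedResponse[]): number {
  const noAnswer = audits.filter(a => a.couldNotAnswer);
  if (noAnswer.length === 0) return 1; // nothing to fall back from
  return noAnswer.filter(a => a.gaveHonestFallback).length / noAnswer.length;
}
```

Note the framing: a high successful fallback rate is a feature metric, not an error metric, which is exactly the reframing of “I don’t know” argued for above.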

<h2 id="ux-writing-and-trust">UX Writing And Trust</h2>

<p>All of these considerations highlight the critical role of <a href="https://lmsanchez.medium.com/what-is-ux-writing-1eb71b0f0606">UX writing</a> in the development of trustworthy AI. UX writers are the architects of the AI’s voice and tone, ensuring that its communication is clear, honest, and empathetic. They translate complex technical processes into user-friendly explanations, craft helpful error messages, and design conversational flows that build confidence and rapport. Without <strong>thoughtful UX writing</strong>, even the most technologically advanced AI can feel opaque and untrustworthy.</p>

<p>The words and phrases an AI uses are its primary interface with users. UX writers are uniquely positioned to shape this interaction, ensuring that every tooltip, prompt, and response contributes to a positive and trust-building experience. Their expertise in <strong>human-centered language and design</strong> is indispensable for creating AI systems that not only perform well but also earn and maintain the trust of their users.</p>

<p>A few key areas for UX writers to focus on when writing for AI include:</p>

<ul>
<li><strong>Prioritize Transparency</strong><br />
Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if its responses are generated rather than factual. Use phrases that indicate the AI’s nature, such as <em>“As an AI, I can&hellip;”</em> or <em>“This is a generated response.”</em></li>
<li><strong>Design for Explainability</strong><br />
When the AI provides a recommendation, decision, or complex output, strive to explain the reasoning behind it in an understandable way. This builds trust by showing the user how the AI arrived at its conclusion.</li>
<li><strong>Emphasize User Control</strong><br />
Empower users by providing clear ways to provide feedback, correct errors, or opt out of certain AI features. This reinforces the idea that the user is in control and the AI is a tool to assist them.</li>
</ul>

<h2 id="the-ethical-tightrope-the-researcher-s-responsibility">The Ethical Tightrope: The Researcher’s Responsibility</h2>

<p>As the people responsible for understanding and advocating for users, we walk an ethical tightrope. Our work comes with profound responsibilities.</p>

<h3 id="the-danger-of-trustwashing">The Danger Of “Trustwashing”</h3>

<p>We must draw a hard line between designing for <em>calibrated trust</em> and designing to <em>manipulate</em> users into trusting a flawed, biased, or harmful system. For example, if an AI system designed for loan approvals consistently discriminates against certain demographics but presents a user interface that implies fairness and transparency, this would be an instance of trustwashing.</p>

<p>Another example of trustwashing would be if an AI medical diagnostic tool occasionally misdiagnoses conditions, but the user interface makes it seem infallible. To avoid trustwashing, the system should clearly communicate the potential for error and the need for human oversight.</p>

<p>Our goal must be to create genuinely trustworthy systems, not just the perception of trust. Using these principles to lull users into a false sense of security is a betrayal of our professional ethics.</p>

<p><strong>To avoid and prevent trustwashing, researchers and UX teams should:</strong></p>

<ul>
<li><strong>Prioritize genuine transparency.</strong><br />
Clearly communicate the limitations, biases, and uncertainties of AI systems. Don’t overstate capabilities or obscure potential risks.</li>
<li><strong>Conduct rigorous, independent evaluations.</strong><br />
Go beyond internal testing and seek external validation of system performance, fairness, and robustness.</li>
<li><strong>Engage with diverse stakeholders.</strong><br />
Involve users, ethics experts, and impacted communities in the design, development, and evaluation processes to identify potential harms and build genuine trust.</li>
<li><strong>Be accountable for outcomes.</strong><br />
Take responsibility for the societal impact of AI systems, even if unintended. Establish clear and accessible mechanisms for redress, ensuring that individuals and communities affected by AI decisions have avenues for recourse and compensation, and commit to continuous improvement.</li>
<li><strong>Educate the public.</strong><br />
Help users understand how AI works, its limitations, and what to look for when evaluating AI products.</li>
<li><strong>Advocate for ethical guidelines and regulations.</strong><br />
Support the development and implementation of industry standards and policies that promote responsible AI development and prevent deceptive practices.</li>
<li><strong>Be wary of marketing hype.</strong><br />
Critically assess claims made about AI systems, especially those that emphasize “trustworthiness” without clear evidence or detailed explanations.</li>
<li><strong>Publish negative findings.</strong><br />
Don’t shy away from reporting challenges, failures, or ethical dilemmas encountered during research. Transparency about limitations is crucial for building long-term trust.</li>
<li><strong>Focus on user empowerment.</strong><br />
Design systems that give users control, agency, and understanding rather than just passively accepting AI outputs.</li>
</ul>

<h4 id="the-duty-to-advocate">The Duty To Advocate</h4>

<p>When our research uncovers deep-seated distrust or potential harm &mdash; like the fear of job displacement &mdash; our job has only just begun. We have an ethical duty to advocate for that user. In my experience directing research teams, I’ve seen that the hardest part of our job is often carrying these uncomfortable truths into rooms where decisions are made. We must champion these findings and advocate for <strong>design and strategy shifts that prioritize user well-being, even when it challenges the product roadmap</strong>.</p>

<p>I personally try to approach presenting this information as an opportunity for growth and improvement, rather than a negative challenge.</p>

<p>For example, instead of stating <em>“Users don’t trust our AI because they fear job displacement,”</em> I might frame it as <em>“Addressing user concerns about job displacement presents a significant opportunity to build deeper trust and long-term loyalty by demonstrating our commitment to responsible AI development and exploring features that enhance human capabilities rather than replace them.”</em> This reframing can shift the conversation from a defensive posture to a proactive, problem-solving mindset, encouraging collaboration and innovative solutions that ultimately benefit both the user and the business.</p>

<p>It’s no secret that one of the more appealing areas for businesses to use AI is in workforce reduction. In reality, there will be many cases where businesses look to cut 10&ndash;20% of a particular job family due to the perceived efficiency gains of AI. However, giving users the opportunity to shape the product may steer it in a direction that makes them <strong>feel safer</strong> than if they do not provide feedback. We should not attempt to convince users they are wrong if they are distrustful of AI. We should appreciate that they are willing to provide feedback, creating an experience that is informed by the human experts who have long been doing the task being automated.</p>

<div class="partners__lead-place"></div>

<h2 id="conclusion-building-our-digital-future-on-a-foundation-of-trust">Conclusion: Building Our Digital Future On A Foundation Of Trust</h2>

<p>The rise of AI is not the first major technological shift our field has faced. However, it presents one of the most significant psychological challenges of our current time. Building products that are not just usable but also <strong>responsible</strong>, <strong>humane</strong>, and <strong>trustworthy</strong> is our obligation as UX professionals.</p>

<p><strong>Trust is not a soft metric.</strong> It is the fundamental currency of any successful human-technology relationship. By understanding its psychological roots, measuring it with rigor, and designing for it with intent and integrity, we can move from creating “intelligent” products to building a future where users can place their confidence in the tools they use every day. A trust that is earned and deserved.</p>

<h3 id="table-1-published-academic-scales-measuring-trust-in-automated-systems">Table 1: Published Academic Scales Measuring Trust In Automated Systems</h3>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Survey Tool Name</th>
            <th>Focus</th>
      <th>Key Dimensions of Trust</th>
      <th>Citation</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Trust in Automation Scale</td>
            <td>12-item questionnaire to assess trust between people and automated systems.</td>
      <td>Measures a general level of trust, including reliability, predictability, and confidence.</td>
      <td>Jian, J. Y., Bisantz, A. M., & Drury, C. G. (2000). <a href="https://www.researchgate.net/publication/247502831_Foundations_for_an_Empirically_Determined_Scale_of_Trust_in_Automated_Systems">Foundations for an empirically determined scale of trust in automated systems</a>. International Journal of Cognitive Ergonomics, 4(1), 53–71.</td>
        </tr>
        <tr>
            <td>Trust of Automated Systems Test (TOAST)</td>
            <td>9-item scale used to measure user trust in a variety of automated systems, designed for quick administration.</td>
      <td>Divided into two main subscales: Understanding (user’s comprehension of the system) and Performance (belief in the system’s effectiveness).</td>
      <td>Wojton, H. M., Porter, D., Lane, S. T., Bieber, C., & Madhavan, P. (2020). <a href="https://research.testscience.org/post/2019-initial-validation-of-the-trust-of-automated-systems-test-toast/paper.pdf">Initial validation of the trust of automated systems test (TOAST)</a>. (PDF) The Journal of Social Psychology, 160(6), 735–750.</td>
        </tr>
        <tr>
            <td>Trust in Automation Questionnaire</td>
            <td>A 19-item questionnaire capable of predicting user reliance on automated systems. A 2-item subscale is available for quick assessments; the full tool is recommended for a more thorough analysis.</td>
      <td>Measures 6 factors: Reliability, Understandability, Propensity to trust, Intentions of developers, Familiarity, Trust in automation</td>
      <td>Körber, M. (2018). <a href="https://www.researchgate.net/publication/323611886_Theoretical_considerations_and_development_of_a_questionnaire_to_measure_trust_in_automation">Theoretical considerations and development of a questionnaire to measure trust in automation</a>. In Proceedings 20th Triennial Congress of the IEA. Springer.</td>
        </tr>
    <tr>
            <td>Human Computer Trust Scale</td>
            <td>12-item questionnaire created to provide an empirically sound tool for assessing user trust in technology.</td>
      <td>Divided into two key factors:<ol><li><strong>Benevolence and Competence</strong>: This dimension captures the positive attributes of the technology</li><li><strong>Perceived Risk</strong>: This factor measures the user’s subjective assessment of the potential for negative consequences when using a technical artifact.</li></ol></td>
      <td>Gulati, S., Sousa, S., & Lamas, D. (2019). <a href="https://www.researchgate.net/profile/Sonia-Sousa-9/publication/335667672_Towards_an_empirically_developed_scale_for_measuring_trust/links/5f6f36d7458515b7cf508e88/Towards-an-empirically-developed-scale-for-measuring-trust.pdf">Design, development and evaluation of a human-computer trust scale</a>. (PDF) Behaviour & Information Technology.</td>
        </tr>
    </tbody>
</table>
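As an illustration of putting a scale like these into practice, the sketch below scores a 12-item Likert questionnaire. It assumes a 1–7 response scale and that the first five items are distrust-worded and therefore reverse-scored, as is common practice with the Jian et al. instrument; verify the item wording and scoring rules against the original publication before using anything like this in a real study.

```typescript
// Score a 12-item trust-in-automation questionnaire on a 1–7 Likert scale.
// Assumption (common practice with the Jian et al. scale): the first five
// items are distrust-worded and must be reverse-scored. Verify against
// the original instrument before relying on this.
function scoreTrustScale(responses: number[], reverseItems = [0, 1, 2, 3, 4]): number {
  if (responses.length !== 12) throw new Error("expected 12 item responses");
  const scored = responses.map((r, i) =>
    reverseItems.includes(i) ? 8 - r : r); // reverse-score on a 1–7 scale
  return scored.reduce((a, b) => a + b, 0) / scored.length;
}
```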

<h3 id="appendix-a-trust-building-tactics-checklist">Appendix A: Trust-Building Tactics Checklist</h3>

<p>To design for calibrated trust, consider implementing the following tactics, organized by the four pillars of trust:</p>

<h4 id="1-ability-competence-predictability">1. Ability (Competence) &amp; Predictability</h4>

<ul>
<li>✅ <strong>Set Clear Expectations:</strong> Use onboarding, tooltips, and empty states to honestly communicate the AI’s strengths and weaknesses.</li>
<li>✅ <strong>Show Confidence Levels:</strong> Display the AI’s uncertainty (e.g., “70% chance,” “85% confident”) or highlight less certain parts of its output.</li>
<li>✅ <strong>Provide Explainability (XAI):</strong> Offer useful, human-understandable rationales for the AI’s decisions or recommendations (e.g., “Because you frequently read X, I’m recommending Y”).</li>
<li>✅ <strong>Design for Graceful Error Handling:</strong>

<ul>
<li>✅ Acknowledge errors humbly (e.g., “My apologies, I misunderstood that request.”).</li>
<li>✅ Provide easy paths to correction (e.g., prominent feedback mechanisms like thumbs up/down).</li>
<li>✅ Show that feedback is being used (e.g., “Thank you, I’m learning from your correction”).</li>
</ul></li>
<li>✅ <strong>Design for “I Don’t Know” Responses:</strong>

<ul>
<li>✅ Acknowledge limitations honestly.</li>
<li>✅ Prioritize a high-quality, helpful fallback experience when the AI cannot answer.</li>
</ul></li>
<li>✅ <strong>Prioritize Transparency:</strong> Clearly communicate the AI’s capabilities and limitations, especially if responses are generated.</li>
</ul>

<h4 id="2-benevolence-1">2. Benevolence</h4>

<ul>
<li>✅ <strong>Address Existential Fears:</strong> When users express concerns (e.g., job displacement), validate their concerns and reframe the feedback into actionable insights about collaborative tools.</li>
<li>✅ <strong>Prioritize User Well-being:</strong> Advocate for design and strategy shifts that prioritize user well-being, even if it challenges the product roadmap.</li>
<li>✅ <strong>Emphasize User Control:</strong> Provide clear ways for users to give feedback, correct errors, or opt out of AI features.</li>
</ul>

<h4 id="3-integrity-1">3. Integrity</h4>

<ul>
<li>✅ <strong>Adhere to Ethical Principles:</strong> Ensure the AI operates on predictable, ethical principles, demonstrating fairness and honesty.</li>
<li>✅ <strong>Prioritize Genuine Transparency:</strong> Clearly communicate the limitations, biases, and uncertainties of AI systems; avoid overstating capabilities or obscuring risks.</li>
<li>✅ <strong>Conduct Rigorous, Independent Evaluations:</strong> Seek external validation of system performance, fairness, and robustness to mitigate bias.</li>
<li>✅ <strong>Engage Diverse Stakeholders:</strong> Involve users, ethics experts, and impacted communities in the design and evaluation processes.</li>
<li>✅ <strong>Be Accountable for Outcomes:</strong> Establish clear mechanisms for redress and continuous improvement for societal impacts, even if unintended.</li>
<li>✅ <strong>Educate the Public:</strong> Help users understand how AI works, its limitations, and how to evaluate AI products.</li>
<li>✅ <strong>Advocate for Ethical Guidelines:</strong> Support the development and implementation of industry standards and policies that promote responsible AI.</li>
<li>✅ <strong>Be Wary of Marketing Hype:</strong> Critically assess claims about AI “trustworthiness” and demand verifiable data.</li>
<li>✅ <strong>Publish Negative Findings:</strong> Be transparent about challenges, failures, or ethical dilemmas encountered during research.</li>
</ul>

<h4 id="4-predictability-reliability-1">4. Predictability &amp; Reliability</h4>

<ul>
<li>✅ <strong>Set Clear Expectations:</strong> Use onboarding, tooltips, and empty states to honestly communicate what the AI is good at and where it might struggle.</li>
<li>✅ <strong>Show Confidence Levels:</strong> Instead of just giving an answer, have the AI signal its own uncertainty.</li>
<li>✅ <strong>Provide Explainability (XAI) and Transparency:</strong> Offer a useful, human-understandable rationale for AI decisions.</li>
<li>✅ <strong>Design for Graceful Error Handling:</strong> Acknowledge errors humbly and provide easy paths to correction.</li>
<li>✅ <strong>Prioritize the “I Don’t Know” Experience:</strong> Frame “I don’t know” as a feature and design a high-quality fallback experience.</li>
<li>✅ <strong>Prioritize Transparency (UX Writing):</strong> Clearly communicate the AI’s capabilities and limitations, especially when it’s still learning or if responses are generated.</li>
<li>✅ <strong>Design for Explainability (UX Writing):</strong> Explain the reasoning behind AI recommendations, decisions, or complex outputs.</li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Marius Sarca</author><title>Creating Elastic And Bounce Effects With Expressive Animator</title><link>https://www.smashingmagazine.com/2025/09/creating-elastic-bounce-effects-expressive-animator/</link><pubDate>Mon, 15 Sep 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/creating-elastic-bounce-effects-expressive-animator/</guid><description>Elastic and bounce effects have long been among the most desirable but time-consuming techniques in motion design. Expressive Animator streamlines the process, making it possible to produce lively animations in seconds, bypassing the tedious work of manual keyframe editing.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/creating-elastic-bounce-effects-expressive-animator/" />
              <title>Creating Elastic And Bounce Effects With Expressive Animator</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Creating Elastic And Bounce Effects With Expressive Animator</h1>
                  
                    
                    <address>Marius Sarca</address>
                  
                  <time datetime="2025-09-15T10:00:00&#43;00:00" class="op-published">2025-09-15T10:00:00+00:00</time>
                  <time datetime="2025-09-15T10:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                <p>This article is sponsored by <b>Expressive</b></p>
                

<p>In the world of modern web design, SVG images are used everywhere, from illustrations to icons to background effects, and are universally prized for their crispness and lightweight size. While static SVG images play an important role in web design, most of the time their true potential is unlocked only when they are combined with motion.</p>

<p>Few things add more life and personality to a website than a well-executed SVG animation. But not all animations have the same impact in terms of digital experience. For example, <strong>elastic and bounce effects</strong> have a unique appeal in motion design because they bring a <strong>sense of realism into movement</strong>, making animations more engaging and memorable.</p>

<figure><a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/grumpy-egg.gif"><img src="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/grumpy-egg-800.gif" width="800" height="800" alt="Grumpy Egg" /></a><figcaption>(<a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/grumpy-egg.gif">Large preview</a>)</figcaption></figure>

<p>However, anyone who has dived into animating SVGs knows <a href="https://www.smashingmagazine.com/2023/02/putting-gears-motion-animating-cars-with-html-svg/">the technical hurdles involved</a>. Creating a convincing elastic or bounce effect traditionally requires handling complex CSS keyframes or wrestling with JavaScript animation libraries. Even when using an SVG animation editor, it will most likely require you to manually add the keyframes and adjust the easing functions between them, which can become a time-consuming process of trial and error, no matter the level of experience you have.</p>
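To give a sense of what that hand-rolled work looks like, here is the standard “elastic out” easing curve (the widely used easings.net formulation) sketched in TypeScript. The 600&nbsp;ms duration in the usage helper is an arbitrary example value.

```typescript
// Standard "elastic out" easing: overshoots the target and springs back.
// t is normalized time in [0, 1]; returns the eased progress.
function easeOutElastic(t: number): number {
  const c4 = (2 * Math.PI) / 3;
  if (t === 0) return 0;
  if (t === 1) return 1;
  return Math.pow(2, -10 * t) * Math.sin((t * 10 - 0.75) * c4) + 1;
}

// Example: drive an SVG element's scale from 0 to 1 over 600 ms.
function elasticScaleAt(elapsedMs: number, durationMs = 600): number {
  return easeOutElastic(Math.min(elapsedMs / durationMs, 1));
}
```

Getting a curve like this to feel right per property, per element, is precisely the trial-and-error that a built-in elastic easing removes.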

<p>This is where Expressive Animator shines. It allows creators to apply elastic and bounce effects <strong>in seconds</strong>, bypassing the tedious work of manual keyframe editing. And the result is always exceptional: animations that feel <em>alive</em>, produced with a fraction of the effort.</p>

<h2 id="using-expressive-animator-to-create-an-elastic-effect">Using Expressive Animator To Create An Elastic Effect</h2>

<p>Creating an elastic effect in Expressive Animator is remarkably simple, fast, and intuitive, since the effect is built right into the software as an easing function. This means you only need two keyframes (start and end) to make the effect, and the software will automatically handle the springy motion in between. Even better, the elastic easing can be applied to <strong>any animatable property</strong> (e.g., position, scale, rotation, opacity, morph, etc.), giving you a consistent way to add it to your animations.</p>

<p>Before we dive into the tutorial, take a look at the video below to see what you will learn to create and the entire process from start to finish.</p>


<figure class="video-embed-container break-out">
  <div class="video-embed-container--wrapper"
	
  >
    <iframe class="video-embed-container--wrapper-iframe" src="https://player.vimeo.com/video/1116135653"
        frameborder="0"
        allow="autoplay; fullscreen; picture-in-picture"
        allowfullscreen>
    </iframe>
	</div>
	
</figure>

<p>First things first, let’s set the scene. We’ll <a href="https://expressive.app/expressive-animator/docs/v1/projects/create/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">create a new project</a> by pressing <kbd>Ctrl</kbd>/<kbd>Cmd</kbd> + <kbd>P</kbd> and configuring it in the “Create New Project” dialog that pops up. We’ll choose a frame size of 1080×1080 and a duration of 00:01:30, and we’ll leave the frame rate unchanged at 60 frames per second (fps).</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/01-create-dialog.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="467"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/01-create-dialog.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/01-create-dialog.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/01-create-dialog.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/01-create-dialog.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/01-create-dialog.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/01-create-dialog.png"
			
			sizes="100vw"
			alt="“Create New Project” dialog"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/01-create-dialog.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Once you hit the “Create project” button, you can use the <a href="https://expressive.app/expressive-animator/docs/v1/tools/pen-tool/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">Pen</a> and <a href="https://expressive.app/expressive-animator/docs/v1/tools/ellipse-tool/">Ellipse</a> tools to create the artwork that will be animated, or you can simply copy and paste the artwork below.</p>

<figure class="break-out">
	<p data-height="600"
	data-theme-id="light"
	data-slug-hash="pvjmwxv"
	data-user="smashingmag"
	data-default-tab="result"
	class="codepen">See the Pen [Effects With Expressive Animator - Artwork for Animation](https://codepen.io/smashingmag/pen/pvjmwxv).</p>
	<figcaption>See the Pen <a href="https://codepen.io/smashingmag/pen/pvjmwxv">Effects With Expressive Animator - Artwork for Animation</a>.</figcaption>
</figure>

<p>Now that everything has been set up, let’s create the animation. Make sure that snapping and auto-record are enabled, then move the playhead to 01:00f. By <a href="https://expressive.app/expressive-animator/docs/v1/canvas/snapping/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">enabling snapping</a>, you will be able to perfectly align nodes and graphic objects on the canvas. On the other hand, as the name suggests, auto-record tracks every change you make to the artwork and adds the appropriate keyframes on the timeline.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/02-prepare-scene.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="467"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/02-prepare-scene.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/02-prepare-scene.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/02-prepare-scene.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/02-prepare-scene.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/02-prepare-scene.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/02-prepare-scene.png"
			
			sizes="100vw"
			alt="Screenshot with snapping and auto-record are enabled and the playhead moved to 01:00f"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/02-prepare-scene.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Press the <kbd>A</kbd> key on your keyboard to switch to the <a href="https://expressive.app/expressive-animator/docs/v1/tools/node-tool/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">Node tool</a>, then select the String object and move its handle to the center-right point of the artboard. Don’t worry about precision, as the snapping will do all the heavy lifting for you. This will bend the shape and add keyframes for the Morph animator.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/03-string.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="467"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/03-string.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/03-string.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/03-string.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/03-string.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/03-string.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/03-string.png"
			
			sizes="100vw"
			alt="Screenshot with the String object and its handle moved to the center-right point of the artboard"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/03-string.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Next, press the <kbd>V</kbd> key on your keyboard to switch to the <a href="https://expressive.app/expressive-animator/docs/v1/tools/selection-tool/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">Selection tool</a>. With this tool enabled, select the Ball, move it to the right, and place it in the middle of the string. Once again, snapping will do all the hard work, allowing you to position the ball exactly where you want it, while auto-recording automatically adds the appropriate keyframes.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/04-ball.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="467"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/04-ball.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/04-ball.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/04-ball.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/04-ball.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/04-ball.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/04-ball.png"
			
			sizes="100vw"
			alt="Screenshot with the Ball selected and moved to the middle of the string"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/04-ball.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>You can now replay the animation and disable auto-recording by clicking on the Auto-Record button again.</p>

<p>As you can see when replaying, the String and Ball objects move in the wrong direction. Fortunately, this is easy to fix by reversing the keyframes: select them in the timeline, right-click to open the context menu, and choose Reverse. Replay the animation, and you will see that the direction is now correct.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/05-reverse.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="467"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/05-reverse.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/05-reverse.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/05-reverse.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/05-reverse.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/05-reverse.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/05-reverse.png"
			
			sizes="100vw"
			alt="Screenshot with the context menu where you can choose Reverse"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/05-reverse.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>With this out of the way, we can finally add the elastic effect. Select all the keyframes in the timeline and click on the Custom easing button to open a dialog with easing options. From the dialog, choose Elastic and set the oscillations to 4 and the stiffness to 2.5.</p>

<p>That’s it! Click anywhere outside the easing dialog to close it and replay the animation to see the result.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/06-effect.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="467"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/06-effect.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/06-effect.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/06-effect.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/06-effect.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/06-effect.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/06-effect.png"
			
			sizes="100vw"
			alt="Selected custom easing button that opened a dialog with easing options"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/06-effect.png'>Large preview</a>)
    </figcaption>
  
</figure>
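If you are curious about what the elastic easing does mathematically, it can be modeled as a damped oscillation around the target value. The sketch below is a rough JavaScript approximation, not Expressive Animator’s actual formula; the parameter names simply mirror the oscillations and stiffness settings from the dialog.

```javascript
// Rough approximation of an elastic ease-out curve.
// NOT Expressive Animator's internal formula -- just an illustration
// of how "oscillations" and "stiffness" shape the motion.
function elasticEaseOut(t, oscillations = 4, stiffness = 2.5) {
  if (t <= 0) return 0;
  if (t >= 1) return 1;
  // The envelope decays toward 0 as t approaches 1; higher stiffness
  // makes the wobble die out faster.
  const envelope = Math.pow(1 - t, stiffness * 2);
  // The cosine term makes the value swing around the target,
  // crossing it several times before settling.
  return 1 - envelope * Math.cos(oscillations * 2 * Math.PI * t);
}

// Sample the curve: values above 1 are the overshoots that give
// the animation its springy feel.
for (const t of [0, 0.125, 0.25, 0.5, 1]) {
  console.log(t, elasticEaseOut(t).toFixed(3));
}
```

Plotting these samples reproduces the characteristic overshoot-and-settle shape you can see in the easing dialog’s preview.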

<p><a href="https://expressive.app/expressive-animator/docs/v1/export/svg/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">The animation can be exported as well.</a> Press <kbd>Cmd</kbd>/<kbd>Ctrl</kbd> + <kbd>E</kbd> on your keyboard to open the export dialog and choose from various export options, ranging from vectorized formats, such as <a href="https://expressive.app/expressive-animator/docs/v1/export/svg/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">SVG</a> and <a href="https://expressive.app/expressive-animator/docs/v1/export/lottie/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">Lottie</a>, to rasterized formats, such as <a href="https://expressive.app/expressive-animator/docs/v1/export/image/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">GIF</a> and <a href="https://expressive.app/expressive-animator/docs/v1/export/video/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">video</a>.</p>

<p>For this specific animation, we’re going to choose the SVG export format. Expressive Animator allows you to choose between three different types of SVG, depending on the technology used for animation: <a href="https://expressive.app/expressive-animator/docs/v1/export/svg/smil/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">SMIL</a>, <a href="https://expressive.app/expressive-animator/docs/v1/export/svg/css/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">CSS</a>, or <a href="https://expressive.app/expressive-animator/docs/v1/export/svg/js/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">JavaScript</a>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/07-export.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="467"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/07-export.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/07-export.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/07-export.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/07-export.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/07-export.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/07-export.png"
			
			sizes="100vw"
			alt="Export settings in the Expressive Animator"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/07-export.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Each of these technologies has different strengths and weaknesses, but for this tutorial, we are going to choose SMIL. This is because SMIL-based animations are widely supported, even in Safari, and can be used as background images or embedded in HTML pages using the <code>&lt;img&gt;</code> tag. In fact, <a href="https://www.smashingmagazine.com/2025/05/smashing-animations-part-3-smil-not-dead/">Andy Clarke recently wrote all about SMIL animations here at Smashing Magazine</a> if you want a full explanation of how it works.</p>
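To give a sense of what SMIL markup looks like, here is a minimal illustrative snippet (not the file Expressive Animator exports): a circle sliding across the canvas, animated entirely by a declarative <code>&lt;animate&gt;</code> element with no CSS or JavaScript involved.

```html
<!-- A hand-written SMIL example: the <animate> child element
     drives the circle's cx attribute from 40 to 160 and repeats. -->
<svg xmlns="http://www.w3.org/2000/svg" width="200" height="100" viewBox="0 0 200 100">
  <circle cx="40" cy="50" r="20" fill="tomato">
    <animate attributeName="cx" from="40" to="160"
             dur="1.5s" repeatCount="indefinite" />
  </circle>
</svg>
```

Because the animation lives inside the SVG itself, this file keeps moving even when referenced as <code>&lt;img src="ball.svg"&gt;</code> or as a CSS background image.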

<p>You can visualize the exported SVG in the following CodePen demo:</p>

<figure class="break-out">
	<p data-height="600"
	data-theme-id="light"
	data-slug-hash="GgpaEyG"
	data-user="smashingmag"
	data-default-tab="result"
	class="codepen">See the Pen [Expressive Animator - Exported SVG](https://codepen.io/smashingmag/pen/GgpaEyG).</p>
	<figcaption>See the Pen <a href="https://codepen.io/smashingmag/pen/GgpaEyG">Expressive Animator - Exported SVG</a>.</figcaption>
</figure>

<h2 id="expressive-animator-for-bounce-and-other-effects">Expressive Animator For Bounce And Other Effects</h2>

<p>Adding a bounce effect to an animation is very similar to the process we just covered for creating an elastic effect, since both are built into Expressive Animator as easing functions. Just like elastic, bounce easing can be applied to any animatable property, giving you quick ways to create realistic motion.</p>

<p>Beyond these two effects, Expressive Animator also offers other easing options that can shape the personality of your animation, such as Back, Steps, and Sinc, to name a few.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/08-easing-functions.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="757"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/08-easing-functions.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/08-easing-functions.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/08-easing-functions.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/08-easing-functions.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/08-easing-functions.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/08-easing-functions.png"
			
			sizes="100vw"
			alt="Easing functions in the Expressive Animator"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/08-easing-functions.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="conclusion">Conclusion</h2>

<p>Elastic and bounce effects have long been among the most desirable but time-consuming techniques in motion design. By integrating them directly into its easing functions, Expressive Animator removes the complexity of manual keyframe manipulation and transforms what used to be a technical challenge into a creative opportunity.</p>

<p>The best part is that getting started with Expressive Animator comes with zero risk. The software offers a full 7-day <strong>free trial without requiring an account</strong>, so you can download it instantly and begin experimenting with your own designs right away. After the trial ends, you can buy Expressive Animator with a one-time payment, <strong>no subscription required</strong>. This will give you a perpetual license covering both Windows and macOS.</p>

<p>To help you get started even faster, I’ve prepared some extra resources for you. You’ll find the source files for the animations created in this tutorial, along with a curated list of useful links that will guide you further in exploring Expressive Animator and SVG animation. These materials are meant to give you a solid starting point so you can learn, experiment, and build on your own with confidence.</p>

<ul>
<li>Grumpy Egg: The <a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/grumpy-egg.eaf" download><code>.eaf</code></a> source file for the sample animation presented at the beginning of this article.</li>
<li>Elastic Effect: Another <a href="https://files.smashing.media/articles/creating-elastic-bounce-effects-expressive-animator/elastic-effect.eaf" download><code>.eaf</code></a> file, this time for the animation we made in this tutorial.</li>
<li><a href="https://expressive.app/expressive-animator/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">Get started with Expressive Animator</a></li>
<li>Expressive Animator <a href="https://expressive.app/expressive-animator/docs/v1/?utm_source=smashingmagazine&amp;utm_medium=blog&amp;utm_campaign=elastic_effect">Documentation</a></li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(gg, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Karan Rawal</author><title>From Data To Decisions: UX Strategies For Real-Time Dashboards</title><link>https://www.smashingmagazine.com/2025/09/ux-strategies-real-time-dashboards/</link><pubDate>Fri, 12 Sep 2025 15:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/ux-strategies-real-time-dashboards/</guid><description>Real-time dashboards are decision assistants, not passive displays. In environments like fleet management, healthcare, and operations, the cost of a delay or misstep is high. Karan Rawal explores strategic UX patterns that shorten time-to-decision, reduce cognitive overload, and make live systems trustworthy.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/ux-strategies-real-time-dashboards/" />
              <title>From Data To Decisions: UX Strategies For Real-Time Dashboards</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>From Data To Decisions: UX Strategies For Real-Time Dashboards</h1>
                  
                    
                    <address>Karan Rawal</address>
                  
                  <time datetime="2025-09-12T15:00:00&#43;00:00" class="op-published">2025-09-12T15:00:00+00:00</time>
                  <time datetime="2025-09-12T15:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>I once worked with a fleet operations team that monitored dozens of vehicles in multiple cities. Their dashboard showed fuel consumption, live GPS locations, and real-time driver updates. Yet the team struggled to see what needed urgent attention. The problem was not a lack of data but a lack of clear indicators to support decision-making. There were no priorities, alerts, or context to highlight what mattered most at any moment.</p>

<p><strong>Real-time dashboards</strong> are now critical decision-making tools in industries like logistics, manufacturing, finance, and healthcare. However, many of them fail to help users make timely and confident decisions, even when they show live data.</p>

<blockquote>Designing for real-time use is very different from designing static dashboards. The challenge is not only presenting metrics but enabling decisions under pressure. Real-time users face limited time and a high cognitive load. They need clarity on actions, not just access to raw data.</blockquote>

<p>This requires interface elements that support quick scanning, pattern recognition, and guided attention. Layout hierarchy, alert colors, grouping, and motion cues all help, but they must be driven by a deeper strategy: understanding what the user must decide in <em>that</em> moment.</p>

<p>This article explores <strong>practical UX strategies</strong> for real-time dashboards that enable real decisions. Instead of focusing only on visual best practices, it looks at how user intent, personalization, and cognitive flow can turn raw data into meaningful, timely insights.</p>

<h2 id="designing-for-real-time-comprehension-helping-users-stay-focused-under-pressure">Designing for Real-Time Comprehension: Helping Users Stay Focused Under Pressure</h2>

<p>A GPS app not only shows users their location but also helps them decide where to go next. In the same way, a real-time dashboard should go beyond displaying the latest data. Its purpose is to help users quickly understand complex information and make informed decisions, especially in fast-paced environments with short attention spans.</p>

<h3 id="how-users-process-real-time-updates">How Users Process Real-Time Updates</h3>

<p>Humans have limited cognitive capacity, so they can only process a small amount of data at once. Without <strong>proper context</strong> or <strong>visual cues</strong>, rapidly updating dashboards can overwhelm users and shift attention away from key information.</p>

<p>To address this, I use the following approaches:</p>

<ul>
<li><strong>Delta Indicators and Trend Sparklines</strong><br />
<a href="https://in.tradingview.com/scripts/delta/">Delta indicators</a> show value changes at a glance, while sparklines are small line charts that reveal trends over time in a compact space. For example, a sales dashboard might show a green upward arrow next to revenue to indicate growth, along with a sparkline displaying sales trends over the past week.</li>
<li><strong>Subtle Micro-Animations</strong><br />
<a href="https://www.youtube.com/watch?v=MZjV27K2KR4">Small animations</a> highlight changes without distracting users. Research in cognitive psychology shows that such animations effectively draw attention, helping users notice updates while staying focused. For instance, a soft pulse around a changing metric can signal activity without overwhelming the viewer.</li>
<li><strong>Mini-History Views</strong><br />
Showing a short history of recent changes reduces reliance on memory. For example, a dashboard might let users scroll back a few minutes to review updates, supporting better understanding and verification of data trends.</li>
</ul>
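As a rough illustration of the first approach (the class names and values here are made up), a delta indicator plus a sparkline can be as lightweight as a single inline SVG polyline next to the metric:

```html
<!-- Hypothetical revenue card: a green delta indicator plus a
     sparkline drawn as one SVG polyline over seven data points. -->
<div class="metric-card">
  <span class="metric-label">Revenue</span>
  <span class="metric-delta" style="color: green;">▲ 4.2%</span>
  <svg width="120" height="32" viewBox="0 0 120 32"
       role="img" aria-label="Revenue trend, last 7 days">
    <polyline fill="none" stroke="green" stroke-width="2"
              points="0,26 20,22 40,24 60,16 80,18 100,10 120,6" />
  </svg>
</div>
```

Note the <code>role</code> and <code>aria-label</code> attributes: a sparkline conveys a trend visually, so it should also announce that trend to assistive technology.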


<h3 id="common-challenges-in-real-time-dashboards">Common Challenges In Real-Time Dashboards</h3>

<blockquote>Many live dashboards fail when treated as static reports instead of dynamic tools for quick decision-making.</blockquote>

<p>In my early projects, I made this mistake, resulting in cluttered layouts, distractions, and frustrated users.</p>

<p>Typical errors include the following:</p>

<ul>
<li><strong>Overcrowded Interfaces</strong>: Presenting too many metrics competes for users’ attention, making it hard to focus.</li>
<li><strong>Flat Visual Hierarchy</strong>: Without clear emphasis on critical data, users might focus on less important information.</li>
<li><strong>No Record of Changes</strong>: When numbers update instantly with no explanation, users can feel lost or confused.</li>
<li><strong>Excessive Refresh Rates</strong>: Not all data needs constant updates. Updating too frequently can create unnecessary motion and cognitive strain.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ux-strategies-real-time-dashboards/bad-vs-good-dashboard-ux.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="467"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/bad-vs-good-dashboard-ux.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/bad-vs-good-dashboard-ux.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/bad-vs-good-dashboard-ux.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/bad-vs-good-dashboard-ux.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/bad-vs-good-dashboard-ux.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/bad-vs-good-dashboard-ux.png"
			
			sizes="100vw"
			alt="Side-by-side dashboards labeled Bad UX and Good UX. The bad UX dashboard is cluttered with multiple pie charts and bar graphs, while the good UX dashboard uses a clear hierarchy with summary cards, line charts, and simplified visuals for easier data interpretation."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Comparison of bad vs. good UX in dashboards, showing how clear hierarchy and visualization improve data understanding. (Image source: <a href='https://www.devoteam.com/expert-view/make-data-make-sense-why-ux-in-dashboards-matters/'>devoteam</a>) (<a href='https://files.smashing.media/articles/ux-strategies-real-time-dashboards/bad-vs-good-dashboard-ux.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="managing-stress-and-cognitive-overload">Managing Stress And Cognitive Overload</h3>

<p>Under stress, users depend on intuition and focus only on immediately relevant information. If a dashboard updates too quickly or shows conflicting alerts, users may delay actions or make mistakes. It is important to:</p>

<ul>
<li><strong>Prioritize</strong> the most important data first to avoid overwhelming the user.</li>
<li>Offer <strong>snapshot or pause options</strong> so users can take time to process information.</li>
<li>Use <strong>clear indicators</strong> to show if an action is required or if everything is operating normally.</li>
</ul>
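<p>The snapshot/pause idea can be sketched as a small update buffer. This is a minimal illustration, not a prescribed implementation; the <code>UpdateBuffer</code> class and its <code>render</code> callback are hypothetical names for whatever applies an update to the UI.</p>

```javascript
// Minimal sketch of a pause/snapshot buffer for a live dashboard feed.
// While paused, incoming updates are queued instead of rendered, so the
// user can study a stable snapshot; resuming flushes the queue in order.
class UpdateBuffer {
  constructor(render) {
    this.render = render; // callback that applies one update to the UI
    this.paused = false;
    this.queue = [];
  }
  push(update) {
    if (this.paused) {
      this.queue.push(update); // hold updates while the user reads
    } else {
      this.render(update);
    }
  }
  pause() {
    this.paused = true;
  }
  resume() {
    this.paused = false;
    this.queue.forEach((u) => this.render(u)); // catch up in order
    this.queue = [];
  }
}
```

<p>A "freeze" toggle in the toolbar would call <code>pause()</code> and <code>resume()</code>; the user never loses data, only defers it.</p>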

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aIn%20real-time%20environments,%20the%20best%20dashboards%20balance%20speed%20with%20calmness%20and%20clarity.%20They%20are%20not%20just%20data%20displays%20but%20tools%20that%20promote%20live%20thinking%20and%20better%20decisions.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fux-strategies-real-time-dashboards%2f">
      
In real-time environments, the best dashboards balance speed with calmness and clarity. They are not just data displays but tools that promote live thinking and better decisions.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h3 id="enabling-personalization-for-effective-data-consumption">Enabling Personalization For Effective Data Consumption</h3>

<p>Many analytics tools let users build custom dashboards, and the design principles above should guide those layouts so they actually support decision-making. Personalization options such as custom metric selection, alert preferences, and update pacing help manage cognitive load and improve data interpretation.</p>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Cognitive Challenge</th>
            <th>UX Risk in Real-Time Dashboards</th>
      <th>Design Strategy to Mitigate</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Users can’t track rapid changes</td>
            <td>Confusion, missed updates, second-guessing</td>
      <td>Use delta indicators, change animations, and trend sparklines</td>
        </tr>
        <tr>
            <td>Limited working memory</td>
            <td>Overload from too many metrics at once</td>
      <td>Prioritize key KPIs, apply progressive disclosure</td>
        </tr>
        <tr>
            <td>Visual clutter under stress</td>
            <td>Tunnel vision or misprioritized focus</td>
      <td>Apply a clear visual hierarchy, minimize non-critical elements</td>
        </tr>
        <tr>
            <td>Unclear triggers or alerts</td>
            <td>Decision delays, incorrect responses</td>
      <td>Use thresholds, binary status indicators, and plain language</td>
        </tr>
    <tr>
            <td>Lack of context/history</td>
            <td>Misinterpretation of sudden shifts</td>
      <td>Provide micro-history, snapshot freeze, or hover reveal</td>
        </tr>
    </tbody>
</table>

<p><em>Common Cognitive Challenges in Real-Time Dashboards and UX Strategies to Overcome Them.</em></p>

<h2 id="designing-for-focus-using-layout-color-and-animation-to-drive-real-time-decisions">Designing For Focus: Using Layout, Color, And Animation To Drive Real-Time Decisions</h2>

<p>Layout, color, and animation do more than improve appearance. They help users interpret live data quickly and make decisions under time pressure. Since users respond to rapidly changing information, these elements must reduce cognitive load and highlight key insights immediately.</p>

<ul>
<li><strong>Creating a Visual Hierarchy to Guide Attention.</strong><br />
A clear hierarchy directs users’ eyes to key metrics. Arrange elements so the most important data stands out. For example, place critical figures like sales volume or system health in the upper left corner to match common scanning patterns. Limit visible elements to about five to prevent overload and ease processing. Group related data into cards to improve scannability and help users focus without distraction.</li>
<li><strong>Using Color Purposefully to Convey Meaning.</strong><br />
Color communicates meaning in data visualization. Red or orange indicates critical alerts or negative trends, signaling urgency. Blue and green represent positive or stable states, offering reassurance. Neutral tones like gray support background data and make key colors stand out. Ensure accessibility with strong contrast and pair colors with icons or labels. For example, bright red can highlight outages while muted gray marks historical logs, keeping attention on urgent issues.</li>
<li><strong>Supporting Comprehension with Subtle Animation.</strong><br />
Animation should clarify, not distract. Smooth transitions of 200 to 400 milliseconds communicate changes effectively. For instance, upward motion in a line chart reinforces growth. Hover effects and quick animations provide feedback and improve interaction. Thoughtful motion makes changes noticeable while maintaining focus.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ux-strategies-real-time-dashboards/car-rental-dashboard-analytics.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="545"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/car-rental-dashboard-analytics.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/car-rental-dashboard-analytics.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/car-rental-dashboard-analytics.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/car-rental-dashboard-analytics.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/car-rental-dashboard-analytics.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/car-rental-dashboard-analytics.png"
			
			sizes="100vw"
			alt="An example of a car rental analytics dashboard that uses hierarchy, color, and charts to highlight key metrics like customer growth, satisfaction trends, and acquisition costs, enabling faster decision-making."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Car rental dashboard that uses hierarchy, color, and charts to highlight key metrics and trends. (Image credit: <a href='https://www.aqedigital.com/services/ai-ml-solutions-car-rental/'>Car Rental Solution</a> by AQe Digital) (<a href='https://files.smashing.media/articles/ux-strategies-real-time-dashboards/car-rental-dashboard-analytics.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Layout, color, and animation create an experience that enables fast, accurate interpretation of live data. Real-time dashboards support continuous monitoring and decision-making by reducing mental effort and <strong>highlighting anomalies or trends</strong>. Personalization allows users to tailor dashboards to their roles, improving relevance and efficiency. For example, operations managers may focus on system health metrics while sales directors prioritize revenue KPIs. This adaptability makes dashboards dynamic, strategic tools.</p>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Element</th>
            <th>Placement & Visual Weight</th>
      <th>Purpose & Suggested Colors</th>
      <th>Animation Use Case & Effect</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td><strong>Primary KPIs</strong></td>
            <td>Center or top-left; bold, large font</td>
      <td>Highlight critical metrics; typically stable states</td>
      <td>Value updates: smooth increase (200–400 ms)</td>
        </tr>
        <tr>
            <td><strong>Controls</strong></td>
            <td>Top or left panel; light, minimal visual weight</td>
      <td>Provide navigation/filtering; neutral color schemes</td>
      <td>User actions: subtle feedback (100–150 ms)</td>
        </tr>
        <tr>
            <td><strong>Charts</strong></td>
            <td>Middle or right; medium emphasis</td>
      <td>Show trends and comparisons; use blue/green for positives, grey for neutral</td>
      <td>Chart trends: trail or fade (300–600 ms)</td>
        </tr>
    <tr>
            <td><strong>Alerts</strong></td>
            <td>Edge of dashboard or floating; high contrast (bold)</td>
      <td>Signal critical issues; red/orange for alerts, yellow/amber for warnings</td>
      <td>Quick animations for appearance; highlight changes</td>
        </tr>
    </tbody>
</table>

<p><em>Design Elements, Placement, Color, and Motion Strategies for Effective Real-Time Dashboards.</em></p>
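<p>The timing guidance in the table can be centralized in one lookup so every component animates consistently. This is a hypothetical helper; the element kinds and the exact millisecond values within each range are illustrative choices, not fixed rules.</p>

```javascript
// Hypothetical mapping of the table's timing guidance into options
// usable with the Web Animations API or CSS transitions.
const ANIMATION_SPECS = {
  kpiUpdate:  { duration: 300, easing: 'ease-out' },    // 200–400 ms value change
  control:    { duration: 120, easing: 'ease' },        // 100–150 ms feedback
  chartTrend: { duration: 450, easing: 'ease-in-out' }, // 300–600 ms trail/fade
  alert:      { duration: 150, easing: 'ease-in' },     // quick appearance
};

function animationSpecFor(kind) {
  // Fall back to a short, neutral transition for unknown element kinds.
  return ANIMATION_SPECS[kind] ?? { duration: 200, easing: 'ease' };
}
```

<p>In the browser this plugs straight into <code>element.animate(keyframes, animationSpecFor('kpiUpdate'))</code>, keeping motion durations in one place instead of scattered across components.</p>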

<div class="partners__lead-place"></div>

<h2 id="clarity-in-motion-designing-dashboards-that-make-change-understandable">Clarity In Motion: Designing Dashboards That Make Change Understandable</h2>

<p>If users cannot interpret changes quickly, the dashboard fails regardless of its visual design. Over time, I have developed methods that reduce confusion and make change feel intuitive rather than overwhelming.</p>

<p>One of the most effective tools I use is the <a href="https://en.wikipedia.org/wiki/Sparkline">sparkline</a>, a compact line chart that shows a trend over time and is typically placed next to a key performance indicator. Unlike full charts, sparklines omit axes and labels. Their simplicity makes them powerful, since they instantly show whether a metric is trending up, down, or steady. For example, placing a sparkline next to monthly revenue immediately reveals if performance is improving or declining, even before the viewer interprets the number.</p>

<p>When using sparklines effectively, follow these principles:</p>

<ul>
<li>Pair sparklines with metrics such as revenue, churn rate, or user activity so users can see both the value and its trajectory at a glance.</li>
<li>Simplify by removing clutter like axis lines or legends unless they add real value.</li>
<li>Highlight the latest data point with a dot or accent color since current performance often matters more than historical context.</li>
<li>Limit the time span. Too many data points compress the sparkline and hurt readability. A focused window, such as the last 7 or 30 days, keeps the trend clear.</li>
<li>Use sparklines in comparative tables. When placed in rows (for example, across product lines or regions), they reveal anomalies or emerging patterns that static numbers may hide.</li>
</ul>
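<p>Because a sparkline omits axes and labels, rendering one reduces to scaling values into a small box. The sketch below, with an assumed default box size, produces the <code>points</code> string for an SVG <code>&lt;polyline&gt;</code>:</p>

```javascript
// Sketch: convert a series of values into an SVG polyline "points"
// string for a compact sparkline. The newest point lands at the right
// edge; no axes or labels are drawn.
function sparklinePoints(values, width = 120, height = 24) {
  if (values.length < 2) return ''; // a single point has no trend
  const min = Math.min(...values);
  const max = Math.max(...values);
  const span = max - min || 1; // avoid division by zero on flat series
  const stepX = width / (values.length - 1);
  return values
    .map((v, i) => {
      const x = i * stepX;
      const y = height - ((v - min) / span) * height; // SVG y grows downward
      return `${x},${y}`;
    })
    .join(' ');
}
```

<p>Embedding it is one line, e.g. <code>&lt;polyline points="${sparklinePoints(last30Days)}" fill="none" /&gt;</code>, with a small <code>&lt;circle&gt;</code> added at the final coordinate to accent the latest data point.</p>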

<figure><a href="https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-performance-gif.gif"><img src="https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-performance-gif.gif" width="800" height="450" alt="A GIF of a dynamic dashboard showing profit and loss waterfall, performance variance vs budget and last year, profit trend lines, and expense category breakdown for hospitality operations." /></a><figcaption>Interactive P&L Performance Dashboard with Forecast and Variance Tracking. (<a href="https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-performance-gif.gif">Large preview</a>)</figcaption></figure>

<p>I combine sparklines with directional indicators like arrows and percentage deltas to support quick interpretation.</p>

<p>For example, pairing “▲ +3.2%” with a rising sparkline shows both the direction and scale of change. I do not rely only on color to convey meaning.</p>

<p>Since <a href="https://www.colourblindawareness.org/colour-blindness/">1 in 12 men</a> is color-blind, using red and green alone can exclude some users. To ensure accessibility, I add shapes and icons alongside color cues.</p>
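<p>A delta indicator of this kind is easy to formalize: pair a direction symbol with the signed percentage so the meaning survives without color. A minimal sketch (assuming the previous value is non-zero):</p>

```javascript
// Sketch of a delta indicator that pairs a direction symbol with the
// percentage change, so meaning never depends on color alone.
// Assumes `previous` is non-zero.
function formatDelta(current, previous) {
  const pct = ((current - previous) / Math.abs(previous)) * 100;
  const arrow = pct > 0 ? '▲' : pct < 0 ? '▼' : '■'; // shape, not just color
  const sign = pct > 0 ? '+' : ''; // negative values carry their own sign
  return `${arrow} ${sign}${pct.toFixed(1)}%`;
}
```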

<p>Micro-animations provide subtle but effective signals. This counters <a href="https://www.nngroup.com/articles/change-blindness">change blindness</a> &mdash; our tendency to miss non-salient changes.</p>

<ul>
<li>When numbers update, I use fade-ins or count-up transitions to indicate change without distraction.</li>
<li>If a list reorders, such as when top-performing teams shift positions, a smooth slide animation under 300 milliseconds helps users maintain spatial memory. These animations reduce cognitive friction and prevent disorientation.</li>
</ul>
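<p>A count-up transition boils down to precomputing eased intermediate values and feeding them to <code>requestAnimationFrame</code>. The helper below is an illustrative sketch; the ease-out curve and 60&nbsp;fps default are assumptions, not requirements:</p>

```javascript
// Sketch: precompute eased intermediate values for a count-up number
// transition. At 60 fps a 300 ms update yields 18 frames; rendering one
// value per animation frame produces a smooth change without distraction.
function countUpFrames(from, to, durationMs = 300, fps = 60) {
  const frames = Math.max(2, Math.round((durationMs / 1000) * fps));
  const values = [];
  for (let i = 0; i < frames; i++) {
    const t = i / (frames - 1);           // progress 0..1
    const eased = 1 - Math.pow(1 - t, 3); // ease-out cubic: fast start, soft landing
    values.push(from + (to - from) * eased);
  }
  return values;
}
```

<p>The ease-out curve matters here: the number moves quickly at first and settles gently, which reads as a change rather than a flicker.</p>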

<p>Layout is critical for clarifying change:</p>

<ul>
<li>I use <strong>modular cards</strong> with consistent spacing, alignment, and hierarchy to highlight key metrics.</li>
<li>Cards are arranged in a <strong>sortable grid</strong>, allowing filtering by severity, recency, or relevance.</li>
<li><strong>Collapsible sections</strong> manage dense information while keeping important data visible for quick scanning and deeper exploration.</li>
</ul>

<p>For instance, in a logistics dashboard, a card labeled “On-Time Deliveries” may display a weekly sparkline. If performance dips, the line flattens or turns slightly red, a downward arrow appears with a −1.8% delta, and the updated number fades in. This gives instant clarity without requiring users to open a detailed chart.</p>

<p>All these design choices support fast, informed decision-making. In high-velocity environments like product analytics, logistics, or financial operations, dashboards must do more than present data. They must <strong>reduce ambiguity</strong> and help teams quickly detect change, understand its impact, and take action.</p>

<h2 id="making-reliability-visible-designing-for-trust-in-real-time-data-interfaces">Making Reliability Visible: Designing for Trust In Real-Time Data Interfaces</h2>

<p>In real-time data environments, reliability is not just a technical feature. It is the foundation of user trust. Dashboards are used in high-stakes, fast-moving contexts where decisions depend on timely, accurate data. Yet these systems often face less-than-ideal conditions such as unreliable networks, API delays, and incomplete datasets. Designing for these realities is not just damage control. It is essential for making data experiences usable and trustworthy.</p>

<p>When data lags or fails to load, it can mislead users in serious ways:</p>

<ul>
<li>A dip in a trendline may look like a market decline when it is only a delay in the stream.</li>
<li>Missing categories in a bar chart, if not clearly signaled, can lead to flawed decisions.</li>
</ul>

<p>To mitigate this:</p>

<ul>
<li>Every data point should be paired with its condition.</li>
<li>Interfaces must show not only what the data says but also how current or complete it is.</li>
</ul>

<p>One effective strategy is replacing traditional spinners with <a href="https://www.nngroup.com/articles/skeleton-screens/">skeleton UIs</a>. These are greyed-out, animated placeholders that suggest the structure of incoming data. They set expectations, reduce anxiety, and show that the system is actively working. For example, in a financial dashboard, users might see the outline of a candlestick chart filling in as new prices arrive. This signals that data is being refreshed, not stalled.</p>

<h3 id="handling-data-unavailability">Handling Data Unavailability</h3>

<p>When data is unavailable, I show <strong>cached snapshots</strong> from the most recent successful load, labeled with timestamps such as “Data as of 10:42 AM.” This keeps users aware of what they are viewing.</p>

<p>In operational dashboards such as logistics or monitoring systems, this approach lets users act confidently even when real-time updates are temporarily out of sync.</p>
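<p>The snapshot label itself is a small pure function. The sketch below assumes an illustrative five-minute staleness threshold; <code>snapshotLabel</code> is a hypothetical name:</p>

```javascript
// Sketch of a cached-snapshot label: given the timestamp of the last
// successful load, produce a "Data as of HH:MM AM/PM" string plus a
// staleness flag the UI can use to dim the view or show a warning.
function snapshotLabel(lastLoaded, now = new Date(), staleAfterMs = 5 * 60 * 1000) {
  let h = lastLoaded.getHours();
  const m = String(lastLoaded.getMinutes()).padStart(2, '0');
  const period = h >= 12 ? 'PM' : 'AM';
  h = h % 12 || 12; // 0 o'clock renders as 12
  return {
    label: `Data as of ${h}:${m} ${period}`,
    stale: now - lastLoaded > staleAfterMs,
  };
}
```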

<h3 id="managing-connectivity-failures">Managing Connectivity Failures</h3>

<p>To handle connectivity failures, I use <strong>auto-retry mechanisms with exponential backoff</strong>, giving the system several chances to recover quietly before notifying the user.</p>

<p>If retries fail, I maintain transparency with clear banners such as “Offline… Reconnecting…” In one product, this approach prevented users from reloading entire dashboards unnecessarily, especially in areas with unreliable Wi-Fi.</p>
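<p>The retry schedule behind this pattern is simple: the delay doubles on each failed attempt up to a cap. A sketch, with illustrative base and cap values:</p>

```javascript
// Exponential backoff: delay doubles per failed attempt, capped so the
// system keeps retrying quietly without hammering a struggling server.
function backoffDelay(attempt, baseMs = 1000, capMs = 30000) {
  return Math.min(capMs, baseMs * 2 ** attempt);
}

// Hypothetical retry loop around any async loader (e.g. a fetch wrapper).
// Only after the final attempt fails does the error surface to the UI,
// which is when the "Offline... Reconnecting" banner would appear.
async function retryWithBackoff(load, maxAttempts = 5) {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await load();
    } catch (err) {
      if (attempt === maxAttempts - 1) throw err; // give up: notify the user
      await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)));
    }
  }
}
```

<p>Production implementations often add random jitter to the delay so many clients do not retry in lockstep; it is omitted here for clarity.</p>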

<h3 id="ensuring-reliability-with-accessibility">Ensuring Reliability with Accessibility</h3>

<p>Reliability strongly connects with accessibility:</p>

<ul>
<li>Real-time interfaces must announce updates without disrupting user focus, beyond just screen reader compatibility.</li>
<li><a href="https://developer.mozilla.org/en-US/docs/Web/Accessibility/ARIA/Guides/Live_regions">ARIA live regions</a> quietly narrate significant changes in the background, giving screen reader users timely updates without confusion.</li>
<li>All controls remain keyboard-accessible.</li>
<li>Animations follow <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/@media/prefers-reduced-motion">motion-reduction preferences</a> to support users with vestibular sensitivities.</li>
</ul>
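<p>With rapid updates, a live region can overwhelm screen reader users, so announcements are often coalesced into one summary. The sketch below is one possible approach; <code>LiveAnnouncer</code> and its <code>flush</code> callback (e.g. setting <code>textContent</code> on a <code>&lt;div aria-live="polite"&gt;</code> element) are hypothetical names:</p>

```javascript
// Sketch of a polite live-region announcer that coalesces rapid updates,
// so assistive technology reads one combined message instead of a flood.
class LiveAnnouncer {
  constructor(flush, windowMs = 2000) {
    this.flush = flush;     // e.g. (msg) => liveRegionEl.textContent = msg
    this.windowMs = windowMs;
    this.pending = [];
    this.timer = null;
  }
  announce(message) {
    this.pending.push(message);
    if (!this.timer) {
      this.timer = setTimeout(() => this.flushNow(), this.windowMs);
    }
  }
  flushNow() {
    if (this.timer) {
      clearTimeout(this.timer);
      this.timer = null;
    }
    if (this.pending.length) {
      this.flush(this.pending.join('. ')); // one coalesced announcement
      this.pending = [];
    }
  }
}
```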

<h3 id="data-freshness-indicator">Data Freshness Indicator</h3>

<p>A compact but powerful pattern I often implement is the Data Freshness Indicator, a small widget that:</p>

<ul>
<li>Shows sync status,</li>
<li>Displays the last updated time,</li>
<li>Includes a manual refresh button.</li>
</ul>

<p>This improves <strong>transparency</strong> and reinforces <strong>user control</strong>. Since different users interpret these cues differently, advanced systems allow personalization. For example:</p>

<ul>
<li>Analysts may prefer detailed logs of update attempts.</li>
<li>Business users might see a simple status such as “Live”, “Stale”, or “Paused”.</li>
</ul>
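<p>The business-user view of such a widget reduces to a tiny classifier. The one-minute staleness threshold below is illustrative, not prescriptive:</p>

```javascript
// Sketch of the freshness classifier behind a Data Freshness Indicator:
// business users see a simple status, while analysts could additionally
// log the raw timestamps and retry attempts.
function freshnessStatus(lastSyncMs, nowMs, { paused = false, staleMs = 60000 } = {}) {
  if (paused) return 'Paused';
  return nowMs - lastSyncMs > staleMs ? 'Stale' : 'Live';
}
```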

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aReliability%20in%20data%20visualization%20is%20not%20about%20promising%20perfection.%20It%20is%20about%20creating%20a%20resilient,%20informative%20experience%20that%20supports%20human%20judgment%20by%20revealing%20the%20true%20state%20of%20the%20system.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fux-strategies-real-time-dashboards%2f">
      
Reliability in data visualization is not about promising perfection. It is about creating a resilient, informative experience that supports human judgment by revealing the true state of the system.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>When users understand what the dashboard knows, what it does not, and what actions it is taking, they are more likely to trust the data and make smarter decisions.</p>

<div class="partners__lead-place"></div>

<h2 id="real-world-case-study">Real-World Case Study</h2>

<p>In my work across logistics, hospitality, and healthcare, the challenge has always been to distill complexity into clarity. A well-designed dashboard is more than functional; it serves as a trusted companion in decision-making, embedding clarity, speed, and confidence from the start.</p>

<h3 id="1-fleet-management-dashboard">1. Fleet Management Dashboard</h3>

<p>A client in the car rental industry struggled with fragmented operational data. Critical details like vehicle locations, fuel usage, maintenance schedules, and downtime alerts were scattered across static reports, spreadsheets, and disconnected systems. Fleet operators had to manually cross-reference data sources, even for basic dispatch tasks, which caused missed warnings, inefficient routing, and delays in response.</p>

<p>We solved these issues by redesigning the dashboard strategically, focusing on both layout improvements and how users interpret and act on information.</p>

<p><strong>Strategic Design Improvements and Outcomes:</strong></p>

<ul>
<li><strong>Instant visibility of KPIs</strong><br />
High-contrast cards at the top of the dashboard made key performance indicators instantly visible.<br />
<em>Example: Fuel consumption anomalies that previously went unnoticed for days were flagged within hours, enabling quick corrective action.</em></li>
<li><strong>Clear trend and pattern visualization</strong><br />
Booking forecasts, utilization graphs, and city-by-city comparisons highlighted performance trends.<br />
<em>Example: A weekday-weekend booking chart helped a regional manager spot underperformance in one city and plan targeted vehicle redistribution.</em></li>
<li><strong>Unified operational snapshot</strong><br />
Cost, downtime, and service schedules were grouped into one view.<br />
<em>Result: The operations team could assess fleet health in under five minutes each morning instead of using multiple tools.</em></li>
<li><strong>Predictive context for planning</strong><br />
Visual cues showed peak usage periods and historical demand curves.<br />
<em>Result: Dispatchers prepared for forecasted spikes, reducing customer wait times and improving resource availability.</em></li>
<li><strong>Live map with real-time status</strong><br />
A color-coded map displayed vehicle status: green for active, red for urgent attention, gray for idle.<br />
<em>Result: Supervisors quickly identified inactive or delayed vehicles and rerouted resources as needed.</em></li>
<li><strong>Role-based personalization</strong><br />
Personalization options were built in, allowing each role to customize dashboard views.<br />
<em>Example: Fleet managers prioritized financial KPIs, while technicians filtered for maintenance alerts and overdue service reports.</em></li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ux-strategies-real-time-dashboards/auto-leasing-analytics.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="555"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/auto-leasing-analytics.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/auto-leasing-analytics.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/auto-leasing-analytics.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/auto-leasing-analytics.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/auto-leasing-analytics.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/auto-leasing-analytics.png"
			
			sizes="100vw"
			alt="A data analytics dashboard for the auto leasing industry displaying revenue per booking, cost recovery, operational efficiency, and top revenue-generating locations across the UAE regions."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Auto Leasing Revenue and Efficiency Dashboard. (Image source: <a href='https://www.aqedigital.com/automobile-ai-solution'>Fleet management Solution by AQe Digital</a>) (<a href='https://files.smashing.media/articles/ux-strategies-real-time-dashboards/auto-leasing-analytics.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p><strong>Strategic Impact:</strong> The dashboard redesign was not only about improving visuals. It changed how teams interacted with data. Operators no longer needed to search for insights, as the system presented them in line with tasks and decision-making. The dashboard became a shared reference for teams with different goals, enabling real-time problem solving, fewer manual checks, and stronger alignment across roles. Every element was designed to build both understanding and confidence in action.</p>

<h3 id="2-hospitality-revenue-dashboard">2. Hospitality Revenue Dashboard</h3>

<p>One of our clients, a hospitality group with 11 hotels in the UAE, faced a growing strategic gap. They had data from multiple departments, including bookings, events, food and beverage, and profit and loss, but it was spread across disconnected dashboards.</p>

<p><strong>Strategic Design Improvements and Outcomes:</strong></p>

<ul>
<li><strong>All revenue streams (rooms, restaurants, bars, and profit and loss) were consolidated into a single filterable dashboard.</strong><br />
Example: A revenue manager could filter by property to see if a drop in restaurant revenue was tied to lower occupancy or was an isolated issue. The structure supported daily operations, weekly reviews, and quarterly planning.</li>
<li><strong>Disconnected charts and metrics were replaced with a unified visual narrative showing how revenue streams interacted.</strong><br />
Example: The dashboard revealed how event bookings influenced bar sales or staffing. This shifted teams from passive data consumption to active interpretation.</li>
<li><strong>AI modules for demand forecasting, spend prediction, and pricing recommendations were embedded in the dashboard.</strong><br />
Result: Managers could test rate changes with interactive sliders and instantly view effects on occupancy, revenue per available room, and food and beverage income. This enabled proactive scenario planning.</li>
<li><strong>Compact, color-coded sparklines were placed next to each key metric to show short- and long-term trends.</strong><br />
Result: These visuals made it easy to spot seasonal shifts or channel-specific patterns without switching views or opening separate reports.</li>
<li><strong>Predictive overlays such as forecast bands and seasonality markers were added to performance graphs.</strong><br />
Example: If occupancy rose but lagged behind seasonal forecasts, the dashboard surfaced the gap, prompting early action such as promotions or issue checks.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-revenue-dashboard.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="411"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-revenue-dashboard.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-revenue-dashboard.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-revenue-dashboard.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-revenue-dashboard.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-revenue-dashboard.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-revenue-dashboard.png"
			
			sizes="100vw"
			alt="A digital dashboard showing hotel metrics including occupancy, ADR, RevPAR, F&amp;B revenue, and payroll, filtered by date range and department, with performance comparisons to previous periods."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      P&L Variance and Revenue Intelligence Dashboard for Hotel Performance Review. (Image source: <a href='https://www.aqedigital.com/hospitality-ai-solutions'>Hospitality AI Solution by AQe Digital</a>) (<a href='https://files.smashing.media/articles/ux-strategies-real-time-dashboards/pl-revenue-dashboard.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p><strong>Strategic Impact:</strong> By aligning the dashboard structure with real pricing and revenue strategies, the client shifted from static reporting to forward-looking decision-making. This was not a cosmetic interface update. It was a complete rethinking of how data could support business goals. The result enabled every team, from finance to operations, to interpret data based on their specific roles and responsibilities.</p>

<h3 id="3-healthcare-interoperability-dashboard">3. Healthcare Interoperability Dashboard</h3>

<p>In healthcare, timely and accurate access to patient information is essential. A multi-specialist hospital client struggled with fragmented data. Doctors had to consult separate platforms such as electronic health records, lab results, and pharmacy systems to understand a patient’s condition. This fragmented process slowed decision-making and increased risks to patient safety.</p>

<p><strong>Strategic Design Improvements and Outcomes:</strong></p>

<ul>
<li><strong>Patient medical history was integrated to unify lab reports, medications, and allergy information in one view.</strong><br />
Example: A cardiologist could review recent cardiac markers alongside active medications and allergy alerts in the same place, enabling faster diagnosis and treatment.</li>
<li><strong>Lab report tracking was upgraded to show test type, date, status, and a clear summary with labels such as Pending, Completed, and Awaiting Review.</strong><br />
Result: Trends were displayed with sparklines and color-coded indicators, helping clinicians quickly spot abnormalities or improvements.</li>
<li><strong>A medication management module was added for prescription entry, viewing, and exporting. It included dosage, frequency, and prescribing physician details.</strong><br />
Example: Specialists could customize it to highlight drugs relevant to their practice, reducing overload and focusing on critical treatments.</li>
<li><strong>Rapid filtering options were introduced to search by patient name, medical record number, date of birth, gender, last visit, insurance company, or policy number.</strong><br />
Example: Billing staff could locate patients by insurance details, while clinicians filtered records by visits or demographics.</li>
<li><strong>Visual transparency was provided through interactive tooltips explaining alert rationales and flagged data points.</strong><br />
Result: Clinicians gained immediate context, such as the reason a lab value was marked as critical, supporting informed and timely decisions.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/ux-strategies-real-time-dashboards/healthcare-dashboard.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="523"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/healthcare-dashboard.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/healthcare-dashboard.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/healthcare-dashboard.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/healthcare-dashboard.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/healthcare-dashboard.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/ux-strategies-real-time-dashboards/healthcare-dashboard.jpg"
			
			sizes="100vw"
			alt="A medical dashboard interface displaying total patients, active appointments, lab results, system alerts, patient growth trend, appointment status, and lab test processing with insights and alerts."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Patient and Appointment Monitoring Dashboard for Healthcare Providers (Image source: <a href='https://www.aqedigital.com/healthcare-ai-driven-solutions/'>Healthcare Interoperability AI Solution by AQe Digital</a>) (<a href='https://files.smashing.media/articles/ux-strategies-real-time-dashboards/healthcare-dashboard.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p><strong>Strategic Impact:</strong> Our design encourages active decision-making instead of passive data review. Interactive tooltips ensure visual transparency by explaining the rationale behind alerts and flagged data points. These information boxes give clinicians immediate context, such as why a lab value is marked critical, helping them understand implications and next steps without delay.</p>

<h3 id="key-ux-insights-from-the-above-3-examples">Key UX Insights from the Above 3 Examples</h3>

<ul>
<li><strong>Design should drive conclusions, not just display data.</strong><br />
Contextualized data enabled faster and more confident decisions. For example, a logistics dashboard flagged high-risk delays so dispatchers could act immediately.</li>
<li><strong>Complexity should be structured, not eliminated.</strong><br />
Tools used timelines, layering, and progressive disclosure to handle dense information. A financial tool grouped transactions by time blocks, easing cognitive load without losing detail.</li>
<li><strong>Trust requires clear system logic.</strong><br />
Users trusted predictive alerts only after understanding their triggers. A healthcare interface added a &ldquo;Why this alert?&rdquo; option that explained the reasoning.</li>
<li><strong>The aim is clarity and action, not visual polish.</strong><br />
Redesigns improved speed, confidence, and decision-making. In real-time contexts, delays caused by confusion are more harmful than cosmetic flaws.</li>
</ul>

<h2 id="final-takeaways">Final Takeaways</h2>

<p>Real-time dashboards are not about overwhelming users with data. They are about helping them act quickly and confidently. The most effective dashboards reduce noise, highlight the most important metrics, and support decision-making in complex environments. Success lies in <strong>balancing visual clarity with cognitive ease</strong> while accounting for human limits like memory, stress, and attention alongside technical needs.</p>

<p><strong>Do:</strong></p>

<ul>
<li>Rank key metrics in a deliberate order so priorities are obvious. For instance, a support manager may track open tickets before response times.</li>
<li>Use subtle micro-animations and small visual cues to indicate changes, helping users spot trends without distraction.</li>
<li>Display data freshness and sync status to build trust.</li>
<li>Plan for edge cases like incomplete or offline data to keep the experience consistent.</li>
<li>Ensure accessibility with high contrast, ARIA labels, and keyboard navigation.</li>
</ul>

<p><strong>Don’t:</strong></p>

<ul>
<li>Overcrowd the interface with too many metrics.</li>
<li>Rely only on color to communicate critical information.</li>
<li>Update all data at once or too often, which can cause overload.</li>
<li>Hide failures or delays; transparency helps users adapt.</li>
</ul>
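<p>The &ldquo;display data freshness and sync status&rdquo; point above is cheap to implement. Here is a minimal JavaScript sketch; the <code>freshness</code> helper name and the threshold values are illustrative assumptions, not a prescribed API:</p>

```javascript
// Sketch: derive a freshness label and a staleness flag from a last-sync
// timestamp. Thresholds (10s "Live", 60s stale cutoff) are illustrative.
function freshness(lastSyncMs, nowMs = Date.now(), staleAfterMs = 60_000) {
  const age = nowMs - lastSyncMs;
  const stale = age > staleAfterMs;
  const label =
    age < 10_000 ? 'Live' :
    age < 60_000 ? `Updated ${Math.round(age / 1000)}s ago` :
    `Updated ${Math.round(age / 60_000)}m ago`;
  return { label, stale };
}

freshness(0, 5_000);   // { label: 'Live', stale: false }
freshness(0, 45_000);  // { label: 'Updated 45s ago', stale: false }
freshness(0, 300_000); // { label: 'Updated 5m ago', stale: true }
```

<p>Surfacing the <code>stale</code> flag visually (for example, by dimming a panel) tells users when not to trust a number, which is exactly the transparency the &ldquo;don&rsquo;t hide failures&rdquo; rule calls for.</p>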

<p>Over time, I’ve come to <strong>see real-time dashboards as decision assistants rather than control panels</strong>. When users say, <em>“This helps me stay in control,”</em> it reflects a design built on empathy that respects cognitive limits and enhances decision-making. That is the true measure of success.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Milan Balać</author><title>Designing For TV: Principles, Patterns And Practical Guidance (Part 2)</title><link>https://www.smashingmagazine.com/2025/09/designing-tv-principles-patterns-practical-guidance/</link><pubDate>Thu, 04 Sep 2025 10:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/09/designing-tv-principles-patterns-practical-guidance/</guid><description>After covering in detail the underlying interaction paradigms of TV experiences in &lt;a href="https://www.smashingmagazine.com/2025/08/designing-tv-evergreen-pattern-shapes-tv-experiences/">Part 1&lt;/a>, it’s time to get practical. In the second part of the series, you’ll explore the building blocks of the “10-foot experience” and how to best utilise them in your designs.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/09/designing-tv-principles-patterns-practical-guidance/" />
              <title>Designing For TV: Principles, Patterns And Practical Guidance (Part 2)</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Designing For TV: Principles, Patterns And Practical Guidance (Part 2)</h1>
                  
                    
                    <address>Milan Balać</address>
                  
                  <time datetime="2025-09-04T10:00:00&#43;00:00" class="op-published">2025-09-04T10:00:00+00:00</time>
                  <time datetime="2025-09-04T10:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>Having covered the developmental history and legacy of TV in <a href="https://www.smashingmagazine.com/2025/08/designing-tv-evergreen-pattern-shapes-tv-experiences/"><strong>Part 1</strong></a>, let’s now delve into more practical matters. As a quick reminder, the “10-foot experience” and its reliance on the six core buttons of any remote form the basis of our efforts, and as you’ll see, most principles outlined simply reinforce the unshakeable foundations.</p>

<p>In this article, we’ll sift through the systems, account for layout constraints, and distill the guidelines to understand the essence of TV interfaces. Once we’ve collected all the main ingredients, we’ll see what we can do to elevate these inherently simplistic experiences.</p>

<p>Let’s dig in, and let’s get practical!</p>

<h2 id="the-systems">The Systems</h2>

<p>When it comes to hardware, TVs and set-top boxes are usually a few generations behind phones and computers. Their components are made to run lightweight systems optimised for viewing, energy efficiency, and longevity. Yet even within these constraints, different platforms offer varying performance profiles, conventions, and price points.</p>

<p>Some notable platforms/systems of today are:</p>

<ul>
<li><strong>Roku</strong>, the most affordable and popular, but severely bottlenecked by weak hardware.</li>
<li><strong>WebOS</strong>, most common on LG devices, relies on web standards and runs well on modest hardware.</li>
<li><strong>Android TV</strong>, considered very flexible and customisable, but relatively demanding hardware-wise.</li>
<li><strong>Amazon Fire</strong>, based on Android but with a separate ecosystem. It offers smooth performance but is slightly more limited than stock Android.</li>
<li><strong>tvOS</strong>, by Apple, offering a high-end experience at a matching high-end price, with very little room for customisation.</li>
</ul>

<p>Despite their differences, all of the platforms above share something in common, and by now you’ve probably guessed that it has to do with <em>the remote</em>. Let’s take a closer look:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/1-remotes.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/1-remotes.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/1-remotes.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/1-remotes.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/1-remotes.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/1-remotes.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/1-remotes.jpg"
			
			sizes="100vw"
			alt="Five TV remotes from left to right: Roku, LG WebOS, Philips Android TV, Amazon Fire TV, and Apple tvOS. Each features a directional pad, select button, and back button, thus showcasing the shared navigation layout across different platforms."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Left to right: Roku, WebOS (LG), Android TV (Philips), Amazon Fire, and tvOS remotes. While they control different systems, their control schemes are exactly the same. (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/1-remotes.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>If these remotes were stripped down to just the D-pad, <kbd>OK</kbd>, and <kbd>BACK</kbd> buttons, they would still be capable of successfully navigating any TV interface. It is this shared control scheme that allows for the <a href="https://www.techtarget.com/whatis/definition/agnostic">agnostic approach</a> of this article with broadly applicable guidelines, regardless of the manufacturer.</p>

<p>Having already discussed the TV remote in detail in <a href="https://www.smashingmagazine.com/2025/08/designing-tv-evergreen-pattern-shapes-tv-experiences/"><strong>Part 1</strong></a>, let’s turn to the second part of the equation: the TV screen, its layout, and the fundamental building blocks of TV-bound experiences.</p>


<h2 id="tv-design-fundamentals">TV Design Fundamentals</h2>

<h3 id="the-screen">The Screen</h3>

<p>With almost one hundred years of legacy, TV has accumulated quite some baggage. One recurring topic in modern articles on TV design is the concept of “<a href="https://en.wikipedia.org/wiki/Overscan">overscan</a>” &mdash; a legacy concept from the era of cathode ray tube (<a href="https://en.wikipedia.org/wiki/Cathode-ray_tube">CRT</a>) screens. Back then, the lack of standards in production meant that television sets would often crop the projected image at its edges. To address this inconsistency, broadcasters created guidelines to keep important content from being cut off.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/2-safe-zones.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/2-safe-zones.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/2-safe-zones.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/2-safe-zones.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/2-safe-zones.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/2-safe-zones.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/2-safe-zones.jpg"
			
			sizes="100vw"
			alt="Diagram showing the TV screen safe area. The inner frame is labeled ‘Title Safe’ and the outer ‘Action Safe’, illustrating traditional TV overscan zones used to keep key content visible on all TV displays."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Overscan guides on a 16:9 image. Broadcasters differentiate between title safe and action safe areas. (Photo by <a href='https://unsplash.com/photos/man-in-white-t-shirt-and-brown-pants-riding-skateboard-on-brown-sand-during-daytime-r1SwcagHVG0'>Tom Morbey</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/2-safe-zones.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>While overscan gets mentioned occasionally, we should call it what it really is &mdash; a thing of the past. Modern panels display content with far greater precision, so thinking in terms of title and action safe areas is rather archaic. Today, we can simply consider the <strong>margins</strong> and get the same results.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/3-tv-margins.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/3-tv-margins.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/3-tv-margins.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/3-tv-margins.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/3-tv-margins.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/3-tv-margins.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/3-tv-margins.jpg"
			
			sizes="100vw"
			alt="Diagram showing the TV red margin frames on all sides of an image, illustrating how overscan areas can be simplified into consistent screen margins for modern TV layouts."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Simplifying overscan, we can turn it into margins. (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/3-tv-margins.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p><a href="https://developer.android.com/design/ui/tv/guides/styles/layouts">Google calls for a 5% margin layout</a>, while <a href="https://developer.apple.com/design/human-interface-guidelines/layout">Apple&rsquo;s layout guidelines advise</a> a 60-point margin at the top and bottom and 80 points on the sides. There is no single standard, but the takeaway is simple: leave some breathing room between the screen edge and the content, as you would in any thoughtful layout.</p>
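<p>As a quick sanity check, the two recommendations land in roughly the same place on a 1080p canvas. Here&rsquo;s a small JavaScript sketch; the <code>safeArea</code> helper and the assumption that 1 point equals 1 pixel at 1080p are mine, not part of either guideline:</p>

```javascript
// Sketch: compute TV safe-area margins for a 1920x1080 design canvas.
// Assumes 1 point == 1 pixel at 1080p; `safeArea` is an illustrative helper.
function safeArea(width, height, { sidePct = null, side = 0, topBottom = 0 } = {}) {
  const x = sidePct !== null ? Math.round(width * sidePct) : side;
  const y = sidePct !== null ? Math.round(height * sidePct) : topBottom;
  return { x, y, contentWidth: width - 2 * x, contentHeight: height - 2 * y };
}

// Google: 5% margins on every side.
const google = safeArea(1920, 1080, { sidePct: 0.05 });
// -> { x: 96, y: 54, contentWidth: 1728, contentHeight: 972 }

// Apple: 60pt top/bottom, 80pt sides.
const apple = safeArea(1920, 1080, { side: 80, topBottom: 60 });
// -> { x: 80, y: 60, contentWidth: 1760, contentHeight: 960 }
```

<p>The two results differ by only a couple of percent, which is why &ldquo;leave breathing room&rdquo; matters more than any exact number.</p>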














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/4-tvos-safe-zones.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="469"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/4-tvos-safe-zones.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/4-tvos-safe-zones.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/4-tvos-safe-zones.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/4-tvos-safe-zones.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/4-tvos-safe-zones.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/4-tvos-safe-zones.png"
			
			sizes="100vw"
			alt="Diagram showing the TV screen safe area. A dark central rectangle represents the usable safe zone, inset by 60 points from the top and bottom and 80 points from the left and right edges. These margins ensure content isn&#39;t clipped or hard to see on TVs with overscan."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Be prepared for a wide range of TV sizes and adhere to the screen’s safe area. Inset primary content 60 points from the top/bottom of the screen, and 80 points from the sides. (Image source: <a href='https://developer.apple.com/design/human-interface-guidelines/layout'>Layout, Apple Developer Docs</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/4-tvos-safe-zones.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Having left some baggage behind, we can start considering what to put within and outside the defined bounds.</p>

<h3 id="the-layout">The Layout</h3>

<p>Considering the device is made for content consumption, streaming apps such as Netflix naturally come to mind. Broadly speaking, all these interfaces share a common layout structure where a vast collection of content is laid out in a simple <strong>grid</strong>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/5-netflix-tv-ui.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/5-netflix-tv-ui.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/5-netflix-tv-ui.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/5-netflix-tv-ui.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/5-netflix-tv-ui.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/5-netflix-tv-ui.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/5-netflix-tv-ui.jpg"
			
			sizes="100vw"
			alt="Screenshot of Netflix’s TV interface showing horizontally scrolling content shelves with thumbnail images of shows and movies arranged in rows, illustrating a common layout pattern used in TV apps."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Netflix TV UI features ‘content shelves,’ a common design pattern for TV apps. (Photo by <a href='https://www.flatpanelshd.com/news.php?subaction=showfull&id=1698138139'>Rasmus Larsen</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/5-netflix-tv-ui.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>These horizontally scrolling groups (sometimes referred to as “shelves”) resemble rows of a bookcase. Typically, they’ll contain dozens of items that don’t fit into the initial “fold”, so we’ll make sure the last visible item “peeks” from the edge, subtly indicating to the viewer there’s more content available if they continue scrolling.</p>

<p>If we were to define a standard 12-column layout grid, with a 2-column-wide item, we’d end up with something like this:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/6-twelve-column-grid-layout.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/6-twelve-column-grid-layout.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/6-twelve-column-grid-layout.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/6-twelve-column-grid-layout.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/6-twelve-column-grid-layout.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/6-twelve-column-grid-layout.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/6-twelve-column-grid-layout.jpg"
			
			sizes="100vw"
			alt="12-column TV layout grid with two horizontal content shelves. Each shelf contains rectangular tiles, each spanning 2 columns. The tiles are aligned to the grid, and the last tile in each row extends beyond the visible screen area."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Example of a 12-column layout with 80px margin on the sides. (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/6-twelve-column-grid-layout.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>As you can see, the last item falls outside the “safe” zone.</p>

<p><strong>Tip:</strong> A useful trick I discovered when designing TV interfaces was to utilise an <em>odd</em> number of columns. This allows the last item to fall within the defined margins and be more prominent while having little effect on the entire layout. We’ve concluded that overscan is not a prominent issue these days, yet an additional column in the layout helps <em>completely</em> circumvent it. Food for thought!</p>
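<p>The arithmetic behind this tip fits in a few lines of JavaScript. The 1920px canvas, 80px margins, and 16px gutter below are illustrative assumptions (as is the <code>shelfMetrics</code> name), not platform requirements:</p>

```javascript
// Sketch: how many 2-column tiles fit inside a shelf's safe width?
// Canvas width, margins, and gutter values are illustrative, not mandated.
function shelfMetrics(columns, { canvas = 1920, margin = 80, gutter = 16, span = 2 } = {}) {
  const safeWidth = canvas - 2 * margin;                         // 1760px here
  const colWidth = (safeWidth - (columns - 1) * gutter) / columns;
  const tileWidth = span * colWidth + (span - 1) * gutter;
  // Count of tiles that fit fully inside the safe area:
  const fullyVisible = Math.floor((safeWidth + gutter) / (tileWidth + gutter));
  return { colWidth, tileWidth, fullyVisible };
}

shelfMetrics(12); // colWidth 132, tileWidth 280: six tiles fill the safe width
                  // exactly, so the "peeking" seventh tile sits entirely in the margin.
shelfMetrics(13); // colWidth ~120.6: six full tiles plus exactly one spare column,
                  // so the seventh tile peeks inside the safe area.
```

<p>With an even grid, the tiles tile the safe width with nothing left over; the thirteenth column is what frees up exactly one column&rsquo;s worth of space for the peeking item to occupy within the margins.</p>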














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/7-thirteen-column-grid-layout.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/7-thirteen-column-grid-layout.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/7-thirteen-column-grid-layout.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/7-thirteen-column-grid-layout.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/7-thirteen-column-grid-layout.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/7-thirteen-column-grid-layout.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/7-thirteen-column-grid-layout.jpg"
			
			sizes="100vw"
			alt="13-column TV layout grid with two horizontal content shelves and 80-pixel side margins. Each tile spans 2 columns. The 13th column allows more of the final tile in each row to be visible, though it still extends slightly beyond the screen edge."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Example of a 13-column layout with 80px margin on the sides. One additional column within the set bounds gives more prominence to the last visible item on the shelf. (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/7-thirteen-column-grid-layout.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="typography">Typography</h3>

<p>TV design requires us to practice restraint, and this becomes very apparent when working with type. All good typography practices apply to TV design too, but I’d like to point out two specific takeaways.</p>

<p>First, accounting for the distance, everything (including type) needs to <strong>scale up</strong>. Where 16&ndash;18px might suffice for web baseline text, 24px should be your starting point on TV, with the rest of the scale increasing proportionally.</p>

<blockquote>“Typography can become especially tricky in 10-ft experiences. When in doubt, <strong>go larger</strong>.”<br /><br />&mdash; <a href="https://marvelapp.com/blog/designing-for-television/">Molly Lafferty</a> (Marvel Blog)</blockquote>

<p>With that in mind, the second piece of advice is to <strong>start with a small scale of 5&ndash;6 sizes</strong> and adjust only if necessary. The simplicity of a TV experience can, and should, be reflected in its typography, and a small scale like this will do all the &ldquo;heavy lifting&rdquo; if set correctly.</p>
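<p>One way to build such a scale is from the 24px body size mentioned earlier and a fixed ratio. The sketch below uses a 1.25 (major third) ratio and step names of my own choosing; neither comes from Google&rsquo;s or Apple&rsquo;s guidelines:</p>

```javascript
// Sketch: a six-step TV type scale derived from a 24px body size and a
// 1.25 ratio. The base, ratio, and step names are illustrative choices.
const BASE = 24;
const RATIO = 1.25;
const steps = ['caption', 'body', 'subtitle', 'title', 'headline', 'display'];

// Each step is one ratio apart; "body" (index 1) anchors the scale at 24px.
const scale = Object.fromEntries(
  steps.map((name, i) => [name, Math.round(BASE * RATIO ** (i - 1))])
);
// -> { caption: 19, body: 24, subtitle: 30, title: 38, headline: 47, display: 59 }
```

<p>Swapping the ratio (1.2 for a tighter scale, 1.333 for a more dramatic one) regenerates the whole system, which makes this an easy starting point to tune against real screens.</p>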














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/8-type-application.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/8-type-application.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/8-type-application.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/8-type-application.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/8-type-application.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/8-type-application.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/8-type-application.jpg"
			
			sizes="100vw"
			alt="A type scale with five text sizes and weights, demonstrating a simplified system suitable for TV interfaces."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A 5&ndash;6 size type scale can carry the “burden” of a TV interface. (Image source: <a href='https://www.figma.com/community/file/1533026722522937199'>TV UI Base Type Scale</a>, by Milan Balać) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/8-type-application.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>What you see in the example above is a scale I reduced from <a href="https://developer.android.com/design/ui/tv/guides/styles/typography">Google</a> and <a href="https://developer.apple.com/design/human-interface-guidelines/typography">Apple</a> guidelines, with a few size adjustments. Simple as it is, this scale served me well for years, and I have no doubt it could do the same for you.</p>

<h4 id="freebie">Freebie</h4>

<p>If you’d like to kick off your own TV project with my reduced type scale, feel free to grab the Figma design file!</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/9-figma-freebie.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/9-figma-freebie.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/9-figma-freebie.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/9-figma-freebie.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/9-figma-freebie.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/9-figma-freebie.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/9-figma-freebie.jpg"
			
			sizes="100vw"
			alt="A screenshot of the first page in the Figma design file by the author Milan Balać. Below the screenshot there is a link which points to the Figma file."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      <a href='https://www.figma.com/community/file/1533026722522937199'>TV UI Base Type Scale</a> (Figma Design file, <a href='https://creativecommons.org/licenses/by/4.0/'>CC-BY</a> license) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/9-figma-freebie.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="color">Color</h3>

<p>Imagine watching TV at night with the device being the only source of light in the room. You open up the app drawer and select a new streaming app; it loads into a pretty splash screen, and &mdash; bam! &mdash; a bright interface opens up, which, amplified by the dark surroundings, blinds you for a fraction of a second. That right there is our main consideration when using color on TV.</p>

<p>Built for cinematic experiences and often used in dimly lit environments, TVs lend themselves perfectly to darker and more subdued interfaces. Bright colors, especially pure white (<code>#ffffff</code>), translate to maximum luminance and can strain the eyes. As a general principle, you should <strong>rely on a more muted color palette</strong>. Slightly tinting brighter elements with your brand color, or with undertones of yellow to imitate natural light, will produce less visually unsettling results.</p>

<p>Finally, without a pointer or touch capabilities, it’s crucial to <strong>clearly highlight</strong> interactive elements. While using bright colors as backdrops may be overwhelming, using them sparingly to highlight element states in a highly contrasting way will work perfectly.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/11-button-focus-basic.gif"><img src="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/10-button-focus-basic-800.gif" width="800" height="450" alt="A row of buttons, with one button animating into a high-contrast focus state, where its color shifts from a dark tone to a bright, light color against a dark background. This illustrates how TV interfaces visually emphasize the selected element." /></a><figcaption>A focus state is the underlying principle of TV navigation. Most commonly, it relies on creating high contrast between the focused and unfocused elements. (<a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/11-button-focus-basic.gif">Large preview</a>)</figcaption></figure>

<p>This highlighting of UI elements is what TV leans on heavily &mdash; and it is what we’ll discuss next.</p>

<h3 id="focus">Focus</h3>

<p>In <a href="https://www.smashingmagazine.com/2025/08/designing-tv-evergreen-pattern-shapes-tv-experiences/">Part 1</a>, we covered how interacting through a remote implies a certain detachment from the interface, mandating reliance on a focus state to carry the burden of TV interaction. This is done by visually accenting elements to anchor the user’s eyes and map any subsequent movement within the interface.</p>

<p>If you have ever written HTML/CSS, you might recall the use of the <code>:focus</code> <a href="https://developer.mozilla.org/en-US/docs/Web/CSS/:focus">CSS pseudo-class</a>. While it’s primarily an accessibility feature on the web, it’s the <strong>core of interaction</strong> on TV, with more flexibility added in the form of two additional directions thanks to a dedicated D-pad.</p>
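Because focus moves one step at a time, D-pad navigation can be modeled as stepping through a grid of focusable items. The helper below is a rough illustration of that idea, not any platform’s actual API — the function name and grid model are assumptions for the sketch:

```javascript
// Hypothetical helper: compute the next focused item in a grid of
// `count` items laid out in rows of `cols`, given a D-pad direction.
// The focus stays put when a move would leave the grid.
function nextFocus(index, cols, count, direction) {
  const row = Math.floor(index / cols);
  const col = index % cols;
  let next = index;
  if (direction === "left" && col > 0) next = index - 1;
  if (direction === "right" && col < cols - 1 && index + 1 < count) next = index + 1;
  if (direction === "up" && row > 0) next = index - cols;
  if (direction === "down" && index + cols < count) next = index + cols;
  return next;
}
```

In a real app, this kind of logic is what maps each remote press to the element that receives the focus styles discussed next.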

<h4 id="focus-styles">Focus Styles</h4>

<p>There are a few standard ways to style a focus state. Firstly, there’s <strong>scaling</strong> &mdash; enlarging the focused element, which creates the illusion of depth by moving it closer to the viewer.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/13-focus-scale-base.gif"><img src="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/12-focus-scale-base-800.gif" width="800" height="450" alt="A horizontal row of image cards where one enlarges slightly on focus, demonstrating a common TV UI technique where focused elements scale up to indicate selection, especially when using image-only content." /></a><figcaption>Example of scaling elements on focus. This is especially common in cases where only images are used for focusable elements. (<a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/13-focus-scale-base.gif">Large preview</a>)</figcaption></figure>

<p>Another common approach is to <strong>invert</strong> background and text colors.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/15-focus-bg-base.gif"><img src="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/14-focus-bg-base-800.gif" width="800" height="450" alt="A horizontal row of image cards where one changes its background from dark to light on focus, demonstrating color inversion as a common technique for highlighting selected cards in TV interfaces." /></a><figcaption>Color inversion on focus, common for highlighting cards. (<a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/15-focus-bg-base.gif">Large preview</a>)</figcaption></figure>

<p>Finally, a <strong>border</strong> may be added around the highlighted element.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/17-focus-border-base.gif"><img src="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/16-focus-bg-base-800.gif" width="800" height="450" alt="A horizontal row of image cards where one displays a bright border on focus, illustrating how outlining is used in TV interfaces to visually highlight the selected element." /></a><figcaption>Example of border highlights on focus. (<a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/17-focus-border-base.gif">Large preview</a>)</figcaption></figure>

<p>These styles, used independently or in various combinations, appear in all TV interfaces. While execution may be constrained by the specific system, the purpose remains the same: <strong>clear and intuitive feedback, even from across the room</strong>.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/19-focus-combo.gif"><img src="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/18-focus-combo-800.gif" width="800" height="450" alt="A horizontal row of three image cards, each demonstrating a different combination of focus styles: the first scales and changes background color, the second changes background color and adds a border, and the third scales and adds a border — illustrating how focus states can be mixed for visual emphasis in TV interfaces." /></a><figcaption>The three basic styles can be combined to produce more focus state variants. (<a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/19-focus-combo.gif">Large preview</a>)</figcaption></figure>
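On web-based TV platforms, the three basic styles can be sketched in CSS roughly like this (the class names and color values are illustrative, not taken from any real design system):

```css
/* Illustrative only: three common focus treatments for a TV card. */
.card {
  transition: transform 150ms ease, background-color 150ms ease;
}

/* 1. Scaling: enlarge the focused element to suggest depth. */
.card--scale:focus {
  transform: scale(1.1);
}

/* 2. Inversion: swap background and text colors for contrast. */
.card--invert:focus {
  background-color: #f5f2e9; /* a warm off-white rather than pure #ffffff */
  color: #111;
}

/* 3. Border: draw a bright outline around the focused element. */
.card--border:focus {
  outline: 4px solid #f5f2e9;
  outline-offset: 2px;
}
```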

<p>Having set the foundations of interaction, layout, and movement, we can start building on top of them. The next chapter will cover the most common elements of a TV interface, their variations, and a few tips and tricks for button-bound navigation.</p>

<div class="partners__lead-place"></div>

<h2 id="common-tv-ui-components">Common TV UI Components</h2>

<p>Nowadays, the core user journey on television revolves around browsing (or searching through) a content library, selecting an item, and opening a dedicated screen to watch or listen.</p>

<p>This translates into a few fundamental screens:</p>

<ul>
<li><strong>Library</strong> (or Home) for content browsing,</li>
<li><strong>Search</strong> for specific queries, and</li>
<li><strong>A player screen</strong> focused on content playback.</li>
</ul>

<p>These screens are built with a handful of components optimized for the <a href="https://en.wikipedia.org/wiki/10-foot_user_interface">10-foot experience</a>, and while they are often found on other platforms too, it’s worth examining how they differ on TV.</p>

<h3 id="menus">Menus</h3>

<p>Appearing as a horizontal bar along the top edge of the screen, or as a vertical sidebar, the <strong>menu</strong> helps move between the different screens of an app. While its orientation mostly depends on the specific system, it does seem TV favors the side menu a bit more.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/20-netflix-sidebar-expanded.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/20-netflix-sidebar-expanded.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/20-netflix-sidebar-expanded.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/20-netflix-sidebar-expanded.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/20-netflix-sidebar-expanded.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/20-netflix-sidebar-expanded.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/20-netflix-sidebar-expanded.jpg"
			
			sizes="100vw"
			alt="Netflix TV interface with an expanded vertical side menu overlaying content tiles, showing navigation options like Watch Now, Browse, and Search, while dimming the background to keep focus on the menu."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Netflix side menu expanded. (Photo by <a href='https://variety.com/2019/digital/news/netflix-watch-now-test-1203431588/'>Variety</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/20-netflix-sidebar-expanded.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Both menu types share a common issue: the farther the user navigates away from the menu (vertically, toward the bottom for top bars; horizontally, toward the right for sidebars), the more button presses are required to get back to it. Fortunately, a <kbd>Back</kbd> button shortcut is usually added to allow immediate refocusing of the menu, which greatly improves usability.</p>


<figure class="video-embed-container break-out">
  <div class="video-embed-container--wrapper"
	
  >
    <iframe class="video-embed-container--wrapper-iframe" src="https://player.vimeo.com/video/1115330235"
        frameborder="0"
        allow="autoplay; fullscreen; picture-in-picture"
        allowfullscreen>
    </iframe>
	</div>
	
		<figcaption>Example of a top menu in Prime Video. As soon as focus is moved from the first shelf toward the bottom, the top menu disappears. Interestingly, Prime Video implemented both menus for different purposes: the sidebar for global navigation between screens, and the top menu for filtering.</figcaption>
	
</figure>


<figure class="video-embed-container break-out">
  <div class="video-embed-container--wrapper"
	
  >
    <iframe class="video-embed-container--wrapper-iframe" src="https://player.vimeo.com/video/1115330478"
        frameborder="0"
        allow="autoplay; fullscreen; picture-in-picture"
        allowfullscreen>
    </iframe>
	</div>
	
		<figcaption>While not without its flaws, the side menu remains persistently on the screen, no matter how far you move away from it. Paired with “Back” for quick refocusing, it offers a slightly more consistent experience.</figcaption>
	
</figure>

<p>That said, the problem arises much sooner with top menus, which also tend to be hidden or faded away as the user moves down the screen. This makes a <em>persistent sidebar</em> the more common pick in TV user interfaces, and the one that allows for a more consistent experience.</p>

<h3 id="shelves-posters-and-cards">Shelves, Posters, And Cards</h3>

<p>We’ve already mentioned shelves when covering layouts; now let’s shed some more light on this topic. The “shelves” (horizontally scrolling groups) form the basis of TV content browsing and are commonly populated with posters in three different aspect ratios: <strong>2:3</strong>, <strong>16:9</strong>, and <strong>1:1</strong>.</p>

<p><strong>2:3</strong> posters are common in apps specializing in movies and shows. Their vertical orientation references traditional movie posters, harkening back to the cinematic experiences TVs are built for. Moreover, their narrow shape allows more items to be immediately visible in a row, and they rarely require any added text, with titles baked into the poster image.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/21-netflix-2-3-posters.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/21-netflix-2-3-posters.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/21-netflix-2-3-posters.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/21-netflix-2-3-posters.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/21-netflix-2-3-posters.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/21-netflix-2-3-posters.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/21-netflix-2-3-posters.jpg"
			
			sizes="100vw"
			alt="Grid of vertically oriented 2:3 Netflix posters showing various movie and show titles, illustrating the common use of poster-style imagery in TV interfaces for visually dense content browsing."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Netflix 2:3 posters. (Photo by <a href='https://news.xbox.com/en-us/2023/05/10/join-the-netflix-xbox-insider-preview/'>Xbox Wire</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/21-netflix-2-3-posters.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p><strong>16:9</strong> posters abide by the same principles but with a horizontal orientation. They are often paired with text labels, which effectively turn them into cards, commonly seen on platforms like YouTube. In the absence of dedicated poster art, they show stills or playback from the videos, matching the aspect ratio of the media itself.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/22-amazon-prime-16-9-posters.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/22-amazon-prime-16-9-posters.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/22-amazon-prime-16-9-posters.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/22-amazon-prime-16-9-posters.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/22-amazon-prime-16-9-posters.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/22-amazon-prime-16-9-posters.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/22-amazon-prime-16-9-posters.jpg"
			
			sizes="100vw"
			alt="Amazon Prime Video interface displaying a row of 16:9 horizontally oriented posters for various shows and movies, illustrating the use of media thumbnails with aspect ratios matching video content."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Amazon Prime 16:9 posters. (Photo by <a href='https://techcrunch.com/2022/06/15/amazon-revamps-fire-tv-user-interface-with-new-home-screen-improved-navigation-and-more/'>Amazon</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/22-amazon-prime-16-9-posters.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p><strong>1:1</strong> posters are often found in music apps like Spotify, their shape reminiscent of album art and vinyl sleeves. These squares often get used in other instances, like representing channel links or profile tiles, giving more visual variety to the interface.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/23-spotify-1-1-posters.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/23-spotify-1-1-posters.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/23-spotify-1-1-posters.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/23-spotify-1-1-posters.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/23-spotify-1-1-posters.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/23-spotify-1-1-posters.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/23-spotify-1-1-posters.jpg"
			
			sizes="100vw"
			alt="Spotify TV interface displaying square 1:1 posters for playlists and albums in a grid layout, reflecting the visual style of album art and offering a consistent, music-focused browsing experience."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Spotify 1:1 posters. (Photo by <a href='https://www.linkedin.com/pulse/today-we-released-spotify-apple-tv-henrik-adler/'>Henrik Adler</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/23-spotify-1-1-posters.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>All of the above can co-exist within a single app, allowing for richer interfaces and breaking up otherwise uniform content libraries.</p>
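To put numbers on the earlier claim that narrower posters show more items at once: at a fixed row height, the aspect ratio alone determines how many posters fit across the screen. A quick back-of-the-envelope calculation (the 300px poster height, 1920px screen width, and zero gutters are assumptions for illustration):

```javascript
// How many posters of a given aspect ratio (width / height) fit in
// one row, assuming a fixed poster height and ignoring gutters.
function postersPerRow(ratio, screenWidth = 1920, posterHeight = 300) {
  const posterWidth = posterHeight * ratio;
  return Math.floor(screenWidth / posterWidth);
}

postersPerRow(2 / 3);  // 2:3 posters, 200px wide → 9 per row
postersPerRow(16 / 9); // 16:9 posters, ~533px wide → 3 per row
postersPerRow(1);      // 1:1 posters, 300px wide → 6 per row
```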

<p>And speaking of breaking up content, let’s see what we can do with <strong>spotlights</strong>!</p>

<h3 id="spotlights">Spotlights</h3>

<p>Typically taking up the entire width of the screen, these eye-catching components will highlight a new feature or a promoted piece of media. In a sea of uniform shelves, they can be placed strategically to introduce aesthetic diversity and disrupt the monotony.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/24-spotlight-main.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/24-spotlight-main.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/24-spotlight-main.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/24-spotlight-main.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/24-spotlight-main.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/24-spotlight-main.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/24-spotlight-main.jpg"
			
			sizes="100vw"
			alt="Large spotlight component with abstract background art and a call-to-action message, spanning the full width of the row to draw attention and visually break up the content grid in a TV interface."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Example of a large spotlight component, with “Create Account” and “Login” buttons. (Photo by <a href='https://unsplash.com/photos/multicolored-abstract-painting-QwoNAhbmLLo'>Joel Filipe</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/24-spotlight-main.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>A spotlight can be a focusable element by itself, or it can expose several actions thanks to its generous space. In my ventures into TV design, I relied on a few different spotlight sizes, which allowed me to place multiple spotlights in a single row, each highlighting a different aspect of the app without breaking the form viewers were used to.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/25-spotlight-half.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/25-spotlight-half.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/25-spotlight-half.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/25-spotlight-half.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/25-spotlight-half.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/25-spotlight-half.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/25-spotlight-half.jpg"
			
			sizes="100vw"
			alt="Two horizontally arranged spotlight components, each featuring a large portrait, title, and call-to-action label, showing a smaller spotlight variant that fits two items per row while clearly indicating interactivity."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Defining a few spotlight variants comes in handy — a smaller variant allows promoting two items per row while maintaining a strong visual presence. In this example, the entire element is focusable, but exposing an action label helps communicate what will happen upon selection. (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/25-spotlight-half.jpg'>Large preview</a>)
    </figcaption>
  
</figure>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/26-spotlight-mini.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/26-spotlight-mini.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/26-spotlight-mini.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/26-spotlight-mini.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/26-spotlight-mini.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/26-spotlight-mini.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/26-spotlight-mini.jpg"
			
			sizes="100vw"
			alt="Two compact spotlight components arranged in a row, featuring large portraits and bold titles without action buttons — illustrating a minimized layout that maintains visual impact while saving vertical space."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      In their most compressed version, the spotlights reduce their vertical footprint, doing away with actions and focusing solely on visuals and titles to preserve space while still drawing attention. (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/26-spotlight-mini.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Posters, cards, and spotlights shape the bulk of the visual experience and content presentation, but viewers still need a way to find specific titles. Let’s see how <strong>search</strong> and <strong>input</strong> are handled on TV.</p>

<h3 id="search-and-entering-text">Search And Entering Text</h3>

<p>Manually browsing through content libraries can yield results, but having the ability to <strong>search</strong> will speed things up &mdash; though not without some hiccups.</p>

<p>TVs allow for text input in the form of on-screen keyboards, similar to the ones found in modern smartphones. However, inputting text with a remote control is quite inefficient given the restrictiveness of its control scheme. For example, typing “hey there” on a mobile keyboard requires 9 keystrokes, but about 38 on a TV (!) due to the movement between characters and their selection.</p>
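That figure comes from counting every D-pad move plus a select press for each character. A rough model on an alphabetical grid keyboard makes the cost easy to estimate — the 6-column layout and the starting position on “a” are assumptions here, and exact counts vary by keyboard:

```javascript
// Rough model of D-pad typing cost on an alphabetical grid keyboard.
// Each character costs the Manhattan distance from the current key
// plus one press of the select button.
const KEYS = "abcdefghijklmnopqrstuvwxyz ";
const COLS = 6;

function dpadPresses(text) {
  let presses = 0;
  let [row, col] = [0, 0]; // cursor starts on "a"
  for (const ch of text) {
    const i = KEYS.indexOf(ch);
    const [r, c] = [Math.floor(i / COLS), i % COLS];
    presses += Math.abs(r - row) + Math.abs(c - col) + 1; // moves + select
    [row, col] = [r, c];
  }
  return presses;
}

dpadPresses("hey there"); // 39 presses with this particular layout
```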

<p>Typing with a D-pad may be an arduous task, but at the same time, having the ability to search is unquestionably useful.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/27-roku-search.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/27-roku-search.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/27-roku-search.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/27-roku-search.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/27-roku-search.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/27-roku-search.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/27-roku-search.jpg"
			
			sizes="100vw"
			alt="Roku TV interface showing an on-screen grid keyboard used for text input, with a search query partially entered and matching content displayed on the right. This illustrates the standard grid layout commonly used for TV search."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Example of an on-screen keyboard for Roku. The grid keyboard layout is the most common across virtually all platforms, aside from tvOS. (Photo by <a href='https://developer.roku.com/docs/developer-program/discovery/search/implementing-search.md'>Roku</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/27-roku-search.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Luckily for us, keyboards are accounted for in all systems and usually come in two varieties: the grid layout used by most platforms, and a horizontal layout that supports the touch-enabled, gesture-based controls on tvOS. Swiping between characters is significantly faster, but this is yet another pattern that can only be enhanced, not replaced.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/28-tvos-horizontal-keyboard.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/28-tvos-horizontal-keyboard.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/28-tvos-horizontal-keyboard.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/28-tvos-horizontal-keyboard.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/28-tvos-horizontal-keyboard.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/28-tvos-horizontal-keyboard.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/28-tvos-horizontal-keyboard.jpg"
			
			sizes="100vw"
			alt="tvOS on-screen keyboard with a horizontally scrolling layout of letters, numbers, and symbols, designed for gesture-based input using a touch-enabled remote."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The tvOS horizontal keyboard is designed to support touch and gesture-enabled remote controllers. (Photo by <a href='https://arstechnica.com/gadgets/2016/03/mini-review-tvos-9-2-fixes-all-the-apple-tvs-biggest-problems/'>Andrew Cunningham</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/28-tvos-horizontal-keyboard.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aModernization%20has%20made%20things%20significantly%20easier,%20with%20search%20autocomplete%20suggestions,%20device%20pairing,%20voice%20controls,%20and%20remotes%20with%20physical%20keyboards,%20but%20on-screen%20keyboards%20will%20likely%20remain%20a%20necessary%20fallback%20for%20quite%20a%20while.%20And%20no%20matter%20how%20cumbersome%20this%20fallback%20may%20be,%20we%20as%20designers%20need%20to%20consider%20it%20when%20building%20for%20TV.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fdesigning-tv-principles-patterns-practical-guidance%2f">
      
Modernization has made things significantly easier, with search autocomplete suggestions, device pairing, voice controls, and remotes with physical keyboards, but on-screen keyboards will likely remain a necessary fallback for quite a while. And no matter how cumbersome this fallback may be, we as designers need to consider it when building for TV.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h3 id="players-and-progress-bars">Players And Progress Bars</h3>

<p>While all the different sections of a TV app serve a purpose, <strong>the Player</strong> takes center stage. It&rsquo;s where all roads eventually lead, and where viewers will spend the most time. It&rsquo;s also one of the rare instances where focus gets lost, allowing the interface to get out of the way of enjoying a piece of content.</p>

<p>Arguably, players are the most complex features of TV apps, compacting all the different functionalities into a single screen. Take YouTube, for example: its player doesn&rsquo;t just handle the expected playback controls but also supports content browsing, searching, reading comments, reacting, and navigating to channels, all within a single screen.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/29-youtube-android-player.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/29-youtube-android-player.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/29-youtube-android-player.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/29-youtube-android-player.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/29-youtube-android-player.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/29-youtube-android-player.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/29-youtube-android-player.jpg"
			
			sizes="100vw"
			alt="YouTube TV player interface during video playback, displaying a range of controls including playback speed, quality, like/dislike, and related video thumbnails. This showcases the app’s extensive in-player functionality."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The YouTube TV app features one of the most robust players out there. (Photo by <a href='https://www.androidpolice.com/2021/06/07/youtube-on-android-tv-just-added-the-feature-ive-been-wanting-for-years/'>Rita El Khoury</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/29-youtube-android-player.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Compared to YouTube, Netflix offers a very lightweight experience guided by the nature of the app.</p>

<p>Still, every player has a basic set of controls, the foundation of which is the <strong>progress bar</strong>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/30-netflix-player.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/30-netflix-player.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/30-netflix-player.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/30-netflix-player.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/30-netflix-player.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/30-netflix-player.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/30-netflix-player.jpg"
			
			sizes="100vw"
			alt="Netflix TV app media player interface shown during playback, featuring a minimalist design with only essential controls: pause, back, subtitles, skip, and a progress bar with time indicators. Compared to the more complex YouTube player, this stripped-down layout prioritizes simplicity and keeps the focus on the content."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Netflix TV app media player. (Photo by <a href='https://play.google.com/store/apps/details?id=com.netflix.ninja'>Netflix</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/30-netflix-player.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The progress bar serves as a visual indicator of content duration. During interaction, focus isn&rsquo;t placed on the bar itself, but on a movable knob known as the &ldquo;scrubber.&rdquo; It is by moving the scrubber left and right, or stopping it in its tracks, that we control playback.</p>

<p>Another indirect method of invoking the progress bar is with the good old <kbd>Play</kbd> and <kbd>Pause</kbd> buttons. Rooted in the mechanical era of tape players, the universally understood triangle and two vertical bars are as integral to the TV legacy as the D-pad. No matter how minimalist and sleek the modern player interface may be, these symbols remain a staple of the viewing experience.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/31-physical-playback-controls.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/31-physical-playback-controls.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/31-physical-playback-controls.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/31-physical-playback-controls.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/31-physical-playback-controls.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/31-physical-playback-controls.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/31-physical-playback-controls.jpg"
			
			sizes="100vw"
			alt="Close-up of a player showing physical playback control buttons: record (red dot), play (right-pointing triangle), stop (square), rewind (double left arrows), and fast forward (double right arrows). Japanese labels appear above each button. (Note: The pause button (two vertical bars) is not shown in the picture.)"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Transcending language barriers, the simple symbols for playback controls are universally recognisable. (Photo by <a href='https://commons.wikimedia.org/wiki/File:SONY_ICZ-R50_025_(5434919279).jpg'>TAKA@P.P.R.S</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/31-physical-playback-controls.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The presence of a scrubber may also indicate the type of content. Video on demand allows for the full set of playback controls, while live streams (unless DVR is involved) will do away with the scrubber since viewers won’t be able to rewind or fast-forward.</p>

<p>Earlier iterations of progress bars often came bundled with a set of playback control buttons, but as viewers got used to the tools available, these controls were gradually consolidated into the progress bar and scrubber themselves.</p>

<h3 id="bringing-it-all-together">Bringing It All Together</h3>

<p>With these building blocks in place, we&rsquo;ve got everything necessary for a basic but functional TV app. Just as the six core buttons make remote navigation possible, the components and principles outlined above help guide purposeful TV design. The more context you bring, the more you&rsquo;ll be able to expand and combine these basic principles, creating an experience unique to your needs.</p>

<p>Before we wrap things up, I&rsquo;d like to share a few tips and tricks I discovered along the way &mdash; tips and tricks that I wish I had known from the start. Regardless of how simple or complex your idea may be, they may serve as useful tools to help add depth, polish, and finesse to any TV experience.</p>

<div class="partners__lead-place"></div>

<h2 id="thinking-beyond-the-basics">Thinking Beyond The Basics</h2>

<p>Like any platform, TV has a set of constraints that we abide by when designing. But sometimes these norms are applied without question, making the already limited capabilities feel even more restrictive. Below are a handful of less obvious ideas that can help you design more thoughtfully and flexibly for the big screen.</p>

<h3 id="long-press">Long Press</h3>

<p>Most modern remotes support <strong>press-and-hold gestures</strong> as a subtle way to enhance functionality, especially on remotes with fewer buttons available.</p>

<p>For example, holding directional buttons when browsing content speeds up scrolling, while holding <kbd>Left</kbd>/<kbd>Right</kbd> during playback speeds up timeline seeking. In many apps, a single press of the <kbd>OK</kbd> button opens a video, but holding it for longer opens a contextual menu with additional actions.</p>
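A long press is typically detected by timing the gap between the button going down and coming back up. Here is a minimal sketch of that idea; the 500&nbsp;ms threshold and the action names are illustrative assumptions, not values from any particular TV platform.

```typescript
// Sketch: telling a short press apart from a long press of the OK button.
// LONG_PRESS_MS is an assumed threshold, not a platform constant.
const LONG_PRESS_MS = 500;

interface OkActions {
  open(): void;        // tap: open the focused video
  contextMenu(): void; // hold: show additional actions
}

let pressedAt: number | null = null;

function onOkDown(now: number): void {
  // Ignore key auto-repeat while the button is held.
  if (pressedAt === null) pressedAt = now;
}

function onOkUp(now: number, actions: OkActions): void {
  if (pressedAt === null) return;
  const held = now - pressedAt;
  pressedAt = null;
  if (held >= LONG_PRESS_MS) actions.contextMenu();
  else actions.open();
}
```

Timestamps are passed in explicitly here to keep the sketch testable; in a real app you would wire this to the platform&rsquo;s key events and a clock.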


<figure class="video-embed-container break-out">
  <div class="video-embed-container--wrapper"
	
  >
    <iframe class="video-embed-container--wrapper-iframe" src="https://player.vimeo.com/video/1115333599"
        frameborder="0"
        allow="autoplay; fullscreen; picture-in-picture"
        allowfullscreen>
    </iframe>
	</div>
	
		<figcaption>Example of long-press interaction on YouTube.</figcaption>
	
</figure>

<p>While not immediately apparent, press-and-hold appears throughout TV experiences, essentially doubling the capabilities of a single button. Depending on context, you can map certain buttons to an additional action and give more depth to the interface without making it convoluted.</p>

<p>And speaking of <em>mapping</em>, let’s see how we can utilize it to our benefit.</p>

<h3 id="remapping-keys-and-the-importance-of-context">Remapping Keys And The Importance Of Context</h3>

<p>While not as flexible as long-press, button functions can be contextually remapped. For example, Amazon’s Prime Video maps the <kbd>Up</kbd> button to open its X-Ray feature during playback. Typically, all directional buttons open video controls, so repurposing one for a custom feature cleverly adds interactivity with little tradeoff.</p>
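One way to express this is a lookup keyed first by context, then by button, so the same physical key resolves to different actions in different parts of the app. The contexts and action names below are hypothetical, loosely modeled on the Prime Video example.

```typescript
// Sketch of contextual key remapping: the same button triggers different
// actions depending on where the viewer currently is in the app.
type Context = "browse" | "playback";
type Action = () => string;

const keymap: Record<Context, Record<string, Action>> = {
  browse: {
    ArrowUp: () => "focus-previous-row",
  },
  playback: {
    // Repurposed during playback, in the spirit of Prime Video's X-Ray.
    ArrowUp: () => "open-xray",
  },
};

function handleKey(context: Context, key: string): string {
  const action = keymap[context]?.[key];
  return action ? action() : "unhandled";
}
```

Keeping the mapping in one place also makes it easy to audit which buttons have been repurposed, so contextual remaps stay deliberate rather than accidental.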














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/32-amazon-xray-button-mapping.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/32-amazon-xray-button-mapping.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/32-amazon-xray-button-mapping.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/32-amazon-xray-button-mapping.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/32-amazon-xray-button-mapping.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/32-amazon-xray-button-mapping.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/32-amazon-xray-button-mapping.jpg"
			
			sizes="100vw"
			alt="A paused scene from The Lord of the Rings: The Rings of Power on Prime Video shows playback controls and the X-Ray feature. The screen highlights character information, bonus content, and book links at the bottom. A subtle hint suggests pressing the Up button to access the full X-Ray view."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Upon opening playback controls, Prime Video allows opening the X-Ray feature with another press of the Up button. (Photo by <a href='https://www.amazon.com/salp/xray'>Amazon</a>) (<a href='https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/32-amazon-xray-button-mapping.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aWith%20limited%20input,%20context%20becomes%20a%20powerful%20tool.%20It%20not%20only%20declutters%20the%20interface%20to%20allow%20for%20more%20focus%20on%20specific%20tasks,%20but%20also%20enables%20the%20same%20set%20of%20buttons%20to%20trigger%20different%20actions%20based%20on%20the%20viewer%e2%80%99s%20location%20within%20an%20app.%0a&url=https://smashingmagazine.com%2f2025%2f09%2fdesigning-tv-principles-patterns-practical-guidance%2f">
      
With limited input, context becomes a powerful tool. It not only declutters the interface to allow for more focus on specific tasks, but also enables the same set of buttons to trigger different actions based on the viewer’s location within an app.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>Another great example is YouTube’s <strong>scrubber interaction</strong>. Once the scrubber is moved, every other UI element fades. This cleans up the viewer’s working area, so to speak, narrowing the interface to a single task. In this state &mdash; and only in this state &mdash; pressing <kbd>Up</kbd> one more time moves away from scrubbing and into browsing by chapter.</p>


<figure class="video-embed-container break-out">
  <div class="video-embed-container--wrapper"
	
  >
    <iframe class="video-embed-container--wrapper-iframe" src="https://player.vimeo.com/video/1115334334"
        frameborder="0"
        allow="autoplay; fullscreen; picture-in-picture"
        allowfullscreen>
    </iframe>
	</div>
	
		<figcaption>YouTube’s chaptering can only be utilized after initiating timeline seeking.</figcaption>
	
</figure>

<p>This is such an elegant example of expanding restraint, and adding <em>more</em> only <em>when necessary</em>. I hope it inspires similar interactions in your TV app designs.</p>

<h3 id="efficient-movement-on-tv">Efficient Movement On TV</h3>

<p>At its best, every action on TV &ldquo;costs&rdquo; at least one click. There&rsquo;s no such thing as aimless cursor movement &mdash; if you want to move, you must press a button. We&rsquo;ve seen how cumbersome that can be inside an on-screen keyboard, but there&rsquo;s also something to learn about efficient movement in these restrained circumstances.</p>

<p>Going back to the Homescreen, we can note that vertical and horizontal movement serve two distinct roles. Vertical movement switches between groups, while horizontal movement switches items within these groups. No matter how far you’ve gone inside a group, a single vertical click will move you into another.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/34-horizontal-group-movement.gif"><img src="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/33-horizontal-group-movement-800.gif" width="800" height="450" alt="A grid of focusable items is organized into labeled groups (Group A, B, and C). Horizontal navigation moves between items within the same group (e.g., A1 to A2), while vertical navigation switches between groups (e.g., A1 to B1 to C1). Each group change requires only a single button press, illustrating an efficient and predictable movement model." /></a><figcaption>Every step on TV “costs” an action, so we might as well optimize movement. (<a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/34-horizontal-group-movement.gif">Large preview</a>)</figcaption></figure>

<p>This subtle difference &mdash; two axes with separate roles &mdash; is the most efficient way of moving through a TV interface. Reversing the pattern (horizontal to switch groups, vertical to drill down) works like a charm as long as you keep the role of each axis well defined.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/36-vertical-group-movement.gif"><img src="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/35-vertical-group-movement-800.gif" width="800" height="450" alt="A vertically structured layout with three labeled groups (Group A, B, and C) arranged in columns. Navigating vertically moves within a group (e.g., A1 to A2 to A3), while horizontal input switches between groups (e.g., A1 to B1 to C1). This design maintains consistent and predictable movement, requiring only one directional press to traverse either within or across groups." /></a><figcaption>Properly applied in a vertical layout, the principles of optimal movement remain the same. (<a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/36-vertical-group-movement.gif">Large preview</a>)</figcaption></figure>

<p>Quietly brilliant and easy to overlook, this pattern powers almost every step of the TV experience. Remember it, and use it well.</p>
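The two-axis model is simple enough to capture in a few lines. In this sketch (group and item names are illustrative), horizontal presses move within the focused group while a single vertical press always lands in the adjacent group, clamping the item index when the new group is shorter.

```typescript
// Sketch of two-axis focus movement: vertical switches groups,
// horizontal moves within the current group.
type Key = "Up" | "Down" | "Left" | "Right";

interface FocusState {
  group: number;
  item: number;
}

function move(groups: string[][], s: FocusState, key: Key): FocusState {
  const clamp = (v: number, max: number) => Math.min(Math.max(v, 0), max);
  if (key === "Left" || key === "Right") {
    const item = clamp(s.item + (key === "Right" ? 1 : -1), groups[s.group].length - 1);
    return { ...s, item };
  }
  // Vertical: one press always reaches another group, no matter how far
  // along the current row the viewer has scrolled.
  const group = clamp(s.group + (key === "Down" ? 1 : -1), groups.length - 1);
  return { group, item: clamp(s.item, groups[group].length - 1) };
}
```

The same function works for the reversed layout; only the meaning assigned to each axis changes.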

<h3 id="thinking-beyond-jpgs">Thinking Beyond JPGs</h3>

<p>After covering in detail many of the technicalities, let’s finish with some visual polish.</p>

<p>Most TV interfaces are driven by tightly packed rows of cover and poster art. While often beautifully designed, this type of content and layout leaves little room for visual flair. For years, the flat JPG, with its small file size, has been a go-to format, though contemporary alternatives like <a href="https://en.wikipedia.org/wiki/WebP">WebP</a> are slowly taking its place.</p>

<p>Meanwhile, we can rely on the tried and tested PNG to give a bit more shine to our TV interfaces. The simple fact that it supports transparency can help the often-rigid UIs feel more sophisticated. Used strategically and paired with simple focus effects such as background color changes, PNGs can bring subtle moments of delight to the interface.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/38-basic-png-focus.gif"><img src="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/37-basic-png-focus-800.gif" width="800" height="450" alt="A focus animation shows two adjacent spotlight cards on a TV interface, each featuring a person and title. As focus shifts from the left to the right card, a transparent PNG overlay adapts smoothly to background color changes, preserving contrast and clarity without requiring hard edges or solid backgrounds." /></a><figcaption>Having a transparent background blends well with surface color changes common in TV interfaces. (<a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/38-basic-png-focus.gif">Large preview</a>)</figcaption></figure>

<figure><a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/40-png-shape-focus.gif"><img src="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/39-png-shape-focus-800.gif" width="800" height="450" alt="A focus animation highlights a card featuring a person. As the card gains focus, an animated orange wavy shape appears behind the person, creating a dynamic frame effect. This example shows how semi-transparent overlays with defined shapes can enhance focus without relying on solid rectangular backgrounds." /></a><figcaption>And don’t forget, transparency doesn’t have to mean that there shouldn't be any background at all. (<a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/40-png-shape-focus.gif">Large preview</a>)</figcaption></figure>

<p>Moreover, if transformations like scaling and rotating are supported, you can really make those rectangular shapes come alive by layering multiple assets.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/42-multilayer-focus.gif"><img src="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/41-multilayer-focus-800.gif" width="800" height="450" alt="Animated TV UI section transitions through layered images with a background color change illustrating how combining multiple visuals and color shifts can add energy to a layout." /></a><figcaption>Combining multiple images along with a background color change can liven up certain sections. (<a href="https://files.smashing.media/articles/designing-tv-principles-patterns-practical-guidance/42-multilayer-focus.gif">Large preview</a>)</figcaption></figure>

<p>As you probably understand by now, these little touches of finesse don&rsquo;t step outside the bounds of what&rsquo;s possible. They simply find more room to breathe within them. But with such limited capabilities, it&rsquo;s best to learn all the different tricks that can help make your TV experiences stand out.</p>

<h2 id="closing-thoughts">Closing Thoughts</h2>

<p>Rooted in legacy, with a limited control scheme and a rather &ldquo;shallow&rdquo; interface, TV design reminds us to do the best with what we have at our disposal. The constraints I outlined are not meant to induce claustrophobia or make you feel limited in your design choices, but rather to serve as <em>guides</em>. It is by accepting them that we can find freedom and new avenues to explore.</p>

<p>This two-part series of articles, just like my experience designing for TV, was not about reinventing the wheel with radical ideas. It was about understanding the platform&rsquo;s nuances and contributing to what&rsquo;s already there with my personal touch.</p>

<p>If you find yourself working in this design field, I hope my guide will serve as a warm welcome and will help you do your finest work. And if you have any questions, do leave a comment, and I will do my best to reply and help.</p>

<p>Good luck!</p>

<h3 id="further-reading">Further Reading</h3>

<ul>
<li>“<a href="https://developer.android.com/design/ui/tv/guides/foundations/design-for-tv">Design for TV</a>,” by Android Developers<br />
<em>Great TV design is all about putting content front and center. It&rsquo;s about creating an interface that&rsquo;s easier to use and navigate, even from a distance. It&rsquo;s about making it easier to find the content you love, and to enjoy it in the best possible quality.</em></li>
<li>“<a href="https://uxdesign.cc/guidelines-designing-for-television-experience-524f19ab6357">TV Guidelines: A quick kick-off on designing for Television Experiences</a>,” by Andrea Pacheco<br />
<em>Just like designing a mobile app, designing a TV application can be a fun and complex thing to do, due to the numerous guidelines and best practices to follow. Below, I have listed the main best practices to keep in mind when designing an app for a 10-foot screen.</em></li>
<li>“<a href="https://marvelapp.com/blog/designing-for-television/">Designing for Television &mdash; TV UI Design</a>,” by Molly Lafferty<br />
<em>We’re no longer limited to a remote and cable box to control our TVs; we’re using Smart TVs, or streaming from set-top boxes like Roku and Apple TV, or using video game consoles like Xbox and PlayStation. And each of these devices allows a user interface that’s much more powerful than your old-fashioned on-screen guide.</em></li>
<li>“<a href="https://www.toptal.com/designers/ui/tv-ui-design">Rethinking User Interface Design for the TV Platform</a>,” by Pascal Potvin<br />
<em>Designing for television has become part of the continuum of devices that require a rethink of how we approach user interfaces and user experiences.</em></li>
<li>“<a href="https://developer.android.com/design/ui/tv/guides/styles/typography">Typography for TV</a>,” by Android Developers<br />
<em>As television screens are typically viewed from a distance, interfaces that use larger typography are more legible and comfortable for users. TV Design&rsquo;s default type scale includes contrasting and flexible type styles to support a wide range of use cases.</em></li>
<li>“<a href="https://developer.apple.com/design/human-interface-guidelines/typography">Typography</a>,” by Apple Developer docs<br />
<em>Your typographic choices can help you display legible text, convey an information hierarchy, communicate important content, and express your brand or style.</em></li>
<li>“<a href="https://developer.android.com/design/ui/tv/guides/foundations/color-on-tv">Color on TV</a>,” by Android Developers<br />
<em>Color on TV design can inspire, set the mood, and even drive users to make decisions. It&rsquo;s a powerful and tangible element that users notice first. As a rich way to connect with a wide audience, it&rsquo;s no wonder color is an important step in crafting a high-quality TV interface.</em></li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(mb, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Milan Balać</author><title>Designing For TV: The Evergreen Pattern That Shapes TV Experiences (Part 1)</title><link>https://www.smashingmagazine.com/2025/08/designing-tv-evergreen-pattern-shapes-tv-experiences/</link><pubDate>Wed, 27 Aug 2025 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/08/designing-tv-evergreen-pattern-shapes-tv-experiences/</guid><description>TV interface design is a unique, fascinating, and often overlooked field. It’s been guided by decades of evolution and innovation, yet still firmly constrained by its legacy. Follow Milan into the history, quirks, and unshakable rules that dictate how we control these devices.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/08/designing-tv-evergreen-pattern-shapes-tv-experiences/" />
              <title>Designing For TV: The Evergreen Pattern That Shapes TV Experiences (Part 1)</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Designing For TV: The Evergreen Pattern That Shapes TV Experiences (Part 1)</h1>
                  
                    
                    <address>Milan Balać</address>
                  
                  <time datetime="2025-08-27T13:00:00&#43;00:00" class="op-published">2025-08-27T13:00:00+00:00</time>
                  <time datetime="2025-08-27T13:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>Television sets have been the staple of our living rooms for decades. We watch, we interact, and we control, but how often do we <em>design</em> for them? TV design flew under my “radar” for years, until one day I found myself in the deep, designing TV-specific user interfaces. Now, after gathering quite a bit of experience in the area, I would like to share my knowledge on this rather rare topic. If you’re interested in learning more about the <strong>user experience</strong> and <strong>user interfaces of television</strong>, this article should be a good starting point.</p>

<p>Just like any other device or use case, TV has its quirks, specifics, and guiding principles. Before getting started, it will be beneficial to understand the core <em>ins</em> and <em>outs</em>. In Part 1, we’ll start with a bit of history, take a close look at the fundamentals, and review the evolution of television. In <a href="https://www.smashingmagazine.com/2025/09/designing-tv-principles-patterns-practical-guidance/">Part 2</a>, we’ll dive into the depths of practical aspects of designing for TV, including its key principles and patterns.</p>

<p>Let’s start with the two key paradigms that dictate the process of designing TV interfaces.</p>

<h2 id="mind-the-gap-or-the-10-foot-experience">Mind The Gap, Or The 10-foot-experience</h2>

<p>Firstly, we have the so-called “<a href="https://www.edenspiekermann.com/insights/the-10-foot-experience/">10-foot experience</a>,” referring to the fact that interaction and consumption on TV happen from a distance of roughly three or more meters. This is significantly different from interacting with a phone or a computer, and it calls for specific approaches in TV user interface design. For example, we’ll need to make text and user interface (UI) elements larger on TV to account for the greater distance to the screen.</p>
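<p>As a rough illustration of why 10-foot interfaces need bigger type, the minimum comfortable glyph size can be estimated from the viewing distance and the visual angle a character should subtend. The sketch below is illustrative only &mdash; the 0.007-radian target and the 55&#8243; 4K panel geometry are assumptions, not figures from this article:</p>

```typescript
// Estimate the on-screen character height (in device pixels) needed for
// text to subtend a given visual angle at a given viewing distance.
// All constants below are illustrative assumptions for a typical
// 10-foot living-room setup.

const VIEWING_DISTANCE_MM = 3000; // ~10 feet from the screen
const VISUAL_ANGLE_RAD = 0.007;   // rough target for comfortable legibility

// Assumed panel: a 55" 4K TV, ~1218 mm wide, 3840 horizontal pixels.
const PANEL_WIDTH_MM = 1218;
const PANEL_WIDTH_PX = 3840;

function minCharHeightPx(distanceMm: number, angleRad: number): number {
  // Physical height the glyph must have to subtend the target angle...
  const heightMm = 2 * distanceMm * Math.tan(angleRad / 2);
  // ...converted into device pixels using the panel's pixel density.
  const pxPerMm = PANEL_WIDTH_PX / PANEL_WIDTH_MM;
  return heightMm * pxPerMm;
}

const px = minCharHeightPx(VIEWING_DISTANCE_MM, VISUAL_ANGLE_RAD);
console.log(`~${Math.round(px)} px character height at 3 m`);
```

<p>The same formula makes the phone comparison concrete: at a 300&nbsp;mm phone-viewing distance, the required physical glyph height is roughly a tenth of what the TV needs.</p>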

<p>Furthermore, we’ll take extra care to adhere to <strong>contrast standards</strong>, primarily relying on dark interfaces, as light ones may be too blinding in darker surroundings. And finally, considering the laid-back nature of the device, we’ll <strong>simplify the interactions</strong>.</p>
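<p>The contrast standards mentioned above can be checked numerically with the standard WCAG 2.x relative-luminance formula. The formula itself is the published one; the near-white-on-near-black sample colors are illustrative, not taken from any particular TV interface:</p>

```typescript
// WCAG 2.x relative luminance and contrast ratio, useful for verifying
// that text on a dark TV interface meets contrast guidelines.

type Rgb = [number, number, number];

// Linearize an 8-bit sRGB channel (WCAG 2.x definition).
function channelToLinear(c8: number): number {
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance([r, g, b]: Rgb): number {
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

// Contrast ratio ranges from 1:1 (identical) to 21:1 (white on black).
function contrastRatio(fg: Rgb, bg: Rgb): number {
  const l1 = relativeLuminance(fg);
  const l2 = relativeLuminance(bg);
  const [hi, lo] = l1 >= l2 ? [l1, l2] : [l2, l1];
  return (hi + 0.05) / (lo + 0.05);
}

// Near-white text on a near-black TV background: well above the 4.5:1
// minimum for body text, without the glare of pure white on pure black.
console.log(contrastRatio([235, 235, 235], [16, 16, 16]).toFixed(1));
```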














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/1-10-ft-experience.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/1-10-ft-experience.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/1-10-ft-experience.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/1-10-ft-experience.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/1-10-ft-experience.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/1-10-ft-experience.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/1-10-ft-experience.jpg"
			
			sizes="100vw"
			alt="A hand holds a modern TV remote, aimed at a television screen. The blurred background emphasizes the viewing distance, illustrating the &#39;10-foot experience,&#39; a key aspect of TV interaction distinct from phones and computers."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Unlike phones or computers, the TV set is used from a greater distance. This interaction paradigm is known as the “10-foot experience.” (Photo by <a href='https://unsplash.com/photos/person-holding-gray-remote-control-dZmNJKFDuVI'>Jonas Leupe</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/1-10-ft-experience.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>But the 10-foot experience is only one part of the equation. There wouldn’t be a “10-foot experience” in the first place if there were no <em>mediator</em> between the user and the device, and if we didn’t have something to interact <em>through</em> from a distance.</p>

<p>There would be no 10-foot experience if there were no <strong>remote controllers</strong>.</p>


<h2 id="the-mediator">The Mediator</h2>

<p>The <strong>remote</strong>, the second half of the equation, is what allows us to interact with the TV from the comfort of the couch. Slower and more deliberate, this conglomerate of buttons lacks the fluid motion of a mouse, or the dexterity of fingers against a touchscreen &mdash; yet the capabilities of the remote should not be underestimated.</p>

<p>Rudimentary as it is and with a limited set of functions, the remote allows for some interesting design approaches and can carry the weight of the modern TV along with its ever-growing requirements for interactivity. It underwent a handful of overhauls during the seventy years since its inception and was refined and made more ergonomic; however, there is a <strong>40-year-old pattern</strong> so deeply ingrained in its foundation that nothing can change it.</p>

<p>What if I told you that you could navigate TV interfaces and apps with a basic controller from the 1980s <em>just as well</em> as with the latest remote from Apple? Not only that, but any experience built around the <strong>six core buttons</strong> of a remote will be system-agnostic and will easily translate across platforms.</p>

<p>This is the main point I will focus on for the rest of this article.</p>

<h2 id="birth-of-a-pattern">Birth Of A Pattern</h2>

<p>As television sets were taking over people’s living rooms in the 1950s, manufacturers sought to upgrade and improve the user experience. The effort of walking up to the device to manually adjust some settings was eventually identified as an area for improvement, and as a result, the first television remote controllers were introduced to the market.</p>

<h3 id="early-developments">Early Developments</h3>

<p>Preliminary iterations of remotes were rather unique, and designs diverged considerably before the industry finally settled on a rectangular shape with buttons sprinkled on top.</p>

<p>Take a look at the <a href="https://en.wikipedia.org/wiki/Zenith_Flash-matic">Zenith Flash-Matic</a>, for example. Designed in the mid-1950s, this standout device featured a single button that triggered a directional lamp; by pointing it at specific corners of the TV set, viewers could control various functions, such as changing channels or adjusting the volume.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/2-flash-matic.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/2-flash-matic.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/2-flash-matic.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/2-flash-matic.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/2-flash-matic.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/2-flash-matic.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/2-flash-matic.jpg"
			
			sizes="100vw"
			alt="The Zenith Flash-Matic remote, a vintage green and gold device resembling a ray gun, with a trigger-style red button."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Zenith Flash-Matic remote, one of the earliest predecessors of modern TV remotes. (Photo by the <a href='https://collection.sciencemuseumgroup.org.uk/objects/co8676884/zenith-flash-matic-remote-control'>Science Museum Group</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/2-flash-matic.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>While they were a far cry from their modern counterparts, devices like the Flash-Matic set the scene for further developments, and we were off to the races!</p>

<p class="c-pre-sidenote--left">As the designs evolved, the core functionality of the remote solidified. Gradually, remote controls became more than just simple channel changers, evolving into command centers for the expanding territory of home entertainment.</p>
<p class="c-sidenote c-sidenote--right"><strong>Note</strong>: I will not go too much into history here &mdash; aside from some specific points that are of importance to the matter at hand &mdash; but if you have some time to spare, do look into the developmental history of television sets and remotes; it’s quite a fascinating topic.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/3-space-command.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/3-space-command.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/3-space-command.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/3-space-command.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/3-space-command.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/3-space-command.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/3-space-command.jpg"
			
			sizes="100vw"
			alt="The Zenith Space Command remote, a sleek, metallic device with a number pad, volume, and channel controls. Its refined design closely resembles modern TV remotes."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      After two decades of iteration, the Zenith Space Command’s form factor is a lot more in line with that of contemporary remotes. (Photo by <a href='https://www.flickr.com/photos/oskay/297852961/in/photostream/'>Windell Oskay</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/3-space-command.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>However, practical as they may have been, they were still considered a luxury, significantly increasing the prices of TV sets. As the 1970s were coming to a close, only around <a href="https://www.grunge.com/826329/the-history-of-the-tv-remote/">17% of United States households</a> had a remote controller for their TVs. Yet, things would change as the new decade rolled in.</p>

<h3 id="button-mania-of-the-1980s">Button Mania Of The 1980s</h3>

<p>The eighties brought with them the Apple Macintosh, MTV, and Star Wars. It was a time of cultural shifts and technological innovation. <a href="https://en.wikipedia.org/wiki/Videocassette_recorder">Videocassette recorders</a> (VCRs) and a multitude of other consumer electronics found their place in the living rooms of the world, along with TVs.</p>

<p>These new devices, while enriching our media experiences, also introduced a few new design problems. Where there was once a single remote, now there were <em>multiple</em> remotes, and things were slowly getting out of hand.</p>

<p>This marked the advent of <strong>universal remotes</strong>.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/4-universal-remote.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/4-universal-remote.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/4-universal-remote.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/4-universal-remote.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/4-universal-remote.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/4-universal-remote.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/4-universal-remote.jpg"
			
			sizes="100vw"
			alt="A Sony universal remote control with a metallic faceplate and a black body. It features a large number of small, uniform buttons, all the same shape and size, arranged in a dense grid. The buttons are labeled for various functions, including playback controls, numeric input, and device selection. This design allows the remote to control multiple devices, consolidating numerous functions into a single unit but means that the remote will have a significantly larger number of buttons than a standard TV remote."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A universal remote by Sony, programmable for up to three different devices. (Image source: <a href='https://www.ebay.com/itm/116233118261'>ebay.com</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/4-universal-remote.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Trying to hit many targets with one stone, the unwieldy universal remotes were humanity’s best solution for controlling a wider array of devices. And they did solve some of these problems, albeit in an awkward way. The complexity of universal remotes was a trade-off for versatility, allowing them to be programmed and used as a command center for controlling multiple devices. This meant transforming the relatively simple design of their predecessors into a beehive of buttons, prioritizing broader compatibility over elegance.</p>

<p>On the other hand, almost as a response to the inconvenience of the universal remote, a different type of controller was conceived in the 1980s &mdash; one with a very basic layout and set of buttons, and one that would leave its mark on both <em>how</em> we interact with the TV and how our remotes are laid out. A device that would, knowingly or not, give birth to a navigational pattern that is yet to be broken &mdash; the <a href="https://nintendo.fandom.com/wiki/Nintendo_Entertainment_System_controller">NES controller</a>.</p>

<h3 id="d-pad-dominance">D-pad Dominance</h3>

<p>Released in 1985, the <strong>Nintendo Entertainment System (NES)</strong> was an instant hit. Having sold sixty million units around the world, it left an undeniable mark on the gaming console industry.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/5-nes-control.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/5-nes-control.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/5-nes-control.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/5-nes-control.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/5-nes-control.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/5-nes-control.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/5-nes-control.jpg"
			
			sizes="100vw"
			alt="A classic Nintendo Entertainment System (NES) controller with a rectangular design, featuring a black directional pad (D-pad) on the left, two red circular action buttons labeled &#39;A&#39; and &#39;B&#39; on the right, and two small rectangular &#39;Select&#39; and &#39;Start&#39; buttons in the center."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Nintendo NES controller with its iconic D-pad and two action buttons. (Photo by <a href='https://commons.wikimedia.org/wiki/File:NES-Controller-Flat.jpg'>Evan Amos</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/5-nes-control.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The NES controller (which was not truly remote, as it ran a cable to the central unit) introduced the world to a deceptively simple control scheme. Consisting of six primary actions, it gave us the directional pad (the D-pad), along with two action buttons (<code>A</code> and <code>B</code>). Made in response to the bulky joystick, the cross-shaped cluster allowed for easy movement along two axes (<code>up</code>, <code>down</code>, <code>left</code>, and <code>right</code>).</p>

<p>Charmingly intuitive, this navigational pattern would produce countless hours of gaming fun, but more importantly, its elementary design would “seep over” into the <em>wider industry</em> &mdash; the D-pad, along with the two action buttons, would become the very basis on which future remotes would be constructed.</p>

<p>The world continued spinning madly on, and what was once a luxury became commonplace. By the end of the decade, TV remotes were integral to the standard television experience, and more than <a href="https://www.grunge.com/826329/the-history-of-the-tv-remote/">two-thirds of American TV owners</a> had some sort of remote.</p>

<p>The nineties rolled in with further technological advancements. TV sets became more robust, allowing for finer tuning of their settings. This meant creating interfaces through which such tasks could be accomplished, and along with their master sets, remotes got updated as well.</p>

<p>Gone were the bulky rectangular behemoths of the eighties. As ergonomics took precedence, they were replaced by comfortably contoured devices that better fit their users’ hands. Where their predecessors were conglomerations of dozens of uniform buttons, these contemporary remotes introduced buttons of different shapes and sizes, allowing for recognition through touch alone. Commands were clustered into sensible groups along the body of the remote, and within those button groups, a familiar shape started to emerge.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/6-magnavox-remote.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/6-magnavox-remote.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/6-magnavox-remote.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/6-magnavox-remote.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/6-magnavox-remote.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/6-magnavox-remote.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/6-magnavox-remote.jpg"
			
			sizes="100vw"
			alt="A Magnavox remote control from the 1990s, featuring a black plastic body with a slightly curved shape. At the top, a cluster of playback buttons is arranged in a circular layout, resembling a D-pad, and includes &#39;Play,&#39; &#39;Rewind,&#39; &#39;Fast Forward,&#39; and &#39;Stop.&#39; Below, there are additional buttons for number input, recording, and other TV functions."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A remote controller from the 1990s, with a prominent button cluster resembling a D-pad. (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/6-magnavox-remote.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Gradually, the D-pad found its spot on our TV remotes. As the evolution of these devices progressed, it became even more deeply embedded at the core of their interactivity.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/7-samsung-remote.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/7-samsung-remote.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/7-samsung-remote.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/7-samsung-remote.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/7-samsung-remote.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/7-samsung-remote.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/7-samsung-remote.jpg"
			
			sizes="100vw"
			alt="A Samsung remote from the 2000s, featuring a grey plastic body with a structured button layout. The central section features a prominent D-pad-like cluster with an &#39;Enter&#39; button at its center, surrounded by directional buttons for navigation. Above, there are numeric keys and function buttons, while the lower section includes additional controls and color-coded buttons for multimedia or menu navigation."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Remote controller from the 2000s with a clearly defined D-pad cluster. (Image source: <a href='https://www.emag.bg/distancionno-za-televizor-syvmestimo-s-samsung-a-sivo-aa59-00332/pd/DD9ZRHMBM/'>emag.bg</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/7-samsung-remote.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Set-top boxes and smart features emerged in the 2000s and 2010s, and TV technology continued to advance. Along the way, many bells and whistles were introduced. TVs got bigger, brighter, thinner, yet their essence remained unchanged.</p>

<p>In the years since their inception, remotes have been innovated upon, but every undertaking circles back to the <strong>core principles of the NES controller</strong>. Later endeavours never managed to replace the pattern, only to augment and reinforce it.</p>


<h2 id="the-evergreen-pattern">The Evergreen Pattern</h2>

<p>In 2013, <a href="https://www.lg.com/nz/about-lg/press-and-media/lg-announces-2013-lg-smart-tv-with-magic-remote/">LG introduced</a> their Magic remote <em>(“So magically simple, the kids will be showing you how to use it!”)</em>. This uniquely shaped device enabled motion controls on LG TV sets, allowing users to point and click much as they would with a computer mouse. Having a pointer on the screen allowed for much <strong>more flexibility and speed</strong> within the system, and the remote was well-received and praised as one of the best smart TV remotes.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/8-lg-magic-remote.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/8-lg-magic-remote.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/8-lg-magic-remote.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/8-lg-magic-remote.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/8-lg-magic-remote.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/8-lg-magic-remote.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/8-lg-magic-remote.jpg"
			
			sizes="100vw"
			alt="A black LG Magic Remote with a circular D-pad at the top, surrounded by navigation and function buttons. Unlike traditional rectangular remotes, this one has a sleek, tapered oval design that widens at the top and narrows towards the bottom, making it comfortable to hold. This remote supports motion controls, allowing users to point, gesture, and interact with the TV using an on-screen cursor."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The LG Magic remote. This device allowed for innovative ways of interacting with the TV, but kept the D-pad as one of its central elements. (Image source: <a href='https://www.bhphotovideo.com/c/product/965478-REG/lg_electronics_an_mr400_magic_remote_with_receiver.html'>bhphotovideo.com</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/8-lg-magic-remote.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Innovating on tradition, this device introduced new features and fresh perspectives to the world of TV. But if we look at the device itself, we’ll see that, despite its differences, it still retains the D-pad as a means of interaction. It may be argued that LG never set out to replace the directional pad; regardless of their intent, though, they only managed to <em>augment</em> it.</p>

<p>For an even better example, let’s examine Apple TV’s second-generation remote (the first-generation Siri remote). Ever the industry disruptor, Apple introduced a touchpad to the top half of the remote. The glass surface added speed and precision to the experience, enabling <strong>multi-touch gestures</strong>, <strong>swipe navigation</strong>, and <strong>quick scrolling</strong>. This quality-of-life upgrade was most noticeable when typing on the horizontal on-screen keyboards, as it allowed for smoother and quicker scrolling from A to Z, making for a more refined experience.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/9-apple-tv-gen-2.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/9-apple-tv-gen-2.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/9-apple-tv-gen-2.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/9-apple-tv-gen-2.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/9-apple-tv-gen-2.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/9-apple-tv-gen-2.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/9-apple-tv-gen-2.jpg"
			
			sizes="100vw"
			alt="Apple TV second-generation remote (first-generation Siri remote) with a slim, rectangular aluminum body. The top half features a touchpad that replaces a traditional D-pad while maintaining the same four-directional movement, allowing for swipe gestures and precise navigation. Below the touchpad are a few essential buttons, including &#39;Menu,&#39; &#39;TV/Home,&#39; a microphone button for Siri voice commands, and volume controls."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Apple TV second-generation remote control (first-generation Siri remote), known for removing the familiar shape of the D-pad and augmenting it with a touchpad. (Image source: <a href='https://support.apple.com/en-is/103233'>Apple</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/9-apple-tv-gen-2.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>While at first glance it may seem Apple removed the directional buttons, the fact is that the touchpad is simply a modernised take on the pattern, still abiding by the same four directions a classic D-pad does. You could say it’s a D-pad with an extra layer of gimmick.</p>

<p>Furthermore, the touchpad didn’t sit well with the user base, and the remote’s ergonomics were a bit iffy. So instead of pushing the boundaries even further with their third generation of remotes, Apple did a complete 180, <a href="https://support.apple.com/en-us/111844">reintroducing the classic D-pad</a> cluster while keeping the touch capabilities from the previous generation (the touch-enabled clickpad lets you select titles, swipe through playlists, and use a circular gesture on the outer ring to find just the scene you’re looking for).</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/10-apple-tv-gen-3.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/10-apple-tv-gen-3.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/10-apple-tv-gen-3.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/10-apple-tv-gen-3.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/10-apple-tv-gen-3.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/10-apple-tv-gen-3.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/10-apple-tv-gen-3.jpg"
			
			sizes="100vw"
			alt="The Apple TV third-generation remote (second-generation Siri remote) featuring a slim, rectangular aluminum body with a silver finish. At the top, a circular black D-pad with a touch-sensitive surface allows both directional button presses and swipe gestures. Below it, a set of black buttons includes back, TV/home, play/pause, mute, volume controls, and a power button."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The Apple TV third-generation remote (second-generation Siri remote). Keeping the past generation’s touch capabilities, it reintroduced the D-pad. (Image source: <a href='https://www.apple.com/shop/product/MW5G3AM/A/siri-remote'>Apple</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/10-apple-tv-gen-3.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Now, why can’t we figure out a better way to navigate TVs? Does that mean we shouldn’t try to innovate?</p>

<p>We can argue that motion controls and gestures are an obvious upgrade to interacting with a TV. And we’d be right… in principle. These added features are more complex and costly to produce, but more importantly, while the TV has been upgraded with bits and bobs over the years, it remains essentially a legacy system. And it’s not only that.</p>

<blockquote class="pull-quote">
  <p>
While touch controls are a staple of interaction these days, adding them without thorough consideration can reduce the usability of a remote.
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h3 id="pitfalls-of-touch-controls">Pitfalls Of Touch Controls</h3>

<p>Modern car dashboards are increasingly being dominated by touchscreens. While they may impress at auto shows, their <a href="https://uxdesign.cc/why-touchscreens-dont-work-in-cars-69b6ff3d4355">real-world usability is often compromised</a>.</p>

<p>Driving demands constant focus and the ability to adapt and respond to ever-changing conditions. Any interface that requires taking your eyes off the road for more than a moment increases the risk of accidents. That’s exactly where touch controls fall short. While they may be more practical (and likely cheaper) for manufacturers to implement, they’re often the opposite for the end user.</p>

<p>Unlike physical buttons, knobs, and levers, which offer tactile landmarks and feedback, touch interfaces lack the ability to be used by <em>feeling</em> alone. Even simple tasks like adjusting the volume of the radio or the climate controls often involve gestures and nested menus, all performed on a smooth glass surface that demands visual attention, especially when fine-tuning.</p>

<p>Fortunately, the upcoming <a href="https://www.theautopian.com/europe-is-requiring-physical-buttons-for-cars-to-get-top-safety-marks-and-we-should-too/">2026 Euro NCAP regulations</a> will encourage car manufacturers to <strong>reintroduce physical controls for core functions</strong>, reducing driver distraction and promoting safer interaction.</p>

<p>Similarly (though far less critically), sleek, buttonless TV remote controls may feel modern, but they introduce unnecessary abstraction to a familiar set of controls.</p>

<blockquote class="pull-quote">
  <p>
Physical buttons with distinct shapes and positioning allow users to navigate by memory and touch, even in the dark. That’s not outdated — it’s a deeper layer of usability that modern design should respect, not discard.
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>And this is precisely why Apple reworked the Apple TV third-generation remote into its current form: the touch area at the top disappeared, the D-pad again had clearly defined buttons, and at the same time, the D-pad could also be <em>extended</em> (not replaced) to accept some touch gestures.</p>

<h2 id="the-legacy-of-tv">The Legacy Of TV</h2>

<p>Let’s take a look at an old on-screen keyboard.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/11-zelda-keyboard.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/11-zelda-keyboard.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/11-zelda-keyboard.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/11-zelda-keyboard.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/11-zelda-keyboard.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/11-zelda-keyboard.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/11-zelda-keyboard.jpg"
			
			sizes="100vw"
			alt="Name registration screen from the 1986 game &#39;The Legend of Zelda&#39; featuring an early on-screen keyboard. The interface has a black background with white pixelated text and a blue selection box. Three small pixel-art Link characters are displayed, with one highlighted by a red cursor. Below them, an alphabetic and numeric character selection grid is presented, allowing players to input a name."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Name registration screen along with an early iteration of an on-screen keyboard from the game “<a href='https://en.wikipedia.org/wiki/The_Legend_of_Zelda_(video_game)'>The Legend of Zelda</a>” (1986). (Image source: <a href='https://www.gameuidatabase.com/gameData.php?id=1869&autoload=76508'>Game UI Database</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/11-zelda-keyboard.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>The Legend of Zelda, released in 1986, allowed players to register their names in-game. There are even older games with the same feature, but that’s beside the point. Using the NES controller, the players would move around the keyboard, entering their moniker character by character. Now let’s take a look at a modern iteration of the on-screen keyboard.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/12-google-tv-keyboard.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="450"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/12-google-tv-keyboard.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/12-google-tv-keyboard.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/12-google-tv-keyboard.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/12-google-tv-keyboard.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/12-google-tv-keyboard.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/12-google-tv-keyboard.jpg"
			
			sizes="100vw"
			alt="A modern on-screen dark-themed keyboard interface from Google&#39;s GBoard for Android TVs. The top of the screen is reserved for user details, with focus on the password field. Below the password field, a virtual keyboard with a QWERTY layout is visible, featuring rounded keys with white lettering on a dark background."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Google’s GBoard, a modern iteration of the on-screen keyboard for Android TVs. (Image by <a href='https://websiddu.com/work/g-board-for-tv'>Siddhartha Gudipati</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/12-google-tv-keyboard.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Notice the difference? Or, to phrase it better: do you notice the similarities? Throughout the years, we’ve introduced quality-of-life improvements, but the core is exactly the same as it was forty years ago. And it’s not a lack of innovation or bad remotes that keeps TV deeply ingrained in its beginnings. It’s simply the optimal way to interact given the circumstances.</p>

<h3 id="laying-it-all-out">Laying It All Out</h3>

<p>Just like phones and computers, TV layouts are based on a <strong>grid system</strong>. However, this system is a lot more apparent and rudimentary on TV. Taking a look at a standard TV interface, we’ll see that it consists mainly of horizontal and vertical lists, also known as <em>shelves</em>.</p>
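The shelf structure described here can be sketched as a simple data model. This is a hypothetical shape invented for illustration, not the actual API of any TV framework:

```typescript
// A hypothetical model of a TV home screen: a vertical stack of
// horizontal "shelves", each holding focusable cards. All names
// here are illustrative only.
interface Card {
  id: string;
  title: string;
}

interface Shelf {
  heading: string; // e.g. "Recommended", "Continue watching"
  cards: Card[];   // navigated left/right within the shelf
}

// The screen itself is just shelves, navigated up/down.
type Screen = Shelf[];

const home: Screen = [
  {
    heading: "Recommended",
    cards: [
      { id: "r1", title: "Card 1" },
      { id: "r2", title: "Card 2" },
    ],
  },
  { heading: "Continue watching", cards: [{ id: "c1", title: "Card 3" }] },
];
```

However rich the visuals on top, most TV interfaces reduce to some variation of this nested-list shape.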














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/13-youtube-tv-ui.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/13-youtube-tv-ui.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/13-youtube-tv-ui.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/13-youtube-tv-ui.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/13-youtube-tv-ui.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/13-youtube-tv-ui.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/13-youtube-tv-ui.jpg"
			
			sizes="100vw"
			alt="The interface of the YouTube TV app, displaying a dark-themed home screen with recommended videos. The layout is optimized for TV navigation, with large video thumbnails in two horizontal lists, and a sidebar menu on the left for browsing options."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The interface of the YouTube TV app. (Image source: <a href='https://play.google.com/store/apps/details?id=com.google.android.youtube.tv'>Google Play</a>) (<a href='https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/13-youtube-tv-ui.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<p>These grids may be populated with cards, characters of the alphabet, or essentially anything else, and upon closer examination, we’ll notice that our movement is restricted by two factors:</p>

<ol>
<li>There is no pointer for our eyes to follow, like there would be on a computer.</li>
<li>There is no way to interact directly with the display like we would with a touchscreen.</li>
</ol>

<p>For the purposes of navigating with a remote, a <strong>focus state</strong> is introduced. This means an element will always be highlighted for our eyes to anchor on, and it will be the starting point for any subsequent movement within the interface.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/14-focus-state-column-remote.gif"><img src="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/14-focus-state-column-remote.gif" width="1440" height="810" alt="Representation of TV user interface showcasing a focus state as the selection moves sequentially from item to item within a vertical column. A remote control is placed in the bottom-right corner with highlights of the button presses. The list moves sequentially in a vertical line from the first item to the fourth item, then back." /></a><figcaption>Simplified TV UI demonstrating a focus state along with sequential movement from item to item within a column.</figcaption></figure>

<p>Moreover, starting from the focused element, we can notice that the movement is restricted to one item at a time, almost like skipping stones. Navigating linearly in such a manner, if we wanted to move within a list of elements from element #1 to element #5, we’d have to press a directional button four times.</p>

<figure><a href="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/15-focus-state-row-remote.gif"><img src="https://files.smashing.media/articles/designing-tv-evergreen-pattern-shapes-tv-experiences/15-focus-state-row-remote.gif" width="1440" height="810" alt="Representation of TV user interface showcasing a focus state as the selection moves sequentially from item to item within a horizontal column. A remote control is placed in the bottom-right corner with highlights of the button presses. The list moves sequentially in a horizontal line from the first item to the fifth item, then back." /></a><figcaption>Simplified TV UI demonstrating a focus state along with sequential movement from item to item within a row.</figcaption></figure>

<p>To successfully navigate such an interface, we need the ability to move <code>left</code>, <code>right</code>, <code>up</code>, and <code>down</code> &mdash; we need a D-pad. And once we’ve landed on our desired item, there needs to be a way to select it or make a confirmation, and in the case of a mistake, we need to be able to go back. For the purposes of those two additional interactions, we’d need two more buttons, <code>OK</code> and <code>back</code>, or to make it more abstract, we’d need buttons <code>A</code> and <code>B</code>.</p>

<blockquote>So, to successfully navigate a TV interface, we need only a NES controller.<br /><br />Yes, we can enhance it with touchpads and motion gestures, augment it with voice controls, but <strong>this unshakeable foundation of interaction</strong> will remain as the very basic level of inherent complexity in a TV interface. Reducing it any further would significantly <strong>impair the experience</strong>, so all we’ve managed to do throughout the years is to only build upon it.</blockquote>

<p>The D-pad and buttons <code>A</code> and <code>B</code> have survived decades of innovation and technological shifts, and chances are they’ll survive many more. By understanding and respecting this principle, you can design intuitive, system-agnostic experiences and easily translate them across platforms. Knowing you can’t go simpler than these six buttons, you’ll easily build from the ground up and attach any additional framework-bound functionality to the time-tested core.</p>
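As a thought experiment, this six-button foundation can be put into code. The sketch below is an illustrative model, with names invented for the example rather than taken from any platform’s API: the four directions move focus exactly one item at a time across the grid of shelves, while <code>OK</code> and <code>back</code> are left to the application shell.

```typescript
// Illustrative sketch: the complete input vocabulary of a TV interface,
// reduced to the six NES-era buttons. All names are invented for this
// example, not any platform's real API.
type TvButton = "up" | "down" | "left" | "right" | "ok" | "back";

interface Focus {
  row: number; // which shelf
  col: number; // which item within that shelf
}

// `shelfLengths[i]` is the number of items in shelf i.
function moveFocus(focus: Focus, button: TvButton, shelfLengths: number[]): Focus {
  const { row, col } = focus;
  switch (button) {
    case "left":
      return { row, col: Math.max(0, col - 1) };
    case "right":
      return { row, col: Math.min(shelfLengths[row] - 1, col + 1) };
    case "up": {
      const r = Math.max(0, row - 1);
      // Clamp the column so focus never lands outside a shorter shelf.
      return { row: r, col: Math.min(col, shelfLengths[r] - 1) };
    }
    case "down": {
      const r = Math.min(shelfLengths.length - 1, row + 1);
      return { row: r, col: Math.min(col, shelfLengths[r] - 1) };
    }
    case "ok":   // confirm: handled by whatever the focused item does
    case "back": // cancel: handled by the app's navigation stack
      return focus;
  }
}
```

Note how moving from item #1 to item #5 within a row really does take four <code>right</code> presses: each call advances focus by exactly one item, like skipping stones.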

<p>And once you get a grip on these paradigms, you’ll get into mapping and re-mapping buttons depending on context, and understand just how far you can go when designing for TV. You’ll be able to invent new experiences, conduct experiments, and challenge the patterns. But that is a topic for a different article.</p>


<h2 id="closing-thoughts">Closing Thoughts</h2>

<p>While designing almost exclusively for TV over the past few years, I also often found myself educating stakeholders on the very principles outlined in this article. Trying to address their concerns about different remotes working slightly differently, I found respite in the simplicity of the NES controller and how it got the point across in an understandable way. Eventually, I expanded my knowledge by looking into the developmental history of the remote and was surprised to find that my analogy had backing in history. This is a fascinating niche, and there’s a lot more to share on the topic. I’m glad we started!</p>

<p>It’s vital to understand the fundamental “ins” and “outs” of any venture before getting practical, and TV is no different. Now that you understand the basics, go, dig in, and break some ground.</p>

<p>Having covered the <strong>underlying interaction patterns of TV experiences</strong> in detail, it’s time to get practical.</p>

<p>In <a href="https://www.smashingmagazine.com/2025/09/designing-tv-principles-patterns-practical-guidance/"><strong>Part 2</strong></a>, we’ll explore the building blocks of the 10-foot experience and how to best utilize them in your designs. We’ll review the TV design fundamentals (the screen, layout, typography, color, and focus/focus styles), and the common TV UI components (menus, “shelves,” spotlights, search, and more). I will also show you how to start thinking beyond the basics and to work with &mdash; and around &mdash; the constraints which we abide by when designing for TV. Stay tuned!</p>

<h3 id="further-reading">Further Reading</h3>

<ul>
<li>“<a href="https://www.edenspiekermann.com/insights/the-10-foot-experience/">The 10 Foot Experience</a>,” by Robert Stulle (Edenspiekermann)<br />
<em>Every user interface should offer effortless navigation and control. For the 10-foot experience, this is twice as important; with only up, down, left, right, OK and back as your input vocabulary, things had better be crystal clear. You want to sit back and enjoy without having to look at your remote — your thumb should fly over the buttons to navigate, select, and activate.</em></li>
<li>“<a href="https://learn.microsoft.com/en-us/windows/win32/dxtecharts/introduction-to-the-10-foot-experience-for-windows-game-developers">Introduction to the 10-Foot Experience for Windows Game Developers</a>” (Microsoft Learn)<br />
<em>A growing number of people are using their personal computers in a completely new way. When you think of typical interaction with a Windows-based computer, you probably envision sitting at a desk with a monitor, and using a mouse and keyboard (or perhaps a joystick device); this is referred to as the 2-foot experience. But there&rsquo;s another trend which you&rsquo;ll probably start hearing more about: the 10-foot experience, which describes using your computer as an entertainment device with output to a TV. This article introduces the 10-foot experience and explores the list of things that you should consider first about this new interaction pattern, even if you aren&rsquo;t expecting your game to be played this way.</em></li>
<li>“<a href="https://en.wikipedia.org/wiki/10-foot_user_interface">10-foot user interface</a>” (Wikipedia)<br />
<em>In computing, a 10-foot user interface, or 3-meter UI, is a graphical user interface designed for televisions (TV). Compared to desktop computer and smartphone user interfaces, it uses text and other interface elements that are much larger in order to accommodate a typical television viewing distance of 10 feet (3.0 meters); in reality, this distance varies greatly between households, and additionally, the limitations of a television&rsquo;s remote control necessitate extra user experience considerations to minimize user effort.</em></li>
<li>“<a href="https://www.thoughtco.com/history-of-the-television-remote-control-1992384">The Television Remote Control: A Brief History</a>,” by Mary Bellis (ThoughtCo)<br />
<em>The first TV remote, the Lazy Bone, was made in 1950 and used a cable. In 1955, the Flash-matic was the first wireless remote, but it had issues with sunlight. Zenith&rsquo;s Space Command in 1956 used ultrasound and became the popular choice for over 25 years.</em></li>
<li>“<a href="https://www.grunge.com/826329/the-history-of-the-tv-remote/">The History of The TV Remote</a>,” by Remy Millisky (Grunge)<br />
<em>The first person to create and patent the remote control was none other than Nikola Tesla, inventor of the Tesla coil and numerous electronic systems. He patented the idea in 1893 to drive boats remotely, far before televisions were invented. Since then, remotes have come a long way, especially for the television, changing from small boxes with long wires to the wireless universal remotes that many people have today. How has the remote evolved over time?</em></li>
<li>“<a href="https://nintendo.fandom.com/wiki/Nintendo_Entertainment_System_controller">Nintendo Entertainment System controller</a>” (Nintendo Wiki)<br />
<em>The Nintendo Entertainment System controller is the main controller for the NES. While previous systems had used joysticks, the NES controller provided a directional pad (the D-pad was introduced in the Game &amp; Watch version of Donkey Kong).</em></li>
<li>“<a href="https://uxdesign.cc/why-touchscreens-dont-work-in-cars-69b6ff3d4355">Why Touchscreens In Cars Don’t Work</a>,” by Jacky Li (published in June 2018)<br />
<em>Observing the behaviour of 21 drivers has made me realize what’s wrong with automotive UX. [&hellip;] While I was excited to learn more about the Tesla Model X, it slowly became apparent to me that the driver’s eyes were more glued to the screen than the road. Something about interacting with a touchscreen when driving made me curious to know: just how distracting are they?</em></li>
<li>“<a href="https://www.theautopian.com/europe-is-requiring-physical-buttons-for-cars-to-get-top-safety-marks-and-we-should-too/">Europe Is Requiring Physical Buttons For Cars To Get Top Safety Marks</a>,” by Jason Torchinsky (published in March 2024)<br />
<em>The overuse of touchscreens is an industry-wide problem, with almost every vehicle-maker moving key controls onto central touchscreens, obliging drivers to take their eyes off the road and raising the risk of distraction crashes. New Euro NCAP tests due in 2026 will encourage manufacturers to use separate, physical controls for basic functions in an intuitive manner, limiting eyes-off-road time and therefore promoting safer driving.</em></li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(mb, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Nikita Samutin</author><title>Beyond The Hype: What AI Can Really Do For Product Design</title><link>https://www.smashingmagazine.com/2025/08/beyond-hype-what-ai-can-do-product-design/</link><pubDate>Mon, 18 Aug 2025 13:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/08/beyond-hype-what-ai-can-do-product-design/</guid><description>AI tools are improving fast, but it’s still not clear how they fit into a real product design workflow. Nikita Samutin walks through four core stages &amp;mdash; from analytics and ideation to prototyping and visual design &amp;mdash; to show where AI fits and where it doesn’t, illustrated with real-world examples.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/08/beyond-hype-what-ai-can-do-product-design/" />
              <title>Beyond The Hype: What AI Can Really Do For Product Design</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Beyond The Hype: What AI Can Really Do For Product Design</h1>
                  
                    
                    <address>Nikita Samutin</address>
                  
                  <time datetime="2025-08-18T13:00:00&#43;00:00" class="op-published">2025-08-18T13:00:00+00:00</time>
                  <time datetime="2025-08-18T13:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>These days, it’s easy to find curated lists of AI tools for designers, galleries of generated illustrations, and countless prompt libraries. What’s much harder to find is a clear view of how AI is <em>actually</em> integrated into the everyday workflow of a product designer &mdash; not for experimentation, but for real, meaningful outcomes.</p>

<p>I’ve gone through that journey myself: testing AI across every major stage of the design process, from ideation and prototyping to visual design and user research. Along the way, I’ve built a simple, repeatable workflow that significantly boosts my productivity.</p>

<p>In this article, I’ll share what’s already working and break down some of the most common objections I’ve encountered &mdash; many of which I’ve faced personally.</p>

<h2 id="stage-1-idea-generation-without-the-clichés">Stage 1: Idea Generation Without The Clichés</h2>

<p><strong>Pushback</strong>: <em>“Whenever I ask AI to suggest ideas, I just get a list of clichés. It can’t produce the kind of creative thinking expected from a product designer.”</em></p>

<p>That’s a fair point. AI doesn’t know the specifics of your product, the full context of your task, or many other critical nuances. The most obvious fix is to “feed it” all the documentation you have. But that’s a common mistake as it often leads to even worse results: the context gets flooded with irrelevant information, and the AI’s answers become vague and unfocused.</p>

<p>Current-gen models can technically process thousands of words, but <strong>the longer the input, the higher the risk of missing something important</strong>, especially content buried in the middle. This is known as the “<a href="https://community.openai.com/t/validating-middle-of-context-in-gpt-4-128k/498255">lost in the middle</a>” problem.</p>

<p>To get meaningful results, AI doesn’t just need more information &mdash; it needs the <em>right</em> information, delivered in the right way. That’s where the RAG (Retrieval-Augmented Generation) approach comes in.</p>

<h3 id="how-rag-works">How RAG Works</h3>

<p>Think of RAG as a smart assistant working with your personal library of documents. You upload your files, and the assistant reads each one, creating a short summary &mdash; a set of bookmarks (semantic tags) that capture the key topics, terms, scenarios, and concepts. These summaries are stored in a kind of “card catalog,” called a vector database.</p>

<p>When you ask a question, the assistant doesn’t reread every document from cover to cover. Instead, it compares your query to the bookmarks, retrieves only the most relevant excerpts (chunks), and sends those to the language model to generate a final answer.</p>
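The assistant-and-card-catalog picture above maps directly onto a few lines of code. Here is a toy sketch in Python, where simple word-count vectors stand in for a real embedding model and vector database, and the chunks are invented examples:

```python
# A toy sketch of the retrieval step in RAG. A real setup would use an
# embedding model and a vector database; here, bag-of-words vectors and
# cosine similarity stand in for both. The chunk texts are invented.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a word-count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# The "card catalog": each chunk stored alongside its vector.
chunks = [
    "Group gift contributions let several users fund one shared goal.",
    "Personal savings goals track an individual user's progress.",
    "Push notifications are sent when a goal reaches 80 percent.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

def retrieve(query: str, k: int = 2) -> list[str]:
    # Compare the query to every stored vector, return the top-k chunks.
    q = embed(query)
    ranked = sorted(index, key=lambda pair: cosine(q, pair[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

# Only the top-k chunks are sent to the language model, not every document.
print(retrieve("How do shared group goals work?"))
```

The point of the sketch is the shape of the pipeline, not the math: the model never sees your whole library, only the few chunks whose vectors sit closest to the query.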


<h3 id="how-is-this-different-from-just-dumping-a-doc-into-the-chat">How Is This Different From Just Dumping A Doc Into The Chat?</h3>

<p>Let’s break it down:</p>

<p><strong>Typical chat interaction</strong></p>

<p>It’s like asking your assistant to read a 100-page book from start to finish every time you have a question. Technically, all the information is “in front of them,” but it’s easy to miss something, especially if it’s in the middle. This is exactly what the <em>“lost in the middle”</em> issue refers to.</p>

<p><strong>RAG approach</strong></p>

<p>You ask your smart assistant a question, and it retrieves only the relevant pages (chunks) from different documents. It’s faster and more accurate, but it introduces a few new risks:</p>

<ul>
<li><strong>Ambiguous question</strong><br />
You ask, “How can we make the project safer?” and the assistant brings you documents about cybersecurity, not finance.</li>
<li><strong>Mixed chunks</strong><br />
A single chunk might contain a mix of marketing, design, and engineering notes. That blurs the meaning so the assistant can’t tell what the core topic is.</li>
<li><strong>Semantic gap</strong><br />
You ask, <em>“How can we speed up the app?”</em> but the document says, <em>“Optimize API response time.”</em> For a human, that’s obviously related. For a machine, not always.</li>
</ul>
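The &ldquo;mixed chunks&rdquo; risk in particular can be reduced at indexing time: split each document along its own headings so that no chunk mixes marketing, design, and engineering notes. A minimal sketch, assuming markdown-style headings (the document content is invented):

```python
# Split a markdown-style document into topic-clean chunks, one per heading,
# so a single chunk never blends several unrelated topics.
# The sample document below is invented for illustration.
import re

def chunk_by_heading(doc: str) -> list[dict]:
    chunks = []
    current = {"heading": "Intro", "body": []}
    for line in doc.splitlines():
        match = re.match(r"#+\s+(.*)", line)
        if match:
            # A new heading starts a new chunk; flush the previous one.
            if current["body"]:
                chunks.append(current)
            current = {"heading": match.group(1), "body": []}
        elif line.strip():
            current["body"].append(line.strip())
    if current["body"]:
        chunks.append(current)
    return chunks

doc = """# Target Audience
Young professionals saving for shared purchases.

# Research Notes
Interviews showed confusion between group and personal goals."""

for c in chunk_by_heading(doc):
    print(c["heading"], "->", " ".join(c["body"]))
```

Each resulting chunk carries one topic and its own heading, which makes the vector search far less likely to hand the model a blurred mixture.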














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="383"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png"
			
			sizes="100vw"
			alt="Diagram showing how RAG works: a user prompt triggers semantic search through a knowledge base. Relevant chunks are sent to a language model, which generates an answer based on retrieved content."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Instead of using the model’s memory, it searches your documents and builds a response based on what it finds. (<a href='https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/1-rag-approach.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>These aren’t reasons to avoid RAG or AI altogether. Most of them can be avoided with better preparation of your knowledge base and more precise prompts. So, where do you start?</p>

<h3 id="start-with-three-short-focused-documents">Start With Three Short, Focused Documents</h3>

<p>These three short documents will give your AI assistant just enough context to be genuinely helpful:</p>

<ul>
<li><strong>Product Overview &amp; Scenarios</strong><br />
A brief summary of what your product does and the core user scenarios.</li>
<li><strong>Target Audience</strong><br />
Your main user segments and their key needs or goals.</li>
<li><strong>Research &amp; Experiments</strong><br />
Key insights from interviews, surveys, user testing, or product analytics.</li>
</ul>

<p>Each document should focus on a single topic and ideally stay within 300&ndash;500 words. This makes it easier to search and helps ensure that each retrieved chunk is semantically clean and highly relevant.</p>
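A quick way to hold yourself to that budget is to check word counts before indexing. A small sketch, with the file names and thresholds as illustrative assumptions:

```python
# Check that each knowledge-base document stays within the roughly
# 300-500 word budget that keeps retrieved chunks semantically clean.
# Documents are plain strings here; in practice they would be read from files.
def check_budget(docs: dict[str, str], low: int = 300, high: int = 500) -> dict[str, str]:
    verdicts = {}
    for name, text in docs.items():
        n = len(text.split())
        if n > high:
            verdicts[name] = f"too long ({n} words): split by topic"
        elif n < low:
            verdicts[name] = f"short ({n} words): fine if the topic is covered"
        else:
            verdicts[name] = f"ok ({n} words)"
    return verdicts

# Invented documents, padded to realistic lengths.
docs = {
    "product_overview.txt": "word " * 420,
    "target_audience.txt": "word " * 120,
    "research_findings.txt": "word " * 780,
}
for name, verdict in check_budget(docs).items():
    print(name, "->", verdict)
```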

<h3 id="language-matters">Language Matters</h3>

<p>In practice, RAG works best when both the query and the knowledge base are in English. I ran a small experiment to test this assumption, trying a few different combinations:</p>

<ul>
<li><strong>English prompt + English documents</strong>: Consistently accurate and relevant results.</li>
<li><strong>Non-English prompt + English documents</strong>: Quality dropped sharply. The AI struggled to match the query with the right content.</li>
<li><strong>Non-English prompt + non-English documents</strong>: The weakest performance. Even though large language models technically support multiple languages, their internal semantic maps are mostly trained in English. Vector search in other languages tends to be far less reliable.</li>
</ul>

<p><strong>Takeaway</strong>: If you want your AI assistant to deliver precise, meaningful responses, do your RAG work entirely in English, both the data and the queries &mdash; a challenge also highlighted in <a href="https://arxiv.org/abs/2408.12345">this 2024 study on multilingual retrieval</a>. This advice applies specifically to RAG setups; for regular chat interactions, you’re free to use other languages.</p>

<h3 id="from-outsider-to-teammate-giving-ai-the-context-it-needs">From Outsider To Teammate: Giving AI The Context It Needs</h3>

<p>Once your AI assistant has proper context, it stops acting like an outsider and starts behaving more like someone who truly understands your product. With well-structured input, it can help you spot blind spots in your thinking, challenge assumptions, and strengthen your ideas &mdash; the way a mid-level or senior designer would.</p>

<p>Here’s an example of a prompt that works well for me:</p>

<blockquote>Your task is to perform a comparative analysis of two features: "Group gift contributions" (described in group_goals.txt) and "Personal savings goals" (described in personal_goals.txt).<br /><br />The goal is to identify potential conflicts in logic, architecture, and user scenarios and suggest visual and conceptual ways to clearly separate these two features in the UI so users can easily understand the difference during actual use.<br /><br />Please include:<ul><li>Possible overlaps in user goals, actions, or scenarios;</li><li>Potential confusion if both features are launched at the same time;</li><li>Any architectural or business-level conflicts (e.g. roles, notifications, access rights, financial logic);</li><li>Suggestions for visual and conceptual separation: naming, color coding, separate sections, or other UI/UX techniques;</li><li>Onboarding screens or explanatory elements that might help users understand both features.</li></ul>If helpful, include a comparison table with key parameters like purpose, initiator, audience, contribution method, timing, access rights, and so on.</blockquote>

<h3 id="ai-needs-context-not-just-prompts">AI Needs Context, Not Just Prompts</h3>

<blockquote>If you want AI to go beyond surface-level suggestions and become a real design partner, it needs the right context. Not just <strong>more</strong> information, but <strong>better</strong>, more structured information.</blockquote>

<p>Building a usable knowledge base isn’t difficult. And you don’t need a full-blown RAG system to get started. Many of these principles work even in a regular chat: <strong>well-organized content</strong> and a <strong>clear question</strong> can dramatically improve how helpful and relevant the AI’s responses are. That’s your first step in turning AI from a novelty into a practical tool in your product design workflow.</p>

<h2 id="stage-2-prototyping-and-visual-experiments">Stage 2: Prototyping And Visual Experiments</h2>

<p><strong>Pushback</strong>: <em>“AI only generates obvious solutions and can’t even build a proper user flow. It’s faster to do it manually.”</em></p>

<p>That’s a fair concern. AI still performs poorly when it comes to building complete, usable screen flows. But for individual elements, especially when exploring new interaction patterns or visual ideas, it can be surprisingly effective.</p>

<p>For example, I needed to prototype a gamified element for a limited-time promotion. The idea was to give users a lottery ticket they could “flip” to reveal a prize. I couldn’t recreate the 3D animation I had in mind in Figma, either manually or using any available plugins. So I described the idea to Claude 4 in Figma Make, and within a few minutes, without writing a single line of code, I had exactly what I needed.</p>

<p>At the prototyping stage, AI can be a strong creative partner in two areas:</p>

<ul>
<li><strong>UI element ideation</strong><br />
It can generate dozens of interactive patterns, including ones you might not think of yourself.</li>
<li><strong>Micro-animation generation</strong><br />
It can quickly produce polished animations that make a concept feel real, which is great for stakeholder presentations or as a handoff reference for engineers.</li>
</ul>

<p>AI can also be applied to multi-screen prototypes, but it’s not as simple as dropping in a set of mockups and getting a fully usable flow. The bigger and more complex the project, the more fine-tuning and manual fixes are required. Where AI already works brilliantly is in focused tasks &mdash; individual screens, elements, or animations &mdash; where it can kick off the thinking process and save hours of trial and error.</p>

<p><iframe src="https://repair-neon-43490219.figma.site/" width="100%" height="600" frameborder="0" allowfullscreen></iframe><br/><em>A quick UI prototype of a gamified promo banner created with Claude 4 in Figma Make. No code or plugins needed.</em><br /></p>

<p>Here’s another valuable way to use AI in design &mdash; as a <strong>stress-testing tool</strong>. Back in 2023, Google Research introduced <a href="https://arxiv.org/abs/2310.15435?utm_source=chatgpt.com">PromptInfuser</a>, an internal Figma plugin that allowed designers to attach prompts directly to UI elements and simulate semi-functional interactions within real mockups. Their goal wasn’t to generate new UI, but to check how well AI could operate <em>inside</em> existing layouts &mdash; placing content into specific containers, handling edge-case inputs, and exposing logic gaps early.</p>

<p>The results were striking: designers using PromptInfuser were up to 40% more effective at catching UI issues and aligning the interface with real-world input &mdash; a clear gain in design accuracy, not just speed.</p>

<p>That closely reflects my experience with Claude 4 and Figma Make: when AI operates within a real interface structure, rather than starting from a blank canvas, it becomes a much more reliable partner. It helps test your ideas, not just generate them.</p>


<h2 id="stage-3-finalizing-the-interface-and-visual-style">Stage 3: Finalizing The Interface And Visual Style</h2>

<p><strong>Pushback</strong>: <em>“AI can’t match our visual style. It’s easier to just do it by hand.”</em></p>

<p>This is one of the most common frustrations when using AI in design. Even if you upload your color palette, fonts, and components, the results often don’t feel like they belong in your product. They tend to be either overly decorative or overly simplified.</p>

<p>And this is a real limitation. In my experience, today’s models still struggle to reliably apply a design system, even if you provide a component structure or JSON files with your styles. I tried several approaches:</p>

<ul>
<li><strong>Direct integration with a component library.</strong><br />
I used Figma Make (powered by Claude) and connected our library. This was the least effective method: although the AI attempted to use components, the layouts were often broken, and the visuals were overly conservative. <a href="https://forum.figma.com/ask-the-community-7/figma-make-library-support-42423?utm_source=chatgpt.com">Other designers</a> have run into similar issues, noting that library support in Figma Make is still limited and often unstable.</li>
<li><strong>Uploading styles as JSON.</strong><br />
Instead of a full component library, I tried uploading only the exported styles &mdash; colors, fonts &mdash; in a JSON format. The results improved: layouts looked more modern, but the AI still made mistakes in how styles were applied.</li>
<li><strong>Two-step approach: structure first, style second.</strong><br />
What worked best was separating the process. First, I asked the AI to generate a layout and composition without any styling. Once I had a solid structure, I followed up with a request to apply the correct styles from the same JSON file. This produced the most usable result &mdash; though still far from pixel-perfect.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="535"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png"
			
			sizes="100vw"
			alt="Three mobile UI screens showing how different design system setups affect visual output: with component library, with JSON styles, and without any styles — all generated by Claude Sonnet 4 from the same prompt."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      From left to right: prompt with attached library in Figma, prompt with styles in JSON, and raw prompt. All generated using Claude Sonnet 4 with the same input. (<a href='https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/3-ui-screens-claude-sonnet.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>So yes, AI still can’t help you finalize your UI. It doesn’t replace hand-crafted design work. But it’s very useful in other ways:</p>

<ul>
<li>Quickly creating a <strong>visual concept</strong> for discussion.</li>
<li>Generating <strong>“what if” alternatives</strong> to existing mockups.</li>
<li>Exploring how your interface might look in a different style or direction.</li>
<li>Acting as a <strong>second pair of eyes</strong> by giving feedback, pointing out inconsistencies or overlooked issues you might miss when tired or too deep in the work.</li>
</ul>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aAI%20won%e2%80%99t%20save%20you%20five%20hours%20of%20high-fidelity%20design%20time,%20since%20you%e2%80%99ll%20probably%20spend%20that%20long%20fixing%20its%20output.%20But%20as%20a%20visual%20sparring%20partner,%20it%e2%80%99s%20already%20strong.%20If%20you%20treat%20it%20like%20a%20source%20of%20alternatives%20and%20fresh%20perspectives,%20it%20becomes%20a%20valuable%20creative%20collaborator.%0a&url=https://smashingmagazine.com%2f2025%2f08%2fbeyond-hype-what-ai-can-do-product-design%2f">
      
AI won’t save you five hours of high-fidelity design time, since you’ll probably spend that long fixing its output. But as a visual sparring partner, it’s already strong. If you treat it like a source of alternatives and fresh perspectives, it becomes a valuable creative collaborator.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h2 id="stage-4-product-feedback-and-analytics-ai-as-a-thinking-exosuit">Stage 4: Product Feedback And Analytics: AI As A Thinking Exosuit</h2>

<p>Product designers have come a long way. We used to create interfaces in Photoshop based on predefined specs. Then we delved deeper into UX by mapping user flows, conducting interviews, and understanding user behavior. Now, with AI, we gain access to yet another level: data analysis, which used to be the exclusive domain of product managers and analysts.</p>

<p>As <a href="https://www.smashingmagazine.com/2025/03/how-to-argue-against-ai-first-research/">Vitaly Friedman rightly pointed out in one of his columns</a>, trying to replace real UX interviews with AI can lead to false conclusions as models tend to generate an average experience, not a real one. <strong>The strength of AI isn’t in inventing data but in processing it at scale.</strong></p>

<p>Let me give a real example. We launched an exit survey for users who were leaving our service. Within a week, we collected over 30,000 responses across seven languages.</p>

<p>Simply counting the percentages for each of the five predefined reasons wasn’t enough. I wanted to know:</p>

<ul>
<li>Are there specific times of day when users churn more?</li>
<li>Do the reasons differ by region?</li>
<li>Is there a correlation between user exits and system load?</li>
</ul>

<p>The real challenge was&hellip; figuring out what cuts and angles were even worth exploring. The entire technical process, from analysis to visualizations, was done “for me” by Gemini, working inside Google Sheets. This task took me about two hours in total. Without AI, not only would it have taken much longer, but I probably wouldn’t have been able to reach that level of insight on my own at all.</p>
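For a sense of what those cuts look like, the &ldquo;churn by hour&rdquo; question becomes a one-line aggregation once the responses are tabular. A standard-library sketch with invented survey rows (the field names are assumptions, not the actual export format):

```python
# A minimal sketch of the "churn by hour" cut from an exit survey,
# using only the standard library. The rows are invented; in practice
# they would come from an exported responses file.
from collections import Counter
from datetime import datetime

rows = [
    {"submitted_at": "2025-06-02T08:14:00", "reason": "too expensive"},
    {"submitted_at": "2025-06-02T08:47:00", "reason": "missing feature"},
    {"submitted_at": "2025-06-02T23:05:00", "reason": "too expensive"},
    {"submitted_at": "2025-06-03T23:41:00", "reason": "too expensive"},
]

def churn_by_hour(rows: list[dict]) -> Counter:
    # Count exits per hour of day to see when users churn most.
    return Counter(datetime.fromisoformat(r["submitted_at"]).hour for r in rows)

print(churn_by_hour(rows).most_common(2))
```

The same pattern extends to region or system load: swap the key function and re-count. The hard part, as noted above, is deciding which cuts are worth running at all.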














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="379"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png"
			
			sizes="100vw"
			alt="Bar charts showing cancellation reasons by hour and by currency, generated with Gemini in Google Sheets."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A few examples of output I’ve got from Gemini in Google Sheets. (<a href='https://files.smashing.media/articles/beyond-hype-what-ai-can-do-product-design/4-gemini-google-sheets.png'>Large preview</a>)
    </figcaption>
  
</figure>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aAI%20enables%20near%20real-time%20work%20with%20large%20data%20sets.%20But%20most%20importantly,%20it%20frees%20up%20your%20time%20and%20energy%20for%20what%e2%80%99s%20truly%20valuable:%20asking%20the%20right%20questions.%0a&url=https://smashingmagazine.com%2f2025%2f08%2fbeyond-hype-what-ai-can-do-product-design%2f">
      
AI enables near real-time work with large data sets. But most importantly, it frees up your time and energy for what’s truly valuable: asking the right questions.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p><strong>A few practical notes</strong>: Working with large data sets is still challenging for models without strong reasoning capabilities. In my experiments, I used Gemini embedded in Google Sheets and cross-checked the results using ChatGPT o3. Other models, including the standalone Gemini 2.5 Pro, often produced incorrect outputs or simply refused to complete the task.</p>


<h2 id="ai-is-not-an-autopilot-but-a-co-pilot">AI Is Not An Autopilot But A Co-Pilot</h2>

<p>AI in design is only as good as the questions you ask it. It doesn’t do the work for you. It doesn’t replace your thinking. But it helps you move faster, explore more options, validate ideas, and focus on the hard parts instead of burning time on repetitive ones. Sometimes it’s still faster to design things by hand. Sometimes it makes more sense to delegate to a junior designer.</p>

<p>But increasingly, AI is becoming the one who suggests, sharpens, and accelerates. Don’t wait to build the perfect AI workflow. Start small. And that might be the first real step in turning AI from a curiosity into a trusted tool in your product design process.</p>

<h2 id="let-s-summarize">Let’s Summarize</h2>

<ul>
<li>If you just paste a full doc into chat, the model often misses important points, especially things buried in the middle. That’s <strong>the “lost in the middle” problem</strong>.</li>
<li><strong>The RAG approach</strong> helps by pulling only the most relevant pieces from your documents. So responses are faster, more accurate, and grounded in real context.</li>
<li><strong>Clear, focused prompts</strong> work better. Narrow the scope, define the output, and use familiar terms to help the model stay on track.</li>
<li><strong>A well-structured knowledge base</strong> makes a big difference. Organizing your content into short, topic-specific docs helps reduce noise and keep answers sharp.</li>
<li><strong>Use English for both your prompts and your documents.</strong> Even multilingual models are most reliable when working in English, especially for retrieval.</li>
<li>Most importantly: <strong>treat AI as a creative partner</strong>. It won’t replace your skills, but it can spark ideas, catch issues, and speed up the tedious parts.</li>
</ul>

<h3 id="further-reading">Further Reading</h3>

<ul>
<li>“<a href="https://standardbeagle.com/ai-assisted-design-workflows/#what-ai-actually-does-in-ux-workflows">AI-assisted Design Workflows: How UX Teams Move Faster Without Sacrificing Quality</a>”, Cindy Brummer<br />
<em>This piece is a perfect prequel to my article. It explains how to start integrating AI into your design process, how to structure your workflow, and which tasks AI can reasonably take on &mdash; before you dive into RAG or idea generation.</em></li>
<li>“<a href="https://www.figma.com/blog/8-ways-to-build-with-figma-make/">8 essential tips for using Figma Make</a>”, Alexia Danton<br />
<em>While this article focuses on Figma Make, the recommendations are broadly applicable. It offers practical advice that will make your work with AI smoother, especially if you’re experimenting with visual tools and structured prompting.</em></li>
<li>“<a href="https://blogs.nvidia.com/blog/what-is-retrieval-augmented-generation/">What Is Retrieval-Augmented Generation aka RAG</a>”, Rick Merritt<br />
<em>If you want to go deeper into how RAG actually works, this is a great starting point. It breaks down key concepts like vector search and retrieval in plain terms and explains why these methods often outperform long prompts alone.</em></li>
</ul>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Rodolpho Henrique</author><title>The Psychology Of Color In UX And Digital Products</title><link>https://www.smashingmagazine.com/2025/08/psychology-color-ux-design-digital-products/</link><pubDate>Fri, 15 Aug 2025 15:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/08/psychology-color-ux-design-digital-products/</guid><description>Rodolpho Henrique guides you through the essential aspects of color in digital design and user experience, from the practical steps of creating effective and scalable color palettes to critical accessibility considerations.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/08/psychology-color-ux-design-digital-products/" />
              <title>The Psychology Of Color In UX And Digital Products</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>The Psychology Of Color In UX And Digital Products</h1>
                  
                    
                    <address>Rodolpho Henrique</address>
                  
                  <time datetime="2025-08-15T15:00:00&#43;00:00" class="op-published">2025-08-15T15:00:00+00:00</time>
                  <time datetime="2025-08-15T15:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>Color plays a pivotal role in crafting compelling user experiences and successful digital products. It’s far more than just aesthetics; color strategically guides users, establishes brand identity, and evokes specific emotions.</p>

<p>Beyond functionality, color is also a powerful tool for <strong>brand recognition</strong> and <strong>emotional connection</strong>. Consistent use of brand colors across a digital product reinforces identity and builds trust. Different hues carry cultural and psychological associations, allowing designers to subtly influence user perception and create the desired mood. A thoughtful and deliberate approach to color in UX design elevates the user experience, strengthens brand presence, and contributes significantly to the overall success and impact of digital products. In this article, we will talk about the importance of color and <em>why</em> they are important for creating emotional connection and delivering consistent and accessible digital products.</p>

<p>Well-chosen color palettes enhance <strong>usability</strong> by creating visual hierarchies, highlighting interactive elements, and providing crucial feedback on screens. For instance, a bright color might draw attention to a call-to-action button, while consistent color coding can help users navigate complex interfaces intuitively. Color also contributes significantly to <strong>accessibility</strong>, ensuring that users with visual impairments can still effectively interact with digital products. By carefully considering contrast ratios and providing alternative visual cues, designers can create inclusive experiences that cater to a wider audience.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aThe%20colors%20we%20choose%20are%20the%20silent%20language%20of%20our%20digital%20products,%20and%20speaking%20it%20fluently%20is%20essential%20for%20success.%0a&url=https://smashingmagazine.com%2f2025%2f08%2fpsychology-color-ux-design-digital-products%2f">
      
The colors we choose are the silent language of our digital products, and speaking it fluently is essential for success.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h2 id="communicating-brand-identity-through-color-in-the-digital-space">Communicating Brand Identity Through Color In The Digital Space</h2>

<p>A thoughtfully curated and vibrant color palette becomes a critical differentiator, allowing a brand to stand out amidst the digital noise and cultivate stronger connections with the audience.</p>

<p>Far beyond mere decoration, color acts as a visual shorthand, instantly conveying a brand’s personality, its underlying values, and its unique essence. According to the <a href="https://www.ama.org/2025/04/08/more-vividmore-effective-how-saturated-colors-impact-consumer-behavior-and-waste/">American Marketing Association</a>, vibrant colors, in particular, possess an inherent magnetism, drawing the eye and etching themselves into memory within the online environment. They infuse the brand with energy and dynamism, projecting approachability and memorability in a way that more muted tones often cannot.</p>


<h2 id="consistency-the-core-of-great-design">Consistency: The Core Of Great Design</h2>

<p>Consistency is important because it fosters trust and familiarity, allowing users to quickly identify and connect with the brand in the online landscape. The strategic deployment of vibrant colors is especially crucial for brands seeking to establish themselves and flourish within the digital ecosystem. In the absence of physical storefronts or tangible in-person interactions, visual cues become paramount in shaping user perception and building brand recognition. A carefully selected primary color, supported by a complementary and equally energetic secondary palette, can become synonymous with a brand’s digital presence. A consistent application of the right colors across different digital touchpoints &mdash; from the logo and website design to the user interface of an app and engaging social media campaigns &mdash; creates a cohesive and instantly recognizable visual language.</p>

<p>Several sources and professionals agree that the psychology behind colors plays a significant role in shaping brand perception. The publication <a href="https://insightspsychology.org/psychology-of-color-emotional-impact/">Insights Psychology</a>, for instance, explains how colors can create emotional and behavioral responses. Vibrant colors often evoke strong emotions and associations. A bold, energetic red, for example, might communicate passion and excitement, while a bright, optimistic yellow could convey innovation and cheerfulness. By consciously aligning their color choices with their brand values and target audience preferences, digitally-native brands can create a powerful emotional resonance.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-color-ux-design-digital-products/1-colors-psychology.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="800"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/1-colors-psychology.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/1-colors-psychology.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/1-colors-psychology.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/1-colors-psychology.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/1-colors-psychology.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/1-colors-psychology.png"
			
			sizes="100vw"
			alt="Colors with corresponding emotions and associations"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      The psychology behind the colors plays a significant role in shaping brand perception. (<a href='https://files.smashing.media/articles/psychology-color-ux-design-digital-products/1-colors-psychology.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="beyond-aesthetics-how-color-psychologically-impacts-user-behavior-in-digital">Beyond Aesthetics: How Color Psychologically Impacts User Behavior In Digital</h2>

<p>As designers working with digital products, we’ve learned that color is far more than a superficial layer of visual appeal. It’s a potent, <strong>often subconscious</strong>, force that shapes how users interact with and feel about the digital products we build.</p>

<blockquote>We’re not just painting pixels; we’re conducting a psychological symphony, carefully selecting each hue to evoke specific emotions, guide behavior, and ultimately forge a deeper connection with the user.</blockquote>

<p>The initial allure of a color palette might be purely aesthetic, but its true power lies in its <strong>ability to bypass conscious thought and tap directly into our emotional core</strong>. Think about the subtle unease that might creep in when encountering a predominantly desaturated interface for a platform promising dynamic content, or the sense of calm that washes over you when a learning application utilizes soft, analogous colors. These are not arbitrary responses; they’re deeply rooted in our evolutionary history and cultural conditioning.</p>

<p>To understand how colors psychologically impact user behavior in digital products, we first need to understand how colors are defined. In digital design, colors are precisely defined using the <strong>HSB model</strong>, which stands for <strong>Hue</strong>, <strong>Saturation</strong>, and <strong>Brightness</strong>. This model provides a more intuitive way for designers to think about and manipulate color compared to other systems like RGB (Red, Green, Blue). Here is a quick breakdown of each component:</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-color-ux-design-digital-products/2-hsb-colors.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="384"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/2-hsb-colors.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/2-hsb-colors.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/2-hsb-colors.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/2-hsb-colors.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/2-hsb-colors.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/2-hsb-colors.png"
			
			sizes="100vw"
			alt="HSB model"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      HSB describes colors based on how humans perceive them, rather than the physical components of light like RGB. (<a href='https://files.smashing.media/articles/psychology-color-ux-design-digital-products/2-hsb-colors.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="hue">Hue</h3>

<p>This is the pure color itself, the essence that we typically name, such as red, blue, green, or yellow. On a color wheel, hue is represented as an angle ranging from 0 to 360 degrees. For example, 0 is red, 120 is green, and 240 is blue. Think of it as the specific wavelength of light that our eyes perceive as a particular color. In UX, selecting the base hues is often tied to brand identity and the overall feeling you want to convey.</p>

<h3 id="saturation">Saturation</h3>

<p>Saturation refers to the intensity or purity of the hue. It describes how vivid or dull the color appears. A fully saturated color is rich and vibrant, while a color with low saturation appears muted, grayish, or desaturated. Saturation is typically expressed as a percentage, from 0% (completely desaturated, appearing as a shade of gray) to 100% (fully saturated, the purest form of the hue).</p>

<p>In UX, saturation levels are crucial for creating <strong>visual hierarchy</strong> and drawing attention to key elements. Highly saturated colors often indicate interactive elements or important information, while lower saturation can be used for backgrounds or less critical content.</p>

<h3 id="brightness">Brightness</h3>

<p>Brightness, sometimes referred to as value or lightness, indicates how light or dark a color appears. It’s the amount of white or black mixed into the hue. Brightness is also usually represented as a percentage, ranging from 0% (completely black, regardless of the hue or saturation) to 100% (fully bright). At 100% brightness and 0% saturation, you get white. In UX, adjusting brightness is essential for <strong>creating contrast</strong> and <strong>ensuring readability</strong>. Sufficient brightness contrast between text and background is a fundamental accessibility requirement. Furthermore, variations in brightness within a color palette can create visual depth and subtle distinctions between UI elements.</p>

<p>By understanding and manipulating these three color dimensions, digital designers have precise control over their color choices. This allows for the creation of harmonious and effective color palettes that not only align with brand guidelines but also strategically influence user behavior.</p>
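As a quick illustration, the HSB-to-RGB relationship described above can be sketched in Python with the standard library&rsquo;s <code>colorsys</code> module (which names the model HSV); hue is given in degrees, and the sample values are arbitrary:

```python
import colorsys

def hsb_to_rgb(hue_deg, saturation_pct, brightness_pct):
    """Convert HSB (hue in degrees, saturation and brightness in percent)
    into an (R, G, B) tuple of 0-255 integers."""
    r, g, b = colorsys.hsv_to_rgb(
        hue_deg / 360, saturation_pct / 100, brightness_pct / 100
    )
    return tuple(round(channel * 255) for channel in (r, g, b))

# Hue 0 degrees at full saturation and brightness is pure red:
print(hsb_to_rgb(0, 100, 100))   # (255, 0, 0)
# At 0% saturation the hue no longer matters; only brightness does:
print(hsb_to_rgb(240, 0, 50))    # (128, 128, 128)
```

Note how dropping saturation to 0% yields a neutral gray whose lightness is set entirely by the brightness value, exactly as described above.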

<p>Just as in the physical world, colors in digital also carry symbolic meanings and trigger subconscious associations. Understanding these color associations is essential for UX designers aiming to craft experiences that not only look appealing but also resonate emotionally and guide user behavior effectively.</p>

<p>As the <a href="https://blog.emb.global/color-psychology-in-branding/">EMB Global</a> states, the way we perceive and interpret color is not universal, yet broad patterns of association exist. For instance, the color <strong>blue</strong> often evokes feelings of trust, stability, and calmness. This association stems from the natural world &mdash; the vastness of the sky and the tranquility of deep waters. In the digital space, this makes blue a popular choice for financial institutions, corporate platforms, and interfaces aiming to project reliability and security. However, the specific shade and context matter immensely. A bright, electric blue can feel energetic and modern, while a muted and darker blue might convey a more serious and authoritative tone.</p>

<p>Kendra Cherry, a psychosocial and rehabilitation specialist and author of the book <em>Everything Psychology</em>, <a href="https://www.verywellmind.com/color-psychology-2795824">explains</a> very well how colors evoke certain responses in us. For example, the color <strong>green</strong> is intrinsically linked to nature, often bringing about feelings of growth, health, freshness, and tranquility. It can also symbolize prosperity in some cultures. In digital design, green is frequently used for health and wellness applications, environmental initiatives, and platforms emphasizing sustainability. A vibrant lime green can feel energetic and youthful, while a deep forest green can evoke a sense of groundedness and organic quality.</p>

<p><strong>Yellow</strong>, the color of sunshine, is generally associated with optimism, happiness, energy, and warmth. It’s attention-grabbing and can create a sense of playfulness. In digital interfaces, yellow is often used for highlighting important information, calls to action (though sparingly, as too much can be overwhelming), or for brands wanting to project a cheerful and approachable image.</p>

<p><strong>Red</strong>, a color with strong physiological effects, typically evokes excitement, passion, urgency, and sometimes anger or danger. It commands attention and can stimulate action. Digitally, red is often used for alerts, error messages, sales promotions, or for brands wanting to project a bold and energetic identity. Its intensity requires careful consideration, as overuse can lead to user fatigue or anxiety.</p>

<p><strong>Orange</strong> blends the energy of red with the optimism of yellow, often conveying enthusiasm, creativity, and friendliness. It can feel less aggressive than red but still commands attention. In digital design, orange is frequently used for calls to action, highlighting sales or special offers, and for brands aiming to appear approachable and innovative.</p>

<p><strong>Purple</strong> has historically been associated with royalty and luxury. It can evoke feelings of creativity, wisdom, and mystery. In digital contexts, purple is often used for brands aiming for a sophisticated or unique feel, particularly in areas like luxury goods, beauty, or spiritual and creative platforms.</p>

<p><strong>Black</strong> often signifies sophistication, power, elegance, and sometimes mystery. In digital design, black is frequently used for minimalist interfaces, luxury brands, and for creating strong contrast with lighter elements. The feeling it evokes heavily depends on the surrounding colors and overall design aesthetic.</p>

<p><strong>White</strong> is generally associated with purity, cleanliness, simplicity, and neutrality. It provides a sense of spaciousness and allows other colors to stand out. In digital design, white space is a crucial element, and white is often used as a primary background color to create a clean and uncluttered feel.</p>

<p><strong>Gray</strong> is often seen as neutral, practical, and sometimes somber or conservative. In digital interfaces, various shades of gray are essential for typography, borders, dividers, and creating visual hierarchy without being overly distracting.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-color-ux-design-digital-products/3-color-associations.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="361"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/3-color-associations.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/3-color-associations.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/3-color-associations.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/3-color-associations.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/3-color-associations.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/3-color-associations.png"
			
			sizes="100vw"
			alt="Guide to some common color associations"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Guide to some common color associations. (<a href='https://files.smashing.media/articles/psychology-color-ux-design-digital-products/3-color-associations.png'>Large preview</a>)
    </figcaption>
  
</figure>


<h2 id="evoking-emotions-in-digital-interfaces">Evoking Emotions In Digital Interfaces</h2>

<p>Imagine an elegant furniture application. The designers might choose a primary palette of soft, desaturated blues and greens, accented with gentle earth tones. The muted blues could subtly induce a feeling of calmness and tranquility, aligning with the serene, homelike atmosphere the app aims to convey. The soft greens might evoke a sense of nature and well-being, further reinforcing the theme of peace and mental clarity. The earthy browns could ground the visual experience, creating a feeling of stability and connection to the natural world.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-color-ux-design-digital-products/4-calm-vs-vibrant-color-palette.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="717"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/4-calm-vs-vibrant-color-palette.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/4-calm-vs-vibrant-color-palette.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/4-calm-vs-vibrant-color-palette.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/4-calm-vs-vibrant-color-palette.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/4-calm-vs-vibrant-color-palette.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/4-calm-vs-vibrant-color-palette.png"
			
			sizes="100vw"
			alt="Example of a calm and vibrant color palette"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Example of a calm and vibrant color palette. (<a href='https://files.smashing.media/articles/psychology-color-ux-design-digital-products/4-calm-vs-vibrant-color-palette.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Now, consider a platform for high-risk investment enthusiasts. The color palette might be dominated by high-energy oranges and reds, contrasted with stark blacks and sharp whites. The vibrant oranges could evoke feelings of excitement and adventure, while the bold red might amplify the sense of adrenaline and intensity. The black and white could provide a sense of dynamism and modernity, reflecting the fast-paced nature of the activity.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aBy%20consciously%20understanding%20and%20applying%20these%20color%20associations,%20digital%20designers%20can%20move%20beyond%20purely%20aesthetic%20choices%20and%20craft%20experiences%20that%20resonate%20deeply%20with%20users%20on%20an%20emotional%20level,%20leading%20to%20more%20engaging,%20intuitive,%20and%20successful%20digital%20products.%0a&url=https://smashingmagazine.com%2f2025%2f08%2fpsychology-color-ux-design-digital-products%2f">
      
By consciously understanding and applying these color associations, digital designers can move beyond purely aesthetic choices and craft experiences that resonate deeply with users on an emotional level, leading to more engaging, intuitive, and successful digital products.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h3 id="color-as-a-usability-tool">Color As A Usability Tool</h3>

<p>Choosing the right colors isn’t about adhering to fleeting trends; it’s about ensuring that our mobile applications and websites are usable by the widest possible audience, including individuals with visual impairments. Improper color choices can create significant barriers, rendering content illegible, interactive elements indistinguishable, and ultimately excluding a substantial portion of potential users.</p>

<blockquote>Prioritizing color with accessibility in mind is not just a matter of ethical design; it’s a fundamental aspect of creating inclusive and user-friendly digital experiences that benefit everyone.</blockquote>

<p>For individuals with low vision, sufficient color contrast between text and background is paramount for readability. Imagine trying to decipher light gray text on a white background &mdash; a common design trend that severely hinders those with even mild visual impairments. Adhering to <a href="https://www.w3.org/WAI/standards-guidelines/wcag/">Web Content Accessibility Guidelines</a> (WCAG) contrast ratios ensures that text remains legible and understandable.</p>

<p>Furthermore, color blindness, affecting a significant percentage of the population, necessitates the use of redundant visual cues. Relying solely on color to convey information, such as indicating errors in red without an accompanying text label, excludes colorblind users. By pairing color with text, icons, or patterns, we ensure that critical information is conveyed through multiple sensory channels, making it accessible to all. Thoughtful color selection, therefore, is not an optional add-on but an integral component of designing digital products that are truly usable and equitable.</p>














<figure class="
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-color-ux-design-digital-products/5-high-vs-low-contrast-texts.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="717"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/5-high-vs-low-contrast-texts.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/5-high-vs-low-contrast-texts.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/5-high-vs-low-contrast-texts.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/5-high-vs-low-contrast-texts.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/5-high-vs-low-contrast-texts.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/5-high-vs-low-contrast-texts.png"
			
			sizes="100vw"
			alt="Example of high and low contrast on texts"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Example of high and low contrast on texts. (<a href='https://files.smashing.media/articles/psychology-color-ux-design-digital-products/5-high-vs-low-contrast-texts.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="choosing-your-palette">Choosing Your Palette</h2>

<p>As designers, we need a strategic approach to choosing color palettes, considering various factors to build a scalable and impactful color system. Here’s a breakdown of the steps and considerations involved:</p>

<h3 id="1-deep-dive-into-brand-identity-and-main-goals">1. Deep Dive Into Brand Identity And Main Goals</h3>

<p>The journey begins with a thorough understanding of the brand itself. What are its core values? What personality does it project? Is it playful, sophisticated, innovative? Analyze existing brand guidelines (if any), target audience demographics and psychographics, and the overall goals of the digital product. The color palette should be a visual extension of this identity, reinforcing brand recognition and resonating with the intended users. For instance, a financial app aiming for trustworthiness might lean towards blues and greens, while a creative platform could explore more vibrant and unconventional hues.</p>

<h3 id="2-understand-color-psychology-and-cultural-associations">2. Understand Color Psychology And Cultural Associations</h3>

<p>As discussed previously, colors carry inherent psychological and cultural baggage. While these associations are not absolute, they provide a valuable framework for initial exploration. Consider the emotions you want to evoke and research how your target audience might perceive different colors, keeping in mind cultural nuances that can significantly alter interpretations. This step helps you make informed decisions that align with the desired user experience and brand perception.</p>

<h3 id="3-defining-the-core-colors">3. Defining The Core Colors</h3>

<p>Start by identifying the primary color &mdash; the dominant hue that represents your brand’s essence. This will likely be derived from the brand logo or existing visual identity. Next, establish a secondary color or two that complement the primary color and provide visual interest and hierarchy. These secondary colors should work harmoniously with the primary, offering flexibility for different UI elements and interactions.</p>

<h3 id="4-build-a-functional-color-system">4. Build A Functional Color System</h3>

<p>A consistent and scalable color palette goes beyond just a few base colors. It involves creating a system of variations for practical application within the digital interface. This typically includes tints and shades, accent colors, and neutral colors.</p>
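One possible way to derive such variations programmatically is to mix the base color toward white (tints) or toward black (shades). The Python sketch below is a minimal linear-interpolation approach, and <code>brand_blue</code> is a hypothetical brand primary:

```python
def mix(color, target, amount):
    """Linearly interpolate each RGB channel toward a target color.
    amount is 0.0 (unchanged) to 1.0 (fully the target color)."""
    return tuple(round(c + (t - c) * amount) for c, t in zip(color, target))

def tints(base, steps=4):
    """Progressively lighter variants: mix the base color with white."""
    return [mix(base, (255, 255, 255), i / (steps + 1)) for i in range(1, steps + 1)]

def shades(base, steps=4):
    """Progressively darker variants: mix the base color with black."""
    return [mix(base, (0, 0, 0), i / (steps + 1)) for i in range(1, steps + 1)]

brand_blue = (0, 102, 204)   # hypothetical brand primary
print(tints(brand_blue))     # four steps toward white
print(shades(brand_blue))    # four steps toward black
```

In practice, design tools often step through a perceptual color space rather than raw RGB, but the idea of systematically generating a ramp from each core color is the same.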

<h3 id="5-do-not-forget-about-usability-and-accessibility">5. Do Not Forget About Usability And Accessibility</h3>

<p>Ensure sufficient color contrast between text and background, as well as between interactive elements and their surroundings, to meet WCAG guidelines. Tools are readily available to check color contrast ratios.</p>
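Such a contrast check is also straightforward to implement yourself. The sketch below follows the WCAG 2.x formulas for relative luminance and contrast ratio; the gray-on-white pair is an illustrative example of a combination that fails the 4.5:1 AA threshold for body text:

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance of an sRGB color given as 0-255 integers."""
    def linearize(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(foreground, background):
    """Contrast ratio between two colors, ranging from 1:1 to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(foreground), relative_luminance(background)),
        reverse=True,
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white reaches the maximum possible ratio:
print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 2))      # 21.0
# Light gray on white falls well below the 4.5:1 AA threshold:
print(round(contrast_ratio((200, 200, 200), (255, 255, 255)), 2))
```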

<p>Test your palette using color blindness simulators to see how it will be perceived by individuals with different types of color vision deficiencies. This helps identify potential issues where information might be lost due to color alone.</p>
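A crude but instructive companion to a full simulator is to collapse each color to a luminance-matched gray, approximating the simplest case (achromatopsia, or total color blindness). The Rec. 601 luma weights and the two hypothetical status colors below are illustrative:

```python
def to_grayscale(rgb):
    """Approximate achromatopsia by mapping a color to the gray with the
    same perceived luminance (Rec. 601 luma weights)."""
    y = round(0.299 * rgb[0] + 0.587 * rgb[1] + 0.114 * rgb[2])
    return (y, y, y)

# A saturated "error" red and a mid "success" green that look clearly
# distinct in full color can collapse to the very same gray:
print(to_grayscale((200, 30, 40)))   # (82, 82, 82)
print(to_grayscale((0, 140, 0)))     # (82, 82, 82)
```

This is exactly why color alone cannot carry meaning: two states that read as opposites to most users can be indistinguishable without an accompanying label, icon, or pattern.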

<p>Visual hierarchy is also important to guide the user’s eye and establish a clear visual story. Important elements should be visually distinct.</p>

<h3 id="6-testing-and-iteration">6. Testing And Iteration</h3>

<p>Once you have a preliminary color palette, it’s crucial to test it within the context of your digital product. Create mockups and prototypes to see how the colors work together in the actual interface. Gather feedback from stakeholders and, ideally, conduct user testing to identify any usability or aesthetic issues. Be prepared to iterate and refine your palette based on these insights.</p>

<p>A well-defined color palette for the digital medium should be:</p>

<ul>
<li>Consistent,</li>
<li>Scalable,</li>
<li>Accessible,</li>
<li>Brand-aligned,</li>
<li>Emotionally resonant, and</li>
<li>Functionally effective.</li>
</ul>

<p>By following these steps and keeping these considerations in mind, designers can craft color palettes that are not just visually appealing but also strategically powerful tools for creating effective and accessible digital experiences.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/psychology-color-ux-design-digital-products/6-consistent-color-palette.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="392"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/6-consistent-color-palette.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/6-consistent-color-palette.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/6-consistent-color-palette.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/6-consistent-color-palette.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/6-consistent-color-palette.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/psychology-color-ux-design-digital-products/6-consistent-color-palette.png"
			
			sizes="100vw"
			alt="Example of a consistent color palette with base colors and accent colors"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Example of a consistent color palette with base colors and accent colors. (<a href='https://files.smashing.media/articles/psychology-color-ux-design-digital-products/6-consistent-color-palette.png'>Large preview</a>)
    </figcaption>
  
</figure>


<h2 id="color-consistency-building-trust-and-recognition-through-a-harmonized-digital-presence">Color Consistency: Building Trust And Recognition Through A Harmonized Digital Presence</h2>

<p>Consistency plays an important role in the whole color ecosystem. By maintaining a unified color scheme for interactive elements, navigation cues, and informational displays, designers create a seamless and predictable user journey, building trust through visual stability.</p>

<p>Color consistency directly contributes to brand recognition in the increasingly crowded digital landscape. Just as a logo or typeface becomes instantly identifiable, a consistent color palette acts as a powerful visual signature. When users repeatedly encounter the same set of colors associated with a particular brand, it strengthens their recall and fosters a stronger brand association. This visual consistency extends beyond the core interface to marketing materials, social media presence, and all digital touchpoints, creating a cohesive and memorable brand experience. By strategically and consistently applying a well-defined color palette, digital products can cultivate stronger brand recognition, build user trust, and enhance user loyalty.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Stephanie Campbell</author><title>From Line To Layout: How Past Experiences Shape Your Design Career</title><link>https://www.smashingmagazine.com/2025/08/from-line-to-layout-past-experiences-shape-design-career/</link><pubDate>Wed, 13 Aug 2025 11:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/08/from-line-to-layout-past-experiences-shape-design-career/</guid><description>Your past shapes who you are as a designer, no matter where your career began or how unexpected your career path may have been. Stephanie Campbell shows how those lessons can sharpen your instincts, strengthen collaboration, and help you become a better designer today.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/08/from-line-to-layout-past-experiences-shape-design-career/" />
              <title>From Line To Layout: How Past Experiences Shape Your Design Career</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>From Line To Layout: How Past Experiences Shape Your Design Career</h1>
                  
                    
                    <address>Stephanie Campbell</address>
                  
                  <time datetime="2025-08-13T11:00:00&#43;00:00" class="op-published">2025-08-13T11:00:00+00:00</time>
                  <time datetime="2025-08-13T11:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>Design career origin stories often sound clean and linear: a degree in Fine Arts, a lucky internship, or a first job that launches a steady upward path. But what about those whose paths were <em>not</em> so straight? The ones who came from service, retail, construction, or <a href="https://jasoncyr.medium.com/how-being-a-firefighter-made-me-a-better-designer-cb6345001d62">even firefighting</a> &mdash; the messy, winding paths that didn’t begin right out of design school &mdash; who learned service instincts long before learning design tools?</p>

<p>I earned my Associate in Science way later than planned, after 15 years in fine dining, which I once dismissed as a detour delaying my “real” career. But in hindsight, it was anything but. Those years built skills and instincts I still rely on daily &mdash; in meetings, design reviews, and messy mid-project pivots.</p>

<h2 id="your-past-is-your-advantage">Your Past Is Your Advantage</h2>

<p>I still have the restaurant dream.</p>

<p>Whenever I’m overwhelmed or deep in a deadline, it comes back: I’m the only one running the restaurant floor. The grill is on fire. There’s no clean glassware. Everyone needs their check, their drink, and their table turned. I wake up sweating, and I ask myself, <em>“Why am I still having restaurant nightmares 15 years into a design career?”</em></p>

<p>Because those jobs wired themselves into how I think and work.</p>

<blockquote>Those years weren’t just a job; they were high-stakes training in adaptability, anticipation, and grace under pressure. They built muscle memory: ways of thinking, reacting, and solving problems that still appear daily in my design work. They taught me to adapt, connect with people, and move with urgency and grace.</blockquote>

<p>But those same instincts rooted in nightmares can trip you up if you’re unaware. Speed can override thoughtfulness. Constant anticipation can lead to over-complication. The pressure to polish can push you to over-deliver too soon. <strong>Embracing your past also means examining it</strong> &mdash; recognizing when old habits serve you and when they don’t.</p>

<p>With reflection, those experiences can become your greatest advantage.</p>

<h2 id="lessons-from-the-line">Lessons From The Line</h2>

<p>These aren’t abstract comparisons. They’re instincts built through repetition and real-world pressure, and they show up daily in my design process.</p>

<p>Here are five moments from restaurant life that shaped how I think, design, and collaborate today.</p>

<h2 id="1-reading-the-room">1. Reading The Room</h2>

<p>Reading a customer’s mood begins as soon as they sit down. Through years of trial and error, I learned to read subtle cues: a seating delay signaling frustration, or menus set aside suggesting guests wanted to linger over cocktails. Adapting my approach based on these signals became instinctual, built from countless moments of observation.</p>

<h3 id="what-i-learned">What I Learned</h3>

<p>The subtleties of reading a client aren’t so different in product design. Contexts differ, but the cues remain similar: project specifics, facial expressions, tone of voice, lack of engagement, or even the “word salad” of client feedback. With time, these signals become easier to spot, and you learn to ask better questions, challenge assumptions, or offer alternate approaches before misalignment grows. Whether a client is energized and all-in or hesitant and constrained, reading those cues early can make all the difference.</p>

<p>Those instincts &mdash; like constant anticipation and early intervention &mdash; served me well in fine dining, but can hinder the design process if I’m not in tune with how I’m reacting. Jumping in too early can lead to over-complicating the design process, solving problems that haven’t been voiced (yet), or stepping on others’ roles. I’ve had to learn to pause, check in with the team, and trust the process to unfold more collaboratively.</p>

<h3 id="how-i-apply-this-today">How I Apply This Today</h3>

<ul>
<li><strong>Guide direction with focused options.</strong><br />
Early on, share 2&ndash;3 meaningful variations, like style tiles or small component explorations, to shape the conversation and avoid overwhelm.</li>
<li><strong>Flag misalignment fast.</strong><br />
If something feels off, raise it early and loop in the right people.</li>
<li><strong>Be intentional about workshop and deliverable formats.</strong><br />
Structure or space? Depends on what helps the client open up and share.</li>
<li><strong>Pause before jumping in.</strong><br />
A sticky note on my screen (“Pause”) helps me slow down and check assumptions.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/1-workspace.jpeg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="600"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/1-workspace.jpeg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/1-workspace.jpeg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/1-workspace.jpeg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/1-workspace.jpeg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/1-workspace.jpeg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/1-workspace.jpeg"
			
			sizes="100vw"
			alt="A close-up of an iMac screen with a yellow sticky note that says “PAUSE”."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A gentle reminder from my own workspace, a sticky note I keep on my screen to remind me to pause before reacting. (<a href='https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/1-workspace.jpeg'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="2-speed-vs-intentionality">2. Speed Vs. Intentionality</h2>

<p>In fine dining, multitasking wasn’t just helpful; it was survival. Every night demanded precision timing, orchestrating every meal step, from the first drink poured to the final dessert plated. The soufflé, for example, was a constant test. It takes precisely 45 minutes &mdash; no more, no less. If the guests lingered over appetizers or finished their entrées too early, that soufflé risked collapse.</p>

<p>But fine dining taught me how to handle that volatility. I learned to manage timing proactively, mastering small strategies: an amuse-bouche to buy the kitchen precious minutes, a complimentary glass of champagne to slow a too-quickly paced meal. Multitasking meant constantly adjusting in real-time, keeping a thousand tiny details aligned even when, behind the scenes, chaos loomed.</p>

<h3 id="what-i-learned-1">What I Learned</h3>

<p>Multitasking is a given in product design, just in a different form. While the pressure is less immediate, it is more layered as designers often juggle multiple projects, overlapping timelines, differing stakeholder expectations, and evolving product needs simultaneously. That restaurant instinct to keep numerous plates spinning at the same time? It’s how I handle shifting priorities, constant Slack pings, regular Figma updates, and unexpected client feedback &mdash; without losing sight of the big picture.</p>

<p>The hustle and pace of fine dining hardwired me to associate speed with success. But in design, speed can sometimes undermine depth. Jumping too quickly into a solution might mean missing the real problem or polishing the wrong idea. I’ve learned that <strong>staying in motion isn’t always the goal</strong>. Unlike a fast-paced service window, product design invites <strong>experimentation</strong> and <strong>course correction</strong>. I’ve had to quiet the internal timer and lean into design with a slower, more intentional nature.</p>

<h3 id="how-i-apply-this-today-1">How I Apply This Today</h3>

<ul>
<li><strong>Make space for inspiration.</strong><br />
Set aside time for untasked exploration outside the norm &mdash; magazines, bookstores, architecture, or gallery visits &mdash; before jumping into design.</li>
<li><strong>Build in pause points.</strong><br />
Plan breaks between design rounds and schedule reviews after a weekend gap to return with fresh eyes.</li>
<li><strong>Stay open to starting over.</strong><br />
Let go of work that isn’t working, even full comps. Starting fresh often leads to better ideas.</li>
</ul>

<h2 id="3-presentation-matters">3. Presentation Matters</h2>

<p>Presentation isn’t just a finishing touch in fine dining &mdash; it’s everything. It’s the mint leaf delicately placed atop a dessert, the raspberry glace cascading across the perfectly off-centered espresso cake.</p>

<p>The presentation engages every sense: the smell of rare imported truffles on your truffle fries, or the meticulous choreography of four servers placing entrées in front of diners simultaneously, creating a collective “wow” moment. An excellent presentation shapes diners’ emotional connection with their meal &mdash; that experience directly impacts how generously they spend, and ultimately, your success.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/2-flourless-cake.jpg">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="534"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/2-flourless-cake.jpg 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/2-flourless-cake.jpg 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/2-flourless-cake.jpg 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/2-flourless-cake.jpg 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/2-flourless-cake.jpg 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/2-flourless-cake.jpg"
			
			sizes="100vw"
			alt="A slice of flourless chocolate cake plated slightly off center, garnished with cocoa powder, chocolate drizzle, and a sprig of mint."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      Decadent flourless chocolate cake, artfully plated just off center. A dusting of cocoa, chocolate drizzle, and a sprig of mint elevate the presentation. (Photo by Yura White via iStock) (<a href='https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/2-flourless-cake.jpg'>Large preview</a>)
    </figcaption>
  
</figure>

<h3 id="what-i-learned-2">What I Learned</h3>

<p>A product design presentation, from the initial concept to the handoff, carries that same power. Introducing a new homepage design can feel mechanical or magical, depending entirely on how you frame and deliver it. Just like careful plating shapes a diner’s experience, <strong>clear framing</strong> and <strong>confident storytelling</strong> shape how design is received.</p>

<p>Beyond the initial introduction, explain the <em>why</em> behind your choices. Connect patterns to the organic elements of the brand’s identity and highlight how users will intuitively engage with each section. Presentation isn’t just about aesthetics; it helps clients connect with the work, understand its value, and get excited to share it.</p>

<p>The pressure to get everything right the first time, to present a pixel-perfect comp that “wows” immediately, is intense.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aSometimes,%20an%20excellent%20presentation%20isn%e2%80%99t%20about%20perfection%20%e2%80%94%20it%e2%80%99s%20about%20pacing,%20storytelling,%20and%20allowing%20the%20audience%20to%20see%20themselves%20in%20the%20work.%0a&url=https://smashingmagazine.com%2f2025%2f08%2ffrom-line-to-layout-past-experiences-shape-design-career%2f">
      
Sometimes, an excellent presentation isn’t about perfection — it’s about pacing, storytelling, and allowing the audience to see themselves in the work.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<p>I’ve had to let go of the idea that polish is everything and instead focus on the why, describing it with clarity, confidence, and connection.</p>

<h3 id="how-i-apply-this-today-2">How I Apply This Today</h3>

<ul>
<li><strong>Frame the story first.</strong><br />
Lead with the “why” behind the work before showing the “what”. It sets the tone and invites clients into the design.</li>
<li><strong>Keep presentations polished.</strong><br />
Share fewer, more intentional concepts to reduce distractions and keep focus.</li>
<li><strong>Skip the jargon.</strong><br />
Clients aren’t designers. Use clear, relatable terms. Say “section” instead of “component,” or “repeatable element” instead of “pattern.”</li>
<li><strong>Bring designs to life.</strong><br />
Use motion, prototypes, and real content to add clarity, energy, and brand relevance.</li>
</ul>


<figure class="video-embed-container break-out">
  <div class="video-embed-container--wrapper"
	
  >
    <iframe class="video-embed-container--wrapper-iframe" src="https://player.vimeo.com/video/1109486230"
        frameborder="0"
        allow="autoplay; fullscreen; picture-in-picture"
        allowfullscreen>
    </iframe>
	</div>
	
		<figcaption>A motion-forward style tile concept I created to introduce storytelling through micro animations, immersive color themes, and real content.</figcaption>
	
</figure>

<h2 id="4-collaboration-is-the-backbone">4. Collaboration Is The Backbone</h2>

<p>In fine dining, teamwork isn’t just helpful &mdash; it’s essential. Every night, success depends entirely on collaboration. The hostess seats guests, the bartender crafts drinks, the chefs prepare dishes, bussers swiftly clear tables, dishwashers provide spotless glasses &mdash; each role is critical, and without one, everything falls apart. You quickly learn there’s no ego or question about whether you could do it better alone. You know that teamwork is the only way, which may mean temporarily stepping outside your role to buss your table or jump behind the dishwasher to get clean glasses. Fine dining is truly a well-oiled machine &mdash; everyone must trust and rely on one another entirely.</p>

<h3 id="what-i-learned-3">What I Learned</h3>

<p>In product design, it’s easier to slip into a silo inadvertently. Unlike restaurants, it can feel natural to work independently, maintaining biases and assumptions, or pushing work forward without additional feedback. But great design thrives on intentional collaboration and shared accountability, especially within an agency setting. Collaborate early, not alone. Actively embracing your support system &mdash; joining a UX call even when you’re not officially invited &mdash; can give critical insights far before wireframes or comps are developed, helping you ask better questions and make smarter assumptions.</p>

<p>In restaurant service, stepping in unannounced to address an issue was seen as helpful, even necessary. But in design, jumping in without alignment can confuse roles or interrupt someone else’s process. I’ve learned that collaboration isn’t about taking over but <em>staying connected</em>. I’ve had to get better at <strong>asking before helping</strong>, <strong>syncing instead of assuming</strong>, and treating the handoff not as an ending but as an open communication thread.</p>

<h3 id="how-i-apply-this-today-3">How I Apply This Today</h3>

<ul>
<li><strong>Stay involved after handoff.</strong><br />
Check in during engineering and QA (quality assurance) to support implementation.</li>
<li><strong>Keep workshops flexible.</strong><br />
Adjust structure based on the client’s energy and decision-making style.</li>
<li><strong>Invite a fresh perspective.</strong><br />
Bring in another designer near the end for polish or feedback.</li>
<li><strong>Capture intent visually.</strong><br />
Document decisions clearly so downstream teams understand the nuances and not just the layout.</li>
</ul>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/3-documented-component-anatomy.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/3-documented-component-anatomy.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/3-documented-component-anatomy.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/3-documented-component-anatomy.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/3-documented-component-anatomy.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/3-documented-component-anatomy.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/3-documented-component-anatomy.png"
			
			sizes="100vw"
			alt="This illustration shows in detail the component anatomy, outlining the purpose and usage of each element for clear engineering handoff and system documentation. Every part of the component in the illustration is clearly annotated. The major parts of the component are as follows: (1) image, (2) title, (3) accordion items, (4) accordion item in active state."
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      A documented component anatomy, outlining the purpose and usage of each element for clear engineering handoff and system documentation. (<a href='https://files.smashing.media/articles/from-line-to-layout-past-experiences-shape-design-career/3-documented-component-anatomy.png'>Large preview</a>)
    </figcaption>
  
</figure>

<h2 id="5-composure-under-pressure">5. Composure Under Pressure</h2>

<p>In fine dining, pressure isn’t an occasional event &mdash; it’s the default setting. Every night is high stakes. Timing is tight, expectations are sky-high, and mistakes are rarely forgiven. Composure becomes your edge. You don’t show panic when the kitchen is backed up or when a guest sends a dish back mid-rush. You pivot. You delegate. You anticipate. Some nights, the only thing that kept things on track was staying calm and thinking clearly.</p>

<blockquote>“This notion of problem solving and decision making is key to being a great designer. I think that we need to get really strong at problem identification and then prioritization. All designers are good problem solvers, but the <strong>really</strong> great designers are strong problem finders.”<br /><br />&mdash; Jason Cyr, “<a href="https://jasoncyr.medium.com/how-being-a-firefighter-made-me-a-better-designer-cb6345001d62">How being a firefighter made me a better designer thinker</a>”</blockquote>

<h3 id="what-i-learned-4">What I Learned</h3>

<p>The same principle applies to product design. When pressure mounts &mdash; tight timelines, conflicting feedback, or unclear priorities &mdash; your ability to stay composed can shift the energy of the entire project.</p>

<blockquote>Composure isn’t just about being calm; it’s about being adaptable and responsive without reacting impulsively. It helps you hold space for feedback, ask better questions, and move forward with clarity instead of chaos.</blockquote>

<p>There have also been plenty of times when a client doesn’t resonate with a design, which can feel crushing. You can easily take it personally and internalize the rejection, or you can pause, listen, and course-correct. I’ve learned to focus on understanding the root of the feedback. Often, what seems like a rejection is just discomfort with a small detail, which in most cases can be easily corrected.</p>

<p>Perfection was the baseline in restaurants, and pressure drove polish. In design, that mindset can lead to overinvesting in perfection too soon or “freezing” under critique. I’ve had to unlearn that success means getting everything right the first time. Now I see messy collaboration and gradual refinement as a mark of success, not failure.</p>

<h3 id="how-i-apply-this-today-4">How I Apply This Today</h3>

<ul>
<li><strong>Use live design to unblock.</strong><br />
When timelines are tight and feedback goes in circles, co-designing in real time helps break through stuck points and move forward quickly.</li>
<li><strong>Turn critique into clarity.</strong><br />
Listen for what’s underneath the feedback, then ask clarifying questions, or repeat back what you’re hearing to align before acting.</li>
<li><strong>Pause when stress builds.</strong><br />
If you feel reactive, take a moment to regroup before responding.</li>
<li><strong>Frame changes as progress.</strong><br />
Normalize iteration as part of the process, and not a design failure.</li>
</ul>

<h2 id="would-i-go-back">Would I Go Back?</h2>

<p>I still dream about the restaurant floor. But now, I see it as a <em>reminder</em> &mdash; not of where I was stuck, but of where I perfected the instincts I use today. If you’re someone who came to design from another path, try asking yourself:</p>

<ul>
<li>When do I feel strangely at ease while others panic?</li>
<li>What used to feel like “just part of the job,” but now feels like a superpower?</li>
<li>Where do I get frustrated because my instincts are different &mdash; and maybe sharper?</li>
<li>What kinds of group dynamics feel easy to me that others struggle with?</li>
<li>What strengths would not exist in me today if I hadn’t lived that past life?</li>
</ul>

<p><strong>Once you see the patterns, start using them.</strong></p>

<p>Name your edge. Talk about your background as an asset: in intros, portfolios, interviews, or team retrospectives. When projects get messy, lean into what you already know how to do. Trust your instincts. They’re real, and they’re earned. But balance them, too. Stay aware of when your strengths could become blind spots, like speed overriding thoughtfulness. That kind of awareness turns experience into a tool, not a trigger.</p>

<blockquote class="pull-quote">
  <p>
    <a class="pull-quote__link" aria-label="Share on Twitter" href="https://twitter.com/share?text=%0aYour%20past%20doesn%e2%80%99t%20need%20to%20look%20like%20anyone%20else%e2%80%99s.%20It%20just%20needs%20to%20teach%20you%20something.%0a&url=https://smashingmagazine.com%2f2025%2f08%2ffrom-line-to-layout-past-experiences-shape-design-career%2f">
      
Your past doesn’t need to look like anyone else’s. It just needs to teach you something.

    </a>
  </p>
  <div class="pull-quote__quotation">
    <div class="pull-quote__bg">
      <span class="pull-quote__symbol">“</span></div>
  </div>
</blockquote>

<h3 id="further-reading">Further Reading</h3>

<ul>
<li>“If I Was Starting My Career Today: Thoughts After 15 Years Spent In UX Design” (<a href="https://www.smashingmagazine.com/2024/08/thoughts-after-15-years-spent-ux-design-part1/">Part One</a>, <a href="https://www.smashingmagazine.com/2024/08/thoughts-after-15-years-spent-ux-design-part2/">Part Two</a>), by Andrii Zhdan (Smashing Magazine)<br />
In this two-part series, Andrii Zhdan outlines common challenges faced at the start of a design career and offers advice to smooth your journey based on insights from his experience hiring designers.</li>
<li>“<a href="https://www.smashingmagazine.com/2022/07/overcoming-imposter-syndrome-developing-guiding-principles/">Overcoming Imposter Syndrome By Developing Your Own Guiding Principles</a>,” by Luis Ouriach (Smashing Magazine)<br />
Unfortunately, not everyone has access to a mentor or a guide at the start of the design career, which is why we often have to rely on “working it out” by ourselves. In this article, Luis Ouriach tries to help you in this task so that you can walk into the design critique meetings with more confidence and really deliver the best representation of your ideas.</li>
<li>“<a href="https://www.smashingmagazine.com/2022/07/overcoming-imposter-syndrome-developing-guiding-principles/">Why Designers Get Stuck In The Details And How To Stop</a>,” by Nikita Samutin (Smashing Magazine)<br />
Designers love to craft, but polishing pixels before the problem is solved is a time sink. This article pinpoints the five traps that lure us into premature detail and then hands you a rescue plan to refocus on goals, ship faster, and keep your craft where it counts.</li>
<li>“<a href="https://www.smashingmagazine.com/2023/09/rediscovering-joy-happiness-design/">Rediscovering The Joy Of Design</a>,” by Pratik Joglekar (Smashing Magazine)<br />
Pratik Joglekar takes a philosophical approach to remind designers about the lost joy within themselves by effectively placing massive importance on mindfulness, introspection, and forward-looking.</li>
<li>“<a href="https://www.smashingmagazine.com/2022/09/lessons-learned-designer-founder/">Lessons Learned As A Designer-Founder</a>,” by Dave Feldman (Smashing Magazine)<br />
In this article, Dave Feldman shares his lessons learned and the experiments he has done as a multidisciplinary designer-founder-CEO at an early-stage startup.</li>
<li>“<a href="https://www.smashingmagazine.com/2023/02/designers-ask-receive-high-quality-feedback/">How Designers Should Ask For (And Receive) High-Quality Feedback</a>,” by Andy Budd (Smashing Magazine)<br />
Designers often complain about the quality of feedback they get from senior stakeholders without realizing it’s usually because of the way they initially have framed the request. In this article, Andy Budd shares a better way of requesting feedback: rather than sharing a linear case study that explains every design revision, the first thing to do would be to better frame the problem.</li>
<li>“<a href="https://jasoncyr.medium.com/how-being-a-firefighter-made-me-a-better-designer-cb6345001d62">How being a Firefighter made me a better Design Thinker</a>,” by <a href="https://jasoncyr.medium.com/">Jason Cyr</a> (Medium)<br />
The ability to come upon a situation and very quickly start evaluating information, asking questions, making decisions, and formulating a plan is a skill that every firefighter learns to develop, especially as they rise through the ranks and start leading others.</li>
<li>“<a href="https://adobe.design/stories/leading-design/advice-for-making-the-most-of-an-indirect-career-path-to-design">Advice for making the most of an indirect career path to design</a>,” by Heidi Meredith (Adobe Express Growth)<br />
I didn’t know anything about design until after I graduated from the University of California, Santa Cruz, with a degree in English Literature/Creative Writing. A mere three months into it, though, I realized I didn&rsquo;t want to write books &mdash; I wanted to design them.</li>
</ul>

<p><em>I want to express my deep gratitude to Sara Wachter-Boettcher, whose coaching helped me find the clarity and confidence to write this piece &mdash; and, more importantly, to move forward with purpose in both life and work. And to Lea Alcantara, my design director at Fueled, for being a steady creative force and an inspiring example of thoughtful leadership.</em></p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(mb, yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item><item><author>Ilia Kanazin &amp; Marina Chernyshova</author><title>Designing With AI, Not Around It: Practical Advanced Techniques For Product Design Use Cases</title><link>https://www.smashingmagazine.com/2025/08/designing-with-ai-practical-techniques-product-design/</link><pubDate>Mon, 11 Aug 2025 08:00:00 +0000</pubDate><guid>https://www.smashingmagazine.com/2025/08/designing-with-ai-practical-techniques-product-design/</guid><description>Prompting isn’t just about writing better instructions, but about designing better thinking. Ilia and Marina explore how advanced prompting can empower different product &amp;amp; design use cases, speeding up your workflow and improving results, from research and brainstorming to testing and beyond. Let’s dive in.</description><content:encoded><![CDATA[
          <html>
            <head>
              <meta charset="utf-8">
              <link rel="canonical" href="https://www.smashingmagazine.com/2025/08/designing-with-ai-practical-techniques-product-design/" />
              <title>Designing With AI, Not Around It: Practical Advanced Techniques For Product Design Use Cases</title>
            </head>
            <body>
              <article>
                <header>
                  <h1>Designing With AI, Not Around It: Practical Advanced Techniques For Product Design Use Cases</h1>
                  
                    
                    <address>Ilia Kanazin &amp; Marina Chernyshova</address>
                  
                  <time datetime="2025-08-11T08:00:00&#43;00:00" class="op-published">2025-08-11T08:00:00+00:00</time>
                  <time datetime="2025-08-11T08:00:00&#43;00:00" class="op-modified">2025-10-14T04:02:41+00:00</time>
                </header>
                
                

<p>AI is almost everywhere &mdash; it writes text, makes music, generates code, draws pictures, runs research, chats with you &mdash; and apparently even <a href="https://hbr.org/2025/04/how-people-are-really-using-gen-ai-in-2025">understands people better than they understand themselves</a>?!</p>

<p>It’s a lot to take in. The pace is wild, and new tools pop up faster than anyone has time to try them. Amid the chaos, one thing is clear: this isn’t hype; it’s structural change.</p>

<p>According to the <a href="https://www.weforum.org/publications/the-future-of-jobs-report-2025/"><em>Future of Jobs Report 2025</em></a> by the World Economic Forum, one of the fastest-growing, most in-demand skills for the next five years is the <strong>ability to work with AI and Big Data</strong>. That applies to almost every role &mdash; including product design.</p>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			height="673"
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png"
			
			sizes="100vw"
			alt="A figure showing skills on the rise in 2025-2030, which places AI and big data in first place"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/1-skills-on-the-rise-2025.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>What do companies want most from their teams? Right, efficiency. And AI can make people way more efficient. We’d easily spend 3x more time on tasks like replying to our managers without AI helping out. We’re learning to work with it, but many of us are still figuring out how to meet the rising bar.</p>

<p>That’s especially important for designers, whose work is all about empathy, creativity, critical thinking, and working across disciplines. It’s a uniquely human mix. At least, that’s what we tell ourselves.</p>

<p>Even as debates rage about AI’s limitations, tools today (June 2025 &mdash; timestamp matters in this fast-moving space) already assist with research, ideation, and testing, sometimes better than expected.</p>

<p>Of course, not everyone agrees. AI hallucinates, loses context, and makes things up. So how can both views exist at the same time? Simple: both are true. AI is deeply flawed and surprisingly useful. The trick is knowing how to work with its strengths while managing its weaknesses. The real question isn’t whether AI is good or bad &mdash; it’s how we, as designers, stay sharp, stay valuable, and stay in the loop.</p>

<h2 id="why-prompting-matters">Why Prompting Matters</h2>

<p>Prompting matters more than most people realize because even small tweaks in how you ask can lead to radically different outputs. To see how this works in practice, let’s look at a simple example.</p>

<p>Imagine you want to improve the onboarding experience in your product. On the left, you have the prompt you send to AI. On the right, the response you get back.</p>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Input</th>
            <th>Output</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>How to improve onboarding in a SaaS product?</td>
            <td>👉 Broad suggestions: checklists, empty states, welcome modals…</td>
        </tr>
        <tr>
            <td>How to improve onboarding in Product A’s workspace setup flow?</td>
            <td>👉 Suggestions focused on workspace setup…</td>
        </tr>
        <tr>
            <td>How to improve onboarding in Product A’s workspace setup step to address user confusion?</td>
            <td>👉 ~10 common pain points with targeted UX fixes for each…</td>
        </tr>
    <tr>
            <td>How to improve onboarding in Product A by redesigning the workspace setup screen to reduce drop-off, with detailed reasoning?</td>
            <td>👉 ~10 paragraphs covering a specific UI change, rationale, and expected impact…</td>
        </tr>
    </tbody>
</table>

<p>This side-by-side shows just how much even the smallest prompt details can change what AI gives you.</p>

<p>Talking to an AI model isn’t that different from talking to a person. The more clearly you explain your thinking, the better the mutual understanding and communication overall.</p>

<blockquote>Advanced prompting is about moving beyond one-shot, throwaway prompts. It’s an iterative, structured process of refining your inputs using different techniques so you can guide the AI toward more useful results. It focuses on being intentional with every word you put in, giving the AI not just the task but also the path to approach it step by step, so it can actually do the job.</blockquote>














<figure class="
  
    break-out article__image
  
  
  ">
  
    <a href="https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png">
    
    <img
      loading="lazy"
      decoding="async"
      fetchpriority="low"
			width="800"
			
			
			srcset="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png 400w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_800/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png 800w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1200/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png 1200w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_1600/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png 1600w,
			        https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_2000/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png 2000w"
			src="https://res.cloudinary.com/indysigner/image/fetch/f_auto,q_80/w_400/https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png"
			
			sizes="100vw"
			alt="Advanced prompting vs basic prompting"
		/>
    
    </a>
  

  
    <figcaption class="op-vertical-bottom">
      (<a href='https://files.smashing.media/articles/designing-with-ai-practical-techniques-product-design/2-advanced-prompting.png'>Large preview</a>)
    </figcaption>
  
</figure>

<p>Where basic prompting throws your question at the model and hopes for a quick answer, advanced prompting helps you <strong>explore options</strong>, <strong>evaluate branches of reasoning</strong>, and <strong>converge on clear, actionable outputs</strong>.</p>

<p>But that doesn’t mean simple prompts are useless. On the contrary, short, focused prompts work well when the task is narrow, factual, or time-sensitive. They’re great for idea generation, quick clarifications, or anything where deep reasoning isn’t required. <strong>Think of prompting as a scale, not a binary.</strong> The simpler the task, the faster a lightweight prompt can get the job done. The more complex the task, the more structure it needs.</p>

<p>In this article, we’ll dive into how advanced prompting can empower different product &amp; design use cases, speeding up your workflow and improving your results &mdash; whether you’re researching, brainstorming, testing, or beyond. Let’s dive in.</p>

<h2 id="practical-cases">Practical Cases</h2>

<p>In the next section, we’ll explore six practical prompting techniques that we’ve found most useful in real product design work. These aren’t abstract theories &mdash; each one is grounded in hands-on experience, tested across research, ideation, and evaluation tasks. Think of them as modular tools: you can mix, match, and adapt them depending on your use case. For each, we’ll explain the thinking behind it and walk through a sample prompt.</p>

<p><strong>Important note:</strong> The prompts you’ll see are not copy-paste recipes. Some are structured templates you can reuse with small tweaks; others are more specific, meant to spark your thinking. Use them as scaffolds, not scripts.</p>

<h3 id="1-task-decomposition-by-jtbd">1. Task Decomposition By JTBD</h3>

<p><em>Technique: Role, Context, Instructions template + Checkpoints (with self-reflection)</em></p>

<p>Before solving any problem, there’s a critical step we often overlook: breaking the problem down into clear, actionable parts.</p>

<p>Jumping straight into execution feels fast, but it’s risky. We might end up solving the wrong thing, or solving it the wrong way. That’s where GPT can help: not just by generating ideas, but by helping us think more clearly about the structure of the problem itself.</p>

<p>There are many ways to break down a task. One of the most useful in product work is the <strong>Jobs To Be Done (JTBD) framework</strong>. Let’s see how we can use advanced prompting to apply JTBD decomposition to any task.</p>

<p>Good design starts with understanding the user, the problem, and the context. Good prompting? Pretty much the same. That’s why most solid prompts include three key parts: Role, Context, and Instructions. If needed, you can also add the expected format and any constraints.</p>
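The Role, Context, Instructions structure is easy to make concrete. As a minimal sketch (the helper and section names are ours, not part of any tool), a small function can assemble the final prompt from labeled parts, skipping whichever optional sections you leave out:

```python
def build_prompt(role, context, instructions, format_spec=None, constraints=None):
    """Assemble a structured prompt from labeled sections.

    Empty/None sections are skipped, so the same helper covers both
    short prompts and fully specified ones.
    """
    sections = [
        ("Role", role),
        ("Context", context),
        ("Task & Instructions", instructions),
        ("Format", format_spec),
        ("Constraints", constraints),
    ]
    return "\n\n".join(f"{name}:\n{text}" for name, text in sections if text)

prompt = build_prompt(
    role="Act as a senior product strategist with deep JTBD expertise.",
    context="You are helping a product team decompose a broad user problem.",
    instructions="Break the problem into functional, emotional, and social jobs.",
)
```

The point isn’t the code itself but the discipline it encodes: every prompt states who the model is, what it knows, and exactly what to do.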

<p>In this example, we’re going to break down a task into smaller jobs and add self-checkpoints to the prompt, so the AI can pause, reflect, and self-verify along the way.</p>

<blockquote><strong>Role</strong><br />Act as a senior product strategist and UX designer with deep expertise in Jobs To Be Done (JTBD) methodology and user-centered design. You think in terms of user goals, progress-making moments, and unmet needs &mdash; similar to approaches used at companies like Intercom, Basecamp, or IDEO.<br /><br /><strong>Context</strong><br />You are helping a product team break down a broad user or business problem into a structured map of Jobs To Be Done. This decomposition will guide discovery, prioritization, and solution design.<br /><br /><strong>Task & Instructions</strong><br />[👉 DESCRIBE THE USER TASK OR PROBLEM 👈🏼]<br />Use JTBD thinking to uncover:<ul><li>The main functional job the user is trying to get done;</li><li>Related emotional or social jobs;</li><li>Sub-jobs or tasks users must complete along the way;</li><li>Forces of progress and barriers that influence behavior.</li></ul><br /><strong>Checkpoints</strong><br />Before finalizing, check yourself:<ul><li>Are the jobs clearly goal-oriented and not solution-oriented?</li><li>Are sub-jobs specific steps toward the main job?</li><li>Are emotional/social jobs captured?</li><li>Are user struggles or unmet needs listed?</li></ul><br />If anything’s missing or unclear, revise and explain what was added or changed.</blockquote>

<p>With a simple one-sentence prompt, you’ll likely get a high-level list of user needs or feature ideas. An advanced approach can produce a structured JTBD breakdown of a specific user problem, which may include:</p>

<ul>
<li><strong>Main Functional Job</strong>: A clear, goal-oriented statement describing the primary outcome the user wants to achieve.</li>
<li><strong>Emotional &amp; Social Jobs</strong>: Supporting jobs related to how the user wants to feel or be perceived during their progress.</li>
<li><strong>Sub-Jobs</strong>: Step-by-step tasks or milestones the user must complete to fulfill the main job.</li>
<li><strong>Forces of Progress</strong>: A breakdown of motivations (push/pull) and barriers (habits/anxieties) that influence user behavior.</li>
</ul>

<p>But these prompts are most powerful when used with real context. Try it now with your product. Even a quick test can reveal unexpected insights.</p>

<h3 id="2-competitive-ux-audit">2. Competitive UX Audit</h3>

<p><em>Technique: Attachments + Reasoning Before Understanding + Tree of Thought (ToT)</em></p>

<p>Sometimes, you don’t need to design something new &mdash; you need to understand what already exists.</p>

<p>Whether you’re doing a competitive analysis, learning from rivals, or benchmarking features, the first challenge is making sense of someone else’s design choices. What’s the feature really for? Who’s it helping? Why was it built this way?</p>

<p>Instead of rushing into critique, we can use GPT to reverse-engineer the thinking behind a product &mdash; before judging it. In this case, start by:</p>

<ol>
<li>Grab the competitor’s documentation for the feature you want to analyze.</li>
<li>Save it as a PDF, then head over to ChatGPT (or another model).</li>
<li>Before jumping into the audit, ask the model to first make sense of the documentation. This technique is called <strong>Reasoning Before Understanding (RBU)</strong>: before you ask for critique, you ask for <strong>interpretation</strong>. This helps the AI build a more accurate mental model and avoids jumping to conclusions.</li>
</ol>

<blockquote><strong>Role</strong><br />You are a senior UX strategist and cognitive design analyst. Your expertise lies in interpreting digital product features based on minimal initial context, inferring purpose, user intent, and mental models behind design decisions before conducting any evaluative critique.<br /><br /><strong>Context</strong><br />You’ve been given internal documentation and screenshots of a feature. The goal is not to evaluate it yet, but to understand what it’s doing, for whom, and why.<br /><br /><strong>Task & Instructions</strong><br />Review the materials and answer:<ul><li>What is this feature for?</li><li>Who is the intended user?</li><li>What tasks or scenarios does it support?</li><li>What assumptions does it make about the user?</li><li>What does its structure suggest about priorities or constraints?</li></ul></blockquote>

<p>Once you get the first reply, take a moment to respond: clarify, correct, or add nuance to GPT’s conclusions. This helps align the model’s mental frame with your own.</p>

<p>For the audit part, we’ll use something called the Tree of Thought (ToT) approach.</p>

<p><strong>Tree of Thought (ToT)</strong> is a prompting strategy that asks the AI to “think in branches.” Instead of jumping to a single answer, the model explores multiple reasoning paths, compares outcomes, and revises logic before concluding &mdash; like tracing different routes through a decision tree. This makes it perfect for handling more complex UX tasks.</p>

<blockquote>You are now performing a UX audit based on your understanding of the feature. You’ll identify potential problems, alternative design paths, and trade-offs using a Tree of Thought approach, i.e., thinking in branches, comparing different reasoning paths before concluding.</blockquote>

<p>or</p>

<blockquote>Convert your understanding of the feature into a set of Jobs-To-Be-Done statements from the user’s perspective using a Tree of Thought approach.</blockquote>

<blockquote>List implicit assumptions this feature makes about the user's behavior, workflow, or context using a Tree of Thought approach.</blockquote>

<blockquote>Propose alternative versions of this feature that solve the same job using different interaction or flow mechanics using a Tree of Thought approach.</blockquote>
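Under the hood, Tree of Thought can be pictured as a generate-score-expand loop over candidate reasoning branches. A toy sketch of that control flow, with stand-in functions where a real setup would call the model (all names here are ours):

```python
def tree_of_thought(problem, generate, score, breadth=3, depth=2):
    """Explore reasoning branches: generate candidate next steps,
    keep the best-scoring paths, and expand them level by level."""
    frontier = [problem]  # partial reasoning paths
    for _ in range(depth):
        candidates = [path + " -> " + step
                      for path in frontier
                      for step in generate(path)]
        # keep only the most promising branches (sorted is stable)
        frontier = sorted(candidates, key=score, reverse=True)[:breadth]
    return frontier[0] if frontier else problem

# Stand-ins for model calls, just to show the shape of the loop:
def generate(path):
    return ["option A", "option B"]

def score(path):
    return path.count("A")  # toy scorer: prefer branches containing "A"

best = tree_of_thought("audit onboarding", generate, score)
```

In practice, `generate` and `score` would each be prompts themselves, which is exactly why the “think in branches” instruction works: it makes the model run this loop internally instead of committing to its first path.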

<h3 id="3-ideation-with-an-intellectual-opponent">3. Ideation With An Intellectual Opponent</h3>

<p><em>Technique: Role Conditioning + Memory Update</em></p>

<p>When you’re working on creative or strategic problems, there’s a common trap: AI often just agrees with you or tries to please your way of thinking. It treats your ideas like gospel and tells you they’re great &mdash; even when they’re not.</p>

<p>So how do you avoid this? How do you get GPT to challenge your assumptions and act more like a <strong>critical thinking partner</strong>? Simple: tell it to, and ask it to remember.</p>

<blockquote><strong>Instructions</strong><br />From now on, remember to follow this mode unless I explicitly say otherwise.<br /><br />Do not take my conclusions at face value. Your role is not to agree or assist blindly, but to serve as a sharp, respectful intellectual opponent.<br /><br />Every time I present an idea, do the following:<ul><li>Interrogate my assumptions: What am I taking for granted?</li><li>Present counter-arguments: Where could I be wrong, misled, or overly confident?</li><li>Test my logic: Is the reasoning sound, or are there gaps, fallacies, or biases?</li><li>Offer alternatives: Not for the sake of disagreement, but to expand perspective.</li><li>Prioritize truth and clarity over consensus: Even when it’s uncomfortable.</li></ul>Maintain a constructive, rigorous, truth-seeking tone. Don’t argue for the sake of it. Argue to sharpen thought, expose blind spots, and help me reach clearer, stronger conclusions.<br /><br />This isn’t a debate. It’s a collaboration aimed at insight.</blockquote>

<h3 id="4-requirements-for-concepting">4. Requirements For Concepting</h3>

<p><em>Technique: Requirement-Oriented + Meta prompting</em></p>

<p>This one deserves a whole article on its own, but let’s lay the groundwork here.</p>

<p>When you’re building quick prototypes or UI screens using tools like v0, Bolt, Lovable, UX Pilot, etc., your prompt needs to be better than most PRDs you’ve worked with. Why? Because the output depends entirely on how clearly and specifically you describe the goal.</p>

<p>The catch? Writing that kind of prompt is hard. So instead of jumping straight to the design prompt, try writing a <strong>meta-prompt first</strong>. That is a prompt that asks GPT to help you write a better prompt. Prompting about prompting, prompt-ception, if you will.</p>

<p>Here’s how to make that work: Feed GPT what you already know about the app or the screen. Then ask it to treat things like information architecture, layout, and user flow as variables it can play with. That way, you don’t just get one rigid idea &mdash; you get multiple concept directions to explore.</p>

<blockquote><strong>Role</strong><br />You are a product design strategist working with AI to explore early-stage design concepts.<br /><br /><strong>Goal</strong><br />Generate 3 distinct prompt variations for designing a Daily Wellness Summary single screen in a mobile wellness tracking app for Lovable/Bolt/v0.<br /><br />Each variation should experiment with a different Information Architecture and Layout Strategy. You don’t need to fully specify the IA or layout &mdash; just take a different angle in each prompt. For example, one may prioritize user state, another may prioritize habits or recommendations, and one may use a card layout while another uses a scroll feed.<br /><br /><strong>User context</strong><br />The target user is a busy professional who checks this screen once or twice a day (morning/evening) to log their mood, energy, and sleep quality, and to receive small nudges or summaries from the app.<br /><br /><strong>Visual style</strong><br />Keep the tone calm and approachable.<br /><br /><strong>Format</strong><br />Each of the 3 prompt variations should be structured clearly and independently.<br /><br />Remember: The key difference between the three prompts should be the underlying IA and layout logic. You don’t need to over-explain &mdash; just guide the design generator toward different interpretations of the same user need.</blockquote>
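If you reach for meta-prompts often, the pattern is easy to template: feed in what you already know, name the variable each variation should explore, and ask for N independent prompts. A sketch under those assumptions (function and parameter names are ours):

```python
def meta_prompt(product_context, target, variable="information architecture", n=3):
    """Ask the model to write prompts, not designs: n variations,
    each taking a different angle on the chosen variable."""
    return (
        f"Generate {n} distinct prompt variations for designing {target}.\n"
        f"Each variation should take a different angle on {variable}.\n\n"
        f"What we already know:\n{product_context}\n\n"
        "Structure each prompt clearly and independently."
    )

mp = meta_prompt(
    "A mobile wellness app; users log mood, energy, and sleep twice a day.",
    "a Daily Wellness Summary screen",
)
```

Swapping the `variable` argument ("layout strategy", "user flow") is what turns one rigid brief into several concept directions.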

<h3 id="5-from-cognitive-walkthrough-to-testing-hypothesis">5. From Cognitive Walkthrough To Testing Hypothesis</h3>

<p><em>Technique: Casual Tree of Thought + Causal Reasoning + Multi-Roles + Self-Reflection</em></p>

<p>Cognitive walkthrough is a powerful way to break down a user action and check whether the steps are intuitive.</p>

<p><strong>Example</strong>: “User wants to add a task” → Do they know where to click? What to do next? Do they know it worked?</p>

<p>We’ve found this technique super useful for reviewing our own designs. Sometimes there’s already a mockup; other times we’re still arguing with a PM about what should go where. Either way, GPT can help.</p>

<p>Here’s an advanced way to run that process:</p>

<blockquote><strong>Context</strong><br />You’ve been given a screenshot of a screen where users can create new tasks in a project management app. The main action the user wants to perform is “add a task”. Simulate behavior from two user types: a beginner with no prior experience and a returning user familiar with similar tools.<br /><br /><strong>Task & Instructions</strong><br />Go through the UI step by step and evaluate:<ol><li>Will the user know what to do at each step?</li><li>Will they understand how to perform the action?</li><li>Will they know they’ve succeeded?</li></ol>For each step, consider alternative user paths (if multiple interpretations of the UI exist). Use a casual Tree-of-Thought method.<br /><br />At each step, reflect: what assumptions is the user making here? What visual feedback would help reduce uncertainty?<br /><br /><strong>Format</strong><br />Use a numbered list for each step. For each, add observations, possible confusions, and UX suggestions.<br /><br /><strong>Limits</strong><br />Don’t assume prior knowledge unless it’s visually implied.<br />Do not limit analysis to a single user type.</blockquote>

<p>Cognitive walkthroughs are great, but they get even more useful when they lead to testable hypotheses.</p>

<p>After running the walkthrough, you’ll usually uncover moments that might confuse users. Instead of leaving that as a guess, turn those into concrete UX testing hypotheses.</p>

<p>We ask GPT to not only flag potential friction points, but to help define how we’d validate them with real users: using a task, a question, or observable behavior.</p>

<blockquote><strong>Task & Instructions</strong><br />Based on your previous cognitive walkthrough:<ol><li>Extract all potential usability hypotheses from the walkthrough.</li><li>For each hypothesis:<ul><li>Assess whether it can be tested through moderated or unmoderated usability testing.</li><li>Explain what specific UX decision or design element may cause this issue. Use causal reasoning.</li><li>For testable hypotheses:<ul><li>Propose a specific usability task or question.</li><li>Define a clear validation criterion (how you’ll know if the hypothesis is confirmed or disproved).</li><li>Evaluate feasibility and signal strength of the test (e.g., how easy it is to test, and how confidently it can validate the hypothesis).</li><li>Assign a priority score based on Impact, Confidence, and Ease (ICE).</li></ul></li></ul></li></ol><strong>Limits</strong><br />Don’t invent hypotheses not rooted in your walkthrough output. Only propose tests where user behavior or responses can provide meaningful validation. Skip purely technical or backend concerns.</blockquote>
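The ICE prioritization the prompt asks for is simple to make concrete. One common convention (used here; some teams average instead) multiplies Impact, Confidence, and Ease, each rated 1-10, so the hypotheses the walkthrough surfaces can be ranked:

```python
def ice_score(impact, confidence, ease):
    """ICE priority: Impact x Confidence x Ease, each rated 1-10."""
    for value in (impact, confidence, ease):
        if not 1 <= value <= 10:
            raise ValueError("ICE ratings must be between 1 and 10")
    return impact * confidence * ease

# Hypothetical hypotheses from a walkthrough, ranked by ICE:
hypotheses = [
    ("Users miss the 'add task' button", ice_score(8, 6, 9)),
    ("Label wording causes hesitation", ice_score(4, 5, 10)),
]
ranked = sorted(hypotheses, key=lambda h: h[1], reverse=True)
```

Whichever convention you pick, state it in the prompt so the model scores every hypothesis the same way.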

<h3 id="6-cross-functional-feedback">6. Cross-Functional Feedback</h3>

<p><em>Technique: Multi-Roles</em></p>

<p>Good design is co-created. And good designers are used to working with cross-functional teams: PMs, engineers, analysts, QAs, you name it. Part of the job is turning scattered feedback into clear action items.</p>

<p>Earlier, we talked about how giving AI a “role” helps sharpen its responses. Now let’s level that up: what if we give it <strong>multiple roles at once</strong>? This is called <strong>multi-role prompting</strong>. It’s a great way to simulate a design review with input from different perspectives. You get quick insights and a more well-rounded critique of your design.</p>

<blockquote><strong>Role</strong><br />You are a cross-functional team of experts evaluating a new dashboard design:<ul><li>PM (focus: user value & prioritization)</li><li>Engineer (focus: feasibility & edge cases)</li><li>QA tester (focus: clarity & testability)</li><li>Data analyst (focus: metrics & clarity of reporting)</li><li>Designer (focus: consistency & usability)</li></ul><strong>Context</strong><br />The team is reviewing a mockup for a new analytics dashboard for internal use.<br /><br /><strong>Task & Instructions</strong><br />For each role:<ol><li>What stands out immediately?</li><li>What concerns might this role have?</li><li>What feedback or suggestions would they give?</li></ol></blockquote>
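If you run reviews like this regularly, the multi-role structure templates well: one shared context plus a list of (role, focus) pairs expanded into a single prompt. A sketch of that idea (names are ours):

```python
ROLES = [
    ("PM", "user value & prioritization"),
    ("Engineer", "feasibility & edge cases"),
    ("QA tester", "clarity & testability"),
    ("Data analyst", "metrics & clarity of reporting"),
    ("Designer", "consistency & usability"),
]

def multi_role_prompt(context, roles=ROLES):
    """Build one prompt that asks for feedback from every role."""
    role_lines = "\n".join(f"- {name} (focus: {focus})" for name, focus in roles)
    return (
        "You are a cross-functional team of experts:\n"
        f"{role_lines}\n\n"
        f"Context: {context}\n\n"
        "For each role, answer:\n"
        "1. What stands out immediately?\n"
        "2. What concerns might this role have?\n"
        "3. What feedback or suggestions would they give?"
    )

prompt = multi_role_prompt("Reviewing a mockup for a new internal analytics dashboard.")
```

Editing the `ROLES` list is the whole customization surface: drop in a security engineer or a support lead and the simulated review adjusts accordingly.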

<h2 id="designing-with-ai-is-a-skill-not-a-shortcut">Designing With AI Is A Skill, Not A Shortcut</h2>

<p>By now, you’ve seen that prompting isn’t just about typing better instructions. It’s about <strong>designing better thinking</strong>.</p>

<p>We’ve explored several techniques, and each is useful in different contexts:</p>

<table class="tablesaw break-out">
    <thead>
        <tr>
            <th>Technique</th>
            <th>When to use it</th>
        </tr>
    </thead>
    <tbody>
        <tr>
            <td>Role + Context + Instructions + Constraints</td>
            <td>Anytime you want consistent, focused responses (especially in research, decomposition, and analysis).</td>
        </tr>
        <tr>
            <td>Checkpoints / Self-verification</td>
            <td>When accuracy, structure, or layered reasoning matters. Great for complex planning or JTBD breakdowns.</td>
        </tr>
        <tr>
            <td>Reasoning Before Understanding (RBU)</td>
            <td>When input materials are large or ambiguous (like docs or screenshots). Helps reduce misinterpretation.</td>
        </tr>
    <tr>
            <td>Tree of Thought (ToT)</td>
            <td>When you want the model to explore options, backtrack, compare. Ideal for audits, evaluations, or divergent thinking.</td>
        </tr>
    <tr>
            <td>Meta-prompting</td>
            <td>When you're not sure how to even ask the right question. Use it early in fuzzy or creative concepting.</td>
        </tr>
    <tr>
            <td>Multi-role prompting</td>
            <td>When you need well-rounded, cross-functional critique or to simulate team feedback.</td>
        </tr>
     <tr>
            <td>Memory-updated “opponent” prompting</td>
            <td>When you want to challenge your own logic, uncover blind spots, or push beyond echo chambers.</td>
        </tr>
    </tbody>
</table>

<p>But even the best techniques won’t matter if you use them blindly, so ask yourself:</p>

<ul>
<li>Do I need precision or perspective right now?

<ul>
<li><em>Precision?</em> Try <strong>Role + Checkpoints</strong> for clarity and control.</li>
<li><em>Perspective?</em> Use <strong>Multi-Role</strong> or <strong>Tree of Thought</strong> to explore alternatives.</li>
</ul></li>
<li>Should the model reflect my framing, or break it?

<ul>
<li><em>Reflect it?</em> Use <strong>Role + Context + Instructions</strong>.</li>
<li><em>Break it?</em> Try <strong>Opponent prompting</strong> to challenge assumptions.</li>
</ul></li>
<li>Am I trying to reduce ambiguity, or surface complexity?

<ul>
<li><em>Reduce ambiguity?</em> Use <strong>Meta-prompting</strong> to clarify your ask.</li>
<li><em>Surface complexity?</em> Go with <strong>ToT</strong> or <strong>RBU</strong> to expose hidden layers.</li>
</ul></li>
<li>Is this task about alignment, or exploration?

<ul>
<li><em>Alignment?</em> Use <strong>Multi-Roles prompting</strong> to simulate consensus.</li>
<li><em>Exploration?</em> Use <strong>Cognitive Walkthrough</strong> to push deeper.</li>
</ul></li>
</ul>

<p>Remember, you don’t need a long prompt every time. Use detail when the task demands it, not out of habit. AI can do a lot, but it reflects the shape of your thinking. And prompting is how you shape it. So don’t just prompt better. Think better. And design with AI &mdash; not around it.</p>

<div class="signature">
  <img src="https://www.smashingmagazine.com/images/logo/logo--red.png" alt="Smashing Editorial" width="35" height="46" loading="lazy" decoding="async" />
  <span>(yk)</span>
</div>


              </article>
            </body>
          </html>
        ]]></content:encoded></item></channel></rss>