[Riddering, Michael. LinkedIn post (June 2024).](https://www.linkedin.com/posts/michaelriddering_im-on-a-mission-to-learn-more-about-designing-activity-7207077580766670848-ktvl)
> I'm on a mission to learn more about designing AI-native interfaces and [Maggie Appleton](https://www.linkedin.com/in/maggieappleton/) is the perfect guide. Here are some things I've learned 👇
>
> → The key is to embed AI into goal-oriented workflows rather than relying on open-ended interfaces. Chat UIs put the burden on users to articulate good questions/prompts. Soon we'll move toward a world that requires more "pointed" UI patterns and "vertical" interfaces.
>
> → The more you rely on AI to perform complex actions, the more important it becomes to visualize what the AI is doing behind the scenes. We're starting to see more UI patterns focus on breaking AI processes into concrete steps.
>
> → Right now you see a lot of generic disclaimers about AI’s propensity to make mistakes. But as AI becomes more integrated into your core product UX, blanket statements that push the liability to the AI aren’t going to cut it. You’ll have to design more specific disclosure systems that communicate confidence levels at granular output levels.
>
> → The industry is mostly focused on generative AI. But AI will play a huge role in condensing, synthesizing, and helping users achieve complex cognitive tasks. This is where the real opportunity lies (and what she's focusing on at Elicit).
>
> → For now, "human in the loop" flows are required. Users need to feel like they are in control. But over time this need might lessen as AI applications become more proactive and interfaces more dynamic.
>
> → Maggie is designing stacks of language model calls where she receives output from the first one, checks the quality, and then automatically feeds it into a second language model call. It's something you figure out in flow charts rather than in Figma, but it's become part of her role as a designer.
>
> → In order to design effective AI systems, it’s crucial to recognize the latent possibilities and limitations of LLMs. Designers are the people responsible for asking "what if". But many of the answers to those questions are going to be directly tied to AI capabilities. It will be very hard to effectively design the user experience in the future without understanding how LLMs work.
>
> I've personally listened to her episode twice already just to take more notes. It's so good 😅 [Listen here](https://www.dive.club/deep-dives/maggie-appleton)
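
The most concrete point above is the "stacks of language model calls" workflow: generate, check the output's quality, then feed it into the next call. The sketch below illustrates that shape only; `call_llm`, `quality_score`, the prompts, and the 0.7 threshold are hypothetical stand-ins, not Appleton's or Elicit's actual pipeline.

```python
# Minimal sketch of a two-step LLM chain with a quality gate between calls.
# `call_llm` is a placeholder for whatever chat-completion API is in use;
# the prompts and the 0.7 threshold are illustrative only.

def call_llm(prompt: str) -> str:
    """Stand-in for a real chat-completion call (e.g. an HTTP request to a provider)."""
    raise NotImplementedError("wire this up to your LLM provider")

def quality_score(text: str) -> float:
    """Crude quality check: penalise very short answers.
    In practice this could be another LLM call or a heuristic rubric."""
    return min(len(text.split()) / 100, 1.0)

def summarise_then_extract(document: str) -> str:
    # Step 1: the first model call condenses the source document.
    summary = call_llm(f"Summarise the key claims in:\n\n{document}")

    # Quality gate: only pass the summary forward if it looks good enough,
    # otherwise retry with a stricter prompt.
    if quality_score(summary) < 0.7:
        summary = call_llm(f"Summarise again, in more detail:\n\n{document}")

    # Step 2: the second model call works on the checked output of the first.
    return call_llm(f"List the assumptions behind these claims:\n\n{summary}")
```

The design work in this pattern lives in the gate and the hand-off between calls rather than in any screen, which is why it gets worked out in flow charts before it ever reaches Figma.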