Making peace with using AI tools

This Simpsons clip makes for a great analogy.

This clip from The Simpsons episode “Treehouse of Horror II,” the 7th episode of season 3, reminded me how important it is to be very specific when crafting prompts for AI tools. Silicon Valley has created lots of hype about the potential of “AI” for everyone from the executive C-suite to the consumer market. It almost feels as powerful as a wish machine, like the fabled tale of “The Monkey’s Paw” by W. W. Jacobs.

The challenge is one of education: the scope and capabilities of these tools may not live up to the hype, and less tech-savvy users can form unreasonable expectations from marketing as problematic as Apple trying to fuse its Siri smart agent with ChatGPT’s large language model for generative content and actions.

High-quality input and specific requests, as text or voice prompts, are needed to get the kind of results a less tech-savvy user will want from what feels like a wishing machine. Like the unexpected consequences of the Monkey’s Paw, as users of AI tools we must be careful about what we wish for, and about which details we may be leaving out of our prompts.

We’ve seen variable results from organizations setting mandatory AI training goals in a rush to catch up with competitors. What many fail to realize is that the AI tools they’ve authorized may not be sufficient for every role and use case.

The expectation of rapid behavior change within large orgs, and the hope for a positive impact on the bottom line across all roles at all levels, is causing rapid cultural change, with many people still unsure how they feel about using these tools. So many headlines generate fear, uncertainty, and doubt about job opportunities across professions. We’ve learned that, in key contexts, each tool can summarize info, write text, edit articles, build software, create short videos, and generate output based on the data it’s being fed. The landscape is changing fast, with Google’s Gemini seemingly leading the race to more effective results among the models these tech firms are competing to deliver. We’re also much smarter about being strategic in using AI tools and models than IBM was in its earlier attempt to repurpose Watson for many contexts, which failed to meet expectations.

This likely isn’t due to the myth about learning styles, either. Human behavior change won’t happen at the speed of The Matrix. It has become expected for large organizations to approve AI tools for specific tasks, like using Figma Make for web software design (relying on Anthropic’s Claude and other models), or Lovable.dev, which does similar work while adding security checks and database management as part of its offering.

It’s helpful that Andrew Ng is resetting expectations and countering the hype machine about how quickly to expect artificial general intelligence, the kind of intelligence science fiction writers shared their dreams and nightmares about.

Hopefully, educating everyone from the C-suite to the VCs bankrolling start-ups, which depend increasingly on the latest training models to reach more useful and reliable agentic AI, will help set more reasonable expectations. Many are still struggling to catch up in a legally challenging world, where each new generative AI model improves and makes it harder for people in an always-connected age to distinguish fact from fiction.

The good news is that AI is also being used to counter disinformation, and hallucinations can be reduced with more powerful computational capabilities.

When learning through hands-on practice with ChatGPT two years ago, I experimented to see if it could deliver pro-democracy political marketing campaign assets. The results were disappointing, and of course, I eventually hit a paywall for anything that would be more useful than just using Google Search or history books for reference material about patterns of fear found in U.S. culture during its last immigration boom period.

In learning to use products like Lovable.dev and Figma Make, I initially asked too much of them with long prompts, but I’ve seen significant improvement that makes each really helpful when focused on smaller, achievable tasks, like creating interactive prototypes or generating a first rough draft of content that’s better than Lorem Ipsum.

However, I initially forgot to add security measures to the concepts I created with them to keep them from being found on the web. These intermediate designs needed more refinement before being shared widely, and I forgot to make sure they were hidden from Google and other search crawlers.

My work wasn’t likely to be found, thanks to obscurity, but I learned to be very specific: tell the bots I’m designing with to put my work on the web, but behind a login, and to use a robots.txt file to keep things that aren’t ready for review hidden for now.
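As a minimal sketch of that second safeguard, a robots.txt file served from the site root can ask well-behaved crawlers to skip a work-in-progress area (the /drafts/ path here is a hypothetical example, not a path from my actual projects):

```text
# robots.txt — served from the site root
# Ask all crawlers to skip the work-in-progress area
User-agent: *
Disallow: /drafts/
```

Keep in mind that robots.txt is purely advisory; anything genuinely private still belongs behind a login, not just out of the search index.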

Something to keep in mind as we all learn and adapt to this very different world full of easy-to-make disinformation.

I mean, what kind of shallow, low-life strong arms a nation for oil and prizes they didn't earn? Maybe someone with such an inferiority complex that they have to slap their damn name on anything they can to desperately seek fame and fortune.

An AI-generated image from Google Gemini Pro, with the watermark to tell you it’s fake, because duh. No one deserves to just be handed a Nobel Peace Prize. Like the creative process, accolades for creativity and social contributions are earned, not just given away for free and without human effort.

Evan Wiener

I ❤️ leading research & design project teams that get results. Let's connect or chat on Bluesky about how I can bring the kind of results you expect from a product and marketing strategy.

https://obviouswins.com