AI Can’t Read Minds, So Learn to Spell Things Out

Introduction to Prompting

First, the good news: it is now possible to generate programs, illustrations, and other AI output with plain-English prompts, without having to write code in Python, R, or SQL. Now, the reality check: this new way of engaging machines, prompting, requires knowing exactly what to ask and being able to drill down to specifics. Otherwise, executives and business users end up with vague, rehashed, or outright wrong answers to their queries. This can be very problematic when decision-makers assume AI knows all.

The Evolution of Self-Service Environments

Prompting may be the ultimate stage of self-service, no-code environments, which have been evolving for decades. Executives and business users can simply make plain-English queries against language models and see results relatively fast, be it reports or applications. Prompts can even be spoken aloud. Now, emerging memory features may help retain prompts for future use and refinement.

The Importance of Prompting Right

All good, right? But we need to do prompting right, according to AI expert Nate B. Jones, who was Michael Krigsman’s recent guest on CXOTalk. Krigsman teed up the discussion by calling prompting “the secret skill that taps into AI’s real capabilities, transforming large language models from flashy demos into engines of real-world productivity.” The art of prompting collides with the vagueness and inconsistency of human language, Jones explained. That was the whole purpose of computer languages in the first place: they offer precise, step-by-step instructions.

Challenges with Prompting

But while LLMs may have more intelligence than standard databases and applications, they aren’t mind-readers. “They are not incredibly reliable yet at inferring your intent if you are not precise about what you mean or want,” said Jones. “They don’t do that reliably. They guess, and they might guess right, and they might guess wrong.” Then there is the time cost: even though individual responses arrive relatively quickly, end users may have to prompt over and over to get things right.

Best Practices for Effective Prompting

According to Jones, there are three considerations for developing an effective prompt:

  • Be really clear about the outcome that you are looking for and about how the model can know that it’s done. “The more you can specify and be clear about what you’re looking for and what good looks like, the better off you’re going to be for the rest of the prompt.”
  • Provide the model all the context it requires, but don’t overdo it. “Be more clean and clear about, ‘this is what I want you to focus on in a web search,’ or ‘here’s some documents I want you to review. I want you to keep your thinking focused around this particular set of meeting transcripts.’"
  • Understand the constraints and guardrails that you need. “Make sure that the model knows, ‘don’t do this. Where do I not go?’” Jones speculated that this thinking derives from how we deal with human colleagues. “We don’t tend to regard a senior colleague as someone who needs a tremendous number of warnings and constraints for a task. We just say, ‘hey, go tackle this. I’m sure you’ll do a great job. Come back and let me think about what you get.’”
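Jones’s three considerations lend themselves to a simple template. The sketch below is illustrative, not from the talk: the function name and the prompt layout are assumptions, but the structure mirrors his advice of stating the outcome (and what “done” looks like), narrowing the context, and spelling out the guardrails.

```python
def build_prompt(outcome: str, context: str, constraints: str) -> str:
    """Assemble a prompt from the three elements Jones describes:
    a clear outcome, focused context, and explicit guardrails.
    This layout is a hypothetical example, not a standard format."""
    return "\n\n".join([
        # 1. Be clear about the outcome and how the model knows it's done.
        f"Goal: {outcome}",
        # 2. Provide the context it needs, but keep it focused.
        f"Context (focus only on this): {context}",
        # 3. State the constraints: where should the model not go?
        f"Constraints: {constraints}",
    ])

prompt = build_prompt(
    outcome="Summarize the decisions made; done when each decision has a named owner.",
    context="The Q3 planning meeting transcripts provided below.",
    constraints="Do not speculate beyond the transcripts; flag anything unclear instead.",
)
print(prompt)
```

Keeping the three parts separate makes a prompt easy to review and refine: if the output misses the mark, you can usually tell which of the three sections was underspecified.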

Conclusion

Ultimately, what these models are trying to do “is just infer from your utterances what they think you mean,” Jones explained. They need to “figure out where in latent space they can go and get a reasonable pattern match, do some searching across the web. In the case of an inference model, do a lot of that iteratively [so] they can figure out what’s best, and then put together something.” Jones predicted that within the next few years, the models will gain so much experience that sharp prompting skills may not be as necessary. But in the meantime, developing effective prompting skills is crucial for getting the most out of AI.

FAQs

  • Q: What is prompting in AI?
    A: Prompting is the process of giving plain-English instructions to AI models to get specific results or outputs.
  • Q: Why is prompting important?
    A: Prompting is important because it allows executives and business users to get specific results from AI models without needing to write code.
  • Q: What are the challenges of prompting?
    A: The challenges of prompting include the need for precise language, the risk of vague or incorrect results, and the time involved in waiting for responses to prompts.
  • Q: How can I develop effective prompting skills?
    A: To develop effective prompting skills, be clear about the outcome you are looking for, provide the model with the necessary context, and understand the constraints and guardrails that you need.