How Much Do We Have Control Over AI Reactions?

How Much Do We Have Control Over AI Reactions?
  • Spherical Coder
  • Technology Updates - AI & Automation in Software

How Much Do We Have Control Over AI Reactions?

Today’s search landscape feels controllable yet unpredictable, as AI responses are driven by probabilistic LLM outputs, challenging how we influence and interpret results.

How Much Do We Have Control Over AI Reactions?

The search landscape we currently face is frighteningly easy to control and unpredictable in its effect. We continue to inquire about how to affect AI responses, ignoring the fact that LLM outputs are inherently probabilistic.

 

What We Can Control

  • Training data: By choosing and sifting the data used to train models, developers may affect how AI behaves.
  • Guardrails & fine-tuning: More instruction, security guidelines, and moderation mechanisms influence AI's behavior.
  • Prompt design: A question's formulation has a big impact on the results.
  • System instructions: By implementing internal policies, platforms can control response style, tone, and bounds.

What We Can’t Fully Control

  • Emergent behavior: Because of their intricacy, large models can occasionally provide surprising results.
  • Context sensitivity: Even little phrasing adjustments might have wildly disparate results.
  • Data influence: Responses may exhibit skewed or deceptive patterns that are present in the training data.
  • User manipulation attempts: Sometimes, the intended behavior can be circumvented by prompt injection or aggressive language.

 

The probabilistic nature of LLMs may make it challenging to sway their responses for the following seven valid reasons:

1.Lottery-like results: Search engines (deterministic) are not LLMs (probabilistic). At the micro level (single prompts), responses differ greatly.

 

2.Inconsistency. Answers from AI are inconsistent. There are only 20% of brands that appear consistently after five runs of the same prompt.

 

3.Pre-training data gives models a bias, which Dan Petrovic refers to as "Primary Bias." It's unclear how much we can change or get rid of that pre-training bias.

 

4.Models change over time: When comparing 3.5 and 5.2, ChatGPT has significantly improved in intelligence. Are "old" strategies still effective? How can we make sure strategies remain effective for updated models?

 

5.Models differ: Different sources are weighed by models for web retrieval and training. For instance, AI Overviews cites Reddit more than ChatGPT, which relies more on Wikipedia.

 

6.Individualization: Gemini may provide you with far more individualized results than ChatGPT because it has greater access to your personal information through Google Workspace. Additionally, models may differ in the extent of personalization they permit.

 

7.Additional background: Long prompts give users much more context about what they want, which reduces the number of potential responses and makes it more difficult to sway them.

Final Thoughts: AI is not under our control in the same way that traditional software is. Rather, we mold patterns and limits, understanding that results will still be somewhat unpredictable.