2. The Challenge of Context: Why Prompting Isn’t Simple

Consider an example from Malayalam: “കാട്ടാന വീട്ടിൽ കയറിയാൽ വീട്ടുകാരെന്തു കാട്ടാന?” (Kaattana veettil kayariyal veettukaarenthu kaattana? – roughly, “If a wild elephant enters the house, what can the householders do?”). Here, the word “കാട്ടാന” (kaattana) appears twice. The first “കാട്ടാന” refers to a wild elephant, while the second means “to do” (as in “what can they do?”). A human listener instantly grasps these two very different meanings just by hearing the sentence. For an Artificial Intelligence (AI) model, however, a single word carrying two entirely different meanings within the same sentence highlights a fundamental problem: the challenge of context.

Previously, talking to computers required special programming languages or codes: you had to learn their language. With modern AI, especially in areas like Natural Language Processing (NLP), computers can now understand and talk back in human language. This is a huge step forward that makes computers far easier to use. However, this very flexibility introduces a big challenge: human language is rich in nuance and relies heavily on surrounding context for accurate understanding. Unlike the rigid, clearly defined commands of computer code, words and phrases in everyday language can carry many meanings, depending on the other words around them, who is speaking, the situation, and even shared cultural knowledge. This “context problem” is why getting useful, precise outputs from AI is a non-trivial problem that demands our conscious effort. For us, the users, it means we must really understand how AI processes our commands to get the output we want. Here lies the opportunity for prompt engineering: turning this challenge into a powerful way to guide AI effectively.

This challenge breaks down into several key hurdles for prompt engineers:

Ambiguity 🧐

Human language is full of words that can be confusing, forcing AI to guess meanings. For example, the word “bank” can mean the place where you keep money or the side of a river. As prompt engineers, we need to add enough detail to our prompts so the AI isn’t left guessing and picks the right meaning.
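To make this concrete, here is a minimal sketch of disambiguation by adding context. The prompt strings are purely illustrative, not a fixed recipe:

```python
# A vague prompt leaves the model to guess which sense of "bank" we mean.
vague_prompt = "Describe the bank."

# Adding a sentence of context steers the model toward the intended sense.
clear_prompt = (
    "Describe the bank.\n"
    "Context: we are discussing rivers and erosion, so 'bank' means "
    "the land alongside a river, not a financial institution."
)

print(clear_prompt)
```

The extra sentence costs almost nothing to write, but it removes the guesswork entirely.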

Lack of Common Sense and World Knowledge 🧠

Humans know many basic things about how the world works just by living in it. AI, even after reading tons of information, doesn’t “experience” the world this way. This makes it hard for AI to understand unspoken details. For example, if you tell an AI, “He took a course in the spring. He got a nice job in the summer,” a human understands the course likely led to the job. As prompt engineers, we often need to make these connections very clear in our instructions.
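One way to “make these connections very clear” is to spell out the assumed cause-and-effect in the prompt itself. A small sketch, using the course-and-job example above:

```python
# The bare sentences leave the causal link unstated.
implicit = "He took a course in the spring. He got a nice job in the summer."

# Stating the assumption explicitly removes the need for common-sense inference.
explicit = (
    implicit + "\n"
    "Assume the course taught skills that directly led to the job offer. "
    "Summarize this cause-and-effect relationship in one sentence."
)

print(explicit)
```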

Cultural and Emotional Nuance 🎭

Language is deeply connected to culture and feelings. Phrases like “kick the bucket” (meaning to die) are hard for AI to understand literally. It also struggles to pick up on subtle feelings like sarcasm from text alone. Prompt engineers must be careful with their wording to ensure the AI understands the true intention behind the language, especially when dealing with such nuances.
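One practical tactic is to flag figurative language explicitly so the model doesn’t take it at face value. The phrasing below is illustrative only:

```python
# Telling the model an idiom is in play prevents a literal reading.
idiom_prompt = (
    'Explain this sentence: "After a long illness, he finally kicked the bucket."\n'
    'Note: "kick the bucket" is an idiom meaning to die; '
    "do not interpret it literally."
)

print(idiom_prompt)
```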

Long-Term Context and Memory 💾

While modern LLMs have improved at remembering details during a single chat, they often “forget” things between different conversations or lose track in very long ones. This means a prompt engineer may need to remind the AI of past information or background details at the start of each new interaction to keep the context clear. I experienced this directly when I used Gemini to make notes from SCERT books. Gemini was excellent at extracting notes in Malayalam, which is crucial for regional languages. But as the chat grew longer and longer, I noticed Gemini would start to deviate from the earlier prompt, producing outputs that weren’t what I wanted. I was astonished! Initially, I just managed by repeating the prompt to remind it, but later, after taking a course, I understood this was the AI losing the earlier chat’s context in lengthy conversations. While notebook LLMs are now available to help with this, my earlier experiences truly highlighted this “memory” challenge.
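The “remind it each time” workaround can be turned into a habit: keep a short running summary and prepend it to every new request. A minimal sketch of that pattern (the helper name and phrasing are my own; you would pass the resulting string to whatever model API you use):

```python
def build_contextual_prompt(summary: str, new_request: str) -> str:
    """Prepend a running summary so each new turn re-supplies earlier context."""
    return (
        "Background from earlier in this task:\n"
        f"{summary}\n\n"
        f"New request: {new_request}"
    )

# Hypothetical example in the spirit of the SCERT note-taking task above.
summary = (
    "We are extracting study notes in Malayalam from SCERT textbooks; "
    "keep the original chapter headings and bullet the key points."
)
prompt = build_contextual_prompt(summary, "Summarize chapter 3.")
print(prompt)
```

Because the background travels with every request, a long chat (or a brand-new one) can no longer silently drop the earlier instructions.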

Bias in Training Data ⚖️

AI learns from the vast amounts of data it’s trained on. If this training data has human biases (like unfair stereotypes), the AI can unintentionally repeat or even strengthen these biases in its answers. As prompt engineers, we need to be aware of this and try to write prompts that encourage fair and neutral responses.
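“Prompts that encourage fair and neutral responses” can be as simple as appending explicit fairness instructions to a question. A sketch, with wording that is illustrative rather than definitive:

```python
def neutral_prompt(question: str) -> str:
    # Append explicit instructions nudging the model toward balanced output.
    return (
        f"{question}\n"
        "Answer neutrally: avoid stereotypes, present multiple perspectives, "
        "and state uncertainty where the evidence is mixed."
    )

print(neutral_prompt("What traits make someone a good leader?"))
```

This doesn’t remove bias from the training data, but it does steer the response away from one-sided or stereotyped framing.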

Ongoing work in AI, especially in language understanding, aims to solve these context challenges. But achieving a human-like understanding of context remains a big and complex goal. My experience shows that clear, precise prompting is truly the key.

