"Prompt engineering is the process of providing detailed instructions to generative artificial intelligence (GenAI) to produce the desired output."
ChatGPT and its analogs appeared, and along with them came instructions, tutorials, and courses for their users, forming a new field called Prompt Engineering. Nothing unusual at first glance, right? The emergence of a new tool is usually accompanied by instructions on how to use it.
At the same time, many publications appeared claiming that the intellectual capabilities of GenAI allow such systems to replace writers, lawyers, programmers, and doctors in providing the corresponding services, illustrated with specific examples of how it all works. This seems quite reasonable: AI systems do demonstrate the ability to generate useful texts, program code, and images.
But let's pay attention to this: when we receive the services of a human professional, we do not require any training in the art of communicating with them, be it a lawyer or a doctor; there are no courses in prompt engineering for talking to a car salesman or a waiter in a restaurant.
What explains the difference between an AI lawyer or AI writer and a human lawyer or human writer? Why do we need prompt engineering in the first case, while in the second we do just fine without it?
Now is the time to remember the tale of the stone soup:
[ en.wikipedia.org/wiki/Stone_Soup ]
Travelers arrive at a village carrying nothing but an empty pot. The villagers are unwilling to share their food, so the travelers fill the pot with water, drop in a large stone, and set it over a fire. A curious villager asks what they are doing; the travelers answer that they are making "stone soup", which tastes wonderful, although it still needs a little garnish to improve the flavor. The villager, who anticipates enjoying a share of the soup, does not mind parting with a few carrots, so these are added to the soup. Another villager walks by, inquiring about the pot, and the travelers again mention their stone soup, which has not yet reached its full potential. More and more villagers walk by, each adding another ingredient: potatoes, onions, cabbages, peas, celery, tomatoes, sweetcorn, meat (chicken, pork, beef), milk, butter, salt, and pepper. Finally, the stone (being inedible) is removed from the pot, and a delicious and nourishing pot of soup is enjoyed by travelers and villagers alike.
The fairy tale suggests what the missing component is in the case of "LLM soup": something without which the service provider turns into an analog of a traditional tool (a washing machine, a TV, a lawn mower) - a tool that does not claim the status of an AI system and accordingly requires training in its use.
The tool (the LLM service) lacks the ability to reason. In non-trivial cases, extracting what is required from the remarkable, gigantic body of information memorized by the LLM is possible only by drawing on the user's own ability to reason. Prompt engineering uses the user's intelligence to compensate for the LLM's lack of intelligence.
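To make this concrete, here is a minimal sketch of what "adding the user's reasoning to the soup" looks like in practice. It assumes the OpenAI Python client and an illustrative model name; the contract-review task, the step decomposition, and the checklist are hypothetical examples of the analysis a user supplies, not anything from the original text.

```python
# A minimal sketch, assuming the OpenAI Python client ("pip install openai")
# and an API key in the OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Naive prompt: all reasoning is delegated to the model.
naive_prompt = "Review this contract clause for risks: <clause text>"

# Engineered prompt: the user's own analysis - the decomposition into steps,
# the checklist of risks, the required output format - is baked into the
# instructions. These are the "ingredients" the user adds to the stone soup.
engineered_prompt = (
    "You are reviewing a single contract clause.\n"
    "Work through these steps in order:\n"
    "1. Identify the obligations the clause imposes on each party.\n"
    "2. Check for missing limits: liability caps, termination rights, deadlines.\n"
    "3. Flag any ambiguous terms and propose a precise rewording.\n"
    "Output a numbered list, one finding per line.\n\n"
    "Clause: <clause text>"
)

def ask(prompt: str) -> str:
    # Standard chat-completions call; "gpt-4o-mini" is an assumed model name.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Compare the two answers side by side.
print(ask(naive_prompt))
print(ask(engineered_prompt))
```

The model's weights are identical in both calls; the only variable is how much of the user's own reasoning has been transferred into the prompt.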
The term "prompt engineering" essentially plays the role of a fig leaf, masking the lack of intelligence in LLM systems - and, at the same time, masking the dubiousness of classifying these systems as AI.
Of course, LLM systems are helpful and should be used where possible. Still, once we recognize that useful results in non-trivial cases are achieved by involving the user's intelligence in the process, it becomes clear why acceptable results should not be expected when an LLM system is applied to non-trivial tasks in the absence of a human - for example, as the basis for controlling a robot or autonomously driving a car.
Without the cooperation of human intelligence hidden behind the term "prompt engineering", the capabilities of LLMs degrade radically.