
How to Get Good Results from GPT?

Prompt engineering is a bit like being a manager delegating work to subordinates. Here are the key techniques for getting the best out of generative AI models.

AI, GPT, prompt engineering, tips

Prompt engineering is a bit like being a manager and delegating work to subordinates. For example, if you are working with a generative artificial intelligence model and you want it to write a cover letter for a job, you don't just tell it "write something." You give it a well-constructed prompt, including your resume and a description of the job you are applying for. The art of prompt engineering is knowing what information to give and how to ask for what you want. And it is not an abstract art: within a few years of the appearance of generative models, hundreds of scientific articles had already been written on the subject.

1. Zero-Shot Prompting

This is the most immediate use, a bit like walking into a room and asking an intern a question point-blank. You give the model a task without any examples, and it uses what it already knows to make its best guess. For example, you might ask, "What is the best cold brew coffee on the market?" without ever having taught the model anything about coffee, extraction methods, or what makes a good aroma.
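As a minimal sketch, a zero-shot prompt is nothing more than the bare question wrapped in the chat-message format most model APIs expect. The helper name `build_zero_shot_prompt` below is hypothetical, not part of any particular library:

```python
def build_zero_shot_prompt(question: str) -> list:
    """Wrap a bare question in the role/content chat-message format.

    Zero-shot: no examples, no extra context -- just the task itself.
    """
    return [{"role": "user", "content": question}]


messages = build_zero_shot_prompt(
    "What is the best cold brew coffee on the market?"
)
```

The resulting `messages` list would be passed as-is to whatever chat-completion API you use.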

2. Few-Shot Prompting

This is like giving the AI a little help. Instead of sending it into the task blindly, you provide a few examples that show what you are looking for. Say you are teaching the AI about animal sounds. You might write, "A dog goes 'woof,' a cat goes 'meow,' what does a cow do?" From those few examples, the AI picks up the pattern and can answer, "A cow goes 'moo.'" In our opinion this technique is fundamental: giving the model examples dramatically improves its answers.
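A common way to assemble a few-shot prompt is to prepend solved question/answer pairs and leave the last answer blank for the model to fill in. This is a sketch with a hypothetical helper, `build_few_shot_prompt`:

```python
def build_few_shot_prompt(examples, query: str) -> str:
    """Build a few-shot prompt: solved Q/A pairs, then the open question.

    The model is expected to continue the pattern after the final "A:".
    """
    blocks = [f"Q: {q}\nA: {a}" for q, a in examples]
    blocks.append(f"Q: {query}\nA:")
    return "\n\n".join(blocks)


prompt = build_few_shot_prompt(
    [
        ("What does a dog say?", "A dog goes 'woof.'"),
        ("What does a cat say?", "A cat goes 'meow.'"),
    ],
    "What does a cow say?",
)
```

The examples do double duty: they show the model both the kind of answer you want and the exact format to produce it in.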

3. Chain-of-Thought (CoT) Prompting

Some problems require step-by-step thinking. CoT is like nudging the AI to think aloud while it solves a problem. Take a math problem as complicated as, "If you have 5 apples and give away 2 apples, how many do you have left?" You won't believe it, but adding "Break down the problem and reason step by step" lets the model work through calculations it would otherwise get wrong.
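In its simplest form, chain-of-thought prompting is just appending that instruction to the question. A sketch, with a hypothetical `with_chain_of_thought` helper:

```python
COT_INSTRUCTION = "Break down the problem and reason step by step."


def with_chain_of_thought(question: str) -> str:
    """Append a step-by-step instruction to elicit intermediate reasoning."""
    return f"{question}\n\n{COT_INSTRUCTION}"


prompt = with_chain_of_thought(
    "If you have 5 apples and give away 2 apples, how many do you have left?"
)
```

Variants of the instruction ("Let's think step by step", "Show your reasoning") work similarly; the point is to make the model write out intermediate steps before the final answer.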

4. Retrieval-Augmented Generation (RAG)

This is not strictly prompt engineering, but we can't leave it out. Have you ever been allowed to bring notes into a test? That is what RAG is for AI. When faced with a question, the AI pulls extra information from a database to enrich its answer. So if you ask, "Who was the first person on the moon?" the AI might weave in details about the Apollo 11 mission to give you a more complete answer.
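The core loop is: retrieve relevant documents, paste them into the prompt as context, then ask the question. The sketch below uses crude keyword overlap as the retriever; real systems use embedding similarity, and the helper names (`retrieve`, `build_rag_prompt`) are hypothetical:

```python
def retrieve(query: str, documents: list, top_k: int = 1) -> list:
    """Rank documents by naive word overlap with the query.

    A toy stand-in for a real retriever (embeddings, vector search).
    Punctuation handling is deliberately crude.
    """
    q_words = set(query.lower().split())
    ranked = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]


def build_rag_prompt(query: str, documents: list, top_k: int = 1) -> str:
    """Stuff the retrieved passages into the prompt as context."""
    context = "\n".join(retrieve(query, documents, top_k))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"


docs = [
    "Apollo 11 landed on the moon in July 1969.",
    "Neil Armstrong was the first person on the moon.",
    "Cold brew coffee is steeped in cold water for many hours.",
]
prompt = build_rag_prompt("Who was the first person on the moon?", docs)
```

The model then answers from the supplied context rather than from memory alone, which also makes it easier to cite sources.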

5. Self-Consistency

This technique is like double-checking your work in a test to make sure the answers hold together. You have the model generate multiple answers to the same problem and then compare them to find the most consistent one. For example, when solving a riddle, it might come up with a few hypotheses and then settle on the one that makes the most sense given what it knows.
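Once you have sampled several answers to the same prompt, self-consistency reduces to a majority vote over the final answers. A sketch, with the sampled answers hard-coded here in place of real model calls:

```python
from collections import Counter


def self_consistent_answer(answers: list) -> str:
    """Return the most frequent answer among several sampled completions."""
    return Counter(answers).most_common(1)[0][0]


# In practice these would come from sampling the model several times
# (with temperature > 0) on the same question.
samples = ["3", "3", "2", "3", "5"]
best = self_consistent_answer(samples)
```

The intuition: a single sampled chain of reasoning can go astray, but wrong paths tend to scatter across different answers while correct paths converge on the same one.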

6. Be kind

Make an effort to say "Please" when you ask, and "Thank you. Could you now..." when you make a new request. This technique is not from the literature: it comes from my grandmother. But, believe it or not, we have noticed that it works with GPTs too, not just people!

Happy prompting to all!