Part 1 - Mastering OpenAI: Setup, Principles, and Advanced Prompting Tactics

August 22, 2023 (2y ago)

Introduction

OpenAI's GPT-3.5-turbo has revolutionized the realm of language models, offering an array of applications from chatbots to creative content generation. To harness its capabilities to the fullest, one needs to understand not only how to set it up but also the art of crafting the perfect prompt. This guide provides an in-depth look into the setup process, foundational prompting principles, and advanced tactics to refine your interactions with the model.

Setting Up OpenAI

Installation and Basic Setup

Kickstart your journey with the OpenAI SDK by importing essential modules and initializing the primary service class. Remember, you'll need an API key from OpenAI to proceed.

import OpenAI from "openai";
import { CreateChatCompletionRequestMessage } from "openai/resources/chat";

class OpenAIService {
  // Read the API key from the environment so it never lives in source code.
  private apiKey = process.env.OPEN_AI_KEY;

  // A single OpenAI client instance shared by every method of this service.
  private instance = new OpenAI({
    apiKey: this.apiKey,
  });
}

const openAiInstance = new OpenAIService();
export default openAiInstance;
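Before instantiating the service, make sure the OPEN_AI_KEY environment variable is actually set. As a minimal sketch, assuming you keep the key in a local .env file and load it with the dotenv package (my assumption, not a requirement of the OpenAI SDK), you can do this at your app's entry point:

// Hypothetical entry point (e.g. index.ts).
// Assumes a .env file with a line like: OPEN_AI_KEY=sk-...
// and the dotenv package installed (npm install dotenv).
import "dotenv/config"; // populates process.env before OpenAIService reads it

Whatever loading mechanism you choose, keep the key itself out of version control.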

Helper Function: get_completion

To interact with the model and receive responses, we'll create a helper function called get_completion. This function will simplify the process of sending prompts to the model and retrieving its output.

async get_completion(prompt: string) {
  // Wrap the raw prompt in the chat message format the API expects.
  const messages: CreateChatCompletionRequestMessage[] = [
    { role: "user", content: prompt },
  ];

  // temperature: 0 keeps the output as deterministic as possible,
  // which makes it easier to compare prompts while experimenting.
  const response = await this.instance.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages,
    temperature: 0,
  });

  console.log(response.choices[0].message.content);
  return response.choices[0].message.content;
}

With this function in place, you're all set to communicate with GPT-3.5-turbo and start crafting your prompts.

Now our final setup looks like this:

import OpenAI from "openai";
import { CreateChatCompletionRequestMessage } from "openai/resources/chat";

class OpenAIService {
  private apiKey = process.env.OPEN_AI_KEY;
  private instance = new OpenAI({
    apiKey: this.apiKey,
  });

  async get_completion(prompt: string) {
    const messages: CreateChatCompletionRequestMessage[] = [
      { role: "user", content: prompt },
    ];
    const response = await this.instance.chat.completions.create({
      model: "gpt-3.5-turbo",
      messages,
      temperature: 0,
    });

    console.log(response.choices[0].message.content);
    return response.choices[0].message.content;
  }
}

const openAiInstance = new OpenAIService();
export default openAiInstance;
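To sanity-check the setup, call the helper from anywhere in your project. Here is a quick illustrative usage; the import path is an assumption based on wherever you saved the service file:

// Hypothetical consumer file; adjust the import path to your project layout.
import openAiInstance from "./services/openai";

async function main() {
  const reply = await openAiInstance.get_completion(
    "Say hello in one short sentence."
  );
  console.log(reply);
}

main().catch(console.error);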
 

Prompting Principles

Harnessing the power of GPT-3.5-turbo requires more than just sending random strings of text. Crafting effective prompts is both an art and a science. Here are two foundational principles to keep in mind:

1. Write Clear and Specific Instructions

Language models like GPT-3.5-turbo thrive on clarity. Ambiguity can often lead to vague or unrelated outputs. Be explicit in your instructions to get precise results.

2. Give the Model Time to “Think”

While GPT-3.5-turbo responds almost instantly, pushing it straight to a final answer often produces shallow or incorrect results. Give it room to reason: allow longer outputs, ask it to work through the problem step by step, or break a complex query into smaller tasks.

Advanced Prompting Tactics (Derived from Principle One)

Note: The tactics presented below are derived from the first foundational principle: "Write Clear and Specific Instructions".

Dive deeper with these advanced tactics to enhance the clarity and structure of your prompts:

Tactic 1: Use Delimiters for Clear Distinction

Segment your input with delimiters so the model can clearly tell your instructions apart from the text it should operate on.

Prompt

const tacticOnePromptText = `
You should express what you want a model to do by providing instructions that are as clear and specific as you can possibly make them. This will guide the model towards the desired output,
and reduce the chances of receiving irrelevant or incorrect responses...
`;
 
const tacticOnePrompt = `
Summarize the text delimited by triple === into a single sentence.
=== ${tacticOnePromptText} === 
`;

Example

const tacticOneResponse = await openAiInstance.get_completion(tacticOnePrompt);
console.log(tacticOneResponse);

Output

To guide a model towards the desired output and reduce the chances of irrelevant or incorrect responses, it is important to provide clear and specific instructions, even if it means writing longer prompts that provide more clarity and context.

Tactic 2: Ask for Structured Output

Asking for output in a specific format like JSON or HTML makes the response easier to read and, more importantly, easier to process programmatically.

Prompt

const tacticTwoPrompt = `
Generate a list of three made-up book titles along with their authors and genres. Provide them in JSON format with the following keys: book_id, title, author, genre.
`;

Example

const tacticTwoResponse = await openAiInstance.get_completion(tacticTwoPrompt);
console.log(tacticTwoResponse);

Output

{
  "books": [
    {
      "book_id": 1,
      "title": "The Enigma of Elysium",
      "author": "Evelyn Sinclair",
      "genre": "Mystery"
    },
    ...
  ]
}
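The completion is still just a string, so parse it before using it in code. A small illustrative follow-up, assuming the model returned valid JSON shaped like the output above (in real code, verify this rather than trusting it):

// get_completion returns the raw completion text (possibly null),
// so fall back to an empty object before parsing.
// Wrap JSON.parse in try/catch in production: the model can return malformed JSON.
const parsedBooks = JSON.parse(tacticTwoResponse ?? "{}");
console.log(parsedBooks.books?.[0]?.title);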

Tactic 3: Condition Checking

Direct the model to first check whether the input satisfies a condition, and then format its response accordingly.

Prompt

const tacticThreePromptText = `
Making a cup of tea is easy! First, you need to get some water boiling...
`;
 
const tacticThreePrompt = `
You will be provided with text delimited by triple quotes. If it contains a sequence of instructions, re-write those instructions in the following format:
 
Step 1 - ...
...
Step N - …
 
If the text does not contain a sequence of instructions, then simply write "No steps provided."
 
"""${tacticThreePromptText}"""
`;

Example

const tacticThreeResponse = await openAiInstance.get_completion(tacticThreePrompt);
console.log(tacticThreeResponse);

Output

Step 1 - Get some water boiling.
...
Step 7 - Enjoy your delicious cup of tea.
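To exercise the other branch of the condition, you can pass in text that contains no instructions at all; per the prompt, the model should simply reply "No steps provided." A hedged sketch, reusing the same instruction wording via a small helper of my own:

// Illustrative helper that wraps any text in the same instruction template.
const buildStepPrompt = (text: string) => `
You will be provided with text delimited by triple quotes. If it contains a sequence of instructions, re-write those instructions in the following format:

Step 1 - ...
...
Step N - …

If the text does not contain a sequence of instructions, then simply write "No steps provided."

"""${text}"""
`;

// Illustrative text with no step-by-step instructions in it.
const tacticThreeAltText = `
The sun is shining brightly today, and the birds are singing...
`;

const tacticThreeAltResponse = await openAiInstance.get_completion(
  buildStepPrompt(tacticThreeAltText)
);
console.log(tacticThreeAltResponse); // per the prompt, this should be: No steps provided.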

Tactic 4: Few-shot Prompting

With a few-shot approach, you give the model one or more worked examples so it can infer the expected output format or style before it answers.

Prompt

const tacticFourPrompt = `
Your task is to answer in a consistent style.
 
<child>: Teach me about patience.
 
<grandparent>: The river that carves the deepest valley flows from a modest spring...
 
<child>: Teach me about resilience.
`;

Example

const tacticFourResponse = await openAiInstance.get_completion(tacticFourPrompt);
console.log(tacticFourResponse);

Output

<grandparent>: Resilience is like a mighty oak tree that withstands the strongest storms...

These tactics stem from the first principle of crafting prompts. Stay tuned for our next installment, where we'll delve into advanced tactics derived from the second principle. Mastering these techniques will empower you to optimize your interactions with GPT-3.5-turbo and achieve remarkable results!

Access the Complete Code

For a hands-on look at the code and to delve deeper, check out my GitHub repository.