Exciting news! OpenAI just released the ChatGPT API. Here's how you can build yourself a chatbot service that answers support requests for your app or SaaS.
Libraries
Here's what we'll use:
1. OpenAI API 🤖
2. Python 🐍
Here are the steps:
1. Difference between ChatGPT and GPT-3
2. Get OpenAI API keys
3. Create FAQ data
4. Create a new prompt
5. Test the model on a new prompt
1. Difference between ChatGPT and GPT-3 API
Before we dive into the code, it's important to understand the difference between OpenAI's ChatGPT and GPT-3 models.
So, what's the difference between ChatGPT and GPT-3? ChatGPT is OpenAI's new model family designed specifically for chat-based interactions, while GPT-3 is a general-purpose model that generates free-form text for a variety of applications.
One of the main differences is that ChatGPT models consume a sequence of messages with metadata, while GPT-3 consumes unstructured text represented as a sequence of "tokens". ChatGPT uses a new format called Chat Markup Language (ChatML), which allows for a more contextual understanding of conversations.
The ChatGPT API is also priced at $0.002 per 1k tokens, which is 10x cheaper than the existing GPT-3.5 models such as text-davinci-003. The model that powers it, gpt-3.5-turbo, is also faster and more cost-effective than its predecessors. So, if you're looking to build a chatbot for support requests, ChatGPT is definitely worth considering.
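To make the difference between the two input formats concrete, here's a minimal sketch comparing them; the question and instruction texts are just examples:

```python
# GPT-3-style completion input: one unstructured prompt string
gpt3_prompt = "Q: How do I update my payment method?\nA:"

# ChatGPT-style chat input: a list of role-tagged message objects
chat_messages = [
    {"role": "system", "content": "You are a helpful support assistant."},
    {"role": "user", "content": "How do I update my payment method?"},
]
```

The role tags are what let the model keep track of who said what across a conversation.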
2. Get OpenAI API keys
Before we go ahead and start coding, let's get the OpenAI credentials needed for the API calls.
Go to https://platform.openai.com/, log in, click on your avatar, and select View API keys:
Then create a new secret key and save it for the request:
Now we have all the credentials needed to make an API request.
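One common approach, used later in this guide, is to store the key in an environment variable rather than hard-coding it in your source files; the key value below is a placeholder:

```shell
# Store the API key in an environment variable so it never lands in source code
export OPENAI_API_KEY="sk-your-key-here"
```

The Python client can then read the key with os.environ['OPENAI_API_KEY'].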
3. Create FAQ data
The next step is to create the FAQ data you'll use as input and prompt.
Let's start by importing the packages we'll be using:
import json
import os

from openai import OpenAI

If you already have openai installed, run pip install openai --upgrade in your terminal to make sure you have access to the OpenAI client class and the newly added gpt-3.5-turbo model.
In this use case, we're building a FAQ-answering bot, so let's come up with some questions.
💡 Tip: Automate question-answer writing 💡
I asked ChatGPT the following:
"Give me some made-up questions a user might have while using a SaaS, also write made-up answers to each question, make all the questions about Billing and Subscription"
And I got this:
Go ahead and make your OpenAI API key available as the OPENAI_API_KEY environment variable, then initialize the client:
# Initialize the OpenAI client with the API key from the environment
openai_client = OpenAI(
    api_key=os.environ['OPENAI_API_KEY']
)
Then create a list with dictionaries, where each dict has a question and an answer:
faq_data = [{
"question": "How can I get a copy of my invoice or receipt for my subscription payment?",
"answer": "To obtain a copy of your invoice or receipt for your subscription payment, simply log in to your account and navigate to the 'Billing' section. From there, you can view and download your past invoices and receipts."
},
{
"question": "How do I update my payment method for my subscription?",
"answer": "To update your payment method for your subscription, log in to your account and go to the 'Billing' section. From there, you can add, remove, or modify your payment method. Be sure to save your changes to ensure that your subscription remains active."
},
{
"question": "Can I switch to a different pricing plan or downgrade my subscription?",
"answer": "Yes, you can switch to a different pricing plan or downgrade your subscription at any time. Simply log in to your account and go to the 'Billing' section. From there, you can view and select your desired plan. Please note that if you downgrade your subscription, you may lose access to certain features or services that were available in your previous plan. Additionally, any price changes will take effect at the next billing cycle."
}]
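Since we imported json earlier, one optional step is to keep the FAQ pairs in a file so they can be edited outside the code; a quick sketch, where the filename faq_data.json is just an example:

```python
import json

faq_data = [{
    "question": "How do I update my payment method for my subscription?",
    "answer": "Log in to your account and go to the 'Billing' section.",
}]

# Persist the FAQ data so it can be edited without touching the code
with open("faq_data.json", "w") as f:
    json.dump(faq_data, f, indent=2)

# Load it back when building the prompt
with open("faq_data.json") as f:
    loaded_faq_data = json.load(f)
```

This also makes it easy to grow the FAQ list over time without redeploying your bot.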
The next step is to create the message objects needed as input for the ChatGPT completion function.
From the Chat completion documentation:
"The main input is the messages parameter. Messages must be an array of message objects, where each object has a role (either 'system', 'user', or 'assistant') and content (the content of the message). Conversations can be as short as 1 message or fill many pages." https://platform.openai.com/docs/guides/chat/introduction
Build the message objects by looping over the faq_data list, appending a dict with role and content keys for each question and answer:
message_objects = []
for faq in faq_data:
    message_objects.append({
        "role": "user", "content": faq['question']
    })
    message_objects.append({
        "role": "assistant", "content": faq['answer']
    })

message_objects
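The loop above only produces user and assistant turns. Optionally, you can also prepend a system message that frames the assistant's behavior; the instruction text below is only an example, adjust it to your product:

```python
message_objects = [
    {"role": "user", "content": "How do I update my payment method?"},
    {"role": "assistant", "content": "Log in and go to the 'Billing' section."},
]

# Prepend a system message so it comes before all the FAQ examples
message_objects.insert(0, {
    "role": "system",
    "content": "You are a friendly support assistant for a SaaS product.",
})
```

A system message is a lightweight way to set the bot's tone and scope without changing the FAQ data itself.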
4. Create a new prompt
Now that we have our FAQ data, we can create a new prompt, which we'll then append to message_objects:
new_prompt = "How do I switch to a new credit card?"
Then append the new prompt to the message_objects list we already created:
message_objects.append({"role":"user", "content":new_prompt})
The message_objects list should now look something like this, with the new prompt dict appended last:
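For illustration, with a single FAQ pair the full list would be structured like this (the contents are shortened here):

```python
message_objects = [
    {"role": "user", "content": "How can I get a copy of my invoice?"},
    {"role": "assistant", "content": "Log in and go to the 'Billing' section."},
    {"role": "user", "content": "How do I switch to a new credit card?"},  # the new prompt
]
```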
5. Test the model on a new prompt
The final step is to test your chatbot service with the new prompt. Call the openai_client.chat.completions.create function with our messages, making sure to use the new gpt-3.5-turbo model:
response = openai_client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=message_objects
)
response.choices[0].message.content
The model's reply can be extracted with response.choices[0].message.content
and should look something like this:
This will give us the AI-generated response to our prompt based on the context of the FAQ messages we provided.
We're all set; this is how easy it is to leverage the power of ChatGPT to create conversational AI applications.
With just a few lines of code, we can build a simple chatbot service that can understand natural language and provide useful responses to common questions.
Summary
Here's a summary of what we did:
1. Talked about the difference between ChatGPT and GPT-3
2. Got our OpenAI API keys
3. Created questions and corresponding answers for FAQ data
4. Created a new support question as a new prompt
5. Tested the model with the new prompt
6. Troubleshooting
AttributeError
AttributeError: module 'openai' has no attribute 'ChatCompletion'
This probably means that the version of your Python client library for the OpenAI API is lower than 0.27.0.
Run pip install openai --upgrade in your terminal to get the latest version and make sure it is at least 0.27.0:
InvalidRequestError
InvalidRequestError: This model's maximum context length is 4096 tokens
This indicates that the message_objects list sent to the ChatGPT API has exceeded the maximum allowed length of 4096 tokens. You will need to shorten the length of your messages to resolve the issue:
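One rough way to do this is to drop the oldest FAQ turns until an estimated token count fits the limit. The 4-characters-per-token ratio below is only a rule of thumb for English text; a tokenizer such as tiktoken gives exact counts:

```python
def estimate_tokens(messages, chars_per_token=4):
    """Very rough token estimate: roughly 4 characters per token in English."""
    return sum(len(m["content"]) for m in messages) // chars_per_token

def trim_messages(messages, max_tokens=4096):
    """Drop the oldest non-system turns until the estimate fits the limit."""
    trimmed = list(messages)
    while len(trimmed) > 1 and estimate_tokens(trimmed) > max_tokens:
        # Keep a leading system message (if any); drop the oldest turn after it
        drop_index = 1 if trimmed[0]["role"] == "system" else 0
        trimmed.pop(drop_index)
    return trimmed
```

Dropping the oldest turns first keeps the newest question and the most recent context intact, which is usually what matters for a support bot.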
Notes
You might associate ChatGPT with multi-turn conversations, like the FAQ exchange we built in this guide.
But according to OpenAI (https://platform.openai.com/docs/guides/chat), the ChatGPT API is just as useful for single-turn tasks like the ones previously done with DaVinci.
Next steps
1. Repo with source code: here is the repo with a Jupyter notebook containing all the source code if you'd like to implement this on your own ⬇️ https://github.com/norahsakal/chatgpt-support-requests
2. Do you need help getting started with the ChatGPT API? Or do you have other questions? I'm happy to help, don't hesitate to reach out ➡️ norah@quoter.se
Or shoot me a DM on Twitter @norahsakal