GPT-3 AI introduces itself with 4 use cases
Alan Turing meets his match
This week I handed over most of the writing to GPT-3, which is the subject of this post. GPT-3 (Generative Pre-trained Transformer 3) is an example of general-purpose Artificial Intelligence. I asked GPT-3 to introduce itself, summarise the documentation on the OpenAI website and then provide a list of sample applications. I set up a free OpenAI account, signed in and used the Playground. Most of what you see below was written using the GPT-3 Playground.
GPT-3 introduces itself
Hello, my name is GPT-3. I am a computer system designed to generate human-like responses to questions and queries. I have been designed to be as realistic as possible, and my creators have given me a large amount of data to work with in order to achieve this goal. I am still learning, but I am very good at imitating human conversation. I look forward to talking with you more in the future. Thank you for your time.
What is OpenAI and GPT-3?
The OpenAI API (GPT-3) can be used for a variety of tasks related to natural language and code. There are different models available for different levels of power and complexity, as well as the ability to fine-tune custom models. These models can be used for content generation, semantic search, and classification.
Interacting with GPT-3 via prompts and completions
The completions endpoint allows you to input some text and receive a text completion that attempts to match the context or pattern you provided.
To design your prompt, you need to provide some instructions or a few examples of what you want the model to do. The completions endpoint can be used for virtually any task, including content or code generation, summarisation, expansion, conversation, creative writing, style transfer and more.
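To make the prompt-and-completion idea concrete, here is a minimal sketch of what a request to the completions endpoint might look like. The helper function, model name and parameter values are illustrative assumptions on my part, not the only valid choices.

```python
import json

# Hypothetical helper: build the JSON body for a POST to the
# completions endpoint (https://api.openai.com/v1/completions).
# Model name and parameter values are illustrative assumptions.
def build_completion_request(prompt, model="text-davinci-002",
                             max_tokens=64, temperature=0.7):
    return {
        "model": model,              # which base model to use
        "prompt": prompt,            # your instructions or examples
        "max_tokens": max_tokens,    # cap on the length of the completion
        "temperature": temperature,  # higher values give more varied output
    }

payload = build_completion_request("Summarise the OpenAI documentation:")
print(json.dumps(payload, indent=2))
```

Sending this payload (for example with the `requests` library, plus an `Authorization: Bearer <your API key>` header) returns a completion that attempts to match the context or pattern in your prompt.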
Pricing is based on tokens
OpenAI models understand and process text by breaking it down into tokens. Tokens can be words or chunks of characters. The number of tokens processed in a given API request depends on the length of both your inputs and outputs. One token is approximately 4 characters or 0.75 words for English text. Your text prompt and generated completion combined must be no more than the model's maximum context length (for most models this is 2048 tokens, or about 1500 words).
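The rule of thumb above (one token is roughly 4 characters of English) can be turned into a rough pre-flight check before calling the API. The sketch below is a hypothetical estimator, not the real tokeniser the models use, so treat its numbers as approximate.

```python
# Rough token estimator based on the ~4 characters per token rule of thumb.
# This is NOT the model's actual tokeniser; it is only a sanity check.
MAX_CONTEXT_TOKENS = 2048  # maximum context length for most base GPT-3 models

def estimate_tokens(text):
    """Approximate token count for English text (about 4 characters per token)."""
    return max(1, round(len(text) / 4))

def fits_in_context(prompt, max_completion_tokens):
    """True if the prompt plus the requested completion fits the context limit."""
    return estimate_tokens(prompt) + max_completion_tokens <= MAX_CONTEXT_TOKENS

prompt = "Summarise the OpenAI documentation in one paragraph."
print(estimate_tokens(prompt))
print(fits_in_context(prompt, 256))
```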
The OpenAI API is powered by a set of machine learning models with different capabilities and price points. The base GPT-3 models are called Davinci, Curie, Babbage and Ada. The Codex series is a machine learning model that has been trained on both natural language and code, and is a descendant of the GPT-3 model.
Potential applications of GPT-3
There are many applications of GPT-3, including:
Create predictive models: GPT-3 can be used to build predictive models that forecast future events.
Generate text: GPT-3 can generate text for a variety of purposes, such as creating summaries or descriptions.
Translate text: GPT-3 can be used to translate text from one language to another.
Create dialog: GPT-3 can generate dialog, producing conversations between two or more people.
Further reading on GPT-3
GPT-3 Demo and Explanation video by Greg Raiz
GPT-3 Explained article by Vox
Exploring GPT-3 book by Steve Tingiris
I plan to explore GPT-3’s text summarisation capability and potentially use it in my Chairing.it app - more on that in a future post.
This post was largely written by GPT-3: it explained what it is and how it can be used. Next Sunday’s post looks at psychological principles of influence and persuasion.
Until next Sunday, I recommend you set up a free OpenAI account and have a play with GPT-3. Please let me know what you think.