ChatGPT / OpenAI API Overview
The rise of Artificial Intelligence (AI) represents a monumental transformation in the tech world. It gives users the ability to ask a straightforward question and receive an in-depth answer. The days of sifting through multiple ad-cluttered web pages to get a simple answer are behind us. Now, crisp and clear answers are delivered directly in a chat or text-message format.
Many people use OpenAI's ChatGPT through interfaces like its chat page. This offers a seamless AI experience where anyone can communicate with ChatGPT simply by typing text to it. When using it in this manner, you only need to pick the specific model, be it GPT-3.5 or GPT-4. Each model gives different responses and approaches the queries you give it in its own way.
However, if your ambition is to create an application utilizing ChatGPT, interfacing with an API becomes necessary. Working through the API exposes detailed settings that shape the output, and it lets you act on each response programmatically.
OpenAI API Options
There are many NuGet packages for C# that let you work with OpenAI. These give easy access to its various fine-tuning options.
If you simply want to work with the Chat completions API, which gives conversation-like responses, you can do so quite easily with the HttpClient class.
What you will need:
- API token: Vital for billing and access. You can create one on OpenAI's API keys page.
- Your preferred model: (e.g., "gpt-3.5-turbo"), which determines both the quality of the output and the cost per token. Other available models are listed in OpenAI's model documentation.
- Desired response length: a limit in tokens (e.g., 1000). Tokens are not the same as words, and each model has its own maximum.
Settings Model
A simple settings model that lets you pass in an API key, model name, and token limit in C# would look like this:
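(The class and property names below are a minimal sketch of my own, not taken from any official SDK.)

```csharp
// Minimal settings model holding everything needed to call the API.
public class OpenAiSettings
{
    // Secret API key sent in the Authorization header; used for access and billing.
    public string ApiKey { get; set; } = "";

    // Model name, e.g. "gpt-3.5-turbo".
    public string Model { get; set; } = "gpt-3.5-turbo";

    // Maximum number of tokens allowed in the response.
    public int MaxTokens { get; set; } = 1000;
}
```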
API Client
The client is the code that communicates with the API on OpenAI's servers. It sends a request and parses the response. The following code shows what a simple implementation that connects to the Chat Completions API looks like:
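This is a minimal sketch built on HttpClient and System.Text.Json; it reuses the OpenAiSettings class from the sketch above.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Text.Json;
using System.Threading.Tasks;

public class OpenAiClient
{
    private static readonly HttpClient Http = new HttpClient();
    private readonly OpenAiSettings _settings;

    public OpenAiClient(OpenAiSettings settings) => _settings = settings;

    // Sends a single user message to the Chat Completions endpoint and
    // returns the assistant's reply as plain text.
    public async Task<string> GetCompletionAsync(string prompt)
    {
        var body = new
        {
            model = _settings.Model,
            max_tokens = _settings.MaxTokens,
            messages = new[] { new { role = "user", content = prompt } }
        };

        using var request = new HttpRequestMessage(HttpMethod.Post,
            "https://api.openai.com/v1/chat/completions");
        request.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", _settings.ApiKey);
        request.Content = new StringContent(
            JsonSerializer.Serialize(body), Encoding.UTF8, "application/json");

        using var response = await Http.SendAsync(request);
        response.EnsureSuccessStatusCode();

        // The reply text lives at choices[0].message.content in the JSON response.
        using var json = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        return json.RootElement
            .GetProperty("choices")[0]
            .GetProperty("message")
            .GetProperty("content")
            .GetString() ?? "";
    }
}
```

The Chat Completions endpoint takes a JSON body with the model name, a list of messages, and a max_tokens limit, and returns the generated text under choices[0].message.content.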
Calling The API
To initiate communication, fill in your settings and send a text string through the client. Here's a simple example:
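This snippet assumes the OpenAiSettings and OpenAiClient sketches above and reads the API key from an environment variable:

```csharp
// Read the key from an environment variable rather than hard-coding it.
var settings = new OpenAiSettings
{
    ApiKey = Environment.GetEnvironmentVariable("OPENAI_API_KEY") ?? "",
    Model = "gpt-3.5-turbo",
    MaxTokens = 1000
};

var client = new OpenAiClient(settings);

string reply = await client.GetCompletionAsync(
    "Write a one-sentence summary of what an API is.");
Console.WriteLine(reply);
```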
Building Applications
The power of working directly with the API comes when you want to perform tasks in an automated way. Depending on the model you work with, you can ask ChatGPT to do something complex that involves multiple steps. The more capable models that can handle several steps generally cost much more than the simpler ones, so using the API lets you run less expensive models at a much higher volume.
There are limits to how much you can send and receive back from the API in a single request. To build applications, it is best to think of what you want accomplished as a series of small steps which can be easily understood. By breaking down tasks this way, one can build large and complex applications which utilize the full power of OpenAI.
By working with the API, you can ask a series of questions and deal with each response individually. For example, you could ask ChatGPT to re-write all of your website's meta descriptions to be more SEO-friendly. Each page would be handled, one by one, in a loop that eventually updates every meta description in your own website database. Instead of manually copying each meta description into a chat interface and pasting back the rewrite, you could have each one rewritten in a background process whenever you want.
Having direct access to the API also means you can use a specific model for a specific task instead of being stuck with one model for every request. It also allows conditional logic that retries a request if the result is too long, too short, or contains text you don't want. In doing so, the Chat Completions API lets you build robust content creation and management applications that conform to your specific business requirements.
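As a sketch of that kind of conditional logic, the helper below retries a request a few times and only accepts a result that passes some simple checks (the method name, length thresholds, and banned phrase are all hypothetical examples):

```csharp
using System;
using System.Threading.Tasks;

public static class CompletionHelpers
{
    // Retries a completion until the result passes simple validation, or gives up.
    public static async Task<string?> GetValidatedCompletionAsync(
        OpenAiClient client, string prompt, int maxAttempts = 3)
    {
        for (int attempt = 1; attempt <= maxAttempts; attempt++)
        {
            string result = (await client.GetCompletionAsync(prompt)).Trim();

            bool tooShort = result.Length < 50;            // example lower bound
            bool tooLong = result.Length > 160;            // example upper bound
            bool hasUnwantedText = result.Contains("As an AI",
                StringComparison.OrdinalIgnoreCase);       // example unwanted phrase

            if (!tooShort && !tooLong && !hasUnwantedText)
                return result;
        }

        return null; // caller decides what to do when no attempt passed
    }
}
```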
Prompt Engineering
Prompt engineering is a way to optimize and fine-tune AI results for a particular use case. By giving the AI instructions about the output you want, often with a specific example, you can mass-produce consistent results.
An example I've used to tell the AI how to produce a result is with URL keys. I can tell it that, for a given phrase like "how to fix your bike if its tires are flat", it should return "fix-bike-flat-tires". In this way, I can feed the prompt any phrase and get back an SEO-friendly URL key. This same kind of example input and output process can apply to anything, and it takes the AI from a generic system to one that is tailored for a specific need.
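Here's a sketch of how that example-driven prompt could be assembled in code (the helper name and exact wording are hypothetical, just to show the pattern):

```csharp
public static class PromptTemplates
{
    // Builds a prompt that shows the model one example input/output pair,
    // then asks it to convert a new phrase the same way.
    public static string BuildUrlKeyPrompt(string phrase) =>
        "Convert the phrase into a short, SEO-friendly URL key: " +
        "lowercase words separated by hyphens, no filler words.\n" +
        "Example input: how to fix your bike if its tires are flat\n" +
        "Example output: fix-bike-flat-tires\n" +
        $"Input: {phrase}\n" +
        "Output:";
}
```

Passing BuildUrlKeyPrompt("best ways to clean a cast iron pan") to the client should come back with something like "clean-cast-iron-pan".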
Creating A Loop To Update Meta Descriptions
Below is an example of creating meta descriptions in a loop. It first reads a text file that gives templated instructions for the meta description to create. In this example the template takes in the title of an article and formats the message to OpenAI so that it writes an SEO-optimized meta description. If the text is still too long after a few attempts, the loop simply continues to the next title.
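Here is a sketch of that loop, assuming the OpenAiClient from above, a template file named meta-description-prompt.txt containing a {title} placeholder, and a 160-character limit (all illustrative choices):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

public static class MetaDescriptionUpdater
{
    public static async Task UpdateAllAsync(
        OpenAiClient client, IEnumerable<string> articleTitles)
    {
        // The template file holds the instructions, e.g.:
        // "Write an SEO-optimized meta description under 160 characters
        //  for an article titled: {title}"
        string template = File.ReadAllText("meta-description-prompt.txt");

        foreach (string title in articleTitles)
        {
            string prompt = template.Replace("{title}", title);
            string? description = null;

            // Try a few times; if every attempt is too long, skip this title.
            for (int attempt = 0; attempt < 3; attempt++)
            {
                string candidate = (await client.GetCompletionAsync(prompt)).Trim();
                if (candidate.Length <= 160)
                {
                    description = candidate;
                    break;
                }
            }

            if (description == null)
                continue; // move on to the next title

            // Persist the result however your site stores meta descriptions,
            // e.g. an UPDATE against your CMS database (not shown here).
            Console.WriteLine($"{title} -> {description}");
        }
    }
}
```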