Want to integrate OpenAI’s powerful language models into your application? Whether you’re building with LangChain or experimenting with GPT-4o, the first step is securing and managing your API key.
This post walks you through how to get your OpenAI API key, store it securely using environment variables, and make your first model call in Python with LangChain. Let’s get your environment set up and your first prompt running in minutes.
To use OpenAI's models, you need an application programming interface (API) key. These models are not free, so you'll be charged for your usage. Prices have been dropping steadily over time, and you can check the current pricing here.
First, you must set up an API key. For a web service, a username and password are usually enough to access it. To use a service programmatically, you need an API key instead, which works something like a username and password combined.
If you don't have one yet, get an API key by following these steps:

1. Sign in to (or create) an account at https://platform.openai.com.
2. Open the API keys section of your account settings.
3. Create a new secret key and copy it right away; it is shown only once.
4. Keep it somewhere safe; you'll place it in a .env file in a moment.
A best practice is to separate your code from your credentials, so store the API key in a separate file. A common approach is to put this information in a file called .env in the working folder. In that file, you can store the API key and, if needed, additional keys. The following listing shows what an environment file looks like.
OPENAI_API_KEY=sk-proj...
The API key is treated as an environment variable, a variable that is typically managed by your operating system. Our variable has the name OPENAI_API_KEY, and its value is defined to the right of the equals sign. Make sure you use the same key name in your script.
Let's now start coding. You can find the required material in the file 03_LLMs/10_model_chat.py at this webpage (click "Supplements list" to download). A best practice is to place all required packages and functions at the beginning of the file. Let's go through what we need.
The os package is required for fetching environment variables. All major model providers offer packages for integration into LangChain; for OpenAI, as shown in the listing below, that package is langchain_openai. The dotenv package is required for working with the environment file: its load_dotenv() function loads the content of the .env file and exposes it as environment variables.
These are the packages you’ll need:
#%% packages
import os  # read environment variables
from langchain_openai import ChatOpenAI  # OpenAI chat models in LangChain
from dotenv import load_dotenv  # load variables from a .env file
load_dotenv('.env')  # makes OPENAI_API_KEY available via os.getenv
Check that the API key is available by running print(os.getenv('OPENAI_API_KEY')). You should see the API key printed on the screen.
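If you prefer a script that fails fast instead of producing a confusing authentication error later, you can add a small guard. This is a minimal sketch, and the error message is just an example:

#%% sanity check (optional): fail early if the key was not loaded
if os.getenv('OPENAI_API_KEY') is None:
    raise EnvironmentError('OPENAI_API_KEY not found -- check your .env file')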
Now we create an instance of the model using the ChatOpenAI class. This requires a model name; we choose GPT-4o Mini here. Another important parameter is the temperature, which controls the creativity of the model: lower values give more deterministic answers, higher values more varied ones. Finally, you need to pass the API key, which authenticates you and enables OpenAI to charge you based on your usage.
MODEL_NAME = 'gpt-4o-mini'
model = ChatOpenAI(model_name=MODEL_NAME,
                   temperature=0.5,  # controls creativity
                   api_key=os.getenv('OPENAI_API_KEY'))
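The effect of temperature is easy to check empirically: create a second instance with temperature=0.0 and send the same prompt to both, using the invoke() method explained next. This is a small side experiment, not part of the original script, and the prompt is just an example:

#%% temperature experiment (optional)
deterministic_model = ChatOpenAI(model_name=MODEL_NAME,
                                 temperature=0.0,  # near-deterministic output
                                 api_key=os.getenv('OPENAI_API_KEY'))
prompt = "Suggest a name for a coffee shop."
print(model.invoke(prompt).content)                # varies from run to run
print(deterministic_model.invoke(prompt).content)  # (almost) the same every run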
The model object's invoke() method is particularly important, as shown in the listing below. It runs the model with the specified parameters. In our first example, we ask the model "What is LangChain?" The result is returned as an object of type AIMessage. You can inspect everything the model call returned by looking at the output of its dict() method (depending on your LangChain version, the equivalent model_dump() method may be preferred).
res = model.invoke("What is LangChain?")
res.dict()
{'content': 'LangChain is ...',
'additional_kwargs': {},
'response_metadata': {'token_usage': {'completion_tokens': 312,
'prompt_tokens': 13,
'total_tokens': 325},
'model_name': 'gpt-4o-mini-2024-07-18',
'system_fingerprint': 'fp_f33667828e',
'finish_reason': 'stop',
'logprobs': None},
'type': 'ai',
'name': None,
'id': 'run-929cb722-e48e-457b-859c-754a5d272c6d-0',
'example': False,
'tool_calls': [],
'invalid_tool_calls': [],
'usage_metadata': {'input_tokens': 13,
'output_tokens': 312,
'total_tokens': 325}}
Plenty of information comes back from the model. Let's start with the most important part: content. This property holds the actual model output, that is, the answer to your prompt. Of the other properties, we just want to mention response_metadata, which holds information on token usage. You're charged per input token and output token, and this property tells you how many tokens the request consumed.
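In code, the two pieces you'll typically read from the response are content and the token counts; based on the output shown above, they can be accessed like this:

#%% reading the response
print(res.content)  # the model's actual answer as plain text
usage = res.response_metadata['token_usage']
print(usage['prompt_tokens'])      # tokens you sent (you pay for these)
print(usage['completion_tokens'])  # tokens the model produced (and these)
print(usage['total_tokens'])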
Familiarize yourself with the different models in OpenAI's model family by studying the model overview at this page. Key members include GPT-4o and GPT-4o Mini for chat, DALL·E for image generation, and Whisper for speech-to-text.
You're not limited to the OpenAI model family: LangChain lets you work with many other providers, such as Groq.
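Because LangChain offers a unified chat interface, switching providers mostly means swapping the model class. Here is a minimal sketch for Groq, assuming the langchain_groq package is installed and a GROQ_API_KEY is stored in your .env file; the model name is only an example, so check Groq's documentation for currently available models:

#%% alternative provider: Groq
from langchain_groq import ChatGroq

groq_model = ChatGroq(model='llama-3.1-8b-instant',  # example model name
                      temperature=0.5,
                      api_key=os.getenv('GROQ_API_KEY'))  # assumes GROQ_API_KEY in .env
print(groq_model.invoke("What is LangChain?").content)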
Setting up and using the OpenAI API is straightforward once you understand the key steps: getting your API key, securing it with a .env file, and using libraries like LangChain to send prompts and receive structured results. As you begin exploring models like GPT-4o, DALL·E, and Whisper, you’ll discover a rich ecosystem of tools to integrate AI into your projects.
Editor’s note: This post has been adapted from a section of the book Generative AI with Python: The Developer’s Guide to Pretrained LLMs, Vector Databases, Retrieval-Augmented Generation, and Agentic Systems by Bert Gollnick. Bert is a senior data scientist who specializes in renewable energies. For many years, he has taught courses about data science and machine learning, and more recently, about generative AI and natural language processing. Bert studied aeronautics at the Technical University of Berlin and economics at the University of Hagen. His main areas of interest are machine learning and data science.
This post was originally published 8/2025.