
How to Use OpenAI’s API with LangChain: Setup, Environment Variables, and First Model Call

Want to integrate OpenAI’s powerful language models into your application? Whether you’re building with LangChain or experimenting with GPT-4o, the first step is securing and managing your API key.

 

This post walks you through how to get your OpenAI API key, store it securely using environment variables, and make your first model call in Python with LangChain. Let’s get your environment set up and your first prompt running in minutes.

 

Getting Started with the OpenAI API

To use OpenAI's models, you need an application programming interface (API) key. These models are not free, so you'll be charged for your usage. Prices have been dropping steadily over time, and you can check the current pricing here.

What Is an API Key and Why Do You Need One?

First, you must set up an API key. To use a web service, you usually only need a username and password. To use a service programmatically, you need an API key instead, which acts like a combined username and password for your code.

How to Get an OpenAI API Key

If you don’t have one yet, get an API key by following these steps:

  1. Head over to https://platform.openai.com/.
  2. Create an account.
  3. Activate billing and load some money onto your account.
  4. Navigate to API keys and create a new API key. The name you specify in the web frontend is irrelevant; you only need the key itself. Copy the key to the clipboard.
  5. Paste this key into a file called .env. The key should look like sk-proj...

Best Practice: Use a .env File for Credentials

A best practice is to separate the code from the credentials. Thus, store the API key in a separate file. A common approach is to store this information in a file called .env and place it in the working folder. In that file, you could store the API key and possibly many more keys if needed. This listing shows what an environment file should look like.

 

OPENAI_API_KEY=sk-proj...

 

API keys are handled as environment variables, which are variables managed by your operating system rather than hard-coded in your program. Our variable is named OPENAI_API_KEY, and its value is defined to the right of the equal sign. Make sure you use the same variable name in your coding script.
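To build intuition for what happens when the .env file is loaded, here is a simplified sketch of the mechanism: read KEY=VALUE lines from a file and expose them as environment variables. This is for understanding only, not the actual python-dotenv implementation; in real code, use load_dotenv() as shown later.

```python
# Simplified sketch of what loading a .env file does: read KEY=VALUE lines
# and place them into the process's environment variables.
import os

def load_env_file(path: str) -> None:
    with open(path) as f:
        for line in f:
            line = line.strip()
            # skip blank lines and comments
            if not line or line.startswith('#'):
                continue
            key, _, value = line.partition('=')
            os.environ[key.strip()] = value.strip()

# After loading, the key is available via os.getenv:
# load_env_file('.env')
# print(os.getenv('OPENAI_API_KEY'))
```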

 

Let’s now start coding. You can find the required material in the file 03_LLMs/10_model_chat.py at this webpage (click "Supplements list" to download). A best practice is to place all the required packages and functions at the beginning of the file. Let’s go through what we need next.

 

Loading Environment Variables in Python

The os package is required for fetching environment variables. All major model providers offer packages for integration with LangChain; for OpenAI, as shown in the listing below, use langchain_openai. The dotenv package is required for working with the environment variables file: its function load_dotenv() reads the .env file and makes its contents available as environment variables.

Required Packages

These are the packages you’ll need:

 

#%% packages
import os
from langchain_openai import ChatOpenAI
from dotenv import load_dotenv

load_dotenv('.env')

Loading the API Key from the .env File

Check that the API key is available by running os.getenv('OPENAI_API_KEY'). In an interactive session, you should see the API key printed on the screen.
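If the key is missing, os.getenv() silently returns None, and the failure only surfaces later when OpenAI rejects the request. A small helper (not part of the book's code, just a defensive pattern) can fail fast with a clearer message:

```python
# Fail fast with a clear message when an environment variable is missing,
# instead of passing None on to the model client.
import os

def require_env(name: str) -> str:
    value = os.getenv(name)
    if not value:
        raise RuntimeError(
            f"Environment variable {name} is not set. "
            "Did you create the .env file and call load_dotenv()?"
        )
    return value

# Usage: api_key = require_env('OPENAI_API_KEY')
```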

 

Creating and Using a LangChain Model

Now we create an instance of the model using the ChatOpenAI class. This requires a model name; we choose GPT-4o mini here. Another important parameter is the temperature, which controls the creativity of the model. You also need to pass the API key to authenticate and enable OpenAI to charge you based on your usage.

 

MODEL_NAME = 'gpt-4o-mini'

model = ChatOpenAI(model_name=MODEL_NAME,
                   temperature=0.5,  # controls creativity
                   api_key=os.getenv('OPENAI_API_KEY'))

Run a Model Prompt with invoke()

The model object’s invoke() method is particularly important, as shown in the listing below. This method runs the model with the specified parameters. In our first example, we ask the model “What is LangChain?” The result is stored in an object of type AIMessage. You can access the information received from the model call by looking at the output of its dict() method.

 

res = model.invoke("What is LangChain?")

res.dict()

{'content': 'LangChain is ...',
 'additional_kwargs': {},
 'response_metadata': {'token_usage': {'completion_tokens': 312,
   'prompt_tokens': 13,
   'total_tokens': 325},
  'model_name': 'gpt-4o-mini-2024-07-18',
  'system_fingerprint': 'fp_f33667828e',
  'finish_reason': 'stop',
  'logprobs': None},
 'type': 'ai',
 'name': None,
 'id': 'run-929cb722-e48e-457b-859c-754a5d272c6d-0',
 'example': False,
 'tool_calls': [],
 'invalid_tool_calls': [],
 'usage_metadata': {'input_tokens': 13,
  'output_tokens': 312,
  'total_tokens': 325}}

Understanding the Response Object

Plenty of information comes back from the model. Let’s start with the most important part: content. This property holds the actual model output. Of the other properties, we just want to mention response_metadata, which holds information on token usage. You’re charged for input tokens and output tokens, and in this property you can see how many tokens were used in the request.
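To make the token accounting concrete, the sketch below pulls the usage numbers out of a response dictionary like the sample shown earlier and estimates a request cost. The per-million-token prices here are placeholders for illustration; check OpenAI's pricing page for current rates.

```python
# Extract token usage from a response dictionary (shape as in the sample
# output above) and estimate the cost. Prices are placeholders, not current
# OpenAI rates.
sample = {
    'content': 'LangChain is ...',
    'usage_metadata': {'input_tokens': 13,
                       'output_tokens': 312,
                       'total_tokens': 325},
}

def summarize_usage(res_dict, price_in_per_1m=0.15, price_out_per_1m=0.60):
    """Return (input_tokens, output_tokens, estimated_cost_usd)."""
    usage = res_dict['usage_metadata']
    cost = (usage['input_tokens'] * price_in_per_1m
            + usage['output_tokens'] * price_out_per_1m) / 1_000_000
    return usage['input_tokens'], usage['output_tokens'], cost

print(summarize_usage(sample))  # input and output token counts plus cost
```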

 

Exploring the OpenAI Model Family

Familiarize yourself with the different models from OpenAI’s model family by studying the model overview at this page. Some key features include the following:

  • OpenAI created a model family (https://platform.openai.com/docs/models) consisting of several models suitable for different tasks.
  • Language models like the GPT family (e.g., GPT-4o) can process text, and some can also work with images.
  • Text-to-image generation: DALL-E is a model that can generate and edit images.
  • Text-to-speech (TTS): Several models can convert text to natural, spoken audio.
  • Speech-to-text: With Whisper, you can convert audio recordings into text.
  • Text embeddings: Embeddings are numerical representations of text. Such embeddings are the cornerstone of NLP.

You’re not limited to the OpenAI model family. You can work with many other LLM providers, such as Groq.
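To make the embeddings bullet above concrete: embedded texts are compared by geometric similarity, most commonly cosine similarity. The sketch below uses tiny made-up three-dimensional vectors purely for illustration; real embedding models return vectors with hundreds or thousands of dimensions.

```python
# Cosine similarity: the standard way to compare text embeddings.
# The vectors below are invented for illustration; real embeddings
# have far more dimensions.
import math

def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings: similar texts get similar vectors,
# hence a higher similarity score.
emb_cat = [0.9, 0.1, 0.2]
emb_kitten = [0.85, 0.15, 0.25]
emb_car = [0.1, 0.9, 0.3]

print(cosine_similarity(emb_cat, emb_kitten))  # close to 1.0
print(cosine_similarity(emb_cat, emb_car))     # noticeably lower
```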

 

Conclusion

Setting up and using the OpenAI API is straightforward once you understand the key steps: getting your API key, securing it with a .env file, and using libraries like LangChain to send prompts and receive structured results. As you begin exploring models like GPT-4o, DALL·E, and Whisper, you’ll discover a rich ecosystem of tools to integrate AI into your projects.

 

Editor’s note: This post has been adapted from a section of the book Generative AI with Python: The Developer’s Guide to Pretrained LLMs, Vector Databases, Retrieval-Augmented Generation, and Agentic Systems by Bert Gollnick. Bert is a senior data scientist who specializes in renewable energies. For many years, he has taught courses about data science and machine learning, and more recently, about generative AI and natural language processing. Bert studied aeronautics at the Technical University of Berlin and economics at the University of Hagen. His main areas of interest are machine learning and data science.

 

This post was originally published 8/2025.

Recommendation

Generative AI with Python

Your guide to generative AI with Python is here! Start with an introduction to generative AI, NLP models, LLMs, and LMMs—and then dive into pretrained models with Hugging Face. Work with LLMs using Python with the help of tools like OpenAI and LangChain. Get step-by-step instructions for working with vector databases and using retrieval-augmented generation. With information on agentic systems and AI application deployment, this guide gives you all you need to become an AI master!

by Rheinwerk Computing

Rheinwerk Computing is an imprint of Rheinwerk Publishing and publishes books by leading experts in the fields of programming, administration, security, analytics, and more.
