The Grafana-Docker Setup

In this blog post, we’ll describe a Docker setup in which Grafana works with a database and a collector.


The setup reads performance data from your computer and generates graphs from it.


Information about the System Load in Grafana (Here with the Light Theme)


Sample Code on GitHub: You can find the full source code for this post on GitHub:


Generating Data with Collectors (Telegraf)

We’ll generate the data we want to visualize by fetching it from the running computer. Performance data on the utilization of your computer is perfectly suited for display in Grafana.


In a classic microservices architecture, we’ll use a Docker container for each service. For data collection purposes, we’ll use Telegraf, which is a service written in the Go programming language that uses plug-ins to collect a wide variety of data. The list of input plug-ins is pretty long. Here’s just a brief excerpt of some of the better known services:

  • Apache
  • Amazon CloudWatch
  • Docker
  • Dovecot
  • iptables
  • Kubernetes
  • MongoDB
  • MySQL
  • ping

Once we’ve collected the data, we can transform and aggregate it before passing it to an output plug-in. For the output, a time series–optimized database is usually used, such as InfluxDB, which originated from the same software project.
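As a sketch of such a transformation step, a processor plug-in can be placed between the inputs and outputs in telegraf.conf. The following fragment is a hypothetical example (not part of our setup) that uses Telegraf's rename processor to rename a tag before the metrics are written:

```toml
# Hypothetical telegraf.conf fragment: a processor plug-in that
# transforms metrics before they reach the output plug-in.
[[processors.rename]]
  # Rename the "host" tag to "machine" on every metric.
  [[processors.rename.replace]]
    tag = "host"
    dest = "machine"
```

Aggregator plug-ins work the same way and can, for example, compute minimum, maximum, and mean values over a time window before the data leaves Telegraf.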


In our setup, we want to visualize statistics on system utilization (CPU and memory), network availability (ping), and Docker itself. The corresponding excerpt from the docker-compose.yml file looks like the following:



# File: grafana-manual/docker-compose.yml (excerpt)
  telegraf:
    image: telegraf:1.19
    hostname: telegraf
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: always


We use the official Docker image for Telegraf in the current version 1.19 and mount two volumes: the telegraf.conf configuration file from the current directory and the Docker socket. Both are mounted in read-only mode, which you can recognize by the trailing string :ro. While statistics on the host computer’s CPU and memory can also be queried from within the container, we need the mounted socket for the Docker statistics. Because Telegraf doesn’t store any data itself, we don’t use a separate Docker volume here.


Docker on Windows: The Docker socket /var/run/docker.sock isn’t available on Windows. If you want to run the configuration presented here with the current version of Docker on Windows, the easiest way is to comment out this line. Of course, then you won’t receive any statistics about the Docker daemon.
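On Windows, the volumes section of the Telegraf service would then look as follows (this mirrors the excerpt shown above, with the socket line commented out):

```yaml
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
      # The Docker socket isn't available on Windows:
      # - /var/run/docker.sock:/var/run/docker.sock:ro
```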


The Telegraf configuration file (slightly shortened here) isn’t complicated at all:



# File: grafana-manual/telegraf.conf (excerpt)
[agent]
  interval = "10s"

[[outputs.influxdb]]
  urls = ["http://influx:8086"]
  database = "telegraf"
  timeout = "5s"

[[inputs.cpu]]
  percpu = true
  totalcpu = true

[[inputs.mem]]

[[inputs.ping]]
  urls = [""]
  count = 1

[[inputs.docker]]
  endpoint = "unix:///var/run/docker.sock"



As you can see, there’s a global section [agent] where you set the query interval. The other sections define input and output plug-ins, respectively. The Docker input plug-in requires the specification of an endpoint; in our case, that’s the included Unix socket. The plug-in could also monitor a Docker daemon on a remote system when it can be accessed through the network. The line should then contain the IP address accordingly, for example, endpoint = "tcp://".
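For illustration, such a remote configuration could look like the following; the address 192.0.2.10 and port 2375 are hypothetical placeholders, not part of our setup:

```toml
# Hypothetical example: monitoring a remote Docker daemon over TCP.
# Replace 192.0.2.10:2375 with the address of your own Docker host.
[[inputs.docker]]
  endpoint = "tcp://192.0.2.10:2375"
```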


If you’re interested in more input plug-ins, it’s best to copy the original configuration file from the Telegraf container. The file contains very detailed comments and explains numerous settings. For example, you can use the following command for copying:


docker run --rm -v ${PWD}:/src telegraf:1.19 \

   cp /etc/telegraf/telegraf.conf /src/example.conf


To do this, you mount the local directory in the container under the /src folder and copy the default configuration file there. The container will then be terminated and deleted (--rm).


Telegraf also has an option that prints the current configuration on the command line. You can start a container and redirect the output to a file on your computer using the default configuration file:


docker run --rm telegraf:1.19 telegraf config > telegraf.conf


Configuration without Comments: If you want to see only the active configuration settings without comments or empty lines, you must execute the following command on Linux:


docker run -t telegraf:1.19 telegraf config | egrep -v '(^ *^M$|^ *#)'


In the regular expression after egrep, you need to type the ^M as (Ctrl)+(V) followed by (Ctrl)+(M) on the keyboard. (This is the carriage return character from Windows-style line endings.)


Another way to formulate the regular expression is to look for any control character instead of the Windows line end. In this case, the syntax looks like the following:


egrep -v '(^ *[[:cntrl:]]$|^ *#)'
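To see what this filter does, you can run it on a small, made-up sample with Windows-style line endings (the sample content is invented for this demonstration; grep -Ev is equivalent to egrep -v):

```shell
# Build three CRLF-terminated sample lines: a comment, an empty line,
# and an active setting; then strip comments and blank lines.
printf '# a comment\r\n\r\ninterval = "10s"\r\n' \
  | grep -Ev '(^ *[[:cntrl:]]$|^ *#)'
```

Only the active setting survives the filter; the comment and the empty line are removed.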


Storing Data with InfluxDB

As already seen in the Telegraf configuration file, we use InfluxDB to store the data. A time series database is a special type of database that specializes in time-related content. In particular, queries that aggregate data over a longer period of time can be answered efficiently with such databases.


More Information about InfluxDB: For more information and statistics on the current use of time series databases, refer to the following web page:


InfluxDB is available under the free MIT license, and it works perfectly straightforwardly as a container in the docker compose setup. For a start, the image doesn’t require any further configuration settings. In order not to lose the collected data when containers get restarted, we use a named volume:


# File: grafana-manual/docker-compose.yml (excerpt)
  influx:
    image: influxdb
    restart: unless-stopped
    volumes:
      - influx:/var/lib/influxdb

volumes:
  influx:
The influx volume gets associated with the /var/lib/influxdb folder, which is the default folder for the database data. We don’t want to say much more about InfluxDB in this context except that it simply works.


Visualizing Data with Grafana

Now that the data is available in the dedicated database, we still need to visualize it. This is where Grafana comes into play. In the Grafana image on Docker Hub, a configuration file is used whose values can be overridden with environment variables, which is ideal for our Docker setup.


If you prefer to work with a configuration file, you can also copy it from a running container, modify it accordingly, and bind it to your container using a bind mount. To copy the default configuration file out of the image with a single command, you need to override the entrypoint of the Grafana image, for example, as follows:


docker run --rm -v ${PWD}:/src -u $UID:$GID --entrypoint=cp \

   grafana/grafana /etc/grafana/grafana.ini /src/


The trick is that the entrypoint for the container is the cp command and that the parameters for the command (source file and destination directory) are listed after the name of the image. Because we mount ${PWD} in the container under /src, the configuration file ends up in the current directory.


For the first version, however, we won’t make many settings in the configuration file at all; it’s sufficient to set a password for the web interface using the GF_SECURITY_ADMIN_PASSWORD variable. We also want to set up a volume for the mutable data in Grafana and connect port 3000 of the container to the host. Here you can see the full docker-compose.yml file that we’ll use to launch the first version of Grafana:


# File: grafana-manual/docker-compose.yml
version: '3'

services:
  grafana:
    image: grafana/grafana:latest
    restart: unless-stopped
    ports:
      - 3000:3000
    environment:
      - GF_SECURITY_ADMIN_PASSWORD=secret
    volumes:
      - grafana:/var/lib/grafana

  telegraf:
    image: telegraf:1.19
    hostname: telegraf
    volumes:
      - ./telegraf.conf:/etc/telegraf/telegraf.conf:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped

  influx:
    image: influxdb
    restart: unless-stopped
    volumes:
      - influx:/var/lib/influxdb

volumes:
  grafana:
  influx:

Now you can launch the setup using docker compose up -d. After that, you can log in with the user name admin and the password secret at the following address: http://localhost:3000.

 Grafana Login Screen

Creating a Data Source and Dashboard

After successful login, the first thing you need to do is add a data source. Select InfluxDB as the data source type, and enter a descriptive Name (we used “InfluxDB”). Under HTTP, enter “http://influx:8086” in the URL field, and select Server (default) in the Access dropdown field.


Settings for the InfluxDB Data Source


The Grafana container accesses the Influx container via the URL http://influx:8086. Because this URL isn’t accessible outside the Docker network, the Grafana server acts as a proxy, so the web browser never has to reach this address directly.


As another entry on this page, you have to set the name of the database in the Database field. You should use the name you’ve set as database in the [[outputs.influxdb]] section of the telegraf.conf file (so here, enter “telegraf”). Finally, you should set the minimum time interval to 10 seconds (10s). Because you started Telegraf with a 10-second interval, it makes no sense for InfluxDB to answer queries that come in at intervals shorter than 10 seconds.


Grafana manages the graphical output in dashboards that are easy to create (via the web interface) and even easier to distribute (as JSON strings). This will be important for our plan to develop a flexible, ready-to-run Docker setup.


Once the data source is working, you can create your first dashboard (see figure below). In the new panel, click Add Query to create the query that will fetch the data from the database.


The First Diagram with Data on the System Load in Grafana


The suggested query already helps a lot in generating the first graph. The syntax is as follows:


SELECT mean("value")

FROM "measurement"

WHERE $timeFilter

GROUP BY time($__interval) fill(null)


If you’re familiar with the SQL database language, you’ll know your way around here right away. But even if you aren’t familiar with SQL, it’s easy to adjust the query with the graphical editor. Simply select system under select measurement, and then choose load1 in the field(value) entry (shown earlier). This will make the first line graph appear in the display.
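With those two selections, the generated query should look roughly like this (assuming Telegraf's default measurement and field names, system and load1):

```sql
SELECT mean("load1")
FROM "system"
WHERE $timeFilter
GROUP BY time($__interval) fill(null)
```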


But you don’t need to start from scratch when you create a dashboard. On the Grafana website, you’ll find a large number of dashboards created by the community that you can import very easily into your own Grafana installation.


Now you should filter the list of available dashboards on the Grafana website by InfluxDB and Telegraf, and pick an appealing dashboard. To import it, you can simply copy the ID of the dashboard from the + Create > Import menu into the empty text line. In the subsequent step, you’ll be asked for the data source. Here you want to select the previously configured InfluxDB.


Importing a Dashboard from the Grafana Website


After a successful import, you can modify and save the graphics in the new dashboard per your requirements. In some dashboards, you’ll find a variable called Host or Server.

Grafana Dashboards

We don’t want to dive too much into the configuration of Grafana dashboards here. There are numerous design options available, and, once you understand the concept, you can set them up very easily via the web interface. There’s more useful information about Grafana available on its well-documented website:


Editor’s note: This post has been adapted from a section of the book Docker: Practical Guide for Developers and DevOps Teams by Bernd Öggl and Michael Kofler.



Learn the ins and outs of containerization in Docker with this practical guide! Begin by installing and setting up the platform. Then master the basics: get to know important terminology, understand how to run containers, and set up port redirecting and communication. You’ll learn to create custom images, work with commands, and use key containerization tools. Gain essential skills by following exercises that cover common tasks from packaging new applications and modernizing existing applications to handling security and operations.

by Rheinwerk Computing

Rheinwerk Computing is an imprint of Rheinwerk Publishing and publishes books by leading experts in the fields of programming, administration, security, analytics, and more.