Kibana Beginner's Guide

Kibana is an open-source visualization tool that helps users analyze large volumes of logs in the form of pie charts, line graphs, bar graphs, region maps, heat maps, and more. These visualizations make it easy to spot trends in errors or other significant events in the input source. Kibana works closely with Logstash and Elasticsearch, which together form the ELK stack.

Logstash is an open-source, lightweight, server-side data processing pipeline that gathers data from multiple sources, transforms it, and sends it to the desired destination. Logstash is often used as a data pipeline for Elasticsearch, an open-source analytics and search engine. With Elasticsearch, users can store, search, and analyze large amounts of data and receive responses in milliseconds.

As mentioned above, ELK refers to Elasticsearch, Logstash, and Kibana. One of the most popular log management platforms, ELK is widely used across the globe for log analysis. Within the stack, Kibana reads the logs from Elasticsearch and displays them to the user as bar graphs, line graphs, and more.

Features of Kibana

With the use of Kibana, users can make the most of the following features: 

  • Visualization: Kibana helps users visualize data easily. Commonly used visualization types include pie charts, vertical and horizontal bar charts, line graphs, heat maps, and more.
  • Dashboards bring several visualizations together in one view, giving users a clear picture of what is happening.
  • Users can work with indices using the Dev Tools console.
  • Dashboards and visualizations can be exported as reports.
  • Users can apply filters and search queries.
  • Third-party plugins can be added to provide new visualizations.
  • Coordinate and Region Maps give geographic data a realistic view.
  • Another visualization tool in Kibana is Timelion, which offers reliable time-based analysis (a sample expression follows this list).
  • Canvas is another powerful feature of Kibana, allowing users to present data with custom shapes, colors, and text.
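As a rough illustration of Timelion's expression syntax, here is a minimal sketch; the index name filebeat-* is an assumption and would depend on the data available in your deployment:

.es(index=filebeat-*), .es(index=filebeat-*, offset=-1d)

The first expression charts the event count over time, and the second overlays the same series shifted back one day, which makes day-over-day comparisons straightforward.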

Pros:

  • Kibana is an open-source tool that makes it easy to evaluate large volumes of logs in the form of pie charts, line graphs, bar graphs, region maps, heat maps, and more.
  • It is simple and easy for beginners to understand.
  • It makes it easy to convert visualizations and dashboards into reports.
  • Canvas makes it easier to analyze and present complex data.
  • Timelion makes it possible to compare current data against historical data, helping users understand performance trends.

Cons:

  • If there is a version mismatch, adding plugins to Kibana can be quite tedious.
  • Users might face challenges when they wish to upgrade from older versions to new ones.

Installing Kibana

Before installing, make sure the following prerequisites are in place:

  • An Ubuntu 22.04 server with 4GB RAM and 2 CPUs set up with a non-root sudo user.
  • OpenJDK 11 installed.
  • Nginx is installed on your server. 
  • A fully qualified domain name (FQDN). In this guide, we will make use of your_domain.

Installing and Configuring Elasticsearch

You won’t find the Elasticsearch components in Ubuntu’s default package repositories. However, they can be installed with APT after adding Elastic’s package source list. You can start by importing the Elasticsearch public GPG key into APT.

To do this, use curl and pipe its output to the gpg --dearmor command, which converts the key into a format that apt can use to verify downloaded packages:

$ curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elastic.gpg

Next, add the Elastic source list to the sources.list.d directory, where APT will search for new sources:

$ echo "deb [signed-by=/usr/share/keyrings/elastic.gpg] https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list

The [signed-by=/usr/share/keyrings/elastic.gpg] portion of the file instructs apt to use the key that you downloaded to verify repository and file information for Elasticsearch packages.
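If you would like to confirm that the entry was written correctly, you can print the file you just created:

$ cat /etc/apt/sources.list.d/elastic-7.x.list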

Next, update your package lists so APT will read the new Elastic source:

$ sudo apt update

Then install Elasticsearch with this command:

$ sudo apt install elasticsearch

This completes the installation of Elasticsearch, and it is ready for configuration. Use your preferred text editor to edit Elasticsearch’s main configuration file, elasticsearch.yml. Here, we’ll use nano:

$ sudo nano /etc/elasticsearch/elasticsearch.yml

The elasticsearch.yml file offers users various configuration options for their node, memory, cluster, paths, network, discovery, and gateway. Most of these options are preconfigured in the file. However, you can decide to change them based on your needs.
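For orientation, a few of the settings you are most likely to encounter look like the following. The values shown here are only illustrative examples for a single-node setup, not recommendations:

/etc/elasticsearch/elasticsearch.yml (excerpt)

# cluster.name: my-application        # name shared by all nodes in the cluster
# node.name: node-1                   # name of this particular node
# path.data: /var/lib/elasticsearch   # where index data is stored
# http.port: 9200                     # port for the HTTP API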

Elasticsearch listens for traffic from everywhere on port 9200. You will want to restrict outside access to your Elasticsearch instance to prevent outsiders from reading your data. To limit access and increase security, find the line that specifies network.host, uncomment it, and replace its value with localhost, like this:

/etc/elasticsearch/elasticsearch.yml

. . .
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: localhost
. . .

With localhost specified, Elasticsearch will listen only on its loopback interface. To listen on a specific interface instead, you can specify its IP address in place of localhost. Save and close elasticsearch.yml.

Start the Elasticsearch service with systemctl. Give Elasticsearch a few moments to start up; otherwise, you may get errors about not being able to connect.

$ sudo systemctl start elasticsearch

Now, to ensure that Elasticsearch starts up each time the server boots, run this command:

$ sudo systemctl enable elasticsearch
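If you want to confirm that the service came up cleanly before sending any requests, you can check its status first:

$ sudo systemctl status elasticsearch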

You can test whether your Elasticsearch service is running by sending an HTTP request:

$ curl -X GET "localhost:9200"

The output will be a response that shows certain basic information about the local node:

Output

{
  "name" : "Elasticsearch",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "n8Qu5CjWSmyIXBzRXK-j4A",
  "version" : {
    "number" : "7.17.2",
    "build_flavor" : "default",
    "build_type" : "deb",
    "build_hash" : "de7261de50d90919ae53b0eff9413fd7e5307301",
    "build_date" : "2022-03-28T15:12:21.446567561Z",
    "build_snapshot" : false,
    "lucene_version" : "8.11.1",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}

Now that Elasticsearch is up and running, let’s install Kibana, the next component of the Elastic Stack.

Installing and Configuring the Kibana Dashboard

The next step is to install the remaining components of the Elastic Stack using apt:

$ sudo apt install kibana

Then enable and start the Kibana service:

$ sudo systemctl enable kibana

$ sudo systemctl start kibana

Because Kibana is configured to listen only on localhost, Nginx will be used to set up a reverse proxy that allows external access to it.

First, create an administrative Kibana user with the openssl command. This user will be used to log in to the Kibana web interface.

The following command will create the administrative Kibana user and password, and store them in the htpasswd.users file:

$ echo "kibanaadmin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

Enter a password at the prompt.
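The htpasswd.users file simply stores one user:password-hash pair per line, so additional users can be appended with the same pattern. For example, to add a second, purely hypothetical user named kibanaviewer:

$ echo "kibanaviewer:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users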

Next, create an Nginx server block file. In this example, the file is named your_domain.

Using nano or your preferred text editor, create the Nginx server block file:

$ sudo nano /etc/nginx/sites-available/your_domain

Add the following code block into the file, being sure to update your_domain to match your server's FQDN. This code configures Nginx to direct your server's HTTP traffic to the Kibana application, which is listening on localhost:5601. It also configures Nginx to read the htpasswd.users file and require basic authentication:

/etc/nginx/sites-available/your_domain

server {
    listen 80;

    server_name your_domain;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

Then, save and close the file.

After this, enable the new configuration by creating a symbolic link to the sites-enabled directory. If you already created a server block file with the same name as part of the Nginx prerequisite, you do not need to run this command:

$ sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/your_domain

Then check the configuration for syntax errors:

$ sudo nginx -t

If the output reports that the syntax is ok, reload the Nginx service:

$ sudo systemctl reload nginx

If you followed the initial server setup guide, you should have a UFW firewall enabled. To allow connections to Nginx, adjust the rules by typing:

$ sudo ufw allow 'Nginx Full'
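You can confirm that the rule was added by listing the active firewall rules:

$ sudo ufw status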

Kibana is now accessible via your FQDN. You can check the Kibana server's status page by navigating to the following address:

http://your_domain/status

The status page displays information about the server's resource usage, as well as a list of the installed plugins.

Install and Configure Logstash

Logstash is commonly used to process data. With it, you can flexibly gather data from various sources, transform it into a common format, and send it on to another destination.

This command will install Logstash:

$ sudo apt install logstash

Once Logstash is installed, you can move on to configuring it. Logstash's configuration files live in the /etc/logstash/conf.d directory; see Elastic's configuration reference for more information on the configuration syntax. A Logstash pipeline has two required elements, input and output, and one optional element, filter. Input plugins consume data from a source, filter plugins process the data, and output plugins write the data to a destination.
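To make the role of a filter concrete, here is a minimal sketch of an optional filter file; the file name 10-syslog-filter.conf and the grok pattern are illustrative assumptions, and the pipeline in this guide works with just the input and output files created below:

/etc/logstash/conf.d/10-syslog-filter.conf

filter {
  grok {
    # Parse a standard syslog-style line into structured fields.
    match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{SYSLOGHOST:hostname} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:log_message}" }
  }
}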

Create a configuration file known as 02-beats-input.conf where you will set up your Filebeat input:

$ sudo nano /etc/logstash/conf.d/02-beats-input.conf

Insert the following input configuration. This specifies a beats input that will listen on TCP port 5044.

/etc/logstash/conf.d/02-beats-input.conf

input {
  beats {
    port => 5044
  }
}

Save and close the file.

After this, proceed to create a configuration file known as 30-elasticsearch-output.conf:

$ sudo nano /etc/logstash/conf.d/30-elasticsearch-output.conf

Then, insert the following output configuration. This output configures Logstash to store the Beats data in Elasticsearch, which is running at localhost:9200, in an index named after the Beat used. The Beat used in this guide is Filebeat.

/etc/logstash/conf.d/30-elasticsearch-output.conf

output {
  if [@metadata][pipeline] {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
      pipeline => "%{[@metadata][pipeline]}"
    }
  } else {
    elasticsearch {
      hosts => ["localhost:9200"]
      manage_template => false
      index => "%{[@metadata][beat]}-%{[@metadata][version]}-%{+YYYY.MM.dd}"
    }
  }
}

Then, save and close the file.

After this, put the Logstash configuration to the test by using this command:

$ sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t

If there are no syntax errors, the output will display Config Validation Result: OK. Exiting Logstash after a few seconds. If the configuration test is successful, start and enable Logstash to put the configuration changes into effect:

$ sudo systemctl start logstash

$ sudo systemctl enable logstash

Once Logstash has started running properly, you can install and configure Filebeat.

Installing and Configuring Filebeat

The Elastic Stack uses several lightweight data shippers, known as Beats, to gather data from different sources and transport it to Elasticsearch or Logstash. Some of the Beats available from Elastic are introduced below, with an installation example after the list:

  • Metricbeat: collects metrics from your systems and services.
  • Filebeat: collects and ships log files.
  • Packetbeat: collects and analyzes network data.
  • Auditbeat: collects Linux audit framework data and monitors file integrity.
  • Heartbeat: monitors services for their availability with active probing.
  • Winlogbeat: collects Windows event logs.
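All of these Beats are available from the same Elastic APT repository added earlier, so installing an additional one follows the same pattern used throughout this guide. For example, to install Metricbeat (not required for this guide):

$ sudo apt install metricbeat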

This guide will make use of Filebeat to transfer local logs to Elastic Stack.

Then, install Filebeat using apt:

$ sudo apt install filebeat

Next, configure Filebeat to connect to Logstash. Here, we will modify the example configuration file that comes with Filebeat.

Open the Filebeat configuration file:

$ sudo nano /etc/filebeat/filebeat.yml

Filebeat supports numerous outputs, but you will usually send events directly either to Elasticsearch or to Logstash for additional processing. In this guide, Logstash will be used to perform additional processing on the data collected by Filebeat.

Then, find the output.elasticsearch section and comment out the following lines by preceding them with a #:

/etc/filebeat/filebeat.yml

#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]

After this, configure the output.logstash section. Uncomment the lines output.logstash: and hosts: ["localhost:5044"] by removing the #. This configures Filebeat to connect to Logstash on your Elastic Stack server at port 5044.

/etc/filebeat/filebeat.yml

output.logstash:
  # The Logstash hosts
  hosts: ["localhost:5044"]

Save and close the file.

You can extend the functionality of Filebeat with Filebeat modules, which collect and parse logs created by the system logging services of common Linux distributions. In this guide, the system module will be used.

To enable it, run:

$ sudo filebeat modules enable system

You can see a list of enabled and disabled modules by running:

$ sudo filebeat modules list

You will see a list similar to this:

Output

Enabled:
system

Disabled:
apache2
auditd
elasticsearch
icinga
iis
kafka
kibana
logstash
mongodb
mysql
nginx
osquery
postgresql
redis
traefik

By default, Filebeat is configured to use the default paths for the syslog and authorization logs, so there is no need to change anything in the configuration for this guide. You can see the parameters of the module in the /etc/filebeat/modules.d/system.yml configuration file.
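For reference, the relevant portion of that file looks roughly like the following in Filebeat 7.x; treat this as a sketch rather than an exact listing, and leave the defaults in place for this guide:

/etc/filebeat/modules.d/system.yml (excerpt)

- module: system
  # Syslog
  syslog:
    enabled: true
    # var.paths:

  # Authorization logs
  auth:
    enabled: true
    # var.paths: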

Next, set up the Filebeat ingest pipelines, which parse the log data before sending it through Logstash to Elasticsearch. To load the ingest pipeline for the system module, enter the following command:

$ sudo filebeat setup --pipelines --modules system

After this, you can start loading the index template into Elasticsearch.

To load the template, run the following command:

$ sudo filebeat setup --index-management -E output.logstash.enabled=false -E 'output.elasticsearch.hosts=["localhost:9200"]'

Output

Index setup finished.

Filebeat ships with sample Kibana dashboards that let you visualize Filebeat data in Kibana. Before you can use the dashboards, you need to create the index pattern and load the dashboards into Kibana.

As the dashboards load, Filebeat connects to Elasticsearch to check version information. To load dashboards when Logstash is enabled, you need to disable the Logstash output and enable the Elasticsearch output:

$ sudo filebeat setup -E output.logstash.enabled=false -E output.elasticsearch.hosts=['localhost:9200'] -E setup.kibana.host=localhost:5601

After a few minutes, you should receive output similar to this:

Output

Overwriting ILM policy is disabled. Set `setup.ilm.overwrite:true` for enabling.

Index setup finished.
Loading dashboards (Kibana must be running and reachable)
Loaded dashboards
Setting up ML using setup --machine-learning is going to be removed in 8.0.0. Please use the ML app instead.
See more: https://www.elastic.co/guide/en/elastic-stack-overview/current/xpack-ml.html
Loaded machine learning job configurations
Loaded Ingest pipelines

Now, start and enable Filebeat with the following commands:

$ sudo systemctl start filebeat

$ sudo systemctl enable filebeat

Once the Elastic Stack has been set up correctly, Filebeat will begin shipping your syslog and authorization logs to Logstash, which will then load that data into Elasticsearch.

You can query the Filebeat index with this command if you wish to verify that Elasticsearch is receiving the data:

$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

You should see output similar to this:

Output

. . .
{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 2,
    "successful" : 2,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 4040,
      "relation" : "eq"
    },
    "max_score" : 1.0,
    "hits" : [
      {
        "_index" : "filebeat-7.17.2-2022.04.18",
        "_type" : "_doc",
        "_id" : "YhwePoAB2RlwU5YB6yfP",
        "_score" : 1.0,
        "_source" : {
          "cloud" : {
            "instance" : {
              "id" : "294355569"
            },
            "provider" : "digitalocean",
            "service" : {
              "name" : "Droplets"
            },
            "region" : "tor1"
          },
          "@timestamp" : "2022-04-17T04:42:06.000Z",
          "agent" : {
            "hostname" : "elasticsearch",
            "name" : "elasticsearch",
            "id" : "b47ca399-e6ed-40fb-ae81-a2f2d36461e6",
            "ephemeral_id" : "af206986-f3e3-4b65-b058-7455434f0cac",
            "type" : "filebeat",
            "version" : "7.17.2"
          },
. . .

Once the expected output has been received, you can proceed to the next step, exploring Kibana’s dashboards.

Exploring Kibana’s Dashboards

In this step, open a web browser and go to the FQDN or public IP address of your Elastic Stack server. When prompted, enter the credentials you defined earlier; if your session is interrupted, you will need to re-enter them. After logging in, the Kibana homepage will appear.

Select the Discover link in the left-hand navigation bar. On this page, choose the predefined filebeat-* index pattern to see Filebeat data. By default, this shows all of the log data collected over the last 15 minutes.

This will show a histogram with log events.

You can then browse and search through your logs and customize your dashboard as you see fit. Keep in mind, however, that there will not be much there at this point, because only syslogs from the Elastic Stack server itself are being gathered.
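If you want to narrow the view, the search bar at the top of the Discover page accepts Kibana Query Language (KQL). As a small illustration, a query along the following lines would filter for sudo-related events; the exact field names and values available depend on the data Filebeat has shipped from your server:

process.name : "sudo" and host.hostname : "your_hostname"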

Next, navigate to the Dashboard page using the left-hand panel and search for the Filebeat System dashboards. There, you can choose the sample dashboards that come with Filebeat's system module.

Conclusion

Kibana is a leading visualization tool that makes log analysis easy. To start working with Kibana, users need to install Elasticsearch, Logstash, and Kibana, which is exactly what this guide has walked through.