This tutorial explains how to install Elasticsearch 7.10 on CentOS 7. Elasticsearch is an open source search and analytics engine that lets you store, search, and analyze large volumes of data in real time.
We will cover the minimum steps you need to install Elasticsearch 7 on CentOS 7 with all security features enabled, something most how-tos leave out.
1 / Introduction to the Elastic Stack
Nothing explains the Elastic Stack architecture better than a diagram.
The ELK Stack combines three open source components: Elasticsearch, Logstash, and Kibana.
Elasticsearch ingests the logs sent by Beats or Logstash and lets you analyze them through a GUI: Kibana.
Kibana is the open source dashboarding tool of the stack: it lets you build visualizations, charts, maps, and histograms, and combine them into dashboards.
Logstash, Beats: what is the difference?
Beats collect logs and send them directly to Elasticsearch, whereas Logstash can collect logs itself or receive them from Beats, then transform them (ETL) before sending them to Elasticsearch.
2 / Installation
2.1 / Update CentOS 7
sudo yum -y update
2.2 / Prerequisites
sudo yum -y install java-1.8.0-openjdk java-1.8.0-openjdk-devel unzip
If you are in a test environment, create the DNS entries in your hosts file.
# edit your /etc/hosts file
10.11.164.221 elastic.local kibana.local logstash.local
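To make sure the names resolve before going further, you can check them quickly (a simple sanity check, assuming the example address above):
getent hosts elastic.local kibana.local logstash.local
# each name should resolve to 10.11.164.221 (or your own address)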
2.3 / Install Elasticsearch
Download and install the public signing key:
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
Create the repo for Elasticsearch:
sudo vi /etc/yum.repos.d/elasticsearch.repo
Add the following lines to the file:
[elasticsearch]
name=Elasticsearch repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md
Your repository is ready for use. You can now install Elasticsearch:
yum install --enablerepo=elasticsearch elasticsearch
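If you want to confirm which package and version were installed, a quick optional check:
rpm -q elasticsearch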
Let's enable the service at boot:
systemctl enable elasticsearch.service
2.4 / Install Kibana
Create the repo for Kibana:
sudo vi /etc/yum.repos.d/kibana.repo
Add the following lines to the file:
[kibana-7.x]
name=Kibana repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Your repository is ready for use. You can now install Kibana:
yum install kibana
Let's enable the service at boot:
systemctl enable kibana.service
2.5 / Install Logstash
Create the repo for Logstash:
sudo vi /etc/yum.repos.d/logstash.repo
Add the following lines to the file:
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
Your repository is ready for use. You can now install Logstash:
yum install logstash
Let's enable the service at boot:
systemctl enable logstash.service
2.6 / Let's secure everything
We’ll start by creating the needed certificates for each instance.
Go to /tmp:
cd /tmp
Create a YAML file with the instance information, using the names you set in DNS or in the hosts file:
vi /tmp/instance.yml
instances:
  - name: 'elastic'
    dns: [ 'elastic.local' ]
  - name: 'kibana'
    dns: [ 'kibana.local' ]
  - name: 'logstash'
    dns: [ 'logstash.local' ]
Generate the Certificate Authority (CA) and the server certificates:
/usr/share/elasticsearch/bin/elasticsearch-certutil cert --keep-ca-key --pem --in /tmp/instance.yml --out /tmp/certs.zip
Unzip the certificate files:
unzip certs.zip -d ./certs
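Before copying anything, you can check that the archive contains one folder per instance plus the CA (the layout you should expect, given the instance.yml above):
ls -R /tmp/certs
# expected: ca/ca.crt and ca/ca.key, plus elastic/, kibana/ and logstash/ folders,
# each containing a <name>.crt and <name>.key pair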
Go to the Elasticsearch folder to import the certificates:
cd /etc/elasticsearch
mkdir certs
cp /tmp/certs/ca/ca.crt /etc/elasticsearch/certs/
cp /tmp/certs/elastic/* /etc/elasticsearch/certs/
Jump to the Kibana folder and do the same:
cd /etc/kibana
mkdir certs
cp /tmp/certs/ca/ca.crt /etc/kibana/certs/
cp /tmp/certs/kibana/* /etc/kibana/certs/
Jump to the Logstash folder and do the same again:
cd /etc/logstash
mkdir certs
cp /tmp/certs/ca/ca.crt /etc/logstash/certs/
cp /tmp/certs/logstash/* /etc/logstash/certs/
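A quick sanity check that each service now has its CA and certificate pair in place (ownership and modes may differ slightly on your system):
ls -l /etc/elasticsearch/certs /etc/kibana/certs /etc/logstash/certs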
3 / Configure the ELK Stack
3.1 / Elasticsearch configuration
Open the Elasticsearch configuration file:
sudo vi /etc/elasticsearch/elasticsearch.yml
Add or replace these parameters:
node.name: elastic
network.host: elastic.local
http.port: 9200
xpack.security.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.security.http.ssl.key: certs/elastic.key
xpack.security.http.ssl.certificate: certs/elastic.crt
xpack.security.http.ssl.certificate_authorities: certs/ca.crt
xpack.security.transport.ssl.key: certs/elastic.key
xpack.security.transport.ssl.certificate: certs/elastic.crt
xpack.security.transport.ssl.certificate_authorities: certs/ca.crt
xpack.security.authc.api_key.enabled: true
discovery.seed_hosts: [ "elastic.local" ]
cluster.initial_master_nodes: [ "elastic" ]
We can now start the Elasticsearch service:
sudo systemctl start elasticsearch.service
Then we can generate passwords for the built-in Elasticsearch users:
/usr/share/elasticsearch/bin/elasticsearch-setup-passwords auto -u "https://elastic.local:9200"
Keep the passwords in a safe place ;)
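To confirm that TLS and authentication are working, you can query the cluster with the elastic superuser, using the password generated above:
curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic https://elastic.local:9200
# you should get a JSON answer with the cluster name and version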
3.2 / Kibana Configuration
Open the Kibana configuration file:
sudo vi /etc/kibana/kibana.yml
Add or replace these parameters:
server.port: 5601
server.host: "kibana.local"
server.name: "kibana.local"
elasticsearch.hosts: ["https://elastic.local:9200"]
server.ssl.enabled: true
server.ssl.certificate: /etc/kibana/certs/kibana.crt
server.ssl.key: /etc/kibana/certs/kibana.key
xpack.fleet.enabled: true
xpack.fleet.agents.tlsCheckDisabled: true
xpack.encryptedSavedObjects.encryptionKey: "something_at_least_32_characters" #REPLACE the ENC KEY
xpack.security.enabled: true
elasticsearch.username: "kibana"
elasticsearch.password: "144Q8CU2obLjHTUOfkWT"
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/certs/ca.crt" ]
We can now start the Kibana service:
sudo systemctl start kibana.service
Well done! You can now log in with the elastic user :)
https://your-url:5601
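If the page does not come up, checking the service status and the listening port is a good first step:
systemctl status kibana.service
ss -tlnp | grep 5601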
Last important step in the Kibana configuration: as we use a self-signed certificate, we have to trust it to avoid issues.
yum install ca-certificates
update-ca-trust force-enable
cp /tmp/certs/ca/ca.crt /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
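Once the CA is in the system trust store, tools such as curl should accept the Elasticsearch certificate without the --cacert option; a quick way to verify:
curl -u elastic https://elastic.local:9200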
3.3 / Logstash Configuration
In Kibana, we have to create a Logstash role and a Logstash user; then we will be able to configure Logstash.
To create the role, go to “Stack Management”, “Roles”, then click “Create role”.
Or create it through the API using “Dev Tools”, with the following request:
POST /_security/role/logstash_write_role
{
  "cluster": [
    "monitor",
    "manage_index_templates"
  ],
  "indices": [
    {
      "names": [
        "logstash*"
      ],
      "privileges": [
        "write",
        "create_index"
      ],
      "field_security": {
        "grant": [
          "*"
        ]
      }
    }
  ],
  "run_as": [],
  "metadata": {},
  "transient_metadata": {
    "enabled": true
  }
}
You should get the following response:
{"role":{"created":true}}
Create the user and link it to the role, with the API:
POST /_security/user/logstash_writer
{
  "username": "logstash_writer",
  "roles": [
    "logstash_write_role"
  ],
  "full_name": null,
  "email": null,
  "password": "<logstash_system_password>",
  "enabled": true
}
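You can also verify from the command line that the role and the user exist (reusing the elastic user and the CA certificate from earlier):
curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic https://elastic.local:9200/_security/role/logstash_write_role
curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic https://elastic.local:9200/_security/user/logstash_writer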
Now convert logstash.key to PKCS#8 format for the Beats input plugin:
cd /etc/logstash
openssl pkcs8 -in certs/logstash.key -topk8 -nocrypt -out certs/logstash.pkcs8.key
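You can check that the converted key is readable by OpenSSL before referencing it in the pipeline:
openssl pkey -in certs/logstash.pkcs8.key -noout && echo "logstash.pkcs8.key OK"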
Configure Logstash:
vi /etc/logstash/logstash.yml
node.name: logstash.local
path.config: /etc/logstash/conf.d/*.conf
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: 'OcUbJny3AgToY9zoxz9T'
xpack.monitoring.elasticsearch.hosts: [ 'https://elastic.local:9200' ]
xpack.monitoring.elasticsearch.ssl.certificate_authority: /etc/logstash/certs/ca.crt
Now let's create a pipeline configuration file with generic parameters:
vi conf.d/example.conf
input {
  beats {
    port => 5044
    ssl => true
    ssl_key => '/etc/logstash/certs/logstash.pkcs8.key'
    ssl_certificate => '/etc/logstash/certs/logstash.crt'
  }
}
output {
  elasticsearch {
    ilm_enabled => false
    hosts => ['https://elastic.local:9200']
    cacert => '/etc/logstash/certs/ca.crt'
    user => 'logstash_writer'
    password => 'OcUbJny3AgToY9zoxz9T'
  }
}
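Before starting the service, you can ask Logstash to validate the pipeline configuration (this takes a minute or so to run):
/usr/share/logstash/bin/logstash --path.settings /etc/logstash --config.test_and_exit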
Now, start the Logstash service:
systemctl start logstash
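Once started, the Beats input should be listening on port 5044, and the Logstash log will show any TLS or authentication problem:
ss -tlnp | grep 5044
tail -f /var/log/logstash/logstash-plain.log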
You can also verify everything is running well in Kibana, in the Monitoring section.
4 / Configuration Test
4.1 / Install Filebeat
To validate the configuration, we'll install Filebeat on a server (the local one or a remote one). As we saw in the introduction, we'll configure Filebeat to send logs to Logstash, and check that they end up in Elasticsearch.
Install Filebeat:
rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.1-x86_64.rpm
Configure TLS:
cd /etc/filebeat
mkdir certs
cp /tmp/certs/ca/ca.crt /etc/filebeat/certs/
Configure Filebeat to send logs to Logstash:
mv filebeat.yml filebeat.bkp
vi filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log
output.logstash:
  hosts: ["logstash.local:5044"]
  ssl.certificate_authorities:
    - /etc/filebeat/certs/ca.crt
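Filebeat ships with built-in checks you can run before starting the service, to validate the configuration file and the connection to Logstash:
filebeat test config
filebeat test output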
Start the Filebeat service:
systemctl start filebeat
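After a short while, Logstash should have created a daily logstash-* index in Elasticsearch; you can confirm it from the Elasticsearch host (reusing the elastic user and CA certificate from earlier):
curl --cacert /etc/elasticsearch/certs/ca.crt -u elastic "https://elastic.local:9200/_cat/indices/logstash-*?v"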
4.2 / Create the Index Pattern
Go to Kibana, “Stack Management”, “Index Patterns”, then click “Create index pattern”.

Then check the result in the Discover section of Kibana.
Everything is working fine, with all security features enabled :)