I tried this on CentOS. If you are using a different distribution, adapt the commands accordingly.
Step 1 Download all needed RPMs
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.rpm
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.3.0-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.3.0.rpm
wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.2.0-1.x86_64.rpm
wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.3.0-x86_64.rpm
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.3.0-x86_64.rpm
Step 2 Install Elasticsearch on the monitoring node
#Install Elasticsearch
sha1sum elasticsearch-5.3.0.rpm
sudo rpm --install elasticsearch-5.3.0.rpm
#Configure Elasticsearch
sed -i 's/^#network.host.*/network.host: 0.0.0.0/' /etc/elasticsearch/elasticsearch.yml
#Adjust size of bulk operations queue so that it will not fill up and block the flow of data into ES.
echo 'thread_pool.bulk.queue_size: 1000' >> /etc/elasticsearch/elasticsearch.yml
#Start Elasticsearch and set it to run on boot
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
systemctl start elasticsearch
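To confirm Elasticsearch is up before moving on, you can query it directly (assuming the default HTTP port 9200; it can take a few seconds to start):
#Check that Elasticsearch responds
curl http://localhost:9200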
Step 3 Install Kibana on the monitoring node
#Install Kibana
sha1sum kibana-5.3.0-x86_64.rpm
sudo rpm --install kibana-5.3.0-x86_64.rpm
#Configure Kibana
#Get IP of elk server
IP=$(ip route get 8.8.8.8 | awk '/8.8.8.8/ {print $NF}')
#Get hostname of elk server
HOSTNAME="$(hostname)"
#Setting server.host to the server's public IP means remote machines can connect and view Kibana
sed -i "s/^#server.host.*/server.host: $IP/" /etc/kibana/kibana.yml
sed -i "s/^#server.name.*/server.name: $HOSTNAME/" /etc/kibana/kibana.yml
chown kibana:kibana -R /usr/share/kibana/optimize/
#Start Kibana and enable on startup
sudo systemctl enable kibana.service
systemctl start kibana
sudo chkconfig kibana on
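To verify Kibana is running, you can hit its status endpoint (assuming the default port 5601 and the $IP variable set above; the first start can take a minute while Kibana optimizes its bundles):
#Check Kibana status
curl http://$IP:5601/api/status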
Step 4 Install Logstash on the monitoring node
#Install Logstash
sha1sum logstash-5.3.0.rpm
sudo rpm --install logstash-5.3.0.rpm
# Copy logstash.conf into the directory Logstash reads its pipeline configuration from.
# This file defines how incoming logs will be parsed
cp logstash.conf /etc/logstash/conf.d
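If you do not already have a logstash.conf to copy, here is a minimal sketch to start from. It assumes Filebeat ships logs to port 5044 (handled by the logstash-input-beats plugin installed below) and writes to a logstash-mylog-* index, matching the Grafana datasource created in Step 6:
#Write a minimal pipeline: listen for Beats on 5044 and index into local Elasticsearch
sudo tee /etc/logstash/conf.d/logstash.conf > /dev/null <<'EOF'
input {
  beats {
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-mylog-%{+YYYY.MM.dd}"
  }
}
EOF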
#Change log level to error to save on space
sed -i "s/^# log.level: info.*/log.level: error/" /etc/logstash/logstash.yml
#Start Logstash and enable on startup
systemctl start logstash
chkconfig logstash on
#Install the logstash-input-beats plugin so Logstash can receive
# log data sent from Filebeat
/usr/share/logstash/bin/logstash-plugin install logstash-input-beats
#Change the Elasticsearch log level to warn rather than the default 'info' to save space
curl -XPUT localhost:9200/_cluster/settings -d '{"transient":{"logger._root":"WARN"}}'
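To confirm the Beats input plugin is installed and Logstash is listening for Filebeat connections (assuming the default Beats port 5044; the pipeline can take a minute to come up):
#Verify the plugin and the listening port
/usr/share/logstash/bin/logstash-plugin list | grep beats
sudo ss -tlnp | grep 5044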
Step 5 Install Filebeat on the application node
#Install Filebeat version 5.3.0
sudo rpm -vi filebeat-5.3.0-x86_64.rpm
#Configure the Filebeat input and output
#Overwrite the existing filebeat.yml with your own template
sudo cp /home/$USER/filebeat.yml /etc/filebeat/filebeat.yml
If you don't have a predefined filebeat.yml, edit the paths section of /etc/filebeat/filebeat.yml instead, setting the paths for the stats and logs to be harvested (Filebeat 5.x uses the filebeat.prospectors syntax):
filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
In a real deployment, Filebeat and the monitoring stack run on different nodes, so Filebeat needs to know where the ELK server is located. Point it at Logstash by editing filebeat.yml (set $ELK_IP accordingly):
sudo sed -i "/localhost:5044/ c\ hosts: ['$ELK_IP:5044']" /etc/filebeat/filebeat.yml
Elasticsearch needs to know what the record format looks like, so upload the Filebeat index template to the ELK server
#Upload the filebeat.template.json file to the ELK server
curl -XPUT "http://$ELK_IP:9200/_template/filebeat" -d@/etc/filebeat/filebeat.template.json
Start Filebeat and enable it on boot
#Start filebeat
sudo systemctl start filebeat
sudo systemctl enable filebeat
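Once Filebeat is running, you can check that log data is arriving in Elasticsearch via Logstash (this assumes the logstash-mylog-* index name used by the Logstash output):
#List the indices created from the shipped logs
curl "http://$ELK_IP:9200/_cat/indices/logstash-mylog-*?v"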
Step 6 Install Grafana on the monitoring node
#Download & install Grafana
sudo yum install grafana-4.2.0-1.x86_64.rpm
#Start and enable Grafana on boot
systemctl daemon-reload
sudo systemctl enable grafana-server.service
systemctl start grafana-server
#Wait for Grafana to finish starting before creating datasources
sleep 15
#Create datasource for metrics
curl -H "Content-Type: application/json" -X POST -d
'{"name":"Elasticsearch-System-Metrics", "type":"elasticsearch", "url":"http://localhost:9200", "access":"proxy",
"basicAuth":false, "database":"metricbeat-*",
"jsonData":{"esVersion":5}}' http://admin:admin@127.0.0.1:3000/api/datasources
#Create datasource for Unified logs
curl -H "Content-Type: application/json" -X POST -d
'{"name":"Elasticsearch-UL", "type":"elasticsearch", "url":"http://localhost:9200", "access":"proxy",
"basicAuth":false, "database":"logstash-mylog-*",
"jsonData":{"esVersion":5}}' http://admin:admin@127.0.0.1:3000/api/datasources
Step 7 Install Metricbeat on the monitoring node
#Install Metricbeat
sudo rpm -vi metricbeat-5.3.0-x86_64.rpm
#Configure the Elasticsearch output IP (set $ELK_IP accordingly)
sudo sed -i "/localhost:9200/ c\ hosts: ['$ELK_IP:9200']" /etc/metricbeat/metricbeat.yml
#Import the Metricbeat sample dashboards into Elasticsearch (they will show up in Kibana)
cd /usr/share/metricbeat/scripts
./import_dashboards -es http://$ELK_IP:9200
#Start and enable on boot
sudo systemctl start metricbeat
sudo systemctl enable metricbeat
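Finally, confirm system metrics are flowing by listing the Metricbeat indices in Elasticsearch (again assuming $ELK_IP points at the monitoring node):
#List Metricbeat indices
curl "http://$ELK_IP:9200/_cat/indices/metricbeat-*?v"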