Elastic Stack Class
Install Nginx first:
We will use Nginx as a reverse proxy in front of the Kibana UI (for a secure login).
sudo apt update && sudo apt -y install nginx
sudo systemctl enable nginx
sudo apt install apt-transport-https -y
Install Elasticsearch:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.0-amd64.deb
sudo dpkg -i elasticsearch-7.17.0-amd64.deb
Download and install Kibana:
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.17.0-amd64.deb
sudo dpkg -i kibana-7.17.0-amd64.deb
Install Logstash:
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.17.1-amd64.deb
sudo dpkg -i logstash-7.17.1-amd64.deb
Install Filebeat:
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.17.2-amd64.deb
sudo dpkg -i filebeat-7.17.2-amd64.deb
Install Heartbeat:
wget https://artifacts.elastic.co/downloads/beats/heartbeat/heartbeat-8.3.2-amd64.deb
sudo dpkg -i heartbeat-8.3.2-amd64.deb
(Note: this Heartbeat package is from the 8.x line while the rest of the stack here is 7.17.x; keeping versions matched across the stack is generally safer.)
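Heartbeat is not configured further in these notes; for reference, a minimal uptime monitor in /etc/heartbeat/heartbeat.yml might look like this (the monitor id/name and target URL are just examples, and the option names should be checked against your Heartbeat version):
heartbeat.monitors:
- type: http
  id: kibana-up
  name: Kibana
  hosts: ["http://localhost:5601"]
  schedule: '@every 10s'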
Elasticsearch yaml (/etc/elasticsearch/elasticsearch.yml):
Uncomment cluster.name, node.name, network.host, and http.port, and set:
network.host: localhost
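Uncommented, those settings might end up looking like this (the cluster and node names are just examples):
cluster.name: elastic-class
node.name: node-1
network.host: localhost
http.port: 9200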
sudo systemctl start elasticsearch.service
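Once Elasticsearch is up, a quick sanity check against its REST API (it should return a small JSON document with the cluster name and version):
curl http://localhost:9200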
Kibana yaml (/etc/kibana/kibana.yml):
Uncomment server.port and server.host.
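With the defaults uncommented, that is:
server.port: 5601
server.host: "localhost"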
sudo systemctl start kibana.service
Configure Nginx to proxy Kibana:
htpasswd is needed; it is part of apache2-utils, so install that with apt-get:
sudo apt-get -y install apache2-utils
sudo htpasswd -c /etc/nginx/htpasswd.users kibanaadm
It will prompt for a password.
sudo vi /etc/nginx/sites-available/default
Below should be the content of the file:
server {
    listen 80;
    server_name <Private IP of the server on which Kibana is running>;

    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/htpasswd.users;

    location / {
        proxy_pass http://localhost:5601;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
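Before starting Nginx, validate the configuration with its built-in syntax check:
sudo nginx -t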
sudo systemctl start nginx
sudo systemctl restart kibana
sudo systemctl restart elasticsearch
Open the VM's DNS name in a browser: you will land on the Kibana page; use the kibanaadm login credentials.
Download a sample data set for Use Case 1:
wget logz.io/sample-data
sudo cp sample-data /etc/logstash/apache.log   (the pipeline below reads the log from this path)
cd /etc/logstash/conf.d
Write a pipeline for Logstash:
vi apache.conf
input {
  file {
    path => "/etc/logstash/apache.log"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  date {
    match => [ "timestamp", "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
  geoip {
    source => "clientip"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
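Optionally, validate the pipeline syntax before starting the service, using Logstash's built-in config test (paths per the standard deb install):
sudo /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/apache.conf --config.test_and_exit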
Add an index to it later, inside the elasticsearch {} output block, like this:
    index => "apache-log-1"
The filter used in the above conf is worth discussing.
In the filter section, we are applying:
a) a grok filter that parses the log string and populates the event with the relevant information from the Apache logs,
b) a date filter to set the event timestamp from the parsed timestamp field, and
c) a geoip filter to enrich the clientip field with geographical data.
sudo systemctl start logstash
sudo systemctl status logstash
In Kibana, check the index pattern (Management -> Index Patterns); the index written above will load automatically.
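The index can also be confirmed directly against Elasticsearch with the standard _cat API:
curl -s 'http://localhost:9200/_cat/indices?v'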
2nd Use Case: Load CSV data
wget https://www.quandl.com/api/v1/datasets/BCHARTS/MTGOXUSD.csv
The new conf file for Logstash would be (bitcoin.conf, for example):
input {
  file {
    path => "<path of the csv file>"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}

filter {
  csv {
    separator => ","
    # Date,Open,High,Low,Close,Volume (BTC),Volume (Currency),Weighted Price
    columns => ["Date","Open","High","Low","Close","Volume (BTC)","Volume (Currency)","Weighted Price"]
  }
}

output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "bitcoin-prices"
  }
  stdout {}
}
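The csv filter leaves every column as a string. If you want numeric aggregations in Kibana, a mutate/convert block (not part of the original notes; the field list here matches the columns above) can be added to the filter section:
mutate {
  convert => {
    "Open" => "float"
    "High" => "float"
    "Low" => "float"
    "Close" => "float"
    "Weighted Price" => "float"
  }
}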
Restart Logstash and follow the same steps in Kibana; Kibana might need a restart as well.
3rd Use Case: Real-time web logs
Use Filebeat.
sudo filebeat modules list
sudo filebeat modules enable nginx
sudo filebeat modules enable system
cd /etc/filebeat/modules.d
vi nginx.yml
Add (in the access logs section):
var.paths: ["/var/log/nginx/access.log*"]
Add (in the error logs section):
var.paths: ["/var/log/nginx/error.log*"]
vi system.yml
Add (in the syslog section):
var.paths: ["/var/log/syslog*"]
Add (in the auth logs section):
var.paths: ["/var/log/auth.log*"]
sudo systemctl start filebeat
Restart Logstash; you will find new indices (Management -> Index Patterns).
Visualization:
Some predefined visualizations and dashboards can be enabled using:
sudo filebeat setup -e
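Note: setup -e loads dashboards into Kibana, so it assumes /etc/filebeat/filebeat.yml points at both Elasticsearch and Kibana; with the setup in this class those are the stock values:
output.elasticsearch:
  hosts: ["localhost:9200"]
setup.kibana:
  host: "localhost:5601"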
Check the dashboards now (filter with "filebeat").