ELK and Grafana

I tried this on CentOS. If you use a different distribution, adapt the commands accordingly.

Step 1 Download all needed RPMs

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.rpm 
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.3.0-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.3.0.rpm
wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.2.0-1.x86_64.rpm
wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.3.0-x86_64.rpm
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.3.0-x86_64.rpm

Step 2 Install Elasticsearch on the monitoring node

#Install Elasticsearch
sha1sum elasticsearch-5.3.0.rpm
sudo rpm --install elasticsearch-5.3.0.rpm
#Configure Elasticsearch
sed -i 's/^#network.host.*/network.host: 0.0.0.0/' /etc/elasticsearch/elasticsearch.yml
#Adjust size of bulk operations queue so that it will not fill up and block the flow of data into ES.
echo 'thread_pool.bulk.queue_size: 1000' >> /etc/elasticsearch/elasticsearch.yml
#Start Elasticsearch and set it to run on boot
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
systemctl start elasticsearch
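
Before moving on, it is worth checking that Elasticsearch actually came up; it can take half a minute before it starts answering:

#Elasticsearch responds on port 9200 with its cluster info
curl http://localhost:9200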

Step 3 Install Kibana on the monitoring node

#Install Kibana
sha1sum kibana-5.3.0-x86_64.rpm
sudo rpm --install kibana-5.3.0-x86_64.rpm
#Configure Kibana
#Get IP of elk server
IP=$(ip route get 8.8.8.8 | awk '/8.8.8.8/ {print $NF}')
#Get hostname of elk server
HOSTNAME="$(hostname)"
#Setting server.host to the server's public IP means remote machines can connect and view Kibana
sed -i "s/^#server.host.*/server.host: $IP/" /etc/kibana/kibana.yml
sed -i "s/^#server.name.*/server.name: $HOSTNAME/" /etc/kibana/kibana.yml
#Make sure the optimize directory is owned by the kibana user
chown kibana:kibana -R /usr/share/kibana/optimize/
#Start Kibana and enable on startup
sudo systemctl enable kibana.service
systemctl start kibana
sudo chkconfig kibana on
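
Kibana listens on port 5601 by default, so a quick request confirms it is serving:

#Expect an HTTP response once Kibana has finished starting
curl -I http://$IP:5601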

Step 4 Install Logstash on the monitoring node

#Install Logstash
sha1sum logstash-5.3.0.rpm
sudo rpm --install logstash-5.3.0.rpm
#Copy logstash.conf into the directory Logstash loads configurations from; this file defines how logs will be parsed
cp logstash.conf /etc/logstash/conf.d
#Change log level to error to save on space
sed -i "s/^# log.level: info.*/log.level: error/" /etc/logstash/logstash.yml
#Start Logstash and enable on startup
systemctl start logstash
chkconfig logstash on
#Install the logstash-input-beats plugin so Logstash can receive events from Filebeat
/usr/share/logstash/bin/logstash-plugin install logstash-input-beats
#Reduce Elasticsearch's root logger from the default 'info' to 'warn' to save space
curl -XPUT localhost:9200/_cluster/settings -d '{"transient":{"logger._root":"WARN"}}'
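
The logstash.conf copied above is never shown in this post. If you need a starting point, here is a minimal sketch: it listens for Filebeat on port 5044 and writes to the local Elasticsearch under an index matching the logstash-mylog-* datasource created in Step 6. The filters you need depend on your log format, so treat this as an assumption to adapt:

cat > logstash.conf <<'EOF'
input {
  beats {
    # Port that the logstash-input-beats plugin listens on
    port => 5044
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Daily indices; matches the logstash-mylog-* pattern used in Grafana
    index => "logstash-mylog-%{+YYYY.MM.dd}"
  }
}
EOF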

Step 5 Install Filebeat on the application node

#install Filebeat version 5.3.0
sudo rpm -vi filebeat-5.3.0-x86_64.rpm
#Configure filebeats input and output
#Overwrite the existing filebeat.yml with your own template
sudo cp /home/$USER/filebeat.yml /etc/filebeat/filebeat.yml

If you don't have a predefined filebeat.yml, configure the paths section in the default one so Filebeat knows which logs to harvest (note that Filebeat 5.x uses the filebeat.prospectors syntax):

filebeat.prospectors:
- input_type: log
  paths:
    - /var/log/*.log
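
Before starting Filebeat later on, you can validate the edited file. Filebeat 5.x shipped a -configtest flag for this (check your exact version, since later releases replaced it with a subcommand):

#Validate filebeat.yml (flag available in the 5.x series)
sudo /usr/share/filebeat/bin/filebeat -configtest -c /etc/filebeat/filebeat.yml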

In a real deployment, Filebeat and the monitoring stack run on different nodes, so Filebeat needs to know where ELK is located. This is done by changing filebeat.yml (substitute $ELK_IP accordingly):

sudo sed -i "/localhost:5044/ c\  hosts: ['$ELK_IP:5044']" /etc/filebeat/filebeat.yml

Elasticsearch needs to know what the record format looks like, so upload the index template:

#Upload the filebeat.template.json file to the ELK server
curl -XPUT "http://$ELK_IP:9200/_template/filebeat" -d@/etc/filebeat/filebeat.template.json

Start Filebeat

#Start filebeat
sudo systemctl start filebeat
sudo systemctl enable filebeat
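
Once Filebeat has run for a moment, the ELK server should show a filebeat index. This is a simple sanity check, assuming $ELK_IP is still set in your shell:

#Look for filebeat-YYYY.MM.DD entries in the index list
curl "http://$ELK_IP:9200/_cat/indices?v" | grep filebeat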

Step 6 Install Grafana on the monitoring node

#!/bin/bash
#Download & install Grafana
sudo yum install grafana-4.2.0-1.x86_64.rpm 
#Start and enable Grafana on boot
systemctl daemon-reload
sudo systemctl enable grafana-server.service
systemctl start grafana-server

#Give Grafana a moment to come up before calling its API
sleep 15
#Create datasource for metrics
curl -H "Content-Type: application/json" -X POST -d '{"name":"Elasticsearch-System-Metrics", "type":"elasticsearch", "url":"http://localhost:9200", "access":"proxy", "basicAuth":false, "database":"metricbeat-*", "jsonData":{"esVersion":5}}' http://admin:admin@127.0.0.1:3000/api/datasources
#Create datasource for Unified logs
curl -H "Content-Type: application/json" -X POST -d '{"name":"Elasticsearch-UL", "type":"elasticsearch", "url":"http://localhost:9200", "access":"proxy", "basicAuth":false, "database":"logstash-mylog-*", "jsonData":{"esVersion":5}}' http://admin:admin@127.0.0.1:3000/api/datasources

Step 7 Install Metricbeat on the monitoring node

#!/bin/bash
#Install Metricbeat
sudo rpm -vi metricbeat-5.3.0-x86_64.rpm
#Configure Elasticsearch output ip
sudo sed -i "/localhost:9200/ c\  hosts: ['$ELK_IP:9200']" /etc/metricbeat/metricbeat.yml
#Import the Metricbeat dashboards into Kibana (stored via Elasticsearch)
cd /usr/share/metricbeat/scripts
./import_dashboards -es http://$ELK_IP:9200

#Start and enable on boot
sudo systemctl start metricbeat
sudo systemctl enable metricbeat

Data Lake and Data Warehouse

Seven years ago I wrote some articles about data warehousing (see https://jokondo.wordpress.com/category/data-warehouse/). At that time, only a few organizations talked about data warehousing and data mining.

Nowadays, in the era of big data, where huge amounts of data are generated, people have started thinking differently about how data is stored and used for analytical purposes.

In a traditional data warehouse, the data is loaded into an RDBMS after its use has been defined. For example, an organization might use the data warehouse to keep the total goods sold for every city, region, state and country, also capturing the goods type.

Data Lakes and Data Warehouses serve different objectives in an enterprise. Some of the key differences are shown below:

 

Data Lake vs Data Warehouse:

1. Data capture
   Data Lake: captures all types of data (structured, semi-structured and unstructured) in its most natural form from source systems.
   Data Warehouse: captures structured information and processes it as it is acquired, into a fixed model defined for data warehouse purposes.

2. Processing
   Data Lake: possesses enough processing power to process and analyze all kinds of data and have it ready for access.
   Data Warehouse: processes structured data into a dimensional or reporting model for advanced reporting and analytics.

3. Retention and access
   Data Lake: usually contains the more relevant information that has a good probability of being accessed, and can serve the operational needs of an enterprise.
   Data Warehouse: usually stores and retains data for the long term, so that the data can be accessed on demand.

[Figure: layers in a data lake]

 

Rich Text Editor in Java, JSP

Today a friend asked me to implement a Rich Text Editor in Java. Their current application is a standard Java web application.

Talking to my friend, who usually works in Ruby, he recommended CKEditor. From the forums and the feedback on this JavaScript library, I can say it is already mature.

They have also implemented a taglib for JSP (whether this makes things more complicated or easier is up to the developer who uses it).

Since the friend asking for help had already implemented things the Java way, we went with the taglib approach for the sake of maintainability.

For anyone who still has doubts about what a Rich Text Editor (RTE) looks like, here is an example:

[Screenshot: a CKEditor rich text editor]

The common use case for an RTE is when we need to let users write rich posts (like in Jira) instead of writing HTML code to draw a table or insert an image. Another use case is a web application that requires an admin user to create fancy email templates in an easier way.

So, now you have a solid reason to have this feature in your web application. The steps to implement it are as follows.

I assume you use a Maven-based project, so you can add the CKEditor taglib library to the project:


<!-- WYSWYG lib CK Editor -->
<dependency>
     <groupId>com.ckeditor</groupId>
     <artifactId>ckeditor-java-core</artifactId>
     <version>3.5.3</version>
</dependency>

After the jar is added (either through Maven or manually), download CKEditor itself from http://ckeditor.com/download/releases (I used CKEditor 3.5.3)

and copy everything into your JavaScript folder (we will reference this location in our JSP later). In my project it looks like this:

[Screenshot: CKEditor files under the project's js folder]

Once the JavaScript and all the CKEditor files are added to the project, we can start working on our JSP.

Add this snippet at the top of your JSP so you can start using the taglib:


<%@ taglib uri="http://ckeditor.com" prefix="ckeditor" %>

 

Within the form tag, use ckeditor:editor instead of textarea, like below:

<tr>
  <td class="field-label" colspan="2">Void Email Template
    <ckeditor:editor basePath="${contextPath}/js" editor="voidTemplate" value="${settingForm.voidTemplate}" /></td>
</tr>

basePath="${contextPath}/js" is the location where the CKEditor JavaScript files live.

Deploy the application, and you should be able to use CKEditor as your RTE.


ESB with Mule II

I assume you already know why you need an ESB; if not, please read the previous post at esb-dengan-mule-i first.

Once you know why and when we need one, let's dissect one ESB technology, starting from installation and then working through cases one by one.

First step: download Anypoint Studio (formerly called Mule Studio) from http://www.mulesoft.org/ and install it on your PC.

Oh, and before installing, make sure you have the Java JDK installed and JAVA_HOME set; you need at least Java version 7.

This is what Anypoint Studio looks like. If you are used to Eclipse it will be much easier; if not, that's fine too 🙂

[Screenshot: Anypoint Studio]

The next step is to install the Community Edition plugin (by default Anypoint only ships with the Enterprise Edition runtime).

[Screenshot: installing the Community Edition plugin]

Then add the Mule Studio runtime plugin, or update it if it is already there.

[Screenshot: Mule Studio runtimes]

Click the Next button; you can choose the CE version you need.

[Screenshot: selecting the Community Edition runtime version]

 

Once that succeeds, you can start creating projects using the Anypoint Community Edition runtime, like this:

[Screenshot: a new project using the CE runtime]

Have fun trying it out 🙂


ESB with Mule I

When do you need an ESB (Enterprise Service Bus)?

This question will make it easier for us to understand what an ESB is.

1. When you need to integrate two or more applications/systems.

2. When you need to expose an API to be consumed by other systems, which typically belong to other companies.

3. When various legacy systems with all sorts of inputs and outputs need to be combined so they can 'talk' to one another.

The keywords in the needs above are INTEGRATION, API and MULTI-SYSTEM.

 

Integration can range from its simplest form, file exchange, up to Web Services (the most common form).

You could handle such integration by developing an application yourself, once both sides agree on how to integrate and on the format (if a file is exchanged). That solution is simply not practical: every change, say adding a field, takes a lot of effort. And once more applications are involved, with many inbounds and outbounds (ESB terms for the endpoints or invokers/callers), a home-grown application becomes very hard to maintain.

An ESB positions itself as the solution for integrating many applications easily, whether those applications are already in production (legacy) or you want to expose an API so other systems can 'communicate' with yours.

 

Ubuntu Environment Preparation as a Web Server

Install JAVA

============

sudo apt-get purge openjdk*

sudo add-apt-repository ppa:webupd8team/java

sudo apt-get update

sudo apt-get install oracle-java7-installer
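
A quick check that the Oracle JDK is now the default (verify before moving on):

java -version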

Install  Apache + Tomcat

========================

http://thetechnocratnotebook.blogspot.ae/2012/05/installing-tomcat-7-and-apache2-with.html

sudo apt-get install apache2

sudo apt-get install tomcat7

sudo apt-get install libapache2-mod-jk

sudo vim /etc/tomcat7/server.xml

uncomment the line below:

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

sudo vim /etc/apache2/workers.properties

# Define 1 real worker using ajp13

worker.list=worker1

# Set properties for worker (ajp13)

worker.worker1.type=ajp13

worker.worker1.host=localhost

worker.worker1.port=8009

sudo vim /etc/apache2/mods-available/jk.conf

change the JkWorkersFile property to /etc/apache2/workers.properties

sudo vim /etc/apache2/sites-enabled/000-default

add the mount inside the VirtualHost block:

JkMount /tomcat-demo* worker1
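
After wiring Apache to Tomcat, restart both services and request the mounted path as a sanity check (a sketch; tomcat-demo stands for whatever webapp you deployed under that context):

sudo service tomcat7 restart

sudo service apache2 restart

curl -I http://localhost/tomcat-demo/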

Install MySQL

============

https://help.ubuntu.com/12.04/serverguide/mysql.html

sudo apt-get install mysql-server

sudo vim /etc/mysql/my.cnf

bind-address            = 192.XX.X.X

sudo service mysql restart
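
To confirm MySQL is now listening on the address you configured (a quick check; 3306 is the default port):

sudo netstat -tlnp | grep 3306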

TOMCAT Details:

--------------

Tomcat webapps path: /var/lib/tomcat7/webapps

Tomcat home: /usr/share/tomcat7

Policy and Charging Control (PCC) Architecture

[Figure: PCC architecture]

The core flow of PCC runs over the Gx protocol (a Diameter application).

The PCRF tells the PCEF which PCC rules to install, either in the Gx CCA answering the PCEF's CCR, or by pushing them in a Gx RAR.