ELK and Grafana

I tried this on CentOS. If you use a different distribution, adapt the commands accordingly.

Step 1 Download all needed RPMs

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.3.0.rpm 
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.3.0-x86_64.rpm
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.3.0.rpm
wget https://s3-us-west-2.amazonaws.com/grafana-releases/release/grafana-4.2.0-1.x86_64.rpm
wget https://artifacts.elastic.co/downloads/beats/metricbeat/metricbeat-5.3.0-x86_64.rpm
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.3.0-x86_64.rpm

Step 2 Install Elasticsearch on the monitoring node

#Install Elasticsearch
sha1sum elasticsearch-5.3.0.rpm
sudo rpm --install elasticsearch-5.3.0.rpm
#Configure Elasticsearch to listen on all interfaces (restrict this in production)
sed -i 's/^#network.host.*/network.host: 0.0.0.0/' /etc/elasticsearch/elasticsearch.yml
#Adjust size of bulk operations queue so that it will not fill up and block the flow of data into ES.
echo 'thread_pool.bulk.queue_size: 1000' >> /etc/elasticsearch/elasticsearch.yml
#Start Elasticsearch and set it to run on boot
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
systemctl start elasticsearch

Step 3 Install Kibana on the monitoring node

#Install Kibana
sha1sum kibana-5.3.0-x86_64.rpm
sudo rpm --install kibana-5.3.0-x86_64.rpm
#Configure Kibana
#Get IP of elk server
IP=$(hostname -I | awk '{print $1}')
#Get hostname of elk server
#Setting server.host to the servers public IP means remote machines can connect and view Kibana
sed -i "s/^#server.host.*/server.host: $IP/" /etc/kibana/kibana.yml
sed -i "s/^#server.name.*/server.name: $HOSTNAME/" /etc/kibana/kibana.yml
chown kibana:kibana -R /usr/share/kibana/optimize/
#Start Kibana and enable on startup
sudo systemctl enable kibana.service
sudo systemctl start kibana

Step 4 Install Logstash on the monitoring node

#Install Logstash
sha1sum logstash-5.3.0.rpm
sudo rpm --install logstash-5.3.0.rpm
# Copy logstash.conf into the directory Logstash scans for
# configuration - this file defines how logs will be parsed
cp logstash.conf /etc/logstash/conf.d
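The command above assumes you already have a logstash.conf. If you don't, here is a minimal sketch; my assumptions are that Filebeat ships to port 5044 and Elasticsearch runs locally, and the index name is chosen to match the logstash-mylog-* datasource created in the Grafana step below.

```conf
# Minimal logstash.conf sketch (assumed values - adjust to your setup)
input {
  beats {
    port => 5044        # port Filebeat sends to
  }
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]
    index => "logstash-mylog-%{+YYYY.MM.dd}"  # matches the Grafana datasource pattern
  }
}
```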

#Change log level to error to save on space
sed -i "s/^# log.level: info.*/log.level: error/" /etc/logstash/logstash.yml
#Start Logstash and enable on startup
sudo systemctl start logstash
sudo systemctl enable logstash

#Install the logstash-input-beats plugin so Logstash can
# receive events shipped by Filebeat
/usr/share/logstash/bin/logstash-plugin install logstash-input-beats

#Reduce Elasticsearch logging to 'warn' rather than the default 'info' to save space
curl -XPUT localhost:9200/_cluster/settings -d '{"transient":{"logger._root":"WARN"}}'

Step 5 Install Filebeat on the application node

#install Filebeat version 5.3.0
sudo rpm -vi filebeat-5.3.0-x86_64.rpm
#Configure Filebeat input and output
#Overwrite the default filebeat.yml with your own template
sudo cp /home/$USER/filebeat.yml /etc/filebeat/filebeat.yml

If you don't have a predefined filebeat.yml, configure the default one instead by modifying its paths section. Set the paths for the logs to be harvested:

- type: log
  enabled: true
  paths:
    - /var/log/*.log

In a real deployment, Filebeat and the monitoring stack run on different nodes, so Filebeat needs to know where ELK is located. This is done by changing filebeat.yml (substitute $ELK_IP accordingly):

sudo sed -i "/localhost:5044/ c\  hosts: ['$ELK_IP:5044']" /etc/filebeat/filebeat.yml
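After that sed, the output section of filebeat.yml should look roughly like this. This is a sketch under my assumptions: events go to Logstash on port 5044, and the direct Elasticsearch output is commented out.

```yaml
# Relevant part of /etc/filebeat/filebeat.yml after the change
#output.elasticsearch:
#  hosts: ["localhost:9200"]
output.logstash:
  hosts: ["ELK_IP:5044"]   # replace ELK_IP with your monitoring node's address
```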

Elasticsearch needs to know what the record format looks like:

#Upload template.json file to ELK server
curl -XPUT "http://$ELK_IP:9200/_template/filebeat" -d@/etc/filebeat/filebeat.template.json

Start Filebeat

#Start filebeat
sudo systemctl start filebeat
sudo systemctl enable filebeat

Step 6 Install Grafana on the monitoring node

#Download & install Grafana
sudo yum install grafana-4.2.0-1.x86_64.rpm 
#Start and enable Grafana on boot
sudo systemctl daemon-reload
sudo systemctl enable grafana-server.service
sudo systemctl start grafana-server

sleep 15
#Create datasource for metrics
curl -H "Content-Type: application/json" -X POST -d \
  '{"name":"Elasticsearch-System-Metrics", "type":"elasticsearch", "url":"http://localhost:9200", "access":"proxy", "basicAuth":false, "database":"metricbeat-*", "jsonData":{"esVersion":5}}' \
  http://admin:admin@localhost:3000/api/datasources

#Create datasource for unified logs
curl -H "Content-Type: application/json" -X POST -d \
  '{"name":"Elasticsearch-UL", "type":"elasticsearch", "url":"http://localhost:9200", "access":"proxy", "basicAuth":false, "database":"logstash-mylog-*", "jsonData":{"esVersion":5}}' \
  http://admin:admin@localhost:3000/api/datasources

Step 7 Install Metricbeat on each node to be monitored

#Install Metricbeat
sudo rpm -vi metricbeat-5.3.0-x86_64.rpm
#Configure Elasticsearch output ip
sudo sed -i "/localhost:9200/ c\  hosts: ['$ELK_IP:9200']" /etc/metricbeat/metricbeat.yml
#Import the Metricbeat index template and sample dashboards into Elasticsearch/Kibana
cd /usr/share/metricbeat/scripts
./import_dashboards -es http://$ELK_IP:9200

#Start and enable on boot
sudo systemctl start metricbeat
sudo systemctl enable metricbeat

Basic Python Language II

In Python, reading a file is straightforward.


If your file is named game.txt, then you open it with open('game.txt').
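A minimal, self-contained sketch of reading the whole file at once (the sample file and its contents here are made up for illustration):

```python
# Create a small sample file so the example is self-contained.
with open('game.txt', 'w') as f:
    f.write('Athens (1896)\nParis (1900)\n')

# Reading is straightforward: read() returns the whole file as one string.
content = open('game.txt').read()
print(content)
```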


If you want to iterate over each line:

for game in open('games.txt'):
    print(game)

The output looks like this (depending on the content of your text file):

Athens (1896)

Paris (1900)

St Louis (1904)

London (1908)

Stockholm (1912)

Antwerp (1920)

Paris (1924)

Amsterdam (1928)

Los Angeles (1932)

Berlin (1936)

London (1948)

Helsinki (1952)

Melbourne / Stockholm (1956)

Rome (1960)

Tokyo (1964)

Mexico (1968)

Munich (1972)

Montreal (1976)

Moscow (1980)

Los Angeles (1984)

Seoul (1988)

Barcelona (1992)

Atlanta (1996)

Sydney (2000)

Athens (2004)

Beijing (2008)

London (2012)

Rio (2016)

The open function accepts an additional parameter when opening a file, e.g. open('games.txt', 'r'). The 'r' indicates we open the file for reading (it is also the default mode).

If I want to manipulate the records in the file and assign their parts to variables, I can use the syntax below:

for game in open('games.txt', 'r'):
    # rsplit from the right keeps multi-word cities (e.g. "Los Angeles") intact
    city, year = game.strip().rsplit(' ', 1)
    year = year.strip('()')
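The split can be checked on a single sample line, without reading a file:

```python
# Parse a "City (Year)" record; rsplit from the right so a
# multi-word city name stays in one piece.
line = 'Los Angeles (1984)'
city, year = line.rsplit(' ', 1)
year = year.strip('()')   # drop the parentheses around the year
print(city, year)  # → Los Angeles 1984
```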

Basic Python Language

The first script prints Hello World, as is tradition. This can be achieved as simply as follows. I presume you have a Python environment set up on your laptop/desktop; I used Python 3. Start a Python console and type the snippet below:

print("Hello World!")

I used Jupyter Notebook as my IDE. With Jupyter, in most cases, most packages (libraries) are already available.

Data Lake and Datawarehouse

Seven years ago I wrote some articles about data warehousing (see https://jokondo.wordpress.com/category/data-warehouse/). At that time, only a few organizations were talking about data warehouses and data mining.

Nowadays, in the era of big data, where super-huge volumes of data are generated, people have started to think differently about how data is stored and used for analytical purposes.

In a traditional data warehouse, the data is loaded into an RDBMS after its use has been defined. For example, an organization might use the data warehouse to keep the total goods sold for every city, region, state and country, also capturing the goods type.
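To make the "total goods sold per city" idea concrete, here is a toy Python sketch; the cities, goods types and quantities are made up for illustration:

```python
from collections import defaultdict

# Hypothetical sales records: (city, goods_type, quantity)
sales = [
    ('Jakarta', 'book', 10),
    ('Jakarta', 'pen', 5),
    ('Bandung', 'book', 7),
]

# Aggregate total goods sold per city, as a warehouse fact table would
totals = defaultdict(int)
for city, goods_type, qty in sales:
    totals[city] += qty

print(dict(totals))  # → {'Jakarta': 15, 'Bandung': 7}
```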

Both Data Lake and Data Warehouse have different objectives to be achieved in an enterprise. Some of the key differences are shown here:


Data Lake:
- Captures all types of data (structured, semi-structured and unstructured) in their most natural form from source systems
- Possesses enough processing power to process and analyze all kinds of data and have it analyzed for access
- Usually contains more relevant information that has a good probability of access and can serve the operational needs of an enterprise

Data Warehouse:
- Captures structured information and processes it as it is acquired into a fixed model defined for data warehouse purposes
- Processes structured data into a dimensional or reporting model for advanced reporting and analytics
- Usually stores and retains data for the long term, so that the data can be accessed on demand

Figure: layers in a data lake



Rich Text Editor in Java, JSP

Today a friend asked me to implement a Rich Text Editor in Java. Their current application is a standard Java web application.

Talking to my friend, who usually works with Ruby, he recommended CKEditor. From the forums and the feedback on this JavaScript library, I can say it is already mature.

They also provide a taglib for JSP (whether this makes things more complicated or easier is up to the developer who uses it).

Since the friend who asked for help had already implemented things the Java way, we stick with that for ease of maintenance.

For anyone who still has doubts about what a Rich Text Editor (RTE) looks like, it is like this:


The common use case for an RTE is when we need to let users write fancy posts (like in Jira) instead of writing HTML code to draw a table or insert an image. Another use case is a web application that requires an admin user to create email templates in a fancy but easy way.

So, now you have a solid reason to add the feature to your web application. The steps to implement it are as follows.

I assume you use a Maven-based project, so you can add the CKEditor taglib library to the project:

<!-- WYSWYG lib CK Editor -->
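The dependency snippet itself is missing above; here is a sketch, assuming the com.ckeditor:ckeditor-java-core artifact (these coordinates are my assumption; verify them in Maven Central before use):

```xml
<!-- Assumed coordinates for the CKEditor for Java taglib -->
<dependency>
    <groupId>com.ckeditor</groupId>
    <artifactId>ckeditor-java-core</artifactId>
    <version>3.5.3</version>
</dependency>
```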

After the jar is added (either through Maven or manually), download CKEditor from http://ckeditor.com/download/releases (I used CKEditor 3.5.3)

and copy everything into your JavaScript folder (we will use this location in our JSP later). In my project it looks like this:


Once the JavaScript and all the CKEditor files are added to the project, we can start working on our JSP.

Add this snippet at the top of your JSP so you can start using the taglib:

<%@ taglib uri="http://ckeditor.com" prefix="ckeditor" %>


Within the form tag, use ckeditor:editor instead of a textarea, like below:

<td class="field-label" colspan="2">Void Email Template
<ckeditor:editor basePath="${contextPath}/js" editor="voidTemplate"         value="${settingForm.voidTemplate}" /></td>

basePath="${contextPath}/js" is the location where the CKEditor JavaScript files are located.

Deploy the application and you should be able to use CKEditor as your RTE.



ESB with Mule II

I assume you already know why you should use an ESB; if not, please read the previous post at esb-dengan-mule-i.

If you already know why and when we need one, let's dig into one ESB technology, starting from installation and then working through cases one by one.

First step: download Anypoint Studio (formerly called Mule Studio) from http://www.mulesoft.org/ and install it on your PC.

Oh, and before installing, make sure you have the Java JDK installed and JAVA_HOME set; Java version 7 at minimum.

This is what Anypoint Studio looks like. If you are used to Eclipse it will be much easier, but it is fine if you are not 🙂

The next step is to install the Community Edition plugin (by default Anypoint only ships with the Enterprise Edition runtime).


Then add the Mule Studio runtime plugin, or update it if it is already there.


Click the Next button; you can pick the CE version you need.



Once that succeeds, you can start creating projects with the Anypoint Community Edition, like this.


Good luck trying it out 🙂










ESB with Mule I

When do you need an ESB (Enterprise Service Bus)?

This question helps us understand what an ESB is.

1. When you need to integrate two or more applications/systems.

2. When you need to expose an API to be used by another system, typically one owned by another company.

3. When various legacy systems with all kinds of inputs and outputs need those inputs/outputs combined so the systems can 'talk' to one another.

The keywords in the needs above are INTEGRATION, API and MULTI-SYSTEM.


Integration can take its simplest form, file exchange, all the way up to web services (the most common form).

This integration could be done by developing an application once both sides agree on how to integrate and on the format (if it is a file). But that solution is not practical: every change, say adding a field, takes a lot of effort. Moreover, when more applications are integrated, with many inbound and outbound endpoints (ESB terms for endpoints and invokers/callers), developing your own application becomes very unmaintainable.

An ESB offers a solution for integrating many applications easily, whether those applications are already in production (legacy) or you want to expose an API so other systems can 'communicate' with yours.


Ubuntu Environment Preparation as a Web Server

Install JAVA


sudo apt-get purge openjdk*

sudo add-apt-repository ppa:webupd8team/java

sudo apt-get update

sudo apt-get install oracle-java7-installer

Install  Apache + Tomcat



sudo apt-get install apache2

sudo apt-get install tomcat7

sudo apt-get install libapache2-mod-jk

sudo vim /etc/tomcat7/server.xml

# uncomment the line below:

<Connector port="8009" protocol="AJP/1.3" redirectPort="8443" />

sudo vim /etc/apache2/workers.properties

# Define 1 real worker using ajp13
worker.list=worker1

# Set properties for worker1 (ajp13)
worker.worker1.type=ajp13
worker.worker1.host=localhost
worker.worker1.port=8009


sudo vim /etc/apache2/mods-available/jk.conf

change the JkWorkersFile property to /etc/apache2/workers.properties

sudo vim /etc/apache2/sites-enabled/000-default

JkMount /tomcat-demo* worker1

Install MySQL



sudo apt-get install mysql-server

vim /etc/mysql/my.cnf

bind-address            = 192.XX.X.X

sudo service mysql restart

Tomcat details:

Tomcat webapps directory: /var/lib/tomcat7/webapps


Develop a Portal using Liferay I - Develop a Portlet

Let's begin with how to build a portal with Liferay. Familiarize yourself with the terms below while developing a portal:

  1. Portlet
  2. Theme
  3. Hook
  4. Layout

The ability to develop these four components is necessary in order to build a portal.

I assume you are familiar with Java in order to develop portlets, while the other three components are good to have but not mandatory.


Install the Liferay Eclipse plugin (http://releases.liferay.com/tools/ide/latest/stable/)

Download the Liferay SDK and extract it.

Download the Liferay Tomcat bundle (6.2 CE edition) and extract it.

Configure Eclipse with the Liferay SDK and the Tomcat bundle; refer to the image below for this step.



Now let's rock by creating a project for a portlet.

Create a new project (we will use Ant instead of Maven since it's easier).



The generated code will look like this:


Start to deploy your first portlet




Start Server




Check that the Ojolali portlet is deployed.


In view mode, after being added to a page, it looks like below:





Now you can start building your own portlet. 🙂

The source code for this article can be downloaded from