Logging means nothing if you cannot collect and analyze it.
In the previous article, I explained how to add logging capability to the lorem service. From a logging perspective, however, that is not sufficient. Why not? There are two reasons. First, I still log to the OS standard output. Say I want to trace yesterday's logs: they will most likely be gone. Second, the logs are not centralized. Imagine you have hundreds of services; tracing which service caused an error would be hell.
Therefore, I will address those two issues in this article. The first step is to change the logging output from OS standard output to log files, which resolves the first issue. The next step is to centralize the logs. For this purpose, I am going to use Elasticsearch, Logstash, and Kibana, also known as the ELK stack.
The ELK stack is one of the most widely deployed platforms for collecting, indexing, and visualizing logs. The setup has four main components:
- Elasticsearch: stores all the logs.
- Logstash: processes incoming logs.
- Kibana: web interface for searching and visualizing logs. I will use nginx as a reverse proxy in front of it.
- Filebeat: installed on the client to ship logs to Logstash.
For the purposes of this article, I have already made a Vagrant setup for the ELK server. Just clone it from my GitHub repository. All you need to do is type
vagrant up. The virtual machine will then provision everything required to make this stack run. The provisioning itself uses Ansible, and the playbooks are located under the provision directory.
Setting Up the Virtual Machine
A couple of notes on this setup. I disabled the SSL configuration while setting up Logstash; this is not recommended in a production environment. If you want to enable it, re-enable the line
include: ssl.yml in the Logstash role's tasks.
Next, I created a Kibana administration user using openssl and put it under /etc/nginx/htpasswd.users. The default username is
admin with password
admin. You should never keep this default configuration. To change the username and password, edit
provision/roles/nginx/tasks/main.yml and find the line with
name: add kibana users.
Finally, it is time to configure the Logstash filter. Logstash ships with many filter plugins. Before picking the plugin that suits our needs, take a look at the log output from the previous article.
This type of log is a perfect match for the Logstash kv filter. Now write the configuration for this KV filter and give it a name.
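A minimal sketch of such a filter block could look like the following; the separator settings are assumptions based on the key=value (logfmt-style) output shown above:

```
filter {
  kv {
    # split the message into key=value pairs on whitespace
    field_split => " "
    # split each pair into key and value on the equals sign
    value_split => "="
  }
}
```

With this in place, each key in the log line becomes a searchable field in Elasticsearch.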
Note: all Logstash configuration files can be found under the Logstash role in the provisioning playbooks.
For more information about this
vagrant-elksetup, please take a look at the GitHub repository.
In this step, we only need to change the logging output from OS stdout to a file.
Note: for simplicity, I only use a plain log file. Whenever you need more capabilities, such as log rotation, you can use lumberjack.
As previously stated, I am going to use Filebeat to ship logs from the lorem service to Logstash. The only thing needed in this step is to set up the configuration in filebeat.yml.
Note: the prospector path is set to
/golorem.log. Meanwhile, in
main.go the path points to
./gokit-playground/log/lorem/golorem.log. This is on purpose, because I am going to use Docker and mount the volume onto this path.
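For reference, a minimal filebeat.yml along those lines might look like this; the Logstash host and port are assumptions, and SSL is omitted because it was disabled in the server setup:

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /golorem.log        # path inside the Filebeat container
      input_type: log
output:
  logstash:
    hosts: ["elk.local:5044"] # assumed address of the ELK VM's Beats input
```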
The last thing needed to make this run is having Filebeat installed on your machine. But because this is my laptop, I will not install it directly. Instead, I am going to use Docker with a Filebeat container to ship the logs.
I will use the image from fiunchinho/docker-filebeat and mount two volumes. The first volume holds
filebeat.yml and the other holds the log file itself. I put all of this into a
docker-compose-filebeat.yml so it is available whenever I execute the
docker-compose command.
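A sketch of that docker-compose-filebeat.yml could look like the following; the container-side configuration path follows the image's conventions and should be treated as an assumption:

```yaml
filebeat:
  image: fiunchinho/docker-filebeat
  volumes:
    # Filebeat configuration
    - ./filebeat.yml:/etc/filebeat/filebeat.yml
    # the log file written by the lorem service
    - ./gokit-playground/log/lorem/golorem.log:/golorem.log
```

The two mounts line up with the paths discussed above: the host-side log file appears inside the container as /golorem.log, which is exactly the prospector path in filebeat.yml.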
To run this demo, there are several steps:
- Run the virtual machine
- Run the lorem-logging server
- Run Filebeat using Docker
- Make an HTTP request
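The steps above translate roughly to the following commands; the service entry point, endpoint, and request payload are assumptions for illustration:

```shell
# 1. start the ELK virtual machine
vagrant up

# 2. run the lorem-logging server (entry point is an assumption)
go run main.go &

# 3. ship the logs with Filebeat in Docker
docker-compose -f docker-compose-filebeat.yml up -d

# 4. make an HTTP request to the lorem service
#    (endpoint and payload are assumptions)
curl -X POST -d '{"min": 10, "max": 10}' http://localhost:8080/lorem
```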
If there is no problem, we will get output such as:
With this centralized logging architecture, it becomes easier to collect, analyze, and visualize logs. Also, whenever something goes wrong, we can spot the problem faster because we can search the logs by keyword. That makes our lives easier, doesn't it?
This article only covered basic logging functionality. You can do far more, such as logging the username and session ID, or which features users touch across your services, so you can learn a lot about your customers.
Alright, that’s all from me for this article. Have a nice day!
PS: all the source code can be downloaded here.