The Chef Platform

The Chef Platform in one picture: Because a picture is worth a thousand words!

(Everything in blue is open source)



Should FinTech Companies Adopt DevOps?

Let’s start by defining DevOps…

DevOps is a concept. Saying “We have adopted DevOps” is about as specific as saying “We build software” — except that the term “software” was coined long ago, while the term “DevOps” is new.

I would define DevOps as:

DevOps is what you do with your software so that you can grow your business: it overcomes the limitations of your systems and infrastructure, and it provides insight into the code that helps you develop better.

Concepts are a good place to begin. However, you are probably more interested in how a concept can be useful in reality!

What is DevOps all about, in practice?

“DevOps accelerates your delivery process.” I guess this is the most popular and most widely accepted description of what DevOps is all about.

It’s really cool to have an accelerated process. But again you might ask, “How will it benefit my business?”. So let’s break the statement above down into five fundamental things that DevOps can offer:

  • Accelerated delivery, with quality assurance
  • Accelerated delivery, with risk mitigation
  • Accelerated delivery, with assured security
  • Accelerated delivery, that can result in business growth
  • Accelerated delivery, that guarantees your product does not blow up!

DevOps can give you risk-mitigated, accelerated delivery with quality and security; it helps ensure your product will not blow up with those hotfixes, and it helps you expand your business, thereby increasing your revenue!

DevOps for FinTech companies

I believe that the key goals of a FinTech company are:

  • Stability
  • Security
  • Compliance

A FinTech company might not be interested in accelerated delivery for its own sake; it would value stability and compliance over a faster time to market (TTM).

DevOps assures you of a stable code release. DevOps can help you catch defects very early in your release cycle, which can help save you money and clients.

DevOps assures you of quality and compliance. With continuous integration and continuous feedback, the quality of your product is maintained. It assures you that you are compliant and follow the required regulations. Fail fast is one of the key benefits of adopting DevOps.

DevOps is also a lot about security. ITIL and security are key considerations while implementing DevOps.

Yes. FinTech companies should also adopt DevOps.

A DevOps model for one product may not suit another. It is very important to understand that the business objective of each company is different. A web-based, media company might find business benefit by releasing 10 builds daily. However, a FinTech company might want to stick to 1 release per quarter. The advantage a FinTech company has by adopting DevOps is that it can help reduce bugs in production, ensure compliance and mitigate risks. 

For a FinTech company to effectively adopt DevOps, they should be able to align their business vision with the DevOps vision. 

Want to know more?

To learn how best to adopt a DevOps practice at a FinTech company, you can write to me at

Chef Push Jobs

What are Push Jobs?

Push Jobs work almost like knife-ssh. Almost, because with knife-ssh the changes are pushed from your workstation over the SSH protocol, whereas with Push Jobs the changes are pushed to the node by the Chef server.

Chef is based on a “pull” model, and for a reason: to keep the server “thin”. But changing requirements demand a push model as well, so Chef introduced Push Jobs while still keeping the server thin!

“Chef push jobs is an extension of the Chef server that allows jobs to be run against nodes independently of a chef-client run” – that’s how push jobs are defined. A job, in this context, is a set of commands that need to be run on the target node.

Difference between Push Jobs and knife-ssh

  • Push Jobs use a message bus (ZeroMQ); knife-ssh uses parallel SSH.
  • Push Jobs claim to attack the scalability issue; the SSH protocol is slow and CPU-hungry at scale.
  • With Push Jobs, deployment status is relayed back; with knife-ssh, feedback on deployment status is not as easy.
  • Push Jobs are newly introduced; knife-ssh has been in the market for long.
  • Push Jobs are complex at the moment, with just the basic foundation ready; knife-ssh is easy to use.

Configuring Chef Push Jobs Server

You need either Enterprise Chef or Chef Server 12. Push Jobs relies on the ACL system that was open sourced with Chef Server 12, and the install command was also introduced with Chef Server 12.

Push Jobs does not work with Open Source Chef Server 11. 

It can be set up standalone or in an HA configuration.

Run the following commands on Chef Server:

chef-server-ctl install opscode-push-jobs-server
opscode-push-jobs-server-ctl reconfigure
chef-server-ctl reconfigure

Setup Workstation

  • Install the knife push plugin:
    gem install knife-jobs
  • Download the push-jobs cookbook
    The push-jobs cookbook will be used, so download it from the site or git clone the cookbook. You will have to fetch its dependency cookbooks as well.
    knife cookbook site download push-jobs
  • Extract and save the cookbook to your cookbook path
  • Edit the attributes file (push-jobs/attributes/default.rb)
    Update the attributes to add the push jobs package URL and checksum, as shown below.
    default['push_jobs']['package_url'] = ''

    default['push_jobs']['package_checksum'] = 'd659c06c72397ed2bc6cd88488349857f1958538'

  • Upload the push-jobs cookbook to your Chef server
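Chef verifies the downloaded package against the package_checksum attribute, so it is worth computing the value yourself rather than copying it blindly. A sketch of how to derive it, using a stand-in file (the file name and contents here are illustrative, not the real package):

```shell
# stand-in for the downloaded push-jobs package (contents are made up)
printf 'example package contents' > /tmp/push-jobs-package.deb

# compute its SHA-1; this is the value that goes into package_checksum
checksum=$(sha1sum /tmp/push-jobs-package.deb | awk '{print $1}')
echo "$checksum"
```

Run this against the actual package you downloaded and paste the resulting 40-character hex string into the attributes file.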

Create Groups

Create the pushy_job_writers and pushy_job_readers groups in your organization on the Chef server, and add your workstation user to them.

Setup Node

Simply run chef-client with the push-jobs recipe:

sudo chef-client -r "recipe[push-jobs]"

Run the knife node status commands to check the node status. At this stage it will just show the status “available”, which confirms that the node is ready for push events.

knife node status
knife node status <node-name>

Run Push Jobs

Run chef-client as:

knife job start 'chef-client -r recipe[git]' <node-name>

Run your commands/script as:

knife job start '' <my_node>

Install Monit on CentOS/Redhat

Monit is a free, open source tool to monitor and manage Linux systems and services.

To install Monit on Red Hat or CentOS, you need to enable EPEL (Extra Packages for Enterprise Linux). Log in as root to enable EPEL and install Monit.

$ ls /etc/yum.repos.d/

You will see some repo and conf files at this path, e.g. redhat-rhui-client-config.repo, rhel-source.repo, rhui-load-balancers.conf, etc.

$ vi /etc/yum.repos.d/epel.repo

Add the following lines:

[epel]
name=Extra Packages for Enterprise Linux 5 - $basearch
enabled=1

Save and close file

$ yum clean all

Loaded plugins: amazon-id, rhui-lb, security

Cleaning repos: epel rhui-REGION-client-config-server-6 rhui-REGION-rhel-server-releases

              : rhui-REGION-rhel-server-rh-common

Cleaning up Everything

$ yum install monit

Loaded plugins: amazon-id, rhui-lb, security
Setting up Install Process
epel                                          | 3.7 kB     00:00     
epel/primary_db                               | 3.3 MB     00:01 
  monit.x86_64 0:4.10.1-9.el5                                                                                         
Dependency Installed:
  openssl098e.x86_64 0:0.9.8e-18.el6_5.2                                                                              
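Once installed, services to watch are declared in Monit's configuration file (/etc/monit.conf, or files under /etc/monit.d/ on some builds). A minimal sketch for watching the SSH daemon — the polling interval, paths and service name below are illustrative assumptions, not from the package defaults:

```
set daemon 60

check process sshd with pidfile /var/run/
  start program = "/etc/init.d/sshd start"
  stop program = "/etc/init.d/sshd stop"
  if failed port 22 protocol ssh then restart
```

After editing, `monit -t` checks the syntax and `service monit restart` applies the configuration.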

Adding SLF4J Logs to Akka

You might want to consider adding the SLF4J plugin to your default Akka logging. This can help you standardize your logs and makes analyzing them easier.

Akka's documentation describes how you can plug SLF4J into your Akka application:

I implemented the suggested approach and am listing the simplified steps here for quick reference.

Add dependency

Add the akka-slf4j plugin and a logback-classic dependency to your build.

E.g., add the following to your pom.xml:


If you are using SBT, then your build.sbt would look like:

libraryDependencies += "com.typesafe.akka" %% "akka-slf4j" % "2.3.9"

libraryDependencies += "ch.qos.logback" % "logback-classic" % "1.1.2"

Enable SLF4JLogger for Akka

Update your application.conf (src/main/resources) with the following:

akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "INFO"
}

Add logback.xml

Add logback.xml to your classpath; that is, create logback.xml at src/main/resources. The following is an example of logback.xml. You can find more information about configuring your logs with the various patterns at:

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
    <encoder>
      <pattern>%d %X{akkaTimestamp} %-4r %-5level [%thread] %logger{0} %class{0} - %msg%n</pattern>
    </encoder>
  </appender>
  <appender name="FILE" class="ch.qos.logback.core.FileAppender">
    <file>application.log</file>
    <encoder>
      <pattern>%d %X{akkaTimestamp} %-4r %-5level [%thread] %logger{0} %class{0} - %msg%n</pattern>
    </encoder>
  </appender>
  <logger name="akka" level="DEBUG"/>
  <root level="INFO">
    <appender-ref ref="CONSOLE"/>
    <appender-ref ref="FILE"/>
  </root>
</configuration>

Add Logs

Now you can add logging to your code. Add the following to your Scala code:

import akka.event.Logging

val log = Logging(context.system, this.getClass.getName)"Info message")
log.warning("Warning message")
log.debug("Debug message")
log.error("Error message")

Unified Logging Solution using ELK Stack


There is a need for a unified logging solution: applications typically run across multiple nodes to which developers or management may not have SSH access to view the logs. It is also impractical to SSH into each node to check which one holds the logs for a particular claim we are interested in.

There is also a need to maintain and check build logs, and integration and performance test logs. The post below explains how the ELK stack can be set up and configured for log monitoring.

What is ELK?

Elasticsearch, Logstash, Kibana

Elasticsearch lets you search any kind, and any volume, of data flowing into your system in near real time. A distributed, highly available Elasticsearch cluster can be set up to enable horizontal scalability. Data can be stored under multiple indices, which makes querying easier. The full-text search capability is powered by Lucene.

Logstash is a tool to manage events and logs. It collects logs, parses them and stores them.

Kibana is a front end where you can see and interact with your data.

Logstash Forwarder (Lumberjack)

This is a tool that collects logs locally on a node and forwards them to Logstash.

Setting up the ELK Stack (on Ubuntu)

The setup covers the installation of four components:

  • Elasticsearch
  • Kibana
  • Logstash
  • Logstash Forwarder

The setup can be done on an AWS machine, instance type t2.medium (vCPU=2, memory=4GB).

Install Java

Elasticsearch uses Apache Lucene, which is written in Java, hence we need Java installed on the machine.

sudo add-apt-repository -y ppa:webupd8team/java

sudo apt-get update

sudo apt-get -y install oracle-java7-installer

Install Elasticsearch

wget -O - | sudo apt-key add -

echo 'deb stable main' | sudo tee /etc/apt/sources.list.d/elasticsearch.list

sudo apt-get update

sudo apt-get -y install elasticsearch=1.1.1

Update the config file:

sudo vi /etc/elasticsearch/elasticsearch.yml

Two edits are needed here: disable dynamic scripting, and bind Elasticsearch to localhost so outsiders cannot reach it.

script.disable_dynamic: true localhost

Start elasticsearch:

sudo service elasticsearch restart

Install Kibana

I have set up Logstash 1.4.2, which recommends Kibana 3.0.1.

cd ~; wget

tar xvf kibana-3.0.1.tar.gz

Edit the configuration file:

sudo vi ~/kibana-3.0.1/config.js

In the configuration, change the port from the default 9200 to 80:

elasticsearch: "http://"+window.location.hostname+":80"

Install Nginx to serve kibana

sudo mkdir -p /var/www/kibana3

sudo cp -R ~/kibana-3.0.1/* /var/www/kibana3/

sudo apt-get install nginx

User <== Port 80 ==> Kibana/Nginx <== Port 9200 ==> Elasticsearch

For this port routing, some configuration changes need to be done. Kibana’s sample Nginx configuration can be used.

cd ~; wget

Edit configuration:

vi nginx.conf

Make changes as below:

server_name FQDN;

root /var/www/kibana3;

sudo mv nginx.conf /etc/nginx/sites-available/default
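For reference, the shape of the resulting configuration is a single server block that serves the Kibana files and proxies Elasticsearch API paths through to port 9200. The sketch below is abbreviated from the idea of Kibana's sample config, not a verbatim copy; the location regex is illustrative:

```nginx
server {
  listen 80;
  server_name FQDN;
  root /var/www/kibana3;

  # forward Elasticsearch API calls from port 80 to the local Elasticsearch
  location ~ ^/(_nodes|.+/_search|.+/_mapping)$ {
    proxy_pass http://;
    proxy_read_timeout 90;
  }
}
```

This is what makes the User <== Port 80 ==> Nginx <== Port 9200 ==> Elasticsearch routing shown above work.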

Secure using htpasswd

sudo apt-get install apache2-utils

sudo htpasswd -c /etc/nginx/conf.d/ user

Sanity Test

Start nginx:

sudo service nginx restart

Check if you can access kibana using the link: http://logstash_server_public_ip/

Install Logstash

echo 'deb stable main' | sudo tee /etc/apt/sources.list.d/logstash.list

sudo apt-get update

sudo apt-get install logstash=1.4.2-1-2c0f5a1

Generate SSL certificates

sudo mkdir -p /etc/pki/tls/certs
sudo mkdir /etc/pki/tls/private

cd /etc/pki/tls;

cat > logstash.cnf << BLOCK1
[req]
distinguished_name = req_distinguished_name
x509_extensions = v3_req
prompt = no

[req_distinguished_name]
C = TG
ST = Togo
L = Lome
O = Private company
CN = *

[v3_req]
subjectKeyIdentifier = hash
authorityKeyIdentifier = keyid,issuer
basicConstraints = CA:TRUE
subjectAltName = @alt_names

[alt_names]
DNS.1 = *
DNS.2 = *.*
DNS.3 = *.*.*
DNS.4 = *.*.*.*
DNS.5 = *.*.*.*.*
DNS.6 = *.*.*.*.*.*
DNS.7 = *.*.*.*.*.*.*
IP.1 = <IP-address-of-logstash-server>
BLOCK1

Put the IP address of the logstash server machine.

Generate keys:

 sudo openssl req -x509 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt -config logstash.cnf -days 1825
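Before wiring the certificate into Logstash, you can confirm that it decodes correctly. The sketch below generates a throwaway pair with the same flags (using an inline -subj instead of the config file, and temporary paths — both are assumptions for illustration) and prints the subject back:

```shell
# throwaway self-signed pair, same flags as above but with an inline subject
openssl req -x509 -batch -nodes -newkey rsa:2048 \
  -keyout /tmp/lsf-test.key -out /tmp/lsf-test.crt \
  -subj "/C=TG/ST=Togo/L=Lome/O=Private company/CN=test" -days 1825 2>/dev/null

# the certificate should parse and echo back the subject we set
openssl x509 -in /tmp/lsf-test.crt -noout -subject
```

Run the same `openssl x509 -noout -subject` check against certs/logstash-forwarder.crt to confirm your real certificate was generated as expected.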

Configure Logstash

There is a lot that can be done to configure Logstash. A basic configuration is shown below:

cat /etc/logstash/conf.d/logstash-default.conf

input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
filter {
  grok {
    type => "myapplog"
    pattern => "%{GREEDYDATA:logline}"
  }
}
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
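The grok pattern used here, %{GREEDYDATA:logline}, simply captures the entire line into a field named logline; under the hood, GREEDYDATA is essentially the regex .*. A rough shell illustration of that capture, on a made-up log line:

```shell
event="2015-04-01 12:00:00 INFO starting app"

# GREEDYDATA behaves like a greedy .* capture over the full line
logline=$(printf '%s' "$event" | sed -E 's/^(.*)$/\1/')
echo "logline: $logline"
```

More specific grok patterns (timestamps, log levels, and so on) can split a line into several fields instead of one, which makes the data far more useful in Kibana.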

Start Logstash

sudo /opt/logstash/bin/logstash --config /etc/logstash/conf.d/logstash-default.conf &

You will know the Logstash server started successfully when you see something like the following:

Using milestone 1 input plugin 'lumberjack'. This plugin should work, but would benefit from use by folks like you. Please let us know if you find bugs or have suggestions on how to improve this plugin. For more information on plugin milestones, see {:level=>:warn}

Setup Logstash Forwarder

The Logstash forwarder, formerly known as Lumberjack, is set up on the machine whose logs we want to capture.

Copy the SSL certificate from the Logstash server to the node on which your application will run. (Assume you have copied it to /tmp/logstash-forwarder.crt.)

sudo mkdir -p /etc/pki/tls/certs
sudo cp /tmp/logstash-forwarder.crt /etc/pki/tls/certs/

Install Logstash-forwarder

echo 'deb stable main' | sudo tee /etc/apt/sources.list.d/logstashforwarder.list
sudo apt-get update
sudo apt-get --yes --force-yes install logstash-forwarder
cd /etc/init.d/
sudo wget -O logstash-forwarder
sudo chmod +x logstash-forwarder
sudo update-rc.d logstash-forwarder defaults

Configure Logstash forwarder

There is a lot that can be configured and forwarded. A basic example configuration is shown below:

cat > /etc/logstash-forwarder.conf << BLOCK
{
  "network": {
    "servers": [ "$LOGSTASH_IP" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [ "$PWD" ],
      "fields": { "type": "myapplog" }
    }
  ]
}
BLOCK

where LOGSTASH_IP=logstash-server:port (in this case, port=5000) and PWD is the path of the log file.

Start logstash forwarder

sudo ./logstash-forwarder -config  /etc/logstash-forwarder.conf &


The ELK stack, along with the Logstash forwarder, is now set up. When your application runs and writes to its log file, the entries are sent by the Logstash forwarder to the Logstash server. The Logstash server parses, indexes and saves them. When you open Kibana to view the logs, you can filter the data as needed.

Mount AWS S3 Bucket on Ubuntu using S3FS: Docker Image

Facing issues mounting your S3 bucket on Ubuntu? Try this Docker image!

I created an image which has S3FS and FUSE installed; it includes Oracle Java 8 too. You need to supply your S3 bucket name and credentials when running the Docker image, as shown below. You can then configure it further as per your needs!

Looking forward to your feedback!

Run the command below to use this image:

docker pull ihealthtechnologies/s3-mount

docker run -it --rm --privileged -e AWS_ACCESS_KEY=?? -e AWS_SECRET_ACCESS_KEY=?? -e AWS_S3_BUCKET=yourBucket ihealthtechnologies/s3-mount /bin/bash

CI/CD Pipeline with Github-SBT-TravisCI-Codacy-Nexus/Artifactory

This is a “HelloWorld CI Pipeline”!

This blog describes the steps taken to establish a CI/CD pipeline using SBT, Travis CI, Codacy and Nexus or Artifactory.

Github Private Repository

I created a HelloWorld Scala project in a GitHub private repository. (You can create a public repo.)

It is recommended that you set up your GitHub account with two-factor authentication.


For those who are new to Scala and SBT, here is some background. You need to create a build.sbt. Similar to Ant, Maven or any other build tool, sbt has “tasks” it can perform, e.g. clean, compile, test, publish.

Publish is what we want to do. Publish where? If we want to publish the build to the Nexus repository, this is how build.sbt will look:

name := "hello-scala"
version := "1.4"
organization := "fakepath.test"
scalaVersion := "2.11.1"
libraryDependencies += "org.scalatest" %% "scalatest" % "2.1.6" % "test"
publishTo := {
val nexus = "https://" + System.getenv("NEXUS_IP") + ":8081/nexus/"
if (isSnapshot.value)
Some("snapshots" at nexus + "content/repositories/snapshots")
Some("releases"  at nexus + "content/repositories/releases")

System.getenv("NEXUS_USER"), System.getenv("NEXUS_PASSWORD"))

Note the publishTo section: the Nexus IP, user and password are not pushed to the repo; they are taken from environment variables.
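To test the publish locally, the same variables can be exported in the shell before running sbt. The values below are placeholders, not real credentials:

```shell
# placeholder values; in Travis CI these come in as encrypted settings
export NEXUS_IP=""
export NEXUS_USER="deployer"
export NEXUS_PASSWORD="changeme"

echo "publishing to https://$NEXUS_IP:8081/nexus/ as $NEXUS_USER"
```

With these set, `sbt publish` resolves the same System.getenv calls that build.sbt uses.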

Travis CI

I have used the hosted Travis CI. For private repositories Travis CI has a SaaS offering at –

Travis docs state that they ensure builds and artifacts from a private repo are secure and the space is not shared with any other application. Their security statement:

Sign up with Travis CI using your GitHub account. To start, a webhook needs to be activated for your private repo manually, as a one-time task. Then a .travis.yml file needs to be created. There is a lot you can do by scripting the .travis.yml file properly.

The .travis.yml would be added to your repo. Our file looks as below:

language: scala
jdk: oraclejdk7
sudo: false
scala:
  - 2.11.1
before_install: umask 0022
script:
  - sbt clean compile test
  - sbt publish
env:
  global:
    - secure:
    - secure:
    - secure:

The environment variables are added to this file in encrypted form. The environment variables used in this case are NEXUS_IP, NEXUS_USER and NEXUS_PASSWORD. Notifications can also be set; in this case, Slack notifications to a single person or to a channel.

You can encrypt this info as:

travis encrypt NEXUS_USER=someuser --add

Note – we run “sbt publish” from Travis CI. This will generate an artifact; for a Scala project, the artifact is a jar file. The artifact could also be a docker image. When “sbt publish” is run by Travis CI, it uses build.sbt’s publishTo definition to publish the artifact to the Nexus repository.

Note – we can (and should) have a task like “sbt promote”, or we can write scripts in Travis CI itself that “promote” a build. We want to publish every build to Nexus, but we will not want to promote every build. The promoted build is deployed to production. This is typically a manual step, known as “one-click” deployment, though it can be completely automated too, depending on the project. In fact, that is the difference between “Continuous Deployment” and “Continuous Delivery”.

Note – we can read the latest published version number and then increment it for the next build. This can be done programmatically by scripts.
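That increment logic can be sketched in a few lines of shell, assuming a simple major.minor versioning scheme (the starting version here is made up):

```shell
# last published version, e.g. read out of maven-metadata.xml
last="1.4"

major="${last%%.*}"            # text before the first dot  -> 1
minor="${last#*.}"             # text after the first dot   -> 4
next="${major}.$((minor + 1))" # bump the minor component
echo "$next"                   # prints 1.5
```

A real script would also have to handle patch components and rollovers, but the shape is the same.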


Codacy

Codacy is a code review and analysis tool. You can sign up using your GitHub account and enable the webhooks as documented by them.

Codacy also supports private repositories. You can play with the UI to see and set different metrics. However, to automate things you will need some scripts. E.g., if we want to fail the build when code coverage drops below 95%, that setting needs to be made in Codacy. To enable Codacy for a private repo, again there are some webhooks. We can have Codacy analyse the code per commit, per PR, per branch, etc.


Nexus

I have used Nexus as the “artifact manager”, so Nexus is the tool that stores all the builds. Each build is numbered, and there is also a build named “latest”, which acts as a pointer to the latest promoted build. When a decision is made to promote a build, Travis CI publishes the build to Nexus and also runs a script that updates the “latest” pointer to point to this build.

Alternatives to Nexus

There are a couple of repository management solutions available; some offer hosted services.

  1. Artifactory
  2. Bintray

Configuration Management / Deployment

I have written a simple bash script which downloads the artifact from the Nexus or Artifactory repo and simply executes it. We can run this script as a cron job.

Here is the script for fetching newly published artifacts from Artifactory repo:


#!/bin/bash

# Artifactory location

# Maven artifact location
version=`curl -s $path/maven-metadata.xml | grep latest | sed "s/.*<latest>\([^<]*\)<\/latest>.*/\1/"`

# check if the jar file already exists
echo $jar
if [ ! -f $jar ]; then
  echo "Downloading new artifact"
  wget -q -N $url
  echo "Executing Jar: " `date` >> hello-scala.log
  scala $jar >> hello-scala.log
fi
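The version-extraction line of that script can be exercised locally against a stub maven-metadata.xml (the version number below is made up):

```shell
# stub of the metadata file that Artifactory serves for an artifact
cat > /tmp/maven-metadata.xml << 'EOF'
<metadata>
  <versioning>
    <latest>1.4</latest>
    <release>1.4</release>
  </versioning>
</metadata>
EOF

# same grep/sed pipeline as the deployment script, pointed at the stub
version=$(grep latest /tmp/maven-metadata.xml | sed "s/.*<latest>\([^<]*\)<\/latest>.*/\1/")
echo "$version"   # prints 1.4
```

If the pipeline prints the expected version here, the script's curl variant will behave the same way against the real repository URL.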