Quick EFK on EKS

Assumption: You have a Kubernetes cluster running on EKS, and kubeconfig, kubectl, and AWS authentication are already set up.

What's EFK?

Elasticsearch: A search and analytics engine queried through a REST API.
Fluentd: A data collector, typically used to collect logs in a unified manner.
Kibana: A visualisation tool for Elasticsearch data.

Data flow:

Pods generate logs on the host. Each host runs a Fluentd agent that captures the logs and sends them to CloudWatch. CloudWatch then streams the logs to Elasticsearch.

We'll use AWS managed services to make this quick, easy and reliable.

Fluentd deployment

Fluentd is deployed as a DaemonSet, which ensures every node has a Fluentd agent up and running.

Permissions: To send out logs, our Kubernetes worker nodes (the hosts) need access to CloudWatch Logs.

We have a default IAM role, called worker-role, for all the worker nodes. We'll attach the following policy to this role:

    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Action": [
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                    "logs:PutLogEvents",
                    "logs:DescribeLogStreams"
                ],
                "Resource": "*",
                "Effect": "Allow"
            }
        ]
    }

Let's say this policy is saved as logs-policy.json. (The logs:* actions listed are the typical set a Fluentd agent needs to write to CloudWatch Logs; trim or extend them for your setup.)

The following command adds the policy to the role:

aws iam put-role-policy --role-name worker-role --policy-name Logs-Policy-For-EKS-Workers --policy-document file://path/to/logs-policy.json
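To confirm the policy actually landed on the role, you can read it back with get-role-policy (same role and policy names as above):

```shell
# Prints the inline policy document attached to the worker-node role
aws iam get-role-policy \
  --role-name worker-role \
  --policy-name Logs-Policy-For-EKS-Workers
```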

To deploy Fluentd:

Use the following as fluentd.yaml:
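A minimal DaemonSet sketch is below. The image tag, AWS region, and log group name are assumptions, so adapt them to your cluster; the fluent/fluentd-kubernetes-daemonset image ships with the CloudWatch output plugin built in:

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        # Image variant with the CloudWatch plugin; pin a tag that suits you
        image: fluent/fluentd-kubernetes-daemonset:v1.16-debian-cloudwatch-1
        env:
        - name: AWS_REGION
          value: us-east-1            # assumption: replace with your region
        - name: LOG_GROUP_NAME
          value: eks-container-logs   # hypothetical log group name
        volumeMounts:
        - name: varlog
          mountPath: /var/log
        - name: containers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log
      - name: containers
        hostPath:
          path: /var/lib/docker/containers
```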


Deploy Fluentd to the cluster with kubectl apply -f fluentd.yaml

Create an ES cluster:

You can use the console, the AWS CLI, or Terraform.

NOTE: The Elasticsearch cluster should sit in private subnets of the VPC to receive CloudWatch logs, so access to Kibana will happen via VPN or bastion tunneling.
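As a CLI example, a small VPC-resident domain can be created like this. The domain name, version, instance sizing, and the subnet/security-group IDs are all placeholders:

```shell
# Creates a small Elasticsearch domain inside the VPC (private subnet)
aws es create-elasticsearch-domain \
  --domain-name eks-logs \
  --elasticsearch-version 7.10 \
  --elasticsearch-cluster-config InstanceType=t3.small.elasticsearch,InstanceCount=2 \
  --ebs-options EBSEnabled=true,VolumeType=gp2,VolumeSize=10 \
  --vpc-options SubnetIds=subnet-aaaa,SecurityGroupIds=sg-cccc
```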

Stream CloudWatch logs to ES:

In the AWS console: select the log group --> Actions/Stream to Elasticsearch Service --> select the cluster and an IAM role (for the internal Lambda function) --> choose Common Log Format --> Next --> Start Streaming.
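Under the hood, that console flow creates a forwarding Lambda function plus a subscription filter on the log group. If you prefer scripting, the subscription-filter half looks roughly like this; the log group name and Lambda ARN are placeholders, and you'd still need the Lambda that writes to Elasticsearch:

```shell
# Subscribes the log group to a (pre-existing) forwarding Lambda;
# an empty filter pattern matches every log event
aws logs put-subscription-filter \
  --log-group-name eks-container-logs \
  --filter-name stream-to-es \
  --filter-pattern "" \
  --destination-arn arn:aws:lambda:us-east-1:123456789012:function:LogsToElasticsearch
```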


For creating indices, use Kibana's default index pattern matching. You can enable Cognito-based authentication for Kibana at any time.