
Automating the Build of the ELK Stack with Terraform

The Automatic ELK!

Sounds kind of cool, doesn't it? But alas, we won't be performing any weird experiments today. Looking back at our last article, we covered the basic usage of the ELK stack and how best to integrate it.


As we already know, the ELK stack, consisting of Elasticsearch, Logstash, and Kibana, is a powerful tool for data analysis and visualization. In this post, we'll look at how to automate the build of an ELK stack using Terraform, an open-source infrastructure as code tool.

First, let's take a look at the architecture of our ELK stack. We'll use three EC2 instances, one for each of Elasticsearch, Logstash, and Kibana. These instances will be located in a single VPC, with a security group allowing traffic on the relevant ports (9200 for Elasticsearch, 5044 for Logstash, and 5601 for Kibana).
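
Before any of the resources below will work, Terraform needs an AWS provider configured. A minimal sketch (the region here is an assumption; use whichever region suits you):

```hcl
terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
    }
  }
}

provider "aws" {
  # Assumed region; credentials come from your environment or AWS config.
  region = "us-east-1"
}
```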




Next, we'll define our Terraform configuration. We'll start by creating the VPC and security group:

resource "aws_vpc" "elk_vpc" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_security_group" "elk_sg" {
  name        = "elk_sg"
  description = "Security group for ELK stack"
  vpc_id      = aws_vpc.elk_vpc.id

  # Open to the world here for simplicity; restrict these CIDR blocks in production.
  ingress {
    from_port   = 9200
    to_port     = 9200
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 5044
    to_port     = 5044
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  ingress {
    from_port   = 5601
    to_port     = 5601
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Terraform removes AWS's default allow-all egress rule, so add it back
  # explicitly or the instances won't be able to download packages.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
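
One thing worth noting: the instances below attach a security group that lives in this VPC, so they also need a subnet inside it (and a `subnet_id` argument on each `aws_instance`). A minimal public subnet sketch, with an assumed CIDR block:

```hcl
resource "aws_subnet" "elk_subnet" {
  vpc_id                  = aws_vpc.elk_vpc.id
  cidr_block              = "10.0.1.0/24" # assumed; any block inside the VPC CIDR works
  map_public_ip_on_launch = true          # so the instances get the public IPs we output later
}
```

For those public IPs to actually be reachable, the VPC would also need an internet gateway and a route table, which are omitted here for brevity.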

Next, we'll define our EC2 instances. We'll use the same instance type for all three, but you could vary them if needed. We'll also specify the AMI (the ID below is a placeholder; substitute a current AMI for your region) and user data to install and configure the necessary software:


resource "aws_instance" "elasticsearch" {
  ami           = "ami-12345678" # placeholder AMI ID
  instance_type = "t2.micro"
  vpc_security_group_ids = [aws_security_group.elk_sg.id]
  key_name      = "elk_key"      # assumes this key pair already exists

  user_data = <<-EOF
              #!/bin/bash
              # Assumes the Elastic yum repository is configured on the AMI
              sudo yum install -y java-1.8.0
              sudo yum install -y elasticsearch
              sudo systemctl enable elasticsearch
              sudo systemctl start elasticsearch
              EOF
}

resource "aws_instance" "logstash" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
  vpc_security_group_ids = [aws_security_group.elk_sg.id]
  key_name      = "elk_key"

  user_data = <<-EOF
              #!/bin/bash
              sudo yum install -y java-1.8.0
              sudo yum install -y logstash
              sudo systemctl enable logstash
              sudo systemctl start logstash
              EOF
}

resource "aws_instance" "kibana" {
  ami           = "ami-12345678"
  instance_type = "t2.micro"
  vpc_security_group_ids = [aws_security_group.elk_sg.id]
  key_name      = "elk_key"

  user_data = <<-EOF
              #!/bin/bash
              sudo yum install -y java-1.8.0
              sudo yum install -y kibana
              sudo systemctl enable kibana
              sudo systemctl start kibana
              EOF
}


Finally, we'll define an output to display the public IP addresses of our EC2 instances:

output "elasticsearch_ip" {
  value = aws_instance.elasticsearch.public_ip
}

output "logstash_ip" {
  value = aws_instance.logstash.public_ip
}

output "kibana_ip" {
  value = aws_instance.kibana.public_ip
}


With this configuration, we can use Terraform to automate the build of our ELK stack. Simply run `terraform init` followed by `terraform apply`, and Terraform will create the necessary resources in AWS. Once the resources have been created, you can access Kibana at the public IP address of the Kibana EC2 instance, on the port specified in the security group (5601).

Wrap Up!

In conclusion, using Terraform to automate the build of the ELK stack can save time and effort, and ensure that your stack is consistently configured. Whether you're using the ELK stack for log analysis, application monitoring, or any other purpose, automating the build process with Terraform is a valuable addition to your toolkit.

If you think about it, you could take the above and abstract it into a module. Its outputs could report the endpoints you've created, which your logging instances could then pick up and use to configure themselves automatically for the environment they're deployed into.
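
As a rough sketch of that idea (the module path and output wiring here are assumptions, not part of the configuration above), the calling code might look something like this:

```hcl
# Hypothetical module wrapping the VPC, security group, and three instances above.
module "elk" {
  source        = "./modules/elk" # assumed local module path
  instance_type = "t2.micro"
  key_name      = "elk_key"
}

# Downstream logging instances could consume this to point their log
# shippers at the Logstash endpoint for whatever environment was deployed.
output "logstash_endpoint" {
  value = "${module.elk.logstash_ip}:5044"
}
```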