Building Efficient Kubernetes Infrastructure with EKS and Terraform: A Hands-on Web App
The Container Orchestration Challenge
As organizations increasingly adopt microservices architectures, the need for robust container orchestration has become paramount. While Kubernetes offers powerful orchestration capabilities, the complexity of setting it up and maintaining it can be daunting. Enter Amazon Elastic Kubernetes Service (EKS) and Terraform — a powerful combination that simplifies deployment while maintaining enterprise-grade security and scalability.
In today’s fast-paced tech landscape, infrastructure as code (IaC) isn’t just a luxury — it’s a necessity. As someone who’s navigated both the trials and triumphs of cloud infrastructure, I recently undertook a project that combined two powerful tools: Terraform and Amazon EKS. What follows is my journey building an efficient Kubernetes environment that balances security, scalability, and observability.
The Vision: Beyond Simple Deployment
When I first approached this project, I wanted to build something genuinely ready for efficient, real-world workloads: proper network isolation, appropriate security controls, and comprehensive monitoring capabilities.
My objectives were clear:
- Create network isolation with a proper multi-AZ VPC architecture.
- Ensure highly available control and data planes.
- Implement secure access patterns with least privilege principles.
- Establish comprehensive observability for both the cluster and applications.
- Automate everything for consistency and repeatability using Terraform modules and add-ons.
Foundation First: Architecting the Network Layer
Any experienced cloud advisor will tell you: get your networking wrong, and everything else falls apart. That’s why I started with a carefully designed Virtual Private Cloud (VPC).
Using Terraform’s VPC module made this considerably easier than crafting everything from scratch. I created a multi-Availability Zone setup with distinct subnet tiers:
module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "5.1.1"
name = var.environment_name
cidr = var.vpc_cidr
azs = local.azs
public_subnets = local.public_subnets
private_subnets = local.private_subnets
enable_nat_gateway = true
create_igw = true
enable_dns_hostnames = true
single_nat_gateway = true
# Manage so we can name
manage_default_network_acl = true
default_network_acl_tags = { Name = "${var.environment_name}-default" }
manage_default_route_table = true
default_route_table_tags = { Name = "${var.environment_name}-default" }
manage_default_security_group = true
default_security_group_tags = { Name = "${var.environment_name}-default" }
public_subnet_tags = merge(var.tags, var.public_subnet_tags)
private_subnet_tags = merge(var.tags, var.private_subnet_tags)
tags = var.tags
}
This architecture provides:
- Public subnets: Hosting only the NAT gateway and load balancers.
- Private subnets: Where my EKS worker nodes would eventually reside.
- NAT gateway: Providing outbound internet access for the private subnets (a single gateway here keeps costs down; one per AZ would improve resilience).
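The module above also references a few locals (local.azs and the two subnet lists) that aren't shown in the snippet. As a rough sketch of how they can be derived, assuming three availability zones and a /16 VPC CIDR (the exact CIDR math here is my illustration, not the original code):

# Hypothetical locals backing the VPC module inputs above.
data "aws_availability_zones" "available" {
  state = "available"
}

locals {
  # Use the first three AZs in the region.
  azs = slice(data.aws_availability_zones.available.names, 0, 3)

  # Carve one public and one private subnet per AZ out of the VPC CIDR,
  # e.g. 10.0.0.0/16 -> 10.0.0.0/20, 10.0.16.0/20, 10.0.32.0/20 for public
  # and 10.0.48.0/20 onward for private.
  public_subnets  = [for i in range(3) : cidrsubnet(var.vpc_cidr, 4, i)]
  private_subnets = [for i in range(3) : cidrsubnet(var.vpc_cidr, 4, i + 3)]
}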


Building the Kubernetes Foundation with EKS
With a solid network foundation in place, I turned my attention to Kubernetes itself. Amazon EKS abstracts much of the complexity of running a Kubernetes control plane, but there are still important architectural decisions to make.
Using Terraform’s EKS module, I deployed a cluster spanning across multiple availability zones:
module "eks_cluster" {
source = "terraform-aws-modules/eks/aws"
version = "~> 20.11"
# providers = {
# kubernetes = kubernetes.cluster
# }
cluster_name = var.environment_name
cluster_version = var.cluster_version
cluster_endpoint_public_access = true
enable_cluster_creator_admin_permissions = true
cluster_addons = {
vpc-cni = {
before_compute = true
most_recent = true
configuration_values = jsonencode({
env = {
ENABLE_POD_ENI = "true"
POD_SECURITY_GROUP_ENFORCING_MODE = "standard"
}
})
}
}
vpc_id = var.vpc_id
subnet_ids = var.subnet_ids
control_plane_subnet_ids = var.subnet_ids
eks_managed_node_groups = {
node_group_1 = {
name = "managed-nodegroup-1"
instance_types = ["m5.large"]
subnet_ids = [var.subnet_ids[0]]
force_update_version = true
min_size = 1
max_size = 3
desired_size = 1
}
node_group_2 = {
name = "managed-nodegroup-2"
instance_types = ["m5.large"]
subnet_ids = [var.subnet_ids[1]]
force_update_version = true
min_size = 1
max_size = 3
desired_size = 1
}
node_group_3 = {
name = "managed-nodegroup-3"
instance_types = ["m5.large"]
subnet_ids = [var.subnet_ids[2]]
force_update_version = true
min_size = 1
max_size = 3
desired_size = 1
}
}
node_security_group_additional_rules = {
ingress_self_all = {
description = "Node to node all ports/protocols"
protocol = "-1"
from_port = 0
to_port = 0
type = "ingress"
self = true
}
egress_all = {
description = "Node all egress"
protocol = "-1"
from_port = 0
to_port = 0
type = "egress"
cidr_blocks = ["0.0.0.0/0"]
ipv6_cidr_blocks = ["::/0"]
}
ingress_cluster_to_node_all_traffic = {
description = "Cluster API to Nodegroup all traffic"
protocol = "-1"
from_port = 0
to_port = 0
type = "ingress"
source_cluster_security_group = true
}
}
tags = var.tags
}
This configuration creates:
- A highly available EKS control plane, run by AWS across multiple availability zones.
- Worker nodes spread across multiple availability zones, one managed node group per AZ.
- Auto-scaling capabilities to handle varying workloads.
- Control plane logging (the module enables API, audit, and authenticator logs by default) for security and troubleshooting.
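One piece worth calling out: the commented-out providers block in the cluster module hints that the kubernetes and helm providers need to be pointed at the new cluster before any in-cluster resources (such as the Helm releases later in this post) can be applied. A minimal sketch of that wiring, using the cluster module's outputs; the exact provider configuration here is my assumption rather than the original code:

# Hypothetical provider wiring so Kubernetes and Helm resources can talk to the new cluster.
data "aws_eks_cluster_auth" "this" {
  name = module.eks_cluster.cluster_name
}

provider "kubernetes" {
  host                   = module.eks_cluster.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks_cluster.cluster_certificate_authority_data)
  token                  = data.aws_eks_cluster_auth.this.token
}

provider "helm" {
  kubernetes {
    host                   = module.eks_cluster.cluster_endpoint
    cluster_ca_certificate = base64decode(module.eks_cluster.cluster_certificate_authority_data)
    token                  = data.aws_eks_cluster_auth.this.token
  }
}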

Enhancing the Cluster with Essential Add-ons
A bare Kubernetes cluster, even with proper networking, isn’t ready for production workloads. It needs additional components to handle networking, observability, and load balancing effectively.
I leveraged Terraform to deploy these critical add-ons:
AWS VPC CNI for Pod Networking
The AWS VPC CNI (Container Network Interface) provides native VPC networking for Kubernetes pods. This integration is crucial as it allows pods to receive IPs directly from your VPC, simplifying network policies and security group implementation.
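Because the cluster configuration earlier sets ENABLE_POD_ENI, individual pods can even be bound to their own security groups. As a hypothetical illustration (the security group and pod labels below are placeholders, not part of my actual setup), a SecurityGroupPolicy can be applied through the Terraform kubernetes provider:

# Hypothetical example: attach a dedicated security group to pods labeled app=web.
resource "kubernetes_manifest" "web_sg_policy" {
  manifest = {
    apiVersion = "vpcresources.k8s.aws/v1beta1"
    kind       = "SecurityGroupPolicy"
    metadata = {
      name      = "web-sg-policy"
      namespace = "default"
    }
    spec = {
      podSelector = {
        matchLabels = { app = "web" }
      }
      securityGroups = {
        groupIds = [aws_security_group.web_pods.id] # hypothetical security group
      }
    }
  }
}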
AWS Load Balancer Controller
To direct traffic to services running in the cluster, I implemented the AWS Load Balancer Controller. It automates the provisioning of Application and Network Load Balancers as Kubernetes Ingress and Service resources are created, making it straightforward to expose applications both internally and externally:
module "lb_controller_irsa" {
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
role_name = "lb-controller"
attach_load_balancer_controller_policy = true
oidc_providers = {
main = {
provider_arn = module.eks.oidc_provider_arn
namespace_service_accounts = ["kube-system:aws-load-balancer-controller"]
}
}
}
resource "helm_release" "lb_controller" {
name = "aws-load-balancer-controller"
repository = "https://aws.github.io/eks-charts"
chart = "aws-load-balancer-controller"
namespace = "kube-system"
set {
name = "clusterName"
value = module.eks.cluster_id
}
set {
name = "serviceAccount.create"
value = "true"
}
set {
name = "serviceAccount.annotations.eks\\.amazonaws\\.com/role-arn"
value = module.lb_controller_irsa.iam_role_arn
}
}

Implementing Enterprise-Grade Observability
No development environment is complete without robust monitoring. While Kubernetes provides basic health monitoring, I wanted deeper insights into both cluster and application performance.
Amazon CloudWatch Container Insights was my solution of choice, providing comprehensive observability without requiring additional infrastructure:
module "cloudwatch_observability_irsa_role" {
count = var.create_cloudwatch_observability_irsa_role ? 1 : 0
source = "terraform-aws-modules/iam/aws//modules/iam-role-for-service-accounts-eks"
version = "v5.33.0"
role_name = "cloudwatch-observability"
attach_cloudwatch_observability_policy = true
oidc_providers = {
ex = {
provider_arn = var.eks_oidc_provider_arn
namespace_service_accounts = ["amazon-cloudwatch:cloudwatch-agent"]
}
}
}
resource "aws_eks_addon" "amazon_cloudwatch_observability" {
count = var.enable_amazon_eks_cw_observability ? 1 : 0
cluster_name = var.eks_cluster_id
addon_name = local.name
addon_version = try(var.addon_config.addon_version, data.aws_eks_addon_version.eks_addon_version.version)
resolve_conflicts_on_create = try(var.addon_config.resolve_conflicts_on_create, "OVERWRITE")
service_account_role_arn = try(module.cloudwatch_observability_irsa_role[0].iam_role_arn, null)
preserve = try(var.addon_config.preserve, true)
configuration_values = try(var.addon_config.configuration_values, null)
tags = var.tags
}
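The add-on resource above references a local and a data source that aren't shown in the snippet. Roughly, they would look like this (my reconstruction; the variable names are assumptions):

locals {
  # Name of the managed add-on installed by the aws_eks_addon resource above.
  name = "amazon-cloudwatch-observability"
}

# Look up the most recent add-on version compatible with the cluster's Kubernetes version.
data "aws_eks_addon_version" "eks_addon_version" {
  addon_name         = local.name
  kubernetes_version = var.eks_cluster_version # hypothetical variable holding the cluster version
  most_recent        = true
}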
What makes Container Insights particularly valuable is its ability to collect, aggregate, and summarize metrics and logs from containerized applications and microservices. This provides:
- Performance visibility: Detailed metrics on CPU, memory, disk, and network at every level (cluster, node, pod, container)
- Operational insights: Log integration that correlates performance issues with log events
- Alerting capabilities: Automated alerting when resources deviate from expected behavior (a sketch of such an alarm follows below)
The dashboard gives operations teams real-time visibility into the health of applications, enabling proactive issue identification rather than reactive firefighting.
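To make the alerting point concrete, here is a sketch of a CloudWatch alarm on one of the Container Insights metrics; the threshold, period, and SNS topic are assumptions of mine rather than part of the original setup:

# Hypothetical alarm: notify when average node CPU across the cluster stays above 80% for 10 minutes.
resource "aws_cloudwatch_metric_alarm" "node_cpu_high" {
  alarm_name          = "${var.environment_name}-node-cpu-high"
  namespace           = "ContainerInsights"
  metric_name         = "node_cpu_utilization"
  dimensions          = { ClusterName = var.environment_name }
  statistic           = "Average"
  comparison_operator = "GreaterThanThreshold"
  threshold           = 80
  period              = 300
  evaluation_periods  = 2
  alarm_actions       = [var.alerts_sns_topic_arn] # hypothetical SNS topic for notifications
}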

Putting It All Together: Deployment in Action
With all the components defined in Terraform, deploying the entire infrastructure became a repeatable, version-controlled process. After initial planning and security reviews, I executed the deployment and within about 15–20 minutes, I had a complete EKS environment spanning multiple availability zones, with proper networking, security controls, and observability in place.
To test the setup, I deployed a sample web application with ingress resources. The AWS Load Balancer Controller automatically provisioned an Application Load Balancer, making the application accessible while keeping all workloads in private subnets.
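The sample application manifests aren't part of the Terraform shown above, but the Ingress that triggers the ALB provisioning looks roughly like this (name, namespace, and backend Service are placeholders), expressed through the kubernetes provider for consistency:

# Hypothetical Ingress for the sample web app; the AWS Load Balancer Controller sees the "alb"
# class and provisions an internet-facing Application Load Balancer targeting pod IPs.
resource "kubernetes_ingress_v1" "sample_app" {
  metadata {
    name      = "sample-app"
    namespace = "default"
    annotations = {
      "alb.ingress.kubernetes.io/scheme"      = "internet-facing"
      "alb.ingress.kubernetes.io/target-type" = "ip"
    }
  }

  spec {
    ingress_class_name = "alb"

    rule {
      http {
        path {
          path      = "/"
          path_type = "Prefix"

          backend {
            service {
              name = "sample-app" # hypothetical Service exposing the web deployment
              port {
                number = 80
              }
            }
          }
        }
      }
    }
  }
}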
Source: towardsaws