My Experiences With AWS Fargate So Far

I've recently started using Docker containers more in my projects. Previously the majority of my projects had been a combination of Vagrant for local development and Ansible for configuring servers both locally (in Vagrant) and remote (i.e. AWS Lightsail). With the move towards containers I wanted to explore AWS Elastic Container Service (ECS) more since it exists specifically for container orchestration.

AWS Fargate is the newest offering for ECS, so it seemed like a good place to start. The hardest initial hurdle was simply getting a good grasp of the terminology.

The main building blocks are clusters, services, tasks, and task definitions. I've given my best understanding of each below, followed by Amazon's official definitions. One thing I've found is that Amazon's definitions make a lot more sense once you understand each part independently: each definition references the other pieces, which is confusing if you don't understand those pieces yet. Let's tackle them one by one.

Clusters

A cluster is the highest level element and is essentially the wrapper around the other pieces. The best way I've found to think of clusters so far is as a way to keep environments (i.e. staging versus production) or products separate. For instance, you may have the following clusters:

  • product-a-production
  • product-a-staging
  • product-b-production
  • etc
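
Creating those clusters is a one-liner each with the AWS CLI (the names below are the hypothetical ones from the list above):

aws ecs create-cluster --cluster-name product-a-production
aws ecs create-cluster --cluster-name product-a-staging
aws ecs create-cluster --cluster-name product-b-production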

Services

The best way to think of services is as the independent microservices that make up your product or application. For instance, one service may be the API that powers your software. While it isn't required, load balancers in AWS Fargate are attached at the service level if they're attached to anything. This allows that particular service to be exposed to the outside world, scale automatically, and do anything else you might expect from a web app deployment.

One useful way to think of services might be to consider the various Docker images that make up your docker-compose.yml setup. Many of these would be good candidates for their own service within your cluster.
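
To make that concrete, here's a rough sketch of creating a service with the AWS CLI. The cluster name, service name, subnet, security group, and target group ARN are all hypothetical placeholders rather than values from a real setup:

aws ecs create-service \
  --cluster product-a-production \
  --service-name api \
  --task-definition api \
  --desired-count 2 \
  --launch-type FARGATE \
  --network-configuration "awsvpcConfiguration={subnets=[subnet-aaaa1111],securityGroups=[sg-bbbb2222],assignPublicIp=ENABLED}" \
  --load-balancers "targetGroupArn=arn:aws:elasticloadbalancing:us-east-1:123456789:targetgroup/api/abcdef,containerName=api,containerPort=80"

The --load-balancers flag is what ties an existing Application Load Balancer target group to the service's containers.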

Tasks and Task Definitions

A task defines an independent piece of work that a container can do and how many resources it has to do that work. Tasks essentially map to a command you'd run on your server, along with the environment variables for that command.
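
As a rough sketch (with a hypothetical family name, image, and environment variable), the JSON for a Fargate task definition might look something like this:

{
  "family": "api",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789.dkr.ecr.us-east-1.amazonaws.com/image-name:latest",
      "command": ["/usr/sbin/apachectl", "-D", "FOREGROUND"],
      "environment": [
        { "name": "APP_ENV", "value": "production" }
      ]
    }
  ]
}

Note how the command and environment variables live here rather than in the Dockerfile.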

Amazon's Definitions

Clusters

An Amazon ECS cluster is a regional grouping of one or more container instances on which you can run task requests.

Services

A service lets you specify how many copies of your task definition to run and maintain in a cluster.

Tasks and Task Definitions

Task definitions specify the container information for your application, such as how many containers are part of your task, what resources they will use, how they are linked together, and which host ports they will use.

Where I'm At


  • I've found this tech becomes much easier to manage once you combine Fargate with the power of the AWS CLI. Task definitions can be defined in JSON and therefore can easily be put under version control alongside your code. Updating your task definitions then becomes as easy as aws ecs register-task-definition --cli-input-json file://misc/aws/ecs-task-definitions/api.json.
  • Amazon Elastic Container Registry (ECR) is a tool provided by Amazon that makes hosting your Docker images very simple. Just four quick commands are needed to update your images (or you can use CodeBuild, which makes it even easier):
$(aws ecr get-login --no-include-email)
docker build -t image-name .
docker tag image-name:latest 123456789.dkr.ecr.$(aws configure get region).amazonaws.com/image-name:latest
docker push 123456789.dkr.ecr.$(aws configure get region).amazonaws.com/image-name:latest
  • Versioning is built into your task definitions on Amazon's end, which makes it easy to roll back if needed.
  • Logging to CloudWatch is simple to set up and makes it much easier to see what is going on inside of your container. This is especially helpful because I don't believe it is possible to SSH into a Fargate container instance yet.
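
As a sketch of what that rollback looks like with the AWS CLI (the cluster, service, and revision number here are hypothetical): each register-task-definition call creates a new numbered revision, so rolling back is just pointing the service at an older one.

aws ecs update-service \
  --cluster product-a-production \
  --service api \
  --task-definition api:41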


  • As far as I can tell, AWS Fargate does not yet support scheduled tasks natively. However, calling aws ecs run-task from a crontab on a micro EC2 instance is an easy enough workaround for now. An example of what that might look like:
*/30 */3 * * * aws ecs run-task --cli-input-json file://aws-run-tasks/api.json
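
For reference, the aws-run-tasks/api.json input file might look something like the sketch below; every identifier in it is a hypothetical placeholder:

{
  "cluster": "product-a-production",
  "taskDefinition": "api",
  "launchType": "FARGATE",
  "networkConfiguration": {
    "awsvpcConfiguration": {
      "subnets": ["subnet-aaaa1111"],
      "securityGroups": ["sg-bbbb2222"],
      "assignPublicIp": "ENABLED"
    }
  }
}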


  • The command argument for task definitions is an array of strings as opposed to one string command. For instance, if you want to run an Apache server that would look like "command": ["/usr/sbin/apachectl", "-D", "FOREGROUND"] as opposed to "command": "/usr/sbin/apachectl -D FOREGROUND".
  • The command that is run by your task definition needs to be long-running for tasks like running a web server. Otherwise the task will die as soon as the command finishes. I originally had CMD ["/usr/sbin/apachectl","-DFOREGROUND"] as the last command in my Dockerfile. However, that should instead be the command specified in your task definition itself.


  • I've not used AWS Fargate (or really any container-based technology) in production thus far in my career. Therefore, I'm not fully sure how monitoring the health of an ECS cluster would work. I imagine it's not particularly different from a pool of EC2 servers though.