How we use EC2 Instance tags

Using instance tags to simplify EC2 management

Amazon EC2 offers the ability to associate (currently) up to 10 key-value “tags” with EC2 resources like instances and EBS volumes, containing any information you like. This is particularly useful for custom server nomenclature, attaching data, grouping, identification and so on, helping you organise and work with your instances conveniently.
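
For example, here’s a minimal sketch of attaching a Name tag (covered in the next section) to an instance using the Ruby aws-sdk gem; the region and instance ID are placeholders rather than anything from our own setup:

require 'aws-sdk-ec2'

ec2 = Aws::EC2::Client.new(region: 'us-east-1')  # placeholder region

# Attach a "Name" tag to a single instance (placeholder instance ID).
ec2.create_tags(
  resources: ['i-0123456789abcdef0'],
  tags: [{ key: 'Name', value: 'nginx-1' }]
)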

Naming

The most commonly used instance tag is the one with the key “Name”. AWS itself seems to encourage this, with widespread support for displaying the Name tag in the Management Console. Wherever it’s supported, the console will show you the Name tag rather than just the instance ID, saving you the trouble of looking up a nondescript instance ID to make sure you’re working with the right server.

Here’s an example of what the Name tag looks like for one of the many instances busily working away on the analytics you love:

nginx-1
  ^   ^
  |   |
name  ID

The two-part tag contains a name and a unique ID, delimited by a single hyphen. The name portion indicates which cluster, or “role”, the instance is assigned to. The ID portion is a numerical ID that uniquely identifies that instance.

When you have a large number of instances to manage, identifying instances in this way is an essential part of the ops process. The real magic happens when you start using the EC2 API to query and manipulate instances based on their tags. This lets us classify instances, perform cluster-wide operations on related instances, and deploy services to a targeted subset of instances, among many other benefits.
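
As a rough sketch (again with the Ruby aws-sdk gem, and not our actual tooling), here’s the kind of query that makes those operations possible, pulling every running instance in the “nginx” cluster by filtering on its Name tag:

require 'aws-sdk-ec2'

ec2 = Aws::EC2::Client.new(region: 'us-east-1')  # placeholder region

# Find every running instance whose Name tag starts with "nginx-".
resp = ec2.describe_instances(
  filters: [
    { name: 'tag:Name', values: ['nginx-*'] },
    { name: 'instance-state-name', values: ['running'] }
  ]
)

# Collect the matching instance IDs, ready for a cluster-wide operation
# (a deploy, a reboot, and so on).
instance_ids = resp.reservations.flat_map(&:instances).map(&:instance_id)
puts instance_ids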

Hostnames

Now that all our instances have Name tags, we can make use of these tags for server identification in a way that’s useful for our applications and services. We’ve built a tool in Ruby that uses the AWS API to load a list of instances, their internal IP addresses and their Name tags. Using Ghost, it then writes this information to the /etc/hosts file as host records, like this:

127.0.0.1 localhost

# ghost
10.100.83.239 nginx-1
10.100.30.24 nodejs-1
10.100.16.87 nodejs-2
# end ghost

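Our real tool uses Ghost, but the core idea boils down to something like this standalone sketch (the marker handling, hard-coded region and lack of error handling are simplifications for illustration, not the tool’s actual code):

require 'aws-sdk-ec2'

HOSTS_FILE = '/etc/hosts'
START_MARK = '# ghost'
END_MARK   = '# end ghost'

ec2 = Aws::EC2::Client.new(region: 'us-east-1')  # placeholder region

# Build an "IP hostname" record for every running instance that has a Name tag.
records = ec2.describe_instances(
  filters: [{ name: 'instance-state-name', values: ['running'] }]
).reservations.flat_map(&:instances).map do |instance|
  name_tag = instance.tags.find { |t| t.key == 'Name' }
  "#{instance.private_ip_address} #{name_tag.value}" if name_tag
end.compact

# Rewrite only the block between the markers, leaving the rest of the file alone.
block = "#{START_MARK}\n#{records.join("\n")}\n#{END_MARK}"
hosts = File.read(HOSTS_FILE)
updated =
  if hosts.include?(START_MARK)
    hosts.sub(/#{Regexp.escape(START_MARK)}.*#{Regexp.escape(END_MARK)}/m, block)
  else
    "#{hosts.rstrip}\n\n#{block}\n"
  end
File.write(HOSTS_FILE, updated)
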
This service runs continually via upstart on every instance, so each one has an up-to-date internal IP address mapped to a readable hostname for every other instance we own. As a result, there’s always a predictable*, standardised way for instances and services to communicate with each other via readable, identifiable hostnames.
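
A sketch of what the upstart job for it might look like (the job name and script path are made up for illustration, not our actual configuration):

# /etc/init/hosts-updater.conf
description "keep /etc/hosts in sync with EC2 Name tags"

start on (local-filesystems and net-device-up)
stop on shutdown
respawn

exec /usr/local/bin/hosts-updater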

This is great because it means you don’t need as much isolated logic in your applications to resolve and connect to the right IP addresses. It also makes configuring distributed services, such as monitoring, logging and load-balancing tools, less of a headache: hostnames for these programs tend to be hard-coded in config files, and hostnames change far less often than IP addresses.

Notes

  • It’s not unknown for the EC2 API to encounter problems, which means your tool will be unable to load instance tags and assign hostnames. It should go without saying in the cloud anyway, but you need to handle this scenario gracefully in case of a service outage (a minimal sketch of one approach follows this list).
  • Make sure you’ve authorised your instances to access each other through their EC2 Security Groups. You need only open ports to other security groups in your account.
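
Here’s a minimal sketch of the defensive handling mentioned in the first note (the retry count, delay and fallback behaviour are arbitrary choices, not lifted from our tool):

require 'aws-sdk-ec2'

# Fetch running instances, retrying a few times before giving up. Returning nil
# tells the caller to leave the existing /etc/hosts entries untouched.
def fetch_instances(attempts = 3)
  ec2 = Aws::EC2::Client.new(region: 'us-east-1')  # placeholder region
  ec2.describe_instances(
    filters: [{ name: 'instance-state-name', values: ['running'] }]
  ).reservations.flat_map(&:instances)
rescue Aws::EC2::Errors::ServiceError, Seahorse::Client::NetworkingError => e
  attempts -= 1
  if attempts > 0
    sleep 5
    retry
  end
  warn "EC2 API unavailable (#{e.class}); keeping the existing host records"
  nil
end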
