I used to have a sticker on my laptop that said "My other computer is Amazon EC2". This was true in the sense that I had many EC2 instances running at any given time doing compute on data and shuffling packets around the web. Amazon EC2 was not, however, a platform I used for day-to-day development.
My daily driver was a 13" MacBook. This worked well until it didn't. It started not working so well when memory became a bottleneck, which usually happened when I did anything with Docker or had too many tabs open in Firefox.
After one too many computer panics due to memory issues, I invested in making EC2 my primary development environment. This transition happened around 2015 and I have been running on it ever since.
Running your dev environment in the cloud is a paradigm that has only become more versatile over time, thanks to better instance types (eg. Graviton), development-friendly features (eg. hibernation), and developer tooling (eg. CDK).
You can find my dev setup on GitHub as a CDK template. Deploying the template provisions a personal cloud-based dev environment that can be spun up anywhere in the world (wherever there is an AWS data center).
Unfortunately, it's the year 2023 and setting this up via IaC (infrastructure as code) still requires some knowledge of AWS plumbing. This post goes over some of those technical details.
Architecture
NOTE: this article assumes you're familiar with the basics of AWS
If you deploy using the personal-cloud template, you'll end up with the following resources:
1 VPC
1 Autoscaling Group
1 EC2 Instance
VPC
The VPC consists of:
1 public subnet (has inbound and outbound paths to the internet)
0 private subnets (no inbound path, but outbound to the internet)
1 isolated subnet (no path to public internet)
0 NAT gateways
Since I'm using this as a dev environment, the EC2 instance is launched in the public subnet. We'll need to pull in packages for various programming languages (eg. npm) as well as download dependencies (eg. Docker images), and we don't want to pay extra network fees to route that traffic through a NAT gateway. The EC2 instance itself is locked down via security group rules that only permit inbound traffic on port 22.
There are no private subnets since this stack is not meant to run any services besides development EC2 instances. I keep an isolated subnet around in reserve in case I need to spin up additional resources outside of my EC2 instance (eg. a database on a separate instance).
Note that the template by default creates a separate VPC. If you are working in a corporate environment, you might already have a development account and an existing VPC to launch your EC2 instance into. That might be preferable as you can then also connect to other internal services from inside the VPC.
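In CDK, that subnet layout takes only a few lines. Here is a minimal sketch, not copied from the personal-cloud template itself, with illustrative construct names and CIDR sizes:

```typescript
import { Stack, StackProps } from 'aws-cdk-lib';
import * as ec2 from 'aws-cdk-lib/aws-ec2';
import { Construct } from 'constructs';

export class DevStack extends Stack {
  readonly vpc: ec2.Vpc;

  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    this.vpc = new ec2.Vpc(this, 'DevVpc', {
      maxAzs: 1,       // one AZ is plenty for a personal dev box
      natGateways: 0,  // no NAT gateways: no hourly or per-GB processing fees
      subnetConfiguration: [
        { name: 'public', subnetType: ec2.SubnetType.PUBLIC, cidrMask: 24 },
        { name: 'isolated', subnetType: ec2.SubnetType.PRIVATE_ISOLATED, cidrMask: 24 },
      ],
    });
  }
}
```

The remaining sketches below assume they live inside this same constructor.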
Autoscaling Group
I like to put the EC2 instance in an autoscaling group, even if it is just a single instance. The autoscaling group by itself does not cost anything and gives us health checks for free. If I do end up needing multiples of the same development instance, they will also be easy to provision by raising the min-capacity threshold.
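A single-instance autoscaling group looks roughly like this (continuing inside the stack sketched above, assuming a reasonably recent aws-cdk-lib; the instance type and size are placeholders):

```typescript
import * as autoscaling from 'aws-cdk-lib/aws-autoscaling';

const asg = new autoscaling.AutoScalingGroup(this, 'DevAsg', {
  vpc: this.vpc,
  vpcSubnets: { subnetType: ec2.SubnetType.PUBLIC },
  instanceType: ec2.InstanceType.of(ec2.InstanceClass.R6A, ec2.InstanceSize.XLARGE),
  machineImage: ec2.MachineImage.latestAmazonLinux2(),
  minCapacity: 1,  // raise this if you ever need multiple identical dev instances
  maxCapacity: 1,
});
```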
EC2 Instance
The EC2 instance has a custom security group that, by default, only allows inbound traffic on port 22 (SSH). You can optionally open additional inbound ports depending on your use case (eg. testing a WebSocket endpoint).
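A sketch of that security group, again inside the same stack (and ideally with the source restricted to your own IP rather than the whole internet):

```typescript
const devSg = new ec2.SecurityGroup(this, 'DevSg', {
  vpc: this.vpc,
  allowAllOutbound: true,
});
// Inbound: SSH only by default.
devSg.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(22), 'ssh');
// Example of an optional extra port, eg. for testing a WebSocket endpoint:
// devSg.addIngressRule(ec2.Peer.anyIpv4(), ec2.Port.tcp(8080), 'websocket testing');
```

Passing devSg as the securityGroup prop on the autoscaling group above attaches it to the instance.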
I generally find that a lot of the work I do is either memory-bound or CPU-bound. Depending on the case, I provision an r6a or c6a EC2 instance.
To decode the instance family names:
R: memory-optimized
C: compute-optimized
A: AMD-based (generally cheaper than the Intel-based "i" variants)
NOTE: in most cases, you can also use Arm-based Graviton instances (eg. swap out r6a for r6g) to get slightly better performance at a lower cost. Since most of my development work still targets non-Arm architectures, I like to keep my environment the same as my target environments. If you do go with Graviton, the newest-generation c7g and r7g instances offer the best price-performance for compute- and memory-heavy workloads on AWS.
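In CDK terms, swapping between these families is a one-line change to the instanceType prop. A sketch, assuming an aws-cdk-lib version recent enough to know about these instance classes:

```typescript
// Memory-bound work: r6a; CPU-bound work: c6a; Arm/Graviton targets: r7g (or c7g).
// Sizes are illustrative; whichever you pick goes into the autoscaling group's instanceType.
const memoryHeavy = ec2.InstanceType.of(ec2.InstanceClass.R6A, ec2.InstanceSize.XLARGE);
const cpuHeavy = ec2.InstanceType.of(ec2.InstanceClass.C6A, ec2.InstanceSize.XLARGE);
const graviton = ec2.InstanceType.of(ec2.InstanceClass.R7G, ec2.InstanceSize.XLARGE);
```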
The operating system is Amazon Linux 2. Note that it does not ship with many of the development tools you might be used to having by default in other Linux distributions (eg. git). It is a secure Linux distro that is tuned for AWS and has long-term support.
I generally add some common user data scripts (a set of commands that run when a cloud VM starts up) to install developer tools like git and docker.
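A sketch of what that can look like on the autoscaling group from earlier; the commands assume the Amazon Linux 2 AMI (yum, ec2-user):

```typescript
asg.addUserData(
  'yum update -y',
  'yum install -y git docker',
  'systemctl enable --now docker',
  'usermod -aG docker ec2-user',  // let the default user run docker without sudo
);
```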
Note that, by default, EBS-backed EC2 instances launch with 8GB of disk space. The template bumps this default to 100GB.
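That size bump is one more property on the autoscaling group sketched earlier; the device name below assumes the Amazon Linux 2 AMI's root volume:

```typescript
// Added to the AutoScalingGroup props: grow the root EBS volume from 8GB to 100GB.
blockDevices: [{
  deviceName: '/dev/xvda',  // root device for the Amazon Linux 2 AMI
  volume: autoscaling.BlockDeviceVolume.ebs(100, {
    volumeType: autoscaling.EbsDeviceVolumeType.GP3,
    deleteOnTermination: true,
  }),
}],
```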
Etc
The architecture here represents the basics of getting something running in AWS. There are many additional things you can do to optimize in terms of cost and DX.
Some of my favorites which are not covered in this post:
adding Tailscale to the EC2 instance so you can use it as part of your personal VPN of self-hosted things
hibernating the EC2 instance when not in use (and creating a script to do this automatically)
connecting to your EC2 instance via VS Code
using Reserved Instances to further reduce costs by up to 75%
Conclusion
Today, it would be more accurate to simply say that "my computer is Amazon EC2". AWS gives you near-infinite freedom to create a custom dev environment tailored to anything you might need. With this freedom also comes great complexity and hidden footguns. The goal of this post is to give you a starting point for creating your personal cloud. Enjoy responsibly and watch out for dragons.