So, I recently landed a new gig that I’m super stoked about, and on the systems and platform level it’s pretty heavy on AWS. That in itself isn’t too big of an issue, but my focus in the past has primarily been Azure.
So I asked myself: how could I easily translate my Azure skills into AWS?
Simple: Migrate everything.
The goal here wasn’t to instantly become an AWS expert, but to familiarize myself with the platform so that I can continue to learn and support it in my new role. Some of the things I wanted to familiarize myself with:
- EC2 Instances – launching and managing via the command line and web UI.
- On-premises migration to EC2 – migrating some of my on-premises servers to EC2; “some” eventually became “all” when I fell in love with AWS.
- Understanding AWS Cost Management – how to lower compute costs, manage budgets, etc.
- Familiarizing myself with the way Amazon does cloud networking (VPCs)
- Backing up and restoring systems using EBS Snapshots.
There are a number of other things that I’ve learned on top of that, and continue to learn as well. The goal of this initial migration wasn’t to know everything, but to get familiar.
EC2 Instances are the AWS equivalent of Virtual Machines in Azure.
EC2 is the actual compute component of the virtual machine; you also have to factor in the Elastic Block Store (EBS) volumes when considering the overall cost of an EC2 Instance.
While there are some free-tier offerings, you end up outgrowing them sooner than you’d think, so it’s really important to understand cost management so you don’t end up with a hefty bill.
AWS has a nifty calculator that you can use to figure out the overall cost that you can find here: https://calculator.aws
The calculator takes into consideration a few things:
- Constant vs Peak/Spike Usage of VM
- Choice of Operating System and licensed software (you can forgo this if you BYOL, though BYOL in the cloud is actually a lot murkier than many believe. It’s often easier to just pay the extra money to ensure license compliance)
- EC2 Family and Number of Instances
- Whether you are paying full on-demand, reserved, spot, or savings plan pricing for the instance.
- The EBS Volume size and type.
- The EBS Volume snapshot frequency and retention period.
- Outbound data transfer (inbound is free)
I am currently using a 3-year savings plan where I have committed to around $0.072/hour for the T3 family and $0.020/hour for the T2 family. This ends up costing me a total of $67.16/month. Doing this provides me with the funding to run:
- Up to four (4) t3.medium EC2 Instances, or any combination of t3 family members costing up to $0.072/hour, without spending money I haven’t already budgeted for.
- One t2.medium EC2 Instance running 24/7 as my cloud firewall. I am currently running Untangle, but intend to migrate to OPNSense once I can work out the kinks on migrating my on-prem image to an AMI.
- A cost savings of 50% or more on all t2 and t3 family instances over the on-demand cost.
The above savings plan doesn’t take into consideration any EBS volumes, just strictly the compute cost.
You can view the savings plan cost vs the on-demand cost here: https://aws.amazon.com/savingsplans/pricing/
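As a quick sanity check on the numbers above (assuming AWS’s usual convention of a 730-hour average month), the monthly figure is just the two hourly commitments added together and multiplied out:

```shell
# Savings-plan sanity check: combined hourly commitment times 730 hours/month.
# 730 is the average number of hours in a month (8,760 hours/year / 12).
t3_rate=0.072   # committed $/hour across the t3 family
t2_rate=0.020   # committed $/hour across the t2 family

monthly=$(awk -v t3="$t3_rate" -v t2="$t2_rate" \
  'BEGIN { printf "%.2f", (t3 + t2) * 730 }')
echo "Monthly commitment: \$${monthly}"   # Monthly commitment: $67.16
```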
My total AWS budget is set at $100/month. This allows some wiggle room if I want to quickly spin up something for a lab, and also accounts for EBS Volumes and outbound traffic spikes.
I may need to increase this in the future, as I’m working on doing periodic backups to an S3 Glacier vault. My desire there is to have a full backup of all of my data. My current backup only covers critical documents, because storage on Azure is not cheap once you get into terabytes of data in your Recovery Services vault. I do still love Azure Backup, though.
I’ll likely do a separate post on cost management in both Azure and AWS in the future; there’s a lot to it, and plenty of different ways to manage and keep costs under control. The first month I ever used Azure I surprised myself with a pretty hefty bill, so this time I started with cost management from day one so I didn’t make the same mistake.
Just to recap, when you run an EC2 Instance you’ll pay for:
- Compute (Hourly)
- Storage (Hourly)
- Outbound Data (per GB)
One note on billing granularity: depending on the OS and AMI, AWS rounds usage up — many commercial and Marketplace AMIs bill by the full hour, while most Linux instances now bill per second with a one-minute minimum. Either way, whether you run something for 5 minutes or 50 minutes, you can end up paying for a full hour, so don’t assume partial usage is free.
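For instances that do bill by the hour, the rounding is a simple ceiling on runtime. A minimal sketch:

```shell
# For hourly-billed instances, runtime in minutes is rounded UP to whole hours.
billed_hours() {
  minutes=$1
  echo $(( (minutes + 59) / 60 ))   # integer ceiling of minutes/60
}

billed_hours 5    # prints 1 -- five minutes still bills as a full hour
billed_hours 50   # prints 1
billed_hours 61   # prints 2 -- one minute into hour two bills two hours
```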
When you launch an EC2 instance, you launch it from something called an Amazon Machine Image (AMI). This is essentially a template that contains the operating system and any required software. There are public AMIs, community AMIs, marketplace AMIs, and private AMIs.
Public AMIs are ones that require no additional subscription to launch, unless there is specific software licensing involved such as Microsoft SQL Server.
Marketplace AMIs can be free or charge you hourly for the cost of running them. I’ve found a common difficulty with Marketplace AMIs is that they’re pretty restrictive about which EC2 families they can run on, and they can be quite expensive unless you BYOL.
I’m currently using the Marketplace image for my Untangle firewall with the BYOL option. I pay $25.00/month for that subscription outside of AWS, but will move away from it once I get OPNSense ported. I also have the option of the vendor-supported AMI for OPNSense, but I’d have to purchase an annual support subscription, which defeats the purpose of using open-source software. For now, I’m running it on-prem as a VM.
Community AMIs can also be free or require a subscription. They are generally not supported by the vendor directly and may contain outdated software. I’m hesitant to use community AMIs because it’s hard to verify that the individual who created the image didn’t slip something into it before releasing it. Community AMIs can be legitimate, though; some open-source vendors release their images as community AMIs rather than putting them in the Marketplace.
Private AMIs can either be ones that your organization has created or has had shared with it. As I mentioned earlier, I would have to purchase a support subscription for OPNSense in order to receive access to the AMI. In that specific case, the creators of OPNSense are choosing to share their private AMI with you directly.
There is also a way to migrate your on-prem VMs to an AMI using the AWS CLI. There are some requirements your VM has to meet before it will migrate properly, which is part of the reason I’m having difficulty migrating OPNSense to AWS directly. I’ll cover that in a section later on when I talk about migrating my on-prem systems.
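As a rough sketch of that CLI flow (the bucket name and key below are hypothetical placeholders; the VM must already be exported as an OVA/VHD/VMDK and the `vmimport` service role configured per the VM Import/Export prerequisites):

```shell
# Sketch only: upload an exported VM image and turn it into an AMI.
# "my-vm-bucket" and "images/firewall.ova" are hypothetical placeholders.

# 1. Upload the exported image to S3.
aws s3 cp ./firewall.ova s3://my-vm-bucket/images/firewall.ova

# 2. Kick off the import; this returns an import task ID (import-ami-xxxx).
aws ec2 import-image \
  --description "On-prem firewall VM" \
  --disk-containers "Format=ova,UserBucket={S3Bucket=my-vm-bucket,S3Key=images/firewall.ova}"

# 3. Poll the task until it completes and an AMI ID appears.
aws ec2 describe-import-image-tasks
```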
Launching an EC2 Instance
Launching an EC2 Instance is pretty simple. I’m going to quickly walk you through doing so in the AWS web UI.
1. In the AWS Console, navigate to “Services” -> “Compute” -> “EC2”
2. In the dashboard, find the button that says “Launch instance”. Depending on the version of your dashboard, it may be blue or may be orange.
During the first step we will be asked to choose an AMI. I’m a big fan of Red Hat, so I’ll be choosing RHEL 8. There are also plenty of other AMIs that you can use at no additional cost.
3. Select your AMI by clicking the “Select” button on the desired AMI. Make sure you choose the correct version too; there are x86 and ARM versions available.
4. Choose your instance type. For this scenario I’m just going to pick the t2.micro because it’s part of the free tier.
5. This next step is optional, as you can actually just launch the instance from here if that meets your workload’s needs. For this walkthrough, though, we’re going to configure some of our instance details.
When configuring your instance details you have a number of different options to pick from.
- How many instances you want to launch
- What VPC you want to attach it to, and adding additional network interfaces*
- What IP Address you want to assign to it, and whether or not you want it to auto-assign a public IP.
- Enabling detailed CloudWatch Monitoring**
- Behaviors for shutdown, stop, hibernate, and termination protection.
- If you use dedicated hosts, you can also select to launch the instance on a dedicated host***
* If adding additional network interfaces, you cannot auto-assign a public IP Address
** Enabling detailed monitoring will incur additional costs to your AWS Subscription
*** Running any workload on a dedicated or reserved host will incur an additional cost.
For this scenario, I’m going to maintain all the defaults, but I did want to introduce you to the options available.
6. Like the last section, this one is completely optional. During this stage you have the option of adding additional storage or modifying the root volume size.
For free-tier eligible systems, you can also utilize up to 30GB of General Purpose SSD storage at no additional cost. Details on how this is calculated and billed can be found on the free-tier information page.
All I’m going to do here is increase my EBS root volume size from 10GB to 20GB.
In addition to scaling up the size, you also have the option to add encryption using either the default AWS KMS key or one you generate yourself. There is also the option of using your own, non-Amazon-managed keys if you are processing highly sensitive information.
I will also be encrypting the volume with my preset key “(default) aws/ebs“
Depending on the AMI you selected, encryption may or may not be enabled by default.
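If you want every new EBS volume in a region encrypted without ticking the box on each launch, the CLI can set it account-wide. A sketch (note this setting is per region):

```shell
# Turn on EBS encryption by default for the current region.
# New volumes will use the aws/ebs key unless another KMS key is configured.
aws ec2 enable-ebs-encryption-by-default

# Verify the setting; returns {"EbsEncryptionByDefault": true} once enabled.
aws ec2 get-ebs-encryption-by-default
```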
Like the last few steps (and this will be the case for the next two as well), this one is completely optional. You can also just click “Review and Launch”.
7. This next section is where you define any tags that you want to assign to your instance or EBS Volumes. Tags can be used to automate tasks with CloudWatch, manage access to resources, group resources into resource groups, and more.
For this scenario, I will add a few tags.
There are two things you can do with the key-value pairs here. You can either tie them to already-existing constructs, for instance assigning the EBS volume to a lifecycle policy using the lifecycle policy ID, or you can define custom key-value pairs and manage those separately.
In this case, I am defining custom tags that I will use to automate the retention policy of my EBS Volume snapshots, automate weekly EBS Volume snapshots, and finally group the instance and volume into my “labs” resource group.
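The same tagging can also be done from the CLI after launch. The instance ID and tag keys below are hypothetical stand-ins for the ones I described:

```shell
# Hypothetical instance ID and tag keys, mirroring the tags described above.
aws ec2 create-tags \
  --resources i-0123456789abcdef0 \
  --tags Key=snapshot-schedule,Value=weekly \
         Key=snapshot-retention-days,Value=30 \
         Key=resource-group,Value=labs
```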
8. Our next stage allows us to configure security groups. I’ll get into these a little deeper later on when I talk about VPCs and AWS Networking, but at a high level these are just ACLs that are applied to your instance.
Since this instance is going to have a public IP address and be directly accessible, I want to restrict inbound SSH to only my IP address. You can do this easily by clicking “My IP” in the drop-down under Source.
You also have the option of assigning this to an existing security group if you already have one that meets all of your requirements.
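The equivalent rule from the CLI looks roughly like this (the security group ID is a hypothetical placeholder; checkip.amazonaws.com is AWS’s own “what’s my IP” endpoint):

```shell
# Restrict inbound SSH (tcp/22) to the current public IP only.
MY_IP=$(curl -s https://checkip.amazonaws.com)

aws ec2 authorize-security-group-ingress \
  --group-id sg-0123456789abcdef0 \
  --protocol tcp \
  --port 22 \
  --cidr "${MY_IP}/32"
```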
Finally, it’s time to click “Launch”. When you do, you will be asked to assign a key pair to log in with. You can choose an existing key pair, create a new one, or choose not to use one at all.
The key pair will be used when we log in to our instance over SSH. Whatever you do, be sure to keep it in a secure location, because by default it is all you need to log in to your instance. I recommend password-protecting it.
I personally use PuTTY, which uses its own format for private keys. Once I convert a key to the PuTTY format, I securely delete the original .PEM file from my drive.
I also have a backup stored securely in an encrypted password vault in case the PuTTY formatted one gets corrupted or I forget the password.
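If you prefer to generate the key pair from the CLI instead of the console, a sketch (the key name is arbitrary):

```shell
# Create a key pair and save the private key material locally.
aws ec2 create-key-pair \
  --key-name lab-key \
  --query 'KeyMaterial' \
  --output text > lab-key.pem

# Lock the file down; SSH clients refuse world-readable private keys.
chmod 400 lab-key.pem
```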
Once you’ve launched the instance, you’ll be taken to a page that lets you look at the launch log or view the instance directly. I’m going to skip covering that part because it’s pretty self-explanatory. Once you see the instance in the “Running” state in the dashboard, you can begin to connect.
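For reference, the whole console walkthrough above collapses into a single CLI call. All of the IDs below are hypothetical placeholders:

```shell
# Launch one t2.micro from a RHEL 8 AMI with a 20GB encrypted root volume.
# The AMI, key, and security group IDs are hypothetical placeholders.
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t2.micro \
  --key-name lab-key \
  --security-group-ids sg-0123456789abcdef0 \
  --block-device-mappings 'DeviceName=/dev/sda1,Ebs={VolumeSize=20,Encrypted=true}' \
  --tag-specifications 'ResourceType=instance,Tags=[{Key=resource-group,Value=labs}]'

# Check the instance state until it reports "running".
aws ec2 describe-instances \
  --filters Name=tag:resource-group,Values=labs \
  --query 'Reservations[].Instances[].State.Name'
```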
Connecting via PuTTY
I’ll cover this briefly for one main reason: handling instances that don’t support password-protected private keys out of the box. I have a few instances that, when launched, did not support connecting with a password-protected private key.
To get around this, you have to load the key into the PuTTY cache (via Pageant) prior to attempting to connect.
You can also use TeraTerm, or any other SSH Client that supports private key authentication.
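If you’re on macOS, Linux, or a recent Windows build with OpenSSH, you can skip the PuTTY conversion entirely. The key filename and IP below are hypothetical placeholders:

```shell
# OpenSSH needs no key conversion; just restrict permissions and connect.
chmod 400 lab-key.pem
ssh -i lab-key.pem ec2-user@203.0.113.10   # 203.0.113.10 is a placeholder IP
```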
Step 1: Converting to PuTTY Format with Password Protection
1. Open up “PuTTYgen”, if you installed the full PuTTY package this should be installed already.
2. In the top menu bar, select “Conversions” -> “Import Key”, select the key you downloaded from the AWS Console.
3. Enter a password in the “Key passphrase” and “Confirm passphrase” fields. When complete, click “Save private key”.
You can optionally add a comment to the key as well (may be useful if you have multiple private keys you’re managing).
Save the key to a location you can easily access.
Step 2: Loading the Private Key into the PuTTY Cache
Look for the program “Pageant” in the start menu and open it up. This should be installed if you already installed the full PuTTY suite.
Locate the icon in the taskbar, right-click it, and select “View Keys”.
In the pop-up window, select “Add Key”, locate the key you recently converted, and enter the passphrase.
Once completed, the key is now loaded into the cache.
Step 3: Configuring the PuTTY Authentication Profile
Setting up the authentication profile is pretty easy. I generally save the profiles because it’s a bit of a pain to have to go back and redo it every time.
To setup the authentication with our private key, open PuTTY and navigate to the left panel of the window. Locate and expand the section for “SSH”. In the expanded panel, select “Auth”.
In this window, click “Browse” and select the private key you loaded earlier.
Next, scroll back up to the connection properties window that appears when you first open PuTTY. In the hostname, you must specify the user account you want to connect as. The default for Amazon AMIs (unless otherwise noted) is “ec2-user”, so your string should look something like “ec2-user@publicip”.
I’m going to go ahead and name mine and save it so I can use it later without much hassle. Like anytime you connect via SSH for the first time, you’ll have to accept the warning. If all went well, you should be presented with that classic Linux shell.
Thanks for following along on this one. I know it was a rather simple post and process, but I wanted to get everyone set up with at least a free-tier instance that we can expand on later.
If you look back, you’ll remember I added some tags that we can use for automation and grouping. One of my next posts will focus on managing EBS Volumes and snapshots, and automating snapshots with CloudWatch.
Let me know if you think of something I should post, I’m always looking for ideas on new content.