Teach Me How to Route

Beam Me Up, Scotty – Getting Started with AWS EC2

So, I recently landed a new gig that I’m super stoked for, and on the systems and platform level it’s pretty heavy on AWS. That in itself isn’t too big of an issue, but my focus in the past has primarily been Azure.

So I asked myself: how could I easily translate my Azure skills into AWS?

Simple: Migrate everything.

The goal here wasn’t to instantly become an AWS expert, but to familiarize myself with the platform so that I can continue to learn and support it in my new role. Some of the things I wanted to familiarize myself with:

  • EC2 Instances – launching and managing via the command line and web UI.
  • On-premises migration to EC2 – migrating some of my on-premises servers to EC2 (“some” eventually became “all” when I fell in love with AWS).
  • Understanding AWS Cost Management – how to lower compute cost, manage budgets, etc.
  • Familiarizing myself with the way Amazon does cloud networking (VPCs)
  • Backing up and restoring systems using EBS Snapshots.

There are a number of other things that I’ve also learned on top of that, and continue to learn as well. The goal of this initial migration wasn’t to know everything, but to get familiar.

EC2 Instances

EC2 Instances are the AWS equivalent of Virtual Machines in Azure.

EC2 is the actual compute component of the virtual machine; you also have to factor in the Elastic Block Store (EBS) volumes when you consider the overall cost of an EC2 Instance.

While there are some free-tier offerings, you’ll over-extend those sooner than you think, so it’s really important to understand cost management so you don’t end up with a hefty bill.

AWS has a nifty calculator that you can use to figure out the overall cost that you can find here: https://calculator.aws

The calculator takes into consideration a few things:

  • Constant vs Peak/Spike Usage of VM
  • Choice of Operating System and Licensed Software (you can forgo this if you BYOL, but BYOL to the cloud is actually a lot murkier than many believe; it’s often just easier to pay the extra money to ensure license compliance)
  • EC2 Family and Number of Instances
  • Whether you are paying full on-demand, reserved, spot, or savings plan pricing for the instance.
  • The EBS Volume size and type.
  • The EBS Volume snapshot frequency and retention period.
  • Outbound data transfer (inbound is free)
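As a rough sketch of how these factors combine, here’s a toy estimator. All the rates below are placeholders I made up for illustration, not current AWS pricing; use https://calculator.aws for real numbers.

```python
HOURS_PER_MONTH = 730  # AWS's standard monthly-hour assumption

def estimate_monthly_cost(compute_rate_hr: float,
                          ebs_gb: float,
                          ebs_rate_gb_month: float,
                          egress_gb: float,
                          egress_rate_gb: float) -> float:
    """Compute + storage + outbound data (inbound is free)."""
    compute = compute_rate_hr * HOURS_PER_MONTH
    storage = ebs_gb * ebs_rate_gb_month
    egress = egress_gb * egress_rate_gb
    return round(compute + storage + egress, 2)

# e.g. a t3.medium-class hourly rate, 20 GB of gp2, 50 GB of egress
# (all placeholder rates)
print(estimate_monthly_cost(0.0416, 20, 0.10, 50, 0.09))
```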

I am currently using a 3-year savings plan where I have committed to around $0.072/hour for the T3 Family and $0.020/hour for the T2 Family. This ends up costing me a total of $67.16/month. Doing this provides me with the funding to run:

  • Up to four (4) t3.medium EC2 Instances, or a combination of any members of the t3 family with a cost of up to $0.072/hour, without spending money I haven’t already budgeted for.
  • One t2.medium EC2 Instance running 24/7 as my cloud firewall. I am currently running Untangle, but intend to migrate to OPNSense once I can work out the kinks on migrating my on-prem image to an AMI.
  • A cost savings of 50% or more on all t2 and t3 family instances over the on-demand cost.

The above savings plan doesn’t take into consideration any EBS volumes, just strictly the compute cost.
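If you want to sanity-check the commitment math yourself, it’s just the two hourly rates times AWS’s 730-hour monthly convention:

```python
# My savings-plan commitment, using the hourly rates quoted above.
t3_commit = 0.072       # $/hour committed for the T3 family
t2_commit = 0.020       # $/hour committed for the T2 family
HOURS_PER_MONTH = 730   # AWS's monthly-hour convention

monthly = (t3_commit + t2_commit) * HOURS_PER_MONTH
print(f"${monthly:.2f}/month")  # matches the $67.16/month figure above
```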

You can view the savings plan cost vs the on-demand cost here: https://aws.amazon.com/savingsplans/pricing/

My total AWS budget is set at $100/month; this allows some wiggle room if I want to quickly spin up something for a lab, and also accounts for EBS Volumes and outbound traffic spikes.

I may need to increase this in the future as I’m actually working on doing periodic backups to an S3 Glacier Vault. My desire there is to have a full backup of all of my data. My current backup only covers critical documents, because storage on Azure is not cheap once you get into terabytes of data in your Recovery Services vault. I do still love Azure Backup though.

I’ll likely do a separate post on cost-management in both Azure and AWS in the future; there is a lot to it and plenty of different ways you can manage and keep costs under control. The first month I ever used Azure I ended up surprising myself with a pretty hefty bill, so I started with cost-management from the start so I didn’t make the same mistake.

Just to recap, when you run an EC2 Instance you’ll pay for:

  • Compute (Hourly)
  • Storage (Hourly)
  • Outbound Data (per GB)

I want to make a note for you that AWS often calculates cost rounded up to the hour (billing granularity varies by OS and purchase option). This means that whether you end up running something for 5 minutes or 50 minutes, you may pay for the full hour.
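Under that hourly model, the billed hours are just the runtime rounded up; a tiny sketch:

```python
import math

def billed_hours(runtime_minutes: float) -> int:
    """Round any partial hour up to a full billed hour."""
    return math.ceil(runtime_minutes / 60)

# 5 minutes and 50 minutes both bill as one hour; 61 minutes bills as two
print(billed_hours(5), billed_hours(50), billed_hours(61))  # 1 1 2
```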

When you launch an EC2 instance, you launch it from something called an Amazon Machine Image (AMI). This is essentially a template that contains the operating system and any required software. There are public AMIs, community AMIs, marketplace AMIs, and private AMIs.

Public AMIs are ones that require no additional subscription to launch, unless there is specific software licensing involved such as Microsoft SQL Server.

Marketplace AMIs can be free or can charge you hourly for the cost of running them. I’ve found that a common difficulty with Marketplace AMIs is that they’re pretty restrictive about which EC2 families they can run on; they can also be quite expensive unless you BYOL.

I’m currently using the Marketplace image for my Untangle firewall, with the BYOL option. I’m currently paying $25.00/month for that subscription outside of AWS, but will move away from it when I get OPNSense ported. I also have the option of getting the vendor supported AMI for OPNSense, but I have to purchase an annual support subscription which defeats the purpose of me using open-source software. I’m currently running it on prem as a VM.

Community AMIs can also be free or have a required subscription. Community AMIs are generally not supported by the vendor directly and may contain outdated software. I’m hesitant to use community AMIs because it’s hard to verify that the individual who created the image didn’t slip something into it before releasing it. Community AMIs may also be legitimate, though; some open-source vendors release their images as community AMIs rather than putting them in the marketplace.

Private AMIs can either be ones that your organization has created or ones that have been shared with it. As I mentioned earlier, I would have to purchase a support subscription for OPNSense in order to receive access to the AMI. In that specific instance the creators of OPNSense are choosing to share their private AMI with you directly.

There is also a way to migrate your on-prem VMs to an AMI using the AWS CLI. There are some requirements your VM has to meet before it will migrate properly, which is part of the reason I’m having difficulty migrating OPNSense to AWS directly. I’ll cover that in a section later on when I talk about migrating my on-prem systems.

Launching an EC2 Instance

Launching an EC2 Instance is pretty simple. I’m going to walk you through doing so real quick on the AWS web UI.

1. In the AWS Console, navigate to “Services” -> “Compute” -> “EC2”

2. In the dashboard, find the button that says “Launch instance”. Depending on the version of your dashboard, it may be blue or may be orange.

During the first step we will be asked to choose an AMI. I am a big fan of Red Hat, so I’ll be choosing RHEL 8. There are also plenty of other AMIs that you can use without additional cost.

3. Select your AMI by clicking on the “Select” button on the desired AMI. Make sure you choose the correct version too; there are x86 and ARM versions available.

4. Choose your instance type. For this scenario I’m just going to pick the t2.micro because it’s part of the free tier.

5. This next step is optional, as you can actually just launch the instance from here if that meets your workload’s needs. For this walkthrough, though, we’re going to configure some of our instance details.

When configuring your instance details you have a number of different options to pick from.

  • How many instances you want to launch
  • What VPC you want to attach it to, and adding additional network interfaces*
  • What IP Address you want to assign to it, and whether or not you want it to auto-assign a public IP.
  • Enabling detailed CloudWatch Monitoring**
  • Behaviors for shutdown, stop, hibernate, and termination protection.
  • If you use dedicated hosts, you can also select to launch the instance on a dedicated host***

* If adding additional network interfaces, you cannot auto-assign a public IP Address

** Enabling detailed monitoring will incur additional costs to your AWS Subscription

*** Running any workload on a dedicated or reserved host will incur an additional cost.

For this scenario, I’m going to maintain all the defaults, but I did want to introduce you to the options available.
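For reference, the same launch can be expressed as parameters for boto3’s `run_instances`. Everything below is a sketch: the AMI ID, key pair name, and security group ID are placeholders I invented, not real resources.

```python
# Sketch of a t2.micro launch as boto3 run_instances parameters.
# All IDs/names below are hypothetical placeholders.
launch_params = {
    "ImageId": "ami-0123456789abcdef0",            # placeholder RHEL 8 AMI ID
    "InstanceType": "t2.micro",                    # free-tier eligible
    "MinCount": 1,
    "MaxCount": 1,
    "KeyName": "my-key-pair",                      # placeholder key pair
    "SecurityGroupIds": ["sg-0abc123def456789a"],  # placeholder group
    "Monitoring": {"Enabled": False},              # detailed monitoring costs extra
    "DisableApiTermination": False,                # termination protection off
}

# With AWS credentials configured, the actual call would look like:
# import boto3
# boto3.client("ec2").run_instances(**launch_params)
print(launch_params["InstanceType"])
```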

6. Similar to the last section, this one is completely optional. During this next stage you have the option of adding additional storage or modifying the root volume storage size.

For free-tier eligible systems, you can also utilize up to 30GB of General Purpose SSD storage at no additional cost. Details on how this is calculated and billed can be found on the free-tier information page.


All I’m going to do here is increase my EBS root volume size from 10GB to 20GB.

In addition to scaling up the size, you also have the option to add encryption using the default AWS KMS key, a KMS key you generate yourself, or your own non-Amazon-managed key material if you are processing highly sensitive information.

I will also be encrypting the volume with my preset key “(default) aws/ebs”.

Depending on the AMI you selected, encryption may or may not be enabled by default.

Like the last few steps (and this will be the case for the next two steps as well), these are completely optional. You can also just click “Review and Launch”.

7. This next section is where you define any tags that you want to assign to your instance or EBS Volumes. Tags can be used to automate tasks using CloudWatch, manage access to resources, group resources into resource groups, and more.

For this scenario, I will add the following tags:


There are two things you can do with the key-value pairs here. You can either assign them to already existing tags, for instance assigning the EBS volume to a lifecycle policy using the lifecycle policy ID, or you can assign custom key-value pairs and manage those separately.

In this instance you can see that I am defining custom tags that I will use for automating the retention policy of my EBS volume snapshots, automating weekly EBS volume snapshots, and finally one that I will use to group the instance and volume into my “labs” resource group.
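As a sketch, here’s what a tag set like that looks like in the TagSpecifications shape that boto3’s `run_instances` accepts. The tag keys and values are my own illustrative conventions, not AWS-defined names:

```python
# Hypothetical tags for snapshot automation and resource grouping.
tags = [
    {"Key": "snapshot-schedule", "Value": "weekly"},
    {"Key": "snapshot-retention", "Value": "30-days"},
    {"Key": "resource-group", "Value": "labs"},
]

# Apply the same tags to the instance and its EBS volume at launch time.
tag_specifications = [
    {"ResourceType": "instance", "Tags": tags},
    {"ResourceType": "volume", "Tags": tags},
]
print(len(tag_specifications))
```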

8. Our next stage allows us to configure security groups. I’ll get into these a little deeper later on when I talk about VPCs and AWS Networking, but at a high level these are just ACLs that are applied to your instance.

Since this instance is going to have a public IP Address and be directly accessible, I want to restrict inbound SSH to only my IP Address. You can do this easily by clicking “My IP” in the drop-down under source.

You also have the option of assigning this to an existing security group if you already have one that meets all of your requirements.
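That “SSH from my IP only” rule can also be expressed as the IpPermissions structure used by boto3’s `authorize_security_group_ingress`. The address below is a documentation-range placeholder standing in for “my IP”:

```python
# Sketch: ingress rule allowing SSH only from a single source address.
# 203.0.113.25 is a placeholder from the documentation IP range.
ssh_rule = {
    "IpProtocol": "tcp",
    "FromPort": 22,
    "ToPort": 22,
    "IpRanges": [{"CidrIp": "203.0.113.25/32",
                  "Description": "SSH from my IP only"}],
}
# e.g. boto3.client("ec2").authorize_security_group_ingress(
#          GroupId="sg-...", IpPermissions=[ssh_rule])
print(ssh_rule["IpRanges"][0]["CidrIp"])
```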

Finally, it’s time to click “Launch”. When you click launch, you will be asked to assign a key-pair to login with. You can choose an existing key-pair, create a new one, or choose not to use one whatsoever.

The key pair will be used when we go to login to our instance over SSH. Whatever you do, be sure to keep it in a secure location, because it is (by default) all that you need to login to your instance. I recommend password-protecting it.

I personally use PuTTY, which uses its own format for private keys. When I convert it to the PuTTY format, I delete the original .PEM format from my drive securely.

I also have a backup stored securely in an encrypted password vault in case the PuTTY formatted one gets corrupted or I forget the password.

Once you’ve launched the instance, you’ll be taken to a page that allows you to look at the launch log or view the instance directly. I’m going to skip covering that part because I think it’s pretty self-explanatory how to get back and verify that the instance is running. Once you see the instance in the “Running” state in the dashboard, you can begin to connect.

Connecting via PuTTY

I’ll cover this briefly for one main reason: the handling of instances that don’t support password-protected private keys out of the box. I myself have a few instances running that, when I launched them, did not support connecting using a password-protected private key.

In order to get around this, you have to load the key into the PuTTY Cache prior to attempting to connect.

You can also use TeraTerm, or any other SSH Client that supports private key authentication.

Step 1: Converting to PuTTY Format with Password Protection

1. Open up “PuTTYgen”, if you installed the full PuTTY package this should be installed already.

2. In the top menu bar, select “Conversions” -> “Import Key”, select the key you downloaded from the AWS Console.

3. Enter a password in the “Key Passphrase” and “Confirm passphrase” fields; when complete, click “Save private key”.

You can optionally add a comment to the key as well (may be useful if you have multiple private keys you’re managing).

Save the key to a location you can easily access.

Step 2: Loading the Private Key into the PuTTY Cache

Look for the program “Pageant” in the start menu and open it up. This should be installed if you already installed the full PuTTY suite.

Locate the icon in the taskbar, right-click it, and select “View Keys”.

In the pop-up window, select “Add Key”, locate the key you recently converted, and enter the passphrase.

Once completed, the key is now loaded into the cache.

Step 3: Configuring the PuTTY Authentication Profile

Setting up the authentication profile is pretty easy. I generally save the profiles because it’s a bit of a pain to have to go back and do it every time.

To setup the authentication with our private key, open PuTTY and navigate to the left panel of the window. Locate and expand the section for “SSH”. In the expanded panel, select “Auth”.

In this window, click “Browse” and select the private key you loaded earlier.

Next, scroll back up to the connection properties window that shows up when you first open PuTTY. In the hostname, you must specify the user account you want to connect as. The default for Amazon AMIs (unless otherwise noted) is “ec2-user”. Your string should be something like “ec2-user@publicip”

I’m going to go ahead and name mine and save it so I can use it later without much hassle. Like anytime you connect via SSH for the first time, you’ll have to accept the warning. If all went well, you should be presented with that classic Linux shell.

Thanks for following along on this one. I know it was a rather simple post and process, but I wanted to get everyone setup with at least a free-tier instance that we can use to expand on later.

If you look back and remember I added some tags that we can use for automation and grouping. One of my next posts will focus on managing EBS Volumes and Snapshots, and automating snapshots with CloudWatch.

Let me know if you think of something I should post, I’m always looking for ideas on new content.


Securing the Grid – Getting Started on CIP Standards

This one is going to go into something that I work with on a day-to-day basis at my current job. Everyone knows HIPAA, Sarbanes-Oxley (SOX), and PCI-DSS as common compliance frameworks that delve into securing information systems, but one that is often overlooked is the set of regulatory compliance guidelines that the energy industry is required to follow.

In the energy industry, we’re required to follow Critical Infrastructure Protection (CIP) Standards published by the North American Electric Reliability Corporation (NERC).

The goal of CIP Standards:

The goal of CIP standards is to protect the critical infrastructure that controls many of the functions of the grid; this includes everything from generation and transmission to load balancing of power throughout the grid.

CIP Standards are published by NERC and enforced by regional enforcement agencies. There are currently six different regions within the United States, each managing separate portions of the grid.

There are standards other than CIP that the energy industry has to comply with. In addition to the CIP “cyber-security” standards, NERC has published various other standards covering Emergency Operations, Transmission, Load Balancing, and others that are primarily focused on procedures and policies regarding the actual management of load.

A complete list of standards can be found at:


What Does CIP Encompass?

Like many regulatory compliance frameworks, CIP covers everything from asset identification, access control, incident response, and recovery to information protection. As of today, there are 12 CIP standards subject to enforcement now or in the near future.

CIP Number – Description
CIP-002 – BES Cyber System Categorization
CIP-003 – Security Management Controls
CIP-004 – Personnel & Training
CIP-005 – Electronic Security Perimeters (ESP)
CIP-006 – Physical Security of BES Cyber Systems
CIP-007 – System Security Management
CIP-008 – Incident Reporting and Response Planning
CIP-009 – Recovery Plans for BES Cyber Systems
CIP-010 – Configuration Change Management and Vulnerability Assessments
CIP-011 – Information Protection
CIP-013 – Supply Chain Risk Management (future enforcement, July 2020)
CIP-014 – Physical Security (not cyber-security related; I won’t be covering this one)

Alright, for the scope of this one, I’m going to skip right ahead to CIP-005. CIP-002 is an easy one; the requirements are pretty simple: we have to identify and classify any cyber assets, along with the impact rating they would have on the grid.

A list of all standards, along with some technical rationale behind them can be found here:


Impact Rating:

NERC specifies impact ratings based on the projected impact a site or facility would have on the grid. There are currently three different impact ratings:

  • Low – a low-impact BES facility is any facility not rated medium or high impact; generally the threshold is a minimum of 75MW (MVA) to be considered an in-scope facility for NERC. Anything below that is considered “no impact”, or out of scope for NERC.
  • Medium Impact – there are different criteria for this based off of the functions you are performing. Control Centers, Generator Owners, Generator Operations, Transmission Owners, and Transmission Operators each have their own criteria for medium impact.
  • High Impact – as with medium impact, each function has its own criteria; if you’re interested, read CIP-002-5.1a for more information.

The impact rating of your facilities dictates which standards you are subject to. I will be focusing on a Medium-Impact Control Center, since these happen to be some of the most common facilities out there. There are also Medium-Impact Generation Facilities and Medium-Impact Transmission Facilities, but we’ll be focusing on the control center.

I mentioned the term “BES”; all this means is Bulk Electric System: a system that is part of one or more Interconnections.

Control Center Criteria

There are different criteria for Control Centers that dictate your impact rating. See the table below:

  • High – Each Control Center or backup Control Center used to perform the functional obligations of the Reliability Coordinator.
  • High – Each Control Center or backup Control Center used to perform the functional obligations of the Balancing Authority: 1) for generation equal to or greater than an aggregate of 3000 MW in a single Interconnection, or 2) for one or more of the assets that meet criterion 2.3, 2.6, or 2.9.
  • High – Each Control Center or backup Control Center used to perform the functional obligations of the Transmission Operator for one or more of the assets that meet criterion 2.2, 2.4, 2.5, 2.7, 2.8, 2.9, or 2.10.
  • High – Each Control Center or backup Control Center used to perform the functional obligations of the Generator Operator for one or more of the assets that meet criterion 2.1, 2.3, 2.6, or 2.9.
  • Medium – Each Control Center or backup Control Center, not already included in High Impact Rating (H) above, used to perform the functional obligations of the Generator Operator for an aggregate highest rated net Real Power capability of the preceding 12 calendar months equal to or exceeding 1500 MW in a single Interconnection.
  • Medium – Each Control Center or backup Control Center used to perform the functional obligations of the Transmission Operator not included in High Impact Rating (H) above.
  • Medium – Each Control Center or backup Control Center, not already included in High Impact Rating (H) above, used to perform the functional obligations of the Balancing Authority for generation equal to or greater than an aggregate of 1500 MW in a single Interconnection.
  • Low – Control Centers and backup Control Centers not included in the above criteria.

Our theoretical control center will fall under the criteria where we are operating in excess of 1500MW in a single Interconnection. We are therefore a Medium-Impact control center.
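A greatly simplified sketch of just the Balancing Authority control-center thresholds from the table above (single-Interconnection aggregate MW only; the real criteria reference many more conditions in CIP-002-5.1a):

```python
# Toy classifier for the Balancing Authority aggregate-MW thresholds.
# This ignores the criterion cross-references (2.3, 2.6, 2.9, etc.) --
# read CIP-002-5.1a before classifying a real facility.
def ba_control_center_impact(aggregate_mw: float) -> str:
    if aggregate_mw >= 3000:
        return "High"
    if aggregate_mw >= 1500:
        return "Medium"
    return "Low"

print(ba_control_center_impact(1600))  # our theoretical control center
```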

CIP-005 Requirements

CIP-005 dictates standards that must be followed to secure BES cyber assets that have external-routable connectivity (ERC). As a medium impact control center, here is a table of the requirements we are subject to under CIP-005.

Requirement 1 – Electronic Security Perimeters

  • 1.1 – All applicable Cyber Assets connected to a network via a routable protocol shall reside within a defined ESP.
  • 1.2 – All External Routable Connectivity must be through an identified Electronic Access Point (EAP).
  • 1.3 – Require inbound and outbound access permissions, including the reason for granting access, and deny all other access by default.
  • 1.4 – Where technically feasible, perform authentication when establishing Dial-up Connectivity with applicable Cyber Assets.
  • 1.5 – Have one or more methods for detecting known or suspected malicious communications for both inbound and outbound communications.

This requirement is one of the easier ones to understand. It simply requires us to have a network that is protected by one or more firewalls to secure communications.

Part 1.1

Part 1.1 requires us to know that these networks exist, and to have drawings, asset sheets, or other documentation that specifies what assets reside in each network.

It also requires that we have written policies and procedures for establishing these ESPs, along with methods to verify that assets do not exist outside of an ESP.

Part 1.2

Part 1.2 requires us to identify any paths that can be used for inbound or outbound communications; these are referred to as Electronic Access Points (EAPs).

A common misunderstanding here is that an EAP is a device itself; it is not. An EAP is an interface on a device. Most commonly we will use next-generation firewalls, and an interface on one of those will be designated as the EAP.

Part 1.3

Part 1.3 requires us to have documentation and authorization for the firewall rules configured on our EAP. It also requires that we have a deny-by-default rule in place, which is normally the default without us even configuring it.

Documentation can be a spreadsheet or really anything detailing the configured firewall rules, along with who configured and who authorized them.

If you utilize the implicit deny instead of an explicit deny, be prepared to supply vendor documentation stating that this is the normal operation. In any event, it’s really just easier to configure an explicit deny-all in both directions.

In an audit, not only will you be asked for the documentation of authorized access lists, but you’ll also be asked for the running configuration to validate that they align with the authorized access lists.
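To make the deny-by-default posture concrete, here’s a toy model of first-match rule evaluation with an explicit deny-all at the end. The rule set is invented for illustration; it is not a real firewall configuration format:

```python
# Toy first-match rule evaluation mirroring Part 1.3's posture:
# documented permits first, then an explicit deny-all.
# Rules are (action, protocol, port); None matches any port.
RULES = [
    ("permit", "tcp", 443),   # documented, authorized rule
    ("permit", "tcp", 22),    # documented, authorized rule
    ("deny", "any", None),    # explicit deny-all, both directions
]

def evaluate(protocol: str, port: int) -> str:
    for action, r_proto, r_port in RULES:
        if r_proto in ("any", protocol) and r_port in (None, port):
            return action
    return "deny"  # implicit deny as a backstop

print(evaluate("tcp", 22), evaluate("udp", 53))
```

The point of the last rule is auditability: with an explicit deny-all you can point at a logged, documented rule instead of arguing about vendor defaults.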

Part 1.4

Part 1.4 requires us to use authentication of some form, though it’s not specific about what, when Dial-up is used. In the real world, Dial-up might be used at facilities in remote locations where no broadband or fiber connectivity exists.

Since we’re in a control center, we’re going to assume we’re in a populated area with established fiber and broadband infrastructures and as such our method of complying with this will be to enforce this via our policy.

Our policy will prohibit the use of dial-up communications altogether. Of course, auditors will want to know how we verify and validate that there is no dial-up communication, and this will be performed during our periodic walk-throughs and inventories of our system.

Part 1.5

Part 1.5 is why we use next-generation firewalls. This part requires us to have a method to detect or block known malicious communications.

We all know what these are: our Intrusion Detection and Prevention systems. One of the interesting things about this requirement is that it’s really up to us whether we block these communications; we’re only required to detect them.

And that’s probably a good thing, any IPS is subject to false positives and when we’re dealing with real-time control of critical infrastructure, we don’t really want the potential for an outage because an update to an IPS signature blocked that communication on us.

Auditors are going to want to see a drawing here that specifies where the IDS inspection point is. They may also want running configurations showing that the IDS is configured and that traffic is configured to be inspected.

Later on we’ll talk about Anti-malware definition updates in CIP-007, IDS signatures are viewed the same as anti-malware definitions and we’ll have to document a method for testing and deployment of these signatures regularly.

There is no specified time period on how often we have to update these. Testing is more along the lines of ensuring that the update does not block authorized communications, not so much that it actually catches malicious traffic.

Requirement 2 – Interactive Remote Access Management

  • 2.1 – Utilize an Intermediate System such that the Cyber Asset initiating Interactive Remote Access does not directly access an applicable Cyber Asset.
  • 2.2 – For all Interactive Remote Access sessions, utilize encryption that terminates at an Intermediate System.
  • 2.3 – Require multi-factor authentication for all Interactive Remote Access sessions.

This is where things start to get interesting, but this still isn’t the full face of the vagueness of CIP standards.

What is Interactive Remote Access?

Well, NERC’s definition is lacking on this one. They specify a few criteria for it, but it can be summed up as user-initiated communications to an asset within the ESP.

There are a few key things to understand about Interactive Remote Access:

  • Interactive Remote Access is intended for support personnel maintaining the BES cyber system, not for actually monitoring or controlling BES assets.
  • Interactive Remote Access does not include machine-to-machine communications such as replication, heartbeats, etc.

Part 2.1

Okay, this one might be a little confusing based on the wording. Essentially what they are requiring us to have is a bastion server that does not reside within our ESP.

Typical implementations include a bastion server, otherwise referred to as a “jump-host”, in a managed DMZ. There is no requirement that the server cannot be dual-homed; by this I mean the server can have an interface in the DMZ as well as an interface in the ESP.

If this is your desired implementation, then you would also be required to document that interface as an EAP.

Also keep in mind that if this was the choice, you would also have to have a method for IDS inspection at this EAP.

Diagram of a Dual-Homed Jump-host implementation:

It’s much easier to have this system in the managed DMZ where it is required to route through the firewall prior to initiating a connection with a device in the ESP.

Here’s a diagram of a normal jump-host residing in a managed DMZ where it has to route back through the firewall to initiate a connection to a device within the ESP:

It is never acceptable to have port-forwards or any type of static NAT for direct access to a device in the ESP. There must always be what is referred to as a “protocol-break”. No outside interactive sessions can communicate directly to a device within the ESP.

Auditors will likely want proof that the device is not dual-homed; if it is dual-homed, they will want documentation that IDS inspection is occurring, as well as that the interface is a documented EAP.

In addition to the above, they’re going to want a list of all assets that are currently accessible from this system. They will also want a list of installed applications to ensure that there are no applications that can directly control power from the jump-host.

There’s actually a lot more that they’ll want, since this device is classified as an EACMS (Electronic Access Control or Monitoring System). It’ll fall into scope for just about every CIP requirement we are subject to.

Part 2.2

This is where things get a little tricky. Not only are we required to have some sort of jump-host, but we’re also required to have encryption for this system that terminates at the system itself.

That means that if you have a client-based VPN such as Pulse Secure, FortiClient, AnyConnect, or GlobalProtect, the encryption it provides will not suffice. The encryption must terminate at the jump-host itself.

The requirement for encryption throws out many options off the bat: we can’t have a telnet gateway or any system that doesn’t provide end-to-end encryption.

Alright, so we know a lot of things provide encryption by themselves. Remote Desktop does, and oftentimes that’s what’s used. There is some difficulty here, though, because you need some forethought: remember that Part 2.3 requires us to enforce MFA.

Now, there’s no perfect way to do this and everyone’s setup will be different, however, one of the most widely used solutions and possibly my favorite is a product from ManageEngine known as Password Manager Pro.

I want to be clear that I do not endorse this software, but that it does seem to meet the CIP standards.

This one is a little more difficult to prove to auditors, network documentation will help, but they’ll probably want to see this actual system in action. They might want to see packet captures to verify the traffic is encrypted.

Part 2.3

This is one of the standards that I definitely think is a requirement for any system that gives you access to a system remotely, not just for the BES. I think this one is best practice for any company still using client VPNs to provide access to internal resources.

We’re required here to enforce multi-factor authentication for access to the jump-host. We should already know what MFA is, and for this one the only real options for us are:

  • Something the user has (OTP token, Smart Card)
  • Something the user knows (Username/Password/PIN)

I’m not sure about you, but I simply don’t trust any of the commercially available biometric systems out there that could support this. Implementing one would likely meet the requirement, but you have to think about the false acceptance rate and how easy most systems are to spoof.

There is some gray area on what you can use here, but remember that it has to be compatible with the solution you use. If you’re using Password Manager Pro, here are some of the most common ways of enforcing MFA:

  • Duo Authentication
  • RSA SecurID
  • Smart Cards
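For a sense of what an OTP token is actually doing, here’s a minimal sketch of HOTP (RFC 4226), the algorithm behind most one-time-password tokens; TOTP (RFC 6238) is the same thing with the counter derived from the clock:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    # HMAC-SHA1 over the big-endian 8-byte counter
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# First test vector from RFC 4226 Appendix D
print(hotp(b"12345678901234567890", 0))  # 755224
```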

Again, a demo for this one is going to be the easiest way of demonstrating compliance. This is especially true if the auditor is unfamiliar with Multi-Factor authentication mechanisms.

You may also be required to show proof that all users with access to that system have MFA enforced, as well as any local emergency accounts.

There is some gray area here: if the account doesn’t have access to devices within the ESP, it might not be required to have MFA enforced. This would be useful for emergency maintenance accounts used to troubleshoot the system. Don’t follow this one as if it’s the gospel itself, though; every auditor and audit team is different, and you don’t want to be hit with a potential non-compliance (PNC) for something as simple as this.

What’s next

Okay, so we’ve covered what CIP standards are and the scope of CIP standards. Now that you’ve got a good understanding of CIP-005, our next session will jump to CIP-007.

CIP-007 covers:

  • Malicious Code Prevention (Antimalware)
  • Security Patch Management
  • Logical and Physical Port Security
  • Security Event Monitoring (SIEM)
  • System Access Control (Account Management, Authentication)

Posted in Cybersecurity, Regulatory Compliance

Demystifying Cryptography

Many system administrators and network administrators alike find the details behind cryptography rather difficult to comprehend. Before researching and learning about it, I also thought it was rather difficult, but it’s not.

What is Cryptography?

Put rather simply, cryptography is the science of manipulating data in such a way that it is obfuscated and useless to one who does not have the knowledge of the cipher and any associated secret keys to decrypt.

But there’s more to it than that; there are actually three common forms of cryptography:

  • Encryption – a method of obfuscating data that is reversible, generally using secret keys and a shared cipher.
  • Hashing – a method used for verifying the integrity of data, not so much for obfuscating data.
  • Steganography – hiding data in plain sight, for example in pictures.

History of Encryption

Throughout history, different forms of encryption have been used to hide data for various purposes. Historically speaking, data only needed to be kept secret for a short time; during the height of World War II, the general rule of thumb was that data only needed to stay secret for about three hours.

That’s where classical ciphers come in.

Caesar’s Cipher

The Caesar cipher is one of the best-known forms of encryption. It is a substitution cipher wherein each letter is shifted three characters to the right in the alphabet.

For example, if we wanted to encrypt the word “ALPHA”, it would have the following cipher text:

Cipher Text: DOSKD

To decrypt this, we would simply shift each character of the cipher text 3 characters to the left.

For example, the Cipher Text: PRPPD would decrypt to:

Plain Text: MOMMA

As you can see here, we ran into a phenomenon common to the English language: certain letters occur more often than others. Exploiting this is the basis of a technique known as frequency analysis.

Obviously the shift of three can be substituted for any desired shift. It doesn’t require much effort to break this either, as all you need to know is the number of positions used to shift characters.
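The whole scheme fits in a few lines of Python. Here’s a minimal sketch matching the examples above; a negative shift decrypts:

```python
def caesar(text, shift):
    """Shift each A-Z letter of `text` by `shift` positions, wrapping around."""
    out = []
    for ch in text.upper():
        if ch.isalpha():
            out.append(chr((ord(ch) - ord("A") + shift) % 26 + ord("A")))
        else:
            out.append(ch)  # leave spaces and punctuation untouched
    return "".join(out)

print(caesar("ALPHA", 3))    # -> DOSKD
print(caesar("PRPPD", -3))   # -> MOMMA
```

Because there are only 25 possible shifts, a brute-force loop over `caesar(ciphertext, -s)` for every shift breaks it instantly.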

Vigenere Cipher

Similar to the Caesar cipher, this is another classical substitution cipher. Instead of the shift being a set number of characters, the shift is defined by a phrase known as the “key”.

The easiest way to visualize this cipher is through the use of a table that lists corresponding cipher text for each plain text entered.

Vigenere Cipher Table:

For our example, we’ll use the following key:


Now let’s encrypt the word “Dogshow”

Ciphertext: ZCSTHHS

As you can see, our key wasn’t exactly the same length as the word we were encrypting, so we just wrap around to the beginning of the key when we reach its end.

Now using that same key, let’s decrypt the following:


Using our cipher, we should get the following:


This cipher was considered unbreakable for roughly three centuries, until a method of easily breaking it was developed in the 1800s.
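Since the key and table images above didn’t survive, here’s a sketch of the same idea in Python using a hypothetical key, “LEMON” (not the key from the original example). Each key letter’s position in the alphabet is the shift applied to the matching plaintext letter, wrapping around the key as described:

```python
def vigenere(text, key, decrypt=False):
    """Shift each letter by the alphabet position of the matching key letter."""
    out = []
    key = key.upper()
    for i, ch in enumerate(text.upper()):
        k = ord(key[i % len(key)]) - ord("A")  # wrap around the key
        if decrypt:
            k = -k
        out.append(chr((ord(ch) - ord("A") + k) % 26 + ord("A")))
    return "".join(out)

ct = vigenere("DOGSHOW", "LEMON")
print(ct)                                   # -> OSSGUZA
print(vigenere(ct, "LEMON", decrypt=True))  # -> DOGSHOW
```

This sketch assumes letters only (no spaces); with the key elided in the post, the ciphertext here won’t match the article’s “ZCSTHHS”.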

Modern Encryption

Okay, we’ve gone a little into some of the simplest ciphers known to man. There are many others that were used throughout history, but it’s time to get into modern cryptography and how we can use it to keep data secret.

Today’s algorithms are based largely on math, thanks in large part to advancements in computers that let us work with enormous numbers in real time, something that would take even a skilled cryptographer years to do by hand.

Stream vs Block

I won’t go too far into this because the goal of this post is to get people comfortable with cryptography, not make them experts.

There are two well-known types of cipher that provide the same result, just in different ways: stream ciphers and block ciphers.

What is a block cipher?

A block cipher is an algorithm that encrypts fixed-length chunks of data, one at a time. This fixed length of data is known as a block, giving the cipher its name. The size of each block generally ranges between 64 and 256 bits.
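The fixed-block idea can be sketched quickly. Data that doesn’t fill the last block has to be padded first; the sketch below uses PKCS#7-style padding (the common scheme, shown here as an illustration, not any particular cipher’s implementation) and a 16-byte block, the size AES uses:

```python
def pkcs7_pad(data, block_size=16):
    """Pad data to a multiple of block_size; each pad byte equals the pad length."""
    n = block_size - (len(data) % block_size)
    return data + bytes([n] * n)

def to_blocks(data, block_size=16):
    """Split padded data into the fixed-size blocks a block cipher consumes."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

padded = pkcs7_pad(b"ATTACK AT DAWN")   # 14 bytes -> 16 (two 0x02 pad bytes)
print(to_blocks(padded))                # a single 16-byte block
```

The cipher then encrypts each block; how blocks are chained together (ECB, CBC, GCM, etc.) is a separate topic called the mode of operation.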

What are some common block ciphers?

You probably already know most of them, and maybe just didn’t know that they were block ciphers, but some common block ciphers include:

  • DES
  • 3DES
  • AES
  • Blowfish
  • Serpent

What is a stream cipher?

A stream cipher is an algorithm which encrypts one bit (or byte) of data at a time. Stream ciphers are designed around the ideal cipher known as the one-time pad; however, the one-time pad is far too impractical for everyday use.

What are some common stream ciphers?

These are probably a lot less common to you; you really don’t see them every day, although some of them, such as RC4, have been used for WEP and WPA encryption.
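To see the stream idea concretely, here’s a toy one-time pad in Python: each byte of the message is XORed with one byte of keystream, and applying the same operation again decrypts. Real stream ciphers like RC4 or ChaCha20 work the same way, except the keystream is generated from a short key instead of being truly random and message-length:

```python
import os

def xor_stream(data, keystream):
    """XOR each byte of data with the keystream; the same call decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream))

message = b"ATTACK AT DAWN"
pad = os.urandom(len(message))         # one-time pad: random, as long as the message
ciphertext = xor_stream(message, pad)

print(xor_stream(ciphertext, pad))     # b'ATTACK AT DAWN'
```

The impracticality mentioned above is visible here: the pad must be as long as the message, truly random, and never reused.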

Kerckhoffs’ Principle

Kerckhoffs’ principle is the idea that the strength of encryption should not rely on the secrecy of the cipher, but only on the secrecy of the key.

This is visible in just about all modern-day encryption algorithms. The cipher is generally open to scrutiny and criticism, which in turn makes it more secure.

However, we keep our keys near to our chest, because without those, our data can’t be decrypted.

And there are some good examples of this. Looking back, everyone can probably remember the OpenSSL Heartbleed nightmare. While the vulnerability did allow attackers to slowly leak information, including secret keys, it was caught because the OpenSSL library is open source and easily auditable by anyone in the world.

This is a good example of Kerckhoffs’ principle in action.

Encryption vs Hashing

This is one that I think is pretty simple to understand, but I’ll go over it briefly as it’s an essential component of modern cryptography.

Encryption is based on the original goal of cryptography: keeping data secret. It works in a reversible manner, where both the party encrypting and the party decrypting the data hold a secret key they use to perform those operations.

Hashing is different. Although hashing is a form of cryptography, it’s not based on the original goal of cryptography, but is based on the idea of maintaining and validating the integrity of data.

Sure, hashing is often used as a way of securely storing passwords, but hashed data is non-reversible: the operation works one way and cannot be performed in the other. Typical authentication methods use hashed and salted data to securely store passwords in a database.

What’s salting?

I mentioned salting in reference to securely storing passwords. This is a method where the data is appended with random data before being hashed to prevent simple attacks such as rainbow table attacks on passwords.

A rainbow table attack is one where an attacker has a pre-calculated database of known passwords in their hashed form and attempts to derive the password based on collisions.
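Here’s a minimal sketch of salted hashing in Python. This uses a bare salted SHA-256 purely for illustration; production systems should use a deliberately slow KDF such as PBKDF2, bcrypt, or Argon2:

```python
import hashlib
import os

def hash_password(password, salt=None):
    """Return (salt, digest). A fresh random salt per user defeats rainbow tables."""
    if salt is None:
        salt = os.urandom(16)  # 16 random bytes, unique to this password entry
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def verify(password, salt, digest):
    """Re-hash the attempt with the stored salt and compare digests."""
    return hash_password(password, salt)[1] == digest

salt, digest = hash_password("hunter2")
print(verify("hunter2", salt, digest))   # True
print(verify("hunter3", salt, digest))   # False
```

Because every user gets a different salt, two users with the same password end up with different digests, so a pre-computed table of unsalted hashes is useless.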

What’s a collision?

This is rooted in one of the major concepts behind hashing: hashing should produce a unique output for each unique input. No one word in its hashed form should equal the hashed form of another word.
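A quick sketch with Python’s hashlib makes the property concrete: distinct inputs should never share a digest, while the same input always produces the same one (which, incidentally, is exactly why unsalted hashes are vulnerable to rainbow tables):

```python
import hashlib

# Two inputs differing by a single character produce completely different digests.
a = hashlib.sha256(b"password1").hexdigest()
b = hashlib.sha256(b"password2").hexdigest()
print(a == b)   # False

# Hashing is deterministic: the same input always yields the same digest.
print(hashlib.sha256(b"password1").hexdigest() == a)   # True
```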

There have been successful attacks on older, and even some current, hashing algorithms that managed to produce a collision. The most notable of these, which I suggest you research if you want to learn more, is SHA-1.


Alright, if you read this far, I want to thank you for bearing with me. These are some of the basics behind cryptography that I think are very important to understand if you implement any type of encryption or hashing in your day-to-day job duties.

If you’re really interested in reading more about this, here’s a book I found extremely helpful when I was studying for my ECES:

Serious Cryptography by Jean-Philippe Aumasson


Posted in PKI

PKI – Part 3 : RDP Certs

Pesky Warning Messages

If you’ve ever found yourself remoting into a machine, you’ve likely encountered that pesky and rather annoying message that the server’s certificate can’t be trusted.

I know, you could just select that little checkbox that says “Don’t ask me again for connections to this computer”, but that’s not what we’re going for.

Getting Rid of Self-Signed Certs

We want to use our enterprise PKI to get rid of self-signed SSL certs, and that’s reasonable. Self-signed certificates are impossible to track effectively, and many systems will refuse to connect to a system with an expired certificate, which is a good thing.

While RDP doesn’t necessarily have that issue, because it will generate a new certificate when the original expires, issuing these certificates is something we can also leverage our PKI to do.

Creating our Certificate Template

This one is rather simple and easy to do. Let’s open the Certification Authority MMC snap-in and get to managing our certificate templates.

With the Certificate Templates Console open, right click the “Web Server” template and click “Duplicate”.

Open the “Security Tab” and provide the following permissions to the following Active Directory Objects:

  • Domain Controllers: Read, Enroll
  • Domain Computers : Read, Enroll

We set these permissions so that our Workstations, Servers, and Domain Controllers all have the required permissions to request these certificates and have them automatically issued to them.

Like any template, it’s really up to you to specify the rest, such as cryptography, extensions, name, etc. For my template I will be using the following:


Cryptography:

  • Provider Category: Key Storage Provider
  • Algorithm Name: ECDH_P384
  • Minimum Key Size: 384
  • Request Hash: SHA256

Request Handling:

  • Purpose: Signature and Encryption


General:

  • Name: RDP-Encr
  • Validity Period: 4 years

One important item you may need a little assistance with: on the “Subject Name” tab, make sure you select “Build from this Active Directory information”:

Setting up Our GPO

Unless you have Remote Desktop Gateway Services installed on everything, the only way to ensure that everything uses a specific template is through either the registry or group policy.

Obviously we already know that managing things through the registry isn’t a scalable option. For this we’ll be using Group Policy. To get started, open up the Group Policy Management Console and create a new Group Policy Object.

I’ll be naming mine “RDP – Certificate Templates”; the name actually alludes to the specific setting we’re going to be modifying. I hope you remember the name of your certificate template, because you’re going to need it moving forward.

In the editor for the GPO, Navigate to the following:

Computer Configuration -> Administrative Templates -> Windows Components -> Remote Desktop Services -> Remote Desktop Session Host -> Security

Once here, locate the setting “Server authentication certificate template”, open it, and specify the name of the template we just created.

Once you’ve done this, you can close everything out and link that GPO to your desired location. Since I want everything in my lab domain to have a CA signed RDP certificate, I’m just going to apply it at the root and force a group policy update to demonstrate it real quick.

Before I demonstrate, I’m just going to validate that the server has received the correct GPO settings via:

gpresult /h <path-to-html-report>

As you can see, the setting is applied as expected. The RDP Service will request a new cert next time I connect and if I use the proper DNS name to connect, I should not receive any errors.

Testing It Out

First, we’re going to test this out to make sure that when I connect, I don’t get a certificate not trusted error.

Other Thoughts

I thought about forcing TLS 1.2 on these, but there are a few issues with that:

  • Not supported in Group Policy; it has to be done through the registry.
  • There are known compatibility issues with forcing RDP to use TLS 1.2, mainly because you have to disable all other versions of SSL and TLS.
Posted in PKI

PKI – Part 2 : Smart Cards

I began writing this one shortly after I wrote my last. As I write this, it’s the 11th of January at 1 AM and I can’t find the care to go to sleep. Looking back at past posts, I noticed that I seem to post these in waves when I get the motivation.

Typically I’ll spend anywhere from 2-3 hours planning these out and another 2 hours actually writing them. I’m not exactly the greatest wordsmith, so I constantly find myself with writer’s block trying to figure out what words go next on the page.

I also try to write my posts in such a way that both the lab and the material are useful. Even if one doesn’t completely understand the topic, I aim to provide enough information so they aren’t lost in the sauce. Sometimes I execute this better than others. Sometimes I’ll even find myself going too far in depth and losing myself in the sauce.

Think with your dipstick, Jimmy.

Okay, I get it, I often ramble on for too long to preface these posts, but today we’re going to be talking about something I hold near and dear to my heart, and that just happens to be Smart Card Logon. Those of us with experience in the public sector or military already know what these are and often times how much of a pain they can actually be.

But from an authentication perspective, they’re about as bullet-proof as you can get without exponentially increasing the costs of your infrastructure.

And that’s the focus I take. Industry certifications focus on reducing the risk to our systems to acceptable levels, but I don’t quite think that cuts it. Our focus should really be on reducing the ROI for attackers. Sure, there will always be well-financed groups backed by large organizations and states, but the majority of attacks go after the lowest-hanging fruit, so our job is to avoid being the easiest fruit to pick by decreasing the attacker’s ROI. Anyway, coming back from my tangent. I still don’t get where I was going with that. I’ll probably never know.

Back to Smart Cards.

One of the most prevalent attacks in the industry is credential theft and credential stuffing. This is because of the inherent weakness of username and password based authentication, which comes down to:

  • Poor credential hygiene.

We get it, users don’t want to be bothered with creating complex passwords every three months, but we also don’t want to let them use the same complex password forever. Re: eBay.

We’ve come a long way in that we can supplement this with multi-factor authentication using one-time passwords, but those aren’t bulletproof either. No one in your organization wants to be forced to use a YubiKey, and no one wants to be forced to install an app on their phone. No one.

And that’s understandable. This often leads to users enrolling into multi-factor authentication using their phone number, and don’t even get me started on that.

That’s where smart cards can provide us with something.

Everyday, we badge in and out, we don’t think about it, we just do it.

We already have an ID that provides us access to physical locations, why not make this the same thing that gives us access to electronic systems?

Anyway, here’s how smart cards work from an authentication perspective:

  • Users are provided a private and public key pair for the special purpose of Smart Card Logon.
  • The private key is loaded onto a physical token, most often in the form of an ID card using PIV-compliant chips.
  • These private keys are protected with a PIN that only the end user knows, which is required to perform signing operations with this key.
  • When a user authenticates using a smart card, they are essentially signing an authentication request with their private key; this is validated against their public key, which is published to Active Directory.

What do I need to get this setup?

To get started, you’ll need the following:

  • An Enterprise Certification Authority, see Part 1 of this series.
  • A Smart Card Reader.
  • A Smart Card, or PIV Compliant USB Token.

For my setup, I chose some stuff that’s readily available on Amazon, although if you’re subject to supply chain regulations, you might want to work with a more reputable vendor:

  • PIVKey C910 Smart Card
  • HID Omnikey 3121

Links for these will be provided at the end of the post.

A sad day in history.

Unfortunately, most commercially available smart cards only support RSA keys with a maximum length of 2048 bits. So if you were hoping to use ECC, you’ll probably be dishing out at least triple the money for each card.

Configuring Certificate Templates

The first thing we need to do before we go any further is configure some certificate templates on our CA. To do this, open the Certification Authority MMC snap-in, right-click Certificate Templates, and click “Manage”.

We’re going to create two certificate templates here, and lets discuss those real quick:

  • A Certificate Registration Agent template, used for enrolling on behalf of other users.
  • A Smart Card Logon template, used for the actual smart card logon.

I know what you’re going to say, these templates already exist, why can’t I just publish them and use them?

Well, you don’t want to.

First off, the built-in Smart Card Logon template does not allow enroll-on-behalf-of by default, and second, all of the included templates are configured for older systems such as Windows Server 2003 and Windows XP. We need these updated so they support some of the newer features.

Smart Card Template

To get started, right click the “Smart Card Logon” template and click “Duplicate”

In the first window, labeled “Compatibility”, select the operating system of the oldest CA in your environment, as well as the oldest client operating system in your environment. My lab is all Server 2016 and Windows 10, so I’ll be going with the newest options available to me.

Next up, navigate to “Request Handling”, change the following:

  • Purpose: Signature and Smart Card Logon
  • Check: Allow Private Key to Be Exported

The reason we want to allow the private key to be exported is simply that we’ll be creating these certificates on behalf of other users and will need to export the private key to the smart card for all of this to work properly.

Next up, go to “Cryptography”, change the following items:

  • Provider Category: Key Storage Provider
  • Algorithm Name: RSA
  • Minimum Key Size: 2048
  • Request Hash: SHA256

You don’t have to follow these exactly, but these are just what I’ve validated to work, and what I’m configuring in my environment.

Next, navigate to “Issuance Requirements” and change the following:

  • Check: This number of authorized signatures (1)
  • Policy Type Required in Signature: Application Policy
  • Application Policy: Certificate Request Agent

These settings are what will actually allow us to enroll on behalf of other users. We’ll later issue ourselves a certificate with this application policy so we can sign these requests and issue certificates for other users.

Next, navigate to the “General” tab and give your template a name. I’m naming mine “Smartcard – Managed Enrollment” since I’ll be managing the enrollment of other users with this template. Next, specify the validity period and check “Publish certificate in Active Directory”.

Certificate Request Agent Template

Next up, create a duplicate of the Enrollment Agent template, following the same steps to set up the cryptography and compatibility and to create the template.

I won’t show pictures for this one; just follow the general guidelines from the Smart Card template, but do not do the following (unless you have bigger plans):

  • Don’t configure the Issuance Requirements
  • Don’t configure request handling

A note here, if you’re not going to store this on a smart card and only plan on storing it in your local certificate store, you can use ECC. Also, if you do plan on using a smart card for this, make sure you make the private key exportable.

Next up, it’s time to publish these to AD so we can use them. To do this, go back to your Certification Authority, right-click Certificate Templates, select “New”, and click “Certificate Template to Issue”.

Find the template you created and click “Ok”; do this for any templates you have created. Next, verify that these are published. Depending on your account permissions, you may need to go back and change the permissions on the templates. I’m using my Domain Admin account here, so there’s no requirement for me to change anything. When it comes time to issue and enroll in these, I’ll just elevate my session.

Issuing Certificates

Now comes the time to actually enroll and issue certificates. First off, we need to enroll ourselves in the Certificate Registration Agent certificate we created. To do this, open MMC and add the Certificates snap-in. If you’re like me and doing this from your normal user account, you’ll need to elevate to admin to do this properly.

In the snap-in, right click, select “All Tasks” and click “Request New Certificate”

If you haven’t already, make sure the computer you’re using has the new Root CA certificate installed in its trusted stores; otherwise your computer won’t see any templates you can request.

Since we’re given the permission to enroll in this, all we need to do is click “Enroll”

Next up, let’s enroll my normal user account for Smart Card logon. To do this, right click and select “Advanced Operations”, click “Enroll on Behalf of”

Next, you’ll be asked to select a certificate to sign the request with. Choose the Certificate Registration Certificate we just issued.

Click “Next”, select the certificate we wish to enroll another user in

Click “Next”. Now we need to select the user we want to enroll. Be sure to change the default location from your local enrollment workstation to the AD domain the user resides in; for some reason it always selects the local computer.

Next up, click “Enroll”. Once complete, you will (hopefully) receive a success message; you can either select another user or click close.

Now that we’ve enrolled that user, right-click their certificate and click export. Ensure that you export the private key as well, and remember where you saved it, as we’ll need it later to load onto the card.

Setting Up the Smart Card

Before we go too much farther, first make sure your smart card is in hand and your card reader is plugged in.

Once you’ve done that, download and install the PIVKey admin tool from Taglio: http://pivkey.com/download/pkadmin.zip

Once installed, we’ll also need to modify the following registry entries:

Registry Hive: HKLM\SYSTEM\CurrentControlSet\Control\Cryptography\Providers\Microsoft Smart Card Key Storage Provider
Key Name: AllowPrivateExchangeKeyImport
Value: 1
Key Name: AllowPrivateSignatureKeyImport
Value: 1

The reason for this is that, by default, Windows doesn’t allow us to import private keys to a smart card; modifying these registry values lets us do so without issue.

Next up, we need to delete the existing keys, if any, from the card. If you bought the card on Amazon or in quantities of less than 25, it probably has the default PIVKey certificate on it.

Open Command Prompt as Administrator. When prompted for a PIN, type the default PIN of “000000”. I’m going through this process to show you how to delete certificates off the card, but I’ve already deleted the default one and loaded my own, so I’ll just delete my own.

Type the command:

certutil -scinfo

You will be looking for the Key Container ID, this will look something like this:

Now that you’ve got that, you can issue the command with certutil to delete the key container from the device:

certutil -delkey -csp "Microsoft Base Smart Card Crypto Provider" <cert-id>

When done correctly, you will be asked for the pin. Again, the default is “000000”

Next up, let’s load our new key. You can validate that there are no other certs on the card by using “certutil -scinfo”.

The command to load the certificate is:

certutil -v -csp "Microsoft Base Smart Card Crypto Provider" -p <pw> -importpfx <pathtopfx>

You can validate the card is successfully loaded using the “certutil -scinfo” command.

Now that the key’s loaded, we should change the default PIN. To do this, change your working directory in Command Prompt to:

C:\Program Files (x86)\PIVKey Installer\PIVKey Admin Tools

Next, run the command:

pivkeytool.exe --changepin <new-pin> --userpin 000000

I do want to mention that there is a GUI included for managing the smart card, but I wanted to focus on a mostly agnostic way of doing this. If you’re interested in using the GUI, you can find it in your AppData folder.

One last thing, I know, it continues. Now all we have to do is map the certificate to the appropriate PIV slot on the card:

pivkeytool.exe --mapdefault --userpin <userpin>

Domain Controller Certificates

Now I’m not going to walk through the full steps here, I’m just going to tell you what you need to do to set this up for Smart Card Authentication.

  • On your enterprise CA, publish the Domain Controller Authentication template to AD.
  • On your domain controller(s), enroll in a Domain Controller certificate issued by the new CA.

Logging in with the Smartcard

Now that we’ve got the card loaded and PIN changed, let’s validate that we can login. I’m going to remote into my file server using my user account that I’ve temporarily added to the remote desktop users on that computer.

As you can see, we ran into some issues there with passing the smart card credential through RDP. This is normal when you don’t have the mini-driver installed on the computer you’re remoting into. That can be fixed by either A) installing the mini-driver on that computer, or B) uninstalling the mini-driver on the computer you’re remoting from.

Additional Thoughts

If you really want to use smart cards to their full capability, you want to ensure that smart cards are required for interactive logon. This is a setting on a user’s AD object that actually randomizes the password hash to protect against pass-the-hash attacks, and it also ensures that users can’t log in without their smart card.





Posted in PKI

Public Key Infrastructure – Part 1

It’s been over a year since I’ve last posted and that’s primarily because of life smacking me straight in the face. Between studying for college, certification exams, and the recent news of our Baby Girl on the way, I just haven’t had the time to sit down and write a meaningful and well thought out post.

In the months since my last post, I’ve also added some pretty cool certifications to my resume, most of them I actually did within the last month:

  • ITILv3 Foundations
  • CompTIA A+ (I know, really? Part of my degree program)
  • EC Council Certified Encryption Specialist (ECES)
  • ISC2 Systems Security Certified Practitioner (SSCP)

Next weekend I’ll be taking the ISC2 Certified Cloud Security Professional (CCSP) Exam, so I should be adding that one to my resume as well. I’m excited and enjoying focusing a lot more on security than networking, the enjoyment I got out of networking had begun to grow stale.

I still find a good amount of enjoyment out of it, but not the amount I used to where I would often find myself up for 3 days straight off of the pure chemical rush of learning and labbing things out.

Turns out you’ll wait no more. Today’s discussion will be part 1 of a multi-part series on PKI. This post will primarily focus on building a two-tiered PKI infrastructure using Active Directory Certificate Services.

Here’s what the other parts will cover:

  • Part 1 – Building a Two-Tiered PKI
  • Part 2 – Configuring and Setting Up Smart Card Logon and Smart Card based EFS
  • Part 3 – Configuring Automatic Enrollment and Certificate Selection for RDP
  • Part 4 – Federating login with Office 365 and Providing S/MIME functionality with your on-prem PKI
  • Part 5 – Forget Smart Cards; FIDO, TPM VSC and more

Before I get too far ahead of myself, I want to make it clear that I’m assuming you already have an understanding of asymmetric cryptography and PKI, and because of that, I’ll only describe it with a few bullet points:

  • Asymmetric Cryptography – data is encrypted with a different key than it is decrypted with.
  • PKI enables functional Asymmetric Cryptography by providing a backend to validate, issue, revoke and publish certificates to a directory
  • In an asymmetric cryptography environment, users have both a private key and public key. Data encrypted with the public key of a user can only be decrypted using that same user’s private key.
  • For Example:
    • Jane wants to encrypt data to send to Jon; Jane and Jon have not exchanged any secret keys to encrypt the data with.
    • Jane will instead reference Jon’s public key stored in their local directory. Jane will encrypt the data using Jon’s public key and also sign the data with her private key.
    • When Jon receives the data, he will decrypt it with his private key and reference Jane’s public key to validate the digital signature.

The above is a simple example of how such encryption would work. There are a few key distinctions made in this example that I want to highlight before moving forward.

  • Digital Signatures can only be signed by the owner of the private key.
  • Digital Signatures are proof of origin, and in some instances provide evidence of non-repudiation.
  • Digital Signatures are validated using the user’s public key.
  • Encryption of data is only possible using the recipient’s public key.
  • The decryption of data is only possible using the recipient’s private key.
  • In either instance, there is a key-pair generated that contains both the private and public keys. Generally speaking, only the user will maintain possession of the private key. Public Keys are normally published to a Global Address List (GAL) or in Active Directory
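The key-pair mechanics in the Jane and Jon example can be illustrated with textbook RSA on toy numbers. This is purely a sketch: the primes, exponents, and message below are made up for illustration, real keys are 2048+ bits, and real implementations always add padding:

```python
# Textbook RSA with toy primes, only to show the public/private relationship.
p, q = 61, 53
n = p * q                    # modulus, part of both keys (3233)
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent  -> public key  (e, n)
d = pow(e, -1, phi)          # private exponent -> private key (d, n)

m = 65                       # the "message", encoded as a number smaller than n

# Jane encrypts with Jon's PUBLIC key; only Jon's PRIVATE key reverses it.
c = pow(m, e, n)
assert pow(c, d, n) == m

# Jane signs with her own PRIVATE key; anyone verifies with her PUBLIC key.
sig = pow(m, d, n)           # reusing the same toy key pair for brevity
assert pow(sig, e, n) == m
```

Note how the two operations mirror each other: encryption uses the recipient’s public key, while signing uses the sender’s private key, exactly as in the bullet points above.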

For our setup, we’ll be using a simple two-tiered PKI for an enterprise environment. There won’t be a whole lot going on, but we’ll show you how to get this set up and working in your environment. The two Certification Authorities we’re going to be creating are:

  • Scorchedwire Media Group – Root CA X1
  • Scorchedwire Media Group – Subordinate CA G1

Based on the names, I’m sure you’ve already made the assumption that there’s some hierarchy to this setup. This is normal in any PKI.

  • The Root Certification Authority is responsible for issuing certificates to Subordinate Certification Authorities.
  • The Root CA should remain off at all times, other than to patch, manage certificates (issue/revoke), and periodically publish CRLs.
  • The Subordinate CA will be used to manage all server and user certificates, not including CAs.
  • The Subordinate CA will publish the Root CAs CRLs.

Wait, you mentioned CRL, what’s that?

When a CA revokes a certificate prior to its expiration, a new entry is added to the server’s Certificate Revocation List (CRL). The CRL provides systems with the information necessary to establish whether a certificate is still valid.

Let’s Get Started

To start, I’ve already deployed two Windows Server 2016 Standard VMs. When deploying these VMs, here’s the setup you should use:

  • The Root CA should be offline, and not domain joined.
  • The Subordinate CA should be online, and domain joined.

Installing ADCS

The first task we’re faced with is actually installing ADCS on both the Root and Sub CA. This is relatively easy and can be done through the server manager, under Add Roles and Features.

Additionally, if you’re not one for GUIs, or your Root CA is running Server Core, you can use PowerShell.
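For example, here’s a rough PowerShell equivalent of the Root CA install and configuration covered below — the common name and validity values mirror the choices made later in this walkthrough, so treat this as a sketch rather than a drop-in script:

```powershell
# Install the AD CS role plus management tools (Server Manager equivalent)
Install-WindowsFeature ADCS-Cert-Authority -IncludeManagementTools

# Configure a standalone root CA -- values mirror the choices made
# later in this walkthrough (ECDSA P-384, SHA-384, 15-year validity)
Install-AdcsCertificationAuthority `
    -CAType StandaloneRootCA `
    -CACommonName "Scorchedwire Media Group - Root CA X1" `
    -CryptoProviderName "ECDSA_P384#Microsoft Software Key Storage Provider" `
    -KeyLength 384 `
    -HashAlgorithmName SHA384 `
    -ValidityPeriod Years `
    -ValidityPeriodUnits 15
```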


Anyway, time to get started on the Root CA:

Open the Server Manager and navigate to Add Roles and Features. Select “Role-based or feature-based installation” and click next.

You’ll find a screen for “Active Directory Certificate Services” , select this and accept the popup asking you to install the management tools as well. You’ll want these for configuring AIA and CRL publication locations.

Go through all the steps, just clicking next. Ensure that when you get to “ADCS -> Role Services” that the only selection is “Certification Authority”, click next and click “Install”

Once the role is installed, there are a couple of things that need to be completed.

  • Configure the Root CA, and generate a Certificate
  • Configure the Root CA’s maximum validity period.
  • Configure Root CA AIA and CRL Publishing locations
  • Last, issue a certificate for the Sub CA once we’re complete with the setup there.

You’ll probably notice in the Server Dashboard that there is that notorious yellow caution icon, let’s click that and get to configuring our certification authority.

Some things I think you can do without me showing you pictures, hopefully:

  • Specify credentials, click next.
  • Select Role of Certification Authority, click next.
  • Select “Standalone CA”, click next.

Next up, we need to generate our private key. To do this, select “Create a new private key” and click “Next>”

So what about these cryptography options?

Honestly, that’s all on you and your business requirements. I personally prefer Elliptic Curve to RSA, so I’ll be creating my certificates with ECC where possible; when we get to smart cards, we’ll have to deal with RSA. I’ll put a section in at the end for my general preferences and recommendations and why I make certain choices.

For my options, I’m selecting

  • Cryptographic Provider: ECDSA_P384 Microsoft Software Key Storage Provider
  • Key Length: 384
  • Hash Algorithm: SHA384

In the Next Window, I’m only going to specify my Root CA name.

Lastly, the validity period. This is really contingent on your needs, but Root CAs can be valid as long as you feel confident in the encryption and protection mechanisms behind them.

Obviously public and government CAs have guidelines and regulations they have to follow, but for enterprise CAs, that’s really a business decision based on the risk. I recommend no longer than 15 years.

Next up we’ll specify where we want the certificate database and logs to be located. I’m going to move them from the default location, “C:\Windows\System32\CertLog”, to a separate location.

Next up, just verify the options you selected and click “Configure”.

Once you get it configured, you’ll get a message stating success. Let’s move on to the next part.

Configuring Maximum Issuance Lifetime

For those of you familiar with certificate requests, you’re familiar with the fact that you usually specify the desired certificate lifetime in your request. While this might seem like it’s what actually dictates how long your certificate is valid for, it’s actually something else.

Open the registry editor and navigate to the following registry key (the CA name will match your Root CA’s common name):

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CertSvc\Configuration\<CA Name>
There are two registry values you may need to edit; both control the validity period of issued certificates.

The first is the only one we’ll be editing, since the other already defaults to years:

Value Name: ValidityPeriodUnits
Type: REG_DWORD
Default Value: 1
New Value: 10

The other one you might want to change, depending on your configuration, is:

Value Name: ValidityPeriod
Type: REG_SZ (String)
Default Value: Years
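If you’d rather not edit the registry by hand, the same values can be set with certutil, which ships with the CA role — a sketch, using the value names above:

```powershell
# Equivalent to the registry edits above, using certutil
certutil -setreg ca\ValidityPeriod "Years"
certutil -setreg ca\ValidityPeriodUnits 10

# The change takes effect after the CA service restarts
Restart-Service CertSvc
```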

Configuring AIA and CRL Publishing Locations

Open the Certification Authority MMC Snap-in and point it to your CA if it isn’t already.

Right click on your CA name and click “Properties”, open the “Extensions” tab.

I’m going to need to delete all the default entries for CRL Distribution Points and AIA:

Next up I’m going to want to specify the locations that these are stored locally, for this I’m publishing them to the “C:\Certificates\DB” folder.

CDP: C:\Certificates\DB\<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl
AIA: C:\Certificates\DB\<ServerDNSName>_<CaName><CertificateName>.crt

Next up, I’m also going to specify where they will be published on my Subordinate CA. I will have to manually import these, but they’ll be published using an IIS site on my Subordinate CA.

The location for this will be:

CDP: http://ocsp.scorchedwiremedia.net/certs/<CaName><CRLNameSuffix><DeltaCRLAllowed>.crl
AIA: http://ocsp.scorchedwiremedia.net/certs/<ServerDNSName>_<CaName><CertificateName>.crt

You’ll notice that I selected a few check boxes, make sure you check all of these. Some of these are just for the sake of beauty when pulling up “pkiview.msc”, the others are actually important for telling clients where to look for new CRLs.

Next up, you’ll click “Apply”, and the service will restart.

One more thing we’ll want to do before we continue is set the interval at which CRLs will be published. Since the CA is offline, I’m going to set the CRL publishing interval to every 52 weeks; I’ll have to power it on next January to publish a new CRL.

To do this, drill down to “Revoked Certificates”, right-click and select “Properties”, and set the interval to 52 weeks. You can also set this higher if you really want to; there’s no need to publish delta CRLs since we won’t be online often enough.

Now, right-click “Revoked Certificates” and click “Publish CRL”. Grab the CRL from the file location on the local disk and export it to a place you’ll remember for later; we’re going to need it when setting up the Subordinate CA.

Setting up the Sub CA

Time to move on to our Sub CA Server. We’ll be going through and installing AD CS on that too, but I won’t go through the pictures on that part until we get to the configuration. The following roles should be installed on your Sub CA, which should be domain joined:

  • Certification Authority
  • Online Responder (we’ll use this in another part of this series, so just install it for now)
  • IIS – for publishing CRLs, for now.

Optionally, you can install the Certificate Enrollment Web Service, which provides a web interface for users to enroll in and request certificates. Since I use the Certification Authority MMC Snapin, I don’t see a need for this as most times we will be enrolling on behalf of other users or users will be automatically enrolled through AD.

Now that it’s installed, the first thing I’m going to do is create an IIS site for the CRLs and certificates. First I’ll create the directory “C:\certs” and copy the CRL from my Root CA there, then I’ll create a virtual directory for it in IIS Manager. Permissions on the folder should be:

  • <SERVER-NAME>\IIS_IUSRS should have read-only access.
  • NETWORK SERVICE should have read only access.
  • Authenticated Users should have read only access.

In IIS Manager, I’m going to right click the Default Web Site and select “Add Virtual Directory”, the virtual directory name will be “certs”, pointed to the physical directory “C:\certs”

I’ve also enabled directory browsing so I can navigate to the website and validate the certs and CRLs are there:
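The same IIS setup can be sketched in PowerShell (this assumes the WebAdministration module that ships with IIS):

```powershell
Import-Module WebAdministration

# Create the physical directory that will hold CRLs and certs
New-Item -Path "C:\certs" -ItemType Directory -Force

# Create the "certs" virtual directory under the Default Web Site
New-WebVirtualDirectory -Site "Default Web Site" -Name "certs" -PhysicalPath "C:\certs"

# Enable directory browsing so the contents can be checked in a browser
Set-WebConfigurationProperty -Filter /system.webServer/directoryBrowse `
    -PSPath "IIS:\Sites\Default Web Site\certs" -Name enabled -Value $true
```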

Now, in my browser, I’ll validate that the Root CA’s CRL I placed there can be reached.

Now that we’ve setup all the pre-requisites for our Subordinate CA to validate these CRLs, it’s time to configure our Sub CA:

Go back to the Server Manager and get rid of that pesky yellow caution icon; it’s time to configure AD CS.

Again, some things you can do without me:

  • Enter Credentials, click next.
  • Select “Certification Authority”, click next.
  • Select “Enterprise CA”, click next.
  • Select “Subordinate CA”, click next.
  • Select “Create a new private key”, click next.

Select the cryptography for this one, I’m going to use the same I used on the Root CA:

  • Cryptographic Provider: ECDSA_P384 Microsoft Software Key Storage Provider
  • Key Length: 384
  • Hash Algorithm: SHA384

The Common name for this CA will be “Scorched Wire Media Group Subordinate CA G1”

Click next and save the Certificate Signing Request (CSR) to the local disk. Copy it to a location you’ll remember as well; we’ll use it to request the certificate from the Root CA.

Click next, select where you’re going to save the certificate database to, and click configure.

Issuing our Sub CAs Certificate

Now that we’ve generated the request, let’s copy it to our Root CA server. On the Root CA server, open up the Certification Authority MMC Snap-in.

Right click on your CA, select “All Tasks”, and “Submit New Request”

Select the CSR from your Sub CA, refresh the “Pending Requests” tab, and right click your request and select “All Tasks”, click “Issue”

Next, go to the “Issued Certificates” tab to locate your Sub CA’s certificate. Click “Open”, navigate to the certificate details and select “Copy to File”, use the certificate export wizard to export the public key and copy this to your Sub CA.

Installing the Public Key on Our Sub CA

Next go back to our Sub CA and open up the Certification Authority MMC Snap-in. You’ll notice that the services aren’t started, and that’s because we haven’t installed our certificate yet. Right click your CA Name and select “All Tasks” and click “Install CA Certificate”, in the popup, navigate to and select the certificate you exported in our previous step.

If you exported the certificate in a format that includes the Root Certificate, you should not have any issues with the CA Services starting on your Sub CA. If you get an issue saying that the certificate signature cannot be validated, this is because the Root Certificate needs to be installed in the Trusted Root Certification Authorities certificate store.

Next up, make sure you export the Root Certification Authority’s certificate in the format specified in the AIA locations. If all is done correctly, you should be able to open “pkiview.msc” and see no errors.

You made it this far:

Since you did, here are a few things that should be done moving forward:

  • Clean up the CRL and AIA distribution points on the Sub CA; there are some defaults that need to be removed to avoid issues later.
  • Since the Sub CA is an enterprise CA, all CRLs are already published to Active Directory, so no need to publish them to the IIS virtual directory unless you want to.
  • Think deeply about the cryptography used. Using next-generation encryption where possible will save us all when computing power eventually gets strong enough to break RSA 2048-bit keys.
  • If your Root CA is in a production environment and used for actual enterprise certificate services, consider full disk encryption to protect against simple methods used to steal the CAs private key.
  • Consider migrating your IIS site to HTTPS once you have your CA set up; if you’re like me, you encrypt everything (even DNS), because you’re paranoid.
  • You’ll notice I’m using the DNS name ocsp.scorchedwiremedia.net, that’s because one of our future updates to this series will cover setting up an online responder for Smart Cards and S/MIME.
  • When we modified the validity period, we didn’t actually specify the validity period of individual certificates. The effective validity period is the shorter of the CA’s configured validity period and the period requested in the signing request. Therefore, if a CSR requested a cert that would last 5 years but the CA was only configured to issue certs for 2 years, the cert’s validity period would be 2 years. On the flip side, if a CSR requested 3 years but the CA was configured for 5, the cert issued would be valid for 3 years.
  • Don’t forget to update the configured validity period on your Sub CA as well; otherwise the validity periods on your certificate templates will be capped at the registry default when certificates are issued from them.
  • Don’t forget to harden your CAs, you don’t want them to be breached because you forgot to do your due-diligence.
  • Don’t forget to publish your CA certs to domain-joined computers. The easiest way is a Group Policy that publishes the Root CA to the Trusted Root Certification Authorities certificate store and the Sub CA to the Intermediate Certification Authorities certificate store.
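Alternatively, certutil can publish the certificates straight into Active Directory instead of building a GPO by hand — a sketch, where the file names are placeholders for your exported certs:

```powershell
# Publish the Root CA cert to the forest-wide Trusted Root store
certutil -dspublish -f RootCA.cer RootCA

# Publish the Sub CA cert as an intermediate CA
certutil -dspublish -f SubCA.cer SubCA
```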

Here’s a good table on the effective strength of ECC vs RSA (these are the commonly cited NIST equivalences):

ECC Key Size | RSA Key Size
160-bit | 1024-bit
224-bit | 2048-bit
256-bit | 3072-bit
384-bit | 7680-bit
521-bit | 15360-bit

Credit: https://www.globalsign.com/en/blog/elliptic-curve-cryptography/

Posted in PKI

Beer me that Route!

I’ve actually been meaning to write this post for two months now, but I keep procrastinating and binge-watching Netflix instead.
I found myself in a precarious situation a while ago, something that we’ve since resolved through changes in our network architecture, but prior to the changes, I learned a thing or two thanks to it.
The issue: the router that terminated our VPN connections sat behind a separate border router. Usually this wouldn’t be a problem, as we would simply enable EIGRP on both to exchange routes. However, we were using VRF-lite on the VPN-terminating router to separate the underlay and overlay networks, so there was no straightforward way for the border router to exchange routes with the VRF.
Now, a preface to this is that generally speaking, all VPN connections would usually have been over a terrestrial transmission piece (we have our DMVPN clouds separated for full-mesh support on the TDMA), however, we were testing out new campus fiber connections which terminated on our border router. The idea was, due to concerns about wire tapping, that we needed to terminate these connections to our IPSEC DMVPN cloud.
Me being me, I didn’t want to create a separate cloud on the border router to terminate these connections, so I ventured out on a journey to learn how to get them to communicate.
It turns out, it’s actually pretty simple.


  1. Import campus fiber /24 into the VRF on the internal router.
  2. Export DMVPN loopback /32 to the global RIB on the internal router, and advertise this network to the border router.
  3. Verify Phase-1 DMVPN connectivity on campus fiber sourced spokes.

Here is a representation of what we’re working with so you can better understand. Note that “Loopback10” is in the “DMVPN-TERR” VRF.


Step 1:

First, we’re going to verify that, with everything set up in our lab, the following holds true:

  1. From the inner router, a “show ip route vrf DMVPN-TERR” will return 0 entries in the routing table.
  2. From both the inner and outer routers’ global RIBs, a “show ip route” will return 0 entries.
  3. The spoke should not be able to ping the DMVPN loopback address.

Additionally, at first the /27 for the DMVPN tunnel will be routed toward the outer router, which is undesirable.
We need to put a route filter in place, going back to some of the older skills we’ve learned, to prevent recursive routing, which would cause the tunnel interface to repeatedly flap.
All of the above tests behaved just as expected!

Changes Needed:

  1. Create a VRF definition for what we’re going to use as our shared routing table.
  2. Place all interfaces which currently reside in the Global RIB on the inner router into our shared VRF.
  3. Create route distinguishers for DMVPN-TERR and our shared VRF.
  4. Configure EIGRP for our VRF.

To create a VRF
Router(config)#vrf definition <vrf-name>
Router(config-vrf)#address-family ipv4
To place an interface in the VRF
Router(config-if)# vrf forwarding <vrf-name>
Once you place the interface in the VRF, re-assign its IP address. You will need to do this for every interface that currently resides in the default RIB.
After you do that, you must create the EIGRP instance for the VRF, to do this:

Router(config)#router eigrp <asn>

Router(config-router)#address-family ipv4 vrf <vrf-name> autonomous-system <asn>

Router(config-router-af)#network <network> <wildcard>

Since I’ve started working with EIGRP on multiple VRFs, as well as with IPv6, I’ve taken to EIGRP named mode. 
The benefit of named mode is the modular configuration of all EIGRP settings from within the EIGRP router configuration mode. No longer do I need to enable authentication at a per-interface level.
To read more about EIGRP named mode:
You can also upgrade your current configuration to named mode with the “eigrp upgrade-cli” command.
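A minimal named-mode sketch of the classic configuration above — the process name, network statement, and key are placeholders, while AS 100 matches the EIGRP instance used later in this post:

```
router eigrp CORE
 ! one address family per VRF/AS, all under a single process
 address-family ipv4 unicast vrf SHARED-GLOBAL autonomous-system 100
  network 10.0.0.0 0.255.255.255
  ! authentication now lives here instead of on each interface
  af-interface default
   authentication mode hmac-sha-256 MySharedSecret
  exit-af-interface
 exit-address-family
```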
Next, to assign route distinguishers to a VRF:

Router(config-vrf)# rd [rd:rd]

  1. Our RD for our DMVPN-TERR will be 9001:900
  2. Our RD for our SHARED-GLOBAL will be 9001:999

Now, to see what SHARED-GLOBAL looks like, do a “show ip route vrf SHARED-GLOBAL”

Leaking Routes:

Now that we have setup our shared tables, we’re going to use route-maps to select which networks we are going to leak into the shared routing and into the DMVPN-TERR routing table.

  1. We need to leak the campus fiber /24 into DMVPN-TERR
  2. We need to leak the DMVPN loopback /32 into the SHARED-GLOBAL

To do this, use ip prefix-lists:

Router(config)#ip prefix-list <prefix-list> seq <#> permit <network/mask>

Once you have done that, create a route-map that matches that prefix-list.
Here is what we are going to do:

Router(config)# ip prefix-list PL-FIBER-TO-DMVPN seq 5 permit

Router(config)# route-map RM-SHARED-TO-DMVPN

Router(config-route-map)# match ip address prefix-list PL-FIBER-TO-DMVPN

Router(config)# ip prefix-list PL-DMVPN-TO-FIBER seq 5 permit

Router(config)# route-map RM-DMVPN-TO-SHARED

Router(config-route-map)#match ip address prefix-list PL-DMVPN-TO-FIBER

After we do that, we need to configure the import and export maps on our VRFs, to do this:

Router(config)# vrf definition <vrf-name>

Router(config-vrf)# address-family ipv4

Router(config-vrf-af)# import map <map-name>

Additionally, you have to specify which topology you want to target for import and export.
For our use case, the export will be the self RD, and the import will be the opposite RD.

Router(config-vrf)#route-target import [rd]

Router(config-vrf)#route-target export [rd]
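Putting the RDs and route-maps together, the two VRF definitions would look roughly like this — a sketch using the RDs and map names from above, with interface details omitted:

```
vrf definition DMVPN-TERR
 rd 9001:900
 address-family ipv4
  route-target export 9001:900
  route-target import 9001:999
  import map RM-SHARED-TO-DMVPN
 exit-address-family

vrf definition SHARED-GLOBAL
 rd 9001:999
 address-family ipv4
  route-target export 9001:999
  route-target import 9001:900
  import map RM-DMVPN-TO-SHARED
 exit-address-family
```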

Next, we’re going to enable a BGP instance whose sole purpose is to import/export these routes between the routing tables. I’ve come to call this my “ghost” BGP instance, although I’m not sure if that’s proper whatsoever.
To enable the instance to work with multiple VRFs, use the address family command.
Because the routes we are wishing to share are connected, or learned through a routing protocol, we must utilize the “redistribute” command.

Router(config)#router bgp <asn>

Router(config-router)#address-family ipv4 vrf <vrf-name>

Router(config-router-af)#redistribute <method>

  1. On the DMVPN-TERR, we will be redistributing connected.
  2. On the SHARED-GLOBAL, we will be redistributing EIGRP instance 100
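A sketch of the “ghost” BGP instance described above — the ASN 65000 is a placeholder, and the redistribution matches the two items just listed:

```
! This BGP instance only leaks routes between VRF tables;
! it peers with nothing
router bgp 65000
 address-family ipv4 vrf DMVPN-TERR
  redistribute connected
 exit-address-family
 address-family ipv4 vrf SHARED-GLOBAL
  redistribute eigrp 100
 exit-address-family
```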

Next, verify this by doing a “show ip route vrf SHARED-GLOBAL” and “show ip route vrf DMVPN-TERR”

Next up, redistribute BGP into EIGRP and you should be able to ping from the spoke to the DMVPN loopback interface.

Now, just to verify our adjacency so I can say my job here is done!


Posted in Routing

Houston, We Have a Problem – Spanning Tree Protocol Review

In my studying for the CCNA Tech Assessment that one must pass to continue with the VTIP/CTIP program, I’ve done a lot of review on basic CCNA concepts. My primary weakness, not so much with the configuration, but with some of the detailed information, is with Spanning Tree Protocol.
For those who don’t know, STP is a layer 2 loop prevention mechanism. STP is designed to mitigate broadcast storms and the 2nd and 3rd wave effects that occur as a result.
There are many versions of STP, but the primary ones that I’m going to cover are the original 802.1D standard and the 802.1w standard.
802.1D was released in 1999 by the IEEE as the original spanning tree protocol. In 2001, as an amendment to the original standard, 802.1w was defined. In 2004, the IEEE combined the two standards into one publication known as 802.1D-2004, which can be found here:  https://ieeexplore.ieee.org/document/1309630/

STP Operation:

802.1d and 802.1w share primary mechanisms for loop prevention. STP and RSTP do the following:

  1. Calculate the root switch (root bridge)
  2. For non-root bridges, calculate the cost to the root bridge.
  3. Determine designated ports for each network segment (collision domain).
  4. Disable, or block ports which would create layer 2 loops.

Root Bridge Selection

The root bridge is selected based on two things. The first is the STP priority, which by default is 32768 and can be adjusted in increments of 4096.
The switch with the lowest priority will become the root bridge. 
However, if there is a tie, the switch with the lowest MAC address, will become the root bridge.
802.1d (STP) and 802.1w (RSTP) share this selection process for the root bridge.

Root Cost

The root cost is calculated for each link with a path to the root bridge. The root cost is used to decide which port will get priority for forwarding over another, based on who has the best cost to the root.
The cost is calculated as the cumulative cost of all links in the path to the root.
The following table defines the cost based on link speed:
[table id=7 /]
If there is a tie between path costs, the lowest sender bridge ID is used as the tiebreaker, followed by the lowest sender port ID.

But How Does it Determine This?

Simple, Bridge Protocol Data Units, or BPDUs. During the initial convergence of STP, switches will send out STP BPDUs which contain some necessary information.
Inside these BPDUs, you can find the priority, perhaps an extended system id (for RPVST+ or PVST), as well as information about the current root bridge, the sender, cost to the root bridge and any timers.
Here is a breakdown of the STP Bridge ID:

Here is a breakdown of STP Hello Messages:

Using these messages, this is how STP is able to build a topology and determine which ports should be forwarding and which ports should be blocking/discarding.

STP Port States:

STP has a total of 4 port states, two of which are used only during the STP convergence time frame. The below table lists port states and when they are used.
[table id=5 /]
In the event of a network change, STP will transition a port from blocking to listening and learning, after which it will transition the port to forwarding.
It is important to note that this whole process takes about 50 seconds in the original 802.1D standard; 802.1w (RSTP) makes significant improvements here. RSTP also collapses the blocking and listening states into a single discarding state, so ports transition from discarding to learning to forwarding.

Port Roles

The following roles are used by 802.1w (RSTP):
[table id=8 /]

RSTP Improvements:

Portfast – PortFast is a Cisco mechanism for immediately placing a port into the forwarding state. It should only be used on edge ports: end-user device ports which will never participate in the switching of frames at large.
BPDUGuard – BPDU Guard goes hand-in-hand with PortFast. PortFast is great, but what if a malicious actor plugs in a switch, or a network administrator inadvertently plugs a switch into the wrong port? BPDU Guard will error-disable the port upon receiving a BPDU, on the assumption that we should never receive BPDUs on edge ports.
I think that covers, at least for the most part, the important concepts and operation of STP. If I missed something, don’t be afraid to call me out!

STP Configuration

To change the mode of spanning tree:
spanning-tree mode [mode]
To change the priority of the current Switch, you have two options:
spanning-tree vlan [vlan-id/range] root [primary | secondary]
spanning-tree vlan [vlan-id/range] priority [priority]

The first command above looks at the current root bridge and sets this switch’s priority to 24576, or to 4096 below the root if the root’s priority is already lower than that, ensuring it becomes the root. “secondary” instead sets the priority to 28672, making the switch a backup root.
To enable portfast by default on all edge ports (something I always do, by the way):
spanning-tree portfast default
To enable BPDUGuard, which is something you should definitely do if using portfast:
spanning-tree portfast bpduguard default
If for some reason you need to modify the cost of an interface, in interface configuration mode:
spanning-tree cost [value]
Some verification commands:
show spanning-tree vlan [vlan-id]
show spanning-tree vlan [vlan-id] bridge
show spanning-tree summary
show spanning-tree interface [interface-name]
show spanning-tree root
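Pulling those commands together, a minimal sketch (VLAN 10 is just an example):

```
! Rapid spanning tree, root for VLAN 10, and edge-port protections
spanning-tree mode rapid-pvst
spanning-tree vlan 10 root primary
spanning-tree portfast default
spanning-tree portfast bpduguard default
```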

If these don’t suffice, you can find the configuration guide here:

Posted in Switching

Back to Basics: Revisiting Group Policy

As we all know, I don’t usually dive into servers or applications; my primary specialty and passion is the infrastructure that runs it all. Despite this, my current position requires me to be proficient in all of it so that I can provide advanced technical support and instruction to our less experienced technicians.
After an in depth instruction with one of my junior Marines last night, which rolled over into this morning, I decided it’s a good time that I revisit some of the key elements of group policy.

What is Group Policy?

Group Policy is simple. It provides administrators the ability to centrally administer all workstations and servers connected to a domain. The function of Group Policy Objects (GPOs) can vary depending upon circumstances, ranging from enhanced security, improved functionality, compliance and overall consistency across the domain.
GPOs actually dive a little deeper, as we configure them, we see them in a simple, easy to read and navigate interface. However, under the hood GPOs are actually used to modify registry settings within the Windows operating system. This is great because the registry is difficult to navigate and dangerous if navigated improperly. It also would be virtually impossible to provide consistency across the domain if administrators had to modify the registry every time they needed to push a change to users or computers.
Within a GPO, there are two scopes which we can modify settings for.

  1. Computer Settings – generally applies changes to the HKEY_LOCAL_MACHINE registry hive.
  2. User Settings – generally applies changes to the HKEY_CURRENT_USER registry hive.

Registry settings can be viewed utilizing the registry editor (regedit.exe)

What can we do with GPOs?
The power of GPOs is limitless.
This is because GPOs can not only modify settings for Windows components, but also manage installed software: GPOs can install software remotely, run scripts, or modify the user’s interface. GPOs can do whatever you set your mind to.

GPO Location and Components

For those not familiar with where GPOs are located, they are located within the SYSVOL. The SYSVOL (System Volume) is used in active directory as a method to share settings between domain controllers, as well as to workstations and users. The SYSVOL is replicated to all other domain controllers within the domain. It utilizes the Distributed File System (DFS) to provide these to servers, workstations and users.
Within the SYSVOL is the “Policies” folder, the policies folder will contain folders named with the Globally Unique Identifier (GUID) of each group policy object in active directory.
Also within the “Policies” folder is the option to create a folder called “PolicyDefinitions”. This folder, when created, provides the administrative templates which can be used by any computer in the domain. This becomes known as the Central Store, which computers reference when they do not locally have an administrative template that the central store contains.
Without a central store it is impossible to keep your domain entirely consistent, because not every computer will have identical administrative templates installed. Each computer maintains its own local store in the “%windir%\PolicyDefinitions” folder. If you fail to create the central store in the SYSVOL, the Group Policy Management Console (GPMC) will load administrative templates from its local store, and these will be inaccessible to other computers in the domain.
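A quick way to create the central store is to copy a reference machine’s local PolicyDefinitions folder into SYSVOL — a sketch, using the lab domain that appears later in this post:

```powershell
# Create the central store by copying the local PolicyDefinitions folder
# (including the ADML language subfolders) into SYSVOL
$domain = "lab.teachmehowtoroute.com"
Copy-Item -Path "$env:windir\PolicyDefinitions" `
    -Destination "\\$domain\SYSVOL\$domain\Policies\" -Recurse
```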

What are Administrative Templates?

Administrative Templates are files released by software manufacturers which provide Windows with the information it needs to let us modify the registry settings for the respective software. Administrative Templates are released for most software, although there are some exceptions, such as Firefox. The default administrative templates on each computer vary with the version of Windows installed, but it’s important to know that each computer will only have the bare minimum of templates that Microsoft provided during its respective release.
There are two file types which contain settings: ADMX and ADM files. ADMX files are the newer version of administrative templates, whereas ADM are the legacy templates. There is also a third type, however, this is for language support (ADML).
It’s important to know that although ADMX and ADM files are great, they are limited in that they cannot make changes outside of the HKEY_LOCAL_MACHINE or HKEY_CURRENT_USER registry hive.
Microsoft provides all their updated ADMX files in their download center, and can easily be googled. Something important to know with the advent of Windows 10 is that for each new major release, the ADMX files will be updated. This means that Build 1703 ADMX files are slightly different than Build 1709 and should be monitored and updated as your environment migrates to newer releases.

To install an Administrative Template, simply copy the ADMX files into the central store; if the ADMX has associated ADML files, copy them to the correct language folder.

Windows Management Instrumentation Filters

About as important as consistent configuration across your domain is ensuring that policies get applied to their intended systems. This is where Windows Management Instrumentation (WMI) filters come into play. WMI filters are essentially queries which evaluate system information, such as installed software, operating system version, or any roles the system holds.
To create a WMI Filter, open the GPMC and navigate to the WMI filters container within GPMC. Right click and select “New…”

In the popup window, name the WMI filter; we are going to create one for Windows 10. After you have named it, click the “Add” button. Leave the namespace set to “root\CIMv2” and enter the following query. Once done, click “OK” and “Save”
select * from Win32_OperatingSystem where Version like "10.%"

The following is a table of all of the WMI Filters for past and present Windows versions (not legacy of course).
[table id=4 /]
WMI filters can be applied to a GPO by selecting the GPO, clicking on the “Scope” tab, and navigating to the bottom of the pane. You will see a section called “WMI Filtering”; in the drop-down, you can select which WMI filter you want to apply. A popup will appear asking you to confirm, click “Yes”.

Group Policy Inheritance

Now that we have covered the components which help us build and make our GPOs robust, let’s talk for a second about group policy inheritance. There’s a pretty simple acronym for remembering GPO inheritance: LSDOU.
GPOs are processed in the following order, top being first, bottom being last. GPOs linked lower in the processing order will take precedence.

  1. Local – the group policy stored on the workstation or server
  2. Site – any group policy which is applied to the Active Directory site
  3. Domain – any group policy linked to the root of the domain
  4. Organizational Unit (OU) – any group policy explicitly linked to an OU

It does, however, get a little more confusing than that. Say you have a GPO linked to a parent OU, but a separate GPO linked to a child OU.

  1. GPO Applied to “OU=Test,DC=lab,DC=teachmehowtoroute,DC=com”
  2. GPO Applied to “OU=Child,OU=Test,DC=lab,DC=teachmehowtoroute,DC=com”

Active Directory (AD) will process GPOs applied at a higher level first, meaning that the GPO in the child OU is processed last. Because it is processed last, it receives more precedence than the GPO applied to the parent OU: any settings it contains which conflict with the GPO at the higher level will overwrite them.
Let’s say you have a lot of GPOs, and because of this, there are multiple linked to the same OU. This is where link order comes in: navigate to the OU and click on the “Linked Group Policy Objects” tab. Within this tab is a pane which depicts the link order. Links with a lower order number (higher on the list) have higher precedence; they are processed last, so their settings win over conflicting settings from GPOs further down the list.
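To make the precedence rules concrete, here is a minimal sketch (not from the original post) that simulates LSDOU processing: each GPO is applied in order, and because later GPOs overwrite conflicting settings, the last-applied, most specific link wins. The setting names and values are hypothetical, purely for illustration.

```python
# Sketch: simulating LSDOU precedence. GPOs are applied in processing
# order; later GPOs overwrite conflicting settings, so the last-applied
# (most specific) link effectively wins.

def apply_gpos(gpos_in_processing_order):
    """Merge GPO settings dicts; later GPOs overwrite earlier ones."""
    effective = {}
    for gpo in gpos_in_processing_order:
        effective.update(gpo)
    return effective

# Hypothetical settings for illustration only.
local  = {"ScreenSaverTimeout": 900}
site   = {"WallpaperPath": r"\\lab\share\site.bmp"}
domain = {"ScreenSaverTimeout": 600, "PasswordLength": 12}
ou     = {"ScreenSaverTimeout": 300}  # GPO linked to the OU, processed last

result = apply_gpos([local, site, domain, ou])
print(result["ScreenSaverTimeout"])  # the OU-linked GPO wins -> 300
```

Note that non-conflicting settings from every level still merge into the final result; only conflicts are decided by processing order.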
Along with the aforementioned, there is the option to “Block Inheritance” on an OU. The purpose of this is to prevent unwanted or erroneous GPOs linked to a parent container from being applied to objects within the OU. To do this, simply right click on the OU and click “Block Inheritance”.

Note: OUs with blocked inheritance will show up with a blue circle with an exclamation point in them.
Additionally, you can also “Enforce” GPOs; enforcing a GPO will push it even to objects in containers with blocked inheritance. To do this, right click the GPO link in the GPMC and click “Enforced”.

Note: GPOs which are enforced will be depicted with a lock in the corner of them.
I think this concludes this discussion. The following are the basic tools you can use to help you troubleshoot GPOs. For best results, especially when troubleshooting Computer rather than User Configuration settings, run the command prompt as administrator.
To force Group Policy Update to Local Machine:
 gpupdate /force
To view applied GPOs, name only:
 gpresult /R
To generate an HTML Report
 gpresult /H GPOResult.html


Essential Constructs: Path Manipulation

One of the things that I think separates a good network engineer from a great network engineer is their ability to control the paths which data travels.
As we all know, each routing protocol has built-in best-path algorithms, but sometimes these just aren’t enough. There may be many reasons for an administrator to need to override these routing protocols; in order to do so, you must understand the best-path calculation for each routing protocol.

Understanding Best-Path Selection

Key to understanding which route will be installed in the routing table is understanding how each routing protocol selects its candidate to be presented to the routing table.
Once routes are presented from the routing processes to the routing table, the router will select the route with the lowest administrative distance to be installed.
[table id=2 /]
Once a route is installed into the routing table, the router will evaluate whether there are any routing policies in place. If a policy-based route (PBR) is configured, the router will forward traffic based on that policy. If there is no PBR, the router will forward traffic using the route with the most specific (longest) matching prefix.
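The longest-prefix-match lookup described above can be sketched in a few lines of Python using the standard library’s `ipaddress` module (the routing table entries and next-hop names here are made up for illustration):

```python
# Sketch: longest-prefix (most specific) match over an already-built
# routing table, as a router performs it when no PBR applies.
import ipaddress

def lookup(destination, routing_table):
    dst = ipaddress.ip_address(destination)
    candidates = [(net, hop) for net, hop in routing_table
                  if dst in ipaddress.ip_network(net)]
    # The most specific prefix (longest prefix length) wins.
    return max(candidates, key=lambda r: ipaddress.ip_network(r[0]).prefixlen)[1]

# Hypothetical table: two specific routes plus a default route.
table = [("10.0.0.0/8", "R1"), ("10.1.0.0/16", "R2"), ("0.0.0.0/0", "R3")]
print(lookup("10.1.2.3", table))    # /16 beats /8 and the default -> R2
print(lookup("172.16.0.1", table))  # only the default route matches -> R3
```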
We have two considerations for path manipulation:

  1. How our router will forward packets.
  2. How adjacent routers will forward packets.

The easiest part is manipulating how our own router will forward packets; the two simplest ways to do this are via PBR and route-maps. After we configure our path manipulation, we have to consider how we want other routers in the topology to forward our traffic. The easiest way to do this is to place route-maps on our outbound routes. In the future, I may begin referring to routes as NLRI (network layer reachability information), but mostly just when talking about BGP.
Before we go any further, and delve into actually configuring a solution, we need to understand how each routing protocol selects candidate routes for the routing table. This is extremely important since we need to know what to change to manipulate administrative distances and metrics.

  • RIP has a default administrative distance of 120.
  • RIP utilizes hop-count to determine its metric, with 1 being the lowest, and 15 being the highest.
  • RIP will not present a route to the routing table with a hop-count of higher than 15.
  • By default, RIP will only route on classful boundaries; it is important that you enable “ip classless” at the global configuration level and “no auto-summary” at the router configuration level. This will ensure that RIP includes the complete prefix, instead of just the classful boundary, in its routing updates.
  • When redistributing routes into RIP, Cisco recommends using a low hop-count (such as 1).


  • EIGRP Internal Routes have an administrative distance of 90; external, or redistributed routes have an administrative distance of 170.
  • EIGRP utilizes K-values for its metric calculation; by default, the only two used are bandwidth and delay. Cisco recommends not changing this. Note: changing K-values on one EIGRP router will cause neighbor-relationship issues with neighboring routers.
    • K1 – Bandwidth (Enabled by Default)
    • K2 – Load (Disabled by Default)
    • K3 – Delay (Enabled by Default)
    • K4 – Reliability (Disabled by Default)
    • K5 – MTU (Disabled by Default)
  • The bandwidth used for the metric calculation is the lowest bandwidth on any interface in the path.
  • The delay is the total time (in tens of microseconds) a packet takes to travel across an interface.
  • The full metric formula is:
    • metric = ([K1 * bandwidth + (K2 * bandwidth) / (256 – load) + K3 * delay] * [K5 / (reliability + K4)]) * 256 (the [K5 / (reliability + K4)] term is only applied when K5 is nonzero)
  • The simplified metric formula, when using the default K-Values is:
    • metric = bandwidth + delay
  • When redistributing routes into EIGRP, you must specify the bandwidth, delay, reliability, load and MTU for EIGRP to calculate a metric for these routes.
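As a quick sanity check on the formulas above, here is a sketch of the classic EIGRP composite metric with the default K-values (K1 = K3 = 1, the rest 0), where bandwidth is the lowest bandwidth in the path in kbps and delay is the total path delay in microseconds:

```python
# Sketch: classic EIGRP composite metric with default K-values
# (K1 = K3 = 1, K2 = K4 = K5 = 0), so only bandwidth and delay count.

def eigrp_metric(min_bandwidth_kbps, total_delay_usec):
    scaled_bw = 10**7 // min_bandwidth_kbps  # lowest bandwidth in the path
    scaled_delay = total_delay_usec // 10    # delay in tens of microseconds
    return (scaled_bw + scaled_delay) * 256

# A single FastEthernet hop: 100,000 kbps bandwidth, 100 usec delay.
print(eigrp_metric(100_000, 100))  # -> 28160, the well-known FastEthernet metric
```

This also shows why the "simplified" formula is really (scaled bandwidth + scaled delay) * 256, with both inputs scaled before they are summed.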


  • OSPF has a default administrative distance of 110.
  • OSPF utilizes bandwidth in its metric calculation.
  • By default, the reference bandwidth is 100 Mbps, meaning that any interface with a speed above that will be seen the same as a 100 Mbps interface in the eyes of OSPF. It is important to change the reference bandwidth to match the highest-speed interface installed on your router.
  • The reference bandwidth should be changed consistently on all routers in the OSPF domain.
  • All redistributed routes will receive a metric of 20, except for BGP routes, which receive a metric of 1
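The OSPF cost calculation behind the reference-bandwidth bullet can be sketched like this (a simple illustration, not Cisco's actual implementation):

```python
# Sketch: OSPF interface cost = reference bandwidth / interface bandwidth,
# truncated, with a floor of 1. The default reference bandwidth of 100 Mbps
# is why FastEthernet, GigE, and 10GigE all get the same cost by default.

def ospf_cost(interface_bps, reference_bps=100_000_000):
    return max(1, reference_bps // interface_bps)

print(ospf_cost(100_000_000))    # FastEthernet -> 1
print(ospf_cost(1_000_000_000))  # GigE also -> 1 with the default reference
print(ospf_cost(1_000_000_000, reference_bps=10_000_000_000))  # GigE -> 10
```

Raising the reference bandwidth (here to 10 Gbps) restores the distinction between fast interfaces, which is exactly why the post recommends changing it.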

BGP has the most robust best-path selection process of all the routing protocols. BGP utilizes what are referred to as path attributes; it has about 13 different attributes it evaluates before selecting a best route. BGP will run through this process until there is a definitive winner. This means that if a path wins an evaluation anywhere along the process, it is selected as the best path.

  1. First and foremost is the Weight, this is generally proprietary to Cisco’s implementation of BGP, the weight attribute is configured by the administrator and is only locally significant.
  2. The path with the highest LOCAL_PREF attribute. BGP assumes a default value of 100 when no LOCAL_PREF has been specified by the administrator.
  3. Path Type; BGP will prefer routes learned through the network or aggregate command, before routes learned through redistribution.
  4. AS_PATH; the AS_PATH defines which autonomous systems the route has passed through before it got to us. The route which traverses the fewest autonomous systems is preferred.
  5. Origin Type; this informs BGP whether the route was originated from an IGP, EGP or INCOMPLETE. IGP has the lowest origin code, EGP follows, followed by INCOMPLETE. BGP will prefer the route with the lowest origin code.
  6. Multi-Exit Discriminator (MED) – BGP will prefer the path with the lowest MED, the MED is used when routes are received from neighbors with the same ASN. If the router received matching routes from neighbors that are not in the same ASN, this will be ignored.
  7. BGP Neighbor Type; prefer eBGP over iBGP
  8. Lowest IGP metric; self-explanatory
  9. Determine whether we need to present two candidates to the routing table (is BGP multi-path enabled?)
  10. Prefer the path which was received first (oldest path)
  11. Prefer the path which was received from the neighbor with the lowest BGP router-ID.
  12. We’ll skip this; not used unless you are in a BGP RR environment. (I’m also not too sure what the hell it means)
  13. Prefer the path with the lowest neighbor IP address.

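The "first decisive comparison wins" structure of the list above is easier to see in code. Here is a minimal sketch covering only a subset of the steps (weight, LOCAL_PREF, AS_PATH length, origin, MED); real BGP evaluates more attributes and conditions than this:

```python
# Sketch: a simplified subset of the BGP best-path tie-break. The
# comparison stops at the first attribute where the two paths differ.

ORIGIN_RANK = {"igp": 0, "egp": 1, "incomplete": 2}

def best_path(a, b):
    if a["weight"] != b["weight"]:                 # 1. highest weight
        return a if a["weight"] > b["weight"] else b
    if a["local_pref"] != b["local_pref"]:         # 2. highest LOCAL_PREF
        return a if a["local_pref"] > b["local_pref"] else b
    if len(a["as_path"]) != len(b["as_path"]):     # 4. shortest AS_PATH
        return a if len(a["as_path"]) < len(b["as_path"]) else b
    if ORIGIN_RANK[a["origin"]] != ORIGIN_RANK[b["origin"]]:  # 5. lowest origin
        return a if ORIGIN_RANK[a["origin"]] < ORIGIN_RANK[b["origin"]] else b
    return a if a["med"] <= b["med"] else b        # 6. lowest MED

# Two hypothetical paths that tie on weight and LOCAL_PREF.
r1 = {"weight": 0, "local_pref": 100, "as_path": [65001, 65002],
      "origin": "igp", "med": 0}
r2 = {"weight": 0, "local_pref": 100, "as_path": [65003],
      "origin": "igp", "med": 0}
print(best_path(r1, r2)["as_path"])  # the shorter AS_PATH wins -> [65003]
```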
The best part about BGP is that it gives us, administrators, the most robust solution to manipulate and engineer traffic in the way we desire. It’s also somewhat confusing, and hard to remember, but don’t be too intimidated, (other than if you’re taking your tests) you can always reference the Cisco support page.

Why Manipulate Paths?

  1. Despite routing protocols having their respective best-path selection algorithms, there may be instances where the router does not take the actual best path.
  2. There may be instances where we need to separate network traffic, and route traffic differently based off of its source/destination address.

Methods of Manipulation

  1. Offset Lists; specific to RIP, these will change the outgoing (or incoming) hop-count on a route (technically, you could use them for EIGRP as well).
  2. Route-Maps; these are powerful, using these we can change the administrative distance, metric, or other attributes of a route.
  3. Policy-Based Routing; policy-based routing is essentially static routing, however, we can get extremely granular and specify where we are going to forward traffic based off of the traffic type, source/destination IP address, etc.
  4. IP SLAs; these are often not considered a method for path manipulation, but can definitely be used as such. IP SLAs allow us to configure tracked objects and make decisions based off the reachability, reliability, or response time of a path.
  5. Route Filtering; this will be key in manipulating how routers which are not directly in our control forward traffic.

Example #1: RIP Offset List

This example is going to be extremely simple: given the network topology, we’re going to create an access-control list and apply an offset list to our routes going outbound on a specific interface.

Our objective for this lab is extremely simple: we are going to make sure that R1 utilizes path 1 to forward traffic to the network. As we all know, RIP only utilizes hop-count for its metric, so although path 1 has a higher bandwidth, both paths would be seen equally from R1. Because both paths would be seen equally, R1 would load balance. This is not a desired action whatsoever, as the traffic share ratio would be 1:1; RIP will not do what EIGRP does and calculate the percentage of packets to be routed out each interface (that is specific to EIGRP’s ability to do unequal-cost load balancing).
We are not going to make any changes on R1, all changes will be made from R2.
This lab will be utilizing 4 Cisco IOSv 15.6(2)T routers in GNS3. I have posted all the lab files (including diagrams) on the “Lab Files” page, which can be found in the menu bar at the top.
First, verify that the hop-count for the “” network on R1 is 2, also verify that your routing table has two entries for this network:

The first step; creating the access list. For this, we can simply use a standard ACL which matches the network for interface “Loopback 1”.
Router(config)#access-list 1 permit
After creating the access list, you need to go into router configuration mode for the RIP process on R2; the lab should already have RIP enabled and all the interfaces configured. (I’ll post the initial configs in case GNS3 didn’t export them for some reason.) For this network, we’re going to offset the hop-count by 3, going out interface G0/0, which leads to path 2.
Router(config)#router rip
Router(config-router)#offset-list 1 out 3 GigabitEthernet0/0

Once completed, do a “show ip protocols”; you will see something along the lines of “Outgoing routes in GigabitEthernet0/0 will have 3 added to metric if on list 1”
If you know somebody at Cisco that could please fix the English on that statement, that would be great. That just perturbs me.

This simply means that if the network is in access-list 1 and being sent out interface GigabitEthernet0/0, we will add 3 to the hop-count reported to R3, thus ensuring that path 1 is the preferred path to this network. To verify this, console into R1 and do a traceroute to the network; you should see it traverse through R4.

That was pretty simple, though when using RIP, remember to keep in mind that RIP will not present a route to the routing table if its hop-count is higher than 15. This means you can’t offset the route too much, especially if your network has a lot more hops than this lab does.
The syntax for the offset list is as follows:
offset-list [acl-name] [in | out] [0-16] [interface-name]
The third argument is the hop count you want to offset by, and the final (interface) argument is optional (though pretty pointless to leave out, for RIP anyway).
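The interaction between the offset and RIP's 15-hop limit can be sketched as follows (a toy model, not router code; 16 is RIP's "infinity" metric):

```python
# Sketch: the effect of an outbound RIP offset list on the advertised
# metric, including the 16-hop "infinity" cap the text warns about.

RIP_INFINITY = 16

def advertise(hop_count, offset):
    """Metric a neighbor receives for a route matched by the offset list."""
    return min(hop_count + offset, RIP_INFINITY)  # 16 = unreachable

print(advertise(2, 3))   # the lab scenario: a 2-hop route advertised as 5
print(advertise(13, 3))  # 16 -> receivers treat the route as unreachable
```

This is exactly why a large offset in a many-hop network can accidentally poison the route instead of just de-preferring it.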

Example #2: EIGRP Policy-Based Routing and Offset Lists

In this example, we’re going to pair a few things together to get granular control over the path traffic takes. As we all know, EIGRP utilizes bandwidth and delay in its calculations; important to know here is that it will use the default interface bandwidth and delay unless otherwise specified. Sure, it’s absolutely easy to change that, but that’s not the point of this lab. The point of this lab is to show you different ways in which you could manipulate the path you want data to travel.
For this lab, we are going to use a 6-router topology. All links are connected via gigabit ethernet, so EIGRP will calculate the same metric for all paths. Given the topology, ensure that voice traffic will always flow over the most desirable path. You will need to do so without configuring interface bandwidth or delay, and you do not have control over any router other than R2. (Well, you can console into all the other routers just to verify, but you cannot configure anything on them.)

The first thing we will need to do is create access-lists for the voice and user networks, we will reference these throughout multiple parts of our lab.

ip access-list standard User-Network
ip access-list standard Voice-Network

The first part we’re going to do is create multiple IP SLAs and tracked objects. These will give us the ability to monitor the reachability of a network over a certain path. Our first tracked object will monitor the interface on R1 over path 3; the second will monitor the interface on R1 over path 1.
ip sla 1
icmp-echo source-interface GigabitEthernet0/0
timeout 6000
ip sla schedule 1 life forever start-time now
ip sla 2
icmp-echo source-interface GigabitEthernet0/1
timeout 6000
ip sla schedule 2 life forever start-time now

After defining the IP-SLAs, next create the tracked objects which are linked to these SLAs, these will allow us to define specific actions based on the state of the tracked object.
track 1 ip sla 1
track 2 ip sla 2

We have set our IP-SLAs to change their state to failure if they are not able to ping the destination within a 6000 millisecond timeframe.
To view the statistics of your IP SLAs:
show ip sla summary
show ip sla statistics

To view the state of our tracked objects:
show track
For our route-map, we are going to utilize sequence numbers. Sequence numbers are used in access-lists, but also in route-maps. Think of route-maps as ACLs for how we want to manipulate routing. If a packet does not match anything defined in the match clause within a sequence, the router will proceed to the next entry in the route-map. If nothing is matched, it will route based off of the routing table.
For our route-map, we’re simply going to match the source IP and set the next-hop (there are plenty of other things we could do as well; I’m just trying to show you examples). For our “set ip next-hop” clause, we are going to utilize the “verify-availability” argument. This allows us to link the action to a tracked object; the route-map will only set this as the next-hop if the tracked object state is “UP”. We will use multiple set entries to set the backup next-hop to R5.
route-map RM-EIGRP-PBR permit 5
match ip address Voice-Network
set ip next-hop verify-availability 1 track 1
set ip next-hop
route-map RM-EIGRP-PBR permit 10
match ip address User-Network
set ip next-hop verify-availability 1 track 2
set ip next-hop

Something to note about route-maps is that multiple match criteria on the same line are viewed as logical OR statements, while match statements on different lines are viewed as logical AND statements.
After you define the route-map, you must apply it to the interface on which we are going to match the inbound traffic, this would be GigabitEthernet0/2. Do this with the command (from interface configuration level):
ip policy route-map RM-EIGRP-PBR
At this point, if you were to ping R1 with traffic sourced from the User network, it would take Path 1, and voice would take path 3. If you want to see R2 forwarding this traffic, enable the below command and send traffic from R3 to R1 sourced from one of our loopback interfaces.
debug ip policy

For manipulating our metrics, we’re simply going to do what we did earlier with RIP and use offset lists. These offset lists will reference the ACLs we created earlier.
This is extremely simple. On G0/3, we’re going to offset the voice and user networks by +1000, on G0/0 we’re going to offset the User network by +2000 and on G0/1, we’re going to offset the voice network by +2000. Note that for outbound updates on G0/3, we will have to create a new ACL which contains both the User and Voice networks.
ip access-list standard User-and-Voice-Network

Next, hammer down into router subconfiguration mode for the EIGRP process and apply those offset lists:
offset-list User-Network out 2000 GigabitEthernet0/0
offset-list User-and-Voice-Network out 1000 GigabitEthernet0/3
offset-list Voice-Network out 2000 GigabitEthernet0/1

Please note that as part of my initial configuration, I emplaced a route-filter on R1 to ensure that the transport routers (R4,R5,R6) do not receive routes for the User or Voice network from R1.
Now that that is done, you can console into R1 and do a “show ip route”; you will notice that there are no longer 3 entries in the routing table. You will see that the user network is seen best out of G0/0, and that the voice network is seen best out of G0/1.

Next, let’s verify our PBR by shutting down G0/0 on R6. Once complete, just wait a few seconds and do a “show track”, you should see a changed state for the tracked object. This should also come up as a syslog message. 
Verify that our PBR is working by pinging from Lo2 on R3 to anything on R1, keeping the “debug ip policy” running on R2. This is what your output should look like: on R2, you should see the router setting the gateway, or next-hop, to

I hope you liked these two examples and the discussion on path manipulation. Even if this example doesn’t exactly make the most sense, I think it still shows the variety of possibilities we have when configuring path control. There’s a lot you could do with path manipulation, and it isn’t limited to outbound routes either. We could easily manipulate inbound routes based on a number of properties.
One cool thing that I want you guys to see real quick is that you can tag routing updates with a value in either decimal or dotted-decimal format. This is great for EIGRP because, by doing this on the origin router, I can basically tell any other routers within the topology who originated these routes and then filter or modify metrics based off of that.
Hint: I really like the AS_PATH attribute in BGP, because it just makes sense that I want to see who originated these routes.
