Exploring AWS VPC Endpoints by Examples with AWS CDK

By Adam McQuistan in DevOps  12/28/2021

In this article I present the concepts of AWS Virtual Private Cloud (VPC) Endpoints through simple examples of setting up and consuming AWS services made explicitly accessible through VPC Endpoints. VPC Endpoints are used in the AWS Cloud to securely relay network requests from resources in a VPC to other services and resources, either provided by AWS or by AWS Partners, without the need for Internet Gateways, NAT Gateways, or VPN connections. To facilitate consistent and repeatable examples I utilize Infrastructure as Code (IaC) in the form of the AWS Cloud Development Kit (CDK) for provisioning all Cloud resources and services.

In the examples I use a VPC with a public and a private subnet each containing a Linux EC2 instance along with an S3 Bucket and a Kinesis Data Stream. SSH connections are established to each EC2 instance for demonstrating connectivity and access patterns among the public and private EC2 instances and the AWS Services, S3 and Kinesis, with and without VPC Endpoints present.

The source code for this article can be found on GitHub.

Creating EC2 Key Pair

Establishing an SSH connection to a Linux EC2 instance in the AWS Cloud, particularly one running the Amazon Linux 2 AMI, requires an SSH PEM key pair. I can utilize the AWS CLI to create a new key pair, named vpc-endpoints, and save it on my laptop.

aws ec2 create-key-pair --key-name vpc-endpoints --query 'KeyMaterial' --output text > vpc-endpoints.pem

Next I set the appropriate permissions on the key file.

chmod 600 vpc-endpoints.pem

Finding My Home IP Address

To access the EC2 instances a security group is needed to explicitly allow SSH (TCP port 22) traffic in from some source network range. There are a couple of ways to do this: (i) allow SSH connections from anywhere, which would be a poor security practice, or (ii) find the public IP address of my home's internet connection and explicitly allow traffic from that IP. This latter option is preferred.

There are many ways to determine your home's public IP address, but the one I use most often is a simple service, http://checkip.amazonaws.com, offered by AWS. I can use my browser or any other HTTP client (curl, httpie, etc...) to make a request to it.
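For example, with curl the service responds with just the public IP in plain text.

curl http://checkip.amazonaws.com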

Once my IP address is determined I add a /32 suffix so it conforms to the CIDR notation an AWS Security Group expects for a source.

For example, if my public IP is 76.11.22.33 then I would use 76.11.22.33/32
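If you like, the lookup and the suffix can be combined into a single shell command (curl's -s flag simply suppresses progress output).

echo "$(curl -s http://checkip.amazonaws.com)/32"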

Setting up an AWS CDK Project

I am utilizing the AWS CDK for a repeatable and consistent method of creating the AWS Cloud resources for the demo. To follow along you'll want to have a functional AWS CDK v2 installation configured.

With the AWS CDK I create a new app inside a vpc-endpoints directory.

mkdir vpc-endpoints
cd vpc-endpoints
cdk init app --language typescript

Next I bootstrap the CDK app to handle my deployable asset files.

cdk bootstrap 

Note that if you have multiple AWS accounts or profiles associated with your AWS CLI credentials and configuration you can specify them using the following.

cdk bootstrap aws://ACCOUNT-NUMBER/REGION

Or you can set environment variables in your shell for CDK_DEFAULT_ACCOUNT and CDK_DEFAULT_REGION before issuing cdk bootstrap and cdk deploy commands.
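For example, in a bash shell (the account number below is a placeholder, substitute your own).

export CDK_DEFAULT_ACCOUNT=123456789012
export CDK_DEFAULT_REGION=us-east-2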

This is all that is required for basic setup.

Provisioning VPC, EC2s, S3 and Kinesis Resources

In this section I provide the necessary CDK IaC implementation within the lib/vpc-endpoints-stack.ts source file to provision a starting point of AWS resources (VPC, subnets, EC2, permissions, etc). Below is a basic architecture diagram showing the components.

Basic Architecture

The VPC is defined with CIDR 10.0.0.0/20, spanning IPs from 10.0.0.1 to 10.0.15.254 (4,096 addresses), along with a public subnet of CIDR mask /24 (256 addresses) equipped with a route to an Internet Gateway. There is also a private subnet of the same size configured to be isolated (meaning no route to a NAT Gateway and thus no routable traffic out of the VPC).

Requests made from the public subnet will travel out to the public internet then back into the AWS network to reach the AWS services, S3 and Kinesis. On the other hand, requests made from the private subnet will not be able to reach the S3 and Kinesis services because there is no path out of the custom VPC. This is where I'll use VPC Endpoints in later sections to explicitly allow traffic to the regional S3 and Kinesis services.

// lib/vpc-endpoints-stack.ts

import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as iam from "aws-cdk-lib/aws-iam";
import * as kds from "aws-cdk-lib/aws-kinesis";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';


export class VpcEndpointsStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    /**
     * Configurable parameters to be passed to CloudFormation stack
     * upon deployment
     */
    const keyPair = new cdk.CfnParameter(this, 'keypair', {
      type: 'String',
      description: 'EC2 Key Pair Name'
    });
    const sshSafeIp = new cdk.CfnParameter(this, 'safeip', {
      type: 'String', 
      description: 'IP Address with /32 suffix to Allow SSH Connections from'
    });


    /**
     * VPC in Single Availability Zone
     * - IPs from 10.0.0.1 to 10.0.15.254 (4,096 addresses)
     * - Public subnet (/24, 256 addresses) with route to Internet Gateway
     * - Private subnet (/24, 256 addresses) with no route to Internet (no NAT)
     */
    const vpc = new ec2.Vpc(this, 'VPC-Endpoints', {
      natGateways: 0,
      cidr: '10.0.0.0/20',
      maxAzs: 1,
      subnetConfiguration: [
        {
          name: 'public-subnet',
          subnetType: ec2.SubnetType.PUBLIC,
          cidrMask: 24
        },
        {
          name: 'private-subnet',
          subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
          cidrMask: 24
        }
      ]
    });


    /**
     * Security Group Allowing SSH Connections from specific IP
     * along with all TCP traffic among EC2s within VPC
     */
    const ec2SecurityGroup = new ec2.SecurityGroup(this, 'ssh-security-group', {
      vpc: vpc,
      description: 'Allow SSH (TCP port 22) from Specific IP and All TCP within VPC',
      allowAllOutbound: true
    });
    ec2SecurityGroup.addIngressRule(
      ec2.Peer.ipv4(sshSafeIp.valueAsString),
      ec2.Port.tcp(22), 'Allow SSH from Specific IP'
    );
    ec2SecurityGroup.addIngressRule(
      ec2.Peer.ipv4(vpc.vpcCidrBlock),
      ec2.Port.allTcp(), 
      'Allow all TCP within VPC'
    );


    /**
     * AWS S3 and AWS Kinesis are AWS Services Accessible via Internet Gateway (publicly)
     * or VPC Endpoints (privately)
     */
    const s3Bucket = new s3.Bucket(this, 'vpc-endpoints-bkt');
    const kdsStream = new kds.Stream(this, 'vpc-endpoints-stream');


    /**
     * EC2 Instance Role with IAM Policies allowing EC2's to work with
     * the AWS S3 bucket and AWS Kinesis Data Stream defined previously.
     */
    const ec2Role = new iam.Role(this, 'ec2-role', {
      assumedBy: new iam.ServicePrincipal('ec2.amazonaws.com')
    });
    ec2Role.addToPolicy(
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: [
          "s3:ListBucket",
          "s3:PutObject",
          "s3:GetObject",
          "s3:DeleteObject"
        ],
        resources: [s3Bucket.bucketArn, `${s3Bucket.bucketArn}/*`]
      })
    );
    ec2Role.addToPolicy(
      new iam.PolicyStatement({
        effect: iam.Effect.ALLOW,
        actions: [
          "kinesis:*"
        ],
        resources: [kdsStream.streamArn]
      })
    );


    /**
     * Create two Amazon Linux 2 AMI EC2 Instances within VPC containing
     * previously defined IAM roles and Security Groups.
     * - one in public subnet
     * - one in private subnet
     */
    const ami = new ec2.AmazonLinuxImage({
      generation: ec2.AmazonLinuxGeneration.AMAZON_LINUX_2,
      cpuType: ec2.AmazonLinuxCpuType.X86_64
    });

    const publicEc2 = new ec2.Instance(this, 'pub-ec2-vpc-endpts', {
      vpc: vpc,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
      machineImage: ami,
      role: ec2Role,
      securityGroup: ec2SecurityGroup,
      vpcSubnets: { subnetType: ec2.SubnetType.PUBLIC },
      keyName: keyPair.valueAsString
    });

    const privateEc2 = new ec2.Instance(this, 'priv-ec2-vpc-endpts', {
      vpc: vpc,
      instanceType: ec2.InstanceType.of(ec2.InstanceClass.T3, ec2.InstanceSize.MICRO),
      machineImage: ami,
      role: ec2Role,
      securityGroup: ec2SecurityGroup,
      vpcSubnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED },
      keyName: keyPair.valueAsString
    });


    /**
     * Dynamically generated resource values to display in output of CloudFormation
     */
    new cdk.CfnOutput(this, 'Ec2PublicIp', {
      value: publicEc2.instancePublicIp
    });
    new cdk.CfnOutput(this, 'Ec2PrivateIp', {
      value: privateEc2.instancePrivateIp
    });
    new cdk.CfnOutput(this, 'S3Bucket', {
      value: s3Bucket.bucketName
    });
    new cdk.CfnOutput(this, 'KdsStream', {
      value: kdsStream.streamName
    });
  }
}

Now I can synthesize my CDK app to CloudFormation and deploy the stack via my terminal like so.

cdk synth
cdk deploy --parameters keypair="vpc-endpoints" --parameters safeip="YOUR-IP/32"

Upon successful deployment the terminal will show the outputs section, complete with the public IP of the public EC2 instance, the private IP of the private EC2 instance, the S3 bucket name, and the Kinesis Data Stream name.

When I deployed, these are the outputs I received and will be using throughout the article, but yours will differ.

Outputs:
VpcEndpointsStack.Ec2PrivateIp = 10.0.1.97
VpcEndpointsStack.Ec2PublicIp = 3.144.105.195
VpcEndpointsStack.KdsStream = VpcEndpointsStack-vpcendpointsstream01BCD141-skfpDltVZrOT
VpcEndpointsStack.S3Bucket = vpcendpointsstack-vpcendpointsbkt6949eda6-13bwwlsf3uft4

Connecting to Public and Private EC2 Over SSH

At this point I can use my previously generated SSH PEM key along with the IP address I just got from the output of deploying the CDK application. The AWS Docs detail SSH connections to Linux EC2 instances if you would like more info on this.

Below is the command to connect to the public EC2 on a Mac or Linux system. Your EC2 IP address will be different from mine so use the value from the output of the deployment command.

ssh ec2-user@3.144.105.195 -i vpc-endpoints.pem

While on the public EC2 instance I can query the EC2 metadata endpoint for a sanity check after installing my favorite HTTP CLI client httpie (of course curl would work also).

[ec2-user@ip-10-0-0-224 ~]$ pip3 install httpie
[ec2-user@ip-10-0-0-224 ~]$ http 169.254.169.254/1.0/meta-data/local-ipv4
HTTP/1.1 200 OK
Accept-Ranges: none
Connection: close
Content-Length: 10
Content-Type: text/plain
Date: Mon, 27 Dec 2021 23:23:03 GMT
Last-Modified: Mon, 27 Dec 2021 12:36:27 GMT
Server: EC2ws

10.0.0.224

The private EC2 instance only accepts connections from within the VPC but, since I'm currently connected to the public EC2 instance, I can add my SSH key to the public EC2 instance and establish an SSH connection from it to the private EC2 instance.

Back in another terminal on my laptop I copy the contents of my vpc-endpoints.pem key and paste it into a similarly named vpc-endpoints.pem file on the public EC2 instance. Then I set its file permissions to 600.

[ec2-user@ip-10-0-0-224 ~]$ tee vpc-endpoints.pem <<EOF
> -----BEGIN RSA PRIVATE KEY-----
> ySNeRJcl8esTMQTcvV2vEmT1MFZZOda... the rest of the contents
> -----END RSA PRIVATE KEY-----
> EOF
[ec2-user@ip-10-0-0-224 ~]$ chmod 600 vpc-endpoints.pem

Now if I use the private EC2 IP from the output of deploying the CDK app I should be able to establish an SSH connection from the public EC2 to the private EC2.

[ec2-user@ip-10-0-0-224 ~]$ ssh ec2-user@10.0.1.97 -i vpc-endpoints.pem

After accepting the ECDSA fingerprint my shell prompt is suffixed by the private EC2 instance's IP. I query the 169.254.169.254 metadata endpoint from here as well.

[ec2-user@ip-10-0-1-97 ~]$ curl 169.254.169.254/1.0/meta-data/local-ipv4
10.0.1.97

Your private EC2 instance's IP address may be different from mine.

At this point I can exit from the private EC2 instance and that will dump me back onto the public EC2 instance where I'll start the next section.

[ec2-user@ip-10-0-1-97 ~]$ exit
logout
Connection to 10.0.1.97 closed.
[ec2-user@ip-10-0-0-224 ~]$ 

Verify S3 Bucket and Kinesis Data Stream Access from Public EC2

While on the public EC2 instance I create a test file to upload to the S3 bucket using the AWS CLI, which comes pre-installed on the Amazon Linux 2 AMI used during creation. Your S3 bucket name will be different from mine, so use the value from the output section of the deployment if following along.

[ec2-user@ip-10-0-0-224 ~]$ echo "I'm the first s3 object" > file1.txt
[ec2-user@ip-10-0-0-224 ~]$ aws s3 cp file1.txt s3://vpcendpointsstack-vpcendpointsbkt6949eda6-13bwwlsf3uft4
upload: ./file1.txt to s3://vpcendpointsstack-vpcendpointsbkt6949eda6-13bwwlsf3uft4/file1.txt

I get a confirmation message telling me the file was successfully uploaded to the S3 bucket.
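For an extra sanity check I can also list the bucket's contents with the CLI.

[ec2-user@ip-10-0-0-224 ~]$ aws s3 ls s3://vpcendpointsstack-vpcendpointsbkt6949eda6-13bwwlsf3uft4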

Next while still on the public EC2 instance I will publish a couple of messages to the Kinesis Data Stream using the AWS CLI.

[ec2-user@ip-10-0-0-224 ~]$ STREAM_NAME=VpcEndpointsStack-vpcendpointsstream01BCD141-skfpDltVZrOT
[ec2-user@ip-10-0-0-224 ~]$ 
[ec2-user@ip-10-0-0-224 ~]$ aws kinesis put-record --stream-name $STREAM_NAME --data "I am the first message" --partition-key "adam" --region us-east-2
[ec2-user@ip-10-0-0-224 ~]$ 
[ec2-user@ip-10-0-0-224 ~]$ aws kinesis put-record --stream-name $STREAM_NAME --data "I am the second message" --partition-key "adam" --region us-east-2

Then I verify that the messages are available in the Kinesis stream by fetching a shard iterator starting at the beginning of the first and only shard, then using it to fetch the records.

[ec2-user@ip-10-0-0-224 ~]$ SHARD_ID=shardId-000000000000
[ec2-user@ip-10-0-0-224 ~]$ SHARD_ITR=$(aws kinesis get-shard-iterator --stream-name $STREAM_NAME --shard-id $SHARD_ID --shard-iterator-type TRIM_HORIZON --region us-east-2 --query 'ShardIterator' --output text)

[ec2-user@ip-10-0-0-224 ~]$ aws kinesis get-records --shard-iterator $SHARD_ITR --region us-east-2
{
    "Records": [
        {
            "Data": "SSBhbSB0aGUgZmlyc3QgbWVzc2FnZQ==", 
            "PartitionKey": "adam", 
            "ApproximateArrivalTimestamp": 1640647237.975, 
            "EncryptionType": "KMS", 
            "SequenceNumber": "49625250023738605656126291265135795977006709310171906050"
        }, 
        {
            "Data": "SSBhbSB0aGUgc2Vjb25kIG1lc3NhZ2U=", 
            "PartitionKey": "adam", 
            "ApproximateArrivalTimestamp": 1640647286.192, 
            "EncryptionType": "KMS", 
            "SequenceNumber": "49625250023738605656126291265137004902826327306600972290"
        }
    ], 
    "NextShardIterator": "AAAAAAAAAAGlNHVsZfqrsHM2GyfPn8WIXR/xhlwAXVGG6/b0GhODmnfyqBpjArq9BQZKKAsqnucnfaMWsqM4J4fZ2naWm96Ok+XNq+WgfRPAIjw2dQ9c5zDIMciO2zUubgEu+95kMbik2sMWteQJI73J7oScID4m1BZS5P1IB2ScLOI7GEd5BaRFGlgp8AZyigxSg1hLaUe394vc2xH3F+MMwVDDSTqrNjamk2s3rg/ndUgqcSaOfA3MoGRM0XQo+0ca5bmhztBzmGL3MyX4+ATikBxu2Ggo2fUKVSR5NFINHVD41kG/ag==", 
    "MillisBehindLatest": 0
}
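The Data field of each record is just base64 encoded, so a quick way to peek at a single message is to pipe it through the base64 utility (part of coreutils on Amazon Linux 2).

[ec2-user@ip-10-0-0-224 ~]$ echo "SSBhbSB0aGUgZmlyc3QgbWVzc2FnZQ==" | base64 --decode
I am the first message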

For a more repeatable way to read and decode all of the stream's records programmatically, install the Python AWS SDK (boto3) while still on the public EC2 instance.

[ec2-user@ip-10-0-0-224 ~]$ pip3 install boto3

Then create a file named kinesis_consumer.py and place the following in it.

# kinesis_consumer.py

import argparse

import boto3


def main(args):

    stream_name = args.stream_name
    shard_id = args.shard_id
    aws_region = args.region

    kinesis = boto3.client('kinesis', region_name=aws_region)

    # fetch an iterator positioned at the oldest record in the shard (TRIM_HORIZON)
    itr_response = kinesis.get_shard_iterator(StreamName=stream_name,
                                              ShardId=shard_id,
                                              ShardIteratorType='TRIM_HORIZON')

    shard_itr = itr_response['ShardIterator']

    # pull up to 200 records in a single batch starting at the iterator position
    records_response = kinesis.get_records(ShardIterator=shard_itr, Limit=200)

    # print each record, decoding the Data payload from bytes to utf-8 text
    fmt = "{:^20} {:^60} {:<40}"
    print(fmt.format("Shard", "Position", "Message"))
    print(fmt.format("-"*20, "-"*60, "-"*40))
    for record in records_response['Records']:
        print(fmt.format(shard_id, record['SequenceNumber'], record['Data'].decode('utf-8')))


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--stream-name')
    parser.add_argument('--shard-id')
    parser.add_argument('--region')
    args = parser.parse_args()

    main(args)

Now I can run it to verify that the messages in Kinesis contain the expected data.

[ec2-user@ip-10-0-0-224 ~]$ python3 kinesis_consumer.py --stream-name $STREAM_NAME --shard-id $SHARD_ID --region us-east-2
       Shard                                   Position                           Message                                 
-------------------- ------------------------------------------------------------ ----------------------------------------
shardId-000000000000   49625250023738605656126291265135795977006709310171906050   I am the first message                  
shardId-000000000000   49625250023738605656126291265137004902826327306600972290   I am the second message 

With the current setup, all request traffic out to AWS services external to my VPC, like S3 and Kinesis, passes through the Internet Gateway to the public internet and then gets routed back into the AWS network to reach the S3 and Kinesis services, as shown in the diagram below.

Public Internet Only

Verify S3 and Kinesis Not Accessible from Private EC2

The goal of this section is to SSH onto the private EC2 instance (remember, the connection must start from the public EC2 instance) and verify that neither the AWS S3 nor the AWS Kinesis service can be reached. This is because there is no routable path out of the private subnet within the custom VPC.

So SSH to the private EC2 instance from the public EC2.

[ec2-user@ip-10-0-0-224 ~]$ ssh ec2-user@10.0.1.97 -i vpc-endpoints.pem 
Last login: Mon Dec 27 22:37:52 2021 from ip-10-0-0-224.us-east-2.compute.internal

       __|  __|_  )
       _|  (     /   Amazon Linux 2 AMI
      ___|\___|___|

https://aws.amazon.com/amazon-linux-2/
[ec2-user@ip-10-0-1-97 ~]$ 

Now create a second test file named file2.txt with the following content and attempt to copy it to the S3 bucket as done previously on the public EC2 instance.

[ec2-user@ip-10-0-1-97 ~]$ echo "I'm the second s3 object" > file2.txt
[ec2-user@ip-10-0-1-97 ~]$ aws s3 cp file2.txt s3://vpcendpointsstack-vpcendpointsbkt6949eda6-13bwwlsf3uft4

This last command (copying to S3) should hang until you enter CTRL+C (perhaps repeated a few times), verifying that the S3 service is not reachable.
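If you would rather not babysit the hang, wrapping the command with the coreutils timeout utility demonstrates the same failure and returns control of the shell automatically.

[ec2-user@ip-10-0-1-97 ~]$ timeout 15 aws s3 cp file2.txt s3://vpcendpointsstack-vpcendpointsbkt6949eda6-13bwwlsf3uft4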

For completeness it is a good idea to verify a similar outcome occurs when trying to publish messages to Kinesis from this private EC2 instance also.

[ec2-user@ip-10-0-1-97 ~]$ STREAM_NAME=VpcEndpointsStack-vpcendpointsstream01BCD141-skfpDltVZrOT
[ec2-user@ip-10-0-1-97 ~]$ aws kinesis put-record --stream-name $STREAM_NAME --data "I am a message from private ec2" --partition-key "adam" --region us-east-2

Again, this command should hang, requiring CTRL+C to regain control of the shell and verifying that Kinesis is not reachable either.

No Private Traffic

Adding a Gateway Endpoint for S3

The first of the two VPC Endpoint types up for discussion, the Gateway Endpoint, is a special VPC Endpoint for connecting to either DynamoDB or S3 directly from within a VPC. Said another way, when you add a Gateway Endpoint to your VPC along with a route directing traffic for a specific service (S3 in this example) to the newly added Gateway Endpoint, network traffic between the VPC and the service does not leave the AWS network.

After this section the AWS architecture and network access paths will conform to the diagram shown.

S3 Gateway Endpoint

Below is the updated CDK IaC required to add an S3 Gateway Endpoint, which also adds a route directing all request traffic destined for the S3 service through the Gateway Endpoint and back.

// lib/vpc-endpoints-stack.ts

import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as iam from "aws-cdk-lib/aws-iam";
import * as kds from "aws-cdk-lib/aws-kinesis";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';


export class VpcEndpointsStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // skipping to VPC section ... omitting for brevity


    /**
     * VPC in Single Availability Zone
     * - IPs from 10.0.0.1 to 10.0.15.254 (4,096 addresses)
     * - Public subnet (/24, 256 addresses) with route to Internet Gateway
     * - Private subnet (/24, 256 addresses) with no route to Internet (no NAT)
     */
    const vpc = new ec2.Vpc(this, 'VPC-Endpoints', {
      natGateways: 0,
      cidr: '10.0.0.0/20',
      maxAzs: 1,
      subnetConfiguration: [
        {
          name: 'public-subnet',
          subnetType: ec2.SubnetType.PUBLIC,
          cidrMask: 24
        },
        {
          name: 'private-subnet',
          subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
          cidrMask: 24
        }
      ]
    });
    const s3GatewayEndpoint = vpc.addGatewayEndpoint('s3-endpoint', {
      service: ec2.GatewayVpcEndpointAwsService.S3
    });
    
    
    // skipping to outputs section ... omitting for brevity


    /**
     * Dynamically generated resource values to display in output of CloudFormation
     */
    new cdk.CfnOutput(this, 'Ec2PublicIp', {
      value: publicEc2.instancePublicIp
    });
    new cdk.CfnOutput(this, 'Ec2PrivateIp', {
      value: privateEc2.instancePrivateIp
    });
    new cdk.CfnOutput(this, 'S3Bucket', {
      value: s3Bucket.bucketName
    });
    new cdk.CfnOutput(this, 'KdsStream', {
      value: kdsStream.streamName
    });
    new cdk.CfnOutput(this, 'S3GatewayEndpoint', {
      value: s3GatewayEndpoint.vpcEndpointId
    });
  }
}

I synthesize and deploy again to push the updated network architecture to the AWS Cloud.

cdk synth
cdk deploy --parameters keypair="vpc-endpoints" --parameters safeip="YOUR-IP/32"

Now when I SSH onto the private EC2 instance and attempt to copy file2.txt to the S3 bucket it succeeds.

[ec2-user@ip-10-0-1-97 ~]$ aws s3 cp file2.txt s3://vpcendpointsstack-vpcendpointsbkt6949eda6-13bwwlsf3uft4
upload: ./file2.txt to s3://vpcendpointsstack-vpcendpointsbkt6949eda6-13bwwlsf3uft4/file2.txt

It's also worth noting that all requests made to the S3 service from the EC2 instance in the public subnet will now utilize the S3 Gateway Endpoint as well, due to the route in the public route table linking the S3 prefix list ID to the Gateway Endpoint.
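To see this route for yourself you can inspect the VPC's route tables from your laptop with the AWS CLI (replace the VPC ID with the one from your deployment); the S3 entry shows a DestinationPrefixListId pointing at the vpce- Gateway Endpoint ID.

aws ec2 describe-route-tables --filters Name=vpc-id,Values=YOUR-VPC-ID --query 'RouteTables[].Routes' --region us-east-2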

Adding an Interface Endpoint for Kinesis

The last VPC Endpoint type, the Interface Endpoint, is for connecting to both AWS services and AWS Partner services which utilize AWS PrivateLink. Interface Endpoints enable direct paths from customer (aka consumer) VPCs, establishing secure and private connections from the custom VPC to, in this case, the AWS Kinesis streaming service.

After this section the AWS architecture and network access paths will conform to the architecture diagram shown below with both VPC Endpoints now present.

Kinesis Interface Endpoint

Below is the updated CDK IaC required to add an Interface Endpoint to the private subnet, enabling traffic from the private subnet to reach Kinesis. The CDK updates also enable private DNS, causing requests within the public subnet destined for Kinesis to go through the Interface Endpoint rather than making a trip out to the public internet via the Internet Gateway and back to Kinesis.

// lib/vpc-endpoints-stack.ts

import * as ec2 from "aws-cdk-lib/aws-ec2";
import * as iam from "aws-cdk-lib/aws-iam";
import * as kds from "aws-cdk-lib/aws-kinesis";
import * as s3 from "aws-cdk-lib/aws-s3";
import * as cdk from 'aws-cdk-lib';
import { Construct } from 'constructs';


export class VpcEndpointsStack extends cdk.Stack {
  constructor(scope: Construct, id: string, props?: cdk.StackProps) {
    super(scope, id, props);

    // skipping to VPC section ... omitting for brevity


    /**
     * VPC in Single Availability Zone
     * - IPs from 10.0.0.1 to 10.0.15.254 (4,096 addresses)
     * - Public subnet (/24, 256 addresses) with route to Internet Gateway
     * - Private subnet (/24, 256 addresses) with no route to Internet (no NAT)
     */
    const vpc = new ec2.Vpc(this, 'VPC-Endpoints', {
      natGateways: 0,
      cidr: '10.0.0.0/20',
      maxAzs: 1,
      subnetConfiguration: [
        {
          name: 'public-subnet',
          subnetType: ec2.SubnetType.PUBLIC,
          cidrMask: 24
        },
        {
          name: 'private-subnet',
          subnetType: ec2.SubnetType.PRIVATE_ISOLATED,
          cidrMask: 24
        }
      ],
      enableDnsHostnames: true,
      enableDnsSupport: true
    });
    const s3GatewayEndpoint = vpc.addGatewayEndpoint('s3-endpoint', {
      service: ec2.GatewayVpcEndpointAwsService.S3
    });
    const kinesisInterfaceEndpoint = vpc.addInterfaceEndpoint('kinesis-endpoint', {
      service: ec2.InterfaceVpcEndpointAwsService.KINESIS_STREAMS,
      privateDnsEnabled: true,
      subnets: { subnetType: ec2.SubnetType.PRIVATE_ISOLATED }
    });

    // skipping to outputs section ... omitting for brevity

    /**
     * Dynamically generated resource values to display in output of CloudFormation
     */
    new cdk.CfnOutput(this, 'Ec2PublicIp', {
      value: publicEc2.instancePublicIp
    });
    new cdk.CfnOutput(this, 'Ec2PrivateIp', {
      value: privateEc2.instancePrivateIp
    });
    new cdk.CfnOutput(this, 'S3Bucket', {
      value: s3Bucket.bucketName
    });
    new cdk.CfnOutput(this, 'KdsStream', {
      value: kdsStream.streamName
    });
    new cdk.CfnOutput(this, 'S3GatewayEndpoint', {
      value: s3GatewayEndpoint.vpcEndpointId
    });
    new cdk.CfnOutput(this, 'KinesisInterfaceEndpoint', {
      value: kinesisInterfaceEndpoint.vpcEndpointId
    });
  }
}

Next I must synthesize and deploy as before for the updates to be pushed to the AWS Cloud.

cdk synth
cdk deploy --parameters keypair="vpc-endpoints" --parameters safeip="YOUR-IP/32"

Then I can attempt to publish some new messages to Kinesis from the EC2 instance in the private subnet, testing the Interface Endpoint's ability to relay traffic to the Kinesis service.

[ec2-user@ip-10-0-1-97 ~]$ aws kinesis put-record --stream-name $STREAM_NAME --data "I am a message from private ec2" --partition-key "adam" --region us-east-2
{
    "ShardId": "shardId-000000000000", 
    "EncryptionType": "KMS", 
    "SequenceNumber": "49625250023738605656126291265138213828647522758618513410"
}

Awesome, it looks like the Interface Endpoint is relaying traffic from the private subnet to Kinesis securely, remaining within the AWS network without ever being exposed to the public internet.
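As one final check, you can confirm private DNS is in effect by resolving the regional Kinesis endpoint from either EC2 instance; with the Interface Endpoint in place it should resolve to a private IP inside the VPC's 10.0.0.0/20 range (getent ships with glibc so no extra packages are needed).

[ec2-user@ip-10-0-1-97 ~]$ getent hosts kinesis.us-east-2.amazonaws.com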

Clean Up

In order to minimize charges incurred by running the various AWS Cloud resources and services for this tutorial it's important to remove them when done experimenting. The CDK makes this quite simple with the following command.

cdk destroy
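One caveat worth mentioning: the CDK retains S3 buckets by default on stack deletion (and a non-empty bucket cannot be deleted anyway), so you may need to empty and remove the bucket manually afterwards, substituting your own bucket name.

aws s3 rb s3://vpcendpointsstack-vpcendpointsbkt6949eda6-13bwwlsf3uft4 --force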

Conclusion

In this article I presented the concept of AWS VPC Endpoints, which are used to control network traffic between VPC resources and services provided by AWS and AWS Partners so that traffic remains within the AWS network without the need for Internet Gateways, NAT Gateways, VPN connections, or other VPC network resources. Specifically, I demonstrated how to provision and configure a Gateway Endpoint for interacting with AWS S3 along with an Interface Endpoint for interacting with AWS Kinesis Data Streams.

As always, I thank you for reading and please feel free to ask questions or critique in the comments section below.
