Continuous Integration / Continuous Delivery for CDK Applications
This is a developer preview (public beta) module. Releases might lack important features and might have
future breaking changes.
This API is still under active development and subject to non-backward
compatible changes or removal in any future version. Use of the API is not recommended in production
environments. Experimental APIs are not subject to the Semantic Versioning model.
This library includes a CodePipeline composite Action for deploying AWS CDK Applications.
The construct library in its current form has the following limitations:
It can only deploy stacks that are hosted in the same AWS account and region as the CodePipeline.
Stacks that make use of Assets cannot be deployed successfully.
Getting Started
In order to add the PipelineDeployStackAction to your CodePipeline, you need to have a CodePipeline artifact that
contains the result of invoking cdk synth -o <dir> on your CDK App. You can for example achieve this using a
CodeBuild project.
The example below defines a CDK App that contains 3 stacks:
CodePipelineStack manages the CodePipeline resources, and self-updates before deploying any other stack
ServiceStackA and ServiceStackB are service infrastructure stacks, and need to be deployed in this order
import aws_cdk.aws_codebuild as codebuild
import aws_cdk.aws_codepipeline as codepipeline
import aws_cdk.aws_codepipeline_actions as codepipeline_actions
import aws_cdk.aws_iam as iam
import aws_cdk.core as cdk
import aws_cdk.app_delivery as cicd

app = cdk.App()
# We define a stack that contains the CodePipeline
pipeline_stack = cdk.Stack(app, "PipelineStack")
pipeline=codepipeline.Pipeline(pipeline_stack, "CodePipeline",
    # Mutating a CodePipeline can cause the currently propagating state to be
    # "lost". Ensure we re-run the latest change through the pipeline after it's
    # been mutated so we're sure the latest state is fully deployed through.
    restart_execution_on_update=True
)
# Configure the CodePipeline source - where your CDK App's source code is hosted
source_output = codepipeline.Artifact()
source = codepipeline_actions.GitHubSourceAction(
    action_name="GitHub",
    output=source_output,
    # a GitHub source also needs the repository details and an OAuth token;
    # the values below are placeholders
    owner="<github-owner>",
    repo="<github-repo>",
    oauth_token=cdk.SecretValue.secrets_manager("github-token")
)
pipeline.add_stage(
stage_name="source",
actions=[source]
)
project=codebuild.PipelineProject(pipeline_stack, "CodeBuild")
synthesized_app=codepipeline.Artifact()
build_action=codepipeline_actions.CodeBuildAction(
action_name="CodeBuild",
project=project,
input=source_output,
outputs=[synthesized_app]
)
pipeline.add_stage(
stage_name="build",
actions=[build_action]
)
# Optionally, self-update the pipeline stack
self_update_stage = pipeline.add_stage(stage_name="SelfUpdate")
self_update_stage.add_action(cicd.PipelineDeployStackAction(
stack=pipeline_stack,
input=synthesized_app,
admin_permissions=True
))
# Now add our service stacks
deploy_stage = pipeline.add_stage(stage_name="Deploy")
service_stack_a=MyServiceStackA(app, "ServiceStackA")
# Add actions to deploy the stacks in the deploy stage:
deploy_service_a_action = cicd.PipelineDeployStackAction(
stack=service_stack_a,
input=synthesized_app,
    # See the note below for details about this option.
    admin_permissions=False
)
deploy_stage.add_action(deploy_service_a_action)
# Add the necessary permissions for your service deploy action. This role is
# passed to CloudFormation and needs the permissions necessary to deploy the
# stack. Alternatively you can enable Administrator permissions
# (https://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_job-functions.html#jf_administrator) above,
# but users should understand the privileged nature of this role.
deploy_service_a_action.add_to_role_policy(iam.PolicyStatement(
actions=["service:SomeAction"],
resources=[my_resource.my_resource_arn]
))
service_stack_b=MyServiceStackB(app, "ServiceStackB")
deploy_stage.add_action(cicd.PipelineDeployStackAction(
stack=service_stack_b,
input=synthesized_app,
create_change_set_run_order=998,
admin_permissions=True
))
buildspec.yml
The repository can contain a file at the root level named buildspec.yml, or
you can in-line the buildspec. Note that buildspec.yaml is not compatible.
For example, a TypeScript or JavaScript CDK App can add the following buildspec.yml
at the root of the repository:
version: 0.2
phases:
  install:
    commands:
      # Installs the npm dependencies as defined by the `package.json` file
      # present in the root directory of the package
      # (`cdk init app --language=typescript` would have created one for you)
      - npm install
  build:
    commands:
      # Builds the CDK App so it can be synthesized
      - npm run build
      # Synthesizes the CDK App and puts the resulting artifacts into `dist`
      - npm run cdk synth -- -o dist
artifacts:
  # The output artifact is all the files in the `dist` directory
  base-directory: dist
  files: '**/*'
The PipelineDeployStackAction expects its input to contain the result of
synthesizing a CDK App using cdk synth -o <directory>.
Amazon API Gateway is a fully managed service that makes it easy for developers
to publish, maintain, monitor, and secure APIs at any scale. Create an API to
access data, business logic, or functionality from your back-end services, such
as applications running on Amazon Elastic Compute Cloud (Amazon EC2), code
running on AWS Lambda, or any web application.
Defining APIs
APIs are defined as a hierarchy of resources and methods. addResource and
addMethod can be used to build this hierarchy. The root resource is
api.root.
For example, the following code defines an API that includes the following HTTP
endpoints: ANY /, GET /books, POST /books, GET /books/{book_id}, DELETE /books/{book_id}.
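A minimal sketch that produces this hierarchy might look like the following (the books-api construct id is a placeholder):
api = apigateway.RestApi(self, "books-api")

# ANY /
api.root.add_method("ANY")

# GET /books and POST /books
books = api.root.add_resource("books")
books.add_method("GET")
books.add_method("POST")

# GET /books/{book_id} and DELETE /books/{book_id}
book = books.add_resource("{book_id}")
book.add_method("GET")
book.add_method("DELETE")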
You can also supply proxy: false, in which case you will have to explicitly
define the API model:
backend=lambda.Function(...)
api=apigateway.LambdaRestApi(self, "myapi",
handler=backend,
proxy=False
)
items=api.root.add_resource("items")
items.add_method("GET")# GET /itemsitems.add_method("POST")# POST /itemsitem=items.add_resource("{item}")
item.add_method("GET")# GET /items/{item}# the default integration for methods is "handler", but one can# customize this behavior per method or even a sub path.item.add_method("DELETE", apigateway.HttpIntegration("http://amazon.com"))
Integration Targets
Methods are associated with backend integrations, which are invoked when this
method is called. API Gateway supports the following integrations:
MockIntegration - can be used to test APIs. This is the default
integration if one is not specified.
LambdaIntegration - can be used to invoke an AWS Lambda function.
AwsIntegration - can be used to invoke arbitrary AWS service APIs.
HttpIntegration - can be used to invoke HTTP endpoints.
The following example shows how to integrate the GET /book/{book_id} method to
an AWS Lambda function:
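A sketch, assuming a books/{book_id} resource named book as above and a hypothetical handler function:
# a Lambda function that serves individual books (handler details elided)
get_book_handler = lambda.Function(...)
get_book_integration = apigateway.LambdaIntegration(get_book_handler)
book.add_method("GET", get_book_integration)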
When you work with Lambda integrations that are not Proxy integrations, you
have to define your models and mappings for the request, response, and integration.
You can define more parameters on the integration to tune the behavior of API Gateway
import json

integration = apigateway.LambdaIntegration(hello,
    proxy=False,
    request_parameters={
        # You can define mapping parameters from your method to your integration
        # - Destination parameters (the key) are the integration parameters (used in mappings)
        # - Source parameters (the value) are the source request parameters or expressions
        # @see: https://docs.aws.amazon.com/apigateway/latest/developerguide/request-response-data-mappings.html
        "integration.request.querystring.who": "method.request.querystring.who"
    },
    allow_test_invoke=True,
    request_templates={
        # You can define a mapping that will build a payload for your integration, based
        # on the integration parameters that you have specified
        # Check: https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
        "application/json": json.dumps({"action": "sayHello", "pollId": "$util.escapeJavaScript($input.params('who'))"})
    },
    # This parameter defines the behavior of the engine if no suitable response template is found
    passthrough_behavior=apigateway.PassthroughBehavior.NEVER,
    integration_responses=[{
        # Successful response from the Lambda function, no filter defined
        # - the selectionPattern filter only tests the error message
        # We will set the response status code to 200
        "status_code": "200",
        "response_templates": {
            # This template takes the "message" result from the Lambda function, and embeds it in a JSON response
            # Check https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-mapping-template-reference.html
            "application/json": json.dumps({"state": "ok", "greeting": "$util.escapeJavaScript($input.body)"})
        },
        "response_parameters": {
            # We can map response parameters
            # - Destination parameters (the key) are the response parameters (used in mappings)
            # - Source parameters (the value) are the integration response parameters or expressions
            "method.response.header.Content-Type": "'application/json'",
            "method.response.header.Access-Control-Allow-Origin": "'*'",
            "method.response.header.Access-Control-Allow-Credentials": "'true'"
        }
    }, {
        # For errors, we check if the error message is not empty, get the error data
        "selection_pattern": "(\n|.)+",
        # We will set the response status code to 400
        "status_code": "400",
        "response_templates": {
            "application/json": json.dumps({"state": "error", "message": "$util.escapeJavaScript($input.path('$.errorMessage'))"})
        },
        "response_parameters": {
            "method.response.header.Content-Type": "'application/json'",
            "method.response.header.Access-Control-Allow-Origin": "'*'",
            "method.response.header.Access-Control-Allow-Credentials": "'true'"
        }
    }
    ]
)
You can define models for your responses (and requests)
# We define the JSON Schema for the transformed valid response
response_model = api.add_model("ResponseModel",
    content_type="application/json",
    model_name="ResponseModel",
    schema={
        "$schema": "http://json-schema.org/draft-04/schema#",
        "title": "pollResponse",
        "type": "object",
        "properties": {
            "state": {"type": "string"},
            "greeting": {"type": "string"}
        }
    }
)

# We define the JSON Schema for the transformed error response
error_response_model = api.add_model("ErrorResponseModel",
    content_type="application/json",
    model_name="ErrorResponseModel",
    schema={
        "$schema": "http://json-schema.org/draft-04/schema#",
        "title": "errorResponse",
        "type": "object",
        "properties": {
            "state": {"type": "string"},
            "message": {"type": "string"}
        }
    }
)
And reference them on your method definition.
# If you want to define parameter mappings for the request, you need a validator
validator = api.add_request_validator("DefaultValidator",
    validate_request_body=False,
    validate_request_parameters=True
)
resource.add_method("GET", integration,
    # We can mark the parameters as required
    request_parameters={
        "method.request.querystring.who": True
    },
    # We need to set the validator for ensuring they are passed
    request_validator=validator,
    method_responses=[{
        # Successful response from the integration
        "status_code": "200",
        # Define what parameters are allowed or not
        "response_parameters": {
            "method.response.header.Content-Type": True,
            "method.response.header.Access-Control-Allow-Origin": True,
            "method.response.header.Access-Control-Allow-Credentials": True
        },
        # Validate the schema on the response
        "response_models": {
            "application/json": response_model
        }
    }, {
        # Same thing for the error responses
        "status_code": "400",
        "response_parameters": {
            "method.response.header.Content-Type": True,
            "method.response.header.Access-Control-Allow-Origin": True,
            "method.response.header.Access-Control-Allow-Credentials": True
        },
        "response_models": {
            "application/json": error_response_model
        }
    }
    ]
)
Default Integration and Method Options
The defaultIntegration and defaultMethodOptions properties can be used to
configure a default integration at any resource level. These options will be
used when defining methods under this resource (recursively) with undefined
integration or options.
If not defined, the default integration is MockIntegration. See reference
documentation for default method options.
The following example defines the booksBackend integration as a default
integration. This means that all API methods that do not explicitly define an
integration will be routed to this AWS Lambda function.
books_backend=apigateway.LambdaIntegration(...)
api=apigateway.RestApi(self, "books",
default_integration=books_backend
)
books=api.root.add_resource("books")
books.add_method("GET")# integrated with `booksBackend`books.add_method("POST")# integrated with `booksBackend`book=books.add_resource("{book_id}")
book.add_method("GET")
Proxy Routes
The addProxy method can be used to install a greedy {proxy+} resource
on a path. By default, this also installs an "ANY" method:
proxy=resource.add_proxy(
default_integration=LambdaIntegration(handler),
# "false" will require explicitly adding methods on the `proxy` resourceany_method=True
)
Deployments
By default, the RestApi construct will automatically create an API Gateway
Deployment and a "prod" Stage which represent the API configuration you
defined in your CDK app. This means that when you deploy your app, your API will
be open to access from the internet via the stage URL.
The URL of your API can be obtained from the attribute restApi.url, and is
also exported as an Output from your stack, so it's printed when you run cdk deploy on your app.
To disable this behavior, you can set { deploy: false } when creating your
API. This means that the API will not be deployed and a stage will not be
created for it. You will then need to manually define apigateway.Deployment and
apigateway.Stage resources.
Use the deployOptions property to customize the deployment options of your
API.
The following example will configure API Gateway to emit logs and data traces to
AWS CloudWatch for all API calls:
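A sketch of such a configuration, using the deployOptions property:
api = apigateway.RestApi(self, "books",
    deploy_options={
        "logging_level": apigateway.MethodLoggingLevel.INFO,
        "data_trace_enabled": True
    }
)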
By default, an IAM role will be created and associated with API Gateway to
allow it to write logs and metrics to AWS CloudWatch unless cloudWatchRole is
set to false.
API Gateway deployments are an immutable snapshot of the API. This means that we
want to automatically create a new deployment resource every time the API model
defined in our CDK app changes.
In order to achieve that, the AWS CloudFormation logical ID of the
AWS::ApiGateway::Deployment resource is dynamically calculated by hashing the
API configuration (resources, methods). This means that when the configuration
changes (i.e. a resource or method is added, or configuration is changed), a new
logical ID will be assigned to the deployment resource. This will cause
CloudFormation to create a new deployment resource.
By default, old deployments are deleted. You can set retainDeployments: true
to allow users to revert the stage to an old deployment manually.
Custom Domains
To associate an API with a custom domain, use the domainName configuration when
you define your API:
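For example, a sketch assuming an ACM certificate acm_certificate_for_example_com defined elsewhere:
api = apigateway.RestApi(self, "api",
    domain_name={
        "domain_name": "example.com",
        "certificate": acm_certificate_for_example_com
    }
)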
This will define a DomainName resource for you, along with a BasePathMapping
from the root of the domain to the deployment stage of the API. This is a common
set up.
To route domain traffic to an API Gateway API, use Amazon Route 53 to create an
alias record. An alias record is a Route 53 extension to DNS. It's similar to a
CNAME record, but you can create an alias record both for the root domain, such
as example.com, and for subdomains, such as www.example.com. (You can create
CNAME records only for subdomains.)
NOTE: currently, the mapping will always be assigned to the API's
deploymentStage, which is automatically assigned to the latest API
deployment. Raise a GitHub issue if you require more granular control over
mapping base paths to stages.
If you don't specify basePath, all URLs under this domain will be mapped
to the API, and you won't be able to map another API to the same domain:
domain.add_base_path_mapping(api)
This can also be achieved through the mapping configuration when defining the
domain as demonstrated above.
If you wish to set up this domain with an Amazon Route53 alias, use route53_targets.ApiGatewayDomain:
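A sketch, assuming a hosted zone hosted_zone_for_example_com and the domain object created above:
import aws_cdk.aws_route53 as route53
import aws_cdk.aws_route53_targets as route53_targets

route53.ARecord(self, "CustomDomainAliasRecord",
    zone=hosted_zone_for_example_com,
    target=route53.RecordTarget.from_alias(route53_targets.ApiGatewayDomain(domain))
)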
Application AutoScaling is used to configure autoscaling for
services other than EC2 instances. For example, you will use this to
scale ECS tasks, DynamoDB capacity, Spot Fleet sizes and more.
As a CDK user, you will probably not have to interact with this library
directly; instead, it will be used by other construct libraries to
offer AutoScaling features for their own constructs.
This document will describe the general autoscaling features and concepts;
your particular service may offer only a subset of these.
AutoScaling basics
Resources can offer one or more attributes to autoscale, typically
representing some capacity dimension of the underlying service. For example,
a DynamoDB Table offers autoscaling of the read and write capacity of the
table proper and its Global Secondary Indexes, an ECS Service offers
autoscaling of its task count, an RDS Aurora cluster offers scaling of its
replica count, and so on.
When you enable autoscaling for an attribute, you specify a minimum and a
maximum value for the capacity. AutoScaling policies that respond to metrics
will never go higher or lower than the indicated capacity (but scheduled
scaling actions might, see below).
There are three ways to scale your capacity:
In response to a metric (also known as step scaling); for example, you
might want to scale out if the CPU usage across your cluster starts to rise,
and scale in when it drops again.
By trying to keep a certain metric around a given value (also known as
target tracking scaling); you might want to automatically scale out and in to
keep your CPU usage around 50%.
On a schedule; you might want to organize your scaling around traffic
flows you expect, by scaling out in the morning and scaling in in the
evening.
The general pattern of autoscaling will look like this:
capacity=resource.auto_scale_capacity(
min_capacity=5,
max_capacity=100
)
# Enable a type of metric scaling and/or schedule scaling
capacity.scale_on_metric(...)
capacity.scale_to_track_metric(...)
capacity.scale_on_schedule(...)
Step Scaling
This type of scaling scales in and out in deterministic steps that you
configure, in response to metric values. For example, your scaling strategy
to scale in response to CPU usage might look like this:
(Note that this is not necessarily a recommended scaling strategy, but it's
a possible one. You will have to determine what thresholds are right for you).
You would configure it like this:
capacity.scale_on_metric("ScaleToCPU",
    metric=service.metric_cpu_utilization(),
    scaling_steps=[
        {"upper": 10, "change": -1},
        {"lower": 50, "change": +1},
        {"lower": 70, "change": +3}
    ],
    # Change this to AdjustmentType.PERCENT_CHANGE_IN_CAPACITY to interpret the
    # 'change' numbers above as percentages instead of capacity counts.
    adjustment_type=autoscaling.AdjustmentType.CHANGE_IN_CAPACITY
)
The AutoScaling construct library will create the required CloudWatch alarms and
AutoScaling policies for you.
Target Tracking Scaling
This type of scaling scales in and out in order to keep a metric (typically
representing utilization) around a value you prefer. This type of scaling is
typically heavily service-dependent in what metric you can use, and so
different services will have different methods here to set up target tracking
scaling.
The following example configures the read capacity of a DynamoDB table
to be around 60% utilization:
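A sketch, assuming a DynamoDB table defined elsewhere:
read_capacity = table.auto_scale_read_capacity(
    min_capacity=10,
    max_capacity=1000
)
read_capacity.scale_on_utilization(target_utilization_percent=60)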
Scheduled Scaling
This type of scaling is used to change capacities based on time. It works
by changing the minCapacity and maxCapacity of the attribute, and so
can be used for two purposes:
Scale in and out on a schedule by setting the minCapacity high or
the maxCapacity low.
Still allow the regular scaling actions to do their job, but restrict
the range they can scale over (by setting both minCapacity and
maxCapacity but changing their range over time).
The following schedule expressions can be used:
at(yyyy-mm-ddThh:mm:ss) -- scale at a particular moment in time
rate(value unit) -- scale every minute/hour/day
cron(mm hh dd mm dow) -- scale on arbitrary schedules
Of these, the cron expression is the most useful but also the most
complicated. The Schedule class has a cron method to help build cron expressions.
The following example scales the fleet out in the morning, and lets natural
scaling take over at night:
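A sketch, assuming the capacity object from the general pattern above and appscaling as the module alias:
capacity.scale_on_schedule("PrescaleInTheMorning",
    schedule=appscaling.Schedule.cron(hour="8", minute="0"),
    min_capacity=20
)
capacity.scale_on_schedule("AllowDownscalingAtNight",
    schedule=appscaling.Schedule.cron(hour="20", minute="0"),
    min_capacity=1
)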
This is a developer preview (public beta) module. Releases might lack important features and might have
future breaking changes.
This API is still under active development and subject to non-backward
compatible changes or removal in any future version. Use of the API is not recommended in production
environments. Experimental APIs are not subject to the Semantic Versioning model.
AWS App Mesh is a service mesh based on the Envoy proxy that makes it easy to monitor and control microservices. App Mesh standardizes how your microservices communicate, giving you end-to-end visibility and helping to ensure high-availability for your applications.
App Mesh gives you consistent visibility and network traffic controls for every microservice in an application.
App Mesh supports microservice applications that use service discovery naming for their components. To use App Mesh, you must have an existing application running on AWS Fargate, Amazon ECS, Amazon EKS, Kubernetes on AWS, or Amazon EC2.
A service mesh is a logical boundary for network traffic between the services that reside within it.
After you create your service mesh, you can create virtual services, virtual nodes, virtual routers, and routes to distribute traffic between the applications in your mesh.
The following example creates the AppMesh service mesh with the default egress filter of DROP_ALL. See the AWS documentation for more info on egress filters.
mesh=Mesh(stack, "AppMesh",
name="myAwsmMesh"
)
The mesh can also be created with the "ALLOW_ALL" egress filter by overwriting the property.
The Mesh needs VirtualRouters as logical units to route to VirtualNodes.
Virtual routers handle traffic for one or more virtual services within your mesh. After you create a virtual router, you can create and associate routes for your virtual router that direct incoming requests to different virtual nodes.
The router can also be created using the constructor and passing in the mesh instead of calling the addVirtualRouter() method for the mesh.
mesh = appmesh.Mesh(stack, "AppMesh",
    name="myAwsmMesh",
    mesh_spec={
        "egress_filter": appmesh.MeshFilterType.ALLOW_ALL
    }
)
router = appmesh.VirtualRouter(stack, "router",
    # notice that mesh is a required property when creating a router with a new statement
    mesh=mesh,
    port_mappings=[{
        "port": 8081,
        "protocol": appmesh.Protocol.HTTP
    }]
)
The listener protocol can be either HTTP or TCP.
The same pattern applies to all constructs within the appmesh library: for any mesh.addXYZ method, a constructor can be used instead. This is particularly useful when cross-stack resources are required, for example when the mesh is created as part of an infrastructure stack while resources such as nodes are kept in the application stack.
Adding VirtualService
A virtual service is an abstraction of a real service that is provided by a virtual node directly or indirectly by means of a virtual router. Dependent services call your virtual service by its virtualServiceName, and those requests are routed to the virtual node or virtual router that is specified as the provider for the virtual service.
We recommend that you use the service discovery name of the real service that you're targeting (such as my-service.default.svc.cluster.local).
When creating a virtual service:
If you want the virtual service to spread traffic across multiple virtual nodes, specify a Virtual router.
If you want the virtual service to reach a virtual node directly, without a virtual router, specify a Virtual node.
Note that only one of virtualNode or virtualRouter may be specified.
Adding a VirtualNode
A virtual node acts as a logical pointer to a particular task group, such as an Amazon ECS service or a Kubernetes deployment.
When you create a virtual node, you must specify the DNS service discovery hostname for your task group. Any inbound traffic that your virtual node expects should be specified as a listener. Any outbound traffic that your virtual node expects to reach should be specified as a backend.
The response metadata for your new virtual node contains the Amazon Resource Name (ARN) that is associated with the virtual node. Set this value (either the full ARN or the truncated resource name) as the APPMESH_VIRTUAL_NODE_NAME environment variable for your task group's Envoy proxy container in your task definition or pod spec. For example, the value could be mesh/default/virtualNode/simpleapp. This is then mapped to the node.id and node.cluster Envoy parameters.
Note
If you require your Envoy stats or tracing to use a different name, you can override the node.cluster value that is set by APPMESH_VIRTUAL_NODE_NAME with the APPMESH_VIRTUAL_NODE_CLUSTER environment variable.
The listeners property can be left blank and added later with the mesh.addListeners() method. The healthcheck property is optional, but if you specify a listener, portMappings must contain at least one entry.
Adding a Route
A route is associated with a virtual router, and it's used to match requests for a virtual router and distribute traffic accordingly to its associated virtual nodes.
You can use the prefix parameter in your route specification for path-based routing of requests. For example, if your virtual service name is my-service.local and you want the route to match requests to my-service.local/metrics, your prefix should be /metrics.
If your route matches a request, you can distribute traffic to one or more target virtual nodes with relative weighting.
NOTE: AutoScalingGroup has a property called allowAllOutbound (allowing the instances to contact the
internet) which is set to true by default. Be sure to set this to false if you don't want
your instances to be able to start arbitrary connections.
Machine Images (AMIs)
AMIs control the OS that gets launched when you start your EC2 instance. The EC2
library contains constructs to select the AMI you want to use.
Depending on the type of AMI, you select it a different way.
The latest version of Amazon Linux and Microsoft Windows images are
selectable by instantiating one of these classes:
# Pick a Windows edition to use
windows = ec2.WindowsImage(ec2.WindowsVersion.WINDOWS_SERVER_2019_ENGLISH_FULL_BASE)

# Pick the right Amazon Linux edition. All arguments shown are optional
# and will default to these values when omitted.
amzn_linux = ec2.AmazonLinuxImage(
generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX,
edition=ec2.AmazonLinuxEdition.STANDARD,
virtualization=ec2.AmazonLinuxVirt.HVM,
storage=ec2.AmazonLinuxStorage.GENERAL_PURPOSE
)
# For other custom (Linux) images, instantiate a `GenericLinuxImage` with
# a map giving the AMI to use for each region:
linux = ec2.GenericLinuxImage({
    "us-east-1": "ami-97785bed",
    "eu-west-1": "ami-12345678"
})
NOTE: The Amazon Linux images selected will be cached in your cdk.json, so that your
AutoScalingGroups don't automatically change out from under you when you're making unrelated
changes. To update to the latest version of Amazon Linux, remove the cache entry from the context
section of your cdk.json.
We will add command-line options to make this step easier in the future.
AutoScaling Instance Counts
AutoScalingGroups make it possible to raise and lower the number of instances in the group,
in response to (or in advance of) changes in workload.
When you create your AutoScalingGroup, you specify a minCapacity and a
maxCapacity. AutoScaling policies that respond to metrics will never go higher
or lower than the indicated capacity (but scheduled scaling actions might, see
below).
There are three ways to scale your capacity:
In response to a metric (also known as step scaling); for example, you
might want to scale out if the CPU usage across your cluster starts to rise,
and scale in when it drops again.
By trying to keep a certain metric around a given value (also known as
target tracking scaling); you might want to automatically scale out and in to
keep your CPU usage around 50%.
On a schedule; you might want to organize your scaling around traffic
flows you expect, by scaling out in the morning and scaling in in the
evening.
The general pattern of autoscaling will look like this:
This type of scaling scales in and out in deterministic steps that you
configure, in response to metric values. For example, your scaling strategy to
scale in response to a metric that represents your average worker pool usage
might look like this:
(Note that this is not necessarily a recommended scaling strategy, but it's
a possible one. You will have to determine what thresholds are right for you).
Note that in order to set up this scaling strategy, you will have to emit a
metric representing your worker utilization from your instances. After that,
you would configure the scaling something like this:
worker_utilization_metric=cloudwatch.Metric(
namespace="MyService",
metric_name="WorkerUtilization"
)
capacity.scale_on_metric("ScaleToCPU",
    metric=worker_utilization_metric,
    scaling_steps=[
        {"upper": 10, "change": -1},
        {"lower": 50, "change": +1},
        {"lower": 70, "change": +3}
    ],
    # Change this to AdjustmentType.PERCENT_CHANGE_IN_CAPACITY to interpret the
    # 'change' numbers above as percentages instead of capacity counts.
    adjustment_type=autoscaling.AdjustmentType.CHANGE_IN_CAPACITY
)
The AutoScaling construct library will create the required CloudWatch alarms and
AutoScaling policies for you.
Target Tracking Scaling
This type of scaling scales in and out in order to keep a metric around a value
you prefer. There are four types of predefined metrics you can track, or you can
choose to track a custom metric. If you do choose to track a custom metric,
be aware that the metric has to represent instance utilization in some way
(AutoScaling will scale out if the metric is higher than the target, and scale
in if the metric is lower than the target).
If you configure multiple target tracking policies, AutoScaling will use the
one that yields the highest capacity.
The following example scales to keep the CPU usage of your instances around
50% utilization:
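A sketch, assuming an auto_scaling_group defined elsewhere:
auto_scaling_group.scale_on_cpu_utilization("KeepSpareCPU",
    target_utilization_percent=50
)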
Scheduled Scaling
This type of scaling is used to change capacities based on time. It works by
changing minCapacity, maxCapacity and desiredCapacity of the
AutoScalingGroup, and so can be used for two purposes:
Scale in and out on a schedule by setting the minCapacity high or
the maxCapacity low.
Still allow the regular scaling actions to do their job, but restrict
the range they can scale over (by setting both minCapacity and
maxCapacity but changing their range over time).
A schedule is expressed as a cron expression. The Schedule class has a cron method to help build cron expressions.
The following example scales the fleet out in the morning, going back to natural
scaling (all the way down to 1 instance if necessary) at night:
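A sketch, assuming an auto_scaling_group and autoscaling as the module alias:
auto_scaling_group.scale_on_schedule("PrescaleInTheMorning",
    schedule=autoscaling.Schedule.cron(hour="8", minute="0"),
    min_capacity=20
)
auto_scaling_group.scale_on_schedule("AllowDownscalingAtNight",
    schedule=autoscaling.Schedule.cron(hour="20", minute="0"),
    min_capacity=1
)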
See the documentation of the @aws-cdk/aws-ec2 package for more information
about allowing connections between resources backed by instances.
Future work
CloudWatch Events (impossible to add currently as the AutoScalingGroup ARN is
necessary to make this rule and this cannot be accessed from CloudFormation).
After requesting a certificate, you will need to prove that you own the
domain in question before the certificate will be granted. The CloudFormation
deployment will wait until this verification process has been completed.
Because of this wait time, it's better to provision your certificates
either in a separate stack from your main service, or provision them
manually and import them into your CDK application.
The CDK also provides a custom resource which can be used for automatic
validation if the DNS records for the domain are managed through Route53 (see
below).
Email validation
Email-validated certificates (the default) are validated by receiving an
email on one of a number of predefined domains and following the instructions
in the email.
Automatic DNS-validated certificates using Route53
The DnsValidatedCertificate class provides a Custom Resource by which
you can request a TLS certificate from AWS Certificate Manager that is
automatically validated using a cryptographically secure DNS record. For this to
work, there must be a Route 53 public zone that is responsible for serving
records under the Domain Name of the requested certificate. For example, if you
request a certificate for www.example.com, there must be a Route 53 public
zone example.com that provides authoritative records for the domain.
Custom Resources are CloudFormation resources that are implemented by
arbitrary user code. They can do arbitrary lookups or modifications
during a CloudFormation deployment.
You will typically use AWS Lambda to implement a Construct as a
Custom Resource (though SNS topics can be used as well). Your Lambda function
will be sent a CREATE, UPDATE or DELETE message, depending on the
CloudFormation life cycle. It will perform whatever actions it needs to, and
then return any number of output values which will be available as attributes
of your Construct. In turn, those can be used as input to other Constructs in
your model.
In general, consumers of your Construct will not need to care whether
it is implemented in term of other CloudFormation resources or as a
custom resource.
Note: when implementing your Custom Resource using a Lambda, use
a SingletonLambda so that even if your custom resource is instantiated
multiple times, the Lambda will only get uploaded once.
Example
The following shows an example of declaring a Custom Resource that copies
files into an S3 bucket during deployment (the implementation of the actual
Lambda handler is elided for brevity).
This is a developer preview (public beta) module. Releases might lack important features and might have
future breaking changes.
This API is still under active development and subject to non-backward
compatible changes or removal in any future version. Use of the API is not recommended in production
environments. Experimental APIs are not subject to the Semantic Versioning model.
A CloudFront construct - for setting up the AWS CDN with ease!
This is a developer preview (public beta) module. Releases might lack important features and might have
future breaking changes.
This API is still under active development and subject to non-backward
compatible changes or removal in any future version. Use of the API is not recommended in production
environments. Experimental APIs are not subject to the Semantic Versioning model.
Add a CloudTrail construct - for ease of setting up CloudTrail logging in your account
This creates the same setup as above - but also logs events to a created CloudWatch Log stream.
By default, the created log group has a retention period of 365 days, but this is also configurable.
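A sketch of enabling CloudWatch Logs delivery via the sendToCloudWatchLogs property:
trail = cloudtrail.Trail(self, "MyAmazingCloudTrail",
    send_to_cloud_watch_logs=True
)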
To use CloudTrail event selectors to log specific S3 events,
you can use the CloudTrailProps configuration object.
Example:
import aws_cdk.aws_cloudtrail as cloudtrail

trail = cloudtrail.Trail(self, "MyAmazingCloudTrail")

# Adds an event selector to the bucket magic-bucket.
# By default, this includes management events and all operations (Read + Write)
trail.add_s3_event_selector(["arn:aws:s3:::magic-bucket/"])

# Adds an event selector to the bucket foo, with a specific configuration
trail.add_s3_event_selector(["arn:aws:s3:::foo/"],
    include_management_events=False,
    read_write_type=cloudtrail.ReadWriteType.ALL
)
Metric objects represent a metric that is emitted by AWS services or your own
application, such as CPUUsage, FailureCount or Bandwidth.
Metric objects can be constructed directly or are exposed by resources as
attributes. Resources that expose metrics will have functions that look
like metricXxx() which will return a Metric object, initialized with defaults
that make sense.
For example, lambda.Function objects have the fn.metricErrors() method, which
represents the amount of errors reported by that Lambda function:
errors = fn.metric_errors()
Aggregation
To graph or alarm on metrics you must aggregate them first, using a function
like Average or a percentile function like P99. By default, most Metric objects
returned by CDK libraries will be configured as Average over 300 seconds (5 minutes).
The exception is if the metric represents a count of discrete events, such as
failures. In that case, the Metric object will be configured as Sum over 300 seconds, i.e. it represents the number of times that event occurred over the
time period.
If you want to change the default aggregation of the Metric object (for example,
the function or the period), you can do so by passing additional parameters
to the metric function call:
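For example, a sketch that aggregates over 15 minutes instead of the default 5:
from aws_cdk.core import Duration

# aggregate over 15 minutes instead of the default 5
errors = fn.metric_errors(
    period=Duration.minutes(15)
)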
This function also allows changing the metric label or color (which will be
useful when embedding them in graphs, see below).
Rates versus Sums
The reason for using Sum to count discrete events is that some events are
emitted as either 0 or 1 (for example Errors for a Lambda) and some are
only emitted as 1 (for example NumberOfMessagesPublished for an SNS
topic).
In case 0-metrics are emitted, it makes sense to take the Average of this
metric: the result will be the fraction of errors over all executions.
If 0-metrics are not emitted, the Average will always be equal to 1,
and not be very useful.
In order to simplify the mental model of Metric objects, we default to
aggregating using Sum, which will be the same for both metrics types. If you
happen to know the Metric you want to alarm on makes sense as a rate
(Average) you can always choose to change the statistic.
Alarms
Alarms can be created on metrics in one of two ways. Either create an Alarm
object, passing the Metric object to set the alarm on:
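A minimal sketch of this first approach (the metric's create_alarm method offers an equivalent alternative):
cloudwatch.Alarm(self, "Alarm",
    metric=fn.metric_errors(),
    threshold=100,
    evaluation_periods=2
)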
The most important properties to set when creating an Alarm are:
threshold: the value to compare the metric against.
comparisonOperator: the comparison operation to use, defaults to metric >= threshold.
evaluationPeriods: how many consecutive periods the metric has to be
breaching the threshold for the alarm to trigger.
Dashboards
Dashboards are sets of Widgets stored server-side which can be accessed quickly
from the AWS console. Available widgets are graphs of a metric over time, the
current value of a metric, or a static piece of Markdown which explains what the
graphs mean.
The following widgets are available:
GraphWidget -- shows any number of metrics on both the left and right
vertical axes.
AlarmWidget -- shows the graph and alarm line for a single alarm.
SingleValueWidget -- shows the current value of a set of metrics.
TextWidget -- shows some static Markdown.
Graph widget
A graph widget can display any number of metrics on either the left or
right vertical axis:
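A sketch, assuming a dashboard and two metrics defined elsewhere:
dashboard.add_widgets(cloudwatch.GraphWidget(
    title="Executions vs error rate",
    left=[execution_count_metric],
    right=[error_count_metric]
))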
The widgets on a dashboard are visually laid out in a grid that is 24 columns
wide. Normally you specify X and Y coordinates for the widgets on a Dashboard,
but because this is inconvenient to do manually, the library contains a simple
layout system to help you lay out your dashboards the way you want them to.
Widgets have a width and height property, and they will be automatically
laid out either horizontally or vertically stacked to fill out the available
space.
Widgets are added to a Dashboard by calling add(widget1, widget2, ...).
Widgets given in the same call will be laid out horizontally. Widgets given
in different calls will be laid out vertically. To make more complex layouts,
you can use the following widgets to pack widgets together in different ways:
AWS CodeBuild is a fully managed continuous integration service that compiles
source code, runs tests, and produces software packages that are ready to
deploy. With CodeBuild, you don’t need to provision, manage, and scale your own
build servers. CodeBuild scales continuously and processes multiple builds
concurrently, so your builds are not left waiting in a queue. You can get
started quickly by using prepackaged build environments, or you can create
custom build environments that use your own build tools. With CodeBuild, you are
charged by the minute for the compute resources you use.
Installation
Install the module:
$ npm i @aws-cdk/aws-codebuild
Import it into your code:
import aws_cdk.aws_codebuild as codebuild
The codebuild.Project construct represents a build project resource. See the
reference documentation for a comprehensive list of initialization properties,
methods and attributes.
Source
Build projects are usually associated with a source, which is specified via
the source property which accepts a class that extends the Source
abstract base class.
The default is to have no source associated with the build project;
the buildSpec option is required in that case.
Here's a CodeBuild project with no source which simply prints Hello, CodeBuild!:
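A minimal sketch:
project = codebuild.Project(self, "MyProject",
    build_spec=codebuild.BuildSpec.from_object({
        "version": "0.2",
        "phases": {
            "build": {
                "commands": ["echo 'Hello, CodeBuild!'"]
            }
        }
    })
)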
These source types can be used to build code from a GitHub repository.
Example:
git_hub_source=codebuild.Source.git_hub(
owner="awslabs",
repo="aws-cdk",
    webhook=True,  # optional, default: true if `webhookFilters` were provided, false otherwise
    webhook_filters=[
codebuild.FilterGroup.in_event_of(codebuild.EventAction.PUSH).and_branch_is("master")
]
)
To provide GitHub credentials, please either go to AWS CodeBuild Console to connect
or call ImportSourceCredentials to persist your personal access token.
Example:
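For example, with the AWS CLI (the token value is a placeholder):
$ aws codebuild import-source-credentials --server-type GITHUB --auth-type PERSONAL_ACCESS_TOKEN --token <token_value>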
To add a CodeBuild Project as an Action to CodePipeline,
use the PipelineProject class instead of Project.
It's a simple class that doesn't allow you to specify sources,
secondarySources, artifacts or secondaryArtifacts,
as these are handled by setting input and output CodePipeline Artifact instances on the Action,
instead of setting them on the Project.
For more details, see the readme of the @aws-cdk/aws-codepipeline package.
Caching
You can save time when your project builds by using a cache. A cache can store reusable pieces of your build environment and use them across multiple builds. Your build project can use one of two types of caching: Amazon S3 or local. In general, S3 caching is a good option for small and intermediate build artifacts that are more expensive to build than to download. Local caching is a good option for large intermediate build artifacts because the cache is immediately available on the build host.
S3 Caching
With S3 caching, the cache is stored in an S3 bucket which is available from multiple hosts.
Local Caching
With local caching, the cache is stored on the CodeBuild instance itself. This is simple,
cheap and fast, but CodeBuild cannot guarantee reuse of the same instance, and hence cannot
guarantee cache hits. For example, when a build starts and caches files locally, if two subsequent builds start at the same time afterwards, only one of those builds would get the cache. Three different cache modes are supported, which can be turned on individually:
LocalCacheMode.SOURCE caches Git metadata for primary and secondary sources.
LocalCacheMode.DOCKER_LAYER caches existing Docker layers.
LocalCacheMode.CUSTOM caches directories you specify in the buildspec file.
By default, projects use a small instance with an Ubuntu 18.04 image. You
can use the environment property to customize the build environment:
buildImage defines the Docker image used. See Images below for
details on how to define build images.
computeType defines the instance type used for the build.
privileged can be set to true to allow privileged access.
environmentVariables can be set at this level (and also at the project
level).
Images
The CodeBuild library supports both Linux and Windows images via the
LinuxBuildImage and WindowsBuildImage classes, respectively.
You can either specify one of the predefined Windows/Linux images by using one
of the constants such as WindowsBuildImage.WIN_SERVER_CORE_2016_BASE or
LinuxBuildImage.UBUNTU_14_04_RUBY_2_5_1.
Alternatively, you can specify a custom image using one of the static methods on
XxxBuildImage:
Use .fromDockerRegistry(image[, { secretsManagerCredentials }]) to reference an image in any public or private Docker registry.
Use .fromEcrRepository(repo[, tag]) to reference an image available in an
ECR repository.
Use .fromAsset(directory) to use an image created from a
local asset.
The following example shows how to define an image from a Docker asset:
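A sketch, assuming a Dockerfile in a local demo-image directory:
build_image = codebuild.LinuxBuildImage.from_asset(self, "MyImage",
    directory="demo-image"
)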
CodeBuild projects can be used either as a source for events or be triggered
by events via an event rule.
Using Project as an event target
The @aws-cdk/aws-events-targets.CodeBuildProject allows using an AWS CodeBuild
project as an AWS CloudWatch event rule target:
# start build when a commit is pushed
import aws_cdk.aws_events_targets as targets

code_commit_repository.on_commit("OnCommit", targets.CodeBuildProject(project))
Using Project as an event source
To define Amazon CloudWatch event rules for build projects, use one of the onXxx
methods:
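For example, a sketch that invokes a Lambda function fn whenever the build state changes:
rule = project.on_state_change("BuildStateChange",
    target=targets.LambdaFunction(fn)
)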
Note that the identifier property is required for both secondary sources and
artifacts.
The contents of the secondary source are available to the build under the
directory specified by the CODEBUILD_SRC_DIR_<identifier> environment variable
(so, CODEBUILD_SRC_DIR_source2 in the above case).
The secondary artifacts have their own section in the buildspec, under the
regular artifacts one. Each secondary artifact has its own section, beginning
with their identifier.
So, a buildspec for the above Project could look something like this:
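A sketch, assuming a secondary source with identifier source2 and artifacts with identifiers artifact1 and artifact2:
version: 0.2
phases:
  build:
    commands:
      - cd $CODEBUILD_SRC_DIR_source2
      - touch output2.txt
artifacts:
  secondary-artifacts:
    artifact1:
      base-directory: $CODEBUILD_SRC_DIR
      files:
        - output1.txt
    artifact2:
      base-directory: $CODEBUILD_SRC_DIR_source2
      files:
        - output2.txt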
Definition of VPC configuration in CodeBuild Project
Typically, resources in a VPC are not accessible by AWS CodeBuild. To enable
access, you must provide additional VPC-specific configuration information as
part of your CodeBuild project configuration. This includes the VPC ID, the
VPC subnet IDs, and the VPC security group IDs. VPC-enabled builds are then
able to access resources inside your VPC.
Use Cases
VPC connectivity from AWS CodeBuild builds makes it possible to:
Run integration tests from your build against data in an Amazon RDS database that's isolated on a private subnet.
Query data in an Amazon ElastiCache cluster directly from tests.
Interact with internal web services hosted on Amazon EC2, Amazon ECS, or services that use internal Elastic Load Balancing.
Retrieve dependencies from self-hosted, internal artifact repositories, such as PyPI for Python, Maven for Java, and npm for Node.js.
Access objects in an Amazon S3 bucket configured to allow access through an Amazon VPC endpoint only.
Query external web services that require fixed IP addresses through the Elastic IP address of the NAT gateway or NAT instance associated with your subnet(s).
Your builds can access any resource that's hosted in your VPC.
Enable Amazon VPC Access in your CodeBuild Projects
Pass the VPC when defining your Project, then make sure to
give the CodeBuild's security group the right permissions
to access the resources that it needs by using the
connections object.
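A sketch, assuming a load_balancer resource defined elsewhere:
vpc = ec2.Vpc(self, "MyVPC")
project = codebuild.Project(self, "MyProject",
    vpc=vpc,
    build_spec=codebuild.BuildSpec.from_object({
        "version": "0.2"
    })
)
# allow the project's security group to reach the load balancer on port 443
project.connections.allow_to(load_balancer, ec2.Port.tcp(443))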
# trigger is established for all repository actions on all branches by default.
repo.notify("arn:aws:sns:*:123456789012:my_topic")
Events
CodeCommit repositories emit Amazon CloudWatch events for certain activities.
Use the repo.onXxx methods to define rules that trigger on these events
and invoke targets as a result:
# starts a CodeBuild project when a commit is pushed to the "master" branch of the repo
repo.on_commit("CommitToMaster",
target=targets.CodeBuildProject(project),
branches=["master"]
)
# publishes a message to an Amazon SNS topic when a comment is made on a pull request
rule = repo.on_comment_on_pull_request("CommentOnPullRequest",
target=targets.SnsTopic(my_topic)
)
AWS CodeDeploy is a deployment service that automates application deployments to
Amazon EC2 instances, on-premises instances, serverless Lambda functions, or
Amazon ECS services.
The CDK currently supports Amazon EC2, on-premise and AWS Lambda applications.
EC2/on-premise Applications
To create a new CodeDeploy Application that deploys to EC2/on-premise instances:
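A sketch:
application = codedeploy.ServerApplication(self, "CodeDeployApplication",
    application_name="MyApplication"
)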
To create a new CodeDeploy Deployment Group that deploys to EC2/on-premise instances:
deployment_group = codedeploy.ServerDeploymentGroup(self, "CodeDeployDeploymentGroup",
    application=application,
    deployment_group_name="MyDeploymentGroup",
    auto_scaling_groups=[asg1, asg2],
    # adds User Data that installs the CodeDeploy agent on your auto-scaling group hosts
    # default: true
    install_agent=True,
    # adds EC2 instances matching tags
    ec2_instance_tags=codedeploy.InstanceTagSet({
        # any instance with tags satisfying
        # key1=v1 or key1=v2 or key2 (any value) or value v3 (any key)
        # will match this group
        "key1": ["v1", "v2"],
        "key2": [],
        "": ["v3"]
    }),
    # adds on-premise instances matching tags
    on_premise_instance_tags=codedeploy.InstanceTagSet(
        {"key1": ["v1", "v2"]},
        {"key2": ["v3"]}
    ),
    # CloudWatch alarms
    alarms=[
        cloudwatch.Alarm()
    ],
    # whether to ignore failure to fetch the status of alarms from CloudWatch
    # default: false
    ignore_poll_alarms_failure=False,
    # auto-rollback configuration
    auto_rollback={
        "failed_deployment": True,  # default: true
        "stopped_deployment": True,  # default: false
        "deployment_in_alarm": True
    }
)
All properties are optional - if you don't provide an Application,
one will be automatically created.
The default Deployment Configuration is ServerDeploymentConfig.ONE_AT_A_TIME.
You can also create a custom Deployment Configuration:
deployment_config = codedeploy.ServerDeploymentConfig(self, "DeploymentConfiguration",
    deployment_config_name="MyDeploymentConfiguration",  # optional property
    # one of these is required, but both cannot be specified at the same time
    min_healthy_host_count=2,
    min_healthy_host_percentage=75
)
To enable traffic shifting deployments for Lambda functions, CodeDeploy uses Lambda Aliases, which can balance incoming traffic between two different versions of your function.
Before deployment, the alias sends 100% of invokes to the version used in production.
When you publish a new version of the function to your stack, CodeDeploy will send a small percentage of traffic to the new version, monitor, and validate before shifting 100% of traffic to the new version.
To create a new CodeDeploy Deployment Group that deploys to a Lambda function:
import aws_cdk.aws_codedeploy as codedeploy
# `lambda` is a reserved word in Python, so alias the module
import aws_cdk.aws_lambda as lambda_

my_application = codedeploy.LambdaApplication()
func = lambda_.Function()
version = func.add_version("1")
version1_alias = lambda_.Alias(self, "alias",
    alias_name="prod",
    version=version
)
deployment_group = codedeploy.LambdaDeploymentGroup(stack, "BlueGreenDeployment",
    application=my_application,  # optional property: one will be created for you if not provided
    alias=version1_alias,
    deployment_config=codedeploy.LambdaDeploymentConfig.LINEAR_10PERCENT_EVERY_1MINUTE
)
In order to deploy a new version of this function:
Increment the version, e.g. const version = func.addVersion('2').
Re-deploy the stack (this will trigger a deployment).
Monitor the CodeDeploy deployment as traffic shifts between the versions.
Rollbacks and Alarms
CodeDeploy will roll back if the deployment fails. You can optionally trigger a rollback when one or more alarms are in a failed state:
deployment_group=codedeploy.LambdaDeploymentGroup(stack, "BlueGreenDeployment",
alias=alias,
deployment_config=codedeploy.LambdaDeploymentConfig.LINEAR_10PERCENT_EVERY_1MINUTE,
alarms=[
        # pass some alarms when constructing the deployment group
        cloudwatch.Alarm(stack, "Errors",
comparison_operator=cloudwatch.ComparisonOperator.GREATER_THAN_THRESHOLD,
threshold=1,
evaluation_periods=1,
metric=alias.metric_errors()
)
]
)
# or add alarms to an existing group
deployment_group.add_alarm(cloudwatch.Alarm(stack, "BlueGreenErrors",
comparison_operator=cloudwatch.ComparisonOperator.GREATER_THAN_THRESHOLD,
threshold=1,
evaluation_periods=1,
metric=blue_green_alias.metric_errors()
))
Pre and Post Hooks
CodeDeploy allows you to run an arbitrary Lambda function before traffic shifting actually starts (PreTraffic Hook) and after it completes (PostTraffic Hook).
With either hook, you have the opportunity to run logic that determines whether the deployment must succeed or fail.
For example, with PreTraffic hook you could run integration tests against the newly created Lambda version (but not serving traffic). With PostTraffic hook, you could run end-to-end validation checks.
warm_up_user_cache = lambda_.Function()
end_to_end_validation = lambda_.Function()

# pass a hook when creating the deployment group
deployment_group = codedeploy.LambdaDeploymentGroup(stack, "BlueGreenDeployment",
alias=alias,
deployment_config=codedeploy.LambdaDeploymentConfig.LINEAR_10PERCENT_EVERY_1MINUTE,
pre_hook=warm_up_user_cache
)
# or configure one on an existing deployment group
deployment_group.on_post_hook(end_to_end_validation)
By default, the Pipeline will poll the Bucket to detect changes.
You can change that behavior to use CloudWatch Events by setting the trigger
property to S3Trigger.EVENTS (it's S3Trigger.POLL by default).
If you do that, make sure the source Bucket is part of an AWS CloudTrail Trail -
otherwise, the CloudWatch Events will not be emitted,
and your Pipeline will not react to changes in the Bucket.
You can do it through the CDK:
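A sketch, assuming the source bucket and output artifact are defined elsewhere:
import aws_cdk.aws_cloudtrail as cloudtrail

key = "some/key.zip"
trail = cloudtrail.Trail(self, "CloudTrail")
# log write events for the source object so CloudWatch Events are emitted
trail.add_s3_event_selector([source_bucket.arn_for_objects(key)],
    read_write_type=cloudtrail.ReadWriteType.WRITE_ONLY,
    include_management_events=False
)
source_action = codepipeline_actions.S3SourceAction(
    action_name="S3Source",
    bucket=source_bucket,
    bucket_key=key,
    output=source_output,
    trigger=codepipeline_actions.S3Trigger.EVENTS
)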
When you want to have multiple inputs and/or outputs for a Project used in a
Pipeline, instead of using the secondarySources and secondaryArtifacts
properties of the Project class, you need to use the extraInputs and
extraOutputs properties of the CodeBuild CodePipeline
Actions. Example:
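A sketch, assuming two source outputs defined earlier:
build_action = codepipeline_actions.CodeBuildAction(
    action_name="Build",
    project=project,
    input=source_output1,
    extra_inputs=[source_output2],
    # naming the output artifacts explicitly makes the buildspec's
    # secondary-artifacts identifiers predictable
    outputs=[
        codepipeline.Artifact("artifact1"),
        codepipeline.Artifact("artifact2")
    ]
)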
Note: when a CodeBuild Action in a Pipeline has more than one output, it
only uses the secondary-artifacts field of the buildspec, never the
primary output specification directly under artifacts. Because of that, it
pays to explicitly name all output artifacts of that Action, like we did
above, so that you know what name to use in the buildspec.
Note that a Jenkins provider
(identified by the provider name-category(build/test)-version tuple)
must always be registered in the given account, in the given AWS region,
before it can be used in CodePipeline.
With a JenkinsProvider,
we can create a Jenkins Action:
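A sketch, assuming a jenkins_provider registered as described above:
jenkins_action = codepipeline_actions.JenkinsAction(
    action_name="JenkinsBuild",
    jenkins_provider=jenkins_provider,
    project_name="MyProject",
    type=codepipeline_actions.JenkinsActionType.BUILD
)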
This module contains Actions that allow you to deploy to CloudFormation from AWS CodePipeline.
For example, the following code fragment defines a pipeline that automatically deploys a CloudFormation template
directly from a CodeCommit repository, with a manual approval step in between to confirm the changes:
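A sketch of such a pipeline, assuming a CodeCommit repository repo that contains a template.yaml:
source_output = codepipeline.Artifact()
source_action = codepipeline_actions.CodeCommitSourceAction(
    action_name="Source",
    repository=repo,
    output=source_output
)
pipeline = codepipeline.Pipeline(self, "Pipeline",
    stages=[{
        "stage_name": "Source",
        "actions": [source_action]
    }, {
        "stage_name": "Deploy",
        "actions": [
            # prepare a change set from the template in the source
            codepipeline_actions.CloudFormationCreateReplaceChangeSetAction(
                action_name="PrepareChanges",
                stack_name="MyStack",
                change_set_name="StagedChangeSet",
                admin_permissions=True,
                template_path=source_output.at_path("template.yaml"),
                run_order=1
            ),
            # require a human to confirm the staged changes
            codepipeline_actions.ManualApprovalAction(
                action_name="ApproveChanges",
                run_order=2
            ),
            # apply the approved change set
            codepipeline_actions.CloudFormationExecuteChangeSetAction(
                action_name="ExecuteChanges",
                stack_name="MyStack",
                change_set_name="StagedChangeSet",
                run_order=3
            )
        ]
    }]
)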
See the AWS documentation
for more details about using CloudFormation in CodePipeline.
Actions defined by this package
This package defines the following actions:
CloudFormationCreateUpdateStackAction - Deploy a CloudFormation template directly from the pipeline. The indicated stack is created,
or updated if it already exists. If the stack is in a failure state, deployment will fail (unless replaceOnFailure
is set to true, in which case it will be destroyed and recreated).
CloudFormationDeleteStackAction - Delete the stack with the given name.
CloudFormationCreateReplaceChangeSetAction - Prepare a change set to be applied later. You will typically use change sets if you want
to manually verify the changes that are being staged, or if you want to separate the people (or system) preparing the
changes from the people (or system) applying the changes.
CloudFormationExecuteChangeSetAction - Execute a change set prepared previously.
Lambda deployed through CodePipeline
If you want to deploy your Lambda through CodePipeline,
and you don't use assets (for example, because your CDK code and Lambda code are separate),
you can use a special Lambda Code class, CfnParametersCode.
Note that your Lambda must be in a different Stack than your Pipeline.
The Lambda itself will be deployed, alongside the entire Stack it belongs to,
using a CloudFormation CodePipeline Action. Example:
lambda_stack=cdk.Stack(app, "LambdaStack")
lambda_code=lambda.Code.from_cfn_parameters()
lambda.Function(lambda_stack, "Lambda",
code=lambda_code,
handler="index.handler",
runtime=lambda.Runtime.NODEJS_8_10
)
# other resources that your Lambda needs, added to the lambdaStack...

pipeline_stack = cdk.Stack(app, "PipelineStack")
pipeline=codepipeline.Pipeline(pipeline_stack, "Pipeline")
# add the source code repository containing this code to your Pipeline,
# and the source code of the Lambda Function, if they're separate
cdk_source_output = codepipeline.Artifact()
cdk_source_action=codepipeline_actions.CodeCommitSourceAction(
repository=codecommit.Repository(pipeline_stack, "CdkCodeRepo",
repository_name="CdkCodeRepo"
),
action_name="CdkCode_Source",
output=cdk_source_output
)
lambda_source_output=codepipeline.Artifact()
lambda_source_action=codepipeline_actions.CodeCommitSourceAction(
repository=codecommit.Repository(pipeline_stack, "LambdaCodeRepo",
repository_name="LambdaCodeRepo"
),
action_name="LambdaCode_Source",
output=lambda_source_output
)
pipeline.add_stage(
stage_name="Source",
actions=[cdk_source_action, lambda_source_action]
)
# synthesize the Lambda CDK template, using CodeBuild
# the below values are just examples, assuming your CDK code is in TypeScript/JavaScript -
# adjust the build environment and/or commands accordingly
cdk_build_project=codebuild.Project(pipeline_stack, "CdkBuildProject",
environment={
"build_image": codebuild.LinuxBuildImage.UBUNTU_14_04_NODEJS_10_1_0
},
build_spec=codebuild.BuildSpec.from_object({
"version": "0.2",
"phases": {
"install": {
"commands": "npm install"
},
"build": {
"commands": ["npm run build", "npm run cdk synth LambdaStack -- -o ."]
}
},
"artifacts": {
"files": "LambdaStack.template.yaml"
}
})
)
cdk_build_output=codepipeline.Artifact()
cdk_build_action=codepipeline_actions.CodeBuildAction(
action_name="CDK_Build",
project=cdk_build_project,
input=cdk_source_output,
outputs=[cdk_build_output]
)
# build your Lambda code, using CodeBuild
# again, this example assumes your Lambda is written in TypeScript/JavaScript -
# make sure to adjust the build environment and/or commands if they don't match your specific situation
lambda_build_project=codebuild.Project(pipeline_stack, "LambdaBuildProject",
environment={
"build_image": codebuild.LinuxBuildImage.UBUNTU_14_04_NODEJS_10_1_0
},
build_spec=codebuild.BuildSpec.from_object({
"version": "0.2",
"phases": {
"install": {
"commands": "npm install"
},
"build": {
"commands": "npm run build"
}
},
"artifacts": {
"files": ["index.js", "node_modules/**/*"]
}
})
)
lambda_build_output=codepipeline.Artifact()
lambda_build_action=codepipeline_actions.CodeBuildAction(
action_name="Lambda_Build",
project=lambda_build_project,
input=lambda_source_output,
outputs=[lambda_build_output]
)
pipeline.add_stage(
stage_name="Build",
actions=[cdk_build_action, lambda_build_action]
)
# finally, deploy your Lambda Stack
pipeline.add_stage(
stage_name="Deploy",
actions=[
codepipeline_actions.CloudFormationCreateUpdateStackAction(
action_name="Lambda_CFN_Deploy",
template_path=cdk_build_output.at_path("LambdaStack.template.yaml"),
stack_name="LambdaStackDeployedName",
admin_permissions=True,
# map the CfnParametersCode parameters to the S3 location of the built Lambda code
parameter_overrides=lambda_code.assign(lambda_build_output.s3_location),
extra_inputs=[lambda_build_output]
)
]
)
Cross-account actions
If you want to update stacks in a different account,
pass the account property when creating the action:
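For example, a sketch with placeholder stack, artifact and account values:
codepipeline_actions.CloudFormationCreateUpdateStackAction(
    action_name="CloudFormationCreateUpdate",
    stack_name="MyStackName",
    admin_permissions=True,
    template_path=source_output.at_path("template.yaml"),
    # the action will assume a role in this account before deploying
    account="123456789012"
)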
This will create a new stack, called <PipelineStackName>-support-123456789012, in your App,
that will contain the role that the pipeline will assume in account 123456789012 before executing this action.
This support stack will automatically be deployed before the stack containing the pipeline.
You can also pass a role explicitly when creating the action -
in that case, the account property is ignored,
and the action will operate in the same account the role belongs to:
from aws_cdk.core import PhysicalName

# in the stack for account 123456789012...
action_role=iam.Role(other_account_stack, "ActionRole",
assumed_by=iam.AccountPrincipal(pipeline_account),
# the role has to have a physical name set
role_name=PhysicalName.GENERATE_IF_NEEDED
)

# in the pipeline stack...
codepipeline_actions.CloudFormationCreateUpdateStackAction(
# ...
role=action_role
)
AWS CodeDeploy
Server deployments
To use CodeDeploy for EC2/on-premise deployments in a Pipeline:
import aws_cdk.aws_codedeploy as codedeploy

pipeline=codepipeline.Pipeline(self, "MyPipeline",
pipeline_name="MyPipeline"
)
# add the source and build Stages to the Pipeline...
deploy_action=codepipeline_actions.CodeDeployServerDeployAction(
action_name="CodeDeploy",
input=build_output,
deployment_group=deployment_group
)
pipeline.add_stage(
stage_name="Deploy",
actions=[deploy_action]
)
Lambda deployments
To use CodeDeploy for blue-green Lambda deployments in a Pipeline:
# 'lambda_' is the aws_lambda module ('lambda' is a reserved word in Python)
lambda_code=lambda_.Code.from_cfn_parameters()
func=lambda_.Function(lambda_stack, "Lambda",
code=lambda_code,
handler="index.handler",
runtime=lambda_.Runtime.NODEJS_8_10
)
# used to make sure each CDK synthesis produces a different Version
version=func.add_version("NewVersion")
alias=lambda_.Alias(lambda_stack, "LambdaAlias",
alias_name="Prod",
version=version
)
codedeploy.LambdaDeploymentGroup(lambda_stack, "DeploymentGroup",
alias=alias,
deployment_config=codedeploy.LambdaDeploymentConfig.LINEAR_10PERCENT_EVERY_1MINUTE
)
Then, you need to create your Pipeline Stack,
where you will define your Pipeline,
and deploy the lambdaStack using a CloudFormation CodePipeline Action
(see above for a complete example).
ECS
CodePipeline can deploy an ECS service.
The deploy Action receives one input Artifact which contains the image definition file:
deploy_stage=pipeline.add_stage(
stage_name="Deploy",
actions=[
codepipeline_actions.EcsDeployAction(
action_name="DeployAction",
service=service,
# if your file is called imagedefinitions.json,
# use the `input` property,
# and leave out the `imageFile` property
input=build_output,
# if your file name is _not_ imagedefinitions.json,
# use the `imageFile` property,
# and leave out the `input` property
image_file=build_output.at_path("imageDef.json")
)
]
)
AWS S3
To use an S3 Bucket as a deployment target in CodePipeline:
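A minimal sketch, assuming a pipeline and a source_output artifact already exist:
target_bucket = s3.Bucket(self, "MyBucket")
deploy_action = codepipeline_actions.S3DeployAction(
    action_name="S3Deploy",
    bucket=target_bucket,
    input=source_output
)
pipeline.add_stage(
    stage_name="Deploy",
    actions=[deploy_action]
)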
For the manual approval Action, if the notificationTopic has not been provided,
but notifyEmails were,
a new SNS Topic will be created
(and accessible through the notificationTopic property of the Action).
AWS Lambda
This module contains an Action that allows you to invoke a Lambda function in a Pipeline:
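A minimal sketch, assuming fn is an existing Lambda Function:
lambda_action = codepipeline_actions.LambdaInvokeAction(
    action_name="Lambda",
    lambda_=fn
)
pipeline.add_stage(
    stage_name="Lambda",
    actions=[lambda_action]
)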
You can insert the new Stage at an arbitrary point in the Pipeline:
some_stage=pipeline.add_stage(
stage_name="SomeStage",
placement={
# note: you can only specify one of the below properties
"right_before": another_stage,
"just_after": another_stage
}
)
Actions
Actions live in a separate package, @aws-cdk/aws-codepipeline-actions.
To add an Action to a Stage, you can provide it when creating the Stage,
in the actions property,
or you can use the IStage.addAction() method to mutate an existing Stage:
source_stage.add_action(some_action)
Cross-region CodePipelines
You can also use the cross-region feature to deploy resources
(currently, only CloudFormation Stacks are supported)
into a different region than your Pipeline is in.
It works like this:
pipeline=codepipeline.Pipeline(self, "MyFirstPipeline",
# ...
cross_region_replication_buckets={
# note that a physical name of the replication Bucket must be known at synthesis time
"us-west-1": s3.Bucket.from_bucket_attributes(self, "UsWest1ReplicationBucket",
bucket_name="my-us-west-1-replication-bucket",
# optional KMS key
encryption_key=kms.Key.from_key_arn(self, "UsWest1ReplicationKey", "arn:aws:kms:us-west-1:123456789012:key/1234-5678-9012")
)
)
}
)
# later in the code...
codepipeline_actions.CloudFormationCreateUpdateStackAction(
action_name="CFN_US_West_1",
# ...
region="us-west-1"
)
This way, the CFN_US_West_1 Action will operate in the us-west-1 region,
regardless of which region your Pipeline is in.
If you don't provide a bucket for a region (other than the Pipeline's region)
that you're using for an Action,
there will be a new Stack, called <nameOfYourPipelineStack>-support-<region>,
defined for you, containing a replication Bucket.
This new Stack will depend on your Pipeline Stack,
so deploying the Pipeline Stack will deploy the support Stack(s) first.
Example:
$ cdk ls
MyMainStack
MyMainStack-support-us-west-1
$ cdk deploy MyMainStack
# output of cdk deploy here...
See the AWS docs here
for more information on cross-region CodePipelines.
Creating an encrypted replication bucket
If you're passing a replication bucket created in a different stack,
like this:
replication_stack=Stack(app, "ReplicationStack",
env={
"region": "us-west-1"
}
)
key=kms.Key(replication_stack, "ReplicationKey")
replication_bucket=s3.Bucket(replication_stack, "ReplicationBucket",
# as noted above, replication buckets need a set physical name
bucket_name=PhysicalName.GENERATE_IF_NEEDED,
encryption_key=key
)
# later...
codepipeline.Pipeline(pipeline_stack, "Pipeline",
cross_region_replication_buckets={
"us-west-1": replication_bucket
}
)
When trying to encrypt it
(and note that if any of the cross-region actions happen to be cross-account as well,
the bucket has to be encrypted - otherwise the pipeline will fail at runtime),
you cannot use a key directly - KMS keys don't have physical names,
and so you can't reference them across environments.
In this case, you need to use an alias in place of the key when creating the bucket:
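A sketch, reusing the replication_stack from the example above; the construct ids are placeholders:
key = kms.Key(replication_stack, "ReplicationKey")
alias = kms.Alias(replication_stack, "ReplicationAlias",
    # aliasName is required and has to be known at synthesis time
    alias_name=PhysicalName.GENERATE_IF_NEEDED,
    target_key=key
)
replication_bucket = s3.Bucket(replication_stack, "ReplicationBucket",
    bucket_name=PhysicalName.GENERATE_IF_NEEDED,
    encryption_key=alias
)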
A pipeline can be used as a target for a CloudWatch event rule:
import aws_cdk.aws_events_targets as targets
import aws_cdk.aws_events as events

# kick off the pipeline every day
rule=events.Rule(self, "Daily",
schedule=events.Schedule.rate(Duration.days(1))
)
rule.add_target(targets.CodePipeline(pipeline))
When a pipeline is used as an event target, the
"codepipeline:StartPipelineExecution" permission is granted to the AWS
CloudWatch Events service.
Event sources
Pipelines emit CloudWatch events. To define event rules for events emitted by
the pipeline, stages or actions, use the onXxx methods on the respective
construct:
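For example (a sketch, assuming a pipeline, an SNS topic and the aws_events_targets module imported as targets):
pipeline.on_state_change("PipelineStateChange",
    target=targets.SnsTopic(topic)
)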
Higher level constructs for managed rules are available, see Managed Rules. Prefer to use those constructs when available (PRs welcome to add more of those).
Custom rules
To set up a custom rule, define a CustomRule and specify the Lambda Function to run and the trigger types:
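A minimal sketch, assuming my_fn is an existing Lambda Function:
custom_rule = CustomRule(self, "CustomRule",
    lambda_function=my_fn,
    configuration_changes=True,
    periodic=True
)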
By default rules are triggered by changes to all resources. Use the scopeToResource(), scopeToResources() or scopeToTag() methods to restrict the scope of both managed and custom rules:
ssh_rule=ManagedRule(self, "SSH",
identifier="INCOMING_SSH_DISABLED"
)
# Restrict to a specific security group
ssh_rule.scope_to_resource("AWS::EC2::SecurityGroup", "sg-1234567890abcdefgh")
custom_rule=CustomRule(self, "CustomRule",
lambda_function=my_fn,
configuration_changes=True
)
# Restrict to a specific tag
custom_rule.scope_to_tag("Cost Center", "MyApp")
Only one type of scope restriction can be added to a rule (the last call to scopeToXxx() sets the scope).
Events
To define Amazon CloudWatch event rules, use the onComplianceChange() or onReEvaluationStatus() methods:
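For example (a sketch, assuming an SNS topic and the aws_events_targets module imported as targets):
ssh_rule.on_compliance_change("TopicEvent",
    target=targets.SnsTopic(topic)
)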
Global Tables builds upon DynamoDB’s global footprint to provide you with a fully managed, multi-region, and multi-master database that provides fast, local, read and write performance for massively scaled, global applications. Global Tables replicates your Amazon DynamoDB tables automatically across your choice of AWS regions.
Here is a minimal deployable Global DynamoDB tables definition:
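A sketch, assuming the aws_dynamodb_global module name; the table name, key and regions are placeholders:
import aws_cdk.aws_dynamodb as dynamodb
from aws_cdk.aws_dynamodb_global import GlobalTable

GlobalTable(app, "globdynamodb",
    partition_key=dynamodb.Attribute(name="hashKey", type=dynamodb.AttributeType.STRING),
    table_name="GlobalTable",
    regions=["us-east-1", "us-east-2", "us-west-2"]
)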
AWS Global DynamoDB Tables is an odd case currently. The way this package works:
It creates a DynamoDB table in a separate stack in each DynamoDBGlobalStackProps.region specified.
It deploys a CloudFormation Custom Resource to your stack's region that invokes a Lambda which runs the AWS CLI to call createGlobalTable().
Notes
GlobalTable() will set dynamoProps.stream to be NEW_AND_OLD_IMAGES since this is a required attribute for AWS Global DynamoDB tables to work. The package will throw an error if any other stream specification is set in DynamoDBGlobalStackProps.
When a table is defined, you must define its schema using the partitionKey
(required) and sortKey (optional) properties, as in the sketch below.
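A minimal sketch of such a schema; the attribute names are placeholders:
import aws_cdk.aws_dynamodb as dynamodb

table = dynamodb.Table(self, "Table",
    partition_key=dynamodb.Attribute(name="id", type=dynamodb.AttributeType.STRING),
    sort_key=dynamodb.Attribute(name="createdAt", type=dynamodb.AttributeType.NUMBER)
)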
Billing Mode
DynamoDB supports two billing modes:
PROVISIONED - the default mode where the table and global secondary indexes have configured read and write capacity.
PAY_PER_REQUEST - on-demand pricing and scaling. You only pay for what you use and there is no read and write capacity for the table or its global secondary indexes.
You can have DynamoDB automatically raise and lower the read and write capacities
of your table by setting up autoscaling. You can use this to either keep your
tables at a desired utilization level, or by scaling up and down at preconfigured
times of the day:
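A sketch of utilization- and schedule-based scaling, assuming the aws_applicationautoscaling module for the Schedule class:
import aws_cdk.aws_applicationautoscaling as appscaling

read_scaling = table.auto_scale_read_capacity(
    min_capacity=1,
    max_capacity=50
)
# keep the table at a desired utilization level
read_scaling.scale_on_utilization(
    target_utilization_percent=50
)
# or scale up at a preconfigured time of day
read_scaling.scale_on_schedule("ScaleUpInTheMorning",
    schedule=appscaling.Schedule.cron(hour="8", minute="0"),
    min_capacity=20
)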
Auto-scaling is only relevant for tables with the PROVISIONED billing mode.
All default constructs require EC2 instances to be launched inside a VPC, so
you should generally start by defining a VPC whenever you need to launch
instances for your project.
Subnet Types
A VPC consists of one or more subnets that instances can be placed into. CDK
distinguishes three different subnet types:
Public - public subnets connect directly to the Internet using an
Internet Gateway. If you want your instances to have a public IP address
and be directly reachable from the Internet, you must place them in a
public subnet.
Private - instances in private subnets are not directly routable from the
Internet, and connect out to the Internet via a NAT gateway. By default, a
NAT gateway is created in every public subnet for maximum availability. Be
aware that you will be charged for NAT gateways.
Isolated - isolated subnets do not route from or to the Internet, and
as such do not require NAT gateways. They can only connect to or be
connected to from other instances in the same VPC.
A default VPC configuration will create public and private subnets, but not
isolated subnets. See Advanced Subnet Configuration below for information
on how to change the default subnet configuration.
Constructs using the VPC will "launch instances" (or more accurately, create
Elastic Network Interfaces) into one or more of the subnets. They all accept
a property called subnetSelection (sometimes called vpcSubnets) to allow
you to select in what subnet to place the ENIs, usually defaulting to
private subnets if the property is omitted.
If you would like to save on the cost of NAT gateways, you can use
isolated subnets instead of private subnets (as described in Advanced
Subnet Configuration). If you need private instances to have
internet connectivity, another option is to reduce the number of NAT gateways
created by setting the natGateways property to a lower value (the default
is one NAT gateway per availability zone). Be aware that this may have
availability implications for your application.
By default, a VPC will spread over at most 3 Availability Zones available to
it. To change the number of Availability Zones that the VPC will spread over,
specify the maxAzs property when defining it.
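For example, a sketch limiting the VPC to two Availability Zones:
vpc = ec2.Vpc(self, "VPC",
    max_azs=2
)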
The number of Availability Zones that are available depends on the region
and account of the Stack containing the VPC. If the region and account are
specified on
the Stack, the CLI will look up the existing Availability
Zones
and get an accurate count. If region and account are not specified, the stack
could be deployed anywhere and it will have to make a safe choice, limiting
itself to 2 Availability Zones.
Therefore, to get the VPC to spread over 3 or more availability zones, you
must specify the environment where the stack will be deployed.
Advanced Subnet Configuration
If the default VPC configuration (public and private subnets spanning the
size of the VPC) doesn't suffice for you, you can configure what subnets to
create by specifying the subnetConfiguration property. It allows you
to configure the number and size of all subnets. Specifying an advanced
subnet configuration could look like this:
vpc=ec2.Vpc(self, "TheVPC",
# 'cidr' configures the IP range and size of the entire VPC.
# The IP space will be divided over the configured subnets.
cidr="10.0.0.0/21",
# 'maxAzs' configures the maximum number of availability zones to use
max_azs=3,
# 'subnetConfiguration' specifies the "subnet groups" to create.
# Every subnet group will have a subnet for each AZ, so this
# configuration will create `3 groups × 3 AZs = 9` subnets.
subnet_configuration=[{
# 'subnetType' controls Internet access, as described above.
"subnet_type": ec2.SubnetType.PUBLIC,
# 'name' is used to name this particular subnet group. You will have to
# use the name for subnet selection if you have more than one subnet
# group of the same type.
"name": "Ingress",
# 'cidrMask' specifies the IP addresses in the range of individual
# subnets in the group. Each of the subnets in this group will contain
# `2^(32 address bits - 24 subnet bits) - 2 reserved addresses = 254`
# usable IP addresses.
#
# If 'cidrMask' is left out the available address space is evenly
# divided across the remaining subnet groups.
"cidr_mask": 24
}, {
"cidr_mask": 24,
"name": "Application",
"subnet_type": ec2.SubnetType.PRIVATE
}, {
"cidr_mask": 28,
"name": "Database",
"subnet_type": ec2.SubnetType.ISOLATED,
# 'reserved' can be used to reserve IP address space. No resources will
# be created for this subnet, but the IP range will be kept available for
# future creation of this subnet, or even for future subdivision.
"reserved": True
}
]
)
The example above is one possible configuration, but the user can use the
constructs above to implement many other network configurations.
The Vpc from the above configuration in a Region with three
availability zones will be the following:
Subnet Name        | Type     | IP Block     | AZ | Features
------------------ | -------- | ------------ | -- | -------------------------------
IngressSubnet1     | PUBLIC   | 10.0.0.0/24  | #1 | NAT Gateway
IngressSubnet2     | PUBLIC   | 10.0.1.0/24  | #2 | NAT Gateway
IngressSubnet3     | PUBLIC   | 10.0.2.0/24  | #3 | NAT Gateway
ApplicationSubnet1 | PRIVATE  | 10.0.3.0/24  | #1 | Route to NAT in IngressSubnet1
ApplicationSubnet2 | PRIVATE  | 10.0.4.0/24  | #2 | Route to NAT in IngressSubnet2
ApplicationSubnet3 | PRIVATE  | 10.0.5.0/24  | #3 | Route to NAT in IngressSubnet3
DatabaseSubnet1    | ISOLATED | 10.0.6.0/28  | #1 | Only routes within the VPC
DatabaseSubnet2    | ISOLATED | 10.0.6.16/28 | #2 | Only routes within the VPC
DatabaseSubnet3    | ISOLATED | 10.0.6.32/28 | #3 | Only routes within the VPC
Reserving subnet IP space
There are situations where the IP space for a subnet or number of subnets
will need to be reserved. This is useful in situations where subnets would
need to be added after the VPC is originally deployed, without causing IP
renumbering for existing subnets. The IP space for a subnet may be reserved
by setting the reserved subnetConfiguration property to true, as shown
below:
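A sketch of such a configuration; the group names are placeholders, with Application2 being the reserved group referenced below:
vpc = ec2.Vpc(self, "TheVPC",
    cidr="10.0.0.0/16",
    nat_gateways=1,
    subnet_configuration=[{
        "cidr_mask": 26,
        "name": "Public",
        "subnet_type": ec2.SubnetType.PUBLIC
    }, {
        "cidr_mask": 26,
        "name": "Application1",
        "subnet_type": ec2.SubnetType.PRIVATE
    }, {
        "cidr_mask": 26,
        "name": "Application2",
        "subnet_type": ec2.SubnetType.PRIVATE,
        # no resources are created for this group, but its IP range is kept available
        "reserved": True
    }, {
        "cidr_mask": 27,
        "name": "Database",
        "subnet_type": ec2.SubnetType.ISOLATED
    }]
)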
In the example above, the subnet for Application2 is not actually provisioned
but its IP space is still reserved. If in the future this subnet needs to be
provisioned, then the reserved: true property should be removed. Reserving
parts of the IP space prevents the other subnets from getting renumbered.
Sharing VPCs between stacks
If you are creating multiple Stacks inside the same CDK application, you
can reuse a VPC defined in one Stack in another by simply passing the VPC
instance around:
# Stack1 creates the VPC
class Stack1(cdk.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        self.vpc = ec2.Vpc(self, "VPC")

# Stack2 consumes the VPC
class Stack2(cdk.Stack):
    def __init__(self, scope, id, *, vpc, **kwargs):
        super().__init__(scope, id, **kwargs)
        # Pass the VPC to a construct that needs it
        ConstructThatTakesAVpc(self, "Construct",
            vpc=vpc
        )
stack1=Stack1(app, "Stack1")
stack2=Stack2(app, "Stack2",
vpc=stack1.vpc
)
Importing an existing VPC
If your VPC is created outside your CDK app, you can use Vpc.fromLookup().
The CDK CLI will search for the specified VPC in the stack's region and
account, and import the subnet configuration. Looking up can be done by VPC
ID, but more flexibly by searching for a specific tag on the VPC.
The import does assume that the VPC will be symmetric, i.e. that there are
subnet groups that have a subnet in every Availability Zone that the VPC
spreads over. VPCs with other layouts cannot currently be imported, and will
either lead to an error on import, or when another construct tries to access
the subnets.
Subnet types will be determined from the aws-cdk:subnet-type tag on the
subnet if it exists, or the presence of a route to an Internet Gateway
otherwise. Subnet names will be determined from the aws-cdk:subnet-name tag
on the subnet if it exists, or will mirror the subnet type otherwise (i.e.
a public subnet will have the name "Public").
Here's how Vpc.fromLookup() can be used:
vpc=ec2.Vpc.from_lookup(stack, "VPC",
# This imports the default VPC but you can also
# specify a 'vpcName' or 'tags'.
is_default=True
)
Allowing Connections
In AWS, all network traffic in and out of Elastic Network Interfaces (ENIs)
is controlled by Security Groups. You can think of Security Groups as a
firewall with a set of rules. By default, Security Groups allow no incoming
(ingress) traffic and all outgoing (egress) traffic. You can add ingress rules
to them to allow incoming traffic streams. To exert fine-grained control over
egress traffic, set allowAllOutbound: false on the SecurityGroup, after
which you can add egress traffic rules.
You can manipulate Security Groups directly:
my_security_group=ec2.SecurityGroup(self, "SecurityGroup",
vpc=vpc,
description="Allow ssh access to ec2 instances",
allow_all_outbound=True
)
my_security_group.add_ingress_rule(ec2.Peer.any_ipv4(), ec2.Port.tcp(22), "allow ssh access from the world")
All constructs that create ENIs on your behalf (typically constructs that create
EC2 instances or other VPC-connected resources) will have security groups
automatically assigned. Those constructs have an attribute called
connections, which is an object that makes it convenient to update the
security groups. If you want to allow connections between two constructs that
have security groups, you have to add an Egress rule to one Security Group,
and an Ingress rule to the other. The connections object will automatically
take care of this for you:
# Allow connections from anywhere
load_balancer.connections.allow_from_any_ipv4(ec2.Port.tcp(443), "Allow inbound HTTPS")

# The same, but an explicit IP address
load_balancer.connections.allow_from(ec2.Peer.ipv4("1.2.3.4/32"), ec2.Port.tcp(443), "Allow inbound HTTPS")

# Allow connection between AutoScalingGroups
app_fleet.connections.allow_to(db_fleet, ec2.Port.tcp(443), "App can call database")
Connection Peers
There are various classes that implement the connection peer part:
Any object that has a security group can itself be used as a connection peer:
# These automatically create appropriate ingress and egress rules in both security groups
fleet1.connections.allow_to(fleet2, ec2.Port.tcp(80), "Allow between fleets")
fleet.connections.allow_from_any_ipv4(ec2.Port.tcp(80), "Allow from load balancer")
Port Ranges
The connections that are allowed are specified by port ranges. A number of classes provide
the connection specifier:
NOTE: This set is not complete yet; for example, there is no library support for ICMP at the moment.
However, you can write your own classes to implement those.
Default Ports
Some Constructs have default ports associated with them. For example, the
listener of a load balancer does (it's the public port), or instances of an
RDS database (it's the port the database is accepting connections on).
If the object you're calling the peering method on has a default port associated with it, you can call
allowDefaultPortFrom() and omit the port specifier. If the argument has an associated default port, call
allowDefaultPortTo().
For example:
# Port implicit in listener
listener.connections.allow_default_port_from_any_ipv4("Allow public")

# Port implicit in peer
fleet.connections.allow_default_port_to(rds_database, "Fleet can access database")
Machine Images (AMIs)
AMIs control the OS that gets launched when you start your EC2 instance. The EC2
library contains constructs to select the AMI you want to use.
Depending on the type of AMI, you select it in a different way.
The latest version of Amazon Linux and Microsoft Windows images are
selectable by instantiating one of these classes:
# Pick a Windows edition to use
windows=ec2.WindowsImage(ec2.WindowsVersion.WINDOWS_SERVER_2019_ENGLISH_FULL_BASE)

# Pick the right Amazon Linux edition. All arguments shown are optional
# and will default to these values when omitted.
amzn_linux=ec2.AmazonLinuxImage(
generation=ec2.AmazonLinuxGeneration.AMAZON_LINUX,
edition=ec2.AmazonLinuxEdition.STANDARD,
virtualization=ec2.AmazonLinuxVirt.HVM,
storage=ec2.AmazonLinuxStorage.GENERAL_PURPOSE
)
# For other custom (Linux) images, instantiate a `GenericLinuxImage` with
# a map giving the AMI to use for each region:
linux=ec2.GenericLinuxImage({
"us-east-1": "ami-97785bed",
"eu-west-1": "ami-12345678"
})

# For other custom (Windows) images, instantiate a `GenericWindowsImage` with
# a map giving the AMI to use for each region:
generic_windows=ec2.GenericWindowsImage({
"us-east-1": "ami-97785bed",
"eu-west-1": "ami-12345678"
})
NOTE: The Amazon Linux images selected will be cached in your cdk.json, so that your
AutoScalingGroups don't automatically change out from under you when you're making unrelated
changes. To update to the latest version of Amazon Linux, remove the cache entry from the context
section of your cdk.json.
We will add command-line options to make this step easier in the future.
VPN connections to a VPC
Create your VPC with VPN connections by specifying the vpnConnections props (keys are construct ids):
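For example (a sketch; the connection id and IP address are placeholders):
vpc = ec2.Vpc(self, "MyVpc",
    vpn_connections={
        "dynamic": {
            "ip": "1.2.3.4"
        }
    }
)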
To create a VPC that can accept VPN connections, set vpnGateway to true:
vpc=ec2.Vpc(stack, "MyVpc",
vpn_gateway=True
)
VPN connections can then be added:
vpc.add_vpn_connection("Dynamic",
ip="1.2.3.4"
)
Routes will be propagated on the route tables associated with the private subnets.
VPN connections expose metrics (cloudwatch.Metric) across all tunnels in the account/region and per connection:
# Across all tunnels in the account/region
all_data_out=VpnConnection.metric_all_tunnel_data_out()

# For a specific vpn connection
vpn_connection=vpc.add_vpn_connection("Dynamic",
ip="1.2.3.4"
)
state=vpn_connection.metric_tunnel_state()
VPC endpoints
A VPC endpoint enables you to privately connect your VPC to supported AWS services and VPC endpoint services powered by PrivateLink without requiring an internet gateway, NAT device, VPN connection, or AWS Direct Connect connection. Instances in your VPC do not require public IP addresses to communicate with resources in the service. Traffic between your VPC and the other service does not leave the Amazon network.
Endpoints are virtual devices. They are horizontally scaled, redundant, and highly available VPC components that allow communication between instances in your VPC and services without imposing availability risks or bandwidth constraints on your network traffic.
# Add gateway endpoints when creating the VPC
vpc=ec2.Vpc(self, "MyVpc",
gateway_endpoints={
"S3": {
"service": ec2.GatewayVpcEndpointAwsService.S3
}
}
)
# Alternatively gateway endpoints can be added on the VPC
dynamo_db_endpoint=vpc.add_gateway_endpoint("DynamoDbEndpoint",
service=ec2.GatewayVpcEndpointAwsService.DYNAMODB
)
# This allows you to customize the endpoint policy
dynamo_db_endpoint.add_to_policy(
iam.PolicyStatement(
# Restrict to listing and describing tables
principals=[iam.AnyPrincipal()],
actions=["dynamodb:DescribeTable", "dynamodb:ListTables"],
resources=["*"]))
# Add an interface endpoint
ecr_docker_endpoint=vpc.add_interface_endpoint("EcrDockerEndpoint",
service=ec2.InterfaceVpcEndpointAwsService.ECR_DOCKER
)
# When working with an interface endpoint, use the connections object to
# allow traffic to flow to the endpoint.
ecr_docker_endpoint.connections.allow_default_port_from_any_ipv4()
Bastion Hosts
A bastion host functions as an instance used to access servers and resources in a VPC without opening up the complete VPC on a network level.
You can reach bastion hosts using a standard SSH connection targeting port 22 on the host. As an alternative, you can use the SSH tunneling
feature of AWS Systems Manager Session Manager, which does not need an open security group. (https://aws.amazon.com/about-aws/whats-new/2019/07/session-manager-launches-tunneling-support-for-ssh-and-scp/)
A default bastion host for use via SSM can be configured like:
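A minimal sketch, assuming an existing vpc:
host = ec2.BastionHostLinux(self, "BastionHost",
    vpc=vpc
)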
As there are no SSH public keys deployed on this machine, you need to use EC2 Instance Connect
with the command aws ec2-instance-connect send-ssh-public-key to provide your SSH public key.
This module allows bundling Docker images as assets.
Images are built from a local Docker context directory (with a Dockerfile),
uploaded to ECR by the CDK toolkit and/or your app's CI-CD pipeline, and can be
naturally referenced in your CDK app.
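For example, a sketch assuming the aws_ecr_assets module (the module name has varied across releases) and a local my-image directory containing a Dockerfile:
from aws_cdk.aws_ecr_assets import DockerImageAsset

asset = DockerImageAsset(self, "MyBuildImage",
    # directory containing the Dockerfile
    directory="./my-image"
)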
This will instruct the toolkit to build a Docker image from my-image, push it
to an AWS ECR repository and wire the name of the repository as CloudFormation
parameters to your stack.
Use asset.imageUri to reference the image (it includes both the ECR image URL
and tag).
You can optionally pass build args to the docker build command by specifying
the buildArgs property:
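For example (a sketch; the build arg name and value are placeholders):
asset = DockerImageAsset(self, "MyBuildImage",
    directory="./my-image",
    build_args={
        "HTTP_PROXY": "http://10.20.30.2:1234"
    }
)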
Depending on the consumer of your image asset, you will need to make sure
the principal has permissions to pull the image.
In most cases, you should use the asset.repository.grantPull(principal)
method. This will modify the IAM policy of the principal to allow it to
pull images from this repository.
If the pulling principal is not in the same account or is an AWS service that
doesn't assume a role in your account (e.g. AWS CodeBuild), pull permissions
must be granted on the resource policy (and not on the principal's policy).
To do that, you can use asset.repository.addToResourcePolicy(statement) to
grant the desired principal the following permissions: "ecr:GetDownloadUrlForLayer",
"ecr:BatchGetImage" and "ecr:BatchCheckLayerAvailability".
This package contains constructs for working with Amazon Elastic Container Registry.
Repositories
Define a repository by creating a new instance of Repository. A repository
holds multiple versions of a single container image.
repository=ecr.Repository(self, "Repository")
Automatically clean up repositories
You can set life cycle rules to automatically clean up old images from your
repository. The first life cycle rule that matches an image will be applied
against that image. For example, the following deletes images older than
30 days, while keeping all images tagged with prod (note that the order
is important here):
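A sketch of such rules, assuming the core Duration class is imported:
# keep images tagged "prod" indefinitely; expire everything else after 30 days
repository.add_lifecycle_rule(tag_prefix_list=["prod"], max_image_count=9999)
repository.add_lifecycle_rule(max_image_age=Duration.days(30))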
CDK Construct library for higher-level ECS Constructs
This library provides higher-level Amazon ECS constructs which follow common architectural patterns. It contains:
Application Load Balanced Services
Network Load Balanced Services
Queue Processing Services
Scheduled Tasks (cron jobs)
Application Load Balanced Services
To define an Amazon ECS service that is behind an application load balancer, instantiate one of the following:
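For example, a sketch using ApplicationLoadBalancedFargateService from the aws_ecs_patterns module (prop names have shifted between releases; newer versions take the image under task_image_options):
load_balanced_service = ecs_patterns.ApplicationLoadBalancedFargateService(self, "Service",
    cluster=cluster,
    memory_limit_mi_b=1024,
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
    desired_count=2
)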
Instead of providing a cluster you can specify a VPC, and CDK will create a new ECS cluster.
If you deploy multiple services, CDK will only create one cluster per VPC.
You can omit cluster and vpc to let CDK create a new VPC with two AZs and create a cluster inside this VPC.
Network Load Balanced Services
To define an Amazon ECS service that is behind a network load balancer, instantiate one of the following:
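For example, a sketch under the same naming assumptions as above:
network_load_balanced_service = ecs_patterns.NetworkLoadBalancedFargateService(self, "Service",
    cluster=cluster,
    memory_limit_mi_b=1024,
    image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)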
The CDK will create a new Amazon ECS cluster if you specify a VPC and omit cluster. If you deploy multiple services the CDK will only create one cluster per VPC.
If cluster and vpc are omitted, the CDK creates a new VPC with subnets in two Availability Zones and a cluster within this VPC.
Queue Processing Services
To define a service that creates a queue and reads from that queue, instantiate one of the following:
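For example, a sketch of a QueueProcessingFargateService; the image and scaling values are placeholders:
queue_processing_service = ecs_patterns.QueueProcessingFargateService(self, "Service",
    cluster=cluster,
    memory_limit_mi_b=512,
    image=ecs.ContainerImage.from_registry("test"),
    desired_task_count=2,
    max_scaling_capacity=5
)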
Scheduled Tasks
To define a task that runs periodically, instantiate a ScheduledEc2Task:
# Instantiate an Amazon EC2 Task to run at a scheduled interval
ecs_scheduled_task=ScheduledEc2Task(self, "ScheduledTask",
cluster=cluster,
image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
schedule_expression="rate(1 minute)",
environment=[{"name": "TRIGGER", "value": "CloudWatch Events"}],
memory_limit_mi_b=256
)
This package contains constructs for working with Amazon Elastic Container
Service (Amazon ECS).
Amazon ECS is a highly scalable, fast, container management service
that makes it easy to run, stop,
and manage Docker containers on a cluster of Amazon EC2 instances.
The following example creates an Amazon ECS cluster,
adds capacity to it,
and instantiates the Amazon ECS Service with an automatic load balancer.
import aws_cdk.aws_ecs as ecs

# Create an ECS cluster
cluster=ecs.Cluster(self, "Cluster",
vpc=vpc
)
# Add capacity to it
cluster.add_capacity("DefaultAutoScalingGroupCapacity",
instance_type=ec2.InstanceType("t2.xlarge"),
desired_capacity=3
)
# Instantiate an Amazon ECS Service
ecs_service=ecs.Ec2Service(self, "Service",
cluster=cluster,
memory_limit_mi_b=512,
image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)
For a set of constructs defining common ECS architectural patterns, see the @aws-cdk/aws-ecs-patterns package.
AWS Fargate vs Amazon ECS
There are two sets of constructs in this library; one to run tasks on Amazon ECS and
one to run tasks on AWS Fargate.
Use the Ec2TaskDefinition and Ec2Service constructs to run tasks on Amazon EC2 instances running in your account.
Use the FargateTaskDefinition and FargateService constructs to run tasks on
instances that are managed for you by AWS.
Here are the main differences:
Amazon EC2: instances are under your control. Complete control of task to host
allocation. Required to specify at least a memory reservation or limit for
every container. Can use Host, Bridge and AwsVpc networking modes. Can attach
Classic Load Balancer. Can share volumes between container and host.
AWS Fargate: tasks run on AWS-managed instances, AWS manages task to host
allocation for you. Requires specification of memory and cpu sizes at the
taskdefinition level. Only supports AwsVpc networking modes and
Application/Network Load Balancers. Only the AWS log driver is supported.
Many host features are not supported such as adding kernel capabilities
and mounting host devices/volumes inside the container.
For more information on Amazon EC2 vs AWS Fargate and networking see the AWS Documentation:
AWS Fargate and
Task Networking.
Clusters
A Cluster defines the infrastructure to run your
tasks on. You can run many tasks on a single cluster.
The following code creates a cluster that can run AWS Fargate tasks:
cluster=ecs.Cluster(self, "Cluster",
vpc=vpc
)
To use tasks with Amazon EC2 launch-type, you have to add capacity to
the cluster in order for tasks to be scheduled on your instances. Typically,
you add an AutoScalingGroup with instances running the latest
Amazon ECS-optimized AMI to the cluster. There is a method to build and add such an
AutoScalingGroup automatically, or you can supply a customized AutoScalingGroup
that you construct yourself. It's possible to add multiple AutoScalingGroups
with various instance types.
The following example creates an Amazon ECS cluster and adds capacity to it:
cluster=ecs.Cluster(self, "Cluster",
vpc=vpc
)
# Either add default capacity
cluster.add_capacity("DefaultAutoScalingGroupCapacity",
instance_type=ec2.InstanceType("t2.xlarge"),
desired_capacity=3
)
# Or add customized capacity. Be sure to start the Amazon ECS-optimized AMI.
auto_scaling_group=autoscaling.AutoScalingGroup(self, "ASG",
vpc=vpc,
instance_type=ec2.InstanceType("t2.xlarge"),
machine_image=EcsOptimizedImage.amazon_linux(),
# Or use Amazon ECS-Optimized Amazon Linux 2 AMI:
# machine_image=EcsOptimizedImage.amazon_linux2(),
desired_capacity=3
)
cluster.add_auto_scaling_group(auto_scaling_group)
If you omit the property vpc, the construct will create a new VPC with two AZs.
Task definitions
A task definition describes what a single copy of a task should look like.
A task definition has one or more containers; typically, it has one
main container (the default container is the first one that's added
to the task definition, and it is marked essential) and optionally
some supporting containers, which are used to support the main container,
doing things like uploading logs or metrics to monitoring services.
To run a task or service with Amazon EC2 launch type, use the Ec2TaskDefinition. For AWS Fargate tasks/services, use the
FargateTaskDefinition. These classes provide a simplified API that only contains
properties relevant for that specific launch type.
For a FargateTaskDefinition, specify the task size (memoryLimitMiB and cpu):
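A minimal sketch:
fargate_task_definition = ecs.FargateTaskDefinition(self, "TaskDef",
    memory_limit_mi_b=512,
    cpu=256
)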
To add containers to a task definition, call addContainer():
container=fargate_task_definition.add_container("WebContainer",
# Use an image from DockerHub
image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample")
)
For an Ec2TaskDefinition:
ec2_task_definition=ecs.Ec2TaskDefinition(self, "TaskDef",
network_mode=ecs.NetworkMode.BRIDGE
)
container=ec2_task_definition.add_container("WebContainer",
# Use an image from DockerHub
image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
memory_limit_mi_b=1024
)
You can specify container properties when you add them to the task definition, or with various methods, e.g.:
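For example, a port mapping can be added to an existing container after the fact:
container.add_port_mappings(ecs.PortMapping(
    container_port=3000
))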
To use a TaskDefinition that can be used with either Amazon EC2 or
AWS Fargate launch types, use the TaskDefinition construct.
When creating a task definition you have to specify what kind of
tasks you intend to run: Amazon EC2, AWS Fargate, or both.
The following example uses both:
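A sketch, following the prop names from the TypeScript documentation:
task_definition = ecs.TaskDefinition(self, "TaskDef",
    memory_mi_b="512",
    cpu="256",
    network_mode=ecs.NetworkMode.AWS_VPC,
    compatibility=ecs.Compatibility.EC2_AND_FARGATE
)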
Images supply the software that runs inside the container. Images can be
obtained from either DockerHub or from ECR repositories, or built directly from a local Dockerfile.
ecs.ContainerImage.fromRegistry(imageName): use a public image.
ecs.ContainerImage.fromRegistry(imageName, { credentials: mySecret }): use a private image that requires credentials.
ecs.ContainerImage.fromEcrRepository(repo, tag): use the given ECR repository as the image
to start. If no tag is provided, "latest" is assumed.
ecs.ContainerImage.fromAsset('./image'): build and upload an
image directly from a Dockerfile in your source directory.
Environment variables
To pass environment variables to the container, use the environment and secrets props.
task_definition.add_container("container",
image=ecs.ContainerImage.from_registry("amazon/amazon-ecs-sample"),
memory_limit_mi_b=1024,
environment={
# clear text, not for sensitive data
"STAGE": "prod"
},
secrets={
# Retrieved from AWS Secrets Manager or AWS Systems Manager Parameter Store at container start-up.
"SECRET": ecs.Secret.from_secrets_manager(secret),
"PARAMETER": ecs.Secret.from_ssm_parameter(parameter)
}
)
The task execution role is automatically granted read permissions on the secrets/parameters.
Service
A Service instantiates a TaskDefinition on a Cluster a given number of
times, optionally associating them with a load balancer.
If a task fails,
Amazon ECS automatically restarts the task.
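For example, a minimal sketch running five copies of a task definition on a Fargate cluster:
service = ecs.FargateService(self, "Service",
    cluster=cluster,
    task_definition=task_definition,
    desired_count=5
)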
Task auto-scaling is powered by Application Auto-Scaling.
See that section for details.
Instance Auto-Scaling
If you're running on AWS Fargate, AWS manages the physical machines that your
containers are running on for you. If you're running an Amazon ECS cluster however,
your Amazon EC2 instances might fill up as your number of Tasks goes up.
To avoid placement errors, configure auto-scaling for your
Amazon EC2 instance group so that your instance count scales with demand. To keep
your Amazon EC2 instances halfway loaded, scaling up to a maximum of 30 instances
if required:
auto_scaling_group=cluster.add_capacity("DefaultAutoScalingGroup",
instance_type=ec2.InstanceType("t2.xlarge"),
min_capacity=3,
max_capacity=30,
desired_capacity=3,
# Give instances 5 minutes to drain running tasks when an instance is
# terminated. This is the default, turn this off by specifying 0 or
# change the timeout up to 900 seconds.
task_drain_time=Duration.seconds(300)
)
auto_scaling_group.scale_on_cpu_utilization("KeepCpuHalfwayLoaded",
target_utilization_percent=50
)
See the @aws-cdk/aws-autoscaling library for more autoscaling options
you can configure on your instances.
Integration with CloudWatch Events
To start an Amazon ECS task on an Amazon EC2-backed Cluster, instantiate an
@aws-cdk/aws-events-targets.EcsTask instead of an Ec2Service:
import aws_cdk.aws_events_targets as targets

# Create a Task Definition for the container to start
task_definition=ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
image=ecs.ContainerImage.from_asset(path.resolve(__dirname, "..", "eventhandler-image")),
memory_limit_mi_b=256,
logging=ecs.AwsLogDriver(stream_prefix="EventDemo")
)
# A Rule that describes the event trigger (in this case a scheduled run)
rule=events.Rule(self, "Rule",
schedule=events.Schedule.expression("rate(1 min)")
)
# Pass an environment variable to the container 'TheContainer' in the task
rule.add_target(targets.EcsTask(
cluster=cluster,
task_definition=task_definition,
task_count=1,
container_overrides=[{
"container_name": "TheContainer",
"environment": [{
"name": "I_WAS_TRIGGERED",
"value": "From CloudWatch Events"
}]
}]
))
Log Drivers
Currently Supported Log Drivers:
awslogs
fluentd
gelf
journald
json-file
splunk
syslog
awslogs Log Driver
# Create a Task Definition for the container to start
task_definition=ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
image=ecs.ContainerImage.from_registry("example-image"),
memory_limit_mi_b=256,
logging=ecs.LogDrivers.awslogs(stream_prefix="EventDemo")
)
fluentd Log Driver
# Create a Task Definition for the container to start
task_definition=ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
image=ecs.ContainerImage.from_registry("example-image"),
memory_limit_mi_b=256,
logging=ecs.LogDrivers.fluentd()
)
gelf Log Driver
# Create a Task Definition for the container to start
task_definition=ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
image=ecs.ContainerImage.from_registry("example-image"),
memory_limit_mi_b=256,
logging=ecs.LogDrivers.gelf()
)
journald Log Driver
# Create a Task Definition for the container to start
task_definition=ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
image=ecs.ContainerImage.from_registry("example-image"),
memory_limit_mi_b=256,
logging=ecs.LogDrivers.journald()
)
json-file Log Driver
# Create a Task Definition for the container to start
task_definition=ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
image=ecs.ContainerImage.from_registry("example-image"),
memory_limit_mi_b=256,
logging=ecs.LogDrivers.json_file()
)
splunk Log Driver
# Create a Task Definition for the container to start
task_definition=ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
image=ecs.ContainerImage.from_registry("example-image"),
memory_limit_mi_b=256,
logging=ecs.LogDrivers.splunk()
)
syslog Log Driver
# Create a Task Definition for the container to start
task_definition=ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
image=ecs.ContainerImage.from_registry("example-image"),
memory_limit_mi_b=256,
logging=ecs.LogDrivers.syslog()
)
Generic Log Driver
A generic log driver object exists to provide a lower level abstraction of the log driver configuration.
# Create a Task Definition for the container to start
task_definition=ecs.Ec2TaskDefinition(self, "TaskDef")
task_definition.add_container("TheContainer",
image=ecs.ContainerImage.from_registry("example-image"),
memory_limit_mi_b=256,
logging=ecs.GenericLogDriver(
log_driver="fluentd",
options={
"tag": "example-tag"
}
)
)
This construct library allows you to define Amazon Elastic Container Service
for Kubernetes (EKS) clusters programmatically.
This library also supports programmatically defining Kubernetes resource
manifests within EKS clusters.
This example defines an Amazon EKS cluster with the following configuration:
2x m5.large instances (this instance type suits most common use-cases, and is good value for money)
Dedicated VPC with default configuration (see ec2.Vpc)
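A sketch of such a cluster; the construct defaults provide everything listed above:
cluster = eks.Cluster(self, "hello-eks")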
NOTE: in order to determine the default AMI for Amazon EKS instances the
eks.Cluster resource must be defined within a stack that is configured with an
explicit env.region. See Environments
in the AWS CDK Developer Guide for more details.
The cluster.defaultCapacity property will reference the AutoScalingGroup
resource for the default capacity. It will be undefined if defaultCapacity
is set to 0:
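For example, a minimal sketch:
cluster = eks.Cluster(self, "HelloEKS",
    default_capacity=0
)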
When adding capacity, you can specify options for
/etc/eks/bootstrap.sh
which is responsible for associating the node to the EKS cluster. For example,
you can use kubeletExtraArgs to add custom node labels or taints.
# up to ten spot instances
cluster.add_capacity("spot",
instance_type=ec2.InstanceType("t3.large"),
desired_capacity=2,
bootstrap_options={
"kubelet_extra_args": "--node-labels foo=bar,goo=far",
"aws_api_retry_attempts": 5
}
)
To disable bootstrapping altogether (i.e. to fully customize user-data), set bootstrapEnabled to false when you add
the capacity.
Masters Role
The Amazon EKS construct library allows you to specify an IAM role that will be
granted system:masters privileges on your cluster.
Without specifying a mastersRole, you will not be able to interact manually
with the cluster.
The following example defines an IAM role that can be assumed by all users
in the account and shows how to use the mastersRole property to map this
role to the Kubernetes system:masters group:
# first define the role
cluster_admin=iam.Role(self, "AdminRole",
assumed_by=iam.AccountRootPrincipal()
)
# now define the cluster and map role to "masters" RBAC group
eks.Cluster(self, "Cluster",
masters_role=cluster_admin
)
When you cdk deploy this CDK app, you will notice that an output will be printed
with the update-kubeconfig command.
Copy & paste the "aws eks update-kubeconfig ..." command to your shell in
order to connect to your EKS cluster with the "masters" role.
Now, given AWS CLI is configured to use AWS
credentials for a user that is trusted by the masters role, you should be able
to interact with your cluster through kubectl (the above example will trust
all users in the account).
For example:
$ aws eks update-kubeconfig --name cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 --role-arn arn:aws:iam::112233445566:role/eks-integ-defaults-Role1ABCC5F0-1EFK2W5ZJD98Y
Added new context arn:aws:eks:eu-west-2:112233445566:cluster/cluster-ba7c166b-c4f3-421c-bf8a-6812e4036a33 to /Users/boom/.kube/config

$ kubectl get nodes # list all nodes
NAME                                         STATUS   ROLES    AGE   VERSION
ip-10-0-147-66.eu-west-2.compute.internal    Ready    <none>   21m   v1.13.7-eks-c57ff8
ip-10-0-169-151.eu-west-2.compute.internal   Ready    <none>   21m   v1.13.7-eks-c57ff8

$ kubectl get all -n kube-system
NAME                           READY   STATUS    RESTARTS   AGE
pod/aws-node-fpmwv             1/1     Running   0          21m
pod/aws-node-m9htf             1/1     Running   0          21m
pod/coredns-5cb4fb54c7-q222j   1/1     Running   0          23m
pod/coredns-5cb4fb54c7-v9nxx   1/1     Running   0          23m
pod/kube-proxy-d4jrh           1/1     Running   0          21m
pod/kube-proxy-q7hh7           1/1     Running   0          21m

NAME               TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
service/kube-dns   ClusterIP   172.20.0.10   <none>        53/UDP,53/TCP   23m

NAME                         DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
daemonset.apps/aws-node      2         2         2       2            2           <none>          23m
daemonset.apps/kube-proxy    2         2         2       2            2           <none>          23m

NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/coredns   2/2     2            2           23m

NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/coredns-5cb4fb54c7   2         2         2       23m
For your convenience, an AWS CloudFormation output will automatically be
included in your template and will be printed when running cdk deploy.
NOTE: if the cluster is configured with kubectlEnabled: false, it
will be created with the role/user that created the AWS CloudFormation
stack. See Kubectl Support for details.
Kubernetes Resources
The KubernetesResource construct or cluster.addResource method can be used
to apply Kubernetes resource manifests to this cluster.
Kubernetes resources are implemented as CloudFormation resources in the
CDK. This means that if the resource is deleted from your code (or the stack is
deleted), the next cdk deploy will issue a kubectl delete command and the
Kubernetes resources will be deleted.
The Amazon EKS construct manages the aws-auth ConfigMap Kubernetes resource
on your behalf and exposes an API through cluster.awsAuth for mapping
users, roles and accounts.
Furthermore, when auto-scaling capacity is added to the cluster (through
cluster.addCapacity or cluster.addAutoScalingGroup), the IAM instance role
of the auto-scaling group will be automatically mapped to RBAC so nodes can
connect to the cluster. No manual mapping is required any longer.
NOTE: cluster.awsAuth will throw an error if your cluster is created with kubectlEnabled: false.
For example, let's say you want to grant an IAM user administrative privileges
on your cluster:
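For example, a sketch that creates a new IAM user and maps it to system:masters:
admin_user = iam.User(self, "Admin")
cluster.aws_auth.add_user_mapping(admin_user, groups=["system:masters"])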
A convenience method for mapping a role to the system:masters group is also available:
cluster.aws_auth.add_masters_role(role)
Node SSH Access
If you want to be able to SSH into your worker nodes, you must already
have an SSH key in the region you're connecting to and pass it, and you must
be able to connect to the hosts (meaning they must have a public IP and you
should be allowed to connect to them on port 22):
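A sketch, assuming an existing EC2 key pair named my-key-name in the region:
asg = cluster.add_capacity("Nodes",
    instance_type=ec2.InstanceType("t3.large"),
    key_name="my-key-name"
)
# allow SSH access from anywhere to these nodes
asg.connections.allow_from(ec2.Peer.any_ipv4(), ec2.Port.tcp(22))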
If you want to SSH into nodes in a private subnet, you should set up a
bastion host in a public subnet. That setup is recommended, but is
unfortunately beyond the scope of this documentation.
kubectl Support
When you create an Amazon EKS cluster, the IAM entity user or role, such as a
federated user
that creates the cluster, is automatically granted system:masters permissions
in the cluster's RBAC configuration.
In order to allow programmatically defining Kubernetes resources in your AWS
CDK app and provisioning them through AWS CloudFormation, we will need to assume
this "masters" role every time we want to issue kubectl operations against your
cluster.
At the moment, the AWS::EKS::Cluster
AWS CloudFormation resource does not support this behavior, so in order to
support "programmatic kubectl", such as applying manifests
and mapping IAM roles from within your CDK application, the Amazon EKS
construct library uses a custom resource for provisioning the cluster.
This custom resource is executed with an IAM role that we can then use
to issue kubectl commands.
The default behavior of this library is to use this custom resource in order
to retain programmatic control over the cluster. In other words: to allow
you to define Kubernetes resources in your CDK code instead of having to
manage your Kubernetes applications through a separate system.
One of the implications of this design is that, by default, the user who
provisioned the AWS CloudFormation stack (executed cdk deploy) will
not have administrative privileges on the EKS cluster.
Additional resources will be synthesized into your template (the AWS Lambda
function, the role and policy).
As described in Interacting with Your Cluster,
if you wish to be able to manually interact with your cluster, you will need
to map an IAM role or user to the system:masters group. This can be either
done by specifying a mastersRole when the cluster is defined, calling
cluster.awsAuth.addMastersRole or explicitly mapping an IAM role or IAM user to the
relevant Kubernetes RBAC groups using cluster.addRoleMapping and/or
cluster.addUserMapping.
If you wish to disable the programmatic kubectl behavior and use the standard
AWS::EKS::Cluster resource, you can specify kubectlEnabled: false when you define
the cluster:
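A minimal sketch:
cluster = eks.Cluster(self, "cluster",
    kubectl_enabled=False
)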
Take care: a change in this property will cause the cluster to be destroyed
and a new cluster to be created.
When kubectl is disabled, you should be aware of the following:
When you log-in to your cluster, you don't need to specify --role-arn as
long as you are using the same user that created the cluster.
As described in the Amazon EKS User Guide, you will need to manually
edit the aws-auth ConfigMap
when you add capacity in order to map the IAM instance role to RBAC to allow nodes to join the cluster.
Any eks.Cluster APIs that depend on programmatic kubectl support will fail
with an error: cluster.addResource, cluster.awsAuth, props.mastersRole.
The @aws-cdk/aws-elasticloadbalancing package provides constructs for configuring
classic load balancers.
Configuring a Load Balancer
Load balancers send traffic to one or more AutoScalingGroups. Create a load
balancer, set up listeners and a health check, and supply the fleet(s) you want
to load balance to in the targets property.
You define an application load balancer by creating an instance of
ApplicationLoadBalancer, adding a Listener to the load balancer
and adding Targets to the Listener:
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_elasticloadbalancingv2 as elbv2
import aws_cdk.aws_autoscaling as autoscaling

# ...
vpc=ec2.Vpc(...)

# Create the load balancer in a VPC. 'internetFacing' is 'false'
# by default, which creates an internal load balancer.
lb=elbv2.ApplicationLoadBalancer(self, "LB",
vpc=vpc,
internet_facing=True
)
# Add a listener and open up the load balancer's security group
# to the world. 'open' is the default, set this to 'false'
# and use `listener.connections` if you want to be selective
# about who can access the listener.
listener=lb.add_listener("Listener",
port=80,
open=True
)
# Create an AutoScaling group and add it as a load balancing
# target to the listener.
asg=autoscaling.AutoScalingGroup(...)
listener.add_targets("ApplicationFleet",
port=8080,
targets=[asg]
)
The security groups of the load balancer and the target are automatically
updated to allow the network traffic.
Use the addFixedResponse() method to add fixed response rules on the listener:
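For example (a sketch; the path and response values are placeholders):
listener.add_fixed_response("Fixed",
    path_pattern="/ok",
    content_type=elbv2.ContentType.TEXT_PLAIN,
    message_body="OK",
    status_code="200"
)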
It's possible to route traffic to targets based on conditions in the incoming
HTTP request. Path- and host-based conditions are supported. For example,
the following will route requests to the indicated AutoScalingGroup
only if the requested host in the request is example.com:
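For example (a sketch, reusing the asg fleet from above):
listener.add_targets("ExampleComFleet",
    priority=10,
    host_header="example.com",
    port=8080,
    targets=[asg]
)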
priority is a required field when you add targets with conditions. The lowest
number wins.
Every listener must have at least one target without conditions.
Defining a Network Load Balancer
Network Load Balancers are defined in a similar way to Application Load
Balancers:
import aws_cdk.aws_ec2 as ec2
import aws_cdk.aws_elasticloadbalancingv2 as elbv2
import aws_cdk.aws_autoscaling as autoscaling

# Create the load balancer in a VPC. 'internetFacing' is 'false'
# by default, which creates an internal load balancer.
lb=elbv2.NetworkLoadBalancer(self, "LB",
vpc=vpc,
internet_facing=True
)
# Add a listener on a particular port.
listener=lb.add_listener("Listener",
port=443
)
# Add targets on a particular port.
listener.add_targets("AppFleet",
port=443,
targets=[asg]
)
One thing to keep in mind is that network load balancers do not have security
groups, and no automatic security group configuration is done for you. You will
have to configure the security groups of the target yourself to allow traffic by
clients and/or load balancer instances, depending on your target types. See
Target Groups for your Network Load
Balancers
and Register targets with your Target
Group
for more information.
Targets and Target Groups
Application and Network Load Balancers organize load balancing targets in Target
Groups. If you add your balancing targets (such as AutoScalingGroups, ECS
services or individual instances) to your listener directly, the appropriate
TargetGroup will be automatically created for you.
If you need more control over the Target Groups created, create an instance of
ApplicationTargetGroup or NetworkTargetGroup, add the members you desire,
and add it to the listener by calling addTargetGroups instead of addTargets.
addTargets() will always return the Target Group it just created for you, so
you can hold on to it. The health check can also be configured after creation
by calling configureHealthCheck() on the created object, as shown in the
sketch below.
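A sketch (path and interval values are illustrative):
# Capture the created Target Group and configure its health check.
health_check_group = listener.add_targets("WithHealthCheck",
    port=8080,
    targets=[asg]
)
health_check_group.configure_health_check(
    path="/ping",
    interval=Duration.minutes(1)
)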
No attempts are made to configure security groups for the port you're
configuring a health check for, but if the health check is on the same port
you're routing traffic to, the security group already allows the traffic.
If not, you will have to configure the security groups appropriately:
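For example, a sketch assuming the health check runs on port 8088:
# Allow the load balancer to reach the health check port explicitly.
asg.connections.allow_from(lb, ec2.Port.tcp(8088), "Allow health check traffic")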
Constructs that want to be a load balancer target should implement
IApplicationLoadBalancerTarget and/or INetworkLoadBalancerTarget, and
provide an implementation for the function attachToXxxTargetGroup(), which can
call functions on the load balancer and should return metadata about the
load balancing target:
targetType should be one of Instance or Ip. If the target can be
directly added to the target group, targetJson should contain the id of
the target (either instance ID or IP address depending on the type) and
optionally a port or availabilityZone override.
Application load balancer targets can call registerConnectable() on the
target group to register themselves for addition to the load balancer's security
group rules.
If your load balancer target requires that the TargetGroup has been
associated with a LoadBalancer before registration can happen (such as is the
case for ECS Services for example), take a resource dependency on
targetGroup.loadBalancerDependency() as follows:
# Make sure that the listener has been created, and so the TargetGroup
# has been associated with the LoadBalancer, before 'resource' is created.
resource.add_dependency(target_group.load_balancer_dependency())
Amazon CloudWatch Events delivers a near real-time stream of system events that
describe changes in AWS resources. For example, AWS CodePipeline emits the
State Change event when the pipeline changes its state.
Events: An event indicates a change in your AWS environment. AWS resources
can generate events when their state changes. For example, Amazon EC2
generates an event when the state of an EC2 instance changes from pending to
running, and Amazon EC2 Auto Scaling generates events when it launches or
terminates instances. AWS CloudTrail publishes events when you make API calls.
You can generate custom application-level events and publish them to
CloudWatch Events. You can also set up scheduled events that are generated on
a periodic basis. For a list of services that generate events, and sample
events from each service, see CloudWatch Events Event Examples From Each
Supported
Service.
Targets: A target processes events. Targets can include Amazon EC2
instances, AWS Lambda functions, Kinesis streams, Amazon ECS tasks, Step
Functions state machines, Amazon SNS topics, Amazon SQS queues, and built-in
targets. A target receives events in JSON format.
Rules: A rule matches incoming events and routes them to targets for
processing. A single rule can route to multiple targets, all of which are
processed in parallel. Rules are not processed in a particular order. This
enables different parts of an organization to look for and process the events
that are of interest to them. A rule can customize the JSON sent to the
target, by passing only certain parts or by overwriting it with a constant.
The Rule construct defines a CloudWatch events rule which monitors an
event based on an event
pattern
and invokes event targets when the pattern is matched against a triggered
event. Event targets are objects that implement the IRuleTarget interface.
Normally, you will use one of the source.onXxx(name[, target[, options]]) -> Rule methods on the event source to define an event rule associated with
the specific activity. You can specify targets either via props, or add targets
later using rule.addTarget.
For example, to define a rule that triggers a CodeBuild project build when a
commit is pushed to the "master" branch of a CodeCommit repository:
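A sketch, assuming an existing CodeCommit repository and CodeBuild project:
repo.on_commit("CommitToMaster",
    target=targets.CodeBuildProject(project),
    branches=["master"]
)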
You can add additional targets, with an optional input
transformer,
using eventRule.addTarget(target[, input]). For example, the following adds an
SNS topic target which formats a human-readable message for the commit:
on_commit_rule.add_target(targets.SnsTopic(topic,
    message=events.RuleTargetInput.from_text(
        f"A commit was pushed to the repository {codecommit.ReferenceEvent.repository_name} on branch {codecommit.ReferenceEvent.reference_name}"
    )
))
Event Targets
The @aws-cdk/aws-events-targets module includes classes that implement the IRuleTarget
interface for various AWS services.
The following targets are supported:
targets.CodeBuildProject: Start an AWS CodeBuild build
targets.CodePipeline: Start an AWS CodePipeline pipeline execution
targets.EcsTask: Start a task on an Amazon ECS cluster
targets.LambdaFunction: Invoke an AWS Lambda function
targets.SnsTopic: Publish into an SNS topic
targets.SqsQueue: Send a message to an Amazon SQS Queue
targets.SfnStateMachine: Trigger an AWS Step Functions state machine
targets.AwsApi: Make an AWS API call
Cross-account targets
It's possible to have the source of the event and a target in separate AWS accounts:
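A sketch (account IDs and names are placeholders):
import aws_cdk.core as cdk
import aws_cdk.aws_codebuild as codebuild
import aws_cdk.aws_codecommit as codecommit
import aws_cdk.aws_events_targets as targets

source_account = "1111111111111"
target_account = "2222222222222"

stack1 = cdk.Stack(app, "Stack1", env=cdk.Environment(account=source_account, region="eu-west-1"))
repo = codecommit.Repository(stack1, "Repository", repository_name="my-repository")

stack2 = cdk.Stack(app, "Stack2", env=cdk.Environment(account=target_account, region="eu-west-1"))
project = codebuild.Project(stack2, "Project",
    build_spec=codebuild.BuildSpec.from_object({"version": "0.2"})
)

# The event source lives in stack1's account, the target in stack2's.
repo.on_commit("OnCommit", target=targets.CodeBuildProject(project))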
In this situation, the CDK will wire the two accounts together:
It will generate a rule in the source stack, with the event bus of the target account as the target.
It will generate a rule in the target stack, with the provided target.
It will generate a separate stack that gives the source account permissions to publish events
to the event bus of the target account in the given region,
and makes sure it is deployed before the source stack.
Note: while events can span multiple accounts, they cannot span different regions
(that is a CloudWatch, not CDK, limitation).
A Glue table describes a table of data in S3: its structure (column names and types), location of data (S3 objects with a common prefix in a S3 bucket), and format for the files (Json, Avro, Parquet, etc.):
glue.Table(stack, "MyTable",
    database=my_database,
    table_name="my_table",
    columns=[{
        "name": "col1",
        "type": glue.Schema.STRING
    }, {
        "name": "col2",
        "type": glue.Schema.array(glue.Schema.STRING),
        "comment": "col2 is an array of strings"
    }],
    data_format=glue.DataFormat.JSON
)
By default, an S3 bucket will be created to store the table's data but you can manually pass the bucket and s3Prefix:
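A sketch, assuming an existing bucket and database:
glue.Table(stack, "MyTableInMyBucket",
    bucket=my_bucket,
    s3_prefix="my-table/",
    database=my_database,
    table_name="my_table",
    columns=[{"name": "col1", "type": glue.Schema.STRING}],
    data_format=glue.DataFormat.JSON
)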
To improve query performance, a table can specify partitionKeys on which data is stored and queried separately. For example, you might partition a table by year and month to optimize queries based on a time window:
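A sketch of such a partitioned table (column types are illustrative):
glue.Table(stack, "MyPartitionedTable",
    database=my_database,
    table_name="my_partitioned_table",
    columns=[{"name": "col1", "type": glue.Schema.STRING}],
    partition_keys=[
        {"name": "year", "type": glue.Schema.STRING},
        {"name": "month", "type": glue.Schema.STRING}
    ],
    data_format=glue.DataFormat.JSON
)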
ClientSideKms - Client-side encryption (CSE-KMS) with an AWS KMS Key managed by the account owner.
# KMS key is created automatically
glue.Table(stack, "MyTable",
    encryption=glue.TableEncryption.CLIENT_SIDE_KMS,
    # ...
)

# with an explicit KMS key
glue.Table(stack, "MyTable",
    encryption=glue.TableEncryption.CLIENT_SIDE_KMS,
    encryption_key=kms.Key(stack, "MyKey"),
    # ...
)
Note: you cannot provide a Bucket when creating the Table if you wish to use server-side encryption (Kms, KmsManaged or S3Managed).
Types
A table's schema is a collection of columns, each of which have a name and a type. Types are recursive structures, consisting of primitive and complex types:
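A sketch of composing types (column names are assumptions of this example):
string_column = {"name": "id", "type": glue.Schema.STRING}
array_column = {
    "name": "tags",
    "type": glue.Schema.array(glue.Schema.STRING),
    "comment": "array<string>"
}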
Define a policy and attach it to groups, users and roles. Note that it is possible to attach
the policy either by calling xxx.attachInlinePolicy(policy) or policy.attachToXxx(xxx).
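A sketch showing both forms (the statement is illustrative):
user = iam.User(self, "MyUser")
policy = iam.Policy(self, "MyPolicy",
    statements=[iam.PolicyStatement(actions=["s3:ListAllMyBuckets"], resources=["*"])]
)

user.attach_inline_policy(policy)
# ...or, equivalently:
# policy.attach_to_user(user)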
Many of the AWS CDK resources have grant* methods that allow you to grant other resources access to that resource. As an example, the following code gives a Lambda function write permissions (Put, Update, Delete) to a DynamoDB table.
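A sketch, assuming both resources already exist:
# 'table' is a dynamodb.Table and 'fn' a Lambda function defined elsewhere.
table.grant_write_data(fn)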
The grant* methods accept an IGrantable object. This interface is implemented by IAM principal resources (groups, users and roles) and resources that assume a role such as a Lambda function, EC2 instance or a CodeBuild project.
You can find which grant* methods exist for a resource in the AWS CDK API Reference.
Configuring an ExternalId
If you need to create roles that will be assumed by 3rd parties, it is generally a good idea to require an ExternalId
to assume them. Configuring
an ExternalId works like this:
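A sketch (the account ID and external ID are placeholders):
role = iam.Role(self, "MyRole",
    assumed_by=iam.AccountPrincipal("123456789012"),
    external_id="SUPPLY-ME"
)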
When we say Principal, we mean an entity you grant permissions to. This
entity can be an AWS Service, a Role, or something more abstract such as "all
users in this account" or even "all users in this organization". An
Identity is an IAM construct representing a single IAM entity that can have
a policy attached: one of Role, User, or Group.
IAM Principals
When defining policy statements as part of an AssumeRole policy or as part of a
resource policy, statements would usually refer to a specific IAM principal
under Principal.
IAM principals are modeled as classes that derive from the iam.PolicyPrincipal
abstract class. Principal objects include a principal type (string) and value
(array of strings), an optional set of conditions, and the action that this principal
requires when it is used in an assume role policy document.
To add a principal to a policy statement you can either use the abstract
statement.addPrincipal, or one of the concrete addXxxPrincipal methods:
addAwsPrincipal, addArnPrincipal or new ArnPrincipal(arn) for { "AWS": arn }
addAwsAccountPrincipal or new AccountPrincipal(accountId) for { "AWS": account-arn }
addServicePrincipal or new ServicePrincipal(service) for { "Service": service }
addAccountRootPrincipal or new AccountRootPrincipal() for { "AWS": { "Ref": "AWS::AccountId" } }
addCanonicalUserPrincipal or new CanonicalUserPrincipal(id) for { "CanonicalUser": id }
addFederatedPrincipal or new FederatedPrincipal(federated, conditions, assumeAction) for
{ "Federated": arn } and a set of optional conditions and the assume role action to use.
addAnyPrincipal or new AnyPrincipal for { "AWS": "*" }
If multiple principals are added to the policy statement, they will be merged together:
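A sketch:
statement = iam.PolicyStatement()
statement.add_arn_principal("arn:aws:iam::123456789012:user/user1")
statement.add_service_principal("lambda.amazonaws.com")
# Renders a single statement with both an "AWS" and a "Service" principal.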
Define an unencrypted Kinesis stream.
Stream(self, "MyFirstStream")
Encryption
Define a KMS-encrypted stream:
stream = Stream(self, "MyEncryptedStream",
    encryption=StreamEncryption.KMS
)

# you can access the encryption key:
assert isinstance(stream.encryption_key, kms.Key)
To use a KMS key in a different stack in the same CDK application,
pass the construct to the other stack:
# Stack that defines the key
class KeyStack(cdk.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)
        self.key = kms.Key(self, "MyKey", removal_policy=RemovalPolicy.DESTROY)

# Stack that uses the key
class UseStack(cdk.Stack):
    def __init__(self, scope, id, *, key, **kwargs):
        super().__init__(scope, id, **kwargs)

        # Use the IKey object here.
        kms.Alias(self, "Alias",
            alias_name="alias/foo",
            target_key=key
        )

key_stack = KeyStack(app, "KeyStack")
UseStack(app, "UseStack", key=key_stack.key)
Importing existing keys
To use a KMS key that is not defined in this CDK app, but is created through other means, use
Key.fromKeyArn(parent, name, ref):
my_key_imported = kms.Key.from_key_arn(self, "MyImportedKey", "arn:aws:...")

# you can do stuff with this imported key.
my_key_imported.add_alias("alias/foo")
Note that a call to .addToPolicy(statement) on myKeyImported will not have
an effect on the key's policy because it is not owned by your stack. The call
will be a no-op.
This module includes classes that allow using various AWS services as event
sources for AWS Lambda via the high-level lambda.addEventSource(source) API.
NOTE: In most cases, it is also possible to use the resource APIs to invoke an
AWS Lambda function. This library provides a uniform API for all Lambda event
sources regardless of the underlying mechanism they use.
SQS
Amazon Simple Queue Service (Amazon SQS) allows you to build asynchronous
workflows. For more information about Amazon SQS, see Amazon Simple Queue
Service. You can configure AWS Lambda to poll for these messages as they arrive
and then pass the event to a Lambda function invocation. To view a sample event,
see Amazon SQS Event.
To set up Amazon Simple Queue Service as an event source for AWS Lambda, you
first create or update an Amazon SQS queue and select custom values for the
queue parameters. The following parameters will impact Amazon SQS's polling
behavior:
visibilityTimeout: May impact the period between retries.
receiveMessageWaitTime: Will determine long
poll
duration. The default value is 20 seconds.
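A sketch of wiring this up, assuming an existing queue and function:
from aws_cdk.aws_lambda_event_sources import SqsEventSource

fn.add_event_source(SqsEventSource(queue,
    batch_size=10
))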
You can write Lambda functions to process S3 bucket events, such as the
object-created or object-deleted events. For example, when a user uploads a
photo to a bucket, you might want Amazon S3 to invoke your Lambda function so
that it reads the image and creates a thumbnail for the photo.
You can use the bucket notification configuration feature in Amazon S3 to
configure the event source mapping, identifying the bucket events that you want
Amazon S3 to publish and which Lambda function to invoke.
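A sketch, assuming 'bucket' and 'fn' exist:
from aws_cdk.aws_lambda_event_sources import S3EventSource

# Invoke 'fn' when objects are created under a prefix.
fn.add_event_source(S3EventSource(bucket,
    events=[s3.EventType.OBJECT_CREATED],
    filters=[s3.NotificationKeyFilter(prefix="subdir/")]
))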
You can write Lambda functions to process Amazon Simple Notification Service
notifications. When a message is published to an Amazon SNS topic, the service
can invoke your Lambda function by passing the message payload as a parameter.
Your Lambda function code can then process the event, for example publish the
message to other Amazon SNS topics, or send the message to other AWS services.
This also enables you to trigger a Lambda function in response to Amazon
CloudWatch alarms and other AWS services that use Amazon SNS.
When a user calls the SNS Publish API on a topic that your Lambda function is
subscribed to, Amazon SNS will call Lambda to invoke your function
asynchronously. Lambda will then return a delivery status. If there was an error
calling Lambda, Amazon SNS will retry invoking the Lambda function up to three
times. After three tries, if Amazon SNS still could not successfully invoke the
Lambda function, then Amazon SNS will send a delivery status failure message to
CloudWatch.
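A sketch, assuming 'topic' and 'fn' exist:
from aws_cdk.aws_lambda_event_sources import SnsEventSource

fn.add_event_source(SnsEventSource(topic))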
DynamoDB Streams
You can write Lambda functions to process change events from a DynamoDB Table. An event is emitted to a DynamoDB stream (if configured) whenever a write (Put, Delete, Update)
operation is performed against the table. See Using AWS Lambda with Amazon DynamoDB for more information.
To process events with a Lambda function, first create or update a DynamoDB table and enable a stream specification. Then, create a DynamoEventSource
and add it to your Lambda function. The following parameters will impact Amazon DynamoDB's polling behavior:
batchSize: Determines how many records are buffered before invoking your lambda function - could impact your function's memory usage (if too high) and ability to keep up with incoming data velocity (if too low).
startingPosition: Will determine where to begin consumption, either at the most recent ('LATEST') record or the oldest record ('TRIM_HORIZON'). 'TRIM_HORIZON' will ensure you process all available data, while 'LATEST' will ignore all records that arrived prior to attaching the event source.
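A sketch, assuming a stream-enabled table and an existing function:
from aws_cdk.aws_lambda import StartingPosition
from aws_cdk.aws_lambda_event_sources import DynamoEventSource

fn.add_event_source(DynamoEventSource(table,
    starting_position=StartingPosition.TRIM_HORIZON,
    batch_size=5
))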
You can write Lambda functions to process streaming data in Amazon Kinesis
Streams. For more information, see Amazon Kinesis Service. To view a sample
event, see Amazon Kinesis Event.
To set up Amazon Kinesis as an event source for AWS Lambda, you
first create or update an Amazon Kinesis stream and select custom values for the
event source parameters. The following parameters will impact Amazon Kinesis's polling
behavior:
batchSize: Determines how many records are buffered before invoking your lambda function - could impact your function's memory usage (if too high) and ability to keep up with incoming data velocity (if too low).
startingPosition: Will determine where to begin consumption, either at the most recent ('LATEST') record or the oldest record ('TRIM_HORIZON'). 'TRIM_HORIZON' will ensure you process all available data, while 'LATEST' will ignore all records that arrived prior to attaching the event source.
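A sketch, assuming 'stream' is a kinesis.Stream:
from aws_cdk.aws_lambda import StartingPosition
from aws_cdk.aws_lambda_event_sources import KinesisEventSource

fn.add_event_source(KinesisEventSource(stream,
    starting_position=StartingPosition.TRIM_HORIZON,
    batch_size=100
))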
When deploying a stack that contains this code, the directory will be zip
archived and then uploaded to an S3 bucket, then the exact location of the S3
objects will be passed when the stack is deployed.
During synthesis, the CDK expects to find a directory on disk at the asset
directory specified. Note that we are referencing the asset directory relative
to our CDK project directory. This is especially important when we want to share
this construct through a library. Different programming languages will have
different techniques for bundling resources into libraries.
Layers
The lambda.LayerVersion class can be used to define Lambda layers and manage
granting permissions to other AWS accounts or organizations.
import os

layer = lambda.LayerVersion(stack, "MyLayer",
    code=lambda.Code.from_asset(os.path.join(os.path.dirname(__file__), "layer-code")),
    compatible_runtimes=[lambda.Runtime.NODEJS_8_10],
    license="Apache-2.0",
    description="A layer to test the L2 construct"
)

# To grant usage by other AWS accounts
layer.add_permission("remote-account-grant", account_id=aws_account_id)

# To grant usage to all accounts in some AWS Organization
# layer.add_permission("org-grant", account_id="*", organization_id=org_id)
lambda.Function(stack, "MyLayeredLambda",
    code=lambda.InlineCode("foo"),
    handler="index.handler",
    runtime=lambda.Runtime.NODEJS_8_10,
    layers=[layer]
)
Event Rule Target
You can use an AWS Lambda function as a target for an Amazon CloudWatch event
rule:
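A sketch, assuming an existing rule and function:
rule.add_target(targets.LambdaFunction(fn))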
In most cases, it is possible to trigger a function as a result of an event by
using one of the add<Event>Notification methods on the source construct. For
example, the s3.Bucket construct has an onEvent method which can be used to
trigger a Lambda when an event, such as PutObject occurs on an S3 bucket.
An alternative way to add event sources to a function is to use function.addEventSource(source).
This method accepts an IEventSource object. The module @aws-cdk/aws-lambda-event-sources
includes classes for the various event sources supported by AWS Lambda.
For example, the following code adds an SQS queue as an event source for a function:
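A minimal sketch, assuming the queue and function exist:
from aws_cdk.aws_lambda_event_sources import SqsEventSource

fn.add_event_source(SqsEventSource(queue))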
This library supplies constructs for working with CloudWatch Logs.
Log Groups/Streams
The basic unit of CloudWatch is a Log Group. Every log group typically has the
same kind of data logged to it, in the same format. If there are multiple
applications or services logging into the Log Group, each of them creates a new
Log Stream.
Every log operation creates a "log event", which can consist of a simple string
or a single-line JSON object. JSON objects have the advantage that they afford
more filtering abilities (see below).
The only configurable attribute for log streams is the retention period, which
configures after how much time the events in the log stream expire and are
deleted.
The default retention period if not supplied is 2 years, but it can be set to
one of the values in the RetentionDays enum to configure a different
retention period (including infinite retention).
# Configure log group for short retention
log_group = LogGroup(stack, "LogGroup",
    retention=RetentionDays.ONE_WEEK
)

# Configure log group for infinite retention
log_group = LogGroup(stack, "LogGroup",
    retention=RetentionDays.INFINITE
)
Subscriptions and Destinations
Log events matching a particular filter can be sent to either a Lambda function
or a Kinesis stream.
If the Kinesis stream lives in a different account, a CrossAccountDestination
object needs to be added in the destination account which will act as a proxy
for the remote Kinesis stream. This object is automatically created for you
if you use the CDK Kinesis library.
Create a SubscriptionFilter, initialize it with an appropriate Pattern (see
below) and supply the intended destination:
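A sketch sending matching events to a Lambda function ('log_group' and 'fn' are assumed):
import aws_cdk.aws_logs_destinations as destinations

SubscriptionFilter(self, "Subscription",
    log_group=log_group,
    destination=destinations.LambdaDestination(fn),
    filter_pattern=FilterPattern.all_terms("ERROR")
)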
CloudWatch Logs can extract and emit metrics based on a textual log stream.
Depending on your needs, this may be a more convenient way of generating metrics
for your application than making calls to CloudWatch Metrics yourself.
A MetricFilter either emits a fixed number every time it sees a log event
matching a particular pattern (see below), or extracts a number from the log
event and uses that as the metric value.
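A sketch of such a filter ("Namespace", "MetricName" and jsonField are placeholders):
MetricFilter(self, "MetricFilter",
    log_group=log_group,
    metric_namespace="Namespace",
    metric_name="MetricName",
    filter_pattern=FilterPattern.exists("$.jsonField"),
    metric_value="$.jsonField"
)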
This will extract the value of jsonField wherever it occurs in JSON-structured
log records in the LogGroup, and emit it to CloudWatch Metrics under
the name Namespace/MetricName.
Patterns
Patterns describe which log events match a subscription or metric filter. There
are three types of patterns:
Text patterns
JSON patterns
Space-delimited table patterns
All patterns are constructed by using static functions on the FilterPattern
class.
In addition to the patterns above, the following special patterns exist:
FilterPattern.allEvents(): matches all log events.
FilterPattern.literal(string): if you already know what pattern expression to
use, this function takes a string and will use that as the log pattern. For
more information, see the Filter and Pattern
Syntax.
Text Patterns
Text patterns match if the literal strings appear in the text form of the log
line.
FilterPattern.allTerms(term, term, ...): matches if all of the given terms
(substrings) appear in the log event.
FilterPattern.anyTerm(term, term, ...): matches if any of the given terms
(substrings) appear in the log event.
FilterPattern.anyGroup([term, term, ...], [term, term, ...], ...): matches if
all of the terms in any one of the groups (specified as arrays) appear. This is
an OR match.
Examples:
# Search for lines that contain both "ERROR" and "MainThread"
pattern1 = FilterPattern.all_terms("ERROR", "MainThread")

# Search for lines that either contain both "ERROR" and "MainThread", or
# both "WARN" and "Deadlock".
pattern2 = FilterPattern.any_group(["ERROR", "MainThread"], ["WARN", "Deadlock"])
JSON Patterns
JSON patterns apply if the log event is the JSON representation of an object
(without any other characters, so it cannot include a prefix such as timestamp
or log level). JSON patterns can make comparisons on the values inside the
fields.
Strings: the comparison operators allowed for strings are = and !=.
String values can start or end with a * wildcard.
Numbers: the comparison operators allowed for numbers are =, !=,
<, <=, >, >=.
Fields in the JSON structure are identified by identifying the complete object as $
and then descending into it, such as $.field or $.list[0].field.
FilterPattern.stringValue(field, comparison, string): matches if the given
field compares as indicated with the given string value.
FilterPattern.numberValue(field, comparison, number): matches if the given
field compares as indicated with the given numerical value.
FilterPattern.isNull(field): matches if the given field exists and has the
value null.
FilterPattern.notExists(field): matches if the given field is not in the JSON
structure.
FilterPattern.exists(field): matches if the given field is in the JSON
structure.
FilterPattern.booleanValue(field, boolean): matches if the given field
is exactly the given boolean value.
FilterPattern.all(jsonPattern, jsonPattern, ...): matches if all of the
given JSON patterns match. This makes an AND combination of the given
patterns.
FilterPattern.any(jsonPattern, jsonPattern, ...): matches if any of the
given JSON patterns match. This makes an OR combination of the given
patterns.
Example:
# Search for all events where the component field is equal to
# "HttpServer" and either error is true or the latency is higher
# than 1000.
pattern = FilterPattern.all(
    FilterPattern.string_value("$.component", "=", "HttpServer"),
    FilterPattern.any(
        FilterPattern.boolean_value("$.error", True),
        FilterPattern.number_value("$.latency", ">", 1000)))
Space-delimited table patterns
If the log events are rows of a space-delimited table, this pattern can be used
to identify the columns in that structure and add conditions on any of them. The
canonical example where you would apply this type of pattern is Apache server
logs.
Text that is surrounded by "..." quotes or [...] square brackets will
be treated as one column.
FilterPattern.spaceDelimited(column, column, ...): construct a
SpaceDelimitedTextPattern object with the indicated columns. The columns
map one-by-one to the columns found in the log event. The string "..." may
be used to specify an arbitrary number of unnamed columns anywhere in the
name list (but may only be specified once).
After constructing a SpaceDelimitedTextPattern, you can use the following
two members to add restrictions:
pattern.whereString(field, comparison, string): add a string condition.
The rules are the same as for JSON patterns.
pattern.whereNumber(field, comparison, number): add a numerical condition.
The rules are the same as for JSON patterns.
Multiple restrictions can be added on the same column; they must all apply.
Example:
# Search for all events where the component is "HttpServer" and the
# result code is not equal to 200.
pattern = (FilterPattern
    .space_delimited("time", "component", "...", "result_code", "latency")
    .where_string("component", "=", "HttpServer")
    .where_number("result_code", "!=", 200))
Amazon Relational Database Service Construct Library
Starting a Clustered Database
To set up a clustered database (like Aurora), define a DatabaseCluster. You must
always launch a database in a VPC. Use the vpcSubnets attribute to control whether
your instances will be launched privately or publicly:
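A sketch along these lines, assuming an existing vpc (property names follow this module's API and the values are illustrative):
cluster = rds.DatabaseCluster(self, "Database",
    engine=rds.DatabaseClusterEngine.AURORA,
    master_user={"username": "clusteradmin"},
    instance_props={
        "instance_type": ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
        "vpc_subnets": {"subnet_type": ec2.SubnetType.PUBLIC},
        "vpc": vpc
    }
)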
By default, the master password will be generated and stored in AWS Secrets Manager.
Your cluster will be empty by default. To add a default database upon construction, specify the
defaultDatabaseName attribute.
Starting an Instance Database
To set up an instance database, define a DatabaseInstance. You must
always launch a database in a VPC. Use the vpcSubnets attribute to control whether
your instances will be launched privately or publicly:
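A sketch of a minimal MySQL instance ('vpc' is assumed to exist):
instance = rds.DatabaseInstance(self, "Instance",
    engine=rds.DatabaseInstanceEngine.MYSQL,
    instance_class=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.SMALL),
    master_username="admin",
    vpc=vpc
)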
Creating a "production" Oracle database instance with option and parameter groups:
# Set open cursors with parameter group
parameter_group = rds.ParameterGroup(self, "ParameterGroup",
    family="oracle-se1-11.2",
    parameters={
        "open_cursors": "2500"
    }
)
Add XMLDB and OEM with option group
option_group = rds.OptionGroup(self, "OptionGroup",
    engine=rds.DatabaseInstanceEngine.ORACLE_SE1,
    major_engine_version="11.2",
    configurations=[{
        "name": "XMLDB"
    }, {
        "name": "OEM",
        "port": 1158,
        "vpc": vpc
    }]
)
# Allow connections to OEM
option_group.option_connections["OEM"].connections.allow_default_port_from_any_ipv4()
# Database instance with production values
instance = rds.DatabaseInstance(self, "Instance",
    engine=rds.DatabaseInstanceEngine.ORACLE_SE1,
    license_model=rds.LicenseModel.BRING_YOUR_OWN_LICENSE,
    instance_class=ec2.InstanceType.of(ec2.InstanceClass.BURSTABLE2, ec2.InstanceSize.MEDIUM),
    multi_az=True,
    storage_type=rds.StorageType.IO1,
    master_username="syscdk",
    vpc=vpc,
    database_name="ORCL",
    storage_encrypted=True,
    backup_retention=cdk.Duration.days(7),
    monitoring_interval=cdk.Duration.seconds(60),
    enable_performance_insights=True,
    cloudwatch_logs_exports=["trace", "audit", "alert", "listener"],
    cloudwatch_logs_retention=logs.RetentionDays.ONE_MONTH,
    auto_minor_version_upgrade=False,
    option_group=option_group,
    parameter_group=parameter_group
)

# Allow connections on default port from any IPV4
instance.connections.allow_default_port_from_any_ipv4()

# Rotate the master user password every 30 days
instance.add_rotation_single_user("Rotation")

# Add alarm for high CPU
cloudwatch.Alarm(self, "HighCPU",
    metric=instance.metric_cpu_utilization(),
    threshold=90,
    evaluation_periods=1
)
# Trigger Lambda function on instance availability events
fn = lambda.Function(self, "Function",
    code=lambda.Code.from_inline("exports.handler = (event) => console.log(event);"),
    handler="index.handler",
    runtime=lambda.Runtime.NODEJS_8_10
)

availability_rule = instance.on_event("Availability", target=targets.LambdaFunction(fn))
availability_rule.add_event_pattern(
    detail={
        "EventCategories": ["availability"]
    }
)
Instance events
To define Amazon CloudWatch event rules for database instances, use the onEvent
method:
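A sketch, publishing instance events to an SNS topic ('topic' assumed):
rule = instance.on_event("InstanceEvent", target=targets.SnsTopic(topic))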
To control who can access the cluster or instance, use the .connections attribute. RDS databases have
a default port, so you don't need to specify the port:
cluster.connections.allow_from_any_ipv4("Open to the world")
The endpoints to access your database cluster will be available as the .clusterEndpoint and .readerEndpoint
attributes:
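A sketch; each endpoint exposes its hostname and port:
write_address = cluster.cluster_endpoint.socket_address  # "HOSTNAME:PORT"
read_address = cluster.reader_endpoint.socket_address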
# The number of database connections in use (average over 5 minutes)
db_connections = instance.metric_database_connections()

# The average amount of time taken per disk I/O operation (average over 1 minute)
read_latency = instance.metric("ReadLatency", statistic="Average", period=cdk.Duration.minutes(1))
This library contains commonly used patterns for Route53.
HTTPS Redirect
This construct allows creating a simple domainA -> domainB redirect using CloudFront and S3. You can specify multiple domains to be redirected.
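A sketch, assuming a hosted zone my_zone for example.com:
import aws_cdk.aws_route53_patterns as patterns

patterns.HttpsRedirect(self, "Redirect",
    record_names=["foo.example.com"],
    target_domain="bar.example.com",
    zone=my_zone
)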
To add a private hosted zone, use PrivateHostedZone. Note that
enableDnsHostnames and enableDnsSupport must have been enabled for the
VPC you're configuring for private hosted zones.
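For example:
route53.PrivateHostedZone(self, "HostedZone",
    zone_name="fully.qualified.domain.com",
    vpc=vpc
)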
import aws_cdk.aws_route53 as route53

route53.TxtRecord(self, "TXTRecord",
    zone=my_zone,
    record_name="_foo",  # If the name ends with a ".", it will be used as-is;
    # if it ends with a "." followed by the zone name, a trailing "." will be added automatically;
    # otherwise, a ".", the zone name, and a trailing "." will be added automatically.
    # Defaults to zone root if not specified.
    values=["Bar!", "Baz?"],
    ttl=Duration.minutes(90)
)
Assets are local files or directories which are needed by a CDK app. A common
example is a directory which contains the handler code for a Lambda function,
but assets can represent any artifact that is needed for the app's operation.
When deploying a CDK app that includes constructs with assets, the CDK toolkit
will first upload all the assets to S3, and only then deploy the stacks. The S3
locations of the uploaded assets will be passed in as CloudFormation Parameters
to the relevant stacks.
The following example defines a directory asset which is archived as
a .zip file and uploaded to S3 during deployment.
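A sketch using the aws-s3-assets module (the directory path is illustrative):
import os
import aws_cdk.aws_s3_assets as assets

asset = assets.Asset(self, "SampleAsset",
    path=os.path.join(os.path.dirname(__file__), "sample-asset-directory")
)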
IAM roles, users or groups which need to be able to read assets at runtime should be
granted IAM permissions. To do that, use the asset.grantRead(principal) method:
The following example grants an IAM group read permissions on an asset:
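group = iam.Group(self, "MyUserGroup")
asset.grant_read(group)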
When an asset is defined in a construct, a construct metadata entry
aws:cdk:asset is emitted with instructions on where to find the asset and what
type of packaging to perform (zip or file). Furthermore, the synthesized
CloudFormation template will also include two CloudFormation parameters: one for
the asset's bucket and one for the asset S3 key. Those parameters are used to
reference the deploy-time values of the asset (using { Ref: "Param" }).
Then, when the stack is deployed, the toolkit will package the asset (i.e. zip
the directory), calculate an MD5 hash of the contents and will render an S3 key
for this asset within the toolkit's asset store. If the file doesn't exist in
the asset store, it is uploaded during deployment.
The toolkit's asset store is an S3 bucket created by the toolkit for each
environment the toolkit operates in (environment = account + region).
Now, when the toolkit deploys the stack, it will set the relevant CloudFormation
Parameters to point to the actual bucket and key for each asset.
CloudFormation Resource Metadata
NOTE: This section is relevant for authors of AWS Resource Constructs.
In certain situations, it is desirable for tools to be able to know that a certain CloudFormation
resource is using a local asset. For example, SAM CLI can be used to invoke AWS Lambda functions
locally for debugging purposes.
To enable such use cases, external tools will consult a set of metadata entries on AWS CloudFormation
resources:
aws:asset:path points to the local path of the asset.
aws:asset:property is the name of the resource property where the asset is used
Using these two metadata entries, tools will be able to identify that assets are used
by a certain resource, and enable advanced local experiences.
To add these metadata entries to a resource, use the
asset.addResourceMetadata(resource, property) method.
This library allows populating an S3 bucket with the contents of a .zip file
from another S3 bucket or from local disk.
The following example defines a publicly accessible S3 bucket with web hosting
enabled and populates it from a local directory on disk.
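A sketch:
import aws_cdk.aws_s3_deployment as s3deploy

website_bucket = s3.Bucket(self, "WebsiteBucket",
    website_index_document="index.html",
    public_read_access=True
)

s3deploy.BucketDeployment(self, "DeployWebsite",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket
)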
When this stack is deployed (either via cdk deploy or via CI/CD), the
contents of the local website-dist directory will be archived and uploaded
to an intermediary assets bucket.
The BucketDeployment construct synthesizes a custom CloudFormation resource
of type Custom::CDKBucketDeployment into the template. The source bucket/key
is set to point to the assets bucket.
The custom resource downloads the .zip archive, extracts it and issues aws s3 sync --delete against the destination bucket (in this case
websiteBucket).
Supported sources
The following source types are supported for bucket deployments:
Local .zip file: s3deploy.Source.asset('/path/to/local/file.zip')
Local directory: s3deploy.Source.asset('/path/to/local/directory')
Another bucket: s3deploy.Source.bucket(bucket, zipObjectKey)
Retain on Delete
By default, the contents of the destination bucket will be deleted when the
BucketDeployment resource is removed from the stack or when the destination is
changed. You can use the option retainOnDelete: true to disable this behavior,
in which case the contents will be retained.
CloudFront Invalidation
You can provide a CloudFront distribution and optional paths to invalidate after the bucket deployment finishes.
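A sketch, assuming an existing distribution:
s3deploy.BucketDeployment(self, "DeployWithInvalidation",
    sources=[s3deploy.Source.asset("./website-dist")],
    destination_bucket=website_bucket,
    distribution=distribution,
    distribution_paths=["/images/*"]
)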
This library uses an AWS CloudFormation custom resource which is about 10MiB in
size. The code of this resource is bundled with this library.
AWS Lambda execution time is limited to 15min. This limits the amount of data
that can be deployed into the bucket in a single deployment.
When the BucketDeployment is removed from the stack, the contents are retained
in the destination bucket (#952).
Bucket deployment only happens during stack create/update. This means that
if you wish to update the contents of the destination, you will need to
change the source s3 key (or bucket), so that the resource will be updated.
This is in line with best practices. If you use local disk assets, this will
happen automatically whenever you modify the asset, since the S3 key is based
on a hash of the asset contents.
Development
The custom resource is implemented in Python 3.6 in order to be able to leverage
the AWS CLI for "aws s3 sync". The code is under lambda/src and
unit tests are under lambda/test.
This package requires Python 3.6 during build time in order to create the custom
resource Lambda bundle and test it. It also relies on a few bash scripts, so
might be tricky to build on Windows.
Bucket constructs expose the following deploy-time attributes:
bucketArn - the ARN of the bucket (i.e. arn:aws:s3:::bucket_name)
bucketName - the name of the bucket (i.e. bucket_name)
bucketWebsiteUrl - the Website URL of the bucket (i.e.
http://bucket_name.s3-website-us-west-1.amazonaws.com)
bucketDomainName - the URL of the bucket (i.e. bucket_name.s3.amazonaws.com)
bucketDualStackDomainName - the dual-stack URL of the bucket (i.e.
bucket_name.s3.dualstack.eu-west-1.amazonaws.com)
bucketRegionalDomainName - the regional URL of the bucket (i.e.
bucket_name.s3.eu-west-1.amazonaws.com)
arnForObjects(pattern) - the ARN of an object or objects within the bucket (i.e.
arn:aws:s3:::bucket_name/exampleobject.png or
arn:aws:s3:::bucket_name/Development/*)
urlForObject(key) - the URL of an object within the bucket (i.e.
https://s3.cn-north-1.amazonaws.com.cn/china-bucket/mykey)
Encryption
Define a KMS-encrypted bucket:
bucket=Bucket(self, "MyUnencryptedBucket",
encryption=BucketEncryption.KMS
)
# you can access the encryption key:assert(bucket.encryption_keyinstanceofkms.Key)
Most of the time, you won't have to manipulate the bucket policy directly.
Instead, buckets have "grant" methods that can be called to give prepackaged sets
of permissions to other resources. For example, a sketch assuming an existing
Lambda function fn:
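bucket = Bucket(self, "MyBucket")
bucket.grant_read_write(fn)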
This will give the Lambda function's execution role permissions to read and
write from the bucket.
Sharing buckets between stacks
To use a bucket in a different stack in the same CDK application, pass the object to the other stack:
# Stack that defines the bucket
class Producer(cdk.Stack):
    def __init__(self, scope, id, **kwargs):
        super().__init__(scope, id, **kwargs)

        bucket = s3.Bucket(self, "MyBucket",
            removal_policy=cdk.RemovalPolicy.DESTROY
        )
        self.my_bucket = bucket

# Stack that consumes the bucket
class Consumer(cdk.Stack):
    def __init__(self, scope, id, *, user_bucket, **kwargs):
        super().__init__(scope, id, **kwargs)

        user = iam.User(self, "MyUser")
        user_bucket.grant_read_write(user)

producer = Producer(app, "ProducerStack")
Consumer(app, "ConsumerStack", user_bucket=producer.my_bucket)
Importing existing buckets
To import an existing bucket into your CDK application, use the Bucket.fromBucketAttributes
factory method. This method accepts BucketAttributes which describes the properties of an already
existing bucket:
bucket = Bucket.from_bucket_attributes(self, "ImportedBucket",
    bucket_arn="arn:aws:s3:::my-bucket"
)

# now you can just call methods on the bucket
bucket.grant_read_write(user)
Alternatively, short-hand factories are available as Bucket.fromBucketName and
Bucket.fromBucketArn, which will derive all bucket attributes from the bucket
name or ARN respectively:
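by_name = Bucket.from_bucket_name(self, "BucketByName", "my-bucket")
by_arn = Bucket.from_bucket_arn(self, "BucketByArn", "arn:aws:s3:::my-bucket")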
The Amazon S3 notification feature enables you to receive notifications when
certain events happen in your bucket as described under S3 Bucket
Notifications of the S3 Developer Guide.
To subscribe for bucket notifications, use the bucket.addEventNotification method. The
bucket.addObjectCreatedNotification and bucket.addObjectRemovedNotification can also be used for
these common use cases.
The following example will subscribe an SNS topic to be notified of all s3:ObjectCreated:* events:
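A sketch ('topic' is an sns.Topic defined elsewhere):
import aws_cdk.aws_s3_notifications as s3n

bucket.add_event_notification(s3.EventType.OBJECT_CREATED, s3n.SnsDestination(topic))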
This call will also ensure that the topic policy can accept notifications for
this specific bucket.
Supported S3 notification targets are exposed by the @aws-cdk/aws-s3-notifications package.
It is also possible to specify S3 object key filters when subscribing. The
following example will notify myQueue when objects prefixed with foo/ and
with the .jpg suffix are removed from the bucket:
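bucket.add_event_notification(s3.EventType.OBJECT_REMOVED,
    s3n.SqsDestination(my_queue),
    s3.NotificationKeyFilter(prefix="foo/", suffix=".jpg")
)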
When blockPublicPolicy is set to true, grantPublicRead() throws an error.
Website redirection
You can use the following two properties to specify the bucket redirection policy. Please note that these cannot both be applied to the same bucket.
Static redirection
You can statically redirect to a given bucket URL or any other host name with websiteRedirect. A sketch:
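bucket = s3.Bucket(self, "MyRedirectedBucket",
    website_redirect=s3.RedirectTarget(host_name="www.example.com")
)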
AWS Serverless Application Model Construct Library
This module includes low-level constructs that synthesize into AWS::Serverless resources.
sam=require("@aws-cdk/aws-sam")
Related
The following AWS CDK modules include constructs that can be used to work with Amazon API Gateway and AWS Lambda:
The Secret construct does not allow specifying the SecretString property
of the AWS::SecretsManager::Secret resource (as this will almost always
lead to the secret being surfaced in plain text and possibly committed to
your source control).
If you need to use a pre-existing secret, the recommended way is to manually
provision the secret in AWS SecretsManager and use the Secret.fromSecretArn
or Secret.fromSecretAttributes method to make it available in your CDK Application:
secret = secretsmanager.Secret.from_secret_attributes(scope, "ImportedSecret",
    secret_arn="arn:aws:secretsmanager:<region>:<account-id-number>:secret:<secret-name>-<random-6-characters>",
    # If the secret is encrypted using a KMS-hosted CMK, either import or reference that key:
    encryption_key=encryption_key
)
This package contains constructs for working with AWS Cloud Map.
AWS Cloud Map is a fully managed service that you can use to create and
maintain a map of the backend services and resources that your applications
depend on.
The following example creates an AWS Cloud Map namespace that
supports API calls, creates a service in that namespace, and
registers an instance to it:
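A sketch:
import aws_cdk.aws_servicediscovery as servicediscovery

namespace = servicediscovery.HttpNamespace(stack, "MyNamespace",
    name="MyHTTPNamespace"
)

service = namespace.create_service("NonIpService",
    description="service registering non-ip instances"
)

service.register_non_ip_instance("NonIpInstance",
    custom_attributes={"arn": "arn:aws:s3:::mybucket"}
)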
The following example creates an AWS Cloud Map namespace that
supports both API calls and DNS queries within a vpc, creates a
service in that namespace, and registers a loadbalancer as an
instance:
The following example creates an AWS Cloud Map namespace that
supports both API calls and public DNS queries, creates a service in
that namespace, and registers an IP instance:
This will add a rule at the top of the rule set with a Lambda action that stops processing messages that have at least one spam indicator. See Lambda Function Examples.
Various subscriptions can be added to the topic by calling the
.addSubscription(...) method on the topic. It accepts a subscription object,
default implementations of which can be found in the
@aws-cdk/aws-sns-subscriptions package:
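For example, to deliver topic messages to an SQS queue:
import aws_cdk.aws_sns_subscriptions as subscriptions

my_topic = sns.Topic(self, "MyTopic")
queue = sqs.Queue(self, "Queue")

my_topic.add_subscription(subscriptions.SqsSubscription(queue))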
Note that subscriptions of queues in different accounts need to be manually confirmed by
reading the initial message from the queue and visiting the link found in it.
Filter policy
A filter policy can be specified when subscribing an endpoint to a topic.
Example with a Lambda subscription:
my_topic=sns.Topic(self, "MyTopic")
fn=lambda.Function(self, "Function", ...)
# Lambda should receive only message matching the following conditions on attributes:# color: 'red' or 'orange' or begins with 'bl'# size: anything but 'small' or 'medium'# price: between 100 and 200 or greater than 300# store: attribute must be presenttopic.subscribe_lambda(subs.LambdaSubscription(fn,
filter_policy={
"color": sns.SubscriptionFilter.string_filter(
whitelist=["red", "orange"],
match_prefixes=["bl"]
),
"size": sns.SubscriptionFilter.string_filter(
blacklist=["small", "medium"]
),
"price": sns.SubscriptionFilter.numeric_filter(
between={"start": 100, "stop": 200},
greater_than=300
),
"store": sns.SubscriptionFilter.exists_filter()
}
))
CloudWatch Event Rule Target
SNS topics can be used as targets for CloudWatch event rules.
Use the @aws-cdk/aws-events-targets.SnsTopic class:
This will result in adding a target to the event rule and will also modify the
topic resource policy to allow CloudWatch events to publish to the topic.
Amazon Simple Queue Service (SQS) is a fully managed message queuing service that
enables you to decouple and scale microservices, distributed systems, and serverless
applications. SQS eliminates the complexity and overhead associated with managing and
operating message oriented middleware, and empowers developers to focus on differentiating work.
Using SQS, you can send, store, and receive messages between software components at any volume,
without losing messages or requiring other services to be available.
Installation
Import to your project:
import aws_cdk.aws_sqs as sqs
Basic usage
Here's how to add a basic queue to your application:
sqs.Queue(self, "Queue")
Encryption
If you want to encrypt the queue contents, set the encryption property. You can have
the messages encrypted with a key that SQS manages for you, or a key that you
can manage yourself.
# Use managed key
sqs.Queue(self, "Queue",
    encryption=QueueEncryption.KMS_MANAGED
)

# Use custom key
my_key = kms.Key(self, "Key")

sqs.Queue(self, "Queue",
    encryption=QueueEncryption.KMS,
    encryption_master_key=my_key
)
First-In-First-Out (FIFO) queues
FIFO queues give guarantees on the order in which messages are dequeued, and have additional
features in order to help guarantee exactly-once processing. For more information, see
the SQS manual. Note that FIFO queues are not available in all AWS regions.
A queue can be made a FIFO queue by either setting fifo: true, giving it a name which ends
in ".fifo", or enabling content-based deduplication (which requires FIFO queues).
You can reference existing SSM Parameter Store values that you want to use in
your CDK app by using ssm.StringParameter.fromStringParameterAttributes:
# Retrieve the latest value of the non-secret parameter
# with name "/My/String/Parameter".
string_value = ssm.StringParameter.from_string_parameter_attributes(self, "MyValue",
    parameter_name="/My/Public/Parameter"
).string_value

# Retrieve a specific version of the secret (SecureString) parameter.
# 'version' is always required.
secret_value = ssm.StringParameter.from_secure_string_parameter_attributes(self, "MySecureValue",
    parameter_name="/My/Secret/Parameter",
    version=5
)
Creating new SSM Parameters in your CDK app
You can create either ssm.StringParameter or ssm.StringListParameter constructs in
a CDK app. These are public (not secret) values. Parameters of type
SecretString cannot be created directly from a CDK application; if you want
to provision secrets automatically, use Secrets Manager Secrets (see the
@aws-cdk/aws-secretsmanager package).
# Create a new SSM Parameter holding a String
param = ssm.StringParameter(stack, "StringParameter",
    # description: 'Some user-friendly description',
    # name: 'ParameterName',
    string_value="Initial parameter value"
)

# Grant read access to some Role
param.grant_read(role)

# Create a new SSM Parameter holding a StringList
list_parameter = ssm.StringListParameter(stack, "StringListParameter",
    # description: 'Some user-friendly description',
    # name: 'ParameterName',
    string_list_value=["Initial parameter value A", "Initial parameter value B"]
)
When specifying an allowedPattern, the values provided as string literals
are validated against the pattern and an exception is raised if a value
provided does not comply.
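A sketch; the literal value is validated against allowedPattern when the construct is created:
ssm.StringParameter(stack, "ValidatedParameter",
    allowed_pattern=".*",
    string_value="Initial parameter value"
)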
The @aws-cdk/aws-stepfunctions package contains constructs for building
serverless workflows using objects. Use this in conjunction with the
@aws-cdk/aws-stepfunctions-tasks package, which contains classes used
to call other AWS services.
import aws_cdk.aws_stepfunctions as sfn
import aws_cdk.aws_stepfunctions_tasks as tasks

submit_lambda = lambda.Function(self, "SubmitLambda", ...)
get_status_lambda = lambda.Function(self, "CheckLambda", ...)

submit_job = sfn.Task(self, "Submit Job",
    task=tasks.InvokeFunction(submit_lambda),
    # Put Lambda's result here in the execution's state object
    result_path="$.guid"
)

wait_x = sfn.Wait(self, "Wait X Seconds",
    duration=sfn.WaitDuration.seconds_path("$.wait_time")
)

get_status = sfn.Task(self, "Get Job Status",
    task=tasks.InvokeFunction(get_status_lambda),
    # Pass just the field named "guid" into the Lambda, put the
    # Lambda's result in a field called "status"
    input_path="$.guid",
    result_path="$.status"
)

job_failed = sfn.Fail(self, "Job Failed",
    cause="AWS Batch Job Failed",
    error="DescribeJob returned FAILED"
)

final_status = sfn.Task(self, "Get Final Job Status",
    task=tasks.InvokeFunction(get_status_lambda),
    # Use "guid" field as input, output of the Lambda becomes the
    # entire state machine output.
    input_path="$.guid"
)

definition = submit_job \
    .next(wait_x) \
    .next(get_status) \
    .next(sfn.Choice(self, "Job Complete?")
        .when(sfn.Condition.string_equals("$.status", "FAILED"), job_failed)
        .when(sfn.Condition.string_equals("$.status", "SUCCEEDED"), final_status)
        .otherwise(wait_x))

sfn.StateMachine(self, "StateMachine",
    definition=definition,
    timeout=Duration.minutes(5)
)
State Machine
A stepfunctions.StateMachine is a resource that takes a state machine
definition. The definition is specified by its start state, and encompasses
all states reachable from the start state:
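A minimal sketch:
state_machine = sfn.StateMachine(self, "MyStateMachine",
    definition=sfn.Chain.start(sfn.Pass(self, "StartState"))
)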
State machines execute using an IAM Role, which will automatically have all
permissions added that are required to make all state machine tasks execute
properly (for example, permissions to invoke any Lambda functions you add to
your workflow). A role will be created by default, but you can supply an
existing one as well.
Amazon States Language
This library comes with a set of classes that model the Amazon States
Language. The following State classes
are supported:
Task
Pass
Wait
Choice
Parallel
Succeed
Fail
An arbitrary JSON object (specified at execution start) is passed from state to
state and transformed during the execution of the workflow. For more
information, see the States Language spec.
Task
A Task represents some work that needs to be done. The exact work to be
done is determined by a class that implements IStepFunctionsTask, a collection
of which can be found in the @aws-cdk/aws-stepfunctions-tasks package. A
couple of the tasks available are:
tasks.InvokeActivity -- start an Activity (Activities represent a work
queue that you poll on a compute fleet you manage yourself)
tasks.InvokeFunction -- invoke a Lambda function with function ARN
tasks.RunLambdaTask -- call Lambda as integrated service with magic ARN
tasks.PublishToTopic -- publish a message to an SNS topic
tasks.SendToQueue -- send a message to an SQS queue
tasks.RunEcsFargateTask/tasks.RunEcsEc2Task -- run a container task,
depending on the type of capacity.
tasks.SagemakerTrainTask -- run a SageMaker training job
tasks.SagemakerTransformTask -- run a SageMaker transform job
tasks.StartExecution -- call StartExecution to a state machine of Step Functions
Except for tasks.InvokeActivity and tasks.InvokeFunction, the service integration
pattern
(integrationPattern) should be supplied as a parameter when you want
to call integrated services within a Task state. The default value is FIRE_AND_FORGET.
Task parameters from the state json
Many tasks take parameters. The values for those can either be supplied
directly in the workflow definition (by specifying their values), or at
runtime by passing a value obtained from the static functions on Data,
such as Data.stringAt().
If so, the value is taken from the indicated location in the state JSON,
similar to (for example) inputPath.
Lambda example - InvokeFunction
task=sfn.Task(self, "Invoke1",
task=tasks.InvokeFunction(my_lambda),
input_path="$.input",
timeout=Duration.minutes(5)
)
# Add a retry policytask.add_retry(
interval=Duration.seconds(5),
max_attempts=10
)
# Add an error handlertask.add_catch(error_handler_state)
# Set the next statetask.next(next_state)
import aws_cdk.aws_sns as sns

# ...
topic = sns.Topic(self, "Topic")

# Use a field from the execution data as message.
task1 = sfn.Task(self, "Publish1",
    task=tasks.PublishToTopic(topic,
        integration_pattern=sfn.ServiceIntegrationPattern.FIRE_AND_FORGET,
        message=TaskInput.from_data_at("$.state.message")
    )
)

# Combine a field from the execution data with
# a literal object.
task2 = sfn.Task(self, "Publish2",
    task=tasks.PublishToTopic(topic,
        message=TaskInput.from_object(
            field1="somedata",
            field2=Data.string_at("$.field2")
        )
    )
)
SQS example
import aws_cdk.aws_sqs as sqs

# ...
queue = sqs.Queue(self, "Queue")

# Use a field from the execution data as message.
task1 = sfn.Task(self, "Send1",
    task=tasks.SendToQueue(queue,
        message_body=TaskInput.from_data_at("$.message"),
        # Only for FIFO queues
        message_group_id="1234"
    )
)

# Combine a field from the execution data with
# a literal object.
task2 = sfn.Task(self, "Send2",
    task=tasks.SendToQueue(queue,
        message_body=TaskInput.from_object(
            field1="somedata",
            field2=Data.string_at("$.field2")
        ),
        # Only for FIFO queues
        message_group_id="1234"
    )
)
ECS example
import aws_cdk.aws_ecs as ecs

# See examples in ECS library for initialization of 'cluster' and 'taskDefinition'
fargate_task = tasks.RunEcsFargateTask(
    cluster=cluster,
    task_definition=task_definition,
    container_overrides=[{
        "container_name": "TheContainer",
        "environment": [{
            "name": "CONTAINER_INPUT",
            "value": Data.string_at("$.valueFromStateData")
        }]
    }]
)

fargate_task.connections.allow_to_default_port(rds_cluster, "Read the database")

task = sfn.Task(self, "CallFargate",
    task=fargate_task
)
# Define a state machine with one Pass state
child = sfn.StateMachine(stack, "ChildStateMachine",
    definition=sfn.Chain.start(sfn.Pass(stack, "PassState"))
)

# Include the state machine in a Task state with callback pattern
task = sfn.Task(stack, "ChildTask",
    task=tasks.StartExecution(child,
        integration_pattern=sfn.ServiceIntegrationPattern.WAIT_FOR_TASK_TOKEN,
        input={
            "token": sfn.Context.task_token,
            "foo": "bar"
        },
        name="MyExecutionName"
    )
)

# Define a second state machine with the Task state above
sfn.StateMachine(stack, "ParentStateMachine",
    definition=task
)
Pass
A Pass state does no work, but it can optionally transform the execution's
JSON state.
# Makes the current JSON state { ..., "subObject": { "hello": "world" } }
pass_state = stepfunctions.Pass(self, "Add Hello World",
    result={"hello": "world"},
    result_path="$.subObject"
)

# Set the next state
pass_state.next(next_state)
Wait
A Wait state waits for a given number of seconds, or until the current time
hits a particular time. The time to wait may be taken from the execution's JSON
state.
# Wait until it's the time mentioned in the state object's "triggerTime"
# field.
wait = stepfunctions.Wait(self, "Wait For Trigger Time",
    duration=stepfunctions.WaitDuration.timestamp_path("$.triggerTime")
)

# Set the next state
wait.next(start_the_work)
Choice
A Choice state can take a different path through the workflow based on the
values in the execution's JSON state:
choice = stepfunctions.Choice(self, "Did it work?")

# Add conditions with .when()
choice.when(stepfunctions.Condition.string_equals("$.status", "SUCCESS"), success_state)
choice.when(stepfunctions.Condition.number_greater_than("$.attempts", 5), failure_state)

# Use .otherwise() to indicate what should be done if none of the conditions match
choice.otherwise(try_again_state)
If you want to temporarily branch your workflow based on a condition, but have
all branches come together and continue as one (similar to how an if ... then ... else works in a programming language), use the .afterwards() method:
choice=stepfunctions.Choice(self, "What color is it?")
choice.when(stepfunctions.Condition.string_equal("$.color", "BLUE"), handle_blue_item)
choice.when(stepfunctions.Condition.string_equal("$.color", "RED"), handle_red_item)
choice.otherwise(handle_other_item_color)
# Use .afterwards() to join all possible paths back together and continuechoice.afterwards().next(ship_the_item)
If your Choice doesn't have an otherwise() and none of the conditions match
the JSON state, a NoChoiceMatched error will be thrown. Wrap the state machine
in a Parallel state if you want to catch and recover from this.
Parallel
A Parallel state executes one or more subworkflows in parallel. It can also
be used to catch and recover from errors in subworkflows.
parallel = stepfunctions.Parallel(self, "Do the work in parallel")

# Add branches to be executed in parallel
parallel.branch(ship_item)
parallel.branch(send_invoice)
parallel.branch(restock)

# Retry the whole workflow if something goes wrong
parallel.add_retry(max_attempts=1)

# How to recover from errors
parallel.add_catch(send_failure_notification)

# What to do in case everything succeeded
parallel.next(close_order)
Succeed
Reaching a Succeed state terminates the state machine execution with a
successful status.
success = stepfunctions.Succeed(self, "We did it!")
Fail
Reaching a Fail state terminates the state machine execution with a
failure status. The fail state should report the reason for the failure.
Failures can be caught by encompassing Parallel states.
failure = stepfunctions.Fail(self, "Fail",
    error="WorkflowFailure",
    cause="Something went wrong"
)
Task Chaining
To make defining workflows as convenient (and readable in a top-to-bottom way)
as writing regular programs, it is possible to chain most method invocations.
In particular, the .next() method can be repeated. The result of a series of
.next() calls is called a Chain, and can be used when defining the jump
targets of Choice.when or Parallel.branch:
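For illustration, here is a sketch of such a chain; the states (step1 through
finish) and condition1 are hypothetical placeholders for states and conditions
defined elsewhere in your app:

# Chain states, choices and parallel branches into a single definition
definition = (step1
    .next(step2)
    .next(choice
        .when(condition1, step3.next(step4).next(step5))
        .otherwise(step6)
        .afterwards())
    .next(parallel
        .branch(step7.next(step8))
        .branch(step9.next(step10)))
    .next(finish))

stepfunctions.StateMachine(self, "StateMachine",
    definition=definition
)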
State Machine Fragments
It is possible to define reusable (or abstracted) mini-state machines by
defining a construct that implements IChainable, which requires you to define
two fields:
startState: State, representing the entry point into this state machine.
endStates: INextable[], representing the (one or more) states that outgoing
transitions will be added to if you chain onto the fragment.
Since states will be named after their construct IDs, you may need to prefix the
IDs of states if you plan to instantiate the same state machine fragment
multiple times (otherwise all states in every instantiation would have the same
name).
The class StateMachineFragment contains some helper functions (like
prefixStates()) to make it easier for you to do this. If you define your state
machine as a subclass of this, it will be convenient to use:
class MyJob(stepfunctions.StateMachineFragment):
    def __init__(self, parent, id, *, job_flavor):
        super().__init__(parent, id)

        first = stepfunctions.Task(self, "First", ...)
        # ...
        last = stepfunctions.Task(self, "Last", ...)

        self.start_state = first
        self.end_states = [last]
# Do 3 different variants of MyJob in parallel
stepfunctions.Parallel(self, "All jobs")\
    .branch(MyJob(self, "Quick", job_flavor="quick").prefix_states())\
    .branch(MyJob(self, "Medium", job_flavor="medium").prefix_states())\
    .branch(MyJob(self, "Slow", job_flavor="slow").prefix_states())
Activity
Activities represent work that is done on some non-Lambda worker pool. The
Step Functions workflow will submit work to this Activity, and a worker pool
that you run yourself, probably on EC2, will pull jobs from the Activity and
submit the results of individual jobs back.
Workers need the Activity's ARN to poll for work, so if you use Activities, be
sure to pass the Activity ARN into your worker pool:
activity = stepfunctions.Activity(self, "Activity")

# Read this CloudFormation Output from your application and use it to poll for work on
# the activity.
cdk.CfnOutput(self, "ActivityArn", value=activity.activity_arn)
Metrics
Task objects expose various metrics on the execution of that particular task.
For example, to create an alarm on a particular task failing:
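A minimal sketch, assuming task is one of the sfn.Task objects defined above
and that the aws_cdk.aws_cloudwatch module is available:

import aws_cdk.aws_cloudwatch as cloudwatch

# Alarm as soon as the task fails once within a single evaluation period
cloudwatch.Alarm(self, "TaskFailedAlarm",
    metric=task.metric_failed(),
    threshold=1,
    evaluation_periods=1
)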
This library includes the basic building blocks of the AWS Cloud Development Kit (AWS CDK). It defines the core classes that are used in the rest of the
AWS Construct Library.
See the AWS CDK Developer Guide for
information on most of the capabilities of this library. The rest of this
README will only cover topics not already covered in the Developer Guide.
Durations
To make specifications of time intervals unambiguous, a single class called
Duration is used throughout the AWS Construct Library by all constructs
that take a time interval as a parameter (be it for a timeout, a
rate, or something else).
An instance of Duration is constructed by using one of the static factory
methods on it:
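For example:

Duration.seconds(300)   # 5 minutes
Duration.minutes(5)     # 5 minutes
Duration.hours(1)       # 1 hour
Duration.days(7)        # 7 days
Duration.parse("PT5M")  # 5 minutes, from an ISO 8601 duration string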
Secrets
To help avoid accidental storage of secrets as plain text, we use the SecretValue type to
represent secrets. Any construct that takes a value that should be a secret (such as
a password or an access key) will take a parameter of type SecretValue.
The best practice is to store secrets in AWS Secrets Manager and reference them using SecretValue.secretsManager:
secret = SecretValue.secrets_manager("secretId",
    json_field="password",  # optional: key of a JSON field to retrieve (defaults to all content)
    version_id="id",        # optional: id of the version (default AWSCURRENT)
    version_stage="stage"   # optional: version stage name (default AWSCURRENT)
)
Using AWS Secrets Manager is the recommended way to reference secrets in a CDK app.
SecretValue also supports the following secret sources:
SecretValue.plainText(secret): stores the secret as plain text in your app and the resulting template (not recommended).
SecretValue.ssmSecure(param, version): refers to a secret stored as a SecureString in the SSM Parameter Store.
SecretValue.cfnParameter(param): refers to a secret passed through a CloudFormation parameter (must have NoEcho: true).
SecretValue.cfnDynamicReference(dynref): refers to a secret described by a CloudFormation dynamic reference (used by ssmSecure and secretsManager).
ARN manipulation
Sometimes you will need to put together or pick apart Amazon Resource Names
(ARNs). The functions stack.formatArn() and stack.parseArn() exist for
this purpose.
formatArn() can be used to build an ARN from components. It will automatically
use the region and account of the stack you're calling it on:
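A sketch, assuming stack is an instance of Stack; the keyword arguments mirror
the ArnComponents properties:

# Builds "arn:<partition>:lambda:<region>:<account>:function:MyFunction"
stack.format_arn(
    service="lambda",
    resource="function",
    sep=":",
    resource_name="MyFunction"
)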
parseArn() can be used to get a single component from an ARN. parseArn()
will correctly deal with both literal ARNs and deploy-time values (tokens),
but in case of a deploy-time value be aware that the result will be another
deploy-time value which cannot be inspected in the CDK application.
# Extracts the function name out of an AWS Lambda Function ARN
arn_components = stack.parse_arn(arn, ":")
function_name = arn_components.resource_name
Note that depending on the service, the resource separator can be either
: or /, and the resource name can be either the 6th or 7th
component in the ARN. When using these functions, you will need to know
the format of the ARN you are dealing with.
For an exhaustive list of ARN formats used in AWS, see AWS ARNs and
Namespaces
in the AWS General Reference.
Dependencies
Construct Dependencies
Sometimes AWS resources depend on other resources, and the creation of one
resource must be completed before the next one can be started.
In general, CloudFormation will correctly infer the dependency relationship
between resources based on the property values that are used. In the cases where
it doesn't, the AWS Construct Library will add the dependency relationship for
you.
If you need to add an ordering dependency that is not automatically inferred,
you do so by adding a dependency relationship using
constructA.node.addDependency(constructB). This will add a dependency
relationship between all resources in the scope of constructA and all
resources in the scope of constructB.
If you want a single object to represent a set of constructs that are not
necessarily in the same scope, you can use a ConcreteDependable. The
following creates a single object that represents a dependency on two
constructs, constructB and constructC:
# Declare the dependable object
b_and_c = ConcreteDependable()
b_and_c.add(construct_b)
b_and_c.add(construct_c)

# Take the dependency
construct_a.node.add_dependency(b_and_c)
Stack Dependencies
Two different stack instances can have a dependency on one another. This
happens when a resource from one stack is referenced in another stack. In
that case, CDK records the cross-stack referencing of resources,
automatically produces the right CloudFormation primitives, and adds a
dependency between the two stacks. You can also manually add a dependency
between two stacks by using the stackA.addDependency(stackB) method.
A stack dependency has the following implications:
Cyclic dependencies are not allowed, so if stackA is using resources from
stackB, stackB cannot also use resources from stackA.
Stacks with dependencies between them are treated specially by the CDK
toolkit:
If stackA depends on stackB, running cdk deploy stackA will also
automatically deploy stackB.
stackB's deployment will be performed before stackA's deployment.
AWS CloudFormation features
A CDK stack synthesizes to an AWS CloudFormation Template. This section
explains how this module allows users to access low-level CloudFormation
features when needed.
Stack Outputs
CloudFormation stack outputs and exports are created using
the CfnOutput class:
CfnOutput(self, "OutputName",
value=bucket.bucket_name,
description="The name of an S3 bucket", # Optionalexport_name="Global.BucketName"
)
Parameters
CloudFormation templates support the use of Parameters to
customize a template. They enable CloudFormation users to input custom values to
a template each time a stack is created or updated. While the CDK design
philosophy favors using build-time parameterization, users may need to use
CloudFormation parameters in a number of cases (for example, when migrating an
existing stack to the AWS CDK).
Template parameters can be added to a stack by using the CfnParameter class:
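A minimal sketch of declaring a parameter (type and default are standard
CfnParameter properties):

CfnParameter(self, "MyParameter",
    type="Number",
    default=1337
)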
The value of parameters can then be obtained using one of the value methods.
As parameters are only resolved at deployment time, the values obtained are
placeholder tokens for the real value (Token.isUnresolved() would return true
for those):
param = CfnParameter(self, "ParameterName")

# If the parameter is a String
param.value_as_string

# If the parameter is a Number
param.value_as_number

# If the parameter is a List
param.value_as_list
Pseudo Parameters
CloudFormation supports a number of pseudo parameters,
which resolve to useful values at deployment time. CloudFormation pseudo
parameters can be obtained from static members of the Aws class.
It is generally recommended to access pseudo parameters from the scope's stack
instead, which guarantees that the values produced qualify the designated
stack, which is essential in cases where resources are shared cross-stack:
# "this" is the current constructstack=Stack.of(self)
stack.account# Returns the AWS::AccountId for this stack (or the literal value if known)stack.region# Returns the AWS::Region for this stack (or the literal value if known)stack.partition
Resource Options
CloudFormation resources can also specify resource
attributes. The CfnResource class allows
accessing those through the cfnOptions property:
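A sketch, using a low-level s3.CfnBucket resource; is_prod stands in for a
CfnCondition like the one defined in the Conditions section below:

raw_bucket = s3.CfnBucket(self, "Bucket")

# Attach resource attributes through cfn_options
raw_bucket.cfn_options.condition = is_prod
raw_bucket.cfn_options.metadata = {
    "metadata_key": "MetadataValue"
}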
Intrinsic Functions and Condition Expressions
CloudFormation supports intrinsic functions. These functions
can be accessed from the Fn class, which provides type-safe methods for each
intrinsic function as well as condition expressions:
# To use Fn::Base64
Fn.base64("SGVsbG8gQ0RLIQo=")

# To compose condition expressions:
environment_parameter = CfnParameter(self, "Environment")
Fn.condition_and(
    # The "Environment" CloudFormation template parameter evaluates to "Production"
    Fn.condition_equals("Production", environment_parameter),
    # The AWS::Region pseudo-parameter value is NOT equal to "us-east-1"
    Fn.condition_not(Fn.condition_equals("us-east-1", Aws.REGION))
)
Conditions
When working with deploy-time values (those for which Token.isUnresolved
returns true), idiomatic conditionals from the programming language cannot be
used (the value will not be known until deployment time). When conditional logic
needs to be expressed with unresolved values, it is necessary to use
CloudFormation conditions by means of the CfnCondition class:
environment_parameter = CfnParameter(self, "Environment")
is_prod = CfnCondition(self, "IsProduction",
    expression=Fn.condition_equals("Production", environment_parameter)
)

# Configuration value that is a different string based on IsProduction
# (Fn::If yields the first value when the condition is true)
stage = Fn.condition_if(is_prod.logical_id, "Prod", "Beta").to_string()

# Make the Bucket's creation conditional on IsProduction by accessing
# and overriding the CloudFormation resource
bucket = s3.Bucket(self, "Bucket")
cfn_bucket = bucket.node.default_child
cfn_bucket.cfn_options.condition = is_prod
Mappings
CloudFormation mappings are created and queried using the
CfnMapping class:
mapping = CfnMapping(self, "MappingTable",
    mapping={
        "regionName": {
            "us-east-1": "US East (N. Virginia)",
            "us-east-2": "US East (Ohio)"
        }
    }
)

mapping.find_in_map("regionName", Aws.REGION)
Dynamic References
CloudFormation supports dynamically resolving values
for SSM parameters (including secure strings) and Secrets Manager. Encoding such
references is done using the CfnDynamicReference class:
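For example, to encode a dynamic reference to a Secrets Manager secret (the
reference string follows the documented CloudFormation format; the ids here
are hypothetical):

CfnDynamicReference(
    CfnDynamicReferenceService.SECRETS_MANAGER,
    "secret-id:secret-string:json-key:version-stage:version-id"
)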
Template Options & Transform
CloudFormation templates support a number of options, including which Macros or
Transforms to use when deploying the stack. Those can be
configured using the stack.templateOptions property:
stack=Stack(app, "StackName")
stack.template_options.description="This will appear in the AWS console"stack.template_options.transform="AWS::Serverless"stack.template_options.metadata= {
"metadata_key": "MetadataValue"
}
Emitting Raw Resources
The CfnResource class allows emitting arbitrary entries in the
Resources section of the CloudFormation template.
As for any other resource, the logical ID in the CloudFormation template will be
generated by the AWS CDK, but the type and properties will be copied verbatim in
the synthesized template.
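A minimal sketch emitting a raw S3 bucket resource; the type and properties
are passed through verbatim:

CfnResource(self, "ResourceId",
    type="AWS::S3::Bucket",
    properties={
        "BucketName": "bucket-name"
    }
)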
Including raw CloudFormation template fragments
When migrating a CloudFormation stack to the AWS CDK, it can be useful to
include fragments of an existing template verbatim in the synthesized template.
This can be achieved using the CfnInclude class.
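A sketch, assuming the existing template lives in a local my-template.json file:

import json

with open("my-template.json") as f:
    template = json.load(f)

# Merge the existing template's sections into the synthesized template
CfnInclude(self, "Include", template=template)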
This is a developer preview (public beta) module. Releases might lack important features and might have
future breaking changes.
This API is still under active development and subject to non-backward
compatible changes or removal in any future version. Use of the API is not recommended in production
environments. Experimental APIs are not subject to the Semantic Versioning model.
Sometimes a single API call can fill the gap in the CloudFormation coverage. In
this case you can use the AwsCustomResource construct. This construct creates
a custom resource that can be customized to make specific API calls for the
CREATE, UPDATE and DELETE events. Additionally, data returned by the API
call can be extracted and used in other constructs/resources (creating a real
CloudFormation dependency using Fn::GetAtt under the hood).
The physical id of the custom resource can be specified or derived from the data
returned by the API call.
The AwsCustomResource uses the AWS SDK for JavaScript. Services, actions and
parameters can be found in the API documentation.
Paths to data must be specified using dot notation. For example, to get the
string value of the Title attribute for the first item returned by
dynamodb.query, use Items.0.Title.S.
import time

get_parameter = AwsCustomResource(self, "GetParameter",
    on_update={  # will also be called for a CREATE event
        "service": "SSM",
        "action": "getParameter",
        "parameters": {
            "Name": "my-parameter",
            "WithDecryption": True
        },
        # Update the physical id to always fetch the latest version
        "physical_resource_id": str(int(time.time() * 1000))
    }
)

# Use the value in another construct with
get_parameter.get_data("Parameter.Value")
IAM policy statements required to make the API calls are derived from the calls
and by default allow the actions to be performed on all resources (*). You can
restrict the permissions by specifying your own list of statements with the
policyStatements prop.
Chained API calls can be achieved by creating dependencies:
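A sketch of chaining; the service/action pairs are hypothetical placeholders
for two real API calls:

api_call_1 = AwsCustomResource(self, "API1",
    on_create={"service": "...", "action": "..."}  # hypothetical call
)
api_call_2 = AwsCustomResource(self, "API2",
    on_create={"service": "...", "action": "..."}  # hypothetical call
)

# The second call will only be made once the first one has completed
api_call_2.node.add_dependency(api_call_1)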
This is a developer preview (public beta) module. Releases might lack important features and might have
future breaking changes.
This API is still under active development and subject to non-backward
compatible changes or removal in any future version. Use of the API is not recommended in production
environments. Experimental APIs are not subject to the Semantic Versioning model.
Usage
Some information used in CDK Applications differs from one AWS region to
another, such as service principals used in IAM policies, S3 static website
endpoints, ...
The RegionInfo class
The library offers a simple interface to obtain region specific information in
the form of the RegionInfo class. This is the preferred way to interact with
the regional information database:
from aws_cdk.region_info import RegionInfo

# Get the information for "eu-west-1":
region = RegionInfo.get("eu-west-1")

# Access attributes:
region.s3_static_website_endpoint  # s3-website.eu-west-1.amazonaws.com
region.service_principal("logs.amazonaws.com")
The RegionInfo layer is built on top of the Low-Level API, which is described
below and can be used to register additional data, including user-defined facts
that are not available through the RegionInfo interface.
Low-Level API
This library offers a primitive database of such information so that CDK
constructs can easily access regional information. The FactName class provides
a list of known fact names, which can then be used with the Fact class to
retrieve a particular value:
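A minimal sketch:

from aws_cdk.region_info import Fact, FactName

# Returns the fact's value for the region, or None if it is not registered
endpoint = Fact.find("eu-west-1", FactName.S3_STATIC_WEBSITE_ENDPOINT)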
As new regions are released, it might happen that a particular fact you need is
missing from the library. In such cases, the Fact.register method can be used
to inject the missing fact into the database:
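A sketch, using a hypothetical region; the fact object's shape (region, name,
value) mirrors the TypeScript IFact interface, so verify the exact Python
binding before relying on it:

from aws_cdk.region_info import Fact, FactName

Fact.register({
    "region": "bermuda-triangle-1",  # hypothetical region
    "name": FactName.S3_STATIC_WEBSITE_ENDPOINT,
    "value": "s3-website.bermuda-triangle-1.nowhere.com"
})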
In the event information provided by the library is incorrect, it can be
overridden using the same Fact.register method demonstrated above, simply
adding an extra boolean argument:
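Continuing the sketch above, the extra boolean allows replacing a fact that is
already registered (the value here is a hypothetical correction):

Fact.register({
    "region": "us-east-1",
    "name": FactName.S3_STATIC_WEBSITE_ENDPOINT,
    "value": "the-correct-endpoint.amazonaws.com"
}, True)  # allow replacing the existing value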
If you happen to have stumbled upon incorrect data built into this library, it
is always a good idea to report your findings in a GitHub issue, so we can fix
it for everyone else!