Working with Amplify has allowed me to develop some opinions about it. As with all opinions, these are absolutely correct :)
Amplify is an opinionated, category-based client framework for building scalable mobile and web apps. Think of it as a devops-as-code tool that manages all aspects of a traditional application's infrastructure. Under the hood, Amplify generates CloudFormation templates, which are then used to provision your application's resources. These templates are produced by the guided configuration wizards that the Amplify CLI provides.
Pros
- Provides an interface to most AWS services required to create a back-end API
- Authentication & authorization for other AWS services
- Integrates well with Git Flow
- GraphQL or REST
- DynamoDB or RDS Aurora Serverless
- Write schema in GraphQL Schema Definition Language using a schema.graphql file (all AWS CloudFormation templates are derived from this)
- Model- and property-level authorization directives baked into its SDL
- Can write custom resolvers and pipeline resolvers with AppSync
- Supports offline-first
Cons
- No automated migration solution for DynamoDB (though you could potentially avoid this issue by using RDS and a migration library like TypeORM)
- Very young (ran into quite a few bugs related to the library itself)
- Not ideal for rapidly changing schemas (moving multiple DynamoDB GSIs in a single deployment can be problematic)
- Provided code-generation tool quickly loses its usefulness
- Sharing backends between repos can sometimes be painful
- Resolvers are written in Apache Velocity Templating Language (not terrible but also not the greatest)
- Writing custom business logic is non-trivial with a GraphQL API (it's a bit simpler with REST, since you're customizing your Lambda functions anyway)
- Custom resolvers require basic knowledge of CloudFormation
- Offline-first requires a LOT of setup and configuration - even more if your requirements extend past basic needs (DeltaSync)
Let's take some time to play around with Amplify and see, at a basic level, what it's capable of.
```sh
$ npm install -g @aws-amplify/cli
$ amplify configure
$ yarn add aws-amplify
$ amplify init # walks you through the config process
```
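Once the project is initialized, wiring the library into a JavaScript app is a one-time configuration step. A minimal sketch, assuming the `aws-exports.js` file that Amplify generates in `src/`:

```javascript
// src/index.js — hook the Amplify client library up to the
// backend resources provisioned by the CLI. The aws-exports
// module is generated for you; don't edit it by hand.
import Amplify from 'aws-amplify';
import awsconfig from './aws-exports';

Amplify.configure(awsconfig);
```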
Example init config questions & answers:
```
? Enter a name for the project amplify-example
? Enter a name for the environment staging
? Choose your default editor: None
? Choose the type of app that you're building javascript
Please tell us about your project
? What javascript framework are you using react
? Source Directory Path: src
? Distribution Directory Path: build
? Build Command: npm run-script build
? Start Command: npm run-script start
```
Now we must deploy these resources with `amplify push`, which brings up the following prompt:
| Category | Resource name | Operation | Provider plugin |
| -------- | ---------------------- | --------- | ----------------- |
| Auth | amplifyexamplec80b54e2 | Create | awscloudformation |
This is the same output as `amplify status`.
```sh
$ amplify add api
```
Example api config questions & answers:
```
? Please select from one of the below mentioned services GraphQL
? Provide API name: examplify
? Choose an authorization type for the API Amazon Cognito User Pool
Use a Cognito user pool configured as a part of this project
? Do you have an annotated GraphQL schema? No
? Do you want a guided schema creation? Yes
? What best describes your project: One-to-many relationship (e.g., "Blogs" with "Posts" and "Comments")
? Do you want to edit the schema now? No
```
Based on the answer to our guided schema creation, a sample schema was produced for us.
```graphql
type Blog @model {
  id: ID!
  name: String!
  posts: [Post] @connection(name: "BlogPosts")
}

type Post @model {
  id: ID!
  title: String!
  blog: Blog @connection(name: "BlogPosts")
  comments: [Comment] @connection(name: "PostComments")
}

type Comment @model {
  id: ID!
  content: String
  post: Post @connection(name: "PostComments")
}
```
You'll notice two interesting things within this schema: `@model` and `@connection`. The rest should be familiar.

- `@model` tells Amplify to produce a table for this type
- `@connection` tells Amplify to produce an index for this field
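To make the effect of `@model` concrete, here is the set of operations Amplify derives for the `Blog` type alone. The names follow Amplify's naming convention; this object is an illustrative summary, not generated code:

```javascript
// Operations Amplify generates for a single @model type (here, Blog).
// Every @model type gets the same CRUD + subscription treatment.
const generatedOperations = {
  queries: ['getBlog', 'listBlogs'],
  mutations: ['createBlog', 'updateBlog', 'deleteBlog'],
  subscriptions: ['onCreateBlog', 'onUpdateBlog', 'onDeleteBlog'],
};
```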
Now we must deploy the API with `amplify push`, which brings up the following prompt:
| Category | Resource name | Operation | Provider plugin |
| -------- | ---------------------- | --------- | ----------------- |
| Api | examplify | Create | awscloudformation |
| Auth | amplifyexamplec80b54e2 | No Change | awscloudformation |
This is the same output as `amplify status`.
Before your API is deployed, Amplify will ask whether you want to generate queries, mutations, and subscriptions based on your `schema.graphql` file. In reality you'll probably want to manage your GraphQL files yourself, but for this example I'll leverage the auto-generated code.
Example code-gen questions & answers:
```
? Do you want to generate code for your newly created GraphQL API Yes
? Choose the code generation language target javascript
? Enter the file name pattern of graphql queries, mutations and subscriptions src/graphql/**/*.js
? Do you want to generate/update all possible GraphQL operations - queries, mutations and subscriptions Yes
? Enter maximum statement depth [increase from default if your schema is deeply nested] 2
```
After you've answered the code-generation questions, Amplify will deploy your API.
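For a sense of what the codegen produces, a file such as `src/graphql/mutations.js` contains operation documents along these lines. This is an abridged sketch; the real file uses `export const` and covers every generated operation:

```javascript
// Abridged sketch of a generated mutation document
// (selection set limited by the statement depth of 2 chosen above)
const createBlog = /* GraphQL */ `
  mutation CreateBlog($input: CreateBlogInput!) {
    createBlog(input: $input) {
      id
      name
      posts {
        nextToken
      }
    }
  }
`;
```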
Let's create our first Blog to test that the API was stood up properly. We can do this by using the AWS AppSync Queries console.
```graphql
mutation CreateBlog {
  createBlog(input: {
    name: "Exampliblog"
  }) {
    id
    name
  }
}
```
This will return:
```json
{
  "data": {
    "createBlog": {
      "id": "57699c21-81a5-499f-a53c-d7b5bbd7d184",
      "name": "Exampliblog"
    }
  }
}
```
Great! We've successfully deployed and created our first blog!
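The AppSync console is handy for smoke tests, but from application code you'd run the same mutation through the client library. A sketch, assuming `Amplify.configure` has already been called and the generated files from the codegen step are in place:

```javascript
// Execute the CreateBlog mutation from app code using the
// generated operation document (sketch; error handling omitted).
import { API, graphqlOperation } from 'aws-amplify';
import { createBlog } from './graphql/mutations';

async function createExampleBlog() {
  const result = await API.graphql(
    graphqlOperation(createBlog, { input: { name: 'Exampliblog' } })
  );
  return result.data.createBlog; // { id, name }
}
```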
You'll notice that in our `Blog` model we're defining an `id` property with the type `ID`. Whenever this type is present, Amplify will auto-generate logic in the Blog resolver to populate that property with a UUID. We can check the Blog's resolver for `Mutation.createBlog` to confirm this.
```vtl
## [Start] Prepare DynamoDB PutItem Request. **
$util.qr($context.args.input.put("createdAt", $util.time.nowISO8601()))
$util.qr($context.args.input.put("updatedAt", $util.time.nowISO8601()))
$util.qr($context.args.input.put("__typename", "Blog"))
{
  "version": "2017-02-28",
  "operation": "PutItem",
  "key": {
    "id": $util.dynamodb.toDynamoDBJson($util.defaultIfNullOrBlank($ctx.args.input.id, $util.autoId()))
  },
  "attributeValues": $util.dynamodb.toMapValuesJson($context.args.input),
  "condition": {
    "expression": "attribute_not_exists(#id)",
    "expressionNames": {
      "#id": "id"
    }
  }
}
## [End] Prepare DynamoDB PutItem Request. **
```
You'll notice that `$util.autoId()` is used when an `id` is missing from our request payload. Additionally, you'll notice two lines for `createdAt` and `updatedAt` at the top of the above resolver.
```vtl
$util.qr($context.args.input.put("createdAt", $util.time.nowISO8601()))
$util.qr($context.args.input.put("updatedAt", $util.time.nowISO8601()))
```
These two lines are unconditionally generated when we create a model, which means Amplify is always storing and automatically updating timestamps for us behind the scenes. However, that doesn't mean we can access those values in our queries; we'd have to update our models to include them in the response.
A new requirement just popped up: the client now wants all records in the DB to contain timestamps. Now that we know we have auto-generated timestamps at our disposal, let's leverage them in our schema.
Amplify promotes a philosophy similar to Git feature branches: for each feature a developer is working on, it's suggested that you spin up a new isolated environment dedicated to that branch. However, some folks choose to have one dedicated environment per developer instead, which also works fine.
For this example we'll create a dedicated feature environment for our branch, which will introduce timestamps into the schema.
```sh
$ amplify env add
```
Example env add config questions & answers:
```
? Do you want to use an existing environment? No
? Enter a name for the environment: timestamps
? Do you want to use an AWS profile? Yes
? Please choose the profile you want to use: my-amplify-profile
```
After we've added the new environment, we must deploy the API with `amplify push`, just as when we were getting started. Once we've done that, our `timestamps` environment will be an exact clone of the `staging` environment, minus the data.
Let's now update our schema so that we can receive the auto-generated timestamps back in each model's response.
```graphql
type Blog @model {
  id: ID!
  name: String!
  posts: [Post] @connection(name: "BlogPosts")
  createdAt: AWSDateTime
  updatedAt: AWSDateTime
}

type Post @model {
  id: ID!
  title: String!
  blog: Blog @connection(name: "BlogPosts")
  comments: [Comment] @connection(name: "PostComments")
  createdAt: AWSDateTime
  updatedAt: AWSDateTime
}

type Comment @model {
  id: ID!
  content: String
  post: Post @connection(name: "PostComments")
  createdAt: AWSDateTime
  updatedAt: AWSDateTime
}
```
You'll notice that we're not enforcing the presence of these values (no `!`), since they're created for us when our request resolver is run.
Whenever our `schema.graphql` file is changed, we can run `amplify status` (think `git status`) to ping AWS and determine whether there are any pending updates that require a deploy.
Since we've just added timestamps to our models the output of our status check looks like so:
Current Environment: timestamps
| Category | Resource name | Operation | Provider plugin |
| -------- | ---------------------- | --------- | ----------------- |
| Api | examplify | Update | awscloudformation |
| Auth | amplifyexamplec80b54e2 | No Change | awscloudformation |
GraphQL endpoint: https://s5ghmhss7ff75ccsab4ntlksui.appsync-api.us-east-1.amazonaws.com/graphql
Let's go ahead and get those changes deployed with `amplify push`!
We can confirm the changes were made by updating our earlier `CreateBlog` mutation.
```graphql
mutation CreateBlog {
  createBlog(input: {
    name: "Exampliblog"
  }) {
    id
    name
    createdAt
    updatedAt
  }
}
```
This returns:
```json
{
  "data": {
    "createBlog": {
      "id": "6311f9ef-a6ba-4120-a8e5-27540af37c33",
      "name": "Exampliblog",
      "createdAt": "2019-05-23T19:06:49.535Z",
      "updatedAt": "2019-05-23T19:06:49.535Z"
    }
  }
}
```
After your beautiful new code has been merged into master, we'll want to update Amplify's staging environment to reflect the changes made in our timestamps environment. To do this we can run `amplify env checkout staging`. The output will indicate that there's a pending update operation for our API resource; a simple `amplify push` will sync those pending changes to our staging environment, effectively making it identical to our timestamps environment.
To verify this, we can grab the ID of the Blog we created earlier and request its `createdAt` and `updatedAt` values.
```graphql
query GetBlog {
  getBlog(id: "57699c21-81a5-499f-a53c-d7b5bbd7d184") {
    id
    name
    createdAt
    updatedAt
  }
}
```
This returns:
```json
{
  "data": {
    "getBlog": {
      "id": "57699c21-81a5-499f-a53c-d7b5bbd7d184",
      "name": "Exampliblog",
      "createdAt": "2019-05-23T16:47:42.074Z",
      "updatedAt": "2019-05-23T16:47:42.074Z"
    }
  }
}
```