We have been creating an API using the Serverless Framework (serverless.com). Here is an example of our Serverless config file:
```yaml
service: hello-self-member-api

provider:
  name: aws
  region: eu-west-1
  runtime: nodejs8.10

functions:
  # API
  api-public:
    handler: handler.public
    events:
      - http:
          path: api/v1/public
          method: get
          integration: lambda
          cors: true
  api-private:
    handler: handler.private
    events:
      - http:
          path: api/v1/private
          method: get
          authorizer: aws_iam
          integration: lambda
          cors: true

  # Profile
  api-profile-read:
    handler: functions/profile/handler.profileRead
    events:
      - http:
          path: api/v1/profile/read
          method: get
          authorizer: aws_iam
          integration: lambda
          cors: true
  api-profile-update:
    handler: functions/profile/handler.profileUpdate
    events:
      - http:
          path: api/v1/profile/update
          method: put
          authorizer: aws_iam
          integration: lambda
          cors: true

  # Settings
  api-settings-update:
    handler: functions/settings/handler.settingsUpdate
    events:
      - http:
          path: api/v1/settings/update
          method: put
          authorizer: aws_iam
          integration: lambda
          cors: true

  # Measurement
  api-measurement-list:
    handler: functions/measurement/handler.measurementList
    events:
      - http:
          path: api/v1/measurements/list
          method: get
          authorizer: aws_iam
          integration: lambda
          cors: true
  api-measurement-create:
    handler: functions/measurement/handler.measurementCreate
    events:
      - http:
          path: api/v1/measurements/create
          method: post
          authorizer: aws_iam
          integration: lambda
          cors: true
  api-measurement-read:
    handler: functions/measurement/handler.measurementRead
    events:
      - http:
          path: api/v1/measurements/{id}/read
          method: get
          authorizer: aws_iam
          integration: lambda
          cors: true
  api-measurement-update:
    handler: functions/measurement/handler.measurementUpdate
    events:
      - http:
          path: api/v1/measurements/{id}/update
          method: put
          authorizer: aws_iam
          integration: lambda
          cors: true
  api-measurement-delete:
    handler: functions/measurement/handler.measurementDelete
    events:
      - http:
          path: api/v1/measurements/{id}/delete
          method: delete
          authorizer: aws_iam
          integration: lambda
          cors: true

  # Practice
  api-practice-create:
    handler: functions/practice/handler.practiceCreate
    events:
      - http:
          path: api/v1/practices/create
          method: post
          authorizer: aws_iam
          integration: lambda
          cors: true
  api-practice-update:
    handler: functions/practice/handler.practiceUpdate
    events:
      - http:
          path: api/v1/practices/{id}/update
          method: post
          authorizer: aws_iam
          integration: lambda
          cors: true
  api-practice-toggle:
    handler: functions/practice/handler.practiceToggle
    events:
      - http:
          path: api/v1/practices/{id}/toggle
          method: post
          authorizer: aws_iam
          integration: lambda
          cors: true
```
Here are some questions that we have:
- It seems we only need to specify our authorizer as `aws_iam` (not the ARN of a user pool) and AWS magically knows where to look. Is there any danger that our API authorises against the wrong pool?
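For contrast, an authorizer can also be pinned to an explicit Cognito User Pool; a minimal sketch of one function's event (the pool ARN below is a placeholder, not ours):

```yaml
api-private:
  handler: handler.private
  events:
    - http:
        path: api/v1/private
        method: get
        cors: true
        authorizer:
          # Placeholder ARN — substitute the real user pool
          arn: arn:aws:cognito-idp:eu-west-1:123456789012:userpool/eu-west-1_XXXXXXXXX
```

With `aws_iam` there is no pool to pick: API Gateway validates the SigV4 signature of whatever AWS credentials the caller presents, so the question becomes which role those credentials were vended under.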
- It seems that we don't need to set CORS headers in our Lambda functions, and that these are generated for us. Is that right? e.g.
```javascript
// `Profile` and `API` are our own project modules (requires omitted here).
module.exports.profileUpdate = async (event, context, callback) => {
  console.log('API Profile Update');
  let profile = Profile.update();
  callback(null, API.response(true, [], profile));
};
```
- In order to test our API via Postman, we currently create an IAM user with a custom policy (copied from the role that our identity pool hands to authorised users, TEST_API_ROLE) so that we can sign requests with an AWS access key / secret key. Ideally we'd just use a token in the Authorization header, like our users do. As a result we're not testing our API with an actual user from the user pool, so we suspect we're missing a technique here. We have tried a custom API 'Authorizer', but configuring the required policies seemed less straightforward.
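One way to test with a real user-pool user is to fetch tokens via the AWS CLI and paste the `IdToken` into the Authorization header in Postman. This assumes a Cognito User Pool authorizer (it will not work against `aws_iam`, which expects SigV4-signed requests), and all IDs below are placeholders:

```shell
# Placeholders throughout — substitute your own pool, client, and credentials.
# Requires the app client to have the ADMIN_NO_SRP_AUTH flow enabled.
aws cognito-idp admin-initiate-auth \
  --user-pool-id eu-west-1_XXXXXXXXX \
  --client-id YOUR_APP_CLIENT_ID \
  --auth-flow ADMIN_NO_SRP_AUTH \
  --auth-parameters USERNAME=test-user,PASSWORD=test-password
# The response includes AuthenticationResult.IdToken, usable as the
# Authorization header value in Postman.
```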
- The maximum number of resources you can declare in an AWS CloudFormation template is 200. Is this something we can raise, or should we split our API into much smaller components? Our estimate is roughly 6 resources per Lambda function, which means a single API could hold only about 30 endpoints. Are we hitting a limit that we should be nowhere near, in which case are we architecting this incorrectly?
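One workaround used in the community is the `serverless-plugin-split-stacks` plugin, which migrates resources into nested CloudFormation stacks so the root template stays under the limit. A sketch of enabling it (whether it fits this project is an assumption):

```yaml
plugins:
  - serverless-plugin-split-stacks

custom:
  splitStacks:
    perFunction: true
```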
- We are planning to build a real-time chat feature in our app and are looking at using the pub/sub functionality of the IoT service. We assume it's possible to easily manage fine-grained access to topics. Is this the right way to go, or should we be thinking about SNS?
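For reference, IoT Core policies can scope topic access per identity using policy variables; a sketch of a per-user read policy (the account ID and `chat/...` topic scheme are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["iot:Subscribe"],
      "Resource": "arn:aws:iot:eu-west-1:123456789012:topicfilter/chat/${cognito-identity.amazonaws.com:sub}/*"
    },
    {
      "Effect": "Allow",
      "Action": ["iot:Receive"],
      "Resource": "arn:aws:iot:eu-west-1:123456789012:topic/chat/${cognito-identity.amazonaws.com:sub}/*"
    }
  ]
}
```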
- With spin-up time for Java on Lambda looking quite high, we are leaning towards writing our own app and deploying via Fargate. It would be good to discuss this decision: we don't want to lose the easy integration with Cognito and the other AWS services that we get from the Serverless setup.
- What is the best way to test our functions and API locally?
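For context, the framework itself can run a single function locally with a supplied event; the event payload below is a placeholder, and `serverless-offline` (a community plugin that emulates API Gateway) is an alternative worth discussing:

```shell
# Invoke one function locally with a sample event.
serverless invoke local --function api-public --data '{"queryStringParameters": {}}'
```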
- AWS Pinpoint: does it work at the single-user level? Why would we use it? Could we piggyback on it for the chat app?