How to host a TensorFlow or Keras model in AWS SageMaker - nginx.conf
events {
    # Determines how many connections each worker process can handle simultaneously.
    # See https://www.digitalocean.com/community/tutorials/how-to-optimize-nginx-configuration
    # for more information.
    worker_connections 2048;
}

http {
    server {
        # Listen on port 8080, the port SageMaker expects the serving container to use.
        listen 8080 deferred;

        # Forward inference requests from SageMaker to TensorFlow Serving's REST API.
        location /invocations {
            proxy_pass http://localhost:8501/v1/models/half_plus_three:predict;
        }

        # Health check used by SageMaker to confirm that the server is alive.
        location /ping {
            return 200 "OK";
        }
    }
}
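As a rough sketch of how the two routes behave, the snippet below exercises them locally with Python's requests library. It assumes nginx is running with this config on port 8080 and that TensorFlow Serving is already serving the half_plus_three example model on port 8501; both port numbers and the model name come from the config above.

import requests

# Health check: SageMaker calls GET /ping; nginx answers 200 "OK" directly,
# without touching TensorFlow Serving.
ping = requests.get("http://localhost:8080/ping")
print(ping.status_code, ping.text)  # expected: 200 OK

# Inference: POST /invocations is proxied to TF Serving's REST predict API,
# which accepts the standard {"instances": [...]} JSON payload.
payload = {"instances": [1.0, 2.0, 5.0]}
resp = requests.post("http://localhost:8080/invocations", json=payload)
print(resp.json())  # half_plus_three computes x/2 + 3, so this should print
                    # {"predictions": [3.5, 4.0, 5.5]}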