If you're interested in software, you've probably heard about deep learning, and maybe even done some reading or played around with it. But unless you have a high-end gaming desktop with a serious graphics card, you've probably found that running the new wave of CUDA-based, GPU-accelerated deep learning tools is painfully slow, if they run at all.
That's what happened to me. So it was time to spin up an EC2 instance on AWS and use someone else's hardware. This is a basic introduction to how I did that, from creating an AWS dev account to installing some fun Python deep learning projects from GitHub. If you follow along, you'll be in a good position to install whatever other tools you want (Caffe, for instance) and get deep.
If you haven't already, you need to set up your Amazon AWS profile:
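The console walks you through account creation. If you also want command-line access later on, the AWS CLI reads credentials from two small files under `~/.aws` — a minimal sketch, with placeholder values rather than real keys:

```ini
# ~/.aws/credentials -- placeholders, substitute your own access keys
[default]
aws_access_key_id = YOUR_ACCESS_KEY_ID
aws_secret_access_key = YOUR_SECRET_ACCESS_KEY

# ~/.aws/config
[default]
region = us-east-1
```

You can also generate these files interactively by running `aws configure`.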
If you want to make use of GPU-accelerated deep learning tools, you'll want to pick one of the larger instance types that comes with a dedicated graphics card:
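You can launch the instance from the web console, but for reference, here's a non-authoritative sketch of the equivalent CLI call, assuming the AWS CLI is configured. `g2.2xlarge` is a GPU instance type (NVIDIA GRID K520); the AMI ID and key-pair name are placeholders. The sketch prints the command rather than running it, so you can review it first:

```shell
# Sketch only: build and print the launch command for review.
INSTANCE_TYPE="g2.2xlarge"   # GPU instance type (NVIDIA GRID K520)
AMI_ID="ami-xxxxxxxx"        # placeholder: look up a current Ubuntu AMI

echo aws ec2 run-instances \
  --image-id "$AMI_ID" \
  --instance-type "$INSTANCE_TYPE" \
  --key-name my-key-pair
```

Once you've double-checked the AMI and key name, drop the `echo` to actually launch.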
Install Nvidia's CUDA to make use of that GPU you're paying for:
- CUDA to nolearn
- AWS tutorial on parallel computing
- Install Caffe from scratch
- Installing Theano on AWS
If you want from-scratch, CUDA-specific install instructions:
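Whichever install route you take, the toolkit lands in `/usr/local/cuda` by default, and the compiler (`nvcc`) and shared libraries need to be on your paths before anything CUDA-linked will run. A minimal sketch, assuming the default install location:

```shell
# Assumes CUDA was installed to its default /usr/local/cuda prefix.
# Add these lines to ~/.bashrc so they survive across SSH sessions.
export PATH="/usr/local/cuda/bin:$PATH"
export LD_LIBRARY_PATH="/usr/local/cuda/lib64:$LD_LIBRARY_PATH"

# Sanity check: nvcc should now resolve (prints nothing if not yet installed)
command -v nvcc || true
```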
Here are a couple of fun deep learning Python tools on GitHub to flex your new AWS system. These tools are great fun, and have the benefit of being much quicker and easier to install than Theano, Caffe, or what-have-you.
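The install pattern for most of these GitHub projects is the same. A minimal sketch, assuming Python 3's built-in `venv` module; the repo URL is left as a placeholder since each project's README has the real one:

```shell
# A clean virtualenv per project keeps dependency conflicts at bay.
python3 -m venv dl-env
. dl-env/bin/activate

# Then, per the project's README (placeholder repo URL):
# git clone https://github.com/<user>/<project>.git
# pip install -r <project>/requirements.txt

python -c "import sys; print(sys.prefix)"   # confirms which env is active
```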
Here are some result images created using the Neural Artistic Style library above. I used the project to pull content information from my own photographs and style/color/texture information from famous paintings.