In this post I explore the basics of creating a Lex bot as code. Amazon Lex is a service that allows you to create a bot that responds to certain chat scripts. You define the various ways people will talk to the bot, then attach AWS Lambda functions that actually perform the requested action and respond with the result, continue on through the script to gather more information, or merge into a new conversation altogether.
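The "bot as code" idea can be sketched as a plain data structure: the utterances, slots, and Lambda code hook below mirror the shape that boto3's `lex-models` client (`put_intent`) accepts. All names here (`GetStockSentiment`, `Ticker`, the Lambda ARN) are hypothetical illustrations, not values from the actual project.

```python
# Hypothetical intent definition, shaped like the input to
# boto3.client("lex-models").put_intent(...). Names are illustrative only.
intent_definition = {
    "name": "GetStockSentiment",
    # Sample utterances: the various ways a user might phrase the request.
    "sampleUtterances": [
        "What is the sentiment for {Ticker}",
        "How does the news look for {Ticker}",
    ],
    # Slots: the pieces of information the bot must gather before acting.
    "slots": [
        {
            "name": "Ticker",
            "slotConstraint": "Required",
            "valueElicitationPrompt": {
                "messages": [
                    {"contentType": "PlainText",
                     "content": "Which stock ticker?"}
                ],
                "maxAttempts": 2,
            },
        }
    ],
    # Fulfillment: hand the gathered slots to a Lambda function that
    # performs the action and returns the response.
    "fulfillmentActivity": {
        "type": "CodeHook",
        "codeHook": {
            "uri": "arn:aws:lambda:us-east-1:123456789012:function:sentimentBot",
            "messageVersion": "1.0",
        },
    },
}

# Deploying would then look roughly like:
#   client = boto3.client("lex-models")
#   client.put_intent(**intent_definition)
```

Because the whole bot is data, it can live in version control and be re-applied to any environment, which is the point of defining it "as code."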
The Ticker News Sentiment Analysis System (TNSAS, as we'll call it) has gone through quite a transformation lately. Not only have we taken advantage of containerization to scale the tone-analysis service independently of the content-scraper service, but we have also introduced Redis-based caching to cut the number of calls sent to IBM Watson to an absolute minimum, while also providing a log of all scraped documents and their related analyses.
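The caching pattern here is cache-aside: hash the scraped document, check Redis, and only call Watson on a miss. A minimal sketch, assuming a `get`/`setex`-style cache client (redis-py has exactly this interface) and a hypothetical `analyze_fn` standing in for the Watson call:

```python
import hashlib
import json

def analyze_with_cache(cache, document_text, analyze_fn, ttl_seconds=86400):
    """Cache-aside wrapper: only call the (paid) tone-analysis service on a
    cache miss. `cache` can be a redis.Redis client or anything exposing
    get(key) -> bytes|None and setex(key, ttl, value)."""
    # Key on a hash of the document so identical scrapes hit the cache.
    key = "tone:" + hashlib.sha256(document_text.encode("utf-8")).hexdigest()
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)
    result = analyze_fn(document_text)  # the expensive Watson call
    cache.setex(key, ttl_seconds, json.dumps(result))
    return result
```

The TTL also gives us the log-like behavior for free: every analysis lives in Redis, keyed by document hash, until it expires.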
We currently report this data via the console and copy and paste it into a spreadsheet for tracking over time. This is not ideal: it requires significant manual data collection over an extended period of time to get data at a resolution fine enough to enable something approaching a real-time system. We need a solution that can stand up on its own, collect data consistently, and report it per ticker.
Developing microservices for pleasure and profit works when it doesn't cost anything to do so. After our last month of developing a tone-analysis system using IBM Watson on Bluemix, I had tallied up quite a few queries, and as a result received a bill from Bluemix for a whole 9 cents.
Problem – Tone Analysis is Too Slow
Currently we fetch a list of the latest 20 news stories for a stock ticker, then iterate through the list sequentially, scraping each story's content and submitting that scrape to the IBM Watson tone-analysis service. As a result, processing a list of URLs takes O(n) time: each scrape-and-analyze round trip happens one after another.
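Since each round trip is I/O-bound (an HTTP scrape plus a Watson call), the sequential loop can be parallelized with a thread pool, bringing wall-clock time down from roughly n round trips to roughly one (bounded by the worker count). A minimal sketch, where `scrape_and_analyze` is a hypothetical stand-in for our scrape-then-analyze step:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_all(urls, scrape_and_analyze, max_workers=10):
    """Submit every URL at once instead of iterating sequentially.
    Suitable because the work is I/O-bound, so threads overlap the
    waiting on HTTP and Watson responses."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        # pool.map preserves input order, so results line up with the URLs.
        return list(pool.map(scrape_and_analyze, urls))
```

With 20 stories and 10 workers, the list is processed in about two "waves" rather than 20 sequential calls.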
In our previous post we finally began to see the individual microservices come together into a plausible end-to-end data flow. In this post, we are going to chain our microservices together to run an entire calculation stream sequentially. We will illustrate how to orchestrate a collection of microservices using Docker Compose, and how to use the Docker Registry to avoid reinventing the wheel when building microservice ecosystems.
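The Compose side of that orchestration amounts to a single file describing the services and their dependencies. A hedged sketch of what such a `docker-compose.yml` could look like: the service names, image names, and registry URL are illustrative assumptions, not the project's actual files.

```yaml
# Hypothetical docker-compose.yml sketch; names and images are illustrative.
version: "2"
services:
  scraper:
    image: registry.example.com/tnsas/content-scraper:latest
    depends_on:
      - redis
  tone-analysis:
    image: registry.example.com/tnsas/tone-analysis:latest
    depends_on:
      - redis
  redis:
    image: redis:3.2
```

Pulling prebuilt images from a registry, rather than rebuilding each service locally, is what lets a new ecosystem come up with a single `docker-compose up`.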