In this series we have constructed two microservices. The first submits a string of text, or a message, to IBM Watson’s tone analysis service and reports back tonal analytics based on the input. The second accepts a URL and returns a text scrape of the content of that page.
Our next microservice will be a tool for amplifying the inputs to our previous two microservices. It will accept a stock ticker symbol and return a list of 10 links representing the latest news items for that ticker. We will then submit each link to the content-extraction and tone-analysis stages.
This is a 10× amplifier of work streams: if NASDAQ has 3,000 tickers, we will have 30,000 news stories to analyze in each analysis period. At this point we can start to see where the microservice architecture, specifically the message-driven/event-driven design of this system, is going to pay dividends.
We decided to use Python again for this simple RSS fetcher and parser. Since this is a fairly straightforward implementation, especially after our previous post on Python microservices, we will reuse the same structure and Dockerfile to create the container.
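For readers who skipped the earlier post, a minimal sketch of what that Dockerfile looks like; the base image, filenames, and dependency list here are assumptions, not the exact file from the previous post:

```dockerfile
# Sketch of the container build for the RSS fetcher microservice
# (base image and app filename are assumptions)
FROM python:2.7
WORKDIR /app
COPY . /app
RUN pip install flask requests
EXPOSE 5000
CMD ["python", "app.py"]
```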
from flask import Flask
from flask import request
import requests
import json
import xml.etree.ElementTree as ET

app = Flask(__name__)

@app.route("/news")
def news():
    # Fetch the Yahoo Finance RSS feed for the requested ticker symbol
    ticker = request.values.get("ticker")
    r = requests.get("https://feeds.finance.yahoo.com/rss/2.0/headline?s=%s&region=US&lang=en-US" % ticker)
    root = ET.fromstring(r.content)
    # Collect the link of each news item in the feed
    result = [elem.text for elem in root.findall('.//channel/item/link')]
    response = app.response_class(response=json.dumps(result), mimetype='application/json')
    return response

if __name__ == '__main__':
    app.run(host='0.0.0.0')
As you can tell, if you have been following this series, there is now a data flow that will need to be assembled with some sort of glue. In the next few posts we are going to evaluate a few messaging/queueing/data-pipeline architectures.
We have a few to evaluate. In particular, I want to look at RabbitMQ, Apache Kafka, Spring Cloud Stream, Amazon Kinesis, and perhaps some sort of hybrid architecture. Additionally, I want to investigate how to scale these clusters appropriately. Our PaaS/IaaS choice here will be difficult, as there are many competing alternatives.
Our next post, however, will be about storage. In particular, if we have already processed a URL, we want to reuse the stored copy of that calculation. To do this we will use a Redis key-value store to map URLs to their tone analysis.
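The cache check itself is small. Here is a minimal sketch, assuming a redis-py-style client with `get`/`set`, and a hypothetical `analyze` callable standing in for the tone-analysis microservice (both names are illustrative, not from the series code):

```python
import json

def get_tone(cache, url, analyze):
    """Return the tone analysis for url, computing and caching it on a miss.

    `cache` is any object with Redis-style get/set (e.g. redis.Redis()),
    `analyze` is a stand-in for the call to the tone-analysis microservice.
    """
    cached = cache.get(url)
    if cached is not None:
        # Cache hit: skip the expensive Watson round-trip
        return json.loads(cached)
    result = analyze(url)
    cache.set(url, json.dumps(result))
    return result
```

With a real Redis client this would be `get_tone(redis.Redis(), url, analyze)`; any object exposing `get`/`set` works, which also makes the logic easy to test.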
We additionally want to be able to track this data in a columnar format so that we can graph it easily. For this, a simple relational data store with PostgreSQL is what we will use. In the next blog post we will illustrate standing up these two services.
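To make the graphing idea concrete, here is a minimal sketch of the relational layout, illustrated with Python's built-in sqlite3 so it runs standalone; the production store would be PostgreSQL, and the table and column names are assumptions:

```python
import sqlite3

# One row per analyzed story: ticker, source URL, tone, score, timestamp.
# sqlite3 is used here only for illustration; production would be PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE tone_analysis (
        ticker     TEXT NOT NULL,
        url        TEXT NOT NULL,
        tone       TEXT NOT NULL,
        score      REAL NOT NULL,
        fetched_at TEXT NOT NULL
    )
""")
conn.execute(
    "INSERT INTO tone_analysis VALUES (?, ?, ?, ?, ?)",
    ("IBM", "http://example.com/story", "joy", 0.62, "2017-01-01T00:00:00Z"),
)
# A graph-friendly query: average score per tone for one ticker
rows = conn.execute(
    "SELECT tone, AVG(score) FROM tone_analysis WHERE ticker = ? GROUP BY tone",
    ("IBM",),
).fetchall()
```

A query like this gives one value per tone per ticker, which maps directly onto a time-series or bar chart.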
In addition to that data, we will want to correlate our tone analysis with the stock price of the ticker symbol, so a simple stock price microservice will be used. For this one, however, we are going to try to use a container straight off Docker Hub – in essence starting to hybridize our architecture and illustrating how to construct an application with publicly sourced containers.