Wednesday 12 August 2015

Historical Stock Quotations as input for your Spring Application

Update on July 2nd 2017

Unfortunately, the API described in this article has changed: it now requires a session cookie to access the data in CSV format. I guess the folks at Yahoo do not like to serve their data without the chance of displaying their ads...
Still, you might download the data manually (example URL here).
In the meantime, I'm looking for an alternative source of prices. I'll keep you posted.

Summary


In a previous article, we described how to get new stock quotations as soon as they were published in Google Finance (after a 20-minute delay, of course; this is a free service, after all).
However, it was hard to study this data without any historical data to compare against, so we defined a manual process to load some historical prices.

That manual process is quite inefficient, though: it has to be run by hand, and it leaves a data gap between the last file update and the current quotations.

Solution: Yahoo finance historical quotations


We know that this service is not suitable for "real-time" quotations in the case of the Spanish stock exchange. However, it is perfectly good for retrieving historical quotations over a period of time, and it offers additional information such as:
- Volume
- Open, close, high and low prices
- Statistical data, such as moving averages, etc.

API Description


There seems to be no documentation for this interface at all (or at least, I have not been able to find any); however, I found some information about it buried in this 2009 blog article.

Basically, you need to compose a URL with the stock ticker and some time delimiters, for instance:
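As the interface is undocumented, here is a sketch of how such a URL could be assembled, based on the parameter scheme described in that blog article: `s` is the ticker, `a`/`b`/`c` are the start month (zero-based), day and year, `d`/`e`/`f` the end date, and `g` the interval. The ticker and dates below are just illustrative values.

```java
// Sketch of the old, undocumented Yahoo Finance CSV interface.
// Parameters: s = ticker, a/b/c = start month (0-based)/day/year,
// d/e/f = end month (0-based)/day/year, g = interval (d = daily).
public class YahooUrlBuilder {

    public static String build(String ticker,
                               int fromDay, int fromMonth, int fromYear,
                               int toDay, int toMonth, int toYear) {
        // Months are zero-based in this API (January = 0)
        return "http://ichart.finance.yahoo.com/table.csv"
                + "?s=" + ticker
                + "&a=" + (fromMonth - 1) + "&b=" + fromDay + "&c=" + fromYear
                + "&d=" + (toMonth - 1) + "&e=" + toDay + "&f=" + toYear
                + "&g=d"; // daily quotes
    }

    public static void main(String[] args) {
        // Santander (SAN.MC) quotes for the whole of 2014
        System.out.println(build("SAN.MC", 1, 1, 2014, 31, 12, 2014));
    }
}
```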


New Spring integration flow


As I had to replace the logic that read the old static files, I created a new flow, defined entirely in a separate XML file. Basically, what it does is:
- Read from the application settings which stocks we are going to retrieve.
- Read the time range to recover.
- Perform one request per stock to the specific URL.
- Parse and convert the received CSV to our internal format.
- Publish the new POJOs into the topic so the consumer can use this information seamlessly (it does not really care how the POJOs were built, as long as the contract is respected).
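The CSV-to-POJO conversion step could be sketched like this. The response rows have the form "Date,Open,High,Low,Close,Volume,Adj Close"; the Quote class here is just a stand-in for the application's internal format.

```java
// Minimal sketch of the CSV-to-POJO conversion step.
// A Yahoo CSV row looks like: Date,Open,High,Low,Close,Volume,Adj Close
public class QuoteParser {

    public static class Quote {
        public final String date;
        public final double open, high, low, close;
        public final long volume;

        Quote(String date, double open, double high, double low,
              double close, long volume) {
            this.date = date;
            this.open = open;
            this.high = high;
            this.low = low;
            this.close = close;
            this.volume = volume;
        }
    }

    public static Quote parseLine(String csvLine) {
        String[] fields = csvLine.split(",");
        return new Quote(fields[0],
                Double.parseDouble(fields[1]),   // open
                Double.parseDouble(fields[2]),   // high
                Double.parseDouble(fields[3]),   // low
                Double.parseDouble(fields[4]),   // close
                Long.parseLong(fields[5]));      // volume
    }

    public static void main(String[] args) {
        Quote q = parseLine("2015-08-10,5.90,6.01,5.85,5.98,12345678,5.95");
        System.out.println(q.date + " close=" + q.close);
    }
}
```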

Application.properties
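As a sketch, the settings could look like this (the property names are assumptions for illustration; the real file is in the repository):

```properties
# Comma-separated list of tickers to retrieve
stokker.stocks=SAN.MC,BBVA.MC,TEF.MC
# Time range to recover
stokker.historical.start.year=2014
stokker.historical.end.year=2015
```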


HistoricalCSVRequestCreator

This class receives a signal from the Spring ApplicationContext as soon as it is initialised (since it implements the ApplicationListener interface). It then parses the property containing the stocks to be retrieved and finally injects one message per stock into the downstream channel (note that the ticker code is set as a message header).
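Spring-specific wiring aside, the core of that listener is just splitting the comma-separated property and emitting one message per ticker. A plain-Java sketch of that part (the message shape is modelled here as a simple value object):

```java
import java.util.ArrayList;
import java.util.List;

// Core logic of the request creator: split the configured comma-separated
// ticker list and produce one message per stock, with the ticker as a header.
public class TickerSplitter {

    public static class StockMessage {
        public final String tickerHeader;
        StockMessage(String tickerHeader) { this.tickerHeader = tickerHeader; }
    }

    public static List<StockMessage> split(String stocksProperty) {
        List<StockMessage> messages = new ArrayList<>();
        for (String ticker : stocksProperty.split(",")) {
            messages.add(new StockMessage(ticker.trim()));
        }
        return messages;
    }

    public static void main(String[] args) {
        for (StockMessage m : split("SAN.MC, BBVA.MC, TEF.MC")) {
            System.out.println(m.tickerHeader);
        }
    }
}
```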


HTTP Outbound Gateway

This is the XML configuration for the component that performs the request. Note how the target ticker is retrieved from the ticker message header and how the time range is taken from the properties file.
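A sketch of what that gateway configuration could look like (channel and property names are assumptions; the real file is in the repository):

```xml
<!-- Sketch: one GET request per incoming message; the ticker comes from a
     message header, the time range from the properties file -->
<int-http:outbound-gateway request-channel="historicalRequestChannel"
        reply-channel="csvReplyChannel"
        url="http://ichart.finance.yahoo.com/table.csv?s={ticker}&amp;c=${stokker.historical.start.year}&amp;f=${stokker.historical.end.year}&amp;g=d"
        http-method="GET"
        expected-response-type="java.lang.String">
    <int-http:uri-variable name="ticker" expression="headers['ticker']"/>
</int-http:outbound-gateway>
```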
And that's it! As soon as the application is running, a stream of data will be loaded into the ElasticSearch node so you can perform studies with wider statistical information.

As usual you can find the code in this GitHub repository. Feedback is always welcome!

Related articles


  1. Live Stock data as input for your Spring application
  2. Implement a Spring Integration flow with Spring Boot
  3. Integrate the application with ElasticSearch
  4. Represent and search your data with Kibana
  5. Deploy your Spring Boot microservice in a Docker container


Sunday 9 August 2015

Deploy your Spring Boot microservice + Kibana in a Docker container

Summary


In this step, we are going to deploy our solution to a Docker container. This eases deployment, as we don't have to start/stop each module one by one (i.e. the microservice, ElasticSearch and Kibana), and the port configuration is much simpler.

Docker installation


If you are running a Linux distribution, Docker works out of the box. On Windows or Mac OS X, it's a bit more complicated. One option to run Docker on those systems is Boot2Docker (which actually runs a virtualized Linux with Docker installed in it).
You can find the installation instructions here.

Creation of the Docker image


The specification of the Docker image is defined in a Dockerfile, placed within the application resources. Here is an explanation of each line of the file:
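A minimal sketch of such a Dockerfile (the JAR file name is an assumption; the real file is in the repository):

```dockerfile
# Base image providing a Java 8 runtime
FROM java:8
# Temp directory used by Spring Boot (e.g. for embedded Tomcat)
VOLUME /tmp
# Copy the executable JAR produced by the Maven build into the image
ADD stokker-1.0-SNAPSHOT.jar app.jar
# Spring Boot (8080) and ElasticSearch (9200) ports
EXPOSE 8080 9200
# Launch the application on container start
ENTRYPOINT ["java", "-jar", "/app.jar"]
```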


Note: I based this file on the one used in this example.

In order to create the Docker image, you need access to the Dockerfile and to our application's executable JAR file generated by the Maven build. You can achieve this in two steps:

  • Create a shared folder called "stokker" in your VirtualBox VM. This folder should point to the folder on your local host where the code is compiled.
  • Run this command to mount that folder in the embedded Linux OS running Docker:
    • sudo mount -t vboxsf stokker /mnt/stokker

Now, change the current directory to /mnt/stokker/target and run this command to register the image locally (note the trailing dot, which sets the build context to the current directory):

docker build -t victor-ferrer/stokker .


Run it!


First, find out the image id of the newly created image:
docker images

By executing this command you will launch a container from that image (note that the options must go before the image id):
docker run -i -P <image_id>

Note: The -P option publishes the ports exposed in the Dockerfile. If that parameter is missing, you will have to provide the mappings manually with the -p option.

Note #2: If you are running the Docker image inside VirtualBox, you will have to expose the same ports again so they can be reached from outside VirtualBox. To do so, open the VM network configuration and add the following port forwarding rules:

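The same forwarding rules can also be added from the command line (the VM name below assumes the Boot2Docker default; the ports are the ones used by the endpoints further down):

```shell
# Forward host ports to the Boot2Docker VM (run while the VM is stopped)
VBoxManage modifyvm "boot2docker-vm" --natpf1 "springboot,tcp,,8080,,8080"
VBoxManage modifyvm "boot2docker-vm" --natpf1 "elasticsearch,tcp,,9200,,9200"
```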




That should be all. You can check any of the Spring Boot Actuator endpoints exposed on port 8080, or the ElasticSearch cluster status endpoint:

  • http://<container ip address>:8080/health
  • http://<container ip address>:9200/_cluster/health

You can get the <container ip address> by executing a netstat command and choosing the IP associated with the VirtualBox container.

Note: I'm still working on launching Kibana alongside our application and ElasticSearch...
Moreover, the Docker best practices state that it is not a good idea to run several processes in the same Docker container, as that impairs horizontal scalability and application decoupling. So, I'm not sure whether I will finally add Kibana to this image or to a new one...

Related articles


  1. Define the application input: Google Spreadsheets
  2. Implement a Spring Integration flow with Spring Boot
  3. Integrate the application with ElasticSearch
  4. Represent and search your data with Kibana