Summary
As mentioned in the previous post, the destinations for the retrieved information will be:
- A simple log
- An Elasticsearch node, for further data indexing and analysis.
How do we accomplish this? Very easy.
As we saw during the project creation, we included the spring-boot-starter-data-elasticsearch dependency. This dependency provides us with:
- A "template" API to perform REST operations against an ES node, without all the REST boilerplate.
- An embedded ES node, which works great for running tests and for helping you understand ES.
Launching an ES node along with your Spring Boot application
If you want to launch an embedded ES node, you just need to:
- Enable it in your application.properties file by setting spring.data.elasticsearch.properties.http.enabled=true.
- Check that it is running properly with this curl command:
- $ curl -XGET 'http://localhost:9200/_cluster/health?pretty=true'
- Some cluster health statistics will be displayed (name, shards available, etc.)
Sending the information to ES
If you recall, our Spring Integration flow ended in a Service Activator. This component specifies that @ELKClient.pushToELK() will be invoked with the payload of the received message (that is, the CSV quotation). Let's see the code used to perform the sending, which I think is pretty much self-explanatory:
import javax.annotation.PostConstruct;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.query.IndexQuery;
import org.springframework.stereotype.Component;

@Component
public class ELKClient {

    // This template is configured by default to connect to a local
    // ES node and is injected from the application context
    @Autowired
    private ElasticsearchTemplate template;

    // This will be the main index to work with, which would perhaps be
    // better off in a configuration file
    public final static String INDEX_NAME = "stockquotations";

    @Autowired
    private IStockQuotationConverter<String> stockConverter;

    @PostConstruct
    public void initIndex() {
        // Create the index if it does not exist yet
        if (!template.indexExists(INDEX_NAME)) {
            template.createIndex(INDEX_NAME);
        }
        // Tell ES how StockQuotation entities are to be mapped
        template.putMapping(StockQuotation.class);
        template.refresh(INDEX_NAME, true);
    }

    public String pushToELK(String quotationCSV) throws Exception {
        // Convert our CSV to an entity valid for ES
        StockQuotation quotation = stockConverter.converToStockQuotation(quotationCSV);
        // Create the query POJO targeting our index and carrying our payload
        IndexQuery query = new IndexQuery();
        query.setIndexName(INDEX_NAME);
        query.setObject(quotation);
        // index() stores the document and returns its id
        return template.index(query);
    }
}
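For completeness, the hand-off from the integration flow to this class could be wired with an annotated endpoint roughly like the sketch below. The endpoint class and the quotationsChannel channel name are assumptions on my side; the actual flow was configured in the previous post.

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.annotation.MessageEndpoint;
import org.springframework.integration.annotation.ServiceActivator;

@MessageEndpoint
public class QuotationPushEndpoint {

    @Autowired
    private ELKClient elkClient;

    // "quotationsChannel" is an assumed channel name; use the channel
    // defined in the integration flow from the previous post
    @ServiceActivator(inputChannel = "quotationsChannel")
    public String pushQuotation(String quotationCSV) throws Exception {
        return elkClient.pushToELK(quotationCSV);
    }
}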
Building the document to be indexed
Finally, we just need to create a simple bean with some annotations specifying which fields are to be sent, analyzed and stored, and in which format:
import java.util.Calendar;

import org.springframework.data.annotation.Id;
import org.springframework.data.elasticsearch.annotations.DateFormat;
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.Field;
import org.springframework.data.elasticsearch.annotations.FieldIndex;
import org.springframework.data.elasticsearch.annotations.FieldType;

@Document(indexName = ELKClient.INDEX_NAME)
public class StockQuotation {

    @Id
    private Long id;

    @Field(type = FieldType.String,
           store = true,
           index = FieldIndex.analyzed,
           searchAnalyzer = "standard",
           indexAnalyzer = "standard")
    private String stock;

    @Field(type = FieldType.Double,
           store = true,
           index = FieldIndex.analyzed,
           searchAnalyzer = "standard",
           indexAnalyzer = "standard")
    private Double value;

    @Field(type = FieldType.Date,
           format = DateFormat.custom,
           pattern = "dd-MM-yyyy HH:mm:ss",
           store = true,
           index = FieldIndex.analyzed,
           searchAnalyzer = "standard",
           indexAnalyzer = "standard")
    private Calendar timestamp;
}
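Since the fields and the date pattern are now defined, here is also a minimal sketch of what an implementation of the IStockQuotationConverter used by ELKClient could look like. The comma-separated layout stock,value,timestamp, the setters on StockQuotation and the class name are all assumptions of mine; the real converter belongs to the flow built in the previous post and may well differ.

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Calendar;

import org.springframework.stereotype.Component;

@Component
public class StockQuotationConverter implements IStockQuotationConverter<String> {

    // Assumed CSV layout: stock,value,timestamp (dd-MM-yyyy HH:mm:ss)
    public StockQuotation converToStockQuotation(String quotationCSV) {
        String[] fields = quotationCSV.split(",");
        StockQuotation quotation = new StockQuotation();
        // The id is left unset here; how it is assigned is not covered in this post
        quotation.setStock(fields[0]);
        quotation.setValue(Double.valueOf(fields[1]));
        try {
            Calendar timestamp = Calendar.getInstance();
            timestamp.setTime(new SimpleDateFormat("dd-MM-yyyy HH:mm:ss").parse(fields[2]));
            quotation.setTimestamp(timestamp);
        } catch (ParseException e) {
            throw new IllegalArgumentException("Unexpected timestamp format: " + fields[2], e);
        }
        return quotation;
    }
}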
Next Steps
Once we run the application, it will load some files containing historical data and query the spreadsheets stored in Google Docs. Each entry will be converted to a Java object and sent to Elasticsearch. In the next post we will see how we can build some nice graphs using Kibana (part of the ELK stack) and the data we have indexed, for example a chart of the historical data.
Moreover, we will see how we can deploy the microservice, the Elasticsearch node and Kibana to the cloud, so we can continuously gather stock quotations and perform better analysis.
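Finally, once some quotations have been indexed, a quick way to double-check them from Java (without waiting for Kibana) could be a query along these lines. This is just a sketch: the QuotationChecker class, the matchQuery on the stock field and the "GOOG" ticker are illustrative assumptions, and template is the same ElasticsearchTemplate injected above.

import java.util.List;

import org.elasticsearch.index.query.QueryBuilders;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.query.NativeSearchQueryBuilder;
import org.springframework.data.elasticsearch.core.query.SearchQuery;

public class QuotationChecker {

    // Retrieve the indexed quotations for a given stock, e.g. "GOOG"
    public static List<StockQuotation> findQuotations(ElasticsearchTemplate template, String stock) {
        SearchQuery searchQuery = new NativeSearchQueryBuilder()
                .withIndices(ELKClient.INDEX_NAME)
                .withQuery(QueryBuilders.matchQuery("stock", stock))
                .build();
        return template.queryForList(searchQuery, StockQuotation.class);
    }
}

Usage would be as simple as QuotationChecker.findQuotations(template, "GOOG").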