Game of Thrones Series 8: Real Time Sentiment Scoring with Apache Kafka, KSQL, Google’s Natural Language API and Python
Hi, Game of Thrones aficionados, welcome to GoT Series 8 and my tweet analysis! If you missed any of the prior season episodes, here are I, II and III. Finally, after almost two years, we have a new series and something interesting to write about! If you didn't watch Episode 1, do it before reading this post as it might contain spoilers!
Let's now start with a preview of the starting scene of Episode 2:
If you followed the previous season blog posts you may remember that I was using Kafka Connect to source data from Twitter, doing some transformations with KSQL and then landing the data in BigQuery using Connect again. On top of that, I was using Tableau to analyze the data.
The above infrastructure was working fine and I was able to provide insights like the sentiment per character and the "game of couples", analysing how a second character mentioned in the same tweet could change the overall sentiment.
The sentiment scoring was however done at visualization time, with the data extracted from BigQuery into Tableau at tweet level, scored with an external call to R, then aggregated and finally rendered.
As you might understand, the solution was far from optimal since:
- The Sentiment scoring was executed for every query sent to the database, so possibly multiple times per dashboard
- The data was extracted from the source at tweet level, rather than aggregated
The dashboard was indeed slow to render and the related memory consumption was huge (think about the data volumes being moved around). Furthermore, the Sentiment Scores lived only inside Tableau: if any other person, application or visualization tool wanted to use them, it had to recalculate them from scratch.
My question was then: where should I calculate Sentiment Scores in order to:
- Do it only once per tweet, not for every visualization
- Provide them to all the downstream applications
The answer is simple: I need to do it as close to the source as possible, in Apache Kafka!
Sentiment Scoring in Apache Kafka
There are a gazillion different ways to implement Sentiment Scoring in Kafka, so I chose a simple method based on Python and Google's Natural Language API.
Google Natural Language API
Google's NL API is a simple interface over a pre-trained Machine Learning model for language analysis and, as part of the service, it provides sentiment scoring.
The Python implementation is pretty simple: you just need to import the correct packages
from google.cloud import language_v1
from google.cloud.language_v1 import enums
Instantiate the LanguageServiceClient
client = language_v1.LanguageServiceClient()
Package the tweet string you want to be evaluated in a Python dictionary
content = "I'm Happy, #GoT is finally back!"
type_ = enums.Document.Type.PLAIN_TEXT
document = {'type': type_, 'content': content}
Then call the API and parse the response
response = client.analyze_sentiment(document)
sentiment = response.document_sentiment
print('Score: {}'.format(sentiment.score))
print('Magnitude: {}'.format(sentiment.magnitude))
The result is composed of a Sentiment Score and a Magnitude:
- Score indicates the emotion associated with the content as Positive (Value > 0) or Negative (Value < 0)
- Magnitude indicates the strength of that emotion, and is often proportional to the content length.
Please note that Google's Natural Language API is priced per document so the more content you send for scoring, the bigger your bill will be!
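For the Kafka flow later in this post it's handy to wrap those calls in a small helper function; the following is a minimal sketch (the function name score_text is my own, not part of Google's API) that returns the score and magnitude for a piece of text.
from google.cloud import language_v1
from google.cloud.language_v1 import enums

client = language_v1.LanguageServiceClient()

def score_text(content):
    # Package the text and send it to Google's Natural Language API
    document = {'type': enums.Document.Type.PLAIN_TEXT, 'content': content}
    sentiment = client.analyze_sentiment(document).document_sentiment
    return sentiment.score, sentiment.magnitude

score, magnitude = score_text("I'm Happy, #GoT is finally back!")
print('Score: {}, Magnitude: {}'.format(score, magnitude))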
Creating a Kafka Consumer/Producer in Python
Once we've fixed how to do the Sentiment Scoring, it's time to look at how we can extract a tweet from Kafka in Python. Unfortunately, there is no Kafka Streams implementation in Python at the moment, so I created an Avro Consumer/Producer based on the Confluent Python Client for Apache Kafka. The tweets are sourced with the jcustenborder/kafka-connect-twitter connector, so it's always handy to have its schema definition around when prototyping.
Avro Consumer
The implementation of an Avro Consumer is pretty simple: as always first importing the packages
from confluent_kafka import KafkaError
from confluent_kafka.avro import AvroConsumer
from confluent_kafka.avro.serializer import SerializerError
then instantiate the AvroConsumer, passing the list of brokers, the group.id (useful, as we'll see later, to add multiple consumers to the same topic) and the location of the Schema Registry service in schema.registry.url:
c = AvroConsumer({
'bootstrap.servers': 'mybroker,mybroker2',
'group.id': 'groupid',
'schema.registry.url': 'http://127.0.0.1:8081'})
The next step is to subscribe to a topic, in my case got_avro:
c.subscribe(['got_avro'])
and start polling the messages in a loop (note the check for None, since poll returns None when no message arrives within the timeout):
while True:
    try:
        msg = c.poll(10)
    except SerializerError as e:
        print("Message deserialization failed for {}: {}".format(msg, e))
        break
    if msg is None:
        continue
    print(msg.value())

c.close()
In my case, the message was returned as JSON and I could extract the tweet Text and Id using the json package:
import json

text = json.dumps(msg.value().get('TEXT'))
id = int(json.dumps(msg.value().get('ID')))
Avro Producer
The Avro Producer follows a similar set of steps, first importing the needed packages
from confluent_kafka import avro
from confluent_kafka.avro import AvroProducer
Then we define the Avro Key and Value Schemas; in my case I used the tweet Id as key and included the text in the value together with the sentiment score and magnitude:
key_schema_str = """
{
"namespace": "my.test",
"name": "value",
"type": "record",
"fields" : [
{
"name" : "id",
"type" : "long"
}
]
}
"""
value_schema_str = """
{
"namespace": "my.test",
"name": "key",
"type": "record",
"fields" : [
{
"name" : "id",
"type" : "long"
},
{
"name" : "text",
"type" : "string"
},
{
"name" : "sentimentscore",
"type" : "float"
},
{
"name" : "sentimentmagnitude",
"type" : "float"
}
]
}
"""
Then it's time to load the Key and Value schemas and build the related payloads
value_schema = avro.loads(value_schema_str)
key_schema = avro.loads(key_schema_str)
key = {"id": id}
value = {"id": id, "text": text,"sentimentscore": score ,"sentimentmagnitude": magnitude}
Then we create the instance of the AvroProducer, passing the broker(s), the Schema Registry URL and the Key and Value schemas as parameters:
avroProducer = AvroProducer({
'bootstrap.servers': 'mybroker,mybroker2',
'schema.registry.url': 'http://schema_registry_host:port'
}, default_key_schema=key_schema, default_value_schema=value_schema)
And finally we produce the event, defining as well the topic that will contain it, in my case got_avro_sentiment:
avroProducer.produce(topic='got_avro_sentiment', value=value, key=key)
avroProducer.flush()
The overall Producer/Consumer flow is, needless to say, very easy.
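As a rough sketch, reusing the score_text helper from earlier together with the consumer c and the avroProducer configured above, the loop looks something like this:
while True:
    msg = c.poll(10)
    if msg is None:
        continue
    # Extract the tweet text and id from the consumed Avro message
    text = json.dumps(msg.value().get('TEXT'))
    id = int(json.dumps(msg.value().get('ID')))
    # Score the tweet with Google's Natural Language API
    score, magnitude = score_text(text)
    # Publish the enriched event to the got_avro_sentiment topic
    avroProducer.produce(topic='got_avro_sentiment',
                         key={"id": id},
                         value={"id": id, "text": text,
                                "sentimentscore": score,
                                "sentimentmagnitude": magnitude})
    avroProducer.flush()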
And it works!
Parallel Sentiment Scoring
One thing I started noticing immediately, however, is that especially during tweeting peaks the scoring routine couldn't cope with the pace of the incoming tweets: a single Python Consumer/Producer was not enough. No problem! With Kafka, you can add multiple consumers to the same topic, right?
Of course you can! But you need to be careful.
Consumer Groups and Topic Partitions
You could create multiple consumers in different Consumer Groups (defined by the group.id parameter mentioned above), but by doing so you're telling Kafka that those consumers are completely independent, so Kafka will send each one a copy of every message. In our case, we'd simply end up scoring the same message N times, once for each consumer.
If, on the other hand, you create multiple consumers with the same consumer group, Kafka will treat them as a single consuming process and will try to share the load amongst them. However, it will do so only if the source topic is partitioned, and it will exclusively associate each consumer with one (or more) topic partitions! To read more about this check the Confluent documentation.
The second option is what we're looking for, having multiple threads reading from the same topic and splitting the tweet workload, but how do we split an existing topic into partitions? Here is where KSQL is handy! If you don't know about KSQL, read this post!
With KSQL we can define a new STREAM sourcing from an existing TOPIC or STREAM, along with the related number of partitions and the partition key (the key's hash will be used to deterministically assign a message to a partition). The code is the following:
CREATE STREAM <NEW_STREAM_NAME>
WITH (PARTITIONS=<NUMBER_PARTITIONS>)
AS SELECT <COLUMNS>
FROM <EXISTING_STREAM_NAME>
PARTITION BY <PARTITION_KEY>;
A few things to keep in mind:
- Choose the number of partitions carefully: more partitions for the same topic means more throughput, but at the cost of extra complexity.
- Choose the <PARTITION_KEY> carefully: if you have 10 partitions but only 3 distinct keys, then 7 partitions will not be used. If you have 10 distinct keys but 99% of the messages have just one key, you'll almost always end up using the same partition.
Yeah! We can now create one consumer per partition within the same Consumer Group!
Joining the Streams
As the outcome of our process so far we have:
- The native GOT_AVRO Stream coming from Kafka Connect, which we divided into 6 partitions using the tweet id as Key and named GOT_AVRO_PARTITIONED.
- A GOT_AVRO_SENTIMENT Stream that we created using Python and Google's Natural Language API, with id as Key.
The next logical step would be to join them, which is possible with KSQL by including the WITHIN clause specifying the temporal validity of the join. The statement is, as expected, the following:
SELECT A.ID, B.ID, A.TEXT, B.SENTIMENTSCORE, B.SENTIMENTMAGNITUDE
FROM GOT_AVRO_PARTITIONED A JOIN GOT_AVRO_SENTIMENT B
WITHIN 2 MINUTES
ON A.ID=B.ID;
Please note that I left a two-minute window to take into account some delay in the scoring process. And, as you would expect, I get... 0 results!
Reading the documentation more carefully gave me the answer: input data must be co-partitioned in order to ensure that records having the same key on both sides of the join are delivered to the same stream task.
Since the GOT_AVRO_PARTITIONED stream had 6 partitions and GOT_AVRO_SENTIMENT only one, the join wasn't working. So let's create a 6-partitioned version of GOT_AVRO_SENTIMENT.
CREATE STREAM GOT_AVRO_SENTIMENT_PARTITIONED
WITH (PARTITIONS=6) AS
SELECT ID,
TEXT,
SENTIMENTSCORE,
SENTIMENTMAGNITUDE
FROM GOT_AVRO_SENTIMENT
PARTITION BY ID;
Now the join actually works!
The next topics are the pushdown to Google's BigQuery and visualization using Google's Data Studio! But this, sadly, will be for another post! See you soon, and enjoy Game of Thrones!
Democratize Data Science with Oracle Analytics Cloud – Data Analysis and Machine Learning
Welcome back! In my previous post, I described how the democratization of Data Science is a hot topic in the analytical industry. We then explored how Oracle Analytics Cloud can act as an enabler for the transformation from Business Analyst to Data Scientist and covered the first steps in a Data Science project: problem definition, data connection & cleaning. In today's post, we'll cover the second part of the path: data transformation and enrichment, analysis, and machine learning model training and evaluation. Let's start!
Step #3: Transform & Enrich
In the previous post, we saw how to clean data: handling wrong values and outliers, performing aggregation and feature scaling, and dividing our dataset between train and test. Cleaning the data, however, is only the first step in data processing, and should be followed by what in Data Science is called Feature Engineering.
Feature Engineering is a fancy name for what in ETL terms we've always called data transformation: we take a set of input columns and apply transformation rules to create new columns. The aim of Feature Engineering is to create good predictors for the following machine learning model. Feature Engineering is a bit of a black art, and achieving excellent results requires a deep understanding of the ML model we intend to use. However, most of the basic transformations are actually driven by domain knowledge: we should create new columns that we think will improve the problem explanation. Let's see some examples (a minimal pandas sketch of a few of them follows the list):
- If we're planning to predict the Taxi Fare in New York between any two given points and we have source and destination, a good predictor for the fare probably would be the Euclidean distance between the two.
- If we have Day/Month/Year in separate columns, we may want to condense the information into a single column containing the Date
- In case our dataset contains location names (Cities, Regions, Countries) we may want to geo-tag those properly with ZIP codes or ISO Codes.
- If we have personal information like Credit Card details or a person's name, we may want to obfuscate it or extract features like the person's sex from the name (on this topic please check the blog post about GDPR and ML from Brendan Tierney).
- If we have continuous values like a person's age, do we think there is much difference between a 35, 36 or 37 year-old person? If not, we should think about binning them into the same category.
- Most Machine Learning models can't cope with categorical data, so we need to transform it into numbers (aka encoding). The standard process, when no ordering exists between the labels, is to create a new column for each value and mark the rows with 1/0 accordingly.
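As promised, here is a minimal pandas sketch of a few of the transformations above (the column names, such as pickup_lon or age, are purely illustrative and not from any real dataset):
import numpy as np
import pandas as pd

df = pd.DataFrame({
    'pickup_lon': [-73.99, -73.98], 'pickup_lat': [40.75, 40.76],
    'dropoff_lon': [-73.97, -73.95], 'dropoff_lat': [40.78, 40.77],
    'day': [14, 15], 'month': [4, 4], 'year': [2019, 2019],
    'age': [36, 64], 'payment_type': ['card', 'cash']})

# Euclidean distance between pickup and dropoff as a candidate fare predictor
df['distance'] = np.sqrt((df['pickup_lon'] - df['dropoff_lon']) ** 2 +
                         (df['pickup_lat'] - df['dropoff_lat']) ** 2)

# Condense Day/Month/Year into a single Date column
df['date'] = pd.to_datetime(df[['year', 'month', 'day']])

# Bin the continuous age into coarser categories
df['age_bin'] = pd.cut(df['age'], bins=[0, 25, 45, 65, 120],
                       labels=['young', 'adult', 'senior', 'elderly'])

# One-hot encode the categorical column: one 1/0 column per distinct value
df = pd.get_dummies(df, columns=['payment_type'])
print(df.head())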
Oracle Analytics Cloud again covers all the above cases with two tools. Euclidean distance, and generic data transformations like data condensation and binning, are standard steps of the Dataflow component: we only need to set the correct parameters or write simple SQL-like statements. Moreover, for binning, there are options to do it manually as well as automatically, providing equal-width and equal-height bins, therefore taking out the manual labour and the related bias.
On the other side, geo-tagging, data obfuscation and automatic feature extraction (like a person's sex based on the name) are things that with most other tools need to be resolved by hand, with complex SQL statements or dedicated Machine Learning efforts.
OAC again does a great job during the Data Preparation Recommendation step: after defining a data source, OAC will scan column names and values in order to find interesting features and propose some recommendations like geo-tagging, obfuscation, data splitting (e.g. Full Name split into First and Last Name) etc.
The accepted recommendations will be added to a Data Preparation Script that can be automatically applied when updating our dataset.
Step #4: Data Analysis
Data Analysis is declared as Step #4; however, since the Data Transformation and Enrichment phase we have started a circular flow of refinements aimed at optimising our predictive model's output.
The analysis is a crucial step for any Data Science project; in R or Python one of the first steps is to check the dataset's head(), which shows a first overview of the data.
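As a minimal pandas illustration of that first look (the file name below is just a placeholder):
import pandas as pd

df = pd.read_csv('my_dataset.csv')   # placeholder file name
print(df.head())        # first few rows: column names and sample values
print(df.describe())    # basic statistics for the numeric columns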
OAC does a similar job with the Metadata Overview, where for each column we can see the name, type and sample values, as well as the Attribute/Metric definition and associated aggregation, which we can then change later on.
Analysing Data is always a complex task and is where the expert eye of a data scientist makes the difference. OAC, however, can help with the excellent Explain feature. As described in the previous post, by right clicking on any column in the dataset and selecting Explain, OAC will start calculating statistics and metrics related to the column and display the findings in graphs that we can incorporate in the Data Visualization project.
Even more, there are additional tabs in the Explain window that provide Key Drivers, Segments and Anomalies.
- Key Drivers provides the statistically significant drivers for the column we are examining.
- Segments shows hidden groups in the dataset that can predict outcomes in the column.
- Anomalies performs outlier detection, showing the corner cases in our dataset.
Some Data Science projects could already end here. If the objective was to find insights, anomalies or particular segments in our dataset, Explain already provides that information in a clear and reusable format. We can add the necessary visualization to a Project and create a story with the Narrate option.
If, on the other hand, our goal is to build a predictive model, then it's time to tackle the next phase: Model Training & Evaluation.
Step #5: Train & Evaluate
Exciting: now it's time to tackle Machine Learning! The first thing to do is to understand what type of problem we are trying to solve. OAC allows us to solve problems in the following categories:
- Supervised, when we have a history of the problem's solution and we want to predict future outcomes; we can then identify two subcategories:
- Regression when we are trying to predict a continuous numerical value
- Classification when we are trying to assign every sample to a category out of two or more
- Unsupervised, when we don't have a history of the solution, but we ask the ML tool to help us understand the dataset.
- Clustering when we try to label our dataset in categories based on similarity.
OAC provides two different ways to apply Machine Learning on a dataset: On the Fly or via DataFlows. The On the Fly method is provided directly in the data visualization: when we create any chart, OAC provides the option to add Clusters, Outliers, Trend and Forecast Lines.
When adding one of the Analytics, we have some control over the behaviour of the predictive model. For the clustering image above we can decide which algorithm to implement (between K-means and Hierarchical Clustering), the number of clusters and the trellis scope in case we visualize multiple scatterplots, one for each value of a dimension.
Applying Machine Learning models on the fly is very useful and can provide some great insights; however, it suffers from a limitation: the columns analysed by the model are only the ones included in the visualization, and we have no control over other columns we may want to add to the model to increase prediction accuracy.
If we want to have granular control over columns, algorithm and parameters to use, OAC provides the Train Model step in the DataFlow component.
As described above OAC provides the option to solve Regression problems via Numeric Prediction, apply Binary or Multi-Classifier for Classification, and Clustering. There is also an option to train Custom Models which can be scripted by a Data Scientist, wrapped in XML tags and included in OAC (more about this topic in a later post).
Once we've selected the class of problem we're aiming to solve, OAC lets us select which Model to train between various prebuilt ones. After selecting the model, we need to identify which is the target column (for Supervised ML classes) and fix the parameters. Note the Train Partition Percent providing an automated way to split the dataset in train/test and Categorical/Numerical Column Imputation to handle the missing values. As part of this process, the encoding for categorical data is executed.
... But which Model should we use? What parameters should we pick? One lesson I learned from my experience with Machine Learning is that there is no golden model and parameter set that solves all problems. Data Scientists will try different models, compare them and tune parameters based on experimentation (aka trial and error).
OAC allows us to create an initial Dataflow, select a model, set the parameters, then save the Dataflow and model output. We can then restart by opening the Dataflow, changing the model or the parameters, and storing the artefacts under different names in order to compare them.
After creating one or more Models, it's time to evaluate them: in OAC we can select a Model and click on Inspect. In the Overview tab, Inspect shows the model description and properties. Far more interesting is the Quality tab, which provides a set of model scoring metrics based on the test dataset created following the Train Partition Percent parameter. In the case of a Numeric Prediction problem, the Quality tab will show quality metrics like the Root Mean Squared Error for each model. OAC will provide similar metrics no matter which ML algorithm you're implementing, making the analysis and comparison easy.
In the case of Classification, the Quality Tab will show the confusion matrix together with some pre-calculated metrics like Precision, Recall etc.
The model selection then becomes an optimization problem for the metric (or set of metrics) we picked during the problem definition (see TEP in the previous post). After trying several models, parameters and features, we'll then choose the model that minimizes the error (or maximises the accuracy) of our prediction.
Note: as part of the model training, it's very important to select which columns will be used for the prediction. A blind option is to use all columns but adding irrelevant columns isn't going to provide better results and, for big or wide (huge number of columns) datasets, it becomes computationally very expensive. As written before, the Explain function provides the list of columns that represent statistically significant predictors. The columns listed there should represent the basics of the model training.
OK, part II done: we saw how to perform Feature Engineering and Model Training & Evaluation. Check my next post for the final piece of the Data Science journey: predictions and final considerations!
Announcing The Kafka Pilot with Rittman Mead
Rittman Mead is today pleased to announce the launch of its Kafka Pilot service, focusing on engaging with companies to help fully assess the capabilities of Apache Kafka for event streaming use cases, with both a technical and business focus.
Our 30 day Kafka Pilot includes:
- A comprehensive assessment of your use cases for event streaming and Kafka
- A full assessment of connectors
- A transformation plan from your current state to a future state architecture
- Delivery of your first Kafka platform with end-to-end tests built in to assess success criteria
- An introduction to KSQL
- A fully comprehensive output document detailing outcomes of the pilot, future state architecture featuring Kafka, installation & configuration details based on the platform and a roadmap for building towards a production ready platform
Kafka plays a vital role for many organisations that are looking to process large volumes of data and information in real time. Many different digital applications and devices at the core of business operations capture events, and Kafka gives companies the chance to process these streams of events in a fault-tolerant and scalable way. It helps organisations de-couple their applications and devices, which can lead to fewer data silos. Kafka provides the chance to have quicker access to more data and is used by organisations such as Betfair, Uber, Netflix & Spotify.
Rittman Mead have written a number of blog posts on the uses of Kafka, ranging from using Kafka to analyse data in Scala and Spark to real-time Sailing Yacht performance. These can be read here.
To find out more information about our Kafka Pilot, please read our data sheet below 👇🏼
If you'd like to discuss how event streaming and Kafka may fit into your organisation, applications and data platform please contact info@rittmanmead.com
Spark Streaming and Kafka – Creating a New Kafka Connector
More Kafka and Spark, please!
Hello, world!
Having joined Rittman Mead more than 6 years ago, the time has come for my first blog post. Let me start by standing on the shoulders of blogging giants, revisiting Robin's old blog post Getting Started with Spark Streaming, Python, and Kafka.
The blog post was very popular, touching on the subjects of Big Data and Data Streaming. To put my own twist on it, I decided to:
- not use Twitter as my data source, because there surely must be other interesting data sources out there,
- use Scala, my favourite programming language, to see how different the experience is from using Python.
Why Scala?
Scala is admittedly more challenging to master than Python. However, because Scala compiles into Java bytecode, it can be used pretty much anywhere Java is being used. And Java is being used everywhere. Python is arguably even more widely used than Java; however, it remains a dynamically typed scripting language that is easy to write in but can be hard to debug.
Is there a case for using Scala instead of Python for the job? Both Spark and Kafka were written in Scala (and Java), hence they should get on like a house on fire, I thought. Well, we are about to find out.
My data source: OpenWeatherMap
When it comes to finding sample data sources for data analysis, the selection out there is amazing. At the time of this writing, Kaggle offers 14,470 freely available datasets, many of them in easy-to-digest formats like CSV and JSON. However, when it comes to real-time sample data streams, the selection is quite limited. Twitter is usually the go-to choice - easily accessible and well documented. Too bad I decided not to use Twitter as my source.
Another alternative is the Wikipedia Recent changes stream. Although in the stream schema there are a few values that would be interesting to analyse, overall this stream is more boring than it sounds - the text changes themselves are not included.
Fortunately, I came across the OpenWeatherMap real-time weather data website. They have a free API tier, limited to 1 request per second, which is quite enough for tracking changes in weather. Their different API schemas return plenty of numeric and textual data, all interesting for analysis. The APIs work in a very standard way: first you apply for an API key, then with the key you can query the API with a simple HTTP GET request (apply for your own API key instead of using the sample one - it is easy):
This request
https://samples.openweathermap.org/data/2.5/weather?q=London,uk&appid=b6907d289e10d714a6e88b30761fae22
gives the following result:
{
"coord": {"lon":-0.13,"lat":51.51},
"weather":[
{"id":300,"main":"Drizzle","description":"light intensity drizzle","icon":"09d"}
],
"base":"stations",
"main": {"temp":280.32,"pressure":1012,"humidity":81,"temp_min":279.15,"temp_max":281.15},
"visibility":10000,
"wind": {"speed":4.1,"deg":80},
"clouds": {"all":90},
"dt":1485789600,
"sys": {"type":1,"id":5091,"message":0.0103,"country":"GB","sunrise":1485762037,"sunset":1485794875},
"id":2643743,
"name":"London",
"cod":200
}
Getting data into Kafka - considering the options
There are several options for getting your data into a Kafka topic. If the data will be produced by your application, you should use the Kafka Producer Java API. You can also develop Kafka Producers in .Net (usually C#), C, C++, Python, Go. The Java API can be used by any programming language that compiles to Java bytecode, including Scala. Moreover, there are Scala wrappers for the Java API: skafka by Evolution Gaming and Scala Kafka Client by cakesolutions.
OpenWeatherMap is not my application and what I need is integration between its API and Kafka. I could cheat and implement a program that would consume OpenWeatherMap's records and produce records for Kafka. The right way of doing that, however, is by using Kafka Source connectors, for which there is an API: the Connect API. Unlike the Producers, which can be written in many programming languages, for the Connectors I could only find a Java API. I could not find any nice Scala wrappers for it. On the upside, Confluent's Connector Developer Guide is excellent, rich in detail though not quite a step-by-step cookbook.
However, before we decide to develop our own Kafka connector, we must check for existing connectors. The first place to go is Confluent Hub. There are quite a few connectors there, complete with installation instructions, ranging from connectors for particular environments like Salesforce, SAP, IRC, Twitter to ones integrating with databases like MS SQL, Cassandra. There is also a connector for HDFS and a generic JDBC connector. Is there one for HTTP integration? Looks like we are in luck: there is one! However, this connector turns out to be a Sink connector.
Ah, yes, I should have mentioned - there are two flavours of Kafka Connectors: the Kafka-inbound are called Source Connectors and the Kafka-outbound are Sink Connectors. And the HTTP connector in Confluent Hub is Sink only.
Googling for Kafka HTTP Source Connectors gives few interesting results. The best I could find was Pegerto's Kafka Connect HTTP Source Connector. Contrary to what the repository name suggests, the implementation is quite domain-specific, built for extracting stock prices from particular web sites, and has very little error handling. Searching Scaladex for 'Kafka connector' does yield quite a few results, but nothing for HTTP. However, there I found Agoda's nice and simple Source JDBC connector (though for a very old version of Kafka), written in Scala. (Do not use this connector for JDBC sources; instead use the one by Confluent.) I can use this as an example to implement my own.
Creating a custom Kafka Source Connector
The best place to start when implementing your own Source Connector is the Confluent Connector Development Guide. The guide uses JDBC as an example. Our source is an HTTP API, so early on we must establish whether our data source is partitioned, whether we need to manage offsets for it, and what the schema is going to look like.
Partitions
Is our data source partitioned? A partition is a division of source records that usually depends on the source medium. For example, if we are reading our data from CSV files, we can consider the different CSV files to be a natural partition of our source data. Another example of partitioning could be database tables. But in both cases the best partitioning approach depends on the data being gathered and its usage. In our case, there is only one API URL and we are only ever requesting current data. If we were to query weather data for different cities, that would be a very good partitioning - by city. Partitioning would allow us to parallelise the Connector data gathering - each partition would be processed by a separate task. To make my life easier, I am going to have only one partition.
Offsets
Offsets are for keeping track of the records already read and the records yet to be read. An example of that is reading the data from a file that is continuously being appended - there can be rows already inserted into a Kafka topic and we do not want to process them again, to avoid duplication. Why would that be a problem? Surely, when going through a source file row by row, we know which row we are looking at. Anything above the current row is processed, anything below - new records. Unfortunately, most of the time it is not as simple as that: first of all Kafka supports concurrency, meaning there can be more than one Task busy processing Source records. Another consideration is resilience - if a Kafka Task process fails, another process will be started up to continue the job. This can be an important consideration when developing a Kafka Source Connector.
Is it relevant for our HTTP API connector? We are only ever requesting current weather data. If our process fails, we may miss some time periods but we cannot recover them later on. Offset management is not required for our simple connector.
So that is Partitions and Offsets dealt with. Can we make our lives just a bit more difficult? Fortunately, we can. We can create a custom Schema and then parse the source data to populate a Schema-based Structure. But we will come to that later.
First let us establish the Framework for our Source Connector.
Source Connector - the Framework
The starting point for our Source Connector are two Java API classes: SourceConnector and SourceTask. We will put them into separate .scala source files, but they are shown here together:
import org.apache.kafka.connect.source.{SourceConnector, SourceTask}
class HttpSourceConnector extends SourceConnector {...}
class HttpSourceTask extends SourceTask {...}
These two classes will be the basis for our Source Connector implementation:
- HttpSourceConnector represents the Connector process management. Each Connector process will have only one SourceConnector instance.
- HttpSourceTask represents the Kafka task doing the actual data integration work. There can be one or many Tasks active for an active SourceConnector instance.
We will have some additional classes for config and for HTTP access.
But first let us look at each of the two classes in more detail.
SourceConnector class
SourceConnector is an abstract class that defines an interface that our HttpSourceConnector needs to adhere to. The first function we need to override is config:
private val configDef: ConfigDef =
new ConfigDef()
.define(HttpSourceConnectorConstants.HTTP_URL_CONFIG, Type.STRING, Importance.HIGH, "Web API Access URL")
.define(HttpSourceConnectorConstants.API_KEY_CONFIG, Type.STRING, Importance.HIGH, "Web API Access Key")
.define(HttpSourceConnectorConstants.API_PARAMS_CONFIG, Type.STRING, Importance.HIGH, "Web API additional config parameters")
.define(HttpSourceConnectorConstants.SERVICE_CONFIG, Type.STRING, Importance.HIGH, "Kafka Service name")
.define(HttpSourceConnectorConstants.TOPIC_CONFIG, Type.STRING, Importance.HIGH, "Kafka Topic name")
.define(HttpSourceConnectorConstants.POLL_INTERVAL_MS_CONFIG, Type.STRING, Importance.HIGH, "Polling interval in milliseconds")
.define(HttpSourceConnectorConstants.TASKS_MAX_CONFIG, Type.INT, Importance.HIGH, "Kafka Connector Max Tasks")
.define(HttpSourceConnectorConstants.CONNECTOR_CLASS, Type.STRING, Importance.HIGH, "Kafka Connector Class Name (full class path)")
override def config: ConfigDef = configDef
This is validation for all the required configuration parameters. We also provide a description for each configuration parameter, which will be shown in the missing-configuration error message.
HttpSourceConnectorConstants is an object where config parameter names are defined - these configuration parameters must be provided in the connector configuration file:
object HttpSourceConnectorConstants {
val HTTP_URL_CONFIG = "http.url"
val API_KEY_CONFIG = "http.api.key"
val API_PARAMS_CONFIG = "http.api.params"
val SERVICE_CONFIG = "service.name"
val TOPIC_CONFIG = "topic"
val TASKS_MAX_CONFIG = "tasks.max"
val CONNECTOR_CLASS = "connector.class"
val POLL_INTERVAL_MS_CONFIG = "poll.interval.ms"
val POLL_INTERVAL_MS_DEFAULT = "5000"
}
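For reference, a connector configuration file using these keys might look like the following sketch (the connector class, topic and parameter values are hypothetical placeholders, and the exact format expected for http.api.params depends on how HttpSourceConnectorConfig parses it):
# hypothetical example - adjust the values for your environment
name=weather-http-source
connector.class=com.example.connect.http.HttpSourceConnector
tasks.max=1
http.url=https://api.openweathermap.org/data/2.5/weather
http.api.key=<your OpenWeatherMap API key>
http.api.params=q=London,uk
service.name=openweathermap
topic=weather_topic
poll.interval.ms=5000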
Another simple function to be overridden is taskClass - for the SourceConnector class to know its corresponding SourceTask class.
override def taskClass(): Class[_ <: SourceTask] = classOf[HttpSourceTask]
The last two functions to be overridden here are start and stop. These are called upon the creation and termination of a SourceConnector instance (not a Task instance). JavaMap here is an alias for java.util.Map - a Java Map, which is not to be confused with the native Scala Map, which cannot be used here. (If you are a Python developer, a Map in Java/Scala is similar to the Python dictionary, but strongly typed.) The interface requires Java data structures, but that is fine - we can convert them from one to another. By far the biggest problem here is the assignment of the connectorConfig variable - we cannot have a functional-programming-friendly immutable value here. The variable is defined at the class level
private var connectorConfig: HttpSourceConnectorConfig = _
and is set in the start function and then referred to in the taskConfigs function further down. This does not look pretty in Scala. Hopefully somebody will write a Scala wrapper for this interface.
Because there is no logout/shutdown/sign-out required for the HTTP API, the stop function just writes a log message.
override def start(connectorProperties: JavaMap[String, String]): Unit = {
Try (new HttpSourceConnectorConfig(connectorProperties.asScala.toMap)) match {
case Success(cfg) => connectorConfig = cfg
case Failure(err) => connectorLogger.error(s"Could not start Kafka Source Connector ${this.getClass.getName} due to error in configuration.", new ConnectException(err))
}
}
override def stop(): Unit = {
connectorLogger.info(s"Stopping Kafka Source Connector ${this.getClass.getName}.")
}
HttpSourceConnectorConfig is a thin wrapper class for the configuration.
We are almost done here. The last function to be overridden is taskConfigs. This function is in charge of producing (potentially different) configurations for different Source Tasks. In our case, there is no reason for the Source Task configurations to differ. In fact, our HTTP API will benefit little from parallelism, so, to keep things simple, we can assume the number of tasks always to be 1.
override def taskConfigs(maxTasks: Int): JavaList[JavaMap[String, String]] = List(connectorConfig.connectorProperties.asJava).asJava
The name of the taskConfigs function was changed in Kafka version 2.1.0 - please consider that when using this code for older Kafka versions.
Source Task class
In a similar manner to the Source Connector class, we implement the Source Task abstract class. It is only slightly more complex than the Connector class.
Just like for the Connector, there are start and stop functions to be overridden for the Task.
Remember the taskConfigs function from above? This is where the task configuration ends up - it is passed to the Task's start function. Also, similarly to the Connector's start function, we parse the connection properties with HttpSourceTaskConfig, which is the same as HttpSourceConnectorConfig - the configuration for the Connector and the Task in our case is the same.
We also set up the HTTP service that we are going to use in the poll function - we create an instance of the WeatherHttpService class. (Please note that start is executed only once, upon the creation of the task, and not every time a record is polled from the data source.)
override def start(connectorProperties: JavaMap[String, String]): Unit = {
Try(new HttpSourceTaskConfig(connectorProperties.asScala.toMap)) match {
case Success(cfg) => taskConfig = cfg
case Failure(err) => taskLogger.error(s"Could not start Task ${this.getClass.getName} due to error in configuration.", new ConnectException(err))
}
val apiHttpUrl: String = taskConfig.getApiHttpUrl
val apiKey: String = taskConfig.getApiKey
val apiParams: Map[String, String] = taskConfig.getApiParams
val pollInterval: Long = taskConfig.getPollInterval
taskLogger.info(s"Setting up an HTTP service for ${apiHttpUrl}...")
Try( new WeatherHttpService(taskConfig.getTopic, taskConfig.getService, apiHttpUrl, apiKey, apiParams) ) match {
case Success(service) => sourceService = service
case Failure(error) => taskLogger.error(s"Could not establish an HTTP service to ${apiHttpUrl}")
throw error
}
taskLogger.info(s"Starting to fetch from ${apiHttpUrl} each ${pollInterval}ms...")
running = new JavaBoolean(true)
}
The Task also has the stop function. But, just like for the Connector, it does not do much, because there is no need to sign out from an HTTP API session.
Now let us see how we get the data from our HTTP API - by overriding the poll function.
The fetchRecords function uses the sourceService HTTP service initialised in the start function. sourceService's sourceRecords function requests data from the HTTP API.
override def poll(): JavaList[SourceRecord] = this.synchronized { if(running.get) fetchRecords else null }
private def fetchRecords: JavaList[SourceRecord] = {
taskLogger.debug("Polling new data...")
val pollInterval = taskConfig.getPollInterval
val startTime = System.currentTimeMillis
val fetchedRecords: Seq[SourceRecord] = Try(sourceService.sourceRecords) match {
case Success(records) => if(records.isEmpty) taskLogger.info(s"No data from ${taskConfig.getService}")
else taskLogger.info(s"Got ${records.size} results for ${taskConfig.getService}")
records
case Failure(error: Throwable) => taskLogger.error(s"Failed to fetch data for ${taskConfig.getService}: ", error)
Seq.empty[SourceRecord]
}
val endTime = System.currentTimeMillis
val elapsedTime = endTime - startTime
if(elapsedTime < pollInterval) Thread.sleep(pollInterval - elapsedTime)
fetchedRecords.asJava
}
Phew - that is the interface implementation done. Now for the fun part...
Requesting data from OpenWeatherMap's API
The fun part is rather straightforward. We use the scalaj.http library to issue a very simple HTTP request and get a response.
Our WeatherHttpService implementation will have two functions:
- httpServiceResponse, which formats the request and gets data from the API
- sourceRecords, which parses the Schema and wraps the result within the Kafka SourceRecord class.
Please note that error handling takes place in the fetchRecords function above.
override def sourceRecords: Seq[SourceRecord] = {
val weatherResult: HttpResponse[String] = httpServiceResponse
logger.info(s"Http return code: ${weatherResult.code}")
val record: Struct = schemaParser.output(weatherResult.body)
List(
new SourceRecord(
Map(HttpSourceConnectorConstants.SERVICE_CONFIG -> serviceName).asJava, // partition
Map("offset" -> "n/a").asJava, // offset
topic,
schemaParser.schema,
record
)
)
}
private def httpServiceResponse: HttpResponse[String] = {
@tailrec
def addRequestParam(accu: HttpRequest, paramsToAdd: List[(String, String)]): HttpRequest = paramsToAdd match {
case (paramKey,paramVal) :: rest => addRequestParam(accu.param(paramKey, paramVal), rest)
case Nil => accu
}
val baseRequest = Http(apiBaseUrl).param("APPID",apiKey)
val request = addRequestParam(baseRequest, apiParams.toList)
request.asString
}
Parsing the Schema
Now the last piece of the puzzle - our Schema parsing class.
The short version of it, which would do just fine, is just 2 lines of class (actually - object) body:
object StringSchemaParser extends KafkaSchemaParser[String, String] {
override val schema: Schema = Schema.STRING_SCHEMA
override def output(inputString: String) = inputString
}
Here we say we just want to use the pre-defined STRING_SCHEMA value as our schema definition, and pass inputString straight to the output, without any alteration.
Looks too easy, does it not? Schema parsing could be a big part of Source Connector implementation. Let us implement a proper schema parser. Make sure you read the Confluent Developer Guide first.
Our schema parser will be encapsulated into the WeatherSchemaParser object. KafkaSchemaParser is a trait with two type parameters - the inbound and outbound data types. This indicates that the Parser receives data in String format and the result is a Kafka Struct value.
object WeatherSchemaParser extends KafkaSchemaParser[String, Struct]
The first step is to create a schema value with the SchemaBuilder. Our schema is rather large, therefore I will skip most fields. The field names given are a reflection of the hierarchy structure in the source JSON. What we are aiming for is a flat, table-like structure - a likely Schema creation scenario.
For JSON parsing we will be using the Scala Circe library, which in turn is based on the Scala Cats library. (If you are a Python developer, you will see that Scala JSON parsing is a bit more involved (this might be an understatement), but, on the flip side, you can be sure about the result you are getting out of it.)
override val schema: Schema = SchemaBuilder.struct().name("weatherSchema")
.field("coord-lon", Schema.FLOAT64_SCHEMA)
.field("coord-lat", Schema.FLOAT64_SCHEMA)
.field("weather-id", Schema.FLOAT64_SCHEMA)
.field("weather-main", Schema.STRING_SCHEMA)
.field("weather-description", Schema.STRING_SCHEMA)
.field("weather-icon", Schema.STRING_SCHEMA)
// ...
.field("rain", Schema.FLOAT64_SCHEMA)
// ...
Next we define case classes, into which we will be parsing the JSON content.
case class Coord(lon: Double, lat: Double)
case class WeatherAtom(id: Double, main: String, description: String, icon: String)
That is easy enough. Please note that the case class attribute names match one-to-one with the attribute names in JSON. However, our Weather JSON schema is rather relaxed when it comes to attribute naming. You can have names like type and 3h, both of which are invalid value names in Scala. What do we do? We give the attributes valid Scala names and then implement a decoder:
case class Rain(threeHours: Double)
object Rain {
implicit val decoder: Decoder[Rain] = Decoder.instance { h =>
for {
threeHours <- h.get[Double]("3h")
} yield Rain(
threeHours
)
}
}
The Rain case class is rather short, with only one attribute. The corresponding JSON name was 3h. We map '3h' to the Scala attribute threeHours.
Not quite as simple as JSON parsing in Python, is it?
In the end, we assemble all sub-case classes into the WeatherSchema case class, representing the whole result JSON.
case class WeatherSchema(
coord: Coord,
weather: List[WeatherAtom],
base: String,
mainVal: Main,
visibility: Double,
wind: Wind,
clouds: Clouds,
dt: Double,
sys: Sys,
id: Double,
name: String,
cod: Double
)
Now, the parsing itself. (Drums, please!)
structInput here is the input JSON in String format. WeatherSchema is the case class we created above. The Circe decode function returns a Scala Either monad: the error on the Left(), the successful parsing result on the Right() - nice and tidy. And safe.
val weatherParsed: WeatherSchema = decode[WeatherSchema](structInput) match {
case Left(error) => {
logger.error(s"JSON parser error: ${error}")
emptyWeatherSchema
}
case Right(weather) => weather
}
Now that we have the WeatherSchema object, we can construct our Struct object that will become part of the SourceRecord returned by the sourceRecords function in the WeatherHttpService class. That in turn is called from the HttpSourceTask's poll function that is used to populate the Kafka topic.
val weatherStruct: Struct = new Struct(schema)
.put("coord-lon", weatherParsed.coord.lon)
.put("coord-lat", weatherParsed.coord.lat)
.put("weather-id", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).id)
.put("weather-main", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).main)
.put("weather-description", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).description)
.put("weather-icon", weatherParsed.weather.headOption.getOrElse(emptyWeatherAtom).icon)
// ...
Done!
Considering that Schema parsing in our simple example was optional, creating a Kafka Source Connector for us meant creating a Source Connector class, a Source Task class and a Source Service class.
Creating JAR(s)
JAR creation is described in Confluent's Connector Development Guide. The guide mentions two options: either all the library dependencies can be added to the target JAR file (a.k.a. an 'uber-JAR'), or the dependencies can be copied to the target folder. In that case they must all reside in the same folder, with no subfolder structure. For no particular reason, I went with the latter option.
The Developer Guide says it is important not to include the Kafka Connect API libraries there. (Instead they should be added to CLASSPATH.) Please note that for the latest Kafka versions it is advised not to add these custom JARs to CLASSPATH. Instead, we will add them to connectors' plugin.path. But that we will leave for another blog post.
Scala - was it worth using it?
Only if you are a big fan. The code I wrote is very Java-like and it might have been better to write it in Java. However, if somebody writes a Scala wrapper for the Connector interfaces, or, even better, if a Kafka Scala API is released, writing Connectors in Scala would be a very good choice.
Exciting News for Unify
Announcement: Unify for Free
We are excited to announce we are going to make Unify available for free. To get started, send an email to unify@rittmanmead.com; we will ask you to complete a short set of qualifying questions, then we can give you a demo and provide a product key and a link to download the latest version.
The free version of Unify will come with no support obligations or SLAs. On sign up, we will give you the option to join our Unify Slack channel, through which you can raise issues and ask for help.
If you’d like a supported version, we have built a special Expert Service Desk package for Unify which covers
- Unify support, how to, bugs and fixes
- Assistance with configuration issues for OBIEE or Tableau
- Assistance with user/role issues within OBIEE
- Ad-hoc support queries relating to OBIEE, Tableau and Unify
Beyond supporting Unify, the Expert Service Desk package can also be used to provide technical support and expert services for your entire BI and analytics platform, including:
- An agreed number of hours per month for technical support of Oracle and Tableau's BI and DI tools
- Advisory, strategic and roadmap planning for your platform
- Use of any other Rittman Mead accelerators including support for our other Open Source tools and DevOps Developer Toolkits
- Access to Rittman Mead’s On Demand Training
New Release: Unify 10.0.17
10.0.17 is the new version of Unify. This release doesn’t change how Unify looks and feels, but there are some new features and improvements under the hood.
The most important feature is that you can now get more data from OBIEE using fewer resources. While we are not encouraging you to download all your data from OBIEE to Tableau all the time (please use filters, aggregation etc.), we realise that downloading large datasets is sometimes required. With the new version, you can do it: hundreds of thousands of rows can be retrieved without causing your Unify host to grind to a halt.
The second feature we would like to highlight is that now you can use OBIEE instances configured with self-signed SSL certificates. Self-signed certificates are often used for internal systems, and now Unify supports such configurations.
The final notable change is that you can now run Unify Server as a Windows service. It wasn't impossible to run Unify Server at system startup before, but now it is even easier.
And, of course, we fixed some bugs and enhanced the logging. We would like to see our software function without bugs, but sometimes they just happen, and when they do, you will get a better explanation of what happened.
On most platforms, Unify Desktop should auto-update; if it doesn't, please download the new version manually.
Unify is 100% owned and maintained by Rittman Mead Consulting Ltd, and while this announcement makes it available for free, all copies must be used under an End User Licence Agreement (EULA) with Rittman Mead Consulting Ltd.