Category Archives: Rittman Mead
Data Integration Tips: ODI 12c – Who Changed My Table Names?
It’s Sunday night (well, technically Monday morning now), and we have just enough time for another one of my Data Integration Tips. This one, revolving around Oracle Data Integrator 12c, has been on my mind for some time now, so I figured we better just get it out there. Imagine this: you’ve upgraded from Oracle Data Integrator (ODI) 11g to ODI 12c and executed the first test Mapping. But hey…what happened to my C$ table names? And wait a minute, the I$ tables look a bit different as well! Let’s dive in and uncover the truth, shall we?
The Scenario
In the 11g version of Oracle Data Integrator, we could only load one single target table per mapping (or Interface, as they were called way back then). Now, in ODI 12c, we have the new flow-based mapping paradigm, allowing us to choose our sources, apply different components (joins, filters, pivot, aggregates, etc), and load as many targets as we like. Quite an upgrade, if you ask me! But with this redesign come some minor, albeit important, changes under the covers. The temporary tables used to store data that is loaded from a source, across the network, and into a target, known as C$ or Loading tables, are generated by the ODI Substitution API called from within a Knowledge Module step. The underlying code that creates the temp tables has changed to output a different format for the table names. What exactly does that mean for our C$ tables? And why do we care?
In the beginning, the C$ tables were named for the target table. If there were multiple source tables, the C$ name would be indexed – C$_0, C$_1, etc. For example, if your source to target mapping looked like this: F0010 —> ACCOUNT_MASTERS, then the loading table was named C$_0ACCOUNT_MASTERS. If there was a join between two tables executed on staging, then the second loading table would be named C$_1ACCOUNT_MASTERS.
So…what changed in ODI 12c? Let’s take a look at a few mapping examples.
In this mapping, the C$ table is now named after the source datastore. Instead of C$_0ACCOUNT_MASTERS, we have C$_0F0010. That can be an interesting challenge for data warehouse teams who rely on specific naming conventions for debugging, monitoring, etc. Ok, so let’s take a look at another example.
Ok, so normally I wouldn’t work with a Dataset component, but this is a look at the Mapping after an upgrade from ODI 11g. I could use the Convert to Flow feature, but as you’ll find out by reading on, it wouldn’t help with our temp table naming issues. In this example, the loading table is named C$_0DEFAULT. What’s this “default” business all about? Well, that is derived from the Dataset Component name. I must say, that’s much worse than just switching from the target table name to the source name. Yikes! Ok, one final test…
Oh boy. In this case, the resulting table is called C$_0FILTER. The name? It’s based on the Filter Component name. I’m sensing a pattern here. Basically the name of any component that is mapped to the target table, and in the physical design mapped to an access point, will be used to generate the C$ loading table name.
Digging a bit deeper into the Knowledge Modules, we find that the create loading object step of the KMs invokes the following method.
<%=odiRef.getTable("L", "COLL_NAME", "W")%>
The COLL_NAME refers to the loading table name, while the other options “L” & “W” refer to the format and source of the schema name that will be prefixed to the resulting table name. As mentioned previously, this method would return the target table name with the C$ prefix. Now, it returns the source table or component name for the specific source dataset that is being extracted and loaded into the target work schema. Here’s another way to show these differences in naming conventions:
This image is based on a specific use case in which the Oracle Data Integrator customer was using the C$ tables in debugging. As you can see, the naming really doesn’t lend itself to understanding which target the C$ table was created to load.
Here’s the Tip…
Now that we understand what drives the C$ table name, we can work around the issue. While the use case above is somewhat unique to folks who have upgraded from Oracle Data Integrator 11g, the use of components rather than tables in the naming of temporary objects can be quite confusing. We can easily change the output of <%=odiRef.getTable("L", "COLL_NAME", "W")%> by changing the component alias, or name, within the mapping. That’s an easy enough task for just a few mappings, but when you’ve upgraded hundreds, or even thousands, of mappings to ODI 12c, you’re in for some serious manual labor. Unless, of course, you dive into some Groovy script and the ODI SDK.
In the code snippet below, we first find the mapping we’re interested in editing. Then we work our way through the different components that may exist on the mapping and need a name change. This code was written specifically to handle Dataset, Filter, and source Datastore components only. Any additional components would need to be added to the list or, better yet, a different approach written in Groovy to find the last component before the final target Datastore. Hmm, next DI Tip?
Mapping mapToEdit = mapfinder.findByName(folder, mapName)
try {
    // fix filter name
    filterComp = mapToEdit.findComponent("Filter")           // find the filter named Filter
    if (filterComp != null) {
        filterComp.setName(targName)
        out.println(mapName + " filter renamed.")
    } else {
        // fix dataset name
        datasetComp = mapToEdit.findComponent("Default_DS")  // find the dataset named Default_DS
        if (datasetComp != null) {
            datasetComp.setName(targName)
            out.println(mapName + " dataset renamed.")
        } else {
            // fix source datastore name
            sources = mapToEdit.getSources()
            for (sourceComp in sources) {                     // keeps the last source component found
                datastoreComp = sourceComp
            }
            datastoreComp.setName(targName)
            out.println(mapName + " source datastore renamed.")
        }
    }
} catch (MapComponentException e) {
    out.println e.toString()
}
tme.persist(mapToEdit)
The “targName” variable in this snippet is set to the target datastore name concatenated with the target data server name. That’s a specific use case, but the key takeaway is that the component name cannot be set to the target datastore name exactly. There must be a slight difference, since components cannot have the exact same name within a single mapping. Another caveat: if we had multiple target tables, this approach may not work out so well. But, again, coming from ODI 11g that’s a non-issue. This code can be run against a project, project folder, or even individual mappings, making it an easy way to change thousands of objects in seconds. Man, I love Groovy and the ODI SDK!
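For context, the snippet assumes a few objects have already been set up: the entity manager, a finder for mappings, the folder, and the targName value. A rough sketch of that setup follows; the package and finder names are my reading of the ODI 12c SDK (with the odiInstance binding provided by the ODI Studio Groovy console), so verify them against your version, and the project, folder, and mapping names are just placeholders.

import oracle.odi.domain.mapping.Mapping
import oracle.odi.domain.mapping.finder.IMappingFinder
import oracle.odi.domain.project.OdiFolder
import oracle.odi.domain.project.finder.IOdiFolderFinder

// odiInstance is bound automatically when running from the ODI Studio Groovy console
def tme = odiInstance.getTransactionalEntityManager()

// Placeholder folder name and project code
def folders = ((IOdiFolderFinder) tme.getFinder(OdiFolder.class)).findByName("Staging Mappings", "EDW_PROJECT")
def folder  = folders.iterator().next()

// Finder used by the snippet above
def mapfinder = (IMappingFinder) tme.getFinder(Mapping.class)

// Placeholder mapping name, and the component name built from the
// target datastore plus target data server, as described above
def mapName  = "LOAD_ACCOUNT_MASTERS"
def targName = "ACCOUNT_MASTERS_EDW_DS"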
That seems to solve our naming issue by modifying our loading table name into something more meaningful than C$_0FILTER. Groovy has come to the rescue and allowed us to batch change mappings in an instant. It seems we’ve completed this Data Integration Tip successfully.
But Wait, There’s More
I did mention earlier that the I$ table had an issue as well. Oh brother. The I$, or integration, table stores the result of the mapping logic as a dataset just prior to loading into the final target. There is only a slight change to the ODI Substitution API method used in generating the integration table name, but it’s just enough to bother processes built around the naming conventions.
In the past, the integration table name was based on the target table alias. But now in the latest version of ODI, the I$ table name is built based on the target datastore resource name. Again, this could potentially be problematic for those customers interested in using a different logical name for a physical target table. Something more readable, perhaps. Or maybe removing redundant characters that exist in all tables. Either way, we have to deal with a slight change in the code.
In researching a way to modify the way the I$ table is created, I came across an interesting issue. The call to odiRef.getTableName("INT_SHORT_NAME") is supposed to return the integration table name alone, without any schema prefix attached to it. So in the previous example, when our target table was named ACCOUNT_MASTERS, the resulting table should have been I$_ACCOUNT_MASTERS. The original call to odiRef.getTable("L", "INT_NAME", "W") actually returns ODISTAGE.I$_JDE_ACCOUNT_MASTERS, based on the resource name of the datastore object and prepending the work schema name. Using the INT_SHORT_NAME, we expected a different result. But instead, the code generated a name like this: %INT_PRFJDE_ACCOUNT_MASTERS. This must be a bug in ODI 12.2.1, but I haven’t found it yet in My Oracle Support.
To work around this whole mess, we just searched for the work schema name and removed it from the table name, while replacing the unnecessary characters as well. All of this was completed using Java within the Knowledge Module steps. In the “Define Java Variable” step, a custom step added to set up Java variables in the KM, the function below was included. It lets you perform a substring while specifying length as a parameter. Found and repurposed from here.
String mySubString(String myString, int start, int length) {
    return myString.substring(start, Math.min(start + length, myString.length()));
}
Then, in the “Set Java Variable” task, again custom, the code below was added to create the integration table name:
ITABLENAME = "<%=odiRef.getTable("L", "INT_NAME", "W")%>".replace("_JDE_","_");
ITABLENAME = mySubString(ITABLENAME, ITABLENAME.indexOf(".") + 1, 26);  // strip the work schema prefix and cap the length
The end result was a temporary integration table named I$_ACCOUNT_MASTERS, just as we were planning.
So there you have it, another Data Integration Tip shared with the ODI public. Hopefully this, or one of the other many DITips shared by Rittman Mead, can help you solve one of your challenging problems with Oracle Data Integration solutions. Please let me know of any Data Integration Tips you may need solved by commenting below or emailing me at michael.rainey@rittmanmead.com. And if you need a bit more help with your Data Integration setup, feel free to reach out and the experts at Rittman Mead will be glad to help!
Oracle’s New Data Visualization Desktop
A recent addition to the Oracle lineup of visualization tools is the Oracle Data Visualization Desktop. Described by Oracle as a “single user desktop application that provides Oracle Data Visualization functionality to business users,” ODVD is an easy-to-install data visualization tool for Windows 7, 8 or 10 that packs some very powerful features.
I recently had a chance to sit down and explore ODVD and wanted to share some of my first impressions.
At its core, ODVD is a stand-alone version of Oracle’s Data Visualization Cloud Service (DVCS). If you are at all familiar with Visual Analyzer, you will feel right at home.
Installation was a breeze on my Windows 10 VM; it took only about 5 minutes and required no additional software or plugins for the standard VA functionality.
After installation, launching ODVD is as easy as clicking on the desktop icon like any other stand-alone application.
After the ODV startup, I was presented with a home screen which contains a search field for finding projects, a list of user folders and a main window to select individual visualizations that have been created.
Clicking on the hamburger icon in the top left corner brings up a menu where I can choose to start Visual Analyzer with the last data source selected, select new Data Sources or create a new VA Project.
I chose to create a new VA project and selected the sample data from Oracle (the sample data is an optional install chosen during the ODVD install process). Creating a dashboard was a fairly straightforward process. Following Visual Analyzer’s functionality of dragging and dropping columns, I was able to put together a simple sales and profit dashboard in a few minutes.
While creating my dashboard, I noticed that Oracle has included some new visualization types. You can now choose Scatter (Cat.), Stacked Scatter (Cat.), Donut or Sunburst visualizations.
One other feature that Oracle added to ODV is the ability to insert images onto the dashboards. You can choose to upload your own image or link to a URL to pull images from the web.
I uploaded an image and changed the canvas layout to freeform, which allowed me to move the image anywhere on the dashboard. By adjusting the transparency it is possible to have the image underlay the entire dashboard and still be able to see the visualizations. This example is pretty extreme, and in a real world scenario, caution should be used as to not obstruct the visualizations.
Next I decided to try to connect to my Oracle 12c sample database to pull in some new data to work with. Selecting “Create New Datasource” from the menu prompted me with three options: create from a file, from an existing app or from a database.
Clicking on the “From Database” option, I was presented with a connection screen.
On this screen I discovered one of the most impressive things about ODVD. Clicking on “Database Type” reveals a dropdown menu from which you can choose a variety of database types, including Spark, Hive and Mongo DB, among others.
That’s awesome.
Because I already had 12c DB installed, I selected the Oracle Database Type and entered all my connection information.
Once a connection to the database is made, it shows up in the available connections list. Clicking on my sample database brought up a list of available schemas to choose from. In this case, I chose the sample HR schema which then brings up a list of tables available to add as data sources for visualizations.
I chose to add EMPLOYEES, JOBS and LOCATIONS and then started a new VA project. The HR tables now show up in the list of available data sources.
I selected EMPLOYEES and JOBS and, within seconds, was able to create a simple table showing a list of employee names, their job titles, salaries and commission percentages.
As you can see, adding new data sources is quick and easy and allows users to explore their data and create meaningful visualizations from that data in a very short amount of time.
Another feature is the Advanced Analytics portion of Oracle Data Visualization Desktop. This feature, which uses R, gives users the ability to do things like highlight outliers or show trend lines with a click of a button.
This feature does require an optional install located within the ODV application folder. The install process proved once again to be very quick and easy and completed in about 5 minutes.
After the installation was complete, I created a new VA project. Choosing the sample data provided by Oracle for ODV, I created a quick scatter chart and then, by right clicking anywhere on the visualization, clicked “Add Outliers.”
As you can see, outliers and non-outliers are easily distinguishable by the color key that ODVD assigned automatically.
Next, I wanted to see if I could change some of the colors in my visualization. ODVD allows you to do this under the visualization menu.
As with OBIEE, ODVD supports entering specific hex values as well as selecting from pre-made color palettes.
Using the same right-click functionality that I used for adding outliers, I was able to additionally add a polynomial trend line to show gains and losses.
Next, I decided to see if I could export this data and import it into Excel. Choosing export from the visualization menu, I was able to easily export the data as a .CSV and upload it into Excel.
Overall, Oracle Data Visualization Desktop is a very impressive new addition to the DVCS lineup. The ability to collect data from multiple sources, its native adaptors for a variety of popular databases, and the ability to manipulate visualizations to convey the data in creative ways make it a strong contender against Tableau and Wave. It requires no remote server infrastructure and is a solid business solution for users who want Oracle Data Visualization functionality in a small and easily accessible package.
I feel as though I have just scratched the surface of everything this tool can do. Check back for future blogs and articles as we at Rittman Mead continue to explore the possibilities of ODV. The future of data visualization may be closer than we think.
If you would like more information about Visual Analyzer or the Oracle Cloud Service, see this blog post by Mark Rittman.
If you would like to watch the official Tech Demo of ODV, you can find it here.
Rittman Mead also offers in depth professional training courses for OBIEE 12c and Visual Analyzer.
Experiments with Elastic’s Graph Tool
Elastic announced their Graph tool at ElastiCON 2016 (see presentation here). It’s part of the forthcoming X-Pack which bundles Graph along with other helper tools such as Shield and Marvel. Graph itself is two things: an extension of Elasticsearch’s capabilities, enabling the user to explore how items indexed in Elasticsearch are related, and a plugin for Kibana that acts as an optional front-end for this new functionality.
You can find a good introduction to Graph and the purpose and theory behind it in the documentation here. The installation of the components themselves is simple and documented here.
First Graph
To use Graph, you just point it at your existing data in Elasticsearch. The first data set I’m going to explore is one of the standard ones that everyone uses: Twitter. I’m streaming it in through Logstash (via Kafka for flexibility), but if you wanted you could ship it in via JDBC from any RDBMS, or from HDFS too.
See an important note at the end of this article about the slice of data within it, because it affects how the relationships visualised here should be viewed.
On launching Kibana’s Graph plugin (http://localhost:5601/app/graph) I choose the index (note that index patterns, e.g. when partitioning by date, are not supported yet), and the field in the data that I want to use as my vertices. A point to note here: “vertices” are usually called “nodes” in Graph terminology, but since Elasticsearch already uses “nodes” as part of its infrastructure topology terminology, they had to pick a different term.
In the search box, I can enter the search term for which I want to see the related ‘vertices’.
Sounds baffling? It is, kinda – right up until you run it (hit enter from the search box or click the magnifying glass search icon) and see what happens:
Here we’re seeing the hashtags used in tweets that mention Kibana. The “connections” (Elastic term) or “edges” (general Graph term) show which vertices (nodes) are related, and the width indicates the strength of that relationship (based on Elasticsearch’s significant terms and scoring algorithm). For more details, see the “Behind the Scenes” section towards the end of this article.
We can add in a second set of vertices by running a second search (“Elasticsearch”) – the results for these are, in effect, appended to the existing ones:
Add Links
Since we’ve pulled back an additional set of vertices, it could be that there’s overlap between these and the first set (you’d kind of expect it, Elasticsearch and Kibana being related). To visualise this, use the Add Links button.
Note how the graph redraws itself with additional connections:
Blinked and you missed it? Use the Undo button to step back, and Redo button to re-apply.
Grouping Vertices
If you look closely at the graph you’ll see that Elasticsearch, ElasticSearch, and elasticsearch are all there as separate vertices. This is because I’m using a non-analyzed index field, so the strings are treated literally, case included. In this specific example, we’d probably re-run the graph using the analysed version of the field, which following the same two searches as above gives this:
But, sticking with our non-analysed example, we can use it to demonstrate Graph’s ability to group multiple terms together into a single vertex. Switch to Advanced Mode:
and then select the three vertices and click the group option
Now all three, and their connections, are as one:
Whilst the above analysed/non-analysed difference gave me an excuse to show the group function (can you tell I’ve done many-a-failed-live-demo? ;-) ), I’m now going to switch over to a graph built on the analysed version of the hashtag field, as we saw briefly above:
Tidying up the Graph – Delete and Blacklist
There are a few stragglers on the Graph that are making it less easy to comprehend. We can temporarily remove them, or even blacklist them from appearing again in this session:
Expand Selection
One of the points of Graph analysis is visualising the relationships in your data in a way that standard relational methods may not lend themselves to so easily. We can now start to explore this further, by digging into the Graph that we’ve got so far. This process, along with the add links seen above, is often called “spidering”. By selecting the elasticsearch node and clicking on Expand selection we can see additional (by default, five) vertices related to this one:
So we see that kafka is related to Elasticsearch (in the view of the twitterati, at least), and let’s expand that Kafka vertex too:
By clicking the Expand selection button again for the same vertex we get further results added:
We can select one node (e.g. realtime) and, using Add Link, see additional relationships:
But there are many nodes, and we want to see all the relationships between them. So, switch to Advanced Mode, select All…
…and Add Link again:
Knob Twiddling
Let’s start with a blank canvas, in basic mode, showing hashtags related to … me (@rmoff)!
But, surely I do more than talk about OBIEE and ODI? Like, Elasticsearch? Let’s relax the Graph selection criteria, under Settings:
and run the search again (on top of the existing results):
There’s more results … but I know how much I tweet and it feels like I’m only seeing a part of the picture. By switching over to Advanced Mode, we can refine how many results each field returns:
I reset the workspace (undo to blank, or just reload), and run the search again, this time with a greater number of hashtag field values shown, and with the same relaxed search settings as shown above:
At this point I’m into “fiddling” territory, twiddling with the ‘Number of terms’, ‘Significant’ and ‘Certainty’ knobs to see how the results vary. You can read more about the algorithm behind the Significance setting here, and more about the Graph API here. The certainty setting is simply “The min number of documents that are required as evidence before introducing a related term”, so by lowering it we see more links, but potentially with more “noise” too, of terms that aren’t really related.
An important point to note here is the dataset that I’m using is already biased because of the terms I’m including in my twitter feed search, therefore I’d expect to see this skew in the results below. See the section at the end of this article for more details of the dataset.
- 50 terms, significant unticked, certainty 1 (as above)
- 50 terms, significant ticked, certainty 1
- 50 terms, significant ticked, certainty 3
- 20 terms, significant ticked, certainty 1
- 20 terms, significant unticked, certainty 1
Based on the above, “Significant” seems to reduce the number of relationships discovered, but increase the level of weight shown in those that are there.
Adding Additional Vertex Fields
So we’ve seen a basic overview of how to generate Graphs, expand selections, and add relationships to those additional selections. Let’s look now at how multiple fields can be added to a Graph.
Starting with a blank workspace, I switched to Advanced Mode and added two fields from my twitter data:
- user.screen_name
- in_reply_to_screen_name
Note that you can customise the colour and icon of different fields.
Under Options I’ve left Significant Links enabled, and set Certainty to 1.
Let’s see who’s been interacting about the recent E4 summit:
Whilst it looks like Mark Rittman is the centre of everything, this is actually highlighting a skew in the source dataset – which includes everything Mark tweets but not all tweets about E4. See the section at the end of this article for more details of the dataset.
The lower cluster is Mark as the addressee of tweets (i.e. he is the in_reply_to_screen_name), whilst the upper cluster is tweets that Mark has sent addressing others (i.e. he is the user.screen_name).
If we click on Add Links a couple of times we can see that there’s other connections here – for example, Mark replies to Stewart (@stewartbryson), who Christian Berg (@Nephentur) talks to, who in turn talks to Mark.
This being twitter and the age of narcissism, I’ll click on my vertex and click Expand Selection to see the people who in turn talk to me:
And by using Add Link see how they relate to those already shown in the Graph:
Viewing Associated Records
Within Graph there’s the option to view the data associated with one or more vertices. We do this by selecting a vertex and clicking on View Example Docs (in Elasticsearch parlance, a document is akin to a ‘row’ as traditional RDBMS folk would know it). From here select the field – for twitter the text field has the contents of the tweet:
Adding Even More Vertex Fields
So, we’ve got a bit of a picture of who talks to whom, but can we see what they’re talking about? We could use the text field shown above to see the contents of tweets but that’s down in the weeds of individual tweets – we want to step back a notch and get a summarised view.
First I add in the hashtag field:
And then deselect the two username fields. This is so that I can expand existing vertices, and instead of showing related hashtags and users, instead I only expand it to show hashtags – and not additional users.
Now I select Mark as the originator of a tweet, and Expand Selection followed by Add Links on all vertices until I get this:
The number of values selected is key in getting a representative Graph. Above I used a value of 10. Compare that to instead running the same process but with 50. Under Options I’ve left Significant Links enabled, and set Certainty to 1:
One interesting point we can see from this is that the user “itknowingness” in the cluster on the left seems to use all the hashtags, but doesn’t interact with anyone. From the Graph this is easy to see, and it’s a great example of Graph giving you the answer to a question you didn’t necessarily know you had, one that would need a very specific query to answer through a traditional RDBMS. Looking at the source data via Kibana’s Discover panel shows that it is indeed a bot auto-retweeting anything and everything:
Building a Graph from Scratch
Now that we’ve seen all the salient functions, let’s start with a blank canvas, and see where we get.
The settings I’m using are:
- Significant Links unticked
- Certainty = 1
- Field entities.hashtags.text.analyzed, max terms = 10
- Field user.screen_name, max terms = 10
- Initial search term rmoff
Then I click on markrittman and Expand Selection, the same for mrainey, and also for the two hashtags e4 and hadoop:
Within the clusters, let’s see what links exist. With no vertices selected I click on Add Links (which seems to be the same as selecting all vertices and doing the same). With each click additional links are added, all related to the hadoop/bigdata area:
I’m interested now in the E4 region of the Graph, and the vertices related to Mark Rittman. Clicking on his vertex and clicking “Select Neighbours” does exactly that:
Now I’m more interested in digging into the terms (hashtags) that are related to those people, so I deselect the user.screen_name field, and then Expand Selection and Add Links again.
Note the width of the connections – a strong relationship between Mark Rittman, “Hadoop” and “SQL”, which is presumably from the tweets around the presentation he did recently on the subject of… SQL on Hadoop. Other terms, including Hive and Impala, are also related, as you’d expect.
Graphing Tweet Text Contents
By making sure that the tweet text is available as an analysed field we can produce a Graph based on the ‘tokens’ within the tweet, rather than the literal 140 characters. Whilst hashtags are there deliberately to help with the classification and grouping of tweets (so that other people can follow conversations on the same subject) there are two reasons why you’d want to look at the tweet text too:
- Not everyone uses hashtags
- Not all relationships are as boolean as using a hashtag or not; maybe a general discussion in an area re-uses the same words, which overall forms a relationship between the terms.
Here I’m going back to the default settings:
- Significant Links ticked
- Certainty = 3
And returning two fields – hashtag and tweet text
- Field entities.hashtags.text.analyzed, max terms = 20
- Field text.analyzed, max terms = 50
- Initial search term kafka
I then tidy it up a bit:
- Joining the same/near-same text and hashtags, such as the “kafkasummit” hashtag and the same text term. If you think about the contents of a tweet, hashtags are part of the text; therefore, there’s going to be a lot of this duplication.
- Blacklisted text terms that are URL snippets. Here I’m using the Example Docs function to check the context of the term in the whole text field
I also blacklisted common words (“the”, “of”, etc), and foreign ones (how British…).
Behind the Scenes
The Kibana Graph plugin is just a front-end for the Graph extension in Elasticsearch. It’s useful (and fun!) for exploring data, but in practice you’d be making direct REST API calls into Elasticsearch to retrieve a list of vertices and connections and relative weights for use in your application. You can see details of this from the Settings page and Last Request option.
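If you wanted to issue the same call from your own code rather than Kibana, a rough Groovy sketch using the JDK’s built-in HTTP client (Java 11+) might look like the following. The host, index name, and file name are placeholders, and the endpoint path (/<index>/_graph/explore in current Elasticsearch releases) has moved around between versions, so check the docs for the release you’re running.

import java.net.http.HttpClient
import java.net.http.HttpRequest
import java.net.http.HttpResponse

// The request payload: for example, the JSON shown by Kibana's Last Request
// option (like the one below), saved to a local file
def body = new File('graph-request.json').text

// Placeholder host and index name; verify the explore endpoint path for your version
def request = HttpRequest.newBuilder(URI.create('http://localhost:9200/twitter/_graph/explore'))
        .header('Content-Type', 'application/json')
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build()

def response = HttpClient.newHttpClient().send(request, HttpResponse.BodyHandlers.ofString())
println response.body()   // JSON of vertices, connections, and weights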
Looking at an example (the one used in the first example on this article), the request is pretty simple:
{
  "query": {
    "query_string": {
      "default_field": "_all",
      "query": "kibana"
    }
  },
  "controls": {
    "use_significance": true,
    "sample_size": 2000,
    "timeout": 5000
  },
  "connections": {
    "vertices": [
      { "field": "entities.hashtags.text.analyzed", "size": 5, "min_doc_count": 3 }
    ]
  },
  "vertices": [
    { "field": "entities.hashtags.text.analyzed", "size": 5, "min_doc_count": 3 }
  ]
}
and the response not too complex either, just long.
{
  "took": 201,
  "timed_out": false,
  "failures": [],
  "vertices": [
    { "field": "entities.hashtags.text.analyzed", "term": "logstash", "weight": 0.1374238061561338, "depth": 0 },
    { "field": "entities.hashtags.text.analyzed", "term": "timelion", "weight": 0.12719678206002483, "depth": 0 },
    { "field": "entities.hashtags.text.analyzed", "term": "elasticsearch", "weight": 0.11733085557405047, "depth": 0 },
    { "field": "entities.hashtags.text.analyzed", "term": "osdc", "weight": 0.00759026383038536, "depth": 1 },
    { "field": "entities.hashtags.text.analyzed", "term": "letsencrypt", "weight": 0.006869972953128271, "depth": 1 },
    { "field": "entities.hashtags.text.analyzed", "term": "kibana", "weight": 0.6699955212823048, "depth": 0 },
    { "field": "entities.hashtags.text.analyzed", "term": "filebeat", "weight": 0.004700657388257993, "depth": 1 },
    { "field": "entities.hashtags.text.analyzed", "term": "elk", "weight": 0.09717015256984456, "depth": 0 },
    { "field": "entities.hashtags.text.analyzed", "term": "justsayin", "weight": 0.005724977460940227, "depth": 1 },
    { "field": "entities.hashtags.text.analyzed", "term": "elasticsearch5", "weight": 0.004700657388257993, "depth": 1 }
  ],
  "connections": [
    { "source": 0, "target": 3, "weight": 0.00759026383038536, "doc_count": 26 },
    { "source": 7, "target": 5, "weight": 0.02004197094823259, "doc_count": 26 },
    { "source": 5, "target": 4, "weight": 0.006869972953128271, "doc_count": 6 },
    { "source": 5, "target": 0, "weight": 0.018289612748107368, "doc_count": 48 },
    { "source": 0, "target": 6, "weight": 0.004700657388257993, "doc_count": 11 },
    { "source": 7, "target": 0, "weight": 0.0038135609650491726, "doc_count": 10 },
    { "source": 0, "target": 5, "weight": 0.0052711254217388415, "doc_count": 48 },
    { "source": 0, "target": 9, "weight": 0.004700657388257993, "doc_count": 11 },
    { "source": 5, "target": 1, "weight": 0.033204869273453314, "doc_count": 29 },
    { "source": 1, "target": 5, "weight": 0.04492364819068228, "doc_count": 29 },
    { "source": 5, "target": 8, "weight": 0.005724977460940227, "doc_count": 5 },
    { "source": 2, "target": 5, "weight": 0.00015519515214322833, "doc_count": 80 },
    { "source": 5, "target": 7, "weight": 0.022734810798933344, "doc_count": 26 },
    { "source": 7, "target": 2, "weight": 0.0006823241440183544, "doc_count": 13 }
  ]
}
Note how the connections are described using the relative (zero-based) instance number of the vertices. You can also see that the width of a connection is based on the weight (calculated from the significant terms algorithm), rather than document count. Compare the connection width of timelion/kibana (vertices 1 and 5 respectively), with a weighting of 0.033 (kibana -> timelion) and 0.045 (timelion -> kibana) but overlapping document count of 29:
with elasticsearch -> kibana that has an overlapping document count of 80 but only a weight of 0.0001.
Elasticsearch’s documentation describes the significant terms algorithm thus, using the example of suggesting “H5N1” when users search for “bird flu” in text:
In all these cases the terms being selected are not simply the most popular terms in a set. They are the terms that have undergone a significant change in popularity measured between a foreground and background set. If the term “H5N1” only exists in 5 documents in a 10 million document index and yet is found in 4 of the 100 documents that make up a user’s search results that is significant and probably very relevant to their search. 5/10,000,000 vs 4/100 is a big swing in frequency.
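As a rough back-of-the-envelope reading of that example (this is just the frequency ratio, not Elastic’s exact scoring formula):

\[
\text{background frequency} = \frac{5}{10\,000\,000} = 5\times10^{-7}, \qquad
\text{foreground frequency} = \frac{4}{100} = 4\times10^{-2}, \qquad
\frac{4/100}{5/10\,000\,000} = 80\,000
\]

In other words, “H5N1” is roughly 80,000 times more frequent in the user’s results (the foreground set) than in the index as a whole (the background set), which is why it scores as significant even though its absolute count is tiny.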
So from this, we can roughly say that Graph is looking at the number of documents in which timelion is mentioned as a proportion of the whole dataset, and then at the number of documents in which the hashtag Kibana exists and timelion is also mentioned. Since the former is a plugin of the latter, the close relationship would be expected. You can use Kibana to explore the significant terms concept further; for example, taking the same ‘seed’ as the original Graph query above (the term kibana) gives a similar set of results as the Graph:
More information about the scoring can be found here, which includes the fact that the scoring is, in part, based on TF-IDF (Term Frequency-Inverse Document Frequency).
Licensing
Graph requires a licence – see here for details.
Conclusion
This tool is a great way to dip one’s toe into the waters of Graph analysis and visualisation. It’s another approach to consider in the data discovery phase of your analytics work, when you don’t even know the questions that you’ve got for the data in front of you. Your data can remain in Elasticsearch in the same format it’s always been, and the Graph function just runs on top of it.
I’ll not profess to be a Graph theory expert, so can’t pass much comment on the theoretical rigour of the results and techniques seen. One thing that struck me was that there’s no (apparent) way to manually influence the weight of connections and vertices; for example, based on the number of followers someone has on Twitter, you might want to consider them more (or less) relevant when determining relationships.
For a well-informed view on Graph theory and Social Network Analysis (SNA), see Jordan Meyer’s presentation here (and associated R code), as well as Mark Rittman’s presentation from BIWA this year.
Footnote: The Twitter Dataset
The dataset I’m using is a live stream from Twitter, via Logstash and Kafka, searching for a set of terms related to me and the field I work in. Therefore, there’s going to be a bunch of relationships missing (if I’ve not included the relevant term in my tweet search), and relationships over-stated (because as a proportion of all the records the terms I’ve selected will dominate).
An interesting use of Graph (or Elasticsearch’s significant terms aggregation in general) could be to identify all the relevant terms that I should be including in my twitter search, by sampling an ‘unpolluted’ feed for relationships. For example, if I’m interested in capturing Kafka tweets, perhaps I should also be capturing those related to Samza, Spark, and so on.
Contemplating Upgrading to OBIEE 12c?
Where You Are Now
OBIEE 12c has been out for some time, and it seems like most folks are delaying upgrading to OBIEE 12c until the very last minute. Or at least until Oracle decides to put out another major version change of OBIEE, which is understandable. You’ve already spent time and money and devoted hundreds of resource hours to system monitoring, maintenance, testing, and development. Maybe you’ve invested in staff training to try to maximize your ROI in your existing OBIEE purchase. And now, after all this time and effort, you and your team have finally gotten things just right. Your BI engine is humming along, user adoption and stickiness are up, and you don’t have a lot of dead objects clogging up the Web Catalog. Your report hacks and work-arounds have been worked and reworked to become sustainable and maintainable business solutions. Everyone is getting what they want.
Sure, this scenario is part fantasy, but it doesn’t mean that as a BI team lead or member, you’re not always working toward this end. It would be nice to think that the people designing the tools with which we do this work understood the daily challenges and processes we must undergo in order to maintain the precarious homeostasis of our BI ecosystems. That’s where Rittman Mead comes in. If you’re considering upgrading to OBIEE 12c, or are even curious, keep reading. We’re here to help.
So Why Upgrade
Let’s get right down to it. Shoot over here and here to check out what our very own Mark Rittman had to say about the good, the bad, and the ugly of 12c. Our Silvia Rauton did a piece on lots of the nuts and bolts of 12c’s new front-end features. They’re all worth a read. Upgrading to OBIEE 12c offers many exciting new features that shouldn’t be ignored.
How Rittman Mead Can Help
We understand what it is to be presented with so many project challenges. Do you really want to risk the potential perils and pitfalls presented by upgrading to OBIEE 12c? We work both harder and smarter to make this stuff look good. And we get the most out of strategy and delivery via a number of in-house tools designed to keep your OBIEE deployment in tip top shape.
Maybe you want to make sure all your Catalog and RPD content gets ported over without issue? Instead of spending hours on testing every dashboard, report, and other catalog content post-migration, we’ve got the Automated Regression Testing package in our tool belt. We deploy this series of proprietary scripts and dashboards to ensure that everything will work just the way it was, if not better, from one version to the next.
Maybe you’d like to make sure your system will fire on all cylinders or you’d like to proactively monitor your OBIEE implementation. For that we’ve got the Performance Analytics Dashboards, built on the open source ELK stack to give you live, active monitoring of critical BI system stats and the underlying database and OS.
On top of these tools, we’ve got the strategies and processes in place to not only guarantee the success of your upgrade, but to ensure that you and your team remain active and involved in the process.
What to Expect
You might be wondering what kinds of issues you can expect to experience when upgrading to OBIEE 12c (which is to say, nothing’s going to break, right?). Are you going to have to go through a big training curve? Does upgrading to OBIEE 12c mean you’re going to experience considerable resource downtime as your team, or even an outside company, manages this process? To answer these questions, I’m reminded of a quote from the movie Fight Club: “Choose your level of involvement.”
While we always prefer to work alongside your BI or IT team to facilitate the upgrade process, we also know that resource time is valuable and that your crew can’t stop what they’re doing until things wrap up. We often find that the more clients are engaged with the process, however, the easier the hand-off is because clients better understand best practices, and IT and BI teams are more empowered for the future.
Learning More about OBIEE 12c
But if you’re like many organizations, maybe you have to stay more hands off and get training after the upgrade is complete. Check out the link here to look over the agenda of our OBIEE 12c Bootcamp training course. Like our hugely popular 11g course, this program is five days of back-to-front instruction taught via a selection of seminars and hands-on labs, designed to impart most everything your team will need to know to continue or begin their successful BI practice.
What we often find is that, in addition to being a thorough and informative course, the Bootcamp is a great way to bring together teams or team members, often dispersed among different offices, under one roof to gain common understanding about how each person plays an important role as a member of the BI process. Whether they handle the ETL, data modeling, or report development, everyone can benefit from what often evolves from a training session into some impromptu team building.
Feel Empowered
If you’re still on the fence about whether or not to upgrade, as I said before, you’re not alone. There are lots of things you need to consider, and rightfully so. You might be thinking, “What does this mean for extra work on the plates of my resources? How can I ensure the success of my project? Is it worth it to do it now, or should I wait for the next release?” Whatever you may be mulling over, we’ve been there, know how to answer the questions, and have some neat tools in our utility belt to move the process along. In the end, I hope to have presented you with some bits to aid you in making a decision about upgrading to OBIEE 12c, or at least the impetus to start thinking about it.
If you’d like any more information or just want to talk more about the ins and outs of what an upgrade might entail, send over an email or give us a call.
Data Integration Tips: Oracle Data Integrator 12c Passwords
Hey everyone, it’s Sunday night and we have just enough time for another Data Integration Tip from Rittman Mead. This one has originated from many years of Oracle Data Integrator experience – and several lost passwords. Let me start first by stating there is never any blame placed when a password is lost, forgotten, or just never stored away in a safe place. It happens more often than you might wish to think! Unfortunately, there is no “Forgot password?” link in ODI 12c, which is why I wanted to share my approach to password recovery for these situations.
The Challenge: Lost Password
There are typically two passwords used in Oracle Data Integrator 12c that are forgotten and difficult to recover:
- The Work Repository password, created during the setup of the ODI repositories.
- The SUPERVISOR user password.
Often there will be more than one ODI user with Supervisor privileges, allowing the SUPERVISOR user account password to be reset and making everyone’s life a bit easier. With that, I’ll focus on the Work Repository password and a specific use case I ran into just recently. This approach will work for both lost password instances and I have used it for each in the past.
Now yes, there is a feature that allows us to change the Work Repository password from within ODI Studio. But (assuming you do have the ability to edit the Work Repository object) as you can see in the image, you also need to know the “current password”. Therein lies the problem.
The Scenario
Ok, here we go. The situation I ran into was related to an ODI 11g to 12c upgrade. During the upgrade, we cloned the master and work repositories and set them up on a new database instance in order to lessen the impact on the current 11g repositories. To make this work, a few modifications are required after cloning and before the ODI upgrade assistant can be run. Find more details on these steps in Brian Sauer’s post Upgrade to ODI 12c: Repository and Standalone Agent.
- Modify the Work repository connection from within the Master repository. The cloned Master repository is still pointed to the original ODI 11g Work repository and the connection must be updated.
- Update the SYSTEM.SCHEMA_VERSION_REGISTRY$ table to add an entry for the cloned ODI repository in the new database instance.
- Detach the Work repository from the original Master repository.
Easy enough. The upgrade assistant completed successfully and everything was working great during testing, until we attempted to open the Work repository object in ODI.
“Work repository is already attached to another master repository”
Uh-oh. It seems the last bullet point above was skipped. No worries, we have a simple solution to this problem. We can detach the Work repository from the Master, then attach it once again. Interestingly enough, the action of detaching the repository cleans up the metadata and allows the Work repository to be added to the cloned master with no problem.
Detaching is easy. Just confirm that you want to remove the Work repository and poof, it’s gone. It’s the re-attaching where we run into an issue…our lost password issue (you knew I was going to bring that up, didn’t you?). Adding a Work repository requires a JDBC connection to a new or existing repository. In this case, we choose the existing repository in our cloned database. The same one we just detached from the Master. Just make sure that you choose to keep the repository contents or you’ll have a much bigger challenge ahead of you.
But then, out of nowhere, we’re prompted for the Work Repository password.
Hmm…well, we set the ODI 11g repository up in 2011. Jim, who installed it for us, doesn’t work here any longer. Hmm is right!
Here’s the Tip
Before we go any further, full disclosure – this is most likely not considered a supported action in the eyes of Oracle, so beware. Also, I haven’t attempted to use the ODI SDK and a Groovy script to update a password, so that might be the way to go if you’re concerned about this being a hack. But, desperate times require desperate measures, as they say.
In order to “recover” a password for the Work repository, we must actually change it behind the scenes in the repository tables. There’s a great deal of metadata we can access via the repository schema, and modifying this data directly is neither typical nor recommended, but sometimes it’s necessary.
Oracle Support has a Knowledge Base document, Oracle Data Integrator 11g and 12c Repository Description (Doc ID 1903225.1), which provides a nice data dictionary for the repositories. Looking at the ODI 12.2.1 version of the repository definition, we find that the table SNP_LOC_REPW in the Work repository stores the value for the repository password in the column REP_PASSW. Now, the password must be encoded to match the repository and environment, so it cannot simply be added to the table in plain text.
Encoding a password is something that Oracle Data Integrator developers and admins have been doing for years, most often when setting up a Standalone agent. As a part of the agent installation, there is a script called encode.sh (or encode.bat for Windows) that will accept a plain text password as a parameter and output the encoded string. Brilliant! Let’s try it out.
Browse to the ODI agent domain home and drill into the bin directory. From there, we can execute the encode command. A quick look at the script shows us the expected input parameters.
The instance name is actually the Agent name. Ensure the agent is running and fire off the script:
[oracle@ODIGettingStarted bin]$ ./encode.sh -INSTANCE=OGG_ODI_AGENT
2016-04-24 22:00:50.791 TRACE JRFPlatformUtil:Unable to obtain JRF server platform. Probably because you are in JSE mode where oracle.jrf.ServerPlatformSupportFactory is not available which is expected.
2016-04-24 22:00:56.855 NOTIFICATION New data source: [OGG_ODI_REPO/*******@jdbc:oracle:thin:@//localhost:1521/ORCL]
2016-04-24 22:01:01.931 NOTIFICATION Created OdiInstance instance id=1
Enter password to encode:
Now you can enter a password to encode, hit return and boom! Here’s your encoded string.
Enter password to encode: ejjYhIeqYp4xBWNUooF31Q==
Let’s take the entire string and write a quick update statement for the work repository SNP_LOC_REPW table. Even though I know there is only one work repository, I still use a where clause to ensure I’m updating the correct row.
update SNP_LOC_REPW set REP_PASSW = 'ejjYhIeqYp4xBWNUooF31Q==' where REP_NAME = 'OGG_ODI_WREP';
Commit the transaction and Bob’s your uncle! Now we can continue on with adding the Work repository through ODI Studio. Just enter the password used in the encode.sh command and you’re in!
As I mentioned earlier, this same approach can be used to update the SUPERVISOR user password, or really any ODI user password (if they are stored in the repository). In this case, the use of encode.sh is the same, but this time we update the SNP_USER table in the Master repository. The column PASS stores the encoded password for each user. Just remember to change the password everywhere that the user is set to access ODI (agents, etc).
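If you have several repositories or users to fix, the same update can be scripted. Here’s a rough Groovy sketch using groovy.sql.Sql, with placeholder JDBC URL, schema credentials, table, and encoded string; swap in SNP_USER and the PASS column for a Master repository user password.

@GrabConfig(systemClassLoader = true)
@Grab('com.oracle.database.jdbc:ojdbc8:19.3.0.0')   // or put the Oracle JDBC driver on the classpath instead
import groovy.sql.Sql

// Connect as the work repository schema owner (placeholder connection details)
def sql = Sql.newInstance('jdbc:oracle:thin:@//localhost:1521/ORCL',
                          'OGG_ODI_REPO', 'repo_password',
                          'oracle.jdbc.OracleDriver')

def encoded = 'ejjYhIeqYp4xBWNUooF31Q=='   // the output of encode.sh

// Same statement as above; the connection auto-commits by default
sql.executeUpdate('update SNP_LOC_REPW set REP_PASSW = ? where REP_NAME = ?',
                  [encoded, 'OGG_ODI_WREP'])

sql.close()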
So there you have it. A quick, simple way to “recover” a lost ODI password. Just be sure that this information doesn’t fall into the wrong hands. Lock down your ODI agent file directory to only those administrators who require access. Same goes for the repository schemas. And finally, use this approach in only the most dire situation of a completely lost password. Thanks for reading and look here if you want more DI Tips. Enjoy your week!