Category Archives: Rittman Mead

Why DevOps Matters for Enterprise BI

Why are people frustrated with their existing enterprise BI tools, such as OBIEE? My view is that it costs too much to produce relevant content. I think some of this is down to the tools themselves, and some of it is down to process.

Starting with the tools: they are not “bad” tools, but the traditional licensing model can be expensive in today’s market, and traditional development methods are time-consuming and hence expensive. The vendor’s response is to move to the cloud and to highlight the cost savings that can be made by having a managed platform. Oracle Analytics Cloud (OAC) is essentially OBIEE installed on Oracle’s servers in Oracle’s data centres, with Oracle providing your system administration, coupled with the ability to flex your licensing on a monthly or annual basis.

Cloud does give organisations the potential for more agility. Provisioning servers can no longer hold up the start of a project, and if a system needs more capacity, then more CPUs or nodes can be added. This latter case is a bit murky, though, given the cost implications and the option of trying to resolve performance issues through query efficiency on the database.

I don’t think this solves the problem. Tools that provide reports and dashboards are becoming more commoditised; up-and-coming vendors and platform providers are offering the service for a fraction of the cost of the traditional vendors. They may lack some enterprise features, such as open security models; however, this is an area that platform providers are continually improving. Over the last 10 years, Oracle's focus for OBIEE has been more on integration than innovation. Oracle DV was a significant change; however, there is a danger that Oracle has lost the first-mover advantage to tools such as Tableau and QlikView. Additionally, some critical features like lineage, software development lifecycle support, versioning and process automation are not built into OBIEE, and worse still, the legacy design and architecture of the product often hinder them.

So this brings me back round to process. Defining “good” processes, and having tools to support them, is one of the best ways to keep your BI tools relevant to the business, by reducing the friction in generating content.

What is a “good” process? Put simply, one that reduces the time between the identification of a business need and the realisation of it, with zero impact on existing components of the system. A “good” process should also provide visibility of any design, development and testing, plus documentation of changes, typically including lineage in a modern BI system. Continuous integration is the Holy Grail.

This is why DevOps matters. Automated migration across environments, regression tests, automatically generated documentation in the form of lineage, native support for version control systems, supported merge processes and, ideally, a scripting interface or API to automate repetitive tasks, such as changing the data type of a group of fields system-wide, can dramatically reduce the gap between idea and realisation.
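To make that last point concrete, below is a minimal sketch of what such scripted automation could look like, assuming the repository metadata has been exported to an XML format (OBIEE's biserverxmlgen utility can produce an XML representation of the RPD, for instance). The element and attribute names used here ("Column", "name", "dataType") are illustrative assumptions rather than the real schema; the point is that a system-wide change becomes a repeatable, reviewable script instead of hours of manual clicking.

import re
import xml.etree.ElementTree as ET

# Illustrative sketch only: the element and attribute names below are
# assumptions for the purpose of the example, not the actual RPD schema.
def retype_columns(in_file, out_file, name_pattern, new_type):
    """Change the data type of every column whose name matches a pattern."""
    tree = ET.parse(in_file)
    changed = 0
    for column in tree.iter("Column"):
        if re.search(name_pattern, column.get("name", "")):
            column.set("dataType", new_type)
            changed += 1
    tree.write(out_file)
    return changed

# Example: change every column ending in _AMT to DOUBLE, system-wide.
print(retype_columns("repository.xml", "repository_new.xml", r"_AMT$", "DOUBLE"))

Checked into version control and followed by an automated regression test run, a change like this can go from idea to tested release in minutes rather than days.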

So, I would recommend that when looking at your enterprise BI system, you not only consider the vendor, location and features but also focus on the potential for process optimisation and automation. Automation could be something that the vendor builds into the tool, or you may need to use accelerators or software provided by a third party. Over the next few weeks, we will be publishing some examples and case studies of how our BI and DI Developer Toolkits have helped clients and enabled them to automate some or all of the BI software development cycle, reducing the time to release new features and increasing the confidence and robustness of the system.

Real-time Sailing Yacht Performance – stepping back a bit (Part 1.1)

A slight change to the planned article: at the end of my analysis in Part 1, I discovered I was missing a number of key messages. It turned out that not all the SeaTalk messages from the integrated instruments were being translated to an NMEA format and therefore were not being sent wirelessly from the AIS hub. I didn't really want to introduce another source of data directly from the instruments, as it would involve hard-wiring the instruments to the laptop and then translating a different message format (SeaTalk). Instead, I decided to spend some money on hardware (any excuse for new toys) and purchased a SeaTalk to NMEA converter from Digital Yacht (discounted at the London boat show, I'm glad to say).

This article is about the installation of that hardware and the result (hence Part 1.1), so it is not our usual type of blog. You never know, though, it may be of interest to somebody out there, and it is a real-life data issue! Don't worry, it will be short and more of an insight into yacht wiring than anything else.

The next blog will be very much back on track, looking at Kafka in the architecture.

The existing wiring

The following image shows the existing setup: what's behind the panels and how it links to the instrument architecture documented in Part 1. No laughing at the wiring spaghetti - I stripped out half a tonne of cable last year, so this is an improvement. Most of the technology lives near the chart table, and we have access to the navigation lights, cabin lighting, battery sensors and DSC VHF. The top-left image also shows a spare GPS (Garmin) and, far left, an EPIRB.

Approach

I wanted to make sure I wasn't breaking anything by adding the new hardware, so I followed the approach we use as software engineers: check before, during and after any changes, which narrows down the point at which errors are introduced. To help with this, I created a little Python script that reads the messages and reports the unique message types, the total number of messages and the number of messages in error.

 
import json
import sys

# Function to test whether a message is valid
# (imports above retained for the validation code from Part 1)
def is_message_valid(orig_line):
    # ........ [same validation code as described in Part 1]
    ...

# main body
f = open("/Development/step_1.log", "r")

valid_messages = 0
invalid_messages = 0
total_messages = 0
my_list = []

# process the file line by line
for line in f:

    orig_line = line

    if is_message_valid(orig_line):
        valid_messages = valid_messages + 1

        # collect the sentence identifier, e.g. $GPRMC
        if orig_line[0:1] == "$":
            my_list.append(orig_line[0:6])

    else:
        invalid_messages = invalid_messages + 1

    total_messages = total_messages + 1

# de-duplicate and print the unique message types
for message_type in set(my_list):
    print(message_type)

# High-tech report
print("Summary")
print("#######")
print("valid messages -> ", valid_messages)
print("invalid messages -> ", invalid_messages)
print("total messages -> ", total_messages)

f.close()
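As an aside, the validation itself is standard NMEA 0183 checksum checking: XOR together every character between the leading $ (or !) and the *, then compare the result with the two hex digits that follow. A minimal sketch of such a function is shown below; it illustrates the technique and is not necessarily the exact code from Part 1.

def is_message_valid(orig_line):
    # Recompute the NMEA 0183 checksum: XOR all characters between the
    # leading '$' (or '!') and the '*', then compare the result with the
    # two hex digits that follow the '*'.
    line = orig_line.strip()
    if not line or line[0] not in "$!" or "*" not in line:
        return False
    payload, _, checksum = line[1:].rpartition("*")
    calculated = 0
    for char in payload:
        calculated ^= ord(char)
    try:
        return calculated == int(checksum, 16)
    except ValueError:
        return False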

For each of the steps, I used nc to write the output to a log file and then used the Python script to analyse the log. I logged about ten minutes of messages at each step, although I have to confess to shortening the last test as I was getting very cold.

nc -l 192.168.1.1 2000 > step_x.log

While spooling the messages, I artificially generated some speed data by spinning the wheel of the speedo. The image below shows the speed sensor and where it normally lives (far-right image). The water comes in when you take out the sensor, as it temporarily leaves a rather large hole in the bottom of the boat; don't be alarmed by the little puddle you can see.

Step 1:

I spool and analyse about ten minutes of data without making any changes to the existing setup.

The existing setup takes data directly from the back of a Raymarine instrument, seen below, and is linked into the AIS hub.

Results:

 
$AITXT -> AIS (from AIS hub)

$GPRMC -> GPS (from AIS hub)
$GPGGA
$GPGLL
$GPGBS

$IIDBT -> Depth sensor
$IIMTW -> Sea temperature sensor
$IIMWV -> Wind speed 

Summary
#######
valid messages ->  2129
invalid messages ->  298
total messages ->  2427
12% error

Step 2:

I disconnect the NMEA interface between the AIS hub and the integrated instruments; in the diagram above, that means disconnecting all four NMEA wires from the back of the instrument.

I observe that the navigation display of the integrated instruments no longer shows any GPS information (this is expected, as the only GPS messages I have are coming from the AIS hub).

Results:


$AITXT -> AIS (from AIS hub)

$GPRMC -> GPS (from AIS hub)
$GPGGA
$GPGLL
$GPGBS

No $II messages, as expected

Summary
#######
valid messages ->  3639
invalid messages ->  232
total messages ->  3871
6% error

Step 3:

I wire in the new hardware, connecting both the NMEA in and out directly to the course computer.

Results:


$AITXT -> AIS (from AIS hub)

$GPGBS -> GPS messages
$GPGGA
$GPGLL
$GPRMC

$IIMTW -> Sea temperature sensor
$IIMWV -> Wind speed 
$IIVHW -> Heading & Speed
$IIRSA -> Rudder Angle
$IIHDG -> Heading
$IIVLW -> Distance travelled

Summary
#######
valid messages ->  1661
invalid messages ->  121
total messages ->  1782
6.8% error

Conclusion:

I get all the messages I am after (for now), and the hardware seems to be working.

Now to put all the panels back in place!

In the next article, I will get back to technology and the use of Kafka in the architecture.
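As a small taste of that, here is a sketch of how the incoming NMEA sentences could be pushed straight from the AIS hub into a Kafka topic using the kafka-python client. The broker address and topic name are placeholder assumptions; the hub address matches the nc examples above.

import socket

from kafka import KafkaProducer

# Placeholder assumptions: a local broker and a topic called "nmea".
producer = KafkaProducer(bootstrap_servers="localhost:9092")

# Read from the AIS hub, as with nc earlier in the article.
with socket.create_connection(("192.168.1.1", 2000)) as sock:
    buffer = b""
    while True:
        data = sock.recv(1024)
        if not data:
            break  # the hub closed the connection
        buffer += data
        # Publish one complete NMEA sentence per Kafka message.
        while b"\n" in buffer:
            message, buffer = buffer.split(b"\n", 1)
            producer.send("nmea", value=message.strip())

producer.flush()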

Real-time Sailing Yacht Performance - Getting Started (Part 1)

Real-time Sailing Yacht Performance - Kafka (Part 2)

Rittman Mead at Kscope 2018

Kscope 2018 is just a week away! It's a magnificent location (Walt Disney World Swan and Dolphin Resort) for one of the best tech conferences of the year! The agenda is impressive (look here), spanning more than ten tracks, from the traditional EPM, BI Analytics and Data Visualization to the newly added Blockchain! Plenty of great content and networking opportunities!

I'll be representing Rittman Mead with two talks. The first, Visualizing Streams (Wednesday at 10:15, Northern Hemisphere A2, Fifth Level), is about how to build a modern analytical platform including Apache Kafka, Confluent's KSQL, Apache Drill and Oracle's Data Visualization (Cloud or Desktop).

During the second talk, DevOps and OBIEE: Do it Before it's Too Late! (Monday at 10:45, Northern Hemisphere A1, Fifth Level), I'll be sharing details, based on our experience, of how OBIEE can be fully included in a DevOps framework, what the cost of "avoiding" DevOps and automation in general is, and how Rittman Mead's toolkits, partially described here, can be used to accelerate the adoption of DevOps practices in any situation.

If you’re at the event and you see me in sessions, around the conference or during my talks, I’d be pleased to speak with you about your projects and answer any questions you might have.

Rittman Mead at OUG Norway 2018

This week I am very pleased to represent Rittman Mead by presenting at the Oracle User Group Norway Spring Seminar 2018, delivering two sessions about Oracle Analytics, Kafka, Apache Drill and Data Visualization, both on-premises and in the cloud. The OUGN conference is unique due to both the really high standard of the presentations (see the related agenda) and the fascinating location: the Color Fantasy cruiseferry sailing from Oslo to Kiel and back.

I'll be speaking on Friday 9th at 9:30AM in Auditorium 2, delivering Visualizing Streams, on how the world of Business Analytics has changed in recent years and how to successfully build a Modern Analytical Platform including Apache Kafka, Confluent's recently announced KSQL and Oracle's Data Visualization.

On the same day at 5PM, again in Auditorium 2, I'll be delivering the session OBIEE: Going Down the Rabbit Hole, providing details, built on experience, of how diagnostic tools, non-standard configuration and well-defined processes can enhance, secure and accelerate any analytical project.

If you’re at the event and you see me in sessions, around the conference or during my talks, I’d be pleased to speak with you about your projects and answer any questions you might have.

Spring into action with our new OBIEE 12c Systems Management & Security On Demand Training course

Rittman Mead are happy to release a new course to the On Demand Training platform.

The OBIEE 12c Systems Management & Security course is the essential learning tool for any developers or administrators who will be working on the maintenance & optimisation of their OBIEE platform.

View lessons and live demos from our experts on the following subjects:

  • What's new in OBIEE 12c
  • Starting & Stopping Services
  • Managing Metadata
  • System Preferences
  • Troubleshooting Issues
  • Caching
  • Usage Tracking
  • Baseline Validation Tool
  • Direct Database Request
  • Write Back
  • LDAP Users & Groups
  • Application Roles
  • Permissions

Get hands on with the practical version of the course which comes with an OBIEE 12c training environment and 9 lab exercises.

Rittman Mead will also be releasing a theory version of the course. This will not include the lab exercises but does include all of the lessons and demos that you'd get as part of the practical course.

Course prices are as follows:

OBIEE 12c Systems Management & Security - PRACTICAL - $499

  • 30 days access to lessons & demos
  • 30 days access to OBIEE 12c training environment for lab exercises
  • 30 days access to Rittman Mead knowledge base for Q&A and lab support

OBIEE 12c Systems Management & Security - THEORY - $299

  • 30 days access to lessons & demos
  • 30 days access to Rittman Mead knowledge base for Q&A

To celebrate the changing of the seasons, we suggest you spring into action with OBIEE 12c: receive a 25% discount on both courses until 31st March 2018 using the voucher code:

ODTSPRING18

Access both courses and the rest of our catalog at learn.rittmanmead.com