Category Archives: Rittman Mead

Rittman Mead at KScope16

June is the perfect month: summer begins, major football (and futbol) tournaments are in full swing, and of course, KScope16 is on! Rittman Mead have participated in many of the past KScope conferences and will continue that tradition this year with a wide range of presentation and training topics across the Business Intelligence, Data Warehousing, Big Data, and Advanced Analytics subject areas. This year the event is held in Chicago at the Sheraton Grand Chicago, a great location right on the river and a short distance from sights such as Navy Pier and the Magnificent Mile. And, as always, there will be some great networking opportunities throughout the week.

This year, we’re pleased to congratulate our own Becky Wagner (of Becky’s BI Apps Corner) on her graduation from the ODTUG Leadership Program. This is the third class to complete the program, which aims to build leadership skills, confidence, and work/life balance skills, amongst many other things, over the nine-month duration of the class. Well done, Becky! You’ll be able to find her and the rest of the 2016 Leadership class at the Career Community/Leadership Program Information Breakfast on Monday, June 27th at 7:30 AM in the Sheraton hotel’s main dining area. They will be kicking off the application season for the 2017 Leadership Program and providing further information about the new ODTUG career program as well.

The rest of the Rittman Mead attendees will be presenting one or more sessions during the week.

KScope16 Rittman Mead Schedule

The week kicks off with my talk on Monday covering GoldenGate and Kafka integration using the new Oracle GoldenGate for Big Data release 12.2. The topic of this session, real-time data streaming, provides a great solution for the challenging task of ingesting data from many different sources and structures and making that data readily available for others in the organization to use. By combining GoldenGate, Oracle’s premier data replication technology, and Apache Kafka, the latest open-source streaming and messaging system for big data, we can implement a fast, durable, and scalable solution.

Tuesday morning, Charles Elliott, Managing Director of Rittman Mead in the US, will begin the first of his two-part hands-on lab sessions on Free Form Data Visualizations. This hands-on introduction to all things visual in OBIEE’s new Visual Analyzer (VA) will allow attendees to see the top reporting features within the current release while also understanding the core differences between OBIEE’s previous reporting platform and what’s available now. In addition to creating some of the most eye-catching visualizations the new tool has to offer, this session will highlight new OBIEE 12c features such as mapping, advanced analytics, and an overview of the brand-new web catalog. Whether you are just interested in what’s new in free-form Oracle BI visualizations or preparing for an upgrade, this session is for you!

During the lunch and learn sessions you can find me moderating the BI and Data Warehousing panel, while Mark Rittman will be moderating the Big Data and Advanced Analytics crew. Grab a lunch and join us!

The remainder of Rittman Mead sessions fall on Wednesday, with some interesting topics around Big Data Discovery, OBIEE branding and Essbase integration, and the Kimball ETL Subsystems. Also, the second day of the Free Form Data Visualizations HOL will be held on Wednesday afternoon.

It all starts with Mark Rittman presenting “OBIEE 12c: What’s New for Integration and Reporting Against EPM Sources”. He’ll dig into interesting new features around OBIEE 12c and Essbase integration, including improvements around using Essbase as a query acceleration layer and general improvements around usability and RPD/Outline integration. Come to this session to see what they are, how they affect OBIEE/Essbase integration, and where this initiative may be going over time.

Following Mark and his dive into OBIEE and Essbase on Wednesday will be my session, “A Walk Through the Kimball ETL Subsystems with Oracle Data Integration”, with a focus on how to implement the ETL Subsystems using Oracle Data Integration solutions. As you may know, Ralph Kimball, dimensional modeling and data warehousing expert and founder of The Kimball Group, spent much of his career working to build an enterprise data warehouse methodology that can meet analytical reporting needs. His book, “The Data Warehouse ETL Toolkit,” is a guide for many ETL developers. This session will walk you through his ETL Subsystem categories, describing how the Oracle Data Integration products (Oracle Data Integrator, GoldenGate, & Enterprise Data Quality) are perfectly suited for the Kimball approach.

After lunch, Andy Rocha and Pete Tamisin will dive into "How to Brand and Own Your OBIEE Interface: Past, Present, and Future". In this presentation, they will explore different techniques for branding and customizing your OBIEE environment across different versions: 11.1.1.7, 11.1.1.9, and 12c. What that really means is they will talk about how to brand the interface in a thoughtful way… not just throwing the company’s colors on the screen, but thinking about what will actually make the application more attractive and useful for the user community.

Closing out the Rittman Mead Wednesday sessions is Mark Rittman with his unique look at machine learning and advanced visualizations using Oracle Big Data Discovery (BDD). Mark will use the BDD Shell, pySpark, and Jupyter to ingest, process and run machine learning routines on IoT and smart home data, which he has been collecting over the past year. Oracle Big Data Discovery provides a graphical environment for cataloging, searching, and displaying data from Hadoop and other sources along with some basic data wrangling features and display options. In this session, he’ll look at how BDD can be extended using the aforementioned APIs to cover more advanced predictive modeling, machine learning, and data science tasks, and how the graphics API can be used to add custom D3 advanced visualizations for your datasets and analyses. I’m definitely looking forward to this one!

We’re all looking forward to attending the event in Chicago this year and can’t wait for next week! If you’d like to get together to discuss any of the above topics we’re presenting, or anything data integration or BI related, feel free to reach out via email (info@rittmanmead.com), Twitter (@mRainey), or drop a note in the comments below. Hope to see you there!

Using Jupyter Notebooks with Big Data Discovery 1.2

New in Big Data Discovery 1.2 is the addition of BDD Shell, an integration point with Python. This exposes the datasets and BDD functionality in a Python and PySpark environment, opening up huge possibilities for advanced data science work on BDD datasets. With the ability to push back to Hive and thus BDD data modified in this environment, this is important functionality that will make BDD even more useful for navigating and exploring big data.

Whilst BDD Shell is command-line based, there’s also the option to run Jupyter Notebooks (previously IPython Notebooks), which provide a web-based interactive “notebook”. This lets you build up scripts exploring and manipulating the data within BDD, using both Python and Spark. The big advantage of this over the command-line interface is that a Notebook enables you to modify and re-run commands, and then, once correct, retain them as a fully functioning script for future use.

The Big Data Lite virtual machine is produced by Oracle for demo and development purposes, and hosts all the components that you’d find on the Big Data Appliance, all configured and integrated for use. Version 4.5 was released recently, which included BDD 1.2.

For information on how to set up BDD Shell and Jupyter Notebooks, see this previous post. For the purpose of this article I’m running Jupyter on port 18888 so as not to clash with Hue:

cd /u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell
/u01/anaconda2/bin/jupyter-notebook --port 18888

Important points to note:

  • It’s important that you run this from the bdd-shell folder, otherwise the BDD shell won’t initialise properly
  • Jupyter by default only listens locally, so you need to use a web browser local to the server, or use port-forwarding if you want to access Jupyter from your local web browser.

Go to http://localhost:18888 in your web browser, and from the New menu select a Python 2 notebook:

You should then see an empty notebook, ready for use:

The ‘cell’ (the grey box after the In [ ]:) is where you enter code to run – type in execfile('ipython/00-bdd-shell-init.py') and press Shift-Enter. This will execute it – if you don’t hold Shift you just get a newline. Whilst it’s executing you’ll notice the line prefix changes from [ ] to [*], and in the terminal window from which you launched Jupyter you’ll see some output related to the BDD Shell starting up:

WARNING: User-defined SPARK_HOME (/usr/lib/spark) overrides detected (/usr/lib/spark/).
WARNING: Running spark-class from user-defined location.
spark.driver.cores is set but does not apply in client mode.

Now back in the Notebook, enter the following – use Enter, not Shift-Enter, between lines:

dss = bc.datasets()
dss.count

Now press Shift-Enter to execute it. This uses the pre-defined bc BDD context to get the datasets object, and returns a count from it.
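
Since the notebook is running a full BDD Shell session, the same calls that work at the BDD Shell command line work in a cell too; a minimal sketch for listing what’s in the Catalog (Python 2, to match the notebook kernel):

print bc.datasets().count       # number of datasets registered in the BDD Catalog

# Each entry describes one dataset: its name, internal keys, source type
# and the underlying Hive table
for ds in bc.datasets():
    print ds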

By clicking the + button on the toolbar, using the up and down arrows on the toolbar, and the Code/Markdown dropdown, it’s possible to insert “cells” which are not code but instead commentary on what the code is. This way you can produce fully documented, but executable, code objects.

From the File menu give the notebook a name, and then Close and Halt, which destroys the Jupyter process (‘kernel’) that was executing the BDD Shell session. Back at the Jupyter main page, you’ll note that an ipynb file has been created, which holds the notebook definition and can be downloaded, sent to colleagues, uploaded to blogs to share, saved in source control, and so on. Here’s the file for the notebook above – note that it’s hosted on gist, which automagically previews it as a Notebook, so click on Raw to see the actual code behind it.
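
If you want to produce a shareable, static copy of a notebook without opening it in the browser, the nbconvert tool that ships with Jupyter (and therefore with Anaconda) can do the export from the command line; the notebook filename here is just an example:

/u01/anaconda2/bin/jupyter-nbconvert --to html my_bdd_notebook.ipynb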

The fantastically powerful thing about Notebooks is that you can modify and re-run steps as you go, but you never lose the history of how you got somewhere. Most people will be familiar with learning or exploring a tool and its capabilities and eventually getting it to work, but with no idea how they got there. Even for experienced users of a tool, being able to prove how to replicate a final result is important for (a) showing the evidence of how they got there and (b) enabling others to take that work and build on it.

With an existing notebook file, whether a saved one you created or one that someone sent you, you can reopen it in Jupyter and re-execute it, in order to replicate the results previously seen. This is an important tenet of [data] science in general – show your workings – and it’s great that Big Data Discovery supports this option. Obviously, showing the count of datasets is not so interesting or important to replicate. The real point here is being able to take datasets that you’ve already joined and wrangled in BDD, taking advantage of the GUI, and then dive deep into the data science and analytics world of things like Spark MLlib, Pandas, and so on. As a simple example, I can use a couple of Python libraries (installed by default with Anaconda) to plot a correlation matrix for one of my BDD datasets:
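
For reference, a minimal sketch of the kind of code involved is below. Rather than guess at the BDD Shell dataset API, it reads the dataset’s underlying Hive table through the pre-defined sqlContext; the table name is illustrative, and the pandas/matplotlib calls are standard:

import matplotlib.pyplot as plt
# (in the notebook, the %matplotlib inline magic renders plots in the cell)

# Pull the dataset's backing Hive table into a Spark DataFrame, then into pandas
# (illustrative table name; substitute one from the dataset listing)
spark_df = sqlContext.table('default.accident_data')
pdf = spark_df.toPandas()

# Correlation matrix of the numeric columns
corr = pdf.corr()

# Plot it with matplotlib
fig, ax = plt.subplots()
cax = ax.matshow(corr.values)
fig.colorbar(cax)
ax.set_xticks(range(len(corr.columns)))
ax.set_yticks(range(len(corr.columns)))
ax.set_xticklabels(corr.columns, rotation=90)
ax.set_yticklabels(corr.columns)
plt.show()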

As well as producing visualisations or calculations within BDD shell, the real power comes in being able to push the modified data back into Hive, and thus continue to work with it within BDD.

With Jupyter Notebooks, not only can you share the raw notebooks for someone else to execute, but you can also export the results to HTML, PDF, and so on. Here’s the notebook I started above, developed further and exported to HTML – note how you can see not only the results, but exactly the code that I ran in order to get them. In this I took the dataset from BDD, added a column to it using a pandas windowing function, and then saved it back to a new Hive table:
(you can view the page natively here, and the ipynb here)

https://gist.github.com/rmoff/f1024dd043565cb8a58a0c54e9a782f2
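
The notebook linked above is the definitive record of that step, but as a hedged sketch of the pattern (illustrative table and column names, a simple rolling mean standing in for the windowing function, and assuming the init script exposes the same sqlContext as the command-line shell):

# Pull the data into pandas (illustrative table and column names)
pdf = sqlContext.table('default.accident_data').toPandas()
pdf = pdf.sort_values('accident_date')

# Add a new column with a pandas windowing (rolling) function:
# a 7-row rolling mean of the casualty count (pandas 0.18+ syntax)
pdf['casualties_7row_avg'] = pdf['casualty_count'].rolling(window=7).mean()

# Convert back to a Spark DataFrame and save it as a new Hive table,
# which data_processing_CLI can then register in BDD
sqlContext.createDataFrame(pdf) \
          .write.mode('overwrite') \
          .saveAsTable('default.updated_accident_data')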

Once the data’s been written back to Hive from the Python processing, I ran BDD’s data_processing_CLI to add the new table back into BDD:

/u01/bdd/v1.2.0/BDD-1.2.0.31.813/dataprocessing/edp_cli/data_processing_CLI --table updated_accident_data

And once that’s run, I can then continue working with the data in BDD.

This workflow enables a continual loop of data wrangling, enrichment, advanced processing, and visualisation – all using the most appropriate tools for the job.

You can also use BDD Shell/Jupyter as another route for loading data into BDD. Whilst you can import CSV and XLS files into BDD directly through the web GUI, there are limitations – for example, an XLS workbook with multiple sheets has to be imported one sheet at a time. I had an XLS file with over 40 sheets of reference data in it, which was not going to be time-efficient to load one sheet at a time into BDD.

Pandas supports a lot of different input types, including Excel files. So by using Pandas to pull the data in and then converting it to a Spark DataFrame, I can write it to Hive, from where it can be imported into BDD. As before, the beauty of the Notebook approach is that I could develop and refine the code, and then simply share the Notebook here:

https://gist.github.com/rmoff/3fa5d857df8ca5895356c22e420f3b22
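
The Notebook above has the real code; as a rough sketch of the pattern, assuming an illustrative file path and table-naming scheme (and that xlrd, bundled with Anaconda, is available to read .xls files), it looks something like this:

import pandas as pd

# Open the workbook once, then loop over every worksheet in it
xl = pd.ExcelFile('/home/oracle/Downloads/reference_data.xls')

for sheet in xl.sheet_names:
    # One pandas DataFrame per sheet
    pdf = xl.parse(sheet)

    # Convert to a Spark DataFrame and write it to Hive as its own table,
    # ready to be added to BDD with data_processing_CLI
    table_name = 'default.ref_%s' % sheet.lower().replace(' ', '_')
    sqlContext.createDataFrame(pdf).write.mode('overwrite').saveAsTable(table_name)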

Running Big Data Discovery Shell and Jupyter Notebook on Big Data Lite VM 4.5

New in Big Data Discovery 1.2 is the addition of BDD Shell, an integration point with Python. This exposes the datasets and BDD functionality in a Python and PySpark environment, opening up huge possibilities for advanced data science work on BDD datasets, particularly when used in conjunction with Jupyter Notebooks. With the ability to push back to Hive and thus BDD data modified in this environment, this is important functionality that will make BDD even more useful for navigating and exploring big data.

The Big Data Lite virtual machine is produced by Oracle for demo and development purposes, and hosts all the components that you’d find on the Big Data Appliance, all configured and integrated for use. Version 4.5 was released recently, which included BDD 1.2. In this article we’ll see how to configure BDD Shell on Big Data Lite 4.5 (along with Jupyter Notebooks), and in a subsequent post dive into how to actually use them.

Setting up BDD Shell on Big Data Lite

You can find the BDD Shell installation document here.

Login to BigDataLite 4.5 (oracle/welcome1) and open a Terminal window. The first step is to download Anaconda, which is a distribution of Python that also includes “[…] over 100 of the most popular Python, R and Scala packages for data science” as well as Jupyter notebook, which we’ll see in a moment.

cd ~/Downloads/
wget http://repo.continuum.io/archive/Anaconda2-4.0.0-Linux-x86_64.sh

Then install it: (n.b. bash is part of the command to enter)

bash Anaconda2-4.0.0-Linux-x86_64.sh

Accept the licence when prompted, and then select an install location – I used /u01/anaconda2, where the rest of the BigDataLite software is installed.

Anaconda2 will now be installed into this location:
/home/oracle/anaconda2

  - Press ENTER to confirm the location
  - Press CTRL-C to abort the installation
  - Or specify a different location below

[/home/oracle/anaconda2] >>> /u01/anaconda2

After a few minutes of installation, you’ll be asked whether you want to prepend Anaconda’s location to the PATH environment variable. I opted not to (which is the default), since Python is used elsewhere on the system and prepending Anaconda would make it take priority and possibly break things.

Do you wish the installer to prepend the Anaconda2 install location
to PATH in your /home/oracle/.bashrc ? [yes|no]
[no] >>> no

Now edit the BDD Shell configuration file (/u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/bdd-shell.conf) in your favourite text editor to add/amend the following lines:

SPARK_EXECUTOR_PYTHON=/u01/anaconda2/bin/python
LOCAL_PYTHON_HOME=/u01/anaconda2

Amend the path if you didn’t install Anaconda into /u01

In the same configuration file, add/amend:

SPARK_HOME=/usr/lib/spark/
SPARK_EXTRA_CLASSPATH=/usr/lib/oozie/oozie-sharelib-yarn/lib/spark/spark-avro_2.10-1.1.0-cdh5.7.0.jar

Now run the BDD Shell setup:

/u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/setup.sh

This should succeed:

[bigdatalite.localdomain] Validating pre-requisites...
[bigdatalite.localdomain] Validation Success
[bigdatalite.localdomain] Setting up BDD Shell...
[bigdatalite.localdomain] Setup Success
[oracle@bigdatalite Downloads]$

Assuming it does, you can now launch the shell bdd-shell.sh:

[oracle@bigdatalite Downloads]$ /u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell/bdd-shell.sh
WARNING: User-defined SPARK_HOME (/usr/lib/spark) overrides detected (/usr/lib/spark/).
WARNING: Running spark-class from user-defined location.
spark.driver.cores is set but does not apply in client mode.
Welcome to
     ___   ___   ___       __   _     ____  _     _
    | |_) | |  | |      ( (` | |_| | |_  | |   | |
    |_|_) |_|_/ |_|_/     _)_) |_| | |_|__ |_|__ |_|__

SparkContext available as sc, HiveContext available as sqlContext.
BDD Context available as bc.

>>>

From the BDD Shell you can interact with BDD, for example to list out the datasets currently defined in the Catalog:

>>> bc.datasets().count
17
>>> for ds in bc.datasets():
...     print ds
...

media_demo_customer     edp_cli_edp_2c7f41ee-65bf-43ac-8bb4-5b6b59a55d75        edp_cli_edp_2c7f41ee-65bf-43ac-8bb4-5b6b59a55d75        Hive    default.media_demo_customer

movie_genre     default_edp_7d4c18a5-6f02-4067-9f63-91f950078b1e        default_edp_7d4c18a5-6f02-4067-9f63-91f950078b1e        Hive    default.movie_genre

media_demo_customer     default_edp_89c616b6-aa10-4827-aa82-1e9c3fcc419e        default_edp_89c616b6-aa10-4827-aa82-1e9c3fcc419e        Hive    default.media_demo_customer

Whilst BDD Shell is command-line based, there’s also the option to run Jupyter Notebooks (previously IPython Notebooks), which provide a web-based interactive “notebook”. This lets you build up scripts exploring and manipulating the data within BDD, using both Python and Spark. The big advantage of this over the command-line interface is that a Notebook enables you to modify and re-run commands, and then, once correct, retain them as a fully functioning script for future use.

To launch it, run:

cd /u01/bdd/v1.2.0/BDD-1.2.0.31.813/bdd-shell
/u01/anaconda2/bin/jupyter-notebook --port 18888

Important points to note:

  • It’s important that you run this from the bdd-shell folder, otherwise the BDD shell won’t initialise properly
  • By default Jupyter uses port 8888, which is already in use on BigDataLite by Hue, so use a different one by specifying --port
  • Jupyter by default only listens locally, so you need to either be using the BigDataLite desktop to run Firefox, or use port-forwarding if you want to access Jupyter from your local web browser (see the example SSH command below).
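
For example, a simple SSH port-forward from your own machine to the VM looks like this (substitute however you normally reach the Big Data Lite VM for the hostname); http://localhost:18888 on your machine will then reach the notebook server running on the VM:

ssh -L 18888:localhost:18888 oracle@bigdatalite.localdomain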

Go to http://localhost:18888 in your web browser, and you should see the default Jupyter screen with a list of files:

In the next article, we’ll see how to use Jupyter Notebooks with Big Data Discovery, and get an idea of just how powerful the combination can be.
