Yesterday, I explained how to access a MySQL database from Zeppelin, which comes with Oracle Big Data Cloud Service Compute Edition (BDCSCE). Although we can use Zeppelin to access MySQL, we still need something more powerful to combine data from two different sources (for example, data from a CSV file and RDBMS tables). Spark is a great choice for processing data. In this blog post, I'll write a simple PySpark (Python for Spark) script that reads from MySQL and CSV, joins the data, and writes the output back to MySQL. Please keep in mind that I use Oracle BDCSCE, which supports Spark 2.1, so I tested my code only on Spark 2.1 in the Zeppelin environment. I expect you to run all these steps in the same environment; otherwise, you may need to modify paths and code.
For my sample script, I'll use the flight data for the year 2008. If you have read my blog post series about BDCSCE, you should be familiar with it. Anyway, do not worry about the data structure; I use only a few columns of the data, and you can find more information about it on the statistical computing website.
First, I'll create a table in MySQL to store the most active carriers (in 2008). I have already configured Zeppelin to access my MySQL database, so I create a new paragraph, enter the following SQL commands, and run them.
create table carriers (code TEXT, description TEXT);
insert into carriers values ('WN', 'Southwest Airlines Co.');
insert into carriers values ('AA', 'American Airlines Inc.');
insert into carriers values ('OO', 'Skywest Airlines Inc.');
insert into carriers values ('MQ', 'American Eagle Airlines Inc.');
insert into carriers values ('US', 'US Airways Inc.');
insert into carriers values ('DL', 'Delta Air Lines Inc.');
insert into carriers values ('UA', 'United Air Lines Inc.');
select * from carriers;
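Before getting to the Spark code itself, it may help to see the shape of the pipeline we are about to build: read a lookup table from a database, read flight rows from a CSV, join them on the carrier code, and write the joined result back to the database. The sketch below illustrates exactly that logic with Python's standard library only, using an in-memory SQLite database as a stand-in for MySQL and a tiny inline CSV as a stand-in for the 2008 flight file (both are my stand-ins, not part of the original setup); the PySpark version replaces each of these steps with DataFrame operations.

```python
import csv
import io
import sqlite3

# In-memory SQLite stands in for the MySQL database used in the post.
conn = sqlite3.connect(":memory:")
conn.execute("create table carriers (code TEXT, description TEXT)")
conn.executemany(
    "insert into carriers values (?, ?)",
    [("WN", "Southwest Airlines Co."), ("AA", "American Airlines Inc.")],
)

# A tiny stand-in for the 2008 flights CSV (only the columns we join on).
flights_csv = io.StringIO("UniqueCarrier,FlightNum\nWN,335\nAA,1\n")
flights = list(csv.DictReader(flights_csv))

# Join: look up each flight's carrier description from the carriers table.
carriers = dict(conn.execute("select code, description from carriers"))
joined = [
    (f["UniqueCarrier"], f["FlightNum"], carriers.get(f["UniqueCarrier"]))
    for f in flights
]

# Write the joined result back to the database.
conn.execute(
    "create table flight_report (code TEXT, flightnum TEXT, description TEXT)"
)
conn.executemany("insert into flight_report values (?, ?, ?)", joined)
```

In the Spark version, the read, join, and write steps each become a single DataFrame call, and Spark distributes the work instead of looping in Python.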