Previewing reorganized data

Let us preview the reorganized data.

  • We will use the new location going forward: /public/airlines_all/airlines-part. The data has already been copied into that location.

  • The data has been partitioned by month and stored in that location.

  • Instead of using the complete data set, we will read the data from one partition: /public/airlines_all/airlines-part/flightmonth=200801.

  • First, let us create a DataFrame using spark.read.parquet("/public/airlines_all/airlines-part/flightmonth=200801") and call it airlines.

  • We can get the schema of the DataFrame using airlines.printSchema().

  • Use airlines.show() or airlines.show(100, truncate=False) to preview the data.

  • We can also use display(airlines) to view the airlines data in tabular format within a Databricks Notebook.

  • We can also use airlines.describe().show() to get some statistics about the DataFrame, and airlines.count() to get the number of records in the DataFrame.

Let us start the Spark context for this Notebook so that we can execute the code provided. You can sign up for our 10-node state-of-the-art cluster/labs to learn Spark SQL using our unique integrated LMS.

from pyspark.sql import SparkSession

import getpass
username = getpass.getuser()

# spark.ui.port=0 lets Spark pick a free UI port (avoids conflicts on a shared
# cluster), and each user gets a dedicated warehouse directory under /user/<username>.
spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Data Processing - Overview'). \
    master('yarn'). \
    getOrCreate()

If you are going to use CLIs, you can launch Spark using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
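Once inside the spark2-sql prompt, the partition described earlier can be previewed directly with SQL using Spark's parquet.`path` table syntax. This is a sketch assuming the cluster path exists in your lab environment:

```sql
SELECT *
FROM parquet.`/public/airlines_all/airlines-part/flightmonth=200801`
LIMIT 10;
```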

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Pyspark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse