Understand airlines data

Let us read one of the files and explore the data so that we can determine the right API and options to process it later.

  • Our airlines data is in text file format.

  • We can use spark.read.text on one of the files to preview the data and understand the following:

    • Whether a header is present in the files or not.

    • The field delimiter that is being used.

  • Once we determine the header and field delimiter details, we can use spark.read.csv with the appropriate options to read the data.
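As a purely local illustration of this kind of inspection (not part of the Spark workflow itself), Python's standard `csv.Sniffer` can guess the delimiter and check for a header from a small sample of text. The sample string below is hypothetical, not taken from the actual airlines files:

```python
import csv

# Hypothetical sample of the first few lines of a delimited file
sample = "Year,Month,DayofMonth,DepDelay\n1987,10,14,8\n1987,10,15,29\n"

sniffer = csv.Sniffer()
dialect = sniffer.sniff(sample)          # detect the field delimiter
has_header = sniffer.has_header(sample)  # heuristic check for a header row

print(dialect.delimiter)  # ','
print(has_header)         # True
```

In practice, previewing a few lines with spark.read.text and eyeballing them, as done below, is usually enough to make the same determination.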

Let us start the Spark context for this notebook so that we can execute the code provided. You can sign up for our 10-node state-of-the-art cluster/labs to learn Spark SQL using our unique integrated LMS.

from pyspark.sql import SparkSession

import getpass
username = getpass.getuser()

spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Data Processing - Overview'). \
    master('yarn'). \
    getOrCreate()

If you are going to use CLIs, you can launch Spark SQL using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Pyspark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
%%sh

hdfs dfs -ls -h /public/airlines_all/airlines/part-00000

airlines = spark.read. \
    text("/public/airlines_all/airlines/part-00000")

type(airlines)

help(airlines.show)

airlines.show(truncate=False)

help(spark.read.text)
  • The data has a header, and each field is delimited by a comma.
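Given that conclusion, here is a minimal local sketch of what comma-delimited parsing with a header row amounts to, using only the Python standard library. The sample content and column names are hypothetical, not taken from the actual airlines files:

```python
import csv
import io

# Hypothetical comma-delimited content whose first row is a header
raw = "Year,Month,DepDelay\n1987,10,8\n1987,11,29\n"

reader = csv.reader(io.StringIO(raw), delimiter=",")
header = next(reader)                         # first row becomes the column names
rows = [dict(zip(header, r)) for r in reader] # remaining rows become records

print(header)   # ['Year', 'Month', 'DepDelay']
print(rows[0])  # {'Year': '1987', 'Month': '10', 'DepDelay': '8'}
```

In Spark, this corresponds to reading the files with spark.read.csv and passing header=True and sep="," so that the first row is treated as column names rather than data.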