Basic Filtering of Data

Let us get started with Basic Filtering using Spark Data Frame APIs.

  • Filtering can be done using either filter or where; the two are synonyms.

  • The condition can be passed either in SQL style or in Data Frame style.

  • Example of SQL style: airtraffic.filter("IsArrDelayed = 'YES'").show() or airtraffic.where("IsArrDelayed = 'YES'").show()

  • Example of Data Frame style: airtraffic.filter(airtraffic["IsArrDelayed"] == 'YES').show() or airtraffic.filter(airtraffic.IsArrDelayed == 'YES').show(). We can also use where instead of filter.

  • Other operators we can use in filter conditions include !=, >, <, >=, <=, LIKE, and BETWEEN ... AND, as shown in the sketch after this list.
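
Here is a minimal sketch of those operators in both styles. It assumes the airtraffic Data Frame that is loaded later in this notebook, as well as a DepDelay column holding the departure delay in minutes; the thresholds and patterns are purely illustrative.

# SQL style: the whole condition goes inside a string
airtraffic.filter("Origin != 'SFO'").show()
airtraffic.filter("Origin LIKE 'S%'").show()
airtraffic.filter("DepDelay BETWEEN 15 AND 60").show()

# Data Frame style: use column expressions and column functions
airtraffic.filter(airtraffic["Origin"] != 'SFO').show()
airtraffic.filter(airtraffic["Origin"].like('S%')).show()
airtraffic.filter(airtraffic["DepDelay"].between(15, 60)).show()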

Let us start the Spark session for this notebook so that we can execute the code provided.

from pyspark.sql import SparkSession

import getpass
username = getpass.getuser()

spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Basic Transformations'). \
    master('yarn'). \
    getOrCreate()
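
Once the session is up, a quick sanity check helps confirm the configuration; this is an optional, illustrative snippet.

# Run a trivial query to verify the session works
spark.sql('SELECT current_date()').show()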

If you are going to use CLIs, you can run Spark SQL using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using PySpark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Tasks

Let us perform some tasks to understand filtering in detail. Solve all the problems by passing conditions in both SQL style and Data Frame style.

  • Read the data for January 2008. We will be using only the January 2008 data for the demos.

airtraffic_path = "/public/airtraffic_all/airtraffic-part/flightmonth=200801"
airtraffic = spark. \
    read. \
    parquet(airtraffic_path)
airtraffic.count()
airtraffic.printSchema()
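
Note that reading a partition directory directly, as above, omits the partition column flightmonth from the resulting Data Frame. If that column is needed, Spark's basePath option can point at the parent directory; a sketch using the same paths:

airtraffic_with_month = spark. \
    read. \
    option('basePath', '/public/airtraffic_all/airtraffic-part'). \
    parquet(airtraffic_path)
airtraffic_with_month.printSchema()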

  • Get count of cancelled flights.

airtraffic.select('Cancelled').distinct().show()
airtraffic.filter('Cancelled = 1').count()
airtraffic.filter(airtraffic['Cancelled'] == 1).count()
airtraffic.filter(airtraffic.Cancelled == 1).count()
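
As a quick cross-check (an optional sketch, not one of the tasks), groupBy reports the counts for all values of Cancelled at once:

airtraffic.groupBy('Cancelled').count().show()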

  • Get the number of flights scheduled for departure from SFO airport.

airtraffic.filter("Origin = 'SFO'").count()
airtraffic.filter(airtraffic['Origin'] == 'SFO').count()
airtraffic.filter(airtraffic.Origin == 'SFO').count()
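
Another common Data Frame style variant uses the col function; this sketch is equivalent to the forms above:

from pyspark.sql.functions import col
airtraffic.filter(col('Origin') == 'SFO').count()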

  • Get the number of flights that departed without any delay.

airtraffic.select('IsDepDelayed').distinct().show()
airtraffic.filter("IsDepDelayed = 'NO'").select('Cancelled').distinct().show()
airtraffic.filter("IsDepDelayed = 'NO'").count()
airtraffic.filter(airtraffic['IsDepDelayed'] == 'NO').count()
airtraffic.filter(airtraffic.IsDepDelayed == 'NO').count()
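
The distinct check on Cancelled above hints that such conditions are often combined. Here is a sketch of joining conditions with AND in both styles, using the same columns as above:

# SQL style: combine conditions inside the condition string
airtraffic.filter("IsDepDelayed = 'NO' AND Cancelled = 0").count()

# Data Frame style: use & and parenthesize each condition
airtraffic.filter((airtraffic['IsDepDelayed'] == 'NO') & (airtraffic['Cancelled'] == 0)).count()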