Solutions - Problem 3
Get the list of airports in the US from which no flights departed in January 2008.
This is an example of an outer join.
We need the airports that appear in the airport codes data set but not in the January 2008 air traffic data set.
Whether we call the join left or right depends on which side the airport codes data set is on. Since we will invoke the join from the airport codes data set, we will use a left outer join.
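As a minimal sketch of the idea (using hypothetical miniature data, not the actual data sets), a left outer join keeps every row from the left side and fills the right side with null where there is no match; filtering on the null right-side key then leaves exactly the unmatched left rows. In plain Python:

```python
# Hypothetical miniature data illustrating the left outer join + NULL-filter pattern
airport_codes = ["ABR", "JFK", "LGA"]   # left side: all airports
origins = {"JFK", "LGA"}                # right side: airports with departures

# Left outer join: keep every left row; unmatched rows get None on the right
joined = [(iata, iata if iata in origins else None) for iata in airport_codes]

# Filtering where the right-side key is None leaves airports with no departures
unused = [iata for iata, origin in joined if origin is None]
print(unused)  # ['ABR']
```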
Let us start the Spark context for this Notebook so that we can execute the code provided.
from pyspark.sql import SparkSession
import getpass
username = getpass.getuser()
spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Joining Data Sets'). \
    master('yarn'). \
    getOrCreate()
If you are going to use CLIs, you can launch Spark using one of the following three approaches.
Using Spark SQL
spark2-sql \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using Scala
spark2-shell \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using Pyspark
pyspark2 \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
spark.conf.set("spark.sql.shuffle.partitions", "2")
airtrafficPath = "/public/airtraffic_all/airtraffic-part/flightmonth=200801"
airtraffic = spark. \
    read. \
    parquet(airtrafficPath)
airtraffic. \
    select(
        "Year", "Month", "DayOfMonth",
        "Origin", "Dest", "CRSDepTime"
    ). \
    show()
+----+-----+----------+------+----+----------+
|Year|Month|DayOfMonth|Origin|Dest|CRSDepTime|
+----+-----+----------+------+----+----------+
|2008| 1| 16| BGR| CVG| 1735|
|2008| 1| 17| SYR| CVG| 1701|
|2008| 1| 17| SAV| BOS| 1225|
|2008| 1| 17| CVG| GRR| 1530|
|2008| 1| 17| STL| CVG| 1205|
|2008| 1| 18| STL| JFK| 1150|
|2008| 1| 18| MCI| CVG| 1009|
|2008| 1| 19| TUL| CVG| 835|
|2008| 1| 20| JFK| PHL| 1935|
|2008| 1| 20| RDU| CVG| 830|
|2008| 1| 21| CVG| DTW| 1640|
|2008| 1| 21| MSY| LGA| 1204|
|2008| 1| 21| JFK| PHL| 1935|
|2008| 1| 21| DCA| JFK| 1830|
|2008| 1| 21| HSV| DCA| 700|
|2008| 1| 22| ORD| CVG| 1910|
|2008| 1| 22| CVG| JFK| 1320|
|2008| 1| 23| LGA| SAV| 908|
|2008| 1| 23| CLT| CVG| 1252|
|2008| 1| 23| GSP| LGA| 635|
+----+-----+----------+------+----+----------+
only showing top 20 rows
airtraffic.count()
605659
airportCodesPath = "/public/airtraffic_all/airport-codes"
def getValidAirportCodes(airportCodesPath):
    # Exclude one malformed record and keep only US airports
    airportCodes = spark. \
        read. \
        option("sep", "\t"). \
        option("header", True). \
        option("inferSchema", True). \
        csv(airportCodesPath). \
        filter("!(State = 'Hawaii' AND IATA = 'Big') AND Country = 'USA'")
    return airportCodes
airportCodes = getValidAirportCodes(airportCodesPath)
airportCodes.count()
443
from pyspark.sql.functions import col
airportCodes. \
    join(airtraffic, col("IATA") == col("Origin"), "left"). \
    filter("Origin IS NULL"). \
    select(airportCodes["*"], col("Origin")). \
    show()
+--------------+-----+-------+----+------+
| City|State|Country|IATA|Origin|
+--------------+-----+-------+----+------+
| Aberdeen| SD| USA| ABR| null|
| Alamosa| CO| USA| ALS| null|
| Alliance| NE| USA| AIA| null|
| Alpena| MI| USA| APN| null|
| Altoona| PA| USA| AOO| null|
| Athens| GA| USA| AHN| null|
| Augusta| ME| USA| AUG| null|
| Bar Harbor| ME| USA| BHB| null|
| Beckley| WV| USA| BKW| null|
| Bedford| MA| USA| BED| null|
| Bemidji| MN| USA| BJI| null|
| Bettles| AK| USA| BTT| null|
| Bloomington| IN| USA| BMG| null|
| Bluefield| WV| USA| BLF| null|
| Brookings| SD| USA| BKX| null|
| Burlington| IA| USA| BRL| null|
| Burlington| MA| USA| BBF| null|
|Cape Girardeau| MO| USA| CGI| null|
| Carlsbad| NM| USA| CNM| null|
| Cheyenne| WY| USA| CYS| null|
+--------------+-----+-------+----+------+
only showing top 20 rows
airportCodes. \
    join(airtraffic, col("IATA") == col("Origin"), "left"). \
    filter("Origin IS NULL"). \
    count()
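As an aside, Spark also supports a dedicated left_anti join type that returns only the left-side rows with no match on the right, so the explicit NULL filter is not needed: `airportCodes.join(airtraffic, col("IATA") == col("Origin"), "left_anti").count()` should give the same result. Anti-join semantics amount to a set difference on the join key, sketched here in plain Python on hypothetical miniature data:

```python
# Hypothetical miniature data: anti join as a set difference on the join key
airport_codes = [("ABR", "SD"), ("JFK", "NY"), ("LGA", "NY")]  # (IATA, State)
origins = {"JFK", "LGA"}  # distinct Origin values from the flights data

# left_anti keeps the left rows whose key has no match on the right
unused = [row for row in airport_codes if row[0] not in origins]
print(unused)  # [('ABR', 'SD')]
```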
173