Read and Process Data from Metastore Tables¶
Let us see how we can read tables using functions such as spark.read.table
and process data using Data Frame APIs.
Using Data Frame APIs - spark.read.table("table_name"). We can also prefix the database name to read tables belonging to a particular database.
When we read the table, it will result in a Data Frame.
Once the Data Frame is created, we can use functions such as filter or where, groupBy, and sort or orderBy to process the data in the Data Frame.
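As a quick illustration, here is a minimal sketch of that flow. The database and table names (retail_db.orders) and the column names are hypothetical placeholders, not part of this lab's data.

# Hedged sketch: read a database-qualified table and chain processing functions.
# retail_db.orders, order_status and order_date are hypothetical names.
orders = spark.read.table("retail_db.orders")

orders. \
    filter("order_status = 'COMPLETE'"). \
    groupBy("order_date"). \
    count(). \
    orderBy("order_date"). \
    show()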
Let us start the Spark context for this notebook so that we can execute the code provided. You can sign up for our 10-node state-of-the-art cluster/labs to learn Spark SQL using our unique integrated LMS.
from pyspark.sql import SparkSession
import getpass
username = getpass.getuser()
spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Spark Metastore'). \
    master('yarn'). \
    getOrCreate()
If you are going to use CLIs, you can launch Spark SQL using one of the following three approaches.
Using Spark SQL
spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using Scala
spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using Pyspark
pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
# Keep the number of shuffle partitions small, as the lab datasets are small
spark.conf.set('spark.sql.shuffle.partitions', '2')
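This overrides the default of 200 shuffle partitions. A quick sketch to confirm the setting and observe its effect on a shuffled (groupBy) result; the df example below is illustrative, not part of the lab data.

# Confirm the setting
spark.conf.get('spark.sql.shuffle.partitions') # returns '2'

# groupBy triggers a shuffle, so the result uses 2 partitions
df = spark.range(100)
df.groupBy((df.id % 10).alias('bucket')). \
    count(). \
    rdd. \
    getNumPartitions() # 2 with this setting (Spark 2.x; Spark 3 adaptive execution may coalesce further)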
Tasks¶
Let us see how we can create a table using data in a file and then read it into a Data Frame.
Create a database for the airtraffic data.
import getpass
username = getpass.getuser()
spark.sql(f"CREATE DATABASE IF NOT EXISTS {username}_airtraffic")
spark.catalog.setCurrentDatabase(f"{username}_airtraffic")
spark.catalog.currentDatabase()
'itversity_airtraffic'
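As an optional check (not part of the original steps), we can confirm the database exists using the catalog API.

# List databases and confirm the new one was created
[db.name for db in spark.catalog.listDatabases() if db.name == f'{username}_airtraffic']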
Create a table named airport_codes for the file airport-codes.txt. The file contains a header, and each field in each row is delimited by a tab character.
airport_codes_path = f"/user/{username}/airtraffic_all/airport-codes"
# Drop the table if it already exists so this step can be re-run safely
spark.sql(f'DROP TABLE IF EXISTS {username}_airtraffic.airport_codes')
airport_codes_df = spark. \
    read. \
    csv(airport_codes_path,
        sep="\t",
        header=True,
        inferSchema=True
       )
airport_codes_df.write.saveAsTable(f"{username}_airtraffic.airport_codes")
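As a sanity check, we can confirm the table is now registered in the metastore:

# List tables in the database to confirm airport_codes was created
spark.catalog.listTables(f'{username}_airtraffic')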
Read data from the table and get the number of airports by state.
airport_codes = spark.read.table("airport_codes")
type(airport_codes)
pyspark.sql.dataframe.DataFrame
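Note that spark.table("airport_codes") is an equivalent shorthand for spark.read.table("airport_codes"); both return a Data Frame.

# spark.table behaves the same as spark.read.table
airport_codes = spark.table("airport_codes")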
spark.sql('DESCRIBE FORMATTED airport_codes').show(100, False)
+----------------------------+------------------------------------------------------------------------------------------+-------+
|col_name |data_type |comment|
+----------------------------+------------------------------------------------------------------------------------------+-------+
|City |string |null |
|State |string |null |
|Country |string |null |
|IATA |string |null |
| | | |
|# Detailed Table Information| | |
|Database |itversity_airtraffic | |
|Table |airport_codes | |
|Owner |itversity | |
|Created Time |Sat Mar 13 09:25:02 EST 2021 | |
|Last Access |Wed Dec 31 19:00:00 EST 1969 | |
|Created By |Spark 2.4.7 | |
|Type |MANAGED | |
|Provider |parquet | |
|Table Properties |[transient_lastDdlTime=1615645502] | |
|Statistics |9048 bytes | |
|Location |hdfs://m01.itversity.com:9000/user/itversity/warehouse/itversity_airtraffic.db/airport_codes| |
|Serde Library |org.apache.hadoop.hive.ql.io.parquet.serde.ParquetHiveSerDe | |
|InputFormat |org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat | |
|OutputFormat |org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat | |
|Storage Properties |[serialization.format=1] | |
+----------------------------+------------------------------------------------------------------------------------------+-------+
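The column metadata shown by DESCRIBE FORMATTED is also available programmatically through the catalog API, which can be handier than parsing text output. A brief sketch:

# Column names and types via the catalog API
for column in spark.catalog.listColumns('airport_codes'):
    print(column.name, column.dataType)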
airport_codes.printSchema()
root
|-- City: string (nullable = true)
|-- State: string (nullable = true)
|-- Country: string (nullable = true)
|-- IATA: string (nullable = true)
airport_codes. \
    groupBy("state"). \
    count(). \
    show()
+-----+-----+
|state|count|
+-----+-----+
| BC| 22|
| SD| 7|
| NY| 18|
| NM| 9|
| NE| 9|
| MI| 18|
| NWT| 4|
| NC| 10|
| NJ| 3|
| MD| 3|
| WV| 8|
| MN| 8|
| IL| 12|
| ID| 6|
| IA| 8|
| MO| 8|
| SC| 6|
| VA| 7|
| PEI| 1|
| TN| 6|
+-----+-----+
only showing top 20 rows
from pyspark.sql.functions import count, lit, col
airport_codes. \
    groupBy("state"). \
    agg(count(lit(1)).alias('airport_count')). \
    orderBy(col('airport_count').desc()). \
    show()
+-----+-------------+
|state|airport_count|
+-----+-------------+
| CA| 29|
| TX| 26|
| AK| 25|
| BC| 22|
| NY| 18|
| ON| 18|
| MI| 18|
| FL| 18|
| MT| 14|
| PA| 13|
| PQ| 13|
| IL| 12|
| CO| 12|
| NC| 10|
| WY| 10|
| NE| 9|
| WI| 9|
| WA| 9|
| GA| 9|
| NM| 9|
+-----+-------------+
only showing top 20 rows
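We can also combine filter or where with the aggregation, as mentioned at the top. Here is a hedged sketch that counts airports for a single country; the literal 'USA' is an assumption about the values in Country, so verify with airport_codes.select('Country').distinct().show() first.

# Restrict to one country before aggregating; 'USA' is an assumed value
airport_codes. \
    filter(col('Country') == 'USA'). \
    groupBy('state'). \
    agg(count(lit(1)).alias('airport_count')). \
    orderBy(col('airport_count').desc()). \
    show()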