Inferring SchemaΒΆ

Let us understand how we can quickly get the schema using one file and apply it to the other files.

  • We can pass the file name pattern to spark.read.csv and read all the data in files under hdfs://public/airlines_all/airlines into a Data Frame.

  • We can use options such as header and inferSchema to assign column names and data types.

  • However, inferSchema will end up going through the entire data to determine the schema. We can use samplingRatio to process only a fraction of the data and infer the schema from that sample.

  • If the data in all the files has the same structure, we should be able to get the schema using one file and then apply it to the others.

  • In our airlines data, the schema is consistent across all the files, and hence we should be able to get the schema by going through one file and apply it to the entire dataset.

Let us start the Spark context for this Notebook so that we can execute the code provided.

from pyspark.sql import SparkSession

import getpass
username = getpass.getuser()

spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Data Processing - Overview'). \
    master('yarn'). \
    getOrCreate()

If you are going to use CLIs, you can launch Spark using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Pyspark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

airlines_part_00000 = spark.read. \
    csv("/public/airlines_all/airlines/part-00000",
        header=True,
        inferSchema=True
       )
type(airlines_part_00000)
airlines_part_00000.show(truncate=False)
airlines_part_00000.printSchema()
airlines_part_00000.schema
type(airlines_part_00000.schema)
airlines_schema = spark.read. \
    csv("/public/airlines_all/airlines/part-00000",
        header=True,
        inferSchema=True
       ). \
    schema
type(airlines_schema)
help(spark.read.csv)
airlines = spark.read. \
    schema(airlines_schema). \
    csv("/public/airlines_all/airlines/part*",
        header=True
       )
airlines = spark.read. \
    csv("/public/airlines_all/airlines/part*",
        schema=airlines_schema,
        header=True
       )
help(airlines)
airlines.show()
airlines.printSchema()
airlines.count()