Create Dummy Data Frame
Let us go ahead and create a Data Frame using dummy data to explore Spark functions.
Let us start the Spark context for this Notebook so that we can execute the code provided. You can sign up for our 10-node state-of-the-art cluster/labs to learn Spark SQL using our unique integrated LMS.
from pyspark.sql import SparkSession
import getpass

# Derive the OS user to build a user-specific warehouse directory and app name
username = getpass.getuser()

spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Processing Column Data'). \
    master('yarn'). \
    getOrCreate()
If you are going to use CLIs, you can launch Spark SQL using one of the following three approaches.
Using Spark SQL
spark2-sql \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using Scala
spark2-shell \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
Using PySpark
pyspark2 \
--master yarn \
--conf spark.ui.port=0 \
--conf spark.sql.warehouse.dir=/user/${USER}/warehouse
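Whichever CLI you choose, it is worth validating the session before moving on. For example, in the Spark SQL CLI you can run a simple query (a minimal sketch; the exact output format depends on your Spark version):

SELECT current_date;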
l = [('X', )]

# Similar to Oracle's dual table: a dummy view with a single
# CHAR(1) column and one record ('X')
df = spark.createDataFrame(l, "dummy STRING")
df.printSchema()
root
|-- dummy: string (nullable = true)
df.show()
+-----+
|dummy|
+-----+
| X|
+-----+
Once the Data Frame is created, we can use it to understand how to use functions. For example, to get the current date, we can run df.select(current_date()).show().
It is similar to the Oracle query SELECT sysdate FROM dual.
l = [('X', )]
df = spark.createDataFrame(l, "dummy STRING")
from pyspark.sql.functions import current_date
df.select(current_date()). \
    show()
+--------------+
|current_date()|
+--------------+
| 2021-02-24|
+--------------+
df.select(current_date().alias("current_date")). \
    show()
+------------+
|current_date|
+------------+
| 2021-02-24|
+------------+
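If you prefer SQL, the same one-row result can be produced with spark.sql, mirroring the Oracle-style query against dual (a minimal sketch using the spark session created above):

spark.sql("SELECT current_date() AS current_date").show()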
Here is another example of creating a Data Frame using a collection of employees. We will be using this Data Frame to explore all the important functions to process column data in detail.
employees = [
(1, "Scott", "Tiger", 1000.0,
"united states", "+1 123 456 7890", "123 45 6789"
),
(2, "Henry", "Ford", 1250.0,
"India", "+91 234 567 8901", "456 78 9123"
),
(3, "Nick", "Junior", 750.0,
"united KINGDOM", "+44 111 111 1111", "222 33 4444"
),
(4, "Bill", "Gomes", 1500.0,
"AUSTRALIA", "+61 987 654 3210", "789 12 6118"
)
]
len(employees)
4
employeesDF = spark. \
    createDataFrame(employees,
        schema="""employee_id INT, first_name STRING,
                  last_name STRING, salary FLOAT, nationality STRING,
                  phone_number STRING, ssn STRING"""
    )
employeesDF.printSchema()
root
|-- employee_id: integer (nullable = true)
|-- first_name: string (nullable = true)
|-- last_name: string (nullable = true)
|-- salary: float (nullable = true)
|-- nationality: string (nullable = true)
|-- phone_number: string (nullable = true)
|-- ssn: string (nullable = true)
employeesDF.show(truncate=False)
+-----------+----------+---------+------+--------------+----------------+-----------+
|employee_id|first_name|last_name|salary|nationality |phone_number |ssn |
+-----------+----------+---------+------+--------------+----------------+-----------+
|1 |Scott |Tiger |1000.0|united states |+1 123 456 7890 |123 45 6789|
|2 |Henry |Ford |1250.0|India |+91 234 567 8901|456 78 9123|
|3 |Nick |Junior |750.0 |united KINGDOM|+44 111 111 1111|222 33 4444|
|4 |Bill |Gomes |1500.0|AUSTRALIA |+61 987 654 3210|789 12 6118|
+-----------+----------+---------+------+--------------+----------------+-----------+
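As a quick preview of the column-processing functions covered later, here is a minimal sketch that applies a standard function from pyspark.sql.functions to one of these columns. upper and col are part of the public API; the alias name nationality_upper is just an illustrative choice:

from pyspark.sql.functions import col, upper

# Standardize nationality to upper case; alias keeps the output column readable
employeesDF. \
    select("employee_id", upper(col("nationality")).alias("nationality_upper")). \
    show()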