Aggregate Functions Examples

Let us perform a few tasks to understand the usage of aggregate functions.

  • Get the details of all employees who are making more than the average salary of their department.

  • Get the details of all employees who are making less than 50% of the salary of the highest paid employee in their department.

Let us start the Spark context for this notebook so that we can execute the code provided. You can sign up for our 10-node state-of-the-art cluster/labs to learn Spark SQL using our unique integrated LMS.

from pyspark.sql import SparkSession

import getpass
username = getpass.getuser()

spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Windowing Functions'). \
    master('yarn'). \
    getOrCreate()

If you are going to use CLIs, you can access Spark SQL using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Pyspark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
  • First, let us understand the relevance of these functions using the employees data set.

employeesPath = '/public/hr_db/employees'
employees = spark. \
    read. \
    format('csv'). \
    option('sep', '\t'). \
    schema('''employee_id INT, 
              first_name STRING, 
              last_name STRING, 
              email STRING,
              phone_number STRING, 
              hire_date STRING, 
              job_id STRING, 
              salary FLOAT,
              commission_pct STRING,
              manager_id STRING, 
              department_id STRING
            '''). \
    load(employeesPath)
employees.show()
+-----------+----------+----------+--------+------------------+----------+--------+-------+--------------+----------+-------------+
|employee_id|first_name| last_name|   email|      phone_number| hire_date|  job_id| salary|commission_pct|manager_id|department_id|
+-----------+----------+----------+--------+------------------+----------+--------+-------+--------------+----------+-------------+
|        127|     James|    Landry| JLANDRY|      650.124.1334|1999-01-14|ST_CLERK| 2400.0|          null|       120|           50|
|        128|    Steven|    Markle| SMARKLE|      650.124.1434|2000-03-08|ST_CLERK| 2200.0|          null|       120|           50|
|        129|     Laura|    Bissot| LBISSOT|      650.124.5234|1997-08-20|ST_CLERK| 3300.0|          null|       121|           50|
|        130|     Mozhe|  Atkinson|MATKINSO|      650.124.6234|1997-10-30|ST_CLERK| 2800.0|          null|       121|           50|
|        131|     James|    Marlow| JAMRLOW|      650.124.7234|1997-02-16|ST_CLERK| 2500.0|          null|       121|           50|
|        132|        TJ|     Olson| TJOLSON|      650.124.8234|1999-04-10|ST_CLERK| 2100.0|          null|       121|           50|
|        133|     Jason|    Mallin| JMALLIN|      650.127.1934|1996-06-14|ST_CLERK| 3300.0|          null|       122|           50|
|        134|   Michael|    Rogers| MROGERS|      650.127.1834|1998-08-26|ST_CLERK| 2900.0|          null|       122|           50|
|        135|        Ki|       Gee|    KGEE|      650.127.1734|1999-12-12|ST_CLERK| 2400.0|          null|       122|           50|
|        136|     Hazel|Philtanker|HPHILTAN|      650.127.1634|2000-02-06|ST_CLERK| 2200.0|          null|       122|           50|
|        137|    Renske|    Ladwig| RLADWIG|      650.121.1234|1995-07-14|ST_CLERK| 3600.0|          null|       123|           50|
|        138|   Stephen|    Stiles| SSTILES|      650.121.2034|1997-10-26|ST_CLERK| 3200.0|          null|       123|           50|
|        139|      John|       Seo|    JSEO|      650.121.2019|1998-02-12|ST_CLERK| 2700.0|          null|       123|           50|
|        140|    Joshua|     Patel|  JPATEL|      650.121.1834|1998-04-06|ST_CLERK| 2500.0|          null|       123|           50|
|        141|    Trenna|      Rajs|   TRAJS|      650.121.8009|1995-10-17|ST_CLERK| 3500.0|          null|       124|           50|
|        142|    Curtis|    Davies| CDAVIES|      650.121.2994|1997-01-29|ST_CLERK| 3100.0|          null|       124|           50|
|        143|   Randall|     Matos|  RMATOS|      650.121.2874|1998-03-15|ST_CLERK| 2600.0|          null|       124|           50|
|        144|     Peter|    Vargas| PVARGAS|      650.121.2004|1998-07-09|ST_CLERK| 2500.0|          null|       124|           50|
|        145|      John|   Russell| JRUSSEL|011.44.1344.429268|1996-10-01|  SA_MAN|14000.0|          0.40|       100|           80|
|        146|     Karen|  Partners|KPARTNER|011.44.1344.467268|1997-01-05|  SA_MAN|13500.0|          0.30|       100|           80|
+-----------+----------+----------+--------+------------------+----------+--------+-------+--------------+----------+-------------+
only showing top 20 rows
employees.printSchema()
root
 |-- employee_id: integer (nullable = true)
 |-- first_name: string (nullable = true)
 |-- last_name: string (nullable = true)
 |-- email: string (nullable = true)
 |-- phone_number: string (nullable = true)
 |-- hire_date: string (nullable = true)
 |-- job_id: string (nullable = true)
 |-- salary: float (nullable = true)
 |-- commission_pct: string (nullable = true)
 |-- manager_id: string (nullable = true)
 |-- department_id: string (nullable = true)
employees.count()
107
from pyspark.sql.functions import col
employees. \
    select('employee_id', 
           col('department_id').cast('int').alias('department_id'), 
           'salary'
          ). \
    orderBy('department_id', 'salary'). \
    show()
+-----------+-------------+-------+
|employee_id|department_id| salary|
+-----------+-------------+-------+
|        178|         null| 7000.0|
|        200|           10| 4400.0|
|        202|           20| 6000.0|
|        201|           20|13000.0|
|        119|           30| 2500.0|
|        118|           30| 2600.0|
|        117|           30| 2800.0|
|        116|           30| 2900.0|
|        115|           30| 3100.0|
|        114|           30|11000.0|
|        203|           40| 6500.0|
|        132|           50| 2100.0|
|        128|           50| 2200.0|
|        136|           50| 2200.0|
|        135|           50| 2400.0|
|        127|           50| 2400.0|
|        131|           50| 2500.0|
|        140|           50| 2500.0|
|        191|           50| 2500.0|
|        144|           50| 2500.0|
+-----------+-------------+-------+
only showing top 20 rows
  • Let us say we want to compare each individual's salary with the department-wise salary expense.

  • Here is one approach, which requires a self join:

    • Compute the department-wise expense using groupBy and agg.

    • Join with employees again on department_id.

from pyspark.sql.functions import sum, col
department_expense = employees. \
    groupBy('department_id'). \
    agg(sum('salary').alias('expense'))
department_expense.show()
+-------------+--------+
|department_id| expense|
+-------------+--------+
|           30| 24900.0|
|          110| 20300.0|
|          100| 51600.0|
|           70| 10000.0|
|           90| 58000.0|
|           60| 28800.0|
|           40|  6500.0|
|           20| 19000.0|
|           10|  4400.0|
|           80|304500.0|
|         null|  7000.0|
|           50|156400.0|
+-------------+--------+
employees. \
    select('employee_id', 'department_id', 'salary'). \
    join(department_expense, employees.department_id == department_expense.department_id). \
    orderBy(employees.department_id, col('salary')). \
    show()
+-----------+-------------+-------+-------------+--------+
|employee_id|department_id| salary|department_id| expense|
+-----------+-------------+-------+-------------+--------+
|        200|           10| 4400.0|           10|  4400.0|
|        113|          100| 6900.0|          100| 51600.0|
|        111|          100| 7700.0|          100| 51600.0|
|        112|          100| 7800.0|          100| 51600.0|
|        110|          100| 8200.0|          100| 51600.0|
|        109|          100| 9000.0|          100| 51600.0|
|        108|          100|12000.0|          100| 51600.0|
|        206|          110| 8300.0|          110| 20300.0|
|        205|          110|12000.0|          110| 20300.0|
|        202|           20| 6000.0|           20| 19000.0|
|        201|           20|13000.0|           20| 19000.0|
|        119|           30| 2500.0|           30| 24900.0|
|        118|           30| 2600.0|           30| 24900.0|
|        117|           30| 2800.0|           30| 24900.0|
|        116|           30| 2900.0|           30| 24900.0|
|        115|           30| 3100.0|           30| 24900.0|
|        114|           30|11000.0|           30| 24900.0|
|        203|           40| 6500.0|           40|  6500.0|
|        132|           50| 2100.0|           50|156400.0|
|        136|           50| 2200.0|           50|156400.0|
+-----------+-------------+-------+-------------+--------+
only showing top 20 rows

However, this approach is not very efficient and is overly complicated. Windowing functions simplify the logic and also run efficiently.

Now let us get into the details related to Windowing functions.

  • The main package is pyspark.sql.window.

  • It has classes such as Window and WindowSpec.

  • Window has APIs such as partitionBy, orderBy, etc.

  • These APIs (such as partitionBy) return a WindowSpec object. We can pass the WindowSpec object to over on functions such as rank(), dense_rank(), sum(), etc.

  • Syntax: sum().over(spec) where spec = Window.partitionBy('ColumnName')

from pyspark.sql import window
help(window)
Help on module pyspark.sql.window in pyspark.sql:

NAME
    pyspark.sql.window

DESCRIPTION
    # Licensed to the Apache Software Foundation (ASF) under one or more
    # contributor license agreements.  See the NOTICE file distributed with
    # this work for additional information regarding copyright ownership.
    # The ASF licenses this file to You under the Apache License, Version 2.0
    # (the "License"); you may not use this file except in compliance with
    # the License.  You may obtain a copy of the License at
    #
    #    http://www.apache.org/licenses/LICENSE-2.0
    #
    # Unless required by applicable law or agreed to in writing, software
    # distributed under the License is distributed on an "AS IS" BASIS,
    # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
    # See the License for the specific language governing permissions and
    # limitations under the License.
    #

CLASSES
    builtins.object
        Window
        WindowSpec
    
    class Window(builtins.object)
     |  Utility functions for defining window in DataFrames.
     |  
     |  For example:
     |  
     |  >>> # ORDER BY date ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW
     |  >>> window = Window.orderBy("date").rowsBetween(Window.unboundedPreceding, Window.currentRow)
     |  
     |  >>> # PARTITION BY country ORDER BY date RANGE BETWEEN 3 PRECEDING AND 3 FOLLOWING
     |  >>> window = Window.orderBy("date").partitionBy("country").rangeBetween(-3, 3)
     |  
     |  .. note:: When ordering is not defined, an unbounded window frame (rowFrame,
     |       unboundedPreceding, unboundedFollowing) is used by default. When ordering is defined,
     |       a growing window frame (rangeFrame, unboundedPreceding, currentRow) is used by default.
     |  
     |  .. note:: Experimental
     |  
     |  .. versionadded:: 1.4
     |  
     |  Static methods defined here:
     |  
     |  orderBy(*cols)
     |      Creates a :class:`WindowSpec` with the ordering defined.
     |      
     |      .. versionadded:: 1.4
     |  
     |  partitionBy(*cols)
     |      Creates a :class:`WindowSpec` with the partitioning defined.
     |      
     |      .. versionadded:: 1.4
     |  
     |  rangeBetween(start, end)
     |      Creates a :class:`WindowSpec` with the frame boundaries defined,
     |      from `start` (inclusive) to `end` (inclusive).
     |      
     |      Both `start` and `end` are relative from the current row. For example,
     |      "0" means "current row", while "-1" means one off before the current row,
     |      and "5" means the five off after the current row.
     |      
     |      We recommend users use ``Window.unboundedPreceding``, ``Window.unboundedFollowing``,
     |      and ``Window.currentRow`` to specify special boundary values, rather than using integral
     |      values directly.
     |      
     |      :param start: boundary start, inclusive.
     |                    The frame is unbounded if this is ``Window.unboundedPreceding``, or
     |                    any value less than or equal to max(-sys.maxsize, -9223372036854775808).
     |      :param end: boundary end, inclusive.
     |                  The frame is unbounded if this is ``Window.unboundedFollowing``, or
     |                  any value greater than or equal to min(sys.maxsize, 9223372036854775807).
     |      
     |      .. versionadded:: 2.1
     |  
     |  rowsBetween(start, end)
     |      Creates a :class:`WindowSpec` with the frame boundaries defined,
     |      from `start` (inclusive) to `end` (inclusive).
     |      
     |      Both `start` and `end` are relative positions from the current row.
     |      For example, "0" means "current row", while "-1" means the row before
     |      the current row, and "5" means the fifth row after the current row.
     |      
     |      We recommend users use ``Window.unboundedPreceding``, ``Window.unboundedFollowing``,
     |      and ``Window.currentRow`` to specify special boundary values, rather than using integral
     |      values directly.
     |      
     |      :param start: boundary start, inclusive.
     |                    The frame is unbounded if this is ``Window.unboundedPreceding``, or
     |                    any value less than or equal to -9223372036854775808.
     |      :param end: boundary end, inclusive.
     |                  The frame is unbounded if this is ``Window.unboundedFollowing``, or
     |                  any value greater than or equal to 9223372036854775807.
     |      
     |      .. versionadded:: 2.1
     |  
     |  ----------------------------------------------------------------------
     |  Data descriptors defined here:
     |  
     |  __dict__
     |      dictionary for instance variables (if defined)
     |  
     |  __weakref__
     |      list of weak references to the object (if defined)
     |  
     |  ----------------------------------------------------------------------
     |  Data and other attributes defined here:
     |  
     |  currentRow = 0
     |  
     |  unboundedFollowing = 9223372036854775807
     |  
     |  unboundedPreceding = -9223372036854775808
    
    class WindowSpec(builtins.object)
     |  A window specification that defines the partitioning, ordering,
     |  and frame boundaries.
     |  
     |  Use the static methods in :class:`Window` to create a :class:`WindowSpec`.
     |  
     |  .. note:: Experimental
     |  
     |  .. versionadded:: 1.4
     |  
     |  Methods defined here:
     |  
     |  __init__(self, jspec)
     |      Initialize self.  See help(type(self)) for accurate signature.
     |  
     |  orderBy(self, *cols)
     |      Defines the ordering columns in a :class:`WindowSpec`.
     |      
     |      :param cols: names of columns or expressions
     |      
     |      .. versionadded:: 1.4
     |  
     |  partitionBy(self, *cols)
     |      Defines the partitioning columns in a :class:`WindowSpec`.
     |      
     |      :param cols: names of columns or expressions
     |      
     |      .. versionadded:: 1.4
     |  
     |  rangeBetween(self, start, end)
     |      Defines the frame boundaries, from `start` (inclusive) to `end` (inclusive).
     |      
     |      Both `start` and `end` are relative from the current row. For example,
     |      "0" means "current row", while "-1" means one off before the current row,
     |      and "5" means the five off after the current row.
     |      
     |      We recommend users use ``Window.unboundedPreceding``, ``Window.unboundedFollowing``,
     |      and ``Window.currentRow`` to specify special boundary values, rather than using integral
     |      values directly.
     |      
     |      :param start: boundary start, inclusive.
     |                    The frame is unbounded if this is ``Window.unboundedPreceding``, or
     |                    any value less than or equal to max(-sys.maxsize, -9223372036854775808).
     |      :param end: boundary end, inclusive.
     |                  The frame is unbounded if this is ``Window.unboundedFollowing``, or
     |                  any value greater than or equal to min(sys.maxsize, 9223372036854775807).
     |      
     |      .. versionadded:: 1.4
     |  
     |  rowsBetween(self, start, end)
     |      Defines the frame boundaries, from `start` (inclusive) to `end` (inclusive).
     |      
     |      Both `start` and `end` are relative positions from the current row.
     |      For example, "0" means "current row", while "-1" means the row before
     |      the current row, and "5" means the fifth row after the current row.
     |      
     |      We recommend users use ``Window.unboundedPreceding``, ``Window.unboundedFollowing``,
     |      and ``Window.currentRow`` to specify special boundary values, rather than using integral
     |      values directly.
     |      
     |      :param start: boundary start, inclusive.
     |                    The frame is unbounded if this is ``Window.unboundedPreceding``, or
     |                    any value less than or equal to max(-sys.maxsize, -9223372036854775808).
     |      :param end: boundary end, inclusive.
     |                  The frame is unbounded if this is ``Window.unboundedFollowing``, or
     |                  any value greater than or equal to min(sys.maxsize, 9223372036854775807).
     |      
     |      .. versionadded:: 1.4
     |  
     |  ----------------------------------------------------------------------
     |  Data descriptors defined here:
     |  
     |  __dict__
     |      dictionary for instance variables (if defined)
     |  
     |  __weakref__
     |      list of weak references to the object (if defined)

DATA
    __all__ = ['Window', 'WindowSpec']

FILE
    /opt/spark-2.4.7-bin-hadoop2.7/python/pyspark/sql/window.py

Functions

The following APIs or functions can be used with over on a window specification.

Aggregate Functions

  • sum
  • avg
  • min
  • max

Ranking Functions

  • rank
  • dense_rank
  • percent_rank
  • row_number
  • ntile

Analytic Functions

  • cume_dist
  • first
  • last
  • lead
  • lag