Common String Manipulation Functions

Let us go through some of the common string manipulation functions available in PySpark as part of this topic.

  • Concatenating strings

    • We can pass a variable number of string columns to the concat function.

    • It returns a single string per row, with all the passed values concatenated.

    • If we have to concatenate a literal in between, we have to use the lit function.

  • Case Conversion and Length

    • Convert all the alphabetic characters in a string to uppercase - upper

    • Convert all the alphabetic characters in a string to lowercase - lower

    • Convert the first character of each word in a string to uppercase - initcap

    • Get the number of characters in a string - length

    • All four functions take a column type argument.

Let us start the Spark context for this Notebook so that we can execute the code provided.

from pyspark.sql import SparkSession

import getpass
username = getpass.getuser()

spark = SparkSession. \
    builder. \
    config('spark.ui.port', '0'). \
    config("spark.sql.warehouse.dir", f"/user/{username}/warehouse"). \
    enableHiveSupport(). \
    appName(f'{username} | Python - Processing Column Data'). \
    master('yarn'). \
    getOrCreate()
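
Once the session is created, a simple query can confirm that it is up and running (any lightweight query would do):

spark.sql('SELECT current_date').show()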

If you are going to use CLIs, you can launch Spark SQL using one of the following three approaches.

Using Spark SQL

spark2-sql \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Scala

spark2-shell \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse

Using Pyspark

pyspark2 \
    --master yarn \
    --conf spark.ui.port=0 \
    --conf spark.sql.warehouse.dir=/user/${USER}/warehouse
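
Before getting into the tasks, here is a quick preview of the functions listed above, applied to literal values via a dummy one-row Data Frame (a minimal sketch; the aliases greeting, u, l, i and len are made up for illustration):

from pyspark.sql.functions import concat, lit, upper, lower, initcap, length

# Apply each function to literals using a one-row Data Frame from spark.range
spark.range(1). \
    select(
        concat(lit("Hello"), lit(", "), lit("World")).alias("greeting"),
        upper(lit("hello")).alias("u"),
        lower(lit("HELLO")).alias("l"),
        initcap(lit("hello world")).alias("i"),
        length(lit("hello")).alias("len")
    ). \
    show()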

Tasks - Concatenating Strings

Let us perform a few tasks to understand more about the concat function.

  • Let’s create a Data Frame and explore the concat function.

employees = [(1, "Scott", "Tiger", 1000.0, 
                      "united states", "+1 123 456 7890", "123 45 6789"
                     ),
                     (2, "Henry", "Ford", 1250.0, 
                      "India", "+91 234 567 8901", "456 78 9123"
                     ),
                     (3, "Nick", "Junior", 750.0, 
                      "united KINGDOM", "+44 111 111 1111", "222 33 4444"
                     ),
                     (4, "Bill", "Gomes", 1500.0, 
                      "AUSTRALIA", "+61 987 654 3210", "789 12 6118"
                     )
                ]
employeesDF = spark. \
    createDataFrame(employees,
                    schema="""employee_id INT, first_name STRING, 
                    last_name STRING, salary FLOAT, nationality STRING,
                    phone_number STRING, ssn STRING"""
                   )
employeesDF.show()
+-----------+----------+---------+------+--------------+----------------+-----------+
|employee_id|first_name|last_name|salary|   nationality|    phone_number|        ssn|
+-----------+----------+---------+------+--------------+----------------+-----------+
|          1|     Scott|    Tiger|1000.0| united states| +1 123 456 7890|123 45 6789|
|          2|     Henry|     Ford|1250.0|         India|+91 234 567 8901|456 78 9123|
|          3|      Nick|   Junior| 750.0|united KINGDOM|+44 111 111 1111|222 33 4444|
|          4|      Bill|    Gomes|1500.0|     AUSTRALIA|+61 987 654 3210|789 12 6118|
+-----------+----------+---------+------+--------------+----------------+-----------+
  • Create a new column by the name full_name by concatenating first_name and last_name.

from pyspark.sql.functions import concat
employeesDF. \
    withColumn("full_name", concat("first_name", "last_name")). \
    show()
+-----------+----------+---------+------+--------------+----------------+-----------+----------+
|employee_id|first_name|last_name|salary|   nationality|    phone_number|        ssn| full_name|
+-----------+----------+---------+------+--------------+----------------+-----------+----------+
|          1|     Scott|    Tiger|1000.0| united states| +1 123 456 7890|123 45 6789|ScottTiger|
|          2|     Henry|     Ford|1250.0|         India|+91 234 567 8901|456 78 9123| HenryFord|
|          3|      Nick|   Junior| 750.0|united KINGDOM|+44 111 111 1111|222 33 4444|NickJunior|
|          4|      Bill|    Gomes|1500.0|     AUSTRALIA|+61 987 654 3210|789 12 6118| BillGomes|
+-----------+----------+---------+------+--------------+----------------+-----------+----------+
  • Improve the result by adding a comma followed by a space between first_name and last_name.

from pyspark.sql.functions import concat, lit
employeesDF. \
    withColumn("full_name", concat("first_name", lit(", "), "last_name")). \
    show()
+-----------+----------+---------+------+--------------+----------------+-----------+------------+
|employee_id|first_name|last_name|salary|   nationality|    phone_number|        ssn|   full_name|
+-----------+----------+---------+------+--------------+----------------+-----------+------------+
|          1|     Scott|    Tiger|1000.0| united states| +1 123 456 7890|123 45 6789|Scott, Tiger|
|          2|     Henry|     Ford|1250.0|         India|+91 234 567 8901|456 78 9123| Henry, Ford|
|          3|      Nick|   Junior| 750.0|united KINGDOM|+44 111 111 1111|222 33 4444|Nick, Junior|
|          4|      Bill|    Gomes|1500.0|     AUSTRALIA|+61 987 654 3210|789 12 6118| Bill, Gomes|
+-----------+----------+---------+------+--------------+----------------+-----------+------------+
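As a side note, Spark also provides the concat_ws function, which takes the separator as its first argument and avoids the separate lit call (a minimal sketch of the same task):

from pyspark.sql.functions import concat_ws

# concat_ws places the separator between every pair of values
employeesDF. \
    withColumn("full_name", concat_ws(", ", "first_name", "last_name")). \
    show()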

Tasks - Case Conversion and Length

Let us perform a few tasks to understand the behavior of the case conversion functions and length.

  • Use employees data and create a Data Frame.

  • Apply all four functions on nationality and see the results.

employees = [(1, "Scott", "Tiger", 1000.0, 
                      "united states", "+1 123 456 7890", "123 45 6789"
                     ),
                     (2, "Henry", "Ford", 1250.0, 
                      "India", "+91 234 567 8901", "456 78 9123"
                     ),
                     (3, "Nick", "Junior", 750.0, 
                      "united KINGDOM", "+44 111 111 1111", "222 33 4444"
                     ),
                     (4, "Bill", "Gomes", 1500.0, 
                      "AUSTRALIA", "+61 987 654 3210", "789 12 6118"
                     )
                ]
employeesDF = spark.createDataFrame(employees). \
    toDF("employee_id", "first_name",
         "last_name", "salary",
         "nationality", "phone_number",
         "ssn"
        )
from pyspark.sql.functions import col, lower, upper, initcap, length
employeesDF. \
  select("employee_id", "nationality"). \
  withColumn("nationality_upper", upper(col("nationality"))). \
  withColumn("nationality_lower", lower(col("nationality"))). \
  withColumn("nationality_initcap", initcap(col("nationality"))). \
  withColumn("nationality_length", length(col("nationality"))). \
  show()
+-----------+--------------+-----------------+-----------------+-------------------+------------------+
|employee_id|   nationality|nationality_upper|nationality_lower|nationality_initcap|nationality_length|
+-----------+--------------+-----------------+-----------------+-------------------+------------------+
|          1| united states|    UNITED STATES|    united states|      United States|                13|
|          2|         India|            INDIA|            india|              India|                 5|
|          3|united KINGDOM|   UNITED KINGDOM|   united kingdom|     United Kingdom|                14|
|          4|     AUSTRALIA|        AUSTRALIA|        australia|          Australia|                 9|
+-----------+--------------+-----------------+-----------------+-------------------+------------------+
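
The same functions are also available in Spark SQL, so the equivalent logic can be expressed as a query (a minimal sketch; employees_v is a hypothetical temporary view name):

# Register the Data Frame as a temporary view so it can be queried with SQL
employeesDF.createOrReplaceTempView("employees_v")

spark.sql("""
    SELECT employee_id, nationality,
           upper(nationality) AS nationality_upper,
           lower(nationality) AS nationality_lower,
           initcap(nationality) AS nationality_initcap,
           length(nationality) AS nationality_length
    FROM employees_v
""").show()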