pyspark.sql.DataFrame.rollup
DataFrame.rollup(*cols)

Create a multi-dimensional rollup for the current DataFrame using the specified columns, allowing aggregations to be computed on them.

New in version 1.4.0.

Changed in version 3.4.0: Supports Spark Connect.

Parameters
----------
cols : str, Column or int
    The columns to roll up by. Each element should be a column name (string), a Column expression, or a 1-based column ordinal (int).

Returns
-------
GroupedData
    Rolled-up data based on the specified columns.
 
Notes
-----
A column ordinal starts from 1, which is different from the 0-based __getitem__().

Examples
--------
>>> df = spark.createDataFrame([("Alice", 2), ("Bob", 5)], schema=["name", "age"])

Example 1: Roll up by 'name' and count the rows in each dimension.

>>> df.rollup("name").count().orderBy("name").show()
+-----+-----+
| name|count|
+-----+-----+
| NULL|    2|
|Alice|    1|
|  Bob|    1|
+-----+-----+

Example 2: Roll up by 'name' and 'age' and count the rows in each dimension.

>>> df.rollup("name", df.age).count().orderBy("name", "age").show()
+-----+----+-----+
| name| age|count|
+-----+----+-----+
| NULL|NULL|    2|
|Alice|NULL|    1|
|Alice|   2|    1|
|  Bob|NULL|    1|
|  Bob|   5|    1|
+-----+----+-----+

Example 3: Roll up by 'name' and 'age' again, but using column ordinals.

>>> df.rollup(1, 2).count().orderBy(1, 2).show()
+-----+----+-----+
| name| age|count|
+-----+----+-----+
| NULL|NULL|    2|
|Alice|NULL|    1|
|Alice|   2|    1|
|  Bob|NULL|    1|
|  Bob|   5|    1|
+-----+----+-----+