LinearRegression

class pyspark.ml.regression.LinearRegression(*, featuresCol: str = 'features', labelCol: str = 'label', predictionCol: str = 'prediction', maxIter: int = 100, regParam: float = 0.0, elasticNetParam: float = 0.0, tol: float = 1e-06, fitIntercept: bool = True, standardization: bool = True, solver: str = 'auto', weightCol: Optional[str] = None, aggregationDepth: int = 2, loss: str = 'squaredError', epsilon: float = 1.35, maxBlockSizeInMB: float = 0.0)
Linear regression.

The learning objective is to minimize the specified loss function, with regularization. This supports two kinds of loss:

- squaredError (a.k.a. squared loss)
- huber (a hybrid of squared error for relatively small errors and absolute error for relatively large ones; the scale parameter is estimated from the training data)

This supports multiple types of regularization:

- none (a.k.a. ordinary least squares)
- L2 (ridge regression)
- L1 (Lasso)
- L2 + L1 (elastic net)
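Concretely, the regularization type is selected through the regParam and elasticNetParam constructor arguments, and the loss through loss and epsilon. The following sketch is illustrative only (it uses the default "features"/"label" column names and an assumed training DataFrame named train_df):

from pyspark.ml.regression import LinearRegression

# L2 (ridge): elasticNetParam = 0.0, regParam > 0
ridge = LinearRegression(regParam=0.1, elasticNetParam=0.0)

# L1 (Lasso): elasticNetParam = 1.0, regParam > 0
lasso = LinearRegression(regParam=0.1, elasticNetParam=1.0)

# Elastic net (L2 + L1): 0 < elasticNetParam < 1
elastic = LinearRegression(regParam=0.1, elasticNetParam=0.5)

# Huber loss: only none and L2 regularization are supported,
# so elasticNetParam must stay at 0.0
robust = LinearRegression(loss="huber", epsilon=1.35, regParam=0.1)

# model = ridge.fit(train_df)  # train_df is a hypothetical DataFrame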
New in version 1.4.0.

Notes

Fitting with huber loss only supports none and L2 regularization.

Examples

>>> from pyspark.ml.linalg import Vectors
>>> df = spark.createDataFrame([
...     (1.0, 2.0, Vectors.dense(1.0)),
...     (0.0, 2.0, Vectors.sparse(1, [], []))], ["label", "weight", "features"])
>>> lr = LinearRegression(regParam=0.0, solver="normal", weightCol="weight")
>>> lr.setMaxIter(5)
LinearRegression...
>>> lr.getMaxIter()
5
>>> lr.setRegParam(0.1)
LinearRegression...
>>> lr.getRegParam()
0.1
>>> lr.setRegParam(0.0)
LinearRegression...
>>> model = lr.fit(df)
>>> model.setFeaturesCol("features")
LinearRegressionModel...
>>> model.setPredictionCol("newPrediction")
LinearRegressionModel...
>>> model.getMaxIter()
5
>>> model.getMaxBlockSizeInMB()
0.0
>>> test0 = spark.createDataFrame([(Vectors.dense(-1.0),)], ["features"])
>>> abs(model.predict(test0.head().features) - (-1.0)) < 0.001
True
>>> abs(model.transform(test0).head().newPrediction - (-1.0)) < 0.001
True
>>> abs(model.coefficients[0] - 1.0) < 0.001
True
>>> abs(model.intercept - 0.0) < 0.001
True
>>> test1 = spark.createDataFrame([(Vectors.sparse(1, [0], [1.0]),)], ["features"])
>>> abs(model.transform(test1).head().newPrediction - 1.0) < 0.001
True
>>> lr.setParams(featuresCol="vector")
LinearRegression...
>>> lr_path = temp_path + "/lr"
>>> lr.save(lr_path)
>>> lr2 = LinearRegression.load(lr_path)
>>> lr2.getMaxIter()
5
>>> model_path = temp_path + "/lr_model"
>>> model.save(model_path)
>>> model2 = LinearRegressionModel.load(model_path)
>>> model.coefficients[0] == model2.coefficients[0]
True
>>> model.intercept == model2.intercept
True
>>> model.transform(test0).take(1) == model2.transform(test0).take(1)
True
>>> model.numFeatures
1
>>> model.write().format("pmml").save(model_path + "_2")

Methods

- clear(param): Clears a param from the param map if it has been explicitly set.
- copy([extra]): Creates a copy of this instance with the same uid and some extra params.
- explainParam(param): Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.
- explainParams(): Returns the documentation of all params with their optional default values and user-supplied values.
- extractParamMap([extra]): Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.
- fit(dataset[, params]): Fits a model to the input dataset with optional parameters.
- fitMultiple(dataset, paramMaps): Fits a model to the input dataset for each param map in paramMaps.
- getAggregationDepth(): Gets the value of aggregationDepth or its default value.
- getElasticNetParam(): Gets the value of elasticNetParam or its default value.
- getEpsilon(): Gets the value of epsilon or its default value.
- getFeaturesCol(): Gets the value of featuresCol or its default value.
- getFitIntercept(): Gets the value of fitIntercept or its default value.
- getLabelCol(): Gets the value of labelCol or its default value.
- getLoss(): Gets the value of loss or its default value.
- getMaxBlockSizeInMB(): Gets the value of maxBlockSizeInMB or its default value.
- getMaxIter(): Gets the value of maxIter or its default value.
- getOrDefault(param): Gets the value of a param in the user-supplied param map or its default value.
- getParam(paramName): Gets a param by its name.
- getPredictionCol(): Gets the value of predictionCol or its default value.
- getRegParam(): Gets the value of regParam or its default value.
- getSolver(): Gets the value of solver or its default value.
- getStandardization(): Gets the value of standardization or its default value.
- getTol(): Gets the value of tol or its default value.
- getWeightCol(): Gets the value of weightCol or its default value.
- hasDefault(param): Checks whether a param has a default value.
- hasParam(paramName): Tests whether this instance contains a param with a given (string) name.
- isDefined(param): Checks whether a param is explicitly set by user or has a default value.
- isSet(param): Checks whether a param is explicitly set by user.
- load(path): Reads an ML instance from the input path, a shortcut of read().load(path).
- read(): Returns an MLReader instance for this class.
- save(path): Save this ML instance to the given path, a shortcut of 'write().save(path)'.
- set(param, value): Sets a parameter in the embedded param map.
- setAggregationDepth(value): Sets the value of aggregationDepth.
- setElasticNetParam(value): Sets the value of elasticNetParam.
- setEpsilon(value): Sets the value of epsilon.
- setFeaturesCol(value): Sets the value of featuresCol.
- setFitIntercept(value): Sets the value of fitIntercept.
- setLabelCol(value): Sets the value of labelCol.
- setLoss(value): Sets the value of loss.
- setMaxBlockSizeInMB(value): Sets the value of maxBlockSizeInMB.
- setMaxIter(value): Sets the value of maxIter.
- setParams(self, *[, featuresCol, labelCol, …]): Sets params for linear regression.
- setPredictionCol(value): Sets the value of predictionCol.
- setRegParam(value): Sets the value of regParam.
- setSolver(value): Sets the value of solver.
- setStandardization(value): Sets the value of standardization.
- setTol(value): Sets the value of tol.
- setWeightCol(value): Sets the value of weightCol.
- write(): Returns an MLWriter instance for this ML instance.

Attributes

- params: Returns all params ordered by name.

Methods Documentation
clear(param: pyspark.ml.param.Param) → None

Clears a param from the param map if it has been explicitly set.

copy(extra: Optional[ParamMap] = None) → JP

Creates a copy of this instance with the same uid and some extra params. This implementation first calls Params.copy and then makes a copy of the companion Java pipeline component with extra params. So both the Python wrapper and the Java pipeline component get copied.

Parameters
    extra : dict, optional
        Extra parameters to copy to the new instance

Returns
    JavaParams
        Copy of this instance
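As a brief illustrative sketch of copy() with an extra param override (the particular override chosen here is arbitrary):

lr = LinearRegression(maxIter=10)
# The copy keeps the same uid but applies the extra param value.
lr_copy = lr.copy({lr.maxIter: 50})
assert lr_copy.uid == lr.uid
assert lr_copy.getMaxIter() == 50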
explainParam(param: Union[str, pyspark.ml.param.Param]) → str

Explains a single param and returns its name, doc, and optional default value and user-supplied value in a string.

explainParams() → str

Returns the documentation of all params with their optional default values and user-supplied values.

extractParamMap(extra: Optional[ParamMap] = None) → ParamMap

Extracts the embedded default param values and user-supplied values, and then merges them with extra values from input into a flat param map, where the latter value is used if there exist conflicts, i.e., with ordering: default param values < user-supplied values < extra.

Parameters
    extra : dict, optional
        extra param values

Returns
    dict
        merged param map
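To make the merge ordering concrete, a small sketch (the extra override is purely illustrative):

lr = LinearRegression()                     # maxIter default is 100
lr.setMaxIter(5)                            # user-supplied value overrides the default
pm = lr.extractParamMap({lr.maxIter: 50})   # extra overrides the user-supplied value
print(pm[lr.maxIter])                       # 50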
fit(dataset: pyspark.sql.dataframe.DataFrame, params: Union[ParamMap, List[ParamMap], Tuple[ParamMap], None] = None) → Union[M, List[M]]

Fits a model to the input dataset with optional parameters.

New in version 1.3.0.

Parameters
    dataset : pyspark.sql.DataFrame
        input dataset.
    params : dict or list or tuple, optional
        an optional param map that overrides embedded params. If a list/tuple of param maps is given, this calls fit on each param map and returns a list of models.

Returns
    Transformer or a list of Transformer
        fitted model(s)
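A sketch of the two calling conventions, assuming df is a DataFrame with "features" and "label" columns:

lr = LinearRegression(maxIter=10)

# Single fit, overriding a param for this call only
model = lr.fit(df, {lr.regParam: 0.1})

# A list of param maps returns one fitted model per map
param_maps = [{lr.regParam: 0.0}, {lr.regParam: 0.5}]
models = lr.fit(df, param_maps)
print(len(models))  # 2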
fitMultiple(dataset: pyspark.sql.dataframe.DataFrame, paramMaps: Sequence[ParamMap]) → Iterator[Tuple[int, M]]

Fits a model to the input dataset for each param map in paramMaps.

New in version 2.3.0.

Parameters
    dataset : pyspark.sql.DataFrame
        input dataset.
    paramMaps : collections.abc.Sequence
        A Sequence of param maps.

Returns
    _FitMultipleIterator
        A thread safe iterable which contains one model for each param map. Each call to next(modelIterator) will return (index, model) where model was fit using paramMaps[index]. index values may not be sequential.
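A brief sketch of consuming the returned iterator (df is again an assumed training DataFrame):

lr = LinearRegression(maxIter=10)
param_maps = [{lr.regParam: 0.0}, {lr.regParam: 0.1}, {lr.regParam: 1.0}]

# Each item is an (index, model) pair; indices may arrive out of order,
# so record which param map produced which model.
models = [None] * len(param_maps)
for index, model in lr.fitMultiple(df, param_maps):
    models[index] = model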
getAggregationDepth() → int

Gets the value of aggregationDepth or its default value.

getElasticNetParam() → float

Gets the value of elasticNetParam or its default value.

getEpsilon() → float

Gets the value of epsilon or its default value.

New in version 2.3.0.

getFeaturesCol() → str

Gets the value of featuresCol or its default value.

getFitIntercept() → bool

Gets the value of fitIntercept or its default value.

getLabelCol() → str

Gets the value of labelCol or its default value.

getLoss() → str

Gets the value of loss or its default value.

getMaxBlockSizeInMB() → float

Gets the value of maxBlockSizeInMB or its default value.

getMaxIter() → int

Gets the value of maxIter or its default value.

getOrDefault(param: Union[str, pyspark.ml.param.Param[T]]) → Union[Any, T]

Gets the value of a param in the user-supplied param map or its default value. Raises an error if neither is set.

getParam(paramName: str) → pyspark.ml.param.Param

Gets a param by its name.

getPredictionCol() → str

Gets the value of predictionCol or its default value.

getRegParam() → float

Gets the value of regParam or its default value.

getSolver() → str

Gets the value of solver or its default value.

getStandardization() → bool

Gets the value of standardization or its default value.

getTol() → float

Gets the value of tol or its default value.

getWeightCol() → str

Gets the value of weightCol or its default value.

hasDefault(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param has a default value.

hasParam(paramName: str) → bool

Tests whether this instance contains a param with a given (string) name.

isDefined(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by user or has a default value.

isSet(param: Union[str, pyspark.ml.param.Param[Any]]) → bool

Checks whether a param is explicitly set by user.
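The distinction between set, defined, and default values can be seen in a short sketch:

lr = LinearRegression()
lr.isSet(lr.maxIter)         # False: not explicitly set by the user
lr.hasDefault(lr.maxIter)    # True: maxIter has a default (100)
lr.isDefined(lr.maxIter)     # True: set by user OR has a default
lr.getOrDefault(lr.maxIter)  # 100

lr.setMaxIter(5)
lr.isSet(lr.maxIter)         # True
lr.clear(lr.maxIter)         # removes the explicit setting
lr.getMaxIter()              # 100, back to the default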
classmethod load(path: str) → RL

Reads an ML instance from the input path, a shortcut of read().load(path).

classmethod read() → pyspark.ml.util.JavaMLReader[RL]

Returns an MLReader instance for this class.

save(path: str) → None

Save this ML instance to the given path, a shortcut of 'write().save(path)'.

set(param: pyspark.ml.param.Param, value: Any) → None

Sets a parameter in the embedded param map.

setAggregationDepth(value: int) → pyspark.ml.regression.LinearRegression

Sets the value of aggregationDepth.

setElasticNetParam(value: float) → pyspark.ml.regression.LinearRegression

Sets the value of elasticNetParam.

setEpsilon(value: float) → pyspark.ml.regression.LinearRegression

Sets the value of epsilon.

New in version 2.3.0.

setFeaturesCol(value: str) → P

Sets the value of featuresCol.

New in version 3.0.0.

setFitIntercept(value: bool) → pyspark.ml.regression.LinearRegression

Sets the value of fitIntercept.

setLoss(value: str) → pyspark.ml.regression.LinearRegression

Sets the value of loss.

setMaxBlockSizeInMB(value: float) → pyspark.ml.regression.LinearRegression

Sets the value of maxBlockSizeInMB.

New in version 3.1.0.

setMaxIter(value: int) → pyspark.ml.regression.LinearRegression

Sets the value of maxIter.

setParams(self, *, featuresCol="features", labelCol="label", predictionCol="prediction", maxIter=100, regParam=0.0, elasticNetParam=0.0, tol=1e-6, fitIntercept=True, standardization=True, solver="auto", weightCol=None, aggregationDepth=2, loss="squaredError", epsilon=1.35, maxBlockSizeInMB=0.0)

Sets params for linear regression.

New in version 1.4.0.
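A quick sketch of setting several params in one keyword-only call (the values are arbitrary):

lr = LinearRegression()
lr.setParams(maxIter=50, regParam=0.01, elasticNetParam=0.8)
lr.getMaxIter()  # 50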
setPredictionCol(value: str) → P

Sets the value of predictionCol.

New in version 3.0.0.

setRegParam(value: float) → pyspark.ml.regression.LinearRegression

Sets the value of regParam.

setSolver(value: str) → pyspark.ml.regression.LinearRegression

Sets the value of solver.

setStandardization(value: bool) → pyspark.ml.regression.LinearRegression

Sets the value of standardization.

setTol(value: float) → pyspark.ml.regression.LinearRegression

Sets the value of tol.

setWeightCol(value: str) → pyspark.ml.regression.LinearRegression

Sets the value of weightCol.

write() → pyspark.ml.util.JavaMLWriter

Returns an MLWriter instance for this ML instance.
Attributes Documentation

aggregationDepth = Param(parent='undefined', name='aggregationDepth', doc='suggested depth for treeAggregate (>= 2).')

elasticNetParam = Param(parent='undefined', name='elasticNetParam', doc='the ElasticNet mixing parameter, in range [0, 1]. For alpha = 0, the penalty is an L2 penalty. For alpha = 1, it is an L1 penalty.')

epsilon = Param(parent='undefined', name='epsilon', doc='The shape parameter to control the amount of robustness. Must be > 1.0. Only valid when loss is huber')

featuresCol = Param(parent='undefined', name='featuresCol', doc='features column name.')

fitIntercept = Param(parent='undefined', name='fitIntercept', doc='whether to fit an intercept term.')

labelCol = Param(parent='undefined', name='labelCol', doc='label column name.')

loss = Param(parent='undefined', name='loss', doc='The loss function to be optimized. Supported options: squaredError, huber.')

maxBlockSizeInMB = Param(parent='undefined', name='maxBlockSizeInMB', doc='maximum memory in MB for stacking input data into blocks. Data is stacked within partitions. If more than remaining data size in a partition then it is adjusted to the data size. Default 0.0 represents choosing optimal value, depends on specific algorithm. Must be >= 0.')

maxIter = Param(parent='undefined', name='maxIter', doc='max number of iterations (>= 0).')

params

Returns all params ordered by name. The default implementation uses dir() to get all attributes of type Param.

predictionCol = Param(parent='undefined', name='predictionCol', doc='prediction column name.')

regParam = Param(parent='undefined', name='regParam', doc='regularization parameter (>= 0).')

solver = Param(parent='undefined', name='solver', doc='The solver algorithm for optimization. Supported options: auto, normal, l-bfgs.')

standardization = Param(parent='undefined', name='standardization', doc='whether to standardize the training features before fitting the model.')

tol = Param(parent='undefined', name='tol', doc='the convergence tolerance for iterative algorithms (>= 0).')

weightCol = Param(parent='undefined', name='weightCol', doc='weight column name. If this is not set or empty, we treat all instance weights as 1.0.')