
This function computes metrics from tuning results. Its arguments and output format closely mirror those of collect_metrics(), but it additionally takes a metrics argument containing a metric set of new metrics to compute. This allows new performance metrics to be computed without requiring users to re-evaluate the models against the resamples.

Note that the control option save_pred = TRUE must have been supplied when generating x.
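
For example, a minimal sketch of a control object that keeps the out-of-sample predictions (control_resamples() pairs with fit_resamples(); tune_grid() uses control_grid()):

# retain predictions so compute_metrics() can re-score them later:
ctrl <- control_resamples(save_pred = TRUE)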

Usage

compute_metrics(x, metrics, summarize, event_level, ...)

# Default S3 method
compute_metrics(x, metrics, summarize = TRUE, event_level = "first", ...)

# S3 method for class 'tune_results'
compute_metrics(x, metrics, ..., summarize = TRUE, event_level = "first")

Arguments

x

The results of a tuning function like tune_grid() or fit_resamples(), generated with the control option save_pred = TRUE.

metrics

A metric set of new metrics to compute. See the "Details" section below for more information.

summarize

A single logical value indicating whether metrics should be summarized over resamples (TRUE) or returned for each individual resample (FALSE). See collect_metrics() for more details on how metrics are summarized, and see the sketch following these argument descriptions.

event_level

A single string containing either "first" or "second". This argument is passed on to yardstick metric functions when any type of class prediction is made, and specifies which level of the outcome is considered the "event". A classification sketch appears at the end of the Examples section.

...

Not currently used.
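
As an illustration of the summarize argument, a brief sketch that reuses the `res` object created in the Examples section below:

# one row per resample, with an .estimate column in place of the
# mean/n/std_err summary columns:
compute_metrics(res, metric_set(mae), summarize = FALSE)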

Value

A tibble. See collect_metrics() for more details on the return value.

Details

Each metric in the set supplied to the metrics argument must have a metric type (usually "numeric", "class", or "prob") that matches some metric evaluated when generating x. For example, if x was generated with only hard "class" metrics, this function can't compute metrics that take in class probabilities ("prob"). By default, the tuning functions used to generate x compute metrics of all needed types.
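
One way to check a metric's type is to inspect the class of the yardstick metric function itself; a small sketch (the class names come from yardstick's metric constructors):

library(yardstick)
class(rmse)      # numeric predictions    -> "numeric_metric" "metric" "function"
class(accuracy)  # hard class predictions -> "class_metric" "metric" "function"
class(roc_auc)   # class probabilities    -> "prob_metric" "metric" "function"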

Examples

# load needed packages:
library(tune)
library(parsnip)
library(rsample)
library(yardstick)

# evaluate a linear regression against resamples.
# note that we pass `save_pred = TRUE`:
res <-
  fit_resamples(
    linear_reg(),
    mpg ~ cyl + hp,
    bootstraps(mtcars, 5),
    control = control_grid(save_pred = TRUE)
  )

# to return the metrics supplied to `fit_resamples()`:
collect_metrics(res)
#> # A tibble: 2 × 6
#>   .metric .estimator  mean     n std_err .config             
#>   <chr>   <chr>      <dbl> <int>   <dbl> <chr>               
#> 1 rmse    standard   3.21      5  0.363  Preprocessor1_Model1
#> 2 rsq     standard   0.732     5  0.0495 Preprocessor1_Model1

# to compute new metrics:
compute_metrics(res, metric_set(mae))
#> # A tibble: 1 × 6
#>   .metric .estimator  mean     n std_err .config             
#>   <chr>   <chr>      <dbl> <int>   <dbl> <chr>               
#> 1 mae     standard    2.57     5   0.347 Preprocessor1_Model1

# if `metrics` is the same as that passed to `fit_resamples()`,
# then `collect_metrics()` and `compute_metrics()` give the same
# output, though `compute_metrics()` is quite a bit slower:
all.equal(
  collect_metrics(res),
  compute_metrics(res, metric_set(rmse, rsq))
)
#> [1] TRUE
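
# the `event_level` argument only matters for classification results.
# a hedged sketch, assuming the modeldata package and its two_class_dat
# data set (factor outcome Class, predictors A and B) are available:
library(modeldata)

cls_res <-
  fit_resamples(
    logistic_reg(),
    Class ~ A + B,
    bootstraps(two_class_dat, 5),
    control = control_resamples(save_pred = TRUE)
  )

# treat the second level of `Class` as the event of interest:
compute_metrics(cls_res, metric_set(sens), event_level = "second")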