[Experimental]

Association rules identify conditions (antecedents) under which a specific feature (consequent) is present very often.

Scheme:

A => C

If condition A is satisfied, then the feature C is present very often.

Example:

university_edu & middle_age & IT_industry => high_income

Middle-aged people with a university education who work in the IT industry are very likely to have a high income.

The antecedent A is usually a set of predicates, and the consequent C is a single predicate.

For the following definitions we need a function \(supp(I)\), defined for a set \(I = \{i_1, i_2, \ldots, i_n\}\) of predicates as the relative frequency of rows satisfying all predicates from \(I\). For logical data, \(supp(I)\) equals the relative frequency of rows for which all predicates \(i_1, i_2, \ldots, i_n\) are TRUE. For numerical (double) input, \(supp(I)\) is computed as the mean (over all rows) of the truth degrees of the formula \(i_1 \wedge i_2 \wedge \ldots \wedge i_n\), where the conjunction \(\wedge\) is evaluated with the triangular norm selected by the t_norm argument.
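
The following is a minimal, illustrative R sketch of how \(supp(I)\) could be computed by hand. The helper name supp and the example predicates are not part of the package API; they serve only to make the definition concrete.

# Illustrative helper, not part of the package: computes supp(I) for a
# matrix `m` whose columns are the predicates from I.
supp <- function(m, t_norm = c("goguen", "goedel", "lukas")) {
  t_norm <- match.arg(t_norm)
  if (is.logical(m)) {
    # logical data: relative frequency of rows where all predicates are TRUE
    return(mean(apply(m, 1, all)))
  }
  # numerical data: mean of the row-wise truth degrees of the conjunction
  degrees <- apply(m, 1, function(row) {
    switch(t_norm,
           goedel = min(row),                            # minimum t-norm
           goguen = prod(row),                           # product t-norm
           lukas  = max(0, sum(row) - length(row) + 1))  # Lukasiewicz t-norm
  })
  mean(degrees)
}

# e.g., supp({vs, am}) on logical predicates derived from mtcars:
supp(cbind(vs = mtcars$vs > 0, am = mtcars$am > 0))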

Association rules are characterized by the following quality measures.

Length of a rule is the number of elements in the antecedent.

Coverage of a rule is equal to \(supp(A)\).

Consequent support of a rule is equal to \(supp(\{c\})\).

Support of a rule is equal to \(supp(A \cup \{c\})\).

Confidence of a rule is the fraction \(supp(A \cup \{c\}) / supp(A)\), i.e., the support divided by the coverage.
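
As a hand-computed illustration (not using the package itself), the sketch below derives these measures with the supp() helper shown above; the chosen predicates and the mpg threshold are arbitrary.

# Assumes the illustrative supp() helper from above.
toy <- data.frame(vs       = mtcars$vs > 0,
                  am       = mtcars$am > 0,
                  mpg_high = mtcars$mpg > 20)

A  <- c("vs", "am")    # antecedent predicates
c_ <- "mpg_high"       # consequent predicate

length_of_rule <- length(A)                           # number of antecedent predicates
coverage       <- supp(as.matrix(toy[A]))             # supp(A)
conseq_support <- supp(as.matrix(toy[c_]))            # supp({c})
support        <- supp(as.matrix(toy[c(A, c_)]))      # supp(A ∪ {c})
confidence     <- support / coverage                  # supp(A ∪ {c}) / supp(A)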

Usage

dig_associations(
  x,
  antecedent = everything(),
  consequent = everything(),
  disjoint = var_names(colnames(x)),
  min_length = 0L,
  max_length = Inf,
  min_coverage = 0,
  min_support = 0,
  min_confidence = 0,
  contingency_table = FALSE,
  measures = NULL,
  t_norm = "goguen",
  max_results = Inf,
  verbose = FALSE,
  threads = 1
)

Arguments

x

a matrix or data frame with data to search in. The matrix must be numeric (double) or logical. If x is a data frame then each column must be either numeric (double) or logical.

antecedent

a tidyselect expression (see tidyselect syntax) specifying the columns to use in the antecedent (left) part of the rules

consequent

a tidyselect expression (see tidyselect syntax) specifying the columns to use in the consequent (right) part of the rules

disjoint

an atomic vector of size equal to the number of columns of x that specifies the groups of predicates: if some elements of the disjoint vector are equal, then the corresponding columns of x will NOT be present together in a single condition. If x is prepared with partition(), using the var_names() function on x's column names is a convenient way to create the disjoint vector (see the second example call at the end of this page).

min_length

the minimum length, i.e., the minimum number of predicates in the antecedent, of a rule to be generated. The value must be greater than or equal to 0. If 0, rules with an empty antecedent are generated too.

max_length

the maximum length, i.e., the maximum number of predicates in the antecedent, of a rule to be generated. If equal to Inf, the maximum length is limited only by the number of available predicates.

min_coverage

the minimum coverage of a rule in the dataset x. (See Description for the definition of coverage.)

min_support

the minimum support of a rule in the dataset x. (See Description for the definition of support.)

min_confidence

the minimum confidence of a rule in the dataset x. (See Description for the definition of confidence.)

contingency_table

a logical value indicating whether to provide a contingency table for each rule. If TRUE, the columns pp, pn, np, and nn are added to the output table. These columns contain the number of rows satisfying the antecedent and the consequent, the antecedent but not the consequent, the consequent but not the antecedent, and neither the antecedent nor the consequent, respectively.

measures

a character vector specifying the additional quality measures to compute. If NULL, no additional measures are computed. Possible values are "lift", "conviction", "added_value". See https://mhahsler.github.io/arules/docs/measures for a description of the measures.

t_norm

a t-norm used to compute conjunction of weights. It must be one of "goedel" (minimum t-norm), "goguen" (product t-norm), or "lukas" (Lukasiewicz t-norm).

max_results

the maximum number of conditions to generate. If the number of found conditions exceeds max_results, the function stops generating new conditions and returns the results found so far. To avoid long computations during the search, it is recommended to set max_results to a reasonable positive value. Setting max_results to Inf will generate all possible conditions.

verbose

a logical value indicating whether to print progress messages.

threads

the number of threads to use for parallel computation.

Value

A tibble with found patterns and computed quality measures.

Author

Michal Burda

Examples

d <- partition(mtcars, .breaks = 2)
dig_associations(d,
                 antecedent = !starts_with("mpg"),
                 consequent = starts_with("mpg"),
                 min_support = 0.3,
                 min_confidence = 0.8,
                 measures = c("lift", "conviction"))
#> # A tibble: 524 × 10
#>    antecedent        consequent support confidence coverage conseq_support count
#>    <chr>             <chr>        <dbl>      <dbl>    <dbl>          <dbl> <dbl>
#>  1 {qsec=(-Inf;18.7… {mpg=(-In…   0.594      0.826    0.719          0.719    19
#>  2 {drat=(-Inf;3.84… {mpg=(-In…   0.531      0.895    0.594          0.719    17
#>  3 {am=(-Inf;0.5]}   {mpg=(-In…   0.531      0.895    0.594          0.719    17
#>  4 {vs=(-Inf;0.5]}   {mpg=(-In…   0.531      0.944    0.562          0.719    17
#>  5 {cyl=(6;Inf]}     {mpg=(-In…   0.438      1        0.438          0.719    14
#>  6 {disp=(272;Inf]}  {mpg=(-In…   0.438      1        0.438          0.719    14
#>  7 {wt=(3.47;Inf]}   {mpg=(-In…   0.344      1        0.344          0.719    11
#>  8 {carb=(-Inf;4.5]… {mpg=(-In…   0.312      1        0.312          0.719    10
#>  9 {gear=(-Inf;4],w… {mpg=(-In…   0.312      1        0.312          0.719    10
#> 10 {qsec=(-Inf;18.7… {mpg=(-In…   0.344      1        0.344          0.719    11
#> # ℹ 514 more rows
#> # ℹ 3 more variables: antecedent_length <int>, lift <dbl>, conviction <dbl>
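
A further illustrative call on the same partitioned data (output not shown here), demonstrating the disjoint and contingency_table arguments; the thresholds and the selected measure are chosen arbitrarily.

# Output not shown; thresholds are arbitrary and chosen only for illustration.
dig_associations(d,
                 antecedent = !starts_with("mpg"),
                 consequent = starts_with("mpg"),
                 disjoint = var_names(colnames(d)),
                 max_length = 3,
                 min_support = 0.2,
                 min_confidence = 0.9,
                 contingency_table = TRUE,
                 measures = "added_value")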