Using LLM to Solve One-Class Problems

One-class classification problems define rules for a single class, and apply when all, or almost all, of the data in the dataset belongs to that class. The generated rules either describe the class to which the dataset belongs (one-class) or the class to which the data does not belong (anomaly). These problems are solved in Rulex with the One-Class Logic Learning Machine (LLM) task.


Prerequisites

Additional tabs

The following additional tabs are provided:

  • Documentation tab, where you can document your task.

  • Parametric options tab, where you can configure process variables instead of fixed values. The parametric option (PO) equivalents are listed in the options table below.

  • Monitor and Results tabs, where you can see the output of the task computation. See the Results section below.


Procedure

  1. Drag and drop the One-Class Logic Learning Machine task onto the stage.

  2. Connect a task, which contains the attributes from which you want to create the model, to the new task.

  3. Double-click the LLM task. The left-hand pane displays a list of all the available attributes in the dataset, which can be ordered and searched as required.

  4. Configure the options described in the table below.

  5. Save and compute the task.

One-Class LLM options

Parameter Name

PO

Description

Aggregate data before processing

aggregate

If selected, identical patterns are aggregated and considered as a single pattern during the training phase.
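
As an illustration of the idea only (not the Rulex implementation), aggregation can be thought of as collapsing duplicate rows into a single weighted pattern:

```python
from collections import Counter

# Collapse duplicate rows into a single pattern with a weight equal to its multiplicity.
patterns = [("red", 1, "ok"), ("red", 1, "ok"), ("blue", 2, "ok")]
weighted = Counter(patterns)
print(weighted)  # Counter({('red', 1, 'ok'): 2, ('blue', 2, 'ok'): 1})
```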

Minimize number of conditions

minimal

If selected, rules with fewer conditions but the same covering are preferred.

Perform a coarse-grained training (faster)

lowest

If selected, the LLM training algorithm considers the conditions with the subset of values that maximizes covering for each input attribute. Otherwise, only one value at a time is added to each condition, thus performing a more extensive search. The coarse-grained training option has the advantage of being faster than performing an extensive search.

Prevent interval conditions for ordered attributes

nointerval

If selected, interval conditions such as 1<x≤5 are avoided, and only conditions with > (greater than) or ≤ (less than or equal to) are generated.

Ignore attributes not present in rules

reducenames

If selected, attributes that have not been included in rules will be flagged Ignore at the end of the training process, to reflect their redundancy in the classification problem at hand.

Hold all the generated rules

holdrules

If selected, even redundant rules, which are verified only by training samples already covered by other, more powerful rules, are kept.

Ignore outliers while building rules

coveroutlier

If selected, the set of remaining patterns not covered by the generated rules is ignored if its size is less than the threshold defined in the Expected percentage of anomalies in the training set (%) option.

Consider relative error instead of absolute

relerrmax

Specify whether the relative or absolute error must be considered.

The Expected percentage of anomalies in the training set (%) is set by considering the proportions of samples belonging to different classes. Imagine a scenario where, for a given rule pertaining to a specific output value yo:

  • TP is the number of true positives (samples with the output value yo that verify the conditions of the rule).

  • TN is the number of true negatives (samples with output values different from yo that do not verify the conditions of the rule).

  • FP is the number of false positives (samples with output values different from yo that do verify the conditions of the rule).

  • FN is the number of false negatives (samples with the output value yo that do not verify the conditions of the rule).

In this scenario the absolute error of the rule is FP/(TN+FP), whereas the relative error is FP/min(TP+FN, TN+FP).
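
As a purely illustrative example of the two error definitions above (the function and variable names are ours, not Rulex identifiers):

```python
def rule_errors(tp, tn, fp, fn):
    absolute_error = fp / (tn + fp)              # FP / (TN + FP)
    relative_error = fp / min(tp + fn, tn + fp)  # FP / min(TP + FN, TN + FP)
    return absolute_error, relative_error

# Example: 25 positive samples (TP + FN) and 75 negative samples (TN + FP),
# 5 of which are wrongly covered by the rule.
print(rule_errors(tp=15, tn=70, fp=5, fn=10))  # (0.0667, 0.2)
```

When the class of interest is small, as in this example, the relative error (0.2) is noticeably larger than the absolute error (≈0.067), which is why the choice between the two matters.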

Also generate rules with zero covering

zerocov

If selected, it is possible to generate rules that do not cover any pattern of the training set. This option may be useful when generating rules for anomalies.

Generate rules for

anomaly

Select which type of problem you want to solve:

  • One-class (0) generates rules that fully characterize the dataset, with high covering and no errors.

  • Anomaly (1) generates rules that do not characterize the dataset, with low covering (zero covering is possible if the Also generate rules with zero covering option has been selected) and a high error percentage.

Maximum number of trials in bottom-up mode

nbuiter

The number of times a bottom-up procedure can be repeated, after which a top-down procedure will be adopted. 

The bottom-up procedure starts by analyzing all possible cases, defining conditions and reducing the extension of the rules. If, at the end of this procedure, the error is higher than the value entered for the Expected percentage of anomalies in the training set (%) option, the procedure starts again, inserting an increased penalty on the error. If the maximum number of trials is reached without obtaining a satisfactory rule, the procedure is switched to a top-down approach.
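
As a rough illustration of the retry logic described above, the following sketch shows the general shape of such a loop; the helper functions and the penalty update rule are hypothetical placeholders, not Rulex code:

```python
# Schematic sketch of the bottom-up / top-down switch; build_rule_bottom_up,
# build_rule_top_down and the penalty update are placeholders, not Rulex internals.
def generate_rule(patterns, errmax, nbuiter, build_rule_bottom_up, build_rule_top_down):
    penalty = 1.0
    for _ in range(nbuiter):
        rule = build_rule_bottom_up(patterns, error_penalty=penalty)
        if rule.error <= errmax:   # satisfactory rule found
            return rule
        penalty *= 2               # retry with an increased penalty on the error
    # no satisfactory rule after nbuiter bottom-up trials: switch strategy
    return build_rule_top_down(patterns)
```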

Expected percentage of anomalies in the training set (%)

errmax

Set the maximum percentage of anomalies the dataset should contain.

For one-class problems this percentage corresponds to the maximum percentage of errors; for anomaly problems, to the maximum percentage of covering.

Number of generated rules (0 means 'automatic')

numrules

If set to 0, the minimum number of rules required to cover all patterns in the training set is generated. Otherwise, set the desired number of rules for each class.

Maximum number of conditions for a rule

maxant

Set the maximum number of conditions in a rule.

Overlap between rules (%)

maxoverlap

Set the maximum percentage of patterns that can be shared by two rules.
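
As a hedged illustration of what a percentage of shared patterns could measure, the sketch below normalizes by the smaller of the two coverings; the exact normalization used by Rulex is not documented here and is an assumption:

```python
# Illustrative sketch only: one plausible way to measure the overlap between two
# rules as the share of covered patterns they have in common.
def overlap_percentage(covered_by_rule_a, covered_by_rule_b):
    a, b = set(covered_by_rule_a), set(covered_by_rule_b)
    shared = a & b
    smaller = min(len(a), len(b)) or 1
    return 100.0 * len(shared) / smaller

print(overlap_percentage({1, 2, 3, 4}, {3, 4, 5}))  # ≈66.7% of the smaller rule's patterns are shared
```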

Maximum nominal values

maxnomval

Set the maximum number of nominal values that can be contained in a condition. This is useful for simplifying conditions and making them more manageable, for example when an attribute has a very high number of possible nominal values. It is worth noting that overly complicated conditions also run the risk of over-fitting, where rules are too specific to the training data and not general enough to be accurate on new data.

Differentiate multiple rules by attributes

onlyatt

If selected, when multiple rules have to be generated, the presence of the same attributes in their conditions is penalized.

Allow to use complements in conditions on nominal

alsocompl

If selected, conditions on nominal attributes can be expressed as complements.

Percentage of negative patterns (%)

negperc

The percentage of patterns which do not belong to the dataset class.

Missing values verify any rule condition

missrelax

If selected, missing values will be assumed to satisfy any condition. If there is a high number of missing values, this choice can have an important impact on the outcome.
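
The following hedged sketch shows what it means for a missing value to verify a condition when a single "attribute ≤ threshold" condition is evaluated; the rule representation is invented for this example and is not the Rulex internal format:

```python
# Illustrative sketch only: evaluating one condition with and without the
# missing-value relaxation.
def condition_holds(value, threshold, missing_verifies_condition):
    if value is None:                        # missing value
        return missing_verifies_condition    # True when the option is selected
    return value <= threshold

print(condition_holds(None, 5, missing_verifies_condition=True))   # True
print(condition_holds(None, 5, missing_verifies_condition=False))  # False
```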

Prevent rules in input from being included in the LLM model

avoidrules

If selected, rules provided as input to the LLM task are not included in the final ruleset.

Minimum rule distance for additional rules

minruledist

The minimum difference between additional rules, taken into consideration if the Prevent rules in input from being included in the LLM model option has been selected.

Initialize random generator with seed

initrandom, iseed

If selected, a seed, which defines the starting point in the sequence, is used during random generation operations. Consequently using the same seed each time will make each execution reproducible. Otherwise, each execution of the same task (with same options) may produce dissimilar results due to different random numbers being generated in some phases of the process.

Append results

append

If selected, the results of this computation are appended to the dataset, otherwise they replace the results of previous computations.

Change roles for input and output attributes

keeproles

If selected, input and output roles defined in previous Data Manager tasks will be overwritten by those defined in this task.

Input attributes

inpnames

Drag and drop here the attributes that will act as input attributes in generating classification rules when the task is computed.

Key attributes

keynames

Drag and drop here the key attributes. Rulex will create a different set of rules for each key attribute value. Instead of manually dragging and dropping attributes, they can be defined via a filtered list.

Key attributes are the attributes that must always be taken into consideration in rules, and every rule must always contain a condition for each of the key attributes.

 

Results

The results of the LLM task can be viewed in two separate tabs:

  • The Monitor tab, where it is possible to view the statistics related to the generated rules as a set of histograms, such as the number of conditions, covering value, or error value. Rules relating to different classes are displayed as bars of a specific color. These plots can be viewed during and after computation operations.

  • The Results tab, where statistics on the LLM computation are displayed, such as the execution time, number of rules, average covering etc.


