Orange includes a component-based naive Bayesian classifier that can handle both discrete and continuous attributes, while the class needs to be discrete (or at least discretized). It needs several components for estimating conditional and unconditional probabilities; these are described on a separate page.
Class BayesClassifier represents a naive Bayesian classifier. The probability of class C, given that the values of attributes A1, A2, ..., An are v1, v2, ..., vn, is computed as
p(C|v1, v2, ..., vn) = p(C) * [p(C|v1)/p(C)] * [p(C|v2)/p(C)] * ... * [p(C|vn)/p(C)].
Note that when relative frequencies are used to estimate probabilities, the more usual formula (with factors of the form p(vi|C)/p(vi)) and the above formula are exactly equivalent (without any additional independence assumptions, as one might think at first glance). The difference becomes important when other ways of estimating probabilities are used, such as, for instance, the m-estimate. In that case, the above formula is much more appropriate.
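To make the arithmetic concrete, here is a tiny sketch in plain Python; the numbers are made up and merely stand in for the estimates the classifier would actually read from its probability components:

    # Hypothetical estimates; in Orange they would come from distribution/estimator
    # and conditionalDistributions/conditionalEstimators.
    p_C = 0.3                          # unconditional probability p(C)
    p_C_given_v = [0.5, 0.4, 0.6]      # p(C|v1), p(C|v2), p(C|v3)

    score = p_C
    for p_cond in p_C_given_v:
        score = score * p_cond / p_C   # multiply by p(C|vi)/p(C)
    print score                        # unnormalized p(C|v1, v2, v3)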
When computing the formula, probabilities p(C) are read from distribution, which is of type Distribution and stores a (normalized) probability of each class. When distribution is None, BayesClassifier calls estimator to assess the probability. The former method is faster and is actually used by all existing methods of probability estimation; the latter is more flexible.
Conditional probabilities are computed similarly. Field conditionalDistributions is of type DomainContingency, which is basically a list of instances of Contingency, one for each attribute; the outer variable of the contingency is the attribute and the inner is the class. The contingency can be seen as a list of normalized probability distributions. For attributes for which there is no contingency in conditionalDistributions, a corresponding estimator from conditionalEstimators is used. The estimator is given the attribute value and returns a distribution of classes.
If neither a precomputed contingency nor a conditional estimator exists, the attribute is ignored without any warning. The attribute is also ignored if its value is undefined; this cannot be overridden by estimators.
Any of the fields (distribution, estimator, conditionalDistributions, conditionalEstimators) can be None. For instance, BayesLearner normally constructs a classifier that has either distribution or estimator defined. While it is not an error to have both, only distribution will be used in that case. The other two fields can both be defined and used complementarily; the elements missing in one are defined in the other. However, if there is no need for estimators, BayesLearner will not construct an empty list; it will not construct a list at all, but leave the field conditionalEstimators empty.
If you only need the probability of an individual class, call BayesClassifier's method p(class, example) to compute the probability of that class alone. Note that this probability will not be normalized and will thus, in general, not equal the probability returned by the call operator.
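A minimal sketch of such a call, assuming the lenses data used in the examples below and the standard orange module calls:

    import orange
    data = orange.ExampleTable("lenses")
    classifier = orange.BayesLearner(data)
    example = data[0]
    # probability of the example's own class; not normalized
    print classifier.p(example.getclass(), example)
    # the normalized probabilities returned by the call operator, for comparison
    print classifier(example, orange.GetProbabilities)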
BayesLearner's adjustThreshold flag defaults to False (to conform with the usual naive Bayesian classifiers), but setting it to True can increase the accuracy considerably.

First of all, you do not need to understand any of the above (or of what follows) to use the classifier. You can simply leave everything as it is, call the classifier, and it will work as you expect. Better still, it will even handle continuous attributes as continuous.
The first three fields are empty (None) by default.

If estimatorConstructor is left undefined, p(C) will be estimated by relative frequencies of examples (see ProbabilityEstimatorConstructor_relative). If conditionalEstimatorConstructor is left undefined, the same constructor as for estimating unconditional probabilities will be used (estimatorConstructor is used as an estimator in ConditionalProbabilityEstimatorConstructor_ByRows). That is, by default, both will use relative frequencies. But when estimatorConstructor is set, for instance, to estimate probabilities by the m-estimate with m=2.0, m-estimates with m=2.0 will be used for estimating conditional probabilities as well.

The learner first constructs an estimator for p(C). It tries to get a precomputed distribution of probabilities; if the estimator is capable of returning it, the distribution is stored in the classifier's field distribution and the just constructed estimator is disposed of. Otherwise, the estimator is stored in the classifier's field estimator, while the field distribution is left empty.
The same is then done for conditional probabilities. Different constructors are used for discrete and continuous attributes. If the constructed estimator can return all conditional probabilities in the form of a Contingency, the contingency is stored and the estimator disposed of. If not, the estimator is stored. If there are no contingencies when the learning is finished, the resulting classifier's conditionalDistributions is None. Likewise, if all probabilities are stored as contingencies, the conditionalEstimators field is None.
The field normalizePredictions is copied to the resulting classifier.
Let us load the data, induce a classifier and see how it performs on the first five examples.
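A sketch of such a session; the dataset name (lenses) and file loading call are assumptions consistent with the numbers discussed below:

    import orange
    data = orange.ExampleTable("lenses")
    classifier = orange.BayesLearner(data)
    for example in data[:5]:
        print example.getclass(), classifier(example)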
The classifier is correct in all five cases. Interested in probabilities, maybe?
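Continuing the sketch above, the probabilities can be requested through the call operator:

    for example in data[:5]:
        print example.getclass(), classifier(example, orange.GetProbabilities)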
While very confident about the second and the fourth example, the classifier guessed the correct class of the first one only by a small margin of 42 vs. 58 percent.
Now, let us peek into the classifier.
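One way to do so, continuing with the classifier induced above (the attribute name age and its value "young" are taken from the lenses data):

    print classifier.estimator                    # None
    print classifier.distribution                 # prior class probabilities
    print classifier.conditionalEstimators        # None
    print classifier.conditionalDistributions[0]  # contingency for the first attribute
    # class probabilities given that the first attribute (age) equals "young"
    print classifier.conditionalDistributions[0]["young"]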
The classifier has no estimator since the probabilities are stored in distribution. The probability of the first class is 0.167, of the second 0.208, and of the third 0.625. Nor does it have conditionalEstimators; the probabilities are stored in conditionalDistributions. We printed the contingency matrix for the first attribute and, in the last line, the conditional probabilities of the three classes when the value of the first attribute is "young".
Let us now use m-estimate instead of relative frequencies.
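One way to do this is to set the learner's estimatorConstructor before inducing the classifier; a sketch continuing the example above:

    bayes = orange.BayesLearner()
    bayes.estimatorConstructor = orange.ProbabilityEstimatorConstructor_m(m=2.0)
    classifier_m = bayes(data)
    for example in data[:5]:
        print example.getclass(), classifier_m(example, orange.GetProbabilities)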
The classifier is still correct for all examples.
Observing the probabilities shows a shift towards the third, more frequent class, as compared to the probabilities above, where relative frequencies were used.
Note that the change in probability estimation did not have any effect on the apriori probabilities.
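This can be checked by comparing the priors of the two classifiers from the sketches above:

    print classifier.distribution      # relative frequencies
    print classifier_m.distribution    # the same apriori distribution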
The reason is that this same distribution was used as the apriori distribution for the m-estimation. (How to enforce a different apriori distribution? While the Orange C++ core supports it, this feature has not been exported to Python yet.)
Finally, let us show an example with continuous attributes. We will take the iris dataset, which contains four continuous attributes and no discrete ones.
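A sketch of inducing and testing the classifier on iris; the particular example indices are arbitrary, chosen to pick one flower of each species:

    import orange
    data = orange.ExampleTable("iris")
    classifier = orange.BayesLearner(data)
    for i in [0, 60, 120]:
        example = data[i]
        print example.getclass(), classifier(example)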
The classifier works well. To get a glimpse of how it works, let us observe the conditional distributions for the first attribute. They are stored in conditionalDistributions, as before, except that the contingency now behaves as a dictionary, not as a list like before (see the information on distributions).
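For instance, continuing with the iris classifier above, the contingency for the first attribute can be queried by value (the dictionary-style keys(), and indexing by a key, are assumed here):

    cont = classifier.conditionalDistributions[0]   # first attribute of iris
    values = cont.keys()                            # attribute values act as keys
    print values[:3]
    print cont[values[0]]                           # class distribution at that value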
For a nicer picture, we can print out the probabilities, copy and paste them into some graph drawing program ... and get something like the figure below.
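A sketch of such a printout, one line per value of the first attribute (the formatting is arbitrary):

    cont = classifier.conditionalDistributions[0]
    for value in cont.keys():
        dist = cont[value]
        # attribute value, followed by p(setosa), p(versicolor), p(virginica)
        print "%5.2f   %5.3f   %5.3f   %5.3f" % (value, dist[0], dist[1], dist[2])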
If the sepal length is shorter, the most probable class is "setosa". Irises with middle sepal lengths belong to "versicolor", while longer sepal lengths indicate "virginica". The critical values where the decision would change are at about 5.4 and 6.3.
It is important to stress that the curves are relatively smooth although no fitting (either manual or automatic) of parameters took place.