Problem

One problem with the Naive Bayes classifier is that its confidence in its predictions can change even when no new information has been gained.

Let $p(x_i \mid y)$ for $i = 1, \ldots, m$ and $p(y)$ be the probability distributions defining a Naive Bayes model. There are $m$ features and $y$ can take on $n$ possible classes. Assume the prior over labels $p(y)$ is uniform.

Show that if every feature is repeated exactly once (so each $x_i$ appears twice in the example), the model becomes more confident in its predicted output class for an arbitrary example.

Solution

Under a uniform prior, the posterior over classes is

$$p(y \mid x) \propto \prod_{i=1}^m p(x_i \mid y).$$

Write $a_y = \prod_{i=1}^m p(x_i \mid y)$, and let $\hat{y} = \arg\max_y a_y$ be the predicted class. The model's confidence in its prediction is

$$p(\hat{y} \mid x) = \frac{a_{\hat{y}}}{\sum_{y'} a_{y'}}.$$

If every feature is repeated exactly once, each factor $p(x_i \mid y)$ appears twice in the product, so the new posterior is proportional to $a_y^2$. Squaring is monotonic, so the predicted class $\hat{y}$ is unchanged, and the new confidence is

$$\frac{a_{\hat{y}}^2}{\sum_{y'} a_{y'}^2}.$$

The claim is that

$$\frac{a_{\hat{y}}^2}{\sum_{y'} a_{y'}^2} \geq \frac{a_{\hat{y}}}{\sum_{y'} a_{y'}},$$

which, since both denominators are positive, is equivalent (after cross-multiplying and dividing by $a_{\hat{y}}$) to

$$a_{\hat{y}} \sum_{y'} a_{y'} \geq \sum_{y'} a_{y'}^2.$$

This holds term by term: $a_{\hat{y}} a_{y'} \geq a_{y'}^2$ for every $y'$, by definition of $\hat{y}$. The inequality is strict unless $a_{y'} = a_{\hat{y}}$ for every class with $a_{y'} > 0$, i.e. unless the posterior was already uniform over its support. So duplicating the features makes the model more confident even though no new information has been gained.
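The effect is easy to check numerically. The sketch below uses a made-up vector of per-class likelihoods $a_y = \prod_i p(x_i \mid y)$ (the specific numbers are illustrative, not from the text) and compares the posterior before and after the likelihoods are squared by feature duplication:

```python
import numpy as np

# Hypothetical per-class likelihoods a_y = prod_i p(x_i | y) for one example
# (illustrative numbers; any positive, non-uniform vector shows the effect).
a = np.array([0.02, 0.01, 0.005])

# Posterior under a uniform prior: normalize the likelihoods.
posterior = a / a.sum()

# Repeating every feature once squares each likelihood factor.
posterior_dup = a**2 / (a**2).sum()

print("before:", posterior.max())   # confidence in the argmax class
print("after: ", posterior_dup.max())

# The argmax class is unchanged, and its probability can only go up.
assert posterior.argmax() == posterior_dup.argmax()
assert posterior_dup.max() >= posterior.max()
```

With these numbers the confidence in the argmax class rises from about $0.571$ to about $0.762$, while the ranking of classes is unchanged.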

This fact was pointed out to me in Section 2.2.3 of An Introduction to Conditional Random Fields.