One problem with the Naive Bayes classifier is that the confidence of the model in its predictions can change even when no new information has been gained.
Let $p(y)$ and $p(x_k \mid y)$ for $k = 1, \dots, K$ be the probability distributions defining a Naive Bayes model. There are $K$ features and $y$ can take on $M$ possible classes. Assume the prior over labels is uniform.
Show that if every feature is repeated exactly once (so each feature appears twice in the product of likelihoods), the model becomes strictly more confident in its predicted output class on an arbitrary example.
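Before working through the proof, it can help to see the effect numerically. The sketch below (with made-up class-conditional likelihoods for a single example, two classes, and two features) compares the posterior of a Naive Bayes model against the same model with every feature duplicated; duplication squares each example's likelihood, which pushes the posterior toward the argmax class:

```python
import numpy as np

def nb_posterior(log_likelihoods):
    # Uniform prior over labels, so the posterior is proportional to
    # the product of per-feature likelihoods (sum in log space).
    logp = log_likelihoods.sum(axis=1)
    p = np.exp(logp - logp.max())  # subtract max for numerical stability
    return p / p.sum()

# Hypothetical per-feature likelihoods p(x_k | y) for one example:
# rows index the class y, columns index the feature k.
lik = np.array([[0.6, 0.7],
                [0.4, 0.3]])
loglik = np.log(lik)

post = nb_posterior(loglik)                            # original model
post_dup = nb_posterior(np.hstack([loglik, loglik]))   # every feature duplicated

print(post[0], post_dup[0])  # confidence in the predicted class rises
```

No new information was added, yet the posterior on the predicted class grows (here from about 0.78 to about 0.92), which is exactly the pathology the exercise asks you to prove in general.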
This fact was pointed out to me in Section 2.2.3 of An Introduction to Conditional Random Fields by Sutton and McCallum.