Because they are fast, easy to implement, and relatively effective, state-of-the-art naive Bayes text classifiers that make the strong assumption of conditional independence among attributes, such as multinomial naive Bayes, complement naive Bayes, and the one-versus-all-but-one model, have received a great deal of attention from researchers in the domain of text classification. In this article, we revisit these naive Bayes text classifiers and empirically compare their classification performance on a large number of widely used text classification benchmark datasets. We then propose a locally weighted learning approach to these classifiers, which we call locally weighted naive Bayes text classifiers (LWNBTC). LWNBTC weakens the attribute conditional independence assumption made by these naive Bayes text classifiers by applying locally weighted learning. The experimental results show that the locally weighted versions significantly outperform the original state-of-the-art naive Bayes text classifiers in terms of classification accuracy.
Keywords: complement naive Bayes; locally weighted learning; multinomial naive Bayes; the one-versus-all-but-one model
Document Type: Research Article
Department of Computer Science, China University of Geosciences, Wuhan, Hubei 430074, China
Faculty of Computer Science, University of New Brunswick, Fredericton, New Brunswick E3B5A3, Canada
Department of Electronic Engineering, China University of Geosciences, Wuhan, Hubei 430074, China
Publication date: June 1, 2013