Conditional random fields (CRFs) are state-of-the-art models for sequence labelling tasks such as named entity recognition. However, because they are globally conditioned, they tend to overfit more severely than other sequence models. We introduce an approach to combat this overfitting called a logarithmic opinion pool (LOP). A LOP consists of a weighted combination of constituent models. We present the theory behind LOPs and show that effective LOPs require constituent models that are diverse from one another. We examine different ways to introduce such diversity, including an approach that trains the constituent models together, interactively. Our results show that, as the underlying theory predicts, explicitly optimising for constituent-model diversity can improve performance over standard approaches to regularisation.
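As a minimal sketch (not taken from the paper), a LOP over per-label distributions can be viewed as a weighted geometric mean of the constituent models' probabilities, renormalised to sum to one. The toy distributions and weights below are illustrative assumptions, not values from the experiments:

```python
import math

def logarithmic_opinion_pool(distributions, weights):
    """Combine discrete distributions via a weighted geometric mean,
    renormalised so the result is a probability distribution (a LOP)."""
    labels = distributions[0].keys()
    # Weighted sum of log-probabilities for each label.
    log_scores = {
        y: sum(w * math.log(p[y]) for p, w in zip(distributions, weights))
        for y in labels
    }
    # Normalise back into a probability distribution.
    z = sum(math.exp(s) for s in log_scores.values())
    return {y: math.exp(s) / z for y, s in log_scores.items()}

# Two hypothetical constituent models' label distributions for one token.
p1 = {"PER": 0.7, "ORG": 0.2, "O": 0.1}
p2 = {"PER": 0.5, "ORG": 0.1, "O": 0.4}
pooled = logarithmic_opinion_pool([p1, p2], weights=[0.6, 0.4])
```

Because the pool multiplies probabilities, a label must be plausible under every constituent model to score highly, which is one intuition for why diverse constituents regularise the combined model.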