Inference with transposable data: modelling the effects of row and column correlations
Summary. We consider the problem of large‐scale inference on the row or column variables of data in the form of a matrix. Many of these data matrices are transposable, meaning that neither the row variables nor the column variables can be considered independent instances. An example of this scenario is detecting significant genes in microarrays when the samples may be dependent because of latent variables or unknown batch effects. By modelling these matrix data using the matrix‐variate normal distribution, we study and quantify the effects of row and column correlations on procedures for large‐scale inference. We then propose a simple solution to the myriad of problems presented by unexpected correlations: we simultaneously estimate row and column covariances and use these to sphere, or decorrelate, the noise in the underlying data before conducting inference. This procedure yields data with approximately independent rows and columns, so that test statistics more closely follow null distributions and multiple‐testing procedures correctly control the desired error rates. Results on simulated models and real microarray data demonstrate major advantages of this approach: increased statistical power, less bias in estimating the false discovery rate and reduced variance of the false discovery rate estimators.
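The sphering step described in the summary can be illustrated with a short sketch. Assuming the matrix‐variate normal model X ~ N(M, Σ ⊗ Δ) with row covariance Σ and column covariance Δ, one can alternate between estimating the two covariances (a "flip‐flop" style iteration) and then whiten the centred data as Y = Σ^{-1/2} X Δ^{-1/2}. The helper below is a hypothetical illustration under these assumptions, not the authors' estimator:

```python
import numpy as np

def inv_sqrt(S, eps=1e-6):
    # Inverse symmetric square root via eigendecomposition,
    # with a small ridge for numerical stability.
    w, V = np.linalg.eigh(S + eps * np.eye(S.shape[0]))
    return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def sphere(X, n_iter=10, eps=1e-6):
    # Toy "flip-flop" alternating estimation of row covariance Sigma
    # and column covariance Delta for a centred n x p matrix X,
    # followed by whitening Y = Sigma^{-1/2} X Delta^{-1/2}.
    # Illustrative sketch only, not the estimator from the paper.
    n, p = X.shape
    Xc = X - X.mean()          # crude overall centring
    Delta = np.eye(p)
    for _ in range(n_iter):
        Di = inv_sqrt(Delta, eps)
        Sigma = (Xc @ Di @ Di @ Xc.T) / p   # row covariance given Delta
        Si = inv_sqrt(Sigma, eps)
        Delta = (Xc.T @ Si @ Si @ Xc) / n   # column covariance given Sigma
        # fix the scale to resolve the Sigma/Delta multiplicative ambiguity
        Delta /= np.trace(Delta) / p
    return inv_sqrt(Sigma, eps) @ Xc @ inv_sqrt(Delta, eps)
```

After sphering, the rows and columns of the returned matrix are approximately decorrelated, so standard test statistics computed on it more closely follow their null distributions.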
Document Type: Research Article
Affiliations: 1: Baylor College of Medicine and Rice University, Houston, USA 2: Stanford University, Stanford, USA
Publication date: September 1, 2012