
Penalized composite quasi-likelihood for ultrahigh dimensional variable selection

In high dimensional model selection problems, penalized least squares approaches have been extensively used. The paper addresses the question of both robustness and efficiency of penalized model selection methods and proposes a data-driven weighted linear combination of convex loss functions, together with a weighted L1-penalty. The method is completely data adaptive and does not require prior knowledge of the error distribution. The weighted L1-penalty is used both to ensure the convexity of the penalty term and to ameliorate the bias that is caused by the L1-penalty. In the setting with dimensionality much larger than the sample size, we establish a strong oracle property of the proposed method, which enjoys both model selection consistency and estimation efficiency for the true non-zero coefficients. As specific examples, we introduce a robust composite L1-L2 method and an optimal composite quantile method, and evaluate their performance on both simulated and real data examples.
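As a minimal illustration of the idea (not the paper's estimator), the sketch below minimizes a fixed 50/50 weighted combination of absolute-deviation (L1) and squared (L2) losses with a weighted L1 penalty, using plain subgradient descent. In the paper the loss weights are chosen data-adaptively and the penalty weights would come from an initial estimate; here both are fixed constants purely for demonstration.

```python
import numpy as np

def composite_l1l2_loss(beta, X, y, w=(0.5, 0.5)):
    """Weighted combination of absolute (L1) and squared (L2) losses.
    The weights (w1, w2) are fixed here; the paper chooses them data-adaptively."""
    r = y - X @ beta
    return w[0] * np.mean(np.abs(r)) + w[1] * np.mean(r ** 2)

def penalized_objective(beta, X, y, lam, pen_w, w=(0.5, 0.5)):
    """Composite loss plus a weighted L1 penalty: lam * sum_j pen_w[j] * |beta_j|."""
    return composite_l1l2_loss(beta, X, y, w) + lam * np.sum(pen_w * np.abs(beta))

def fit_composite_l1l2(X, y, lam, pen_w, steps=2000, lr=0.05, w=(0.5, 0.5)):
    """Subgradient descent on the penalized composite objective (illustrative only)."""
    n, p = X.shape
    beta = np.zeros(p)
    for t in range(steps):
        r = y - X @ beta
        # subgradient of the composite loss ...
        g = -w[0] * X.T @ np.sign(r) / n - 2.0 * w[1] * X.T @ r / n
        # ... plus a subgradient of the weighted L1 penalty
        g += lam * pen_w * np.sign(beta)
        beta -= lr / np.sqrt(t + 1) * g
    return beta

# Toy example: sparse truth, p = 5, two non-zero coefficients
rng = np.random.default_rng(0)
n, p = 200, 5
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, 0.0, -1.5, 0.0, 0.0])
y = X @ beta_true + 0.1 * rng.standard_normal(n)

beta_hat = fit_composite_l1l2(X, y, lam=0.02, pen_w=np.ones(p))
```

The composite objective stays convex because it is a non-negative combination of convex losses; the penalty weights `pen_w` are what the paper uses to reduce the bias of a plain L1 penalty on the large coefficients.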

Keywords: Composite quasi-maximum likelihood estimation; Lasso; Model selection; Non-polynomial dimensionality; Oracle property; Robust statistics; Smoothly clipped absolute deviation

Document Type: Research Article

Affiliations: 1: Princeton University, USA; 2: University of Texas Health Science Center, Houston, USA

Publication date: 2011-06-01
