This paper is a commentary on Hattis' three laws of risk assessment. The first law, that "application of standard statistical techniques to a single data set will nearly always reveal only a trivial proportion of the overall uncertainty in the parameter value," is illustrated both by examining the relevance of animal models to man and by a retrospective view of exposure conditions whose importance has only recently been recognized. The second law, that "any estimate of the uncertainty of a parameter value will always itself be more uncertain than the estimate of the parameter value," is examined in terms of a model addressing multiple levels of uncertainty, e.g., the "uncertainty in the uncertainty." An argument is made that the number of terms needed for convergence of this uncertainty hierarchy depends on how far one goes from the central tendency of the risk distribution: the further out in the "tail" of the distribution, the more terms in the uncertainty hierarchy are needed for convergence. The third law, that "nearly all parameter distributions look lognormal, as long as you don't look too closely," is illustrated with a number of examples. Several reasons are put forward as to why risk variables so frequently appear to be lognormal. Recognizing the lognormal character of variable distributions can provide insight into the proper form for the associated uncertainty distributions.
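One commonly cited reason risk variables look lognormal is multiplicative: a risk value is often the product of several independent positive factors, so its logarithm is a sum of logs and, by the central limit theorem, tends toward normality. The following minimal sketch illustrates this mechanism under assumed, hypothetical factor distributions (the factor count and ranges are illustrative, not drawn from the paper):

```python
import math
import random
import statistics

random.seed(0)

def simulated_risk(n_factors=8):
    """One hypothetical risk value: a product of independent positive
    multipliers (e.g., exposure, uptake, and potency factors).  The
    specific uniform(0.5, 2.0) factor distribution is an assumption
    made purely for illustration."""
    product = 1.0
    for _ in range(n_factors):
        product *= random.uniform(0.5, 2.0)
    return product

samples = [simulated_risk() for _ in range(20_000)]
logs = [math.log(x) for x in samples]

def skewness(data):
    """Population skewness: third standardized central moment."""
    m = statistics.fmean(data)
    s = statistics.pstdev(data)
    return statistics.fmean(((x - m) / s) ** 3 for x in data)

# The raw products are strongly right-skewed, while their logs are
# roughly symmetric -- the signature of an approximately lognormal shape.
print(f"skew(raw values) = {skewness(samples):.2f}")
print(f"skew(log values) = {skewness(logs):.2f}")
```

On this view, "looking too closely" reveals departures from lognormality because the factor count is finite and the factors are not exactly independent, which is consistent with the third law's caveat.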