Welcome to RegCheck, a blog on regulatory issues sponsored by Regulatory Checkbook, a nonprofit and nonpartisan organization founded in 2001 whose mission is to enhance the quality of science and economic analysis used as the foundation for regulatory decision-making.
Unlike most public-policy-oriented nonprofits, Regulatory Checkbook does not engage in advocacy with respect to substantive public policy issues. Our advocacy is limited to:
- The use of unbiased scientific information, where unbiased means policy-neutral. Much of the scientific information used in regulatory decision-making contains embedded, and usually undisclosed, public policy preferences. Information containing embedded policy preferences tends to lead to policy decisions that tilt in favor of those preferences.
- Technically correct economic analysis of the consequences of regulatory decisions, including the consequences of taking no action. Our framework is explicitly and unapologetically benefit-cost analysis properly applied, with rigor appropriate to the scale and scope of the consequences of regulatory decisions. By "correct" we mean free of material error in the application of fundamental concepts. We define a material error as one so great that, if corrected, it would alter a decision-maker's choice of alternative, presuming that he or she is motivated to seek the greatest possible net social benefits (a compact statement of this criterion appears after this list).
- The use of information that meets appropriate data quality standards. Because there are no scholarly standards for information quality in these areas, we adhere generally to the government-wide guidelines established in 2002 by the federal Office of Management and Budget. OMB established a flexible regime in which the quality of information disseminated by federal agencies must satisfy standards of utility, integrity and objectivity that rise with the scale and scope of the information's significance.
- The use of peer review procedures appropriately designed for public policy applications. Traditional scholarly peer review is intended to achieve very limited purposes -- i.e., deciding which grant proposals to fund and which scientific papers to publish. Fundamental "correctness" is not a criterion used in scholarly or academic peer review. Nevertheless, governments have borrowed these procedures for an altogether different purpose -- i.e., determining which inferences and conclusions drawn from vast scientific and economic literatures are "correct".
- The use of oversight procedures that (a) reward high-quality scientific data, economic analysis and logical reasoning, and (b) enhance the likelihood that material errors in science and economics will be detected and removed. Effective oversight requires, inter alia, broad public access to data, models and analytic methods so that appropriately trained professionals and scholars can reproduce (and perhaps improve upon) regulatory agency work products prior to their use in regulatory decision-making. Oversight procedures that deny public access to critical information or adequate review time inherently fail these tests of public accountability.
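For readers who want the net-benefits criterion stated compactly, here is a minimal sketch in notation of our own choosing (not drawn from any official guidance): a decision-maker choosing among regulatory alternatives $a$ in a set $A$ (which includes taking no action) should select

\[ a^{*} = \arg\max_{a \in A} \, \mathrm{NB}(a), \qquad \mathrm{NB}(a) = B(a) - C(a), \]

where $B(a)$ and $C(a)$ are the total social benefits and costs of alternative $a$. In these terms, an estimation error is material, in the sense used above, if correcting it would change $a^{*}$; an error that leaves $a^{*}$ unchanged is immaterial to the choice, however large it may be in absolute terms.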
Regulatory Checkbook has produced a considerable amount of information on these topics, but little of it is widely available. I will be posting this archival information as time permits.
N. B. When we began conducting policy assessments and benefit-cost analyses in the 1970s, calculations were done by hand and statistical analyses required overnight use of a university's mainframe. An IBM Selectric was considered the ultimate in word processing capability. The best way to describe the changes that have occurred over the past 30 years may be to shamelessly borrow from economist Ben Stein, who said in Ferris Bueller's Day Off: "Wow."
This blog represents a controlled scientific experiment testing the hypothesis that old dogs cannot learn new tricks. In the great tradition of scientific method, we hope to falsify that hypothesis.