The judicial performance evaluation (JPE) commission is the key player in the JPE process. The commission generally has very few binding rules to follow and thus has a great deal of discretion in how to proceed. This preliminary investigation suggests that JPE commissions may be relying heavily on attorney surveys to identify recipients of negative recommendations.
Judicial performance evaluations (JPEs) are a critical part of selecting judges, especially in states using merit-based selection systems. This article presents empirical evidence that gender and race bias persist in attorney surveys conducted in accordance with the ABA's Guidelines. This systematic bias stems from a more general problem with the design and implementation of JPE surveys, one that produces predictable weaknesses in the reliability and validity of the information these instruments collect. This is a particularly troubling outcome, as it means that we are subjecting many judges to state-sponsored evaluations that are systematically biased against women and minorities.
The federal government has expressed fear that immigrants abuse the appellate process to delay their deportations by filing meritless petitions for review with the federal courts. Some courts have responded to these concerns by imposing stricter standards for issuing stays of removal, so that the government can more easily deport petitioners even while their appeals remain pending. The risk with this approach is that immigrants who ultimately prevail may be erroneously deported. What is often overlooked is that the potential for abuse is really a function of time, with longer appeals posing a greater threat to immigration enforcement. This study presents new empirical evidence showing that most circuit courts actually decide immigration appeals faster than previously assumed. Moreover, in many circuits the appeals most likely to be frivolous are resolved especially quickly. These results undermine the concerns that lead the government to oppose stays of removal and illustrate the importance of efficient case management systems to the administration of justice.
The government may deport an immigrant appealing a deportation order in federal court even before the court rules on the case, unless the court issues a stay of removal. In its 2009 decision in Nken v. Holder, the Supreme Court clarified that the legal standard for stays of removal is the same test courts use for preliminary injunctions. Yet Justice Kennedy expressed frustration that the Court had little data to inform its decision. The Court will likely need to revisit this issue, as doubts cloud the meaning of Nken’s main holdings, in part because the government misled the Court. This Article responds to Justice Kennedy’s request for data and sheds light on the doctrinal controversies surrounding stays by presenting groundbreaking empirical analysis of 1646 cases in all the circuits that hear immigration appeals. It offers a singular window into an arena of adjudication where decisions are rarely articulated in writing. Among our most important findings, the circuit courts denied stays of removal in about half of the appeals that were ultimately granted, an alarming type of error that could result in people being errantly deported to countries where they risk persecution or torture. Our results also suggest that legal doctrine makes an important difference in how accurately courts identify which cases merit a stay, but that no magic bullet exists to avoid errors. In order to adopt an effective approach to stays of removal, courts must confront an important value judgment about whether to err on the side of preventing wrongful removal or on the side of avoiding delayed deportation.
Judicial Performance Evaluation (JPE) is generally seen as an important part of the merit system, which often suffers from a lack of relevant voter information. Utah's JPE system has undergone significant change in recent years. Using data from the two most recent JPE surveys, we provide a preliminary look at the operation of this new system. Our results suggest that the survey component has difficulty distinguishing among the judges on the basis of relevant criteria. The question prompts intended to measure performance on different ABA Categories are also indistinguishable from one another. We also find evidence that, on some measures, female judges fare disproportionately worse than male judges. We suggest that the free-response comments and the new Court Observation Program results may improve the commission's ability to make meaningful distinctions among the judges on the basis of appropriate criteria.
Public debate on state judicial elections versus merit selection spans more than a century. The empirical evidence suggests there is no "best" system for selecting judges; all systems have advantages and disadvantages. The relative merit of the various systems depends on the goals we wish to maximize.
In 2011, Ramji-Nogales, Schoenholtz, and Schrag published Refugee Roulette, a book that changed the academic conversation about refugee status determination (RSD) in the United States (US). Building on a previous law review article with the same name, the book amassed an extraordinary amount of data on asylum adjudications at every level of the American RSD regime, and demonstrated empirically what everyone familiar with the system had already sensed: that the particular decision maker to whom an asylum seeker is assigned is the single biggest determinant of whether they will be granted refugee status. Even when the applicant's country of origin was held constant, the study revealed that some decision makers rejected almost every claim, while colleagues in the same office accepted the vast majority of claims they heard.