Abstract
In this paper we explore the community college (institutional) effect on student outcomes in the nation's largest public two-year higher education system—the California Community College system. We investigate whether there are significant differences in student outcomes across community college campuses after adjusting for observed student differences and potential unobserved determinants that drive selection. To do so, we leverage a unique administrative dataset that links community college students to their K–12 records in order to control for key student inputs. We find meaningful differences in student outcomes across California's Community Colleges, even after adjusting for differences in student inputs. We also compare college rankings based on unadjusted mean differences with college rankings adjusted for student inputs. Our results suggest that policymakers wishing to rank schools based on quality should adjust such rankings for differences in student-level inputs across campuses.
Identifying college quality has been a key element of the Obama administration's efforts to increase accountability in higher education. In 2013, the White House launched the College Scorecard with the goal of providing students and their families information about the “cost, value, and quality” of specific colleges in order to make more informed decisions (U.S. Department of Education 2015). Beyond transparency, the administration is also pushing for performance-based funding in higher education (White House 2013). Specifically, President Obama's proposal aims, by 2018, to tie federal aid to a rating system of colleges based on affordability, student completion rates, and graduate earnings.
These proposed ratings have drawn much discussion, including skepticism about the quality of the data used for the ratings and whether, as the president of the University of California system, Janet Napolitano, states, “criteria can be developed that are in the end meaningful” (Anderson 2013). Admittedly, policymakers have recognized the host of issues involved in developing the accountability metrics and have solicited feedback on the college ratings methodology.
Among the many critiques of the rating systems is whether it is reasonable to compare institutions that are quite different from one another in terms of their institutional goals and the student populations they serve. Some have noted that even if scorecard rankings are adjusted for institutional or individual differences across campuses, biases will still favor elite institutions and institutions that serve more traditional college students (Gross 2013). Relatedly, others worry that a rating system, particularly one tied to performance, is “antithetical” to the open access mission of community colleges (Fain 2013).
The idea of performance-based accountability may be novel in higher education, but in K–12 it has been at the heart of both federal and state accountability systems, which developed—albeit with varying success—structures to grade K–12 schools on a variety of performance measures. Long before state and federal accountability systems took hold, school leaders and the research community were preoccupied with understanding the unique effects of schools on individual outcomes. Nearly fifty years after the Coleman Report, many scholarly efforts have been made to isolate the specific contribution of schools to student outcomes, controlling for individual and family characteristics.
Several studies since this canonical report, which concluded that the differences between K–12 schools account for only a small fraction of differences in pupil achievement, find that school characteristics explain less than 20 percent of the variation in student outcomes, though one study concludes that as much as 40 percent is attributable to schools, even after taking into account students’ family background (Startz 2012; Borman and Dowling 2010; Rumberger and Palardy 2005; Rivkin, Hanushek, and Kain 2005; Goldhaber et al. 2010). In higher education, however, school effects have primarily focused on college selectivity, or have been constrained by existing aggregate data and small samples.
In this paper, we explore the community college (institutional) effect on student outcomes in the nation's largest public two-year higher education system—the California Community College system. We seek to know whether differences in student outcomes across community college campuses are significant after adjusting for observed student differences and potential unobserved determinants that drive selection. Additionally, we ask whether college rankings based on unadjusted mean differences across campuses provide meaningful information. To do so, we leverage a unique administrative dataset that links community college students to their K–12 records to control for key student inputs.
Results show that differences in student outcomes across the 108 California Community Colleges in our sample, after adjusting for differences in student inputs, are meaningful. For example, our lower-bound estimates show that going from the 10th to 90th percentile of campus quality is associated with a 3.68 (37.3 percent) increase in student transfer units earned, a 0.14 (20.8 percent) increase in the probability of persisting, a 0.09 (42.2 percent) increase in the probability of transferring to a four-year college, and a 0.08 (26.6 percent) increase in the probability of completion. We also show that college rankings based on unadjusted mean differences can be quite misleading. After adjusting for differences across campuses, the average school rank changed by over thirty ranks. Our results suggest that policymakers wishing to rank schools based on quality should adjust such rankings for differences in student-level inputs across campuses.
BACKGROUND
Research on college quality has focused largely on more selective four-year colleges and universities, and on the relationship between college quality and graduates’ earnings. Reasons for students wanting to attend elite private and public universities are sound. More selective institutions appear to have a higher payoff in terms of persistence to degree completion (Alon and Tienda 2005; Bowen, Chingos, and McPherson 2009; Small and Winship 2007; Long 2008), graduate or professional school attendance (Mullen, Goyette, and Soares 2003), and earnings later in life (Black and Smith 2006; Hoekstra 2009; Long 2008; Monks 2000). However, empirical work on the effect of college quality on earnings is a bit more mixed (Brand and Halaby 2006; Dale and Krueger 2002; Hoekstra 2009; Hoxby 2009).
The difficulty in establishing a college effect results from the nonrandom selection of students into colleges of varying qualities (Black and Smith 2004). Namely, the characteristics that lead students to apply to particular colleges may be the same ones that lead to better postenrollment outcomes. Prior work has addressed this challenge largely through conditioning on key observable characteristics of students, namely, academic qualifications. To more fully address self-selection, Stacy Dale and Alan Krueger (2002, 2012) adjust for the observed set of institutions to which students submitted an application. They argue that the application set reflects students' perceptions, or “self-revelation,” about their academic potential (2002); students who apply to more selective colleges and universities do so because they believe they can succeed in such environments. They find relatively small differences in outcomes between students who attended elite universities and those who were admitted but chose to attend a less selective university. Jesse Cunha and Trey Miller (2014) examine institutional differences in student outcomes across Texas's thirty traditional four-year public colleges. Their results show that controlling for student background characteristics (race, gender, free lunch, SAT score, and so on), the quality of high school attended, and application behavior significantly reduces the mean differences in average earned income, persistence, and graduation across four-year college campuses. However, recent papers that exploit a regression discontinuity in the probability of admission find larger positive returns to attending a more selective university (Hoekstra 2009; Anelli 2014).
Community colleges are the primary point of access to higher education for many Americans, yet research on quality differences between community colleges has been scant. The multiple missions and goals of community colleges have been well documented in the academic literature (Rosenbaum 2001; Dougherty 1994; Grubb 1991; Brint and Karabel 1989). Community colleges have also captured the attention of policymakers concerned with addressing workforce shortages and improving the overall economic health of the nation (see The White House 2010). The Obama administration identified community colleges as key drivers in the push to increase the stock of college graduates in the United States and to raise the skills of the American workforce. “It's time to reform our community colleges so that they provide Americans of all ages a chance to learn the skills and knowledge necessary to compete for the jobs of the future,” President Obama remarked at a White House Summit on Community Colleges.
The distinct mission and open access nature of community colleges and the diverse goals of the students they serve make it difficult to assess differences in quality across campuses. First, it is often unclear which outcomes should actually be measured (Bailey et al. 2006). Moreover, selection issues into community colleges may differ from those between four-year institutions. Nevertheless, community college quality has been a key component of the national conversation about higher education accountability. This paper is not the first to explore institutional quality differences among community colleges. A recent study of variation in success measures across North Carolina's fifty-eight community colleges finds that, conditional on student differences, colleges were largely indistinguishable from one another in degree receipt or transfer coursework, save for the differences between the very top and very bottom performing colleges (Clotfelter et al. 2013). Other efforts have looked at the role of different institutional inputs as proxies for institutional quality. In particular, Kevin Stange (2012) exploits differences in instructional expenditures per student across community colleges and finds no impact on student attainment, degree receipt, or transfer. This finding is consistent with that of Juan Calcagno and his colleagues (2008), though they identify several other institutional characteristics that do influence student outcomes. Specifically, larger enrollment, more minority students, and more part-time faculty are associated with lower degree attainment and lower four-year transfer rates (Calcagno et al. 2008).
In this paper, we explore institutional effects of community colleges in the state with the largest public two-year community college system, using a unique administrative dataset that links students’ K–12 data to postsecondary schooling at community college.
Setting
California is home to the nation's largest public higher education system, including its 112-campus community college system. Two-thirds of all California college students attend a community college. The role of community colleges as a vehicle for human capital production was the cornerstone of California's 1960 Master Plan for Higher Education, which stipulated that the California community college system would admit “any student capable of benefiting from instruction” (State of California 1960).1 Over the years, the system has grown and its schools have been applauded for remaining affordable, open access institutions. However, the colleges are also continually criticized for producing weak outcomes, in particular low degree receipt and transfer rates to four-year institutions (Shulock and Moore 2007; Sengupta and Jepsen 2006).
Several years before Obama's proposed college scorecard, California leaders initiated greater transparency and accountability in performance through the Student Success Act, signed into law by Governor Brown in 2012. Among the components of this act is an accountability scorecard, the Student Success Scorecard, that tracks several key dimensions of student success: remedial course progression rate; persistence rates; completion of a minimum of thirty units (roughly equivalent to one year of full-time enrollment status); subbaccalaureate degree receipt and transfer status; and certificate, degree, or transfer among career and technical education (CTE) students. This scorecard is focused not on comparing institutions but rather on performance improvement over time within institutions. Nevertheless, policymakers desire critical information about the effectiveness of the postsecondary system to improve human capital production in the state and to increase postsecondary degree receipt.
In 2013, the community college system in California (CCC) served more than 2.5 million students from a tremendous range of demographic and academic backgrounds. California's community colleges are situated in urban, suburban, and rural areas of the state, and their students come from public high schools that are both among the best and among the worst in the nation. California is an ideal state to explore institutional differences at community colleges because of the large number of institutions present, and because of the larger governance structure of the CCC system and its articulation to the state's public four-year colleges. Moreover, the diversity of California's community college population reflects the student populations of other states in the United States and the mainstream public two-year colleges that educate them. Given the diversity of California's students and public schools, and the increasing diversity of students entering the nation's colleges and universities,2 we believe that other states can learn important lessons from California's public postsecondary institutions.
RESEARCH DESIGN
To explore institutional differences between community colleges, we use an administrative dataset that links four cohorts of California high school juniors to the community college system. These data were provided by the California Community College Chancellor's Office and the California Department of Education. Because California does not have an individual identifier that follows students from K–12 to postsecondary schooling, we linked all transcript and completion data for four first-time freshmen fall-semester cohorts (2004–2008), ages seventeen to nineteen, enrolled at a California community college with the census of California eleventh-grade students with standardized test score data. The match, performed on name, birth date, high school attended, and cohort, initially captured 69 percent of first-time freshmen ages seventeen through nineteen enrolled at a California community college (consistent with similar studies conducted by the California Community College Chancellor's Office matched to K–12 data).3
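To make the linkage procedure concrete, a minimal sketch of the deterministic match is shown below. All field names (first_name, last_name, dob, hs_code, cohort) are hypothetical stand-ins for the actual administrative fields, and the real match (see note 3) involved additional cleaning not shown here.

```python
import pandas as pd

def link_k12_to_ccc(k12: pd.DataFrame, ccc: pd.DataFrame) -> pd.DataFrame:
    """Deterministic link of grade 11 records to first-time CCC freshmen.

    Illustrative only: first_name, last_name, dob, hs_code, and cohort are
    hypothetical stand-ins for the actual administrative fields. Names are
    compared on their first three letters, per the matching rule in note 3.
    """
    k12, ccc = k12.copy(), ccc.copy()
    for df in (k12, ccc):
        df["fn3"] = df["first_name"].str.lower().str[:3]
        df["ln3"] = df["last_name"].str.lower().str[:3]

    keys = ["fn3", "ln3", "dob", "hs_code", "cohort"]
    linked = k12.merge(ccc, on=keys, how="inner", suffixes=("_k12", "_ccc"))

    # Drop ambiguous links (several students sharing a key), one source of
    # the duplicates discussed in note 3.
    return linked.drop_duplicates(subset=keys, keep=False)
```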
The California Community Colleges is an open access system, one in which any student can take any number of courses at any time, including, for example, while enrolled in high school or during the summer before college for those who intend to start as first-time freshmen at a four-year institution. In addition, community colleges serve multiple goals, including facilitating transfer to four-year universities, awarding subbaccalaureate degrees and certificates, providing career and technical education and basic skills instruction, and supporting lifelong learning. We restrict the sample for our study to first-time freshmen of traditional college age. We built cohorts of students who started in the summer or fall within one year of graduating high school, who attempted more than two courses (six units) in their first year, and who had complete high school test and demographic information. This sample contains 254,865 students across 108 California community college campuses.4
Measures
We measure four outcomes intended to capture community college success in the short term through credit accumulation and persistence into year two, as well as through degree-certificate receipt and four-year transfer. First, we measure how many transferrable units a student completes during the first year. This includes units that are transferrable to California's public four-year universities (the University of California system and the California State University system) that were taken at any community college. Second, we measure whether a student persists to the second year of community college. This outcome indicates whether a student attempts any units in the fall semester after the first year at any community college in California. Third, we measure whether a student ever transfers to a four-year college. Using National Student Clearinghouse data that the CCC Chancellor's office linked with their own data, we are able to tell whether a student transferred to a four-year college at any point after attending a California community college. Last, we measure degree-certificate completion at a community college. This measure indicates whether a student earned an AA degree, earned a sixty-unit certificate, or transferred to a four-year university. These outcomes represent only a few of the community college system's many goals, and as such are not meant to be an exhaustive list of how we might examine community college quality or effectiveness.
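As a concrete illustration of how these four outcomes might be coded from student-level records, a minimal sketch follows. The field names and file layout are hypothetical, and the sketch omits the detailed unit and award coding rules used in the actual data build.

```python
import pandas as pd

def build_outcomes(terms: pd.DataFrame, awards: pd.DataFrame) -> pd.DataFrame:
    """Sketch of the four student-level outcomes (hypothetical field names).

    terms:  one row per student-term with student_id, year_in_college, term,
            transferable_units (UC/CSU-transferable), and units_attempted.
    awards: one row per student with student_id, earned_aa, sixty_unit_cert,
            and transferred_4yr flags.
    """
    # Outcome 1: UC/CSU-transferable units completed in the first year.
    transfer_units = (terms.loc[terms["year_in_college"] == 1]
                      .groupby("student_id")["transferable_units"]
                      .sum().rename("transfer_units_y1"))

    # Outcome 2: persistence, i.e., any units attempted in the fall of year two.
    fall_y2 = terms.query("year_in_college == 2 and term == 'fall'")
    persist = (fall_y2.groupby("student_id")["units_attempted"].sum() > 0)
    persist = persist.rename("persist_y2")

    # Outcome 3: ever transfers to a four-year college.
    flags = awards.set_index("student_id")
    transfer_4yr = flags["transferred_4yr"].astype(bool).rename("transfer_4yr")

    # Outcome 4: completion = AA degree, sixty-unit certificate, or transfer.
    completion = (flags["earned_aa"].astype(bool)
                  | flags["sixty_unit_cert"].astype(bool)
                  | transfer_4yr).rename("completion")

    return (pd.concat([transfer_units, persist, transfer_4yr, completion], axis=1)
            .fillna(0))
```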
Our data are unique in that we have the ability to connect a student's performance and outcomes at community college with his or her high school data. As community colleges are open access, students do not submit transcripts from their high school, and have not necessarily taken college entrance exams such as the SAT or ACT to enter. As a result, community colleges often know very little about their students’ educational backgrounds. Researchers interested in understanding the community college population often face the same constraints. Examining the outcomes of community colleges without considering the educational backgrounds of the students enrolling in that college may confound college effects with students’ self-selection.
To address ubiquitous selection issues, we adjust our estimates of quality for important background information about a student's high school academic performance. We measure a student's performance on the eleventh grade English and mathematics California Standardized Tests (CSTs).5 We also determine which math course a student took in eleventh grade. In addition, we measure race-ethnicity, gender, and parent education levels from the high school file as sets of binary variables.
To account for high school quality, we include the Academic Performance Index (API) of the high school attended. Importantly, as students are enrolling in community college, they are asked about their goals for attending community college. Students can pick from a list of fifteen choices, including transfer with an associate's degree, transfer without an associate's degree, vocational certification, discover interests, improve basic skills, undecided, and others. We include students' self-reported goals as an additional covariate for their postsecondary degree intentions. Last, we add controls for college-by-cohort means of our individual characteristics (eleventh grade CST math and English scores, race-ethnicity, gender, parental education, API, and student goal). Table 1 includes descriptive statistics on all of our measures at the individual level; table 2 includes descriptive statistics at the college level.6
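The college-by-cohort means used as additional controls can be built with a grouped transform; the sketch below assumes a student-level data frame with illustrative column names.

```python
import pandas as pd

def add_college_cohort_means(students: pd.DataFrame,
                             covariates=("cst_math", "cst_english", "api",
                                         "female", "parent_ed_ba")) -> pd.DataFrame:
    """Append college-by-cohort means of student covariates (x̄_cy in eq. 1).

    `students` is assumed to hold one row per student with college_id, cohort,
    and the listed covariates; all column names are illustrative.
    """
    out = students.copy()
    grouped = out.groupby(["college_id", "cohort"])
    for var in covariates:
        out[f"mean_{var}"] = grouped[var].transform("mean")
    return out
```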
Empirical Methods
We begin by examining our outcomes across the community colleges in our sample. Figure 1 presents the distribution of total transfer units, the proportion persisting to year two, the proportion transferring, and the proportion completing across our 108 community colleges. To motivate the importance of accounting for student inputs, we plot each outcome against students' eleventh grade math test scores at the college level (figure 2).
From these simple scatterplots it is clear that higher average student test scores are associated with better average college outcomes. However, we also note considerable variation in average outcomes across colleges whose students have similar high school test scores.
To examine whether there are significant differences in quality across community college campuses, we estimate the following linear random effects model:
$$Y_{iscty} = \alpha_{ty} + x_i'\beta + \bar{x}_{cy}'\gamma + \delta w_s + \zeta_c + \varepsilon_{iscty} \qquad (1)$$

where Y_iscty is our outcome variable of interest (transfer units earned, persistence into year two, transfer to a four-year institution, or degree-certificate completion) for individual i, from high school s, who is a first-time freshman enrolled at community college c, in term t in year y; α_ty is a set of year-by-semester indicators; x_i is a vector of individual-level characteristics (race-ethnicity, gender, parental education, and eleventh grade math and English language arts test scores); x̄_cy is the vector of community college by cohort means of x_i; w_s is a measure of the quality of the high school (California's API score)7 attended by each individual; ζ_c is the community college random effect; and ε_iscty is the individual-level error term.
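A minimal sketch of estimating equation (1) as a random-intercept model is shown below, using a standard mixed-model routine; the variable names and formula are illustrative rather than the authors' actual specification, and the college-by-cohort means would be added to the formula for the lower-bound specification discussed later.

```python
import statsmodels.formula.api as smf

def fit_college_effects(df):
    """Random-intercept model in the spirit of equation (1).

    df is a student-level frame with illustrative column names: outcome,
    cst_math, cst_english, female, parent_ed, api, term_year, and college_id.
    """
    model = smf.mixedlm(
        "outcome ~ cst_math + cst_english + C(female) + C(parent_ed) + api"
        " + C(term_year)",
        data=df,
        groups=df["college_id"],  # random intercept (zeta_c) for each college
    )
    result = model.fit(reml=True)

    # Estimated variance of the college random effect (the quantity reported
    # in table 3) and the predicted college effects (empirical Bayes / BLUPs).
    var_college = result.cov_re.iloc[0, 0]
    college_effects = {college: eff.iloc[0]
                       for college, eff in result.random_effects.items()}
    return var_college, college_effects
```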
The main parameter of interest is the community college random effect, ζ_c.8 We estimate ζ_c using an empirical Bayes shrinkage estimator to adjust for reliability. The empirical Bayes estimates are best linear unbiased predictors (BLUPs) of each community college's random effect (quality), which take into account the variance (signal to noise) and the number of observations (students) at each college campus. Estimates of ζ_c with a higher variance and fewer observations are shrunk toward zero (Rabe-Hesketh and Skrondal 2008).
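For a simple random-intercept model, the shrinkage behind these BLUPs has a familiar closed form; the expression below is the textbook formula (see Rabe-Hesketh and Skrondal 2008) written in the notation of equation (1), not a quantity reported in table 3.

```latex
% Empirical Bayes (BLUP) prediction of the college effect for campus c, in the
% notation of equation (1): sigma^2_zeta is the variance of the college effect,
% sigma^2_epsilon the student-level error variance, n_c the number of students
% observed at campus c, and rbar_c the campus-c mean regression residual.
\hat{\zeta}_c^{\,EB}
  \;=\;
  \underbrace{\frac{\hat{\sigma}^2_{\zeta}}
                   {\hat{\sigma}^2_{\zeta} + \hat{\sigma}^2_{\varepsilon}/n_c}}
             _{\text{reliability }\lambda_c}
  \;\bar{r}_c ,
\qquad 0 \le \lambda_c \le 1 .
```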
The empirical Bayes technique is commonly used in measuring the quality of hospitals (Dimick, Staiger, and Birkmeyer 2010), schools or neighborhoods (Altonji and Mansfield 2014), and teachers (Kane, Rockoff, and Staiger 2008; Carrell and West 2010). In particular, we use methodologies similar to those recently used in the literature to rank hospital quality, which show the importance of adjusting mortality rates for patient risk (Parker et al. 2006) and statistical reliability (caseload size) (Dimick, Staiger, and Birkmeyer 2010). In our context, we similarly adjust our college rankings for “student risk” (such as student preparation, quality, and unobserved determinants of selection) as well as potential noise in our estimates driven by differences in campus size and student population.
RESULTS
Are There Measured Differences in College Outcomes?
Because we are interested in knowing whether student outcomes differ across community college campuses, we start by examining whether the variation in our estimates of ζ_c for our various outcomes of interest is significant. Table 3 presents results of the estimated variance, σ²_ζ, in our college effects for various specifications of equation (1). High values of σ²_ζ indicate there is significant variation in student outcomes across community college campuses, while low values of σ²_ζ would indicate that there is little difference in student outcomes across campuses (that is, no difference in college “quality”).
In row 1, we start with the most naïve estimates, which include only a year-by-semester indicator variable. We use these estimates as our baseline model for comparative purposes and consider this to be the upper bound of the campus effects. These unadjusted estimates are analogous to comparing means (adjusted for reliability) in student outcomes across campuses. Estimates of σ²_ζ in row 1 show considerable variation in mean outcomes across California's community college campuses.
For ease of interpretation, we discuss these effects in standard deviation units. For our transfer units completed outcome in column 1, the estimated variance in the college effect of 4.86 suggests that a one standard deviation difference in campus quality is associated with an average difference of 2.18 transfer units completed in the first year for each student at that campus. Likewise, variation across campuses in our other three outcome measures is significant. A one standard deviation increase in campus quality is associated with a 6.3 percentage point increase in the probability of persisting to year two (σ²_ζ = 0.0042), a 7.3 percentage point increase in the probability of transferring to a four-year college (σ²_ζ = 0.0056), and a 7.3 percentage point increase in the probability of completion (σ²_ζ = 0.0056).9
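For reference, the standard deviation figures quoted here are simply the square roots of the estimated variances; a worked example for the persistence outcome follows (small differences from the figures in the text reflect rounding of the displayed variances).

```latex
% Standard deviation of campus quality implied by an estimated variance;
% example: the persistence outcome from row 1 of table 3.
\sigma_{\zeta} = \sqrt{\sigma^{2}_{\zeta}},
\qquad
\sqrt{0.0042} \approx 0.065
\;\; \text{(roughly a 6 percentage point change in the probability of persisting).}
```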
One potential concern is that our estimates of ζ_c may be biased due to differences in student quality (aptitude, motivation, and so on) across campuses. That is, the mean differences in student outcomes across campuses that we measure in row 1 may not be due to real differences in college quality, but rather to differences (observable or unobservable) in student-level inputs (such as ability). To highlight this potential bias, figure 2 shows considerable variation across campuses in our measures of student ability. The across-campus standard deviation in eleventh grade CST math and English scores is 0.25 and 0.27 standard deviations, respectively.
Therefore, in results shown in rows 2 through 5 of table 3, we sequentially adjust our estimates of ζ_c for a host of student-level covariates. This procedure is analogous to the hospital quality literature that calculates “risk adjusted” mortality rates by controlling for patient observable characteristics (Dimick, Staiger, and Birkmeyer 2010). Results in row 2 control for eleventh grade math and English standardized test scores. Row 3 additionally controls for our vector of individual-level demographic characteristics (race-ethnicity, gender, and parental education level). Results in row 4 add a measure of student motivation, an indicator for a student's reported goal to transfer to a four-year college. Finally, in row 5 we add a measure of the quality of the high school that each student attended, as measured by California's API score.
The pattern of results in rows 2 through 5 suggests that controlling for differences in student-level observable characteristics accounts for some, but not all, of the differences in student outcomes across community colleges. Results for our transfer units earned outcome in column 1 show that the estimated variance in the college effects shrinks by 37 percent when going from our basic model to the fully saturated model. Despite this decrease, there still remains considerable variation in our estimated college effects, with a one standard deviation increase in campus quality associated with a 1.73 increase in the average number of transfer units completed by each student (σ²_ζ = 3.07).
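The percentage reductions reported here and below follow directly from the estimated variances in table 3; for the transfer units outcome, for example:

```latex
% Proportional reduction in the estimated college-effect variance when moving
% from the unadjusted model (row 1) to the model with the full set of
% student-level controls (row 5), transfer units outcome.
\frac{\hat{\sigma}^{2}_{\zeta,\text{row 1}} - \hat{\sigma}^{2}_{\zeta,\text{row 5}}}
     {\hat{\sigma}^{2}_{\zeta,\text{row 1}}}
  = \frac{4.86 - 3.07}{4.86} \approx 0.37 .
```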
Examining results for our other three outcomes of interest, we find that controlling for student-level covariates shrinks the estimated variance in college quality by 26 percent for our persistence outcome, 70 percent for our transfer outcome, and 60 percent for completion. Again, despite these rather large decreases in the variance of the estimated college effects, considerable variation remains in student outcomes across campuses. A one standard deviation increase in college quality is associated with a 0.053 increase in the probability of persisting (σ²_ζ = 0.0031), a 0.039 increase in the probability of transferring (σ²_ζ = 0.0017), and a 0.045 increase in the probability of completion (σ²_ζ = 0.0022). Graphical representations of the BLUPs from model 5 are presented in figure 3.
Although the estimates shown in row 5 control for a rich set of individual-level observable characteristics, there remains potential concern that our campus quality estimates may still be biased due to selection on unobservables that are correlated with college choice (Altonji, Elder, and Taber 2005). To directly address this concern, recent work by Joseph Altonji and Richard Mansfield (2014) shows that controlling for group averages of observed individual-level characteristics adequately controls for selection on unobservables and provides a lower bound of the estimated variance in school quality effects.10
Therefore, in results shown in row 6 we additionally control for college-by-cohort means of our individual characteristics (eleventh grade CST math and English scores, race-ethnicity, gender, parental education, and API score). We find that controlling for these college-level covariates shrinks the estimated variance in college quality relative to the naïve model (model 1) by 39 percent for transfer units, 36 percent for our persistence outcome, 71 percent for our transfer outcome, and 64 percent for completion. Model 5 remains our preferred specification; however, even in this highly specified model, we still find considerable variation in student outcomes across community college campuses.
Exploring Campus Ranking
Given recent proposals by the Obama administration to create a college scorecard, it is particularly critical to determine how stable (or unstable) our college quality estimates, ζ_c, are across specifications with various control variables. On the one hand, if our naïve estimates in row 1 result in a similar rank ordering of colleges as the fully saturated estimates in rows 5 and 6, then scorecards based on unadjusted mean outcomes will provide meaningful information to prospective students. On the other hand, if the rank ordering of the estimated ζ_c's is unstable across specifications, it is critical that college scorecards be adjusted for various student-level inputs.11
To help answer this question, we examine how the rank ordering of our college quality estimates changes after controlling for our set of observable student characteristics. Figure 4 graphically presents the unadjusted and adjusted estimated college quality effects for our transfer unit outcome (our preferred specification, model 5 from table 3).
The squares represent the unadjusted effects, and the dots the effects and 95 percent confidence intervals after adjusting for student-level covariates. This graph highlights two important findings: schools at the very bottom and very top end of the quality distribution tend to stay at the bottom and top of the rankings, and movement up and down in the middle of the distribution is considerable. This result indicates that unadjusted mean outcomes may be valuable in predicting the very best and very worst colleges, but they likely do a poor job in predicting the variation in college quality in the middle of the distribution. The same pattern can be noted in the other outcomes not pictured.
In a more detailed look at how the rankings of college quality change when adjusting for student-level covariates, figure 5 plots rank changes in first-year transfer units by campus. This graph shows that the rank ordering of campuses changes considerably after controlling for covariates. The average campus moved up or down about thirty ranks, with the largest gain being seventy-five ranks and the largest drop forty-nine.
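Computing such rank changes from two sets of estimated college effects is straightforward; the sketch below is illustrative and assumes two series of effects (unadjusted and adjusted), indexed by campus, with larger values indicating better outcomes.

```python
import pandas as pd

def rank_changes(unadjusted: pd.Series, adjusted: pd.Series) -> pd.DataFrame:
    """Compare campus rankings before and after covariate adjustment.

    Both inputs are hypothetical: series of estimated college effects (e.g.,
    BLUPs for first-year transfer units) indexed by campus.
    """
    ranks = pd.DataFrame({
        "rank_unadjusted": unadjusted.rank(ascending=False, method="first"),
        "rank_adjusted": adjusted.rank(ascending=False, method="first"),
    }).astype(int)                       # rank 1 = best campus
    ranks["change"] = ranks["rank_unadjusted"] - ranks["rank_adjusted"]

    print("mean absolute rank change:", ranks["change"].abs().mean())
    print("largest gain:", ranks["change"].max(),
          "| largest drop:", ranks["change"].min())
    return ranks
```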
These results highlight the importance of controlling for student-level inputs when estimating college quality. They also offer a note of caution to policymakers who may be tempted to rank colleges based on unadjusted mean outcome measures such as graduation rates or postgraduation wages.
CONCLUSION
Understanding quality differences among educational institutions has been a preoccupation of both policymakers and social scientists for more than half a century (Coleman 1966). It is well established that individual ability and socioeconomic factors bear a stronger relation to academic achievement than the school attended. In fact, when these factors are statistically controlled for, it appears that differences between schools account for only a small fraction of differences in pupil achievement. Yet the influence of institutional quality differences in the postsecondary setting, particularly at the less selective two-year sector, where the majority of Americans begin their postsecondary schooling, has rarely been explored.
To help fill this gap, we use data from California's Community College System to examine whether differences in student outcomes across college campuses are significant. Our results show considerable differences across campuses in both short-term and longer-term student outcomes. However, much of this variation is accounted for by student inputs, namely measured ability, demographic characteristics, college goals, and unobservables that drive college selection. Nevertheless, after controlling for these inputs, our results show that important differences between colleges remain. What is the marginal impact of attending a better quality college? Our lower-bound estimates indicate that going from the 10th to 90th percentile of campus quality is associated with a 3.68 (37.3 percent) increase in student transfer units earned, a 0.14 (20.8 percent) increase in the probability of persisting, a 0.09 (42.2 percent) increase in the probability of transferring to a four-year college, and a 0.08 (26.6 percent) increase in the probability of completion.
A natural follow-up question is: What observable institutional differences, if any, might be driving these effects? A close treatment of what might account for these institutional differences in our setting is beyond the scope of this paper. However, prior work has identified several characteristics that may be associated with student success, including peer quality, faculty quality, class size or faculty-student ratio, and a variety of measures for college costs (Long 2008; Calcagno et al. 2008; Bailey et al. 2006; Jacoby 2006).
Finally, identifying institutional effects is not purely an academic exercise. In today's policy environment, practitioners and higher education leaders are looking to identify the conditions and characteristics of postsecondary institutions that lead to student success. Given the recent push by policymakers to provide college scorecards, our analysis furthers that goal for a critical segment of higher education: public open access community colleges and the diverse students they serve. Our results show that college rankings based on unadjusted mean differences can be quite misleading. After adjusting for student-level differences across campuses, the average school rank in our sample changed by plus or minus thirty ranks. Our results suggest that policymakers wishing to rank schools based on quality should adjust such rankings for differences across campuses in student-level inputs.
Acknowledgments
We thank the California Community College Chancellor's Office and the California Department of Education for their assistance with data access. Opinions reflect those of the authors and do not necessarily reflect those of the state agencies providing data.
FOOTNOTES
↵1. The master plan articulated the distinct functions of each of the state's three public postsecondary segments. The University of California (UC) is designated as the state's primary academic research institution and is reserved for the top one-eighth of the state's graduating high school class. The California State University (CSU) is to serve primarily the top one-third of California's high school graduating class in undergraduate training, and in graduate training through the master's degree, focusing primarily on professional training such as teacher education. Finally, the California Community Colleges are to provide academic instruction for students through the first two years of undergraduate education (lower division), as well as provide vocational instruction, remedial instruction, English as a second language courses, adult noncredit instruction, community service courses, and workforce training services.
↵2. Between 2007 and 2018, the number of students enrolled in a college or university is expected to increase by 4 percent for whites but by 38 percent for Hispanics, 29 percent for Asian–Pacific Islanders, and 26 percent for African Americans (Hussar and Bailey 2009).
↵3. Our match rate may reflect several factors. First, the name match occurred on the first three letters of a student's first name and last name, leading to many duplicates; students may also have entered different names or birth dates at the community college, or omitted information in either system. Second, the denominator may be too high: not all community college students attended California high schools. Finally, students who did attend a California high school but did not take the eleventh grade standardized tests were not included in the high school data.
↵4. We excluded the three campuses that use the quarter system, as well as three adult education campuses. Summer students were allowed in the sample only if they took enough units in their first year to guarantee they also took units in the fall.
↵5. We include CST scaled scores, which are approximately normally distributed across the state.
↵6. Unlike the four-year college quality literature, we do not account for students’ college choice set since most community college students enroll in the school closest to where they attended high school. Using nationally representative data, Stange (2012) finds that in contrast to four-year college students, community college students do not appear to travel farther in search of higher quality campuses, and, importantly, “conditional on attending a school other than the closest one, there does not appear to be a relationship between student characteristics, school characteristics, and distance traveled among community college students” (2012, 81).
↵7. The Academic Performance Index (API) is a measure of California schools’ academic performance and growth. It is the chief component of California's Public Schools Accountability Act, passed in 1999. API is composed of schools’ state standardized test scores and results on the California High School Exit Exam; scores range from a low of 200 to a high of 1,000.
↵8. We use a random effects model instead of fixed effects model due to the efficiency (minimum variance) of the random effects model. However, our findings are qualitatively similar when using a fixed effects framework.
↵9. Completion appears to be driven almost entirely by transfer; that is, few students who do not transfer appear to complete AA degrees. As such, these two outcomes likely measure close to the same thing.
↵10. Altonji and Mansfield (2014) show that, under reasonable assumptions, controlling for group means of individual-level characteristics “also controls for all of the across-group variation in the unobservable individual characteristics.” This procedure provides a lower bound of the school quality effects because school quality is likely an unobservable that drives individual selection.
↵11. Both hospital rankings and teacher quality rankings have been shown to be sensitive to controlling for individual characteristics (see, for example, Kane and Staiger 2008; Dimick, Staiger, and Birkmeyer 2010).