The Relationship of Home Access to a Computer and Academic Achievement
Presented in Partial Fulfillment of the Requirements for the Degree
Doctor of Education
University of Louisiana Lafayette
Southeastern Louisiana University
© Mitchell Coffman
All Rights Reserved
APPROVAL OF QUALIFYING PAPER
The Relationship of Home Access to a Computer and Academic Achievement
Robert O. Slater Date
Qualifying Paper Committee Advisor
Douglas Williams Date
Qualifying Paper Committee Member
cc: Coordinator of Doctoral Program
The purpose of this literature review was to examine the relationship between student access to a computer at home and academic achievement. The relationship of computers to academic achievement has been a topic of research since the introduction of the computer to the classroom and instruction. Recently, research has concentrated on the use of computers and the Internet in relationship to academic achievement. As society adopts computers in different forms and in nearly every aspect of daily life, computer access outside the classroom, specifically access at home, and its relationship to academic achievement poses a dynamic research question for which to review the literature. This literature review examines the research available regarding the current state of computer access at home and the advantages and disadvantages of computer access at home in relation to student academic achievement. Socioeconomic influences, as well as the types of use students engage in when accessing home computers within that subset, are also part of this literature review. Advocates of computer access for students argue for ubiquitous incorporation of the computer into the educational field as a means to promote constructivist instruction and anytime-anywhere learning opportunities, as well as a means to maintain pace with society's adoption of the computer in the workforce. Studies regarding the impact of home access to a computer on academic achievement are limited. Little theoretical or historical evidence of how computer access at home influences achievement was discovered in the literature. This literature review focused on empirical studies and case studies, which revealed that access to computers at home had mixed results on student outcomes, with mostly insignificant positive relationships to and effects upon academic achievement for all students.
Furthermore, the literature showed that making a causal link between home access and academic achievement was difficult, mainly because of the multitude of variables associated with such an undertaking, e.g., socioeconomic status and the ways in which students used computers at home, e.g., social networking or gaming.
The purpose of this study was to examine what we know about the relationship between access to a computer at home and academic achievement. Are students who have access to computers at home more likely to have higher academic achievement?
The direct link between computers and academic achievement has been the focus of extensive literature for several decades. Studies have tried to explain the role and the benefit of the computer in classrooms and on students’ academic achievement since the mainstreaming of the computer into the classroom in the late 1990s (Cuban, 1998; Malamud & Pop-Eleches, 2011; Wenglinsky, 1998). The initial body of literature explored the impact of computer uses upon the classroom (Angrist & Lavy, 2002; Cuban, 1998; Dahmani & Youseff, 2008). More recently, a second body of literature explored the impact of the Internet upon the classroom computer and student academic achievement (Jackson, Von Eye, Biocca, Barbatsis, Zhao, & Fitzgerald, 2006, 2011; Keller, 2009).
This qualifying paper investigated the impact of student computer access at home upon academic achievement. “Surprisingly, the role of home computers in the educational process has drawn very little attention in the literature” (Beltran, Das, & Fairlie, 2010, p. 6; Fairlie, 2013; Fairlie & Robinson, 2013). In the United States, the federal government has made the integration of computers and the provision of improved access for all students a mandated part of the National Educational Technology Plan incorporated into the No Child Left Behind Act (U.S. Department of Education, 2004). Home access to a computer and its impact on academic achievement is a rich area for review, especially as computer use and computer integration become ubiquitous in American society and business (Malamud & Pop-Eleches, 2011). The Gates Foundation, founded by Microsoft co-founder Bill Gates, reported at the turn of this century that “Little research is available in this area” (Fouts, 2000, p. ii).
Consequently, this qualifying paper reviewed the literature on the question of the relationship between students’ access to computers at home and their academic achievement. Are students who have access to computers at home more likely to have higher academic achievement? In general, what do we, as educational researchers, know about the effects of home computer use on academic achievement?
By the mid-1990s, computers had rapidly altered the nature of humankind and had revolutionized many diverse environments across the world (Cuban & Tyack, 1995). By the late 1990s, the educational field had been added to this list of diverse environments as the computer became a common tool accepted for instructional purposes (Cuban, 2009, 2010; DeSutter, 2004; Samuelson & Varian, 2001). Through this literature review, a general awareness of students’ home access to computers and the impact that ubiquitous computing in education can have on student academic achievement is extended to technology facilitators and policymakers for consideration.
This question is important because current use of computers, whether at school, in the classroom, or even at home, is considered beneficial to student academic achievement and an accepted form of instruction (Beltran et al., 2006; Cuban, 2009, 2010; Fairlie, 2013; Livingstone, 2012). Principals and teachers believe computers are beneficial (Beltran et al., 2006). As such, computers used to enhance learning and information gathering in these typical educational settings might be considered an equally important means of improving student achievement through full-time home access to a computer (Schmitt & Wadsworth, 2004). Educational reforms, especially those related to computers and technology-based instruction, are being implemented nationally and internationally at a rapid pace (ISTE, 2012). Integrating the computer in all of its manifestations into the educational field is a dynamic area of study, especially in light of the rapidly evolving pace of computer technology (Blazer, 2008; Dias, 1999).
Perhaps the most important reason to research this topic is that the evidence for effectiveness is both limited and mixed (Livingstone, 2012; Malamud & Pop-Eleches, 2011; Thomas & McGee, 2012) and often spurious (Fuchs & Woessmann, 2004). Additionally, state standards and federal mandates, combined with performance testing, suggest that computers be linked not only to student learning but also to academic achievement (Beltran et al., 2006; Fouts, 2000; ISTE, 2012; Wenglinsky, 1998).
Critics and even some students argue that computers, although beneficial in some instances for enhanced instruction, cannot replace the central role and importance of teachers with a “black box” (Megarry, 2013, Abstract). Instructors who engage in traditional pedagogical methods, it is argued, would always be essential to student achievement (Kolikant, 2012; Oppenheimer, 1997). Critics as well as advocates of full-time computer access argue that the digital divide could present a firm roadblock to equal access to home computers for students of all races and an unfair challenge to minority ownership for home placement (Wenglinsky, 1998; Fairlie, 2013; Megarry, 2013; Warschauer, 2010). More recently, critics have argued that computers at home entice students to engage in unproductive uses of the computer, e.g., gaming and social networking (Subrahmanyam et al., 2000, 2001; Wirth & Klieme, 2003, as discussed in Fuchs & Woessmann, 2004). These issues remain reasonable and historical considerations for families and schools, yet the concerns are fading as virtual schooling, flipped classrooms, video conferencing, higher-order gaming simulations, lower manufacturing costs for computers, the development of smaller computing units and phones, and the growing presence of widespread public and private Internet connections expand the opportunities for teachers to extend learning outside of the school (Means et al., 2009; Warschauer, 2004; Warschauer & Matuchniak, 2010).
(n) Permission to enter a channel or network or to interface with a computer or a device connected to a computer (Graves, 2009). (v) To gain entry to a channel or network or to interface with a computer or with a device attached to a computer or network. For example, when one logs onto a computer at work, home, school or in a public setting, one accesses a computer and/or network (Graves, 2009).
A computer is an electronic device that manipulates information, or data. It has the ability to store, retrieve, and process data. You can use a computer to type documents, send email, and browse the World Wide Web. You can also use it to handle spreadsheets, accounting, database management, presentations, games, and more (Baldauf, Amer, & Gower-Winter, 2014). Synonyms: personal computer, PC, laptop, netbook, ultraportable, desktop, terminal, mainframe, Internet appliance, smartphone (Google, 2014).
A computer can be understood more easily based upon the characteristics and functions of its associated tasks, software, and hardware (Rajarman, 2010). Computers are built to carry out a variety of instructions, e.g., add, subtract, read or write characters, and compare and analyze correctly input algorithms, computer code, applications, or programs (Rajarman, 2010, p. 6). Google, one of the predominant Internet search engines of the 21st century, defines a computer as an electronic device for storing and processing data, typically in binary form, according to instructions given to the device in a program allowing for variable processing.
Socioeconomic status (SES) is measured by determining education, income, occupation, or a composite of these dimensions (Winkleby, Jatulis, Frank, & Fortmann, 1992). In educational research, SES is used as a statistical control because established data support the notion that SES is a significant contributor to individual differences in educational outcomes (Alexander, Entwisle, & Thompson, 1987; Coleman, 1966; Mercy & Steelman, 1982; Roscigno & Ainsworth-Darnell, 1999, as discussed in Dickinson & Adeleson, 2014).
This literature review informs educators, teachers, school leaders, policymakers, and students on the potential impact of home computer access on academic achievement. The paper is organized as follows. The first section presents the theoretical framework for the literature review’s research question regarding computer access at home for student use in education. This section reviews the relationship between the computer and the learning theory of constructivism and frames the computer historically in education. The second section reviews the literature relating access to computers at home to student academic achievement. It identifies and discusses common factors and variables that influence or correlate with student academic achievement and computer access at home, including the socioeconomic status (SES) factors that affect the frequency and type of computer use by students. Additional measures of academic performance are also considered in this section, including graduation rates, discipline, and homework completion, with regard to the relationship access to a computer has upon these variables. The third section reports returns from studies of state-sponsored initiatives regarding computer access at home, viewed through national and international peer-reviewed case studies (Spiezia, 2010). The summary of findings concludes the literature review. The results of the literature review are intended to show whether home computer access has any correlation with student academic achievement.
The literature review is concentrated in peer-reviewed research that investigated the relationship between home access to computers and academic achievement and any possible correlation between the two. Little research has been done in the area of home access (Blazer, 2008; Fairlie, 2013; Livingstone, 2012). This void of knowledge, as well as the rapid evolution of the computer in society, including the types of computing devices, presents a challenge to researchers trying to maintain pace with technology and its role in educational instruction.
This literature review does not research specific operational processes of the implementation of computers and technologies into educational settings, e.g., computer labs or classrooms (Rabalais, 2014). Discussion of teacher computer training, implementation, and professional development of computing skills was not a part of this literature review. This study did not deliberate on specific technology policy guidelines or on individual state or local standards. Case histories from international studies are presented, as well as results from national trials and local computer access initiatives. Extensive research exists on the influence of the Internet on academics, and the Internet is therefore not a central point of research for this literature review. A discussion of the digital divide was included as it relates to the literature review question. Students can use software at home for academics that is independent of the Internet. Likewise, specific software programs, learning modules, or applications may be referenced but were not a significant part of the research plan.
Numerous web-based sources, including printed and digital professional journal articles, dissertations, newspapers, magazines, and published books, were explored to obtain literature germane to computers and their relationship to student home access and academic achievement. An automated email news alert system notified me of any publications posted to the Internet by prominent scholars writing on the subject of computer access and education. These news alerts, as well as readings of these scholars’ blogs and viewings of videos posted online, were also part of the literature review background. A review of standard-setting non-governmental organizations involved in computer and education policies, e.g., the International Society for Technology in Education (ISTE), was also part of the research, although policy was not analyzed. Several editorials and articles from local and national newspapers reporting anecdotal evidence, narratives, and reviews of scholarly studies were also part of the research, as were technology- and education-related commission and consortium reports, both governmental and non-governmental. In addition, scholarly and peer-reviewed journals were used, and a number of reports from state educational institutions and philanthropic organizations were explored. Academic databases and resources including the Educational Resources Information Center (ERIC), Edutopia, Ed.gov, the National Center for Education Statistics (NCES), the National Assessment of Educational Progress (NAEP), Google Scholar, and LOUIS (The Louisiana Library Network) provided access to national and international reports (Rabalais, 2014). The review contains sources from both quantitative and qualitative research.
The development and dissemination of information and communication technologies has had a concentrated effect on modern life and modern education (Cuban, 2001; Fairlie, 2013; Livingstone, 2013; Yu, 2008). The affordances of newer, more compact, and more powerful processing hardware, intuitive interactive user interfaces, and other developments have increased the adoption of computers into society and education at an unprecedented pace (Fairlie, 2014; Yu, 2008; Megarry, 2013; Warschauer & Matuchniak, 2010). The growing ubiquity of computers in society, combined with an expanding presence in educational settings and especially in the homes of students, could influence students in unprecedented ways. Therefore, it is prudent to ask: has access to computers at home had an impact on student academic achievement? This qualifying paper reviewed the literature associated with computer use in education and specifically reviewed literature that investigated any relationship student access to a computer at home may have upon academic achievement.
The limited body of research reviewed linking computers with student academic achievement over the past twenty-five years reported an association with marginally improved academic achievement (Angrist & Lavy, 2002; Beltran et al., 2008; Borsheim, Merritt, & Reed, 2008; ISTE, 2008, 2012; Warschauer, 2010). However, the totality of the available research reviewed was mixed (Fairlie, 2013; Schmitt & Wadsworth, 2004). Some studies revealed this positive association to be small, often statistically insignificant, and at times negative in different areas of academic performance, notably in academic achievement variance across subject areas (Wenglinsky, 1998). The results in the literature review are generally consistent with the research conclusions regarding the impact home access to a computer has on academic achievement. The literature reported that any impact home access to a computer had on academic achievement was dependent upon multiple variables and characteristics associated with the student and/or study focus. A variety of household characteristics correlated with computer access and educational outcomes (Malamud & Pop-Eleches, 2011; Fairlie, 2013; Schmitt & Wadsworth, 2012; Warschauer, 2010; Wenglinsky, 1998). Specifically, the more influential factors that impacted student academic achievement were mostly dependent upon the students’ socioeconomic status and the type of computer use engaged in while accessing a computer at home (Angrist & Lavy, 2002; Beltran et al., 2008; Blanton, Moorman, Hayes, & Warner, 1997; Clotfelter, Ladd, & Vigdor, 2008; Dynarski, 2007; Fairlie, 2013; Fuchs & Woessmann, 2004; Goolsbee & Guryan, 2006; Higgins, Xiao, & Katsipataki, 2012; Inan & Lowther, 2009; Li, Atkins, & Stanton, 2006; Livingstone, 2012; Malamud & Pop-Eleches, 2008, 2011; Rouse & Krueger, 2004; Toyama, 2011; Vigdor & Ladd, 2010; Warschauer, 2010; Warschauer & Matuchniak, 2010; Wenglinsky, 1998).
In fact, once controls for various household characteristics were implemented, correlations between home access to computers and educational outcomes consistently produced mixed support for the view that home access was associated with improved educational achievement (Schmitt & Wadsworth, 2004). Fairlie and Robinson wrote, “There is no strong consensus in this literature on whether the effects of home computers are positive or negative” (Fairlie & Robinson, 2013, p. 2).
Little theoretical support exists for mandating the use of computers in education as a means to improve academic achievement (Beltran et al., 2008; Fairlie, 2013). Alper and Gulbahar (2009) reported that only a few researchers addressed teaching theories and learning models for computer environments and any effect upon academic achievement. They conducted research to investigate the theoretical basis of related articles published in a Jordanian national online journal from 2003-2007. Though their results were not generalizable, the authors reported that a global shortcoming in the theoretical basis for such research appears to be related to the youth of the academic subject combined with the speed of technological change in this area, regardless of location (Webster & Watson, 2002, as discussed in Alper & Gulbahar, 2009, p. 8). Evidence of computers having positive benefits as a pedagogy is scattered and typically tailored or gathered from smaller studies (LeBaron & McDonough, 2009, as discussed in Livingstone, 2012, p. 10). In early research, Warschauer and Healey (1998) considered mechanical input a model similar to the behaviorist model of Skinner (as discussed in Bataineh & Baniabdelraham, 2006, Introduction). Beetham and Sharpe hinted that Jean Piaget’s learning theories and his influence on education in the form of the construction of knowledge were conducive to the implementation of the computer in education, i.e., gathering knowledge, observing, and building upon that information (Beetham & Sharpe, 2013).
Seymour Papert, protégé of Piaget and advocate for constructivist and what he termed constructionist educational pedagogy, argued for a full-scale shift in instructional pedagogy toward incorporating computers into the curriculum (Papert, 1999). Papert argued that computers were likely to be the motivational instrument that led to the implementation of full-time constructivist education in modern society. “The computer alters the nature of the process by shifting the balance between the transfers of knowledge from instructor to the production of knowledge by students” (Papert, 1991, p. 10). In 1996, Papert reiterated his faith in computers, noting that a computer outside the control of schools would be most successful in promoting the constructivist philosophy. He expressed the necessity of integrating the computer into students’ educational lifestyles rapidly, as the computer was the American lifestyle (Papert, 1996, par. 11).
“The minimal action that will make a serious difference in education is ensuring that each and every child has a personal computer, which is mostly about opening new methods of learning by having full time access to a computer” (Papert & Caperton, 1999, sec. VI).
Despite the welcome and common placement of the computer into the classroom in the 1990s, few theorists were convinced of its necessity and even fewer researchers anticipated the influence access to computers outside of the classroom could have upon instructional pedagogy and student achievement (Beltran et al., 2010; Christensen & Horn, 2008; Cuban, 1998; Oppenheimer, 1997).
Contemporary educational policy makers and academia emphasize constructivist instructional methods as the predominant educational theory employed to promote higher order thinking in students (Lowe, 2004; Yu, 2008). Constructivism is grounded in the belief that learning is constructed by human interactions and decisions where learners construct knowledge based on what they already understand as they make connections between new information and old information (Beetham & Sharpe, 2013; D’Angelo, et al., 2009). The connection between computer use and constructivist classroom instructional methods was well documented in the research (Duffy & Jonassen, 1992). Strommen and Lincoln (1992) were early adopters of integrating contemporary computers into education as a constructivist tool. In 1992, the pair outlined ways in which to integrate computers into the traditional curriculum under constructivist pedagogy. They argued that constructivism, computers, and learning have much in common that could be the basis for a pedagogical change associated with the educational system.
Strommen and Lincoln wrote,
Embrace the future and empower our children to learn with the cultural tools they have been given. Computers engage children with the immediacy they are used to in their everyday lives and bend it to a new pedagogical purpose (Strommen & Lincoln, 1992, p. 469).
Having more access to computers has not automatically led to their greater effectiveness. Wenglinsky (1998) set the tone for measuring the effectiveness of computing in education, noting that it could be judged by whether the computer benefits students, i.e., academic achievement. Wenglinsky (2005) analyzed 1996 data from the NCES National Assessment of Educational Progress (NAEP) database, commonly referred to as the nation’s report card, for evidence regarding the relationship of access to computers in education, both at home and in class, to academic achievement. He found that a negative relationship existed between computer use in the classroom and at home and test score outcomes in mathematics at both the fourth and eighth grade levels, in science at both the fourth and eighth grade levels, and in reading at the eighth grade level (Wenglinsky, 1998; Warschauer & Matuchniak, 2010, p. 204). In contrast, in literacy and reading achievement, 14 of 19 International Society for Technology in Education (ISTE) studies reviewed from 2000-2008 showed strong positive effects of home computers on reading achievement (ISTE, 2008). In science, ISTE research revealed positive effects of the home computer on students’ science achievement assessments (ISTE, 2008; Dunleavy & Heinecke, 2007). Other research on the influence of computer use on student achievement reported many benefits for students regarding academic achievement, including improved performance scores in core subject areas (Attewell & Battle, 1999). Johnson (2000) studied the effects of accessing a computer using NAEP reading scores, employing a multiple regression to analyze the effects of the computer and other variables, such as familial income, upon student achievement (Davis, 2004).
Although his focus was on the quality of teacher instruction, Johnson’s multiple regression model demonstrated that, at least on the NAEP reading test for both fourth and eighth grades, computer access had no effect on the academic achievement of students (p. 8).
Research on computers and their relationship to academic achievement and student performance remains mixed, presenting positive and negative evidence of the influence upon student learning and assessment (Cuban, 1995, 2001; Giacquinta, Bauer, & Levin, 1993; Johnson, 2000; Oppenheimer, 2003; Ravitz, Mergendoller, & Rush, 2002; Rochelle, 2000; Stoll, 1995). These mixed studies generally agree that the degree of impact access to a home computer had upon academic achievement was dependent upon societal, contextual, environmental, and behavioral factors. Fuchs and Woessmann (2004) cautioned that evidence on the relationship between computers and students’ educational achievement was misleading because computer availability at home correlates strongly with a myriad of other family background factors. The research reported in this literature review consistently attributed the potential for computer access at home to improve academic achievement most importantly to two significant variables: socioeconomic status (SES) and the type of computer use engaged in by the student at home (Al-Senaidi, Lin, & Poirot, 2009; Malamud & Pop-Eleches, 2008; Warschauer, 2010; Wenglinsky, 1998, 2008). Warschauer and Matuchniak (2010) reported that student socioeconomic status (SES) was the strongest factor predicting whether computer use would be positively or negatively associated with test score outcomes and academic achievement (p. 204).
Wenglinsky (2008) emphasized that the conflicting data reported were influenced heavily by the activities predominately performed with the computer, especially among lower SES students. He found that more constructivist-directed computer use with higher SES students was correlated with higher test score outcomes. For instance, in mathematics, Wenglinsky (2005) found that the use of simulations in eighth grade and the use of complex games in fourth grade impacted test scores positively, while drill and practice exercises at the eighth grade level negatively affected scores. Again emphasizing use, Wenglinsky concluded that the familiar drill and practice activities favored in low SES schools tend to be ineffective, whereas the more constructivist approaches to computer use applied in high SES schools achieved more positive results (Wenglinsky, 1998, 2005). This point was consistent with Viadero (1997), who posited that when used in tutorial or drill-and-skill fashion, the computer leads to student gains roughly equivalent to other kinds of classroom interventions, such as personal tutoring. Types of computer use have also been associated directly with differing levels of socioeconomic status.
In the 2001 Programme for International Student Assessment (PISA), Malamud and Pop-Eleches (2008, 2011) found similar results in their analysis of the impact of vouchers provided for home computers for students living in low SES home environments. The data suggested that home access to a computer failed to show significant improvement in performance scores. Malamud and Pop-Eleches (2008) concluded that any negative relationship in the PISA study depended on the form of use for which the computer was accessed. Attewell and Battle (1999) used the National Longitudinal Youth Survey (NLYS88), based on standardized tests, and found that without other controls, having a home computer was correlated with a 12% increase in both reading and math test scores. When SES and other factors were controlled for in their analysis, having a home computer raised test scores by a smaller 3% to 5% of the average score. Their findings suggest that students having access to a computer at home had marginally improved scores in reading and mathematics (Ben Youssef & Dahmani, 2008).
This finding was confirmed by Clotfelter et al. (2008), who reported that SES mitigated the impact that access to a home computer and the Internet had upon academic performance. Clotfelter et al. (2008) supported the results of Fuchs and Woessmann (2004), finding that 5th through 8th grade students performed better on math and reading tests when there was no access to a computer at home. Clotfelter et al. (2008, p. 38) reported that the optimal rate of use was infrequent, twice a month or less, and that for the average student, the introduction of home Internet service did not produce additional benefits in academic achievement. They reported that students who accessed a home computer for school once or twice per month scored four to five percent of a standard deviation higher on both reading and math assessments, while students who owned a computer but did not use it for school had math scores nearly indistinguishable from those without a home computer. These students (who owned a computer and did not use it for homework) scored slightly better in reading than students reporting no access to a home computer (Clotfelter et al., 2008).
Scardamalia and Bereiter (2003) reported that routine uses of computers, e.g., keyboarding, web page viewing, and emailing, were less engaging and represented a lower level of learning, i.e., non-constructivist, as Papert had originally suggested and Wenglinsky originally reported. Warschauer (2010) argued that such simple, nonetheless useful, skills were beneficial to academic achievement for all students but were not an efficient model of computer use for maximizing constructive learning with computers. He suggested that computers be used to engage students in collaborative, student-centered work on real-life, project-based situations or simulations. Warschauer (2010) noted that full-time, anytime computer access presented students an active challenge to the material presented. This argument parallels Papert’s earliest writings regarding computers in education while confirming Wenglinsky’s research into the relationship computers have upon student performance (Wenglinsky, 1998).
Warschauer and Matuchniak (2010) confirmed Papert’s original assertion, writing,
The most persuasive evidence that access to computers raises standard academic outcomes, such as grades, test scores, and graduation rates, comes from home rather than school settings. It may be the case that at home, people are more able to make computers part of their personal space and tailor them to their needs (p. 220).
Beltran et al. (2010) found a relationship between home ownership of computers and high school graduation rates. They found a differential in graduation rates between computer owners and non-owners of 24 percentage points in the NLSY97 data and 16 percentage points in the CPS data. They noted that the 16-point difference found in the CPS data was larger than the White/Black difference (13 points) in the NLSY97 data, yet the differences among teenagers of all races were comparable. Beltran et al. (2010) attributed the difference in computer ownership to a wide range of factors, most notably SES and type of use. Even so, they reported an increase of six to eight percentage points in graduation rates for students who had access to a computer at home. In contrast, Fairlie (2013) reported no improvement in credits earned, attendance, or disciplinary actions.
Beltran et al. (2010) found that having access to a computer was associated with a slight 0.22-point positive difference in grade point average on a four-point grading scale, equal to roughly two-thirds the value of a (+) or (−) grade. Like the earlier studies discussed, they noted the influence that extraneous factors had on any attempt to determine a causal relationship between computer access and academic achievement. Beltran et al. (2010) also noted that use of a home computer for homework or associated schoolwork was a principal activity for students who had access at home, citing CPS data reporting that 93% of U.S. public school students with access to a computer at home used it for school assignments.
Students used home computers for many purposes (Fairlie, 2005). The most common use reported by students was gaining access to the Internet, followed by gaming, emailing, and word processing. Fairlie (2005) noted that students reported accessing a computer at home primarily to complete school assignments (p. 6). The International Society for Technology in Education (ISTE) reported strong positive effects on scores among elementary and secondary students who used computers to complete homework that reinforced the instructional objectives addressed during class (ISTE, 2008, Policy Brief, p. 6). Malamud and Pop-Eleches (2011), however, reported that low-income students who were awarded a computer for home use through a voucher program completed less homework; the computer acted as a distraction.
Papanastasiou, Zemblyas, and Vrasidas (2003) found that the way in which computers were used was more determinant of a positive or negative effect on academic achievement than SES, which they contended had a lesser effect on student outcomes (North Central Regional Educational Laboratory [NCREL], 2005). Malamud and Pop-Eleches (2008) used the PISA results to find that computer use was negatively associated with high student achievement in some countries (Papanastasiou et al., 2003; CESifo, 2004). More specifically, PISA data on 15-year-old U.S. students showed that once a computer was accessed, the manner of use was more significantly associated with a positive or negative effect on performance scores (CESifo, 2004), especially in subject-specific assessments. Papanastasiou et al. (2003) reported that when students’ SES was controlled for, students who used computers frequently at home, including for the purpose of writing papers, tended to have higher science achievement.
In 2000, the Kaiser Family Foundation interviewed a nationally representative sample of more than 2,000 eight- to 18-year-old children enrolled in public schools and found that 74% of the students reported living in houses with computers. The percentage rose to 78% for 11- to 14-year-olds and 80% for 15- to 18-year-olds who reported computer access at home (Roberts, Foehr, & Rideout, 2005). Households with children had greater access to computers than the general population. According to CPS data cited in Warschauer and Matuchniak (2010), 70% of family households with children under the age of 18 had computers and Internet access at home, compared to 57% of households without children (pp. 183-184), suggesting that the presence of children created some perceived necessity for computer access.
In fact, approximately nine out of 10 high school students who had access to a home computer used that computer to complete school assignments (Beltran et al., 2008; SRI, 2002). Laptop programs indicated high rates of use of the computers for homework (Mitchell Institute, 2004; Urban-Lurain & Zhao, 2004). Students reported that home access to a computer allowed a flexibility that was more conducive to self-directed learning, individualization, group collaboration, extended learning opportunities, and increased motivation (Thomson, 2010).
Research data indicated that access to computers at home provided autonomy for students in an environment that is difficult to replicate outside the home (see discussion in Dimaggio, Hargittai, Celeste, & Shafer, 2004; Fairlie & London, 2009). Beltran et al. (2010) noted that many students used computers at school and in libraries, but home access “represents the highest quality access in terms of availability and autonomy, which may provide the most benefits to the user” (p. 10), which paralleled Thomson’s (2010) findings. Access to a home computer was reported to increase familiarity with computing skills and strengthen understanding of the material presented in the classroom, resulting in an increased value placed on computer access (Mitchell Institute, 2004; Underwood, Billingham, & Underwood, 1994).
In 2008, the Pew Internet and American Life Project interviewed the parents of 700 students by telephone regarding access to computers in the home. The report showed that 89% of students ages 12 to 17 used a computer to access software and the Internet at home or at any available location or time (Lenhart, Arafeh, & Smith, 2008), confirming the Roberts, Foehr, and Rideout (2005) study. Fairlie reported that despite the increase in access to computers at home, households without computers tended to be substantially poorer and less educated than other households (U.S. Department of Commerce, 2011, as cited in Fairlie, 2013, p. 2). Robinson and Fairlie (2013) cited 2011 research from the National Telecommunications and Information Administration reporting that roughly one out of four U.S. public school students did not have access to a computer at home (p. 21).
Robinson and Fairlie wrote,
While this gap in access to home computers seems troubling, there is no theoretical or empirical consensus on whether the home computer is a valuable input in the educational production function and whether these disparities limit academic achievement (2013, p. 21).
With regard to home access, the 2003 Current Population Survey (CPS) data contained statistical evidence of a home-use divide defined not by race or SES in terms of access, but by students’ type of use (DeBell & Chapman, 2006), again confirming Wenglinsky’s earliest research (1998). Cotten, Davison, Shank, and Ward (2014) indicated that white students’ access to a computer and the Internet (in a racially diverse mid-Atlantic school district) conferred no advantage over the several other races and ethnicities in the study. They reported the finding held even when accounting for a number of socioeconomic and demographic background factors “…that are known to affect Internet usage” (Findings section). Their study added to the evidence that within the United States the digital divide has become more about what the authors term “other dimensions,” such as how the Internet is used, rather than merely access or ownership (Cotten et al., 2014, Research Implications section).
Warschauer (2010) suggested that many aspects of the digital divide may be dissipating in terms of access and use amongst racial groups,
evolving over time into a more dramatic divide in the level of constructivist instruction provided to lower SES schools compared with schools having higher SES ratings (p. 199).
Warschauer (2010) referenced the second divide discussed in Scardamalia and Bereiter (2003), in which they noted that drill-and-repeat use of the computer resulted in shallow, rote learning. Attewell and Battle (1999) foreshadowed this limiting of computer use to rote applications, suggesting that lower SES schools employed fewer computer-literate teachers and were often located in low SES districts whose schools had limited resources. Attewell (2001) considered this second divide an emerging new social problem that threatened students’ learning outcomes (p. 252). Warschauer believed the second divide to be an issue of great concern and expressed the need to “deepen public understanding of this issue through a more thorough appraisal of what access to computers entails and of the ends that such access serves” (Warschauer, 2009, Conclusion).
Several state programs reported that student performance increased when computers were used at home, in combination with traditional instructional methods (Fouts, 2000). Personal engagement with new media and computers provided a seamless learning experience for many students especially when blended into traditional instruction (Gee, 2003, 2004; Jenkins, 2009).
In North Carolina, several high-poverty elementary and middle schools implemented the IMPACT systemic reform program in 1995 (North Carolina Department of Public Instruction, 1995). The program provided access to computers for student use in core curricular classes in an attempt to improve student performance scores. In the four-year study, students in high-need schools enrolled in the IMPACT program were 33% more likely to improve one full grade level each year in comparison to the control schools (SETDA, 2008). Student achievement was reported consistently higher in the IMPACT schools, and teacher retention was reported to be 65% higher under the program. The number of college-bound students also increased from 26% to 84% over a five-year period of the program (SETDA, 2008).
In 2002, the Maine Learning Technology Initiative (MLTI) provided full-time access to laptop computers for all seventh and eighth grade students. By 2009, the state had expanded the program to include high schools (Stephens, 2012). Analysis of the results revealed that students with laptops scored higher in science, math, writing, reading, and social studies on the annual Maine Educational Assessment (MEA) statewide performance test than students who did not participate in the program (Lemke & Martin, 2003; Muir, Knezek, & Christensen, 2004; Silvernail & Gritter, 2007). As noted by Rockman (2004), critics of Maine’s laptop program asserted that the $28 million per year investment was not cost efficient because the program failed to produce results that could be interpreted as an empirically significant positive influence on student scores.
Ravitz, Mergendoller, and Rush (2002) analyzed the Iowa Test of Basic Skills (ITBS) reading, language arts, and math scores for 31,000 Idaho students in 8th and 11th grades. They found that students who scored higher on the ITBS used computers more often at home and less often in school. Blazer (2008) suggested the ITBS results could reflect a lack of prior computer use at home, which she considered a larger barrier to improved student performance than a lack of access to a computer at school. Ravitz et al. (2002) reported that students with access both at home and at school declared their personal computer literacy to be greater than that of students with access to a computer at only one location, i.e., school or home. Students with access to a computer at home but not at school self-reported their computer literacy and software knowledge as average, whereas those with access to computers only at school rated their capabilities as below average (Blazer, 2008).
In Texas, the Technology Immersion Pilot (TIP) program implemented in middle schools across the state demonstrated that discipline referrals fell by more than one-half with the introduction of computers into one particular high school. In a separate participating middle school, 6th grade standardized math scores increased by 5%, 7th grade by 42%, and 8th grade by 24% (SETDA, 2012). Despite these positive results, the TIP program ultimately reported no significant academic gains in 22 of the TIP schools when compared to 22 similar control schools (Blazer, 2008).
Stephens (2012) acknowledged the improved standardized test scores reported in the TIP data for middle school reading students as just one of many benefits computers provided students (p. 2). She quoted Warschauer (2006, 2008) directly, who concluded that the addition of computer technology made “literacy processes more public, collaborative, authentic, and iterative, with greater amounts of scaffolding and feedback provided” (Warschauer, 2008, p. 64, as quoted in Stephens, 2012), a point emphasized by Wenglinsky (1998, 2005) and envisioned by Papert (1991, 1996). However, many participating Texas schools restricted students from taking computers home, which, Robinson and Fairlie (2013) said, weakened the main effect because the program “resulted in providing one computer for every student in the classroom, rather than to increase home access” (Endnote 5, p. 3).
In 2006, Michigan enlisted 195 schools in a state-sponsored program, Freedom to Learn (FTL), designed to increase student academic achievement. Blazer (2008) reported that no significant differences in student achievement were found between eight schools participating in the program and eight control schools that did not participate (p. 16).
O’Dwyer, Russell, Bebell, and Tucker-Seeley (2005) studied the relationship between 4th grade students’ use of computer technology and performance on the English Language Arts (ELA) section of the Massachusetts Comprehensive Assessment. They reported finding no relationship between students’ scores and computer access at home once SES was controlled for in the data (O’Dwyer et al., 2005). Blazer (2008) reported that the O’Dwyer et al. (2005) study showed that students who reported higher levels of recreational home computer use received lower performance assessment scores. They also reported no relationship between academic use of home computers and improved performance scores. Al Senaidi et al. (2009) noted that results from studies on this topic might indeed be mostly attributable to SES.
Fairlie (2013) followed with a study that provided evidence on the educational impacts of home computers by conducting a randomized control experiment with 1,123 students in grades 6-10 attending 15 schools across California, in which students were provided free home computers. None of the students participating in the study had a home computer before the study began. Fairlie (2013) found that even though the experiment had a large effect on computer ownership and total hours of computer use, there was no evidence of an effect on educational outcomes, including grades and standardized test scores. Fairlie (2013) noted, “Our estimates are precise enough to rule out even moderately-sized positive or negative effects” (p. 4).
Prior studies that examined the relationship between home computers and student academic achievement found mixed results. In one of the seminal studies on the topic, Wenglinsky (1998) found a positive association between home access to a computer and performance in specific subject areas in data from the 1995 NAEP, yet reported the influence as insignificant. Attewell and Battle (1999) found that test scores and grades among eighth graders were positively related to home computer use. Fairlie (2005) found a positive cross-sectional relationship between home computers and school enrollment in the 2001 Current Population Survey (CPS). Schmitt and Wadsworth (2006) found a positive relationship between home computers and performance on the British school examinations from 1991 through 2001. Beltran et al. (2010) found a relationship between home ownership of computers and high school graduation rates.
In contrast, Fuchs and Woessman (2004) found a negative relationship between access to home computers and math and reading test scores in the Programme for International Student Assessment (PISA). Malamud and Pop-Eleches (2010) found that providing home computers to low-income children in Romania lowered academic achievement even while it improved their computer skills and cognitive ability. The conclusions that can be drawn from this literature on the relationship between home computers and educational outcomes are limited, and the mixed results presented evidence of potential bias in any study of the influence home computer access could have upon academic achievement (Beltran et al., 2008).
The literature review results show a pattern consistent across much of the existing non-experimental and experimental research. Socioeconomic status and type of use were the largest factors influencing the relationship between home computer access and academic achievement (Beltran et al., 2010; Clotfelter et al., 2008; Fuchs & Woessman, 2004; Higgins et al., 2012; Malamud & Pop-Eleches, 2008; Papanastasiou et al., 2005; Warschauer, 2010; Warschauer & Matuchniak, 2010; Wenglinsky, 1998, 2005). The studies revealed that once contextual factors were controlled for (Beltran et al., 2010; Malamud & Pop-Eleches, 2008; Fuchs & Woessman, 2004; Wenglinsky, 1998) and socioeconomic status and students’ type of use were considered (Beltran et al., 2010; Fairlie, 2005; Fuchs & Woessman, 2004; Malamud & Pop-Eleches, 2008, 2011; Papanastasiou et al., 2005; Warschauer, 2010, 2011, 2012; Warschauer & Matuchniak, 2010; Wenglinsky, 1998, 2005), the apparent impact of home computer access upon student academic achievement diminished. In general, computers in most areas of education were found to have a marginal impact on academic achievement (Ertmer & Ottenbreit-Leftwich, 2010). Access to computers in the home does offer all students the opportunity to “open the doors to learning” (Cuban & Kirkpatrick, 2001; Cuban & Peck, 2001, as quoted in Fairlie, 2005) and extend opportunities that support constructivist learning.
The literature reviewed ultimately reported no direct causal link between access to a computer at home and improved academic achievement. Fuchs and Woessman (2004) described most of the studies investigating the relationship of the computer to academic achievement as descriptive analyses that could be misinterpreted as evidence of a causal relationship. They noted that although no direct link was found, these studies come much closer to determining a causal relationship between access to a computer at home and improved academic achievement (p. 9). Fuchs and Woessman (2004) reported that any finding using bivariate analysis to declare an outright causal impact of computers on student academic performance “may well be spurious, being driven by other important factors associated with using computers at home” (p. 14).
Beltran et al. (2010) confirmed that contextual and environmental factors for both students and schools, along with multiple related variables, make finding a causal relationship between computers and academic performance difficult. The research in this literature review repeatedly emphasized the multiple factors that must be considered when researching the direct relationship between computer access and student performance (Al Senaidi et al., 2009, p. 577). In the literature reviewed, the largest obstacle to making this connection was SES, which typically directs the type of use (constructivist or non-constructivist) engaged in by the student, which in turn determines the degree of positive or negative effect on academic achievement. Beltran et al. (2008) reported that the omission of the effects of unobserved, as well as observed, factors could invalidate any causal interpretation of the results (p. 19).
The impact of computer use upon academic achievement has generated a great amount of interest among administrators, policymakers, parents and teachers seeking to interpret the data most valuable to student learning. Researchers agreed that measurement of the effectiveness of computers in academics and instruction can be difficult to conduct without consideration of these multiple factors or conditions associated with access, SES, types of use and a multitude of interrelated dependent variables.
Mixed and misinterpreted results of academic and professional studies prompt further investigation into the questions related to home computer access and academic achievement.
Fuchs and Woessman (2004) noted,
Our best estimates still do not necessarily show the causal effect of computers on student performance. Rather, the estimates need to be interpreted cautiously as descriptive conditional correlations, in the sense that they report the relationship between computers and student learning conditional on holding constant the other family-background and school resources (p. 9).
Chapter 3: Methodology
The purpose of this study was to investigate the following question: Do students having computer access at home demonstrate higher academic achievement versus those students not having computer access at home? Results yielded by the research question may provide empirical support for or against adding computer access at home to education and instruction efforts. The research question suggests the following null hypothesis: There is no difference in achievement scores between students having computer access at home and students not having computer access at home.
A quantitative, non-experimental, causal-comparative approach was used to test the hypothesis. Student results from national assessments were acquired from a central database to determine whether computer access at home has an effect on U.S. public school students’ average achievement scores in 12th grade science, as reported in the 2009 National Center for Education Statistics (NCES) National Assessment of Educational Progress (NAEP) database. This section provides a brief introduction to the database along with a discussion of the variables.
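As a sketch of the group comparison this causal-comparative design implies, the fragment below contrasts mean scores for students with and without home computer access using Welch's t statistic. All score values are invented placeholders for illustration, not NAEP data, and the group sizes are far smaller than any real sample.

```python
# Illustrative sketch only: compare mean scores of two independent groups
# (home computer access vs. none) with Welch's t statistic, which does
# not assume equal group variances. Scores are fabricated placeholders.
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic: (mean_a - mean_b) / sqrt(var_a/n_a + var_b/n_b)."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = variance(sample_a), variance(sample_b)  # sample variances (n-1)
    return (mean(sample_a) - mean(sample_b)) / (va / na + vb / nb) ** 0.5

# Hypothetical 0-300 scale scores (placeholder values, not real data)
access = [152, 160, 148, 155, 163, 150]
no_access = [145, 151, 142, 149, 147, 144]

t = welch_t(access, no_access)
print(f"mean difference: {mean(access) - mean(no_access):.2f}, t = {t:.2f}")
```

In the actual study the null hypothesis would be retained or rejected by comparing such a statistic (computed on the full weighted NAEP sample) against a chosen significance level.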
To operationalize this design and investigate the research question, data from the National Assessment of Educational Progress (NAEP) Data Explorer were analyzed. NAEP has administered national assessments since 1969 and began offering state assessments in 1990 (NAEP, 2012). NAEP “is the largest nationally continuing assessment of what America’s students know and can do” (NAEP, 2014, para. 1). NAEP was not designed to offer individual scores of students or schools; instead “it offers results for populations of students… and groups within those populations” (What NAEP Does section, para. 1). NAEP offers assessments in various content areas. The NAEP database is governed by the National Center for Education Statistics in order to track and monitor academic progress across subjects, states, and various student populations. NAEP field staff members administer questionnaires, surveys, and assessment exercises to the chosen samples (NAEP, 2014).
Variables, Definitions and Acronyms
Using the Main NAEP database and its public-domain data tool, the NAEP Data Explorer (NDE), the following specific variables were drawn from the 2009 NAEP data set for the 12th grade general science assessment of U.S. public school students.
- Subject, Grade: Science, Grade 12
- Jurisdiction: National public
- Measure: Overall science scale
- Year: 2009
- All students
- Full Title: All students
- ID: TOTAL
- Values: All students
- Computer at home
- Full Title: Is there a computer at home that you use? (student-reported)
- ID: B017101
- Values: Yes, No
- National School Lunch Program eligibility, 3 categories
- Full Title: Student eligibility for National School Lunch Program based on school records (collapsed to three categories)
- ID: SLUNCH3
- Values: Eligible, Not eligible, Information not available
- Gender
- Full Title: Gender of student as taken from school records
- ID: GENDER
- Values: Male, Female
- Race/ethnicity allowing multiple responses, student-reported
- Full Title: Race/ethnicity based on student responses to two background questions, with an option to choose more than one race; data for Asian and Native Hawaiian/Other Pacific Islander categories are combined; variable not used in NAEP reporting
- ID: DRACEM
- Values: White, Black, Hispanic, Asian/Pacific Islander, American Indian/Alaska Native, More than one, Unclassified
- Classrooms have desktop computer for 12th grade science
- Full Title: Classrooms have desktop computer for 12th grade science
- ID: C075101
- Values: 0%, 1-25%, 26-50%, 51-75%, 76-99%, 100%
Note. Criteria, measures, jurisdiction, and variable information from NAEP Data Explorer (2011). See http://nces.ed.gov/nationsreportcard/hstsdata/
NAEP Science Achievement Score. The NAEP science scale score ranges from 0 to 300; for the purposes of this study, the average of scores was evaluated. The NAEP science assessment measures students across three broad areas: Physical Science, Life Science, and Earth and Space Sciences. Conceptual understanding is the primary focus of the test; assessment items include “paper-and-pencil questions, hands-on performance tasks, and interactive computer tasks” (NAEP, 2012c, Comparison Frameworks section).
As previously mentioned, the research was causal-comparative in nature and investigated whether achievement scores were affected by computer access at home. In this study, the dependent variable was the average general science achievement score for 12th grade U.S. public school students. Results from the 2009 NAEP national assessment in science were available to gauge student progress. Assessments are reviewed and administered periodically, with input from “subject area experts, school administrators, policymakers, teachers, parents,” and are given in a uniform manner by certified staff (NAEP, 2012b; NAEP, 2012c).
Computer access at home
This study is dedicated to better understanding the arguments for incorporating computer access at home into science and related technical fields, e.g. STEM (Science, Technology, Engineering, Math). Advocates for the increasingly popular use of STEM teaching in science call for a full integration of computer access in the classroom and at home so as to maximize technology use in every aspect of scientific learning. As such, accepting or rejecting the null hypothesis of this study will assist in determining if there is any relationship between having computer access at home and scientific achievement.
The literature review detailed several ongoing difficulties in assessing current and past programs focused on student computer access at home, specifically a misrepresentation of students and families of low socioeconomic status (SES) (ASHE, 2011). Critics of computer access at home cited studies in which such access widened the achievement gap for these same students (Vigdor & Ladd, 2008). Therefore, demographic variables were isolated and controlled to create subgroups for comparison, in order to determine whether increased computer access at home better serves these students for learning. The following control variables were featured in this study: gender, race, and National School Lunch Program eligibility. The strong sampling of NAEP participants across physical and social demography will aid in generalizing findings.
In an average state, 2,500 students in approximately 100 public schools are assessed per grade, for each subject assessed. The selection process for schools uses stratified random sampling within categories of schools with similar characteristics. A national sample will have sufficient schools and students to yield data for public schools, each of the four NAEP regions of the country, as well as sex, race, degree of urbanization of school location, parent education, and participation in the National School Lunch Program (NSLP), (NAEP, 2011).
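The stratified random sampling described above can be illustrated with a minimal sketch. The school names, the single stratification variable, and the per-stratum sample size below are invented for illustration and do not reflect NAEP's actual sampling frame or rates.

```python
# Minimal sketch of stratified random sampling: group schools into strata
# of similar characteristics, then sample within each stratum so every
# stratum is represented. All school data here are invented placeholders.
import random

schools = [
    {"name": "School A", "locale": "urban"},
    {"name": "School B", "locale": "urban"},
    {"name": "School C", "locale": "rural"},
    {"name": "School D", "locale": "rural"},
    {"name": "School E", "locale": "suburban"},
    {"name": "School F", "locale": "suburban"},
]

def stratified_sample(frame, key, n_per_stratum, seed=0):
    """Draw up to n_per_stratum units from each stratum defined by `key`."""
    rng = random.Random(seed)
    strata = {}
    for unit in frame:
        strata.setdefault(unit[key], []).append(unit)
    sample = []
    for units in strata.values():
        sample.extend(rng.sample(units, min(n_per_stratum, len(units))))
    return sample

picked = stratified_sample(schools, "locale", 1)
print([s["name"] for s in picked])  # one randomly chosen school per locale
```

NAEP's actual design additionally varies sampling rates across strata, which is why the resulting data must be weighted (see Sampling and Weighting below).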
NAEP achievement levels
“Students’ performance on main NAEP assessments is reported as scale scores and as the percentages of students at or above three achievement levels (Basic, Proficient, and Advanced). Average scores are reported on a 0–300 scale for science, writing, civics, and mathematics at grade 12. Scores at five percentiles on each scale provide results for lower-performing students (at the 10th and 25th percentiles), middle-performing students (at the 50th percentile), and higher-performing students (at the 75th and 90th percentiles) in that subject.”
“Achievement levels are the standards that the Governing Board adopts, used to report what students should know and be able to do for basic, proficient, and advanced performance in each grade and subject tested. For each level there is a written description of performance, a set of illustrative sample questions, and a minimum score on the NAEP scale.” This research used the scaled score average for the data analysis.
Race / Ethnicity Prior to 2011 as reported by NAEP
Student race/ethnicity was obtained from school records and reported for the following six mutually exclusive categories. Students identified with more than one racial/ethnic group were classified as “other” and included in the “unclassified” category, along with students who had a background other than those listed below or whose race/ethnicity could not be determined, as outlined by NAEP.
- White
- Black
- Hispanic
- Asian/Pacific Islander
- American Indian/Alaska Native
- Other or unclassified
National School Lunch Program
NAEP collects data on student eligibility for the National School Lunch Program (NSLP) as an indicator of family income. Under the guidelines of NSLP, children from families with incomes below 130 percent of the poverty level are eligible for free meals. Those from families with incomes between 130 and 185 percent of the poverty level are eligible for reduced-price meals. (For the period July 1, 2010 through June 30, 2011, for a family of four, 130 percent of the poverty level was $28,665, and 185 percent was $40,793 in most states.) Some schools provide free meals to all students regardless of individual eligibility, using their own funds to cover the costs of non-eligible students. Under special provisions of the National School Lunch Act intended to reduce the administrative burden of determining student eligibility every year, schools can be reimbursed based on eligibility data for a single base year. Participating schools might have high percentages of eligible students and report all students as eligible for free lunch. For more information on NSLP, visit http://www.fns.usda.gov/cnd/lunch/.
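The two eligibility cutoffs quoted above follow from simple arithmetic on the federal poverty guideline for a family of four in effect for that period ($22,050 in most states, a figure implied by back-computing from the $28,665 cutoff). The fragment below reproduces both figures in the text.

```python
# Worked arithmetic for the NSLP income cutoffs cited above (July 2010 -
# June 2011, family of four, most states). The guideline value is the one
# implied by the text's own figures: 28,665 / 1.30 = 22,050.
POVERTY_GUIDELINE_FAMILY_OF_FOUR = 22_050

free_meal_cutoff = POVERTY_GUIDELINE_FAMILY_OF_FOUR * 130 // 100       # 130%
reduced_price_cutoff = POVERTY_GUIDELINE_FAMILY_OF_FOUR * 185 / 100    # 185%

print(free_meal_cutoff)      # 28665 -> the $28,665 figure in the text
print(reduced_price_cutoff)  # 40792.5 -> reported as $40,793 (rounded up)
```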
Operationalization (questions in the NAEP data set).
“While the primary focus of NAEP is on achievement in specific subject areas, NAEP collects a wealth of other information to address many questions about student achievement. NAEP attempts to address these questions and others through data collected on background questionnaires. Sampled students, as well as their teachers and principals, complete these questionnaires to provide NAEP with data about students’ school backgrounds and educational activities. Students answer questions about courses, homework, and a limited number of additional factors related to instruction. Teachers answer questions about their professional qualifications and teaching activities, while principals answer questions about school-level practices and policies. Relating student performance on the subject-related portions of the assessments to the information gathered on the background questionnaires increases the usefulness of NAEP findings and provides a context for understanding student achievement.” (NAEP, 2009, p.6-7).
“NAEP is a representative-sample assessment. It reports on the achievement of large groups of students, and does not give results for individuals or schools. Participating schools are selected by the National Center for Education Statistics and its contractor according to a sampling frame in order to produce results that are nationally representative and also representative of participating states and urban districts. Within each school students are selected randomly from a list of all those enrolled at the target grade. Individual schools and students cannot volunteer for NAEP, and state, district, and local officials cannot choose the students or schools that become involved. Information about students and schools are kept confidential.
In national-only samples there usually are about 8,000 to 12,000 students assessed per subject per grade in about 400 to 650 public and private schools. For a particular subject, each state sample has about 2,500 to 3,000 students per grade in about 100 public schools. Each district sample has about 1,000 to 2,000 students per grade and subject in about 20 to 100 public schools” (National Assessment Governing Board, n.d. retrieved from http://www.nagb.org/toolbar/faqs.html#naep-design).
Sampling and Weighting
“The schools and students participating in NAEP assessments are selected to be representative of all schools nationally and of public schools at the state level. Samples of schools and students are drawn from each state and from the District of Columbia and Department of Defense schools. The results from the assessed students are combined to provide accurate estimates of the overall performance of students in the nation and in individual states and other jurisdictions. While national results reflect the performance of students in public, private, and other types of schools (i.e., Bureau of Indian Education schools and Department of Defense schools), state-level results reflect the performance of public school students only. More information on sampling can be found at http://nces.ed.gov/nationsreportcard/about/nathow.asp. Because each school that participated in the assessment, and each student assessed, represents a portion of the population of interest, the results are weighted to account for the disproportionate representation of the selected sample. This includes oversampling of schools with high concentrations of students from certain racial/ethnic groups and the lower sampling rates of students who attend schools with fewer than 20 students” (NCES, Technical Notes, 2011, retrieved from http://nces.ed.gov/nationsreportcard/pdf/main2011/2012465.pdf).
Data Analysis Methods
Several statistical measures can be analyzed to draw inferences when comparing average scaled scores between groups. First, the NAEP Data Explorer allows researchers to test the statistical significance of differences between populations of interest by means of a t test for independent groups (NAEP, 2008). A t test for independent samples determines whether a statistically significant difference exists between the two groups under examination. Differences significant at an alpha level of .05 or below are characterized as statistically significant; differences at an alpha level above .05 are not.
Second, the NAEP Data Explorer reports a standard error for the mean scale scores of selected populations (“Standard Error”, n.d.). “The standard error of measurement allows you to determine the probable range within which the individual’s true score falls” (Gall, Gall, & Borg, 2003, p. 199). Assuming a normal distribution, accounting for “plus or minus two standard errors of measurement” allows a researcher to predict the range containing the true mean with 95 percent accuracy (Gall et al., 2003). The standard error of measurement accompanies the mean scale scores of student populations.
“Comparisons over time or between groups are based on statistical tests that consider both the size of the differences and the standard errors of the two statistics being compared. Standard errors are margins of error, and estimates based on smaller groups are likely to have larger margins of error. The size of the standard errors may also be influenced by other factors such as how representative the assessed students are of the entire population. When an estimate has a large standard error, a numerical difference that seems large may not be statistically significant. Differences of the same magnitude may or may not be statistically significant depending upon the size of the standard errors of the estimates. To ensure that significant differences in NAEP data reflect actual differences and not mere chance, error rates need to be controlled when making multiple simultaneous comparisons. The more comparisons that are made (e.g., comparing the performance of White, Black, Hispanic, Asian/Pacific Islander, and American Indian/Alaska Native students), the higher the probability of finding significant differences by chance. In NAEP, the Benjamini-Hochberg False Discovery Rate (FDR) procedure is used to control the expected proportion of falsely rejected hypotheses relative to the number of comparisons that are conducted. A detailed explanation of this procedure can be found at http://nces.ed.gov/nationsreportcard/tdw/analysis/infer.asp. NAEP employs a number of rules to determine the number of comparisons conducted, which in most cases is simply the number of possible statistical tests” (NCES, 2011 retrieved from http://nces.ed.gov/nationsreportcard/pdf/main2011/2012465.pdf).
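The Benjamini-Hochberg False Discovery Rate procedure that NAEP uses to control for multiple comparisons can be sketched as follows. This is a generic implementation of the published procedure, not NAEP's own code, and the p-values are illustrative:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Benjamini-Hochberg FDR: reject hypotheses for all p-values up to the
    largest rank k satisfying p_(k) <= (k / m) * q, where m is the number
    of simultaneous comparisons."""
    m = len(p_values)
    # Pair each p-value with its original position, then sort ascending
    indexed = sorted(enumerate(p_values), key=lambda pair: pair[1])
    cutoff_rank = 0
    for rank, (_, p) in enumerate(indexed, start=1):
        if p <= (rank / m) * q:
            cutoff_rank = rank
    rejected = {idx for idx, _ in indexed[:cutoff_rank]}
    return [i in rejected for i in range(m)]

# Five simultaneous group comparisons (illustrative p-values)
p_vals = [0.001, 0.012, 0.041, 0.20, 0.74]
print(benjamini_hochberg(p_vals))  # [True, True, False, False, False]
```

Note that the third p-value (.041) would pass an unadjusted .05 test but is not rejected here: its rank-adjusted threshold is (3/5) × .05 = .03, which is how the procedure controls the expected proportion of false discoveries across many comparisons.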
Another option for statistical analysis in the NAEP Data Explorer is confidence intervals. Confidence intervals are important when interpreting results and discussing what a sample implies about the whole population. They enable researchers to estimate parameters of the population and thereby better generalize sample results (Gall, Gall, & Borg, 2003). Applying the same “plus or minus two standard errors approximates a 95 percent confidence interval for the corresponding population quantity” (“Confidence intervals”, n.d., para. 2). In other words, the confidence interval computed from the sample offers an accurate picture of the true population value.
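The “plus or minus two standard errors” rule is simple arithmetic; a short sketch makes it concrete (the mean and standard error here are hypothetical, not NAEP estimates):

```python
# Hypothetical mean scale score and its standard error (not NAEP figures)
mean_scale_score = 150.0
standard_error = 1.2

# Plus or minus two standard errors approximates a 95 percent
# confidence interval for the corresponding population quantity
lower = mean_scale_score - 2 * standard_error
upper = mean_scale_score + 2 * standard_error
print(f"95% CI: ({lower:.1f}, {upper:.1f})")  # 95% CI: (147.6, 152.4)
```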
Additionally, the NDE uses an online statistical module that permits linear regression analysis to be run on the variable sets. The current research primarily utilized multiple regression analysis to analyze the data. Regression is useful for determining the relationship between an independent variable and a dependent variable; multiple regression examines this relationship with multiple independent, or predictor, variables. This technique enables researchers to determine the relative impact of each variable on the dependent, or criterion, variable (Cohen & Cohen, 1975). A multiple regression equation can be created with this information to predict the value of an unknown criterion variable. The current research used multiple regression analysis to understand the impact of demographic variables on academic performance. The first question sought to establish the relationship of race, eligibility for free or reduced-price lunch, and gender to the achievement of students in public schools; the aim was to determine the differential impact of these predictors.
Multiple regression was used for the first question to determine the relation of the demographic variables to the academic performance of students in public schools: How do gender, race, and National School Lunch Program eligibility relate to academic achievement in combination? The criterion variable for this analysis was the composite score for each combination of grade level and subject. The predictor variables for all of these analyses were race (African American, Latino, Asian, Native American, and White), eligibility for free- or reduced-price lunch, and, for science, access to a computer in the classroom.
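The regression design above, dummy-coded predictors explaining a composite score, can be sketched with ordinary least squares in NumPy. The rows, coding scheme, and scores below are fabricated for illustration only and do not come from the NAEP data set:

```python
import numpy as np

# Illustrative student records (not actual NAEP data). Predictors mirror
# the study's dummy-coded variables: gender (1 = female), NSLP eligibility
# (1 = eligible), and one race dummy (1 = White) standing in for the set.
X = np.array([
    [1, 1, 0],
    [0, 1, 0],
    [1, 0, 1],
    [0, 0, 1],
    [1, 1, 1],
    [0, 0, 0],
])
y = np.array([148, 142, 160, 157, 152, 150])  # composite scale scores

# Add an intercept column and fit by ordinary least squares
X1 = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(X1, y, rcond=None)

# coefs[1:] estimate each predictor's relative impact on the criterion,
# holding the other predictors constant
print(dict(zip(["intercept", "gender", "nslp", "white"], coefs.round(2))))
```

Each slope coefficient estimates the change in the composite score associated with a one-unit change in that predictor, which is how the analysis separates the differential impact of gender, lunch eligibility, and race.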
Finally, if a null hypothesis is rejected and a significant relationship is found, effect sizes will be calculated to discuss the impact of the findings. The effect size quantifies the magnitude of the independent variable’s impact. As cited by Salkind (2011), Cohen’s ranges of effect are referenced as the baseline for labeling variance between groups: “A small effect size ranges from 0.0 to .20… A medium effect size ranges from .20 to .50… A large effect size is any value above .50” (Salkind, 2011, p. 198).
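Cohen's d, the standardized mean difference to which Salkind's ranges are usually applied, can be computed and labeled with a short sketch; the grouping of d into small, medium, and large follows the quoted cutoffs, and the scores are illustrative:

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: the mean difference divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (statistics.mean(group_a) - statistics.mean(group_b)) / pooled_sd

def label_effect(d):
    """Label an effect size using Salkind's (2011) quoted Cohen ranges."""
    d = abs(d)
    if d < 0.20:
        return "small"
    if d < 0.50:
        return "medium"
    return "large"

# Illustrative groups (not NAEP data)
d = cohens_d([152, 148, 160, 155, 149], [145, 150, 143, 147, 149])
print(round(d, 2), label_effect(d))
```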
Al-Senaidi, S., Lin, L., & Poirot, J. (2009). Barriers to adopting technology for teaching and learning in Oman. Computers & Education, 53 (3), 575-590.
Alper, A., & Gulbahar, Y. (2009). Trends and Issues in Educational Technologies: A review of recent research in TOJET. The Turkish Online Journal of Educational Technology-TOJET, 2 (8), 12. Retrieved from http://files.eric.ed.gov/fulltext/ED505942.pdf
Angrist, J., Lavy, V., & Schlosser, A. (2010). Multiple experiments for the causal link between the quantity and quality of children. Journal of Labor Economics, 28 (4), 773-824.
Attewell, P. & Battle, J. (1999). Home computers and school performance. Information Society, 15 (1), 1-10.
Attewell, P. (2001). Comment: The first and second digital divides. Sociology of Education, 252-259. Retrieved from http://www.emilienneireland.com/blackboard/sources/Attewell_Digital_Divide_2001.pdf
Attewell, P., Battle, J., & Suazo-Garcia, B. (2003). Computers and young children: Social benefit or social problem? Social Forces, 82 (1), 277-296. Retrieved from http://mtw160-198.ippl.jhu.edu/journals/social_forces/v082/82.1attewell.pdf
Baldauf, K., Amer, B., & Gower-Winter, K. (2013). Emerge with Computer Learning. Cengage Learning. Retrieved from http://www.cengage.com/resource_uploads/downloads/1285780140_428378.pdf
Bataineh, R., & Baniabdelrahman, A. (2006). Jordanian EFL students’ perceptions of their computer literacy. International Journal of Education and Development using ICT [Online], 2(2). Retrieved from http://ijedict.dec.uwi.edu/viewarticle.php?id=169.
No Child Left Behind Act of 2001, Pub. L. No. 107-110, 115 Stat. 1425 (2002).
Becker, H. J. (2000). Findings from the Teaching, Learning, and Computing Survey: Is Larry Cuban Right? Education Policy Analysis Archives, 8 (51), n51.
Beltran, D. O., Das, K. K., & Fairlie, R. W. (2008). Are Computers Good for Children? The effects of home computers on educational outcomes. Centre for Economic Policy Research, ANU. Retrieved from http://citeseerx.ist.psu.edu/viewdoc/download?rep=rep1&type=pdf&doi=10.1.1.223.106
Beltran, D. O., Das, K., & Fairlie, R. W. (2010). Home computers and educational outcomes: evidence from the NLSY97 and CPS. FRB International Finance Discussion Paper, (958). Retrieved from http://people.ucsc.edu/~rfairlie/papers/published/econ%20inquiry%202010%20-%20home%20computers.pdf
Ben Youssef, A., & Dahmani, M. (2008). The impact of ICT on student performance in higher education: Direct effects, indirect effects and organizational change. In The Economics of E-learning. Revista de Universidad y Sociedad del Conocimiento (RUSC), 5(1). Retrieved from http://www.uoc.edu/rusc/5/1/dt/eng/benyoussef_dahmani.pdf
Blanton, W. E., Moorman, G. B., Hayes, B. A., & Warner, M. L. (1997). Effects of participation in the Fifth Dimension on far transfer. Journal of Educational Computing Research, 16 (4), 371-396.
Blazer, C. (2008). Literature Review: Educational technology. Research Services, Miami-Dade County Public Schools.
Clotfelter, C. T., Ladd, H. F., & Vigdor, J. L. (2008). Scaling the digital divide: Home computer technology and student achievement. In Education Policy Colloquia Series, Harvard University, Cambridge, MA.
Cuban, L. (2009). Oversold and underused: Computers in the classroom. Harvard University Press. http://www.urosario.edu.co/urosario_files/28/28745b9b-7870-4676-9b0e-a84b26278639.pdf
Cuban, L. (2010). Rethinking education in the age of technology: The digital revolution and schooling in America. Science Education, 94(6), 1125-1127.
Cuban, L., Kirkpatrick, H., & Peck, C. (2001). High access and low use of technologies in high school classrooms: Explaining an apparent paradox. American Educational Research Journal, 38(4), 813-834.
Cuban, L., & Tyack, D. (1995). Tinkering toward utopia: A century of public school reform. Cambridge, MA: Harvard University Press.
D’Angelo, C., Touchman, S., Clark, D., O’Donnell, A., Mayer, R., Dean Jr., D., & Hmelo-Silver, C. (2009). Historical Roots of Constructivism. Retrieved from http://www.education.com/reference/article/constructivism/
Davis, J., & Kmitta, D. (2010). Why PT3: An analysis of the impact of educational technology.
DeBell, M., & Chapman, C. (2006). Computer and Internet Use by Students in 2003. Statistical Analysis Report. NCES 2006-065. National Center for Education Statistics.
Dias, L. B. (1999). Integrating technology. Learning and Leading with Technology, 27, 10-13.
Dickinson, E. R., & Adelson, J. L. (2014). Exploring the limitations of measures of students’ socioeconomic status (SES). Practical Assessment, Research & Evaluation, 19(1). Retrieved from http://pareonline.net/getvn.asp?v=19&n=1
DiMaggio, P., Hargittai, E., Celeste, C., & Shafer, S. (2004). Digital inequality: From unequal access to differentiated use. Social Inequality, 355-400. Retrieved from http://books.google.com/books?id=5h2pEfBVyRwC&lpg=PA355&ots=LwZJg9j-cB&dq=Dimaggio%2C%20Hargittai%2C%20Celeste%2C%20%26%20Shafer%2C%202004&lr&pg=PA358#v=onepage&q=Dimaggio,%20Hargittai,%20Celeste,%20&%20Shafer,%202004&f=false
Duffy, T. M., & Jonassen, D. H. (Eds.). (1992). Constructivism and the technology of instruction: A conversation. Psychology Press.
Dynarski, M. (2007). Effectiveness of reading and mathematics software products: Findings from the first student cohort report. DIANE Publishing.
Ertmer, P. A., & Ottenbreit-Leftwhich, A. T. (2010). Teacher Technology Change: How knowledge, confidence, beliefs, and culture intersect. Journal of Research on Technology in Education, 42 (3). Retrieved from http://files.eric.ed.gov/fulltext/EJ882506.pdf
Fairlie, R. W. (2014). The effects of home computers on school enrollment. Economics of Education Review. UC Santa Cruz: Department of Economics, UCSC. Retrieved from http://escholarship.org/uc/item/82w8v1m8
Fairlie, R. W. (2005). The effects of home computers on school enrollment. Economics of Education Review, 24(5), 533-547. Retrieved from http://www.cjtc.ucsc.edu/docs/r_schoolcomp6.pdf
Fairlie, R. W. (2012). Academic achievement, technology and race: Experimental evidence. Economics of Education Review, 31(5), 663-679. Retrieved from http://www.kent.k12.wa.us/cms/lib/WA01001454/Centricity/Domain/568/Citations/Academic%20Achievement%20Technology%20and%20Race.pdf
Fairlie, R. W., & London, R. A. (2012). The Effects of Home Computers on Educational Outcomes: Evidence from a Field Experiment with Community College Students. The Economic Journal, 122(561), 727-753. Retrieved from http://people.ucsc.edu/~rfairlie/papers/butte22.docx
Fairlie, R. W., & Robinson, J. (2013). Experimental evidence on the effects of home computers on academic achievement among schoolchildren (No. w19060). National Bureau of Economic Research. Retrieved from http://archive.nyu.edu/bitstream/2451/31408/2/11_14.pdf
Fouts, J., (2000). Research on Computers and Education: Past, present and future. The Bill and Melinda Gates Foundation. Seattle WA.
Fuchs, T., & Woessman, L. (2004a). Computers and student learning: Bivariate and multivariate evidence on the availability and use of computers at home and at school (No. 1321). CESifo working papers. Retrieved from http://www.econstor.eu/bitstream/10419/18686/1/cesifo1_wp1321.pdf
Fuchs, T., & Woessman, L. (2004b). What accounts for international differences in student performance? (No. 1235). CESifo working papers. Retrieved from http://www.econstor.eu/bitstream/10419/18874/1/cesifo1_wp1235.pdf
Giacquinta, J. B. (1993). Beyond technology’s promise: An examination of children’s educational computing at home. Cambridge University Press.
Goolsbee, A., & Guryan, J. (2006). World Wide Wonder? Measuring the (Non-) Impact of Internet Subsidies to Public Schools. Education Next, 6(1), 60-65.
Graves, M. (2009). Computing Encyclopedia. Cengage Learning.
Higgins, S., Xiao, Z., & Katsipataki, M. (2012). The Impact of Digital Technology on Learning: A Summary for the Education Endowment Foundation.
Inan, F. A., & Lowther, D. L. (2010). Factors affecting technology integration in K-12 classrooms: A path model. Educational Technology Research and Development, 58(2), 137-154.
ISTE (International Society for Technology in Education) (2008). ISTE Policy Brief—Technology and Student Achievement—The Indelible Link. Washington, DC. Retrieved from http://www.k12hsn.org/files/research/Technology/ISTE_policy_brief_student_achievement.pdf
ISTE (International Society for Technology in Education) (2012). The ISTE Standards and NCATE (National Council for Accreditation of Teacher Education). Washington, DC. Retrieved from http://www.iste.org/standards
Jackson, L. A., Von Eye, A., Biocca, F. A., Barbatsis, G., Zhao, Y., & Fitzgerald, H. E. (2001). Computers in Human Behavior, 27, (1), p. 228-239.
Jackson, L. A., Von Eye, A., Biocca, F. A., Barbatsis, G., Zhao, Y., & Fitzgerald, H. E. (2006). Does home internet use influence the academic performance of low-income children? Developmental psychology, 42(3), 429.
Johnson, K. A. (2000). Do computers in the classroom boost academic achievement? A report of the Heritage Center for Data Analysis. Retrieved March 21, 2010, from http://www.heritage.org/Research/Education/CDA00-08.cfm
Keller, J. (2009). Studies Explore Whether the Internet Makes Students Better Writers. Journal of Higher Education.
Kolikant, Y. B.-D. (2012). Using ICT for school purposes: Is there a student-school disconnect? Computers & Education, 59(3), 907+. Retrieved from http://go.galegroup.com/ps/i.do?id=GALE%7CA294942932&v=2.1&u=qa_mig19&it=r&p=AONE&sw=w&asid=475b54056ea084a975f48814241db770
Livingstone, Sonia (2012) Critical reflections on the benefits of ICT in education. Oxford Review of Education, 38 (1). pp. 9-24. Retrieved from http://eprints.lse.ac.uk/42947/1/__libfile_repository_Content_Livingstone,%20S_Critical%20reflections_Livingstone_Critical%20reflections_2014.pdf
Lemke, C., & Martin, C. (2003). One-to-one computing in Maine: A state profile. Los Angeles, CA: Metiri Group. Retrieved from http://westsidecs.wms.schoolfusion.us/modules/groups/homepagefiles/cms/701787/File/laptopresearch/ME-Profile.pdf
Lenhart, A., Arafeh, S., & Smith, A. (2008). Writing, Technology and Teens. Pew Internet & American Life Project.
Li, X., Atkins, M. S., & Stanton, B. (2006). Effects of home and school computer use on school readiness and cognitive development among Head Start children: A randomized controlled pilot trial. Merrill-Palmer Quarterly, 52(2), 239-263.
Loges, W. E., & Jung, J. Y. (2001). Exploring the digital divide internet connectedness and age. Communication Research, 28(4), 536-562.
Lowther, D. L., Ross, S. M., & Morrison, G. R. (2001, July). Evaluation of a laptop program: Successes and recommendations. 2001 National Educational Computing Conference Proceedings. Eugene, OR. International Society for Technology in Education. Retrieved from http://amoyemaat.org/lowther.pdf
Malamud, O., & Pop-Eleches, C. (2011). Home computer use and the development of human capital. The Quarterly Journal of Economics, 126(2), 987-1027.
Megarry, J. (2013). Thinking, learning, and educating: The role of the computer. World Yearbook of Education 1982/83: Computers and Education, 15-28.
Mitchell Institute. (2004). One-to-one Laptops in a High School Environment, Piscataquis Community High School Study Final Report. Great Maine Schools Project.
Muir, M., Knezek, G., & Christensen, R. (2004). The power of one-to-one. Learning & Leading with Technology, 32(3), 6. Retrieved from http://www.eastpennsd.org/Committees/_techcomm/techcommpics/laptopinitiative.pdf
National Telecommunications and Information Administration. (1999). Falling Through the Net—Defining the Digital Divide: A report on the telecommunications and information technology gap in America. U.S. Department of Commerce. Retrieved from www.ntia.doc.gov/ntiahome/fttn99/contents.html
North Carolina Department of Public Instruction. (2005). IMPACT: Guidelines for North Carolina media and technology programs. Raleigh, NC.
North Central Regional Educational Laboratory [NCREL]. (2005). Critical Issue: Using technology to improve student achievement. Retrieved from http://www.ncrel.org/sdrs/areas/issues/methods/technlgy/te800.htm#contact
Oppenheimer, T. (1997). The computer delusion. The Atlantic Monthly, 280(1), 45-62.
Papanastasiou, E. C., Zembylas, M., & Vrasidas, C. (2003). Can computer use hurt science achievement? The USA results from PISA. Journal of Science Education and Technology, 12(3), 325-332. Retrieved from http://vrasidas.com/wp-content/uploads/2007/07/science.pdf
Papert, S. (1996). Computers in the classroom: Agents of change. The Washington Post education review. Retrieved from http://ucladodgeroz.tripod.com/Printables/Papert.doc
Papert, S. (1999). Vision for education: The Caperton-Papert platform. For the 91st annual National Governors’ association meeting in St. Louis, August 1999.
Papert, S., & Harel, I. (1991). Situating constructionism. Constructionism, 36, 1-11.
Peck, C., Cuban, L., & Kirkpatrick, H. (2002). Techno-promoter dreams, student realities. Phi Delta Kappan, 83(6), 472-480.
Rabalais, M. (2014). STEAM: A national study of the integration of the arts into STEM instruction and its impact on student achievement. Retrieved via email from author.
Rajaraman, V., (2010). Fundamentals of Computers. Phi Learning Pvt. Ltd.
Reiser, R. A. (2002). A history of instructional design and technology. In R. A. Reiser & J. V. Dempsey (Eds.), Trends and issues in instructional design and technology. Upper Saddle River, NJ: Merrill Prentice Hall.
Roberts, D. F., Foehr, U. G., & Rideout, V. J. (2005). Generation M: Media in the lives of 8-18 year-olds. Henry J. Kaiser Family Foundation.
Rockman, S. (2004). Getting results with laptops. Technology & Learning, 25(3), 1-12. Retrieved from http://www.sca2006.tic-educa.org/archivos/modulo_2/sesion_3/getting_results_with_laptops.pdf
Rouse, C. E., & Krueger, A. B. (2004). Putting computerized instruction to the test: a randomized evaluation of a “scientifically based” reading program. Economics of Education Review, 23(4), 323-338.
SRI International. (2002). The Integrated Studies of Educational Technology: Professional development and teachers’ use of technology. SRI International Report.
Samuelson, P., & Varian, H. R. (2001). The “New Economy” and information technology policy. University of California, Berkeley. Retrieved February 23, 2014, from http://people.ischool.berkeley.edu/~pam/papers/infopolicy.
Schmitt, J., & Wadsworth, J. (2006). Is there an impact of household computer ownership on children’s educational attainment in Britain? Economics of Education review, 25(6), 659-673.
Silvernail, D. L., & Lane, D. M. (2004). The impact of Maine’s one-to-one laptop program on middle school teachers and students. Maine Education Policy Research Institute (MEPRI), University of Southern Maine.
Silvernail, D. L., & Gritter, A. K. (2007). Maine’s middle school laptop program: Creating better writers. Retrieved from https://usm.maine.edu/sites/default/files/Center%20for%20Education%20Policy%2C%20Applied%20Research%2C%20and%20Evaluation/Impact_on_Student_Writing_Brief.pdf
Spiezia, V. (2010). Does Computer Use Increase Educational Achievements? Student-level Evidence from PISA.
State Educational Technology Directors Association, SETDA (2008). SETDA’s National Trend Report Highlights of 2008.
State Educational Technology Directors Association, SETDA (2012). AARA Case Studies 2012.
Stoll, C. (1996). Silicon snake oil: Second thoughts on the information highway. Random House LLC.
Stevens, T., (2012). Research Findings on the Effects of One-to-One Student Computing. Retrieved from https://techsvcweb.madison.k12.wi.us/files/techsvc/Research%20Findings%20on%20the%20Effects%20of%20One-to-One%20Student%20Computing.pdf
Strommen, E.F. and Lincoln, B., (1992). Constructivism, Technology, and the Future of Classroom Learning. Education and Urban Society. Volume 24, Number 4, August 1992, pp.466-476, New York. Retrieved from http://m.alicechristie.org/classes/530/constructivism.pdf
Sun, L. & Bradley, K.D. (2010). Using the U.S. PISA results to investigate the relationship between school computer use and student academic performance. Retrieved from http://www.uky.edu/kdbrad2/MWERA_Letao.pdf
Swan K., & Mitrani, M. (1993) The changing nature of teaching and learning in computer-based classrooms. Journal of Research on Computing in Education, 26, (1), 40-54.
Thomas, K. M., & McGee, C. D. (2012). The only thing we have to fear is… 120 characters. TechTrends, 56(1), 19-33.
Thomson, D. L. (2010). Beyond the classroom walls: Teachers and students perspectives on how online learning can meet the needs of gifted students. Journal of Advanced Academics, 21(4), 662-712.
Texas Center for Educational Research. (2008). Evaluation of the Texas Technology Immersion Pilot: Outcomes for the third year (2006-2007). Austin, TX: TCER. Retrieved from http://www.tea.state.tx.us/WorkArea/DownloadAsset.aspx?id=2147497816
Underwood, J., Billingham, M., & Underwood, G. (1994). Predicting Computer Literacy: How do the technological experiences of schoolchildren predict their computer‐based problem‐solving skills? Journal of Information Technology for Teacher Education, 3 (1), 115-126.
U.S. Department of Education, National Center for Education Statistics. (1999). Teacher Quality: A report on the preparation and qualifications of public school teachers. Washington, DC: NCES 1999-080.
U.S. Department of Education, National Center for Education Statistics. (2000). Internet Access in U.S. Public Schools and Classrooms: 1994-99. Washington, DC: NCES 2000-086.
U.S. Department of Education, National Center for Education Statistics. (2000). Teachers’ Tools for the 21st Century: A report on teachers’ use of technology. Washington, DC: NCES 2000-102.
U.S. Department of Education, Office of Educational Research and Improvement. National Educational Longitudinal Study (NELS: 88/94) Methodology Report. Publication #NCES 96-174.
U.S. Department of Education. (2004). National Education Technology Plan. Jessup, MD: Editorial Publications Center.
U.S. Department of Education, Institute of Education Sciences, National Center for Education Statistics, National Assessment of Educational Progress (NAEP), 2004 Science Assessment.
Vigdor, J. L., & Ladd, H. F. (2010). Scaling the digital divide: Home computer technology and student achievement (No. w16078). National Bureau of Economic Research.
Warschauer, M. (1997). Computer‐mediated collaborative learning: theory and practice. The Modern Language Journal, 81(4), 470-481.
Warschauer, M. (2003). Demystifying the digital divide. Scientific American, 289(2), 42-47.
Warschauer, M. (2004). Technology and social inclusion: Rethinking the digital divide. MIT Press.
Warschauer, M. (2006). Laptops and literacy: A multi-site case study. Pedagogies: An International Journal, 3(1), 52-67. Retrieved from http://gse.uci.edu/person/warschauer_m/docs/ll-pedagogies.pdf
Warschauer, M., & Matuchniak, T. (2010). New technology and digital worlds: Analyzing evidence of equity in access, use, and outcomes. Review of Research in Education, 34(1), 179-225. Retrieved from http://gse.uci.edu/person/warschauer_m/docs/equity.pdf
Waxman, H. C., & Huang, S. L. (1996). Classroom instruction differences by level of technology use in middle school math. Journal of Educational Computing Research, 14, 147-159.
Waxman, H. C., Lin, M.-F., & Michko, G. M. (2003). A Meta-Analysis of the Effectiveness of Teaching and Learning with Technology on Student Outcomes. University of Houston.
Wenglinsky, H. (1998). Does it compute? The relationship between educational technology and student achievement in mathematics. Princeton, NJ. Educational Testing Service. Retrieved from http://www.ets.org/Media/Research/pdf/PICTECHNOLOG.pdf
Wenglinsky, H. (2005). Using technology wisely: The keys to success in schools. New York: Teachers College Press.
Winkleby, M. A., Jatulis, D. E., Frank, E., & Fortmann, S. P. (1992). Socioeconomic status and health: how education, income, and occupation contribute to risk factors for cardiovascular disease. American journal of public health, 82(6), 816-820. Retrieved from http://ajph.aphapublications.org/doi/abs/10.2105/AJPH.82.6.816