
Methodology & Some Conclusions

 https://www.ccsu.edu/wmln/methodology.html

The World’s Most Literate Nations (WMLN) is a descriptive study of rank orders created from two kinds of variables: those related to tested literacy achievement and those representing examples of literate behaviors. The latter comprise 15 variables grouped under five categories (Libraries, Newspapers, Education System - Inputs, Education System - Outputs, and Computer Availability), as well as population, which is used to establish per capita ratios where appropriate.

None of the variables indexed in WMLN was newly created for this study; all originate in databases from current sources. The validity of the data therefore depends on the accuracy of those sources. Each data source is cited and can be assessed by the reader. For most variables, data for all of the countries studied came from a single source (e.g., UNESCO). In a limited number of instances, a country was not included in a given database; in those cases, comparable data were compiled from a second source, but only if the second source was deemed similar to the first in both collection technique and dates of collection.

International studies vary greatly in terms of the countries that participate in them. Many more countries were initially examined in this study than the 61 that were finally included. If a given country did not contribute relevant data to most variables in all five categories, it was not included in the study. If a very limited number of variables under one of the categories were missing, and not supplied by a comparable source, they were mathematically treated in such a way that the missing data neither advantaged nor disadvantaged the country in question.
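The source does not specify the exact adjustment used for missing values. One common way to make a missing variable neutral, consistent with the description above, is to fill it with the country's own mean rank across the variables it does report, so the category average is unchanged. A minimal sketch, with hypothetical rank values:

```python
# Hypothetical sketch of a neutral treatment of missing data: the exact
# method used by WMLN is not specified in the source. A country's missing
# variable is filled with the mean of its own ranks on the variables it
# does have, so the category score is neither raised nor lowered.

def category_score(ranks):
    """ranks: list of rank-order scores for one country, None where missing."""
    present = [r for r in ranks if r is not None]
    neutral = sum(present) / len(present)            # country's own mean rank
    filled = [r if r is not None else neutral for r in ranks]
    return sum(filled) / len(filled)                 # equals mean of present ranks

print(category_score([4, 10, None, 7]))  # same as the mean of 4, 10, 7
```

Because the fill value is the country's own mean, the result is identical to simply averaging the variables that are present.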

In terms of the recency of data, the timeliest available source was always used. In a few cases, such as literacy test scores, both the last and next-to-last administrations of a test were used to create scores for the variables. The great majority of the data for all variables were collected in the last few years. Where only older data were available, they were used provided they had been collected within the last decade, and in nearly all cases they represented a small part of the database. This situation sometimes arose because databases involved rotating sets of countries in an annual administration: two countries may each have participated in data collection at their most recent opportunity, but that could have been 2012 for one country and 2015 for the other.

Data were analyzed taking into account the size of each country's population wherever per capita analysis was relevant. For example, the raw number of public libraries is not a fair basis for comparing Iceland and Germany, so such comparisons were made on a per capita basis. In other instances, such as the percent of homes with a desktop or laptop computer, the variable already accounts for population differences. In still other instances, such as literacy test scores, the population of the country is not relevant. Most of the variables in the five areas, however, needed to be calculated on a per capita basis.
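As a small illustration (with invented figures, since the study's raw counts are not reproduced here), a raw library count favors the larger country, while the per capita rate can reverse the order:

```python
# Hypothetical figures showing why raw counts are normalized per capita.
countries = {
    "Large country": (9_000, 80_000_000),   # (public libraries, population)
    "Small country": (300, 350_000),
}
for name, (libs, pop) in countries.items():
    per_100k = libs / pop * 100_000         # libraries per 100,000 people
    print(f"{name}: {libs} libraries, {per_100k:.2f} per 100,000 people")
```

Here the large country has thirty times as many libraries in absolute terms, but the small country has far more per 100,000 residents.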

The Variables
Two important points need to be made clear from the outset. First, each of the variables described here has been influenced by a nation’s history and policy over an extended period. Fully appreciating how a particular variable functions today requires a deeper understanding of these influences. In our 2016 book, World Literacy: How Countries Rank and Why It Matters, Miller and McKenna provide an extended analysis of many of the factors involved in the WMLN study, and this source may be helpful in interpreting the results.

Second, it is critical to note that this examination was not a controlled experimental study. Instead, large-scale trends across multiple indicators of literacy and literate behavior were analyzed. Can the case be made that there could have been different operational definitions of literacy and thus different variables indexed? Yes, that is clearly possible, but databases are not always available for indexing all possible sets of variables. For example, some theorists have recently extended the idea of literacy to other forms of communication such as viewing and listening. Data relative to this extended view might have varied considerably from the more traditional definition that guided the WMLN study, a definition that included only variables “grounded in visible written language” (Miller & McKenna, 2016, p. 18). Finally, even if an alternative database were available it might not have been one to which a large number of nations contributed data.

The five categories, as noted, comprise differing numbers of variables. Each variable is weighted equally in calculating a category's score. For example, the Newspapers category consists of four equally weighted variables, which are scored and then rank ordered by country. The only adjustment to this scoring and rank ordering comes in Education Outputs: because countries differed widely in how many of the four testing variables they participated in, a “bonus” of 0.2 rank-order points for each of the four tests administered is added to the final score prior to ranking.
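The scoring mechanics described above can be sketched as follows. The rank values are hypothetical, and details the source leaves open (the direction in which the bonus shifts a country, and the handling of ties) are not modeled:

```python
# Sketch of WMLN category scoring: equal weighting within a category, plus
# the Education Outputs "bonus" of 0.2 rank-order points per test
# administered. All rank values below are hypothetical.

def category_mean(variable_ranks):
    """Equal weighting: the category score is the mean of its variable ranks."""
    return sum(variable_ranks) / len(variable_ranks)

def outputs_score(test_ranks):
    """test_ranks: rank on each of up to four tests; None = not administered."""
    taken = [r for r in test_ranks if r is not None]
    base = sum(taken) / len(taken)      # equal weighting of the tests taken
    bonus = 0.2 * len(taken)            # 0.2 per test administered
    return base + bonus                 # added to the score prior to ranking

print(category_mean([4, 10, 7, 3]))    # a Newspapers-style four-variable mean
print(outputs_score([1, 3, None, 2]))  # three tests taken: mean 2.0 plus 0.6
```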

Some Conclusions
If Educational Outputs (PIRLS and PISA test results) were the only indices of literacy for ranking nations, the final results of the study would be very different from an analysis in which all five sets of variables are used. Adding in Educational Inputs, Computer Penetration, Newspapers, and Libraries leads to a wide variation of results.

For example, the relatively recent emergence of the Pacific Rim countries is strong and obvious when test performance is the only indicator. Four of the top five countries are Singapore, South Korea, Japan, and China, with Finland the only non-Pacific Rim country.

When factors other than test scores are included, there is not a single Pacific Rim country among the top 25. Japan is the highest ranked (26th) unless New Zealand is considered in this group (15th).

Western Hemisphere countries do not compare very favorably with nations from Europe and Asia. In the overall world ranking, Canada is 11th, the United States 7th, and Mexico 38th, while Brazil is 43rd and Costa Rica is 46th among the 61 countries studied.

When inputs and outputs are correlated by country rank, there is one consistent finding. It involves the relationship between the two education input measures (expenditure on education expressed as a percentage of Gross Domestic Product, and years of compulsory schooling) and the PISA and PIRLS test scores.

There are virtually no meaningful correlations between the input measures and the output measures, whether rank order correlations or raw score correlations are calculated. This is true for each of the two input measures separately and for the combined ranking measure when correlated with the four output measures (test scores) or their combined rank score. Only a small number of the 60 correlations are significant, and these are not strong (r &lt; .3). When effect size is considered, years of compulsory schooling and educational expenditures bear little relationship to the test scores. The great majority of coefficients are in the r = .1 range, and they are just as likely to be positive as negative.
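The two kinds of coefficients mentioned can be sketched directly: Pearson's r on the raw scores, and Spearman's rank correlation, which is simply Pearson's r computed on the rank orders. The six-country figures below are invented for illustration, not taken from the study:

```python
# Pearson (raw score) vs. Spearman (rank order) correlation, implemented
# from scratch. All data are hypothetical; no ties are handled here.

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def ranks(x):
    """Rank order of each value, 1 = smallest (assumes no ties)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

# Invented input measure vs. mean test score for six countries:
inputs = [4.1, 5.2, 6.0, 3.8, 7.1, 5.5]   # % of GDP spent on education
scores = [510, 495, 530, 500, 505, 520]   # mean test score
print(round(pearson(inputs, scores), 2), round(spearman(inputs, scores), 2))
```

With real WMLN data, the study reports that both versions hover near r = .1, far below any conventional threshold of practical importance.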

Two important caveats must be considered. First, years of compulsory schooling is not the maximum but the required minimum. Because students are generally age 8 to 9 and 15 when the two tests are administered, the lack of correlation is not particularly surprising. Second, expenditures expressed as a percentage of Gross Domestic Product are not the simple amount expended. A country with a huge GDP might spend a great amount per person yet a relatively low percentage of GDP. For example, Denmark, Sweden, and Finland all spend a higher percentage of GDP on education than does the United States, due in part to the smaller sizes of their economies, while the U.S. spends far more in absolute dollars, due in part to its much larger population and economy.
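The distinction is simple arithmetic. With invented figures, a country can devote a smaller share of GDP to education and still spend far more in absolute terms:

```python
# Invented figures: GDP in billions, education spending as a percent of GDP.
big_gdp, big_share = 20_000, 5.0      # large economy, lower share
small_gdp, small_share = 400, 7.0     # small economy, higher share

big_spend = big_gdp * big_share / 100      # 1000.0 billion in absolute terms
small_spend = small_gdp * small_share / 100  # 28.0 billion in absolute terms
print(big_spend, small_spend)
```

The large economy's 5% share dwarfs the small economy's 7% share in absolute spending, which is why the two measures must not be conflated.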

There are many other possible conclusions for readers to explore. The conclusions drawn here are intended to encourage speculation and investigation. As previously stated, many other points of view and speculations are discussed in detail in the companion book to this study, World Literacy: How Countries Rank and Why It Matters, by John W. Miller and Michael C. McKenna (2016).
