The full papers presented in this Online Conference Paper panel are no longer accessible. You can see the original abstracts of the papers submitted for presentation in the abstract books.
Rationale for the Panel Discussion
The rise of mis- and disinformation has increased interest in media literacy (ML) among policymakers, who clamor for data and evidence to characterize the competencies required for work, life, and citizenship in a digital age. As a general consensus grows among policymakers and academics that ML is the ability to access, analyze, evaluate, and create media in multiple forms, there has been some progress in measuring ML competencies in local and national contexts.
Some approaches to measuring media literacy have repurposed existing data, or data that can be easily collected, rather than choosing stronger measures identified through empirical research (Bulger, 2012). For example, in Europe, the Media Literacy Index was created in 2017 in response to the ‘post-truth’ phenomenon to measure the resilience of EU citizens to ‘fake news’ and its consequences. The index ranks 35 European countries using proxy indicators of media freedom, quality of education, interpersonal trust, and e-participation as a way to gauge levels of public trust. But critics point out that such measures do not adequately characterize media literacy competencies in ways that enable policymakers to take action (citation).
New survey research has emerged that begins to paint a portrait of media literacy competencies by asking people to self-report their skills (OFCOM, 2020). Performance-based measures that ask people to complete tasks have also been used to measure media literacy, but these studies involve small samples that are not generalizable to larger populations. Recent survey research with representative samples of Australian adults shows that most people have low confidence in their own use of media for information activities such as safely navigating online environments, changing privacy settings on social media, and identifying misinformation (Notley et al., 2021). Other studies have examined high school teachers’ assessments of students’ skills (Hamilton et al., 2020) and the perceptions of educators, parents, and community members about the implementation of ML instructional practices in local school districts (Media Education Lab, 2021).
The need for more granular measures of ML competencies that can be assessed cross-nationally remains high. At a time when the vast majority of people in advanced democracies are increasingly reliant on digital media, there is a sense of urgency to develop reliable and valid measures of ML competency that are conceptually distinct from measures of media usage. This panel discussion brings together researchers interested in the methodological dimensions of measuring media literacy competencies. Questions to be considered include: