Producing an authentic English for academic purposes (EAP) summative reading assessment can be challenging because its main purpose is to assess the language ability and academic skills that prospective university students need to develop to succeed on their university degree courses. However, authenticity is key to producing high-quality reading tests, and it can be achieved to some extent by applying a task-based approach to adapted texts that are representative of subject-specific genres. Such an approach can significantly enhance the quality of reading tests. This article explores ways of achieving authenticity through creating academically appropriate scripts and tasks that contribute to authentic-like assessment measures, while keeping in mind the specific test-taker population: pre-university students. The term ‘pre-university students’ will be used to refer mostly to pre-sessional students.

The importance of authenticity has been emphasised in many studies. For example, Wood (1993) suggests that authenticity is an integral part of a test’s validity and reliability. Douglas (1997) explores the concept further, arguing that authenticity is particularly important when writing tests for specific purposes. It should, therefore, be integrated into assessment measures designed to test the language skills and abilities of prospective university students who will go on to study various subjects at university. Davidson (2020) emphasises that ‘authentic assessment can be a more accurate measure of a student’s skills and abilities because it measures them in a more direct way’, and producing authentic tests gives students the ‘opportunity for interaction with tasks that reflect real-world problems and situations’ (Abidin, 2012). In doing so, it encourages students to replicate real mental processes while tackling the tasks. In the case of pre-university students, solving real-world tasks allows them to activate and develop the skills and competencies required to succeed in the university context. Although authenticity is therefore highly desirable, it can be challenging to achieve because it requires test writers to replicate the tasks and activities students will do on their university degree courses while the students are still being prepared for that future study.

One way of achieving authenticity in reading tests is to create scripts that represent particular genres and address more generic academic topics. These factors matter because pre-university students will be exposed to various discipline-specific genres once they begin their university degree courses, so raising their genre awareness should be integrated into pre-sessional programmes. Training students, and then testing them, on working with such genres (journal articles, laboratory reports, critical reviews and case studies, for example) would replicate the types of texts that students analyse during their degree courses. As Maggie Charles and Diane Pecorari (2016:129) argue, when ‘learners arrive at a point at which [they] need to read the expert genres, their purpose for reading begins to change’. This, however, poses a challenge for creating authentic tasks for testing purposes: it is impossible to include many genres in summative assessment because of time limits and marking practicalities. What is achievable is focusing on a genre or genres that are applicable across several subjects or that are part of student life. This could, for example, be a laboratory report, a university email, a reflective journal entry, a student blog entry, or a more generic journal article. The topics of such texts cannot be too subject specific, otherwise some students might be advantaged over others. As long as the topics are generic and academic, they will carry some degree of authenticity. They could, for example, cover areas such as fitness levels among students or the usefulness of technology for studying. These kinds of topics allow for an academic angle, such as having to write about data or about advantages and disadvantages, and are relatable to students.

A task-based approach to reading tests also introduces elements of authenticity. The tasks included in tests need to replicate the coursework practices and assignments of the various target departments where students plan to complete their degree courses. For example, an authentic technique for testing students’ reading would be extracting information from two texts for a focus task. This makes students read purposefully. Students can be required to concentrate on the sections of the texts that are relevant to the focus task and to select information by distinguishing between important, less important and unimportant information. Incorporating two texts discussing different aspects of the same topic in one reading test therefore enhances authenticity, because it encourages students to activate academic skills such as dealing with multiple texts on the same topic, and so encourages expeditious reading under exam conditions. Weir and Urquhart (1998) define expeditious reading as a ‘conscious use of strategies to sample a text in the most efficient fashion in line with a particular purpose’. Since students are encouraged to read widely and actively during their university degree courses, a reading test could include a shorter text and a longer one, possibly each of a different genre. Having two texts also allows for the incorporation of academically authentic tasks such as comparing and contrasting texts in the light of specific details or key ideas. For example, students could identify the function of each text as a whole: they could be asked to classify which text gives recommendations and which explores a problem or situation in more detail. This approach encourages students to analyse texts globally and further develops their awareness of academic genres.

Asking students to recognise the differences or similarities between two or more texts is another authentic approach since it activates students’ critical thinking. As Du (2010) points out, students need to be critical readers, and this means ‘reading critically and thinking critically’. This critical thinking and reading also refers to students being able to ‘read the lines, read between the lines, [and] read beyond the lines’ (Du, 2010). Critical thinking is thus activated not only when students analyse texts globally, but also locally. These skills of close text analysis could be tested by including short answer questions (SAQs) in reading assessment. Although SAQs are not generally considered an authentic measure of students’ language skills, formulating questions that elicit information important in academic contexts would also enhance test authenticity. The SAQs could require students to infer meaning from the text, for example by asking about the author’s point of view. Another question type that would enhance authenticity and encourage close text analysis would involve verifying numerical information and providing graphic representation of numbers or facts in the form of charts, diagrams and other infographics. Including questions requiring students to interpret data creates an authentic situation because students, as developing researchers, will face data analysis, interpretation and presentation during their university degree courses.

Authenticity in summative EAP assessment can be achieved by designing a test that targets the skills required on university degree courses. This can be accomplished by creating text scripts that represent real academic genres and are thematically linked to academic study. Producing realistic tasks to accompany such scripts, and carefully selecting task formats to match the target skills, further enhances test quality and thus validity and reliability.

References

Abidin, Y. (2012). In Susani, R.G. (2018). ‘The implementation of authentic assessment in extensive reading’. International Journal of Education 11(1): 87–92.

Charles, M. & Pecorari, D. (2016). Introducing English for Academic Purposes 2nd Edition. Routledge.

Davidson, P. (2020). ‘What is academic writing?’ In Akşit, T. & Hande, I.M. (Eds.). The Future of Teaching English for Academic Purposes: Standards, Provision and Practices. 93–99. Cambridge Scholars Publishing.

Du, L. (2010). ‘Variables influencing the validity of the reading test’. English Language Teaching 3(3): 206–210.

Douglas, D. (1997). ‘Language for specific purposes testing’. In Clapham, C. & Corson, D. (Eds.). Encyclopedia of Language and Education, Volume 7: Language Testing and Assessment. 111–120. Kluwer Academic.

Weir, C. & Urquhart, A.H. (1998). Reading in a Second Language: Process, Product and Practice. Longman.

Wood, R. (1993). Assessment and Testing. Cambridge University Press.


Anna Ziomek is a Lecturer of English for Academic Purposes with Assessment responsibilities at the University of Reading and has more recently taken over the role of BALEAP Testing Officer. Her particular interests are the integration of EDI in curriculum and assessment and developing ESP teaching materials.