Use of Jacobs ESL Composition Profile to Evaluate University Students’ Writing

Hari Prasad Tiwari (1*)

(1) Tribhuvan University
(*) Corresponding Author




DOI: https://doi.org/10.26858/eltww.v10i2.51632

Abstract


A teacher needs access to a reliable scoring rubric in order to assess ESL/EFL (English as a Second Language/English as a Foreign Language) students' writing abilities accurately. One of the most widely used tools for assessing the writing skills of non-native speakers of English is the Jacobs ESL Composition Profile, which can also help teachers tailor their instruction to better meet the needs of their students. The present study attempts to determine the internal consistency between teachers, i.e. raters, who use the Jacobs Scoring Rubric (SR) to evaluate students' essays, to describe the level of students' writing performance as measured by the Jacobs SR, and to describe teachers' viewpoints on using the profile. The study employs a descriptive quantitative research design. The participants, selected through judgmental sampling, were two ESL/EFL teachers who had been teaching and testing writing at bachelor's level for more than five years and 40 fourth-year ESL/EFL students studying at undergraduate level at Madhypaschim Multiple Campus, Nepalgunj, Banke, Nepal. The two teachers (TA and TB) were instructed to score the 40 essays written by the fourth-year students. The findings revealed high internal consistency between TA and TB when assessing the student essays with the Jacobs ESL Composition Profile (r = 0.86, p = 0.00 < 0.05). In addition, the Cronbach's alpha analysis yielded a value of 0.918, which indicates a high degree of consistency. The students' writing performance fell into four categories: very good (3.75%), good (52.5%), satisfactory (36.5%), and acceptable (7.5%). The results demonstrate that the Jacobs ESL Composition Profile remains a reliable instrument for essay scoring, although its comprehensive guidelines demand proficiency and experience from raters.
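
As a rough illustration of the two reliability statistics reported above, the following minimal Python sketch computes a Pearson correlation and Cronbach's alpha for two raters. The score values here are hypothetical (the study's actual 40 score pairs are not reproduced in the abstract); only the component weights follow the Jacobs ESL Composition Profile (Jacobs et al., 1981).

import numpy as np

# Component weights of the Jacobs ESL Composition Profile (Jacobs et al., 1981);
# each essay receives a total score out of 100.
WEIGHTS = {"content": 30, "organization": 20, "vocabulary": 20,
           "language use": 25, "mechanics": 5}

# Hypothetical profile totals for eight essays, one array per rater.
teacher_a = np.array([78, 65, 82, 59, 71, 88, 64, 70], dtype=float)
teacher_b = np.array([75, 68, 80, 62, 69, 85, 66, 73], dtype=float)

# Inter-rater consistency as a Pearson correlation (the study reports r = 0.86).
r = np.corrcoef(teacher_a, teacher_b)[0, 1]

# Cronbach's alpha with the two raters treated as "items"
# (the study reports alpha = 0.918).
scores = np.vstack([teacher_a, teacher_b])       # shape: (raters, essays)
k = scores.shape[0]
item_var_sum = scores.var(axis=1, ddof=1).sum()  # sum of per-rater variances
total_var = scores.sum(axis=0).var(ddof=1)       # variance of the summed scores
alpha = (k / (k - 1)) * (1 - item_var_sum / total_var)

print(f"Pearson r = {r:.2f}, Cronbach's alpha = {alpha:.3f}")

As a sanity check, when the two raters' score variances are roughly equal, alpha for two raters reduces to the Spearman-Brown form 2r/(1 + r); with r = 0.86 this gives about 0.92, consistent with the reported 0.918.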


Keywords


Essay writing; internal consistency; rater; scoring rubric; opinions


References


Al-Abed al-Haq, F. A., & Ahmed, A. (1994). Discourse problems in argumentative writing. World Englishes, 13(3), 307-323. https://doi.org/10.1111/j.1467-971X.1994.tb00318.x

Alderson, J. C., Clapham, C., & Wall, D. (1995). Language test construction and evaluation. Cambridge University Press.

Bacha, N. (2001). Writing evaluation: What can analytic versus holistic essay scoring tell us? System, 29(4), 371-383.

Bachman, L. (1990). Fundamental considerations in language testing. Oxford University Press.

Bachman, L. F., & Palmer, A. S. (1996). Language testing in practice: Designing and developing useful language tests. Oxford University Press.

Barkaoui, K. (2007). Rating scale impact on EFL essay marking: A mixed-method study. Assessing Writing, 12(2), 86-107.

Becker, A. (2011). Examining rubrics used to measure writing performance in U.S. intensive English programs. The CATESOL Journal, 22(2), 113-130.

Brindley, G. (1998). Describing language development? Rating scales and SLA. In L. Bachman & A. Cohen (Eds.), Interfaces between second language acquisition and language testing research (pp. 112-140). Cambridge University Press.

Casanave, C. P. (2007). Controversies in second language writing: Dilemmas and decisions in research and instruction. University of Michigan Press.

Cumming, A. (1990). Expertise in evaluating second language compositions. Language Testing, 7(1), 31-51.

Farzanehnejad, A. R. (1992). A new objective measure for calculating EFL writing tasks [Master’s thesis, Iran University of Science and Technology, Tehran, Iran].

Fulcher, G. (1996). Does thick description lead to smart tests? A data-based approach to rating scale construction. Language Testing, 13(2), 208–238.

Fulcher, G. (2003). Testing second language speaking. Pearson Longman.

Fulcher, G., Davidson, F., & Kemp, J. (2011). Effective rating scale development for speaking tests: Performance decision trees. Language Testing, 28(1), 5-29.

Ghanbari, B., Barati, H., & Moinzadeh, A. (2012). Rating scales revisited: EFL writing assessment context of Iran under scrutiny. Language Testing in Asia, 2(1), 83-100.

Hamp-Lyons, L. (1991). Reconstructing academic writing proficiency. In L. Hamp-Lyons (Ed.), Assessing second language writing in academic contexts (pp. 127–154). Ablex.

Haswell, R. H. (2005). Researching teacher evaluation of second language writing via prototype theory. In P. Matsuda, & T. Silva (Eds.), Second language writing research: Perspectives on the process of knowledge construction (pp. 105-120). Erlbaum.

Hughes, A. (1989). Testing for language teachers. Cambridge University Press.

McNamara, T. (1996). Measuring second language performance. Longman.

Huot, B. (2002). (Re)articulating writing assessment for teaching and learning. Utah State University Press.

Jacobs, H. L., Zinkgraf, S. A., Wormuth, D. R., Hartfiel, V. F., & Hughey, J. B. (1981). Testing ESL composition: A practical approach. Newbury House.

Jonsson, A., & Svingby, G. (2007). The use of scoring rubrics: Reliability, validity and educational consequences. Educational Research Review, 2(2), 130-144.

Klimova, B. F. (2011). Evaluating writing in English as a second language. Procedia - Social and Behavioral Sciences, 28(5), 390-394.

Knoch, U. (2007). Little coherence, considerable strain for reader: A comparison between two rating scales for the assessment of coherence. Assessing Writing, 12(2), 108-128.

Knoch, U. (2009). Diagnostic assessment of writing: A comparison of two rating scales. Language Testing, 26(2), 275-304.

Knoch, U. (2011). Rating scales for diagnostic assessment of writing: What should they look like and where should the criteria come from? Assessing Writing, 16, 81-96.

Ostovar, F., & Hajmalek, M. (2010). Writing assessment: Rating rubrics as a principle of scoring validity. Paper presented at the fifth conference on issues in English language teaching in Iran (IELTI-5), University of Tehran, Iran.

Maftoon, P., & Akef, K. (2010). Developing rating scale descriptors for assessing the stages of writing process: The constructs underlying students' writing performances. Journal of Language and Translation, 1(1), 1-17.

Montgomery, K. (2000). Classroom rubrics: Systematizing what teachers do naturally. The Clearing House, 73(6), 324-328.

Morrison, G. R., & Ross, S. M. (1998). Evaluating technology-based processes and products. New Directions for Teaching and Learning, 74, 69-77.

Moskal, B. M., & Leydens, J. A. (2000). Scoring rubric development: Validity and reliability. Practical Assessment, Research & Evaluation, 7(10), 23-37.

Nemati, M. (2007). To be or not to be: A search for new objective criteria to evaluate EFL compositions. Pazhuhesh-e Zabanha-ye Khareji, 32(2), 175-186.

Nimehchisalem, V., & Mukundan, J. (2011). Determining the evaluative criteria of an argumentative writing scale. English Language Teaching, 4(1), 58-69.

North, B., & Schneider, G. (1998). Scaling descriptors for language proficiency scales. Language Testing, 15(2), 217–263.

Norton, B. (2000). Writing assessment: Language, meaning, and marking memoranda. In A. Kunnan (Ed.), Fairness and validation in language assessment (pp. 20-29). Cambridge University Press.

Odell, L., & Cooper, C. (1980). Procedures for evaluating writing: Assumptions and needed research. College English, 42(1), 35-43.

Reid, J. (1993). Teaching ESL writing. Prentice Hall Regents.

Shaw, S. D., & Weir, C. J. (2007). Examining writing: Research and practice in assessing second language writing. Cambridge University Press.

Shohamy, E. (1993). The exercise of power and control in the rhetorics of testing. In A. Huhta, K. Sajavaara, & S. Takala (Eds.), Language testing: New openings. University of Jyväskylä, Finland.

Spandel, V. (2006). In defence of rubrics. English Journal, 96(1), 19-22.

Stemler, S. E. (2004). A comparison of consensus, consistency, and measurement approaches to estimating inter-rater reliability. Practical Assessment, Research & Evaluation, 9(4), 34-48.

Turner, C. E., & Upshur, J. A. (2002). Rating scales derived from student samples: Effects of the scale maker and the student sample on scale content and student scores. TESOL Quarterly, 36(1), 49–70.

Weigle, S. C. (2002). Assessing writing. Cambridge University Press.

Weir, C. J. (1990). Communicative language testing. Prentice Hall.

Zomorodian, M. (1998). Iranian EFL teachers’ and students' assessment of the student essays [Master’s thesis, Iran University of Science and Technology, Tehran, Iran].

