2020
Development of adaptive digital placement tests for languages
2020-1-HR01-KA204-077724

Project description

Determining the level of existing language competence in potential learners is of the utmost importance when developing individual education strategies. This importance can be demonstrated with simple statistics: during the last three academic years, only between 40 and 45 percent of new students (i.e. students who have never previously taken a course with us) enrolled in the entry-level course (A1). In other words, the majority of our students come to us with some form of existing language competence. Since practically the entire world adopted the CEFR more than 10 years ago, and most language courses are aligned with this framework, it is necessary to match existing language competence to one of the CEFR levels (A1 to C2).

Placement tests for determining this competence have been used for as long as there have been language schools, so the concept is not a novel one. However, testing practices have not changed significantly for decades, while the information technology that could enable such improvements has evolved dramatically. There have been some advances in pre-testing technology. In the beginning, placement tests were strictly linear: a potential student would go through a set number of questions and his or her language competence would be determined from a single score on a scale from 1 to n (where n is the total number of questions). This linear approach certainly had its merits, but it had just as many disadvantages that negatively affected the final result. Jantar, for example, compensated for these disadvantages by analyzing specific groups of questions within the test and creating its own scoring system based on years of previous experience. If the results were still unclear, one of our teachers administered an additional oral test to pinpoint the specific language level.

Some institutions, such as Cambridge Assessment English, have started developing their own digital tests over the last few years, but these are only partly adaptive and available exclusively for English. Similar products have been made by some publishers (such as Pearson or Oxford University Press), but these tests are, once again, limited to English and offer a very low level of adaptivity.

Through this project, Jantar will develop its own innovative and fully adaptive tests for determining the existing level of English, German, French, Italian, Spanish and Russian in our potential students. Unique algorithms for question selection and progressive determination of language competence will be developed by Amber IT Solutions, an IT company from Split, which will be the first to implement in placement testing the complex mathematical models used, for example, to rank chess players. These advanced algorithms are currently not used by any test on the market.

In addition to Jantar, the language methodology part of the project will be co-developed by British School Pisa from Italy and Blackbird from Serbia, renowned language schools with implemented quality assurance systems. Additional support is provided by Molih from Spain, a holding company with shares in multiple language schools across the globe. Our aim is to create unique tests that will determine the existing language level of our potential students with surgical precision, ensuring enrolment in adequate language courses. In this way we aim to eliminate drop-outs from language courses caused by loss of motivation due to enrolment in a course that does not match the student's existing language competence.

Activities

1st transnational meeting
Online
Dates: 26/11/2020 - 27/11/2020
Host: Amber IT Solutions – Split, Croatia
Unfortunately, the COVID-19 pandemic forced us to hold our first transnational meeting online instead of in Split. However, thanks to advances in IT and the competences of our project team, the meeting was organized without any difficulties. Over two full days of joint brainstorming, we successfully defined all the details necessary to commence the development process.
2nd transnational meeting
Split, Croatia / JANTAR – International House Split
Dates: 08/07/2021 - 09/07/2021
Host: Amber IT Solutions – Split, Croatia
At the beginning of July 2021, we held our second transnational project meeting. The meeting was organized by Amber IT Solutions at Jantar’s premises, with the participation of all project partners (Blackbird – Serbia, Molehill Holdings – Spain, British School Pisa – Italy). The main topic was the development of the intellectual outputs, with emphasis on the task-builder software as well as the language tasks themselves.
3rd transnational meeting
Čačak, Serbia
Dates: 29/11/2021 - 30/11/2021
Host: Blackbird, Čačak – Serbia
The third transnational project meeting was held in November 2021 at language school Blackbird, our project partner from Čačak (Serbia). During the meeting, the partners completed a revision of the produced Use of Language questions and established the development plan for the Reading Comprehension and Listening questions, as well as the framework for the algorithm software (IO2). In addition to two days of hard work on project implementation, the partners also managed to squeeze in some teambuilding on the beautiful Morava river.
Intellectual Output 1: Language task builder
Online
Dates: 2021
Completed development of the language task builder module, available at app.nextgenplacements.org. In October 2021, we completed the development of the largest part of the first intellectual output: a module for creating language tasks. Although we initially planned to include only Use of Language questions, the partners decided to add Reading Comprehension and Listening. This way, NGPT will test all language skills except speaking. At this stage of the project, the platform was made available to all interested language schools for creating language tasks, with the purpose of testing and collecting user feedback.
Intellectual Output 2: Rating algorithm
Online
Dates: 2021
Completed development of the rating algorithm used to determine the CEFR level of the test-taker. NGPT implements a modified version of the Elo rating system to determine the test-taker's CEFR level. Named after its creator, the Hungarian-American physics professor Arpad Elo, the Elo rating system is a method for calculating the relative skill levels of players in zero-sum games. In NGPT, each question in the database (which can therefore appear on the test) is assigned a numerical value based on its CEFR level and represents player 1. The test-taker represents player 2 and is assigned a mid-range score to begin with. Based on the statistical likelihood of successfully answering a question of a certain rating, the test-taker's score is adjusted after each answer until a final score is determined, corresponding to his or her CEFR level.
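As an illustration only, the following Python sketch shows how an Elo-style update of this kind could work. The per-level rating values, the starting score, the K-factor and all names are hypothetical assumptions for the sketch and do not reflect NGPT's actual parameters or code.

```python
import math

# Illustrative Elo-style update (NOT the actual NGPT implementation).
# Assumptions: CEFR levels map to fixed question ratings, the test-taker
# starts mid-range, and K controls how strongly each answer moves the score.

CEFR_RATINGS = {"A1": 800, "A2": 1000, "B1": 1200, "B2": 1400, "C1": 1600, "C2": 1800}
K = 64  # hypothetical step size; a real system would tune or decay this


def expected_score(taker_rating: float, question_rating: float) -> float:
    """Probability that the test-taker answers the question correctly (logistic model)."""
    return 1.0 / (1.0 + 10 ** ((question_rating - taker_rating) / 400.0))


def update_rating(taker_rating: float, question_rating: float, correct: bool) -> float:
    """Adjust the test-taker's rating after one answer, as in a two-player Elo game."""
    expected = expected_score(taker_rating, question_rating)
    actual = 1.0 if correct else 0.0
    return taker_rating + K * (actual - expected)


def closest_cefr_level(taker_rating: float) -> str:
    """Map the final rating back to the nearest CEFR level."""
    return min(CEFR_RATINGS, key=lambda lvl: abs(CEFR_RATINGS[lvl] - taker_rating))


# Example: starting mid-range, the test-taker answers a B1 and a B2 question
# correctly, then misses a C1 question.
rating = 1200.0
for level, correct in [("B1", True), ("B2", True), ("C1", False)]:
    rating = update_rating(rating, CEFR_RATINGS[level], correct)
print(closest_cefr_level(rating), round(rating))
```

In an adaptive test the next question would typically also be chosen close to the current rating so that each answer is maximally informative, though the selection strategy actually used by NGPT is not described here.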
NGPT Newsletter #1 - English
NGPT Newsletter #1 - Croatian
NGPT Newsletter #2 - English
NGPT Newsletter #3 - English
NGPT Newsletter #3 - Croatian
NGPT Newsletter #4 - Croatian
NGPT Newsletter #4 - English