ML-DRIVEN DYNAMIC TEST ITEM GENERATION | IJCT Volume 13 – Issue 2 | IJCT-V13I2P33

International Journal of Computer Techniques
ISSN 2394-2231
Volume 13, Issue 2  |  Published: March – April 2026

Authors

Lankepalli Kaveri, Kavali Silpa, Kanaka Ganayathri, Mr. P. Jayachandran

Abstract

Machine Learning–driven dynamic test item generation has gained significant attention in modern educational and assessment systems due to the increasing demand for scalable, adaptive, and personalized evaluation methods. The primary objective of such systems is to automatically generate high-quality test items that align with learner proficiency levels while reducing the manual effort involved in traditional question paper design. This project investigates the application of machine learning and natural language processing techniques to analyze educational content and generate contextually relevant, difficulty-adaptive test items. The proposed approach focuses on leveraging data-driven models to understand semantic relationships, learning objectives, and cognitive complexity within instructional material. Limitations of existing manual and rule-based test generation methods, including lack of adaptability, time inefficiency, and limited personalization, are identified through a review of current assessment practices. The findings emphasize the potential of ML-based dynamic test item generation systems to enhance assessment accuracy, support intelligent learning platforms, and improve the overall effectiveness of technology-driven education.
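To make the pipeline described above concrete, the following is a minimal, self-contained sketch (not the authors' implementation) of one common item-generation technique: producing fill-in-the-blank items from instructional text. The heuristic of blanking out the rarest content word as a proxy for a key term, and all function names, are illustrative assumptions.

```python
import re
from collections import Counter

def generate_cloze_items(text, num_items=2):
    """Generate simple fill-in-the-blank test items from instructional text.

    Heuristic: within each sentence, blank out the least frequent content
    word in the passage (a rough proxy for a domain-specific key term).
    """
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[A-Za-z]+", text.lower()))
    stopwords = {"the", "a", "an", "of", "and", "to", "in", "is",
                 "are", "that", "for", "then"}

    items = []
    for sent in sentences[:num_items]:
        # Candidate key terms: non-stopword content words of some length.
        candidates = [w for w in re.findall(r"[A-Za-z]+", sent)
                      if w.lower() not in stopwords and len(w) > 3]
        if not candidates:
            continue
        # The rarest word in the passage is treated as the answer key.
        answer = min(candidates, key=lambda w: freq[w.lower()])
        stem = sent.replace(answer, "_____", 1)
        items.append({"stem": stem, "answer": answer})
    return items

passage = ("Supervised learning trains a model on labelled examples. "
           "The model then predicts labels for unseen inputs.")
for item in generate_cloze_items(passage):
    print(item["stem"], "->", item["answer"])
```

A production system would replace the frequency heuristic with the semantic and cognitive-complexity models discussed in this paper, but the overall shape (segment content, select key concepts, emit stem plus answer key) is the same.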

Keywords

ML, Natural Language Processing, Dynamic Test Generation, Adaptive Assessment.

Conclusion

Machine Learning–Driven Dynamic Test Item Generation (DTIG) systems play a critical role in enhancing the efficiency, accuracy, and reliability of modern educational assessment environments. By continuously analyzing large volumes of instructional content and learner interaction data, DTIG systems can generate contextually relevant, difficulty-adaptive, and pedagogically aligned test items using advanced machine learning and natural language processing techniques. These capabilities significantly reduce reliance on static question banks and manual assessment design while enabling timely and scalable evaluation. The key benefits can be summarized as follows:

1. Predictive quality assurance: By leveraging machine learning and predictive analytics, DTIG systems can anticipate potential test item quality issues, such as difficulty mismatch, ambiguity, or bias, before deployment. This enables timely corrective action and reduces the risk of invalid or ineffective assessments.

2. Autonomous operation: State-of-the-art DTIG systems support autonomous decision-making in test item generation, allowing assessments to adapt dynamically to learner performance and contextual changes without continuous human intervention.

3. Reliability and fairness: By detecting and mitigating generation risks at early stages, DTIG systems significantly improve the reliability, fairness, and consistency of assessments, particularly in large-scale or high-stakes educational environments.

4. Cost efficiency: Early identification and resolution of test item generation issues reduce manual rework and ongoing maintenance effort.
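The adaptive behaviour described above, where item difficulty tracks learner performance without human intervention, can be illustrated with a minimal Rasch/Elo-style sketch. This is not the system described in the paper; the item bank, the update constant `k`, and the closest-difficulty selection rule are all illustrative assumptions.

```python
import math

def expected_correct(ability, difficulty):
    """Logistic (Rasch-style) probability that the learner answers correctly."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

def update_ability(ability, difficulty, correct, k=0.4):
    """Elo-style ability update after one observed response."""
    return ability + k * ((1.0 if correct else 0.0) - expected_correct(ability, difficulty))

def pick_item(ability, item_bank):
    """Select the item whose difficulty is closest to the current ability
    estimate, i.e. the most informative item under this simple model."""
    return min(item_bank, key=lambda item: abs(item["difficulty"] - ability))

# Toy item bank with difficulties on the same logit scale as ability.
bank = [{"id": i, "difficulty": d} for i, d in enumerate([-1.5, -0.5, 0.0, 0.8, 1.6])]

ability = 0.0
for correct in [True, True, False, True]:  # simulated learner responses
    item = pick_item(ability, bank)
    ability = update_ability(ability, item["difficulty"], correct)
print(round(ability, 3))
```

Each response nudges the ability estimate up or down, and the next item is chosen to match it, which is the core loop behind difficulty-adaptive assessment.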


How to Cite This Paper

Lankepalli Kaveri, Kavali Silpa, Kanaka Ganayathri, Mr. P. Jayachandran (2026). ML-DRIVEN DYNAMIC TEST ITEM GENERATION. International Journal of Computer Techniques, 13(2). ISSN: 2394-2231.

© 2026 International Journal of Computer Techniques (IJCT). All rights reserved.
