I have implemented a number of online exam engines, both applet and HTML based.
It is not going to be simple if you want to provide ANY flexibility and have a user-friendly interface. There are plenty of commercial efforts in this area, and probably open-source implementations too (but I have not tried to keep up).
Thanks for your valuable suggestion, but what I want to know is the rough logic. These systems use IRT-based tests, which are adaptive in nature. So if a user keeps answering correctly he might get only advanced-level questions, while a user who answers poorly will always get simple questions, right? Could you tell me how to deal with these extreme conditions?
I have done one "adaptive" test implementation. There are several tricky points:
1. Creating a question set where each question has a difficulty rating. This is harder than you might think because question authors do not necessarily have a good idea of how difficult a question is to the general population of students.
In our case we exposed candidate questions online and recorded all answers. To get meaningful statistics you need a big population of example users - tens of thousands would be nice. We also uncovered many questions where the author was in error or the question had multiple interpretations, so provide instant user feedback so users can point out problems.
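To make that concrete, here is a minimal sketch of how you might turn recorded answers into an empirical difficulty, plus a review flag for the broken/ambiguous questions mentioned above. The class name, the 0.9 threshold, and the minimum-sample cutoff are all my own illustrative choices, not anything from a real product:

```java
public class QuestionStats {
    /** Empirical difficulty: the share of a large sample that answered wrong. */
    public static double difficulty(int correct, int attempts) {
        if (attempts <= 0) throw new IllegalArgumentException("no attempts recorded");
        return 1.0 - (double) correct / attempts;
    }

    /** Heuristic: a question almost nobody gets right may itself be wrong or
        ambiguous, so route it to an author for review rather than trusting
        the rating. Requires a minimum sample so small counts don't mislead. */
    public static boolean needsReview(int correct, int attempts, int minSample) {
        return attempts >= minSample && difficulty(correct, attempts) > 0.9;
    }
}
```

The review flag matters because, as noted, a very low success rate is as likely to mean a flawed question as a genuinely hard one.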
Rather than trying for a numeric difficulty, we ended up just using easy/medium/hard and a rather unscientific hack to rate each question and select the next one.
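One unscientific hack of that flavor might look like the following: promote the student after a short streak of correct answers at the current level, and drop back a level on a miss. The two-correct-answers promotion rule and the level names are assumptions for illustration, not the actual logic used:

```java
import java.util.List;

public class NextQuestionPicker {
    private static final List<String> LEVELS = List.of("easy", "medium", "hard");
    private int level = 0;   // start everyone on easy questions
    private int streak = 0;  // consecutive correct answers at the current level

    /** Record the last answer and return the level to draw the next question from. */
    public String nextLevel(boolean correct) {
        if (correct) {
            // Two in a row at this level: move up (if we can) and reset the streak.
            if (++streak >= 2 && level < LEVELS.size() - 1) {
                level++;
                streak = 0;
            }
        } else {
            // A miss drops the student back a level immediately.
            if (level > 0) level--;
            streak = 0;
        }
        return LEVELS.get(level);
    }
}
```

This also addresses the "extreme conditions" question above: a strong student climbs to hard and stays there, and a weak student settles at easy, which is exactly the intended behavior of an adaptive test rather than a failure mode.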
2. You also need to recognize that a student may be strong in some areas and weak in others. We would typically have 6-10 major "categories" in a question set and track proficiency in each category. This is really great for feedback to students in practice tests.
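Per-category tracking is just a map from category to a correct/attempted tally; a sketch, with hypothetical category names in the usage below:

```java
import java.util.HashMap;
import java.util.Map;

public class ProficiencyTracker {
    // category -> {correct answers, total attempts}
    private final Map<String, int[]> stats = new HashMap<>();

    public void record(String category, boolean correct) {
        int[] s = stats.computeIfAbsent(category, k -> new int[2]);
        if (correct) s[0]++;
        s[1]++;
    }

    /** Success rate per category, for end-of-test feedback to the student. */
    public double proficiency(String category) {
        int[] s = stats.get(category);
        return (s == null || s[1] == 0) ? 0.0 : (double) s[0] / s[1];
    }
}
```

At the end of a practice test you can report each category's rate separately, which tells the student where to focus far better than a single overall score.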
(I disagree with moving this to intermediate - we are talking a hard problem in design here)