Saturday, October 16, 2010

Assessment and models of learning

This is really more a thought experiment than a practical suggestion, but please walk with me for a bit. What if we used the student, at the end of their course, as the input to an expert system? That is, we queried their knowledge over an extended period until we felt it was captured within a rule set and a data store.
Now imagine we have evaluated the students, so we have a set of expert systems, each encoded from a separate student's impressions and the connections they built while taking part in the course.
Now, how do we compare these expert systems to verify that they are close enough to a canonical system to qualify as fair copies?
Would such a comparison, if it were possible, be a valid assessment?
Would it preclude the possibility that the student had gone beyond the teacher and built a more valid learning network?
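
To make the comparison concrete: here is a minimal sketch in Python, assuming (purely for illustration) that each student's captured knowledge reduces to a set of simple if-then rules, and that "close enough" means a Jaccard similarity score against a canonical rule set. The rules and the scoring choice are my inventions, not part of the thought experiment. Notice that any rule the student holds beyond the canon lowers the score, which is exactly the worry in the last question.

# Hypothetical sketch: rules are (conditions, conclusion) tuples,
# and similarity is plain Jaccard over the two rule sets.

def jaccard(rules_a, rules_b):
    """Similarity of two rule sets: |intersection| / |union|."""
    a, b = set(rules_a), set(rules_b)
    if not (a or b):
        return 1.0
    return len(a & b) / len(a | b)

# Invented canonical rule set, built from the course itself.
canonical = {
    (("engine_fire",), "shut_off_fuel"),
    (("stall_warning",), "lower_nose"),
}

# One student's encoded rule set: matches the canon on one rule,
# and adds a rule the course never taught.
student = {
    (("engine_fire",), "shut_off_fuel"),
    (("stall_warning", "low_altitude"), "apply_full_power"),
}

print(f"similarity to canon: {jaccard(student, canonical):.2f}")

# The extra rule drags the score down even if it is *better*
# than the canon -- the metric cannot tell improvement from error.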

What’s interesting here is, as ever, the implied spaces.

Matters arising: the first issue is whether an expert system, given sufficient time and resources, can successfully capture the sum of the knowledge acquired by the student. If it can't, what will it miss, and how can that be assessed?
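
As a sketch of what "captured within a rule set and a data store" might mean mechanically, here is a toy forward-chaining engine in Python; every rule, fact, and name in it is hypothetical. It also suggests one crude answer to "what will it miss": list the target facts the captured rules can never derive.

# Toy forward-chaining expert system: a data store of facts
# plus if-then rules. All names and rules here are invented.

def forward_chain(facts, rules):
    """Repeatedly fire rules until no new facts appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if set(conditions) <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

rules = [
    (("cloud_base_low", "no_instrument_rating"), "delay_flight"),
    (("delay_flight",), "inform_passengers"),
]
data_store = {"cloud_base_low", "no_instrument_rating"}

derived = forward_chain(data_store, rules)

# What did the capture miss? Compare against what we hoped the
# student knew (again, a purely illustrative target set).
target_knowledge = {"delay_flight", "inform_passengers", "check_fuel"}
print("missed:", target_knowledge - derived)  # -> {'check_fuel'}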

The second question is: assuming such a thing is possible, should all such systems resemble each other? If the topic is flying a plane, we would appreciate it if they were similar, and bore some passing resemblance to the Airbus people's model of the actions of an ideal pilot; but if the subject is creative writing, it's less clear that this method will work. Here I would claim that the implied space, the part of the learning which cannot be captured in a set of rules, is the real learning on the course. #PLENK2010
