Analysis of Assessments

This analysis looks at the assessments made on the example submissions as well as those made during the peer assessment phase of the assignment. It attempts to select the better assessments from this pool of teacher and student assessments. These "good" assessments are then used in the calculation of the final grades.

This analysis is best done when teacher assessments are available. These assessments can act as a benchmark against which the student assessments are judged. The teacher does not need to assess every example and every submission, but for the analysis to be meaningful the teacher should make more assessments than the average number made by each student. The more assessments the teacher makes, the more confidence the teacher can have in the results of the analysis.

The analysis is usually run a number of times, each time changing one or more of the options. The analysis is controlled by the three options which appear at the top of the page.

  1. The Loading for Teacher Assessments sets the weighting given to the teacher's assessments, compared with the students' assessments, in the error analysis stage. If the teacher wants their own grading strategy to dominate the way the students grade the submissions, then the teacher should be the assessor with the smallest average error in the "Error Table". If the teacher is not the first one listed, then the loading of the teacher's assessments is increased until the teacher has the lowest average error. This implies that the teacher's assessments are dominant, and the students who grade like the teacher are also listed in the top part of the Error Table. The students listed at the bottom of the table are grading in ways which do not match the teacher's assessments (nor those of the students at the top of the table). The more assessments that are available from the teacher, the less likely it is that this option will have to be used to force the teacher to the top of the table. Note that this option does not apply a weighting factor to the teacher's assessments when they are used in the calculation of final grades; in that calculation the teacher's assessments have the same weight as the student assessments. So, for example, if a student's submission is graded at 41% by the teacher and at 45% and 55% by their peers, the final grade given to the submission is (41% + 45% + 55%) / 3, that is 47% (see the first sketch after this list).
  2. The Weight for Grading of Assessments is used in the calculation of the final grade. A simple formula is used to calculate a student's "Grading Performance": it is the proportion of "good" assessments the student has done compared to the maximum number of assessments open to them. For example, if the assignment asks the students to do 3 assessments of the example submissions and 5 peer assessments, and a student does 7 assessments of which 1 is dropped from the analysis (see below), then their grading performance is (7 - 1) / 8, that is 75%. The final grade for the assignment is a weighted combination of this grading performance and the grade given to their submission (or their best grade if they made more than one submission). The grade for the submission is always given a weight of 1, so setting this option to, say, 0.5 means that the two grades are combined in the proportion 0.5 : 1, that is about 33% of the grading performance and 67% of the grade of the submission. Both calculations are sketched in the first example after this list.
  3. The Percentage of Assessments to drop determines the number of assessments to be excluded when calculating the final grades. This number can be set in one of two ways.
    • Given the way the Grading Performance is calculated, each student could, if they assessed all the work allocated to them, achieve full marks for this element if no assessments are dropped. If the teacher wishes to have a more reasonable average grade, then setting this option to 30% would result in an average Grading Performance of about 70% (again, if all students graded all the assessments open to them).
    • Alternatively, the number of assessments to drop might be set such that the remaining "good" assessments constrain the Average Errors to some reasonable value. These are the percentages given in the fourth column of the Error Table. For example, it may be decided that all the student assessments should, on average, lie within a 20% range. The analysis is then repeated a number of times, adjusting the number of assessments to drop, until the figures in this column all lie within that limit (see the second sketch after this list).
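
The two calculations described in items 1 and 2 can be summarised in a short sketch. This is illustrative Python, not the actual Moodle code; the function names and the 0.5 weight are assumptions made purely for the example.

    def submission_grade(grades):
        # Item 1: kept teacher and peer grades are averaged with equal
        # weight when the grade for a submission is computed.
        return sum(grades) / len(grades)

    def grading_performance(done, dropped, max_open):
        # Item 2: proportion of "good" assessments out of the maximum
        # number of assessments open to the student.
        return (done - dropped) / max_open

    def final_grade(perf, best_submission_grade, weight):
        # Weighted combination; the submission grade always has weight 1.
        return (weight * perf + best_submission_grade) / (weight + 1)

    print(submission_grade([41, 45, 55]))        # 47.0, as in item 1
    print(grading_performance(7, 1, 3 + 5))      # 0.75, as in item 2
    print(final_grade(75.0, 47.0, 0.5))          # 0.5 : 1 mix, about 33% : 67%

With a weight of 0.5 the last call gives (0.5 × 75 + 47) / 1.5, roughly 56%, showing how the grading performance and the submission grade are mixed.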
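
The second way of setting the drop percentage (the last bullet above) amounts to an iterative loop. The sketch below, again illustrative Python with assumed data shapes, shows a single analysis pass: drop the given percentage of assessments with the largest errors, then report each assessor's average error over the assessments that remain, as in the fourth column of the Error Table. The teacher re-runs the analysis with a larger percentage until every average lies within the chosen limit.

    from collections import defaultdict

    def one_pass(assessments, drop_pct):
        # assessments: list of (assessor, error_pct) pairs.
        ranked = sorted(assessments, key=lambda a: a[1])    # smallest error first
        n_drop = round(len(ranked) * drop_pct / 100)
        kept = ranked[:len(ranked) - n_drop]                # discard the worst
        per_assessor = defaultdict(list)
        for assessor, error in kept:
            per_assessor[assessor].append(error)
        # Average error of the remaining assessments, per assessor.
        return {a: sum(e) / len(e) for a, e in per_assessor.items()}

    errors = [("teacher", 5), ("ann", 12), ("ann", 35), ("bob", 18), ("bob", 44)]
    print(one_pass(errors, 30))   # {'teacher': 5.0, 'ann': 12.0, 'bob': 18.0}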

In addition to the Error Table, the analysis lists the grades of all assessments and the final grades given to the students. This table should be inspected to see whether the results are reasonable. In particular, if many assessments are dropped, some submissions may be left unassessed and those students' final grades will be far too small. The analysis gives the number of submissions at the top of the page and again just before the Grades Table; these two numbers should be the same. If there are one or more unassessed submissions and the teacher does not want to decrease the number of dropped assessments, then those submissions should be assessed by the teacher and the analysis repeated. It is important that all submissions are assessed at least once in the final stage of the analysis, that is, when the final grades are calculated. A simple check for this is sketched below.
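
A quick way to spot this problem is to check that every submission still carries at least one kept assessment after the drop. This is an illustrative sketch with assumed data shapes, not part of Moodle itself.

    def unassessed(submissions, kept_assessments):
        # kept_assessments: (submission_id, grade) pairs that survived the drop.
        assessed = {sid for sid, _ in kept_assessments}
        return [s for s in submissions if s not in assessed]

    missing = unassessed(["s1", "s2", "s3"], [("s1", 47), ("s3", 62)])
    if missing:
        print("Assess these before the final run:", missing)   # ['s2']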

There is a balance between the number of assessments dropped and the overall final grades. The more assessments that are dropped, the lower the final grades are likely to be. However, if poor assessments are not dropped, students may complain about the quality of the assessments which determine the grades for their work. Provided there are enough assessments by the teacher to dominate the analysis without too much forcing, it would seem reasonable to drop somewhere between 15% and 30% of the assessments.

Note that this analysis can take a long time, as it involves an iterative process; lengthy delays are to be expected.
