Time for the final post on our Exhibition and Evaluation Principles. To this point we’ve discussed our philosophy on display vs. evaluation, the importance of feedback and the type of feedback we want to provide modellers, and the way we want to welcome young and beginner modellers while providing a challenge for more advanced modellers.

Now it’s time to talk about Judges… how we see judges earning their stripes and how they can improve. We’ve had suggestions that another word – Evaluators or something similar – might be more appropriate than Judges. After all, “Judge” does sound a little, well… judgey. But nothing is written in stone yet.

Principle 9 – Judges are trained and certified.

Those trusted to evaluate another modeller’s work at a juried exhibition must have a firm grasp of the class-specific standard they are using to evaluate that work and how to apply that standard as they examine a model. Like skill levels used to distinguish between Entrants’ models, Judge training and certification will be tiered to account for additional training and experience. This will enable Judges to evaluate models at larger juried exhibitions, to evaluate submissions at higher skill levels, and to serve in senior judging appointments. Just as Entrants may enter submissions in different classes at different skill levels, Judge training and certification will be Class-specific.

Principle 10 – Judging the Judges.

Our juried exhibition system will provide tools to analyze Judges’ performance. Statistical analysis of each Judge’s evaluation at every event provides quantitative feedback – did a Judge score models higher, lower, or the same as their fellow Judges? A unique feature of our system will be the ability of modellers to provide constructive feedback to the Judges who evaluated their work. If they wish to, Entrants will be able to comment on each point of feedback provided by each Judge. This combination of quantitative and qualitative feedback will allow Judges to hone their evaluation skills, making them better Judges and improving the evaluation experience for all participants.
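
To make the quantitative side of this concrete, here is a minimal sketch of the kind of consistency check described above. The data layout, the 0–100 scoring scale, and the flagging threshold are illustrative assumptions rather than the actual system design; the idea is simply to compare each Judge’s scores against the panel average for the same models and surface Judges whose average deviation is consistently high or low.

```python
# Minimal sketch of a Judge-consistency check (illustrative only).
# The data format, scoring scale, and threshold below are assumptions,
# not the final system design.

from collections import defaultdict
from statistics import mean

# scores[model_id][judge_name] = score awarded (assumed 0-100 scale)
scores = {
    "model_01": {"Judge A": 82, "Judge B": 78, "Judge C": 90},
    "model_02": {"Judge A": 70, "Judge B": 72, "Judge C": 85},
    "model_03": {"Judge A": 88, "Judge B": 84, "Judge C": 95},
}

# For each model, compare each Judge's score to the panel average,
# then average those deviations per Judge across the whole event.
deviations = defaultdict(list)
for model, by_judge in scores.items():
    panel_avg = mean(by_judge.values())
    for judge, score in by_judge.items():
        deviations[judge].append(score - panel_avg)

THRESHOLD = 5.0  # assumed cut-off for "notably generous or harsh"
for judge, devs in deviations.items():
    bias = mean(devs)
    note = "within norms"
    if bias > THRESHOLD:
        note = "scores higher than peers"
    elif bias < -THRESHOLD:
        note = "scores lower than peers"
    print(f"{judge}: average deviation {bias:+.1f} points ({note})")
```

A real implementation would also need to account for sample size and class-specific scoring norms, but the basic comparison stays the same: the goal is to surface consistent generosity or harshness across an event, not single disagreements.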


    6 responses to “Exhibition & Evaluation Principles Part 5”

    1. Stewart Brouillette

      This is moving in a positive direction. Training and certifying judges will certainly take a unique effort. I don’t know if there has ever been a formal training and certification process. This of course will require a syllabus that can and should be vetted and shared. I’ve got about 15 years under my belt performing institutional training development and delivery under the umbrella of the Army Signal School proponency. If there is any way I can assist, please let me know.

      1. Bruce Worrall

        Thanks for offering to help, Stewart. I’ll send you a PM.

    2. Toni Levine

      Having been a judge (and there is nothing wrong with the word) at several venues, I agree that setting a standard for their knowledge base is important. An independent party to judge the judges helps remove outliers. The problem is finding someone qualified to critique the judges, and how they would be certified. The problem with entrants formally commenting about the quality of judging is that there will always be an element of defensiveness on the part of the entrant.

      1. Bruce Worrall

        Hi Toni.
        Our plan for evaluating Judges has three components. First, Judging Team Leads, who will ideally have more experience and training than the members of their team, will guide and mentor their teammates. Second, we’ll track all of our Judges’ scoring centrally, and use statistical analysis to identify Judges who may be evaluating models too generously or too harshly, and provide them with feedback. Finally, as mentioned, Entrants will be able to provide feedback on the feedback they received. We expect all parties to be respectful, and those who aren’t will be warned.

    3. Mark Wilson

      Adjudicated athletics, like gymnastics, could be a close parallel to some of the tenets you described above. Judges are trained, scored and ranked for very senior events. My suggestion is to make a more formalized incorporation of Scope of Effort (SoE). SoE in athletics is, of course, the degree of difficulty that affects the total score. Regardless, include me in Class #001.

      1. Bruce Worrall

        Hi Mark,
        We’ve discussed judging criteria during many, many meetings over the past six months, and have decided on an evaluation formula that specifically does not include DoD/scope of work.
        Our feeling is that DoD, in terms of how difficult (or easy) it is to get a nice result from any particular kit, requires a degree of knowledge about which kits are hard or easy to build that would be difficult to include in training… we’d need a database of virtually every kit that might show up at an exhibition, rated on its DoD.
        We also felt that DoD is a criterion that can be abused by an unscrupulous Judge to give an entry an unwarranted advantage over others (which happened at the 2023 IPMS Nationals).
        We decided that we can include aspects of DoD in other criteria instead… at the Advanced and Masters skill levels we have criteria for Detailing and Artistic Merit, which represent extra skill and effort in the production of a first-class model.
