What would your awards program be without the input of your judges? They bring expertise and knowledge to the table, adding precious value to your program’s reputation and outcomes.
But when it comes to judging entries, subjectivity can be a risk – your judges are human, and from time to time are subject to bias and conflicts of interest. Their preferences, area of expertise or professional activity can all affect the way entries are evaluated.
You need effective and efficient methods to keep your program fair and free from favouritism. But fear not! It’s all covered in Award Force. Here are six ways you can minimise subjectivity in the assessment of entries.
Recruit impartial judges
It may seem like a no-brainer, but considering conflicts of interest should be done from the very start. Think of this phase as an investment in the quality and credibility of your awards program.
As you look for judges, make sure to consider not only their expertise but also their assessment skills. Keep an eye out for any potential conflicts of interest between judges and entrants. And check whether judges have any stake in the outcomes – professional, personal or financial.
In Award Force, you can ask for this information directly from judges: create an entry round specifically for judges where you ask about past judging experience, references and whatever other input you need to screen your candidates.
Manage conflicts of interest
Of course, it’s impossible to prevent all conflicts of interest. So it’s critical that you provide a way for judges to recuse themselves if conflicts of interest arise.
This is where good awards management software comes in! Keep track of which entries are assigned to which judges so you can change judging assignments if the need arises.
In Award Force, you can easily recuse a judge from assessing specific entries and re-assign any relevant entries in the Assignments view and Panels. And, if a judge identifies a potential conflict of interest, they can also abstain from assessing it with a simple click.
Anonymise entries
Objectivity can also be compromised by including personal information about an entrant, such as name, gender, education or location. An easy way to combat this is to make entries anonymous. Show judges only the information they need to assess entries, reducing the risk of bias.
In Award Force, you can anonymise entries: use the field visibility tool to hide personal entrant information from your judges in their judging view, even when they download entries or attachments.
Guide the judging experience with effective configuration
The more you control the assessment experience, the less room you leave for subjectivity. After all, you know your program best, and you’re therefore best suited to determine what to include in the judging – and it starts with your entry form. Establish the questions you want entrants to answer and the format in which you’d like them to respond. These questions will define how your judges make their decisions.
If the assessment is built on in-depth scoring and feedback from judges, configuring the criteria of this assessment becomes even more essential to reduce any potential bias.
Set scoring controls such as minimum and maximum scores. You can also weight each judging criterion according to its importance.
In Award Force, the flexible entry form builder offers you several choices to ask for information in the right way. And you can use the score-based judging mode, VIP Judging, to create as many criteria as you need.
Avoid peer influence with a private judging view
Judging can be either individual or collaborative. Whilst the latter has its benefits, it can also create peer influence. If minimising subjectivity is your concern, individual judging is a great choice. You can assign the same entries to several judges knowing they’ll have a clean view, independent of other judges’ scores and input. This allows you to leverage the expertise of several judges while taking the necessary steps to minimise bias.
In Award Force, you’re in control of what your judges see. Set up judging views so that each judge sees only the entries they’ve been assigned in a nice, clean view.
Apply “the wisdom of crowds”
In his book The Wisdom of Crowds, James Surowiecki argues that the judgement of a group of people can be better than the judgement of any one individual. In other words, increasing the number of assessors can minimise subjectivity, as long as they are not influencing one another’s decisions. For your awards program, this could mean inviting a higher number of judges or creating a public voting round to involve more people in the assessment.
In Award Force, you can use judging panels to create and manage a large group of judges. Panels help you organise what gets judged, when and by whom – and, as mentioned above, you can keep their views and scores private. On the public side, you can also create a public voting round using the Voting mode and invite your entire community to vote on their favourite entries.
Fair assessment, a valuable awards program
The human element of assessment is what gives power and sophistication to your awards program. Your judges are experts, and their expertise is invaluable. The more knowledge they share in their assessment, the more power it gives your program, increasing its legitimacy, integrity and reputation.
By wisely choosing judges and creating a judging process that cultivates objective decision-making, you can add credibility and value to your awards program.