having a set of fixed categories for each judge to score generally prevents issues like the ones we had this year
it ensures a judge can't just give a jump a 1 across the board because they feel like it
they can still give a jump a 1 for creativity, but they also have to score the (theoretical) categories of difficulty and execution separately, which counterbalances whatever bias they may or may not have
a system that takes the median or average of about three separate category scores per jump per judge would add a few seconds of effort but produce more accurate results overall
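as a rough sketch of what that could look like (category names and the 1-10 scale here are just assumptions for illustration), each judge's score for a jump becomes the median of their category scores, and the jump's final score averages the judges:

```python
from statistics import median

# hypothetical category scores (1-10) from each judge for one jump:
# (creativity, difficulty, execution)
judge_scores = [
    (1, 8, 9),   # a judge biased against the jump's creativity
    (7, 8, 8),
    (6, 7, 9),
]

# each judge's score for the jump is the median of their category
# scores, so one low outlier category can't sink the whole jump
per_judge = [median(scores) for scores in judge_scores]

# the jump's final score averages the judges' medians
final = sum(per_judge) / len(per_judge)
print(per_judge, round(final, 2))  # → [8, 8, 7] 7.67
```

note how the biased 1 for creativity barely moves the final score, which is the whole point of the median here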