jump.tf Forums

Beginnings 4

Started by John · 296 replies · 51207 views

tyjle

  • Advanced
  • Posts: 519
  • Frags: +0/-0
I think we need to establish how to judge jumps.
Originality does play a role in jumping, yes, but so does difficulty.
So for Beginnings 5 I suggest we have these categories to judge, to be fair to all participants and avoid issues like this:
1. Difficulty /10
2. Creativity /10
3. Execution /10
This.
I mean this makes sense... but difficulty is a relative thing. What's hard for me probably isn't very hard for someone who's been playing way longer than I have. Like, for example, I think simple demo air pogos are hard, but someone good at demo jumping won't. Also, execution? What does that even mean? If they didn't execute the jump then they're not going to send it in.


scotch

  • Administrator
  • Proficient
  • Posts: 336
  • Frags: +0/-0
Well, that's why it is execution out of 10, rather than executed: yes or no.

It's just a measure of how well a jump was performed: if a sync jump had the rockets slightly out of sync, then that would get a lower execution score than a sync jump with the rockets perfectly aligned.


Dr. Heinz

  • needs to stop posting
  • Posts: 1036
  • Frags: +0/-0
  • Relax.
but difficulty is a relative thing. What's hard for me probably isn't very hard for someone who's been playing way longer than I have. Like, for example, I think simple demo air pogos are hard, but someone good at demo jumping won't.
That doesn't matter.
You can still compare jumps with each other.


pants

  • Proficient
  • Posts: 458
  • Frags: +0/-0
I still think incorporating a public poll somehow would be good, even if it's just part of a scoring scheme which still includes criteria from the judges and is only done for, say, the ~top ten jumps.


scotch

  • Administrator
  • Proficient
  • Posts: 336
  • Frags: +0/-0
I still think incorporating a public poll somehow would be good, even if it's just part of a scoring scheme which still includes criteria from the judges and is only done for, say, the ~top ten jumps.
I think a public poll would be very easy to abuse, as it offers nothing in terms of transparency or accountability of each vote.

Opening it up to a larger audience is a good idea though; it just needs to be done in a way that keeps the player submissions anonymous and the judges identifiable in order to minimise bias.

But I still think using judges is better, even if only because it means less work for whoever is managing the event.


Vexon

  • needs to stop posting
  • Posts: 1290
  • Frags: +0/-0
  • :}
public vote is a bad idea


5:01 PM - john | jump.tf: 👌


nick

  • Intermediate
  • Posts: 175
  • Frags: +2/-0
I still think incorporating a public poll somehow would be good, even if it's just part of a scoring scheme which still includes criteria from the judges and is only done for, say, the ~top ten jumps.
I think a public poll would be very easy to abuse, as it offers nothing in terms of transparency or accountability of each vote.

Opening it up to a larger audience is a good idea though; it just needs to be done in a way that keeps the player submissions anonymous and the judges identifiable in order to minimise bias.

But I still think using judges is better, even if only because it means less work for whoever is managing the event.
ideally what i'd love to have is at least 5 judges for each class, and then all 10 judging coop. key words are "at least", as having more would also be good; and separating the judges by class means they'd be looking at the jumps they're more "qualified" to judge
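
a rough sketch of what that split could look like (Python; the judge names and pool assignments here are made up purely to illustrate the idea):

```python
# Hypothetical sketch of the judge pools described above: at least
# 5 judges per class, with both pools combined for coop entries.
SOLDIER_JUDGES = ["judge_a", "judge_b", "judge_c", "judge_d", "judge_e"]
DEMO_JUDGES = ["judge_f", "judge_g", "judge_h", "judge_i", "judge_j"]

def judges_for(category: str) -> list[str]:
    """Return the judge pool responsible for a submission category."""
    pools = {
        "soldier": SOLDIER_JUDGES,
        "demo": DEMO_JUDGES,
        "coop": SOLDIER_JUDGES + DEMO_JUDGES,  # all 10 look at coop jumps
    }
    return pools[category]
```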


Melon

  • Proficient
  • Posts: 384
  • Frags: +0/-0
  • cool cats club
having some set categories for each judge to evaluate on generally prevents scoring issues like we had this year

it ensures that a judge can't just get away with giving a jump in its entirety a 1 because they feel like it

they can definitely give a jump a 1 for creativity, but they also have to consider the (theoretical) categories of difficulty and execution, which act as counters to any bias they may or may not have

a system that uses a median or an average of about three separate categories per jump per judge would take a few seconds of additional effort but ultimately create more accurate results overall
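
a minimal sketch of that per-judge scheme (Python; the combine rule and the example scores are invented for illustration):

```python
from statistics import mean, median

def judge_score(difficulty, creativity, execution, combine=mean):
    """Combine one judge's three /10 category scores into a single number."""
    return combine([difficulty, creativity, execution])

# a judge who spites a jump with a 1 for creativity still has to
# score difficulty and execution, so the 1 can't sink it on its own:
print(judge_score(9, 1, 8))          # mean   -> 6.0
print(judge_score(9, 1, 8, median))  # median -> 8
```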


John

  • video games
  • Novice
  • Posts: 88
  • Frags: +421/-69
Can confirm. Fishy's a really fluffy-hearted community manager for JA.

john get this man a purple right now !!!

as u wish


fishy

  • Guest
john is a saint everyone give him presents


Sere

  • Intermediate
  • Posts: 227
  • Frags: +0/-0
Remove the highest and lowest score for each jump when computing the average
"Simplicity is the ultimate form of sophistication." - Leonardo Da Vinci


Superchuck

  • Proficient
  • Posts: 412
  • Frags: +0/-0
  • y-you too
Remove the highest and lowest score for each jump when computing the average
I agree with this. I'm pretty sure diving does the same thing, where the lowest and highest judges' scores are removed to prevent outliers.
The Rat Master


scotch

  • Administrator
  • Proficient
  • Posts: 336
  • Frags: +0/-0
Remove the highest and lowest score for each jump when computing the average
I agree with this. I'm pretty sure diving does the same thing, where the lowest and highest judges' scores are removed to prevent outliers.
Why would you do this? It effectively removes two of the judges' votes from what is already going to be a small pool of results. This isn't even sound statistical data analysis practice, as you have given clear boundaries for the judges to score within, removing any possibility that a score could be classified as an outlier.

This also dissuades judges from voting outside of what they would consider an average voting range for a particular jump, further diluting the scores they will give.


HyperDan

  • Advanced
  • Posts: 807
  • Frags: +0/-0
  • Gimmick Goblin
Remove the highest and lowest score for each jump when computing the average

That's effectively killing like a quarter of the votes there


nolem

  • Proficient
  • Posts: 261
  • Frags: +3/-0
  • Youtube
Remove the highest and lowest score for each jump when computing the average
I agree with this. I'm pretty sure diving does the same thing, where the lowest and highest judges' scores are removed to prevent outliers.
Why would you do this? It effectively removes two of the judges' votes from what is already going to be a small pool of results. This isn't even sound statistical data analysis practice, as you have given clear boundaries for the judges to score within, removing any possibility that a score could be classified as an outlier.

This also dissuades judges from voting outside of what they would consider an average voting range for a particular jump, further diluting the scores they will give.

"The truncated mean is a useful estimator because it is less sensitive to outliers than the mean but will still give a reasonable estimate of central tendency or mean for many statistical models. In this regard it is referred to as a robust estimator. For example, in its use in Olympic judging, truncating the maximum and minimum prevents a single judge from increasing or lowering the overall score by giving an exceptionally high or low score." - https://www.wikipedia.org/wiki/Truncated_mean

Assuming that the voting is fairly regular, removing the top and bottom scores and then averaging the rest would change nearly nothing, so there are no disadvantages. In this case, where there were clear and unjustifiable outliers, this system would have helped remove stupid votes that were the result of some internal bias.
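
As a minimal sketch, this is the trimmed mean being described (Python; the judge scores are invented for illustration):

```python
from statistics import mean

def trimmed_mean(scores):
    """Drop the single highest and lowest score, then average the rest."""
    if len(scores) < 3:
        return mean(scores)  # too few judges to trim anything
    return mean(sorted(scores)[1:-1])

# five judges, one of whom tanks the jump out of bias:
print(mean([7, 8, 8, 7, 8]), mean([7, 8, 8, 7, 1]))                  # 7.6 vs 6.2
print(trimmed_mean([7, 8, 8, 7, 8]), trimmed_mean([7, 8, 8, 7, 1]))  # ~7.67 vs ~7.33
```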
« Last Edit: February 22, 2017, 11:06:47 AM by nolem »