LogFAQs > #935400483

Topic: An analysis of Guru bracket winners and the amount of risk taken
ZeldaTPLink
03/08/20 3:25:57 PM
#2:

This second batch of graphs seems to show clearer slopes in general. Maybe the board got better at contests over time.

Games 09 is a very interesting case because the norm was huge risks all across the field, with a lot of people over 25% near the top score. Yet the winner only took 13% risks. This means taking a lot of risks helped you score points in 2009, but to actually win the Guru, you had to shoot close to the magical 14%. This does fall in line with my earlier theory that multi-ways favor more risks, even though the user who finished #1 avoided that trend himself.

The first Game of the Decade contest has a lower average score than most. Yet the risks were very concentrated under 15%, with the winner risking 12%. A lot of people actually risked far less, getting close to 5%. Comparing it to this year's contest, I get the impression that people all followed the pack in what looked like a chalkier bracket, yet when the matches actually happened there were a lot of unpredictable results, and the people who breached that 9% limit did not succeed.

Chars 10 also showed a similar risk trend to GotD10, although the average score was much higher.

Rivals 11 has the guys at the top all going below 10%, with the winner going below 6%, which suggests a really chalky bracket where the chalk actually succeeded.



This batch of graphs also seems to have an obvious downward trend.

The graph for 2013 is tiny because of the buttdevastation brought upon by the Draven rally, which dragged average scores way down. This is an interesting one because while the winner took 20% risks, most people below him took risks close to 10%. Apparently he was one of only two people to pick Samus to go to the finals. Most people had Mario, but Mario suffered that legendary defeat to Vivi in an early round. In the end, a single moment of inspiration lifted him above the masses.

In 2015, most people who did well took risks below 15%. A lot of people went below 10%, though, and the contest was not kind to those. The winner took 13% risks, so a healthy amount of risk-taking paid off in the end.

2017 was our ultimate cookie contest, with the winner only taking a single risk (3% of the total matches). Risks below 10% are the norm towards the top, though, so this can be interpreted as an effect of the contest's uniqueness.

Finally, 2018 shows some clunky trends towards the end, but generally, between 10-15% was the path to victory. The winner took 9%, though.

Conclusions:

  • Hypothesis 1 is right. If you want to win, it's probably a bad idea to differ from the masses by more than 20%, unless you really think you've figured something out the rest didn't.
  • The average risk of the Guru winner is 14%, with most winners of 1v1 contests staying between 10 and 15%. Multi-ways actually seem to influence this ratio, though, and it's probably better to take between 15% and 20% risks in them, although in both 2008 and 2009 the winner shot well outside that range.
  • Bracket size doesn't seem to matter much to risk taking, since both 2006 contests were small but had similar risk ratios to the rest. But contest type does. A more gimmicky contest, like Rivals or Villains, can give the win to more chalky brackets if there aren't many unpredictable matches. If such a contest is going on, watch where the pack is going and don't try to be a hero. This could happen again if we have a Best Console contest, for example.
  • It's still hard to know if the 10-15% range actually increases winning chances or if it's just the range most people are likely to be in anyway. Yet there were a bunch of cases where the masses were either way above or way below that range, and the winner was still the guy who went closest to it (2008 is the notable exception). This gives some strength to the theory that you need a certain level of risk to avoid elimination, yet shouldn't go too much above 15% (or 20% for multi-ways). If you are lazy and don't want to think much, just take 14% risks. For this contest, this means 17-18 risks.
  • There is a lot of talk going around about how this year's bracket seems chalky, yet we don't have much data, so we could be wrong about a lot of things. The GotD1 results seem to match that sentiment. Expect to make a lot of mistakes, and yet the 10%-15% range is likely to be the right one anyway. This means you should mostly follow the pack, but knowing which upsets to pick is critical. There will likely be some insane results no one will see coming. If you can see them, the contest could be yours.
  • One big limitation of this analysis is that it treats all matches as if they are equal, when the truth is that later matches count for more points (although the importance is not necessarily directly proportional to points, since you need to get certain early matches right to even have the right characters in the later ones). Ideally, I would also analyse how many points were risked by each guru, not just individual matches. But that would require a lot more data crunching, since this info is not available on the guru site. And I wanted to finish this project some time before bracket lockdown, so I had to go with what I had.
  • Please submit your bracket to the BOP so we can reliably calculate our risks, thanks. Link: https://gamefaqs.gamespot.com/boards/8-gamefaqs-contests/78450828
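A quick sanity check on the 14% figure: a single-elimination bracket with N entrants has N - 1 matches, and a 128-entrant field (an assumption on my part, but it's consistent with the 17-18 risks quoted above, since 14% of 127 is about 17.8) gives this little sketch:

```python
# Sketch: turn a target risk percentage into a whole number of risk picks.
# Assumes a single-elimination bracket; the 128-entrant size is my guess,
# chosen because 14% of its 127 matches lands on the 17-18 risks quoted above.

def matches_in_bracket(entrants: int) -> int:
    """A single-elimination bracket with N entrants has N - 1 matches."""
    return entrants - 1

def risk_budget(entrants: int, risk_pct: float) -> int:
    """Round the target risk percentage to a whole number of risk picks."""
    return round(matches_in_bracket(entrants) * risk_pct / 100)

print(risk_budget(128, 14))  # 18 risk picks out of 127 matches
print(risk_budget(128, 10), risk_budget(128, 15))  # the "safe" 10-15% band: 13 to 19 picks
```

So "follow the pack" here means deviating from consensus on roughly 13 to 19 of your picks, with ~18 as the historical sweet spot.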


And this is it. I think I've filled my nerdiness quota for the whole trimester, at least. Let me know what you think of this analysis!