Art's FuDGe

Event Rankings Guide

In 2023 (mid season), we added Event Rankings to our lineup of fresh, relevant content. We were surprised it took us so long to add what ought to be an obvious offering for any ranking service. As of mid 2024, we still believe we are the only media outlet publishing such a list. AF Event Rankings have evolved significantly since inception and are still undergoing notable changes.

Some History

In year 1 of Event Rankings (the last half of the 2023 season), the order was always whatever spot the AF standard rankings listed the player at; that was the order we were ‘predicting’ the player would finish. The prediction never felt accurate and downplayed each player’s performance history at the event. It offered no leeway in making picks, and was closer to ‘ranking for the event‘ than predicting how players would finish.

During that 2023 season, we longed for the current approach, believing it had to be more accurate. Here on the other side, we sometimes long for that earlier method, as it wasn’t all that far off in predicting the final outcome. The updated approach is full steam ahead on predictive analysis and educated hunches. It is more compelling, educational (read as humbling), and a wee bit addictive.

At the start of the 2024 season, tracking how accurate our predictions for each spot in the order actually are took on more importance, as we became more actively engaged in predicting. Our scale for accuracy runs from restrictive (off by 1 or 2 spots) to very liberal (20 to 40 spots off).

Why Event Rankings?

There are multiple reasons we do event rankings, and why we still reference them as ‘rankings’ in the title rather than predictions. Truthfully, a top reason is that it strikes us as an obvious service for media that does rankings to offer. Our list entices the audience by depicting fantasy-relevant players in the bigger events on tour, and by providing a sense of how the event might unfold, based on existing data metrics.

Another reason for Event Rankings is to inform. With the added color layers, the Event Rankings we publish indicate each listed player’s performance history at the venue, along with the value they hold if the order holds true. Additionally, our colors indicate what tier players enter the tournament at, and, after the event, how well our ranking held up against the actual places they finished.

A third reason for continuing Event Rankings: they are compelling. They are fun to come up with, while also revealing insights our brand is still coming to terms with. Our first sets of predictions in 2024 were made without ties, as predicting ties felt counterintuitive. By the third event, it no longer made sense to create the list without invoking ties. We have also learned that getting ‘semi close or better’ in accuracy is perhaps the best measurement of how we are actually doing. See “Targets for Accuracy” below.

Timing Is Everything

Posting Event Rankings weeks in advance of a tournament is possible, but not very meaningful. During the first third of the 2024 season, we went with a full list, always posted soon after we updated our standard rankings. A player’s latest trends are clearly a factor in how well that player might fare at the next event.

We’ve since added a new list: a shorter (top 10), temporary set of predictions, meant to build excitement for Event Rankings earlier than the 2 to 4 days before each tournament that our standard updates provide. The top 10 list is based mostly on performance history at the venue, along with any top-tier pro who is suddenly hot. This first rendition of Event Rankings is published 5 to 10 days before the event.

The days leading up to tournaments often come with players dropping from the event. The timing around this is a recurring issue in managing a fantasy roster, regardless of the format. Art’s Fudge tracks this as best we can, and we publish updates multiple times a day during the week of tournaments, namely to note anyone who has dropped. Some drops also lead us to rework the event ranking list, depending on how well the dropped player was predicted to finish.

Tables for Event Rankings Explained

The Top 10 list that is published first uses an abbreviated version of our full-list approach. For this top 10 list, only the Player Name column receives color layering, following what’s described below.

Our full list (typically 60 players for MPO and 30 for FPO) adds 2 columns, and 4 of the 5 columns include color layering, in nuanced ways. Here is a breakdown:

The first column – the order of the list – has no color layering, and is straightforward in designating the spot AF views the player as most likely to finish at.

quick color guide

AF Ranking Column

This column, like other ranking lists on our site, depicts the fantasy tier the player currently sits in within our rankings. Event Rankings, though, make use of a broader scale of tiers.

Player Name Column

This column has multiple combinations for depicting performance history, and we’ll walk through each of the scenarios. In general, we are conveying how well the player has performed at the venue over the last 5 years, based on average placement.

Rating Column

In fantasy ‘weekly re-draft’ leagues, player ratings equate to player salary price. In season-long draft fantasy leagues, player ratings are generally a non-factor. Player value (FRV) in relation to rating helps with perspective on whether the player, as a fantasy pick, is solid or poor. With Event Rankings, this value is predicated on where the player is predicted to finish, whereas most ranking lists on our site convey this value as it relates to a key stat, i.e. Elite Avg finishes.

A simple way to grasp this column’s coloring is to imagine that, for any event, AF picks the lowest rated player in the division to finish in 5th place. We would then designate that player as carrying top value, and if we end up even semi close, it was still a solid value pick. Likewise, if AF picks the top rated player in the division to finish middle of the pack, that results in low value. If we are semi close or better on that prediction, then our assessment of the player as a poor value pick was accurate.

Even better than top tier. This color only appears in the Ratings column for MPO Event Rankings, to represent a rating carrying exceptional value relative to where the player is predicted to finish: essentially 2.5+ tiers above their rating rank.
Top tier for FPO event rankings and 2nd tier for MPO. For FPO it is 1.5 tiers (or more) over rating rank; for MPO this tier covers a range of 1.5 to 2.5 tiers.
2nd tier for FPO, 3rd tier for MPO, both with a range of just under 1 up to 1.5 tiers over rating rank.
3rd tier FPO, 4th tier MPO, up to 3/4ths of a tier off rating rank.
4th tier FPO, 5th for MPO. The player's rating rank matches within 1 or 2 spots of their event ranking.
5th tier FPO, 6th for MPO, down up to 3/4ths of a tier from rating rank.
6th FPO, 7th MPO, down just under 1 to 1.5 tiers.
7th FPO, 8th MPO, down 1.5+ tiers for FPO, or 1.5 to 2.5 tiers down for MPO.
Even worse than bottom tier, the inverse of the light green color at the top. This only comes up on the MPO Event Ranking tables, reflecting players projected to finish 2.5+ tiers under their rating rank.

Place Column

The final column starts off empty, with no data or color, until the event results are added. Once the actual place is entered, we add color to convey how close our pre-event prediction was. We have 5 designations for measuring how close the pick was, scaled by division. A few events each season are scaled differently from the rest; generally, Worlds and the USWDGC are the 2 events where the scales below are doubled.

MPO Prediction Accuracy
Spot on. The predicted place is within 2 of the actual place. If the number is in green text, the pick is an exact match of the actual place.
Semi Close. The prediction is 3 to 10 spots off from the actual place.
Somewhat off. The prediction is 11 to 24 spots off.
Well off. 25 to 40 spots off.
Way off. 41+ spots off.
FPO Prediction Accuracy
Spot on (a correct guess). The predicted place is within 1 of the actual place. If the number is in green text, the pick is an exact match of the actual place.
Semi Close. The prediction is 2 to 6 spots off from the actual place.
Somewhat off. The prediction is 7 to 12 spots off.
Well off. 13 to 20 spots off.
Way off. 21+ spots off.
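The two scales above amount to a simple lookup on the gap between predicted and actual place. A minimal sketch (the function and structure are our own illustration, but the band boundaries come straight from the legend, with the Worlds/USWDGC doubling exposed as a flag):

```python
# Accuracy designation for one pick, per division, based on the absolute
# gap between predicted and actual place. Band boundaries follow the
# legend above; names and structure are illustrative.
BANDS = {
    # division: [(upper bound of gap, designation), ...]
    "MPO": [(2, "Spot on"), (10, "Semi Close"), (24, "Somewhat off"), (40, "Well off")],
    "FPO": [(1, "Spot on"), (6, "Semi Close"), (12, "Somewhat off"), (20, "Well off")],
}

def accuracy(predicted: int, actual: int, division: str, doubled: bool = False) -> str:
    """Return the accuracy designation for one pick."""
    gap = abs(predicted - actual)
    scale = 2 if doubled else 1  # Worlds and USWDGC double the scales
    for bound, label in BANDS[division]:
        if gap <= bound * scale:
            return label
    return "Way off"

print(accuracy(5, 5, "MPO"))   # exact match -> Spot on
print(accuracy(3, 12, "FPO"))  # 9 spots off in FPO -> Somewhat off
```

An exact match would additionally render the place number in green text, which this sketch does not model.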

Targets for Accuracy

When we wrap up a list of 60 players predicted in order for an MPO event, at least part of us believes this is it: this will be the time we get all 60 in the correct order. We mostly realize that’s not realistic, but the compelling part of the whole process is going for exact matches.

If one were gambling with such information, wagering on how the pros do at events, chances are the wager would be on exact matches rather than on being close enough. Fantasy disc golf, by contrast, is wagering that rostered players will be good enough to land in the upper part of the scoring system. Therefore the Semi Close designation is perhaps the more relevant way to assess and track our own accuracy. And the (rare) weeks with no gray, or no red and gray, boxes in the Place column are always received as a positive sign for the accuracy of our predictions.

We began with a target of 20% accuracy, or better, for each division on Spot on (yellow) predictions. Through mid to late 2024, we hit that target a half dozen times on FPO event rankings, and have yet to hit it on MPO predictions. Before the midpoint of 2024, we added an additional target of 55% in FPO, or 45% in MPO, on Semi Close or better (green and yellow boxes). As of the last update, there are no other published targets we are aiming for.
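Checking a target like this is just the share of picks landing in a band. A quick sketch for the MPO Spot on target (the picks here are made-up examples, not real results):

```python
# Share of picks that landed "Spot on" (within 2 spots for MPO),
# to compare against the 20% target. Picks are hypothetical.
picks = [(1, 1), (2, 5), (3, 3), (4, 20), (5, 6)]  # (predicted, actual)
spot_on = sum(abs(pred - actual) <= 2 for pred, actual in picks)
rate = spot_on / len(picks)
print(f"{rate:.0%}")  # 3 of 5 picks within 2 spots -> 60%
```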

Last Updated: Mid Aug, 2024

The latest news in our Event Rankings is that we are considering lowering MPO from 60 predicted players to as few as 45. FPO is likely to remain at 30 players listed, as the standard for that division. We began 2024 with 75 MPO players listed for each tournament, and went to 60 to expedite the process. Then, in the summer of 2024, we asked ChatGPT ‘what are the chances of predicting 60 players, out of say 120, in exact order, including ties’, and after 15 seconds it spit out a number with enough zeroes to indicate an astronomically low chance. We then asked what the chances are with the same setup but picking 1 through 10 in order, and that was a mere 1 in 3.95 quintillion. We hope it was hallucinating. Our best in either division is 4 exact matches in any tournament in 2024. This feels embarrassingly low, though we are learning that may be par for the course.
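For what it’s worth, the simplest model of that second question can be computed directly. Assuming no ties and every ordering of a 120-player field equally likely (the field size comes from the question as quoted; real fields are far from uniform-random), the count of possible top 10 orderings is a straight permutation:

```python
import math

# Naive odds of naming the top 10 finishers of a 120-player field in
# exact order, assuming no ties and all orderings equally likely.
# The 120-player field size is the hypothetical from our question.
orderings = math.perm(120, 10)  # 120 * 119 * ... * 111
print(f"1 in {orderings:.3e}")  # roughly 1 in 4.2e20
```

That naive figure is even longer odds than the quintillion-scale number we were quoted, so whatever assumptions differ, the takeaway stands: exact-order predictions are astronomically unlikely under random guessing, and any skill shows up as beating those odds.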

We are aware of a few factors and nuances that impact accuracy, which we may share at a later time. We encourage discussion of this process of Event Rankings, or of predictions for pro disc golf events based on data, with the data doing the talking. Much of what is on this Guide page used to live on the Event Ranking page itself, which became cumbersome. This is all seemingly uncharted territory, and sharing insights along the way is our approach. Making the predictions, then finding out how they end up, is very compelling, partly frustrating, and mostly fun.
