Downtown Ricky Brown
July 5, 2024
The Gambler's Fallacy stems from people's tendency to believe that a sequence of random past events will affect future outcomes. The coin flip is the classic example.
Since there's a 50/50 chance of heads or tails, people who see heads come up several times in a row figure that tails is overdue. However, the coin has no memory. The probability remains 50/50 regardless of any streak, because the current flip does not depend on the flips that came before it.
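To see the coin's lack of memory in action, here's a quick Python simulation (the function name and parameters are my own, purely illustrative): flip a fair coin many times, then check the heads rate only on flips that immediately follow a run of five straight heads.

```python
import random

def prob_heads_after_streak(n_flips: int = 200_000, streak: int = 5) -> float:
    """Estimate P(heads) on flips that immediately follow `streak` heads in a row."""
    rng = random.Random(42)  # fixed seed so the result is reproducible
    flips = [rng.random() < 0.5 for _ in range(n_flips)]  # True = heads
    followers = [
        flips[i]
        for i in range(streak, n_flips)
        if all(flips[i - streak:i])  # the previous `streak` flips were all heads
    ]
    return sum(followers) / len(followers)

print(round(prob_heads_after_streak(), 3))  # hovers around 0.5
```

However long a streak you condition on, the follow-up heads rate stays near 50%, which is exactly what "no memory" means.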
The Monte Carlo Fallacy got its name from a 1913 roulette game at the Monte Carlo casino, where bettors started piling onto red after 15 consecutive blacks. Surely red was overdue, right? Yet many punters went broke as black continued to come up 26 times in a row. These bettors demonstrated the irrational belief that one spin somehow influences the next, which is not ideal for the pocketbook.
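For perspective, a few lines of Python show just how unlikely that 1913 run was, and why it still tells you nothing about the next spin (assuming a single-zero European wheel, which is my assumption, not something the story specifies):

```python
# European roulette: 18 black pockets out of 37 (18 red, 1 green zero).
P_BLACK = 18 / 37

# Probability of a pre-specified run of 26 blacks in a row:
p_streak = P_BLACK ** 26
print(f"P(26 blacks in a row) = 1 in {1 / p_streak:,.0f}")  # over a hundred million to one

# But the probability of black on the NEXT spin is unchanged by the streak:
p_next = P_BLACK
print(f"P(black on spin 27) = {p_next:.4f}")  # still 18/37
```

The streak is astronomically rare before it happens, but once those 26 spins are in the books, spin 27 is the same 18-in-37 proposition it always was.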
Survivorship Bias occurs when someone focuses too heavily on the entities that passed a selection process while ignoring the ones that failed. It can lead to overly optimistic conclusions because it overlooks the many failures in the same data sample.
In finance, dubious mutual fund companies tout their successful funds while excluding the losers they closed or merged into other funds to polish their track record.
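A toy simulation makes the mutual-fund trick concrete (the return figures and fund counts here are invented purely for illustration): generate a batch of fund returns, quietly "close" the worst third, and see how much better the survivors look.

```python
import random

def survivor_vs_true_mean(n_funds: int = 10_000, seed: int = 0):
    """Simulate annual fund returns, 'close' the bottom third,
    and compare the survivors' average to the full average."""
    rng = random.Random(seed)
    # hypothetical funds: mean return 5%, standard deviation 15%
    returns = [rng.gauss(0.05, 0.15) for _ in range(n_funds)]
    returns.sort()
    survivors = returns[n_funds // 3:]  # the closed funds vanish from the record
    true_mean = sum(returns) / len(returns)
    survivor_mean = sum(survivors) / len(survivors)
    return true_mean, survivor_mean

true_mean, survivor_mean = survivor_vs_true_mean()
print(f"all funds: {true_mean:.1%}, survivors only: {survivor_mean:.1%}")
```

Nothing about the surviving funds' skill changed; simply deleting the failures from the sample inflates the average by several percentage points.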
However, one of the best-known examples of Survivorship Bias comes from WWII, when the mathematician Abraham Wald solved the problem of where to armor-plate Allied bombers that were suffering heavy losses.
The problem was that engineers examined only the bullet holes in returning aircraft and mistakenly reinforced the areas with the highest concentration of holes. But the clusters of bullet holes on returning planes marked the sections where an aircraft could take fire and still make it home.
The planes the engineers really needed to examine were the ones that had been shot down, but that data sample wasn't retrievable since those aircraft crashed in Germany. Wald's advice seemed counterintuitive: put the extra armor plating where the returning planes had no bullet holes. Fortunately, the military brass listened, and successful results quickly followed.
So, the biggest problem with Survivorship Bias is that by ignoring failures, you can overestimate your chances of success. For example, you'll hear people argue that Steve Jobs, Bill Gates, and Mark Zuckerberg dropped out of prominent universities and became mega-successful. Therefore, staying in college is stupid.
Okay, but what about all the other people who dropped out of Harvard or Stanford? What are they doing now? Elizabeth Holmes dropped out of Stanford and started that hoax of a company called Theranos, and she was sentenced to over ten years in prison! Yet she attracted plenty of deep-pocketed investors in the first place partly because she wore black turtlenecks and had dropped out of Stanford, just like Steve Jobs.
The lesson: be careful not to focus solely on the success stories in your sample.
The Texas Sharpshooter Fallacy describes humans' tendency to see patterns where none exist. The name comes from a joke about a Texan who scattered gunfire across the side of a barn and then painted a bull's-eye around the tightest cluster of bullet holes. "Look y'all, I'm a sharpshooter!"
If you stare at randomly generated points on a scatter plot long enough, you will start to see clusters that give the impression an underlying cause is driving the pattern. Thus, the Texas Sharpshooter Fallacy typically arises when one fails to form a hypothesis before gathering and examining the data.
It's worth pausing on this. Without a hypothesis formed beforehand, the Texas Sharpshooter scans the clusters on a graph and assigns causes to the patterns after the fact. So, form your hypothesis before examining your data to avoid this fallacy.
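Here's a little sketch of the sharpshooter at work (grid size and point counts are arbitrary choices of mine): scatter purely random points, then go hunting for the busiest cell after the fact. The post-hoc "bull's-eye" always looks impressive next to the average cell.

```python
import random
from collections import Counter

def densest_cell(n_points: int = 500, grid: int = 10, seed: int = 1):
    """Scatter uniform random points on a grid, then (after looking!)
    find the single busiest cell -- the painted-on bull's-eye."""
    rng = random.Random(seed)
    cells = Counter(
        (int(rng.random() * grid), int(rng.random() * grid))
        for _ in range(n_points)
    )
    expected = n_points / grid ** 2          # average points per cell
    count = cells.most_common(1)[0][1]       # post-hoc pick: max over all cells
    return expected, count

expected, count = densest_cell()
print(f"average per cell: {expected}, busiest cell: {count}")
```

The data is pure noise by construction, yet the maximum over a hundred cells is always well above the average. Picking your target after the shots land guarantees a "pattern."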
The Clustering Illusion is the tendency to see clusters in random data as non-random, and it can lead one to commit the Texas Sharpshooter Fallacy.
Having a "Hot Hand" refers to being on a streak of success. Conventional wisdom holds that a player on a hot streak will hit a higher percentage of their shots than usual over a short stretch.
For years, researchers dismissed the Hot Hand as fallacious, contending that one shot does not affect the next, just as in the Gambler's Fallacy. The fallacy got its name from a study in the 1980s suggesting that a basketball player who makes a shot is no more likely to make the next one because of that initial success.
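In the spirit of that 1980s study, here's a minimal sketch (my own simulation, not the researchers' data): simulate a shooter with no hot hand at all, then compare their make rate immediately after a make versus immediately after a miss.

```python
import random

def conditional_make_rates(n_shots: int = 500_000, p: float = 0.5, seed: int = 7):
    """For a simulated shooter whose makes are independent coin flips,
    compare the make rate after a made shot vs. after a missed shot."""
    rng = random.Random(seed)
    shots = [rng.random() < p for _ in range(n_shots)]  # True = made shot
    after_make = [shots[i] for i in range(1, n_shots) if shots[i - 1]]
    after_miss = [shots[i] for i in range(1, n_shots) if not shots[i - 1]]
    return (sum(after_make) / len(after_make),
            sum(after_miss) / len(after_miss))

rate_after_make, rate_after_miss = conditional_make_rates()
print(f"after make: {rate_after_make:.3f}, after miss: {rate_after_miss:.3f}")
```

On independent shots the two rates come out essentially equal, which is roughly the pattern the study reported in real players and interpreted as evidence against the Hot Hand.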
As a competitive golfer, I've experienced the Hot Hand numerous times. The confidence boost from making a couple of putts in a row positively affected my state of mind and helped me hole the next one. So I've got to disagree with the pointy-headed intellectuals: IMHO, the Hot Hand phenomenon is real. However, the jury is still out among the experts, so apply this information to your betting strategy as you see fit.
Selection Bias can occur when you select data without proper randomization. If you intentionally pick non-random data, you can derive false conclusions. It's akin to cherry-picking, a form of suppressing evidence: selecting data that confirms your position while ignoring data that contradicts it. That's what media propagandists do, and it will hurt your sports betting profits if you do it too.
Here's an example: you believe that long drivers perform better at a specific PGA Tour event. To guard against this bias, be careful when selecting your data not to unconsciously give higher weight to the long drivers while ignoring the "short knocks."
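To see how a biased selection can manufacture exactly the conclusion you wanted, here's a hypothetical sketch (every number is invented): a field where driving distance has no effect on score at all, sampled fairly versus cherry-picked.

```python
import random

def fair_vs_cherry_picked(n: int = 20_000, seed: int = 3):
    """Hypothetical field where driving distance has NO real effect on score.
    A fair sample shows that; keeping only long drivers' best rounds doesn't."""
    rng = random.Random(seed)
    # (distance_yards, score): score is independent of distance by construction
    field = [(rng.gauss(295, 10), rng.gauss(71, 3)) for _ in range(n)]
    fair_mean = sum(s for _, s in field) / n
    long_scores = [s for d, s in field if d > 300]
    long_mean = sum(long_scores) / len(long_scores)
    # biased selection: only long drivers' sub-70 rounds make the cut
    picked = [s for d, s in field if d > 300 and s < 70]
    cherry_mean = sum(picked) / len(picked)
    return fair_mean, long_mean, cherry_mean

fair, long_mean, cherry = fair_vs_cherry_picked()
print(f"everyone: {fair:.1f}, long drivers: {long_mean:.1f}, cherry-picked: {cherry:.1f}")
```

Sampled fairly, the long drivers score the same as everyone else, because that's how the data was built. Keep only their good rounds, and suddenly "long drivers score three shots better," which is a conclusion you created, not one you found.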
Confirmation Bias is people's tendency to interpret information in a way that confirms their previously held beliefs. Someone may appear open-minded by looking at alternative ideas, when in actuality Confirmation Bias prods them to torture the data until it fits their preconceived notions.
Congruence Bias is a type of Confirmation Bias. It happens when people over-rely on their initial hypothesis while neglecting to test alternative hypotheses; in other words, they avoid testing anything that could disprove their initial belief.