Strength in Community – The Gestalt

This has been an interesting and very different kind of summer for me. It’s had its ups and its downs, as any decent length of time typically does, but more than anything, this summer has been something of a painful enlightenment.

It’s certainly been a summer of firsts! I’ve moved into an apartment in Cookeville, since I’m working here over the summer, so there have been lots of exciting first experiences in relation to that! First time being entirely dependent on my own cooking for one, and further, first time discovering that when buying chicken, if the package does not explicitly say “boneless, skinless”, this implies that the chicken does, in fact, have both skin and bones. (Oops…)

It’s also the first time I’ve ever lived alone, since my roommates aren’t moving in until fall semester. As someone who really REALLY doesn’t like being alone, and tends to feel lonely very easily, this aspect kind of sucked. Even worse, this unfortunately coincided with my developing both cubital tunnel and briefly carpal tunnel syndrome. (Conditions where nerves, perpetually inflamed from trauma or overuse and without enough room, cause pain and tingling/numbness in the hands and forearms.) Normally if I’m feeling lonely, I bury myself in my projects, and focusing on building stuff makes me feel better. For a solid month and a half, I couldn’t type or use my computer outside of work, for fear of causing any kind of permanent damage to my hands. I desperately tried finding ways to distract myself from both pain and loneliness, but this was pretty difficult given that a small part of my brain was constantly panicking, wondering if I wouldn’t be able to type again.

I don’t mention this as some sort of pity plea, or anything like that, because it was actually a rather valuable experience, all in all. For one, for the first time in my life, I started an exercise routine that I’ve actually stuck to for longer than a week! Moreover, without the ability to distract myself, many long walks were taken so I could think about and seriously evaluate myself and how I felt and thought about things.

So enlightenment: “the state of being freed from ignorance and misinformation”, “based on full comprehension of the problems involved.” I’m in many ways an extrovert who’s very much not social (I’m weird, I know…if it seems like those should be antonyms, I’m using the word extrovert more to mean that I much prefer to be with people than not. I don’t “recharge” with “me time”; I tend instead to derive energy from being around others. I just also happen to be particularly bad at socializing, which is an unfortunate combination…) Yet, despite this, and despite having known for a very long time that I don’t like being alone, I’ve apparently never given enough active thought to the idea that the opposite of going all lone wolf is investing in and being part of a community.

I’ve had this thing for as long as I can remember where I’ve held something of a delusion that if I stuck to myself and worked on building lots of really cool impressive things all on my own, I would be better liked, better respected, and better accepted into groups. (Note: that’s the misinformation and ignorance part.) Over the more recent years, I’ve started recognizing this as a delusion: not only is it not particularly true, it’s a rather unhealthy mindset to live in, both emotionally and mentally, and all this REALLY hit home over the last couple months.

All this to say: I now fully recognize that being part of a strong community is extremely important to me, and as such, I want to take part in and help build said strong community.

Coincidentally, what with the inordinate amounts of free time for reading I had this summer, I read (and am reading, and unfortunately now have to wait days at a time for the next chapter to come out…) Twig! Those of you who have read at least part of it will likely immediately understand why I bring this up and why it’s relevant, and believe me when I say it very much cemented and drove home all of the above points! (For those who haven’t read it, don’t worry, there aren’t any spoilers below, and I highly recommend reading it!)

So a point that Twig repeatedly and effectively hammers on is the idea of a “gestalt.” A gestalt is loosely defined as “an organized whole that is perceived as more than the sum of its parts.”

Twig directly and indirectly refers to this everywhere:

“The original plan was for each of us to have a role, a specific set of talents, and for us to be able to address any problem. A gestalt.” (2.05)

“Just like with any problem, when things start going south, we’ll approach it from our individual angles, we’ll support each other’s strengths, and shore up each other’s weaknesses.” (3.08)

“Different is good. Look at how the Lambs work. They are stronger because they’re all different. Everyone has things they’re good at and things they’re bad at and we make up for each other’s weaknesses.” (13.05)

“We unify, we cover each other’s weaknesses and magnify each other’s strengths.” (17.15)

A gestalt, then, in my opinion (and seemingly John McCrae’s as well), is effectively the epitome of what it means to be a strong community.

I found an article a few weeks ago from a group of people in the rationality community, related to all this, that’s definitely worth a read: Becoming Stronger Together. Briefly summarized, it’s essentially a real-life attempt at a gestalt: a group of ~10 people who formed a community focused on self-improvement and helping everyone work together to grow both individually and collectively. They openly communicated with and supported each other in everything from long-term endeavors to their daily lives. They acted as a support group for each other, a collection of trusted friends to talk to when life got rough. It was a group that inspired and encouraged good habits and overall life improvement.

Part of Ben’s original intent behind starting AWDE was to form a gestalt. After discussing the above article, we both agree that it could be really beneficial to try working towards a strong self-improvement community, similar in model to the one in the article. (Though perhaps to a more moderate degree.)

As such, there are a couple of interesting things we’d like to try doing!

The first is our new twitch voice chat server. The goal for this is to be an online space to just hang out at, whether you’re just idly browsing the interwebs and want some company, or you’re working on something and want help, encouragement, or motivation (or potentially distraction, depending on the type of work!) Our hope is that we all use it enough that whenever you pop on, there’s likely to be at least one other of us there to talk to and chill with.

The other is we’re reviving the old Google Groups forum! (Some of you may remember the, ah, INTERESTING, nature of some of the posts populating those particular annals of the internet.) In particular, we’re looking to use it to encourage a deeper level of discussion in the form of dialectics. Dialectics is a really old concept (think Socrates) that entails discussing different viewpoints/opinions/ideas on a subject with the sole goal of discovering the truth. The idea here is to accomplish what rational debates should be like, but so rarely are: it’s not meant to be a heated defense of viewpoints to which one is emotionally attached, but rather to have an environment where it’s encouraged to freely and openly explore and talk about ideas, without fear of having one’s head chewed off for thinking about something a little differently. An environment where there’s a moderate emotional dissociation from said ideas, and it’s totally okay to be wrong. (In traditional dialectics, it was considered honorable to bow and admit when you were wrong.) Again, the idea behind dialectics is not to prove anyone wrong, merely to collectively discover the truth via evidence and well thought-out discussion.

Some examples of interesting subjects we might start off with, if everyone’s open to them: the ethics of eugenics, universal basic income, and transhumanism.

The way we’d like this to work, we’ll make initial posts on Google Groups describing each subject/problem/question, and perhaps provide related links for some additional information on the topic. From there, everyone is free to (and encouraged to!) weigh in and comment with thought-out arguments/responses/opinions on the matter (evidence and sources are of course encouraged; this is in part meant to be a critical-thinking type thing.) Emails are sent out by Google Groups as comments are added, so you should be able to see as the discussion unfolds. Ideally, once any type of mutual group conclusion has been reached, we’d also like to make a little paper write-up of the group’s best ideas/thoughts put forth, and publish it here on awdefy! (I’ve already volunteered to do the write-up bit if no one else wants to.) Also, if anyone remembers the cookie thing, (cookie points for participating in discussions, that get converted into actual cookies at in-person meetings) there’s talk about potentially bringing this back!

As always, thoughts and questions are welcomed in the comments below!
– WildfireXIII

Thoughts on the Game Jam

So last week’s 3-day game hackathon was quite fun! Even though our final result didn’t quite match our initial expectations, this in itself was to be expected. In review, I just wanted to leave some thoughts on how it went and what we might want to consider for future hackathons.

1. We certainly suffered from Hofstadter’s Law! (No surprise here.)

2. From the technical side of things, I believe our biggest issue was really just the engine we were trying to use. We hadn’t ever really used it before, and some of the…er…interesting…design decisions they went with (such as profane variable names and error messages…and their abuse of the word “scene”…) caused the majority of our initial time to be spent frustratedly fighting the engine tooth and nail to try and get it to do what we needed. (Retrospective kudos to Ed for having tried to convince us to use Game Maker instead!)

3. Kudos to Chris for letting us work at his house for a good 6 hours longer than he was expecting on our second day!

4. Kudos to everyone for their delicious food contributions!

5. I think the overall design ideas we had for the game were excellent. It has the makings for a good (and very unique) game, and I think if we continue to periodically work on it, it’s gonna be pretty cool!

I think the biggest thing we should consider for next time is that it’s really important for the developers to already know the tools we’ll be using. I have sufficiently discovered that attempting to correctly learn and utilize an entire engine within 3 days is not feasible! This doesn’t mean we should already have content ideas going into the hackathon, but we should really determine potential tools to use beforehand (and learn them if they’re new), and then when we come up with ideas at the start, they can be filtered to what we can do using the tools that we know.

Happy New Year, everyone!

An Intuitive and Mathematical Exploration of the Monty Hall Problem

The Monty Hall Problem is a little probability challenge with a rather un-intuitive answer, and it was something I struggled with for quite some time. I knew what the answer was supposed to be, but it took quite a bit of mental wrestling and deriving for me to actually understand why the answer is what it is!

One of the issues I encountered whilst I searched the web for good explanations was that very few places contained a complete mathematical proof from start to finish…steps were skipped/assumed, parts were left out or just not explained well, and overall it was exceedingly difficult for me to draw a complete picture.

Therefore, my goal here is to present both the intuitive explanation I found the most enlightening, and as complete a mathematical proof as I can. (Without going the route of Mr. Russell and attempting to re-derive all of mathematics, which, although truly marvelous, this blog post is too small to contain.)

“Intuitive” Explanation

So! The Monty Hall Problem! For those who don’t know, the premise of this problem is that you’re on a game show with three doors. Behind two of the doors are goats, and behind one is a brand new car. You first select a door, and then Monty, the game show host, will open one of the doors you didn’t select that has a goat behind it. You’re now given an option: do you switch to the other unopened door or stick with your original choice? (For further perusal, the Wikipedia page on this problem is long and insightful.)

In case it isn’t obvious, (or if you have an interesting set of priorities) we will assume that it is in fact your preference to win the new car as opposed to a goat. Really, even if you would prefer the goat, selling a new car gets you sufficient funding with which you can obtain many MORE goats, so by expected goat value, winning the car is still the optimal outcome.

The most often offered “simple solution” for this problem is to show the three cases where the car is behind door 1, door 2, and door 3, and see that you win in one case if you stick with your original door and in two cases if you switch. I’m electing to NOT use this as my intuitive explanation, because while it’s understandable in this context, it makes the whole thing more confusing when you encounter variations of the problem. (Or at least it did for me!) My reasoning is that this solution only looks at the surface mechanics of the problem, and since you gain no real insight into what’s going on behind the scenes, it’s easy to end up incorrectly using the same results on problems that are similar but have a different twist.

To preface my personal favorite explanation, a little bit about probability:

One of the ways I now enjoy looking at probabilities is with parallel universes! If an event has a .5 probability of happening (said in other ways: 50% chance, 50-50 odds, happens 1/2 of the time) one can assume that if you take 10 parallel universes, on average, the event will occur in 5 of them, and not occur in the other 5. This is a useful illustration to use, because you can quantify events or determine probabilities of those different events based on the number of worlds in which they occur.
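To make the world-counting idea concrete, here’s a tiny Python sketch (my own illustration, not part of the original explanation; the variable names are mine):

```python
# Ten hypothetical parallel worlds for an event with a .5 probability:
# on average, the event occurs in 5 of them and doesn't in the other 5.
worlds = [True, False] * 5  # 10 worlds; the event occurs in 5

# Recover the probability by counting the worlds where the event occurs.
probability = sum(worlds) / len(worlds)
print(probability)  # 0.5
```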

Now to try the game!

For the sake of an example, let’s suppose we choose door 1. Monty reveals the goat behind door 3. Is it more likely that the car is behind door 1 or door 2? (Or do they have the same odds/it doesn’t matter whether we switch or stay?)

To figure it out, let’s make some parallel worlds. In all of the worlds we will look at, we have made the initial choice of door 1, but Monty has NOT revealed a goat yet. (This is known as a “prior.” More on this later.) Now initially, without any extra information or evidence, we assume there’s an equal chance that the car is behind any particular door. There are three doors, so an equal probability for each is \(\frac{1}{3}\). There’s a \(\frac{1}{3}\) chance the car is behind door 1, a \(\frac{1}{3}\) chance it’s behind door 2, etc.

To reflect this, let’s make 3 groups of worlds. In the first group are 10 worlds in which the car is behind door 1, the second group has 10 worlds where the car is behind 2, and the last consists of 10 worlds where the car is behind door 3.


If it feels weird to be using parallel universes to solve a math problem, we can easily verify that this many-worlds-model fits our expectations so far: in 10 out of 30 of all the worlds the car is behind door 1, which as described above makes a \(\frac{1}{3}\) probability, and the same applies for door 2 and door 3.

Let’s hit the play button for each world and see what Monty does. Keep in mind that he cannot open the door you initially chose, and he must reveal a goat.

Looking first at group 1, the worlds where the car is behind door 1, Monty has the option of revealing either door 2 or door 3, since they both have goats. We’ll assume there’s an equal chance (or .5 probability) he’ll open either of those, so in half of the group 1 worlds, he opens door 2, and in the other half he opens door 3.

Looking at group 2, Monty MUST open door 3. He can’t open the door you chose (door 1) and the car is behind door 2. In all 10 of these worlds, he opens door 3.

In group 3, Monty must open door 2 for the same reasoning as in group 2. The car is behind door 3, and you chose door 1, so in all 10 worlds he opens door 2.

Our many-worlds-model looks like this now: (the numbers represent which door Monty opens.)


Coming back to our real world now, we have an extra piece of information: we KNOW that Monty opens door 3. We have evidence that narrows down which of our many worlds are possible. (Any worlds where he opened door 2 no longer make sense for evaluating the current situation, since we know he didn’t actually do that.)

So let’s regroup all of our worlds. We remove all of the ones in which he opens door 2 and see what we have left:


These are interesting results! In 10 of the 15 worlds present, the car was behind door 2, and in 5 of the 15, the car was behind door 1. If we turn these into probabilities, that means there’s a \(\frac{2}{3}\) probability we get the car if we switch to door 2, and a \(\frac{1}{3}\) probability we get it if we stick with our initial choice. The best strategy is to always switch to the other door!
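If you’d like to check the \(\frac{2}{3}\) result empirically rather than by counting worlds, here’s a quick Monte Carlo sketch in Python (the `play` helper and its parameters are my own naming, purely for illustration):

```python
import random

def play(switch: bool, trials: int = 100_000) -> float:
    """Play many Monty Hall games and return the fraction of them won."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)      # door hiding the car
        choice = random.randrange(3)   # contestant's initial pick
        # Monty opens a goat door that the contestant didn't pick
        opened = next(d for d in range(3) if d != choice and d != car)
        if switch:
            # move to the one remaining unopened door
            choice = next(d for d in range(3) if d != choice and d != opened)
        wins += (choice == car)
    return wins / trials

print(play(switch=False))  # hovers near 1/3
print(play(switch=True))   # hovers near 2/3
```

With enough trials, staying wins about a third of the time and switching about two thirds, matching the world-counting above.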

Say What?!?

For me personally, this was something of an “uhh…wait, what?” moment, when I was attempting to prove it to myself.

There’s a fancy (and really important) probability theory concept called “conditional probability.” A conditional probability is where we want to know the probability of a specific event given that we know something else. (Given some sort of evidence, in other words.)

The two probabilities above are examples of this. There’s a \(\frac{2}{3}\) probability the car is behind door 2 given that Monty opened door 3, and a \(\frac{1}{3}\) probability the car is behind door 1 given that Monty opened door 3.

The reason that all of this works is because conditional probabilities are in part based on their “reversed” conditional probabilities. (This is part of Bayes’ theorem, another fancy probability theory concept I’ll explain below.) This means to determine the probability of a hypothesis given some evidence, you have to look at the probability of the evidence given that the hypothesis is correct. (Look at it the other way around, in other words.)

In the Monty Hall problem, this is done by comparing the probability that Monty would open door 3 (our evidence) given that the car is behind door 1 (one possible hypothesis) with the probability that he would open door 3 (our evidence) given that the car is behind door 2 (the other possible hypothesis.) The first “reversed” conditional probability is \(\frac{1}{2}\), since he can open either door 2 or 3, and the second is just \(1\), because he MUST open door 3.

This is a lot of words to say that given the fact he opened door 3, it’s more likely that the car is behind door 2 than door 1! (The many-worlds deal just helps visualize this, because it perfectly represents how a conditional probability “narrows down” the state space you’re looking at.)

Something you hear thrown about quite a bit when people are discussing Bayes’ theorem related things is this concept of information being added or evidence “updating” prior probabilities. A prior probability is the probability without any evidence applied. In our Monty Hall example, the prior probability that the car is behind door 2 is \(\frac{1}{3}\). So what causes the probabilities to update? Why does door 2 change and not door 1? Put another way, why does Monty choosing door 3 add information about door 2 but not door 1?

You initially chose door 1. Monty is NEVER going to open door 1, because by the problem statement, he opens one of the doors you didn’t select. The fact that Monty decided to open door 3 instead of door 1 doesn’t tell you anything about door 1, because you knew he wasn’t going to open it anyway. No information is added about door 1, so the probability doesn’t update.

The fact that Monty chose door 3 instead of door 2 does give you evidence about door 2. Monty potentially could have opened either door, but he specifically opened door 3. That means there’s a chance he had no choice but to open door 3, so it tells you a little bit about door 2. The fact that Monty didn’t choose door 2 increases (updates) the probability that the car is behind door 2.

The Cool-Math-Notation Part

Now for the fun stuff, the derivation! If you don’t derive pleasure from deriving math, you’re welcome to leave, but this is sort of the awesome formula stuff tl;dr section…now that you have a little bit of background on probability theory, this is significantly simpler to represent mathematically than it is using the ~1600 words and several diagrams in the explanation above.

Just a bit of notation explanation for non-statistically inclined people:

  • \(X\) – Some event that we are choosing to represent with the letter \(X\) (after all, verbosity is for the weak! Or something…)
  • \(P(X)\) – The probability that event \(X\) occurs
  • \(P(X|Y)\) – Conditional probability! This is the probability that event \(X\) occurred given that event \(Y\) occurred (\(Y\) is like our “evidence”)
  • \(P(X \cap Y)\) – The probability that both event \(X\) AND \(Y\) occurred. This is distinct from the conditional probability, as explained below.

The difference between conditional probability and the “and” intersection (the funky \(\cap\) symbol) is best explained by returning to the many-worlds-model-thing. The probability of the intersection of two events is found by taking the number of worlds in which both of the events occur out of all possible worlds. (So the intersection of the car’s existence behind door 1 and Monty choosing door 2 would be 5 out of 30. 5 worlds where both events occur.) The conditional probability first narrows down the number of possible worlds we’re looking at (to just worlds where event \(Y\) occurred), and then counts the number of worlds in which the event \(X\) occurs. (So the conditional probability of the car’s existence behind door 1 given Monty chose door 3 is 5 out of 15. 15 worlds where \(Y\) occurs, and of those 15, \(X\) occurs in 5 of them. It’s a subtle but very important distinction.)
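The counts in that paragraph can be reproduced directly by enumerating the 30-world model in Python (a sketch; the way I lay out the world list is my own):

```python
# Build the 30-world model: you chose door 1, 10 worlds per car position.
worlds = []
for car in (1, 2, 3):
    for i in range(10):
        if car == 1:
            opened = 2 if i < 5 else 3      # Monty picks freely: half open 2, half open 3
        else:
            opened = 2 if car == 3 else 3   # Monty is forced to open the other goat door
        worlds.append((car, opened))

# Intersection: worlds where the car is behind door 1 AND Monty opens door 2,
# out of ALL worlds.
p_and = sum(1 for c, o in worlds if c == 1 and o == 2) / len(worlds)   # 5/30

# Conditional: first narrow down to the worlds where Monty opens door 3...
narrowed = [c for c, o in worlds if o == 3]                            # 15 worlds
# ...then count the ones where the car is behind door 1.
p_cond = sum(1 for c in narrowed if c == 1) / len(narrowed)            # 5/15
```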

Interestingly enough, conditional probability is mathematically defined using the intersection:
\(P(X|Y) = \frac{P(X \cap Y)}{P(Y)}\)

If you think about it, this actually makes sense, because we’re looking at all the times both \(X\) and \(Y\) occur out of the times \(Y\) ACTUALLY occurs. (That’s the \(P(Y)\) on bottom.) The denominator narrows down our possible worlds, and our intersection on top selects just from those worlds.

Tying in what I said before about Bayes’ theorem and reverse conditional probabilities, Bayes’ theorem is a restatement of conditional probability in terms of the reverse conditional probability because of magical algebraic manipulation:

\(P(X|Y) = \frac{P(X \cap Y)}{P(Y)}\)
\(P(X \cap Y) = P(X|Y)*P(Y)\)

\(P(Y|X) = \frac{P(X \cap Y)}{P(X)}\) (Note that \(P(X \cap Y)\) is the same thing as \(P(Y \cap X)\))
\(P(X \cap Y) = P(Y|X)*P(X)\)

So by substitution:
\(P(X|Y)*P(Y) = P(Y|X)*P(X)\)

And finally we arrive at Bayes’ theorem:
\(P(X|Y) = \frac{P(Y|X)*P(X)}{P(Y)}\)
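Written as code, Bayes’ theorem really is a one-liner (a sketch using Python’s exact fractions; the function name is my own):

```python
from fractions import Fraction

def bayes(p_y_given_x: Fraction, p_x: Fraction, p_y: Fraction) -> Fraction:
    """Bayes' theorem: P(X|Y) = P(Y|X) * P(X) / P(Y)."""
    return p_y_given_x * p_x / p_y

# Sanity check with easy numbers: when P(Y|X) equals P(Y), the evidence Y
# is uninformative and the posterior equals the prior.
print(bayes(Fraction(1, 2), Fraction(1, 3), Fraction(1, 2)))  # 1/3
```

Using `Fraction` keeps everything exact, so the results come out as clean thirds and halves instead of repeating decimals.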

On with the original derivation! Let’s define some events: (Note that all of these events assume that you chose door 1. I’m not explicitly representing this in the notation because it just makes things more complicated and doesn’t really add anything to the derivation. If you remain skeptical, or want a derivation that explicitly includes your choice in every event, see the proof on Wikipedia.)

  • \(C_1\) – The event that the car is behind door 1
  • \(C_2\) – The event that the car is behind door 2
  • \(D_3\) – The event that Monty reveals a goat behind door 3

The goal is to compare \(P(C_1|D_3)\) (the probability that the car is behind door 1 given that Monty reveals door 3) and \(P(C_2|D_3)\) (the probability that the car is behind door 2 given that Monty reveals door 3.) Whichever is higher is the door we should choose.

Let’s start with the event that the car is behind door 1 given that Monty revealed door 3:
\(P(C_1|D_3) = \frac{P(D_3|C_1)P(C_1)}{P(D_3)}\)

Breaking this formula down into all of its components:

  • \(P(D_3|C_1)\) – The “reverse” conditional probability that Monty reveals door 3 given that the car is behind door 1. He can freely reveal either door 2 or door 3 with equal probability in this situation, so it’s \(\frac{1}{2}\).
  • \(P(C_1)\) – The prior probability that the car is behind door 1. Initially, we assumed there’s an equal probability the car is behind any particular door, so this is \(\frac{1}{3}\).
  • \(P(D_3)\) – The prior probability that Monty reveals door 3. (Remembering that this is assuming we chose door 1) This is \(\frac{1}{2}\), which I’ll prove down below.

Plugging these probabilities back into the formula, we get:
\(P(C_1|D_3) = \frac{P(D_3|C_1)P(C_1)}{P(D_3)} = \frac{\frac{1}{2}*\frac{1}{3}}{\frac{1}{2}} = \frac{1}{3}\)

Now we do the same thing for the event that the car is behind door 2, given that Monty revealed door 3:

\(P(C_2|D_3) = \frac{P(D_3|C_2)P(C_2)}{P(D_3)}\)

Breaking it down again:

  • \(P(D_3|C_2)\) – The “reverse” conditional probability that Monty reveals door 3 given that the car is behind door 2. He MUST reveal door 3, so this is \(1\).
  • \(P(C_2)\) – The prior probability that the car is behind door 2. Same as before, we assume there’s an equal probability the car is behind any particular door, so this is \(\frac{1}{3}\).
  • \(P(D_3)\) – The prior probability that Monty reveals door 3. Also same as before, this is \(\frac{1}{2}\), which I prove down below.

We plug stuff back in to make fun things happen:
\(P(C_2|D_3) = \frac{P(D_3|C_2)P(C_2)}{P(D_3)} = \frac{1*\frac{1}{3}}{\frac{1}{2}} = \frac{2}{3}\)

QED. We have demonstrated via Bayes’ theorem that there’s a \(\frac{2}{3}\) chance the car is behind door 2, and a \(\frac{1}{3}\) chance it’s behind door 1! (Given that we chose door 1, and Monty reveals door 3.)

The only thing left is that strange little \(P(D_3)\) I said I would prove…

The probability of an event can be defined as the sum of the probabilities of that event intersected with each individual state in a set of disjoint states (mutually exclusive: only one can be true at a time) that span the state space. (This is known as the law of total probability.) Our “state space” consists of 3 possible states: the car is behind door 1 (\(C_1\)), the car is behind door 2 (\(C_2\)), and the car is behind door 3 (\(C_3\)). We know that these states are mutually exclusive because a car can’t be behind more than one door at once, so this definition applies. (Unless this is some weird quantum thing, but since that isn’t part of the problem description, we can safely ignore that possibility for now.)

We want to know, out of all the worlds, in how many worlds does \(D_3\) occur? So based on the above definition if we add up the probabilities of \(D_3\) AND \(C_1\), \(D_3\) AND \(C_2\), and \(D_3\) AND \(C_3\), that should be the list of all possible worlds where \(D_3\) occurs. If we wrote out that sentence, intersections and all:

\(P(D_3) = P(D_3 \cap C_1) + P(D_3 \cap C_2) + P(D_3 \cap C_3)\)

Offhand it might seem difficult to calculate all of those intersections, but fortunately due to that funky algebraic derivation of Bayes’ theorem business, we already know a way to rewrite intersections:
\(P(D_3) = P(D_3|C_1)*P(C_1) + P(D_3|C_2)*P(C_2) + P(D_3|C_3)*P(C_3)\)

This is much easier, since we’ve already covered most of those parts. Each of the priors \(P(C_1)\), \(P(C_2)\), etc. is \(\frac{1}{3}\), and for the reverse conditionals:

  • \(P(D_3|C_1)\) – If the car is behind door 1, Monty can choose either door 2 or door 3, so this is \(\frac{1}{2}\)
  • \(P(D_3|C_2)\) – If the car is behind door 2, Monty can ONLY choose door 3, so this is \(1\)
  • \(P(D_3|C_3)\) – If the car is behind door 3, Monty can’t open that door! That makes this probability \(0\)

Plugging this mess back in, we FINALLY get:
\(P(D_3) = \frac{1}{2}*\frac{1}{3} + 1*\frac{1}{3} + 0*\frac{1}{3} = \frac{1}{2}\)
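All of the numbers in the derivation fit in a few lines of exact arithmetic, if you’d like to double-check them (a sketch using Python’s `Fraction`; the variable names are my own):

```python
from fractions import Fraction

# Priors: equal chance the car is behind each door.
p_c1 = p_c2 = p_c3 = Fraction(1, 3)

# "Reverse" conditionals: P(D3|C1), P(D3|C2), P(D3|C3).
p_d3_c1 = Fraction(1, 2)   # free choice between doors 2 and 3
p_d3_c2 = Fraction(1)      # forced to open door 3
p_d3_c3 = Fraction(0)      # can't reveal the car

# Law-of-total-probability sum for P(D3):
p_d3 = p_d3_c1 * p_c1 + p_d3_c2 * p_c2 + p_d3_c3 * p_c3
print(p_d3)                    # 1/2

# The two posteriors via Bayes' theorem:
print(p_d3_c1 * p_c1 / p_d3)   # P(C1|D3) = 1/3
print(p_d3_c2 * p_c2 / p_d3)   # P(C2|D3) = 2/3
```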

Even cooler, this can totally be verified by the many-worlds-model above! If you look at the number of worlds in which Monty opens door 3, and the number in which he opens door 2, they’re the same. 15 out of 30 each, which is in fact a probability of \(\frac{1}{2}\)!

Math is freaking awesome. ^_^

Conclusion (Finally)

So…that was longer than I expected…

I have covered this problem as thoroughly as I know how to, so I hope it made at least a little sense! Maybe I’m just a slow learner, but it took every bit of all of that before I personally felt like I understood why it is the way it is…

If you already understood the solution, I hope to have offered you a potentially different way of looking at the problem. If you knew the answer but never really got why, I hope to have explained the proof sufficiently that you now understand! If you were just idly curious and wandered in on this having no prior (…prior…see what I did there?) exposure to this particular problem, I hope to have sparked your interest in this probability business…it’s pretty cool stuff!

If you have any further questions, would like something clarified, or see a mistake somewhere, feel free to let me know in the comments below!

– WildfireXIII

My First Prediction!

So since it seems to be a thing so far for the first prediction to be a self prediction, I would like to submit the following:

Prediction: Every week for the rest of this year 2016, I will write at least one blog post, alternating between posting on AWDE, and posting on my own website; 80%

What’s interesting is that this prediction game seems to offer a unique opportunity specifically for self predictions. The above is something I’ve been telling myself I would do for an exceptionally long time…but my expected payoff for writing a blog post was always pretty low; even though I personally gain from writing a post (I normally have to do some research/learn something, and then get a chance to put down and organize my thoughts about it) it takes me a lot of time to write, and very few people will see it and obtain any value from the work. So in the end, my own laziness and lack of time strictly dominate, and I never write anything.

However, this game pretty heavily influences that payoff matrix, for a couple of reasons:

  • There are now people who are publicly aware that I’m attempting this! I can be held accountable now. Expected payoff for writing a post goes up, because in the eyes of my peers, I followed through on something I said I would do. (I have a reputation to uphold here!)
  • Since this is a game, and it’s fun to try and do well in games, it’s in my best interests to make accurate predictions! An 80% prediction should be fulfilled 4 out of 5 times, so for my prediction to be accurate, I better be pretty likely to succeed! This likewise directly corresponds to higher values in my payoff matrix for writing a post.

I hereby call upon and invoke the powers of self-fulfilling prophecies!
Good luck to everyone in their predictions, past and future!
– WildfireXIII