145 Comments

Typo thread

"If Russia invades Ukraine, this person will win +58 points; if it doesn’t, they will win +32 points. Why does Manifold allow this?" - This should say "Metaculus".

I'm a recent prediction market convert, and they do things I really, really like. They teach people to forecast, they use the wisdom of crowds super efficiently, they tie consequences to being correct or incorrect, and they incentivize participation. It's like turning on a searchlight and aiming it into the future.

Question: Does anyone have a long term strategy on how to make this stick to decision makers? That’s my main point of curiosity. Is the hope that they just get so efficient they can’t be ignored? I’m sure this can make money but I had hoped something like this (I have my own weird scheme I’m super into just like I’m sure everyone here does) could be a civilization’s sense organ.

Right now it seems like the plan is to make really good eyeballs and figure out how to hook them up to the brain later? Is that right? Genuine curiosity.

What about Kalshi?

Ooooh I like the per-market loan system. They should at least try it and see what happens!

Hey there, I work for Metaculus and wanted to share my perspective on Scott's points about reputation and about how Metaculus incentivizes predictions. Tournaments have a different scoring mechanism than the rest of the platform, because there are cash prizes at stake. If someone is highly-ranked on a tournament leaderboard and wins prize money, it's because they outperformed other forecasters and contributed a lot of information with their forecasts.

I was thinking some about the issue with betting on conditional markets that may well never trigger, particularly in the context of Scott's "which book should I review" markets, or similar. And I think at least a partial solution is the following:

If there are multiple conditional markets whose conditions for triggering are mutually exclusive, you should be allowed to use the same dollar to bet in as many of those markets as you choose.

Feb 21, 2022·edited Feb 21, 2022

Absolute accuracy is usually represented by Brier scores, and Brier scores suck, because they don't use logarithms, so they can't appreciate the huge difference between 1% and 0.01%. I have an idea to construct a better formula. (https://en.wikipedia.org/wiki/Brier_score#Definition) Instead of just squaring (Ft-Ot), apply the transformation g(x) = -lg(1-x), where x = abs(Ft-Ot). So your penalty is ~zero if your probability estimate is close to correct, but your penalty goes to infinity as your confidence in the wrong outcome goes to 1.
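A minimal sketch of the comparison, assuming "lg" means log base 10 (the function names are mine, not from any scoring library):

```python
import math

def brier_penalty(forecast: float, outcome: int) -> float:
    """Standard Brier-style penalty: squared error between forecast and outcome."""
    return (forecast - outcome) ** 2

def log_penalty(forecast: float, outcome: int) -> float:
    """Proposed penalty: g(x) = -lg(1 - x), where x = |forecast - outcome|.
    Near zero when the forecast is close to correct; unbounded as
    confidence in the wrong outcome approaches 1."""
    x = abs(forecast - outcome)
    return -math.log10(1 - x)

# An event happens (outcome = 1). Brier barely distinguishes a 1% forecast
# from a 0.01% forecast, while the log penalty separates them clearly:
# brier_penalty(0.01, 1)  ~ 0.98    brier_penalty(0.0001, 1) ~ 1.00
# log_penalty(0.01, 1)    = 2.0     log_penalty(0.0001, 1)   ~ 4.0
```

This matches the complaint above: under Brier, being catastrophically overconfident in the wrong direction costs almost nothing extra, while the log version keeps punishing.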

Predictit is very negative-sum due to fees, and it works fine.

Positive sum sucks because it incentivizes spamming the most predictions without regard for accuracy.

Positive sum combined with bad formulas (see above) is what lets people get away with predicting 99% on questions that should be 96%. On a real money negative-sum market, you're not going to make much return by doing that. Predictit's incentives are such that 99.9% certainties often trade at 97 cents. I think negative-sum tends to under-estimate probabilities of 99% events, while positive-sum can over-estimate probabilities, but doesn't have to, if they fix the formulae (see above).

Now that regulators have given the green light to some real-money prediction markets in the U.S., do we think they'll start to incorporate that into policy decisions? I feel like a fully-regulated prediction market is the only sustainable way to create real, skin-in-the-game based information signals.

Alternatively/in addition to initial loans, is there a reason why these markets don't do some kind of dynamic margin depending on the implied probabilities of the market?

If I've bought 100 contracts at $0.90, then with ~90% probability I expect a $100 payout and a $10 profit. Given that my odds are so high, the market provider probably doesn't need me to put up $90 in collateral, right? At 'reasonable' 5x leverage I'd put up ~$20, at horrifying-crypto-exchange 50x leverage I'd put up $2.

I can see one issue, which is that if collateral requirements for long positions go down as the price goes up, then this makes it easier to go *even longer*; if the 'correct' price is lower, then the increased (or at least not-decreased) collateral requirements for short positions make it harder for the market to correct.
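The arithmetic in the comment above can be sketched as a simple rule (a hypothetical margin formula, not any exchange's actual one; leverage is just a cap chosen by the market provider):

```python
def required_collateral(price: float, contracts: int, max_leverage: float) -> float:
    """Hypothetical dynamic-margin rule for a long position: the full cost
    of the contracts, divided by the leverage the provider allows."""
    full_cost = price * contracts
    return full_cost / max_leverage

# 100 contracts at $0.90: full collateral would be $90.
# At 5x leverage that's $18; at 50x it's $1.80 --
# close to the commenter's rough figures of ~$20 and ~$2.
```

Note this is exactly the asymmetry flagged above: as the price rises, the long side's collateral shrinks, while the short side (max loss $1 minus price per contract) gets no such relief.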

One reason why your book review markets are showing probabilities significantly higher than 44% for getting 125 likes is that Yes bettors are incentivized to add more likes.

Anyone who bet Yes would very likely add a like to such a post, and maybe get friends and family to as well (or create fake accounts to add likes, if they really want to win play-money).

This is the problem with using metrics — they often stop being useful when you condition action on them.

One solution is to loop in human judgment: Will Scott believe that this book review was a relative success 1 month after the article was published?

Or, on a scale from 0 to 100, how valuable will Scott judge this book review to be one month after posting?

Or, if you want to be really free-form, you can use our new free-response markets: What will the reader reaction be to this book review? Users would then submit text answers and bet on which answer best describes how the book review was received, and then Scott would choose one winner (or multiple winners). This kind of market could give qualitative descriptions that binary and scalar markets cannot.

This is only very remotely on topic, but one of the reasons I did not participate in any of the book review markets is that I think "number of likes" is not a very good measure of reader satisfaction. I'd say that number of comments is a much better proxy.

Typo: If Russia invades Ukraine, this person will win +58 points; if it doesn’t, they will win +32 points. Why does *Manifold* --> *Metaculus* allow this? They want to incentivize people to forecast.

Feb 21, 2022·edited Feb 21, 2022

We are also still thinking about how to get better predictions for long term markets, where you note that incentives are not-so-good, like for Dwayne Johnson's presidential bid.

We talked to Robin Hanson today, and he suggested creating 3 parallel currencies which are used for short term (<1 month), medium term (1 month - 1 year) and long term (1 year+) questions. He says the shorter term currencies would be able to trade in the longer term markets, but not vice versa.
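As I understand the suggestion, the rule is simple enough to state in a few lines (a sketch; the tier names and cutoffs are mine, paraphrasing the comment):

```python
# Tiers ordered from shortest to longest horizon:
# "short" = <1 month, "medium" = 1 month to 1 year, "long" = 1 year+.
TIERS = {"short": 0, "medium": 1, "long": 2}

def can_trade(currency_tier: str, market_tier: str) -> bool:
    """Hanson's proposed rule: a shorter-term currency may be spent in
    longer-term markets, but not vice versa."""
    return TIERS[currency_tier] <= TIERS[market_tier]
```

So short-term play money stays fully liquid, while long-term winnings can only ever chase other long-term questions, which is what keeps the long-horizon leaderboard from being flooded by short-term profits.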

I quite like this solution and think it would work. It's another example of a zero-sum reputation solution.

One fairly obvious wrinkle to add to your loan strategy (which I love!) is that you can't transfer it without first paying off the loan. So if you get a M$10 loan and take a position, you can only make money by selling it for more than M$10. There would be some margin to be had there for folks who trolled around looking for questions and taking mispriced positions, but that would be highly useful! Also, you could fairly easily incorporate that into the ranking: A given user could have at least 3 distinct values by which they could be sorted in a leaderboard: current balance, current gain (abs | %age) over money put in, and a weird confidence interval-looking thing that showed how much they currently owe on loans and the value of their positions.

It's probably necessary, if you have this option, to make every user pay at least a little to get in; otherwise someone could make a bunch of sockpuppets, each of which would make a large number of stupid bets against their main, making it rich.

Regarding the leaderboard, I think it's important to remember that assessing a forecaster's performance based on a single number has the same problems as assessing an investor's performance based only on the amount of money made. You fall victim to a number of issues, like "did they make all of their money on a single high-payoff prediction that was mostly luck?" or "Were they good only for a brief period of time when there were markets on a specific political event?" or "are they a very high variance forecaster and they just happen to be on a lucky streak?"

You need more sophisticated analysis to answer how good a forecaster is: things like a Sharpe ratio of their profits, or a graph of their winnings over time to show when they were active or not, would be a start.

For less formal analysis, Kaggle's rating system (https://www.kaggle.com/progression) might provide a good starting point

> they solve this by actually reputationalizing play money profits, which works

It partially works. The problem is that profits are also a function of the money you have. Suppose I have $1 and I know for certain that a market currently at 1% odds will actually come true. At best, I can end up with ~$100. If I have $1000 to start with, maybe there isn't enough liquidity for me to wind up with $100,000, but maybe there's enough liquidity for me to wind up with $3000.

Buying more money allows you to multiply any winnings, so it's not a good judge of how good of a predictor you are. I think a better judge would be to normalize your profit by the amount you've bought, but that unfortunately takes away Manifold's business model.
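The normalization the commenter proposes could look something like this (a hypothetical metric, not Manifold's actual leaderboard formula):

```python
def leaderboard_score(profit: float, total_bought: float) -> float:
    """Normalize profit by the amount of play money purchased, so a trader
    who turned M$1 into M$100 outranks one who turned M$1000 into M$3000."""
    if total_bought <= 0:
        raise ValueError("total_bought must be positive")
    return profit / total_bought

# Small bankroll, big edge:  M$1 -> M$100  gives a score of 99.0.
# Big bankroll, same market: M$1000 -> M$3000 gives only 2.0.
```

Ranking by this ratio rewards edge rather than bankroll, which is precisely why (as noted above) it sits awkwardly with a business model built on selling the bankroll.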

On the use of positive sum markets to incentivize voting -- there's probably some ideal tradeoff point between the number of bids people make vs. the accuracy of each guess (i.e. where they stand on the Susan-to-Randy spectrum) which maximizes the amount of real information entering the market. I don't know how Metaculus calculates payouts, but there's presumably also some parameter (or set of parameters) controlling how positive-sum the markets tend to be. This obviously gives Metaculus the ability to tune how generous payouts are in order to optimize user behavior (assuming that people, on average, resemble rational actors enough to vote more readily when payouts are more generous and vice versa). Of note, they *wouldn't* have this ability if Metaculus bids were denominated in dollars or whatever, since then positive-sum markets would simply drive them broke. So perhaps there is a way in which play money markets can be epistemically superior after all.

Of course, this same flexibility could be joined to the benefits of a real-money market by having bids be made in play money which can be exchanged for real money afterwards, at some rate related to the generosity of the payout system in a way that keeps the market operators financially afloat.

It's worth pointing out that real-money markets are negative-sum (the market takes a vig).

Also, real-money markets need to get a certain amount of attention (a few thousand dollars) before they are making any sort of useful prediction, which heavily restricts the non-sports markets that are created.

There is far more money on a first round tennis match in an obscure tournament than on any but the biggest non-sporting markets.

This is partly because people like getting results quickly, and sports generate large numbers of results in a short period of time, where most non-sporting predictions require locking up money for long periods of time. In many cases, you have to get in relatively early, as a great many predictions become near-certainties (99-1 propositions) for quite a long time before they are finally resolved.

If you have to lock up money for a long time if you want to be making a meaningful prediction, then the ROI is much worse than sports - but the predictive power of the market would be much more valuable. I think this is a hard problem to solve; if you could earn a lot more money by studying the 30th-50th ranked tennis players and predicting their early round results when they play each other, then many smart superpredictors would be incentivised to do that rather than providing information useful to society.

I have a few observations about Manifold, which you may need to take with a grain of salt because I did manage to lose quite a bit of play money over the weekend.

Assuming I haven't misunderstood their Technical Guide (https://manifoldmarkets.notion.site/Technical-Guide-to-Manifold-Markets-b9b48a09ea1f45b88d991231171730c5), placing bets on Manifold is, in fact, (slightly) negative-sum. Of the bet pool, 4% goes to the market creator as a 'commission' and 1% is 'burned' as a 'platform fee'.

In addition to not wanting to tie up their money for long periods, another reason for people not to correct markets is elucidated here: https://kevin.zielnicki.com/2022/02/17/manifold/ - essentially, because your payout depends on the state of the market at resolution, not only on the state when you place your bid, you get less expected profit if the market moves in the correct direction as people get more information near when trading closes.

Sports books try to set a "line" so that half the bets are on one side and half on the other. Over time, the line changes to keep things that way, so it is a kind of prediction market. Since the book always takes a cut, the market is negative-sum for the bettors.

I am very interested to see how positive-sum systems resolve the issue of Randy-strategist bettors because of the implications for what I think is the most interesting model: positive-sum for-profit prediction markets. The idea is simple: by offering a game where the house always loses, you incentivize people to forecast. Your actual business model will be making money off of having more or less some knowledge of the future, as generated by the wisdom of crowds.

Feb 22, 2022·edited Feb 22, 2022

I'd love to hear why you think the long-term prediction issue you highlight here, which is also pretty apparent to anybody who's used a real-money or limited play-money prediction market, is solvable.

If a prediction market can't successfully predict if the Rock will be president in two years, how can we expect prediction markets to predict if a new virus will be an issue in two years, if a potential war will be considered a good idea in two years etc.? If the idea is that these markets are accurate due to incentivizing and rewarding correctness, how can real world decision makers lean on prediction markets for their predictive value if nobody bothers to show up due to the inherently low annual return?

I am very worried that people are so fond of a decision-making tool that by design seems capable of only extremely short-term thinking.

"Far from being a subsidy - money which it is easy for other people to get - this feels like smart money - money that other people should be scared to bet against. So how does this open the market at all?"

Scott, if you don't think that the market can beat your own prediction, why are you even betting in the first place? What is the point of asking the market for a prediction on your life events if you don't think they can outperform you?

(I agree they likely can't outperform you, but then again, I think that asking the market for predictions about your life events is stupid. You seem to think it is not stupid, so I don't understand your thought process here.)

So, what's your Metaculus score, Scott? Trying to figure out if I can trust you.

For the conditional prediction market on book review likes, it seems a bit like the problem with assassination markets where market participants can affect the outcome themselves: people can increase (but not decrease) the chance of any book passing the like threshold by just liking the post themselves or with sockpuppet accounts - 125 likes is not hard to achieve. That asymmetry seems like it could inflate probabilities a bit.

Or maybe Scott realized this and it's a 5head play to increase likes on his twitter posts.

Scott, I think there may be a fundamental disconnect in people's (or at least my) understandings of prediction markets, because you didn't mention what I think is the most obvious reason the book review predictions are all positive: people think that you're most likely to review the books that have the highest probability of getting the required number of likes. Thus, they see these not only as predictions, but also as an opportunity to pay you to review books. Honestly, when you first announced the predictions, I thought that was your intention, "get involved in prediction markets, and you might get to pick what book I review next!" Even if that wasn't your intention, the fact that the bet going one way or the other will reward people via a side channel rather than purely with the play money seems like it will always have a distorting effect, and will be unavoidable for prediction markets. It seems like the more that real life decisions are based on the results of prediction markets, the worse this distortion will become.

I feel I picked up a subtle hint in the last posts, so I'm gonna go for it: What is your Metaculus score?

>Manifold lets you buy their play money for real money, which in theory would destroy any reputational value. But they solve this by actually reputationalizing play money profits, which works:

Say that you earn 1% returns on your predictions. Your profits would still be 10x higher if you spent 10x more real money, so you had more play money to invest, right?

Meaning, this leaderboard is still pay to win, even though you can't win by being below average - unless I'm missing something?

Thank you for the article. I prefer how Metaculus awards points whether a statement resolves true or false. Part of being successful at predictions is to continue being around to predict, even if you were not on the money.

Why can't a play money market offer separate blocks of money for each quarter of market resolution times. So the "Dwayne Johnson for President" money would be in an Autumn 2024 block (which all users would get, like with every block) but which could only be spent on questions that resolve in Autumn 2024.

Then no one would worry about tying up this money because it could only be used for things that paid out in Autumn 2024 anyway, so you would just use it whenever a good market came along, in any year, provided it resolved in Autumn 2024.

And ideally you'd be able to sell again (if someone wanted to buy at your offer price) at any time, but this is not entirely necessary.

A few weeks ago you posted an article about 'why you suck', and while that was obviously hyperbolic I would point to 'Mantic Monday' as a reason I read you less and less. When I used to see an SSC article, I was excited – when I see an ACT article, I think 'I hope it's actually something interesting,' but usually it's something like this.

The blog has largely shifted from articles applying the rationalist perspective to wider issues to articles written exclusively for the rationalist subculture and its status symbols and obsessions. This is fine! It's your subculture after all and you're one of its most prominent members; there's no reason for you not to write what you care about. But for me, this article really is about its title: the mechanics of using play money to gain reputation within a subculture just isn't particularly interesting.

These Monday posts never teach me anything about Russia or Ukraine. Knowing the aggregate opinion, even if it were accurate, isn't interesting without a discussion of the underlying reasons for the predictions. It's the equivalent of writing 'stocks go down on Russia news,' then a paragraph on which companies went down in the Dow, then one on the Nikkei, then one on the FTSE, etc., etc. Interesting only for people playing that game, and at least they play with real money, albeit usually not their own.

I don't mean this as a criticism, and obviously this is interesting to others, but if that previous post was genuinely asking how your blog is seen to have changed, the proliferation of inside-baseball rationalist articles is the big reason why I check the blog but skip most content and don't pay to subscribe.

I'm thinking conditional prediction markets would probably be improved a lot if you could use the same money to bet on each of the conditions (whenever the conditions are mutually exclusive).

It's fascinating that your solutions for fixing what you see as errors in implementation, errors that allow the market to be perverted for unrelated individual motives, all rely on knowing what the prediction market *should* be predicting. Do you have sufficient faith in the result of fixing these errors that you'll then be able to accept, on pure faith in the mechanism, the predictions that *can't* be checked (by your instincts)? Because presumably, to have any real value, the prediction markets have to make predictions which can't be checked. How do you get to a place where you have sufficient faith in predictions that can't be checked that you bet important things -- real money, in large amounts, career and/or life choices? Do you just keep tweaking the system until it gives you results that you like for predictions that *can* be checked?

Thank you for the considered overview of different theoretical prediction markets, and a somewhat deeper examination of 2 specific ones.

However, you don't address the fundamental value proposition of prediction: how can a prediction market weed out junk? Ukraine invasion is an example put forward by Scott Alexander in another article on prediction markets; my response was that each and every single prediction in that space is crap because there is NO ONE who can possibly have any relevant information on the matter outside of Putin and Shoigu.

Even the supposed "intelligence experts" and the POTUS have been constantly predicting invasion only to be shown wrong.

I wonder greatly if this entire prediction market thing is an outgrowth of efficient market theory - the largely garbage macroeconomic meme that somehow anyone and everyone in a free market is fully informed on everything and makes the right decisions.

In my view, GIGO - and enabling betting doesn't change that.

If it's play money /now/, can they just dangle the reward of it maybe being tradable for something real in some unspecified future?

Let's assume – utterly unfairly – that Robert McIntyre is cheating. What might he be doing?

It looks like you get M$1000 free when you create a Manifold account. This means I could create a dozen or so sock-puppet accounts and have my main account bet against them to farm play money. I don't know how much you can make off a given position, but Robert has M$6719 profit which is a pretty small multiple of the joining bonus.

On Reddit, even back when karma wasn't valuable in real-money terms, upvote bot rings were endemic. Both Reddit and Manifold attract a large number of coders with time on their hands who like winning at things.

Am I missing something here? Does anyone with more Manifold experience know a reason this wouldn't work?

>so far, none of them actually produce any kind of a reputation. By this I mean something like: if I claim “I have an IQ of 160” or “I can bench press 300 lbs”, people might be impressed by me.

It is true that these systems do not produce social reputation, but this is a metric that doesn't really help us in designing a better reputation system. So, I think we should instead try to model "professional reputation", similar to the type of reputation that is vital for scientists. (But not too similar; we shouldn't copy the mistakes that led to the "publish or perish" or, in our case, the "quantity over quality" culture.) You note that the problem with markets is that they are not "strategy proof" ( https://en.wikipedia.org/wiki/Strategyproofness ), i.e. revealing your true prediction is not always the best strategy to gain the most points. However, markets can get away without this property because they are efficient to a certain degree. (I guess we could say e.g. Polymarket is Pareto efficient ( https://en.wikipedia.org/wiki/Pareto_efficiency ) in the sense that if we consider the outcomes of prediction as "goods", users can always "trade" with the market (by making a prediction that differs from the current market consensus) when they have goods they do not like. But I'm not sure if this notion of efficiency is relevant in this case.)

Can we design a reputation system that both incentivizes making truthful predictions (in particular, disincentivizes not making predictions when you believe you can make an accurate prediction) and accurately measures prediction strength? If we want strategy-proofness, we cannot measure reputation purely on the prediction outcomes. The reputation of scientist as a researcher is not primarily based on the number of papers published or grants obtained, but on the degree that other scientists trust this person.

We can try to measure trust as a separate metric besides reputation on prediction markets as follows (all numbers and percentages are made up): for a limited number of times per day, any user can choose to "trust" another user and immediately gains 1 reputation for the effort. If user A "trusts" user B, user A obtains 10% of the amount of reputation user B gains on the first prediction user B makes after being trusted. We keep track of every time someone trusts someone else to build a trust network. By using a network ranking algorithm such as PageRank ( https://en.wikipedia.org/wiki/PageRank ), we obtain a "trust score/ranking" for each user, which is based on how often other users would expect you to make a correct prediction. To encourage users to become trusted, we can give periodic reputation awards for being highly trusted.

Of course, in practice, we may trust certain people only when they make predictions within a certain domain, so it may be useful to "trust" someone only for the next prediction on a question in a certain category. Another issue may be that many people simply only trust the highest ranked person. I'm not sure if this is bad. This is a pretty naive action that is likely more common with a low-trust user, and therefore does not generate much trust. And if someone somehow happens to be almost always right, well, then they should be highly trusted, of course.
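The ranking step of the scheme above can be sketched with a tiny power-iteration PageRank over the trust edges (a toy implementation; a real system would presumably weight edges by how much reputation actually flowed):

```python
def pagerank(edges, damping=0.85, iters=50):
    """Power-iteration PageRank over a trust graph.
    edges: list of (truster, trusted) pairs."""
    nodes = sorted({n for edge in edges for n in edge})
    out = {n: [b for a, b in edges if a == n] for n in nodes}
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iters):
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for n in nodes:
            targets = out[n] or nodes  # dangling nodes spread rank evenly
            share = damping * rank[n] / len(targets)
            for t in targets:
                new[t] += share
        rank = new
    return rank

# If Alice and Bob both trust Carol, Carol ends up with the highest score.
ranks = pagerank([("alice", "carol"), ("bob", "carol")])
```

One nice property for the "everyone just trusts the top person" worry: trust received from low-ranked users contributes little, so piling onto the leader is mostly self-limiting.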

> Manifold lets you buy their play money for real money, which in theory would destroy any reputational value. But they solve this by actually reputationalizing play money profits, which works

Isn't this still game-able? Just buy lots of play money and bet it randomly on a bunch of different questions. The leaderboard appears to track total winnings, so you can still pull ahead of others by investing enough money to have larger payouts on the bets you do win. In order to solve this, the leaderboard would need to track something like winnings/investment.

The biggest problem with positive-sum is that it incentivizes people to bet based on the expected value of the bet instead of their best estimate, and thereby degrades the informational value of their bet.

This is probably also why nobody brags about their metaculus score: they sense that rewarding gamesmanship degrades the usefulness of the market.

Conditional prediction markets with mutually exclusive conditions should allow you to bet the same money in each of them- multiplying the available capital by the number of conditions.

On the book review markets, I think there is less incentive for people to bid down because you've stated (and people also implicitly know) that the chance of the market resolving either way is dependent on how positive it is. So bidding down doesn't just lock up my play money for a year, but also lowers the odds the market will resolve even if I am correct! Though I think this problem would also be fixed by the interest-free loan idea.

Hope this is appropriate to post here: If anyone here is involved with Polymarket, I see that they’re hiring an engineer and would love to interview with them.

That you told people that there is a "like" button might make posts before this less comparable to posts after.

I for one did not know one could "like" posts.

Hmm, could Manifold do some sort of linked conditional markets? Where only (at most?) one market resolves, and the rest are N/A. That way you could put M$100 into the linked markets, and bet it on as many of them as you want. Since only one resolves, the most you could lose is M$100, so this doesn't risk anyone going negative.

Here, it might be "If Scott reviews this book FIRST of the books in the group, will it get 125 likes".

This doesn't resolve the issue of likes being gameable, and thus not being an ideal resolution criterion.

I think the best way to look at prediction markets is that you (as market maker) are offering a bounty for information. People will participate in the market if they think the share of the subsidy they can claim thanks to their personal knowledge is worth the cost of the time required to investigate the question.

Of course, this also shows why there are thorny legal issues surrounding them. In the traditional financial system, insider trading is considered wrong and illegal. However, insider trading is basically the entire raison d'etre of prediction markets. They're a decentralized leak bounty (at best - at worst they turn into straight out bribes if decision makers participate).

I think what's going on here is that money, pretend or otherwise, has two purposes in a prediction market that don't quite line up. One is to provide an incentive structure and the other is to ration market-making power.

Also I think it's a good idea for play money markets to stay reasonably close to how IRL money works, for reasons of verisimilitude. We want to simulate a real money market so we can hopefully find any pitfalls before they lead to financial ruin.

Feb 22, 2022·edited Feb 22, 2022

I feel confused about the "locking money up for 2.5 years" issue. Assume that people are willing to wait 1 year, but not 2.5 years. Wouldn't that imply that one year before resolution the price will become correct? But then wouldn't it imply that people who anticipate that fact will realise that buying at the correct price two years before resolution and then selling one year before resolution will give them the profit they want in one year? But then wouldn't it imply that the price will become correct two years before resolution? And then people who realise this fact will start investing three years before... etc., etc., and by induction the price should be correct now?

Feb 22, 2022·edited Feb 22, 2022

"They buy “yes” on books they like, but don’t buy “no” on books they don’t like, because that would be against the imaginary rules for the voting that they are falsely imagining this to be."

Are you sure those aren't the actual rules? You took a group of people who did not care about Manifold play money and told them that they could convert it into increased chance of you reviewing a book, they proceeded to exchange the worthless currency for the one they valued. Yes dominating no is a product of the seven-way race: why cast a no vote on every book except the one you want when you could cast six yes votes on the one you do?


You could get around the "resolves N/A" thing by asking: "Will I review X *and* will it get at least 125 likes?". Then a vote for "yes" is a vote of confidence for X, and a vote for "no" is a vote of no confidence, but if you don't end up reviewing X then it resolves "no".
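As a sketch, the proposed resolution rule (the names here are illustrative):

```python
def resolve(reviewed_x, likes, threshold=125):
    """Conjunctive market: 'Will I review X AND will it get >= 125 likes?'
    Not reviewing X resolves the market NO rather than N/A, so no bets
    are refunded."""
    return "YES" if reviewed_x and likes >= threshold else "NO"

assert resolve(True, 130) == "YES"   # reviewed and popular
assert resolve(True, 100) == "NO"    # reviewed but under the threshold
assert resolve(False, 130) == "NO"   # not reviewed: NO, not N/A
```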


"Still, part of me wishes that reputation systems could actually give someone a good reputation"

This is backwards. A good reputation should give someone a good rating in the reputation system, not the other way around. Otherwise the reputation system is fallaciously circular. But, yes, as a record of reputation, it would only be valuable if a good (recorded) reputation also had real consequences.

"why would anybody want play money? The obvious answer is that it’s a reputation system in disguise"

This is backwards. Your "reputation system" is a play money system in disguise (it is not a reputation system).

Feb 22, 2022·edited Feb 23, 2022

Huh, I wanted to vote for Rene Girard, and then looked at all the odds and bet the negative of everything. (If you review Nixonland I might lose...) Having bet negative on everything, I won't be hearting any of the above book reviews, well unless it goes above 125 and I did like the review a lot. (I want to say that in general, positive feedback is bad for a control loop... and more people should build control loops to understand that simple fact.... our first job is to get the sign (+/-) right.) If more than 125 people vote that some review will get a heart, then it's almost guaranteed to be true.... hmm I may have bet wrong.

Can I ask something? If you don't review the book, I just get my 'money' back, is that right?

(I'm selling a lot of no votes...)


I think the ~90% odds of 125 likes might be correct. Your normal book reviews rarely do so well, but if a bunch of people have play money riding on the outcome, all it takes is one of them caring enough to stuff the ballot. The average bettor doesn't even need to intend to stuff the ballot; merely noticing that there are a lot of bettors should cause them to expect a significant chance of ballot-stuffing and adjust their bets accordingly.


I think prediction markets could borrow some ideas from perpetual futures markets (I'll preface this by saying I don't have much experience with trading prediction markets, so maybe this is already available or doesn't work for obvious reasons):

* First, you need the ability to trade in and out of markets at any time - this means you can trade events that change the probability, instead of just the final outcome. E.g., if you think Putin's speech will raise the odds of Russia invading Ukraine, you can buy in at 70% probability and get out profitably an hour later at 80%. This lets you deploy your information in concentrated time periods.

* Second, as you already say, you need leverage - this is how people make money in low-volatility markets like foreign exchange, where prices move by basis points. Say you have $100 in your account. You could bet $500 on a given market for 5x leverage. Now, if the probability increases by 20%, you will have doubled your money. On the flip side, a 20% drop will get you liquidated, so you need to manage your risk.

* The final piece of the puzzle is what's called cross-margin. This means that any position you hold counts towards your margin. Say you start with $100 in your account and bet $50 on some market. While the price of that specific market stays the same, you still have $100 of margin - $100 + $0 profit. If your bet turns out well and doubles in value, you now have $150 of margin, all without closing the position. At the same time, you can open as many other positions as you like, and your available margin will always be the cash balance you have plus the sum of all profits and losses from your open positions.
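The margin arithmetic in the bullets above can be sketched in a few lines (the numbers are the examples from the text, not any real exchange's rules):

```python
def equity(cash, positions):
    """Cross-margin: available margin is the cash balance plus the sum
    of unrealized profit and loss across all open positions."""
    return cash + sum((current - entry) * size
                      for entry, current, size in positions)

cash = 100.0                      # $100 starting balance
size = 500.0 / 0.50               # 5x leverage: $500 position at 50%
positions = [(0.50, 0.50, size)]  # (entry price, current price, contracts)

assert equity(cash, positions) == 100.0            # market hasn't moved
positions[0] = (0.50, 0.60, size)                  # probability +20%...
assert round(equity(cash, positions), 6) == 200.0  # ...doubles your money
positions[0] = (0.50, 0.40, size)                  # probability -20%...
assert round(equity(cash, positions), 6) == 0.0    # ...and you're liquidated
```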

All this combined lets an informed trader make much more money on small short-term moves, requires much less capital for the same absolute returns, and lets you construct a portfolio of uncorrelated bets, which also boosts your returns if done correctly - solving the problem of small returns and long time-frames.

This all adds risk, of course - the more leverage you use, the quicker you can get liquidated, and similarly, if you bet on correlated markets they can all move against you at the same time. However, all of this favours more informed participants at the expense of lay people, which is ultimately what you want if you want to get to a point where people are doing this professionally.


What if you could get play-money "leverage" for betting "yes" when the probability is high, or when the question closes a long time in the future, or extra leverage if it's both?
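One way such a rule could look, purely as a hypothetical sketch (the formula and multipliers are invented to illustrate the idea, not any platform's actual policy):

```python
def play_leverage(yes_price, years_to_close):
    """Hypothetical play-money leverage rule: more leverage for
    high-probability 'yes' bets and for distant close dates, and the
    boosts multiply when both apply. All numbers are made up."""
    prob_boost = 1.0 / (1.0 - yes_price)  # e.g. 2x at 50%, 10x at 90%
    time_boost = 1.0 + years_to_close     # an extra 1x per year to close
    return prob_boost * time_boost

assert round(play_leverage(0.90, 0.0), 6) == 10.0  # high-probability "yes"
assert round(play_leverage(0.50, 2.0), 6) == 6.0   # both boosts combined
```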


There is a proper-ish Metaculus leaderboard. It is based on points per question, so just spamming predictions across questions at random will maybe accumulate many points, but not points per question. I'm currently sitting in the top 45 out of the 200 people on that list. It seems to only include people who participated in at least some number of questions or attained some number of total points. I can't tell how many users Metaculus has in total, so it's hard to say how elite this leaderboard is, but it is interesting nevertheless.

It is still a problematic measure, since some kinds of questions give more points (longer-running, continuous ones), so one can play strategically by participating mainly in those to get a higher point gain per question. One could probably get around this issue with some adjusting, but that hasn't been done, and the data from this leaderboard doesn't seem to be public, so I can't give it a try myself.

https://metaculusextras.com/points_per_question
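The ranking idea boils down to averaging rather than totalling, as in this sketch (the user data is made up):

```python
def leaderboard(users):
    # Rank by points per question rather than total points, so answering
    # many questions at random doesn't climb the list by itself.
    return sorted(users,
                  key=lambda u: u["points"] / u["questions"],
                  reverse=True)

users = [
    {"name": "spammer", "points": 900, "questions": 300},  # 3 pts/question
    {"name": "sniper",  "points": 300, "questions": 30},   # 10 pts/question
]
assert leaderboard(users)[0]["name"] == "sniper"
```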


If it is play money, you could implement a system where you don't pay the cost of the bet until you sell or the bet is resolved. Then no money will be locked up in long predictions. Combine this with a maximum investment per market tied to your current money pool, so that people who have done well previously have more leverage in future bets.

This would only work with play money, though, because it would cause lots of people to go negative, which someone would have to pay for if it were real money - but that's not a problem with play money.

There should probably also be some way to declare play-money bankruptcy, where if you go permanently negative you get to start over after a time out, possibly with progressively longer time outs every time you screw up.
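A rough sketch of the deferred-payment and bankruptcy mechanics being proposed (all names, caps, and numbers are invented for illustration):

```python
class PlayAccount:
    """Bets cost nothing up front; the balance only moves when a market
    resolves, so no play money is locked up in long predictions."""

    def __init__(self, balance=1000):
        self.balance = balance
        self.bankruptcies = 0

    def max_bet(self):
        # Tie market-making power to past performance: accounts that did
        # well get more leverage on future bets (the cap is illustrative).
        return max(self.balance, 0) // 10

    def settle(self, stake, won, payout):
        # Only now does money change hands; losers can go negative.
        self.balance += payout - stake if won else -stake

    def timeout_days(self):
        # Progressively longer timeouts for repeat bankruptcies.
        return 30 * 2 ** self.bankruptcies

    def declare_bankruptcy(self, starting_balance=1000):
        self.bankruptcies += 1
        self.balance = starting_balance

acct = PlayAccount()
acct.settle(stake=1500, won=False, payout=0)  # loss exceeds the balance
assert acct.balance == -500                   # negative is allowed...
assert acct.timeout_days() == 30
acct.declare_bankruptcy()                     # ...then reset after a time out
assert acct.balance == 1000 and acct.timeout_days() == 60
```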

Also, points or play money don't have to have real value. Lots of humans like seeing numbers go up, even if those numbers don't mean anything. Lots of humans play computer games, which also don't give bragging rights.

When you get to real money, there is a difference, because then people can really do predictions full time. If there is no money, it can't be more than a hobby. In this case I expect imaginary internet points are as good a reward as any.
