♠Methodology is the difference between analysis and opinion
A claim without a methodology is an opinion dressed up as a fact. "FreeCell is almost always winnable" is an opinion; "only one of the 32,000 Microsoft-numbered FreeCell deals (game #11982) has been proven unwinnable by exhaustive solver analysis" is a finding. The Research Desk exists to make sure the numbers, rankings, and strategy claims on this site are findings rather than opinions, and that every reader can audit where they came from.
♣The research sources we use
Our working library is small, deliberate, and visible in article citations. For rules and history we rely on four anchors: Hoyle's Rules of Games, whose twentieth-century editions we treat as the modern canon; Lady Adelaide Cadogan's Illustrated Games of Patience, first published in the 1870s and still the best record of the Victorian patience tradition as well as the first English-language compendium to collect patience games systematically; David Parlett's Oxford Guide to Card Games (reissued as A History of Card Games) for genealogy and naming; and Pagat.com, John McLeod's carefully maintained collection of online rule summaries. We use Wikipedia as a secondary source only: it is useful for triangulation and for finding citations, but we do not treat it as authoritative on its own. Every non-trivial claim in our articles links to the source that backs it.
For the Microsoft era specifically, we rely on Don Woods' and Michael Keller's published analyses of the original 32,000 deals, the documented behavior of the linear congruential generator Microsoft used to number those deals, and the community archives that grew out of FreeCell FAQ culture in the late 1990s. Those sources answer a surprising number of questions about deal numbering, solvability, and the famously unwinnable deals that helped define the game's reputation. When a secondary blog cites one of those primary sources, we read the primary source directly and cite it ourselves.
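That generator is simple enough to show in full. Here is a minimal Python reconstruction of the widely documented deal-numbering algorithm (an MSVC-style rand() plus a swap-and-deal loop); the card encoding and column order follow the community reconstruction, so treat this as illustrative rather than as Microsoft's released source.

```python
def ms_rand(state):
    """One step of the MSVC-style LCG behind the original deal numbering."""
    state = (state * 214013 + 2531011) % 2**31
    return state, state >> 16  # a value in 0..32767

def deal(game_number):
    """Deal Microsoft FreeCell game `game_number` into 8 columns."""
    deck = list(range(52))  # card = rank * 4 + suit; 0 = Ace of Clubs
    columns = [[] for _ in range(8)]
    state = game_number
    for i in range(52):
        state, r = ms_rand(state)
        j = r % (52 - i)
        # Swap the chosen card into the undealt tail, then deal it left to right.
        deck[j], deck[51 - i] = deck[51 - i], deck[j]
        columns[i % 8].append(deck[51 - i])
    return columns
```

A quick sanity check: deal(1) deals the jack of diamonds (card 41) first, matching the widely published layout of game #1.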
♦How we run simulations
Most of the win-rate figures on this site come from Monte Carlo simulations we run ourselves. The approach is deliberately simple: for each game we generate N random deals (usually N = 10,000 or more, depending on the variant and the question), play each to completion using a consistent strategy heuristic, record the outcome, and aggregate. The heuristic is disclosed alongside the result because the same game can have wildly different measured win rates depending on whether the simulated player cycles the stock, looks ahead, or plays greedily.
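The loop itself is unremarkable; a minimal sketch, in which game.deal and heuristic.play are hypothetical stand-ins for our actual engine:

```python
import random

def run_trials(game, heuristic, n=10_000, base_seed=20240101):
    """Estimate a win rate by Monte Carlo under a fixed, disclosed heuristic."""
    wins = 0
    for trial in range(n):
        rng = random.Random(base_seed + trial)  # reproducible per-trial seed
        layout = game.deal(rng)                 # hypothetical: deal one random game
        if heuristic.play(layout):              # hypothetical: True if the deal is won
            wins += 1
    return wins / n
```

Deriving each trial's seed from a disclosed base seed is what makes a published run repeatable by anyone.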
Every simulation we publish discloses the sample size, the random-seed methodology, and the strategy heuristic in use. When a deal goes unsolved within a fixed computation budget, we flag that explicitly rather than counting it as a loss, because "the solver ran out of time" and "this deal cannot be won" are different claims. We do not publish a win rate for a game unless we can back it with either a simulation we have run or published academic research. If neither exists, we say so and move on: a vaguely sourced number is worse than no number at all.
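In practice that means an outcome is three-valued, not two-valued; a minimal sketch of the bookkeeping:

```python
from collections import Counter
from enum import Enum

class Outcome(Enum):
    WIN = "win"
    LOSS = "loss"          # played or solved to a proven dead end
    UNSOLVED = "unsolved"  # computation budget exhausted: a different claim

def summarize(outcomes):
    """Aggregate trial outcomes, reporting unsolved deals separately from losses."""
    counts = Counter(outcomes)
    decided = counts[Outcome.WIN] + counts[Outcome.LOSS]
    return {
        "win_rate_over_decided": counts[Outcome.WIN] / decided if decided else None,
        "unsolved": counts[Outcome.UNSOLVED],
        "n": len(outcomes),
    }
```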
♥Confidence intervals, not point estimates
We report confidence intervals rather than bare point estimates. A result like "Spider 2-suit wins 45% of the time" is misleading without an error bar; a result like "45.2% (95% CI: 44.2%–46.2%, N=10,000)" is auditable. When win rate varies by difficulty setting (Spider 1-suit vs 2-suit vs 4-suit is the classic example), we report each setting separately. Averaging across difficulties is misleading, because no player actually samples the three modes uniformly, and the aggregate number hides the exact information a reader is trying to use.
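The interval is the standard binomial calculation; a minimal version using the normal approximation, which is adequate at these sample sizes (a Wilson interval is the safer choice for rates near 0% or 100%):

```python
import math

def win_rate_ci(wins, n, z=1.96):
    """Point estimate plus a 95% CI for a win rate (normal approximation)."""
    p = wins / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

p, lo, hi = win_rate_ci(4520, 10_000)
print(f"{p:.1%} (95% CI: {lo:.1%}-{hi:.1%}, N=10,000)")
# -> 45.2% (95% CI: 44.2%-46.2%, N=10,000)
```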
♣How we back strategy claims
Strategy writing is the easiest part of solitaire content to get wrong, because most claims sound plausible and can sit unchallenged for years. When we say "the first move in FreeCell should almost always be X," we back it with one of three things: a solver analysis (exhaustive when the state space is tractable, heuristic when it is not), simulation results across a large number of deals showing the recommended move outperforms alternatives, or a first-principles game-theory argument that names the tradeoffs. We show the reasoning in the article; we do not just hand the reader a list of tips and expect trust.
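When the backing is simulation, the comparison is paired: every candidate move is played on the same deals, so deal-to-deal variance cancels out of the difference. A hedged sketch, where deal_fn and play_from are hypothetical stand-ins:

```python
def compare_first_moves(deal_fn, play_from, moves, seeds):
    """Win rate per candidate first move, played over the same set of deals."""
    wins = {move: 0 for move in moves}
    for seed in seeds:
        layout = deal_fn(seed)                      # same deal for every candidate
        for move in moves:
            if play_from(layout, first_move=move):  # True if the game is won
                wins[move] += 1
    return {move: count / len(seeds) for move, count in wins.items()}
```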
The reason strategy claims need such heavy backing is that solitaire players repeat advice to each other for decades without testing it. "Always move to the left column first" is the sort of tip that sounds reasonable, circulates widely, and turns out to be wrong under simulation for half the games it is applied to. Our rule is simple: if we cannot show the data that supports a recommendation, the recommendation does not go in the article.
♠Citation and attribution
Every non-trivial factual claim in our articles links to its source. When we cannot find an authoritative source, we do not pretend to have one — the claim gets marked inline as [editorial analysis] or [disputed] so readers know where confidence should sit. Transparency over false confidence is the rule we hold hardest. A reader who learns which of our claims are solid and which are tentative will trust the solid ones more, not less.
♦What we are still learning
We do not have all the answers. There are variants we cover at the rules level but have not yet analyzed rigorously, and there is research we would like to do but have not found the time for. If you see a gap, tell us — we would rather publish "we do not know yet" than paper over it. The working list of open questions lives on an internal research backlog, and when we close a question we add the result to the relevant page and note the date.
♦Related reading
The three-pillar testing framework we run every game through before publishing: rules accuracy, gameplay fidelity, and player experience.
House style, fact-checking workflow, and corrections policy.
The five specialty desks behind every article on the network.
♥Corrections
See a number on the site that looks off? We re-run simulations in public and correct errors fast. Write to research@solitairestack.com.
