Pseudo-random Distribution

The pseudo-random distribution (often shortened to PRD) in Dota 2 refers to the statistical mechanic underlying certain probability-based items and abilities. Under a true random distribution, every "roll" operates independently, but under PRD, the effect's chance increases each time it fails to occur. In general, PRD is applied to the following types of abilities: Critical Strike, Bash, Damage Block, Chain Lightning, and Maim.

Summary
For each instance that could trigger the effect but doesn't, PRD increases the probability of the effect occurring on the next instance by a certain constant. This constant (which is also the initial probability) is usually quite low compared to the stated probability of the effect it stands in for. The probability counter resets every time the effect occurs. Over a moderately large number of instances, the expected probability for each instance is almost exactly the listed probability (but see the note below), and because the probability rises steadily after each failure and resets after each success, the effect occurs more consistently.
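The mechanism above can be sketched as a small stateful roller. This is an illustrative Python sketch, not the game's actual implementation; the class name is mine, and the 0.085 constant in the usage is the approximate Bash value discussed later in the article:

```python
import random

class PRD:
    """Illustrative pseudo-random distribution roller (not the game's code).
    c is both the initial proc chance and the per-failure increment."""

    def __init__(self, c):
        self.c = c
        self.n = 1  # attacks since the last proc, counting the current one

    def roll(self):
        """Return True if the effect procs on this attack."""
        if random.random() < self.c * self.n:
            self.n = 1   # success: reset the counter
            return True
        self.n += 1      # failure: the next attack is more likely to proc
        return False

# Usage: a 25%-listed Bash uses a constant of roughly 0.085.
bash = PRD(0.085)
procs = sum(bash.roll() for _ in range(100000))
```

Because the chance reaches 100% once enough attacks have failed, long droughts are impossible by construction, which is exactly the consistency described above.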

The important gameplay and balance effect of PRD is that its effects rarely occur many times in a row or go a long time without happening. In the case of Bash, a bash occurs on average about once every four attacks, rarely occurs twice in a row, and is guaranteed within 12 attacks. This makes the game far less luck-based and adds a great deal of consistency to many probability-based abilities in Dota 2.

Gameplay-wise, PRD is difficult to exploit. It is theoretically possible to increase your chance to bash or critically strike on the next attack by attacking creeps several times without the effect triggering, but in practice this is nearly impossible to pull off. Note that the probability counter does not increase for instances that cannot trigger the effect: a hero with critical strike attacking buildings does not increase its chance to critically strike on its next attack, since critical strike does not work against buildings.

The probability of a modifier occurring on the Nth attack since the last successful proc is given by P(N) = C * N. C is a constant derived from the expected probability of the modifier occurring; it serves as both the initial probability of the modifier and the increment by which the probability increases each time the modifier fails. Once P(N) reaches or exceeds 1.00, the modifier always succeeds.

For example, Slardar's Bash has a 25% probability to Stun the target. On the first attack, however, it will only have an ~8.5% probability to bash; this is its PRD constant C. Each subsequent attack without a bash increases this probability by 8.5%. So on the second attack, the chance is 17%, on the third it is 25.5%, etc. After a bash procs, the probability resets to 8.5% for the next attack. These probabilities average out so that, over a moderate period of time, Bash will proc very nearly 25% of the time.
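Under the P(N) = C * N rule, these figures can be checked exactly rather than by simulation. A minimal sketch, using the approximate 8.5% constant from the text (the precise in-game value is slightly lower):

```python
C = 0.085  # approximate PRD constant for a 25% listed chance

# dist[k] = exact probability that the proc lands on attack k+1 after a reset
dist, fail_so_far, n = [], 1.0, 1
while C * n < 1.0:
    p = C * n
    dist.append(fail_so_far * p)
    fail_so_far *= 1.0 - p
    n += 1
dist.append(fail_so_far)  # at this n, C * n >= 1: the proc is guaranteed

# The long-run proc rate is 1 / E[N], the reciprocal of the average gap.
expected_n = sum(k * p for k, p in enumerate(dist, start=1))
print(f"long-run proc rate = {1 / expected_n:.4f}")  # very nearly 0.25
```

The distribution has exactly 12 entries, matching the 12-attack guarantee: 11 attacks where C * N is below 1, plus the final attack where the proc is certain.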

However, the table of C values used by Dota 2 will not always result in a modifier's listed chance being equal to its actual probability of occurring. This is especially clear with chances over 25%. For example, a Vanguard has a listed chance of 80% to block damage. To achieve that probability using PRD, a C value of around 75% would be used. However, the constant used by the game is closer to 50%, resulting in the actual chance to block being around 66%.
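There is no simple closed form for the constant matching a given listed chance, but it can be found numerically. A minimal sketch, assuming the P(N) = C * N model above (function names are mine, not the game's):

```python
def prob_from_c(c):
    """Long-run proc rate of a PRD with constant c (equals 1 / E[N])."""
    expected_n, fail_so_far, n = 0.0, 1.0, 1
    while True:
        p = min(c * n, 1.0)
        expected_n += n * fail_so_far * p
        fail_so_far *= 1.0 - p
        if p >= 1.0:
            return 1.0 / expected_n
        n += 1

def c_from_prob(target):
    """Bisect for the constant whose long-run proc rate is `target`.
    The rate at c = target already exceeds target, so [0, target]
    brackets the root, and the rate is monotonic in c."""
    lo, hi = 0.0, target
    for _ in range(60):
        mid = (lo + hi) / 2
        if prob_from_c(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(c_from_prob(0.25))  # ~0.0847, matching the Bash example
print(c_from_prob(0.80))  # 0.75, the "around 75%" theoretical C above
```

For chances of 50% and above the proc is guaranteed by the second attack, so E[N] = 2 - C and the theoretical constant for an 80% chance works out to exactly 0.75, as the text states.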

In the following table, P(T) is the theoretical (listed) probability of the modifier occurring; in the case of Bash, this is 25%. P(A) is the actual probability of the modifier occurring over an infinite number of attacks. Theoretical C is the constant that would produce the theoretical probability, while Actual C is the constant used by the game, resulting in the skewed probability. Max N is the smallest number of attacks for which C * N reaches 1 (i.e. a guaranteed proc), and is based on the actual C. Average N is the expected value of N: the sum over all N of N multiplied by the probability that the proc occurs on the Nth attack. Standard deviation measures how spread out the distribution of N is, computed with the weighted population formula; the lower the deviation, the more consistent the procs.
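Assuming the P(N) = C * N model, Max N, Average N, and the standard deviation all follow from C alone. An illustrative helper (not a game API; the 0.08475 constant is the approximate theoretical C for a 25% chance used in the Bash example):

```python
import math

def prd_stats(c):
    """Max N, expected N, and weighted population standard deviation of N
    for a PRD with constant c."""
    max_n = math.ceil(1.0 / c)           # first N where c * N reaches 1
    dist, fail_so_far = [], 1.0
    for n in range(1, max_n + 1):
        p = min(c * n, 1.0)
        dist.append(fail_so_far * p)     # proc lands exactly on attack n
        fail_so_far *= 1.0 - p
    avg_n = sum(n * p for n, p in enumerate(dist, start=1))
    var = sum(p * (n - avg_n) ** 2 for n, p in enumerate(dist, start=1))
    return max_n, avg_n, math.sqrt(var)

max_n, avg_n, std = prd_stats(0.08475)   # Max N = 12, Average N ~ 4
```

Average N comes out as the reciprocal of the proc rate (about 4 attacks for a 25% chance), which is why the table's Average N column tracks 1 / P(A).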