In the last post, we learnt how to maximise a function subject to known constraints. The underlying motivation was the need to find the set of probabilities $p_i$ that maximise the Gibbs entropy

$$S = -k_B \sum_i p_i \ln p_i.$$

Using Lagrange multipliers, we can impose certain constraints on the $p_i$ and, in doing so, incorporate what little we know about the system into our equations.

Let’s first consider the case of ‘maximum ignorance’. We are considering a system about which we have absolutely no knowledge – well, *almost* no knowledge. The one thing we do know is that the sum of the probabilities must be equal to one. This is because if our list of microstates is complete, the system *must* be found in one of them. This is the most fundamental constraint we can place on our probabilities:

$$\sum_i p_i = 1.$$

Using the method of Lagrange multipliers, we maximise the function

$$L = -k_B \sum_i p_i \ln p_i + \lambda \left( 1 - \sum_i p_i \right).$$

The first term is the function we want to maximise, the Gibbs entropy. The second term is the product of the Lagrange multiplier $\lambda$ and a quantity that equals 0 whenever the constraint is satisfied. Taking the differential of this expression gives

$$dL = -\sum_i \left( k_B \ln p_i + k_B + \lambda \right) dp_i + \left( 1 - \sum_i p_i \right) d\lambda.$$
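If you would rather check the algebra numerically than by hand, here is a quick finite-difference sketch (with $k_B$ set to 1 and arbitrary illustrative values for the probabilities and the multiplier) comparing the coefficient of $dp_i$ in the differential against a direct numerical derivative of $L$:

```python
import math

K_B = 1.0  # Boltzmann's constant, set to 1 for this illustration

def lagrangian(probs, lam):
    # L = -k_B * sum_i p_i ln p_i + lam * (1 - sum_i p_i)
    return (-K_B * sum(p * math.log(p) for p in probs)
            + lam * (1 - sum(probs)))

# Arbitrary (not yet optimal) probabilities and multiplier value
probs, lam = [0.2, 0.3, 0.5], 0.7

# Forward finite difference of L with respect to one of the p_i
i, h = 1, 1e-6
bumped = list(probs)
bumped[i] += h
numeric = (lagrangian(bumped, lam) - lagrangian(probs, lam)) / h

# Coefficient of dp_i in the differential: -k_B ln p_i - k_B - lam
analytic = -K_B * math.log(probs[i]) - K_B - lam

print(abs(numeric - analytic) < 1e-5)  # True
```

The two numbers agree to within the finite-difference error, as they should.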

We set this differential equal to 0. It is best to check this equation for yourself if you are unconvinced; the product rule and chain rule for differentials work in exactly the same way as they do for derivatives. Then, for the differential to vanish, each coefficient must separately be zero:

$$-k_B \ln p_i - k_B - \lambda = 0 \qquad \text{and} \qquad 1 - \sum_i p_i = 0,$$

since the variations $dp_i$ and $d\lambda$ are independent. Rearranging the first equation,

$$p_i = \exp\left( -1 - \frac{\lambda}{k_B} \right),$$

which is the same constant for every microstate.

Substituting this into the second equation,

$$\sum_{i=1}^{N} p_i = N \exp\left( -1 - \frac{\lambda}{k_B} \right) = 1 \quad \Longrightarrow \quad p_i = \frac{1}{N},$$

where $N$ is the number of microstates.

So each microstate is equiprobable. This assignment is perfectly reasonable; if we know *absolutely nothing* about a system, our best guess is that no single state is privileged, and that if we were to ambush the system and open its lid, we would be just as likely to find it in one state as any other. The fancy statistical name for this is the *principle of indifference*.
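As a quick numerical illustration of this result (a sketch with $k_B = 1$ and a hypothetical system of five microstates), no randomly drawn distribution has a higher Gibbs entropy than the uniform one:

```python
import math
import random

def gibbs_entropy(probs):
    # S = -k_B * sum_i p_i ln p_i, with k_B = 1 for simplicity
    return -sum(p * math.log(p) for p in probs if p > 0)

N = 5
s_uniform = gibbs_entropy([1.0 / N] * N)  # equals ln N

# Draw random normalised distributions over the same N microstates;
# none should exceed the uniform distribution's entropy
random.seed(0)
for _ in range(10_000):
    weights = [random.random() for _ in range(N)]
    total = sum(weights)
    assert gibbs_entropy([w / total for w in weights]) <= s_uniform + 1e-12

print(f"maximum entropy = ln {N} = {s_uniform:.4f}")
```

Ten thousand random distributions all fall short of the uniform one, in line with the principle of indifference.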

The kind of system described by a uniform probability distribution is an *isolated system*. It is one that cannot, even in principle, be measured. In practice, we’ve little use for systems with which we have absolutely no contact. As ignorant as we are, we can still make measurements of a system’s macroscopic properties – no system is truly isolated. In a following post we’ll consider a system about which we have some real information. The results are much more interesting.

But first (we’ll get there eventually, promise!) we have to learn how we quantify ‘knowledge’ of a system.
