Statistical Mechanics – Isolated Systems

In the last post, we learnt how to maximise a function subject to known constraints. The underlying motivation was the need to find the set of probabilities \{p_\alpha\} that maximise the Gibbs entropy S_G. Using Lagrange multipliers, we can impose certain constraints on the \{p_\alpha\} and, in doing so, incorporate what little we know about the system into our equations.

 

Let’s first consider the case of ‘maximum ignorance’. We are considering a system about which we have absolutely no knowledge – well, almost no knowledge. The one thing we do know is that the sum of the probabilities must be equal to one. This is because if our list of microstates is complete, the system must be found in one of them. This is the most fundamental constraint we can place on our probabilities:

\displaystyle \sum_\alpha p_\alpha =1

Using the method of Lagrange multipliers, we maximise the function

\displaystyle -\sum_\alpha p_\alpha\ln p_\alpha-\lambda\Big(\sum_\alpha p_\alpha - 1\Big)

The first term is the function we want to maximise, the Gibbs entropy. The second term is the Lagrange multiplier \lambda multiplied by an expression that the normalisation constraint forces to equal zero. Taking the differential of this expression gives

\displaystyle -\sum_\alpha (\ln p_\alpha dp_\alpha+dp_\alpha)-\lambda \sum_\alpha dp_\alpha-\Big(\sum_\alpha p_\alpha-1\Big)d\lambda

\displaystyle -\sum_\alpha (\ln p_\alpha+1+\lambda)dp_\alpha-\Big(\sum_\alpha p_\alpha-1\Big)d\lambda

We set this expression equal to 0. (If you are unconvinced, it is worth checking the differential above for yourself – the product and chain rules for differentials work in exactly the same way as they do for derivatives.) For the expression to vanish, we need

\ln p_\alpha+1+\lambda=0

\displaystyle \sum_\alpha p_\alpha - 1 =0

since the variables \{p_\alpha\} and \lambda are independent. Rearranging the first equation,

p_\alpha=e^{-(1+\lambda)}

Substituting this into the second equation, and writing \Omega for the total number of microstates,

\displaystyle \sum_{\alpha=1}^\Omega e^{-(1+\lambda)}=1

\Omega e^{-(1+\lambda)}=1

\displaystyle e^{-(1+\lambda)}=\frac{1}{\Omega}

\displaystyle p_\alpha=\frac{1}{\Omega}
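
As a quick aside, substituting this back into the Gibbs entropy gives its maximum value, which grows with the number of microstates:

\displaystyle S_G=-\sum_{\alpha=1}^\Omega\frac{1}{\Omega}\ln\frac{1}{\Omega}=\ln\Omega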

So each microstate is equiprobable. This assignment is perfectly reasonable; if we know absolutely nothing about a system, our best guess is that no single state is privileged, and that if we were to ambush the system and open its lid, we would be just as likely to find it in one state as any other. The fancy statistical name for this is the principle of indifference.
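
If you'd like to see this result emerge numerically rather than symbolically, here is a minimal sketch using SciPy's constrained optimiser (the choice of \Omega=5, the starting guess, and the function names are mine, picked purely for illustration):

```python
import numpy as np
from scipy.optimize import minimize

omega = 5  # number of microstates -- an arbitrary choice for this sketch

def neg_gibbs_entropy(p):
    # minimize() minimises, so we hand it the *negative* of S_G
    return np.sum(p * np.log(p))

# The single constraint: the probabilities must sum to one
normalisation = {"type": "eq", "fun": lambda p: np.sum(p) - 1}

# Start from a deliberately non-uniform (but normalised) guess
p0 = np.arange(1, omega + 1, dtype=float)
p0 /= p0.sum()

result = minimize(
    neg_gibbs_entropy,
    p0,
    bounds=[(1e-9, 1.0)] * omega,  # keep each p_alpha > 0 so ln(p) is defined
    constraints=[normalisation],
)

print(result.x)     # ~[0.2 0.2 0.2 0.2 0.2], i.e. 1/omega for every microstate
print(-result.fun)  # ~1.609 = ln(5), the maximised Gibbs entropy
```

The optimiser lands on the uniform distribution whatever non-degenerate starting guess you feed it, exactly as the Lagrange-multiplier argument predicts.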

The kind of system described by a uniform probability distribution is an isolated system: one that exchanges neither energy nor matter with its surroundings, and so cannot, even in principle, be measured. In practice, we’ve little use for systems with which we have absolutely no contact. As ignorant as we are, we can still measure a system’s macroscopic properties – no system is truly isolated. In a following post we’ll consider a system about which we have some real information. The results are much more interesting.

But first (we’ll get there eventually, promise!) we have to learn how we quantify ‘knowledge’ of a system.
