In the previous post, we found an expression for a closed system’s Gibbs entropy $S$ in terms of its energy $E$, the partition function $Z$, and the enigmatic variable $\beta$:

$$ S = k_B \beta E + k_B \ln Z $$
This expression for $S$ is not wholly enlightening, since the only variable that is easily interpretable is $E$. Instead we’ll look at its *differential*, an infinitesimal change in its value:

$$ dS = k_B \, d(\beta E) + k_B \, d(\ln Z) $$
Using the product rule on the first term gives

$$ d(\beta E) = \beta \, dE + E \, d\beta, $$
and using the chain rule on the second term gives

$$ d(\ln Z) = \frac{dZ}{Z}. $$
Let’s look at the second term in more detail:

$$ \frac{dZ}{Z} = \frac{1}{Z} \, d\left( \sum_i e^{-\beta E_i} \right) $$
The differential sign can be taken inside the sum, since the differential of a sum is the sum of the differentials:

$$ \frac{dZ}{Z} = \frac{1}{Z} \sum_i d\left( e^{-\beta E_i} \right) $$
Using the chain rule gives

$$ \frac{dZ}{Z} = -\frac{1}{Z} \sum_i e^{-\beta E_i} \, d(\beta E_i) $$
We use the product rule again, and separate out the terms:

$$ \frac{dZ}{Z} = -\frac{1}{Z} \sum_i e^{-\beta E_i} E_i \, d\beta - \frac{\beta}{Z} \sum_i e^{-\beta E_i} \, dE_i $$
This expression looks unpleasant, but can be simplified when we recall that

$$ p_i = \frac{e^{-\beta E_i}}{Z}, $$

where $p_i$ is the probability that the system is in the state $i$. So

$$ \frac{dZ}{Z} = -\sum_i p_i E_i \, d\beta - \beta \sum_i p_i \, dE_i $$
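Before interpreting the two sums, this expression for $dZ/Z$ can be sanity-checked numerically. The three-level spectrum, the value of $\beta$, and the perturbation sizes below are arbitrary toy choices, not anything from the physics:

```python
import numpy as np

def Z(beta, E):
    """Partition function Z = sum_i exp(-beta * E_i)."""
    return np.exp(-beta * E).sum()

E_levels = np.array([0.0, 1.0, 2.5])    # arbitrary toy microstate energies
beta = 0.7                              # arbitrary value of beta
dbeta = 1e-6                            # small change in beta
dE_i = np.array([1e-6, 2e-6, -1e-6])    # small independent changes to each level

# Occupation probabilities p_i = exp(-beta * E_i) / Z
p = np.exp(-beta * E_levels) / Z(beta, E_levels)

# Left-hand side: dZ/Z computed by actually perturbing beta and the levels
dZ = Z(beta + dbeta, E_levels + dE_i) - Z(beta, E_levels)
lhs = dZ / Z(beta, E_levels)

# Right-hand side: -sum_i p_i E_i dbeta - beta sum_i p_i dE_i
rhs = -np.sum(p * E_levels) * dbeta - beta * np.sum(p * dE_i)

print(abs(lhs - rhs) < 1e-9)   # the two sides agree to first order
```

The residual difference is second order in the perturbations, which is why shrinking `dbeta` and `dE_i` makes the agreement better and better.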
Now we invoke the meaning of the set of probabilities $\{p_i\}$.

The first sum is recognisable as the energy of the system, since it is the statistical mean of the microstate energies: $\sum_i p_i E_i = E$.

The second sum is subtler. It represents the mean change of the system’s microstate energies. The cause of this change cannot be identified yet, because we don’t know the form of the set of energy levels $\{E_i\}$ – we know the system has energy, but we don’t know by what virtue. For now, we will just call this mean change $dX$:

$$ dX = \sum_i p_i \, dE_i $$
Now that we’ve simplified this term, we can substitute it back into the expression for $dS$:

$$ dS = k_B \beta \, dE + k_B E \, d\beta - k_B E \, d\beta - k_B \beta \, dX = k_B \beta \left( dE - dX \right) $$
We rearrange:

$$ dE = \frac{1}{k_B \beta} \, dS + dX $$
Hmm. What does this mean?

It means the system’s energy can be changed in two ways.

The first way to change $E$ is to change the Gibbs entropy $S$. Remember the Gibbs entropy is a function of the set of occupation probabilities $\{p_i\}$. So to change $S$ is to change $\{p_i\}$. The set $\{p_i\}$ quantifies the probability that the system occupies any one of the set of microstates $\{i\}$ with associated energies $\{E_i\}$. So to change $S$ is to change the probability distribution of state energies.

You could imagine displaying the canonical ensemble on an array of screens, and twisting a dial controlling $S$. As you did so, the distribution of microstates would change – screens would flicker as they updated, with some states (energies) becoming more probable and others less so. For example, it might be possible to twiddle the Gibbs entropy such that higher energy levels have a greater probability of being occupied. The system will be, on average, more energetic. Conversely, you might be able to change $S$ such that lower energy levels are more probable. The system calms, and becomes less energetic.

This action we will call the transferral of *heat.*
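This dial-twiddling can be mimicked numerically. Holding a toy set of levels fixed (an arbitrary choice below), we generate two different occupation distributions, here simply by picking two values of $\beta$, and watch the mean energy change with the distribution alone:

```python
import numpy as np

E_levels = np.array([0.0, 1.0, 2.5])   # fixed toy energy levels (arbitrary)

def probs(beta):
    """Canonical occupation probabilities p_i = exp(-beta * E_i) / Z."""
    w = np.exp(-beta * E_levels)
    return w / w.sum()

# "Twist the dial": two different distributions over the same levels
for beta in (2.0, 0.5):
    p = probs(beta)
    mean_E = float((p * E_levels).sum())
    print("p =", p.round(3), " mean energy =", round(mean_E, 3))
```

With the second distribution the higher levels are more probable and the mean energy is larger, even though the levels themselves never moved: energy has changed purely through the probabilities.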

The second way to change $E$ is to bring about a non-zero $dX$ – that is, to grab the system by the lapels and physically manipulate it such that the energy levels available to it are changed. This action could be something as simple as lifting it up through Earth’s gravitational field. Suppose the energy levels before are

$$ E_1, E_2, E_3, \ldots $$
If the system, of mass $m$, is raised a distance $h$ through a gravitational field of strength $g$, the new energy levels will be

$$ E_1 + mgh, \quad E_2 + mgh, \quad E_3 + mgh, \ldots $$
So the mean change in the system’s energy levels is $dX = \sum_i p_i \, mgh = mgh$, and the change in energy of the system is also $mgh$, as we might expect.

This action we will call the performance of *work*.
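The lifting example can be checked directly. Shifting every level by the same amount $mgh$ leaves the occupation probabilities untouched and raises the mean energy by exactly $mgh$; the mass, field strength, and height below are hypothetical values chosen for illustration:

```python
import numpy as np

E_levels = np.array([0.0, 1.0, 2.5])   # arbitrary toy spectrum
beta = 0.7
m, g, h = 2.0, 9.81, 0.1               # hypothetical mass, field strength, height

def ensemble(beta, E):
    """Return (probabilities, mean energy) for levels E."""
    w = np.exp(-beta * E)
    p = w / w.sum()
    return p, float(np.sum(p * E))

p0, E0 = ensemble(beta, E_levels)
p1, E1 = ensemble(beta, E_levels + m * g * h)   # every level raised by mgh

print(np.allclose(p0, p1))             # the distribution is unchanged
print(np.isclose(E1 - E0, m * g * h))  # the mean energy rises by exactly mgh
```

Because the probabilities (and hence the Gibbs entropy) are unchanged, this route changes the energy without transferring any heat: it is pure work.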

The expression

$$ dE = \frac{1}{k_B \beta} \, dS + dX $$

is called the *first law of thermodynamics*, and is more usually written

$$ dE = \delta Q + \delta W, $$
where $\delta Q$ is the heat supplied to the system and $\delta W$ is the work done on the system. It is an expression of the conservation of energy in the context of thermodynamics, and introduces the concept of heat as a physical, measurable quantity, on a par with mechanical work.
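As a final numerical check, the whole derivation can be exercised at once: perturb both $\beta$ and the levels of a toy spectrum, and confirm that the change in mean energy splits into an entropy (heat) piece and a level-shift (work) piece. All numbers below are arbitrary choices:

```python
import numpy as np

kB = 1.0                               # Boltzmann's constant in natural units
E_levels = np.array([0.0, 1.0, 2.5])   # arbitrary toy spectrum
beta = 0.7
dbeta, deps = 1e-6, 1e-6               # small changes in beta and in every level

def ensemble(beta, E):
    """Return (p_i, mean energy, Gibbs entropy S = kB*beta*E + kB*ln Z)."""
    w = np.exp(-beta * E)
    Z = w.sum()
    p = w / Z
    E_mean = float(np.sum(p * E))
    S = kB * beta * E_mean + kB * np.log(Z)
    return p, E_mean, S

p0, E0, S0 = ensemble(beta, E_levels)
_, E1, S1 = ensemble(beta + dbeta, E_levels + deps)

dE = E1 - E0                   # total change in the system's energy
dS = S1 - S0                   # change in the Gibbs entropy (the heat route)
dX = float(np.sum(p0 * deps))  # mean change in the levels (the work route)

# First law: dE = (1/(kB*beta)) dS + dX, to first order in the perturbations
print(abs(dE - (dS / (kB * beta) + dX)) < 1e-9)
```

The discrepancy is second order in the perturbations, exactly as a differential relation demands.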

This differential form allows us to interpret what happens when two systems are brought into loose thermal contact, finally allowing us to give meaning to $\beta$. In the next post, we establish how to calculate the Gibbs entropy of a composite system.