# Thermodynamic Free Energy

I spent a lot of time in my thermo courses minimizing free energy. But I could never figure out what free energy was, or why I was minimizing it. If you really think about it, minimizing energy doesn't make any sense: energy is conserved, remember? (There's a law about that...)

Sometimes I felt like there was a conspiracy to keep me from understanding this stuff. In years since, I've gone back and reviewed my old thermo texts, and...well...let's just say they don't do much to dispel the conspiracy theory.

So let's go back and try to figure out how we got here. The object of the game is to predict the evolution of a system. Here's our system: {...}. As shown, it's a pretty simple system, with only three particles. We know that the total energy of the system is conserved. Based on this, we have computed the wave functions of the system, and counted their degeneracies. In the randomness of things, we know that the system is more likely to be found in states with high degeneracy than in states with low degeneracy, simply because there are more of them. That's what degeneracy is all about. That's also what entropy is all about.
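For a system this small, the degeneracy counting is easy to do by brute force. Here's a sketch in Python, using a toy model of my own (not the essay's): three distinguishable particles with integer energy levels 0, 1, 2, ...; every way of splitting the total energy among the particles is one microstate.

```python
# Toy model (assumed): 3 distinguishable particles, integer energy
# levels 0, 1, 2, ...  The degeneracy is the number of microstates at
# a given total energy E, and the entropy is its log (units of k_B = 1).
from itertools import product
from math import log

def degeneracy(E, n=3):
    """Count microstates of n particles with total energy E."""
    return sum(1 for state in product(range(E + 1), repeat=n)
               if sum(state) == E)

def entropy(E, n=3):
    return log(degeneracy(E, n))

# Degeneracies go 1, 3, 6, 10, ...: higher energies have more states,
# so the system is more likely to be found there.
for E in range(5):
    print(E, degeneracy(E), round(entropy(E), 3))
```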

OK, so what's the big deal? Well, here's another system: {...(.x1e30)}. That bit in parentheses stands for 1e30 particles, and indicates that this is a macroscopic system. We sent it down to NCSA to compute its wave functions, but they're not done yet.

While NCSA is crunching, we'll try a hack: we'll divide the system into two parts: {...}{(.x1e30)}. We still want to maximize entropy. Entropy has this nice additive property, so we can write

```
S = S1 + S2                  <- maximize                   (1)
```
where S1 and S2 are the entropies of the two parts of the system, respectively. So we should just maximize S1 and S2, right? Well, not quite. The problem is that energy conservation only applies to the whole system:
```
E = E1 + E2                  <- conserved                  (2)
```
and we don't know how to divide up the available energy between the two parts. To figure this out, we need to compute the entropy of the two parts as a function of energy allocated to each part:
```
S(E1) = S1(E1) + S2(E-E1)    <- maximize                   (3)
```
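Equation (3) can be run by brute force. In this sketch, both entropy functions are my assumptions, not the essay's: S1 comes from counting the microstates of the 3-particle system, and S2(E2) = a*ln(E2) is a stand-in for the macroscopic system NCSA is still crunching on.

```python
# Equation (3) by brute force, with assumed entropies (not the essay's):
# S1 = ln(degeneracy) for 3 particles with integer levels, where the
# degeneracy is the number of ways to split E1 among 3 particles
# (stars and bars: C(E1+2, 2)); S2 = a*ln(E2) stands in for the big
# system whose wave functions aren't done yet.
from math import comb, log

E_total = 1000                    # total energy, conserved (eq. 2)
a = 100.0                         # assumed "size" of system 2

def S1(E1):
    return log(comb(E1 + 2, 2))   # ln(degeneracy) for 3 particles

def S2(E2):
    return a * log(E2)

# Try every split of the conserved energy and keep the best one.
best_E1 = max(range(1, E_total), key=lambda E1: S1(E1) + S2(E_total - E1))
print(best_E1, E_total - best_E1)
```

Almost all of the energy ends up in the big system; the split is governed by how fast each part's entropy grows with energy, which is exactly where temperature comes from.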
System 1 only has three particles, so we can compute S1(E1) directly, as before. System 2...hmmm...they're still crunching. Well, for now we'll just assume that the entropy of system 2 has some functional dependence on its energy. And as long as we've assumed that, we might as well assume that it has a derivative, too:
```
dS/dE1 = dS1/dE1 - dS2/dE2   <- maximize (zero derivative) (4)
```
Or, as differentials:
```
dS = dS1 - (dS2/dE2) dE1     <- maximize (zero derivative) (5)
```
Just so people don't catch on to how much of a hack this is, we'll make up a wizzy name for dS2/dE2:
```
dS = dS1 - 1/T dE1           <- maximize (zero derivative) (6)
```
The problem with this form is that it's still not too hard to see what's going on. We can crank up the entropy of S1 by pumping energy into it, but that energy has to come from S2. 1/T is the conversion constant that tells how much the entropy of S2 is going to decrease for each unit of energy that we pump out of it. We maximize the entropy of the whole system when we find the value of E1 that zeros the difference on the right of (6).
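You can watch this happen with a finite-difference check (the entropy functions here are assumed stand-ins, not the essay's): at the entropy-maximizing split, the slopes dS1/dE1 and dS2/dE2 agree, and their common value is 1/T for both parts.

```python
# Check of equation (6): at equilibrium the two parts trade energy at
# the same entropy-per-energy rate, dS1/dE1 = dS2/dE2 = 1/T.
# Both entropy functions are assumed stand-ins, not from the essay.
from math import log

def S1(E1):
    return 2.0 * log(E1)          # assumed small-system entropy

def S2(E2):
    return 100.0 * log(E2)        # assumed reservoir entropy

E = 1000.0
E1 = 2.0 * E / 102.0              # solves 2/E1 = 100/(E - E1) exactly
E2 = E - E1

h = 1e-6                          # finite-difference step
dS1_dE1 = (S1(E1 + h) - S1(E1 - h)) / (2 * h)
dS2_dE2 = (S2(E2 + h) - S2(E2 - h)) / (2 * h)
print(dS1_dE1, dS2_dE2)           # the slopes match at equilibrium
print(1.0 / dS1_dE1)              # their common inverse is T
```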

To further obscure the issue, we'll multiply across by T, turn the expression around, call it "free energy" and tell people to minimize it. We don't need those subscripts either: anyone who can't figure out which system the quantities refer to doesn't deserve to understand this anyway.

```
dF = dE - TdS                <- minimize (zero derivative) (7)
```
There. That should keep the riff-raff out.

I sort of understand why the texts use the form in (7). It's because later on they add terms for chemical potential, and changes in volume, and other stuff, and the form in (6) isn't amenable to adding these terms the way (7) is. What I don't understand is how they can make such a simple idea so obscure.
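Here's a sketch of why (6) and (7) pick out the same answer. With the reservoir's temperature T held fixed, minimizing F = E1 - T*S1(E1) over E1 lands on dS1/dE1 = 1/T, the same condition that maximized the total entropy. (S1 here is an assumed toy entropy and T an assumed reservoir temperature, not values from the essay.)

```python
# Sketch: minimizing free energy F = E1 - T*S1(E1) at fixed reservoir
# temperature T reproduces the entropy-maximizing condition
# dS1/dE1 = 1/T.  S1 and T are assumed toy values, not from the essay.
from math import log

def S1(E1):
    return 2.0 * log(E1)          # assumed small-system entropy

T = 9.8039                        # assumed reservoir temperature

def F(E1):
    return E1 - T * S1(E1)        # free energy of system 1 (eq. 7)

# dF/dE1 = 1 - 2T/E1 = 0 gives E1 = 2T; confirm with a fine scan.
E1_star = min((i / 100.0 for i in range(100, 10000)), key=F)
print(E1_star, 2 * T)
```

Note what dropping the subscripts hides: the E and S in (7) belong to the small system, but the T belongs to the reservoir.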

# Notes

"We can crank up the entropy of S1 by pumping energy into it": almost always. There are a few systems where increasing the energy decreases the entropy. When this happens, the temperature is negative.

Steven W. McDougall / resume / swmcd@world.std.com / 1997 April 4