## Friday, February 18, 2011

### Systems in Equilibrium

At thermodynamic equilibrium, a system's properties are, by definition, unchanging in time. Systems in equilibrium are much simpler and easier to understand than systems which are not in equilibrium. Often, when analyzing a thermodynamic process, it can be assumed that each intermediate state in the process is at equilibrium; this assumption considerably simplifies the analysis.
In isolated systems it is consistently observed that, as time goes on, internal rearrangements diminish and stable conditions are approached. Pressures and temperatures tend to equalize, and matter arranges itself into one or a few relatively homogeneous phases. A system in which all processes of change have gone practically to completion is said to be in a state of thermodynamic equilibrium.
In thermodynamic processes, large departures from equilibrium during intermediate steps are associated with increases in entropy and increases in the production of heat rather than useful work. It can be shown that for a process to be reversible, each step in the process must be reversible. For a step in a process to be reversible, the system must be in equilibrium throughout the step. That ideal cannot be accomplished in practice because no step can be taken without perturbing the system from equilibrium, but the ideal can be approached by making changes slowly.

### Isolated System

An isolated system is more restrictive than a closed system in that it does not interact with its surroundings in any way. Mass and energy remain constant within the system, and no energy or mass transfer takes place across the boundary. As time passes in an isolated system, internal differences tend to even out: pressures and temperatures equalize, as do density differences. A system in which all equalizing processes have gone practically to completion is in a state of thermodynamic equilibrium.
Truly isolated physical systems do not exist in reality (except perhaps for the universe as a whole), because, for example, there is always gravity between a system with mass and masses elsewhere. However, real systems may behave nearly as an isolated system for finite (possibly very long) times. The concept of an isolated system can serve as a useful model approximating many real-world situations. It is an acceptable idealization used in constructing mathematical models of certain natural phenomena.
In the attempt to justify the postulate of entropy increase in the second law of thermodynamics, Boltzmann's H-theorem used equations which assumed a system (for example, a gas) was isolated; that is, all the mechanical degrees of freedom could be specified, treating the walls simply as mirror boundary conditions. This inevitably led to Loschmidt's paradox. However, if the stochastic behavior of the molecules in actual walls is considered, along with the randomizing effect of the ambient, background thermal radiation, Boltzmann's assumption of molecular chaos can be justified.
The second law of thermodynamics holds strictly only for isolated systems. It states that the entropy of an isolated system not in equilibrium will tend to increase over time, approaching its maximum value at equilibrium. Overall, in an isolated system, the available energy can never increase, and its complement, entropy, can never decrease. A closed system's entropy, by contrast, can decrease.
It is important to note that isolated systems are not equivalent to closed systems. Closed systems cannot exchange matter with the surroundings, but can exchange energy. Isolated systems can exchange neither matter nor energy with their surroundings, and as such are only theoretical and do not exist in reality (except, possibly, the entire universe).
It is worth noting that 'closed system' is often used in thermodynamics discussions when 'isolated system' would be correct - i.e. there is an assumption that energy does not enter or leave the system.

### Work and Engines

The dominant feature of an industrial society is its ability to utilize sources of energy other than the muscles of men or animals. Most energy supplies are in the form of fuels such as coal or oil, where the energy is stored as internal energy. The process of combustion releases the internal energy and converts it to heat. In this form the energy may be utilized for heating, cooking, etc. But to operate a machine, or to propel a vehicle or a projectile, the heat must be converted to mechanical energy, and one of the problems of the mechanical engineer is to carry out this conversion with the maximum possible efficiency.

The energy transformations in a heat engine are conveniently represented schematically by the flow diagram in Figure 07. The engine itself is represented by the circle. The heat Q2 supplied to the engine is proportional to the cross section of the incoming "pipeline" at the top of the diagram. The cross section of the outgoing pipeline at the bottom is proportional to that portion of the heat, Q1, which is rejected as heat in the exhaust. The branch line to the right represents that portion of the heat supplied, which the engine converts to mechanical work. The thermal efficiency Eff(%) is expressed by the formula:

Eff(%) = W / Q2 = (Q2 - Q1) / Q2 ---------- (6)
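
As a numeric sketch of Eq.(6), the following snippet computes the efficiency for illustrative heat values (the 1000 J / 700 J figures are invented for the example, not taken from the text):

```python
def thermal_efficiency(q_in, q_out):
    """Eq.(6): Eff = W / Q2 = (Q2 - Q1) / Q2."""
    work = q_in - q_out   # W = Q2 - Q1, the portion converted to work
    return work / q_in

# Illustrative numbers: 1000 J supplied, 700 J rejected in the exhaust
eff = thermal_efficiency(1000.0, 700.0)
print(f"{eff:.0%}")  # 30%
```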

The most efficient heat engine cycle is the Carnot cycle, consisting of two isothermal processes and two adiabatic processes (see Figure 08). The Carnot cycle can be thought of as the most efficient heat engine cycle allowed by physical laws. While the second law of thermodynamics states that not all the supplied heat in a heat engine can be used to do work, the Carnot efficiency sets the limiting value on the fraction of the heat which can be so used. In order to approach the Carnot efficiency, the processes involved in the heat engine cycle must be reversible, i.e., carried out so slowly that the system is never far from equilibrium.
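
The text does not write out the Carnot limit explicitly, but for a cycle operating between a cold reservoir at T1 and a hot reservoir at T2 (both in kelvin) it is the standard expression 1 - T1/T2; a minimal sketch with illustrative temperatures:

```python
def carnot_efficiency(t_cold, t_hot):
    """Carnot limit 1 - T1/T2; temperatures in kelvin."""
    return 1.0 - t_cold / t_hot

# e.g., a cycle between a 600 K boiler and 300 K surroundings
print(carnot_efficiency(300.0, 600.0))  # 0.5
```

No real engine reaches this bound; it is the ceiling that Eq.(6) approaches as every step becomes reversible.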

### States

A key concept in thermodynamics is the state of a system. A state consists of all the information needed to completely describe a system at an instant of time. When a system is at equilibrium under a given set of conditions, it is said to be in a definite state. For a given thermodynamic state, many of the system's properties (such as T, p, and ρ) have a specific value corresponding to that state. The values of these properties are a function of the state of the system. The number of properties that must be specified to describe the state of a given system (the number of degrees of freedom) is given by the Gibbs phase rule:

f = c - p + 2 ---------- (5a)

where f is the number of degrees of freedom, c is the number of components in the system, and p is the number of phases in the system. Components are the different kinds of chemical species in the system. A phase is a portion of the system with uniform chemical composition and physical properties.

For example, the phase rule indicates that a single component system (c = 1) with only one phase (p = 1), such as liquid water, has 2 degrees of freedom (f = 1 - 1 + 2 = 2). For this case the degrees of freedom correspond to temperature and pressure, indicating that the system can exist in equilibrium for any arbitrary combination of temperature and pressure. However, if we allow the formation of a gas phase (then p = 2), there is only 1 degree of freedom. This means that at a given temperature, water in the gas phase will evaporate or condense until the corresponding equilibrium water vapor pressure is reached. It is no longer possible to arbitrarily fix both the temperature and the pressure, since the system will tend to move toward the equilibrium vapor pressure. For a single component with three phases (p = 3 -- gas, liquid, and solid) there are no degrees of freedom. Such a system is only possible at the temperature and pressure corresponding to the Triple point.
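The three water cases above can be checked mechanically against Eq.(5a); a minimal sketch:

```python
def degrees_of_freedom(components, phases):
    """Gibbs phase rule, Eq.(5a): f = c - p + 2."""
    return components - phases + 2

# the single-component (water) cases discussed in the text
assert degrees_of_freedom(1, 1) == 2  # liquid only: T and p both free
assert degrees_of_freedom(1, 2) == 1  # liquid + vapor: one variable free
assert degrees_of_freedom(1, 3) == 0  # triple point: fully determined
```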

One of the main goals of thermodynamics is to understand these relationships between the various state properties of a system. Equations of state are examples of such relationships; a familiar one is the ideal gas law:

pV = nRT ----------
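
As a quick numeric check of pV = nRT in SI units (the one-mole-at-standard-conditions numbers below are used only as an illustration):

```python
R = 8.314  # gas constant, J/(K·mol)

def pressure(n_moles, temperature, volume):
    """Ideal gas law rearranged: p = nRT / V (SI units)."""
    return n_moles * R * temperature / volume

# one mole at 273.15 K in 22.4 L comes out near one atmosphere
p = pressure(1.0, 273.15, 0.0224)
print(f"{p:.3e}")  # about 1.0e5 Pa, i.e. roughly 1 atm
```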

### Entropy

A general definition of entropy was formulated by Boltzmann in 1872. It is expressed in terms of the "coarse-graining volume" in the phase space, which amalgamates the positions and momenta of all particles in a system into one point (Figure 01c). The relentless increase toward higher entropy until it reaches its maximum (i.e., the state of thermal equilibrium) is related to the fact that the evolution of the phase point favors the larger "coarse-graining volumes".

• Configuration Space - A space consisting of all the 3-dimensional spatial coordinates of N particles (N = 4 in Figure 01c, represented by the blue arrows), with all 3N coordinate axes orthogonal (perpendicular) to each other. The horizontal axis for the phase space in Figure 01c is a much simplified visual aid for the 3N-dimensional configuration space. At 300 K and standard atmospheric pressure of 101 kPa, the number of gas molecules N in a cube 10 cm on a side would be about 3×10^22.
• Momentum Space - In addition to its position, each particle needs at least three more numbers to specify its state, namely the three components of its momentum (red arrow in Figure 01c). Similar to the configuration space, the momentum space is made up of 3N orthogonal axes representing the momenta of the N particles. At 300 K and standard atmospheric pressure of 101 kPa, and taking the gas particles to be hydrogen atoms with mass m = 1.67×10^-24 g, the root-mean-square velocity is vrms = (3kT/m)^(1/2) ~ 2.7×10^5 cm/sec, and the corresponding momentum is p = m·vrms = (2mE)^(1/2) ~ 4.5×10^-19 g-cm/sec (with E ~ 6×10^-14 erg ~ 4×10^-2 eV). The size of the momentum space for each particle can be estimated from a range below and above the rms value such that about 0.1% of the probability toward each tail end is excluded.
• Phase Space - The orthogonal combination of the configuration and momentum spaces, having altogether 6N dimensions as shown in Figure 01c. The dimensions are often referred to as the degrees of freedom. The phase space volume W is:

W = {[π^(3N/2) (2mE)^(3N/2) V^N] / [N! Γ(3N/2)]} (ΔE/E), where

Δp = (2mE)^(1/2) (ΔE/2E) is the range of momentum,
2π^(3N/2) (2mE)^((3N-1)/2) comes from integrating the spherical surface area up to the energy E = p^2/2m,
V is the spatial volume containing the particles,
N! and Γ(3N/2) remove the degeneracy related to the permutation symmetry of identical particles; Γ is the Gamma function, with Γ(x) equal to (x - 1)! when the argument x is an integer.
• Partition Function - The number of microscopic states within the energy shell ΔE of the phase space. Planck's constant h = 6.625×10^-27 erg-sec, from the uncertainty relation Δp Δx ~ h in quantum theory, is conveniently taken as the basic unit (minimum size) of a microscopic state. Thus the partition function Z is just:

Z = W/h^(3N) = {[π^(1/2) (2mE)^(1/2) V^(1/3) / h]^(3N) / [N! Γ(3N/2)]} (ΔE/E) ~ {(10^8)^(3N) / [N! Γ(3N/2)]} (ΔE/E)

where the numerical value 10^8 is computed from the previous assumptions for the size of the container and Δp. It shows that the number of available microscopic states is enormous, of the order of 10^24 even for a system of just one particle (N = 1).
• Entropy - Boltzmann's definition of entropy S is:

S = k ln(Z)

where k = 1.38×10^-16 erg/K is the Boltzmann constant. It is immediately clear that entropy increases upon adding particles N, energy E, or volume V, as shown in Figure 01b (the internal degrees of freedom are not considered here). Since Z depends on these parameters raised to the power 3N, it varies by a huge amount with a relatively small change in any of them.
• Coarse-graining Region - Each such sub-volume w in the phase space is characterized by some macroscopic properties such as temperature, pressure, density, color, chemical composition, etc., together with a certain number of microscopic states. The number of neighbors of a w sub-volume goes up drastically with increasing dimension - typically 6 in the 2-dimensional case, 14 in 3 dimensions, and so on. As mentioned above, the various w sub-volumes tend to differ in size by absolutely enormous factors.
• Second Law of Thermodynamics - The evolutionary path of a phase point in the phase space is indicated by a curve as shown in Figure 01d. Although time, and hence rate of change, is absent in the picture, the direction of evolution is represented by an arrow. The path is determined by physical law, such as the N-body Newtonian equations of motion, and it has a higher probability of moving into another w sub-volume of larger size and hence higher entropy - the basic conception of the Second Law of Thermodynamics. The appearance of randomness is a manifestation of the fact that there are so many different microscopic states available for the same macroscopic state. The system reaches thermal equilibrium when the phase point enters the largest sub-volume and keeps wandering around inside it. Note that there is a certain probability of going into a smaller w, but that probability goes down rapidly with decreasing sub-volume size.
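
The enormous size of Z, and the entropy S = k ln(Z) built from it, can be sketched numerically. The snippet below is a rough illustration only: it reuses the text's estimate of about 10^8 states per degree of freedom, ignores the ΔE/E factor, and works in log space via math.lgamma so that N! and the Gamma function never overflow.

```python
import math

k = 1.38e-16  # Boltzmann constant, erg/K

def log_partition(n_particles, per_dof=1e8):
    """ln Z for Z ~ (10^8)^(3N) / (N! * Gamma(3N/2)); Delta-E/E factor ignored."""
    n = n_particles
    return (3 * n * math.log(per_dof)
            - math.lgamma(n + 1)      # ln N!
            - math.lgamma(1.5 * n))   # ln Gamma(3N/2)

def entropy(n_particles):
    """Boltzmann's definition S = k ln(Z)."""
    return k * log_partition(n_particles)

# For N = 1 the state count is already ~1e24, matching the text's estimate
print(math.exp(log_partition(1)))  # ~1e24
print(entropy(1))                  # ~7.6e-15 erg/K
```

Adding particles raises ln Z roughly in proportion to 3N, which is why entropy grows so steeply with system size.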

### The Four Laws of Thermodynamics

Zeroth law - It is the definition of thermodynamic equilibrium. When two systems are put in contact with each other, energy and/or matter will be exchanged between them unless they are in thermodynamic equilibrium. In other words, two systems are in thermodynamic equilibrium with each other if they stay the same after being put in contact.

The original zeroth law is stated as: if A and B are in thermodynamic equilibrium, and B and C are in thermodynamic equilibrium, then A and C are also in thermodynamic equilibrium.

Thermodynamic equilibrium includes thermal equilibrium (associated with heat exchange and parameterized by temperature), mechanical equilibrium (associated with work exchange and parameterized by generalized forces such as pressure), and chemical equilibrium (associated with matter exchange and parameterized by chemical potential).

1st Law - This is the law of energy conservation. It is stated alternatively in many forms as follows:

The work exchanged in an adiabatic process depends only on the initial and the final state and not on the details of the process.
or
The heat flowing into a system equals the increase in internal energy of the system minus the work done by the system.
or
Energy cannot be created, or destroyed, only modified in form.

The second statement can be expressed mathematically in the form of Eq.(1) with negative W representing work done by the system. The adiabatic process in the first statement refers to a system with no heat transfer, i.e., Q = 0.
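
The sign convention of Eq.(1), with dQ and dW both counted positive when energy flows from the surroundings into the system, can be made concrete with a toy calculation (the joule values are illustrative):

```python
def delta_u(dq, dw):
    """Eq.(1): dU = dQ + dW, with inflows positive and outflows negative."""
    return dq + dw

# 50 J of heat flows in while the system does 20 J of work on the
# surroundings (so dw = -20 J): the internal energy rises by 30 J
print(delta_u(50.0, -20.0))  # 30.0

# adiabatic case (dq = 0): dU is just the work exchanged
print(delta_u(0.0, -20.0))   # -20.0
```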

2nd Law - It can be stated in many ways, the most popular of which is:

It is impossible to devise a process whose sole effect is the extraction of a positive amount of heat from a reservoir and the production of a positive amount of work.
or
A system operating in a cycle cannot have as its sole effect a heat flow from a colder body to a hotter body.

The first statement excludes unrealistic scenarios such as driving a steamship across the ocean by extracting heat from the water, or running a power plant by extracting heat from the surrounding air. The second statement expresses the impossibility of running a refrigerator without work. Another form of the 2nd law states:

The entropy of an isolated system tends to remain constant or to increase. It is in this form that the arrow of time is defined. Figure 01b shows the various ways entropy can be added to a system.

3rd Law - This law explains why it is so hard to cool something to absolute zero:

All processes cease as temperature approaches zero.

This statement is expressed mathematically by Eq.(4), which shows that as the temperature T approaches zero, the amount of heat that can be extracted from the system also diminishes to zero. Thus even laser cooling would not be able to attain a temperature of absolute zero.

### Terminology

Thermodynamics is the branch of science that deals with the conversion of various forms of energy and its effect on the state of a system. It was developed in the 19th century, when it was of great practical importance in the era of steam engines. Since the microscopic structure of matter was not known at that time, it could only prescribe a macroscopic view. It remains valid and useful in the 21st century, but we now understand that such a macroscopic description is just the averaged behaviour of a large collection of microscopic constituents.

It is essential to define the terminology before learning more about the subject:
• Heat - Heat (Q) is a form of energy transfer associated with random motion of the microscopic particles.
• Work - Work (W) is the organized form of energy transfer associated with the motion of microscopic particles as a whole (in a certain direction), e.g., the expanding gas that propels a piston.
• Internal Energy - The internal energy (U) of a system is the total energy due to the motion of the molecules, plus the rotation and vibration of atoms within molecules. Heat and work are two ways of adding energy to or subtracting energy from a system. They represent energy in transit and are the terms used while energy is moving. Once the transfer of energy is over, the system is said to have undergone a change in internal energy dU. Thus, in terms of the amount of heat dQ and work dW:

dU = dQ + dW ---------- (1)

where dQ and dW are positive for energy transfer from the surroundings to the system, and negative for energy transfer from the system to the surroundings. If the process of energy transfer is broken down into finer details, e.g., change in disorder (dS), volume expansion/contraction (dV), and adding a new species of particles (dN), then the change in internal energy can be expressed as:

dU = T dS - p dV + μ dN ---------- (2)

where μ is the chemical potential.
• Free Energy - The amount of available energy that is capable of performing work.
• Temperature - Temperature (T) is related to the amount of internal energy in a system. As more heat or work is added, the temperature rises; similarly, a decrease in temperature corresponds to heat lost from, or work performed by, the system. Temperature is an intensive property of a system, meaning that it does not depend on the system size or the amount of material in the system. Other intensive properties include pressure and density. For a monatomic ideal gas, the internal energy (U) is related to the temperature (T) by the formula:

U = (3nR/2) T ---------- (3)

where R = 8.314×10^7 erg/(K·mole) is called the gas constant.
• Pressure - Pressure (p) is the force per unit area acting normal to the surface on which it is exerted. Microscopically, it is the transfer of momentum from the particles striking the surface that produces the force.
• Volume - Volume (V) refers to the three-dimensional space occupied by the system.
• Particle Number - Particle number (N) is the number of particles of a particular constituent in a system.
• Avogadro's Number - Avogadro's number (N0) is 6.023×10^23. One mole is defined as the amount of substance containing that many particles, such as atoms, molecules, or ions; e.g., it is the number of carbon-12 atoms in 12 grams of carbon-12, or the number of protons in 1 gram of hydrogen.
• Number of Moles - Number of moles (n) is the number of particles in the unit of a mole, i.e., n = N / N0.
• Density - Density (ρ) is defined as mass per unit volume.
• Entropy - Entropy (S) is a measure of disorder in the system. Mathematically, the change of entropy dS is related to the amount of heat transfer dQ by the formula:

dS = dQ / T    or    dQ = T dS ---------- (4)
• Chemical Potential - The chemical potential (μ) of a thermodynamic system is the change in the energy of the system when a different kind of constituent particle is introduced, with the entropy and volume held fixed.
Some thermodynamic definitions here, such as temperature, pressure, and density, are specified under an equilibrium condition. The changes in these variables are idealized as a succession of equilibrium states. Many important biochemical and physical processes, however, take place far from equilibrium.
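
As a worked example of Eq.(3) in the CGS units used above (the equation applies to a monatomic ideal gas; the one-mole figure below is illustrative):

```python
R = 8.314e7  # gas constant, erg/(K·mol), as quoted in the text

def internal_energy(n_moles, temperature):
    """Eq.(3): U = (3nR/2) T for a monatomic ideal gas."""
    return 1.5 * n_moles * R * temperature

u = internal_energy(1.0, 300.0)
print(f"{u:.3e} erg")  # ~3.7e10 erg for one mole at 300 K
```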

### Closed system

In a closed system, no mass may be transferred in or out of the system boundaries. The system will always contain the same amount of matter, but heat and work can be exchanged across the boundary of the system. Whether a system can exchange heat, work, or both is dependent on the property of its boundary.
• Adiabatic boundary – not allowing any heat exchange
• Rigid boundary – not allowing exchange of work
One example is a fluid being compressed by a piston in a cylinder. Another example of a closed system is a bomb calorimeter, a type of constant-volume calorimeter used to measure the heat of combustion of a particular reaction. Electrical energy travels across the boundary to produce a spark between the electrodes and initiate combustion. Heat transfer occurs across the boundary after combustion, but no mass transfer takes place either way.
Beginning with the first law of thermodynamics for an open system, which is expressed as:
$\mathrm{d}U=Q-W+m_{i}(h+\frac{1}{2}v^2+gz)_{i}-m_{e}(h+\frac{1}{2}v^2+gz)_{e}$
where U is internal energy, Q is heat transfer, and W is work. Since no mass is transferred in or out of the system, both mass-flow terms drop to zero, and the first law of thermodynamics for a closed system is recovered. The first law of thermodynamics for a closed system states that the change in internal energy of the system equals the difference between the heat added to the system and the work done by the system. The first law for closed systems is stated by:
dU = δQ − δW
where U is the internal energy of the system, Q is the heat added to or extracted from the system, and W is the work done by or to the system.
Substituting the amount of work needed to accomplish a reversible process, which is stated by:
δW = PdV
where P is the pressure and V is the volume, and the heat exchanged in a reversible process, given by the second law of thermodynamics (the universal principle of entropy) as:
δQ = TdS
where T is the absolute temperature and S is the entropy of the system, yields the fundamental thermodynamic relationship used to compute changes in internal energy:
dU = TdS − PdV
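
A small numeric sketch of this relationship, treating a finite but small reversible step with illustrative values:

```python
def energy_change(t, ds, p, dv):
    """Fundamental relation for a small reversible step: dU = T dS - P dV."""
    return t * ds - p * dv

# at 300 K, entropy rises by 0.5 J/K (T dS = 150 J) while the gas expands
# by 1 L against 100 kPa (P dV = 100 J): net change dU = 50 J
print(energy_change(300.0, 0.5, 1.0e5, 1.0e-3))  # 50.0
```
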
For a simple system, with only one type of particle (atom or molecule), a closed system amounts to a constant number of particles. However, for systems which are undergoing a chemical reaction, there may be all sorts of molecules being generated and destroyed by the reaction process. In this case, the fact that the system is closed is expressed by stating that the total number of each elemental atom is conserved, no matter what kind of molecule it may be a part of. Mathematically:
$\sum_{j=1}^m a_{ij}N_j=b_i^0$
where Nj is the number of j-type molecules, aij is the number of atoms of element i in molecule j and bi0 is the total number of atoms of element i in the system, which remains constant, since the system is closed. There will be one such equation for each different element in the system.
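
The elemental balance can be illustrated with a hypothetical bookkeeping for methane combustion, CH4 + 2O2 → CO2 + 2H2O (the a_ij table below is constructed for this example, not taken from the text):

```python
# a_ij: number of atoms of element i in molecule j
# columns (j): CH4, O2, CO2, H2O
A = {
    "C": [1, 0, 1, 0],
    "H": [4, 0, 0, 2],
    "O": [0, 2, 2, 1],
}

def element_totals(molecule_counts):
    """b_i = sum_j a_ij * N_j, one total per element."""
    return {elem: sum(a * n for a, n in zip(row, molecule_counts))
            for elem, row in A.items()}

before = element_totals([1, 2, 0, 0])  # CH4 + 2 O2
after = element_totals([0, 0, 1, 2])   # CO2 + 2 H2O
assert before == after  # closed system: every b_i unchanged by the reaction
print(before)  # {'C': 1, 'H': 4, 'O': 4}
```

Whatever the reaction does to the molecule counts Nj, each elemental total b_i stays fixed, which is exactly what the constraint equation expresses.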