On 2/27/2012 10:59 AM, Evgenii Rudnyi wrote:
On 27.02.2012 00:13 meekerdb said the following:
On 2/26/2012 5:58 AM, Evgenii Rudnyi wrote:
I have written a summary for the discussion in the subject:

http://blog.rudnyi.ru/2012/02/entropy-and-information.html

No doubt, this is my personal viewpoint. If you see that I have
missed something, please let me know.

I think you are ignoring the conceptual unification provided by
information theory and statistical mechanics. JANAF tables only
consider the thermodynamic entropy, which is a special case in which
the macroscopic variables are temperature and pressure. You can't
look up the entropy of magnetization in the JANAF tables.

I do not get your point. The JANAF Tables were created to solve a particular problem. If you need dependence on concentration, surface effects, or magnetization effects, you have to extend the JANAF Tables, and such extensions have likewise been made to solve particular problems. Experimental thermodynamics is not limited to the JANAF Tables; for example, the databases in Thermocalc already include the dependence on concentration.

And you don't get my point. Of course all forms of entropy can be measured and tabulated, but the information theory viewpoint shows how they are unified by the same concept.
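To make "the same concept" concrete, here is a minimal sketch in Python (my own illustrative example; the two-level system and the numerical values are arbitrary choices, not anything from the tables): the thermodynamic entropy of a spin in a field is just k_B times the Shannon entropy of its Boltzmann distribution.

import math

k_B = 1.380649e-23  # Boltzmann constant, J/K

def shannon_entropy_nats(probs):
    # H = -sum p_i ln p_i, the dimensionless information-theory entropy
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def two_level_probs(delta_E, T):
    # Boltzmann distribution of a two-level system with energy gap delta_E
    b = math.exp(-delta_E / (k_B * T))
    return [1.0 / (1.0 + b), b / (1.0 + b)]

p = two_level_probs(delta_E=1e-23, T=1.0)  # gap and T are arbitrary choices
H = shannon_entropy_nats(p)  # nats
S = k_B * H                  # thermodynamic entropy, J/K
print(f"H = {H:.4f} nats, S = {S:.3e} J/K")

The same rescaling works for any equilibrium distribution; that is the unification.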


Yet
magnetization of small domains is how information is stored on hard
disks; cf. David MacKay's book "Information Theory, Inference, and
Learning Algorithms", chapter 31.

Do you mean that when we consider magnetization, the entropy becomes subjective and context-dependent, and is finally filled with information?

It is context dependent in that we consider the magnetization. What does the JANAF table assume about the magnetization of the materials it tabulates?


Did you actually read E. T. Jaynes's 1957 paper in which he introduced
the idea of basing entropy in statistical mechanics (which you also
seem to dislike) on information? He wrote "The mere fact that the
same mathematical expression -SUM[p_i log(p_i)] occurs in both
statistical mechanics and in information theory does not in itself
establish a connection between these fields. This can be done only by
finding new viewpoints from which the thermodynamic entropy and
information-theory entropy appear as the same /concept/." Then he

I missed this quote; I will have to add it. In general, Jaynes's first paper is reasonable in a way. I wanted to understand it better, as I like maximum likelihood and have used it a lot in my own research. However, when I read the following two quotes in Jaynes's second paper, I gave up.

“With such an interpretation the expression “irreversible process” represents a semantic confusion; it is not the physical process that is irreversible, but rather our ability to follow it. The second law of thermodynamics then becomes merely the statement that although our information as to the state of a system may be lost in a variety of ways, the only way in which it can be gained is by carrying out further measurements.”

“It is important to realize that the tendency of entropy to increase is not a consequence of the laws of physics as such, … . An entropy increase may occur unavoidably, due to our incomplete knowledge of the forces acting on a system, or it may be an entirely voluntary act on our part.”

This I do not understand. Do you agree with these two quotes? If yes, could you please explain what he means?

Yes. The physical processes are not irreversible; the fundamental physical laws are time reversible. The free expansion of a gas is *statistically* irreversible because we cannot follow the individual molecules and their correlations, so when we consider only the macroscopic variables of pressure, density, temperature,... it seems irreversible. In very simple systems we might be able to actually follow the microscopic evolution of the state, but we can choose to ignore it and calculate the entropy increase as though this information were lost.

Whether and how the information is lost is the crux of the measurement problem in QM. Almost everyone on this list assumes Everett's many-worlds interpretation, in which the information is not lost but is divided up among different continuations of the observer.
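To make the statistical point concrete, here is a toy sketch (my own illustration, not from the thread; the Kac ring is a standard textbook model, and the ring size and 10% marking density are arbitrary choices). The microdynamics below is exactly reversible, yet the entropy computed from the macroscopic black-ball fraction increases.

import math, random

N = 10000                   # sites on the ring
random.seed(1)
marked = [random.random() < 0.1 for _ in range(N)]  # 10% of edges marked
balls = [0] * N             # all white: a macroscopically ordered start

def step(balls):
    # every ball moves one site clockwise and flips color at a marked edge;
    # this map is exactly invertible (move back, apply the same flips),
    # and after 2N steps the ring returns to its initial state
    return [balls[i - 1] ^ (1 if marked[i - 1] else 0) for i in range(N)]

def coarse_entropy(balls):
    # entropy of the macro-description that only tracks the black fraction
    f = sum(balls) / N
    return 0.0 if f in (0.0, 1.0) else -(f * math.log(f)
                                         + (1 - f) * math.log(1 - f))

for t in range(0, 51, 10):
    print(f"t = {t:2d}  coarse-grained entropy = "
          f"{coarse_entropy(balls):.4f} nats")
    for _ in range(10):
        balls = step(balls)

Nothing about the dynamics is lossy; the entropy increase lives entirely in the coarse-grained description.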


goes on to show how the principle of maximum entropy can be used to
derive statistical mechanics. That it *can* be done in some other
way, and was historically as you assert, is not to the point. As an
example of how the information view of statistical mechanics extends
its application he calculates how much the spins of protons in water
would be polarized by rotating the water at 36,000 rpm. It seems you
are merely objecting to "new viewpoints" on the grounds that you can
see all that you /want/ to see from the old viewpoint.
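As a rough check of the size of that effect, here is a back-of-the-envelope sketch (my own numbers and assumptions, not Jaynes's actual calculation): I assume the equilibrium spin-1/2 polarization in the rotating frame takes the Barnett-effect form tanh(hbar*omega / (2 k_B T)), and take room temperature.

import math

hbar = 1.054571817e-34  # J*s
k_B = 1.380649e-23      # J/K
T = 300.0               # K, assumed room temperature

omega = 2 * math.pi * 36000 / 60             # 36,000 rpm in rad/s
p = math.tanh(hbar * omega / (2 * k_B * T))  # assumed spin-1/2 polarization
print(f"omega = {omega:.0f} rad/s, polarization ~ {p:.1e}")  # ~5e-11

A tiny number, but a definite prediction that the information viewpoint delivers without any new mechanics.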

Your quotation of Arnheim, from his book on the theory of entropy in
art, just shows his confusion. The Shannon information, which is
greatest when the system is most disordered in some sense, does not
imply that the most disordered message contains the greatest
information. The Shannon information is the information we receive
when the *potential messages* are most disordered. It's a property of
an ensemble or a channel, not of a particular message.

It is not Arnheim's confusion. His book is quite good.

Then why doesn't he know that Shannon's information does not refer to 
particular messages?
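To make the distinction concrete, a toy sketch (the three-symbol source is an invented example): the entropy H belongs to the ensemble, while each particular message only has a surprisal relative to that ensemble.

import math

ensemble = {'a': 0.5, 'b': 0.25, 'c': 0.25}  # invented source probabilities

def entropy_bits(p):
    # Shannon's H: the expected surprisal over the whole ensemble
    return -sum(q * math.log2(q) for q in p.values())

def surprisal_bits(msg, p):
    # the information content of one particular message
    return -sum(math.log2(p[ch]) for ch in msg)

print(f"H(source)         = {entropy_bits(ensemble):.2f} bits/symbol")
print(f"surprisal('aaaa') = {surprisal_bits('aaaa', ensemble):.2f} bits")
print(f"surprisal('bcbc') = {surprisal_bits('bcbc', ensemble):.2f} bits")

Different messages carry different surprisal; H is the average over the ensemble, which is why it attaches to the channel rather than to any single message.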

To this end, let me quote your second sentence in your message.

> I think you are ignoring the conceptual unification provided by
> information theory and statistical mechanics.

You see, I would love to understand the conceptual unification.

I find that I doubt this. If you really would love to understand it, there is lots of material online as well as good books that Russell and I have suggested.

To this end, I have created many simple problems to understand this better. Unfortunately, you do not want to discuss them; you just say general words but do not want to apply them to my simple practical problems. Hence it is hard for me to understand you.

Speaking of confusion, just one example: you say that the higher the temperature, the more information the system has. Yet engineers seem unwilling to employ this knowledge in practice. Why is that? Why do engineers seem unimpressed by the conceptual unification?

I'm not an expert on this subject and it has been forty years since I studied statistical mechanics, which is why I prefer to refer you to experts. Engineers are generally not impressed by conceptual unification; they are interested in what can be most easily and reliably applied. RF engineers generally don't care that EM waves are really photons. Structural engineers don't care about interatomic forces; they just look up yield strengths in tables. Engineers are not at all concerned with 'in principle' processes that can only be realized in carefully contrived laboratory experiments. But finding unifying principles is the job of physicists.

Brent
