Monthly Archives: December 2013

Are fields real?

And here we go again, focusing on what is real and what is not. I have said several times that many of the constructions we use today are in fact only objects “of our mind” with no real presence in nature at all, other than the one needed for our brains to make sense of reality… One of these non-real objects is the quantum field… Yes, there are no such things as quantum fields. Gauge theories say this pretty clearly, and we have known it since the old works of Batalin and Vilkovisky… What is out there are particles. There is no such thing as a “universal Higgs field”… there is, however, a Higgs boson, which is a particle that interacts with other particles. Fields are bookkeeping devices that tell us how particles move, but they are not real. We can construct a theory that simply tells you that when there is an electric charge somewhere in space, the electrically charged particles, or any other particles sensitive to electromagnetic interactions, will move in some specific ways. You can write a theory that contains no fields at all yet is perfectly correct from any mathematical or physical point of view. The same is true for all the other “fields”…

Particles are physical objects. They are not “classical” particles, and they obey quantum statistics, that much is clear… and sometimes the probabilities associated with them (cross sections, decay rates, etc.) are accessible only through this or that field representation… but that is only because we are used to thinking in terms of fields… Fields are not “physical” in any sense… Nor is space-time “physical”… it is another way of thinking, useful in situations where a metric makes “life” easier for us… However, one can describe all aspects of quantum mechanics without using a metric at all… it is indeed harder and less intuitive, but definitely possible. If we have to think in terms of quantum gravity, then it may very well be that we have to give up notions of space-time that are too closely tied to ideas like metric or distance… so space-time is also “not real”… physicality is a very strong restriction…
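
The claim that one can do the bookkeeping with particles alone can at least be illustrated classically. Here is a toy sketch of mine (not the post’s construction, and obviously not the quantum story): point charges accelerating under pairwise Coulomb interactions, with no field array, no grid, no “field” object anywhere in the code.

```python
import numpy as np

# Toy action-at-a-distance sketch: accelerations of point charges computed
# directly from pairwise Coulomb interactions -- no field object anywhere.
# (Classical, non-relativistic, purely illustrative.)

K = 8.9875517873681764e9  # Coulomb constant in SI units (N m^2 / C^2)

def accelerations(positions, charges, masses):
    """Pairwise Coulomb accelerations; positions has shape (N, 3)."""
    n = len(charges)
    acc = np.zeros_like(positions)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = positions[i] - positions[j]   # vector from j to i
            dist = np.linalg.norm(r)
            # Coulomb acceleration of particle i due to particle j
            acc[i] += K * charges[i] * charges[j] * r / (masses[i] * dist**3)
    return acc

# two opposite test charges, 1 m apart
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q = np.array([1e-6, -1e-6])   # coulombs
m = np.array([1e-3, 1e-3])    # kilograms
print(accelerations(pos, q, m))  # they accelerate toward each other
```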


Complete descriptions

We often talk about “complete descriptions”… but what does that mean? It mainly means that we get all the information we can get within a given framework. The “wholeness” of the information, however, depends on the framework: not all questions are defined in all situations. Every correctly defined framework is complete in this sense. It gives full information about the system, but only in a context that does not admit certain questions as “meaningful”. The easiest example is of course momentum and position. While the full information about the system exists in either representation (momentum space or position space), some questions are meaningless in a given representation, namely those asking about information defined in the other one. I know, this is a relatively simple observation everyone makes in the first years of quantum mechanics… You need to change from one framework to the other in order to pose the other question correctly and get meaningful answers. Schrodinger’s cat is just a reiteration of the same problem, with the connection being made between the “small” and “large conglomerates of particles”… If only physicists would learn mathematical concepts better: closed set, open set, clopen set, completeness, Hausdorff spaces, compact sets, etc.
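
The momentum/position example can be made concrete in a few lines (a sketch of mine; the grid and the Gaussian wavepacket are arbitrary choices): the position-space and momentum-space wavefunctions are Fourier transforms of each other, and each carries the complete information about the state.

```python
import numpy as np

# Same state, two complete representations. A Gaussian wavepacket is stored
# in the position basis; the momentum-space wavefunction is just its Fourier
# transform. Neither array "lacks" information -- the norms agree exactly.

N, L = 1024, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]

psi_x = np.exp(-x**2 / 2) * np.exp(1j * 3.0 * x)   # packet with mean momentum ~3
psi_x /= np.sqrt(np.sum(np.abs(psi_x)**2) * dx)    # normalize

k = 2 * np.pi * np.fft.fftfreq(N, d=dx)            # momentum grid
psi_k = np.fft.fft(psi_x) * dx / np.sqrt(2 * np.pi)  # continuum-normalized FT

print(np.sum(np.abs(psi_x)**2) * dx)               # 1.0 : full info in x-basis
print(np.sum(np.abs(psi_k)**2) * (k[1] - k[0]))    # 1.0 : full info in k-basis
```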

What are we “made of”?

Dear friends of savagephysics… today I will discuss a very interesting aspect of reality and some Western myths… First, the part about reality: Greek philosophers started speculating about it, but the idea was carried on into modernity and accepted as truth by the scientific community and by the “materialist dogmatics” of the 18th, 19th and 20th centuries. Today almost any scientific blog or popularization site will tell you the same thing: we are made out of atoms, with the poetic twist: “we are made out of atoms; atoms try to know themselves through us”… Now, I really don’t care about poetic license, so please feel free to say that stars and galaxies and whatever you feel an attraction for are trying to know themselves through us… you may even feel somehow important through this, but please also let me tell you where this way of thinking fails.

When we started discussing probabilities in nature, in statistical mechanics and quantum mechanics, we started observing all sorts of strange things. These were called “paradoxes” but were subsequently explained pretty well by scientists. One of these “paradoxes” has its origin in an unfortunate use of words… well, most of the quantum “paradoxes” do… Let me be more specific: I am talking about “Schrodinger’s cat”… a sort of wannabe Gedankenexperiment invented by Schrodinger that became popular mainly because it has the word “cat” in it… (so, if you want to become popular, just use cats in your experiments). The experiment is nevertheless fundamental for the problems I stated in the first paragraph, so let me explain how it goes: take an ordinary, whole cat and put it in a box. Take some poison that, once released, kills the cat, and arrange things so that it is released when a Geiger counter detects the first radioactive decay of an atom. Prepare the atomic system in a superposed (quantum) state of decayed and not decayed, and start measuring. Now the situation becomes interesting, because there are several wrong ways we can use our language to describe this experiment. Let’s start by asking: where is the statistics here? Well, that is a good question: essentially nowhere… the first radioactive decay triggers the death of the cat, and we described the cat as a single object, so we are not even asking a proper quantum-mechanical question in the first place. But let us continue, assuming we ask about probabilities calculated over some ensemble of cats and decaying atoms. Then the answer is clear and plain: half of the cats will be dead after an interval T, the half-life, calculated from the properties of the atoms.
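
That ensemble statement is easy to check numerically (a sketch of mine; the half-life and sample size are arbitrary):

```python
import numpy as np

# Quick Monte Carlo check: sample many decay times from the exponential
# law and count how many "cats" are dead after one half-life T.

rng = np.random.default_rng(0)
T = 1.0                                  # half-life (arbitrary units)
tau = T / np.log(2)                      # mean lifetime
decay_times = rng.exponential(tau, size=100_000)

dead_fraction = np.mean(decay_times < T)
print(dead_fraction)                     # ~0.5, as stated above
```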

The paradox appears when you ask what the state of the cat is before the measurement of the state of the atom. Will it be an “undead” cat? Will we have zombie cats moving around in unobserved boxes? Of course NOT! If you look more carefully you will see that the question above is not well posed. While you can attach a wavefunction to the statistical description of an atom and get a probability, you cannot use the same framework for the description of the cat. The two objects are incompatible. There is no well-defined “cat” in the basis of radioactive atoms in a superposed state. The cat is a macroscopic object… entanglement defines how a “whole” and “its parts” differ from each other, so you cannot in principle use the same language on both sides of your experiment and expect meaningful answers. While you certainly could define a cat using only atomic wavefunctions, you would have an enormously hard time finding out what that wavefunction is and what kinds of entanglements and correlations are in it, and the end result would most likely retain nothing of what we interpret as a “cat”…

So, we cannot use atomic wavefunctions and cat wavefunctions simultaneously, because neither gives a complete description of the combined situation.

Back to the question: what are we made of? This is a rather typical Western-culture question, with a strong bias toward things being made out of other things, and so on. It is almost a biblical way of thinking, if one has to trace the origins of the concept. We are made out of large molecules; large molecules can be split into smaller ions and molecules; those can be split into electrons, protons and neutrons; and those can be split into quarks, gluons, photons, leptons, weak bosons, Higgs bosons, and so on… This way of thinking, however interesting it may look, is not a valid description of our reality at our length scale. There is a huge amount of information that we never consider while doing all the splitting, so the fact that we end up with “elementary” particles is largely irrelevant for the way nature behaves at the scale where we usually observe it. Am I talking about the renormalization group? In a very distant way, maybe… one can make a connection to marginally relevant operators there, but then again, that is not the full story. The full story is to acknowledge that the information about our macroscopic reality is not encoded in the fundamental building blocks and the fundamental laws at a very small scale.
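
Since the renormalization group is name-dropped here, a toy sketch of mine (an illustration, not the post’s argument) of the many-to-one character of coarse-graining: majority-rule block spins on a 1D chain. Each coarse-graining step throws away microscopic detail, which is the mechanical reason macro-scale and micro-scale descriptions are not simple reflections of each other.

```python
import numpy as np

# Toy block-spin decimation: replace each block of b spins by the sign of
# its sum. Many distinct microstates map to the same coarse state, so the
# microscopic information is genuinely discarded along the way.

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=27)     # a random microstate

def block(spins, b=3):
    """Majority rule: each block of b spins becomes one coarse spin."""
    blocks = spins.reshape(-1, b)
    return np.sign(blocks.sum(axis=1)).astype(int)

level1 = block(spins)    # 27 -> 9 spins
level2 = block(level1)   # 9  -> 3 spins
print(spins, level1, level2, sep="\n")
# 2**27 microstates feed into only 2**3 doubly-coarse states: the coarse
# description cannot be inverted back to the microscopic one.
```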

Now, let’s go back to the age of string theory… this idea came into being because we needed something fundamental enough to rely upon, and that something was, by definition, the string. This did not work at all… some of the simplest dualities (like T-duality) already show pretty clearly that there is no way of deriving the relevant concepts about our reality from its most basic “fundamental building blocks”. It is a very Western myth that “the fundamental is necessarily something small that constitutes the building blocks of everything”. This is not so. I can say this quite surely by now, and most of the better string theorists finally admit it too… The “fundamental string” is not fundamental at all… it is just a mathematical concept, a sort of “wavefunction”, which may ease our understanding of some aspects but certainly makes the understanding of other concepts harder.
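
T-duality, mentioned above, is easy to state concretely: the closed-string spectrum on a circle of radius R, with momentum modes n and winding modes w, is identical to the spectrum on radius alpha'/R with n and w exchanged, so “small” and “large” radius describe the same physics. A two-line numerical check (units and numbers are arbitrary choices of mine; oscillator contributions dropped):

```python
# T-duality as arithmetic: the mass-squared contribution
# M^2 = (n/R)^2 + (w*R/alpha')^2 is unchanged by R -> alpha'/R
# with momentum n and winding w exchanged.

alpha = 1.0   # alpha' (string scale), set to 1

def m2(n, w, R, a=alpha):
    return (n / R) ** 2 + (w * R / a) ** 2

R = 2.7
for n, w in [(1, 0), (0, 1), (2, 3)]:
    print(m2(n, w, R), m2(w, n, alpha / R))   # the pairs agree exactly
```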

This is pretty hard to understand but, surprisingly so, quantum “fundamentalists” understand that relatively better nowadays… If I tell you that you are made of carbon, oxygen, nitrogen and some hydrogen atoms you will notice that what I told you is meaningless. It can be “poetic license” but it still remains completely pointless… While you understand that, many scientists still believe that the whole information about “the universe and everything in it and all that stuff” is somehow encoded only in the way in which their fundamental building blocks interact at the level of high energies… This is of course nonsense… 

Indeed, the whole is probably not more that the sum of its parts but it is something different that needs another framework in order to be correctly described… 

What is quantum mechanics?

Ha! This is a job for “savagephysics”… So, after reading today some nice and interesting papers (following my own criteria of “nice” and “interesting”, so… things not to be found in PRL, Nature or Science… :p ) I decided that this evening I can give some sort of basic introduction to what “quantization” means… I could have said something about “supersymmetrization”, but that may be a subject for a future serious paper of mine… About quantization, however, unless everybody on earth forgot everything about physics, one cannot say many new things… So, quantization: what does “quantizing a theory” mean? First of all, nature is “quantum”… that is how nature is, we didn’t make it like that… so, when we write “classical” theories we are simply doing something wrong somewhere. Now, there are several aspects in which quantization manifests itself, but many fields of “research” see only some of them… For example, particle physicists working with one or another quantum field theory will tell you quantization is all about path integrals… This is quite vague… Feynman path integrals are a form of “quantization” in the sense of constructing the probability amplitudes while taking into account the special topology of your problem… in essence this is how you construct a “statistics” and obtain, in the end, your probabilities. But before Feynman came up with this idea, quantization was done in the old-fashioned way… imposing commutation relations on operators, then doing second quantization and constructing perturbative theories, etc. These methods are generally quite primitive, although very much en vogue today, for two simple reasons: they are suitable for the toy models used in quantum information theory, and very few people can control the full strength of path-integral quantization… but I am not discussing here the social issues and the lack of education in the Western world…

Back to the point: in path-integral quantization the Heisenberg relations are encoded in what is known as the “time-ordering prescription”… (or radial ordering when working with some conformal field theories). Because of this, one doesn’t need any “operators” anymore, but one does have to take into account the fact that in 3 or more dimensions one has fermions and bosons… so one has to use special “numbers” (Grassmann variables) to encode the quantization of fermions… and here comes the second aspect of “quantization”: symmetry… While the “classical world” doesn’t know what a fermion is, the quantum world behaves quite differently when one deals with fermions instead of bosons. Dealing with the two of them is the next aspect of “quantization”, and it is mostly related to the presence of a… sign… (a special sign in the definition of an algebraic structure)… That sign is very, very (very) important and could mean a Nobel prize (maybe for me… but I won’t tell more…). So, this is another aspect of quantization. I will, perhaps surprisingly, not discuss much about the Aharonov-Bohm effect or Wilson loops, as they are… not that fundamental… the basic ideas behind them reduce to something I said before. I will also not discuss entanglement now, mainly because I discussed it in the posts about holography and it is NOT that fundamental either (it is a simple result of the topological properties of the space used). So, after speaking about topology and fermions, what is the third most important thing that defines quantum mechanics?

Well, indiscernibility, of course… It essentially means that one cannot attach labels to fundamental particles, or at least not as many as one is used to in classical physics. These three aspects, again: topology, statistics and indiscernibility, are the three most basic constructs of quantum mechanics. All the others can be derived out of them in one way or another. So, whenever we speak about quantum entanglement, don’t forget we actually speak about topology. When we speak about supersymmetry and supersymmetrization, we speak in fact about the fact that we don’t deal well with fermions, and when we speak about most of the quantum “paradoxes” known to uneducated quantum-foundations people, we speak about indiscernibility…
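
To make the famous sign concrete, here is a minimal numpy sketch of mine: the bosonic mode algebra closes with a commutator, the fermionic one with an anticommutator, and that single sign flip is exactly what forbids double occupation.

```python
import numpy as np

# The "sign" made concrete: a bosonic mode obeys the commutator
# [a, a_dag] = 1, a fermionic one the ANTIcommutator {c, c_dag} = 1.
# The bosonic matrices are truncated, so the identity fails only in the
# last diagonal entry -- an artifact of cutting off the ladder.

N = 6
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # bosonic annihilation (truncated)
c = np.array([[0.0, 1.0], [0.0, 0.0]])           # fermionic annihilation (exact)

comm  = a @ a.T - a.T @ a                        # [a, a_dag]
acomm = c @ c.T + c.T @ c                        # {c, c_dag}
print(np.round(comm, 10))                        # identity except bottom corner
print(acomm)                                     # exactly the 2x2 identity
# note also c @ c == 0: Pauli exclusion is that minus sign at work
```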

“Doing” string theory…

I must admit I always said (until recently) that I want to “do” string theory… What I actually want is to understand nature. I have no favorite theory for doing this. I don’t think a “favorite” theory exists nowadays, and if one exists in any way, it is certainly not string theory. I will work, when necessary, with string theory or with any other tool of knowledge or reason, but I see no reason to “do” string theory for its own sake or to promote it as the theory for understanding nature.

Is DMRG Wrong?

Well, I have been pretty cautious about calling anything completely wrong. I cannot call ER-EPR “wrong”, nor can I say holography is “wrong”… they are just somewhat trivial and limited in scope and power. About DMRG, however, I can say pretty confidently that it is wrong. It is an idea based on the assumption that there is no intrinsic quantum behavior in electron systems and that one can in principle fit whatever one wants to whatever one chooses. It may work acceptably when the entanglement of the ground state is small, but whenever entanglement becomes relevant and scales as something other than a strict area law, it becomes utter nonsense. This is why I never follow the DMRG ideas and don’t go to DMRG conferences or “schools”… Now, if you, as an objective reader of this blog, want to try DMRG, please feel free to do so and read whatever has been written about the subject. You will see, after some effort (which in my case amounted to about 10 minutes of a walk in Palermo with one of the people who “applied” DMRG to quantum chemistry), that the whole idea rests on unphysical assumptions and is plain wrong… Nature is not restricted to little or no entanglement, and in the end you cannot perfectly factorize the tensors so that the density matrices separate completely. This is why the idea works decently for 1-dimensional systems: there the area law is exact, entanglement is “contained” in the “fictive black hole”, and interactions are short-range… For everything else the situation is fundamentally different (see EPR effects, quantum correlations, etc.) and the idea is useless and pointless…
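
For the record, the truncation at the heart of DMRG-style methods can be demonstrated in a few lines (a sketch of mine; the dimensions and cutoff are arbitrary): keep only the largest Schmidt (singular) values across a bipartition and see how the fidelity behaves for a weakly entangled state versus a generic one.

```python
import numpy as np

# DMRG-style truncation keeps only the chi largest Schmidt values of a
# bipartition. Nearly lossless for a weakly entangled state; visibly lossy
# for a generic (volume-law) random state.

rng = np.random.default_rng(2)
dim = 64                       # dimension of each half of the chain
chi = 8                        # number of Schmidt values kept

def truncation_fidelity(psi_matrix, chi):
    """|<psi|psi_trunc>|^2 after keeping chi Schmidt values."""
    u, s, vt = np.linalg.svd(psi_matrix)
    s_t = np.zeros_like(s)
    s_t[:chi] = s[:chi]
    kept = (u * s_t) @ vt
    kept /= np.linalg.norm(kept)
    return abs(np.vdot(psi_matrix, kept)) ** 2

# weakly entangled state: only two Schmidt values
weak = np.zeros((dim, dim))
weak[0, 0], weak[1, 1] = 0.95, 0.05
weak /= np.linalg.norm(weak)

# generic state: random coefficients, near-maximal entanglement
strong = rng.normal(size=(dim, dim))
strong /= np.linalg.norm(strong)

print(truncation_fidelity(weak, chi))    # ~1.0
print(truncation_fidelity(strong, chi))  # well below 1
```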


Is holography wrong? Is ER-EPR wrong?

No, they are just trivial on one side and incomplete descriptions on the other. Why are they trivial? Because they present nothing new beyond what was already known: topological dualities exist, and that is a mathematical fact. They eventually bring some new frameworks, but these frameworks have to be constructed so that they are consistent with everything else known about nature. Why are they incomplete? Well, because there is no purely topological theory of “everything”, despite the claims of “everythingness”… They are just particular cases, invalidated by some rather simple examples of everyday physics, from condensed matter to black-hole entanglement. The constructions are made specifically for some very particular cases and cannot in principle (so, not because of practical difficulties) go beyond them…

Holography, part 2

Ok, so yesterday I posted part 1 of the discussion about holography. Today I will go into part 2 of this idea. It relates quite a lot to something called quantum entanglement. In principle this is a purely statistical and topological effect. It means, essentially, that because one cannot in general separate a density matrix with respect to an arbitrary factorization of its algebra, it may happen in some cases that, as Schrodinger would put it, one cannot speak about “the whole” while consistently speaking about its parts. Entanglement is in some sense a measure of how much information we would lose when speaking about the parts separately, without considering “the whole”. The main observation, however, is that the factorization of the algebra is up to us. Of course, experimentally this would mean changing the whole setup, but in a purely mathematical sense there is always another way to factorize the algebra of the density matrices. In some cases this leads to more entanglement, in some cases to less, but in principle it cannot be reduced to no entanglement at all, and this is a quantum property. Now, with this in mind, we also have to understand what critical points, or systems at criticality, mean: in principle they are described by some conformal theory and characterized by a diverging correlation length. This means we have long-range effects, and these can in principle harm our area laws. So the area laws that follow from the holographic principle are violated in fermion systems when long-range correlations are included, and some types of entanglement may also produce violations of the area laws. Now we have to be careful with the terms used. While for me an “area law with logarithmic divergence” is NOT an area law, for most authors who want to publish something it is “a weakly violated area law”… Now, about the character of a logarithmic divergence one can argue a bit… it is “weak” in some sense, and it can be eliminated following some renormalization criteria when it appears in perturbative quantum field theories. However, when it appears in the expression of an area law for entropy, I don’t see why one should go toward quantum field theory, when the whole idea was to avoid it because of its poor UV behavior; and anyhow, one would be relating terms defined differently in different areas of physics merely because they both involve “logarithmic divergencies”… Following the old proverb that not everything that flies is a bird, one may be skeptical…
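
The “factorization is up to us” point can be made explicit in the smallest possible example (a sketch of mine): one and the same vector in a 4-dimensional Hilbert space has zero entanglement entropy under one tensor-product split and ln 2 under another.

```python
import numpy as np

# The same vector in C^4 under two different tensor-product splits: the
# second split is implemented by a global unitary (Hadamard then CNOT)
# that redefines which degrees of freedom count as "subsystem one".

def entropy_of_first_factor(psi):
    """Von Neumann entropy of the reduced state of the first qubit."""
    m = psi.reshape(2, 2)                    # coefficients c_ij of |i>|j>
    s = np.linalg.svd(m, compute_uv=False)   # Schmidt coefficients
    p = s**2
    p = p[p > 1e-12]                         # drop numerical zeros
    return float(-(p * np.log(p)).sum())

psi = np.array([1.0, 0.0, 0.0, 0.0])         # |00>, a product state

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
refactor = CNOT @ np.kron(H, np.eye(2))      # defines an alternative split

print(entropy_of_first_factor(psi))              # 0.0  : no entanglement
print(entropy_of_first_factor(refactor @ psi))   # ln 2 : maximal for a qubit
```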

Now, there was an emphasis on D0-branes in the paper I quoted yesterday. Why D0-branes? Well, they are not strings; they are point-like objects, lines in space-time… Why these? Why, of course, because area laws have been proved to be exact only in 1-dimensional systems, so the obvious choice for proving an area law is a system you are sure possesses one. As chance would have it, in more than 1 dimension most of these ideas fail pretty badly… well, they don’t fail for free or almost-free bosons, which is why our friends in Japan carefully chose to disregard anything related to fermionic statistics in their “geometric” approach. Whenever one has a gapped, local model, and hence a length scale provided by the correlation length, we are told to believe an area law is plausible. In fact it is almost never so, mainly because there are also interactions between the bosons and the temperature is generally not 0 (zero) Kelvin…

So, what did we learn today? Mainly that the “area laws” implied by the holographic principle are far from being “universal”, as claimed by this or that pop-science outlet (Science and Nature included). They are mostly related to the presence of some kind of “horizon” and are representation-dependent. They are not specific to black holes: any system with a well-defined boundary that plays the role of a “horizon”, in the sense that we simply ignore the inside, will exhibit some sort of area law, but this is due to the choice of a specific factorization of the algebra of states and the entanglement resulting from it. Area laws fail dramatically whenever “true nature” comes into play, i.e. interactions, long-distance correlations, entanglement, etc. The holographic principle is, again, the choice of a framework and nothing more. Its predictive power is the same as the power of (co)bordisms in topology, but with a far narrower applicability in physics.


The Holographic Principle

What is the “holographic principle”? Well, it is the result of an observation about how one can encode information inside the… “universe”… where by “universe” I essentially mean spacetime. Let me start differently: one of the finest and most complicated theories nowadays is… no, not string theory… it is quantum field theory. In principle, quantum field theory takes advantage of the linear superposition of “fields” in order to encode the quantum “substrate” of nature. “Quantum” means mainly “topological”, or, otherwise stated, whatever is accessible in a statistical sense while having to deal with the full topological structure of your problem. But one knows pretty well that a field-theoretical description has redundancies. There was never a true mystery in this. Not only does this theory have redundancies (gauge redundancies), it also has a strange property: it generally over-counts degrees of freedom. This too is well known, and gravity just makes the issue more acute. If you want to excite the degrees of freedom of a field theory in some volume of the universe, you start adding energy. While adding energy, beyond some limit you will obtain a black hole. Now, a black hole has a horizon… a horizon is that damn thing that doesn’t allow you to access whatever is inside it. Once something goes beyond the horizon it is inaccessible to you. In order to preserve whatever made sense out of thermodynamics (which is, nota bene, again a statistical theory), you have to consider the way in which information about the in-falling matter is encoded on the area that surrounds the region. So, information encoded on an area accounts for whatever is in the “volume”… Now, that doesn’t mean the volume ceases to exist. There are several pretty good arguments telling us that whatever is on the other side of the horizon is a “volume”… Ok, call it a “belief system”, because I won’t sit here to explain it to you in detail, but I do believe that once past the horizon you won’t hit the “end of the universe”… So, we have volume behind the horizon, and we have information there too… If we encode it on the boundary of that region, that’s fine, but it is just a way of representing it. In some situations this representation is useful. Now, I told you already that string theory is a theory of “worldsheets”, i.e. of the paths described by strings (and, in the end, branes) while moving in some space-time. In the string case it is no surprise that one finds holographic dualities: the theory has been constructed from the very beginning to be “critical”, i.e. to generate conformal invariance on the worldsheet in the critical dimension. So the whole construction is fundamentally “bordant”. Accordingly, there can be no surprise in string theory being “holographic” in one sense or another. For whoever understands topology, the “holographic principle” is just the theory of (co)bordisms.
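
For scale, here is the standard Bekenstein-Hawking formula the discussion alludes to, S = k_B c^3 A / (4 G hbar), evaluated for a solar-mass black hole (plain textbook arithmetic, SI units; my worked example, not part of the post):

```python
import math

# Bekenstein-Hawking entropy of a Schwarzschild black hole of one solar
# mass: the entropy is set by the horizon AREA, not by any volume.

G, c, hbar, kB = 6.674e-11, 2.998e8, 1.0546e-34, 1.3807e-23
M = 1.989e30                        # one solar mass, kg

r_s = 2 * G * M / c**2              # Schwarzschild radius, ~2.95 km
A = 4 * math.pi * r_s**2            # horizon area
S = kB * c**3 * A / (4 * G * hbar)  # entropy, scaling with A
print(r_s, S)                       # S ~ 1.5e54 J/K -- enormous, area-set
```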

Now, major statements are being made by some… First, I hear things about “the universe being intrinsically holographic”… that would mean that the universe (and whatever may be in it) must obey an area law, with the entropy S scaling like L^{d-1}, where d is the dimension of the “universe”… This is not so, and it is not so especially for fermions, where logarithmic corrections to the area law must be introduced (S scaling like L^{d-1}·log(L)). This means the entropy diverges logarithmically. This is a good indication of why density matrix renormalization methods CANNOT work, and also a good reason why I usually don’t even consider going to conferences or summer schools on DMRG… but that is not the scope of this post. The scope here is to show why the holographic principle, while interesting from some points of view, is not as “fundamental” as one might believe… The ER-EPR conjecture is likewise an interesting observation, but it is by no means “fundamental” or “new” in any way… at least not to someone with a decent knowledge of Morse surgery in topology… You see, the real problem is that for fermionic systems there is a “small” issue with the area law.

Unsurprisingly, what we read in the latest over-quoted paper by some Japanese authors is that higher-derivative terms in the graviton 4-point amplitude have an important role in the description of the entropy of black holes, but their “supersymmetrization” is not “yet” “fully” understood… What does that mean? Well… essentially it means that if you ask for local supersymmetry of your full action, you don’t really know what you get. That is not necessarily bad, because it just adds to the point: if you want to keep the holographic principle and its area law, you have a problem with the supersymmetric partners of the scalar fields, and if you give up the supersymmetrization, you end up with trivial cases… however you do it, the “theory” is incomplete… But since the authors (like all authors nowadays) are interested only in the things they can do, we see that they ignore gravitino contributions, gluino contributions and whatever cannot be calculated in a way pertinent to the paper… After bravely imposing cancellations of terms in order to respect local supersymmetry wherever they consider it fit, and neglecting the places where this trick fails, we are quickly told that in order to have an explicit and “controlled” behavior of the quantum corrections we need to consider the supergravity (low-energy effective theory) of M-theory in 11 dimensions. The only problem is that the fundamental object of M-theory (the membrane) cannot be quantized (for now)… however, we can obtain some sort of quantum corrections by requiring the local supersymmetry of M-theory. Now, M-theory and the original type IIA superstring theory are related via dimensional reduction, and string theorists believe IIA is some sort of lower-energy effective theory of M-theory, in the sense that the effective theory of M-theory should include aspects of the effective theory of IIA. In principle this is because of the “dimensional reduction” relation between the two theories… However, we know that the most important terms that should contribute to the quantum corrections in IIA cannot be supersymmetrized… these terms should alter the contributions to the area laws they describe and falsify any claim of “holography”. However, after ignoring almost everything fermionic (Majoranas, gravitinos, etc.), nobody will notice this “detail”… and anyway, nobody can do anything better either…
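
Since the S ~ L^{d-1}·log(L) claim for fermions is doing the heavy lifting here, its d = 1 version is easy to check numerically: a strict area law in one dimension would mean a constant block entropy, and instead the entropy of a block in a critical free-fermion chain grows logarithmically. A minimal sketch of mine using the standard correlation-matrix method; the half-filled infinite chain and the block sizes are my choices.

```python
import numpy as np

# Entanglement entropy of a block of length L in an infinite free-fermion
# chain at half filling. Diagonalize the block's correlation matrix
# C_mn = sin(pi (m-n) / 2) / (pi (m-n)) and sum the usual binary entropies
# of its eigenvalues. S grows like (1/3) ln L -- no strict area law.

def block_entropy(L):
    i = np.arange(L)
    d = i[:, None] - i[None, :]
    C = np.where(d == 0, 0.5,
                 np.sin(np.pi * d / 2) / (np.pi * np.where(d == 0, 1, d)))
    nu = np.linalg.eigvalsh(C)
    nu = nu[(nu > 1e-12) & (nu < 1 - 1e-12)]   # drop numerical 0s and 1s
    return float(-(nu * np.log(nu) + (1 - nu) * np.log(1 - nu)).sum())

for L in [8, 16, 32, 64, 128]:
    print(L, block_entropy(L))   # increments ~ (1/3) ln 2 per doubling of L
```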

So, I hope I have explained pretty clearly why the paper discussed above is technically correct but essentially pointless, with no contribution to any of the problems it raises… what a typical paper for these times…