Quantum Field Theory I
Abbreviations
BMP stands for Bogolyubov–Medvedev–Polivanov;
BRST symmetry stands for the Becchi–Rouet–Stora–Tyutin symmetry;
CQFT stands for constructive quantum field theory;
HRST stands for the Haag–Ruelle scattering theory;
LSZ stands for Lehmann–Symanzik–Zimmermann;
QCD stands for quantum chromodynamics;
QED stands for quantum electrodynamics;
QFT stands for quantum field theory;
RT stands for renormalization theory;
WOF stands for Wick-ordered functional.
Quantum field theory is a set of ideas and tools that combine three of the
major themes of modern physics: the quantum theory, the field concept,
and the principle of relativity. Today, most working physicists need to know
some quantum field theory, and many others are curious about it. The
theory underlies modern elementary particle physics, and supplies essential
tools to nuclear physics, atomic physics, condensed matter physics, and
astrophysics. In addition, quantum field theory has led to new bridges
between physics and mathematics.
One might think that a subject of such power and widespread application
would be complex and difficult. In fact, the central concepts and techniques
of quantum field theory are quite simple and intuitive. This is especially
true of the many pictorial tools (Feynman diagrams, renormalization
group flows, and spaces of symmetry transformations) that are routinely
used by quantum field theorists. Admittedly, these tools take time to learn,
and tying the subject together with rigorous proofs can become extremely
technical. Nevertheless, we feel that the basic concepts and tools of quantum
field theory can be made accessible to all physicists, not just an elite
group of experts.
Michael Peskin and Daniel Schroeder
The gauge theories for the strong and electroweak interaction have become
the Standard Model of particle physics. They realize in a consistent way
the requirements of quantum theory, special relativity and symmetry principles.
For the first time, we have a consistent theory of the fundamental
interactions that allows for precision calculations for many experiments.
The Standard Model has, up to now, successfully passed all experimental
tests. This success establishes the importance of gauge theories, despite
the fact that gravity is not included and that the Standard Model is most
likely an effective theory resulting from the low-energy limit of a more
fundamental theory.
Manfred Böhm, Ansgar Denner, and Hans Joos
If supersymmetry plays the role in physics that we suspect it does, then it is
very likely to be discovered by the next generation of particle accelerators,
either at Fermilab in Batavia, Illinois, or at CERN in Geneva, Switzerland.
Edward Witten, 2000
Eberhard Zeidler
Max Planck Institute
for Mathematics in the Sciences
Inselstrasse 22
04103 Leipzig
Germany
e-mail: ezeidler@mis.mpg.de
The present comprehensive introduction to the mathematical and physical
aspects of quantum field theory consists of the following six volumes:
Volume I: Basics in Mathematics and Physics
Volume II: Quantum Electrodynamics
Volume III: Gauge Theory
Volume IV: Quantum Mathematics
Volume V: The Physics of the Standard Model
Volume VI: Quantum Gravity and String Theory.
Since ancient times, both physicists and mathematicians have tried to understand
the forces acting in nature. Nowadays we know that there exist four
fundamental forces in nature:
• Newton’s gravitational force,
• Maxwell’s electromagnetic force,
• the strong force between elementary particles, and
• the weak force between elementary particles (e.g., the force responsible for
the radioactive decay of atoms).
In the 20th century, physicists established two basic models, namely,
• the Standard Model in cosmology based on Einstein’s theory of general
relativity, and
• the Standard Model in elementary particle physics based on gauge theory.
Quantum Field Theory I: Basics in Mathematics and Physics
A Bridge between Mathematicians and Physicists
It is very difficult to write mathematics books today. If one does not take
pains with the fine points of theorems, explanations, proofs and corollaries,
then it won’t be a mathematics book; but if one does these things, then
the reading of it will be extremely boring.
Johannes Kepler (1571–1630)
Astronomia Nova
The interaction between physics and mathematics has always played an
important role. The physicist who does not have the latest mathematical
knowledge available to him is at a distinct disadvantage. The mathematician
who shies away from physical applications will most likely miss
important insights and motivations.
Marvin Schechter
Operator Methods in Quantum Mechanics5
In 1967 Lenard and I found a proof of the stability of matter. Our proof was
so complicated and so unilluminating that it stimulated Lieb and Thirring
to find the first decent proof. Why was our proof so bad and why was
theirs so good? The reason is simple. Lenard and I began with mathematical
tricks and hacked our way through a forest of inequalities without
any physical understanding. Lieb and Thirring began with physical understanding
and went on to find the appropriate mathematical language to
make their understanding rigorous. Our proof was a dead end. Theirs was
a gateway to the new world of ideas collected in this book.
Freeman Dyson
From the Preface to Elliott Lieb’s Selecta
The state of the art in quantum field theory. One of the intellectual
fathers of quantum electrodynamics is Freeman Dyson (born 1923), who works at the Institute for Advanced Study in Princeton. He characterizes
the state of the art in quantum field theory in the following way:
All through its history, quantum field theory has had two faces, one looking
outward, the other looking inward. The outward face looks at nature and
gives us numbers that we can calculate and compare with experiments.
The inward face looks at mathematical concepts and searches for a consistent
foundation on which to build the theory. The outward face shows
us brilliantly successful theory, bringing order to the chaos of particle interactions,
predicting experimental results with astonishing precision. The
inward face shows us a deep mystery. After seventy years of searching, we
have found no consistent mathematical basis for the theory. When we try
to impose the rigorous standards of pure mathematics, the theory becomes
undefined or inconsistent. From the point of view of a pure mathematician,
the theory does not exist. This is the great unsolved paradox of quantum
field theory.
To resolve the paradox, during the last twenty years, quantum field theorists
have become string-theorists. String theory is a new version of quantum
field theory, exploring the mathematical foundations more deeply and
entering a new world of multidimensional geometry. String theory also
brings gravitation into the picture, and thereby unifies quantum field theory
with general relativity. String theory has already led to important
advances in pure mathematics. It has not led to any physical predictions
that can be tested by experiment. We do not know whether string theory
is a true description of nature. All we know is that it is a rich treasure
of new mathematics, with an enticing promise of new physics. During the
coming century, string theory will be intensively developed, and, if we are
lucky, tested by experiment.
Five golden rules. When writing the LaTeX file of this book on my computer,
I had in mind the following five quotations. Let me start with the
mathematician Hermann Weyl (1885–1955), who became the successor of Hilbert
in Göttingen in 1930 and who left Germany in 1933 when the Nazi regime
came to power. Together with Albert Einstein (1879–1955) and John von
Neumann (1903–1957), Weyl became a member of the newly founded Institute
for Advanced Study in Princeton, New Jersey, U.S.A. in 1933. Hermann
Weyl wrote in 1938:
The stringent precision attainable for mathematical thought has led many
authors to a mode of writing which must give the reader an impression
of being shut up in a brightly illuminated cell where every detail sticks
out with the same dazzling clarity, but without relief. I prefer the open
landscape under a clear sky with its depth of perspective, where the wealth
of sharply defined nearby details gradually fades away towards the horizon.
For his fundamental contributions to electroweak interaction inside the Standard
Model in particle physics, the physicist Steven Weinberg (born 1933) was
awarded the Nobel prize in physics in 1979 together with Sheldon Glashow
(born 1932) and Abdus Salam (1926–1996). On the occasion of a conference
on the interrelations between mathematics and physics in 1986, Weinberg
pointed out the following:
I am not able to learn any mathematics unless I can see some problem I am
going to solve with mathematics, and I don’t understand how anyone can
teach mathematics without having a battery of problems that the student
is going to be inspired to want to solve and then see that he or she can
use the tools for solving them.
For his theoretical investigations on parity violation under weak interaction,
the physicist Chen Ning Yang (born 1922) was awarded the Nobel prize in
physics in 1957 together with Tsung-Dao Lee (born 1926). In an interview,
Yang remarked:
In 1983 I gave a talk on physics in Seoul, South Korea. I joked “There
exist only two kinds of modern mathematics books: one which you cannot
read beyond the first page and one which you cannot read beyond the first
sentence.” The Mathematical Intelligencer later reprinted this joke of mine.
But I suspect many mathematicians themselves agree with me.
The interrelations between mathematics and modern physics have been promoted
by Sir Michael Atiyah (born 1929) on a very deep level. In 1966, the
young Atiyah was awarded the Fields medal. In an interview, Atiyah emphasized
the following:
The more I have learned about physics, the more convinced I am that
physics provides, in a sense, the deepest applications of mathematics. The
mathematical problems that have been solved, or techniques that have
arisen out of physics in the past, have been the lifeblood of mathematics. . .
The really deep questions are still in the physical sciences. For the health of
mathematics at its research level, I think it is very important to maintain
that link as much as possible.
The development of modern quantum field theory has been strongly influenced
by the pioneering ideas of the physicist Richard Feynman (1918–1988).
In 1965, for his contributions to the foundation of quantum electrodynamics,
Feynman was awarded the Nobel prize in physics together with Julian
Schwinger (1918–1994) and Sin-Itiro Tomonaga (1906–1979). In the early
1960s, Feynman delivered his famous lectures at the California
Institute of Technology in Pasadena. In the preface to the printed version
of the lectures, Feynman told his students the following:
Finally, may I add that the main purpose of my teaching has not been
to prepare you for some examination – it was not even to prepare you to serve industry or the military. I wanted most to give you some appreciation
of the wonderful world and the physicist’s way of looking at it, which, I
believe, is a major part of the true culture of modern times.
Modern elementary particle physics is based on the Standard Model in
particle physics introduced in the late 1960s and the early 1970s. Before
studying thoroughly the Standard Model in the next volumes, we will discuss
the phenomenology of this model in the present volume. It is the goal of
quantum field theory to compute
• the cross sections of scattering processes in particle accelerators which characterize
the behavior of the scattered particles,
• the masses of stable elementary particles (e.g., the proton mass as a bound
state of three quarks), and
• the lifetime of unstable elementary particles in particle accelerators.
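As a simple orientation (this standard relation of quantum mechanics is added here only as an illustration), the mean lifetime τ of an unstable particle is obtained from its computed total decay width Γ, which has the dimension of energy:
\[
\tau=\frac{\hbar}{\Gamma},\qquad \Gamma=\sum_{f}\Gamma_{f},
\]
where the partial widths Γ_f refer to the individual decay channels f.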
To this end, physicists use the methods of perturbation theory. Fortunately
enough, the computations can be based on only a few basic formulas which
we call magic formulas. The magic formulas of quantum theory are extremely
useful for describing the experimental data observed in particle accelerators,
but they are only valid on a quite formal level.
This difficulty is typical for present quantum field theory.
To help the reader in understanding the formal approach used in physics, we
consider the finite-dimensional situation in Chapter 6.
In the finite-dimensional case, we will rigorously prove all of the
magic formulas used by physicists in quantum field theory.
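To indicate the kind of formula meant here, we mention a typical prototype (the standard Dyson series of perturbation theory, quoted only as an illustration and only on the formal level just described): in units where ħ = c = 1, the S-matrix of a quantum field with interaction Hamiltonian density is written as
\[
S = T\exp\left(-i\int d^4x\,\mathcal{H}_{\mathrm{int}}(x)\right)
  = \sum_{n=0}^{\infty}\frac{(-i)^{n}}{n!}\int d^4x_1\cdots\int d^4x_n\,
    T\{\mathcal{H}_{\mathrm{int}}(x_1)\cdots\mathcal{H}_{\mathrm{int}}(x_n)\},
\]
where T denotes the time-ordering of operator products. The individual terms of this series are only well defined after regularization and renormalization; this is the quite formal level mentioned above.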
Furthermore, we relate physics to the following fields of mathematics:
• causality and the analyticity of complex-valued functions,
• many-particle systems, the Casimir effect in quantum field theory, and
number theory,
• propagation of physical effects, distributions (generalized functions), and
the Green’s function,
• rigorous justification of the elegant Dirac calculus,
• duality in physics (time and energy, time and frequency, position and momentum)
and harmonic analysis (Fourier series, Fourier transformation,
Laplace transformation, Mellin transformation, von Neumann’s general operator
calculus for self-adjoint operators, Gelfand triplets and generalized
eigenfunctions),
• the relation between renormalization, resonances, and bifurcation,
• dynamical systems, Lie groups, and the renormalization group,
• fundamental limits in physics,
• topology in physics (Chern numbers and topological quantum numbers),
• probability, Brownian motion, and the Wiener integral,
• the Feynman path integral,
• Hadamard’s integrals and algebraic Feynman integrals.
In fact, this covers a broad range of physical and mathematical subjects.
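As an elementary illustration of the duality between position and momentum listed above (a standard formula of quantum mechanics, recalled here only for orientation), the Fourier transformation relates the position representation ψ of a quantum state to its momentum representation:
\[
\hat{\psi}(p)=\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}\psi(x)\,e^{-ipx/\hbar}\,dx,
\qquad
\psi(x)=\frac{1}{\sqrt{2\pi\hbar}}\int_{-\infty}^{\infty}\hat{\psi}(p)\,e^{ipx/\hbar}\,dp.
\]
The same transformation governs the duality between time and energy and between time and frequency.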
Another typical feature of physical mathematics is the description of many-particle systems by partition functions which encode essential information. As we will show, the Feynman functional integral is nothing other than a partition function which encodes the essential properties of quantum fields. From the physical point of view, the Riemann zeta function is a partition function for the infinite system of prime numbers. The notion of partition function unifies
• statistical physics,
• quantum mechanics,
• quantum field theory, and
• number theory.
Summarizing, I dare say that
The most important notion of modern physics is the Feynman functional integral as a partition function for the states of many-particle systems.
It is a challenge of mathematics to understand this notion in a better way
than known today.
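For instance (the following standard identity is recalled only to illustrate this point of view), Euler's product formula
\[
\zeta(s)=\sum_{n=1}^{\infty}\frac{1}{n^{s}}=\prod_{p\ \mathrm{prime}}\frac{1}{1-p^{-s}},\qquad \mathrm{Re}\,s>1,
\]
can be read as a partition function: for a gas of non-interacting bosonic oscillators whose one-particle energies are ln p, the partition function at inverse temperature β equals
\[
Z(\beta)=\prod_{p}\sum_{n=0}^{\infty}e^{-\beta n\ln p}=\prod_{p}\frac{1}{1-p^{-\beta}}=\zeta(\beta).
\]
In this sense, the Riemann zeta function encodes the statistical physics of the system of prime numbers.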
A panorama of mathematics. For the investigation of problems in
quantum field theory, we need a broad spectrum of mathematical branches.
This concerns
(a) algebra, algebraic geometry, and number theory,
(b) analysis and functional analysis,
(c) geometry and topology,
(d) information theory, theory of probability, and stochastic processes,
(e) scientific computing.
In particular, we will deal with the following subjects:
• Lie groups and symmetry, Lie algebras, Kac–Moody algebras (gauge groups,
permutation groups, the Poincaré group in relativistic physics, conformal
symmetry),
• graded Lie algebras (supersymmetry between bosons and fermions),
• calculus of variations and partial differential equations (the principle of
critical action),
• distributions (also called generalized functions) and partial differential
equations (Green’s functions, correlation functions, propagator kernels, or
resolvent kernels),
• distributions and renormalization (the Epstein–Glaser approach to quantum
field theory via the S-matrix),
• geometric optics and Huygens’ principle (symplectic geometry, contact
transformations, Poisson structures, Finsler geometry),
• Einstein’s Brownian motion, diffusion, stochastic processes and the Wiener
integral, Feynman’s functional integrals, Gaussian integrals in the theory of
probability, Fresnel integrals in geometric optics, the method of stationary
phase,
• non-Euclidean geometry, covariant derivatives and connections on fiber
bundles (Einstein’s theory of general relativity for the universe, and the
Standard Model in elementary particle physics),
• the geometrization of physics (Minkowski space geometry and Einstein’s
theory of special relativity, pseudo-Riemannian geometry and Einstein’s
theory of general relativity, Hilbert space geometry and quantum states,
projective geometry and quantum states, Kähler geometry and strings,
conformal geometry and strings),
• spectral theory for operators in Hilbert spaces and quantum systems,
• operator algebras and many-particle systems (states and observables),
• quantization of classical systems (method of operator algebras, Feynman’s
functional integrals, Weyl quantization, geometric quantization, deformation
quantization, stochastic quantization, the Riemann–Hilbert problem,
Hopf algebras and renormalization),
• combinatorics (Feynman diagrams, Hopf algebras),
• quantum information, quantum computers, and operator algebras,
• conformal quantum field theory and operator algebras,
• noncommutative geometry and operator algebras,
• vertex algebras (sporadic groups, monster and moonshine),
• Grassmann algebras and differential forms (de Rham cohomology),
• cohomology, Hilbert’s theory of syzygies, and BRST quantization of gauge
field theories,
• number theory and statistical physics,
• topology (mapping degree, Hopf bundle, Morse theory, Lyusternik–Schnirelman
theory, homology, cohomology, homotopy, characteristic classes, homological
algebra, K-theory),
• topological quantum numbers (e.g., the Gauss–Bonnet theorem, Chern
classes, and Chern numbers, Morse numbers, Floer homology),
• the Riemann–Roch–Hirzebruch theorem and the Atiyah–Singer index theorem,
• analytic continuation, functions of several complex variables (sheaf theory),
• string theory, conformal symmetry, moduli spaces of Riemann surfaces,
and Kähler manifolds.
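To indicate the level at which the items of this panorama enter, let us recall the simplest prototype from the calculus of variations mentioned above (standard material, stated here only for orientation): for the classical action
\[
S[q]=\int_{t_0}^{t_1}L(q(t),\dot q(t))\,dt,
\]
the principle of critical action, δS[q] = 0, is equivalent to the Euler–Lagrange equation
\[
\frac{d}{dt}\,\frac{\partial L}{\partial\dot q}-\frac{\partial L}{\partial q}=0.
\]
It is this classical principle which reappears in the list above in its various quantized forms.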
The role of proofs. Mathematics relies on proofs based on perfect logic.
The reader should note that, in this treatise, the terms
• proposition,
• theorem (important proposition), and
• proof
are used in the rigorous sense of mathematics. In addition, for helping the
reader in understanding the basic ideas, we also use ‘motivations’, ‘formal
proofs’, ‘heuristic arguments’ and so on, which emphasize intuition, but lack
rigor. Because of the rich material to be studied, it is impossible to provide
the reader with full proofs for all the different subjects. However, for missing proofs we add references to carefully selected sources.
Hilbert said the following to
the audience in 1900:
Each progress in mathematics is based on the discovery of stronger tools
and easier methods, which at the same time makes it easier to understand
earlier methods. By making these stronger tools and easier methods his
own, it is possible for the individual researcher to orientate himself in the
different branches of mathematics. . .
When the answer to a mathematical problem cannot be found, then the
reason is frequently that we have not recognized the general idea from
which the given problem only appears as a link in a chain of related problems.
. .
The organic unity of mathematics is inherent in the nature of this science,
for mathematics is the foundation of all exact knowledge of natural
phenomena.
From the physical point of view, quantum mechanics and quantum field
theory are described best by the Feynman approach via Feynman diagrams,
transition amplitudes, Feynman propagators (Green’s functions), and functional
integrals. In order to make the reader familiar with the fascinating
story of this approach, let us start with a quotation taken from Freeman
Dyson’s book Disturbing the Universe, Harper & Row, New York, 1979:4
Dick Feynman (1918–1988) was a profoundly original scientist. He refused
to take anybody’s word for anything. This meant that he was forced to
rediscover or reinvent for himself almost the whole of physics. It took him
five years of concentrated work to reinvent quantum mechanics. He said
that he couldn’t understand the official version of quantum mechanics
that was taught in the textbooks and so he had to begin afresh from the
beginning. This was a heroic enterprise. He worked harder during those
years than anybody else I ever knew. At the end he had his version of
quantum mechanics that he could understand. . .
The calculations that I did for Hans Bethe,5 using the orthodox method,
took me several months of work and several hundred sheets of paper.
Dick could get the same answer, calculating on a blackboard, in
half an hour. . .
In orthodox physics, it can be said: Suppose an electron is in this state
at a certain time, then you calculate what it will do next by solving the
Schrödinger equation introduced by Schrödinger in 1926. Instead of this,
Dick simply said:
The electron does whatever it likes.
A history of the electron is any possible path in space and time. The
behavior of the electron is just the result of adding together all the histories
according to some simple rules that Dick worked out. I had the enormous
luck to be at Cornell in 1948 when the idea was newborn, and to be for a
short time Dick’s sounding board. . .
Dick distrusted my mathematics and I distrusted his intuition.
Dick fought against my scepticism, arguing that Einstein had failed because
he stopped thinking in concrete physical images and became a manipulator
of equations. I had to admit that was true. The discoveries of
Einstein’s earlier years were all based on direct physical intuition. Einstein’s
later unified theories failed because they were only sets of equations
without physical meaning. . .
Nobody but Dick could use his theory. Without success I tried to understand
him. . . At the beginning of September after vacations it was time to
go back East. I got onto a Greyhound bus and travelled nonstop for three
days and nights as far as Chicago. This time I had nobody to talk to. The
roads were too bumpy for me to read, and so I sat and looked out of the
window and gradually fell into a comfortable stupor. As we were droning
across Nebraska on the third day, something suddenly happened. For two
weeks I had not thought about physics, and now it came bursting into
my consciousness like an explosion. Feynman’s pictures and Schwinger’s
equations began sorting themselves out in my head with a clarity they had
never had before. I had no pencil or paper, but everything was so clear I
did not need to write it down.
Feynman and Schwinger were just looking at the same set of ideas
from two different sides.
Putting their methods together, you would have a theory of quantum electrodynamics
that combined the mathematical precision of Schwinger with
the practical flexibility of Feynman. . .
During the rest of the day as we watched the sun go down over the prairie,
I was mapping out in my head the shape of the paper I would write when
I got to Princeton. The title of the paper would be The radiation theories
of Tomonaga, Schwinger, and Feynman.6
For the convenience of the reader, in what follows let us summarize the prototypes
of basic formulas for the passage from classical physics to quantum
physics. These formulas are special cases of more general approaches due to
• Newton in 1666 (equation of motion),
• Euler in 1744 and Lagrange in 1762 (calculus of variations),
• Fourier in 1807 (Fourier method in the theory of partial differential equations,
Fourier series, and Fourier integral),
• Poisson in 1811 (Poisson brackets and conservation laws),
• Hamilton in 1827 (Hamiltonian and canonical equations),
• Green in 1828 (the method of Green’s function in electromagnetism),
• Lie in 1870 (continuous transformation groups (Lie groups) and infinitesimal
transformation groups (Lie algebras)),
• Poincaré in 1892 (small divisors in celestial mechanics, and the renormalization
of the quasi-periodic motion of planets by adding regularizing terms
(also called counterterms) to the Poincaré–Lindstedt series),
• Fredholm in 1900 (integral equations),
• Hilbert in 1904 (integral equations, and spectral theory for infinite-dimensional
symmetric matrices),
• Emmy Noether in 1918 (symmetry, Lie groups, and conservation laws),
• Wiener in 1923 (Wiener integral for stochastic processes including Brownian
motion for diffusion processes),
• von Neumann in 1928 (spectral theory for unbounded self-adjoint operators
in Hilbert spaces, and calculus for operators),
• Stone in 1930 (unitary one-parameter groups and the general dynamics of
quantum systems).
From the physical point of view, the following formulas are special cases of
more general formulas due to Heisenberg in 1925, Born and Jordan in 1926,
Schr¨odinger in 1926, Dirac in 1927, Feynman in 1942, Heisenberg in 1943,
Dyson in 1949, Lippmann and Schwinger in 1950. In fact, we will study the
following approaches to quantum mechanics:
• the 1925 Heisenberg particle picture via time-dependent operators as observables,
and the Poisson–Lie operator equation of motion,
• the 1926 Schrödinger wave picture via time-dependent quantum states, and
the Schrödinger partial differential equation of motion,
• the 1927 Dirac interaction picture which describes the motion under an
interacting force as a perturbation of the interaction-free dynamics, and
• the 1942 Feynman picture based on a statistics for possible classical motions
via the Feynman path integral, which generalizes the 1923 Wiener
integral for the mathematical description of Einstein’s Brownian motion in
diffusion processes from the year 1905.
The fact that it is possible to describe quantum particles in an equivalent way
by either Heisenberg’s particle picture or Schrödinger’s wave picture reflects
a general duality principle in quantum physics:
Quantum particles are more general objects than classical particles
and classical waves.
This has been discovered in the history of physics step by step. Note that,
for didactic reasons, in this section we will not follow the historical route,
but we will present the material in a manner which is most convenient from
the modern point of view.7 Nowadays most physicists prefer the Feynman
approach to quantum physics. In what follows we restrict ourselves to formal
considerations.
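On this purely formal level, the prototype equations behind the pictures listed above can be summarized as follows (standard textbook formulas, collected here only for orientation). In the Heisenberg picture, an observable A(t) without explicit time dependence evolves according to
\[
i\hbar\,\frac{dA(t)}{dt}=[A(t),H],
\]
whereas in the Schrödinger picture the state ψ(t) evolves according to
\[
i\hbar\,\frac{d\psi(t)}{dt}=H\psi(t).
\]
In the Feynman picture, the transition amplitude from the position q_0 at time t_0 to the position q_1 at time t_1 is written formally as the path integral
\[
\langle q_1,t_1\,|\,q_0,t_0\rangle=\int e^{\,iS[q]/\hbar}\,\mathcal{D}q
\]
over all classical paths q with q(t_0) = q_0 and q(t_1) = q_1, where S[q] denotes the classical action.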
The Emergence of Physical Mathematics – a New
Dimension of Mathematics
At the International Congress of Mathematicians in Kyoto in 1990, the young
physicist Edward Witten (Institute for Advanced Study in Princeton) was
awarded the Fields medal in mathematics. In his laudation for Edward Witten,
Sir Michael Atiyah emphasized the following:47
The past decade has seen a remarkable renaissance in the interaction between
mathematics and physics. This has been mainly due to the increasingly
sophisticated mathematical models employed by elementary particle
physicists, and the consequent need to use the appropriate mathematical
machinery. In particular, because of the strongly non-linear nature of the
theories involved, topological ideas and methods have played a prominent
part.
The mathematical community has benefited from this interaction in two
ways. First, and more conventionally, mathematicians have been spurred
into learning some of the relevant physics and collaborating with colleagues
in theoretical physics. Second, and more surprisingly, many of the ideas
emanating from physics have led to significant new insights in purely mathematical
problems, and remarkable discoveries have been made in consequence.
The main input from physics has come from quantum field theory.
While the analytic foundations of quantum field theory have been intensively
studied by mathematicians for many years, the new stimulus has
involved the more formal (algebraic, geometric, topological) aspects.
In all this large and exciting field, which involves many of the leading physicists
and mathematicians in the world, Edward Witten stands clearly as
the most influential and dominating figure. Although he is definitely a
physicist his command of mathematics is rivalled by few mathematicians,
and his ability to interpret physical ideas in mathematical form is quite
unique. Time and again he has surprised the mathematical community by
a brilliant application of physical insight leading to new and deep mathematical
theorems.
In 1986, the American Mathematical Society invited mathematicians and
physicists to a joint symposium devoted to Mathematics: the Unifying
Thread in Science. In his quite remarkable speech, the physicist Steven Weinberg
pointed out the following:48
String theory is right now the hot topic of theoretical physics. According
to this picture, the fundamental constituents of nature are not, in fact,
particles, or even fields, but are instead little strings, little elementary
rubber bands that go zipping around, each in its own state of vibration.
In these theories what we call a particle is just a string in a particular
state of vibration, and what we call a reaction among particles, is just
the collision of two or more strings, each in its own state of vibration,
forming a single joined string which then later breaks up, forming several
independent strings, each again in its own mode of vibration.
It seems like a strange notion for physicists to have come to after all these
years of talking about particles and fields, and it would take too long to
explain why we think this is not an unreasonable picture of nature, but
perhaps I can summarize it in one sentence:
String theories incorporate gravitation.
In fact, not only do they incorporate it, you cannot have a string theory
without gravitation. The graviton, the quantum of gravitational radiation,
the particle which is transmitted when a gravitational force is exerted
between two masses, is just the lowest mode of vibration of a fundamental
closed string (closed meaning that it is a loop). Not only do they include
and necessitate gravitation, but these string theories for the first time allow
a description of gravitation on a microscopic quantum level which is free
of mathematical inconsistencies.
All other descriptions of gravity broke down mathematically, gave nonsensical
results when carried to very small distances or very high energies.
String theory is our first chance at a reasonable theory of gravity which
extends from the very large down to very small and as such, it is natural
that we are all agog over it. String theory itself has focused the attention
of physicists on branches of mathematics that most of us weren’t fortunate
enough to have learned when we were students. You can easily see that a
string (just think of a little bit of cord) travelling through space, sweeps
out a two-dimensional surface.
A very convenient description of string theory is to say that it is
the theory of these two-dimensional Riemann surfaces.
The theory of two-dimensional surfaces is remarkably beautiful. There are
ways of classifying all possible two-dimensional surfaces according to the
handles on them and the number of boundaries, which simply don’t exist in
any higher dimension. The theory of two-dimensional surfaces is a branch
of mathematics that when you get into it is one of the loveliest things you
can learn. It was developed in the 19th century, starting with Riemann, and further developed by mathematicians working in the late 19th century
motivated by problems in complex analysis, and then continuing into the
20th century.
There are mathematicians who have spent their whole lives working on
this theory of two-dimensional surfaces, who have never heard of string
theory (or at least not until very recently). Yet when the physicists started
to figure out how to solve the dynamical problems of strings, and they
realized what they had to do was to perform sums49 over all possible two-dimensional
surfaces in order to add up all the ways that reactions could
occur, they found the mathematics just ready for their use, developed over
the past 100 years.
String theory involves another branch of mathematics which goes back to
group theory.
The equations which govern these surfaces have a very large group
of symmetries, known as the conformal group.
One description of these symmetries is in terms of an algebraic structure
representing all the possible group transformations, which is actually infinite
dimensional. Mathematicians have been doing a lot of work developing
the theory of these infinite dimensional Lie algebras which underlie symmetry
groups, again without a clear motivation in terms of physics, and
certainly without knowing anything about string theory. Yet when the
physicists started to work on it, there it was.
Speaking quite personally, I have found it exhilarating at my stage of life to
have to go back to school and learn all this wonderful mathematics. Some
of us physicists have enjoyed our conversations with mathematicians, in
which
We beg them to explain things to us in terms we can understand.
At the same time the mathematicians are pleased and somewhat bemused
that we are paying attention to them after all these years. The mathematics
department of the University of Texas at Austin now allows the physicists
to use one of their lounges – which would have been unlikely in previous
years.
Unfortunately, I must admit that there is no experimental evidence yet
for string theory, and so, if theoretical physicists are spending more time
talking to the mathematicians, they are spending less time talking to the
experimentalists, which is not good.
The Seven Millennium Prize Problems of the Clay
Mathematics Institute
At the Second World Congress of Mathematicians in Paris in 1900, in a
seminal lecture, Hilbert formulated his famous 23 open problems.50 The
hundredth anniversary of Hilbert’s lecture was celebrated in Paris, in the “Amphithéâtre” of the Collège de France, on May 24, 2000. The Scientific
Advisory Board of the newly founded Clay Mathematics Institute (CMI)
in Cambridge, Massachusetts, U.S.A., selected seven Millennium prize problems.
The Scientific Advisory Board consists of Arthur Jaffe (director of the
CMI, Harvard University, U.S.A.), Alain Connes (Institut des Hautes Études
Scientifiques (IHÉS) and Collège de France), Andrew Wiles (Princeton University,
U.S.A.), and Edward Witten (Institute for Advanced Study, Princeton,
U.S.A.). The CMI explains its intention as follows:
Mathematics occupies a privileged place among the sciences. It embodies
the quintessence of human knowledge, reaching into every field of human
endeavor. The frontiers of mathematical understanding evolve today in
deep and unfathomable ways. Fundamental advances go hand in hand with
discoveries in all fields of science. Technological applications of mathematics
underpin our daily life, including our ability to communicate thanks
to cryptology and coding theory, our ability to navigate and to travel, our
health and well-being, our security, and they also play a central role in our
economy. The evolution of mathematics will remain a central tool in shaping
civilization. To appreciate the scope of mathematical truth challenges
the capabilities of the human mind.
In order to celebrate mathematics in the new millennium, the CMI has
named seven “Millennium prize problems”. The Scientific Advisory Board
of the CMI selected these problems, focusing on important classic questions
that have resisted solution over the years. The Board of Directors of CMI
designated a $ 7 million prize fund to these problems, with $ 1 million
allocated to each.
The seven Millennium prize problems read as follows:
(i) The Riemann conjecture in number theory on the zeros of the Riemann zeta
function and the asymptotics of prime numbers.
(ii) The Birch and Swinnerton–Dyer conjecture in number theory on the relation
between the size of the solution set of a Diophantine equation and the behavior
of an associated zeta function near the critical point s = 1.
(iii) The Poincaré conjecture in topology on the exceptional topological structure
of the 3-dimensional sphere.
(iv) The Hodge conjecture in algebraic geometry on the nice structure of projective
algebraic varieties.
(v) The Cook problem (P versus NP) in computer science of deciding whether a problem
whose answer can be quickly checked with inside knowledge may, without such help,
require much longer to solve, no matter how clever a program we write.
(vi) The solution of the turbulence problem for viscous fluids modelled by the
Navier–Stokes partial differential equations.
(vii) The rigorous mathematical foundation of a unified quantum field theory for
elementary particles.
A detailed description of the problems can be found on the following Internet
address:
http://www.claymath.org/prize-problems/
For a detailed discussion of the seven prize problems, we refer to K. Devlin,
The Millennium Problems: the Seven Greatest Unsolved Mathematical
Puzzles of Our Time, Basic Books, New York, 2002.
The main tasks of quantum field theory. There exist two fundamental
kinds of quantum states, namely, scattering states and bound states. In
terms of classical celestial mechanics, scattering states correspond to comets
and bound states correspond to closed orbits of planets. Physicists
use quantum field theory in order to compute
• the cross section of scattering processes,
• the lifetime of decaying particles,
• the energy of composite particles (bound states), and
• the magnetic moment of particles.
A final theory should also allow us to compute the electric charge and further
properties of elementary particles.
In the classical form of the Standard Model of particle physics, it is assumed that neutrinos are massless and always left-handed.
However, this is only an approximation of reality. On the basis of recent
experiments, physicists assume that neutrinos possess a small mass and there
exist also right-handed neutrinos in nature. This is based on the following
experimental observation. In the burning sun, only electron neutrinos are
produced. The measurements of astrophysicists show a shortage of solar
neutrinos by a factor of two. This solar neutrino problem can be resolved
in the following way: if neutrinos have a small mass, then neutrino
oscillations become possible which convert electron neutrinos into other
types of neutrinos on their way from the sun to the earth; this changes the
number of observed electron neutrinos. As we will show later on, the Dirac
equation allows us to define
the chirality of a fermion in an elegant way.
Three general principles for elementary particles. The following
three principles play a fundamental role in quantum physics.
(i) The principle of indistinguishability. Quantum particles are not individuals.
They cannot be distinguished individually.
(ii) Pauli’s exclusion principle. In contrast to bosons, two fermions can never
be in the same quantum state of a given quantum system.
(iii) Pauli’s spin-statistics principle. Bosons (resp. fermions) obey Bose (resp.
Fermi) statistics.35
We will show later on that principle (i) is responsible for the fact that the
quantum states of bosons (resp. fermions) are symmetric (resp. antisymmetric)
under permutations of elementary particles. Many strange properties of
quantum systems are consequences of (ii) and (iii) (e.g., Bose–Einstein condensation
– a group of bosons which are all in the same quantum state at
extremely low temperature, and which behave like a single entity36).
The color charge of quarks is a consequence of the Pauli exclusion
principle.
In particular, for photons only the quantum states |±1, 1, n⟩ are realized.
This corresponds to the classical fact that the polarization of electromagnetic
waves is always transversal. This means that both the electric and the magnetic
field are transversal to the direction of propagation of the wave. Since
the methods of statistical physics are based on the counting of quantum
states, the following holds true.
If certain quantum states are forbidden by a general principle, then
this has crucial consequences for the physical properties of quantum
systems.
For example, this concerns the Pauli exclusion principle discussed below. The spin of
an elementary particle is related to the 3-dimensional rotation group SO(3).
This is only part of the truth. Since the group SO(3) is not simply connected,
it possesses a nontrivial, simply connected universal covering group, namely the group
SU(2), which is also called Spin(3) (3-dimensional spin group); the covering map is two-to-one.
The spin of elementary particles results from the fact that the irreducible
representations of the group SU(2) can be characterized by a
number j which coincides with the spin quantum number.
Since the irreducible representations of the group SO(3) have integer spin
quantum numbers, j = 0, 1, . . ., one can say that
The existence of fermions is related to the nontrivial topological structure
of the 3-dimensional rotation group.
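In formulas (a standard fact recalled here only for orientation, not a statement taken from the text above; V_j denotes the spin-j representation space and the notation is ours), the covering homomorphism and the labeling of the irreducible representations read
\[
  1 \longrightarrow \{\pm I\} \longrightarrow SU(2) \stackrel{2:1}{\longrightarrow} SO(3) \longrightarrow 1,
  \qquad \dim V_j = 2j+1, \quad j = 0, \tfrac{1}{2}, 1, \tfrac{3}{2}, \dots
\]
The half-integer values of j give genuine representations of SU(2) but only double-valued (projective) representations of SO(3); these describe fermions, while the integer values of j descend to SO(3) and describe bosons.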
In fact, we will show later on that quarks would violate the Pauli exclusion principle
if we did not add additional degrees of freedom (called color). In the framework
of axiomatic quantum field theory created by Gårding and Wightman in the
1960s, the spin-statistics principle is a rigorous mathematical theorem. This
can be found in Streater and Wightman (1968).
Particles and antiparticles are described by mathematical objects
which are dual to each other (elements of a Hilbert space and of its
dual space).
Symmetry, as wide or as narrow you may define its meaning, is one idea
by which man through the ages has tried to comprehend and create order,
beauty, and perfection.
Hermann Weyl, 1952
Symmetry
The determination of the 230 space groups in 1891, independently by
Schoenflies and Fedorov (these are the discrete subgroups of the Euclidean
group which contain three non-coplanar translations) was a masterpiece of
analysis and so was the determination by Groth of the possible properties
of crystals with the symmetries of these space groups in 1926. . .
Symmetry transformations in pre-quantum theory were rather obvious
transformations of 3-dimensional space; in quantum theory they became
unitary transformations of Hilbert space. These form subgroups of all unitary
transformations which are essentially homomorphic to the symmetry
group in question, essentially homomorphic only because a unitary transformation
in quantum mechanics is equivalent to any of its multiples by
a numerical factor (of modulus 1). However, this essential homomorphy
could be reduced, particularly, as a result of Bargmann’s investigations in
most cases to a true homomorphy to an extended group which is called,
then, the quantum mechanical symmetry group.46 The quantum mechanical
operations of the symmetry group break up the Hilbert space of all
states into subspaces each of which is invariant under the operations in
question. . .
The unitary representations of the Poincaré group were determined in the
late 30’s; except for the trivial one, they were all shown to be infinite-dimensional.47
This is equivalent with the statement that no system can
be relativistically invariant unless it can be in an infinity of orthogonal
states. By calling attention to the properties of the unitary representations
of noncompact Lie groups, the physicists have stimulated the mathematicians’
interest in this field. The mathematicians are now very much ahead
of us in this field, and it is not easy to catch up with the results of Gelfand,
Naimark, Harish-Chandra, and so many others.
Eugene Wigner, Gibbs Lecture 196848
Symmetry Principles in Old and New Physics
The most important lesson that we have learned in this century is that the
secret of nature is symmetry. Starting with relativity, proceeding through
the development of quantum mechanics, and culminating in the Standard
Model of particle physics, symmetry principles have assumed a central position
in the fundamental theories of nature. Local gauge theories provide
the basis of the Standard Model and of Einstein’s theory of gravitation. . .
In recent years we have discovered a new and extremely powerful symmetry
– supersymmetry – which might explain many mysteries of the Standard
Model.
Another part of the lesson of symmetry is that much of the texture of the
world is due to mechanisms of symmetry breaking. In quantum mechanical
systems with a finite number of degrees of freedom global symmetries are
realized in only one way. The laws of physics are invariant and the ground
state of the theory is unique and symmetric. . . However, in systems with
an infinite number of degrees of freedom a second realization of symmetry
is possible, in which the ground state is asymmetric. This spontaneous symmetry breaking is responsible for magnetism, superconductivity, and
the structure of the unified electroweak theory in the Standard Model.
The second important lesson we have learned is the idea of renormalization
group and effective dynamics. The decoupling of physical phenomena at
different scales of energy is an essential characteristic of nature. It is this
feature of nature that makes it possible to understand the limited range
of physical phenomena without having to understand everything at once.
The characteristic behavior of the solutions of the renormalization equations
is that they approach a finite dimensional submanifold in the infinite
dimensional space of all theories. This defines an effective low energy
theory. . . Thus, for example quantum chromodynamics is the theory of
quarks whose interactions are mediated by gluons. This is the appropriate
description at energies of billions of electron volts. However, if we want to
describe the properties of ordinary nuclei at energies of millions of electron
volts, we employ instead an effective theory of nucleons, composites of
the quarks, whose interactions are mediated by other quark composites –
mesons. . . There may be more than one, equally fundamental, formulation
of a particular quantum field theory, each appropriate at a different scale
of energy.
David Gross
The triumph and limitations of quantum field theory.49
Symmetries have always played an important role in physics. With quantum
mechanics, however, the interplay between physics and symmetries
has reached a new dimension. The very structure of quantum mechanics
invites the application of group theoretical methods. . .
Symmetries are also a direct mediator between experimental facts and
the theoretical structure of a theory. This is the case because there is a
direct connection between symmetries and conservation laws. Space-time
symmetries are an obvious example. Conservation of energy, momentum
and angular momentum are linked to invariance under time translation,
space translation and rotation in space.
It was in atomic physics that space-time symmetries became significant,
but they are also important in nuclear physics and in all the physics discovered
after that. In nuclear physics, however, a new concept of symmetries,
symmetries in an internal space, was discovered with isospin. All our so-called
fundamental models describing what we know about the strong, weak,
and electromagnetic interactions are built on symmetries in space-time
and internal spaces. These symmetries are not only used to extract information
from a theory, they are also used to construct these theories,
and this for good reasons. It turned out that only theories possessing such
symmetries make sense as quantum field theories. Thus symmetries are
not only a good tool to deal with quantum field theoretical models, they
are necessary to define such models.
Following this line of thought, a new type of symmetry, the so-called supersymmetry,
proved to be extremely successful. Quantum theory seems to
have a very deep relation to supersymmetry.50 Thus it is not surprising that the most promising models of physics are based on supersymmetry,
even when they go beyond a local quantum field theory, as string theory
does.51
Julius Wess
Quantum Theory Centenary, Berlin, 2000
For a deeper understanding of physical processes in nature, it is of fundamental
importance to answer the following question.
Suppose that we observe a process in a physical system. Which transformed
versions of this process can also happen?
As we will show, this question is closely connected with the concept of symmetry
which plays a fundamental role in modern physics. From the mathematical
point of view, symmetry is described by group theory which we will
encounter very frequently in this monograph. At this point, we only want
to discuss some basic ideas. First let us consider three important examples:
energy conservation, irreversible processes, and parity violation in the weak interaction.
Elementary particle processes are invariant under the CPT symmetry. This
is one of the most fundamental symmetries in physics.
Folklore
Permutations and Pauli’s Exclusion Principle
The states of an elementary particle system are invariant under permutations
of the particles (principle of indistinguishable particles). More precisely, the
state vectors are
• symmetric under permutations of the particles for bosons, and
• antisymmetric under permutations of the particles for fermions (they change
sign under odd permutations).
This guarantees that two identical fermions can never stay together
in the same state (Pauli’s exclusion principle). According to Hermann
Weyl, permutation groups also play a crucial role in order to determine the
irreducible representations of the Lie group SU(3) which are fundamental for
elementary particle physics.
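To make the permutation principle concrete, the following minimal numerical sketch (an illustration of ours, not taken from the original text; the function name and the toy 2-dimensional single-particle Hilbert space are chosen ad hoc) builds symmetrized and antisymmetrized two-particle states and checks that the antisymmetric combination vanishes when the two single-particle states coincide, which is Pauli's exclusion principle in its simplest form.

import numpy as np

def two_particle_state(phi, chi, sign):
    # Form phi(x1) chi(x2) + sign * chi(x1) phi(x2) as a Kronecker (tensor) product
    # and normalize it; sign = +1 for bosons, sign = -1 for fermions.
    state = np.kron(phi, chi) + sign * np.kron(chi, phi)
    norm = np.linalg.norm(state)
    return state / norm if norm > 1e-12 else state  # keep the zero vector as it is

# two orthonormal single-particle states in a toy 2-dimensional Hilbert space
phi = np.array([1.0, 0.0])
chi = np.array([0.0, 1.0])

boson_state = two_particle_state(phi, chi, +1)    # symmetric under exchange
fermion_state = two_particle_state(phi, chi, -1)  # antisymmetric under exchange

# Pauli exclusion: antisymmetrizing two *identical* states gives the zero vector.
print(np.linalg.norm(two_particle_state(phi, phi, -1)))  # prints 0.0
print(np.linalg.norm(two_particle_state(phi, phi, +1)))  # prints 1.0 (allowed for bosons)

Exchanging the two particles corresponds to swapping the tensor factors; the fermionic state then picks up a minus sign, while the bosonic state is unchanged.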
Symmetry Breaking
Imperfection of matter sows the seed of death.
Thomas Mann (1875–1955)
Many phenomena in nature can be understood by using the fact that symmetries
are disturbed under an external influence. We speak of symmetry
breaking which is frequently related to phase transition. As a typical process,
consider the cooling of water. At a critical temperature, water is transformed
into bizarre flowers of ice by a phase transition. Obviously, the ice flowers
possess a lower degree of symmetry than the homogeneous water. Mathematically,
it is much easier to describe water than ice flowers. Similarly, physicists
expect that shortly after the Big Bang, there existed only one fundamental
force. In the process of cooling the universe, the gravitational force, the strong
force, the weak force, and the electromagnetic force crystallized out step by
step.
Irreversibility
The time-evolution of living beings is not reversible. There arises the important
question whether processes for elementary particles are always reversible.
The answer is ‘no’. In fact, each process for elementary particles is invariant
under the combined CPT symmetry transformation. If time reversal T were
a universal symmetry, then the CP symmetry would be universally realized.
However, this contradicts the CP violation observed in experiments.
Force equals curvature in modern physics.
Folklore
Physics and Modern Differential Geometry
In modern differential geometry, one starts with the notion of parallel transport,
which corresponds to the transport of information in physics. Parallel
transport allows us to construct covariant derivatives.59 Finally, commutation
relations between covariant derivatives lead us to the crucial notion
of curvature (a formula sketch of this step follows the list below). This will be
studied in Volume III by using the modern language
of fiber bundles, which best fits the idea of parallel transport of mathematical
objects. This approach, called gauge field theory, applies to
• the curvature of curves,
• the classical Gaussian curvature of 2-dimensional surfaces,
• the Riemann curvature of n-dimensional Riemannian manifolds,
• Einstein’s theory of general relativity and the Standard Model of modern
cosmology,
• the Cartan–Ehresmann curvature of fiber bundles,
• the Maxwell theory of electromagnetism with respect to the gauge group
U(1),
• the Yang–Mills gauge field theory with respect to the gauge group SU(n)
where n = 2, 3, . . . ,
• the Standard Model of elementary particle physics with respect to the
gauge group U(1) × SU(2) × SU(3),
• supergravity, and
• string theory.
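To indicate what the commutator step announced above amounts to in the simplest case, here is a sketch in the U(1) (electromagnetic) setting; the sign and charge conventions are illustrative and may differ from those fixed later in the book. For a charged field ψ one sets
\[
  D_\mu = \partial_\mu + iqA_\mu, \qquad
  [D_\mu, D_\nu]\,\psi = iq\,F_{\mu\nu}\,\psi, \qquad
  F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu .
\]
In Maxwell's theory the antisymmetric tensor F_{μν} collects the electric and magnetic field strengths, so the slogan "force = curvature" used below is literally the statement that the field strength is the curvature of the connection A_μ.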
Historical remarks. In the 20th century, physicists discovered step by
step that the fundamental interactions in nature can be described mathematically
by so-called gauge field theories. At this point let us only mention
the following. For example, in nature we observe electrons as basic particles.
Mathematically, electrons are governed by the 1928 Dirac equation which
combines Einstein’s 1905 theory of special relativity with Schrödinger’s 1926
quantum mechanics. Dirac noticed immediately that his equation predicts
the existence of an antiparticle to the electron which has the positive electric
charge e. In 1932 Anderson discovered the positron experimentally in cosmic
rays. Now to the point of gauge field theory.
If we postulate that the Dirac equation is invariant under local symmetries
(i.e., suitable phase transformations), then we have to introduce
mathematically an additional field.
It turns out that this additional field corresponds to the electromagnetic
field. According to Einstein, the electromagnetic field consists of light quanta
(photons). Roughly speaking, we can say that
The existence of the electron implies the existence of its antiparticle
and of the photon which mediates the interaction between electrons
and positrons.
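A minimal formula sketch of this argument (an illustration of ours; the conventions may differ from those used in the systematic discussion in Volume II): demanding that the Dirac equation be invariant under the local phase transformation
\[
  \psi(x) \mapsto e^{iq\alpha(x)}\,\psi(x)
\]
forces us to replace the ordinary derivative by a covariant derivative containing a new field A_μ,
\[
  \partial_\mu \;\longrightarrow\; D_\mu = \partial_\mu + iqA_\mu(x),
  \qquad A_\mu(x) \mapsto A_\mu(x) - \partial_\mu\alpha(x),
\]
and this compensating field is precisely the electromagnetic four-potential whose quanta are the photons.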
The same remains true for the other interactions described by the Standard
Model of particle physics. The existence of the 12 basic particles (quarks and
leptons) of the Standard Model implies the existence of their antiparticles
and of the 12 interacting particles (8 gluons, the photon, and the three vector
bosons W+, W−, Z0). The number of interacting particles is closely
related to the fact that the gauge group U(1)×SU(2)×SU(3) of the Standard
Model of particle physics has 1 + 3 + 8 = 12 dimensions.
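For the reader's convenience, this count follows from the standard dimension formulas for the Lie groups involved:
\[
  \dim U(1) = 1, \qquad \dim SU(n) = n^2 - 1,
\]
so that
\[
  \dim\bigl(U(1)\times SU(2)\times SU(3)\bigr) = 1 + (2^2 - 1) + (3^2 - 1) = 1 + 3 + 8 = 12 .
\]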
In his 1915 theory of general relativity, Einstein described Newton’s gravitational
force by the curvature of the 4-dimensional pseudo-Riemannian
space-time manifold. Élie Cartan discovered in the 1920s that one can assign
the notion of curvature to fairly general mathematical objects. This generalizes
Gauss’ famous theorema egregium. In the 1950s, Ehresmann formulated
the final abstract mathematical theory of the curvature of fiber bundles. Yang
and Mills discovered in 1954 that it is possible to generalize Maxwell’s theory
of electromagnetism to more general noncommutative symmetry groups.
Nowadays we know that the curvature of fiber bundles is behind
• Einstein’s theory of general relativity,
• Maxwell’s theory of electromagnetism,
• quantum electrodynamics,
• the Standard Model of particle physics as a generalization of quantum
electrodynamics,
• string theory,
• supergravity theory, and so on.
In the terminology of physicists, all of these theories are gauge theories.
Mnemonically,
force = curvature.
This is the most important principle of modern physics. Since ancient times,
physicists have made strong efforts to understand the forces in nature. Mathematicians
studied geometric objects and wanted to understand their curvature.
It turns out that physicists and mathematicians studied in fact the
same problem. This beautiful interaction between mathematics and physics
will be thoroughly studied in Volume II on quantum electrodynamics and in
Volume III on gauge theory.
The Challenge of Different Scales in Nature
Between quantum length scales (atomic diameters of about 10^{-10} m) and
the earth’s diameter (10^{6} m) there are about 16 length scales. Most of technology
and much of science occurs in this range. Between the Planck length
(10^{-35} m) and the diameter of the visible universe there are 70 length scales;
70, 16, or even 2 is a very large number. Most theories become intractable
when they require coupling between even two adjacent length scales. Computational
resources are generally not sufficient to resolve multiple length
scales in three-dimensional problems and even in many two-dimensional
problems. The problem is not merely one of presently available computational
resources, which are growing at a rapid rate. To obtain an extra
factor of 10 in computational resolution requires in the most favorable
case a factor of 10^{4} in computational resources for time-dependent three-dimensional
problems. When multiple length scales are in question, the
under-resolution of computations performed with today’s algorithms will
be with us for some time to come, and the essential role which must be assigned
to theory, and to the design of algorithms of a new nature, becomes
evident. It is for this reason that nonlinear and stochastic phenomena, often
described by the theory of coherent and chaotic structures, coupling
adjacent and multiple length scales, is a vital topic.1
James Glimm, 1991
The Trouble with Scale Changes
In physics we frequently have to perform singular limits when passing from
one essential scale to another one. There are the following typical examples:
(i) the singular limit from Einstein’s theory of general relativity to Newtonian
mechanics,
(ii) the singular limit from the mesoscopic Boltzmann equation to the macroscopic
Navier–Stokes equations in continuum mechanics,
(iii) phase transitions as a singular limit related to the Ginzburg–Landau
equation,
(iv) thin films as singular limits,
(v) thin plates as singular limits of 3-dimensional elasticity theory,
(vi) the emergence of microstructures in nature and in high technology.
Wilson’s Renormalization Group Theory in Physics
Each Lagrangian density represents a physical theory. The idea of flows for
ordinary differential equations can be generalized to flows (or semi-flows) in
the space of physical theories. It turns out that appropriate fixed points of
the semi-flow correspond to phase transitions. The physical idea behind this
fixed point is the observation that
Physical systems are invariant under rescaling at phase transitions.
Intuitively, this is based on the following fact: Since the correlation length
becomes infinite at a phase transition (large fluctuations), the system loses its
typical length scale. We will study this in great detail in the later volumes. In
the collection of seminal papers that appeared in the journal Physical Review,
Joel Lebowitz writes the following:2
The Wilson renormalization group in statistical physics had antecedents in
quantum field theory by Gell-Mann and Low.3 Following Wilson’s work,
the renormalization group method has spread and had enormous influence
on almost all fields of science. It provides a method for quantitative
analysis of the “essential” features of a large class of nonlinear phenomena
exhibiting self-similar structures. This includes not only scale invariant
critical systems (phase transitions) where fluctuations are “infinite” on the
microscopic spatial and temporal scale, but also fractals, dynamical systems
exhibiting Feigenbaum period doubling, Kolmogorov–Arnold–Moser
theory (KAM theory) on critical resonances in celestial mechanics, singular
behavior in nonlinear partial differential equations, and “chaos”. Even
where not directly applicable, the renormalization group often provides a
paradigm for the analysis of complex phenomena. . . A lot of mathematical
work remains to be done to make it into a well-defined theory of phase
transitions.
For his theory of critical phenomena in terms of the renormalization group,
Kenneth Wilson (born 1935) was awarded the Nobel prize in physics in 1982.
Wilson’s ideas changed the paradigm of theoretical physics.
(i) In the past, physicists studied specific theories like the motion of planets
around the sun or the motion of the electron around the nucleus of the
hydrogen atom.
(ii) Nowadays physicists want to study the behavior of physical phenomena
at quite different scales. The idea of the renormalization group helps to
bridge the different scales.
As a typical example for (ii), consider the cooling of the universe after the Big
Bang. To understand this, we have to study the behavior of elementary particles
at completely different energy scales. Let us mention two fundamental
phase transitions in the early universe:
• First phase transition: Using the method of running coupling constants
in renormalized quantum field theory, physicists discovered that, 10⁻³⁵
seconds after the Big Bang at a temperature of 10²⁸ K, strong interaction
and electroweak interaction decoupled. This phase transition corresponds
to a particle energy of 10¹⁵ GeV.
• Second phase transition: 10⁻¹² seconds after the Big Bang at a temperature
of 10¹⁶ K, weak and electromagnetic interaction decoupled. This corresponds
to a particle energy of 10³ GeV.
Note that the most powerful particle accelerator in the world will begin to
work at CERN (Geneva, Switzerland) in the year 2008. There, the particle
energy will be about 10³ GeV. This means that we will be able in the near future
to reach the energy scale of the second phase transition in a huge laboratory
on earth.
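To make the phrase "running coupling constants" a bit more concrete, here is a minimal numerical sketch, not taken from the quoted text: it integrates the standard one-loop renormalization group equation for the strong coupling, with the one-loop coefficient b0 = 11 − 2nf/3 and the commonly quoted reference value αs(91 GeV) ≈ 0.118 taken as assumptions of the illustration; quark-flavor thresholds are ignored.

```python
import math

def alpha_s(Q, alpha_ref=0.118, mu_ref=91.0, n_f=5):
    """One-loop running of the strong coupling constant.

    Exact solution of d(alpha)/d(ln Q^2) = -(b0 / 4 pi) * alpha^2:
        1/alpha(Q) = 1/alpha(mu_ref) + (b0 / 2 pi) * ln(Q / mu_ref).
    Q and mu_ref are energy scales in GeV; n_f is kept fixed for simplicity.
    """
    b0 = 11.0 - 2.0 * n_f / 3.0
    return 1.0 / (1.0 / alpha_ref + b0 / (2.0 * math.pi) * math.log(Q / mu_ref))

# The coupling decreases slowly ("asymptotic freedom") as the energy scale
# grows from collider energies (10^3 GeV) toward the unification scale (10^15 GeV).
for Q in (1e2, 1e3, 1e6, 1e10, 1e15):
    print(f"Q = {Q:9.1e} GeV   alpha_s(Q) = {alpha_s(Q):.4f}")
```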
If we want to create a unified theory for all fundamental interactions in
the universe including gravitation, then we have to bridge 60 scales,
• from the radius of the visible universe, r = 10²⁸ cm,
• down to the Planck length, l = 10⁻³³ cm.
A New Paradigm in Physics
To emphasize the role of Wilson’s new paradigm in physics, let us quote from
the introduction to a series of lectures on the renormalization group given by
David Gross:
Physics is scale dependent. For example, consider a fluid. At each scale of
distances, we need a different theory to describe its behavior:
• at ∼ 1 cm – classical continuum mechanics (Navier–Stokes equations),
• at ∼ 10⁻⁵ cm – theory of granular structure,
• at 10⁻⁸ cm – theory of the atom (nucleus plus electronic cloud),
• at 10⁻¹³ cm – nuclear physics (nucleons),
• at ∼ 10⁻¹³ cm – 10⁻¹⁸ cm – quantum chromodynamics (quarks),
• at ∼ 10⁻³³ cm – string theory.
At each scale, we have different degrees of freedom and different dynamics.
Physics at a larger scale (largely) decouples from the physics at a smaller
scale. For example, to describe the behavior of a fluid at the scale ∼ 1cm,
we do not need to know about the granular structure, nor about atoms
or nucleons. The only things we need to know are the viscosity and the
density of the fluid. Of course, these values can be computed from the
physics of a smaller scale, but if we found them out in some way (for
example, measurement), we can do without smaller scale theories at all.
Similarly, if we want to describe atoms, we do not need to know anything
about the nucleus except its mass and electric charge.
Thus, a theory at a larger scale remembers only finitely many parameters
from the theories at smaller scales, and throws the rest of the details away.
More precisely, when we pass from a smaller scale to a larger scale, we
average over irrelevant degrees of freedom. Mathematically, this means
that they become integration variables and thus disappear in the answer.
This decoupling is the reason why we are able to do physics. If there was
no decoupling, it would be necessary for Newton to know string theory to
describe the motion of a viscous fluid. . .
The general aim of the renormalization group method is to explain how
this decoupling takes place and why exactly information is transmitted
from scale to scale through finitely many parameters. In quantum theory,
decoupling of scales is not at all obvious. Indeed, because of the uncertainty
principle, we have to work at all scales at once. The renormalization group
describes why decoupling survives in quantum theory.
The Adler–Bell–Jackiw Anomaly
In 1969 Adler, Bell, and Jackiw pointed out that there exist special Feynman
diagrams in the theory of electroweak interaction which cause nasty divergent
expressions called Adler–Bell–Jackiw anomalies. Fortunately enough, these
anomalies disappear if one postulates the following lepton–quark symmetry:
The number of leptons is equal to the number of quarks.
This condition is fulfilled in the Standard Model of particle physics. Here, we
have six leptons and six quarks. For the theory of anomalies, we refer to the
monograph by Fujikawa and Suzuki (2004).
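A small side calculation, a sketch of ours rather than a statement from the text: with the lepton–quark symmetry in place, the electric charges of the fermions of one generation sum to zero once the three colors of each quark are counted, and this charge sum is one of the combinations that must vanish for the anomalies to cancel.

```python
# Electric charges (in units of the elementary charge e) of one Standard Model
# generation; each quark flavor comes in three colors.
N_COLORS = 3
charges = {
    "up quark":   +2.0 / 3.0,
    "down quark": -1.0 / 3.0,
    "electron":   -1.0,
    "neutrino":    0.0,
}

total = sum((N_COLORS if "quark" in name else 1) * q for name, q in charges.items())
print(f"charge sum of one generation: {total:+.6f}")  # +0.000000
```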
In statistical mechanics, it turns out that the renormalization-group
flow (or semi-flow) corresponds to iterative methods which possess
a fixed point along with the typical property that there exist stable
and unstable manifolds which are passing through the fixed point. In
terms of physics, the fixed point describes a phase transition.
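As a minimal illustration of such a renormalization-group flow (a sketch, not the book's construction), consider the real-space decimation of the one-dimensional Ising chain: integrating out every second spin maps the dimensionless coupling K to K′ with tanh K′ = tanh² K. Iterating this map exhibits the flow toward the stable fixed point tanh K = 0 (infinite temperature), while tanh K = 1 (zero temperature) is the unstable fixed point.

```python
import math

def decimate(K):
    """One decimation step for the 1d Ising chain: integrating out every
    second spin renormalizes the coupling K to K' with tanh K' = (tanh K)**2."""
    return math.atanh(math.tanh(K) ** 2)

K = 2.0  # start deep in the low-temperature regime (large coupling)
for step in range(8):
    print(f"step {step}: K = {K:.6f}, tanh K = {math.tanh(K):.6f}")
    K = decimate(K)
# The flow ends at the stable fixed point K = 0; only K = infinity stays put,
# which reflects the absence of a finite-temperature phase transition in 1d.
```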
Analyticity
Mathematicians and physicists like holomorphic and meromorphic functions, since the local behavior of such functions determines completely their global behavior.
Folklore
The theory of complex-valued holomorphic functions is one of the most beautiful parts of mathematics; it has played a key role in the development of modern
mathematics (analysis, topology, algebraic geometry, and number theory). As an introduction
to complex function theory, we recommend Hurwitz and Courant (1964)
(classic), Remmert (1991), (1998), and Zeidler (2004).
Holomorphic functions. A function f : U → C is called holomorphic on the open
set U iff it is complex differentiable at each point z of U.
Entire functions
An entire function is a function that is holomorphic on the whole complex plane. Typical examples are polynomials, the exponential function, and sums, products, and compositions of these. Every entire function can be represented as an everywhere convergent power series; the logarithm and the square root, by contrast, are not entire.
The order of an entire function can be defined by a limit superior,
ρ = limsup_{r→∞} ln ln M(r) / ln r,
where r is the distance from 0 and M(r) is the maximum of |f(z)| on the circle |z| = r. If ρ is finite, one can also define the type,
σ = limsup_{r→∞} ln M(r) / r^ρ.
An entire function may have a singularity at infinity, even an essential singularity; in that case the function is called a transcendental entire function. By Liouville's theorem, a function which is holomorphic on the whole Riemann sphere (the complex plane together with the point at infinity) is constant.
Liouville's theorem establishes an important property of entire functions: every bounded entire function is constant. This property can be used to prove the fundamental theorem of algebra. Picard's little theorem strengthens Liouville's theorem: every non-constant entire function attains every complex value with at most one exception; for example, the exponential function never takes the value zero.
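As a quick numerical illustration of the order formula above (a sketch of ours; the maximum modulus M(r) is approximated by sampling the circle |z| = r rather than computed exactly), the estimates ln ln M(r) / ln r approach 1 for the exponential function, which has order 1 and type 1:

```python
import cmath
import math

def max_modulus(f, r, samples=2000):
    """Approximate M(r) = max |f(z)| on the circle |z| = r by sampling."""
    return max(abs(f(r * cmath.exp(2j * math.pi * k / samples)))
               for k in range(samples))

def order_estimate(f, r):
    """The quantity ln ln M(r) / ln r, which tends to the order as r grows."""
    return math.log(math.log(max_modulus(f, r))) / math.log(r)

for r in (10.0, 50.0, 200.0):
    print(f"r = {r:6.1f}: order estimate for exp = {order_estimate(cmath.exp, r):.4f}")
```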
Entire functions. By definition, a function f : C → C is called entire iff it
is holomorphic on the complex plane. For example, polynomials, the exponential
function z → ez, and the trigonometric functions z → sin z and z → cos z are entire
functions.
Locally holomorphic functions. Let z0 be a point of the complex plane. A
complex-valued function f is called locally holomorphic at the point z0 iff there
exists an open ball B(z0) centered at z0 such that the function f : B(z0) → C is
holomorphic.
Biholomorphic functions. Let U and V be open subsets of the complex plane
C. A function
f : U → V
is called biholomorphic iff it is bijective and both f and f−1 are holomorphic. Biholomorphic
maps are always angle-preserving, that is, the oriented angles between
intersecting curves are preserved.
Conformal maps. Fix the point z0 of the complex plane and the complex
numbers a, b with b ≠ 0. The function
f(z) := a + b(z − z0), z ∈ C
is the superposition of a translation, a rotation around the center z0, and a similarity
transformation with respect to z0. Obviously, this map is angle-preserving and a
biholomorphic map f : C → C from the complex plane onto itself. Such a map is
called a conformal map of the complex plane onto itself. We want to generalize this
concept. To this end, let
f : U → C (4.1)
be a holomorphic function on the nonempty open subset U of the complex plane.
This map is called a conformal map from U onto f(U) iff it is an angle-preserving
diffeomorphism1 from the set U onto the set f(U). For a function (4.1) on the
nonempty open subset U of the complex plane C, the following three properties are
equivalent.
(i) The map f is conformal from U onto f(U).
(ii) The function f : U → C is holomorphic, injective, and f′(z0) ≠ 0 for all points
z0 in U.2
(iii) The set f(U) is open and the function f : U → f(U) is biholomorphic.
In the case (ii), the function f looks locally like
f(z) = f(z0) + f′(z0)(z − z0) + . . .
in a sufficiently small open neighborhood of each point z0 ∈ U. Because of the
condition f′(z0) ≠ 0, the map f is not locally degenerate at z0.
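The angle-preserving property can be checked numerically. The following sketch (an illustration, not part of the quoted text) takes two curves through a point z0 where f′(z0) ≠ 0, computes the tangent directions of the image curves under the holomorphic map f(z) = e^z by finite differences, and compares the oriented angle before and after the map:

```python
import cmath

def image_tangent(f, z0, d, h=1e-6):
    """Numerical tangent direction of the image curve t -> f(z0 + t*d) at t = 0."""
    return (f(z0 + h * d) - f(z0)) / h

z0 = 0.3 + 0.4j
d1, d2 = cmath.exp(0.2j), cmath.exp(1.1j)   # tangent directions of two curves at z0
f = cmath.exp                               # holomorphic with f'(z0) = exp(z0) != 0

angle_before = cmath.phase(d2 / d1)                       # oriented angle at z0
angle_after = cmath.phase(image_tangent(f, z0, d2) /
                          image_tangent(f, z0, d1))       # angle between the image curves
print(f"angle before the map: {angle_before:.8f}")
print(f"angle after the map : {angle_after:.8f}")  # equal up to the finite-difference error
```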
Meromorphic functions. The function f : U → C has an isolated singularity at
the point z0 iff it is holomorphic in a punctured open neighborhood of z0. The
function f is called meromorphic on U iff it is holomorphic on U up to isolated
singularities which are poles.
The poles of meromorphic functions describe essential physical properties
(e.g., the masses of elementary particles).
Winding number
The winding number of a closed curve in the plane around a given point is an integer which counts the total number of times the curve travels around that point. The winding number depends on the orientation of the curve; it is negative if the curve travels around the point in the clockwise direction.
Winding numbers are a fundamental concept in algebraic topology, and they also play an important role in vector calculus, complex analysis, geometric topology, differential geometry, and physics.
Description
Suppose we are given an oriented closed curve in the xy-plane. We may think of the curve as the trajectory of a moving object, the direction of motion being the orientation of the curve. The winding number of the curve is then the total number of counterclockwise turns the object makes around the origin.
When counting the total number of turns around the origin, counterclockwise motion counts as positive and clockwise motion counts as negative. For example, if the object first circles the origin four times counterclockwise and then once clockwise, the winding number of the curve is 3.
With this convention, a curve that does not travel around the origin at all has winding number zero, while a curve that travels clockwise around the origin has negative winding number. The winding number of a curve can therefore be any integer. (The accompanying figures, not reproduced here, show curves of winding number −2, −1, 0, 1, 2, and 3.)
Formal definition
A curve in the xy-plane can be defined by parametric equations x = x(t), y = y(t) with 0 ≤ t ≤ 1. If we regard the parameter t as time, these equations describe the motion of an object in the plane between t = 0 and t = 1. The trajectory of this motion is a curve as long as the functions x(t) and y(t) are continuous, and the curve is closed provided the positions of the object at t = 0 and t = 1 coincide.
We can define the winding number of such a curve using polar coordinates. Assume the curve does not pass through the origin and write the parametric equations in polar form, x = r(t) cos θ(t), y = r(t) sin θ(t), where the functions r(t) and θ(t) are required to be continuous and r > 0. Since the initial and the final position coincide, θ(0) and θ(1) must differ by an integer multiple of 2π. This integer is the winding number:
winding number = (θ(1) − θ(0)) / 2π.
This formula defines the winding number of a curve in the xy-plane around the origin. By translating the coordinate system, the definition extends to the winding number around any point p.
Other definitions
The winding number is often defined in different ways in different branches of mathematics. All of the definitions below are equivalent to the one given above.
Differential geometry
In differential geometry the parametric equations are usually assumed to be differentiable (or at least piecewise differentiable). In this case the polar angle θ is related to the Cartesian coordinates x and y by dθ = (x dy − y dx) / (x² + y²).
By the fundamental theorem of calculus, the total change of θ equals the integral of dθ. We can therefore express the winding number of a differentiable closed curve C as a line integral:
winding number = (1/2π) ∮_C (x dy − y dx) / (x² + y²).
Complex analysis
In complex analysis the winding number of a closed curve C can be expressed in terms of the complex coordinate z = x + iy. In particular, if we write z = r e^{iθ}, then dz/z = dr/r + i dθ. The total change of ln(r) along the closed curve is zero, so the integral of dz/z equals i times the total change of θ. Hence
winding number = (1/2πi) ∮_C dz/z.
More generally, the winding number of C around any complex number a is given by
n(C, a) = (1/2πi) ∮_C dz/(z − a).
This is a special case of the Cauchy integral formula. Winding numbers play a very important role throughout complex analysis (for example, in the statement of the residue theorem).
Turning number
One can also consider the winding number of a curve with respect to itself (also called the turning number), that is, the number of turns made by the tangent vector of the curve. (In the figure of the original article, not reproduced here, the curve has turning number 4, or −4, the small loop being counted as well.) The turning number is defined only for differentiable, smooth curves; see the theorem on turning tangents.
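Here is a small numerical sketch (an illustration, not part of the quoted article) of the formulas above: the winding number is recovered by summing the increments of the polar angle along a discretized closed curve.

```python
import cmath
import math

def winding_number(curve, a=0j, samples=4000):
    """Winding number of the closed curve t -> curve(t), 0 <= t <= 1, around the
    point a, computed as the total change of arg(z - a) divided by 2*pi."""
    total = 0.0
    prev = curve(0.0) - a
    for k in range(1, samples + 1):
        z = curve(k / samples) - a
        total += cmath.phase(z / prev)   # angle increment, lies in (-pi, pi]
        prev = z
    return round(total / (2 * math.pi))

# Three counterclockwise turns around the origin ...
print(winding_number(lambda t: cmath.exp(2j * math.pi * 3 * t)))                  # 3
# ... and two clockwise turns around the point a = 1:
print(winding_number(lambda t: 1 + cmath.exp(-2j * math.pi * 2 * t), a=1 + 0j))   # -2
```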
A Glance at Topology
Topology is precisely that mathematical discipline which allows a passage
from the local to the global.
René Thom (1923–2002)
Topology studies the qualitative behavior of mathematical and physical objects.
The following results discussed in the preceding chapter are related to topology:
• deformation invariance of the integral of holomorphic functions,
• Cauchy’s residue theorem,
• properties of the winding number,
• Liouville’s theorem,
• analytic continuation of holomorphic functions,
• Abelian integrals and Riemann surfaces.
Topology was created by Poincaré (1854–1912) at the end of the 19th century
and was motivated by the investigation of Riemann surfaces and the qualitative
behavior of the orbits of planets, asteroids, and comets in celestial mechanics.
Topology studies far-reaching generalizations of the results summarized above.
Intersection number
From Wikipedia, the free encyclopedia
This article is about algebraic geometry. For the concept in graph theory, see Intersection number (graph theory).
In mathematics, and especially in algebraic geometry, the intersection number generalizes the intuitive notion of counting the number of times two curves intersect to higher dimensions, multiple (more than 2) curves, and accounting properly for tangency. One needs a definition of intersection number in order to state results like Bézout's theorem.
The intersection number is obvious in certain cases, such as the intersection of x- and y-axes which should be one. The complexity enters when calculating intersections at points of tangency and intersections along positive dimensional sets. For example if a plane is tangent to a surface along a line, the intersection number along the line should be at least two. These questions are discussed systematically in intersection theory.
Definition for Riemann surfaces
Let X be a Riemann surface. Then the intersection number of two closed curves on X has a simple definition in terms of an integral. For every closed curve c on X (i.e., smooth function ), we can associate a differential form with the pleasant property that integrals along c can be calculated by integrals over X:
, for every closed (1-)differential on X,
where is the wedge product of differentials, and is the Hodge star. Then the intersection number of two closed curves, a and b, on X is defined as
.
The have an intuitive definition as follows. They are a sort of Dirac delta along the curve c, accomplished by taking the differential of a unit step function that drops from 1 to 0 across c. More formally, we begin by defining for a simple closed curve c on X, a function fc by letting be a small strip around c in the shape of an annulus. Name the left and right parts of as and . Then take a smaller sub-strip around c, , with left and right parts and . Then define fc by
.
The definition is then expanded to arbitrary closed curves. Every closed curve c on X is homologous to for some simple closed curves ci, that is,
, for every differential .
Define the by
.
Definition for algebraic varieties
The usual constructive definition in the case of algebraic varieties proceeds in steps. The definition given below is for the intersection number of divisors on a nonsingular variety X.
1. The only intersection number that can be calculated directly from the definition is the intersection of hypersurfaces (subvarieties of X of codimension one) that are in general position at x. Specifically, assume we have a nonsingular variety X, and n hypersurfaces Z1, ..., Zn which have local equations f1, ..., fn near x for polynomials fi(t1, ..., tn), such that the following hold:
- .
- for all i. (i.e., x is in the intersection of the hypersurfaces.)
- (i.e., the divisors are in general position.)
- The fi are nonsingular at x.
Then the intersection number at the point x is
,
where is the local ring of X at x, and the dimension is dimension as a k-vector space. It can be calculated as the localization , where is the maximal ideal of polynomials vanishing at x, and U is an open affine set containing x and containing none of the singularities of the fi.
2. The intersection number of hypersurfaces in general position is then defined as the sum of the intersection numbers at each point of intersection.
3. Extend the definition to effective divisors by linearity, i.e.,
and .
4. Extend the definition to arbitrary divisors in general position by noticing every divisor has a unique expression as D = P - N for some effective divisors P and N. So let Di = Pi - Ni, and use rules of the form
to transform the intersection.
5. The intersection number of arbitrary divisors is then defined using a "Chow's moving lemma" that guarantees we can find linearly equivalent divisors that are in general position, which we can then intersect.
Note that the definition of the intersection number does not depend on the order of the divisors.
Further definitions
The definition can be vastly generalized, for example to intersections along subvarieties instead of just at points, or to arbitrary complete varieties.
In algebraic topology, the intersection number appears as the Poincaré dual of the cup product. Specifically, if two manifolds, X and Y, intersect transversely in a manifold M, the homology class of the intersection is the Poincaré dual of the cup product of the Poincaré duals of X and Y.
Intersection multiplicities for plane curves
There is a unique function assigning to each triplet (P, Q, p) consisting of a pair of polynomials, P and Q, in K[x, y] and a point p in K2 a number Ip(P, Q) called the intersection multiplicity of P and Q at p that satisfies the following properties:
- is infinite if and only if P and Q have a common factor that is zero at p.
- is zero if and only if one of P(p) or Q(p) is non-zero (i.e. the point p is off one of the curves).
- where the point p is at (x, y).
- for any R in K[x, y]
Although these properties completely characterize intersection multiplicity, in practice it is realised in several different ways.
One realization of intersection multiplicity is through the dimension of a certain quotient space of the power series ring K[[x,y]]. By making a change of variables if necessary, we may assume that the point p is (0,0). Let P(x, y) and Q(x, y) be the polynomials defining the algebraic curves we are interested in. If the original equations are given in homogeneous form, these can be obtained by setting z = 1. Let I = (P, Q) denote the ideal of K[[x,y]] generated by P and Q. The intersection multiplicity is the dimension of K[[x, y]]/I as a vector space over K.
Another realization of intersection multiplicity comes from the resultant of the two polynomials P and Q. In coordinates where p is (0,0), if the curves have no other intersections with y = 0 and the degree of P with respect to x is equal to the total degree of P, then Ip(P, Q) can be defined as the highest power of y that divides the resultant of P and Q (with P and Q seen as polynomials over K[x]).
Intersection multiplicity can also be realised as the number of distinct intersections that exist if the curves are perturbed slightly. More specifically, if P and Q define curves which intersect only once in the closure of an open set U, then for a dense set of (ε,δ) in K2, P − ε and Q − δ are smooth and intersect transversally (i.e. have different tangent lines) at exactly some number n of points in U. Ip(P, Q) = n.
Example
Consider the intersection of the x-axis (the curve Q = y) with the parabola y = x², that is, P = y − x², at the point p = (0, 0). Then
Ip(P, Q) = Ip(y − x², y) = Ip(y, −x²) = 2 Ip(y, x) = 2,
using the symmetry of Ip, the fact that adding a multiple of one polynomial to the other does not change the multiplicity, and Ip(y, x) = 1 for the transversal intersection of the coordinate axes.
Thus, the intersection multiplicity is two; it is an ordinary tangency.
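The "perturbation" characterization described above can be checked with a short computer-algebra sketch (an illustration of ours, using the sympy package): perturbing the line y = 0 to y = δ splits the tangential intersection with the parabola y = x² into two transversal intersection points, in accordance with the multiplicity 2.

```python
import sympy as sp

x, y, delta = sp.symbols('x y delta')

P = y - x**2   # the parabola  y = x^2
Q = y          # the x-axis    y = 0

# The unperturbed curves meet only at the origin (a tangential intersection):
print(sp.solve([P, Q], [x, y]))            # [(0, 0)]

# Perturbing the line to y = delta splits this point into two transversal
# intersections x = -sqrt(delta) and x = +sqrt(delta), so the intersection
# multiplicity of the two curves at the origin is 2:
print(sp.solve([P, Q - delta], [x, y]))    # two solutions
```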
Self-intersections
Some of the most interesting intersection numbers to compute are self-intersection numbers. This should not be taken in a naive sense. What is meant is that, in an equivalence class of divisors of some specific kind, two representatives are intersected that are in general position with respect to each other. In this way, self-intersection numbers can become well-defined, and even negative.
Applications
The intersection number is partly motivated by the desire to define intersection to satisfy Bézout's theorem.
The intersection number arises in the study of fixed points, which can be cleverly defined as intersections of function graphs with a diagonal. Calculating the intersection numbers at the fixed points counts the fixed points with multiplicity, and leads to the Lefschetz fixed point theorem in quantitative form.
Topological invariants. Suppose that we assign a mathematical object to a
class of topological spaces.
• This object is called a topological invariant iff it is invariant under homeomorphisms.
• This object is called a homotopy invariant iff it is the same for homotopically
equivalent topological spaces.
The most important topological invariant is the Euler characteristic to be introduced
below. Topological invariants play a fundamental role in mathematics and
physics (e.g., Betti numbers, Gauss’ linking number, genus of a Riemann surface,
winding number, mapping degree, Morse index of a dynamical system, characteristic
classes, Stiefel–Whitney classes, Chern classes and Chern numbers, Atiyah–Singer
index, Gromov–Witten invariants, Donaldson invariants of 4-dimensional manifolds,
Seiberg–Witten invariants, Jones polynomials of knots, homology groups, cohomology
groups, homotopy groups, K-groups as generalized cohomology groups). We
will study this thoroughly in Volume IV on quantum mathematics. At this point,
we will only sketch some basic ideas.
As a rule, topological invariants are also homotopy invariants.
Topological Quantum Numbers
In the 20th century, physicists learned that quantum phenomena in nature can be
classified by quantum numbers. There arises the question how to describe quantum
numbers in terms of mathematics. It turns out that there are two important
possibilities to obtain quantum numbers, namely,
(S) symmetry (the representation theory of compact Lie groups), and
(T) topology (topological invariants as topological charges or topological quantum
numbers).
The Euler characteristic of homotopically equivalent topological
spaces is the same.
This is a deep result of modern topology.
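As an elementary illustration (a sketch of ours, not the book's forthcoming definition), the combinatorial Euler characteristic χ = V − E + F takes the same value for different polyhedral models of the sphere and distinguishes the sphere from the torus:

```python
def euler_characteristic(vertices, edges, faces):
    """Combinatorial Euler characteristic chi = V - E + F of a polyhedral surface."""
    return vertices - edges + faces

# Three different polyhedral models of the 2-sphere ...
print(euler_characteristic(4, 6, 4))     # tetrahedron  -> 2
print(euler_characteristic(8, 12, 6))    # cube         -> 2
print(euler_characteristic(12, 30, 20))  # icosahedron  -> 2

# ... and the standard cell decomposition of the torus (1 vertex, 2 edges, 1 face):
print(euler_characteristic(1, 2, 1))     # torus        -> 0
```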
Topology is rooted in Maxwell’s theory on the electromagnetic field.
Folklore
Of the geometria situs, which Leibniz (1646–1716) sensed, and of which
only a few geometers, Euler (1707–1783) and Vandermonde (1735–1796),
were granted an obscured view, we know and have, after a hundred and
fifty years, still little more than nothing.
Carl Friedrich Gauss, 1833
It was the discovery by Gauss of this very integral expressing the work
done on a magnetic pole while describing a closed curve in presence of a
closed electric current and indicating the geometric connection between the
two closed curves, that led him to lament the small progress made in the
Geometry of Position since the time of Leibniz, Euler and Vandermonde.
We now have some progress to report, chiefly due to Riemann, Helmholtz
and Listing.
James Clerk Maxwell, 1873
A Treatise on Electricity and Magnetism
In obtaining a topological invariant by using a physical field theory, Gauss
had anticipated Topological Field Theory by almost 150 years. Even the
term topology was not used then. It was introduced by Johann Listing
(1806–1882), a student and protégé of Gauss, in his 1847 essay Preliminary
Studies on Topology. Gauss’ linking number formula can also be interpreted
as the equality of topological and analytical degree of a suitable function.
Starting with this, a far-reaching generalization of the Gauss integral to
higher linking and self-linking integrals can be obtained. This forms a small
part of a program initiated by Maxim Kontsevich to relate topology of
low-dimensional manifolds, homotopical algebras, and non-commutative
geometry with topological field theories and Feynman diagrams in physics.
Kishore Marathe, 2001
A chapter in physical mathematics: theory of knots in the sciences
Like others, we came to the heat kernel via one direction of mathematics.
However, as we progressed in that direction, we realized that the heat
kernel plays a central role in almost all directions we can think of. That
it bears a name related to physics only indicates that it was originally
discovered in connection with heat. But even in physics, its significance
goes way beyond giving a mathematical model for heat distribution.
The name cannot be changed – it’s too late for that – but the impression
some people may have that an occurrence of a certain kernel called the
heat kernel means one is necessarily doing physics is a false impression.
Maybe one is and maybe one isn’t. . . There is a universal gadget which is
a dominant factor practically everywhere in mathematics, also in physics,
and has very simple and powerful properties. We have no a priori explanation
(psychological, philosophical, mathematical) for the phenomenon of
the existence of such a universal gadget.
Jay Jorgenson and Serge Lang, 2001
The Ubiquitous Heat Kernel
In the last twenty years a body of mathematics has evolved with strong
direct input from theoretical physics, for example from classical and quantum
field theories, statistical mechanics and string theory. In particular,
in the geometry and topology of low dimensional manifolds (i.e., manifolds
of dimensions 2, 3 and 4) we have seen new results, some of them
quite surprising, as well as new ways of looking at old results. Donaldson’s
work based on his study of the solution space of the Yang–Mills equations,
monopole equations of Seiberg–Witten, Floer homology, quantum groups
and topological quantum field theoretical interpretation of the Jones polynomial
and other knot invariants are some of the examples of this development.
Donaldson, Jones and Witten have received the Fields medal for
their work. We think the name “Physical Mathematics” is appropriate to
describe this new, exciting and fast growing area of mathematics. Recent
developments in knot theory make it an important chapter in “Physical
Mathematics.” Until the early 1980s it was an area in the backwaters of
topology. Now it is a very active area of research with its own journal.32
Kishore Marathe, 2001
A chapter in physical mathematics: theory of knots in the sciences
The Jones polynomials were discovered by Jones in 1985 when studying the
structure of von Neumann operator algebras.34 A few years later, Witten (1989)
published a fundamental paper on a beautiful physical interpretation of the Jones
polynomials via a special model in quantum field theory. The idea is to use
• the principle of critical action for the Chern–Simons Lagrangian on the 3-
dimensional sphere S3 with respect to the gauge group SU(2);
• quantization of this classical field theory yields the corresponding Feynman functional
integral, as a formal partition function;
• to each knot on S3 one can assign a physical quantity called the Wilson loop;
• finally, the Jones polynomial of a knot or link is the vacuum expectation value
of the corresponding Wilson loop.
Roughly speaking, the Chern–Simons Lagrangians represent quite natural gauge
field theories on 3-dimensional manifolds.
Topological quantum field theory. This new branch of topology was
founded by E. Witten, Topological quantum field theory, Commun. Math. Phys.
117 (1988), 353–386. The basic idea of topological quantum field theory is to use
the Lagrangians of special gauge field theories and the corresponding Feynman
functional integrals in order to construct sophisticated topological invariants for
low-dimensional manifolds. As an introduction to modern knot theory and its relations
to physics, chemistry, and biology (DNA), we recommend the survey article
by Marathe (2001) and the monographs by Kaufman (2001) and Flappan (2000)
(molecular chirality in chemistry). For topological quantum field theory, see the lectures
given by Atiyah (1990b) and Witten (1999c). We also refer to the monograph
by Jost (2002a) (geometric analysis and classical models in quantum field theory).
For a wealth of material on the relation between topological charges and nonlinear
partial differential equations arising in modern physics, we recommend the
monographs by Felsager (1997), Naber (1997), and Yang (2001).
Summarizing, there exist the following three equivalent possibilities for defining
the space of quantum states of the Hilbert space C2, namely,
(i) the orbit space S(C2)/U(1) of the unit sphere S(C2) under the action of the
group U(1) on S(C2),
(ii) the 1-Grassmann manifold G1(C2) of C2 which consists of all complex one-dimensional
subspaces of C2,
(iii) the complex one-dimensional projective space P1(C) of rays in C2.
We have the following homeomorphisms:
S(C2)/U(1) ≅ S2 ≅ G1(C2) ≅ P1(C).
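A hedged numerical sketch of the first homeomorphism (an illustration, not part of the text): a unit vector (α, β) in C2 is sent to the point n = (2 Re(ᾱβ), 2 Im(ᾱβ), |α|² − |β|²) of the unit sphere S2, and multiplying the state by a U(1) phase does not change the image, which is exactly the identification S(C2)/U(1) ≅ S2.

```python
import cmath
import math

def hopf(alpha, beta):
    """Map a unit vector (alpha, beta) in C^2 to a point of the unit sphere S^2:
    n = (2 Re(conj(alpha)*beta), 2 Im(conj(alpha)*beta), |alpha|^2 - |beta|^2)."""
    w = alpha.conjugate() * beta
    return (2 * w.real, 2 * w.imag, abs(alpha) ** 2 - abs(beta) ** 2)

# A normalized state in C^2 ...
alpha, beta = (1 + 2j) / math.sqrt(14), (3 + 0j) / math.sqrt(14)
n = hopf(alpha, beta)
print(n, "  |n| =", math.sqrt(sum(c * c for c in n)))   # a point on S^2

# ... and the same state multiplied by a U(1) phase is sent to the same point:
phase = cmath.exp(0.7j)
print(hopf(phase * alpha, phase * beta))
```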
Physicists and mathematicians have studied completely different deep questions
posed by nature and by intrinsic mathematics, respectively. Finally,
they have arrived at the same highly nontrivial mathematical tools.
Perspectives
For further prototypes of important topological phenomena in physics, we refer to
the following sections of the present volume:
• cohomology and potentials of physical fields (Sect. 16.8.2);
• the cohomology of the unit circle, the unit sphere, and the torus (Sect. 16.8.3);
• cohomology and atomic spectra (Sect. 16.8.4);
• cohomology and BRST symmetry (the classification of physical states and the
elimination of ghosts, and the cohomology of Lie groups) (Sect. 16.8.5).
Much material on topology can be found in Volume IV on quantum mathematics.
In particular, we will investigate there homology groups, cohomology groups,
homotopy groups, and characteristic classes. In particular, we will show that the
fundamental concepts of homology and cohomology are rooted in the properties of
electric circuits.
class of topological spaces.
• This object is called a topological invariant iff it is invariant under homeomorphisms.
• This object is called a homotopy invariant iff it is the same for homotopically
equivalent topological spaces.
The most important topological invariant is the Euler characteristic to be introduced
below. Topological invariants play a fundamental role in mathematics and
physics (e.g., Betti numbers, Gauss’ linking number, genus of a Riemannian su***ce,
winding number, mapping degree, Morse index of a dynamical system, characteristic
classes, Stiefel–Whitney classes, Chern classes and Chern numbers, Atiyah–Singer
index, Gromov–Witten invariants, Donaldson invariants of 4-dimensional manifolds,
Seiberg–Witten invariants, Jones polynomials of knots, homology groups, cohomology
groups, homotopy groups, K-groups as generalized cohomology groups). We
will study this thoroughly in Volume IV on quantum mathematics. At this point,
we will only sketch same basic ideas.
As a rule, topological invariants are also homotopy invariants.
Topological Quantum Numbers
In the 20th century, physicists learned that quantum phenomena in nature can be
classified by quantum numbers. There arises the question how to describe quantum
numbers in terms of mathematics. It turns out that there are two important
possibilities to obtain quantum numbers , namely,
(S) symmetry (the representation theory of compact Lie groups), and
(T) topology (topological invariants as topological charges or topological quantum
numbers).
The Euler characteristic of homotopically equivalent topological
spaces is the same.
This is a deep result of modern topology
Topology is rooted in Maxwell’s theory on the electromagnetic field.
Folklore
Of the geometria situs, which Leibniz (1646–1716) sensed, and of which
only a few geometers, Euler (1707–1783) and Vandermonde (1735–1796),
were granted an obscured view, we know and have, after a hundred and
fifty years, still little more than nothing.
Carl Friedrich Gauss, 1833
It was the discovery by Gauss of this very integral expressing the work
done on a magnetic pole while describing a closed curve in presence of a
closed electric current and indicating the geometric connection between the
two closed curves, that led him to lament the small progress made in the
Geometry of Position since the time of Leibniz, Euler and Vandermonde.
We now have some progress to report, chiefly due to Riemann, Helmholtz
and Listing.
James Clerk Maxwell, 1873
A Treatise on Electricity and Magnetism
In obtaining a topological invariant by using a physical field theory, Gauss
had anticipated Topological Field Theory by almost 150 years. Even the
term topology was not used then. It was introduced by Johann Listing
(1806-1882), a student and proteg´e of Gauss, in his 1847 essay Preliminary
Studies on Topology. Gauss’ linking number formula can also be interpreted
as the equality of topological and analytical degree of a suitable function.
Starting with this a far-reaching generalization of the Gauss integral to
higher linking self-linking integrals can be obtained. This forms a small
part of a program initiated by Maxim Kontsevich to relate topology of
low-dimensional manifolds, homotopical algebras, and non-commutative
geometry with topological field theories and Feynman diagrams in physics.
Kishore Marathe, 2001
A chapter in physical mathematics: theory of knots in the sciences
Like others, we came to the heat kernel via one direction of mathematics.
However, as we progressed in that direction, we realized that the heat
kernel plays a central role in almost all directions we can think of. That
it bears a name related to physics only indicates that it was originally
discovered in direction with heat. But even in physics, its significance
goes way beyond the giving a mathematical model for heat distribution.
The name cannot be changed – it’s too late for that – but the impression
some people may have that an occurrence of a certain kernel called the
heat kernel means one is necessarily doing physics is a false impression.
Maybe one is and maybe one isn’t. . . There is a universal gadget which is
a dominant factor practically everywhere in mathematics, also in physics,
and has very simple and powerful properties. We have no a priori explanation
(psychological, philosophical, mathematical) for the phenomenon of
the existence of such a universal gadget.
Jay Jorgenson and Serge Lang, 2001
The Ubiquitous Heat Kernel
In the last twenty years a body of mathematics has evolved with strong
direct input from theoretical physics, for example from classical and quantum
field theories, statistical mechanics and string theory. In particular,
in the geometry and topology of low dimensional manifolds (i.e., manifolds
of dimensions 2, 3 and 4) we have seen new results, some of them
quite surprising, as well as new ways of looking at old results. Donaldson’s
work based on his study of the solution space of the Yang–Mills equations,
monopole equations of Seiberg–Witten, Floer homology, quantum groups
and topological quantum field theoretical interpretation of the Jones polynomial
and other knot invariants are some of the examples of this development.
Donaldson, Jones and Witten have received the Fields medal for
their work. We think the name “Physical Mathematics” is appropriate to
describe this new, exciting and fast growing area of mathematics. Recent
developments in knot theory make it an important chapter in “Physical
Mathematics.” Until the early 1980s it was an area in the backwaters of
topology. Now it is a very active area of research with its own journal.32
Kishore Marathe, 2001
A chapter in physical mathematics: theory of knots in the sciences
The Jones polynomials were discovered by Jones in 1985 when studying the
structure of von Neumann operator algebras. A few years later, Witten (1989)
published a fundamental paper on a beautiful physical interpretation of the Jones
polynomials via a special model in quantum field theory. The idea is the following:
• start from the principle of critical action for the Chern–Simons Lagrangian on the
3-dimensional sphere S3 with respect to the gauge group SU(2);
• quantize this classical field theory; this yields the corresponding Feynman functional
integral, viewed as a formal partition function;
• assign to each knot on S3 a physical quantity called the Wilson loop;
• finally, obtain the Jones polynomial of a knot or link as the vacuum expectation value
of the corresponding Wilson loop.
Roughly speaking, the Chern–Simons Lagrangians represent quite natural gauge
field theories on 3-dimensional manifolds.
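For orientation, the two objects named in the list above, the Chern–Simons action and the Wilson loop, have the following standard expressions; this is only a sketch in the usual physics normalization (integer level k, a chosen representation R of SU(2)), not notation fixed by the text itself:
S_{\mathrm{CS}}[A] \;=\; \frac{k}{4\pi}\int_{S^3}\operatorname{tr}\!\Bigl(A\wedge \mathrm{d}A+\tfrac{2}{3}\,A\wedge A\wedge A\Bigr),
\qquad
W_R(C) \;=\; \operatorname{tr}_R\,\mathcal{P}\exp\Bigl(\oint_C A\Bigr),
\qquad
\langle W_R(C)\rangle \;=\; \frac{\int W_R(C)\,e^{\,\mathrm{i}S_{\mathrm{CS}}[A]}\,\mathcal{D}A}{\int e^{\,\mathrm{i}S_{\mathrm{CS}}[A]}\,\mathcal{D}A}.
The formal functional integral in the last expression is the "partition function" referred to above, and its normalized expectation value reproduces the knot invariant.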
Topological quantum field theory. This new branch of topology was
founded by E. Witten, Topological quantum field theory, Commun. Math. Phys.
117 (1988), 353–386. The basic idea of topological quantum field theory is to use
the Lagrangians of special gauge field theories and the corresponding Feynman
functional integrals in order to construct sophisticated topological invariants for
low-dimensional manifolds. As an introduction to modern knot theory and its relations
to physics, chemistry, and biology (DNA), we recommend the survey article
by Marathe (2001) and the monographs by Kauffman (2001) and Flapan (2000)
(molecular chirality in chemistry). For topological quantum field theory, see the lectures
given by Atiyah (1990b) and Witten (1999c). We also refer to the monograph
by Jost (2002a) (geometric analysis and classical models in quantum field theory).
For a wealth of material on the relation between topological charges and nonlinear
partial differential equations arising in modern physics, we recommend the
monographs by Felsager (1997), Naber (1997), and Yang (2001).
Summarizing, there exist the following three equivalent possibilities for defining
the space of quantum states of the Hilbert space C2, namely,
(i) the orbit space S(C2)/U(1) of the unit sphere S(C2) under the action of the
group U(1) on S(C2),
(ii) the 1-Grassmann manifold G1(C2) of C2, which consists of all complex one-dimensional
subspaces of C2,
(iii) the complex one-dimensional projective space P1(C) of rays in C2.
Moreover, we have the following homeomorphisms:
S(C2)/U(1) ≅ S2 ≅ G1(C2) ≅ P1(C).
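The first of these homeomorphisms can be written down explicitly. The following formulas are a standard sketch (the Hopf map in the physicists' Bloch-sphere normalization); the particular coordinates are an illustrative choice, not the notation used later in this volume:
\psi=\begin{pmatrix}\alpha\\ \beta\end{pmatrix}\in S(\mathbb{C}^2),\qquad |\alpha|^2+|\beta|^2=1,
\qquad
\pi(\psi):=\bigl(2\,\mathrm{Re}(\overline{\alpha}\beta),\;2\,\mathrm{Im}(\overline{\alpha}\beta),\;|\alpha|^2-|\beta|^2\bigr)\in S^2 .
Since \pi(e^{\mathrm{i}\varphi}\psi)=\pi(\psi) for all real \varphi, the map \pi is constant on U(1)-orbits and descends to the homeomorphism S(\mathbb{C}^2)/U(1)\to S^2.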
Physicists and mathematicians have studied completely different deep questions
posed by nature and by intrinsic mathematics, respectively. Finally,
they have arrived at the same highly nontrivial mathematical tools.
Perspectives
For further prototypes of important topological phenomena in physics, we refer to
the following sections of the present volume:
• cohomology and potentials of physical fields (Sect. 16.8.2);
• the cohomology of the unit circle, the unit sphere, and the torus (Sect. 16.8.3);
• cohomology and atomic spectra (Sect. 16.8.4);
• cohomology and BRST symmetry (the classification of physical states and the
elimination of ghosts, and the cohomology of Lie groups) (Sect. 16.8.5).
Much material on topology can be found in Volume IV on quantum mathematics.
In particular, we will investigate there homology groups, cohomology groups,
homotopy groups, and characteristic classes. Moreover, we will show that the
fundamental concepts of homology and cohomology are rooted in the properties of
electric circuits.
Pullback (differential geometry)
In differential geometry, the pullback is a way of transferring a structure on one manifold to another manifold. Concretely, suppose φ: M → N is a smooth map from a smooth manifold M to a smooth manifold N; then there is an associated linear map from 1-forms on N (sections of the cotangent bundle) to 1-forms on M, called the pullback by φ and frequently written φ*. More generally, any covariant tensor field on N, and in particular any differential form, can be pulled back to M by φ.
When the map φ is a diffeomorphism, the pullback, together with the pushforward, can be used to transfer any tensor field from N to M or vice versa. In particular, if φ is a diffeomorphism between open subsets of Rn and Rn, viewed as a change of coordinates (perhaps between different charts on a manifold M), then pullback and pushforward describe the transformation properties of covariant and contravariant tensors in the more traditional, basis-dependent formulation.
The idea behind the pullback is essentially simple: it is precomposition of one function with another. However, applying this idea in a number of different situations yields quite a variety of pullback constructions. This article starts with the simplest operations and then uses them to build more complicated ones. Roughly speaking, the pullback mechanism (via precomposition) turns several different structures of differential geometry into contravariant functors.
Smooth functions and smooth maps
Let φ: M → N be a smooth map between smooth manifolds M and N, and suppose f: N → R is a smooth function on N. Then the pullback of f by φ is the smooth function φ*f on M defined by (φ*f)(x) = f(φ(x)). Similarly, if f is a smooth function on an open set U in N, then the same formula defines a smooth function on the open set φ−1(U) in M. In the language of sheaves, the pullback defines a morphism from the sheaf of smooth functions on N to the direct image under φ of the sheaf of smooth functions on M.
More generally, if f: N → A is a smooth map from N to any other manifold A, then φ*f(x) = f(φ(x)) is a smooth map from M to A.
Bundles and sections
If E is a vector bundle (or indeed any fiber bundle) over N and φ: M → N is a smooth map, then the pullback bundle φ*E is a vector bundle (or, more generally, a fiber bundle) over M whose fiber over a point x in M is given by (φ*E)x = Eφ(x).
In this situation, precomposition defines a pullback operation on sections of E: if s is a section of E over N, then the pullback section φ*s := s ∘ φ is a section of the pullback bundle φ*E over M.
Multilinear forms
Let Φ: V → W be a linear map between vector spaces V and W (that is, Φ is an element of L(V, W), also written Hom(V, W)), and let F be a multilinear form on W (also called a tensor of type (0, s), not to be confused with a tensor field, where s is the number of factors of W in the product). Then the pullback Φ*F of F by Φ is the multilinear form on V obtained by precomposing F with Φ. More precisely, given vectors v1, v2, ..., vs in V, Φ*F is defined by the formula
(Φ*F)(v1, ..., vs) = F(Φv1, ..., Φvs),
which is a multilinear form on V. Hence Φ* is a (linear) operator from multilinear forms on W to multilinear forms on V. As a special case, note that if F is a linear form (a (0,1)-tensor) on W, so that F is an element of the dual space W*, then Φ*F is an element of V*, and so the pullback defines a linear map between dual spaces which acts in the direction opposite to the linear map Φ itself:
Φ*: W* → V*.
From the tensorial point of view, it is natural to try to extend the notion of pullback to tensors of arbitrary type, i.e., to linear maps on W with values in the r-fold tensor product W ⊗ ... ⊗ W. However, such tensor products do not pull back naturally; instead there is a pushforward operation from V ⊗ ... ⊗ V to W ⊗ ... ⊗ W, defined by
Φ*(v1 ⊗ ... ⊗ vr) = Φv1 ⊗ ... ⊗ Φvr.
Nevertheless, if Φ is invertible, a pullback can be defined using the pushforward of the inverse map Φ−1. Combining these two constructions for an invertible linear map yields a pullback operator for tensors of any type (r, s).
Cotangent vectors and 1-forms
Let φ: M → N be a smooth map between smooth manifolds. Then the pushforward of φ, written φ* = dφ (or Dφ), is a vector bundle morphism (over M) from the tangent bundle TM of M to the pullback bundle φ*TN. The transpose of φ* is therefore a bundle map from φ*T*N to the cotangent bundle T*M of M.
Now suppose that α is a section of T*N (a 1-form on N). Precomposing α with φ gives a pullback section of φ*T*N. Applying the pointwise bundle map above to this section yields the pullback of α by φ, which is the 1-form φ*α on M defined by
(φ*α)x(X) = αφ(x)(dφx(X))
for x in M and X in TxM.
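In coordinates, the two basic operations above (pullback of a function by precomposition, and pullback of a 1-form via the transposed Jacobian) can be checked with a short symbolic computation. The following sketch assumes sympy is available; the map φ(u, v) = (uv, u + v) and the 1-form α = y dx + x dy are arbitrary illustrative choices, not objects from the text.

import sympy as sp

# Coordinates (u, v) on M = R^2 and (x, y) on N = R^2.
u, v, x, y = sp.symbols('u v x y', real=True)

# An illustrative smooth map phi: M -> N.
phi = sp.Matrix([u * v, u + v])

# Pullback of a smooth function f on N: (phi^* f)(u, v) = f(phi(u, v)).
f = x**2 + sp.sin(y)
pullback_f = f.subs({x: phi[0], y: phi[1]})          # (u*v)**2 + sin(u + v)

# Pullback of the 1-form alpha = y dx + x dy on N: the components of
# phi^* alpha in the basis (du, dv) are J^T (alpha o phi), where J = d(phi).
alpha = sp.Matrix([y, x])                            # components (a, b) of a dx + b dy
J = phi.jacobian([u, v])
pullback_alpha = sp.simplify(J.T * alpha.subs({x: phi[0], y: phi[1]}))

print(pullback_f)            # (u*v)**2 + sin(u + v)
print(pullback_alpha.T)      # [2*u*v + v**2, u**2 + 2*u*v], i.e. (2uv+v^2) du + (u^2+2uv) dv

The same result follows by substituting x = uv, y = u + v directly into α, which is a quick consistency check on the Jacobian-transpose rule.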
(Covariant) tensor fields
The construction above generalizes immediately to bundles of (0, s)-tensors for any natural number s. A (0, s)-tensor field on a manifold N is a section of the tensor bundle on N whose fiber at a point y in N is the space of multilinear s-forms on the tangent space at y.
Taking Φ to be the (pointwise) differential of a smooth map φ from M to N, the pullback of multilinear forms can be combined with the pullback of sections to give the pullback of (0, s)-tensor fields. More precisely, if S is a (0, s)-tensor field on N, then the pullback of S by φ is the (0, s)-tensor field φ*S on M defined by
(φ*S)x(X1, ..., Xs) = Sφ(x)(dφx(X1), ..., dφx(Xs))
for x in M and Xj in TxM.
Differential forms
A particularly important case of the pullback of covariant tensor fields is the pullback of differential forms. If α is a differential k-form, i.e., a section of the exterior bundle ΛkT*N of (pointwise) alternating k-forms on TN, then the pullback of α is the differential k-form on M defined by the same formula as in the previous section:
(φ*α)x(X1, ..., Xk) = αφ(x)(dφx(X1), ..., dφx(Xk))
for x in M and Xj in TxM.
The pullback of differential forms has two properties that make it extremely useful.
1. It is compatible with the wedge product: for differential forms α and β on N,
φ*(α ∧ β) = φ*α ∧ φ*β.
2. It is compatible with the exterior derivative d: if α is a differential form on N, then
φ*(dα) = d(φ*α).
Pullback by diffeomorphisms
When the map φ between manifolds is a diffeomorphism, that is, when it has a smooth inverse, the pullback can be defined for vector fields just as for 1-forms, and hence, by extension, for arbitrary mixed tensor fields on the manifold. The linear map dφx is then invertible, and its inverse can be used alongside it.
A general mixed tensor field is decomposed, via the tensor product, into factors belonging to TN and to T*N, which transform with dφ and with its inverse, respectively. When M = N, the pullback and the pushforward describe the transformation properties of tensor fields on the manifold M. In traditional terms, the pullback describes the transformation properties of the covariant indices of a tensor; the transformation of the contravariant indices is given, by contrast, by the pushforward.
Pullback by automorphisms
The construction of the previous section has a representation-theoretic special case when φ is a diffeomorphism from a manifold M to itself. In this case the derivative dφ is a section of GL(TM, φ*TM). This induces a pullback action on sections of any bundle associated to the frame bundle GL(M) of M by a representation of the general linear group GL(m), where m = dim M.
Pullback and Lie derivatives
See also: Lie derivative
Applying the preceding ideas to the one-parameter group of diffeomorphisms defined by a vector field on M, and differentiating with respect to the parameter, one obtains the notion of the Lie derivative on any associated bundle.
Connections (covariant derivatives)
If ∇ is a connection (or covariant derivative) on a vector bundle E over N, and φ is a smooth map from M to N, then there is a pullback connection φ*∇ on the pullback bundle φ*E over M, determined uniquely by the condition
(φ*∇)X(φ*s) = φ*(∇dφ(X) s).
References
- Jürgen Jost, Riemannian Geometry and Geometric Analysis, Springer-Verlag, Berlin, 2002. ISBN 3-540-42627-2. See Sections 1.5 and 1.6.
- Ralph Abraham and Jerrold E. Marsden, Foundations of Mechanics, Benjamin-Cummings, London, 1978. ISBN 0-8053-0102-X. See Sections 1.7 and 2.3.
- B. A. Dubrovin et al., Modern Geometry: Methods and Applications, Part I, Beijing World Publishing Corp., 1999. ISBN 7-5062-0123-2. See Section 22.
Many-Particle Systems in Mathematics and Physics
Partition functions are the main tool for studying many-particle systems.
Folklore
Many-particle systems play a fundamental role in both mathematics and physics.
• In physics, we encounter systems of molecules (e.g., gases or liquids) or systems
of elementary particles in quantum field theory.
• In mathematics, for example, we want to study the system of prime numbers.
In the 19th century, physicists developed the methods of statistical mechanics for
studying many-particle systems, whereas mathematicians proved the distribution
law for prime numbers. It turns out that the two apparently different approaches
can be traced back to the same mathematical root, namely, the notion of partition
function. In modern quantum field theory, the Feynman functional integral can be
viewed as a partition function, as we will discuss later on. The typical procedure
proceeds in the following two steps.
(i) Coding: The many-particle system is encoded into one single function called a
partition function (e.g., the Boltzmann partition function in statistical physics,
Riemann’s zeta function or Dirichlet’s L-function for describing prime numbers,
the Feynman functional integral in quantum field theory).
(ii) Decoding: The task is to get crucial information about the many-particle system
by studying the properties of the partition function. The idea goes back to
Riemann. He recognized that the zeta function ζ extends holomorphically to
the punctured complex plane C \ {1}, and that detailed knowledge of the
distribution of the zeros of the zeta function allows far-reaching statements
about the asymptotic distribution of the prime numbers. This is related to the
famous Riemann hypothesis to be considered on page 296.
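As a toy illustration of the two steps (i) and (ii), the following sketch encodes a small system of independent two-level "particles" into its Boltzmann partition function and then decodes a thermodynamic quantity, the mean energy, from it via the standard relation ⟨E⟩ = −d ln Z/dβ. The level spacing, particle number, and temperatures are arbitrary choices for the example, not data from the text.

import math

def partition_function(beta, energies):
    """Boltzmann partition function Z(beta) = sum_k exp(-beta * E_k)."""
    return sum(math.exp(-beta * E) for E in energies)

# (i) Coding: N independent two-level particles with level spacing eps are
# encoded in Z_N(beta) = Z_1(beta)**N, where Z_1 is the single-particle sum.
eps, N = 1.0, 10

def Z(beta):
    return partition_function(beta, [0.0, eps]) ** N

# (ii) Decoding: the mean energy follows from <E> = -d ln Z / d beta,
# approximated here by a central difference.
def mean_energy(beta, h=1e-6):
    return -(math.log(Z(beta + h)) - math.log(Z(beta - h))) / (2.0 * h)

print(mean_energy(0.5))    # ~3.78; tends to N*eps/2 = 5 as beta -> 0 (high temperature)
print(mean_energy(10.0))   # ~0.0005; tends to 0 as beta -> infinity (low temperature)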
The high-temperature approximation of quantum statistics is obtained from
classical statistical physics by subdividing the phase space into cells of volume
h3.
Applications of discrete dynamical systems. In mathematics, the equations
of motion of a discrete dynamical system are called difference equations. Nowadays
such equations are used
• for solving partial differential equations on computers (see Knabner and Angermann
(2003)),
• for modelling deterministic chaos in physics (see Schuster (1994)),
• for studying mathematical models on both the origin of life, taking metabolism
into account (see Dyson (1999a)), and
• virus dynamics in immunology (see Nowak and May (2000)). This concerns, for
example, the spread of AIDS.
We also refer to W. de Melo and S. van Strien, One-Dimensional Dynamics,
Springer, Berlin, 1993 and to J. Jost, Dynamical Systems: Examples of Complex
Behavior, Springer, Berlin, 2005.
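A standard example of such a difference equation is the logistic map, which already exhibits deterministic chaos. The following minimal sketch iterates it and prints the long-time behaviour in a periodic and in a chaotic regime; the parameter values and the initial value are arbitrary illustrative choices.

def logistic_orbit(r, x0=0.2, transient=1000, keep=8):
    """Iterate the difference equation x_{n+1} = r * x_n * (1 - x_n)."""
    x = x0
    for _ in range(transient):          # discard the transient
        x = r * x * (1.0 - x)
    orbit = []
    for _ in range(keep):               # record a few subsequent values
        x = r * x * (1.0 - x)
        orbit.append(round(x, 6))
    return orbit

print(logistic_orbit(3.2))   # settles on a period-2 cycle
print(logistic_orbit(3.5))   # period-4 cycle
print(logistic_orbit(4.0))   # no periodicity: deterministic chaos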
Euler's totient function
In number theory, for a positive integer n, Euler's totient function φ(n) counts the positive integers less than or equal to n that are coprime to n. The function is named after its first investigator, Euler, and is also called the φ-function or Euler's totient.
For example, φ(8) = 4, because 1, 3, 5, 7 are all coprime to 8.
Euler's function is in fact the order of the multiplicative group of residue classes modulo n (that is, the group of units of the ring Z/nZ). This property, together with Lagrange's theorem, yields a proof of Euler's theorem.
Values of Euler's function
φ(1) = 1 (the only positive integer ≤ 1 that is coprime to 1 is 1 itself).
If n is the k-th power of a prime p, then φ(p^k) = p^k − p^(k−1), because every number other than the multiples of p is coprime to n.
Euler's function is multiplicative: if m and n are coprime, then φ(mn) = φ(m)φ(n). Proof sketch: let A, B, C be the sets of numbers coprime to m, n, and mn, respectively; by the Chinese remainder theorem, A × B and C are in bijection (one-to-one correspondence). Using the fundamental theorem of arithmetic, the value of φ(n) then follows:
if n = p1^k1 · · · pr^kr, then φ(n) = n (1 − 1/p1) · · · (1 − 1/pr),
where ki is the largest integer such that pi^ki divides n (so here ki ≥ 1).
For example, φ(72) = 72 (1 − 1/2)(1 − 1/3) = 24.
Properties
The value φ(n) is also the number of generators of the cyclic group Cn (and the degree of the n-th cyclotomic polynomial). Every element of Cn generates a subgroup of Cn, that is, it is the generator of some subgroup, and by definition different subgroups cannot have the same generator. Moreover, every subgroup of Cn is of the form Cd, where d divides n (written d | n). Hence, running over all divisors d of n and adding up the numbers of generators of the Cd, one obtains the total number of elements of Cn, namely n. In other words,
Σ_{d|n} φ(d) = n,
where d runs over the positive divisors of n.
Applying Möbius inversion to invert this sum gives another formula for φ(n):
φ(n) = Σ_{d|n} μ(d) n/d,
where μ is the Möbius function, defined on the positive integers.
For any two coprime positive integers a and m (i.e., gcd(a, m) = 1) one has
a^φ(m) ≡ 1 (mod m),
which is Euler's theorem. It can be deduced from Lagrange's theorem in group theory, since every a coprime to m belongs to the multiplicative group of units of the ring Z/mZ.
When m is a prime p, this becomes
a^(p−1) ≡ 1 (mod p),
which is Fermat's little theorem.
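The product formula and Euler's theorem above can be checked directly. The following short sketch computes φ(n) from the prime factorization and verifies a^φ(m) ≡ 1 (mod m) for a few coprime pairs; the test values are arbitrary.

from math import gcd

def phi(n):
    """Euler's totient via the product formula phi(n) = n * prod_p (1 - 1/p)."""
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            result -= result // p        # multiply result by (1 - 1/p)
            while m % p == 0:
                m //= p
        p += 1
    if m > 1:                            # a remaining prime factor > sqrt(n)
        result -= result // m
    return result

print([phi(n) for n in (1, 8, 9, 72)])   # [1, 4, 6, 24]
# Euler's theorem: a**phi(m) == 1 (mod m) whenever gcd(a, m) == 1.
print(all(pow(a, phi(m), m) == 1
          for m in (8, 15, 35, 97)
          for a in range(1, m) if gcd(a, m) == 1))   # True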
Generating functions
The two series involving Euler's function given below both rest on the property Σ_{d|n} φ(d) = n established above.
The Dirichlet series generated by φ(n) is
Σ_{n≥1} φ(n)/n^s = ζ(s − 1)/ζ(s),
where ζ(s) is the Riemann zeta function; the derivation uses the identity above together with the multiplication rule for Dirichlet series.
The Lambert series generated by Euler's function is
Σ_{n≥1} φ(n) q^n/(1 − q^n) = q/(1 − q)²,
which converges for |q| < 1; the derivation again rests on the same identity.
Growth of Euler's function
Estimating the size of φ(n) as n grows is a delicate matter. When n is prime, φ(n) = n − 1, but for other n the value can be much smaller than n.
For n sufficiently large one has the following estimate: for every ε > 0 there is an N(ε) such that for all n > N(ε)
n^(1−ε) < φ(n) < n.
If one considers the ratio φ(n)/n, then, by the product formula quoted above, its value is a product of factors (1 − 1/p) over the distinct primes p dividing n. Hence the values of n for which the ratio is small are products of many distinct primes. By the prime number theorem, the constant ε above can in fact be replaced by a function of n that tends to zero.
In the sense of averages, φ(n) is nevertheless close to n, since
Σ_{k≤n} φ(k) = 3n²/π² + O(n log n),
where O denotes the big-O notation. This identity also shows that the probability that two numbers chosen at random from the set {1, 2, ..., n} are coprime tends to 6/π² as n tends to infinity. A related result concerns the average of the ratio φ(k)/k:
(1/n) Σ_{k≤n} φ(k)/k = 6/π² + O(log n / n).
References
- Milton Abramowitz and Irene A. Stegun, Handbook of Mathematical Functions, Dover Publications, New York, 1964. ISBN 0-486-61272-4. See Section 24.3.2.
- Eric Bach and Jeffrey Shallit, Algorithmic Number Theory, Vol. 1, MIT Press, 1996. ISBN 0-262-02405-5. See Section 8.8, p. 234.
- 柯召, 孙琦: Lectures on Number Theory (数论讲义), Vol. I, 2nd edition, Higher Education Press, 2001 (in Chinese).
Modular forms
A modular form is a (complex) analytic function on the upper half-plane that satisfies a certain functional equation and a growth condition. The theory of modular forms therefore belongs to number theory, but modular forms also appear in other areas, for example in algebraic topology and in string theory.
The theory of modular forms is a special case of the more general theory of automorphic forms, whose development can roughly be divided into three periods.
Modular forms as functions on lattices
A modular form can be viewed as a function F from the set of all lattices in C (i.e., discrete additive subgroups of C with compact quotient) to C, subject to the following conditions:
- if one considers lattices of the form Λ = Zα + Zτ with α constant and τ variable, then F(Λ) is a holomorphic function of τ;
- there is a constant k (usually taken to be a positive integer) such that F(θΛ) = θ^(−k) F(Λ) for every nonzero complex number θ; the constant k is called the weight of the modular form;
- F(Λ) remains bounded on the set of lattices whose shortest nonzero element has distance from the origin greater than any fixed positive bound.
When k = 0, the second condition says that F(Λ) depends only on the similarity class of the lattice Λ. This is an important special case, but a modular form of weight zero is necessarily a constant function. Dropping condition three and allowing poles produces non-constant examples, which are called modular functions.
The situation can be compared with that of projective space P(V): for projective space one would like to find polynomial functions on the vector space V satisfying F(cv) = F(v); unfortunately, such functions must be constant. One remedy is to allow denominators (that is, to consider rational functions); the admissible functions are then ratios of homogeneous polynomials of the same degree. Alternatively, one can modify the condition to F(cv) = c^k F(v); the functions satisfying it are the homogeneous polynomials of degree k, and for each fixed k these form a finite-dimensional vector space. Considering all possible k, one recovers the numerators and denominators needed to build the rational functions on P(V).
Since homogeneous polynomials of degree k are not genuine functions on P(V), how should they be interpreted geometrically? Algebraic geometry gives an answer: they are sections of a certain sheaf on P(V). The situation for modular forms is analogous, except that P(V) is replaced by a certain moduli space.
Modular forms as functions on the moduli space of elliptic curves
Every lattice Λ in C determines a complex elliptic curve C/Λ; two lattices give isomorphic elliptic curves if and only if one lattice is obtained from the other by multiplication by a nonzero complex number. Modular functions can therefore be regarded as functions on the moduli space of complex elliptic curves; for instance, the j-invariant of an elliptic curve is a modular function. Modular forms can be viewed as sections of certain line bundles over this moduli space.
After multiplication by a suitable nonzero complex number, every lattice can be written in the form Z + Zτ with τ in the upper half-plane. For a modular form F, set f(τ) = F(Z + Zτ). The second condition on a modular form can then be rewritten as a functional equation: for all integers a, b, c, d with ad − bc = 1 (that is, for every element of the modular group SL(2, Z)) and every τ in the upper half-plane,
f((aτ + b)/(cτ + d)) = (cτ + d)^k f(τ).
For example, taking a = d = 0, b = −1, c = 1:
f(−1/τ) = τ^k f(τ).
If the equation above is required to hold only for some finite-index subgroup Γ of the modular group, then f is called a modular form for Γ. The most common examples are the congruence subgroups, described below.
General definition
Let N be a positive integer; the corresponding modular group Γ(N) is defined as
Γ(N) = { (a b; c d) in SL(2, Z) : a ≡ d ≡ 1 and b ≡ c ≡ 0 (mod N) }.
Let k be a positive integer. A modular form of weight k and level N (or level group Γ(N)) is a holomorphic function f on the upper half-plane such that for every matrix (a b; c d) in Γ(N) and every τ in the upper half-plane,
f((aτ + b)/(cτ + d)) = (cτ + d)^k f(τ),
and such that f is holomorphic at the cusps. A cusp is an orbit of Q ∪ {∞} under the action of Γ(N); for example, for N = 1, ∞ represents the only cusp. Holomorphy of a modular form at the cusp ∞ means that f(τ) stays bounded as Im τ → ∞; this is equivalent to f having a Fourier expansion
f(τ) = Σ_{n≥0} c(n) q^n with q = exp(2πiτ/N).
For the other cusps a Fourier expansion is obtained in the same way after a change of coordinates.
If c(0) = 0 at every cusp, then f is called a cusp form (German: Spitzenform). The smallest n with c(n) ≠ 0 is called the order of f at that cusp. The modular forms defined above are sometimes called entire modular forms, to distinguish them from the more general case where poles are allowed (such as the j-invariant).
Another generalization replaces the automorphy factor (cτ + d)^k in the functional equation by a more general factor j(γ, τ). Choosing a suitable automorphy factor, one can also treat the Dedekind η-function within this framework; it is a modular form of weight 1/2. For example, a modular form of weight k, level N and nebentypus χ (where χ is a Dirichlet character mod N) is a holomorphic function f on the upper half-plane such that for every matrix (a b; c d) in Γ0(N) and every τ in the upper half-plane,
f((aτ + b)/(cτ + d)) = χ(d) (cτ + d)^k f(τ),
and, in addition, f is holomorphic at the cusps.
Examples
Eisenstein series
The simplest examples of modular forms are the Eisenstein series: for each even integer k > 2, define
G_k(τ) = Σ_{(m,n) ≠ (0,0)} 1/(mτ + n)^k
(the condition k > 2 is needed to ensure convergence).
Theta functions
An even unimodular lattice L in Rn is a lattice generated by the row vectors of an n × n matrix of determinant 1, such that the square of the length of every vector in L is an even integer. By the Poisson summation formula, the associated theta function
θ_L(τ) = Σ_{v in L} exp(πiτ |v|²)
is then a modular form of weight n/2. Even unimodular lattices are not easy to construct; here is one construction. Let n be a multiple of 8 and consider all vectors v in Rn whose coordinates are either all integers or all half-odd-integers and whose coordinate sum is an even integer; the lattice constructed in this way is written Ln. For n = 8 this is the lattice generated by the roots of the root system E8. Although the lattices L8 × L8 and L16 are not similar, the modular forms of weight 8 form a one-dimensional space (so such a form is unique up to a constant factor), and hence
θ_{L8×L8}(τ) = θ_{L16}(τ).
John Milnor observed that the quotients of R16 by these two lattices give two 16-dimensional tori which are not isometric to each other, yet whose Laplace operators have the same eigenvalues (counted with multiplicity).
Dedekind η-function
The Dedekind η-function is defined as
η(τ) = q^(1/24) Π_{n≥1} (1 − q^n), with q = exp(2πiτ).
The modular discriminant Δ = η^24 is then a modular form of weight 12. A famous conjecture of Ramanujan asserted that in the Fourier expansion Δ(τ) = Σ_{n≥1} τ(n) q^n the coefficient τ(p), for any prime p, satisfies |τ(p)| ≤ 2 p^(11/2). This conjecture was finally proved by Deligne.
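The coefficients τ(n) can be generated directly from the product expansion Δ = q Π(1 − q^n)^24, and the Ramanujan–Deligne bound can be checked numerically for small primes. The following sketch uses plain integer arithmetic; the truncation order N is an arbitrary choice.

# Coefficients of Delta(tau) = q * prod_{n>=1} (1 - q^n)^24 = sum_n tau(n) q^n.
N = 40                       # truncation order: compute tau(1), ..., tau(N)
prod = [1] + [0] * N         # coefficients of the product, modulo q^(N+1)
for n in range(1, N + 1):
    for _ in range(24):      # multiply 24 times by (1 - q^n), in place
        for k in range(N, n - 1, -1):
            prod[k] -= prod[k - n]
tau = {n: prod[n - 1] for n in range(1, N + 1)}   # shift by the leading factor q

print([tau[n] for n in (1, 2, 3, 4, 5)])          # [1, -24, 252, -1472, 4830]
# Deligne's bound |tau(p)| <= 2 * p**(11/2) for the primes p below:
print(all(abs(tau[p]) <= 2 * p**5.5 for p in (2, 3, 5, 7, 11, 13, 17, 19, 23)))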
The examples above point to the connection between modular forms and classical questions of number theory, such as the representation of integers by quadratic forms and the problem of integer partitions. The theory of Hecke operators establishes the crucial link between modular forms and number theory, and it also relates modular forms to representation theory.
Further generalizations of modular functions
The notion of a modular function can be generalized in several further directions.
For example, the holomorphy condition can be dropped: Maass forms are eigenfunctions of the Laplacian on the upper half-plane, but they are not holomorphic.
Moreover, groups other than SL(2, Z) can be considered. Hilbert modular forms are functions of n variables, each lying in the upper half-plane, whose functional equation is defined by 2 × 2 matrices with entries in a totally real number field. Replacing SL(2) by the larger symplectic groups leads to Siegel modular forms; whereas modular forms are related to elliptic curves, Siegel modular forms are related to the more general abelian varieties.
The notion of an automorphic form extends these ideas to general Lie groups.
References
- Jean-Pierre Serre, A Course in Arithmetic, Graduate Texts in Mathematics 7, Springer-Verlag, New York, 1973. Chapter VII gives an elementary introduction to the theory of modular forms.
- Tom M. Apostol, Modular Functions and Dirichlet Series in Number Theory, Springer-Verlag, New York, 1990. ISBN 0-387-97127-0.
- Goro Shimura, Introduction to the Arithmetic Theory of Automorphic Functions, Princeton University Press, Princeton, N.J., 1971. A more advanced treatment.
- Stephen Gelbart, Automorphic Forms on Adele Groups, Annals of Mathematics Studies 83, Princeton University Press, Princeton, N.J., 1975. Treats modular forms from the point of view of representation theory.
- Robert A. Rankin, Modular Forms and Functions, Cambridge University Press, Cambridge, 1977. ISBN 0-521-21212-X.
- Stein's notes on Ribet's course Modular Forms and Hecke Operators.
Riemann zeta function
The Riemann zeta function ζ(s) is defined as follows: for a complex number s whose real part is > 1,
ζ(s) = Σ_{n≥1} 1/n^s.
It can also be defined by an integral:
ζ(s) = (1/Γ(s)) ∫_0^∞ x^(s−1)/(e^x − 1) dx.
In the region {s : Re(s) > 1} the infinite series converges and defines a holomorphic function (Re denotes the real part of a complex number). Euler considered the case of positive integer s in 1740, and Chebyshev later extended the definition to real s > 1.[1] Bernhard Riemann recognized that the ζ function can be extended by analytic continuation to a holomorphic function ζ(s) defined for all complex s ≠ 1. This is the function studied in connection with the Riemann hypothesis.
Although mathematicians regard the Riemann ζ function as belonging primarily to the "purest" branch of mathematics, number theory, it also appears in applied statistics (see Zipf's law and the Zipf–Mandelbrot law), in physics, and in the mathematical theory of tuning.
Relation to the prime numbers
Main article: proof of the Euler product formula for the Riemann zeta function
The connection between this function and the prime numbers was discovered by Euler:
ζ(s) = Π_p 1/(1 − p^(−s)).
This infinite product, extending over all prime numbers p, is called the Euler product. It is a consequence of the formula for geometric series and of the fundamental theorem of arithmetic.
The zeros of ζ(s) are important because certain contour integrals involving the function ln(1/ζ(s)) can be used to estimate the prime-counting function π(x). These contour integrals are evaluated with the residue theorem, so one has to know the singularities of the integrand.
The reciprocal of the ζ function can be expressed in terms of the Möbius function μ(n) as
1/ζ(s) = Σ_{n≥1} μ(n)/n^s
for every complex number s with real part > 1. Together with the expression for ζ(2) given below, this can be used to show that the probability that two randomly chosen integers are coprime is 6/π².
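The Euler product can be checked numerically. The following sketch compares a truncated Dirichlet series and a truncated Euler product for s = 2 against the exact value π²/6; the truncation limits are arbitrary numerical choices.

import math

def zeta_series(s, terms=100000):
    """Truncated Dirichlet series sum_{n <= terms} n^(-s)."""
    return sum(n ** (-s) for n in range(1, terms + 1))

def primes_up_to(limit):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def zeta_euler(s, limit=1000):
    """Truncated Euler product prod_{p <= limit} (1 - p^(-s))^(-1)."""
    result = 1.0
    for p in primes_up_to(limit):
        result /= (1.0 - p ** (-s))
    return result

print(zeta_series(2))        # ~1.64492, short of the limit by roughly 1/terms
print(zeta_euler(2))         # ~1.64470, the primes above the cutoff are still missing
print(math.pi ** 2 / 6)      # 1.6449340...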
Values of the function
The ζ function satisfies the functional equation
ζ(s) = 2^s π^(s−1) sin(πs/2) Γ(1 − s) ζ(1 − s)
for all s in C \ {0, 1}. Here Γ denotes the gamma function. This formula was originally used to construct the analytic continuation. At s = 1 the ζ function has a simple pole with residue 1. The equation above contains the factor sin(πs/2), whose zeros are the even integers s = 2n; these are candidates for zeros of ζ. At positive even s, however, the remaining factor is a regular nonzero function, so ζ has zeros only at the negative even integers; these are called the trivial zeros.
Values at positive integers
Main article: Basel problem
Euler was also able to compute ζ(2k) for even integers 2k, using the formula
ζ(2k) = (−1)^(k+1) B_{2k} (2π)^(2k) / (2 (2k)!),
where the B_{2k} are the Bernoulli numbers. From this one sees that ζ(2) = π²/6, ζ(4) = π⁴/90, ζ(6) = π⁶/945, and so on (sequences A046988/A002432 in the OEIS). These values give the famous infinite series for powers of π. The case of odd integers is not so simple; Ramanujan did remarkable work on it. The values at the positive even integers were thus computed by Euler, but for the positive odd integers no closed form has been found.
ζ(1) is the harmonic series, which diverges. The value ζ(3/2) (sequence A078434 in the OEIS) is used in computing the critical temperature of a Bose–Einstein condensate with periodic boundary conditions and in the spin-wave physics of magnetic systems. The value ζ(2) = π²/6 (sequence A013661 in the OEIS) solves the Basel problem; the reciprocal of this value answers the question: what is the probability that two randomly chosen integers are coprime?[2] The value ζ(3) (sequence A002117 in the OEIS) is called Apéry's constant. The value ζ(4) = π⁴/90 (sequence A013662 in the OEIS) appears in the Stefan–Boltzmann law of black-body radiation and in Wien's approximation.
Values at negative integers
It was also Euler who discovered that the values of the ζ function at the negative integers are rational numbers; these values play an important role in the theory of modular forms. Moreover, ζ vanishes at the negative even integers (the trivial zeros).
References
- [1] Keith Devlin, The Millennium Problems: The Seven Greatest Unsolved Mathematical Puzzles of Our Time, Barnes & Noble, New York, 2002, pp. 43–47. ISBN 978-0760786598.
- [2] C. S. Ogilvy and J. T. Anderson, Excursions in Number Theory, pp. 29–35, Dover Publications Inc., 1988. ISBN 0-486-25778-9.
Mellin transform
From Wikipedia, the free encyclopedia
The Mellin transformation is a magic wand.
Folklore
In mathematics, the Mellin transform is an integral transform that may be regarded as the multiplicative version of the two-sided Laplace transform. This integral transform is closely connected to the theory of Dirichlet series, and is often used in number theory and the theory of asymptotic expansions; it is closely related to the Laplace transform and the Fourier transform, and to the theory of the gamma function and allied special functions.
The Mellin transform of a function f is
(Mf)(s) = ∫_0^∞ x^(s−1) f(x) dx.
The inverse transform is
(M^(−1)φ)(x) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} x^(−s) φ(s) ds.
The notation implies that this is a line integral taken over a vertical line in the complex plane. Conditions under which this inversion is valid are given in the Mellin inversion theorem.
The transform is named after the Finnish mathematician Hjalmar Mellin.
Relationship to other transforms
The two-sided Laplace transform may be defined in terms of the Mellin transform by
(Bf)(s) = (M f(−ln x))(s),
and conversely we can get the Mellin transform from the two-sided Laplace transform by
(Mf)(s) = (B f(e^(−x)))(s).
The Mellin transform may be thought of as integrating with the kernel x^s against the multiplicative Haar measure dx/x, which is invariant under dilation x → ax, so that d(ax)/(ax) = dx/x; the two-sided Laplace transform integrates against the additive Haar measure dx, which is translation invariant, so that d(x + a) = dx.
We may also define the Fourier transform in terms of the Mellin transform and vice versa: with the two-sided Laplace transform defined as above, the Fourier transform is obtained by restricting it to the imaginary axis, and we may reverse the process and recover the Mellin transform from the Fourier transform.
The Mellin transform also connects the Newton series or binomial transform together with the Poisson generating function, by means of the Poisson–Mellin–Newton cycle.
Examples
Cahen–Mellin integral
For c > 0, Re(y) > 0, and y^(−s) on the principal branch, one has
e^(−y) = (1/(2πi)) ∫_{c−i∞}^{c+i∞} Γ(s) y^(−s) ds,
where Γ(s) is the gamma function. This integral is known as the Cahen–Mellin integral.[1]
Number theory
An important application in number theory involves the simple function f(x) = x^a for x > 1 and f(x) = 0 for 0 < x ≤ 1, for which
(Mf)(s) = −1/(s + a),
assuming Re(s) < −a.
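As a quick numerical check of the definition (and of the fact, underlying the Cahen–Mellin integral above, that the Mellin transform of e^(−x) is Γ(s)), the following sketch evaluates the defining integral with scipy; the cutoff of the integration range is an arbitrary numerical choice, and scipy is assumed to be available.

import math
from scipy.integrate import quad

def mellin(f, s, upper=60.0):
    """Numerically approximate (M f)(s) = int_0^oo x^(s-1) f(x) dx for real s > 0."""
    value, _ = quad(lambda x: x ** (s - 1) * f(x), 0.0, upper)
    return value

# The Mellin transform of exp(-x) is the gamma function Gamma(s).
for s in (0.5, 1.0, 2.5, 4.0):
    print(s, mellin(lambda x: math.exp(-x), s), math.gamma(s))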
As a unitary operator on L2
In the study of Hilbert spaces, the Mellin transform is often posed in a slightly different way. For functions in L²(0, ∞) (see Lp space) the fundamental strip always includes the critical line Re(s) = 1/2, so we may define a linear operator M̃ : L²(0, ∞) → L²(−∞, ∞) as
(M̃ f)(s) := (1/√(2π)) ∫_0^∞ x^(−1/2 − is) f(x) dx.
In other words, we have set
(M̃ f)(s) := (1/√(2π)) (M f)(1/2 − is).
This operator is usually denoted by just plain M and called the "Mellin transform", but M̃ is used here to distinguish it from the definition used elsewhere in this article. The Mellin inversion theorem then shows that M̃ is invertible with inverse
(M̃^(−1) φ)(x) = (1/√(2π)) ∫_{−∞}^{∞} x^(−1/2 + is) φ(s) ds.
Furthermore, this operator is an isometry, that is to say ‖M̃ f‖_{L²(−∞,∞)} = ‖f‖_{L²(0,∞)} for all f in L²(0, ∞) (this explains why the factor of 1/√(2π) was used). Thus M̃ is a unitary operator.
In probability theory
In probability theory, the Mellin transform is an essential tool in studying the distributions of products of random variables.[2] If X is a random variable, and X⁺ = max{X, 0} denotes its positive part, while X⁻ = max{−X, 0} is its negative part, then the Mellin transform of X is defined as[3]
M_X(s) = E[(X⁺)^s] + γ E[(X⁻)^s],
where γ is a formal indeterminate with γ² = 1. This transform exists for all s in some complex strip D = {s : a ≤ Re(s) ≤ b}, where a ≤ 0 ≤ b.[3]
The Mellin transform of a random variable X uniquely determines its distribution function F_X.[3] The importance of the Mellin transform in probability theory lies in the fact that if X and Y are two independent random variables, then the Mellin transform of their product is equal to the product of the Mellin transforms of X and Y:[4]
M_{XY}(s) = M_X(s) M_Y(s).
Applications
The Mellin transform is widely used in computer science for the analysis of algorithms[clarification needed] because of its scale invariance property. The magnitude of the Mellin transform of a scaled function is identical to the magnitude of the Mellin transform of the original function. This scale invariance property is analogous to the shift invariance property of the Fourier transform: the magnitude of the Fourier transform of a time-shifted function is identical to the magnitude of the Fourier transform of the original function.
This property is useful in image recognition: an image of an object is easily scaled when the object is moved towards or away from the camera.
Examples
- Perron's formula describes the inverse Mellin transform applied to a Dirichlet series.
- The Mellin transform is used in the analysis of the prime-counting function and occurs in discussions of the Riemann zeta function.
- Inverse Mellin transforms commonly occur in Riesz means.
- The Mellin transform can be used in audio timescale-pitch modification (needs substantive reference).
Notes
- [1] Hardy, G. H.; Littlewood, J. E. (1916). "Contributions to the Theory of the Riemann Zeta-Function and the Theory of the Distribution of Primes". Acta Mathematica 41 (1): 119–196. doi:10.1007/BF02422942. (See the notes therein for further references to Cahen's and Mellin's work, including Cahen's thesis.)
- [2] Galambos & Simonelli (2004, p. 15)
- [3] Galambos & Simonelli (2004, p. 16)
- [4] Galambos & Simonelli (2004, p. 23)
References
- Galambos, Janos; Simonelli, Italo (2004). Products of random variables: applications to problems of physics and to arithmetical functions. Marcel Dekker, Inc. ISBN 0-8247-5402-6.
- Paris, R. B.; Kaminski, D. (2001). Asymptotics and Mellin-Barnes Integrals. Cambridge University Press.
- Polyanin, A. D.; Manzhirov, A. V. (1998). Handbook of Integral Equations. Boca Raton: CRC Press. ISBN 0-8493-2876-4.
- Flajolet, P.; Gourdon, X.; Dumas, P. (1995). "Mellin transforms and asymptotics: Harmonic sums". Theoretical Computer Science 144 (1-2): 3–58. doi:10.1016/0304-3975(95)00002-e.
- Tables of Integral Transforms at EqWorld: The World of Mathematical Equations.
- Hazewinkel, Michiel, ed. (2001), "Mellin transform", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Weisstein, Eric W., "Mellin Transform", MathWorld.
External links
- Philippe Flajolet, Xavier Gourdon, Philippe Dumas, Mellin Transforms and Asymptotics: Harmonic sums.
- Antonio Gonzáles, Marko Riedel Celebrando un clásico, newsgroup es.ciencia.matematicas
- Juan Sacerdoti, Funciones Eulerianas (in Spanish).
- Mellin Transform Methods, Digital Library of Mathematical Functions, 2011-08-29, National Institute of Standards and Technology
- Antonio De Sena and Davide Rocchesso, A FAST MELLIN TRANSFORM WITH APPLICATIONS IN DAFX
Casimir effect
The Casimir effect was predicted in 1948 by the Dutch physicist Hendrik Casimir; the effect was subsequently detected and is named in his honour. It rests on the "non-empty vacuum" picture of quantum field theory: even a vacuum containing no matter exhibits energy fluctuations. The effect is that two neutral (uncharged) metal plates in vacuum attract each other, a phenomenon that does not occur in the classical theory. It can only be detected when the distance between the two bodies is extremely small. For example, on the submicron scale the resulting attraction becomes the dominant force between neutral conductors. In fact, at a gap of 10 nanometers (roughly 100 times the size of an atom) the Casimir effect produces a pressure of about one atmosphere (101.3 kPa). The van der Waals force between a pair of neutral atoms is a similar effect.
Overview
The Casimir effect can be understood as follows: the presence of metallic conductors or dielectric materials alters the vacuum expectation value of the energy of the second-quantized electromagnetic field. Since this value depends on the shapes and positions of the conductors and dielectrics, the Casimir effect manifests itself as a force that depends on these properties.
Vacuum energy
The Casimir effect is a natural consequence of quantum field theory, which states that all of the various fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. In a simplified picture, a physical field can be imagined as a collection of vibrating balls filling space, connected by springs; the strength of the field corresponds to the displacement of a ball from its rest position. Vibrations of the field can propagate and are governed by the wave equation appropriate to the particular field. The second quantization of quantum field theory requires that each such ball-and-spring combination be quantized, that is, that the field strength be quantized at every point of space. Canonically, the field at each point in space is a harmonic oscillator, and its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. This picture shows, however, that even the vacuum has an extremely complex structure, and all calculations of quantum field theory must be made in relation to this model of the vacuum.
The vacuum thus implicitly possesses all of the properties that a particle may have: spin, polarization in the case of light, energy, and so on. On average these properties cancel out, and the vacuum is "empty" in this sense. One important exception is the vacuum energy, the vacuum expectation value of the energy. The quantization of the harmonic oscillator shows that there is a lowest possible energy, the zero-point energy, which is not zero:
E = ħω/2.
The Casimir effect
Casimir's own analysis concerned the second-quantized electromagnetic field. If bulk bodies are present, made of metal or of dielectric material, they impose the boundary conditions that the classical electromagnetic field has to obey, and these boundary conditions affect the calculation of the vacuum energy.
Consider, for example, the calculation of the vacuum expectation value of the electromagnetic field inside a metal cavity, such as a radar cavity or a microwave waveguide. In this case, the correct way to find the zero-point energy of the field is to sum the energies of the standing waves of the cavity. To each possible standing wave corresponds an energy; for example, the energy of the n-th standing wave is E_n. The vacuum expectation value of the energy of the electromagnetic field in the cavity is then
⟨E⟩ = (1/2) Σ_n E_n,
where the sum runs over all possible standing waves n. The factor 1/2 reflects the fact that zero-point energies are being summed (it is the same 1/2 as in the equation E = ħω/2). Written in this way, the sum is clearly divergent; nevertheless, it can be used to produce finite expressions.
In particular, one may ask why the zero-point energy should depend on the shape s of the cavity. The reason is that every energy level depends on the shape, so one should write E_n(s) for the energy levels and ⟨E(s)⟩ for the vacuum expectation value. At this point an important observation can be made: the force at a point p on the wall of the cavity equals the change of the vacuum energy when the shape s of the wall is perturbed slightly, say by δs, at the point p; that is,
F(p) = − δ⟨E(s)⟩/δs evaluated at p,
and this value is finite in many practical situations.
Casimir effect
From Wikipedia, the free encyclopedia
A water wave analogue of the Casimir effect. Two parallel plates are submerged into colored water contained in a sonicator. When the sonicator is turned on, waves are excited imitating vacuum fluctuations; as a result, the plates attract to each other.
In quantum field theory, the Casimir effect and the Casimir–Polder force are physical forces arising from a quantized field. They are named after the Dutch physicist Hendrik Casimir.
The typical example is of two uncharged metallic plates in a vacuum, placed a few nanometers apart. In a classical description, the lack of an external field also means that there is no field between the plates, and no force would be measured between them.[1] When this field is instead studied using the QED vacuum of quantum electrodynamics, it is seen that the plates do affect the virtual photons which constitute the field, and generate a net force[2], either an attraction or a repulsion depending on the specific arrangement of the two plates. Although the Casimir effect can be expressed in terms of virtual particles interacting with the objects, it is best described and more easily calculated in terms of the zero-point energy of a quantized field in the intervening space between the objects. This force has been measured, and is a striking example of an effect captured formally by second quantization.[3][4] However, the treatment of boundary conditions in these calculations has led to some controversy. In fact, "Casimir's original goal was to compute the van der Waals force between polarizable molecules" of the metallic plates. Thus it can be interpreted without any reference to the zero-point energy (vacuum energy) of quantum fields.[5]
Dutch physicists Hendrik B. G. Casimir and Dirk Polder at Philips Research Labs proposed the existence of a force between two polarizable atoms and between such an atom and a conducting plate in 1947, and, after a conversation with Niels Bohr, who suggested it had something to do with zero-point energy, Casimir alone formulated the theory predicting a force between neutral conducting plates in 1948; the former is called the Casimir–Polder force while the latter is the Casimir effect in the narrow sense. Predictions of the force were later extended to finite-conductivity metals and dielectrics by Lifshitz and his students, and recent calculations have considered more general geometries. It was not until 1997, however, that a direct experiment by S. Lamoreaux, described below, quantitatively measured the force (to within 15% of the value predicted by the theory),[6] although previous work [e.g. van Blokland and Overbeek (1978)] had observed the force qualitatively, and indirect validation of the predicted Casimir energy had been made by measuring the thickness of liquid helium films by Sabisky and Anderson in 1972. Subsequent experiments approach an accuracy of a few percent.
Because the strength of the force falls off rapidly with distance, it is measurable only when the distance between the objects is extremely small. On a submicron scale, this force becomes so strong that it becomes the dominant force between uncharged conductors. In fact, at separations of 10 nm, about 100 times the typical size of an atom, the Casimir effect produces the equivalent of about 1 atmosphere of pressure (the precise value depending on surface geometry and other factors).[7]
In modern theoretical physics, the Casimir effect plays an important role in the chiral bag model of the nucleon; and in applied physics, it is significant in some aspects of emerging microtechnologies and nanotechnologies.[8]
Any medium supporting oscillations has an analogue of the Casimir effect. For example, beads on a string[9][10] as well as plates submerged in noisy water[11] or gas[12] exhibit the Casimir force.
Overview
The Casimir effect can be understood by the idea that the presence of conducting metals and dielectrics alters the vacuum expectation value of the energy of the second-quantized electromagnetic field.[13][14] Since the value of this energy depends on the shapes and positions of the conductors and dielectrics, the Casimir effect manifests itself as a force between such objects.
Possible causes
Vacuum energy
Main article: Vacuum energy
The causes of the Casimir effect are described by quantum field theory, which states that all of the various fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. In a simplified view, a "field" in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field can be visualized as the displacement of a ball from its rest position. Vibrations in this field propagate and are governed by the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball-spring combination be quantized, that is, that the strength of the field be quantized at each point in space. At the most basic level, the field at each point in space is a simple harmonic oscillator, and its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. However, even the vacuum has a vastly complex structure, so all calculations of quantum field theory must be made in relation to this model of the vacuum.
The vacuum has, implicitly, all of the properties that a particle may have: spin, or polarization in the case of light, energy, and so on. On average, most of these properties cancel out: the vacuum is, after all, "empty" in this sense. One important exception is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator states that the lowest possible energy or zero-point energy that such an oscillator may have is
E = ħω/2.
Summing over all possible oscillators at all points in space gives an infinite quantity. To remove this infinity, one may argue that only differences in energy are physically measurable; this argument is the underpinning of the theory of renormalization.[citation needed] In all practical calculations, this is how the infinity is always handled.[citation needed] In a wider systematic sense, however, renormalization is not a mathematically harmonious method for the removal of this infinity, and it presents a challenge in the search for a Theory of Everything. Currently there is no compelling explanation for why this infinity should be treated as essentially zero; a non-zero value is essentially the cosmological constant[citation needed] and any large value causes trouble in cosmology.
Relativistic van der Waals force[edit]
Alternatively, a 2005 paper by Robert Jaffe of MIT states that "Casimir effects can be formulated and Casimir forces can be computed without reference to zero-point energies. They are relativistic, quantum forces between charges and currents. The Casimir force (per unit area) between parallel plates vanishes as alpha, the fine structure constant, goes to zero, and the standard result, which appears to be independent of alpha, corresponds to the alpha → infinity limit," and that "The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates."[15]
Effects
Casimir's observation was that the second-quantized quantum electromagnetic field, in the presence of bulk bodies such as metals or dielectrics, must obey the same boundary conditions that the classical electromagnetic field must obey. In particular, this affects the calculation of the vacuum energy in the presence of a conductor or dielectric.
Consider, for example, the calculation of the vacuum expectation value of the electromagnetic field inside a metal cavity, such as, for example, a radar cavity or a microwave waveguide. In this case, the correct way to find the zero-point energy of the field is to sum the energies of the standing waves of the cavity. To each and every possible standing wave corresponds an energy; say the energy of the nth standing wave is E_n. The vacuum expectation value of the energy of the electromagnetic field in the cavity is then
⟨E⟩ = (1/2) Σ_n E_n,
with the sum running over all possible values of n enumerating the standing waves. The factor of 1/2 corresponds to the fact that the zero-point energies are being summed (it is the same 1/2 as appears in the equation E = ħω/2). Written in this way, this sum is clearly divergent; however, it can be used to create finite expressions.
In particular, one may ask how the zero-point energy depends on the shape s of the cavity. Each energy level depends on the shape, and so one should write E_n(s) for the energy level and ⟨E(s)⟩ for the vacuum expectation value. At this point comes an important observation: the force at point p on the wall of the cavity is equal to the change in the vacuum energy if the shape s of the wall is perturbed a little bit, say by δs, at point p. That is, one has
F(p) = − δ⟨E(s)⟩/δs evaluated at p.
This value is finite in many practical calculations.[16]
Attraction between the plates can be easily understood by focusing on the one-dimensional situation. Suppose that a moveable conductive plate is positioned at a short distance a from one of two widely separated plates (distance L apart). With a << L, the states within the slot of width a are highly constrained, so that the energy E of any one mode is widely separated from that of the next. This is not the case in the open region L, where there is a large number (about L/a) of states with energy evenly spaced between E and the next mode in the narrow slot, in other words, all slightly larger than E. Now, on shortening a by da (< 0), the mode in the slot shrinks in wavelength and therefore increases in energy in proportion to −da/a, whereas all the L/a states outside lengthen and correspondingly lower their energy in proportion to da/L (note the denominator). The net change is slightly negative, because the energies of all the L/a modes outside are slightly larger than that of the single mode in the slot.
Derivation of Casimir effect assuming zeta-regularization
In the original calculation done by Casimir, he considered the space between a pair of conducting metal plates at distance a apart. In this case, the standing waves are particularly easy to calculate, because the transverse component of the electric field and the normal component of the magnetic field must vanish on the surface of a conductor. Assuming the parallel plates lie in the xy-plane, the standing waves are
ψ_n(x, y, z; t) = e^(−iω_n t) e^(i k_x x + i k_y y) sin(k_n z),
where ψ stands for the electric component of the electromagnetic field, and, for brevity, the polarization and the magnetic components are ignored here. Here, k_x and k_y are the wave vectors in directions parallel to the plates, and
k_n = nπ/a
is the wave-vector perpendicular to the plates. Here, n is an integer, resulting from the requirement that ψ vanish on the metal plates. The frequency of this wave is
ω_n = c √(k_x² + k_y² + n²π²/a²),
where c is the speed of light. The vacuum energy is then the sum over all possible excitation modes,
⟨E⟩ = (ħ/2) · 2 Σ_n ∫ (A dk_x dk_y)/(2π)² ω_n,
where A is the area of the metal plates, and a factor of 2 is introduced for the two possible polarizations of the wave. This expression is clearly infinite, and to proceed with the calculation, it is convenient to introduce a regulator (discussed in greater detail below). The regulator will serve to make the expression finite, and in the end will be removed. The zeta-regulated version of the energy per unit area of the plate is
⟨E(s)⟩/A = ħ Σ_n ∫ (dk_x dk_y)/(2π)² ω_n |ω_n|^(−s).
In the end, the limit s → 0 is to be taken. Here s is just a complex number, not to be confused with the shape discussed previously. This integral/sum is finite for s real and larger than 3. The sum has a pole at s = 3, but may be analytically continued to s = 0, where the expression is finite. The above expression simplifies to
⟨E(s)⟩/A = (ħ c^(1−s))/(2π) Σ_n ∫_0^∞ q dq (q² + n²π²/a²)^((1−s)/2),
where polar coordinates q² = k_x² + k_y² were introduced to turn the double integral into a single integral. The factor q in front is the Jacobian, and the 2π comes from the angular integration. The integral converges if Re[s] > 3, resulting in
⟨E(s)⟩/A = − (ħ c^(1−s) π^(2−s))/(2 a^(3−s)) · (1/(3 − s)) Σ_n |n|^(3−s).
The sum diverges at s in the neighborhood of zero, but if the damping of large-frequency excitations corresponding to analytic continuation of the Riemann zeta function to s = 0 is assumed to make sense physically in some way, then one has
⟨E⟩/A = lim_{s→0} ⟨E(s)⟩/A = − (ħ c π²)/(6 a³) ζ(−3).
But ζ(−3) = 1/120, and so one obtains
⟨E⟩/A = − (ħ c π²)/(720 a³).
The analytic continuation has evidently lost an additive positive infinity, somehow exactly accounting for the zero-point energy (not included above) outside the slot between the plates, but which changes upon plate movement within a closed system. The Casimir force per unit area for idealized, perfectly conducting plates with vacuum between them is
F_c/A = − (d/da) ⟨E⟩/A = − (ħ c π²)/(240 a⁴),
where ħ is the reduced Planck constant, c is the speed of light, and a is the distance between the two plates.
The force is negative, indicating that the force is attractive: by moving the two plates closer together, the energy is lowered. The presence of ħ shows that the Casimir force per unit area is very small, and that, furthermore, the force is inherently of quantum-mechanical origin.
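The size of this pressure is easy to evaluate numerically. The following sketch computes the magnitude of F_c/A for ideal plates at a few separations and confirms the order of magnitude quoted earlier (about one atmosphere at a 10 nm gap); the chosen separations are arbitrary examples.

import math

hbar = 1.054571817e-34    # reduced Planck constant, J*s
c = 2.99792458e8          # speed of light, m/s

def casimir_pressure(a):
    """Magnitude of the ideal-plate Casimir pressure, pi^2 hbar c / (240 a^4), in pascals."""
    return math.pi ** 2 * hbar * c / (240.0 * a ** 4)

for a_nm in (10, 100, 1000):
    a = a_nm * 1e-9
    print(a_nm, "nm:", casimir_pressure(a), "Pa")
# 10 nm   -> ~1.3e5 Pa, i.e. roughly one atmosphere (101.3 kPa)
# 100 nm  -> ~13 Pa
# 1000 nm -> ~1.3e-3 Pa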
NOTE: In Casimir's original derivation [1], a moveable conductive plate is positioned at a short distance a from one of two widely separated plates (distance L apart). The 0-point energy on both sides of the plate is considered. Instead of the above ad hoc analytic continuation assumption, non-convergent sums and integrals are computed using Euler–Maclaurin summation with a regularizing function (e.g., exponential regularization) not so anomalous as in the above.
More recent theory
Casimir's analysis of idealized metal plates was generalized to arbitrary dielectric and realistic metal plates by Lifshitz and his students.[17][18] Using this approach, complications of the bounding surfaces, such as the modifications to the Casimir force due to finite conductivity, can be calculated numerically using the tabulated complex dielectric functions of the bounding materials. Lifshitz' theory for two metal plates reduces to Casimir's idealized 1/a⁴ force law for large separations a much greater than the skin depth of the metal, and conversely reduces to the 1/a³ force law of the London dispersion force (with a coefficient called a Hamaker constant) for small a, with a more complicated dependence on a for intermediate separations determined by the dispersion of the materials.[19]
Lifshitz' result was subsequently generalized to arbitrary multilayer planar geometries as well as to anisotropic and magnetic materials, but for several decades the calculation of Casimir forces for non-planar geometries remained limited to a few idealized cases admitting analytical solutions.[20] For example, the force in the experimental sphere–plate geometry was computed with an approximation (due to Derjaguin) that the sphere radius R is much larger than the separation a, in which case the nearby surfaces are nearly parallel and the parallel-plate result can be adapted to obtain an approximate R/a³ force (neglecting both skin-depth and higher-order curvature effects).[20][21] However, in the 2000s a number of authors developed and demonstrated a variety of numerical techniques, in many cases adapted from classical computational electromagnetics, that are capable of accurately calculating Casimir forces for arbitrary geometries and materials, from simple finite-size effects of finite plates to more complicated phenomena arising for patterned surfaces or objects of various shapes.[20]
Measurement
One of the first experimental tests was conducted by Marcus Sparnaay at Philips in Eindhoven, in 1958, in a delicate and difficult experiment with parallel plates, obtaining results not in contradiction with the Casimir theory,[22][23] but with large experimental errors. Some of the experimental details as well as some background information on how Casimir, Polder and Sparnaay arrived at this point[24] are highlighted in a 2007 interview with Marcus Sparnaay.
The Casimir effect was measured more accurately in 1997 by Steve K. Lamoreaux of Los Alamos National Laboratory,[25] and by Umar Mohideen and Anushree Roy of the University of California at Riverside.[26] In practice, rather than using two parallel plates, which would require phenomenally accurate alignment to ensure they were parallel, the experiments use one plate that is flat and another plate that is a part of a sphere with a large radius.
In 2001, a group (Giacomo Bressi, Gianni Carugno, Roberto Onofrio and Giuseppe Ruoso) at the University of Padua (Italy) finally succeeded in measuring the Casimir force between parallel plates using microresonators.[27]
Regularisation
In order to be able to perform calculations in the general case, it is convenient to introduce a regulator in the summations. This is an artificial device, used to make the sums finite so that they can be more easily manipulated, followed by the taking of a limit so as to remove the regulator.
The heat kernel or exponentially regulated sum is
E(t) = (1/2) Σ_n ħ |ω_n| exp(−t |ω_n|),
where the limit t → 0⁺ is taken in the end. The divergence of the sum is typically manifested as
E(t) = C/t³ + finite terms
for three-dimensional cavities. The infinite part of the sum is associated with the bulk constant C, which does not depend on the shape of the cavity. The interesting part of the sum is the finite part, which is shape-dependent. The Gaussian regulator
E(t) = (1/2) Σ_n ħ |ω_n| exp(−t² ω_n²)
is better suited to numerical calculations because of its superior convergence properties, but is more difficult to use in theoretical calculations. Other, suitably smooth, regulators may be used as well. The zeta function regulator
E(s) = (1/2) Σ_n ħ |ω_n| |ω_n|^(−s)
is completely unsuited for numerical calculations, but is quite useful in theoretical calculations. In particular, divergences show up as poles in the complex s plane, with the bulk divergence at s = 4. This sum may be analytically continued past this pole, to obtain a finite part at s = 0.
Not every cavity configuration necessarily leads to a finite part (the lack of a pole at s = 0) or shape-independent infinite parts. In this case, it should be understood that additional physics has to be taken into account. In particular, at extremely large frequencies (above the plasma frequency), metals become transparent to photons (such as X-rays), and dielectrics show a frequency-dependent cutoff as well. This frequency dependence acts as a natural regulator. There are a variety of bulk effects in solid state physics, mathematically very similar to the Casimir effect, where the cutoff frequency comes into explicit play to keep expressions finite. (These are discussed in greater detail in Landau and Lifshitz, Theory of Continuous Media.)
Generalities[edit]
The Casimir effect can also be computed using the mathematical mechanisms of functional integrals of quantum field theory, although such calculations are considerably more abstract, and thus difficult to comprehend. In addition, they can be carried out only for the simplest of geometries. However, the formalism of quantum field theory makes it clear that the vacuum expectation value summations are in a certain sense summations over so-called "virtual particles".
More interesting is the understanding that the sums over the energies of standing waves should be formally understood as sums over theeigenvalues of a Hamiltonian. This allows atomic and molecular effects, such as the van der Waals force, to be understood as a variation on the theme of the Casimir effect. Thus one considers the Hamiltonian of a system as a function of the arrangement of objects, such as atoms, inconfiguration space. The change in the zero-point energy as a function of changes of the configuration can be understood to result in forces acting between the objects.
In the chiral bag model of the nucleon, the Casimir energy plays an important role in showing the mass of the nucleon is independent of the bag radius. In addition, the spectral asymmetry is interpreted as a non-zero vacuum expectation value of the baryon number, cancelling thetopological winding number of the pion field surrounding the nucleon.
Dynamical Casimir effect[edit]
The dynamical Casimir effect is the production of particles and energy from an accelerated moving mirror. This reaction was predicted by certain numerical solutions to quantum mechanics equations made in the 1970s.[28] In May 2011 an announcement was made by researchers at theChalmers University of Technology, in Gothenburg, Sweden, of the detection of the dynamical Casimir effect. In their experiment, microwave photons were generated out of the vacuum in a superconducting microwave resonator. These researchers used a modified SQUID to change the effective length of the resonator in time, mimicking a mirror moving at the required relativistic velocity. If confirmed this would be the first experimental verification of the dynamical Casimir effect.[29] [30]
Analogies[edit]
A similar analysis can be used to explain Hawking radiation that causes the slow "evaporation" of black holes (although this is generally visualized as the escape of one particle from a virtual particle-antiparticle pair, the other particle having been captured by the black hole).[citation needed]
Constructed within the framework of quantum field theory in curved spacetime, the dynamical Casimir effect has been used to better understand acceleration radiation such as the Unruh effect.[citation needed]
Repulsive forces[edit]
There are few instances wherein the Casimir effect can give rise to repulsive forces between uncharged objects. Evgeny Lif***z showed (theoretically) that in certain circumstances (most commonly involving liquids), repulsive forces can arise.[31] This has sparked interest in applications of the Casimir effect toward the development of levitating devices. An experimental demonstration of the Casimir-based repulsion predicted by Lif***z was recently carried out by Munday et al.[32] Other scientists have also suggested the use of gain media to achieve a similar levitation effect,[33] though this is controversial because these materials seem to violate fundamental causality constraints and the requirement of thermodynamic equilibrium (Kramers-Kronig relations). Casimir and Casimir-Polder repulsion can in fact occur for sufficiently anisotropic electrical bodies; for a review of the issues involved with repulsion see Milton et al.[34]
Applications[edit]
It has been suggested that the Casimir forces have application in nanotechnology,[35] in particular silicon integrated circuit technology based micro- and nanoelectromechanical systems, silicon array propulsion for space drives, and so-called Casimir oscillators.[36]
On 4 June 2013 it was reported[37] that a conglomerate of scientists fromHong Kong University of Science and Technology, University of Florida,Harvard University, Massachusetts Institute of Technology, and Oak Ridge National Laboratory have for the first time demonstrated a compact integrated silicon chip that can measure the Casimir force.[38]
See also[edit][/ltr]
From Wikipedia, the free encyclopedia
A water wave analogue of the Casimir effect. Two parallel plates are submerged into colored water contained in a sonicator. When the sonicator is turned on, waves are excited, imitating vacuum fluctuations; as a result, the plates attract each other.
In quantum field theory, the Casimir effect and the Casimir–Polder force are physical forces arising from a quantized field. They are named after the Dutch physicist Hendrik Casimir.
The typical example is of two uncharged metallic plates in a vacuum, placed a few nanometers apart. In a classical description, the lack of an external field also means that there is no field between the plates, and no force would be measured between them.[1] When this field is instead studied using the QED vacuum of quantum electrodynamics, it is seen that the plates do affect the virtual photons which constitute the field, and generate a net force[2]—either an attraction or a repulsion depending on the specific arrangement of the two plates. Although the Casimir effect can be expressed in terms of virtual particles interacting with the objects, it is best described and more easily calculated in terms of the zero-point energy of a quantized field in the intervening space between the objects. This force has been measured, and is a striking example of an effect captured formally by second quantization.[3][4] However, the treatment of boundary conditions in these calculations has led to some controversy. In fact "Casimir's original goal was to compute the van der Waals force between polarizable molecules" of the metallic plates. Thus it can be interpreted without any reference to the zero-point energy (vacuum energy) of quantum fields.[5]
Dutch physicists Hendrik B. G. Casimir and Dirk Polder at Philips Research Labs proposed the existence of a force between two polarizable atoms and between such an atom and a conducting plate in 1947, and, after a conversation with Niels Bohr who suggested it had something to do with zero-point energy, Casimir alone formulated the theory predicting a force between neutral conducting plates in 1948; the former is called the Casimir–Polder force while the latter is the Casimir effect in the narrow sense. Predictions of the force were later extended to finite-conductivity metals and dielectrics by Lifshitz and his students, and recent calculations have considered more general geometries. It was not until 1997, however, that a direct experiment, by S. Lamoreaux, described below, quantitatively measured the force (to within 15% of the value predicted by the theory),[6] although previous work [e.g. van Blockland and Overbeek (1978)] had observed the force qualitatively, and indirect validation of the predicted Casimir energy had been made by measuring the thickness of liquid helium films by Sabisky and Anderson in 1972. Subsequent experiments approach an accuracy of a few percent.
Because the strength of the force falls off rapidly with distance, it is measurable only when the distance between the objects is extremely small. On a submicron scale, this force becomes so strong that it becomes the dominant force between uncharged conductors. In fact, at separations of 10 nm—about 100 times the typical size of an atom—the Casimir effect produces the equivalent of about 1 atmosphere of pressure (the precise value depending on surface geometry and other factors).[7]
In modern theoretical physics, the Casimir effect plays an important role in the chiral bag model of the nucleon; and in applied physics, it is significant in some aspects of emerging microtechnologies and nanotechnologies.[8]
Any medium supporting oscillations has an analogue of the Casimir effect. For example, beads on a string[9][10] as well as plates submerged in noisy water[11] or gas[12] exhibit the Casimir force.
Overview
The Casimir effect can be understood by the idea that the presence of conducting metals and dielectrics alters the vacuum expectation value of the energy of the second quantized electromagnetic field.[13][14] Since the value of this energy depends on the shapes and positions of the conductors and dielectrics, the Casimir effect manifests itself as a force between such objects.
Possible causes
Vacuum energy
Main article: Vacuum energy
The causes of the Casimir effect are described by quantum field theory, which states that all of the various fundamental fields, such as the electromagnetic field, must be quantized at each and every point in space. In a simplified view, a "field" in physics may be envisioned as if space were filled with interconnected vibrating balls and springs, and the strength of the field can be visualized as the displacement of a ball from its rest position. Vibrations in this field propagate and are governed by the appropriate wave equation for the particular field in question. The second quantization of quantum field theory requires that each such ball-spring combination be quantized, that is, that the strength of the field be quantized at each point in space. At the most basic level, the field at each point in space is a simple harmonic oscillator, and its quantization places a quantum harmonic oscillator at each point. Excitations of the field correspond to the elementary particles of particle physics. However, even the vacuum has a vastly complex structure, so all calculations of quantum field theory must be made in relation to this model of the vacuum.
The vacuum has, implicitly, all of the properties that a particle may have: spin, or polarization in the case of light, energy, and so on. On average, most of these properties cancel out: the vacuum is, after all, "empty" in this sense. One important exception is the vacuum energy or the vacuum expectation value of the energy. The quantization of a simple harmonic oscillator states that the lowest possible energy or zero-point energy that such an oscillator may have is E = ħω/2.
Summing over all possible oscillators at all points in space gives an infinite quantity. To remove this infinity, one may argue that only differences in energy are physically measurable; this argument is the underpinning of the theory of renormalization.[citation needed] In all practical calculations, this is how the infinity is always handled.[citation needed] In a wider systematic sense, however, renormalization is not a mathematically coherent method for the removal of this infinity, and it presents a challenge in the search for a Theory of Everything. Currently there is no compelling explanation for why this infinity should be treated as essentially zero; a non-zero value is essentially the cosmological constant[citation needed] and any large value causes trouble in cosmology.
Relativistic van der Waals force
Alternatively, a 2005 paper by Robert Jaffe of MIT states that "Casimir effects can be formulated and Casimir forces can be computed without reference to zero-point energies. They are relativistic, quantum forces between charges and currents. The Casimir force (per unit area) between parallel plates vanishes as alpha, the fine structure constant, goes to zero, and the standard result, which appears to be independent of alpha, corresponds to the alpha → infinity limit," and that "The Casimir force is simply the (relativistic, retarded) van der Waals force between the metal plates."[15]
Effects
Casimir's observation was that the second-quantized quantum electromagnetic field, in the presence of bulk bodies such as metals or dielectrics, must obey the same boundary conditions that the classical electromagnetic field must obey. In particular, this affects the calculation of the vacuum energy in the presence of a conductor or dielectric.
Consider, for example, the calculation of the vacuum expectation value of the electromagnetic field inside a metal cavity, such as, for example, a radar cavity or a microwave waveguide. In this case, the correct way to find the zero-point energy of the field is to sum the energies of the standing waves of the cavity. To each and every possible standing wave corresponds an energy; say the energy of the nth standing wave is E_n. The vacuum expectation value of the energy of the electromagnetic field in the cavity is then
⟨E⟩ = ½ Σ_n E_n,
with the sum running over all possible values of n enumerating the standing waves. The factor of 1/2 corresponds to the fact that the zero-point energies are being summed (it is the same 1/2 as appears in the equation E = ħω/2). Written in this way, this sum is clearly divergent; however, it can be used to create finite expressions.
In particular, one may ask how the zero-point energy depends on the shape s of the cavity. Each energy level depends on the shape, and so one should write E_n(s) for the energy level, and ⟨E(s)⟩ for the vacuum expectation value. At this point comes an important observation: the force at point p on the wall of the cavity is equal to the change in the vacuum energy if the shape s of the wall is perturbed a little bit, say by δs, at point p. That is, one has
F(p) = − δ⟨E(s)⟩/δs, evaluated at the point p.
This value is finite in many practical calculations.[16]
Attraction between the plates can be easily understood by focusing on the one-dimensional situation. Suppose that a moveable conductive plate is positioned at a short distance a from one of two widely separated plates (distance L apart). With a << L, the states within the slot of width a are highly constrained so that the energy E of any one mode is widely separated from that of the next. This is not the case in open region L, where there is a large number (about L/a) of states with energy evenly spaced between E and the next mode in the narrow slot---in other words, all slightly larger than E. Now on shortening a by da (< 0), the mode in the slot shrinks in wavelength and therefore increases in energy proportional to -da/a, whereas all the outside L/a states lengthen and correspondingly lower energy proportional to da/L (note the denominator). The net change is slightly negative, because all the L/a modes' energies are slightly larger than the single mode in the slot.
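A schematic bookkeeping of this argument may be helpful (a hedged sketch; numerical factors are suppressed). With one mode of energy E ∝ 1/a in the slot and roughly L/a relevant modes of energies E_k ≳ E in the open region, shrinking the slot by |da| changes the total zero-point energy by approximately
δE_total ≈ E·|da|/a − Σ_{k=1}^{L/a} E_k·|da|/L ≤ E·|da|/a − (L/a)·E·|da|/L = 0,
with strict inequality because each E_k is slightly larger than E. Shrinking the slot therefore lowers the total zero-point energy, which is exactly the attraction described above.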
Derivation of Casimir effect assuming zeta-regularization
In the original calculation done by Casimir, he considered the space between a pair of conducting metal plates at distance a apart. In this case, the standing waves are particularly easy to calculate, because the transverse component of the electric field and the normal component of the magnetic field must vanish on the surface of a conductor. Assuming the parallel plates lie in the xy-plane, the standing waves are
ψ_n(x, y, z; t) = e^(−iω_n t) e^(i k_x x + i k_y y) sin(k_n z),
where ψ stands for the electric component of the electromagnetic field, and, for brevity, the polarization and the magnetic components are ignored here. Here, k_x and k_y are the wave vectors in directions parallel to the plates, and
k_n = nπ/a
is the wave-vector perpendicular to the plates. Here, n is an integer, resulting from the requirement that ψ vanish on the metal plates. The frequency of this wave is
ω_n = c √(k_x² + k_y² + n²π²/a²),
where c is the speed of light. The vacuum energy is then the sum over all possible excitation modes
⟨E⟩ = (ħ/2) · 2 ∫ (A dk_x dk_y)/(2π)² Σ_{n=1}^∞ ω_n,
where A is the area of the metal plates, and a factor of 2 is introduced for the two possible polarizations of the wave. This expression is clearly infinite, and to proceed with the calculation, it is convenient to introduce a regulator (discussed in greater detail below). The regulator will serve to make the expression finite, and in the end will be removed. The zeta-regulated version of the energy per unit area of the plate is
⟨E(s)⟩/A = ħ ∫ dk_x dk_y/(2π)² Σ_n ω_n |ω_n|^(−s).
In the end, the limit s → 0 is to be taken. Here s is just a complex number, not to be confused with the shape discussed previously. This integral/sum is finite for s real and larger than 3. The sum has a pole at s = 3, but may be analytically continued to s = 0, where the expression is finite. The above expression simplifies to
⟨E(s)⟩/A = (ħ c^(1−s)/(2π)²) Σ_n ∫_0^∞ 2π q dq (q² + π²n²/a²)^((1−s)/2),
where polar coordinates q² = k_x² + k_y² were introduced to turn the double integral into a single integral. The q in front is the Jacobian, and the 2π comes from the angular integration. The integral converges if Re[s] > 3, resulting in
⟨E(s)⟩/A = − (ħ c^(1−s) π^(2−s))/(2 a^(3−s) (3 − s)) Σ_n |n|^(3−s).
The sum diverges at s in the neighborhood of zero, but if the damping of large-frequency excitations corresponding to analytic continuation of the Riemann zeta function to s = 0 is assumed to make sense physically in some way, then one has
⟨E⟩/A = lim_{s→0} ⟨E(s)⟩/A = − (ħ c π²)/(6 a³) ζ(−3).
But ζ(−3) = 1/120, and so one obtains
⟨E⟩/A = − ħ c π²/(720 a³).
The analytic continuation has evidently lost an additive positive infinity, somehow exactly accounting for the zero-point energy (not included above) outside the slot between the plates, but which changes upon plate movement within a closed system. The Casimir force per unit area F_c/A for idealized, perfectly conducting plates with vacuum between them is
F_c/A = − (d/da) ⟨E⟩/A = − ħ c π²/(240 a⁴),
where ħ is the reduced Planck constant, c is the speed of light, and a is the distance between the two plates.
The force is negative, indicating that the force is attractive: by moving the two plates closer together, the energy is lowered. The presence of ħ shows that the Casimir force per unit area is very small, and that furthermore, the force is inherently of quantum-mechanical origin.
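As a quick numerical sanity check of this force law, the following minimal Python sketch evaluates |F_c/A| = π²ħc/(240 a⁴) for a few plate separations; the only inputs assumed here are the standard values of ħ and c and the conversion 1 atm = 101325 Pa.

import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s

def casimir_pressure(a):
    # Magnitude of the Casimir force per unit area for ideal plates, pi^2*hbar*c/(240*a^4), in Pa.
    return math.pi**2 * hbar * c / (240.0 * a**4)

for a in (10e-9, 100e-9, 1e-6):   # separations of 10 nm, 100 nm, 1 micron
    p = casimir_pressure(a)
    print(f"a = {a*1e9:7.1f} nm : |F/A| = {p:.3e} Pa = {p/101325:.3e} atm")

At a = 10 nm this gives roughly 1.3e5 Pa, i.e. on the order of one atmosphere, consistent with the estimate quoted in the introduction above; at a = 1 μm the pressure has already dropped by eight orders of magnitude.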
NOTE: In Casimir's original derivation [1], a moveable conductive plate is positioned at a short distance a from one of two widely separated plates (distance L apart). The zero-point energy on both sides of the plate is considered. Instead of the above ad hoc analytic continuation assumption, non-convergent sums and integrals are computed using Euler–Maclaurin summation with a regularizing function (e.g., exponential regularization) that is not as anomalous as the one used above.
More recent theory
Casimir's analysis of idealized metal plates was generalized to arbitrary dielectric and realistic metal plates by Lifshitz and his students.[17][18] Using this approach, complications of the bounding surfaces, such as the modifications to the Casimir force due to finite conductivity, can be calculated numerically using the tabulated complex dielectric functions of the bounding materials. Lifshitz' theory for two metal plates reduces to Casimir's idealized 1/a⁴ force law for large separations a much greater than the skin depth of the metal, and conversely reduces to the 1/a³ force law of the London dispersion force (with a coefficient called a Hamaker constant) for small a, with a more complicated dependence on a for intermediate separations determined by the dispersion of the materials.[19]
Lifshitz' result was subsequently generalized to arbitrary multilayer planar geometries as well as to anisotropic and magnetic materials, but for several decades the calculation of Casimir forces for non-planar geometries remained limited to a few idealized cases admitting analytical solutions.[20] For example, the force in the experimental sphere–plate geometry was computed with an approximation (due to Derjaguin) that the sphere radius R is much larger than the separation a, in which case the nearby surfaces are nearly parallel and the parallel-plate result can be adapted to obtain an approximate R/a³ force (neglecting both skin-depth and higher-order curvature effects).[20][21] However, in the 2000s a number of authors developed and demonstrated a variety of numerical techniques, in many cases adapted from classical computational electromagnetics, that are capable of accurately calculating Casimir forces for arbitrary geometries and materials, from simple finite-size effects of finite plates to more complicated phenomena arising for patterned surfaces or objects of various shapes.[20]
Measurement
One of the first experimental tests was conducted by Marcus Sparnaay at Philips in Eindhoven, in 1958, in a delicate and difficult experiment with parallel plates, obtaining results not in contradiction with the Casimir theory,[22][23] but with large experimental errors. Some of the experimental details as well as some background information on how Casimir, Polder and Sparnaay arrived at this point[24] are highlighted in a 2007 interview with Marcus Sparnaay.
The Casimir effect was measured more accurately in 1997 by Steve K. Lamoreaux of Los Alamos National Laboratory,[25] and by Umar Mohideen and Anushree Roy of the University of California at Riverside.[26] In practice, rather than using two parallel plates, which would require phenomenally accurate alignment to ensure they were parallel, the experiments use one plate that is flat and another plate that is a part of a sphere with a large radius.
In 2001, a group (Giacomo Bressi, Gianni Carugno, Roberto Onofrio and Giuseppe Ruoso) at the University of Padua (Italy) finally succeeded in measuring the Casimir force between parallel plates using microresonators.[27]
Regularisation
In order to be able to perform calculations in the general case, it is convenient to introduce a regulator in the summations. This is an artificial device, used to make the sums finite so that they can be more easily manipulated, followed by the taking of a limit so as to remove the regulator.
The heat kernel or exponentially regulated sum is
E(t) = ½ Σ_n ħ |ω_n| e^(−t|ω_n|),
where the limit t → 0⁺ is taken in the end. The divergence of the sum is typically manifested as
E(t) = C/t³ + finite terms
for three-dimensional cavities. The infinite part of the sum is associated with the bulk constant C which does not depend on the shape of the cavity. The interesting part of the sum is the finite part, which is shape-dependent. The Gaussian regulator
E(t) = ½ Σ_n ħ |ω_n| e^(−t²ω_n²)
is better suited to numerical calculations because of its superior convergence properties, but is more difficult to use in theoretical calculations. Other, suitably smooth, regulators may be used as well. The zeta function regulator
E(s) = ½ Σ_n ħ |ω_n| |ω_n|^(−s)
is completely unsuited for numerical calculations, but is quite useful in theoretical calculations. In particular, divergences show up as poles in the complex s plane, with the bulk divergence at s = 4. This sum may be analytically continued past this pole, to obtain a finite part at s = 0.
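To make the role of these regulators concrete for the sum Σ_n n³ that appears in the parallel-plate calculation above, the following Python sketch (assuming the mpmath library is available) checks numerically that the exponentially regulated sum behaves as 6/t⁴ plus a finite part, and that this finite part agrees with the zeta-regularized value ζ(−3) = 1/120.

import math
from mpmath import zeta   # mpmath supplies the analytically continued Riemann zeta function

def exp_regulated_finite_part(t, nmax=20000):
    # Finite part of sum_{n>=1} n^3 * exp(-t*n) after subtracting the 6/t^4 divergence.
    s = math.fsum(n**3 * math.exp(-t * n) for n in range(1, nmax + 1))
    return s - 6.0 / t**4

for t in (0.05, 0.02, 0.01):
    print(f"t = {t:5.3f} : finite part = {exp_regulated_finite_part(t):.6f}")

print("zeta(-3) =", zeta(-3))   # 0.008333... = 1/120

The finite, regulator-independent part is the physically interesting piece; the divergent 6/t⁴ term is the analogue of the shape-independent bulk divergence discussed above.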
Not every cavity configuration necessarily leads to a finite part (the lack of a pole at s = 0) or shape-independent infinite parts. In this case, it should be understood that additional physics has to be taken into account. In particular, at extremely large frequencies (above the plasma frequency), metals become transparent to photons (such as X-rays), and dielectrics show a frequency-dependent cutoff as well. This frequency dependence acts as a natural regulator. There are a variety of bulk effects in solid state physics, mathematically very similar to the Casimir effect, where the cutoff frequency comes into explicit play to keep expressions finite. (These are discussed in greater detail in Landau and Lif***z, "Theory of Continuous Media".)
Generalities
The Casimir effect can also be computed using the mathematical mechanisms of functional integrals of quantum field theory, although such calculations are considerably more abstract, and thus difficult to comprehend. In addition, they can be carried out only for the simplest of geometries. However, the formalism of quantum field theory makes it clear that the vacuum expectation value summations are in a certain sense summations over so-called "virtual particles".
More interesting is the understanding that the sums over the energies of standing waves should be formally understood as sums over the eigenvalues of a Hamiltonian. This allows atomic and molecular effects, such as the van der Waals force, to be understood as a variation on the theme of the Casimir effect. Thus one considers the Hamiltonian of a system as a function of the arrangement of objects, such as atoms, in configuration space. The change in the zero-point energy as a function of changes of the configuration can be understood to result in forces acting between the objects.
In the chiral bag model of the nucleon, the Casimir energy plays an important role in showing the mass of the nucleon is independent of the bag radius. In addition, the spectral asymmetry is interpreted as a non-zero vacuum expectation value of the baryon number, cancelling the topological winding number of the pion field surrounding the nucleon.
Dynamical Casimir effect
The dynamical Casimir effect is the production of particles and energy from an accelerated moving mirror. This reaction was predicted by certain numerical solutions to quantum mechanics equations made in the 1970s.[28] In May 2011 an announcement was made by researchers at the Chalmers University of Technology, in Gothenburg, Sweden, of the detection of the dynamical Casimir effect. In their experiment, microwave photons were generated out of the vacuum in a superconducting microwave resonator. These researchers used a modified SQUID to change the effective length of the resonator in time, mimicking a mirror moving at the required relativistic velocity. If confirmed, this would be the first experimental verification of the dynamical Casimir effect.[29][30]
Analogies
A similar analysis can be used to explain Hawking radiation that causes the slow "evaporation" of black holes (although this is generally visualized as the escape of one particle from a virtual particle-antiparticle pair, the other particle having been captured by the black hole).[citation needed]
Constructed within the framework of quantum field theory in curved spacetime, the dynamical Casimir effect has been used to better understand acceleration radiation such as the Unruh effect.[citation needed]
Repulsive forces
There are few instances wherein the Casimir effect can give rise to repulsive forces between uncharged objects. Evgeny Lifshitz showed (theoretically) that in certain circumstances (most commonly involving liquids), repulsive forces can arise.[31] This has sparked interest in applications of the Casimir effect toward the development of levitating devices. An experimental demonstration of the Casimir-based repulsion predicted by Lifshitz was recently carried out by Munday et al.[32] Other scientists have also suggested the use of gain media to achieve a similar levitation effect,[33] though this is controversial because these materials seem to violate fundamental causality constraints and the requirement of thermodynamic equilibrium (Kramers-Kronig relations). Casimir and Casimir-Polder repulsion can in fact occur for sufficiently anisotropic electrical bodies; for a review of the issues involved with repulsion see Milton et al.[34]
Applications
It has been suggested that the Casimir forces have application in nanotechnology,[35] in particular silicon integrated circuit technology based micro- and nanoelectromechanical systems, silicon array propulsion for space drives, and so-called Casimir oscillators.[36]
On 4 June 2013 it was reported[37] that a conglomerate of scientists from Hong Kong University of Science and Technology, University of Florida, Harvard University, Massachusetts Institute of Technology, and Oak Ridge National Laboratory have for the first time demonstrated a compact integrated silicon chip that can measure the Casimir force.[38]
See also
Physics portal
References
1. Cyriaque Genet, Francesco Intravaia, Astrid Lambrecht and Serge Reynaud (2004). "Electromagnetic vacuum fluctuations, Casimir and Van der Waals forces".
2. The Force of Empty Space, Physical Review Focus, 3 December 1998.
3. A. Lambrecht, The Casimir effect: a force from nothing, Physics World, September 2002.
4. American Institute of Physics News Note 1996.
5. Jaffe, R. (2005). "Casimir effect and the quantum vacuum". Physical Review D 72 (2): 021301. arXiv:hep-th/0503158. Bibcode:2005PhRvD..72b1301J. doi:10.1103/PhysRevD.72.021301.
6. Photo of ball attracted to a plate by Casimir effect.
7. "The Casimir effect: a force from nothing". physicsworld.com. 1 September 2002. Retrieved 17 July 2009.
8. Astrid Lambrecht, Serge Reynaud and Cyriaque Genet. "Casimir In The Nanoworld".
9. Griffiths, D. J.; Ho, E. (2001). "Classical Casimir effect for beads on a string". American Journal of Physics 69 (11): 1173. doi:10.1119/1.1396620.
10. Cooke, J. H. (1998). "Casimir force on a loaded string". American Journal of Physics 66 (7): 569–561. doi:10.1119/1.18907.
11. Denardo, B. C.; Puda, J. J.; Larraza, A. S. (2009). "A water wave analog of the Casimir effect". American Journal of Physics 77 (12): 1095. doi:10.1119/1.3211416.
12. Larraza, A. S.; Denardo, B. (1998). "An acoustic Casimir effect". Physics Letters A 248 (2–4): 151. doi:10.1016/S0375-9601(98)00652-5.
13. E. L. Losada. "Functional Approach to the Fermionic Casimir Effect".
14. Michael Bordag, Galina Leonidovna Klimchitskaya, Umar Mohideen (2009). "Chapter I; §3: Field quantization and vacuum energy in the presence of boundaries". Advances in the Casimir Effect. Oxford University Press. pp. 33 ff. ISBN 0-19-923874-X.
15. R. L. Jaffe (2005). "The Casimir Effect and the Quantum Vacuum". arXiv:hep-th/0503158.
16. For a brief summary, see the introduction in Passante, R.; Spagnolo, S. (2007). "Casimir-Polder interatomic potential between two atoms at finite temperature and in the presence of boundary conditions". Physical Review A 76 (4): 042112. arXiv:0708.2240. Bibcode:2007PhRvA..76d2112P. doi:10.1103/PhysRevA.76.042112.
17. Dzyaloshinskii, I. E.; Lifshitz, E. M.; Pitaevskii, Lev P. (1961). "General theory of van der Waals' forces". Soviet Physics Uspekhi 4 (2): 153. Bibcode:1961SvPhU...4..153D. doi:10.1070/PU1961v004n02ABEH003330.
18. Dzyaloshinskii, I. E.; Kats, E. I. (2004). "Casimir forces in modulated systems". Journal of Physics: Condensed Matter 16 (32): 5659. arXiv:cond-mat/0408348. Bibcode:2004JPCM...16.5659D. doi:10.1088/0953-8984/16/32/003.
19. V. A. Parsegian, Van der Waals Forces: A Handbook for Biologists, Chemists, Engineers, and Physicists (Cambridge Univ. Press, 2006).
20. Rodriguez, A. W.; Capasso, F.; Johnson, Steven G. (2011). "The Casimir effect in microstructured geometries". Nature Photonics 5 (4): 211–221. Bibcode:2011NaPho...5..211R. doi:10.1038/nphoton.2011.39. Review article.
21. B. V. Derjaguin, I. I. Abrikosova, and E. M. Lifshitz, Quarterly Reviews, Chemical Society, vol. 10, 295–329 (1956).
22. Sparnaay, M. J. (1957). "Attractive Forces between Flat Plates". Nature 180 (4581): 334. Bibcode:1957Natur.180..334S. doi:10.1038/180334b0.
23. Sparnaay, M. (1958). "Measurements of attractive forces between flat plates". Physica 24 (6–10): 751. Bibcode:1958Phy....24..751S. doi:10.1016/S0031-8914(58)80090-7.
24. Movie.
25. Lamoreaux, S. K. (1997). "Demonstration of the Casimir Force in the 0.6 to 6 μm Range". Physical Review Letters 78: 5. Bibcode:1997PhRvL..78....5L. doi:10.1103/PhysRevLett.78.5.
26. Mohideen, U.; Roy, Anushree (1998). "Precision Measurement of the Casimir Force from 0.1 to 0.9 µm". Physical Review Letters 81 (21): 4549. arXiv:physics/9805038. Bibcode:1998PhRvL..81.4549M. doi:10.1103/PhysRevLett.81.4549.
27. Bressi, G.; Carugno, G.; Onofrio, R.; Ruoso, G. (2002). "Measurement of the Casimir Force between Parallel Metallic Surfaces". Physical Review Letters 88 (4): 041804. arXiv:quant-ph/0203002. Bibcode:2002PhRvL..88d1804B. doi:10.1103/PhysRevLett.88.041804. PMID 11801108.
28. Fulling, S. A.; Davies, P. C. W. (1976). "Radiation from a Moving Mirror in Two Dimensional Space-Time: Conformal Anomaly". Proceedings of the Royal Society A 348 (1654): 393. Bibcode:1976RSPSA.348..393F. doi:10.1098/rspa.1976.0045.
29. "First Observation of the Dynamical Casimir Effect". Technology Review.
30. Wilson, C. M.; Johansson, G.; Pourkabirian, A.; Simoen, M.; Johansson, J. R.; Duty, T.; Nori, F.; Delsing, P. (2011). "Observation of the Dynamical Casimir Effect in a Superconducting Circuit". Nature 479 (7373): 376–379. arXiv:1105.4714. Bibcode:2011Natur.479..376W. doi:10.1038/nature10561. PMID 22094697.
31. Dzyaloshinskii, I. E.; Lifshitz, E. M.; Pitaevskii, L. P. (1961). "The general theory of van der Waals forces". Advances in Physics 10 (38): 165. Bibcode:1961AdPhy..10..165D. doi:10.1080/00018736100101281.
32. Munday, J. N.; Capasso, F.; Parsegian, V. A. (2009). "Measured long-range repulsive Casimir–Lifshitz forces". Nature 457 (7226): 170–3. Bibcode:2009Natur.457..170M. doi:10.1038/nature07610. PMID 19129843.
33. Highfield, Roger (6 August 2007). "Physicists have 'solved' mystery of levitation". The Daily Telegraph (London). Retrieved 28 April 2010.
34. Milton, K. A.; Abalo, E. K.; Parashar, Prachi; Pourtolami, Nima; Brevik, Iver; Ellingsen, Simen A. (2012). "Repulsive Casimir and Casimir-Polder Forces". J. Phys. A 45 (37): 4006. arXiv:1202.6415. Bibcode:2012JPhA...45K4006M. doi:10.1088/1751-8113/45/37/374006.
35. Capasso, F.; Munday, J. N.; Iannuzzi, D.; Chan, H. B. (2007). "Casimir forces and quantum electrodynamical torques: physics and nanomechanics". IEEE Journal of Selected Topics in Quantum Electronics 13 (2): 400. doi:10.1109/JSTQE.2007.893082.
36. Serry, F. M.; Walliser, D.; MacLay, G. J. (1995). "The anharmonic Casimir oscillator (ACO) - the Casimir effect in a model microelectromechanical system". Journal of Microelectromechanical Systems 4 (4): 193. doi:10.1109/84.475546.
37. "Chip harnesses mysterious Casimir effect force". 4 June 2013. Retrieved 4 June 2013.
38. Zao, J.; Marcet, Z.; Rodriguez, A. W.; Reid, M. T. H.; McCauley, A. P.; Kravchenko, I. I.; Lu, T.; Bao, Y.; Johnson, S. G.; Chan, H. B. (14 May 2013). "Casimir forces on a silicon micromechanical chip". Nature Communications 4: 1845. arXiv:1207.6163. Bibcode:2013NatCo...4E1845Z. doi:10.1038/ncomms2842. PMID 23673630. Retrieved 5 June 2013.
Further reading
Introductory readings
- Casimir effect description from University of California, Riverside's version of the Usenet physics FAQ.
- A. Lambrecht, The Casimir effect: a force from nothing, Physics World, September 2002.
- Casimir effect on Astronomy Picture of the Day
Papers, books and lectures
- H. B. G. Casimir, and D. Polder, "The Influence of Retardation on the London-van der Waals Forces", Phys. Rev. 73, 360–372 (1948).
- Casimir, H. B. G. (1948). "On the attraction between two perfectly conducting plates". Proc. Kon. Nederland. Akad. Wetensch. B51: 793-795.
- S. K. Lamoreaux, "Demonstration of the Casimir Force in the 0.6 to 6 µm Range", Phys. Rev. Lett. 78, 5–8 (1997)
- M. Bordag, U. Mohideen, V.M. Mostepanenko, "New Developments in the Casimir Effect", Phys. Rep. 353, 1–205 (2001), arXiv. (200+ page review paper.)
- Kimball A. Milton: "The Casimir effect", World Scientific, Singapore 2001, ISBN 981-02-4397-9
- Diego Dalvit, et al.: Casimir Physics. Springer, Berlin 2011, ISBN 978-3-642-20287-2
- Bressi, G.; Carugno, G.; Onofrio, R.; Ruoso, G. (2002). "Measurement of the Casimir Force between Parallel Metallic Surfaces". Physical Review Letters 88 (4): 041804. arXiv:quant-ph/0203002. Bibcode:2002PhRvL..88d1804B. doi:10.1103/PhysRevLett.88.041804. PMID 11801108.
- Kenneth, O.; Klich, I.; Mann, A.; Revzen, M. (2002). "Repulsive Casimir Forces". Physical Review Letters 89 (3): 033001. arXiv:quant-ph/0202114. Bibcode:2002PhRvL..89c3001K. doi:10.1103/PhysRevLett.89.033001. PMID 12144387.
- J. D. Barrow, "Much ado about nothing", (2005) Lecture at Gresham College. (Includes discussion of French naval analogy.)
- Barrow, John D. (2000). The book of nothing: vacuums, voids, and the latest ideas about the origins of the universe (1st American ed.). New York: Pantheon Books. ISBN 0-09-928845-1. (Also includes discussion of French naval analogy.)
- Jonathan P. Dowling, "The Mathematics of the Casimir Effect", Math. Mag. 62, 324–331 (1989).
- Patent № PCT/RU2011/000847 Author Urmatskih.
Temperature dependence
- Measurements Recast Usual View of Elusive Force from NIST
- V.V. Nesterenko, G. Lambiase, G. Scarpetta, Calculation of the Casimir energy at zero and finite temperature: some recent results, arXiv:hep-th/0503100 v2 13 May 2005
External links
- Casimir effect article search on arxiv.org
- G. Lang, The Casimir Force web site, 2002
- J. Babb, bibliography on the Casimir Effect web site, 2009
Rigorous Finite-Dimensional Magic Formulas of Quantum Field Theory
Everything should be made as simple as possible, but not simpler.
Albert Einstein (1879–1955)
The important things in the world appear as invariants. . . The things we
are immediately aware of are the relations of these invariants to a certain
frame of reference. . . The growth of the use of transformation theory, as
applied first to relativity and later to the quantum theory, is the essence
of the new method in theoretical physics.
Paul Dirac, 1930
St. John’s College, Cambridge, England
This chapter is completely elementary, but it is very important for understanding
both the basic ideas behind quantum field theory and the language
used by physicists. Mathematicians should note that we introduce two crucial
tools which are not mentioned in the standard literature on finite-dimensional
linear algebra, namely,
• the Dirac calculus and
• discrete path integrals (functional integrals).
These tools are also very useful for mathematics itself.
Geometrization of Physics
In his 1915 theory of general relativity, Einstein described observers by local
coordinate systems. However, since physics has to be independent of the
choice of observers, physical quantities like the gravitational force have to
be described by geometric objects which do not depend on the choice of the
observer. In the late 1920s, Dirac tried to translate this general philosophy
to quantum mechanics. To this end, he invented his transformation theory
in the setting of Hilbert spaces. More precisely, as we will show in Sect.
12.2, one needs the concept of a rigged Hilbert space or Gelfand triplet.
This is intimately related to Laurent Schwartz’s theory of distributions which
generalizes the Newton–Leibniz calculus. Summarizing, modern theoretical
physics is based on the following concept:
physical quantities (observables) and states
⇒ invariant geometric objects;
observers ⇒ coordinate systems.
This underlines the importance of geometry for modern physics and the usefulness
of abstract mathematical notions for describing physical quantities.
In quantum physics, one has to use the geometry of Hilbert spaces.
Ariadne's Thread
Ariadne's thread comes from Greek mythology. It is commonly used as a metaphor for a way out of a labyrinth, or for the clue that unravels a complicated problem.
On the island of Crete lived King Minos, whose son had been killed by the Athenians out of jealousy, and so he declared war on the people of Athens. Athens was also punished by the gods for this deed and was stricken with famine and plague. The oracle of Apollo proclaimed that if the Athenians could appease Minos's anger and obtain his forgiveness, the calamities of Athens and the wrath of the gods would be lifted at once. The Athenians could only sue for peace, and the terms were that every nine years Athens had to send seven youths and seven maidens to Crete as offerings to the Minotaur, the bull-headed monster that guarded the island's famous labyrinth. The labyrinth was said to have been built by the great architect Daedalus, and its passages were so winding that no one who entered could ever find the way out.
When the time for the third tribute came, Theseus, deeply grieved, resolved to slay the Minotaur and deliver his homeland. As one of the youths, Theseus came to Crete. When he was brought before King Minos, the vigorous young hero won the favor of the king's charming daughter Ariadne. She secretly confessed her love to Theseus and gave him a ball of thread, telling him to fasten one end at the entrance of the labyrinth and to follow the unrolling ball until he reached the lair of the hideous Minotaur. She also gave him a sword with which to kill the Minotaur. With these two gifts he overcame the Minotaur and, following the thread, led the youths and maidens safely out of the labyrinth. After they escaped, Ariadne fled together with them.
With the help of the Cretan princess Ariadne, the hero Theseus used a ball of thread to master the labyrinth and kill the monster Minotaur. This ball of thread is called Ariadne's thread; it was Theseus's lifeline in the labyrinth.
Ariadne’s Thread in Quantum Field Theory
Quantum field theory is based on the following tools:
• Fourier series (7.17) and Dirac calculus (7.20);
• the Fourier representation of the Green’s operator (7.28);
• the Laplace transform of the Green’s operator (7.30);
• the Dyson series for the Feynman propagator (7.51).
The magic formulas of quantum field theory. The following three
magic formulas lie at the heart of quantum field theory:
(i) the magic Dyson perturbation formula for the S-matrix (Sect. 7.18);
(ii) the magic Feynman propagator formula and the Feynman kernel in terms
of discrete path integrals (Sect. 7.21.1);
(iii) the magic Gell-Mann–Low formula for perturbed causal correlation functions
(Sect. 7.22.2).
Furthermore, the magic Gell-Mann–Low formula is closely related to the
magic Feynman formula for time-ordered products (Sect. 7.21.2) and the
Wick rotation trick for vacuum expectation values (Sect. 7.22.1).
Basic strategy. These formulas are used by physicists in a formal manner
for infinite-dimensional systems. There is a lack of rigorous justification. Our
strategy is the following one:
(F) Rigorous finite-dimensional approach: We prove the magic formulas rigorously
in finite-dimensional Hilbert spaces. In particular, we introduce a
rigorous discrete Feynman path integral. This will be done in the present
chapter.
(I) Formal infinite-dimensional approach: We translate straightforwardly the
rigorous finite-dimensional formulas to infinite dimensions in a formal
manner. This will be done in
• Chapter 14 (response approach) and
• Chapter 15 (operator approach).
This way, a mathematician should learn quickly the background of the formulas
used by physicists in quantum field theory.
The advantage of Feynman’s approach to quantum field theory is the fact
that both the transition amplitudes and the causal correlation functions can
be represented by functional integrals which only depend on the classical
action that appears in the principle of critical action.
Resonances and renormalization. In quantum field theory, a crucial
role is played by the methods of renormalization theory. In Sect. 8.1, we will
discuss the basic ideas of renormalization.
Using a simple rigorous finite-dimensional model, we will show that
the phenomenon of renormalization is related to resonances which
can be treated rigorously in terms of bifurcation theory.
The point is that in the resonance case, *** perturbation theory fails completely;
it has to be replaced by a more sophisticated approach.
The challenge for mathematics. The reader should note that physicists
successfully use the formal infinite-dimensional methods in order to predict
experimental data with extremely high precision.
From the mathematical point of view, the main problem is to justify
the passage from finite to infinite dimensions for quantities which can
be measured in physical experiments.
The study of the appropriate limits represents an important mathematical
task for the future.
Linear Spaces
Functional analysis uses linear spaces and equips them with additional structures.
This way, algebra, analysis, and geometry are combined with each
other. Physical systems with a finite (resp. infinite) number of degrees of freedom
are described by finite-dimensional (resp. infinite-dimensional) spaces.
Morphisms and isomorphisms in modern mathematics. In this
monograph, we will encounter many mathematical structures, for example,
linear spaces, Hilbert spaces, groups, algebras, topological spaces, manifolds,
and so on. Typical maps are called morphisms; they preserve the structure
under consideration. Isomorphic structures are always characterized by isomorphisms.
Let us explain this for linear spaces. Note that
The definitions of the standard notions “surjective, injective, and
bijective” can be found in the Appendix on page 931.
Linear morphisms. Linear spaces are characterized by the linear structure
based on linear combinations. Linear morphisms preserve linear structure.
Explicitly, let X and Y be complex linear spaces. By a linear morphism
(or a linear operator) A : X → Y we understand a map which preserves
linear combinations, that is,
A(αϕ + βψ) = αAϕ + βAψ
for all ϕ, ψ ∈ X and all complex numbers α, β.
Isomorphic linear spaces. By a linear isomorphism, we understand a
linear morphism A : X → Y which is bijective. Two complex linear spaces
X and Y are said to be isomorphic iff there exists a linear isomorphism
A : X → Y.
Nonlinear operators describe interactions in physics.
Therefore, nonlinear operators play a fundamental role in physics.
Terminology. In what follows, the symbol K denotes either R or C. By
a linear space over K we understand a real (resp. complex) linear space iff
K = R (resp. K = C).
The linear space L(X, Y ). The symbol L(X, Y ) denotes the set of all
linear operators
A : X → Y
where X and Y are linear spaces over K. Let A,B ∈ L(X, Y ) and α, β ∈ K.
The linear combination αA + βB is defined by setting
(αA + βB)ψ := αAψ + βBψ for all ψ ∈ X.
This way, the set L(X, Y ) becomes a linear space over K. We now choose
Y = X.
The algebra L(X,X). For two linear operators A,B : X → X define
the product AB by setting
(AB)ψ := A(Bψ) for all ψ ∈ X.
The set L(X,X) becomes an algebra. For all A,B,C ∈ L(X,X) and all
α, β ∈ K, this means the following.
• L(X,X) is a linear space over K.
• (αA + βB)C = αAC + βBC and C(αA + βB) = αCA + βCB (distributivity).
• (AB)C = A(BC) (associativity).
Subgroup. A subset S of a group G is called a subgroup iff it is a group
with respect to the product on G. Again let X be a complex n-dimensional
Hilbert space with n = 1, 2, . . . Then the following are true.
• Unitary group U(X): The set U(X) of all unitary operators A : X → X
forms a subgroup of GL(X).
• Special unitary group SU(X): The set of all A ∈ U(X) with det(A) = 1
forms a subgroup of U(X) denoted by SU(X).
The group GL(1,R) consists of all nonzero real numbers.
The group U(1) consists of all complex numbers z with |z| = 1.
The real, general linear group GL(n,R) consists of all invertible real
(n × n)-matrices.
The real, special linear group SL(n,R) consists of all matrices A in
GL(n,R) with det(A) = 1.
The complex, general linear group GL(n,C) consists of all invertible
complex (n × n)-matrices.
The complex, special linear group SL(n,C) consists of all matrices A
in GL(n,C) with det(A) = 1.
The unitary group U(n) consists of all complex (n × n)-matrices with
AA† = A†A = I. Such matrices are called unitary matrices. This is a
subgroup of GL(n,C).
The special unitary group SU(n) consists of all matrices A ∈ U(n) with
detA = 1.
The orthogonal group O(n) consists of all real (n × n)-matrices A with
AA^d = A^dA = I. Such matrices are called orthogonal matrices. This is a
subgroup of GL(n,R).
The special orthogonal group SO(n) consists of all matrices A ∈ O(n)
with det(A) = 1. In particular, O(1) = {1,−1} and SO(1) = {1}.
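These defining conditions are easy to test numerically. The following Python sketch (using NumPy; the particular sample matrices are chosen only for illustration) checks that a given complex 2×2 matrix lies in SU(2) and that a planar rotation lies in SO(2).

import numpy as np

def is_unitary(A, tol=1e-12):
    # Checks A A^dagger = I.
    return np.allclose(A @ A.conj().T, np.eye(A.shape[0]), atol=tol)

def is_orthogonal(A, tol=1e-12):
    # Checks A A^d = I for a real matrix A (A^d = transpose).
    return np.allclose(A @ A.T, np.eye(A.shape[0]), atol=tol)

theta, phi = 0.3, 1.1
U = np.array([[np.cos(theta), -np.exp(1j*phi)*np.sin(theta)],
              [np.exp(-1j*phi)*np.sin(theta), np.cos(theta)]])   # a sample element of SU(2)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])                  # a rotation, an element of SO(2)

print(is_unitary(U), np.isclose(np.linalg.det(U), 1.0))    # True True
print(is_orthogonal(R), np.isclose(np.linalg.det(R), 1.0)) # True True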
If X is a complex n-dimensional Hilbert space with n = 1, 2, . . . , then we
have the following group isomorphisms:
• U(n) = U(X), SU(n) = SU(X),GL(n,C) = GL(X), SL(n,C) = SL(X).
Similarly, if X is a real n-dimensional Hilbert space, then we have the following
group isomorphisms:
• O(n) = U(X), SO(n) = SU(X),GL(n,R) = GL(X), SL(n,R) = SL(X).
A group is called commutative (or Abelian) iff AB = BA for all A,B ∈ G.
Lie Algebras
Let X be a real linear space. The prototype of a Lie algebra is the set gl(X)
of all linear operators A : X → X. Define the Lie product
[A,B] := AB − BA.
Set L := gl(X). Then, for all A,B,C ∈ L and all real numbers α, β the
following are met.
(L1) Linearity: L is a real linear space.
(L2) Consistency: [A,B] ∈ L.
(L3) Anticommutativity: [B,A] = −[A,B].
(L4) Bilinearity: [αA + βB,C] = α[A,C] + β[B,C].
(L5) Jacobi identity: [[A,B], C] + [[B,C],A] + [[C,A],B] = 0.
A set L is called a real Lie algebra iff it is equipped with a uniquely determined
Lie product [A,B] for all elements A,B of L such that conditions (L1) through
(L5) are met. The Lie product [A,B] is also called the Lie bracket.
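The axioms (L3) and (L5) can be verified numerically for the prototype gl(X); the following Python sketch (using NumPy, with randomly chosen real 4×4 matrices) checks anticommutativity and the Jacobi identity.

import numpy as np

def bracket(A, B):
    # Lie product [A, B] = AB - BA.
    return A @ B - B @ A

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))

# (L3) anticommutativity: [B, A] = -[A, B]
print(np.allclose(bracket(B, A), -bracket(A, B)))
# (L5) Jacobi identity: [[A,B],C] + [[B,C],A] + [[C,A],B] = 0
jacobi = bracket(bracket(A, B), C) + bracket(bracket(B, C), A) + bracket(bracket(C, A), B)
print(np.allclose(jacobi, np.zeros((4, 4))))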
Lie algebra morphism. A map χ : L → M between two real (resp. complex)
Lie algebras is called a Lie algebra morphism iff it is a linear morphism
and it respects Lie products, that is,
χ([A,B]) = [χ(A), χ(B)] for all A,B ∈ L.
Lie subalgebra. A subset S of a Lie algebra L is called a Lie subalgebra
iff it is a Lie algebra with respect to the Lie product on L. For example,
let X be a complex n-dimensional Hilbert space with n = 1, 2, . . . Then the
following are true.
• Skew-adjoint operators: The set u(X) of all linear operators A : X → X
with A† = −A forms a Lie subalgebra of gl(X).
• Traceless skew-adjoint operators: The set of all A ∈ u(X) with tr(A) = 0
forms a Lie subalgebra of u(X) denoted by su(X).
Classification of morphisms. The following terminology is used in
mathematics for all kinds of morphisms:
• bijective morphisms are called isomorphisms if the inverse map is also a
morphism;
Furthermore, for linear morphisms (or group morphisms, Lie algebra morphisms),
we use the following convention:
• surjective morphisms are called epimorphisms;
• injective morphisms are called monomorphisms.
For example, two groups G and H are called isomorphic iff there exists a
group isomorphism χ :G →H.
Isomorphic groups and isomorphic Lie algebras describe the same
mathematics and physics.
Matrix Lie algebras. The following real Lie algebras with respect to
the Lie product [A,B] := AB − BA are frequently used.
(i) The Lie algebra gl(n,R) consists of all real (n × n)-matrices.
(ii) The Lie algebra sl(n,R) consists of all A ∈ gl(n,R) with tr(A) = 0.
(iii) The Lie algebra gl(n,C) consists of all complex (n × n)-matrices.
(iv) The Lie algebra sl(n,C) consists of all matrices A ∈ gl(n,C) with vanishing
trace, tr(A) = 0.
(v) The Lie algebra u(n) consists of all A ∈ gl(n,C) with A† = −A. These
matrices are called skew-adjoint.
(vi) The Lie algebra su(n) consists of all complex matrices A ∈ u(n) with
tr(A) = 0.
(vii) The Lie algebra o(n) consists of all A ∈ gl(n,R) with A^d = −A. These
matrices are called skew-symmetric.
(viii) The Lie algebra so(n) coincides with o(n).
For example, in order to prove that su(n) is a real Lie algebra, we have to
show that the following are true.
(a) If A,B ∈ su(n) and α, β ∈ R, then αA + βB ∈ su(n).
(b) If A,B ∈ su(n), then [A,B] ∈ su(n).
In addition, tr([A,B]) = tr(AB − BA) = tr(AB) − tr(BA) = 0.
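For n = 2 this can be made completely explicit with the Pauli matrices (a Python/NumPy sketch; multiplying by i is the standard way to obtain skew-adjoint, traceless generators of su(2)):

import numpy as np

sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)
sigma2 = np.array([[0, -1j], [1j, 0]], dtype=complex)

A = 1j * sigma1   # skew-adjoint (A^dagger = -A) and traceless, hence in su(2)
B = 1j * sigma2

comm = A @ B - B @ A                       # equals -2i*sigma3
print(np.allclose(comm.conj().T, -comm))   # True: [A,B] is again skew-adjoint
print(np.isclose(np.trace(comm), 0.0))     # True: [A,B] is again traceless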
If X is a complex n-dimensional Hilbert space with n = 1, 2, . . . , then we
have the following real Lie algebra isomorphisms:
u(n) = u(X), su(n) = su(X), gl(n,C) = gl(X), sl(n,C) = sl(X).
Similarly, if X is a real n-dimensional Hilbert space, then we have the following
real Lie algebra isomorphisms:
o(n) = so(n) = u(X), gl(n,R) = gl(X), sl(n,R) = sl(X).
Lie groups. The notion of Lie group combines the notion of group with
the notion of manifold. By definition, a finite-dimensional real manifold is
called a Lie group iff it is equipped with a group structure such that both
the multiplication map
(A,B) → AB
and the inversion map A → A⁻¹ are smooth.
Lie’s linearization principle. The tangent space of a Lie group G at
the unit element can always be equipped with the structure of a Lie algebra
which is denoted by LG.
It is crucial for the theory of Lie groups that the Lie algebra LG
knows all about the local structure of the Lie group G.
This is the famous Lie linearization principle for Lie groups which we will
encounter quite often.
Lie group morphism. Let G and H be Lie groups. By a Lie group
morphism
f :G →H
we understand a map which is both a group morphism and a manifold morphism.
In other words, this is a smooth group morphism. The map f :G →H
is called a Lie group isomorphism iff it is both a group isomorphism and a
manifold isomorphism. In other words, this is a group isomorphism which is
a diffeomorphism, too.
The theory of manifolds is fundamental for modern physics, as we will
see at many different places of the volumes of this treatise. In particular, the
theory of Lie groups and Lie algebras (and their representations) is basic for
the study of symmetry phenomena in physics and mathematics.
The experience of physicists is that causal correlation functions
represent an important tool in quantum field theory.
Quantum field theory is mainly based on the computation of causal
correlation functions which describe the correlations of the quantum
field between different space points at different time points.
The causal correlation functions of a quantum field are also called 2-point
Green’s functions and higher-order Green’s functions.
Theorem Each nontrivial finite-dimensional Hilbert space possesses an
orthonormal basis.
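A constructive proof proceeds by the Gram–Schmidt process. The following Python sketch (for the concrete Hilbert space C^n with the standard inner product; NumPy assumed) turns a given basis into an orthonormal one.

import numpy as np

def gram_schmidt(vectors):
    # Returns an orthonormal basis spanning the same space as the given linearly independent vectors.
    basis = []
    for v in vectors:
        w = v.astype(complex)
        for e in basis:
            w = w - np.vdot(e, v) * e   # remove the component along e (vdot conjugates its first argument)
        basis.append(w / np.linalg.norm(w))
    return basis

vecs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
e = gram_schmidt(vecs)
G = np.array([[np.vdot(a, b) for b in e] for a in e])
print(np.allclose(G, np.eye(3)))   # True: the resulting vectors form an orthonormal system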
As all roads lead to Rome so I find in my own case at least that all algebraic
inquiries, sooner or later, end at the Capitol of Modern Algebra over whose
shining portal is inscribed the Theory of Invariants.
James Sylvester (1814–1897)
The theory of invariants came into existence about the middle of the nineteenth
century somewhat like Minerva: a grown-up virgin, mailed in the
shining armor of algebra, she sprang forth from Cayley’s Jovian head:22
Her Athens over which she ruled and which she served as a tutelary and
beneficent goddess was projective geometry.23
Hermann Weyl (1885–1955)
Geometry is the invariant theory of groups of transformations.
Felix Klein (1849–1925)
Erlanger Program, 1872
The value tr(A) does not depend on the choice of the
complete orthonormal system. The trace of a linear operator on a finite-dimensional Hilbert space is invariant under linear isomorphisms.
The theory of invariants plays a fundamental role in physics.
We will encounter this quite often in this treatise. For example, the trace
of observables is fundamental for statistical physics and quantum physics.
This importance of the trace stems from the fact that tr(A) assigns a real
number to the observable A. This real number can be measured in a physical
experiment.
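This invariance is easy to see in coordinates: if S is a linear isomorphism, then tr(SAS⁻¹) = tr(A), because tr(AB) = tr(BA). A minimal numerical check in Python/NumPy (the random matrix and the diagonal shift making S invertible are arbitrary choices):

import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
S = rng.standard_normal((5, 5)) + 5 * np.eye(5)   # shifted along the diagonal so that S is (generically) invertible

print(np.isclose(np.trace(A), np.trace(S @ A @ np.linalg.inv(S))))   # True: the trace is basis-independent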
Banach spaces are used in order to study the convergence of iterative
methods which play a fundamental role in perturbation theory.
Folklore
Feynman’s approach to quantum physics is based on representing the
retarded propagator by a path integral.
The role of negative energies. In classical mechanics, the energy E
is always nonnegative. However, in quantum physics, negative energies are
allowed. In 1928 Dirac predicted that negative energies correspond to positive
energies of antiparticles.
The symmetry properties are of primary importance for the investigation of various
physical phenomena in the physics of interacting elementary particles. First of all,
these relate to the classification of particles within the framework of the corresponding
types of interactions, construction of the Lagrangians, determination of the integrals
of motion (conserved quantities), etc. The properties of symmetry are described
by the corresponding symmetry groups and an elementary particle is regarded as an
object whose states form the basis of irreducible representation of a certain symmetry
group.
The invariance of laws of the physics of elementary particles under the proper
inhomogeneous Lorentz group is regarded as a fundamental postulate of the
quantum field theory.
Everything should be made as simple as possible, but not simpler.
Albert Einstein (1879–1955)
The important things in the world appear as invariants. . . The things we
are immediately aware of are the relations of these invariants to a certain
frame of reference. . . The growth of the use of transformation theory, as
applied first to relativity and later to the quantum theory, is the essence
of the new method in theoretical physics.
Paul Dirac, 1930
St. John’s College, Cambridge, England
This chapter is completely elementary, but it is very important for understanding
both the basic ideas behind quantum field theory and the language
used by physicists. Mathematicians should note that we introduce two crucial
tools which are not mentioned in the standard literature on finite-dimensional
linear algebra, namely,
• the Dirac calculus and
• discrete path integrals (functional integrals).
These tools are also very useful for mathematics itself.
Geometrization of Physics
In his 1915 theory of general relativity, Einstein described observers by local
coordinate systems. However, since physics has to be independent of the
choice of observers, physical quantities like the gravitational force have to
be described by geometric objects which do not depend on the choice of the
observer. In the late 1920s, Dirac tried to translate this general philosophy
to quantum mechanics. To this end, he invented his transformation theory
in the setting of Hilbert spaces. More precisely, as we will show in Sect.
12.2, one needs the concept of a rigged Hilbert space or Gelfand triplet.
This is intimately related to Laurent Schwartz’s theory of distributions which
generalizes the Newton–Leibniz calculus. Summarizing, modern theoretical
physics is based on the following concept:
physical quantities (observables) and states
⇒ invariant geometric objects;
observers ⇒ coordinate systems.
This underlines the importance of geometry for modern physics and the usefulness
of abstract mathematical notions for describing physical quantities.
In quantum physics, one has to use the geometry of Hilbert spaces.
Ariadne's Thread
Ariadne's thread comes from ancient Greek mythology. It is commonly used as a metaphor for the way out of a labyrinth, that is, for the clue that leads to the solution of a complicated problem.
On the island of Crete lived King Minos, whose son had been killed by the Athenians out of jealousy; Minos therefore challenged the Athenians. For this deed Athens was also punished by the gods and was struck by famine and plague. The oracle of Apollo declared that if the Athenians could appease Minos' anger and obtain his forgiveness, the disasters of Athens and the wrath of the gods would cease at once. The Athenians could only sue for peace, and the price of peace was that every nine years Athens had to send seven youths and seven maidens to Crete as an offering to the Minotaur, the monster with a man's body and a bull's head that guarded the island's famous labyrinth. The labyrinth was said to have been built by the great architect Daedalus; its passages wound so intricately that nobody who entered could find the way out.
When the time of the third tribute came, Theseus, grieved at heart, resolved to slay the Minotaur and deliver his country. As one of the youths, Theseus came to Crete. When he was brought before King Minos, the king's charming daughter Ariadne fell in love with the vigorous young hero. She secretly confessed her love to Theseus and gave him a ball of thread, telling him to fasten one end at the entrance of the labyrinth and to follow the unrolling thread until he reached the lair of the hideous Minotaur. She also gave Theseus a sharp sword with which to slay the Minotaur. With these two gifts he overcame the Minotaur and, following the thread, led the youths and maidens safely out of the labyrinth. After their escape, Ariadne fled with them.
With the help of the Cretan princess Ariadne, the hero Theseus mastered the labyrinth by means of a ball of thread and slew the monster Minotaur. This ball of thread is called Ariadne's thread; it was Theseus' lifeline inside the labyrinth.
Ariadne’s Thread in Quantum Field Theory
Quantum field theory is based on the following tools:
• Fourier series (7.17) and Dirac calculus (7.20);
• the Fourier representation of the Green’s operator (7.28);
• the Laplace transform of the Green’s operator (7.30);
• the Dyson series for the Feynman propagator (7.51).
The magic formulas of quantum field theory. The following three
magic formulas lie at the heart of quantum field theory:
(i) the magic Dyson perturbation formula for the S-matrix (Sect. 7.18);
(ii) the magic Feynman propagator formula and the Feynman kernel in terms
of discrete path integrals (Sect. 7.21.1);
(iii) the magic Gell-Mann–Low formula for perturbed causal correlation functions
(Sect. 7.22.2).
Furthermore, the magic Gell-Mann–Low formula is closely related to the
magic Feynman formula for time-ordered products (Sect. 7.21.2) and the
Wick rotation trick for vacuum expectation values (Sect. 7.22.1).
Basic strategy. These formulas are used by physicists in a formal manner
for infinite-dimensional systems. There is a lack of rigorous justification. Our
strategy is the following one:
(F) Rigorous finite-dimensional approach: We prove the magic formulas rigorously
in finite-dimensional Hilbert spaces. In particular, we introduce a
rigorous discrete Feynman path integral. This will be done in the present
chapter.
(I) Formal infinite-dimensional approach: We translate the rigorous
finite-dimensional formulas straightforwardly to infinite dimensions in a formal
manner. This will be done in
• Chapter 14 (response approach) and
• Chapter 15 (operator approach).
This way, a mathematician should learn quickly the background of the formulas
used by physicists in quantum field theory.
The advantage of Feynman’s approach to quantum field theory is the fact
that both the transition amplitudes and the causal correlation functions can
be represented by functional integrals which only depend on the classical
action that appears in the principle of critical action.
Resonances and renormalization. In quantum field theory, a crucial
role is played by the methods of renormalization theory. In Sect. 8.1, we will
discuss the basic ideas of renormalization.
Using a simple rigorous finite-dimensional model, we will show that
the phenomenon of renormalization is related to resonances which
can be treated rigorously in terms of bifurcation theory.
The point is that in the resonance case, perturbation theory fails completely;
it has to be replaced by a more sophisticated approach.
The challenge for mathematics. The reader should note that physicists
successfully use the formal infinite-dimensional methods in order to predict
experimental data with extremely high precision.
From the mathematical point of view, the main problem is to justify
the passage from finite to infinite dimensions for quantities which can
be measured in physical experiments.
The study of the appropriate limits represents an important mathematical
task for the future.
Linear Spaces
Functional analysis uses linear spaces and equips them with additional structures.
This way, algebra, analysis, and geometry are combined with each
other. Physical systems with a finite (resp. infinite) number of degrees of freedom
are described by finite-dimensional (resp. infinite-dimensional) spaces.
Morphisms and isomorphisms in modern mathematics. In this
monograph, we will encounter many mathematical structures, for example,
linear spaces, Hilbert spaces, groups, algebras, topological spaces, manifolds,
and so on. Typical maps are called morphisms; they preserve the structure
under consideration. Isomorphic structures are always characterized by isomorphisms.
Let us explain this for linear spaces. Note that the definitions of the
standard notions “surjective, injective, and bijective” can be found in the
Appendix on page 931.
Linear morphisms. Linear spaces are characterized by the linear structure
based on linear combinations. Linear morphisms preserve linear structure.
Explicitly, let X and Y be complex linear spaces. By a linear morphism
(or a linear operator) A : X → Y we understand a map which preserves
linear combinations, that is,
A(αϕ + βψ) = αAϕ + βAψ
for all ϕ, ψ ∈ X and all complex numbers α, β.
Isomorphic linear spaces. By a linear isomorphism, we understand a
linear morphism A : X → Y which is bijective. Two complex linear spaces
X and Y are said to be isomorphic iff there exists a linear isomorphism
A : X → Y.
Nonlinear operators describe interactions in physics.
Therefore, nonlinear operators play a fundamental role in physics.
Terminology. In what follows, the symbol K denotes either R or C. By
a linear space over K we understand a real (resp. complex) linear space iff
K = R (resp. K = C).
The linear space L(X, Y ). The symbol L(X, Y ) denotes the set of all
linear operators
A : X → Y
where X and Y are linear spaces over K. Let A,B ∈ L(X, Y ) and α, β ∈ K.
The linear combination αA + βB is defined by setting
(αA + βB)ψ := αAψ + βBψ for all ψ ∈ X.
This way, the set L(X, Y ) becomes a linear space over K. We now choose
Y = X.
The algebra L(X,X). For two linear operators A,B : X → X define
the product AB by setting
(AB)ψ := A(Bψ) for all ψ ∈ X.
The set L(X,X) becomes an algebra. For all A,B,C ∈ L(X,X) and all
α, β ∈ K, this means the following.
• L(X,X) is a linear space over K.
• (αA + βB)C = αAC + βBC and C(αA + βB) = αCA + βCB (distributivity).
• (AB)C = A(BC) (associativity).
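In the finite-dimensional case, choosing a basis identifies L(X,X) with the algebra of complex (n × n)-matrices, the operator product becoming the matrix product. The following short Python sketch (an illustration added here, with arbitrary random matrices) checks the distributivity and associativity laws numerically:

import numpy as np

rng = np.random.default_rng(2)
n = 3
A, B, C = (rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)) for _ in range(3))
alpha, beta = 2.0 - 1.0j, 0.5 + 3.0j

print(np.allclose((alpha * A + beta * B) @ C, alpha * (A @ C) + beta * (B @ C)))   # distributivity
print(np.allclose(C @ (alpha * A + beta * B), alpha * (C @ A) + beta * (C @ B)))   # distributivity
print(np.allclose((A @ B) @ C, A @ (B @ C)))                                       # associativity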
Subgroup. A subset S of a group G is called a subgroup iff it is a group
with respect to the product on G. Again let X be a complex n-dimensional
Hilbert space with n = 1, 2, . . . Then the following are true.
• Unitary group U(X): The set U(X) of all unitary operators A : X → X
forms a subgroup of GL(X).
• Special unitary group SU(X): The set of all A ∈ U(X) with det(A) = 1
forms a subgroup of U(X) denoted by SU(X).
The group GL(1,R) consists of all nonzero real numbers.
The group U(1) consists of all complex numbers z with |z| = 1.
The real, general linear group GL(n,R) consists of all invertible real
(n × n)-matrices.
The real, special linear group SL(n,R) consists of all matrices A in
GL(n,R) with det(A) = 1.
The complex, general linear group GL(n,C) consists of all invertible
complex (n × n)-matrices.
The complex, special linear group SL(n,C) consists of all matrices A
in GL(n,C) with det(A) = 1.
The unitary group U(n) consists of all complex (n × n)-matrices A with
AA† = A†A = I. Such matrices are called unitary matrices. This is a
subgroup of GL(n,C).
The special unitary group SU(n) consists of all matrices A ∈ U(n) with
det(A) = 1.
The orthogonal group O(n) consists of all real (n × n)-matrices A with
AA^d = A^dA = I, where A^d denotes the transpose of A. Such matrices are
called orthogonal matrices. This is a subgroup of GL(n,R).
The special orthogonal group SO(n) consists of all matrices A ∈ O(n)
with det(A) = 1. In particular, O(1) = {1,−1} and SO(1) = {1}.
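The defining conditions of these matrix groups are easy to test numerically. The following Python sketch (our own illustration; the sample matrices are a standard rotation, a diagonal phase matrix, and a reflection) checks membership in O(n), SO(n), U(n), and SU(n):

import numpy as np

def is_unitary(A, tol=1e-12):
    return np.allclose(A.conj().T @ A, np.eye(A.shape[0]), atol=tol)

def is_orthogonal(A, tol=1e-12):
    return np.isrealobj(A) and np.allclose(A.T @ A, np.eye(A.shape[0]), atol=tol)

def has_unit_determinant(A, tol=1e-12):
    return np.isclose(np.linalg.det(A), 1.0, atol=tol)

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])           # rotation
P = np.diag([np.exp(1j * theta), np.exp(-1j * theta)])    # diagonal phase matrix
F = np.diag([1.0, -1.0])                                  # reflection

print(is_orthogonal(R), has_unit_determinant(R))   # True True  -> R lies in SO(2)
print(is_unitary(P), has_unit_determinant(P))      # True True  -> P lies in SU(2)
print(is_orthogonal(F), has_unit_determinant(F))   # True False -> F lies in O(2) but not in SO(2)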
If X is a complex n-dimensional Hilbert space with n = 1, 2, . . . , then we
have the following group isomorphisms:
• U(n) = U(X), SU(n) = SU(X), GL(n,C) = GL(X), SL(n,C) = SL(X).
Similarly, if X is a real n-dimensional Hilbert space, then we have the following
group isomorphisms:
• O(n) = U(X), SO(n) = SU(X), GL(n,R) = GL(X), SL(n,R) = SL(X).
A group G is called commutative (or Abelian) iff AB = BA for all A,B ∈ G.
Lie Algebras
Let X be a real linear space. The prototype of a Lie algebra is the set gl(X)
of all linear operators A : X → X. Define the Lie product
[A,B] := AB − BA.
Set L := gl(X). Then, for all A,B,C ∈ L and all real numbers α, β the
following are met.
(L1) Linearity: L is a real linear space.
(L2) Consistency: [A,B] ∈ L.
(L3) Anticommutativity: [B,A] = −[A,B].
(L4) Bilinearity: [αA + βB,C] = α[A,C] + β[B,C].
(L5) Jacobi identity: [[A,B], C] + [[B,C],A] + [[C,A],B] = 0.
A set L is called a real Lie algebra iff it is equipped with a uniquely determined
Lie product [A,B] for all elements A,B of L such that conditions (L1) through
(L5) are met. The Lie product [A,B] is also called the Lie bracket.
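Properties (L1)-(L4) are immediate for the commutator; the Jacobi identity (L5) follows by expanding all products. The following minimal Python sketch (an illustration added here, with random matrices) verifies (L5) numerically:

import numpy as np

def bracket(A, B):
    # Lie product (commutator) [A, B] = AB - BA
    return A @ B - B @ A

rng = np.random.default_rng(3)
n = 4
A, B, C = (rng.normal(size=(n, n)) for _ in range(3))

jacobi = bracket(bracket(A, B), C) + bracket(bracket(B, C), A) + bracket(bracket(C, A), B)
print(np.allclose(jacobi, np.zeros((n, n))))   # True: the Jacobi identity (L5) holds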
Lie algebra morphism. A map χ : L → M between two real (resp. complex)
Lie algebras is called a Lie algebra morphism iff it is a linear morphism
and it respects Lie products, that is,
χ([A,B]) = [χ(A), χ(B)] for all A,B ∈ L.
Lie subalgebra. A subset S of a Lie algebra L is called a Lie subalgebra
iff it is a Lie algebra with respect to the Lie product on L. For example,
let X be a complex n-dimensional Hilbert space with n = 1, 2, . . . Then the
following are true.
• Skew-adjoint operators: The set u(X) of all linear operators A : X → X
with A† = −A forms a Lie subalgebra of gl(X).
• Traceless skew-adjoint operators: The set of all A ∈ u(X) with tr(A) = 0
forms a Lie subalgebra of u(X) denoted by su(X).
Classification of morphisms. The following terminology is used in
mathematics for all kinds of morphisms:
• bijective morphisms are called isomorphisms if the inverse map is also a
morphism;
Furthermore, for linear morphisms (or group morphisms, Lie algebra morphisms),
we use the following convention:
• surjective morphisms are called epimorphisms;
• injective morphisms are called monomorphisms.
For example, two groups G and H are called isomorphic iff there exists a
group isomorphism χ :G →H.
Isomorphic groups and isomorphic Lie algebras describe the same
mathematics and physics.
Matrix Lie algebras. The following real Lie algebras with respect to
the Lie product [A,B] := AB − BA are frequently used.
(i) The Lie algebra gl(n,R) consists of all real (n × n)-matrices.
(ii) The Lie algebra sl(n,R) consists of all A ∈ gl(n,R) with tr(A) = 0.
(iii) The Lie algebra gl(n,C) consists of all complex (n × n)-matrices.
(iv) The Lie algebra sl(n,C) consists of all matrices A ∈ gl(n,C) with vanishing
trace, tr(A) = 0.
(v) The Lie algebra u(n) consists of all A ∈ gl(n,C) with A† = −A. These
matrices are called skew-adjoint.
(vi) The Lie algebra su(n) consists of all complex matrices A ∈ u(n) with
tr(A) = 0.
(vii) The Lie algebra o(n) consists of all A ∈ gl(n,R) with A^d = −A. These
matrices are called skew-symmetric.
(viii) The Lie algebra so(n) coincides with o(n).
For example, in order to prove that su(n) is a real Lie algebra, we have to
show that the following are true.
(a) If A,B ∈ su(n) and α, β ∈ R, then αA + βB ∈ su(n).
(b) If A,B ∈ su(n), then [A,B] ∈ su(n).
In addition, tr([A,B]) = tr(AB − BA) = tr(AB) − tr(BA) = 0, and
[A,B]† = B†A† − A†B† = BA − AB = −[A,B], so the commutator is again
skew-adjoint and traceless.
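The same computation can be checked numerically. The sketch below (our own illustration, using multiples of the Pauli matrices as standard elements of su(2)) verifies that real linear combinations and commutators of traceless skew-adjoint matrices are again traceless and skew-adjoint:

import numpy as np

sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]], dtype=complex)

A = 1j * sigma_x          # traceless and skew-adjoint, hence in su(2)
B = 1j * sigma_y

def in_su(M, tol=1e-12):
    # skew-adjoint (M^dagger = -M) and traceless
    return np.allclose(M.conj().T, -M, atol=tol) and abs(np.trace(M)) < tol

comb = 2.0 * A + 3.0 * B          # real linear combination, property (a)
comm = A @ B - B @ A              # commutator [A, B], property (b)

print(in_su(A), in_su(B))         # True True
print(in_su(comb), in_su(comm))   # True True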
If X is a complex n-dimensional Hilbert space with n = 1, 2, . . . , then we
have the following real Lie algebra isomorphisms:
u(n) = u(X), su(n) = su(X), gl(n,C) = gl(X), sl(n,C) = sl(X).
Similarly, if X is a real n-dimensional Hilbert space, then we have the following
real Lie algebra isomorphisms:
o(n) = so(n) = u(X), gl(n,R) = gl(X), sl(n,R) = sl(X).
Lie groups. The notion of Lie group combines the notion of group with
the notion of manifold. By definition, a finite-dimensional real manifold is
called a Lie group iff it is equipped with a group structure such that both
the multiplication map
(A,B) → AB
and the inversion map A → A^{-1} are smooth.
Lie’s linearization principle. The tangent space of a Lie group G at
the unit element can always be equipped with the structure of a Lie algebra
which is denoted by LG.
It is crucial for the theory of Lie groups that the Lie algebra LG
knows all about the local structure of the Lie group G.
This is the famous Lie linearization principle for Lie groups which we will
encounter quite often.
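A concrete instance of this linearization is the matrix exponential, which maps the Lie algebra su(2) into the Lie group SU(2). The following Python sketch (our own illustration; SciPy's expm computes the matrix exponential) checks that exp(A) is unitary with determinant 1 whenever A is traceless and skew-adjoint:

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(4)
a, b, c = rng.normal(size=3)

# A generic element of su(2): traceless and skew-adjoint.
A = np.array([[1j * a,       b + 1j * c],
              [-b + 1j * c, -1j * a    ]])

U = expm(A)                                        # the exponential map su(2) -> SU(2)
print(np.allclose(U.conj().T @ U, np.eye(2)))      # True: U is unitary
print(np.isclose(np.linalg.det(U), 1.0))           # True: det(U) = 1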
Lie group morphism. Let G and H be Lie groups. By a Lie group
morphism
f : G → H
we understand a map which is both a group morphism and a manifold morphism.
In other words, this is a smooth group morphism. The map f : G → H
is called a Lie group isomorphism iff it is both a group isomorphism and a
manifold isomorphism. In other words, this is a group isomorphism which is
a diffeomorphism, too.
Duhamel's principle
From Wikipedia, the free encyclopedia
In mathematics, and more specifically in partial differential equations, Duhamel's principle (also known as the homogenization principle) is a general method for obtaining solutions to inhomogeneous linear evolution equations like the heat equation, wave equation, and vibrating plate equation. It is named after Jean-Marie Duhamel, who first applied the principle to the inhomogeneous heat equation that models, for instance, the distribution of heat in a thin plate which is heated from beneath. For linear evolution equations without spatial dependency, such as a harmonic oscillator, Duhamel's principle reduces to the method of variation of parameters for solving linear inhomogeneous ordinary differential equations.[1]
The philosophy underlying Duhamel's principle is that it is possible to go from solutions of the Cauchy problem (or initial value problem) to solutions of the inhomogeneous problem. Consider, for instance, the example of the heat equation modeling the distribution of heat energy u in R^n. The initial value problem is
u_t(x,t) − Δu(x,t) = 0, (x,t) ∈ R^n × (0,∞), u(x,0) = g(x),
where g is the initial heat distribution. By contrast, the inhomogeneous problem for the heat equation,
u_t(x,t) − Δu(x,t) = f(x,t), (x,t) ∈ R^n × (0,∞), u(x,0) = 0,
corresponds to adding an external heat energy f(x,t)dt at each point. Intuitively, one can think of the inhomogeneous problem as a set of homogeneous problems each starting afresh at a different time slice t = t_0. By linearity, one can add up (integrate) the resulting solutions through time t_0 and obtain the solution for the inhomogeneous problem. This is the essence of Duhamel's principle.
General considerations
Formally, consider a linear inhomogeneous evolution equation for a function
u : D × (0,∞) → R,
with spatial domain D in R^n, of the form
u_t(x,t) − Lu(x,t) = f(x,t), (x,t) ∈ D × (0,∞), u|_{∂D} = 0, u(x,0) = 0,
where L is a linear differential operator that involves no time derivatives.
Duhamel's principle is, formally, that the solution to this problem is
u(x,t) = ∫_0^t (P^s f)(x,t) ds,
where P^s f is the solution of the problem
u_t − Lu = 0, u(x,s) = f(x,s), t > s.
Duhamel's principle also holds for linear systems (with vector-valued functions u), and this in turn furnishes a generalization to higher t derivatives, such as those appearing in the wave equation (see below). Validity of the principle depends on being able to solve the homogeneous problem in an appropriate function space and on the solution exhibiting reasonable dependence on parameters, so that the integral is well defined. Precise analytic conditions on u and f depend on the particular application.
Examples
Wave equation
The linear wave equation models the displacement u of an idealized dispersionless one-dimensional string, in terms of derivatives with respect to time t and space x:
u_tt(x,t) − c^2 u_xx(x,t) = f(x,t).
The function f(x,t), in natural units, represents an external force applied to the string at the position (x,t). In order to be a suitable physical model for nature, it should be possible to solve it for any initial state that the string is in, specified by its initial displacement and velocity:
u(x,0) = u_0(x), u_t(x,0) = v_0(x).
More generally, we should be able to solve the equation with data specified on any t = constant slice:
u(x,T) = u_T(x), u_t(x,T) = v_T(x).
To evolve a solution from any given time slice T to T+dT, the contribution of the force must be added to the solution. That contribution comes from changing the velocity of the string by f(x,T)dT. That is, to get the solution at time T+dT from the solution at time T, we must add to it a new (forward) solution of the homogeneous (no external forces) wave equation
v_tt − c^2 v_xx = 0 for t > T,
with the initial conditions
v(x,T) = 0, v_t(x,T) = f(x,T)dT.
A solution to this equation is achieved by straightforward integration:
v(x,t) = ( (1/(2c)) ∫_{x−c(t−T)}^{x+c(t−T)} f(ξ,T) dξ ) dT.
(The expression in parentheses is just (P^T f)(x,t) in the notation of the general method above.) So a solution of the original initial value problem is obtained by starting with the solution of the problem with the same prescribed initial values but with zero external force, and adding to it (integrating) the contributions of the added force over the time slices from 0 to t:
u(x,t) = u_hom(x,t) + ∫_0^t (1/(2c)) ∫_{x−c(t−T)}^{x+c(t−T)} f(ξ,T) dξ dT.
Constant-coefficient linear ODE
Duhamel's principle is the result that the solution to an inhomogeneous, linear, partial differential equation can be obtained by first finding the solution for a step input, and then superposing using Duhamel's integral. Suppose we have a constant coefficient, mth order inhomogeneous ordinary differential equation
P(∂_t) u(t) = F(t), u(t) = 0 for t ≤ 0,
where
P(x) = a_m x^m + ... + a_1 x + a_0, a_m ≠ 0.
We can reduce this to the solution of a homogeneous ODE using the following method. All steps are done formally, ignoring necessary requirements for the solution to be well defined.
First let G solve
P(∂_t) G = 0, G(0) = G'(0) = ... = G^{(m−2)}(0) = 0, G^{(m−1)}(0) = 1/a_m.
Define H(t) := G(t) χ_{[0,∞)}(t), with χ_{[0,∞)} being the characteristic function of the interval [0,∞). Then we have
P(∂_t) H = δ
in the sense of distributions. Therefore
u(t) = (H ∗ F)(t) = ∫_0^∞ G(s) F(t − s) ds
solves the ODE.
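To illustrate the recipe with the simplest nontrivial case (an example added here, not part of the quoted article), take P(∂_t) = ∂_t^2 + 1, i.e. u'' + u = F(t) with zero initial data. Then m = 2, a_m = 1, and the function G with G(0) = 0, G'(0) = 1 solving G'' + G = 0 is G(t) = sin t, so that u(t) = ∫_0^t sin(s) F(t − s) ds. The following Python sketch compares this Duhamel/convolution formula with a direct numerical integration of the ODE:

import numpy as np
from scipy.integrate import quad, solve_ivp

F = lambda t: np.cos(3.0 * t)                 # an arbitrary forcing term

def duhamel(t):
    # u(t) = int_0^t G(s) F(t - s) ds with G(s) = sin(s)
    value, _ = quad(lambda s: np.sin(s) * F(t - s), 0.0, t)
    return value

# Reference solution: integrate u'' + u = F(t), u(0) = u'(0) = 0, directly.
sol = solve_ivp(lambda t, y: [y[1], F(t) - y[0]], (0.0, 10.0), [0.0, 0.0],
                dense_output=True, rtol=1e-9, atol=1e-12)

for t in (1.0, 5.0, 10.0):
    print(t, duhamel(t), sol.sol(t)[0])       # the two values agree to high accuracy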
Constant-coefficient linear PDE
More generally, suppose we have a constant coefficient inhomogeneous partial differential equation
P(∂_t, D_x) u(t,x) = F(t,x),
where
D_x := −i ∂/∂x.
We can reduce this to the solution of a homogeneous ODE using the following method. All steps are done formally, ignoring necessary requirements for the solution to be well defined.
First, taking the Fourier transform in x we have
P(∂_t, ξ) û(t,ξ) = F̂(t,ξ).
Assume that P(∂_t, ξ) is an mth order ODE in t. Let a_m be the coefficient of the highest order term of P. Now for every ξ let G(t,ξ) solve
P(∂_t, ξ) G(t,ξ) = 0, ∂_t^k G(0,ξ) = 0 for k = 0, ..., m−2, ∂_t^{m−1} G(0,ξ) = 1/a_m.
Define H(t,ξ) := G(t,ξ) χ_{[0,∞)}(t). We then have
P(∂_t, ξ) H(t,ξ) = δ(t)
in the sense of distributions. Therefore u(t,x), the inverse Fourier transform in ξ of the convolution ∫_0^∞ G(s,ξ) F̂(t − s,ξ) ds,
solves the PDE (after transforming back to x).
References
Fritz John, Partial Differential Equations, 4th ed., Springer-Verlag, New York, 1982, ISBN 0387906096.
Volterra integral equation
From Wikipedia, the free encyclopedia
In mathematics, the Volterra integral equations are a special type of integral equations. They are divided into two groups referred to as the first and the second kind.
A linear Volterra equation of the first kind is
f(t) = ∫_{t_0}^{t} K(t,s) x(s) ds,
where f is a given function and x is an unknown function to be solved for. A linear Volterra equation of the second kind is
x(t) = f(t) + ∫_{t_0}^{t} K(t,s) x(s) ds.
In operator theory, and in Fredholm theory, the corresponding operators are called Volterra operators.
A linear Volterra integral equation is a convolution equation if
K(t,s) = K(t − s).
The function K in the integral is often called the kernel. Such equations can be analysed and solved by means of Laplace transform techniques.
The Volterra integral equations were introduced by Vito Volterra and then studied by Traian Lalescu in his 1908 thesis, Sur les équations de Volterra, written under the direction of Émile Picard. In 1911, Lalescu wrote the first book ever on integral equations.
Volterra integral equations find application in demography, the study of viscoelastic materials, and in insurance mathematics through the renewal equation.
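As a small numerical illustration (our own sketch, not part of the quoted article), the following Python code solves a linear Volterra equation of the second kind on a uniform grid with the trapezoidal rule. The kernel and right-hand side are chosen so that the exact solution is known: for K(t,s) = 1 and f(t) = 1 the equation x(t) = 1 + ∫_0^t x(s) ds is equivalent to x' = x, x(0) = 1, hence x(t) = e^t.

import numpy as np

def solve_volterra_second_kind(f, K, t_grid):
    # Solve x(t) = f(t) + int_{t_0}^{t} K(t,s) x(s) ds on a uniform grid by the
    # trapezoidal rule; x[i] appears on both sides, so we solve a linear equation for it.
    n = len(t_grid)
    h = t_grid[1] - t_grid[0]
    x = np.zeros(n)
    x[0] = f(t_grid[0])
    for i in range(1, n):
        acc = 0.5 * h * K(t_grid[i], t_grid[0]) * x[0]
        acc += h * sum(K(t_grid[i], t_grid[j]) * x[j] for j in range(1, i))
        diag = 0.5 * h * K(t_grid[i], t_grid[i])
        x[i] = (f(t_grid[i]) + acc) / (1.0 - diag)
    return x

t = np.linspace(0.0, 1.0, 101)
x = solve_volterra_second_kind(lambda t: 1.0, lambda t, s: 1.0, t)
print(np.max(np.abs(x - np.exp(t))))   # small discretization error of order h^2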
Dyson series
From Wikipedia, the free encyclopedia
In scattering theory, a part of mathematical physics, the Dyson series, formulated by Freeman Dyson, is a perturbative series, and each term is represented by Feynman diagrams. This series diverges asymptotically, but in quantum electrodynamics (QED) at second order the difference from experimental data is of the order of 10^{-10}. This close agreement holds because the coupling constant (also known as the fine structure constant) of QED is much less than 1. Notice that in this article Planck units are used, so that ħ = 1 (where ħ is the reduced Planck constant).
The Dyson operator
We suppose we have a Hamiltonian H which we split into a "free" part H_0 and an "interacting" part V, i.e., H = H_0 + V. We will work in the interaction picture here and assume units such that the reduced Planck constant is 1.
In the interaction picture, the evolution operator U defined by the equation
Ψ(t) = U(t, t_0) Ψ(t_0)
is called the Dyson operator.
We have
U(t, t_0) = e^{iH_0 t} e^{−iH(t − t_0)} e^{−iH_0 t_0},
and then (Tomonaga–Schwinger equation)
i (d/dt) U(t, t_0) = V(t) U(t, t_0), where V(t) := e^{iH_0 t} V e^{−iH_0 t}.
Thus:
U(t, t_0) = 1 − i ∫_{t_0}^{t} dt_1 V(t_1) U(t_1, t_0).
Derivation of the Dyson series
This leads to the following Neumann series:
U(t, t_0) = 1 + (−i) ∫_{t_0}^{t} dt_1 V(t_1) + (−i)^2 ∫_{t_0}^{t} dt_1 ∫_{t_0}^{t_1} dt_2 V(t_1) V(t_2) + ...
Here we have t_1 > t_2 > ... > t_n, so we can say that the fields are time ordered, and it is useful to introduce the time-ordering operator T, defined by
T{V(t_1) V(t_2)} := V(t_1) V(t_2) if t_1 > t_2, and V(t_2) V(t_1) otherwise,
and analogously for more than two factors.
We can now try to make this integration simpler, as in the following example. Assume that K is symmetric in its arguments and define (look at the integration limits)
I_n(t) := ∫_{t_0}^{t} dt_1 ∫_{t_0}^{t_1} dt_2 ... ∫_{t_0}^{t_{n−1}} dt_n K(t_1, t_2, ..., t_n).
The region of integration of the corresponding unrestricted integral over the cube [t_0, t]^n can be broken into n! sub-regions defined by t_1 > t_2 > ... > t_n, t_2 > t_1 > ... > t_n, etc. Due to the symmetry of K, the integral over each of these sub-regions is the same and equal, by definition, to I_n(t). So it is true that
∫_{t_0}^{t} dt_1 ∫_{t_0}^{t} dt_2 ... ∫_{t_0}^{t} dt_n K(t_1, t_2, ..., t_n) = n! I_n(t).
Returning to our previous integral, the following identity holds for the nth order term:
U_n = ((−i)^n / n!) ∫_{t_0}^{t} dt_1 ∫_{t_0}^{t} dt_2 ... ∫_{t_0}^{t} dt_n T{V(t_1) V(t_2) ... V(t_n)}.
Summing up all the terms, we obtain the Dyson series:
U(t, t_0) = Σ_{n=0}^{∞} ((−i)^n / n!) ∫_{t_0}^{t} dt_1 ... ∫_{t_0}^{t} dt_n T{V(t_1) ... V(t_n)} = T exp(−i ∫_{t_0}^{t} dτ V(τ)).
The Dyson series for wavefunctions
Then, going back to the wavefunction for t > t_0,
Ψ(t) = Σ_{n=0}^{∞} ((−i)^n / n!) ∫_{t_0}^{t} dt_1 ... ∫_{t_0}^{t} dt_n T{V(t_1) ... V(t_n)} Ψ(t_0).
Returning to the Schrödinger picture, for t_f > t_i, the transition amplitude reads
⟨ψ_f, t_f | ψ_i, t_i⟩ = Σ_{n=0}^{∞} (−i)^n ∫_{t_i}^{t_f} dt_1 ∫_{t_i}^{t_1} dt_2 ... ∫_{t_i}^{t_{n−1}} dt_n ⟨ψ_f | e^{−iH_0(t_f − t_1)} V e^{−iH_0(t_1 − t_2)} V ... V e^{−iH_0(t_n − t_i)} | ψ_i⟩.
The magic Dyson formula is quite natural from the physical
point of view; it follows from the superposition principle.
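To make the construction concrete, the following Python sketch (our own illustration with a two-level system; it is not part of the quoted article) compares the exact interaction-picture evolution operator U(t,0) = e^{iH_0 t} e^{−iHt} with the time-ordered exponential T exp(−i ∫_0^t V(τ) dτ), i.e. the summed Dyson series, approximated by an ordered product of short-time factors. Here H_0 = σ_z and the perturbation is V = g σ_x with a small coupling g.

import numpy as np
from scipy.linalg import expm

sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sigma_z = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
g = 0.1
H0 = sigma_z
V = g * sigma_x
H = H0 + V

def V_int(t):
    # interaction-picture potential V(t) = exp(i H0 t) V exp(-i H0 t)
    return expm(1j * H0 * t) @ V @ expm(-1j * H0 * t)

t_final, n_steps = 2.0, 2000
dt = t_final / n_steps

# Time-ordered exponential: ordered product of short-time factors, later times to the left.
U_dyson = np.eye(2, dtype=complex)
for k in range(n_steps):
    tk = (k + 0.5) * dt
    U_dyson = (np.eye(2, dtype=complex) - 1j * dt * V_int(tk)) @ U_dyson

# Exact interaction-picture evolution operator U(t, 0) = exp(i H0 t) exp(-i H t).
U_exact = expm(1j * H0 * t_final) @ expm(-1j * H * t_final)

print(np.max(np.abs(U_dyson - U_exact)))   # small, and shrinking as dt -> 0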