Quantum Field Theory I
Re: Quantum Field Theory I
Functionals generalize classical functions to systems with a finite or infinite
number of degrees of freedom.
Folklore
It was emphasized by Feynman (1918–1988) and Schwinger (1918–1994) in
the 1940s that it is very useful for quantum field theory to extend the classical
calculus due to Newton (1643–1727) and Leibniz (1646–1716) to functionals.
In mathematics, the differentiation of functionals and operators was introduced
by Fréchet (1878–1973), Gâteaux (1889–1914), and Volterra (1860–
1950) in about 1900. The goal was to give the calculus of variations a rigorous
basis. In the 1920s, functional integrals (Euclidean path integrals) were
introduced by Wiener (1894–1964) in order to mathematically describe Einstein’s
1905 approach to Brownian motion (theory of stochastic processes).
We are going to discuss the basic ideas.
Classical derivatives are generalized to functional derivatives; differentials
are linear functionals in modern mathematics.
Folklore
Let Z : X → C be a functional on the complex Hilbert space X. We write
Z = Z(J).
That is, to each element J of X we assign the complex number Z(J). In
quantum field theory, such functionals arise in a natural way. Prototypes are
the action, S(ψ), of a quantum field ψ and the generating functional Z for the
correlation functions (see Chap. 13). Then, Z(J) is the value of the generating
functional at the point J. Intuitively, the source function J describes an
external force acting on the physical system. The functional derivative δZ(J)/δJ
then tells us the response of the physical system under a small change of the
source. It is our goal to investigate the following generalizations:
• derivative ⇒ functional derivative;
• partial derivative ⇒ partial functional derivative;
• integral ⇒ functional integral.
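To make the first of these generalizations concrete, here is a small numerical sketch (our own illustration, not from the text): after discretizing the source J on a grid, a functional becomes an ordinary function of finitely many grid values, and the functional derivative differs from the partial derivative by a factor 1/Δx.

```python
import numpy as np

# Discretize the source J on a grid; a functional Z(J) becomes an ordinary
# function of the grid values J_1, ..., J_N.  This toy functional (an
# illustrative assumption, not one from the text) is Z(J) = ∫ J(x)^2 dx.
x = np.linspace(0.0, 1.0, 101)
dx = x[1] - x[0]
J = np.sin(np.pi * x)

def Z(J):
    return np.sum(J**2) * dx  # Riemann-sum approximation of the integral

# Analytically, δZ/δJ(x) = 2 J(x).  Numerically, the functional derivative
# at grid point k is (∂Z/∂J_k) / dx: perturbing one grid value changes the
# integral by dx times the density, hence the extra factor 1/dx.
k = 50
h = 1e-6
J_plus = J.copy(); J_plus[k] += h
numerical = (Z(J_plus) - Z(J)) / h / dx
exact = 2 * J[k]
print(numerical, exact)  # the two values agree to about 1e-6
```

The 1/dx factor is exactly what distinguishes the functional derivative from the partial derivative in the continuum limit.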
Notation. In mathematics, the following notions possess a precise meaning:
• functional derivative and partial functional derivative,
• directional derivative,
• variation,
• differential,
• infinitesimal transformation.
Partial Functional Derivatives
Here, one finds a method which requires only a simple use of the principles
of differential and integral calculus; above all I must call attention to the
fact that I have introduced in my calculations a new characteristic δ since
this method requires that the same quantities vary in two different ways.
Comte de Joseph Louis Lagrange, 1762
By generalizing Euler’s 1744 method, Lagrange (1736–1813) got the idea
for his remarkable formulas, where in a single line there is contained the
solution of all problems of analytic mechanics.
Carl Gustav Jacob Jacobi (1804–1851)
In the calculus of variations, the solutions of the principle of critical action
S(ψ) = critical!, ψ ∈ X
satisfy the so-called variational equation
δS(ψ)/δψ = 0.
This implies
δS(ψ)/δψ(x) = 0 for all indices x (7.61)
which represents the desired equation of motion for the field ψ. This equation
is also called the Euler–Lagrange equation.
We will show in this treatise that all of the fundamental field equations
in physics are of the type (7.61).
For example, this concerns the electromagnetic field, non-relativistic and relativistic
quantum mechanics, the Standard Model in particle physics, and the
theory of general relativity. The basic tool for introducing partial functional
derivatives is the notion of the density of a given functional; this generalizes
the classical mass density.
Infinitesimal Transformations
Infinitesimal symmetry transformations know much, but not all about
global symmetry transformations.
Folklore
In order to investigate the invariance of physical processes under symmetries,
physicists simplify the considerations by using infinitesimal transformations.
This theory was created by Sophus Lie (1849–1899) in about 1870. Let us
discuss some basic ideas in rigorous terms.
Roughly speaking, infinitesimal transformations are obtained by neglecting
terms of higher order than one.
The prototype of infinitesimal transformations are infinitesimal rotations.
For each smooth function f : R2 → R, the following four
conditions are equivalent.
(i) The function f is invariant under infinitesimal rotations.
(ii) The function f satisfies the Lie partial differential equation
xfy(x, y) − yfx(x, y) = 0 for all (x, y) ∈ R2. (7.70)
(iii) The function f is invariant under rotations.
(iv) In polar coordinates ϕ, r, the function f only depends on r.
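The equivalence can be checked symbolically on examples (a sketch with our own sample functions):

```python
import sympy as sp

# A radial function f = f(r), r² = x² + y², satisfies the Lie equation
# x f_y − y f_x = 0 of (7.70), while a non-radial function does not.
x, y = sp.symbols('x y', real=True)

f = sp.exp(-(x**2 + y**2))                     # radial: depends only on r
lie_f = x * sp.diff(f, y) - y * sp.diff(f, x)
print(sp.simplify(lie_f))                      # 0: invariant under rotations

g = x + y**2                                   # not radial
lie_g = x * sp.diff(g, y) - y * sp.diff(g, x)
print(sp.simplify(lie_g))                      # nonzero: not invariant
```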
Lack of global information. The theory of Lie matrix groups is fundamental
for elementary particle physics. This will be studied in the following
volumes. In Sect. 5.7.1 we have discussed the crucial fact that the two matrix
Lie groups SO(3) and SU(2) have isomorphic Lie algebras, and hence
they are locally isomorphic, but they are not globally isomorphic. In this
important case, the infinitesimal transformations do not know all about the
global transformations. It is quite remarkable that nature sees this difference
in terms of the electron spin.
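The Lie-algebra side of this statement can be verified numerically (a sketch; the sign convention [X_a, X_b] = Σ_c ε_{abc} X_c is an assumption, since conventions vary by text):

```python
import numpy as np

def comm(A, B):
    return A @ B - B @ A

# Structure constants ε_{abc}
eps = np.zeros((3, 3, 3))
for a, b, c in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[a, b, c], eps[a, c, b] = 1.0, -1.0

# so(3) generators: (J_a)_{bc} = -ε_{abc}
J = [-eps[a] for a in range(3)]

# su(2) generators: Pauli matrices over 2i (traceless, anti-Hermitian)
sigma = [np.array([[0, 1], [1, 0]], complex),
         np.array([[0, -1j], [1j, 0]], complex),
         np.array([[1, 0], [0, -1]], complex)]
S = [m / 2j for m in sigma]

# Both sets satisfy [X_a, X_b] = Σ_c ε_{abc} X_c, so the Lie algebras
# so(3) and su(2) are isomorphic -- even though SO(3) and SU(2) are not:
# in SU(2) a rotation by 2π gives -I, the double cover seen by the spin.
for a in range(3):
    for b in range(3):
        assert np.allclose(comm(J[a], J[b]),
                           sum(eps[a, b, c] * J[c] for c in range(3)))
        assert np.allclose(comm(S[a], S[b]),
                           sum(eps[a, b, c] * S[c] for c in range(3)))
print("so(3) and su(2): identical structure constants")
```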
The Discrete Feynman Path Integral
I found myself thinking of a large number of integrals, one after the other
in sequence. In the integrand was the product of the exponentials, which,
of course, was the exponential of the sum of terms like εL. Now L is
the Lagrangian and ε is like the time interval dt, so that if you took a
sum of such terms, that's exactly like an integral. That's like Riemann's
formula for the integral ∫ L dt; you just take the value at each point and
add them together. We are to take the limit as ε → 0, of course. Therefore,
the connection between the wave function of one instant and the wave
function of another instant a finite time later could be obtained by an infinite
number of integrals of exp(iS/ħ), where S is the action expression. . . This
led later on to the idea of the amplitude of the path; that for each possible
way that the particle can go from one point to another in space-time,
there's an amplitude exp(iS/ħ), where S is the action along the path. Amplitudes
from various paths superpose by addition. This then is another, a
third way, of describing quantum mechanics, which looks quite different
than that of Heisenberg or Schrödinger, but is equivalent to them.
Richard Feynman (1918–1988)
Nobel Lecture in 1965
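Feynman's limiting procedure can be imitated numerically in the Euclidean (Wiener) version mentioned earlier, where the oscillatory weight exp(iS/ħ) becomes a damped Gaussian. This is our own sketch with the assumed units m = ħ = 1, not a formula from the text:

```python
import numpy as np

# Discrete Euclidean path integral: slicing time into N steps, the propagator
# is an (N−1)-fold iterated integral over intermediate positions, each step
# weighted by the short-time kernel k_ε(y, x) = exp(−(y−x)²/2ε)/√(2πε).
N = 20
T = 1.0
eps = T / N
x = np.linspace(-8.0, 8.0, 401)
dx = x[1] - x[0]

k_eps = np.exp(-(x[:, None] - x[None, :])**2 / (2 * eps)) / np.sqrt(2 * np.pi * eps)

# Start from a delta function at x_a = 0 and apply the kernel N times;
# each application is one more intermediate integral, done as a Riemann sum.
psi = np.zeros_like(x)
psi[200] = 1.0 / dx                     # discrete δ(x − x_a), x_a = x[200] = 0
for _ in range(N):
    psi = k_eps @ psi * dx

# The N-fold composition reproduces the exact finite-time kernel
# K(x_b, T; 0, 0) = exp(−x_b²/2T)/√(2πT): Gaussians compose, variances add.
exact = np.exp(-x**2 / (2 * T)) / np.sqrt(2 * np.pi * T)
print(psi[220], exact[220])             # both ≈ 0.29 at x_b = 0.8
```

Refining the slicing (larger N) changes nothing here, since the Gaussian kernels compose exactly; in the interacting case the limit N → ∞ is where the actual work lies.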
The magic Dyson formula for the S-matrix and the magic Gell-Mann–
Low formula for the causal correlation functions are the key to applying
perturbation theory in quantum field theory.
Folklore
Last edited by 一星 on 2014-06-23, 03:58; edited 1 time in total
一星 – Posts: 3787
Join date: 2013-08-07
Re: Quantum Field Theory I
Self-adjoint operator
In mathematics, a self-adjoint operator on a finite-dimensional inner product space is an operator that equals its own adjoint; equivalently, the matrix representing a self-adjoint operator is a Hermitian matrix, that is, a matrix equal to its own conjugate transpose. By the finite-dimensional spectral theorem, there always exists an orthonormal basis in which a self-adjoint operator is represented by a real-valued diagonal matrix.
Quantum mechanics
In quantum mechanics, a self-adjoint operator is also called a Hermitian operator: an operator that equals its own Hermitian conjugate. Given an operator A and its adjoint A†, if A = A†, then A is called a Hermitian operator. The expectation values of Hermitian operators represent the physical quantities of quantum mechanics.
Observables
Every physical quantity obtained by measurement is real-valued, so the expectation value of an observable O must be real:
⟨O⟩ = ⟨O⟩*.
This relation holds for every quantum state |ψ⟩:
⟨ψ|O|ψ⟩ = ⟨ψ|O|ψ⟩*.
By the definition of the adjoint operator, ⟨ψ|O|ψ⟩* = ⟨ψ|O†|ψ⟩. Therefore,
O = O†.
This is precisely the definition of a Hermitian operator; hence every operator that represents an observable is Hermitian.
Observables such as position, momentum, angular momentum, and spin are represented by self-adjoint operators acting on a Hilbert space. The Hamiltonian operator is a particularly important self-adjoint operator, expressed as
Ĥψ = −(ħ²/2m)∇²ψ + Vψ,
where ψ is the particle's wave function, ħ is the reduced Planck constant, m is the mass, and V is the potential. The Hamiltonian represents the total energy of the particle, an observable.
Momentum is an observable, so the momentum operator should also be Hermitian. Working in position space with the wave function ψ(x) of the quantum state, integration by parts shows that ⟨ψ|p̂|ψ⟩* = ⟨ψ|p̂|ψ⟩ for every quantum state; so the momentum operator is indeed a Hermitian operator.
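The finite-dimensional statements above are easy to check numerically; a minimal sketch with a random 4 × 4 Hermitian matrix:

```python
import numpy as np

# A Hermitian matrix (A equal to its conjugate transpose) has real
# eigenvalues, an orthonormal eigenbasis, and real expectation values.
rng = np.random.default_rng(0)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A = (B + B.conj().T) / 2                 # Hermitian by construction
assert np.allclose(A, A.conj().T)

w, U = np.linalg.eigh(A)                 # spectral theorem: A = U diag(w) U†
assert np.allclose(w.imag, 0)            # real eigenvalues
assert np.allclose(U.conj().T @ U, np.eye(4))       # orthonormal eigenbasis
assert np.allclose(U @ np.diag(w) @ U.conj().T, A)  # diagonalization

psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)               # a normalized state vector
expectation = psi.conj() @ A @ psi
print(expectation.imag)                  # ≈ 0: expectation values are real
```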
Re: Quantum Field Theory I
The Rigorous Response Approach to Finite Quantum Fields
Quantum field theory is based on only a few basic principles which we call
magic formulas.
Folklore
Basic Ideas
Classical fields are described by the principle of critical action. The universal
response approach to quantum field theory combines the principle of
critical action with the random aspects of infinite-dimensional Gaussian integrals.
The idea is to study the response of the quantum field under the
influence of an external source. In this section, we will rigorously study a
finite-dimensional variant of the response approach. The notation will be
chosen in such a way that the formal continuum limit can be carried out
in a straightforward manner in Chap. 14. The basic idea is to introduce the
so-called extended quantum action functional Z(J, ϕ) which depends on both
the quantum field ϕ and the external source J and to derive two magic formulas,
namely,
(QA) the quantum action reduction formula, and
(LSZ) the Lehmann–Symanzik–Zimmermann reduction formula on the relation
between correlation functions
Cn(x1, . . . , xn)
and scattering functions Sn(x1, . . . , xn).
The main steps of our approach are the following ones.
(i) The principle of critical action. The point is that the classical action
functional
S[ϕ, J]
depends on both the classical quantum field ϕ and the external source
J. This yields the Euler–Lagrange equation of motion of the classical
quantum field under the influence of the external source. See (7.122) on
page 446.
(ii) The response operator. The linearized (and regularized) Euler–Lagrange
equation determines the response operator Rε which describes the response
ϕ = RεJ
of the interaction-free classical quantum field ϕ to the external source
J. See (7.125) on page 446. The small parameter ε > 0 regularizes the
response. It turns out that the response operator Rε knows all about the
full quantum field. To this end, we will use the two magic formulas (QA)
and (LSZ).
(iii) The quantum action reduction formula (QA). The response operator Rε
determines the free generating functional
Zfree(J, ϕ).
The formula (QA) tells us how the corresponding generating functional
Z(J, ϕ)
of the interacting quantum field can be obtained from the free functional
Zfree(J, ϕ) by functional differentiation. See (7.127) on page 448. Note
that the key formula (QA) is valid in each order of perturbation theory
with respect to the coupling constant κ.
The magic formula (QA) describes the quantization of the classical
quantum field ϕ.
(iv) The n-point correlation function Cn. It is our philosophy that
The main properties of a quantum field are described by the sequence
of correlation functions C2, C4, . . .
These functions are obtained from Z(J, 0) by using functional differentiation
at the point J = 0. See (7.129) on page 448.
(v) The n-point scattering functions Sn. These functions are obtained by
applying functional differentiation to Z(0, ϕ) at the point ϕ = 0. See
(7.130) on page 449. The scattering functions know all about scattering
processes.
(vi) The LSZ reduction formula. This fundamental formula tells us how to
compute the n-point scattering function Sn by means of the n-point correlation
function Cn.
(vii) The local quantum action principle. The formula (QA) is the solution
formula to the Dyson–Schwinger equation (7.133) on page 452 which is
called the local quantum action principle.
We will use finite functional integrals in order to derive rigorously the magic
formulas (QA) and (LSZ) by using the quite natural global quantum action
principle.
The global quantum action principle is based on an averaging over
the classical fields where the statistical weight e^{iS[ϕ,J]/ħ} depends on
the classical action S[ϕ, J].
The explicit formulation can be found in Sect. 7.24.5 on page 447. However,
the response approach can also be formulated in such a way that functional integrals
do not appear explicitly. This is important for the infinite-dimensional
approach, since functional integrals are not well-defined in infinite dimensions.
The basic idea is then to start with the definition of the extended
quantum action functional Z = Z(J, ϕ) and to define the correlation
functions Cn and the scattering functions Sn as functional derivatives
of Z(J, ϕ).
In what follows, we will investigate the basic ideas sketched in (i) through
(vii) above in a rigorous setting. The translation to quantum fields with an
infinite number of degrees of freedom will be studied in Chap. 14 (response
approach) and Chap. 15 (operator approach).
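A minimal numerical model of steps (iii)–(iv) above (our own Euclidean toy illustration, with the oscillatory weight replaced by a damped Gaussian so the integral converges):

```python
import numpy as np

# Finite-dimensional generating functional: with a positive definite matrix A
# as the quadratic part of the action,
#     Z(J) = ∫ dφ exp(−½ φᵀAφ + Jᵀφ),
# differentiating Z(J)/Z(0) twice at J = 0 gives the 2-point correlation
# ⟨φ_i φ_j⟩ = (A⁻¹)_ij -- the response operator philosophy in two dimensions.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

g = np.linspace(-6.0, 6.0, 241)          # quadrature grid for φ = (φ1, φ2)
dphi = g[1] - g[0]
P1, P2 = np.meshgrid(g, g, indexing="ij")

def Z(J):
    S = 0.5 * (A[0, 0]*P1**2 + 2*A[0, 1]*P1*P2 + A[1, 1]*P2**2)
    return np.sum(np.exp(-S + J[0]*P1 + J[1]*P2)) * dphi**2

Z0 = Z(np.zeros(2))
h = 1e-3

def C2(i, j):
    # 2-point correlation via a central second difference in the sources
    ei = np.eye(2)[i] * h
    ej = np.eye(2)[j] * h
    return (Z(ei + ej) - Z(ei - ej) - Z(ej - ei) + Z(-ei - ej)) / (4*h*h*Z0)

C = np.array([[C2(0, 0), C2(0, 1)],
              [C2(1, 0), C2(1, 1)]])
print(C)                                  # ≈ inverse of A
print(np.linalg.inv(A))
```

The correlation matrix is the inverse of the quadratic form of the action, which is the finite-dimensional shadow of the statement that the response operator knows all about the free quantum field.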
Correlation function (quantum field theory)
From Wikipedia, the free encyclopedia
In quantum field theory, the (real space) n-point correlation function is defined as the functional average (functional expectation value) of a product of field operators at different positions,
Cn(x1, . . . , xn) = ⟨φ(x1) · · · φ(xn)⟩.
For time-dependent correlation functions, the time-ordering operator T is included.
Correlation functions are also called simply correlators. Sometimes, the phrase Green's function is used not only for two-point functions, but for any correlator.
Last edited by 一星 on 2014-06-25, 03:35; edited 1 time in total
Re: Quantum Field Theory I
LSZ reduction formula
From Wikipedia, the free encyclopedia
In quantum field theory, the LSZ reduction formula is a method to calculate S-matrix elements (the scattering amplitudes) from the time-ordered correlation functions of a quantum field theory. It is a step of the path that starts from the Lagrangian of some quantum field theory and leads to the prediction of measurable quantities. It is named after the three German physicists Harry Lehmann, Kurt Symanzik and Wolfhart Zimmermann.
Although the LSZ reduction formula cannot handle bound states, massless particles and topological solitons, it can be generalized to cover bound states by the use of composite fields, which are often nonlocal. Furthermore, the method, or variants thereof, have turned out to be fruitful in other fields of theoretical physics as well. For example, in statistical physics it can be used to obtain a particularly general formulation of the fluctuation–dissipation theorem.
In and Out fields
S-matrix elements are amplitudes of transitions between in states and out states. An in state describes the state of a system of particles which, in a far away past, before interacting, were moving freely with definite momenta {p}, and, conversely, an out state describes the state of a system of particles which, long after interaction, will be moving freely with definite momenta {p}.
In and out states are states in the Heisenberg picture, so they should not be thought of as describing particles at a definite time, but rather as describing the system of particles in its entire evolution, so that the S-matrix element:
is the probability amplitude for a set of particles which were prepared with definite momenta {p} to interact and be measured later as a new set of particles with momenta {q}.
The easy way to build in and out states is to seek appropriate field operators that provide the right creation and annihilation operators. These fields are called respectively in and out fields.
Just to fix ideas, suppose we deal with a Klein–Gordon field that interacts in some way which doesn't concern us:
The interaction may contain a self-interaction gφ3 or an interaction with other fields, like a Yukawa interaction. From this Lagrangian, using the Euler–Lagrange equations, the equation of motion follows:
where, if the interaction does not contain derivative couplings:
We may expect the in field to resemble the asymptotic behaviour of the interacting field as x0 → −∞, making the assumption that in the far away past the interaction described by the current j0 is negligible, as particles are far from each other. This hypothesis is named the adiabatic hypothesis. However, self-interaction never fades away and, besides many other effects, it causes a difference between the Lagrangian mass m0 and the physical mass m of the φ boson. This fact must be taken into account by rewriting the equation of motion as follows:
This equation can be solved formally using the retarded Green's function of the Klein–Gordon operator:
allowing us to split interaction from asymptotic behaviour. The solution is:
The factor √Z is a normalization factor that will come in handy later; the field φin is a solution of the homogeneous equation associated with the equation of motion:
and hence is a free field which describes an incoming unperturbed wave, while the last term of the solution gives the perturbation of the wave due to interaction.
The field φin is indeed the in field we were seeking, as it describes the asymptotic behaviour of the interacting field as x0 → −∞, though this statement will be made more precise later. It is a free scalar field, so it can be expanded in plane waves:
where:
The inverse function for the coefficients in terms of the field can be easily obtained and put in the elegant form:
where:
The Fourier coefficients satisfy the algebra of creation and annihilation operators:
and they can be used to build in states in the usual way:
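The displayed formulas in this passage were lost in copying. In one common normalization (an assumption on our part, since conventions differ between textbooks), the mode expansion of the free in field, its mode algebra, and the in states read:

```latex
\varphi_{\mathrm{in}}(x)
  = \int \frac{d^3k}{(2\pi)^3\, 2\omega_k}
    \left[ a_{\mathrm{in}}(\mathbf{k})\, e^{-ik\cdot x}
         + a^{\dagger}_{\mathrm{in}}(\mathbf{k})\, e^{ik\cdot x} \right],
  \qquad \omega_k = \sqrt{\mathbf{k}^2 + m^2},
\\[4pt]
[\, a_{\mathrm{in}}(\mathbf{k}),\, a^{\dagger}_{\mathrm{in}}(\mathbf{k}')\,]
  = (2\pi)^3\, 2\omega_k\, \delta^3(\mathbf{k}-\mathbf{k}'),
\qquad
|\mathbf{k}_1,\dots,\mathbf{k}_n\ \mathrm{in}\rangle
  = a^{\dagger}_{\mathrm{in}}(\mathbf{k}_1)\cdots
    a^{\dagger}_{\mathrm{in}}(\mathbf{k}_n)\, |0\rangle .
```

Other normalizations rescale the measure and the delta function but lead to the same physics.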
The relation between the interacting field and the in field is not very simple to use, and the presence of the retarded Green's function tempts us to write something like:
implicitly making the assumption that all interactions become negligible when particles are far away from each other. Yet the current j(x) also contains self-interactions like those producing the mass shift from m0 to m. These interactions do not fade away as particles drift apart, so much care must be used in establishing asymptotic relations between the interacting field and the in field.
The correct prescription, as developed by Lehmann, Symanzik and Zimmermann, requires two normalizable states, and a normalizable solution f(x) of the Klein–Gordon equation. With these pieces one can state a correct and useful but very weak asymptotic relation:
The second member is indeed independent of time, as can be shown by differentiating and remembering that both φin and f satisfy the Klein–Gordon equation.
With appropriate changes the same steps can be followed to construct an out field that builds out states. In particular the definition of the out field is:
where Δadv(x − y) is the advanced Green's function of the Klein–Gordon operator. The weak asymptotic relation between the out field and the interacting field is:
The reduction formula for scalars
The asymptotic relations are all that is needed to obtain the LSZ reduction formula. For future convenience we start with the matrix element:
which is slightly more general than an S-matrix element. Indeed, it is the expectation value of the time-ordered product of a number of fields between an out state and an in state. The out state can contain anything from the vacuum to an undefined number of particles, whose momenta are summarized by the index β. The in state contains at least a particle of momentum p, and possibly many others, whose momenta are summarized by the index α. If there are no fields in the time-ordered product, then it is obviously an S-matrix element. The particle with momentum p can be 'extracted' from the in state by use of a creation operator:
With the assumption that no particle with momentum p is present in the out state, that is, we are ignoring forward scattering, we can write:
because acting on the left gives zero. Expressing the creation operators in terms of in and out fields, we have:
Now we can use the asymptotic condition to write:
Then we notice that the field φ(x) can be brought inside the time-ordered product, since it appears on the right when x0 → −∞ and on the left when x0 → ∞:
In the following, x dependence in the time-ordered product is what matters, so we set:
It's easy to show by explicitly carrying out the time integration that:
so that, by explicit time derivation, we have:
By its definition we see that  fp (x) is a solution of the Klein–Gordon equation, which can be written as:
Substituting into the expression for and integrating by parts, we arrive at:
That is:
Starting from this result, and following the same path another particle can be extracted from the in state, leading to the insertion of another field in the time-ordered product. A very similar routine can extract particles from the out state, and the two can be iterated to get vacuum both on right and on left of the time-ordered product, leading to the general formula:
Which is the LSZ reduction formula for Klein–Gordon scalars. It gains a much better looking aspect if it is written using the Fourier transform of the correlation function:
Using the inverse transform to substitute in the LSZ reduction formula, with some effort, the following result can be obtained:
Leaving aside normalization factors, this formula asserts that S-matrix elements are the residues of the poles that arise in the Fourier transform of the correlation functions as four-momenta are put on-shell.
Reduction formula for fermions[edit]
[/ltr][/size]
[size][ltr]
Field strength normalization[edit]
The reason of the normalization factor Z in the definition of in and outfields can be understood by taking that relation between the vacuum and a single particle state with four-moment on-shell:
Remembering that both φ and φin are scalar fields with their lorentz transform according to:
where Pμ is the four-moment operator, we can write:
Applying the Klein–Gordon operator ∂2 + m2 on both sides, remembering that the four-moment p is on-shell and that Δret is the Green's function of the operator, we obtain:
So we arrive to the relation:
which accounts for the need of the factor Z. The in field is a free field, so it can only connect one-particle states with the vacuum. That is, its expectation value between the vacuum and a many-particle state is null. On the other hand, the interacting field can also connect many-particle states to the vacuum, thanks to interaction, so the expectation values on the two sides of the last equation are different, and need a normalization factor in between. The right hand side can be computed explicitly, by expanding the in field in creation and annihilation operators:
Using the commutation relation between ain and we obtain:
leading to the relation:
by which the value of Z may be computed, provided that one knows how to compute .
See also[edit]
[/ltr][/size]
[size][ltr]
References[edit]
[/ltr][/size]
From Wikipedia, the free encyclopedia
In quantum field theory, the LSZ reduction formula is a method to calculate S-matrix elements (the scattering amplitudes) from the time-ordered correlation functions of a quantum field theory. It is one step on the path that leads from the Lagrangian of a quantum field theory to predictions of measurable quantities. It is named after the three German physicists Harry Lehmann, Kurt Symanzik and Wolfhart Zimmermann.
Although the LSZ reduction formula cannot handle bound states, massless particles and topological solitons, it can be generalized to cover bound states by use of composite fields, which are often nonlocal. Furthermore, the method, or variants thereof, have also turned out to be fruitful in other areas of theoretical physics. For example, in statistical physics it can be used to obtain a particularly general formulation of the fluctuation-dissipation theorem.
In and out fields
S-matrix elements are amplitudes of transitions between in states and out states. An in state describes a system of particles which, in the far past, before interacting, were moving freely with definite momenta {p}; conversely, an out state describes a system of particles which, long after interaction, will be moving freely with definite momenta {p}.
In and out states are states in the Heisenberg picture, so they should not be thought of as describing particles at a definite time, but rather as describing the system of particles in its entire evolution, so that the S-matrix element:

S_fi = ⟨{q} out | {p} in⟩

is the probability amplitude for a set of particles which were prepared with definite momenta {p} to interact and later be measured as a new set of particles with momenta {q}.
The easiest way to build in and out states is to seek appropriate field operators that provide the right creation and annihilation operators. These fields are called, respectively, the in and out fields.
Just to fix ideas, suppose we deal with a Klein–Gordon field that interacts in some way which doesn't concern us:

ℒ = ½ ∂μφ ∂μφ − ½ m₀² φ² + ℒint

ℒint may contain a self-interaction gφ³ or an interaction with other fields, like a Yukawa interaction gφψ̄ψ. From this Lagrangian, using the Euler–Lagrange equations, the equation of motion follows:

(∂² + m₀²) φ(x) = j₀(x)

where, if ℒint does not contain derivative couplings:

j₀(x) = ∂ℒint / ∂φ(x)

We may expect the in field to resemble the asymptotic behaviour of the interacting field as x⁰ → −∞, under the assumption that in the far past the interaction described by the current j₀ is negligible, as the particles are far from each other. This hypothesis is named the adiabatic hypothesis. However, self-interaction never fades away and, besides many other effects, it causes a difference between the Lagrangian mass m₀ and the physical mass m of the φ boson. This fact must be taken into account by rewriting the equation of motion as follows:[citation needed]

(∂² + m²) φ(x) = j₀(x) + (m² − m₀²) φ(x) ≡ j(x)

This equation can be solved formally using the retarded Green's function Δret of the Klein–Gordon operator ∂² + m²:

(∂² + m²) Δret(x) = δ⁴(x),   Δret(x) = 0 for x⁰ < 0

allowing us to split the interaction from the asymptotic behaviour. The solution is:

φ(x) = √Z φin(x) + ∫ d⁴y Δret(x − y) j(y)

The factor √Z is a normalization factor that will come in handy later; the field φin is a solution of the homogeneous equation associated with the equation of motion:

(∂² + m²) φin(x) = 0

and hence is a free field which describes an incoming unperturbed wave, while the last term of the solution gives the perturbation of the wave due to the interaction.
The field φin is indeed the in field we were seeking, as it describes the asymptotic behaviour of the interacting field as x⁰ → −∞, though this statement will be made more precise later. It is a free scalar field, so it can be expanded in plane waves:

φin(x) = ∫ d³k [ fk(x) ain(k) + fk*(x) a†in(k) ]

where:

fk(x) = e^(−ik·x) / [(2π)³ 2ωk]^(1/2),   k⁰ = ωk = √(|k|² + m²)

The inverse relation for the coefficients in terms of the field can easily be obtained and put in the elegant form:

ain(k) = i ∫ d³x fk*(x) ↔∂₀ φin(x)

where:

g ↔∂₀ h ≡ g ∂₀h − (∂₀g) h

The Fourier coefficients satisfy the algebra of creation and annihilation operators:

[ain(p), ain(q)] = 0,   [ain(p), a†in(q)] = δ³(p − q)

and they can be used to build in states in the usual way:

|k₁, …, kₙ in⟩ = a†in(k₁) ⋯ a†in(kₙ) |0⟩
The relation between the interacting field and the in field is not very simple to use, and the presence of the retarded Green's function tempts us to write something like:

φ(x) → √Z φin(x)   as x⁰ → −∞

implicitly making the assumption that all interactions become negligible when particles are far away from each other. Yet the current j(x) also contains self-interactions, like those producing the mass shift from m₀ to m. These interactions do not fade away as particles drift apart, so much care must be taken in establishing asymptotic relations between the interacting field and the in field.
The correct prescription, as developed by Lehmann, Symanzik and Zimmermann, requires two normalizable states |α⟩ and |β⟩, and a normalizable solution f(x) of the Klein–Gordon equation (∂² + m²) f(x) = 0. With these pieces one can state a correct and useful but very weak asymptotic relation:

lim_(x⁰ → −∞) ⟨α| ∫ d³x f(x) ↔∂₀ φ(x) |β⟩ = √Z ⟨α| ∫ d³x f(x) ↔∂₀ φin(x) |β⟩

The second member is indeed independent of time, as can be shown by differentiating with respect to time and remembering that both φin and f satisfy the Klein–Gordon equation.
With appropriate changes the same steps can be followed to construct an out field that builds out states. In particular, the definition of the out field is:

φ(x) = √Z φout(x) + ∫ d⁴y Δadv(x − y) j(y)

where Δadv(x − y) is the advanced Green's function of the Klein–Gordon operator. The weak asymptotic relation between the out field and the interacting field is:

lim_(x⁰ → +∞) ⟨α| ∫ d³x f(x) ↔∂₀ φ(x) |β⟩ = √Z ⟨α| ∫ d³x f(x) ↔∂₀ φout(x) |β⟩
The reduction formula for scalars
The asymptotic relations are all that is needed to obtain the LSZ reduction formula. For future convenience we start with the matrix element:

M = ⟨β out| T[φ(y₁) ⋯ φ(yₙ)] |α p in⟩

which is slightly more general than an S-matrix element. Indeed, M is the expectation value of the time-ordered product of a number of fields between an out state and an in state. The out state can contain anything from the vacuum to an undefined number of particles, whose momenta are summarized by the index β. The in state contains at least a particle of momentum p, and possibly many others, whose momenta are summarized by the index α. If there are no fields in the time-ordered product, then M is obviously an S-matrix element. The particle with momentum p can be 'extracted' from the in state by use of a creation operator:

M = ⟨β out| T[φ(y₁) ⋯ φ(yₙ)] a†in(p) |α in⟩
With the assumption that no particle with momentum p is present in the out state, that is, we are ignoring forward scattering, we can write:

M = ⟨β out| T[φ(y₁) ⋯ φ(yₙ)] a†in(p) − a†out(p) T[φ(y₁) ⋯ φ(yₙ)] |α in⟩

because a†out(p) acting on the left gives zero. Expressing the creation operators in terms of the in and out fields, we have:

M = −i ∫ d³x fp(x) ↔∂₀ ⟨β out| T[φ(y₁) ⋯ φ(yₙ)] φin(x) − φout(x) T[φ(y₁) ⋯ φ(yₙ)] |α in⟩

Now we can use the asymptotic condition to write:

M = −(i/√Z) { lim_(x⁰ → −∞) ∫ d³x fp(x) ↔∂₀ ⟨β out| T[φ(y₁) ⋯ φ(yₙ)] φ(x) |α in⟩ − lim_(x⁰ → +∞) ∫ d³x fp(x) ↔∂₀ ⟨β out| φ(x) T[φ(y₁) ⋯ φ(yₙ)] |α in⟩ }

Then we notice that the field φ(x) can be brought inside the time-ordered product, since it appears on the right when x⁰ → −∞ and on the left when x⁰ → ∞:

M = (i/√Z) ( lim_(x⁰ → +∞) − lim_(x⁰ → −∞) ) ∫ d³x fp(x) ↔∂₀ ⟨β out| T[φ(x) φ(y₁) ⋯ φ(yₙ)] |α in⟩
In the following, the x dependence in the time-ordered product is what matters, so we set:

⟨β out| T[φ(x) φ(y₁) ⋯ φ(yₙ)] |α in⟩ ≡ η(x)

It's easy to show by explicitly carrying out the time integration that:

( lim_(x⁰ → +∞) − lim_(x⁰ → −∞) ) ∫ d³x fp(x) ↔∂₀ η(x) = ∫ dx⁰ ∂₀ ∫ d³x fp(x) ↔∂₀ η(x)

so that, by explicit time differentiation, we have:

M = (i/√Z) ∫ d⁴x [ fp(x) ∂₀²η(x) − η(x) ∂₀²fp(x) ]

By its definition, fp(x) is a solution of the Klein–Gordon equation, which can be written as:

∂₀²fp(x) = (∇² − m²) fp(x)

Substituting into the expression for M and integrating by parts, we arrive at:

M = (i/√Z) ∫ d⁴x fp(x) ( ∂₀² − ∇² + m² ) η(x)

That is:

M = (i/√Z) ∫ d⁴x fp(x) ( ∂² + m² ) ⟨β out| T[φ(x) φ(y₁) ⋯ φ(yₙ)] |α in⟩
Starting from this result, and following the same path, another particle can be extracted from the in state, leading to the insertion of another field in the time-ordered product. A very similar routine can extract particles from the out state, and the two procedures can be iterated to obtain the vacuum both on the right and on the left of the time-ordered product, leading to the general formula:

⟨p₁, …, pₙ out | q₁, …, qₘ in⟩ = ∫ ∏ᵢ d⁴xᵢ ∏ⱼ d⁴yⱼ (i/√Z)^(n+m) fp₁*(x₁) ⋯ fpₙ*(xₙ) fq₁(y₁) ⋯ fqₘ(yₘ) (∂²_(x₁) + m²) ⋯ (∂²_(yₘ) + m²) ⟨0| T[φ(x₁) ⋯ φ(xₙ) φ(y₁) ⋯ φ(yₘ)] |0⟩

This is the LSZ reduction formula for Klein–Gordon scalars. It takes a much simpler form when written using the Fourier transform of the correlation function:

Γ(p₁, …, pₙ) = ∫ ∏ᵢ d⁴xᵢ e^(i pᵢ·xᵢ) ⟨0| T[φ(x₁) ⋯ φ(xₙ)] |0⟩
Using the inverse transform to substitute into the LSZ reduction formula, with some effort the following result can be obtained:

⟨p₁, …, pₙ out | q₁, …, qₘ in⟩ = ∏ᵢ [ i (m² − pᵢ²) / ( (2π)^(3/2) (2ωpᵢ)^(1/2) √Z ) ] ∏ⱼ [ i (m² − qⱼ²) / ( (2π)^(3/2) (2ωqⱼ)^(1/2) √Z ) ] Γ(p₁, …, pₙ; −q₁, …, −qₘ)

where Γ denotes the Fourier transform of the correlation function and the expression is understood in the limit in which all four-momenta go on-shell. Leaving aside normalization factors, this formula asserts that S-matrix elements are the residues of the poles that arise in the Fourier transform of the correlation functions as the four-momenta are put on-shell.
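In the convention of Peskin and Schroeder, Section 7.2 (cited in the references below), this pole-and-residue statement can be written as:

```latex
\prod_{i=1}^{n} \int d^4x_i\, e^{i p_i \cdot x_i}
\prod_{j=1}^{m} \int d^4y_j\, e^{-i q_j \cdot y_j}\,
\langle 0 |\, \mathrm{T}\{\phi(x_1) \cdots \phi(x_n)\,
\phi(y_1) \cdots \phi(y_m)\} \,| 0 \rangle
\;\sim\;
\prod_{i=1}^{n} \frac{\sqrt{Z}\, i}{p_i^2 - m^2 + i\varepsilon}\;
\prod_{j=1}^{m} \frac{\sqrt{Z}\, i}{q_j^2 - m^2 + i\varepsilon}\;
\langle p_1 \cdots p_n\ \mathrm{out} \,|\, q_1 \cdots q_m\ \mathrm{in} \rangle
```

valid in the limit in which each pᵢ⁰ and qⱼ⁰ approaches its on-shell energy: the S-matrix element is the coefficient of the simultaneous multi-particle pole.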
Field strength normalization
The reason for the normalization factor Z in the definition of the in and out fields can be understood by taking the relation between the vacuum and a single-particle state |p⟩ with four-momentum on-shell:

⟨0| φ(x) |p⟩ = √Z ⟨0| φin(x) |p⟩ + ⟨0| ∫ d⁴y Δret(x − y) j(y) |p⟩

Remembering that both φ and φin are scalar fields, which Lorentz transform according to:

φ(x) = e^(iP·x) φ(0) e^(−iP·x)

where Pμ is the four-momentum operator, we can write:

⟨0| φ(x) |p⟩ = e^(−ip·x) ⟨0| φ(0) |p⟩,   ⟨0| φin(x) |p⟩ = e^(−ip·x) ⟨0| φin(0) |p⟩

Applying the Klein–Gordon operator ∂² + m² on both sides, remembering that the four-momentum p is on-shell and that Δret is the Green's function of the operator, we obtain:

0 = 0 + ⟨0| j(x) |p⟩,   that is,   ⟨0| j(x) |p⟩ = 0

So we arrive at the relation:

⟨0| φ(x) |p⟩ = √Z ⟨0| φin(x) |p⟩
which accounts for the need of the factor Z. The in field is a free field, so it can only connect one-particle states with the vacuum; that is, its expectation value between the vacuum and a many-particle state is null. On the other hand, the interacting field can also connect many-particle states to the vacuum, thanks to the interaction, so the expectation values on the two sides of the last equation are different, and need a normalization factor in between. The right-hand side can be computed explicitly by expanding the in field in creation and annihilation operators:

⟨0| φin(x) |p⟩ = ∫ d³k fk(x) ⟨0| ain(k) |p⟩

Using the commutation relation between ain and a†in we obtain:

⟨0| φin(x) |p⟩ = fp(x)

leading to the relation:

⟨0| φ(x) |p⟩ = √Z fp(x)

by which the value of Z may be computed, provided that one knows how to compute ⟨0| φ(x) |p⟩.
References
- The original paper is H. Lehmann, K. Symanzik and W. Zimmermann, Nuovo Cimento 1, 205 (1955).
- A pedagogical derivation of the LSZ reduction formula can be found in M.E. Peskin and D.V. Schroeder, An Introduction to Quantum Field Theory, Addison-Wesley, Reading, Massachusetts, 1995, Section 7.2.
Quantization
In physics, quantization is a procedure for constructing a quantum field theory from a classical field theory. Using this procedure, theories of classical mechanics can often be directly tailored into new quantum-mechanical theories. What physicists call field quantization usually refers to the quantization of the electromagnetic field, in which photons are classified as field quanta (for example, a photon is called a light quantum). Quantization is a foundational procedure for theories in particle physics, nuclear physics, solid-state physics and quantum optics.
Quantization methods
The purpose of quantization is to turn the fields of a classical field theory into quantum operators acting on the quantum states of the quantum field theory. The state of lowest energy is called the vacuum state, and it may be very complicated. The main reason for quantizing a classical theory is the wish to deduce the properties of materials, objects or particles from probability amplitudes. Such calculations involve certain subtle issues, renormalization among them; if renormalization is ignored, one often derives unreasonable results, such as infinities in the computation of certain probability amplitudes. A complete quantization procedure must therefore include a prescription for carrying out renormalization.
Canonical quantization
Main article: Canonical quantization
The canonical quantization of a field theory is analogous to the construction of quantum mechanics from classical mechanics. The classical field is treated as a dynamical variable, called the canonical coordinate, whose conjugate is the canonical momentum. The commutation relation imposed between these two variables is analogous to the commutation relation between the position and momentum of a particle in quantum mechanics. From these operators one obtains creation and annihilation operators. These operators, called ladder operators, are field operators acting on quantum states and share common eigenstates. After some calculation one obtains the eigenstate of lowest energy, the vacuum state; with a little more work the other eigenstates and their energy levels follow. The whole procedure is also called second quantization.
Canonical quantization can be applied to any field theory, whether of fermions or bosons, and with any internal symmetry. However, it leads to a rather simple picture of the vacuum state, and is not easily applicable to some quantum field theories, such as quantum chromodynamics, where a complicated vacuum with many different condensates frequently appears.
For relatively simple problems the canonical quantization procedure is not very difficult. But in many other situations other quantization methods yield the quantum answers more easily. Nevertheless, canonical quantization remains a very important method in quantum field theory.
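The ladder-operator structure described above can be checked numerically for a single bosonic mode, the simplest instance of canonical quantization. This is an illustrative sketch only: the truncation size N is an arbitrary choice, and the truncation necessarily spoils the commutator in the last row and column.

```python
import numpy as np

# Truncated Fock-space matrix of the annihilation operator a for one bosonic
# mode: a|n> = sqrt(n) |n-1>.
N = 12
a = np.diag(np.sqrt(np.arange(1, N)), k=1)  # annihilation operator
adag = a.T                                   # creation operator (real matrix)

# Canonical commutator [a, a†] = 1 holds exactly away from the truncation edge.
comm = a @ adag - adag @ a
print(np.allclose(comm[:-1, :-1], np.eye(N - 1)))  # True

# Canonical coordinate and momentum built from the ladder operators
# satisfy [x, p] = i (units with hbar = 1), again away from the edge.
x = (a + adag) / np.sqrt(2)
p = 1j * (adag - a) / np.sqrt(2)
print(np.allclose((x @ p - p @ x)[:-1, :-1], 1j * np.eye(N - 1)))  # True
```

The same matrices exhibit the rest of the story: the vacuum is the lowest eigenstate of the number operator a†a, and the other eigenstates are obtained by repeated application of a†.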
Covariant canonical quantization
Physicists have also found a way to canonically quantize a classical system without resorting to the non-covariant approach of foliating spacetime and choosing a Hamiltonian. The method is based on the classical action, but differs from the functional-integral approach.
The method does not apply to all possible actions (for instance, actions with a non-causal structure, or actions with gauge flow). One starts with the classical algebra of all smooth functionals on configuration space and quotients this algebra by the ideal generated by the Euler–Lagrange equations. The quotient algebra is then converted into a Poisson algebra by means of a Poisson bracket derived from the action, the Peierls bracket. As in canonical quantization, introducing the reduced Planck constant then deforms the Poisson algebra, completing the procedure of covariant canonical quantization.
Alternatively, there is a method for quantizing actions with gauge flow. It involves the Batalin–Vilkovisky algebra, an extension of the BRST formalism.
Path-integral quantization
See also: Feynman path integral
A classical mechanical theory is given by an action, the admissible configurations being those which are extremal with respect to functional variations of the action. Through the path-integral formulation, a quantum-mechanical description corresponding to the classical system can then be constructed from the action of that system.
Simplifying the Computation of Quantum Effects
We have seen that correlation functions know all about quantum fields including
scattering processes via the LSZ reduction formula. However, as a rule,
the computation of concrete physical effects is lengthy and time consuming.
Therefore, physicists have invented tools in order to simplify computations,
namely,
(i) the family of reduced correlation functions and
(ii) the mean field approach (averaged quantum fluctuations).
Reduced correlation functions. The basic idea is to start with so-called
reduced correlation functions which allow us to compute the
correlation functions. Schematically,
response function ⇒ reduced correlation functions ⇒
⇒ correlation functions ⇒ scattering functions (S matrix).
The S matrix knows all about scattering processes.
Mean field approach. In order to get typical information about the
influence of quantum fluctuations, we average the quantum fluctuations over
all possible classical field configurations. Schematically,
• classical field ϕ ⇒ mean field ϕmean;
• classical action S[ϕ, J] ⇒ effective quantum action Seff;
• response function ⇒ vertex functional V ⇒ effective quantum action
Seff = V (ϕmean).
The effective quantum action depends on the vertex functional V which
can be described by vertex functions V (x1, . . . , xn), n = 1, 2, . . .
From the computational point of view, note the following.
The computations concerning the reduced correlation functions and
the mean field approach depend on the coupling constant κ, and they
can be carried out in each order of perturbation theory.
Mean field theory
From Wikipedia, the free encyclopedia
In physics and probability theory, mean field theory (MFT, also known as self-consistent field theory) studies the behavior of large and complex stochastic models by studying a simpler model. Such models consider a large number of small individual components which interact with each other. The effect of all the other individuals on any given individual is approximated by a single averaged effect, thus reducing a many-body problem to a one-body problem.
The ideas first appeared in physics in the work of Pierre Curie[1] and Pierre Weiss to describe phase transitions.[2] Approaches inspired by these ideas have seen applications in epidemic models,[3] queueing theory,[4] computer network performance and game theory.[5]
A many-body system with interactions is generally very difficult to solve exactly, except for extremely simple cases (random field theory, 1D Ising model). The n-body system is replaced by a 1-body problem with a well-chosen external field. The external field replaces the interaction of all the other particles with an arbitrary particle. The great difficulty (e.g. when computing the partition function of the system) is the treatment of the combinatorics generated by the interaction terms in the Hamiltonian when summing over all states. The goal of mean field theory is to resolve these combinatorial problems. MFT is known under a great many names and guises. Similar techniques include the Bragg–Williams approximation, models on the Bethe lattice, Landau theory, the Pierre–Weiss approximation, Flory–Huggins solution theory, and Scheutjens–Fleer theory.
The main idea of MFT is to replace all interactions on any one body with an average or effective interaction, sometimes called a molecular field.[6] This reduces any multi-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a relatively low cost.
In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means an MFT system has no fluctuations, but this coincides with the idea that one is replacing all interactions with a "mean field". Quite often, in the formalism of fluctuations, MFT provides a convenient launch point for studying first- or second-order fluctuations.
In general, dimensionality plays a strong role in determining whether a mean-field approach will work for any particular problem. In MFT, many interactions are replaced by one effective interaction. Then it naturally follows that if the field or particle exhibits many interactions in the original system, MFT will be more accurate for such a system. This is true in cases of high dimensionality, or when the Hamiltonian includes long-range forces. The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, depending upon the number of spatial dimensions in the system of interest.
While MFT arose primarily in the field of statistical mechanics, it has more recently been applied elsewhere, for example in inference, graphical models theory, neuroscience, and artificial intelligence.
Formal approach
The formal basis for mean field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian

H = H₀ + ΔH

has the following upper bound:

F ≤ F₀ ≡ ⟨H⟩₀ − T S₀

where S₀ is the entropy and the average is taken over the equilibrium ensemble of the reference system with Hamiltonian H₀. In the special case that the reference Hamiltonian is that of a non-interacting system, it can be written as:

H₀ = Σᵢ hᵢ(ξᵢ)

where ξᵢ is shorthand for the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth). One can consider sharpening the upper bound by minimizing the right-hand side of the inequality. The minimizing reference system is then the "best" approximation to the true system using non-correlated degrees of freedom, and is known as the mean field approximation.
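For classical (commuting) degrees of freedom, the bound itself follows in a few lines from Jensen's inequality, ⟨e^X⟩₀ ≥ e^(⟨X⟩₀); a sketch:

```latex
\begin{aligned}
e^{-\beta F} &= \operatorname{Tr} e^{-\beta H}
  = \operatorname{Tr}\!\left[ e^{-\beta H_0}\, e^{-\beta (H - H_0)} \right]
  = e^{-\beta F_0} \bigl\langle e^{-\beta (H - H_0)} \bigr\rangle_0 \\
 &\ge e^{-\beta F_0}\, e^{-\beta \langle H - H_0 \rangle_0}
 \quad\Longrightarrow\quad
 F \le F_0 + \langle H - H_0 \rangle_0 = \langle H \rangle_0 - T S_0 .
\end{aligned}
```

In the quantum case the factorization in the second equality fails for non-commuting H₀ and H − H₀, and the same bound is obtained instead from the Peierls–Bogoliubov inequality.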
For the most common case that the target Hamiltonian contains only pairwise interactions, i.e.,

H = Σ_((i,j) ∈ P) Vij(ξᵢ, ξⱼ)

where P is the set of pairs that interact, the minimizing procedure can be carried out formally. Define Trᵢ f(ξᵢ) as the generalized sum of the observable f over the degrees of freedom of the single component (a sum for discrete variables, an integral for continuous ones). The approximating free energy is given by:

F₀ = Tr₁,…,N [ H P₀(ξ₁, …, ξN) ] + kB T Tr₁,…,N [ P₀(ξ₁, …, ξN) ln P₀(ξ₁, …, ξN) ]
where P₀(ξ₁, …, ξN) = ∏ᵢ Pᵢ(ξᵢ) is the probability to find the reference system in the state specified by the variables ξ₁, …, ξN. This probability is given by the normalized Boltzmann factor:

Pᵢ(ξᵢ) = e^(−β hᵢ(ξᵢ)) / Zᵢ

where Zᵢ = Trᵢ e^(−β hᵢ(ξᵢ)) is the partition function. Thus:

F₀ = Σ_((i,j) ∈ P) Trᵢⱼ [ Vij(ξᵢ, ξⱼ) Pᵢ(ξᵢ) Pⱼ(ξⱼ) ] + kB T Σᵢ Trᵢ [ Pᵢ(ξᵢ) ln Pᵢ(ξᵢ) ]

In order to minimize F₀, we take the derivative with respect to the single degree-of-freedom probabilities Pᵢ, using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations:

Pᵢ(ξᵢ) = (1/Zᵢ) e^(−β hᵢ^MF(ξᵢ)),   i = 1, 2, …, N

where the mean field is given by:

hᵢ^MF(ξᵢ) = Σ_(j : (i,j) ∈ P) Trⱼ [ Vij(ξᵢ, ξⱼ) Pⱼ(ξⱼ) ]
Applications
Mean field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions.[7]
Ising model
Consider the Ising model on a d-dimensional cubic lattice. The Hamiltonian is given by:

H = −J Σ_⟨i,j⟩ sᵢ sⱼ − h Σᵢ sᵢ

where Σ_⟨i,j⟩ indicates summation over the pairs of nearest neighbors ⟨i, j⟩, and sᵢ, sⱼ = ±1 are neighboring Ising spins.
Let us transform our spin variable by introducing the fluctuation from its mean value mᵢ ≡ ⟨sᵢ⟩. We may rewrite the Hamiltonian:

H = −J Σ_⟨i,j⟩ (mᵢ + δsᵢ)(mⱼ + δsⱼ) − h Σᵢ sᵢ

where we define δsᵢ ≡ sᵢ − mᵢ; this is the fluctuation of the spin. If we expand the right-hand side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values.
The mean-field approximation consists in neglecting this second-order fluctuation term:

H ≈ H^MF ≡ −J Σ_⟨i,j⟩ (mᵢ mⱼ + mᵢ δsⱼ + mⱼ δsᵢ) − h Σᵢ sᵢ

These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions.
Again, the summand can be re-expanded. In addition, we expect that the mean value of each spin is site-independent, since the Ising chain is translationally invariant: mᵢ = m. This yields:

H^MF = −J Σ_⟨i,j⟩ ( m² + m (δsᵢ + δsⱼ) ) − h Σᵢ sᵢ

The summation over neighboring spins can be rewritten as Σ_⟨i,j⟩ = ½ Σᵢ Σ_(j ∈ nn(i)), where nn(i) means 'nearest neighbor of i' and the prefactor ½ avoids double-counting, since each bond participates in two spins. Simplifying leads to the final expression:

H^MF = (J z m² N)/2 − (h + m J z) Σᵢ sᵢ

where z is the coordination number. At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field h^eff = h + m J z, which is the sum of the external field h and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension d, z = 2d).
Substituting this Hamiltonian into the partition function and solving the effective 1D problem, we obtain:

Z = e^(−β J z m² N / 2) [ 2 cosh( β (h + m J z) ) ]^N

where N is the number of lattice sites. This is a closed and exact expression for the partition function of the system. We may obtain the free energy of the system and calculate critical exponents. In particular, we can obtain the magnetization m as a function of the effective field h^eff = h + m J z:

m = tanh( β (h + m J z) )

We thus have two equations relating m and the temperature, allowing us to determine m as a function of temperature. This leads to the following observation: for temperatures greater than a certain critical value Tc, the only solution is m = 0, while below Tc there are two additional non-zero solutions m = ±m₀; the critical temperature is given by the relation kB Tc = J z. This shows that MFT can account for the ferromagnetic phase transition.
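The self-consistency condition m = tanh(β(Jzm + h)) can be solved numerically by fixed-point iteration. A minimal sketch, in units with kB = 1; the function name and parameter defaults are illustrative choices:

```python
import math

def mean_field_magnetization(T, J=1.0, z=4, h=0.0, tol=1e-12, max_iter=10_000):
    """Solve m = tanh((J*z*m + h)/T) by fixed-point iteration (k_B = 1)."""
    beta = 1.0 / T
    m = 1.0  # start from the fully ordered configuration
    for _ in range(max_iter):
        m_new = math.tanh(beta * (J * z * m + h))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

# For z = 4 (square lattice) the mean-field critical temperature is T_c = J*z = 4.
print(mean_field_magnetization(T=2.0))  # ordered phase: m close to 1
print(mean_field_magnetization(T=8.0))  # disordered phase: m -> 0
```

Below Tc the iteration converges to the spontaneous magnetization m₀ ≠ 0; above Tc it collapses to m = 0, reproducing the phase transition just discussed.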
Application to other systems
Similarly, MFT can be applied to other types of Hamiltonian to study the metal-superconductor transition. In this case, the analog of the magnetization is the superconducting gap Δ. Another example is the molecular field of a liquid crystal that emerges when the Laplacian of the director field is non-zero.
Extension to time-dependent mean fields
Main article: Dynamical mean field theory
In mean-field theory, the mean field appearing in the single-site problem is a scalar or vectorial time-independent quantity. However, this need not always be the case: in a variant of mean-field theory called dynamical mean field theory (DMFT), the mean field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal-Mott-insulator transition.
References
We have seen that correlation functions know all about quantum fields including
scattering processes via the LSZ reduction formula. However, as a rule,
the computation of concrete physical effects is lengthy and time consuming.
Therefore, physicists have invented tools in order to simplify computations,
namely,
(i) the family of reduced correlation functions and
(ii) the mean field approach (averaged quantum fluctuations).
Reduced correlation functions. The basic idea is to start with socalled
reduced correlation functions which allow us the computation of the
correlation functions. Schematically,
response function ⇒ reduced correlation functions ⇒
⇒ correlation functions ⇒ scattering functions (S matrix).
The S matrix knows all about scattering processes.
Mean field approach. In order to get typical information about the
influence of quantum fluctuations, we average the quantum fluctuations over
all possible classical field configurations. Schematically,
• classical field ϕ ⇒ mean field ϕmean;
• classical action S[ϕ, J] ⇒ effective quantum action Seff;
• response function ⇒ vertex functional V ⇒ effective quantum action
Seff = V (ϕmean).
The effective quantum action depends on the vertex functional V which
can be described by vertex functions V (x1, . . . , xn), n = 1, 2, . . .
From the computational point of view, note the following.
The computations concerning the reduced correlation functions and
the mean field approach depend on the coupling constant κ, and they
can be carried out in each order of perturbation theory.
Mean field theory
From Wikipedia, the free encyclopedia
This article needs additional citations forverification. Please help improve this articleby adding citations to reliable sources. Unsourced material may be challenged and removed. (December 2009) |
In physics and probability theory, mean field theory (MFT also known asself-consistent field theory) studies the behavior of large and complexstochastic models by studying a simpler model. Such models consider a large number of small interacting individual components which interact with each other. The effect of all the other individuals on any given individual is approximated by a single averaged effect, thus reducing a many-body problem to a one-body problem.
The ideas first appeared in physics in the work of Pierre Curie[1] andPierre Weiss to describe phase transitions.[2] Approaches inspired by these ideas have seen applications in epidemic models,[3] queueing theory,[4] computer network performance and game theory.[5]
A many-body system with interactions is generally very difficult to solve exactly, except for extremely simple cases (random field theory, 1D Ising model). The n-body system is replaced by a 1-body problem with a chosen good external field. The external field replaces the interaction of all the other particles to an arbitrary particle. The great difficulty (e.g. when computing the partition function of the system) is the treatment ofcombinatorics generated by the interaction terms in the Hamiltonian when summing over all states. The goal of mean field theory is to resolve these combinatorial problems. MFT is known under a great many names and guises. Similar techniques include Bragg–Williams approximation, models on Bethe lattice, Landau theory, Pierre–Weiss approximation, Flory–Huggins solution theory, and Scheutjens–Fleer theory.
The main idea of MFT is to replace all interactions to any one body with an average or effective interaction, sometimes called a molecular field.[6] This reduces any multi-body problem into an effective one-body problem. The ease of solving MFT problems means that some insight into the behavior of the system can be obtained at a relatively low cost.
In field theory, the Hamiltonian may be expanded in terms of the magnitude of fluctuations around the mean of the field. In this context, MFT can be viewed as the "zeroth-order" expansion of the Hamiltonian in fluctuations. Physically, this means an MFT system has no fluctuations, but this coincides with the idea that one is replacing all interactions with a "mean field". Quite often, in the formalism of fluctuations, MFT provides a convenient launch-point to studying first or second order fluctuations.
In general, dimensionality plays a strong role in determining whether a mean-field approach will work for any particular problem. In MFT, many interactions are replaced by one effective interaction. Then it naturally follows that if the field or particle exhibits many interactions in the original system, MFT will be more accurate for such a system. This is true in cases of high dimensionality, or when the Hamiltonian includes long-range forces. The Ginzburg criterion is the formal expression of how fluctuations render MFT a poor approximation, depending upon the number of spatial dimensions in the system of interest.
While MFT arose primarily in the field of statistical mechanics, it has more recently been applied elsewhere, for example in inference, graphical models theory, neuroscience, and artificial intelligence.
[/ltr][/size]
[size][ltr]
Formal approach[edit]
The formal basis for mean field theory is the Bogoliubov inequality. This inequality states that the free energy of a system with Hamiltonian
has the following upper bound:
where is the entropy and where the average is taken over the equilibrium ensemble of the reference system with Hamiltonian . In the special case that the reference Hamiltonian is that of a non-interacting system and can thus be written as
where is shorthand for the degrees of freedom of the individual components of our statistical system (atoms, spins and so forth). One can consider sharpening the upper bound by minimizing the right hand side of the inequality. The minimizing reference system is then the "best" approximation to the true system using non-correlated degrees of freedom, and is known as the mean field approximation.
For the most common case that the target Hamiltonian contains only pairwise interactions, i.e.,
where is the set of pairs that interact, the minimizing procedure can be carried out formally. Define as the generalized sum of the observable over the degrees of freedom of the single component (sum for discrete variables, integrals for continuous ones). The approximating free energy is given by
where $P^{(N)}_0(\xi_1,\ldots,\xi_N)$ is the probability to find the reference system in the state specified by the variables $(\xi_1,\ldots,\xi_N)$. This probability is given by the normalized Boltzmann factor

$P^{(N)}_0(\xi_1,\ldots,\xi_N) = \frac{1}{Z^{(N)}_0} e^{-\beta \mathcal{H}_0(\xi_1,\ldots,\xi_N)} = \prod_{i=1}^{N} \frac{1}{Z_0} e^{-\beta h_i(\xi_i)} \equiv \prod_{i=1}^{N} P^{(i)}_0(\xi_i),$

where $Z_0$ is the partition function. Thus

$F_0 = \sum_{(i,j)\in\mathcal{P}} \operatorname{Tr}_{i,j} V_{i,j}(\xi_i,\xi_j)\, P^{(i)}_0(\xi_i)\, P^{(j)}_0(\xi_j) + kT \sum_{i=1}^{N} \operatorname{Tr}_i P^{(i)}_0(\xi_i) \ln P^{(i)}_0(\xi_i).$

In order to minimize, we take the derivative with respect to the single degree-of-freedom probabilities $P^{(i)}_0$, using a Lagrange multiplier to ensure proper normalization. The end result is the set of self-consistency equations

$P^{(i)}_0(\xi_i) = \frac{1}{Z_0} e^{-\beta h_i^{MF}(\xi_i)}, \qquad i = 1, 2, \ldots, N,$

where the mean field is given by

$h_i^{MF}(\xi_i) = \sum_{\{j \,\mid\, (i,j)\in\mathcal{P}\}} \operatorname{Tr}_j V_{i,j}(\xi_i,\xi_j)\, P^{(j)}_0(\xi_j).$
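As a concrete check (our own illustration, not part of the original text), the Bogoliubov bound can be verified numerically for the smallest nontrivial system: two Ising spins with $H = -J s_1 s_2$ and a non-interacting trial Hamiltonian $H_0 = -\lambda(s_1 + s_2)$. The function names and parameter values in this Python sketch are assumptions made for the demonstration:

```python
import math

def two_spin_bound(lam, beta=1.0, J=1.0):
    """Bogoliubov upper bound F0 + <H - H0>_0 for H = -J s1 s2,
    using the non-interacting trial Hamiltonian H0 = -lam*(s1 + s2)."""
    m = math.tanh(beta * lam)              # <s_i> in the reference ensemble
    F0 = -2.0 * math.log(2.0 * math.cosh(beta * lam)) / beta
    # <H>_0 = -J m^2 (independent spins), <H0>_0 = -2 lam m
    return F0 - J * m * m + 2.0 * lam * m

def two_spin_exact(beta=1.0, J=1.0):
    """Exact free energy: Z = sum over s1, s2 = +/-1 of exp(beta*J*s1*s2)."""
    Z = sum(math.exp(beta * J * s1 * s2) for s1 in (-1, 1) for s2 in (-1, 1))
    return -math.log(Z) / beta
```

Scanning the variational parameter $\lambda$ confirms that every trial value of the bound lies above the exact free energy, as the inequality demands.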
Applications
Mean field theory can be applied to a number of physical systems so as to study phenomena such as phase transitions.[7]
Ising model
Consider the Ising model on an $N$-dimensional cubic lattice. The Hamiltonian is given by

$H = -J \sum_{\langle i,j \rangle} s_i s_j - h \sum_i s_i,$

where $\sum_{\langle i,j \rangle}$ indicates summation over the pairs of nearest neighbors $\langle i,j \rangle$, and $s_i = \pm 1$ and $s_j$ are neighboring Ising spins.
Let us transform our spin variable by introducing the fluctuation from its mean value $m_i \equiv \langle s_i \rangle$. We may rewrite the Hamiltonian:

$H = -J \sum_{\langle i,j \rangle} (m_i + \delta s_i)(m_j + \delta s_j) - h \sum_i s_i,$

where we define $\delta s_i \equiv s_i - m_i$; this is the fluctuation of the spin. If we expand the right-hand side, we obtain one term that is entirely dependent on the mean values of the spins and independent of the spin configurations. This is the trivial term, which does not affect the statistical properties of the system. The next term is the one involving the product of the mean value of the spin and the fluctuation value. Finally, the last term involves a product of two fluctuation values.
The mean-field approximation consists in neglecting this second-order fluctuation term:

$H \approx H^{MF} \equiv -J \sum_{\langle i,j \rangle} (m_i m_j + m_i\,\delta s_j + m_j\,\delta s_i) - h \sum_i s_i.$

These fluctuations are enhanced at low dimensions, making MFT a better approximation for high dimensions.
Again, the summand can be re-expanded. In addition, we expect that the mean value of each spin is site-independent, since the Ising chain is translationally invariant. This yields

$H^{MF} = -J \sum_{\langle i,j \rangle} \left( m^2 + m\,(\delta s_i + \delta s_j) \right) - h \sum_i s_i.$

The summation over neighboring spins can be rewritten as $\sum_{\langle i,j \rangle} = \frac{1}{2} \sum_i \sum_{j \in nn(i)}$, where $nn(i)$ means 'nearest neighbor of $i$', and the prefactor $\frac{1}{2}$ avoids double-counting, since each bond participates in two spins. Simplifying leads to the final expression

$H^{MF} = \frac{J m^2 N z}{2} - (h + m J z) \sum_i s_i,$

where $z$ is the coordination number. At this point, the Ising Hamiltonian has been decoupled into a sum of one-body Hamiltonians with an effective mean field $h^{\mathrm{eff}} = h + m J z$, which is the sum of the external field $h$ and of the mean field induced by the neighboring spins. It is worth noting that this mean field directly depends on the number of nearest neighbors, and thus on the dimension of the system (for instance, for a hypercubic lattice of dimension $d$, $z = 2d$).
Substituting this Hamiltonian into the partition function and solving the effective 1D problem, we obtain

$Z = e^{-\beta J m^2 N z / 2} \left[ 2 \cosh\!\left( \frac{h + m J z}{k_B T} \right) \right]^{N},$

where $N$ is the number of lattice sites. This is a closed and exact expression for the partition function of the system. We may obtain the free energy of the system and calculate critical exponents. In particular, we can obtain the magnetization $m$ as a function of $h^{\mathrm{eff}}$:

$m = \tanh\!\left( \frac{h + m J z}{k_B T} \right).$
We thus have two equations relating $m$ and $h^{\mathrm{eff}}$, allowing us to determine $m$ as a function of temperature. This leads to the following observation (for vanishing external field $h = 0$):
- for temperatures greater than a certain value $T_c$, the only solution is $m = 0$. The system is paramagnetic.
- for $T < T_c$, there are two non-zero solutions: $m = \pm m_0$. The system is ferromagnetic.
$T_c$ is given by the following relation: $k_B T_c = z J$. This shows that MFT can account for the ferromagnetic phase transition.
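The self-consistency equation $m = \tanh\!\big((h + zJm)/k_B T\big)$ can be solved by fixed-point iteration. The following Python sketch is our own illustration (with the arbitrary choices $z = 4$, $J = 1$, $h = 0$, $k_B = 1$); it exhibits the transition at $k_B T_c = zJ$:

```python
import math

def mft_magnetization(T, z=4, J=1.0, n_iter=5000):
    """Solve the mean-field self-consistency equation m = tanh(z*J*m/T)
    (external field h = 0, k_B = 1) by fixed-point iteration,
    starting from the fully ordered state m = 1."""
    m = 1.0
    for _ in range(n_iter):
        m = math.tanh(z * J * m / T)
    return m
```

Above $T_c = zJ$ the iteration collapses to $m = 0$ (paramagnet); below $T_c$ it converges to a non-zero magnetization (ferromagnet).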
Application to other systems
Similarly, MFT can be applied to other types of Hamiltonian to study the metal–superconductor transition. In this case, the analog of the magnetization is the superconducting gap $\Delta$. Another example is the molecular field of a liquid crystal that emerges when the Laplacian of the director field is non-zero.
Extension to time-dependent mean fields
Main article: Dynamical mean field theory
In mean-field theory, the mean field appearing in the single-site problem is a scalar or vectorial time-independent quantity. However, this need not always be the case: in a variant of mean-field theory called dynamical mean field theory (DMFT), the mean-field becomes a time-dependent quantity. For instance, DMFT can be applied to the Hubbard model to study the metal-Mott insulator transition.
References
- [1] Kadanoff, L. P. (2009). "More is the Same; Phase Transitions and Mean Field Theories". Journal of Statistical Physics 137 (5–6): 777–797. arXiv:0906.0653. Bibcode:2009JSP...137..777K. doi:10.1007/s10955-009-9814-1.
- [2] Weiss, Pierre (1907). "L'hypothèse du champ moléculaire et la propriété ferromagnétique". J. Phys. Theor. Appl. 6 (1): 661–690.
- [3] Boudec, J. Y. L.; McDonald, D.; Mundinger, J. (2007). "A Generic Mean Field Convergence Result for Systems of Interacting Objects". Fourth International Conference on the Quantitative Evaluation of Systems (QEST 2007). p. 3. doi:10.1109/QEST.2007.8. ISBN 0-7695-2883-X.
- [4] Baccelli, F.; Karpelevich, F. I.; Kelbert, M. Y.; Puhalskii, A. A.; Rybko, A. N.; Suhov, Y. M. (1992). "A mean-field limit for a class of queueing networks". Journal of Statistical Physics 66 (3–4): 803. Bibcode:1992JSP....66..803B. doi:10.1007/BF01055703.
- [5] Lasry, J. M.; Lions, P. L. (2007). "Mean field games". Japanese Journal of Mathematics 2: 229. doi:10.1007/s11537-007-0657-8.
- [6] Chaikin, P. M.; Lubensky, T. C. (2007). Principles of condensed matter physics (4th print ed.). Cambridge: Cambridge University Press. ISBN 978-0-521-79450-3.
- [7] Stanley, H. E. (1971). "Mean field theory of magnetic phase transitions". Introduction to phase transitions and critical phenomena. Oxford University Press. ISBN 0-19-505316-8.
Rigorous Finite-Dimensional Perturbation Theory
Perturbation theory is the most important method in modern
physics.
Folklore
Renormalization
In quantum field theory, a crucial role is played by renormalization. Let us now
study this phenomenon in a very simplified manner.
• We want to show how mathematical difficulties arise if nonlinear equations are
linearized in the incorrect place.
• Furthermore, we will discuss how to overcome these difficulties by using the
methods of bifurcation theory.
The main trick is to replace the original problem by an equivalent one by introducing
so-called regularizing terms. We have to distinguish between
• the non-resonance case (N) (or regular case), and
• the resonance case (R) (or singular case).
In celestial mechanics, it is well-known that resonance may cause highly complicated
motions of asteroids.1
In rough terms, the complexity of phenomena in quantum field theory is
caused by resonances.
In Sect. 7.16, the non-resonance case and the resonance case were studied for linear
operator equations. We now want to generalize this to nonlinear problems.
Naive perturbation theory fails completely in the resonance case.
The Renormalization Group
The method of renormalization group plays a crucial role in modern physics.
Roughly speaking, this method studies the behavior of physical effects under
the rescaling of typical parameters.
Regularizing Terms
The naïve use of perturbation theory in quantum field theory leads to divergent
mathematical expressions. In order to extract finite physical information from this,
physicists use the method of renormalization. In Volume II we will study quantum
electrodynamics. In this setting, renormalization can be understood best by
proceeding as follows.
(i) Put the quantum system in a box of finite volume V .
(ii) Consider a finite lattice in momentum space of grid length Δp and maximal
momentum Pmax.
The maximal momentum corresponds to the choice of a maximal energy, Emax. We
then have to carry out the limits
V → +∞, Emax → +∞, Δp→ 0.
Unfortunately, it turns out that the naïve limits do not always exist. Sometimes
divergent expressions arise.
The idea of the method of regularizing terms is to force convergence of divergent
expressions by introducing additional terms. This technique is well-known in
mathematics. In what follows we will study three prototypes, namely,
• the construction of entire functions via regularizing factors (the Weierstrass product
theorem),
• the construction of meromorphic functions via regularizing summands (the
Mittag–Leffler theorem), and
• the regularization of divergent integrals by adding terms to the integrand via
Taylor expansion.
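The third prototype can be illustrated with a toy integral of our own (not from the text): $\int_0^1 \cos(x)/x^2\,dx$ diverges at $x = 0$, but subtracting the zeroth-order Taylor term of the numerator (namely $1$) leaves a convergent integral, since $(\cos x - 1)/x^2 \to -\tfrac{1}{2}$ as $x \to 0$. A minimal Python sketch:

```python
import math

def regularized(n=200000):
    """Midpoint-rule value of the regularized integral
    ∫_0^1 (cos x - 1)/x^2 dx.  The original integrand cos(x)/x^2 diverges
    at x = 0; the Taylor subtraction makes the integrand bounded there."""
    h = 1.0 / n
    return sum((math.cos((i + 0.5) * h) - 1.0) / ((i + 0.5) * h) ** 2 * h
               for i in range(n))
```

Integration by parts gives the exact value $(1 - \cos 1) - \operatorname{Si}(1) \approx -0.4864$, which the sketch reproduces.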
In this monograph, we distinguish between
• regularizing terms and
• counterterms.
By convention, regularizing terms are mathematical objects which give divergent
expressions a well-defined rigorous meaning. Counterterms are added to Lagrangian
densities in order to construct regularizing terms. Roughly speaking, this allows
us a physical interpretation of the regularizing terms. In quantum field theory,
renormalization theory is based on counterterms.
Physical interpretation. Regard E(R) above as the energy of a quantum
system on the interval [ , R]. This energy is very large if the size of the system is
very large. Such extremely large energies are not observed in physical experiments.
Physicists assume that we only measure relative energies with respect to the ground
state. In our model above, we measure E(R) − a ln R. In the limit R → +∞, we
get the regularized value reg E(∞).
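A toy model of this subtraction (with hypothetical constants $a = 2$, $b = 3$ chosen only for illustration) shows how $E(R) - a \ln R$ converges while $E(R)$ itself diverges:

```python
import math

A, B = 2.0, 3.0  # hypothetical model constants, not from the text

def E(R):
    """Toy 'energy' of a system of size R that diverges like A*ln(R);
    the finite remainder B*(1 - 1/R) tends to the regularized value B."""
    return A * math.log(R) + B * (1.0 - 1.0 / R)
```

Subtracting the logarithmic term at increasing sizes $R$ yields a sequence converging to the regularized value $\operatorname{reg} E(\infty) = B$.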
Entire function
An entire function is a function that is holomorphic on the whole complex plane. Typical examples are polynomials, the exponential function, and sums, products, and compositions of these. Every entire function can be represented as an everywhere-convergent power series. The logarithm and the square root, by contrast, are not entire functions.
The order of an entire function can be defined by the limit superior

$\rho = \limsup_{r \to \infty} \frac{\ln \ln M(r)}{\ln r},$

where $M(r)$ is the maximum of $|f(z)|$ on the circle $|z| = r$. If $0 < \rho < \infty$, we can also define its type:

$\sigma = \limsup_{r \to \infty} \frac{\ln M(r)}{r^{\rho}}.$

An entire function may have a singularity at infinity, even an essential singularity; in that case the function is called a transcendental entire function. By Liouville's theorem, a function that is entire on the whole Riemann sphere (the complex plane together with the point at infinity) must be constant.
Liouville's theorem establishes an important property of entire functions: every bounded entire function is constant. This can be used to prove the fundamental theorem of algebra. Picard's little theorem sharpens Liouville's theorem: every non-constant entire function attains every complex value, with at most one exception; for example, the exponential function never takes the value zero.
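As a quick numerical illustration (our own sketch, not part of the article), the order formula can be evaluated at a large radius for functions whose maximum modulus $M(r)$ is known in closed form:

```python
import math

def order_estimate(log_max_modulus, r):
    """Estimate the order rho = limsup (ln ln M(r)) / ln r at a large radius r.
    The argument log_max_modulus(r) should return ln M(r)."""
    return math.log(log_max_modulus(r)) / math.log(r)
```

For $f(z) = e^z$ one has $M(r) = e^r$, so $\ln M(r) = r$ and the estimate is exactly 1; for $f(z) = \cos\sqrt{z}$ one has $\ln M(r) \sim \sqrt{r}$ and the estimate is $1/2$.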
References
- Ralph P. Boas. Entire Functions. Academic Press. 1954. OCLC 847696.
Weierstrass factorization theorem
From Wikipedia, the free encyclopedia
In mathematics, the Weierstrass factorization theorem in complex analysis, named after Karl Weierstrass, asserts that entire functions can be represented by a product involving their zeroes. In addition, every sequence tending to infinity has an associated entire function with zeroes at precisely the points of that sequence.
A second form extended to meromorphic functions allows one to consider a given meromorphic function as a product of three factors: the function's poles, zeroes, and an associated non-zero holomorphic function.
- 1 Motivation
- 2 The elementary factors
- 3 The two forms of the theorem
- 3.1 Existence of entire function with specified zeroes
- 3.2 The Weierstrass factorization theorem
- 3.2.1 Examples of factorization
Motivation
The consequences of the fundamental theorem of algebra are twofold.[1] Firstly, any finite sequence $\{c_n\}$ in the complex plane has an associated polynomial that has zeroes precisely at the points of that sequence: $p(z) = \prod_n (z - c_n)$.
Secondly, any polynomial function $p(z)$ in the complex plane has a factorization $p(z) = a \prod_n (z - c_n)$, where $a$ is a non-zero constant and the $c_n$ are the zeroes of $p$.
The two forms of the Weierstrass factorization theorem can be thought of as extensions of the above to entire functions. The necessity of extra machinery is demonstrated when one considers the product $\prod_n (z - c_n)$ if the sequence $\{c_n\}$ is not finite. It can never define an entire function, because the infinite product does not converge. Thus one cannot, in general, define an entire function from a sequence of prescribed zeroes or represent an entire function by its zeroes using the expressions yielded by the fundamental theorem of algebra.
A necessary condition for convergence of the infinite product in question is that each factor must approach 1 as $n \to \infty$. So it stands to reason that one should seek a function that could be 0 at a prescribed point, yet remain near 1 when not at that point, and furthermore introduce no more zeroes than those prescribed. Weierstrass' elementary factors serve exactly this purpose, playing the same role as the factors $(z - c_n)$ above.
The elementary factors
These are also referred to as primary factors.[2]
For $n = 0, 1, 2, \ldots$, define the elementary factors:[3]

$E_0(z) = 1 - z, \qquad E_n(z) = (1 - z)\exp\!\left( \frac{z}{1} + \frac{z^2}{2} + \cdots + \frac{z^n}{n} \right) \quad (n \geq 1).$
Their utility lies in the following lemma:[3]
Lemma (15.8, Rudin): for $|z| \leq 1$ and $n \in \mathbb{N}_0$, $\;|1 - E_n(z)| \leq |z|^{n+1}$.
The two forms of the theorem
Existence of entire function with specified zeroes
Sometimes called the Weierstrass theorem.[4]
Let $\{a_n\}$ be a sequence of non-zero complex numbers such that $|a_n| \to \infty$. If $\{p_n\}$ is any sequence of integers such that for all $r > 0$,

$\sum_{n=1}^{\infty} \left( \frac{r}{|a_n|} \right)^{1 + p_n} < \infty,$

then the function

$f(z) = \prod_{n=1}^{\infty} E_{p_n}\!\left( \frac{z}{a_n} \right)$

is entire with zeros only at the points $a_n$. If a number $z_0$ occurs in the sequence $\{a_n\}$ exactly $m$ times, then the function $f$ has a zero at $z_0$ of multiplicity $m$.
- Note that the sequence $\{p_n\}$ in the statement of the theorem always exists. For example, we could always take $p_n = n$ and have the convergence. Such a sequence is not unique: changing it at a finite number of positions, or taking another sequence $p'_n \geq p_n$, will not break the convergence.
- The theorem generalizes to the following: sequences in open subsets (and hence regions) of the Riemann sphere have associated functions that are holomorphic in those subsets and have zeroes at the points of the sequence.[3]
- Note also that the case given by the fundamental theorem of algebra is incorporated here. If the sequence $\{a_n\}$ is finite, then we can take $p_n = 0$ and obtain $f(z) = c\, z^m \prod_{n=1}^{N} \left( 1 - \frac{z}{a_n} \right)$.
The Weierstrass factorization theorem
Sometimes called the Weierstrass product/factor theorem.[5]
Let ƒ be an entire function, and let $\{a_n\}$ be the non-zero zeros of ƒ repeated according to multiplicity; suppose also that ƒ has a zero at $z = 0$ of order $m \geq 0$ (a zero of order $m = 0$ at $z = 0$ means $f(0) \neq 0$). Then there exists an entire function $g$ and a sequence of integers $\{p_n\}$ such that

$f(z) = z^m e^{g(z)} \prod_{n=1}^{\infty} E_{p_n}\!\left( \frac{z}{a_n} \right).$[6]
Examples of factorization
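A classical example is the sine function, whose zeros are the integers; pairing the zeros $\pm n$ makes the exponential convergence factors cancel, leaving $\sin(\pi z) = \pi z \prod_{n=1}^{\infty} (1 - z^2/n^2)$. A short numerical check of the truncated product (our own sketch; the truncation length is arbitrary):

```python
import math

def weierstrass_sin(z, n_factors=100000):
    """Partial Weierstrass/Euler product sin(pi z) = pi z * prod (1 - z^2/n^2).
    Pairing the zeros at +n and -n cancels the exponential factors of E_1,
    so plain quadratic factors suffice."""
    p = math.pi * z
    for n in range(1, n_factors + 1):
        p *= 1.0 - (z * z) / (n * n)
    return p
```

The truncation error of the product is of order $z^2 / n_{\text{factors}}$, so a few hundred thousand factors already match the sine to three decimals or better.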
Hadamard factorization theorem
If ƒ is an entire function of finite order $\rho$, then it admits a factorization

$f(z) = z^m e^{g(z)} \prod_{n=1}^{\infty} E_p\!\left( \frac{z}{a_n} \right),$

where $g(z)$ is a polynomial of degree $q$ with $q \leq \rho$, and $p = [\rho]$.[6]
Notes
- [1] Knopp, K. (1996), "Weierstrass's Factor-Theorem", Theory of Functions, Part II, New York: Dover, pp. 1–7.
- [2] Boas, R. P. (1954), Entire Functions, New York: Academic Press Inc., ISBN 0-8218-4505-5, OCLC 6487790, chapter 2.
- [3] Rudin, W. (1987), Real and Complex Analysis (3rd ed.), Boston: McGraw Hill, pp. 301–304, ISBN 0-07-054234-1, OCLC 13093736.
- [4] Weisstein, Eric W., "Weierstrass's Theorem", MathWorld.
- [5] Weisstein, Eric W., "Weierstrass Product Theorem", MathWorld.
- [6] Conway, J. B. (1995), Functions of One Complex Variable I (2nd ed.), Springer, ISBN 0-387-90328-3.
External links
- Hazewinkel, Michiel, ed. (2001), "Weierstrass theorem", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
Mittag-Leffler's theorem
From Wikipedia, the free encyclopedia
In complex analysis, Mittag-Leffler's theorem concerns the existence of meromorphic functions with prescribed poles. It is the sister of the Weierstrass factorization theorem, which asserts the existence of holomorphic functions with prescribed zeros. It is named after Gösta Mittag-Leffler.
Theorem
Let $D$ be an open set in $\mathbb{C}$ and $E \subset D$ a closed discrete subset. For each $a$ in $E$, let $p_a(z)$ be a polynomial in $\frac{1}{z - a}$. There is a meromorphic function $f$ on $D$ such that for each $a \in E$, the function $f(z) - p_a(z)$ is holomorphic at $a$. In particular, the principal part of $f$ at $a$ is $p_a(z)$.
One possible proof outline is as follows. Notice that if $E$ is finite, it suffices to take $f(z) = \sum_{a \in E} p_a(z)$. If $E$ is not finite, consider the finite sum $S_F(z) = \sum_{a \in F} p_a(z)$, where $F$ is a finite subset of $E$. While the $S_F(z)$ may not converge as $F$ approaches $E$, one may subtract well-chosen rational functions with poles outside of $D$ (provided by Runge's theorem) without changing the principal parts of the $S_F(z)$, and in such a way that convergence is guaranteed.
Example
Suppose that we desire a meromorphic function with simple poles of residue 1 at all positive integers. With notation as above, letting $p_k(z) = \frac{1}{z - k}$ and $E = \mathbb{Z}^+$, Mittag-Leffler's theorem asserts (non-constructively) the existence of a meromorphic function $f$ with principal part $p_k(z)$ at $z = k$ for each positive integer $k$. This $f$ has the desired properties. More constructively, we can let

$f(z) = \sum_{k=1}^{\infty} \frac{z}{k(z - k)}.$

This series converges normally on $\mathbb{C}$ (uniformly on compact subsets away from the poles, as can be shown using the M-test) to a meromorphic function with the desired properties.
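The residues of the constructive series can be checked numerically; the following Python sketch is our own illustration, with an arbitrary truncation length:

```python
def ml_series(z, n_terms=100000):
    """Truncated constructive Mittag-Leffler example f(z) = sum z/(k(z-k)),
    which is meromorphic with a simple pole of residue 1 at each positive
    integer k (the k-th summand equals 1/(z-k) + 1/k)."""
    return sum(z / (k * (z - k)) for k in range(1, n_terms + 1))
```

Evaluating $\varepsilon\, f(k + \varepsilon)$ for small $\varepsilon$ isolates the residue at $z = k$ and returns a value close to 1.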
Another example is provided by
External links
- Hazewinkel, Michiel, ed. (2001), "Mittag-Leffler theorem", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
- Mittag-Leffler's theorem at PlanetMath.org.
Fermions and the Calculus for Grassmann
Variables
In 1844, Hermann Grassmann (1809–1877) emphasized the importance
of the wedge product (Grassmann product) for geometry in higher dimensions.
But his contemporaries did not understand him. Nowadays the
wedge product is fundamental for modern mathematics (cohomology) and
physics (fermions and supersymmetry).
Folklore
Recall that we distinguish between bosons (elementary particles with integer spin
like photons or mesons) and fermions (elementary particles with half-integer spin
like electrons and quarks). The rigorous finite-dimensional approach from the preceding
Chap. 7 refers to bosons. However, it is possible to extend this approach to
fermions by replacing complex numbers by Grassmann variables. In this chapter,
we are going to discuss this.
The Grassmann Product
Vectors. Let X be a complex linear space. For two elements ϕ and ψ of X, we
define the Grassmann product ϕ ∧ ψ by setting
(ϕ ∧ ψ)(f, g) := f(ϕ)g(ψ) − f(ψ)g(ϕ) for all f, g ∈ Xd.
Recall that the dual space Xd consists of all linear functionals f : X → C. The map
ϕ ∧ ψ : Xd × Xd → C
is bilinear and antisymmetric. Explicitly, for all f, g, h ∈ Xd and all complex numbers
α, β, we have
• (ϕ ∧ ψ)(f, g) = −(ϕ ∧ ψ)(g, f);
• (ϕ ∧ ψ)(f,αg + βh) = α(ϕ ∧ ψ)(f, g) + β(ϕ ∧ ψ)(f,h).
The two crucial properties of the Grassmann product are
• ϕ ∧ ψ = −ψ ∧ ϕ (anticommutativity), and
• (αϕ + βχ) ∧ ψ = αϕ ∧ ψ + βχ ∧ ψ (distributivity)
for all ϕ, ψ, χ ∈ X and all complex numbers α, β. If we write briefly ϕψ instead of
the wedge product ϕ ∧ ψ, then
• ϕψ = −ψϕ, and
• (αϕ + βχ)ψ = αϕψ + βχψ.
This implies the key relation
ϕϕ = 0 for all ϕ ∈ X.
Functionals. Dually, for f, g ∈ Xd, we define
(f ∧ g)(ϕ, ψ) := f(ϕ)g(ψ) − f(ψ)g(ϕ) for all ϕ, ψ ∈ X.
The map f ∧ g : X × X → C is bilinear and antisymmetric.
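In finite dimensions this construction can be spelled out directly. The following Python sketch is our own illustration: vectors and functionals are represented by coefficient lists, with $f(\varphi) = \sum_i f_i \varphi_i$, and we verify anticommutativity and the key relation ϕ ∧ ϕ = 0:

```python
def dual_pair(f, v):
    """Apply the linear functional f (coefficient list) to the vector v:
    f(v) = sum_i f_i * v_i."""
    return sum(a * b for a, b in zip(f, v))

def wedge(phi, psi):
    """Grassmann product: (phi ∧ psi)(f, g) = f(phi) g(psi) - f(psi) g(phi),
    a bilinear antisymmetric map on pairs of functionals."""
    return lambda f, g: (dual_pair(f, phi) * dual_pair(g, psi)
                         - dual_pair(f, psi) * dual_pair(g, phi))
```

Swapping either the vectors or the functionals flips the sign, and the wedge of a vector with itself vanishes identically.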
Infinite-Dimensional Hilbert Spaces
Quantum fields possess an infinite number of degrees of freedom. This
causes a lot of mathematical trouble.
Folklore
The Uncertainty Relation
Before you start to axiomatize things, be sure that you first have something
of mathematical substance.
Hermann Weyl (1885–1955)
In 1927 Heisenberg (1901–1976) discovered that in contrast to Newton’s classical
mechanics, it is impossible to measure precisely position and momentum of a quantum
particle at the same time. Heisenberg based his mathematical argument on the
commutation relation
QP − PQ = iℏI (10.1)
for the position operator Q and the momentum operator P, along with the Schwarz
inequality.
Finite-dimensional Hilbert spaces fail. Observe first that the fundamental
commutation relation (10.1) cannot be realized for observables Q and P living in
a nontrivial finite-dimensional Hilbert space X if the Planck constant is different
from zero. 1 Indeed, suppose that there exist two self-adjoint linear operators
Q, P : X → X
such that (10.1) holds true. By Proposition 7.11 on page 364, tr(QP) = tr(PQ).
This implies
0 = tr(QP − PQ) = iℏ · tr I = iℏ dim X.
Thus, relation (10.1) forces the vanishing of the Planck constant in the setting of
a nontrivial finite-dimensional Hilbert space.
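The trace identity behind this argument is easy to check numerically for arbitrary square matrices; the following Python sketch (our own illustration, using random 4 × 4 matrices) shows that the trace of any commutator vanishes, so QP − PQ = iℏI cannot hold in finite dimensions unless ℏ = 0:

```python
import random

def trace(M):
    """Sum of the diagonal entries of a square matrix (list of rows)."""
    return sum(M[i][i] for i in range(len(M)))

def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

random.seed(0)
n = 4
Q = [[random.random() for _ in range(n)] for _ in range(n)]
P = [[random.random() for _ in range(n)] for _ in range(n)]
# tr(QP) = tr(PQ) for any square matrices, so tr(QP - PQ) = 0,
# whereas tr(i*hbar*I) = i*hbar*n is non-zero for hbar != 0.
commutator_trace = trace(matmul(Q, P)) - trace(matmul(P, Q))
```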
Using an intuitive picture, the incomplete space C2(R) and the complete
space L2(R) correspond to the incomplete space of rational numbers Q and
the complete space of real numbers R, respectively.
A Cauchy sequence (xn) of rational numbers x1, x2, . . . does not always converge
to a rational number. However, if we complete the space of rational numbers to
the space of real numbers by introducing irrational numbers, then each Cauchy
sequence of real numbers converges to a real number. The term ‘irrational number’
indicates that in the history of mathematics, mathematicians had philosophical
trouble with understanding this notion.
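A standard example (our own illustration) is the Babylonian iteration for √2, which produces a Cauchy sequence of rational numbers whose limit is irrational, so the sequence has no limit inside Q:

```python
from fractions import Fraction

def babylonian(steps=6):
    """Cauchy sequence of rationals x_{n+1} = (x_n + 2/x_n)/2 converging
    to sqrt(2); every term is an exact Fraction, but the limit is not
    rational, so the limit exists only after completing Q to R."""
    x = Fraction(1)
    seq = [x]
    for _ in range(steps):
        x = (x + 2 / x) / 2
        seq.append(x)
    return seq
```

Six steps already give a rational whose square differs from 2 by less than machine precision allows us to resolve.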
Measure and Integral
Almost all concepts which relate to the modern measure and integration
theory, go back to the works of Lebesgue (1875–1941). The introduction of
these concepts was the turning point in the transition from mathematics
of the 19th century to mathematics of the 20th century.
Naum Vilenkin
This measure integral includes finite sums, infinite series, and traditional integrals as special cases.
Such general expansions appear in von Neumann’s and Dirac’s operator calculus.
This plays a fundamental role in quantum physics. These expansion formulas generalize
the Fourier series of periodic functions and the Fourier transform. Furthermore,
the measure integral is the natural setting for the theory of probability. Mass has
then to be replaced by probability.
Measure theory begins with Archimedes of Syracus (287–212 B.C.) who computed
the measure of the unit circle, S1. Using a polygon with 96 nodes, he obtained
the approximation μ(S1) = 6.28 which corresponds to π = 3.14. Around 1900, modern
measure theory was founded by Borel (1871–1956) and Lebesgue (1875–1941).
In 1932 Kolmogorov (1903–1987) used general measure theory in order to found the
modern theory of probability.
Definition of measure. The notion of measure generalizes the intuitive notion
of volume, mass, positive electric charge, and probability. Suppose we are given an
arbitrary set S. To certain subsets A of the set S we want to assign a number, μ(A),
with
0 ≤ μ(A)≤∞.
The number μ(A) is called the measure of the set A.
Characterization of the Lebesgue measure by translation invariance. The
Lebesgue measure generalizes the classical volume of a set in RN.
Physical interpretation in quantum mechanics. In terms of physics, the
original function ψ acts in position space, whereas the Fourier transform Fψ acts
in momentum space. In quantum mechanics, the function ψ = ψ(x) is the wave
function of a quantum particle on the real line.
The Fourier transform represents an expansion with respect to the eigenfunctions
of the momentum operator.
Quantum fields possess an infinite number of degrees of freedom. This
causes a lot of mathematical trouble.
Folklore
The Uncertainty Relation
Before you start to axiomatize things, be sure that you first have something
of mathematical substance.
Hermann Weyl (1885–1955)
In 1927 Heisenberg (1901–1976) discovered that, in contrast to Newton's classical
mechanics, it is impossible to measure precisely both the position and the momentum
of a quantum particle at the same time. Heisenberg based his mathematical argument on the
commutation relation
QP − PQ = iℏI (10.1)
for the position operator Q and the momentum operator P, along with the Schwarz
inequality.
Finite-dimensional Hilbert spaces fail. Observe first that the fundamental
commutation relation (10.1) cannot be realized by observables Q and P living in
a nontrivial finite-dimensional Hilbert space X if the Planck constant ℏ is different
from zero.¹ Indeed, suppose that there exist two self-adjoint linear operators
Q, P : X → X
such that (10.1) holds true. By Proposition 7.11 on page 364, tr(QP) = tr(PQ).
This implies
0 = tr(QP − PQ) = iℏ · tr I = iℏ · dim X.
Thus, relation (10.1) forces the vanishing of the Planck constant in the setting of
a nontrivial finite-dimensional Hilbert space.
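The trace argument can be checked numerically. The sketch below is my own illustration, not from the book: it builds two random self-adjoint matrices and confirms that the trace of their commutator vanishes, so no finite-dimensional pair of observables can match the nonzero trace iℏ · dim X demanded by (10.1).

```python
import numpy as np

# For ANY n-by-n matrices Q, P we have tr(QP) = tr(PQ), hence
# tr(QP - PQ) = 0.  But tr(i*hbar*I) = i*hbar*n != 0 for hbar != 0,
# so the commutation relation cannot hold in finite dimensions.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))
Q = A + A.T                    # self-adjoint (real symmetric)
B = rng.standard_normal((n, n))
P = B + B.T                    # self-adjoint

commutator_trace = np.trace(Q @ P - P @ Q)
print(abs(commutator_trace) < 1e-10)   # True: the trace always vanishes
```

The same cancellation occurs for any dimension n and any choice of self-adjoint Q and P, which is exactly the content of the argument above.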
Using an intuitive picture, the incomplete space C2(R) and the complete
space L2(R) correspond to the incomplete space of rational numbers Q and
the complete space of real numbers R, respectively.
A Cauchy sequence (xn) of rational numbers x1, x2, . . . does not always converge
to a rational number. However, if we complete the space of rational numbers to
the space of real numbers by introducing irrational numbers, then each Cauchy
sequence of real numbers converges to a real number. The term ‘irrational number’
indicates that in the history of mathematics, mathematicians had philosophical
trouble with understanding this notion.
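The incompleteness of the rationals can be made concrete with exact rational arithmetic. The following sketch is my own example, using Newton's iteration for x² = 2: every iterate is rational and the sequence is Cauchy, yet the limit √2 lies outside Q.

```python
from fractions import Fraction

# Newton's iteration for x^2 = 2 stays inside the rational numbers Q,
# produces a Cauchy sequence, yet its limit sqrt(2) is irrational.
x = Fraction(1)
iterates = []
for _ in range(6):
    x = (x + 2 / x) / 2        # exact rational arithmetic at every step
    iterates.append(x)

gap = abs(iterates[-1] - iterates[-2])
print(gap < Fraction(1, 10**10))   # True: consecutive terms get very close
print(iterates[-1] ** 2 == 2)      # False: no rational number squares to 2
```

Passing to the completion R, the same sequence converges to the real number √2.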
Measure and Integral
Almost all concepts which relate to modern measure and integration
theory go back to the works of Lebesgue (1875–1941). The introduction of
these concepts was the turning point in the transition from the mathematics
of the 19th century to the mathematics of the 20th century.
Naum Vilenkin
This measure integral
includes finite sums, infinite series, and traditional integrals as special cases.
Such general expansions appear in von Neumann's and Dirac's operator calculus,
which plays a fundamental role in quantum physics. These expansion formulas generalize
the Fourier series of periodic functions and the Fourier transform. Furthermore,
the measure integral is the natural setting for the theory of probability: mass then
has to be replaced by probability.
Measure theory begins with Archimedes of Syracuse (287–212 B.C.), who computed
the measure of the unit circle S¹. Using a polygon with 96 sides, he obtained
the approximation μ(S¹) = 6.28, which corresponds to π = 3.14. Around 1900, modern
measure theory was founded by Borel (1871–1956) and Lebesgue (1875–1941).
In 1932 Kolmogorov (1903–1987) used general measure theory in order to found the
modern theory of probability.
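Archimedes' approximation is easy to reproduce. The sketch below sums the 96 chords of the inscribed polygon; note that Archimedes obtained the chord lengths by a recursive geometric doubling construction, whereas we borrow modern trigonometry.

```python
import math

# Perimeter of a regular 96-gon inscribed in the unit circle: 96 chords,
# each of length 2*sin(pi/96).
n = 96
perimeter = n * 2 * math.sin(math.pi / n)
print(round(perimeter, 2))       # 6.28, Archimedes' value for mu(S^1)
print(round(perimeter / 2, 2))   # 3.14, the corresponding value of pi
```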
Definition of measure. The notion of measure generalizes the intuitive notion
of volume, mass, positive electric charge, and probability. Suppose we are given an
arbitrary set S. To certain subsets A of the set S we want to assign a number, μ(A),
with
0 ≤ μ(A) ≤ ∞.
The number μ(A) is called the measure of the set A.
Characterization of the Lebesgue measure by translation invariance. The
Lebesgue measure generalizes the classical volume of a set in R^N.
Physical interpretation in quantum mechanics. In terms of physics, the
original function ψ acts in position space, whereas the Fourier transform Fψ acts
in momentum space. In quantum mechanics, the function ψ = ψ(x) is the wave
function of a quantum particle on the real line.
The Fourier transform represents an expansion with respect to the eigenfunctions
of the momentum operator.
一星 – Posts: 3787
Join date: 13-08-07
Re: Quantum Field Theory I
The Dirichlet Problem in Electrostatics as a
Paradigm
The Dirichlet principle is an exciting example for a problem that came
from physics and could be solved by mathematics; this famous problem
strongly influenced the development of far-reaching mathematical theories
in the 20th century.
Folklore
By the Dirichlet principle we understand a method for solving boundary
value problems via minimum problems for variational integrals. This principle
goes back to Green (1793–1841), Gauss (1777–1855), Lord Kelvin
(1824–1907), and Dirichlet (1805–1859). In 1870 Weierstrass (1815–1897)
was the first to underline the shortcomings of this principle. He showed
that there are variational problems which do not have any solution. In 1900
I showed that it is possible to rigorously justify the Dirichlet principle.
David Hilbert (1862–1943)
In order to understand the great achievement of Hilbert in the field of
analysis, it is necessary to first comment on the state of analysis at the
end of the nineteenth century. After Weierstrass had made sure of the
foundations of complex function theory, and it had reached an impressive
level, research switched to boundary-value problems, which first arose in
physics (e.g., electrostatics). The work of Riemann (1826–1866) on complex
function theory and conformal maps, however, had shown that boundary-value
problems have great importance for pure mathematics as well. Two
problems had to be solved:
(i) the problem of the existence of an electrostatic potential function for
given boundary values, and
(ii) the problem of eigenoscillations of elastic bodies, for example, string
and membrane.
The state of the theory was bad at the end of the nineteenth century. Riemann
had believed that, by using the Dirichlet principle, one could deal
with these problems in a simple and uniform way. After Weierstrass’ substantial
criticism of the Dirichlet principle in 1870, special methods had
to be developed for these problems. These methods, by Carl Neumann,
Amandus Schwarz, and Henri Poincaré, were very elaborate and still have
great aesthetic appeal today; but because of their variety they were confusing,
although at the end of the nineteenth century, Poincaré (1854–1912),
in particular, endeavored with great astuteness to standardize the theory.
There was, however, a lack of “simple basic facts” from which one could
easily get complete results without sophisticated investigations of limiting
processes.
Hilbert first looked for these “simple basic facts” in the calculus of variations.
In 1900 he had an immediate and great success; he succeeded in
justifying the Dirichlet principle.
While Hilbert used variational methods, the Swedish mathematician Fredholm
(1866–1927) approached the same goal by developing Poincaré's work
by using linear integral equations. In the winter semester 1900/01 Holmgren,
who had come from Uppsala (Sweden) to study under Hilbert in
Göttingen (Germany), held a lecture in Hilbert's seminar on Fredholm's
work on linear integral equations which had been published the previous
year. This was a decisive day in Hilbert’s life. He took up Fredholm’s discovery
with great zeal, and combined it with his variational method. In
this way he succeeded in creating a uniform theory which solved problems
(i) and (ii) above.8
Hilbert believed that with this theory he had provided analysis with a great
general basis which corresponds to an axiomatics of limiting processes. The
further development of mathematics has proved him to be right.9
Otto Blumenthal, 1932
The creation of a rigorous mathematical quantum field theory is a challenge for
modern mathematics. There is hope for solving this hard problem in the future. The
optimism is motivated by the success of mathematics in the past. As an example,
let us consider the Dirichlet principle in this section.
Re: Quantum Field Theory I
Stokes' Theorem

Stokes' theorem is a statement in differential geometry about the integration of
differential forms. It generalizes several theorems of vector calculus and is named
after Sir George Stokes.

Stokes' formula on ℝ³

Let S be a piecewise smooth oriented surface whose boundary is the oriented closed
curve Γ = ∂S, the orientation of Γ fitting the chosen side of S according to the
right-hand rule. If the functions P(x, y, z), Q(x, y, z), R(x, y, z) are defined on
the surface S together with its boundary Γ and have continuous first-order partial
derivatives there, then [1]

∮_Γ P dx + Q dy + R dz
  = ∬_S (∂R/∂y − ∂Q/∂z) dy dz + (∂P/∂z − ∂R/∂x) dz dx + (∂Q/∂x − ∂P/∂y) dx dy.

[Figure caption: The curl theorem can be used to compute the flux through a surface
with boundary, such as any of the surfaces on the right; it cannot be used to compute
the flux through a closed surface, such as any of the surfaces on the left. In the
figure, surfaces are shown in blue and boundaries in red.]

This formula is called Stokes' formula on ℝ³, also known as the Kelvin–Stokes theorem
or the curl theorem. It involves the curl of a vector field and can be written with
the nabla operator as [2]

∮_Γ F · dr = ∬_S (∇ × F) · dS.

It relates the surface integral of the curl of a vector field over a surface in ℝ³
to the line integral of the vector field along the boundary of that surface. It is
the special case n = 2 of the general Stokes formula, once the metric on ℝ³ is used
to identify the vector field with an equivalent 1-form. The first known written
statement of the theorem was given by William Thomson (Lord Kelvin) and appeared in
a letter of his to Stokes.

Similarly, the Gauss divergence theorem is a special case of the general Stokes
formula if we identify the vector field with the equivalent (n−1)-form obtained by
contraction with the volume form. The fundamental theorem of calculus and Green's
theorem are special cases as well. The general Stokes theorem for differential forms
is of course stronger than its special cases, although the latter are more intuitive
and are often found more convenient by the scientists and engineers who use them.

Another form

The formula above also allows one to convert between a line integral with respect
to the coordinates along Γ and a surface integral over S.

Stokes' formula on manifolds

Let M be an orientable piecewise smooth n-dimensional manifold, and let ω be a
compactly supported differential form on M of degree n − 1 and class C¹. If ∂M
denotes the boundary of M, equipped with the orientation induced by that of M, then

∫_M dω = ∫_{∂M} ω.

Here dω is the exterior derivative of ω, which is defined using only the structure
of the manifold. This formula is called the generalized Stokes formula; it is
regarded as the common generalization of the fundamental theorem of calculus,
Green's formula, the Gauss–Ostrogradsky formula, and Stokes' formula on ℝ³, all of
which are in fact simple corollaries of it.

The theorem is frequently applied in the situation where M is a submanifold embedded
in some larger manifold on which ω is defined.

The theorem extends readily to linear combinations of piecewise smooth submanifolds,
so-called chains. Stokes' theorem then shows that closed forms which differ by an
exact form have the same integral over chains which differ by a boundary. This is
the basis for the pairing between homology groups and de Rham cohomology.

Applications

Stokes' formula is a generalization of Green's formula, and it can be used to
evaluate line integrals.
The general Stokes theorem is one of the most beautiful and most useful
theorems in mathematics and physics.
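As a concrete sanity check (my own example, not part of the quoted article), the Kelvin–Stokes theorem can be verified numerically for the vector field F = (−y, x, 0) on the unit disk in the xy-plane:

```python
import math

# Kelvin-Stokes check for F = (-y, x, 0) on the unit disk S with boundary
# circle Γ: curl F = (0, 0, 2), so the flux of curl F through S equals
# 2 * area(S) = 2*pi, and this must match the line integral of F along Γ.
N = 100_000
h = 2 * math.pi / N
line_integral = 0.0
for i in range(N):
    t = i * h
    x, y = math.cos(t), math.sin(t)          # boundary curve r(t)
    dx, dy = -math.sin(t), math.cos(t)       # tangent r'(t)
    line_integral += (-y * dx + x * dy) * h  # F · r'(t) dt

flux = 2 * math.pi                           # exact flux of curl F through S
print(abs(line_integral - flux) < 1e-6)      # True
```

Here the line integrand works out to be identically 1, so both sides equal 2π exactly; for a less symmetric field the quadrature error would be the only discrepancy.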
Re: Quantum Field Theory I
Sobolev space
From Wikipedia, the free encyclopedia
In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of L^p-norms of the function itself as well as its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, thus a Banach space. Intuitively, a Sobolev space is a space of functions with sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
Sobolev spaces are named after the Russian mathematician Sergei Sobolev. Their importance comes from the fact that solutions of partial differential equations are naturally found in Sobolev spaces, rather than in spaces of continuous functions and with the derivatives understood in the classical sense.
Motivation
There are many criteria for smoothness of mathematical functions. The most basic criterion may be that of continuity. A stronger notion of smoothness is that of differentiability (because functions that are differentiable are also continuous) and a yet stronger notion of smoothness is that the derivative also be continuous (these functions are said to be of class C1 — see Differentiability class). Differentiable functions are important in many areas, and in particular for differential equations. In the twentieth century, however, it was observed that the space C1 (or C2, etc.) was not exactly the right space to study solutions of differential equations. The Sobolev spaces are the modern replacement for these spaces in which to look for solutions of partial differential equations.
Quantities or properties of the underlying model of the differential equation are usually expressed in terms of integral norms, rather than the uniform norm. A typical example is measuring the energy of a temperature or velocity distribution by an L2-norm. It is therefore important to develop a tool for differentiating Lebesgue space functions.
The integration by parts formula yields that for every u ∈ C^k(Ω), where k is a natural number, and for all infinitely differentiable functions with compact support φ ∈ C_c^∞(Ω),

∫_Ω u D^α φ dx = (−1)^{|α|} ∫_Ω φ D^α u dx,

where α is a multi-index of order |α| = k and Ω is an open subset in ℝⁿ. Here, the notation

D^α f = ∂^{|α|} f / (∂x₁^{α₁} · · · ∂xₙ^{αₙ})

is used.
The left-hand side of this equation still makes sense if we only assume u to be locally integrable. If there exists a locally integrable function v such that

∫_Ω u D^α φ dx = (−1)^{|α|} ∫_Ω v φ dx for all φ ∈ C_c^∞(Ω),

we call v the weak α-th partial derivative of u. If there exists a weak α-th partial derivative of u, then it is uniquely defined almost everywhere. On the other hand, if u ∈ C^k(Ω), then the classical and the weak derivative coincide. Thus, if v is a weak α-th partial derivative of u, we may denote it by D^α u := v.
For example, the function

u(x) = 1 + x for x ∈ (−1, 0), u(x) = 10 for x = 0, u(x) = 1 − x for x ∈ (0, 1), u(x) = 0 otherwise

is not continuous at zero, and not differentiable at −1, 0, or 1. Yet the function

v(x) = 1 for x ∈ (−1, 0), v(x) = −1 for x ∈ (0, 1), v(x) = 0 otherwise

satisfies the definition of being the weak derivative of u(x), which then qualifies u as being in the Sobolev space W^{1,p} (for any allowed p, see the definition below).
The Sobolev spaces Wk,p(Ω) combine the concepts of weak differentiability and Lebesgue norms.
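The defining integration-by-parts identity of a weak derivative can be checked numerically. The sketch below uses a piecewise linear u, its candidate weak derivative v, and a smooth bump test function φ; all three choices are mine, made for illustration.

```python
import numpy as np

# Check the weak-derivative identity  ∫ u φ' dx = -∫ v φ dx  by the
# midpoint rule, for a smooth φ compactly supported in (-2, 2).
def u(x):
    return np.where((x > -1) & (x < 0), 1 + x,
           np.where((x > 0) & (x < 1), 1 - x, 0.0))

def v(x):  # candidate weak derivative of u
    return np.where((x > -1) & (x < 0), 1.0,
           np.where((x > 0) & (x < 1), -1.0, 0.0))

def phi(x):   # smooth test function, compactly supported in (-2, 2)
    return np.exp(-1.0 / (4 - x**2))

def dphi(x):  # its classical derivative
    return phi(x) * (-2 * x) / (4 - x**2) ** 2

N = 400_000
h = 4.0 / N
x = -2 + (np.arange(N) + 0.5) * h   # midpoints of [-2, 2]
lhs = np.sum(u(x) * dphi(x)) * h
rhs = -np.sum(v(x) * phi(x)) * h
print(abs(lhs - rhs) < 1e-6)        # True: v acts as the derivative of u
```

The identity holds for every admissible test function; a single φ is of course only evidence, not a proof, but it illustrates what the definition demands.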
Sobolev spaces with integer k
One-dimensional case
In the one-dimensional case (functions on R) the Sobolev space W^{k,p} is defined to be the subset of functions f in L^p(R) such that the function f and its weak derivatives up to some order k have a finite L^p norm, for given p (1 ≤ p ≤ +∞). As mentioned above, some care must be taken to define derivatives in the proper sense. In the one-dimensional problem it is enough to assume that f^{(k−1)}, the (k − 1)-th derivative of the function f, is differentiable almost everywhere and is equal almost everywhere to the Lebesgue integral of its derivative (this gets rid of examples, such as Cantor's function, which are irrelevant to what the definition is trying to accomplish).
With this definition, the Sobolev spaces admit a natural norm,

‖f‖_{k,p} = ( Σ_{i=0}^{k} ‖f^{(i)}‖_p^p )^{1/p} for p < +∞, and ‖f‖_{k,∞} = max_{0 ≤ i ≤ k} ‖f^{(i)}‖_∞.

Equipped with the norm ‖·‖_{k,p}, W^{k,p} becomes a Banach space. It turns out that it is enough to take only the first and the last term in the sum, i.e., the norm defined by

‖f^{(k)}‖_p + ‖f‖_p

is equivalent to the norm above (see Normed vector space#Topological structure).
The case p = 2
Sobolev spaces with p = 2 (at least on a one-dimensional finite interval) are especially important because of their connection with Fourier series and because they form a Hilbert space. A special notation has arisen to cover this case, since the space is a Hilbert space:
H^k = W^{k,2}.
The space H^k can be defined naturally in terms of Fourier series whose coefficients decay sufficiently rapidly, namely,

H^k = { f ∈ L² : Σ_{n=−∞}^{∞} (1 + n² + n⁴ + · · · + n^{2k}) |f̂(n)|² < ∞ },

where f̂ is the Fourier series of f. As above, one can use the equivalent norm

‖f‖² = Σ_{n=−∞}^{∞} (1 + |n|²)^k |f̂(n)|².

Both representations follow easily from Parseval's theorem and the fact that differentiation is equivalent to multiplying the Fourier coefficient by in.
Furthermore, the space H^k admits an inner product, like the space H⁰ = L². In fact, the H^k inner product is defined in terms of the L² inner product:

⟨u, v⟩_{H^k} = Σ_{i=0}^{k} ⟨D^i u, D^i v⟩_{L²}.

The space H^k becomes a Hilbert space with this inner product.
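The Fourier description of H^k is easy to check with the FFT. In the sketch below (the normalization convention f = Σ cₙ e^{inx} with ‖f‖²_{L²} = 2π Σ |cₙ|² is my choice), the H¹ norm of sin x computed from Fourier coefficients is compared with the direct value ‖sin‖² + ‖cos‖² = π + π:

```python
import numpy as np

# On the circle, differentiation multiplies the n-th Fourier coefficient
# by i*n, so ||f||_{H^1}^2 = 2*pi * sum_n (1 + n^2)|c_n|^2.  We check
# this against ||f||_{L^2}^2 + ||f'||_{L^2}^2 for f(x) = sin x.
N = 256
x = 2 * np.pi * np.arange(N) / N
f = np.sin(x)

c = np.fft.fft(f) / N                 # Fourier coefficients c_n
n = np.fft.fftfreq(N, d=1.0 / N)      # integer frequencies n
h1_fourier = 2 * np.pi * np.sum((1 + n**2) * np.abs(c) ** 2)

h1_direct = np.pi + np.pi             # ||sin||^2 = ||cos||^2 = pi
print(abs(h1_fourier - h1_direct) < 1e-10)   # True
```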
Other examples
Some other Sobolev spaces permit a simpler description. For example, W^{1,1}(0, 1) is the space of absolutely continuous functions on (0, 1) (or rather, functions that are equal almost everywhere to such), while W^{1,∞}(I) is the space of Lipschitz functions on I, for every interval I. All spaces W^{k,∞} are (normed) algebras, i.e. the product of two elements is once again a function of this Sobolev space, which is not the case for p < +∞. (E.g., functions behaving like |x|^{−1/3} at the origin are in L², but the product of two such functions is not in L².)
Multidimensional case
The transition to multiple dimensions brings more difficulties, starting from the very definition. The requirement that f^{(k−1)} be the integral of f^{(k)} does not generalize, and the simplest solution is to consider derivatives in the sense of distribution theory.
A formal definition now follows. Let Ω be an open set in ℝⁿ, let k be a natural number and let 1 ≤ p ≤ +∞. The Sobolev space W^{k,p}(Ω) is defined to be the set of all functions f defined on Ω such that for every multi-index α with |α| ≤ k, the mixed partial derivative

D^α f = ∂^{|α|} f / (∂x₁^{α₁} · · · ∂xₙ^{αₙ})

is both locally integrable and in L^p(Ω), i.e.

‖D^α f‖_{L^p(Ω)} < ∞.

That is, the Sobolev space W^{k,p}(Ω) is defined as

W^{k,p}(Ω) = { f ∈ L^p(Ω) : D^α f ∈ L^p(Ω) for all |α| ≤ k }.

The natural number k is called the order of the Sobolev space W^{k,p}(Ω).
There are several choices for a norm for W^{k,p}(Ω). The following two are common and are equivalent in the sense of equivalence of norms:

‖f‖_{W^{k,p}(Ω)} = ( Σ_{|α| ≤ k} ‖D^α f‖_{L^p(Ω)}^p )^{1/p} (with the maximum over |α| ≤ k for p = +∞)

and

‖f‖'_{W^{k,p}(Ω)} = Σ_{|α| ≤ k} ‖D^α f‖_{L^p(Ω)}.

With respect to either of these norms, W^{k,p}(Ω) is a Banach space. For p < +∞, W^{k,p}(Ω) is also a separable space. It is conventional to denote W^{k,2}(Ω) by H^k(Ω), for it is a Hilbert space with the norm ‖·‖_{W^{k,2}(Ω)}.[1]
Approximation by smooth functions
Many of the properties of the Sobolev spaces cannot be seen directly from the definition. It is therefore interesting to investigate under which conditions a function u ∈ W^{k,p}(Ω) can be approximated by smooth functions. If p is finite and Ω is bounded with Lipschitz boundary, then for any u ∈ W^{k,p}(Ω) there exists an approximating sequence of functions u_m ∈ C^∞(Ω̄), smooth up to the boundary, such that[2]

‖u_m − u‖_{W^{k,p}(Ω)} → 0.
Examples
In higher dimensions, it is no longer true that, for example, W^{1,1} contains only continuous functions. For example, 1/|x| belongs to W^{1,1}(B³), where B³ is the unit ball in three dimensions. For k > n/p the space W^{k,p}(Ω) will contain only continuous functions, but for which k this is already true depends both on p and on the dimension. For example, as can easily be checked using spherical polar coordinates, the function f : Bⁿ → R ∪ {+∞} defined on the n-dimensional ball by f(x) = |x|^{−α} lies in W^{k,p}(Bⁿ) if and only if α < n/p − k.
Intuitively, the blow-up of f at 0 "counts for less" when n is large, since the unit ball has "more outside and less inside" in higher dimensions.
Absolutely Continuous on Lines (ACL) characterization of Sobolev functions
Let Ω be an open set in ℝⁿ and 1 ≤ p ≤ +∞. If a function is in W^{1,p}(Ω), then, possibly after modifying the function on a set of measure zero, the restriction to almost every line parallel to the coordinate directions in ℝⁿ is absolutely continuous; what's more, the classical derivatives along the lines that are parallel to the coordinate directions are in L^p(Ω). Conversely, if the restriction of f to almost every line parallel to the coordinate directions is absolutely continuous, then the pointwise gradient ∇f exists almost everywhere, and f is in W^{1,p}(Ω) provided f and |∇f| are both in L^p(Ω). In particular, in this case the weak partial derivatives of f and the pointwise partial derivatives of f agree almost everywhere. The ACL characterization of the Sobolev spaces was established by Otto M. Nikodym (1933); see (Maz'ya 1985, §1.1.3).
A stronger result holds in the case p > n. A function in W^{1,p}(Ω) is, after modifying on a set of measure zero, Hölder continuous of exponent γ = 1 − n/p, by Morrey's inequality. In particular, if p = +∞, then the function is Lipschitz continuous.
Functions vanishing at the boundary
Let Ω be an open set in ℝⁿ. The Sobolev space W^{1,2}(Ω) is also denoted by H¹(Ω). It is a Hilbert space, with an important subspace H¹₀(Ω) defined to be the closure in H¹(Ω) of the infinitely differentiable functions compactly supported in Ω. The Sobolev norm defined above reduces here to

‖f‖_{H¹} = ( ∫_Ω (|f|² + |∇f|²) dx )^{1/2}.

When Ω has a regular boundary, H¹₀(Ω) can be described as the space of functions in H¹(Ω) that vanish at the boundary, in the sense of traces (see below). When n = 1, if Ω = (a, b) is a bounded interval, then H¹₀(a, b) consists of continuous functions on [a, b] of the form

f(x) = ∫_a^x f′(t) dt, x ∈ [a, b],

where the generalized derivative f′ is in L²(a, b) and has 0 integral, so that f(b) = f(a) = 0.
When Ω is bounded, the Poincaré inequality states that there is a constant C = C(Ω) such that

∫_Ω |f|² dx ≤ C ∫_Ω |∇f|² dx for all f ∈ H¹₀(Ω).

When Ω is bounded, the injection from H¹₀(Ω) to L²(Ω) is compact. This fact plays a role in the study of the Dirichlet problem, and in the fact that there exists an orthonormal basis of L²(Ω) consisting of eigenvectors of the Laplace operator (with Dirichlet boundary condition).
Sobolev spaces with non-integer k
Bessel potential spaces
For a natural number k and 1 < p < ∞ one can show (by using Fourier multipliers[3][4]) that the space W^{k,p}(ℝⁿ) can equivalently be defined as

W^{k,p}(ℝⁿ) = H^{k,p}(ℝⁿ) := { f ∈ L^p(ℝⁿ) : F^{−1}[(1 + |ξ|²)^{k/2} Ff] ∈ L^p(ℝⁿ) },

with the norm

‖f‖_{H^{k,p}(ℝⁿ)} := ‖F^{−1}[(1 + |ξ|²)^{k/2} Ff]‖_{L^p(ℝⁿ)}.

This motivates Sobolev spaces with non-integer order, since in the above definition we can replace k by any real number s. The resulting spaces

H^{s,p}(ℝⁿ) := { f ∈ L^p(ℝⁿ) : F^{−1}[(1 + |ξ|²)^{s/2} Ff] ∈ L^p(ℝⁿ) }

are called Bessel potential spaces[5] (named after Friedrich Bessel). They are Banach spaces in general and Hilbert spaces in the special case p = 2.
For an open set Ω ⊆ ℝⁿ, H^{s,p}(Ω) is the set of restrictions of functions from H^{s,p}(ℝⁿ) to Ω, equipped with the norm

‖f‖_{H^{s,p}(Ω)} := inf { ‖g‖_{H^{s,p}(ℝⁿ)} : g ∈ H^{s,p}(ℝⁿ), g|_Ω = f }.

Again, H^{s,p}(Ω) is a Banach space and in the case p = 2 a Hilbert space.
Using extension theorems for Sobolev spaces, it can be shown that also W^{k,p}(Ω) = H^{k,p}(Ω) holds in the sense of equivalent norms, if Ω is a domain with uniform C^k-boundary, k a natural number and 1 < p < ∞. By the embeddings

H^{k+1,p}(ℝⁿ) ↪ H^{s′,p}(ℝⁿ) ↪ H^{s,p}(ℝⁿ) ↪ H^{k,p}(ℝⁿ), k ≤ s ≤ s′ ≤ k + 1,

the Bessel potential spaces H^{s,p}(ℝⁿ) form a continuous scale between the Sobolev spaces W^{k,p}(ℝⁿ). From an abstract point of view, the Bessel potential spaces occur as complex interpolation spaces of Sobolev spaces, i.e. in the sense of equivalent norms it holds that

[W^{k,p}(ℝⁿ), W^{k+1,p}(ℝⁿ)]_θ = H^{s,p}(ℝⁿ),

where 1 ≤ p ≤ ∞, 0 < θ < 1, and s = (1 − θ)k + θ(k + 1) = k + θ.
Sobolev–Slobodeckij spaces
Another approach to define fractional order Sobolev spaces arises from the idea to generalize the Hölder condition to the L^p-setting.[6] For an open subset Ω of ℝⁿ, 1 ≤ p < ∞, θ ∈ (0, 1) and f ∈ L^p(Ω), the Slobodeckij seminorm (roughly analogous to the Hölder seminorm) is defined by

[f]_{θ,p,Ω} := ( ∫_Ω ∫_Ω |f(x) − f(y)|^p / |x − y|^{θp + n} dx dy )^{1/p}.

Let s > 0 be not an integer and set θ = s − ⌊s⌋ ∈ (0, 1). Using the same idea as for the Hölder spaces, the Sobolev–Slobodeckij space[7] W^{s,p}(Ω) is defined as

W^{s,p}(Ω) := { f ∈ W^{⌊s⌋,p}(Ω) : [D^α f]_{θ,p,Ω} < ∞ for all |α| = ⌊s⌋ }.

It is a Banach space for the norm

‖f‖_{W^{s,p}(Ω)} := ‖f‖_{W^{⌊s⌋,p}(Ω)} + Σ_{|α| = ⌊s⌋} [D^α f]_{θ,p,Ω}.

If the open subset Ω is suitably regular in the sense that there exist certain extension operators, then the Sobolev–Slobodeckij spaces also form a scale of Banach spaces, i.e. one has the continuous injections or embeddings

W^{k+1,p}(Ω) ↪ W^{s′,p}(Ω) ↪ W^{s,p}(Ω) ↪ W^{k,p}(Ω), k ≤ s ≤ s′ ≤ k + 1.

There are examples of irregular Ω such that W^{1,p}(Ω) is not even a vector subspace of W^{s,p}(Ω) for 0 < s < 1.
From an abstract point of view, the spaces W^{s,p}(Ω) coincide with the real interpolation spaces of Sobolev spaces, i.e. in the sense of equivalent norms the following holds:

W^{s,p}(Ω) = (W^{k,p}(Ω), W^{k+1,p}(Ω))_{θ,p}, where k is a natural number and s = k + θ with θ ∈ (0, 1).

Sobolev–Slobodeckij spaces play an important role in the study of traces of Sobolev functions. They are special cases of Besov spaces.[4]
Traces
Sobolev spaces are often considered when investigating partial differential equations. It is essential to consider boundary values of Sobolev functions. If u ∈ C(Ω̄), those boundary values are described by the restriction u|_{∂Ω}. However, it is not clear how to describe values at the boundary for u ∈ W^{k,p}(Ω), as the n-dimensional measure of the boundary is zero. The following theorem[2] resolves the problem:

Trace theorem. Assume Ω is bounded with Lipschitz boundary. Then there exists a bounded linear operator T : W^{1,p}(Ω) → L^p(∂Ω) such that Tu = u|_{∂Ω} for all u ∈ W^{1,p}(Ω) ∩ C(Ω̄) and ‖Tu‖_{L^p(∂Ω)} ≤ c ‖u‖_{W^{1,p}(Ω)}.

Tu is called the trace of u. Roughly speaking, this theorem extends the restriction operator to the Sobolev space W^{1,p}(Ω) for well-behaved Ω. Note that the trace operator T is in general not surjective, but for 1 < p < ∞ it maps onto the Sobolev–Slobodeckij space W^{1−1/p,p}(∂Ω).
Intuitively, taking the trace costs 1/p of a derivative. The functions u in W^{1,p}(Ω) with zero trace, i.e. Tu = 0, can be characterized by the equality

W₀^{1,p}(Ω) = { u ∈ W^{1,p}(Ω) : Tu = 0 },

where W₀^{1,p}(Ω) is the closure in W^{1,p}(Ω) of the infinitely differentiable functions compactly supported in Ω.
In other words, for Ω bounded with Lipschitz boundary, trace-zero functions in W^{1,p}(Ω) can be approximated by smooth functions with compact support.
Extension operators
If X is an open domain whose boundary is not too poorly behaved (e.g., if its boundary is a manifold, or satisfies the more permissive "cone condition"), then there is an operator A mapping functions of X to functions of ℝⁿ such that:

1. Au(x) = u(x) for almost every x in X, and
2. A is continuous from W^{k,p}(X) into W^{k,p}(ℝⁿ), for any 1 ≤ p ≤ ∞ and integer k.

We will call such an operator A an extension operator for X.
Case of p = 2
Extension operators are the most natural way to define H^s(X) for non-integer s (we cannot work directly on X, since taking the Fourier transform is a global operation). We define H^s(X) by saying that u is in H^s(X) if and only if Au is in H^s(ℝⁿ). Equivalently, complex interpolation yields the same H^s(X) spaces so long as X has an extension operator. If X does not have an extension operator, complex interpolation is the only way to obtain the H^s(X) spaces.
As a result, the interpolation inequality still holds.
Extension by zero
As in the section on functions vanishing at the boundary, we define H^s₀(X) to be the closure in H^s(X) of the space of infinitely differentiable compactly supported functions. Given the definition of a trace above, we may state the following

Theorem. Let X be uniformly C^m regular, m ≥ s, and let P be the linear map sending u in H^s(X) to

Pu = ( u|_G, (du/dn)|_G, . . . , (d^k u/dn^k)|_G ),

where d/dn is the derivative normal to the boundary G of X, and k is the largest integer less than s. Then H^s₀(X) is precisely the kernel of P.
If u ∈ H^s₀(X), we may define its extension by zero ũ in the natural way, namely ũ = u on X and ũ = 0 otherwise.

Theorem. Let s > ½. The map taking u to ũ is continuous from H^s(X) into H^s(ℝⁿ) if and only if s is not of the form n + ½ for n an integer.
For a function f ∈ L^p(Ω) on an open subset Ω of ℝⁿ, its extension by zero,

Ef := f on Ω, Ef := 0 otherwise,

is an element of L^p(ℝⁿ). Furthermore,

‖Ef‖_{L^p(ℝⁿ)} = ‖f‖_{L^p(Ω)}.

In the case of the Sobolev space W^{1,p}(Ω) for 1 ≤ p ≤ ∞, extending a function u by zero will not necessarily yield an element of W^{1,p}(ℝⁿ). But if Ω is bounded with Lipschitz boundary (e.g. ∂Ω is C¹), then for any bounded open set O such that Ω ⊂⊂ O (i.e. Ω is compactly contained in O), there exists a bounded linear operator[2]

E : W^{1,p}(Ω) → W^{1,p}(ℝⁿ),

such that for each u ∈ W^{1,p}(Ω): Eu = u a.e. on Ω, Eu has compact support within O, and there exists a constant C, depending only on p, Ω, O and the dimension n, such that

‖Eu‖_{W^{1,p}(ℝⁿ)} ≤ C ‖u‖_{W^{1,p}(Ω)}.

We call Eu an extension of u to ℝⁿ.
Sobolev embeddings
Main article: Sobolev inequality
It is a natural question to ask if a Sobolev function is continuous or even continuously differentiable. Roughly speaking, sufficiently many weak derivatives or large p result in a classical derivative. This idea is generalized and made precise in the Sobolev embedding theorem.
Write W^{k,p} for the Sobolev space of some compact Riemannian manifold of dimension n. Here k can be any real number, and 1 ≤ p ≤ ∞. (For p = ∞ the Sobolev space W^{k,∞} is defined to be the Hölder space C^{n,α} where k = n + α and 0 < α ≤ 1.) The Sobolev embedding theorem states that if k ≥ m and k − n/p ≥ m − n/q, then

W^{k,p} ⊆ W^{m,q}

and the embedding is continuous. Moreover, if k > m and k − n/p > m − n/q, then the embedding is completely continuous (this is sometimes called Kondrachov's theorem or the Rellich–Kondrachov theorem). Functions in W^{m,∞} have all derivatives of order less than m continuous, so in particular this gives conditions on Sobolev spaces for various derivatives to be continuous. Informally, these embeddings say that to convert an L^p estimate to a boundedness estimate costs 1/p derivatives per dimension.
There are similar variations of the embedding theorem for non-compact manifolds such as ℝⁿ (Stein 1970).
Notes[edit]
[/ltr][/size]
[size][ltr]
References[edit]
[/ltr][/size]
[size][ltr]
External links[edit]
[/ltr][/size]
From Wikipedia, the free encyclopedia
In mathematics, a Sobolev space is a vector space of functions equipped with a norm that is a combination of Lp-norms of the function itself as well as its derivatives up to a given order. The derivatives are understood in a suitable weak sense to make the space complete, thus a Banach space. Intuitively, a Sobolev space is a space of functions with sufficiently many derivatives for some application domain, such as partial differential equations, and equipped with a norm that measures both the size and regularity of a function.
Sobolev spaces are named after the Russian mathematician Sergei Sobolev. Their importance comes from the fact that solutions of partial differential equations are naturally found in Sobolev spaces, rather than in spaces of continuous functions and with the derivatives understood in the classical sense.
- 1 Motivation
- 2 Sobolev spaces with integer k
- 2.1 One-dimensional case
- 2.1.1 The case p = 2
- 2.1.2 Other examples
- 2.2 Multidimensional case
- 2.2.1 Approximation by smooth functions
- 2.2.2 Examples
- 2.2.3 Absolutely Continuous on Lines (ACL) characterization of Sobolev functions
- 2.2.4 Functions vanishing at the boundary
- 3 Sobolev spaces with non-integer k
- 3.1 Bessel potential spaces
- 3.2 Sobolev–Slobodeckij spaces
- 4 Traces
- 5 Extension operators
- 5.1 Case of p=2
- 5.2 Extension by zero
- 6 Sobolev embeddings
- 7 Notes
- 8 References
- 9 External links
Motivation
There are many criteria for smoothness of mathematical functions. The most basic criterion may be that of continuity. A stronger notion of smoothness is that of differentiability (because functions that are differentiable are also continuous) and a yet stronger notion of smoothness is that the derivative also be continuous (these functions are said to be of class C1 — see Differentiability class). Differentiable functions are important in many areas, and in particular for differential equations. In the twentieth century, however, it was observed that the space C1 (or C2, etc.) was not exactly the right space to study solutions of differential equations. The Sobolev spaces are the modern replacement for these spaces in which to look for solutions of partial differential equations.
Quantities or properties of the underlying model of the differential equation are usually expressed in terms of integral norms, rather than the uniform norm. A typical example is measuring the energy of a temperature or velocity distribution by an L2-norm. It is therefore important to develop a tool for differentiating Lebesgue space functions.
The integration by parts formula yields that for every u ∈ C^k(Ω), where k is a natural number, and for all infinitely differentiable functions with compact support φ ∈ C_c^∞(Ω),

∫_Ω u D^α φ dx = (−1)^|α| ∫_Ω (D^α u) φ dx,

where α is a multi-index of order |α| = k and Ω is an open subset in ℝ^n. Here, the notation

D^α f = ∂^|α| f / (∂x_1^{α_1} … ∂x_n^{α_n})

is used.
The left-hand side of this equation still makes sense if we only assume u to be locally integrable. If there exists a locally integrable function v such that

∫_Ω u D^α φ dx = (−1)^|α| ∫_Ω v φ dx  for all φ ∈ C_c^∞(Ω),

we call v the weak α-th partial derivative of u. If there exists a weak α-th partial derivative of u, then it is uniquely defined almost everywhere. On the other hand, if u ∈ C^k(Ω), then the classical and the weak derivative coincide. Thus, if v is a weak α-th partial derivative of u, we may denote it by D^α u := v.
For example, the function

u(x) = 1 + x for −1 < x < 0;  10 for x = 0;  1 − x for 0 < x < 1;  0 otherwise

is not continuous at zero, and not differentiable at −1, 0, or 1. Yet the function

v(x) = 1 for −1 < x < 0;  −1 for 0 < x < 1;  0 otherwise

satisfies the definition of the weak derivative of u, which then qualifies u as being in the Sobolev space W^{1,p} (for any allowed p, see definition below).
The Sobolev spaces Wk,p(Ω) combine the concepts of weak differentiability and Lebesgue norms.
Sobolev spaces with integer k
One-dimensional case
In the one-dimensional case (functions on ℝ) the Sobolev space W^{k,p} is defined to be the subset of functions f in L^p(ℝ) such that the function f and its weak derivatives up to some order k have a finite L^p norm, for given p (1 ≤ p ≤ +∞). As mentioned above, some care must be taken to define derivatives in the proper sense. In the one-dimensional problem it is enough to assume that f^{(k−1)}, the (k − 1)-th derivative of the function f, is differentiable almost everywhere and is equal almost everywhere to the Lebesgue integral of its derivative (this rules out examples such as Cantor's function, which are irrelevant to what the definition is trying to accomplish).
With this definition, the Sobolev spaces admit a natural norm,

‖f‖_{k,p} = ( Σ_{i=0}^{k} ‖f^{(i)}‖_p^p )^{1/p}  for p < +∞,  and  ‖f‖_{k,∞} = max_{0 ≤ i ≤ k} ‖f^{(i)}‖_∞.

Equipped with the norm ‖·‖_{k,p}, W^{k,p} becomes a Banach space. It turns out that it is enough to take only the first and last term in the sum, i.e., the norm defined by

‖f^{(k)}‖_p + ‖f‖_p

is equivalent to the norm above (see Normed vector space § Topological structure).
The case p = 2
Sobolev spaces with p = 2 (at least on a one-dimensional finite interval) are especially important because of their connection with Fourier series and because they form a Hilbert space. A special notation has arisen to cover this case, since the space is a Hilbert space:

H^k = W^{k,2}.

The space H^k can be defined naturally in terms of Fourier series whose coefficients decay sufficiently rapidly, namely,

H^k(T) = { f ∈ L^2(T) : Σ_{n=−∞}^{+∞} (1 + n^2 + n^4 + … + n^{2k}) |f̂(n)|^2 < ∞ },

where f̂ is the Fourier series of f and T denotes the 1-torus. As above, one can use the equivalent norm

‖f‖^2 = Σ_{n=−∞}^{+∞} (1 + |n|^2)^k |f̂(n)|^2.
Both representations follow easily from Parseval's theorem and the fact that differentiation is equivalent to multiplying the Fourier coefficient by in.
Furthermore, the space H^k admits an inner product, like the space H^0 = L^2. In fact, the H^k inner product is defined in terms of the L^2 inner product:

⟨u, v⟩_{H^k} = Σ_{i=0}^{k} ⟨D^i u, D^i v⟩_{L^2}.
The space H k becomes a Hilbert space with this inner product.
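The equivalence of the derivative-side and Fourier-side expressions can be checked numerically. The following sketch (assuming 2π-periodic functions with the normalization (1/2π)∫|f|² = Σ|f̂(n)|², and an arbitrarily chosen trigonometric polynomial) computes the H¹ norm of f(x) = sin x + ½ cos 2x both ways:

```python
import numpy as np

N = 1 << 12                            # number of sample points
x = 2 * np.pi * np.arange(N) / N
f = np.sin(x) + 0.5 * np.cos(2 * x)
df = np.cos(x) - np.sin(2 * x)         # classical derivative of f

# derivative side: (1/2π) ∫ (|f|² + |f'|²) dx, exact for trig polynomials
deriv_side = np.mean(f**2) + np.mean(df**2)

# Fourier side: Σ (1 + n²) |f̂(n)|², coefficients from the DFT
c = np.fft.fft(f) / N
n = np.fft.fftfreq(N, d=1.0 / N)       # integer frequencies 0, ±1, ±2, ...
fourier_side = np.sum((1 + n**2) * np.abs(c)**2)

print(deriv_side, fourier_side)        # both equal 1.625 up to rounding
```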
Other examples
Some other Sobolev spaces permit a simpler description. For example, W^{1,1}(0, 1) is the space of absolutely continuous functions on (0, 1) (or rather, functions that are equal almost everywhere to such), while W^{1,∞}(I) is the space of Lipschitz functions on I, for every interval I. All spaces W^{k,∞} are (normed) algebras, i.e. the product of two elements is once again a function of this Sobolev space, which is not the case for p < +∞. (E.g., functions behaving like |x|^{−1/3} at the origin are in L^2, but the product of two such functions is not in L^2.)
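The parenthetical example can be made concrete: for g(x) = |x|^{−1/3} on (−1, 1), the truncated integrals of |g|² stabilize as the cutoff ε shrinks, while those of |g²|² = |x|^{−4/3} blow up. A sketch using the closed-form antiderivatives:

```python
def int_pow(exponent, eps):
    # closed form of ∫_eps^1 x^exponent dx for exponent != -1
    a = exponent + 1
    return (1 - eps**a) / a

for eps in (1e-3, 1e-6, 1e-9):
    I_g2 = 2 * int_pow(-2.0 / 3.0, eps)   # ∫ |x|^(-2/3) over (-1,1) minus (-eps,eps)
    I_g4 = 2 * int_pow(-4.0 / 3.0, eps)   # ∫ |x|^(-4/3) over the same set
    print(eps, I_g2, I_g4)
# I_g2 → 6, so g ∈ L²(−1,1); I_g4 grows without bound, so g² ∉ L²(−1,1)
```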
Multidimensional case
The transition to multiple dimensions brings more difficulties, starting from the very definition. The requirement that f^{(k−1)} be the integral of f^{(k)} does not generalize, and the simplest solution is to consider derivatives in the sense of distribution theory.
A formal definition now follows. Let Ω be an open set in ℝ^n, let k be a natural number and let 1 ≤ p ≤ +∞. The Sobolev space W^{k,p}(Ω) is defined to be the set of all functions f defined on Ω such that for every multi-index α with |α| ≤ k, the mixed partial derivative

D^α f = ∂^|α| f / (∂x_1^{α_1} … ∂x_n^{α_n})

is both locally integrable and in L^p(Ω), i.e.

‖D^α f‖_{L^p(Ω)} < ∞.

That is, the Sobolev space W^{k,p}(Ω) is defined as

W^{k,p}(Ω) = { f ∈ L^p(Ω) : D^α f ∈ L^p(Ω) for all |α| ≤ k }.

The natural number k is called the order of the Sobolev space W^{k,p}(Ω).
There are several choices for a norm for W^{k,p}(Ω). The following two are common and are equivalent in the sense of equivalence of norms:

‖f‖_{k,p} = ( Σ_{|α| ≤ k} ‖D^α f‖_p^p )^{1/p} for p < +∞ (with the maximum over |α| ≤ k of ‖D^α f‖_∞ for p = +∞),

and

‖f‖′_{k,p} = Σ_{|α| ≤ k} ‖D^α f‖_p (with the maximum for p = +∞).
With respect to either of these norms, W^{k,p}(Ω) is a Banach space. For p < +∞, W^{k,p}(Ω) is also a separable space. It is conventional to denote W^{k,2}(Ω) by H^k(Ω), for it is a Hilbert space with the norm ‖·‖_{k,2}.[1]
Approximation by smooth functions
Many of the properties of the Sobolev spaces cannot be seen directly from the definition. It is therefore interesting to investigate under which conditions a function u ∈ W^{k,p}(Ω) can be approximated by smooth functions. If p is finite and Ω is bounded with Lipschitz boundary, then for any u ∈ W^{k,p}(Ω) there exists an approximating sequence of functions u_m ∈ C^∞(Ω̄), smooth up to the boundary, such that:[2]

‖u_m − u‖_{W^{k,p}(Ω)} → 0  as m → ∞.
Examples
In higher dimensions, it is no longer true that, for example, W^{1,1} contains only continuous functions. For example, 1/|x| belongs to W^{1,1}(B^3), where B^3 is the unit ball in three dimensions. For k > n/p the space W^{k,p}(Ω) will contain only continuous functions, but for which k this is already true depends both on p and on the dimension. For example, as can be easily checked using spherical polar coordinates, for the function f : B^n → ℝ ∪ {+∞} defined on the n-dimensional ball by f(x) = |x|^{−θ} with θ > 0, we have:

f ∈ W^{k,p}(B^n)  ⟺  k + θ < n/p.

Intuitively, the blow-up of f at 0 "counts for less" when n is large since the unit ball has "more outside and less inside" in higher dimensions.
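In spherical coordinates the membership test reduces to one-dimensional radial integrals: |x|^{−θ} has j-th derivatives decaying like |x|^{−θ−j}, and r^{−(θ+k)p} r^{n−1} is integrable near 0 exactly when (θ + k)p < n. A sketch encoding this standard criterion (the helper name is ours):

```python
def in_Wkp(theta, k, p, n):
    """|x|^(-theta) lies in W^{k,p}(B^n) iff k + theta < n/p (theta > 0).

    The j-th derivatives decay like |x|^(-theta-j); the worst case is j = k,
    and r^(-(theta+k)p) * r^(n-1) is integrable near 0 iff (theta + k)p < n.
    """
    return k + theta < n / p

# 1/|x| on the unit ball in R^3 is in W^{1,1} ...
assert in_Wkp(theta=1, k=1, p=1, n=3)
# ... but not in W^{1,2}(B^3):
assert not in_Wkp(theta=1, k=1, p=2, n=3)
# in higher dimension n = 5, the same blow-up "counts for less" and 1/|x| ∈ W^{1,2}:
assert in_Wkp(theta=1, k=1, p=2, n=5)
```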
Absolutely continuous on lines (ACL) characterization of Sobolev functions
Let Ω be an open set in ℝ^n and 1 ≤ p ≤ +∞. If a function is in W^{1,p}(Ω), then, possibly after modifying the function on a set of measure zero, the restriction to almost every line parallel to the coordinate directions in ℝ^n is absolutely continuous; what's more, the classical derivatives along the lines that are parallel to the coordinate directions are in L^p(Ω). Conversely, if the restriction of f to almost every line parallel to the coordinate directions is absolutely continuous, then the pointwise gradient ∇f exists almost everywhere, and f is in W^{1,p}(Ω) provided f and |∇f| are both in L^p(Ω). In particular, in this case the weak partial derivatives of f and the pointwise partial derivatives of f agree almost everywhere. The ACL characterization of the Sobolev spaces was established by Otto M. Nikodym (1933); see (Maz'ya 1985, §1.1.3).
A stronger result holds in the case p > n. A function in W^{1,p}(Ω) is, after modifying on a set of measure zero, Hölder continuous of exponent γ = 1 − n/p, by Morrey's inequality. In particular, if p = +∞, then the function is Lipschitz continuous.
Functions vanishing at the boundary
Let Ω be an open set in ℝ^n. The Sobolev space W^{1,2}(Ω) is also denoted by H^1(Ω). It is a Hilbert space, with an important subspace H^1_0(Ω) defined to be the closure in H^1(Ω) of the infinitely differentiable functions compactly supported in Ω. The Sobolev norm defined above reduces here to

‖f‖_{H^1} = ( ‖f‖^2_{L^2} + ‖∇f‖^2_{L^2} )^{1/2}.

When Ω has a regular boundary, H^1_0(Ω) can be described as the space of functions in H^1(Ω) that vanish at the boundary, in the sense of traces (see below). When n = 1, if Ω = (a, b) is a bounded interval, then H^1_0(a, b) consists of continuous functions on [a, b] of the form

f(x) = ∫_a^x f′(t) dt,  x ∈ [a, b],

where the generalized derivative f′ is in L^2(a, b) and has 0 integral, so that f(b) = f(a) = 0.
When Ω is bounded, the Poincaré inequality states that there is a constant C = C(Ω) such that

∫_Ω |f|^2 ≤ C^2 ∫_Ω |∇f|^2  for all f ∈ H^1_0(Ω).

When Ω is bounded, the injection from H^1_0(Ω) to L^2(Ω) is compact. This fact plays a role in the study of the Dirichlet problem, and in the fact that there exists an orthonormal basis of L^2(Ω) consisting of eigenvectors of the Laplace operator (with Dirichlet boundary condition).
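For Ω = (0, 1) the Poincaré inequality can be checked on the Dirichlet eigenfunctions u_k(x) = sin(kπx), for which ∫u² = 1/2 and ∫u′² = (kπ)²/2, so the ratio ‖u‖₂/‖u′‖₂ equals 1/(kπ). A sketch (assuming the known best constant C = 1/π on the unit interval, attained at k = 1):

```python
import math

# For u_k(x) = sin(kπx) on (0,1): ∫ u² = 1/2 and ∫ u'² = (kπ)²/2,
# so ||u||₂ / ||u'||₂ = 1/(kπ) ≤ 1/π, the best Poincaré constant on (0,1).
for k in range(1, 6):
    norm_u = math.sqrt(0.5)
    norm_du = k * math.pi * math.sqrt(0.5)
    ratio = norm_u / norm_du
    assert ratio <= 1 / math.pi + 1e-12
    print(k, ratio)
```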
Sobolev spaces with non-integer k
Bessel potential spaces
For a natural number k and 1 < p < ∞ one can show (by using Fourier multipliers[3][4]) that the space W^{k,p}(ℝ^n) can equivalently be defined as

W^{k,p}(ℝ^n) = { f ∈ L^p(ℝ^n) : F^{−1}[(1 + |ξ|^2)^{k/2} F f] ∈ L^p(ℝ^n) },

where F denotes the Fourier transform, with the norm

‖f‖_{k,p} := ‖F^{−1}[(1 + |ξ|^2)^{k/2} F f]‖_{L^p(ℝ^n)}.
This motivates Sobolev spaces with non-integer order, since in the above definition we can replace k by any real number s. The resulting spaces

H^{s,p}(ℝ^n) := { f ∈ L^p(ℝ^n) : F^{−1}[(1 + |ξ|^2)^{s/2} F f] ∈ L^p(ℝ^n) }

are called Bessel potential spaces[5] (named after Friedrich Bessel). They are Banach spaces in general and Hilbert spaces in the special case p = 2.
For an open set Ω ⊆ ℝ^n, H^{s,p}(Ω) is the set of restrictions of functions from H^{s,p}(ℝ^n) to Ω, equipped with the norm

‖f‖_{H^{s,p}(Ω)} := inf { ‖g‖_{H^{s,p}(ℝ^n)} : g ∈ H^{s,p}(ℝ^n), g|_Ω = f }.

Again, H^{s,p}(Ω) is a Banach space and in the case p = 2 a Hilbert space.
Using extension theorems for Sobolev spaces, it can be shown that W^{k,p}(Ω) = H^{k,p}(Ω) also holds in the sense of equivalent norms, if Ω is a domain with uniform C^k-boundary, k a natural number and 1 < p < ∞. By the embeddings

H^{k+1,p}(ℝ^n) ↪ H^{s′,p}(ℝ^n) ↪ H^{s,p}(ℝ^n) ↪ H^{k,p}(ℝ^n),  k ≤ s ≤ s′ ≤ k + 1,

the Bessel potential spaces H^{s,p}(ℝ^n) form a continuous scale between the Sobolev spaces W^{k,p}(ℝ^n). From an abstract point of view, the Bessel potential spaces occur as complex interpolation spaces of Sobolev spaces, i.e. in the sense of equivalent norms it holds that

H^{s,p}(ℝ^n) = [W^{k,p}(ℝ^n), W^{k+1,p}(ℝ^n)]_θ,  where k ≤ s ≤ k + 1 and θ = s − k.
Sobolev–Slobodeckij spaces
Another approach to defining fractional-order Sobolev spaces arises from the idea of generalizing the Hölder condition to the L^p-setting.[6] For an open subset Ω of ℝ^n, 1 ≤ p < ∞, θ ∈ (0, 1) and f ∈ L^p(Ω), the Slobodeckij seminorm (roughly analogous to the Hölder seminorm) is defined by

[f]_{θ,p,Ω} := ( ∫_Ω ∫_Ω |f(x) − f(y)|^p / |x − y|^{θp + n} dx dy )^{1/p}.
Let s > 0 be not an integer and set θ = s − ⌊s⌋ ∈ (0, 1). Using the same idea as for the Hölder spaces, the Sobolev–Slobodeckij space[7] W^{s,p}(Ω) is defined as

W^{s,p}(Ω) := { f ∈ W^{⌊s⌋,p}(Ω) : sup_{|α| = ⌊s⌋} [D^α f]_{θ,p,Ω} < ∞ }.

It is a Banach space for the norm

‖f‖_{W^{s,p}(Ω)} := ‖f‖_{W^{⌊s⌋,p}(Ω)} + sup_{|α| = ⌊s⌋} [D^α f]_{θ,p,Ω}.
If the open subset Ω is suitably regular in the sense that there exist certain extension operators, then the Sobolev–Slobodeckij spaces also form a scale of Banach spaces, i.e. one has the continuous injections or embeddings

W^{k+1,p}(Ω) ↪ W^{s′,p}(Ω) ↪ W^{s,p}(Ω) ↪ W^{k,p}(Ω),  k ≤ s ≤ s′ ≤ k + 1.
There are examples of irregular Ω such that W1,p(Ω) is not even a vector subspace of Ws,p(Ω) for 0 < s < 1.
From an abstract point of view, the spaces W^{s,p}(Ω) coincide with the real interpolation spaces of Sobolev spaces, i.e. in the sense of equivalent norms the following holds:

W^{s,p}(Ω) = (W^{k,p}(Ω), W^{k+1,p}(Ω))_{θ,p},  k ∈ ℕ, θ ∈ (0, 1), s = k + θ.
Sobolev–Slobodeckij spaces play an important role in the study of traces of Sobolev functions. They are special cases of Besov spaces.[4]
Traces
Sobolev spaces are often considered when investigating partial differential equations. It is essential to consider boundary values of Sobolev functions. If u ∈ C(Ω̄), those boundary values are described by the restriction u|_∂Ω. However, it is not clear how to describe values at the boundary for u ∈ W^{k,p}(Ω), as the n-dimensional measure of the boundary is zero. The following theorem[2] resolves the problem:
Trace Theorem. Assume Ω is bounded with Lipschitz boundary. Then there exists a bounded linear operator T : W^{1,p}(Ω) → L^p(∂Ω) such that

Tu = u|_∂Ω for all u ∈ W^{1,p}(Ω) ∩ C(Ω̄),  and  ‖Tu‖_{L^p(∂Ω)} ≤ c(p, Ω) ‖u‖_{W^{1,p}(Ω)}.

Tu is called the trace of u. Roughly speaking, this theorem extends the restriction operator to the Sobolev space W^{1,p}(Ω) for well-behaved Ω. Note that the trace operator T is in general not surjective, but for 1 < p < ∞ it maps onto the Sobolev–Slobodeckij space W^{1−1/p,p}(∂Ω).
Intuitively, taking the trace costs 1/p of a derivative. The functions u in W^{1,p}(Ω) with zero trace, i.e. Tu = 0, can be characterized by the equality

W^{1,p}_0(Ω) = { u ∈ W^{1,p}(Ω) : Tu = 0 },

where W^{1,p}_0(Ω) is the closure in W^{1,p}(Ω) of the infinitely differentiable functions with compact support in Ω.
In other words, for Ω bounded with Lipschitz boundary, trace-zero functions in W1,p(Ω) can be approximated by smooth functions with compact support.
Extension operators
If X is an open domain whose boundary is not too poorly behaved (e.g., if its boundary is a manifold, or satisfies the more permissive "cone condition") then there is an operator A mapping functions of X to functions of Rn such that:
- Au(x) = u(x) for almost every x in X and
- A is continuous from W^{k,p}(X) to W^{k,p}(ℝ^n), for any 1 ≤ p ≤ ∞ and integer k.
We will call such an operator A an extension operator for X.
Case of p = 2
Extension operators are the most natural way to define H^s(X) for non-integer s (we cannot work directly on X since taking the Fourier transform is a global operation). We define H^s(X) by saying that u is in H^s(X) if and only if Au is in H^s(ℝ^n). Equivalently, complex interpolation yields the same H^s(X) spaces so long as X has an extension operator. If X does not have an extension operator, complex interpolation is the only way to obtain the H^s(X) spaces.
As a result, the interpolation inequality still holds.
Extension by zero
As in the section on functions vanishing at the boundary, we define H^s_0(X) to be the closure in H^s(X) of the space C_c^∞(X) of infinitely differentiable compactly supported functions. Given the definition of a trace, above, we may state the following

Theorem. Let X be uniformly C^m regular, m ≥ s, and let P be the linear map sending u in H^s(X) to

(u, du/dn, …, d^k u/dn^k)|_G,

where d/dn is the derivative normal to G, the boundary of X, and k is the largest integer less than s. Then H^s_0(X) is precisely the kernel of P.

If u ∈ H^s_0(X), we may define its extension by zero ũ in the natural way, namely

ũ(x) = u(x) for x ∈ X, and 0 otherwise.

Theorem. Let s > ½. The map taking u to ũ is continuous into H^s(ℝ^n) if and only if s is not of the form n + ½ for n an integer.
For a function f ∈ L^p(Ω) on an open subset Ω of ℝ^n, its extension by zero,

Ef := f on Ω, and 0 otherwise,

is an element of L^p(ℝ^n). Furthermore,

‖Ef‖_{L^p(ℝ^n)} = ‖f‖_{L^p(Ω)}.
In the case of the Sobolev space W^{1,p}(Ω) for 1 ≤ p ≤ ∞, extending a function u by zero will not necessarily yield an element of W^{1,p}(ℝ^n). But if Ω is bounded with Lipschitz boundary (e.g. ∂Ω is C^1), then for any bounded open set O such that Ω ⊂⊂ O (i.e. Ω is compactly contained in O), there exists a bounded linear operator[2]

E : W^{1,p}(Ω) → W^{1,p}(ℝ^n)

such that for each u ∈ W^{1,p}(Ω): Eu = u a.e. on Ω, Eu has compact support within O, and there exists a constant C depending only on p, Ω, O and the dimension n, such that

‖Eu‖_{W^{1,p}(ℝ^n)} ≤ C ‖u‖_{W^{1,p}(Ω)}.
We call Eu an extension of u to ℝn.
Sobolev embeddings
Main article: Sobolev inequality
It is a natural question to ask if a Sobolev function is continuous or even continuously differentiable. Roughly speaking, sufficiently many weak derivatives or large p result in a classical derivative. This idea is generalized and made precise in the Sobolev embedding theorem.
Write W^{k,p} for the Sobolev space of some compact Riemannian manifold of dimension n. Here k can be any real number, and 1 ≤ p ≤ ∞. (For p = ∞ the Sobolev space W^{k,∞} is defined to be the Hölder space C^{j,α}, where k = j + α with j an integer and 0 < α ≤ 1.) The Sobolev embedding theorem states that if k ≥ m and k − n/p ≥ m − n/q then

W^{k,p} ⊆ W^{m,q}

and the embedding is continuous. Moreover, if k > m and k − n/p > m − n/q then the embedding is completely continuous (this is sometimes called Kondrachov's theorem or the Rellich–Kondrachov theorem). Functions in W^{m,∞} have all derivatives of order less than m continuous, so in particular this gives conditions on Sobolev spaces for various derivatives to be continuous. Informally these embeddings say that to convert an L^p estimate to a boundedness estimate costs 1/p derivatives per dimension.
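The arithmetic conditions of the embedding theorem are easy to encode and test on examples; a minimal sketch (the helper name is ours):

```python
def sobolev_embeds(k, p, m, q, n):
    """Continuous embedding W^{k,p} ⊆ W^{m,q} on a compact n-manifold
    holds when k ≥ m and k − n/p ≥ m − n/q."""
    return k >= m and k - n / p >= m - n / q

# On a compact surface (n = 2), W^{1,2} embeds into L^q = W^{0,q} for all finite q:
assert sobolev_embeds(1, 2, 0, 10, n=2)
assert sobolev_embeds(1, 2, 0, 100, n=2)
# In dimension n = 3, W^{1,2} ⊆ L^q only up to the critical exponent q = 6:
assert sobolev_embeds(1, 2, 0, 6, n=3)
assert not sobolev_embeds(1, 2, 0, 7, n=3)
```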
There are similar variations of the embedding theorem for non-compact manifolds such as Rn (Stein 1970).
Notes
1. Evans 1998, Chapter 5.2
2. Adams 1975
3. Bergh & Löfström 1976
4. Triebel 1995
5. Bessel potential spaces with variable integrability have been independently introduced by Almeida & Samko (A. Almeida and S. Samko, "Characterization of Riesz and Bessel potentials on variable Lebesgue spaces", J. Function Spaces Appl. 4 (2006), no. 2, 113–144) and Gurka, Harjulehto & Nekvinda (P. Gurka, P. Harjulehto and A. Nekvinda, "Bessel potential spaces with variable exponent", Math. Inequal. Appl. 10 (2007), no. 3, 661–676).
6. Lunardi 1995
7. In the literature, fractional Sobolev-type spaces are also called Aronszajn spaces, Gagliardo spaces or Slobodeckij spaces, after the names of the mathematicians who introduced them in the 1950s: N. Aronszajn ("Boundary values of functions with finite Dirichlet integral", Techn. Report of Univ. of Kansas 14 (1955), 77–94), E. Gagliardo ("Proprietà di alcune classi di funzioni in più variabili", Ricerche Mat. 7 (1958), 102–137), and L. N. Slobodeckij ("Generalized Sobolev spaces and their applications to boundary value problems of partial differential equations", Leningrad. Gos. Ped. Inst. Učep. Zap. 197 (1958), 54–112).
References
- Adams, Robert A. (1975), Sobolev Spaces, Boston, MA: Academic Press, ISBN 978-0-12-044150-1.
- Aubin, Thierry (1982), Nonlinear analysis on manifolds. Monge-Ampère equations, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences] 252, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90704-8, MR 681859.
- Bergh, Jöran; Löfström, Jörgen (1976), Interpolation Spaces, An Introduction, Grundlehren der Mathematischen Wissenschaften 223, Springer-Verlag, pp. X + 207, ISBN 978-7-5062-6011-4, MR 0482275, Zbl 0344.46071.
- Evans, L.C. (1998), Partial Differential Equations, AMS Chelsea.
- Maz'ja, Vladimir G. (1985), Sobolev Spaces, Springer Series in Soviet Mathematics, Berlin–Heidelberg–New York: Springer-Verlag, pp. xix+486, ISBN 0-387-13589-8, MR 817985, Zbl 0692.46023.
- Maz'ya, Vladimir G.; Poborchi, Sergei V. (1997), Differentiable Functions on Bad Domains, Singapore–New Jersey–London–Hong Kong: World Scientific, pp. xx+481, ISBN 981-02-2767-1, MR 1643072, Zbl 0918.46033.
- Maz'ya, Vladimir G. (2011) [1985], Sobolev Spaces. With Applications to Elliptic Partial Differential Equations., Grundlehren der Mathematischen Wissenschaften 342 (2nd revised and augmented ed.), Berlin–Heidelberg–New York: Springer Verlag, pp. xxviii+866, ISBN 978-3-642-15563-5, MR 2777530, Zbl 1217.46002.
- Lunardi, Alessandra (1995), Analytic semigroups and optimal regularity in parabolic problems, Basel: Birkhäuser Verlag.
- Nikodym, Otto (1933), "Sur une classe de fonctions considérée dans l'étude du problème de Dirichlet", Fund. Math. 21: 129–150.
- Nikol'skii, S.M. (2001), "Imbedding theorems", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
- Nikol'skii, S.M. (2001), "Sobolev space", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4.
- Sobolev, S.L. (1963), "On a theorem of functional analysis", Transl. Amer. Math. Soc. 34 (2): 39–68; translation of Mat. Sb., 4 (1938) pp. 471–497.
- Sobolev, S.L. (1963), Some applications of functional analysis in mathematical physics, Amer. Math. Soc..
- Stein, E (1970), Singular Integrals and Differentiability Properties of Functions, Princeton Univ. Press, ISBN 0-691-08079-8.
- Triebel, H. (1995), Interpolation Theory, Function Spaces, Differential Operators, Heidelberg: Johann Ambrosius Barth.
- Ziemer, William P. (1989), Weakly differentiable functions, Graduate Texts in Mathematics 120, Berlin, New York: Springer-Verlag, ISBN 978-0-387-97017-2, MR 1014685.
Introduction
Many problems of theoretical physics are frequently formulated in terms of ordinary
differential equations or partial differential equations. We can frequently convert
them into integral equations with boundary conditions or initial conditions built
in. We can formally develop the perturbation series by iterations. A good example
is the Born series for the potential scattering problem in quantum mechanics. In
some cases, the resulting equations are nonlinear integro-differential equations.
A good example is the Schwinger–Dyson equation in quantum field theory and
quantum statistical mechanics. It is a nonlinear integro-differential equation,
and it is exact and closed. It provides the starting point of Feynman–Dyson-type
perturbation theory in configuration space and in momentum space. In some
singular cases, the resulting equations are Wiener–Hopf integral equations. They
originate from research on the radiative equilibrium on the surface of a star. In
the two-dimensional Ising model and the analysis of the Yagi–Uda semi-infinite
arrays of antennas, among others, we have the Wiener–Hopf sum equations.
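The iterative (Born/Neumann series) solution mentioned above can be sketched for a Fredholm integral equation of the second kind, u(x) = f(x) + λ∫₀¹ K(x,y) u(y) dy. With the illustrative choices K(x,y) = xy, f ≡ 1 and λ = 1/2 (ours, not from the text), the exact solution is u(x) = 1 + 0.3x, which the iteration reproduces:

```python
import numpy as np

# Discretize ∫₀¹ K(x,y) u(y) dy with the trapezoidal rule.
N = 2001
x = np.linspace(0.0, 1.0, N)
w = np.full(N, x[1] - x[0])
w[0] = w[-1] = w[0] / 2            # trapezoid quadrature weights
K = np.outer(x, x)                 # separable kernel K(x, y) = x*y
f = np.ones(N)
lam = 0.5

# Neumann (Born) series: u_{m+1} = f + λ K u_m, starting from u_0 = f.
u = f.copy()
for _ in range(50):
    u = f + lam * (K * w) @ u

# Exact solution: u = 1 + c x with c = λ(1/2 + c/3), i.e. c = 0.3.
exact = 1 + 0.3 * x
print(np.max(np.abs(u - exact)))   # small: the series has converged
```

The iteration converges here because λ times the kernel norm is below 1; for larger couplings the series diverges, which is precisely the difficulty with strong interactions mentioned later in this introduction.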
The theory of integral equations is best illustrated with the notion of functionals
defined on some function space. If the functionals involved are quadratic in the
function, the integral equations are said to be linear integral equations, and if
they are higher than quadratic in the function, the integral equations are said
to be nonlinear integral equations. Depending on the form of the functionals,
the resulting integral equations are said to be of the first kind, of the second
kind, and of the third kind. If the kernels of the integral equations are square-integrable,
the integral equations are said to be nonsingular, and if the kernels
of the integral equations are not square-integrable, the integral equations are said
to be singular. Furthermore, depending on whether the end points of the kernel
are fixed constants or not, the integral equations are said to be of the Fredholm
type, Volterra type, Cauchy type, or Wiener–Hopf types, etc. Through discussion
of the variational derivative of the quadratic functional, we can also establish the
relationship between the theory of integral equations and the calculus of variations.
The integro-differential equations can be best formulated in this manner. Analogies
of the theory of integral equations with the system of linear algebraic equations are
also useful.
The integral equation of the Cauchy type has an interesting application to classical
electrodynamics, namely, dispersion relations. Dispersion relations were derived
by Kramers in 1927 and Kronig in 1926, for X-ray dispersion and optical dispersion,
respectively. Kramers–Kronig dispersion relations are of very general validity which
only depends on the assumption of the causality. The requirement of the causality
alone determines the region of analyticity of dielectric constants. In the mid-1950s,
these dispersion relations were also derived from quantum field theory and applied
to strong interaction physics. The application of the covariant perturbation theory
to strong interaction physics was hopeless due to the large coupling constant.
From the mid-1950s to the 1960s, the dispersion theoretic approach to strong interaction
physics was the only realistic approach that provided many sum rules. To cite a few,
we have the Goldberger–Treiman relation, the Goldberger–Miyazawa–Oehme
formula and the Adler–Weisberger sum rule. In dispersion theoretic approach to
strong interaction physics, experimentally observed data were directly used in the
sum rules. The situation changed dramatically in the early 1970s when quantum
chromodynamics, the relativistic quantum field theory of strong interaction physics,
was invented with the use of asymptotically free non-Abelian gauge field theory.
The region of analyticity of the scattering amplitude in the upper-half k-plane in
quantum field theory when expressed in terms of Fourier transform is immediate
since quantum field theory has the microscopic causality. But, the region of
analyticity of the scattering amplitude in the upper-half k-plane in quantum
mechanics when expressed in terms of Fourier transform is not immediate since
quantum mechanics does not have the microscopic causality. We shall invoke
the generalized triangular inequality to derive the region of analyticity of the
scattering amplitude in the upper-half k-plane in quantum mechanics. The region
of analyticity of the scattering amplitudes in the upper-half k-plane in quantum
mechanics and quantum field theory strongly depends on the fact that the scattering
amplitudes are expressed in terms of Fourier transform. When the other expansion
basis is chosen, like Fourier–Bessel series, the region of analyticity drastically
changes its domain.
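The causality structure behind the Kramers–Kronig relations can be illustrated numerically with a single-pole response χ(ω) = 1/(ω₀ − ω − iγ), which is analytic in the upper-half ω-plane as causality requires; the real part should then be recovered from the imaginary part through the principal-value integral Re χ(ω) = (1/π) P∫ Im χ(ω′)/(ω′ − ω) dω′. A sketch (the pole position, width and grids are arbitrary choices), using the symmetrized form of the PV integral:

```python
import numpy as np

omega0, gamma = 1.0, 0.5

def re_chi(w):  # real part of 1/(ω₀ − ω − iγ)
    return (omega0 - w) / ((omega0 - w) ** 2 + gamma**2)

def im_chi(w):  # imaginary part of the same response
    return gamma / ((omega0 - w) ** 2 + gamma**2)

# P∫ g(ω')/(ω'−ω) dω' = ∫₀^∞ [g(ω+t) − g(ω−t)]/t dt  (symmetrized PV form)
w = 0.3
t = np.linspace(1e-6, 300.0, 1_500_001)
integrand = (im_chi(w + t) - im_chi(w - t)) / t
dt = t[1] - t[0]
kk = (dt * (integrand[0] / 2 + integrand[1:-1].sum() + integrand[-1] / 2)) / np.pi

print(kk, re_chi(w))   # the dispersion integral reproduces the real part
```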
In the standard application of the calculus of variations to the variety of problems
of theoretical physics, we simply write the Euler equation and are rarely concerned
with the second variations, the Legendre test and the Jacobi test. Examination of
the second variations and the application of the Legendre test and the Jacobi test
become necessary in some cases of the application of the calculus of variations to
the problems of theoretical physics. In order to bring the development of theoretical
physics and the calculus of variations much closer, some historical comments are
in order here.
Euler formulated Newtonian mechanics by the variational principle, the Euler
equation. Lagrange started the whole field of calculus of variations. He also
introduced the notion of generalized coordinates into classical mechanics and
completely reduced the problem to that of differential equations, which are presently
known as Lagrange equations of motion, with the Lagrangian appropriately written
in terms of kinetic energy and potential energy. He successfully converted classical
mechanics into analytical mechanics with the variational principle. Legendre
constructed the transformation methods for thermodynamics which are presently
known as the Legendre transformations. Hamilton succeeded in transforming
the Lagrange equations of motion, which are of the second order, into a set
of first-order differential equations with twice as many variables. He did this
by introducing the canonical momenta which are conjugate to the generalized
coordinates. His equations are known as Hamilton’s canonical equations of motion.
He successfully formulated classical mechanics in terms of the principle of least
action. The variational principles formulated by Euler and Lagrange apply only to
the conservative system. Hamilton recognized that the principle of least action in
classical mechanics and Fermat’s principle of shortest time in geometrical optics
are strikingly analogous, permitting the interpretation of optical phenomena in
mechanical terms and vice versa. Jacobi quickly realized the importance of the
work of Hamilton. He noted that Hamilton was using just one particular set of
the variables to describe the mechanical system and formulated the canonical
transformation theory with the Legendre transformation. He duly arrived at what
is presently known as the Hamilton–Jacobi equation. He formulated his version
of the principle of least action for the time-independent case.
From what we discussed, we may be led to the conclusion that the calculus of
variations was a finished subject by the end of the 19th century. We shall note that,
from the 1940s to 1950s, we encountered the resurgence of the action principle
for the systemization of quantum field theory. The subject matters are Feynman’s
action principle and Schwinger’s action principle.
The path integral quantization procedure invented by Feynman in 1942 in the
Lagrangian formalism is justified by the Hamiltonian formalism of quantum theory
in the standard treatment. We can deduce the canonical formalism of quantum
theory from the path integral formalism. The path integral quantization procedure
originated from the old paper published by Dirac in 1934 on the Lagrangian
in quantum mechanics. This quantization procedure is called Feynman’s action
principle.
Schwinger’s action principle proposed in the early 1950s is the differential
formalism of action principle to be compared with Feynman’s action principle
which is the integral formalism of action principle. These two quantum action
principles are equivalent to each other and are essential in carrying out the
computation of electrodynamic level shifts of the atomic energy level.
Schwinger’s action principle is a convenient device to develop Schwinger theory
of Green’s functions. When it is applied to the two-point ‘‘full’’ Green’s functions
with the use of the proper self energy parts and the vertex operator of Dyson,
we obtain Schwinger–Dyson equation for quantum field theory and quantum
statistical mechanics. When it is applied to the four-point ‘‘full’’ Green’s functions,
we obtain Bethe–Salpeter equation. We focus on Bethe–Salpeter equation for the
bound state problem. This equation is highly nonlinear and does not permit the
exact solution except for the Wick–Cutkosky model. In all the rest of the models
proposed, we employ a certain type of approximation in the interaction kernel
of Bethe–Salpeter equation. Frequently, we employ the ladder approximation in
high energy physics.
Feynman’s variational principle in quantum statistical mechanics can be derived
by the analytic continuation in time from a real time to an imaginary time of
Feynman’s action principle in quantum mechanics. The polaron problem can be
discussed with Feynman’s variational principle.
There exists a close relationship between a global continuous symmetry of
the Lagrangian L(q_r(t), q̇_r(t), t) (or the Lagrangian density L(ψ_a(x), ∂_μψ_a(x))) and the
current conservation law, commonly known as Noether’s theorem. When the
global continuous symmetry of the Lagrangian (the Lagrangian density) exists,
the conserved current results at classical level, and hence the conserved charge.
A conserved current need not be a vector current. It can be a tensor current with
the conservation index. It may be an energy–momentum tensor whose conserved
charge is the energy–momentum four-vector. At quantum level, however, the
otherwise conserved classical current frequently develops the anomaly and the
current conservation fails to hold at quantum level any more. An axial current is a
good example.
When we extend the global symmetry of the field theory to the local symmetry,
Weyl’s gauge principle naturally comes in. With Weyl’sgauge principle,
electrodynamics of James Clark Maxwell can be deduced.
Weyl’s gauge principle still attracts considerable attention due to the fact that all
forces in nature can be unified with the extension of Weyl’s gauge principle with
the appropriate choice of the grand unifying Lie groups as the gauge group.
Based on the tri- approximation to the set of completely renormalized
Schwinger–Dyson equations for non-Abelian gauge field in interaction with the
fermion field, which is free from the overlapping divergence, we can demonstrate
asymptotic freedom, as stipulated above, nonperturbatively. This property arises
from the non-Abelian nature of the gauge group and such property is not present for
Abelian gauge field like QED. Actually, no quantum field theory is asymptotically
free without non-Abelian gauge field.
With the tri- approximation, we can demonstrate asymptotic disaster of Abelian
gauge field in interaction with the fermion field. Asymptotic disaster of Abelian
gauge field was discovered in mid-1950s by Gell–Mann and Low and independently
by Landau, Abrikosov, Galanin, and Khalatnikov. Soon after this discovery was
made, quantum field theory was once abandoned for a decade, and dispersion
theory became fashionable.
There exist the Gell–Mann–Low renormalization group equation, which originates
from the perturbative calculation of the massless QED with the use of the
mathematical theory of the regular variations. There also exist the renormalization
group equation, called the Callan–Symanzik equation, which is slightly different
from the former. The relationship between the two approaches is established with
some effort. We note that the method of the renormalization group essentially consists
of separating the field components into the rapidly varying field components
(k2 > 2) and the slowly varying field components (k2 < 2), path-integrating out
the rapidly varying field components (k2 > 2) in the generating functional of
Green’s functions, and focusing our attention to the slowly varying field components
(k2 < 2) to analyze the low energy phenomena at k2 < 2. We remark that
the scale of depends on the kind of physics we analyze and, to some extent, is
arbitrary. The Gell–Mann–Low analysis exhibited the astonishing result; QED is
Many problems of theoretical physics are formulated in terms of ordinary
differential equations or partial differential equations. We can frequently convert
them into integral equations with boundary conditions or initial conditions built
in. We can formally develop the perturbation series by iterations. A good example
is the Born series for the potential scattering problem in quantum mechanics. In
some cases, the resulting equations are nonlinear integro-differential equations.
A good example is the Schwinger–Dyson equation in quantum field theory and
quantum statistical mechanics. It is a nonlinear integro-differential equation,
and it is exact and closed. It provides the starting point of Feynman–Dyson-type
perturbation theory in configuration space and in momentum space. In some
singular cases, the resulting equations are Wiener–Hopf integral equations. They
originate from research on the radiative equilibrium on the surface of a star. In
the two-dimensional Ising model and the analysis of the Yagi–Uda semi-infinite
arrays of antennas, among others, we have the Wiener–Hopf sum equations.
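As a concrete illustration of the iteration idea mentioned above (the kernel and source below are hypothetical choices, not taken from the text), a Fredholm integral equation of the second kind can be solved both by Neumann (Born-type) iteration and by direct linear algebra after discretization:

```python
import numpy as np

# Fredholm integral equation of the second kind on [0, 1]:
#   f(x) = g(x) + lam * \int_0^1 K(x, y) f(y) dy
# Illustrative data: K(x, y) = x*y (separable, square-integrable), g(x) = 1.
lam = 0.5
n = 201
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]

Kmat = np.outer(x, x)            # kernel values K(x_i, y_j)
w = np.full(n, h)                # trapezoidal quadrature weights
w[0] = w[-1] = h / 2
g = np.ones(n)

# Neumann (Born-type) series: f_{k+1} = g + lam * K f_k, starting from f_0 = g.
f = g.copy()
for _ in range(60):
    f = g + lam * Kmat @ (w * f)

# Direct solve of the discretized (I - lam*K) f = g for comparison.
f_direct = np.linalg.solve(np.eye(n) - lam * Kmat * w, g)
err = np.max(np.abs(f - f_direct))
print(err)   # tiny: the iteration has converged to the direct solution
```

Because the kernel is square-integrable and lam is small, the series converges geometrically; for a singular (non-square-integrable) kernel this simple iteration generally fails.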
The theory of integral equations is best illustrated with the notion of functionals
defined on some function space. If the functionals involved are quadratic in the
function, the integral equations are said to be linear integral equations, and if
they are higher than quadratic in the function, the integral equations are said
to be nonlinear integral equations. Depending on the form of the functionals,
the resulting integral equations are said to be of the first kind, of the second
kind, and of the third kind. If the kernels of the integral equations are square-integrable,
the integral equations are said to be nonsingular, and if the kernels
of the integral equations are not square-integrable, the integral equations are said
to be singular. Furthermore, depending on whether the end points of the kernel
are fixed constants or not, the integral equations are said to be of the Fredholm
type, Volterra type, Cauchy type, or Wiener–Hopf types, etc. Through discussion
of the variational derivative of the quadratic functional, we can also establish the
relationship between the theory of integral equations and the calculus of variations.
The integro-differential equations can be best formulated in this manner. Analogies
of the theory of integral equations with the system of linear algebraic equations are
also useful.
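The connection between the variational derivative of a quadratic functional and a linear integral equation can be sketched explicitly (for a symmetric kernel K, assumed here for illustration):

```latex
% Quadratic functional with symmetric kernel K(x,y) and source g(x):
J[f] = \frac{1}{2}\int\!\!\int f(x)\,K(x,y)\,f(y)\,dx\,dy - \int f(x)\,g(x)\,dx .
% Setting the variational (functional) derivative to zero,
\frac{\delta J}{\delta f(x)} = \int K(x,y)\,f(y)\,dy - g(x) = 0 ,
% yields a linear integral equation of the first kind for f(x).
```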
The integral equation of the Cauchy type has an interesting application to classical
electrodynamics, namely, dispersion relations. Dispersion relations were derived
by Kronig in 1926 and Kramers in 1927, for X-ray dispersion and optical dispersion,
respectively. The Kramers–Kronig dispersion relations have very general validity,
depending only on the assumption of causality. The requirement of causality
alone determines the region of analyticity of dielectric constants. In the mid-1950s,
these dispersion relations were also derived from quantum field theory and applied
to strong interaction physics. The application of the covariant perturbation theory
to strong interaction physics was hopeless due to the large coupling constant.
From the mid-1950s to the 1960s, the dispersion-theoretic approach to strong interaction
physics was the only realistic approach that provided many sum rules. To cite a few,
we have the Goldberger–Treiman relation, the Goldberger–Miyazawa–Oehme
formula and the Adler–Weisberger sum rule. In dispersion theoretic approach to
strong interaction physics, experimentally observed data were directly used in the
sum rules. The situation changed dramatically in the early 1970s when quantum
chromodynamics, the relativistic quantum field theory of strong interaction physics,
was invented with the use of asymptotically free non-Abelian gauge field theory.
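A Kramers–Kronig relation can be checked numerically for a simple causal model (a damped-oscillator susceptibility, assumed here for illustration):

```python
import numpy as np

# chi(w) = 1 / (w0^2 - w^2 - i*gamma*w) is causal: analytic in the upper half
# w-plane and vanishing at infinity, so it obeys the dispersion relation
#   Re chi(w) = (1/pi) P \int Im chi(w') / (w' - w) dw'.
w0, gamma = 1.0, 0.3
chi = lambda w: 1.0 / (w0**2 - w**2 - 1j * gamma * w)

w_eval = 0.0
dw = 0.001
wp = np.arange(-200.0, 200.0, dw) + dw / 2   # midpoint grid: never hits w' = w_eval
re_kk = np.sum(chi(wp).imag / (wp - w_eval)) * dw / np.pi

print(re_kk, chi(w_eval).real)               # both ≈ 1.0
```

The principal value is handled by the midpoint grid, and the tail beyond |w'| = 200 is negligible because Im chi falls off like 1/w'^3.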
The region of analyticity of the scattering amplitude in the upper-half k-plane in
quantum field theory when expressed in terms of Fourier transform is immediate
since quantum field theory has the microscopic causality. But, the region of
analyticity of the scattering amplitude in the upper-half k-plane in quantum
mechanics when expressed in terms of Fourier transform is not immediate since
quantum mechanics does not have the microscopic causality. We shall invoke
the generalized triangular inequality to derive the region of analyticity of the
scattering amplitude in the upper-half k-plane in quantum mechanics. The region
of analyticity of the scattering amplitudes in the upper-half k-plane in quantum
mechanics and quantum field theory strongly depends on the fact that the scattering
amplitudes are expressed in terms of Fourier transform. When the other expansion
basis is chosen, like Fourier–Bessel series, the region of analyticity drastically
changes its domain.
In the standard application of the calculus of variations to the variety of problems
of theoretical physics, we simply write the Euler equation and are rarely concerned
with the second variations, the Legendre test and the Jacobi test. Examination of
the second variations and the application of the Legendre test and the Jacobi test
become necessary in some cases of the application of the calculus of variations to
the problems of theoretical physics. In order to bring the development of theoretical
physics and the calculus of variations much closer, some historical comments are
in order here.
Euler formulated Newtonian mechanics by the variational principle, the Euler
equation. Lagrange started the whole field of calculus of variations. He also
introduced the notion of generalized coordinates into classical mechanics and
completely reduced the problem to that of differential equations, which are presently
known as Lagrange equations of motion, with the Lagrangian appropriately written
in terms of kinetic energy and potential energy. He successfully converted classical
mechanics into analytical mechanics with the variational principle. Legendre
constructed the transformation methods for thermodynamics which are presently
known as the Legendre transformations. Hamilton succeeded in transforming
the Lagrange equations of motion, which are of the second order, into a set
of first-order differential equations with twice as many variables. He did this
by introducing the canonical momenta which are conjugate to the generalized
coordinates. His equations are known as Hamilton’s canonical equations of motion.
He successfully formulated classical mechanics in terms of the principle of least
action. The variational principles formulated by Euler and Lagrange apply only to
the conservative system. Hamilton recognized that the principle of least action in
classical mechanics and Fermat’s principle of shortest time in geometrical optics
are strikingly analogous, permitting the interpretation of optical phenomena in
mechanical terms and vice versa. Jacobi quickly realized the importance of the
work of Hamilton. He noted that Hamilton was using just one particular set of
the variables to describe the mechanical system and formulated the canonical
transformation theory with the Legendre transformation. He duly arrived at what
is presently known as the Hamilton–Jacobi equation. He formulated his version
of the principle of least action for the time-independent case.
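The chain of ideas sketched above can be summarized in formulas (standard conventions assumed):

```latex
% Legendre transformation from Lagrangian to Hamiltonian mechanics:
p_r = \frac{\partial L}{\partial \dot{q}_r}, \qquad
H(q,p,t) = \sum_r p_r\,\dot{q}_r - L(q,\dot{q},t),
% which turns the second-order Lagrange equations into Hamilton's
% first-order canonical equations of motion:
\dot{q}_r = \frac{\partial H}{\partial p_r}, \qquad
\dot{p}_r = -\frac{\partial H}{\partial q_r}.
```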
From what we discussed, we may be led to the conclusion that the calculus of
variations was a finished subject by the end of the 19th century. We shall note that,
from the 1940s to the 1950s, we encountered a resurgence of the action principle
for the systemization of quantum field theory. The subject matters are Feynman’s
action principle and Schwinger’s action principle.
The path integral quantization procedure invented by Feynman in 1942 in the
Lagrangian formalism is justified by the Hamiltonian formalism of quantum theory
in the standard treatment. We can deduce the canonical formalism of quantum
theory from the path integral formalism. The path integral quantization procedure
originated from the old paper published by Dirac in 1934 on the Lagrangian
in quantum mechanics. This quantization procedure is called Feynman’s action
principle.
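As a sketch of what Feynman's (Euclidean) path integral computes in practice — a toy lattice discretization assumed here, not the text's construction — the transfer-matrix evaluation of the harmonic-oscillator ground-state energy:

```python
import numpy as np

# Euclidean path integral for the harmonic oscillator via the transfer matrix.
# Units hbar = m = omega = 1, so the exact ground-state energy is E0 = 1/2.
eps = 0.1                                  # imaginary-time step
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
V = 0.5 * x**2

# Short-time kernel <x|e^{-eps H}|x'> in the symmetric Trotter approximation:
Xi, Xj = np.meshgrid(x, x, indexing="ij")
T = np.exp(-((Xi - Xj) ** 2) / (2 * eps) - eps * (V[:, None] + V[None, :]) / 2)
T *= dx / np.sqrt(2 * np.pi * eps)

lam = np.linalg.eigvalsh(T).max()          # largest eigenvalue ~ e^{-eps * E0}
E0 = -np.log(lam) / eps
print(E0)                                  # close to the exact 0.5
```

Iterating the kernel over many time steps is exactly the lattice version of summing over paths; the largest eigenvalue dominates at long imaginary time and isolates the ground state.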
Schwinger’s action principle proposed in the early 1950s is the differential
formalism of action principle to be compared with Feynman’s action principle
which is the integral formalism of action principle. These two quantum action
principles are equivalent to each other and are essential in carrying out the
computation of electrodynamic level shifts of the atomic energy level.
Schwinger’s action principle is a convenient device for developing the Schwinger
theory of Green’s functions. When it is applied to the two-point ‘‘full’’ Green’s
functions with the use of Dyson’s proper self-energy parts and vertex operator,
we obtain the Schwinger–Dyson equation of quantum field theory and quantum
statistical mechanics. When it is applied to the four-point ‘‘full’’ Green’s functions,
we obtain the Bethe–Salpeter equation. We focus on the Bethe–Salpeter equation
for the bound-state problem. This equation is highly nonlinear and does not permit
an exact solution except for the Wick–Cutkosky model. In all the rest of the
proposed models, we employ some approximation in the interaction kernel of the
Bethe–Salpeter equation. Frequently, we employ the ladder approximation in
high energy physics.
Feynman’s variational principle in quantum statistical mechanics can be derived
by the analytic continuation in time from a real time to an imaginary time of
Feynman’s action principle in quantum mechanics. The polaron problem can be
discussed with Feynman’s variational principle.
There exists a close relationship between a global continuous symmetry of
the Lagrangian L(q_r(t), q̇_r(t), t) (or of the Lagrangian density L(ψ_a(x), ∂_μψ_a(x)))
and a current conservation law, commonly known as Noether’s theorem. When a
global continuous symmetry of the Lagrangian (the Lagrangian density) exists,
a conserved current results at the classical level, and hence a conserved charge.
A conserved current need not be a vector current. It can be a tensor current with
the conservation index. It may be an energy–momentum tensor whose conserved
charge is the energy–momentum four-vector. At the quantum level, however, the
classically conserved current frequently develops an anomaly, and current
conservation fails to hold. The axial current is a
good example.
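For an internal global symmetry, Noether’s construction can be sketched as follows (standard form, with δψ_a = ε Δψ_a an infinitesimal symmetry transformation):

```latex
% If L(\psi_a, \partial_\mu\psi_a) is invariant under \psi_a \to \psi_a + \epsilon\,\Delta\psi_a,
% the Noether current
j^{\mu} = \frac{\partial \mathcal{L}}{\partial(\partial_\mu \psi_a)}\,\Delta\psi_a
% is conserved on the classical equations of motion,
\partial_\mu j^{\mu} = 0,
% and the conserved charge is  Q = \int d^{3}x \, j^{0}(x).
```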
When we extend the global symmetry of the field theory to a local symmetry,
Weyl’s gauge principle naturally comes in. With Weyl’s gauge principle, the
electrodynamics of James Clerk Maxwell can be deduced.
Weyl’s gauge principle still attracts considerable attention due to the fact that all
forces in nature can be unified with the extension of Weyl’s gauge principle with
the appropriate choice of the grand unifying Lie groups as the gauge group.
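For the simplest case of a U(1) phase symmetry, Weyl’s gauge principle can be sketched as follows (standard sign conventions assumed):

```latex
% Promoting the global phase symmetry \psi \to e^{i\alpha}\psi to a local one,
% \alpha = \alpha(x), forces the replacement of the derivative by a covariant
% derivative,
\partial_\mu \psi \;\longrightarrow\; D_\mu \psi = (\partial_\mu + ieA_\mu)\,\psi ,
% with a gauge field transforming as
A_\mu \;\longrightarrow\; A_\mu - \frac{1}{e}\,\partial_\mu \alpha(x),
% whose field strength F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu
% reproduces Maxwell's electrodynamics.
```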
Based on the tri- approximation to the set of completely renormalized
Schwinger–Dyson equations for a non-Abelian gauge field in interaction with the
fermion field, which is free from the overlapping divergence, we can demonstrate
asymptotic freedom, as stipulated above, nonperturbatively. This property arises
from the non-Abelian nature of the gauge group; it is not present for an
Abelian gauge field theory like QED. Indeed, no quantum field theory is
asymptotically free without a non-Abelian gauge field.
With the tri- approximation, we can demonstrate the asymptotic disaster of an
Abelian gauge field in interaction with the fermion field. The asymptotic disaster
of the Abelian gauge field was discovered in the mid-1950s by Gell–Mann and Low
and independently by Landau, Abrikosov, Galanin, and Khalatnikov. Soon after
this discovery, quantum field theory was abandoned for a decade, and dispersion
theory became fashionable.
There exists the Gell–Mann–Low renormalization group equation, which originates
from the perturbative calculation of massless QED with the use of the
mathematical theory of regular variation. There also exists a renormalization
group equation, called the Callan–Symanzik equation, which is slightly different
from the former. The relationship between the two approaches is established with
some effort. We note that the method of the renormalization group essentially
consists of separating the field components into the rapidly varying components
(k^2 > Λ^2) and the slowly varying components (k^2 < Λ^2), path-integrating out
the rapidly varying components (k^2 > Λ^2) in the generating functional of
Green’s functions, and focusing our attention on the slowly varying components
(k^2 < Λ^2) in order to analyze the low energy phenomena at k^2 < Λ^2. We remark
that the scale of Λ depends on the kind of physics we analyze and, to some extent,
is arbitrary. The Gell–Mann–Low analysis exhibited an astonishing result: QED is
not asymptotically free; it becomes a strong coupling theory at short distances.
At the same time, the Gell–Mann–Low renormalization group equation and the
Callan–Symanzik equation address themselves to the deep Euclidean momentum
space. High energy experimental physicists also focus their attention on the deep
Euclidean region, so the analysis based on the renormalization group equations
is standard procedure for them. Popov employed the same method to
path-integrate out the rapidly varying field components (k^2 > Λ^2)
and focus his attention on the slowly varying field components (k^2 < Λ^2) in his
detailed analysis of superconductivity. Wilson introduced the renormalization
group equation to analyze the critical exponents in solid state physics. He employed
the method of the block spin and coarse-graining in his formulation of the
renormalization group equation. The Wilson approach looks quite dissimilar to the
Gell–Mann–Low approach and the Callan–Symanzik approach. In the end, the
Wilson approach is identical to the former two approaches.
The renormalization group equation can be regarded as an application of the
calculus of variations which attempts to maintain the renormalizability of
quantum theory under variations of some physical parameters.
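The ‘‘integrate out the fast components, keep the slow ones’’ procedure can be illustrated by the textbook decimation renormalization group for the one-dimensional Ising chain (a standard example of Wilson-style coarse-graining, not taken from this text): summing out every second spin maps the coupling K to K' = (1/2) ln cosh 2K.

```python
import math

def decimate(K: float) -> float:
    """One decimation step for the 1D Ising chain: K' = (1/2) ln cosh(2K)."""
    return 0.5 * math.log(math.cosh(2.0 * K))

K = 1.5
flow = [K]
for _ in range(8):
    K = decimate(K)
    flow.append(K)

# The coupling flows monotonically to the trivial fixed point K* = 0,
# reflecting the absence of a finite-temperature phase transition in 1D.
print(flow)
```

Each step halves the number of degrees of freedom while preserving the partition function, which is exactly the block-spin idea in its simplest setting.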
The electroweak unification of Glashow, Weinberg, and Salam is based on the gauge
group,
SU(2)weak isospin × U(1)weak hypercharge,
while maintaining the Gell-Mann–Nishijima relation in the lepton sector. It suffers
from the problem of nonrenormalizability due to the triangular anomaly in
the lepton sector. In the early 1970s, it was discovered that non-Abelian gauge field
theory is asymptotically free at short distances, i.e., it behaves like a free field at
short distances. The relativistic quantum field theory of the strong interaction based
on the gauge group SU(3)color was then invented, and is called quantum chromodynamics.
The standard model with the gauge group,
SU(3)color × SU(2)weak isospin × U(1)weak hypercharge,
which describes the weak interaction, the electromagnetic interaction, and the strong
interaction, is free from the triangular anomaly. It suffers, however, from a serious
defect: the existence of the classical instanton solution to the field equation in the
Euclidean metric for the SU(2) gauge field theory. In the SU(2) gauge field theory,
we have the Belavin–Polyakov–Schwartz–Tyupkin instanton solution which is a
classical solution to the field equation in the Euclidean metric. A proper account
for the instanton solution requires the addition of the strong CP-violating term to
the QCD Lagrangian density in the path integral formalism. The Peccei–Quinn
axion and the invisible axion scenario resolve this strong CP-violation problem. In
the grand unified theories, we assume that the subgroup of the grand unifying
gauge group is the gauge group SU(3)color × SU(2)weak isospin × U(1)weak hypercharge.
We now attempt to unify the weak interaction, the electromagnetic interaction, and
the strong interaction by starting from the much larger gauge group G, which is
reduced to SU(3)color × SU(2)weak isospin × U(1)weak hypercharge and further down to
SU(3)color × U(1)E.M..
Lattice gauge field theory can consistently explain the phenomenon of quark
confinement. The discretized space–time spacing of the lattice gauge field theory
plays the role of the momentum cutoff of the continuum theory.
A customary WKB method in quantum mechanics is the short wavelength
approximation to wave mechanics. The WKB method in quantum theory in path
integral formalism consists of the replacement of general Lagrangian (density)
with a quadratic Lagrangian (density).
The Hartree–Fock program is one of the classic variational problems in
quantum mechanics of a system of A identical fermions. One-body and two-body
density matrices are introduced. But extremization of the energy functional
with respect to density matrices is difficult to implement. We thus introduce a
Slater determinant for the wavefunction of A identical fermions. After extremizing
the energy functional under variation of the parameters in orbitals of the Slater
determinant, however, there remain two important questions. One question has
to do with the stability of the iterative solutions, i.e., do they provide the true
minimum? The second variation of the energy functional with respect to the
variation parameters in orbitals should be examined. Another question has to do
with the degeneracy of the Hartree–Fock solution.
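The variational problem described above can be summarized in its standard form (stated here as a sketch, with v the two-body interaction):

```latex
% Hartree–Fock energy functional of a Slater determinant \Phi built from
% orthonormal orbitals \{\varphi_i\}_{i=1}^{A}:
E[\Phi] = \sum_{i=1}^{A} \langle i|h|i\rangle
 + \frac{1}{2}\sum_{i,j=1}^{A}\bigl(\langle ij|v|ij\rangle - \langle ij|v|ji\rangle\bigr).
% Extremizing under the orthonormality constraints \langle i|j\rangle = \delta_{ij}
% (with Lagrange multipliers \varepsilon_i) gives the self-consistent
% Fock eigenvalue equations
F\,\varphi_i = \varepsilon_i\,\varphi_i .
```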
Weyl’s gauge principle, Feynman’s action principle, Schwinger’s action principle,
Feynman’s variational principle as applied to the polaron problem, and the method
of the renormalization group equations are the modern applications of calculus of
variations. Thus, the calculus of variations is alive and well in theoretical physics
to this day, contrary to a common belief that the calculus of variations is a dead
subject.
In this book, we address ourselves to theory of integral equations and the calculus
of variations, and their application to the modern development of theoretical
physics, while referring the reader to other sources for theory of ordinary differential
equations and partial differential equations.
The Power of Ideas in Mathematics
A mathematician, like a painter or a poet, is a maker of patterns. If his
patterns are more permanent than theirs, it is because they are made with
ideas.
Godfrey Harold Hardy (1877–1947)
In a right triangle the side opposite the right angle is called the hypotenuse. The
theorem of Pythagoras says that
c² = a² + b².
In words: the square of the length of the hypotenuse is equal to the sum of the
squares of the lengths of the other two legs. Mathematicians of the Pythagorean
school in ancient Greece attributed the Pythagorean theorem to the master of their
school, Pythagoras of Samos (circa 560 B.C.–480 B.C.). It is said that Pythagoras
sacrificed one hundred oxen to the gods in gratitude. In fact, this theorem was
already known in Babylon at the time of King Hammurabi (circa 1728 B.C.–1686
B.C.). Presumably, however, it was a mathematician of the Pythagorean school who
first proved the Pythagorean theorem. This famous theorem appears as Proposition
47 in Book I of Euclid’s Elements (300 B.C.).
In 1940 Hermann Weyl wrote a fundamental paper where he emphasized that
the justification of the Dirichlet principle can be based on the method of orthogonal
projection. In 1943 he applied this method to the Dirichlet principle for harmonic
differential forms. This way, Hodge theory obtained a sound analytic foundation.
The notion of Hilbert space is the abstract realization of the idea of orthogonality.
To this end, one introduces the inner product
⟨ϕ|ψ⟩
between the two elements ϕ and ψ of the Hilbert space. Recall that we say that ϕ
is orthogonal to ψ iff ⟨ϕ|ψ⟩ = 0. The proofs of the essential theorems about Hilbert
spaces (e.g., spectral theory) are based on the notion of orthogonality. In the late
1920s, John von Neumann (1903–1955) discovered that the mathematical foundation
of quantum mechanics can be based on the theory of Hilbert spaces. Therefore,
the concept of orthogonality is crucial for quantum mechanics. In the early 1940s,
Feynman (1918–1988) developed a new approach to quantum mechanics. He called
the inner product ⟨ϕ|ψ⟩ the transition amplitude between the two quantum states
ϕ and ψ, and he used this in order to construct his path integral.
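A minimal finite-dimensional illustration of orthogonality and projection (the idea behind Weyl’s method of orthogonal projection), using ordinary real vectors as a stand-in for Hilbert-space elements:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
phi = rng.normal(size=5)
psi = rng.normal(size=5)

def inner(a, b):
    """Inner product <a|b> for real vectors."""
    return float(np.dot(a, b))

# Orthogonal decomposition of phi with respect to the line spanned by psi:
parallel = (inner(psi, phi) / inner(psi, psi)) * psi
orthogonal = phi - parallel

print(inner(orthogonal, psi))   # ~ 0: the remainder is orthogonal to psi
```

The same decomposition, applied to closed subspaces instead of a single line, is what makes orthogonal projection a rigorous tool for the Dirichlet principle.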
There are ideas in mathematics, like the idea of orthogonality, which remain
eternally young and which lose nothing of their intellectual freshness
after thousands of years.
Distributions and Green’s Functions
Whoever understands Green’s functions can understand forces in nature.
Folklore
The invention of the Green’s function brought about a tool-driven revolution
in mathematical physics, similar in character to the more famous
tool-driven revolution caused by the invention of electronic computers a
century and a half later. . . The Green’s function and the computer are
prime examples of intellectual tools. They are tools for clear thinking. . .
Invented in 1828 by George Green (1793–1841) and successfully applied to
classical electromagnetism, acoustics, and hydrodynamics, Green’s functions
were the essential link between the theories of quantum electrodynamics
by Schwinger, Feynman, and Tomonaga in 1948 and are still alive
and well today. . .
I began the application of the Green’s function to condensed matter physics
in 1956 with a study of spin-waves in ferromagnets. I found that all the
Green’s function tricks that had worked so well in quantum electrodynamics
worked even better in the theory of spin waves. . .
Meanwhile, the Green’s functions method was applied systematically by
Bogoliubov and other people to a whole range of problems in condensed
matter physics. The main novelty in condensed matter physics was the
appearance of temperature as an additional variable. . . A beautiful thing
happens when you make the transition from ordinary Green’s functions to
thermal Green’s functions. To make the transition, all you have to do is to
replace the real frequency of any oscillation by a complex number whose
real part is frequency and whose imaginary part is temperature1. . .
Soon after thermal Green’s functions were invented, they were applied to
solve the outstanding unsolved problem of condensed matter physics, the
problem of superconductivity. They allowed Cooper, Bardeen, and Schrieffer
to understand superconductivity as an effect of a particular thermal
Green’s functions expressing long-range phase-coherence between pairs of
electrons (called Cooper pairs). . .
In the 1960s, after Green’s functions had become established as the standard
working tools of theoretical analysis in condensed matter physics,
the wheel of fashion in particle physics continued to turn. For a decade,
quantum field theory and Green’s functions were unfashionable in particle
physics. The prevailing view was that quantum field theory had failed in
the domain of strong interactions2. . .
Then in the 1970s, the wheel of fashion turned once more. Quantum
field theory was back in the limelight with two enormous successes, the
Weinberg–Salam unified theory of electromagnetic and weak interactions,
and the gauge theory of strong interactions now known as quantum chromodynamics.
Green’s functions were once again the working tools of calculation,
both in particle physics and in condensed matter physics. And
so they have remained up to the present day.
In the 1980s, quantum field theory moved off in a new direction, to lattice
gauge theories in one direction and to superstring theory in another. . . The
Wilson loop is the reincarnation of a Green’s function in a lattice gauge
theory3 and there is a reincarnation of Green’s functions in superstring
theory.
Freeman Dyson
George Green and physics4
Between 1930 and 1940, several mathematicians began to investigate systematically
the concept of a “weak” solution of a linear partial differential
equation, which appeared episodically (and without a name) in Poincaré's
work.
It was one of the main contributions of Laurent Schwartz when he saw,
in 1945, that the concept of distribution introduced by Sobolev in 1936
(which he had rediscovered independently) could give a satisfactory generalization
of the Fourier transform including all the preceding ones. . . By
his own research and those of his numerous students, Laurent Schwartz
began to explore the potentialities of distributions (generalized functions)
and gradually succeeded in convincing the world of mathematicians that
this new concept should become central in all problems of mathematical
analysis, due to the greater freedom and generality it allowed in the fundamental
operations of calculus, doing away with a great many unnecessary
restrictions and pathology.
The role of Laurent Schwartz (born 1915) in the theory of distributions is
very similar to the one played by Newton (1643–1727) and Leibniz (1646–
1716) in the history of Calculus. Contrary to popular belief, they of course
did not invent it, for derivation and integration were practiced by men such
as Cavalieri (1598–1647), Fermat (1601–1665) and Roberval (1602–1675)
when Newton and Leibniz were merely schoolboys. But they were able to
systematize the algorithms and notations of Calculus in such a way that it
became a versatile and powerful tool which we know, whereas before them
it could only be handled via complicated arguments and diagrams.
Jean Dieudonné, 1981
History of Functional Analysis
The local propagation of physical effects is described mathematically by Green’s
functions. In quantum field theory, special Green’s functions are
• n-point correlation functions of quantum fields (n-point Green’s functions).6
They are closely related to
• propagators,
• retarded propagators,
• advanced propagators.
The prototypes can be found in Sect. 11.1.2. In terms of physics, Green’s functions
describe physical processes under the influence of sharply concentrated external
forces described by the Dirac delta function. General external forces are then obtained
by the superposition principle. The Dirac delta function is not a classical
object, but a generalized function (also called distribution). Therefore, distributions
play a crucial role in physics, in particular, in quantum field theory. We do
not suppose that the reader is familiar with this fundamental mathematical tool.
Therefore, in this chapter, we will give an introduction to the mathematical theory
of distributions and its physical interpretation.
The theory of distributions was created by Laurent Schwartz in 1945; it was
motivated by Dirac's approach to quantum mechanics, as presented in
Dirac's famous 1930 monograph The Principles of Quantum Mechanics.7 Distributions
generalize a broad class of continuous and discontinuous functions. For a
classical function, one has always to worry about the existence of derivatives. The
situation changes completely for distributions.
Distributions possess derivatives of all orders.
Therefore, distributions are the right tool for the investigation of linear partial
differential equations. For example, let us introduce the discontinuous Heaviside
function
θ(t) := 1 if t ≥ 0, and θ(t) := 0 if t < 0,   (11.1)
which jumps at the initial time t = 0 from zero to one (Fig. 11.1(a)). By convention,
the Heaviside function is continuous from the right.
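The distributional derivative of the Heaviside function is the Dirac delta. A minimal numerical sketch (an illustration, not from the book) checks the defining identity ⟨θ, φ'⟩ = −φ(0) for a Gaussian test function φ vanishing at the boundary:

```python
import numpy as np

# Heaviside function theta(t), right-continuous as in (11.1)
def theta(t):
    return np.where(t >= 0, 1.0, 0.0)

# Distributionally theta' = delta: for a smooth test function phi that
# vanishes at the boundary, integral theta(t) phi'(t) dt = -phi(0),
# which (after integration by parts) is the action of delta on phi.
t, dt = np.linspace(-10, 10, 200001, retstep=True)
phi = np.exp(-t**2)            # Gaussian test function, phi(0) = 1
dphi = -2 * t * np.exp(-t**2)  # its derivative

lhs = np.sum(theta(t) * dphi) * dt   # Riemann sum for <theta, phi'>
print(lhs)  # approximately -1 = -phi(0)
```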
The discrete Dirac delta function is the key to the Dirac delta distribution.
Folklore
The discrete Dirac delta function δΔt approximates white noise as Δt → +0.
The term ‘white’ comes from the fact that white light is a superposition of electromagnetic
waves of all frequencies, and each frequency contributes approximately
the same amplitude.
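A minimal sketch of the smearing property, assuming the common normalization δΔt(t) = 1/Δt on [0, Δt) and 0 otherwise (the book's exact definition may differ): the smeared value of a test function tends to its value at the origin as Δt → +0.

```python
import numpy as np

# Discrete Dirac delta: delta_dt(t) = 1/dt on [0, dt), 0 otherwise
# (a common normalization; an assumption for this illustration).
def delta_dt(t, dt):
    return np.where((t >= 0) & (t < dt), 1.0 / dt, 0.0)

def smeared(phi, dt, n=400001):
    """Riemann sum for integral delta_dt(t) * phi(t) dt."""
    t, h = np.linspace(-2.0, 2.0, n, retstep=True)
    return np.sum(delta_dt(t, dt) * phi(t)) * h

for dt in (0.5, 0.1, 0.01):
    print(dt, smeared(np.cos, dt))  # tends to cos(0) = 1 as dt -> +0
```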
Prototypes of Green’s Functions
The notion of Green’s function was introduced by George Green (1793–1841) in the
year 1828.10
The Green’s function describes the behavior of a physical system by kicking
it with a force which acts only during a very small time interval and which
is concentrated on a very small neighborhood of some point in the position
space.
This generalizes Newton’s infinitesimal strategy from mechanics to field theories.
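The kick description can be made concrete for the harmonic oscillator. The following sketch (illustrative code, not from the book) integrates x'' + x = f with a box force of unit impulse concentrated near t = 0 and compares the response with the exact Green's function G(t) = θ(t) sin t:

```python
import numpy as np

# Response of the oscillator x'' + x = f(t), x(0) = x'(0) = 0, to a
# sharply concentrated unit-impulse force ("kick").  The exact Green's
# function is G(t) = theta(t) sin(t): after the kick the system
# oscillates freely with unit initial velocity.
def kicked_oscillator(dt_force=1e-3, h=1e-4, t_end=3.0):
    """Integrate with a box force of height 1/dt_force on [0, dt_force)."""
    n = int(t_end / h)
    x, v = 0.0, 0.0
    ts, xs = [], []
    for i in range(n):
        t = i * h
        f = 1.0 / dt_force if 0.0 <= t < dt_force else 0.0
        # semi-implicit Euler step for x'' = -x + f
        v += h * (-x + f)
        x += h * v
        ts.append(t + h)
        xs.append(x)
    return np.array(ts), np.array(xs)

ts, xs = kicked_oscillator()
err = np.max(np.abs(xs - np.sin(ts)))  # compare with G(t) = sin t
print(err)  # small: the kicked response approximates the Green's function
```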
Steinmann’s Renormalization Theorem
The quest for the existence of a non-trivial quantum field in four spacetime
dimensions is still without any conclusive result. Nonetheless, physicists
are working daily, with success, on concrete models which describe
very efficiently physics at wide energy scales. This description is based
on expansions of physical quantities, like amplitudes of scattering processes,
in power series of "physical" parameters such as coupling constants, masses,
and charges. The higher-order terms of these power series are usually ill-defined
in a naive approach, but physicists soon learned how to make
sense of them through the procedure now known as renormalization. . . On
Minkowski space-time, Steinmann’s concept of the scaling degree of a generalized
function at a point leads to a rather smooth and economic method
of renormalization. . .
Romeo Brunetti and Klaus Fredenhagen, 2000
Renormalization in a Nutshell
This section should help the reader to understand the basic ideas of renormalization
theory from both the mathematical and physical point of view. We will use a simple
example in order to demonstrate the relation between the critical phenomenon of
resonance of an oscillating system and the renormalization of physical parameters.
Our rigorous approach will be based on the classical methods of bifurcation theory
in nonlinear functional analysis.33 This way, we will also clarify the role of renormalized
Green’s functions which represent an important tool used by physicists in
renormalization theory.
Renormalization
In quantum field theory, the computation of correlation functions (Green's functions) runs into divergence difficulties. These divergences arise naturally in integrals with infinite limits, and they appear as ultraviolet and infrared divergences. One way of dealing with them is renormalization. The key to renormalization is a change of scale: the field is rescaled to a larger scale, so that the divergent part can be isolated. The spirit of this method resembles an eye looking into a mirror: the eye contains the mirror's image of the eye, which in turn contains the eye's image of the mirror, and so on...
Renormalization is a collection of methods in quantum field theory, the statistical mechanics of fields, and the study of self-similar geometric structures for dealing with the infinities that arise in calculations.
In the early development of quantum field theory, it was found that many loop diagrams (the higher-order terms of the perturbation expansion) yield divergent (infinite) terms. Renormalization is one scheme for resolving this difficulty. If a theory contains only finitely many kinds of divergent terms, a finite number of terms can be introduced into the Lagrangian to cancel these infinities; such a theory is called renormalizable. Conversely, a theory with infinitely many kinds of divergent terms is called non-renormalizable.
Renormalizability was once regarded as a consistency requirement that any field theory must satisfy. It played an important role in the development of quantum electrodynamics and quantum gauge field theories. The Standard Model of particle physics is also renormalizable.
The modern view of field theory is that all theories are merely effective theories, each with its own domain of validity. Apart from a so-called final theory, all theories are in principle non-renormalizable. From this viewpoint, renormalization is just a method for relating theories at different energy scales.
For example: the last two terms of … diverge. To eliminate the divergence, the lower limits of integration are replaced by infinitesimal … and …, so that the integral becomes …. If … can be guaranteed, then one obtains ….
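The cutoff idea can be illustrated with a hypothetical toy example (not a field-theoretic computation): two integrals that diverge separately at the lower limit, but whose difference stays finite when both are cut off at the same small ε.

```python
import numpy as np
from scipy.integrate import quad

# Toy cutoff regularization: I1 = integral of 1/sin(x) and I2 = integral
# of 1/x over (0, 1] are each logarithmically divergent at the lower
# limit.  Replacing the lower limit 0 by a common cutoff eps isolates
# the divergence; the difference I1 - I2 has a finite limit as eps -> +0.
def regulated_difference(eps):
    i1, _ = quad(lambda x: 1.0 / np.sin(x), eps, 1.0)
    i2, _ = quad(lambda x: 1.0 / x, eps, 1.0)
    return i1 - i2

for eps in (1e-1, 1e-2, 1e-3):
    print(eps, regulated_difference(eps))
# the values converge to ln(2 tan(1/2)) ~ 0.0887 as eps shrinks
```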
If an object is self-similar, it is exactly or approximately similar to a part of itself. A curve is self-similar if every part of it contains a small piece that is similar to the whole. Many things in nature have this property, for example coastlines.
Self-similarity is an essential feature of fractals.
The Koch curve is self-similar.
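The Koch construction is easy to carry out. This sketch (illustrative code, not from the source) builds the curve as a polyline of complex points and checks the (4/3)^n growth of its length, the signature of its self-similar refinement:

```python
# The Koch curve replaces each segment by four segments, each one third
# as long; after n steps the length is (4/3)^n, and each quarter of the
# curve is a scaled copy of the whole (self-similarity).
def koch(points, depth):
    """Refine a polyline (list of complex points) Koch-style."""
    if depth == 0:
        return points
    new = []
    rot = complex(0.5, 3**0.5 / 2)  # rotation by 60 degrees
    for a, b in zip(points, points[1:]):
        d = (b - a) / 3
        new.extend([a, a + d, a + d + d * rot, a + 2 * d])
    new.append(points[-1])
    return koch(new, depth - 1)

def length(pts):
    return sum(abs(b - a) for a, b in zip(pts, pts[1:]))

curve = koch([0 + 0j, 1 + 0j], 4)
print(len(curve), length(curve))  # 257 points, length (4/3)^4 ~ 3.16
```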
Reply: Quantum Field Theory I
The Beauty of Hironaka’s Theorem
In this note I shall show how Hironaka’s theorem42 on the resolution of
singularities leads quickly to a new proof of the Hörmander–Łojasiewicz
theorem on the division of distributions (generalized functions) and hence
to the existence of tempered fundamental solutions for constant-coefficient
differential operators. Since most of the difficulties in the general theory of
partial differential operators arise from the singularities of the characteristic
variety, it is quite natural to expect Hironaka’s theorem to be relevant.
In fact, this note is primarily intended to draw the attention of analysts
to the power of this theorem.
Michael Atiyah, 1970
In this section, we want to have a look at some deep mathematical theorems which
are related to quantum field theory. At the top is the highly sophisticated Hironaka
theorem. It turns out that
An appropriate mathematical tool for quantum field theory is the theory of
tempered generalized functions based on the Fourier transform.
In fact, the basic objects of Feynman’s approach to quantum field theory, namely,
• the Dirac delta function,
• the Feynman propagators (e.g., the photon propagator and the electron propagator
in quantum electrodynamics), and
• the algebraic Feynman integrals (corresponding to internal lines of Feynman
diagrams)
are not classical mathematical functions, but tempered distributions. Let us mention
the following fundamental results:
(i) the 1964 Hironaka theorem on the resolution of singularities in algebraic geometry;
(ii) the 1968 Hironaka–Atiyah–Bernstein–Gelfand (HABG) theorem on meromorphic
families of tempered distributions;
(iii) propagators of quantum fields and the 1958 Hörmander–Łojasiewicz theorem
on the existence of tempered fundamental solutions; this can be based on the
HABG theorem;
(iv) dimensional regularization of Feynman integrals via the HABG theorem;
(v) the zero-mass limit as a tempered generalized function via the HABG theorem;
(vi) the multiplication of generalized functions by using Hörmander's wave front
sets, and causal products of propagators (the 1973 Epstein–Glaser approach
to quantum field theory via constructing the S-matrix in terms of tempered
generalized functions);
(vii) Hörmander's 1971 theory of Fourier integral operators, microlocal analysis,
and Radzikowski’s 1996 theory for Hadamard states in quantum gravitation;
(viii) Analytic continuation of functions of many complex variables; analyticity
properties of the Fourier transform or Laplace transform of tempered generalized
functions (the Paley–Wiener theorem and Bogoliubov's "edge-of-the-wedge"
theorem); proof of the fundamental CPT theorem in the setting of
axiomatic quantum field theory by Res Jost in 1957.
Mathematically, important contributions came from Laurent Schwartz (theory of
generalized functions) in the 1940s and from Hörmander between 1955 and 1975
(general theory of linear partial differential operators). See Hörmander (1983),
Vols. 1–4.
In terms of quantum field theory, during the 1950s important impacts came
from Arthur Wightman in Princeton and Nikolai Bogoliubov in Moscow and from
their numerous collaborators.
Resolution of singularities
In algebraic geometry, the problem of resolution of singularities asks whether an algebraic variety has a non-singular model (that is, a non-singular algebraic variety birationally equivalent to it). Over fields of characteristic zero, Hironaka gave an affirmative answer; over fields of positive characteristic, the problem remains open (as of 2007) in dimensions four and higher.
Definition
For an algebraic variety over a field, if one can find a complete non-singular variety birationally equivalent to it (in other words, with the same function field), one says that it has a weak resolution of singularities. In practice a condition that is easier to work with is often required: if there exists a non-singular variety together with a proper birational morphism onto the given variety which is an isomorphism away from the singular locus, one says that the variety has a resolution of singularities. The properness condition is meant to exclude trivial solutions such as taking the smooth locus of the variety itself.
More generally, suppose the variety is embedded in a non-singular ambient variety. A useful concept in this situation is a strong resolution of the singularities of the subvariety within the ambient variety: a proper birational morphism satisfying the following conditions:
- It is a composite of a sequence of blow-ups along non-singular closed subvarieties, where each center is transverse to the exceptional divisors of the previous blow-ups.
- The strict transform of the variety is non-singular and transverse to the exceptional divisors of the blow-ups; the restriction of the morphism to the strict transform is then a resolution of the singularities of the variety.
- The construction is functorial with respect to smooth morphisms.
- The morphism does not depend on the embedding of the variety in the ambient variety.
Hironaka proved that when the ground field has characteristic zero, there exists a strong resolution satisfying the first two conditions. His construction was later improved by several mathematicians so as to satisfy all four conditions.
Brief history
The resolution of singularities of algebraic curves is comparatively easy and was well known in the 19th century. The proofs vary; the two most common methods are successively blowing up the singular points, or passing to the normalization of the curve. Normalization removes all singularities in codimension one, and is therefore applicable only to curves.
The resolution of singularities of complex algebraic surfaces was described informally by Beppo Levi (1899), O. Chisini (1921), and G. Albanese (1924). The first rigorous proof was given by Robert J. Walker in 1935. An algebraic proof valid over all fields of characteristic zero was given by Zariski in 1939. S. S. Abhyankar proved the case of positive characteristic (1956). The resolution of singularities of all two-dimensional excellent schemes (including all arithmetic surfaces) was proved by Lipman in 1978.
The usual method for resolving the singularities of a surface is to repeatedly normalize the surface (removing the singularities of codimension one) and blow up the singular points (improving the singularities of codimension two, possibly creating new singularities of codimension one).
In dimension three, the case of characteristic zero was first proved by Zariski (1944); the case of fields of characteristic greater than 5 was proved by S. S. Abhyankar in 1966.
The resolution of singularities in arbitrary dimension over fields of characteristic zero was first proved by Hironaka in 1964. He showed that the singularities can be removed by successive blow-ups along non-singular closed subvarieties, using a rather complicated induction on the dimension. Simplified versions of his proof were later given by many mathematicians, including Bierstone and Milman (1997), Encinas and Villamayor (1998), Encinas and Hauser (2002), Cutkosky (2004), Wlodarczyk (2005), and Kollar (2007). Some of the recent proofs are less than a tenth the length of Hironaka's and are simple enough to present in an introductory graduate course. For an introduction to the theorem, see Hauser (2003) in the references below; for a historical discussion, see Hauser (2000).
In 1996 A. J. de Jong introduced another approach to resolution of singularities, which was used by Bogomolov and Pantev (1996) and by Abramovich and de Jong (1997) to prove resolution of singularities in characteristic zero. De Jong's method gives a weaker result for varieties over fields of positive characteristic, but one that is strong enough to serve as a substitute for resolution in many situations.
De Jong proved that for any variety over a field there is a surjective proper morphism onto it from a non-singular variety of the same dimension. This need not be a birational equivalence, since the function field may be replaced by a finite extension; hence it is not a resolution of singularities. De Jong's idea was to represent the variety as a fibration over a smaller space whose fibers are all curves (which may require modifying the variety), then to remove the singularities of the base by induction on the dimension, and finally to remove the singularities of the fibers.
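The blow-up procedure described above can be seen in the smallest possible example (a SymPy illustration, not part of the original text): a single blow-up chart resolves the cuspidal cubic y² = x³.

```python
import sympy as sp

# The cuspidal cubic y^2 = x^3 is singular at the origin.  In the
# blow-up chart x = x, y = x*y1, the total transform factors as
# x^2 * (y1^2 - x); the strict transform y1^2 - x = 0 is a smooth
# parabola, so one blow-up resolves the singularity.
x, y, y1 = sp.symbols('x y y1')

cusp = y**2 - x**3
# singular point: both partial derivatives vanish at (0, 0)
assert sp.diff(cusp, x).subs({x: 0, y: 0}) == 0
assert sp.diff(cusp, y).subs({x: 0, y: 0}) == 0

total = sp.expand(cusp.subs(y, x * y1))   # x**2*y1**2 - x**3
strict = sp.cancel(total / x**2)          # strict transform: y1**2 - x
print(strict)

# the strict transform is non-singular: its gradient never vanishes,
# since the derivative with respect to x is identically -1
grad = [sp.diff(strict, v) for v in (x, y1)]
print(grad)
```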
The case of schemes
The definition of resolution of singularities extends readily to arbitrary schemes. Not every scheme has a resolution of singularities: Grothendieck (1965, EGA IV 7.9) showed that if all integral schemes finite over a locally Noetherian scheme X admit a resolution of singularities, then X must be quasi-excellent. Grothendieck conjectured the converse; in other words, if a locally Noetherian scheme X is reduced and quasi-excellent, then its singularities can be resolved. When X is defined over a field of characteristic zero, this statement follows from Hironaka's theorem; the general case reduces to the resolution of singularities of complete integral local rings.
External links
- Some pictures of singularities and their resolutions
- SINGULAR: a computer algebra system that can handle resolutions of singularities
- Lecture notes from the summer school Resolution of Singularities (Trieste, Italy, June 2006)
- desing: another program for resolution of singularities
References
- Abhyankar, Shreeram, Local uniformization on algebraic surfaces over ground fields of characteristic p≠0. Ann. of Math. (2) 63 (1956), 491–526.
- S. S. Abhyankar, Resolution of singularities of embedded algebraic surfaces, Acad. Press (1966); second edition (1998) ISBN 3540637192
- Abramovich, D., de Jong, A. J., Smoothness, semistability, and toroidal geometry. J. Algebraic Geom. 6 (1997), no. 4, 789-801.
- G. Albanese, Transformazione birazionale di una superficie algebrica in un altra priva di punti multiple, Rend. Circ. Mat. Palermo 48 (1924).
- Bierstone, Edward, Milman, Pierre D. Canonical desingularization in characteristic zero by blowing up the maximum strata of a local invariant. Invent. Math. 128 (1997), no. 2, 207-302.
- Bogomolov, Fedor A., Pantev, Tony G. Weak Hironaka theorem. Math. Res. Lett. 3 (1996), no. 3, 299-307.
- O. Chisini, La risoluzione delle singolarita di una superficie, Mem. Acad. Bologna, 8 (1921).
- Steven Dale Cutkosky Resolution of Singularities (2004) ISBN 0821835556
- V.I. Danilov, Resolution of singularities, in: Hazewinkel, Michiel (ed.), Encyclopaedia of Mathematics, Kluwer Academic Publishers, 2001, ISBN 978-1556080104
- de Jong, A. J. Smoothness, semi-stability and alterations. Inst. Hautes Études Sci. Publ. Math. No. 83 (1996), 51-93.
- Encinas, S. Hauser, Herwig Strong resolution of singularities in characteristic zero. Comment. Math. Helv. 77 (2002), no. 4, 821--845.
- Encinas, S., Villamayor, O., Good points and constructive resolution of singularities. Acta Math. 181 (1998), no. 1, 109--158.
- A. Grothendieck, J. Dieudonne, Eléments de géométrie algébrique Publ. Math. IHES, 24 (1965)
- Hauser, Herwig (2000) Resolution of singularities 1860-1999. In Resolution of singularities (Obergurgl, 1997), 5-36, Progr. Math., 181, Birkhäuser, Basel, 2000. ISBN 0817661786
- Hauser, Herwig (2003) The Hironaka theorem on resolution of singularities (or: A proof we always wanted to understand). Bull. Amer. Math. Soc. (N.S.) 40 (2003), no. 3, 323-403
- Hironaka, Heisuke Resolution of singularities of an algebraic variety over a field of characteristic zero. I, II. Ann. of Math. (2) 79 (1964), 109-203; ibid. (2) 79 1964 205-326.
- Janos Kollar, Lectures on Resolution of Singularities (2007) ISBN 0691129231 (similar to his Resolution of Singularities -- Seattle Lecture).
- B. Levi, Risoluzione delle singolarita puntualli delle superficie algebriche, Atti. Acad. Torino, 34 (1899).
- J. Lipman, Desingularization of two-dimensional schemes, Ann. Math. 107 (1978) 151-207.
- Robert J. Walker, Reduction of the Singularities of an Algebraic Surface. The Annals of Mathematics, 2nd Ser., Vol. 36, No. 2 (Apr., 1935), pp. 336–365
- Wlodarczyk, Jaroslaw Simple Hironaka resolution in characteristic zero. J. Amer. Math. Soc. 18 (2005), no. 4, 779-822
- Zariski, Oscar, The reduction of the singularities of an algebraic surface. Ann. of Math. (2) 40 (1939), 639–689.
- Zariski, Oscar Reduction of the singularities of algebraic three dimensional varieties. Ann. of Math. (2) 45, (1944). 472-542.
Reply: Quantum Field Theory I
Functional Integrals
Feynman’s path integrals are both infinite-dimensional Gaussian integrals
and continuous partition functions.
Folklore
By Theorem 11.8 on page 587, the initial-value problem for the heat equation is
completely determined by the knowledge of the heat kernel.
In terms of the heat equation, the Feynman functional integral approach
represents the heat kernel as an infinite-dimensional integral.
This fundamental idea can be generalized to the time-evolution of general physical
systems. The Feynman approach can be viewed as a generalization of the classical
Fourier method via Fourier series or Fourier integral. Physicists like functional
integrals very much, since they allow elegant explicit computations based on the
following crucial methods:
• approximation by finite-dimensional Gaussian integrals,
• the method of stationary phase,
• infinite-dimensional Gaussian integrals and the method of zeta function regularization,
and
• (formal) analytic continuation.
From the mathematical point of view, functional integrals represent an extraordinarily
useful mnemonic tool for conjecturing rigorous results. Unfortunately, the
rigorous proofs are frequently missing or they have to be based on sophisticated
methods. Let us sketch the main ideas for the heat kernel. We will restrict ourselves
to the elegant formal language used by physicists which can be immediately
translated
• to the Schrödinger equation in quantum mechanics by replacing real time by
imaginary time, t ⇒ it/h,
• and to quantum field theory.
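To make the first of these methods concrete: by the Chapman–Kolmogorov equation, the one-dimensional heat kernel K(x, y; t) = exp(−(x − y)²/4t)/√(4πt) of u_t = u_xx factorizes into a composition of short-time kernels, and on a spatial grid this composition becomes an ordinary matrix product, that is, a finite-dimensional Gaussian integral. A minimal Python sketch (the grid sizes and the unit diffusion constant are our illustrative choices, not from the text):

```python
import numpy as np

# Exact heat kernel of u_t = u_xx on the line:
def heat_kernel(x, y, t):
    return np.exp(-(x - y)**2 / (4*t)) / np.sqrt(4*np.pi*t)

# Spatial grid and time slicing.
L, N, n, t = 10.0, 401, 8, 1.0
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
dt = t / n

# One short-time kernel as a matrix; composing n of them is the
# finite-dimensional (Riemann-sum) version of the functional integral.
K_short = heat_kernel(x[:, None], x[None, :], dt)
K = K_short
for _ in range(n - 1):
    K = K @ K_short * dx   # Chapman-Kolmogorov: sum over intermediate points

approx = K[N // 2, N // 2]       # composed kernel at x = y = 0
exact = heat_kernel(0.0, 0.0, t)
print(approx, exact)             # the two values agree closely
```

Letting the number of slices n → ∞ and the grid spacing dx → 0 is exactly the formal limit that the Feynman integral notation encodes.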
The Method of Quantum Fluctuations
We want to discuss a formal method which is frequently used by physicists in order
to compute Feynman functional integrals in an elegant way. It is our goal
• to separate quantum fluctuations from the classical motion, and
• to compute the Feynman functional integral corresponding to quantum fluctuations
by the method of zeta function regularization.
We will apply this method to the heat kernel. However, the same method also
applies to the Feynman propagator kernel for the Schrödinger equation.
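The simplest instance of zeta function regularization concerns the operator A = −d²/dx² on [0, L] with Dirichlet boundary conditions, whose eigenvalues are λₙ = (nπ/L)². The spectral zeta function ζ_A(s) = Σₙ λₙ^(−s) = (L/π)^(2s) ζ(2s) is analytic at s = 0, and the regularized determinant det A := exp(−ζ_A′(0)) comes out to the classical value 2L, even though the naive product of the eigenvalues diverges. A sketch of the computation (our illustration, using the standard values ζ(0) = −1/2 and ζ′(0) = −½ ln 2π of the Riemann zeta function):

```python
import math

# Eigenvalues of A = -d^2/dx^2 on [0, L], Dirichlet: lambda_n = (n*pi/L)^2.
# Spectral zeta function: zeta_A(s) = (L/pi)^(2s) * zeta(2s), so
#   zeta_A'(0) = 2*log(L/pi)*zeta(0) + 2*zeta'(0) = -log(2*L),
# and the regularized determinant is det A = exp(-zeta_A'(0)) = 2L.
def regularized_det(L):
    zeta0 = -0.5                              # zeta(0)
    zeta_prime0 = -0.5 * math.log(2*math.pi)  # zeta'(0)
    zA_prime0 = 2*math.log(L/math.pi)*zeta0 + 2*zeta_prime0
    return math.exp(-zA_prime0)

print(regularized_det(3.0))  # 6.0, i.e. det(-d^2/dx^2) = 2L
```

This is the same device that assigns finite values to the divergent infinite-dimensional Gaussian integrals appearing above: the determinant of the fluctuation operator is defined through its zeta function.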
Reply: Quantum Field Theory I
Harmonic functions

In mathematics, mathematical physics, and the theory of stochastic processes, a
harmonic function is a twice continuously differentiable function f : U → R (where
U is an open subset of Rⁿ) that satisfies Laplace's equation on U:

∂²f/∂x₁² + ∂²f/∂x₂² + ... + ∂²f/∂xₙ² = 0.

This is often written as Δf = 0 or ∇²f = 0, where Δ is the Laplace operator.

There is also a weaker definition of a harmonic function, which is equivalent to the
one above. Using the Laplace–de Rham operator, harmonic functions can be defined
on arbitrary Riemannian manifolds; in that setting a harmonic function is defined
directly as a function satisfying Δf = 0. A C² function satisfying Δf ≥ 0 is called
subharmonic.

Examples

Examples of harmonic functions of two variables are:
• the real and imaginary parts of any holomorphic function;
• the function f(x₁, x₂) = ln(x₁² + x₂²), defined on R² \ {0} (this is the
mathematical model of the electric potential of a uniform line charge, or of the
gravitational potential of a long thin homogeneous cylinder);
• the function f(x₁, x₂) = exp(x₁) sin(x₂).

Examples of harmonic functions of n variables are:
• all constant, linear, and affine functions on Rⁿ (for example, the electric
potential between two uniformly charged infinite parallel plates);
• the function f(x₁,...,xₙ) = (x₁² + ... + xₙ²)^(1−n/2) on Rⁿ \ {0}, for n ≥ 2.

Before the three-variable examples, we fix some simplifying conventions. The
functions in the following table remain harmonic under multiplication by constants,
rotation, and addition. A harmonic function is determined by its singularities; the
singular points can be interpreted in electromagnetism as the locations of charges,
so the corresponding harmonic function can be viewed as the potential of some
charge distribution:
• a point charge at the origin,
• an x-directed dipole at the origin,
• a uniform line charge on the entire z-axis,
• a uniform line charge on the negative z-axis,
• a linear dipole on the entire z-axis,
• a linear dipole on the negative z-axis.
(The explicit formulas, rendered as images in the original table, are not
recoverable here.)

Properties

The set of all harmonic functions on a given open set U is the kernel of the
Laplace operator Δ and is therefore a vector space over R: sums, differences, and
scalar multiples of harmonic functions are again harmonic.

If f is a harmonic function on U, then all partial derivatives of f are also
harmonic on U; on the class of harmonic functions, the Laplace operator commutes
with the partial-derivative operators.

In certain ways, harmonic functions are the real-valued counterpart of holomorphic
functions. All harmonic functions are analytic, i.e., they can be locally expanded
in power series. This is a general property of elliptic operators, of which the
Laplace operator is a standard example.

The uniform limit of a convergent sequence of harmonic functions is again harmonic.
This holds because every continuous function satisfying the mean value property is
harmonic.

Connection with complex function theory

The real and imaginary parts of a holomorphic function are harmonic functions on
R². Conversely, for every harmonic function u one can find a harmonic function v
such that u + iv is holomorphic; v is called the harmonic conjugate of u and is
uniquely determined up to an additive constant. This result has applications to
the Hilbert transform and is a basic example in mathematical analysis connected
with singular integral operators. Geometrically, u and v are orthogonal: if one
draws their level curves, the two curves cross at right angles at each
intersection point. From this point of view, u + iv can be seen as a "complex
potential", where u is a potential function and v is the stream function.

Regularity of harmonic functions

Harmonic functions are always infinitely differentiable (smooth). In fact,
harmonic functions are a special kind of real analytic function.

Maximum principle

Harmonic functions satisfy the following maximum principle: if K is a compact
subset of U, then the restriction of f to K attains its maximum and minimum on
the boundary of K. If U is connected, the theorem implies that f cannot attain a
maximum or minimum unless it is a constant function. The same theorem holds for
subharmonic functions.

Mean value property

Let B(x, r) be a ball with center x and radius r contained entirely in U. Then
the value f(x) of a harmonic function equals the average of its values on the
boundary sphere of the ball, and also the average of its values over the interior
of the ball. That is,

f(x) = (1/|∂B(x, r)|) ∫_{∂B(x,r)} f dσ,

where |∂B(x, r)| denotes the surface area of the boundary sphere (a scaled copy
of the n-dimensional unit sphere).
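The mean value property is easy to verify numerically. The sketch below (our own illustration) averages the harmonic function u(x, y) = exp(x) sin(y) from the examples above over a circle and compares the average with the value at the center:

```python
import math

# A harmonic function of two variables: u(x, y) = exp(x) * sin(y).
def u(x, y):
    return math.exp(x) * math.sin(y)

# Average of u over the circle of radius r around (x0, y0),
# approximated by an equispaced Riemann sum over the angle.
def circle_average(x0, y0, r, n=10000):
    total = 0.0
    for k in range(n):
        theta = 2*math.pi*k/n
        total += u(x0 + r*math.cos(theta), y0 + r*math.sin(theta))
    return total / n

x0, y0, r = 0.3, -0.7, 1.5
print(circle_average(x0, y0, r))  # equals u(0.3, -0.7) up to quadrature error
print(u(x0, y0))
```

The equispaced sum over a smooth periodic integrand converges extremely fast, so even a modest n reproduces the mean value property to near machine precision.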
Liouville's theorem

If f is a harmonic function defined on all of Rⁿ which attains a maximum or a
minimum, then f is a constant function (compare Liouville's theorem for functions
of a complex variable).

Generalizations

One generalization of the study of harmonic functions is the study of harmonic
forms on Riemannian manifolds, which is related to the study of cohomology.
Furthermore, one can define harmonic vector-valued functions, and harmonic maps
between two Riemannian manifolds; harmonic maps arise in the theory of minimal
surfaces. For example, a map from an interval in R to a Riemannian manifold is
harmonic if and only if it is a geodesic.
Fourier analysis

Fourier analysis, also called harmonic analysis, is a branch of mathematics that
studies how to represent a function or a signal as a superposition of basic waves.
It studies and extends the notions of Fourier series and the Fourier transform.
The basic waves are called harmonics, whence the name harmonic analysis. Over the
past two centuries the subject has grown enormously and found wide application in
many fields, such as signal processing, quantum mechanics, and neuroscience.

The classical Fourier transform on Rⁿ is still an area of active research, in
particular the Fourier transform acting on more general objects, such as tempered
distributions. For example, if we impose requirements on a distribution f, we can
try to express these requirements in terms of the Fourier transform of f. The
Paley–Wiener theorem is an example: it immediately implies that if f is a nonzero
distribution with compact support (this includes compactly supported functions),
then its Fourier transform never has compact support. This is a very elementary
form of the uncertainty principle in harmonic analysis. See also classical
harmonic analysis.

The study of Fourier series becomes convenient in the setting of Hilbert spaces,
which connects harmonic analysis with functional analysis.

Abstract harmonic analysis

Mathematical analysis on topological groups is a more modern branch of harmonic
analysis, originating in the mid-twentieth century. Its main motivation is that
the various Fourier transforms can be generalized to transforms of functions
defined on locally compact abelian groups; the key point is to prove an analogue
of the Plancherel theorem.

Harmonic analysis on locally compact abelian groups rests on Pontryagin duality,
and a complete theory now exists. For general locally compact topological groups,
the task of harmonic analysis is to classify the unitary representations; the
main objects of study are Lie groups and p-adic groups.

For compact groups, every irreducible representation is a finite-dimensional
unitary representation, and the Peter–Weyl theorem asserts that the matrix
coefficients of the irreducible unitary representations form an orthogonal basis
of the L² space of the group; the associated map has properties close to those of
the Fourier transform. In this way one can probe the structure of compact groups
in depth.

For groups that are neither compact nor commutative, one has to consider
infinite-dimensional representations. There is as yet no general Plancherel
theorem, although results are available in special cases.
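For the discrete Fourier transform, the Plancherel theorem reduces to a finite identity: the squared ℓ² norm of a vector equals that of its DFT divided by the length N. A quick numpy check (our own illustration, using numpy's unnormalized FFT convention):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(256) + 1j*rng.standard_normal(256)
F = np.fft.fft(f)   # unnormalized DFT: F_k = sum_j f_j exp(-2*pi*i*j*k/N)

# Discrete Plancherel/Parseval identity: sum |f|^2 = (1/N) sum |F|^2
lhs = np.sum(np.abs(f)**2)
rhs = np.sum(np.abs(F)**2) / len(f)
print(abs(lhs - rhs))  # ~ 0 up to rounding
```

The identity expresses that the DFT, rescaled by 1/√N, is a unitary map, the finite-dimensional shadow of the Plancherel theorem discussed above.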
Other branches

• The study of the Laplace operator on manifolds or on graphs.
• Fourier analysis on Euclidean space Rⁿ: since the Fourier transform is
invariant under rotations, it can be decomposed into a radial and a spherical
component, which leads to the study of Bessel functions and spherical harmonics.
• Harmonic analysis on tube domains, a higher-dimensional generalization of Hardy
spaces.
Reply: Quantum Field Theory I
Symmetry and Special Functions
The experience of mathematicians and physicists shows that
Behind special functions, there lurk symmetry groups.
Roughly speaking, all of the important special functions that appear in mathematical
physics are governed by symmetries.
The Trouble with the Euclidean Trick
The passage t →−it from real time t in quantum physics to imaginary time −it is
called the Euclidean trick. For example, this transformation sends the Schrödinger
equation to the diffusion equation (see (11.13) on page 588). The experience of
physicists shows that this trick is quite useful. From the mathematical point of
view, note that
The Euclidean trick has to be handled very carefully.
The Euclidean trick can never be used for describing shock waves.
This fact limits the use of the Euclidean trick in quantum field theory. In terms of
mathematics, it is not possible to reduce the theory of hyperbolic partial differential
equations to the much simpler theory of elliptic partial differential equations.
Observe that quantum fields possess the typical character of hyperbolic partial
differential equations.
12. Distributions and Physics
The Discrete Dirac Calculus
For understanding the Dirac calculus used by physicists, it is useful to start with
a discrete variant of this calculus. This also helps to avoid divergent expressions,
say, in quantum electrodynamics.
Lattices
The basic idea is to use a truncated lattice in momentum space.
Self-Adjoint Operators
It was discovered by John von Neumann around 1928 that the notion of self-adjoint
operator on a Hilbert space plays a fundamental role in the mathematical approach
to quantum physics. Let X be a complex Hilbert space. The operator A : D(A) → X
is called self-adjoint iff
• the domain of definition D(A) is a linear subspace of the Hilbert space X which
is dense, that is, for each ϕ ∈ X, there exists a sequence (ϕn) in D(A) such that
limn→∞ ϕn = ϕ in X;
• the operator A is linear and formally self-adjoint, that is,
<ϕ|Aψ> = <Aϕ|ψ> for all ϕ, ψ ∈ D(A);
• the two operators (A ± iI) : D(A) → X are surjective.
The operator B : D(B) → X is called essentially self-adjoint iff there exists precisely
one self-adjoint operator A : D(A) → X which is an extension of B, that is,
D(B) ⊆ D(A) ⊆ X and Aϕ = Bϕ for all ϕ ∈ D(B).
This yields the Gelfand triple S(RN) ⊂ L2(RN) ⊂ S′(RN).
Theorem 12.4 Let A : S(RN) → S(RN) be a linear, sequentially continuous operator
which is essentially self-adjoint with respect to the Hilbert space L2(RN). Then,
this operator has a complete system {Fm}m∈M of eigendistributions.
Explicitly, this means the following. There exists a nonempty set M such that
Fm ∈ S′(RN) for all indices m ∈ M.
(i) Eigendistributions: There exists a function λ : M → R such that
Fm(Aϕ) = λ(m)Fm(ϕ)
for all indices m ∈ M and all test functions ϕ ∈ S(RN).
(ii) Completeness: If Fm(ϕ) = 0 for all m ∈ M and fixed ϕ ∈ S(RN), then ϕ = 0.
The function ϕ̂ : M → C defined by
ϕ̂(m) := Fm(ϕ) for all m ∈ M
is called the generalized Fourier transform of the function ϕ ∈ S(RN), with respect
to the operator A.
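A finite-dimensional caricature of this situation (our own illustration, not from the text): replace A = −d²/dx² by the standard second-difference matrix on a grid with zero boundary values. The matrix is symmetric, mirroring the formal self-adjointness condition <ϕ|Aψ> = <Aϕ|ψ>, and its eigenvectors, the discrete stand-ins for the eigendistributions Fm, form a complete orthonormal system:

```python
import numpy as np

# Discretize A = -d^2/dx^2 on [0, 1] with zero boundary values:
# the standard symmetric second-difference matrix on N interior points.
N = 100
h = 1.0 / (N + 1)
A = (np.diag(2.0*np.ones(N)) - np.diag(np.ones(N-1), 1)
     - np.diag(np.ones(N-1), -1)) / h**2

# Formal self-adjointness: <phi|A psi> = <A phi|psi> for all phi, psi.
rng = np.random.default_rng(1)
phi, psi = rng.standard_normal(N), rng.standard_normal(N)
print(abs(phi @ (A @ psi) - (A @ phi) @ psi))  # ~ 0: A is symmetric

# Completeness: the eigenvectors of the symmetric matrix A form an
# orthonormal basis of R^N (discrete analogue of Theorem 12.4(ii)).
evals, evecs = np.linalg.eigh(A)
print(np.allclose(evecs @ evecs.T, np.eye(N)))  # True
```

In the continuum limit the eigenvectors become sin(nπx), still honest functions; for operators like the position or momentum operator on S(R), genuine eigendistributions (delta functions, plane waves) are unavoidable, which is what the theorem accounts for.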
12.3 Fundamental Limits in Physics
Limits play a crucial role in physics. The idea is to approximate complicated phenomena
by simpler ones. For example, we have the following limits:
• high-energy limit,
• low-energy limit,
• thermodynamic limit and phase transitions,
• adiabatic limit and regularization,
• the limit from wave optics to geometric optics for short wavelengths of light,
• the limit from Einstein’s relativistic physics to Newton’s non-relativistic physics
for sufficiently low velocities, and
• the limit from quantum physics to classical physics for sufficiently large action.
Let us discuss some of the basic ideas.
12.3.1 High-Energy Limit
Experiments in particle accelerators are carried out at a fixed energy scale E per
particle. The high-energy limit corresponds to E → +∞. Such extremely high
energies were present in the early universe shortly after the Big Bang. However,
the high-energy limit and the low-energy limit E → +0 are also crucial in quantum
field theory in order to pass from lattices to the continuum limit. This is related to
the procedure of renormalization. For example, quarks behave like free particles at
very high particle energies.
12.3.2 Thermodynamic Limit and Phase Transitions
The most spectacular phenomena of thermodynamic systems are phase transitions.
For example, the freezing of water to ice at the temperature 0° Celsius represents
a phase transition. Mathematically, phase transitions can be studied by singularities
(e.g. jumps) of thermodynamic quantities. As a rule, such singularities do not
appear in thermodynamic systems of finite volume. One has to perform the limit
V → ∞,
i.e., the volume V of the system has to go to infinity. This limit is called the
thermodynamic limit. The importance of this limit in mathematical physics was
emphasized by David Ruelle in his monograph Statistical Mechanics: Rigorous Results,
New York, 1969. We also recommend the survey article by Griffith (1972)
(rigorous results) and the monograph by Minlos (2000) (mathematical statistical
physics). An extensive bibliography on statistical physics can be found in Emch
and Liu (2002) (1500 references). A collection of seminal papers in 20th century
statistical physics is contained in Stroke (1995).
The importance of phase transitions in high technology processes.
Phase transitions play a crucial role in understanding strange properties of matter.
For example, at sufficiently low temperatures one observes
• superconductivity,
• superfluidity (e.g., liquid helium), and
• condensation of Bose–Einstein gases.
For example, Cornell, Ketterle, and Wieman were awarded the Nobel prize in
physics in 2001 for the achievement of Bose–Einstein condensation in dilute gases
of alkali atoms. As an introduction to the statistical physics of these phenomena,
we recommend the classical lectures given by Feynman (1998) (14th edition).
Phase transitions in the early universe. Physicists also assume that the
cooling of the universe after the Big Bang caused several phase transitions which
were responsible for the splitting of the original unified force into gravitational,
strong, weak, and electromagnetic interaction. In the setting of the inflationary
theory, it is assumed that shortly after the Big Bang, a phase transition caused
an enormous sudden expansion of the universe which is responsible for the almost
flatness of the present universe. Moreover, the strange properties of the ground
state of Fermi gases lead to the existence of neutron stars and white dwarfs which
possess extreme mass densities. For this, we refer to Straumann (2004).
12.3.3 Adiabatic Limit
The basic idea is to compute integrals as limits of regularized integrals.
12.3.4 Singular Limit
There exist the following crucial limiting processes:
(i) λ → 0 (the wavelength of light goes to zero): the passage from Maxwell’s theory
of electromagnetism to Fermat’s geometric optics;
(ii) h → 0 (the Planck constant goes to zero): the passage from quantum mechanics
to classical mechanics;
(iii) c → ∞ (the velocity of light goes to infinity): the passage from the theory of
relativity to classical mechanics.
Since the quantities λ, h, c carry physical dimensions, they depend on the choice
of the unit system. Thus, more precisely, one has to use relative dimensionless
quantities. For example, consider a physical experiment with visible light. Let L be
a typical length scale of the experiment, and let λ be the wave length of light. The
limit (i) corresponds then to
λ/L→ 0.
In (ii) and (iii), we need a typical velocity V and a typical action S (energy times
time), respectively, and we have to study the limits
V/c→ 0,
h/S→ 0.
In daily life, we have the following typical ratios
λ/L∼ 10−6,
V/c∼ 10−8,
h/S∼ 10−34.
Here, we use the length L = 1m, the velocity V = 1m/s, and the action
S = 1kg · m · s.
Consequently, relativistic effects and quantum effects can be neglected in daily life,
and we can approximately apply the methods of geometric optics. For radio waves,
X rays, γ-rays, and cosmic rays, we have
λ/L∼ l
with l = 103 , 10−10, 10−12, 10−15, respectively. Again, L := 1m. The singular
limits (i)-(iii) play an important role in physics. We will study them later on. For
example, the short-wavelength limit for electromagnetic waves will be investigated
in Sect. 12.5.4 on page 718. This is the prototype of a singular limit.
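The quoted orders of magnitude are easy to reproduce directly (a small sketch with SI values; the specific everyday numbers, e.g. the wavelength of visible light, are our illustrative choices):

```python
h = 6.626e-34   # Planck constant, kg*m^2/s
c = 2.998e8     # speed of light, m/s

L = 1.0         # typical everyday length, m
V = 1.0         # typical everyday velocity, m/s
S = 1.0         # typical everyday action, kg*m^2/s
lam = 0.5e-6    # wavelength of visible light, m

print("lambda/L =", lam / L)   # 5e-07, the quoted ~10^-6
print("V/c      =", V / c)     # ~3.3e-09, the quoted ~10^-8
print("h/S      =", h / S)     # 6.626e-34, the quoted ~10^-34
```

The tiny size of these ratios is the quantitative content of the statement that relativistic and quantum effects are invisible in daily life.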
12.4 Duality in Physics
The concept of duality is crucial in both mathematics and physics.
Folklore
The goal is to relate apparently different problems to each other via duality in order
to simplify the mathematical treatment. There exist the following fundamental
dualities in quantum physics related to the Fourier transform and the Laplace
transform:
• particles and waves,
• time and frequency,
• time and energy,
• position and momentum,
• causality and analyticity,
• strong and weak interaction.
Causality and Analyticity
Linear response and causality force analyticity, and hence the Kramers–
Kronig dispersion relations.
Folklore
In the 1940s, physicists discovered that the polarization of the ground state of the
quantum field in quantum electrodynamics is responsible for crucial physical effects
(e.g., the Lamb shift of the hydrogen spectrum). In the late 1950s, physicists used
dispersion relations for describing scattering processes for elementary particles via
analyticity properties of the S-matrix. This led to the emergence of string theory
in the 1970s. Let us discuss the basic ideas going back to classical electrodynamics.
The key words are
• electric dipole and polarization,
• dielectricity of material media, and
• dispersion of light.
Our goal is to discuss the Kramers–Kronig dispersion relations which play a crucial
role in all physical processes which are governed by linear response and causality.
This concerns a broad class of phenomena in physics and engineering. Heisenberg’s
foundation of quantum mechanics in 1925 was strongly influenced by his joint 1924
paper with Kramers on the dispersion of light and earlier papers by Kramers.
Generally, dispersion means that processes depend on the energy spectrum. Singularities
generated by resonances arise in energy space. For scattering
processes of elementary particles, resonances appear at the energies of stable or
unstable bound states of particles.
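The Kramers–Kronig relations can be checked numerically for the damped-oscillator (Lorentzian) response χ(ω) = 1/(ω₀² − ω² − iγω), whose poles lie in the lower half plane, so χ is analytic in the upper half plane (causality) and

Re χ(ω) = (1/π) P ∫ Im χ(ω′)/(ω′ − ω) dω′.

A sketch (our own illustration; the frequency grid and cutoff are arbitrary choices) evaluating the principal value on a midpoint grid symmetric about ω, so that the singularity cancels:

```python
import numpy as np

# Lorentzian susceptibility of a damped oscillator: poles in the lower
# half plane, hence analytic in the upper half plane (causal response).
w0, gamma = 1.0, 0.3
def chi(w):
    return 1.0 / (w0**2 - w**2 - 1j*gamma*w)

# Principal-value integral Re chi(w) = (1/pi) P∫ Im chi(w')/(w' - w) dw'
# on a midpoint grid symmetric about w (the odd singular part cancels).
w = 1.2
d, K = 1e-3, 1_000_000          # spacing and half-width K*d = 1000
k = np.arange(-K, K)
wp = w + (k + 0.5)*d
re_kk = np.sum(chi(wp).imag / (wp - w)) * d / np.pi

print(re_kk)          # reconstructed from Im chi alone
print(chi(w).real)    # direct value; the two agree closely
```

This is the content of "causality forces analyticity": the real (dispersive) part of the response is completely determined by its imaginary (absorptive) part.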
Strong and Weak Interaction
We speak of strong or weak interaction if the coupling constant κ is large or small,
respectively. In the case of weak interaction, physicists successfully use the method
of perturbation theory. As a rule, this method fails for strong interaction.
In string theory, one studies models which are based on a duality between strong
and weak interaction by using, roughly speaking, the replacement
κ ⇒ 1/κ
in the framework of so-called T-duality and mirror symmetry. Physicists hope that,
in the future, this will allow us to reduce difficult problems in strong interaction
to the method of perturbation theory. We refer to Zwiebach (2004), Chap. 17, and
Polchinski (1998), Vol. 2, Chap. 19.
Dispersion relations

In physics, a dispersion relation is the relation between the energy of a system
and its corresponding momentum. For example, for particles moving freely in space
the dispersion relation follows directly from the definition of the kinetic
energy,

E = p²/(2m).

In this simple example, the dispersion relation is a quadratic function. More
complicated systems have dispersion relations of other forms.

Derivation of physical properties

Many properties of classical physical systems, such as velocity, can be carried
over to other, non-classical systems by writing them in terms of a dispersion
relation. For example, the velocity of a particle in an arbitrary system can be
defined by differentiating the corresponding dispersion relation:

v = dE/dp.

Optical dispersion relation

The energy (or frequency) of a light wave (electromagnetic wave) is proportional
to its momentum (or wave number). By Maxwell's equations, the dispersion relation
of electromagnetic waves in vacuum is linear, E = cp. By the formula of the
previous subsection, the wave speed is then v = dE/dp = c, which is the speed of
light.
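The two statements above combine into a one-line check: differentiating the free-particle dispersion relation E = p²/2m gives v = p/m, and the linear photon relation E = cp gives v = c. A sketch with sympy (our own illustration):

```python
import sympy as sp

p, m, c = sp.symbols('p m c', positive=True)

# Free particle: E = p^2/(2m)  =>  v = dE/dp = p/m
E_free = p**2 / (2*m)
v_free = sp.diff(E_free, p)
print(v_free)                # p/m

# Light in vacuum: E = c*p (linear dispersion)  =>  v = dE/dp = c
E_light = c*p
v_light = sp.diff(E_light, p)
print(v_light)               # c
```

For dispersion relations that are not linear, the same derivative gives the group velocity, which then genuinely depends on the momentum; this is the origin of the dispersion of wave packets.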
The experience of mathematicians and physicists shows that
Behind special functions, there lurk symmetry groups.
Roughly speaking, all of the important special functions that appear in mathematical
physics are governed by symmetries.
The Trouble with the Euclidean Trick
The passage t →−it from real time t in quantum physics to imaginary time −it is
called the Euclidean trick. For example, this transformation sends the Schr¨odinger
equation to the diffusion equation (see (11.13) on page 588). The experience of
physicists shows that this trick is quite useful. From the mathematical point of
view, note that
The Euclidean trick has to be handled very carefully.
The Euclidean trick can never be used for describing shock waves.
This fact limits the use of the Euclidean trick in quantum field theory. In terms of
mathematics, it is not possible to reduce the theory of hyperbolic partial differential
equations to the much simpler theory of elliptic partial differential equations.
Observe that quantum fields possess the typical character of hyperbolic partial
differential equations.
12. Distributions and Physics
The Discrete Dirac Calculus
For understanding the Dirac calculus used by physicists, it is useful to start with
a discrete variant of this calculus. This also helps to avoid divergent expressions,
say, in quantum electrodynamics.
Lattices
The basic idea is to use a truncated lattice in momentum space.
Self-Adjoint Operators
It was discovered by John von Neumann around 1928 that the notion of self-adjoint
operator on a Hilbert space plays a fundamental role in the mathematical approach
to quantum physics. Let X be a complex Hilbert space. The operator A : D(A) → X
is called self-adjoint iff
• the domain of definition D(A) is a linear subspace of the Hilbert space X which
is dense, that is, for each ϕ ∈ X, there exists a sequence (ϕn) in D(A) such that
limn→∞ ϕn = ϕ in X;
• the operator A is linear and formally self-adjoint, that is,
<ϕ|Aψ> =< Aϕ|ψ> for all ϕ, ψ ∈ D(A);
• the two operators (A ± iI) : D(A) → X are surjective.
The operator B : D(B) → X is called essentially self-adjoint iff there exists precisely
one self-adjoint operator A : D(A) → X which is an extension of B, that is,
D(B) ⊆ D(A) ⊆ X and Aϕ = Bϕ for all ϕ ∈ D(B).
S(RN) ⊂ L2(RN) ⊂ S'(RN).
Theorem 12.4 Let A : S(RN) → S(RN) be a linear, sequentially continuous operator
which is essentially self-adjoint with respect to the Hilbert space L2(RN). Then,
this operator has a complete system {Fm}m∈M of eigendistributions.
Explicitly, this means the following. There exists a nonempty set M such that
Fm ∈ S
(RN) for all indices m∈M.
(i) Eigendistributions: There exists a function λ :M→R such that
Fm(Aϕ) = λ(m)Fm(ϕ)
for all indices m∈M and all test functions ϕ ∈ S(RN).
(ii) Completeness: If Fm(ϕ) = 0 for all m∈M and fixed ϕ ∈ S(RN), then ϕ = 0.
The function ˆϕ :M→C defined by
ˆ ϕ(m) := Fm(ϕ) for all m∈M
is called the generalized Fourier transform of the function ϕ ∈ S(RN), with respect
to the operator A.
12.3 Fundamental Limits in Physics
Limits play a crucial role in physics. The idea is to approximate complicated phenomena
by simpler ones. For example, we have the following limits:
• high-energy limit,
• low-energy-limit,
• thermodynamic limit and phase transitions,
• adiabatic limit and regularization,
• the limit from wave optics to geometric optics for short wavelengths of light,
• the limit from Einstein’s relativistic physics to Newton’s non-relativistic physics
for sufficiently low velocities, and
• the limit from quantum physics to classical physics for sufficiently large action.
Let us discuss some of the basic ideas.
12.3.1 High-Energy Limit
Experiments in particle accelerators are carried out at a fixed energy scale E per
particle. The high-energy limit corresponds to E → +∞. Such extremely high
energies were present in the early universe shortly after the Big Bang. However,
the high-energy limit and the low-energy limit E → +0 are also crucial in quantum
field theory in order to pass from lattices to the continuum limit. This is related to
the procedure of renormalization. For example, quarks behave like free particles at
very high particle energies.
12.3.2 Thermodynamic Limit and Phase Transitions
The most spectacular phenomena of thermodynamic systems are phase transitions.
For example, the freezing of water to ice at the temperature 0
◦
Celsius represents
a phase transition. Mathematically, phase transitions can be studied by singularities
(e.g. jumps) of thermodynamic quantities. As a rule, such singularities do not
appear in thermodynamic systems of finite volume. One has to perform the limit
V →∞,
i.e., the volume V of the system has to go to infinity. This limit is called the
thermodynamic limit. The importance of this limit in mathematical physics was
emphasized by David Ruelle in his monograph Statistical Mechanics: Rigorous Results,
New York, 1969. We also recommend the survey article by Griffith (1972)
(rigorous results) and the monograph by Minlos (2000) (mathematical statistical
physics). An extensive bibliography on statistical physics can be found in Emch
and Liu (2002) (1500 references). A collection of seminal papers in 20th century
statistical physics is contained in Stroke (1995).
The importance of phase transitions in high technology processes.
Phase transitions play a crucial role in understanding strange properties of matter.
For example, at sufficiently low temperatures one observes
• superconductivity,
• superfluidity (e.g., liquid helium), and
• condensation of Bose–Einstein gases.
For example, Cornell, Ketterle, and Wieman were awarded the Nobel prize in
physics in 2001 for the achievement of Bose–Einstein condensation in dilute gases
of alkali atoms. As an introduction to the statistical physics of these phenomena,
we recommend the classical lectures given by Feynman (1998) (14th edition).
Phase transitions in the early universe. Physicists also assume that the
cooling of the universe after the Big Bang caused several phase transitions which
were responsible for the splitting of the original unified force into gravitational,
strong, weak, and electromagnetic interaction. In the setting of the inflationary
theory, it is assumed that shortly after the Big Bang, a phase transition caused
an enormous sudden expansion of the universe which is responsible for the almost
flatness of the present universe. Moreover, the strange properties of the ground
state of Fermi gases lead to the existence of neutron stars and white dwarfs which
possess extreme mass densities. For this, we refer to Straumann (2004).
12.3.3 Adiabatic Limit
The basic idea is to compute integrals as limits of regularized integrals.
12.3.4 Singular Limit
There exist the following crucial limiting processes:
(i) λ → 0 (the wave length of light goes to zero): the passage from Maxwell’s theory
of electromagnetism to Fermat’s geometric optics;
(ii) h → 0 (the Planck constant goes to zero): the passage from quantum mechanics
to classical mechanics;
(iii) c→∞ (the velocity of light goes to infinity): the passage from the theory of
relativity to classical mechanics.
Since the quantities λ, h, c carry physical dimensions, they depend on the choice
of the unit system. Thus, more precisely, one has to use relative dimensionless
quantities. For example, consider a physical experiment with visible light. Let L be
a typical length scale of the experiment, and let λ be the wave length of light. The
limit (i) corresponds then to
λ/L→ 0.
In (ii) and (iii), we need a typical velocity V and a typical action S (energy times
time), respectively, and we have to study the limits
V/c→ 0,
h/S→ 0.
In daily life, we have the following typical ratios
λ/L∼ 10−6,
V/c∼ 10−8,
h/S∼ 10−34.
Here, we use the length L = 1m, the velocity V = 1m/s, and the action
S = 1kg · m · s.
Consequently, relativistic effects and quantum effects can be neglected in daily life,
and we can approximately apply the methods of geometric optics. For radio waves,
X rays, γ-rays, and cosmic rays, we have
λ/L∼ l
with l = 103 , 10−10, 10−12, 10−15, respectively. Again, L := 1m. The singular
limits (i)-(iii) play an important role in physics. We will study them later on. For
example, the short-wavelength limit for electromagnetic waves will be investigated
in Sect. 12.5.4 on page 718. This is the prototype of a singular limit.
12.4 Duality in Physics
The concept of duality is crucial in both mathematics and physics.
Folklore
The goal is to relate apparently different problems to each other via duality in order
to simplify the mathematical treatment.6 There exist the following fundamental
dualities in quantum physics related to the Fourier transform and the Laplace
transform:
• particles and waves,
• time and frequency,
• time and energy,
• position and momentum,
• causality and analyticity,
• strong and weak interaction.
Causality and Analyticity
Linear response and causality force analyticity, and hence the Kramers–
Kronig dispersion relations.
Folklore
In the 1940s, physicists discovered that the polarization of the ground state of the
quantum field in quantum electrodynamics is responsible for crucial physical effects
(e.g., the Lamb shift of the hydrogen spectrum). In the late 1950s, physicists used
dispersion relations for describing scattering processes for elementary particles via
analyticity properties of the S-matrix. This led to the emergence of string theory
in the 1970s. Let us discuss the basic ideas going back to classical electrodynamics.
The key words are
• electric dipole and polarization,
• dielectricity of material media, and
• dispersion of light.
Our goal is to discuss the Kramers–Kronig dispersion relations which play a crucial
role in all physical processes which are governed by linear response and causality.
This concerns a broad class of phenomena in physics and engineering. Heisenberg’s
foundation of quantum mechanics in 1925 was strongly influenced by his joint 1924
paper with Kramers on the dispersion of light and earlier papers by Kramers.
Generally, dispersion means that processes depend on the energy spectrum. Singularities
arise in the energy space which are generated by resonances. For scattering
processes of elementary particles, resonance appears for the energies of stable or
unstable bound states of particles.
Strong and Weak Interaction
We speak of strong or weak interaction if the coupling constant κ is large or small,
respectively. In the case of weak interaction, physicists successfully use the method
of perturbation theory. As a rule, this method fails for strong interaction.
In string theory, one studies models which are based on a duality between strong
and weak interaction by using, roughly speaking, the replacement
κ ⇒ 1/κ
in the framework of so-called T-duality and mirror symmetry. Physicists hope that,
in the future, this will allow us to reduce difficult problems in strong interaction
to the method of perturbation theory. We refer to Zwiebach (2004), Chap. 17, and
Polchinski (1998), Vol. 2, Chap. 19.
Dispersion Relations
In physics, a dispersion relation is the relation between the energy of a system and its corresponding momentum. For example, the dispersion relation of a particle moving freely through space follows directly from the definition of the kinetic energy:

E(p) = p²/(2m).

In this simple example the dispersion relation is a quadratic function; more complicated systems have dispersion relations of other forms.

Derivation of physical properties

Many properties of a classical physical system, such as the velocity, can be carried over to non-classical systems by expressing them through a dispersion relation. For example, the velocity of a particle in an arbitrary system can be defined by differentiating its dispersion relation:

v = dE/dp.

Dispersion relation for light

The energy (or frequency) of a light wave (an electromagnetic wave) is proportional to its momentum (or wave number). By Maxwell's equations, the dispersion relation of an electromagnetic wave in vacuum is linear:

ω = ck.

By the formula of the preceding section, the wave velocity is dω/dk = c, the speed of light.
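Both dispersion relations can be differentiated numerically. A minimal Python sketch (our own; the mass, the grids, and all parameter values are arbitrary choices):

```python
import numpy as np

# Free particle: E(p) = p^2 / (2m); the velocity dE/dp should equal p / m.
m = 2.0
p = np.linspace(0.1, 5.0, 500)
E = p**2 / (2 * m)
v = np.gradient(E, p)                  # numerical derivative dE/dp
assert np.allclose(v[1:-1], p[1:-1] / m, rtol=1e-3)

# Light in vacuum: omega(k) = c*k, so d(omega)/dk is the constant c.
c = 299_792_458.0
k = np.linspace(1.0, 10.0, 500)
vg = np.gradient(c * k, k)
print(vg[0])                           # ~3.0e8 m/s, the speed of light
```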
一星 - Posts: 3787
Join date: 2013-08-07
Re: Quantum Field Theory I
Symplectic Geometry

Symplectic geometry is a branch of differential geometry that studies the geometry and topology of symplectic manifolds. It originated in close connection with classical mechanics, and it also has important links to algebraic geometry, mathematical physics, and geometric topology. Unlike Riemannian geometry, the other great branch of differential geometry, symplectic geometry cannot measure lengths, only areas, and a symplectic manifold carries no local invariant analogous to the curvature of Riemannian geometry. As a result, the study of symplectic geometry has a strongly global character.
Basic definition

Let M be a 2n-dimensional differentiable manifold. A 2-form ω on M is called a symplectic structure (or symplectic form) if ω satisfies:
- ω is closed, that is, dω = 0;
- ω is nondegenerate, that is, ω^n (the n-fold exterior power of ω) is a nowhere-vanishing 2n-form.
The pair (M, ω) is called a symplectic manifold. In short, symplectic geometry is the geometry that studies the properties of symplectic manifolds; it is usually regarded as part of differential geometry.
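In the model case M = R^(2n), the standard symplectic form is ω(u, v) = uᵀJv with the block matrix J below. A small numerical check (our own illustration in Python) confirms antisymmetry and nondegeneracy:

```python
import numpy as np

# Standard symplectic form on R^(2n): omega(u, v) = u^T J v with
# J = [[0, I_n], [-I_n, 0]].  Closedness is automatic for a constant-
# coefficient form; nondegeneracy appears as det(J) != 0.
n = 3
I = np.eye(n)
J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])

assert np.allclose(J.T, -J)                 # antisymmetry: omega(u,v) = -omega(v,u)
assert abs(np.linalg.det(J) - 1.0) < 1e-12  # nondegenerate (det J = 1)

# Concretely: omega(u, .) = 0 forces u = 0, so a nonzero u pairs nontrivially.
rng = np.random.default_rng(0)
u = rng.standard_normal(2 * n)
assert np.linalg.norm(J @ u) > 0
```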
Examples of symplectic manifolds

For a compact differentiable manifold to admit a symplectic structure, it must be orientable and its second cohomology group must have nonzero rank.

Kähler manifolds

A large class of compact symplectic manifolds comes from complex algebraic geometry. For instance, complex projective n-space carries a standard symplectic form, the Fubini–Study form, and the restriction of the Fubini–Study form to any smooth complex projective variety is again a symplectic form. More generally, every Kähler manifold is a symplectic manifold.

Cotangent bundles

The cotangent bundle of any differentiable manifold carries a canonical symplectic form; this gives a large class of non-compact symplectic manifolds. Indeed, the cotangent bundle can be regarded as the phase space of classical mechanics, and general symplectic manifolds are its generalization.
History of symplectic geometry

Darboux's theorem and Weinstein's theorem

Darboux's theorem is the first important theorem of symplectic geometry. It asserts that near any point of a symplectic manifold there is a local coordinate system in which the symplectic form becomes the standard symplectic form of Euclidean space; such coordinates are called Darboux coordinates. This shows that, unlike Riemannian geometry, symplectic geometry has no local notion such as curvature, and all properties of a symplectic manifold should be global.

In analogy with Darboux's theorem, Alan Weinstein proved that every embedded Lagrangian submanifold L has a tubular neighborhood on which the restriction of the symplectic form is equivalent to the canonical symplectic form on the cotangent bundle of L. Such a neighborhood is called a Weinstein neighborhood.
Pseudo-holomorphic curves

A milestone in the development of symplectic geometry came in 1985, when the Russian mathematician M. Gromov introduced the notion of a pseudo-holomorphic curve and proved remarkable results such as the non-squeezing theorem. This theory later developed into Gromov–Witten invariants, Floer homology, and other theories of central importance in symplectic geometry.

The Arnold conjecture

The Soviet mathematician V. I. Arnold conjectured that a symplectomorphism of a compact symplectic manifold must have at least a certain number of fixed points, estimating this number by analogy with the Morse inequalities of topology. This conjecture became a guiding program for symplectic geometry during the last two decades of the twentieth century. To prove the Arnold conjecture, the German mathematician Andreas Floer introduced the notion of Floer homology, now an essential tool of the field.

Mirror symmetry

In string theory, physicists discovered that Calabi–Yau manifolds (a special class of symplectic manifolds) exhibit a phenomenon called "mirror symmetry": the complex geometry of one Calabi–Yau manifold corresponds to the symplectic geometry of another Calabi–Yau manifold, its mirror. This viewpoint has strongly influenced research in symplectic geometry since the 1990s. In particular, the "homological mirror symmetry" conjecture proposed by Maxim Kontsevich (Fields Medal, 1998) and the "Fukaya category" introduced by the Japanese geometer Kenji Fukaya play a very important role in modern symplectic geometry.
Symplectic manifolds

Overview

In mathematics, a symplectic manifold is a smooth manifold equipped with a closed, nondegenerate 2-form ω, called the symplectic form. The study of symplectic manifolds is called symplectic topology. Symplectic manifolds arise naturally in the abstract formulations of classical and analytical mechanics as cotangent bundles of manifolds; this is one of the main motivations for the field: in the Hamiltonian formulation of classical mechanics, the space of all configurations of a system is modeled by a manifold, and the cotangent bundle of that manifold describes the phase space of the system.
Details

Any real-valued differentiable function H on a symplectic manifold can serve as an energy function, or Hamiltonian. Associated with every Hamiltonian is a Hamiltonian vector field, whose integral curves are the solutions of Hamilton's equations of motion. The Hamiltonian vector field defines a flow on the symplectic manifold, the Hamiltonian flow, whose time maps are symplectomorphisms. By Liouville's theorem, the Hamiltonian flow preserves the volume form on phase space.

More formally, a symplectic manifold is a differentiable manifold with a special structure, the symplectic structure. Let M be a differentiable manifold carrying a nondegenerate closed exterior 2-form σ; then σ is called a symplectic structure on M, and M a symplectic manifold with symplectic structure σ. The symplectic structure of a differentiable manifold is modeled on the symplectic structure of a vector space. Let V be an m-dimensional vector space carrying a bilinear form σ that satisfies: (1) antisymmetry, σ(α, β) = −σ(β, α) for all α, β ∈ V; (2) nondegeneracy, that is, if σ(α, β) = 0 for all β ∈ V, then α = 0. Then σ is called a symplectic structure on V, and V a symplectic vector space with symplectic structure σ. For a differentiable manifold M with symplectic structure σ, viewing σ(x) at each point x ∈ M as a bilinear form on the tangent space TxM endows TxM with the structure of a symplectic vector space. A symplectic vector space V, and likewise a symplectic manifold M, is necessarily even-dimensional.
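A discrete illustration of the Hamiltonian flow and Liouville's theorem (our own sketch; the oscillator Hamiltonian H = (q² + p²)/2, the symplectic-Euler scheme, and the step size are assumptions, not from the text):

```python
import numpy as np

# Harmonic oscillator H = (p^2 + q^2) / 2, integrated with the symplectic-Euler
# scheme p -> p - h*q, q -> q + h*p_new.  One step is the linear map M below;
# its determinant is exactly 1, so each step preserves phase-space area,
# a discrete shadow of Liouville's theorem.
h = 0.1
M = np.array([[1 - h**2, h],
              [-h,       1.0]])       # (q, p) -> M @ (q, p)
assert abs(np.linalg.det(M) - 1.0) < 1e-12

q, p = 1.0, 0.0
energies = []
for _ in range(10000):                # long integration: no secular energy drift
    p -= h * q
    q += h * p
    energies.append(0.5 * (q**2 + p**2))
assert 0.45 < min(energies) and max(energies) < 0.55
```

An explicit Euler step, by contrast, has Jacobian determinant 1 + h² > 1: it inflates phase-space area and the energy drifts upward without bound.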
Canonical quantization

In physics, canonical quantization is one of several mathematical procedures for quantizing a classical theory; applied to a classical field theory it is also called second quantization. The word "canonical" actually comes from the classical theory: it refers to a particular structure of the theory, the symplectic structure, which is preserved in the quantum theory. This was first emphasized by Paul Dirac in his attempt to construct quantum field theory.

Ordinary quantum mechanics can handle only systems in which the particle number is conserved. In relativistic quantum mechanics, however, particles can be created and annihilated, and the mathematical formalism of ordinary quantum mechanics no longer applies. Second quantization handles the creation and annihilation of particles by introducing creation and annihilation operators; it is the indispensable mathematical tool for building relativistic quantum mechanics and quantum field theory. Compared with the ordinary formulation of quantum mechanics, second quantization treats the symmetry and antisymmetry of identical particles naturally and concisely, so it is widely used even in nonrelativistic many-body problems where the particle number is conserved.
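A finite-dimensional caricature of creation and annihilation operators (our own illustration; the truncation to N levels is the only approximation, and N is an arbitrary choice):

```python
import numpy as np

# Annihilation operator a on a Fock space truncated to N levels:
# a|n> = sqrt(n) |n-1>.
N = 8
a = np.diag(np.sqrt(np.arange(1, N)), k=1)      # annihilation
adag = a.T                                       # creation
comm = a @ adag - adag @ a

# [a, a†] = 1 holds except in the last slot, an artifact of the truncation.
assert np.allclose(np.diag(comm)[:-1], 1.0)
# a†a is the particle-number operator, diag(0, 1, ..., N-1).
assert np.allclose(adag @ a, np.diag(np.arange(N)))
```

The exact canonical commutation relation [a, a†] = 1 is recovered only in the infinite-dimensional limit N → ∞.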
Moment maps

In mathematics, and in symplectic geometry in particular, the moment map is a tool associated with a Hamiltonian action of a Lie group on a symplectic manifold; it can be used to construct conserved quantities of the action. The moment map generalizes the classical notions of linear and angular momentum. It is an essential ingredient in various constructions of symplectic manifolds, including symplectic (Marsden–Weinstein) quotients, symplectic cuts, and symplectic sums.
Poisson manifolds

In mathematics, a Poisson manifold is a differentiable manifold M whose algebra C∞(M) of smooth functions is equipped with a bilinear map, the Poisson bracket, turning it into a Poisson algebra.

Every symplectic manifold is a Poisson manifold, but not conversely.
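On the simplest Poisson manifold, R² with coordinates (q, p), the bracket is {f, g} = ∂f/∂q ∂g/∂p − ∂f/∂p ∂g/∂q. A small numerical check (our own sketch; the finite-difference step and the test point are arbitrary):

```python
import numpy as np

# Canonical Poisson bracket on R^2 with coordinates (q, p):
# {f, g} = df/dq dg/dp - df/dp dg/dq, approximated by central differences.
def pbracket(f, g, q, p, h=1e-5):
    dfq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    dfp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dgq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dgp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return dfq * dgp - dfp * dgq

q, p = 0.7, -1.2                                  # an arbitrary phase-space point
assert abs(pbracket(lambda q, p: q, lambda q, p: p, q, p) - 1.0) < 1e-8

# Hamilton's equations as brackets: dq/dt = {q, H} = p for H = (q^2 + p^2)/2.
H = lambda q, p: 0.5 * (q**2 + p**2)
assert abs(pbracket(lambda q, p: q, H, q, p) - p) < 1e-6
```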
Almost complex manifolds

In mathematics, an almost complex manifold is a smooth manifold equipped with a smoothly varying linear complex structure on each tangent space. The existence of such a structure is a necessary, but not sufficient, condition for a manifold to be a complex manifold: every complex manifold is an almost complex manifold, but not conversely. Almost complex structures have important applications in symplectic geometry.

The concept was introduced by Ehresmann and Hopf in the 1940s.
String topology

String topology is a mathematical subject that has emerged in recent years. Broadly speaking, it studies the topological properties of the path space of a manifold and their applications in differential geometry, homological algebra, mathematical physics, and other fields.

Introduction to string topology

In 1999 the American mathematicians Moira Chas and Dennis Sullivan posted their research paper "String topology" online (www.arxiv.org). In this paper they proved that the homology of the free loop space of a manifold carries the structure of a Gerstenhaber algebra and of a Batalin–Vilkovisky algebra (BV algebra for short), thereby obtaining a new class of topological invariants of manifolds. Sullivan and his collaborators subsequently published several further papers on loop spaces and path spaces of manifolds, investigating the topology of these spaces in more depth. Their work quickly attracted the interest of many mathematicians and gave rise to wide-ranging research, concentrated mainly on the following questions:
- Which algebraic properties of a manifold give rise to these two algebraic structures on its loop space?
- What kind of invariants of the manifold are these new invariants? For instance, are they homeomorphism invariants, or homotopy invariants?
- Since all known examples of BV algebras come from string theory, does string topology admit a string-theoretic interpretation?
- More and more researchers in symplectic geometry, and in symplectic field theory in particular, have noticed that symplectic field theory and string topology study similar objects; what exactly is the relation between the two?
- String topology studies the loop space of a manifold; what applications does it have in low-dimensional topology, for instance to 3-manifolds and knot theory?
All of this research is now collectively called string topology.
Hamiltonian matrices

In mathematics, a 2n × 2n matrix A is a Hamiltonian matrix if JA is a symmetric matrix, where J is the block matrix

J = [[0, I_n], [−I_n, 0]],

and I_n is the n × n identity matrix. In other words, A is a Hamiltonian matrix if and only if (JA)^T = JA, where ()^T denotes the transpose.[1]
Properties

Suppose the 2n × 2n matrix A is written as the block matrix

A = [[a, b], [c, d]],

where a, b, c, d are n × n matrices. Then the condition "A is Hamiltonian" is equivalent to the condition "b and c are symmetric and a + d^T = 0".[1][2] Another equivalent condition is that A = JS for some symmetric matrix S.[2]:34

From the definition of the transpose it follows easily that the transpose of a Hamiltonian matrix is Hamiltonian, the sum of two Hamiltonian matrices is Hamiltonian, and the commutator of two Hamiltonian matrices is Hamiltonian. The space of all Hamiltonian matrices of order 2n forms a Lie algebra, denoted sp(2n), of dimension 2n² + n. The Lie group corresponding to this Lie algebra is the symplectic group Sp(2n), which may be regarded as the group of symplectic matrices, a matrix A being symplectic if it satisfies A^T J A = J. Consequently, the exponential of a Hamiltonian matrix is a symplectic matrix, and the logarithm of a symplectic matrix is a Hamiltonian matrix.[2]:34–36[3]

The characteristic polynomial of a real Hamiltonian matrix is an even function, so if λ is an eigenvalue of a Hamiltonian matrix, then −λ, λ* and −λ* are eigenvalues as well.[2]:45 It follows that the trace of a Hamiltonian matrix is zero.

The square of a Hamiltonian matrix is a skew-Hamiltonian matrix (a matrix A is skew-Hamiltonian if it satisfies (JA)^T = −JA); conversely, every skew-Hamiltonian matrix is the square of a Hamiltonian matrix.[4]
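The properties above are easy to verify numerically. The Python sketch below is our own illustration: the random symmetric matrix S and the Taylor-series matrix exponential are assumptions made for the example.

```python
import numpy as np

# Build a random Hamiltonian matrix as A = J S with S symmetric.
n = 3
rng = np.random.default_rng(1)
S = rng.standard_normal((2 * n, 2 * n))
S = (S + S.T) / 2
I = np.eye(n)
J = np.block([[np.zeros((n, n)), I], [-I, np.zeros((n, n))]])
A = J @ S

assert np.allclose((J @ A).T, J @ A)            # defining property (JA)^T = JA
a, b = A[:n, :n], A[:n, n:]
c, d = A[n:, :n], A[n:, n:]
assert np.allclose(b, b.T) and np.allclose(c, c.T) and np.allclose(a + d.T, 0)
assert abs(np.trace(A)) < 1e-12                 # the trace vanishes

ev = np.linalg.eigvals(A)                       # spectrum symmetric under lam -> -lam
assert all(np.min(np.abs(ev + lam)) < 1e-8 for lam in ev)

def expm(M, terms=60):                          # simple Taylor-series exponential
    out, term = np.eye(len(M)), np.eye(len(M))
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

Ex = expm(A)
assert np.allclose(Ex.T @ J @ Ex, J)            # exp(Hamiltonian) is symplectic
```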
Extension to complex matrices

The definition of a Hamiltonian matrix can be extended to complex matrices in two ways. One is to require, as above, that (JA)^T = JA;[1][4] the other is to use the condition (JA)* = JA, where ()* denotes the conjugate transpose.[5]
Hamiltonian operators

Let V be a vector space equipped with a symplectic form Ω. A linear transformation A : V → V is called a Hamiltonian operator with respect to Ω if the form (x, y) ↦ Ω(A(x), y) is symmetric. If one chooses a basis e1, …, e2n of V in which Ω is written in the standard form, then a linear operator on V is Hamiltonian with respect to Ω if and only if the matrix of the operator in this basis is a Hamiltonian matrix.[4]
References
[1] Ikramov, Khakim D., "Hamiltonian square roots of skew-Hamiltonian matrices revisited", Linear Algebra and its Applications 325 (2001), 101–107. doi:10.1016/S0024-3795(00)00304-9.
[2] Meyer, K. R.; Hall, G. R., Introduction to Hamiltonian Dynamical Systems and the N-Body Problem, Springer, 1991. ISBN 0-387-97637-X.
[3] Dragt, Alex J., "The symplectic group and classical mechanics", Annals of the New York Academy of Sciences 1045(1) (2005), 291–307. doi:10.1196/annals.1350.025.
[4] Waterhouse, William C., "The structure of alternating-Hamiltonian matrices", Linear Algebra and its Applications 396 (2005), 385–390. doi:10.1016/j.laa.2004.10.003.
[5] Paige, Chris; Van Loan, Charles, "A Schur decomposition for Hamiltonian matrices", Linear Algebra and its Applications 41 (1981), 11–32. doi:10.1016/0024-3795(81)90086-0.
Re: Quantum Field Theory I
Maxwell's Equations

Maxwell's equations[1] are a set of partial differential equations, established in the 19th century by the British physicist James Clerk Maxwell, that describe the relations between the electric field, the magnetic field, the charge density, and the current density. The set consists of four equations: Gauss's law, which describes how electric charges produce electric fields; Gauss's law for magnetism, which asserts that magnetic monopoles do not exist; the Maxwell–Ampère law, which describes how electric currents and time-varying electric fields produce magnetic fields; and Faraday's law of induction, which describes how time-varying magnetic fields produce electric fields.

From Maxwell's equations one can deduce that light waves are electromagnetic waves. Maxwell's equations and the Lorentz force equation are the fundamental equations of classical electromagnetism, and from the theory built on them modern electrical and electronic technology developed.

The original form of the equations, proposed by Maxwell in 1865, consisted of 20 equations in 20 variables. In 1873 he attempted, without success, to express them in terms of quaternions. The mathematical form used today was reformulated in the language of vector analysis by Oliver Heaviside and Josiah Willard Gibbs in 1884.
Overview

Maxwell's equations comprise the following four equations[2]:
- Gauss's law describes how an electric field is generated by electric charges: electric field lines begin on positive charges and end on negative charges. Counting the electric field lines passing through a given closed surface, that is, the electric flux, tells us the total charge enclosed by that surface. In more detail, the law relates the electric flux through an arbitrary closed surface to the electric charge inside it.
- Gauss's law for magnetism states that magnetic monopoles do not actually exist in the universe. Since there is no magnetic charge, magnetic field lines have no starting points and no end points: they form closed loops or extend to infinity. In other words, any magnetic field line that enters a region must also leave it. In technical terms, the magnetic flux through an arbitrary closed surface is zero, or: the magnetic field is a solenoidal vector field.
- Faraday's law of induction describes how a time-varying magnetic field generates (induces) an electric field. Electromagnetic induction in this sense is the working principle of many electric generators: a rotating bar magnet, for example, creates a time-varying magnetic field, which in turn generates an electric field that induces a current in a nearby closed loop.
- The Maxwell–Ampère law states that magnetic fields can be generated in two ways: by electric currents (the original Ampère law) and by time-varying electric fields (Maxwell's correction term). In electromagnetism, Maxwell's correction term means that a time-varying electric field can generate a magnetic field, while by Faraday's law of induction a time-varying magnetic field can generate an electric field. Together, the two equations allow self-sustaining electromagnetic waves to propagate through space (for more detail see the article on the electromagnetic wave equation).
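The first bullet can be checked numerically: by Gauss's law, the flux of a point charge's electric field through any closed surface enclosing it equals q/ε₀. A Python sketch (our own illustration; the charge value, cube size, and grid resolution are arbitrary choices):

```python
import numpy as np

# Point charge q at the origin; the total E-flux through a cube around it
# should equal q / eps0 regardless of the cube's size.
eps0 = 8.8541878128e-12
q = 1e-9

def E(r):                                   # Coulomb field, vectorized over points
    rn = np.linalg.norm(r, axis=-1, keepdims=True)
    return q * r / (4 * np.pi * eps0 * rn**3)

N, L = 400, 1.0                             # N x N sample points per cube face
s = (np.arange(N) + 0.5) / N * 2 * L - L    # midpoint grid on [-L, L]
u, v = np.meshgrid(s, s)
dA = (2 * L / N) ** 2
flux = 0.0
for axis in range(3):                       # two opposite faces per axis
    for sign in (+1.0, -1.0):
        pts = np.zeros((N, N, 3))
        pts[..., axis] = sign * L
        pts[..., (axis + 1) % 3] = u
        pts[..., (axis + 2) % 3] = v
        flux += sign * E(pts)[..., axis].sum() * dA

print(flux, q / eps0)                       # the two numbers nearly coincide
```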
Summary of the equations

The form of Maxwell's equations changes slightly with the system of units chosen: the overall structure remains the same, but different constants appear at different places inside the equations. The International System of Units (SI) is the most widely used: the whole of engineering adopts it, as do most chemists and nearly all undergraduate physics textbooks[3]. Other common systems are Gaussian units, Lorentz–Heaviside units, and Planck units. Gaussian units, derived from the centimeter-gram-second system, are better suited to teaching, since they make the equations look simpler and easier to understand[3]; they are treated in detail later. Lorentz–Heaviside units, also derived from the CGS system, are used mainly in particle physics[4]. Planck units are a system of natural units whose units are defined by the properties of nature, not by human convention; they are a very useful tool in theoretical physics and can provide great insight[5][6]. In this section all equations are written in SI units.

Two equivalent formulations of Maxwell's equations are presented here. The first combines free charge and bound charge into the total charge required by Gauss's law, and combines free current, bound current, and polarization current into the total current of the Maxwell–Ampère law. It takes a fundamental, microscopic point of view, and can be applied to compute the electric and magnetic fields produced in vacuum by finitely many source charges and currents. For the vast numbers of electrons and nuclei inside matter, however, such a computation is impossible in practice; nor does classical electromagnetism need answers of that precision.

The second formulation takes the free charge and the free current as the sources, without directly computing the contributions of the bound charge appearing in dielectrics and of the bound and polarization currents appearing in magnetized materials. Since in typical practical situations the directly controllable parameters are the free charge and the free current, while bound charge, bound current, and polarization current are phenomena produced by the polarization of matter, this formulation makes calculations inside dielectric or magnetized materials much simpler[7].

Maxwell's equations appear to form an overdetermined system: they involve only six unknowns (the electric and magnetic field vectors, three components each; the currents and charges are not unknowns but freely prescribed quantities obeying charge conservation), yet there are eight equations (the two Gauss laws give one equation each, while Faraday's law and the Ampère law give three each). This situation reflects a certain limited redundancy of the system: one can show that any system satisfying Faraday's law and the Ampère law automatically satisfies the two Gauss laws.[8][9]
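The redundancy just noted rests on the vector identity div(curl F) = 0: taking the divergence of Faraday's law and of the Maxwell–Ampère law (together with charge conservation) shows that the two Gauss laws, once imposed at the initial time, are preserved. The identity itself can be checked numerically; the sketch below is our own, with an arbitrary smooth sample field.

```python
import numpy as np

# div(curl F) = 0 identically; finite-difference check on a periodic grid.
n = 40
h = 1.0 / n
x = np.arange(n) * h
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
F = [np.sin(2 * np.pi * X) * np.cos(2 * np.pi * Y),
     np.cos(2 * np.pi * Y) * np.sin(2 * np.pi * Z),
     np.sin(2 * np.pi * Z) * np.cos(2 * np.pi * X)]

def d(f, axis):                    # periodic central difference
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

curl = [d(F[2], 1) - d(F[1], 2),
        d(F[0], 2) - d(F[2], 0),
        d(F[1], 0) - d(F[0], 1)]
div_curl = d(curl[0], 0) + d(curl[1], 1) + d(curl[2], 2)
assert np.max(np.abs(div_curl)) < 1e-10   # zero up to rounding error
```

Because the shift operators along different axes commute, the discrete identity holds exactly, not merely to truncation order.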
Table: the microscopic Maxwell equations

Formulation in terms of total charge and total current:
| Gauss's law | ∇·E = ρ/ε₀ |
| Gauss's law for magnetism | ∇·B = 0 |
| Faraday's law of induction | ∇×E = −∂B/∂t |
| Maxwell–Ampère law | ∇×B = μ₀J + μ₀ε₀ ∂E/∂t |
Table: the macroscopic Maxwell equations

Formulation in terms of free charge and free current:
| Gauss's law | ∇·D = ρ_f |
| Gauss's law for magnetism | ∇·B = 0 |
| Faraday's law of induction | ∇×E = −∂B/∂t |
| Maxwell–Ampère law | ∇×H = J_f + ∂D/∂t |
Table of symbols

The following table gives the physical meaning of each symbol, together with its units:
Re: Quantum Field Theory I
Microscopic and macroscopic scales

Maxwell's equations are usually applied to "macroscopic averages" of the fields. When the scale shrinks to microscopic dimensions, approaching the size of individual atoms, the local fluctuations of the fields can no longer be ignored and quantum phenomena begin to appear. Only on the premise of macroscopic averaging do quantities such as the permittivity and permeability of a material acquire meaningful values.

The radius of the heaviest nuclei is about 7 femtometers (7 × 10⁻¹⁵ m), so in classical electromagnetism "microscopic" refers to length scales above about 10⁻¹⁴ m. At such scales electrons and nuclei may be treated as point charges and the microscopic Maxwell equations hold; otherwise the charge distribution inside the nucleus must be taken into account. The electric and magnetic fields computed at microscopic scales still vary violently, with spatial variations over distances below 10⁻¹⁰ m and temporal variations with periods between 10⁻¹⁷ and 10⁻¹³ s. The microscopic Maxwell equations must therefore be subjected to a classical averaging procedure to obtain the smooth, continuous, slowly varying macroscopic fields. The lower limit of the macroscopic scale is about 10⁻⁸ m, which means that the reflection and refraction of electromagnetic waves can be described by the macroscopic Maxwell equations. A cube with this lower limit as edge length, of volume 10⁻²⁴ m³, contains about 10⁶ nuclei and electrons, and the classically averaged behavior of that many particles suffices to smooth out any violent fluctuations. According to the standard references, the classical averaging need only be carried out in space, not in time, and the quantum effects of the atoms need not be considered[7].
Classical averaging is a relatively simple averaging procedure. Given a function F(r, t), its spatial average is defined as[7]

⟨F⟩(r, t) = ∫ F(r − r′, t) w(r′) d³r′ ;

where the integral extends over the averaging region and w is a weight function. Many functions can serve as good weight functions; an example is the Gaussian (with averaging radius R):

w(r) = (πR²)^(−3/2) exp(−r²/R²) .
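In one dimension the effect of such an averaging is easy to see. The Python sketch below is our own illustration (the field, the oscillation frequencies, and the radius R are invented for the example): the Gaussian weight suppresses the fast microscopic oscillation and retains the slow macroscopic part.

```python
import numpy as np

# A fast "microscopic" oscillation on top of a slow "macroscopic" background.
x = np.linspace(0, 10, 2001)
dx = x[1] - x[0]
slow = np.cos(x)                          # macroscopic part
f = slow + np.sin(200 * x)                # plus a microscopic fluctuation

R = 0.3                                   # averaging radius
w = np.exp(-((np.arange(-300, 301) * dx) ** 2) / R**2)
w /= w.sum()                              # normalized Gaussian weight function
avg = np.convolve(f, w, mode="same")      # the spatial average <f>

interior = slice(400, -400)               # ignore the convolution edges
assert np.max(np.abs(avg[interior] - slow[interior])) < 0.05
```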
The earliest Maxwell equations and the theory attached to them were designed for macroscopic matter; they constituted a phenomenology. At that time, physicists did not understand the fundamental causes of electromagnetic phenomena. Only later, following the particle picture of matter, were the microscopic Maxwell equations derived. In the first half of the twentieth century, the breakthroughs in quantum mechanics, relativity, and particle physics were combined with the microscopic Maxwell equations into the cornerstone of quantum electrodynamics. This is the most accurate theory in physics, whose computed results match experimental data precisely[10].

Proof that the two formulations are equivalent

The two formulations of Maxwell's equations presented above are mathematically equivalent.
History

Although some historians hold that Maxwell was not the originator of the modern Maxwell equations, he did derive all the relevant equations independently while constructing his molecular vortex model. All four equations of the modern Maxwell system can be found, in recognizable form, in Maxwell's 1861 paper "On Physical Lines of Force", in his 1865 paper "A Dynamical Theory of the Electromagnetic Field", and in Volume 2, Part IV, Chapter IX, "General Equations of the Electromagnetic Field", of his famous Treatise on Electricity and Magnetism of 1873, albeit without any trace of vector notation or the gradient symbol. This textbook, required reading for later generations of physics students, predates the publications of Heaviside, Heinrich Hertz, and others.

Evolution of Maxwell's equations

The term "Maxwell's equations" originally referred to the set of eight equations that Maxwell presented in his 1865 paper "A Dynamical Theory of the Electromagnetic Field"[11]. The four equations in common use today are the result of Heaviside's reformulation of 1884[12]; Josiah Willard Gibbs and Hertz obtained similar results independently at about the same time. For a long time these equations were known collectively as the Hertz–Heaviside equations, the Maxwell–Hertz equations, or the Maxwell–Heaviside equations[12][13].

Maxwell's own chief contribution to these equations is contained in his 1861 paper "On Physical Lines of Force": he added the displacement-current term to Ampère's law, turning it into the Maxwell–Ampère law[14]. This added term later enabled him, in "A Dynamical Theory of the Electromagnetic Field", to derive the electromagnetic wave equation and thereby to prove theoretically that light waves are electromagnetic waves[11].
Maxwell regarded the potential variables (the electric potential and the magnetic vector potential) as the central concepts of his scheme. Heaviside strongly rejected this idea, holding that potentials are metaphysical concepts and that only the electric and magnetic fields are the most fundamental, most tangible physical quantities, and he tried to eliminate the potential variables from the equations. The result of Heaviside's efforts was a symmetric pair of equations[12]:

∇×H = J_tot ,  −∇×E = M_tot ;

where J_tot is the total electric current density, including the displacement current density, H is the magnetic field strength, E is the electric field, and M_tot is the total magnetic current density, defined as

M_tot = ∂B/∂t + J_mag ;

where B is the magnetic field and J_mag is the magnetic current produced by the motion of magnetic charges. Since physicists have to this day not found any magnetic monopole, J_mag can be set to zero.
Physical meaning and units:
| E | electric field | volt/meter, newton/coulomb |
| B | magnetic induction | tesla, weber/meter², volt·second/meter² |
| D | electric displacement | coulomb/meter², newton/(volt·meter) |
| H | magnetic field strength | ampere/meter |
| ∇· | divergence operator | /meter |
| ∇× | curl operator | |
| ∂/∂t | partial derivative with respect to time | /second |
| S | surface of a surface integral | meter² |
| L | path of a path integral | meter |
| dA | infinitesimal surface element vector | meter² |
| dl | infinitesimal line element vector | meter |
| ε₀ | electric constant | farad/meter |
| μ₀ | magnetic constant | henry/meter, newton/ampere² |
| ρ_f | free charge density | coulomb/meter³ |
| ρ | total charge density | coulomb/meter³ |
| Q_f | free charge enclosed by a closed surface | coulomb |
| Q | total charge enclosed by a closed surface | coulomb |
| J_f | free current density | ampere/meter² |
| J | total current density | ampere/meter² |
| I_f | free current through the surface bounded by a closed path | ampere |
| I | total current through the surface bounded by a closed path | ampere |
| Φ_B | magnetic flux through the surface bounded by a closed path | tesla·meter², volt·second, weber |
| Φ_E | electric flux through the surface bounded by a closed path | joule·meter/coulomb |
| Φ_D | electric displacement flux through the surface bounded by a closed path | coulomb |
Re: Quantum Field Theory I
The paper "On Physical Lines of Force"

Main article: On Physical Lines of Force

In 1861, in his paper "On Physical Lines of Force", Maxwell proposed the "molecular vortex model"[14]. The Faraday effect shows that, in passing through a medium, polarized light changes its direction of polarization under the action of an external magnetic field; Maxwell therefore regarded magnetism as a rotational phenomenon[17]. In his molecular vortex model he extended the lines of force into "vortex tubes", each tube consisting of many individual vortex cells (vortex molecules). Inside each cell an incompressible fluid rotates about the rotation axis with uniform angular velocity. Because of the centrifugal force, different infinitesimal elements inside a cell experience different pressures; knowing the pressure distribution, one can compute the force felt by each element. Using the model, Maxwell analyzed and interpreted in detail the physical nature of each term of this force, giving a consistent explanation of the various magnetic phenomena and the forces that accompany them.

Maxwell raised several objections to the molecular vortex model. Suppose the vortex cells of two neighboring magnetic field lines rotate in the same sense. If friction acts between the cells, their rotation will slow down and eventually stop; if, on the other hand, the cells slide past one another smoothly, they lose the ability to transmit information. To escape this dilemma, Maxwell had an ingenious idea: he assumed that between two adjacent cells lies a row of tiny spherical particles that separates them. These particles can only roll, not slide, and they rotate in the sense opposite to the two cells, so no friction arises. The translational velocity of a particle is the average of the peripheral velocities of the two cells; this is a kinematic relation, not a dynamical one. Maxwell likened the motion of these particles to an electric current. From this model, after elaborate calculations, he was able to derive Ampère's law, Faraday's law of induction, and more.

Maxwell then endowed the cells with an elastic property. If some external force is applied to the particles, they in turn exert a shear on the cells and deform them; this represents an electrostatic state. If the external force depends on time, the deformation of the cells depends on time as well, and a current results. In this way Maxwell could model the electric displacement and the displacement current. Not only inside a medium, but even in vacuum (Maxwell believed there is no perfect vacuum: the ether pervades the whole universe), wherever there are magnetic lines of force there are vortex cells, and a displacement current can exist. Maxwell therefore extended Ampère's law by a term involving the displacement current, the "Maxwell correction term". The astute Maxwell quickly realized that, since an elastic material transmits energy through space in the form of waves, the electromagnetic field modeled by this elastic medium should likewise transmit energy through space as waves; moreover, electromagnetic waves should exhibit reflection, refraction, and other wave behavior. Maxwell computed the propagation speed of electromagnetic waves and found it very close to the speed, previously obtained from astronomy, at which light propagates through interplanetary space. He therefore concluded that light waves are a kind of electromagnetic wave.

The Maxwell equations familiar today appear many times in the paper:
- Equation (56) of the paper is Gauss's law for magnetism; in it, the mass density of the vortex cells corresponds to the magnetic permeability, and the three components of the peripheral-velocity vector of the cells (its projections onto the x-, y-, and z-axes) correspond to the three components of the magnetic field.
- Equation (112) is the Maxwell–Ampère law; in it, the number of particles passing per second through unit area corresponds to the current density, and the force felt by the particles lying between the cells corresponds to the electric field. The third term on the right-hand side of this equation is the Maxwell correction term that contains the displacement current. Later, in his 1865 paper "A Dynamical Theory of the Electromagnetic Field", Maxwell pursued this idea, derived the electromagnetic wave equation, and proved theoretically that light waves are electromagnetic waves. Interestingly, without using the concept of displacement current at all, Gustav Kirchhoff was able to derive the telegraph equations in 1857; he used instead Poisson's equation and the equation of continuity for charge, which contain precisely the mathematical content of the displacement current. Kirchhoff, however, believed that his equations applied only inside a conducting wire, and so he never discovered the fact that light waves are electromagnetic waves.
- Equation (115) is Gauss's law; in it, the number of particles per unit volume corresponds to the charge density, and the elastic constant of the cells corresponds to the reciprocal of the square root of the permittivity.
- Equations (54) and (77): in them, the momentum of the particles lying between the cells corresponds to the magnetic vector potential, and the pressure the particles exert on one another corresponds to the electric potential. Equation (54) is the one Heaviside identified as Faraday's law of induction. Faraday's original flux law treated the time-dependent and the motional aspects together; Maxwell used equation (54) for the time-dependent aspect of electromagnetic induction and equation (77) for the motional aspect. Equation (D) of Maxwell's original eight equations, listed later, is just equation (77), corresponding to the modern Lorentz force law. Maxwell had thus already derived this equation when Hendrik Lorentz was still a young boy.