In mathematics, the '''Jacobi matrix''' is the matrix of first-order partial derivatives of the (vector-valued) function:
:<math>\mathbf{f}:\quad \mathbb{R}^n \rightarrow \mathbb{R}^m.</math>
The Jacobi matrix is ''m'' × ''n'' and consists of ''m'' rows of ''n'' first-order partial derivatives of '''f''' with respect to ''x''<sub>1</sub>, ..., ''x''<sub>''n''</sub>. This matrix is also known as the ''functional matrix of Jacobi''. The determinant of the Jacobi matrix for ''n'' = ''m'' is known as the '''Jacobian'''. The Jacobi matrix and its determinant have several uses in mathematics:
* The Jacobi matrix appears in the second (linear) term of the Taylor series of '''f'''.
* The Jacobian appears as the weight (measure) in multi-dimensional integrals over generalized coordinates, i.e., over non-Cartesian coordinates.
* The inverse function theorem states that if ''m'' = ''n'' and '''f''' is continuously differentiable, then '''f''' is invertible in the neighborhood of a point '''''x'''''<sub>0</sub> if and only if the Jacobian at '''''x'''''<sub>0</sub> is non-zero.
The Jacobi matrix and its determinant are named after the German mathematician Carl Gustav Jacob Jacobi (1804 - 1851).
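To make the first and third points above concrete, here is a minimal SymPy sketch (an illustration added here, not part of the original text; the map ''g'' and the expansion point are arbitrary choices). It builds the Jacobi matrix of a map from the plane to the plane, forms the linear Taylor term, and applies the inverse-function-theorem test.

```python
# Minimal sketch, assuming the SymPy library; g and x0 are arbitrary examples.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
g = sp.Matrix([x1**2 + x2, sp.sin(x1)*x2])   # example map g: R^2 -> R^2
J = g.jacobian([x1, x2])                     # 2 x 2 Jacobi matrix (rows = components)

x0 = {x1: 1, x2: 2}                          # arbitrary expansion point
J0 = J.subs(x0)

# Second (linear) term of the Taylor series: g(x) ~ g(x0) + J0*(x - x0)
dx = sp.Matrix([x1 - 1, x2 - 2])
linear_term = J0 * dx

# Inverse function theorem: g is locally invertible at x0 iff det J0 != 0
print(J0.det())            # 2*sin(1) - 2*cos(1), which is non-zero
print(sp.expand(linear_term))
```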
==Definition==
Let '''f''' be a map of an open subset ''T'' of <math>\mathbb{R}^n</math> into <math>\mathbb{R}^n</math> with continuous first partial derivatives,
:<math>\mathbf{f}:\quad T \rightarrow \mathbb{R}^n.</math>
That is, if
:<math>\mathbf{t} = (t_1,\;t_2,\;\ldots,t_n) \in T \subset \mathbb{R}^n,</math>
then
:<math>
\begin{align}
x_1 &= f_1(t_1,t_2,\ldots,t_n) \\
x_2 &= f_2(t_1,t_2,\ldots,t_n) \\
\cdots & \cdots \\
x_n &= f_n(t_1,t_2,\ldots,t_n), \\
\end{align}
</math>
with
:<math>\mathbf{x} = (x_1,\;x_2,\;\ldots,x_n) \in \mathbb{R}^n.</math>
The ''n'' × ''n'' functional matrix of Jacobi consists of partial derivatives
:<math>
\begin{pmatrix}
\dfrac{\partial f_1}{\partial t_1} & \dfrac{\partial f_2}{\partial t_1} & \ldots & \dfrac{\partial f_n}{\partial t_1} \\
\dfrac{\partial f_1}{\partial t_2} & \dfrac{\partial f_2}{\partial t_2} & \ldots & \dfrac{\partial f_n}{\partial t_2} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial f_1}{\partial t_n} & \dfrac{\partial f_2}{\partial t_n} & \ldots & \dfrac{\partial f_n}{\partial t_n} \\
\end{pmatrix}.
</math>
The determinant of this matrix is usually written as
:<math>
\mathbf{J}_{\mathbf{f}}(\mathbf{t}) \quad\hbox{or}\quad \frac{\partial\big(f_1,f_2,\ldots,f_n\big)}{\partial\big(t_1,t_2,\ldots,t_n\big)}
</math>
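As a simple illustration of the definition (added here, not part of the original text), take ''n'' = 2 and plane polar coordinates, ''x''<sub>1</sub> = ''f''<sub>1</sub>(''r'', φ) = ''r'' cos φ and ''x''<sub>2</sub> = ''f''<sub>2</sub>(''r'', φ) = ''r'' sin φ. With the row convention used above, the functional matrix and its determinant are

:<math>
\begin{pmatrix}
\dfrac{\partial f_1}{\partial r} & \dfrac{\partial f_2}{\partial r} \\
\dfrac{\partial f_1}{\partial \phi} & \dfrac{\partial f_2}{\partial \phi}
\end{pmatrix}
=
\begin{pmatrix}
\cos\phi & \sin\phi \\
-r\sin\phi & r\cos\phi
\end{pmatrix},
\qquad
\frac{\partial(f_1,f_2)}{\partial(r,\phi)} = r\cos^2\phi + r\sin^2\phi = r.
</math>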
==Example==
Let ''T'' be the subset {''r'', θ, φ | ''r'' > 0, 0 < θ < π, 0 < φ < 2π} in <math>\mathbb{R}^3</math> and let '''f''' be defined by
:<math>
\begin{align}
x_1 &= f_1(r,\theta,\phi) = r\sin\theta\cos\phi \\
x_2 &= f_2(r,\theta,\phi) = r\sin\theta\sin\phi \\
x_3 &= f_3(r,\theta,\phi) = r\cos\theta \\
\end{align}
</math>
The Jacobi matrix is
:<math>
\begin{pmatrix}
\sin\theta\cos\phi & \sin\theta\sin\phi & \cos\theta \\
r\cos\theta\cos\phi & r\cos\theta\sin\phi & -r\sin\theta \\
-r\sin\theta\sin\phi & r\sin\theta\cos\phi & 0 \\
\end{pmatrix}
</math>
Its determinant can be obtained most conveniently by a Laplace expansion along the third column
:<math>
\cos\theta
\begin{vmatrix}
r\cos\theta\cos\phi & r\cos\theta\sin\phi \\
-r\sin\theta\sin\phi & r\sin\theta\cos\phi
\end{vmatrix}
+ r\sin\theta
\begin{vmatrix}
\sin\theta\cos\phi & \sin\theta\sin\phi \\
-r\sin\theta\sin\phi & r\sin\theta\cos\phi
\end{vmatrix}
= r^2(\cos\theta)^2\sin\theta + r^2(\sin\theta)^3 = r^2\sin\theta
</math>
The quantities {''r'', θ, φ} are known as spherical polar coordinates, and the Jacobian of this coordinate transformation is ''r''<sup>2</sup> sin θ.
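This result is easy to check symbolically. The following short sketch (an illustration added here, assuming the SymPy library) recomputes the determinant; note that SymPy's <code>jacobian()</code> places the component functions in rows and the variables in columns, i.e. the transpose of the matrix displayed above, which leaves the determinant unchanged.

```python
# Symbolic check of the spherical-polar Jacobian (illustrative sketch).
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
f = sp.Matrix([r*sp.sin(theta)*sp.cos(phi),
               r*sp.sin(theta)*sp.sin(phi),
               r*sp.cos(theta)])

J = f.jacobian([r, theta, phi])   # transpose of the matrix above; same determinant
print(sp.simplify(J.det()))       # prints r**2*sin(theta)
```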
==Coordinate transformation==
The map <math>\mathbf{f}:\; T \rightarrow \mathbb{R}^n</math> is a coordinate transformation if (i) '''f''' has continuous first derivatives on ''T'', (ii) '''f''' is one-to-one on ''T'', and (iii) the Jacobian of '''f''' is not equal to zero on ''T''.
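For the spherical polar map of the previous section all three conditions can be verified directly; the sketch below (an illustration added here, assuming SymPy) indicates how: the partial derivatives are continuous, the map is inverted explicitly, and the Jacobian is positive on ''T''.

```python
# Sketch checking the three conditions for the spherical polar map on
# T = {r > 0, 0 < theta < pi, 0 < phi < 2*pi}.
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)
x = r*sp.sin(theta)*sp.cos(phi)
y = r*sp.sin(theta)*sp.sin(phi)
z = r*sp.cos(theta)
f = sp.Matrix([x, y, z])

# (i) The entries of the Jacobi matrix are products of polynomials and
#     sines/cosines, hence continuous on T.
J = f.jacobian([r, theta, phi])

# (ii) The map is one-to-one on T because it can be inverted explicitly:
#      r = sqrt(x^2 + y^2 + z^2), theta = arccos(z/r), phi = atan2(y, x).
print(sp.sqrt(sp.trigsimp(x**2 + y**2 + z**2)))   # prints r

# (iii) The Jacobian r^2*sin(theta) is strictly positive for r > 0, 0 < theta < pi.
print(sp.simplify(J.det()))
```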
==Multiple integration==
It can be proved [1] that
:<math>
\int_{\mathbf{f}(T)} \phi(\mathbf{x})\;\mathrm{d}\mathbf{x} = \int_{T} \phi\big(\mathbf{f}(\mathbf{t})\big)\;\mathbf{J}_{\mathbf{f}}(\mathbf{t})\;\mathrm{d}\mathbf{t}.
</math>
As an example we consider the spherical polar coordinates mentioned above. Here '''x''' = '''f'''('''t''') ≡ '''f'''(''r'', θ, φ) covers all of <math>\mathbb{R}^3</math>, while ''T'' is the region {''r'' > 0, 0 < θ < π, 0 < φ < 2π}. Hence the theorem states that
:<math>
\iiint\limits_{\mathbb{R}^3} \phi(\mathbf{x})\;\mathrm{d}\mathbf{x} = \int\limits_0^\infty \int\limits_0^\pi \int\limits_0^{2\pi} \phi\big(\mathbf{x}(r,\theta,\phi)\big)\; r^2\sin\theta\;\mathrm{d}r\,\mathrm{d}\theta\,\mathrm{d}\phi.
</math>
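As a sanity check (an addition here, not part of the original article), one can evaluate a concrete integral on both sides of the theorem. The sketch below, assuming SymPy, integrates the Gaussian exp(−|'''x'''|<sup>2</sup>) over <math>\mathbb{R}^3</math> in spherical polar coordinates and compares the result with the product of three one-dimensional Gaussian integrals; both give π<sup>3/2</sup>.

```python
# Verify the change-of-variables formula on a concrete integrand (a sketch):
# in spherical polar coordinates the integrand exp(-|x|^2) becomes
# exp(-r^2) times the Jacobian r^2*sin(theta).
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

spherical = sp.integrate(sp.exp(-r**2) * r**2 * sp.sin(theta),
                         (r, 0, sp.oo), (theta, 0, sp.pi), (phi, 0, 2*sp.pi))
print(spherical)                           # pi**(3/2)

# The same integral in Cartesian coordinates factorizes into three
# one-dimensional Gaussian integrals, each equal to sqrt(pi).
x = sp.symbols('x')
cartesian = sp.integrate(sp.exp(-x**2), (x, -sp.oo, sp.oo))**3
print(sp.simplify(spherical - cartesian))  # 0
```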
==Geometric interpretation of the Jacobian==
The Jacobian has a geometric interpretation, which we expound for the case ''n'' = 3.
The following is a vector of infinitesimal length in the direction of increase in ''t''<sub>1</sub>,
:<math>
\mathrm{d}\mathbf{g}_1 \equiv \lim_{\Delta t_1 \rightarrow 0} \frac{\mathbf{f}(t_1 + \Delta t_1, t_2, t_3) - \mathbf{f}(t_1, t_2, t_3)}{\Delta t_1}\, \Delta t_1 = \frac{\partial \mathbf{f}}{\partial t_1}\, \mathrm{d}t_1
</math>
Similarly, we define
:<math>
\mathrm{d}\mathbf{g}_2 \equiv \frac{\partial \mathbf{f}}{\partial t_2}\, \mathrm{d}t_2, \quad
\mathrm{d}\mathbf{g}_3 \equiv \frac{\partial \mathbf{f}}{\partial t_3}\, \mathrm{d}t_3
</math>
The scalar triple product of these three vectors gives the volume of an infinitesimally small parallelepiped,
:<math>
\mathrm{d}V = \mathrm{d}\mathbf{g}_1 \cdot (\mathrm{d}\mathbf{g}_2 \times \mathrm{d}\mathbf{g}_3)
= \frac{\partial \mathbf{f}}{\partial t_1} \cdot \left( \frac{\partial \mathbf{f}}{\partial t_2} \times \frac{\partial \mathbf{f}}{\partial t_3} \right)\; \mathrm{d}t_1\, \mathrm{d}t_2\, \mathrm{d}t_3
</math>
The components of the first vector are given by
:<math>
\frac{\partial \mathbf{f}}{\partial t_1} \equiv \left( \frac{\partial x}{\partial t_1},\, \frac{\partial y}{\partial t_1},\, \frac{\partial z}{\partial t_1} \right) \equiv \left( \frac{\partial f_1}{\partial t_1},\, \frac{\partial f_2}{\partial t_1},\, \frac{\partial f_3}{\partial t_1} \right)
</math>
and similar expressions hold for the components of the other two derivatives.
It has been shown in the article on the scalar triple product that
:<math>
\frac{\partial \mathbf{f}}{\partial t_1} \cdot \left( \frac{\partial \mathbf{f}}{\partial t_2} \times \frac{\partial \mathbf{f}}{\partial t_3} \right) =
\begin{vmatrix}
\dfrac{\partial f_1}{\partial t_1} & \dfrac{\partial f_2}{\partial t_1} & \dfrac{\partial f_3}{\partial t_1} \\
\dfrac{\partial f_1}{\partial t_2} & \dfrac{\partial f_2}{\partial t_2} & \dfrac{\partial f_3}{\partial t_2} \\
\dfrac{\partial f_1}{\partial t_3} & \dfrac{\partial f_2}{\partial t_3} & \dfrac{\partial f_3}{\partial t_3} \\
\end{vmatrix}
\equiv \frac{\partial(f_1, f_2, f_3)}{\partial(t_1, t_2, t_3)} \equiv \mathbf{J}_{\mathbf{f}}(\mathbf{t}).
</math>
Finally,
:<math>
\mathrm{d}V = \frac{\partial(f_1, f_2, f_3)}{\partial(t_1, t_2, t_3)}\; \mathrm{d}t_1\, \mathrm{d}t_2\, \mathrm{d}t_3 \equiv \mathbf{J}_{\mathbf{f}}(\mathbf{t})\;\mathrm{d}\mathbf{t}.
</math>
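This identification of the Jacobian with a volume scale factor can also be checked numerically. The following sketch (an illustration added here, assuming NumPy; the sample point is arbitrary) approximates the three tangent vectors of the spherical polar map by finite differences and compares their scalar triple product with ''r''<sup>2</sup> sin θ.

```python
# Numerical illustration of the volume interpretation: at a sample point, the
# scalar triple product of the tangent vectors df/dt_i equals the determinant
# of the Jacobi matrix, here r^2*sin(theta).
import numpy as np

def f(r, theta, phi):
    """Spherical polar map f: (r, theta, phi) -> (x, y, z)."""
    return np.array([r*np.sin(theta)*np.cos(phi),
                     r*np.sin(theta)*np.sin(phi),
                     r*np.cos(theta)])

r0, th0, ph0 = 2.0, np.pi/3, np.pi/4      # an arbitrary sample point
h = 1e-6                                  # step for central differences

dr  = (f(r0+h, th0, ph0) - f(r0-h, th0, ph0)) / (2*h)   # df/dr
dth = (f(r0, th0+h, ph0) - f(r0, th0-h, ph0)) / (2*h)   # df/dtheta
dph = (f(r0, th0, ph0+h) - f(r0, th0, ph0-h)) / (2*h)   # df/dphi

triple = np.dot(dr, np.cross(dth, dph))   # volume scale factor
print(triple, r0**2 * np.sin(th0))        # both approximately 3.4641
```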
==Reference==
1. T. M. Apostol, ''Mathematical Analysis'', 2nd ed., Addison-Wesley (1974), sec. 15.10