Theory Notes

From Dr. GWF Drake's Research Group
 
  
 
 
INSERT FIGURE HERE

The complete set of 6 independent variables is <math>r_1, r_2, r_{12}, \theta_1,\varphi_1, \chi</math>.

If <math>r_{12}</math> were not an independent variable, then one could take the volume element to be <math> d\tau = r_1^2dr_1\sin\theta_1\,d\theta_1d\varphi_1\,r_2^2dr_2\sin\theta_2\,d\theta_2d\varphi_2. </math>

However, <math>\theta_2</math> and <math>\varphi_2</math> are no longer independent variables. To eliminate them, take the point <math>{\bf r}_1</math> as the origin of a new polar co-ordinate system, and write <math> d\tau=-r_1^2dr_1\sin\theta_1\,d\theta_1d\varphi_1\,r_{12}^2dr_{12}\sin\psi\,d\psi\, d\chi.</math>

Then for fixed <math>r_1</math> and <math>r_{12}</math>, <math> 2r_2dr_2 = -2r_1r_{12}\sin\psi\,d\psi. </math>

<math>\begin{eqnarray}
I(l_1,m_1,l_2,m_2;R) &=&\int\sin\theta_1\,d\theta_1d\varphi_1d\chi\, Y^{m_1}_{l_1}(\theta_1,\varphi_1)^{*}Y^{m_2}_{l_2}(\theta_2,\varphi_2)\nonumber\\
&\times&\int r_1dr_1r_2dr_2r_{12}dr_{12}R(r_1,r_2,r_{12})\nonumber
\end{eqnarray}</math>

Consider first the angular integral. <math>Y^{m_2}_{l_2}(\theta_2,\varphi_2)</math> can be expressed in terms of the independent variables <math>\theta_1, \varphi_1,\chi</math> by use of the rotation matrix relation <math>Y^{m_2}_{l_2}(\theta_2,\varphi_2)=\sum_m\mathcal{D}^{(l_2)}_{m_2,m}(\varphi_1,\theta_1,\chi)^*Y^m_{l_2}(\theta,\varphi),</math>

where <math>\theta, \varphi</math> are the polar angles of <math>{\bf r}_2</math> relative to <math>{\bf r}_1</math>.  The angular integral is then

<math>\begin{eqnarray}
I_{ang}&=&\int^{2\pi}_0d\chi\int^{2\pi}_0d\varphi_1\int^\pi_0\sin\theta_1\,d\theta_1Y^{m_1}_{l_1}(\theta_1,\varphi_1)^*\nonumber\\
&\times&\sum_m\mathcal{D}^{(l_2)}_{m_2,m}(\varphi_1,\theta_1,\chi)^*Y^m_{l_2}(\theta,\varphi)\nonumber
\end{eqnarray}</math>

<math>\begin{eqnarray}
I_{ang}&=&\sqrt{\frac{2l_1+1}{4\pi}}\frac{8\pi^2}{2l_1+1}\delta_{l_1,l_2}\delta_{m_1,m_2}Y^0_{l_2}(\theta,\varphi)\nonumber\\
&=&2\pi\delta_{l_1,l_2}\delta_{m_1,m_2}P_{l_2}(\cos\theta)\nonumber
\end{eqnarray}</math>

using <math>Y^0_{l_2}(\theta,\varphi)=\sqrt{\frac{2l_2+1}{4\pi}}P_{l_2}(\cos\theta)</math>.

Note that <math>P_{l_2}(\cos\theta)</math> is just a short-hand expression for a radial function because <math>\cos\theta = \frac{r_1^2+r_2^2-r_{12}^2}{2r_1r_2}</math>.

<math>\begin{eqnarray}
I(l_1,m_1,l_2,m_2;R)&=&2\pi\delta_{l_1l_2}\delta_{m_1m_2}\nonumber\\
&\times&\int^\infty_0r_1dr_1\int^\infty_0r_2dr_2\int^{r_1+r_2}_{|r_1-r_2|} r_{12}dr_{12}R(r_1,r_2,r_{12})P_{l_2}(\cos\theta)\nonumber
\end{eqnarray}</math>

where <math>\cos\theta = (r_1^2 +r^2_2-r_{12}^2)/(2r_1r_2)</math> is a purely radial function.

The above would become quite complicated for large <math>l_2</math> because <math>P_{l_2}(\cos\theta)</math> contains terms up to <math>(\cos\theta)^{l_2}</math>. However, recursion relations exist which allow any integral containing <math>P_l(\cos\theta)</math> to be expressed in terms of those containing just <math>P_0(\cos\theta)=1</math> and <math>P_1(\cos\theta)=\cos\theta</math>.

==Radial Integrals and Recursion Relations==

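A numerical spot check of radial integrals of this type, for \(l_2=0\) (so \(P_{l_2}=1\)) and \(R=e^{-2r_1-2r_2}\). The overall \(8\pi^2\) angular factor used below is the standard result for the full six-dimensional integral and is an assumption here (the derivation above keeps only the \(2\pi\,\delta\delta\) factors):

```python
import numpy as np

# For a function R(r1, r2, r12) of the three radial variables only,
#   Int d^3r1 d^3r2 R = 8 pi^2 Int r1 dr1 Int r2 dr2 Int_{|r1-r2|}^{r1+r2} r12 dr12 R.
# Take R = e^{-2 r1 - 2 r2}; the Cartesian answer is pi^2, because each
# electron contributes Int e^{-2r} 4 pi r^2 dr = pi.
r = np.linspace(0.0, 12.0, 1201)
dr = r[1] - r[0]
# Inner r12 integral in closed form: Int_{|r1-r2|}^{r1+r2} r12 dr12 = 2 r1 r2,
# so the remainder factorizes into two copies of Int r^2 e^{-2r} dr = 1/4.
w = r**2 * np.exp(-2.0 * r)
one_dim = float(np.sum((w[1:] + w[:-1]) / 2) * dr)   # trapezoid rule
total = 8 * np.pi**2 * 2 * one_dim**2
assert abs(total - np.pi**2) < 1e-4
```

The agreement with \(\pi^2\) confirms that the weight \(r_1r_2r_{12}\) and the limits \(|r_1-r_2|\le r_{12}\le r_1+r_2\) reproduce the Cartesian integral.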
Revision as of 02:53, 8 November 2012

Helium Calculations

\( \left[-\frac{\hbar^2}{2m}(\nabla^2_1 +\nabla^2_2) - \frac{Ze^2}{r_1} - \frac{Ze^2}{r_2}+\frac{e^2}{r_{12}} \right]\psi = E\psi \)

Define \(\rho = \frac{Zr}{a_0}\) where \(a_0 = \frac{\hbar^2}{me^2}\) (Bohr radius). Then

\([-\frac{\hbar^2}{2m}Z^2(\frac{me^2}{\hbar^2})^2(\nabla^2_{\rho_1}+\nabla^2_{\rho_2}) - Z^2\frac{e^2}{a_0}\rho^{-1}_1 - Z^2\frac{e^2}{a_0}\rho^{-1}_2 + \frac{e^2}{a_0}Z\rho^{-1}_{12}]\psi= E\psi\nonumber\)

But \(\frac{\hbar^2}{m}(\frac{me^2}{\hbar^2})^2 = \frac{e^2}{a_0}\) is the atomic unit (au) of energy. Therefore

\([-\frac{1}{2}(\nabla^2_{\rho_1}+\nabla^2_{\rho_2}) - \frac{1}{\rho_1} - \frac{1}{\rho_2} + \frac{Z^{-1}}{\rho_{12}}]\psi = \varepsilon\psi\nonumber\) where \(\varepsilon = \frac{Ea_0}{Z^2e^2}\)

The problem to be solved is thus \(\left[-\frac{1}{2}(\nabla^2_1+\nabla^2_2) - \frac{1}{r_1}-\frac{1}{r_2} + \frac{Z^{-1}}{r_{12}}\right]\psi = \varepsilon\psi\)
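The \(Z^{-1}\) factor in front of the electron-electron term invites perturbation theory in \(1/Z\). A minimal sketch, assuming the standard first-order value \(<1s^2|1/\rho_{12}|1s^2> = 5/8\) (a textbook result, not derived in these notes):

```python
# First-order perturbation theory in the Z^{-1} coupling of the scaled
# equation: eps ~ -1 + (5/8)/Z, and E = Z^2 * eps converts back to a.u.
# The 5/8 matrix element is an assumed standard value.
def helium_like_energy_first_order(Z):
    eps = -1.0 + (5.0 / 8.0) / Z      # scaled energy epsilon
    return Z**2 * eps                 # energy in atomic units

print(helium_like_energy_first_order(2))   # -2.75 for helium
```

For helium this gives \(-2.75\) a.u., already close to the exact \(-2.903724\cdots\) a.u. quoted below.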

[figure to be inserted]

The Hartree-Fock Method

Assume that \(\psi({\bf r}_1,{\bf r}_2)\) can be written in the form

\(\psi({\bf r}_1,{\bf r}_2) = \frac{1}{\sqrt{2}}[u_1(r_1)u_2(r_2) \pm u_2(r_1)u_1(r_2)]\nonumber\)

for the \(1s^2\ {}^1S\) ground state

\([-\frac{1}{2}(\nabla^2_1+\nabla^2_2) - \frac{1}{r_1}- \frac{1}{r_2} + \frac{Z^{-1}}{r_{12}}]\psi(r_1,r_2) = E\psi(r_1,r_2)\nonumber\)

Substitute into \(<\psi|H-E|\psi>\) and require this expression to be stationary with respect to arbitrary infinitesimal variations \(\delta u_1\) and \(\delta u_2\) in \(u_1\) and \(u_2\), i.e.

\(\frac{1}{2}<\delta u_1(r_1)u_2(r_2) \pm u_2(r_1)\delta u_1(r_2)|H-E|u_1(r_1)u_2(r_2)\pm u_2(r_1)u_1(r_2)>\nonumber\)

\(=\int\delta u_1(r_1)d{\bf r}_1\{\int d{\bf r}_2u_2(r_2)(H-E)[u_1(r_1)u_2(r_2)\pm u_2(r_1)u_1(r_2)]\}\nonumber\)

\(= 0 \ \ \ \mbox{for arbitrary}\ \delta u_1(r_1).\)

Therefore \(\{\int d{\bf r}_2 \ldots \} = 0\).

Similarly, the coefficient of \(\delta u_2\) would give

\(\int d{\bf r}_1 u_1(r_1)(H-E)[u_1(r_1)u_2(r_2) \pm u_2(r_1)u_1(r_2)] = 0\nonumber\)

Define

\(I_{12} = \int d{\bf r}\,u_1(r)u_2(r), \)

\(I_{21} = \int d{\bf r}\,u_2(r)u_1(r), \)

\(H_{ij} = \int d{\bf r}\,u_i(r)\left(-\frac{1}{2}\nabla^2 - \frac{1}{r}\right)u_j(r), \)

\(G_{ij}(r) = \int d{\bf r}^\prime\, u_i(r^\prime)\frac{1}{|{\bf r} - {\bf r}^\prime|}u_j(r^\prime)\)

Then the above equations become the pair of integro-differential equations

\([ H_0 - E + H_{22}+G_{22}(r)]u_1(r) = \mp [ I_{12}(H_0-E) + H_{12}+G_{12}(r)]u_2(r)\nonumber\)

\([H_0-E+H_{11}+G_{11}(r)]u_2(r) = \mp [I_{12}(H_0-E) + H_{12}+G_{12}(r)]u_1(r)\nonumber\)

These must be solved self-consistently for the "constants" \(I_{12}\) and \(H_{ij}\) and the functions \(G_{ij}(r)\); here \(H_0 = -\frac{1}{2}\nabla^2 - \frac{1}{r}\) is the one-electron hydrogenic Hamiltonian appearing in \(H_{ij}\).

The H.F. energy is \(E \simeq -2.87\cdots a.u.\) while the exact energy is \(E = -2.903724\cdots a.u.\)

The difference is called the "correlation energy" because it arises from the way in which the motion of one electron is correlated to the other. The H.F. equations only describe how one electron moves in the average field provided by the other.
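In numbers, using the two energies just quoted:

```python
# Correlation energy = E_exact - E_HF, with the two values quoted above.
E_HF = -2.87         # Hartree-Fock energy (a.u.)
E_exact = -2.903724  # exact nonrelativistic energy (a.u.)
E_corr = E_exact - E_HF
print(round(E_corr, 6))   # -0.033724
```

So the correlation energy is a little over 1% of the total, but far larger than spectroscopic accuracy.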

Configuration Interaction

Expand \( \psi({\bf r}_1,{\bf r}_2)= C_0u^{(s)}_1(r_1)u^{(s)}_1(r_2) + C_1u^{(p)}_1({\bf r}_1)u^{(p)}_1({\bf r}_2)\mathcal{Y}^0_{1,1,0}(\hat{\bf r}_1, \hat{\bf r}_2)+C_2u^{(d)}_1({\bf r}_1)u^{(d)}_1({\bf r}_2)\mathcal{Y}^0_{2,2,0}(\hat{\bf r}_1, \hat{\bf r}_2)+\cdots \pm\) exchange, where \( \mathcal{Y}^M_{l_1,l_2,L}(\hat{\bf r}_1, \hat{\bf r}_2)=\sum_{m_1,m_2}Y^{m_1}_{l_1}(\hat{\bf r}_1)Y^{m_2}_{l_2}(\hat{\bf r}_2)<l_1l_2m_1m_2\mid LM> \).

This works, but is slowly convergent and very laborious. The best CI calculations are accurate to \(\sim 10^{-7}\) a.u.

Hylleraas Coordinates

[E.A. Hylleraas, Z. Phys. \({\bf 48}, 469(1928)\) and \({\bf 54}, 347(1929)\)] suggested using the co-ordinates \(r_1\), \(r_2\) and \(r_{12}\) or equivalently

\(\begin{eqnarray} s &=& r_1 + r_2, \nonumber\\ t &=& r_1-r_2, \nonumber\\ u &=& r_{12}\nonumber \end{eqnarray}\)

and writing the trial functions in the form

\( \Psi({\bf r}_1,{\bf r}_2) = \sum^{i+j+k\leq N}_{i,j,k}c_{i,j,k}r_1^{i+l_1}r_2^{j+l_2}r_{12}^ke^{-\alpha r_1 - \beta r_2} \mathcal{Y}^M_{l_1,l_2,L}(\hat{r}_1,\hat{r}_2)\pm exchange \)

Diagonalizing H in this non-orthogonal basis set is equivalent to solving \( \frac{\partial E}{\partial c_{i,j,k}} = 0\nonumber \) for fixed \(\alpha\) and \(\beta\).

The diagonalization must be repeated for different values of \(\alpha\) and \(\beta\) in order to optimize the non-linear parameters.
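A small bookkeeping sketch (plain Python, no integrals): enumerating the index triples \((i,j,k)\) in the sum above shows how fast the basis grows with \(N\).

```python
# Enumerate the Hylleraas basis indices (i, j, k) with i + j + k <= N.
# The count is the tetrahedral number (N+1)(N+2)(N+3)/6.
def hylleraas_indices(N):
    return [(i, j, k)
            for i in range(N + 1)
            for j in range(N + 1 - i)
            for k in range(N + 1 - i - j)]

for N in (2, 5, 10):
    terms = hylleraas_indices(N)
    assert len(terms) == (N + 1) * (N + 2) * (N + 3) // 6
print(len(hylleraas_indices(10)))   # 286
```

Each such triple labels one \(c_{i,j,k}\), i.e. one row and column of the matrices diagonalized below.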

Completeness

The completeness of the above basis set can be shown by first writing \(r_{12}^2 = r_1^2 + r_2^2 - 2r_1r_2\cos\Theta_{12}\) with \(\cos\Theta_{12}=\frac{4\pi}{3}\sum^1_{m=-1}Y^{m*}_1(\theta_1,\varphi_1)Y^m_1(\theta_2,\varphi_2)\), and considering first S-states. The \(r_{12}^0\) terms are like the ss terms in a CI calculation. The \(r_{12}^2\) terms bring in p-p type contributions, and the higher powers bring in d-d, f-f etc. type terms. In general \( P_l(\cos\Theta_{12}) = \frac{4\pi}{2l+1}\sum^l_{m=-l}{Y^{m}_l}^*(\theta_1,\varphi_1)Y^m_l(\theta_2,\varphi_2)\)
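The \(l=1\) term of this expansion is just the law of cosines above; a numeric spot check (in Cartesian form the \(l=1\) addition theorem reduces to the dot product of the two unit vectors):

```python
import numpy as np

# Check r12^2 = r1^2 + r2^2 - 2 r1 r2 cos(Theta_12), with cos(Theta_12)
# computed as the dot product of the two unit vectors (the Cartesian form
# of the l = 1 spherical-harmonic addition theorem).
rng = np.random.default_rng(0)
r1_vec = rng.normal(size=3)
r2_vec = rng.normal(size=3)
r1, r2 = np.linalg.norm(r1_vec), np.linalg.norm(r2_vec)
cos_theta12 = r1_vec @ r2_vec / (r1 * r2)
r12 = np.linalg.norm(r1_vec - r2_vec)
assert abs(r12**2 - (r1**2 + r2**2 - 2 * r1 * r2 * cos_theta12)) < 1e-12
```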

For P-states, one would have similarly

\(\begin{eqnarray} r_{12}^0\ \ \ \ \ \ \ \ (sp)P\nonumber\\ r_{12}^2\ \ \ \ \ \ \ \ (pd)P\nonumber\\ r_{12}^4\ \ \ \ \ \ \ \ (df)P\nonumber\\ \vdots \ \ \ \ \ \ \ \ \ \ \vdots\nonumber \end{eqnarray}\)

For D-states

\(\begin{eqnarray} r_{12}^0\ \ \ \ \ \ \ \ (sp)D\ \ \ \ \ \ \ \ (pp^\prime)D\nonumber\\ r_{12}^2\ \ \ \ \ \ \ \ (pd)D\ \ \ \ \ \ \ \ (dd^\prime)D\nonumber\\ r_{12}^4\ \ \ \ \ \ \ \ (df)D\ \ \ \ \ \ \ \ (ff^\prime)D\nonumber\\ \vdots \ \ \ \ \ \ \ \ \ \ \vdots\ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \vdots\nonumber\end{eqnarray}\)

In this case, since there are two ``lowest-order'' couplings to form a D-state, both must be present in the basis set, i.e.

\(\begin{eqnarray} \Psi({\bf r}_1,{\bf r}_2) &=& \sum c_{ijk}r_1^ir_2^{j+2}r_{12}^ke^{-\alpha r_1-\beta r_2}\mathcal{Y}^M_{022}(\hat{r}_1,\hat{r}_2)\nonumber\\ &+&\sum d_{ijk}r_1^{i+1}r_2^{j+1}r_{12}^ke^{-\alpha^\prime r_1 - \beta^\prime r_2}\mathcal{Y}^M_{112}(\hat{r}_1,\hat{r}_2)\nonumber \end{eqnarray}\)

For F-states, one would need \((sf)F\) and \((pd)F\) terms.

For G-states, one would need \((sg)G\), \((pf)G\) and \((dd^\prime)G\) terms.

Completeness of the radial functions can be proven by considering the Sturm-Liouville problem

\( \left(-\frac{1}{2}\nabla^2-\frac{\lambda}{r}-E\right)\psi({\bf r}) = 0 \) or \( \left(-\frac{1}{2}\frac{1}{r^2}\frac{\partial}{\partial r}\left(r^2\frac{\partial}{\partial r}\right) + \frac{l\left(l+1\right)}{2r^2} - \frac{\lambda}{r} - E\right)u(r) = 0 \) for fixed E and variable \(\lambda\) (nuclear charge).

The eigenvalues are \(\lambda_n = (E/E_n)^{1/2}\), where \(E_n =- \frac{1}{2n^2}\)

INSERT FIGURE HERE

\( u_{nl}(r) = \frac{1}{(2l+1)!}\left(\frac{(n+l)!}{(n-l-1)!\,2n}\right)^{1/2}(2\alpha)^{3/2}(2\alpha r)^l e^{-\alpha r}\,{}_1F_1(-(n-l-1);\,2l+2;\,2\alpha r), \)


with \(\alpha = (-2E)^{1/2}\) and \(n\geq l+1\).


Unlike the hydrogen spectrum, which has both a discrete part for \(E<0\) and a continuous part for \(E>0\), this forms an entirely discrete set of finite polynomials, called Sturmian functions. They are orthogonal with respect to the potential

i.e. \(\int^\infty_0 r^2dr\,u_{n^\prime l}(r)\frac{1}{r}u_{nl}(r) = \delta_{n,n^\prime} \)
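A numerical spot check of this \(1/r\) orthogonality for the two lowest \(l=0\) Sturmians. Their explicit forms (\(u_1 \propto e^{-\alpha r}\), \(u_2 \propto (1-\alpha r)e^{-\alpha r}\)) are the standard ones, assumed here since the notes do not list them:

```python
import numpy as np

# 1/r orthogonality of the two lowest l = 0 Sturmian functions.
# u1 ~ e^{-a r} and u2 ~ (1 - a r) e^{-a r} are the assumed standard forms.
a = 1.3                                  # alpha = (-2E)^{1/2}, any a > 0
r = np.linspace(1e-6, 40.0, 200001)
u1 = np.exp(-a * r)
u2 = (1.0 - a * r) * np.exp(-a * r)

def weighted(f):
    """Trapezoid rule for Int_0^inf r^2 dr f(r) / r."""
    g = r * f                            # r^2 * f / r = r * f
    return float(((g[1:] + g[:-1]) / 2 * np.diff(r)).sum())

assert abs(weighted(u1 * u2)) < 1e-6     # orthogonal with respect to 1/r
assert weighted(u1 * u1) > 0.1           # but each function is non-null
```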


Since they become complete in the limit \(n\rightarrow\infty\), this assures the completeness of the variational basis set.

[see B Klahn and W.A. Bingel Theo. Chim. Acta (Berlin) \({\bf 44}, 9\) and \(27 (1977)\)].

Solutions of the Eigenvalue Problem

For convenience, write


\(\Psi({\bf r}_1,{\bf r}_2) = \sum^N_{m=1}c_m\varphi_m\nonumber \)

where \(m\) labels the \(m\)'th combination of \(i,j,k\)

\(\varphi_{ijk}=r_1^ir_2^jr_{12}^ke^{-\alpha r_1 - \beta r_2}\mathcal{Y}^M_{l_1,l_2,L}(\hat{r}_1,\hat{r}_2) \pm exchange.\)

For N = 2 the eigenvalue problem can be solved by an orthogonal rotation through an angle \(\theta\):

\(\left( \begin{array}{cc} \cos\theta & \sin\theta\\ -\sin\theta & \cos\theta \end{array} \right) \left( \begin{array}{cc} H_{11} & H_{12}\\ H_{12} & H_{22} \end{array}\right) \left( \begin{array}{cc} \cos\theta & -\sin\theta\\ \sin\theta & \cos\theta \end{array} \right)\)

\( = \left(\begin{array}{cc} cH_{11}+sH_{12} & cH_{12} + sH_{22}\\ -sH_{11} + cH_{12} & -sH_{12} + cH_{22} \end{array}\right) \left( \begin{array}{cc} c & -s\\ s & c \end{array}\right)\)

\( = \left(\begin{array}{cc} c^2H_{11}+s^2H_{22} + 2csH_{12} & (c^2-s^2)H_{12}+cs(H_{22}-H_{11})\\ (c^2-s^2)H_{12}+cs(H_{22}-H_{11}) & s^2H_{11}+c^2H_{22}-2csH_{12} \end{array}\right)\)

where \(c = \cos\theta\) and \(s = \sin\theta\).


Therefore \( (\cos^2\theta-\sin^2\theta)H_{12} = \cos\theta\sin\theta(H_{11}-H_{22}) \) and \( \tan(2\theta) = \frac{2H_{12}}{H_{11}-H_{22}} \)

i.e.

\(\begin{eqnarray} \cos\theta&=&\left(\frac{r+\omega}{2r}\right)^{1/2},\nonumber\\ \sin\theta&=&-{\rm sgn}(H_{12})\left(\frac{r-\omega}{2r}\right)^{1/2}\nonumber \end{eqnarray}\)

where

\(\begin{eqnarray} \omega &=& H_{22}-H_{11}\nonumber\\ r&=&\left(\omega^2+4H_{12}^2\right)^{1/2}\nonumber\\ E_1&=&\frac{1}{2}\left(H_{11}+H_{22}-r\right)\nonumber\\ E_2&=&\frac{1}{2}\left(H_{11}+H_{22}+r\right)\nonumber \end{eqnarray}\)
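These closed forms can be checked against a library eigensolver; a quick sketch for an arbitrary symmetric 2x2 matrix:

```python
import numpy as np

# Verify the closed-form 2x2 eigenvalues E1, E2 and the rotation angle
# formulas above against numpy's symmetric eigensolver.
H11, H22, H12 = 1.0, 3.0, 0.7
omega = H22 - H11
r = np.sqrt(omega**2 + 4 * H12**2)
E1 = 0.5 * (H11 + H22 - r)
E2 = 0.5 * (H11 + H22 + r)
H = np.array([[H11, H12], [H12, H22]])
assert np.allclose([E1, E2], np.linalg.eigvalsh(H))

# The rotation defined by cos(theta), sin(theta) diagonalizes H:
c = np.sqrt((r + omega) / (2 * r))
s = -np.sign(H12) * np.sqrt((r - omega) / (2 * r))
R = np.array([[c, s], [-s, c]])
Hrot = R @ H @ R.T
assert abs(Hrot[0, 1]) < 1e-12                  # off-diagonal vanishes
assert np.allclose(np.diag(Hrot), [E1, E2])     # diagonal gives E1, E2
```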

Brute Force Method

-Gives all the eigenvalues and eigenvectors, but it is slow.

-First orthonormalize the basis set, i.e. form linear combinations

\( \Phi_m = \sum_n\varphi_nR_{nm} \) such that \(<\Phi_m|\Phi_n> = \delta_{m,n}\). This can be done by finding an orthogonal transformation T such that

\( T^TOT=I=\left( \begin{array}{cccc} I_1 & 0 & \ldots & 0\\ 0 & I_2 & \ & 0 \\ \vdots & \ & I_3 & 0 \\ 0 & 0 & 0 & \ddots \end{array}\right); \ \ O_{mn} = <\varphi_m|\varphi_n> \) and then applying a scale change matrix \( S = \left(\begin{array}{cccc} \frac{1}{I_1^{1/2}} & 0 & \ldots & 0\\ 0 & \frac{1}{I_2^{1/2}} & \ & 0 \\ \vdots & \ & \frac{1}{I_3^{1/2}} & 0 \\ 0 & 0 & 0 & \ddots \end{array}\right)= S^T \)

Then \(S^TT^TOTS = 1\), i.e. \(R^TOR = 1\) with \(R=TS\).

If H is the matrix with elements \(H_{mn}=<\varphi_m|H|\varphi_n>\), then H expressed in the \(\Phi_m\) basis set is \( H^\prime = R^THR. \)

We next diagonalize \(H^\prime\) by finding an orthogonal transformation W such that \( W^TH^\prime W = \lambda = \left( \begin{array}{cccc} \lambda_1 & 0 & \ldots & 0\\ 0 & \lambda_2& \ & 0 \\ \vdots & \ & \ddots & 0 \\ 0 & 0 & 0 & \lambda_N \end{array}\right) \)

The q'th eigenvector is

\(\begin{eqnarray} \Psi^{(q)} &=& \sum_n\Phi_n W_{n,q}\nonumber \\ &=& \sum_{n,n^\prime}\varphi_{n^\prime}R_{n^\prime ,n}W_{n,q}\nonumber \end{eqnarray}\) i.e. \(c_{n^\prime}^{(q)} = \sum_n R_{n^\prime n} W_{n,q}\).
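The whole brute-force chain \(R = TS\), \(H^\prime = R^THR\), \(c^{(q)} = RW\) can be sketched numerically; a random symmetric \(H\) and positive-definite \(O\) stand in for the real matrix elements:

```python
import numpy as np

# Brute-force method: orthonormalize with R = T S, transform H,
# diagonalize, and map the eigenvectors back (c = R W).
rng = np.random.default_rng(1)
N = 6
A = rng.normal(size=(N, N))
O = A @ A.T + N * np.eye(N)         # stand-in overlap matrix, SPD
H = rng.normal(size=(N, N)); H = 0.5 * (H + H.T)

I_diag, T = np.linalg.eigh(O)       # T^T O T = diag(I_1 ... I_N)
S = np.diag(1.0 / np.sqrt(I_diag))  # scale change matrix
R = T @ S
assert np.allclose(R.T @ O @ R, np.eye(N))    # R^T O R = 1

Hp = R.T @ H @ R                    # H' = R^T H R
lam, W = np.linalg.eigh(Hp)         # W^T H' W = diag(lambda)
C = R @ W                           # columns c^(q) solve H c = lambda O c
for q in range(N):
    assert np.allclose(H @ C[:, q], lam[q] * (O @ C[:, q]))
```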


The Power Method

-Based on the observation that if H has one eigenvalue \(\lambda_M\) much bigger than all the rest, and \(\chi = \left(\begin{array}{c} a_1\\ a_2\\ \vdots \end{array}\right)\) is an arbitrary starting vector, then \(\chi = \sum_q x_q\Psi^{(q)}\).

\(\begin{eqnarray} (H)^n\chi = \sum_q x_q \lambda^n_q\Psi^{(q)}\nonumber\\ \rightarrow x_M\lambda_M^n\Psi^{(M)}\nonumber \end{eqnarray}\) provided \(x_M\neq 0\).
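A minimal sketch of the bare power method (illustrative NumPy code with a made-up matrix): repeated application of H amplifies the component along the dominant eigenvector, and rescaling at each step only fixes the normalization.

```python
import numpy as np

# Made-up symmetric matrix with one clearly dominant eigenvalue (~10)
H = np.array([[10.0, 1.0, 0.5],
              [ 1.0, 2.0, 0.3],
              [ 0.5, 0.3, 1.0]])

chi = np.ones(3)                  # arbitrary start; x_M != 0 here
for _ in range(100):
    chi = H @ chi                 # apply H repeatedly
    chi /= np.linalg.norm(chi)    # rescale to avoid overflow

lam_M = chi @ H @ chi             # dominant eigenvalue estimate
```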

To pick out the eigenvector corresponding to any eigenvalue, write the original problem in the form

\(\begin{eqnarray} H\Psi = \lambda O\Psi\nonumber\\ (H-\lambda_qO)\Psi = (\lambda - \lambda_q)O\Psi\nonumber \end{eqnarray}\)

Therefore, \( G\Psi = \frac{1}{\lambda-\lambda_q}\Psi \) where \(G=(H-\lambda_qO)^{-1}O\) with eigenvalues \(\frac{1}{\lambda_n-\lambda_q}\).

By picking \(\lambda_q\) close to any one of the \(\lambda_n\), say \(\lambda_{n^\prime}\), then \(\frac{1}{\lambda_n-\lambda_q}\) is much larger for \(n=n^\prime\) than for any other value. The sequence is then

\(\begin{eqnarray} \chi_1=G\chi\nonumber\\ \chi_2=G\chi_1\nonumber\\ \chi_3=G\chi_2\nonumber\\ \vdots\nonumber \end{eqnarray}\)

until the ratios of components in \(\chi_n\) stop changing.

- To avoid matrix inversion and multiplication, note that the sequence is equivalent to

\( F\chi_n = (\lambda-\lambda_q)O\chi_{n-1} \)

where \(F = H-\lambda_qO\). The factor of \((\lambda - \lambda_q)\) can be dropped because this only affects the normalization of \(\chi_n\). To find \(\chi_n\), solve \( F\chi_n = O\chi_{n-1} \) (N equations in N unknowns). Then

\( \lambda = \frac{<\chi_n|H|\chi_n>}{<\chi_n|\chi_n>}\nonumber \)
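This inverse-iteration scheme can be sketched as follows (illustrative NumPy code; the small H, O and shift are made up for the example). Each step solves \(F\chi_n = O\chi_{n-1}\) rather than inverting F, and for the generalized problem the converged eigenvalue is read off from the Rayleigh quotient with the overlap in the denominator.

```python
import numpy as np

# Made-up symmetric H and positive-definite overlap O for illustration
H = np.array([[1.0, 0.2, 0.0],
              [0.2, 3.0, 0.3],
              [0.0, 0.3, 6.0]])
O = np.array([[1.0, 0.1, 0.0],
              [0.1, 1.0, 0.1],
              [0.0, 0.1, 1.0]])

lam_q = 2.5                       # shift chosen close to the eigenvalue wanted
F = H - lam_q * O                 # F = H - lambda_q O
chi = np.ones(3)                  # arbitrary starting vector

for _ in range(50):
    chi = np.linalg.solve(F, O @ chi)   # solve F chi_n = O chi_{n-1}
    chi /= np.linalg.norm(chi)          # normalization is arbitrary

# Generalized Rayleigh quotient <chi|H|chi> / <chi|O|chi>
lam = (chi @ H @ chi) / (chi @ O @ chi)
```

With the shift at 2.5, the iteration converges to the generalized eigenvalue nearest the shift.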

Matrix Elements of H

\( H=-\frac{1}{2}\nabla^2_1 -\frac{1}{2}\nabla^2_2 - \frac{1}{r_1} - \frac{1}{r_2} +\frac{Z^{-1}}{r_{12}} \)

Taking \(r_1, r_2\) and \(r_{12}\) as independent variables,

\(\begin{eqnarray} \nabla_1^2 = \frac{1}{r_1^2}\frac{\partial}{\partial r_1}\left(r_1^2\frac{\partial}{\partial r_1}\right)+ \frac{1}{r_{12}^2}\frac{\partial}{\partial r_{12}} \left(r_{12}^2\frac{\partial}{\partial r_{12}}\right) -\frac{l_1(l_1+1)}{r_1^2}\nonumber\\ +2(r_1-r_2\cos(\theta))\frac{1}{r_{12}}\frac{\partial^2}{\partial r_1 \partial r_{12}} - 2(\nabla_1 \cdot {\bf r}_2)\frac{1}{r_{12}}\frac{\partial}{\partial r_{12}}\nonumber \end{eqnarray}\)

where \(\theta\) is the angle between \({\bf r}_1\) and \({\bf r}_2\). [figure to be inserted]

The complete set of 6 independent variables is \(r_1, r_2, r_{12}, \theta_1,\varphi_1, \chi\).

If \(r_{12}\) were not an independent variable, then one could take the volume element to be \( d\tau = r_1^2dr_1\sin(\theta_1)d\theta_1d\varphi_1r_2^2dr_2\sin(\theta_2)d\theta_2d\varphi_2.\nonumber \)

However, \(\theta_2\) and \(\varphi_2\) are no longer independent variables. To eliminate them, take the point \({\bf r}_1\) as the

origin of a new polar co-ordinate system, and write \( d\tau=-r_1^2dr_1\sin(\theta_1)d\theta_1d\varphi_1r_{12}^2dr_{12}\sin(\psi)d\psi d\chi \)

and use \(r_2^2=r_1^2+r_{12}^2 +2r_1r_{12}\cos(\psi).\)


Then for fixed \(r_1\) and \(r_{12}\), \( 2r_2dr_2 = -2r_1r_{12}\sin(\psi)d\psi \)


Thus \( d\tau= r_1dr_1r_2dr_2r_{12}dr_{12}\sin(\theta_1)d\theta_1d\varphi_1d\chi \)
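As a consistency check on this volume element (a hypothetical numerical example, not from the notes): the normalized product of two hydrogenic 1s orbitals, \(\psi = (Z^3/\pi)e^{-Z(r_1+r_2)}\), should integrate to 1. The angular factors contribute \(\int\sin\theta_1 d\theta_1 d\varphi_1 d\chi = 8\pi^2\), leaving a three-dimensional radial integral with weight \(r_1r_2r_{12}\) and \(r_{12}\) between \(|r_1-r_2|\) and \(r_1+r_2\).

```python
import numpy as np
from scipy.integrate import tplquad

Z = 1.0

# Radial part of |psi|^2 with the Hylleraas weight r1*r2*r12;
# the r12 limits enforce the triangle condition.
J, err = tplquad(
    lambda r12, r2, r1: r1 * r2 * r12 * np.exp(-2.0 * Z * (r1 + r2)),
    0.0, 15.0,                                # r1 (cut off; integrand decays fast)
    lambda r1: 0.0, lambda r1: 15.0,          # r2
    lambda r1, r2: abs(r1 - r2),              # r12 lower limit
    lambda r1, r2: r1 + r2,                   # r12 upper limit
)

# Angular factor 8*pi^2 times normalization (Z^3/pi)^2; should give 1
norm = (Z**3 / np.pi) ** 2 * 8.0 * np.pi**2 * J
```

The inner \(r_{12}\) integral gives \(2r_1r_2\), so analytically \(J = 1/(8Z^6)\) and the norm is exactly 1.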


The basic type of integral to be calculated is


\(\begin{eqnarray} I(l_1,m_1,l_2,m_2;R) =\int\sin(\theta_1)d\theta_1d\varphi_1d\chi Y^{m_1}_{l_1}(\theta_1,\varphi_1)^{*}Y^{m_2}_{l_2}(\theta_2,\varphi_2)\nonumber\\ \times\int r_1dr_1r_2dr_2r_{12}dr_{12}R(r_1,r_2,r_{12})\nonumber \end{eqnarray}\)


Consider first the angular integral. \(Y^{m_2}_{l_2}(\theta_2,\varphi_2)\) can be expressed in terms of the independent variables \(\theta_1, \varphi_1,\chi\) by use of the rotation matrix relation


\( Y^{m_2}_{l_2}(\theta_2,\varphi_2) =\sum_m\mathcal{D}^{(l_2)}_{m_2,m}(\varphi_1,\theta_1,\chi)^*Y^m_{l_2}(\theta,\varphi) \)


where \(\theta, \varphi\) are the polar angles of \({\bf r}_2\) relative to \({\bf r}_1\). The angular integral is then


\(\begin{eqnarray} I_{ang}=\int^{2\pi}_0d\chi\int^{2\pi}_0d\varphi_1\int^\pi_0\sin(\theta_1)d\theta_1Y^{m_1}_{l_1}(\theta_1,\varphi_1)^*\nonumber\\ \times\sum_m\mathcal{D}^{(l_2)}_{m_2,m}(\varphi_1,\theta_1,\chi)^*Y^m_{l_2}(\theta,\varphi)\nonumber \end{eqnarray}\)


Use

\( Y^{m_1}_{l_1}(\theta_1,\varphi_1)^* = \sqrt{\frac{2l_1+1}{4\pi}}\mathcal{D}^{(l_1)}_{m_1,0}(\varphi_1,\theta_1,\chi) \)


together with the orthogonality property of the rotation matrices (Brink and Satchler, p 147)

\( \int\mathcal{D}^{(j)*}_{m,m^\prime}\mathcal{D}^{(J)}_{M,M^\prime}\sin(\theta_1)d\theta_1d\varphi_1d\chi = \frac{8\pi^2}{2j+1}\delta_{jJ}\delta_{mM}\delta_{m^\prime M^\prime} \)


to obtain


\(\begin{eqnarray} I_{ang}=\sqrt{\frac{2l_1+1}{4\pi}}\frac{8\pi^2}{2l_1+1}\delta_{l_1,l_2}\delta_{m_1,m_2}Y^0_{l_2}(\theta,\varphi)\nonumber\\ =2\pi\delta_{l_1,l_2}\delta_{m_1,m_2}P_{l_2}(\cos\theta)\nonumber \end{eqnarray}\)


since


\(Y^0_{l_2}(\theta,\varphi)=\sqrt{\frac{2l_2+1}{4\pi}}P_{l_2}(\cos(\theta))\).


Note that \(P_{l_2}(\cos\theta)\) is just a shorthand expression for a radial function because \(\cos\theta = \frac{r_1^2+r_2^2-r_{12}^2} {2r_1r_2}\).


The original integral is thus

\(\begin{eqnarray} I(l_1,m_1,l_2,m_2;R)=2\pi\delta_{l_1l_2}\delta_{m_1m_2}\nonumber\\ \times\int^\infty_0r_1dr_1\int^\infty_0r_2dr_2\int^{r_1+r_2}_{|r_1-r_2|} r_{12}dr_{12}R(r_1,r_2,r_{12})P_{l_2}(\cos\theta)\nonumber \end{eqnarray}\)


where \(\cos\theta = (r_1^2 +r^2_2-r_{12}^2)/(2r_1r_2)\) is a purely radial function.
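As an illustration of this reduced radial form (a hypothetical check; the wavefunction is chosen by hand, not taken from the notes): for the normalized 1s\(^2\) product \(\psi = (Z^3/\pi)e^{-Z(r_1+r_2)}\) and the operator \(1/r_{12}\) (an S-state case, so \(P_0 = 1\)), the three-dimensional radial integral reproduces the textbook value \(\langle 1/r_{12}\rangle = 5Z/8\).

```python
import numpy as np
from scipy.integrate import tplquad

Z = 1.0

# <1/r12> for psi = (Z^3/pi) exp(-Z(r1+r2)); the 1/r12 cancels the r12 weight
I, err = tplquad(
    lambda r12, r2, r1: r1 * r2 * np.exp(-2.0 * Z * (r1 + r2)),
    0.0, 15.0,                                # r1 (cut off; integrand decays fast)
    lambda r1: 0.0, lambda r1: 15.0,          # r2
    lambda r1, r2: abs(r1 - r2),              # r12 lower limit
    lambda r1, r2: r1 + r2,                   # r12 upper limit
)

# Angular factor 8*pi^2 times normalization (Z^3/pi)^2 gives 8 Z^6 overall
expectation = 8.0 * Z**6 * I          # should be 5Z/8 = 0.625
```

Here the inner \(r_{12}\) integral gives \(2\min(r_1,r_2)\), which is what makes the result nontrivial compared with independent-particle integrals.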


The above would become quite complicated for large \(l_2\) because \(P_{l_2}(\cos\theta)\) contains terms up to \((\cos\theta)^{l_2}\). However, recursion relations exist which allow any integral containing \(P_l(\cos\theta)\) to be expressed in terms of those containing just \(P_0(\cos\theta)=1\) and \(P_1(\cos\theta)=\cos\theta\).
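The Legendre polynomials themselves satisfy Bonnet's recursion \((l+1)P_{l+1}(x) = (2l+1)xP_l(x) - lP_{l-1}(x)\), which is what lets any \(P_l(\cos\theta)\) be built up from \(P_0\) and \(P_1\). A quick numerical illustration (NumPy; the helper function is written for this example):

```python
import numpy as np

def legendre_from_recursion(l, x):
    """Build P_l(x) from P_0 = 1 and P_1 = x via Bonnet's recursion."""
    p_prev, p = np.ones_like(x), x
    if l == 0:
        return p_prev
    for n in range(1, l):
        # (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}
        p_prev, p = p, ((2 * n + 1) * x * p - n * p_prev) / (n + 1)
    return p

x = np.linspace(-1.0, 1.0, 101)
P5 = legendre_from_recursion(5, x)

# Compare against NumPy's reference Legendre implementation
P5_ref = np.polynomial.legendre.Legendre.basis(5)(x)
```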

Radial Integrals and Recursion Relations

The Radial Recursion Relation

The General Integral

Graphical Representation

[figure to be inserted]

Matrix Elements of H

Problem

General Hermitean Property

Optimization of Non-linear Parameters

  • Difficulties
  • Cure

The Screened Hydrogenic Term

Small Corrections

  • Mass Polarization