
CDS: Numerical Methods -- Final Assignment

  • See lecture notes and documentation on Brightspace for Python and Jupyter basics. If you are stuck, try googling or get in touch via Discord.

  • Solutions must be submitted **individually** via the Jupyter Hub by **Monday, April 4th, 23:59**.

  • Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE".

  • Remember to document your source code (docstrings, comments where necessary) and to write it as clearly as possible.

  • Do not forget to fully annotate all of your plots.

Submission

  1. Make sure everything runs as expected.
  2. Restart the kernel (in the menubar, select Kernel$\rightarrow$Restart).
  3. Run all cells (in the menubar, select Cell$\rightarrow$Run All).
  4. Check all outputs (Out[*]) for errors and resolve them if necessary.
  5. Submit your solutions in time (before the deadline).

Tight-Binding Propagation Method Module

Tight-Binding Theory

Solid state theory aims to describe crystalline structures defined by periodic arrangements of atomic positions $\vec{R}_i$ with $i= 1 \dots n$. To model the electronic properties of such a structure, we can use the so-called tight-binding method. Here one assumes that the problem for a single atom described by the Hamiltonian $H_{at}(\vec{r})$ has already been solved, so that the atomic wave functions $\phi_m(\vec{r})$ are known. The Hamiltonian of the crystalline structure is then constructed from these atomic Hamiltonians as follows

\begin{align*} H(\vec{r}) = \sum_{i} H_{at}(\vec{r} - \vec{R}_i) + \Delta V(\vec{r}), \end{align*}

where $\Delta V(\vec{r})$ describes the changes to the atomic potentials due to the periodic arrangement. Solutions $\psi_m(\vec{r})$ of the time-independent Schrödinger equation can then be approximated by linear combinations of the atomic orbitals, i.e.

\begin{align*} \psi_m(\vec{r}) = \sum_{i} \, c_{i,m} \, \phi_m(\vec{r}-\vec{R}_i). \end{align*}

Thus, our task is to find the coefficients $c_{i,m}$, which form the eigenvectors of the tight-binding Hamiltonian $H_{tb}$. In the basis of the atomic orbitals, $H_{tb}$ is an $n \times n$ matrix which describes the "hopping" of an electron from one atomic position to another. In this description the electrons are assumed to be tightly bound to the atomic positions, hence the name of the approach. In summary, we have reduced our original problem $H(\vec{r})$, described in continuous space $\vec{r}$, to a strongly discretized problem $H_{tb}$ in the space of lattice coordinates $\vec{R}_i$.

Propagation Method

While this reduction already helps a lot, full diagonalization of the tight-binding matrix is still not feasible if we need to describe realistic structures with thousands of atoms. For such cases we would like to have a method which allows us to study the electronic properties without fully diagonalizing the tight-binding matrix. The tight-binding propagation method allows for exactly this. By analyzing the propagation of an initial electronic state through the crystalline structure we also gain access to the full eigenspectrum of $H_{tb}$, without explicit diagonalization.

Your Goal

In the following you will set up the tight-binding Hamiltonian for a one-dimensional chain of atoms and numerically study its properties using exact diagonalization. Then you will compare it to the results obtained using the tight-binding propagation method. You will need some of the algorithms which you implemented in the earlier weekly assignments. Additionally, you will need to implement a few new algorithms, which we discussed in the last lecture. In principle there will be no need to use Numpy or Scipy (except for Numpy's array handling and a few other exceptions). However, if you encounter any problems with your own implementations of specific functionalities, you can use the Numpy and Scipy alternatives. You should therefore be able to perform all of the following tasks in any case.

Let us start by importing the necessary packages.

In [1]:
import numpy as np
from matplotlib import pyplot as plt

Step 1: Crystal Lattice

Task 1.1 [3 points]

In the following exercises the atomic positions of the 1D crystal lattice will be fixed to $\vec{R}_i = x_i = i a$, with $i = 0 \dots n-1$ and $a$ being the lattice constant.

Write a simple Python function that takes the chain length $n$ as an argument and returns the atomic positions $x_i$. Set $a = 1$ for all the following exercises.

In [2]:
def atomic_positions(n, a=1):
    """
    Creates an array of atomic positions in a 1D crystal lattice
    with lattice constant a (default a = 1).
    
    Args:
        n: number of atoms in the 1D lattice string
        a: numerical value for the lattice constant

    Returns:
        A 1D array of atomic positions.
    """
    
    return np.arange(n)*a

Step 2: Atomic Basis Functions

Our atomic basis functions will be Gaussians of the form $$ \large \phi(x, \mu, \sigma) = \frac{1}{\pi^{1/4} \sigma^{1/2}} e^{-\frac{1}{2} \left(\frac{x-\mu}{\sigma}\right)^2}, $$ where $\mu$ is the localization position and $\sigma$ the broadening. We also choose to have just one orbital per atom, so that we can drop the index $m$ from now on.

Task 2.1 [4 points]

Implement a Python function which calculates $\phi(x, \mu, \sigma)$ for a whole array of arbitrary $x$, centered at given $\mu$ with a given broadening $\sigma$.

Plot all the atomic basis functions for a chain with $n = 10$ atoms, using $\sigma = 0.25$. I.e. plot $\phi(x, x_i, \sigma)$ vs $x$, for all atomic positions $x_i$ in the chain.

In [3]:
def atomic_basis(x, mu, sigma):
    """
    Calculates the atomic basis functions for the 1D chain of atoms.
    
    Args:
        x:     array of positions to calculate the wavefunction at
        mu:    atomic position to center Gaussian wavefunction at
        sigma: broadening constant for Gaussian function

    Returns:
        A 1D array of values for the wavefunction over the positions
        as given by x.
    """
    
    return np.pi**(-1/4)*sigma**(-1/2)*np.exp(-1/2*((x - mu)/sigma)**2)

n = 10
sigma = .25
x = np.linspace(-2, 12, 1000)

plt.figure()
plt.xlabel("$x$")
plt.ylabel(r"$\phi$")

for mu in atomic_positions(n):
    plt.plot(x, atomic_basis(x, mu, sigma), label=r"$x_i = {}$".format(mu))

plt.legend()
plt.tight_layout()
plt.show()

Task 2.2 [6 points]

Implement a Python function to calculate numerical integrals (using for example the composite trapezoid or Simpson rule). This one should be general enough to calculate integrals $\int_a^b f(x) dx$ for arbitrary functions $f(x)$, as you will need it for other tasks as well.

Implement a simple unit test for your integration function.

In [4]:
def integrate(yk, x):
    """
    Numerically integrates yk over [x[0], x[-1]] using the composite
    Simpson's 3/8 rule on the grid provided by x.
    
    Args:
        yk: callable of one numerical argument, or an array of function
            values such that yk[i] corresponds to x[i]
        x:  array of grid points (ideally with a number of intervals
            divisible by 3)

    Returns:
        A numeric value for the quadrature of yk over x, with error of
        order O(h^4) on a uniform grid of spacing h.
    """
    
    # If yk is callable, we use it to determine the function values
    # over array x.
    if callable(yk):
        yk = yk(x)
    
    # The spacing h[i] = x[i + 1] - x[i] is not necessarily constant;
    # weighting each point by its local spacing keeps the rule usable
    # (approximately) on mildly non-uniform grids.
    h = x[1:] - x[:-1]
    
    # Composite 3/8 rule: the endpoints carry weight 3/8, points with
    # index 1 or 2 (mod 3) carry weight 9/8, and interior points with
    # index 0 (mod 3) carry weight 6/8.
    integral = 0
    integral += 3/8*(x[1] - x[0])*yk[0]
    integral += 9/8*h[1::3]@yk[1:-1:3]
    integral += 9/8*h[2::3]@yk[2:-1:3]
    integral += 6/8*h[3::3]@yk[3:-1:3]
    integral += 3/8*(x[-1] - x[-2])*yk[-1]
    return integral
In [5]:
def test_integrate():
    # Test integral 1 of f with F its primitive with integration constant 0
    f = lambda x: x**2
    F = lambda x: x**3/3
    x = np.logspace(0, 3, 1000000)
    assert np.isclose(integrate(f, x), F(x[-1]) - F(x[0]))
    
    # Test integral 2 of f with F its primitive with integration constant 0
    f = lambda x: np.sin(2*x)/(2 + np.cos(2*x))
    F = lambda x: -.5*np.log(np.cos(2*x) + 2)
    x = np.linspace(0, 10, 1000)
    assert np.isclose(integrate(f, x), F(x[-1]) - F(x[0]))
    
test_integrate()

Task 2.3 [2 points]

Use your Python integration function to check the orthogonality of the Gaussian basis functions by verifying the following condition $$\delta_{ij} = \int_{-\infty}^{+\infty} \phi(x, x_i, \sigma) \, \phi(x, x_j, \sigma) \, dx,$$ where $\delta_{ii} \approx 1$ and $\delta_{ij} \approx 0$ for $\sigma = 0.25$.

In [6]:
n = 10
sigma = .25

positions = atomic_positions(n)
infty = 10000
x = np.linspace(-infty, infty, 1000000)

def ijlabel(i, j):
    """
    Returns a string label describing the relation between two states in words,
    if they are close enough.
    """
    
    if i == j:
        return "   (self)"
    if abs(i - j) == 1:
        return "   (nearest neighbours)"
    # Default:
    return ""

for i in range(n):
    for j in range(n):
        integrand = lambda x: atomic_basis(x, positions[i], sigma)*atomic_basis(x, positions[j], sigma)
        print("delta_{}{} = {:.5f}{}".format(i, j, integrate(integrand, x), ijlabel(i, j)))

# Yann had output:
#delta_00 = 1.00000
#delta_01 = 0.01832
#delta_02 = 0.00000
#delta_34 = 0.01832
# Explanation: nearest neighbours might have some overlap. Further away, no overlap at all.
delta_00 = 1.00000   (self)
delta_01 = 0.01832   (nearest neighbours)
delta_02 = 0.00000
delta_03 = 0.00000
delta_04 = 0.00000
delta_05 = 0.00000
delta_06 = 0.00000
delta_07 = 0.00000
delta_08 = 0.00000
delta_09 = 0.00000
delta_10 = 0.01832   (nearest neighbours)
delta_11 = 1.00000   (self)
delta_12 = 0.01832   (nearest neighbours)
delta_13 = 0.00000
delta_14 = 0.00000
delta_15 = 0.00000
delta_16 = 0.00000
delta_17 = 0.00000
delta_18 = 0.00000
delta_19 = 0.00000
delta_20 = 0.00000
delta_21 = 0.01832   (nearest neighbours)
delta_22 = 1.00000   (self)
delta_23 = 0.01832   (nearest neighbours)
delta_24 = 0.00000
delta_25 = 0.00000
delta_26 = 0.00000
delta_27 = 0.00000
delta_28 = 0.00000
delta_29 = 0.00000
delta_30 = 0.00000
delta_31 = 0.00000
delta_32 = 0.01832   (nearest neighbours)
delta_33 = 1.00000   (self)
delta_34 = 0.01832   (nearest neighbours)
delta_35 = 0.00000
delta_36 = 0.00000
delta_37 = 0.00000
delta_38 = 0.00000
delta_39 = 0.00000
delta_40 = 0.00000
delta_41 = 0.00000
delta_42 = 0.00000
delta_43 = 0.01832   (nearest neighbours)
delta_44 = 1.00000   (self)
delta_45 = 0.01832   (nearest neighbours)
delta_46 = 0.00000
delta_47 = 0.00000
delta_48 = 0.00000
delta_49 = 0.00000
delta_50 = 0.00000
delta_51 = 0.00000
delta_52 = 0.00000
delta_53 = 0.00000
delta_54 = 0.01832   (nearest neighbours)
delta_55 = 1.00000   (self)
delta_56 = 0.01832   (nearest neighbours)
delta_57 = 0.00000
delta_58 = 0.00000
delta_59 = 0.00000
delta_60 = 0.00000
delta_61 = 0.00000
delta_62 = 0.00000
delta_63 = 0.00000
delta_64 = 0.00000
delta_65 = 0.01832   (nearest neighbours)
delta_66 = 1.00000   (self)
delta_67 = 0.01832   (nearest neighbours)
delta_68 = 0.00000
delta_69 = 0.00000
delta_70 = 0.00000
delta_71 = 0.00000
delta_72 = 0.00000
delta_73 = 0.00000
delta_74 = 0.00000
delta_75 = 0.00000
delta_76 = 0.01832   (nearest neighbours)
delta_77 = 1.00000   (self)
delta_78 = 0.01832   (nearest neighbours)
delta_79 = 0.00000
delta_80 = 0.00000
delta_81 = 0.00000
delta_82 = 0.00000
delta_83 = 0.00000
delta_84 = 0.00000
delta_85 = 0.00000
delta_86 = 0.00000
delta_87 = 0.01832   (nearest neighbours)
delta_88 = 1.00000   (self)
delta_89 = 0.01832   (nearest neighbours)
delta_90 = 0.00000
delta_91 = 0.00000
delta_92 = 0.00000
delta_93 = 0.00000
delta_94 = 0.00000
delta_95 = 0.00000
delta_96 = 0.00000
delta_97 = 0.00000
delta_98 = 0.01832   (nearest neighbours)
delta_99 = 1.00000   (self)

Step 3: Tight-Binding Hamiltonian

The tight-binding Hamiltonian for our 1D chain should describe the hopping of an electron from all atomic positions to their nearest left and right neighbours (i.e. no long-range hopping). The resulting matrix representation in the basis of the discrete $x_i$ positions is therefore given as a tri-diagonal $n \times n$ matrix of the form

\begin{align} \mathbf{H}_{tb} = \left( \begin{array}{cccc} 0 & t & & 0\\ t & \ddots & \ddots & \\ & \ddots & \ddots & t \\ 0 & & t & 0 \end{array} \right), \end{align}

where $t = t_{i,i\pm1}$ is the nearest-neighbour hopping matrix element. A hopping matrix element $t_{i,j}$ is a measure of the probability that an electron hops from site $i$ to site $j$. It is defined as

\begin{align} t_{i,j} = \int_{-\infty}^{+\infty} \phi(x, x_i, \sigma) \, \Delta V(x) \, \phi(x, x_j, \sigma) \, dx, \end{align}

with the potential fixed to

\begin{align} \Delta V(x) = \sum_i \frac{-1}{|x - x_i| + 0.001}. \end{align}

Task 3.1 [3 points]

Write a Python function to calculate $t_{i,j}$, using $\sigma = 0.25$. The function should have as input the indices $i$ and $j$, and the chain length $n$. Verify that the long-range hoppings $t_{i,i\pm2}$ and $t_{i,i\pm3}$ are negligible compared to $t_{i,i\pm1}$.

Hint: use your integration function from task 2.2

In [7]:
def hopping(i, j, n, sigma=.25):
    """
    Calculates the hopping matrix element t_ij (default sigma = 0.25) in a
    1D chain of n atoms at distance a = 1 from each other.
    
    Args:
        i:     origin site index
        j:     destination site index
        n:     number of atoms in the chain
        sigma: standard deviation to the Gaussian wave functions

    Returns:
        Hopping parameter t_ij.
    """
    
    positions = atomic_positions(n)
    
    # A finite integration window is sufficient, as the Gaussians decay
    # quickly away from the atomic positions, which we already saw in the
    # overlap above. In fact, 99.7% of the probability mass lies within a
    # radius of 3*sigma of each center x_i, so extending the grid 10*sigma
    # beyond both ends of the chain is more than enough.
    h = 1e-5
    x = np.arange(positions[0] - 10*sigma, positions[-1] + 10*sigma, h)
    
    def V(x):
         ret = np.zeros(x.shape)
         for x_i in positions:
             ret += -1./(np.abs(x - x_i) + 0.001)
         return ret
    # Instead of using a loop, one could vectorize further by calculating all
    # sum terms as elements of a len(x) by len(positions) matrix and then
    # summing along the rows. In testing this was slower than the loop,
    # presumably due to the O(len(x)*len(positions)) memory overhead and the
    # fact that each of the len(positions) iterations already performs
    # vectorized operations on len(x) >> len(positions) numbers, so the
    # theoretical speed gain would only appear for much larger len(positions).
    #V = lambda x: np.sum( -1/( np.abs(np.subtract.outer(x, positions)) + 0.001 ), axis=1 )
    
    integrand = lambda x: atomic_basis(x, positions[i], sigma)*V(x)*atomic_basis(x, positions[j], sigma)
    return integrate(integrand, x)
In [8]:
n = 10

for i in range(n):
    print("For i =", i, "...")
    for r in range(1, 4):
        if i - r >= 0:
            print("\tt_{{i,i-{}}} = {}".format(r, hopping(i, i - r, n)))
        if i + r < n:
            print("\tt_{{i,i+{}}} = {}".format(r, hopping(i, i + r, n)))
    print()
For i = 0 ...
	t_{i,i+1} = -0.13849173441658025
	t_{i,i+2} = -3.088088057066831e-06
	t_{i,i+3} = -1.8833562200578063e-15

For i = 1 ...
	t_{i,i-1} = -0.13849173441658025
	t_{i,i+1} = -0.14871538221422848
	t_{i,i+2} = -3.1306987950404085e-06
	t_{i,i+3} = -1.945630457066332e-15

For i = 2 ...
	t_{i,i-1} = -0.14871538221422848
	t_{i,i+1} = -0.15363274031153992
	t_{i,i-2} = -3.088088057066831e-06
	t_{i,i+2} = -3.152251440766849e-06
	t_{i,i+3} = -1.9763481552880358e-15

[Output for i = 3 ... 9 elided: in every case the long-range hoppings
t_{i,i±2} and t_{i,i±3} are several orders of magnitude smaller than the
nearest-neighbour hoppings t_{i,i±1}.]

Task 3.2 [3 points]

Implement a diagonalization routine for tri-diagonal matrices which returns all eigenvalues, for example using the $QR$ decomposition (it is fine to use Numpy's $\text{qr()}$).

Hint: For tri-diagonal matrices with vanishing diagonal elements, the $QR$-decomposition-based diagonalization algorithm gets trapped. To get around this you could, for example, add a diagonal $1$ to your matrix, and later subtract $1$ from each eigenvalue.

In [9]:
def QREig(T, eps=1e-6, k_max=10000):
    """
    Diagonalizes a tridiagonal matrix T using the QR-decomposition-based
    iteration: T is repeatedly replaced by R@Q until the off-diagonal
    elements (approximately) vanish, so that the diagonal elements are
    the eigenvalues.
    
    Args:
        T:     a tridiagonal matrix.
        eps:   the desired accuracy.
        k_max: maximum number of iterations after which to cut off
    
    Returns:
        A one dimensional array with the eigenvalues of the matrix T.
    """
    
    e = eps + 1
    k = 0
    while e > eps and k < k_max:
        k += 1
        Q, R = np.linalg.qr(T)
        T = np.matmul(R,Q)
        e = np.sum(np.abs(np.diag(T, k=1)))
    return np.diag(T)

Task 3.3 [3 points]

Implement a unit test for your diagonalization routine.

In [10]:
def test_QREig():
    # Test case one
    T = np.array([
        [1,4,0,0],
        [3,4,1,0],
        [0,2,3,4],
        [0,0,1,3]
    ])
    # Eigenvalues are roots of λ^4 - 11*λ^3 + 25*λ^2 + 31*λ - 46.
    eigenvalues_of_T = np.array([-1.45350244, 1., 4.65531023, 6.79819221])
    assert np.allclose(np.sort(QREig(T)), eigenvalues_of_T)
    
    # Test case two
    T = np.array([
        [1,4,0,0],
        [3,0,1,0],
        [0,2,0,4],
        [0,0,0,3]
    ])
    eigenvalues_of_T = np.sort(np.linalg.eig(T)[0])
    assert np.allclose(np.sort(QREig(T)), eigenvalues_of_T)

test_QREig()

Task 3.4 [4 points]

First, write a function that generates your tight-binding Hamiltonian $\mathbf{H}_{tb}$, for a given chain length $n$. Use $t = t_{i,i\pm1}$, as calculated in task 3.1. You can choose any $i$ near the center of the chain for the calculation of $t$, as the chain is (approximately) periodic.

Second, use your diagonalization routine to calculate all the eigenvalues $E_m$, for a variety of $n=10,20,40,80,100$. Sort the resulting $E_m$ and plot them vs. $m$.

In [11]:
def TBHamiltonian(n, sigma=.25):
    """
    Generates the tight-binding hamiltonian H_tb for given chain length n,
    using the approximation of constant hopping parameter in a periodic
    chain of atoms.
    
    Args:
        n:     number of atoms in the chain
        sigma: standard deviation to the Gaussian wave functions

    Returns:
        Tight-binding hamiltonian H_tb.
    """
    
    # The hopping parameter varies along the chain because Delta V is not
    # translationally invariant near the open ends, which makes the edge
    # bonds noticeably weaker. Near the center the chain is approximately
    # periodic, so we evaluate t there and use it for all bonds.
    
    i = n//2
    t = hopping(i, i + 1, n, sigma)
    H_tb = (np.eye(n, n, -1) + np.eye(n, n, 1))*t
    
    return H_tb
In [12]:
plt.figure()

for n in [10, 20, 40, 80, 100]:
    H_tb = TBHamiltonian(n)
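    # Shift the spectrum by +1 so the QR iteration does not get trapped on
    # the vanishing diagonal (see the hint in task 3.2), then shift back.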
    E_m  = QREig(H_tb + np.eye(n)) - 1
    plt.plot(np.arange(len(E_m)) + 1, np.sort(E_m), label="n = {}".format(n))

plt.legend()
plt.title("Energy eigenvalues of $H_{{tb}}$ for different chain lengths $n$")
plt.xlabel("$m$")
plt.ylabel("$E_m$")
plt.show()

Task 3.5 [3 points]

Implement a function to calculate the so-called density-of-states

\begin{align*} \rho(\omega) = \frac{1}{N} \sum_i \delta(\omega - E_i), \end{align*}

for a variable energy grid $\omega$. Do this by approximating the $\delta$-distribution with a Gaussian. In detail, you can use your atomic orbital function $\delta(\omega - E_i) \approx \phi(\omega, E_i, \sigma_\rho)$. Calculate the normalization factor $N$ such that $\int \rho(\omega) d\omega = 1$ is fulfilled.

Your function should take as input the energy grid $\omega$, the eigenenergies $E_i$ and the broadening $\sigma_\rho$.

In [ ]:
def getDOS_ED(w, Ei, sigma):
    # YOUR CODE HERE
    raise NotImplementedError()
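
A possible implementation sketch (one way to do it, not necessarily the intended solution): reuse atomic_basis as the Gaussian approximation to $\delta(\omega - E_i)$ and fix the normalization numerically with the integration routine from task 2.2.

def getDOS_ED(w, Ei, sigma):
    """
    Approximates the density-of-states on the energy grid w by broadening
    every eigenenergy E_i with a Gaussian of width sigma.
    
    Args:
        w:     array of energies (the omega grid)
        Ei:    array of eigenenergies
        sigma: broadening of the Gaussians
    
    Returns:
        A 1D array with rho(w), normalized such that its integral over w is 1.
    """
    
    rho = np.zeros_like(w)
    for E in Ei:
        # Reuse the atomic orbital as Gaussian approximation to delta(w - E).
        rho += atomic_basis(w, E, sigma)
    # Normalization factor N such that the integral of rho over w equals 1.
    N = integrate(rho, w)
    return rho/N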

Task 3.6 [3 points]

Use your density-of-states routine to calculate $\rho(\omega)$ for $n=10,20,40,80,100$ for $\sigma_\rho \approx 0.005$. See below for two examples with $t \approx -0.195$ and $n=10$ and $n=100$.

Hint: if your plots look like they are smoothed out, try decreasing $\sigma_\rho$. If they look like there is a lot of noise, try increasing $\sigma_\rho$.

(Example plots: dosN010.png for $n = 10$, dosN100.png for $n = 100$.)
In [ ]:
# YOUR CODE HERE
raise NotImplementedError()
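
A sketch of how the plots could be produced, assuming the getDOS_ED sketch above; the $\omega$ grid and its range are my own choices, wide enough to cover the band $[-2|t|, 2|t|]$:

sigma_rho = 0.005
w = np.linspace(-0.5, 0.5, 1000)

plt.figure()
for n in [10, 20, 40, 80, 100]:
    # Eigenvalues via the shifted QR iteration, as in task 3.4.
    E_m = QREig(TBHamiltonian(n) + np.eye(n)) - 1
    plt.plot(w, getDOS_ED(w, E_m, sigma_rho), label="n = {}".format(n))
plt.xlabel(r"$\omega$")
plt.ylabel(r"$\rho(\omega)$")
plt.title("Density-of-states from exact diagonalization")
plt.legend()
plt.show()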

Step 4: Tight-Binding Propagation Method

Now we turn to the time-dependent Schrödinger equation

\begin{align} i\hbar\frac{\partial}{\partial t} \psi(x,t) = H \psi(x,t), \end{align}

which has the formal solution

\begin{align} \psi(x,t) = U(t) \psi(x,t=0), \end{align}

with

\begin{align} U(t) = e^{-\frac{i}{\hbar} H t} \end{align}

being the time-propagation operator. Within the propagation method we can calculate the so-called local density-of-states

\begin{align} \rho_{loc}(\omega) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \, e^{i\omega t} \, f(t) \ dt, \end{align}

with respect to an (arbitrary) initial state $\psi(x,t=0)$, where

\begin{align} f(t) &= \int_{-\infty}^{+\infty} \, \psi^*(x,t) \, \psi(x,t=0) \, dx \\ &\approx \int_{-\infty}^{+\infty} \sum_i c_i^*(t) \phi(x,x_i,\sigma) \, \sum_j c_j(0) \phi(x,x_j,\sigma) \, dx \notag \\ &\approx \sum_i c_i^*(t) c_i(0). \notag \end{align}

Thus, the time propagation of an initial state towards positive and negative times followed by a Fourier transform of $f(t)$ yields the local density-of-states. To obtain the full density-of-states we need to average $\rho_{loc}(\omega)$ as follows

\begin{align} \rho(\omega) = \lim_{S \to \infty} \frac{1}{S} \sum_p^S \rho^{(p)}_{loc}(\omega) \end{align}

over a variety of random initial states $p$.

Task 4.1 [3 points]

Implement a function which calculates the exact time-propagation matrix $U(\tau)$ for a small time-step $\tau$ given the Hamiltonian $H$. For simplicity, set $\hbar = 1$ in the following.

Hint: Use Scipy's $\text{expm()}$ function.

In [ ]:
def getU_exact(tau, H):
    # YOUR CODE HERE
    raise NotImplementedError()
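
A minimal sketch following the hint, with $\hbar = 1$:

from scipy.linalg import expm

def getU_exact(tau, H):
    """
    Exact time-propagation matrix U(tau) = exp(-i*tau*H), for hbar = 1.
    """
    return expm(-1j*tau*H)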

Task 4.2 [3 points]

Implement a function which performs the step-by-step time propagation given an initial state $\vec{c}(0)$, the matrix $U(\tau)$ and the discretized time grid $t_j$. In other words, your function should calculate

$$\vec{c}(j+1) = U(\tau) \cdot \vec{c}(j)$$

for all $j$ of a given discretized time grid $t_j = j \tau$.

In [ ]:
def timePropagate(U, c0, t):
    # YOUR CODE HERE
    raise NotImplementedError()
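
A sketch that stores the state at every step; note that only the length of the time grid is actually used, and t[0] is taken to correspond to the initial state:

def timePropagate(U, c0, t):
    """
    Step-by-step time propagation c(j+1) = U @ c(j) over the time grid t.
    
    Returns:
        An array c of shape (len(t), len(c0)), where c[j] is the state
        at time t[j] and c[0] is the initial state c0.
    """
    c = np.zeros((len(t), len(c0)), dtype=complex)
    c[0] = c0
    for j in range(len(t) - 1):
        c[j + 1] = U @ c[j]
    return c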

Task 4.3 [4 points]

Use both of the above functions to calculate and animate the time propagation of an initial state

$$\psi(x,t=0) = \phi(x, x_{i=n/2}, \sigma) \leftrightarrow \vec{c}(0) = [c_{i=n/2}(0) = 1, c_{i\neq n/2}(0) = 0]$$

for a $n=100$ chain. Discretize your time grid as $t_j=j\tau$ with $j=0 \dots 200$, and $\tau=1.5$. Use again $a = 1$ and $\sigma=0.25$.

To plot / animate the time propagation you should plot the real-space wave function $\psi(x,t) \approx \sum_i c_i(t) \phi(x, x_i, \sigma)$.

Hint: use your function from task 3.4 to get the Hamiltonian $H$.

For the animation you can use the following draft:

# use matplotlib's animation package
import matplotlib.pyplot as plt
import matplotlib
import matplotlib.animation as animation
# set the animation style to "jshtml" (for the use in Jupyter)
matplotlib.rcParams['animation.html'] = 'jshtml'

# create a figure for the animation
fig = plt.figure()
plt.grid(True)
plt.xlim( ... )     # fix x limits
plt.ylim( ... )     # fix y limits

# Create an empty plot object and prevent its showing (we will fill it each frame)
myPlot, = plt.plot([0], [0])
plt.close()

# This function is called each frame to generate the animation (f is the frame number)
def animate(f):                   
    myPlot.set_data( ... )  # update plot

# Show the animation
frames = np.arange(1, np.size(t))  # t is the time grid here
myAnimation = animation.FuncAnimation(fig, animate, frames, interval = 20)
myAnimation
In [ ]:
# YOUR CODE HERE
raise NotImplementedError()
In [ ]:
# Animate here ...

# YOUR CODE HERE
raise NotImplementedError()

# Yann has an animation of an atomic orbital that starts
# moving left and right and then bounces back.
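
A possible sketch combining the helpers above with the animation draft; the real-space x grid, the y limits, and plotting $|\psi|$ (rather than e.g. its real part) are my own choices:

import matplotlib
import matplotlib.animation as animation
matplotlib.rcParams['animation.html'] = 'jshtml'

# Propagate the initial state c_i(0) = delta_{i,n/2} over the time grid.
n, sigma, tau = 100, 0.25, 1.5
t = np.arange(201)*tau
H = TBHamiltonian(n, sigma)
c0 = np.zeros(n)
c0[n//2] = 1
c = timePropagate(getU_exact(tau, H), c0, t)

# Real-space wave function psi(x, t) = sum_i c_i(t) phi(x, x_i, sigma).
xs = np.linspace(-2, n + 1, 2000)
phis = np.array([atomic_basis(xs, mu, sigma) for mu in atomic_positions(n)])
psi = c @ phis                      # shape (len(t), len(xs))

fig = plt.figure()
plt.grid(True)
plt.xlim(xs[0], xs[-1])
plt.ylim(0, 1.1*np.abs(psi).max())
plt.xlabel("$x$")
plt.ylabel(r"$|\psi(x,t)|$")
myPlot, = plt.plot([0], [0])
plt.close()

def animate(f):
    # Update the plot with the modulus of the wave function at frame f.
    myPlot.set_data(xs, np.abs(psi[f]))

frames = np.arange(1, np.size(t))
myAnimation = animation.FuncAnimation(fig, animate, frames, interval=20)
myAnimation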

Task 4.4 [3 points]

Implement a function which calculates the Crank-Nicolson time-propagation matrix

\begin{align*} U_{CN}(\tau) = (I - i \tau H / 2)\cdot(I + i \tau H / 2)^{-1}. \end{align*}

Here, $I$ is the identity matrix. Use Numpy's $\text{inv()}$ function (np.linalg.inv) to invert the needed expression.

In [ ]:
def getU_CN(tau, H):
    # YOUR CODE HERE
    raise NotImplementedError()

# Yann notes that the definition of $U_{CN}(\tau)$ here is a little
# different from what Malte used on the slides. He recommends using
# what is stated here.
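
A direct sketch of the formula above:

def getU_CN(tau, H):
    """
    Crank-Nicolson time-propagation matrix
    U_CN(tau) = (I - i*tau*H/2) @ inv(I + i*tau*H/2).
    """
    I = np.eye(len(H))
    return (I - 1j*tau*H/2) @ np.linalg.inv(I + 1j*tau*H/2)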

Task 4.5 [5 points]

Implement a function which calculates the time-propagation matrix using the Trotter-Suzuki decomposition

\begin{align*} U_{TZ}(\tau) = e^{-i\tau H_1} \cdot e^{-i \tau H_2}. \end{align*}

In this approach you choose a decomposition of the tight-binding Hamiltonian $H = H_1 + H_2$, which allows you to analytically diagonalize $H_1$ and $H_2$ (see last lecture). From this analytic diagonalization you will be able to calculate the matrix exponentials $e^{-i\tau H_1}$ and $e^{-i \tau H_2}$.

Write your definition of the 2x2 blocks in $e^{-i\tau H_1}$ and $e^{-i \tau H_2}$ in the Markdown cell below. (Double click on "YOUR ANSWER HERE" to open the cell, and ctrl+enter to compile.)

YOUR ANSWER HERE

In [ ]:
def getU_TZ(tau, H):
    # YOUR CODE HERE
    raise NotImplementedError()

# Yann mentions again that this is slightly different from what
# is in the slides/lecture.
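
One common choice, sketched here for the uniform nearest-neighbour $H_{tb}$ from task 3.4: put the hoppings on the even bonds $(0,1), (2,3), \dots$ into $H_1$ and those on the odd bonds $(1,2), (3,4), \dots$ into $H_2$. Both parts are then block-diagonal with non-overlapping $2\times2$ blocks $t\,\sigma_x$, which exponentiate analytically to $e^{-i\tau t \sigma_x} = \cos(\tau t)\,I - i \sin(\tau t)\,\sigma_x$.

def getU_TZ(tau, H):
    """
    Trotter-Suzuki time-propagation matrix
    U_TZ(tau) = exp(-i*tau*H1) @ exp(-i*tau*H2),
    where H1 (H2) holds the hoppings on even (odd) bonds. Both are
    block-diagonal in 2x2 blocks [[0, t], [t, 0]], which exponentiate
    analytically to [[cos(tau*t), -i*sin(tau*t)], [-i*sin(tau*t), cos(tau*t)]].
    """
    n = len(H)
    U1 = np.eye(n, dtype=complex)
    U2 = np.eye(n, dtype=complex)
    for U, start in [(U1, 0), (U2, 1)]:
        for i in range(start, n - 1, 2):
            t = H[i, i + 1]
            U[i:i+2, i:i+2] = np.array([[np.cos(tau*t), -1j*np.sin(tau*t)],
                                        [-1j*np.sin(tau*t), np.cos(tau*t)]])
    return U1 @ U2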

Task 4.6 [3 points]

In your implementation of $U_{TZ}(\tau)$ you analytically evaluate the matrix exponentials $e^{-i\tau H_1}$ and $e^{-i \tau H_2}$. Test your implementation by comparing your results for these matrix exponentials to those obtained using Scipy's $\text{expm()}$ function.

In [ ]:
# YOUR CODE HERE
raise NotImplementedError()

# Yann printed
#Biggest differences of U1 with Scipy:
#Real: 1e-16 
#Imag: 2.77e-17
#    
# and differences with U_exact on the order of 1e-1 or 1e-2.
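
A sketch of such a check: build $H_1$ and $H_2$ explicitly, exponentiate them with Scipy, and compare the product with the analytic result (helper names as sketched above):

from scipy.linalg import expm

n, tau = 100, 1.5
H = TBHamiltonian(n)

# Split H into even-bond (H1) and odd-bond (H2) parts, as in getU_TZ.
H1 = np.zeros_like(H)
H2 = np.zeros_like(H)
for i in range(n - 1):
    target = H1 if i % 2 == 0 else H2
    target[i, i + 1] = target[i + 1, i] = H[i, i + 1]

U_TZ  = getU_TZ(tau, H)
U_ref = expm(-1j*tau*H1) @ expm(-1j*tau*H2)

print("Biggest differences with Scipy:")
print("Real:", np.abs(U_TZ.real - U_ref.real).max())
print("Imag:", np.abs(U_TZ.imag - U_ref.imag).max())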

Task 4.7 [6 points]

In the next task you will need a Fourier transform to calculate the local density-of-states. Therefore, implement a function that returns the Fourier transform $f(\omega)$ of a given function $f(t)$ defined on a time grid $t$, for a given energy grid $\omega$, i.e. it should calculate:

\begin{align} f(\omega) = \frac{1}{2\pi} \int_{-\infty}^{+\infty} \, e^{i\omega t} \, f(t) \ dt. \end{align}

Hint: use your integration function from task 2.2.

Then implement a unit test for your function.

In [ ]:
# YOUR CODE HERE
raise NotImplementedError()
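
A sketch built on the quadrature from task 2.2; the transform is evaluated once per point of the energy grid (the name fourierTransform is my own):

def fourierTransform(w, f, t):
    """
    Calculates f(omega) = 1/(2*pi) * int exp(i*omega*t) f(t) dt for every
    omega on the grid w, using the integration routine from task 2.2.
    
    Args:
        w: array of energies omega
        f: array of (possibly complex) function values f(t[j])
        t: time grid
    
    Returns:
        A complex 1D array of f(omega) values, one per entry of w.
    """
    return np.array([integrate(np.exp(1j*wi*t)*f, t) for wi in w])/(2*np.pi)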
In [ ]:
# Implement your unit test here ...

# YOUR CODE HERE
raise NotImplementedError()
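
A possible unit test, using the fact that the transform of a Gaussian is again a Gaussian: $\frac{1}{2\pi}\int e^{i\omega t} e^{-t^2/2} dt = e^{-\omega^2/2}/\sqrt{2\pi}$.

def test_fourierTransform():
    t = np.linspace(-30, 30, 3001)   # wide enough that the Gaussian vanishes
    f = np.exp(-t**2/2)
    w = np.linspace(-3, 3, 7)
    expected = np.exp(-w**2/2)/np.sqrt(2*np.pi)
    assert np.allclose(fourierTransform(w, f, t), expected)

test_fourierTransform()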

Task 4.8 [3 points]

Calculate the local density-of-states $\rho_{loc}(\omega)$ from the Fourier transform of $f(t)$ using all three time propagation methods: $U(\tau)$, $U_{CN}(\tau)$ and $U_{TZ}(\tau)$.

Start from $\psi(x,t=0) = \phi(x, x_{i=0}, \sigma)$ and $\psi(x,t=0) = \phi(x, x_{i=n/2}, \sigma)$, using a $n=100$ chain. Discretize your integration time grid as $t_j=j\tau$, with $j=-150 \dots 150$ and $\tau=1.5$. Use again $a = 1$ and $\sigma=0.25$.

Be careful: for the Fourier transform you will need positive and negative time steps! Thus you will need to do two time propagations: one using $U(\tau)$ towards positive times and one using $U(-\tau)$ towards negative times, both starting from $\psi(x,t=0)$.

In [ ]:
# YOUR CODE HERE
raise NotImplementedError()
In [ ]:
# Do your own testing here ...

# YOUR CODE HERE
raise NotImplementedError()

# Yann had a plot for tau = 1.5:
# the DOS looking like a hill ("like a dome with a peak around energy zero")
# for CN, TS and the exact propagator,
# a plot of f(t),
# and a plot of the local DOS;
# in the title he mentions the initial values.
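
A sketch for the initial state at the chain center, looping over the three propagators (the $i = 0$ case is analogous; helper names as sketched above):

n, sigma, tau = 100, 0.25, 1.5
H = TBHamiltonian(n, sigma)
t = np.arange(-150, 151)*tau        # t[150] = 0
w = np.linspace(-0.5, 0.5, 1000)

c0 = np.zeros(n)
c0[n//2] = 1                        # psi(x, 0) = phi(x, x_{n/2}, sigma)

plt.figure()
for name, getU in [("exact", getU_exact), ("CN", getU_CN), ("TZ", getU_TZ)]:
    # Propagate towards positive times with U(tau) and towards negative
    # times with U(-tau), both starting from c(0). The grid argument of
    # timePropagate is only used for its length here.
    c_pos = timePropagate(getU(tau, H), c0, t[150:])
    c_neg = timePropagate(getU(-tau, H), c0, t[150:])
    c = np.concatenate([c_neg[:0:-1], c_pos])   # states for t_j = -150*tau ... +150*tau

    f = c.conj() @ c0               # f(t_j) = sum_i c_i*(t_j) c_i(0)
    rho_loc = fourierTransform(w, f, t).real    # real up to numerical noise
    plt.plot(w, rho_loc, label=name)

plt.xlabel(r"$\omega$")
plt.ylabel(r"$\rho_{loc}(\omega)$")
plt.title(r"Local DOS, initial state at $i = n/2$, $\tau = 1.5$")
plt.legend()
plt.show()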

Task 4.9 [6 points]

Use the Trotter-Suzuki decomposition to calculate the full density-of-states by averaging over about $100$ local density-of-states you obtained from the time propagation of $100$ random initial states $\vec{c}(0)$. To this end, you will need to make sure that each $\vec{c}(0)$ is (a) normalized and (b) can have positive and negative elements.

Compare this approximation to the total density-of-states to the exact one from task 3.6, which you obtained directly from the eigenvalues.

Hint: don't expect the results to be exactly the same. Check the location of the peaks, and whether they have a similar order of magnitude.

Hint: if you did not get the Trotter-Suzuki decomposition to work, you can instead use the exact or the Crank-Nicolson time-propagation matrix.

In [ ]:
# YOUR CODE HERE
raise NotImplementedError()

# Yann says the initial states need to include negative elements, too.
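
A sketch of the averaging, assuming the helpers sketched above; the random states are drawn uniformly from $[-1, 1]$ (so they contain negative elements) and then normalized:

n, sigma, tau, S = 100, 0.25, 1.5, 100
H = TBHamiltonian(n, sigma)
t = np.arange(-150, 151)*tau
w = np.linspace(-0.5, 0.5, 1000)

U_pos = getU_TZ(tau, H)
U_neg = getU_TZ(-tau, H)

rho = np.zeros(len(w))
rng = np.random.default_rng(0)
for p in range(S):
    # Each initial state is (a) normalized and (b) has positive and
    # negative elements.
    c0 = rng.uniform(-1, 1, n)
    c0 /= np.linalg.norm(c0)

    c_pos = timePropagate(U_pos, c0, t[150:])
    c_neg = timePropagate(U_neg, c0, t[150:])
    c = np.concatenate([c_neg[:0:-1], c_pos])

    f = c.conj() @ c0
    rho += fourierTransform(w, f, t).real
rho /= S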
In [ ]:
# Do your plotting here ...

# YOUR CODE HERE
raise NotImplementedError()

# Yann plotted the exact-diagonalisation and the TS-propagation results;
# he had two plots, one peaky, one with peaks at the edges.
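
A sketch of the comparison plot, reusing rho, w, H and n from the cell above together with the exact density-of-states from task 3.5:

E_m = QREig(H + np.eye(n)) - 1

plt.figure()
plt.plot(w, rho, label="TS propagation (100 random states)")
plt.plot(w, getDOS_ED(w, E_m, 0.005), label="exact diagonalization")
plt.xlabel(r"$\omega$")
plt.ylabel(r"$\rho(\omega)$")
plt.title("Full density-of-states: propagation method vs. exact")
plt.legend()
plt.show()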