diff --git a/Week 2/feedback/5 Discrete and Fast Fourier Transforms.html b/Week 2/feedback/5 Discrete and Fast Fourier Transforms.html new file mode 100755 index 0000000..4f8caee --- /dev/null +++ b/Week 2/feedback/5 Discrete and Fast Fourier Transforms.html @@ -0,0 +1,13844 @@ + + + + + +5 Discrete and Fast Fourier Transforms + + + + + + + + + + + + + + + + + +
+
+
+

5 Discrete and Fast Fourier Transforms (Score: 11.0 / 13.0)

+
+
    + + + + + + + + + + + +
  1. Coding free-response (Score: 0.0 / 0.0)
  2. + + + + + + + +
  3. Coding free-response (Score: 3.0 / 3.0)
  4. + + +
  5. Comment
  6. + + + + + + +
  7. Coding free-response (Score: 0.5 / 1.0)
  8. + + +
  9. Comment
  10. + + + + + + +
  11. Coding free-response (Score: 3.0 / 4.0)
  12. + + +
  13. Comment
  14. + + + + + + +
  15. Coding free-response (Score: 3.0 / 3.0)
  16. + + +
  17. Comment
  18. + + + +
  19. Coding free-response (Score: 0.5 / 1.0)
  20. + + +
  21. Comment
  22. + + + + + + +
  23. Coding free-response (Score: 1.0 / 1.0)
  24. + + +
  25. Comment
  26. + + + + + + +
+
+
+
+
+
+ +
+
+
+
+

CDS: Numerical Methods Assignments

    +
  • See lecture notes and documentation on Brightspace for Python and Jupyter basics. If you are stuck, try to google or get in touch via Discord.

    +
  • +
  • Solutions must be submitted via the Jupyter Hub.

    +
  • +
  • Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE".

    +
  • +
+

Submission

    +
  1. Name all team members in the the cell below
  2. +
  3. make sure everything runs as expected
  4. +
  5. restart the kernel (in the menubar, select Kernel$\rightarrow$Restart)
  6. +
  7. run all cells (in the menubar, select Cell$\rightarrow$Run All)
  8. +
  9. Check all outputs (Out[*]) for errors and resolve them if necessary
  10. +
  11. submit your solutions in time (before the deadline)
  12. +
+ +
+
+team_members = "Koen Vendrig, Kees van Kempen" +
+
+
+
+

Discrete and Fast Fourier Transforms (DFT and FFT)

In the following we will implement a DFT algorithm and, based on that, a FFT algorithm. Our aim is to experience the drastic improvement of computational time in the FFT case.

+ +
+
+ +
+
+
In [1]:
+
Student's answer + Score: 0.0 / 0.0 (Top) +
+
+
+
import numpy as np
+from matplotlib import pyplot as plt
+import timeit
+
+ +
+
+ +
+
+ +
+
+
+
+
+

Task 1

Implement a Python function $\text{DFT(yk)}$ which returns the Fourier transform defined by

+\begin{equation} +\beta_j = \sum^{N-1}_{k=0} f(x_k) e^{-ij x_k} +\end{equation}

with $x_k = \frac{2\pi k}{N}$ and $j = 0, 1, ..., N-1$. The $\text{yk}$ should represent the array corresponding to $y_k = f(x_k)$. Please note that this definition is slightly different to the one we introduced in the lecture. Here we follow the notation of Numpy and Scipy.

+

Hint: try to write the sum as a matrix-vector product and use $\text{numpy.dot()}$ to evaluate it.

+ +
+
+ +
+
+
In [2]:
+
Student's answer + Score: 3.0 / 3.0 (Top) +
+
+
+
def DFT(yk):
+    """
+    Return discrete fourier transform (DFT) of yk for N discrete frequency intervals.
+    """
+    
+    N = len(yk)
+    xk = 2*np.pi*np.arange(N)/N
+    beta = np.dot(yk, np.exp(np.outer(-np.arange(N), xk*1j)))
+    return beta
+
+ +
+
+ +
+
+ +
+
+
+
+
+

Task 2

Make sure your function $\text{DFT(yk)}$ and Numpy’s FFT function $\text{numpy.fft.fft(yk)}$ return +the same data by plotting $|\beta_j|$ vs. $j$ for

+\begin{equation} + y_k = f(x_k) = e^{20i x_k} + e^{40 i x_k} +\end{equation}

and +\begin{equation} + y_k = f(x_k) = e^{i 5 x_k^2} +\end{equation}

+

using $N = 128$ for both routines.

+ +
+
+ +
+
+
In [3]:
+
Student's answer + Score: 0.5 / 1.0 (Top) +
+
+
+
N   = 128
+
+xk  = 2*np.pi*np.arange(N)/N
+yk0 = np.exp(20j*xk) + np.exp(40j*xk)
+yk1 = np.exp(5j*xk*2)
+
+fig, ax = plt.subplots(1, 2, sharex=True, sharey=True, figsize=(16,6))
+
+ax[0].set_xlabel("j")
+ax[1].set_xlabel("j")
+
+ax[0].set_title("$y_k = e^{20ix_k} + e^{40ix_k}$")
+ax[0].plot(np.abs(DFT(yk0)), label="DFT")
+ax[0].plot(np.abs(np.fft.fft(yk0)), label="numpy.fft.fft")
+ax[0].legend(loc="upper right")
+
+ax[1].set_title("$y_k = e^{i5x^2_k}$")
+ax[1].plot(np.abs(DFT(yk1)), label="DFT")
+ax[1].plot(np.abs(np.fft.fft(yk1)), label="numpy.fft.fft")
+ax[1].legend(loc="upper right")
+
+# TODO: So the graphs overlap completely. Is this good enough?
+#       To make it more clear, we could mirror one of the graphs (multiply by -1),
+#       like what is often done in spectroscopy, or we could add the difference.
+
+fig.show()
+
+ +
+
+ +
+
+ +
+
+ + +
+ +
+ + + + +
+ +
+ +
+ +
+
+ +
+
+
+
+
+

Task 3

Analyze the evaluation-time scaling of your $\text{DFT(yk)}$ function with the help of the timeit +module. Base your code on the following example:

+
import timeit
+
+tOut = timeit.repeat(stmt=lambda: DFT(yk), number=10, repeat=5)
+tMean = np.mean(tOut)
+
+

This example evaluates $\text{DFT(yk)}$ 5 × 10 times and stores the resulting 5 evaluation times in tOut. Afterwards we calculate the mean value of these 5 repetitions. +Use this example to calculate and plot the evaluation time of your $\text{DFT(yk)}$ function for $N = 2^2, 2^3, ..., 2^M$. Depending on your implementation you might be able to go up to $M = 10$. Be careful and increase M just step by step!

+ +
+
+ +
+
+
In [4]:
+
Student's answer + Score: 3.0 / 4.0 (Top) +
+
+
+
for M in range(2, 10+1):
+    N = 2**M
+    xk  = 2*np.pi*np.arange(N)/N
+    # Using the first equation for yk from the previous exercise.
+    yk = np.exp(20j*xk) + np.exp(40j*xk)
+    tOut = timeit.repeat(stmt=lambda: DFT(yk), number=10, repeat=5)
+    tMean = np.mean(tOut)
+    print("M =", M, "gives")
+    print("tOut =", tOut)
+    print()
+
+ +
+
+ +
+
+ +
+
+ + +
+ +
+ + +
+
M = 2 gives
+tOut = [0.0004651930066756904, 0.000577185011934489, 0.0006698909855913371, 0.00021993799600750208, 0.00021764400298707187]
+
+M = 3 gives
+tOut = [0.0002851419849321246, 0.00024239899357780814, 0.0002413280017208308, 0.00024259099154733121, 0.00024230999406427145]
+
+M = 4 gives
+tOut = [0.00046908899093978107, 0.0004833569983020425, 0.0003386119788046926, 0.0003360290138516575, 0.0003712739853654057]
+
+M = 5 gives
+tOut = [0.001004675985313952, 0.0005115100066177547, 0.0004938770143780857, 0.0004925539833493531, 0.0004936660116072744]
+
+M = 6 gives
+tOut = [0.0022528140107169747, 0.0018840350094251335, 0.0019002860062755644, 0.0018833030189853162, 0.0019113069865852594]
+
+M = 7 gives
+tOut = [0.006436622992623597, 0.006453314010286704, 0.006422006001230329, 0.006435290997615084, 0.006428507011150941]
+
+
+
+
+ +
+ +
+ + +
+
M = 8 gives
+tOut = [0.034213367995107546, 0.03541924600722268, 0.03501593999681063, 0.03518175397766754, 0.03495456400560215]
+
+
+
+
+ +
+ +
+ + +
+
M = 9 gives
+tOut = [0.10512581901275553, 0.10494623999693431, 0.10481545099173672, 0.10478638601489365, 0.10525128699373454]
+
+
+
+
+ +
+ +
+ + +
+
M = 10 gives
+tOut = [0.46051779499975964, 0.45902673600357957, 0.4516237679927144, 0.44649736100109294, 0.4479313330084551]
+
+
+
+
+ +
+
+ +
+
+
+
+
+

Task 4

A very simple FFT algorithm can be derived by the following separation of the sum from +above:

+\begin{align} + \beta_j = \sum^{N-1}_{k=0} f(x_k) e^{-ij \frac{2\pi k}{N}} + &= \sum^{N/2 - 1}_{k=0} f(x_{2k}) e^{-ij \frac{2\pi 2k}{N}} + + \sum^{N/2 - 1}_{k=0} f(x_{2k+1}) e^{-ij \frac{2\pi (2k+1)}{N}}\\ + &= \sum^{N/2 - 1}_{k=0} f(x_{2k}) e^{-ij \frac{2\pi k}{N/2}} + + \sum^{N/2 - 1}_{k=0} f(x_{2k+1}) e^{-ij \frac{2\pi k}{N/2}} e^{-ij \frac{2\pi}{N}}\\ + &= \beta^{\text{even}}_j + \beta^{\text{odd}}_j e^{-ij \frac{2\pi}{N}} +\end{align}

where $\beta^{\text{even}}_j$ is the Fourier transform based on only even $k$ (or $x_k$) and $\beta^{\text{odd}}_j$ the Fourier transform based on only odd $k$. In case $N = 2^M$ this even/odd separation can be done again and again in a recursive way.

+

Use the template below to implement a $\text{FFT(yk)}$ function, making use of your $\text{DFT(yk)}$ function from above. Make sure that you get the same results as before by comparing the results from $\text{DFT(yk)}$ +and $\text{FFT(yk)}$ for both functions defined in task 2.

+
def FFT(yk):
+    """Don't forget to write a docstring ...
+    """
+    N = # ... get the length of yk
+
+    assert # ... check if N is a power of 2. Hint: use the % (modulo) operator
+
+    if(N <= 2):
+        return # ... call DFT with all yk points
+
+    else:
+        betaEven = # ... call FFT but using just even yk points
+        betaOdd = # ... call FFT but using just odd yk points
+
+        expTerms = np.exp(-1j * 2.0 * np.pi * np.arange(N) / N)
+
+        # Remember : beta_j is periodic in j !
+        betaEvenFull = np.concatenate([betaEven, betaEven])
+        betaOddFull = np.concatenate([betaOdd, betaOdd])
+
+        return betaEvenFull + expTerms * betaOddFull
+
+ +
+
+ +
+
+
In [5]:
+
Student's answer + Score: 3.0 / 3.0 (Top) +
+
+
+
def FFT(yk):
+    """
+    Return the fast fourier transform (FFT) of array yk by considering odd
+    and even points and making use of discrete fourier transforms (DFTs).
+    """
+    
+    N = len(yk)
+
+    # N should be a power of two
+    assert np.log2(N).is_integer()
+
+    if(N <= 2):
+        return DFT(yk)
+
+    else:
+        betaEven = FFT(yk[::2])
+        betaOdd = FFT(yk[1::2])
+
+        expTerms = np.exp(-1j * 2.0 * np.pi * np.arange(N) / N)
+
+        # Remember : beta_j is periodic in j !
+        betaEvenFull = np.concatenate([betaEven, betaEven])
+        betaOddFull = np.concatenate([betaOdd, betaOdd])
+
+        return betaEvenFull + expTerms * betaOddFull
+
+ +
+
+ +
+
+ +
+
+
+
In [6]:
+
Student's answer + Score: 0.5 / 1.0 (Top) +
+
+
+
# N needs to be a power of two for FFT to work.
+N   = 2**7
+
+xk  = 2*np.pi*np.arange(N)/N
+yk = np.exp(20j*xk) + np.exp(40j*xk)
+
+fig, ax = plt.subplots()
+
+ax.set_xlabel("j")
+
+ax.set_title("$y_k = e^{20ix_k} + e^{40ix_k}$")
+ax.plot(np.abs(FFT(yk)), label="FFT")
+ax.plot(np.abs(DFT(yk)), label="DFT")
+ax.legend(loc="upper right")
+
+fig.show()
+
+ +
+
+ +
+
+ +
+
+ + +
+ +
+ + + + +
+ +
+ +
+ +
+
+ +
+
+
+
+
+

Task 5

Analyze the evaluation-time scaling of your $\text{FFT(yk)}$ function with the help of the timeit module and compare it to the scaling of the $\text{DFT(yk)}$ function.

+ +
+
+ +
+
+
In [7]:
+
Student's answer + Score: 1.0 / 1.0 (Top) +
+
+
+
for M in range(2, 10+1):
+    N = 2**M
+    xk  = 2*np.pi*np.arange(N)/N
+    # Using the first equation for yk from the second exercise.
+    yk = np.exp(20j*xk) + np.exp(40j*xk)
+    tOutDFT = timeit.repeat(stmt=lambda: DFT(yk), number=10, repeat=5)
+    tOutFFT = timeit.repeat(stmt=lambda: FFT(yk), number=10, repeat=5)
+    tMean = np.mean(tOut)
+    print("M =", M, "gives")
+    print("tOutDFT =", tOutDFT)
+    print("tOutFFT =", tOutFFT)
+    print()
+
+ +
+
+ +
+
+ +
+
+ + +
+ +
+ + +
+
M = 2 gives
+tOutDFT = [0.0002505759766791016, 0.00021253401064313948, 0.0004455750167835504, 0.00014452499453909695, 0.00014388200361281633]
+tOutFFT = [0.0004867039970122278, 0.0004463070072233677, 0.00045041501289233565, 0.0004449840052984655, 0.0004476289905142039]
+
+M = 3 gives
+tOutDFT = [0.0002475400106050074, 0.00022962599177844822, 0.00022635998902842402, 0.00022813299437984824, 0.0002445339923724532]
+tOutFFT = [0.0012703200045507401, 0.0010607420117594302, 0.0010336010018363595, 0.0010479280026629567, 0.0010572359897196293]
+
+M = 4 gives
+tOutDFT = [0.0003108689852524549, 0.0009174809965770692, 0.00022098800400272012, 0.0002215309941675514, 0.00022179100778885186]
+tOutFFT = [0.0022611299937125295, 0.0022732539800927043, 0.0022214960190467536, 0.002227875986136496, 0.002236133994301781]
+
+M = 5 gives
+tOutDFT = [0.001145333022577688, 0.0005005990096833557, 0.0004998589865863323, 0.000499947986099869, 0.0005132050137035549]
+tOutFFT = [0.004537781001999974, 0.004538812005193904, 0.004590921016642824, 0.004497474990785122, 0.00447452999651432]
+
+
+
+
+ +
+ +
+ + +
+
M = 6 gives
+tOutDFT = [0.002343315980397165, 0.0019200629903934896, 0.0019572740129660815, 0.0018803580023813993, 0.0018911089864559472]
+tOutFFT = [0.00966267500189133, 0.009090398001717404, 0.009144200012087822, 0.009273145988117903, 0.009112189989537]
+
+
+
+
+ +
+ +
+ + +
+
M = 7 gives
+tOutDFT = [0.008212182990973815, 0.008353309996891767, 0.008408645022427663, 0.008393687021452934, 0.00840135000180453]
+tOutFFT = [0.018318668007850647, 0.018413748999591917, 0.01933192100841552, 0.018324879987630993, 0.018218228011392057]
+
+
+
+
+ +
+ +
+ + +
+
M = 8 gives
+tOutDFT = [0.039639003021875396, 0.041080196999246255, 0.04059328298899345, 0.04081081598997116, 0.04085720400325954]
+tOutFFT = [0.03702234799857251, 0.03687017899937928, 0.03688905501621775, 0.037114762992132455, 0.037107348995050415]
+
+
+
+
+ +
+ +
+ + +
+
M = 9 gives
+tOutDFT = [0.14568762201815844, 0.1501707499846816, 0.150004163995618, 0.15014100397820584, 0.14948989800177515]
+tOutFFT = [0.07342959797824733, 0.0742503360088449, 0.07434793098946102, 0.0744445739837829, 0.07426519301952794]
+
+
+
+
+ +
+ +
+ + +
+
M = 10 gives
+tOutDFT = [0.44247336601256393, 0.4435892539913766, 0.44337107898900285, 0.44301720801740885, 0.44676208298187703]
+tOutFFT = [0.14968713297275826, 0.14875479298643768, 0.14765915501629934, 0.14755774100194685, 0.1474623599788174]
+
+
+
+
+ +
+
+ +
+
+
+
+
+

For small $M$, DFT is faster, but as $M$ increases, FFT gets a lot more efficient.

+ +
+
+ +
+
+
+
+
+ + diff --git a/Week 2/feedback/6 Composite Numerical Integration Trapezoid and Simpson Rules.html b/Week 2/feedback/6 Composite Numerical Integration Trapezoid and Simpson Rules.html new file mode 100755 index 0000000..cf61759 --- /dev/null +++ b/Week 2/feedback/6 Composite Numerical Integration Trapezoid and Simpson Rules.html @@ -0,0 +1,13669 @@ + + + + + +6 Composite Numerical Integration: Trapezoid and Simpson Rules + + + + + + + + + + + + + + + + + +
+
+
+

6 Composite Numerical Integration: Trapezoid and Simpson Rules (Score: 16.0 / 17.0)

+
+
    + + + + + + + + + + + +
  1. Coding free-response (Score: 0.0 / 0.0)
  2. + + + + + + + +
  3. Coding free-response (Score: 6.0 / 6.0)
  4. + + + + + + + +
  5. Coding free-response (Score: 1.0 / 1.0)
  6. + + + + + + + + + + +
  7. Coding free-response (Score: 6.0 / 6.0)
  8. + + +
  9. Comment
  10. + + + + + + +
  11. Coding free-response (Score: 3.0 / 4.0)
  12. + + +
  13. Comment
  14. + + + + + + + + + +
+
+
+
+
+
+ +
+
+
+
+

CDS: Numerical Methods Assignments

    +
  • See lecture notes and documentation on Brightspace for Python and Jupyter basics. If you are stuck, try to google or get in touch via Discord.

    +
  • +
  • Solutions must be submitted via the Jupyter Hub.

    +
  • +
  • Make sure you fill in any place that says YOUR CODE HERE or "YOUR ANSWER HERE".

    +
  • +
+

Submission

    +
  1. Name all team members in the the cell below
  2. +
  3. make sure everything runs as expected
  4. +
  5. restart the kernel (in the menubar, select Kernel$\rightarrow$Restart)
  6. +
  7. run all cells (in the menubar, select Cell$\rightarrow$Run All)
  8. +
  9. Check all outputs (Out[*]) for errors and resolve them if necessary
  10. +
  11. submit your solutions in time (before the deadline)
  12. +
+ +
+
+team_members = "Koen Vendrig, Kees van Kempen" +
+
+
+
+

Composite Numerical Integration: Trapezoid and Simpson Rules

In the following we will implement the composite trapezoid and Simpson rules to calculate definite integrals. These rules are defined by

+\begin{align} + \int_a^b \, f(x)\, dx &\approx \frac{h}{2} \left[ f(a) + 2 \sum_{j=1}^{n-1} f(x_j) + f(b) \right] + &\text{trapezoid} \\ + &\approx \frac{h}{3} \left[ f(a) + 2 \sum_{j=1}^{n/2-1} f(x_{2j}) + 4 \sum_{j=1}^{n/2} f(x_{2j-1}) + f(b) \right] + &\text{Simpson} +\end{align}

with $a = x_0 < x_1 < \dots < x_{n-1} < x_n = b$ and $x_k = a + kh$. Here $k = 0, \dots, n$ and $h = (b-a) / n$ is the step size.

+ +
+
+ +
+
+
In [1]:
+
Student's answer + Score: 0.0 / 0.0 (Top) +
+
+
+
import numpy as np
+import scipy.integrate
+from matplotlib import pyplot as plt
+
+# And for printing the lambdas:
+import inspect
+
+ +
+
+ +
+
+ +
+
+
+
+
+

Task 1

Implement both integration schemes as Python functions $\text{trapz(yk, dx)}$ and $\text{simps(yk, dx)}$. The argument $\text{yk}$ is an array of length $n+1$ representing $y_k = f(x_k)$ and $\text{dx}$ is the step size $h$. Compare your results with Scipy's functions $\text{scipy.integrate.trapz(yk, xk)}$ and $\text{scipy.integrate.simps(yk, xk)}$ for a $f(x_k)$ of your choice.

+

Try both even and odd $n$. What do you see? Why?

+

Hint: go to the Scipy documentation. How are even and odd $n$ handled there?

+ +
+
+ +
+
+
In [2]:
+
Student's answer + Score: 6.0 / 6.0 (Top) +
+
+
+
def trapz(yk, dx):
+    """
+    Return integration estimate for curve yk with steps dx
+    using the trapezoid algorithm.
+    """
+    
+    a, b = yk[0], yk[-1]
+    h = dx
+    integral = h/2*(a + 2*np.sum(yk[1:-1]) + b)
+    return integral
+    
+def simps(yk, dx):
+    """
+    Return integration estimate for curve yk with steps dx
+    using Simpson's algorithm.
+    """
+    
+    a, b = yk[0], yk[-1]
+    h = dx
+    # Instead of summing over something with n/2, we use step size 2,
+    # thus avoiding any issues with 2 ∤ n.
+    integral = h/3*(a + 2*np.sum(yk[2:-1:2]) + 4*np.sum(yk[1:-1:2]) + b)
+    return integral
+
+ +
+
+ +
+
+ +
+
+
+
In [3]:
+
+
def compare_integration(f, a, b, n):
+    """
+    Prints an analysis of integration estimates to function f(x)
+    over interval [a,b] in n steps using both the trapezoid and Simpson's
+    algorithm, self-implemented and Scipy implemented.
+    """
+    
+    h    = (b - a)/n
+    xk   = np.linspace(a, b, n + 1)
+    yk   = f(xk)
+    
+    print("For function", inspect.getsource(f))
+    print("for boundaries a =", a, ", b =", b, "and steps n =", n, "the algorithms say:")
+    print("trapezoid:\t\t", trapz(yk, h))
+    print("Simpson:\t\t", simps(yk, h))
+    print("scipy.integrate.trapz:\t", scipy.integrate.trapz(yk, xk))
+    print("scipy.integrate.simps:\t", scipy.integrate.simps(yk, xk))
+    print()
+
+ +
+
+
+ +
+
+
+
In [4]:
+
Student's answer + Score: 1.0 / 1.0 (Top) +
+
+
+
# We need a function to integrate, so here we go.
+f = lambda x: x**2
+
+n    = 100001
+a, b = 0, 1
+
+ +
+
+ +
+
+ +
+
+
+
In [5]:
+
+
compare_integration(f, a, b, n)
+compare_integration(f, a, b, n + 1)
+
+ +
+
+
+ +
+
+ + +
+ +
+ + +
+
For function f = lambda x: x**2
+
+for boundaries a = 0 , b = 1 and steps n = 100001 the algorithms say:
+trapezoid:		 0.33333333334999976
+Simpson:		 0.3333300000666658
+scipy.integrate.trapz:	 0.33333333334999965
+scipy.integrate.simps:	 0.3333333333333335
+
+For function f = lambda x: x**2
+
+for boundaries a = 0 , b = 1 and steps n = 100002 the algorithms say:
+trapezoid:		 0.3333333333499994
+Simpson:		 0.33333333333333337
+scipy.integrate.trapz:	 0.3333333333499993
+scipy.integrate.simps:	 0.3333333333333333
+
+
+
+
+ +
+
+ +
+
+
+
+
+

Task 2

Implement at least one test function for each of your integration functions.

+ +
+
+ +
+
+
In [6]:
+
Student's answer + Score: 6.0 / 6.0 (Top) +
+
+
+
# In the comparison of n even and n odd, and the testing of the integrations,
+# we have already tested the functions, but as it is asked, here we go again.
+
+def test_trapz():
+    fun  = lambda x: x**3 + 6*x
+    a, b = 2, 16
+    n    = 82198
+    
+    h    = (b - a)/n
+    xk   = np.linspace(a, b, n + 1)
+    yk   = f(xk)
+
+    trapz_our   = trapz(yk, h)
+    trapz_scipy = scipy.integrate.trapz(yk, xk)
+    
+    print("For function f(x) = x^3 + 6x")
+    print("for boundaries a =", a, ", b =", b, "and steps n =", n, "the algorithms say:")
+    print("trapezoid:\t\t", trapz_our)
+    print("scipy.integrate.trapz:\t", trapz_scipy)
+    print("with difference trapz(yk, h) - scipy.integrate.trapz(yk, xk) =", trapz_our - trapz_scipy)
+    print()
+    
+def test_simps():
+    fun  = lambda x: -x**3 + 6*x
+    a, b = 2, 17
+    n    = 82228
+    
+    h    = (b - a)/n
+    xk   = np.linspace(a, b, n + 1)
+    yk   = f(xk)
+
+    simps_our   = simps(yk, h)
+    simps_scipy = scipy.integrate.simps(yk, xk)
+    
+    print("For function f(x) = -x^3 + 6x")
+    print("for boundaries a =", a, ", b =", b, "and steps n =", n, "the algorithms say:")
+    print("Simpson:\t\t", simps_our)
+    print("scipy.integrate.simps:\t", simps_scipy)
+    print("with difference simps(yk, h) - scipy.integrate.simps(yk, xk) =", simps_our - simps_scipy)
+    print()
+    
+test_trapz()
+test_simps()
+
+ +
+
+ +
+
+ +
+
+ + +
+ +
+ + +
+
For function f(x) = x^3 + 6x
+for boundaries a = 2 , b = 16 and steps n = 82198 the algorithms say:
+trapezoid:		 1362.6666667343538
+scipy.integrate.trapz:	 1362.6666667343543
+with difference trapz(yk, h) - scipy.integrate.trapz(yk, xk) = -4.547473508864641e-13
+
+For function f(x) = -x^3 + 6x
+for boundaries a = 2 , b = 17 and steps n = 82228 the algorithms say:
+Simpson:		 1635.0
+scipy.integrate.simps:	 1635.0000000000002
+with difference simps(yk, h) - scipy.integrate.simps(yk, xk) = -2.2737367544323206e-13
+
+
+
+
+ +
+
+ +
+
+
+
+
+

Task 3

Study the accuracy of these integration routines by calculating the following integrals for a variety of step sizes $h$:

+
    +
  • $\int_0^1 \, x\, dx$
  • +
  • $\int_0^1 \, x^2\, dx$
  • +
  • $\int_0^1 \, x^\frac{1}{2}\, dx$
  • +
+

The integration error is defined as the difference (not the absolute difference) between your numerical results and the exact results. Plot the integration error as a function of $h$ for both integration routines and all listed functions. Comment on the comparison between both integration routines. Does the sign of the error match your expectations? Why?

+ +
+
+ +
+
+
In [7]:
+
Student's answer + Score: 3.0 / 4.0 (Top) +
+
+
+
f1 = lambda x: x
+f2 = lambda x: x**2
+f3 = lambda x: x**(1/2)
+
+a, b = 0, 1
+h_list = np.logspace(-3, 1, 50)
+
+f1_simps = np.zeros(len(h_list))
+f1_trapz = np.zeros(len(h_list))
+f2_simps = np.zeros(len(h_list))
+f2_trapz = np.zeros(len(h_list))
+f3_simps = np.zeros(len(h_list))
+f3_trapz = np.zeros(len(h_list))
+
+for i in range(len(h_list)):
+    h    = h_list[i]
+    xk   = np.arange(a, b, h)
+    n    = len(xk)
+    
+    # The repetition could be reduced, but we deem that unnecessary.
+    f1_simps[i] = simps(f1(xk), h)
+    f1_trapz[i] = trapz(f1(xk), h)
+    f2_simps[i] = simps(f2(xk), h)
+    f2_trapz[i] = trapz(f1(xk), h)
+    f3_simps[i] = simps(f2(xk), h)
+    f3_trapz[i] = trapz(f1(xk), h)
+
+ +
+
+ +
+
+ +
+
+
+
In [8]:
+
+
fig, ax = plt.subplots(1, 3, sharex=True, sharey=True, figsize=(16,6))
+
+fig.suptitle("Difference estimated integral minus true analytic value for three functions and two algorithms:")
+
+ax[0].set_xlabel("h")
+ax[1].set_xlabel("h")
+ax[2].set_xlabel("h")
+
+# We only need to set the scale and direction for one graph,
+# as we set sharex.
+ax[0].set_xscale("log")
+ax[0].invert_xaxis()
+
+ax[0].set_title(r"error in $\int_0^1xdx$")
+ax[0].plot(h_list, f1_trapz - 1/2, label="trapezoid")
+ax[0].plot(h_list, f1_simps - 1/2, label="Simpson")
+ax[0].legend(loc="lower right")
+
+ax[1].set_title(r"error in $\int_0^1x^2dx$")
+ax[1].plot(h_list, f2_trapz - 1/3, label="trapezoid")
+ax[1].plot(h_list, f2_simps - 1/3, label="Simpson")
+ax[1].legend(loc="lower right")
+
+ax[2].set_title(r"error in $\int_0^1x^\frac{1}{2}dx$")
+ax[2].plot(h_list, f3_trapz - 2/3, label="trapezoid")
+ax[2].plot(h_list, f3_simps - 2/3, label="Simpson")
+ax[2].legend(loc="lower right")
+
+fig.show()
+
+ +
+
+
+ +
+
+ + +
+ +
+ + + + +
+ +
+ +
+ +
+
+ +
+
+
+
+
+

Somehow, the shape of the error functions seems to be similar, with peaks a similar pattern, for the three functions and the the algorithms. +The errors to the Simpson algorithm seems to be negative, thus the integration function gives lower estimates to the integrals. +This cannot be said about the trapezoid algorithm. +The trapezoid algorithms has the same trend, but becomes larger and positive in the latter two functions. +For Simpson's algorithm, as desired, over the range of decreasing $h$, the error decreases converging to around zero

+ +
+
+ +
+
+
+
+
+ +