cds-numerical-methods/Week 3/7 Linear Equation Systems.ipynb
2022-02-18 17:33:35 +01:00


{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "4ec40081b048ce2f34f3f4fedbb0be10",
"grade": false,
"grade_id": "cell-98f724ece1aacb67",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"# CDS: Numerical Methods Assignments\n",
"\n",
"- See lecture notes and documentation on Brightspace for Python and Jupyter basics. If you are stuck, try to google or get in touch via Discord.\n",
"\n",
"- Solutions must be submitted via the Jupyter Hub.\n",
"\n",
"- Make sure you fill in any place that says `YOUR CODE HERE` or \"YOUR ANSWER HERE\".\n",
"\n",
"## Submission\n",
"\n",
"1. Name all team members in the cell below\n",
"2. Make sure everything runs as expected\n",
"3. **Restart the kernel** (in the menubar, select Kernel$\\rightarrow$Restart)\n",
"4. **Run all cells** (in the menubar, select Cell$\\rightarrow$Run All)\n",
"5. Check all outputs (Out[\\*]) for errors and **resolve them if necessary**\n",
"6. Submit your solutions **in time (before the deadline)**"
]
},
{
"cell_type": "raw",
"metadata": {},
"source": [
"team_members = \"Koen Vendrig, Kees van Kempen\""
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "2900c663e4345c7f0707be39cc0cc7f4",
"grade": false,
"grade_id": "cell-b6b5c93e38567117",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"## Linear Equation Systems\n",
"\n",
"In the following you will implement the Gauss-Seidel (GS), Steepest Descent (SD), and Conjugate Gradient (CG) algorithms to solve linear equation systems of the form \n",
"\n",
"$$A \\mathbf{x} = \\mathbf{b},$$ \n",
"\n",
"with $A$ being an $n \\times n$ matrix."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "55d777356d474cb7327bb087dfe5e644",
"grade": true,
"grade_id": "cell-bcf2dd8f9a194be8",
"locked": false,
"points": 0,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"import numpy as np\n",
"import numpy.linalg as linalg\n",
"from matplotlib import pyplot as plt"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "9717dea99ca91d38963fc25b2ef95e03",
"grade": false,
"grade_id": "cell-595bb99f65f79ca7",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"### Task 1\n",
"First, you need to implement a Python function $\\text{diff(a,b)}$, which returns the difference $\\text{d}$ between two $n$-dimensional vectors $\\text{a}$ and $\\text{b}$ according to \n",
"\n",
"$$ d = || \\mathbf{a} - \\mathbf{b}||_\\infty = \\underset{i=1,2,\\dots,n}{\\operatorname{max}} |a_i - b_i|. $$"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "437477eff36ed8c083b4903a6921591c",
"grade": true,
"grade_id": "cell-bcd26d8a9f0e6447",
"locked": false,
"points": 3,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"def diff(a, b):\n",
"    return np.max(np.abs(a - b))"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "a10d859fa1b26c11ae00c7b846fdd1e8",
"grade": false,
"grade_id": "cell-19ae2e6fe5c9264c",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"### Task 2 \n",
"\n",
"The Gauss-Seidel iteration scheme to solve the linear equation system \n",
"\n",
"$$A \\mathbf{x} = \\mathbf{b}$$\n",
"\n",
"is defined by \n",
"\n",
"$$x_i^{(k)} = \\frac{1}{a_{ii}} \\left[ -\\sum_{j=0}^{i-1} a_{ij} x_j^{(k)} -\\sum_{j=i+1}^{n-1} a_{ij} x_j^{(k-1)} + b_i \\right].$$\n",
"\n",
"Note especially the difference in the sums: the first one involves $x_j^{(k)}$ and the second one $x_j^{(k-1)}$.\n",
"\n",
"\n",
"Give the outline of the derivation in LaTeX math notation in the markdown cell below. (Double click on \"YOUR ANSWER HERE\" to open the cell, and ctrl+enter to compile.) \n",
"\n",
"Hint: Similar to the Jacobi scheme, start by separating the matrix $A$ into its diagonal ($D$), lower triangular ($L$) and upper triangular ($U$) forms, such that $A = D - L - U$."
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "3c0c1bc83919a390739fa5802fc75743",
"grade": true,
"grade_id": "cell-3a8403fee85d9582",
"locked": false,
"points": 2,
"schema_version": 3,
"solution": true,
"task": false
}
},
"source": [
"---\n",
"\n",
"We start from our linear equations:\n",
"\n",
"$$Ax = b$$\n",
"\n",
"We separate A into different components (diagonal, strictly lower triangular and strictly upper triangular):\n",
"\n",
"$$A = D - L - U$$\n",
"\n",
"\n",
"We write $D - L$ as $L'$ to get:\n",
"\n",
"\n",
"$$(L' - U)x = b$$\n",
"\n",
"\n",
"We take the iterative process of the Gauss-Seidel method to write:\n",
"\n",
"\n",
"$$\n",
"L'x^{(k)} = b + Ux^{(k-1)}\\\\\n",
"x^{(k)} = L'^{-1}\\left(b + Ux^{(k-1)}\\right)\\\\\n",
"$$\n",
"Here $L'$ is invertible because its diagonal entries are the non-zero $a_{ii}$. Writing the components of the matrix $A$ as $a_{ij}$, forward substitution turns the previous equation into\n",
"\n",
"\n",
"$$x_i^{(k)} = \\frac{1}{a_{ii}}\\left[-\\sum_{j=0}^{i-1}a_{ij}x_{j}^{(k)} -\\sum_{j=i+1}^{n-1}a_{ij}x_{j}^{(k-1)} + b_i\\right].$$"
]
},
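{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick sanity check of the derivation (illustrative only, not part of the graded answer), we can verify numerically on the task-4 example matrix that $A = D - L - U$ holds for this sign convention, and that one element-wise sweep starting from $\\mathbf{x}^{(0)} = \\mathbf{0}$ equals one step of the matrix form $L'\\mathbf{x}^{(1)} = \\mathbf{b} + U\\mathbf{x}^{(0)}$:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative check of the Gauss-Seidel derivation above (not graded).\n",
"A_chk = np.array([[10., -1., 2., 0.],\n",
"                  [-1., 11., -1., 3.],\n",
"                  [2., -1., 10., -1.],\n",
"                  [0., 3., -1., 8.]])\n",
"b_chk = np.array([6., 25., -11., 15.])\n",
"\n",
"D_chk = np.diag(np.diag(A_chk))\n",
"L_chk = D_chk - np.tril(A_chk)  # minus the strictly lower part of A\n",
"U_chk = D_chk - np.triu(A_chk)  # minus the strictly upper part of A\n",
"assert np.allclose(A_chk, D_chk - L_chk - U_chk)\n",
"\n",
"# One step of the matrix form, starting from x = 0: L' x = b + U 0 = b.\n",
"x_mat = np.linalg.solve(D_chk - L_chk, b_chk)\n",
"\n",
"# One element-wise sweep of the update rule gives the same vector.\n",
"x_sweep = np.zeros(4)\n",
"for i in range(4):\n",
"    x_sweep[i] = (b_chk[i] - A_chk[i, :i] @ x_sweep[:i])/A_chk[i, i]\n",
"assert np.allclose(x_mat, x_sweep)"
]
},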
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "dfbc2086b4294a7d942d35d9e1a22d5b",
"grade": false,
"grade_id": "cell-4ebd4c774509912d",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"### Task 3\n",
"\n",
"Implement the Gauss-Seidel iteration scheme derived above\n",
"\n",
"$$x_i^{(k)} = \\frac{1}{a_{ii}} \\left[ -\\sum_{j=0}^{i-1} a_{ij} x_j^{(k)} -\\sum_{j=i+1}^{n-1} a_{ij} x_j^{(k-1)} + b_i \\right],$$\n",
"\n",
"where $a_{ij}$ are the elements of the matrix $A$, and $x_i$ and $b_i$ the elements of vectors $\\mathbf{x}$ and $\\mathbf{b}$, respectively.\n",
"\n",
"Write a Python function $\\text{GS(A, b, eps)}$, where $\\text{A}$ represents the $n \\times n$ $A$ matrix, $\\text{b}$ represents the $n$-dimensional right-hand-side vector $\\mathbf{b}$, and $\\text{eps}$ is a scalar $\\varepsilon$ defining the accuracy up to which the iteration is performed. Your function should return both the solution vector $\\mathbf{x}^{(k)}$ from the last iteration step and the corresponding iteration index $k$. \n",
"\n",
"Use an assertion to make sure the diagonal elements of $A$ are all non-zero. Initialize your iteration with $\\mathbf{x}^{(0)} = \\mathbf{0}$ (or with $\\mathbf{x}^{(1)} = D^{-1}\\mathbf{b}$, with $D$ the diagonal of $A$) and increase $k$ until $|| \\mathbf{x}^{(k)} - \\mathbf{x}^{(k-1)}||_\\infty < \\varepsilon$. "
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "65b01c4e4c650dc6971b7d6dde62044b",
"grade": true,
"grade_id": "cell-ec2390c3cafa2f8e",
"locked": false,
"points": 3,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"def GS(A, b, eps, k_max=10000):\n",
"    \"\"\"\n",
"    Return the Gauss-Seidel estimate of the solution x to the problem\n",
"    Ax = b and the number of iterations k it took to bring the maximum\n",
"    norm of the update below eps or to reach the iteration maximum k_max.\n",
"    \"\"\"\n",
"    \n",
"    # Assert n by n matrix.\n",
"    assert len(A.shape) == 2 and A.shape[0] == A.shape[1]\n",
"    n = len(A)\n",
"    \n",
"    # The scheme needs non-zero diagonal elements.\n",
"    d = np.diag(A)\n",
"    assert np.all(d != 0)\n",
"    \n",
"    x_prev = np.zeros(n)\n",
"    # Initialize with x^(1) = D^{-1} b, D being the diagonal of A.\n",
"    x_cur = b / d\n",
"    \n",
"    k = 1\n",
"    while diff(x_cur, x_prev) > eps and k < k_max:\n",
"        k += 1\n",
"        # We have to copy: otherwise x_prev and x_cur would reference the\n",
"        # same array, and changes to one would change the other as well.\n",
"        x_prev = x_cur.copy()\n",
"        for i in range(n):\n",
"            x_cur[i] = (-np.dot(A[i, :i], x_cur[:i])\n",
"                        - np.dot(A[i, i + 1:], x_prev[i + 1:]) + b[i])/A[i, i]\n",
"    return x_cur, k"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "badd9e6110882657c17c42e3d9ad8370",
"grade": false,
"grade_id": "cell-78d8cad45f5350b9",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"### Task 4\n",
"\n",
"Verify your implementation by comparing your approximate result to an exact solution. Use $\\text{numpy.linalg.solve()}$ to obtain the exact solution of the system\n",
"\n",
"$$\n",
"\\begin{align*}\n",
" \\begin{pmatrix}\n",
" 10 & -1 & 2 & 0 \\\\ \n",
" -1 & 11 &-1 & 3 \\\\\n",
" 2 & -1 & 10&-1 \\\\\n",
" 0 & 3 & -1& 8\n",
" \\end{pmatrix} \\mathbf{x}^*\n",
" =\n",
" \\begin{pmatrix}\n",
" 6 \\\\\n",
" 25 \\\\\n",
" -11\\\\\n",
" 15\n",
" \\end{pmatrix}\n",
"\\end{align*}\n",
"$$\n",
"\n",
"Then compare your approximate result $\\mathbf{\\tilde{x}}$ to the exact result $\\mathbf{x^*}$ by plotting $|| \\mathbf{x}^* - \\mathbf{\\tilde{x}}||_\\infty$ for different accuracies $\\varepsilon = 10^{-1}, 10^{-2}, 10^{-3}, 10^{-4}$. \n",
"\n",
"Implement a unit test for your function using this system."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "b54e0939053f9afee0f0ba46e490ba17",
"grade": true,
"grade_id": "cell-0585da22f1d304ab",
"locked": false,
"points": 4,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAZMAAAEKCAYAAADXdbjqAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAQxUlEQVR4nO3df2id133H8c9nipOoGZVL445YiWcXBdGwEAzCgXgEN1mmFKLEhKxLukFLTEwysrF/RG3S9b/WYWJjLQlkLg1uyvLDpEbYiVcxSF2HEDYr0aiTGXUmECI5zMkWqU25UNv97g9dx7IiyffqPI+ee+95v0D4Pue599xvOFw+Oec897mOCAEAkOL3qi4AAND+CBMAQDLCBACQjDABACQjTAAAyQgTAECyy6ouoApXX311bNy4seoyAKCtvPHGGx9GxLrFzmUZJhs3btT4+HjVZQBAW7H97lLnslrmsj1ke+/s7GzVpQBAR8kqTCLiUETs7OnpqboUAOgoWYUJAKAchAkAIFlWG/C2hyQN9fX1VV0KAKyq0YlpjYxN6tRMTevXdmt4sF/bN/cW1n9WMxP2TADkaHRiWrsPHNf0TE0haXqmpt0Hjmt0Yrqw98gqTAAgRyNjk6qdOXdRW+3MOY2MTRb2HoQJAHS4UzO1ptpXgjABgA63fm13U+0rkVWY8KVFADkaHuxX95qui9q613RpeLC/sPfIKkzYgAeQo+2be7Xn3hvVu7ZbltS7tlt77r2x0Ku5sro0GABytX1zb6HhsVBWMxMAQDkIEwBAMsIEAJCMMAEAJMsqTLg0GADKkVWYcGkwAJQjqzABAJSDMAEAJCNMAADJCBMAQDLCBACQjDABACTLKkz4ngkAlCOrMOF7JgBQjqzCBABQDsIEAJCMMAEAJCNMAADJCBMAQDLCBACQjDABACQjTAAAyQgTAECyrMKE26kAQDmyChNupwIA5cgqTAAA5SBMAADJCBMAQDLCBACQjDABACQjTAAAyQgTAEAywgQAkIwwAQAkI0wAAMkIEwBAMsIEAJCMMAEAJCNMAADJCBMAQLKOCBPbX7L9lO0XbT9SdT0AkJvKw8T207ZP235rQfudtidtn7S9a7k+IuJERDws6auStpZZLwDg0yoPE0n7JN05v8F2l6QnJX1F0g2SHrB9g+0bbb+04O8L9dfcLellSYdXt3wAwGVVFxARR21vXNC8RdLJiHhHkmw/L+meiNgj6a4l+jko6aDtlyU9u/C87Z2SdkrShg0bivsPAABUHyZL6JX03rzjKUk3L/Vk29sk3SvpCi0xM4mIvZL2StLAwEAUVCcAQK0bJk2JiCOSjlRcBgBkqxX2TBYzLem6ecfX1tuS2B6yvXd2dja1KwDAPK0aJsckXW97k+3LJd0v6WBqpxFxKCJ29vT0JBcIALig8jCx/Zyk1yX1256yvSMizkp6VNKYpBOS9kfE21XWCQBYWuV7JhHxwBLth8VlvgDQFiqfmawm9kwAoBxZhQl7JgBQjqzCBABQjqzChGUuAChHVmHCMhdQvtGJaW19/BVt2vWytj7+ikYnkr8ihjZQ+dVcADrH6MS0dh84rtqZc5Kk6Zmadh84Lknavrm3ytJQsqxmJgDKNTI2+UmQnFc7c04jY5MVVYTVQpgAKMypmVpT7egcWYUJG/BAudav7W6qHZ0jqzBhAx4o1/Bgv7rXdF3U1r2mS8OD/RVVhNXCBjyAwpzfZB8Zm9SpmZrWr+3W8GA/m+8ZIEwAFGr75l7CI0NZLXMBAMqRVZiwAQ8A5cgqTNiAB4ByZBUmAIByECYAgGSECQAgGWECAEiWVZhwNRcAlCOrMOFqLgAoR6FhYvuzRfYHAGgPhd1OxfYjks7ZvjUi/rKofgEAra/Ie3Odqf97tsA+AQBtoMhlrvcl9Uo6XWCfAIA2UGSY3CzpNUmbCuwTANAGCguTiPi2pP+T9FBRfQIA2kOhV3NFxJsRMVNkn0Xiey
YAUI6mN+Btb2jwqTMR8atm+y9TRBySdGhgYIDZEwAUaCVXc/2ogeeEpH2SnllB/wCANtN0mETEl8soBADQvrJa5gIAlINlLgBAsqbCxHYXy1wAgIWWvTTYdq/t79r+TL3p720/UT/3vdKrAwC0hUt9z+RGSTdJGqgffyTp3frjX9s+ZPsqSbI9aPu1csoEALSyZZe5IuKntr8eEUfrTVsk/Vv93Ldsf03SEdu/lfSxpF2lVgsAaElN7ZlExN22PydJtm/X3K1TfiPpGkkPRsRk8SUCAFpdI7dTeXP+QUR8VH/4mKS/i4htku6T9ILt24otDwDQDi45M4mIkSXab5v3+Ljtr0j6iaRbiiuvWLaHJA319fVVXQoAdJQi7xr8vqTbi+qvDPwGPACUo+i7BteK7A8A0B6Sw6S+dAQAyFgRM5PvFNAHAKCNFREmLqAPAEAbKyJMooA+AABtrNANeABAnggTAECyIsLkfwroAwDQxpLDJCLuKKIQAED7YpkLAJCMMAEAJCNMAADJmgqTeb+q+PvllAMAaEfNzkw+Z/tRSX9cRjEAgPbUbJjcLukbkr5o+wvFl7Nytq+yPW77rqprAYDcNBsm/yHpQUnvRsTpIgqw/bTt07bfWtB+p+1J2ydtN/Lb8t+UtL+ImgAAzWn2N+BP1B/+osAa9kl6QtIz5xtsd0l6UtIdkqYkHbN9UFKXpD0LXv+gpJsk/ZekKwusCwDQoKbCpAwRcdT2xgXNWySdjIh3JMn285LuiYg9kj61jGV7m6SrJN0gqWb7cET8rsy6AQAXNLzMZbvX9ndtf6Z+/D3bZd1+vlfSe/OOp+pti4qIxyLibyU9K+kHiwWJ7Z31PZXxDz74oOh6ASBrzeyZ3Ki55aSB+vGvJR2cd7nwoO3XCq6vKRGxLyJeWuLc3ogYiIiBdevWrXZpANDRGl7mioif2v56RBytH3/L9tckHbH9W0kfS2pko7wR05Kum3d8bb0NANCCVvwNeNu3S3pI0m8kXS3pbyLi1YLqOibpetubbF8u6X5JB1M7tT1ke+/s7GxygQCAC5oNkzfnPX5M0rcjYpuk+yS9YPu2Zguw/Zyk1yX1256yvSMizkp6VNKYpBOS9kfE2832vVBEHIqInT09PaldAQDmcUQxv7pr+xpJP4mIWwrpsEQDAwMxPj5edRkA0FZsvxERA4uda/rSYNsbljm9Y975mYj4VbP9l8n2kKShvr6+qksBgI7S9MzE9s+WOR2SXP93X0Q8s8xzK8PMBACaV+jMJCK+nF4SAKCTFL3MNV/LLXMBAMqxktup/KiB54Tm7rnVUstc7JkAQDkKu5qrnbBn0jlGJ6Y1MjapUzM1rV/breHBfm3fvOSddwAkKHTPBGgVoxPT2n3guGpnzkmSpmdq2n3guCQRKMAq4zfg0bZGxiY/CZLzamfOaWRssqKKgHxlFSbcTqWznJqpNdUOoDxZhQm3U+ks69d2N9UOoDxZhQk6y/Bgv7rXdF3U1r2mS8OD/RVVBOSLDXi0rfOb7FzNBVSPMEFb2765l/AAWkBWy1xswANAObIKEzbgAaAcWYUJAKAchAkAIBlhAgBIRpgAAJIRJgCAZFmFCZcGA0A5sgoTLg0GgHJkFSYAgHIQJgCAZIQJACAZYQIASEaYAACSESYAgGRZhQnfMwGAcmQVJnzPBADKkVWYAADKQZgAAJIRJgCAZIQJACAZYQIASEaYAACSESYAgGSECQAgGWECAEiWVZhwOxUAKEdWYcLtVACgHFmFCQCgHIQJACAZYQIASEaYAACSESYAgGSECQAgGWECAEhGmAAAkhEmAIBkhAkAIBlhAgBIRpgAAJIRJgCAZIQJACAZYQIASNYRYWJ7m+1XbT9le1vV9QBAbioPE9tP2z5t+60F7XfanrR90vauS3QTkj6WdKWkqbJqBQAs7rKqC5C0T9ITkp4532C7S9KTku7QXDgcs31QUpekPQte/6CkVyPi57b/QNI/SvqLVa
gbAFBXeZhExFHbGxc0b5F0MiLekSTbz0u6JyL2SLprme4+knTFYids75S0U5I2bNiQWjYAYJ7Kl7mW0CvpvXnHU/W2Rdm+1/Y/S/qx5mY5nxIReyNiICIG1q1bV2ixAJC7ymcmRYiIA5IOVF0HAOSqVWcm05Kum3d8bb0tie0h23tnZ2dTuwIAzNOqYXJM0vW2N9m+XNL9kg6mdhoRhyJiZ09PT3KBAIALKg8T289Jel1Sv+0p2zsi4qykRyWNSTohaX9EvF1lnQCApVW+ZxIRDyzRfljS4SLfy/aQpKG+vr4iuwWA7FU+M1lNLHMBQDmyChMAQDkIEwBAsqzChEuDAaAcWYUJeyYAUI6swgQAUA7CBACQjDABACTLKkzYgAeAcmQVJqkb8KMT09r6+CvatOtlbX38FY1OJN97EgA6QuW3U2kXoxPT2n3guGpnzkmSpmdq2n3guCRp++Ylf2oFALKQ1cwkxcjY5CdBcl7tzDmNjE1WVBEAtA7CpEGnZmpNtQNATrIKk5QN+PVru5tqB4CcZBUmKRvww4P96l7TdVFb95ouDQ/2F1UeALQtNuAbdH6TfWRsUqdmalq/tlvDg/1svgOACJOmbN/cS3gAwCKyWuYCAJSDMAEAJMsqTLidCgCUI6sw4fdMAKAcWYUJAKAcjoiqa1h1tj+Q9O4ip3okLVwDW6ztakkfllDapSxWy2r108hrLvWc5c4vda7Vx0QqZlzKGpNGnlfWuLT7mKy0n07+rPxhRKxb9ExE8Ff/k7S3wbbxVqlvtfpp5DWXes5y55c61+pjUtS4lDUmVY5Lu49JmePSiZ8VlrkudqjBtqoUVctK+mnkNZd6znLnlzrX6mMiFVNPWWPSyPM6cVz4rDReSyGyXOZKZXs8IgaqrgMXMCathzFpTWWNCzOTldlbdQH4FMak9TAmramUcWFmAgBIxswEAJCMMAEAJCNMAADJCJOC2b7K9rjtu6quBXNsf8n2U7ZftP1I1fVAsr3d9g9sv2D7T6uuB3Nsf9H2D22/2OxrCZM620/bPm37rQXtd9qetH3S9q4GuvqmpP3lVJmfIsYlIk5ExMOSvippa5n15qCgMRmNiIckPSzpz8usNxcFjcs7EbFjRe/P1VxzbN8q6WNJz0TEH9XbuiT9UtIdkqYkHZP0gKQuSXsWdPGgpJskfV7SlZI+jIiXVqf6zlXEuETEadt3S3pE0o8j4tnVqr8TFTUm9df9g6R/iYg3V6n8jlXwuLwYEfc18/780mJdRBy1vXFB8xZJJyPiHUmy/bykeyJij6RPLWPZ3ibpKkk3SKrZPhwRvyuz7k5XxLjU+zko6aDtlyURJgkK+qxY0uOS/pUgKUZRn5WVIkyW1yvpvXnHU5JuXurJEfGYJNn+huZmJgRJOZoal3rI3yvpCkmHyywsY02NiaS/lvQnknps90XEU2UWl7FmPyufl/QdSZtt766HTkMIkxJExL6qa8AFEXFE0pGKy8A8EfF9Sd+vug5cLCL+V3P7WE1jA35505Kum3d8bb0N1WJcWg9j0ppWbVwIk+Udk3S97U22L5d0v6SDFdcExqUVMSatadXGhTCps/2cpNcl9duesr0jIs5KelTSmKQTkvZHxNtV1pkbxqX1MCatqepx4dJgAEAyZiYAgGSECQAgGWECAEhGmAAAkhEmAIBkhAkAIBlhAgBIRpgAAJIRJkCLqP+I0X/W//7dNp9PtA2+AQ+0CNv/LenWiHi/6lqAZvF/PkDrOCzpF7b/qepCgGbxeyZAC7B9iyRLuqZ+cz6grTAzAVrDn0n6ZUSc9ZzPVl0Q0Az2TIAWYHuLpB9KCkk1SX8VEW9UWxXQOMIEAJCMZS4AQDLCBACQjDABACQjTAAAyQgTAEAywgQAkIwwAQAkI0wAAMn+H8is0a3Xr+ppAAAAAElFTkSuQmCC\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"A = np.array([[10, -1,  2,  0],\n",
"              [-1, 11, -1,  3],\n",
"              [ 2, -1, 10, -1],\n",
"              [ 0,  3, -1,  8]])\n",
"b = np.array([6, 25, -11, 15])\n",
"x_exact = linalg.solve(A, b)\n",
"\n",
"eps_list = [1e-1, 1e-2, 1e-3, 1e-4]\n",
"diff_list = []\n",
"for eps in eps_list:\n",
"    x, k = GS(A, b, eps)\n",
"    diff_list.append(diff(x_exact, x))\n",
"\n",
"fig, ax = plt.subplots()\n",
"\n",
"ax.scatter(eps_list, diff_list)\n",
"ax.set_xscale(\"log\")\n",
"ax.set_yscale(\"log\")\n",
"ax.set_xlabel(r\"$\\epsilon$\")\n",
"ax.set_ylabel(r\"$||\\vec{x}^* - \\vec{\\tilde{x}}||_\\infty$\")\n",
"\n",
"fig.show()"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"# As the three algorithm functions will have the same signature,\n",
"# it makes sense to only write the test function once.\n",
"\n",
"def test_alg(alg, alg_name):\n",
"    \"\"\"\n",
"    Check that function alg returns solutions for the example system Ax = b\n",
"    within the error defined by the same eps as used for the iteration.\n",
"    \"\"\"\n",
"    \n",
"    A = np.array([[10, -1,  2,  0],\n",
"                  [-1, 11, -1,  3],\n",
"                  [ 2, -1, 10, -1],\n",
"                  [ 0,  3, -1,  8]])\n",
"    b = np.array([6, 25, -11, 15])\n",
"    x_exact = linalg.solve(A, b)\n",
"\n",
"    print(\"Starting with A =\")\n",
"    print(A)\n",
"    print(\"and b =\", b)\n",
"    print(\"We apply the {} algorithm to solve Ax = b.\".format(alg_name))\n",
"    print()\n",
"\n",
"    eps_list = [1e-1, 1e-2, 1e-3, 1e-4]\n",
"    for eps in eps_list:\n",
"        x, k = alg(A, b, eps)\n",
"        print(\"For eps = {:.0e}\\tafter k = {:d}\\t iterations:\".format(eps, k))\n",
"        print(\"x =\\t\\t\\t\", x)\n",
"        print(\"Ax =\\t\\t\\t\", np.dot(A, x))\n",
"        print(\"diff(Ax, b) =\\t\\t\", diff(A @ x, b))\n",
"        print(\"diff(x, x_exact) =\\t\", diff(x, x_exact))\n",
"        print()\n",
"        \n",
"        assert diff(x, x_exact) < eps"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "d8e54a45261b013b9dd07a401d86401c",
"grade": true,
"grade_id": "cell-6524fb6d322a6ea0",
"locked": false,
"points": 0,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Starting with A =\n",
"[[10 -1 2 0]\n",
" [-1 11 -1 3]\n",
" [ 2 -1 10 -1]\n",
" [ 0 3 -1 8]]\n",
"and b = [ 6 25 -11 15]\n",
"We apply the Gauss-Seidel algorithm to solve Ax = b.\n",
"\n",
"For eps = 1e-01\tafter k = 4\t iterations:\n",
"x =\t\t\t [ 0.99463393 1.99776509 -0.99803257 1.00108402]\n",
"Ax =\t\t\t [ 5.95250909 24.98206671 -10.98990699 15. ]\n",
"diff(Ax, b) =\t\t 0.04749090616931895\n",
"diff(x, x_exact) =\t 0.005366066491359844\n",
"\n",
"For eps = 1e-02\tafter k = 5\t iterations:\n",
"x =\t\t\t [ 0.99938302 1.99982713 -0.99978549 1.00009164]\n",
"Ax =\t\t\t [ 5.99443213 24.99877578 -10.99900762 15. ]\n",
"diff(Ax, b) =\t\t 0.005567865937722516\n",
"diff(x, x_exact) =\t 0.000616975874427883\n",
"\n",
"For eps = 1e-03\tafter k = 6\t iterations:\n",
"x =\t\t\t [ 0.99993981 1.99998904 -0.99997989 1.00000662]\n",
"Ax =\t\t\t [ 5.99944928 24.99993935 -10.99991498 15. ]\n",
"diff(Ax, b) =\t\t 0.000550717702960668\n",
"diff(x, x_exact) =\t 6.018928065554263e-05\n",
"\n",
"For eps = 1e-04\tafter k = 7\t iterations:\n",
"x =\t\t\t [ 0.99999488 1.99999956 -0.99999836 1.00000037]\n",
"Ax =\t\t\t [ 5.99995255 24.99999971 -10.99999375 15. ]\n",
"diff(Ax, b) =\t\t 4.744782363452771e-05\n",
"diff(x, x_exact) =\t 5.11751035947583e-06\n",
"\n"
]
}
],
"source": [
"def test_GS():\n",
"    \"\"\"\n",
"    Check that GS returns solutions for the example system Ax = b\n",
"    within the error defined by the same eps as used for the iteration.\n",
"    \"\"\"\n",
"    \n",
"    return test_alg(GS, \"Gauss-Seidel\")\n",
"\n",
"test_GS()"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "f88ad91685d2bd73471acc5b600b5177",
"grade": false,
"grade_id": "cell-59514f17e337138f",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"### Task 5\n",
"\n",
"Next, implement the Steepest Descent algorithm in a similar Python function $\\text{SD(A, b, eps)}$, which calculates\n",
"\n",
"\\begin{align*}\n",
" \\mathbf{v}^{(k)} &= \\mathbf{b} - A \\mathbf{x}^{(k-1)} \\\\\n",
" t_k &= \\frac{ \\langle \\mathbf{v}^{(k)}, \\mathbf{v}^{(k)} \\rangle }{ \\langle \\mathbf{v}^{(k)}, A \\mathbf{v}^{(k)}\\rangle } \\\\\n",
" \\mathbf{x}^{(k)} &= \\mathbf{x}^{(k-1)} + t_k \\mathbf{v}^{(k)} .\n",
"\\end{align*}\n",
"\n",
"Initialize your iteration again with $\\mathbf{x}^{(0)} = \\mathbf{0}$ and increase $k$ until $|| \\mathbf{x}^{(k)} - \\mathbf{x}^{(k-1)}||_\\infty < \\varepsilon$. Return the solution vector $\\mathbf{x}^{(k)}$ from the last iteration step and the corresponding iteration index $k$. Implement a unit test for your implementation by comparing your result to the exact solution of the system in task 4.\n",
"Use $\\text{numpy.dot()}$ for all needed vector/vector and matrix/vector products. "
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "215218647aa6a6a314c093e82a4a24fb",
"grade": true,
"grade_id": "cell-a105ce7e1a34ce3c",
"locked": false,
"points": 3,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"def SD(A, b, eps, k_max=10000):\n",
"    \"\"\"\n",
"    Return the Steepest Descent estimate of the solution x to the problem\n",
"    Ax = b and the number of iterations k it took to bring the maximum\n",
"    norm of the update below eps or to reach the iteration maximum k_max.\n",
"    \"\"\"\n",
"    \n",
"    # Assert n by n matrix.\n",
"    assert len(A.shape) == 2 and A.shape[0] == A.shape[1]\n",
"    \n",
"    n = len(A)\n",
"    \n",
"    x_cur = np.zeros(n)\n",
"    x_prev = np.zeros(n)\n",
"    \n",
"    k = 0\n",
"    # The `or k == 0` forces at least one iteration, since x_cur and x_prev\n",
"    # are equal before the first step.\n",
"    while (diff(x_cur, x_prev) > eps and k < k_max) or k == 0:\n",
"        k += 1\n",
"        x_prev = x_cur\n",
"        \n",
"        v = b - A @ x_prev\n",
"        t = np.dot(v, v)/np.dot(v, A @ v)\n",
"        x_cur = x_prev + t*v\n",
"    \n",
"    return x_cur, k"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "f554d4f402c9e1a4cbff1f4e9018e9d0",
"grade": true,
"grade_id": "cell-62100bf55800145b",
"locked": false,
"points": 3,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Starting with A =\n",
"[[10 -1 2 0]\n",
" [-1 11 -1 3]\n",
" [ 2 -1 10 -1]\n",
" [ 0 3 -1 8]]\n",
"and b = [ 6 25 -11 15]\n",
"We apply the Steepest Descent algorithm to solve Ax = b.\n",
"\n",
"For eps = 1e-01\tafter k = 4\t iterations:\n",
"x =\t\t\t [ 0.99748613 1.98300329 -0.98904751 1.01283183]\n",
"Ax =\t\t\t [ 6.01376302 24.84309304 -10.89133793 15.04071202]\n",
"diff(Ax, b) =\t\t 0.15690696195356324\n",
"diff(x, x_exact) =\t 0.016996711694861055\n",
"\n",
"For eps = 1e-02\tafter k = 6\t iterations:\n",
"x =\t\t\t [ 0.99983175 1.99716093 -0.99850509 1.00217552]\n",
"Ax =\t\t\t [ 6.00414638 24.97397012 -10.98472385 15.00739201]\n",
"diff(Ax, b) =\t\t 0.02602988052923294\n",
"diff(x, x_exact) =\t 0.002839069878101119\n",
"\n",
"For eps = 1e-03\tafter k = 9\t iterations:\n",
"x =\t\t\t [ 0.99991029 1.99994247 -0.99999645 1.00023877]\n",
"Ax =\t\t\t [ 5.99916754 25.00016961 -11.00032515 15.00173404]\n",
"diff(Ax, b) =\t\t 0.0017340408511579142\n",
"diff(x, x_exact) =\t 0.00023877424067331177\n",
"\n",
"For eps = 1e-04\tafter k = 11\t iterations:\n",
"x =\t\t\t [ 0.99998551 1.99999065 -0.99999949 1.00003874]\n",
"Ax =\t\t\t [ 5.9998655 25.00002733 -11.00005329 15.00028137]\n",
"diff(Ax, b) =\t\t 0.0002813662634846281\n",
"diff(x, x_exact) =\t 3.874139053650083e-05\n",
"\n"
]
}
],
"source": [
"def test_SD():\n",
"    \"\"\"\n",
"    Check that SD returns solutions for the example system Ax = b\n",
"    within the error defined by the same eps as used for the iteration.\n",
"    \"\"\"\n",
"    \n",
"    return test_alg(SD, \"Steepest Descent\")\n",
"\n",
"test_SD()"
]
},
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "695745368fdf84d75691832839fb2834",
"grade": false,
"grade_id": "cell-3277aa3696a5c87f",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"### Task 6\n",
"\n",
"Finally, based on your Steepest Descent implementation from task 5, implement the Conjugate Gradient algorithm in a Python function $\\text{CG(A, b, eps)}$ in the following way: \n",
"\n",
"Initialize your procedure with:\n",
"\n",
"\\begin{align*}\n",
" \\mathbf{x}^{(0)} &= \\mathbf{0} \\\\\n",
" \\mathbf{r}^{(0)} &= \\mathbf{b} - A \\mathbf{x}^{(0)} \\\\\n",
" \\mathbf{v}^{(0)} &= \\mathbf{r}^{(0)}\n",
"\\end{align*}\n",
"\n",
"Then increase $k$ and repeat the following until $|| \\mathbf{x}^{(k)} - \\mathbf{x}^{(k-1)}||_\\infty < \\varepsilon$.\n",
"\n",
"\\begin{align*}\n",
" t_k &= \\frac{ \\langle \\mathbf{r}^{(k)}, \\mathbf{r}^{(k)} \\rangle }{ \\langle \\mathbf{v}^{(k)}, A \\mathbf{v}^{(k)} \\rangle } \\\\\n",
" \\mathbf{x}^{(k+1)} &= \\mathbf{x}^{(k)} + t_k \\mathbf{v}^{(k)} \\\\\n",
" \\mathbf{r}^{(k+1)} &= \\mathbf{r}^{(k)} - t_k A \\mathbf{v}^{(k)} \\\\\n",
" s_k &= \\frac{ \\langle \\mathbf{r}^{(k+1)}, \\mathbf{r}^{(k+1)} \\rangle }{ \\langle \\mathbf{r}^{(k)}, \\mathbf{r}^{(k)} \\rangle } \\\\\n",
" \\mathbf{v}^{(k+1)} &= \\mathbf{r}^{(k+1)} + s_k \\mathbf{v}^{(k)}\n",
"\\end{align*}\n",
"\n",
"Return the solution vector $\\mathbf{x}^{(k)}$ from the last iteration step and the corresponding iteration index $k$. Implement a unit test for your implementation by comparing your result to the exact solution of the system in task 4.\n",
"Use $\\text{numpy.dot()}$ for all needed vector/vector and matrix/vector products.\n",
"\n",
"How do you expect the number of needed iteration steps to behave when changing the accuracy $\\varepsilon$? What do you see?"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "f709b7bfc8af2ba2d607f60ed03e1074",
"grade": true,
"grade_id": "cell-8ac3d07c9d237e47",
"locked": false,
"points": 3,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"def CG(A, b, eps, k_max=10000):\n",
"    \"\"\"\n",
"    Return the Conjugate Gradient estimate of the solution x to the problem\n",
"    Ax = b and the number of iterations k it took to bring the maximum\n",
"    norm of the update below eps or to reach the iteration maximum k_max.\n",
"    \"\"\"\n",
"    \n",
"    # Assert n by n matrix.\n",
"    assert len(A.shape) == 2 and A.shape[0] == A.shape[1]\n",
"    \n",
"    n = len(A)\n",
"    \n",
"    x_cur = np.zeros(n)\n",
"    x_prev = x_cur.copy()\n",
"    r_cur = b - A @ x_cur\n",
"    v = r_cur\n",
"    \n",
"    k = 0\n",
"    # The `or k == 0` forces at least one iteration, since x_cur and x_prev\n",
"    # are equal before the first step.\n",
"    while (diff(x_cur, x_prev) > eps and k < k_max) or k == 0:\n",
"        k += 1\n",
"        x_prev = x_cur\n",
"        r_prev = r_cur\n",
"        \n",
"        t = np.dot(r_prev, r_prev)/np.dot(v, A @ v)\n",
"        x_cur = x_prev + t*v\n",
"        r_cur = r_prev - t*(A @ v)\n",
"        s = np.dot(r_cur, r_cur)/np.dot(r_prev, r_prev)\n",
"        v = r_cur + s*v\n",
"    \n",
"    return x_cur, k"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "0d5b479bee6c0297974c003b4ebd99f3",
"grade": true,
"grade_id": "cell-4f5bc81f40ddb3fa",
"locked": false,
"points": 3,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Starting with A =\n",
"[[10 -1 2 0]\n",
" [-1 11 -1 3]\n",
" [ 2 -1 10 -1]\n",
" [ 0 3 -1 8]]\n",
"and b = [ 6 25 -11 15]\n",
"We apply the Conjugate Gradient algorithm to solve Ax = b.\n",
"\n",
"For eps = 1e-01\tafter k = 4\t iterations:\n",
"x =\t\t\t [ 1. 2. -1. 1.]\n",
"Ax =\t\t\t [ 6. 25. -11. 15.]\n",
"diff(Ax, b) =\t\t 1.7763568394002505e-15\n",
"diff(x, x_exact) =\t 2.220446049250313e-16\n",
"\n",
"For eps = 1e-02\tafter k = 5\t iterations:\n",
"x =\t\t\t [ 1. 2. -1. 1.]\n",
"Ax =\t\t\t [ 6. 25. -11. 15.]\n",
"diff(Ax, b) =\t\t 1.7763568394002505e-15\n",
"diff(x, x_exact) =\t 2.220446049250313e-16\n",
"\n",
"For eps = 1e-03\tafter k = 5\t iterations:\n",
"x =\t\t\t [ 1. 2. -1. 1.]\n",
"Ax =\t\t\t [ 6. 25. -11. 15.]\n",
"diff(Ax, b) =\t\t 1.7763568394002505e-15\n",
"diff(x, x_exact) =\t 2.220446049250313e-16\n",
"\n",
"For eps = 1e-04\tafter k = 5\t iterations:\n",
"x =\t\t\t [ 1. 2. -1. 1.]\n",
"Ax =\t\t\t [ 6. 25. -11. 15.]\n",
"diff(Ax, b) =\t\t 1.7763568394002505e-15\n",
"diff(x, x_exact) =\t 2.220446049250313e-16\n",
"\n"
]
}
],
"source": [
"def test_CG():\n",
"    \"\"\"\n",
"    Check that CG returns solutions for the example system Ax = b\n",
"    within the error defined by the same eps as used for the iteration.\n",
"    \"\"\"\n",
"    \n",
"    return test_alg(CG, \"Conjugate Gradient\")\n",
"\n",
"test_CG()"
]
},
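{
"cell_type": "markdown",
"metadata": {},
"source": [
"A side note related to the question above (illustrative, not the graded answer): in exact arithmetic, Conjugate Gradient applied to a symmetric positive-definite $n \\times n$ system terminates in at most $n$ steps, so once $k$ reaches roughly $n$ it should be nearly independent of $\\varepsilon$ — consistent with the nearly constant $k$ in the output above. A quick check on the $4 \\times 4$ example system:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Side check (not graded): for the symmetric positive-definite 4 x 4\n",
"# example system, CG is fully converged after at most n = 4 steps, plus\n",
"# one extra iteration for the stopping test to trigger.\n",
"A_cg = np.array([[10., -1., 2., 0.],\n",
"                 [-1., 11., -1., 3.],\n",
"                 [2., -1., 10., -1.],\n",
"                 [0., 3., -1., 8.]])\n",
"b_cg = np.array([6., 25., -11., 15.])\n",
"x_cg, k_cg = CG(A_cg, b_cg, 1e-10)\n",
"print(\"k =\", k_cg)\n",
"assert k_cg <= len(b_cg) + 1"
]
},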
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "65dd782269d01314b02643d0610e3348",
"grade": false,
"grade_id": "cell-308595a088486388",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"### Task 7\n",
"\n",
"Apply all three methods to the following system\n",
"\n",
"\\begin{align*}\n",
"\\begin{pmatrix}\n",
"0.2& 0.1& 1.0& 1.0& 0.0 \\\\ \n",
"0.1& 4.0& -1.0& 1.0& -1.0 \\\\\n",
"1.0& -1.0& 60.0& 0.0& -2.0 \\\\\n",
"1.0& 1.0& 0.0& 8.0& 4.0 \\\\\n",
"0.0& -1.0& -2.0& 4.0& 700.0\n",
"\\end{pmatrix} \\mathbf{x}^*\n",
"=\n",
"\\begin{pmatrix}\n",
"1 \\\\\n",
"2 \\\\\n",
"3 \\\\\n",
"4 \\\\\n",
"5\n",
"\\end{pmatrix}.\n",
"\\end{align*}\n",
" \n",
"Plot the number of needed iterations for each method as a function of $\\varepsilon$, using $\\varepsilon = 10^{-1}, 10^{-2}, ..., 10^{-8}$.\n",
"\n",
"Explain the observed behavior with the help of the condition number (which you can calculate using $\\text{numpy.linalg.cond()}$). "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "3d94dc67b052b82dae655b7ff562c7d2",
"grade": true,
"grade_id": "cell-490cd96dc58cbff8",
"locked": false,
"points": 4,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"A = np.array([[0.2,  0.1,  1.0, 1.0,   0.0],\n",
"              [0.1,  4.0, -1.0, 1.0,  -1.0],\n",
"              [1.0, -1.0, 60.0, 0.0,  -2.0],\n",
"              [1.0,  1.0,  0.0, 8.0,   4.0],\n",
"              [0.0, -1.0, -2.0, 4.0, 700.0]])\n",
"b = np.array([1, 2, 3, 4, 5])\n",
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
},
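{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative aside (not the graded analysis), the condition number mentioned in the task can be obtained directly with $\\text{numpy.linalg.cond()}$ for the system above:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative aside: the condition number of the task-7 matrix. A large\n",
"# value means slow convergence for the gradient-based methods; here the\n",
"# diagonal entries alone span several orders of magnitude (0.2 to 700).\n",
"A7 = np.array([[0.2, 0.1, 1.0, 1.0, 0.0],\n",
"               [0.1, 4.0, -1.0, 1.0, -1.0],\n",
"               [1.0, -1.0, 60.0, 0.0, -2.0],\n",
"               [1.0, 1.0, 0.0, 8.0, 4.0],\n",
"               [0.0, -1.0, -2.0, 4.0, 700.0]])\n",
"print(\"cond(A) =\", np.linalg.cond(A7))"
]
},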
{
"cell_type": "markdown",
"metadata": {
"deletable": false,
"editable": false,
"nbgrader": {
"cell_type": "markdown",
"checksum": "4d7421993a0fef53e4f0f83ffae975b1",
"grade": false,
"grade_id": "cell-c1b8b533da043a26",
"locked": true,
"schema_version": 3,
"solution": false,
"task": false
}
},
"source": [
"### Task 8\n",
"\n",
"Try to get a better convergence behavior by pre-conditioning your matrix $A$. Instead of $A$ use\n",
"\n",
"$$ \\tilde{A} = C A C,$$\n",
"\n",
"where $C = \\sqrt{D^{-1}}$. If you do so, you will need to replace $\\mathbf{b}$ by \n",
"\n",
"$$\\mathbf{\\tilde{b}} = C \\mathbf{b}$$\n",
"\n",
"and the vector $\\mathbf{\\tilde{x}}$ returned by your function will have to be transformed back via\n",
"\n",
"$$\\mathbf{x} = C \\mathbf{\\tilde{x}}.$$ \n",
"\n",
"What is the effect of $C$ on the condition number and why?"
]
},
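{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the transformation described above (illustrative, not the graded solution; it assumes a symmetric matrix with positive diagonal, here the task-7 system, and uses the $\\text{CG}$ function from task 6):"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Illustrative sketch of the diagonal pre-conditioning described above;\n",
"# not the graded solution. Assumes A has a positive diagonal.\n",
"def precondition(A, b):\n",
"    # C = sqrt(D^{-1}) with D the diagonal of A.\n",
"    C = np.diag(1/np.sqrt(np.diag(A)))\n",
"    return C @ A @ C, C @ b, C\n",
"\n",
"A_pre = np.array([[0.2, 0.1, 1.0, 1.0, 0.0],\n",
"                  [0.1, 4.0, -1.0, 1.0, -1.0],\n",
"                  [1.0, -1.0, 60.0, 0.0, -2.0],\n",
"                  [1.0, 1.0, 0.0, 8.0, 4.0],\n",
"                  [0.0, -1.0, -2.0, 4.0, 700.0]])\n",
"b_pre = np.array([1., 2., 3., 4., 5.])\n",
"\n",
"A_t, b_t, C = precondition(A_pre, b_pre)\n",
"x_t, k = CG(A_t, b_t, 1e-8)\n",
"x = C @ x_t  # transform back: x = C x~\n",
"\n",
"# The pre-conditioned matrix has unit diagonal and a much smaller\n",
"# condition number than A itself.\n",
"print(np.linalg.cond(A_pre), np.linalg.cond(A_t))"
]
},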
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"deletable": false,
"nbgrader": {
"cell_type": "code",
"checksum": "213ac0f9b57d67bb90bbd75dd3808955",
"grade": true,
"grade_id": "cell-de3d1bd5ccfa8dfb",
"locked": false,
"points": 3,
"schema_version": 3,
"solution": true,
"task": false
}
},
"outputs": [],
"source": [
"# YOUR CODE HERE\n",
"raise NotImplementedError()"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.10"
}
},
"nbformat": 4,
"nbformat_minor": 4
}