{ "cells": [ { "cell_type": "markdown", "id": "ac27651a", "metadata": {}, "source": [ "# Exercise sheet\n", "\n", "Some general remarks about the exercises:\n", "* For your convenience functions from the lecture are included below. Feel free to reuse them without copying to the exercise solution box.\n", "* For each part of the exercise a solution box has been added, but you may insert additional boxes. Do not hesitate to add Markdown boxes for textual or LaTeX answers (via `Cell > Cell Type > Markdown`). But make sure to replace any part that says `YOUR CODE HERE` or `YOUR ANSWER HERE` and remove the `raise NotImplementedError()`.\n", "* Please make your code readable by humans (and not just by the Python interpreter): choose informative function and variable names and use consistent formatting. Feel free to check the [PEP 8 Style Guide for Python](https://www.python.org/dev/peps/pep-0008/) for the widely adopted coding conventions or [this guide for explanation](https://realpython.com/python-pep8/).\n", "* Make sure that the full notebook runs without errors before submitting your work. This you can do by selecting `Kernel > Restart & Run All` in the jupyter menu.\n", "* For some exercises test cases have been provided in a separate cell in the form of `assert` statements. When run, a successful test will give no output, whereas a failed test will display an error message.\n", "* Each sheet has 100 points worth of exercises. Note that only the grades of sheets number 2, 4, 6, 8 count towards the course examination. Submitting sheets 1, 3, 5, 7 & 9 is voluntary and their grades are just for feedback.\n", "\n", "Please fill in your name here:" ] }, { "cell_type": "code", "execution_count": 1, "id": "d8b13e80", "metadata": {}, "outputs": [], "source": [ "NAME = \"Kees van Kempen\"\n", "NAMES_OF_COLLABORATORS = \"\"" ] }, { "cell_type": "markdown", "id": "e21d4bc9", "metadata": {}, "source": [ "---" ] }, { "cell_type": "markdown", "id": "97ec4790", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "ba0b514985ed8d79af63814bda6ee328", "grade": false, "grade_id": "cell-70094cd1c9529a28", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "**Exercise sheet 3**\n", "\n", "Code from the lectures:" ] }, { "cell_type": "code", "execution_count": 2, "id": "948639fc", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "1e25d190936687e34fe3ac01d4dfce44", "grade": false, "grade_id": "cell-9a6bf70e64f84715", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "import numpy as np\n", "import matplotlib.pylab as plt\n", "\n", "rng = np.random.default_rng()\n", "%matplotlib inline\n", "\n", "def sample_acceptance_rejection(sample_z,accept_probability):\n", " while True:\n", " x = sample_z()\n", " if rng.random() < accept_probability(x):\n", " return x \n", " \n", "def estimate_expectation(sampler,n):\n", " '''Compute beste estimate of mean and 1-sigma error with n samples.'''\n", " samples = [sampler() for _ in range(n)]\n", " return np.mean(samples), np.std(samples)/np.sqrt(n-1)\n", "\n", "def estimate_expectation_one_pass(sampler,n):\n", " sample_mean = sample_square_dev = 0.0\n", " for k in range(1,n+1):\n", " delta = sampler() - sample_mean\n", " sample_mean += delta / k\n", " sample_square_dev += (k-1)*delta*delta/k \n", " return sample_mean, np.sqrt(sample_square_dev / (n*(n-1)))" ] }, { "cell_type": "markdown", 
"id": "bce418c0", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "90c9636c93064db3b008a2e9ec210117", "grade": false, "grade_id": "cell-8763c7e3277ef0ce", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "## Acceptance-rejection sampling\n", "\n", "**(35 points)**\n", "\n", "The goal of this exercise is to develop a fast sampling algorithm of the discrete random variable $X$ with probability mass function $$p_X(k) = \\frac{6}{\\pi^2} k^{-2}, \\qquad k=1,2,\\ldots$$\n", "\n", "__(a)__ Let $Z$ be the discrete random variable with $p_Z(k) = \\frac{1}{k} - \\frac{1}{k+1}$ for $k=1,2,\\ldots$. Write a function to compute the inverse CDF $F_Z^{-1}(u)$, such that you can use the inversion method to sample $Z$ efficiently. **(15 pts)**" ] }, { "cell_type": "code", "execution_count": 3, "id": "564c7f00", "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "da96fc27e757b00d6e55befc89f5afda", "grade": false, "grade_id": "cell-b0b3fcf6d9bf35a0", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "def f_inverse_Z(u):\n", " '''Compute the inverse CDF of Z, i.e. F_Z^{-1}(u) for 0 <= u <= 1.'''\n", " assert 0 <= u and u <= 1\n", " \n", " # we need to round to return a natural number (excluding zero, yes).\n", " return np.ceil(u/(1 - u)).astype(int)\n", "\n", "def random_Z():\n", " return int(f_inverse_Z(rng.random())) # make sure to return an integer" ] }, { "cell_type": "code", "execution_count": 4, "id": "2ca9d9ca", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "3eeabc3f6572ed217a002adefda34876", "grade": true, "grade_id": "cell-a28cb07f90510b94", "locked": true, "points": 15, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "assert f_inverse_Z(0.2)==1\n", "assert f_inverse_Z(0.51)==2\n", "assert f_inverse_Z(0.76)==4\n", "assert f_inverse_Z(0.991)==111" ] }, { "cell_type": "markdown", "id": "9a1b3dcd", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "78f978d46d5f742623cf174e88323ec9", "grade": false, "grade_id": "cell-576d3991eef6bf11", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "__(b)__ Implement a sampler for $X$ using acceptance-rejection based on the sampler of $Z$. For this you need to first determine a $c$ such that $p_X(k) \\leq c\\,p_Z(k)$ for all $k=1,2,\\ldots$, and then consider an acceptance probability $p_X(k) / (c p_Z(k))$. Verify the validity of your sampler numerically (e.g. for $k=1,\\ldots,10$). 
**(20 pts)**" ] }, { "cell_type": "code", "execution_count": 5, "id": "8e201507", "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "862ecc31c9ee75e459dafe5262ddc776", "grade": false, "grade_id": "cell-20fd55e7a4c4c531", "locked": false, "schema_version": 3, "solution": true, "task": false } }, "outputs": [ { "data": { "image/png": "iVBORw0KGgoAAAANSUhEUgAAAY4AAAEICAYAAABI7RO5AAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjUuMSwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy/YYfK9AAAACXBIWXMAAAsTAAALEwEAmpwYAAAdY0lEQVR4nO3dfZzNdf7/8cfLGKELylWicpEbUYhBKEPC1ipMSt1ascWkJL7f32qlzVarbb9xayuFrK1QbRYp0m6bXbNqU7loulAhF635KmQzMYZmeP3+mJnzNcyM+ejMfM6ced5vt7lxznwuXuczc+Z5Xp+L98fcHRERkdKqEnYBIiJSsSg4REQkEAWHiIgEouAQEZFAFBwiIhJI1bALKA9169b1Jk2ahF2GiEiFsXbt2m/dvV5R34vr4DCza4BrLrjgAtasWRN2OSIiFYaZfVXc9+J6V5W7L3X31Fq1aoVdiohI3Ijr4BARkehTcIiISCBxfYxDyl9OTg4ZGRkcPHgw7FJEpBSqV69O48aNSUxMLPU8Cg6JqoyMDE4//XSaNGmCmYVdjoiUwN3Zs2cPGRkZNG3atNTzxfWuKjO7xsxmZWZmhl1KpXHw4EHq1Kmj0BCpAMyMOnXqBN5DENfBobOqwqHQEKk4Tub9GtfBISIi0adjHCfwyoavwy4BgJSWDcMu4aREe/sF3Q4PPPAAp512Gr/4xS+iVsPVV1/NSy+9BMBLL73EnXfeGZXljh8/njfeeIOrr76aKVOmRGWZP8Zvf/tbJk6cGHYZJVqzZg1z587lySefDDzv448/TmpqKjVr1gT+7+dau3btKFdZdtLS0pg6dSqvv/56ua5XHYdIQG+88Qa1a9dm7969TJ8+PWrLfeaZZ1i3bl1MhAbkBUcsyM3NLfZ7SUlJJxUakBccBw4ciDwu+LnKianjCGDS0OvKdX0PzVtUruuLFw8//DBz587l3HPPpV69enTs2BGAzZs3M3r0aHbv3k3NmjX5wx/+QKtWrRg+fDhnnHEGa9as4ZtvvuHRRx9l8ODBfP311wwZMoTvv/+e3NxcZsyYweWXX06TJk1Ys2YNEyZMYPPmzbRv354+ffrwzTffMHjwYAYMGADAzTffzJAhQ7j22msjtbk799xzD3/5y18wM371q19FpsnKyqJLly7ce++9DBkyJDLPBx98wLhx48jOzqZGjRo899xztGzZksOHD/PLX/6SN998EzNj5MiRjBkzhtWrVzN27FiysrI45ZRT+Pvf/07NmjWZMGECaWlpHDp0iNGjR3P77beTlpbGpEmTqFOnDhs2bKBHjx5Mnz6diRMnkp2dTfv27WnTpg0vvvgiAwcOZPv27Rw8eJCxY8eSmpoKwGmnncbYsWN5/fXXqVGjBq+99hoNGjRg586djBo1ii1btgAwY8YMunXrxgsvvMCTTz7JDz/8QJcuXZg+fToJCQmFfobPP/88y5Yt4+DBg2RlZbF06VLGjBnDJ598Qm5uLg888AADBgwo9Ik7KyuryGmK2k7uzo4dO+jVqxd169ZlxYoVkZ9r3bp1eeyxx3j22WcBGDFiBOPGjWPbtm1cddVVXHbZZbz77rs0atSI1157jRo1ahSqfcGCBTz44IMkJCRQq1YtVq5cybZt2xg6dChZWVkAPPXUU3Tr1o20tDR+/etf06BBA9LT00lJSeHiiy/miSeeIDs7m1dffZXmzZszfPhwqlevzvr169m5cyePPfYY/fv3L7Te4l7/+vXr+fnPf84PP/zAkSNHWLRoES1atPhxbzJ3j/uvjh07+sla9MWOyFebTl3L9evodVcUn332WaHHR7+GaHydyJo1a/yiiy7yrKwsz8zM9ObNm/uUKVPc3f2KK67wjRs3urv7e++957169XJ392HDhvngwYP98OHDvn79em/evLm7u0+dOtUnT57s7u65ubn+/fffu7v7+eef77t37/atW7d6mzZtIutOS0vzAQMGuLv73r17vUmTJp6Tk1OovoULF/qVV17pubm5/s033/i5557rO3bkva5TTz21yNeUmZkZWc5bb73lKSkp7u4+ffp0T0lJiXxvz549fujQIW/atKl/8MEHheZ95pln/De/+Y27ux88eNA7duzoW7Zs8RUrVvgpp5zimzdv9tzcXL/yyit9wYIFRdazZ88ed3c/cOCAt2nTxr/99lt3dwd8yZIl7u4+fvz4yHpuuOEG//3vfx/Zfnv37vXPPvvM+/fv7z/88IO7u99xxx0+Z86c417zc889540aNYqs89577/V58+a5u/t3333nLVq08P379/uKFSv8pz/9aYnTFLWdjv45Fih4XPA7tH//ft+3b5+3bt3a161b51u3bvWEhAT/8MMP3d39+uuvj6zvaBdddJFnZGRE6nB3z8rK8uzsbHd337hxoxf8TVqxYoXXqlXLd+zY4QcPHvRzzjnHJ02a5O7ujz/+uI8dO9bd835H+/Xr54cPH/aNGzd6o0aNPDs7u1Sv/6677vIXXnjB3d0PHTrkBw4cOK7mY9+37u7AGi/mb2pcdxxHD3IYDeoAYt/bb7/NoEGDIvutCz7t79+/n3fffZfrr78+Mu2hQ4ci/x84cCBVqlShdevW7Ny5E4BOnTpx6623kpOTw8CBA2nfvn2J605OTmb06NHs2rWLV155heuuu46qVQu/xd555x1uuukmEhISaNCgAcnJyaxevbpQV3KszMxMhg0bxqZNmzAzcnJyAFi+fDmjRo2KrOOss87ik08+oWHDhnTq1AmAM844A4C//e1vfPzxxyxcuDCyzE2bNlGtWjU6d+5Ms2bNALjpppt45513GDx48HF1PPnkkyxevBiA7du3s2nTJurUqUO1atUin347duzIW2+9BcA//vEP5s6dCxD59D1v3jzWrl0bqS87O5v69esX+br79OnDWWedFal/yZIlTJ06Fcg77fvf//53oemLm6ao7VSSd955h0GDBnHqqacCkJKSwttvv821115L06ZNI78HHTt2ZNu2bcfN3717d4YPH84NN9xASkoKkHdh7F133UV6ejoJCQls3LgxMn2nTp1o2DDv2F3z5s3p27cvABdffDErVqyITHfDDTdQpUoVWrRoQbNmzfjiiy9K9fq7du3Kww8/TEZGBikpKT++2yDOd1W5+1JgaVJS0siwa5HyU9TphUeOHKF27dqkp
6cXOc8pp5wS+X/ehy3o0aMHK1euZNmyZQwdOpTx48dzyy23lLjuoUOH8uKLL/Lyyy9HdnUcrWDZQdx///306tWLxYsXs23bNnr27BlZ1rGvtajnCp6fNm0a/fr1K/R8WlracdMXNX9aWhrLly9n1apV1KxZk549e0bO/U9MTIzMk5CQUOIxCXdn2LBhPPLII4WeX7x4MQ8++CAAs2fPBoj84S6Yb9GiRbRs2bLQfAUhX9I0xW2TkmosztG/JwkJCWRnZx83zcyZM3n//fdZtmwZ7du3Jz09nWnTptGgQQM++ugjjhw5QvXq1YtcZpUqVSKPq1SpUmhbnujnVNzrv/DCC+nSpQvLli2jX79+zJ49myuuuKKkTXBCOjgucaVHjx4sXryY7Oxs9u3bx9KlS4G8T95NmzZlwYIFQN6b7KOPPipxWV999RX169dn5MiR3Hbbbaxbt67Q908//XT27dtX6Lnhw4fz+OOPA9CmTZsi65s/fz6HDx9m9+7drFy5ks6dO5dYR2ZmJo0aNQLy9v0X6Nu3LzNnzoz8cfnPf/5Dq1at2LFjB6tXrwZg37595Obm0q9fP2bMmBHpVjZu3BjZ3/7BBx+wdetWjhw5wvz587nsssuAvEAomD4zM5MzzzyTmjVr8sUXX/Dee++VWDNA7969mTFjBgCHDx/m+++/p3fv3ixcuJBdu3ZFav7qq68YNGgQ6enppKenk5SUdNyy+vXrx7Rp0yJ/1D/88MNST1PUdoKif36Q9zN69dVXOXDgAFlZWSxevJjLL7/8hK+3wObNm+nSpQsPPfQQdevWZfv27WRmZtKwYUOqVKnCvHnzOHz4cKmXV2DBggUcOXKEzZs3s2XLluMCorjXv2XLFpo1a8bdd9/Ntddey8cffxx43ceK645DwlfepxF36NCBIUOG0L59e84///xCb/gXX3yRO+64g8mTJ5OTk8ONN95Iu3btil1WWloaU6ZMITExkdNOOy2y26VAnTp16N69OxdddBFXXXUVU6ZMoUGDBlx44YUMHDiwyGUOGjSIVatW0a5dO8yMRx99lLPPPrvE13TPPfcwbNgwHnvssUKfFEeMGMHGjRtp27YtiYmJjBw5krvuuov58+czZsyYyMH05cuXM2LECLZt20aHDh1wd+rVq8err74KQNeuXZkwYQKffPIJPXr0YNCgQQCkpqbStm1bOnTowLPPPsvMmTNp27YtLVu25NJLLy2xZoAnnniC1NRU/vjHP5KQkMCMGTPo2rUrkydPpm/fvhw5coTExESefvppzj///BKXdf/99zNu3Djatm2Lu9OkSZPIKagFn7yLm6a47ZSamspVV11Fw4YNC+0S6tChA8OHD48E+ogRI7jkkkuK3C1VlPHjx7Np0ybcnd69e9OuXTvuvPNOrrvuOhYsWECvXr0KdVOl1bJlS5KTk9m5cyczZ84s1LWU9Prnz5/PCy+8QGJiImeffTaTJk0KvO5j2cm0zhVNUlKSn+yNnHQdRzCff/45F154YdhlhObAgQNcfPHFrFu3joowYkFY1wFEy6JFi1iyZAlz5swJu5QyNXz4cPr371/ksadoKOp9a2Zr3f349g/tqhKJmuXLl9OqVSvGjBlTIUKjoluyZAn33Xcft99+e9ilVDrqOE5AHUcwlb3jEKmI1HGIiEiZiuvg0LDqIiLRF9fB4RpWXUQk6uI6OEREJPoUHBJ3jh61Ni0t7bjB4Mra888/z44dOyKPR4wYwWeffRZ4OWHULlIaCg6JO9Ee7rwoJQ2rcWxwzJ49m9atW5dpPSLlScEhcefo4c7Hjx/P/v37GTx4MK1ateLmm2+ODMmwdu1akpOT6dixI/369ePrr/NOvU5PT+fSSy+lbdu2DBo0iO+++w6Anj17MnHiRJKTk3niiSeKnH/hwoWsWbOGm2++mfbt25OdnU3Pnj0pOB38r3/9Kx06dKBdu3b07t0byBvyo1u3blxyySV069aNDRs2hLDVREpPQ45I3Pnd737Hp59+Snp6OmlpaZF7Epxzzjl0796df/3rX3Tp0oUxY8bw2muvUa9ePebPn899993Hs88+yy233MK0adNITk5m0qRJPPjgg5Hxp/bu3cs///lPcnJySE5OLnL+p556iqlTpx435tLu3bsZOXIkK1eupGnTppExk1q1asXKlSupWrUqy5cvZ+LEiSxapJGYJXYpOCSqhg0bFhnSvDykpaWdcJrOnTvTuHFjANq3b8+2bduoXbs2n376KX369AHyBuFr2LAhmZmZ7N27l+TkZCDv9Rw9FHvBDZY2bNhQ5Pwlee+99+jRowdNmzYF/m947+KGTReJVQoOiXvHDoWdm5uLu9OmTRtWrVpVaNoTXfNTMDhdcfOXpLjhvYsbNl0kVik4JKrmzJkT+pAjxQ2XfbSWLVuye/duVq1aRdeuXcnJyWHjxo20adOGM888k7fffpvLL7+cefPmRbqP0s5f3Pq7du3K6NGj2bp1a2RX1VlnnVXssOkisSqugyPadwCUiuHo4c5r1KhBgwYNjpumWrVqLFy4kLvvvpvMzExyc3MZN24cbdq0Yc6cOYwaNYoDBw7QrFkznnvuuUDzDx8+nFGjRlGjRo1CHUm9evWYNWsWKSkpHDlyhPr16/PWW28VO2y6SKzSIIcnoEEOg9EghyIVjwY5FBGRMqXgEBGRQBQcEnWVYfenSLw4mfergkOiqnr16uzZs0fhIVIBuDt79uw57v7lJxLXZ1VJ+WvcuDEZGRns3r077FJEpBSqV68euUC2tBQcElWJiYmRK6NFJD5pV5WIiASi4BARkUAUHCIiEoiCQ0REAonr4DCza8xs1olGPBURkdKL6+Bw96XunlqrVq2wSxERiRtxHRwiIhJ9Cg4REQlEwSEiIoEoOEREJBAFh4iIBKLgEBGRQBQcIiISiIJDREQCUXCIiEggCg4REQlEwSEiIoEoOEREJBAFh4iIBKLgEBGRQBQcIiISiIJDREQCievg0B0ARUSiL66DQ3cAFBGJvrgODhERiT4Fh4iIBKLgEBGRQBQcIiISiIJDREQCUXCIiEggCg4REQlEwSEiIoFUDbsAKZ1XNnwddgkApLRsGHYJIhIydRwiIhKIOo4KaNLQ68p1fQ/NW1Su6xOR2KaOQ0REAlHHUQGpAxCRMKnjEBGRQBQcIiISiIJDREQCUXCIiEggCg4REQlEwSEiIoEoOEREJBAFh4iIBKLgEBGRQBQcIiISiIJDREQCUXCIiEggCg4REQmkwgWHmTUzsz+a2cKwaxERqYzKNTjM7Fkz22Vmnx7z/E/MbIOZfWlmE0pahrtvcffbyrZSEREpTnnfj+N54ClgbsETZpYAPA30ATKA1Wa2BEgAHjlm/lvdfVf5lCoiIkUp1+Bw95Vm1uSYpzsDX7r7FgAzexkY4O6PAP1Pdl1mlgqkApx33nknuxgRETlGLBzjaARsP+pxRv5zRTKzOmY2E7jEzO4tbjp3n+XuSe6eVK9evehVKyJSycXCrWOtiOe8uIndfQ8wquzKERGRksRCx5EBnHvU48bAjpBqERGR
E4iF4FgNtDCzpmZWDbgRWBJyTSIiUozAwWFmp+afCRWYmf0JWAW0NLMMM7vN3XOBu4A3gc+BP7v7+pNZfhHru8bMZmVmZkZjcSIiQimOcZhZFfK6gJuBTsAh4BQz2w28Acxy902lWZm731TM82/kLyuq3H0psDQpKWlktJctIlJZlabjWAE0B+4Fznb3c929PnA58B7wOzP7WRnWKCIiMaQ0Z1Vd6e45xz7p7v8BFgGLzCwx6pWJiEhMOmHHURAaZjb52O8VHOsoKlhERCQ+BTk43sjMIscozKw+sDz6JUWPDo6LiERfkOC4HUg1s85m1gn4BzC1bMqKDndf6u6ptWrVCrsUEZG4UZqzquYC64APgdHAS0AuMNDdvyzb8kREJNaUpuOYkz/dreSFRhPgO+BnZja47EoTEZFYdMKOw93/Dvy94LGZVQVaA+2ASwHdUKkSeWXD12GXQErLhmGXIFKplWZXlbl7ZNDB/Cu9P87/mlfUNCIiEr9KdQGgmY0xs0I3tTCzamZ2hZnNAYaVTXk/js6qEhGJvtJcAPgT8o5v/MnMmgJ7gerk3aHvb8Dv3T29rAr8MTTkSNmZNPS6cl3fQ/MWlev6RKR4pTnGcRCYDkzPv0K8LpDt7nvLuDYREYlBgW7k5O45ZnaLu/9PWRUkFYM6AJHK62Tux/GlmT2ef4zjZ2amvyAiIpXIyQTHYvI6lR35/+paDhGRSuRkguNPwF+A7kBf4MyoViQiIjEtcHC4+xB3X+buG4D/Ap6OflnRodNxRUSir9TBYWZXmdn7ZrbBzP5sZl3dfScwogzr+1E0yKGISPQF6TimA/9N3jAjs4ApZnaTu2eVSWUiIhKTgpyOu9Pd/5X//+Vmtgp4n7xjHiIiUkkE6Ti2mdlkM6uW/zgH2FcGNYmISAwLEhwOpADbzewd4EsgzcxalEllIiISk0q9q8rdbwIws+rAReQNq94OmG1mzdz93LIpUUREYkmgIUcgMnbVmvwvERGpZE7mAsAKQ9dxiIhEX1wHh67jEBGJvrgODhERiT4Fh4iIBKLgEBGRQBQcIiISiIJDREQCUXCIiEggCg4REQkk8JXjImF7ZcPXYZcAQErLhmGXIBKKuO44dOW4iEj0xXXH4e5LgaVJSUkjw65FysakodeV6/oemreoXNcnEoviuuMQEZHoi+uOQ+KfOgCR8qeOQ0REAlFwiIhIIAoOEREJRMEhIiKBKDhERCQQBYeIiASi4BARkUAUHCIiEoiCQ0REAonr4NAghyIi0RfXweHuS909tVatWmGXIiISN+I6OEREJPoUHCIiEoiCQ0REAlFwiIhIILofh8hJ0r3PpbJSxyEiIoGo4xCJAt37XCoTdRwiIhKIOg6RKFAHIJWJOg4REQlEwSEiIoEoOEREJBAFh4iIBKLgEBGRQBQcIiISSFwHh27kJCISfXEdHLqRk4hI9MV1cIiISPQpOEREJBAFh4iIBKKxqkQqON0XRMqbOg4REQlEHYdIHNF9QaQ8qOMQEZFA1HGIxBF1AFIe1HGIiEggCg4REQlEwSEiIoEoOEREJBAFh4iIBKLgEBGRQBQcIiISiIJDREQCUXCIiEggunJcRKJCo/RWHuo4REQkEHUcIhJ1GqU3vqnjEBGRQNRxiEjUqQOIb+o4REQkEAWHiIgEouAQEZFAFBwiIhJIhQsOMxtoZn8ws9fMrG/Y9YiIVDblGhxm9qyZ7TKzT495/idmtsHMvjSzCSUtw91fdfeRwHBgSBmWKyIiRSjv03GfB54C5hY8YWYJwNNAHyADWG1mS4AE4JFj5r/V3Xfl//9X+fOJiEg5KtfgcPeVZtbkmKc7A1+6+xYAM3sZGODujwD9j12GmRnwO+Av7r6uuHWZWSqQCnDeeedF5wWISMyLhTGz4n28rFi4ALARsP2oxxlAlxKmHwNcCdQyswvcfWZRE7n7LGAWQFJSkkepVhGpQDT0SdmIheCwIp4r9g+9uz8JPFl25YiISEliITgygHOPetwY2BFSLSISRypLB1DeYuF03NVACzNrambVgBuBJSHXJCIixSjv03H/BKwCWppZhpnd5u65wF3Am8DnwJ/dfX2U1neNmc3KzMyMxuJERITyP6vqpmKefwN4owzWtxRYmpSUNDLayxYRqaxiYVeViIhUIAoOEREJRMEhIiKBxMLpuGXGzK4BrrngggvCLkVEKpFYuHodyu4K9rjuONx9qbun1qpVK+xSRETiRlx3HCIiYYvHYU/iuuMQEZHoU8chIlKG4nHYE3UcIiISSFwHh4YcERGJvrgODp1VJSISfXEdHCIiEn0KDhERCUTBISIigSg4REQkEAWHiIgEYu4edg1lzsx2A1+FXUcMqAt8G3YRMULbojBtj8K0PeB8d69X1DcqRXBIHjNb4+5JYdcRC7QtCtP2KEzbo2TaVSUiIoEoOEREJBAFR+UyK+wCYoi2RWHaHoVpe5RAxzhERCQQdRwiIhKIgkNERAJRcMQ5MzvXzFaY2edmtt7MxoZdUywwswQz+9DMXg+7ljCZWW0zW2hmX+T/jnQNu6Ywmdl/5b9PPjWzP5lZ9bBrikUKjviXC/w/d78QuBQYbWatQ64pFowFPg+7iBjwBPBXd28FtKMSbxMzawTcDSS5+0VAAnBjuFXFJgVHnHP3r919Xf7/95H3h6FRuFWFy8waAz8FZoddS5jM7AygB/BHAHf/wd33hlpU+KoCNcysKlAT2BFyPTFJwVGJmFkT4BLg/ZBLCdvjwD3AkZDrCFszYDfwXP5uu9lmdmrYRYXF3f8XmAr8G/gayHT3v4VbVWxScFQSZnYasAgY5+7fh11PWMysP7DL3deGXUsMqAp0AGa4+yVAFjAh3JLCY2ZnAgOApsA5wKlm9rNwq4pNCo5KwMwSyQuNF939lbDrCVl34Foz2wa8DFxhZi+EW1JoMoAMdy/oQBeSFySV1ZXAVnff7e45wCtAt5BrikkKjjhnZkbePuzP3f2xsOsJm7vf6+6N3b0JeQc+/+HulfJTpbt/A2w3s5b5T/UGPguxpLD9G7jUzGrmv296U4lPFihJ1bALkDLXHRgKfGJm6fnPTXT3N8IrSWLIGOBFM6sGbAF+HnI9oXH3981sIbCOvLMRP0RDjxRJQ46IiEgg2lUlIiKBKDhERCQQBYeIiASi4BARkUAUHCIiEoiCQ0REAlFwiIhIIAoOkRCY2ZVmNi/sOkROhoJDJBztyLsyWaTCUXCIhKMd8KGZnWJmz5vZb/PHRxKJeRqrSiQc7YBdwJvAbHevrCP0SgWksapEyln+MPffAl8Bt7v7qpBLEglEu6pEyl9rYDV5I7AeDrkWkcAUHCLlrx3wLnn3A3nOzBqEXI9IIAoOkfLXDvjU3TcCvwT+nL/7SqRC0DEOEREJRB2HiIgEouAQEZFAFBwiIhKIgkNERAJRcIiISCAKDhERCUTBISIigfx/vf8Mt7nTGkQAAAAASUVORK5CYII=\n", "text/plain": [ "
" ] }, "metadata": { "needs_background": "light" }, "output_type": "display_data" } ], "source": [ "def accept_probability_X(k):\n", " '''Return the appropriate acceptance probability on the event Z=k.'''\n", " \n", " # The resulting c is found by doing algebraic things, and seeing that\n", " # 6/pi^2*(1 + 1/k) <= c should hold for all allowed k. For k = 1, c is\n", " # the largest, so: tada.\n", " c = 12*np.pi**(-2)\n", " \n", " # But we do not need this variable, we only need the acceptance probability.\n", " return .5*(1 + 1/k)\n", " \n", "def random_X():\n", " return sample_acceptance_rejection(random_Z,accept_probability_X)\n", "\n", "# Verify numerically\n", "samples = [random_X() for _ in range(1000000)]\n", "k = np.array(range(1,11))\n", "\n", "# The theoretical densities to compare the densities from the\n", "# acceptance-rejection densities to:\n", "p_X = lambda k: 6/(np.pi*k)**2\n", "\n", "plt.figure()\n", "plt.hist(samples, k - .5, density=True, color=\"lightblue\", label=\"density of acceptance-rejection samples\")\n", "plt.scatter(k[:-1], [p_X(i) for i in k[:-1]], s=800, marker=\"_\", color=\"black\", label=\"theoretical\")\n", "plt.yscale(\"log\")\n", "plt.ylabel(\"$p_X(k)$\")\n", "plt.xlabel(\"$k$\")\n", "plt.legend()\n", "plt.show()\n", "# TODO: The histogram overshoots the theoretical value every time." ] }, { "cell_type": "code", "execution_count": 6, "id": "1b0e1d47", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "641adac2c6462be641ae2d45a4d80e7d", "grade": true, "grade_id": "cell-04bad69f7cc2ad18", "locked": true, "points": 20, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "from nose.tools import assert_almost_equal\n", "assert min([random_X() for _ in range(10000)]) >= 1\n", "assert_almost_equal([random_X() for _ in range(10000)].count(1),6079,delta=400)\n", "assert_almost_equal([random_X() for _ in range(10000)].count(3),675,delta=75)" ] }, { "cell_type": "markdown", "id": "d010b3c8", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "ec5e4c478f45388c302b82613e45a0c0", "grade": false, "grade_id": "cell-f831d5b44932cf2f", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "## Monte Carlo integration & Importance sampling\n", "\n", "**(30 Points)**\n", "\n", "Consider the integral \n", "\n", "$$\n", "I = \\int_0^1 \\sin(\\pi x(1-x))\\mathrm{d}x = \\mathbb{E}[X], \\quad X=g(U), \\quad g(U)=\\sin(\\pi U(1-U)),\n", "$$ \n", "\n", "where $U$ is a uniform random variable in $(0,1)$.\n", "\n", "__(a)__ Use Monte Carlo integration based on sampling $U$ to estimate $I$ with $1\\sigma$ error at most $0.001$. How many samples do you need? (It is not necessary to automate this: trial and error is sufficient.) **(10 pts)**" ] }, { "cell_type": "code", "execution_count": 7, "id": "bbf5c843", "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "febbb705202776d2af26bd95c4c08625", "grade": true, "grade_id": "cell-47eab0537d21690e", "locked": false, "points": 10, "schema_version": 3, "solution": true, "task": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "After trial and error, I found that 43402 trials were sufficient to arrive at an 1-sigma error of around 0.001,\n", "with mean 0.486091 and standard deviation sigma 0.001000. 
Wolfram Alpha tells me it should be approximately I =\n", "0.487595, so I am happy.\n", " \n" ] } ], "source": [ "def sample_X():\n", " x = rng.random()\n", " return np.sin(np.pi*x*(1 - x))\n", "\n", "n = 43402\n", "sample_mean, sample_stdev = estimate_expectation(sample_X, n)\n", "print(\"\"\"\n", "After trial and error, I found that {} trials were sufficient to arrive at a 1-sigma error of around 0.001,\n", "with mean {:.6f} and standard error {:.6f}. Wolfram Alpha tells me it should be approximately I =\n", "{}, so I am happy.\n", " \"\"\".format(n, sample_mean, sample_stdev, 0.487595))" ] }, { "cell_type": "markdown", "id": "26b8ff78", "metadata": {}, "source": [ "__(b)__ Choose a random variable $Z$ on $(0,1)$ whose density resembles the integrand of $I$ and which you know how to sample efficiently (by the inversion method, acceptance-rejection, or a built-in Python function). Estimate $I$ again using importance sampling, i.e. $I = \\mathbb{E}[X']$ where $X' = g(Z) f_U(Z)/f_Z(Z)$, with an error of at most 0.001. How many samples did you need this time? **(20 pts)**" ] }, { "cell_type": "markdown", "id": "605f8777", "metadata": {}, "source": [ "After hours of thinking, I tried $$f_Z(x) = \\sec^2(\\pi{}x) = \\frac{1}{\\cos^2(\\pi{}x)},$$\n", "with cumulative distribution function $$F_Z(x) = \\int_0^x\\sec^2(\\pi{}t)dt = \\left[ \\frac{\\tan (\\pi{}t)}{\\pi} \\right]_{t=0}^x = \\frac{\\tan (\\pi{}x)}{\\pi},$$ so that $Z$ can be sampled by the inversion method via $$F_Z^{-1}(p) = \\frac{\\arctan(\\pi{}p)}{\\pi}.$$ This density gives $$X' := g(Z)f_U(Z)/f_Z(Z) = \\sin(\\pi{}Z(1-Z))\\cos^2(\\pi{}Z).$$\n", "\n", "Unluckily, this choice does not work: $F_Z^{-1}$ maps $(0,1)$ onto $(0, \\arctan(\\pi)/\\pi) \\approx (0, 0.4)$, so $f_Z$ is a density on that interval only and vanishes on the rest of $(0,1)$. The importance sampling identity $I = \\mathbb{E}[X']$ therefore fails: the estimate below is really $\\int_0^{\\arctan(\\pi)/\\pi} \\sin(\\pi x(1-x))\\,\\mathrm{d}x \\approx 0.176$, well short of $I \\approx 0.4876$." ] }, { "cell_type": "code", "execution_count": 8, "id": "5793420a", "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "d546b8c253155efa96a399ec15c24fb8", "grade": true, "grade_id": "cell-467d7240beab5b29", "locked": false, "points": 20, "schema_version": 3, "solution": true, "task": false } }, "outputs": [ { "name": "stdout", "output_type": "stream", "text": [ "\n", "After trial and error, I found that 43402 trials were sufficient to arrive at a 1-sigma error well below 0.001,\n", "with mean 0.175938 and standard error 0.000418. Wolfram Alpha tells me it should be approximately I =\n", "0.487595, so this estimate is clearly off (see the explanation above).\n", " \n" ] } ], "source": [ "def sample_nice_Z():\n", " '''Sample Z by the inversion method (note that its support is only (0, arctan(pi)/pi)).'''\n", " p = rng.random()\n", " return np.arctan(p*np.pi)/np.pi\n", " \n", "def sample_X_prime():\n", " '''Sample from X'.'''\n", " z = sample_nice_Z()\n", " return np.sin(np.pi*z*(1 - z))*np.cos(np.pi*z)**2\n", " \n", "n = 43402\n", "sample_mean, sample_stdev = estimate_expectation(sample_X_prime, n)\n", "print(\"\"\"\n", "After trial and error, I found that {} trials were sufficient to arrive at a 1-sigma error well below 0.001,\n", "with mean {:.6f} and standard error {:.6f}. 
Wolfram Alpha tells me it should be approximately I =\n", "{}, so this estimate is clearly off (see the explanation above).\n", " \"\"\".format(n, sample_mean, sample_stdev, 0.487595))" ] }, { "cell_type": "markdown", "id": "e2a5b934", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "8fd70da242e4db50dd7a7db9851e839a", "grade": false, "grade_id": "cell-59ae6e6f3f476d2a", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "## Direct sampling of Dyck paths\n", "\n", "**(35 points)**\n", "\n", "Direct sampling of random variables in high dimensions requires some luck and/or ingenuity. Here is an example of a probability distribution on $\\mathbb{Z}^{2n+1}$ that features prominently in the combinatorial literature and can be sampled directly in an efficient manner. A sequence $\\mathbf{x}\\equiv(x_0,x_1,\\ldots,x_{2n})\\in\\mathbb{Z}^{2n+1}$ is said to be a **Dyck path** if $x_0=x_{2n}=0$, $x_i \\geq 0$ and $|x_{i}-x_{i-1}|=1$ for all $i=1,\\ldots,2n$. Dyck paths are counted by the Catalan numbers $C(n) = \\frac{1}{n+1}\\binom{2n}{n}$. Let $\\mathbf{X}=(X_0,\\ldots,X_{2n})$ be a **uniform Dyck path**, i.e. a random variable with probability mass function $p_{\\mathbf{X}}(\\mathbf{x}) = 1/C(n)$ for every Dyck path $\\mathbf{x}$. Here is one way to sample $\\mathbf{X}$." ] }, { "cell_type": "code", "execution_count": null, "id": "d11a0889", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "code", "checksum": "0ea1008043f2711810b9da3c2cf96a02", "grade": false, "grade_id": "cell-3598f5c8aabad8b9", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "outputs": [], "source": [ "def random_dyck_path(n):\n", " '''Returns a uniform Dyck path of length 2n as an array [x_0, x_1, ..., x_{2n}] of length 2n+1.'''\n", " # produce a (2n+1)-step random walk from 0 to -1\n", " increments = [1]*n +[-1]*(n+1)\n", " rng.shuffle(increments)\n", " unconstrained_walk = np.cumsum(increments)\n", " # determine the first time it reaches its minimum\n", " argmin = np.argmin(unconstrained_walk)\n", " # cyclically permute the increments to ensure walk stays non-negative until last step\n", " rotated_increments = np.roll(increments,-argmin)\n", " # turn off the superfluous -1 step\n", " rotated_increments[0] = 0\n", " # produce dyck path from increments\n", " walk = np.cumsum(rotated_increments)\n", " return walk\n", "\n", "\n", "plt.plot(random_dyck_path(50))\n", "plt.show()" ] }, { "cell_type": "markdown", "id": "3e6ad6ad", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "e88c23c1f0a6e2d23e0cd8ab4e2a20fd", "grade": false, "grade_id": "cell-8c2933e84038cd5e", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "__(a)__ Let $H$ be the (maximal) height of $X$, i.e. $H=\\max_i X_i$. Estimate the expected height $\\mathbb{E}[H]$ for $n = 2^5, 2^6, \\ldots, 2^{11}$ (including error bars). Determine the growth $\\mathbb{E}[H] \\approx a\\,n^\\beta$ via an appropriate fit. *Hint*: use the `scipy.optimize.curve_fit` function with the option `sigma = ...` to incorporate the standard errors on $\\mathbb{E}[H]$ in the fit. Note that when you supply the errors appropriately, fitting on linear or logarithmic scale should result in the same answer. 
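For instance, such a fit could be set up along the following lines (a sketch only; the arrays `heights` and `height_errs` are placeholder names for your own estimates of $\\mathbb{E}[H]$ and their standard errors at the values of $n$ in `n_values`):\n", "\n", "```python\n", "from scipy.optimize import curve_fit\n", "\n", "def power_law(n, a, beta):\n", "    return a * n**beta\n", "\n", "# heights and height_errs are placeholders for your estimates and their errors\n", "fit_params, fit_cov = curve_fit(power_law, n_values, heights, sigma=height_errs, absolute_sigma=True)\n", "a_fit, beta_fit = fit_params\n", "```\n", "\n", "Here `absolute_sigma=True` makes `curve_fit` treat the supplied errors as absolute $1\\sigma$ errors rather than as relative weights.\n", "\n", "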
**(25 pts)**" ] }, { "cell_type": "code", "execution_count": null, "id": "2b79f35a", "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "54439c27bcec7884b6168c5bf92c27fd", "grade": true, "grade_id": "cell-310664cb8eaaa831", "locked": false, "points": 10, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# Collect height estimates\n", "n_values = [2**k for k in range(5,11+1)]\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "code", "execution_count": null, "id": "77d3723b", "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "960be7ade957300785f4b69233a8f462", "grade": true, "grade_id": "cell-0ca47ced5547d67c", "locked": false, "points": 10, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "from scipy.optimize import curve_fit\n", "\n", "# Fitting\n", "# YOUR CODE HERE\n", "raise NotImplementedError()\n", "print(\"Fit parameters: a = {}, beta = {}\".format(a_fit,beta_fit))" ] }, { "cell_type": "code", "execution_count": null, "id": "edd467e4", "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "421cc39b43f6f9ebc46886d7ca0ab1e4", "grade": true, "grade_id": "cell-7ebce3629edd06a2", "locked": false, "points": 5, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# Plotting\n", "# YOUR CODE HERE\n", "raise NotImplementedError()" ] }, { "cell_type": "markdown", "id": "142aa5cb", "metadata": { "deletable": false, "editable": false, "nbgrader": { "cell_type": "markdown", "checksum": "1df181e550da9f9c1edff06f45d51984", "grade": false, "grade_id": "cell-2dcc8f8e438aad86", "locked": true, "schema_version": 3, "solution": false, "task": false } }, "source": [ "__(b)__ Produce a histogram of the height $H / \\sqrt{n}$ for $n = 2^5, 2^6, \\ldots, 2^{11}$ and $3000$ samples each and demonstrate with a plot that it appears to converge in distribution as $n\\to\\infty$. *Hint*: you could call `plt.hist(...,density=True,histtype='step')` for each $n$ to plot the densities on top of each other. **(10 pts)**" ] }, { "cell_type": "code", "execution_count": null, "id": "eed4339c", "metadata": { "deletable": false, "nbgrader": { "cell_type": "code", "checksum": "cb48ed2a17571debd5f778f87f34a2ab", "grade": true, "grade_id": "cell-b2bb42d9970a7271", "locked": false, "points": 10, "schema_version": 3, "solution": true, "task": false } }, "outputs": [], "source": [ "# YOUR CODE HERE\n", "raise NotImplementedError()" ] } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.9.12" } }, "nbformat": 4, "nbformat_minor": 5 }