{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Optional Lab: Multiple Variable Linear Regression\n",
"\n",
"In this lab, you will extend the data structures and previously developed routines to support multiple features. Several routines are updated making the lab appear lengthy, but it makes minor adjustments to previous routines making it quick to review.\n",
"# Outline\n",
"- [  1.1 Goals](#toc_15456_1.1)\n",
"- [  1.2 Tools](#toc_15456_1.2)\n",
"- [  1.3 Notation](#toc_15456_1.3)\n",
"- [2 Problem Statement](#toc_15456_2)\n",
"- [  2.1 Matrix X containing our examples](#toc_15456_2.1)\n",
"- [  2.2 Parameter vector w, b](#toc_15456_2.2)\n",
"- [3 Model Prediction With Multiple Variables](#toc_15456_3)\n",
"- [  3.1 Single Prediction element by element](#toc_15456_3.1)\n",
"- [  3.2 Single Prediction, vector](#toc_15456_3.2)\n",
"- [4 Compute Cost With Multiple Variables](#toc_15456_4)\n",
"- [5 Gradient Descent With Multiple Variables](#toc_15456_5)\n",
"- [  5.1 Compute Gradient with Multiple Variables](#toc_15456_5.1)\n",
"- [  5.2 Gradient Descent With Multiple Variables](#toc_15456_5.2)\n",
"- [6 Congratulations](#toc_15456_6)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_1.1\"></a>\n",
"## 1.1 Goals\n",
"- Extend our regression model routines to support multiple features\n",
" - Extend data structures to support multiple features\n",
" - Rewrite prediction, cost and gradient routines to support multiple features\n",
" - Utilize NumPy `np.dot` to vectorize their implementations for speed and simplicity"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_1.2\"></a>\n",
"## 1.2 Tools\n",
"In this lab, we will make use of: \n",
"- NumPy, a popular library for scientific computing\n",
"- Matplotlib, a popular library for plotting data"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"import copy, math\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"plt.style.use('./deeplearning.mplstyle')\n",
"np.set_printoptions(precision=2) # reduced display precision on numpy arrays"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_1.3\"></a>\n",
"## 1.3 Notation\n",
"Here is a summary of some of the notation you will encounter, updated for multiple features. \n",
"\n",
"|General <img width=70/> <br /> Notation <img width=70/> | Description<img width=350/>| Python (if applicable) |\n",
"|: ------------|: ------------------------------------------------------------||\n",
"| $a$ | scalar, non bold ||\n",
"| $\\mathbf{a}$ | vector, bold ||\n",
"| $\\mathbf{A}$ | matrix, bold capital ||\n",
"| **Regression** | | | |\n",
"| $\\mathbf{X}$ | training example matrix | `X_train` | \n",
"| $\\mathbf{y}$ | training example targets | `y_train` \n",
"| $\\mathbf{x}^{(i)}$, $y^{(i)}$ | $i_{th}$Training Example | `X[i]`, `y[i]`|\n",
"| m | number of training examples | `m`|\n",
"| n | number of features in each example | `n`|\n",
"| $\\mathbf{w}$ | parameter: weight, | `w` |\n",
"| $b$ | parameter: bias | `b` | \n",
"| $f_{\\mathbf{w},b}(\\mathbf{x}^{(i)})$ | The result of the model evaluation at $\\mathbf{x^{(i)}}$ parameterized by $\\mathbf{w},b$: $f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) = \\mathbf{w} \\cdot \\mathbf{x}^{(i)}+b$ | `f_wb` | \n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_2\"></a>\n",
"# 2 Problem Statement\n",
"\n",
"You will use the motivating example of housing price prediction. The training dataset contains three examples with four features (size, bedrooms, floors and, age) shown in the table below. Note that, unlike the earlier labs, size is in sqft rather than 1000 sqft. This causes an issue, which you will solve in the next lab!\n",
"\n",
"| Size (sqft) | Number of Bedrooms | Number of floors | Age of Home | Price (1000s dollars) | \n",
"| ----------------| ------------------- |----------------- |--------------|-------------- | \n",
"| 2104 | 5 | 1 | 45 | 460 | \n",
"| 1416 | 3 | 2 | 40 | 232 | \n",
"| 852 | 2 | 1 | 35 | 178 | \n",
"\n",
"You will build a linear regression model using these values so you can then predict the price for other houses. For example, a house with 1200 sqft, 3 bedrooms, 1 floor, 40 years old. \n",
"\n",
"Please run the following code cell to create your `X_train` and `y_train` variables."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"X_train = np.array([[2104, 5, 1, 45], [1416, 3, 2, 40], [852, 2, 1, 35]])\n",
"y_train = np.array([460, 232, 178])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_2.1\"></a>\n",
"## 2.1 Matrix X containing our examples\n",
"Similar to the table above, examples are stored in a NumPy matrix `X_train`. Each row of the matrix represents one example. When you have $m$ training examples ( $m$ is three in our example), and there are $n$ features (four in our example), $\\mathbf{X}$ is a matrix with dimensions ($m$, $n$) (m rows, n columns).\n",
"\n",
"\n",
"$$\\mathbf{X} = \n",
"\\begin{pmatrix}\n",
" x^{(0)}_0 & x^{(0)}_1 & \\cdots & x^{(0)}_{n-1} \\\\ \n",
" x^{(1)}_0 & x^{(1)}_1 & \\cdots & x^{(1)}_{n-1} \\\\\n",
" \\cdots \\\\\n",
" x^{(m-1)}_0 & x^{(m-1)}_1 & \\cdots & x^{(m-1)}_{n-1} \n",
"\\end{pmatrix}\n",
"$$\n",
"notation:\n",
"- $\\mathbf{x}^{(i)}$ is vector containing example i. $\\mathbf{x}^{(i)}$ $ = (x^{(i)}_0, x^{(i)}_1, \\cdots,x^{(i)}_{n-1})$\n",
"- $x^{(i)}_j$ is element j in example i. The superscript in parenthesis indicates the example number while the subscript represents an element. \n",
"\n",
"Display the input data."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# data is stored in numpy array/matrix\n",
"print(f\"X Shape: {X_train.shape}, X Type:{type(X_train)})\")\n",
"print(X_train)\n",
"print(f\"y Shape: {y_train.shape}, y Type:{type(y_train)})\")\n",
"print(y_train)"
]
},
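{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a quick aside (not part of the original lab), the notation above maps directly onto NumPy indexing: $\\mathbf{x}^{(i)}$ is row `i` of `X_train`, and $x^{(i)}_j$ is element `j` of that row. The cell below is a small sketch of this."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# indexing sketch (an aside): x^(i) is row i of X_train, x^(i)_j is element j of that row\n",
"print(f\"x^(1)   = X_train[1]    = {X_train[1]}\")\n",
"print(f\"x^(1)_2 = X_train[1, 2] = {X_train[1, 2]}\")"
]
},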
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_2.2\"></a>\n",
"## 2.2 Parameter vector w, b\n",
"\n",
"* $\\mathbf{w}$ is a vector with $n$ elements.\n",
" - Each element contains the parameter associated with one feature.\n",
" - in our dataset, n is 4.\n",
" - notionally, we draw this as a column vector\n",
"\n",
"$$\\mathbf{w} = \\begin{pmatrix}\n",
"w_0 \\\\ \n",
"w_1 \\\\\n",
"\\cdots\\\\\n",
"w_{n-1}\n",
"\\end{pmatrix}\n",
"$$\n",
"* $b$ is a scalar parameter. "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"For demonstration, $\\mathbf{w}$ and $b$ will be loaded with some initial selected values that are near the optimal. $\\mathbf{w}$ is a 1-D NumPy vector."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"b_init = 785.1811367994083\n",
"w_init = np.array([ 0.39133535, 18.75376741, -53.36032453, -26.42131618])\n",
"print(f\"w_init shape: {w_init.shape}, b_init type: {type(b_init)}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_3\"></a>\n",
"# 3 Model Prediction With Multiple Variables\n",
"The model's prediction with multiple variables is given by the linear model:\n",
"\n",
"$$ f_{\\mathbf{w},b}(\\mathbf{x}) = w_0x_0 + w_1x_1 +... + w_{n-1}x_{n-1} + b \\tag{1}$$\n",
"or in vector notation:\n",
"$$ f_{\\mathbf{w},b}(\\mathbf{x}) = \\mathbf{w} \\cdot \\mathbf{x} + b \\tag{2} $$ \n",
"where $\\cdot$ is a vector `dot product`\n",
"\n",
"To demonstrate the dot product, we will implement prediction using (1) and (2)."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_3.1\"></a>\n",
"## 3.1 Single Prediction element by element\n",
"Our previous prediction multiplied one feature value by one parameter and added a bias parameter. A direct extension of our previous implementation of prediction to multiple features would be to implement (1) above using loop over each element, performing the multiply with its parameter and then adding the bias parameter at the end.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def predict_single_loop(x, w, b): \n",
" \"\"\"\n",
" single predict using linear regression\n",
" \n",
" Args:\n",
" x (ndarray): Shape (n,) example with multiple features\n",
" w (ndarray): Shape (n,) model parameters \n",
" b (scalar): model parameter \n",
" \n",
" Returns:\n",
" p (scalar): prediction\n",
" \"\"\"\n",
" n = x.shape[0]\n",
" p = 0\n",
" for i in range(n):\n",
" p_i = x[i] * w[i] \n",
" p = p + p_i \n",
" p = p + b \n",
" return p"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get a row from our training data\n",
"x_vec = X_train[0,:]\n",
"print(f\"x_vec shape {x_vec.shape}, x_vec value: {x_vec}\")\n",
"\n",
"# make a prediction\n",
"f_wb = predict_single_loop(x_vec, w_init, b_init)\n",
"print(f\"f_wb shape {f_wb.shape}, prediction: {f_wb}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Note the shape of `x_vec`. It is a 1-D NumPy vector with 4 elements, (4,). The result, `f_wb` is a scalar."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_3.2\"></a>\n",
"## 3.2 Single Prediction, vector\n",
"\n",
"Noting that equation (1) above can be implemented using the dot product as in (2) above. We can make use of vector operations to speed up predictions.\n",
"\n",
"Recall from the Python/Numpy lab that NumPy `np.dot()`[[link](https://numpy.org/doc/stable/reference/generated/numpy.dot.html)] can be used to perform a vector dot product. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def predict(x, w, b): \n",
" \"\"\"\n",
" single predict using linear regression\n",
" Args:\n",
" x (ndarray): Shape (n,) example with multiple features\n",
" w (ndarray): Shape (n,) model parameters \n",
" b (scalar): model parameter \n",
" \n",
" Returns:\n",
" p (scalar): prediction\n",
" \"\"\"\n",
" p = np.dot(x, w) + b \n",
" return p "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# get a row from our training data\n",
"x_vec = X_train[0,:]\n",
"print(f\"x_vec shape {x_vec.shape}, x_vec value: {x_vec}\")\n",
"\n",
"# make a prediction\n",
"f_wb = predict(x_vec,w_init, b_init)\n",
"print(f\"f_wb shape {f_wb.shape}, prediction: {f_wb}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"The results and shapes are the same as the previous version which used looping. Going forward, `np.dot` will be used for these operations. The prediction is now a single statement. Most routines will implement it directly rather than calling a separate predict routine."
]
},
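{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional aside (not part of the original lab), the cell below is a rough timing sketch of the speed benefit of `np.dot`. It compares `predict_single_loop` and `predict` on a large, randomly generated example; `x_big` and `w_big` are made-up demo data and the exact timings will vary by machine."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# rough timing sketch (optional aside): loop version vs np.dot on a large random example\n",
"import time\n",
"rng = np.random.default_rng(1)   # made-up demo data, not the housing dataset\n",
"x_big = rng.random(1_000_000)\n",
"w_big = rng.random(1_000_000)\n",
"\n",
"tic = time.perf_counter()\n",
"p_loop = predict_single_loop(x_big, w_big, b_init)\n",
"print(f\"loop version:   {1000*(time.perf_counter()-tic):.1f} ms\")\n",
"\n",
"tic = time.perf_counter()\n",
"p_vec = predict(x_big, w_big, b_init)\n",
"print(f\"np.dot version: {1000*(time.perf_counter()-tic):.1f} ms\")\n",
"\n",
"print(f\"results match: {np.isclose(p_loop, p_vec)}\")"
]
},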
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_4\"></a>\n",
"# 4 Compute Cost With Multiple Variables\n",
"The equation for the cost function with multiple variables $J(\\mathbf{w},b)$ is:\n",
"$$J(\\mathbf{w},b) = \\frac{1}{2m} \\sum\\limits_{i = 0}^{m-1} (f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) - y^{(i)})^2 \\tag{3}$$ \n",
"where:\n",
"$$ f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) = \\mathbf{w} \\cdot \\mathbf{x}^{(i)} + b \\tag{4} $$ \n",
"\n",
"\n",
"In contrast to previous labs, $\\mathbf{w}$ and $\\mathbf{x}^{(i)}$ are vectors rather than scalars supporting multiple features."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Below is an implementation of equations (3) and (4). Note that this uses a *standard pattern for this course* where a for loop over all `m` examples is used."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def compute_cost(X, y, w, b): \n",
" \"\"\"\n",
" compute cost\n",
" Args:\n",
" X (ndarray (m,n)): Data, m examples with n features\n",
" y (ndarray (m,)) : target values\n",
" w (ndarray (n,)) : model parameters \n",
" b (scalar) : model parameter\n",
" \n",
" Returns:\n",
" cost (scalar): cost\n",
" \"\"\"\n",
" m = X.shape[0]\n",
" cost = 0.0\n",
" for i in range(m): \n",
" f_wb_i = np.dot(X[i], w) + b #(n,)(n,) = scalar (see np.dot)\n",
" cost = cost + (f_wb_i - y[i])**2 #scalar\n",
" cost = cost / (2 * m) #scalar \n",
" return cost"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Compute and display cost using our pre-chosen optimal parameters. \n",
"cost = compute_cost(X_train, y_train, w_init, b_init)\n",
"print(f'Cost at optimal w : {cost}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Result**: Cost at optimal w : 1.5578904045996674e-12"
]
},
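{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional aside (not part of the original lab), equation (3) can also be computed without the loop over examples: `X @ w + b` produces all $m$ predictions at once. The cell below is a sketch of that fully vectorized form; the helper name `compute_cost_vectorized` is only for illustration and is not used elsewhere."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def compute_cost_vectorized(X, y, w, b):   # illustrative sketch, not the course's standard pattern\n",
"    \"\"\"compute cost for all m examples at once using matrix-vector operations\"\"\"\n",
"    m = X.shape[0]\n",
"    f_wb = X @ w + b                       # (m,n) @ (n,) -> (m,) predictions\n",
"    return np.sum((f_wb - y) ** 2) / (2 * m)\n",
"\n",
"print(f'vectorized cost: {compute_cost_vectorized(X_train, y_train, w_init, b_init)}')"
]
},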
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_5\"></a>\n",
"# 5 Gradient Descent With Multiple Variables\n",
"Gradient descent for multiple variables:\n",
"\n",
"$$\\begin{align*} \\text{repeat}&\\text{ until convergence:} \\; \\lbrace \\newline\\;\n",
"& w_j = w_j - \\alpha \\frac{\\partial J(\\mathbf{w},b)}{\\partial w_j} \\tag{5} \\; & \\text{for j = 0..n-1}\\newline\n",
"&b\\ \\ = b - \\alpha \\frac{\\partial J(\\mathbf{w},b)}{\\partial b} \\newline \\rbrace\n",
"\\end{align*}$$\n",
"\n",
"where, n is the number of features, parameters $w_j$, $b$, are updated simultaneously and where \n",
"\n",
"$$\n",
"\\begin{align}\n",
"\\frac{\\partial J(\\mathbf{w},b)}{\\partial w_j} &= \\frac{1}{m} \\sum\\limits_{i = 0}^{m-1} (f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) - y^{(i)})x_{j}^{(i)} \\tag{6} \\\\\n",
"\\frac{\\partial J(\\mathbf{w},b)}{\\partial b} &= \\frac{1}{m} \\sum\\limits_{i = 0}^{m-1} (f_{\\mathbf{w},b}(\\mathbf{x}^{(i)}) - y^{(i)}) \\tag{7}\n",
"\\end{align}\n",
"$$\n",
"* m is the number of training examples in the data set\n",
"\n",
" \n",
"* $f_{\\mathbf{w},b}(\\mathbf{x}^{(i)})$ is the model's prediction, while $y^{(i)}$ is the target value\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_5.1\"></a>\n",
"## 5.1 Compute Gradient with Multiple Variables\n",
"An implementation for calculating the equations (6) and (7) is below. There are many ways to implement this. In this version, there is an\n",
"- outer loop over all m examples. \n",
" - $\\frac{\\partial J(\\mathbf{w},b)}{\\partial b}$ for the example can be computed directly and accumulated\n",
" - in a second loop over all n features:\n",
" - $\\frac{\\partial J(\\mathbf{w},b)}{\\partial w_j}$ is computed for each $w_j$.\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def compute_gradient(X, y, w, b): \n",
" \"\"\"\n",
" Computes the gradient for linear regression \n",
" Args:\n",
" X (ndarray (m,n)): Data, m examples with n features\n",
" y (ndarray (m,)) : target values\n",
" w (ndarray (n,)) : model parameters \n",
" b (scalar) : model parameter\n",
" \n",
" Returns:\n",
" dj_dw (ndarray (n,)): The gradient of the cost w.r.t. the parameters w. \n",
" dj_db (scalar): The gradient of the cost w.r.t. the parameter b. \n",
" \"\"\"\n",
" m,n = X.shape #(number of examples, number of features)\n",
" dj_dw = np.zeros((n,))\n",
" dj_db = 0.\n",
"\n",
" for i in range(m): \n",
" err = (np.dot(X[i], w) + b) - y[i] \n",
" for j in range(n): \n",
" dj_dw[j] = dj_dw[j] + err * X[i, j] \n",
" dj_db = dj_db + err \n",
" dj_dw = dj_dw / m \n",
" dj_db = dj_db / m \n",
" \n",
" return dj_db, dj_dw"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"#Compute and display gradient \n",
"tmp_dj_db, tmp_dj_dw = compute_gradient(X_train, y_train, w_init, b_init)\n",
"print(f'dj_db at initial w,b: {tmp_dj_db}')\n",
"print(f'dj_dw at initial w,b: \\n {tmp_dj_dw}')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Result**: \n",
"dj_db at initial w,b: -1.6739251122999121e-06 \n",
"dj_dw at initial w,b: \n",
" [-2.73e-03 -6.27e-06 -2.22e-06 -6.92e-05] "
]
},
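{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional aside (not part of the original lab), equations (6) and (7) can also be computed without explicit loops: the error vector for all $m$ examples is `X @ w + b - y`, and `X.T @ err` accumulates the per-feature sums of equation (6). The cell below is a sketch of that form; the helper name `compute_gradient_vectorized` is only for illustration and is not used by the gradient descent routine below."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def compute_gradient_vectorized(X, y, w, b):   # illustrative sketch, not the lab's loop-based version\n",
"    \"\"\"compute the gradient for linear regression using matrix-vector operations\"\"\"\n",
"    m = X.shape[0]\n",
"    err = X @ w + b - y        # (m,) prediction errors\n",
"    dj_dw = X.T @ err / m      # (n,) gradient w.r.t. w, equation (6)\n",
"    dj_db = np.sum(err) / m    # scalar gradient w.r.t. b, equation (7)\n",
"    return dj_db, dj_dw\n",
"\n",
"tmp_db, tmp_dw = compute_gradient_vectorized(X_train, y_train, w_init, b_init)\n",
"print(f'dj_db: {tmp_db}')\n",
"print(f'dj_dw: {tmp_dw}')"
]
},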
{
"cell_type": "markdown",
"metadata": {},
"source": [
"<a name=\"toc_15456_5.2\"></a>\n",
"## 5.2 Gradient Descent With Multiple Variables\n",
"The routine below implements equation (5) above."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"def gradient_descent(X, y, w_in, b_in, cost_function, gradient_function, alpha, num_iters): \n",
" \"\"\"\n",
" Performs batch gradient descent to learn w and b. Updates w and b by taking \n",
" num_iters gradient steps with learning rate alpha\n",
" \n",
" Args:\n",
" X (ndarray (m,n)) : Data, m examples with n features\n",
" y (ndarray (m,)) : target values\n",
" w_in (ndarray (n,)) : initial model parameters \n",
" b_in (scalar) : initial model parameter\n",
" cost_function : function to compute cost\n",
" gradient_function : function to compute the gradient\n",
" alpha (float) : Learning rate\n",
" num_iters (int) : number of iterations to run gradient descent\n",
" \n",
" Returns:\n",
" w (ndarray (n,)) : Updated values of parameters \n",
" b (scalar) : Updated value of parameter \n",
" \"\"\"\n",
" \n",
" # An array to store cost J and w's at each iteration primarily for graphing later\n",
" J_history = []\n",
" w = copy.deepcopy(w_in) #avoid modifying global w within function\n",
" b = b_in\n",
" \n",
" for i in range(num_iters):\n",
"\n",
" # Calculate the gradient and update the parameters\n",
" dj_db,dj_dw = gradient_function(X, y, w, b) ##None\n",
"\n",
" # Update Parameters using w, b, alpha and gradient\n",
" w = w - alpha * dj_dw ##None\n",
" b = b - alpha * dj_db ##None\n",
" \n",
" # Save cost J at each iteration\n",
" if i<100000: # prevent resource exhaustion \n",
" J_history.append( cost_function(X, y, w, b))\n",
"\n",
" # Print cost every at intervals 10 times or as many iterations if < 10\n",
" if i% math.ceil(num_iters / 10) == 0:\n",
" print(f\"Iteration {i:4d}: Cost {J_history[-1]:8.2f} \")\n",
" \n",
" return w, b, J_history #return final w,b and J history for graphing"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"In the next cell you will test the implementation. "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# initialize parameters\n",
"initial_w = np.zeros_like(w_init)\n",
"initial_b = 0.\n",
"# some gradient descent settings\n",
"iterations = 1000\n",
"alpha = 5.0e-7\n",
"# run gradient descent \n",
"w_final, b_final, J_hist = gradient_descent(X_train, y_train, initial_w, initial_b,\n",
" compute_cost, compute_gradient, \n",
" alpha, iterations)\n",
"print(f\"b,w found by gradient descent: {b_final:0.2f},{w_final} \")\n",
"m,_ = X_train.shape\n",
"for i in range(m):\n",
" print(f\"prediction: {np.dot(X_train[i], w_final) + b_final:0.2f}, target value: {y_train[i]}\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"**Expected Result**: \n",
"b,w found by gradient descent: -0.00,[ 0.2 0. -0.01 -0.07] \n",
"prediction: 426.19, target value: 460 \n",
"prediction: 286.17, target value: 232 \n",
"prediction: 171.47, target value: 178 "
]
},
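{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an optional aside (not part of the original lab), the learned parameters can be used to predict the house described in the problem statement: 1200 sqft, 3 bedrooms, 1 floor, 40 years old. As discussed below, the estimate will not be very accurate yet."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# predict the price of the house from the problem statement using the learned parameters\n",
"x_house = np.array([1200, 3, 1, 40])\n",
"price_1000s = np.dot(x_house, w_final) + b_final\n",
"print(f'predicted price: {price_1000s:0.2f} thousand dollars')"
]
},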
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# plot cost versus iteration \n",
"fig, (ax1, ax2) = plt.subplots(1, 2, constrained_layout=True, figsize=(12, 4))\n",
"ax1.plot(J_hist)\n",
"ax2.plot(100 + np.arange(len(J_hist[100:])), J_hist[100:])\n",
"ax1.set_title(\"Cost vs. iteration\"); ax2.set_title(\"Cost vs. iteration (tail)\")\n",
"ax1.set_ylabel('Cost') ; ax2.set_ylabel('Cost') \n",
"ax1.set_xlabel('iteration step') ; ax2.set_xlabel('iteration step') \n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"*These results are not inspiring*! Cost is still declining and our predictions are not very accurate. The next lab will explore how to improve on this."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"<a name=\"toc_15456_6\"></a>\n",
"# 6 Congratulations!\n",
"In this lab you:\n",
"- Redeveloped the routines for linear regression, now with multiple variables.\n",
"- Utilized NumPy `np.dot` to vectorize the implementations"
]
}
],
"metadata": {
"dl_toc_settings": {
"rndtag": "15456"
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.6"
},
"toc-autonumbering": false
},
"nbformat": 4,
"nbformat_minor": 5
}