No handwritten homework reports are accepted for this course. We work with Git/GitHub. Efficient and abundant use of Git, e.g., frequent and well-documented commits, is an important criterion for grading your homework.
If you don't have a GitHub account, apply for the Student Developer Pack at GitHub using your UCLA email.
Create a private repository `biostat-257-2020-spring` and add `Hua-Zhou` and `BrendonChau` (TA) as your collaborators.
The top directories of the repository should be `hw1`, `hw2`, and so on. You may create other branches for developing your homework solutions, but the `master` branch will be your presentation area. Put your homework submission files (the Jupyter notebook `.ipynb`, the HTML converted from the notebook, and all code and data sets needed to reproduce the results) in the `master` branch.
After each homework due date, the teaching assistant and instructor will check out your `master` branch for grading. Tag each of your homework submissions with the tag names `hw1`, `hw2`, and so on. The tagging time will be used as your submission time, so if you tag your `hw1` submission after the deadline, penalty points will be deducted for late submission.
Read the style guide for Julia programming. The following rules in the style guide will be strictly enforced when grading: (4) the four-space indenting rule, (6) the 80-character rule, (7) the space-after-comma rule, (8) the no-space-before-comma rule, and (9) the space-around-operators rule.
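For illustration, a short hypothetical snippet that conforms to these rules (four-space indents, lines within 80 characters, a space after but not before commas, and spaces around binary operators) might look like:

```julia
# hypothetical example illustrating the enforced style rules
function weighted_sum(x, w)
    s = zero(eltype(x))
    for i in eachindex(x, w)
        s = s + w[i] * x[i]
    end
    return s
end
```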
Let's check whether floating-point numbers obey certain algebraic rules. For questions 2-5, one counterexample suffices.
1. The associative rule for addition says `(x + y) + z == x + (y + z)`. Check the associative rule using `x = 0.1`, `y = 0.1`, and `z = 1.0` in Julia and explain what you find (a sketch of the check appears after this list).
2. Do floating-point numbers obey the associative rule for multiplication: `(x * y) * z == x * (y * z)`?
3. Do floating-point numbers obey the distributive rule: `a * (x + y) == a * x + a * y`?
4. Is `0 * x == 0` true for all floating-point numbers `x`?
5. Is `x / a == x * (1 / a)` always true?
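For question 1, a minimal sketch of the check in Julia; the remaining questions can be probed the same way with values of your own choosing:

```julia
x, y, z = 0.1, 0.1, 1.0

(x + y) + z == x + (y + z)  # false
(x + y) + z                 # 1.2
x + (y + z)                 # 1.2000000000000002
```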
Consider the Julia function

```julia
function g(k)
    for i in 1:10
        k = 5k - 1
    end
    k
end
```

Use `@code_llvm` to find the LLVM bitcode of compiled `g` with `Int64` input. Use `@code_llvm` to find the LLVM bitcode of compiled `g` with `Float64` input. Apply `@fastmath` and repeat the questions 1-3 on the function

```julia
function g_fastmath(k)
    @fastmath for i in 1:10
        k = 5k - 1
    end
    k
end
```

Explain what the macro `@fastmath` does.
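As a reference for invoking these macros, a minimal sketch; the input values `2` and `2.0` are illustrative choices, not prescribed by the assignment:

```julia
using InteractiveUtils  # provides @code_llvm outside the REPL

function g(k)
    for i in 1:10
        k = 5k - 1
    end
    k
end

@code_llvm g(2)    # LLVM bitcode of the compiled method for Int64 input
@code_llvm g(2.0)  # LLVM bitcode of the compiled method for Float64 input
```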
1. Create the vector `x = (0.988, 0.989, 0.990, ..., 1.010, 1.011, 1.012)`.
2. Plot the polynomial `y = x^7 - 7x^6 + 21x^5 - 35x^4 + 35x^3 - 21x^2 + 7x - 1` at points `x`.
3. Plot the polynomial `y = (x - 1)^7` at points `x`.
4. Explain what you found. (A plotting sketch follows this list.)
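As a starting point, a minimal sketch of the vector construction and the two plots; the use of the Plots.jl package here is an assumption, not a requirement of the assignment:

```julia
using Plots

# grid of points from 0.988 to 1.012 in steps of 0.001
x = 0.988:0.001:1.012

# expanded form of the degree-7 polynomial
y1 = x.^7 .- 7x.^6 .+ 21x.^5 .- 35x.^4 .+ 35x.^3 .- 21x.^2 .+ 7x .- 1
# factored form of the same polynomial
y2 = (x .- 1).^7

plot(x, [y1 y2], label = ["expanded" "factored"], xlabel = "x", ylabel = "y")
```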
Show the Sherman-Morrison formula $$ (\mathbf{A} + \mathbf{u} \mathbf{u}^T)^{-1} = \mathbf{A}^{-1} - \frac{1}{1 + \mathbf{u}^T \mathbf{A}^{-1} \mathbf{u}} \mathbf{A}^{-1} \mathbf{u} \mathbf{u}^T \mathbf{A}^{-1}, $$ where $\mathbf{A} \in \mathbb{R}^{n \times n}$ is nonsingular and $\mathbf{u} \in \mathbb{R}^n$. This formula supplies the inverse of the symmetric, rank-one perturbation of $\mathbf{A}$.
Show the Woodbury formula $$ (\mathbf{A} + \mathbf{U} \mathbf{V}^T)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1} \mathbf{U} (\mathbf{I}_m + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U})^{-1} \mathbf{V}^T \mathbf{A}^{-1}, $$ where $\mathbf{A} \in \mathbb{R}^{n \times n}$ is nonsingular, $\mathbf{U}, \mathbf{V} \in \mathbb{R}^{n \times m}$, and $\mathbf{I}_m$ is the $m \times m$ identity matrix. In many applications $m$ is much smaller than $n$. The Woodbury formula generalizes Sherman-Morrison and is valuable because the smaller matrix $\mathbf{I}_m + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U}$ is cheaper to invert than the larger matrix $\mathbf{A} + \mathbf{U} \mathbf{V}^T$.
Show the binomial inversion formula $$ (\mathbf{A} + \mathbf{U} \mathbf{B} \mathbf{V}^T)^{-1} = \mathbf{A}^{-1} - \mathbf{A}^{-1} \mathbf{U} (\mathbf{B}^{-1} + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U})^{-1} \mathbf{V}^T \mathbf{A}^{-1}, $$ where $\mathbf{A}$ and $\mathbf{B}$ are nonsingular.
Show the identity $$ \text{det}(\mathbf{A} + \mathbf{U} \mathbf{V}^T) = \text{det}(\mathbf{A}) \text{det}(\mathbf{I}_m + \mathbf{V}^T \mathbf{A}^{-1} \mathbf{U}). $$ This formula is useful for evaluating the density of a multivariate normal with covariance matrix $\mathbf{A} + \mathbf{U} \mathbf{V}^T$.
Hint: 1 and 2 are special cases of 3.
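Although the problem asks for derivations, a quick numerical sanity check of the Woodbury formula and the determinant identity in Julia can be reassuring; the dimensions and seed below are arbitrary illustrative choices:

```julia
using LinearAlgebra, Random

Random.seed!(257)            # arbitrary seed for reproducibility
n, m = 5, 2                  # illustrative dimensions
A = randn(n, n) + n * I      # shifted to keep A safely nonsingular
U, V = randn(n, m), randn(n, m)

# Woodbury formula
inv(A + U * V') ≈ inv(A) - inv(A) * U * inv(I + V' * inv(A) * U) * V' * inv(A)

# determinant identity
det(A + U * V') ≈ det(A) * det(I + V' * inv(A) * U)
```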
Demonstrate the following facts about triangular matrices in Julia (one example for each fact); a starting sketch follows the list. The mathematically curious are also encouraged to prove them.
Note that a unit triangular matrix is a triangular matrix with all diagonal entries equal to 1.
The product of two upper (lower) triangular matrices is upper (lower) triangular.
The inverse of an upper (lower) triangular matrix is upper (lower) triangular.
The product of two unit upper (lower) triangular matrices is unit upper (lower) triangular.
The inverse of a unit upper (lower) triangular matrix is unit upper (lower) triangular.
An orthogonal upper (lower) triangular matrix is diagonal.
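As a starting point, a minimal sketch demonstrating the first two facts with `UpperTriangular` from the standard library (the size and seed are arbitrary choices); the remaining facts can be checked in the same spirit:

```julia
using LinearAlgebra, Random

Random.seed!(257)                     # arbitrary seed for reproducibility
A = UpperTriangular(randn(5, 5))
B = UpperTriangular(randn(5, 5))

istriu(A * B)     # true: product of upper triangulars is upper triangular
istriu(inv(A))    # true: inverse of an upper triangular is upper triangular
```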