# KarlRosaen

Transforming random variables, joint and marginal distributions, and The Rule of the Lazy Statistician

## HW on transformation of random variables and joint density functions

I've been catching up on probability homework from chapter 2.

In this problem I had to prove this theorem:

Let $F$ be a CDF, and $U$ a random variable uniformly distributed on $[0, 1]$. Then $F^{-1}(U)$ is a random variable with CDF $F$.

After looking carefully at the definition of the inverse (or quantile) CDF and reviewing the techniques for transforming a random variable by applying a function to it, I finally (mostly) got it right before checking the solution. I wasn't precise about where the resulting random variable was defined, but I got the main point. It was a good review of working carefully with the definitions of random variables and cumulative distribution functions.
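To convince myself, here's a quick numerical sketch (my own, not from the book) using the exponential distribution, where $F(x) = 1 - e^{-x}$ and $F^{-1}(u) = -\ln(1 - u)$:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# U ~ Uniform(0, 1)
u = rng.uniform(size=100_000)

# Apply the inverse CDF of the Exponential(1) distribution: F^{-1}(u) = -ln(1 - u)
x = -np.log(1 - u)

# The empirical CDF of the transformed samples should match F
for t in [0.5, 1.0, 2.0]:
    print(f"t={t}: empirical {np.mean(x <= t):.4f} vs F(t) {stats.expon.cdf(t):.4f}")
```

The empirical CDF lines up with $F$, which is the point of the theorem (and the basis of inverse transform sampling: draw uniforms and push them through $F^{-1}$).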

Let $X$ have a CDF $F$. Find the CDF of $X^+ = \max\{0,X\}$.

was more straightforward, and again reviewed the technique of transforming a random variable.
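For reference, my sketch of the answer (worth double checking against the solutions): for $x < 0$ the event $\{X^+ \le x\}$ is impossible, and for $x \ge 0$ it is the same event as $\{X \le x\}$, so

$F_{X^+}(x) = 0$ for $x < 0$, and $F_{X^+}(x) = F(x)$ for $x \ge 0$.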

Two other problems concerned joint density functions; in both cases, thinking clearly about how to integrate over 2d regions was the key.
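As a toy example of this kind of 2d integration (a made-up density, not one of the homework problems): take the joint density $f(x,y) = x + y$ on the unit square and compute $P(X + Y \le 1)$ by integrating over the triangle below the line $y = 1 - x$.

```python
from scipy import integrate

# Joint density f(x, y) = x + y on [0, 1] x [0, 1]
# dblquad integrates func(y, x) dy dx, with the y limits depending on x
prob, err = integrate.dblquad(
    lambda y, x: x + y,   # integrand (inner variable comes first)
    0, 1,                 # x from 0 to 1
    lambda x: 0,          # y from 0 ...
    lambda x: 1 - x,      # ... up to 1 - x
)
print(prob)  # ~0.3333, i.e. P(X + Y <= 1) = 1/3
```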

## Marginal Distributions

I read about marginal distributions: you can integrate out a random variable from the joint distribution to get back to a distribution without that variable.

For a joint density function $f_{X,Y}$, the marginal density is $f_X(x) = \int f_{X,Y}(x,y)\,dy$

In the discrete case, you sum over all values of the variable you are marginalizing.

$f_X(x) = P(X=x) = \sum_y P(X = x, Y = y) = \sum_y f_{X,Y}(x,y)$
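A small made-up example in code, just to make the sums concrete:

```python
import numpy as np

# A made-up joint pmf f(x, y): rows index x, columns index y
joint = np.array([
    [0.10, 0.20, 0.10],
    [0.05, 0.25, 0.30],
])
assert np.isclose(joint.sum(), 1.0)

# Marginalize: sum over y to get f_X, sum over x to get f_Y
f_X = joint.sum(axis=1)  # [0.40, 0.60]
f_Y = joint.sum(axis=0)  # [0.15, 0.45, 0.40]
print(f_X, f_Y)
```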

## Studying Expectation

I began reading ahead into chapter 3 and watching math monk probability videos on expectation. The formulas are pretty straightforward for both the discrete and continuous cases. Some tidbits that were new to me:

• A random variable may not have a well defined expectation
• A random variable may have a well defined infinite expectation

### Well defined

In order to determine whether an expectation is well defined, break the sum / integral up into the $> 0$ and $< 0$ cases (the positive and negative parts of the random variable), and so long as at least one of them is finite, the entire summation / integral is "well defined".
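Written out (my paraphrase of the definition): with $X^+ = \max\{X, 0\}$ and $X^- = \max\{-X, 0\}$,

$E(X) = E(X^+) - E(X^-)$

and the expectation is well defined whenever at least one of $E(X^+)$ and $E(X^-)$ is finite (it may then be $+\infty$ or $-\infty$); if both are infinite, $E(X)$ is undefined.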

An example of a continuous random variable $X$ with an undefined expectation is the Cauchy distribution:

$f(x) = \frac {1}{\pi(1 + x^2)}$

$E(X) = \int_{-\infty}^{\infty} \frac {x}{\pi(1 + x^2)} dx$

It can be shown that $\int_{-\infty}^{0} \frac {x}{\pi(1 + x^2)} dx$ diverges to $-\infty$ while $\int_{0}^{\infty} \frac {x}{\pi(1 + x^2)} dx$ diverges to $+\infty$, making the sum of the two undefined. This is a bit hand-wavy (see Wikipedia for a more rigorous description), but it gives the intuition behind why this integral is undefined: you can't add negative infinity and positive infinity together and have a well defined value.
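A quick numerical illustration (my own sketch, not from the book): sample means of Cauchy draws never settle down the way they would for a distribution with a finite mean.

```python
import numpy as np

rng = np.random.default_rng(0)

# Draw from the standard Cauchy and look at running sample means
samples = rng.standard_cauchy(1_000_000)
running_mean = np.cumsum(samples) / np.arange(1, len(samples) + 1)

# Unlike, say, a normal, the running mean keeps jumping around
# (occasional huge draws drag it far from zero) -- there is no
# expectation for the law of large numbers to converge to.
print(running_mean[[999, 9_999, 99_999, 999_999]])
```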

### Expectation rule aka The Rule of the Lazy Statistician

Another interesting property concerns how to compute the expectation of a function of a random variable.

What if we know the density function $f_X(x)$ of some random variable $X$ and, for some function $g$, wish to know $E(g(X))$, but don't know the density of $g(X)$?

The rule of the Lazy Statistician says we can plug $g(x)$ in for $x$ as follows, in the discrete case:

$E(g(X)) = \sum_x g(x) f_X(x)$

and for the continuous case:

$E(g(X)) = \int_{-\infty}^{\infty} g(x) f_X(x) dx$

This seems pretty handy; we know the density functions of uniform distributions, normal distributions and a host of others, so if we would like to find the expected value of a random variable $Y = g(X)$ that is a function of a random variable $X$ whose density we already know, we can go that route without having to first derive the density $f_Y$ and then compute $\int_{-\infty}^{\infty} y f_Y(y) dy$.
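As a toy example (my own, not from the book): for $X \sim \text{Uniform}(0, 1)$ and $g(x) = x^2$, we can compute $E(X^2)$ by integrating $g(x) f_X(x) = x^2 \cdot 1$ over $[0, 1]$, rather than first deriving the density of $Y = X^2$.

```python
import numpy as np
from scipy import integrate

# E(g(X)) via the rule of the lazy statistician, with g(x) = x^2
# and X ~ Uniform(0, 1), whose density is f_X(x) = 1 on [0, 1]
expectation, _ = integrate.quad(lambda x: x**2 * 1.0, 0, 1)
print(expectation)  # 1/3, with no need for the density of Y = X^2

# Sanity check with a Monte Carlo estimate
rng = np.random.default_rng(1)
print(np.mean(rng.uniform(size=1_000_000) ** 2))  # ~0.333
```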