Measure theory as it relates to cumulative distribution functions (CDFs), working on problem sets.

Moar Measure

Following up on my study of measure theory, which is in turn a study of probability theory since a probability is a measure, here are some properties of a measure $\mu$ (via the first few videos of the mathematical monk's playlist on probability); a quick numerical sanity check follows the list:

  • Monotonicity: $A \subset B \implies \mu(A) \leq \mu(B)$
  • Subadditivity: $E_1, E_2, ... \in \mathcal{A} \implies \mu(\bigcup_{i}E_i) \leq \sum_{i}\mu(E_i)$
  • Continuity from below: $E_1, E_2, ... \in \mathcal{A}$ and $E_1 \subset E_2 \subset ... \implies \mu(\bigcup_{i=1}^{\infty}E_i) = \lim_{i\to\infty} \mu(E_i)$
  • Continuity from above: if $E_1, E_2, ... \in \mathcal{A}$ and $E_1 \supset E_2 \supset ...$ and $\mu(E_1) < \infty$, then $\mu(\bigcap_{i=1}^{\infty} E_i) = \lim_{i\to\infty} \mu(E_i)$
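
A minimal Python sketch of that check (my own, not from the videos): treat $\mu$ as a sum of made-up point masses on a four-element $\Omega$ and test monotonicity, subadditivity, and continuity from below on small example sets.

```python
# Minimal sketch (not from the videos): a discrete measure mu given by
# made-up point masses, used to spot-check the properties above.

weights = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.4}  # point masses on Omega = {0, 1, 2, 3}

def mu(E):
    """Measure of a set E: the sum of the point masses of its elements."""
    return sum(weights.get(w, 0.0) for w in E)

# Monotonicity: A ⊂ B implies mu(A) <= mu(B)
A, B = {0, 1}, {0, 1, 2}
assert A <= B and mu(A) <= mu(B)

# Subadditivity: mu(E1 ∪ E2) <= mu(E1) + mu(E2), even though E1 and E2 overlap
E1, E2 = {0, 1}, {1, 2}
assert mu(E1 | E2) <= mu(E1) + mu(E2)

# Continuity from below: for E1 ⊂ E2 ⊂ ..., mu(∪ Ei) = lim mu(Ei);
# here the increasing sequence stabilizes after four steps.
chain = [{0}, {0, 1}, {0, 1, 2}, {0, 1, 2, 3}]
assert abs(mu(set().union(*chain)) - mu(chain[-1])) < 1e-12
```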

The above are true of all measures, and thus all probability measures. Here are some more properties of probability measures.

Let $(\Omega, \mathcal{A}, P)$ be a probability space with $E, F, E_i \in \mathcal{A}$:

  • $P(E \cup F) = P(E) + P(F)$ if $E \cap F = \emptyset$
  • $P(E \cup F) = P(E) + P(F) - P(E \cap F)$
  • $P(E) = 1 - P(E^C)$
  • $P(E \cap F^C) = P(E) - P(E \cap F)$

The generalization of $P(E \cup F) = P(E) + P(F) - P(E \cap F)$ to more sets is called the inclusion-exclusion principle and can be visualized with 3 sets (a quick numerical check follows the figure):

[Figure: Venn diagram of inclusion-exclusion for three sets (Inclusion-exclusion.svg)]
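
Here's that check as a minimal Python sketch (my own; the sets are arbitrary examples): a finite sample space with the uniform probability measure, verifying the identities above and inclusion-exclusion for 3 sets.

```python
# Minimal sketch (not from the book): the uniform probability measure on a
# finite sample space, used to verify the identities above exactly.

from fractions import Fraction

Omega = set(range(12))

def P(E):
    """Uniform probability of an event E ⊆ Omega."""
    return Fraction(len(E & Omega), len(Omega))

E, F = {0, 1, 2, 3, 4}, {3, 4, 5, 6}

assert P(E | F) == P(E) + P(F) - P(E & F)          # P(E ∪ F)
assert P(E) == 1 - P(Omega - E)                    # complement rule
assert P(E & (Omega - F)) == P(E) - P(E & F)       # P(E ∩ F^C)

# Inclusion-exclusion with 3 sets:
# P(A ∪ B ∪ C) = P(A) + P(B) + P(C)
#              - P(A ∩ B) - P(A ∩ C) - P(B ∩ C)
#              + P(A ∩ B ∩ C)
A, B, C = {0, 1, 2, 3}, {2, 3, 4, 5}, {3, 5, 7, 9}
assert P(A | B | C) == (P(A) + P(B) + P(C)
                        - P(A & B) - P(A & C) - P(B & C)
                        + P(A & B & C))
```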

All of these are enumerated as properties of probability measures in the All of Stats book, but I don't mind running through it again with my newfound appreciation for measure theory. Most of this stuff can be visualized with Venn diagrams.

Borel probability measures and CDFs

Let's consider a Borel measure on $\Bbb R$, which is a measure where $\Omega = \Bbb R$ and $\mathcal{A} = \mathcal{B}(\Bbb R)$, i.e. a measure on $(\Bbb R, \mathcal{B}(\Bbb R))$.

$\mathcal{B}(\Bbb R)$ is the Borel $\sigma$-algebra, the smallest $\sigma$-algebra containing all of the open sets of $\Bbb R$ (i.e. the $\sigma$-algebra generated by the open sets).

Going back to cumulative distribution functions (CDFs), it turns out that every CDF corresponds to a unique Borel probability measure, and every Borel probability measure corresponds to a unique CDF.

More concretely, a CDF is defined as a function $F: \Bbb R \to \Bbb R$ s.t. (see the quick numerical check after the list):

  • $x \leq y \implies F(x) \leq F(y)$ $(x,y \in \Bbb R)$ ($F$ is non-decreasing)
  • $\lim_{x \searrow a} F(x) = F(a)$ ($F$ is right-continuous)
  • $\lim_{x \to \infty} F(x) = 1$
  • $\lim_{x \to -\infty} F(x) = 0$
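
That spot-check as a small Python sketch, using the Exponential(1) CDF $F(x) = 1 - e^{-x}$ for $x \geq 0$ as a concrete example (my choice, not from the lecture):

```python
# Minimal sketch: numerically spot-check the four CDF properties for the
# Exponential(1) CDF (an example I picked, not from the lecture).

import math

def F(x):
    """Exponential(1) CDF: 1 - exp(-x) for x >= 0, and 0 otherwise."""
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

xs = [-5.0, -1.0, 0.0, 0.5, 1.0, 3.0, 10.0]

# Non-decreasing: x <= y implies F(x) <= F(y)
assert all(F(a) <= F(b) for a, b in zip(xs, xs[1:]))

# Right-continuous at a: F(a + eps) -> F(a) as eps shrinks to 0 from above
a, eps = 0.0, 1e-9
assert abs(F(a + eps) - F(a)) < 1e-6

# Limits at +infinity and -infinity (checked at large finite points)
assert abs(F(1e6) - 1.0) < 1e-9
assert F(-1e6) == 0.0
```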

Theorem: $F(x) = P((-\infty, x])$ defines a one-to-one correspondence between CDFs $F$ and Borel probability measures $P$.
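
A useful consequence of the theorem: since $(-\infty, b] = (-\infty, a] \cup (a, b]$ is a disjoint union, the induced measure of a half-open interval is $P((a, b]) = F(b) - F(a)$. A small sketch of this (my own illustration, reusing the Exponential(1) CDF as the example):

```python
# Minimal sketch: the Borel probability measure induced by a CDF F assigns
# P((a, b]) = F(b) - F(a) to half-open intervals (example CDF chosen by me).

import math

def F(x):
    """Exponential(1) CDF, the same example CDF used above."""
    return 1.0 - math.exp(-x) if x >= 0 else 0.0

def P_half_open(a, b):
    """Probability of (a, b] under the measure induced by F."""
    assert a <= b
    return F(b) - F(a)

print(P_half_open(-1e6, 1e6))                  # ~1.0: (almost) the whole line
print(P_half_open(0, 1) + P_half_open(1, 2))   # additivity over disjoint pieces...
print(P_half_open(0, 2))                       # ...matches the combined interval
```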

This is kind of interesting as it frames the scope of what kinds of probability measures can be uniquely described with a CDF: those that are measures over the Borel $\sigma$-algebra, i.e. the $\sigma$-algebra generated by the open sets of $\Bbb R$.

So this means that in some cases we might be trying to reason about probability measures that aren't defined on $(\Bbb R, \mathcal{B}(\Bbb R))$, and then we'll need to use measure theory directly, as a CDF won't cut it.

Mathematical monk's recommended resources

In this video the teacher recommends some books, so I thought I'd take note in case I need even moar stuff to read / reference / study:

All are googleable to find excerpts / problem sets.

Useful tool for looking up LaTeX symbols

As I attempt to get more fluent in typing out TeX for Mathjax, being able to find a symbol quickly is important. This tool rocks: it lets you draw the symbol and finds closely related symbols along with the needed TeX.

Starting on problem sets

I'm finally diving into the All of Stats problem sets from the course website; here's my WIP for problem 1.