F11Stat946ass

From Wiki Course Notes



problem 1

Recall the definition of the family of probability distributions associated with a directed graphical model:

p(x) = \prod_{i=1}^{n}f_i(x_i,x_{\pi_i}).

Prove by induction that p(x_i|x_{\pi_i})=f_i(x_i,x_{\pi_i}).

solution 1

According to the product rule we have:


p(x_1,\ldots,x_{k+1})=p(x_1,\ldots,x_{k})p(x_{k+1}|x_1,\ldots,x_{k}).~\sharp

This is the most general case for a directed graph: the chain rule places no restrictions on the conditioning sets, so it corresponds to a fully connected graph, from which every directed graphical model can be obtained.

Now, we need to show that the statement p(x_i|x_{\pi_i})=f_i(x_i,x_{\pi_i}) holds for n = 1 (and, for extra clarity, for n = 2).

\begin{align}
n=&1:~p(x)=p(x_1)=p(x_1|\emptyset)=f_1(x_1,x_{\pi_1}), \mbox{ where } x_{\pi_1}=\emptyset \rightarrow  p(x_i|x_{\pi_i})=f_i(x_i,x_{\pi_i}),\\
n=&2:~p(x)=p(x_1,x_2)=p(x_1)p(x_2|x_1)=p(x_1|\emptyset)p(x_2|x_1)=f_1(x_1,x_{\pi_1})f_2(x_2,x_{\pi_2})\rightarrow  p(x_i|x_{\pi_i})=f_i(x_i,x_{\pi_i}).
\end{align}

In the latter case we have assumed, without loss of generality, that x_1 is the only parent of x_2. This assumption does not reduce the generality of the proof: the graph is directed, so of any two connected nodes one must be the parent of the other, and by our earlier convention parents come before children. This also leaves no parents for x_1, just as in the n = 1 case.

Now, assuming that the statement p(x_i|x_{\pi_i})=f_i(x_i,x_{\pi_i}) holds for n = k,

\begin{align}
n=k: p(x)=p(x_1,x_2,\ldots,x_k)=\prod_{i=1}^{k}f_i(x_i,x_{\pi_i})=\prod_{i=1}^{k}p(x_i|x_{\pi_i}).~\flat
\end{align}

we need to show that it also holds for n = k + 1.

\begin{align}
n=k+1: p(x)&=p(x_1,x_2,\ldots,x_{k+1})=\prod_{i=1}^{k+1}f_i(x_i,x_{\pi_i}) \\
&=f_{k+1}(x_{k+1},x_{\pi_{k+1}})\prod_{i=1}^{k}f_i(x_i,x_{\pi_i}) \\
&\stackrel{\flat}{=}f_{k+1}(x_{k+1},x_{\pi_{k+1}})\prod_{i=1}^{k}p(x_i|x_{\pi_i}).
\end{align}

On the other hand according to \sharp we have:

\begin{align}
p(x_1,\ldots,x_{k+1})=p(x_1,\ldots,x_{k})p(x_{k+1}|x_1,\ldots,x_{k})
\end{align}

Thus,

\begin{align}
f_{k+1}(x_{k+1},x_{\pi_{k+1}})\prod_{i=1}^{k}p(x_i|x_{\pi_i})=p(x_{k+1}|x_1,\ldots,x_{k})p(x_1,\ldots,x_{k}).
\end{align}

Once more using \flat, we have:

\begin{align}
f_{k+1}(x_{k+1},x_{\pi_{k+1}})=p(x_{k+1}|x_1,\ldots,x_{k}),
\end{align}

Since the left-hand side depends only on x_{k+1} and x_{\pi_{k+1}}, so must the right-hand side, i.e. p(x_{k+1}|x_1,\ldots,x_k)=p(x_{k+1}|x_{\pi_{k+1}}), and so:

\begin{align}
p(x_i|x_{\pi_i})=f_i(x_i,x_{\pi_i})~\triangle.
\end{align}
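As an illustrative sanity check (an addition, not part of the original assignment), the result can be verified numerically on a small hypothetical binary chain x1 → x2 → x3 with arbitrarily chosen conditional tables:

```python
import itertools

# Hypothetical binary chain x1 -> x2 -> x3; the tables below are arbitrary
# valid conditionals, so f_i(x_i, x_{pi_i}) = p(x_i | x_{pi_i}) by construction.
f1 = {0: 0.3, 1: 0.7}                                      # f1(x1)
f2 = {(0, 0): 0.6, (1, 0): 0.4, (0, 1): 0.2, (1, 1): 0.8}  # f2(x2, x1)
f3 = {(0, 0): 0.5, (1, 0): 0.5, (0, 1): 0.1, (1, 1): 0.9}  # f3(x3, x2)

def joint(x1, x2, x3):
    # p(x) = prod_i f_i(x_i, x_{pi_i})
    return f1[x1] * f2[(x2, x1)] * f3[(x3, x2)]

def cond_x3_given_x2(x3, x2):
    # p(x3 | x2) computed directly from the joint, with no shortcuts
    num = sum(joint(a, x2, x3) for a in (0, 1))
    den = sum(joint(a, x2, b) for a in (0, 1) for b in (0, 1))
    return num / den

# The conditional recovered from the joint matches the factor f3 exactly.
for x2, x3 in itertools.product((0, 1), repeat=2):
    assert abs(cond_x3_given_x2(x3, x2) - f3[(x3, x2)]) < 1e-12
```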

problem 2

For a given variable Xi in a graphical model, what is the minimal set of nodes that renders Xi conditionally independent of all of the other variables? That is, what is the smallest set C, such that X_i\perp X_{V\setminus(\{i\}\cup C)}|X_C? (Note that V is the set of all nodes in the graph, so that V\setminus(\{i\}\cup C) is the set of all nodes in the graph, excluding C and i.)

  • Do the problem for an undirected graph.
  • Do the problem for a directed graph.

solution 1

a. In a directed graph, the relationship between any two adjacent nodes, Xi and Xj, is either parent-to-child or child-to-parent. Let us denote the set of parents of node Xi by X_{\pi_i}. A generic node can then be pictured as in Fig. 1, which shows all the possible relationships between node X4, its immediate neighbours, and the neighbours at depth one.

Fig.1 A directed graph!

From the graph in the figure we have:

\begin{align}
p(x) = p(X_1)p(X_2|X_1)p(X_3|X_2)p(X_4|X_2)p(X_5)p(X_6|X_4,X_5)p(X_7|X_6)
\end{align}

So, we can derive:

\begin{align}
p(X_1,X_2,X_4)&=p(X_1)p(X_2|X_1)p(X_4|X_2)\sum_{X_3}p(X_3|X_2)\sum_{X_5}p(X_5)\sum_{X_6}p(X_6|X_4,X_5)\sum_{X_7}p(X_7|X_6)\\
&=p(X_1)p(X_2|X_1)p(X_4|X_2)\\
\rightarrow p(X_4|X_1,X_2) &= \frac{p(X_1,X_2,X_4)}{p(X_1,X_2)}=\frac{p(X_1)p(X_2|X_1)p(X_4|X_2)}{p(X_1)p(X_2|X_1)}=p(X_4|X_2)~\rightarrow~X_4 \perp X_1 | X_2.
\end{align}

Also we have:

\begin{align}
p(X_2,X_3,X_4)&=p(X_4|X_2)p(X_3|X_2)\sum_{X_1}p(X_1)p(X_2|X_1)\sum_{X_5}p(X_5)\sum_{X_6}p(X_6|X_4,X_5)\sum_{X_7}p(X_7|X_6)\\
&=p(X_4|X_2)p(X_3|X_2)p(X_2)\\
\rightarrow p(X_4,X_3|X_2) &= \frac{p(X_2,X_3,X_4)}{p(X_2)}=\frac{p(X_4|X_2)p(X_3|X_2)p(X_2)}{p(X_2)}=p(X_4|X_2)p(X_3|X_2)~\rightarrow~X_4 \perp X_3 | X_2.
\end{align}

So, by induction, we can say that, given its parents, a node is conditionally independent of all of the nodes that come before it (recall that we assume parents always come before children).
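The independence X_4 \perp X_1 | X_2 derived above can also be checked numerically. The sketch below (an illustration, with arbitrarily chosen binary tables for the Fig. 1 factorization) enumerates the joint by brute force:

```python
import itertools

def bern(p):
    # conditional distribution {value: probability} for one binary variable
    return {0: 1.0 - p, 1: p}

# Arbitrary (hypothetical) tables consistent with the Fig. 1 factorization
p1, p5 = bern(0.4), bern(0.7)
p2_1  = {a: bern(0.3 + 0.4 * a) for a in (0, 1)}              # p(x2|x1)
p3_2  = {a: bern(0.2 + 0.5 * a) for a in (0, 1)}              # p(x3|x2)
p4_2  = {a: bern(0.6 - 0.3 * a) for a in (0, 1)}              # p(x4|x2)
p6_45 = {(a, b): bern(0.1 + 0.2 * a + 0.5 * b)
         for a in (0, 1) for b in (0, 1)}                     # p(x6|x4,x5)
p7_6  = {a: bern(0.25 + 0.5 * a) for a in (0, 1)}             # p(x7|x6)

def joint(x):
    x1, x2, x3, x4, x5, x6, x7 = x
    return (p1[x1] * p2_1[x1][x2] * p3_2[x2][x3] * p4_2[x2][x4]
            * p5[x5] * p6_45[(x4, x5)][x6] * p7_6[x6][x7])

def marg(keep):
    # marginal table over the 1-based variable indices in `keep`
    out = {}
    for x in itertools.product((0, 1), repeat=7):
        key = tuple(x[i - 1] for i in keep)
        out[key] = out.get(key, 0.0) + joint(x)
    return out

p124, p12 = marg([1, 2, 4]), marg([1, 2])
p24, p2   = marg([2, 4]), marg([2])
for x1, x2, x4 in itertools.product((0, 1), repeat=3):
    # p(x4 | x1, x2) == p(x4 | x2), i.e. X4 is independent of X1 given X2
    assert abs(p124[(x1, x2, x4)] / p12[(x1, x2)]
               - p24[(x2, x4)] / p2[(x2,)]) < 1e-12
```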

Now, if we check the independence of X4 from the nodes that have a path to X4 which does not go through the parents of X4, we will have:

\begin{align}
X_4 \perp X_7 | X_6,\\
X_4 \cancel{\perp} X_5 | X_6.
\end{align}

And if the children of X4 are not given, we will have:

\begin{align}
X_4 \cancel{\perp} X_7,\\
X_4 \perp X_5.
\end{align}

So, for the set C we can state:


C_{X_i}=\pi_{X_i}\cup\{x_k \mid i\in \pi_{x_k}\}\cup\bigcup_{k:\,i\in \pi_{x_k}}\left(\pi_{x_k}\setminus\{i\}\right),

that is, the parents of Xi, its children, and its children's other parents (the Markov blanket of Xi).

b. For an undirected graph, we will have:


C_{X_i}=\{j \mid j\mbox{ is an immediate neighbour of }i\}
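As an added check of the directed-graph answer in part a, the sketch below (using the same hypothetical tables for the Fig. 1 graph) verifies that conditioning on C = {X2, X5, X6}, the parents, children, and co-parents of X4, screens X4 off from all remaining variables:

```python
import itertools

def bern(p):
    return {0: 1.0 - p, 1: p}

# Arbitrary (hypothetical) tables consistent with the Fig. 1 factorization
p1, p5 = bern(0.4), bern(0.7)
p2_1  = {a: bern(0.3 + 0.4 * a) for a in (0, 1)}
p3_2  = {a: bern(0.2 + 0.5 * a) for a in (0, 1)}
p4_2  = {a: bern(0.6 - 0.3 * a) for a in (0, 1)}
p6_45 = {(a, b): bern(0.1 + 0.2 * a + 0.5 * b) for a in (0, 1) for b in (0, 1)}
p7_6  = {a: bern(0.25 + 0.5 * a) for a in (0, 1)}

def joint(x1, x2, x3, x4, x5, x6, x7):
    return (p1[x1] * p2_1[x1][x2] * p3_2[x2][x3] * p4_2[x2][x4]
            * p5[x5] * p6_45[(x4, x5)][x6] * p7_6[x6][x7])

states = list(itertools.product((0, 1), repeat=7))

def cond_x4(condition):
    # p(X4 = 1 | the event `condition`), by brute-force enumeration
    num = sum(joint(*x) for x in states if x[3] == 1 and condition(x))
    den = sum(joint(*x) for x in states if condition(x))
    return num / den

# Fix the blanket {X2 = 1, X5 = 0, X6 = 1}; additionally fixing the remaining
# variables (X1, X3, X7) must leave the conditional of X4 unchanged.
blanket = lambda x: x[1] == 1 and x[4] == 0 and x[5] == 1
base = cond_x4(blanket)
for x1, x3, x7 in itertools.product((0, 1), repeat=3):
    full = lambda x: blanket(x) and x[0] == x1 and x[2] == x3 and x[6] == x7
    assert abs(cond_x4(full) - base) < 1e-12
```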

problem 3

Consider the directed graphical model below.

Fig.2 problem 3 - graphical model

(a) Moralize the graph, and show the resulting undirected graphical model.

(b) Invoke UndirectedGraphEliminate on your moral graph, using the elimination ordering (7, 8, 9, 6, 3, 5, 4, 2, 1), and show the resulting reconstituted graph (the graph that includes all the edges added during the elimination process). You do not have to show the intermediate steps of the algorithm.

(c) Repeat (b), but using the elimination ordering (7, 6, 8, 9, 4, 3, 2, 5, 1).

(d) Which of the two elimination orderings, if used in Eliminate to calculate p(x1 | x7), do you think would result in a more time-efficient calculation? Which would result in a more space-efficient calculation? Briefly explain why.

solution 1

(a) Here is the resulting moralized undirected graph (Fig. 3).

Fig.3 moralized undirected graphical model

(b) Here is the resulting reconstituted graph, after a (7, 8, 9, 6, 3, 5, 4, 2, 1) order elimination. (Fig. 4)

Fig.4 Resulting reconstituted graph after (7, 8, 9, 6, 3, 5, 4, 2, 1) order elimination.

(c) Here is the resulting reconstituted graph, after a (7, 6, 8, 9, 4, 3, 2, 5, 1) order elimination. (Fig. 5)

Fig.5 Resulting reconstituted graph after (7, 6, 8, 9, 4, 3, 2, 5, 1) order elimination.

(d) It is more time-efficient to eliminate nodes with fewer edges first, since eliminating a node requires summing over all of its remaining neighbours; starting with the single-edge nodes therefore makes the first ordering more time-efficient. Space efficiency is governed by the size of the largest intermediate factor created during elimination, i.e. the largest clique in the reconstituted graph, so the ordering whose reconstituted graph has the smaller maximal clique is the more space-efficient one.
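The fill-in computation performed by UndirectedGraphEliminate can be sketched generically as below. The edge set at the bottom is a small hypothetical 4-cycle for illustration, not the graph of Fig. 2:

```python
def reconstituted(nodes, edges, order):
    """Return the original edges plus all fill-in edges added while
    eliminating the nodes in `order`."""
    adj = {v: set() for v in nodes}
    for a, b in edges:
        adj[a].add(b)
        adj[b].add(a)
    filled = {frozenset(e) for e in edges}
    removed = set()
    for v in order:
        nbrs = [u for u in adj[v] if u not in removed]
        # connect the remaining neighbours of v pairwise (fill-in edges)
        for i in range(len(nbrs)):
            for j in range(i + 1, len(nbrs)):
                a, b = nbrs[i], nbrs[j]
                adj[a].add(b)
                adj[b].add(a)
                filled.add(frozenset((a, b)))
        removed.add(v)
    return filled

# Eliminating node 1 of the 4-cycle 1-2-3-4 first adds the chord {2, 4};
# no other fill-in edges are needed for this ordering.
out = reconstituted([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)], [1, 2, 3, 4])
assert frozenset({2, 4}) in out and len(out) == 5
```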

problem 4

Consider an undirected tree. We know that it is not possible in general to parameterize a distribution on a tree using marginal probabilities as the potentials. It is, however, possible to parameterize such a distribution using ratios of marginal probabilities. In particular, let:

\begin{align}
\psi(x_i)&=p(x_i),\\
\psi(x_i,x_j)&=\frac{p(x_i,x_j)}{p(x_i)p(x_j)}.
\end{align}

where i and j are neighbours in the tree, and p(xi) and p(xi,xj) are a consistent set of marginal probabilities. Show that this setting of the potentials yields a parameterization of a joint probability distribution on the tree under which p(xi) and p(xi,xj) are marginals. What is Z under this parameterization?

solution 1

By the definition of a probability distribution, which must satisfy:

\begin{align}
0\le p(x) \le 1,~p(x)\in R,\mbox{ and},~\sum_{x}p(x)=1,
\end{align}

the proposed potential function, ψ, is real-valued and non-negative. These two properties are all that a potential function requires; beyond them, potentials can be chosen essentially arbitrarily, as long as they respect the independence structure of the random variables.

According to the given potentials and the definition of p(x) for an undirected graphical model, and noting that the maximal cliques of a tree are exactly its edges E, the joint probability distribution is:

\begin{align}
p(x)&=\frac{1}{Z}\prod_{i\in V}p(x_i)\prod_{(i,j)\in E}\frac{p(x_i,x_j)}{p(x_i)p(x_j)},\\
Z &= \sum_{x}\prod_{i\in V}p(x_i)\prod_{(i,j)\in E}\frac{p(x_i,x_j)}{p(x_i)p(x_j)}=1.
\end{align}

To see that Z = 1, sum out the variables one leaf at a time: for a leaf l with unique neighbour m, \sum_{x_l}p(x_l)\frac{p(x_l,x_m)}{p(x_l)p(x_m)}=\sum_{x_l}\frac{p(x_l,x_m)}{p(x_m)}=1, so each leaf contributes a factor of 1 and the remaining sum has the same form on the smaller tree. Summing out all variables except x_i (or except x_i and x_j) in the same way shows that p(x_i) and p(x_i,x_j) are indeed the marginals of this distribution.
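The claim can be checked numerically on a small hypothetical tree. The sketch below takes a 3-node path 1 - 2 - 3 whose joint is built from an arbitrary directed chain, computes the marginals, and verifies that the product of potentials reproduces the joint with Z = 1:

```python
import itertools

# Arbitrary joint on the path 1 - 2 - 3, built from a directed binary chain
p1   = {0: 0.3, 1: 0.7}
p2_1 = {0: {0: 0.6, 1: 0.4}, 1: {0: 0.2, 1: 0.8}}
p3_2 = {0: {0: 0.5, 1: 0.5}, 1: {0: 0.1, 1: 0.9}}

def joint(x1, x2, x3):
    return p1[x1] * p2_1[x1][x2] * p3_2[x2][x3]

states = list(itertools.product((0, 1), repeat=3))

# singleton and pairwise marginals, computed by summing the joint
m1  = {a: sum(joint(a, b, c) for b in (0, 1) for c in (0, 1)) for a in (0, 1)}
m2  = {b: sum(joint(a, b, c) for a in (0, 1) for c in (0, 1)) for b in (0, 1)}
m3  = {c: sum(joint(a, b, c) for a in (0, 1) for b in (0, 1)) for c in (0, 1)}
m12 = {(a, b): sum(joint(a, b, c) for c in (0, 1))
       for a in (0, 1) for b in (0, 1)}
m23 = {(b, c): sum(joint(a, b, c) for a in (0, 1))
       for b in (0, 1) for c in (0, 1)}

def param(x1, x2, x3):
    # product of singleton potentials and edge-ratio potentials (Z = 1)
    return (m1[x1] * m2[x2] * m3[x3]
            * m12[(x1, x2)] / (m1[x1] * m2[x2])
            * m23[(x2, x3)] / (m2[x2] * m3[x3]))

assert abs(sum(param(*s) for s in states) - 1.0) < 1e-12   # Z = 1
for s in states:
    assert abs(param(*s) - joint(*s)) < 1e-12   # marginals are reproduced
```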

problem 5

In this problem we will work through the mechanics of using Eliminate for inference in a directed graphical model. Consider the directed graphical model below over the binary variables X1,X2,X3, and X4.

Fig.6 problem 5

Let p(X1 = 0) = p(X1 = 1) = 0.5. For the local conditional probabilities, p(x_{i+1} | x_i), use the following matrix (rows indexed by x_{i+1}, columns by x_i):

              x_i = 0   x_i = 1
x_{i+1} = 0     0.6       0.2
x_{i+1} = 1     0.4       0.8

We will use eliminate to compute p(X1 = 0 | X4 = 1).

(a) Write out the summations involved in carrying out the elimination algorithm to find p(x1,x4) and p(x4), as in Equations (3.8) and (3.15) in Chapter 3 of the text. Use the elimination ordering (4,3,2,1).

(b) Using eliminate (with x4 = 1), write down (as tables) the resulting intermediate terms, and p(x_1,\bar{x}_4).

(c) Compute p(X1 = 0 | X4 = 1).

(d) You can alternatively do parts (a), (b), and (c) using Matlab or Splus.

solution 1

\begin{align}
p(x) &= p(X_1)p(X_2|X_1)p(X_3|X_2)p(X_4|X_3). \\
\rightarrow p(X_1,X_4=1)&=p(X_1)\sum_{X_2}p(X_2|X_1)\sum_{X_3}p(X_3|X_2)p(X_4=1|X_3) \\
&= p(X_1)\sum_{X_2}p(X_2|X_1)\underbrace{(0.4p(X_3=0|X_2)+0.8p(X_3=1|X_2))}_{m_3(x_2)} \\
&= p(X_1)\underbrace{[(0.4*0.6+0.8*0.4)p(X_2=0|X_1)+(0.4*0.2+0.8*0.8)p(X_2=1|X_1)]}_{m_2(x_1)} \\
&= p(X_1)(0.56p(X_2=0|X_1)+0.72p(X_2=1|X_1)), \\
\rightarrow p(X_1=0,X_4=1)&=0.5(0.56*0.6+0.72*0.4)=0.312,\\
\rightarrow p(X_4=1)&=\sum_{X_1}p(X_1,X_4=1) \\
&= 0.5(0.56*(0.6+0.2)+0.72*(0.4+0.8))=0.656,\\
\rightarrow p(X_1=0|X_4=1)&=\frac{p(X_1=0,X_4=1)}{p(X_4=1)}\\
&=\frac{0.312}{0.656}\approx 0.476.
\end{align}
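The elimination can be replayed numerically as a quick cross-check (an added sketch; note that 0.312/0.656 ≈ 0.476):

```python
# A[a][b] = p(x_{i+1} = a | x_i = b), read off the factors in the derivation
A  = [[0.6, 0.2],
      [0.4, 0.8]]
p1 = [0.5, 0.5]

# eliminate X3: m3(x2) = sum_{x3} p(x3 | x2) p(X4 = 1 | x3)
m3 = [sum(A[x3][x2] * A[1][x3] for x3 in (0, 1)) for x2 in (0, 1)]
# eliminate X2: m2(x1) = sum_{x2} p(x2 | x1) m3(x2)
m2 = [sum(A[x2][x1] * m3[x2] for x2 in (0, 1)) for x1 in (0, 1)]

p_x1_x4   = [p1[x1] * m2[x1] for x1 in (0, 1)]   # p(x1, X4 = 1)
p_x4      = sum(p_x1_x4)                         # p(X4 = 1)
posterior = p_x1_x4[0] / p_x4                    # p(X1 = 0 | X4 = 1)

assert abs(m3[0] - 0.56) < 1e-9 and abs(m3[1] - 0.72) < 1e-9
assert abs(p_x1_x4[0] - 0.312) < 1e-9
assert abs(p_x4 - 0.656) < 1e-9
assert abs(posterior - 0.312 / 0.656) < 1e-9
```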

solution 2 (Code)

# Programming Language: R
# In this code I tried to solve the above question using only operations over the CPT matrices
# the last portion of the code shows how to use the transfer matrix method to compute 
# the marginal.
# X41: P(X4,X1)
# X4: P(X4)
# Data Model
transitionMatrix <- matrix(c(.6, .2, .4, .8), ncol=2, byrow=TRUE)
X1 <- c(.5,.5)
X21 <- transitionMatrix
X32 <- transitionMatrix
X43 <- transitionMatrix

binaryFactorsProduct <- function(f1, f2) {
  # expect two factors of the form
  # f1: A * B, F2: B * C
  # where dom(A,B,C ) \in {0,1}
  # and return an array of the form B * (A * C)

  newFactor <- array(0, c(2,2,2))
  newFactor[,,1] <- as.matrix(f1[,1]) %*% f2[1,]
  newFactor[,,2] <- as.matrix(f1[,2]) %*% f2[2,]

  return(newFactor)
}

binaryFactorsSumOut <- function(f) {
  #expect factor of the form B * (A * C)
  # return new factor 
  return(f[,,1] + f[,,2])
}

m42 <- binaryFactorsSumOut(binaryFactorsProduct(X43,X32))
m41 <- binaryFactorsSumOut(binaryFactorsProduct(m42, X21))
X41 <- sweep(m41, 2, X1, "*")  # scale column j by p(X1 = j - 1); note m41 * X1
                               # would scale rows and is only correct here
                               # because X1 happens to be uniform

#########################################
# Computing the marginal p(X4) using the transfer matrix method
X4 <- (X43 %*% (X32 %*% (X21 %*% X1)))
#########################################

Problem 6

For the two graphs in Figure 1, determine which variables can be reached using the Bayes ball algorithm if we start at node A.

Fig.1 Bayes ball examples.


Solution 1

First graph: Starting from node A, the ball passes to nodes B & C. Node B blocks the ball, while node C passes the ball to nodes E & F. In turn, node E passes the ball to node G which passes the ball to nodes I & D. Node F passes the ball to node H which passes the ball to the already visited node, I. Therefore all nodes are visited.

More formally, we can say that the variables reached using the Bayes ball algorithm are:

A, B, C, E, F, G, H, D, I.


Second graph: Again, we start from node A, which passes the ball to node G. Since G receives the ball from a parent, it can only pass the ball to its children, hence G passes the ball to J. J is shaded, so it bounces the ball back to its parent G. This time G receives the ball from a child and can pass it anywhere; the ball then goes to node D, which passes it to nodes B & H. Node H has no children and cannot pass the ball further, while node B passes the ball to node E, which passes it to node I. Node I has no children, and this is where the Bayes ball stops.

We can now list the variables reached using the Bayes ball algorithm in the second graph as:

A, G, J, D, B, H, E, I.
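The traversal rules used in these two walkthroughs can be sketched generically. The two small graphs at the bottom are hypothetical examples (a chain and a collider), not the graphs of Figure 1:

```python
from collections import deque

def bayes_ball(parents, start, observed):
    """Nodes the ball reaches from `start` under the Bayes-ball rules:
    an unobserved node passes a ball arriving from a parent on to its
    children, and a ball arriving from a child to both its parents and
    its children; an observed node bounces a ball arriving from a parent
    back to its parents and blocks a ball arriving from a child."""
    children = {v: [] for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children[p].append(v)
    reached = {start}
    # queue entries: (node, True if the ball arrived from a child)
    queue = deque([(c, False) for c in children[start]]
                  + [(p, True) for p in parents[start]])
    seen = set()
    while queue:
        v, from_child = queue.popleft()
        if (v, from_child) in seen:
            continue
        seen.add((v, from_child))
        reached.add(v)
        if v in observed:
            if not from_child:                      # bounce back upward
                queue.extend((p, True) for p in parents[v])
        else:
            queue.extend((c, False) for c in children[v])
            if from_child:                          # also pass upward
                queue.extend((p, True) for p in parents[v])
    return reached

# Chain A -> B -> C with B observed: the ball stops at B, so A ⊥ C | B.
chain = {'A': [], 'B': ['A'], 'C': ['B']}
assert 'C' not in bayes_ball(chain, 'A', {'B'})
# Collider A -> C <- B with C observed: the ball bounces up to B.
collider = {'A': [], 'B': [], 'C': ['A', 'B']}
assert 'B' in bayes_ball(collider, 'A', {'C'})
```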
