Latent Dirichlet Allocation using Gibbs Sampling

Yuncheng Li
Computer Science, University of Rochester

Apr. 30, 2014, BST512 Final Project

Agenda

Motivation — An Example

Example articles from Associated Press


Law/Legislation

Politics and Election

Business

Police/Cases

Military/Army

What’s LDA for?

From Wikipedia: LDA … "allows sets of observations to be explained by unobserved groups that explain why some parts of the data are similar."

Other Applications

Notations

Plate model for LDA

The notation follows "Integrating Out Multinomial Parameters in Latent Dirichlet Allocation and Naive Bayes for Collapsed Gibbs Sampling".

| Variable | Meaning | Example value / size |
|---|---|---|
| \(M \in \mathbb{N}_{+}\) | number of documents | 3,000 |
| \(N_m \in \mathbb{N}_{+}\) | number of words in the \(m\)-th document | 100 |
| \(J\) | number of unique words (vocabulary size) | 20,000 |
| \(K\) | number of topics (predefined) | 50 |
| \(y_{m,n} \in 1:J\) | \(n\)-th word of the \(m\)-th document | 300K vector |
| \(z_{m,n} \in 1:K\) | topic assigned to \(y_{m,n}\) | 300K vector |
| \(\theta_m \in [0,1]^K\) | topic distribution for document \(m\) | 3,000 × 50 matrix, 150K |
| \(\phi_k \in [0,1]^J\) | word distribution for topic \(k\) | 20,000 × 50 matrix, 1M |
| \(\alpha \in \mathbb{R}_{+}^K\) | Dirichlet prior for \(\theta_m\) | 0.01 |
| \(\beta \in \mathbb{R}_{+}^J\) | Dirichlet prior for \(\phi_k\) | 0.05 |

Generative Process

Plate model for LDA

  1. Draw word distribution \(\phi_k \sim \text{Dir}(\beta)\) for each topic \(k\)
  2. Draw topic distribution \(\theta_m \sim \text{Dir}(\alpha)\) for each document \(m\)
  3. For each word slot \(n\) in document \(m\), draw a topic \(z_{m,n} \sim \text{Multi}(\theta_m)\).
  4. For each word slot \(n\) in document \(m\), draw a word \(y_{m,n} \sim \text{Multi}(\phi_{z_{m,n}})\).

Inference goal: given \(y_{m,n}\), fit \(\phi_k\), \(\theta_m\) and \(z_{m,n}\)
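
As a concrete illustration, here is a minimal R sketch of this generative process on a toy corpus (the sizes and variable names are illustrative assumptions, not the project code):

# Draw from a Dirichlet via normalized Gamma variates
rdirichlet <- function(alpha) {
    x <- rgamma(length(alpha), shape = alpha)
    x / sum(x)
}

K <- 5; J <- 1000; M <- 50; N <- 100        # topics, vocabulary, documents, words per document
alpha <- rep(0.1, K); beta <- rep(0.05, J)  # symmetric Dirichlet priors

phi   <- t(sapply(1:K, function(k) rdirichlet(beta)))   # K x J word distributions (step 1)
theta <- t(sapply(1:M, function(m) rdirichlet(alpha)))  # M x K topic distributions (step 2)

z <- matrix(0L, M, N)  # topic assignments
y <- matrix(0L, M, N)  # observed words
for (m in 1:M) {
    for (n in 1:N) {
        z[m, n] <- sample.int(K, 1, prob = theta[m, ])      # step 3
        y[m, n] <- sample.int(J, 1, prob = phi[z[m, n], ])  # step 4
    }
}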

Some important counts,

  1. \(c_{k,j}\) denotes how many times word \(j\) is assigned to topic \(k\), across all documents
  2. \(c_{k,m}\) denotes how many words in document \(m\) are assigned to topic \(k\)
  3. \(c_k\) denotes how many words are assigned to topic \(k\) in the entire corpus. \[ c_k = \sum_j c_{k,j} = \sum_m c_{k,m} \]

Once we have a draw of the assignments \(z_{m,n}\), we can compute these counts efficiently.
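
For example, continuing the toy sketch above (again an illustration, not the project implementation), the counts follow from a single pass over \(z\) and \(y\):

c.km <- matrix(0L, K, M)  # c_{k,m}: words in document m assigned to topic k
c.kj <- matrix(0L, K, J)  # c_{k,j}: times word j is assigned to topic k
for (m in 1:M) {
    for (n in 1:N) {
        k <- z[m, n]; j <- y[m, n]
        c.km[k, m] <- c.km[k, m] + 1L
        c.kj[k, j] <- c.kj[k, j] + 1L
    }
}
c.k <- rowSums(c.kj)                  # c_k: total words assigned to topic k
stopifnot(all(c.k == rowSums(c.km)))  # the identity above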

Posteriors

Plate model for LDA

\[ p(\phi, \theta, z | y, \alpha, \beta) \propto p(\phi|\beta) p(\theta|\alpha) p(z|\theta) p(y|\phi, z) \]

Collapsed Gibbs Sampling

Plate model for LDA

\[ p(z_{m,n} = \mathbb{k}| z_{-(m,n)}, y, \alpha, \beta) \propto \frac{(c_{\mathbb{k}, m}^{-(m,n)} + \alpha) \times (c_{\mathbb{k}, y_{m,n}}^{-(m,n)} + \beta)}{c_{\mathbb{k}}^{-(m,n)} + J * \beta} \]
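
(For completeness: this update comes from integrating \(\phi\) and \(\theta\) out of the posterior above. With the symmetric scalar priors \(\alpha\) and \(\beta\) used here, the collapsed joint is the standard Dirichlet-multinomial result, as in the reference above, \[ p(y, z | \alpha, \beta) = \prod_{m=1}^{M} \frac{\Gamma(K\alpha)}{\Gamma(N_m + K\alpha)} \prod_{k=1}^{K} \frac{\Gamma(c_{k,m} + \alpha)}{\Gamma(\alpha)} \times \prod_{k=1}^{K} \frac{\Gamma(J\beta)}{\Gamma(c_k + J\beta)} \prod_{j=1}^{J} \frac{\Gamma(c_{k,j} + \beta)}{\Gamma(\beta)} \] and the conditional above is the ratio of this joint with and without the single assignment \(z_{m,n}\), keeping only the factors that depend on \(\mathbb{k}\) and using \(\Gamma(x+1) = x\,\Gamma(x)\).)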

Algorithm

Plate model for LDA

\[ p(z_{m,n} = \mathbb{k}| z_{-(m,n)}, y, \alpha, \beta) \propto \frac{(c_{\mathbb{k}, m}^{-(m,n)} + \alpha) \times (c_{\mathbb{k}, y_{m,n}}^{-(m,n)} + \beta)}{c_{\mathbb{k}}^{-(m,n)} + J * \beta} \]

  1. Input: \(y_{m,n}\)
  2. Random init \(z_{m,n}\)
  3. Init the counts \(c_{k,m}, c_{k,j}, c_{k}\) from \(z_{m,n}\)
  4. for i = 1:niter
    • for each (m, n) do,
    1. \(j \leftarrow y_{m,n}\)
    2. \(\mathbb{k} \leftarrow z_{m,n}\)
    3. decrement the counts: \(c_{\mathbb{k},m}\) -= 1, \(c_{\mathbb{k},j}\) -= 1, \(c_{\mathbb{k}}\) -= 1 (these are exactly \(c_{k,m}^{-(m,n)}, c_{k,j}^{-(m,n)}, c_k^{-(m,n)}\))
    4. compute \(p(z_{m,n} = \mathbb{k}|...)\) for each \(\mathbb{k}\) using the equation above
    5. draw a new topic \(\mathbb{k}\) from this distribution and set \(z_{m,n} \leftarrow \mathbb{k}\)
    6. increment the counts: \(c_{\mathbb{k},m}\) += 1, \(c_{\mathbb{k},j}\) += 1, \(c_{\mathbb{k}}\) += 1.
  5. Output: \(z_{m,n}, c_{k,m}, c_{k,j}\) and \(c_{k}\)
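
A minimal R sketch of one sweep of this algorithm, continuing the toy variables and counts from the earlier sketches, with scalar symmetric priors a and b (the project's actual sampler uses Rcpp/RcppArmadillo; this is only an illustration):

gibbs.sweep <- function(z, y, c.km, c.kj, c.k, a, b) {
    K <- nrow(c.kj); J <- ncol(c.kj)
    for (m in 1:nrow(z)) {
        for (n in 1:ncol(z)) {
            j <- y[m, n]; k <- z[m, n]
            # remove the current assignment from the counts (the -(m,n) counts)
            c.km[k, m] <- c.km[k, m] - 1L
            c.kj[k, j] <- c.kj[k, j] - 1L
            c.k[k] <- c.k[k] - 1L
            # full conditional p(z_{m,n} = k | ...) up to a normalizing constant
            p <- (c.km[, m] + a) * (c.kj[, j] + b) / (c.k + J * b)
            k <- sample.int(K, 1, prob = p)
            # record the new assignment and restore the counts
            z[m, n] <- k
            c.km[k, m] <- c.km[k, m] + 1L
            c.kj[k, j] <- c.kj[k, j] + 1L
            c.k[k] <- c.k[k] + 1L
        }
    }
    list(z = z, c.km = c.km, c.kj = c.kj, c.k = c.k)
}

# run a number of sweeps on the toy data
state <- list(z = z, c.km = c.km, c.kj = c.kj, c.k = c.k)
for (i in 1:100) {
    state <- gibbs.sweep(state$z, y, state$c.km, state$c.kj, state$c.k, a = 0.1, b = 0.05)
}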

Implementations

Results on real datasets

News articles from the Associated Press, as used in the original LDA paper.

source("load.data.R")
source("./gibbs.vis.stat.R")
## Loading required package: foreach
## Loading required package: iterators
## Loading required package: snow
## Loading required package: methods

dataset.ap <- load.ap.data()
A <- gibbs.vis.stat(dataset.ap, do.save = F, param.id = 100, do.example = T)
## data/perp.res-38c9d69773382496b55398092ba43b28-90-1000-4-0.9.rdb
## Loading required package: Rcpp
## Loading required package: RcppArmadillo
## Loading required package: lattice
##
##
## Top words for top topics
##       [,1]         [,2]        [,3]     [,4]        [,5]     [,6]
##  [1,] "government" "new"       "i"      "police"    "i"      "court"
##  [2,] "officials"  "years"     "years"  "two"       "think"  "trial"
##  [3,] "president"  "year"      "people" "people"    "dont"   "charges"
##  [4,] "two"        "million"   "two"    "killed"    "people" "judge"
##  [5,] "official"   "last"      "like"   "city"      "going"  "case"
##  [6,] "meeting"    "people"    "day"    "man"       "just"   "attorney"
##  [7,] "last"       "two"       "just"   "three"     "know"   "prison"
##  [8,] "told"       "program"   "first"  "shot"      "get"    "years"
##  [9,] "states"     "officials" "time"   "officials" "say"    "convicted"
## [10,] "foreign"    "national"  "home"   "injured"   "time"   "guilty"
##       [,7]       [,8]         [,9]           [,10]
##  [1,] "percent"  "party"      "dukakis"      "company"
##  [2,] "year"     "government" "bush"         "million"
##  [3,] "rate"     "communist"  "campaign"     "stock"
##  [4,] "rates"    "political"  "jackson"      "billion"
##  [5,] "economy"  "opposition" "democratic"   "offer"
##  [6,] "prices"   "minister"   "convention"   "inc"
##  [7,] "report"   "new"        "presidential" "corp"
##  [8,] "economic" "leader"     "vice"         "share"
##  [9,] "increase" "republics"  "republican"   "new"
## [10,] "price"    "democratic" "president"    "agreement"
##
##
## Top documents for topics
##       [,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8] [,9] [,10]
##  [1,]   38 1666 2026  573 1586  428  821  559  316  1477
##  [2,] 1653 2171  914 1434 1748 1557  370 2098 2029  1091
##  [3,]  675  949 1505 1878  227  985 1550 2160 2066   561
##  [4,] 1127   66    5  973  333   82  944  132  919    47
##  [5,] 1281  266 1903  138 2094 1414  219  103  866  2183
##  [6,] 1809 1858  828 1508 1290  713 2196  929 2065   537
##  [7,]  611 2058   54 1629 1301  644 1322  832  421   915
##  [8,] 1259  592  480  152 1959 1677   15 2143  517   165
##  [9,] 1741  218 1320 1265 1663  213 1452  753    8  1854
## [10,] 2049 1968 1217  209 2195  665 1621  595 1960  1379

Examples

We can make the following observations from an extensive list of example documents for the discovered topics, with keywords highlighted. (Note that these topics cannot be automatically named by this method.)

Traceplots

We can make the following observations from the traceplots.

One trace looks strange: its parameter values are larger than 1.

A <- gibbs.vis.stat(dataset.ap, do.save = F, param.id = 100, do.trace = T)
## data/perp.res-38c9d69773382496b55398092ba43b28-90-1000-4-0.9.rdb

plot of chunk ap-traceplots (16 panels)

Perplexity

Perplexity is essentially a transformed held-out likelihood, though many different definitions exist for the LDA model. I computed perplexity as follows, \[ \text{Perplexity}(y^{un.obs}_{m,n} | \phi, \theta) = \exp\left( -\frac{\sum_{m, n} \log p(y_{m,n}^{un.obs} | \phi, \theta)}{MN}\right) \] \[ p(y_{m,n} | \phi, \theta) = \sum_{k=1}^K \phi_{k, y_{m,n}} \theta_{m,k} \] For each document, a number of words are held out as unobserved and the rest are treated as observed. The observed words are used to infer \(\phi_k\) and \(\theta_m\). Using point estimates of \(\theta_m\) and \(\phi_k\) from the Gibbs draws, we can then compute the perplexity of the unobserved words.
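
As a sketch of this computation (the names are illustrative: phi is a K × J matrix, theta is M × K, and y.heldout is a list with one vector of held-out word indices per document; the average is taken over the total number of held-out words, the MN in the formula):

perplexity <- function(phi, theta, y.heldout) {
    loglik <- 0; n.words <- 0
    for (m in seq_along(y.heldout)) {
        for (j in y.heldout[[m]]) {
            # p(y | phi, theta) = sum_k phi[k, j] * theta[m, k]
            loglik <- loglik + log(sum(phi[, j] * theta[m, ]))
            n.words <- n.words + 1
        }
    }
    exp(-loglik / n.words)
}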

Note that we can control the ratio of observed to unobserved words to see what percentage of words we need for a good fit; this is evaluated on the next page.

As the definition indicates, smaller perplexity means the fitted model explains the unobserved data better. The following is a plot of perplexity over iterations.

A <- gibbs.vis.stat(dataset.ap, do.save = F, param.id = 100, do.plot = T)
## data/perp.res-38c9d69773382496b55398092ba43b28-90-1000-4-0.9.rdb
plot of chunk ap-perp

plot of chunk ap-perp

As we can see, perplexity decreases very quickly to its converging value; 1,000 iterations are enough.

Hyper-Parameters

The following is of interest:

A <- gibbs.vis.stat(dataset.ap, do.save = T)
## loading data/multi.run-38c9d69773382496b55398092ba43b28.rdb
plot of chunk ap-param

plot of chunk ap-param

Here are our observations,

Simulations

Plate model for LDA

  1. Draw true \(\theta\) and \(\phi\) from the Dirichlet prior, according to predefined \(K\), \(M\) and \(N\).
  2. Generate topics and words according to \(\theta\) and \(\phi\)
  3. Infer \(\hat{\theta}\) (the last draw)
  4. Compare \(\theta\) and \(\hat{\theta}\), and report the bias.

A problem with comparing \(\theta\) and \(\hat{\theta}\) directly is non-identifiability: \(\hat{\theta}\) can be an arbitrary permutation of \(\theta\).

In mixture models, one common practice is to find an optimal relabeling (a greedy version is sketched below). Another approach is to measure affinity correlations.
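
For illustration only, here is a greedy relabeling sketch in R (not guaranteed to be optimal, and not the approach used in this project): match estimated topics to true topics by the similarity of their word distributions, assuming phi and phi.hat are K × J matrices.

relabel.topics <- function(phi, phi.hat) {
    K <- nrow(phi)
    sim <- cor(t(phi), t(phi.hat))  # K x K similarity between true and estimated topics
    perm <- integer(K)
    for (i in 1:K) {
        best <- which(sim == max(sim), arr.ind = TRUE)[1, ]
        perm[best["row"]] <- best["col"]  # true topic 'row' matched to estimated topic 'col'
        sim[best["row"], ] <- -Inf        # exclude the matched pair from later rounds
        sim[, best["col"]] <- -Inf
    }
    perm  # perm[k] is the estimated topic matched to true topic k
}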

Affinity correlation

(Not a formal term; the idea comes from a mixture model homework.)

Given \(\theta\), we can compute the distance between each pair of documents \(m_1, m_2\) as \[ d_\theta(m_1, m_2) = \sqrt{\sum_{k=1}^{K} (\theta_{m_1, k} - \theta_{m_2, k})^2}, \] which can be seen as the distance between \(m_1\) and \(m_2\) in the topic space.

Affinity correlation: \(\text{cor}(d_\theta, d_{\hat{\theta}})\), the correlation between the pairwise distances under \(\theta\) and under \(\hat{\theta}\).

The basic idea is that if two documents are close in the ground-truth topic space \(\theta\), they should also be close in the estimated topic space \(\hat{\theta}\).

A GMM can be used to explain the same idea.

This correlation is treated as a pseudo-bias to evaluate the simulations.
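
A minimal R sketch of this pseudo-bias, assuming theta and theta.hat are M × K matrices of true and estimated topic proportions:

affinity.cor <- function(theta, theta.hat) {
    d.true <- dist(theta)      # pairwise Euclidean distances in the true topic space
    d.est  <- dist(theta.hat)  # pairwise distances in the estimated topic space
    cor(as.vector(d.true), as.vector(d.est))
}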

Simulation results #1:

The number of topics is 50, which is relatively large.

numTopics <- 50
sizeVocab <- 10000
numDocs <- 5000
averageDocumentLength <- 100
averageWordsPerTopic <- 10

n.rep <- 5

for (rep.i in 1:n.rep) {
    dataset.sim.50 <- load.sim.data(numTopics, sizeVocab, numDocs, averageDocumentLength,
        averageWordsPerTopic, rep.i)

    A <- gibbs.vis.stat(dataset.sim.50, n.save = 100, do.save = T)
}
## loading data/multi.run-18dfdad9950cad221634b5ff19d37787.rdb

plot of chunk sim-data plot of chunk sim-data

## loading data/multi.run-bc560150094a12f4ecdf523deb9d832a.rdb

plot of chunk sim-data plot of chunk sim-data

## loading data/multi.run-fb9310067831d871a0959f68b6335e60.rdb

plot of chunk sim-data plot of chunk sim-data

## loading data/multi.run-803b43a568a33810d0369d1036260aca.rdb

plot of chunk sim-data plot of chunk sim-data

## loading data/multi.run-c316c230871bf7f26b5dc79ce6b1ade8.rdb

plot of chunk sim-data plot of chunk sim-data

Simulation results #2:

A smaller number of topics and a smaller model size are chosen.

numTopics <- 5
sizeVocab <- 1000
numDocs <- 500
averageDocumentLength <- 100
averageWordsPerTopic <- 10

n.rep <- 5

for (rep.i in 1:n.rep) {
    dataset.sim.5 <- load.sim.data(numTopics, sizeVocab, numDocs, averageDocumentLength,
        averageWordsPerTopic, rep.i)

    A <- gibbs.vis.stat(dataset.sim.5, n.save = 100, do.save = T)
}
## loading data/multi.run-34f3ffc230fdf34401f6c4e17cf29ce8.rdb

plot of chunk sim-2 plot of chunk sim-2

## loading data/multi.run-6acf960f82d2d04fe59624a5ef8bb09f.rdb

plot of chunk sim-2 plot of chunk sim-2

## loading data/multi.run-bd977582a30dacde9f5cce173bb8892a.rdb

plot of chunk sim-2 plot of chunk sim-2

## loading data/multi.run-9a52e401ff5844a78f034fcab80be4d3.rdb

plot of chunk sim-2 plot of chunk sim-2

## loading data/multi.run-2f923423610e24933d0e1519d83d6f08.rdb

plot of chunk sim-2 plot of chunk sim-2

Simulations summary

Future work

Packages and References

All source code for this project is posted online: https://github.com/raingo/topicmodel

Thanks

Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.