Description: In this exercise, we apply basic counts and some association statistics to a small corpus. We will count unigrams, count bigrams, compute mutual information, and compute likelihood ratio statistics.
Turn in: written answers to all questions marked QUESTION.
Credits: The scripts used in this exercise, written by Philip Resnik, were originally derived from Ken Church's "NGRAMS" tutorial at ACL-1995. The likelihood ratio code was adapted from code written by Ted Dunning.
Prerequisites: This exercise assumes basic familiarity with typical Unix commands, and the ability to create text files (e.g. using a text editor such as vi or emacs). No programming is required.
Platforms: The executable files run under the Solaris operating system. C programs can generally be recompiled on different platforms (e.g. Linux) using cc -o filename filename.c. On most Unix platforms you can execute uname -sr to see what operating system you're running.
Notational Convention: The symbols <== will be used to identify a comment from the instructor, on lines where you're typing something in. So, for example, in
% cp file1.txt file2.txt <== The "cp" is short for "copy"
what you're supposed to type at the prompt (identified by the percent sign, here) is
cp file1.txt file2.txt
followed by a carriage return.
(Note that the prompt % may be different on your system; for example, another frequently seen prompt is [thismachine], where "thismachine" is the name of your workstation, or simply ">".)
% mkdir stats          <== Create a subdirectory called "stats"
% cd stats             <== Go into that directory
% tar xvf ngrams.tar   <== Extract code from the file
% rm ngrams.tar        <== Delete to conserve space
% cd ngrams            <== Go into ngrams directory
% chmod u+x *.pl       <== Make perl scripts executable
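To confirm the extraction worked, list the directory contents; you should see, among other things, the Stats program, the corpora subdirectory, and the perl scripts the chmod just touched:
% ls -F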
% more corpora/GEN.EN
(Type spacebar for more pages, and "q" for "quit".) This contains an annotated version of the book of Genesis, King James version. It is a small corpus, by current standards -- somewhere on the order of 40,000 or 50,000 words. QUESTION: What words (unigrams) would you expect to have high frequency in this corpus? What bigrams do you think might be frequent?
Now create a subdirectory to hold the output of the analysis:
% mkdir genesis
Then run the Stats program to analyze the corpus. The program requires an input file, and a "prefix" to be used in creating output files. The input file will be corpora/GEN.EN, and the prefix will be genesis/out, so that output files will be created in the genesis subdirectory. That is, you should execute the following:
% Stats corpora/GEN.EN genesis/out
The program will tell you what it's doing as it counts unigrams, counts bigrams, computes mutual information, and computes likelihood ratio statistics. Depending on the machine you're working on, this may take differing amounts of time to run, but it should be less than 5 minutes on most machines.
(If for some reason you were unable to run the Stats program, you can extract pre-computed output instead:)
% tar xvf ngram_out.tar
% cd genesis
% more out.unigrams
Seeing the vocabulary in alphabetical order isn't very useful, so let's sort the file by unigram frequency, from highest to lowest:
% sort -nr out.unigrams > out.unigrams.sorted
% more out.unigrams.sorted
Now examine out.unigrams.sorted. Note that v (verse), c (chapter), id, and GEN are part of the markup in file GEN.EN, for identifying verse boundaries. QUESTION: Other than those (which are a good example of why we need better pre-processing to handle markup), are the high frequency words what you would expect?
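When answering, you may find it convenient to look at just the top of the sorted list; for example:
% head -20 out.unigrams.sorted   <== The 20 most frequent words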
Now do the same thing for the bigrams:
% sort -nr out.bigrams > out.bigrams.sorted
% more out.bigrams.sorted
Low-frequency bigrams (bigram count less than 5) were excluded.
As an exercise, compute mutual information by hand for the first bigram on the list, "savoury meat". Recall that
I(x,y) = log2[ p(x,y) / (p(x)p(y)) ]
and that the simplest estimates of probabilities, the maximum likelihood estimates, are given by
p(x) = freq(x)/N
p(y) = freq(y)/N
p(x,y) = freq(x,y)/N
where N is the number of observed words in the corpus. You can find this value by counting the words in file out.words; it should match what you get by summing the frequencies in either out.unigrams or out.bigrams.
% wc -l out.words
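As a cross-check on N, you can also sum the counts directly. This is a sketch, assuming (as the sort -nr above already did) that the count is the first whitespace-separated field of each line in out.unigrams:
% awk '{ sum += $1 } END { print sum }' out.unigrams   <== Should match wc -l out.words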
You can get a calculator on your screen on some systems (at least SunOS and Solaris) by executing:
% xcalc &
Here's a sequence you can use to do the calculation:
Compute p(savoury) = freq(savoury)/N
Compute p(meat) = freq(meat)/N
Compute p(savoury,meat) = freq(savoury,meat)/N
Compute p(savoury)p(meat) = p(savoury) * p(meat)
Divide p(savoury,meat) by this value
Take the log of the result (which in xcalc is log to the base 10)
Convert that result to log base 2 by dividing by 0.30103; this uses the fact that for all M, N: logM(x) = logN(x)/logN(M).
At some point, the calculator may give you scientific notation for a number. If you need to enter a number in scientific notation, you use EE:
EE is used for entering exponential numbers. For example, to get "-2.3E-4" you'd enter "2 . 3 +/- EE 4 +/-".
The number you come up with should be close to the mutual information reported in out.mi. It may be slightly different because your calculation used different precision than the program's.
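If you'd rather have the machine do the arithmetic, here is a minimal sketch using awk, assuming a Bourne-compatible shell (sh, ksh, bash) and, again, that the count is the first field in out.unigrams and out.bigrams (on older Solaris systems, use nawk if awk rejects the -v option):
% N=`wc -l < out.words`                                   <== N, the corpus size
% fx=`awk '$2 == "savoury" { print $1 }' out.unigrams`    <== freq(savoury)
% fy=`awk '$2 == "meat" { print $1 }' out.unigrams`       <== freq(meat)
% fxy=`awk '$2 == "savoury" && $3 == "meat" { print $1 }' out.bigrams`
% awk -v fx=$fx -v fy=$fy -v fxy=$fxy -v n=$N 'BEGIN { print log((fxy/n) / ((fx/n) * (fy/n))) / log(2) }'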
Note the following identities:
log(a * b) = log(a) + log(b)
log(a / b) = log(a) - log(b)
Try converting the formula for mutual information using these identities, so that probabilities are never multiplied or divided, before reading further.
Solution: log[ p(x,y) / (p(x)p(y)) ] = log p(x,y) - log p(x) - log p(y)
To really get a feel for things, first substitute in the maximum likelihood estimates and then convert to using log probabilities, i.e.
log[ (freq(x,y)/N) / ((freq(x)/N)(freq(y)/N)) ]
which simplifies to log freq(x,y) + log N - log freq(x) - log freq(y).
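To convince yourself the log-only form gives the same answer, you can evaluate it the same way; the counts below are made-up placeholders, so substitute the ones you actually found for "savoury meat":
% awk -v fx=10 -v fy=12 -v fxy=8 -v n=45000 'BEGIN { print (log(fxy) + log(n) - log(fx) - log(fy)) / log(2) }'   <== fx, fy, fxy, n are placeholder values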
Now look at out.lr, which contains the likelihood ratio statistics. You will probably notice that many high-scoring bigrams involve common function words ("stopwords"). The ngrams directory contains a script, filter_stopwords, which removes bigrams containing the words listed in the file stop.wrd. From inside the genesis directory, make the script and the word list visible via symbolic links:
% ln -s ../filter_stopwords   <== Creates a symbolic link
% ln -s ../stop.wrd           <== Creates a symbolic link
Then run it:
% filter_stopwords stop.wrd < out.lr > out.lr.filtered
If you see a lot of instances of GEN, you might want to also filter it out, e.g.
% filter_stopwords stop.wrd < out.lr | fgrep -v GEN > out.lr.filtered
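A quick way to see how much the stop list (and the fgrep) removed is to compare line counts before and after filtering:
% wc -l out.lr out.lr.filtered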
QUESTION: How does out.lr.filtered look as a file containing bigrams that are characteristic of this corpus?
Notice that the corpus distinguishes uppercase and lowercase, so "The" and "the" are counted as different words. To experiment with a case-insensitive version, create an all-lowercase copy (working in the corpora directory):
% cat GEN.EN | tr "A-Z" "a-z" > GEN.EN.lc
To save disk space, assuming you're done with GEN.EN, delete the original:
% rm GEN.EN
QUESTION: Try re-doing the exercise with this version. What, if anything, changes?
Now get back into your ngrams directory, create an output directory, say, holmes1, and run the Stats program for the file of interest, e.g.:
% cd ..
% mkdir holmes1
% Stats corpora/study.dyl holmes1/out
% cd holmes1
Or perhaps convert to lowercase before running Stats:
% cd corpora
% cat study.dyl | tr "A-Z" "a-z" > study.lc
% rm study.dyl
% cat hound.dyl | tr "A-Z" "a-z" > hound.lc
% rm hound.dyl
% cat adventures.dyl | tr "A-Z" "a-z" > adventures.lc
% rm adventures.dyl
% cd ..
Look at out.lr, etc. for this corpus. Now go through the same process again, but creating a directory holmes2 and using a different file. QUESTION: Same author, same main character, same genre... how do the high-association bigrams compare between the two cases? If you use filter_stopwords, how do the results look -- what kinds of bigrams are you getting? What natural language processing problems might this be useful for?
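When answering, you may find it helpful to put the two lists side by side. Here is a sketch, assuming you created out.lr.filtered in both holmes1 and holmes2 the same way you did for genesis, and that you're working from the ngrams directory:
% head -20 holmes1/out.lr.filtered > /tmp/h1.top
% head -20 holmes2/out.lr.filtered > /tmp/h2.top
% sdiff /tmp/h1.top /tmp/h2.top   <== Side-by-side comparison of the top bigrams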
When you are completely done with the exercise, please execute
% /bin/rm -rf corpora genesis holmes1 holmes2 holmes3
to delete the entire directories. Your housekeeping will be much appreciated.