{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "## Acknowledgement: This notebook is taken (almost) verbatim from the notebook used in the CIRC bootcamp on Spark offered by Jonathan Carroll-Nellenback\n" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "When you start a python-spark kernel, it spawns a Spark cluster within your job allocation and creates the Spark context 'sc'.\n", "The SparkContext object represents a connection to a computing cluster." ] }, { "cell_type": "code", "execution_count": 2, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "<pyspark.context.SparkContext object at 0x...>" ] }, "execution_count": 2, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sc" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Each Spark context has a master that coordinates Spark applications" ] }, { "cell_type": "code", "execution_count": 3, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(u'spark.app.id', u'app-20170419161246-0000'),\n", " (u'spark.rdd.compress', u'True'),\n", " (u'spark.master', u'spark://bhg0019:7077'),\n", " (u'spark.serializer.objectStreamReset', u'100'),\n", " (u'spark.executor.id', u'driver'),\n", " (u'spark.submit.deployMode', u'client'),\n", " (u'spark.driver.host', u'192.168.5.19'),\n", " (u'spark.app.name', u'pyspark-shell'),\n", " (u'spark.driver.port', u'56089')]" ] }, "execution_count": 3, "metadata": {}, "output_type": "execute_result" } ], "source": [ "sc.getConf().getAll()" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Let's load the text data for Tale of Two Cities using sc.textFile\n", "Once you have a SparkContext, you can use it to build RDDs. Here we call sc.textFile() to create an RDD representing the lines of text in a file.\n", "We can then run various operations on these lines, such as count()." ] }, { "cell_type": "code", "execution_count": 4, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "pyspark.rdd.RDD" ] }, "execution_count": 4, "metadata": {}, "output_type": "execute_result" } ], "source": [ "taleOfTwoCities=sc.textFile('taleOfTwoCities.txt')\n", "type(taleOfTwoCities)" ] },
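{ "cell_type": "markdown", "metadata": {}, "source": [ "#### As a quick check that the file really is loaded as a distributed dataset, we can ask how many partitions Spark split it into. (A minimal sketch; the number of partitions depends on the file size and cluster configuration, so no output is shown.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Number of partitions the text file was split into across the cluster\n", "taleOfTwoCities.getNumPartitions()" ] },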
{ "cell_type": "markdown", "metadata": {}, "source": [ "* #### The textFile command loads lines from a text file into a Spark RDD (Resilient Distributed Dataset).\n", "* #### An RDD is the basic abstraction in Spark that supports various actions and transformations.\n", "* #### 'take' is an action that returns the first N entries from the RDD" ] }, { "cell_type": "code", "execution_count": 5, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "[u'\ufeffIt was the best of times,',\n", " u'it was the worst of times,',\n", " u'it was the age of wisdom,',\n", " u'it was the age of foolishness,',\n", " u'it was the epoch of belief,',\n", " u'it was the epoch of incredulity,',\n", " u'it was the season of Light,',\n", " u'it was the season of Darkness,',\n", " u'it was the spring of hope,',\n", " u'it was the winter of despair,',\n", " u'we had everything before us,',\n", " u'we had nothing before us,']" ] }, "execution_count": 5, "metadata": {}, "output_type": "execute_result" } ], "source": [ "taleOfTwoCities.take(12)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Some other simple actions we can perform are\n", " * first\n", " * takeSample\n", " * takeOrdered\n", " * count" ] }, { "cell_type": "code", "execution_count": 6, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "u'\ufeffIt was the best of times,'" ] }, "execution_count": 6, "metadata": {}, "output_type": "execute_result" } ], "source": [ "taleOfTwoCities.first()" ] }, { "cell_type": "code", "execution_count": 7, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[u'',\n", " u'wake him--\"ten o\'clock, sir.\"',\n", " u'that appeared quite supernatural, he forced all these changes upon him.',\n", " u'even the fountain appears to fall to that tune. At length, on Sunday',\n", " u\"distant? Rather. Ever been in prison? Certainly not. Never in a debtors'\"]" ] }, "execution_count": 7, "metadata": {}, "output_type": "execute_result" } ], "source": [ "taleOfTwoCities.takeSample(withReplacement=False,num=5,seed=2)" ] }, { "cell_type": "code", "execution_count": 8, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[u'invigorated himself with a bumper for his throttle, and a fresh application',\n", " u\"Bastille; the gentleman's, all negligent indifference; the peasant's, all\",\n", " u'its noisiest authorities insisted on its being received, for good or for',\n", " u'There were a king with a large jaw and a queen with a plain face, on the',\n", " u'It was the year of Our Lord one thousand seven hundred and seventy-five.']" ] }, "execution_count": 8, "metadata": {}, "output_type": "execute_result" } ], "source": [ "taleOfTwoCities.takeOrdered(5, key=lambda x: -len(x))" ] }, { "cell_type": "code", "execution_count": 9, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[u'', u'', u'', u'', u'']" ] }, "execution_count": 9, "metadata": {}, "output_type": "execute_result" } ], "source": [ "taleOfTwoCities.takeOrdered(5,key=lambda x: len(x))" ] }, { "cell_type": "code", "execution_count": 10, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "15787" ] }, "execution_count": 10, "metadata": {}, "output_type": "execute_result" } ], "source": [ "taleOfTwoCities.count()" ] },
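{ "cell_type": "markdown", "metadata": {}, "source": [ "#### reduce is another action: it combines entries pairwise with a function we supply. As a small sketch (output omitted), we can use it to find the longest line:" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# reduce repeatedly merges pairs of lines, here keeping the longer one\n", "taleOfTwoCities.reduce(lambda a, b: a if len(a) > len(b) else b)" ] },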
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Now, in addition to actions that return data, RDDs can undergo transformations.\n", "#### Let's try to filter out the empty lines." ] }, { "cell_type": "code", "execution_count": 11, "metadata": { "collapsed": true }, "outputs": [], "source": [ "taleOfTwoCitiesFiltered=taleOfTwoCities.filter(lambda x: len(x)>0)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### And we can test this by taking the 5 shortest lines from the filtered data" ] }, { "cell_type": "code", "execution_count": 12, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[u'to.', u'it.', u'in:', u'in.', u'box.']" ] }, "execution_count": 12, "metadata": {}, "output_type": "execute_result" } ], "source": [ "taleOfTwoCitiesFiltered.takeOrdered(5,key=lambda x: len(x))" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Now let's look at the type of data structure taleOfTwoCitiesFiltered is" ] }, { "cell_type": "code", "execution_count": 13, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "pyspark.rdd.PipelinedRDD" ] }, "execution_count": 13, "metadata": {}, "output_type": "execute_result" } ], "source": [ "type(taleOfTwoCitiesFiltered)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Notice that it is a PipelinedRDD. This is because it is built by a transformation (filter) of an existing RDD, and transformations in Spark are evaluated lazily.\n", "\n", "Function-based operations like filter parallelize across the cluster. That is, Spark automatically takes your function and ships it to the executor nodes. Thus, you can write code in a single driver program and automatically have parts of it run on multiple nodes." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Another common type of transformation is map" ] }, { "cell_type": "code", "execution_count": 14, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[[u'\ufeffIt', u'was', u'the', u'best', u'of', u'times,'],\n", " [u'it', u'was', u'the', u'worst', u'of', u'times,'],\n", " [u'it', u'was', u'the', u'age', u'of', u'wisdom,'],\n", " [u'it', u'was', u'the', u'age', u'of', u'foolishness,'],\n", " [u'it', u'was', u'the', u'epoch', u'of', u'belief,']]" ] }, "execution_count": 14, "metadata": {}, "output_type": "execute_result" } ], "source": [ "words=taleOfTwoCitiesFiltered.map(lambda x: x.split(' '))\n", "words.take(5)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### This maps each line into an array of words, so we have an RDD where each row is an array. If instead we want an RDD where each row is a word, we can use flatMap" ] }, { "cell_type": "code", "execution_count": 15, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[u'\ufeffIt',\n", " u'was',\n", " u'the',\n", " u'best',\n", " u'of',\n", " u'times,',\n", " u'it',\n", " u'was',\n", " u'the',\n", " u'worst']" ] }, "execution_count": 15, "metadata": {}, "output_type": "execute_result" } ], "source": [ "words=taleOfTwoCitiesFiltered.flatMap(lambda x: x.split())\n", "words.take(10)" ] },
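{ "cell_type": "markdown", "metadata": {}, "source": [ "#### As a quick sanity check (a sketch; output omitted), the total word count should agree between the two approaches: summing the per-line list lengths from map should equal the number of rows produced by flatMap." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# map gives one list per line; sum the lengths of those lists\n", "totalFromMap=taleOfTwoCitiesFiltered.map(lambda x: len(x.split())).sum()\n", "# flatMap gives one word per row, so count() is the total number of words\n", "totalFromMap == words.count()" ] },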
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Now let's get rid of punctuation and make everything lower case." ] }, { "cell_type": "code", "execution_count": 16, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[u'it',\n", " u'was',\n", " u'the',\n", " u'best',\n", " u'of',\n", " u'times',\n", " u'it',\n", " u'was',\n", " u'the',\n", " u'worst']" ] }, "execution_count": 16, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import re\n", "words_lower=words.map(lambda x: re.sub(r'[^\w]','',x.lower()))\n", "words_lower.take(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Now, in order to perform a word count, we need to create a set of key-value pairs" ] }, { "cell_type": "code", "execution_count": 17, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(u'it', 1),\n", " (u'was', 1),\n", " (u'the', 1),\n", " (u'best', 1),\n", " (u'of', 1),\n", " (u'times', 1),\n", " (u'it', 1),\n", " (u'was', 1),\n", " (u'the', 1),\n", " (u'worst', 1)]" ] }, "execution_count": 17, "metadata": {}, "output_type": "execute_result" } ], "source": [ "wordMap=words_lower.map(lambda x: (x,1))\n", "wordMap.take(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Now we can use reduceByKey, which simply adds the values, to get the word counts" ] }, { "cell_type": "code", "execution_count": 18, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(u'the', 7986),\n", " (u'and', 4926),\n", " (u'of', 4002),\n", " (u'to', 3460),\n", " (u'a', 2908),\n", " (u'in', 2577),\n", " (u'his', 2003),\n", " (u'it', 2001),\n", " (u'i', 1896),\n", " (u'that', 1883)]" ] }, "execution_count": 18, "metadata": {}, "output_type": "execute_result" } ], "source": [ "wordCount=wordMap.reduceByKey(lambda v1, v2: v1+v2)\n", "wordCount.takeOrdered(10,key=lambda x: -x[1])" ] },
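{ "cell_type": "markdown", "metadata": {}, "source": [ "#### From wordCount we can also read off other statistics, for example the number of distinct words, or the count for one particular word via lookup. (A sketch; the word 'city' is just an illustrative choice and the output is omitted.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Number of distinct (lower-cased, punctuation-stripped) words\n", "print(wordCount.count())\n", "# lookup returns the list of values stored under a key in a pair RDD\n", "print(wordCount.lookup(u'city'))" ] },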
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Now for fun, we can use the Natural Language Toolkit (NLTK) to remove stopwords" ] }, { "cell_type": "code", "execution_count": 19, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "{u'a',\n", " u'about',\n", " u'above',\n", " u'after',\n", " u'again',\n", " u'against',\n", " u'ain',\n", " u'all',\n", " u'am',\n", " u'an',\n", " u'and',\n", " u'any',\n", " u'are',\n", " u'aren',\n", " u'as',\n", " u'at',\n", " u'be',\n", " u'because',\n", " u'been',\n", " u'before',\n", " u'being',\n", " u'below',\n", " u'between',\n", " u'both',\n", " u'but',\n", " u'by',\n", " u'can',\n", " u'couldn',\n", " u'd',\n", " u'did',\n", " u'didn',\n", " u'do',\n", " u'does',\n", " u'doesn',\n", " u'doing',\n", " u'don',\n", " u'down',\n", " u'during',\n", " u'each',\n", " u'few',\n", " u'for',\n", " u'from',\n", " u'further',\n", " u'had',\n", " u'hadn',\n", " u'has',\n", " u'hasn',\n", " u'have',\n", " u'haven',\n", " u'having',\n", " u'he',\n", " u'her',\n", " u'here',\n", " u'hers',\n", " u'herself',\n", " u'him',\n", " u'himself',\n", " u'his',\n", " u'how',\n", " u'i',\n", " u'if',\n", " u'in',\n", " u'into',\n", " u'is',\n", " u'isn',\n", " u'it',\n", " u'its',\n", " u'itself',\n", " u'just',\n", " u'll',\n", " u'm',\n", " u'ma',\n", " u'me',\n", " u'mightn',\n", " u'more',\n", " u'most',\n", " u'mustn',\n", " u'my',\n", " u'myself',\n", " u'needn',\n", " u'no',\n", " u'nor',\n", " u'not',\n", " u'now',\n", " u'o',\n", " u'of',\n", " u'off',\n", " u'on',\n", " u'once',\n", " u'only',\n", " u'or',\n", " u'other',\n", " u'our',\n", " u'ours',\n", " u'ourselves',\n", " u'out',\n", " u'over',\n", " u'own',\n", " u're',\n", " u's',\n", " u'same',\n", " u'shan',\n", " u'she',\n", " u'should',\n", " u'shouldn',\n", " u'so',\n", " u'some',\n", " u'such',\n", " u't',\n", " u'than',\n", " u'that',\n", " u'the',\n", " u'their',\n", " u'theirs',\n", " u'them',\n", " u'themselves',\n", " u'then',\n", " u'there',\n", " u'these',\n", " u'they',\n", " u'this',\n", " u'those',\n", " u'through',\n", " u'to',\n", " u'too',\n", " u'under',\n", " u'until',\n", " u'up',\n", " u've',\n", " u'very',\n", " u'was',\n", " u'wasn',\n", " u'we',\n", " u'were',\n", " u'weren',\n", " u'what',\n", " u'when',\n", " u'where',\n", " u'which',\n", " u'while',\n", " u'who',\n", " u'whom',\n", " u'why',\n", " u'will',\n", " u'with',\n", " u'won',\n", " u'wouldn',\n", " u'y',\n", " u'you',\n", " u'your',\n", " u'yours',\n", " u'yourself',\n", " u'yourselves'}" ] }, "execution_count": 19, "metadata": {}, "output_type": "execute_result" } ], "source": [ "import os\n", "os.environ[\"NLTK_DATA\"]='/software/nltk/nltk_data'\n", "from nltk.corpus import stopwords\n", "stop=set(stopwords.words('english'))\n", "stop" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### And then filter out the stop words from the wordCount." ] }, { "cell_type": "code", "execution_count": 20, "metadata": { "collapsed": false, "scrolled": true }, "outputs": [ { "data": { "text/plain": [ "[(u'said', 660),\n", " (u'mr', 620),\n", " (u'one', 436),\n", " (u'would', 341),\n", " (u'lorry', 336),\n", " (u'upon', 289),\n", " (u'could', 281),\n", " (u'defarge', 280),\n", " (u'man', 279),\n", " (u'little', 265)]" ] }, "execution_count": 20, "metadata": {}, "output_type": "execute_result" } ], "source": [ "filteredCount=wordCount.filter(lambda x: x[0] not in stop)\n", "filteredCount.takeOrdered(10, key=lambda v: -v[1])" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### Now let's make an index of words. We don't have page numbers, but we can use line numbers instead.\n", "#### First we have to tag each line or entry in the RDD with a number" ] }, { "cell_type": "code", "execution_count": 21, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(u'\ufeffIt was the best of times,', 0),\n", " (u'it was the worst of times,', 1),\n", " (u'it was the age of wisdom,', 2),\n", " (u'it was the age of foolishness,', 3),\n", " (u'it was the epoch of belief,', 4)]" ] }, "execution_count": 21, "metadata": {}, "output_type": "execute_result" } ], "source": [ "taleOfTwoCitieswLines=taleOfTwoCities.zipWithIndex()\n", "taleOfTwoCitieswLines.take(5)" ] },
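{ "cell_type": "markdown", "metadata": {}, "source": [ "#### With line numbers attached we can, for example, fetch a particular line directly: swap each tuple to (line_number, line) and use lookup on the resulting pair RDD. (A sketch; line 100 is an arbitrary choice and the output is omitted.)" ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# Swap to (line_number, line) so the line numbers become keys, then look one up\n", "byNumber=taleOfTwoCitieswLines.map(lambda (k,v): (v,k))\n", "byNumber.lookup(100)" ] },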
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### And then map each (line, line_number) tuple into a set of (word, line_number) pairs" ] }, { "cell_type": "code", "execution_count": 22, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[[(u'\ufeffIt', 0),\n", " (u'was', 0),\n", " (u'the', 0),\n", " (u'best', 0),\n", " (u'of', 0),\n", " (u'times,', 0)],\n", " [(u'it', 1),\n", " (u'was', 1),\n", " (u'the', 1),\n", " (u'worst', 1),\n", " (u'of', 1),\n", " (u'times,', 1)],\n", " [(u'it', 2),\n", " (u'was', 2),\n", " (u'the', 2),\n", " (u'age', 2),\n", " (u'of', 2),\n", " (u'wisdom,', 2)]]" ] }, "execution_count": 22, "metadata": {}, "output_type": "execute_result" } ], "source": [ "words=taleOfTwoCitieswLines.map(lambda (k,v): [(word,v) for word in k.split(' ')])\n", "words.take(3)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### And again we want to use flatMap instead of map." ] }, { "cell_type": "code", "execution_count": 23, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(u'\ufeffIt', 0),\n", " (u'was', 0),\n", " (u'the', 0),\n", " (u'best', 0),\n", " (u'of', 0),\n", " (u'times,', 0),\n", " (u'it', 1),\n", " (u'was', 1),\n", " (u'the', 1),\n", " (u'worst', 1)]" ] }, "execution_count": 23, "metadata": {}, "output_type": "execute_result" } ], "source": [ "words=taleOfTwoCitieswLines.flatMap(lambda (k,v): [(word,v) for word in k.split()])\n", "words.take(10)" ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "#### And again, we want to remove punctuation, make everything lower case, and remove the stop words, reusing the stop set we built above.\n", "\n" ] }, { "cell_type": "code", "execution_count": 25, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(u'best', 0),\n", " (u'times', 0),\n", " (u'worst', 1),\n", " (u'times', 1),\n", " (u'age', 2),\n", " (u'wisdom', 2),\n", " (u'age', 3),\n", " (u'foolishness', 3),\n", " (u'epoch', 4),\n", " (u'belief', 4)]" ] }, "execution_count": 25, "metadata": {}, "output_type": "execute_result" } ], "source": [ "words=words.map(lambda (k,v): (re.sub(r'[^\w]','',k.lower()),v))\n", "words=words.filter(lambda (k,v): k not in stop)\n", "words.take(10)" ] },
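{ "cell_type": "markdown", "metadata": {}, "source": [ "#### One straightforward way to collect the line numbers for each word is groupByKey (a sketch; output omitted). The combineByKey approach below achieves the same result while giving us explicit control over how values are merged." ] }, { "cell_type": "code", "execution_count": null, "metadata": { "collapsed": false }, "outputs": [], "source": [ "# groupByKey gathers all line numbers for each word; mapValues(set) de-duplicates them\n", "words.groupByKey().mapValues(set).take(5)" ] },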
{ "cell_type": "markdown", "metadata": {}, "source": [ "#### Now we can reduce the data by combining the line numbers of matching keys with combineByKey.\n", "#### combineByKey takes three functions: createCombiner (turn the first value into a combiner), mergeValue (merge a value into an existing combiner), and mergeCombiners (merge two combiners).\n", "#### Here our values are integers and our combiners are Python sets." ] }, { "cell_type": "code", "execution_count": 26, "metadata": { "collapsed": false }, "outputs": [ { "data": { "text/plain": [ "[(u'funereal', {699}),\n", " (u'restalong', {97}),\n", " (u'suicidal', {13143}),\n", " (u'pardon', {1806, 4414, 4609, 4880, 5406, 6003, 7510, 13323}),\n", " (u'expostulation', {1182}),\n", " (u'assembles', {6973}),\n", " (u'desirable', {1202, 2305, 6644, 15197}),\n", " (u'crumpled', {12607}),\n", " (u'foul', {1341, 7325, 10570}),\n", " (u'four',\n", " {70,\n", " 717,\n", " 1438,\n", " 2604,\n", " 2762,\n", " 3648,\n", " 4190,\n", " 4193,\n", " 4339,\n", " 4531,\n", " 4808,\n", " 5335,\n", " 5792,\n", " 6850,\n", " 7008,\n", " 8627,\n", " 8978,\n", " 9071,\n", " 9584,\n", " 9586,\n", " 9638,\n", " 9642,\n", " 9644,\n", " 10664,\n", " 10674,\n", " 10675,\n", " 10685,\n", " 11186,\n", " 11429,\n", " 11541,\n", " 12071,\n", " 12081,\n", " 13637,\n", " 13909,\n", " 14031,\n", " 14249,\n", " 14966,\n", " 15073,\n", " 15346}),\n", " (u'protest', {4905, 6435, 6438, 8137, 13188, 14486}),\n", " (u'sleep',\n", " {611,\n", " 3437,\n", " 5146,\n", " 5166,\n", " 5620,\n", " 7923,\n", " 7925,\n", " 9516,\n", " 9534,\n", " 10001,\n", " 10346,\n", " 10351,\n", " 10389,\n", " 10570,\n", " 10571,\n", " 10872,\n", " 11224,\n", " 13040,\n", " 13071,\n", " 13561,\n", " 14541}),\n", " (u'astray', {1725}),\n", " (u'perverted', {6280, 9852, 11551}),\n", " (u'types', {11553})]" ] }, "execution_count": 26, "metadata": {}, "output_type": "execute_result" } ], "source": [ "index=words.combineByKey(lambda v: set([v]), lambda s,v: s | set([v]), lambda s1,s2: s1 | s2)\n", "index.take(15)" ] } ], "metadata": { "kernelspec": { "display_name": "Python 2 (spark)", "language": "python", "name": "python-2.7.10-b1-spark" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 2 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython2", "version": "2.7.10" }, "widgets": { "state": {}, "version": "1.1.2" } }, "nbformat": 4, "nbformat_minor": 0 }