A word co-occurrence implementation in Hadoop

I've never been sure how to translate "word co-occurrence" properly: word similarity? co-occurring words? a word co-occurrence matrix?

It is a very common text-processing algorithm in statistics, used to measure which word pairs appear together most frequently across a document collection. Strictly speaking it works on word pairs in context, not individual words. It is a fairly common building block from which other statistical methods can be derived, and it can be used for recommendation, because the kind of result it produces is "people who viewed this also viewed that" — for example, product recommendations beyond plain collaborative filtering, credit-card risk analysis, or figuring out what everybody likes.

For example, in "I love you", an occurrence of "I love" usually comes with an occurrence of "love you". Chinese text is handled differently from English, though: it has to be run through a word-segmentation library first.
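That pair-extraction step can be sketched in plain Java outside of Hadoop (a minimal illustration; the class and method names here are invented for the example):

```java
import java.util.ArrayList;
import java.util.List;

public class PairDemo {
    // Extract adjacent word pairs from a line: lowercase it, split on
    // non-word characters, and join each word to its predecessor.
    static List<String> pairs(String line) {
        List<String> result = new ArrayList<>();
        String before = null;
        for (String word : line.toLowerCase().split("\\W+")) {
            if (word.length() > 0) {
                if (before != null) {
                    result.add(before + "," + word);
                }
                before = word; // current word becomes the previous word
            }
        }
        return result;
    }

    public static void main(String[] args) {
        // "I love you" yields the two overlapping pairs.
        System.out.println(pairs("I love you")); // [i,love, love,you]
    }
}
```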

The code is split into a Mapper, a Reducer, and a Driver.

The Mapper:

```java
package wco;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WCoMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {

        /* Convert the whole line to lower case. */
        String line_lc = value.toString().toLowerCase();
        String before = null;

        /*
         * Split the line into words; the key is the previous word
         * joined to the current word, the value is 1.
         */
        for (String word : line_lc.split("\\W+")) { // iterate over the words of the line
            if (word.length() > 0) {
                if (before != null) {
                    // If the previous word is non-null, write the pair to the
                    // context. (On the very first word `before` is still null,
                    // so we skip straight to `before = word` below.)
                    context.write(new Text(before + "," + word), new IntWritable(1));
                }
                before = word; // remember the current word as the previous word
            }
        }
    }
}
```

The Reducer:

```java
package wco;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WCoReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int wordCount = 0;
        for (IntWritable value : values) {
            wordCount += value.get(); // a plain word-count style sum
        }
        context.write(key, new IntWritable(wordCount));
    }
}
```

The Driver needs no explanation — every Driver under the sun looks the same:

```java
package wco;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WCo extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.out.printf("Usage: hadoop jar wco.WCo <input> <output>\n");
            return -1;
        }

        Job job = new Job(getConf());
        job.setJarByClass(WCo.class);
        job.setJobName("WordCoOccurrence");

        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setMapperClass(WCoMapper.class);
        job.setReducerClass(WCoReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int exitCode = ToolRunner.run(new Configuration(), new WCo(), args);
        System.exit(exitCode);
    }
}
```
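Away from a cluster, the whole map-shuffle-reduce pipeline above can be imitated with an in-memory map (a rough sketch using plain Java collections, not the Hadoop API; the class name is invented for the example):

```java
import java.util.Map;
import java.util.TreeMap;

public class WCoLocal {
    // Count adjacent word pairs across lines: the inner loop mirrors the
    // Mapper, and merging counts into the map mirrors the shuffle + Reducer.
    static Map<String, Integer> cooccurrences(String[] lines) {
        Map<String, Integer> counts = new TreeMap<>();
        for (String line : lines) {
            String before = null;
            for (String word : line.toLowerCase().split("\\W+")) {
                if (word.length() > 0) {
                    if (before != null) {
                        counts.merge(before + "," + word, 1, Integer::sum);
                    }
                    before = word;
                }
            }
        }
        return counts;
    }

    public static void main(String[] args) {
        String[] docs = { "I love you", "you love me", "I love coffee" };
        // Print pair<TAB>count, the same shape as the job's output files.
        cooccurrences(docs).forEach((pair, n) -> System.out.println(pair + "\t" + n));
    }
}
```

This is only for checking the logic on small inputs; the point of the Hadoop job is that the same pair counting scales across a cluster.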
