[Hadoop] Setting Up a Hadoop Application Development Environment Based on Eclipse

Install Eclipse

Download Eclipse (download link), extract, and install it. I installed it under the /usr/local/software/ directory.

Install the Hadoop plugin in Eclipse

Download the Hadoop plugin (download link) and place it in the eclipse/plugins directory.

Restart Eclipse and configure the Hadoop installation directory

If the plugin was installed successfully, opening Window -> Preferences will reveal a Hadoop Map/Reduce option. There, set the Hadoop installation directory, then close the dialog.

Configure Map/Reduce Locations

Open the Map/Reduce Locations view via Window -> Show View.

In Map/Reduce Locations, create a new Hadoop Location: right-click inside the view -> New Hadoop Location. In the dialog you need to set a Location name (e.g. Hadoop1.0) and fill in the Map/Reduce Master and DFS Master. Their Host and Port are the address and port you configured in mapred-site.xml and core-site.xml, respectively. For example:

Map/Reduce Master

Host: 192.168.239.130, Port: 9001

DFS Master

Host: 192.168.239.130, Port: 9000
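For reference, these fields mirror the entries in your cluster configuration files. A minimal sketch, assuming the Hadoop 1.x property names fs.default.name and mapred.job.tracker:

<!-- core-site.xml (DFS Master) -->
<property>
  <name>fs.default.name</name>
  <value>hdfs://192.168.239.130:9000</value>
</property>

<!-- mapred-site.xml (Map/Reduce Master) -->
<property>
  <name>mapred.job.tracker</name>
  <value>192.168.239.130:9001</value>
</property>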

After configuring, close the dialog. Click DFS Locations -> Hadoop1.0: if it expands and shows the HDFS folder tree (the plugin displays an item count, e.g. "(2)"), the configuration is correct. If it shows "Connection refused", check your configuration.
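If the plugin reports "Connection refused", you can also test the DFS Master from plain Java. The class below is a hypothetical helper, not part of the original article; it assumes the DFS Master address configured above and the Hadoop 1.x FileSystem API, and simply lists the HDFS root:

package WordCount;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical connectivity check, not part of the original article.
public class HdfsCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed address: must match the DFS Master host/port configured above.
        conf.set("fs.default.name", "hdfs://192.168.239.130:9000");
        FileSystem fs = FileSystem.get(conf);
        // Listing "/" succeeds only if the NameNode is reachable.
        for (FileStatus status : fs.listStatus(new Path("/"))) {
            System.out.println(status.getPath());
        }
        fs.close();
    }
}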

Create a WordCount project

File -> New -> Project, select Map/Reduce Project, and enter WordCount as the project name. In the WordCount project, create a new class named WordCount with the following code:

package WordCount;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class WordCount extends Configured implements Tool {

    /**
     * Mapper: tokenizes each input line and emits (word, 1) for every token.
     *
     * @author root
     */
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            } // while
        } // map
    } // mapper

    /**
     * Reducer: sums all counts emitted for the same word.
     *
     * @author root
     */
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        private IntWritable result = new IntWritable();

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            } // for
            result.set(sum);
            context.write(key, result);
        } // reduce
    } // reducer

    public int run(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.err.println("Usage: wordcount <in> <out>");
            System.exit(2);
        }
        // job name
        Job job = new Job(conf, "word count");
        job.setJarByClass(WordCount.class);
        // mapper
        job.setMapperClass(TokenizerMapper.class);
        // combiner (safe here because counting is associative and commutative)
        job.setCombinerClass(IntSumReducer.class);
        // reducer
        job.setReducerClass(IntSumReducer.class);
        // output key format
        job.setOutputKeyClass(Text.class);
        // output value format
        job.setOutputValueClass(IntWritable.class);
        // input path
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        // output path
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        job.waitForCompletion(true);
        return job.isSuccessful() ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        int res = ToolRunner.run(new Configuration(), new WordCount(), args);
        System.exit(res);
    }
}
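To launch the job from Eclipse, right-click the class and use the plugin's Run on Hadoop action (or an ordinary Java run configuration), passing two program arguments: an existing HDFS input directory and a not-yet-existing output directory; any concrete paths are your own choice. As a worked example of what the job computes: if the input consists of the single line "Hello World Bye World", the output file (part-r-00000 under the output directory) contains:

Bye	1
Hello	1
World	2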
