Hive User Manual and Map Parameter Tuning

A Brief Introduction to How Map/Reduce Works

The Hadoop Map/Reduce framework launches one map task for each InputSplit of a job, and the InputSplits themselves are generated by the job's InputFormat.
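To make the per-record contract concrete, here is a minimal word-count pair written in the style of Hadoop Streaming. This is an illustrative sketch that is not part of the original walkthrough, and the file names mapper.py and reducer.py are made up. Each map task runs the mapper once over its own InputSplit, reading the split's records from stdin; the shuffle described in the next paragraph then delivers the mapper output to the reducer sorted by key, so equal keys arrive adjacent to each other.

    # mapper.py -- one copy runs per InputSplit and reads that split's records from stdin
    import sys

    for line in sys.stdin:
        for word in line.split():
            print('%s\t%d' % (word, 1))

    # reducer.py -- receives the mappers' output sorted by key, so equal words are adjacent
    import sys

    current_word, current_count = None, 0
    for line in sys.stdin:
        word, count = line.rstrip('\n').split('\t')
        if word != current_word:
            if current_word is not None:
                print('%s\t%d' % (current_word, current_count))
            current_word, current_count = word, 0
        current_count += int(count)
    if current_word is not None:
        print('%s\t%d' % (current_word, current_count))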

The framework then calls map(WritableComparable, Writable, OutputCollector, Reporter) once for each key/value pair in the task's InputSplit, and output pairs are collected by calling OutputCollector.collect(WritableComparable, Writable). The reduce side receives data from the different map tasks, and the data coming from each map is sorted. If the volume of data arriving at the reducer is small enough, it is kept in memory (the buffer size is controlled by mapred.job.shuffle.input.buffer.percent, the fraction of heap space reserved for this purpose); once the data exceeds a certain proportion of that buffer (set by mapred.job.shuffle.merge.percent), it is merged and spilled to disk.

Hive dispatches to the matching evaluate() method based on the argument types. A UDF is the simple case: it computes on one row and produces one row. A UDAF (aggregate function) is more involved: it has a merge() step, among others, and its work is split across the map and reduce phases. A UDTF produces multiple rows or columns from a single row and implements initialize(), process(), and close(). At table-creation time, a SerDe together with a regular expression can achieve the same effect; see the following Hive external table, which uses a regular expression to parse Nginx logs.

The Nginx log format configuration:

    '$proxy_add_x_forwarded_for - $remote_user [$time_local] "$request" '
    '$status $request_body "$http_referer" '
    '"$http_user_agent" "$http_x_forwarded_for" $request_time $upstream_response_time';

A generated log line looks roughly like this:

    218.202.xxx.xxx - - [2014-08-19 22:17:08.446671] "POST /xxx/xxx-web/xxx HTTP/1.1" 200 stepid=15&tid=U753HifVPE0DAOn%2F&output=json&language=zh_CN&session=114099628&dip=10920&diu=DBDBF926-3210-4D64-972A7&xxx=056a849c70ae57560440ebe&diu2=2DFDB167-1505-4372-AAB5-99D28868DCB5&shell=e3209006950686f6e65352c3205004150504c450000000000000000000000000000&compress=false&channel=&sign=438BD4D701A960CD4B7C1DE36AA8A877&wua=0&appkey=0&adcode=150700&t=0 HTTP/1.1" 200 302 "-" "xxx-iphone" 31.0ms

The Hive table definition:

    CREATE EXTERNAL TABLE xxx_log (
      host STRING,
      log_date STRING,
      method STRING,
      uri STRING,
      version STRING,
      STATUS STRING,
      flux STRING,
      referer STRING,
      user_agent STRING,
      response_time STRING
    )
    PARTITIONED BY (YEAR STRING, MONTH STRING, DAY STRING)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
    WITH SERDEPROPERTIES (
      "input.regex" = "([^ ]*)\\s+-\\s+-\\s+\\[([^\]]*)\\]\\s+\"([^ ]*)\\s+(.*?)\\s+([^ ]*)\"\\s+(-|[0-9]*)\\s+(-|[0-9]*)\\s+\"(.+?|-)\"\\s+\"(.+?|-)\"\\s+(.*)",
      "output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s %10$s"
    )
    STORED AS TEXTFILE;
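Before loading any data, it can save a round trip to test the pattern outside Hive. The short script below is a sketch of that check and is not part of the original walkthrough: it applies the same regular expression with Python's re module to a simplified, made-up log line and prints the captured groups, or reports a non-match. Note that in the SERDEPROPERTIES string above the backslashes are escaped (\\s instead of \s), which is exactly the kind of detail that tends to get lost.

    # check_nginx_regex.py -- sanity-check the RegexSerDe pattern outside Hive.
    # The pattern is the single-backslash form of the input.regex above;
    # the sample line is a simplified, made-up record, not real log output.
    import re

    NGINX_PATTERN = re.compile(
        r'([^ ]*)\s+-\s+-\s+\[([^\]]*)\]\s+"([^ ]*)\s+(.*?)\s+([^ ]*)"\s+'
        r'(-|[0-9]*)\s+(-|[0-9]*)\s+"(.+?|-)"\s+"(.+?|-)"\s+(.*)'
    )

    sample = ('218.202.0.1 - - [2014-08-19 22:17:08.446671] '
              '"POST /xxx/xxx-web/xxx HTTP/1.1" 200 302 "-" "xxx-iphone" 31.0ms')

    match = NGINX_PATTERN.match(sample)
    if match:
        # Groups come out in the same order as the table columns / output.format.string.
        for index, value in enumerate(match.groups(), start=1):
            print('%%%d$s -> %s' % (index, value))
    else:
        print('no match -- fix the pattern before creating the table')

If the fields come back as expected here but still show up as NULL in Hive, the escaping stored in the table metadata is the usual suspect, which is what the DESCRIBE EXTENDED check described below is for.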
Then load data into the table. Method 1: add a partition that points at files under an existing path, without moving them:

    ALTER TABLE xxx_log ADD PARTITION (YEAR='2014', MONTH='08', DAY='19')
    LOCATION '/user/xxx/xx/year=2014/month=08/day=19';

Or method 2: load the specified files, moving them into the table's storage path:

    LOAD DATA INPATH '/user/xxx/xx/2014/08/19' OVERWRITE INTO TABLE xxx_log
    PARTITION (YEAR='2014', MONTH='08', DAY='19');

Now you can query the table to verify the data:

    hive> SELECT * FROM xxx_log LIMIT 100;

A situation you may run into: the regular expression looks fine in a regex tool (RegexBuddy is recommended), yet every field in the Hive table comes back NULL, which means the regular expression the table actually uses is wrong. To track it down, run the following in Hive:

    hive> DESCRIBE EXTENDED xxx_log;

This prints the table's detailed metadata, including the actual input.regex value stored for the table; check whether any escape characters from the CREATE TABLE statement went missing. Once the regular expression is fixed, DROP TABLE xxx_log, recreate the table, and reload the data. In the end the Nginx log is split into the Hive table's columns, and all kinds of statistics can be run on it.

Hive User Manual

Usage Examples

Creating tables

MovieLens user ratings:

    CREATE TABLE u_data (
      userid INT,
      movieid INT,
      rating INT,
      unixtime STRING
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE;

Apache access log tables:

    add jar ../build/contrib/hive_contrib.jar;

    CREATE TABLE apachelog (
      host STRING,
      identity STRING,
      user STRING,
      time STRING,
      request STRING,
      status STRING,
      size STRING,
      referer STRING,
      agent STRING
    )
    ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
    WITH SERDEPROPERTIES (
      "input.regex" = "([^]*) ([^]*) ([^]*) (-|\\[^\\]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\"[^\"]*\") ([^ \"]*|\"[^\"]*\"))?",
      "output.format.string" = "%1$s %2$s %3$s %4$s %5$s %6$s %7$s %8$s %9$s"
    )
    STORED AS TEXTFILE;

Control-separated tables:

    CREATE TABLE mylog (
      name STRING,
      language STRING,
      groups ARRAY<STRING>,
      entities MAP<INT, STRING>
    )
    ROW FORMAT DELIMITED
      FIELDS TERMINATED BY '\001'
      COLLECTION ITEMS TERMINATED BY '\002'
      MAP KEYS TERMINATED BY '\003'
    STORED AS TEXTFILE;

Loading tables

MovieLens user ratings. Download and extract the data:

    wget +0.gz
    tar xvzf ml-data.tar+0.gz

Load it in:

    LOAD DATA LOCAL INPATH 'ml-data/u.data'
    OVERWRITE INTO TABLE u_data;

Running queries

MovieLens user ratings:

    SELECT COUNT(1) FROM u_data;

Running custom map/reduce jobs

MovieLens user ratings. Create weekday_mapper.py:

    import sys
    import datetime

    for line in sys.stdin:
        line = line.strip()
        userid, movieid, rating, unixtime = line.split('\t')
        weekday = datetime.datetime.fromtimestamp(float(unixtime)).isoweekday()
        print '\t'.join([userid, movieid, rating, str(weekday)])

Use the mapper script:

    CREATE TABLE u_data_new (
      userid INT,
      movieid INT,
      rating INT,
      weekday INT
    )
    ROW FORMAT DELIMITED
    FIELDS TERMINATED BY '\t';

    INSERT OVERWRITE TABLE u_data_new
    SELECT
      TRANSFORM (userid, movieid, rating, unixtime)
      USING 'python weekday_mapper.py'
      AS (userid, movieid, rating, weekday)
    FROM u_data;

    SELECT weekday, COUNT(1)
    FROM u_data_new
    GROUP BY weekday;

Note: due to a bug in the parser, you must run the "INSERT OVERWRITE" query on a single line.

Analyzing and Tuning the Number of Hadoop Map Tasks

Before Hadoop computes the number of map tasks a job needs, it first computes the split size. The formulas are:

    goalSize  = totalSize / mapred.map.tasks
    minSize   = max(mapred.min.split.size, minSplitSize)
    splitSize = max(minSize, min(goalSize, dfs.block.size))

totalSize is the total map input of the job, i.e. Map input bytes. The parameter mapred.map.tasks defaults to 2 and can be changed. Once goalSize has been computed, it is clamped by a lower and an upper bound. The lower bound is max(mapred.min.split.size, minSplitSize); mapred.min.split.size defaults to 1 byte, while minSplitSize depends on the file format. The upper bound is dfs.block.size, which defaults to 64 MB. A couple of examples: if Map input bytes is 100 MB and mapred.map.tasks keeps its default of 2, the split size is 50 MB; if mapred.map.tasks is changed to 1, the split size becomes 64 MB. With the split size settled, the number of maps is computed per file, looping over each file (a short sketch of this calculation follows the two steps below):

1. While file size / splitSize > 1.1, create a split whose size equals splitSize, and set the remaining file size to file size minus splitSize.

2. Once remaining file size / splitSize < 1.1, the remainder becomes one final split.
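The sketch below walks through this calculation in Python, using the default values quoted above (mapred.map.tasks = 2, mapred.min.split.size = 1 byte, dfs.block.size = 64 MB); the function and variable names are ours, not Hadoop's.

    # estimate_maps.py -- rough sketch of the split-size / map-count logic described above;
    # the defaults mirror the values quoted in the text.

    def split_size(total_size, map_tasks=2,
                   mapred_min_split_size=1, format_min_split_size=1,
                   block_size=64 * 1024 * 1024):
        goal_size = total_size // map_tasks                            # goalSize = totalSize / mapred.map.tasks
        min_size = max(mapred_min_split_size, format_min_split_size)   # minSize = max(mapred.min.split.size, minSplitSize)
        return max(min_size, min(goal_size, block_size))               # splitSize

    def maps_for_file(file_size, split_sz):
        # Per-file loop: keep cutting splitSize chunks while the remainder is
        # more than 1.1 x splitSize, then the rest becomes one final split.
        splits = 0
        remaining = file_size
        while remaining / float(split_sz) > 1.1:
            splits += 1
            remaining -= split_sz
        if remaining > 0:
            splits += 1
        return splits

    if __name__ == '__main__':
        mb = 1024 * 1024
        total = 100 * mb                       # Map input bytes = 100 MB, as in the example above
        sz = split_size(total, map_tasks=2)    # -> 50 MB
        print('%d MB splits, %d map tasks' % (sz // mb, maps_for_file(total, sz)))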
