This article walks through the basic operations in Hive on databases, tables, columns, and interactive queries.
Interactive queries and CLI settings:
(1) To print the query header (column names), it must be set explicitly:
set hive.cli.print.header=true;
(2) Everything after "--" on a line is treated as a comment, but the interactive CLI does not parse comments, so a script containing comments can only be executed as a file:
hive -f script_name
(3) -e is followed by a quoted Hive statement or query; -S suppresses extraneous output:
hive -S -e "select * FROM mytable LIMIT 3" > /tmp/myquery
(4) A query that scans every partition turns into a huge MapReduce job when the dataset and the number of directories are large. It is therefore advisable to use strict mode: once a table is partitioned, queries must carry a partition predicate in the WHERE clause (see the sketch after the set command):
hive> set hive.mapred.mode=strict;
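A minimal sketch of the effect, reusing the partitioned table ext_trackflow that appears later in this article:
-- Rejected under strict mode: no predicate on the partition column
-- SELECT * FROM ext_trackflow;
-- Accepted: the WHERE clause restricts the partition column
SELECT * FROM ext_trackflow WHERE statDate = '20140529' LIMIT 10;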
(5) Show the current database in the CLI prompt:
set hive.cli.print.current.db=true;
(6) Set the Hive job priority:
set mapred.job.priority=VERY_HIGH | HIGH | NORMAL | LOW | VERY_LOW
(VERY_LOW=1,LOW=2500,NORMAL=5000,HIGH=7500,VERY_HIGH=10000)
set mapred.job.map.capacity=M      -- run at most M map tasks concurrently
set mapred.job.reduce.capacity=N   -- run at most N reduce tasks concurrently
(7) The number of mappers in a Hive job is determined by the following parameters:
mapred.min.split.size ,mapred.max.split.size ,dfs.block.size
splitSize = Math.max(minSize, Math.min(maxSize, blockSize));
The map count also depends on the number of input files: with 2 input files, 2 maps are launched even if their combined size is below the block size.
mapred.reduce.tasks sets the number of reducers.
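A worked example with assumed (hypothetical) values:
set mapred.min.split.size=1;
set mapred.max.split.size=268435456;   -- 256 MB
-- With dfs.block.size = 134217728 (128 MB):
-- splitSize = max(1, min(256 MB, 128 MB)) = 128 MB,
-- so one 1 GB input file yields about 1 GB / 128 MB = 8 mappers.
set mapred.reduce.tasks=10;            -- fix the reducer count explicitly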
Table and partition operations:
(1) List all partitions of a table:
SHOW PARTITIONS ext_trackflow
List one specific partition:
SHOW PARTITIONS ext_trackflow PARTITION(statDate='20140529');
(2) Show the complete, formatted table schema:
desc formatted ext_trackflow;
DESCRIBE EXTENDED ext_trackflow;
(3) Drop a partition. For a managed table the partition's metadata and data are both deleted; for an external table only the metadata is deleted:
ALTER TABLE ext_trackflow DROP PARTITION (statDate='20140529');
(4) Check whether a table is external or managed:
DESCRIBE EXTENDED tablename
(5) Copy a table schema:
CREATE EXTERNAL TABLE IF NOT EXISTS mydb.employees3
LIKE mydb.employees
LOCATION '/path/to/data';
Note: if you omit the EXTERNAL keyword, the new table is external or managed depending on whether employees is; if you include it, the new table is definitely EXTERNAL and needs a LOCATION.
(6) Load data into one partition of a managed table; Hive creates the directory and copies the data into the partition:
LOAD DATA LOCAL INPATH '${env:HOME}/california-employees'
INTO TABLE employees
PARTITION (country = 'US', state = 'CA');
(7) Add data to one partition of an external table:
ALTER TABLE log_messages ADD IF NOT EXISTS PARTITION(year = 2012, month = 1, day = 2)
LOCATION 'hdfs://master_server/data/log_messages/2012/01/02';
Note: Hive does not check whether the partition directory exists or holds data, so a wrong LOCATION silently produces empty query results.
(8) Altering a table: you can alter a table at any time, but only its metadata changes; the actual data is unaffected.
For example, pointing a partition at a new location does not delete the old data:
ALTER TABLE log_messages PARTITION(year = 2011, month = 12, day = 2)
SET LOCATION 's3n://ourbucket/logs/2011/01/02';
(9) Change table properties:
ALTER TABLE log_messages SET TBLPROPERTIES (
'notes' = 'The process id is no longer captured; this column is always NULL'
);
(10) Change storage properties:
ALTER TABLE log_messages
PARTITION(year = 2012, month = 1, day = 1)
SET FILEFORMAT SEQUENCEFILE;
Note: if the table is partitioned, the PARTITION clause is required.
(11) Specify a new SerDe:
ALTER TABLE table_using_JSON_storage
SET SERDE 'com.example.JSONSerDe'
WITH SERDEPROPERTIES (
'prop1' = 'value1',
'prop2' = 'value2'
);
Note: SERDEPROPERTIES are string-valued key/value pairs handed to the SerDe so it knows how to interpret the data; both the names and the values are strings.
To set properties on the table's current SerDe:
ALTER TABLE table_using_JSON_storage
SET SERDEPROPERTIES (
'prop3' = 'value3',
'prop4' = 'value4'
);
(12) Change the bucketing (clustering) layout:
ALTER TABLE stocks
CLUSTERED BY (exchange, symbol)
SORTED BY (symbol)
INTO 48 BUCKETS;
(13) More elaborate ALTER TABLE statements: ALTER TABLE … TOUCH adds a hook point for various operations:
ALTER TABLE log_messages TOUCH
PARTITION(year = 2012, month = 1, day = 1);
The typical use is to fire such a hook after a partition's files have been modified:
hive -e 'ALTER TABLE log_messages TOUCH PARTITION(year = 2012, month = 1, day = 1);'
(14) ALTER TABLE … ARCHIVE PARTITION packs a partition's files into a Hadoop archive (HAR) file:
ALTER TABLE log_messages ARCHIVE
PARTITION(year = 2012, month = 1, day = 1);   -- only valid on partitioned tables
(15) Protect a partition from being dropped or queried:
ALTER TABLE log_messages
PARTITION(year = 2012, month = 1, day = 1) ENABLE NO_DROP;
ALTER TABLE log_messages
PARTITION(year = 2012, month = 1, day = 1) ENABLE OFFLINE;
Note: the counterpart of ENABLE is DISABLE; neither can be applied to an unpartitioned table.
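To lift the protection again, per the note above:
ALTER TABLE log_messages
PARTITION(year = 2012, month = 1, day = 1) DISABLE NO_DROP;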
(16) List tables whose names match a regular expression:
hive> SHOW TABLES '.*s';
(17) Convert between external and managed tables:
alter table tablePartition set TBLPROPERTIES ('EXTERNAL'='TRUE');   -- managed -> external
alter table tablePartition set TBLPROPERTIES ('EXTERNAL'='FALSE');  -- external -> managed
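A quick way to verify the switch is the Table Type field in the formatted description:
DESCRIBE FORMATTED tablePartition;   -- Table Type: MANAGED_TABLE or EXTERNAL_TABLE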
(18) Partitions vs. buckets:
partition: files are stored per directory, one directory per partition. For example:
CREATE EXTERNAL TABLE table1 (
    column1 STRING,
    column2 STRING,
    column3 STRING
)
PARTITIONED BY (dt STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
STORED AS TEXTFILE;
ALTER TABLE table1 ADD IF NOT EXISTS PARTITION (dt=20090105);
ALTER TABLE table1 ADD IF NOT EXISTS PARTITION (dt=20090102);
ALTER TABLE table1 ADD IF NOT EXISTS PARTITION (dt=20081231);
bucket: rows are hashed on a chosen column, one file per bucket:
CREATE TABLE VT_NEW_DATA (
    column1 STRING,
    column2 STRING,
    column3 STRING
)
CLUSTERED BY (column1) SORTED BY (column1) INTO 48 BUCKETS
ROW FORMAT DELIMITED FIELDS TERMINATED BY '|'
LINES TERMINATED BY '\n'
STORED AS RCFILE;
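A hedged sketch of filling the buckets (src_table is a hypothetical source table; on older Hive releases bucketing must be enforced explicitly before the insert):
set hive.enforce.bucketing = true;
INSERT OVERWRITE TABLE VT_NEW_DATA
SELECT column1, column2, column3 FROM src_table;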
Column operations:
(1) Rename a column and change its position, type, and comment:
ALTER TABLE log_messages
CHANGE COLUMN hms hours_minutes_seconds INT
COMMENT 'The hours, minutes, and seconds part of the timestamp'
AFTER severity;
Renaming: CHANGE COLUMN takes the old column name, then the new name, then the type.
COMMENT is optional; use it to document the column.
AFTER moves the column to a new position.
Only the metadata is modified; the underlying data is not touched.
(2) Add new columns:
ALTER TABLE log_messages ADD COLUMNS (
app_name STRING COMMENT 'Application name',
session_id BIGINT COMMENT 'The current session id');
(3) Drop and replace columns. Use with caution!
ALTER TABLE table_name ADD|REPLACE COLUMNS (col_name data_type [COMMENT col_comment], ...)
ADD appends a new column after all existing columns (but before any partition columns).
REPLACE replaces every column in the table.
REPLACE COLUMNS removes all existing columns and adds the new set of columns.
REPLACE COLUMNS can also be used to drop columns. For example:
"ALTER TABLE test_change REPLACE COLUMNS (a int, b int);" will remove column `c' from test_change's schema. Note that this does not delete underlying data, it just changes the schema.
(4) REGEX column specification
A SELECT statement can use a regular expression to choose columns. The following query selects every column except ds and hr:
SELECT `(ds|hr)?+.+` FROM test
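On newer Hive releases, where quoted identifiers are enabled by default, the regex column form must be unlocked first:
set hive.support.quoted.identifiers=none;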
Variables:
hive> set;
…
hive> set -v;
… even more output!…
'set' prints all variables in the hivevar, hiveconf, system, and env namespaces.
'set -v' additionally prints every Hadoop-defined variable.
hive> set hivevar:foo=hello;
hive> set hivevar:foo;
hivevar:foo=hello
Using a variable:
hive> create table toss1(i int, ${hivevar:foo} string);
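Variable substitution is purely textual, so the table ends up with a column literally named hello (an illustrative check):
hive> describe toss1;
i       int
hello   string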
Database operations:
-- Create a database
create database ecdata WITH DBPROPERTIES ('creator' = 'June', 'date' = '2014-06-01');
-- (the COMMENT keyword can be used as well)
-- Describe a database
DESCRIBE DATABASE ecdata;
DESCRIBE DATABASE EXTENDED ecdata;
-- Switch database
use ecdata;
-- Drop a table
drop table ext_trackflow;
-- Create a table
create EXTERNAL table IF NOT EXISTS ext_trackflow (
    cookieId string COMMENT '05dvOVC6Il6INhYABV6LAg==',
    cate1 string COMMENT '4',
    area1 string COMMENT '102',
    url string COMMENT 'http://cd.ooxx.com/jinshan-mingzhan-1020',
    trackTime string COMMENT '2014-05-25 23:03:36',
    trackURLMap map<string,string> COMMENT '{"area":"102","cate":"4,29,14052"}'
)
PARTITIONED BY (statDate STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
COLLECTION ITEMS TERMINATED BY '\002'
MAP KEYS TERMINATED BY '\003'
LINES TERMINATED BY '\n'
STORED AS TEXTFILE
LOCATION '/DataWarehouse/ods/TrackFlowTable';
-- Add a partition
ALTER TABLE ext_trackflow ADD PARTITION (statDate='20140525')
LOCATION '/DataWarehouse/ods/TrackFlowTable/20140525';
-- Create the partition for each new day
yesterday=`date -d '1 days ago' +'%Y%m%d'`
hive -e "use ecdata; ALTER TABLE ext_trackflow ADD PARTITION (statDate='$yesterday') LOCATION '/DataWarehouse/ods/TrackFlowTable/$yesterday';"
Query examples:
(1) Count page views (PV) by page type:
select pageType, count(pageType) from ext_trackflow where statDate = '20140521' group by pageType;
Note: an ordinary SELECT scans the whole table; creating the table with PARTITIONED BY lets queries exploit partition pruning (input pruning).
In the current Hive implementation, pruning is only applied when the partition predicate appears in the WHERE clause closest to the FROM clause; see the sketch below.
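A minimal sketch of the difference, reusing ext_trackflow:
-- Pruned: only the 20140521 partition directory is read
SELECT count(*) FROM ext_trackflow WHERE statDate = '20140521';
-- Per the note above, moving the predicate to an outer query defeats the pruning:
SELECT * FROM (SELECT * FROM ext_trackflow) t WHERE t.statDate = '20140521';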
(2) Two ways to export query results to the local filesystem:
INSERT OVERWRITE LOCAL DIRECTORY '/home/jun06/tmp/110.112.113.115' select area1, count(area1) from ext_trackflow where statDate = '20140521' group by area1 having count(area1) > 1000;
hive -e "use ecdata; select area1, count(area1) from ext_trackflow where statDate = '20140521' group by area1 having count(area1) > 1000;" > a.txt
(3) Querying and using the map data type:
select trackURLMap, extField, unitParamMap, queryParamMap from ext_trackflow where statDate = '20140525' and size(unitParamMap)!=0 limit 10;
(4) The following query returns the 5 sales representatives with the largest sales:
SET mapred.reduce.tasks = 1;
SELECT * FROM test SORT BY amount DESC LIMIT 5;
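Setting mapred.reduce.tasks = 1 forces a single reducer, which turns SORT BY into a total order; the same result can be written with ORDER BY, which always runs one reducer:
SELECT * FROM test ORDER BY amount DESC LIMIT 5;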
(5) Insert data from one table into several tables and paths in a single pass:
FROM src
INSERT OVERWRITE TABLE dest1 SELECT src.* WHERE src.key < 100
INSERT OVERWRITE TABLE dest2 SELECT src.key, src.value WHERE src.key >= 100 and src.key < 200
INSERT OVERWRITE TABLE dest3 PARTITION(ds='2008-04-08', hr='12') SELECT src.key WHERE src.key >= 200 and src.key < 300
INSERT OVERWRITE LOCAL DIRECTORY '/tmp/dest4.out' SELECT src.value WHERE src.key >= 300;
(6) Stream rows through an external program with TRANSFORM and insert the output:
hive> FROM invites a INSERT OVERWRITE TABLE events SELECT TRANSFORM(a.foo, a.bar) AS (oof, rab) USING '/bin/cat' WHERE a.ds > '2008-08-09';
(7) Hive supports only equality joins (equi-joins), outer joins, and LEFT SEMI JOINs. It does not support non-equality joins, because they are very hard to translate into map/reduce jobs.
The LEFT, RIGHT, and FULL OUTER keywords control how unmatched rows are handled in a join.
LEFT SEMI JOIN is a more efficient implementation of IN/EXISTS subqueries.
In each map/reduce stage of a join, the reducer buffers the rows of every table in the join sequence except the last one, then streams the last table's rows through while serializing results to the filesystem.
In practice, therefore, put the largest table last; the hint shown below makes the choice explicit.
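Hive also lets you name the streamed table with the standard STREAMTABLE hint, overriding the default left-to-right order (the table names here are illustrative):
SELECT /*+ STREAMTABLE(a) */ a.val, b.val, c.val
FROM a JOIN b ON (a.key = b.key1) JOIN c ON (c.key = b.key2);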
(8) Points to watch in join queries:
Only equality joins are supported:
SELECT a.* FROM a JOIN b ON (a.id = b.id)
SELECT a.* FROM a JOIN b
ON (a.id = b.id AND a.department = b.department)
More than two tables can be joined, for example:
SELECT a.val, b.val, c.val FROM a JOIN b
ON (a.key = b.key1) JOIN c ON (c.key = b.key2)
Note: if every join in the query uses the same join key, the joins are compiled into a single map/reduce job. When one of the tables is small, a map-side join can skip the reduce phase entirely, as sketched below.
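A hedged aside on the small-table case: the standard MAPJOIN hint loads the named table into memory and performs the join in the mappers, avoiding the reduce phase (table names illustrative):
SELECT /*+ MAPJOIN(b) */ a.key, a.value
FROM a JOIN b ON (a.key = b.key);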
(9) LEFT, RIGHT, and FULL OUTER joins
SELECT a.val, b.val FROM a LEFT OUTER JOIN b ON (a.key=b.key)
To restrict a join's output, put the filter in the WHERE clause, or in the join's ON clause.
A confusing case arises with partitioned tables:
SELECT c.val, d.val FROM c LEFT OUTER JOIN d ON (c.key=d.key)
WHERE c.ds='2009-07-07' AND d.ds='2009-07-07'
When d has no row matching a row of c, all of d's columns come back NULL, including d.ds; the WHERE clause then discards exactly those rows. In other words, the filter removes every c row that has no match in d, so the WHERE clause nullifies the LEFT OUTER semantics.
The fix is to move the partition predicates into the ON clause:
SELECT c.val, d.val FROM c LEFT OUTER JOIN d
ON (c.key=d.key AND d.ds='2009-07-07' AND c.ds='2009-07-07')
(10) LEFT SEMI JOIN
The restriction on LEFT SEMI JOIN is that the right-hand table may only be referenced in the ON clause; it cannot be filtered or selected in the WHERE clause, the SELECT list, or anywhere else.
SELECT a.key, a.value
FROM a
WHERE a.key in
(SELECT b.key
FROM b);
can be rewritten as:
SELECT a.key, a.val
FROM a LEFT SEMI JOIN b on (a.key = b.key)
(11) Habits to change when moving from SQL to HiveQL
① Hive does not support the traditional implicit (comma-separated) join syntax
In SQL, an inner join of two tables can be written as:
select * from dual a, dual b where a.key = b.key;
In Hive it must be written as:
select * from dual a join dual b on a.key = b.key;
rather than the traditional form:
SELECT t1.a1 as c1, t2.b1 as c2 FROM t1, t2
WHERE t1.a2 = t2.b2
② The semicolon character
A semicolon terminates a statement in SQL and in HiveQL alike, but HiveQL's handling of semicolons is less clever. For example:
select concat(key,concat(';',key)) from dual;
HiveQL fails to parse this, reporting:
FAILED: Parse Error: line 0:-1 mismatched input '
The workaround is to escape the semicolon with its octal ASCII code, so the statement should be written as:
select concat(key,concat('\073',key)) from dual;