Hive Data Definition Language (DDL)
1. Database Operations (DATABASE)
1.1 Creating a Database (CREATE)
CREATE DATABASE [IF NOT EXISTS] database_name
[COMMENT database_comment]
[LOCATION hdfs_path]
[WITH DBPROPERTIES (property_name=property_value, ...)];
Default location: /user/hive/warehouse/db_name.db/table_name/partition_name/...
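The optional clauses can be combined; a minimal sketch (the database name, comment, location, and property below are hypothetical):

```sql
-- Create a database with a comment, an explicit HDFS location, and a custom property
CREATE DATABASE IF NOT EXISTS sales_db
COMMENT 'Raw and curated sales data'
LOCATION '/user/hive/warehouse/sales_db.db'
WITH DBPROPERTIES ('owner.team' = 'etl');
```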
1.2 Switching the Current Database (USE)
USE database_name;
Note: USE is the one statement where the DATABASE keyword is not written.
1.3 Dropping a Database (DROP)
DROP DATABASE [IF EXISTS] database_name [RESTRICT|CASCADE];
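RESTRICT is the default and refuses to drop a database that still contains tables; CASCADE drops the tables first. A sketch (the database name is hypothetical):

```sql
-- Fails if sales_db still contains tables (RESTRICT is the default)
DROP DATABASE IF EXISTS sales_db RESTRICT;
-- Drops the database together with all of its tables
DROP DATABASE IF EXISTS sales_db CASCADE;
```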
1.4 Altering a Database (ALTER)
1.4.1 Altering Database Properties
ALTER DATABASE database_name SET DBPROPERTIES (property_name=property_value, ...);
1.4.2 Changing the Database Owner
ALTER DATABASE database_name SET OWNER [USER|ROLE] user_or_role;
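For illustration, both ALTER DATABASE forms with hypothetical names:

```sql
-- Add or update a database property
ALTER DATABASE sales_db SET DBPROPERTIES ('edited.by' = 'analyst');
-- Transfer ownership to a user (ROLE works the same way)
ALTER DATABASE sales_db SET OWNER USER hive_admin;
```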
2. Creating and Dropping Tables (Create/Drop/Truncate Table)
2.1 Creating a Table (CREATE)
# Creating a table manually
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name -- (Note: TEMPORARY available in Hive 0.14.0 and later)
[(col_name data_type [COMMENT col_comment], ...)]
[COMMENT table_comment]
[PARTITIONED BY (col_name data_type [COMMENT col_comment], ...)]
[CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name [ASC|DESC], ...)] INTO num_buckets BUCKETS]
[SKEWED BY (col_name, col_name, ...) -- (Note: Available in Hive 0.10.0 and later)
ON ((col_value, col_value, ...), (col_value, col_value, ...), ...)
[STORED AS DIRECTORIES]]
[
[ROW FORMAT row_format]
[STORED AS file_format]
| STORED BY 'storage.handler.class.name' [WITH SERDEPROPERTIES (...)] -- (Note: Available in Hive 0.6.0 and later)
]
[LOCATION hdfs_path]
[TBLPROPERTIES (property_name=property_value, ...)] -- (Note: Available in Hive 0.6.0 and later)
[AS select_statement]; -- (Note: Available in Hive 0.5.0 and later; not supported for external tables)
# Copying an existing table's structure
CREATE [TEMPORARY] [EXTERNAL] TABLE [IF NOT EXISTS] [db_name.]table_name
LIKE existing_table_or_view_name
[LOCATION hdfs_path];
# Data types
data_type
: primitive_type
| array_type
| map_type
| struct_type
| union_type -- (Note: Available in Hive 0.7.0 and later)
primitive_type
: TINYINT
| SMALLINT
| INT
| BIGINT
| BOOLEAN
| FLOAT
| DOUBLE
| STRING
| BINARY -- (Note: Available in Hive 0.8.0 and later)
| TIMESTAMP -- (Note: Available in Hive 0.8.0 and later)
| DECIMAL -- (Note: Available in Hive 0.11.0 and later)
| DECIMAL(precision, scale) -- (Note: Available in Hive 0.13.0 and later)
| DATE -- (Note: Available in Hive 0.12.0 and later)
| VARCHAR -- (Note: Available in Hive 0.12.0 and later)
| CHAR -- (Note: Available in Hive 0.13.0 and later)
array_type
: ARRAY < data_type >
map_type
: MAP < primitive_type, data_type >
struct_type
: STRUCT < col_name : data_type [COMMENT col_comment], ...>
union_type
: UNIONTYPE < data_type, data_type, ... > -- (Note: Available in Hive 0.7.0 and later)
# Row and column delimiters
row_format
: DELIMITED [FIELDS TERMINATED BY char [ESCAPED BY char]] [COLLECTION ITEMS TERMINATED BY char]
[MAP KEYS TERMINATED BY char] [LINES TERMINATED BY char]
[NULL DEFINED AS char] -- (Note: Available in Hive 0.13 and later)
| SERDE serde_name [WITH SERDEPROPERTIES (property_name=property_value, property_name=property_value, ...)]
# File storage formats
file_format
: SEQUENCEFILE
| TEXTFILE -- (Default, depending on hive.default.fileformat configuration)
| RCFILE -- (Note: Available in Hive 0.6.0 and later)
| ORC -- (Note: Available in Hive 0.11.0 and later)
| PARQUET -- (Note: Available in Hive 0.13.0 and later)
| AVRO -- (Note: Available in Hive 0.14.0 and later)
| INPUTFORMAT input_format_classname OUTPUTFORMAT output_format_classname
2.1.1 Row Format, File Format, and SerDe
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001' STORED AS SEQUENCEFILE;
ROW FORMAT SERDE ... STORED AS SEQUENCEFILE;
Example: parsing Apache access logs with RegexSerDe
CREATE TABLE apachelog (
host STRING,
identity STRING,
user STRING,
time STRING,
request STRING,
status STRING,
size STRING,
referer STRING,
agent STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
"input.regex" = "([^]*) ([^]*) ([^]*) (-|\\[^\\]*\\]) ([^ \"]*|\"[^\"]*\") (-|[0-9]*) (-|[0-9]*)(?: ([^ \"]*|\".*\") ([^ \"]*|\".*\"))?"
)
STORED AS TEXTFILE;
2.1.2 Partitioned Tables (PARTITIONED BY)
CREATE TABLE page_view(
viewTime INT
,userid BIGINT
,page_url STRING
,referrer_url STRING
,ip STRING COMMENT 'IP Address of the User'
)
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
STORED AS SEQUENCEFILE;
2.1.3 External Tables (EXTERNAL)
An external table can use a custom HDFS location, and dropping the table does not delete its data; the delimiters still have to be specified.
CREATE EXTERNAL TABLE page_view(
viewTime INT
,userid BIGINT
,page_url STRING
,referrer_url STRING
,ip STRING COMMENT 'IP Address of the User'
,country STRING COMMENT 'country of origination'
)
COMMENT 'This is the staging page view table'
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\054'
STORED AS TEXTFILE
LOCATION '<hdfs_location>';
2.1.4 Create Table As Select (CTAS)
Creates a table whose columns and column types are derived from the query result; the column delimiter and file storage format can be specified explicitly.
CREATE TABLE new_key_value_store
ROW FORMAT SERDE "org.apache.hadoop.hive.serde2.columnar.ColumnarSerDe"
STORED AS RCFile
AS
SELECT
(key % 1024) new_key
, concat(key, value) key_value_pair
FROM key_value_store
SORT BY new_key, key_value_pair;
2.1.5 Copying an Existing Table's Structure (CREATE TABLE LIKE)
Copies an existing table's structure without copying any data (all properties are identical; only the table name differs).
CREATE TABLE empty_key_value_store
LIKE key_value_store;
2.1.6 Bucketed Sorted Tables
CREATE TABLE page_view(
viewTime INT
,userid BIGINT
,page_url STRING
,referrer_url STRING
,ip STRING COMMENT 'IP Address of the User'
)
COMMENT 'This is the page view table'
PARTITIONED BY(dt STRING, country STRING)
CLUSTERED BY(userid) SORTED BY(viewTime) INTO 32 BUCKETS
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\001'
COLLECTION ITEMS TERMINATED BY '\002'
MAP KEYS TERMINATED BY '\003'
STORED AS SEQUENCEFILE;
The table above is bucketed by userid, with rows inside each bucket sorted by viewTime in ascending order. Bucketing enables more efficient sampling and gives internal operations knowledge of the data layout, which can improve query performance.
The CLUSTERED BY and SORTED BY clauses in the table definition only affect how data is read; they are not enforced on writes. When inserting, you must manually set the number of reducers equal to the number of buckets and use CLUSTER BY and SORT BY in the insert statement.
The simpler approach is set hive.enforce.bucketing = true; with it, Hive automatically sets the reducer count to the bucket count and applies the clustering and sorting for you. Bucket assignment is based on a hash of the bucketing column.
Example table definition:
CREATE TABLE user_info_bucketed(
user_id BIGINT
,firstname STRING
,lastname STRING
)
COMMENT 'A bucketed copy of user_info'
PARTITIONED BY(ds STRING)
CLUSTERED BY(user_id) INTO 256 BUCKETS;
Insert statement:
set hive.enforce.bucketing = true;
FROM user_info
INSERT OVERWRITE TABLE user_info_bucketed
PARTITION (ds='2009-02-25')
SELECT userid, firstname, lastname WHERE ds='2009-02-25';
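Without hive.enforce.bucketing, the manual write path described above sets the reducer count equal to the 256 buckets and clusters on the bucketing column; roughly:

```sql
set mapred.reduce.tasks = 256;
FROM user_info
INSERT OVERWRITE TABLE user_info_bucketed
PARTITION (ds='2009-02-25')
SELECT userid, firstname, lastname WHERE ds='2009-02-25'
CLUSTER BY userid;
```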
2.1.7 Skewed Tables
When a few values of a column occur with very high frequency, operations involving them run much more slowly than for other values, causing skew. Hive can store those particular values in separate files to improve performance.
Single-column example:
CREATE TABLE list_bucket_single (key STRING, value STRING)
SKEWED BY (key) ON (1,5,6) [STORED AS DIRECTORIES];
Multi-column example:
CREATE TABLE list_bucket_multiple (col1 STRING, col2 int, col3 STRING)
SKEWED BY (col1, col2) ON (('s1',1), ('s3',3), ('s13',13), ('s78',78)) [STORED AS DIRECTORIES];
2.1.8 Temporary Tables
A temporary table is visible only to the current session, is stored in the user's scratch directory, and is dropped when the session ends.
If a temporary table has the same name as a permanent table, queries in the session resolve to the temporary table until it is dropped or renamed.
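A minimal sketch of a session-scoped table (the table name and columns are hypothetical):

```sql
-- Exists only in this session; dropped automatically when the session ends
CREATE TEMPORARY TABLE tmp_active_users (user_id BIGINT, last_seen STRING);
```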
2.2 Dropping a Table (DROP)
Dropping an external table does not delete its data.
Dropping a managed (internal) table moves its data to the user's trash, so it can be recovered by recreating the metadata and moving the data back.
Specifying PURGE deletes the data permanently, with no possibility of recovery.
DROP TABLE [IF EXISTS] table_name [PURGE];
2.3 Truncating a Table (TRUNCATE TABLE)
Removes all rows from a table; when partitions are specified, only the data in those partitions is removed.
TRUNCATE TABLE table_name [PARTITION (partition_column = partition_col_value, partition_column = partition_col_value, ...)];
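Using the page_view table from earlier as an illustration (the partition values are hypothetical):

```sql
-- Remove all rows from the whole table
TRUNCATE TABLE page_view;
-- Remove rows from a single partition only
TRUNCATE TABLE page_view PARTITION (dt='2008-08-08', country='US');
```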
3. Altering Tables (Alter Table/Partition/Column)
3.1 Altering a Table (ALTER TABLE)
3.1.1 Renaming a Table (RENAME TO)
ALTER TABLE table_name RENAME TO new_table_name;
3.1.2 Altering Table Properties
ALTER TABLE table_name SET TBLPROPERTIES (property_name = property_value, property_name = property_value, ... );
3.1.3 Altering the Table Comment
ALTER TABLE table_name SET TBLPROPERTIES ('comment' = new_comment);
3.1.4 Altering SerDe Properties
ALTER TABLE table_name [PARTITION partition_spec] SET SERDE serde_class_name [WITH SERDEPROPERTIES (property_name = property_value, property_name = property_value, ... )];
ALTER TABLE table_name [PARTITION partition_spec] SET SERDEPROPERTIES (property_name = property_value, property_name = property_value, ... );
Both the property name and the value must be quoted:
ALTER TABLE table_name SET SERDEPROPERTIES ('field.delim' = ',');
3.1.5 Altering Table Storage Properties
ALTER TABLE table_name CLUSTERED BY (col_name, col_name, ...) [SORTED BY (col_name, ...)]
INTO num_buckets BUCKETS;
This changes only the metadata; the user must ensure the physical data layout matches the updated metadata.
3.1.6 Altering Skew Properties (Alter Table Skewed or Stored as Directories)
Converting a table to a skewed table:
ALTER TABLE table_name SKEWED BY (col_name1, col_name2, ...)
ON ((col_name1_value, col_name2_value, ...) [, (col_name1_value, col_name2_value, ...), ...])
[STORED AS DIRECTORIES];
Converting back to a non-skewed table:
ALTER TABLE table_name NOT SKEWED;
Disabling separate directory storage for skewed values:
ALTER TABLE table_name NOT STORED AS DIRECTORIES;
Changing the locations of skewed values:
ALTER TABLE table_name SET SKEWED LOCATION (col_name1="location1" [, col_name2="location2", ...] );
3.2 Altering Partitions (Alter Partition)
3.2.1 Adding Partitions (ADD PARTITION)
ALTER TABLE table_name ADD [IF NOT EXISTS] PARTITION partition_spec
[LOCATION 'location1'] partition_spec [LOCATION 'location2'] ...;
partition_spec:
: (partition_column = partition_col_value, partition_column = partition_col_value, ...)
Altering partitions changes only the metadata; the actual data must be moved by the user.
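A concrete sketch reusing the page_view table (the partition values and locations are hypothetical):

```sql
-- Register two new partitions in one statement, each with its own location
ALTER TABLE page_view ADD IF NOT EXISTS
PARTITION (dt='2008-08-08', country='US') LOCATION '/data/pv/2008-08-08/US'
PARTITION (dt='2008-08-09', country='US') LOCATION '/data/pv/2008-08-09/US';
```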
3.2.2 Dynamic Partitions
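Dynamic partition inserts let Hive derive partition values from the query itself instead of hard-coding them; a hedged sketch (the staging table page_view_stg is hypothetical, and the settings should be verified against your Hive version):

```sql
set hive.exec.dynamic.partition = true;
set hive.exec.dynamic.partition.mode = nonstrict;

-- dt is a static partition value; country is filled in dynamically per row
INSERT OVERWRITE TABLE page_view PARTITION (dt='2008-08-08', country)
SELECT viewTime, userid, page_url, referrer_url, ip, country
FROM page_view_stg;
```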
3.2.3 Renaming a Partition (RENAME PARTITION)
ALTER TABLE table_name PARTITION partition_spec RENAME TO PARTITION partition_spec;
This is equivalent to changing the value of the corresponding partition column.
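A concrete sketch with the page_view table (the partition values are hypothetical):

```sql
-- Move the partition's rows under a corrected date value
ALTER TABLE page_view PARTITION (dt='2008-08-08', country='US')
RENAME TO PARTITION (dt='2008-08-09', country='US');
```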
3.2.4 Exchanging Partitions (EXCHANGE PARTITION)
ALTER TABLE table_name_1 EXCHANGE PARTITION (partition_spec) WITH TABLE table_name_2;
-- multiple partitions
ALTER TABLE table_name_1 EXCHANGE PARTITION (partition_spec, partition_spec2, ...) WITH TABLE table_name_2;
Moves a partition from one table to another; the two tables must have identical schemas, and the destination table must not already contain the partition.
3.2.5 Repairing Partitions (MSCK REPAIR TABLE)
When data is placed directly into HDFS as a partition, the metastore must be updated so that Hive can recognize it:
MSCK REPAIR TABLE table_name;
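A typical sequence, sketched from the Hive CLI (the paths and file name are hypothetical): place files in a partition-shaped directory, then repair the table so the metastore registers the partition.

```sql
-- In the Hive CLI, dfs runs HDFS commands within the session
dfs -mkdir -p /user/hive/warehouse/page_view/dt=2008-08-08/country=US;
dfs -put pv_us.txt /user/hive/warehouse/page_view/dt=2008-08-08/country=US/;
-- Scan the table location and add any partitions missing from the metastore
MSCK REPAIR TABLE page_view;
```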
3.2.6 Dropping Partitions (DROP PARTITION)
ALTER TABLE table_name DROP [IF EXISTS] PARTITION partition_spec[, PARTITION partition_spec, ...]
[IGNORE PROTECTION] [PURGE];
Dropping a partition deletes both the metadata and the data; the deleted data goes to the user's trash.
3.3 Altering Either a Table or a Partition
3.3.1 Altering the File Format
ALTER TABLE table_name [PARTITION partition_spec] SET FILEFORMAT file_format;
3.3.2 Altering the Storage Location
ALTER TABLE table_name [PARTITION partition_spec] SET LOCATION "new location";
3.4 Altering Columns (Alter Column)
Column names are case-sensitive.
3.4.1 Changing a Column's Name/Type/Position/Comment
ALTER TABLE table_name [PARTITION partition_spec] CHANGE [COLUMN] col_old_name col_new_name column_type
[COMMENT col_comment] [FIRST|AFTER column_name] [CASCADE|RESTRICT];
Example:
CREATE TABLE test_change (a int, b int, c int);
# First change column a's name to a1.
ALTER TABLE test_change CHANGE a a1 INT;
# Next change column a1's name to a2, its data type to string, and put it after column b.
ALTER TABLE test_change CHANGE a1 a2 STRING AFTER b;
# The new table's structure is: b int, a2 string, c int.
# Then change column c's name to c1, and put it as the first column.
ALTER TABLE test_change CHANGE c c1 INT FIRST;
# The new table's structure is: c1 int, b int, a2 string.
# Add a comment to column a1
ALTER TABLE test_change CHANGE a1 a1 INT COMMENT 'this is column a1';
3.4.2 Adding Columns (ADD COLUMNS)
ALTER TABLE table_name
[PARTITION partition_spec]
ADD COLUMNS (col_name data_type [COMMENT col_comment], ...)
[CASCADE|RESTRICT]
Added columns are placed after the existing regular columns and before the partition columns.
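Reusing the test_change table from the earlier example:

```sql
-- d is appended after the existing regular columns
ALTER TABLE test_change ADD COLUMNS (d STRING COMMENT 'newly added column');
```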
4. View Operations (Create/Drop/Alter View)
4.1 Creating a View (CREATE VIEW)
CREATE VIEW [IF NOT EXISTS] [db_name.]view_name [(column_name [COMMENT column_comment], ...) ]
[COMMENT view_comment]
[TBLPROPERTIES (property_name = property_value, ...)]
AS SELECT ...;
A view is a purely logical object with no physical storage; querying a view is equivalent to evaluating its defining SELECT and applying further operations on top.
If column names are not specified, they are taken from the SELECT list.
A view's schema is frozen at creation time; subsequent changes to the underlying tables do not propagate to the view.
Views are read-only; data cannot be inserted into them.
SHOW CREATE TABLE displays the statement that created a view.
Example:
CREATE VIEW onion_referrers(url COMMENT 'URL of Referring page')
COMMENT 'Referrers to The Onion website'
AS
SELECT DISTINCT referrer_url
FROM page_view
WHERE page_url='http://www.theonion.com';
4.2 Dropping a View (DROP VIEW)
DROP VIEW [IF EXISTS] view_name;
Example:
DROP VIEW onion_referrers;
4.3 Altering View Properties
ALTER VIEW view_name SET TBLPROPERTIES table_properties;
table_properties:
: (property_name = property_value, property_name = property_value, ...)
4.4 Redefining a View (Alter View As Select)
ALTER VIEW view_name AS select_statement;
This can only be used on views that do not have partitions.
5. Index Operations
5.1 Creating an Index (CREATE INDEX)
CREATE INDEX index_name
ON TABLE base_table_name (col_name, ...)
AS index_type
[WITH DEFERRED REBUILD]
[IDXPROPERTIES (property_name=property_value, ...)]
[IN TABLE index_table_name]
[
[ ROW FORMAT ...] STORED AS ...
| STORED BY ...
]
[LOCATION hdfs_path]
[TBLPROPERTIES (...)]
[COMMENT "index comment"];
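A hedged sketch using the built-in compact index handler on the page_view table (note that Hive indexes were removed entirely in Hive 3.0):

```sql
CREATE INDEX pv_userid_idx
ON TABLE page_view (userid)
AS 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
WITH DEFERRED REBUILD;
-- DEFERRED REBUILD means the index is empty until it is rebuilt
ALTER INDEX pv_userid_idx ON page_view REBUILD;
```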
5.2 Dropping an Index (DROP INDEX)
DROP INDEX [IF EXISTS] index_name ON table_name;
5.3 Rebuilding an Index (ALTER INDEX ... REBUILD)
ALTER INDEX index_name ON table_name [PARTITION partition_spec] REBUILD;
6. Function Operations (Create/Drop/Reload Function)
6.1 Temporary Functions
6.1.1 Creating a Temporary Function (CREATE TEMPORARY FUNCTION)
CREATE TEMPORARY FUNCTION function_name AS class_name;
Creates a temporary function from a Java class name, valid only for the current session; the class can be made available beforehand with ADD JAR.
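A sketch (the jar path and UDF class are hypothetical):

```sql
-- Make the jar visible to this session, then register the function from its class
ADD JAR /tmp/my_udfs.jar;
CREATE TEMPORARY FUNCTION my_lower AS 'com.example.hive.udf.Lower';
```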
6.1.2 Dropping a Temporary Function (DROP TEMPORARY FUNCTION)
DROP TEMPORARY FUNCTION [IF EXISTS] function_name;
6.2 Permanent Functions
6.2.1 Creating a Permanent Function (CREATE FUNCTION)
CREATE FUNCTION [db_name.]function_name AS class_name
[USING JAR|FILE|ARCHIVE 'file_uri' [, JAR|FILE|ARCHIVE 'file_uri'] ];
The function is registered in a specific database; if no database is given, the current database is used.
The resources (jars, files, archives) must be referenced with the USING clause so that Hive can add them.
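A sketch (the database, jar location, and class are hypothetical):

```sql
-- Registered in the metastore, so it survives across sessions
CREATE FUNCTION sales_db.my_lower AS 'com.example.hive.udf.Lower'
USING JAR 'hdfs:///udfs/my_udfs.jar';
```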
6.2.2 Dropping a Permanent Function (DROP FUNCTION)
DROP FUNCTION [IF EXISTS] function_name;
6.2.3 Reloading Functions (RELOAD FUNCTION)
RELOAD FUNCTION;
7. Roles and Privileges (Create/Drop/Grant/Revoke Roles and Privileges)
8. Showing Information (SHOW)
8.1 Showing Databases (SHOW DATABASES)
SHOW (DATABASES|SCHEMAS) [LIKE 'identifier_with_wildcards'];
In the pattern, "*" matches any sequence of characters and "|" expresses alternation.
8.2 Showing Tables/Partitions/Indexes
8.2.1 Showing Tables (SHOW TABLES)
SHOW TABLES [IN database_name] [like 'identifier_with_wildcards'];
The IN clause names the database; if omitted, the current database is used.
In the pattern, "*" matches any sequence of characters and "|" expresses alternation.
8.2.2 Showing Partitions (SHOW PARTITIONS)
SHOW PARTITIONS table_name;
One or more of the partition columns can also be specified:
SHOW PARTITIONS table_name PARTITION(ds='2010-03-03');
SHOW PARTITIONS table_name PARTITION(hr='12');
SHOW PARTITIONS table_name PARTITION(ds='2010-03-03', hr='12');
8.2.3 Showing Extended Table/Partition Information (SHOW TABLE EXTENDED)
SHOW TABLE EXTENDED [IN|FROM database_name] LIKE 'identifier_with_wildcards' [PARTITION(partition_spec)];
EXTENDED additionally reports file-level information such as file sizes and modification times.
When a partition is specified, the table cannot be matched with a wildcard; a concrete table name is required.
8.2.4 Showing Table Properties (SHOW TBLPROPERTIES)
SHOW TBLPROPERTIES tblname;
SHOW TBLPROPERTIES tblname("foo");
The first form lists all of the table's properties.
The second form returns only the value of the named property.
8.2.5 Showing the Create Statement (SHOW CREATE TABLE)
SHOW CREATE TABLE ([db_name.]table_name|view_name);
This works for both tables and views.
8.2.6 Showing Indexes (SHOW INDEXES)
SHOW [FORMATTED] (INDEX|INDEXES) ON table_with_index [(FROM|IN) db_name];
8.3 Showing Columns (SHOW COLUMNS)
SHOW COLUMNS (FROM|IN) table_name [(FROM|IN) db_name];
8.4 Showing Functions (SHOW FUNCTIONS)
SHOW FUNCTIONS "a.*";
Use ".*" to list all functions.
8.5 Showing Granted Roles and Privileges
8.6 Showing Locks (SHOW LOCKS)
SHOW LOCKS <table_name>;
SHOW LOCKS <table_name> EXTENDED;
SHOW LOCKS <table_name> PARTITION (<partition_spec>);
SHOW LOCKS <table_name> PARTITION (<partition_spec>) EXTENDED;
SHOW LOCKS (DATABASE|SCHEMA) database_name;
8.7 Showing Configuration (SHOW CONF)
SHOW CONF <configuration_name>;
This does not show the currently configured value; use the SET command for that.
8.8 Showing Transactions (SHOW TRANSACTIONS)
SHOW TRANSACTIONS;
8.9 Showing Compactions (SHOW COMPACTIONS)
SHOW COMPACTIONS;
9. Describing Information (DESCRIBE)
9.1 Describing a Database (DESCRIBE DATABASE)
DESCRIBE DATABASE [EXTENDED] db_name;
EXTENDED also shows the database's properties.
9.2 Describing a Table/View/Column
Without a database specified:
DESCRIBE [EXTENDED|FORMATTED]
table_name[.col_name ( [.field_name] | [.'$elem$'] | [.'$key$'] | [.'$value$'] )* ];
With a database specified:
DESCRIBE [EXTENDED|FORMATTED]
[db_name.]table_name[ col_name ( [.field_name] | [.'$elem$'] | [.'$key$'] | [.'$value$'] )* ];
9.2.1 Displaying Column Statistics
DESCRIBE FORMATTED [db_name.]table_name column_name;
DESCRIBE FORMATTED [db_name.]table_name column_name PARTITION (partition_spec);
9.3 Describing a Partition
Without a database specified:
DESCRIBE [EXTENDED|FORMATTED] table_name[.column_name] PARTITION partition_spec;
With a database specified:
DESCRIBE [EXTENDED|FORMATTED] [db_name.]table_name [column_name] PARTITION partition_spec;
Example:
DESCRIBE page_view PARTITION (ds='2008-08-08');