Reading HBase with Spark + Phoenix

Normally there should be plenty of articles online covering this, but I still spent quite a while fiddling with it, so I'm writing it down here as a memo to myself.

Phoenix is a SQL skin over HBase: it lets you work with HBase through ordinary SQL statements, which is much friendlier than HBase's native API. Spark also talks to HBase through that native API, which is fairly tedious, so the steps below show how to combine Spark with Phoenix instead.

I'm using Scala. First, add the dependencies to pom.xml:

        <dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-spark</artifactId>
            <version>5.0.0-HBase-2.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.phoenix</groupId>
            <artifactId>phoenix-core</artifactId>
            <version>5.0.0-HBase-2.0</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>2.4.12</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-server</artifactId>
            <version>2.4.12</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-common</artifactId>
            <version>2.4.12</version>
        </dependency>

The versions you add here must match the HBase cluster you are going to access!
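One way to keep those versions consistent (not part of the original setup, just a common Maven convention; the property names here are my own) is to define them once as properties and reference them in every dependency:

        <properties>
            <hbase.version>2.4.12</hbase.version>
            <phoenix.version>5.0.0-HBase-2.0</phoenix.version>
        </properties>

        <dependency>
            <groupId>org.apache.hbase</groupId>
            <artifactId>hbase-client</artifactId>
            <version>${hbase.version}</version>
        </dependency>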

Next, go to the Apache Phoenix website (https://phoenix.apache.org/) and download the release package.

Unpack it and copy phoenix-server-hbase-2.4-5.1.3.jar (your version may differ from mine; it depends on the HBase version installed on your Hadoop cluster) into hbase/lib/, then restart HBase.

Then copy the extracted phoenix-client-hbase-2.4-5.1.3.jar into your project's resources directory, and put the Hadoop configuration files under resources/conf/ as well. Now on to the code.
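Before wiring Phoenix into Spark, it is worth checking that plain JDBC can already reach Phoenix. Here is a minimal sanity-check sketch of mine (not part of the original steps), assuming the same ZooKeeper address 10.12.4.51:2181 and the student table used later in this post:

import java.sql.DriverManager

object PhoenixJdbcCheck {
  def main(args: Array[String]): Unit = {
    // Register the Phoenix thick JDBC driver; the ZooKeeper quorum goes into the URL
    Class.forName("org.apache.phoenix.jdbc.PhoenixDriver")
    val conn = DriverManager.getConnection("jdbc:phoenix:10.12.4.51:2181")
    try {
      // getString works for any column type, so no assumptions about the schema here
      val rs = conn.createStatement().executeQuery("SELECT ID, NAME, AGE, ADDR FROM student")
      while (rs.next()) {
        println(s"${rs.getString("ID")} ${rs.getString("NAME")} ${rs.getString("AGE")} ${rs.getString("ADDR")}")
      }
    } finally {
      conn.close()
    }
  }
}

If this plain query fails, fix the Phoenix/HBase side first; the Spark part only adds a classpath on top of it. With that verified, back to the Spark code.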

import org.apache.spark.SparkContext
import org.apache.spark.sql.{SQLContext, SparkSession}
import org.apache.phoenix.spark.datasource.v2.PhoenixDataSource

val spark = SparkSession
  .builder()
  .appName("phoenix-test")
  .master("local")
  .getOrCreate()

// Load data from TABLE1
val df = spark.sqlContext
  .read
  .format("phoenix")
  .options(Map("table" -> "TABLE1", PhoenixDataSource.ZOOKEEPER_URL -> "phoenix-server:2181"))
  .load

df.filter(df("COL1") === "test_row_1" && df("ID") === 1L)
  .select(df("ID"))
  .show

This is the example code from the Phoenix website, but it did not work for me: it failed because org.apache.phoenix.spark.datasource.v2.PhoenixDataSource could not be found. I'm not sure whether I pulled in the wrong dependency or something else went wrong (as far as I can tell, the DataSource V2 connector ships separately as a phoenix5-spark artifact for Phoenix 5.1.x, so the phoenix-spark 5.0.0-HBase-2.0 dependency above may simply not contain that class). My code below makes a few changes on top of this example.

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.Path

import org.apache.log4j.Logger


object SparkPhoenixHbase {
  @transient lazy val log = Logger.getLogger(this.getClass)

  def main(args: Array[String]): Unit = {
    readFromHBaseWithPhoenix()
  }

  def readFromHBaseWithPhoenix(): Unit = {
    // Load the Hadoop/HBase configuration files that were copied into resources/conf/
    val hadoopConf = new Configuration()
    hadoopConf.addResource(new Path("conf/core-site.xml"))
    hadoopConf.addResource(new Path("conf/hdfs-site.xml"))
    hadoopConf.addResource(new Path("conf/mapred-site.xml"))
    hadoopConf.addResource(new Path("conf/yarn-site.xml"))
    hadoopConf.addResource(new Path("conf/hbase-site.xml"))

    val conf = new SparkConf()
      .setAppName("phoenix-spark-hbase")
      .setMaster("local[*]")
    // Put the Phoenix client jar on both the driver and executor classpath
    conf.set("spark.driver.extraClassPath", "/resources/phoenix-client-hbase-2.4-5.1.3.jar")
    conf.set("spark.executor.extraClassPath", "/resources/phoenix-client-hbase-2.4-5.1.3.jar")

    // Copy every Hadoop configuration entry into the Spark configuration
    val it = hadoopConf.iterator()
    while (it.hasNext) {
      val entry = it.next()
      conf.set(entry.getKey, entry.getValue)
    }

    val spark = SparkSession
      .builder()
      .master("local")
      .appName("phoenix-hbase")
      .config(conf)
      .getOrCreate()

    // Go through the Phoenix JDBC driver instead of the phoenix-spark connector
    val phoenixConfig = Map(
      "url" -> "jdbc:phoenix:10.12.4.51:2181",   // the ZooKeeper address of your Hadoop/HBase cluster
      "driver" -> "org.apache.phoenix.jdbc.PhoenixDriver"
    )

    val df = spark.read
      .format("jdbc")
      .options(phoenixConfig)
      .option("dbtable", "student")
      .load()

    df.show()

    spark.close()
  }
}
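For a table with more than a handful of rows, the same JDBC read can be split into parallel partitions using Spark's standard JDBC options. A rough sketch, reusing the spark session and phoenixConfig map from the method above (the ID bounds are made up for illustration; adjust them to your own key range):

// Parallel JDBC read: Spark issues one range query per partition on the ID column.
// partitionColumn must be numeric (or date/timestamp); the bounds below are illustrative only.
val dfParallel = spark.read
  .format("jdbc")
  .options(phoenixConfig)
  .option("dbtable", "student")
  .option("partitionColumn", "ID")
  .option("lowerBound", "0")
  .option("upperBound", "10000")
  .option("numPartitions", "4")
  .load()

dfParallel.show()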

It's best to configure log output in the project, otherwise you won't see any error messages while it runs.
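For example, a minimal console logging configuration placed in src/main/resources. I'm assuming the Log4j 1.x API here because the code above imports org.apache.log4j.Logger; if your Spark version bundles Log4j 2, the equivalent log4j2.properties is needed instead:

# src/main/resources/log4j.properties
log4j.rootLogger=INFO, console
log4j.appender.console=org.apache.log4j.ConsoleAppender
log4j.appender.console.target=System.err
log4j.appender.console.layout=org.apache.log4j.PatternLayout
log4j.appender.console.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %p [%c] : %m%n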

The output of a successful run is shown below:

2024-01-18 08:53:52,487 INFO [org.apache.spark.executor.Executor] : Finished task 0.0 in stage 0.0 (TID 0). 1509 bytes result sent to driver
2024-01-18 08:53:52,493 INFO [org.apache.spark.scheduler.TaskSetManager] : Finished task 0.0 in stage 0.0 (TID 0) in 580 ms on DESKTOP-FT30H9D (executor driver) (1/1)
2024-01-18 08:53:52,494 INFO [org.apache.spark.scheduler.TaskSchedulerImpl] : Removed TaskSet 0.0, whose tasks have all completed, from pool 
2024-01-18 08:53:52,500 INFO [org.apache.spark.scheduler.DAGScheduler] : ResultStage 0 (show at SparkPhoenixHbase.scala:70) finished in 0.774 s
2024-01-18 08:53:52,502 INFO [org.apache.spark.scheduler.DAGScheduler] : Job 0 is finished. Cancelling potential speculative or zombie tasks for this job
2024-01-18 08:53:52,502 INFO [org.apache.spark.scheduler.TaskSchedulerImpl] : Killing all running tasks in stage 0: Stage finished
2024-01-18 08:53:52,504 INFO [org.apache.spark.scheduler.DAGScheduler] : Job 0 finished: show at SparkPhoenixHbase.scala:70, took 0.808840 s
2024-01-18 08:53:52,538 INFO [org.apache.spark.sql.catalyst.expressions.codegen.CodeGenerator] : Code generated in 14.3886 ms
+----+--------+---+-------+
|  ID|    NAME|AGE|   ADDR|
+----+--------+---+-------+
|1001|zhangsan| 10|tianjin|
+----+--------+---+-------+

// Seeing this output means it worked; my HBase student table contains just this one row

2024-01-18 08:53:52,555 INFO [org.sparkproject.jetty.server.AbstractConnector] : Stopped Spark@4108fa66{HTTP/1.1, (http/1.1)}{0.0.0.0:4040}
2024-01-18 08:53:52,556 INFO [org.apache.spark.ui.SparkUI] : Stopped Spark web UI at http://DESKTOP-FT30H9D:4040
2024-01-18 08:53:52,566 INFO [org.apache.spark.MapOutputTrackerMasterEndpoint] : MapOutputTrackerMasterEndpoint stopped!
2024-01-18 08:53:52,581 INFO [org.apache.spark.storage.memory.MemoryStore] : MemoryStore cleared
2024-01-18 08:53:52,581 INFO [org.apache.spark.storage.BlockManager] : BlockManager stopped
2024-01-18 08:53:52,587 INFO [org.apache.spark.storage.BlockManagerMaster] : BlockManagerMaster stopped
2024-01-18 08:53:52,589 INFO [org.apache.spark.scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint] : OutputCommitCoordinator stopped!
2024-01-18 08:53:52,595 INFO [org.apache.spark.SparkContext] : Successfully stopped SparkContext
2024-01-18 08:53:59,207 INFO [org.apache.spark.util.ShutdownHookManager] : Shutdown hook called
2024-01-18 08:53:59,207 INFO [org.apache.spark.util.ShutdownHookManager] : Deleting directory C:\Users\shell\AppData\Local\Temp\spark-344ef832-7438-47dd-9126-725e6c2d8af4
