How to set up Canal, configure the IDEA client, and collect data into Kafka

Canal on GitHub: https://github.com/alibaba/canal#readme
Canal is a real-time change-data-capture tool. It builds on the MySQL master-slave replication mechanism: the slave continuously reads the master's binary log and parses it.
How Canal works:
Canal speaks the MySQL slave interaction protocol, posing as a MySQL slave, and sends a dump request to the MySQL master.
The MySQL master receives the dump request and starts pushing the binary log to the slave (i.e. Canal).
Canal parses the binary log objects (which arrive as a raw byte stream).

Official quick start: https://github.com/alibaba/canal/wiki/QuickStart
1. Enable binlog in MySQL
On the MySQL host:

  linux>vi /etc/my.cnf
  server-id=1
  log-bin=mysql-bin
  binlog_format=row
  binlog-do-db=testdb   # restrict binlog to this database

2. Restart the MySQL service

  linux>systemctl restart mysqld

3. Check that binlog is in effect (see also the check below):

  linux>ls /var/lib/mysql
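You can also ask MySQL directly whether binary logging is on; a quick check with standard MySQL commands:

  mysql>SHOW VARIABLES LIKE 'log_bin';   -- should report ON
  mysql>SHOW MASTER STATUS;              -- shows the current binlog file and position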

4. Unpack the Canal tarball

  linux>tar zxvf canal-*.tar.gz -C canal
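Note that the canal.deployer tarball unpacks its bin/, conf/, and logs/ directories straight into the target directory (there is no top-level folder), so the target directory must exist first. For example (the exact tarball name depends on the version you downloaded):

  linux>mkdir canal
  linux>tar zxvf canal.deployer-1.1.4.tar.gz -C canal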

5. Database permissions
Log in to MySQL (a follow-up note for MySQL 8 is below):

  mysql>GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%' IDENTIFIED BY 'canal';
  mysql>flush privileges;
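On MySQL 8.0 and later, GRANT ... IDENTIFIED BY is no longer valid and the user has to be created first. An equivalent sequence for that case:

  mysql>CREATE USER 'canal'@'%' IDENTIFIED BY 'canal';
  mysql>GRANT SELECT, REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'canal'@'%';
  mysql>flush privileges;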

6. Configuration files
1. Edit the canal.properties file

  linux>vi conf/canal.properties
  canal.instance.parser.parallelThreadSize = 1

2. Edit the instance.properties file (see the note on credentials below)

  linux>vi conf/example/instance.properties
  canal.instance.mysql.slaveId=21
  canal.instance.master.address=192.168.58.203:3306
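The same file also carries the database credentials Canal uses to connect. In the stock example instance.properties these default to the canal account granted above (worth verifying against your own file):

  canal.instance.dbUsername=canal
  canal.instance.dbPassword=canal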

7. Start the service and check the process

  linux>bin/startup.sh
  linux>jps
  xxxx CanalLauncher

Check the log to confirm the server started:

  linux>cat /opt/install/canal/logs/canal/canal.log

IDEA client
pom.xml:

      <dependency>
          <groupId>com.alibaba.otter</groupId>
          <artifactId>canal.client</artifactId>
          <version>1.1.2</version>
      </dependency>
      <dependency>
          <groupId>org.apache.kafka</groupId>
          <artifactId>kafka-clients</artifactId>
          <version>2.4.1</version>
      </dependency>
CanalClientDemo.java:

import java.net.InetSocketAddress;
import java.util.List;
import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.CanalEntry.Column;
import com.alibaba.otter.canal.protocol.CanalEntry.Entry;
import com.alibaba.otter.canal.protocol.CanalEntry.EntryType;
import com.alibaba.otter.canal.protocol.CanalEntry.EventType;
import com.alibaba.otter.canal.protocol.CanalEntry.RowChange;
import com.alibaba.otter.canal.protocol.CanalEntry.RowData;
import com.alibaba.otter.canal.protocol.Message;

public class CanalClientDemo {
    public static void main(String[] args) {
        // create the connection (the address is the host running Canal)
        CanalConnector connector = CanalConnectors.newSingleConnector(
                new InetSocketAddress("192.168.58.203", 11111), "example", "", "");
        int batchSize = 1000;
        int emptyCount = 0;
        try {
            connector.connect();
            // subscribe to all tables in the testdb database
            connector.subscribe("testdb.*");
            connector.rollback();
            int totalEmptyCount = 120;
            while (emptyCount < totalEmptyCount) {
                Message message = connector.getWithoutAck(batchSize); // fetch up to batchSize entries
                long batchId = message.getId();
                int size = message.getEntries().size();
                if (batchId == -1 || size == 0) { // batchId of -1 means nothing was fetched
                    emptyCount++;
                    System.out.println("empty count : " + emptyCount);
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                    }
                } else {
                    emptyCount = 0;
                    // System.out.printf("message[batchId=%s,size=%s] \n", batchId, size);
                    printEntry(message.getEntries());
                }
                connector.ack(batchId); // acknowledge the batch
                // connector.rollback(batchId); // on failure, roll the batch back
            }
            System.out.println("empty too many times, exit");
        } finally {
            connector.disconnect();
        }
    }

    private static void printEntry(List<CanalEntry.Entry> entrys) {
        for (Entry entry : entrys) {
            if (entry.getEntryType() == EntryType.TRANSACTIONBEGIN
                    || entry.getEntryType() == EntryType.TRANSACTIONEND) {
                continue;
            }
            RowChange rowChange = null;
            try {
                rowChange = RowChange.parseFrom(entry.getStoreValue());
            } catch (Exception e) {
                throw new RuntimeException(
                        "ERROR ## parser of eromanga-event has an error , data:" + entry.toString(), e);
            }
            EventType eventType = rowChange.getEventType();
            System.out.println(String.format("================> binlog[%s:%s] , name[%s,%s] , eventType : %s",
                    entry.getHeader().getLogfileName(), entry.getHeader().getLogfileOffset(),
                    entry.getHeader().getSchemaName(), entry.getHeader().getTableName(),
                    eventType));
            for (RowData rowData : rowChange.getRowDatasList()) {
                if (eventType == EventType.DELETE) {
                    printColumn(rowData.getBeforeColumnsList());
                } else if (eventType == EventType.INSERT) {
                    printColumn(rowData.getAfterColumnsList());
                } else {
                    System.out.println("-------> before");
                    printColumn(rowData.getBeforeColumnsList());
                    System.out.println("-------> after");
                    printColumn(rowData.getAfterColumnsList());
                }
            }
        }
    }

    private static void printColumn(List<Column> columns) {
        for (Column column : columns) {
            System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());
        }
    }
}
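The pom above pulls in kafka-clients, but the demo only prints to stdout. If you want the IDEA client itself to push parsed rows into Kafka (rather than using Canal's built-in Kafka mode described below), here is a minimal sketch; the topic name and broker list are assumptions carried over from the Kafka section of this article, and sendColumns is a hypothetical helper, not part of the Canal API:

import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import com.alibaba.otter.canal.protocol.CanalEntry.Column;

public class CanalToKafkaSketch {
    // Assumed topic and brokers, taken from the Kafka section of this article.
    private static final String TOPIC = "example";
    private static final String BROKERS =
            "192.168.58.201:9092,192.168.58.202:9092,192.168.58.203:9092";

    static KafkaProducer<String, String> buildProducer() {
        Properties props = new Properties();
        props.put("bootstrap.servers", BROKERS);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    // Hypothetical helper: could be called from printColumn() in CanalClientDemo
    // instead of System.out.println. Keyed by table name so rows from the same
    // table land in the same partition.
    static void sendColumns(KafkaProducer<String, String> producer, String table, List<Column> columns) {
        StringBuilder sb = new StringBuilder(table).append(':');
        for (Column column : columns) {
            sb.append(' ').append(column.getName()).append('=').append(column.getValue());
        }
        producer.send(new ProducerRecord<>(TOPIC, table, sb.toString()));
    }
}

Remember to call producer.close() on shutdown so buffered records are flushed.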

Notes: Canal only runs on Java 8; if the CanalLauncher process will not start, check the local Java environment (see the checks below).
If CanalClientDemo reports "connection refused", make sure the address in the code points at the host actually running Canal, i.e. the first argument of CanalConnectors.newSingleConnector(new InetSocketAddress("192.168.58.203", 11111), ...).
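A couple of quick checks for those two failure modes, using plain standard tooling (nothing Canal-specific):

  linux>java -version                  # Canal expects JDK 8
  linux>jps | grep CanalLauncher       # is the server process alive?
  linux>telnet 192.168.58.203 11111    # is the Canal port reachable from the client machine?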

Canal-Kafka configuration on GitHub: https://github.com/alibaba/canal/wiki/Canal-Kafka-RocketMQ-QuickStart

Syncing data to Kafka with Canal
# Unpack a fresh copy of Canal onto nodefour for this configuration
1. Edit the canal.properties file

  linux>vi canal.properties
  canal.serverMode = kafka
  canal.instance.parser.parallel = false
  canal.mq.servers = 192.168.58.201:9092,192.168.58.202:9092,192.168.58.203:9092

2. Edit the instance.properties file

  linux>vi instance.properties
  canal.instance.mysql.slaveId=21
  canal.instance.master.address=192.168.58.203:3306
  canal.mq.topic=example

3. The topic can be created in advance, but it does not have to be (see the sketch below).
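If you prefer to create it explicitly, the standard Kafka CLI does the job; the partition and replication counts below are arbitrary examples:

  bin/kafka-topics.sh --create --topic example --partitions 3 --replication-factor 2 --bootstrap-server 192.168.58.201:9092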

Start a consumer on Kafka:

  bin/kafka-console-consumer.sh --topic example --from-beginning --bootstrap-server 192.168.58.201:9092,192.168.58.202:9092,192.168.58.203:9092

Start Canal:

  linux>bin/startup.sh

Result: once rows change in testdb, the console consumer should print the corresponding change messages.
