This post records how to install Kafka on Linux and run a simple test. ZooKeeper is required; if it is not installed yet, set it up first by following a ZooKeeper installation guide.
1. Download the installation package
Download address: http://kafka.apache.org/downloads. Take care to download the binary release, not the source package.
2. Upload to the server
Upload the package to the server with the rz command.
Extract it:
[root@localhost local]# tar -zxvf kafka_2.11-2.1.1.tgz
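If no standalone ZooKeeper is running yet, the extracted Kafka package also ships a bundled ZooKeeper that is good enough for a single-node test. A minimal way to start it, assuming the default config/zookeeper.properties, is:
[root@localhost bin]# ./zookeeper-server-start.sh ../config/zookeeper.properties &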
3. Modify the configuration file
Only a few important settings in config/server.properties are listed here; for a single-machine test the remaining settings can be left at their defaults:
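As a reference, here is a minimal sketch of the settings that usually matter; the values below are only examples (replace xx.xx.xx.xx and the log directory with your own):
# unique ID of this broker in the cluster (matches the brokerId=1 seen in the startup log below)
broker.id=1
# address and port the broker listens on for client connections
listeners=PLAINTEXT://xx.xx.xx.xx:9092
# directory where Kafka stores message log data
log.dirs=/usr/local/kafka_2.11-2.1.1/kafka-logs
# ZooKeeper connection string
zookeeper.connect=xx.xx.xx.xx:2181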
4. Start Kafka
[root@localhost bin]# ./kafka-server-start.sh ../config/server.properties &
The trailing & runs the broker in the background; leave the session with exit so the process keeps running.
(Another way is: sh kafka-server-start.sh ../config/server.properties 1>/dev/null 2>&1 &
Here 1>/dev/null 2>&1 redirects both standard output and standard error to the null device, i.e. all output is discarded.
/dev/null is the null device.)
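kafka-server-start.sh also accepts a -daemon option that detaches the broker without redirecting the output by hand; roughly:
[root@localhost bin]# ./kafka-server-start.sh -daemon ../config/server.properties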
After startup a burst of log output scrolls by, ending with messages like the following:
[2019-02-28 10:49:13,727] INFO [ExpirationReaper-1-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
[2019-02-28 10:49:14,551] INFO [GroupCoordinator 1]: Starting up. (kafka.coordinator.group.GroupCoordinator)
[2019-02-28 10:49:14,600] INFO [GroupCoordinator 1]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
[2019-02-28 10:49:14,866] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 228 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2019-02-28 10:49:15,019] INFO [ProducerId Manager 1]: Acquired new producerId block (brokerId:1,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
[2019-02-28 10:49:15,143] INFO [TransactionCoordinator id=1] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-02-28 10:49:15,279] INFO [TransactionCoordinator id=1] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
[2019-02-28 10:49:15,317] INFO [Transaction Marker Channel Manager 1]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
[2019-02-28 10:49:18,420] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
[2019-02-28 10:49:18,631] INFO [SocketServer brokerId=1] Started processors for 1 acceptors (kafka.network.SocketServer)
[2019-02-28 10:49:18,690] INFO Kafka version : 2.1.1 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-28 10:49:18,709] INFO Kafka commitId : 21234bee31165527 (org.apache.kafka.common.utils.AppInfoParser)
[2019-02-28 10:49:18,713] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
[2019-02-28 10:59:14,552] INFO [GroupMetadataManager brokerId=1] Removed 0 expired off
You can also verify that the ports are listening:
[root@localhost kafka_2.11-2.1.1]# netstat -tunlp|egrep "(2181|9092)"
tcp6 0 0 :::9092 :::* LISTEN 14019/java
tcp6 0 0 :::2181 :::* LISTEN 11938/java
[root@localhost kafka_2.11-2.1.1]#
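Another quick check is to look for the Kafka JVM process itself; jps lists the main class, which should show kafka.Kafka with the same PID seen in the netstat output above:
[root@localhost kafka_2.11-2.1.1]# jps -l | grep kafka.Kafka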
5. Create a topic
Create a topic named "wangtest" with one partition and one replica:
[root@localhost bin]# ./kafka-topics.sh --create --zookeeper xx.xx.xx.xx:2181 --replication-factor 1 --partitions 1 --topic wangtest
Created topic "wangtest".
[root@localhost bin]#
List the topics to confirm it was created:
[root@localhost bin]# ./kafka-topics.sh --list --zookeeper xx.xx.xx.xx:2181
wangtest
[root@localhost bin]#
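To inspect the partition and replica assignment of the topic, the --describe option can also be used, for example:
[root@localhost bin]# ./kafka-topics.sh --describe --zookeeper xx.xx.xx.xx:2181 --topic wangtest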
6. Produce messages
Send a few messages with the console producer to test:
[root@localhost bin]# ./kafka-console-producer.sh --broker-list xx.x.xx.xx:9092 --topic wangtest
>this is a test message from wangwang
>from itqunqi^H^H^H^H^H^[[3~
>Ties^H^H^H^H
>this test from itu^H
>this test from ityunqing
>
7. Consume messages
In another terminal, consume the messages that were just written (--from-beginning reads the topic from the earliest offset, so earlier messages are included):
[root@localhost bin]# ./kafka-console-consumer.sh --bootstrap-server xx.xx.xx.xx:9092 --topic wangtest --from-beginning
this is a test message from wangwang
from itqunqi
Ties
this test from itu
this test from ityunqing
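Once the test is done, the topic and the broker can be cleaned up. A rough sketch, assuming topic deletion is enabled (delete.topic.enable defaults to true in 2.1.x):
[root@localhost bin]# ./kafka-topics.sh --delete --zookeeper xx.xx.xx.xx:2181 --topic wangtest
[root@localhost bin]# ./kafka-server-stop.sh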