Forum: Cloud/Virtualization
Docker with Kafka: "Connection to node -1 (localhost/127.0.0.1:9092) could not be established"
Tony Evans
Ranch Hand
Posts: 681
posted 2 years ago
I am trying to run a producer that connects to Kafka through Docker, but I keep getting the following error:

Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.

I think the problem is in what I have set up for Kafka. From the logs:
[2021-06-03 12:36:00,527] INFO KafkaConfig values:
kafka | advertised.host.name = null
kafka | advertised.listeners = LISTENER_DOCKER_INTERNAL://kafka:19092,LISTENER_DOCKER_EXTERNAL://localhost:9092
kafka | advertised.port = null
But the producer logs show:

bootstrap.servers = [localhost:9092]

The bootstrap server should be set to kafka:19092.
So in my applications.yml I have:

server:
  port: 8080
spring:
  kafka:
    producer:
      bootstrap-servers: kafka:19092
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
So my first question is: why does bootstrap-servers default to localhost:9092?
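One way to narrow this down is to check what value the client actually resolves, independent of Spring's property loading. The following is a minimal standalone sketch (the class name BootstrapCheck is hypothetical, and it uses a plain java.util.Properties rather than the real Spring wiring) that builds the same producer config by hand and prints the effective bootstrap.servers value:

```java
import java.util.Properties;

// Hypothetical standalone check: build the same producer config by hand
// and print the effective bootstrap.servers value. If the real app logs
// localhost:9092 while this prints kafka:19092, the YAML file is likely
// not being picked up inside the container.
public class BootstrapCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Inside the compose network the broker is advertised as kafka:19092
        props.put("bootstrap.servers", "kafka:19092");
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
        System.out.println(props.getProperty("bootstrap.servers"));
    }
}
```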
I have set up a docker-compose.yaml:

version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    container_name: zookeper
    ports:
      - 2181:2181
  kafka:
    image: wurstmeister/kafka
    container_name: kafka
    ports:
      - 9092:9092
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:19092,LISTENER_DOCKER_EXTERNAL://localhost:9092
      KAFKA_LISTENERS: LISTENER_DOCKER_INTERNAL://kafka:19092,LISTENER_DOCKER_EXTERNAL://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: LISTENER_DOCKER_EXTERNAL
      KAFKA_CREATE_TOPICS: "orders:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
  publish-service:
    build: ./CurrencyExchangeProducerKafkaDocker/
    image: producerkafkaspringboot/kafka-producer-springboot
    ports:
      - 8080:8080
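If the application inside the container still resolves localhost:9092, one thing worth trying is to force the value from the compose file via an environment variable, which Spring Boot's relaxed binding maps onto spring.kafka.producer.bootstrap-servers. This is a sketch, not a confirmed fix; the depends_on entry is also an addition, so the broker starts before the producer:

  publish-service:
    build: ./CurrencyExchangeProducerKafkaDocker/
    image: producerkafkaspringboot/kafka-producer-springboot
    environment:
      SPRING_KAFKA_PRODUCER_BOOTSTRAP_SERVERS: kafka:19092
    depends_on:
      - kafka
    ports:
      - 8080:8080

An environment variable set this way takes precedence over the packaged YAML, so it also helps distinguish "wrong value" from "file not read at all".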
It builds and runs the three containers:

938f8a898fc3   wurstmeister/zookeeper                              "/bin/sh -c '/usr/sb…"   20 seconds ago   Up 18 seconds   22/tcp, 2888/tcp, 3888/tcp, 0.0.0.0:2181->2181/tcp, :::2181->2181/tcp   zookeper
2ba8d22be6a0   producerkafkaspringboot/kafka-producer-springboot   "java -jar /Currency…"   20 seconds ago   Up 18 seconds   0.0.0.0:8080->8080/tcp, :::8080->8080/tcp   kafkatwo_publish-service_1
3411a92ea52c   wurstmeister/kafka                                  "start-kafka.sh"         20 seconds ago   Up 18 seconds   0.0.0.0:9092->9092/tcp, :::9092->9092/tcp
Tony Evans
posted 2 years ago
The full log output:
C:\Users\Tony\workdev\docker\Kafka\springboot\KafkaTwo>docker-compose up Docker Compose is now in the Docker CLI, try `docker compose up` Creating network "kafkatwo_default" with the default driver Creating zookeper ... done Creating kafka ... done Creating kafkatwo_publish-service_1 ... done Attaching to kafka, zookeper, kafkatwo_publish-service_1 kafka | waiting for kafka to be ready kafka | [Configuring] 'inter.broker.listener.name' in '/opt/kafka/config/server.properties' kafka | Excluding KAFKA_HOME from broker config kafka | [Configuring] 'port' in '/opt/kafka/config/server.properties' kafka | [Configuring] 'advertised.listeners' in '/opt/kafka/config/server.properties' kafka | [Configuring] 'listener.security.protocol.map' in '/opt/kafka/config/server.properties' kafka | [Configuring] 'broker.id' in '/opt/kafka/config/server.properties' kafka | Excluding KAFKA_VERSION from broker config kafka | [Configuring] 'listeners' in '/opt/kafka/config/server.properties' kafka | [Configuring] 'zookeeper.connect' in '/opt/kafka/config/server.properties' kafka | [Configuring] 'log.dirs' in '/opt/kafka/config/server.properties' zookeper | ZooKeeper JMX enabled by default zookeper | Using config: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg zookeper | 2021-06-03 12:35:58,607 [myid:] - INFO [main:QuorumPeerConfig@136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg zookeper | 2021-06-03 12:35:58,613 [myid:] - INFO [main:DatadirCleanupManager@78] - autopurge.snapRetainCount set to 3 zookeper | 2021-06-03 12:35:58,614 [myid:] - INFO [main:DatadirCleanupManager@79] - autopurge.purgeInterval set to 1 zookeper | 2021-06-03 12:35:58,615 [myid:] - WARN [main:QuorumPeerMain@116] - Either no config or no quorum defined in config, running in standalone mode zookeper | 2021-06-03 12:35:58,616 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@138] - Purge task started. 
zookeper | 2021-06-03 12:35:58,631 [myid:] - INFO [PurgeTask:DatadirCleanupManager$PurgeTask@144] - Purge task completed. zookeper | 2021-06-03 12:35:58,635 [myid:] - INFO [main:QuorumPeerConfig@136] - Reading configuration from: /opt/zookeeper-3.4.13/bin/../conf/zoo.cfg zookeper | 2021-06-03 12:35:58,636 [myid:] - INFO [main:ZooKeeperServerMain@98] - Starting server zookeper | 2021-06-03 12:35:58,648 [myid:] - INFO [main:Environment@100] - Server environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT zookeper | 2021-06-03 12:35:58,648 [myid:] - INFO [main:Environment@100] - Server environment:host.name=938f8a898fc3 zookeper | 2021-06-03 12:35:58,648 [myid:] - INFO [main:Environment@100] - Server environment:java.version=1.7.0_65 zookeper | 2021-06-03 12:35:58,649 [myid:] - INFO [main:Environment@100] - Server environment:java.vendor=Oracle Corporation zookeper | 2021-06-03 12:35:58,649 [myid:] - INFO [main:Environment@100] - Server environment:java.home=/usr/lib/jvm/java-7-openjdk-amd64/jre zookeper | 2021-06-03 12:35:58,650 [myid:] - INFO [main:Environment@100] - Server environment:java.class.path=/opt/zookeeper-3.4.13/bin/../build/classes:/opt/zookeeper-3.4.13/bin/../build/lib/*.jar:/opt/zookeeper-3.4.13/bin/../lib/slf4j-log4j12-1.7.25.jar:/opt/zookeeper-3.4.13/bin/../lib/slf4j-api-1.7.25.jar:/opt/zookeeper-3.4.13/bin/../lib/netty-3.10.6.Final.jar:/opt/zookeeper-3.4.13/bin/../lib/log4j-1.2.17.jar:/opt/zookeeper-3.4.13/bin/../lib/jline-0.9.94.jar:/opt/zookeeper-3.4.13/bin/../lib/audience-annotations-0.5.0.jar:/opt/zookeeper-3.4.13/bin/../zookeeper-3.4.13.jar:/opt/zookeeper-3.4.13/bin/../src/java/lib/*.jar:/opt/zookeeper-3.4.13/bin/../conf: zookeper | 2021-06-03 12:35:58,650 [myid:] - INFO [main:Environment@100] - Server environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib zookeper | 2021-06-03 
12:35:58,650 [myid:] - INFO [main:Environment@100] - Server environment:java.io.tmpdir=/tmp zookeper | 2021-06-03 12:35:58,655 [myid:] - INFO [main:Environment@100] - Server environment:java.compiler=<NA> zookeper | 2021-06-03 12:35:58,655 [myid:] - INFO [main:Environment@100] - Server environment:os.name=Linux zookeper | 2021-06-03 12:35:58,655 [myid:] - INFO [main:Environment@100] - Server environment:os.arch=amd64 zookeper | 2021-06-03 12:35:58,655 [myid:] - INFO [main:Environment@100] - Server environment:os.version=5.4.72-microsoft-standard-WSL2 zookeper | 2021-06-03 12:35:58,655 [myid:] - INFO [main:Environment@100] - Server environment:user.name=root zookeper | 2021-06-03 12:35:58,656 [myid:] - INFO [main:Environment@100] - Server environment:user.home=/root zookeper | 2021-06-03 12:35:58,656 [myid:] - INFO [main:Environment@100] - Server environment:user.dir=/opt/zookeeper-3.4.13 zookeper | 2021-06-03 12:35:58,662 [myid:] - INFO [main:ZooKeeperServer@836] - tickTime set to 2000 zookeper | 2021-06-03 12:35:58,662 [myid:] - INFO [main:ZooKeeperServer@845] - minSessionTimeout set to -1 zookeper | 2021-06-03 12:35:58,663 [myid:] - INFO [main:ZooKeeperServer@854] - maxSessionTimeout set to -1 zookeper | 2021-06-03 12:35:58,680 [myid:] - INFO [main:ServerCnxnFactory@117] - Using org.apache.zookeeper.server.NIOServerCnxnFactory as server connection factory zookeper | 2021-06-03 12:35:58,686 [myid:] - INFO [main:NIOServerCnxnFactory@89] - binding to port 0.0.0.0/0.0.0.0:2181 kafka | [2021-06-03 12:35:59,158] INFO Registered kafka:type=kafka.Log4jController MBean (kafka.utils.Log4jControllerRegistration$) publish-service_1 | publish-service_1 | . 
____ _ __ _ _ publish-service_1 | /\\ / ___'_ __ _ _(_)_ __ __ _ \ \ \ \ publish-service_1 | ( ( )\___ | '_ | '_| | '_ \/ _` | \ \ \ \ publish-service_1 | \\/ ___)| |_)| | | | | || (_| | ) ) ) ) publish-service_1 | ' |____| .__|_| |_|_| |_\__, | / / / / publish-service_1 | =========|_|==============|___/=/_/_/_/ publish-service_1 | :: Spring Boot :: (v2.5.0) publish-service_1 | kafka | [2021-06-03 12:35:59,696] INFO Setting -D jdk.tls.rejectClientInitiatedRenegotiation=true to disable client-initiated TLS renegotiation (org.apache.zookeeper.common.X509Util) publish-service_1 | 2021-06-03 12:35:59.716 INFO 1 --- [ main] cyExchangeProducerKafkaDockerApplication : Starting CurrencyExchangeProducerKafkaDockerApplication using Java 1.8.0_212 on 2ba8d22be6a0 with PID 1 (/CurrencyExchangeProducerKafkaDocker-0.0.1-SNAPSHOT started by root in /) publish-service_1 | 2021-06-03 12:35:59.720 INFO 1 --- [ main] cyExchangeProducerKafkaDockerApplication : No active profile set, falling back to default profiles: default kafka | [2021-06-03 12:35:59,801] INFO Registered signal handlers for TERM, INT, HUP (org.apache.kafka.common.utils.LoggingSignalHandler) kafka | [2021-06-03 12:35:59,808] INFO starting (kafka.server.KafkaServer) kafka | [2021-06-03 12:35:59,811] INFO Connecting to zookeeper on zookeeper:2181 (kafka.server.KafkaServer) kafka | [2021-06-03 12:35:59,843] INFO [ZooKeeperClient Kafka server] Initializing a new session to zookeeper:2181. 
(kafka.zookeeper.ZooKeeperClient) kafka | [2021-06-03 12:35:59,852] INFO Client environment:zookeeper.version=3.5.8-f439ca583e70862c3068a1f2a7d4d068eec33315, built on 05/04/2020 15:53 GMT (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,853] INFO Client environment:host.name=3411a92ea52c (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,853] INFO Client environment:java.version=1.8.0_212 (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,853] INFO Client environment:java.vendor=IcedTea (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,853] INFO Client environment:java.home=/usr/lib/jvm/java-1.8-openjdk/jre (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,853] INFO Client environment:java.class.path=/opt/kafka/bin/../libs/activation-1.1.1.jar:/opt/kafka/bin/../libs/aopalliance-repackaged-2.6.1.jar:/opt/kafka/bin/../libs/argparse4j-0.7.0.jar:/opt/kafka/bin/../libs/audience-annotations-0.5.0.jar:/opt/kafka/bin/../libs/commons-cli-1.4.jar:/opt/kafka/bin/../libs/commons-lang3-3.8.1.jar:/opt/kafka/bin/../libs/connect-api-2.7.0.jar:/opt/kafka/bin/../libs/connect-basic-auth-extension-2.7.0.jar:/opt/kafka/bin/../libs/connect-file-2.7.0.jar:/opt/kafka/bin/../libs/connect-json-2.7.0.jar:/opt/kafka/bin/../libs/connect-mirror-2.7.0.jar:/opt/kafka/bin/../libs/connect-mirror-client-2.7.0.jar:/opt/kafka/bin/../libs/connect-runtime-2.7.0.jar:/opt/kafka/bin/../libs/connect-transforms-2.7.0.jar:/opt/kafka/bin/../libs/hk2-api-2.6.1.jar:/opt/kafka/bin/../libs/hk2-locator-2.6.1.jar:/opt/kafka/bin/../libs/hk2-utils-2.6.1.jar:/opt/kafka/bin/../libs/jackson-annotations-2.10.5.jar:/opt/kafka/bin/../libs/jackson-core-2.10.5.jar:/opt/kafka/bin/../libs/jackson-databind-2.10.5.1.jar:/opt/kafka/bin/../libs/jackson-dataformat-csv-2.10.5.jar:/opt/kafka/bin/../libs/jackson-datatype-jdk8-2.10.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-base-2.10.5.jar:/opt/kafka/bin/../libs/jackson-jaxrs-json-provider-2.10.5.jar:/opt/kafka/bin/../libs/jackso
n-module-jaxb-annotations-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-paranamer-2.10.5.jar:/opt/kafka/bin/../libs/jackson-module-scala_2.13-2.10.5.jar:/opt/kafka/bin/../libs/jakarta.activation-api-1.2.1.jar:/opt/kafka/bin/../libs/jakarta.annotation-api-1.3.5.jar:/opt/kafka/bin/../libs/jakarta.inject-2.6.1.jar:/opt/kafka/bin/../libs/jakarta.validation-api-2.0.2.jar:/opt/kafka/bin/../libs/jakarta.ws.rs-api-2.1.6.jar:/opt/kafka/bin/../libs/jakarta.xml.bind-api-2.3.2.jar:/opt/kafka/bin/../libs/javassist-3.25.0-GA.jar:/opt/kafka/bin/../libs/javassist-3.26.0-GA.jar:/opt/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/opt/kafka/bin/../libs/javax.ws.rs-api-2.1.1.jar:/opt/kafka/bin/../libs/jaxb-api-2.3.0.jar:/opt/kafka/bin/../libs/jersey-client-2.31.jar:/opt/kafka/bin/../libs/jersey-common-2.31.jar:/opt/kafka/bin/../libs/jersey-container-servlet-2.31.jar:/opt/kafka/bin/../libs/jersey-container-servlet-core-2.31.jar:/opt/kafka/bin/../libs/jersey-hk2-2.31.jar:/opt/kafka/bin/../libs/jersey-media-jaxb-2.31.jar:/opt/kafka/bin/../libs/jersey-server-2.31.jar:/opt/kafka/bin/../libs/jetty-client-9.4.33.v20201020.jar:/opt/kafka/bin/../libs/jetty-continuation-9.4.33.v20201020.jar:/opt/kafka/bin/../libs/jetty-http-9.4.33.v20201020.jar:/opt/kafka/bin/../libs/jetty-io-9.4.33.v20201020.jar:/opt/kafka/bin/../libs/jetty-security-9.4.33.v20201020.jar:/opt/kafka/bin/../libs/jetty-server-9.4.33.v20201020.jar:/opt/kafka/bin/../libs/jetty-servlet-9.4.33.v20201020.jar:/opt/kafka/bin/../libs/jetty-servlets-9.4.33.v20201020.jar:/opt/kafka/bin/../libs/jetty-util-9.4.33.v20201020.jar:/opt/kafka/bin/../libs/jopt-simple-5.0.4.jar:/opt/kafka/bin/../libs/kafka-clients-2.7.0.jar:/opt/kafka/bin/../libs/kafka-log4j-appender-2.7.0.jar:/opt/kafka/bin/../libs/kafka-raft-2.7.0.jar:/opt/kafka/bin/../libs/kafka-streams-2.7.0.jar:/opt/kafka/bin/../libs/kafka-streams-examples-2.7.0.jar:/opt/kafka/bin/../libs/kafka-streams-scala_2.13-2.7.0.jar:/opt/kafka/bin/../libs/kafka-streams-test-utils-2.7.0.jar:/opt/kafka/b
in/../libs/kafka-tools-2.7.0.jar:/opt/kafka/bin/../libs/kafka_2.13-2.7.0-sources.jar:/opt/kafka/bin/../libs/kafka_2.13-2.7.0.jar:/opt/kafka/bin/../libs/log4j-1.2.17.jar:/opt/kafka/bin/../libs/lz4-java-1.7.1.jar:/opt/kafka/bin/../libs/maven-artifact-3.6.3.jar:/opt/kafka/bin/../libs/metrics-core-2.2.0.jar:/opt/kafka/bin/../libs/netty-buffer-4.1.51.Final.jar:/opt/kafka/bin/../libs/netty-codec-4.1.51.Final.jar:/opt/kafka/bin/../libs/netty-common-4.1.51.Final.jar:/opt/kafka/bin/../libs/netty-handler-4.1.51.Final.jar:/opt/kafka/bin/../libs/netty-resolver-4.1.51.Final.jar:/opt/kafka/bin/../libs/netty-transport-4.1.51.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-epoll-4.1.51.Final.jar:/opt/kafka/bin/../libs/netty-transport-native-unix-common-4.1.51.Final.jar:/opt/kafka/bin/../libs/osgi-resource-locator-1.0.3.jar:/opt/kafka/bin/../libs/paranamer-2.8.jar:/opt/kafka/bin/../libs/plexus-utils-3.2.1.jar:/opt/kafka/bin/../libs/reflections-0.9.12.jar:/opt/kafka/bin/../libs/rocksdbjni-5.18.4.jar:/opt/kafka/bin/../libs/scala-collection-compat_2.13-2.2.0.jar:/opt/kafka/bin/../libs/scala-java8-compat_2.13-0.9.1.jar:/opt/kafka/bin/../libs/scala-library-2.13.3.jar:/opt/kafka/bin/../libs/scala-logging_2.13-3.9.2.jar:/opt/kafka/bin/../libs/scala-reflect-2.13.3.jar:/opt/kafka/bin/../libs/slf4j-api-1.7.30.jar:/opt/kafka/bin/../libs/slf4j-log4j12-1.7.30.jar:/opt/kafka/bin/../libs/snappy-java-1.1.7.7.jar:/opt/kafka/bin/../libs/zookeeper-3.5.8.jar:/opt/kafka/bin/../libs/zookeeper-jute-3.5.8.jar:/opt/kafka/bin/../libs/zstd-jni-1.4.5-6.jar (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,854] INFO Client environment:java.library.path=/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64/server:/usr/lib/jvm/java-1.8-openjdk/jre/lib/amd64:/usr/lib/jvm/java-1.8-openjdk/jre/../lib/amd64:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,855] INFO Client environment:java.io.tmpdir=/tmp 
(org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,855] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,855] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,855] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,855] INFO Client environment:os.version=5.4.72-microsoft-standard-WSL2 (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,855] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,855] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,855] INFO Client environment:user.dir=/ (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,856] INFO Client environment:os.memory.free=974MB (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,856] INFO Client environment:os.memory.max=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,856] INFO Client environment:os.memory.total=1024MB (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,861] INFO Initiating client connection, connectString=zookeeper:2181 sessionTimeout=18000 watcher=kafka.zookeeper.ZooKeeperClient$ZooKeeperClientWatcher$@59fd97a8 (org.apache.zookeeper.ZooKeeper) kafka | [2021-06-03 12:35:59,872] INFO jute.maxbuffer value is 4194304 Bytes (org.apache.zookeeper.ClientCnxnSocket) kafka | [2021-06-03 12:35:59,886] INFO zookeeper.request.timeout value is 0. feature enabled= (org.apache.zookeeper.ClientCnxn) kafka | [2021-06-03 12:35:59,917] INFO [ZooKeeperClient Kafka server] Waiting until connected. (kafka.zookeeper.ZooKeeperClient) kafka | [2021-06-03 12:35:59,924] INFO Opening socket connection to server zookeeper/192.168.240.4:2181. 
Will not attempt to authenticate using SASL (unknown error) (org.apache.zookeeper.ClientCnxn) kafka | [2021-06-03 12:35:59,932] INFO Socket connection established, initiating session, client: /192.168.240.2:42636, server: zookeeper/192.168.240.4:2181 (org.apache.zookeeper.ClientCnxn) zookeper | 2021-06-03 12:35:59,934 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /192.168.240.2:42636 zookeper | 2021-06-03 12:35:59,943 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /192.168.240.2:42636 zookeper | 2021-06-03 12:35:59,947 [myid:] - INFO [SyncThread:0:FileTxnLog@213] - Creating new log file: log.1 publish-service_1 | 2021-06-03 12:35:59.966 WARN 1 --- [kground-preinit] o.s.h.c.j.Jackson2ObjectMapperBuilder : For Jackson Kotlin classes support please add "com.fasterxml.jackson.module:jackson-module-kotlin" to the classpath zookeper | 2021-06-03 12:35:59,971 [myid:] - INFO [SyncThread:0:ZooKeeperServer@694] - Established session 0x1000a1d0b8f0000 with negotiated timeout 18000 for client /192.168.240.2:42636 kafka | [2021-06-03 12:35:59,973] INFO Session establishment complete on server zookeeper/192.168.240.4:2181, sessionid = 0x1000a1d0b8f0000, negotiated timeout = 18000 (org.apache.zookeeper.ClientCnxn) kafka | [2021-06-03 12:35:59,979] INFO [ZooKeeperClient Kafka server] Connected. 
(kafka.zookeeper.ZooKeeperClient) zookeper | 2021-06-03 12:36:00,076 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x1000a1d0b8f0000 type:create cxid:0x2 zxid:0x3 txntype:-1 reqpath:n/a Error Path:/brokers Error:KeeperErrorCode = NoNode for /brokers zookeper | 2021-06-03 12:36:00,088 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x1000a1d0b8f0000 type:create cxid:0x6 zxid:0x7 txntype:-1 reqpath:n/a Error Path:/config Error:KeeperErrorCode = NoNode for /config zookeper | 2021-06-03 12:36:00,098 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x1000a1d0b8f0000 type:create cxid:0x9 zxid:0xa txntype:-1 reqpath:n/a Error Path:/admin Error:KeeperErrorCode = NoNode for /admin kafka | [2021-06-03 12:36:00,140] INFO [feature-zk-node-event-process-thread]: Starting (kafka.server.FinalizedFeatureChangeListener$ChangeNotificationProcessorThread) kafka | [2021-06-03 12:36:00,159] INFO Feature ZK node at path: /feature does not exist (kafka.server.FinalizedFeatureChangeListener) kafka | [2021-06-03 12:36:00,160] INFO Cleared cache (kafka.server.FinalizedFeatureCache) zookeper | 2021-06-03 12:36:00,407 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x1000a1d0b8f0000 type:create cxid:0x17 zxid:0x15 txntype:-1 reqpath:n/a Error Path:/cluster Error:KeeperErrorCode = NoNode for /cluster kafka | [2021-06-03 12:36:00,425] INFO Cluster ID = rH6nabEKRWi79GO2G2ckbg (kafka.server.KafkaServer) kafka | [2021-06-03 12:36:00,431] WARN No meta.properties file under dir /kafka/kafka-logs-3411a92ea52c/meta.properties (kafka.server.BrokerMetadataCheckpoint)
Tony Evans
posted 2 years ago
kafka | [2021-06-03 12:36:00,527] INFO KafkaConfig values: kafka | advertised.host.name = null kafka | advertised.listeners = LISTENER_DOCKER_INTERNAL://kafka:19092,LISTENER_DOCKER_EXTERNAL://localhost:9092 kafka | advertised.port = null kafka | alter.config.policy.class.name = null kafka | alter.log.dirs.replication.quota.window.num = 11 kafka | alter.log.dirs.replication.quota.window.size.seconds = 1 kafka | authorizer.class.name = kafka | auto.create.topics.enable = true kafka | auto.leader.rebalance.enable = true kafka | background.threads = 10 kafka | broker.id = -1 kafka | broker.id.generation.enable = true kafka | broker.rack = null kafka | client.quota.callback.class = null kafka | compression.type = producer kafka | connection.failed.authentication.delay.ms = 100 kafka | connections.max.idle.ms = 600000 kafka | connections.max.reauth.ms = 0 kafka | control.plane.listener.name = null kafka | controlled.shutdown.enable = true kafka | controlled.shutdown.max.retries = 3 kafka | controlled.shutdown.retry.backoff.ms = 5000 kafka | controller.quota.window.num = 11 kafka | controller.quota.window.size.seconds = 1 kafka | controller.socket.timeout.ms = 30000 kafka | create.topic.policy.class.name = null kafka | default.replication.factor = 1 kafka | delegation.token.expiry.check.interval.ms = 3600000 kafka | delegation.token.expiry.time.ms = 86400000 kafka | delegation.token.master.key = null kafka | delegation.token.max.lifetime.ms = 604800000 kafka | delete.records.purgatory.purge.interval.requests = 1 kafka | delete.topic.enable = true kafka | fetch.max.bytes = 57671680 kafka | fetch.purgatory.purge.interval.requests = 1000 kafka | group.initial.rebalance.delay.ms = 0 kafka | group.max.session.timeout.ms = 1800000 kafka | group.max.size = 2147483647 kafka | group.min.session.timeout.ms = 6000 kafka | host.name = kafka | inter.broker.listener.name = LISTENER_DOCKER_EXTERNAL kafka | inter.broker.protocol.version = 2.7-IV2 kafka | 
kafka.metrics.polling.interval.secs = 10 kafka | kafka.metrics.reporters = [] kafka | leader.imbalance.check.interval.seconds = 300 kafka | leader.imbalance.per.broker.percentage = 10 kafka | listener.security.protocol.map = LISTENER_DOCKER_INTERNAL:PLAINTEXT,LISTENER_DOCKER_EXTERNAL:PLAINTEXT kafka | listeners = LISTENER_DOCKER_INTERNAL://kafka:19092,LISTENER_DOCKER_EXTERNAL://localhost:9092 kafka | log.cleaner.backoff.ms = 15000 kafka | log.cleaner.dedupe.buffer.size = 134217728 kafka | log.cleaner.delete.retention.ms = 86400000 kafka | log.cleaner.enable = true kafka | log.cleaner.io.buffer.load.factor = 0.9 kafka | log.cleaner.io.buffer.size = 524288 kafka | log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308 kafka | log.cleaner.max.compaction.lag.ms = 9223372036854775807 kafka | log.cleaner.min.cleanable.ratio = 0.5 kafka | log.cleaner.min.compaction.lag.ms = 0 kafka | log.cleaner.threads = 1 kafka | log.cleanup.policy = [delete] kafka | log.dir = /tmp/kafka-logs kafka | log.dirs = /kafka/kafka-logs-3411a92ea52c kafka | log.flush.interval.messages = 9223372036854775807 kafka | log.flush.interval.ms = null kafka | log.flush.offset.checkpoint.interval.ms = 60000 kafka | log.flush.scheduler.interval.ms = 9223372036854775807 kafka | log.flush.start.offset.checkpoint.interval.ms = 60000 kafka | log.index.interval.bytes = 4096 kafka | log.index.size.max.bytes = 10485760 kafka | log.message.downconversion.enable = true kafka | log.message.format.version = 2.7-IV2 kafka | log.message.timestamp.difference.max.ms = 9223372036854775807 kafka | log.message.timestamp.type = CreateTime kafka | log.preallocate = false kafka | log.retention.bytes = -1 kafka | log.retention.check.interval.ms = 300000 kafka | log.retention.hours = 168 kafka | log.retention.minutes = null kafka | log.retention.ms = null kafka | log.roll.hours = 168 kafka | log.roll.jitter.hours = 0 kafka | log.roll.jitter.ms = null kafka | log.roll.ms = null kafka | log.segment.bytes = 1073741824 kafka 
| log.segment.delete.delay.ms = 60000 kafka | max.connection.creation.rate = 2147483647 kafka | max.connections = 2147483647 kafka | max.connections.per.ip = 2147483647 kafka | max.connections.per.ip.overrides = kafka | max.incremental.fetch.session.cache.slots = 1000 kafka | message.max.bytes = 1048588 kafka | metric.reporters = [] kafka | metrics.num.samples = 2 kafka | metrics.recording.level = INFO kafka | metrics.sample.window.ms = 30000 kafka | min.insync.replicas = 1 kafka | num.io.threads = 8 kafka | num.network.threads = 3 kafka | num.partitions = 1 kafka | num.recovery.threads.per.data.dir = 1 kafka | num.replica.alter.log.dirs.threads = null kafka | num.replica.fetchers = 1 kafka | offset.metadata.max.bytes = 4096 kafka | offsets.commit.required.acks = -1 kafka | offsets.commit.timeout.ms = 5000 kafka | offsets.load.buffer.size = 5242880 kafka | offsets.retention.check.interval.ms = 600000 kafka | offsets.retention.minutes = 10080 kafka | offsets.topic.compression.codec = 0 kafka | offsets.topic.num.partitions = 50 kafka | offsets.topic.replication.factor = 1 kafka | offsets.topic.segment.bytes = 104857600 kafka | password.encoder.cipher.algorithm = AES/CBC/PKCS5Padding kafka | password.encoder.iterations = 4096 kafka | password.encoder.key.length = 128 kafka | password.encoder.keyfactory.algorithm = null kafka | password.encoder.old.secret = null kafka | password.encoder.secret = null kafka | port = 9092 kafka | principal.builder.class = null kafka | producer.purgatory.purge.interval.requests = 1000 kafka | queued.max.request.bytes = -1 kafka | queued.max.requests = 500 kafka | quota.consumer.default = 9223372036854775807 kafka | quota.producer.default = 9223372036854775807 kafka | quota.window.num = 11 kafka | quota.window.size.seconds = 1 kafka | replica.fetch.backoff.ms = 1000 kafka | replica.fetch.max.bytes = 1048576 kafka | replica.fetch.min.bytes = 1 kafka | replica.fetch.response.max.bytes = 10485760 kafka | replica.fetch.wait.max.ms = 500 kafka 
| replica.high.watermark.checkpoint.interval.ms = 5000 kafka | replica.lag.time.max.ms = 30000 kafka | replica.selector.class = null kafka | replica.socket.receive.buffer.bytes = 65536 kafka | replica.socket.timeout.ms = 30000 kafka | replication.quota.window.num = 11 kafka | replication.quota.window.size.seconds = 1 kafka | request.timeout.ms = 30000 kafka | reserved.broker.max.id = 1000 kafka | sasl.client.callback.handler.class = null kafka | sasl.enabled.mechanisms = [GSSAPI] kafka | sasl.jaas.config = null kafka | sasl.kerberos.kinit.cmd = /usr/bin/kinit kafka | sasl.kerberos.min.time.before.relogin = 60000 kafka | sasl.kerberos.principal.to.local.rules = [DEFAULT] kafka | sasl.kerberos.service.name = null kafka | sasl.kerberos.ticket.renew.jitter = 0.05 kafka | sasl.kerberos.ticket.renew.window.factor = 0.8 kafka | sasl.login.callback.handler.class = null kafka | sasl.login.class = null kafka | sasl.login.refresh.buffer.seconds = 300 kafka | sasl.login.refresh.min.period.seconds = 60 kafka | sasl.login.refresh.window.factor = 0.8 kafka | sasl.login.refresh.window.jitter = 0.05 kafka | sasl.mechanism.inter.broker.protocol = GSSAPI kafka | sasl.server.callback.handler.class = null kafka | security.inter.broker.protocol = PLAINTEXT kafka | security.providers = null kafka | socket.connection.setup.timeout.max.ms = 127000 kafka | socket.connection.setup.timeout.ms = 10000 kafka | socket.receive.buffer.bytes = 102400 kafka | socket.request.max.bytes = 104857600 kafka | socket.send.buffer.bytes = 102400 kafka | ssl.cipher.suites = [] kafka | ssl.client.auth = none kafka | ssl.enabled.protocols = [TLSv1.2] kafka | ssl.endpoint.identification.algorithm = https kafka | ssl.engine.factory.class = null kafka | ssl.key.password = null kafka | ssl.keymanager.algorithm = SunX509 kafka | ssl.keystore.certificate.chain = null kafka | ssl.keystore.key = null kafka | ssl.keystore.location = null kafka | ssl.keystore.password = null kafka | ssl.keystore.type = JKS kafka | 
kafka | ssl.principal.mapping.rules = DEFAULT
kafka | ssl.protocol = TLSv1.2
kafka | ssl.provider = null
kafka | ssl.secure.random.implementation = null
kafka | ssl.trustmanager.algorithm = PKIX
kafka | ssl.truststore.certificates = null
kafka | ssl.truststore.location = null
kafka | ssl.truststore.password = null
kafka | ssl.truststore.type = JKS
kafka | transaction.abort.timed.out.transaction.cleanup.interval.ms = 10000
kafka | transaction.max.timeout.ms = 900000
kafka | transaction.remove.expired.transaction.cleanup.interval.ms = 3600000
kafka | transaction.state.log.load.buffer.size = 5242880
kafka | transaction.state.log.min.isr = 1
kafka | transaction.state.log.num.partitions = 50
kafka | transaction.state.log.replication.factor = 1
kafka | transaction.state.log.segment.bytes = 104857600
kafka | transactional.id.expiration.ms = 604800000
kafka | unclean.leader.election.enable = false
kafka | zookeeper.clientCnxnSocket = null
kafka | zookeeper.connect = zookeeper:2181
kafka | zookeeper.connection.timeout.ms = 18000
kafka | zookeeper.max.in.flight.requests = 10
kafka | zookeeper.session.timeout.ms = 18000
kafka | zookeeper.set.acl = false
kafka | zookeeper.ssl.cipher.suites = null
kafka | zookeeper.ssl.client.enable = false
kafka | zookeeper.ssl.crl.enable = false
kafka | zookeeper.ssl.enabled.protocols = null
kafka | zookeeper.ssl.endpoint.identification.algorithm = HTTPS
kafka | zookeeper.ssl.keystore.location = null
kafka | zookeeper.ssl.keystore.password = null
kafka | zookeeper.ssl.keystore.type = null
kafka | zookeeper.ssl.ocsp.enable = false
kafka | zookeeper.ssl.protocol = TLSv1.2
kafka | zookeeper.ssl.truststore.location = null
kafka | zookeeper.ssl.truststore.password = null
kafka | zookeeper.ssl.truststore.type = null
kafka | zookeeper.sync.time.ms = 2000
kafka | (kafka.server.KafkaConfig)
Tony Evans
Ranch Hand
Posts: 681
posted 2 years ago
kafka | [2021-06-03 12:36:00,631] INFO [ThrottledChannelReaper-Fetch]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2021-06-03 12:36:00,631] INFO [ThrottledChannelReaper-Produce]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2021-06-03 12:36:00,633] INFO [ThrottledChannelReaper-Request]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2021-06-03 12:36:00,636] INFO [ThrottledChannelReaper-ControllerMutation]: Starting (kafka.server.ClientQuotaManager$ThrottledChannelReaper)
kafka | [2021-06-03 12:36:00,686] INFO Log directory /kafka/kafka-logs-3411a92ea52c not found, creating it. (kafka.log.LogManager)
kafka | [2021-06-03 12:36:00,699] INFO Loading logs from log dirs ArraySeq(/kafka/kafka-logs-3411a92ea52c) (kafka.log.LogManager)
kafka | [2021-06-03 12:36:00,704] INFO Attempting recovery for all logs in /kafka/kafka-logs-3411a92ea52c since no clean shutdown file was found (kafka.log.LogManager)
kafka | [2021-06-03 12:36:00,716] INFO Loaded 0 logs in 16ms. (kafka.log.LogManager)
kafka | [2021-06-03 12:36:00,740] INFO Starting log cleanup with a period of 300000 ms. (kafka.log.LogManager)
kafka | [2021-06-03 12:36:00,747] INFO Starting log flusher with a default period of 9223372036854775807 ms. (kafka.log.LogManager)
kafka | [2021-06-03 12:36:01,485] INFO Created ConnectionAcceptRate sensor, quotaLimit=2147483647 (kafka.network.ConnectionQuotas)
kafka | [2021-06-03 12:36:01,489] INFO Created ConnectionAcceptRate-LISTENER_DOCKER_INTERNAL sensor, quotaLimit=2147483647 (kafka.network.ConnectionQuotas)
kafka | [2021-06-03 12:36:01,493] INFO Updated LISTENER_DOCKER_INTERNAL max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2021-06-03 12:36:01,499] INFO Awaiting socket connections on kafka:19092. (kafka.network.Acceptor)
publish-service_1 | 2021-06-03 12:36:01.547 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat initialized with port(s): 8080 (http)
kafka | [2021-06-03 12:36:01,553] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : ListenerName(LISTENER_DOCKER_INTERNAL) (kafka.network.SocketServer)
kafka | [2021-06-03 12:36:01,554] INFO Created ConnectionAcceptRate-LISTENER_DOCKER_EXTERNAL sensor, quotaLimit=2147483647 (kafka.network.ConnectionQuotas)
kafka | [2021-06-03 12:36:01,555] INFO Updated LISTENER_DOCKER_EXTERNAL max connection creation rate to 2147483647 (kafka.network.ConnectionQuotas)
kafka | [2021-06-03 12:36:01,556] INFO Awaiting socket connections on localhost:9092. (kafka.network.Acceptor)
kafka | [2021-06-03 12:36:01,572] INFO [SocketServer brokerId=1001] Created data-plane acceptor and processors for endpoint : ListenerName(LISTENER_DOCKER_EXTERNAL) (kafka.network.SocketServer)
publish-service_1 | 2021-06-03 12:36:01.577 INFO 1 --- [ main] o.apache.catalina.core.StandardService : Starting service [Tomcat]
publish-service_1 | 2021-06-03 12:36:01.578 INFO 1 --- [ main] org.apache.catalina.core.StandardEngine : Starting Servlet engine: [Apache Tomcat/9.0.46]
kafka | [2021-06-03 12:36:01,627] INFO [ExpirationReaper-1001-Produce]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2021-06-03 12:36:01,627] INFO [ExpirationReaper-1001-Fetch]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2021-06-03 12:36:01,628] INFO [ExpirationReaper-1001-DeleteRecords]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2021-06-03 12:36:01,632] INFO [ExpirationReaper-1001-ElectLeader]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2021-06-03 12:36:01,654] INFO [LogDirFailureHandler]: Starting (kafka.server.ReplicaManager$LogDirFailureHandler)
kafka | [2021-06-03 12:36:01,660] INFO [broker-1001-to-controller-send-thread]: Starting (kafka.server.BrokerToControllerRequestThread)
publish-service_1 | 2021-06-03 12:36:01.696 INFO 1 --- [ main] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring embedded WebApplicationContext
publish-service_1 | 2021-06-03 12:36:01.697 INFO 1 --- [ main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1894 ms
kafka | [2021-06-03 12:36:01,703] INFO Creating /brokers/ids/1001 (is it secure? false) (kafka.zk.KafkaZkClient)
kafka | [2021-06-03 12:36:01,740] INFO Stat of the created znode at /brokers/ids/1001 is: 25,25,1622723761723,1622723761723,1,0,0,72068713902178304,309,0,25 (kafka.zk.KafkaZkClient)
kafka | [2021-06-03 12:36:01,743] INFO Registered broker 1001 at path /brokers/ids/1001 with addresses: LISTENER_DOCKER_INTERNAL://kafka:19092,LISTENER_DOCKER_EXTERNAL://localhost:9092, czxid (broker epoch): 25 (kafka.zk.KafkaZkClient)
kafka | [2021-06-03 12:36:01,851] INFO [ExpirationReaper-1001-topic]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2021-06-03 12:36:01,857] INFO Successfully created /controller_epoch with initial epoch 0 (kafka.zk.KafkaZkClient)
kafka | [2021-06-03 12:36:01,859] INFO [ExpirationReaper-1001-Heartbeat]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2021-06-03 12:36:01,861] INFO [ExpirationReaper-1001-Rebalance]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2021-06-03 12:36:01,895] INFO Feature ZK node created at path: /feature (kafka.server.FinalizedFeatureChangeListener)
kafka | [2021-06-03 12:36:01,901] INFO [GroupCoordinator 1001]: Starting up. (kafka.coordinator.group.GroupCoordinator)
kafka | [2021-06-03 12:36:01,904] INFO [GroupCoordinator 1001]: Startup complete. (kafka.coordinator.group.GroupCoordinator)
kafka | [2021-06-03 12:36:01,958] INFO [ProducerId Manager 1001]: Acquired new producerId block (brokerId:1001,blockStartProducerId:0,blockEndProducerId:999) by writing to Zk with path version 1 (kafka.coordinator.transaction.ProducerIdManager)
kafka | [2021-06-03 12:36:01,986] INFO [TransactionCoordinator id=1001] Starting up. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2021-06-03 12:36:02,011] INFO [TransactionCoordinator id=1001] Startup complete. (kafka.coordinator.transaction.TransactionCoordinator)
kafka | [2021-06-03 12:36:02,014] INFO [Transaction Marker Channel Manager 1001]: Starting (kafka.coordinator.transaction.TransactionMarkerChannelManager)
kafka | [2021-06-03 12:36:02,025] INFO Updated cache from existing <empty> to latest FinalizedFeaturesAndEpoch(features=Features{}, epoch=0). (kafka.server.FinalizedFeatureCache)
kafka | [2021-06-03 12:36:02,134] INFO [ExpirationReaper-1001-AlterAcls]: Starting (kafka.server.DelayedOperationPurgatory$ExpiredOperationReaper)
kafka | [2021-06-03 12:36:02,212] INFO [/config/changes-event-process-thread]: Starting (kafka.common.ZkNodeChangeNotificationListener$ChangeEventProcessThread)
zookeper | 2021-06-03 12:36:02,233 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@596] - Got user-level KeeperException when processing sessionid:0x1000a1d0b8f0000 type:multi cxid:0x3d zxid:0x1e txntype:-1 reqpath:n/a aborting remaining multi ops. Error Path:/admin/preferred_replica_election Error:KeeperErrorCode = NoNode for /admin/preferred_replica_election
kafka | [2021-06-03 12:36:02,254] INFO [SocketServer brokerId=1001] Starting socket server acceptors and processors (kafka.network.SocketServer)
kafka | [2021-06-03 12:36:02,273] INFO [SocketServer brokerId=1001] Started data-plane acceptor and processor(s) for endpoint : ListenerName(LISTENER_DOCKER_EXTERNAL) (kafka.network.SocketServer)
kafka | [2021-06-03 12:36:02,280] INFO [SocketServer brokerId=1001] Started data-plane acceptor and processor(s) for endpoint : ListenerName(LISTENER_DOCKER_INTERNAL) (kafka.network.SocketServer)
kafka | [2021-06-03 12:36:02,280] INFO [SocketServer brokerId=1001] Started socket server acceptors and processors (kafka.network.SocketServer)
kafka | [2021-06-03 12:36:02,287] INFO Kafka version: 2.7.0 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2021-06-03 12:36:02,288] INFO Kafka commitId: 448719dc99a19793 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2021-06-03 12:36:02,288] INFO Kafka startTimeMs: 1622723762281 (org.apache.kafka.common.utils.AppInfoParser)
kafka | [2021-06-03 12:36:02,291] INFO [KafkaServer id=1001] started (kafka.server.KafkaServer)
kafka | [2021-06-03 12:36:02,367] INFO [broker-1001-to-controller-send-thread]: Recorded new controller, from now on will use broker 1001 (kafka.server.BrokerToControllerRequestThread)
publish-service_1 | 2021-06-03 12:36:02.498 INFO 1 --- [ main] o.s.b.w.embedded.tomcat.TomcatWebServer : Tomcat started on port(s): 8080 (http) with context path ''
publish-service_1 | 2021-06-03 12:36:02.512 INFO 1 --- [ main] cyExchangeProducerKafkaDockerApplication : Started CurrencyExchangeProducerKafkaDockerApplication in 3.528 seconds (JVM running for 4.29)
publish-service_1 | 2021-06-03 12:36:02.514 INFO 1 --- [ main] o.s.b.a.ApplicationAvailabilityBean : Application availability state LivenessState changed to CORRECT
publish-service_1 | 2021-06-03 12:36:02.517 INFO 1 --- [ main] o.s.b.a.ApplicationAvailabilityBean : Application availability state ReadinessState changed to ACCEPTING_TRAFFIC
kafka | creating topics: orders:1:1
zookeper | 2021-06-03 12:36:09,408 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxnFactory@215] - Accepted socket connection from /192.168.240.2:42640
zookeper | 2021-06-03 12:36:09,414 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:ZooKeeperServer@949] - Client attempting to establish new session at /192.168.240.2:42640
zookeper | 2021-06-03 12:36:09,418 [myid:] - INFO [SyncThread:0:ZooKeeperServer@694] - Established session 0x1000a1d0b8f0001 with negotiated timeout 30000 for client /192.168.240.2:42640
zookeper | 2021-06-03 12:36:09,730 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@653] - Got user-level KeeperException when processing sessionid:0x1000a1d0b8f0001 type:setData cxid:0x4 zxid:0x20 txntype:-1 reqpath:n/a Error Path:/config/topics/orders Error:KeeperErrorCode = NoNode for /config/topics/orders
kafka | Created topic orders.
zookeper | 2021-06-03 12:36:09,786 [myid:] - INFO [ProcessThread(sid:0 cport:2181)::PrepRequestProcessor@487] - Processed session termination for sessionid: 0x1000a1d0b8f0001
zookeper | 2021-06-03 12:36:09,790 [myid:] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn@1056] - Closed socket connection for client /192.168.240.2:42640 which had sessionid 0x1000a1d0b8f0001
kafka | [2021-06-03 12:36:09,862] INFO [ReplicaFetcherManager on broker 1001] Removed fetcher for partitions Set(orders-0) (kafka.server.ReplicaFetcherManager)
kafka | [2021-06-03 12:36:09,948] INFO [Log partition=orders-0, dir=/kafka/kafka-logs-3411a92ea52c] Loading producer state till offset 0 with message format version 2 (kafka.log.Log)
kafka | [2021-06-03 12:36:09,964] INFO Created log for partition orders-0 in /kafka/kafka-logs-3411a92ea52c/orders-0 with properties {compression.type -> producer, message.downconversion.enable -> true, min.insync.replicas -> 1, segment.jitter.ms -> 0, cleanup.policy -> [delete], flush.ms -> 9223372036854775807, segment.bytes -> 1073741824, retention.ms -> 604800000, flush.messages -> 9223372036854775807, message.format.version -> 2.7-IV2, file.delete.delay.ms -> 60000, max.compaction.lag.ms -> 9223372036854775807, max.message.bytes -> 1048588, min.compaction.lag.ms -> 0, message.timestamp.type -> CreateTime, preallocate -> false, min.cleanable.dirty.ratio -> 0.5, index.interval.bytes -> 4096, unclean.leader.election.enable -> false, retention.bytes -> -1, delete.retention.ms -> 86400000, segment.ms -> 604800000, message.timestamp.difference.max.ms -> 9223372036854775807, segment.index.bytes -> 10485760}. (kafka.log.LogManager)
kafka | [2021-06-03 12:36:09,965] INFO [Partition orders-0 broker=1001] No checkpointed highwatermark is found for partition orders-0 (kafka.cluster.Partition)
kafka | [2021-06-03 12:36:09,966] INFO [Partition orders-0 broker=1001] Log loaded for partition orders-0 with initial high watermark 0 (kafka.cluster.Partition)
publish-service_1 | 2021-06-03 12:36:51.170 INFO 1 --- [nio-8080-exec-1] o.a.c.c.C.[Tomcat].[localhost].[/] : Initializing Spring DispatcherServlet 'dispatcherServlet'
publish-service_1 | 2021-06-03 12:36:51.171 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Initializing Servlet 'dispatcherServlet'
publish-service_1 | 2021-06-03 12:36:51.176 INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet : Completed initialization in 4 ms
publish-service_1 | 2021-06-03 12:36:51.306 INFO 1 --- [nio-8080-exec-1] c.c.c.controller.ProducerController : Recieved Order Order [orderId=123511, orderName=exchange1, orderType=SELL]
publish-service_1 | 2021-06-03 12:36:51.306 INFO 1 --- [nio-8080-exec-1] c.c.c.messaging.KafkaPublisher : send Order [orderId=123511, orderName=exchange1, orderType=SELL]
publish-service_1 | 2021-06-03 12:36:51.341 INFO 1 --- [nio-8080-exec-1] o.a.k.clients.producer.ProducerConfig : ProducerConfig values:
publish-service_1 | acks = 1
publish-service_1 | batch.size = 16384
publish-service_1 | bootstrap.servers = [localhost:9092]
publish-service_1 | buffer.memory = 33554432
publish-service_1 | client.dns.lookup = use_all_dns_ips
publish-service_1 | client.id = producer-1
publish-service_1 | compression.type = none
publish-service_1 | connections.max.idle.ms = 540000
publish-service_1 | delivery.timeout.ms = 120000
publish-service_1 | enable.idempotence = false
publish-service_1 | interceptor.classes = []
publish-service_1 | internal.auto.downgrade.txn.commit = true
publish-service_1 | key.serializer = class org.apache.kafka.common.serialization.StringSerializer
publish-service_1 | linger.ms = 0
publish-service_1 | max.block.ms = 60000
publish-service_1 | max.in.flight.requests.per.connection = 5
publish-service_1 | max.request.size = 1048576
publish-service_1 | metadata.max.age.ms = 300000
publish-service_1 | metadata.max.idle.ms = 300000
publish-service_1 | metric.reporters = []
publish-service_1 | metrics.num.samples = 2
publish-service_1 | metrics.recording.level = INFO
publish-service_1 | metrics.sample.window.ms = 30000
publish-service_1 | partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
publish-service_1 | receive.buffer.bytes = 32768
publish-service_1 | reconnect.backoff.max.ms = 1000
publish-service_1 | reconnect.backoff.ms = 50
publish-service_1 | request.timeout.ms = 30000
publish-service_1 | retries = 2147483647
publish-service_1 | retry.backoff.ms = 100
publish-service_1 | sasl.client.callback.handler.class = null
publish-service_1 | sasl.jaas.config = null
publish-service_1 | sasl.kerberos.kinit.cmd = /usr/bin/kinit
publish-service_1 | sasl.kerberos.min.time.before.relogin = 60000
publish-service_1 | sasl.kerberos.service.name = null
publish-service_1 | sasl.kerberos.ticket.renew.jitter = 0.05
publish-service_1 | sasl.kerberos.ticket.renew.window.factor = 0.8
publish-service_1 | sasl.login.callback.handler.class = null
publish-service_1 | sasl.login.class = null
publish-service_1 | sasl.login.refresh.buffer.seconds = 300
publish-service_1 | sasl.login.refresh.min.period.seconds = 60
publish-service_1 | sasl.login.refresh.window.factor = 0.8
publish-service_1 | sasl.login.refresh.window.jitter = 0.05
publish-service_1 | sasl.mechanism = GSSAPI
publish-service_1 | security.protocol = PLAINTEXT
publish-service_1 | security.providers = null
publish-service_1 | send.buffer.bytes = 131072
publish-service_1 | socket.connection.setup.timeout.max.ms = 127000
publish-service_1 | socket.connection.setup.timeout.ms = 10000
publish-service_1 | ssl.cipher.suites = null
publish-service_1 | ssl.enabled.protocols = [TLSv1.2]
publish-service_1 | ssl.endpoint.identification.algorithm = https
publish-service_1 | ssl.engine.factory.class = null
publish-service_1 | ssl.key.password = null
publish-service_1 | ssl.keymanager.algorithm = SunX509
publish-service_1 | ssl.keystore.certificate.chain = null
publish-service_1 | ssl.keystore.key = null
publish-service_1 | ssl.keystore.location = null
publish-service_1 | ssl.keystore.password = null
publish-service_1 | ssl.keystore.type = JKS
publish-service_1 | ssl.protocol = TLSv1.2
publish-service_1 | ssl.provider = null
publish-service_1 | ssl.secure.random.implementation = null
publish-service_1 | ssl.trustmanager.algorithm = PKIX
publish-service_1 | ssl.truststore.certificates = null
publish-service_1 | ssl.truststore.location = null
publish-service_1 | ssl.truststore.password = null
publish-service_1 | ssl.truststore.type = JKS
publish-service_1 | transaction.timeout.ms = 60000
publish-service_1 | transactional.id = null
publish-service_1 | value.serializer = class org.springframework.kafka.support.serializer.JsonSerializer
publish-service_1 |
publish-service_1 | 2021-06-03 12:36:51.532 INFO 1 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka version: 2.7.1
publish-service_1 | 2021-06-03 12:36:51.534 INFO 1 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka commitId: 61dbce85d0d41457
publish-service_1 | 2021-06-03 12:36:51.534 INFO 1 --- [nio-8080-exec-1] o.a.kafka.common.utils.AppInfoParser : Kafka startTimeMs: 1622723811529
publish-service_1 | 2021-06-03 12:36:51.552 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Connection to node -1 (localhost/127.0.0.1:9092) could not be established. Broker may not be available.
publish-service_1 | 2021-06-03 12:36:51.553 WARN 1 --- [ad | producer-1] org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-1] Bootstrap broker localhost:9092 (id: -1 rack: null) disconnected
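Editor's note on what the logs show: the broker advertises LISTENER_DOCKER_INTERNAL://kafka:19092 for clients inside the compose network and LISTENER_DOCKER_EXTERNAL://localhost:9092 for the host, yet the producer container is connecting to the default localhost:9092, i.e. itself. A minimal sketch of the intended wiring, using the service names from the compose file quoted above (this mirrors the application.yml already posted in the thread, not a verified working config):

```yaml
# application.yml packaged into the publish-service container.
# "kafka" resolves through Docker's embedded DNS to the broker container,
# so an in-network client must target the INTERNAL listener.
spring:
  kafka:
    producer:
      bootstrap-servers: kafka:19092

# A producer running on the host (outside compose) would instead use the
# EXTERNAL advertised listener, localhost:9092.
```

When the running producer logs `bootstrap.servers = [localhost:9092]` despite this file, the jar inside the image predates the config change, which is what the follow-up post below confirms.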
Tony Evans
Ranch Hand
Posts: 681
posted 2 years ago
My bad: I was so deep in docker-compose that I forgot to run the Gradle build, so the image was still running the old jar with the default bootstrap servers.
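For anyone who hits the same stale-image trap, the fix amounts to rebuilding the jar before rebuilding the image. A command sketch, assuming a checked-in Gradle wrapper and the publish-service name from the compose file earlier in the thread:

```shell
# Rebuild the Spring Boot jar so the current application.yml is baked in
./gradlew clean build

# Force docker-compose to rebuild the publish-service image from the new jar,
# then restart the stack (up --build does both in one step)
docker-compose up --build
```

Without the `--build` (or an explicit `docker-compose build publish-service`), compose happily reuses the previously built image, old jar and all.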