2018-06-04T06:27:47,183 INFO [DruidSchema-Cache-0] io.druid.java.util.http.client.pool.ChannelResourceFactory - Generating: http://iZuf6flaph02htekc965qjZ:8100
2018-06-04T06:27:47,191 INFO [DruidSchema-Cache-0] io.druid.sql.calcite.schema.DruidSchema - Refreshed metadata for dataSource[preprod-credit-invocation-history-kafka] in 9 ms (1 segments queried, 0 segments left).
2018-06-04T06:27:47,203 INFO [DruidSchema-Cache-0] io.druid.sql.calcite.schema.DruidSchema - Refreshed metadata for dataSource[preprod-risk-invocation-history-kafka] in 12 ms (1 segments queried, 0 segments left).
2018-06-04T06:28:53,184 INFO [DruidSchema-Cache-0] io.druid.java.util.http.client.pool.ChannelResourceFactory - Generating: http://iZuf6flaph02htekc965qjZ:8100
2018-06-04T06:28:53,194 INFO [DruidSchema-Cache-0] io.druid.sql.calcite.schema.DruidSchema - Refreshed metadata for dataSource[preprod-credit-invocation-history-kafka] in 11 ms (1 segments queried, 0 segments left).
2018-06-04T06:28:53,195 INFO [DruidSchema-Cache-0] io.druid
2018-06-04T12:23:56,220 WARN [Coordinator-Exec--0] io.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[preprod-market-decision-result-kafka_2018-06-01T02:00:00.000Z_2018-06-01T03:00:00.000Z_2018-06-01T09:43:07.825Z]! Expected Replicants[2]
2018-06-04T12:23:56,220 WARN [Coordinator-Exec--0] io.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[preprod-market-decision-result-kafka_2018-06-01T01:00:00.000Z_2018-06-01T02:00:00.000Z_2018-06-01T09:43:05.504Z]! Expected Replicants[2]
2018-06-04T12:23:56,221 WARN [Coordinator-Exec--0] io.druid.server.coordinator.rules.LoadRule - No available [_default_tier] servers or node capacity to assign segment[preprod-market-decision-result-kafka_2018-05-31T23:00:00.000Z_2018-06-01T00:00:00.000Z_2018-06-01T09:43:05.614Z]! Expected Replicants[2]
2018-06-04T12:23:56,221 WARN [Coordinator-Exec--0] io.druid.server.coordinator.rules.LoadRule - No available
2018-06-04T02:40:12,554 INFO [main] io.druid.initialization.Initialization - added URL[file:/usr/local/imply-2.5.11/dist/druid/extensions/druid-parser-route/commons-lang-2.6.jar] for extension[druid-parser-route]
2018-06-04T02:40:12,554 INFO [main] io.druid.initialization.Initialization - added URL[file:/usr/local/imply-2.5.11/dist/druid/extensions/druid-parser-route/aws-java-sdk-kinesis-1.10.61.jar] for extension[druid-parser-route]
2018-06-04T02:40:12,554 INFO [main] io.druid.initialization.Initialization - added URL[file:/usr/local/imply-2.5.11/dist/druid/extensions/druid-parser-route/protobuf-java-2.6.1.jar] for extension[druid-parser-route]
2018-06-04T02:40:12,554 INFO [main] io.druid.initialization.Initialization - added URL[file:/usr/local/imply-2.5.11/dist/druid/extensions/druid-parser-route/lz4-1.3.0.jar] for extension[druid-parser-route]
2018-06-04T02:40:12,554 INFO [main] io.druid.initialization.Initialization - added URL[file:/usr/local/imply-2.5.11/dist/druid/extensions/druid-parser-route/guava-1
2018-06-04T12:05:42,469 INFO [KafkaIndexTaskClient-preprod-risk-invocation-history-kafka-0] io.druid.indexing.kafka.KafkaIndexTaskClient - No TaskLocation available for task [index_kafka_preprod-risk-invocation-history-kafka_652901e4170eb5c_fbjlhfof], this task may not have been assigned to a worker yet or may have already completed
2018-06-04T12:05:42,471 INFO [KafkaSupervisor-preprod-risk-invocation-history-kafka] io.druid.indexing.kafka.supervisor.KafkaSupervisor - {id='preprod-risk-invocation-history-kafka', generationTime=2018-06-04T12:05:42.471Z, payload={dataSource='preprod-risk-invocation-history-kafka', topic='preprod_RISK_INVOCATION_HISTORY_FLATTEN', partitions=1, replicas=1, durationSeconds=3600, active=[{id='index_kafka_preprod-risk-invocation-history-kafka_652901e4170eb5c_fbjlhfof', startTime=null, remainingSeconds=null}], publishing=[{id='index_kafka_preprod-risk-invocation-history-kafka_652901e4170eb5c_cgnlpdmk', startTime=2018-06-04T10:42:55.209Z, remainingSeconds=433}]}}
2018-06-04T12:05:42,5
2018-06-04T10:43:48,812 INFO [forking-task-runner-5] io.druid.storage.hdfs.tasklog.HdfsTaskLogs - Writing task log to: /druid/indexing-logs/index_kafka_preprod-risk-invocation-history-kafka_39f2b2a2bddc733_bpgmpcih
2018-06-04T10:43:48,830 INFO [forking-task-runner-5] io.druid.storage.hdfs.tasklog.HdfsTaskLogs - Wrote task log to: /druid/indexing-logs/index_kafka_preprod-risk-invocation-history-kafka_39f2b2a2bddc733_bpgmpcih
2018-06-04T10:43:48,830 INFO [forking-task-runner-5] io.druid.indexing.overlord.TaskRunnerUtils - Task [index_kafka_preprod-risk-invocation-history-kafka_39f2b2a2bddc733_bpgmpcih] status changed to [SUCCESS].
2018-06-04T10:43:48,830 INFO [forking-task-runner-5] io.druid.indexing.overlord.ForkingTaskRunner - Removing task directory: var/druid/task/index_kafka_preprod-risk-invocation-history-kafka_39f2b2a2bddc733_bpgmpcih
2018-06-04T10:43:48,833 INFO [WorkerTaskManager-NoticeHandler] io.druid.indexing.worker.WorkerTaskManager - Job's finished. Completed [index_kafka_preprod-risk-invocation-h
# broker runtime.properties
druid.service=druid/broker
druid.port=18082
# HTTP server threads
druid.broker.http.numConnections=5
druid.server.http.numThreads=12
# Processing threads and buffers
druid.processing.buffer.sizeBytes=50000000
druid.processing.numMergeBuffers=2
# broker jvm.config
-server
-Xms512m
-Xmx512m
-XX:MaxDirectMemorySize=2g
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
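# Sizing note, assuming Druid's documented direct-memory rule of thumb:
# a broker should get at least (druid.processing.numThreads +
# druid.processing.numMergeBuffers + 1) * druid.processing.buffer.sizeBytes
# of direct memory. numThreads is not set above, so it defaults to roughly
# (cores - 1); with 50,000,000-byte buffers and 2 merge buffers,
# -XX:MaxDirectMemorySize=2g covers about 40 buffers in total, which is
# enough unless the host has an unusually high core count.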
# coordinator runtime.properties
druid.service=druid/coordinator
druid.port=18081
druid.coordinator.startDelay=PT10S
druid.coordinator.period=PT5S
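# druid.coordinator.startDelay and druid.coordinator.period are ISO-8601
# durations: the coordinator waits 10 seconds after becoming leader before
# its first run, then repeats its coordination cycle (rule evaluation,
# segment balancing) every 5 seconds.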
# coordinator jvm.config
-server
-Xms256m
-Xmx256m
-Duser.timezone=UTC
-Dfile.encoding=UTF-8
-Djava.io.tmpdir=var/tmp
-Djava.util.logging.manager=org.apache.logging.log4j.jul.LogManager
-Dderby.stream.error.file=var/druid/derby.log
# historical runtime.properties
druid.service=druid/historical
druid.port=18083
# HTTP server threads
druid.server.http.numThreads=12
# Processing threads and buffers
druid.processing.buffer.sizeBytes=50000000
druid.processing.numMergeBuffers=2
druid.processing.numThreads=1
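# Sizing note, using the same direct-memory rule of thumb as the broker note
# above: (1 processing thread + 2 merge buffers + 1) * 50,000,000 bytes
# ≈ 200 MB, so this historical's JVM (its jvm.config is not shown here)
# needs -XX:MaxDirectMemorySize of at least roughly 200 MB.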