Kafka Source Table

Function

Create a source stream to obtain data from Kafka as input data for jobs.

Apache Kafka is a fast, scalable, fault-tolerant distributed publish-subscribe messaging system. It delivers high throughput with built-in partitioning, data replication, and fault tolerance, which makes it suitable for scenarios that handle massive volumes of messages.

Prerequisites

Precautions

Syntax

create table kafkaSource(
  attr_name attr_type 
  (',' attr_name attr_type)* 
  (',' PRIMARY KEY (attr_name, ...) NOT ENFORCED)
  (',' WATERMARK FOR rowtime_column_name AS watermark-strategy_expression)
)
with (
  'connector' = 'kafka',
  'topic' = '',
  'properties.bootstrap.servers' = '',
  'properties.group.id' = '',
  'scan.startup.mode' = '',
  'format' = ''
);

Parameters

Table 1 Parameter description

Parameter

Mandatory

Default Value

Data Type

Description

connector

Yes

None

String

Connector to be used. Set this parameter to kafka.

topic

Yes

None

String

Topic name of the Kafka record.

Note:

  • Only one of topic and topic-pattern can be specified.
  • If there are multiple topics, separate them with semicolons (;), for example, topic-1;topic-2.

topic-pattern

No

None

String

Regular expression for a pattern of topic names to read from.

Only one of topic and topic-pattern can be specified.

For example:

'topic.*'

'(topic-c|topic-d)'

'(topic-a|topic-b|topic-\\d*)'

'(topic-a|topic-b|topic-[0-9]*)'

properties.bootstrap.servers

Yes

None

String

Comma-separated list of Kafka brokers.

properties.group.id

Yes

None

String

ID of the consumer group for the Kafka source.

properties.*

No

None

String

This parameter can be used to set and pass arbitrary Kafka configurations.

Note:

  • The suffix after properties. must match a configuration key defined in Apache Kafka.

    For example, you can disable automatic topic creation via 'properties.allow.auto.create.topics' = 'false'.

  • Some configurations are not supported, for example, 'key.deserializer' and 'value.deserializer'.

format

Yes

None

String

Format used to deserialize and serialize the value part of Kafka messages. Note: Either this parameter or the value.format parameter is required.

Refer to Format for more details and format parameters.

key.format

No

None

String

Format used to deserialize and serialize the key part of Kafka messages.

Note:

  • If a key format is defined, the key.fields parameter is required as well. Otherwise, the Kafka records will have an empty key.
  • Refer to Format for more details and format parameters.

key.fields

No

[]

List<String>

Defines the table columns to be used as the key fields of Kafka messages. This parameter must be configured together with key.format.

This parameter is left empty by default, so no key fields are defined.

The format is like field1;field2.

key.fields-prefix

No

None

String

Defines a custom prefix for all fields of the key format to avoid name clashes with fields of the value format.

value.format

Yes

None

String

Format used to deserialize and serialize the value part of Kafka messages.

Note:

  • Either this parameter or the format parameter is required. If both parameters are configured, a conflict occurs.
  • Refer to Format for more details and format parameters.

value.fields-include

No

ALL

Enum

Possible values: [ALL, EXCEPT_KEY]

Whether to include the key fields when parsing the message body.

Possible values are:

  • ALL (default): All defined fields are included in the value of Kafka messages.
  • EXCEPT_KEY: All the fields except those defined by key.fields are included in the value of Kafka messages.

scan.startup.mode

No

group-offsets

String

Start position for reading data from Kafka.

Possible values are:

  • earliest-offset: Data is read from the earliest Kafka offset.
  • latest-offset: Data is read from the latest Kafka offset.
  • group-offsets (default): Data is read based on the consumer group.
  • timestamp: Data is read from a user-supplied timestamp. When setting this option, you also need to specify scan.startup.timestamp-millis in WITH.
  • specific-offsets: Data is read from user-supplied specific offsets for each partition. When setting this option, you also need to specify scan.startup.specific-offsets in WITH.

scan.startup.specific-offsets

No

None

String

This parameter takes effect only when scan.startup.mode is set to specific-offsets. It specifies the offsets for each partition, for example, partition:0,offset:42;partition:1,offset:300.

scan.startup.timestamp-millis

No

None

Long

Startup timestamp in milliseconds. This parameter takes effect only when scan.startup.mode is set to timestamp.

scan.topic-partition-discovery.interval

No

None

Duration

Interval for a consumer to periodically discover dynamically created Kafka topics and partitions.
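
As a minimal sketch of how several of the parameters in Table 1 combine, the following source table reads a hypothetical topic named orders with JSON key and value formats, starts from user-supplied offsets, and excludes the key field from the value. The topic name, broker addresses, consumer group, and column names are illustrative assumptions.

create table kafkaOrderSource(
  order_key string,
  order_id string,
  order_amount double
)
with (
  'connector' = 'kafka',
  -- Illustrative topic and broker addresses
  'topic' = 'orders',
  'properties.bootstrap.servers' = 'KafkaAddress1:KafkaPort,KafkaAddress2:KafkaPort',
  'properties.group.id' = 'order-group',
  -- Read from user-supplied offsets for each partition
  'scan.startup.mode' = 'specific-offsets',
  'scan.startup.specific-offsets' = 'partition:0,offset:42;partition:1,offset:300',
  -- Key and value formats: order_key is the key field; the remaining columns form the value
  'key.format' = 'json',
  'key.fields' = 'order_key',
  'value.format' = 'json',
  'value.fields-include' = 'EXCEPT_KEY'
);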

Metadata Column

You can define metadata columns in the source table to obtain the metadata of Kafka messages. For example, if multiple topics are specified in the WITH clause and a topic metadata column is defined in the Kafka source table, each record read by Flink is labeled with the topic from which it was read.

Table 2 Metadata column

Key

Data Type

R/W

Description

topic

STRING NOT NULL

R

Topic name of the Kafka record.

partition

INT NOT NULL

R

Partition ID of the Kafka record.

headers

MAP<STRING, BYTES> NOT NULL

R/W

Headers of Kafka messages.

leader-epoch

INT NULL

R

Leader epoch of the Kafka record.

For details, see example 1.

offset

BIGINT NOT NULL

R

Offset of the Kafka record.

timestamp

TIMESTAMP(3) WITH LOCAL TIME ZONE NOT NULL

R/W

Timestamp of the Kafka record.

timestamp-type

STRING NOT NULL

R

Timestamp type of the Kafka record. The options are as follows:

  • NoTimestampType: No timestamp is defined in the message.
  • CreateTime: Time when the message is generated.
  • LogAppendTime: Time when the message is added to the Kafka broker.

For details, see example 1.
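
The following is a minimal sketch of how the metadata columns listed in Table 2 can be declared using the METADATA keyword; read-only columns are marked VIRTUAL. The topic names, broker address, and payload columns are illustrative assumptions.

create table kafkaSourceWithMetadata(
  -- Metadata columns mapped to the keys in Table 2
  kafka_topic string metadata from 'topic' virtual,
  kafka_partition int metadata from 'partition' virtual,
  kafka_offset bigint metadata from 'offset' virtual,
  event_time timestamp(3) with local time zone metadata from 'timestamp',
  -- Physical payload columns (illustrative)
  user_id string,
  user_name string
)
with (
  'connector' = 'kafka',
  -- Multiple topics separated by semicolons; kafka_topic shows which topic each record came from
  'topic' = 'topic-1;topic-2',
  'properties.bootstrap.servers' = 'KafkaAddress:KafkaPort',
  'properties.group.id' = 'metadata-example-group',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);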

Example (SASL_SSL Disabled for the Kafka Cluster)
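
A minimal sketch of such a job, assuming a Kafka cluster with SASL_SSL disabled: it reads JSON records from a hypothetical topic and writes them to a Print result table so the output can be inspected in the logs. The topic name, broker addresses, consumer group, and columns are illustrative assumptions.

create table kafkaSource(
  user_id string,
  user_name string,
  user_age int
)
with (
  'connector' = 'kafka',
  'topic' = 'test-topic',
  'properties.bootstrap.servers' = 'KafkaAddress1:KafkaPort,KafkaAddress2:KafkaPort',
  'properties.group.id' = 'test-group',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json'
);

create table printSink(
  user_id string,
  user_name string,
  user_age int
)
with (
  'connector' = 'print'
);

insert into printSink select * from kafkaSource;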

Example (SASL_SSL Enabled for the Kafka Cluster)
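
A minimal sketch of the same source table when SASL_SSL is enabled on the Kafka cluster, passing the standard Kafka client security settings through the properties.* parameters. The security protocol, SASL mechanism, credentials, and truststore settings are illustrative assumptions; use the values and authentication mechanism required by your cluster.

create table kafkaSource(
  user_id string,
  user_name string,
  user_age int
)
with (
  'connector' = 'kafka',
  'topic' = 'test-topic',
  'properties.bootstrap.servers' = 'KafkaAddress1:KafkaPort,KafkaAddress2:KafkaPort',
  'properties.group.id' = 'test-group',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'json',
  -- Standard Kafka client security settings passed through properties.* (illustrative values)
  'properties.security.protocol' = 'SASL_SSL',
  'properties.sasl.mechanism' = 'PLAIN',
  'properties.sasl.jaas.config' = 'org.apache.kafka.common.security.plain.PlainLoginModule required username="userName" password="password";',
  'properties.ssl.truststore.location' = '/path/to/client.truststore.jks',
  'properties.ssl.truststore.password' = 'TruststorePassword'
);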

FAQ