pub struct Builder { /* private fields */ }
A Kafka Consumer builder easing the process of setting up various configuration settings.
Implementations

impl Builder

pub fn with_group(self, group: String) -> Builder
Specifies the group on whose behalf to maintain consumed message offsets.
The group is allowed to be the empty string, in which case the resulting consumer will be group-less.
pub fn with_topic(self, topic: String) -> Builder

Specifies a topic to consume. All of the available partitions of the identified topic will be consumed unless overridden later using with_topic_partitions.

This method may be called multiple times to assign the consumer multiple topics.

This method or with_topic_partitions must be called at least once to assign a topic to the consumer.
pub fn with_topic_partitions(self, topic: String, partitions: &[i32]) -> Builder

Explicitly specifies topic partitions to consume. Only the specified partitions of the identified topic will be consumed unless overridden later using with_topic.

This method may be called multiple times to subscribe to multiple topics.

This method or with_topic must be called at least once to assign a topic to the consumer.
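The chaining style these methods enable comes from each method taking `self` by value and returning the builder. A minimal sketch of how such a builder can accumulate topic assignments (a simplified stand-in with an assumed `assignments` field, not the real kafka crate type):

```rust
// Simplified stand-in for the consumer Builder; method names mirror the
// real API, but the internals here are assumptions for illustration.
#[derive(Default, Debug)]
pub struct Builder {
    // (topic, explicitly requested partitions; empty = all partitions)
    assignments: Vec<(String, Vec<i32>)>,
}

impl Builder {
    // Takes `self` by value and returns it, enabling method chaining.
    pub fn with_topic(mut self, topic: String) -> Builder {
        self.assignments.push((topic, Vec::new()));
        self
    }

    // Records only the requested partitions for the given topic.
    pub fn with_topic_partitions(mut self, topic: String, partitions: &[i32]) -> Builder {
        self.assignments.push((topic, partitions.to_vec()));
        self
    }
}
```

With this shape, `Builder::default().with_topic("events".to_string()).with_topic_partitions("logs".to_string(), &[0, 2])` yields a builder holding two assignments, the second restricted to partitions 0 and 2.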
pub fn with_fallback_offset(self, fallback_offset: FetchOffset) -> Builder

Specifies the offset to use when none has been committed for the underlying group yet, or when the consumer has no group configured.

Running the underlying group for the first time against a topic, or running the consumer without a group, raises the question of where to start reading the topic, since it might already contain many messages. Common strategies are starting at the earliest available message (thereby consuming whatever is currently in the topic) or at the latest one (thereby starting to consume only newly arriving messages).

The "fallback offset" here corresponds to time in KafkaClient::fetch_offsets.
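The fallback only applies when no committed offset is available. A small sketch of that decision, using stand-in types and an assumed helper function (not the crate's actual resolution logic):

```rust
// Stand-in for the crate's FetchOffset variants used for illustration.
#[derive(Clone, Copy, Debug, PartialEq)]
enum FetchOffset {
    Earliest,
    Latest,
}

// Decide where to start reading a partition: a committed offset wins;
// otherwise fall back to the configured strategy. `earliest`/`latest`
// are the partition's current low/high watermarks.
fn start_offset(committed: Option<i64>, fallback: FetchOffset, earliest: i64, latest: i64) -> i64 {
    match committed {
        Some(offset) => offset, // resume from the committed position
        None => match fallback {
            FetchOffset::Earliest => earliest, // consume what is already in the topic
            FetchOffset::Latest => latest,     // consume only newly arriving messages
        },
    }
}
```

For example, with watermarks 0 and 100 and no committed offset, `FetchOffset::Earliest` resolves to 0 while `FetchOffset::Latest` resolves to 100; a committed offset of 42 is used regardless of the fallback.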
pub fn with_fetch_max_wait_time(self, max_wait_time: Duration) -> Builder

See KafkaClient::set_fetch_max_wait_time.
pub fn with_fetch_min_bytes(self, min_bytes: i32) -> Builder

See KafkaClient::set_fetch_min_bytes.
pub fn with_fetch_max_bytes_per_partition(
    self,
    max_bytes_per_partition: i32,
) -> Builder

See KafkaClient::set_fetch_max_bytes_per_partition.
pub fn with_fetch_crc_validation(self, validate_crc: bool) -> Builder

See KafkaClient::set_fetch_crc_validation.
pub fn with_offset_storage(self, storage: Option<GroupOffsetStorage>) -> Builder

See KafkaClient::set_group_offset_storage.
pub fn with_retry_max_bytes_limit(self, limit: i32) -> Builder

Specifies the upper bound of data bytes to allow fetching from a kafka partition when retrying a fetch request due to a message in the partition that is too big to deliver.

By default, this consumer will fetch up to KafkaClient::fetch_max_bytes_per_partition bytes of data from each partition. However, when it discovers that there are messages in an underlying partition which could not be delivered, the request to that partition might be retried a few times with an increased fetch_max_bytes_per_partition. The value specified here defines a limit to this increment.

A value smaller than KafkaClient::fetch_max_bytes_per_partition, e.g. zero, will disable the retry feature of this consumer. The default value for this setting is DEFAULT_RETRY_MAX_BYTES_LIMIT.

Note: if the consumed topic partitions are known to host large messages, it is much more efficient to set KafkaClient::fetch_max_bytes_per_partition appropriately instead of relying on the limit specified here. This limit is merely an upper bound applied to the additional retry requests.
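The interplay between the growing fetch size and this limit can be sketched as follows. The doubling strategy and the helper below are assumptions for illustration, not the crate's actual retry code:

```rust
// Sketch: compute the next per-partition fetch size when a fetch must be
// retried because a message was too large. Returns None when no further
// retry is allowed. Doubling is an assumed growth strategy.
fn next_fetch_size(current: i32, limit: i32) -> Option<i32> {
    let grown = current.saturating_mul(2);
    if grown <= current || current >= limit {
        // A limit at or below the current fetch size disables retries,
        // which is why a limit of zero turns the feature off entirely.
        None
    } else {
        // Grow, but never past the configured limit.
        Some(grown.min(limit))
    }
}
```

Starting from a fetch size of 1000 with a limit of 5000, successive retries would use 2000, 4000, and finally 5000 bytes before giving up; with a limit of 0 no retry is attempted at all.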
pub fn with_connection_idle_timeout(self, timeout: Duration) -> Self

Specifies the timeout for idle connections. See KafkaClient::set_connection_idle_timeout.
pub fn with_client_id(self, client_id: String) -> Self

Specifies a client_id to be sent along every request to Kafka brokers. See KafkaClient::set_client_id.
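Putting the methods together, a typical configuration chain might look like the fragment below. The host, topic, group, and client id are placeholder values, and running it requires the kafka crate and a reachable broker, so treat it as a usage sketch rather than a verified program:

```rust
use kafka::consumer::{Consumer, FetchOffset, GroupOffsetStorage};

// Placeholder broker address, topic, group, and client id.
let consumer = Consumer::from_hosts(vec!["localhost:9092".to_owned()])
    .with_topic("my-topic".to_owned())
    .with_group("my-group".to_owned())
    .with_fallback_offset(FetchOffset::Earliest)
    .with_offset_storage(Some(GroupOffsetStorage::Kafka))
    .with_client_id("my-client".to_owned())
    .create()?;
```

Because every method consumes and returns the builder, the whole configuration reads as one expression ending in create(), which performs the actual connection work and yields the configured consumer.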