Introducing Confluent Platform version 7.5, which offers a range of new features to enhance security, improve developer productivity, and strengthen disaster recovery capabilities.
Building on the innovative feature set delivered in previous releases, Confluent Platform 7.5 makes enhancements to three categories of features:
Seamless and secure login with single sign-on (SSO) for Control Center.
Simplified interface to Confluent Platform with version 3 of the Confluent REST Proxy Produce API.
Optimized disaster recovery and increased reliability with bidirectional Cluster Linking, allowing for active/active architectures.
The following sections explore each of these enhancements and dive into the major feature updates and their benefits. As with previous Confluent Platform releases, you can always find additional details about the features in the release notes.
Keep reading to get an overview of what’s included in Confluent Platform 7.5, or download Confluent Platform now if you’re ready to get started.
Confluent Platform 7.5 comes with enhancements to existing features, streamlining user onboarding, offering a simpler API interface into Confluent, and increasing reliability. Here are some highlights:
Single sign-on (SSO) for Control Center (C3) via OpenID Connect (OIDC) is now generally available as part of this release.
The solution enables security compliance by allowing customers to use their identity provider (IdP) for login, with additional controls provided by the IdP such as multi-factor authentication (MFA). This provides a familiar login experience, instills trust, and allows developers to have a single identity rather than maintaining a separate set of credentials for Control Center. User management is completely offloaded to the identity provider, eliminating day-to-day administration of users and passwords. This drives broader use of C3 within the organization while ensuring compliance with security policies.
Check out this demo video for a quick overview of SSO for Confluent Platform.
The REST Proxy API enables applications to connect to Confluent Platform using REST standards. Users can access data in Confluent Platform without grappling with the complexities of Apache Kafka®, making REST calls from their preferred programming language.
The REST Proxy Produce API v3 simplifies integration with Confluent while bridging gaps in supported client libraries. Practitioners can now interface with Confluent over HTTP/REST for smooth integration with legacy connectors, serverless solutions, and SaaS offerings. This update also adds support for custom headers and an efficient streaming produce mode, letting users access Confluent Platform from their preferred programming language. As a result, solutions can be brought to market faster, without teams having to fully internalize Kafka concepts.
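To make this concrete, here is a minimal Java sketch that produces a single record through the v3 records endpoint. The host, cluster ID, topic name, and header contents are placeholders, and the payload shape should be verified against the REST Proxy documentation for your version:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ProduceV3Example {
    public static void main(String[] args) throws Exception {
        // Placeholder host, cluster ID, and topic name -- substitute your own.
        String url = "http://localhost:8082/v3/clusters/my-cluster-id/topics/orders/records";

        // One record in the v3 Produce API JSON envelope; custom header values
        // are base64-encoded ("d2ViLWFwcA==" decodes to "web-app").
        String record = """
            {
              "headers": [{"name": "source", "value": "d2ViLWFwcA=="}],
              "key":   {"type": "JSON", "data": "order-42"},
              "value": {"type": "JSON", "data": {"item": "book", "qty": 1}}
            }
            """;

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(record))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response body carries one JSON result object per produced
        // record (partition, offset, timestamp).
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

In streaming produce mode, the same connection stays open while multiple record objects are sent back to back, and the proxy streams one result object per record in return.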
To learn more about the capabilities of the Confluent REST Proxy Produce API, get detailed instructions in the documentation or try a step-by-step tutorial.
In today's interconnected world, data replication and disaster recovery (DR) are critical components of an organization's infrastructure. Cluster Linking is a powerful tool that facilitates offset-preserving replication of topics and related metadata from a source cluster to a destination cluster, and it has become the go-to solution for many businesses with Confluent clusters spread across different regions. As the demand for seamless failover capabilities grows, we are enhancing Cluster Linking to meet full disaster recovery requirements.
This release adds bidirectional Cluster Linking to solve the problem of managing consumer offset migration between clusters in active-active scenarios, or when failing back in active-passive scenarios. The new BIDIRECTIONAL cluster link mode allows each cluster to act as both a source and a destination for data and metadata. Topics can be mirrored in both directions between clusters over a single link, simplifying configuration and management while accommodating different customer namespaces. The BIDIRECTIONAL mode is crucial for any architecture that uses Cluster Linking for disaster recovery because it manages consumer offset syncing in both directions without manual intervention. Whether you run an active-active or an active-passive DR setup, consumer offsets are correctly synced for all topics, both mirror topics and regular topics. This makes Cluster Linking a more efficient DR solution, providing seamless data replication and metadata synchronization between clusters.
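As a rough sketch of what configuring one side of such a link involves, the Java snippet below writes the kind of link configuration file handed to the kafka-cluster-links tool. The property names follow the Cluster Linking configuration reference, but the remote address and link name are placeholders, and the full option set for a production setup belongs in the documentation:

```java
import java.io.FileWriter;
import java.io.IOException;
import java.util.Properties;

public class BidirectionalLinkConfig {
    public static void main(String[] args) throws IOException {
        Properties link = new Properties();
        // New in 7.5: this cluster acts as both source and destination.
        link.put("link.mode", "BIDIRECTIONAL");
        // Placeholder address of the cluster on the other end of the link.
        link.put("bootstrap.servers", "remote-cluster:9092");
        // Keep consumer group offsets in sync across the link for failover.
        link.put("consumer.offset.sync.enable", "true");

        // The file is then passed to the link-creation tooling, e.g.:
        //   kafka-cluster-links --bootstrap-server localhost:9092 \
        //     --create --link dr-link --config-file bidirectional-link.properties
        try (FileWriter out = new FileWriter("bidirectional-link.properties")) {
            link.store(out, "illustrative bidirectional cluster link configuration");
        }
    }
}
```

In a bidirectional setup, a link with the same name pointing back at this cluster is created on the other side as well, after which mirror topics can be created in either direction over the single link.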
Bidirectional Cluster Linking addresses two major use cases that involve sending data to two clusters:
Active/active architectures (where both clusters receive producer traffic): simultaneous processing and data synchronization between multiple clusters or systems to achieve high availability and fault tolerance
Disaster recovery (both active/passive and active/active): Active/passive and active/active disaster recovery are strategies that involve standby environments or continuous synchronization for high availability, data integrity, and minimal downtime, protecting businesses from disruptions and data loss
In addition to the new BIDIRECTIONAL mode, Cluster Linking now allows:
Replication for new mirror topics to begin from the latest offset or from a specific timestamp, offering a more efficient data sharing and migration path by leaving behind unnecessary historical data. Mirror topics created with the latest offset option will immediately start to receive the newest records produced for their source topics.
Mirror topics can have a different retention setting than their source topic, enabling efficient and flexible data sharing and aggregation topologies. For example, source clusters deployed at the edge with limited disk space may have a 24-hour retention period, whereas mirror topics located in a main data center or public cloud could have infinite retention, creating a powerful edge-to-cloud architecture.
All mirror topics on a cluster link can be failed over at once using the new regex pattern option in the REST API, making disaster recovery easier to implement (sketched after this list).
These features can be enabled on existing cluster links once their destination cluster is upgraded to Confluent Platform 7.5.
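For the failover path, here is a hedged Java sketch of what an all-at-once failover call could look like. The mirrors:failover bulk operation follows the Cluster Linking REST API, but the port, cluster ID, link name, and especially the pattern field name are illustrative assumptions; confirm the exact request shape against the Confluent Platform 7.5 REST API reference:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class FailoverAllMirrors {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint: substitute your REST endpoint, cluster ID,
        // and link name.
        String url = "http://localhost:8090/kafka/v3/clusters/my-cluster-id"
                   + "/links/dr-link/mirrors:failover";

        // A regex matching every mirror topic on the link, so the whole link
        // fails over in one call instead of topic by topic. The field name
        // is an assumption for illustration -- check the API reference.
        String body = "{\"mirror_topic_name_pattern\": \".*\"}";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```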
Check out this demo video for a quick overview of bidirectional Cluster Linking.
Other key features that are part of this launch are:
Confluent Platform FIPS support for RHEL8 and RHEL9: The Federal Information Processing Standard (FIPS) is a U.S. government computer security standard (the FIPS 140 series, including FIPS 140-1 and FIPS 140-2) used to validate and certify software cryptographic modules. Confluent Platform provides configuration automation with additional FIPS-compliant cipher enforcement at the Kafka broker and metadata service (MDS) level. Both Confluent for Kubernetes (CFK) and CP-Ansible support this automation.
Use CFK to deploy and operate self-managed connectors: You can now use the same Connect cluster to automate connectors coming from either Confluent Hub or a custom file share URL.
Confluent Platform 7.5 is built on the most recent version of Apache Kafka, in this case, version 3.5. For more details about Apache Kafka 3.5, please read the blog post by Mickael Maison or check out the video by Danica Fine below.
Download Confluent Platform 7.5 today to get started with the only cloud-native and complete platform for data in motion, built by the original creators of Apache Kafka.
We are proud to announce the release of Apache Kafka® 3.5.0. This release contains many new features and improvements. This blog post will highlight some of the more prominent features.
Versioned key-value state stores, introduced to Kafka Streams in 3.5, enhance stateful processing capabilities by allowing users to store multiple record versions per key, rather than only the single latest version per key as is the case for existing key-value stores today...
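For a quick taste of the new API (introduced by KIP-889), the sketch below materializes a KTable on a versioned store with one hour of history retention; the topic and store names are placeholders:

```java
import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.state.Stores;

public class VersionedStoreExample {
    public static void main(String[] args) {
        StreamsBuilder builder = new StreamsBuilder();

        // Materialize a KTable on a versioned store that keeps one hour of
        // record history per key, instead of only the latest value.
        builder.table(
            "prices",                                    // placeholder topic
            Consumed.with(Serdes.String(), Serdes.String()),
            Materialized.<String, String>as(
                    Stores.persistentVersionedKeyValueStore(
                            "versioned-prices", Duration.ofHours(1)))
                .withKeySerde(Serdes.String())
                .withValueSerde(Serdes.String()));

        // Processors that look up this store can then perform timestamped
        // reads, e.g. store.get(key, asOfTimestamp), via the
        // VersionedKeyValueStore interface.
        System.out.println(builder.build().describe());
    }
}
```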