Change data capture is a popular method to connect database tables to data streams, but it comes with drawbacks. The next evolution of the CDC pattern, first-class data products, provides resilient pipelines that support both real-time and batch processing while isolating upstream systems...
Confluent Cloud Freight clusters are now Generally Available on AWS. In this blog, learn how Freight clusters can save you up to 90% at GBps+ scale.
Learn how to contribute to open source Apache Kafka by writing Kafka Improvement Proposals (KIPs) that solve problems and add features! Read on for real examples.
Existing Confluent Cloud (CC) and Snowflake users can now use Tableflow to easily represent Kafka topics as Iceberg tables and then leverage Snowflake Open Catalog to power real-time AI and analytics workloads.
Big news! KIP-848, the next-gen Consumer Rebalance Protocol, is now available in Confluent Cloud! This is a major upgrade for your Kafka clusters, offering faster rebalances and improved stability. Our new blog post dives deep into how KIP-848 functions, making it easy to understand the benefits.
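To illustrate what opting in looks like on the client side, here is a minimal sketch, assuming a confluent-kafka-python/librdkafka build and a cluster that both support KIP-848; the bootstrap servers, topic, and group id are placeholders:

```python
# Minimal sketch: opting a consumer into the KIP-848 rebalance protocol.
# Assumes a client library version and cluster that support group.protocol.
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "<BOOTSTRAP_SERVERS>",  # placeholder
    "group.id": "orders-consumer-group",         # placeholder group id
    "group.protocol": "consumer",                # "consumer" = KIP-848, "classic" = old protocol
    "auto.offset.reset": "earliest",
})

consumer.subscribe(["orders"])                   # placeholder topic
try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"Consumer error: {msg.error()}")
            continue
        print(f"{msg.topic()}[{msg.partition()}] @ {msg.offset()}: {msg.value()}")
finally:
    consumer.close()
```

With the new protocol, partition assignment is coordinated broker-side, which is what enables the faster, less disruptive rebalances described in the post.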
The users who need access to data stored in Apache Kafka® topics aren’t always experts in technologies like Apache Flink® SQL. This blog shows how natural language processing can translate their plain-language questions into Flink SQL queries on Confluent Cloud.
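As a rough sketch of the general pattern (the prompt, schema, and helper function below are illustrative assumptions, not the blog's implementation), the question is templated into an LLM prompt whose output is a Flink SQL statement:

```python
# Sketch of the pattern: turn a plain-language question into Flink SQL by
# prompting an LLM with the table schema. Prompt, schema, and helper are
# illustrative assumptions, not the blog's exact implementation.

PROMPT_TEMPLATE = """You translate questions into Apache Flink SQL.
Table `orders` has columns: order_id STRING, amount DOUBLE, order_time TIMESTAMP(3).
Return only the SQL statement.

Question: {question}
"""

def translate_to_flink_sql(question: str, llm_complete) -> str:
    """llm_complete is any callable that sends a prompt to an LLM and returns text."""
    return llm_complete(PROMPT_TEMPLATE.format(question=question)).strip()

# Example (llm_complete is a hypothetical LLM client):
# sql = translate_to_flink_sql("What was revenue per hour yesterday?", llm_complete)
# The resulting statement can then be run in a Confluent Cloud Flink SQL workspace.
```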
Discover how Account Executive Jason helps customers turn roadblocks into wins—powered by Confluent’s collaborative, one-team culture.
Announcing the GA of Confluent’s fully-managed Kafka Connector V2 for Azure Cosmos DB—now available in Confluent Cloud. Seamlessly stream real-time data to and from Cosmos DB with improved scalability, performance, and simplified setup.
This blog announces the general availability of the next generation of Control Center for Confluent Platform.
The Confluent Cloud Q2 2025 release adds Tableflow support for Delta Lake tables, Flink Snapshot Queries, maximum eCKU configuration for elastically scaling clusters, and more!
Announcing the launch of the 2025 Data Streaming Report—highlighting some of the key findings from the report, including the role data streaming platforms play in driving AI success.
This post introduces the VISTA Framework, a structured approach to prioritizing AI opportunities. Inspired by project management models such as RICE (Reach, Impact, Confidence, and Effort), VISTA focuses on four dimensions: Business Value, Implementation Speed, Scalability, and Tolerance for Risk.
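The blurb names the four dimensions but not a scoring formula, so purely as a hypothetical illustration of how a RICE-style weighted score over those dimensions might rank opportunities:

```python
# Hypothetical illustration only: VISTA's dimensions come from the post, but the
# weights and formula below are assumptions made for the sake of example.
def vista_score(value, speed, scalability, risk_tolerance, weights=(0.4, 0.2, 0.2, 0.2)):
    """Each dimension scored 1-5; a higher total suggests a higher-priority AI opportunity."""
    dims = (value, speed, scalability, risk_tolerance)
    return sum(w * d for w, d in zip(weights, dims))

candidates = {
    "support-ticket triage agent": vista_score(5, 4, 4, 3),
    "quarterly forecast copilot":  vista_score(4, 2, 3, 2),
}
print(sorted(candidates.items(), key=lambda kv: kv[1], reverse=True))
```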
In this blog, Confluent's Chief Product Officer, Shaun Clowes, explores strategies to foster effective async collaboration—reduce burnout, boost productivity, and make distributed work actually work.
For AI agents to transform enterprises with autonomous problem-solving, adaptive workflows, and scalability, they need event-driven architecture (EDA) powered by streaming data.
Just as some problems are too big for one person to solve, some tasks are too complex for a single artificial intelligence (AI) agent to handle. Instead, the best approach is to decompose problems into smaller, specialized units so that multiple agents can work together as a team.
By combining Google A2A’s structured protocol with Kafka’s powerful event streaming capabilities, we can shift from brittle, point-to-point integrations to a dynamic ecosystem where agents publish insights, subscribe to context, and coordinate in real time. Observability, auditability, and...
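As a rough sketch of the streaming half of that pattern (the A2A envelope, schemas, and error handling are omitted, and the topic and field names are made up), one agent publishes an insight while another subscribes to it as context:

```python
# Minimal sketch of the Kafka side of an agent ecosystem: one agent publishes an
# "insight" event, another consumes it as context. Topic names and payloads are
# placeholders; the A2A protocol layer is not shown.
import json
from confluent_kafka import Producer, Consumer

config = {"bootstrap.servers": "<BOOTSTRAP_SERVERS>"}  # placeholder

# Agent A: publish an insight for other agents to consume.
producer = Producer(config)
producer.produce(
    "agent.insights",                                  # placeholder topic
    key="customer-42",
    value=json.dumps({"agent": "churn-analyzer", "risk": "high"}),
)
producer.flush()

# Agent B: subscribe to the same topic and react to new context.
consumer = Consumer({**config, "group.id": "action-agent", "auto.offset.reset": "earliest"})
consumer.subscribe(["agent.insights"])
msg = consumer.poll(5.0)
if msg is not None and not msg.error():
    insight = json.loads(msg.value())
    print(f"Acting on insight from {insight['agent']}: risk={insight['risk']}")
consumer.close()
```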
This blog post demonstrates using Tableflow to easily transform Kafka topics into queryable Iceberg tables. It uses UK Environment Agency sensor data as a data source, and shows how to use Tableflow with standard SQL to explore and understand the data.
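As an illustrative sketch rather than the blog's exact setup (catalog URI, credentials, package versions, and table names are placeholders), a Tableflow-materialized Iceberg table can be explored with standard SQL from Spark via an Iceberg REST catalog:

```python
# Sketch: querying a Tableflow-materialized Iceberg table with Spark SQL through
# an Iceberg REST catalog. All endpoints, credentials, and names are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tableflow-iceberg-exploration")
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.5.2")
    .config("spark.sql.catalog.tableflow", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.tableflow.type", "rest")
    .config("spark.sql.catalog.tableflow.uri", "<TABLEFLOW_REST_CATALOG_URI>")    # placeholder
    .config("spark.sql.catalog.tableflow.credential", "<API_KEY>:<API_SECRET>")   # placeholder
    .getOrCreate()
)

# Standard SQL over the topic-backed table (namespace and columns are illustrative).
spark.sql("""
    SELECT station_id, AVG(water_level) AS avg_level
    FROM tableflow.environment.river_levels
    GROUP BY station_id
    ORDER BY avg_level DESC
    LIMIT 10
""").show()
```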