Kafka ops: Migrating from EC2-Classic to VPC

Engineering May 07, 2018

A few strategies for migrating a Kafka cluster from EC2-Classic to a VPC:

Strategy 1:

Launch a new cluster (ZooKeeper + Kafka brokers) in the VPC,
point producers to the new cluster, and drain the older cluster once consumers have also been pointed to the new cluster.
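To make the producer-side switch concrete: for most applications it is just a bootstrap.servers change. A minimal sketch, assuming the standard Java client; the kafka-vpc-* hostnames and the events topic are hypothetical placeholders.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class NewClusterProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // The only change: bootstrap.servers now points at the new VPC brokers
        // instead of the EC2-Classic ones (hostnames are placeholders).
        props.put("bootstrap.servers", "kafka-vpc-1:9092,kafka-vpc-2:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "key", "hello from the VPC cluster"));
        }
    }
}
```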

pros:

  • The existing cluster isn't impacted, which makes it easy to revert.
  • Smooth transition on the Kafka cluster side: no broker config or ZooKeeper changes.

cons:

  • In case of a revert, potential loss of messages.
  • Consumers may have to take some downtime.
  • Some consumers also produce back to the cluster; they have to engineer their applications to work with both Kafka clusters during the transition.

Strategy 2:

Launch a new cluster (ZooKeeper + Kafka brokers), and
point producers to both clusters. Let the data in both clusters become eventually consistent. Once the clusters are identical, drain the older cluster.
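A rough sketch of the dual-write idea, assuming the standard Java client; the DualWriteProducer wrapper, cluster addresses, and topic names are hypothetical and only illustrate mirroring every write into both clusters.

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class DualWriteProducer implements AutoCloseable {
    private final Producer<String, String> oldCluster;
    private final Producer<String, String> newCluster;

    public DualWriteProducer(String oldBootstrap, String newBootstrap) {
        this.oldCluster = build(oldBootstrap);
        this.newCluster = build(newBootstrap);
    }

    private static Producer<String, String> build(String bootstrap) {
        Properties props = new Properties();
        props.put("bootstrap.servers", bootstrap);
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());
        return new KafkaProducer<>(props);
    }

    // Every record goes to both clusters so they stay (eventually) in sync.
    public void send(String topic, String key, String value) {
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, key, value);
        oldCluster.send(record);
        newCluster.send(record);
    }

    @Override
    public void close() {
        oldCluster.close();
        newCluster.close();
    }
}
```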

pros:

  • Engineering effort is minimal and limited to producers (including applications that produce from what they consume).
  • Dummy consumers can be set up against the new cluster, so testing can be done fairly smoothly.

cons:

  • Potential risk of duplicate consumption of messages.
  • Applications need changes.

Strategy 3:

Add new brokers (inside the VPC) one by one, and drain the old brokers one by one.
Once the brokers are migrated, move ZooKeeper the same way.

pros:

  • Applications are agnostic to these changes; no engineering work is required.

cons:

  • Draining a broker turns out to be expensive. Partition reassignment consumes significant network bandwidth and can saturate the network, causing distress to applications.
    There is a way to put throttling limits on replication bandwidth and let the migration happen in a controlled manner (see the sketch after this list).
  • Operationally, it is a lengthy procedure; the overall transition time would be in weeks.
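For reference, setting those replication-throttle limits can look roughly like the sketch below, assuming a Java AdminClient recent enough to support incrementalAlterConfigs; the broker id, bootstrap address, and 10 MB/s rate are placeholders. In practice the kafka-reassign-partitions tool's --throttle option applies these rates (plus the per-topic throttled-replica lists) for you.

```java
import java.util.Arrays;
import java.util.Collection;
import java.util.Map;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class ReassignmentThrottle {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-vpc-1:9092"); // placeholder address

        try (AdminClient admin = AdminClient.create(props)) {
            // Cap both the leader (outgoing) and follower (incoming) sides of
            // replication on broker 1 to roughly 10 MB/s (placeholder values).
            ConfigResource broker = new ConfigResource(ConfigResource.Type.BROKER, "1");
            String rate = String.valueOf(10 * 1024 * 1024);
            Collection<AlterConfigOp> ops = Arrays.asList(
                    new AlterConfigOp(new ConfigEntry("leader.replication.throttled.rate", rate),
                            AlterConfigOp.OpType.SET),
                    new AlterConfigOp(new ConfigEntry("follower.replication.throttled.rate", rate),
                            AlterConfigOp.OpType.SET));
            admin.incrementalAlterConfigs(Map.of(broker, ops)).all().get();
        }
    }
}
```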

Srujan

