
Strimzi ACL and security

Archived (pre-2022)

Preserved for reference only -- likely outdated. Last updated: December 2021

We use the Strimzi User Operator to manage Kafka users and ACLs.

The User Operator manages Kafka users for a Kafka cluster by watching for KafkaUser resources that describe Kafka users, and ensuring that they are configured properly in the Kafka cluster.

The User Operator allows you to declare a KafkaUser resource as part of your application’s deployment. You can specify the authentication and authorization mechanism for the user. You can also configure user quotas that control usage of Kafka resources to ensure, for example, that a user does not monopolize access to a broker.
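User quotas are declared on the KafkaUser resource itself. A minimal sketch of the quotas stanza (the rates are bytes per second; the values below are illustrative, not recommendations):

```yaml
spec:
  quotas:
    producerByteRate: 1048576   # cap produce throughput at ~1 MiB/s
    consumerByteRate: 2097152   # cap fetch throughput at ~2 MiB/s
    requestPercentage: 55       # cap this user's share of broker request-handling time
```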

When the user is created, the user credentials are created in a Secret. Your application needs to use the user and its credentials for authentication and to produce or consume messages.

In addition to managing credentials for authentication, the User Operator also manages authorization rules by including a description of the user’s access rights in the KafkaUser declaration.
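For a TLS user, the resulting Secret (named after the KafkaUser) holds the client credentials in several formats. A sketch of its shape -- the field names match the extraction commands used later in this page; all values are base64-encoded by Kubernetes:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: test-user              # same name as the KafkaUser
  namespace: kafka-sdk-events
type: Opaque
data:
  user.crt: <base64>           # client certificate (PEM)
  user.key: <base64>           # client private key (PEM)
  user.p12: <base64>           # certificate and key bundled as PKCS#12
  user.password: <base64>      # password protecting user.p12
```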

The User Operator allows us to create KafkaUser resources to represent client authentication credentials. Supported authentication types include TLS and SCRAM-SHA-512.

Below is an example of how to create a user with specific ACLs and connect to the Kafka cluster.

Prerequisites

  1. Create the topic to which ACLs will be granted. This should be performed with Helm: add the new topic to this location and apply the change.
  2. Create a user with read (consume) permissions for the test topic:
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaUser
metadata:
  name: test-user
  labels:
    strimzi.io/cluster: fairbid
spec:
  authentication:
    type: tls
  authorization:
    type: simple
    acls:
      # Consumer ACLs for topic test using consumer group test
      - resource:
          type: topic
          name: test
          patternType: literal
        operation: Read
        host: "*"
      - resource:
          type: topic
          name: test
          patternType: literal
        operation: Describe
        host: "*"                
      - resource:
          type: group
          name: test
          patternType: literal
        operation: Read
        host: "*"
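The KafkaUser can then be applied and checked with kubectl (the file name below is illustrative):

```shell
# Apply the KafkaUser and let the User Operator reconcile it
kubectl apply -f test-user.yaml -n kafka-sdk-events

# The operator creates a Secret with the same name as the user
kubectl get secret test-user -n kafka-sdk-events
```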

Authenticate to Kafka with Java client

  1. Extract and configure the user credentials
$ export KAFKA_USER_NAME=test-user
$ kubectl get secret $KAFKA_USER_NAME -n kafka-sdk-events -o jsonpath='{.data.user\.crt}' | base64 --decode > user.crt
$ kubectl get secret $KAFKA_USER_NAME -n kafka-sdk-events -o jsonpath='{.data.user\.key}' | base64 --decode > user.key
$ kubectl get secret $KAFKA_USER_NAME -n kafka-sdk-events -o jsonpath='{.data.user\.p12}' | base64 --decode > user.p12
$ kubectl get secret $KAFKA_USER_NAME -n kafka-sdk-events -o jsonpath='{.data.user\.password}' | base64 --decode > user.password
  2. Import the entry in user.p12 into another keystore
# deststorepass and srcstorepass are in user.password from step 1
$ keytool -importkeystore -deststorepass <password-from-user.password> -destkeystore kafka-auth-keystore.jks -srckeystore user.p12 -srcstorepass <password-from-user.password> -srcstoretype PKCS12

Example output:

Importing keystore user.p12 to kafka-auth-keystore.jks...
Entry for alias test-user successfully imported.
Import command completed: 1 entries successfully imported, 0 entries failed or cancelled
  3. Verify that the JKS was created properly
$ keytool -list -alias test-user -keystore kafka-auth-keystore.jks
# Enter keystore password when prompted

Example output:

test-user, Sep 17, 2021, PrivateKeyEntry,
Certificate fingerprint (SHA1): E9:E2:C3:C6:19:A4:52:BE:E1:A7:80:1E:84:5A:4D:9A:58:BB:23:6A
  4. Extract and configure the server CA certificate
$ export CLUSTER_NAME=fairbid
$ kubectl get secret $CLUSTER_NAME-cluster-ca-cert -n kafka-sdk-events -o jsonpath='{.data.ca\.crt}' | base64 --decode > ca.crt
$ kubectl get secret $CLUSTER_NAME-cluster-ca-cert -n kafka-sdk-events -o jsonpath='{.data.ca\.password}' | base64 --decode > ca.password
  5. Import it into a truststore. This example uses the built-in truststore that ships with a JDK installation, but that is only for convenience; you are free to use any other truststore.
# The truststore password is required; for the JDK cacerts store the default is "changeit"
$ sudo keytool -importcert -trustcacerts -alias strimzi-kafka-cert -file ca.crt -keystore /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/security/cacerts -storepass changeit
  6. Check that the CA certificate was imported
# You will be prompted for the truststore password. For JDK truststore, the default password is "changeit"
$ keytool -list -v -keystore /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/security/cacerts
  7. Copy kafka-auth-keystore.jks and cacerts to the server where you would like to run the Kafka CLI (for example, one of the broker pods)
$ kubectl cp /Library/Java/JavaVirtualMachines/adoptopenjdk-8.jdk/Contents/Home/jre/lib/security/cacerts fairbid-kafka-0:/tmp/ -n kafka-sdk-events
$ kubectl cp kafka-auth-keystore.jks fairbid-kafka-0:/tmp/ -n kafka-sdk-events
  8. Create a properties file for the Kafka CLI clients
# Create client-ssl-auth.properties with the following content:
bootstrap.servers=fairbid-kafka-0.fairbid-kafka-brokers.kafka-sdk-events.svc.cluster.local:9093
security.protocol=SSL
ssl.truststore.location=/tmp/cacerts
ssl.truststore.password=changeit
ssl.keystore.location=/tmp/kafka-auth-keystore.jks
ssl.keystore.password=<password-from-user.password>
ssl.key.password=<password-from-user.password>
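The same SSL settings can be used from a Java client. A minimal sketch -- the passwords and paths are placeholders to be replaced with the values extracted above, and `org.apache.kafka:kafka-clients` is assumed to be on the classpath for the producer construction shown in the comment:

```java
import java.util.Properties;

public class KafkaSslConfig {

    // Builds a client configuration equivalent to client-ssl-auth.properties.
    // Paths and passwords are placeholders -- substitute your extracted values.
    static Properties sslProperties() {
        Properties props = new Properties();
        props.put("bootstrap.servers",
                "fairbid-kafka-0.fairbid-kafka-brokers.kafka-sdk-events.svc.cluster.local:9093");
        props.put("security.protocol", "SSL");
        props.put("ssl.truststore.location", "/tmp/cacerts");
        props.put("ssl.truststore.password", "changeit");
        props.put("ssl.keystore.location", "/tmp/kafka-auth-keystore.jks");
        props.put("ssl.keystore.password", "<password-from-user.password>");
        props.put("ssl.key.password", "<password-from-user.password>");
        return props;
    }

    public static void main(String[] args) {
        Properties props = sslProperties();
        // With kafka-clients on the classpath, these properties (plus key/value
        // serializers) can be passed straight to a producer or consumer, e.g.:
        //   new KafkaProducer<>(props, new StringSerializer(), new StringSerializer());
        System.out.println(props.getProperty("security.protocol"));
    }
}
```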
  9. List topics with the CLI directly from the Kafka broker (in this example, a pod)
[kafka@fairbid-kafka-0 tmp]$ /opt/kafka/bin/kafka-topics.sh --list --bootstrap-server fairbid-kafka-1.fairbid-kafka-brokers.kafka-sdk-events.svc.cluster.local:9093 --command-config client-ssl-auth.properties

Example output:

test
  10. Example producer

    /opt/kafka/bin/kafka-console-producer.sh --bootstrap-server fairbid-kafka-1.fairbid-kafka-brokers.kafka-sdk-events.svc.cluster.local:9093 --topic test --producer.config client-ssl-auth.properties

  11. Example consumer

    /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server fairbid-kafka-1.fairbid-kafka-brokers.kafka-sdk-events.svc.cluster.local:9093 --topic test --group test --consumer.config client-ssl-auth.properties --from-beginning
    

Similar permissions can be created for consumers and producers.
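For a producer, for instance, the ACLs would grant write access to the topic. A sketch following the same pattern as the consumer rules above (Create is only needed if the client may auto-create the topic):

```yaml
    acls:
      - resource:
          type: topic
          name: test
          patternType: literal
        operation: Write
        host: "*"
      - resource:
          type: topic
          name: test
          patternType: literal
        operation: Describe
        host: "*"
```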

P.S.
All you need to know about keytool is here: Most Common Java Keytool Keystore Commands

Nice blog post about Kafka Strimzi: Kafka on Kubernetes the Strimzi Way Part 3

Strimzi documentation: Using