Cool things from the Open Practice Library
The Open Practice Library is a community-driven, open-source collection of proven practices designed to support teams in delivering better outcomes. It includes tools like Event Storming, User Story Mapping, and Affinity Mapping, which help teams align on goals, prioritize work, and iterate effectively. Developed by Red Hat’s Open Innovation Labs, the library serves as a valuable resource for teams seeking to adopt agile and DevOps methodologies in a structured, collaborative way.
I was once able to attend a workshop in Vienna run by two fellow Red Hatters and see how the Open Practice Library is used in real life.
And I liked what I saw.
The tools support both working through the topics efficiently and structuring the results. We achieved many good results in a comparatively short time.
By the way, there’s a Red Hat course about the Open Practice Library named TL500. Here’s a link.
But why am I writing this?
I will soon be moderating a two-hour workshop on app development and migration and am working out how to use the time most effectively. Since the time slot is very short, the goal can only be to touch on certain topics and then do deep dives at a later time. Even though the tools from the Open Practice Library require more time, I thought I might be able to use only parts or sections of them and thus use the time more efficiently.
I contacted one of the two Red Hatters from Vienna and discussed this with him. He agreed that the time slot was very tight, but he also pointed me to some tools that could be used in part. I also talked with my Red Hat Specialist Solution Architect colleague Karsten Gresch, and he mentioned another tool that may be useful.
To summarize, here are links to the tools from the Open Practice Library that were pointed out to me:
These tools are all well and good, but they’re only truly useful in a workshop if they’re properly embedded in a context and have a consistent theme. Overall, we chose the last two from the many tools because they’re relatively short and build on each other in terms of content. They’re ideal for reaching the goal, working out further steps, and identifying what’s important and what you can do in future workshops.
Migration Toolkit for Applications - Assessment Questionnaire
Another approach to using the time in the app development and migration workshop most effectively is to focus on the app migration part.
The goal of app migration today is to end up with an app that follows the twelve-factor architecture.
But how can this be achieved?
At Red Hat, we have a questionnaire that aims to do just that: applications are identified, categorized, and examined for dependencies; software lifecycles are recorded; maintainability and container readiness are examined; and much more.
An excerpt from the complete questionnaire (under the name “Pathfinder”) has found its way into the konveyor project and the Migration Toolkit for Applications:
However, the aim of this questionnaire is not to carry out the migration itself, but rather to find out where you stand: to identify hurdles and to record in a structured way what generally needs to be done.
The migration steps themselves are also covered by the Migration Toolkit for Applications and the konveyor project, namely in the form of static code analysis and the provision of guidelines. But that’s another story.
Make modelcars and use them in OpenShift AI
Modelcars are cool. I already wrote about them in CW14, where I also shared the awesome blog post from Trevor Royer on Red Hat Developers on how to get started.
In our Kubernetes-dominated world, containers are the way. So, why don’t we deploy our models in containers? The reason is probably that there’s no real standard for model serving (yet) and some model serving frameworks aim to deploy multiple models.
But don’t worry, modelcars are easy to build and super cool in OpenShift AI. So let’s just try it out.
Step 0: Find a cool model on Hugging Face you would like to deploy
Obviously, the first step is to see if there’s a nice model to use. In my use case, it is the embedding model nomic-bert-2048; here’s the link to Hugging Face.
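Not part of the original post, but if you want a quick look at what the repository actually contains before you build anything, huggingface_hub can list the files for you:

from huggingface_hub import list_repo_files

# List the files in the model repository (no token needed for public repos)
for filename in list_repo_files("nomic-ai/nomic-bert-2048"):
    print(filename)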
Step 1: Create a Hugging Face Access token
- On Hugging Face, click on your profile picture and then on “Access tokens”.
- Click on “Create new token” at the top right.
- Set a token name.
- Enable all checkboxes under “Repositories”.
- Click on “Create” and save your access token.
Here’s a picture of my created token:
We will use the access token in the next step.
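If you want to verify the token before using it anywhere, here’s a small sketch (my addition, not one of the official steps) that asks Hugging Face who the token belongs to; it raises an error if the token is invalid:

from huggingface_hub import whoami

# Placeholder: paste the access token you just created
token = "hf_..."

# Returns your account info if the token is valid, raises an error otherwise
print(whoami(token=token))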
Step 2: Create a folder with the Containerfile and the download_model.py, as described in the blog post
For convenience, here are the snippets:
download_model.py, prefilled with the nomic-bert model. If you would like to deploy a different model, feel free to change the model_repo value.
from huggingface_hub import snapshot_download
# Specify the Hugging Face repository containing the model
model_repo = "nomic-ai/nomic-bert-2048"
snapshot_download(
repo_id=model_repo,
local_dir="/models",
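# Only download the safetensors weights plus the config, tokenizer, and code files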
allow_patterns=["*.safetensors", "*.json", "*.txt", "*.py"],
)
Containerfile
FROM registry.access.redhat.com/ubi9/python-311:latest as base
USER root
ENV HF_TOKEN=<enter your huggingface token here>
RUN pip install huggingface-hub
# Download the model file from hugging face
COPY download_model.py .
RUN python download_model.py
# Final image containing only the essential model files
FROM registry.access.redhat.com/ubi9/ubi-micro:9.4
# Copy the model files from the base container
COPY --from=base /models /models
USER 1001
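A nice side effect of the multi-stage build: the Python tooling and the build stage that contains your Hugging Face token are left behind, and the image you ship is just ubi-micro plus the files in /models.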
Enter your Hugging Face access token as the value of HF_TOKEN in the Containerfile.
Step 3: Create an image repository on quay.io (or somewhere else) and login with podman
Go to quay.io and create a repository with the name of your model. In my case, I created quay.io/modzelewski/nomic-bert-2048.
Log in to quay.io with your podman CLI (podman login quay.io).
Step 4: Build and upload your image
All prerequisites are done, so you should be good to go. Build and push your image with these two commands (change the repository name to your own, of course):
podman build -t quay.io/modzelewski/nomic-bert-2048 --platform linux/amd64 .
podman push quay.io/modzelewski/nomic-bert-2048
Step 5: Deploy on OpenShift AI
- In your OpenShift AI project, navigate to the Models tab and click on “Deploy model”.
- Use “vLLM ServingRuntime for KServe” as the serving runtime.
- Fill in a name, scroll down to the Source model location section, and click on “Create connection”.
- Select “URI”, fill in a name, and paste your quay.io repo URL with an oci:// prefix; in my case, the result is oci://quay.io/modzelewski/nomic-bert-2048:latest.
Here’s a screenshot:
And that’s it. Your model should now be ready and you should be able to use your endpoints in your use case.
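In case you want to call the model from code, here’s a minimal sketch, assuming the vLLM runtime exposes its usual OpenAI-compatible /v1/embeddings endpoint; the endpoint URL, token, and model name are placeholders you’d replace with the values from your deployment in OpenShift AI:

import requests

# Placeholders: take the inference endpoint and, if authentication is enabled,
# the token from the model's details in OpenShift AI
endpoint = "https://<your-model-endpoint>/v1/embeddings"
token = "<your-token>"

response = requests.post(
    endpoint,
    headers={"Authorization": f"Bearer {token}"},
    json={
        "model": "nomic-bert-2048",  # the deployment/model name you chose
        "input": ["Modelcars are cool."],
    },
    timeout=30,
)
response.raise_for_status()

# The embedding vector for the first input string
embedding = response.json()["data"][0]["embedding"]
print(len(embedding), embedding[:5])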