Lately, I’ve been spending considerable time traveling across Germany on Deutsche Bahn, which presented the perfect opportunity to finally dive into “vibe coding”—that relaxed, exploratory approach to building software with the assistance of Large Language Models. Sometimes the best projects emerge from the most mundane frustrations.
The ICE Portal Magazine Downloader Project
The Problem: Manual Magazine Downloads on Unreliable Connections
Anyone who’s traveled extensively on German ICE trains knows the pain: you want to download some reading material from the ICE Portal, but the process is tedious. Each magazine requires individual clicks, navigation, and patience. Worse yet, the onboard WiFi connection is notoriously unreliable, dropping out just when you need it most.
Since I prefer using my mobile data anyway, I found myself repeatedly thinking: “There has to be a better way to bulk download these magazines before switching networks.”
The Solution: Automation Through Vibe Coding
Enter the ICE Portal Magazine Downloader, a small utility that handles the tedious clicking and downloading automatically. It was proudly vibe-coded during a train journey from Hamburg to Ingolstadt, with heavy assistance from DeepSeek R1 Qwen (14b), Llama 4 (17b), and Mistral (24b). Start the application, let it work its magic, then switch to your mobile network with a complete digital library ready to go.
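The core idea is simple: scrape the portal's magazine page for download links, then fetch them all. The sketch below is a minimal illustration of that first step, not the project's actual code; the `iceportal.example` base URL and the assumption that magazines are linked as plain PDF anchors are hypothetical stand-ins.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class PdfLinkCollector(HTMLParser):
    """Collects the href targets of <a> tags that point at PDF files."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value and value.lower().endswith(".pdf"):
                # Resolve relative links against the portal's base URL.
                self.links.append(urljoin(self.base_url, value))


def collect_pdf_links(html, base_url):
    """Return absolute URLs of all PDF links found in an HTML page."""
    parser = PdfLinkCollector(base_url)
    parser.feed(html)
    return parser.links
```

Once you have the link list, downloading them one by one (or, better, in parallel) is straightforward.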
Technical Implementation: A Tale of Two Languages
What made this project particularly interesting was implementing it in both Python and Go, using VSCode with the Continue extension for AI-assisted development. This dual-language approach provided some fascinating insights:
Python Implementation:
- Quick to prototype and iterate
- Familiar ecosystem for web scraping
- Perfect for initial proof of concept
Go Implementation:
- Significantly better performance through parallel downloads
- More robust error handling for network operations
- Cleaner deployment as a single binary
The Go version ultimately proved superior, demonstrating how language choice can dramatically impact user experience—parallel downloads make the difference between grabbing a coffee and actually finishing your download before the train reaches the next station.
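In the Go version this parallelism comes from goroutines; the same pattern looks roughly like this in Python with a thread pool. This is an illustrative sketch, not the project's actual API: the fetch function is injected so the example stays self-contained, and `download_all` is a name invented here.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed


def download_all(urls, fetch, max_workers=4):
    """Fetch all URLs in parallel; returns {url: bytes}, skipping failures."""
    results = {}
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(fetch, url): url for url in urls}
        for future in as_completed(futures):
            url = futures[future]
            try:
                results[url] = future.result()
            except Exception:
                # On flaky onboard WiFi, per-download error handling is
                # essential; a failed item is simply skipped here rather
                # than aborting the whole batch.
                pass
    return results
```

The design point is the same in both languages: each download is independent, so failures stay isolated and the slowest magazine no longer blocks all the others.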
The Vibe Coding Experience
Using Continue with VSCode transformed the development experience. Instead of wrestling with documentation or debugging network requests manually, I could focus on the core logic while the AI handled boilerplate code, website interactions, and error cases. This collaborative approach felt particularly natural for a utility project like this: less architectural planning, more iterative exploration.
Getting Started with Vibe Coding: VSCode Continue Setup
If this project has inspired you to explore vibe coding yourself, you’re in luck. Red Hat’s Model as a Service (MaaS) platform provides access to several powerful open-source models that work excellently with the Continue extension.
Why Use Red Hat MaaS with Continue?
The beauty of this setup lies in the diversity of available models. Rather than being locked into a single provider’s approach, you can experiment with different model personalities and capabilities:
- DeepSeek R1 - Excellent for reasoning-heavy tasks and complex problem solving
- Llama 4 Scout - Strong general-purpose model with good code generation
- Mistral Small - Efficient and fast, perfect for quick iterations
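Because these endpoints are served by vLLM, which exposes an OpenAI-compatible API, you can also talk to them outside the editor, for example to smoke-test your credentials. The sketch below only builds the request and does not send it; the `/chat/completions` path follows the OpenAI convention, and `build_chat_request` is a helper invented for this example.

```python
import json
import urllib.request

# Endpoint taken from the Continue config shown later in this post.
API_BASE = "https://mistral-small-24b-w8a8-maas-apicast-production.apps.prod.rhoai.rh-aiservices-bu.com:443/v1"


def build_chat_request(api_base, api_key, model, prompt):
    """Build (without sending) an OpenAI-style chat-completions request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        api_base + "/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,
        },
    )


req = build_chat_request(API_BASE, "<your-api-key-here>", "mistral-small-24b-w8a8",
                         "Write a Go function that downloads a file.")
# urllib.request.urlopen(req) would actually send it; omitted so the sketch stays offline.
```

Swapping `API_BASE` and the model name lets you exercise any of the three models the same way.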
Setup Guide: From Zero to Vibe Coding
Step 1: Get Access to Red Hat MaaS
- Navigate to https://maas.apps.prod.rhoai.rh-aiservices-bu.com/
- Sign up and log in to the platform
- Create a new application to get your credentials
Step 2: Install the Tools
- Install VSCode if you haven’t already
- Install the Continue plugin from the VSCode marketplace
Step 3: Configure Continue
Create your configuration file with: code ~/.continue/config.yaml
Here’s a complete configuration that gives you access to all three models:
```yaml
name: Local Assistant
version: 1.0.0
schema: v1
models:
  - name: deepseek-r1-qwen-14b
    provider: deepseek
    model: r1-qwen-14b-w4a16
    apiKey: <your-api-key-here>
    apiBase: https://deepseek-r1-qwen-14b-w4a16-maas-apicast-production.apps.prod.rhoai.rh-aiservices-bu.com:443/v1
  - name: Llama-4-17b
    provider: vllm
    model: llama-4-scout-17b-16e-w4a16
    apiKey: <your-api-key-here>
    apiBase: https://llama-4-scout-17b-16e-w4a16-maas-apicast-production.apps.prod.rhoai.rh-aiservices-bu.com:443/v1
  - name: mistral-24b
    provider: vllm
    model: mistral-small-24b-w8a8
    apiKey: <your-api-key-here>
    apiBase: https://mistral-small-24b-w8a8-maas-apicast-production.apps.prod.rhoai.rh-aiservices-bu.com:443/v1
context:
  - provider: code
  - provider: docs
  - provider: diff
  - provider: terminal
  - provider: problems
  - provider: folder
  - provider: codebase
```
Step 4: Start Coding
Restart VSCode, and you’re ready to experience the collaborative nature of AI-assisted development. The context providers ensure the models understand your codebase, recent changes, and current challenges.
Final Thoughts: The Future of Casual Development
This project reinforced my belief that we’re entering an era where the friction between “I have an idea” and “I have a working solution” is rapidly disappearing. The ICE Portal Magazine Downloader took mere hours to build, test, and refine—time I could easily spare during train journeys.
More importantly, vibe coding isn’t about replacing traditional development practices; it’s about expanding what’s possible during those in-between moments. Whether you’re on a train, waiting for a meeting, or just want to explore an idea quickly, AI-assisted development makes previously time-prohibitive projects suddenly feasible.
The combination of open-source models, accessible tooling, and cloud infrastructure is democratizing software creation in ways we’re only beginning to understand. Sometimes the best way to learn about this future is to build something small but genuinely useful—even if it’s just downloading train magazines more efficiently.