Open-Meteo Coordinates Service

A FastAPI-based microservice that queries the Open-Meteo Geocoding API to retrieve coordinates for a configured list of cities. The service includes caching, Prometheus metrics, and pre-provisioned Grafana dashboards for monitoring.

Features

  • RESTful API for retrieving city coordinates
  • Intelligent caching to reduce external API calls
  • Prometheus metrics for observability
  • Pre-configured Grafana dashboards with 5 panels
  • Docker Compose setup for easy deployment

Prerequisites

  • Docker
  • Docker Compose

Quick Start

  1. Clone the repository and change into the project directory

    cd open-meteo-service
    
  2. Start all services

    docker compose up --build
    
  3. Access the services

     • API: http://localhost:8000
     • Prometheus: http://localhost:9090
     • Grafana: http://localhost:3000 (default login: admin/admin)

Helm (Kubernetes)

The Helm chart in the helm folder deploys the FastAPI app and, by default, Prometheus and Grafana.

What it deploys

  • App: FastAPI service with /healthz probes and Prometheus scrape annotations
  • Prometheus: Scrapes the app metrics at /metrics
  • Grafana: Pre-provisioned datasource and dashboard for the app metrics

Prerequisites

  • Kubernetes cluster
  • Helm v3

Install

helm install open-meteo-service ./helm

Upgrade

helm upgrade open-meteo-service ./helm

Access the services

kubectl port-forward svc/open-meteo-service-open-meteo-service 8080:8000
kubectl port-forward svc/open-meteo-service-open-meteo-service-prometheus 9090:9090
kubectl port-forward svc/open-meteo-service-open-meteo-service-grafana 3000:3000

Values overview

You can customize the chart via helm/values.yaml or --set flags.

App values

  • image.repository, image.tag, image.pullPolicy
  • service.type, service.port
  • env.cacheFile
  • persistence.enabled, persistence.size, persistence.storageClassName

Prometheus values

  • prometheus.enabled
  • prometheus.image
  • prometheus.service.type, prometheus.service.port
  • prometheus.persistence.enabled, prometheus.persistence.size, prometheus.persistence.storageClassName

Grafana values

  • grafana.enabled
  • grafana.image
  • grafana.service.type, grafana.service.port
  • grafana.adminUser, grafana.adminPassword
  • grafana.persistence.enabled, grafana.persistence.size, grafana.persistence.storageClassName

Example: disable Grafana and Prometheus

helm install open-meteo-service ./helm \
  --set grafana.enabled=false \
  --set prometheus.enabled=false

Example: enable persistence for all components

helm install open-meteo-service ./helm \
  --set persistence.enabled=true \
  --set prometheus.persistence.enabled=true \
  --set grafana.persistence.enabled=true
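
Instead of repeated --set flags, the same overrides can be collected in a custom values file passed with -f (the file name and the values shown are illustrative):

```yaml
# my-values.yaml -- example overrides for the chart in ./helm
image:
  tag: main-a1b2c3d        # pin a specific build
persistence:
  enabled: true            # keep the coordinates cache across restarts
prometheus:
  enabled: false           # skip bundled Prometheus
grafana:
  enabled: false           # skip bundled Grafana
```

Then install with: helm install open-meteo-service ./helm -f my-values.yaml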

GitLab CI Workflow

This project uses a GitLab CI/CD pipeline that automates building, versioning, deployment, and testing on an external k3s cluster (homelab).

Pipeline Overview

The pipeline consists of two stages: build and deploy.

1. Build Stage

  • Docker Image Build: Builds the application Docker image using the project Dockerfile
  • Tagging Strategy: Tags each image with a unique identifier combining branch and commit SHA:
    • Format: <branch>-<shortsha> (e.g., main-a1b2c3d, feature-api-xyz1234)
    • Uses CI_COMMIT_REF_SLUG (sanitized branch name) and CI_COMMIT_SHORT_SHA
  • Multi-Tag Push: Pushes two tags to GitLab Container Registry:
    • $IMAGE_REPO:<branch>-<shortsha> - Specific version tag
    • $IMAGE_REPO:latest - Always points to the most recent build
  • Registry Authentication: Uses GitLab's built-in CI variables (CI_REGISTRY, CI_REGISTRY_USER, CI_REGISTRY_PASSWORD)
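
A minimal sketch of what the build job might look like in .gitlab-ci.yml (the job name and the definition of $IMAGE_REPO are assumptions; the CI_* variables are supplied by GitLab):

```yaml
build:
  stage: build
  tags: [homelab]
  script:
    # Compose the unique tag from branch slug and short SHA, e.g. main-a1b2c3d
    - IMAGE_TAG="${CI_COMMIT_REF_SLUG}-${CI_COMMIT_SHORT_SHA}"
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Build once, tag twice: the specific version and a moving "latest"
    - docker build -t "$IMAGE_REPO:$IMAGE_TAG" -t "$IMAGE_REPO:latest" .
    - docker push "$IMAGE_REPO:$IMAGE_TAG"
    - docker push "$IMAGE_REPO:latest"
```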

2. Deploy Stage

  • Values File Update:
    • Uses yq to update open-meteo-service/values.yaml
    • Sets image.tag to the newly built tag (<branch>-<shortsha>)
    • Keeps image.repository constant in the values file
    • Commits the change with message: ci: bump image tag to <tag> [skip ci]
    • Pushes back to the repository using CI_JOB_TOKEN for authentication
  • Helm Deployment:
    • Deploys to namespace: sandbox-gitlab
    • Release name: open-meteo-service-gitlab
    • Uses the updated values.yaml file from the workspace
    • Waits for deployment to be ready (5-minute timeout)
  • Automated Tests:
    • Port-forwards the service to localhost:8000
    • /healthz - Checks service health
    • /coordinates - Validates JSON response contains "data" field
    • /metrics - Ensures metrics endpoint is accessible
    • All tests must pass for the pipeline to succeed
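
The deploy steps above could be sketched as follows (the yq expression, service name, and exact flags are assumptions, not the project's actual job definition):

```yaml
deploy:
  stage: deploy
  tags: [homelab]
  script:
    # Bump the image tag in the chart values and push the change back
    - yq -i '.image.tag = strenv(IMAGE_TAG)' open-meteo-service/values.yaml
    - git commit -am "ci: bump image tag to ${IMAGE_TAG} [skip ci]"
    - git push "https://gitlab-ci-token:${CI_JOB_TOKEN}@${CI_SERVER_HOST}/${CI_PROJECT_PATH}.git" "HEAD:${CI_COMMIT_BRANCH}"
    # Deploy and wait up to 5 minutes for the rollout
    - helm upgrade --install open-meteo-service-gitlab ./open-meteo-service -n sandbox-gitlab --create-namespace --wait --timeout 5m
    # Smoke tests against the port-forwarded service
    - kubectl -n sandbox-gitlab port-forward svc/open-meteo-service-gitlab 8000:8000 &
    - sleep 5
    - curl -fsS http://localhost:8000/healthz
    - curl -fsS http://localhost:8000/coordinates | grep -q '"data"'
    - curl -fsS http://localhost:8000/metrics > /dev/null
```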

Preventing Infinite Loops

The pipeline includes safeguards to prevent infinite pipeline triggers:

  • The deploy job includes a rule: if: '$CI_COMMIT_MESSAGE =~ /^ci: bump image tag/' with when: never
  • The values bump commit includes [skip ci] to prevent triggering a new pipeline
  • The deploy job uses the already-updated values.yaml in the current workspace
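
The guard rule might look like this on the deploy job (the surrounding rules are an assumption):

```yaml
deploy:
  rules:
    # Never run for commits produced by the pipeline's own values bump
    - if: '$CI_COMMIT_MESSAGE =~ /^ci: bump image tag/'
      when: never
    - if: '$CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH'
```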

Deployment Target

  • Cluster: External k3s homelab cluster (not ephemeral)
  • Runner: Self-hosted GitLab runner tagged with homelab
  • Namespace: sandbox-gitlab (separate from ArgoCD/Harbor setups)
  • Ingress: Configured for open-meteo-gitlab.dvirlabs.com (see values.yaml)

Viewing Pipeline Status

  1. Navigate to CI/CD > Pipelines in your GitLab project
  2. Click on any pipeline run to see job details
  3. Expand jobs to view real-time logs
  4. The deploy job shows Helm output, rollout status, and test results

Manual Deployment

To deploy manually with a specific tag:

# Option 1: Deploy using updated values.yaml
helm upgrade --install open-meteo-service-gitlab ./open-meteo-service \
  -n sandbox-gitlab --create-namespace

# Option 2: Override tag via --set
helm upgrade --install open-meteo-service-gitlab ./open-meteo-service \
  -n sandbox-gitlab --create-namespace \
  --set image.tag=main-a1b2c3d

API Documentation

Endpoints

GET /coordinates

Retrieve coordinates for all configured cities.

Response:

{
  "source": "cache",
  "data": {
    "Tel Aviv": {
      "name": "Tel Aviv",
      "latitude": 32.08088,
      "longitude": 34.78057,
      "country": "Israel"
    },
    ...
  }
}

GET /coordinates/{city}

Retrieve coordinates for a specific city.

Parameters:

  • city (path) - City name (e.g., "Ashkelon", "London")

Example:

curl http://localhost:8000/coordinates/Paris

Response:

{
  "source": "open-meteo",
  "data": {
    "name": "Paris",
    "latitude": 48.85341,
    "longitude": 2.3488,
    "country": "France"
  }
}
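
When scripting against this endpoint, the response can be parsed like this (sample payload copied from above; the helper name is illustrative):

```python
import json


def extract_coords(raw: str) -> tuple:
    """Pull (latitude, longitude) out of a /coordinates/{city} response."""
    data = json.loads(raw)["data"]
    return (data["latitude"], data["longitude"])


sample = (
    '{"source": "open-meteo", "data": {"name": "Paris", '
    '"latitude": 48.85341, "longitude": 2.3488, "country": "France"}}'
)
print(extract_coords(sample))  # (48.85341, 2.3488)
```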

GET /metrics

Prometheus metrics endpoint exposing service metrics.

Example:

curl http://localhost:8000/metrics

GET /healthz

Health check endpoint.

Response:

{
  "status": "ok"
}

Metrics

The service exposes the following Prometheus metrics:

HTTP Metrics

  • http_requests_total - Counter of total HTTP requests

    • Labels: endpoint, method, status
  • http_request_duration_seconds - Histogram of request durations

    • Labels: endpoint, method
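
For illustration, these metrics appear in the /metrics exposition output roughly as follows (sample values invented):

```
http_requests_total{endpoint="/coordinates",method="GET",status="200"} 42.0
http_request_duration_seconds_bucket{endpoint="/coordinates",method="GET",le="0.1"} 40.0
http_request_duration_seconds_sum{endpoint="/coordinates",method="GET"} 1.9
http_request_duration_seconds_count{endpoint="/coordinates",method="GET"} 42.0
```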

Cache Metrics

  • coordinates_cache_hits_total - Counter of cache hits
  • coordinates_cache_misses_total - Counter of cache misses

External API Metrics

  • openmeteo_api_calls_total - Counter of calls to Open-Meteo Geocoding API
    • Labels: city

Grafana Dashboard

The pre-configured dashboard includes 5 panels:

  1. Request Rate - Requests per second by endpoint
  2. Request Duration p95 - 95th percentile latency
  3. Cache Hits vs Misses - Cache effectiveness
  4. Open-Meteo Calls by City - External API usage per city
  5. Requests by Status - HTTP status code distribution
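
As an example, the p95 latency panel is typically driven by a PromQL query along these lines (the exact query in the provisioned dashboard may differ):

```
histogram_quantile(0.95, sum(rate(http_request_duration_seconds_bucket[5m])) by (le, endpoint))
```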

Access the dashboard at http://localhost:3000 after logging in with admin/admin.

Caching

The service uses a local JSON file (coordinates_cache.json) to cache city coordinates:

  • Reduces external API calls
  • Shared across all API endpoints
  • Persists between requests, but not across container restarts unless persistent storage is enabled (see the persistence values in the Helm chart)
  • Automatically updated when new cities are queried
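
A minimal sketch of how such a JSON file cache can work (function names are illustrative, not the actual service.py API):

```python
import json
import os

# Same default as the service's CACHE_FILE environment variable
CACHE_FILE = os.environ.get("CACHE_FILE", "coordinates_cache.json")


def load_cache(path: str = CACHE_FILE) -> dict:
    """Read the JSON cache; an absent file means an empty cache."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)


def save_city(city: str, coords: dict, path: str = CACHE_FILE) -> None:
    """Insert or update one city's coordinates and persist the cache."""
    cache = load_cache(path)
    cache[city] = coords
    with open(path, "w") as f:
        json.dump(cache, f, indent=2)
```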

Development

Project Structure

.
├── app/
│   ├── main.py          # FastAPI application
│   ├── service.py       # Business logic & caching
│   └── metrics.py       # Prometheus metrics definitions
├── grafana/
│   ├── provisioning/    # Auto-configured datasources & dashboards
│   └── dashboards/      # Dashboard JSON definitions
├── docker-compose.yml   # Service orchestration
├── Dockerfile           # Python app container
├── prometheus.yml       # Prometheus scrape configuration
└── requirements.txt     # Python dependencies

Stop Services

docker compose down

View Logs

docker compose logs -f open-meteo-service

Rebuild After Code Changes

docker compose up --build

Configuration

Environment Variables

  • CACHE_FILE - Path to cache file (default: coordinates_cache.json)

Scrape Interval

Edit prometheus.yml to adjust the scrape interval (default: 15s).
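
For reference, the relevant section of prometheus.yml looks roughly like this (the job name and scrape target are assumptions):

```yaml
scrape_configs:
  - job_name: open-meteo-service
    scrape_interval: 15s          # adjust to change scrape frequency
    static_configs:
      - targets: ["open-meteo-service:8000"]
```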

Testing

Generate test traffic to populate metrics:

# Test all endpoints
curl http://localhost:8000/coordinates
curl http://localhost:8000/coordinates/Paris
curl http://localhost:8000/coordinates/London

# Generate load
for i in {1..10}; do curl -s http://localhost:8000/coordinates > /dev/null; done

License

MIT
