
The Carbon-Aware Computing Ecosystem: Progress, Patterns and our Race Against Time

Dave Masselink

June 30, 2025 · 8 min read

The carbon-aware computing ecosystem is having a "moment". About a week ago, Electricity Maps announced that their API now natively handles cloud-to-grid region mapping, a feature that means developers can query carbon intensity by AWS region or GCP zone without maintaining their own translation tables. Last month, I had an inspiring conversation with Ryan Singman that covered a lot of ground, including one of the more elegant recent additions to the CarbonAware toolbox: a Python decorator for the Prefect framework that makes carbon-aware orchestration as simple as prefixing @carbon_aware to a function. And here at Compute Gardener, we're seeing increased adoption of our Kubernetes scheduler, which enables carbon-aware workload scheduling without touching a single line of application code.

This should be cause for celebration. The ecosystem is maturing. Tools are becoming more sophisticated. Adoption barriers are falling.

And yet... the latest projections show data center energy consumption (at least) doubling between now and 2030. GenAI workloads alone are expected to consume as much electricity as entire countries. We're in a race where the finish line keeps moving further away, faster than we're running.

This is the dual reality of carbon-aware computing in 2025: We're making real progress, but it's not enough. We're building better tools, but adoption isn't keeping pace with growth. We have solutions that work today, proven, tested and ready to deploy, yet they remain underutilized while energy demands soar.

Fortunately, the patterns emerging in our community show us a path forward. Not through any single tool or approach, but through an ecosystem where each solution serves its users best while amplifying the impact of others. Where a Python developer can add carbon awareness with a decorator, a DevOps engineer can enable it cluster-wide with a scheduler configuration and a cloud architect can optimize placement decisions through intelligent APIs.

Let me show you what this looks like in practice, why it matters more than ever and how you can start reducing your compute-related carbon footprint today. Not someday, but today, with tools that already exist and patterns that already work.

While the challenge is growing faster than our solutions are being adopted, every kilowatt-hour shifted to a cleaner time, every workload deferred during peak carbon intensity, every optimization deployed: they all count. And they count faster when we work together.

Emerging Community Patterns

The carbon-aware computing space is crystallizing around three complementary patterns, each serving somewhat different users and use cases. Understanding these patterns (and how they work together) is key to accelerating adoption.

Developer-First Approaches: Code-Level Integration

Ryan Singman and the team at CarbonAware have pioneered what might be the most elegant entry point for developers: a decorator that makes any Python function running in Prefect/Airflow carbon-aware. Their approach is beautifully simple:

# Decorator imported from the CarbonAware library (exact import path per their docs)
@carbon_aware(max_intensity=100)  # Everything but this line remains unchanged
def train_model():
    # Deferred until grid carbon intensity drops below the threshold (presumably gCO2eq/kWh)
    model.fit(x_train, y_train)

This pattern shines because it meets developers where they are. No infrastructure changes. No DevOps coordination. Just add a decorator and your function automatically runs during lower intensity periods.

The limitation? It requires (minor) code changes. Legacy applications, vendor software or containerized workloads can't easily benefit without modification... which is not always practical or supported by others in an org. But for greenfield development and Python-heavy workflows, it's an immediate win!

Infrastructure-Level Solutions: Intelligent APIs

Electricity Maps' recent announcement represents another crucial pattern: moving complexity to specialized services. Their API now accepts queries like "What's the carbon intensity for AWS us-east-1?" and handles all the messy details of mapping cloud regions to electrical grids.
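
To make that concrete, here's a minimal sketch of such a query from Python. The base URL, endpoint and auth header follow Electricity Maps' existing v3 API, but the provider/region parameter names are my assumption about the new mapping feature; check their docs for the actual request shape:

import os
import requests

API_BASE = "https://api.electricitymap.org/v3"

def cloud_region_intensity(provider: str, region: str) -> float:
    """Latest carbon intensity (gCO2eq/kWh) for a cloud region."""
    resp = requests.get(
        f"{API_BASE}/carbon-intensity/latest",
        headers={"auth-token": os.environ["ELECTRICITY_MAPS_TOKEN"]},
        # Assumed parameter names: the announced feature resolves
        # provider + region to a grid zone server-side, so callers
        # need no translation table of their own.
        params={"provider": provider, "region": region},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["carbonIntensity"]

print(cloud_region_intensity("aws", "us-east-1"))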

This is more than a convenience; it's a fundamental shift in how we think about carbon-aware infrastructure. Instead of every tool maintaining its own mapping tables (guilty as charged; we have one in the Compute Gardener codebase), we can now rely on a constantly maintained, authoritative source.

Another CarbonAware project, cloudinfo, has been instrumental in pushing this direction, creating open mappings between cloud providers and grid regions. The fact that Electricity Maps integrated similar functionality shows how community ingenuity and collaboration drive real improvements.

Orchestration-Layer Tools: Zero Code Change Impact

This brings us to the pattern we've focused on at Compute Gardener: orchestration-level carbon awareness. By implementing carbon-aware scheduling at the Kubernetes layer, we can affect existing workloads without any code modifications.

Kubernetes already has a pluggable scheduler architecture. By deploying Compute Gardener as a secondary scheduler, teams can opt workloads into carbon-aware scheduling with a single line in the pod spec (plus optional annotations for custom settings):

apiVersion: batch/v1
kind: Job
metadata:
  name: my-analysis-job
spec:
  template:
    spec:
      # This is all a job needs to specify in order to delegate to a carbon-aware scheduler
      # (with default settings; custom thresholds, max delays, etc. can be further annotated as needed)
      schedulerName: compute-gardener-scheduler
      containers:
      - name: my-analysis-job
        image: existing-build-train-etl-etc:latest  # No changes to containers or related config needed!

No application changes. No risk to existing workloads. Just carbon-aware scheduling for any containerized workload that could leverage it.

The Collaborator's Dividend

Here's where it gets interesting (though not unexpectedly so, for those who believe in the value of open source software and data). These patterns aren't competing; they work in concert. And that composition is accelerating progress for everyone.

Take our recent decision to remove region mapping code from Compute Gardener. We spent real engineering time building and maintaining cloud-to-grid mappings. But with Electricity Maps' new API endpoints and Ryan's open cloudinfo data, we're about to delete that code entirely.

Software engineers know that removing lines of code can OFTEN feel even better than adding new ones, especially when it allows for doing more with less. Less maintenance work for us, better data for our users and more time to focus on what we do best: making Kubernetes scheduling smarter.

This is the collaborator's dividend in action. When CarbonAware focuses on developer experience, Electricity Maps on data accuracy and accessibility, and Compute Gardener on orchestration, the whole ecosystem gets stronger. Users can choose tools that fit their constraints while benefiting from shared infrastructure.

Why This Matters Now: The Urgency Gap

Let's talk about the gassy elephant in the room: we're not moving fast enough.

Data center electricity consumption is projected to double by 2030, reaching 1,000 TWh annually, roughly the entire electricity consumption of Japan. GenAI workloads are accelerating this trend, with training runs consuming megawatt-hours and inference at scale beginning to dwarf traditional computing workloads.

Meanwhile, our carbon budgets are shrinking. To stay under 1.5°C of warming, we need to cut emissions by 45% by 2030. That's five years out. Not five years to commission a panel, five years to achieve.

This is why every deferred workload matters. When Compute Gardener delays a batch job from running at 500 gCO2eq/kWh to 100 gCO2eq/kWh, that's an 80% reduction in carbon emissions for that workload. Multiply that by thousands of jobs across hundreds of clusters and the impact becomes substantial.
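
The arithmetic is easy to sanity-check. A quick sketch, assuming a hypothetical 10 kWh batch job:

job_energy_kwh = 10     # assumed energy use for the job
peak_intensity = 500    # gCO2eq/kWh at the original start time
clean_intensity = 100   # gCO2eq/kWh after deferral

saved_g = job_energy_kwh * (peak_intensity - clean_intensity)
reduction = (peak_intensity - clean_intensity) / peak_intensity
print(f"{saved_g} gCO2eq avoided ({reduction:.0%} reduction)")
# -> 4000 gCO2eq avoided (80% reduction)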

But it's still not enough. Not when adoption remains limited to the sustainability-conscious few. Not when most organizations still view carbon awareness as a "nice to have" rather than a business imperative.

Practical Integration Patterns

So how do we accelerate? By making adoption so easy that NOT using carbon-aware computing becomes the exception. Here are patterns we're seeing work:

A Layered Approach

Start with orchestration-level tools for immediate impact on existing infra and layer in developer tools for new applications:

  1. Week 1: Deploy Compute Gardener for all deferrable workloads
  2. Month 1: Add carbon-aware decorators to data pipeline code
  3. Quarter 1: Integrate Electricity Maps API for multi-region deployment decisions

Quick Wins

Focus on the easiest, highest-impact workloads first:

  • Batch jobs: Usually perfect for time-shifting (see the sketch after this list)
  • CI/CD pipelines: Run builds and tests during cleaner hours
  • Data processing: ETL and analysis jobs are often ideal for carbon-aware scheduling
  • Model training: Ridiculously high energy use makes carbon awareness critical
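
To illustrate the time-shifting idea behind these quick wins, here's a minimal sketch: given an hourly carbon-intensity forecast, pick the start hour that minimizes average intensity over the job's duration. The forecast numbers below are made up for illustration; real tools pull them from an API like Electricity Maps:

def best_start_hour(forecast: list[float], duration_hours: int) -> int:
    """Index of the start hour with the lowest mean intensity over the job."""
    windows = [
        sum(forecast[i : i + duration_hours]) / duration_hours
        for i in range(len(forecast) - duration_hours + 1)
    ]
    return min(range(len(windows)), key=windows.__getitem__)

# A 3-hour ETL job against an illustrative 24-hour forecast (gCO2eq/kWh)
forecast = [420, 390, 350, 300, 260, 230, 210, 200, 220, 280, 350, 430,
            480, 510, 520, 500, 460, 410, 380, 360, 340, 330, 350, 400]
print(best_start_hour(forecast, duration_hours=3))  # -> 6 (06:00 if index 0 is midnight)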

The Enterprise Pattern

For larger organizations, create a carbon-aware platform layer:

  1. Deploy orchestration tools cluster-wide
  2. Provide and encourage developer libraries and decorators
  3. Integrate carbon metrics into existing dashboards
  4. Set carbon budgets alongside compute budgets (see the sketch below)
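
To make the budget idea concrete, here's a minimal sketch of tracking job emissions against a team's quarterly allowance. This is hypothetical accounting to illustrate the pattern, not a Compute Gardener feature, and the budget figure is assumed:

QUARTERLY_BUDGET_KG = 500.0  # assumed team allowance, kgCO2eq

ledger = []

def record_job(energy_kwh: float, intensity_g_per_kwh: float) -> float:
    """Log a job's emissions (kgCO2eq) and return the remaining budget."""
    emissions_kg = energy_kwh * intensity_g_per_kwh / 1000
    ledger.append(emissions_kg)
    return QUARTERLY_BUDGET_KG - sum(ledger)

print(record_job(energy_kwh=120, intensity_g_per_kwh=250))  # -> 470.0 kg left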

Building the Future We Need

The patterns are clear. The tools exist. The urgency is undeniable. What we need now is adoption at scale.

This is where you come in. Whether you're a developer who can add a decorator to your next function, a DevOps engineer who can enable carbon-aware scheduling or a leader who can mandate carbon metrics alongside performance metrics... YOU have a role to play.

Start where you are.

Share what you learn. The community grows stronger with every blog post, every GitHub issue and every conference talk about real-world carbon-aware computing.

And if you need help along the way (whether it's implementing these patterns, measuring impact or building a carbon-aware strategy), please reach out! We're all in this together and the clock is ticking.

The carbon-aware computing ecosystem is more than mature enough to make a difference. The patterns work. The tools exist. The only question is: will we deploy them fast enough?

Every kilowatt-hour shifted to cleaner times matters. Every optimized workload counts. We know the time to act is now.


Compute Gardener is an open-source Kubernetes scheduler that enables carbon-aware workload scheduling across hybrid clouds. Join us in making carbon-aware computing the default, not the exception.
