Migrating from Heroku to Kubernetes via GKE

  • Control and Customizations
  • Platform Limitations & Available Technologies
  • Automated SSL Certificate Management and Infrastructure Scaling
  • Secrets/Config Management
  • Separate Testing Environments
  • Easy Access to Application Log Outputs

Standardized Deployments and Secret Management

When leveraging Fairwinds’ ClusterOps Service, we help get you set up with a standards-based set of Google Kubernetes Engine (GKE) clusters on which to schedule your containerized workloads. Part of our onboarding includes helping get your application into containers so that it plays nicely in the Kubernetes ecosystem.

Infrastructure Management and SSL Certificate Management

As you may know, when your application starts to get heavy usage, you need to start growing ancillary components like load balancers and databases. Kubernetes on GKE integrates deeply with Google Cloud: it can auto-provision Cloud Load Balancers for individual Services, and it unifies traffic management under “ingress controllers,” so you can slice up traffic to your applications as needed while accommodating growth. Paired with tooling like cert-manager or Google-managed certificates, the TLS certificates for those endpoints can be issued and renewed automatically as well.
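As a concrete sketch of the ingress pattern described above, the manifest below defines a single Ingress that GKE backs with a Cloud Load Balancer and uses to split traffic between two Services. All names, hosts, and ports here are hypothetical, not part of any real setup:

```yaml
# Illustrative GKE Ingress: one Cloud Load Balancer fronting two Services.
# Service names, host, and ports are made up for the example.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.class: "gce"   # GKE's built-in ingress controller
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /            # default traffic goes to the web frontend
            pathType: Prefix
            backend:
              service:
                name: web
                port:
                  number: 80
          - path: /api         # API traffic is sliced off to a separate Service
            pathType: Prefix
            backend:
              service:
                name: api
                port:
                  number: 8080
```

Because routing lives in one resource, adding a new path or backend as you grow is a small manifest change rather than a new load balancer to provision by hand.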

Separate Testing Environments

Kubernetes has lovely features that enable you to separate workloads and provide service discovery contexts for your applications. Our most common use case is housing a development and a staging/pre-prod environment side-by-side in one cluster. This is enabled by Kubernetes Namespaces. Namespaces are a logical boundary of grouped permissions and discovery that can reduce configuration splay across your application configs, as well as provide security separation between workloads. Some of our ClusterOps clients also have custom solutions built on feature branches, deploying each Pull Request as an instance of the application in its own separate namespace.
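The side-by-side environment setup above can be sketched with nothing more than two Namespace objects (the names are illustrative):

```yaml
# Illustrative namespaces housing development and staging in one cluster.
apiVersion: v1
kind: Namespace
metadata:
  name: development
---
apiVersion: v1
kind: Namespace
metadata:
  name: staging
```

The “discovery context” falls out of Kubernetes DNS: a Service named `web` resolves as `web.development.svc.cluster.local` in one environment and `web.staging.svc.cluster.local` in the other, so an application can refer to its dependencies as plain `web` and automatically reach the copy in its own namespace.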

Access to Logs and Output

One of the biggest concerns when transitioning platforms is how you access the application data you need to debug production. No one wants to move to a new system and feel like they’ve lost visibility into the applications serving client traffic.
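In practice, GKE ships container stdout/stderr to Cloud Logging by default, and `kubectl` gives you direct access from the terminal, much like `heroku logs`. A few common commands (the workload names and labels here are hypothetical):

```shell
# Recent output from one workload
kubectl logs deployment/web --tail=100

# Stream logs from every pod matching a label, across all containers
kubectl logs -f -l app=web --all-containers

# Output from a crashed container's previous run
kubectl logs web-7d4b9c8f6d-x2x9q --previous
```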


Workload and Cluster Autoscaling

Once you start hitting your stride in Kubernetes, it’s common to start thinking about the extras of auto-scaling your workloads as well as your cluster. Luckily, Kubernetes on GKE enables cluster node scaling as well as workload scaling! The HorizontalPodAutoscaler can scale your application based on the aggregate load of its containers, and when pods can no longer be scheduled within the cluster’s CPU or memory limits, the cluster autoscaler can add extra compute nodes. Your application keeps scaling up within its bounds, without wasting nodes while system load is low.
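A minimal sketch of the workload half of that picture, assuming a Deployment hypothetically named `web`:

```yaml
# Illustrative HorizontalPodAutoscaler: keep the "web" Deployment between
# 2 and 10 replicas, targeting 70% average CPU utilization across its pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

The cluster half is enabled per node pool in GKE; when the HPA asks for replicas that won’t fit on existing nodes, the cluster autoscaler adds nodes, and it removes them again once they sit underutilized.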

To learn more, download our full guide: “Migrating from Heroku to Kubernetes.”

We’re always available to talk more via email or on Twitter. At the end of the day, we all need sound, repeatable technical solutions that drive the business and don’t add overhead. Those are the solutions we support and create with our clients.

Fairwinds — The Kubernetes Enablement Company | Editor of uptime 99