Ecogate Completes Major Infrastructure Upgrade

Updated: Mar 8

About a year and a half ago, Ecogate decided to build our online software on Kubernetes, an open-source container orchestration system that originated at Google. Google has some of the most talented software developers on the planet, and it runs some of the largest-scale software services in the world. That pedigree has helped make Kubernetes a rock-solid platform that can meet the scaling needs of virtually any organization.

Kubernetes marks a breakthrough for DevOps because it allows operations teams to keep pace with the requirements of modern software development. It lets us build cloud-native applications that can run anywhere, independent of cloud-specific requirements. Kubernetes manages containers, which package up code so that software engineers can run applications exactly the same way whether they are hosted on dedicated servers, in the cloud, or on a notebook PC. Containers allow applications like Ecogate Remote Access (ERAS) to run across different platforms, use computing resources more efficiently, and receive product updates faster. The software is compiled automatically, and built-in tests check every build for potential issues.

Just as important, Kubernetes provides reliability: clusters are fault-tolerant and self-healing. If a container or an entire node (virtual server) goes down, Kubernetes automatically reschedules the affected workloads on a healthy node.
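The self-healing idea can be illustrated with a toy sketch. This is not Kubernetes code (real clusters do this via the scheduler and controller manager); the node and pod names are purely illustrative.

```python
# Toy sketch of Kubernetes-style self-healing: when a node fails, a
# controller moves its workloads onto the remaining healthy nodes.

def reschedule(pods_by_node, failed_node):
    """Move every pod off the failed node onto healthy nodes (round-robin)."""
    orphans = pods_by_node.pop(failed_node, [])
    healthy = list(pods_by_node)
    for i, pod in enumerate(orphans):
        pods_by_node[healthy[i % len(healthy)]].append(pod)
    return pods_by_node

# Hypothetical three-node cluster; "node-b" goes down.
cluster = {"node-a": ["eras-web"], "node-b": ["eras-api"], "node-c": []}
reschedule(cluster, "node-b")
# "eras-api" now runs on a surviving node; no workload is lost.
```

In a real cluster the same loop runs continuously, comparing the declared number of replicas against what is actually running.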

Where does the name “Kubernetes” come from?

Kubernetes (“koo-burr-NET-eez”) comes from the Greek word κυβερνήτης, meaning “helmsman” or “pilot.”

As part of this upgrade, we modernized our cloud infrastructure to make use of tools and workflows that enable us to deliver new features and improvements to our customers at a faster pace than ever before.

Our new infrastructure uses Kubernetes for workload orchestration, along with continuous integration and deployment tools that let us ship new software faster and with more confidence. Once a workload has been deployed to our cluster, it is continuously monitored, both internally by Prometheus and externally by third-party providers, to ensure our applications remain available and that we can respond to anomalies quickly.

All of the applications running in our cluster are containerized, and both the build artifacts and production configurations are version-controlled using GitOps methodologies and tools. This gives us a canonical source of truth for the state of our cluster and ensures we can verify and audit everything running in it.
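At its core, GitOps means a controller continually diffs the desired state stored in Git against the cluster's actual state and applies only what changed. A minimal sketch of that diff, with hypothetical application names and image tags:

```python
# Sketch of the GitOps reconciliation step: compare desired vs. actual
# state (here, simple name -> spec dictionaries) and classify the changes.

def diff_states(desired, actual):
    """Return (to_create, to_update, to_delete) between two name->spec maps."""
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_update = {k: v for k, v in desired.items()
                 if k in actual and actual[k] != v}
    to_delete = [k for k in actual if k not in desired]
    return to_create, to_update, to_delete

# Illustrative states: Git wants a new API service and an upgraded web image.
desired = {"eras-web": {"image": "eras-web:2.1"},
           "eras-api": {"image": "eras-api:1.4"}}
actual  = {"eras-web": {"image": "eras-web:2.0"},
           "old-job":  {"image": "job:0.9"}}
create, update, delete = diff_states(desired, actual)
```

Because Git is the source of truth, every change in `create`, `update`, or `delete` corresponds to an auditable commit.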

By using continuous integration tools and automated testing, we ensure that our applications meet our quality standards before they are deployed to our cluster. Our continuous integration tools run for every commit, providing quick and consistent feedback that gives our developers the confidence to deploy updates quickly and efficiently.
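The per-commit gate boils down to: run the test suite, and only allow a deployment when every test passes. A minimal sketch with a hypothetical smoke test:

```python
# Sketch of a per-commit CI gate: the deployment proceeds only if the
# automated test suite passes. The test case here is a stand-in.
import unittest

class SmokeTests(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(1 + 1, 2)

def tests_pass():
    """Run the suite and report whether every test succeeded."""
    suite = unittest.TestLoader().loadTestsFromTestCase(SmokeTests)
    result = unittest.TextTestRunner(verbosity=0).run(suite)
    return result.wasSuccessful()

def maybe_deploy():
    """Gate the deployment on the test result."""
    return "deployed" if tests_pass() else "blocked"
```

In practice the CI system runs an equivalent gate for every commit, so a failing test blocks the rollout before it reaches the cluster.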

Once deployed, all of the applications in our cluster are monitored with health checks and automated metrics collection. Automated health checks keep applications online and available and automatically restart degraded applications. Metrics collection through Prometheus gives us insight into our applications so that we can ensure strong performance and take corrective action before outages occur. Additional uptime monitoring by external third parties helps us ensure that our applications remain accessible to our customers.
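Concretely, a containerized service typically exposes a health endpoint for the cluster's liveness probes and a metrics endpoint in Prometheus' text exposition format. A standard-library-only sketch (paths and the metric name are illustrative, not our production endpoints):

```python
# Sketch of the two endpoints a monitored service might expose:
# /healthz for health checks and /metrics for Prometheus scraping.
from http.server import BaseHTTPRequestHandler, HTTPServer
import threading

REQUEST_COUNT = 0  # toy counter; real services use a metrics library

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        global REQUEST_COUNT
        REQUEST_COUNT += 1
        if self.path == "/healthz":
            body = b"ok"
        elif self.path == "/metrics":
            # Prometheus text format: "metric_name value"
            body = f"http_requests_total {REQUEST_COUNT}\n".encode()
        else:
            self.send_response(404)
            self.end_headers()
            return
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the example quiet

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0 = pick a free port
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
```

Kubernetes probes the health endpoint on a schedule and restarts the container when it stops answering, while Prometheus scrapes the metrics endpoint to build the time series we alert on.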

By completing this upgrade, we continue our mission of providing our customers with technologically advanced, world-leading energy-efficient dust & fume collection systems.

Author: Marek Litomisky
