Leveraging Kubernetes for OpenNMS


In my last post, I covered Kubernetes and how to utilize it with legacy applications. In this post, I'll dig into how to use it specifically with OpenNMS.

OpenNMS Architecture

OpenNMS can, in some senses, be considered a legacy application: it has been in the market for more than two decades and relies on many well-established monitoring protocols. OpenNMS is designed with a modular architecture that allows components to be added or extended to meet the specific needs of different network environments. It is intended to be highly scalable and flexible, capable of monitoring large, complex networks with ease.


OpenNMS is architected as a scalable and extensible network management platform. Its architecture consists of several key components:

  • Data Collection: OpenNMS uses a data collection system to gather information about the network, including device status, performance metrics, and events. This system includes data collectors for SNMP, ICMP, JMX, and other protocols, as well as a polling engine for regularly querying devices.
  • Event Management: OpenNMS includes an event management system that processes and correlates events generated by devices on the network. It uses a rules engine to define event processing logic and can generate notifications based on event criteria.
  • Topology Discovery: OpenNMS can automatically discover the topology of the network, including devices, interfaces, and connections between devices. This information is used for visualization and monitoring purposes.
  • Alarm Management: OpenNMS manages alarms generated by events or threshold violations. It includes features for acknowledging, escalating, and clearing alarms, and for generating notifications to operators.
  • Performance Monitoring: OpenNMS provides performance monitoring capabilities, including the collection and storage of performance data over time. It includes features for creating and viewing graphs of performance metrics.
  • Web Interface: OpenNMS includes a web-based user interface for monitoring and managing the network. The interface provides dashboards, views, and reports for displaying network status and performance information.
  • REST API: OpenNMS exposes a REST API that allows external systems to interact with the platform. This API can be used for automation, integration with other systems, and custom application development.

How do you fit OpenNMS into Kubernetes?

1. Start with containers!

Containers are lightweight, portable units that package an application and its dependencies, enabling consistent deployment across different environments. They run in isolated environments, ensuring applications do not interfere with each other, and are more efficient than virtual machines because they share the host OS kernel. This efficiency allows for higher density and faster performance. Containers can be easily moved across environments, making them ideal for CI/CD pipelines. Kubernetes manages the deployment and scaling of these containers, automatically adjusting the number of running instances based on demand and ensuring high availability through self-healing capabilities. This provides a consistent, isolated, and efficient way to deploy and run applications at scale.
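To make that concrete, here is a minimal sketch of running the OpenNMS core as a container under Kubernetes. The image reference, namespace, port, and resource figures are illustrative assumptions, not values from any official deployment.

```yaml
# Minimal illustrative Deployment for an OpenNMS core container.
# Image tag, namespace, and resource requests are assumptions for this sketch.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: opennms-core
  namespace: monitoring
spec:
  replicas: 1                 # the core is stateful; scaling it is more than a replica bump
  selector:
    matchLabels:
      app: opennms-core
  template:
    metadata:
      labels:
        app: opennms-core
    spec:
      containers:
        - name: opennms
          image: opennms/horizon:latest   # placeholder; pin a specific version in practice
          ports:
            - containerPort: 8980         # OpenNMS web UI and REST API
          resources:
            requests:
              cpu: "2"
              memory: 4Gi
```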

2. With a Helm Chart!

Helm itself is a package manager for Kubernetes that helps manage the lifecycle of Kubernetes applications. It simplifies the process of deploying, upgrading, and managing applications on Kubernetes by providing a templating system for defining Kubernetes resources and a command-line interface for interacting with Kubernetes clusters.

A Helm chart is a package format used to define and manage Kubernetes applications. It contains all the Kubernetes resources necessary to run an application, such as deployments, services, ingress rules, and configuration files, packaged into a single unit. Helm charts are written in YAML and can be easily shared and reused, making them a popular way to define and deploy complex applications on Kubernetes.
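As a sketch of how this looks, the hypothetical values.yaml below shows the kind of single file that drives a chart's templates. The key names are illustrative assumptions, not the schema of any published OpenNMS chart.

```yaml
# Hypothetical values.yaml for an OpenNMS Helm chart.
# Key names are illustrative; consult the schema of the chart you actually use.
core:
  image:
    repository: opennms/horizon
    tag: latest              # pin a specific release in practice
  persistence:
    enabled: true
    size: 50Gi
postgresql:
  enabled: true              # let the chart deploy its own PostgreSQL instance
ingress:
  enabled: true
  host: opennms.example.com
```

With something like `helm install opennms <repo>/<chart> -f values.yaml`, the chart renders the underlying Deployments, Services, and ingress rules for you, and upgrades or rollbacks become single Helm commands instead of hand-edited manifests.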

3. Move the Minion out of Kubernetes

To adjust for the source NAT (S-NAT) issue that Kubernetes networking introduces, our recommendation is to move the OpenNMS Minion completely out of Kubernetes. You put the Minion elsewhere because, if it's not within the Kubernetes environment, it does not have to deal with that network address translation; it can take the source information directly. Minions are fairly lightweight; the idea behind them is that they live in remote data centers, and you can deploy them anywhere. Having the Minion outside the cluster flows naturally and resolves the problem, as the sketch below shows.
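For illustration, a Minion at a remote site can run on a plain Docker host and connect back to the core over whatever endpoints you expose from the cluster (a LoadBalancer or NodePort Service, for example). The compose file below is only a sketch; the environment variable names, ports, and broker URL depend on the Minion image and version you deploy, so treat them as assumptions to verify against its documentation.

```yaml
# Illustrative docker-compose.yml for a Minion running outside the cluster.
# Variable names, ports, and URLs are assumptions; check the Minion image docs.
services:
  minion:
    image: opennms/minion:latest
    environment:
      MINION_ID: minion-dc-east-01
      MINION_LOCATION: DC-East            # the monitoring location this Minion serves
      OPENNMS_HTTP_URL: https://opennms.example.com:8980/opennms
      OPENNMS_BROKER_URL: failover:tcp://opennms.example.com:61616
    ports:
      - "162:1162/udp"      # SNMP traps keep their original source addresses
      - "514:1514/udp"      # syslog
    command: ["-f"]         # run in the foreground
```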


Don’t Worry! This is a Journey…

  • Don't give up on your legacy applications!

You probably couldn't even if you wanted to! Legacy applications aren't likely to go away any time soon; it's part of why we call them legacy. They represent technical debt, or IT debt, that has accumulated over time. Those applications are typically sticky; you can't just get rid of them. There's often a specific reason for keeping them around, such as compliance requirements or a specific business strategy. It's the same reason mainframes are still in use.

  • Be Ready: Kubernetes does not always make things easier

The paradigms that Kubernetes brings into play can make things more complicated. It's inherently meant for highly distributed, highly scalable architectures. If your app is monolithic and not distributable, if it can only scale up rather than out, then you're inherently going to run into issues.

Automation is another challenge. Kubernetes provides paradigms for scalability, security, and deployments, along with automation that delivers and installs those paradigms. But when you deploy an app into Kubernetes, that automation still needs to be written. Somebody has to write it; it's not going to magically write itself.
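To make the scale-out point concrete, the HorizontalPodAutoscaler below is the kind of automation Kubernetes hands you almost for free, but it only helps if the target workload can run as interchangeable replicas; a monolithic core cannot simply be multiplied this way. The target name and thresholds are hypothetical.

```yaml
# Illustrative HorizontalPodAutoscaler; it only helps workloads that scale out
# as interchangeable replicas, which a monolithic core typically cannot do.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: opennms-worker
  namespace: monitoring
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: opennms-worker       # a hypothetical horizontally scalable worker
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```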

  • Start with containerization

You need to have an app that runs in a container. If you don't have an app that runs in a container, you can't really run it in Kubernetes.

  • Persistent storage and database needs

Once you've achieved containerization, you need to figure out your storage needs, whether that's a database or just persistent storage for configuration, and make it work in Kubernetes. One of the ideas behind Kubernetes is that applications are treated as "cattle, not pets": you have lots of them, they don't get the individual attention you would give a singular application, and the containers within Kubernetes are inherently ephemeral. If one gets knocked over or dies, you just spin up a new one. Try that with a legacy application and you might not have all the configuration data, so now you need a configuration store somewhere. Likewise with databases: you must ensure the database lives somewhere durable so you can access it without worrying about data loss.
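As a sketch of the Kubernetes-native answer, a PersistentVolumeClaim requests storage that outlives any individual container, so a replacement pod re-mounts the same configuration and data. The names, size, and storage class below are illustrative; the claim would typically be mounted where OpenNMS keeps its configuration (conventionally /opt/opennms/etc), and a similar claim, or a database operator, would keep the PostgreSQL data durable.

```yaml
# Illustrative PersistentVolumeClaim for OpenNMS configuration data.
# The claim outlives individual pods, so a replacement container re-mounts the same data.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: opennms-etc
  namespace: monitoring
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: standard   # depends on what your cluster provides
```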

  • Re-architect for scale and resiliency

You don't start off by trying to re-architect the app; you don't start with the hardest parts. You start with the easy stuff to get it running in the environment. If you do get to the point of re-architecting, you will want to move toward a multi-tier application rather than a single-tier, monolithic one.

Conclusion

As I said in the beginning, legacy applications often still provide irreplaceable value to organizations. Making them work with modern automated software deployment, scaling, and management solutions is worth the effort. It's simply a matter of understanding the Kubernetes flavor you're using and the requirements of the software you're working with, and adjusting to fit. Following the steps above will put you on the path to extending the life of your priceless legacy applications even further.


About the Author: Marshall Massengill


I'm the Principal Solution Delivery Architect for OpenNMS. If you've got questions about IT, Networking, or building robots then I'm happy to help!

Published On: July 24th, 2024 | Last Updated: July 24th, 2024 | 6 min read
