It Was Twenty Years Ago Today …

On March 30th, 2000, the OpenNMS Project was registered on Sourceforge. While the project actually started sometime in the summer of 1999, this was the first time OpenNMS code had been made public so we’ve always treated this day as the birth date of the OpenNMS project.

Wow.

OpenNMS Entry on Sourceforge

Now I wasn’t around back then. I didn’t join the project until September of 2001. When I took over the project in May of 2002 I didn’t really think I could keep it alive for twenty years.

Seriously. I wasn’t then nor am I now a Java programmer. I just had a feeling that there was something of value in OpenNMS, something worth saving, and I was willing to give it a shot. Now OpenNMS is considered indispensable at some of the world’s largest companies, and we are undergoing a period of explosive growth and change that should cement the future of OpenNMS for another twenty years.

What really kept OpenNMS alive was its community. In the beginning, when I was working from home using a slow satellite connection, OpenNMS was kept alive by people on the IRC channel, people like DJ and Mike who are still involved in the project today. A year or so later I was able to convince my business partner and good friend David to join me, and together we recruited a real Java programmer in Matt. Matt is no longer involved in the project (people leaving your project is one of the hardest things to get used to in open source) but his contributions in those early days were important. Several years after that we were joined by Ben and Jeff, who are still with us today, and through slow and steady steps the company grew alongside the project. They were followed by even more amazing people that make up the team today (I really want to name every single one of them but I’m afraid I’ll miss one and they’ll be rightfully upset).

I really can’t overstate how little of the credit for the success of OpenNMS belongs to me. My only talent is getting amazing people to work with me, and then I just try to remove any obstacles that get in their way. I get some recognition as “The Mouth of OpenNMS” but most of the time I just stand on the shoulders of giants and enjoy the view.

Once Again Into the Breach – Back with Apple

Almost a decade after my divorce from Apple, I find myself back with the brand, and it is all due to the stupid watch.

TL;DR: As a proponent of free software, I grouse about the “walled garden” approach Apple takes with its products, but after a long time away from those products I find myself back in, mainly because free software missed the boat on mobile.

Back in 2011, I stopped using Apple products. This was for a variety of reasons, and for the most part I found that I could do quite well with open source alternatives.

My operating system of choice became Linux Mint. The desktop environment, Cinnamon, allowed me to get things done without getting in the way, and the Ubuntu base allowed me to easily interact with all my hardware. I got rid of my iMac and bought a workstation from System76, and for a time things were good.

I sold my iPhone and bought an Android phone which was easier to interact with using Linux. While I didn’t have quite all of the functionality I had before, I had more than enough to do the things I needed to do.

But then I started to have issues with the privacy of my Android phone. I came across a page which displayed all of the data Google was collecting on me, which included every call, every text, and every application I opened and how long I used it. Plus, the stock Google phones started to ship with all of the Google Apps, many of which I didn’t use and which just took up space. While the base operating system of Android, the Android Open Source Project (AOSP), is open source, much of the software on a stock Android phone is very proprietary, with questionable motives behind gathering all of that data.

Then I started playing with different Android operating systems known as “Custom ROMs”. Since I was frequently installing the operating system on my phone, I finally figured out that when Google asks “Would you like to improve your Android experience?” and you say “yes”, that is when they start the heavy data collection. Opt out and the phone still works, but even basic functionality such as storing your recent location searches in Google Maps goes away. Want to be able to go to a previous destination with one click? Give them all yer infos.

The Custom ROM world is a little odd. While there is nothing wrong with using software projects run by hobbyists, the level of support can be spotty at best. ROMs that were at one time heavily supported can quickly go quiet as maintainers move on to other interests or other handsets. For a long time I used OmniROM with a minimal install of Google Apps (and the “do not improve my Android experience” option), and it even worked with my Android Wear smartwatch from LG.

I really liked my smartwatch. It reminded me of when we started using two monitors with our desktops. Having things like notifications show up on my wrist was a lot easier to deal with than having to pull out and unlock my phone.

But all good things must come to an end. When Android Wear 2.0 came out they nerfed a lot of the functionality, requiring Google Assistant for even the most basic tasks (which of course requires the “improved” Android experience). I contacted LG, but it wasn’t possible to downgrade, so I stopped wearing the watch.

Things got a little better when I discovered the CopperheadOS project. This was an effort out of Canada to create a highly secure handset based on AOSP. It was not possible (or at least very difficult) to install Google Apps on the device, so I ended up using free software from the F-Droid repository. For those times when I really needed a proprietary app I carried a second phone running stock Android. Clunky, I know, but I made it work.

Then CopperheadOS somewhat imploded. The technical lead on the project grew unhappy with the direction it was going and left in dramatic fashion. I tried other ROMs after that, but grew frustrated because they didn’t “just work” like Copperhead did.

So I bought an iPhone X.

Apple had started to position themselves as a privacy-focused company. While they still don’t end-to-end encrypt most of the information stored in iCloud, I use iCloud minimally so it isn’t that important to me. It didn’t take me too long to get used to iOS again, and I got an Apple Watch Series 3 to replace my no longer used Android Wear watch.

This was about the time the GDPR was passed in the EU, and in order to meet the disclosure requirements Apple set up a website where you could request all of the personal data they had collected on you. Now I have been a modern Apple user since February of 2003, when I ordered a 12-inch PowerBook, so I expected it to be quite large.

It was 5MB, compressed.

The majority of that was a big JSON file with my health data collected from the watch. While I’m not happy that this data could be made available to third parties as it isn’t encrypted, it is a compromise I’m willing to make in order to have some health data. Now that Fitbit is owned by Google I feel way more secure with Apple holding on to it (plus I have no current plans to commit a murder).

The Apple Watch also supports contactless payments through Apple Pay. I was surprised at how addicted I became to the ease of paying for things with the watch. I was buying some medication for my dog when I noticed the clinic’s payment terminal took Apple Pay, and the vet came by and asked “Did you just Star Trek my cash register?”.

Heh.

For many months I pretty much got by with using my iPhone and Apple Watch while still using open source for everything else. Then in July of last year I was involved in a bad car accident.

In kind of an ironic twist, at the time of the accident I was back to carrying two phones. The GrapheneOS project was created by one of the founders of Copperhead and I was once again thinking of ditching my iPhone.

I spent 33 nights in the hospital, and during that time I grew very attached to my iPhone and Watch. Since I was in a C-collar, using a laptop was difficult, so I ended up interacting with the outside world via my phone. Since I slept off and on most of the day, it was nice to get alerts on my watch that I could examine at a glance and either deal with or ignore and go back to sleep.

This level of integration made me wonder how things worked now on OSX, so I started playing with a Macbook we had in the office. I liked it so much I bought an iMac, and now I’m pretty much neck deep back in the Apple ecosystem.

The first thing I discovered is that there is a ton of open source software available on OSX, and I mainly access it through the Homebrew project. For example, I recently needed the Linux “watch” command and it wasn’t available on OSX. I simply typed “brew install watch” and had it within seconds.
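If you have Homebrew installed, the whole dance looks something like this (the second command is just an illustration of standard watch usage, nothing Mac-specific):

    # install GNU watch from Homebrew
    brew install watch

    # then use it just like on Linux, e.g. re-run "date" every two seconds
    watch -n 2 date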

The next major thing that changed for me was how integrated all my devices became. I was used to my Linux desktop not interacting with my phone, or my Kodi media server being separate from my smartwatch. I didn’t realize how convenient a higher level of integration could be.

For example, for Christmas I got an Apple TV. Last night we were watching Netflix through that device, and when I picked up my iPhone I noticed that I could control the playback and see information such as time elapsed and time remaining for the program. This happened automatically, without the need for me to configure anything. Also, if I have to enter text on the Apple TV, I can use the iPhone as a keyboard.

I’ve even started to get into a little bit of home automation. I bought a “smart” outlet controller that works with HomeKit. Now I don’t have the “Internet of Things”; instead I have the “LAN of Things”, as I block Internet access for most of my IoT-type things such as cameras. Since the Apple TV acts as a hub, I can still remotely control my devices even though I can’t reach them via the Internet. All of the interaction occurs through my iCloud account, so I don’t even have to poke a hole in my firewall. I can control this device from any of my computers, my iPhone or even my watch.

It’s pretty cool.

It really sucks that the free and open source community missed the boat on mobile. The flagship mobile open source project is AOSP, and that is heavily controlled by Google. While some brave projects are producing Linux-based phones, they have a long way to go to catch up with the two main consumer options: Apple and Google. For my peace of mind I’m going with Apple.

There are a couple of things Tim Cook could do to ease my conscience about my use of Apple products. The first would be to give us the option of greater control over the software we install on iOS. I would like to be able to install software outside of the App Store without having to jailbreak my device. The second would be to enable end-to-end encryption on all the data stored in iCloud so that it can’t be accessed by any party other than the account holder. If they are truly serious about privacy it is the logical next step. I assume the pressure from governments to prevent that will be great, but no other company is in a better position to defy them and do it anyway.

A Low Bandwidth Camera Solution

My neighbor recently asked me for advice on security cameras. Lately when anyone asks me for tech recommendations, I just send them to The Wirecutter. However, in this case their suggestions won’t work because every option they recommend requires decent Internet access.

I live on a 21-acre farm 10 miles from the nearest gas station. I love where I live, but it does suffer from a lack of Internet access options. Basically, there is satellite, which is slow, expensive and high-latency, or CenturyLink DSL. I have the latter and get to bask in 10 Mbps down and about 750 Kbps up.

Envy me.

Unfortunately, with limited upstream all of The Wirecutter’s options are out. I found a bandwidth calculator that estimates a 1 megapixel camera encoding video using H.264 at 24 fps in low quality would still require nearly 2 Mbps and over 5 Mbps for high quality. Just not gonna happen with a 750 Kbps circuit. In addition, I have issues sending video to some third party server. Sure, it is easy but I’m not comfortable with it.
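To put rough numbers on it (the calculator’s exact model isn’t published, so treat these as ballpark figures), even a single low-quality stream would need more than twice my entire upstream:

    # how much of ONE ~2 Mbps low-quality stream fits in a 750 Kbps uplink?
    echo "scale=2; 750 / 2000" | bc    # prints .37, i.e. barely a third of one camera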

I get around this by using an application called Surveillance Station that is included on my Synology DS415+. Surveillance Station supports a huge number of camera manufacturers, and all of the information is stored locally, so there is no need to send anything to “the cloud”. There is also a mobile application called DS-cam that allows you to access your live cameras and recordings remotely. Due to the aforementioned bandwidth limitations it isn’t a great experience on DSL, but it can be useful. I use it, for example, to see if a package I’m expecting has been delivered.

DS-Cam Camera App

[DS-Cam showing the current view of my driveway. Note the recording underneath the main window where you can see the red truck of the HVAC repair people leaving]

Surveillance Station is not free software, and you only get two cameras included with the application. If you want more there is a pretty hefty license fee. Still, it was useful enough to me that I paid it in order to have two more cameras on my system (for a total of four).

I have the cameras set to record on motion, and it will store up to 10GB of video, per camera, on the Synology. For cameras that stay inside I’m partial to D-Link devices, but for outdoor cameras I use Wansview mainly due to price. Since these types of devices have been known to be easily hackable, they are only accessible on my LAN (the “LAN of things”) and as an added measure I set up firewall rules to block them from accessing the Internet unless I expressly allow it (mainly for software updates).
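Every firewall is different, so I won’t pretend this matches anyone else’s setup, but on a Linux-based router the idea looks roughly like the following (the interface name and camera address are made-up placeholders for illustration):

    # hypothetical example: keep a camera at 192.168.1.50 from reaching the Internet
    # "eth0" stands in for whatever your WAN interface actually is
    iptables -I FORWARD -s 192.168.1.50 -o eth0 -j DROP

    # remove the rule temporarily when the camera needs a firmware update
    iptables -D FORWARD -s 192.168.1.50 -o eth0 -j DROP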

To access Surveillance Station remotely, you can map the port on the Synology to an external port on your router and the communication can be encrypted using SSL. No matter how many cameras you have you only need to open the one port.

The main thing that prevented me from recommending my solution to my neighbor is that the DS415+ loaded with four drives was not inexpensive. But then it dawned on me that Synology has a number of smaller products that still support Surveillance Station. He could get one of those plus a camera like the Wansview for a little more than one of the cameras recommended by The Wirecutter.

The bargain basement choice would be the Synology DS118. It costs less than $200 and would still require a hard drive. I use WD Red drives, which run around $50 for 1TB and $100 for 4TB. Throw in a $50 camera and you are looking at about $300 for a one-camera solution.

However, if you are going to get a Synology I would strongly recommend at least a 2-bay device, like the DS218. It’s about $70 more than the DS118 and you also would need to get another hard drive, but now you will have a Network Attached Storage (NAS) solution in addition to security cameras. I’ve been extremely happy with my DS415+ and I use it to centralize all of my music, video and other data across all my devices. With two drives you can suffer the loss of one of them and still protect your data.

I won’t go into all of the features the Synology offers, but I’m happy with my purchase even though I use only a few of them.

It’s a shame that there isn’t an easy camera option that doesn’t involve sending your data off to a third party. Not only does that kind of solution not work for a large number of people, but you can also never be certain what the camera vendor is going to do with your video. This solution, while not cheap, combines the usefulness of a NAS with the value of security cameras, and is worth considering if you need such things.

The OpenNMS Group Turns 15

Fifteen years ago today, on September 1, 2004, David Hustace, Matt Brozowski and I formed The OpenNMS Group, Inc.

This was the fourth business entity to steward the OpenNMS Project, and would turn out to be the one with staying power.

The original OpenNMS Group office was in a single 10-foot by 15-foot room with just enough space for three desks. The landlord provided Internet access. By adopting the business plan of “spend less money than you earn” we managed to survive and grow. Now the company has its main office in Apex, NC, USA, as well as one in Ottawa, Ontario, Canada, with a satellite office in Germany.

The OpenNMS platform is being used to monitor some of the largest networks in existence, many with millions of devices. With the introduction of ALEC the team is bringing artificial intelligence and machine learning technologies to network monitoring to provide the highest level of visibility into the most complex environments.

OpenNMS has always been lucky to have a wonderful community of users, contributors and customers. With their support the next fifteen years should be as great as, if not better than, the first. I am humbled to have played a small part in its history.

Crash

It’s been even longer than usual since I’ve updated this site. I’m missing a ton of stuff, including the last day of Dev-Jam as well as my trip to this year’s OSCON conference in Portland. I wouldn’t be surprised if I lose one if not all of my three readers.

But I do have an excuse. This happened.

Crashed F150 Pickup Truck

On Friday, July 26th, I left my farm in Chatham County, North Carolina, to head to town. I needed to get the oil changed in the F150 and I was planning on meeting some friends for lunch.

About three miles from my house, another driver crossed the centerline on Hwy 87 and hit my truck nearly head-on. I suffered a broken rib, a fractured C2 vertebra, and a fractured right big toe, but the major damage was that my left ankle was shattered.

I’ve spent the last 33 days at the UNC Medical Center in Chapel Hill, where I underwent two surgeries and was taken care of by some amazing staff.

I’m home now and plan to return to work (remotely) next week. I still have many months to go before I can approach normality, but a journey of ten thousand miles begins with a single step.

Thanks for your kind thoughts. One good thing that has come out of this is that I’ve spent the last 17 years trying to build OpenNMS into something that can thrive even without me, and the team has been amazing in my absence. I can’t wait to be at full strength again.

2019 Dev-Jam – Day 4

The next-to-last day of Dev-Jam was pretty much like the one before it, except now it was quite clear that Dev-Jam was coming to a close (sniff).

I actually managed to get some of the work done that I wanted to do this week, namely to start working on the next version of my OpenNMS 101 video series. A lot changed in Horizon 24 and now the videos are a little off (especially when it comes to alarms) and I want to fix that soon.

2019 Dev-Jam: Group of People Hacking Away

I did make one bad decision when I purchased take-away sushi from the Union, but I was lucky that I got over it quickly (grin).

2019 Dev-Jam: Jesse Talking About ALEC

It’s so nice to be able to break out into little groups and share what is going on in OpenNMS. Jesse gave an in-depth talk on ALEC (and I’ll be presenting it at this year’s All Things Open conference).

It wasn’t all work, though.

2019 Dev-Jam: Table with Snacks and Ulf

A group of people had gone to the Mall of America on Sunday, and Markus bought a Rick and Morty card game that seemed pretty popular. Parasites!

For dinner I ordered some delicious pizza from Punch as many people wanted to stay in and finish up their projects in time for tomorrow’s “Show and Tell”.

It’s hard to believe Dev-Jam is almost over.

2019 Dev-Jam – Day 3

Not much to add on Day 3 of Dev-Jam. By now the group has settled into a routine and there’s lots of hacking on OpenNMS.

As part of my “cruise director” role, Mike and I ran out for more snacks.

2019 Dev-Jam: Table with Snacks and Ulf

On the way we stopped by the Science Museum of Minnesota to pick up a hoodie for Dustin. As fans of Stranger Things we thought we should get our Dustin the same hoodie worn by Dustin in the show. The one in the show was apparently an actual hoodie sold by the museum in the 1980s, but it was so popular they brought it back.

2019 Dev-Jam: Dustin and Dustin in Brontosaurus Hoodie

While not exactly the “Upside Down”, in the evening the gang descended on Up-Down, a barcade located a few miles away. Jessica organized the trip and folks seemed to have a great time.

2019 Dev-Jam: Selfie of Folks at Up-Down.

The combination bar and arcade features vintage video games

2019 Dev-Jam: People Playing Video Games at Up-Down.

as well as pinball machines

2019 Dev-Jam: Selfie of Folks at Up-Down.

Of course, there was also a bar

2019 Dev-Jam: People at the Bar at Up-Down.

Good times.

2019 Dev-Jam – Day 2

While the OpenNMS team does a pretty good job working remotely, it is so nice to be able to work together on occasion. Here is an example.

I wanted to explore the current status of the OpenNMS Selenium monitor. My conclusion was that while this monitor can probably be made to work, it needs to be deprecated and probably shouldn’t be used.

I started off on the wiki page, and when I didn’t really understand it I just looked at the page’s history. I saw that it was last updated in 2017 by Marcel, and Marcel happened to be just across the room from me. After talking to him for a while, I understood things much better and then made the decision to deprecate it.

The idea was that one could take the Selenium IDE, record a session and then export that to a JUnit test. Then that output would be minimally modified and added to OpenNMS so that it could periodically run the test.

The main issue is that the raw Selenium test *requires* Firefox, and Firefox requires an entire graphics stack, i.e. Xorg. Most servers don’t have that for a number of good reasons, and if you are trying to run Selenium tests against a large number of sites the memory requirements could become prohibitive.

An attempt to address this was made using PhantomJS, a scriptable headless browser that does not require a graphical interface. Unfortunately, it has not been maintained since March of 2018.

We’ve made a note of this in an internal OpenNMS issue. Moving forward, the most promising option looks to be “headless Chrome”, but neither OpenNMS nor Selenium supports that at the moment.
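For what it’s worth, headless Chrome itself is easy to play with from a shell and needs no X server at all; something like the following (assuming a Chrome or Chromium binary is installed, and the binary name varies by distribution) will render a page and dump its DOM:

    # render a page with no display attached and print the first few lines of the DOM
    chromium --headless --disable-gpu --dump-dom https://www.opennms.org/ | head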

We still have the Page Sequence Monitor, which is very efficient but can be difficult to set up.

Playing with that took up most of my morning. It was hard staying inside because it was a beautiful day in Minneapolis.

2019 Dev-Jam: Picture of Downtown Minneapolis from UMN

Most of my afternoon was spent working with OpenNMS customers (work doesn’t stop just because it is Dev-Jam) but I did wander around to see what other folks were doing.

2019 Dev-Jam: Jesse White with VR headset

Jesse was playing with a VR headset. The OpenNMS AI/Machine Learning module ALEC can create a visualization of the network, and he wrote a tool that lets you move through it in virtual reality (along with other people using other VR headsets). Not sure how useful it would be on a day to day basis, but it is pretty cool.

That evening most of us walked down the street to a pretty amazing Chinese restaurant. I always like bonding over food, and since we had discovered this place last year we were eager to return. I think the “bonding” continued after the meal at a bar across the street, but I ended up calling it a day.

2019 Dev-Jam: People at a table at a Chinese restaurant

2019 Dev-Jam: People at a table at a Chinese restaurant

2019 Dev-Jam – Day 1

Dev-Jam officially got started Monday morning at 10:00.

I usually kick off the week with a welcome and some housekeeping information, and then I turn it over to Jesse White, our project CTO. We do a roundtable introduction and then folks break off into groups and start working on the projects they find interesting.

This year we did something a little different. The development team scheduled a series of talks about the various things that have been added since the last Dev-Jam, and I spent most of the day listening to them and learning a lot of details about the amazing platform that is OpenNMS. While we had some technical difficulties, most of these presentations were recorded and I’ll add links to the videos once they are available.

2019 Dev-Jam: Graph of Main Projects Over the Last Year

Jesse started with an overview of the main development projects over the last year. Sentinel is a project to use the Kafka messaging bus to distribute OpenNMS functionality over multiple instances. While it is only implemented for telemetry data at the moment (flows and metrics), the goal is to be able to distribute all of the functionality, such as service assurance polling and data collection, across multiple machines for virtually unlimited scalability.

After the Sentinel work, focus was on both the OpenNMS Integration API (OIA) and the Architecture for Learning Enabled Correlation (ALEC).

The OIA is a Java API to make it easier to add functionality to OpenNMS. While it is used internally, the goal is to make it easier for third parties to integrate with the platform. ALEC is a framework for adding AI and machine learning functions to OpenNMS. It currently supports two methods for the correlation of alarms into situations: DBScan and TensorFlow, but is designed to allow for others to be added.

The current development focus is on the next version of Drift. Drift is the feature that does flow collection, and there are a number of improvements being worked on for “version 2”.

2019 Dev-Jam: Title Slide for the Contributing to OpenNMS talk

Markus von Rüden gave the next talk on contributing to OpenNMS. He covered a number of topics including dealing with our git repository, pull requests, test driven development and our continuous integration systems.

2019 Dev-Jam: Title Slide for the Karaf/OSGi talk

Matt Brooks presented an overview on how to leverage Karaf to add functionality to OpenNMS. Karaf is the OSGi container used by OpenNMS to manage features, and Matt used a simple example to show the process for adding functionality to the platform.

2019 Dev-Jam: Title Slide for the OIA talk

Building on this was a talk by Chandra Gorantla about using the OIA, with an example of creating a trouble ticketing integration. OpenNMS has had a ticketing API for some time, but this talk leveraged the improvements added by the new API to make the process easier.

2019 Dev-Jam: Title Slide for the ALEC talk

Following this was a talk by David Smith on ALEC. He demonstrated how to add a simple time-based correlation to OpenNMS which covered a lot of the different pieces implemented by the architecture, including things like feedback.

That ended the development overview portion of the day, but there were two more talks, on Docker and Kubernetes.

2019 Dev-Jam: Slide showing Useful Docker Commands for OpenNMS

Ronny Trommer gave a short overview of running OpenNMS in Docker, covering a lot of information about how to deal with the non-immutable (mutable?) aspects of the platform such as configuration.

2019 Dev-Jam: Kubernetes Diagram

This was followed by an in-depth talk by Alejandro Galue on Kubernetes, running OpenNMS using Kubernetes and how OpenNMS can be used to monitor services running in Kubernetes. While Prometheus is the main application people implement for monitoring Kubernetes, it is very temporal and OpenNMS can augment a lot of that information, especially at the services level.

These presentations took up most of the day. Since it is hard to find places where 30 people can eat together, we have a tradition of getting catering from Brasa, and we did that for Monday night’s meal.

2019 Dev-Jam: Table Filled with Food from Brasa

Jessica Hustace, who did the majority of the planning for Dev-Jam, handed out this year’s main swag gift: OpenNMS jackets.

2019 Dev-Jam: OpenNMS logo jacket

Yup, I make this look good.

2019 Dev-Jam – Day 0

For the fourteenth time in fifteen years, a group of core community members and power users are getting together for our annual OpenNMS Developers Conference: Dev-Jam.

This is one of my favorite times of the year, probably second only to Thanksgiving. While we do a good job of working as a distributed team, there is nothing like getting together face-to-face once in a while.

We’ve tried a number of venues including my house, Georgia Tech and Concordia University in Montréal, but we keep coming back to Yudof Hall on the University of Minnesota campus in Minneapolis. It just works out really well for us and after coming here so many times the whole process is pretty comfortable.

My role in Dev-Jam is pretty much just the “cruise director”. As is normal, other people do all the heavy lifting. I did go on a food and drink run which included getting “Hello Kitty” seaweed snacks.

2019 Dev-Jam: Hello Kitty Seaweed Snacks

Yudof Hall is a dorm. The rooms are pretty nice for dorm rooms and include a small refrigerator, a two-burner stove, furniture and a sink. You share a bathroom with one other person from the conference. On the ground floor there is a large room called the Club Room. On one side is a kitchen with tables and chairs. On the other side is a large TV/monitor and couches, and in the middle we set up tables. There is a large brick patio that overlooks the Mississippi River.

2019 Dev-Jam: Yudof Hall Club Room

The network access tends to be stellar, and with the Student Union just across the street people can easily take a break to get food.

We tend to eat dinner as a group, and traditionally the kickoff meal is held at Town Hall Brewery across the river.

2019 Dev-Jam: UMN Bridge Over the River

It was a pretty rainy day but it stopped enough for most of us to walk over the bridge to the restaurant. You could feel the excitement for the week start to build as old friends reunited and new friends were made.

2019 Dev-Jam: Town Hall Brewery

When we were setting up the Club Room tables, we found a whiteboard which is sure to be useful. I liked the fact that someone had written “Welcome Home” on it. Although I don’t live here, getting together with these people sure feels like coming home.

2019 Dev-Jam: Welcome Home on Whiteboard