And the Winner Is …

Okay, for many, many months now I have been agonizing over getting a new phone. My trusty LG Fusic on Sprint (now for sale on eBay) has been with me for over 3 years, and with the advent of all the new smartphone options available I wanted something a little more versatile.

I live out in the woods of North Carolina, and wireless phone coverage is iffy at best. Sprint seems to have the best service (which is why I’ve used them since 1998), but it is only the best because the other major players have next to no coverage.

However, over the last year everyone but AT&T introduced a femtocell product. This is a small device that plugs into your broadband internet connection and, instead of routing your calls through a cell tower, acts as a bridge between your phone and the carrier’s network. It’s like having a cell tower in your house. This has opened up some options for me.

Unlike many people, I was pretty happy with Sprint. They have a good network and as long as you don’t ever, ever, have to talk with customer service you’ll probably be happy with them. The downside is that they rarely have cool phones.

Verizon has the best network overall, while AT&T is oversubscribed and T-Mobile is a newcomer to the North Carolina market, but those last two run on GSM networks, which means your phone will work practically anywhere. I travel enough overseas that this is a consideration.

In my hunt for a new phone I narrowed it down to the following choices:

  • Sprint and the Palm Pre: The Pre was the first exciting phone to hit the Sprint network in years. I seriously considered it until I realized that it was pretty much dead on arrival. The issues they experienced when trying to create a developer community didn’t help much either.
  • Sprint and the HTC Hero: This is the best phone Sprint has right now. It’s designed well, exciting and powered by Android. The main issue holding me back from this phone was the lack of synchronization between the desktop and the phone for things like contacts. Sure, you can sync through Google, but as much as I like Google I don’t want to host that kind of information on a third-party server. There may be an app to address my sync issues, but the Android Market website is so weak that you can’t browse all of the apps. It says “For a comprehensive, up-to-date list of the thousands of titles that are available, you will need to view Android Market on a handset.” This isn’t possible if I don’t have a handset, and I won’t get a handset unless I know that basic synchronization is available. (sigh) Kris Buytaert seems to like his, though.
  • Verizon and the Blackberry Storm: A friend of mine has one of these and loves it. The first generation came without Wi-Fi, but that has been corrected. It also supports both CDMA and GSM, so you can use it overseas. The downside is that it is a proprietary platform. Not a show stopper, but a negative.
  • AT&T and the Apple iPhone: I use a Mac as my desktop and I have an iPod Touch, so the iPhone is definitely a contender. Although it is a closed platform, we do develop some apps on it in house, so I am a little bit familiar with the hoops you have to jump through. It is GSM, so I could use it anywhere, and almost everyone else at the office has one and likes it. The problem is that there is almost no service where I live.
  • Verizon and the Droid: This is the Android phone I’d been waiting for. A full-featured Android phone on a great network. But the more I read about it, the more disappointed I became. I was told that the version offered in the US would not support GSM. And when I examined Verizon’s pricing structure, it seemed like they nickel and dime you for everything. Plus the fact that their “unlimited” plan is limited to 5GB a month, with additional traffic costing $50 per GB, made me hesitate. I mean, I think 5GB is a lot, but I really don’t know, and I’d hate to get hit with that fee.

So, the day before the Droid launch I was a little disappointed. While the lack of GSM is not a deal-breaker, the fact that I wasn’t sure I could sync my contacts, coupled with the exceedingly high prices Verizon charges, had me thinking about waiting a few months more.

But then I found out that AT&T’s femtocell offering (the 3G Microcell) had just become available in my area. So I am now the owner of a 32GB iPhone 3GS, replacing my iPod Touch and Fusic.

The Microcell meant that I could get AT&T service at my house. At $150, it was a full $100 less than the Verizon solution, and the plan I chose included another $100 rebate, making it even cheaper than Sprint’s $100 Airave (which requires an additional $5 per month and still uses plan minutes). With AT&T I got the lowest-minute plan at 700 minutes per month, but for $20/month extra I get unlimited calls on the Microcell. With rollover minutes I don’t think I’ll ever run out, since I make a lot of calls from home.

The Microcell installed pretty easily. It requires a GPS signal so that AT&T knows you aren’t using it overseas (where the roaming revenue lives), which means you have to place it near a window, but since I have skylights in my living room (as well as an Ethernet switch) installation was simple. It took about 20 minutes to become active.

The Microcell is Cisco-branded, and I thought it was interesting that it included a disk full of copies of the GPL and other licenses (but no source code) for a number of common GNU/Linux packages. I wonder how hackable the Microcell will be? An nmap scan shows that it only responds to ping, so I’m not sure there is a way to get into it over the network.
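
For the curious, here is roughly the kind of check I did, written up as a small Python sketch rather than the actual nmap invocation. The address 192.168.1.50 and the port list are hypothetical placeholders – substitute whatever address your router hands the Microcell.

```python
#!/usr/bin/env python3
# A minimal sketch, not the exact scan I ran: see whether the device
# answers a ping and whether any common TCP ports accept a connection.
import socket
import subprocess

HOST = "192.168.1.50"             # hypothetical LAN address of the Microcell
PORTS = [22, 23, 80, 443, 8080]   # ports you'd expect a hackable box to expose

# ICMP echo via the system ping command (raw ICMP sockets would need root)
alive = subprocess.call(["ping", "-c", "1", HOST],
                        stdout=subprocess.DEVNULL,
                        stderr=subprocess.DEVNULL) == 0
print("responds to ping:", alive)

for port in PORTS:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(2)
    try:
        s.connect((HOST, port))
        print("port %d: open" % port)
    except OSError:
        # covers timeouts and connection-refused alike
        print("port %d: closed or filtered" % port)
    finally:
        s.close()
```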

There are a few downsides. One was that my Sony Ericsson K610i phone (a gift from Alex Hoogerhuis), while 3G, wouldn’t connect to the Microcell (although it connected to AT&T’s network just fine). I don’t know if it was because the phone is unlocked or just because AT&T’s 3G isn’t necessarily the rest of the world’s 3G. So I ended up getting my wife a Sony Ericsson W518a instead, and it connected just fine.

The second was that Embarq had their quarterly DSL outage just after I got everything set up. Without broadband the Microcell is useless, so I was back to one bar (if that). Everything was back up in about 2 hours so I shouldn’t have to worry about it again for another few months.

As for the iPhone – I like it. I knew pretty much what I was getting into since I had a Touch. Since it now supports the Bluetooth A2DP profile I can stream music over FM using the MotoRokr T505 in my car. The voice control is pretty cool (just say “Play Songs by Spoon” and voilà) but it had a lot of trouble with voice dialing. I think it had something to do with the 1300+ contacts I had in my address book. An hour or so spent pruning it down to 300 (I had several dead people listed, for example, plus a number of ex-employers’ personnel lists) seems to have helped. The camera is crappier than I thought it would be, but the reviews of the Droid camera aren’t much better.

I expect my next phone to be powered by Android, but that is at least 2 years away. A friend of mine just bought a Droid and he’s also an iPhone user, so I look forward to his review.

Anyway, being on the ‘net almost everywhere I go is a little addictive, and I’m on the lookout for must-have iPhone apps. Please send along any suggestions.

2009 Open Source Monitoring Conference

I’m in Nürnberg, Germany this week to speak at a Nagios conference, of all things. I was a little disappointed, however, to learn that Nürnberg is nowhere near the Nürburgring, but so far I am happy I came.

The conference is sold out, and the people I’ve met so far are pretty cool (although I haven’t met all of the 260 in attendance – yet). It’s an interesting ecosystem that has been built up around Nagios, and in many ways it is a microcosm for open source as a whole.

OpenNMS and Nagios are about the same age, but Nagios is much better known, in part because it is written in C and has been available in a number of distributions for years. OpenNMS, being written in Java, has always faced some resistance from users who just don’t like Java, and until the recent advent of the OpenJDK there has been a hurdle to getting something like OpenNMS included in major distributions.

Nagios and OpenNMS share some overlapping features, but from the beginning OpenNMS was aimed at large scale enterprises and was designed with that in mind. Nagios appealed to users at a smaller scale, who liked the ability to add almost any check they could think of to the system very quickly.

But there are more differences. Ethan Galstad, the author of Nagios, always kept pretty close control of the core software. As I write this, OpenNMS has over 40 people with commit access to the code, while Nagios has 9 (up from 3 just a few months ago).

This has caused some tension and has resulted in a number of new projects based on Nagios and at least one outright fork. I can count at least two here at the conference: Opsview and Icinga, and while Ethan used to attend this show he is not here this year.

Maybe it is a coincidence, but last night he announced Nagios XI. This appears to be an “open core” version of his monitoring product, with the old Nagios now being called “Nagios Core OSS”.

As you can imagine, I’m not excited to see more open core software. Groundwork has raised nearly US$30 million and continues to fail with its open core fork of Nagios, and while I consider Ethan much more capable I think it is a bad move. The popularity of Nagios was driven by its community, and it seems obvious that this community is not happy. Driving another wedge between them by splitting development into open and closed versions will not help.

But this is the beauty of open source software. It doesn’t matter what I think. The users get to make the decisions, and Nagios has a great group of users.

Perhaps I can get a few of them interested in OpenNMS.

Dual Licensing and the Beauty of Positano

Note: As usual, I have over-analyzed an issue that normal people wouldn’t spend two minutes thinking about. I apologize to my three readers in advance, but unlike more famous commentators on open source I often find that it is not possible to explore the nuances of issues in our community with a couple of short sound bites.

I have been in Italy for the last week scouting locations for our Italian headquarters. I’m thinking Positano would be a great place.

I was also able to take a few days off for a short holiday, but I couldn’t stay off the Internet for long. As I was skimming through my news feeds, I came across a post by Brian Aker about dual licensing open source software, and it stuck in my mind. It wasn’t necessarily the topic but it was the, well, harshness of his post. It was full of absolutes and hyperbole which is very unusual coming from Brian, and it must have struck a chord with others as it managed to appear on Slashdot as well.

For a little background: Brian was one of the early contributors to MySQL, and MySQL pioneered the idea of “dual licensing” open source software. The legal basis for open source is copyright law (distributing software is treated in the same fashion as distributing “copies” of other works, such as books and music). The holder of the copyright is able to set the terms under which copies can be made. MySQL published its code under the GNU General Public License (GPL), which allows for derivative works to be created and distributed as long as the requirements of the license are met. By its very nature the GPL makes it difficult to commercialize the code itself, since anyone who distributes it is required to include the source.

However, MySQL retained 100% of the copyright to its code, and so it was also able to offer it under a license that was different than the GPL. Thus if a company wanted to embed MySQL into its product but didn’t want to be subject to the GPL, they could purchase the rights to use the software under a proprietary license. This enabled MySQL to create a revenue stream from the code, which allowed it to hire more people, advertise, etc.

In my very first post on the subject of open source licensing and business models I mentioned this model as being an acceptable way to fund open source development. The main caveat was that there may be some resistance from the community to contribute code if they had to transfer the copyright to MySQL, but it didn’t appear that the MySQL community cared.

Well, times change. It is my understanding that as MySQL grew they started to offer code to their commercial clients ahead of the community, and they may have even adopted more of an “open core” model where some of the code was never offered under an open source license (I’m not 100% sure of the details). Eventually, MySQL was bought by Sun and now Sun is in the process of being purchased by Oracle.

And this is where the trouble begins. MySQL was one of the first, if not the first, open source database projects that aimed to provide an alternative to commercial products such as those sold by Oracle, so one has to wonder what Oracle will do with MySQL. It is doubtful that a company like Oracle will focus on undermining its main product line with an open source alternative. I’m not saying that they will kill off the product as I seriously doubt that, but the motivations behind it have changed (see the 451 Group for a better timeline on this).

This has resulted in some turmoil in the MySQL community and the announcement of a number of code forks of MySQL. At issue is that since Sun, and possibly now Oracle, holds the copyright to the code, the GPL places limits on anyone wanting to commercialize those forks or even create free alternatives. Brian Aker works on a fork of MySQL called Drizzle, and I believe that his involvement in, and possible frustration with, the whole copyright issue led to his post on “The Peculiar Institution of Dual Licensing”.

He starts off with a quote from Richard Stallman that ends:

With excellent management and considerable trust within the user community, MySQL became the gold standard for web based FLOSS database applications.

I think the key phrase here is “considerable trust”. In the case of MySQL, there must have been some considerable trust within the community to contribute code since it became the property of MySQL. More on this in a moment.

To move on to Brian’s comments, the first sentence that struck me as odd was:

Dual Licensing is nothing more then the creation of yet more software, where the end result is the creation of not more open source software, but the creation of yet more closed source software.

I had to think about this one. The only way I could make it work was that if open source code is available under a dual license, then that code can become part of closed source software through the purchase of a commercial license. That code is by definition closed source, since there would be no reason to purchase a license if it was open source code. But my guess is that any real open source software company would use the money raised from such licenses to focus on the main open source product, thus increasing the amount of open source software. Also, if the “Bubba’s Inventory ‘n Stuff” application chooses to license and embed MySQL, its distribution will probably be more limited than MySQL itself, so the ratio of open to closed software is relatively unchanged and may even shift in favor of open source.

Brian continues:

The GPL’s approach is to provide a stick or carrot. If you are open source, then you don’t pay, assuming of you are “the right” sort of open source. If you are close source or pick a license which is not compatible with the GPL you are forced into paying for use of a commercial license. When you “pay” for open source the freedom that was originally offered to the end user is removed.

I don’t understand this talk of “‘the right’ sort” of open source. It’s like the guy who compared the GPL to DRM. The creators of open source software have a wide variety of licenses they can use, from the very permissive Apache and BSD licenses to the more restrictive GPL. Brian seems to be making the case that some are better than the others. He used the word “forced” – who is “forced” to use open source software? If you don’t like the license, don’t use the code. It’s not about forcing people to use software but offering a choice. Dual licensing actually increases freedom – you have the freedom to use the software at no cost under the open source license, and you also have the freedom to purchase the right to use it outside of such a license.

Recently I’ve been giving a talk about what it is like to run a business based on open source software. While the presentation is constantly evolving, it starts off with the two main options for creating powerful open source software. One is to form a “foundation,” preferably non-profit, that is funded by companies with a vested interest in seeing a certain type of application created. This has resulted in a number of amazing projects such as Firefox and Apache. A company like IBM may want to have an alternative to Microsoft Office without having to own such a product line, so they fund OpenOffice. Rackspace may need the benefits of a project like Cassandra, and so they keep people on the payroll to make it a reality. In many cases these projects are published with very permissive licenses so that the commercial entities behind them can take full advantage of the software without many restrictions like those imposed by the GPL. Since permissive licenses allow easy integration into closed source software, I’m not sure if Brian would consider this the creation of more of the same, but in any case it is hard to deny that certain projects have thrived under the foundation model.

The other way to produce large scale open source software is to create a company to support the project, as is the model with OpenNMS and The OpenNMS Group. In this case funding the effort is more problematic. Without a “patron” supplying the resources like in the foundation model, a business has to look more seriously at the bottom line. It has to build up a customer base and a brand, and in the case of a bootstrapped company like OpenNMS this takes time. Despite that, we have created some really amazing code, but without a more restrictive license like the GPL someone with money could take our work and quickly commercialize it. It is important that the ownership of OpenNMS is controlled by a single group to enforce the license and to prevent this. Brian has his own comments about ownership:

At the heart of dual licensing is “ownership rights”, which inevitably come into conflict with the nature of open source. Open Source projects that preserve the ability to do dual license come into conflict with the developers who contribute the code. For the project to continue it must ask the original developer to give up their rights to the code via copyright assignment (there is some debate on whether copyright can be held in joint, but this is often disputed by lawyers). Thus dual licensing forces any developer who wishes to contribute into a position of either giving up their rights and allowing their work to end up in commercial software, or creating a fork of the software with their changes. In essence it creates monopolies which can only be broken via forking the software. Forking software over small changes is for the most part unviable because of the cost of keeping a fork of the software up to date, but it is not impossible.

I really dislike this paragraph since it tries to paint in shades of black and white a subject that is anything but. Since the entire concept of open source software is based on copyright law, by its very definition it requires an “owner” to enforce the copyright. At OpenNMS we found out the hard way that when the copyright is held by multiple parties it makes it very difficult to enforce any license.

I also don’t think that it is “inevitable” that these rights will cause conflict. Brian is probably experiencing some friction between Drizzle and MySQL, but I don’t think that one example makes conflict “inevitable”.

He then brings up my original criticism of MySQL’s contributor agreement, which required that a developer assign all rights to any contribution to MySQL. Think about it – suppose someone came up with a cool algorithm that improved performance and contributed it to MySQL, assigning all rights. This would not only give MySQL complete ownership of the work product, but it would prevent the developer from using his own work in another project.

This is why at OpenNMS we adopted the Sun Contributor Agreement, which allows for multiple copyright holders. Contributors assign copyright to the maintainers of the project, which allows them to more easily protect it from abuse (as well as dual license the code if they so choose), but they retain the copyright to their work. For example, Bobby Krupczak contributed a data collector for XMP, a new network management protocol he is developing. He assigned copyright for that work to OpenNMS so we could use it in our project, but he retained the rights to his work so he could also contribute that code to another project or create one of his own. The contributor loses no rights, yet the project can maintain some form of control over the whole work. I think this is an awesome concept, but Brian states that “this is often disputed by lawyers”. He doesn’t link to a single example, and while I am certain there are one or two lawyers who would be willing to raise an argument against it, in my experience it isn’t a concept that is “often disputed”. It reminds me of the fear, uncertainty and doubt spread around the GPL and its enforceability, but as time has shown the GPL does stand up in court, and I think the idea of dual copyright will as well.

In addition, dual licensing provides an option for companies that would like to use open source software, and perhaps even contribute to it, but for a variety of reasons cannot. One recent example would be Apple’s use of ZFS. It was obvious that they were considering support for ZFS, but for some reason they changed their mind, and the rumours point to licensing issues. It would have been in Apple’s best interests to contribute back to the project, which would have been a win for ZFS, but now that option for improvement is gone.

I think that Brian is reacting more to a problem specific to MySQL than to open source as a whole. His frustration is evident in such loaded phrases as “take the rights from others” and “lack of imagination”. I don’t believe anyone’s rights have been taken away, except for those that freely gave them to MySQL. If there is some assumed right that open source code can be used by anyone at any time for any reason, then we are no longer talking about open source. The creators of open source software have always been able to protect both their work and their rights through a variety of means, but there has never been a real or implied right granted to end users that wasn’t specified in the license. It seems Brian wants to move the slider from “open” to “free” but I don’t think that is sustainable in many cases.

What I do think is sustainable is a 100% open source project that also has a dual license option. It does involve a level of trust between the copyright holder and the community and the implied covenant that profits from such licenses will be fed back into the project, but it is definitely doable. In this instance Stallman got it right.

Win.

Secretary of State FAIL

One of the many things I do at OpenNMS is handle the finances. This often requires me to interact with the Federal and State governments. To say that the system could be simplified is an understatement.

For example, take this postcard I received in the mail. It was for a corporation that was dissolved in 2003, but for some reason the office of the North Carolina Secretary of State (with whom I dissolved the corporation) still thinks that it is, in some form, active.

As you can see, the card tells one not to contact the office of the Secretary, but instead to call a particular number to talk with the Department of Revenue.

Go ahead, dial it. I dare you.

It appears that the Department of Revenue didn’t pay their phone bill, and the number has been disconnected.

(sigh)

Sometimes movies like Brazil get it right.

Mercedes Profitable

A few weeks ago I commented on a Paul Graham post about being “Ramen Profitable” with a reply that at OpenNMS we were “Sushi Profitable”. I also speculated that one day we would be “Mercedes Profitable”.

Well, I wouldn’t say we were there yet, but I did buy a Mercedes.

To be honest, I didn’t pay all that much for it (I bought it from friends), and a C230 isn’t exactly the best example of the marque (as this “Overheard in New York” entry points out), but I like it.

I can’t see spending a lot of money on a car. I mean, I could pay myself a bonus or get a company car, but we try to plow all of our profits into making the company stronger. One way to do this is to hire more people and, as we are having a good year, I am happy to announce that Jason Aras has decided to join our team.

Jason has been involved with OpenNMS for years now. He is a member of the Order of the Green Polo and can often be found on the IRC channel as “fastjay”. He even came out to Germany, on his own nickel, to present at our first OpenNMS Users Conference. We are very excited to have him on board.

I am often asked how one gets a job working on OpenNMS, and the best way is simply to show us what you can do by getting involved. It may sound a little self-serving, but we’ve been successful by attracting people who really enjoy what they do. It is the second part of our mission statement: Have Fun. If you wouldn’t spend some of your time on the project “for fun” you probably wouldn’t enjoy it as a job, either.

When I was in college the big thing was to co-op. The co-op program let you take time off from school to work, preferably in a field you were hoping to enter, both to earn some money and to see if you liked it. I think open source provides an even better opportunity, as it allows people to really hone their skills, both in things like programming as well as working in a team, through the simple act of showing up. It can provide experience that one can put on a resumé which shows initiative, talent and the ability to work remotely and with others.

And who knows, there might even be a paying job in it.

Roadmaps

The roads in San Antonio are something to behold. Immense concrete leviathans that snake above the ground, surrounded on both sides by access roads that are superhighways unto themselves. My GPS goes crazy because it is simply not accurate enough to tell if I’m on the frontage road or on the highway above.

Think I’m kidding? In downtown on I-10 they actually have signs indicating “upper level” and “lower level”. That’s right, they built an interstate on top of the first interstate. I almost wrecked looking at it, as it is quite the engineering feat. The upper deck is cantilevered way out from the supports. It’s amazing.

I was downtown to meet up with Eric Evans. Eric is also amazing: Debian maintainer, OGP Emeritus and a self-taught developer who has probably forgotten more about code than I’ll ever know.

I met Eric back in April of 2002 when I first started doing work for Rackspace. He is now working on the Rackspace Cloud, where Rackspace has dedicated full time resources to the Apache Cassandra Project (I knew I loved these guys for a reason).

During the meal the topic of roadmaps came up. Someone on the Cassandra IRC channel had asked about a roadmap and Jonathan Ellis replied with a list of things they had targeted for the next few releases. This didn’t sit well with the questioner. He wanted something a little more solid, with firm dates, etc.

The answer was: that doesn’t exist. The development team is dedicated to moving Cassandra forward, but needs and priorities are very fluid. They are more focused on doing new, useful, and interesting work and not in meeting some sort of artificial deadline.

[Note: this is my interpretation of the story and may not accurately reflect the thoughts of Eric, Jonathan or others within the Cassandra project.]

This reminded me a lot of how OpenNMS is developed.

We always have at least two releases of OpenNMS going at any one time. The stable, or production, release experiences as few changes as possible and most new code is designed to address bugs. The second number in the version is always even: 1.0, 1.2, 1.6, etc.

Then there is the unstable, or development, release. The term “unstable” has nothing to do with the robustness of the code; rather, it means that whole chunks might change from release to release, which in turn can introduce bugs (although the main goal is that any outstanding bugs in an unstable release are minor or cosmetic). The second number in the version is always odd: 1.1, 1.3, 1.5, etc.

We have various feature targets for each release. For example, 1.8 has two main features: a new provisioning system and access control lists. Thus 1.8 will be done when those features are complete. When we believe that the main coding has been done for 1.8, version 1.7.90 will be released. Chances are there will be a 1.7.91, 1.7.92, etc. Once we are happy that it is worthy of production use, it will be christened 1.8.
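
To make that even/odd convention concrete, here is a tiny sketch – my own illustration, not anything that ships with OpenNMS – that classifies a version string the way the scheme above describes, treating a third component of 90 or higher as a release candidate for the next stable version:

```python
def classify_release(version):
    """Classify an OpenNMS-style version string by its minor number.

    Even minor numbers (1.0, 1.2, 1.6) are stable/production releases;
    odd minor numbers (1.1, 1.3, 1.7) are unstable/development releases.
    A third component of 90 or above (e.g. 1.7.90) is read here as a
    release candidate for the next stable version.
    """
    parts = [int(p) for p in version.split(".")]
    major, minor = parts[0], parts[1]
    if minor % 2 == 0:
        return "stable"
    if len(parts) > 2 and parts[2] >= 90:
        return "release candidate for %d.%d" % (major, minor + 1)
    return "unstable"

# Hypothetical version strings, just to show the mapping
for v in ("1.6.5", "1.7.3", "1.7.90", "1.8.0"):
    print(v, "->", classify_release(v))
```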

There is one more feature coming in 1.8: documentation. OpenNMS is really an amazing piece of software worthy of the name “network management application platform” but more than half of the features are undocumented or at least underdocumented. This will change.

So, when will 1.8 be released? The smartass answer is “when it is done” but the truth is I don’t know.

There is one other factor in the mix, and that is the commercial company behind OpenNMS. While we have a very active community, most of the code comes from people who get paid to work on it, simply because they can work on it full time. With our business plan of “spend less than you earn” we have to focus on the bottom line, and a good portion of that income is the result of custom programming projects. Right now we are in the middle of two big projects with two more waiting in the wings, and thus we are pretty much at capacity when it comes to working on the main features in 1.8. This doesn’t mean we’re not working on them at all, but other things take precedence so it goes more slowly than I would like.

I’m sure there are people out there who will advise me that this is the wrong way to go about it. You need a firm roadmap, they’ll say, and they may be right. But I have to rationalize what I want with what I can afford to do, and something has to give. I could focus all of our resources on getting 1.8 out, but then there might not be a company around to support it when it is done. Plus, one of the things I go on and on about is the flexibility of open source software, and if I am not able to provide custom solutions to clients how true can that statement be?

There are others who might say that this focus on client code isn’t “open source”. Fully 100% of the code we write is published under an OSI approved license. Heck, you can even see what we are up to by browsing the subversion repository or subscribing to the opennms-cvs mailing list. We are completely transparent in that regard.

Finally, we plow all of our profits back into the project by hiring more people. I’ll have an announcement along those lines next week.

Some might say that the lack of a roadmap means fewer releases and thus less press to entice new users. That may be true as well, but we’re still at a point where a large number of people don’t even know OpenNMS exists and, once they do, they find that 1.6 is more than adequate to address their management needs.

So, to summarize: yes, we have a roadmap for OpenNMS but it can change over time depending on custom development we get paid to write. The upside is that most of that development is of benefit to other users, and often comes with a timeliness that a static roadmap couldn’t match.

The OpenNMS roadmap evolves in a way that both addresses the needs of our clients and keeps us in business. That’s a nice road to be on.

The Business of Open Source is Not Software

I’ve been staying out of the free vs. open source wars running around my little corner of the world of late. There is a lot of talk about whether or not open source has “won”. Open source is free software, so it seems silly to try to differentiate the two. The only way to do that is to focus on the people who care about the difference, and that just results in ad hominem attacks.

For years now I’ve been struggling to educate the market on the fact that the business around open source software is not about software. It’s about solutions. The clients I talk to are ultimately not concerned with what software to buy but instead want solutions to a variety of problems facing their business. Unfortunately, many of them only know the process of purchasing software, and they are unable to adapt to a solutions-based purchase.

Think about it. How does, say, the choice of a management solution usually play out?

First, a list of requirements is drawn up. Then, either through a VAR or just by searching the Internet, a list of possible software solutions is assembled. The next step is to get demo versions of the software, or perhaps to talk to the vendor and get them to do a proof of concept. Finally, a choice is made and a check is written for software licenses.

In the “demo” step the vendor is usually asked to expend some resources on the sale. Those costs are recovered when the customer purchases a software license. Not only will they pay for the software, due to lock-in they will most likely buy maintenance for years to come. It is a nice revenue stream that makes a gamble on free demos worth it.

This doesn’t work very well for open source. Real open source software doesn’t have a licensing cost, so one can’t make up revenue there. Real open source software can’t prevent access to the latest and greatest code, so there is no requirement to purchase maintenance. Since the client isn’t required to purchase anything, that makes the “demo” phase of a sale a lot more risky.

At OpenNMS we are happy to do demos during the pre-sales process, but we have to draw the line when it comes to a large amount of pre-sales consulting. There is a product we offer called the “Getting to Know You” project, in which a consultant will come and spend two days demonstrating what OpenNMS can do on the client’s network, allowing the client to kick the tires and ask questions, and we charge for it. That way, regardless of the choice made by the client, our costs are covered. This is important, since our business model is “spend less than you earn”.

The reason I am writing about this now is that over the last two days I have had to deal with a potential client who is asking for a large amount of work above and beyond what we do with a normal sale. We have been trying to meet their needs for several weeks, but they wouldn’t come to training and when I pressed for a Getting to Know You project I was told no. Since the product has not been “approved” they don’t want to spend any money on it, even if by spending a little money they could save a ton in the future.

This reminds me of one of my favorite YouTube videos, where a woman goes to the hairdresser for a new hair style but doesn’t want to pay for it until after all of the work is done and then only if she likes it.

I run into potential clients like this from time to time, and what I’ve found is that it is better to cut and run instead of spending the time to try and win the business. Someone who isn’t willing to pay for your time most likely won’t understand the value you provide, and in these cases they are better off buying something traditional like Solarwinds than investing in OpenNMS.

I sometimes get asked “how do you make money selling free software?” and I have to answer that I have no clue. I don’t sell software, I sell solutions. The prevalence of Software as a Service (SaaS) businesses is making this easier, since people are being introduced to the mindset of getting a solution without having to purchase software, but the biggest challenge to my business is getting people to understand the value free and open software provides in creating a great solution without the “purchase software” mentality.

Luckily, there are enough people out there who “get it” that our business is doing very well this year. Their companies now have a competitive advantage, which, over time, will be demonstrated. Only when these advantages are demonstrated in the market place can open source be said to have “won”.

Vonnegut, The Wire, and Narrative

This is one of those introspective Sunday posts that have little or no OpenNMS content. As usual, feel free to skip.

Many years ago I was lucky enough to see Kurt Vonnegut give a talk on narrative. He went to a whiteboard and drew a set of axes. One axis was time; the other ranged from “bad” to “good”.

First, he examined the story of Cinderella. The story starts with her father remarrying due to her mother’s death (bad). She has to deal with her evil step-mother and step-sisters (worse). Then she meets her fairy godmother (better). She goes to the ball (good). She dances with the Prince (great). Then the spell breaks and she has to flee (bad), etc.

The line he drew was a curvy swing from bad to good and back, eventually ending on good. Quite a few of our popular stories, movies and shows follow a very similar curve.

Then he talked about Native American narrative. This was much more along the lines of walking through the woods, seeing a stream, spotting a deer, etc. The line he drew was almost completely straight, and pretty much neutral between good and bad.

With these two graphs on the board, he then examined one of the most classic stories of all time: Hamlet.

Hamlet starts out with the death of Hamlet’s father and his uncle’s ascension as king (bad). Then his father’s ghost tells him that he was murdered by his brother (bad). To see if the ghost is right, Hamlet stages a play about a murder and watches his uncle’s reaction. When his uncle leaves the room he believes the ghost was right (bad). Then he accidentally kills his girlfriend’s father (bad), she kills herself (bad) and everyone dies at the end (bad).

It was a straight line, very much like the Native American one, only, well, bad.

I don’t remember anything more about the talk, but it struck me that the serious stories, the real stories, sort of plod along without these great swings.

Which brings me to The Wire. The Wire was an HBO television series set in Baltimore, Maryland (USA). It consists of 60 one-hour episodes over five seasons, and has been called the greatest television show of all time.

I started watching it as a way to pass the time on airplanes. I finished the show today on a flight from Oregon to Texas, and while I will try my best not to spoil anything with my thoughts on it, if you are strict about such things stop reading now.

Each season focuses on a different aspect of life in the city. You know something is up when the second season departs so radically from the first it may leave you wondering if you missed something, or if you are really watching the same show. As I watched it I always thought it was good, but just like each season could stand on its own, each show could almost stand on its own. There weren’t any great cliffhangers, although there were many times when things happened that shocked the hell out of you. Good guys died; bad guys died. Good guys lived; bad guys lived. The powerful were brought low by the weak, and the powerful were made even more powerful by abusing the weak.

Up until today I would have said that The Wire was good, but not necessarily the greatest television show ever, but the final episode changed that. It wasn’t even that the last episode “revealed all” – for the most part it played out like any other – but the final five minutes consisted of a montage that literally left me shaken. It consisted of short, 10-30 second scenes of various characters in the near future, and it is only then that you get a real idea of the scope of the show and how well it was written and how well everything fit together. It was truly amazing.

And if you were to plot it out on a whiteboard, it would be a straight line.

I keep that image of Vonnegut at the whiteboard at Pomona College in my mind. It reminds me that the greatest and most powerful things in life don’t come in big swings, but mainly from just moving forward.

Free Software and Baseball Analogies

We have been crazy busy over the last few months and since fourth quarter is historically our busiest time, I don’t expect it to get any less hectic any time soon. I expect blogging to be very sporadic and out of chronological order (as I’ll get to things as I think about them) so Faulkner fans rejoice.

On second thought, true Faulkner fans would run away screaming and have nightmares at the mere thought of comparing my writing abilities with that of the venerable author, so scratch that.

[Re-reading this, I’m not happy with it, so stop reading and go watch Auto-tune the News. I’m going to post it anyway since it will help me get back in the groove.]

We had training at OpenNMS headquarters this week. This will be the last training of the year, as the next one would need to be scheduled in November or December and rarely do people like to travel during the holidays (although I always end up in Chicago for some reason). Look for training to return in January.

It was great. We had six incredibly smart, amazingly handsome people in for the week. Two of them came all the way from Chile, and one guy rode his Ducati from Pittsburgh. It was fun, and the guys from Chile had replaced Netcool with OpenNMS, so that was even better.

In other news, I’m in Portland (Oregon) for the weekend working with a client, and while I was traveling yesterday I came across a post by Terry Hancock called “Is free software major league or minor?”

It’s worth reading.

Open source and free software detractors often try to paint the community as a bunch of zealots who hold ideals over practicality with no room for compromise, and while it is true that there are some notable examples of such behavior, the vast majority of free software users prefer open software over proprietary programs but use a combination of both.

Hancock points out that on one hand we in the community state that open source is just as good as expensive “major league” software, but when we are called on the carpet for a lack of documentation or usability, etc., we cry “but we’re just volunteers” and take a “minor league” stance. Which is it?

Obviously, I liked this article. Hancock had me at:

“Free Software” and “Open Source Software” is the exact same artifact, no matter who is promoting it, nor on what advantages of it they promote.

which is something I’ve been saying for some time now.

I can’t improve on his post but I would like to add a couple of thoughts.

The first is that open source software often gets rid of the cruft associated with proprietary software. In terms of the major league analogy, does one need a huge stadium with luxury skyboxes, designer uniforms and a private team jet to play great baseball? One of the reasons that open source software tends to be more stripped down than commercial software is that we tend to focus on what is important from a functional standpoint versus what looks good. Since open source software is not “sold”, there is no need to make it all bright and shiny.

But at what point in time does this focus on the basics start to impact usability in a meaningful way? To return to the baseball analogy, you can strip away the stadium but you can’t, say, remove the pitcher’s mound.

When it comes to usability, Apple products are hard to beat. They are also experts at controlling the user experience.

They spend lots of money on the little details. I can’t find the post now, but apparently the message displayed while emptying the trash changed from Leopard to Snow Leopard. It was a small change, along the lines of “This action will delete your Trash items permanently” to “This action will permanently delete your Trash items” but it serves as an example of the level of detail they track.

Since Apple customers pay a premium for their products, there is an expectation for this attention to detail. But the “user experience” doesn’t necessarily mean “usability”. While I’m impressed with the changed text in the example above, it doesn’t do a thing to improve usability (there are a number of other features in OS X that do, however).

With OpenNMS, we really need to focus more on usability. While large changes in the webapp are not coming in the next few months, one of the things that is holding up the release of 1.8 is that it won’t be released without greatly improved documentation. The second most frequent comment we get in our training classes is “I didn’t know OpenNMS could do that” (with the most frequent being “This class is great!”).

The second thought I had comes back to this concept of freedom in free software. Open source is free software (and those that tell you differently are selling something – probably software). However, as Hancock illustrates, people who focus on “open” tend to be more concerned with how open source software can be used to solve problems than those who focus on “free” – who tend to be more concerned with freedom as an ethical issue.

I am more in the “open” camp, but like others I do get concerned about freedom when it extends beyond the realm of code. I don’t care that I can’t have access to the code that runs my microwave oven, but I get a little more concerned when it comes to the code that runs my car. Not because I plan to hack that code, but as cars become more and more dependent on their computer systems it could become impossible to work on the car without access to the software. Plus, this software could be collecting data that I might want kept private. I think it is important to focus on freedom in software since control of software is becoming synonymous with the control of information.

Last weekend at the Atlanta Linux Fest, Jeff’s wife teased me about my old LG phone. I got it over 3 years ago with Sprint. It’s not a bad phone but it is a little dated. The problem is that I can’t decide what phone to get next.

I have an iPod Touch and so the iPhone is a contender. The augmented reality stuff is really cool, and it was only possible because Apple created a phone with a solid SDK, a video camera and a compass. But working on the pre-alpha OpenNMS iPhone app showed me what a royal pain it is to develop an open source application on that platform.

Chris Dibona was kind enough to send me a G1 handset. Even though it is open, the only way I could find to sync my contacts was through my GMail account. Now I like Google as a company, but I don’t want anyone to have access to my contact list. Even Apple, as far as I know, lets me keep that information private. So I gave the phone to Ben so he can make an Android OpenNMS app. I’m still waiting on some of the newer Android phones to come out as possible contenders.

The question I’ve been asking myself is: how open and free does my phone have to be? Is it like a microwave – something that can easily be replaced or has substitutes – or is it something I need in order to control my information? I’m not sure.

Perhaps the whole idea of using a game analogy to describe what’s going on is a FAIL. People keep talking about open source “winning”. Lots of people find open source software useful, and it is a viable alternative to proprietary software in many cases, so in that sense it has “won”. I also think it has “won” by delaying, if not preventing, those who would control all of our information. Anyone with hardware and an internet connection can get a web server running and send mail. But has it replaced all proprietary software? No.

My guess is that we need a better goal than “winning”. For example, while I want OpenNMS to replace OpenView and Tivoli everywhere, my first goal for OpenNMS is to make it easier for first time users to get it installed and know what it can do. When we have achieved that goal I want to make the configuration files easier to modify. Each step is an improvement, each step brings us closer to the “major league” and in such a fashion that we can deal with the pragmatic need to pay for it all.

Eventually, if we do it right, people will see that they should choose OpenNMS, not because it is cheaper or that it is open source, but because it is, quite simply, better.

As Sun Tzu said:

For to win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the acme of skill.

Monitoringforge.org

Every so often someone comes along with a plan to radically change the open source world, and my first reaction is skepticism. This is no different with the launch of Monitoringforge.org.

When Tara Spalding, the VP of Marketing at Groundwork, contacted us about this new site I really didn’t see any value in addition to what we already get from Sourceforge. The fact that it was being driven by the “VP of Marketing” raised a red flag with me. It’s not that I have anything against marketing people, but open source to me has always been about results, and often that involves cutting through a lot of the hyperbole associated with marketing.

Usually I ignore stuff like this, but seeing all of the mentions of the new site coming through my RSS feed, it is obvious the marketing folks have done a great job so I thought I should comment.

Where to start? Take the “Top Rated Projects” section on the front page of the site. What? How can you rate such a disparate group of applications? “Top Rated” for what? Service monitoring? Data collection? Event management? Agent technology? If they were serious about providing useful information about solutions, shouldn’t the first step be to divide the solutions into specific groups, like in the categories section (the smaller, tinier section on the right)? Or is it more about how many stars you can get next to your name so you show up on the front page?

The next thing I dislike about it is the use of the word “forge”. To me a forge is a place to host code and other code-related services, such as a bug tracker. I think Sourceforge already does that well for us. I don’t see what role this new site will play as a “forge”.

The third thing is the word “monitoring”. Sure, OpenNMS has great monitoring capabilities, but we have designed it to be a network management application platform of which monitoring is just one part. The term “Monitoringforge” seems limiting, and from the standpoint of our marketing this is a bad thing.

It reminds me of the Open Management Consortium. Remember them? It was started back in 2006, it died, then it was rebooted last year, but now it seems to be dead again. I can’t even get the website to load and typing in “Open Management Consortium” into Google returns only press release results and not the site itself (one would think that an organization with a focus on monitoring would know its website was down). As you can see, a lot of fuss was made about that organization, too.

It seems a similar organization, the Open Solutions Alliance, is still around and active, although with fewer members than when they were founded. Perhaps they are still around because they charge dues.

Within the open source community, brand is very important. Before we put the OpenNMS label on anything, we want to make sure that it is real and it doesn’t suck. This is very important. We take being a member of a community very seriously, and we don’t want to be a part of one until we know we have the resources to play an active role, that it makes sense for our project and our business, and that it is going to last.

It’s sort of like our practice of running our business profitably. If we are guaranteed to survive, there is no limit to what we can do – just how long it takes us to do it.

As I mentioned before, I am very results-driven. If this new site provides value, we’ll end up being a part of it. But for now we’ll just wait and see, and wish them the best of luck.