This is a long post about overcoming some challenges I had with a recent network install. It should have been pretty straightforward but instead I hit a couple of speed bumps that caused it to take much longer than I expected.
Last year I moved for the first time in 24 years, and the new place presented some challenges. One big upside is that it has gigabit fiber, which is a massive improvement over my last place.
The last place, however, had a crawl space and I was able to run Ethernet cable pretty much everywhere it was needed. The new place is a post and beam house (think “ski lodge”) which doesn’t lend itself well to pulling cable, so I needed a wireless solution.
I used to use Ubiquiti gear, and it is quite nice, but it had a couple of downsides. You had to install controller software on a personal device for ease of configuration, and I soured on the company over how it dealt with GPL compliance issues and how it disclosed the scale of a security incident. I wanted to check out other options for the new place.
I looked on Wirecutter for their mesh network recommendations, and they suggested the ASUS ZenWiFi product for gigabit fiber networks, so I ended up buying several nodes. You can connect each node via cable if you want, but there is also a dedicated 5 GHz network for wireless backhaul between the nodes, which is what I needed.
There were a couple of issues with the stock firmware, specifically that it didn’t support dynamic DNS for my registrar (the awesome Namecheap) and it also didn’t support SNMP, which is a must-have for me. Luckily I found the Merlin project, specifically the gnuton fork, which makes an open source firmware (with binary blobs for the proprietary bits) for my device with the features I need and more.
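For the curious, a Namecheap dynamic DNS update is just an HTTP GET against their documented update endpoint, which is all the firmware is doing on your behalf. Here is a minimal Python sketch; the host, domain, and password values are placeholders you would replace with your own.

```python
# Minimal sketch of a Namecheap dynamic DNS update.
# The host/domain/password values are placeholders.
import requests

resp = requests.get(
    "https://dynamicdns.park-your-domain.com/update",
    params={
        "host": "@",              # the record to update ("@" = bare domain)
        "domain": "example.com",
        "password": "YOUR-DDNS-PASSWORD",  # from the Namecheap dashboard
        # omit "ip" and Namecheap uses the address the request came from
    },
    timeout=10,
)
# Namecheap replies with a small XML document;
# <ErrCount>0</ErrCount> means the update succeeded.
print(resp.text)
```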
[Note: As I write this there is a CVE that ASUS has addressed that has not been patched in gnuton. It’s frustrating, as I’ve had to close down NAT for now and the fix is taking some time. I tried to join the team’s Discord channel, but you have to be approved and my approval has been in the queue for a week now (sigh). Still love the project, though.]
Anyway, while I had some stability issues initially (I am still a monitoring nerd so I was constantly observing the state of the network) those seem to have gone away in the last few months and the system has been rock-solid.
The new farm is nice but there were no outbuildings, so we ended up building a barn. The barn is about 200m from the house, and I wanted to get internet access out there. We live in a dead zone for mobile wireless access so if you are in the barn you are basically cut off, and it would be nice to be able to connect to things like cameras and smart switches.
For a variety of reasons running a cable wasn’t an option, so I started looking for some sort of wireless bridge. I found a great article on Ars Technica on just such a solution, and ended up going with their recommendation of the TP-Link CPE-210, 2.4 GHz version. It has a maximum throughput of 100Mbps but that is more than sufficient for the barn.
I bought these back in December and they were $40 each (you’ll need two) but I just recently got around to installing them.
You configure one as the access point (AP) and one as the client. It doesn’t really matter which is which, but I put the access point on the house side. Note that the “quick setup” has an option called “bridge mode”, which sounds like exactly what I wanted to create (a bridge), but that term means something different in TP-Link-speak, so stick with the AP/Client configuration.
I plugged the first unit into my old Netgear GS110TP switch, but even though that switch has PoE ports it couldn’t power the CPE-210 (as far as I can tell, the CPE-210 wants passive 24V PoE rather than standard 802.3af), so I ended up using the included PoE injector. I then simply followed the instructions: I set the IP address on each unit to one I could access from my LAN, I created a separate wireless network for the bridge, and with the units sitting about a meter apart I was able to plug a laptop into the client side and get internet access.
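If you want to sanity-check the bench setup the same way, a quick ping loop does the job. This is just a sketch; the addresses are examples, so use whatever static IPs you assigned the units.

```python
# Ping each bridge unit once to confirm it answers on the LAN.
# Example addresses; substitute the IPs you configured.
import subprocess

DEVICES = {
    "CPE-210 AP (house)": "192.168.1.20",
    "CPE-210 Client (barn)": "192.168.1.21",
}

for name, ip in DEVICES.items():
    # "-c 1" sends a single echo request (Linux/macOS ping syntax)
    ok = subprocess.run(
        ["ping", "-c", "1", ip], capture_output=True
    ).returncode == 0
    print(f"{name} at {ip}: {'reachable' if ok else 'NO RESPONSE'}")
```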
Now I wanted to be able to extend my mesh network out to the barn, so I bought another ASUS node. The one I got was actually version 2 of the hardware and even though it has the same model number (AX-6600) the software is different enough that gnuton doesn’t support it. From what I can tell this shouldn’t make a difference in the mesh since it will just be a remote node, but I made sure to update the ASUS firmware in any case. The software has a totally different packaging system, which just seems weird to me since the model number didn’t change. I plugged it in and made sure it was connected to my AiMesh network.
I was worried that alignment might be a problem, so I bought a powerful little laser pointer and figured I could use that to help align the radios as I could aim it from one to another.
I had assumed the TP-Link hardware could be directly mounted on the wall, but it is designed to be tie-wrapped to a pole. I don’t have any poles, so it was once again off to Amazon to buy two small masts that I could use for the installation.
Now with everything ready, I started the install. I placed the AP right outside of my office window, which has a clear line of sight to the hayloft window of the barn.
I mounted the Client unit on the side of the barn window, and ran the Ethernet cable down through the ceiling into the tack room.
The tack room is climate controlled and I figured it would be the best place for electronics. The Ethernet cable went into the PoE injector, and I plugged the ASUS ZenWiFi node into the LAN port on the same device. I then crossed my fingers and went back to my desk to test everything.
Yay! I could ping the AP, the Client and the ASUS node. Preparation has its benefits.
Or does it?
When I went back to the barn to test out the connection, I could connect to the wireless SSID just fine, but then it would immediately drop. The light on the unit, which should be white, was glowing yellow indicating an issue. I ended up taking the unit back to the house to troubleshoot.
It took me about 90 minutes to figure out the issue. The ASUS device has four ports on the back: three switched LAN ports and a single WAN port. On the primary unit, the WAN port is used to connect to the gigabit fiber device provided by my ISP. What I didn’t realize is that you also use the WAN port on a remote node if you want a cabled backhaul instead of the 5 GHz wireless network. While the bridge isn’t a wired connection per se, I needed to plug it into the blue WAN port rather than the yellow LAN port I was using. The difference is about a centimeter, yet it cost me an hour and a half.
Everything is a physical layer problem. (sigh)
Once I did that I was golden. I used the laser pointer to make sure I was as aligned as I could be, but it really wasn’t necessary as these devices are meant to span kilometers. My 0.2km run was nothing, and the connection doesn’t require a lot of accuracy. I did a speed test and got really close to the 100Mbps promised.
So I was golden, right?
Nope.
I live somewhere so remote that I run an open guest network. There is no way my neighbors would ever come close enough to pick up my signal, and I find it easier to tell visitors to just connect without having to give them a password. Most modern phones support calling over WiFi, and with mobile wireless service being so weak here my guests can get access easily.
We often have people in the barn so I wanted to make sure that they could have access as well, but when I tested it out the guest network wouldn’t work. You could connect to the network but it would never assign an IP address.
(sigh)
More research revealed that ASUS uses VLAN tagging to route guest traffic across the mesh. Something along the way must have been gobbling up those tagged packets.
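One way to check whether the tags survive the trip is to sniff for 802.1Q frames at a point along the path (for example, with a laptop temporarily in place of the remote node). A minimal sketch, assuming a Linux machine with scapy installed; the interface name is an example.

```python
# Sniff for 802.1Q (VLAN-tagged) frames; run as root.
# If guest traffic is flowing but no tagged frames show up here,
# something upstream is stripping or dropping the tags.
from scapy.all import sniff, Dot1Q

def show_tag(pkt):
    if pkt.haslayer(Dot1Q):
        print(f"VLAN {pkt[Dot1Q].vlan}: {pkt.summary()}")

# The BPF filter "vlan" matches only tagged frames.
sniff(iface="eth0", filter="vlan", prn=show_tag, count=20)
```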
I found what looked like a promising post covering just that issue with the CPE-210, but changing the configuration didn’t work for me.
Finally it dawned on me what the problem must be. Remember that old Netgear switch I mentioned above? I had plugged the bridge into that switch instead of directly into the ASUS. I did this because I thought I could drive it off the switch without using a PoE injector. When I swapped cables around to connect the bridge directly to the AiMesh node, everything started working as it should.
Success! I guess the switch was mangling the VLAN tags just enough to cause the guest network to fail.
If at least one of my three readers has made it this far, I want to point out that several things here made it difficult to pinpoint the problem. When I initially brought up the bridge I could ping the ASUS remote node reliably, so it was hard to diagnose that it was plugged into the wrong port. When the Netgear switch was causing issues with the guest network, the main, password protected SSID worked fine. Had either of these things not worked I doubt it would have taken me so long to figure out the issues.
I am very happy with how it all turned out. I was able to connect the Gree mini-split HVAC unit in the tack room to the network for remote control, and I added TP-Link Kasa smart switches so we could turn the stall fans on and off from HomeKit. I’m sure the next time we have people out here working they will appreciate the access.
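As an aside, those Kasa switches can also be scripted with the open source python-kasa library, which is handy if you ever want automation outside of HomeKit. A quick sketch; the device class and address here are assumptions, so point it at whatever IP your switch was assigned.

```python
# Toggle a Kasa smart device using the python-kasa library.
# The IP address is an example; the stall fan switch on my network.
import asyncio
from kasa import SmartPlug

async def main():
    fan = SmartPlug("192.168.1.50")  # example address
    await fan.update()               # fetch the current state
    if fan.is_on:
        await fan.turn_off()
    else:
        await fan.turn_on()

asyncio.run(main())
```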
Anyway, hope this proves helpful for someone else, and be sure to check out the CPE-210 if you have a similar need.
One of the perks of my job is that I get to work with some incredible partners. One of those is Confluent, probably best known for being the primary maintainer of the Apache Kafka project.
This year, Confluent is doing a multi-city “Data In Motion” tour. The name comes from a focus on real-time data processing. Modern applications often have a requirement to collect data from one or more sources, enrich it and then use the enriched data to provide useful information to the end user, usually in real-time. This tour was a half-day seminar exploring some solutions to that use case.
The event was held at a place called Boxyard RTP. It has been many years since I worked in the Research Triangle Park and it has really grown in that time. Boxyard is made up of repurposed shipping containers. There are restaurants, a bar, a stage (when I walked up a band was playing) and for the purposes of this seminar there was an area on the second level with a conference room and a patio.
The agenda consisted of four main items: an overview of what is going on with Confluent’s offerings, a “fireside chat” about using real-time data for security, an hour-long demo of new functionality and a customer success story.
It was cool to see that AWS was a Diamond Sponsor of this event.
The first presenter was Ananda Bose, who is the Director of Solutions Engineering at Confluent. He covered some of the new products available from Confluent, especially Kora. Kora is a cloud native implementation of Apache Kafka.
At my previous company we wanted to be able to offer our technology as a managed service, which was difficult since it was a monolithic Java application. The ultimate goal was to have a cloud native version, and by that I mean a version of the application that can take advantage of cloud technologies that provide resilience and automatic scalability. Apache Kafka is also a Java app, and a lot of work must have gone into decoupling the storage, identity management, metrics and other aspects of the program to fit the cloud native paradigm.
One thing I liked about Ananda’s presentation style was that he was very direct. Confluent has just completed an integration with Apache Flink, which is a stream processing framework. One thing that Flink brings is ANSI-compliant SQL. Prior to this integration people used KSQL, but the words that Ananda used to describe KSQL are not really appropriate for this family-friendly blog. (grin)
Kora reinforces something I’ve been saying about open source for some time. When it comes to open source software, people are willing to pay for three things: simplicity, stability and security. Kora delivers all three, and its design even won the “Best Industry Paper” award at last year’s Very Large Data Bases (VLDB) conference.
We would see Kora and Flink in action in the demo section.
The second talk was a fireside chat between Ananda and Dr. Jared Smith. Jared works at SecurityScorecard, a security risk mitigation company.
SecurityScorecard has to consume petabytes of data in order to detect malicious behavior on the network. In their system, the payload of a given message may be 20 megabytes or larger, and RabbitMQ simply couldn’t handle the workload. When they switched to Kora their scaling issues went away.
One cool story Jared told happened during the start of the war in Ukraine. SecurityScorecard placed a “honeypot” server in Kyiv and it was able to detect a large Russian botnet attacking the network. They were able to collect and block the IP addresses of the bots and thus mitigate the damage.
The next hour was taken up by a demo. ChunSing Tsui, a Senior Solutions Engineer, walked us through an example using Confluent Cloud, MongoDB and Terraform. The whole demo is available on GitHub if you’d like to recreate it on your own.
In this example, a shoe store called HappyFeet wants to monitor website traffic to identify customers who visit but don’t stay on the site very long. Then they could use this information to try and re-engage with them through a marketing campaign offering discounts, etc.
While I am in no way an expert at this stuff, it was engaging. There were four data sources that would be processed to provide an enriched data stream to MongoDB tables. What I did like about it is that the heart of the demo was all written in SQL.
As an “old” I am not as up to date on the new hotness in cloud computing as I would like to be, but SQL I know. This was a product that took a difficult concept and made it accessible.
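To give a flavor of what that looks like (this is not the demo’s actual code, just a sketch of the same idea using Apache Flink’s Python API, with invented table and column names), here is how you might flag visitors whose sessions were too short:

```python
# A sketch of the kind of Flink SQL shown in the demo: find visitors
# with short sessions and few page views. Table and column names are
# invented; the real demo lives in Confluent's GitHub repo.
from pyflink.table import EnvironmentSettings, TableEnvironment

env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())

# Assumes a `clickstream` table has already been registered over a
# Kafka topic, with an event-time attribute named `event_time`.
env.execute_sql("""
    SELECT user_id,
           SESSION_START(event_time, INTERVAL '30' SECOND) AS visit_start,
           COUNT(*) AS page_views
    FROM clickstream
    GROUP BY user_id, SESSION(event_time, INTERVAL '30' SECOND)
    HAVING COUNT(*) < 3
""").print()
```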
The final presentation was a customer success story from SAS Institute. It was given by Bryan Stines, Director of Product Management for the SAS Cloud, and Justin Dempsey, a Senior Manager for SAS Cloud Innovation and Automation Services.
It was a nice close to the meeting as the “Data In Motion” theme was very present here. One of the products SAS provides involves fraud detection for credit and debit card transactions. When a person swipes or taps a credit card, that sets off a series of events to detect fraud that may involve numerous checks. This has to be done on the order of milliseconds.
Now I am a big open source software enthusiast, but free software doesn’t mean free solution. With my previous project we used technology such as Apache Kafka, Apache Cassandra, PostgreSQL and others. Our users had to either acquire or develop some of that expertise in house or they needed to find a partner, and that was the issue facing SAS. By partnering with Confluent they were able to get the most out of the software from the people who knew it best.
I no longer live that close to RTP but I felt the three hour round trip was worth it for this event. There are still several dates on the calendar so if this interests you, please check it out.
I’m not sure where I first learned about the inaugural Open Source Founders Summit, but I remember thinking that I wished it had been around 20 years ago.
In many ways, starting and running an open source business is no different than any other business. You still need to take care of your customers, create a useful product, and control expenses. But even though open source businesses are software businesses, they differ greatly in that the software isn’t the product.
My first computer was a TRS-80 from Radio Shack that I got for Christmas in 1978. A few years later my father would buy one of the first IBM PCs and the main piece of software he ran was Lotus 1-2-3. Mitch Kapor was the first person I’d heard of to become wealthy off of software (followed soon by Bill Gates). From a business perspective the model was compelling: create a useful, high margin product and distribute it for next to nothing.
Because open source involves software, people still think that the standard software business model applies, and that the open source aspect is more of a “loss leader” for a proprietary product. For those of us who want to run a truly open source business, this isn’t an option. As someone who spent two decades running an open source business, I found the idea of getting a bunch of us together exciting.
The conference took place in Paris on 27-28 May, and was organized by Emily Omier and Remy Bertot. About 75 people showed up for the conference, and I got to see some old acquaintances like Frank Karlitschek, Peter Zaitsev, Monty Widenius, and Peter Farkas. Brian Proffitt from Red Hat was also there (and I finally got to meet his wife as they came to Paris early to celebrate their wedding anniversary).
The event kicked off Sunday night with a social activity at a brewery. That was a lot of fun as I got to make many new friends, although the venue was a bit loud for conversation. It did make me very eager for the conference to start the next morning.
The venue was pretty cool. It was called The Loft, and once you came in off the street you entered a covered courtyard (which was nice since it rained off and on). A short flight of stairs brought you up to the main room. On one side were chairs and two screens for presentations, and on the other was an area with high-top tables where food was served and a bar for juice and coffee in the morning (they added soft drinks during lunch).
I really liked the conference format, which consisted of several 30-minute presentations in the morning that everyone could attend. While I understand the need for and usefulness of multiple tracks at many larger conferences, it was nice not to have to miss anything at a smaller event. The presentations were followed by an hour of five-minute “lightning talks” and then by lunch.
In the afternoon we would break out into smaller groups for an open discussion of topics around various aspects of running an open source business. One group would stay in the main area while the others would move to spaces in a loft and downstairs in the basement. In every group I attended we would sit in a circle while the moderator would start off the discussion, but after that it was a pretty open format.
One of the main tenets of the conference was that it was a safe space in which to share stories, so none of the sessions were recorded and I will be somewhat circumspect in what I share from the conference as well. It created an intimacy and a level of trust I haven’t experienced at other conferences.
Remy and Emily started us off with an overview of the conference and what was planned for the next two days.
Thomas talked about “projects vs. products”. In an open source business where you are commercializing the work of an open source project, you don’t have total control of the product roadmap. He discussed strategies for working with your project to better align its needs with those of the business. He also talked about pricing open source products. One of the things we did wrong in the beginning at my company was pricing our offerings too low. We were able to adjust that over time, but you should never, ever compete on price in an open source business. You compete by being a better and more flexible solution, and one that doesn’t come with vendor lock-in.
I had never heard of Strapi, but they are a popular (and successful) content management system. They have raised a lot of investment money over several rounds and their business model was originally based on “open core” or having a proprietary “enterprise” version.
Now as my three readers know, for over 20 years I’ve been writing about open source business and in many of those posts I have railed against the open core model. I just don’t like it. But it is much more palatable to investors than a pure open source model, and several of the speakers at this conference went the VC route. Strapi did and it seems to be working well for them.
But the focus of Pierre’s talk was Strapi’s addition of a SaaS model and how it compared to their enterprise offering. I am a huge fan of companies offering a hosted version of open source software. I’ve found that people are willing to pay for three things when it comes to open source: simplicity, security and stability. One of the best ways to offer this is through a managed product.
Selling Strapi’s enterprise offering had similarities to selling proprietary software. It had longer sales cycles and needed a lot of one to one contact. But the deals were large. Contrast that to the SaaS offering which was self-serve, required much less customer contact, and resulted in a faster sales cycle. But the deals were smaller and there was a greater chance that the customer would later leave.
The next speaker was someone I’ve followed for decades but never met: Gaël Duval.
There was a time when I was extremely into running truly free and open source software on my mobile devices. I bought a CopperheadOS phone and later ran GrapheneOS. I loaded Asteroid on my smartwatch. For $REASONS I’m back in the Apple ecosystem but I still applaud efforts to bring open source to mobile. It is one area where we in the FOSS community have failed (for example, I could be perfectly happy running Linux Mint as my sole desktop environment but there is nothing truly equivalent in mobile).
However, Gaël wasn’t there to talk about that; instead he focused on the change from being a hands-on founder to becoming a CEO. That is something I had to work through as well, and the issue of delegation was something with which I struggled. He talked about how he addressed that and more, and I identified with a lot of what he said.
As with many conferences my favorite part was the hallway track. I didn’t know that Brian Proffitt and Monty Widenius had never met, so I made introductions. This kicked off a spirited discussion over licensing (Red Hat is owned by IBM, which will soon own HashiCorp, which opens up some interesting licensing possibilities).
As you can see in this picture I’ve hit my goal of spreading joy and happiness throughout the conference. (grin)
Amandine Le Pape was the next speaker, and she talked about building an open ecosystem.
Amandine is the CEO of Element, the company behind the open source Matrix protocol. What I found interesting about this talk was hearing from a company with an extremely successful project but one where the company behind it struggles to share in that success. She was extremely transparent with numbers, including that Element often contributed 50% of its operating budget to the project and that they had to downsize. But the decisions they had to make have worked and Element is doing much better.
The project I was associated with never had the adoption of something like Matrix, but I do remember the two times we had to downsize and those were the worst days of my professional life. This talk really drove home the reality that open source businesses are businesses, and you have to make decisions based on the health of the company as well as the project.
The last talk of the morning was followed by an hour of five-minute lightning talks. The first was by Olga Rusakova, who talked about what types of content from engineers best drive engagement.
Engineers don’t necessarily write compelling content, even though such content can drive business leads. Olga talked about the best practices for getting engineers to write such content.
Kevin Muller was next with a talk about the best KPIs to measure when trying to gauge the health of an open source startup. Spoiler: it isn’t Github stars. I didn’t get a good picture of Kevin but I did get a very clear shot of the guy’s head who was sitting in front of me (sigh).
Alexandre talked about becoming the CEO of an open source company while not being a founder. In many cases when an open source startup gets a funding round, the investors will insert a new CEO into the mix. In Alexandre’s case he was already part of the company and was just a good fit for the role. Founder and CEO are different roles, and sometimes it is best for a company if someone other than a founder takes the latter.
And for the viewpoint from the other side, Peter Zaitsev talked about replacing himself in the company he founded with a new CEO.
He covered his rationale for turning over the reins of his company to another, and having met Ann Schlemmer I can see why he made the choice.
Elisabeth Poirier-Defoy finished up the talks with a discussion of the organization behind GTFS, the General Transit Feed Specification, which allows municipalities to publish transit schedules for consumption by third parties, especially mapping applications.
I feel especially close to this technology since I was doing a lot of work with the City of Portland in 2005, and it was their local transportation authority, TriMet, that started publishing transit schedules in an accessible format. It is great to see how that effort has grown into a worldwide practice. Many times in Paris I pulled out my mobile to look up the best bus or metro to take to get to my destination.
We then took a break for a nice lunch and more conversation.
I didn’t take any pictures of the afternoon sessions. There was something about this conference that kind of discouraged my usual need to record everything. As I mentioned above, after lunch we would break out into groups of 20 or so to discuss a certain aspect of open source business. They worked surprisingly well, and in the sessions I attended everyone seemed eager to contribute yet no one really talked over anyone else. It was a refreshingly frank exchange of ideas.
As the afternoon came to a close, I ended up in the courtyard talking with Stephen Augustus. I know we’ve been in the same room before, but this was the first time we were formally introduced. Stephen is the head of open source at Cisco, and we had a great conversation walking to the evening event, which Cisco sponsored.
We met at the Hotel du Nord for drinks and dinner. It was wonderful, although the bar space was a little constrained so it made circulating difficult (and like Sunday night, it was loud). I ended up grabbing a seat (my bad ankle was starting to bother me) but was soon enjoying more conversation and appetizers. We then adjourned to the dining room where the great conversation continued (and with a different set of tablemates so I wouldn’t wear out my welcome). It capped off a nearly perfect day.
Day Two started off very much like the first: with opening remarks from Emily and Remy.
Frank Karlitschek started off the day with a discussion of generating leads when you are an open source company.
I am an unabashed Karlitschek fan-boy. He started a company called ownCloud, and when his vision for the product and the investors diverged, he took a huge leap and left to form a fork called Nextcloud. I use Nextcloud several times a day, and it is amazing, and I am always eager to hear what Frank has to say.
Nextcloud is self-financed (i.e. no VC) and is profitable. They do not subscribe to the open core model, but unlike other open source companies they also do not perform any professional services (at my company professional services made up about 35% of revenue).
He talked about their sales funnel, where they generate leads into the CRM system which are then qualified by marketing and eventually assigned to a salesperson. It’s called a funnel for a reason, and the numbers drop significantly from leads to MQLs (marketing-qualified leads) to SALs (sales-accepted leads). He was kind enough to share actual numbers from the past week, which gave us an example to set realistic expectations.
The next speaker was Alexis Richardson, the founder of RabbitMQ who went on to found Weaveworks, a startup focused on cloud native development tooling. Unfortunately, despite generating a lot of revenue (numbers which he shared) they ended up having to close their doors earlier this year. It was the most brutally honest presentation of the conference. One thing I found interesting was that his company did both an enterprise product and a SaaS product, and they decided to focus on the enterprise product. This was a bit of a departure from my current belief that managed open source is the best option for open source startups.
Next up was Matt Barker. Matt is in sales and he started us off with a story about going from England to the US to sell magazine subscriptions door to door. That is something I could never do, but it turns out he was good at it.
Plus he referenced Crossing the Chasm, which is probably my favorite business book of all time. Anyone who references that must be cool (granted, I got to talk to Matt a lot after his talk and he is cool, especially when sharing some of his experiences at Canonical where his time overlapped with my friend Jono). A lot of the things he learned selling magazines can be applied to selling open source products.
Continuing the sales theme, Peter Zaitsev returned with a talk about how you can incentivize sales teams.
People often focus on the engineering and product management when it comes to open source businesses, but sales plays an important role, too. In Peter’s experience the best solution is to offer salespeople a commission-based compensation (or “eat what you kill”) and make it unlimited. If they are successful they could end up getting paid more than the CEO, but this can be a good thing for the company.
Garima Kapoor, co-founder of MinIO, closed out the main talks with a discussion about how to convert widespread open source adoption into financial success for the company behind the project. She covered many topics but one line on a slide caught my eye concerning per-incident support. I constantly had to explain to our customers why we didn’t provide it and I was hoping to follow up with her but never got the chance.
Like the previous day, the final morning hour was reserved for lightning talks. First up was David Aronchick.
David founded Expanso, a company that commercializes the Bacalhau project. Bacalhau is a Portuguese seafood dish consisting of salted cod, and that project implements a Compute over Data (CoD) solution. Get it? Expanso is Portuguese for distributed, or so I was told. He talked about how one can drive community engagement through management of the product cycle.
Julien Coupey followed with a talk about how expertise in an open source project is something that can be commercialized.
At my old company I used to compare us to plumbers. There are a lot of things I can do around my farm, woodworking and electrical come to mind, but I suck at plumbing. I end up wet and frustrated. So even though I can buy the same pipes and fittings as a professional, I call someone because overall it will save me time and money. In much the same way, contracting with the people who actually make the open source software you use can save you time and money, since chances are they have the experience to solve your issues more quickly than you could on your own.
Taco’s company interacts a lot with non-profits, other charities and governments. These organizations present unique challenges when it comes to the sales cycle. He shared some of his experiences in this field.
Continuing with that theme, Yann Lechelle discussed developing a commercial enterprise around scikit-learn, a publicly sponsored open source library for machine learning.
He talked about the challenges in developing and funding a mission-driven organization around an open source project while still remaining true to its open source nature.
The final speaker was Eyal Bukchin, who talked about using an open source minimal viable product (MVP) to start a business.
I think open source provides a great way to demonstrate the value and demand for a software solution. By working on mirrord, he was able to secure funding for MetalBear, his business aimed at making the lives of backend developers easier.
At lunch I introduced myself to Greg Austic, who works on open source software for farms.
He had on a T-shirt from upstate New York, but I learned he is actually in the Minneapolis-St. Paul area. When he asked me where I was from the conversation went a little like this:
Greg: Where are you located?
Tarus: Oh, I live in North Carolina.
Greg: Where in North Carolina?
Tarus: Central North Carolina.
Greg: Where in central North Carolina? I spent five years in Pittsboro.
Tarus: I used to live in Pittsboro.
Turns out we know a lot of the same people, yet we didn’t meet until we both went to France.
After lunch we broke out into more workshops. I can’t stress enough how much I liked the format of this conference.
Unfortunately, while Monday was a holiday in the US, Tuesday wasn’t, and being six hours ahead of the New York time zone I ended up leaving a bit early to deal with some meetings I couldn’t miss.
In case this hasn’t come out in this post, I really enjoyed this conference. A lot. It was very eye-opening. There were companies that embraced VC and did well, and others that embraced VC and did poorly. Some were steadfastly against outside investment, preferring to grow organically. Some focused on the public sector, some focused on the enterprise and some focused on managed services. Everyone had a story to tell and something to add, and I eagerly await next year’s event.
The one challenge will be to maintain the intimacy and honesty of this first conference as it grows. It should be a must-attend event for open source founders and anyone seriously interested in forming an open source company.