RDO and Upstream Packaging

Derek mentioned “upstream packaging” at this week’s packaging meeting and asked RDO packagers to participate in the upstream discussions. I thought some more context might be useful.

First, a little history …

When I first started contributing to OpenStack, it briefly looked like I would need to make some Ubuntu packaging updates in order to get a Nova patch landed. At the Essex design summit a few weeks later, I raged at Monty Taylor about how ridiculous it would be to require a Fedora packager to fix Ubuntu packaging in order to contribute a patch. I was making the point that upstream projects should leave packaging to the downstream packaging maintainers. Upstream CI quickly moved away from using packages after that summit, and I’ve heard Monty cite that conversation several times as why upstream should not get into packaging.

Meanwhile, Dan Prince was running the Smokestack CI system at the time, which effectively was being treated as OpenStack’s first “third party CI”. Interestingly, Smokestack was using packages to do its deployment, and for a long time Dan was successfully keeping packaging up to date such that Smokestack could build packages for patches proposed in gerrit.

And then there’s the persistent interest in “chasing trunk” – operators who want to practice Continuous Deployment of OpenStack from trunk. How does packaging fit into that world? Well, the DevOps mantra of doing development and CI in environments that model your production environment applies. You should be using packaging as early in your pipeline as possible.

My conclusion from all of that is:

  1. A key part in building a Continuous Delivery pipeline for OpenStack is to practice continuous package maintenance. You can glibly say this is “applying a DevOps mindset to package maintenance”.
  2. How awesome would it be if OpenStack had “upstream infrastructure for downstream package maintainers”. In other words, if downstream package maintainer teams could do their work close to the upstream project, using upstream infrastructure, without disrupting upstream development.

I think the work that Derek, Alan, Dan, John, and everyone else has been doing on Delorean is really helping RDO maintainers figure out how to practice (1). I first started maintaining Fedora packages for Fedora Core 2, so IMO what RDO is doing here is really dramatic. It’s a very different way of thinking about package maintenance.

As for (2), this is where we get back on topic …

At a Design Summit session in Vancouver, the idea of maintaining packaging using upstream infra really took hold. Thomas Goirand (aka zigo) proposed the creation of a “distribution packaging” team and this triggered a healthy debate on openstack-dev. Derek has since pushed a WIP patch showing how RDO packaging could be imported.

There’s a clear desire on the part of the Debian and Ubuntu package maintainers to collaborate on shared packaging, and it sounds like this goal of further collaboration is one of the primary motivators for moving their packaging upstream. This makes a lot of sense, given the shared heritage of Debian and Ubuntu.

The RDO team is enthusiastic about adopting this sort of upstream workflow, but the Debian/Ubuntu collaboration has added an entirely new aspect to the conversation. Despite the fact that RDO and SUSE platforms have little in the way of shared heritage, shouldn’t the RDO and SUSE packaging teams also collaborate, since they both use the RPM format? And perhaps deb and rpm maintainers should also collaborate to ensure consistency?

To my mind, the goal here should be to encourage downstream packaging teams to work closer to the upstream project, and have downstream packaging teams collaborate more with upstream developers. This is about upstream infrastructure for downstream teams, rather than a way to force collaboration between downstream teams, simply because forced collaboration rarely works.

For me, what’s hugely exciting about all of this is the future prospect of the package maintainers for different platforms adopting a “continuous packaging” workflow and working closely with project developers, to the extent that packaging changes could even be coordinated with code changes. With its amazing infrastructure, OpenStack has broken new ground for how open-source projects can operate. This could be yet another breakthrough, this time demonstrating how a project’s infrastructure can be used to enable an entirely new level of collaboration between package maintainers and project developers.


Network Function Virtualization – The Opportunity for OpenStack and Open Source

This week’s launch of OPNFV is a good opportunity to think about a debate that has been simmering in the OpenStack developer community for a while now – what exactly does NFV have to do with OpenStack, and is it a good thing?

My own “journey” on this started exactly one year ago today when I visited a local Red Hat partner to talk about OpenStack and, towards the end of our Q&A, I was asked something like “will OpenStack support NFV?”. I’d never heard of the term and, when the general idea was explained, I gave a less than coherent version of “OpenStack implements an elastic cloud for cattle; this sounds like pets. Sorry”. After the meeting, the person who asked the question forwarded me an NFV whitepaper from October 2012 and, glancing through it, most of it went right over my head and I didn’t see what it had to do with OpenStack.

Since then, Chris Wright has been patiently talking me through this space and gently trying to get me over my initial skepticism. Chris would say that our conversations have helped him refine how he explains the concepts to open-source developers, and I think he really nailed it in his keynote at the Linux Foundation’s Collaboration Summit in April.

In his keynote, Chris talks about the benefits of collaboration in open-source and walks through all of the various aspects of how the networking industry is changing, and how open-source is playing a key part in all of those changes. He covers, and simplifies:

  • Taking the current architecture of proprietary, expensive, complex, difficult-to-manage forwarding devices (like routers) and how SDN (Software Defined Networking) aims to “put an API on it”. This is what’s meant by “disaggregation of the control plane and data plane” – forwarding devices become devices controlled through open standards, which allows your distributed system of forwarding devices to be controlled and automated.
  • NFV (Network Function Virtualization) as a shift in the telco data-center world which embraces many of the lessons that the elastic infrastructure cloud has taught the IT industry. More on that below.
  • Changes in the “data plane” world, where we’re starting to see the network device market mimic the x86 server market such that these devices can be “white box” servers running open-source software. Again that disaggregation word, but this time it’s about “disaggregation of hardware and software” and how the software part can be open-source implementations of optimized packet-forwarding capabilities which we’re used to seeing implemented in expensive and proprietary hardware appliances.

But let’s focus here on NFV.

I really don’t know much about the telco industry, but what Chris has me imagining now is data-centers full of proprietary, black-box hardware appliances which are collectively known as “network functions” or “middle boxes”. These boxes are used for everything from firewalls, NAT and deep packet inspection (DPI) to the mobile packet core. These are software applications trapped in hardware. They’re expensive, proprietary, slow to roll out, don’t always scale well and are hindering telco service providers as they attempt to react to a rapidly changing market.

NFV is about completely re-thinking the architecture of these data-centers. This is the telco industry re-imagining their data centers as elastic infrastructure clouds running their “network functions” as virtualized, horizontally scalable applications on these clouds. The exciting – simply stunning – aspect of all of this for me as an open-source advocate, is that the telco industry is settling on a consensus around an architecture involving open-source generally and OpenStack specifically.

Say that again? These huge telcos want to rebuild their entire data centers with OpenStack and open-source? Yes.

If, like me, you want to see open-source change the IT world into one where we all embrace the opportunity to collaborate in the open, while still successfully building businesses that serve our users’ needs … then this sounds pretty cool, right?

If, like me, you want to see OpenStack as the standard platform from which many of the world’s elastic infrastructure clouds are built … then this sounds like a no-brainer, right?

Well, the thing we need to bear in mind is that these applications (i.e. network functions) are pretty darn specialized. They need to have a high level of performance, determinism and reliability. But that does not necessarily mean they are “pets” that miss one of the key points of an elastic cloud.

Let’s take the reliability requirement – when these network functions are implemented as horizontal scale-out applications, they will look to achieve high levels of reliability in the same way that typical cloud applications do – with each tier of the application spread across multiple failure domains, and by spreading application load horizontally. Telcos will just want to take this further, with faster and more deterministic responses to failures, while also avoiding any compromise to the application’s performance. For example, you’ll see a lot of interest in how instances are scheduled to take into account affinity and anti-affinity within an instance group.
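
To make that a bit more concrete, here is a minimal sketch (my own illustration rather than anything from the discussions referenced here; the credentials, endpoint, image and flavor names are placeholders) of how an application might ask Nova for anti-affinity placement within an instance group using python-novaclient:

```python
# Hypothetical sketch: spread the instances of one VNF tier across different
# compute hosts using a Nova server group with the anti-affinity policy.
# Credentials, endpoint, image and flavor names are all made up.
from novaclient import client

nova = client.Client('2', 'user', 'password', 'project',
                     'http://keystone.example.com:5000/v2.0')

# Members of an anti-affinity group are scheduled onto different hosts,
# i.e. different failure domains at the hypervisor level.
group = nova.server_groups.create(name='vnf-tier-1',
                                  policies=['anti-affinity'])

for i in range(3):
    nova.servers.create(name='vnf-tier-1-%d' % i,
                        image=nova.images.find(name='vnf-image'),
                        flavor=nova.flavors.find(name='m1.large'),
                        scheduler_hints={'group': group.id})
```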

The performance requirement is largely about high-performance packet processing – how to get a packet off the network, into a VM, processed quickly and back out again onto the network. One of the techniques being pursued is to give VMs direct physical access to the network via SR-IOV which, in turn, means the compute scheduler needs to know which physical networks the NICs on each compute node have access to.
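
As a rough sketch of what that looks like from the API side (again my own illustration with placeholder names, not something from the original), an SR-IOV-backed instance is typically requested by creating a Neutron port with vnic_type=direct and booting against it, leaving the PCI-aware scheduler to find a compute node whose virtual functions sit on the right physical network:

```python
# Hypothetical sketch: request an SR-IOV virtual function for a guest by
# creating a Neutron port with vnic_type=direct, then booting against it.
# Credentials, endpoint and network names are placeholders.
from neutronclient.v2_0 import client as neutron_client
from novaclient import client as nova_client

neutron = neutron_client.Client(username='user', password='password',
                                tenant_name='project',
                                auth_url='http://keystone.example.com:5000/v2.0')
nova = nova_client.Client('2', 'user', 'password', 'project',
                          'http://keystone.example.com:5000/v2.0')

# A provider network mapped to the physical network carrying data-plane traffic.
net = neutron.list_networks(name='dataplane-net')['networks'][0]

# vnic_type=direct asks for a VF passed straight through to the guest,
# bypassing the hypervisor's virtual switch.
port = neutron.create_port({'port': {'network_id': net['id'],
                                     'binding:vnic_type': 'direct'}})

nova.servers.create(name='packet-processor',
                    image=nova.images.find(name='vnf-image'),
                    flavor=nova.flavors.find(name='m1.large'),
                    nics=[{'port-id': port['port']['id']}])
```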

The deterministic requirement is about predictable performance – how do you avoid the vagaries of the hypervisor and host OS scheduler affecting these performance-sensitive applications? You’ll see work around allowing operators to define flavors, and application owners to define image properties, which between them control things like vCPU topology, vCPU to pCPU pinning, the placement of applications in relation to NUMA nodes and making huge pages available to the applications. Compare this to Amazon’s memory-optimized and compute-optimized instance types, and imagine it being taken a step further.
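
For illustration, here is a minimal sketch, assuming the hw:* flavor extra spec keys being worked on for this (the flavor name and sizes are invented), of how an operator might define a dedicated packet-processing flavor:

```python
# Hypothetical sketch: a flavor whose extra specs request dedicated pCPUs,
# a single guest NUMA node and huge-page-backed memory.
from novaclient import client

nova = client.Client('2', 'user', 'password', 'project',
                     'http://keystone.example.com:5000/v2.0')

flavor = nova.flavors.create(name='nfv.pinned', ram=8192, vcpus=4, disk=40)

# Extra specs are matched by the scheduler and acted on by the virt driver.
flavor.set_keys({
    'hw:cpu_policy': 'dedicated',   # pin each vCPU to its own physical CPU
    'hw:numa_nodes': '1',           # keep vCPUs and RAM on one NUMA node
    'hw:mem_page_size': 'large',    # back guest memory with huge pages
})
```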

Oh, and another requirement you’ll see come up in this space a lot is … IPv6 everywhere! I’m certainly down with that.

Want to learn more about the work involved? See the OpenStack NFV team’s amazing wiki page which goes into excruciating detail.

The more you dig into the specifics of what we’re talking about here, and start breaking it down into tangible concepts without all the acronyms and buzzwords, the more you realize that this really is the telco world embracing everything that OpenStack is all about – just pushing the envelope a bit with some requirements which are a pretty natural evolution for us, but which we might not otherwise have expected to come about for some time yet.

I guess the summary here is that if you’re skeptical, that’s cool … you’re not alone. But please do take the time to see through the complexity and confusion to the simple fact that we’re poised to be a key part in transforming the telco data-center, and that this is just another exciting part of our goal “to produce the ubiquitous Open Source Cloud Computing platform”.

An Ideal OpenStack Developer

(This is a prose version of a talk I gave at OpenStack meetups in Israel and London recently. Apologies for the wordiness.)

In a recent update Jonathan gave to the Board of Directors, we described how OpenStack has had 2,130 contributors to date and 466 of those are active on a monthly basis. That’s an incredible statistic. There’s no doubt OpenStack has managed to attract an unusual number of contributors and, for such a complex project, made it relatively easy for them to contribute.

However, this isn’t just a numbers game. I often hear mutterings that a much smaller, focused group could achieve the same velocity that OpenStack is achieving. In some sense that’s true, but I think that the diversity of interests and priorities is the energy that a community like OpenStack thrives on.

The question then is how to improve the overall quality of our large number of contributors. In order to do that, we need to be able to set expectations. What do we expect and value from our contributors?

What I’m going to attempt to do here is define The Prototypical OpenStack Developer. The ideal that we should aspire to. The standard that all contributors should be held to.

(But … bear with me here. I’m being a little tongue-in-cheek.)

Ok. Where do we start? How do we begin to forge this hero from the raw resources we are presented with?

Let’s start with the basics. The breadth and depth of knowledge you need on a variety of computing topics.

On virtualization, you could start with KVM. You should know about CPU extensions such as Intel’s VT-x and I/O virtualization with VT-d and PCI SR-IOV. Some knowledge of the history of software based virtualization and paravirtualization would be nice context too. Now understand the responsibilities of the KVM kernel module versus the userspace component, qemu. How does qemu emulate various devices? How does live migration work? How does a hypervisor use page table flags to track dirty pages during a migration?

And there’s probably little point in understanding all of this without understanding the x86 architecture in some detail. Understanding how it compares to RISC architectures would be no harm. Memory segmentation, MMUs and page tables are all fun topics. You really can’t get into this without learning a bit of assembly, at least the basic idea. The history of x86, from real/protected mode to modern-day PAE and x86-64, is all important to understand. Ignore Itanium, though. It’s not enough to just understand the CPU, though; you need to go beyond and think about how that CPU interacts with peripherals using DMA and buses like PCI.

And, honestly, if you go this far you may as well understand basic digital systems theory, like how you can construct a counter or register from a set of basic logic gates …

Woah, I think I’ve digressed a little. That’s virtualization. Do the same for storage and networking. I’ll leave that as an exercise for the reader.

That’s just the concept behind the basic resources managed by OpenStack, though. It’s a pretty complicated distributed system, so it’s pretty essential you do some reading on that topic. What do terms like “quorum” and “consensus” mean? Why do people describe the Paxos algorithm as “quicksort of distributed systems”? What do people mean when they describe OpenStack as a “shared nothing” architecture, and are they crazy? How would you describe OpenStack’s approach to fault tolerance?

And obviously related to all of this is the need for deep knowledge of databases and messaging systems. We seem to have a large number of ex-MySQL consultants on this project, but don’t let that be an excuse. You know what foreign keys and cross-table joins are, right? And you really need to know the kind of operations which will simply lock individual rows rather than an entire table. For messaging, there’s a little research you can do there. We’re all about AMQP in OpenStack, but there have been a few other messaging protocols in the past. My personal favorite is CORBA. What’s the difference between a broker, router and peer-to-peer based architecture? What are these “fanout” and “topic” things we talk about in messaging? Incidentally, you know that we’re not actually using the standard AMQP protocol in OpenStack, right?

You needn’t have touched a line of code at this point. But, if you’re going to contribute to OpenStack, you need to code, right? Almost certainly in Python, but we like ourselves a little Bash too. With Python, it’s important to understand not just the syntax but everything from the most basic topics to more advanced ones like iterators, decorators, context managers and metaclasses. You also need to have a good knowledge of the huge number of Python libraries out there, inside and outside the core Python distribution. We need true Pythonistas. Oh, and we’re in the process of porting to Python 3, so make sure you understand the differences between Python 2 and 3.

But wait, wait. That’s no good. You can’t just dive straight into Python. You need to start with C. Allocate and free your own memory, damnit. You can’t go through life without learning about pointers. Now learn how to use threads and the various synchronization primitives out there, and why it’s all just terrible. Now learn about asynchronous I/O techniques; what an event loop is, how you use select() and non-blocking sockets to write a single-threaded server which processes requests from multiple clients. Oh, Richard Stevens. My hero. Don’t be afraid to read a few RFCs.

Speaking of authors, we forgot algorithms. Yes, those. Just carefully study all three volumes of Knuth.

Now, before returning to Python, perhaps you should implement a REST API in Java using JAX-RS and a web UI using Ruby on Rails. Hang out with the cool kids and port your UI to Sinatra, before realizing that’s not cool anymore and switching to Node.js.

You might be ready to contribute some code to OpenStack at this point. But, I hate to think of anyone writing software without having a full appreciation of the user experience design we’re driving towards. We don’t want the inmates running the asylum, do we? Which “personas” are we designing for? “As a web developer, I want to launch a virtual machine to test my code in.”

Wait, we forgot tools. You can’t get anything done without knowing your tools. You’re going to do all of your work on Linux, whether that be in VMs or by running Linux on your main machine. If you’re a serious person, you need to learn emacs. You’re going to become very close friends with grep and sed, so learn yourself regular expressions. Lazy and greedy regexes, both. You know how to do an HTTP POST with curl, right?

Ah, git! Oh, the glorious git! You can never learn too much about git. It’s the gift that keeps on giving. If you think I’m joking, spend some time getting to know interactive rebasing. Reordering, editing, squashing and splitting commits! Re-writable history! Where have you been all my life? No git detail is too obscure to be worth learning. Learn how a tilde is different from a caret in revision parameters. How you can delete branches by leaving out the first part of a refspec in a git-push. Force override, exciting! Is your mind blown yet? No? Find out how git’s reflog is a history of history!

(Give me a second to calm down, here)

Now, you’ve got to realize something. Based on everything you’ve learned so far, you could probably write OpenStack on your own. But that’s not what’s going on here. You’re collaborating. You’re following a process. How we collaborate and why we follow certain processes is a more complex, involved and undocumented topic than anything you’ve learned so far.

To really understand how we get stuff done in OpenStack, you need to be steeped in open source culture. Understand what we mean when we say things like “rough consensus and running code” or “do-ocracy”.

Perhaps start by following the linux-kernel mailing list for a few months, watching how controversial discussions are worked through and the subtleties that determine who holds the balance of power and influence. Don’t worry if you’re shocked and appalled by how unfriendly it all seems, you’re not the first. If that’s your one take-away from the kernel, that was time well spent. Now seek out friendlier communities and understand how they get stuff done. Compare them to OpenStack and ask yourself questions like “how does our reliance on voting to make decisions compare to other communities?” or “why do there seem to be fewer flamewars in OpenStack than elsewhere?”.

The history of open source is important, will inform how you engage with OpenStack and that, in turn, will influence how OpenStack evolves. Learn about the “free software” versus “open source” camps, and how those philosophies relate to the choice of copyleft licenses like the GPL versus permissive licenses like Apache, MIT or BSD. Are you in this for the freedom of users of your code, or are you in it to build collaborative software development communities? That contributor agreement you were asked to sign before you contributed to OpenStack – how do you feel about that?

Think about the different governance models that open-source communities adopt. Learn about benevolent dictators, project management committees, “commit bit”, consensus based decision making and the pros and cons of our representative democracy model.

Learn about the release processes various projects use. Time based versus feature based. Rapid release cycles with merge windows. Planning periods, feature freezes, release candidates, stable branches. How do different distros do this when there are so many maintainers and packages involved? We use Python a lot; how does that community coordinate its release cycles?

That’s all very well, but it’s important not to be blind to the world outside open source. Understand how extreme programming and agile software development evolved. Read the Agile Manifesto. Understand how this all relates to Continuous Integration, Continuous Delivery and DevOps. We’re operating in a much different context, but is code review our variant of XP’s pair programming? Is our gated master superior to traditional post-commit CI?

You can now consider yourself educated to a basic level. But is that enough to be an effective contributor? Do you now have everything you need to make an impact? No, far from it. The hardest part is learning to be a good human. You need to have superb communication skills, in English of course, mostly written communication skills for mailing list, gerrit and IRC discussions. We do meet twice a year at design summits, so you need to be able to present and defend your ideas in person too. You need to work on that Irish mumble of yours.

More than that, though, you need to understand people. You need to know when to be empathetic, when to be pragmatic and when to be dogmatic. When is someone’s -1 on your patch likely to be an intractable veto and when is it simply a take-it-or-leave-it suggestion? What fights are worth fighting? How can you build up kudos points by assisting your fellow contributors, and when is the right time to call in some favours and spend those kudos points?

Ok, we’re ready to go! How do we put all of this into practice?

Probably the best way to start contributing to the project is by doing code reviews. You should probably be spending at least a couple of hours on code review every day. Not just because the number of code reviewers on a project has the greatest influence on its velocity, but also because it’s the best way to start building trust with your fellow contributors. If you can show yourself as thoughtful, committed and diligent through your code reviews, then other code reviewers will be much more inclined to prioritize your patches and less inclined to scrutinize your work quite so closely.

A good code reviewer manages to simultaneously focus on the little details while also considering the big picture. Try not to just leave +1 on patches, but instead a little commentary that shows the kind of things you’ve taken into consideration. Why should anyone trust that your +1 was the result of 2 hours of careful analysis, research and testing rather than just 2 minutes of coding style checking?

Also, think about who you are building up trust with. As a new code reviewer it’s probably more fruitful to provide helpful input on some meaty patches from some of the lead developers on the project. Then again, patch triage can be hugely helpful too – catch obvious problems in patches before the core reviewers ever get to the patch. Don’t forget to mentor new contributors as a code reviewer, though. Code review is the face of the project to these contributors and it’s your opportunity to show how you can lead by example.

Now, you obviously want to contribute code. Find some gnarly bug to fix, perhaps some race condition only rarely seen during automated tests. With all the code reviewing you’ve been doing, you’ve acquired excellent taste in coding and your work will no doubt live up to those standards. Don’t forget to write a detailed, helpful commit message and include a unit test which would catch any regression of the issue. If this is a more substantial change, you must split your change into smaller chunks where each patch represents a logical step in your progression towards the final result.

If you’re making a substantial addition like a new feature or a re-architecture, you need to document your design in some detail in a blueprint. Make sure someone reading the spec can quickly understand the problem you’re trying to solve, why it’s important and the general idea behind your solution. Then make sure there’s enough background information included that a reviewer’s work is made easy. Include the use cases, any relevant history, related discussions or bugs, alternative approaches considered and rejected and any security, upgrade, performance or deployer impact. Describe how your work will be tested and what documentation changes will be required.

While we’re on the subject of blueprints, don’t forget that these too need reviewers. Most projects now review the specs associated with blueprints using gerrit and so this is a way for you to demonstrate your design skills and catch things which no-one else has yet considered.

Back to code, though. Yes, it’s important to contribute to the various integrated service projects like Nova, Neutron, Swift and whatnot. However, there are a bunch of other areas where code contributions are always needed. For a start, the client projects are always forgotten. Then there’s the cross-project technical debt that the Oslo program is hard at work cleaning up. We’re also gradually porting all of OpenStack to Python 3, and this is going to be a multi year effort requiring the help of many.

We also place a huge emphasis on automated testing in OpenStack, and the awesome CI system we have doesn’t come from nowhere. You should always be ready to jump in and contribute to the infrastructure itself, tools like devstack-gate, zuul, nodepool or elastic-recheck. And, last but not least, our functional test suite, Tempest, is always desperately in need of more contributions to increase our test coverage.

Security is critical in a public-facing service like OpenStack, and there are several ways you should contribute in this area. Firstly, there is a small vulnerability management team which collaborates with each project’s -coresec team to handle privately reported security bugs, ensuring a fix is prepared for each supported branch before a coordinated, responsible disclosure of the issue first to vendors and then the wider world. Important work is this. There’s also a security group which is trying to bring together the efforts of interested parties to prepare official notices on security issues that aren’t actual vulnerabilities, develop a threat analysis process for OpenStack and maintain the OpenStack Security Guide. They need your help! Most importantly, though, you need to be security conscious as you write and review code. There’s a good chance you’ll find and report an existing vulnerability during the course of your work if you keep your eyes open!

And then there’s docs, always the poor forgotten child of any open source project. Yet OpenStack has some relatively awesome docs and a great team developing them. They can never hope to cope with the workload themselves, though, so they need you to pitch in and help perfect those docs in your area of expertise.

I mentioned bugs. We must not forget the bugs! Bugs are one way users can provide valuable contributions to the project, and we must ensure these contributions are valued so that users will continue to file bugs. With over 700 configuration options in Nova alone, the project can’t possibly test all possible combinations by itself so we rely on our users to test their own use cases and report any issues as bugs. You should help out here by setting aside some time every day to triage new bugs, making sure enough information has been provided and the bug has been appropriately tagged, categorized and prioritized.

Along those same lines, users often struggle with issues which aren’t obviously or necessarily bugs. You should also pay attention to forums like ask.openstack.org or the openstack-operators mailing list. Any outreach you can do to help users be successful with OpenStack will pay massive dividends in the long run, even if just in terms of your understanding of which issues are most important to real users. This outreach should extend to attending OpenStack meetups, giving presentations on your work and listening to what users have to say.

Speaking of mailing lists, we have a hugely active openstack-dev mailing list, with over 2500 emails in April alone. This is the center of all activity happening in OpenStack at any time. You really must track what’s happening there and engage where you can help move things forward positively. It’s a struggle to keep up, but it really isn’t an option.

However, one of the side effects of openstack-dev being overloaded is that many important conversations now happen on IRC. You can’t expect to be around for all of those, so make sure to remain connected and log all channels so you can catch up later.

Because conversations can be spread around multiple places, it can be helpful to link all of these conversations with little breadcrumbs. A mailing list thread might reference a gerrit review, which might reference a log of an IRC conversation, which might reference a blog post, which might reference a bug, which might reference a previous commit message which referenced a previous mailing list thread.

Don’t be fooled into thinking IRC is all about the serious stuff, though. It’s also a place where you can get to know your fellow contributors on a personal level and build up yet more of that all important trust. You will make friends working on OpenStack and some of those friendships will last longer than your involvement in OpenStack itself. That’s a hugely positive sign in any community. Beware of forming cliques, however. We need this community to be open to the most diverse set of contributors, and not all of those will buy into US-centric young white male geek humour, for example.

Speaking of cliques, it’s popular to accuse OpenStack developers of being so self-absorbed that the needs of real operators and users are ignored. That OpenStack developers aren’t held responsible for the real world consequences of the decisions they make. “You write code differently when you carry a pager”. Lorin Hochstein proposed an “Adopt a Dev” program where operators could invite individual developers to shadow them for a few days and share their experience in the form of a summary, bug reports and blueprints. Basically, you should take any opportunity you can to get your hands dirty and help operate a production OpenStack service.

Related to the needs of operators are the deployment, configuration and operational tools out there, which desperately need contributions from people more familiar with the dirty details of how the software works. Many developers use devstack to deploy their development clouds, but there’s huge benefit in occasionally deploying something more production-like and contributing to whatever tool you used. TripleO is a great deployment effort to contribute to because it’s attempting to create a space where everyone interested in deployment can collaborate, but also because it closely tracks the development branch of OpenStack.

Once you have succeeded at making an impact as an individual contributor, you should look to extend your leadership efforts beyond simply leading by example. Naturally, you’ll tend towards volunteering for the responsibility of the PTL position on whichever program you contribute most to. To demonstrate your willingness and trustworthiness for the position, perhaps you’ll suggest the PTL delegate some of their responsibilities to you.

Your leadership interests should extend beyond a single project too. In some ways, the kind of cross-project issues considered by the Technical Committee are as important as the per-project responsibilities of PTLs. Do you have strong opinions on how, why and when we should add new programs or Integrated projects? If not, why not?

The governance of OpenStack and the shared responsibility for the future direction of OpenStack extends beyond the TC’s and PTLs’ governance of the project itself, to the role of the Foundation Board of Directors in protecting, empowering and promoting the project as well as ensuring there’s a healthy commercial and non-commercial ecosystem around the project. Do you care how the TC and board divide their responsibilities? Or how much explicit corporate influence is appropriate in the technical decision making of the project? Or how the board makes legal decisions which directly impact the project? Or how individual members elect their representatives on the board? You should.

Wait, wait, I’m forgetting a bunch of stuff. You should care deeply about bringing new contributors on board and participate in the awesome OPW and GSoC programs. It’s important to keep track of how the project is perceived, so you should read any articles published about the project and follow even our worst detractors on twitter. Watch carefully how our major competitors like AWS and GCE are evolving. Make sure to keep up with relevant new developments like NFV or Docker. Keep an eye on new projects on Stackforge to track how they develop.

Huh, wait. You’re probably employed to work full time on the project, right? Well, you really need to learn how to wear upstream and downstream “hats”. You need to understand how you can help your employer be successful with their objectives around the project. You need to be able to reconcile any apparent conflicts between your employer’s needs and the best interests of the project. This is not a zero sum game. Meet with your employer’s customers and partners, help deliver whatever OpenStack product or service your employer is providing, mentor colleagues on how to successfully engage with the project and be the bridge across the gap between upstream and downstream.

Above all, through all of this, be nice to everyone you encounter and wear a smile.

BZZZT … BURNOUT ALERT

I’m obviously being facetious, right? There’s no way anyone can possibly live up to those expectations and live to tell the tale?

It’s pretty obvious when you put it all together like this that these are unreasonable expectations. The hero of this tale does not exist. Many of us have tried to be this person, but it’s just not possible. Read into this, if you like, a very personal tale of burnout caused by unreasonable self-imposed expectations.

But really, what I want to get across today is that you don’t need to be this hero in order to contribute. Far from five hundred being too many active monthly contributors, it’s just the tip of the iceberg. Why shouldn’t every attendee of every OpenStack meetup be able to contribute in some small way?

When mentoring new Red Hat engineers, my basic advice is always “find your niche”. Find something that takes your interest, where you can see an obvious path towards making a significant impact, and go deep! Ignore pretty much everything else and do your thing. Maybe after a while you’ll find the ball rolling of its own accord and see other areas where you can now make an equally big impact. Or perhaps you’ll stick with this niche and continue to make an impact doing it over the longer term.

One of my favorite examples of a less likely niche is bug triage. Back in the summer of 2001, when I started seriously contributing to the GNOME project and became a maintainer of its CORBA ORB, ORBit, another new contributor to the project called Luis Villa posted this email:

Hey, everybody. By way of introduction: I’m the new bugmaster at Ximian. As some of you may have noticed, I’m slowly moving towards cleaning out evo and RC bugs from bugzilla.gnome and into bugzilla.ximian.

Luis went on to breathe new life into GNOME’s “bugsquad”, helped put in place a highly effective bug triage process and taught the GNOME community how to truly value and celebrate the contributions of both bug reporters and bug triagers. If you wanted to find fame and fortune in the open source world, how many people would pick bug triage as the place to start? Well, Luis did, and made a huge impact, before moving on to engineering management and then giving it all up to go to law school. He is now Assistant General Counsel for the Wikimedia Foundation.

There’s a real “find your niche” lesson in that story, but also a lesson that we as a community need to learn to truly value and celebrate all of the myriad of different ways that contributors can help the project. Rather than judge others based on how they’re not contributing, rather than feel exasperated when so few others share your passion for a particular niche no matter how important it seems to you personally, we as a community need to acquire a greater level of empathy for our fellow contributors.

We also need to experiment with ways of running the project so that different roles and niches are appropriately recognized. Does the focus we put on PTLs detract from the valuable project management contributions others make? Are official programs the only way of recognizing the importance of particular areas? If programs are the only way, do we need to be more open to creating programs wherever a group of people have coalesced around some particular effort? Do we need to explicitly raise the profiles of those contributors doing hard behind-the-scenes work in areas that we don’t typically recognize? Or are we building a culture that places too much emphasis on recognition, and should we instead roll back some of the ways we recognize people now?

Lots of questions, few answers. But hopefully this can get the conversation started.

May 11 OpenStack Foundation Board Meeting

The OpenStack Foundation Board of Directors met in person in advance of the OpenStack Summit in Atlanta. This is my informal recollection of the meeting. It’s not an official record, etc.

Unlike previous meetings held in advance of summits, this meeting only ran from 09:00 to 14:30, at which time we switched venue for the first ever joint board of directors and technical committee meeting.

I’m about to head off on vacation for a week, so I figured I’d do my best to briefly cover some of the topics covered during the meeting.

Jonathan’s Update

After the usual preliminaries, we began the meeting with Jonathan Bryce (in his role as Executive Director) giving the board an update from the Foundation’s perspective.

One of the more interesting slides in Jonathan’s updates is always the latest statistics showing community and ecosystem growth. We now have over 355 companies supporting the foundation, over two thousand total contributors and almost five hundred active contributors every month. Over 17,000 commits were merged during the Icehouse release cycle, an increase of 25% from Havana. This level of growth is just phenomenal.

Jonathan also talked about the growth in visitors to the openstack.org website and made some interesting observations about the geographical spread of the visitors. The top 4 countries seen in the stats are the U.S., India, China and France. That France figures so highly in the stats is a good sign in advance of the summit in Paris in November.

Next, Jonathan moved on to talk about the week ahead in Atlanta. Once again, we’re seeing a huge increase in the level of interest in the event, with over 4,500 attendees compared to the roughly 3,000 attendees in Hong Kong. Running an event of this size is a massive undertaking and Jonathan mentioned one crazy statistic – the foundation had over 23,000 pieces printed for the event and had to spread those orders over three printing companies in order to be able to do it.

A big emphasis for the week was an increased focus on users and operators. And, interestingly, there were roughly 800 developers and 700 operators signed up for the event. All were agreed that it’s a very healthy sign to see so many operators attend.

One comment from Jonathan triggered some debate – that the event was turning into a broader cloud industry event rather than strictly limited to just OpenStack. Some board members raised a concern that the event shouldn’t become completely generic and the focus should always be on OpenStack and its ecosystem. Jonathan clarified that this is the intent.

Jonathan also talked about the geographical diversity of attendees at the summit. People were coming from over 55 countries, but 81% of attendees were from the U.S. In contrast, in Hong Kong, the percentage of US attendees was more like 40%, and Jonathan felt that this showed the importance of regularly holding summits outside of the U.S.

Finances

Jonathan also walked the board through an update on the foundation’s financial position. Operating income was 3% above predictions and expenses were down 7%. This has left the foundation with $7.8M in the bank, as part of Jonathan’s goal to build up a substantial war chest to ensure the foundation’s stability even in the event of unforeseeable events.

The summit in Atlanta was predicted to make a loss of $50k but was on track to make a profit. And yet, while it was predicted to be a $2.7M event, it was turning out to be a $4M event. The situation will be very different in Paris because of different cost structures and the event is expected to make a loss. While on the topic, some board members requested that the board be more closely involved in choosing the location of future summits. Jonathan was happy to facilitate that and expected to be able to give the board an update in July.

Jonathan next gave a detailed update on the foundation’s application for US federal tax exempt status. He explained that while we are a Delaware-incorporated, non-stock, non-profit foundation, we have not yet been granted 501(c)(6) status by the IRS. After the foundation provided the IRS with additional information in November, the IRS returned an initial denial in March and the foundation filed a protest in April.

The objections from the IRS boil down to their feeling (a) that the foundation is producing software and, as such, is “carrying on a normal line of business”, (b) that the foundation isn’t improving conditions for the entire industry and (c) that the foundation is performing services for its members. Jonathan explained why the foundation feels those objections aren’t warranted and that the OpenStack foundation is fundamentally no different from other similar 501(c)(6) organizations like the Linux Foundation. He explained that other similar organizations were going through similar difficulties and he feels it is incumbent on the foundation to continue to challenge this in order to avoid a precedent being set for other organizations in the future. Overall, Jonathan seemed confident about our position while also feeling that the outcome is hard to predict with complete certainty. This conversation continued for some time and, because of the interest, the board moved to establish a committee to track the issue consisting of the existing members of the finance committee along with Eileen, Todd and Sean.

Trademark Framework

Jonathan moved on to give an update on some changes the foundation has made around the trademark programs in place for commercial uses of the mark. The six logos previously used were causing too much confusion, so the foundation has merged these into “Powered By OpenStack” and “OpenStack Compatible” marks.

There followed some debate and clarifications were given, before some members expressed concern that the board had not been adequately consulted on the change. That objection seemed unwarranted to me given that Jonathan had briefed the board on the change in advance of implementing it.

Staying on the topic of trademark programs, Boris took the floor and gave an update on the DriverLog work his team has been working on. He requested that the board use the output of DriverLog to enforce quality standards for the use of the OpenStack Compatible mark in conjunction with Nova, Neutron and Cinder drivers. There was a rather heated debate on the implications of this, particularly around whether drivers would be required to be open-source and/or merged in trunk.

Several board members objected to the fact that this proposal wasn’t on the agenda and the board hadn’t been provided with supporting materials in advance of the meeting. Boris committed to providing said material to the board before revisiting the issue.

DefCore

Next up, Rob and Josh gave an update on the progress of their DefCore initiative. Rather than attempt to repeat the background here, it’s probably best to read Rob’s own words.

Once the background was covered, the board spent some time considering the capabilities scoring matrix where each capability (concretely, capabilities are groups of Tempest tests) is scored against 12 selection criteria. This allows the capabilities to be ranked so that the board can make an objective judgment on which capabilities should be considered “must have”. There appeared to be generally good consensus around the approach, but a suggestion was made to consider more graduated scoring of the criteria (e.g. 1-5 rather than 0 or 1).

The conversation moved on to the subject of “designated sections”. During the conversation, the example of Swift was used and Josh felt the technical committee’s feedback indicated that either Swift in its entirety should be a designated section or none of it should be. Josh also felt that the technical community (either the TC or PTLs) should be responsible for such decisions, but I felt that while the TC can provide input, trademark policy decisions must ultimately be made by the board lest we taint the technical community’s technical decision making by requiring significant political and business implications to be considered.

One element of clarity that emerged from the discussion was the simple point that “must have” tests were intended to drive interoperability while designated sections were intended to help build our community by requiring vendors to ship/deploy certain parts of the codebase and, by implication, contribute to those parts of the codebase.

As time ran short, the board voted to approve the selection criteria used by the DefCore committee. A straw-poll was also held to get a feel for whether board members saw the need for an “OpenStack compatible” mark in addition to the “OpenStack powered” mark. All but three of the board members (Monty, Todd and Josh) indicated their support for an additional “OpenStack compatible” mark.

Win The Enterprise

Briefly, Imad introduced the “Win the Enterprise” initiative he and his team were kicking off with a session during the summit. The goal is to drive adoption of OpenStack in the enterprise by analyzing the technical and business gaps that may be hindering such adoption and coming up with an action plan to address them.

Feedback from board members was quite positive, with the discussion centered around how the group would measure their success and how they would ensure they operated in the most open and transparent way possible.

There was also some discussion about the need for more product management input to the project along with an additional focused effort on end-users of OpenStack clouds.

Wrapping Up

After the meeting drew to a close, board members joined members of the technical committee for a joint meeting. I’m hoping one of the awesome individuals on the technical committee will write a summary of that meeting!

This was a hugely draining week for many of us at Red Hat. As I prepare to completely switch off for a week, allow me to pass on this sage advice from Robyn Bergeron:

Keep Calm and Ride The Drama Llama

Heartbleed

Watching #heartbleed (aka CVE-2014-0160) fly by in my twitter stream this week, I keep wishing we could all just pause time for a couple of weeks and properly reflect on all the angles here.

Some of the things I’d love to have more time to dig into:

Mar 4 OpenStack Foundation Board Meeting

On March 4th, the OpenStack Foundation Board of Directors met for an all-day, in-person meeting at DLA Piper’s office in Palo Alto, California. This is my informal recollection of the meeting. It’s not an official record, etc.

Some 20 of the 24 board members managed to make the meeting in person with Todd and Tristan joining over the phone. Yujie Du and Chris Kemp were unable to attend.

As usual our teleconferencing capabilities were woefully inadequate for those hoping to contribute remotely. However, this time Rob and Lew joined a Google Hangout with video cameras trained on the meeting. One would hope that made it a little easier to engage with the meeting, but we didn’t really have any feedback on that.

Before we got started properly, Alan took some time to recommend “Startup Boards: Getting the Most Out of Your Board of Directors” to the directors (based, in turn, on Mark Radcliffe’s recommendation) as a book which could provide some useful background on the responsibilities of directors and what it takes to make a successful board.

[Update: Josh points out we also approved the minutes of the previous meeting]

Executive Director Update

Our first meaty topic was one of Jonathan’s regular updates.

Jonathan talked about the continued tremendous growth in interest around the project, with all the foundation staff’s key metrics (e.g. website, developers, twitter, youtube, etc.) at least doubling in 2013. One interesting aspect of this growth is that the China, India and Japan regions all grew their share of website traffic, and this led to a discussion around whether having the last summit in Hong Kong directly contributed to this shift.

Our community is growing too. We now have over 15,000 individual members of the foundation, over 2,000 contributors to the project over its lifetime and over 400 unique contributors to the project each month.

The mention of 15,000 individual members led to a somewhat lengthy discussion about the fact that individual membership may be terminated under the following clause of the bylaws:

failure to vote in at least 50% of the votes for Individual Members within the prior twenty-four months unless the person does not respond within thirty (30) days of notice of such termination that the person wishes to continue to be an Individual Member

Jonathan explained how shortly after the foundation was launched, over 6,000 people signed up as individual members. Only 1,500 or so of those initial members have since voted in elections, so we could potentially be looking at removing somewhere in the region of 6,000 members in 2014. This reduced membership will facilitate bylaws changes by making it easier (or even possible) to reach the quorum necessary under clause 9.2(a):

requires an affirmative vote of a majority of the Individual Members voting as provided in Article III, but only if at least 25% of the Individual Members vote

Some discussion points around this included whether a future bylaws change should reduce the quorum requirement to something like 10%, that terminated members can re-register but would then have to wait 180 days in order to be eligible for voting and that project contributors need to be foundation members but some contributors may not be in the habit of voting and may have their membership terminated.

Jonathan moved on with his slide deck and briefly mentioned some of the foundation’s new supporters like Oracle and Parallels. He also talked about how OpenStack is increasingly fulfilling its role as a platform and is being put to work for many diverse use cases. He also included a slide with many positive media and analyst quotes about the project, like “Industry support has coalesced around OpenStack”.

Jonathan then moved on to the foundation’s budget, describing it as an $8M budget which turned out to be $11M. Income was up, but expenses were kept in line such that $2.5M could be put in the bank. He expressed particular pride that 18 months ago the foundation was just starting out with no money and had already built up a significant buffer which would allow us all to feel confident about the foundation’s future, even in more turbulent or unpredictable times.

Finally, Jonathan reviewed the foundation staff’s priorities for 2014:

  1. Improve the software – whether that be continued investment by the foundation in the software development process or organizing activities which bring user feedback into the project
  2. Improve interoperability between OpenStack-powered products and services
  3. Grow the service provider global footprint, with a specific mention for interest from telco operators at Mobile World Congress around OpenStack and NFV

DefCore Update

Next up, Rob and Josh provided an update on the progress of the DefCore committee, requesting a checkpoint from the board as to whether there was consensus that the current approach should continue to be pursued.

Rob started by reviewing the purpose of DefCore and the approach taken to date. He explained that the committee is mandated to look at ways of governing commercial use of the OpenStack trademark and that some issues are deliberately being punted on for now, e.g. an API interoperability trademark and changes to the bylaws.

Josh took over and reviewed the currently agreed-upon criteria that will be used when evaluating whether a given capability will be required in order to use the trademark, e.g.

  1. Stable – required to be stable for >2 releases
  2. Complete – should be parity in capability tested across extension implementations
  3. Discoverable – e.g. can be found in Keystone and via service introspection
  4. Widely Deployed – favor capabilities that are supported by multiple public cloud providers and private cloud products
  5. Tools – supported by common tools
  6. Clients – part of common libraries
  7. Foundation – required by other must-have capabilities
  8. TC Future Direction – reflects future technical direction
  9. Documented – expected behaviour well documented
  10. Legacy – previously considered must-have
  11. Cluster – tests are available for this capability?
  12. Atomic – unique capability that cannot be built out of other must-have capabilities
  13. Non-Admin – capability does not require administrative rights

Next, Rob, Josh and Troy walked the board through a draft spreadsheet evaluating potential capabilities against those criteria.

Much of the subsequent discussion revolved around various board members being very eager to get wider feedback on the ramifications of this process, particularly around identifying the most thorny and controversial results. Concerns were expressed that the process has been so involved and detailed that few people are well apprised of where this is headed and may find some of the results very surprising.

Rob & Josh felt that this spreadsheet approach means that we can solicit much more targeted and useful feedback from stakeholders. For example, if a project feels one of its capabilities should be must-have, or a cloud provider is surprised that a capability it doesn’t yet provide is seen to be must-have, then the discussion can be around that specific capability, whether the set of criteria and weighting used for evaluation are appropriate, and whether the capability has been correctly evaluated against those criteria.

Finally, Rob & Josh explained the proposed approach for collecting the test results which would be used for evaluating trademark use applications. The idea (known as TCUP, or “tea-cup”, for “test collect, upload and publish”) currently being developed in the refstack repo on stackforge would allow people to download a docker container image, add their cloud credentials and endpoint URL, and run the container, which would then execute the tests against the endpoint and upload the results.

[Update: Josh points out that data uploaded via TCUP will “be treated as confidential for the time being”]

Driver Testing

Next, Boris Renski took the floor to talk about Nova, Neutron and Cinder driver testing, particularly with a view to how it might relate to trademark usage. This relates to his blog post on the topic from some weeks ago.

There were two main observations about changes happening in the technical community – (1) that projects were demanding that vendor maintainers provide reliable third-party automated testing feedback in gerrit patch reviews and (2) that manually maintained, often out-of-date, “driver compatibility matrices” in the OpenStack wiki may soon be replaced by dashboards showing the results of these third-party automated testing systems.

Boris wished to leave that technical work aside (since it is not the board’s domain) and focus the discussion on whether the board would consider a new trademark program such that vendors whose drivers pass this automated testing would be allowed to use a trademark such as “Built for OpenStack”.

The debate quickly got heated and went off in several different directions.

Part of the discussion revolved around the automated testing requirements that projects were placing on driver maintainers, how that worked in practice, the ramifications of it, how deprecating drivers would work, whether a driver being in-tree implied a certain level of quality, etc. I felt the board was really off in the weeds on a topic that is under the authority of the individual projects and the TC. For example, it was easy to forget during the discussion that PTLs ultimately had the discretion to waive testing requirements for individual drivers.

Another surprising element to this was parallels being drawn with the overlap between the OpenStack Activity Board and Stackalytics. Some board members felt that Mirantis’s work on Stackalytics had deliberately duplicated and undermined the Activity Board effort, and that the same thing was happening here because this driver testing dashboard naturally (according to those board members) belongs under the DefCore/Refstack efforts. Boris acknowledged he “had his wrist slapped over Stackalytics” and was attempting to do the right thing here by getting advance buy-in. Others felt that the two efforts were either unrelated or that competing efforts can ultimately lead to a better end result.

Another thread of the discussion was that Boris’s use of, or allusion to, the special term “certification” automatically brought this topic into the area of the OpenStack brand, and that it was inappropriate to speculate that the foundation would embark on such a program before the board had discussed it.

In the end, the board directed that Jonathan should work with Boris and Rob on a plan to collect any automated test results out there and, secondly, work with the DefCore legal subcommittee to explore the possible use of the trademark in this context.

Operators’ Feedback

Tim Bell took the floor next to talk the board through feedback from OpenStack operators collected at the OpenStack Operators Mini Summit the previous day.

The etherpad linked above perhaps provides a better summary than I can give, but some of the highlights include:

  • The notion that operators should be able to provide feedback on blueprints as they are drafted, to help get operational insights to developers early on in the development process
  • Some observations on the stability (or lack thereof) of some of the core OpenStack components
  • The importance of a solid upgrade story was reiterated
  • Some great feedback on TripleO
  • The split between teams doing CI/CD and those consuming releases
  • How to encourage operators to file more bugs upstream
  • Lots, lots more

This feedback was well received by the board and triggered a bunch of discussions and questions.

As a final point, Josh raised some concerns about how the invitee list was drawn up and how he felt it would have been appropriate for vendors (like Piston) to recommend some of their customers to be invited. Tim felt this was an unfair criticism and that the user committee had worked hard to seed the limited seating event with a diverse set of invitees before opening it up to the public.

Emerging Use Cases – NFV

Finally, with limited time remaining, Toby Ford from AT&T briefed the board on Network Function Virtualization (NFV) as an emerging use case for OpenStack, one which is heavily tied to SDN (Software Defined Networking). He described how AT&T have set themselves a mission to:

Simplify, open up, and scale our Network to be more agile, elastic, secure, cost effective and fast by (a) moving from hardware centric to software centric, (b) separating the control plane and data plane and (c) making the network more programmable and open

Toby did a great job of walking through this complex area, leaving me with the understanding that there is a massive shift in the networking industry from hardware appliances to scale-out software appliances running on virtualized commodity hardware.

There appears to be consensus in the networking industry that OpenStack will be the management and orchestration platform for this new world order, but that there is a serious need for telcos and networking vendors to engage more closely with OpenStack in order to make this happen.

Wrap Up and Evening Event

Alan then wrapped up the meeting a little early after talking through the schedule for our next meetings: a conf call on April 3 and an in-person meeting on May 11.

The board then moved on to a local restaurant for dinner. Before and after dinner, I had some great conversations with Tim, Monty, Van and Troy. Funnily enough, because of the layout of the tables and the noise in the restaurant, it was only really possible to talk to the person sitting directly opposite you and so I found myself having an exclusive 2 hour dinner date with Boris! At one point, after Boris knocked a glass of wine over me, I joked that I should tweet “Red Hat and Mirantis tensions finally bubble over to physical violence”. But, in all honesty, these in-person, informal conversations around the board meetings are often far more effective at enabling shared understanding and real collaboration than the 20+ person meetings themselves. Such is the nature of the beast, I guess.

Naked Pings

Back in November 2009, ajax sent an email on IRC etiquette to Red Hat’s company-wide mailing list. I’ve had to refer several people to it over the years, so I asked ajax for permission to publish it. He agreed. Here it is in all its glory.

From: Adam Jackson
To: memo-list
Subject: On “ping” etiquitte
Date: Tue, 17 Nov 2009 12:21:30 -0500

IRC has developed a “ping” convention for getting someone’s attention. It works because most clients will highlight channels in which your name has been mentioned, so something like

ajax: ping

will make that channel show up pink instead of white for me [1].

I wish to correct, or at least amend, this behaviour. The naked ping should be Considered Harmful, for at least two reasons. The first is that it conveys no information. The recipient of your ping, like you, is a Busy Person. They may be in the middle of something requiring intricate thought, and should not be interrupted for anything less than fire, flood, or six figures of revenue. Worse, _you_ may forget why you pinged someone; when, four hours later, your victim gets back to IRC and responds to you, _you_ will be disrupted in turn trying to remember what was on your mind in the first place.

The second, more subtle reason proceeds from the first. A ping with no data is essentially a command. It’s passive-aggressive; it implies that the recipient’s time is less valuable than yours. [2] The pingee will respond in one (or both) of two ways. Either they will experience increased stress due to increased unpredictable demands on their time, or they will simply ignore naked pings.

The fundamental issue here is a misunderstanding of the medium. IRC is not a telephone. It’s volatile storage. The whole reason the ping works is because the client remembers seeing the ping, and can save it in your history buffer so you can see who was talking to you and why.

The naked ping removes this context.

Please. Save your time. Save my time. Make all of our lives more efficient and less stressful. Ping with data. At a minimum:

ajax: ping re bz 534027

See the difference? Now you’ve turned slow, lockstep, PIO-like interaction into smooth pipelineable DMA. It’s good for your hardware, and it’s good for you.

[1] – irssi 4 life.

[2] – Their time may well be less valuable than yours. That’s not the point.

– ajax

Jan 30 OpenStack Foundation Board Meeting

The OpenStack Foundation Board of Directors met for a two hour conference call last week. This was the first meeting of the board since the recent Individual and Gold member director elections.

As usual, this is my informal recollection of the meeting. It’s not an official record, etc.

Your trusty reporter had just arrived in Brussels for FOSDEM and got to stay in his hotel room for this meeting rather than sampling Belgium’s fine beers. Oh, the sacrifices we make! 😛

Preliminaries

As usual, calling the meeting to order was a challenge and it was at least 15 minutes after the scheduled start before we completed the roll call.

Next, Alan welcomed our new directors:

  • Yujie Du
  • Alex Freedland
  • Vish Ishaya
  • Imad Sousou

and thanked our outgoing directors:

  • Nick Barcet
  • Hui Cheng
  • Joseph George
  • Lauren Sell

as well as thanking those directors who served on the board for part of 2013:

  • Devin Carlen
  • Jim Curry
  • John Igoe
  • Kyle MacDonald
  • Jon Mittelhauser

Policies, Communication Channels and Meetings Schedule

Since it’s a new year, we took the opportunity to review the various policies which apply to board members.

Josh went over our transparency policy, mentioning that the board endeavours to be as transparent as possible, with board meetings open to the public, a summary of meetings posted to the foundation list, and directors encouraged to use the foundation mailing list for discussions. Sub-committees of the board are expected to be similarly transparent, with wiki pages and public mailing lists. Some caveats to this policy are that board members are not allowed to make public comments about a board meeting until after Jonathan has posted his summary (or 72 hours have passed), that members should not discuss executive sessions, and that the distribution of some non-public documents may have to be limited to directors.

Alan also mentioned our Code of Conduct and encouraged directors to read it carefully. Finally, Jeff from DLA Piper walked us through our antitrust policy, where he emphasised the importance of avoiding even the perception that board members are coming together to advance the interests of some companies over others. Members should restrict themselves to pro-competitive collaboration.

Next, we quickly reviewed the various channels for communication that directors need to be aware of – webex for conf calls, the foundation and foundation-board mailing lists, the #openstack-board and #openstack-foundation IRC channels, informal etherpads that we use during board meetings and the various committee mailing lists.

Finally, we discussed the upcoming board meetings – an all-day face-to-face meeting in Palo Alto on March 4, a 2 hour conference call on April 3, an all-day face-to-face meeting in Atlanta on May 11 in advance of the Atlanta summit and another face-to-face on July 21 at OSCON.

The subject of the timing of the Atlanta face-to-face was raised again. May 11 is also Mother’s Day (in the US and some other countries) which is a nasty conflict for many board members. However, a poll amongst board members had already established that no better time around the summit could be found, so we are proceeding with the meeting on May 11. The question was raised about whether future board meetings should be scheduled to not align with our summits, but the objection to this idea was that it puts too much of a time and budget strain on those members who have to travel a long distance for the meetings.

Status Reports From Committees

Finally, time to move on to some more meaty topics! A member of each committee of the board was asked to provide a status update and plans for the year ahead.

Alan first described the work of the compensation committee who are responsible for defining and evaluating the goals and performance of the Executive Director. In summary, the committee concluded that Jonathan met his 2013 goals and new goals have been set for 2014.

Next up, Sean Roberts talked about the finance committee. This committee works with the foundation staff on financial budgeting and accounting. Sean described the foundation’s IRS filing and noted that the foundation’s 2012 financial audit had been completed and deemed clean (with a note that the foundation is “operating on a cash basis”). The foundation’s application for 501(c)(6) status is progressing, with the IRS asking for some clarifications, which were returned to them in December. The committee meets monthly to review any discrepancies above 10%, but there have been no such issues so far. Essentially, everything is in excellent shape.

Tim Bell talked through the latest from the user committee. Tim mentioned the user survey that was published at the Hong Kong summit and how the committee has asked the TC for input on the kind of feedback that would be useful for developers. The committee is preparing to run another survey in advance of the Atlanta summit. Tim also mentioned that the user committee is running a couple of small, focused “operator mini-summits” over the next few months to bring operators together to share their feedback. Tim described the challenge of running the committee with a small number of core volunteer members (so as to ensure the privacy of survey results) while also encouraging volunteers to help with tasks like turning survey feedback into blueprints for new features.

Van Lindberg gave an update on the legal affairs committee. He emphasised that the committee is not counsel for the foundation or the board, but rather a group which makes recommendations to the board on IP policy. He recapped some of the patent policy recommendations from last year, for example that the foundation should join OIN. There was a brief mention of the fact that all the committee members are currently lawyers and that the by-laws limit the number of members to five. He also mentioned that the DefCore committee has a related sub-committee examining possible by-laws changes.

Todd described the elections committee, which was formed in February 2013 with 8 members and a goal of considering possible changes to the individual member election process. The committee is currently considering proposing a change to either Condorcet or STV and held a town-hall meeting in Hong Kong on the subject. Todd noted that the meeting was lightly attended and that there generally has been rather low participation in the process. The main hurdle to getting such a change passed is that a majority of at least 25% of our individual members would need to vote for the by-laws change, and the turnout for the previous election was only 17%. However, in July we will be able to begin making inactive members ineligible for voting, and this should help us achieve the required turnout.

Rob gave an update from the DefCore committee, which is considering changes to the requirements for commercial implementations of OpenStack that wish to use the OpenStack trademark. The committee is currently working to identify a set of must-pass tests and the functional capabilities which these correspond to. Rob mentioned that some projects currently have no or minimal test coverage and, as a result, their capabilities could not be considered for inclusion in the requirements. Rob also mentioned a “programs vs projects” issue which had been identified during the committee discussions and that a meeting with the TC would be required to resolve it. I proposed that Rob and Josh could join the TC’s IRC meeting to discuss the issue.

Finally, Simon gave a brief overview of the work of the gold member application committee. This committee helps prospective gold members prepare their application such that it fully anticipates all the questions and concerns the board may have about the application.

Wrapping Up

While there were a small number of other items on the agenda, we had run out of time at this point. In the short time available, we had covered a broad range of topics but hadn’t really covered new ground. This meeting was mostly about rebooting the board for 2014.

OpenStack, Meritocracy and Diversity

These days, any time I reach for the word “meritocracy” when I want to explain something about OpenStack’s technical community and its governance, I hesitate.

Clearly, in some circles, the concept of “meritocracy” has been seriously discredited and represents a system whereby elites perpetuate their power by tilting the rules in favour of themselves.

I’m not much of a political thinker, and my understanding of internal American politics is pretty limited (think watching The West Wing and vaguely following the spectacle of a presidential election), so the first time I really encountered the term was in the context of the GNOME project. From the GNOME Foundation Charter:

GNOME is a Meritocracy

A corporation, organization or individual should not be granted a place in the foundation unless its presence is justified by the merits of its contribution. Money cannot buy influence in the GNOME project: show us the code (or documentation, or translations, or leadership, or webmastering…).

and, subsequently, other projects like the ASF. From How The ASF Works:

When the group felt that the person had “earned” the merit to be part of the development community, they granted direct access to the code repository, thus increasing the group and increasing the ability of the group to develop the program, and to maintain and develop it more effectively.

We call this basic principle “meritocracy”: literally, government by merit.

What is interesting to note is that the process scaled very well without creating friction, because unlike in other situations where power is a scarce and conservative resource, in the apache group newcomers were seen as volunteers that wanted to help, rather than people that wanted to steal a position.

Being no conservative resource at stake (money, energy, time), the group was happy to have new people coming in and help, they were only filtering the people that they believed committed enough for the task and matched the human attitudes required to work well with others, especially in disagreement.

To me, the “power” we’re talking about here is the ability, permission or empowerment to get stuff done which advances the project. In some projects that means commit access, but ultimately it means building up the respect and trust of the other contributors to the project such that you can more easily influence and drive the direction of the project. You achieve that “power” by getting useful stuff done (defined broadly – code, documentation, translations, leadership, marketing, advocacy, etc.) and all it grants you is the ability to get more useful stuff done. In a healthy project, we want to give that power to more and more people rather than concentrating it in a small elite.

This is what we mean when we say “OpenStack is a technical meritocracy”. I hate to think of those well-meaning principles of project governance being sullied because “meritocracy” is used to explain away social inequities in U.S. politics. I also don’t like to think of us treating these principles as some sort of platonic ideal that doesn’t require us to constantly evaluate how we empower people to help advance OpenStack.

One hint that all is not perfect is the level of diversity within the project. Yes, we have diversity of opinions and a diversity of sponsoring organizations, but we don’t have an impressive level of diversity in gender, race or cultural geography.

My good friend from GNOME days, Daniel Veillard, asked this question of the Technical Committee in Hong Kong:

We are in China. There is no Asian on the podium. What can you do to actually try to improve the situation?

Yes, we have a meritocracy and anyone can advance to leadership positions within the project, but we need to recognize that there are extremely difficult language and cultural hurdles in front of many.

An example of these barriers is how we often conduct our Design Summit sessions. Quite regularly – especially when you get a large number of the more established contributors in the room together, folks who are good friends who understand each other well – the discussion can often devolve into a punchy flow of casual in-joke ridden sound-bites. I’m as much to blame for that as anyone, but sometimes I think back and shudder at how hard it must be for someone outside of the “in group” to join that discussion.

I’ve seen a number of examples where a new non-native English speaker has paired with an existing contributor to lead a design summit session about their work. What can work really well is that the existing contributor can help to engage the attendees, slow down the conversation and ensure the new contributor understands the feedback being given … without attempting to take credit for the work of the new contributor. This is just one technique we could use to empower new contributors.

Anyway, in summary – I think OpenStack’s “meritocracy” is a well-meaning model for empowering contributors (and celebrating their contributions) but we should all be on the lookout for ways that we can make a special effort to empower contributors from groups which are not already well represented in the leadership of the project.

The 2014 OpenStack Individual Member Director Elections and Red Hat

tl;dr – the affiliation limit means that at most one of the two Red Hat affiliated candidates can be elected. The cumulative voting system makes it likely that both of us running seriously damages both of our chances of being elected. A preferential voting system like Condorcet or STV would not have this problem.

At Red Hat, those of us who contribute to OpenStack take very seriously our responsibility to put what’s good for the project first and foremost in our minds – to wear our “upstream hat”, as we like to say. That’s especially true for me and Russell Bryant.

However, now that the candidate list for the 2014 OpenStack Individual Member Director Elections has been finalized, we find ourselves wrestling with the fact that Russell and I are both running as candidates. Two aspects of our election system make this a problem. First, the cumulative voting system means that those who would be happy to vote for either me or Russell are forced to choose between us – essentially, we are damaging each other’s chances of being elected. Secondly, the affiliation limit means that even if we were both lucky enough to receive enough votes to be elected, one of us would be eliminated by the limit.
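
To make the vote-splitting effect concrete, here is a toy illustration in Python. The numbers are invented purely for the example and have nothing to do with the actual electorate; the point is simply that two similar candidates can split a shared pool of supporters under a cumulative-style count, while a pairwise (Condorcet-style) comparison is unaffected.

    # Toy illustration only - invented numbers, not real election data.
    # 40 voters would be happy with either A or B (two candidates from the
    # same community) and split their support evenly; 35 voters back C.
    ballots = ["A"] * 20 + ["B"] * 20 + ["C"] * 35

    # Cumulative-style count: each voter concentrates their support on one candidate.
    tally = {}
    for choice in ballots:
        tally[choice] = tally.get(choice, 0) + 1
    print(tally)  # {'A': 20, 'B': 20, 'C': 35} -> C tops the poll

    # Pairwise (Condorcet-style) comparison: all 40 A/B supporters prefer
    # either of A or B over C, so A (or B) beats C head-to-head, 40 to 35.
    a_vs_c, c_vs_a = 40, 35
    print("A beats C head-to-head" if a_vs_c > c_vs_a else "C wins")

In this toy count, C tops the cumulative poll even though a larger bloc prefers either A or B over C, which is exactly the kind of splitting described above.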

The combination of these two issues means that we have to factor our affiliation into our decision. The rules place affiliation front and centre in the election system, even though Individual Member Directors are not elected to represent their employer.

Now, I’m personally guilty of not pushing this election system issue hard enough over this past year. At one point I favoured experimenting with a tweak to the cumulative system over a more dramatic change because I found the prospect of getting a majority of over 25% of our enormous electorate to vote in favour of a change so daunting. I want to be completely open about this decision we now face because I want to help raise awareness about how important an issue it is.

Given that Red Hat is a Platinum Member and has an automatic seat on the board, the options we’re weighing up are:

  1. Continue with both Russell and me on the ballot, accepting the risk that we’re damaging each other’s chances.
  2. I or Russell remove ourselves from the ballot, giving the other of us the best possible chance of being elected.
  3. Brian Stevens steps down from the board and I or Russell takes his place, giving whichever of us remains on the ballot the best possible chance of being elected.

It’s not an easy decision. We both feel we have something to offer on the board. Both of us would be very proud to be elected to represent the Individual Members. Both of us feel that Brian Stevens (our CTO who we greatly respect) is the best possible representative for Red Hat on the board.

We will make a decision on this before the election, but right now we don’t see any of the options as being particularly better than the others. But, at the very least, I hope everyone will find this useful as a concrete example of why our election system needs to change.