Of humans and feelings


It was a Wednesday morning. I had just connected to email, only to realise that something was wrong with the developer web site. People had been having issues accessing content, and they were upset. What started with “what’s wrong with Trac?” quickly escalated to “this is just one more symptom of how The Company doesn’t care about us community members”.

As I investigated the problem, I realised something horrible. It was all my fault.

I had made a settings change in the Trac instance the night before – attempting to impose some reason and structure on ACLs that had grown organically over time – and had accidentally revoked the access of a group containing a number of community members who did not work for The Company.

Oh, crap.

After the panic and cold sweats died down, I felt myself getting angry. These were people who knew me, who I had worked alongside for months, and yet the first reaction for at least a few of them was not to assume this was an honest mistake. It was to go straight to conspiracy theory. This was conscious, deliberate, and nefarious. We may not understand why it was done, but it’s obviously bad, and reflects the disdain of The Company.

Had I not done enough to earn people’s trust?

So I fixed the problem, and walked away. “Don’t respond in anger”, I told myself. I got a cup of coffee, talked about it with someone else, and came back 5 minutes later.

“Look at it from their side”, I said – before I started working with The Company, there had been a strained relationship with the community. Yes, they knew Dave Neary wouldn’t screw them over, but they had no way of knowing that it was Dave Neary’s mistake. I stopped taking it personally. There is deep-seated mistrust, and that takes time to heal, I said to myself.

Yet, how to respond on the mailing list thread? “We apologise for the oversight, blah blah blah” would be interpreted as “of course they fixed it, after they were caught”. But did I really want to put myself out there and admit I had made what was a pretty rookie mistake? Wouldn’t that undermine my credibility?

In the end, I bit the bullet. “I did some long-overdue maintenance on our Trac ACLs yesterday, they’re much cleaner and easier to maintain now that we’ve moved to more clearly defined roles. Unfortunately, I did not test the changes well enough before pushing them live, and I temporarily removed access from all non-The Company employees. It’s fixed now. I messed up, and I am sorry. I will be more careful in the future.” All first person – no hiding behind the corporate identity, no “we stand together”, no sugar-coating.

What happened next surprised me. The most vocal critic in the thread responded immediately to apologise, and to thank me for the transparency and honesty. Within half an hour, a number of people were praising me and The Company for our handling of the incident. The air went out of the outrage balloon, and a potential disaster became a growth opportunity – yes, the people running the community infrastructure are human too, and there is no conspiracy. The Man was not out to get us.

I no longer work for The Company, and the team has scattered to the winds. But I never forgot those cold sweats, that feeling of vulnerability, and the elation that followed the community reaction to a heartfelt mea culpa.

Part of the OSS Communities series – difficult conversations. Contribute your stories and tag them on Twitter with #osscommunities to be included.

3 things community managers can learn from the 50 state strategy


This is part of the opensource.com community blogging challenge: Maintaining Existing Community.

There are a lot of parallels between the world of politics and open source development. Open source community members can learn a lot from how political parties cultivate grass-roots support and local organizations, and empower those local organizations to keep people engaged. Between 2005 and 2009, Howard Dean was the chairman of the Democratic National Committee (DNC) in the United States, and instituted what was known as the “50 state strategy” to grow the Democratic grass roots. That strategy, and what happened after it was abandoned, can teach community managers three valuable lessons about keeping community contributors.

Growing grass roots movements takes effort

The 50 state strategy meant allocating scarce resources across parts of the country where there was little or no hope of electing a congressman, as well as spending some resources in areas where there was no credible opposition. Every state and electoral district had some support from the national organization. Dean himself travelled to every state, and identified and empowered young, enthusiastic activists to lead local organizations. This was a lot of work, and many senior Democrats did not agree with the strategy, arguing that it was more important to focus effort on the limited number of races where the resources could make the difference between winning and losing (swing seats).

Similarly, community managers have a limited number of hours in the day, and investing in outreach in areas where we do not already have a big community takes attention away from keeping our current users happy. But growing the community, and keeping community members engaged, means spending time in places where the short-term return on that investment is not clear. Identifying passionate community members and empowering them to create local user groups, staff a stand at a small local conference, or speak at a local meet-up helps keep them engaged and feeling like part of a greater community, and it also helps grow the community for the future.

Local groups mean you are part of the conversation

Because of the 50 state strategy, every political conversation in the USA had Democratic voices expressing their world-view. Every town hall meeting, local election, and teatime conversation had someone who could argue and defend the Democratic viewpoint on issues of local and national importance. This meant that people were aware of what the party stood for, even in regions where that was not a popular platform. It also meant that there was an opportunity to get a feel for how national platform messaging was being received on the ground. And local groups would take that national platform and “adjust” it for a local audience – emphasizing the things which were beneficial to the local community. Open source projects also benefit from having a local community presence, which raises awareness of your project among free software enthusiasts who hear about it at conferences and meet-ups. You also have an opportunity to improve your project, by getting feedback from users on the learning curve of adopting and using it. And you have a growing number of people who can help you understand which messaging resonates with people, and which arguments for adoption are damp squibs that never get traction, helping you promote your project more effectively.

Regular contact maintains engagement

After Howard Dean finished his term as head of the DNC in 2009, and Debbie Wasserman Schultz took over as chair, the 50 state strategy was abandoned in favour of a more strategic, focussed investment of effort in swing states. While many possible reasons can be put forward, it is undeniable that the local Democratic party structures which flourished under Dean have lost traction. The Democratic party has lost hundreds of state legislature seats, dozens of state senate seats, and a number of governorships in “red” states since 2009, in spite of winning the presidency in 2012. The Democrats have lost control of the House and the Senate nationally, in spite of winning the popular vote in the presidential elections of 2012 and 2016. For community managers, it is equally important to maintain contact with local user groups and community members, to ensure they feel empowered to act for the community, and to give them the resources they need to be successful. In the absence of regular contact, community members are less inclined to volunteer their time to promote the project and maintain a local community.

Summary

Growing local user groups and communities is a lot of work, but it can be very rewarding. Maintaining regular contact, empowering new community members to start a meet-up or a user group in their area, and creating resources for your local community members to speak about and promote your project is a great way to grow the community, and also to make life-long friends. Political organizations have a long history of organizing people to buy into a broader vision and support and promote it in their local communities.

What other lessons can community managers and organizers learn from political organizations?

 

Encouraging new community members


My friend and colleague Stormy Peters just launched a challenge to the community – to blog on a specific community-related topic before the end of the week. This week, the topic is “Encouraging new contributors”.

I have written about the topic of encouraging new contributors in the past, as have many others. So this week, I am kind of cheating, and collecting some of the “Greatest Hits”, articles I have written, or which others have written, which struck a chord on this topic.

Some of my own blog posts I have particular affection for on the topic are:

I also have a few go-to articles I return to often, for the clarity of their ideas, and for their general usefulness:

  • “Open Source Community, Simplified”, by Max Kanat-Alexander, does a great job of communicating the core values of communities which are successful at recruiting new contributors. I particularly like his mantra at the end: “be really, abnormally, really, really kind, and don’t be mean”. That about sums it up…
  • “Building Belonging”, by Jono Bacon: I love Jono’s ability to weave a narrative from personal stories, and the mental image of an 18 year old kid knocking on a stranger’s door and instantly feeling like he was with “his people” is great. This is a key concept of community for me – creating a sense of “us” in which newcomers feel like part of a greater whole. Communities which fail to create that sense of belonging leave their engaged users on the outside, with a wall between the “core developers” and everyone else. Communities which pull people in and get them to drink the kool-aid are the ones which grow.
  • I love all of “Producing Open Source Software”, but in the context of this topic, I particularly love the sentiment in the “Managing Participants” chapter: “Each interaction with a user is an opportunity to get a new participant. When a user takes the time to post to one of the project’s mailing lists, or to file a bug report, she has already tagged herself as having more potential for involvement than most users (from whom the project will never hear at all). Follow up on that potential.”

To close, one thing I think is particularly important when you are managing a team of professional developers who work together is to ensure that they understand they are part of a team that extends beyond their company walls. I have written about this before as the “water cooler” anti-pattern. To extend on what is written there, it is not enough to have a policy against internal-only discussions and decisions – creating a sense of community, with face-to-face time and quality engagement with community members outside the company walls, can help a team member really feel like part of a community, in addition to being a member of a development team in a company.

 

The Electoral College


Episode 4 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

A US presidential election year is a wondrous thing. There are few places around the world where the campaign for head of state begins in earnest 18 months before the winner will take office. We are now in the home straight, with the final Presidential debate behind us, and election day coming up in 3 weeks, on the Tuesday after the first Monday in November (this year, that’s November 8th). And as with every election cycle, much time will be spent explaining the electoral college. This great American institution is at the heart of how America elects its President. Every 4 years, there are calls to reform it, to move to a different system, and yet it persists. What is it, where did it come from, and why does it cause so much controversy?

In the US, people do not vote for the President directly in November. Instead, they vote for electors – people who represent the state in voting for the President. A state gets a number of electoral votes equal to its number of senators (2) and its number of US representatives (this varies based on population). Sparsely populated states like Alaska and Montana get 3 electoral votes, while California gets 55. In total, there are 538 electors, and a majority of 270 electoral votes is needed to secure the presidency. What happens if the candidates fail to get a majority of the electors is outside the scope of this blog post, and in these days of a two party system, it is very unlikely (although not impossible).
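The apportionment arithmetic can be sketched in a few lines of Python. This is purely illustrative: the House seat counts below are from the 2010 apportionment, and the 3 electoral votes for Washington DC (granted by the 23rd Amendment) are a detail not mentioned above.

```python
# Each state's electoral votes: one per US representative, plus two for its senators.
def electoral_votes(house_seats: int) -> int:
    return house_seats + 2

# House seat counts from the 2010 apportionment (illustrative subset).
house_seats = {"California": 53, "Texas": 36, "Montana": 1, "Alaska": 1}

for state, seats in house_seats.items():
    print(f"{state}: {electoral_votes(seats)} electoral votes")
# → California: 55, Texas: 38, Montana: 3, Alaska: 3

# Nationally: 435 representatives + 100 senators + 3 votes for Washington DC.
total_electors = 435 + 100 + 3             # 538 electors
majority_needed = total_electors // 2 + 1  # 270 votes to win outright
```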

State parties nominate elector lists before the election, and on election day, voters vote for the elector slate corresponding to their preferred candidate. Electoral votes can be awarded differently from state to state. In Nebraska, for example, there are 2 statewide electors for the winner of the statewide vote, and one elector for each congressional district, while in most states the elector slate is chosen on a winner-take-all basis. After the election, the votes are counted in the local county, and sent to the state’s secretary of state for certification.

Once the election results are certified (which can take up to a month), the electors meet in their states in mid December to record their votes for president and vice president. Most states (but not all!) have laws restricting who electors are allowed to vote for, making this a mostly ceremonial position. The votes are then sent to the US Senate and the national archivist for tabulation, and are cross-referenced before being sent to a joint session of Congress in early January. Congress counts the electoral votes and declares the winner of the presidency. Two weeks later, the new President takes office (those 2 weeks allow for the process where no-one gets a majority in the electoral college).

Because electoral votes are (mostly) awarded winner-take-all, a candidate can lose heavily in some states, win narrowly in others, and take the presidency without winning the popular vote (as George W. Bush did in 2000). In modern elections, the electoral college results in a huge difference in attention between “safe” states and “swing” states – the vast majority of campaigning is done in only a dozen or so states, while states like Texas and Massachusetts do not get as much attention.

Why did the founding fathers of the US come up with such a convoluted system? Why not have people vote for the President directly, and have the counts of the states tabulated directly, without the pomp and ceremony of the electoral college vote?

First, think back to 1787, when the US constitution was written. The founders of the state had an interesting set of principles and constraints they wanted to uphold:

  • Big states should not be able to dominate small states
  • Similarly, small states should not be able to dominate big states
  • No political parties existed (and the founding fathers hoped it would stay that way)
  • Added 2016-10-21: Different states wanted to give a vote to different groups of people (and states with slavery wanted slaves to count in the population)
  • In the interests of having presidents who represented all of the states, candidates should have support outside their own state – in an era where running a national campaign was impractical
  • There was a logistical issue of finding out what happened on election day and determining the winner

To satisfy these constraints, a system was chosen which ensured that small states had a proportionally bigger say (by giving an electoral vote for each Senator), while more populous states still had a bigger say overall (by getting an electoral vote for each congressman). In the first elections, electors voted for 2 candidates, of whom only one could be from their own state, meaning that winning candidates had to have support from outside their state. The President was the person who got the most electoral votes, and the vice president was the candidate who came second – even if (as was the case with John Adams and Thomas Jefferson) they were not in the same party. It also created the possibility (as happened with Thomas Jefferson and Aaron Burr) that a vice presidential candidate could get the same number of electoral votes as the presidential candidate, resulting in Congress deciding who would be president. The modern electoral college was created by the 12th amendment to the US constitution, ratified in 1804.

Another criticism of direct voting is that populist demagogues could be elected by the people, but electors (being of the political classes) could be expected to be better informed, and make better decisions, about who to vote for. Alexander Hamilton wrote in The Federalist #68 that: “It was equally desirable, that the immediate election should be made by men most capable of analyzing the qualities adapted to the station, and acting under circumstances favorable to deliberation, and to a judicious combination of all the reasons and inducements which were proper to govern their choice. A small number of persons, selected by their fellow-citizens from the general mass, will be most likely to possess the information and discernment requisite to such complicated investigations.” These days, most states have laws which require their electors to vote in accordance with the will of the electorate, so that original goal is now mostly obsolete.

A big part of the reason for having over two months between the election and the president taking office (prior to 1934, it was four months) is the sheer size of the early USA. The administrative unit for counting, the county, was defined so that every citizen could ride to the county courthouse and home in a day – and after an appropriate amount of time to count the ballots, the results were sent to the state capital for certification, which could take up to 4 days in states like Kentucky or New York. Then the electors needed to be notified, and to attend the official elector count in the state capital. And then the results needed to be sent to Washington, which could take up to 2 weeks, and Congress (which was also having elections) needed to meet to ratify the results. All of these things took time, amplified by the fact that travel happened on horseback.

So at least in part, the electoral college system is based on how long, logistically, it took to bring the results to Washington and have Congress ratify them. The inauguration used to be on March 4th, because that was how long it took for the process to run its course. It was not until 1934 and the 20th amendment to the constitution that the date was moved to January.

Incidentally, two other features of election day are also based on constraints that no longer apply. Elections happen on a Tuesday because of the need not to interfere with two key events: the sabbath (Sunday) and market day (Wednesday). And elections are held in November primarily so as not to interfere with the harvest. These dates, and the reasoning behind them, were set in law in 1845 and persist today.

FOSDEM SDN & NFV DevRoom Call for Content


We are pleased to announce the Call for Participation in the FOSDEM 2017 Software Defined Networking and Network Functions Virtualization DevRoom!

Important dates:

  • (Extended!) Nov 28: Deadline for submissions
  • Dec 1: Speakers notified of acceptance
  • Dec 5: Schedule published

This year the DevRoom topics will cover two distinct fields:

  • Software Defined Networking (SDN), covering virtual switching, open source SDN controllers, virtual routing
  • Network Functions Virtualization (NFV), covering open source network functions, NFV management and orchestration tools, and topics related to the creation of an open source NFV platform

We are now inviting proposals for talks about Free/Libre/Open Source Software on the topics of SDN and NFV. This is an exciting and growing field, and FOSDEM gives an opportunity to reach a unique audience of very knowledgeable and highly technical free and open source software activists.

This year, the DevRoom will focus on low-level networking and high performance packet processing, network automation of containers and private cloud, and the management of telco applications to maintain very high availability and performance independent of whatever the world can throw at their infrastructure (datacenter outages, fires, broken servers, you name it).

A representative list of the projects and topics we would like to see on the schedule are:

  • Low-level networking and switching: IOvisor, eBPF, XDP, fd.io, Open vSwitch, OpenDataplane, …
  • SDN controllers and overlay networking: OpenStack Neutron, Canal, OpenDaylight, ONOS, Plumgrid, OVN, OpenContrail, Midonet, …
  • NFV Management and Orchestration: Open-O, ManageIQ, Juju, OpenBaton, Tacker, OSM, network management, PNDA.io, …
  • NFV related features: Service Function Chaining, fault management, dataplane acceleration, security, …

Talks should be aimed at a technical audience, but should not assume that attendees are already familiar with your project or how it solves a general problem. Talk proposals can be very specific solutions to a problem, or can be higher level project overviews for lesser known projects.

Please include the following information when submitting a proposal:

  • Your name
  • The title of your talk (please be descriptive, as titles will be listed alongside around 250 others from other projects)
  • Short abstract of one or two paragraphs
  • Short bio (with photo)

The deadline for submissions is November 28th, 2016 (extended from the original deadline of November 16th). FOSDEM will be held on the weekend of February 4-5, 2017, and the SDN/NFV DevRoom will take place on Saturday, February 4, 2017 (Updated 2016-10-20: an earlier version incorrectly said the DevRoom was on Sunday). Please use the following website to submit your proposals: https://penta.fosdem.org/submission/FOSDEM17 (you do not need to create a new Pentabarf account if you already have one from past years).

You can also join the devroom’s mailing list, which is the official communication channel for the DevRoom: network-devroom@lists.fosdem.org (subscription page: https://lists.fosdem.org/listinfo/network-devroom)

– The Networking DevRoom 2016 Organization Team

Railway gauges


Episode 3 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

The standard railway gauge (that is, the distance between train rails) for over half of the world’s railways (including the USA and UK) is 4′ 8.5″, or 1.435m. A few other railway gauges are in common use, including, to my surprise, in Ireland, where the gauge is 5′ 3″, or 1.6m. If you’re like me, you’ve wondered where these strange numbers came from.

Your first guess might be that, similar to the QWERTY keyboard, it comes from the inventor of the first train, or the first successful commercial railway, and that there was simply no good reason to change it once the investment had been made in that first venture, in the interests of interoperability. There is some truth to this, as railways were first used in coal mines to extract coal by horse-drawn carriage, and in the English coal mines of the North East, the “standard” gauge of 4′ 8″ was used. When George Stephenson started his seminal work on the development of the first commercial railway and the invention of his Rocket steam locomotive, his experience from the English coal mines led him to adopt this gauge of 4′ 8″. To allow for some wiggle room so that the train and carriages could more easily go around bends, he increased the gauge to 4′ 8.5″.

But why was the standard gauge for horse-drawn carriages 4′ 8″? The first horse-drawn trams used the same gauge, and all of their tools were calibrated for that width. That’s because most wagons, built with the same tools, had that gauge at the time. But where did it come from in the first place? One popular theory, which I like even if Snopes says it’s probably false, is that the gauge was the standard width of horse-drawn carriages all the way back to Roman times. The 4′ 8.5″ gauge roughly matches the width required to comfortably accommodate a horse pulling a carriage, and has persisted well beyond the end of that constraint.

 

 

QWERTY keyboards


Episode 2 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

American or English computer users are familiar with the QWERTY keyboard layout – which takes its name from the layout of letters on the first row of the traditional us and en_gb keyboard layouts. There are other common layouts in other countries, mostly tweaks to this format like AZERTY (in France) or QWERTZ (in Germany). There are also non-QWERTY related keyboard layouts like Dvorak, designed to allow increased typing speed, but which have never really gained widespread adoption. But where does the QWERTY layout come from?

The layout was first introduced with the Remington No. 1 typewriter (AKA the Sholes and Glidden typewriter) in 1874. The typewriter had a set of typebars, each of which would strike the page with a single character, and these were arranged around a circular “basket”. The page was then moved laterally by one letter-width, ready for the next keystrike. The first attempt laid out the keys in alphabetical order, in two rows, like a piano keyboard. Unfortunately, this mechanical system had an issue – if two typebars situated close together were struck in rapid succession, they would occasionally jam the mechanism. To avoid this, common bigrams were distributed around the circle, to minimise the risk of jams.

The keyboard layout was directly related to the layout of typebars around the basket, since the keyboard was purely mechanical – pushing a key activated a lever system to swing out the correct typebar. As a result, the keyboard layout the company settled on, after much trial and error, had the familiar QWERTY layout we use today. At this point, too much is invested in everything from touch-type lessons and sunk costs of the population who have already learned to type for any other keyboard format to become viable, even though the original constraint which led to this format obviously no longer applies.

Edit: A commenter pointed me to an article on The Atlantic called “The Lies You’ve Been Told About the QWERTY Keyboard” which suggests an alternate theory. The layout changed to better serve the earliest users of the new typewriter, morse code transcribing telegraph operators. A fascinating lesson in listening to your early users, for sure, but also perhaps a warning on imposing early-user requirements on later adopters?

Summer vacations – not the farmer’s fault!


Episode 1 in a series “Things that are the way they are because of constraints that no longer apply” (or: why we don’t change processes we have invested in that don’t make sense any more)

I posted a brief description of the Five Monkey experiment a few days ago, as an introduction to a series someone suggested to me as I was telling stories of how certain things came about. One of the stories was about the school Summer vacation. Many educators these days feel that school holidays are too long, and that kids lose knowledge through atrophy during the Summer months – the phenomenon even has a name, the “summer slide”. And yet attempts to restructure the school year are strongly resisted, because of the amount of investment we have as a society in the school rhythms. But why do US schools have 10-12 weeks of Summer vacation at all?

The story I had heard is that the Summer holiday is as long as it is because, at the origins of the modern education system, in a more agrarian society, kids were needed on the farm during the harvest and could not attend school. I do like to be accurate when talking about history, so I went reading, and it turns out that this explanation is mostly a myth – at least in the US. And, as a farmer’s kid, that mostly makes sense to me. The harvest runs mostly from August through to the beginning of October, so starting school in September, one of the busiest farming months, does not make a ton of sense.

But there is a grain of truth to it – in the US in the 1800s, there were typically two different school rhythms, depending on whether you lived in town or in the country. In town, schools were open all year round, but many children did not go all of the time. In the country, schools were mainly in session during two periods – Winter and Summer. Spring, when crops are planted, and Autumn, when they are harvested, were the busy months, and schools were closed. The advent of compulsory schooling brought the need to standardise the school year, and so vacations were introduced in the cities, and restructured in the country, into what we see today. This was essentially a compromise, and the long Summer vacation was driven, as you might expect, by the growing middle class’s desire to take Summer holidays with their children, not the farming family’s desire to exploit child labour. It was also the hardest period of the year for children in cities, with no air conditioning to keep school classrooms cool during the hottest months of the year.

So, while there is a grain of truth (holidays were scheduled around the harvest initially), the main driver for long Summer holidays is the same as today – parents want holidays too. The absence of air conditioning in schools would have been a distant second.

This article is US centric, but I have also seen this subject debated in France, where the tourism industry has strongly opposed changes to the school year structure, and in Ireland, where we had 8-9 weeks vacation in primary school. So – not off to a very good start, then!

The five monkeys thought experiment


The (probably apocryphal) five monkeys experiment goes like this:

Five monkeys are placed in a cage. There is a lever, which, if pulled, delivers food. The monkeys soon learn how it works, and regularly pull the lever.

One day, when the lever is pulled, food is still delivered to the puller, but all the monkeys in the cage get an ice-cold shower for a period of time. The monkeys quickly learn the correlation between the lever and the cold shower, and stop any monkey from getting to the lever.

After a while, one of the monkeys is removed and replaced by a new monkey. Out of curiosity, the new monkey tries to pull the lever, and is beaten into submission by the other monkeys. Progressively, the rest of the original five monkeys are removed and replaced with new monkeys, and they all learn the social rule – if you try to pull the lever, the group will stop you.

Eventually, all of the original monkeys are gone. At this point, you can turn off the shower, secure in the knowledge that none of the monkeys will pull the lever, without ever knowing what will happen if they do.

A funny anecdote, right? A lesson for anyone who ever thinks “because that’s the way it has always been”.

And yet, a significant number of things in modern society are the way they are because at one point in time there was some constraint that applied, which no longer applies in the world of air travel and computers. I got thinking about this because of the electoral college, and the constitutional delays between the November election and the January inauguration of a new president – a system that exists to get around the logistical constraints of travelling long distances on horseback. But that is far from the only example.

This is a series covering each of the examples I have found, and hopefully uncovering others along the way; the electoral college will be one of them. First up, though, is the Summer school vacation.

  1. Summer vacations
  2. QWERTY keyboards
  3. Railway gauges
  4. The Electoral College

The Dummies’ Guide to US Presidential Elections


The US presidential primaries

For those following the US presidential primaries from a distance, and wondering what is happening, here’s a brief dummies’ guide to the US presidential primaries and general election. It’s too early to say that Trump has won the Republican primary yet, even though (given his results and the media narrative) he is a strong favourite. To learn more than you will ever need to know about US presidential primaries, read on.

Primaries elect delegates

The presidential candidates are elected by the major parties at their party conventions, held during the Summer before the election. The primary elections are one way that the parties decide who gets to vote in the convention, and who they vote for.

Both parties have the concept of pledged and unpledged delegates – if you are pledged, then your vote in the 1st ballot of the nomination election has been decided by the primary. If you are unpledged, then you are free to change your vote at any time. The Democrats have about 15% of their delegates unpledged; these are called superdelegates. The Republican party has about 170 unpledged delegates, representing about 7% of the total delegate count. Each state decides how to award pledged delegates, using a variety of processes which I will describe later.

If no candidate has a majority of delegates on the 1st ballot, then the fun starts – delegates are free to change their affiliation for the 2nd and subsequent ballots. This scenario, which used to happen often but now happens rarely, is called a contested or brokered convention. The last brokered convention was in 1952 for the Democrats, and 1948 for the Republicans. We have come close on a number of occasions, most recently in 2008 for the Democrats and 1976 for the Republicans.

