No surprises:
Looking forward to part-time holidays, beautiful Strasbourg, and the usual bunch of great people.
Tea, Cake, Research and Hardcore.
Being back home and having covered the first two days already, a short overall summary:
GNOME.Asia 2014 conference in Beijing was very well organized – fast and stable WiFi, free water, and the venue, hotel and sports facilities all within a five-minute walk of each other.
While GNOME’s European GUADEC conference is more like meeting lots of old friends (and great new community members) for an old fart like me, GNOME.Asia is about meeting those community members who often cannot make it to the European GUADEC conference (prices of plane tickets), plus spreading the word.
For ignorant Europeans like me, it’s a lot more about listening and learning about the diversity of our community, problems and differences in other areas, cultures, societies. (Plus in this case getting out of that Western internet services bubble of Facebook/Twitter/Youtube/etc. which feel ubiquitous, to see a different bubble yet to explore and understand further.)
Check out the many great photos and the two amazing videos of day 1 and day 2. I’m convinced we won’t have to spend months waiting for the recordings of talks and presentations given either. Now can we import this awesome photo and video team for GUADEC please?
I really hope to see many people at GNOME.Asia 2015 again, and I also hope I’ll find some random reason to go back to Beijing and China soon.
I started writing this post four weeks ago. Time to publish, seeing the latest blog posts by Philip and Marina on Planet GNOME about the Outreach Program for Women (OPW).
First things first (so you can stop reading if you disagree here):
Outreach programs need to scale and be sustainable to be a long-term success. Is that the case? I’m not sure myself; depends on the criteria you come up with.
As Jim wrote, “the program grew to a size that overwhelmed our administrative staff person” (to simplify it).
Outreach programs need to run without putting the GNOME Foundation or any other involved entity into financial or legal problems. I don’t follow this closely enough, so I wonder whether the GNOME Foundation and the organizations taking part in OPW have considered creating a separate legal entity with a dedicated mission to facilitate more diversity in free and open source software projects (as OPW has grown beyond what GNOME is about).
Do OPW participants (and GSoC participants) keep participating in our community after OPW is over, and do some become mentors themselves? (And if participants had sufficient time and financial resources, would they also participate if no money was offered?)
I consider the GNOME Documentation project rather successful at keeping OPW’ers involved: Of the 14 OPW participants who worked on GNOME documentation, 6 pretty much vanished after OPW, 2 have mentored, 2 will likely mentor soon, and the rest are still around to some extent. I don’t have numbers to compare and interpret (Marina posted some), but other GNOME teams feel less successful to me.
I’ve been wondering why. I’m sure others see other criteria and practices, and I have not investigated other organizations that take part in OPW – feel free to add your impressions in the comments.
What to consider best practices?
Statistics: Wikimedia has some statistics (beta) on community contributors joining and leaving. Which of course does not tell you why people join or leave. Could organizations do a better job of finding out the “why”?
For example, I kept in contact with my OPW participant after OPW was over, but she got busy with her new job and moving to a new location (socializing). And I don’t want to be too pushy and expectant towards volunteer contributions. Could I have done something better or differently? I don’t know. We all have our own lives and make our own decisions about what to invest our time in.
Thanks to my team (Sumana, Quim, Guillaume) for discussing diversity in general and thanks to Kat for discussing best practices and numbers about OPW in GNOME Documentation.
And obviously everything written here is my personal opinion. As usual.
GNOME.Asia Summit 2014 is taking place this weekend in Beijing (China) together with FUDCon APAC.
Yesterday (Friday) before the conference started, Kat, Dave and I held a hands-on session about making your first contribution to GNOME documentation (also see Kat’s blogpost). As a result, a number of tickets in Bugzilla have received comments and patches.
Apart from the pleasure of walking around the room, looking over people’s shoulders and helping out with non-obvious things, it was extremely eye-opening to realize again how many obstacles you need to overcome in order to finally make a contribution – running a recent GNOME version, finding the documentation that is supposed to explain the next steps, having a GNOME Bugzilla account, finding a task (bug report) that sounds interesting, finding the corresponding code repository, locating the documentation file to patch in that repository (in some subfolder called “C” instead of “en”), using git (formatting the patch, providing a commit message), and uploading the patch somewhere for review.
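To give a concrete example of the “locating the documentation file” step: GNOME user help usually lives as Mallard .page files in a help/C/ subfolder of the module. A throwaway helper along these lines (purely illustrative, not part of any GNOME tooling) lists the candidates in a local checkout:

#!/usr/bin/env python3
# Purely illustrative: list Mallard documentation pages in a module checkout,
# assuming the common help/C/*.page layout.
import os
import sys

def find_doc_pages(checkout_dir):
    """Yield paths of .page files below any .../C/ documentation folder."""
    for root, _dirs, files in os.walk(checkout_dir):
        if os.path.basename(root) != "C":
            continue
        for name in sorted(files):
            if name.endswith(".page"):
                yield os.path.join(root, name)

if __name__ == "__main__":
    checkout = sys.argv[1] if len(sys.argv) > 1 else "."
    for path in find_doc_pages(checkout):
        print(path)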
Today I gave a presentation about managing bug reports in GNOME. About 15 to 20 people attended it and I’m very happy how it turned out – we ran out of time to triage a few tickets at the end, but the audience was interested and asked really good questions (e.g. the upstream-downstream relation)! I’m looking forward to the video recording of it.
I would like to thank the organizers (especially Emily for her endless patience helping me to sort out visa issues), the sponsors, and the GNOME Foundation for paying part of my travel costs.
A slightly more correct title might be “Wikimedia to migrate its software development product/project code review issue tracking management planning tools to Phabricator”. Or something like that.
The Wikimedia technical community has used plenty of different tools for tracking bugs / product management / project management / todo lists. Bugzilla, RT, Gerrit, Mingle, Trello, Scrumbugs, Google Docs, to mention most of them.
From my personal point of view, the Wikimedia Foundation is pretty bottom-up: Each team can experiment and use the tools they prefer and that suit them. That also means some teams might have moved from Bugzilla to Trello to Mingle to Google Docs while other teams prefer(red) other tools. Bugzilla is our public issue tracker but lacks a lot of functionality when it comes to agile development workflows, design review work, or activity feeds.
We also have some connectivity between these tools: Bingle/Bugello to sync some parts between Bugzilla and Mingle/Trello, and its-bugzilla (previously “bugzilla-hooks”) to have Gerrit post comments in Bugzilla tickets about related patches (if the bug number was correctly referred to in the commit message). But things are brittle – for example, just this Friday the Gerrit→Bugzilla notifications broke.
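As a rough illustration of what this kind of glue code has to do, here is a minimal sketch (not the actual its-bugzilla or Bingle code) that pulls bug references out of a commit message, assuming conventions such as a “Bug: 12345” footer or an inline “(bug 12345)”:

import re

# Illustrative patterns only; the real hooks have their own, stricter rules
# about where in a commit message a bug reference may appear.
BUG_PATTERNS = [
    re.compile(r"^Bug:\s*(\d+)\s*$", re.MULTILINE),  # footer style: "Bug: 12345"
    re.compile(r"\(bug\s+(\d+)\)", re.IGNORECASE),   # inline style: "(bug 12345)"
]

def referenced_bugs(commit_message):
    """Return the set of Bugzilla bug IDs referenced in a commit message."""
    bugs = set()
    for pattern in BUG_PATTERNS:
        bugs.update(int(match) for match in pattern.findall(commit_message))
    return bugs

message = """Fix crash on empty input

Guard against None before dereferencing the parse result.

Bug: 54321
"""
print(referenced_bugs(message))  # {54321}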
All in all, the multitude of tools and channels is not helpful for cross-team collaboration, keeping track of what’s happening, and transparency of discussions and decisions in general as things are discussed in several places.
In late 2013, the idea came up to start a discussion about a possible agreement on a recommendation for a smaller set of tools that teams could agree upon. My colleague Guillaume and I had the pleasure of facilitating the discussion and making sure it did not remain just an idea. References were a previous evaluation attempt in 2009/2010 and the Gerrit evaluation in 2012.
The first step was asking interested teams and individuals to describe their needs and workflows on a wiki discussion page.
Its content was then consolidated by Guillaume and cleaned up a little bit more by me. (It reminded me of GNOME’s decision process for migrating from Subversion to Git, but that was a survey among GNOME Foundation members and hence a very different approach.)
After collecting those (sometimes conflicting) needs and workflows, we tried to narrow down the list of candidate tools to consider, and to investigate them and encourage discussion on the related discussion page. For candidate tools without an online test instance offered on the project’s homepage, we wondered whether to set up test instances on Wikimedia Labs to make testing easier, but we left it to anybody strongly favoring a tool to set that tool up. Wikimedia Deutschland already had a Scrumbugs instance in production (Scrum on top of Bugzilla) that we could point to, and for Phabricator somebody had already set up a test instance in Wikimedia Labs.
To gather the broader community’s opinion, build broader support, and find more potential (wo)manpower for the investigation, we started to prepare a Request for Comments (RFC). While we listed several options at the beginning (keep the status quo; status quo + Fulcrum; status quo + Scrumbugs; move completely to Phabricator; move partially to Phabricator; move to GitLab), the feedback quickly turned this into a single question to ask the community: Move to Phabricator?
We ran this RFC for three weeks until May 6th and announced it widely on mailing lists and via banners on top of Bugzilla and mediawiki.org.
Parallel to running the RFC, we were working on sorting out the blockers for a potential migration from Bugzilla and documenting things. My colleague Quim created a comparison page between Phabricator and Bugzilla.
The result of the RFC is that there seems to be general support for moving from our current infrastructure tools to Phabricator. This won’t all happen at the same time though – we will start by investigating the replacement of Bugzilla, RT, Trello, and Mingle.
For the code review functionality (currently done via Gerrit), more work in Phabricator is needed to fit the needs defined by our community, for example when it comes to Continuous Integration. We do not plan to switch off Gerrit on the day we start using Phabricator in production and we got more items to sort out (see the list of code review related items).
For managing the project to move to Phabricator we use the Phabricator test instance itself (dogfooding for the win), tracking missing features compared to our existing tools as well as tasks that need to be solved for the migration. We also asked users of the existing tools what they would specifically miss in Phabricator, by creating (sub)tasks for each tool in our Phabricator test instance.
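For the curious, filing such tracking tasks does not even need the web UI – Phabricator’s Conduit API can do it. The sketch below is illustrative only: the host and token are made up, and it assumes token-based Conduit authentication and the long-standing maniphest.createtask method, which may not match our exact setup.

import requests

PHAB_URL = "https://phab-test.example.org"      # hypothetical test instance
API_TOKEN = "api-xxxxxxxxxxxxxxxxxxxxxxxxxxxx"  # placeholder Conduit API token

def create_task(title, description):
    """Create a Maniphest task via the classic maniphest.createtask method."""
    response = requests.post(
        PHAB_URL + "/api/maniphest.createtask",
        data={
            "api.token": API_TOKEN,
            "title": title,
            "description": description,
        },
    )
    payload = response.json()
    if payload.get("error_code"):
        raise RuntimeError(payload["error_info"])
    return payload["result"]

task = create_task(
    "Trello: features users would miss in Phabricator",
    "Please note anything you rely on in Trello that Phabricator lacks.",
)
print(task["id"], task["uri"])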
We have not yet created a Phabricator production instance to which we would potentially migrate, because in the past Wikimedia ended up with a lot of tools by not enforcing migration.
Over the last months we have also run IRC office hours every two or three weeks to discuss and answer questions related to the project.
Last weekend the Wikimedia Hackathon took place in Zürich. There were several Phabricator related sessions (videos available for the sake of transparency; the first two videos are more like discussions though):
The usual disclaimer: Plans might be subject to change and there is intentionally no timeline yet.
Check out the Get involved section on the central project page and the planning board for Wikimedia Phabricator Day 1 in Production if there are tasks that interest you!
I would like to thank everybody in the community who has provided input, help and support. Upstream Phabricator developers have been extremely responsive and interested in discussing our needs and fixing issues – it’s a great pleasure to work with them.
Furthermore, getting to this point would have been impossible without my wonderful colleagues in the Wikimedia Engineering Community Team who have helped so much with communication, prioritization, planning, support.
(How we puppetized, upgraded and moved Bugzilla to another server)
Though we are currently also evaluating Wikimedia’s project management tools, we will have to stick with our current infrastructure for a while. Among many other tasks, I spent the last months preparing the upgrade of Wikimedia’s Bugzilla instance from 4.2 to 4.4. Some reasons for upgrading can be found in this Bugzilla comment.
In late November 2013 I started by cleaning up Wikimedia Bugzilla’s custom CSS, which had been copied about five years ago and not kept in sync. It turned out that 16 of 22 files could be removed as they did not differ sufficiently from upstream’s default CSS code (Bugzilla falls back to loading the default CSS file from /skins/default if no custom CSS file is found in /skins/custom). Less noise and less diffing required for future upgrades. In theory.
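The check itself is mundane; a throwaway comparison script roughly like this one (illustrative only, with made-up local paths) is enough to flag custom skin files that barely differ from upstream:

import difflib
import os

# Hypothetical local paths to the two CSS trees being compared.
UPSTREAM_SKIN = "bugzilla-upstream/skins/default"
CUSTOM_SKIN = "bugzilla-wikimedia/skins/custom"

def changed_lines(upstream_file, custom_file):
    """Count added/removed lines between an upstream CSS file and our copy."""
    with open(upstream_file) as f:
        upstream = f.readlines()
    with open(custom_file) as f:
        custom = f.readlines()
    return sum(
        1
        for line in difflib.unified_diff(upstream, custom, lineterm="")
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    )

for name in sorted(os.listdir(CUSTOM_SKIN)):
    if not name.endswith(".css"):
        continue
    upstream_path = os.path.join(UPSTREAM_SKIN, name)
    if not os.path.exists(upstream_path):
        print("%s: custom-only file" % name)
        continue
    diff_size = changed_lines(upstream_path, os.path.join(CUSTOM_SKIN, name))
    note = " (candidate for removal)" if diff_size == 0 else ""
    print("%s: %d changed lines%s" % (name, diff_size, note))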
After testing these CSS changes on a Wikimedia Labs instance and merging them into our 4.2 production instance, I created numerous patches and put them into Gerrit (Wikimedia’s code review tool) by diffing upstream 4.2 code, upstream 4.4 code, and our custom code.
At the same time, Wikimedia’s Technical Operations team wanted to move the Bugzilla server from the kaulen server in our old Tampa datacenter to the zirconium server in our new Ashburn (Eqiad) datacenter. While you’d normally prefer to do only one thing at a time, Daniel Zahn (of Technical Operations) and I decided to create a fresh Bugzilla 4.4 instance from scratch on the new server to see which problems we would run into. During this process Daniel Zahn turned the old setup on kaulen, which was largely manual and had grown organically over the years, into a proper Puppet module. For every “missing module” error we ran into, we avoided installing anything from Perl’s CPAN into Bugzilla’s /lib folder and instead relied only on distribution packages, for a much cleaner install; Daniel Zahn installed the needed packages by adding them to the Puppet code. While doing this we also removed Bugzilla’s Sitemap extension, as it created sporadic Search::Sitemap errors when running Bugzilla’s checksetup.pl (plus it’s unmaintained anyway). Furthermore I ran into another runtime error to fix.
After fixing all checksetup.pl issues and having Bugzilla accessible via a web browser, only Bugzilla’s upstream CSS was displayed instead of our custom CSS. Neither was Wikimedia’s custom CSS offered as an option in the browser, nor could I log into the new Bugzilla (to check which theme is set as default in the admin settings) as the database dump we used for testing predated the creation of my user account.
After Sean Pringle of Technical Operations deployed a more recent Bugzilla database dump, I expected further problems due to upstream changes to CSS loading. I was happy to see that I had been wrong: no problems with our custom CSS theming anymore. Instead, I ran into problems with our custom “See Also” field changes: adding and removing such URLs triggered errors, and the URLs themselves were not displayed (though their corresponding “Remove” checkboxes were). Thanks to upstream help in #bugzilla on Mozilla IRC I finally found out that Perl’s use base instead of use parent was the culprit.
After creating symlinks to /extensions/WeeklyReport/ to avoid 404 errors for the “Weekly Bug Summary” link in the sidebar (our setup is slightly busted) and after fixing two problems with our cronjobs for whining and data collection we agreed on a date to copy the database, do some maintenance work, and switch the DNS entry. This was announced one week in advance by adding a banner to Bugzilla via its announcehtml parameter.
A few hours before the switch on February 12th 2014, Daniel lowered the Time-to-live (TTL) values of the DNS entry of our Bugzilla. When the migration started, I set Bugzilla’s shutdown parameter to make the web UI inaccessible and also have the WebService API return a 503 error to the Bingle script that syncs Bugzilla with Wikimedia’s Mingle instance. This was important to make sure that nobody could write to the old database anymore. We updated the IRC channel topic in #wikimedia-tech to say that Bugzilla was under scheduled maintenance and logged the action in #wikimedia-operations so it got added to Wikimedia’s Server admin log. All in all we had only forgotten two minor things: our Gerrit integration bot (which adds notifications about related patches as comments in Bugzilla) could not write and got a 503 error back – Chad quickly disabled it. And our Nimsoft watchmouse sent an “ALERT! Bugzilla: Service Temporarily Unavailable” message to the Operations mailing list.
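For illustration, this is roughly what a sync script could do to notice the maintenance window: Bugzilla’s XML-RPC WebService answers at xmlrpc.cgi, and while the shutdown parameter is set the server returns HTTP 503, which Python surfaces as a ProtocolError. (A sketch only; Bingle’s actual error handling may look different.)

import time
import xmlrpc.client

BUGZILLA_XMLRPC = "https://bugzilla.wikimedia.org/xmlrpc.cgi"

def wait_until_available(max_attempts=10, delay=300):
    """Poll Bugzilla.version until the instance answers again."""
    proxy = xmlrpc.client.ServerProxy(BUGZILLA_XMLRPC)
    for attempt in range(1, max_attempts + 1):
        try:
            version = proxy.Bugzilla.version()
            print("Bugzilla is back, running version", version["version"])
            return True
        except xmlrpc.client.ProtocolError as err:
            if err.errcode == 503:  # shutdown parameter set, still in maintenance
                print("Attempt %d: still down for maintenance, retrying" % attempt)
                time.sleep(delay)
            else:
                raise
    return False

if __name__ == "__main__":
    wait_until_available()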
Sean Pringle migrated the old database from db9 in Tampa to a new database on db1001 in Eqiad. After this was done, Daniel Zahn ran checksetup.pl to apply the schema upgrades needed for 4.4.
After 30 minutes of testing to make sure everything worked as expected, we deployed two more custom patches: showing common queries on the frontpage and making saved reports work. While we had the downtime I also switched off bugmail to do some mass changes without spamming everybody: I merged some version numbers in the “MediaWiki” product to get a shorter Version dropdown, and removed the wikibugs-l watcher account from some bug reports as it is unneeded there (it is set as a global watcher in Bugzilla anyway, hence a potential issue if a ticket moved to a restricted product like “Security” still triggered public bugmail).
A few minutes before the end of the announced downtime of three hours, Daniel switched DNS so the new Bugzilla on the new server became available to the public. A few hours later, to work around issues for clients not supporting SNI, Daniel changed the order in which Apache loads virtual hosts. This ensures that older clients like Internet Explorer on Microsoft Windows XP always get to see Bugzilla instead of other miscellaneous web services sharing the same hardware. I had also overlooked a small UI issue that I fixed two days later.
Now that all is done, the result can be seen on bugzilla.wikimedia.org. All steps to upgrade Wikimedia Bugzilla from 4.2 to 4.4 were documented on a wiki page. You can find all of our custom modifications here.
One of the things I liked about the GNOME Documentation Hackfest (apart from the hospitality of Kat, Dave and the University of East Anglia) was having teachers and students pop in and discuss aspects of open source project management with us.
One obvious topic is contributor involvement.
The Number Two question of people interested in contributing to GNOME (right after “How do I get started?”) is “I know programming language XYZ. Which projects would be good?”. Which leads to the question:
What are obstacles for contributors to find projects in language XYZ themselves?
And as volunteer-based projects allow project maintainers or translation team leaders to disappear without warning, new contributors might not realize this, or know where to escalate in order to become the new maintainer, especially if the single point of contact is the private email address of a previous maintainer who will never reply. New contributors do not know how to find out how “maintained” a project is – they would have to find the project’s “log” page on GNOME Git and understand that translation updates are not a sign of development activity.
How to help contributors avoid writing a patch for an unmaintained dead rotten project where nobody will ever notice and appreciate their contribution?
This problem actually affects a larger group: translators, documentation writers and bug reporters all waste time working on projects that will never see a release again. One year ago, one third of the non-archived modules in GNOME Git had not seen any code activity for more than two years. Obviously, these dead projects are not a good thing to contribute to if you have no experience in software project management yet and just want to contribute a small patch that scratches your own itch.
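The “no code activity for two years” figure is easy to approximate from local clones. A rough sketch (not the script actually used for those statistics; it only crudely excludes translation-only commits by ignoring the po/ folder, and the checkout path is made up):

import os
import subprocess
import time

TWO_YEARS = 2 * 365 * 24 * 3600

def last_code_commit_age(repo_path):
    """Age in seconds of the newest commit that touches files outside po/."""
    out = subprocess.check_output(
        ["git", "log", "-1", "--format=%ct", "--", ".", ":(exclude)po"],
        cwd=repo_path,
    ).strip()
    if not out:  # repository only ever saw translation commits
        return float("inf")
    return time.time() - int(out)

def inactive_modules(checkout_root):
    """Yield module names whose last non-translation commit is over two years old."""
    for name in sorted(os.listdir(checkout_root)):
        repo = os.path.join(checkout_root, name)
        if not os.path.isdir(os.path.join(repo, ".git")):
            continue
        if last_code_commit_age(repo) > TWO_YEARS:
            yield name

if __name__ == "__main__":
    for module in inactive_modules(os.path.expanduser("~/gnome-checkouts")):
        print(module)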
And as we discussed this I pointed to a university paper I wrote a year ago, comparing Apache and GNOME.
Realizing how often I refer to it, it’s probably useful to link the PDF file:
Very quickly after that discussion, Frédéric created a “GNOME Project Health” webpage based on metrics in that paper.
Frédéric’s page lets you see which listed GNOME maintainers are active, how active a project is (the higher the score, the less activity), and in which programming language(s) the project is written. It’s a great start and I owe Frédéric a beer for it.
If you are a GNOME project maintainer, you can help improve it:
If your project is mostly written in the programming languages X and Y, add the lines
<programming-language>X</programming-language>
<programming-language>Y</programming-language>
to the .doap file in the top-level folder in Git, and check that the maintainers listed in the DOAP file match reality.
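As an aside, this metadata is easy to consume programmatically. A minimal sketch (assuming the standard DOAP and FOAF namespaces, and a file name chosen only for illustration) that extracts the declared languages and maintainers from a module’s .doap file:

import xml.etree.ElementTree as ET

NS = {
    "doap": "http://usefulinc.com/ns/doap#",
    "foaf": "http://xmlns.com/foaf/0.1/",
}

def describe(doap_path):
    """Return the programming languages and maintainer names declared in a DOAP file."""
    root = ET.parse(doap_path).getroot()
    languages = [
        element.text.strip()
        for element in root.findall("doap:programming-language", NS)
        if element.text
    ]
    maintainers = [
        element.text.strip()
        for element in root.findall("doap:maintainer/foaf:Person/foaf:name", NS)
        if element.text
    ]
    return languages, maintainers

languages, maintainers = describe("gnome-system-monitor.doap")  # illustrative file name
print("Languages:  ", ", ".join(languages) or "(none declared)")
print("Maintainers:", ", ".join(maintainers) or "(none declared)")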
Reporting live from GNOME Docs Hackfest at Norridge, East Anglia.
Fréd arrived after getting past a tree that had fallen on his way, but he was very late. Shaun is still stuck in a snowstorm; Ryan has already escaped it. But fearless leader Kat, Dave, Julita, Phil, Michael, and Baptiste made it. Petr plans to arrive in the middle of the night, if his plane does not decide to cause even more problems.
Julita plans to refresh the structure of the Evolution user docs to make it more compact plus might take a peek at GNOME Developer Platform Demos.
Phil “arrived on a train, that was pretty good, and I’ve actually resolved a third of the licensing issues of GNOME system monitor.”
Michael and Baptiste visited the Protestant cathedral. Michael worked on system monitor and on taking screenshots. Baptiste also kept in contact with Fréd so he would not feel too alone and lost. And Sindhu joined remotely.
I went through numerous documentation related bug reports and updated them (plus as a side effect created statistics about all those many unmaintained modules in GNOME Git).
For a short moment I also wondered: Why does GNOME push for using AppData files if we have extensible DOAP files already in each repository? Hmm.
Google Code-In (GCI) 2013 is over and the winners have been announced! Congratulations to everybody!
Quim Gil and I organized the participation of Wikimedia in GCI this year. We set up a central wikipage that we pointed students to for recurring general questions. Mostly these were about expectations on communication, how to use IRC, how to set up a development environment and the toolchain (Git, Gerrit, Bugzilla), plus a list of our mentors and their IRC nicknames and timezones (some enthusiastic students welcome being reminded of timezones, plus having patience is one of the good lessons to learn).
We also had a section for mentors explaining how to write good GCI tasks and which sentences and information should be part of every task description.
For the Wikimedia Engineering Community Team, GCI was a helpful lesson in preparation for improving our “Annoying little bugs” landing page for new contributors (still on my TODO list). Students hopefully found interest in contributing to a great FOSS community (I have seen numerous students who continue contributing and are still active after the contest has finished). I hope that Wikimedia will be able to invite the most active students to the Wikimania conference.
We ended up with far more tasks available than expected (and more than 200 successfully finished), had a number of students who really impressed us with their code quality (judging by the number of patch review rounds required) and speed, and Quim managed surprisingly well to convince established developers to become mentors. Also, the nine hours of time difference between Quim and me were a big advantage for responding quickly to requests.
All together, GCI was successful and participating in it was a good decision, contrary to my initial reluctance.
I would like to thank Google and all involved mentors and students for their hard work and for making this possible.