Discovery Docs Part 4: Discovery

This is Part 4 in a series about the Discovery Docs initiative, which I will be presenting at my upcoming GUADEC talk. In Part 1: Discovering Why, I laid the groundwork for why I think we should focus our docs on discovery. In Part 2: Templates and Taxonomies, I talked about how to structure topics differently to emphasize learning. In Part 3: Voice and Style, I proposed using a more casual, direct writing style. In this post, I’ll look at increasing reader engagement.

“Nobody reads the docs.” This is a common complaint, a cliché even. It has some truth to it, but it misses the bigger picture. For this post, the more important point is that people don’t often seek out the docs. So if we’re writing interesting material, as I’ve discussed throughout this blog series, how do we reach interested people?

This post is all about how we encourage people to discover.

Help menus

I have been trying to rethink help menus for over a decade. From the venerable Help ▸ Contents, to the Help item in the deprecated app menu, to the Help item in the hamburger menu, Help has always been a blind target. What’s on the other side of that click? A helpful tutorial? A dusty manual? An anthropomorphic paperclip? Who knows.

To address this problem, I’ve worked on a design for help menus:

This design presents users with topics relevant to what they’re doing right now. In these mockups, the example topics are mostly simple tasks. As I’ve discussed in this blog series, I want to move away from those. Combining this design with longer learning-based material can encourage people to explore and learn.

Learning

Speaking of learning, is “Help” even the right term anymore? That word is deeply ingrained in UI design. (Remember right-aligned Help menus in Motif applications?) And it fits well with the bite-sized tasks we currently write, probably better than it fit old manuals. But does it fit content focused on learning and discovery? Are people looking for help at all?

As we shift our focus, perhaps we should shift our language toward the word “Learn”. Use “Learn” instead of “Help” whenever it appears in the UI. Change the docs website from help.gnome.org to learn.gnome.org. Rename the app to something like “Help & Learning”.

Side note: I’ve never been a fan of the help buoy icon, and have always preferred the question mark in a blue circle. Somebody smart might be able to think of something even better for learning, although there’s also value in sticking with iconography that people know.

Web design

I mentioned the docs website. It needs more than a new URL. The current site uses an old design and is difficult to maintain. We have a documentation team initiative to redo the site using a documentation build tool designed to do these kinds of things. Here’s what it looks like at the moment:

This is extremely important for the docs team, regardless of whether we shift to learning-based content or not.

Visual presentation makes a difference in how people feel about your documentation. For comparison, imagine using GNOME 40 with the same user interaction, but using the boxy beveled aesthetics of GNOME 1. It’s just not as exciting.

To that end, it would be good to open up our designs to more people. I don’t scale when it comes to chasing design trends. The styling has been locked up in XSLT, which not many people are familiar with. One thing I did recently was to move the CSS to separate template files, which helps me at least. For a more radical change, I’ve also spent a bit of time on doing document transforms and styling entirely with JavaScript, Handlebars, and Sass. Unfortunately, I don’t have the bandwidth to finish that kind of overhaul as a free time project.

Social media

Imagine we have interesting and exciting new content that people enjoy reading. Imagine it’s all published on a visually stunning new website. Now what? Do we wait for people to stumble on it? Remember that the focus is on discovery, not stumbling.

Any well-run outreach effort meets people where they are. If you run a large-scale project blog or resource library, you don’t just quietly publish an article and walk away. You promote it. You make noise.

If we have topics we want people to discover, we should do what we can to get them in front of eyeballs. Post to Twitter. Post to Reddit. Set up a schedule of lessons to promote. Have periodic themes. Tie into events that people are paying attention to. Stop waiting for people to find our docs. Start promoting them.

Documentation is outreach.

Discovery Docs Part 3: Voice and Style

This is Part 3 in a series about the Discovery Docs initiative, which I will be presenting at my upcoming GUADEC talk. In Part 1: Discovering Why, I laid the groundwork for why I think we should focus our docs on discovery. In Part 2: Templates and Taxonomies, I talked about how to structure topics differently to emphasize learning. In this post, I’ll talk about how we should write to be engaging, but still clear.

One of the main goals of Discovery Docs is to be more engaging and to create enthusiasm. It’s hard to create enthusiasm when you sound bored. Just as your speaking voice can either excite or bore people, so too can your writing voice affect how people feel while reading. Boring docs can leave people feeling bored about the software. And in a world of short-form media, boring docs probably won’t even be read.

This post has been the hardest in the series for me to write. I’ve been in the documentation industry for two decades, and I’ve crafted a docs voice that is deliberately boring. It has been a long learning process for me to write for engagement and outreach.

To write for engagement, we need to adopt a more casual voice that addresses the reader directly. This isn’t just a matter of using the second person. We do that already when giving instructions. This is a matter of writing directly to the reader. Think about how blog posts are often written. Think about how I’m writing this blog post. I’m talking to you, as if I’m explaining my thoughts to you over a cup of coffee.

This doesn’t mean our writing should be stuffed with filler words, or that we should use complicated sentences. Our writing should still be direct and concrete. But it can still be friendly and conversational. Let’s look at an example.

  • #1: Some users need to type accented characters that are not available on their keyboards. A number of options are available.
  • #2: You may need to type accented characters that are not available on your keyboard. There are a number of ways you can do this.
  • #3: Maybe you need to type an accented character that’s not on your keyboard. You have a few options.

#1 is stuffy. It talks about hypothetical users in the third person. It is stiff and boring, and it uses too many words. Don’t write like this.

#2 is more direct. It’s how most of our documentation is written right now. There’s nothing wrong with it, but it doesn’t feel exciting.

#3 is more casual. It uses the fewest words. It feels like something I would actually say when talking to you.

Using a more casual and direct voice helps get readers excited about the software they’re learning about. It makes learning feel less like grunt work. A combination of exciting material and an engaging writing style can create docs that people actually want to read.

What’s more, an engaging writing style can create docs that people actually want to write. Most people don’t enjoy writing dry instructions. But many people enjoy writing blogs, articles, and social media posts that excitedly show off their hard work. Let’s excitedly show off our hard work in our docs too.

Discovery Docs Part 2: Templates and Taxonomies

This is Part 2 in a series about the Discovery Docs initiative, which I will be presenting at my upcoming GUADEC talk. In Part 1: Discovering Why, I gave a brief history of GNOME documentation, and explained our current unabashedly boring task-based approach. I proposed focusing instead on learning and discovery in an attempt to attract and retain enthusiastic users. In this post, I’ll explore what a learning-based topic might look like.

GNOME documentation currently focuses on tasks, concepts, and references, with tasks doing the bulk of the work. This is a common approach across the entire documentation industry, but that doesn’t mean it’s the only approach. Notably, we don’t strictly enforce these taxonomies as some other documentation systems do. Rather, we provide some templates and guidelines, and we trust humans to use good judgment.

To shift to a more engaging approach, I’m exploring a lengthier topic type that I’m hesitantly calling a “lesson”. A lesson incorporates a concept, possibly a task, possibly a quick reference, and further reading for specific use cases or advanced users. It should be readable in about five to seven minutes. Let’s break down an outline of a lesson.

  • Summary: A nicely formatted bullet list that provides a key point and a quick mention of what you are able to do. This acts as a guidepost so readers know they are in the right place, and for certain users, it may be the only thing they need.
  • Splash image: A well designed image can convey a lot of conceptual information. We don’t want to show random screenshots, but rather craft images that engage and explain, and which preferably do not need to be translated.
  • Concept: An explanation of what this thing is and how it has an impact on you. Traditionally, this would be its own concept topic.
  • Main task: A list of steps for a main task, if one exists for this lesson. Traditionally, this would be its own task topic.
  • Quick ref: A short table or list of things, if that applies to this lesson. Larger references should probably still be split into their own topics.
  • More tasks: Sometimes there are further tasks you might be interested in. If they don’t warrant their own lesson, we can put these in a collapsed section.
  • More information: Sometimes there is further conceptual information you might be interested in, but which not everybody cares about. We don’t want to clutter the top of the page. This can go in a collapsed section.
  • Advanced: Finally, sometimes there are concepts or tasks that we would only expect advanced users to mess with. This can go in a collapsed section, clearly marked as advanced.

Let’s explore what this would look like for some real-world documentation.

Compose Key

The compose key saw a lot of work in GNOME 40, from being moved into the main Settings, to getting visual feedback. It’s a great candidate for a lesson, because the task of “Set a compose key” misses the point of what most users are trying to accomplish. What people really want to do is type accented characters that aren’t on their keyboard. It is on us to explain the different ways they might do that, and how those mechanisms work.

A lesson for the compose key would start off with a three-item summary:

  • Use the compose key to enter accents and other special characters with a few easy-to-remember keystrokes.
  • You can define a compose key in the Keyboard Settings.
  • Advanced users can define their own compose key sequences.

For the average user looking to type “naïveté”, the first bullet point lets them know that this lesson is relevant to them. They know right away whether to read more or back away. For users who already know what a compose key is, the second bullet point might be all the information they need. And if you’re the kind of person who doesn’t mind editing dot files in text editors, we let you know there’s some bonus info at the end.

Then comes the splash image. A visual actually helps to convey the concept of the compose key very well. In his blog post on adding visualizations, Matthias showed three images of text fields as you type a compose sequence. We can pull these together with indications of key presses, as in this poorly done and hastily made prototype image:

Follow this with a conceptual overview of what a compose key is, and how you press mnemonic key sequences in a row, not all at once (for example, Compose, then ', then e gives you é). Talk about what kinds of characters are best entered with a compose key: generally accented characters and variations of things found on keyboards. Maybe mention other methods of entering special characters. Mention that compose keys aren’t an actual physical thing, and that you have to redefine some key you don’t use.

After the reader understands what a compose key is, we present the main task, which is to define a compose key. This is perhaps a four-step task.

After that, we might have a quick reference of compose key sequences, or it may be worthwhile to have a separate reference topic altogether. This is a judgment call I could only make after starting to write. (Side note: It occurs to me that cases like this might call for partially collapsed tables or lists, where you always see the first handful of rows or items, and you expand for more.)

Finally, since our summary promised it, we finish with a section on defining compose key sequences, collapsed by default, and clearly marked as advanced.
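
Put together, the skeleton for this lesson might look something like the sketch below, written here in Ducktype, a lightweight syntax for Mallard. The titles, steps, and link targets are placeholders for illustration, not final content:

@ducktype/1.0
= Type accented characters
@link[type=guide xref=index]
@desc Use a compose key to enter accents and other special characters.

The summary bullets, splash image, and conceptual overview go here.

== Define a compose key

[steps]
* Open the Keyboard settings.
* Choose a key to act as your compose key.

== Compose key sequences

A short reference of common sequences could go here, with the advanced section on defining your own sequences collapsed at the end.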

Workspaces

Workspaces are another area that I think would benefit from learning-oriented topics. There’s not really any single task associated with them. In fact, they may not even answer any specific user question. Instead, they’re something you happen to learn about, and then love. A summary of a workspace lesson might look like this:

  • Workspaces allow you to organize your apps and windows into spaces that work for you.
  • You can launch apps on specific workspaces and move windows between workspaces.
  • You can also use convenient keyboard shortcuts and gestures to work with workspaces.

For a splash image, consider this beautifully drawn image from GNOME Tour:

This isn’t a screenshot, and it doesn’t need to be translated. It doesn’t tell you how to do anything specific with workspaces, but it nicely illustrates what they are and why you might want to use them. There’s a workspace for multimedia (with cassettes, lol). There’s one for pictures and drawing. To the left, we see a document being worked on. And to the right, we presumably see me getting frustrated with my taxes.

There’s no primary task with workspaces, at least not the sort that we can list steps for. They’re always there, on demand, just waiting to be used. But we can provide a quick reference of things you can do, and easy ways to do them. Things like moving windows between workspaces, or quickly switching workspaces with keyboard shortcuts and gestures.

Then there’s further information, like using workspaces on multiple monitors. This is an interesting case, because it could be further information within the workspaces lesson, or it could be its own lesson. It likely has enough material for its own lesson. I could write a summary and a conceptual overview, and there’s a primary task in changing whether workspaces show on secondary displays. There’s even excellent imagery that helps explain the concept. Consider this pair of images from the GNOME Shell blog:

These images nicely illustrate how the default configuration treats workspaces as something contained by the primary display, where secondary displays are sort of extra always-visible workspaces. But you can change to a model where workspaces effectively contain the displays. The images contain a small amount of translatable text, but that can be fixed or ignored with no real loss of information. And we could easily provide an RTL variant without burdening translators.

More lessons

I’ve already gone past my recommended maximum of seven minutes of reading time, so I won’t dive into other topics in detail. But I will make a few notes:

  • The wireless hotspot functionality shows a QR code you can scan to connect quickly. There’s a proposal to add instructions on how to use this on common devices. We don’t want individual task topics for these, but they would fit nicely under “more tasks” in a lesson on hotspots.
  • The current topics on using speakers and using microphones already start with some conceptual information, and issues 107 and 116 list other things they could discuss. Most of this doesn’t fit well in tasks, and nobody will read a dozen tiny topics.

Lessons may not work for everything. They may not replace standalone references or concepts, although even those topic types might benefit from summaries and visual appeal. Lessons can be more engaging than tasks, which can make them more fun to read and to write.

In the next part of this series, I will discuss voice and style. Join me.


Discovery Docs Part 1: Discovering Why

This is Part 1 in a series about the Discovery Docs initiative, which I will be presenting at my upcoming GUADEC talk.

A long time ago, in the days of bonobos and fishes, GNOME documentation was written as long, monolithic manuals. We split these beasts into digestible pages as best we could (which is to say, poorly) and hoped for the best. Then we had an idea. What if we actually controlled the granularity at which information was presented? What if, instead of writing books, we wrote topics?

And so we did. We weren’t the first software project to make this shift, but we were early on the curve, and we did it radically. While many help systems still try to shoehorn topics into a linear structure, our help focuses on creating a navigable web of information.

The question of how big the topics are — how big the chunks on the web are — is entirely up to us. For the most part, we have chosen small topics with the least amount of information we could get away with. The reasoning is that users can find quick answers to questions, and if they want to learn more, we have extensive cross linking. Our topics have mostly followed the familiar trichotomy of tasks, concepts, and references. Our documentation is deliberately excruciatingly boring.

Fast forward. For the last six years, I’ve had the privilege of getting paid to peek inside various open source documentation efforts. I’ve advised people on forming a basic content plan, and even made an attempt at templatizing content plans. Aside from the obvious user goals and user stories, one of the things I ask people to do is define the project goals. What do you want to happen as a result of your hard work on your documentation? Yes, of course you want users to accomplish their own goals, but what else? Here’s what keeps coming up:

  • Attract new users.
  • Create experienced users.
  • Create enthusiastic users.

We want users. Even when there isn’t a profit motive, users are a project’s lifeblood. And we want those users to be experienced — not necessarily experts, but experienced — because you retain people better when they feel invested. And we want those users to be enthusiastic, to love our software, because enthusiastic users help us attract even more users, but also, enthusiastic users are where we find future contributors. Enthusiasm is critical for the long-term success of open source software.

Boring documentation fails these goals. It succeeds elsewhere, but not here. So instead of focusing on quickly answering simple questions, I propose we focus our documentation on learning, exposition, and discovery.

In this series, I will explore ideas on how we can write differently, how we can present information differently, and how we can better position our documentation to help users discover and enjoy the software that we make, use, and love.

Documentation transforms in JavaScript

Three weeks ago, I had a project idea while napping. I tried to forget it, but ended up prototyping it. Now I’ve named it Hookbill and put it on GitLab. From the README:

I appear to be the last XSLT programmer in the world, and sometimes people get very cross with me when I say those four letters. So I had an idea. What if we did it in JavaScript using Mustache templates?

Just to give a glimpse, here’s the Hookbill template for paragraphs:

{{~#if (if_test element)~}}
{{~#var 'cls' normalize=true}}
  {{element.localname}}
  {{#if (contains element.attrs.style 'lead')}}lead{{/if}}
  {{if_class element}}
{{/var~}}
<p class="{{vars.cls}}" {{call 'html-lang-attrs'}}>
  {{~call 'html-inline' element.children~}}
</p>
{{~/if~}}

And here’s the equivalent XSLT for Mallard in yelp-xsl:

<xsl:template mode="mal2html.block.mode" match="mal:p">
  <xsl:variable name="if"><xsl:call-template name="mal.if.test"/></xsl:variable><xsl:if test="$if != ''">
  <p>
    <xsl:call-template name="html.class.attr">
      <xsl:with-param name="class">
        <xsl:text>p</xsl:text>
        <xsl:if test="contains(concat(' ', @style, ' '), ' lead ')">
          <xsl:text> lead</xsl:text>
        </xsl:if>
        <xsl:if test="$if != 'true'">
          <xsl:text> if-if </xsl:text>
          <xsl:value-of select="$if"/>
        </xsl:if>
      </xsl:with-param>
    </xsl:call-template>
    <xsl:call-template name="html.lang.attrs"/>
    <xsl:apply-templates mode="mal2html.inline.mode"/>
  </p>
</xsl:if>
</xsl:template>

Is the Hookbill template shorter? For sure. Is it better? In this case, yeah. For some other things, maybe not as much. It does force me to put a lot of logic in JavaScript, outside the templates. And maybe that’s a good thing for getting other people playing with the HTML and CSS.

I don’t know if I’ll continue working on this. It would take a fairly significant time investment to reach feature parity with yelp-xsl, and I have too many projects as it is. But it was fun to play with, and I thought I’d share it.

Making Releases

A few days ago, I posted to desktop-devel-list asking how we can ensure releases happen, especially beta releases for the freeze. I was frustrated and my language was too abrasive, and I’m sorry for that. My intention was really to open a discussion on how we can improve our release process. Emmanuele replied with a thorough analysis of which bits are hard to automate, which I enjoyed reading.

Earlier today, I tweeted asking developers of other open source projects how they make releases, just to get a sense of what the rest of the world does. There have been a lot of responses, and it will take me a while to digest it all.

In the meantime, I wanted to share my process for rolling releases. I maintain five core GNOME modules, plus a handful of things in the wider open source world. My release process hasn’t fundamentally changed in the 18 years I’ve been a maintainer. A lot of other stuff has changed (merge requests, CI, freeze break approvals, etc), so I’m just trying to think of how any of this could be better. Anyway, here’s my process:

  1. First, I run git status in my development checkout to do a sanity check for files I forgot to add to the repo. In at least one project, I have auto-generated docs files that I keep in git, because various tools rely on them being there.
  2. Next, I always want to make sure I’m making releases from a clean checkout. So I will git clone a fresh checkout. Or sometimes, I already have a checkout I use just for releases, so I will git pull there.
  3. Next, I actually roll a tarball before doing anything else, which I will promptly throw away. I’ve had a few times where I discovered a dist breakage after doing everything else, and I’ve had to start over.
  4. Now it’s time to write a NEWS entry. I run git log --stat PREVTAG.. > changes, where PREVTAG is the tag name of the previous release. I edit changes to turn it into a NEWS entry, then I copy it to the top of the NEWS file.
  5. I then bump the version number in either configure.ac or meson.build. I know a lot of people do pre-release version bumps. I don’t have a strong opinion on this, so I’ve never changed my habits.
  6. Now it’s time to roll the tarball that I don’t throw away. The commands I run depend on the build system, of course. What matters is that I run these commands myself and have a tarball at the end.
  7. Before I actually release that tarball, I run git commit and git push. If there have been any commits since I started, I either have to rebase and update the NEWS file, or do some branching. This is fortunately quite rare.
  8. Also before releasing the tarballs, I tag the release with git tag -s and push the tag. Importantly, I only do this after pushing the commits, because otherwise if other commits have happened I have to do tag surgery. Nobody likes that.
  9. Finally, I scp the tarball to a GNOME server, ssh into that server, and run a release script that the release team maintains.
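
Rolled together, that process amounts to something like this rough shell sketch. MODULE, VERSION, PREVTAG, the server name, and the script name are all placeholders, and the dist commands assume Meson:

# in the development checkout: sanity check for files not yet added
git status
# fresh checkout just for the release
git clone https://gitlab.gnome.org/GNOME/MODULE.git && cd MODULE
# throwaway tarball, just to catch dist breakage early
meson setup _build && ninja -C _build dist
# turn the log into a NEWS entry by hand, then bump the version
git log --stat PREVTAG.. > changes
$EDITOR NEWS meson.build
# the tarball that actually ships
ninja -C _build dist
git commit -am "Version VERSION" && git push
git tag -s VERSION && git push origin VERSION
# hand the tarball to the release team's script
scp _build/meson-dist/MODULE-VERSION.tar.xz gnome-server:
ssh gnome-server ./release-script MODULE-VERSION.tar.xz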

The two things that take the most time are rolling a tarball (which I do at least twice), and creating the NEWS entry. Rolling tarballs is something that can happen in the background, so I usually have multiple terminal tabs, and I work on other releases while one release is building. So that part isn’t too bad, but sometimes I am just waiting on a build to finish with nothing else to do. I know some people auto-generate NEWS entries, or don’t write them at all, but I find hand-edited entries extremely valuable. (I do read them when writing help for apps, when I actually find time to do that, so a big thanks to app maintainers who write them.)

I’m tossing around in my head what a more GitLab-focused workflow would look like. I could imagine a workflow where I click a “Release” button, and I get prompted for a version number and a NEWS entry. I’m not even sure I care if the “news” is in a file in the tarball, as long as I know where to find it. (Distro packagers might feel differently. I don’t know what their processes look like.) I would still need to look at a commit log to write the news. And I guess I’d probably still want to do my own git status sanity check before going to GitLab, but I could probably catch a lot of that with either CI or local commit checks. Ensuring there are no dist breakages should almost certainly be done on every commit with CI.

I suppose another thing to consider is just maintainers remembering it’s time to make releases. I didn’t miss this one, because I was eagerly waiting to play with it and update the help, but I’ve missed lots of releases in the past. Could we all get automatic issues added to our todo lists a few days in advance of expected releases? Would that be annoying? I don’t know.

All new yelp-tools

I’ve just released the 40.alpha release of yelp-tools, the collection of tools for building and maintaining your documentation in GNOME. This is the first release using the new Meson build system. More importantly, it’s the first release since I ported the tools from shell scripts to Python.

Porting to Python is a pretty big deal, and it comes with more improvements than you might expect. It fixes a number of issues that are just difficult to do right in a shell script, and it’s significantly faster. For some commands, it can be as much as 20 times faster.

But that’s not all. You can now provide a config file with default values for all command-line arguments. This is useful, for example, with the --version option for yelp-check status. Previously, to ensure you weren’t getting stale status information, everybody had to remember to pass --version. Now, you can set the current version in your config file, and it will always do the right thing for everybody.
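
For example, the idea is that a couple of lines in the config file stand in for typing --version every time. The section and key names below are my guess at the syntax, so treat this as a hypothetical sketch rather than something to copy verbatim:

# hypothetical sketch; the real section and key names may differ
[yelp-check status]
version = 40.0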

It gets better. The config file can specify custom checkers for yelp-check. I blogged about custom checkers a couple months ago. You can use XPath expressions to make assertions about a document. This is very similar to how Schematron works. But now you don’t have to figure out how to write a Schematron file, call xmllint with it, and parse the output.

Here’s an example of how to ensure that every page uses XInclude to include a boilerplate legal.xml file:

[namespaces]
mal = http://projectmallard.org/1.0/
xi = http://www.w3.org/2001/XInclude

[check:gnome-info-legal-xi]
select = /mal:page/mal:info
assert = xi:include[@href='legal.xml']
message = Must include legal.xml
xinclude = false

To run this check, you simply call:

yelp-check gnome-info-legal-xi

For more examples, check out the config file I already added to gnome-user-docs.

What I’d like to do now is figure out a good way to share custom checkers across modules, and then add CI pipelines to all GNOME modules with user docs to ensure consistency.
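
Whatever that shared boilerplate ends up looking like, the CI job itself would mostly boil down to running a handful of checks over the page files, something along these lines for a gnome-user-docs style layout:

# sketch of the checks a docs CI job could run; paths assume gnome-help/C/*.page
yelp-check validate gnome-help/C/*.page
yelp-check ids gnome-help/C/*.page
yelp-check links gnome-help/C/*.page
yelp-check media gnome-help/C/*.page
yelp-check gnome-info-legal-xi gnome-help/C/*.page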

Custom docs checkers in yelp-check

I have a personal goal of getting top-notch docs CI in place for GNOME 40, and part of that is having checks for merge requests. We have a number of great checks already in yelp-check, like checking for broken links and media references, but for large document sets, you want to be able to enforce your own rules for style and consistency. The most common tool for this is Schematron, and in fact we do have a Schematron file in gnome-help. But it has a lot of boilerplate, most people don’t know how to write it, and the command to run it is unpleasant.

I’ve been rewriting yelp-check in Python. This has made it easy to implement things that were just not fun with /bin/sh. (It’s also way faster, which is great.) Today I added the ability to define custom checkers in a config file. These checkers use the same idea as Schematron: assert something with XPath and bomb if it’s false. But they’re way easier to write. From my commit message, here’s one of the checks from the gnome-help Schematron file, reworked as a yelp-check checker:

[namespaces]
mal = http://projectmallard.org/1.0/

[check:gnome-desc]
select = /mal:page/mal:info
assert = normalize-space(mal:desc) != ''
message = Must have non-empty desc

With the new yelp-check (not yet released), you’ll be able to just run this command:

yelp-check gnome-desc

All of the custom checkers show up in the yelp-check usage blurb, alongside the builtin checks, so you can easily see what’s available.

The next steps are to let you define sets of commands in the config file, so you can define what’s important for you on merge requests, and then to provide some boilerplate GitLab CI YAML so all docs contributions get checked automatically.

Things to Love About Ducktype

I spent a lot of time making Ducktype into a lightweight syntax that I would really enjoy using. I had a list of design goals, and I feel like I hit them pretty well. In this post, I want to outline some of the things I hope you’ll love about the syntax.

1. Ducktype has a spec

I know not everybody nerds out over reading a spec the way I do. But whether or not you like reading them, specs are important for setting expectations and ensuring interoperable implementations. Probably my biggest gripe with Markdown is that there are just so many different flavors. Couple that with the multiple wiki flavors I regularly deal with, and my days become a game of guess-and-check.

Even in languages that haven’t seen flavor proliferation, you can still run into issues with new versions of the language. In a sense, every version of a parser that adds features creates a new flavor. Without a way to specify the version or do any sort of introspection or conditionals, you can run into situations where what you type is silently formatted very differently on different systems.

In Ducktype, you can declare the language version and any extensions (more on those later) right at the top of the file.

@ducktype/1.0
= Header

This is a paragraph.

2. Ducktype keeps semantics

I’ve been a strong proponent of semantic markup for a long time, since I started working with LaTeX over 20 years ago. Presentational markup has short-term gains in ease-of-use, but the long-term benefits of semantic markup are clear. Not only can you more easily adapt presentation over time, but you can do a lot with semantic markup other than display it. Semantic markup is real data.

Different lightweight languages support semantic markup to different degrees. Markdown is entirely presentational, and if you want to do anything even remotely sophisticated, you have to break out to HTML+CSS. AsciiDoc and reStructuredText both give you some semantic markup at the block level, and give you the tools to create inline semantic markup, but out of the box they still encourage doing semantic-free bold, italic, and monospace.

My goal was to completely capture the semantics of Mallard with less markup, not to just create yet another syntax. Ducktype does that. What I found along the way is that it can fairly nicely capture the semantics of other formats like DocBook too, but that’s a story for another day.

Ducktype uses a square bracket syntax to introduce semantic block elements, inspired by the group syntax in key files. So if you want a note, you type:

[note]
This text is an implicit paragraph in the note.

If you want a plain old bullet list, you type it just like in any other lightweight syntax:

* First item
* Second item
* Third item

But if you want a semantic steps list, you add the block declaration:

[steps]
* First step
* Second step
* Third step

Ducktype keeps the semantics in inline markup too. Rather than try to introduce special characters for each semantic inline element, Ducktype just lets you use the element name, but with a syntax that’s much less verbose than XML. For example:

Click $gui(File).
Call the $code(frobnicate) function.
Name the file $file(index.duck).

Both AsciiDoc and reStructuredText let you do something similar, although sometimes you have to cook up the elements yourself. With Ducktype, you just automatically get everything Mallard can do. You can even include attributes:

Press the $key[xref=superkey](Super) key.
Call the $code[style=function](frobnicate) function.

3. Ducktype keeps metadata

In designing Ducktype, it was absolutely critical that we have a consistent way to provide all metadata. Not only is metadata important for the maintenance of large document sets, but linking metadata is how documents get structure in Mallard. Without metadata, you can’t really write more than a single page.

You can do very good page-level metadata in both AsciiDoc and reStructuredText, but for Mallard we need to do metadata at the section and even block levels as well. Markdown, once again, is the poor format here with no support for any metadata at all. Some tools support a YAML header in Markdown, which would be pretty great if it were any sort of standard.

Ducktype uses a single, consistent syntax for metadata, always referencing the element name, and coming after the page header, section header, or block declaration that it’s providing info for.

= My First Topic
@link[type=guide xref=index]
@desc This is the first topic page I ever wrote.

== My First Section
@keywords first section, initial section

Do you like my first topic page with a section?

We can even do this with block elements:

[listing]
  @credit
    @name Shaun McCance
  . A code listing by Shaun
  [code]
    here_is_some_code()

There’s a lot going on in that little example, from nesting to shorthand block titles. The important part is that we can provide metadata for the code listing.

(Side note: I thought a lot about a shorthand for credits, and I have ideas on an extension to provide one. But in the end, I decided that the core syntax should have less magic. Explicit is better than implicit.)

4. Ducktype allows nesting

I do quite a bit of semi-manual format conversion. It’s not something I enjoy, and it’s not the best use of my time, but it just keeps having to be done, and I’m kind of good at it. If you do that a lot, you’ll often run into cases where something just can’t be represented in another format, and that usually comes down to nesting. Can you put lists inside table cells? Can you put tables inside list items? Can lists nest? Can list items have multiple paragraphs?

Both reStructuredText and (shockingly) most flavors of Markdown allow some amount of nesting using indentation. This sometimes breaks down when tables or code blocks are involved, but it’s not terrible. Nesting is the one place where AsciiDoc comes out worst for me. In fact, it’s my single least favorite thing about AsciiDoc. It uses a plus on its own line to indicate a continuation. It’s hard to visualize, and it has severely limited functionality.

Ducktype very explicitly allows nesting with indentation, whether you’re nesting lists, tables, code blocks, notes, or anything else. It allows you to skip some indentation in some common cases, but you can always add indentation to stay in a block. Ducktype specifies the exact nesting behavior.

[note style=important]
  This is very important note about these three things:

  * The first thing is hard to explain.

    It takes two paragraphs.

  * The second thing can really be broken in two:

    * First subthing
    * Second subthing

  * The third thing involves some code:

    [code]
      here_is_some_code()

You can use any number of spaces you like. I prefer two, in part because it fits in with the shorthand syntax for things like lists. Using indentation and block declarations, there is absolutely no limit on what or how deeply you can nest.

5. Ducktype tables are sane

I have never met a lightweight table syntax I liked. Trying to represent real tables with some sort of pseudo-ascii-art works ok for very simple tables, but it falls apart real fast with anything remotely non-trivial. Add to this the extra syntax required for all the non-trivial stuff, and a substantial portion of your parser is devoted to tables. As a user, I have a hard time keeping all those workarounds in my head.

As I said above, the ability to nest things was very important, so most of these table syntaxes were just a non-starter. How do you add extra blocks in a table cell when the whole table row is expected to fit on one line? In the end, I decided not to have a table syntax. Or more accurately, to treat tables as a sort of list.

Without any special treatment, tables, rows, and columns can already be written in Ducktype using the block declarations you’ve already seen:

[table]
  [tr]
    [td]
      One
    [td]
      Two
  [tr]
    [td]
      Three
    [td]
      Four

There’s no magic happening here. This is just standard block nesting. But it’s pretty verbose. Not as verbose as XML, but still. Can’t we make it a bit easier, like we do for lists? Well yes, we make it easier exactly like we do for lists.

[table]
[tr]
* One
* Two
[tr]
* Three
* Four

There are a few things going on here. Most importantly, the asterisk is shorthand for a table cell, not a list item. What that shorthand does depends on context. But also, we’ve removed quite a lot of indentation. Earlier I wrote that there are some special rules that allow you to trim indentation in certain common cases. This is one of them.

Using a vertical table syntax means that you can do the same kind of nesting you can do with other elements. Here’s a list in a table cell:

[table]
[tr]
* Here's a list:
  * First
  * Second
* Another table cell

Ducktype isn’t the first format to allow a vertical table syntax. MediaWiki allows tables to be written vertically, and it works really well. But Ducktype is the only one to my knowledge not to introduce any new syntactical constructs at all. I have a hard time keeping all those dots and dashes in my head. Using a single, consistent syntax makes it easier to remember, and it reduces the number of special rules you need.

6. Ducktype supports extensions

I’ve mentioned extensions a couple of times already in this post, but I haven’t elaborated on them. In addition to being able to use Mallard extensions (which you can do without any extra syntax), I wanted a way to add and experiment with syntax without shoving everything in the core. And, importantly, I wanted pages to have to declare what extensions they use, so we don’t have a guess-and-check mess of incompatible flavors. I live in a world where files are frequently cherry-picked and pushed through different tool chains. Being explicit is important to me.

So in Ducktype, you declare your extensions at the top, just like you do with the version of Ducktype you’re using:

@ducktype/1.0 if/experimental
= Page with Experimental Extension

What kinds of things can an extension do? Well, according to the spec, extensions can do literally anything. Realistically, what extensions can do depends on the extension points in the implementation. The reference implementation has two rough classes of extensions. There are the methods in the ParserExtension class, which let you plug in at the line level to affect how things are parsed. And there’s the NodeFactory, which lets you affect how parsed things are interpreted.

All of the ParserExtension methods are implemented and pretty well commented in the _test extension. One interesting use of this is the experimental csv extension. Remember how tables are always written vertically with a list-like syntax? That’s very powerful, but sometimes you really do just want something super simple. The csv extension lets you do simple comma-separated tables, like this:

@ducktype/1.0 csv/experimental
= CSV Table Example
[csv:table]
one,two,three
four,five,six
seven,eight,nine

That’s in about 40 lines of extension code. Another example of line parser extensions is a special syntax for conditionals, which I blogged about before.

The NodeFactory extension point, on the other hand, lets you control what the various bits of special syntax do. If you want to use Ducktype to write DocBook, you would use DocBook element names in explicit block declarations and inline notation, but what about the elements that get implicitly created for paragraphs, headers, list items, etc? That’s what NodeFactory does.

Et cetera

That’s six things to love about Ducktype. I’ve been really happy with Mallard’s unique approach to structuring docs. I hope Ducktype can make the power of Mallard available without all the extra typing that comes along with XML. If you want to learn more, get in touch.


Ducktype 1.0

Earlier this week, I released the final specification for Ducktype 1.0. If you’d like a quick primer on what Ducktype looks like, check out our Learn Ducktype tutorial.

Ducktype is a lightweight syntax designed to be able to do everything Mallard can do. It doesn’t sacrifice semantics. It doesn’t sacrifice metadata. It doesn’t sacrifice nesting and content model.

It took a long time to reach 1.0, because I take compatibility and correctness very seriously. Syntax is API. I work with multiple documentation formats on an almost daily basis. It is frustrating to constantly run into language quirks, especially quirks that vary between implementations. I realize quirks tend to accumulate over time, but I think we’ve prevented a lot of them by writing an actual specification, developing a regression test suite, and focusing on the cohesiveness of the language.

I’m really happy with how it’s turned out so far. I hope you enjoy it too.

Creative Commons Attribution 3.0 United States
This work by Shaun McCance is licensed under a Creative Commons Attribution 3.0 United States License.