2000th ColorHug

This week we will hit a milestone we thought we might never reach – selling ColorHug device number 2000. We started making ColorHugs just 18 months ago and have come a long way from the first batch that was hand-built on a desk in our back bedroom. I started the project as a hobby to make some embedded hardware, as it was something I had enjoyed doing at University and hadn’t done for a while. I assured my wife we wouldn’t need to make more than 50. So imagine how we felt when we got over 800 responses to a single blog post. I only wrote it to check it was worth making that initial batch! So this hobby turned into a second job almost overnight, one that came with all the fun of dealing with customer emails, legal issues and setting up a business that pays tax. Ania was not best pleased when our lovely guest room turned into our manufacturing department and my “hobby” had her screwing devices together on weekends, evenings and even on Boxing Day.


A lot has changed since that first batch. We now outsource the PCB fabrication, and occasionally recruit my mum, dad and of course Ania to help with the manufacture, assembly, dispatch and administrative aspects of ColorHug. We finally have a new outside office – so after nearly two years we have our house back! And most importantly we have had a little girl, who keeps us very busy and has thus slowed the development of the ColorHug Spectro.


Building an OpenHardware device has definitely been a worthwhile pursuit and something I believe in. We will have built and shipped 2000 devices all around the world, including issuing two lots of free gifts to update early adopters with the latest accessories and design improvements. We’ve also built a large community who are using ColorHugs all over the world for calibrating external screens and panels in domestic and commercial settings.

We still don’t make much profit on each unit and definitely wouldn’t recommend calibration hardware as a get-rich-quick scheme, but we have enjoyed growing ColorHug and fostering the community that has built up around it. We’d like to take this opportunity to thank everyone who has helped make ColorHug a success – from those who’ve helped make the device through to those who contribute in the community by testing and reporting bugs.

Some interesting statistics from the last 18 months:

  • Total Sold: 1998
  • Number of Batches: 8
  • Typical number of jiffy bags ready to go at any time: 60
  • Number of returns: 8
  • Number of automated emails from PayPal: 2520
  • Number of emails Ania and I have sent (most semi-automated): 5087
  • Number of LiveCD updates: 9
  • Number of different countries sent to: 21
  • Amount of money spent on postage: £9,354

So, basically, we’re very humbled and grateful! Richard, Ania and baby Hughes.


AppData Proposal, a.k.a. How to make your application appear in the software center

This morning I wrote up this proposal. If you maintain an application that you think should be in the GNOME Software Center, I would appreciate your views. Thanks!

P.S. Don’t actually commit anything to your modules just yet; I want to finish polishing the wording before asking people to write anything. For example, the format of the long description should probably be much more prescriptive.

GNOME Software and DOAP Files

Work on gnome-software is progressing nicely. Most of the major functionality is semi-working, although there are an awful lot of rough (and unimplemented) edges. Now that the UI is coming together somewhat, it’s probably time to talk about what data it is going to consume. I’ve talked a lot before about extracting application icons and translations from .desktop files, but now I want to talk about long, formatted descriptions. Something like:

[Screenshot: an example of a long, formatted application description]

So where do we get this long description from? There seem to be many possible places to put this data:

  1. On the distribution web service
  2. In the ${app}.desktop files of the upstream application
  3. In the DOAP file of the upstream tarball
  4. In the package file description
  5. Some new ${app}.xml file shipped by application with all this extra data
  6. Some simple ${app}.md file containing markdown

Each has positives and negatives:

  1. All distros have to do basically the same work, and have to retranslate these over and over. -ENORESOURCE.
  2. It’s not much fun writing a multiline description that has to fit on one line of a .desktop file, and rich text like hyperlinks and bullet points is impossible.
  3. DOAP files are not translated, and we only get one file per project, not one per application: you probably want a different description for LibreOffice Calc than for LibreOffice Impress.
  4. Same problem as 3, and like 1 it pushes the work onto the distros. It’s also not typically translated.
  5. YAFF: Yet Another File Format. Okay, it lets us define rich text (SGML/DocBook/whatever), but it’s another file format to be added to intltool and I’m not sure how easy it would be to get random projects to ship this.
  6. Easy to write, although much harder to extend in the future with things like screenshots and the like. Also, very hard to translate.

Also, anything except option 1 requires the user to have a big cache of all the possible applications they want to search for. So far I’m leaning towards some kind of composite approach (sketched after the list below):

  • Add an X-SoftwareCenterLongDescURI key to the desktop file
  • Host an .xml file at that URL on any remote server
  • Download and load the .xml file when the application detail view is opened
  • Optionally translate the .xml.in using intltool and update the description at release time
  • Applications not shipping .desktop files with X-SoftwareCenterLongDescURI just get a shitty app-detail view in the software center.
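
To make this concrete, here is a purely illustrative sketch; the key name is from the proposal above, but the application name, URL and XML layout are invented for the example:

# Hypothetical addition to the upstream gedit.desktop file:
#
#   [Desktop Entry]
#   Name=gedit
#   X-SoftwareCenterLongDescURI=http://example.com/appdata/gedit.xml
#
# When the detail view is opened, the software center would simply
# fetch and parse the XML from that URL:
curl -s http://example.com/appdata/gedit.xml

The obvious trade-off is that the detail view then needs network access, which is why applications without the key still get the (lesser) fallback view.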

Comments, suggestions, flames, all very welcome. If/when we come to a consensus, I’ll write up a proper proposal with some guidelines for application authors. Thanks.


Hawkey progress

For the last week I’ve been continuing to work on the hawkey backend for PackageKit. It’s basically a small package manager backend that uses librepo to do the metadata checking and downloading, and hawkey to do the depsolving. To glue all of this together and do kinda critical things like assembling and running a librpm transaction, I’ve re-used globs of Zif, another little test project of mine.

Today marks a milestone. With librepo from rawhide, hawkey and PackageKit from git you can actually use the gnome-packagekit tools to install, remove and update packages. The latter was quite a bit of work, and I’ve been contributing patches like crazy to the hawkey project making sure all the pieces are in place.

[Screenshot: gnome-packagekit installing packages through the new hawkey backend]
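
If you want to poke at it, the PackageKit command line client exercises the same code paths as the GUI tools; this assumes you already have librepo from rawhide plus hawkey and PackageKit built from git, as described above:

pkcon refresh          # fetch and check the repository metadata (librepo)
pkcon install gimp     # depsolve with hawkey, then run the librpm transaction
pkcon get-updates      # list the available updates
pkcon update           # apply them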

Of course, we’re not doing this for gnome-packagekit, we’re looking forward to the future. If everything goes to plan, in Fedora 20 we’ll have a shiny new software center called gnome-software that will use the hawkey backend on Fedora to perform like a modern software store. No “waiting for locks”. No “downloading metadata” at inopportune times. With the new backend we can make the user experience of the new UI an order of magnitude better than the old tools.

[Mockup: the gnome-software home page]

And now, I’m going to eat ice-cream. Have a good weekend everyone.


The future of package management in Fedora

I spent a couple of days last week in Brno talking about the future of package (and application) management in Fedora. Things we discussed:

  • We currently fail desktop users by exposing details of the packaging layer (GPG keys, packages, etc.) and by not letting them search while we install/remove
  • We need a centralized application store to stay competitive with other distros and OSs
  • Our metadata and package mirroring policies hurt end users not using the command line
  • I presented (and demoed) gnome-software with its plugin architecture that would allow us to switch to using packages and blobs like glick and listaller
  • I made the case for application metadata, so we can get things like localised application details, screenshots and ratings using a few different methods

Some interesting points came up, and this is approximately 30 hours of talking condensed into a few bullet points:

  • YUM upstream will soon be considered deprecated, and we will move into a DNF/hawkey/librepo-based future. This includes PackageKit. I’m going to be building a hawkey-based backend with help from the maintainer, and he is aware of what PK does and of any unusual tasks that are performance critical.
  • We should keep old versions of metadata on the server to stop metadata refresh explosions happening, where yesterday’s primary gets updated because a transaction only has today’s filelists installed. This will significantly reduce the amount of bandwidth used by the metadata updates.
  • We should keep old versions of packages on the mirrors, to avoid the case where we depsolve fine, get a 404 on the package download and then have to re-download the MD, depsolve again, etc. YUM apparently has issues with multiple versions of a package being present in the metadata, so we should probably only reference the latest packages in the MD (which also keeps the MD to a sane size).
  • We should ship the per-arch .solv files in the repo MD. This stops SAT solvers like libsolv from spending 20+ seconds per repo rebuilding .solv files from sqlite or xml metadata, and allows us to kill the dnf cron job
  • We should teach rpm to update its own SAT database, which we can do with an RPM plugin.
  • We want a software center, and fedora-tagger can provide the ratings/comments information. We might need an OCS server for screenshots, or can tie in screenshots with automated QA somehow.
  • We are going to teach koji about appstream data, so a simple extract script (to be written by me) can produce a .tar file of icons and a .xml file of translated descriptions at the end of each koji build
  • We are going to teach the compose tools to xmlmerge all the appstream .xml files and ship as appstream.xml
  • We are going to teach the compose tools to join all the tar files and ship as appstream-data.tar.gz
  • We are going to investigate the use of meta-desktop files to install a super-set of applications, e.g. KDE, or “Python developer” which allows for screenshots, ratings and all that stuff.

It was an interesting couple of days, and quite a few people will be pinged over the next few days to make some of what we discussed a reality. I’m convinced the changes we can make here will give us a slick and featureful application installer, something that can really be an asset for Fedora and RHEL. Comments, as always, welcome.

Auto-EDID Results [updated]

A couple of weeks ago I asked people to run a command which uploaded all their auto-EDID display profiles to me. This was a massive success, with 1858 profiles being added to a large dataset. These were scanned by the cd-find-broken tool, and the results plotted on my G+ page. As there’s been so much new data, I’m updating the graphs:

[Chart: number of uploaded Auto-EDID profiles per display vendor]

I’m actually using this data to make sure we show something sane in the client UIs. Some interesting vendors are not included, e.g.:

  • “System manufacturer” (3 profiles)
  • “To Be Filled By O.E.M.” (4 profiles)

[Chart: vendors shipping broken or implausible display primaries in the EDID]

This is a chart of vendors Doing It Wrong™ by including random data (or implausible data) as the display primaries.

[Chart: programs that created the uploaded Auto-EDID ICC profiles]

This shows what program created the Auto-EDID ICC profile. Unknown is probably a mixture of oyranos and early versions of gnome-settings-daemon, which didn’t set the extra metadata.

[Chart: vendors that do not populate the serial number in the EDID blob]

Last graph, I promise. This shows a chart of all the vendors who do not populate the serial number in the EDID blob. I’ll explain why this is bad.

When we construct the device ID for colord, we use the vendor{-model}{-serial} as part of the key. This allows you to use different ICC profiles even if you’ve got two “identical” external panels attached. Without the serial number, “lenovo-foo” looks the same as “lenovo-foo” and colord treats them as if they were the same panel. This sucks unless the panels were bought at the same time and have identical backlight burn time. Ohh, and we can’t use the connection name (e.g. DVI-1), as it would suck if you had to reassign all your profiles when you moved the connector to DVI-2…
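
As a purely illustrative sketch of why the serial matters (the actual key format colord builds internally differs, and everything here is made up):

# Hypothetical composition of a display device ID from EDID fields:
make_device_id() {
    local vendor="$1" model="$2" serial="$3"
    echo "xrandr-${vendor}${model:+-${model}}${serial:+-${serial}}"
}
make_device_id lenovo foo B1234   # -> xrandr-lenovo-foo-B1234
make_device_id lenovo foo ""      # -> xrandr-lenovo-foo
make_device_id lenovo foo ""      # -> xrandr-lenovo-foo again: a collision

Two “identical” panels with empty serials collapse to the same device ID, so they are forced to share a single profile.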

This isn’t always a disaster: laptops, for instance. There we only need the make and model to ensure the ID is unique on the system, as you can’t typically have two internal panels installed. This explains the Lenovo, Samsung, Dell and Apple entries I think, so don’t get out the pitchforks just yet. Unfortunately there’s nothing in the ICC profile that says “this is a laptop”, so we can’t be more selective and hence this graph isn’t super useful. But even on laptops, vendors should really be doing something semi-sane with the serial number, even if it’s just the batch number.

A new 0.1.34 colord was released this week. Thanks again to everyone that uploaded profiles.


Auto-EDID Profiles Results

First, thanks to everyone who contributed ICC profiles. I’ve received over 800 uploads in a little under 24 hours, so I’m very grateful.

TLDR:

Total profiles scanned: 800
Profiles with invalid or unlikely primaries: 45 (5.6%)
EDIDs are valid 94.4% of the time

This resulted in the following commit to colord:

commit 87be4ed4411ca90b00509a42e64db3aa7d6dba5c
Author: Richard Hughes <richard@hughsie.com>
Date:   Wed Apr 24 21:47:14 2013 +0100
    Do not automatically add EDID profiles with warnings to devices

I’ll explain a little about what these numbers and the commit mean. The EDID reports three primaries, i.e. what real-world XYZ colors red, green and blue map to. It also tells us the whitepoint of the display. From basic color science we know that:

  • If R=G=B, we should be displaying a black/gray/white color without significant amounts of red, green or blue showing
  • The reported gamut of colors should not be bigger on real hardware than theoretical archival spaces like ProPhoto RGB.
  • If R=G=B=100%, then we should have a good approximation of D50 white
  • The whitepoint temperature of the display should not be too warm (<3000K) or too cold (>10000K); this last check is sketched below.
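
The real checks live in C inside colord and differ in detail, but as a rough shell sketch of just that temperature check, using McCamy’s well-known approximation to get a correlated color temperature from an xy whitepoint:

# Illustrative only: approximate the CCT from an xy whitepoint (McCamy),
# then apply the 3000K-10000K sanity bounds described above.
check_cct() {
    awk -v x="$1" -v y="$2" '
    BEGIN {
        n = (x - 0.3320) / (0.1858 - y)
        cct = 449*n^3 + 3525*n^2 + 6823.3*n + 5520.33
        printf "CCT = %.0fK: %s\n", cct, (cct < 3000 || cct > 10000) ? "FAIL" : "OK"
    }'
}
check_cct 0.3457 0.3585   # D50 whitepoint -> ~5000K, OK
check_cct 0.5000 0.4500   # implausibly warm whitepoint -> ~2500K, FAIL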

There are actually 11 checks colord now does on RGB profiles, similar to those listed above. If any of the 11 checks fail, the automatically generated profile is not used. The user can still add it manually if they want, and then of course it will be used for the monitor, but we don’t break things by default for 5.6% of users.
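
For the curious, manually adding a rejected profile looks something like this with colord’s command line client (subcommand names from memory, so treat it as a sketch rather than gospel):

colormgr get-devices
colormgr find-profile-by-filename ~/.local/share/icc/edid-*.icc
colormgr device-add-profile <device-object-path> <profile-object-path>

The last command takes the two D-Bus object paths printed by the first two commands.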

If anyone is interested, the results were generated by this program, and the raw results are available here. My personal take-home messages from this file are:

  • Sometimes blue and green are the wrong way around (Samsung SyncMaster)
  • Vendors need to use something other than random binary data (AU Optronics)
  • If you don’t know what a whitepoint is, don’t try and guess (Sharp and Lenovo YT07)
  • Projectors generally don’t really know/care what values to use (OPTi PK301 and In Focus Systems)

There’ll be a new colord release with this feature in the next couple of weeks.

Auto-EDID ICC Profiles

A favour, my geeky friends:

gnome-settings-daemon and colord-kde create an ICC profile based on the color information found in the EDID blob. Sometimes the EDID data is junk, and so the profile is also junk. This can do weird things to color managed applications. I’m trying to find a heuristic for automatically suppressing profile creation for bad EDIDs, such as the red primary being where blue belongs and that kind of thing. To do this, I need data. If you could run this command, I’d be very grateful:

# Upload each locally generated auto-EDID profile to the profile store:
for f in "$HOME"/.local/share/icc/edid-* ; do
    curl -i -F upload=@"$f" http://www.hughski.com/profile-store.php
done

This uploads the auto-EDID profiles to my webserver. There is no way I can trace this data back to any particular user, and no identifiable data is stored in the profile other than the MD5 of the EDID blob. I’ll be sharing the processed data when I’ve got enough uploads. If you think that your EDID profile is wrong, then I’d really appreciate you also emailing me with the “Location:” output from curl, although this is completely optional. Thanks!