Rethinking the Linux distribution

Recently I’ve been thinking about how Linux desktop distributions work, and how applications are deployed. I have some ideas for how this could work in a completely different way.

I want to start with a small screencast showing how bundles work for an end user before getting into the technical details:

http://www.youtube.com/watch?v=qpRjSAD_3wU

Note how easy it is to download and install apps? That’s just one of the benefits of bundles. But before we get to bundles I want to take a step back and look at the problems with the current Linux distribution model.

Desktop distributions like Fedora or Ubuntu work remarkably well, and have a lot of applications packaged. However, they are not as reliable as you would like. Most Linux users have experienced some package update that broke their system, or made their app stop working. Typically this happens at the worst times. Linux users quickly learn to disable upgrades before leaving for some important presentation or meeting.

It’s easy to blame this on a lack of testing and too many updates, but I think there are some deeper issues here that affect testability in general:

  • Every package installs into a single large “system” where everything interacts in unpredictable ways. For example, upgrading a library to fix one app might affect other applications.
  • Everyone is running a different set of bits:
    • The package set for each user is different, and per the above all packages interact, which can cause problems
    • Package installation modifies the system at runtime, including running scripts on the user’s machine. This can give different results due to different package sets, install order, hardware, etc.

Also, while it is very easy to install the latest packaged version of an application, other things are not so easy:

  • Installing applications not packaged for your distribution
  • Installing a newer version of an application that requires newer dependencies than what is in your current repositories
  • Keeping multiple versions of the same app installed
  • Keeping older versions of applications running as you update your overall system

So, how can we make this better? First, we make everyone run the same bits. (Note: from here on things get pretty technical.)

I imagine a system where the OS is a well defined set of non-optional core libraries, services and apps. The OS is shipped as a read-only image that gets loopback mounted at / during early boot. So, not only does everyone have the same files, they are using (and testing) *exactly* the same bits. We can do semi-regular updates by replacing the image (keeping the old one for easy rollback), and we can do security hot-fixes by bind-mounting over individual files.
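
As a sketch of how such a security hot-fix could work, the updater only needs a bind mount to put a patched file on top of the broken one on the read-only image. The paths here are hypothetical, just to illustrate the mechanism:

```c
/* Illustrative sketch, not shipped code: bind-mount a patched library
 * over the corresponding file on the read-only OS image.
 * Requires root (CAP_SYS_ADMIN). */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* /var/hotfix/libpng.so.3 is a hypothetical patched copy; the
     * target path lives on the loopback-mounted read-only image. */
    if (mount("/var/hotfix/libpng.so.3", "/usr/lib/libpng.so.3",
              NULL, MS_BIND, NULL) < 0) {
        perror("bind mount");
        return 1;
    }
    return 0;
}
```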

The core OS is separated into two distinct parts. Let’s call them the platform and the desktop. The platform is a small set of highly ABI-stable and reliable core packages. It would have things like libc, coreutils, libz, libX11, libGL, dbus, libpng, Gtk+, Qt, and bash. Enough Unix to run typical scripts, plus some core libraries that are supportable and that lots of apps need.

The desktop part is a runtime that lets you work with the computer. It has the services needed to start and log into a desktop UI, including things like the login manager, window manager, desktop shell, and the core desktop utilities. By necessity there will be some libraries needed in the desktop that are not in the platform; these are considered internal details, and we don’t ship header files for them or support third-party binaries using them.

Secondly, we untangle the application interactions.

All applications are shipped as bundles: single files that contain everything (libraries, files, tools, etc.) the application depends on, except that they can (optionally) depend on things from the OS platform. Bundles are self-contained, so they don’t interact with other bundles that are installed. This means that if a bundle works once it will keep working, as long as the platform stays ABI stable as guaranteed. Running new apps is as easy as downloading and clicking a file. Installing them is as easy as dropping them in a known directory.

I’ve started writing a new bundle system, called Glick 2, replacing an old system I did called Glick. Here is how the core works:

When a bundle is started, it creates a new mount namespace, a kernel feature that lets different processes see different sets of mounts. Then the bundle file itself is mounted as a FUSE filesystem at a well-known prefix, say /opt/bundle. This mount is only visible to the bundle process and its children. Then an executable from the bundle is started, which is compiled to read all its data and libraries from /opt/bundle. Another kernel feature called shared subtrees is used to make the new mount namespace share all non-bundle mounts in the system, so that if a USB stick is inserted after the bundle is started it will still be visible in the bundle.
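
To make the mechanics concrete, here is a minimal sketch of what such a launcher could look like. The `bundle-fuse` helper is an assumption standing in for the actual FUSE mounting code, and a real launcher needs privileges for the namespace setup (e.g. a setuid helper):

```c
/* Sketch of a bundle launcher: private mount namespace, slave
 * propagation so later system mounts stay visible, FUSE mount of the
 * bundle at /opt/bundle, then exec of the bundled app. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mount.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    char cmd[4096];

    if (argc < 3) {
        fprintf(stderr, "usage: %s BUNDLE APP\n", argv[0]);
        return 1;
    }

    /* New mount namespace: mounts we create are invisible outside. */
    if (unshare(CLONE_NEWNS) < 0) {
        perror("unshare");
        return 1;
    }

    /* Shared subtrees: mark everything a slave so mounts made in the
     * parent namespace (say, a USB stick inserted later) still
     * propagate to us, while our bundle mount does not leak back. */
    if (mount(NULL, "/", NULL, MS_REC | MS_SLAVE, NULL) < 0) {
        perror("mount slave");
        return 1;
    }

    /* Mount the bundle at the well-known prefix via the (assumed)
     * FUSE helper; the app inside is built to use /opt/bundle paths. */
    snprintf(cmd, sizeof cmd, "bundle-fuse '%s' /opt/bundle", argv[1]);
    if (system(cmd) != 0) {
        fprintf(stderr, "mounting bundle failed\n");
        return 1;
    }

    execv(argv[2], &argv[2]);  /* e.g. /opt/bundle/bin/app */
    perror("execv");
    return 1;
}
```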

There are some problematic aspects of bundles:

  • It’s a lot of work to create a bundle, as you have to build all the dependencies of your app yourself
  • Shared libraries used by several apps are not shared, leading to higher memory use and more disk i/o
  • It’s hard for bundles to interact with the system, for instance to expose icons and desktop files to the desktop, or to add a new mimetype

In Glick 2, all bundles are composed of a set of slices. When the bundle is mounted we see the union of all the slices as the file tree, but in the file itself they are distinct bits of data. When creating a bundle you build just your application, then pick existing library bundles for the dependencies and combine them into a final application bundle that the user sees.
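
One way to picture such a file is a header with a slice table followed by the slice data. This layout is my own guess for illustration, not Glick 2’s actual format:

```c
/* Hypothetical bundle layout: a slice table up front, slice data
 * after it. The union of all slices is what gets mounted. */
#include <stdint.h>

#define SLICE_EXPORTED 0x1       /* slice should be visible system-wide */

struct slice_entry {
    uint64_t offset;             /* where this slice's data starts      */
    uint64_t length;             /* size of the slice data in bytes     */
    uint32_t flags;              /* e.g. SLICE_EXPORTED                 */
    uint8_t  checksum[32];       /* e.g. SHA-256 of the slice data,
                                    the key used for de-duplication     */
};

struct bundle_header {
    uint32_t magic;              /* file type marker                    */
    uint32_t n_slices;
    struct slice_entry slices[]; /* followed by the slice data itself   */
};
```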

With this approach one can easily imagine a whole ecosystem of library bundles for free software, maintained similarly to distro repositories (ideally maintained by upstream). This way it becomes pretty easy to package applications in bundles.

Additionally, with a set of shared slices like this used by applications, it becomes increasingly likely that an up-to-date set of apps will be using the same build of some of their dependencies. Glick 2 takes advantage of this by using a checksum of each slice and keeping track of all the slices in use globally on the desktop. If any two bundles use the same slice, only one copy of the slice on disk will be used, and the files in the two bundle mounts will use the same inode. This means we read the data from disk only once, and we share the memory for the library in the page cache. In other words, they work like traditional shared libraries.
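
One way to implement this sharing is a content-addressed store: before mounting a slice, look it up by checksum and reuse an existing copy if one is already on disk. The store location and helper names below are made up for the sketch:

```c
/* Sketch of checksum-based slice sharing: slices live in a
 * content-addressed store, so identical slices from different bundles
 * resolve to one file (one inode, one copy in the page cache). */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Build the store path for a slice, e.g.
 * $HOME/.cache/bundle-slices/<hex sha-256>  (hypothetical layout). */
static void slice_store_path(const char *hex_checksum,
                             char *buf, size_t len)
{
    snprintf(buf, len, "%s/.cache/bundle-slices/%s",
             getenv("HOME"), hex_checksum);
}

/* Return the shared copy to mount from if some bundle already
 * provided this slice, or NULL so the caller extracts the slice
 * into the store first. */
static const char *find_shared_slice(const char *hex_checksum,
                                     char *buf, size_t len)
{
    slice_store_path(hex_checksum, buf, len);
    return access(buf, R_OK) == 0 ? buf : NULL;
}

int main(void)
{
    char path[4096];
    const char *hit = find_shared_slice("deadbeef", path, sizeof path);

    if (hit)
        printf("reuse shared copy: %s\n", hit);
    else
        printf("extract new slice to: %s\n", path);
    return 0;
}
```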

Interaction with the system is handled by allowing bundle installation. This really just means dropping the bundle file in a known directory, like ~/Apps or some system directory. The session then tracks files being added to this directory, and whenever a bundle is added we look at it for slices marked as exported. All the exported slices of all the installed bundles are then made visible in a desktop-wide instance of /opt/bundle (and in the process-private instances).

This means that bundles can mark things like desktop files, icons, dbus service files, mimetypes, etc. as exported and have them globally visible (so that other apps and the desktop can see them). Additionally, we expose symlinks to the installed bundles themselves in a well-known location like /opt/bundle/.bundles/<bundle-id>, so that e.g. the desktop file can reference the application binary in an absolute fashion.
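
A sketch of that session-side tracker: watch the installation directory with inotify and, for each bundle that shows up, expose it under /opt/bundle/.bundles. Real code would also open the bundle and publish its exported slices; the directory names are assumptions:

```c
/* Sketch: watch ~/Apps for new bundles and symlink them into the
 * well-known location so desktop files can reference them. */
#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/inotify.h>
#include <unistd.h>

int main(void)
{
    char apps[PATH_MAX], buf[4096];
    ssize_t n;

    snprintf(apps, sizeof apps, "%s/Apps", getenv("HOME"));

    int fd = inotify_init();
    if (fd < 0 || inotify_add_watch(fd, apps, IN_CREATE | IN_MOVED_TO) < 0) {
        perror("inotify");
        return 1;
    }

    while ((n = read(fd, buf, sizeof buf)) > 0) {
        for (char *p = buf; p < buf + n; ) {
            struct inotify_event *ev = (struct inotify_event *) p;
            if (ev->len > 0) {
                char src[PATH_MAX], dst[PATH_MAX];
                snprintf(src, sizeof src, "%s/%s", apps, ev->name);
                snprintf(dst, sizeof dst,
                         "/opt/bundle/.bundles/%s", ev->name);
                symlink(src, dst);  /* ignore EEXIST for the sketch */
                /* here we would also scan the bundle for exported
                 * slices and publish them desktop-wide */
                printf("installed bundle: %s\n", ev->name);
            }
            p += sizeof *ev + ev->len;
        }
    }
    return 0;
}
```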

There is nothing that prohibits bundles from running on regular distributions too, as long as the base set of platform dependencies is installed, for instance via distro metapackages. So bundles can also be used as a way to create binaries for cross-distro deployment.

The current codebase is of prototype quality. It works, but requires some handholding, and lacks some features I want. I hope to clean it up and publish it in the near future.

85 thoughts on “Rethinking the Linux distribution”

  1. I think the idea in general could make sense, but I’m skeptical of an app installation story that forces me to deal with the file system. I think Android allows you to install an APK that just goes in the right place after you have downloaded and activated it. This would take away the test-run part, but if we have an easy uninstall story, then that should be all right since most apps should be quick to install anyway.

    1. Andreas:
      Yeah, one could easily imagine extra stuff like an install handler that auto-installs bundles in the right place when you download and click on them, or an updater that regularly updates some bundles from some central store.
      You don’t necessarily have to manually download and install a file each time.

  2. Hi Alex,

    from the ROX desktop guys there was a somewhat similar concept called 0install (http://0install.net/). Didn’t really try it, but I was using app bundles (which were extracted tarballs in their case) in the ROX desktop.

    Cheers,
    Daniel

  3. Wow, this looks great. This is precisely what is needed in my opinion.

    Do you think there is a chance of having this in GNOME 3.4 or GNOME 3.6?

    1. Leaves:
      I dunno if this will ever be rolled out for real. I’m thinking about it and experimenting with code, but actually rolling it out would be a lot of work…

  4. Thank you for expressing so clearly what has been bothering me for some time now. It would pretty much fix all the problems around upgrading to newer application versions: having to upgrade the entire system, or risking breaking other applications by updating system-level dependencies.

    It would also help with the poor scalability of the current packaging system. App developers would be more or less directly responsible for their own apps.

    You might be interested in the http://portablelinuxapps.org/ project which tried to do something very similar.

    Would this pose a problem for app sandboxing efforts? I presume you are planning on making this a D-Bus enabled daemon. I would suggest that the default action be to offer the user the choice of installing the application system-wide or only for that user.

    About the “[..] then pick existing library bundles” part. Would this go through a central online library?

    How about a nicer name? Something agnostic.

    Please try and make this a real cross-distro effort. And again thanks for making such a thoughtful proposal to a pretty big problem. Cheers.

  5. Great analysis!
    This is exactly how my Listaller project is supposed to work, except that I’m not using “bundles” but some kind of AppDirectory-like approach, which also allows extra dependencies (= shared resources) to be installed and integrated with PackageKit and Software Centers like the USC 🙂
    It would be cool if we could get in contact about this topic, feel free to email me!

  6. Hello Alex,

    Interesting idea, and I definitely think it is good to consider, and re-imagine, the way a Linux distribution works.

    This seems similar to the OS X .dmg format perhaps? One criticism of this sort of packaging is that a security fix in a bundled library requires a new bundle for every application using it, unlike with a single shared library install. This might be a problem.

    Nevertheless, great to see the work!

    Alastair

  7. How about maintaining a list of trusted keys and allowing bundles signed with said keys to be installed without all the “chmod +x” magic? This could result in a distro providing just a core set of components, pre-trusting its own key and handling the rest with bundles.

  8. zekopeko, sherringham, Patrys:

    This posting is mostly about the low-level workings of the OS. The distribution channel for apps/bundles is really orthogonal to the low-level implementation. It’s easy to imagine a layer above this that has things like automatic updates and app-store-like frontends.

  9. This sounds very similar to the PBIs (Push Button Installer) that were developed as an alternative to ports and packages for PC-BSD. Also, I believe the Chakra Linux project has “Bundles” for GTK apps, since they focus on being a Qt-only distribution. Just curious whether you have looked at either of those examples already in use in those projects?

    1. Mike:
      I have looked at some bundle implementations before, but not PBI. I did look briefly at Chakra today.
      Bundles are hardly a new solution, but I think some of the ideas in Glick 2 are quite interesting for large-scale deployment.

  10. I think the idea of a readonly core OS is quite cool!

    The bundle installation system you describe is quite similar to how applications are distributed on OS X. And yes, it requires more effort to create individual application bundles, but the end result is that application installation/removal is much easier for the end user.

  11. What about using something like AUFS to construct a faux environment for the app?

    I have liblib 3.0 and liblib 4.0, but it’s just liblib.so in /usr/lib, usually. So in this case, I have /usr/local/liblib/3.0.so and liblib/4.0.so. An application has a virtual chroot that has linked /usr/local/liblib/3.0.so to its /usr/lib/liblib.so. In this fashion, installing liblib 4.0 doesn’t break the old one. And we’re building on top of existing concepts and technologies.

    This sort of system could even tie into security update/advisory systems… “Warning: app foo uses liblib 3.0, which has a known security vulnerability. Proceed with launch?”

  12. As @Andreas Nilsson said, something even easier could be made for the user with a system like that. Something like: receive the file > double-click on it > then get a window saying:

    – This is an application file, what do you want to do?
    * choice: run it (automatically apply the execute permission)
    * choice: install it in my apps, and run it (automatically copy to the right app folder, then run)
    * choice: install it in my apps, and close this (automatically copy to the right app folder)

  13. Hi, there are clear advantages to the approach you’re going for, but out of curiosity, how would you tackle the following issues:
    – Bundles will, on average, require more disk space. So how does that fit in with the rise of solid-state storage (much less disk space on average) and of (limited-data) 3G/4G wireless connections which people use to download them?
    – When updating API-stable libraries, many under-the-hood improvements come along, like improved performance, bug fixes, etc., so updates to these libraries improve all apps that use them.

    1. Jan:
      Obviously there are negative aspects to bundling libraries. Not getting improvements is one, not getting security fixes is another.
      I don’t have a good answer to these, apart from the fact that a system that makes it easier to get updates would make it less problematic.

      However, I really think that *not* bundling has such inherent problems that it makes the whole OS unusable for lots of people. And if people can’t use it, then minor advantages of the packaging system don’t really matter.

      As for the space requirements, an intelligent downloader could avoid downloading some duplicated data, but the on-disk requirements are still there. I don’t think it necessarily is that much of a problem though. Yes, SSDs are smaller, but libraries are rarely that big.

  14. This idea is essentially very old, and used in some other systems. There is a trade-off associated with bundles and we may or may not want to pay the price.

    The most obvious one is the security updates. Imagine another DigiNotar going down and suddenly all bundles related to SSL have to update. Most bundle maintainers are not experienced enough (hey, we are humans, too) to deal with stuff like public key revocation. Therefore, a huge security fiasco will be bound to happen.

    Another issue, one that requires extremely smart solutions to be practical, is the size of bundles. Suddenly you have to ship essential libraries with every bundle, and that will cost you bandwidth (which is expensive, unlike disk space).

    Most people who are familiar with apt/dpkg and have also worked with java will immediately cry foul if bundles were to be the default mode of deployment in linux.

    Bundles have clear merits that make them the optimal deployment mode in certain scenarios, such as initial testing of an application (to see how it works under the developer’s environment), quick deployment for demonstration or proof of concept, deployment when you can have SLAs, e.g. onto web services, etc.

  15. I’ve been thinking more and more about this topic for a while now, but your findings, and code, exceed my thoughts.

    It would be great to have it as a GNOME previewer thing, maybe not for the full GNOME 3.x version but instead on a per-app basis.

    I’m thinking of trying GNOME 3.2 on my Gentoo box, but I don’t want to ruin the full system, and that’s one of the main points that would be solved with your approach.

    Keep up the good work!

  16. Jan: Disk space is cheap (and $ per MB will probably go down fast for SSDs too). The thing that tends to eat hard drive space is media such as music and movies, to the extent you store that locally. Unlimited data plans for 3G/4G seem to be becoming more and more common, at least in Sweden. Progress is on our side. 🙂

  17. 1. How are platform ABI breakages handled? Sometimes (as any user of Gentoo knows) the platform ABI changes, which results in massive rebuilding of the system. With a central repository it is as simple as pushing new package bumps or even simply binary diffs (which are probably small if the difference is rebuilding against libpng 1.5 instead of 1.4), but the problem is that now you need to keep compatibility with programs in the wild.

    From what I’ve heard, the big problem Microsoft has is keeping backward compatibility. Each time they diverge from certain behaviour there is an outcry that it broke some programs, and from what I’ve heard it slowed down the introduction of security features (at least between 2000 and 7) – applications sometimes broke because they had unspoken assumptions (say, an accounting application which in 2011 runs only on 32-bit Windows or in XP Mode). I suspect this is part of why there is a push for an application store in Windows, as they can keep it more under control and know if something broke.

    2. What about needing a new platform API? Either developers would limit themselves to well-known API milestones of, say, GTK+, or they would bundle GTK+. As users/distros can differ greatly (say, the newest Arch versus Debian Lenny), the result may be that everything gets bundled (often the current practice for binary bundles).

    3. The previously mentioned problem with security. I believe we may have talked about this during DS – downstream has the resources to track single packages (and even then it is not necessarily true – I know of at least 2 distros in the top *5* on DistroWatch which contain an outdated version of a library in a supported release, for which a bugfix release has existed, in one case for more than a year). Individual developers upstream might not have sufficient time to track them. Even downstream often relies on users to request library/program updates, or has an automated process that is manually resolved. Requiring that of a developer who wrote a simple program might be limiting.
    I’ve observed on Windows that when python/perl/tcl is bundled along with a program, a security tool which scans for outdated programs (Secunia PSI) often notices that the version of, say, python or gtk is outdated (it’s true for both open source and closed source programs) and contains some security bug.

  18. I think you have a solution searching for a problem. Ubuntu’s PPAs solved pretty much every issue developers/users might have had with deploying software on Linux. The only remaining problem is how to distinguish between trusted and untrusted PPAs (apt-url already had one-click activation of PPAs).
    But I think the Ubuntu App Developer website is already well on its way to solving that problem as well…

  19. I like a lot of things about this idea, especially cross-distribution deployment without worrying too much about the dependencies. I think it would be good if there were an alternative to using different repositories and package formats under each individual distribution. This would provide a centralized way of distributing packages to everyone, no matter what distro they use, but still give them choice over which versions and modifications they want for their specific distribution.

    I think I might have to reread this, but it seems that the huge problem of duplicated libraries taking up disk space could become an issue. However, from what I can tell this seems to be a much more conscientious approach than previous ‘bundle’ ideas on offer.

    I think it’s important that we take care of the basics as distributions before we focus too much on ourselves. It’s a sad day when people are choosing their distribution based solely on package availability, ignoring the customizations certain communities provide. Unity, for example, is hard to get in anything but Ubuntu. It’d be nice if it were packaged in an easily distributable way like these bundles, so that users who like something else about Fedora don’t have to use Ubuntu just to get Unity.

    I think flexibility is the number one complaint I hear about package management and upgrades.

  20. @Ryan So no, I don’t work for them. I tend to get most of my income from Mozilla. Story/workflow I guess. Not a native English speaker. I’m also sorry for using that word twice in the post, should have used something else the second time at least to give it a better flow…

  21. Andreas: Yep, the SSD issue (with regard to the size of the libraries we ship) is likely to be a non-issue in the very near future, but I’m not so sure about 3G/4G connections. It’s very probable these types of connections will be the most prevalent way to access the internet in the future, and while in Sweden fast and unlimited connections might be the norm, this will most likely not be the case in countries like Brazil, China, India and the US for a very, very long time. An easy way to figure out how big of a deal this really is might be to run some math comparing application installation/update sizes with both approaches for an average distribution.

  22. Hi Alex,

    There are lots of comments here about similar systems. It would be great to see a comparison of them with Glick2.

    I see 0install.net has already been mentioned, but here’s my take on how it relates to Glick’s features (its goals are the same):

    “Every package installs into a single large “system” where everything interacts in unpredictable ways. For example, upgrading a library to fix one app might affect other applications.”

    In 0install, library versions are always selected per-application. If prog X needs a newer (“testing”) version of a library, that doesn’t affect any other program’s choice to use the “stable” version.

    “Package installation modify the system at runtime, including running scripts on the users machine. This can give different results due to different package set, install order, hardware, etc.”

    0install never runs scripts during install. The result of installing a set of packages is the same regardless of the order.

    “Installing applications not packaged for your distribution”

    Easy since there is no central repository.

    “Installing a newer version of an application that requires newer dependencies than what is in your current repositories”

    The correct dependencies will be used automatically, based on version requirements in the program’s metadata.

    “Keeping multiple versions of the same app installed”

    This happens automatically with 0install. For example, the 0test program can be used to test combinations of program and library versions for any application.

    http://0install.net/0test.html

    “Keeping older versions of applications running as you update your overall system”

    Since new versions install to a new directory, this works fine.

    “Glick 2 takes advantage of this by using a checksum of each slice and keeping track of all the slices in use globally on the desktop. If any two bundles use the same slice, only one copy of the slice on disk will be used, and the files in the two bundle mounts will use the same inode.”

    0install also does this, except that it notices it already has the library before downloading it, so it saves on download time too.

    “This means that bundles can mark things like desktop files, icons, dbus service files, mimetypes, etc as exported and have them globally visible (so that other apps and the desktop can see them).”

    Doesn’t that defeat your isolation goals above?

    Features in 0install but not (apparently) in Glick2:

    – Automatic updates
    – BSD, Mac OS X and Windows support
    – Solver (selecting libraries dynamically, e.g. using the latest libssl)
    – Compile-from-source
    – Native package manager integration (e.g. PackageKit)
    – Digital signatures
    – Mirrors

  23. Your idea sounds interesting. I personally wouldn’t want to have to use a single set of “non-optional core libraries” as I really like my options and would be concerned with who decided what the “non-optional libraries” would be.
    You sound like you have given this quite a bit of thought and invested some effort.
    Have you seen GoboLinux? They have worked along similar lines for many years.

    http://www.gobolinux.org/

    Although their methods may differ some from yours, they achieve the ability to have multiple versions of the same software as well as multiple versions of libraries installed on the same system.
    I personally haven’t used Gobo in a few years but it worked well when I did.
    Check into their work, maybe it can give you some ideas on your own effort.

  24. Glick very much resembles app installation in Mac OS X – basically just moving .app packages into the Applications folder – perhaps it would be nice to make a comparison between the two.

  25. I think the simplest solution is to distribute a tar containing all the needed binaries (app + libs).
    The challenge is to make the desktop aware of the “installed” application.
    Probably the solution is a freedesktop standard for how to register a directory for some service (menu icon, mime type). In this way applications can be treated the same way as plug-ins in a browser.
    Chrooting every application is not really needed IMHO.

  26. Hi Thomas Leonard, no need for other OS support (in fact, Mac OS X actually has had this feature for quite some time, except for applications that came with drivers or other files which needed system-wide installation). There’s also no need for integration with PackageKit – what I see here is the possibility that application developers themselves would be able to easily distribute binaries via Glick as well, and as long as the integrity of such distribution can be guaranteed, this could be a much more efficient way of distributing binaries – a package manager would only need to collect links to Glick packages, download them on request and drop them into the Apps folder – which is basically how the Mac OS X App Store works 🙂

    A very hot candidate for this would be e.g. Ardour – a digital audio workstation based on gtk+; the author of this application has been shipping numerous libraries (such as gtkmm) along with the source code of the actual application.
    http://blog.flameeyes.eu/2009/04/19/application-binary-interface

  28. This is a great idea, but more for commercial applications and the like. For platform stuff, this just isn’t that needed.

    Regarding API/ABI stability, LMAO. Linux totally sucks in that regard, and it has completely kept it from advancing on the desktop in the marketplace. Commercial developers have to target 8000 different platforms and versions, or focus on a couple of the bigger ones, which is still a clusterfuck, and they end up giving up in frustration – not to mention getting sick of the “why not this distro?” questions. If Linux had API/ABI stability in the kernel and through the core libraries across major versions, it would help, but it still sucks. As one guy above mentioned, Windows has the problem of running old apps – might be a problem, but it sure is nice to know an app I created 10 years ago will likely still work OK on a current desktop.

    Unless you can beat some common sense into the kernel devs and have that trickle up the library chain, give it up.

  29. Alexl:
    You might want to look at the currently comatose distro Morphix. It utilized a similar strategy of overlaid cloop modules to make customized live CDs.

  30. Foo,

    Would you mind supporting your assertion with some facts?

    I’m not sure how the idea doesn’t scale. This is what every Mac, iPhone and iPod Touch and iPad on the planet does these days. Given that it’s scaled on these platforms so far to several millions of units, your statement is bogus.

    GC

  31. First things coming to mind are:
    When will gnome people learn that the majority of the current desktop user base DOES NOT want some lame reinvention of the wheel! There’s already /usr/local, which on most Linux distributions stays empty! Why am I going to need an “App” folder when /usr/local is standard and much more flexible?!

    Why, oh why, am I wasting disk space by having the same libraries multiple times? Flash storage (where all programs should be) is not cheap!

    One of the biggest problems in the modern free desktop is that the people working on it have no idea of all the good things in the Unix tradition, and are trying to copy not-very-good ideas from Mac OS and/or Windows.
    Btw, even Apple nowadays goes with a Linux-like package manager reimplementation (the App Store).

  32. @Erick: As for API/ABI stability in the kernel – you have API/ABI stability of the kernel’s userspace API. The internal API is more flexible, to allow pushing things forward (like the move from synchronous USB to asynchronous USB somewhere in 2.6.x). Unless you write a module (in most cases you want to push it into the kernel anyway – I guess ck/tuxonice are a few of the ‘justified’ cases) you don’t even notice API/ABI stability.

    As for library APIs – otherwise you end up with something like the 2 interfaces for sorted sets in Java 1.7, as they couldn’t extend the previous one (NavigableMap and SortedMap). You still have Enumeration and Iterator, which do basically the same thing.

    As in libgee I was less constrained by API/ABI compatibility, I simply broke it in a new version. Most users wouldn’t notice, but the API/ABI might be broken.

    In GTK+, if they had wanted to stay backward compatible they would still be stuck with an outdated stack which couldn’t be efficiently accelerated (I don’t know what things looked like in GTK+ 1.2).

    And as far as Windows is concerned – they put enormous effort into not breaking programs, and they still ‘manage’ to break some. IIRC some game used a freed pointer, and on Windows 98 it worked fine (despite being illegal). However, Windows 2000 or XP started checking if a pointer was freed (I might’ve mixed up the versions of Windows) and the game crashed. There was an outcry that the new version of Windows was broken. Solution – ‘if the program is <that game>, then skip check_if_pointer_was_freed’.
    Yes – you make sure that programs run, but the problem is that programmers might still use the API incorrectly, which results in such problems. You need manpower for any change, to make sure that the new version is compatible (or even bug-compatible) with the previous version.

  33. @Marek: “no need for other OS support (in fact, MacOSX actually did have this feature for quite some time, except for applications that came with drivers or other files which needed system-wide installation)”

    The difference is, when I write a program and publish it with 0install then Linux, Mac and Windows users can install and run my package (assuming the program itself is cross-platform, of course). If I made a Glick package, it would only run on Linux. I’d have to find Mac and Windows developers to make separate packages for those platforms.

    Regarding PackageKit integration, you’re right that, in theory, 0install doesn’t need this. If every library author published a 0install feed for their library then application authors could depend on them directly and everything would work fine. But in practice there are many libraries that are available as distribution packages but not yet as 0install feeds, and you still need to be able to depend on them.

  34. Personally, and given the huge number of past attempts that didn’t really have much success, I think this is not gonna work.

    For one, there is the ABI stability issue. It’s quite hard to decide what is part of the ABI and what is not. If we take Firefox, the issue with sqlite shows they would require the ability to ship their own sqlite, yet sqlite would be considered platform (at least on Android or OS X). Unless we start making changes to the core OS, there will be a drift. So we would start to have 2 different versions, which is bad for disk usage (but not that bad, really, even if shipping bigger disks may cost more on a global scale), but more importantly, bad for memory usage. Shipping your own version of a library is what people on Windows tend to do, and also on Mac, and that’s also why they require more memory (at least, on my own MacBook, I couldn’t run OpenOffice and Mail.app at the same time without having problems).

    The update issue is IMHO moot. This one would be easy to solve: just use something like an RSS feed to say “a new version exists”. But as said before, that solves it only for the application, not for the rest of the bundled platform. I.e., the authors of the bundle would have to make sure that everything they ship keeps working fine.

    One issue that was overlooked is how to ship something with more than one binary, let’s say LibreOffice. You cannot just click on it, because there are 5 different binaries.

    Another issue is for Eclipse, where you would have to bundle a whole JDK if it’s not part of the base distribution (the same goes for python3, mono, and everything else). Either you bundle everything, and then it starts to be quite big (i.e. Eclipse + JDK + libraries), or you have them as part of the OS (with on-demand installation), and then you need to be sure of the ABI, with all the issues of bugfixes (like, what if a bug is found in the platform version of the interpreter – should it stay uncorrected, how can we ensure it gets the fixes, and how do we detect it?).

    There is the whole issue of trust, but I think that doesn’t need solving. People interested in a secured and trusted OS would not use bundles, I guess, and the others do not care, so it is useless to try to fix this, especially if most of the fix will just result in “let’s click yes each time”. If something is successful, we will see malware on it, so just take that as a measure of success.

  35. That’s how it always worked in Windows, leading to incidents like http://support.microsoft.com/kb/873374. That’s what ruby also has, leading to friction with Debian. So the idea is not new, but the security people from the existing distros will probably not buy it, for the same reasons as they don’t buy static linking and bundling in general.

    It does have positive aspects. E.g. it is not affected by Free Software propaganda that most distros propagate but not all users accept or even understand. Also it is not affected by some incompetent packagers forcing the app to link against the system-provided libraries even though they are incompatible (as happened, e.g., with Hadoop’s HBase in Debian – they forced it to use Debian’s jruby, thus making it impossible to create certain kinds of tables from the command line, see http://bugs.debian.org/587120). And as already said, a well-defined platform ABI attracts commercial developers.

    So, while your post is definitely heresy from any current distribution’s standpoint, it may well be that you are right just because Windows (which is based on essentially the same model) is so successful.
