An Introduction to Shmem/IPC in Gecko

We use shared memory (shmem) pretty extensively in the graphics stack in Gecko. Unfortunately, there isn’t a huge amount of documentation regarding how the shmem mechanisms in Gecko work and how they are managed, which I will attempt to address in this post.

Firstly, it’s important to understand how IPC in Gecko works. Gecko uses a language called IPDL to define IPC protocols. This is effectively a description language which formally defines the format of the messages passed between IPC actors. The IPDL code is compiled into C++ code by our IPDL compiler, and we can then use the generated classes in Gecko’s C++ code to do IPC-related work. IPDL class names start with a P to indicate that they are IPDL protocol definitions.
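
As a concrete sketch of the C++ side: a protocol PFoo with a message Bar would compile to PFooParent and PFooChild base classes, which Gecko code subclasses to implement the message handlers. All the names below are invented for illustration, and the exact generated signatures vary between Gecko versions:

class FooParent : public PFooParent
{
  // The IPDL compiler generates a Recv* hook for each message this actor
  // can receive; our subclass implements it to handle the message.
  bool RecvBar(const int32_t& aValue) override
  {
    // ... do something with aValue ...
    return true; // returning false is treated as a protocol error
  }
};

// Sending in the other direction uses a generated Send* method:
// fooChild->SendBar(42);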

IPDL has a built-in shmem type, simply called mozilla::ipc::Shmem. This holds a weak reference to an underlying SharedMemory object, and Gecko code operates on the Shmem. SharedMemory is the underlying platform-specific implementation of shared memory; it underpins the shmem subsystem by making the API calls each platform requires to allocate and deallocate shared memory regions, and to obtain their handles for use in the different processes. Of particular interest is that on OS X we use the Mach virtual memory system, which uses a Mach port as the handle for the allocated memory regions.

mozilla::ipc::Shmem objects are fully managed by IPDL, and there are two different types: normal Shmem objects, and unsafe Shmem objects. Normal Shmem objects are mostly intended to be used by IPC actors to send large chunks of data to each other, as this is more efficient than serialising the data over the IPC channel. They have strict ownership policies which are enforced by IPDL: when the Shmem object is sent across IPC, the sender relinquishes ownership, and IPDL restricts the sender’s access rights so that it can neither read nor write to the memory, whilst the receiver gains these rights. These Shmem objects are created and destroyed in C++ by calling PFoo::AllocShmem() and PFoo::DeallocShmem(), where PFoo is the Foo IPDL interface being used. One major caveat of these “safe” shmem regions is that they are not thread-safe, so be careful when using them on multiple threads in the same process!
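
As a hedged sketch of what this looks like in practice (the actor, message and helper names here are made up, and the exact AllocShmem() signature, particularly the shared memory type argument, has varied between Gecko versions):

bool FooParent::ShipData(const uint8_t* aBytes, size_t aLength)
{
  mozilla::ipc::Shmem shmem;
  // Ask IPDL to allocate a shmem region tied to this actor's channel.
  if (!AllocShmem(aLength, mozilla::ipc::SharedMemory::TYPE_BASIC, &shmem)) {
    return false;
  }
  memcpy(shmem.get<uint8_t>(), aBytes, aLength);
  // Sending the Shmem transfers ownership: after this call the sender can
  // no longer read or write the region, and the receiver can.
  return SendData(shmem);
}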

Unsafe Shmem objects are basically a free-for-all in terms of access rights. Both sender and receiver can always read and write to the allocated memory, so care must be taken to avoid race conditions between the processes accessing the shmem regions. In graphics, we use these unsafe shmem regions extensively, but lock rigorously to ensure correct access patterns. Unsafe Shmem objects are created by calling PFoo::AllocUnsafeShmem(), but are destroyed in the same manner as normal Shmem objects, by simply calling PFoo::DeallocShmem().
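
The only difference is at allocation time; everything else, including cleanup, works as before (again, a sketch with illustrative names and version-dependent signatures):

mozilla::ipc::Shmem shmem;
// Both sides retain read/write access to an unsafe shmem, so every access
// must be guarded by the caller's own synchronisation (e.g. a lock).
if (!AllocUnsafeShmem(aLength, mozilla::ipc::SharedMemory::TYPE_BASIC, &shmem)) {
  return false;
}
// ... share it with the other actor and touch it only under the lock ...
DeallocShmem(shmem); // cleanup goes through the same path as a normal Shmem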

With the work currently ongoing to move our compositor to a separate GPU process, there are some limitations to our current shmem situation. Notably, a SharedMemory object is effectively owned by an IPDL channel, and when the channel goes away, the SharedMemory object backing the Shmem object is deallocated. This poses a problem because we use shmem regions to back our textures: if and when the GPU process dies, it’d be great to keep the existing textures, simply recreate the process and IPC channel, and then continue on as normal. David Anderson is currently exploring a solution to this problem, which will likely be to hold a strong reference to the SharedMemory region in the Shmem object, thus ensuring that the SharedMemory object doesn’t get destroyed underneath us so long as we’re using it in Gecko.

Electrolysis: A tale of tab switching and themes

Electrolysis (“e10s”) is a project by Mozilla to bring Firefox from its traditional single-process roots to a modern multi-process browser. The general idea here is that with modern computers having multiple CPU cores, we should be able to make effective use of that by parallelising as much of our work as possible. This brings benefits like being able to have a responsive user interface when we have a page that requires a lot of work to process, or being able to recover from content process crashes by simply restarting the crashed process.

However, I want to touch on one particular aspect of e10s that works differently to non-e10s Firefox, and that is the way tab switching is handled.

Currently, when a user switches tabs, we do the operation synchronously. That is, when a user requests a new tab for display, the browser effectively halts what it’s doing until it is in a position to display that tab, at which point it will switch the tab. This is fine for the most part, but if the browser gets bogged down and is stuck waiting for CPU time, it can take a jarring amount of time for the tab to be switched.

With e10s, we are able to do better: we can switch tabs asynchronously. When the user requests a new tab, we fire off the request to the content process responsible for that tab and wait for a message back from that process saying “I’m ready to be displayed now”; crucially, we do not switch the selected tab in the user interface (the “chrome”) until we get that message. In most cases this should happen in under 300ms, but if we hit 300ms and still haven’t received the message, we switch the tab anyway and show a big spinner where the tab content would be, to indicate that we’re still waiting for the initial content to be ready for display.
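
The flow looks roughly like the sketch below. The real logic lives in the browser front-end rather than in anything with these names; every identifier here is invented purely to illustrate the shape of the asynchronous switch:

// Illustrative sketch only; invented names throughout.
void TabSwitcher::RequestTab(Tab* aTab)
{
  mPendingTab = aTab;
  SendRenderRequest(aTab); // ask the tab's content process to render
  StartSpinnerTimer(300);  // fallback deadline, in milliseconds
}

void TabSwitcher::OnTabReady(Tab* aTab)
{
  if (aTab == mPendingTab) {
    CancelSpinnerTimer();
    ShowTab(aTab); // only now does the chrome switch tabs
  }
}

void TabSwitcher::OnSpinnerTimerFired()
{
  ShowTab(mPendingTab); // switch anyway...
  ShowSpinner();        // ...with a spinner where the content would be
}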

Unfortunately, this affects themes. Basically, there’s an attribute on the tab element called “selected” which the CSS uses to style the tab accordingly. With the switch to asynchronous tab switching, I added a new attribute, “visuallyselected”, which the CSS now uses for that styling; the “selected” attribute now signifies a logical selection within the browser.

We thought long and hard about making this change, and concluded that this was the best approach to take. It was the least invasive in terms of changing our existing browser code, and it kept all our test assumptions valid (primarily that “selected” is applied synchronously with a tab switch). The impact on third-party themes that haven’t been updated is that if they style off the “selected” attribute, the tab strip will show the switch immediately, but the content area can still show the old tab’s content for up to 300ms, until either the new tab’s content is ready or the spinner is shown. Using “visuallyselected” works fine on non-e10s as well, because we now apply both the “selected” and “visuallyselected” attributes to the tab when it is selected in the non-e10s case.

Announcing my Version Sanitiser Firefox Addon!

After conducting extensive market research and analysing our users’ reactions to the Firefox version numbers (and by this I mean I spent five minutes on reddit), I have come to the following conclusions:

  • Arbitrary numbering schemes are a source of great pain and frustration for many of our users. This is a typical reaction when they find out about the way we do ours.
  • People get really, really, passionate about it.
  • Everybody loves kitties.

So, armed with this information, I set out to make everyone’s lives a little better. And came up with this: version π of my Version Sanitiser Firefox Addon!

It’s available right now from my people.mozilla.org space, so show your love for kittens today and install it!

Pushing to git from Mozilla Toronto

Today, Ehsan and I set up a highly experimental git server in Mozilla Toronto, and it seems to be working relatively well (for now). If you want access, give me a ping and I’ll sort it out for you (should be accessible to anyone on a Mozilla corporate network, I think). We’re still ironing out the kinks though, so please only use it if you’re fairly well versed with git.

The server (currently accessible via “teenux.local”) hosts repositories mirroring mozilla-central and mozilla-inbound, plus a push-only repository for try:

git@teenux.local:mozilla-central.git
git@teenux.local:mozilla-inbound.git
git@teenux.local:try.git

To use it, you need to add a new section to your .ssh/config like this:

Host teenux.local
User git
ForwardAgent yes
IdentityFile path/to/your/hg/private_key

You may also need to register the private key you use for hg.mozilla.org with ssh-agent by doing:

ssh-add path/to/your/hg/private_key

That should be it. Now you can go ahead and clone git://github.com/mozilla/mozilla-central.git and set up remotes for the repositories hosted on teenux to push to (e.g. git remote add try git@teenux.local:try.git). You can theoretically clone from teenux as well, but given that the server isn’t anywhere near as reliable as github, I recommend you stick to github for fetching changes and only use teenux for pushing.

There is a minor caveat: you must push to the master branch for this to work. So a typical command would be:

git push -f try my_local_branch:master

Enjoy!

Using git to push to Mozilla’s hg repositories

Last night I spent a huge amount of time working with Nicolas Pierron on setting up a two-way git-hg bridge to allow for those of us using git to push straight to try/central/inbound without manually importing patches into a local mercurial clone.

The basic design of the bridge is fairly simple: you have a local hg clone of mozilla-central, which has remote paths set up for try, central and inbound. It is set up as an hg-git hybrid and so the entire history is also visible via git at hg-repo/.hg/git. This is a fairly standard set up as far as hg-git goes.

Then, on top of that, there’s a git repository for each remote path in hg (try, central and inbound) inside .hg/repos. These are all set up with special git hooks such that when you push to the master branch of one of these repositories, the hooks automatically invoke hg-git to import the commits into hg, and then invoke hg to push to the true remote repository on hg.mozilla.org.

Simple, right? Well, the good news is that, for the most part, people shouldn’t need to actually set up this system. There is infrastructure in place to make it just look like a multi-user git repository that people can authenticate against and push to. So ultimately we could set this up on, say, git.mozilla.org, and to push to try you would just push to the remote ssh://git.mozilla.org/try.git, or something along those lines. Authentication is handled by the system just by using ssh’s ForwardAgent option, so in theory it should be as secure as hg.mozilla.org (but don’t quote me on that!).

Now onto setting it up; first you have to clone mozilla-central from hg:

hg clone ssh://hg.mozilla.org/mozilla-central

Then edit the .hg/hgrc to contain the following:

[paths]
mozilla-central = ssh://hg.mozilla.org/mozilla-central
mozilla-inbound = ssh://hg.mozilla.org/integration/mozilla-inbound
try-pushonly = ssh://hg.mozilla.org/try
[extensions]
hgext.bookmarks =
hggit =

The -pushonly suffix on the try path tells the bridge to not bother pulling from try when synchronising the repositories. The other two will be kept in sync.

The next step is to go ahead and use Ehsan’s git-mapfile to short-cut the repository creation process. By default, the bridge will use hg-git to create the embedded git repository, and doing this requires that hg-git processes every single commit in the entire repository, which takes days. The git-mapfile is the map that hg-git uses to determine which hg commit IDs correspond to which git commit IDs, and using Ehsan’s git-mapfile along with a clone of the canonical mozilla-central git repository at git://github.com/mozilla/mozilla-central.git allows us to create these local repositories in a matter of minutes instead of days.

git clone https://github.com/ehsan/mozilla-history-tools
cp mozilla-history-tools/updates/git-mapfile /path/to/your/hg/clone/.hg/git-mapfile
cd /path/to/your/hg/clone/.hg
git clone --bare git://github.com/mozilla/mozilla-central.git git

This lays the groundwork, but there is still a little more to do. Unfortunately, this git repository contains a huge amount of commit history from the CVS era that isn’t present in the hg repositories, so if you try to push using the bridge, hg-git will see these commits that aren’t in the hg repository and try to import all of them into hg. To work around this, we can hack the git-mapfile. The basic idea is to grab a list of all the git commit SHA1s that correspond to CVS commits, then map those in the git-mapfile to dummy hg commits (such as “0000000000000000000000000000000000000000”). Unfortunately, hg-git requires that all the mappings are unique, so we need to generate a unique dummy commit ID for each and every CVS commit in git.

If you’re using Ehsan’s repository, go ahead and grab my git-mapfile covering just the CVS commits from http://people.mozilla.org/~gwright/git-cvs-mapfile-non-unique and prepend it to your .hg/git-mapfile.

Now comes the fun part: setting up the bridge itself.

git clone git://github.com/nbp/spidermonkey-dev-tools.git
mkdir /path/to/your/hg/clone/.hg/bridge/
cp spidermonkey-dev-tools/git-hg-bridge/*.sh /path/to/your/hg/clone/.hg/bridge/

Then, you need to add the following to your .hg/git/config’s origin remote to ensure that the branches are set up correctly for inbound and central:

fetch = +refs/heads/master:refs/heads/mozilla-central/master
fetch = +refs/heads/inbound:refs/heads/mozilla-inbound/master

This is because pull.sh expects to find the mozilla-central history at a branch called mozilla-central/master, and the inbound history at mozilla-inbound/master.

Now to create the special push-only repositories. First, pull.sh needs to be modified to allow for the short-circuiting: temporarily remove the “return 0” call on line 112 (after “No update needed”), then:

cd /path/to/your/hg/clone/.hg/bridge/
./pull.sh ../..
(Add the return 0 call back to pull.sh now)

This will create three repositories in /path/to/your/hg/clone/.hg/repos that correspond to try, mozilla-central and mozilla-inbound. If you now set these repositories as remotes in your main git working tree, like so:

git remote add try /path/to/your/hg/clone/.hg/repos/try

You can just push to try by pushing to the master branch of that remote! The first push will take a while as the push-only repository has no commits in it (this should not be an issue for mozilla-central and mozilla-inbound pushes), but after that they should be nice and fast. Here’s an example:

[george@aluminium mozilla-central]$ git push -f try gwright/skia_rebase:master
Counting objects: 3028016, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (549026/549026), done.
Writing objects: 100% (3028016/3028016), 729.80 MiB | 59.21 MiB/s, done.
Total 3028016 (delta 2452303), reused 3015470 (delta 2440473)
remote: fatal: Not a valid commit name 0000000000000000000000000000000000000000
remote: Will Force update!
remote: Get git repository lock for pushing to the bridge.
remote: To ./.hg/git
remote:  * [new branch]      refs/push/master -> try-pushonly/push
remote: Get mercurial repository lock for pushing from the bridge.
remote: Convert changes to mercurial.
remote: abort: bookmark 'try-pushonly/push' does not exist
remote: importing git objects into hg
remote: pushing to ssh://hg.mozilla.org/try
remote: searching for changes
remote: remote: adding changesets
remote: remote: adding manifests
remote: remote: adding file changes
remote: remote: added 4 changesets with 7 changes to 877 files (+1 heads)
remote: remote: Looks like you used try syntax, going ahead with the push.
remote: remote: If you don't get what you expected, check http://trychooser.pub.build.mozilla.org/ for help with building your trychooser request.
remote: remote: Thanks for helping save resources, you're the best!
remote: remote: You can view the progress of your build at the following URL:
remote: remote:   https://tbpl.mozilla.org/?tree=Try&rev=b9783e130dd6
remote: remote: Trying to insert into pushlog.
remote: remote: Please do not interrupt...
remote: remote: Inserted into the pushlog db successfully.
To /home/george/dev/hg/mozilla-central/.hg/repos/try
 * [new branch]      gwright/skia_rebase -> master

And that’s it! Mad props to Nicolas Pierron for the huge amount of work he put in on building this solution. If you’re in the Mozilla Mountain View office, you can ping him to get access to his git push server, but hopefully Ehsan and I will work on getting an accessible server out there for everyone.

Firefox’s graphics performance on X11

It’s been long known that a vocal minority (or perhaps even majority) of Firefox users have had issues with rendering performance on X11, but nobody has quite pinpointed what the issue is.

Recently Nicolas Silva landed a change in mozilla-central that adds a preference for disabling the use of the RENDER extension when drawing with Cairo on X11. I’d like to call on any Firefox/Linux users who have been experiencing speed issues to download the latest nightly, go to about:config, and set “gfx.xrender.enabled” to “false”. If you could also let me know what hardware and drivers you’re running, that would be great.

Thoughts on Boot to Gecko

This week Mozilla announced the intention to deliver an open device based on the Boot to Gecko (B2G) project in 2012.

When I first heard of B2G, I’ve got to admit I didn’t see what all the fuss was about. It looked very much like what Palm had tried and failed to do with webOS, just a few years later. The more I hear about it, though, the more compelling it seems to be.

I think Mozilla’s timed this very well; whilst the web has been seen more and more as a development platform rather than a straight content delivery system for a while, it’s not been a viable one until recently, and in my opinion it’s still not totally viable on mobile devices. Up until now, in order to be able to run web applications with any sort of decent performance (the sort that consumers have come to expect from the standards that the iPhone has set), you have typically needed a relatively high end system-on-chip to power the thing.

Fast forward to 2012 and there’s a plethora of high-performance SoCs available at relatively low cost; B2G devices can be targeted towards the lower end of the smartphone market whilst still exhibiting good performance. And unlike all the other low-end smartphone platforms currently out there, the web platform is already well established, and the cost of bringing applications to it should be relatively low compared to the others. To take a real-world example, look at the iPhone 3GS. This is currently positioned by Apple as their low-end smartphone, but when running iOS 5 the experience is dismal compared to their flagship phone. This is the sort of phone that any B2G device will be up against, not models like the 4S.

In fact, Telefonica said that they’re hoping to price the B2G phone at “ten times cheaper than an iPhone”, which would suggest that the off-contract price would be around $60 – $70. This is huge for a smartphone that would conceivably be competing with something like the 3GS, which currently sells for $375 off-contract.

What gives me the most confidence, though, is that people responded to the demos very well, and we’re not even done yet. There’s a lot of optimisation work that can and needs to be done, and there’s no reason why we can’t have a user experience comparable to the incumbents in the industry.