An Introduction to Shmem/IPC in Gecko

We use shared memory (shmem) pretty extensively in the graphics stack in Gecko. Unfortunately, there isn’t a huge amount of documentation regarding how the shmem mechanisms in Gecko work and how they are managed, which I will attempt to address in this post.

Firstly, it’s important to understand how IPC in Gecko works. Gecko uses a language called IPDL to define IPC protocols. This is effectively a description language which formally defines the format of the messages that are passed between IPC actors. The IPDL code is then compiled into C++ code by our IPDL compiler, and we can then use the generated classes in Gecko’s C++ code to do IPC-related things. IPDL class names start with a P to indicate that they are IPDL protocol definitions.

IPDL has a built-in shmem type, simply called mozilla::ipc::Shmem. This holds a weak reference to a SharedMemory object, and code in Gecko operates on this. SharedMemory is the underlying platform-specific implementation of shared memory and facilitates the shmem subsystem by implementing the platform-specific API calls to allocate and deallocate shared memory regions, and obtain their handles for use in the different processes. Of particular interest is that on OS X we use the Mach virtual memory system, which uses a Mach port as the handle for the allocated memory regions.

mozilla::ipc::Shmem objects are fully managed by IPDL, and there are two different types: normal Shmem objects, and unsafe Shmem objects. Normal Shmem objects are mostly intended to be used by IPC actors to send large data chunks between themselves as this is more efficient than saturating the IPC channel. They have strict ownership policies which are enforced by IPDL; when the Shmem object is sent across IPC, the sender relinquishes ownership and IPDL restricts the sender’s access rights so that it can neither read nor write to the memory, whilst the receiver gains these rights. These Shmem objects are created/destroyed in C++ by calling PFoo::AllocShmem() and PFoo::DeallocShmem(), where PFoo is the Foo IPDL interface being used. One major caveat of these “safe” shmem regions is that they are not thread safe, so be careful when using them on multiple threads in the same process!

Unsafe Shmem objects are basically a free-for-all in terms of access rights. Both sender and receiver can always read/write to the allocated memory, and great care must be taken to avoid race conditions between the processes accessing the shmem regions. In graphics, we use these unsafe shmem regions extensively, but lock rigorously to ensure correct access patterns. Unsafe Shmem objects are created by calling PFoo::AllocUnsafeShmem(), but are still destroyed in the same manner as normal Shmem objects, by simply calling PFoo::DeallocShmem().
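Gecko’s Shmem is, of course, C++ and managed by IPDL, but the core idea is easy to demonstrate with Python’s multiprocessing.shared_memory as a stand-in: two handles to one named region see the same bytes without copying them over a message channel, and the region must be freed exactly once. The AllocShmem()/DeallocShmem() comments below are loose analogies, not Gecko’s API:

```python
from multiprocessing import shared_memory

# Allocate a named shared-memory region (loose analogy: PFoo::AllocShmem()).
region = shared_memory.SharedMemory(create=True, size=16)
region.buf[:5] = b"hello"

# A second process would attach by name instead of creating anew; the bytes
# are visible immediately, with no copy through a message channel.
reader = shared_memory.SharedMemory(name=region.name)
seen = bytes(reader.buf[:5])

reader.close()
region.close()
region.unlink()  # loose analogy: PFoo::DeallocShmem() -- free exactly once
```

The single-unlink discipline is the point: just as IPDL enforces that exactly one side deallocates a Shmem, the named region here must be unlinked exactly once, no matter how many handles were attached to it.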

With the work currently ongoing to move our compositor to a separate GPU process, there are some limitations with our current shmem situation. Notably, a SharedMemory object is effectively owned by an IPDL channel, and when the channel goes away, the SharedMemory object backing the Shmem object is deallocated. This poses a problem as we use shmem regions to back our textures, and when/if the GPU process dies, it’d be great to keep the existing textures and simply recreate the process and IPC channel, then continue on like normal. David Anderson is currently exploring a solution to this problem, which will likely be to hold a strong reference to the SharedMemory region in the Shmem object, thus ensuring that the SharedMemory object doesn’t get destroyed underneath us so long as we’re using it in Gecko.

Electrolysis: A tale of tab switching and themes

Electrolysis (“e10s”) is a project by Mozilla to bring Firefox from its traditional single-process roots to a modern multi-process browser. The general idea here is that with modern computers having multiple CPU cores, we should be able to make effective use of that by parallelising as much of our work as possible. This brings benefits like being able to have a responsive user interface when we have a page that requires a lot of work to process, or being able to recover from content process crashes by simply restarting the crashed process.

However, I want to touch on one particular aspect of e10s that works differently to non-e10s Firefox, and that is the way tab switching is handled.

Currently, when a user switches tabs, we do the operation synchronously. That is, when a user requests a new tab for display, the browser effectively halts what it’s doing until it is in a position to display that tab, at which point it will switch the tab. This is fine for the most part, but if the browser gets bogged down and is stuck waiting for CPU time, it can take a jarring amount of time for the tab to be switched.

With e10s, we are able to do better. In this case, we are able to switch tabs asynchronously. When the user requests a new tab, we fire off the request to the content process responsible for that tab, and wait for a message back from that process that says “I’m ready to be displayed now”; crucially, we do not switch the selected tab in the user interface (the “chrome”) until we get that message. In most cases this should happen in under 300ms, but if we hit 300ms and still haven’t received the message, we switch the tab anyway and show a big spinner where the tab content would be, to indicate that we’re still waiting for the initial content to be ready for display.
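The 300ms policy can be sketched in a few lines of Python. This is illustrative only — the function name and the queue stand in for Firefox’s actual IPC machinery:

```python
import queue

SPINNER_DELAY = 0.3  # seconds the chrome waits before giving up and showing a spinner

def switch_tab(ready_messages):
    """Wait up to SPINNER_DELAY for the content process's 'ready' message."""
    try:
        ready_messages.get(timeout=SPINNER_DELAY)
        return "switched"                # content was ready in time
    except queue.Empty:
        return "switched-with-spinner"   # timed out: switch anyway, show spinner

fast = queue.Queue()
fast.put("ready")                        # content process signalled immediately
result_fast = switch_tab(fast)

result_slow = switch_tab(queue.Queue())  # content process never signals in time
```

Either way the tab does get switched; the timeout only decides whether the user sees real content or a spinner while the content process catches up.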

Unfortunately, this affects themes. Basically, there’s an attribute on the tab element called “selected” which themes’ CSS uses to style the tab accordingly. With the switch to asynchronous tab switching, I added a new attribute, “visuallyselected”, which now drives the visual styling; the “selected” attribute is now used to signify a logical selection within the browser.

We thought long and hard about making this change, and concluded that this was the best approach to take. It was the least invasive in terms of changing our existing browser code, and kept all our test assumptions valid (primarily that “selected” is applied synchronously with a tab switch). The impact on third party themes that haven’t been updated is that if they’re styled off the “selected” attribute, they will immediately show a tab switch but can show the old tab’s content for up to 300ms until either the new tab’s content is ready, or the spinner is shown. Using “visuallyselected” on non-e10s works fine as well, because we now apply both the “selected” and “visuallyselected” attributes on the tab when it is selected in the non-e10s case.

Better wireless drivers for the bcm4331

So as of the other day, I found the solution to the last of my troubles regarding running Linux on my mid-2012 15″ Retina MacBook Pro – namely, that the wireless drivers were incredibly flakey and would drop randomly and frequently.

Basically, there are two major wireless drivers available for Linux for the Broadcom chipsets; the b43 drivers which are in the upstream kernel, and the broadcom-wl STA drivers. The b43 drivers work fairly well, but don’t implement power management for the bcm4331 and so can frequently/randomly drop. On the other side of the fence, there’s Broadcom’s Linux driver page. However, you’d be forgiven for thinking that Broadcom hasn’t released new drivers in over 2 years, right? And as a result, the currently available drivers don’t support the bcm4331? Wrong. Turns out Ubuntu managed to get newer drivers out of Broadcom in December.

Anyway, one of the major features of the new driver in the December release is that it supports the bcm4331 found in the newer MacBook Pro models, amongst other computers. So if you just download the source (I used apt-get source bcmwl on a 13.04 machine), it will compile on pretty much any modern kernel and seems to work fine. For reference, I’m using it now on Fedora 17 running kernel 3.7.9. It even suspends/resumes fine.

Announcing my Version Sanitiser Firefox Addon!

After conducting extensive market research and analysing our users’ reactions to the Firefox version numbers (and by this I mean I spent five minutes on reddit), I have come to the following conclusions:

  • Arbitrary numbering schemes are a source of great pain and frustration for many of our users. This is a typical reaction when they find out about the way we do ours.
  • People get really, really, passionate about it.
  • Everybody loves kitties.

So, armed with this information, I set out to make everyone’s lives a little better. And came up with this: version π of my Version Sanitiser Firefox Addon!

It’s available right now from my space, so show your love for kittens today and install it!

A month with the Retina MacBook and Linux

I’ve been using the Retina MacBook for a month now, and I’ve got to say the experience hasn’t been all that smooth, but better than I had expected.

Graphics Cards

Using the latest kernel (I used 3.6.0-rc3, but 3.6.0-rc4 should probably work too), switchable graphics works to an extent. If you forcibly enable the Intel HD4000 GPU in OS X using gfxCardStatus, then reboot straight into Linux, you will be able to use the i915 kernel driver, and the intel Xorg driver. You can even switch to/from the NVIDIA GeForce GT 650m at runtime by prodding vgaswitcheroo. Unfortunately, you can’t boot up with the GeForce enabled (i.e. without the gfxCardStatus hack) for now, because the Intel driver won’t bring up the eDP link properly.

If you use a recent xorg-x11-drv-intel, the screen backlight controls work fine. In Fedora 17 I had to enable the “testing” repository to get an up-to-date enough version of this package.

I found that using the Intel card on my rMBP causes the screen to artifact badly every so often. As far as I’m aware, nobody else has seen this issue, but I can’t reproduce it when using the discrete GPU, or when I’m booted into OS X and using the Intel GPU. I’m not ruling out a hardware issue for now, but if you do see this issue (it manifests as a screen updating issue where portions of the screen jump around very quickly) please let me know.

nouveau is an exercise in frustration. It works, but don’t expect any sort of hardware acceleration. The cursor blinks frequently. There’s no power management, so the laptop will physically run quite warm and the battery won’t last long. I recommend using the latest beta NVIDIA binary driver (version 304.43 at time of writing) if you’re planning to use the discrete GPU on this thing; it actually works surprisingly well.

If you do use the Intel driver, ensure you turn off the discrete GPU at runtime by echoing “OFF” to /sys/kernel/debug/vgaswitcheroo/switch.

Backlight control works when using the NVIDIA drivers only after the laptop has been put to suspend and resumed at least once.

Wireless

This is the worst supported piece of hardware, by far, in my opinion. It works to an extent, but the b43 driver frequently disconnects and exhibits high packet loss. It’s incredibly variable depending on the environment. I found that at the Mozilla office I would be disconnected every few minutes and I’d have to rmmod and modprobe b43 to even get it to reconnect, and even then the bandwidth throughput on the connection was terrible (I was seeing ~50kB/s on an 802.11g network). ndiswrapper is also flakey. In the end I gave up messing around with it and bought a cheap $13 USB dongle which uses a Realtek chipset; that works fine.

Keyboard Backlight

Mostly works out of the box. Backlighting works fine in recent kernels, though you may need to set the brightness manually by prodding /sys/devices/platform/applesmc.768/leds/smc::kbd_backlight/brightness with echo.

Trackpad

Works well enough with the xorg-input-mtrack driver, but can be a little flakey. Despite lots of tweaking, I still haven’t found a configuration that doesn’t register false clicks. My biggest issue is the case where I’m resting my thumb on the trackpad and moving the cursor with my index finger; when I want to register a left click, about 50% of the time it’ll register as a right click because it’ll think it’s a two finger touch. For now I’ve worked around this by setting three finger click to be right click, four finger click to be a middle click and single and two finger clicks to be a left click. My configuration is:

Section "InputClass"
MatchIsTouchpad "on"
Identifier      "Touchpads"
Driver          "mtrack"
Option          "Sensitivity" "0.35"
Option          "IgnoreThumb" "true"
Option          "IgnorePalm" "true"
Option          "TapButton1" "0"
Option          "TapButton2" "0"
Option          "TapButton3" "0"
Option          "TapButton4" "0"
Option          "ClickFinger1" "0"
Option          "ClickFinger2" "3"
Option          "ClickFinger3" "2"
Option          "ButtonMoveEmulate" "false"
Option          "ClickTime" "25"
Option          "BottomEdge" "25"
EndSection


Not working.

Audio

Speakers work fine, as does the mixer. Optical output is always on at boot (which is the source of the red LED shining out of the headphone socket); turn it off with:

amixer -c0 set IEC958 mute

Microphone doesn’t work.

Webcam

Works out of the box, but somewhat useless without the microphone.

Power Management

Seems to work pretty well. Power draw hovers around 17W when not doing much, but can jump to 30W under normal usage. When compiling it jumps to 60-70W. If using the Intel GPU with the discrete GPU turned off, average draw drops by a few watts.

Kernel

Grab the vanilla 3.6.0-rc3 from, then apply this patch to fix the gmux, and these patches to fix nouveau. Once that’s done you can just build the kernel as normal.

Pushing to git from Mozilla Toronto

Today, Ehsan and I set up a highly experimental git server in Mozilla Toronto, and it seems to be working relatively well (for now). If you want access, give me a ping and I’ll sort it out for you (should be accessible to anyone on a Mozilla corporate network, I think). We’re still ironing out the kinks though, so please only use it if you’re fairly well versed with git.

The server (currently accessible via “teenux.local”) hosts repositories mirroring mozilla-central, mozilla-inbound and a push-only repository for try:


To use it, you need to add a new section to your .ssh/config like this:

Host teenux.local
User git
ForwardAgent yes
IdentityFile path/to/your/hg/private_key

You may also need to register the private key you use for with ssh-agent by doing:

ssh-add path/to/your/hg/private_key

That should be it. Now you can go ahead and clone git:// and set up a remote for the repositories hosted on teenux for pushing to. You can theoretically clone from teenux as well but given that the server isn’t anywhere near as reliable as github, I recommend you stick to github for fetching changes, and only use teenux for pushing.

There is a minor caveat: you must push to the master branch for this to work. So a typical command would be:

git push -f try my_local_branch:master


Using git to push to Mozilla’s hg repositories

Last night I spent a huge amount of time working with Nicolas Pierron on setting up a two-way git-hg bridge to allow for those of us using git to push straight to try/central/inbound without manually importing patches into a local mercurial clone.

The basic design of the bridge is fairly simple: you have a local hg clone of mozilla-central, which has remote paths set up for try, central and inbound. It is set up as an hg-git hybrid and so the entire history is also visible via git at hg-repo/.hg/git. This is a fairly standard set up as far as hg-git goes.

Then on top of that, there’s a special git repository for each remote path in hg (try, central and inbound) inside .hg/repos. These are all set up with special git hooks such that when you push to the master branch of one of these repositories, they will automatically invoke hg-git, import the commits into hg and then invoke hg to push to the true remote repository on

Simple, right? Well, the good news is that for the most part, people shouldn’t need to actually set up this system. There is infrastructure in place to make it just look like a multi-user git repository that people can authenticate against and push to. So ultimately we can set this up on, say, and to push to try we just push to remote ssh://, or something. Authentication is handled by the system just by using ssh’s ForwardAgent option, so in theory it should be as secure as (but don’t quote me on that!).

Now onto setting it up; first you have to clone mozilla-central from hg:

hg clone ssh://

Then edit the .hg/hgrc to contain the following:

[paths]
mozilla-central = ssh://
mozilla-inbound = ssh://
try-pushonly = ssh://

[extensions]
hgext.bookmarks =
hggit =

The -pushonly suffix on the try path tells the bridge to not bother pulling from try when synchronising the repositories. The other two will be kept in sync.

The next step is to go ahead and use Ehsan’s git-mapfile to short-cut the repository creation process. By default, the bridge will use hg-git to create the embedded git repository, and doing this requires that hg-git processes every single commit in the entire repository, which takes days. The git-mapfile is the map that hg-git uses to determine which hg commit IDs correspond to which git commit IDs, and using Ehsan’s git-mapfile along with a clone of the canonical mozilla-central git repository at git:// will allow us to create these local repositories in a matter of minutes instead of days.

git clone
cp mozilla-history-tools/updates/git-mapfile /path/to/your/hg/clone/.hg/git-mapfile
cd /path/to/your/hg/clone/.hg
git clone --bare git:// git

This lays the groundwork, but there is still a little more to do. Unfortunately, this git repository contains a huge amount of commit history from the CVS era that isn’t present in the hg repositories, so if you try and push using the bridge, hg-git will see these commits that aren’t in the hg repository and try to import all these CVS commits into hg. To work around this, we can hack the git-mapfile. The basic idea here is to grab a list of all the git commit SHA1s that correspond to CVS commits, then map those in the git-mapfile to dummy hg commits (such as “0000000000000000000000000000000000000000”). Unfortunately, hg-git requires that all the mappings are unique, so we need to generate a unique dummy commit ID for each and every CVS commit in git.
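As a sketch of that uniqueness requirement, here’s a hypothetical way to mint the dummy IDs — a zero-padded counter per CVS-era git commit, so every “hg commit” is 40 hex digits, obviously fake, and trivially unique. (The exact column layout of the lines you prepend should match your existing git-mapfile.)

```python
def dummy_hg_ids(cvs_git_shas):
    """Map each CVS-era git SHA1 to a unique 40-hex-digit dummy hg ID.

    hg-git requires every mapping in the git-mapfile to be unique, so a
    single all-zero dummy ID won't do; a zero-padded counter will.
    """
    return {sha: "%040x" % i for i, sha in enumerate(cvs_git_shas)}

mapping = dummy_hg_ids(["deadbeef" * 5, "cafebabe" * 5])
```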

If you’re using Ehsan’s repository, go ahead and just grab my git-mapfile from for just the CVS commits and pre-pend that to your .hg/git-mapfile.

Now comes the fun part; setting up the bridge itself.

git clone git://
mkdir /path/to/your/hg/clone/.hg/bridge/
cp spidermonkey-dev-tools/git-hg-bridge/*.sh /path/to/your/hg/clone/.hg/bridge/

Then, you need to add the following to your .hg/git/config’s origin remote to ensure that the branches are set up correctly for inbound and central:

fetch = +refs/heads/master:refs/heads/mozilla-central/master
fetch = +refs/heads/inbound:refs/heads/mozilla-inbound/master

This is because the bridge expects to find the mozilla-central history at a branch called mozilla-central/master, and the inbound history at mozilla-inbound/master.

Now to create the special push-only repositories. First needs to be modified in order to allow for the short circuiting; temporarily remove the “return 0” call on line 112 after “No update needed”, then:

cd /path/to/your/hg/clone/.hg/bridge/
./ ../..
(Add the return 0 call back to now)

This will create three repositories in /path/to/your/hg/clone/.hg/repos that correspond to try, mozilla-central and mozilla-inbound. If you now set these repositories as remotes in your main git working tree such as:

git remote add try /path/to/your/hg/clone/.hg/repos/try

You can just push to try by pushing to the master branch of that remote! The first push will take a while as the push-only repository has no commits in it (this should not be an issue for mozilla-central and mozilla-inbound pushes), but after that they should be nice and fast. Here’s an example:

[george@aluminium mozilla-central]$ git push -f try gwright/skia_rebase:master
Counting objects: 3028016, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (549026/549026), done.
Writing objects: 100% (3028016/3028016), 729.80 MiB | 59.21 MiB/s, done.
Total 3028016 (delta 2452303), reused 3015470 (delta 2440473)
remote: fatal: Not a valid commit name 0000000000000000000000000000000000000000
remote: Will Force update!
remote: Get git repository lock for pushing to the bridge.
remote: To ./.hg/git
remote:  * [new branch]      refs/push/master -> try-pushonly/push
remote: Get mercurial repository lock for pushing from the bridge.
remote: Convert changes to mercurial.
remote: abort: bookmark 'try-pushonly/push' does not exist
remote: importing git objects into hg
remote: pushing to ssh://
remote: searching for changes
remote: remote: adding changesets
remote: remote: adding manifests
remote: remote: adding file changes
remote: remote: added 4 changesets with 7 changes to 877 files (+1 heads)
remote: remote: Looks like you used try syntax, going ahead with the push.
remote: remote: If you don't get what you expected, check for help with building your trychooser request.
remote: remote: Thanks for helping save resources, you're the best!
remote: remote: You can view the progress of your build at the following URL:
remote: remote:
remote: remote: Trying to insert into pushlog.
remote: remote: Please do not interrupt...
remote: remote: Inserted into the pushlog db successfully.
To /home/george/dev/hg/mozilla-central/.hg/repos/try
 * [new branch]      gwright/skia_rebase -> master

And that’s it! Mad props to Nicolas Pierron for the huge amount of work he put in on building this solution. If you’re in the Mozilla Mountain View office, you can ping him to get access to his git push server, but hopefully Ehsan and I will work on getting an accessible server out there for everyone.

Booting Linux on a Retina MacBook Pro

For those of you who know me, you’ll know I have a soft spot for pixels. The smaller and more plentiful the pixels, the better. So of course, when Apple announced the new MacBook Pro with the insanely high DPI display, I hesitated for a bit, then bought one with the intention of running Linux on it natively.

Now let me preface this with a statement: Linux on the rMBP can be a sod to get working. It is doable, but don’t expect everything to work right away.

This post will be part of a series on getting the rMBP working well on Linux, and will concentrate only on getting it to boot via the EFI bootloader.

Getting Linux to boot

I’m using Fedora 17 because @mjg59 is a Red Hat employee and is actively working on getting Linux to play nicely with this machine. So go ahead and grab one of the CD images from and dump it to a USB stick, e.g. (assuming you’re doing this on a Linux box and your USB storage device is /dev/sdb):

dd if=F17-Live-DESK-x86_64-20120720.iso of=/dev/sdb bs=1M

Remember to run sync before unplugging the USB device, or you will get bizarre errors and the installer won’t run.

You’ll also need to resize your OS X partition if you want to dual boot; go ahead and resize it in Apple’s Disk Utility (in /Applications/Utilities).

Once that’s done, you can reboot the machine whilst holding down the Alt key and you’ll be presented with a list of devices you can boot from, one of which should be your USB device with the Fedora logo showing. Go ahead and boot off that.

This will bring up the grub prompt; hit ‘e’ to edit the boot configuration and append the following parameters to your kernel arguments:

nointremap drm_kms_helper.poll=0 video=eDP-1:2880x1800@45e

That will boot you into a glorious Xorg session running at native panel resolution, and you can install Fedora as normal (sort of).

First off, you’ll need to ensure that there’s a 200MB HFS+ partition which you can install the EFI bootloader to. This should be set to mount at

Apart from that, you probably shouldn’t need to do anything special to the partition table, but here’s mine for reference:

Number  Start   End    Size   File system     Name                  Flags
1      20.5kB  210MB  210MB  fat32           EFI system partition  boot
2      210MB   256GB  256GB  hfs+            Customer
3      256GB   257GB  650MB  hfs+            Recovery HD
4      257GB   257GB  210MB  hfs+
5      257GB   258GB  524MB  linux-swap(v1)
6      258GB   500GB  243GB  ext4

Once the installation is done, you may or may not get an error installing the bootloader. If you do, you will need to fire up a terminal and do the following:

sudo chroot /mnt/sysimage
efibootmgr -c
efibootmgr -v

The last line should show Linux’s bootloader is enabled.

You then need to ensure grub’s configuration is set correctly. Go ahead and find out your kernel and initramfs files; they should be somewhere like:


Assuming they’re in /boot on /dev/sda6, this will translate to grub paths as something like:


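The translation itself is mechanical: GRUB Legacy numbers both disks and partitions from zero, which is why /dev/sda6 appears as (hd0,5) in the config below. A hypothetical helper to illustrate the mapping (not a real tool):

```python
import re

def to_grub_legacy(dev):
    """Convert a Linux device name like /dev/sda6 into GRUB Legacy notation.

    GRUB Legacy counts disks (hd0, hd1, ...) and partitions from zero,
    whereas Linux numbers partitions from one.
    """
    disk, part = re.fullmatch(r"/dev/sd([a-z])([0-9]+)", dev).groups()
    return "(hd%d,%d)" % (ord(disk) - ord("a"), int(part) - 1)
```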
Go ahead and write something like the following to /boot/efi/EFI/redhat/grub.conf:

title Fedora (3.4.5-2.fc17.x86_64)
        root (hd0,5)
        kernel /boot/vmlinuz-3.4.5-2.fc17.x86_64 root=/dev/sda6 rd.lvm=0 KEYTABLE=us SYSFONT=True rd.luks=0 ro LANG=en_US.UTF-8 rhgb quiet nointremap drm_kms_helper.poll=0 video=eDP-1:2880x1800@45e nomodeset
        initrd /boot/initramfs-3.4.5-2.fc17.x86_64.img

You will then need to symlink /etc/grub.conf to it, after which you can go ahead and reboot. You should now see an additional boot option in the Alt boot menu. If not, you will need to boot into OS X, where you will most likely find a new volume labelled “untitled”. You will need to bless the Linux bootloader in order to boot it, so as root in a terminal, run:

bless --folder /Volumes/untitled --file /Volumes/untitled/EFI/redhat/grub.efi

You should then be able to boot into Linux!

Firefox’s graphics performance on X11

It’s been long known that a vocal minority (or perhaps even majority) of Firefox users have had issues with rendering performance on X11, but nobody has quite pinpointed what the issue is.

Recently Nicolas Silva landed a change in mozilla-central to add a preference to allow for disabling the use of the RENDER extension when drawing using Cairo on X11. I’d like to call on any Firefox/Linux users who have been experiencing speed issues to download the latest nightly and go to about:config and set “gfx.xrender.enabled” to “false”. If you could also let me know what hardware and drivers you’re running that’d also be great.

Setting up a chroot for Android development

Let’s say you want to build Android. This is not an unreasonable thing to want to do; around here, the most common reason for doing this is to get easy access to debug symbols in system libraries.

However, Android only really supports building on a very specific distribution of Linux, and that is normally the LTS release of Ubuntu (currently 10.04 “Lucid”).

Luckily, it’s relatively easy to set up a chroot in Linux to build in, such that you don’t need to maintain a completely separate installation of Linux if you want to run, say, Ubuntu 11.10 instead.

First you need to install schroot and debootstrap:

sudo apt-get install schroot debootstrap

schroot is a tool to allow you to easily run a command or a login shell within a chroot that you have previously set up. debootstrap is a Debian tool to bootstrap a Debian (and by extension, Ubuntu) release inside a directory which can then be used as a chroot.

Once you have those installed, fire up your favourite text editor and append something similar to the following to your /etc/schroot/schroot.conf file:

[lucid]
description=Ubuntu Lucid
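For completeness, a typical schroot.conf entry contains a few more keys than the description line shown above. Something like the following, where the directory matches where you run debootstrap below, and the user names are assumptions you should adjust:

```
[lucid]
description=Ubuntu Lucid
type=directory
directory=/var/chroot/lucid
users=yourusername
root-users=yourusername
```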

Now, you need to actually create the chroot. To do this, you need to use debootstrap. In this case, I’m going to create a Lucid chroot:

sudo debootstrap --arch amd64 lucid /var/chroot/lucid

The first argument here specifies the CPU architecture you want to install in the chroot (typically either i386 or amd64), the second is the distribution codename used in the repositories, the third is the directory in which to install the chroot and finally the last argument is the mirror you wish to use to download the Debian packages from for installing.

This will take a little while but once it’s done you can simply run the following:

schroot -c lucid

If all goes well, you’ll be greeted with a friendly and informative prompt thus:


You can then, inside this shell, follow the instructions for building and flashing Android yourself without any trouble.

As far as I can tell, there’s nothing here that should be Debian (or Debian derivative) specific, so if you’re running a different distribution, so long as you can get hold of debootstrap and schroot, you should hopefully be fine.

(Update 2012/03/29 – correct the Lucid version number)