Electrolysis: A tale of tab switching and themes

Electrolysis (“e10s”) is a project by Mozilla to bring Firefox from its traditional single-process roots to a modern multi-process browser. The general idea here is that with modern computers having multiple CPU cores, we should be able to make effective use of that by parallelising as much of our work as possible. This brings benefits like being able to have a responsive user interface when we have a page that requires a lot of work to process, or being able to recover from content process crashes by simply restarting the crashed process.

However, I want to touch on one particular aspect of e10s that works differently to non-e10s Firefox, and that is the way tab switching is handled.

Currently, when a user switches tabs, we do the operation synchronously. That is, when a user requests a new tab for display, the browser effectively halts what it’s doing until it is in a position to display that tab, at which point it will switch the tab. This is fine for the most part, but if the browser gets bogged down and is stuck waiting for CPU time, it can take a jarring amount of time for the tab to be switched.

With e10s, we are able to do better: we can switch tabs asynchronously. When the user requests a new tab, we fire off the request to the process that handles that tab and wait for a message back saying “I’m ready to be displayed now”. Crucially, we do not switch the selected tab in the user interface (the “chrome”) until we get that message. In most cases this should happen in under 300ms, but if we hit 300ms and still haven’t received the message, we switch the tab anyway and show a big spinner where the tab content would be, to indicate that we’re still waiting for the initial content to be ready for display.
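To make that policy concrete, here is a toy sketch of the switch-or-spin decision. This is my own illustration in Python, not Firefox’s actual implementation (which lives in the browser chrome):

```python
import threading

SPINNER_TIMEOUT = 0.3  # the 300ms budget described above, in seconds

class TabSwitcher:
    """Toy model of asynchronous tab switching; not real Firefox code."""

    def __init__(self):
        self.displayed = None
        self.showing_spinner = False

    def request_switch(self, tab, content_ready):
        # content_ready stands in for the "I'm ready to be displayed
        # now" message from the content process.
        if content_ready.wait(SPINNER_TIMEOUT):
            # The message arrived in time: switch straight to the content.
            self.displayed = tab
            self.showing_spinner = False
        else:
            # Budget blown: switch anyway and show the spinner until the
            # initial content catches up.
            self.displayed = tab
            self.showing_spinner = True
```

A tab whose content process answers within the budget is displayed with no spinner; a slow one is still switched to after 300ms, but with the spinner showing.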

Unfortunately, this affects themes. There’s an attribute on the tab element called “selected” which the CSS uses to style the tab accordingly. With the switch to asynchronous tab switching, I added a new attribute, “visuallyselected”, which the CSS now uses for that visual styling. The “selected” attribute is now used to signify the logical selection within the browser.

We thought long and hard about making this change, and concluded that this was the best approach to take. It was the least invasive in terms of changing our existing browser code, and kept all our test assumptions valid (primarily that “selected” is applied synchronously with a tab switch). The impact on third party themes that haven’t been updated is that if they’re styled off the “selected” attribute, they will immediately show a tab switch but can show the old tab’s content for up to 300ms until either the new tab’s content is ready, or the spinner is shown. Using “visuallyselected” on non-e10s works fine as well, because we now apply both the “selected” and “visuallyselected” attributes on the tab when it is selected in the non-e10s case.
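A small model of those attribute semantics (again my own sketch, not the actual tabbrowser code) makes the theme impact easier to see:

```python
def tab_attributes(logically_selected, content_ready, e10s=True):
    """Which styling attributes a tab carries (toy model).

    In e10s, "selected" tracks the logical selection immediately, while
    "visuallyselected" is only applied once the content process is ready.
    Without e10s both are applied together, so themes styled off either
    attribute keep working there.
    """
    attrs = set()
    if logically_selected:
        attrs.add("selected")
        if not e10s or content_ready:
            attrs.add("visuallyselected")
    return attrs
```

A theme styled off “selected” under e10s therefore highlights the new tab up to 300ms before its content is actually on screen; one styled off “visuallyselected” changes only when the content does.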

Better wireless drivers for the bcm4331

So as of the other day, I found the solution to the last of my troubles regarding running Linux on my mid-2012 15″ Retina MacBook Pro – namely, that the wireless drivers were incredibly flakey and would drop randomly and frequently.

Basically, there are two major wireless drivers available for Linux for the Broadcom chipsets; the b43 drivers which are in the upstream kernel, and the broadcom-wl STA drivers. The b43 drivers work fairly well, but don’t implement power management for the bcm4331 and so can frequently/randomly drop. On the other side of the fence, there’s Broadcom’s Linux driver page. However, you’d be forgiven for thinking that Broadcom hasn’t released new drivers in over 2 years, right? And as a result, the currently available drivers don’t support the bcm4331? Wrong. Turns out Ubuntu managed to get newer drivers out of Broadcom in December.

Anyway, one of the major features of the new driver in the December release is that it supports the bcm4331 found in the newer MacBook Pro models, amongst other computers. So if you just download the source (I used apt-get source bcmwl on a 13.04 machine), it will compile on pretty much any modern kernel and seems to work fine. For reference, I’m using it now on Fedora 17 running kernel 3.7.9. It even suspends/resumes fine.

A month with the Retina MacBook and Linux

I’ve been using the Retina MacBook for a month now, and I’ve got to say the experience hasn’t been all that smooth, but better than I had expected.

Graphics Cards

Using the latest kernel (I used 3.6.0-rc3, but 3.6.0-rc4 should probably work too), switchable graphics works to an extent. If you forcibly enable the Intel HD4000 GPU in OS X using gfxCardStatus, then reboot straight into Linux, you will be able to use the i915 kernel driver, and the intel Xorg driver. You can even switch to/from the NVIDIA GeForce GT 650m at runtime by prodding vgaswitcheroo. Unfortunately, you can’t boot up with the GeForce enabled (i.e. without the gfxCardStatus hack) for now, because the Intel driver won’t bring up the eDP link properly.

If you use a recent xorg-x11-drv-intel, the screen backlight controls work fine. In Fedora 17 I had to enable the “testing” repository to get an up-to-date enough version of this package.

I found that using the Intel card on my rMBP causes the screen to artifact badly every so often. As far as I’m aware, nobody else has seen this issue, but I can’t reproduce it when using the discrete GPU, or when I’m booted into OS X and using the Intel GPU. I’m not ruling out a hardware issue for now, but if you do see this issue (it manifests as a screen updating issue where portions of the screen jump around very quickly) please let me know.

nouveau is an exercise in frustration. It works, but don’t expect any sort of hardware acceleration. The cursor blinks frequently. There’s no power management, so the laptop will physically run quite warm and the battery won’t last long. I recommend using the latest beta NVIDIA binary driver (version 304.43 at time of writing) if you’re planning to use the discrete GPU on this thing; it actually works surprisingly well.

If you do use the Intel driver, ensure you turn off the discrete GPU at runtime by echoing “OFF” to /sys/kernel/debug/vgaswitcheroo/switch.
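If you’d rather script that than echo by hand, it amounts to a one-line write. This helper is my own sketch (it needs root and a mounted debugfs, so the path is injectable for testing):

```python
VGASWITCHEROO = "/sys/kernel/debug/vgaswitcheroo/switch"

def set_vgaswitcheroo(state, path=VGASWITCHEROO):
    """Write a vgaswitcheroo command, e.g. "OFF" to power down the
    inactive (discrete) GPU.

    Equivalent to: echo OFF > /sys/kernel/debug/vgaswitcheroo/switch
    """
    if state not in {"ON", "OFF", "IGD", "DIS"}:
        raise ValueError(f"unknown vgaswitcheroo command: {state}")
    with open(path, "w") as f:
        f.write(state)
```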

Backlight control works when using the NVIDIA drivers only after the laptop has been put to suspend and resumed at least once.

Wireless

This is the worst supported piece of hardware, by far, in my opinion. It works to an extent, but the b43 driver frequently disconnects and exhibits high packet loss. It’s incredibly variable depending on the environment. I found that at the Mozilla office I would be disconnected every few minutes and I’d have to rmmod and modprobe b43 to even get it to reconnect, and even then the bandwidth throughput on the connection was terrible (I was seeing ~50kB/s on an 802.11g network). ndiswrapper is also flakey. In the end I gave up messing around with it and bought a cheap $13 USB dongle which uses a Realtek chipset, that works fine.

Keyboard Backlight

Mostly works out of the box. Backlighting works fine in the latest kernels, though you may need to set the brightness manually by prodding /sys/devices/platform/applesmc.768/leds/smc::kbd_backlight/brightness with echo.

Trackpad

Works well enough with the xorg-input-mtrack driver, but can be a little flakey. Despite lots of tweaking, I still haven’t found a configuration that doesn’t register false clicks. My biggest issue is the case where I’m resting my thumb on the trackpad and moving the cursor with my index finger; when I want to register a left click, about 50% of the time it’ll register as a right click because it’ll think it’s a two finger touch. For now I’ve worked around this by setting three finger click to be right click, four finger click to be a middle click and single and two finger clicks to be a left click. My configuration is:

Section "InputClass"
MatchIsTouchpad "on"
Identifier      "Touchpads"
Driver          "mtrack"
Option          "Sensitivity" "0.35"
Option          "IgnoreThumb" "true"
Option          "IgnorePalm" "true"
Option          "TapButton1" "0"
Option          "TapButton2" "0"
Option          "TapButton3" "0"
Option          "TapButton4" "0"
Option          "ClickFinger1" "0"
Option          "ClickFinger2" "3"
Option          "ClickFinger3" "2"
Option          "ButtonMoveEmulate" "false"
Option          "ClickTime" "25"
Option          "BottomEdge" "25"
EndSection


Not working.

Audio

Speakers work fine, as does the mixer. Optical output is always on at boot (which is the source of the red LED shining out of the headphone socket); turn it off with:

amixer -c0 set IEC958 mute

Microphone doesn’t work.

Webcam

Works out of the box, but somewhat useless without the microphone.

Power Management

Seems to work pretty well. Power draw hovers around 17W when not doing much, but can jump to 30W under normal usage. When compiling it jumps to 60-70W. If using the Intel GPU with the discrete GPU turned off, you gain a few watts on average.

Kernel

Grab the vanilla 3.6.0-rc3 from kernel.org, then apply this patch to fix the gmux, and these patches to fix nouveau. Once that’s done you can just build the kernel as normal.

Using git to push to Mozilla’s hg repositories

Last night I spent a huge amount of time working with Nicolas Pierron on setting up a two-way git-hg bridge to allow for those of us using git to push straight to try/central/inbound without manually importing patches into a local mercurial clone.

The basic design of the bridge is fairly simple: you have a local hg clone of mozilla-central, which has remote paths set up for try, central and inbound. It is set up as an hg-git hybrid and so the entire history is also visible via git at hg-repo/.hg/git. This is a fairly standard set up as far as hg-git goes.

Then on top of that, there’s a special git repository for each remote path in hg (try, central and inbound) inside .hg/repos. These are all set up with special git hooks such that when you push to the master branch of one of these repositories, they will automatically invoke hg-git, import the commits into hg and then invoke hg to push to the true remote repository on hg.mozilla.org.

Simple, right? Well, the good news is that for the most part, people shouldn’t need to actually set up this system. There is infrastructure in place to make it just look like a multi-user git repository that people can authenticate against and push to. So ultimately we can set this up on, say, git.mozilla.org and to push to try we just push to remote ssh://git.mozilla.org/try.git, or something. Authentication is handled by the system just by using ssh’s ForwardAgent option, so in theory it should be as secure as hg.mozilla.org (but don’t quote me on that!).

Now onto setting it up; first you have to clone mozilla-central from hg:

hg clone ssh://hg.mozilla.org/mozilla-central

Then edit the .hg/hgrc to contain the following:

[paths]
mozilla-central = ssh://hg.mozilla.org/mozilla-central
mozilla-inbound = ssh://hg.mozilla.org/integration/mozilla-inbound
try-pushonly = ssh://hg.mozilla.org/try

[extensions]
hgext.bookmarks =
hggit =
The -pushonly suffix on the try path tells the bridge to not bother pulling from try when synchronising the repositories. The other two will be kept in sync.
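For illustration, the suffix convention boils down to a tiny bit of name parsing (a toy re-implementation, not the bridge’s actual code):

```python
PUSH_ONLY_SUFFIX = "-pushonly"

def parse_bridge_path(name):
    """Split an hgrc path name into (repository, push_only).

    "try-pushonly"    -> ("try", True): pushed to, never pulled from.
    "mozilla-central" -> ("mozilla-central", False): kept in sync.
    """
    if name.endswith(PUSH_ONLY_SUFFIX):
        return name[: -len(PUSH_ONLY_SUFFIX)], True
    return name, False
```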

The next step is go ahead and use Ehsan’s git-mapfile to short-cut the repository creation process. By default, the bridge will use hg-git to create the embedded git repository, and doing this requires that hg-git processes every single commit in the entire repository, which takes days. The git-mapfile is the map that hg-git uses to determine which hg commit IDs correspond to which git commit IDs, and using Ehsan’s git-mapfile along with a clone of the canonical mozilla-central git repository at git://github.com/mozilla/mozilla-central.git will allow us to create these local repositories in a matter of minutes instead of days.

git clone https://github.com/ehsan/mozilla-history-tools
cp mozilla-history-tools/updates/git-mapfile /path/to/your/hg/clone/.hg/git-mapfile
cd /path/to/your/hg/clone/.hg
git clone --bare git://github.com/mozilla/mozilla-central.git git

This lays the groundwork, but there is still a little more to do. Unfortunately, this git repository contains a huge amount of commit history from the CVS era that isn’t present in the hg repositories, so if you try and push using the bridge, hg-git will see these commits that aren’t in the hg repository and try to import all these CVS commits into hg. To work around this, we can hack the git-mapfile. The basic idea here is to grab a list of all the git commit SHA1s that correspond to CVS commits, then map those in the git-mapfile to dummy hg commits (such as “0000000000000000000000000000000000000000”). Unfortunately, hg-git requires that all the mappings are unique, so we need to generate a unique dummy commit ID for each and every CVS commit in git.
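Generating those unique dummy IDs is simple enough: count upwards and zero-pad to 40 hex digits. This sketch assumes hg-git’s one-mapping-per-line “git SHA, then hg SHA” format; it is an illustration, not the script actually used to produce the pre-built mapfile:

```python
def dummy_mapfile_entries(cvs_git_shas):
    """Map each CVS-era git commit to a unique dummy hg commit ID.

    hg-git requires every mapping to be unique, so instead of pointing
    them all at 40 zeros, each one gets the next integer rendered as a
    40-digit hex string: 000...000, 000...001, and so on.
    """
    lines = []
    for n, git_sha in enumerate(cvs_git_shas):
        dummy_hg_sha = format(n, "040x")
        lines.append(f"{git_sha} {dummy_hg_sha}")
    return lines
```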

If you’re using Ehsan’s repository, go ahead and just grab my git-mapfile from http://people.mozilla.org/~gwright/git-cvs-mapfile-non-unique for just the CVS commits and pre-pend that to your .hg/git-mapfile.

Now comes the fun part; setting up the bridge itself.

git clone git://github.com/nbp/spidermonkey-dev-tools.git
mkdir /path/to/your/hg/clone/.hg/bridge/
cp spidermonkey-dev-tools/git-hg-bridge/*.sh /path/to/your/hg/clone/.hg/bridge/

Then, you need to add the following to your .hg/git/config’s origin remote to ensure that the branches are set up correctly for inbound and central:

fetch = +refs/heads/master:refs/heads/mozilla-central/master
fetch = +refs/heads/inbound:refs/heads/mozilla-inbound/master

This is because pull.sh expects to find the mozilla-central history at a branch called mozilla-central/master, and the inbound history at mozilla-inbound/master.

Now to create the special push-only repositories. First pull.sh needs to be modified in order to allow for the short circuiting; temporarily remove the “return 0” call on line 112 after “No update needed”, then:

cd /path/to/your/hg/clone/.hg/bridge/
./pull.sh ../..
# add the "return 0" call back to pull.sh now

This will create three repositories in /path/to/your/hg/clone/.hg/repos that correspond to try, mozilla-central and mozilla-inbound. If you now set these repositories as remotes in your main git working tree such as:

git remote add try /path/to/your/hg/clone/.hg/repos/try

You can just push to try by pushing to the master branch of that remote! The first push will take a while as the push-only repository has no commits in it (this should not be an issue for mozilla-central and mozilla-inbound pushes), but after that they should be nice and fast. Here’s an example:

[george@aluminium mozilla-central]$ git push -f try gwright/skia_rebase:master
Counting objects: 3028016, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (549026/549026), done.
Writing objects: 100% (3028016/3028016), 729.80 MiB | 59.21 MiB/s, done.
Total 3028016 (delta 2452303), reused 3015470 (delta 2440473)
remote: fatal: Not a valid commit name 0000000000000000000000000000000000000000
remote: Will Force update!
remote: Get git repository lock for pushing to the bridge.
remote: To ./.hg/git
remote:  * [new branch]      refs/push/master -> try-pushonly/push
remote: Get mercurial repository lock for pushing from the bridge.
remote: Convert changes to mercurial.
remote: abort: bookmark 'try-pushonly/push' does not exist
remote: importing git objects into hg
remote: pushing to ssh://hg.mozilla.org/try
remote: searching for changes
remote: remote: adding changesets
remote: remote: adding manifests
remote: remote: adding file changes
remote: remote: added 4 changesets with 7 changes to 877 files (+1 heads)
remote: remote: Looks like you used try syntax, going ahead with the push.
remote: remote: If you don't get what you expected, check http://trychooser.pub.build.mozilla.org/ for help with building your trychooser request.
remote: remote: Thanks for helping save resources, you're the best!
remote: remote: You can view the progress of your build at the following URL:
remote: remote:   https://tbpl.mozilla.org/?tree=Try&rev=b9783e130dd6
remote: remote: Trying to insert into pushlog.
remote: remote: Please do not interrupt...
remote: remote: Inserted into the pushlog db successfully.
To /home/george/dev/hg/mozilla-central/.hg/repos/try
 * [new branch]      gwright/skia_rebase -> master

And that’s it! Mad props to Nicolas Pierron for the huge amount of work he put in on building this solution. If you’re in the Mozilla Mountain View office, you can ping him to get access to his git push server, but hopefully Ehsan and I will work on getting an accessible server out there for everyone.

Booting Linux on a Retina MacBook Pro

For those of you who know me, you’ll know I have a soft spot for pixels. The smaller and more plentiful the pixels, the better. So of course, when Apple announced the new MacBook Pro with the insanely high DPI display, I hesitated for a bit, then bought one with the intention of running Linux on it natively.

Now let me prefix this with a statement: Linux on the rMBP can be a sod to get working. It is doable, but don’t expect everything to work right away.

This blog will be part of a series on getting the rMBP working well on Linux, and will only concentrate on getting it to boot via the EFI bootloader.

Getting Linux to boot

I’m using Fedora 17 because @mjg59 is a Red Hat employee and is actively working on getting Linux to play nicely with this machine. So go ahead and grab one of the CD images from http://alt.fedoraproject.org/pub/alt/live-respins/ and dump it to a USB stick, e.g. (assuming you’re doing this on a Linux box and your USB storage device is /dev/sdb):

dd if=F17-Live-DESK-x86_64-20120720.iso of=/dev/sdb bs=1M

Remember to run sync before unplugging the USB device, or you will get bizarre errors and the installer won’t run.

You’ll also need to resize your OS X partition if you want to dual boot; go ahead and resize it in Apple’s Disk Utility (in /Applications/Utilities).

Once that’s done, you can reboot the machine whilst holding down the Alt key and you’ll be presented with a list of devices you can boot from, one of which should be your USB device with the Fedora logo showing. Go ahead and boot off that.

This will bring up the grub prompt; hit ‘e’ to edit the boot configuration and append the following parameters to your kernel arguments:

nointremap drm_kms_helper.poll=0 video=eDP-1:2880x1800@45e

That will boot you into a glorious Xorg session running at native panel resolution, and you can install Fedora as normal (sort of).
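To unpack that video= argument: it names a connector (eDP-1, the internal panel) and forces a 2880x1800 mode at 45Hz on it, the trailing ‘e’ forcing the output to be treated as enabled. A throwaway parser (my own sketch, not the kernel’s mode parsing code) shows the pieces:

```python
import re

def parse_video_arg(arg):
    """Decode a video=<connector>:<mode> kernel argument (toy parser).

    "eDP-1:2880x1800@45e" means: on connector eDP-1, force a 2880x1800
    mode at 45Hz; the 'e' flag forces the output on even if no display
    is detected.
    """
    m = re.fullmatch(r"([^:]+):(\d+)x(\d+)(?:@(\d+))?([eDd]?)", arg)
    if not m:
        raise ValueError(f"unrecognised video= argument: {arg}")
    connector, width, height, refresh, flag = m.groups()
    return {
        "connector": connector,
        "width": int(width),
        "height": int(height),
        "refresh": int(refresh) if refresh else None,
        "force_enabled": flag == "e",
    }
```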

First off, you’ll need to ensure that there’s a 200MB HFS+ partition which you can install the EFI bootloader to. This should be set to mount at /boot/efi.

Apart from that, you probably shouldn’t need to do anything special to the partition table, but here’s mine for reference:

Number  Start   End    Size   File system     Name                  Flags
1      20.5kB  210MB  210MB  fat32           EFI system partition  boot
2      210MB   256GB  256GB  hfs+            Customer
3      256GB   257GB  650MB  hfs+            Recovery HD
4      257GB   257GB  210MB  hfs+
5      257GB   258GB  524MB  linux-swap(v1)
6      258GB   500GB  243GB  ext4

Once the installation is done, you may or may not get an error installing the bootloader. If you do, you will need to fire up a terminal and do the following:

sudo chroot /mnt/sysimage
efibootmgr -c
efibootmgr -v

The last line should show Linux’s bootloader is enabled.

You then need to ensure grub’s configuration is set correctly. Go ahead and find out your kernel and initramfs files; they should be somewhere like:

/boot/vmlinuz-3.4.5-2.fc17.x86_64
/boot/initramfs-3.4.5-2.fc17.x86_64.img
Assuming they’re in /boot on /dev/sda6, this will translate to grub paths as something like:

root (hd0,5)
kernel /boot/vmlinuz-3.4.5-2.fc17.x86_64
initrd /boot/initramfs-3.4.5-2.fc17.x86_64.img
Go ahead and write something like the following to /boot/efi/EFI/redhat/grub.conf:

title Fedora (3.4.5-2.fc17.x86_64)
        root (hd0,5)
        kernel /boot/vmlinuz-3.4.5-2.fc17.x86_64 root=/dev/sda6 rd.md=0 rd.lvm=0 rd.dm=0 KEYTABLE=us SYSFONT=True rd.luks=0 ro LANG=en_US.UTF-8 rhgb quiet nointremap drm_kms_helper.poll=0 video=eDP-1:2880x1800@45e nomodeset
        initrd /boot/initramfs-3.4.5-2.fc17.x86_64.img

You will then need to symlink /etc/grub.conf to it, after which you can go ahead and reboot. You should be able to see an additional boot option now in the Alt boot menu. If not, you will need to boot into OS X and you will find a new volume labelled “untitled” most likely. You will need to bless the Linux bootloader in order to boot it, so as root in a terminal, run:

bless --folder /Volumes/untitled --file /Volumes/untitled/EFI/redhat/grub.efi

You should then be able to boot into Linux!

Setting up a chroot for Android development

Let’s say you want to build Android. This is not an unreasonable thing to want to do; around here, the most common reason for doing this is to get easy access to debug symbols in system libraries.

However, Android only really supports building on a very specific distribution of Linux, and that is normally the LTS release of Ubuntu (currently 10.04 “Lucid”).

Luckily, it’s relatively easy to set up a chroot in Linux to build in, such that you don’t need to maintain a completely separate installation of Linux if you want to run, say, Ubuntu 11.10 instead.

First you need to install schroot and debootstrap:

sudo apt-get install schroot debootstrap

schroot is a tool to allow you to easily run a command or a login shell within a chroot that you have previously set up. debootstrap is a Debian tool to bootstrap a Debian (and by extension, Ubuntu) release inside a directory which can then be used as a chroot.

Once you have those installed, fire up your favourite text editor and append something similar to the following to your /etc/schroot/schroot.conf file:

[lucid]
description=Ubuntu Lucid
# must match the directory passed to debootstrap below
directory=/var/chroot/lucid
type=directory
# replace "your_username" with the user(s) allowed to use the chroot
users=your_username
root-users=your_username

Now, you need to actually create the chroot. To do this, you need to use debootstrap. In this case, I’m going to create a Lucid chroot:

sudo debootstrap --arch amd64 lucid /var/chroot/lucid http://mirrors.rit.edu/ubuntu

The first argument here specifies the CPU architecture you want to install in the chroot (typically either i386 or amd64), the second is the distribution codename used in the repositories, the third is the directory in which to install the chroot and finally the last argument is the mirror you wish to use to download the Debian packages from for installing.
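That argument order is easy to get wrong, so here it is captured as a tiny command builder (purely illustrative):

```python
def debootstrap_command(arch, suite, target_dir, mirror):
    """Assemble the debootstrap invocation described above.

    arch:       CPU architecture for the chroot ("i386" or "amd64")
    suite:      distribution codename as used in the repositories
    target_dir: directory the chroot is installed into
    mirror:     package mirror to download packages from
    """
    return ["debootstrap", "--arch", arch, suite, target_dir, mirror]
```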

This will take a little while but once it’s done you can simply run the following:

schroot -c lucid

If all goes well, you’ll be greeted with a friendly and informative prompt inside the chroot.


You can then, inside this shell, follow the instructions for building and flashing Android yourself without any trouble.

As far as I can tell, there’s nothing here that should be Debian (or Debian derivative) specific, so hopefully if you’re running a different distribution, so long as you can get hold of debootstrap and schroot you should be fine.

(Update 2012/03/29 – correct the Lucid version number)

Thoughts on Boot to Gecko

This week Mozilla announced the intention to deliver an open device based on the Boot to Gecko (B2G) project in 2012.

When I first heard of B2G, I’ve got to admit I didn’t see what all the fuss was about. It looked very much like what Palm had tried and failed to do with webOS, just a few years later. The more I hear about it, though, the more compelling it seems to be.

I think Mozilla’s timed this very well; whilst the web has been seen more and more as a development platform rather than a straight content delivery system for a while, it’s not been a viable one until recently, and in my opinion it’s still not totally viable on mobile devices. Up until now, in order to be able to run web applications with any sort of decent performance (the sort that consumers have come to expect from the standards that the iPhone has set), you have typically needed a relatively high end system-on-chip to power the thing.

Fast forward to 2012 and there’s a plethora of high performance SoCs available at relatively low cost; B2G devices can be targeted towards the lower end of the smartphone market, whilst still exhibiting good performance. And unlike all the other low end smartphone platforms currently out there, the platform is already well established and the cost of bringing applications to it should be relatively low compared to others. To take a real world example, look at the iPhone 3GS. This is currently positioned by Apple as their low end smartphone, but when running iOS 5 the experience is dismal compared to their flagship phone. This is the sort of phone that any B2G device will be up against, not models like the 4S.

In fact, Telefonica said that they’re hoping to price the B2G phone at “ten times cheaper than an iPhone”, which would suggest that the off-contract price would be around $60 – $70. This is huge for a smartphone that would conceivably be competing with something like the 3GS, which currently sells for $375 off-contract.

What gives me the most confidence, though, is that people responded to the demos very well, and we’re not even done yet. There’s a lot of optimisation work that can and needs to be done, and there’s no reason why we can’t have a user experience comparable to the incumbents in the industry.

Debugging OpenGL on Android without losing your sanity

Recently I’ve had to work more and more on OpenGL code as part of my job, which is hardly surprising given that my new job is to work on graphics. One thing that’s annoyed me since I started, however, is the relative difficulty of debugging OpenGL code compared to normal C/C++ code that just runs on the CPU.

The main reason for this, I’ve found, is that keeping track of which OpenGL commands have been issued, with what parameters, and what state has been set on your GL context is actually a rather hard task. In fact, it’s such a common problem that some bright hackers came up with the holy grail of OpenGL debugging tools – a tool called apitrace.

Put simply, apitrace is just a command tracer that logs all the OpenGL calls you make from your application. Why is that so wonderful? Well, for a start it decouples your development and testing environments. It allows you to record a series of OpenGL commands on a target device or piece of software that’s exhibiting a bug, which you can then replay and debug at your leisure in your native development environment.

Anyone who has had the pleasure of trying to debug OpenGL ES 2.0 code running on an embedded device running something like Android will understand the value here. You can just trace your buggy application, put the trace file on your desktop or laptop, analyse the GL commands you issued and modify them, then fix the bug. Problem solved! No messing around with a huge number of printf() statements or GL debugging states.

Well that’s all well and good, but how do you use this thing? Turns out on Android, that’s not so easy. First off you’ll need to grab my Android branch of apitrace (I’m working on getting these patches upstreamed, so don’t worry), and build it for your device:

export ANDROID_NDK=/path/to/your/android/ndk
cmake \
  -DCMAKE_TOOLCHAIN_FILE=android/android.toolchain.cmake \
  -B build
make -C build

When this is done, you’ll find a file called egltrace.so in build/wrappers which you can then put somewhere on your device, such as /data/local.

However, on Android I have yet to find a way to preload a library, using the LD_PRELOAD environment variable or otherwise, so you’ll have to put the following lines of code before you make any gl calls in your application:

setenv("TRACE_FILE", "/path/to/trace/file", false);
dlopen("/data/local/egltrace.so", RTLD_LAZY);

This will ensure that the symbols can be found, but you also need to actually look up the value of each gl function you’re hoping to use before you can start to get anywhere. In the case of glGetError(), this can be:

typedef GLenum (*glGetErrorFn)(void);
glGetErrorFn fGetError =
    (glGetErrorFn)dlsym(RTLD_DEFAULT, "glGetError");

Unfortunately this will need to be done for all the symbols you’re planning to use, but on the up side you get total control over when your dynamic libraries are loaded and used, which means you can optimise your startup time accordingly.
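The load-then-look-up dance isn’t unique to C. As an analogy, here is the same pattern via Python’s ctypes against the C math library (illustrative only, nothing to do with Android; "libm.so.6" is assumed as a fallback library name on glibc systems):

```python
import ctypes
import ctypes.util

# dlopen() equivalent: locate and load the math library at runtime.
libm = ctypes.CDLL(ctypes.util.find_library("m") or "libm.so.6")

# dlsym() equivalent: look the symbol up by name, then declare its
# signature, which plays the same role as a C function-pointer typedef.
cos = libm.cos
cos.restype = ctypes.c_double
cos.argtypes = [ctypes.c_double]
```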

Once that’s all set up you can go ahead and run your application and grab the tracing output. This is where the fun part starts. apitrace has a GUI written in Qt by Zack Rusin that can be used to do all sorts of crazy stuff, such as:

  • View all GL commands issued, frame by frame
  • Modify parameters passed into GL commands on the fly
  • View current texture data
  • View bound shader programs
  • Inspect GL context state at any point
  • Replay traces
  • View current uniforms

qapitrace in action

You get the idea. Whilst not all of the features seem to be working at the moment with EGL/GLESv2 traces, I hope to devote some spare cycles to fixing those. The most important one to me right now is that qapitrace is unable to view current texture data from traces we obtain from Firefox/Android. It seems unlikely that it’s an issue with our tracing support as replaying the traces using eglretrace works fine, but without investigating further I can’t say whether this is a limitation of qapitrace with EGL/GLESv2 or an issue with our tracing in Firefox. I do get the impression that upstream are targeting desktop GL rather than embedded GL, but that just gives me an opportunity to learn a bit more GL and help out!

Getting EGL playbacks to work on Linux can be a bit trying, however. First off you will need to get the Mesa EGL and GLESv2 development libraries, as well as the usual requirements for building apitrace – Qt version 4.7 and so on – and you can build as per the installation instructions. Before running qapitrace or eglretrace, though, you will need to set the following environment variable, or (on my system at least) DRI fails to authenticate with the kernel:

export EGL_SOFTWARE=true

Of course everything that’s been said here also works great for debugging desktop GL applications, but there’s significantly less pain involved as you shouldn’t need to resort to dlopen/dlsym magic.


Well, here we are again. Another blog. But hopefully not just another blog, but rather an entry point into the fascinating world of open source and its development.

Who am I, I hear you ask. I am a platform engineer working at Mozilla Corporation on the graphics stack for Firefox. My job is to help make the browser awesome and in turn help developers with their goal to make the web awesome, one step at a time.

Here I will offer my insights as a community member, a passionate computer scientist, an open source junkie, a developer and a human. Through this blog I hope to help increase visibility, even if only by a little bit, of what we do and how we do it.