Jeff Walden: I’ve stopped hiking the PCT

Every year’s PCT is different. The path routinely changes, mostly as fresh trail is built to replace substandard trail (or no trail at all, if the prior trail was a road walk). But the reason for an ever-shifting PCT, even within hiking seasons, should be obvious to any California resident: fires.

The 2013 Mountain Fire near Idyllwild substantially impacted the nearby trail. A ten-mile stretch of trail is closed to hikers even today, apparently (per someone I talked to at the outfitter in Idyllwild) because the fire burned so hot that essentially all organic matter was destroyed, so they have to rebuild the entire trail to have even a usable tread. (The entire section is expected to open midsummer next year – too late for northbound thru-hikers, but maybe not for southbounders.)

These circumstances lead to an alternate route for hikers to use. Not all do: many hikers simply hitchhike past the ten closed miles and the remote fifteen miles before them.

But technically, there’s an official reroute, and I’ve nearly completed it. Mostly it involves lots of walking on the side of roads. The forest service dirt roads are rough but generally empty, so not the worst. The walking on a well-traveled highway with no usable shoulder, however, was the least enjoyable hiking I’ve ever done. (I mean ever.) I really can’t disagree with the people who just hitchhiked to Idyllwild and skipped it all.

I’ll be very glad to get back on the real trail several miles from now.

Air Mozilla: Webdev Beer and Tell: May 2017

Webdev Beer and Tell: May 2017 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on in...

Hacks.Mozilla.Org: Showcasing your WebVR experiences

WebVR combines the powerful reach of the Internet with the immersive appeal of virtual reality content. With WebVR, a VR experience is never more than one URL away. Nevertheless, VR equipment is still very expensive and not quite fully adopted for consumer use. For this reason, it is useful to be able to record your VR projects for others to experience and enjoy, at least from a viewer perspective.

Recording VR content

This video tutorial teaches you how to record a virtual experience you’ve created using the mirror mode in SteamVR. Capturing one eye allows your audience to enjoy the video in a regular 2D mode but capturing both eyes will enable a more immersive experience thanks to stereoscopic video.

This video tutorial assumes you have a SteamVR compatible setup and Open Broadcast Software installed.

There are other options for capturing your VR experiences. If you’re a Windows 10 user, you may prefer to use Game DVR, which works out of the box.

Extract a GIF from your favorite cut

Now that you have a video with your VR content, you can make a GIF from it with Instagiffer for Windows. Instagiffer is not the fastest software out there, but the output quality of the GIFs is superb.

Start by installing and launching Instagiffer. The UI is split into three sections or steps.

A window with three sections for choosing the video, settings and preview

Click on Load Video in the Step 1 section, and select the video from which you want to extract the GIF.

When clicking load video, the Windows file selection dialog appears

Locate the sequence you want to convert into a GIF and fill the options in the Step 2 section. In this case, I want to extract 5.5 seconds of video starting from second 18; a sequence in which I shot an enemy bullet.

Three sliders let you modify the start time, framerate (smoothness) and frame size. A text box indicates the length of the clip.

Length, Smoothness and Frame Size will affect the size of your GIF: the higher the values, the higher the size of the resulting file.

In the Step 3 section, you can crop the image by dragging the square red helpers. In this case, I’m removing the black bands around the video. You can also use it to isolate each eye.

A red rectangle with handles in each corner represents the cropping area

Notice that the size of the GIF is shown in the bottom-right corner of the preview. You can adjust this size by moving the Frame Size slider in the Step 2 section.

Finally, click on the Create GIF! button at the bottom of the window to start the conversion.

A progress bar shows how much remains until completion

One of the things I love about Instagiffer is that, after finishing, it will display compatibility warnings about the GIF, checking it against some of the most popular Internet services.

The notice shows warnings for Tumblr, Imgur and Twitter, pointing out problems with sizes and dimensions

Click on the final result to see the animation. It’s really good!

Capture of A-Blast gameplay

If you are more into old-school tools, check out Kevin’s CLI utility Gifpardy and see how it goes.

Make a 3D YouTube video

One of the advantages of recording both eyes is that you can assemble stereoscopic side-by-side 3D videos. You can use YouTube, for instance.

Just upload your video and edit it. Go to the Advanced settings tab inside the Info & Settings view.

Browser content screenshot at Info & Settings tab of a YouTube video

Check the box that says This video is 3D and select Side by side: Left video on the left side in the combo box.

Checkbox for enabling 3D video, with a deprecation warning

The deprecation warning encourages you to do this step offline, with your favorite video editor.

Once you are done, YouTube will select the best option for displaying 3D content, applying the proper filters or corrections as needed.

For instance, you’ll see an anaglyph representation when viewing your video with the Firefox browser on desktop.

An anaglyph red/green representation of the 3D video

You can switch to a 2D representation as well.

Regular 2D representation chooses only one eye to show

When you view the video with Firefox for Android you will see both eyes side by side.

Video on Firefox Android is shown side by side with no distortion (as the original video)

And if you try with the YouTube native app, an icon for Cardboard/Daydream VR will appear, transporting you to a virtual cinema where you can enjoy the clip.

In the YouTube app, a Cardboard is shown in the bottom-right corner to enter VR mode

Theater mode applies the proper distortion to each eye and provides a cinematic view

In conclusion

Virtual reality is not widely adopted or easily accessible yet, but the tools are available now to reach more people and distribute your creative work by recording your WebVR demos in video. Discover VR galleries on Twitter, GIPHY or Tumblr, choose your best clips and share them!

Do you prefer high quality video? Check out the VR content on YouTube or Vimeo.

At Mozilla, we support the success of WebVR and aim to demonstrate that people can share and enjoy virtual reality experiences on the Web! Please share your WebVR projects with the world. We’d love to see what you’re making. Let us know on Twitter by tagging your project with #aframevr, and we’ll RT it! Follow @AframeVR and @MozillaVR for the latest developments and new creative work.

Air Mozilla: Gecko And Native Profiler

Gecko And Native Profiler Ehsan Akhgari: Gecko And Native Profiler. May 18, 2017. Ehsan and Markus’ etherpad is here: https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Gecko_Profiler_FAQ

Daniel Stenberg: curl: 5000 stars

The curl project has been hosted on github since March 2010, and we just now surpassed 5000 stars. A star on github is of course a completely meaningless measurement for anything. But it does show that at least 5000 individuals have visited the page and shown appreciation in this low-impact way.

5000 is not a lot compared to the really popular projects.

On August 12, 2014 I celebrated us passing 1000 stars with a beer. Hopefully it won’t be another seven years until I can get my 10000 stars beer!

Ehsan Akhgari: Quantum Flow Engineering Newsletter #10

Let’s start this week’s updates by looking at the ongoing efforts to improve the usefulness of the background hang reports data.  With Ben Miroglio’s help, we confirmed that we aren’t blowing up telemetry ping sizes yet by sending native stack traces for BHR hangs, and as a result we can now capture a deeper call stack depth, which means the resulting data will be easier to analyze.  Doug Thayer has also been hard at work at creating a new BHR dashboard based on the perf-html UI.  You can see a sneak peek here, but do note that this is work in progress!  The raw BHR data is still available for your inspection.
Kannan Vijayan has been working on adding some low level instrumentation to SpiderMonkey in order to get some detailed information on the relative runtime costs of various builtin intrinsic operations inside the JS engine in various workloads using the rdtsc instruction on Windows.  He now has a working setup that allows him to take a real world JS workload and get some detailed data on what builtin intrinsics were the most costly in that workload.  This is extremely valuable because it allows us to focus our optimization efforts on these builtins where the most gains are to be achieved first.  He already has some initial results of running this tool on the Speedometer benchmark and on a general browsing workload and some optimization work has already started to happen.
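
As a rough illustration for readers who haven’t used the technique, here is a minimal sketch of rdtsc-based timing. SpiderMonkey’s actual instrumentation lives in C++ inside the engine; this is just a standalone Rust toy (x86_64 only) showing the idea of bracketing an operation with time-stamp counter reads.

use std::arch::x86_64::_rdtsc;

// Times a closure by reading the CPU time-stamp counter before and after.
// Raw rdtsc counts are noisy and not serialized here; they are only good
// enough to compare the relative cost of operations, as described above.
fn timed<T>(label: &str, f: impl FnOnce() -> T) -> T {
    let start = unsafe { _rdtsc() };
    let result = f();
    let end = unsafe { _rdtsc() };
    println!("{}: ~{} cycles", label, end - start);
    result
}

fn main() {
    let total: u64 = timed("sum 1..=1_000_000", || (1..=1_000_000u64).sum());
    println!("total = {}", total);
}
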
Dominik Strohmeier has been helping with running startup measurements on the reference Acer machine to track the progress of the ongoing startup improvements using an HDMI video capture card.  For these measurements, we are tracking two numbers, one is the first paint times (the time at which we paint the first frame from the browser window) and the other is the hero element time (the time at which we paint the “hero element” which is the search box in about:home in this case.)  The baseline build here is the Nightly of Apr 1st as a date before active work on startup optimizations started.  At that time, our median first paint time was 1232.84ms (with a standard deviation of 16.58ms) and our hero element time was 1849.26ms (with a standard deviation of 28.58ms).  On the Nightly of May 18, our first paint time is 849.66ms (with a standard deviation of 11.78ms) and our hero element time is 1616.02ms (with a standard deviation of 24.59ms).
Next week we’re going to have a small work week with some people from the DOM, JS, Layout, Graphics and Perf teams here in Toronto.  I expect to be fully busy at the work week, so you should expect the next issue of this newsletter in two weeks!  With that, it is time to acknowledge the hard work of those who helped make Firefox faster this past week.  I hope I’m not leaving out any names by accident!

Manish Goregaokar: Teaching Programming: Proactive vs Reactive

I’ve been thinking about this a lot these days. In part because of an idea I had but also due to this twitter discussion.

When teaching most things, there are two non-mutually-exclusive ways of approaching the problem. One is “proactive”[1], which is where the teacher decides a learning path beforehand, and executes it. The other is “reactive”, where the teacher reacts to the student trying things out and dynamically tailors the teaching experience.

Most in-person teaching experiences are a mix of both. Planning beforehand is very important whilst teaching, but tailoring the experience to the student’s reception of the things being taught is important too.

In person, you can mix these two, and in doing so you get a “best of both worlds” situation. Yay!

But … we don’t really learn much programming in a classroom setup. Sure, some folks learn the basics in college for a few years, but everything they learn after that isn’t in a classroom situation where this can work[2]. I’m an autodidact, and while I have taken a few programming courses for random interesting things, I’ve taught myself most of what I know using various sources. I care a lot about improving the situation here.

With self-driven learning we have a similar divide. The “proactive” model corresponds to reading books and docs. Various people have proactively put forward a path for learning in the form of a book or tutorial. It’s up to you to pick one, and follow it.

The “reactive” model is not so well-developed. In the context of self-driven learning in programming, it’s basically “do things, make mistakes, hope that Google/Stackoverflow help”. It’s how a lot of people learn programming; and it’s how I prefer to learn programming.

It’s very nice to be able to “learn along the way”. While this is a long and arduous process, involving many false starts and a lack of a sense of progress, it can be worth it in terms of the kind of experience this gets you.

But as I mentioned, this isn’t as well-developed. With the proactive approach, there still is a teacher – the author of the book! That teacher may not be able to respond in real time, but they’re able to set forth a path for you to work through.

On the other hand, with the “reactive” approach, there is no teacher. Sure, there are Random Answers on the Internet, which are great, but they don’t form a coherent story. Neither can you really be your own teacher for a topic you do not understand.

Yet plenty of folks do this. Plenty of folks approach things like learning a new language by reading at most two pages of docs and then just diving straight in and trying stuff out. The only language I have not done this for is the first language I learned[3][4].

I think it’s unfortunate that folks who prefer this approach don’t get the benefit of a teacher. In the reactive approach, teachers can still tell you what you’re doing wrong and steer you away from tarpits of misunderstanding. They can get you immediate answers and guidance. When we look for answers on stackoverflow, we get some of this, but it also involves a lot of pattern-matching on the part of the student, and we end up with a bad facsimile of what a teacher can do for you.

But it’s possible to construct a better teacher for this!

In fact, examples of this exist in the wild already!

The Elm compiler is my favorite example of this. It has amazing error messages.

The error messages tell you what you did wrong, sometimes suggest fixes, and help correct potential misunderstandings.

Rust does this too. Many compilers do. (Elm is exceptionally good at it)

One thing I particularly like about Rust is that from that error you can try rustc --explain E0373 and get a terminal-friendly version of this help text.

Anyway, diagnostics basically provide a reactive component to learning programming. I’ve cared about diagnostics in Rust for a long time, and I often remind folks that many things taught through the docs can/should be taught through diagnostics too. Especially because diagnostics are a kind of soapbox for compiler writers — you can’t guarantee that your docs will be read, but you can guarantee that your error messages will. These days, while I don’t have much time to work on stuff myself I’m very happy to mentor others working on improving diagnostics in Rust.

Only recently did I realize why I care about them so much – they cater exactly to my approach to learning programming languages! If I’m not going to read the docs when I get started and try the reactive approach, having help from the compiler is invaluable.

I think this space is relatively unexplored. Elm might have the best diagnostics out there, and as diagnostics (helping all users of a language – new and experienced), they’re great, but as a teaching tool for newcomers, they still have a long way to go. Of course, compilers like Rust are even further behind.

One thing I’d like to experiment with is a first-class tool for reactive teaching. In a sense, clippy is already something like this. Clippy looks out for antipatterns, and tries to help teach. But it also does many other things, and not all teaching moments are antipatterns.

For example, in C, this isn’t necessarily an antipattern:

struct thingy *result;
if (result = do_the_thing()) {
    frob(*result);
}

Many C codebases use if (foo = bar()). It is a potential footgun if you confuse it with ==, but there’s no way to be sure. Many compilers now have a warning for this that you can silence by doubling the parentheses, though.

In Rust, this isn’t an antipattern either:

fn add_one(mut x: u8) {
    x += 1;
}

let num = 0;
add_one(num);
// num is still 0

For someone new to Rust, they may feel that the way to have a function mutate arguments (like num) passed to it is to use something like mut x: u8. What this actually does is copy num (because u8 is a Copy type), and allow you to mutate the copy within the scope of the function. The right way to make a function that mutates arguments passed to it by-reference would be to do something like fn add_one(x: &mut u8). If you try the mut x thing for non-Copy values, you’d get a “use of moved value” error when you try to access num after calling add_one. This would help you figure out what you did wrong, and potentially that error could detect this situation and provide more specific help.
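
For comparison, here is a minimal sketch of that by-reference version, reusing the names from the example above:

// Takes a mutable reference, so the caller's value is actually changed.
fn add_one(x: &mut u8) {
    *x += 1;
}

fn main() {
    let mut num = 0u8;
    add_one(&mut num);
    // num is now 1
    println!("{}", num);
}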

But for Copy types, the mut x version will just compile. And it’s not an antipattern – the way this works makes complete sense in the context of how Rust variables work, and is something that you do need to use at times.

So we can’t even warn on this. Perhaps in “pedantic clippy” mode, but really, it’s not a pattern we want to discourage. (At least in the C example that pattern is one that many people prefer to forbid from their codebase)

But it would be nice if we could tell a learning programmer “hey, btw, this is what this syntax means, are you sure you want to do this?”. With explanations and the ability to dismiss the error.

In fact, you don’t even need to restrict this to potential footguns!

You can detect various things the learner is trying to do. Are they probably mixing up String and &str? Help them! Are they writing a trait? Give a little tooltip explaining the feature.
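
For instance, here’s a hypothetical bit of newcomer code (the names are made up for illustration) where such a tool could point out the String / &str mixup:

// The parameter takes an owned String, so the caller ends up cloning
// just to keep using `name` afterwards. A tool could suggest borrowing
// with &str instead.
fn shout(s: String) -> String {
    s.to_uppercase()
}

fn main() {
    let name = String::from("ferris");
    let loud = shout(name.clone()); // clone forced by the String parameter
    println!("{} says {}", name, loud);
    // A friendlier signature would be: fn shout(s: &str) -> String
}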

This is beginning to remind me of the original “office assistant” Clippy, which was super annoying. But an opt-in tool or IDE feature which gives helpful suggestions could still be nice, especially if you can strike a balance between being so dense it is annoying and so sparse it is useless.

It also reminds me of well-designed tutorial modes in games. Some games have a tutorial mode that guides you through a set path of doing things. Other games, however, have a tutorial mode that will give you hints even if you stray off the beaten path. Michael tells me that Prey is a recent example of such a game.

This really feels like it fits the “reactive” model I prefer. The student gets to mold their own journey, but gets enough helpful hints and nudges from the “teacher” (the tool) so that they don’t end up wasting too much time and can make informed decisions on how to proceed learning.

Now, rust-clippy isn’t exactly the place for this kind of tool. This tool needs the ability to globally “silence” a hint once you’ve learned it. rust-clippy is a linter, and while you can silence lints in your code, you can’t silence them globally for the current user. Nor does that really make sense.
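
(For reference, in-code silencing looks something like this, using today’s attribute syntax; the lint name here is just an example. It applies to one item in one crate, not to “me, on this machine, everywhere”.)

// Allow a single clippy lint for this function only.
#[allow(clippy::needless_return)]
fn answer() -> u8 {
    return 42;
}

fn main() {
    println!("{}", answer());
}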

But rust-clippy does have the infrastructure for writing stuff like this, so it’s an ideal prototyping point. I’ve filed this issue to discuss this topic.

Ultimately, I’d love to see this as an IDE feature.

I’d also like to see more experimentation in the department of “reactive” teaching — not just tools like this.

Thoughts? Ideas? Let me know!

thanks to Andre (llogiq) and Michael Gattozzi for reviewing this


  1. This is how I’m using these terms. There seems to be precedent in pedagogy for the proactive/reactive classification, but it might not be exactly the same as the way I’m using it.

  2. This is true for everything, but I’m focusing on programming (in particular programming languages) here.

  3. And when I learned Rust, it only had two pages of docs, aka “The Tutorial”. Good times.

  4. I do eventually get around to doing a full read of the docs or a book but this is after I’m already able to write nontrivial things in the language, and it takes a lot of time to get there.

Justin Dolske: Photon Engineering Newsletter #1

Well, hello there. Let’s talk about the state of Photon, the upcoming Firefox UI refresh! You’ve likely seen Ehsan’s weekly Quantum Flow updates. They’re a great summary of the Quantum Flow work, so I’m just going to copy the format for Photon too. In this update I’ll briefly cover some of the notable work that’s happened up through the beginning of May. I hope to do future updates on a weekly basis.

Our story so far

Up until recently, the Photon work hasn’t been very user-visible. It’s been lots of planning, discussion, research, prototypes, and foundational work. But now we’re at the point where we’re getting into full-speed implementation, and you’ll start to see things changing.

Photon is landing incrementally between now and Firefox 57. It’s enabled by default on Nightly, so you won’t need to enable any special settings. (Some pieces may be temporarily disabled-by-default until we get them up to a Nightly level of quality, but we’ll enable them when they’re ready for testing.) This allows us to get as much testing as possible, even in versions that ultimately won’t ship with Photon. But it does mean that Nightly users will only gradually see Photon changes arriving, instead of a big splash with everything arriving at once.

For Photon work that lands on Nightly-55 or Nightly-56, we’ll be disabling almost all Photon-specific changes once those versions are out of Nightly; in other words, they’ll be off in Beta-55 and Beta-56 (and of course the final release versions, Firefox-55 and Firefox-56). That’s not where we’re actively developing or fixing bugs – so if you want to try out Photon as it’s being built, you should stick with Nightly. Users on Beta or Release won’t see Photon until 57 starts to ship on those channels later this year.

The Photon work is split into 6 main areas (which is also how the teams implementing it are organized). These are, briefly:

1. Menus and structure – Replaces the existing application menu (“Hamburger button”) with a simplified linear menu, adds a “page action” menu, changes the bookmarks split-button to be a more general-purpose “library menu”, updates sidebars, and more.

2. Animation – Adds animation to toolbar button actions, and improves animations/transitions of other UI elements (like tabs and menus).

3. Preferences – Reorganizes the Firefox preferences UI and adds the ability to search.

4. Visual redesign – This is a collection of other visual changes for Photon. Updating icons, changing toolbar buttons, adapting UI size when using touchscreens, and many other general UI refinements.

5. Onboarding – An important part of the Photon experience is helping new users understand what’s great about Firefox, and showing existing users what’s new and different in 57.

6. Performance – Performance is a key piece throughout Photon, but the Performance team is helping us to identify what areas of Firefox have issues. Some of this work overlaps with Quantum Flow; other work targets specific areas of Firefox UI jank.

Recent Changes

These updates are going to focus more on the work that’s landing and less on the process that got it there. To start getting caught up, here’s a summary of what’s happened so far in each of the project areas through early May…

Menus/structure: Work is underway to implement the new menus. It’s currently behind a pref until we have enough implemented to turn them on without making Nightly awkward to use. In bug 1355331 we briefly moved the sidebar to the right side of the window instead of the left. But we’ve since decided that we’re only going to provide a preference to allow putting it on the right, and it will remain on the left by default.

Animation: In bug 1352069 we consolidated some existing preferences into a single new toolkit.cosmeticAnimations.enabled preference, to make it easy to disable non-essential animations for performance or accessibility reasons. Bugs 1345315 and 1356655 reduced jank in the tab opening/closing animations. The UX team is finalizing the new animations that will be used in Photon, and the engineering team has built prototypes for how to implement them in a way that performs well.

Preferences: Earlier in the year, we worked with a group of students at Michigan State University to reorganize Firefox’s preferences and add a search function (bug 1324168). We’re now completing some final work, preparing for a second revision, and adding some new UI for performance settings. While this is now considered part of Photon, it was originally scheduled to land in Firefox 55 or 56, and so will ship before the rest of Photon.

Visual redesign:  Bug 1347543 landed a major change to the icons in Firefox’s UI. Previously the icons were simple PNG bitmaps, with different versions for OS variations and display scaling factors. Now they’re a vector format (SVG), allowing a single source file to be rendered within Firefox at different sizes or with different colors. You won’t notice this change, because we’re currently using SVG versions of the current pre-Photon icons. We’ll switch to the final Photon icons later, for Firefox 57. Another big foundational piece of work landed in bug 1352364, which refactored our toolbar button CSS so that we can easily update it for Photon.

Onboarding: The onboarding work got started later than other parts of Photon. So while some prototyping has started, most of the work up to May was spent finalizing the scope and design of the project.

Performance: As noted in Ehsan’s Quantum updates, the Photon performance work has already resulted in a significant improvement to Firefox startup time. Other notable fixes have made closing tabs faster, and work to improve how favicons are stored improved performance on our Tp5 page-load benchmark by 30%! Other fixes have reduced awesomebar jank. While a number of performance bugs have been fixed (of which these are just a subset), most of the focus so far has been on profiling Firefox to identify lots of other things to fix. And it’s also worth noting the great Performance Best Practices guide Mike Conley helped put together, as well as his Oh No! Reflow! add-on, which is a useful tool for finding synchronous reflows in Firefox UI (which cause jank).

That’s it for now! The next couple of these Photon updates will catch up with what’s currently underway.


Robert O'Callahan: rr Usenix Paper And Technical Report

Our paper about rr has been accepted to appear at the Usenix Annual Technical Conference. Thanks to Dawn for suggesting that conference, and to the ATC program committee for accepting it :-). I'm scheduled to present it on July 13. The paper is similar to the version we previously uploaded to arXiv.

Some of the reviewers requested more material: additional technical details, explanations of some of our design choices compared to alternatives, and reflection on our "real world" experience with rr. There wasn't space for that within the conference page limits, so our shepherd suggested publishing a tech-report version of the paper with the extra content. I've done that and uploaded "Engineering Record And Replay For Deployability: Extended Technical Report". I hope some people find it interesting and useful.

Adblock Plus: The plan towards offering Adblock Plus for Firefox as a Web Extension

TL;DR: Sometime in autumn this year the current Adblock Plus for Firefox extension is going to be replaced by another, which is more similar to Adblock Plus for Chrome. Brace for impact!

What are Web Extensions?

At some point, Web Extensions are supposed to become a new standard for creating browser extensions. The goal is to write extensions in such a way that they can run on any browser with no modifications, or only minimal ones. Mozilla and Microsoft are pursuing standardization of Web Extensions based on Google Chrome APIs. And Google? Well, they aren’t interested. Why should they be, when they’ve already established themselves as the extension market leader and made everybody copy their approach?

It isn’t obvious at this point how Web Extensions will develop. The lack of interest from Google isn’t the only issue here; so far the implementation of Web Extensions in Mozilla Firefox and Microsoft Edge shows very significant differences as well. It is worth noting that Web Extensions are necessarily less powerful than the classic Firefox extensions, even though many shortcomings can probably be addressed. Also, my personal view is that the differences between browsers are either going to result in more or less subtle incompatibilities or in an API which is limited by the lowest common denominator of all browsers and not good enough for anybody.

So why offer Adblock Plus as a Web Extension?

Because we have no other choice. Mozilla’s current plan is that Firefox 57 (scheduled for release on November 14, 2017) will no longer load classic extensions; only Web Extensions will be allowed to continue working. So we have to replace the current Adblock Plus with a Web Extension by then, or ideally even by the time Firefox 57 is published as a beta version. Otherwise Adblock Plus will simply stop working for the majority of our users.

Mind you, there is no question why Mozilla is striving to stop supporting classic extensions. Due to their deep integration in the browser, classic extensions are more likely to break browser functionality or to cause performance issues. They’ve also been delaying important Firefox improvements due to compatibility concerns. This doesn’t change the fact that this transition is very painful for extension developers, and many existing extensions won’t take this hurdle. Furthermore, it would have been better if the designated successor of the classic extension platform were more mature by the time everybody is forced to rewrite their code.

What’s the plan?

Originally, we hoped to port Adblock Plus for Firefox properly. While using Adblock Plus for Chrome as a starting point would require far less effort, this extension also has much less functionality compared to Adblock Plus for Firefox. Also, when developing for Chrome we had to make many questionable compromises that we hoped to avoid with Firefox.

Unfortunately, this plan didn’t work out. Adblock Plus for Firefox is a large codebase and rewriting it all at once without introducing lots of bugs is unrealistic. The proposed solution for a gradual migration doesn’t work for us, however, due to its asynchronous communication protocols. So we are using this approach to start data migration now, but otherwise we have to cut our losses.

Instead, we are using Adblock Plus for Chrome as a starting point, and improving it to address the functionality gap as much as possible before we release this version for all our Firefox users. For the UI this means:

  • Filter Preferences: We are working on a more usable and powerful settings page than what is currently being offered by Adblock Plus for Chrome. This is going to be our development focus, but it is still unclear whether advanced features such as listing filters of subscriptions or groups for custom filters will be ready by the deadline.
  • Blockable Items: Adblock Plus for Chrome offers comparable functionality, integrated in the browser’s Developer Tools. Firefox currently doesn’t support Developer Tools integration (bug 1211859), but there is still hope for this API to be added by Firefox 57.
  • Issue Reporter: We have plans for reimplementing this important functionality. Given all the other required changes, this one has lower priority, however, and likely won’t happen before the initial release.

If you are really adventurous you can install a current development build here. There is still much work ahead however.

What about applications other than Firefox Desktop?

The deadline only affects Firefox Desktop for now; in other applications classic extensions will still work. However, it currently looks like by Firefox 57 the Web Extensions support in Firefox Mobile will be sufficient to release a Web Extension there at the same time. If not, we still have the option to stick with our classic extension on Android. Update (2017-05-18): Mozilla announced that Firefox Mobile will drop support for classic extensions at the same time as Firefox Desktop. So the option to keep our classic extension there doesn’t exist; we’ll have to make do with whatever Web Extensions APIs are available.

As to SeaMonkey and Thunderbird, things aren’t looking good there. It’s doubtful that these will have noteworthy Web Extensions support by November. In fact, it’s not even clear whether they plan to support Web Extensions at all. And unlike with Firefox Mobile, we cannot publish a different build for them (Addons.Mozilla.Org only allows different builds per operating system, not per application). So our users on SeaMonkey and Thunderbird will be stuck with an outdated Adblock Plus version.

What about extensions like Element Hiding Helper, Customizations and similar?

Sadly, we don’t have the resources to rewrite these extensions. We just released Element Hiding Helper 1.4, and it will most likely remain as the last Element Hiding Helper release. There are plans to integrate some comparable functionality into Adblock Plus, but it’s not clear at this point when and how it will happen.

Mozilla Addons Blog: Compatibility Update: Add-ons on Firefox for Android

We announced our plans for add-on compatibility and the transition to WebExtensions in the Road to Firefox 57 blog post. However, we weren’t clear on what this meant for Firefox for Android.

We did this intentionally, since at the time the plan wasn’t clear to us either. WebExtensions APIs are landing on Android later than on desktop. Many of them either don’t apply or need additional work to be useful on mobile. It wasn’t clear if moving to WebExtensions-only on mobile would cause significant problems to our users.

The Plan for Android

After looking into the most critical add-ons for mobile and the implementation plan for WebExtensions, we have decided it’s best to have desktop and mobile share the same timeline. This means that mobile will be WebExtensions-only at the same time as desktop Firefox, in version 57. The milestones specified in the Road to Firefox 57 post now apply to all platforms.

The post Compatibility Update: Add-ons on Firefox for Android appeared first on Mozilla Add-ons Blog.

Air Mozilla: Mozilla Roadshow Paris

Mozilla Roadshow Paris The Mozilla Roadshow is making a stop in Paris. Join us for a meetup-style, Mozilla-focused event series for people who build the Web. Hear from...

Chris H-C: Data Science is Hard: Anomalies Part 3

So what do you do when you have a duplicate data problem and it just keeps getting worse?

You detect and discard.

Specifically, since we already have a few billion copies of pings with identical document ids (which are extremely unlikely to collide), there is no benefit to continue storing them. So what we do is write a short report about what the incoming duplicate looked like (so that we can continue to analyze trends in duplicate submissions), then toss out the data without even parsing it.
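
A minimal sketch of that idea follows. It is not the actual telemetry pipeline code, and the (document id, payload) shape is an assumption made purely for illustration:

use std::collections::HashSet;

// "Detect and discard": remember document ids we've already seen, write a
// one-line report for any duplicate, and drop it without parsing the payload.
fn ingest(pings: Vec<(String, String)>) {
    let mut seen: HashSet<String> = HashSet::new();
    for (doc_id, raw) in pings {
        if !seen.insert(doc_id.clone()) {
            println!("duplicate ping: id={} size={} bytes", doc_id, raw.len());
            continue; // never parsed, never stored
        }
        process(&raw); // parse and store as usual
    }
}

fn process(_raw: &str) {
    // parsing and storage elided
}

fn main() {
    ingest(vec![
        ("doc-1".into(), "{...}".into()),
        ("doc-1".into(), "{...}".into()), // duplicate: reported and discarded
        ("doc-2".into(), "{...}".into()),
    ]);
}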

As before, I’ll leave finding out the time the change went live as an exercise for the reader.

:chutten


The Mozilla Blog: One Step Closer to a Closed Internet

Today, the FCC voted on Chairman Ajit Pai’s proposal to repeal and replace net neutrality protections enacted in 2015. The verdict: to move forward with Pai’s proposal.

 

We’re deeply disheartened. Today’s FCC vote to repeal and replace net neutrality protections brings us one step closer to a closed internet.  Although it is sometimes hard to describe the “real” impacts of these decisions, this one is easy: this decision leads to an internet that benefits Internet Service Providers (ISPs), not users, and erodes free speech, competition, innovation and user choice.

This vote undoes years of progress leading up to 2015’s net neutrality protections. The 2015  rules properly place ISPs under “Title II” of the Communications Act of 1934, and through that well-tested basis of legal authority, prohibit ISPs from engaging in paid prioritization and blocking or throttling of web content, applications and services. These rules ensured a more open, healthy Internet.

Pai’s proposal removes the 2015 protections and re-re-classifies ISPs under “Title I,” which courts already have determined is insufficient for ensuring a truly neutral net. The result: ISPs would be able to once again prioritize, block and throttle with impunity. This means fewer opportunities for startups and entrepreneurs, and a chilling effect on innovation, free expression and choice online.

Net neutrality isn’t an abstract issue — it has significant, real-world effects. For example, in the past, without net neutrality protections, ISPs have imposed limits on who can FaceTime and determined how we stream videos, and also adopted underhanded business practices.

So what’s next and what can we do?

We’re now entering a 90-day public comment period, which ends in mid-August. The FCC may determine a path forward as soon as October of this year.

During the public comment period in 2015, nearly 4 million citizens wrote to the FCC, many of them demanding strong net neutrality protections.  We all need to show the same commitment again.

We’re already well on our way to making noise. In the weeks since Pai first announced his proposal, more than 100,000 citizens (not bots) have signed Mozilla’s net neutrality petition at mzl.la/savetheinternet. And countless callers (again, not bots) have recorded more than 50 hours of voicemail for the FCC’s ears. We need more of this.

We’re also planning strategic, direct engagement with policymakers, including through written comments in the FCC’s open proceeding. Over the next three months, Mozilla will continue to amplify internet users’ voices and fuel the movement for a healthy internet.

The post One Step Closer to a Closed Internet appeared first on The Mozilla Blog.

Air Mozilla: Reps Weekly Meeting May 18, 2017

Reps Weekly Meeting May 18, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Air Mozilla: 2017 Global Sprint Ask Me Anything

2017 Global Sprint Ask Me Anything Questions and answers about the upcoming 2017 Mozilla Global Sprint, our worldwide collaboration party for the Open Web

Eric Shepherd: A genuine medical oddity

My health continues to be an adventure. My neuropathy continues to worsen steadily; I no longer have any significant sensation in many of my toes, and my feet are always in a state of “pins and needles” style numbness. My legs are almost always tingling so hard they burn, or feel like they’re being squeezed in a giant fist, or both. The result is that I have some issues with my feet not always doing exactly what I expect them to be doing, and I don’t usually know exactly where they are.

For example, I have voluntarily stopped driving for the most part, because much of the time, sensation in my feet is so bad that I can’t always tell whether my feet are in the right places. A few times, I’ve found myself pressing the gas and brake pedals together because I didn’t realize my foot was too far to the left.

I also trip on things a lot more than I used to, since my feet wander a bit without my realizing it. On January 2, I tripped over a chair in my office while carrying an old CRT monitor to store it in my supply cabinet. I went down hard on my left knee and landed on the monitor I was carrying, taking it squarely to my chest. My chest was okay, just a little sore, but my knee was badly injured. The swelling was pretty brutal, and it is still trying to finish healing up more than four months later.

Given the increased problems with my leg pain, my neurologist recently had an MRI performed on my lumbar (lower) spine. An instance of severe nerve root compression was found which is possibly contributing to my pain and numbness in my legs. We are working to schedule an attempt to inject medication at that location to try to reduce the swelling that’s causing the compression. If successful, that could help temporarily relieve some of my symptoms.

But the neuropathic pain in my neck and shoulders continues as well. There is some discussion of possibly once again looking at using a neurostimulator implant to try to neutralize the pain signals that are being falsely generated. Apparently I’m once again eligible for this after a brief period where my symptoms shifted outside the range of those which are considered appropriate for that type of therapy.

In addition to the neurological issues, I am in the process of scheduling a procedure to repair some vascular leaks in my left leg, which may be responsible for some swelling there that could be in part responsible for some of my leg trouble (although that is increasingly unlikely given other information that’s come to light since we started scheduling that work).

Then you can top all that off with the side effects of all the meds I’m taking. I take at least six medications which have the side effect of “drowsiness” or “fatigue” or “sleepiness.” As a result, I live in a fog most of the time. Mornings and early afternoons are especially difficult. Just keeping awake is a challenge. Being attentive and getting things written is a battle. I make progress, but slowly. Most of my work happens in the afternoons and evenings, squeezed into the time between my meds easing up enough for me to think more clearly and alertly, and time for my family to get together for dinner and other evening activities together.

Balancing work, play, and personal obligations when you have this many medical issues at play is a big job. It’s also exhausting in and of itself. Add the exhaustion and fatigue that come from the pain and the meds, and being me is an adventure indeed.

I appreciate the patience and the help of my coworkers and colleagues more than I can begin to say. Each and every one of you is awesome. I know that my unpredictable work schedule (between having to take breaks because of my pain and the vast number of appointments I have to go to) causes headaches for everyone. But the team has generally adapted to cope with my situation, and that above all else is something I’m incredibly grateful for. It makes my daily agony more bearable. Thank you. Thank you. Thank you.

Thank you.

Doug Belshaw: Fake News and Digital Literacies: some resources

Free Hugs CC BY-NC-ND clement127

In a couple of weeks’ time, on Thursday, 1st June 2017, I’ll be a keynote speaker at an online Library 2.0 event, convened by Steve Hargadon. The title is Digital Literacy and Fake News and you can register for it here. An audience of around 5,000 people from all around the world is expected to hear us discuss the following:

What does “digital literacy” mean in an era shaped by the Internet, social media, and staggering quantities of information? How is it that the fulfillment of human hopes for an open knowledge society seems to have resulted in both increased skepticism of, and casualness with, information? What tools and understanding can library professionals bring to a world that seems to be dominated by fake news?

In preparation for the session, Steve has asked us all to provide a ‘Top 10’ of our own resources on the topic, as well as those from others that we’d recommend. In the spirit of working openly, I’m sharing in advance what I’ve just emailed to him.

I’ll be arguing that ‘Fake News’ is a distraction from more fundamental problems, including algorithmic curation of news feeds, micro-targeting of user groups, and the way advertising fuels the web economy.

 1. My resources

 2. Other resources

I hope you can join us live, or at least watch the recording afterwards! Don’t forget to sign up.


Comments? Questions? I’m off Twitter for May, but you can email me: [email protected]

Image CC BY-NC-ND clement127

Daniel Pocock: Hacking the food chain in Switzerland

A group has recently been formed on Meetup seeking to build a food computer in Zurich. The initial meeting is planned for 6:30pm on 20 June 2017 at ETH (Zurich Centre/Zentrum, Rämistrasse 101).

The question of food security underlies many of the world's problems today. In wealthier nations, we are being called upon to trust a highly opaque supply chain and our choices are limited to those things that major supermarket chains are willing to stock. A huge transport and storage apparatus adds to the cost and CO2 emissions and detracts from the nutritional value of the produce that reaches our plates. In recent times, these problems have been highlighted by the horsemeat scandal, the Guacapocalypse and the British Hummus crisis.

One interesting initiative to create transparency and encourage diversity in our diets is the Open Agriculture (OpenAg) Initiative from MIT, summarised in this TED video from Caleb Harper. The food produced is healthier and fresher than anything you might find in a supermarket and has no exposure to pesticides.

An open source approach to food

An interesting aspect of this project is the promise of an open source approach. The project provides hardware plans, a video of the build process, source code and the promise of sharing climate recipes (scripts) to replicate the climates of different regions, helping ensure it is always the season for your favourite fruit or vegetable.

Do we need it?

Some people have commented on the cost of equipment and electricity. Carsten Agger recently blogged about permaculture as a cleaner alternative. While there are many places where people can take that approach, there are also many overpopulated regions and cities where it is not feasible. Some countries, like Japan, have an enormous population and previously productive farmland contaminated by industry, such as the Fukushima region. Growing our own food also has the potential to reduce food waste, as individual families and communities can grow what they need.

Whether it is essential or not, the food computer project also provides a powerful platform to educate people about food and climate issues and an exciting opportunity to take the free and open source philosophy into many more places in our local communities. The Zurich Meetup group has already received expressions of interest from a diverse group including professionals, researchers, students, hackers, sustainability activists and free software developers.

Next steps

People who want to form a group in their own region can look in the forum topic "Where are you building your Food Computer?" to find out if anybody has already expressed interest.

Which patterns from the free software world can help more people build more food computers? I've already suggested using Debian's live-wrapper to distribute a runnable ISO image that can boot from a USB stick; can you suggest other solutions like this?

Can you think of any free software events where you would like to see a talk or exhibit about this project? Please suggest them on the OpenAg forum.

There are many interesting resources about the food crisis, an interesting starting point is watching the documentary Food, Inc.

If you are in Switzerland, please consider attending the meeting at 6:30pm on 20 June 2017 at ETH (Centre/Zentrum), Zurich.

One final thing to contemplate: if you are not hacking your own food supply, who is?

Air Mozilla: The Joy of Coding - Episode 100

The Joy of Coding - Episode 100 mconley livehacks on real Firefox bugs while thinking aloud.

Kim Moir: The Manager’s Path in review

I ordered Camille Fournier’s book on engineering management when it was in pre-order.  I was delighted to read it when it arrived in the mail.  If you work in the tech industry, manager or not, this book is a must read.

The book is a little over 200 pages but it packs more succinct and useful advice than other, longer books on this topic that I’ve read in the past.  It takes a unique approach in that the first chapter describes what to expect from your manager, as a new individual contributor.  Each chapter then moves on to how to help others as a mentor, then a tech lead, managing other people, managing a team, managing multiple teams and further up the org chart.  At the end of each chapter, there are questions for you that relate to the chapter to assess your own experience with the material.


Some of the really useful advice in the book

  • Choose your manager wisely; consider who you will be reporting to when interviewing.  “Strong managers know how to play the game at their company.  They can get you promoted; they can get you attention and feedback from important people.  Strong managers have strong networks, and can get you jobs even after you stop working for them.”
  • How to be a great tech lead – understand the architecture, be a team player, lead technical discussions, communicate “Your productivity is now less important than the productivity of the whole team….If one universal talent separates successful leaders from the pack, it’s communication skills.  Successful leaders write well, they read carefully, and they can get up in front of a group and speak.”
  • On transitioning to a job managing a team “As much as you may want to believe that management is a natural progression of the skills you develop as a senior engineer, it’s really a whole new set of skills and challenges.”  So many people think that and don’t take the time to learn new skills before taking on the challenge of managing a team.
  • One idea I thought was really fantastic was to create a 30/60/90 day plan for new hires or new team members to establish clear goals to ensure they are meeting expectations on getting up to speed.
  • Camille also discusses the perils of micromanagement and how this can be a natural inclination for people who were deeply technical before becoming managers.  Instead of focusing on the technical details, you need to focus on giving people autonomy over their work, to keep them motivated and engaged.
  • On giving performance reviews – use concrete examples from anonymous peer reviews to avoid bias.  Spend plenty of time preparing; start early and focus on accomplishments and strengths.  When describing areas for improvement, keep it focused. Avoid surprises during the review process. If a person is under-performing, the review process should not be the first time they learn this.
  • From the section on debugging dysfunctional teams, the example was given of a team that wasn’t shipping: the team only released once a week and the release process was very time consuming and painful.  Once the release process was more automated and releases occurred more regularly, the team became more productive.  In other words, sometimes dysfunctional teams are due to resource constraints or broken processes.
  • Be kind, not nice. It’s kind to tell someone that they aren’t ready for promotion and describe the steps that they need to get to the next level.  It’s not kind to tell someone that they should get promoted, but watch them fail.  It’s kind to tell someone that their behaviour is disruptive, and that they need to change it.
  • Don’t be afraid.  Avoiding conflict because of fear will not resolve the problems on your team.
  • “As you grow more into leadership positions, people will look to you for behavioral guidance. What you want to teach them is how to focus.  To that end, there are two areas I encourage you to practice modeling, right now: figuring out what’s important, and going home.”  💯💯💯
  • Suggestions for roadmapping uncertainty – be realistic about the probability of change, and break down projects into smaller deliverables so that if you don’t implement the entire larger project, you have still implemented some of the project’s intended results.
  • Chapter 9 on bootstrapping culture discusses how the role of a senior engineering leader is not just to set technical direction, but to be clear and thoughtful about setting the culture of the engineering team.
  • I really like this paragraph on hiring for the values of the team

(Image: an excerpt from the book on hiring for culture.)

  • The bootstrapping culture chapter finishes with notes about code review, running outage postmortems and architectural reviews which all have very direct and useful advice.

This book describes concrete and concise steps to be an effective leader as you progress through your career.  It also features the voices and perspectives of women in leadership, something some well-known books lack. I’ve read a lot of management books over the years, and while some have jewels of wisdom, I haven’t read one that is this densely packed with useful content.  It really makes you think – what is the most important thing I can be doing now to help others make progress and be happy with their work?

I’ve also found the writing, talks and perspectives of the following people on engineering leadership invaluable


Mozilla Open Policy & Advocacy Blog: Working Together Towards a more Secure Internet through VEP Reform

Today, Mozilla sent a letter to Congress expressing support for an important bill that has just been introduced: the Protecting Our Ability to Counter Hacking Act (PATCH Act). You can read more in this post from Denelle Dixon.

This bill focuses on a relatively unknown, but critical, piece of the U.S. government’s responsibility to secure our internet infrastructure: the Vulnerabilities Equities Process (VEP). The VEP is the government’s process for reviewing and coordinating the disclosure of vulnerabilities to folks who write code – like us – who can fix them in the software and hardware we all use (you can learn more about what we know here). However, the VEP is not codified in law, and lacks transparency and reporting on both the process policymakers follow and the considerations they take into account. The PATCH Act would address these gaps.

The cyberattack over the last week – the WannaCry ransomware, built on an exploit from the latest Shadow Brokers release and targeting unpatched Windows computers – only emphasizes the need to work together and make sure that we’re all as secure as we can be. As we said earlier this week, these exploits might have been shared with Microsoft by the NSA – and that would be the right way to handle an exploit like this. If the government has exploits that have been compromised, they must disclose them to software companies before they can be used widely, putting users at risk. The lack of transparency around the government’s decision-making processes points to the importance of codifying and improving the Vulnerabilities Equities Process.

We’ve said before – many times – how important it is to work together to protect cybersecurity. Reforming the VEP is one key component of that shared responsibility, ensuring that the U.S. government shares vulnerabilities that put swaths of the internet at risk. The process was conceived in 2010 to improve our collective cybersecurity, and implemented in 2014 after the Heartbleed vulnerability put most of the internet at risk (for more information, take a look at this timeline). It’s time to take the next step and put this process into statute.

Last year, we wrote about five important reforms to the VEP we believe are necessary:

  • All security vulnerabilities should go through the VEP.
  • All relevant federal agencies involved in the VEP should work together using a standard set of criteria to ensure all risks and interests are considered.
  • Independent oversight and transparency into the processes and procedures of the VEP must be created.
  • The VEP should be placed within the Department of Homeland Security (DHS), with their expertise in existing coordinated vulnerability disclosure programs.
  • The VEP should be codified in law to ensure compliance and permanence.

Over the last year, we have seen many instances where hacking tools from the U.S. government have been posted online, and then used – by unknown adversaries – to attack users. Some of these included “zero days”, which left companies scrambling to patch their software and protect their users, without prior notice. It’s important that the government defaults to disclosing vulnerabilities, rather than hoarding them in case they become useful later. We hope they will instead work with technology companies to help protect all of us online.

The PATCH Act – introduced by Sen. Gardner, Sen. Johnson, Sen. Schatz, Rep. Farenthold, and Rep. Lieu – aims to codify and make the existing Vulnerabilities Equities Process more transparent. It’s relatively simple – a good thing, when it comes to legislation: it creates a VEP Board, housed at DHS, which will consider disclosure of vulnerabilities that some part of the government knows about. The VEP Board would make public the process and criteria they use to balance the relevant interests and risks – an important step – and publish reporting around the process. These reports would allow the public to consider whether the process is working well, without sharing classified information (saving that reporting for the relevant oversight entities). This would also make it easier to disclose vulnerabilities through DHS’ existing channels.

Mozilla looks forward to working with members of Congress on this bill, as well as others interested in VEP reform – and all the other government actors, in the U.S. and around the world, who seek to take action that would improve the security of the internet. We stand with you, ready to defend the security of the internet and its users.

The post Working Together Towards a more Secure Internet through VEP Reform appeared first on Open Policy & Advocacy.

The Mozilla BlogImproving Internet Security through Vulnerability Disclosure

Supporting the PATCH Act for VEP Reform

 

Today, Mozilla sent a letter to Congress in support of the Protecting Our Ability to Counter Hacking Act (PATCH Act) that was just introduced by Sen. Cory Gardner, Sen. Ron Johnson, Sen. Brian Schatz, Rep. Blake Farenthold, and Rep. Ted Lieu.

We support the PATCH Act because it aims to codify the existing Vulnerabilities Equities Process and make it more transparent. The Vulnerabilities Equities Process (VEP) is the U.S. government’s process for reviewing and coordinating the disclosure of new vulnerabilities it learns about.

The VEP remains shrouded in secrecy, and is in need of process reforms to ensure transparency, accountability, and oversight. Last year, I wrote about five important reforms to the VEP we believe are necessary to make the internet more secure. The PATCH Act includes many of the key reforms, including codification in law to increase transparency and accountability.

For background, a vulnerability is a flaw – in design or implementation – that can be used to exploit or penetrate a product or system. We saw an example this weekend as a ransomware attack took unpatched systems by surprise – and you’d be surprised at how common they are if we don’t all work together to fix them. These vulnerabilities can put users and businesses at significant risk from bad actors. At the same time, exploiting these same vulnerabilities can also be useful for law enforcement and intelligence operations. It’s important to consider those equities when the government decides what to do.

If the government has exploits that have been compromised, they must disclose them to tech companies before those vulnerabilities can be used widely and put users at risk. The lack of transparency around the government’s decision-making processes here means that we should improve and codify the Vulnerabilities Equities Process in law. Read this Mozilla Policy blog post from Heather West for more details.

The internet is a shared resource and securing it is our shared responsibility. This means technology companies, governments, and even users have to work together to protect and improve the security of the internet.

We look forward to working with the U.S. government (and governments around the world) to improve disclosure of security vulnerabilities and better secure the internet to protect us all.

 

 

The post Improving Internet Security through Vulnerability Disclosure appeared first on The Mozilla Blog.

Daniel PocockBuilding an antenna and receiving ham and shortwave stations with SDR

In my previous blog on the topic of software defined radio (SDR), I provided a quickstart guide to using gqrx, GNU Radio and the RTL-SDR dongle to receive FM radio and the amateur 2 meter (VHF) band.

Using the same software configuration and the same RTL-SDR dongle, it is possible to add some extra components and receive ham radio and shortwave transmissions from around the world.

Here is the antenna setup from the successful SDR workshop at OSCAL'17 on 13 May:

After the workshop on Saturday, members of the OSCAL team successfully reconstructed the SDR and antenna at the Debian info booth on Sunday and a wide range of shortwave and ham signals were detected:

Here is a close-up look at the laptop, RTL-SDR dongle (above laptop), Ham-It-Up converter (above water bottle) and MFJ-971 ATU (on right):

Buying the parts

Each component below is listed with its purpose and an approximate price:

  • RTL-SDR dongle (~ € 25): Converts radio signals (RF) into digital signals for reception through the USB port. It is essential to buy a dongle intended for SDR with a TCXO; the generic RTL dongles for TV reception are not stable enough for anything other than TV.
  • Enamelled copper wire, 25 meters or more (~ € 10): Loop antenna. Thicker wire provides better reception and is more suitable for transmitting (if you have a license) but it is heavier. The antenna I've demonstrated at recent events uses 1 mm thick wire.
  • 4 (or more) ceramic egg insulators (~ € 10): Attach the antenna to string or rope. Smaller insulators are better as they are lighter and less expensive.
  • 4:1 balun (from € 20): The actual ratio of the balun depends on the shape of the loop (square, rectangle or triangle) and the point where you attach the balun (middle, corner, etc.). You may want to buy more than one balun, for example a 4:1 balun and also a 1:1 balun to try alternative configurations. Make sure it is waterproof, has hooks for attaching a string or rope, and has an SO-239 socket.
  • 5 meter RG-58 coaxial cable with male PL-259 plugs on both ends (~ € 10): If using more than 5 meters, or if you want to use higher frequencies above 30 MHz, use thicker, heavier and more expensive cables like RG-213. The cable must be 50 ohm.
  • Antenna Tuning Unit (ATU) (~ € 20 receive-only, or second hand): I've been using the MFJ-971 for portable use and demos because of the weight. There are even lighter and cheaper alternatives if you only need to receive.
  • PL-259 to SMA male pigtail, up to 50 cm, RG58 (~ € 5): Joins the ATU to the up-converter. The cable must be RG58 or another 50 ohm cable.
  • Ham It Up v1.3 up-converter (~ € 40): Mixes the HF signal with a signal from a local oscillator to create a new signal in the spectrum covered by the RTL-SDR dongle.
  • SMA (male) to SMA (male) pigtail (~ € 2): Joins the up-converter to the RTL-SDR dongle.
  • USB charger and USB type B cable (~ € 5): Used to power the up-converter. A spare USB mobile phone charger plug may be suitable.
  • String or rope (€ 5): For mounting the antenna. A lighter and cheaper string is better for portable use, while a stronger and weather-resistant rope is better for a fixed installation.

Building the antenna

There are numerous online calculators for working out the length of enamelled copper wire to cut.

For example, for a centre frequency of 14.2 MHz on the 20 meter amateur band, the antenna length is 21.336 meters.

Add an extra 24 cm (extra 12 cm on each end) for folding the wire through the hooks on the balun.
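
For a rough sanity check of those calculator results, here is a small illustrative sketch (not from the original post): it approximates the full-wave loop as one wavelength of wire plus the 24 cm folding allowance, so it will differ slightly from dedicated antenna calculators, which apply correction factors.

# Rough estimate of the wire to cut for a full-wave loop antenna.
# Illustrative only: real antenna calculators apply correction factors,
# so their results (like the 21.336 meters quoted above) will differ a little.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def loop_wire_length_m(freq_mhz, end_allowance_m=0.12):
    """One wavelength of wire plus the folding allowance at each end."""
    wavelength_m = SPEED_OF_LIGHT / (freq_mhz * 1e6)
    return wavelength_m + 2 * end_allowance_m

print(round(loop_wire_length_m(14.2), 2))  # about 21.35 meters for the 20 meter band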

After cutting the wire, feed it through the egg insulators before attaching the wire to the balun.

Measure the extra 12 cm at each end of the wire and wrap some tape around there to make it easy to identify in future. Fold it, insert it into the hook on the balun and twist it around itself, using between four and six twists.

Strip off approximately 0.5cm of the enamel on each end of the wire with a knife, sandpaper or some other tool.

Insert the exposed ends of the wire into the screw terminals and screw it firmly into place. Avoid turning the screw too tightly or it may break or snap the wire.

Insert string through the egg insulators and/or the middle hook on the balun and use the string to attach it to suitable support structures such as a building, posts or trees. Try to keep it at least two meters from any structure. Maximizing the surface area of the loop improves the performance: a circle is an ideal shape, but a square or 4:3 rectangle will work well too.

For optimal performance, if you imagine the loop is on a two-dimensional plane, the first couple of meters of feedline leaving the antenna should be on the plane too and at a right angle to the edge of the antenna.

Join all the other components together using the coaxial cables.

Configuring gqrx for the up-converter and shortwave signals

Inspect the up-converter carefully. Look for the crystal and find the frequency written on the side of it. The frequency written on the specification sheet or web site may be wrong, so looking at the crystal itself is the best way to be certain. On my Ham It Up, I found a crystal with 125.000 written on it; this is 125 MHz.

Launch gqrx, go to the File menu and select I/O devices. Change the LNB LO value to match the crystal frequency on the up-converter, with a minus sign. For my Ham It Up, I use the LNB LO value -125.000000 MHz.

Click OK to close the I/O devices window.
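
To see why the LNB LO value is negative, it helps to walk through the arithmetic. The sketch below is illustrative only, assuming the 125 MHz crystal mentioned above; it shows how a 14.2 MHz signal reaches the dongle and still shows up as 14.2 MHz in gqrx.

# Illustrative arithmetic for the Ham It Up + RTL-SDR + gqrx chain.
crystal_mhz = 125.0   # local oscillator frequency read off the crystal
signal_mhz = 14.2     # the shortwave signal we actually want to hear

# The up-converter mixes the HF signal upwards into the dongle's range.
at_dongle_mhz = signal_mhz + crystal_mhz    # 139.2 MHz arrives at the RTL-SDR

# gqrx applies the (negative) LNB LO offset so the display reads the
# original frequency again.
lnb_lo_mhz = -crystal_mhz                   # the -125.000000 MHz entered above
displayed_mhz = at_dongle_mhz + lnb_lo_mhz  # back to 14.2 MHz on screen
print(displayed_mhz)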

On the Input Controls tab, make sure Hardware AGC is enabled.

On the Receiver options tab, change the Mode value. Commercial shortwave broadcasts use AM and amateur transmission use single sideband: by convention, LSB is used for signals below 10MHz and USB is used for signals above 10MHz. To start exploring the 20 meter amateur band around 14.2 MHz, for example, use USB.

In the top of the window, enter the frequency, for example, 14.200 000 MHz.

Now choose the FFT Settings tab and adjust the Freq zoom slider. Zoom until the width of the display is about 100 kHz, for example, from 14.15 on the left to 14.25 on the right.

Click the Play icon at the top left to start receiving. You may hear white noise. If you hear nothing, check the computer's volume controls, move the Gain slider (bottom right) to the maximum position and then lower the Squelch value on the Receiver options tab until you hear the white noise or a transmission.

Adjust the Antenna Tuner knobs

Now that gqrx is running, it is time to adjust the knobs on the antenna tuner (ATU). Reception improves dramatically when it is tuned correctly. Exact instructions depend on the type of ATU you have purchased; here I present instructions for the MFJ-971 that I have been using.

Turn the TRANSMITTER and ANTENNA knobs to the 12 o'clock position and leave them like that. Turn the INDUCTANCE knob while looking at the signals in the gqrx window. When you find the best position, the signal strength displayed on the screen will appear to increase (the animated white line should appear to move upwards and maybe some peaks will appear in the line).

When you feel you have found the best position for the INDUCTANCE knob, leave it in that position and begin turning the ANTENNA knob clockwise looking for any increase in signal strength on the chart. When you feel that is correct, begin turning the TRANSMITTER knob.

Listening to a transmission

At this point, if you are lucky, some transmissions may be visible on the gqrx screen. They will appear as darker colours in the waterfall chart. Try clicking on one of them; the vertical red line will jump to that position. For a USB transmission, try to place the vertical red line at the left-hand side of the signal. Try dragging the vertical red line or changing the frequency value at the top of the screen by 100 Hz at a time until the station is tuned as well as possible.

Try and listen to the transmission and identify the station. Commercial shortwave broadcasts will usually identify themselves from time to time. Amateur transmissions will usually include a callsign spoken in the phonetic alphabet. For example, if you hear "CQ, this is Victor Kilo 3 Tango Quebec Romeo" then the station is VK3TQR. You may want to note down the callsign, time, frequency and mode in your log book. You may also find information about the callsign in a search engine.

The video demonstrates reception of a transmission from another country. Can you identify the station's callsign and find the operator's location?

If you have questions about this topic, please come and ask on the Debian Hams mailing list. The gqrx package is also available in Fedora and Ubuntu but it is known to crash on startup in Ubuntu 17.04. Users of other distributions may also want to try the Debian Ham Blend bootable ISO live image as a quick and easy way to get started.

Mozilla Addons BlogAdd-on Compatibility for Firefox 55

Firefox 55 will be released on August 8th. Here’s the list of changes that went into this version that can affect add-on compatibility. There is more information available in Firefox 55 for Developers, so you should also give it a look. Also, if you haven’t yet, please read our roadmap to Firefox 57.

General

Recently, we turned on a restriction on Nightly that only allows multiprocess add-ons to be enabled. You can use a preference to toggle it. Also, Firefox 55 is the first version to move directly from Nightly to Beta after the removal of the Aurora channel.

XPCOM and Modules

Let me know in the comments if there’s anything missing or incorrect on these lists. We’d like to know if your add-on breaks on Firefox 55.

The automatic compatibility validation and upgrade for add-ons on AMO will happen in a few weeks, so keep an eye on your email if you have an add-on listed on our site with its compatibility set to Firefox 54.

The post Add-on Compatibility for Firefox 55 appeared first on Mozilla Add-ons Blog.

QMOFirefox 54 Beta 7 Testday Results

Hello Mozillians!

As you may already know, last Friday – May 12th – we held a Testday event for Firefox 54 Beta 7.

Thank you all for helping us make Mozilla a better place – Gabi Cheta, Ilse Macías, Juliano Naves, Athira Appu, Avinash Sharma, Iryna Thompson.

From India team: Surentharan R.A, Fahima Zulfath, Baranitharan.M,  Sriram, vignesh kumar.

From Bangladesh team: Nazir Ahmed Sabbir, Rezaul Huque Nayeem, Md.Majedul islam, Sajedul Islam, Saddam Hossain, Maruf Rahman, Md.Tarikul Islam Oashi, Md. Ehsanul Hassan, Meraj Kazi, Sayed Ibn Masud, Tanvir Rahman, Farhadur Raja Fahim, Kazi Nuzhat Tasnem, Md. Rahimul Islam, Md. Almas Hossain, Saheda Reza Antora, Fahmida Noor, Muktasib Un Nur, Mohammad Maruf Islam,  Rezwana Islam Ria, Tazin Ahmed, Towkir Ahmed, Azmina Akter Papeya

Results:

– several test cases executed for the Net Monitor MVP and Firefox Screenshots features;

– 2 new bugs filed: 1364771, 1364773.

– 15 bugs verified: 1167178, 1364090, 1363288, 1341258, 1342002, 1335869, 776254, 1363737, 1363840, 1327691, 1315550, 1358479, 1338036, 1348264 and 1361247.

Again thanks for another successful testday! 🙂

We hope to see you all in our next events, all the details will be posted on QMO!

Hacks.Mozilla.OrgHaving fun with physics and A-Frame

A-Frame is a WebVR framework to build virtual reality experiences. It comes with some bundled components that allow you to easily add behavior to your VR scenes, but you can download more – or even create your own.

In this post I’m going to share how I built a VR scene that integrates a physics engine via a third-party component. While A-Frame allows you to add objects and behaviors to a scene, if you want those objects to interact with each other or be manipulated by the user, you might want to use a physics engine to handle the calculations you’ll need. If you are new to A-Frame, I recommend you check out the Getting started guide and play with it a bit first.

The scene I created is a bowling alley that works with the HTC Vive headset. You have a ball in your right hand which you can throw by holding the right-hand controller trigger button and releasing it as you move your arm. To return the ball back to your hand and try again, press the menu button. You can try the demo here! (Note: You will need Firefox Nightly and an HTC Vive. Follow the setup instructions in WebVR.rocks)

The source code is at your disposal on Github to tweak and have fun with.

Adding a physics engine to A-Frame

I’ve opted for aframe-physics-system, which uses Cannon.js under the hood. Cannon is a pure JavaScript physics engine (not a version compiled to asm.js from C/C++), so we can easily interface with it – and peek at its code.

aframe-physics-system is middleware that initialises the physics engine and exposes A-Frame components for us to apply to entities. When we use its static-body or dynamic-body components, aframe-physics-system creates a Cannon.Body instance and “attaches” it to our A-Frame entities, so on every frame it adjusts the entity’s position, rotation, etc. to match the body’s.

If you wish to use a different engine, take a look at aframe-physics-system or aframe-physics-components. These components are not very complex and it should not be complicated to mimic their behavior with another engine.

Static and dynamic bodies

Static bodies are those which are immovable. Think of the ground, or walls that can’t be torn down, etc. In the scene, the immovable entities are the ground and the bumpers on each side of the bowling lane.

Dynamic bodies are those which move, bounce, topple etc. Obviously the ball and the bowling pins are our dynamic bodies. Note that since these bodies move and can fall, or collide and knock down other bodies, the mass property will have a big influence. Here’s an example for a bowling pin:

<a-cylinder dynamic-body="mass: 1" ...>

The avatar and the physics world

To display the “hands” of the user (i.e., to show the tracked VR controllers as hands) I used the vive-controls component, already bundled in A-Frame.

<a-entity vive-controls="hand: right" throwing-hand></a-entity>
<a-entity vive-controls="hand: left"></a-entity>

The challenge here is that the user’s avatar (“head” and “hands”) is not part of the physical world – i.e., it’s out of the physics engine’s scope, since the head and the hands must follow the user’s movement, without being affected by physical rules, such as gravity or friction.

In order for the user to be able to “hold” the ball, we need to fetch the position of the right controller and manually set the ball’s position to match this every frame. We also need to reset other physical properties, such as velocity.

This is done in the custom throwing-hand component (which I added to the entity representing the right hand), in its tick callback:

ball.body.velocity.set(0, 0, 0);
ball.body.angularVelocity.set(0, 0, 0);
ball.body.quaternion.set(0, 0, 0, 1);
ball.body.position.set(position.x, position.y, position.z);

Note: a better option would have been to also match the ball’s rotation with the controller.

Throwing the ball

The throwing mechanism works like this: the user has to press the controller’s trigger and when she releases it, the ball is thrown.

There’s a method in Cannon.Body which applies an impulse to a dynamic body: applyLocalImpulse. But how much impulse should we apply to the ball and in which direction?

We can get the right direction by calculating the velocity of the throwing hand. However, since the avatar isn’t handled by the physics engine, we need to calculate the velocity manually:

let velocity = currentPosition.vsub(lastPosition).scale(1/delta);

Also, since the mass of the ball is quite high (to give it more “punch” against the pins), I had to add a multiplier to that velocity vector when applying the impulse:

ball.body.applyLocalImpulse(
  velocity.scale(50),
  new CANNON.Vec3(0, 0, 0)
);

Note: If I had allowed the ball to rotate to match the controller’s rotation, I would have needed to apply that rotation to the velocity vector as well, since applyLocalImpulse works with the ball’s local coordinates system.

To detect when the controller’s trigger has been released, the only thing needed is a listener for the triggerup event in the entity representing the right hand. Since I added my custom throwing-hand component there, I set up the listener in its init callback:

this.el.addEventListener('triggerup', function (e) {
  // ... throw the ball
});

A glitch

At the beginning, I was simulating the throw by pressing the space bar key. The code looked like this:

document.addEventListener('keyup', function (e) {
  if (e.keyCode === 32) { // spacebar
    e.preventDefault();
    throwBall();
  }
});

However, this was outside of the A-Frame loop, and the computation of the throwing hand’s lastPosition and currentPosition was out of sync, and thus I was getting odd results when calculating the velocity.

This is why I set a flag instead of calling launch directly, and then, inside of the throwing-hand’s tick callback, throwBall is called if that flag is set to true.
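
A minimal sketch of that pattern looks roughly like this (names and details are illustrative, not necessarily what the project’s code uses):

// Defer the actual throw to the tick callback so the velocity is computed
// from lastPosition/currentPosition values updated in the same frame.
AFRAME.registerComponent('throwing-hand', {
  init: function () {
    this.shouldThrow = false;
    document.addEventListener('keyup', (e) => {
      if (e.keyCode === 32) {      // spacebar stands in for the trigger
        e.preventDefault();
        this.shouldThrow = true;   // only set a flag here...
      }
    });
  },
  tick: function (time, delta) {
    // ...update lastPosition / currentPosition here, then:
    if (this.shouldThrow) {
      this.shouldThrow = false;
      throwBall();                 // ...and throw while everything is in sync
    }
  }
});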

Another glitch: shaking pins

Using the aframe-physics-system’s default settings I noticed a glitch when I scaled down the bowling pins: they were shaking and eventually falling to the ground!

This can happen when using a physics engine if the computations are not precise enough: a small error carries over frame by frame, accumulates and… you have things crumbling or tumbling down, especially if these things are small – smaller objects need less accumulated error before the effect becomes noticeable.

One workaround for this is to increase the accuracy of the physics simulation — at the expense of performance. You can control this with the iterations setting in the aframe-physics-system component configuration (by default it is set to 10). I increased it to 20:

<a-scene physics="iterations: 20">

To better see the effects of this change, here is a comparison side by side with iterations set to 5 and 20:

(Video: https://drive.google.com/a/mozilla.com/file/d/0B45CULzwzeLdNGdDTk9QOFFqQUE/view?usp=sharing)

The “sleep” feature of Cannon provides another possible workaround to handle this specific situation without affecting performance. When an object is in sleep mode, physics won’t make it move until it wakes upon collision with another object.

Your turn: play with this!

I have uploaded the project to Glitch as well as to a Github repository in case you want to play with it and make your own modifications. Some things you can try:

  • Allow the player to use both hands (maybe with a button to switch the ball from one hand to the other?)
  • Automatically reset the bowling pins to their original position once they have all fallen. You can check the rotation of their bodies to implement this.
  • Add sound effects! There is a callback for collision events you can use to detect when the ball has collided with another element… You can add a sound effect for when the ball clashes against the pins, or when it hits the ground.

If you have questions about A-Frame or want to get more involved in building WebVR with A-Frame, check out our active community on Slack. We’d love to see what you’re working on.

Christian HeilmannYou don’t owe the world perfection! – keynote at Beyond Tellerand

Yesterday morning I was lucky enough to give the opening keynote at the excellent Beyond Tellerand conference in Dusseldorf, Germany. I wrote a talk for the occasion that covered a strange disconnect that we’re experiencing at the moment.
Whilst web technology has advanced in leaps and bounds, we still seem to be discontent all the time. I called this the Tetris mind set: all our mistakes are perceived as piling up whilst our accomplishments vanish.

Eva-Lotta Lamm created some excellent sketchnotes on my talk.
Sketchnotes of the talk

The video of the talk is already available on Vimeo:

Breaking out of the Tetris mind set from beyond tellerrand on Vimeo.

You can get the slides on SlideShare:

I will follow this up with a more in-depth article on the subject in due course, but for today I am very happy with how well received the keynote was, and I want to remind people that it is OK to build things that don’t last and that you don’t owe the world perfection. Creativity is a messy process and we should feel at ease about learning from mistakes.

Nick DesaulniersSubmitting Your First Patch to the Linux kernel and Responding to Feedback

After working on the Linux kernel for Nexus and Pixel phones for nearly a year, and messing around with the excellent Eudyptula challenge, I finally wanted to take a crack at submitting patches upstream to the Linux kernel.

This post is woefully inadequate compared to the existing documentation, which should be preferred.

I figure I’d document my workflow, now that I’ve gotten a few patches accepted (and so I can refer to this post rather than my shell history…). Feedback welcome (open an issue or email me).

Step 1: Setting up an email client

I mostly use git send-email for sending patch files. In my ~/.gitconfig I have added:

[sendemail]
  ; setup for using git send-email; prompts for password
  smtpuser = [email protected]
  smtpserver = smtp.googlemail.com
  smtpencryption = tls
  smtpserverport = 587

To send patches through my gmail account. I don’t add my password so that I don’t have to worry about it when I publish my dotfiles. I simply get prompted every time I want to send an email.

I use mutt to respond to threads when I don’t have a patch to send.

Step 2: Make fixes

How do you find a bug to fix? My general approach to finding bugs in open source C/C++ code bases has been using static analysis, a different compiler, and/or more compiler warnings turned on. The kernel also has an instance of bugzilla running as an issue tracker. Work out of a new branch, in case you choose to abandon it later. Rebase your branch before submitting (pull early, pull often).

Step 3: Thoughtful commit messages

I always run git log <file I modified> to see some of the previous commit messages on the file I modified.

$ git log arch/x86/Makefile

commit a5859c6d7b6114fc0e52be40f7b0f5451c4aba93
...
    x86/build: convert function graph '-Os' error to warning
commit 3f135e57a4f76d24ae8d8a490314331f0ced40c5
...
    x86/build: Mostly disable '-maccumulate-outgoing-args'

The first words of commit messages in Linux are usually <subsystem>/<sub-subsystem>: <descriptive comment>.

Let’s commit: git commit <files> -s. We use the -s flag to git commit to add our signoff. Signing off your patches is standard practice and notes your agreement to the Developer’s Certificate of Origin.

Step 4: Generate Patch file

git format-patch HEAD~. You can use git format-patch HEAD~<number of commits to convert to patches> to turn multiple commits into patch files. These patch files will be emailed to the Linux Kernel Mailing List (lkml). They can be applied with git am <patchfile>. I like to back these files up in another directory for future reference, and cause I still make a lot of mistakes with git.

Step 5: checkpatch

You’re going to want to run the kernel’s linter before submitting. It will catch style issues and other potential issues.

$ ./scripts/checkpatch.pl 0001-x86-build-don-t-add-maccumulate-outgoing-args-w-o-co.patch
total: 0 errors, 0 warnings, 9 lines checked

0001-x86-build-don-t-add-maccumulate-outgoing-args-w-o-co.patch has no obvious style problems and is ready for submission.

If you hit issues here, fix up your changes, update your commit with git commit --amend <files updated>, rerun format-patch, then rerun checkpatch until you’re good to go.

Step 6: email the patch to yourself

This is good to do when you’re starting off. While I use mutt for responding to email, I use git send-email for sending patches. Once you’ve gotten a hang of the workflow, this step is optional, more of a sanity check.

$ git send-email \
0001-x86-build-require-only-gcc-use-maccumulate-outgoing-.patch

You don’t need to use command line arguments to cc yourself, assuming you set up git correctly, git send-email should add you to the cc line as the author of the patch. Send the patch just to yourself and make sure everything looks ok.

Step 7: fire off the patch

Linux is huge, and has a trusted set of maintainers for various subsystems. The MAINTAINERS file keeps track of these, but Linux has a tool to help you figure out where to send your patch:

$ ./scripts/get_maintainer.pl 0001-x86-build-don-t-add-maccumulate-outgoing-args-w-o-co.patch
Person A <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Person B <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
Person C <[email protected]> (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
[email protected] (maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT))
[email protected] (open list:X86 ARCHITECTURE (32-BIT AND 64-BIT))

With some additional flags, we can feed this output directly into git send-email.

$ git send-email \
--cc-cmd='./scripts/get_maintainer.pl --norolestats 0001-my.patch' \
--cc [email protected] \
0001-my.patch

Make sure to cc yourself when prompted. Otherwise if you don’t subscribe to LKML, then it will be difficult to reply to feedback. It’s also a good idea to cc any other author that has touched this functionality recently.

Step 8: monitor feedback

Patchwork for the LKML is a great tool for tracking the progress of patches. You should register an account there. I highly recommend bookmarking your submitter link. In Patchwork, click any submitter, then Filters (hidden in the top left), change submitter to your name, click apply, then bookmark it. Here’s what mine looks like. Not much today, and mostly trivial patches, but hopefully this post won’t age well in that regard.

Feedback may or may not be swift. I think my first patch I had to ping a couple of times, but eventually got a response.

Step 9: responding to feedback

Update your file, git commit <changed files> --amend to update your latest commit, git format-patch -v2 HEAD~, edit the patch file to put the changes below the dash below the signed off lines (example), rerun checkpatch, rerun get_maintainer if the files you modified changed since V1. Next, you need to find the messageID to respond to the thread properly.
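
Before hunting down the messageID, the patch-revision part of that checklist looks roughly like this (file and patch names here are placeholders, not from the original workflow):

$ git commit --amend path/to/file.c             # fold the fix into the existing commit
$ git format-patch -v2 HEAD~                    # produces v2-0001-my.patch
$ ./scripts/checkpatch.pl v2-0001-my.patch      # re-lint the new revision
$ ./scripts/get_maintainer.pl v2-0001-my.patch  # re-check recipients if the touched files changed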

In gmail, when viewing the message I want to respond to, you can click “Show Original” from the dropdown near the reply button. From there, copy the MessageID from the top (everything in the angle brackets, but not the brackets themselves). Finally, we send the patch:

$ git send-email \
--cc-cmd='./scripts/get_maintainer.pl --norolestats 0001-my.patch' \
--cc [email protected] \
--in-reply-to 2017datesandletters@somehostname \
0001-my.patch

We make sure to add anyone who may have commented on the patch from the mailing list to keep them in the loop. Rinse and repeat 2 through 9 as desired until patch is signed off/acked or rejected.

Finding out when your patch gets merged is a little tricky; each subsystem maintainer seems to do things differently. My first patch, I didn’t know it went in until a bot at Google notified me. The maintainers for the second and third patches had bots notify me that it got merged into their trees, but when they send Linus a PR and when that gets merged isn’t immediately obvious.

It’s not like Github, where everyone involved gets an email that a PR got merged and the UI changes. While there are pros and cons to this fairly decentralized process, and while it is kind of git’s original designed-for use case, I’d be remiss not to mention that I really miss Github. Getting your first patch acknowledged and even merged is intoxicating and makes you want to contribute more; radio silence has the opposite effect.

Happy hacking!

(Thanks to Reddit user /u/EliteTK for pointing out that -v2 was more concise than --subject-prefix="Patch vX").

J.C. JonesAnalyzing Let's Encrypt statistics via Map/Reduce

I've been supplying the statistics for Let's Encrypt since they've launched. In Q4 of 2016 their volume of certificates exceeded the ability of my database server to cope, and I moved it to an Amazon RDS instance.

Ow.

Amazon's RDS service is really excellent, but paying out of pocket hurts.

I've been slowly re-building my existing Golang/MySQL tools into a Golang/Python toolchain over the past few months. This switches from a SQL database with flexible, queryable columns to an EBS volume of folders containing certificates.

The general structure is now:

/ct/state/Y3QuZ29vZ2xlYXBpcy5jb20vaWNhcnVz
/ct/2017-08-13/qEpqYwR93brm0Tm3pkVl7_Oo7KE=.pem
/ct/2017-08-13/oracle.out
/ct/2017-08-13/oracle.out.offsets
/ct/2017-08-14/qEpqYwR93brm0Tm3pkVl7_Oo7KE=.pem

Fetching CT

Underneath /ct/state exists the state of the log-fetching utility, which is now a Golang tool named ct-fetch. It's mostly the same as the ct-sql tool I have been using, but rather than interact with SQL, it simply writes to disk.

This program creates folders for each notAfter date seen in a certificate. It appends each certificate to a file named for its issuer, so the path structure for cert data looks like:

/BASE_PATH/NOT_AFTER_DATE/ISSUER_BASE64.pem

The Map step

A Python 3 script ct-mapreduce-map.py processes each date-named directory. Inside, it reads each .pem and .cer file, decoding all the certificates within and tallying them up. When it's done in the directory, it writes out a file named oracle.out containing the tallies, and also a file named oracle.out.offsets with information so it can pick back up later without starting over.

map script running

The tallies contain, for each issuer:

  1. The number of certificates issued each day
  2. The set of all FQDNs issued
  3. The set of all Registered Domains (eTLD + 1 label) issued

The resulting oracle.out is large, but much smaller than the input data.
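
To make the shape of those tallies concrete, here is a stripped-down sketch (illustrative only; this is not the actual ct-mapreduce-map.py, and certificate parsing is assumed to have happened already):

# Sketch of the per-issuer tally that the map step accumulates for one directory.
from collections import defaultdict

def new_tally():
    return {
        "per_day": defaultdict(int),  # certificates issued per day
        "fqdns": set(),               # every FQDN seen for this issuer
        "reg_domains": set(),         # registered domains (eTLD + 1 label)
    }

def map_certs(certs):
    """certs: an iterable of (issuer, issue_date, fqdn, registered_domain) tuples."""
    tallies = defaultdict(new_tally)
    for issuer, day, fqdn, reg_domain in certs:
        tally = tallies[issuer]
        tally["per_day"][day] += 1
        tally["fqdns"].add(fqdn)
        tally["reg_domains"].add(reg_domain)
    return tallies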

The Reduce step

Another Python 3 script ct-mapreduce-reduce.py finds all oracle.out files in directories whose names aren't in the past. It reads each of these in and merges all the tallies together.

The result is the aggregate of all of the per-issuer data from the Map step.
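
Continuing the sketch above, the reduce step is essentially a merge of those per-directory tallies (again illustrative, not the actual ct-mapreduce-reduce.py):

# Merge the map outputs from several directories into one aggregate per issuer.
def reduce_tallies(per_directory_tallies):
    merged = defaultdict(new_tally)
    for tallies in per_directory_tallies:
        for issuer, tally in tallies.items():
            m = merged[issuer]
            for day, count in tally["per_day"].items():
                m["per_day"][day] += count
            m["fqdns"] |= tally["fqdns"]
            m["reg_domains"] |= tally["reg_domains"]
    return merged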

This will get converted then into the data sets currently available at https://ct.tacticalsecret.com/

State

I'm not yet to the step of synthesizing the data sets; each time I compare this data with my existing data sets there are discrepancies - but I believe I've improved my error reporting enough that after another re-processing, the data will match.

Performance

Right now I'm trying to do all this with a free-tier AWS EC2 instance and 100 GB of EBS storage; it's not currently clear to me whether that will be able to keep up with Let's Encrypt's issuance volume. Nevertheless, even an EBS-optimized instance is much less expensive than an RDS instance, even if the approach is less flexible.

performance monitor

This Week In RustThis Week in Rust 182

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

Crate of the Week

This week's crate of the week is PX8, a Rust implementation of an Open Source fantasy console. Thanks to hallucino for the suggestion.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

125 pull requests were merged in the last week.

New Contributors

  • Bastien Orivel
  • Dennis Schridde
  • Eduardo Pinho
  • faso
  • Liran Ringel

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Style RFCs

Style RFCs are part of the process for deciding on style guidelines for the Rust community and defaults for Rustfmt. The process is similar to the RFC process, but we try to reach rough consensus on issues (including a final comment period) before progressing to PRs. Just like the RFC process, all users are welcome to comment and submit RFCs. If you want to help decide what Rust code should look like, come get involved!

We're making good progress and the style is coming together. If you want to see the style in practice, check out our example or use the Integer32 Playground and select 'Proposed RFC' from the 'Format' menu. Be aware that implementation is work in progress.

Issues in final comment period:

Good first issues:

We're happy to mentor these, please reach out to us in #rust-style if you'd like to get involved

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

Rust Jobs

Tweet us at @ThisWeekInRust to get your job offers listed here!

Quote of the Week

Spent the last week learning rust. The old martial arts adage applies. Cry in the dojo, laugh in the battlefield.

/u/crusoe on Reddit.

Thanks to Ayose Cazorla for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

J.C. JonesOCSP Telemetry in Firefox

Firefox Telemetry is such an amazing tool for looking at the state of the web. Keeler and I have recently been investigating the state of OCSP, which is a method for checking whether a TLS certificate has been revoked. Firefox is among the last of the web browsers to use OCSP on each secure website by default.

The telemetry keys that are relevant to our OCSP support are:

  • SSL_TIME_UNTIL_HANDSHAKE_FINISHED
  • CERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME
  • CERT_VALIDATION_HTTP_REQUEST_CANCELED_TIME
  • CERT_VALIDATION_HTTP_REQUEST_FAILED_TIME

The first (SSL_TIME_UNTIL_HANDSHAKE_FINISHED) is technically not an OCSP response time telemetry – it includes the server you're connecting to and may/may not have an OCSP fetch – but I include it as it’s highly correlated with successful responses (r=0.972) [1][2], and clearly related in the code. Before you get too excited though, the correlation likely only indicates that a user's network performance is generally similar between the web servers they're talking to and the OCSP servers being queried. Still, cool.

Otherwise the keys are what they say, and times are in milliseconds.

Speeding Up TLS Handshakes

We've been toying with ideas to speed up SSL_TIME_UNTIL_HANDSHAKE_FINISHED; the first of these was to reduce the OCSP timeout for Domain-Validated certificates. Looking at the cumulative distribution function (CDF) of CERT_VALIDATION_HTTP_REQUEST_SUCCEEDED_TIME, only 11% of OCSP fetches are still in flight after 1 second: https://mzl.la/2ogNaSN
OCSP Success time (ms) CDF

Correspondingly, we decreased our timeout on DV OCSP from 2 seconds to 1 second.

This led to a very modest improvement in SSL_TIME_UNTIL_HANDSHAKE_FINISHED (~2%) in Firefox Nightly; this is probably because after one successful OCSP query, the result is cached, so only the first connection is slower, while the telemetry metric includes all connections - whether or not they had an OCSP query.

We've got some other tricks still to try.

Speed Trends

Among CAs on mailing lists there has been a lot of talk about their efforts to improve OCSP response times. Looking at the aggregate information (as we can't break this down by CA), we’ve not really seen any improvement in response time over the last 18 months [3].

OCSP success time (ms) over time

Next steps

We're going to try some more ideas to speed up the initial TLS connection for both DV and EV certificates.

Footnotes

  1. Correlation between TLS handshake and OCSP Success https://i.have.insufficient.coffee/handshakevsocsp_success.png
  2. Correlation between TLS handshake and OCSP Failure https://i.have.insufficient.coffee/handshakevsocsp_fail.png
  3. OCSP Successes (median, 95th percentile) over time by date. Note: this took a while to load: https://mzl.la/2ogT8TJ

Mozilla Marketing Engineering & Ops BlogMozMEAO SRE Status Report - 5/16/2017

Here’s what happened on the MozMEAO SRE team from May 9th - May 16th.

Current work

Bedrock (mozilla.org)

Work continues on moving Bedrock to our Kubernetes infrastructure.

Postgres/RDS provisioning

A Postgres RDS instance has already been provisioned in us-east-1 for our Virginia cluster, and another was created in ap-northeast-1 to support the Tokyo cluster. Additionally, development, staging, and production databases were created in each region. This process was documented here.

Elastic Load Balancer (ELB) provisioning

We’ve automated the creation of ELBs for Bedrock in Virginia and Tokyo. There are still a few more wrinkles to sort out, but the infra is mostly in place to begin The Big Move to Kubernetes.

MDN

Work continues to analyze the Apache httpd configuration from the current SCL3 datacenter config.

Downtime incident 2017-05-13

On May 13th, 2017, from 22:49 to 22:55, New Relic reported that MDN was unavailable. The site was slow to respond to page views, and was running long database queries. Log analysis shows a security scan of our database-intensive endpoints.

On May 14th, 2017, there were high I/O alerts on 3 of the 6 production web servers. This was not reflected in high traffic or a decrease in responsiveness.

Basket

The FxA team would like to send events (FXA_IDs) to Basket and Salesforce, and needed SQS queues in order to move forward. We automated the provisioning of dev/stage/prod SQS queues, and passed off credentials to the appropriate engineers.

The FxA team requested cross AWS account access to the new SQS queues. Access has been automated and granted via this PR.

Snippets

Snippets Stats Collection Issues 2017-04-10

A planned configuration change to add a Route 53 Traffic Policy for the snippets stats collection service caused a day’s worth of data to not be collected, due to an SSL certificate error.

Careers

Autoscaling

In order to take advantage of Kubernetes cluster and pod autoscaling (which we’ve documented here), app memory and CPU limits were set for careers.mozilla.org in our Virginia and Tokyo clusters. This allows the careers site to scale up and down based on load.

Acceptance tests

Giorgos Logiotatidis added acceptance tests, consisting of a simple bash script and additional Jenkinsfile stages that check if careers.mozilla.org pages return valid responses after deployment.

Downtime incident 2017-04-11

A typo was merged and pushed to production and caused a couple of minutes of downtime before we rolled-back to the previous version.

Decommission openwebdevice.org status

openwebdevice.org will remain operational in http-only mode until the board approves decommissioning. A timeline is unavailable.

Future work

Nucleus

We’re planning to move nucleus to Kubernetes, and then proceed to decommissioning current nucleus infra.

Basket

We’re planning to move basket to Kubernetes shortly after the nucleus migration, and then proceed to decommissioning existing infra.

Links

Mozilla Addons BlogAdd-ons Update – 2017/05

Here’s the state of the add-ons world this month.

The Road to Firefox 57 explains what developers should look forward to in regards to add-on compatibility for the rest of the year. So please give it a read if you haven’t already.

The Review Queues

In the past month, our team reviewed 1,132 listed add-on submissions:

  • 944 in fewer than 5 days (83%).
  • 21 between 5 and 10 days (2%).
  • 167 after more than 10 days (15%).

969 listed add-ons are awaiting review.

For two weeks we’ve been automatically approving add-ons that meet certain criteria. It’s a small initial effort (~60 auto-approvals) which will be expanded in the future. We’re also starting an initiative this week to clear most of the review queues by the end of the quarter. The change should be noticeable in the next couple of weeks.

However, this doesn’t mean we won’t need volunteer reviewers in the future. If you’re an add-on developer and are looking for contribution opportunities, please consider joining us. Visit our wiki page for more information.

Compatibility

We published the blog post for 54 and ran the bulk validation script. Additionally, we’ll publish the add-on compatibility post for Firefox 55 later this week.

Make sure you’ve tested your add-ons and either use WebExtensions or set the multiprocess compatible flag in your manifest to ensure they continue working in Firefox. And as always, we recommend that you test your add-ons on Beta.

You may also want  to review the post about upcoming changes to the Developer Edition channel. Firefox 55 is the first version that will move directly from Nightly to Beta.

If you’re an add-ons user, you can install the Add-on Compatibility Reporter to identify and report any add-ons that aren’t working anymore.

Recognition

We would like to thank the following people for their recent contributions to the add-ons world:

  • psionikangel
  • lavish205
  • Tushar Arora
  • raajitr
  • ccarruitero
  • Christophe Villeneuve
  • Aayush Sanghavi
  • Martin Giger
  • Joseph Frazier
  • erosman
  • zombie
  • Markus Stange
  • Raajit Raj
  • Swapnesh Kumar Sahoo

You can read more about their work in our recognition page.

The post Add-ons Update – 2017/05 appeared first on Mozilla Add-ons Blog.

Eric RahmFirefox memory usage with multiple content processes

This is a continuation of my Are They Slim Yet series, for background see my previous installment.

With Firefox’s next release, 54, we plan to enable multiple content processes — internally referred to as the e10s-multi project — by default. That means if you have e10s enabled we’ll use up to four processes to manage web content instead of just one.

My previous measurements found that four content processes are a sweet spot for both memory usage and performance. As a follow up we wanted to run the tests again to confirm my conclusions and make sure that we’re testing on what we plan to release. Additionally I was able to work around our issues testing Microsoft Edge and have included both 32-bit and 64-bit versions of Firefox on Windows; 32-bit is currently our default, 64-bit is a few releases out.

The methodology for the test is the same as previous runs, I used the atsy project to load 30 pages and measure memory usage of the various processes that each browser spawns during that time.

Without further ado, the results:

Graph of browser memory usage, Chrome uses a lot.

So we continue to see Chrome leading the pack in memory usage across the board: 2.4X the memory of Firefox 32-bit and 1.7X the memory of Firefox 64-bit on Windows. IE 11 does well; in fact, it was the only one to beat Firefox. Its successor Edge, the default browser on Windows 10, appears to be striving for Chrome-level consumption. On macOS 10.12 we see Safari going the Chrome route as well.

Browsers included are the default versions of IE 11 and Edge 38 on Windows 10, Chrome Beta 59 on all platforms, Firefox Beta 54 on all platforms, and Safari Technology Preview 29 on macOS 10.12.4.

Note: For Safari I had to run the test manually, they seem to have made some changes that cause all the pages from my test to be loaded in the same content process.

The Mozilla BlogWannaCry is a Cry for VEP Reform

This weekend, a vulnerability in some versions of the Windows operating system resulted in the biggest cybersecurity attack in years. The so-called “WannaCry” malware relied on at least one exploit included in the latest Shadow Brokers release. As we have repeated, attacks like this are a clarion call for reform to the government’s Vulnerabilities Equities Process (VEP).

The exploits may have been shared with Microsoft by the NSA. We hope that happened, as it would be the right way to handle a vulnerability like this. Sharing vulnerabilities with tech companies enables us to protect our users, including the ones within the government. If the government has exploits that have been compromised, they must disclose them to software companies before they can be used widely, putting users at risk. The lack of transparency around the government’s decision-making processes here means that we should improve and codify the Vulnerabilities Equities Process in law.

The WannaCry attack also shows the importance of security updates in protecting users. Microsoft patched the relevant vulnerabilities in a March update, but users who had not updated remain vulnerable. Mozilla has shared some resources to help users update their software, but much more needs to be done in this area.

The internet is a shared resource and securing it is our shared responsibility. This means technology companies, governments, and even users have to work together to protect and improve the security of the internet.

The post WannaCry is a Cry for VEP Reform appeared first on The Mozilla Blog.

Daniel GlazmanW3C Advisory Board elections

The 2017 W3C Advisory Board (AB) election just started, and I am not running this time. I have said multiple times that the way people are elected is far too conservative, putting a high premium on "big names" or representatives of "big companies" on one hand, and tending to preserve the status quo in AB membership on the other. Newcomers and/or representatives of smaller companies have almost zero chance of being elected. Even with the recent voting system changes, the problem remains.

Let me repeat here my proposal for both the AB and the TAG: two consecutive mandates only; after two consecutive mandates, elected members cannot run for re-election for at least one year.

But let's focus on current candidates. Or to be more precise, on their electoral program:

  1. Mike Champion (Microsoft), who has been on the AB for years, has a clear program that takes 2/3rds of his AB nominee statement.
    1. increase speed on standards
    2. bridge the gap existing between "fast" implementors and "slow" standards
    3. better position W3C internally
    4. better position W3C externally
    5. help the Web community
  2. Rick Johnson (VitalSource Technologies | Ingram Content Group) does not have a detailed program. He wants to help the Publishing side of W3C.
  3. Charles McCathie Nevile (Yandex) wants
    1. more pragmatism
    2. to take "into account the broad diversity of its membership both in areas of interest and in size and power" but he has "been on the AB longer than any current participant, including the staff", which does not promote diversity at all
  4. Natasha Rooney (GSMA) has a short statement with no program at all.
  5. Chris Wilson (Google Inc.), who has also been elected to the AB twice already, wants :
    1. to engage better developers and vendors
    2. to focus better W3C resources, with more agility and efficiency
    3. to streamline process and policies to let us increase speed and quality
  6. Zhang Yan (China Mobile Communications Corporation) does not really have a clear program besides "focus on WEB technology for 5G, AI and the Internet of things and so on"
  7. Judy Zhu (Alibaba (China) Co., Ltd.) wants:
    1. to make W3C more globalized (good luck on that one...)
    2. to make W3C Process more usable/effective/efficient
    3. increase W3C/industries collaboration (but isn't it an industrial consortium already?)
    4. increase agility
    5. focus more on security and privacy

Setting aside the mentions of agility and the Process, let me express a gut feeling: this is terribly depressing. Candidacy statements from ten years ago look exactly the same. They quote the same goals. They're even phrased the same way... But in the meantime, we have major topics on the meta-radar (non-exhaustive list):

  • the way the W3C Process is discussed, shaped and amended is so incredibly long it's ridiculous. Every single major topic Members raised in the last ten years took at least 2 years (if not six years) to solve, leaving Groups in a shameful mess. The Process is NOT a Technical Report that requires time, stability and implementations. It's our Law, that impacts our daily life as Members. When an issue is raised, it's because it's a problem right now and people raising the issue expect to see the issue solved in less than "years", far less than years.
  • no mention at all of finances! The finances of the W3C are almost a taboo that only a few well-known zealots like yours truly discuss, but they feed all W3C activities. After years of instability, and even danger, can the W3C afford to keep its current breadth without cutting some activities and limiting its scope? Can the W3C avoid new revenue streams? Which ones?
  • similarly, no mention of transparency! I am not speaking of openness of our technical processes here, I am very clearly and specifically speaking of the transparency of the management of the Consortium itself. The way W3C is managed is far too vertical and it starts being a real problem, and a real burden. We need changes there. Now.
  • the role of the Director, another taboo inside W3C, must be discussed. It is time to acknowledge the fact that the Director is not at the W3C any more. It's time to stop signing all emails "in the name of the Director" and handling all transition conference calls "in the name of the Director" but almost never with the Director. I'm not even sure we need another Director. It's time to acknowledge that Tim should become Honorary Director - with or without veto right - and distribute his duties to others.
  • we need a feedback loop and very serious evaluation of the recent reorganization of the W3C. My opinion is as follows: nobody knows who to contact and it's a mess of epic magnitude. The Functional leaders centralize input and then re-dispatch it to others, de facto resurrecting Activities and adding delays to everything. The reorg is IMHO a failure, and a rather expensive one in terms of effectiveness.
  • W3C is still not a legal entity, and this is not just starting to be a burden... it has been a burden for eons. The whole architecture of W3C, with regional feet and a too powerful MIT, is a scandalous reminiscence of the past.
  • our election system for AB and TAG is too conservative. People stay there for ages, while all our technical world seems to be reshaped every twelve months. My conclusion is simple, and more or less matches what Mike Champion said : the Consortium is not tailored any more to match its technical requirements. Where we diverge: I understand Mike prefers evolution to revolution, I think evolution is not enough any more and revolution is not avoidable any more. We probably need to rethink the W3C from the ground up.
  • Incubation has been added to the W3C Process in a way that is perceived by some as a true scandal. I am not opposed at all to Incubation, but W3C has shown a lack of caution, wisdom, consensus and obedience to its own Process that is flabbergasting. W3M acts fast when it needs to remind a Member about the Process, but W3M itself seems to work around the Process all the time. The way Charters under review are modified during the Charter Review itself is a blatant example of that situation.

Given how far the candidacy statements are from what I think are the real and urgent issues of the W3C, I'm not even sure I am willing to vote... I will eventually cast a ballot, sure, but I stand by my opinion above: this is depressing.

I am now 50 years old, I have been contributing to W3C for, er, almost 22 years and that's why I will not run any more. We need younger people, we need different perspectives, we need different ways of doing, we need different futures. We need a Consortium of 2017, we still have a Consortium of 2000, we still have the people of 2000. If I was 20 today, born with the Web as a daily fact, how would I react looking at W3C? I think I would say « oh that old-school organization... » and that alone explains this whole article.

Conclusion for all W3C AB candidates: if you want my vote, you'll have to explain better, much better, where you stand in front of these issues. What do you propose, how do you suggest to implement it, what's your vision for W3C 2020. Thanks.

Jeff WaldenGuys! The Mojave Desert is hot and dry. Who knew?

Four days in, 77mi so far, at Julian overnight. Longest waterless stretch was 17.8mi, but I did end up only drinking the water I started with on the first 20mi day, so I suppose it was as if it were a 20mi waterless stretch, even if water was plentiful. (That said, this year was so rainy/snowy that a ton of water sources that usually would be dry, are still running now.)

Starting group picture

First rail crossing

Not a rattlesnake across the trail

A PCT sign that says

Unexploded military ordnance nearby! Woo!

An overlook, with other hiker trash in the foreground

View on a valley

Overnight campsite at sunset - good view, but very windy

Campsite in morning

Prickly pear-looking cactus

And, my overnight lodgings in Julian:

Overnight on the floor of a small restaurant

Gervase MarkhamCaddy Webserver and MOSS

The team behind the Caddy secure-by-default webserver have written a blog post on their experience with MOSS:

The MOSS program kickstarted a new era for Caddy: turning it from a fairly casual (but promising!) open source project into something that is growing more than we would have hoped otherwise. Caddy is seeing more contributions, community engagement, and development than it ever has before! Our experience with MOSS was positive, and we believe in Mozilla’s mission. If you do too, consider submitting your project to MOSS and help make the Internet a better place.

Always nice to find out one’s work makes a difference. :-)

Mike Taylortext-shadow in ::selection, still not great

7 years ago I tweeted my only good tweet:

please kill the text-shadow in ::selection. obsessive compulsive text highlighters like myself go blind

screenshot of text selection ugliness

(apologies for hideous screenshot, 2010 was a weird time for web design, I guess)

Some internet hipsters agreed, so they put a default text-shadow: none rule for ::selection in html5 boilerplate's main.css.

Anyways, we recently got a bug about nearly the same exact issue: if you have a white background and set a white text-shadow on the copy (wat), things can get weird when someone makes a selection:

#wrapper {
  background-color: #fff;
}
.post-meta {
  text-shadow: 2px 0px 1px #fff;
}

So don't do that?

Anyways. The most important takeaway (for me) is that the devs over at thegunmag.com don't follow me on twitter, which is super rude when you think about it.

The Servo BlogThis Week In Servo 102

In the last week, we landed 140 PRs in the Servo organization’s repositories.

Planning and Status

Our roadmap is available online, including the overall plans for 2017. Q2 plans will appear soon; please check it out and provide feedback!

This week’s status updates are here.

Notable Additions

  • fabrice fixed an issue loading stylesheets with unusual MIME types.
  • ferjm allowed retrieving line numbers for CSS rules in Stylo.
  • behnam generated many conformance tests for the unicode-bidi crate.
  • canaltinova shared quirks information between Stylo and Servo.
  • MortimerGoro fixed an unsafe transmute that was causing crashes on Android.
  • mrobinson corrected the behaviour of the scrollBy API to better match the specification.
  • jdm removed incorrect buffer padding in ipc-channel on macOS.
  • kvark fixed an assertion failure when rendering fonts on unix.
  • aneeshusa implemented per-repository labelling actions in highfive.
  • nox refactored the implementation of CSS position values to reduce code duplication.
  • UK992 reenabled all unit tests on TravisCI.
  • jdm extended the cross-origin canvas security tests to cover same-origin redirects.
  • cbrewster made non-initial about:blank navigations asynchronous.
  • jdm fixed a GC hazard stemming from the transitionend event.

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

The Rust Programming Language BlogTwo years of Rust

Rust is a language for confident, productive systems programming. It aims to make systems programming accessible to a wider audience, and to raise the ambitions of dyed-in-the-wool systems hackers.

It’s been two years since Rust 1.0 was released. Happy second birthday, Rust!

Group picture from RustFest Berlin

Rustaceans at RustFest Berlin, September 2016. Picture by Fiona Castiñeira

Over these two years, we have demonstrated stability without stagnation, maintaining backwards compatibility with version 1.0 while also making many improvements. Conveniently, Rust’s birthday is a bit under halfway through 2017, which makes this a great time to reflect not only on the progress in the last year but also on the progress of our 2017 Roadmap goals.

After reading this post, if you’d like to give us your feedback on how we’re doing and where Rust should focus next, please fill out our 2017 State of Rust survey.

But first, let’s do the numbers!

Rust in numbers

A lot has happened since Rust’s first birthday:

  • 10,800 commits by 663 contributors (438 of them new this year) added to the core repository;
  • 56 RFCs merged;
  • 9 minor releases and 2 patch releases shipped;
  • 4,405 new crates published;
  • 284 standard library stabilizations;
  • 10 languages rust-lang.org has been translated into;
  • 48 new companies running Rust in production;
  • 4 new teams (Docs, Style, Infrastructure, and the Unsafe Guidelines strike team);
  • 24 additions of people to teams, 6 retirements of people from teams;
  • 3 babies born to people on the Rust teams;
  • 2 years of stability delivered.

On an average week this year, the Rust community merged 1 RFC and published 83 new crates. Rust topped the “most loved language” for the second year in a row in the StackOverflow survey. Also new this year is thanks.rust-lang.org, a site where you can browse contributors by release.

Rust in production

In addition to the 48 new Rust friends, we now have a Rust jobs website! More and more companies are choosing Rust to solve problems involving performance, scaling, and safety. Let’s check in on a few of them.

Dropbox is using Rust in multiple high-impact projects to manage exabytes of data on the back end, where correctness and efficiency are critical. Rust code is also currently shipping in the desktop client on Windows running on hundreds of millions of machines. Jamie Turner recently spoke at the SF Rust Meetup about the details on how Rust helps Dropbox use less RAM and get more throughput with less CPU.

Mozilla, Rust’s main sponsor, has accelerated their use of Rust in production. Not only did Servo start shipping nightly builds, Firefox 48 marked the first Firefox release that included Rust code as part of the Oxidation project. Project Quantum, announced in October 2016, is an effort to incrementally adopt proven parts of Servo into Firefox’s rendering engine, Gecko. Check out this blog series that’s just getting started for a detailed look at Project Quantum.

GNOME, a free and open source desktop environment for Linux, went from experimenting with Rust in librsvg in October 2016 to a hackfest in March to work on the interoperability between GNOME and Rust to enable more GNOME components to be written in Rust. The hackfest participants made good progress, be sure to check out the reports at the bottom of the hackfest page for all the details. We’re all excited about the possibilities of Rust and GNOME working together.

This year, npm started using Rust in production to serve JavaScript packages. The Rust pieces eliminate performance bottlenecks in their platform that serves around 350 million packages a day. Ashley Williams recently gave a talk at RustFest in Ukraine about npm’s experience with Rust in production; check out the video.

This is just a sampling of the success stories accumulating around Rust. If you’re using Rust in production, we want to hear yours too!

Rust in community

Speaking of conferences, we’ve had four Rust conferences in the last year:

And we have at least three conferences coming up!

That’s not even including the 103 meetups worldwide about Rust. Will you be the one to run the fourth conference or start the 104th meetup? Contact the community team for help and support!

Rust in 2017

The 2017 Roadmap goals have been great for focusing community efforts towards the most pressing issues facing Rust today. Of course we’d love for every aspect of Rust to improve all the time, but we don’t have an infinite number of contributors with an infinite amount of time available yet!

Let’s check in on some of the initiatives in each of the goals in the roadmap. The linked tracking issues give even more detail than the summaries here.

Rust should have a lower learning curve

The second edition of The Rust Programming Language Book is one chapter shy of having its initial content complete. There’s lots more editing to be done to get the book ready for publication in October, though. The print version is currently available for preorder from No Starch, and the online version of the second edition has boarded the beta train and will be an option in the documentation shipped with Rust 1.18.0. Steve and I have gotten feedback that the ownership chapter especially is much improved and has helped people understand ownership related concepts better!

The Language Ergonomics Initiative is another part of the lower learning curve goal that has a number of improvements in its pipeline. The language team is eager to mentor people (another goal!) who are interested in getting involved with moving these ergonomic improvement ideas forward by writing RFCs and working with the community to flesh out the details of how these improvements would work. Comment on the tracking issue if you’d like to jump in.

Also check out:

Rust should have a pleasant edit-compile-debug cycle

Waiting on the compiler is the biggest roadblock preventing the Rust development workflow from being described as “pleasant”. So far, a lot of work has been done behind the scenes to make future improvements possible. Those improvements are starting to come to fruition, but rest assured that this initiative is far from being considered complete.

One of the major prerequisites to improvements was adding MIR (Mid-level Intermediate Representation) to the compiler pipeline. This year, MIR became a default part of the compilation process.

Because of MIR, we’re now able to work on adding incremental recompilation. Nightly builds currently offer “beta” support for it, permitting the compiler to skip over code generation for code that hasn’t changed. We are in the midst of refactoring the compiler to support finer-grained incremental computation, allowing us to skip type-checking and other parts of compilation as well. This refactoring should also offer better support for the IDE work (see next section), since it enables the compiler to do things like compile a single function in isolation. We expect to see the next stage of incremental compilation becoming available over the next few months. If you’re interested in getting involved, please check out the roadmap issue #4, which is updated periodically to reflect the current status, as well as places where help is needed.

The February post on the “beta” support showed that recompiling in release mode will often be five times as fast with incremental compilation! This graph shows the improvements in compilation time when making changes to various parts of the regex crate and rebuilding in release mode:

Graph showing improved time with incremental compilation

Try out incremental compilation on nightly Rust with CARGO_INCREMENTAL=1 cargo <command>!

Thanks to Niko Matsakis for this incremental compilation summary!

We’ve also made some progress on the time it takes to do a full compilation. On average, compile times have improved by 5-10% in the last year, but some worst-case behavior has been fixed that results in >95% improvements in certain programs. Some very promising improvements are on the way for later this year; check out perf.rust-lang.org for monitoring Rust’s performance day-to-day.

Rust should provide a basic, but solid IDE experience

As part of our IDE initiative, we created the Rust Language Server project. Its goal is to create a single tool that makes it easy for any editor or IDE to have the full power of the Rust compiler for error checking, code navigation, and refactoring by using the standard language server protocol created by Microsoft and Eclipse.

While still early in its life, today the RLS is available from rustup for nightly users. It provides type information on hover, error messages as you type, and different kinds of code navigation. It even provides refactoring and formatting as unstable features! It works with projects as large as Cargo. We’re excited to watch the RLS continue to grow and hope to see it make its way to stable Rust later this year.

Thanks to Jonathan Turner for this RLS summary!

Rust should have 1.0-level crates for essential tasks, and Rust should provide easy access to high quality crates

The recent post on the Libz Blitz details the Library Team’s initiative to increase the quality of crates for common tasks; that post is excellent so I won’t repeat it here. I will note that many of the issues that the Libs Team is going to create will be great starter issues. For the blitz to be the best it can be, the Libs Team is going to need help from the community – that means YOU! :) They’re willing to mentor people interested in contributing.

In order to make awesome crates easier to find for particular purposes, crates.io now has categories for crate authors to better indicate the use case of their crate. Crates can also now have CI badges, and more improvements to crates.io’s interface are coming that will help you choose the crates that fit your needs.
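
As a rough illustration (the crate name, category slug, and repository path below are made up, not taken from any real crate), a crate author might opt into both of these features with a couple of lines in Cargo.toml:

# Hypothetical Cargo.toml excerpt showing the two crates.io features above.
[package]
name = "my-crate"
version = "0.1.0"
categories = ["command-line-utilities"]   # crates.io category slug

[badges]
travis-ci = { repository = "example/my-crate" }   # shows a CI badge on crates.io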

Rust should be well-equipped for writing robust, high-scale servers

One of the major events in Rust’s ecosystem in the last year was the introduction of a zero-cost futures library, and a framework, Tokio, for doing asynchronous I/O on top of it. These libraries are a boon for doing high-scale, high-reliability server programming, productively. Futures have been used with great success in C++, Scala, and of course JavaScript (under the guise of promises), and we’re reaping similar benefits in Rust. However, the Rust library takes a new implementation approach that makes futures allocation-free. And Tokio builds on that to provide a futures-enabled event loop, and lots of tools for quickly implementing new protocols. A simple HTTP server using Tokio is among the fastest measured in the TechEmpower server benchmarks.
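
To give a flavor of the combinator style described here, the following is a minimal sketch assuming the futures 0.1-era API (a real server would hand the future to a Tokio event loop rather than blocking on wait()):

extern crate futures; // assumes the futures 0.1 crate is a dependency

use futures::Future;
use futures::future;

fn main() {
    // Each combinator wraps the previous future in a new concrete type,
    // so the whole chain is built without heap allocation.
    let work = future::ok::<u32, ()>(21)
        .map(|n| n * 2)
        .and_then(|n| future::ok(n));

    // Drive the future to completion on the current thread. In a real
    // Tokio server, an event loop would poll it instead.
    let answer = work.wait().unwrap();
    println!("computed {}", answer);
}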

Speaking of protocols, Rust’s full-blown HTTP story is solidifying, with Hyper’s master branch currently providing full Tokio support (and official release imminent). Work on HTTP/2 is well under way. And the web framework ecosystem is growing too. For example, Rocket came out this year: it’s a framework that marries the ergonomics and flexibility of a scripting framework with the performance and reliability of Rust. Together with supporting libraries like the Diesel ORM, this ecosystem is showing how Rust can provide slick, ergonomic developer experiences without sacrificing an ounce of performance or reliability.

Over the rest of this year, we expect all of the above libraries to significantly mature; for a middleware ecosystem to sprout up; for the selection of supported protocols and services to grow; and, quite possibly, to tie this all together with an async/await notation that works natively with Rust’s futures.

Thanks to Aaron Turon for this server-side summary!

Rust should integrate easily into large build systems

Cargo, Rust’s native package manager and build system, is often cited as one of people’s favorite aspects of Rust. But of course, the world runs on many build systems, and when you want to bring a chunk of the Rust ecosystem into a large organization that has its own existing build system, smooth integration is paramount.

This initiative is mostly in the ideas stage; we’ve done a lot of work with stakeholders to understand the challenges in build system integration today, and we think we have a good overall vision for how to solve them. There’s lots of great discussion on the tracking issue that has resulted in a few Cargo issues like these:

There are a lot of details yet to be worked out; keep an eye out for more improvement in this area soon.

Rust’s community should provide mentoring at all levels

The “all levels” part of the roadmap item is important to us: it’s about onboarding first-time contributors as well as adding folks all the way up at the core team level (like me, hi!)

For people just getting started with Rust, we held RustBridge events before RustFest Berlin and Rust Belt Rust. There’s another coming up, planned for the day before RustConf in Portland!

The Mozilla Rust folks are going to have Outreachy and GSoC interns this summer working on a variety of projects.

We’ve also had success involving contributors when there are low-commitment, high-impact tasks to be done. One of those efforts was improving the format of error messages – check out the 82 participants on this issue! The Libz Blitz mentioned in a previous section is set up specifically to be another source of mentoring opportunities.

In January, the Language Team introduced shepherds, which is partly about mentoring a set of folks around the Language Team. The shepherds have been quite helpful in keeping RFC discussions moving forward!

We’ve also been working to grow both the number and size of subteams, to create more opportunities for people to step into leadership roles.

There’s also less formal ways that we’ve been helping people get involved with various initiatives. I’ve worked with many people at many places in their Rust journey: helping out with the conferences, giving their first conference talks, providing feedback on the book, working on crates, contributing to Rust itself, and joining teams! While it’s hard to quantify scenarios like these, everywhere I turn, I see Rustaceans helping other Rustaceans and I’m grateful this is part of our culture.

Rust in the future

At two years old, Rust is finding its way into all corners of programming, from web development, to embedded systems, and even your desktop. The libraries and the infrastructure are maturing, we’re paving the on-ramp, and we’re supporting each other. I’m optimistic about the direction Rust is taking!

Happy birthday, Rust! Here’s to many more! 🎉

Karl Dubost[worklog] Edition 066. Removing knots

webcompat life

  • Often tracking protection is confusing for people. I always wonder if it's because people are put in front of it in a failure situation. Basically you discover something is not working because the site breaks, and only later are you told it is because of tracking protection. It's a negative-feeling feature which doesn't show itself when everything is fine. I wonder if there would be a way to reverse that feeling. Something like an individual site report on the blocking, plus a daily or weekly stats dashboard explaining what has been blocked: "Congratulations, Tracking Protection has blocked X of this and Y of that this week."
  • Webcompat Minutes published

webcompat issues

webcompat.com dev

Interesting read

Otsukare!

Manish GoregaokarMentally Modelling Modules

The module and import system in Rust is sadly one of the many confusing things you have to deal with whilst learning the language. A lot of these confusions stem from a misunderstanding of how it works. In explaining this I’ve seen that it’s usually a common set of misunderstandings.

In the spirit of “You’re doing it wrong”, I want to try and explain one “right” way of looking at it. You can go pretty far[1] without knowing this, but it’s useful and helps avoid confusion.



First off, just to get this out of the way, mod foo; is basically a way of saying “look for foo.rs or foo/mod.rs and make a module named foo with its contents”. It’s the same as mod foo { ... } except the contents are in a different file. This itself can be confusing at first, but it’s not what I wish to focus on here. The Rust book explains this more in the chapter on modules.

In the examples here I will just be using mod foo { ... } since multi-file examples are annoying, but keep in mind that the stuff here applies equally to multi-file crates.

Motivating examples

To start off, I’m going to provide some examples of Rust code which compiles. Some of these may be counterintuitive, based on your existing model.

pub mod foo {
    extern crate regex;

    mod bar {
        use foo::regex::Regex;
    }
}

(playpen)

use std::mem;


pub mod foo {
    // not std::mem::transmute!
    use mem::transmute;

    pub mod bar {
        use foo::transmute;
    }
}

(playpen)

pub mod foo {
    use bar;
    use bar::bar_inner;

    fn foo() {
        // this works!
        bar_inner();
        bar::bar_inner();
        // this doesn't
        // baz::baz_inner();

        // but these do!
        ::baz::baz_inner();
        super::baz::baz_inner();

        // these do too!
        ::bar::bar_inner();
        super::bar::bar_inner();
        self::bar::bar_inner();

    }
}

pub mod bar {
    pub fn bar_inner() {}
}
pub mod baz {
    pub fn baz_inner() {}
}

(playpen)

pub mod foo {
    use bar::baz;
    // this won't work
    // use baz::inner();

    // this will
    use self::baz::inner;
    // or
    // use bar::baz::inner

    pub fn foo() {
        // but this will work!
        baz::inner();
    }
}

pub mod bar {
    pub mod baz {
        pub fn inner() {}
    }
}

(playpen)

These examples remind me of the “point at infinity” in elliptic curve crypto or fake particles in physics or fake lattice elements in various fields of CS[2]. Sometimes, for something to make sense, you add in things that don’t normally exist. Similarly, these examples may contain code which is not traditional Rust style, but the import system still makes more sense when you include them.

Imports

The core confusion behind how imports work can really be resolved by remembering two rules:

  • use foo::bar::baz resolves foo relative to the root module (lib.rs or main.rs)
    • You can resolve relative to the current module by explicitly writing use self::foo::bar::baz
  • foo::bar::baz within your code[3] resolves foo relative to the current module
    • You can resolve relative to the root by explicitly using ::foo::bar::baz

That’s actually … it. There are no further caveats. The rest of this is modelling what constitutes “being within a module”.

Let’s take a pretty standard setup, where extern crate declarations are placed in the root module:

extern crate regex;

mod foo {
    use regex::Regex;

    fn foo() {
        // won't work
        // let ex = regex::Regex::new("");
        let ex = Regex::new("");
    }
}

When we say extern crate regex, we pull the regex crate into the crate root. This behaves pretty similarly to mod regex { /* contents of regex crate */ }. Basically, we’ve imported the crate into the crate root, and since all use paths are relative to the crate root, use regex::Regex works fine inside the module.

Inline in code, regex::Regex won’t work because, as mentioned before, inline paths are relative to the current module. However, you can try ::regex::Regex::new("").

Since we’ve imported regex::Regex in mod foo, that name is now accessible to everything inside the module directly, so the code can just say Regex::new().

The way you can view this is that use blah and extern crate blah create an item named blah “within the module”, which is basically something like a symbolic link, saying “yes this item named blah is actually elsewhere but we’ll pretend it’s within the module”.

The error message from this code may further drive this home:

use foo::replace;

pub mod foo {
    use std::mem::replace;
}

(playpen)

The error I get is

error: function `replace` is private
 --> src/main.rs:3:5
  |
3 | use foo::replace;
  |     ^^^^^^^^^^^^

There’s no function named replace in the module foo! But the compiler seems to think there is?

That’s because use std::mem::replace basically is equivalent to there being something like:

pub mod foo {
    fn replace(...) -> ... {
        ...
    }

    // here we can refer to `replace` freely (in inline paths)
    fn whatever() {
        // ...
        let something = replace(blah);
        // ...
    }
}

except it’s actually like a symlink to the function defined in std::mem. Because inline paths are relative to the current module, saying use std::mem::replace works as if you had defined a function replace in the same module, and you can refer to replace() without needing any extra qualification in inline paths.

This also makes pub use fit perfectly in our model. pub use says “make this symlink, but let others see it too”:

// works now!
use foo::replace;

pub mod foo {
    pub use std::mem::replace;
}


Folks often get annoyed when this doesn’t work:

mod foo {
    use std::mem;
    // nope
    // use mem::replace;
}

As mentioned before, use paths are relative to the root module. There is no mem in the root module, so this won’t work. We can make it work via self, which I mentioned before:

mod foo {
    use std::mem;
    // yep!
    use self::mem::replace;
}

Note that this brings overloading of the self keyword up to a grand total of four, the first two of which occur in the import/path system:

  • use self::foo means “find me foo within the current module”
  • use foo::bar::{self, baz} is equivalent to use foo::bar; use foo::bar::baz;
  • fn foo(&self) lets you define methods and specify if the receiver is by-move, borrowed, mutably borrowed, or other
  • Self within implementations lets you refer to the type being implemented on

Oh well, at least it’s not static.
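
For a consolidated picture, here is a small self-contained sketch (module and type names invented for illustration) that exercises all four of these uses of self/Self in one place:

mod app {
    // case 1: `self::` at the start of a use path means "relative to this module"
    use self::helpers::helper;

    // case 2: `{self, ...}` imports the `hash_map` module itself plus an item from it
    use std::collections::hash_map::{self, HashMap};

    mod helpers {
        pub fn helper() {}
    }

    pub fn run() {
        helper();
        let mut map: HashMap<&str, u32> = HashMap::new();
        map.insert("answer", 42);
        // the module imported via `{self, ...}` is usable as a path prefix too
        let _state = hash_map::RandomState::new();
    }
}

struct Counter {
    n: u32,
}

impl Counter {
    // case 4: `Self` names the type being implemented on (here, `Counter`)
    fn new() -> Self {
        Counter { n: 0 }
    }

    // case 3: `&mut self` declares the method receiver as a mutable borrow
    fn bump(&mut self) {
        self.n += 1;
    }
}

fn main() {
    app::run();
    let mut c = Counter::new();
    c.bump();
    println!("count: {}", c.n);
}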




Going back to one of the examples I gave at the beginning:

use std::mem;


pub mod foo {
    use mem::transmute;

    pub mod bar {
        use foo::transmute;
    }
}

(playpen)

It should be clearer now why this works. The root module imports mem. Now, from everyone’s point of view, there’s an item called mem in the root.

Within mod foo, use mem::transmute works because use is relative to the root, and mem already exists in the root! When you use something, all child modules will see it as if it actually belonged to the module. (Non-child modules won’t see it because of privacy; we saw an example of this already.)

This is why use foo::transmute works from mod bar, too. bar can refer to the contents of foo via use foo::whatever, since foo is a child of the root module, and use is relative to the root. foo already has an item named transmute inside it because it imported one. Nothing in the parent module is private from the child, so we can use foo::transmute from bar.

Generally, the standard way of doing things is to either not use modules (just a single lib.rs), or, if you do use modules, put nothing other than extern crates and mods in the root. This is why we rarely see shenanigans like the above; there’s nothing in the root crate to import, aside from other crates specified by extern crate. The trick of “reimport something from the parent module” is also pretty rare because there’s basically no point to using that (just import it directly!). So this is not the kind of code you’ll see in the wild.



Basically, the way the import system works can be summed up as:

  • extern crate and use will act as if they were defining the imported item in the current module, like a symbolic link
  • use foo::bar::baz resolves the path relative to the root module
  • foo::bar::baz in an inline path (i.e. not in a use) will resolve relative to the current module
  • ::foo::bar::baz will always resolve relative to the root module
  • self::foo::bar::baz will always resolve relative to the current module
  • super::foo::bar::baz will always resolve relative to the parent module

Alright, on to the other half of this. Privacy.

Privacy

So how does privacy work?

Privacy, too, follows some basic rules:

  • If you can access a module, you can access all of its pub contents
  • A module can always access its child modules, but not recursively
    • This means that a module cannot access private items in its children, nor can it access private grandchildren modules
  • A child can always access its parent modules (and their parents), and all their contents
  • pub(restricted) is a proposal which extends this a bit, but it’s experimental so we won’t deal with it here

Giving some examples,

mod foo {
    mod bar {
        // can access `foo::foofunc`, even though `foofunc` is private

        pub fn barfunc() {}

    }
    // can access `foo::bar::barfunc()`, even though `bar` is private
    fn foofunc() {}
}

mod foo {
    mod bar {
        // We can access our parent and _all_ its contents,
        // so we have access to `foo::baz`. We can access
        // all pub contents of modules we have access to, so we
        // can access `foo::baz::bazfunc`
        use foo::baz::bazfunc;
    }
    mod baz {
        pub fn bazfunc() {}
    }
}

It’s important to note that this is all contextual; whether or not a particular path works is a function of where you are. For example, this works[4]:

pub mod foo {
    /* not pub */ mod bar {
        pub mod baz {
            pub fn bazfunc() {}
        }
        pub mod quux {
            use foo::bar::baz::bazfunc;
        }
    }
}

We are able to write the path foo::bar::baz::bazfunc even though bar is private!

This is because we still have access to the module bar, by being a descendent module.



Hopefully this is helpful to some of you. I’m not really sure how this can fit into the official docs, but if you have ideas, feel free to adapt it[5]!


  1. This is because most of these misunderstandings lead to a model where you think fewer things compile, which is fine as long as it isn’t too restrictive. Having a mental model where you feel more things will compile than actually do is what leads to frustration; the opposite can just be restrictive.

  2. One example closer to home is how Rust does lifetime resolution. Lifetimes form a lattice with 'static being the bottom element. There is no top element for lifetimes in Rust syntax, but internally there is the “empty lifetime” which is used during borrow checking. If something resolves to have an empty lifetime, it can’t exist, so we get a lifetime error.

  3. When I say “within your code”, I mean “anywhere but a use statement”. I may also term these as “inline paths”.

  4. Example adapted from this discussion

  5. Contact me if you have licensing issues; I still have to figure out the licensing situation for the blog, but am more than happy to grant exceptions for content being uplifted into official or semi-official docs.

Eric ShepherdDoing what doesn’t come naturally

I’ve been writing developer documentation for 20 years now, 11 of those years at Mozilla. For most of those years, documentation work was largely unmanaged. That is to say, we had management, and we had goals, but how we reached those goals was entirely up to us. This worked well for me in particular. My brain is like a simple maze bot in some respects, following all the left turns until it reaches a dead end, then backing up to where it made the last turn and taking the next path to the right, and repeating until the goal has been reached.

This is how I wrote for a good 14 or 15 years of my career. I’d start writing about a topic, linking to APIs, functions, other guides and tutorials, and so forth along the way—whether they already existed or not. Then I’d go back through the page and click the first link on the page I just created, and I’d make sure that that page was solid. Any material on that page that needed to be fixed for my new work to be 100% understood, I’d update. If there were any broken links, I’d fix them, creating and writing new pages as needed, and so forth.

How my mind wants to do it

Let’s imagine that the standards gurus have spoken and have decided to add a new <dial> element to HTML, providing support for creating knobs and speedometer-style feedback displays. My job is to document this element.

I start by creating the main article in the HTML reference for <dial>, and I write that material, starting with a summary (which may include references to <progress>, <input>, and other elements and pages). It may also include links to articles I plan to create, such as “Using dial elements” and “Displaying information in HTML” as well as articles on forms.

As I continue, I may wind up with links to subpages which need to be created; I’ll also wind up with a link to the documentation for the HTMLDialElement interface, which obviously hasn’t been written yet. I also will have links to subpages of that, as well as perhaps for other elements’ attributes and methods.

Having finished the document for <dial>, I save it, review it and clean it up, then I start following all the links on the page. Any links that take me to a page that needs to be written, I write it. Any links that take me to a page that needs content added because of the new element, I expand them. Any links that take me to a page that is just horribly unusably bad, I update or rewrite as needed. And I continue to follow those left-hand turns, writing or updating article after article, until eventually I wind up back where I started.

If one of those pages is missing an example, odds are good it’ll be hard to resist creating one, although if it will take more than a few minutes, this is where I’m likely to reluctantly flag it for someone else to do later, unless it’s really interesting and I am just that intrigued.

By the time I’m done documenting <dial>, I may also have updated badly out of date documentation for three other elements and their interfaces, written pages about how to decide on the best way to represent your data, added documentation for another undocumented element that has nothing to do with anything but it was a dead link I saw along the way, updated another element’s documentation because that page was where I happened to go to look at the correct way to structure something, and I saw it had layout problems…

You get the idea.

How I have to do it now

Unfortunately, I can’t realistically do that anymore. We have adopted a system of sprints with planned work for each sprint. Failing to complete the work in the expected amount of time tends to get you dirty looks from more and more people the longer it goes on. Even though I’m getting a ton accomplished, it doesn’t count if it’s not on the sprint plan.

So I try to force myself to work on only the stuff directly related to the sprint we’re doing. But sometimes the line is hard to find. If I add documentation for an interface, but the documentation for its parent interface is terrible, it seems to me that updating that parent interface is a fairly obvious part of my job for the sprint. But it wasn’t budgeted into the time available, so if I do it, I’m not going to finish in time.

The conundrum

That leaves me in a bind: do strictly what I’m supposed to do, leaving behind docs that are only partly usable, or follow at least some of those links into pages that need help before the new content is truly usable and “complete,” but risk missing my expected schedule.

I almost always choose the latter, going in knowing I’m going to be late because of it. I try to control my tendency to keep making left turns, but sometimes I lose myself in the work and write stuff I am not expected to be doing right now.

Worse, though, is that the effort of restraining myself to just writing what’s expected is unnatural to me. My brain rebels a bit, and I’m quite sure my overall throughput is somewhat lower because of it. As a result: a less enjoyable writing experience for me, less overall content created, and unmet goals.

I wonder, sometimes, how my work results would look if I were able to cut loose and just go again. I know I have other issues slowing me down (see my earlier blog post Peripheral neuropathy and me), but I can’t help wondering if I could be more productive by working how I think, instead of doing what doesn’t come naturally: work on a single thing from A to Z without any deviation at all for any reason.

Marcia KnousIt is all about community

This past weekend's l10n/Nightly workshop reminded me how great it is to meet Mozillians who work on various aspects of our project. I had some interesting conversations with various communities about lots of different topics. I don't work specifically with localizers, but it was interesting to hear some of the challenges they face when they have to translate terms in Firefox.
marcia and the Mozilla Ugandan community
These face to face meetups are the best part of working at Mozilla. Although we were mostly in our own spaces in the office, during the lunches and dinners we got to explore some far ranging topics. Some of the communities also brought some "sweets" to the event, which was wonderful.
marcia and the Mozilla Persian community
Thanks to Jeff, Delphine, Flod, Axel, Peiying, Theo, Pascal, and Clara for all their hard work putting the Paris event together and coordinating the logistics. It was truly a great event!

Daniel PocockThank you to the OSCAL team

The welcome gift deserves its own blog post. If you want to know what is inside, I hope to see you at OSCAL'17.

Daniel PocockKamailio World and FSFE team visit, Tirana arrival

This week I've been thrilled to be in Berlin for Kamailio World 2017, one of the highlights of the SIP, VoIP and telephony enthusiast's calendar. It is an event that reaches far beyond Kamailio and is well attended by leaders of many of the well known free software projects in this space.

HOMER 6 is coming

Alexandr Dubovikov gave me a sneak peek of the new version of the HOMER SIP capture framework for gathering, storing and analyzing messages in a SIP network.

exploring HOMER 6 with Alexandr Dubovikov at Kamailio World 2017

Visiting the FSFE team in Berlin

Having recently joined the FSFE's General Assembly as the fellowship representative, I've been keen to get to know more about the organization. My visit to the FSFE office involved a wide-ranging discussion with Erik Albers about the fellowship program and FSFE in general.

discussing the Fellowship program with Erik Albers

Steak and SDR night

After a hard day of SIP hacking and a long afternoon at Kamailio World's open bar, a developer needs a decent meal and something previously unseen to hack on. A group of us settled at Escados, Alexanderplatz where my SDR kit emerged from my bag and other Debian users found out how easy it is to apt install the packages, attach the dongle and explore the radio spectrum.

playing with SDR after dinner

Next stop OSCAL'17, Tirana

Having left Berlin, I'm now in Tirana, Albania, where I'll give an SDR workshop and Free-RTC talk at OSCAL'17. The weather forecast is between 26 and 28 degrees Celsius, the food is great, and the weekend's schedule is full of interesting talks and workshops. The organizing team have already made me feel very welcome here, meeting me at the airport and leaving a very generous basket of gifts in my hotel room. OSCAL has emerged as a significant annual event in the free software world and if it's too late for you to come this year, don't miss it in 2018.

OSCAL'17 banner

Ehsan AkhgariQuantum Flow Engineering Newsletter #9

It’s been 10 weeks since I started writing these newsletters (the number in the title isn’t an off-by-one error, there was a one-week hiatus due to a work week!). We still have quite a bit of work ahead of us, but we have also accomplished a good amount. Finding a good metric for progress is hard, but we live and breathe in Bugzilla, so we use a bug-based burn-down chart. As you can see, we are starting to see a decrease in the number of open bugs, and this is as we are actively adding tens of new bugs to the pool in the weekly triage meetings.
The other thing that this burn-down chart shows is that we need help! Very recently Kan-Ru came up with the great idea of creating the qf-bugs-upforgrabs tracker bug. These are reasonably self-contained bugs that require less specific domain knowledge and can be worked on by anyone in a reasonable time frame. Please consider taking a look at the dependency list of that bug to see if something interests you! (The similarity of this tracker bug to photon-perf-upforgrabs isn’t an accident!)
On the telemetry hang reports data collection, the new data from hangs of 128ms or longer have been coming in, but there have been some wrinkles in actually receiving this data, and also in receiving the hang data correlated to user interactivity. Michael Layzell has been tirelessly at work on the BHR backend to make it suit our needs, and has been discovering the edges of computation limits in order to symbolicate the BHR reports on people.mozilla.org (now moved to AWS!).
I realized we haven’t had a performance mini-story for a while — I sort of dropped the ball on that. Running into this bug made me want to talk about a pretty well-known sort of slowness in C++ code: virtual functions. The cost of virtual functions comes from several different aspects. Firstly, they effectively prevent the compiler from inlining the function; inlining enables a host of other compiler optimizations, essentially by letting the compiler see more of the code and optimize more effectively based on that. But then there is the runtime cost of the call itself, which mostly comes from the indirect call. The majority of the performance penalty here on modern hardware is due to branch mispredictions when different implementations of a virtual function get called at a call site. You should remember that on modern desktop processors the cost of a branch misprediction can be around 15-20 cycles (depending on the processor), so if what your function does is very trivial and it has many overrides that can be called in hot code, chances are that you are spending a considerable amount of time waiting for the instruction cache misses on the calls to the virtual function in question. Of course, finding which virtual functions in your program are the expensive ones requires profiling the workloads you care about improving, but always keep an eye out for this problem, as unfortunately the object-oriented programming model in C++ really encourages writing code like this. This is the kind of issue that a native profiler is probably more suitable for discovering; for example, if you are using a simple native sampling profiler, these issues typically show up as a long amount of time being spent on the first instruction of the virtual function being called (which is typically an inexpensive instruction otherwise).
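
The newsletter is talking about Gecko's C++, but the same trade-off can be sketched in Rust (illustrative only, not Gecko code): the generic version below is monomorphized and can be inlined per concrete type, while the trait-object version performs an indirect call through a vtable at every call site.

trait Op {
    fn apply(&self, x: u64) -> u64;
}

struct Add1;
impl Op for Add1 {
    fn apply(&self, x: u64) -> u64 { x + 1 }
}

struct Double;
impl Op for Double {
    fn apply(&self, x: u64) -> u64 { x * 2 }
}

// Static dispatch: monomorphized per concrete type, so the call can be inlined.
fn run_static<T: Op>(op: &T, mut x: u64) -> u64 {
    for _ in 0..1_000 {
        x = op.apply(x);
    }
    x
}

// Dynamic dispatch: every call goes through the vtable; when the concrete type
// keeps changing at this call site, the branch predictor has a harder time.
fn run_dynamic(ops: &[&dyn Op], mut x: u64) -> u64 {
    for op in ops {
        x = op.apply(x);
    }
    x
}

fn main() {
    println!("{}", run_static(&Add1, 0));
    println!("{}", run_dynamic(&[&Add1 as &dyn Op, &Double, &Add1], 1));
}
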
Now it’s time to acknowledge the work of all of you who have helped in improving the performance of the browser in the last week. As always, I hope I’m not forgetting anyone:

Gervase MarkhamEurovision Bingo (chorus)

Some people say that all Eurovision songs are the same. (And some say all blog posts on this topic are the same…) That’s probably not quite true, but there is perhaps a hint of truth in the suggestion that some themes tend to recur from year to year. Hence, I thought, Eurovision Bingo.

I wrote some code to analyse a directory full of lyrics, normally those from the previous year of the competition, and work out the frequency of occurrence of each word. It will then generate Bingo cards, with sets of words of different levels of commonness. You can then use them to play Bingo while watching this year’s competition (which is on Saturday).
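
The real code lives in the GitHub repo linked below; purely to illustrate the frequency-counting step, a sketch along these lines (the lyrics directory name and output format are invented here) does the core of the job:

use std::collections::HashMap;
use std::fs;

fn main() -> std::io::Result<()> {
    let mut counts: HashMap<String, u64> = HashMap::new();
    // "lyrics" is a placeholder for a directory full of plain-text lyrics files.
    for entry in fs::read_dir("lyrics")? {
        let text = fs::read_to_string(entry?.path())?;
        for word in text.split(|c: char| !c.is_alphabetic()) {
            if !word.is_empty() {
                *counts.entry(word.to_lowercase()).or_insert(0) += 1;
            }
        }
    }
    // Sort by frequency so cards can draw words from different "commonness" bands.
    let mut freq: Vec<(String, u64)> = counts.into_iter().collect();
    freq.sort_by(|a, b| b.1.cmp(&a.1));
    for (word, n) in freq.iter().take(25) {
        println!("{}\t{}", word, n);
    }
    Ok(())
}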

There’s a Github repo, or if you want to go straight to pre-generated cards for this year, they are here.

Here’s a sample card from the 2014 lyrics:

fell cause rising gonna rain
world believe dancing hold once
every mean LOVE something chance
hey show or passed say
because light hard home heart

Have fun :-)

Daniel StenbergThe curl user survey 2017

The annual survey for curl and libcurl users is open. The 2017 edition has some minor edits since last year but is mostly the same set of questions used before, to help us detect changes and trends over time.

If you use curl or libcurl, in any way, shape or form, please consider spending a few minutes of your precious time on this. Your input helps us understand where we are and in which direction we should go next.

Fill in the form!

The poll is open for fourteen days, from Friday May 12th until midnight (CEST) on May 26th 2017. All data we collect is non-personal and anonymous.

To get some idea of what sort of information we extract and collect from the results, have a look at the analysis of last year’s survey.

Air MozillaFirefox DevTools London Meetup May 2017

Firefox DevTools London Meetup May 2017 Introducing Firefox Developer Tools (DevTools), devtools.html and associated projects such as the new debugger and console.

Air MozillaReps Weekly Meeting May 11, 2017

Reps Weekly Meeting May 11, 2017 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Security BlogRelaunching Mozilla’s Web Security Bounty Program

Today we are announcing the relaunch of our web security bug bounty program, creating greater transparency into how we handle web security bug bounty payouts.

History

Bug bounty programs started a number of years ago with Netscape leading the way. In August of 2004, Mozilla joined in by launching our first bug bounty program. Funded by Linspire, Inc. and Mark Shuttleworth, it paid out $500 for critical security vulnerabilities found in Firefox and other Mozilla software. Although this may seem quaint in comparison to modern day bug bounties that can reach well into the six figures, at the time it was considered a revolutionary advance in how technology companies deal with the discovery of security flaws.

Six years later, in December of 2010, Mozilla was one of the first companies to add bugs found in their web properties to their bounty programs. Ranging from $500 up to $3000, it was another leap forward, this time focused on improving the state of web security.

From our first awarded web bounty bug (a cross-site scripting vulnerability in addons.mozilla.org) to now, we have paid out hundreds of thousands of dollars to researchers around the world who have lent their expertise to help us protect our users.

Challenges and Solutions

Bug bounty programs are always challenging to administer, especially for a company like Mozilla. We have staff and contributors who have lived and breathed the web for almost 20 years, and our portfolio of websites has grown exponentially. From www.mozilla.org to www.bugzilla.org to arewefastyet.com, some of these sites create significantly more risk to Mozilla’s operations than others.

Problems have arisen with communicating this risk spectrum to bounty hunters. A hypothetical SQL injection on Bugzilla presents a different level of risk to Mozilla than a cross-site scripting attack on the Observatory or an open redirect on a community blog. To a bounty hunter, the level of risk is often irrelevant — they simply want to know if a class of bug on a specific site will pay out a bounty and how much it will pay out.

Overall, we think we have done a reasonable job listing the Mozilla websites that pay out bounties, but the actual payout amounts have varied. In addition, payouts have become more complicated for bugs discovered on sites that are not explicitly part of the program.

If a payout comes in at a level that meets or exceeds what the researcher was expecting, then everything is great. But if it comes in lower than expectations, a bounty hunter may be disappointed. Furthermore, making a payout exception for a given site creates an expectation that additional exceptions will be made.

Today

We are excited to relaunch our web-based bounty program in a way that will address many of these historical issues while also expanding the number of websites and bug classes that are covered. In addition, we are explicitly listing how much each bug class will pay out and for what websites, based on their risk profile.


[see the whole table here]

Having a clear and straightforward table of payouts allows bounty hunters to devote their time and effort to discovering bugs that they know will receive a payout. The hunters will also know the exact amount of the payouts.  We’re also expanding the classes of bugs that qualify for our bug bounty Hall of Fame. Although these bugs don’t come with a monetary payout, it’s our way of publicly acknowledging the work of bounty hunters in making the web a safer place.

From our logos to our products, Mozilla is a company that prides itself on its openness. Although being open about payouts is generally unexplored territory, we hope that it helps contribute to greater openness in bug bounty programs around the web.

If you are an existing contributor to our web bug bounty program, we hope this structure helps focus your efforts. If you are just starting out, we look forward to working with you to help make the internet more secure!

The post Relaunching Mozilla’s Web Security Bounty Program appeared first on Mozilla Security Blog.

Hacks.Mozilla.OrgDebugger.html Call Stack Improvements

Debugger.html is an open source project, built on top of React and Redux, that functions as a standalone debugger for Firefox, Chrome and Node. The debugger is also being integrated into the Firefox Developer Tools offering. Currently it is available in the Firefox 53 release behind the devtools.debugger.new-debugger-frontend preference.

The debugger.html project was announced last September and continues to improve with added features and functionality. In this post, we will cover some changes for the call stack display that are currently under development or recently implemented.

Call Stack Display

Most JavaScript debuggers offer a visual representation of the call stack. This display represents the execution context stack or simply the call stack, displaying a frame for each nested call currently being executed. Generically speaking, an execution context refers to the environment in which a function runs, including scoped variables, function arguments, etc. If your application provides a button to execute function A, then a new execution context is created and loaded onto the call stack when the button is clicked. Function A may call function B.

In this case, an execution context is created for function B and loaded onto the top of the call stack. If function B has a breakpoint set, the call stack display will show a frame for the button function, one for Function A and one for Function B. Essentially, the Call Stack display shows a chain of functions that are waiting to be completed and return a value.

Selecting a frame in the Call Stack display will update the Scopes display with the variables for that particular frame. Clicking on the filename in the display will open that specific file in the editor. The Call Stack display is a great tool for tracking execution flow through your application. That said, with complex libraries or frameworks the display can become convoluted and difficult to follow.

The Debugger.html team is working on several key features that will improve the usability of the Call Stack display and make it more intuitive to understand. We’ll take a closer look at these four:

  • Simplifying function names
  • Highlighting libraries
  • Collapsing library frames
  • Naming library frames

Simplifying Function Names

The vast majority of JavaScript functions are not named, making them anonymous. This presents a problem: the call stack currently gives verbose function names to anonymous functions. In practice, an anonymous function might be given the name app.AppView<.success because it is defined in the AppView class. It’s important to be able to scan the call stack quickly. In this context, it’s helpful to see the simplest name possible (success). Compare the image below to the previous image of the call stack. This feature is currently implemented in the latest source code for the debugger.html project.

Highlighting Libraries

Libraries and frameworks are used in a large portion of web applications. In the process of debugging your code, sending calls to the library or framework can quickly fill up the Call Stack display. In most cases, it’s helpful to quickly identify and exclude libraries from your debugging work, since most bugs will likely be found in your custom application code. The debugger now includes a library highlight feature, which replaces the file URL and line location with the library name and logo, so that you can focus your debugging efforts more efficiently on your own code.

Note that this setting can be reverted in the debugger settings page, if you need to debug a library.

Collapsing Library Frames

In similar fashion, you can unclutter your view of the debugging effort by collapsing multiple function calls within a library into one visual line in the Call Stack display. This will further reduce the visual noise and help you locate and debug your application code. With this feature turned on, the previous stack trace now looks like this:

Then you can view the nested frames by clicking the frame to open it.

With these three features enabled, you gain access to a very simple view that can improve your debugging workflow. As with the previous feature, collapsing can be disabled.

Naming Library Frames

Collapsing the library frames offers an additional benefit. It gives you a way to describe what the library is doing in the combined lines that are collapsed. For example, instead of showing two frames for the jQuery elemData.handle and event.dispatch functions, we can simply show the label event. This can also encourage better naming conventions for describing specific library or framework operations such as rendering, routing, or doing any other task. For example, in the Call Stack display image above, the display will show when the Backbone Model and Views are created.

This feature is currently under development in order to provide better names for the library operations taking place in the collapsed library frames. Naming should be specific to individual libraries and should summarize all the operations taking place in the lines of code represented by the one collapsed display item.

Conclusion

We’re really excited about how the new call stack will help users debug and improve their web applications. This is just the beginning; we hope to introduce more framework improvements in the coming months.

The Debugger.html team welcomes new contributors and suggestions for improving the tool. If you are interested in helping to build or have suggestions for an improved product, check out the README on our GitHub page.

Air MozillaMozilla Curriculum Workshop, Spring 2017

Mozilla Curriculum Workshop, Spring 2017 Mozilla Curriculum Workshop, Spring 2017 Join us on Thursday, May 11th, 2017, at 10 AM ET, to talk about teaching and learning in response to...

Daniel StenbergEverything curl – printed!

TLDR: fill in your info in this form if you want to buy a print copy!

Long time curl friend and contributor Dan Fandrich printed a (very limited) first edition of Everything curl on real actual dead-tree paper a while ago. Getting this rather heavy thing in your hand is actually an awesome feeling and quite different to just reading it on a screen!

However, those few initial copies were quickly given away to interested readers and there are none left now.

We are now investigating if there is still interest from people in getting one of these physical, hard copy versions of the book. The price is likely to be about 20 Euros including international shipping. The first edition of the book is a 232-page, professionally printed and bound softcover book. The second edition is planned to be very similar.

The content of the first edition book was picked from the book’s git repository in March 2017 and is not the intended final version of the book. Who knows if there will ever be a final version. There are ‘tbd’ markers in many places in the book where additional content is meant to be added in the future.

If you want your own copy of the book and are willing to pay around 20 Euros for one, please fill in your contact information in this Google form, and if we get enough proof of interest we might get a second edition printed.

You buy this book because you want a physical version of it. All the content is already available for free online, in PDF version and in two e-book formats. The money charged for the book will not go to the curl project but is for printing and shipping.

Firefox NightlyThese Weeks in Firefox: Issue 16

Highlights

  • about:addons now has a legacy tag to show when an extension is not a WebExtension and addons.mozilla.org now specifically tags WebExtensions as compatible on Firefox 57!
  • The Activity Stream Test Pilot now shows recommended stories from Pocket, and you can try the first bits of the Activity Stream integration in Firefox on Nightly by switching the pref browser.newtabpage.activity-stream.enabled

The new about:newtab featuring Activity Stream!

Friends of the Firefox team

(Give a shoutout/thanks to people for helping fix and test bugs. Introductions)

Project Updates

Add-ons

Electrolysis (e10s)

  • gabor turned on the pre-allocated process manager, which should improve the perceived performance of opening tabs and windows in new processes
  • mconley is currently fixing a regression in the tab switch spinner metric
  • e10s-multi A/B test is currently underway on Beta
  • e10s-a11y support still targeted for Firefox 55

Firefox Core Engineering

  • Shield Study defaulting Flash to click-to-play (Plugin Safety) has begun on Release 53 and will run through June 15.

Form Autofill

Mobile

Photon

Performance
  • We are working on deterministic perf tests for sync reflows and files loaded too early during startup
    • Sync layout and style flush tests will be in browser/base/content/test/performance. When these become available, make sure to run these when you add Photon-y things!
  • Expect big patches to land to stop using Task.jsm in browser/ and toolkit/, and stop using the non-standard Promise.defer from promise.jsm
    • If you have WIP patches for these folders, consider landing them soon to avoid bitrot.
Animation
Visuals
Onboarding
  • Fischer, Rex, and Fred report that the team has been iterating on the overlay prototype for the onboarding experience
    • The team is in discussions with the Activity Stream team to figure out how the overlay will integrate with the Activity Stream page
    • The team is also sorting out integration with Firefox Account log-in, and Automigration
    • The onboarding overlay experience is currently being developed as a system add-on
Preferences
  • timdream reports that the team has almost finished scoping out the work for this project, and that this wiki page is a great way to track the team as they work
  • Please file about:preferences bugs! Preferably blocking the right meta bugs.

Privacy/Security

  • jkt is looking into rewriting containers to use WebExtensions instead of the Addon SDK
  • Our three Outreachy interns for May 30 – August 30 were announced last week:

Project Mortar (PDFium)

Search

Sync / Firefox Accounts

Test Pilot

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Andrzej HuntTracking Protection for Android’s WebView

Unlike iOS (really just Safari), Android has no content blocking API. Tracking protection is available in some browsers, e.g. Firefox in combination with addons (and also in Firefox’s private browsing which includes tracking protection enabled by default). For fun, we decided to look into whether it’s possible to provide Tracking Protection when using Android’s default WebView implementation. This blog post describes how that was done, and explores some of the implementation details of our URL matching algorithm.

It turns out that Firefox Focus on iOS also had to build its own URL matching implementation: iOS content blocking is currently only available in Safari, and not in the iOS WebView equivalent. That implementation was influenced by the design of iOS’s content blocking APIs and file formats, but when you’re not subject to that restriction it’s possible to build a faster approach, so my ignorance of that version wasn’t necessarily a bad thing, as I’ll describe later in this post.

Why would you want to do this? One reason is that browser engines are large – and we wanted to see whether it’s possible to build a privacy-focused browser whose size measures in megabytes instead of tens of megabytes – which would require reusing whatever engine the platform provides (in the case of iOS you actually have no choice in the matter; fortunately Android is a little more free). There are actually some drawbacks to using platform-provided browser engines – which will be the topic of a future post – but it’s certainly possible to implement tracking protection on top of Android’s WebView.

Tracking Protection Lists

Firefox and Focus use the Disconnect tracking protection lists: these are lists of domains hosting trackers that should be blocked, categorised by tracker type, e.g. Social trackers, Analytics Trackers, Advertising Trackers, etc. Further to this there’s an override “entity” list, which unblocks domains that are owned by a given company whenever you are browsing a site owned by that company. (E.g. if FooBar Tracker Corp owns both foo.com and bar.com, we would allow loading of resources from bar.com while browsing foo.com, even though we’d block all other sites from loading resources from foo.com and bar.com.) You can read more about these lists at the repo where the Mozilla copies of these lists are maintained.

As such, tracking protection is fairly simple: every time a given webpage requests a resource, we match the resource URL’s host against the blocklist. If it’s blocked, we check the entitylist to verify whether there’s an override in place for the current site. Android’s WebView provides a callback that is called every time it wants to load a resource, allowing you to override resource loading.
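
To make that concrete, here is a minimal sketch of what such a WebViewClient override could look like. The UrlMatcher type and its isBlocked() method are hypothetical stand-ins for the blocklist/entitylist logic described in the rest of this post, not the actual Focus code:

import java.io.ByteArrayInputStream;
import android.graphics.Bitmap;
import android.webkit.WebResourceRequest;
import android.webkit.WebResourceResponse;
import android.webkit.WebView;
import android.webkit.WebViewClient;

// Hypothetical matcher wrapping the blocklist and entitylist checks.
interface UrlMatcher {
    boolean isBlocked(String resourceUrl, String pageUrl);
}

class TrackingProtectionWebViewClient extends WebViewClient {
    private final UrlMatcher matcher;
    private volatile String currentPageUrl;

    TrackingProtectionWebViewClient(UrlMatcher matcher) {
        this.matcher = matcher;
    }

    @Override
    public void onPageStarted(WebView view, String url, Bitmap favicon) {
        // Remember which page is loading: shouldInterceptRequest() runs on a
        // background thread, so we shouldn't query the WebView from there.
        currentPageUrl = url;
    }

    @Override // this variant of the callback is available on API 21 and later
    public WebResourceResponse shouldInterceptRequest(WebView view, WebResourceRequest request) {
        final String pageUrl = currentPageUrl;
        final String resourceUrl = request.getUrl().toString();
        if (pageUrl != null && matcher.isBlocked(resourceUrl, pageUrl)) {
            // Returning an empty response cancels this resource load.
            return new WebResourceResponse("text/plain", "utf-8",
                    new ByteArrayInputStream(new byte[0]));
        }
        // Returning null lets the WebView load the resource as usual.
        return null;
    }
}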

The iOS content blocking API actually allows for regex-based matching on the entire resource URL, which is more complex than what we needed for basic tracking protection. The Disconnect lists only work using domains/hosts, which simplifies the implementation somewhat. Focus on iOS originally only supported the content blocking API, and added the browser later – the browser implementation therefore simply reused the same bundled list format. The content blocking lists aren’t used for iOS’s WebView equivalent, although that is apparently changing.

Implementing URL matching

The simple (but not particularly efficient) method would be to iterate over the list of hosts every time a resource is fetched. In fact, we could just iterate over the regexes in the iOS content blocking lists, and check those directly to avoid implementing our own matching.
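
In code, the naive approach amounts to something like this (a sketch for comparison only, not the shipped implementation):

import java.util.List;

// Naive baseline: compare the resource host against every blocked host.
// O(nh) per resource load, for n blocked hosts of length roughly h.
final class NaiveMatcher {
    static boolean isBlocked(String resourceHost, List<String> blockedHosts) {
        for (String blocked : blockedHosts) {
            // Match the blocked domain itself and any of its subdomains.
            if (resourceHost.equals(blocked) || resourceHost.endsWith("." + blocked)) {
                return true;
            }
        }
        return false;
    }
}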

The original Android implementation was actually a rushed afternoon (or two) hacky proof of concept from our December All Hands – it turned out to be robust and fast enough, so it was kept beyond that time. It might be possible to build an even faster implementation, but this one hasn’t provoked any user complaints yet.

As mentioned, iterating over the list of blocked hosts is expensive: O(nh), for n = number of blocked hosts (very large) and h = host length (small). Fortunately at some point or another I had learned about Tries (contrary to what some might assume, an Information and Computer Engineering degree at my alma mater doesn’t actually involve any Data Structures and Algorithms – but that’s nothing a little independent study can’t quickly fix).

Those offer much smaller memory consumption (not that memory consumption is particularly significant compared to what a web engine will need), and much faster lookup [O(h)]:

A trie containing multiple domains.

(In reality, the Trie possibly consumes more memory because of the overhead of each node being an object. More efficient representations are available in order to avoid one node per character, but that didn’t seem worthwhile given that this implementation is already performant enough.)
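
Here is a simplified sketch of such a Trie: domains go in back to front, and the full stop is treated as a boundary so that bar.com matches foo.bar.com but not foobar.com (more on that subtlety further down). This is illustrative rather than the actual Focus code; the real version works with the reversed-string wrapper described below instead of plain Strings:

import java.util.HashMap;
import java.util.Map;

// Hostname Trie over reversed domains: "bar.com" and "ads.bar.com"
// share the "moc.rab" prefix.
class HostTrie {
    private final Map<Character, HostTrie> children = new HashMap<>();
    private boolean terminal; // a blocked domain ends at this node

    void put(String host) {
        HostTrie node = this;
        for (int i = host.length() - 1; i >= 0; i--) {
            node = node.children.computeIfAbsent(host.charAt(i), c -> new HostTrie());
        }
        node.terminal = true;
    }

    // True if `host` itself, or one of its parent domains, is blocked.
    boolean matches(String host) {
        HostTrie node = this;
        for (int i = host.length() - 1; i >= 0; i--) {
            if (node.terminal && host.charAt(i) == '.') {
                // A whole blocked domain was consumed and we hit the
                // separator: "bar.com" matches "foo.bar.com" here.
                return true;
            }
            node = node.children.get(host.charAt(i));
            if (node == null) {
                // Diverged from every blocked domain: this is where
                // "foobar.com" bails out when only "bar.com" is blocked.
                return false;
            }
        }
        // Reached only when the whole host was consumed (exact match).
        return node.terminal;
    }
}

With bar.com in the Trie, matches("foo.bar.com") returns true while matches("foobar.com") returns false.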

There’s still a bunch of overhead in various places: we’re using the Android/Java URL classes to extract the hostname from the resource URL, which could well be more costly than the actual act of searching the tree. I haven’t measured in detail yet.

(Building this completed the bi-yearly cycle of proper Data Structures and Algorithms construction – I’d last been able to build some trees for a bookmarks folder UI the preceding summer.)

As mentioned above, there’s also the entitylist: this consists of sets of hosts (A), for which another set of hosts (B) is whitelisted (usually those sets would be the same, but that isn’t guaranteed or necessary). This is simply an extension of the same tree: the set of whitelisted domains (B) is another Trie. That Trie is then attached to every node representing one of the site domains (A) – we simply extend the default Node to have a WhitelistNode, which has a reference to the whitelisted-domains Trie.
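
As a rough sketch of how that override composes with the blocklist check (simplified: here the page host is looked up directly instead of via a second Trie walk, and the names are illustrative rather than the real ones):

import java.util.HashMap;
import java.util.Map;

// Entity-list check layered on the HostTrie sketch above (simplified).
class TrackingProtectionMatcher {
    private final HostTrie blocklist = new HostTrie();
    // site host -> Trie of hosts allowed to load while browsing that site
    private final Map<String, HostTrie> entityList = new HashMap<>();

    void block(String trackerHost) {
        blocklist.put(trackerHost);
    }

    void allowOn(String siteHost, String allowedHost) {
        entityList.computeIfAbsent(siteHost, k -> new HostTrie()).put(allowedHost);
    }

    boolean isBlocked(String pageHost, String resourceHost) {
        if (!blocklist.matches(resourceHost)) {
            return false; // not a known tracker
        }
        // Blocked unless the site's owner has whitelisted the resource host.
        HostTrie whitelist = entityList.get(pageHost);
        return whitelist == null || !whitelist.matches(resourceHost);
    }
}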

Every real project needs its own String implementation

Searching and inserting into our hostname tries involves walking strings backwards. That would require either some annoying index arithmetic, or reversing the String before insertion/search (i.e. creating a copy of the String). Neither of those sounded like fun, so I decided to add a String wrapper. This is arguably completely unnecessary, but made things a little simpler (and perhaps more efficient). The String wrapper also meant that the Trie implementation didn’t need to have much knowledge about subdomains either; we can just start at the start of our reversed String. (Because we need to correctly match subdomains, but not other domains, the Trie still needs to be aware of the full stop being used for domain separation, so it isn’t completely domain agnostic.)

We only need to access the String character by character, which is why we can avoid a complete string copy/reversal – if this weren’t the case, there would be little value in a wrapper.

The wrapper takes care of index arithmetic for reversed strings – and implements support for getChar(int) and substring(int). That’s pretty much all there was to FocusString. (I no longer need to miss the amazing days of many C++ string classes…)

substring() copies…

Somewhat naively, I’d assumed that our Java implementation doesn’t create a copy when calling String.substring() – in other words that it would just adjust internal indexes while reusing the same String buffer and/or equivalent behaviour. Without that assumption, there would be little point in avoiding a String copy on reversal, since – thanks to our recursive Trie traversal – we’d be creating copies when traversing that Trie.

It turns out that assumption was wrong: it was true for Java 6, and also for earlier versions of Java 7 – before changing in Java 7u6. I don’t really know where Android’s implementation originates, but it also creates copies. Thus, FocusString was expanded to include offsets, and FocusString.substring() merely fiddles those offsets.
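
A minimal sketch of that idea (illustrative names and details; the real FocusString does more):

// Reversed view of a String: logical index 0 is the last character of the
// wrapped String, and substring() only adjusts an offset – no copying.
final class ReversedString {
    private final String raw;
    private final int offset; // number of logical leading characters dropped

    ReversedString(String raw) {
        this(raw, 0);
    }

    private ReversedString(String raw, int offset) {
        this.raw = raw;
        this.offset = offset;
    }

    int length() {
        return raw.length() - offset;
    }

    char getChar(int index) {
        // Translate the logical (reversed) index back into the raw String.
        return raw.charAt(raw.length() - 1 - offset - index);
    }

    ReversedString substring(int count) {
        // "Drop" the first count logical characters by bumping the offset.
        return new ReversedString(raw, offset + count);
    }
}

For example, new ReversedString("foo.bar.com").getChar(0) is 'm', and substring(4).getChar(0) is 'r' – all without allocating a new character buffer.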

It was hard to predict what the impact of this change might be in advance, since I didn’t have much experience in this area – I discovered that it was actually a noticeable improvement: on my fairly modern Nexus 6P, average URL matching time dropped by about 20% – from approximately 1.2ms to 1.0ms (these numbers are for debug builds with code coverage enabled; for coverage-free debug builds the drop is from 0.42ms to 0.26ms, which is even more significant). We already had tests in place which helped verify that things wouldn’t break, so this was a fairly low risk change (I did use this as an opportunity to extend those tests though).

Results

As mentioned above, the iOS equivalent implementation is a lot simpler. It iterates over the lists of hosts, and does regex matching for each host. I decided to port that implementation to Android, primarily to check for consistency of results. Fortunately the Trie-based implementation was mostly correct, except for our subdomain matching. Both bar.com and foo.bar.com should be blocked if bar.com is in the blocklist. My Trie-based implementation also blocked foobar.com. Oops. That was a quick fix, albeit one which required making the Trie search implementation hostname aware. Other than that, results have been the same in our testing.

These parallel implementations allowed for performance comparisons. (Note: the underlying regex and other library implementations on each platform might be different, so the gap could look quite different if both algorithms were running on an iPhone.) On my N6P, the Trie-based implementation took an average of 0.3ms per resource URL check, while the ported iterative/regex approach took 42ms. Some pages like to load a lot of resources – so that’s a difference you’d notice quickly. It’s possible that my ported implementation was suboptimal, but it’s certainly clear that the Trie-based approach was worth it from a performance perspective.

To be fair, this implementation did take more work – and you have to remember that the iOS implementation was influenced by the blocklist file format that iOS uses for its tracking protection API, whereas the Android version was a clean-sheet design.

Edits:

Trie Diagram corrected on 10th May 2017, thank you to Gervase Markham for spotting the mistake.

Air MozillaWhen Surveillance Goes Private: A 2027 Retrospective from Adrian Hon

When Surveillance Goes Private: A 2027 Retrospective from Adrian Hon In our May 10 “future retrospective,” we'll look at how we - in 2027 - became so collectively compliant to others owning data about our...

The Mozilla BlogMozilla Awards Nearly $300,000 to Research Grant Winners

We’re happy to announce the results of the Mozilla Research Grant program for the first half of 2017. This was a competitive process, and after three rounds of judging, we settled on funding nine proposals in five countries for a total of $299,444. These projects support Mozilla’s mission to make the internet safer, more empowering and more accessible.

The Mozilla Research Grants program is part of Mozilla’s Emerging Technologies charter to explore the future of the open internet, and reflects Mozilla’s commitment to open innovation, as well as accelerating our own research. As such, these grants include research supporting our core projects like Firefox and Rust, as well as exploring new domains for the future of the Internet.

  • Cosmin Munteanu (University of Toronto Mississauga, Communication, Culture, Information, and Technology): A safer Internet for the socially-isolated and digitally-marginalized older adults
  • Eelke Folmer (University of Nevada, Reno, Computer Science and Engineering): Understanding Gender Differences in VR Locomotion Interfaces
  • Ethan Hanner (University of Colorado Boulder, Computer Science): Understanding Perceptions of Ethics in Hacktivism
  • J. Shane Culpepper (RMIT University, School of Science (Computer Science)): Efficient and Effective Multi-Stage Retrieval in Rust
  • James Clawson (Indiana University Bloomington, School of Informatics and Computing): Designing aurally distinct audio corpora for use in eyes-free text entry evaluations.
  • Karen Louise Smith (Brock University, Communication, Popular Culture & Film): Add-ons for Privacy: Open Source Advocacy Tactics for Internet Health
  • Kenneth Heafield (University of Edinburgh, School of Informatics): Open Data: Mining Translations and Transcripts from the Web
  • Louise Barkhuus (The IT University of Copenhagen, Department of Digital Design): Understanding and encouraging grade school girls’ interest in computer programming
  • Taesoo Kim (Georgia Tech, Computer Science): Designing New Operating Primitives to Improve Fuzzing Performance

Congratulations to all successfully funded applicants! The 2017H2 round of grant proposals will open in early August and be due September 1st.

Sean White, Senior Vice President, Emerging Technologies, Mozilla
Jofish Kaye, Principal Research Scientist, Emerging Technologies, Mozilla

The post Mozilla Awards Nearly $300,000 to Research Grant Winners appeared first on The Mozilla Blog.

Mozilla Addons BlogIncompatible change to sessions.restore API in Firefox 54

The add-on compatibility update for Firefox 54 was published a while back, but a backward-incompatible change to the sessions.restore WebExtensions API was uplifted to 54, currently in Beta and set to be released on June 13th.

sessions.restore now returns an object instead of an array. With this change, the API now matches the spec and its behavior in Google Chrome. If you use this API in your WebExtension, this bug report has all the details.

The post Incompatible change to sessions.restore API in Firefox 54 appeared first on Mozilla Add-ons Blog.

Air MozillaDAP Web Literacy Virtual Workshop

DAP Web Literacy Virtual Workshop An introductory web literacy workshop for participants in the Digital Ambassador Program.

Daniel StenbergImproving timers in libcurl

A few years ago I explained the timer and timeout concepts of the libcurl internals. A decent prerequisite for this post really.

Today, I introduced “timer IDs” internally in libcurl so that all expiring timers we use have to specify which timer it is with an ID, and we only have a set number of IDs to select from. Turns out there are only about 10 of them. 10 different ones. Per easy handle.

With this change, we now only allow one running timer for each ID, which then makes it possible for us to change timers during execution so that they never “fire” in vain like they used to do (since we couldn’t stop or remove them before they expired previously). This change makes event loops slightly more efficient since now they will end up getting much fewer “spurious” timeouts that happen only because we had no infrastructure to prevent them.

Another benefit of only keeping one single timer for each ID in the list of timers is that the dynamic list of running timers immediately becomes much shorter. This is because many times the same timer ID was used again, and we would then add a new node to the list so the timer that had one purpose would expire twice (or more). But now it no longer works like that. In some typical uses I’ve tested, I’ve seen the list shrink from a maximum of 7-8 nodes down to a maximum of 1 or 2.

Finally, since we now have a finite number of timers that can be set at any given time and we know the maximum amount is fairly small, I could make the timer code completely skip using dynamic memory. Allocating 10 tiny structs as part of the main handle allocation is more efficient than doing tiny mallocs() for them one by one. In a basic comparison test I’ve run, this reduced the total number of allocations from 82 to 72 for “curl localhost”.

This change will be included in the pending curl release targeted to ship on June 14th 2017.  Possibly called version 7.54.1.

Those are all in a tree

As explained previously: the above explanation of timers goes for the set of timers kept for each individual easy handle. With libcurl you can add an unlimited amount of easy handles to a multi handle (to perform lots of transfers in parallel), and the multi handle then has a self-balanced splay tree with the nearest-in-time timer for each individual easy handle as nodes in the tree, so that it can quickly and easily figure out which handle needs attention next and when in time that is.

The illustration below shows a captured imaginary moment in time when there are five easy handles in different colors, all doing their own separate transfers. Each easy handle has three private timers set. The tree contains five nodes and the root of the tree is the node representing the easy handle that needs to be taken care of next (in time). It also means we immediately know exactly how much time there is left until libcurl needs to act next.

Expiry

As soon as “time N” occurs or expires, libcurl takes care of what the yellow handle needs to do and then removes that timer node from the tree. The yellow handle then advances its next timer to first in line, and the tree gets re-adjusted accordingly so that the new yellow first node gets re-inserted at the right place in the tree.

In our imaginary case here, the new yellow time N (formerly known as N + 1) is now later in time than L, but before M. L is now nearest in time and the tree has now adjusted to look something like this:

Since the tree only really cares about the root timer for each handle, you also see how adding a new timeout to a single easy handle that isn’t the next in time is a really quick operation. It just adds a node in a linked list – per that specific handle. The linked list now has a maximum length that is capped at the total number of different timers: 10.

Straight-forward!

Marcia KnousLiving in the World of Nightly - Nightly Workshop recap

This weekend we gathered a small group of Mozillians to work on activities related to Nightly. You can see the goals we intended to accomplish on our wiki. Our event was held along with the l10n team's Workshop, which was a great mix of different communities who were focused on improving the localization of Firefox in their respective languages. The Nightly group worked on the first floor, but we shared meals with the rest of the participants, which was a great way for all of the various communities to meet each other.
Pascal and Arnaud exploring the Mozregression tool.
Here are some of the things we accomplished during the course of the weekend:
MozActivate: Sunday we spent almost the entire day brainstorming a MozActivate activity that we could build a template around. Flore had an event in Lyon, and we used some of the feedback from her event and built that into the template design. I am happy to say that thanks to our hard work we actually came up with a good template for an activity - we are currently getting some feedback and hope to have an activity on the site shortly. Even better, the template can be used for other events related to Nightly, especially when we need to have specific features tested as we continue work on Project Quantum.
Flore and Christophe during the Nightly Workshop in Paris
Other areas we covered during the course of the weekend:
Installing Nightly - We advertised on the Telegram channel and invited participants to come down and get Nightly installed on their laptops.
Triaging Bugs - We did a little bit of group triage, looking at some of the latest UNCO bugs and trying to bucket them in the correct component.
MozRegression - We showed the participants how to set up MozRegression and use it to find a regression range.
Tracking Flags - Marcia talked about how to mark bugs so that they get the attention they need, and the importance of this as we work on Project Quantum and Photon.
Project Dawn - Axel and Marcia gave a short presentation to the entire group explaining Project Dawn.
Nightly Community - Pascal shared his presentation about Nightly to the entire workshop group, giving participants an inside view of his work on Nightly and how the community can help get involved and promote Nightly.
You can find more pictures of the event here, and the entire group photo here. Overall it was a great event, with lots of interaction and lots of discussion.

Hacks.Mozilla.OrgQuantum Up Close: What is a browser engine?

In October of last year Mozilla announced Project Quantum – our initiative to create a next-generation browser engine. We’re well underway on the project now. We actually shipped our first significant piece of Quantum just last month with Firefox 53.

But, we realize that for people who don’t build web browsers (and that’s most people!), it can be hard to see just why some of the changes we’re making to Firefox are so significant. After all, many of the changes that we’re making will be invisible to users.

With this in mind, we’re kicking off a series of blog posts to provide a deeper look at just what it is we’re doing with Project Quantum. We hope that this series of posts will give you a better understanding of how Firefox works, and the ways in which Firefox is building a next-generation browser engine made to take better advantage of modern computer hardware.

To begin this series of posts, we think it’s best to start by explaining the fundamental thing Quantum is changing.

What is a browser engine, and how does one work?


If we’re going to start from somewhere, we should start from the beginning.

A web browser is a piece of software that loads files (usually from a remote server) and displays them locally, allowing for user interaction.

Quantum is the code name for a project we’ve undertaken at Mozilla to massively upgrade the part of Firefox that figures out what to display to users based on those remote files. The industry term for that part is “browser engine”, and without one, you would just be reading code instead of actually seeing a website. Firefox’s browser engine is called Gecko.

It’s pretty easy to see the browser engine as a single black box, sort of like a TV: data goes in, and the black box figures out what to display on the screen to represent that data. The question today is: How? What are the steps that turn data into the web pages we see?

The data that makes up a web page consists of many things, but it mostly breaks down into 3 parts:

  • code that represents the structure of a web page
  • code that provides style: the visual appearance of the structure
  • code that acts as a script of actions for the browser to take: computing, reacting to user actions, and modifying the structure and style beyond what was loaded initially

The browser engine combines structure and style together to draw the web page on your screen, and figure out which bits of it are interactive.

It all starts with structure. When a browser is asked to load a website, it’s given an address. At this address is another computer which, when contacted, will send data back to the browser. The particulars of how that happens are a whole separate article in themselves, but at the end the browser has the data. This data is sent back in a format called HTML, and it describes the structure of the web page. How does a browser understand HTML?

Browser engines contain special pieces of code called parsers that convert data from one format into another that the browser holds in its memory [1]. The HTML parser takes the HTML, something like:

<section>
 <h1 class="main-title">Hello!</h1>
 <img src="http://example.com/image.png">
</section>

And parses it, understanding:

Okay, there’s a section. Inside the section is a heading of level 1, which itself contains the text: “Hello!” Also inside the section is an image. I can find the image data at the location: http://example.com/image.png

The in-memory structure of the web page is called the Document Object Model, or DOM. As opposed to a long piece of text, the DOM represents a tree of elements of the final web page: the properties of the individual elements, and which elements are inside other elements.

A diagram showing the nesting of HTML elements

In addition to describing the structure of the page, the HTML also includes addresses where styles and scripts can be found. When the browser finds these, it contacts those addresses and loads their data. That data is then fed to other parsers that specialize in those data formats. If scripts are found, they can modify the page structure and style before the file is finished being parsed. The style format, CSS, plays the next role in our browser engine.

With Style

CSS is a programming language that lets developers describe the appearance of particular elements on a page. CSS stands for “Cascading Style Sheets”, so named because it allows for multiple sets of style instructions, where instructions can override earlier or more general instructions (called the cascade). A bit of CSS could look like the following:

section {
  font-size: 15px;
  color: #333;
  border: 1px solid blue;
}
h1 {
  font-size: 2em;
}
.main-title {
  font-size: 3em; 
}
img {
  width: 100%;
}

CSS is largely broken up into groupings called rules, which themselves consist of two parts. The first part is the selector, which describes the elements of the DOM (remember those from above?) being styled. The second part is a list of declarations that specify the styles to be applied to elements that match the selector. The browser engine contains a subsystem called a style engine whose job it is to take the CSS code and apply it to the DOM that was created by the HTML parser.

For example, in the above CSS, we have a rule that targets the selector “section”, which will match any element in the DOM with that name. Style annotations are then made for each element in the DOM. Eventually each element in the DOM is finished being styled, and we call this state the computed style for that element. When multiple competing styles are applied to the same element, those which come later or are more specific win. Think of stylesheets as layers of thin tracing paper: each layer can cover the previous layers, but also let them show through.

Once the browser engine has computed styles, it’s time to put it to use! The DOM and the computed styles are fed into a layout engine that takes into account the size of the window being drawn into. The layout engine uses various algorithms to take each element and draw a box that will hold its content and take into account all the styles applied to it.

When layout is complete, it’s time to turn the blueprint of the page into the part you see. This process is known as painting, and it is the final combination of all the previous steps. Every box that was defined by layout gets drawn, full of the content from the DOM and with styles from the CSS. The user now sees the page, reconstituted from the code that defines it.

That used to be all that happened!

When the user scrolled the page, we would re-paint, to show the new parts of the page that were previously outside the window. It turns out, however, that users love to scroll! The browser engine can be fairly certain it will be asked to show content outside of the initial window it draws (called the viewport). More modern browsers take advantage of this fact and paint more of the web page than is visible initially. When the user scrolls, the parts of the page they want to see are already drawn and ready. As a result, scrolling can be faster and smoother. This technique is the basis of compositing, which is a term for techniques to reduce the amount of painting required.

Additionally, sometimes we need to redraw parts of the screen. Maybe the user is watching a video that plays at 60 frames per second. Or maybe there’s a slideshow or animated list on the page. Browsers can detect that parts of the page will move or update, and instead of re-painting the whole page, they create a layer to hold that content. A page can be made of many layers that overlap one another. A layer can change position, scroll, transparency, or move behind or in front of other layers without having to re-paint anything! Pretty convenient.

Sometimes a script or an animation changes an element’s style. When this occurs, the style engine needs to re-compute the element’s style (and potentially the style of many more elements on the page), recalculate the layout (do a reflow), and re-paint the page. This takes a lot of time as computer-speed things go, but so long as it only happens occasionally, the process won’t negatively affect a user’s experience.

In modern web applications, the structure of the document itself is frequently changed by scripts. This can require the entire rendering process to start more-or-less from scratch, with HTML being parsed into DOM, style calculation, reflow, and paint.

Standards

Not every browser interprets HTML, CSS, and JavaScript the same way. The effect can vary: from small visual differences all the way to the occasional website that works in one browser and not at all in another. These days, on the modern Web, most websites seem to work regardless of which browser you choose. How do browsers achieve this level of consistency?

The formats of website code, as well as the rules that govern how the code is interpreted and turned into an interactive visual page, are defined by mutually-agreed-upon documents called standards. These documents are developed by committees consisting of representatives from browser makers, web developers, designers, and other members of industry. Together they determine the precise behavior a browser engine should exhibit given a specific piece of code. There are standards for HTML, CSS, and JavaScript as well as the data formats of images, video, audio, and more.

Why is this important? It’s possible to make a whole new browser engine and, so long as you make sure that your engine follows the standards, the engine will draw web pages in a way that matches all the other browsers, for all the billions of web pages on the Web. This means that the “secret sauce” of making websites work isn’t a secret that belongs to any one browser. Standards allow users to choose the browser that meets their needs.

Moore’s No More

When dinosaurs roamed the earth and people only had desktop computers, it was a relatively safe assumption that computers would only get faster and more powerful. This idea was based on Moore’s Law, an observation that the density of components (and thus miniaturization/efficiency of silicon chips) would double roughly every two years. Incredibly, this observation held true well into the 21st century and, some would argue, still holds true at the cutting edge of research today. So why is it that the speed of the average computer seems to have leveled off in the last 10 years?

Speed is not the only feature customers look for when shopping for a computer. Fast computers can be very power-hungry, very hot, and very expensive. Sometimes, people want a portable computer that has good battery life. Sometimes, they want a tiny touch-screen computer with a camera that fits in their pocket and lasts all day without a charge! Advances in computing have made that possible (which is amazing!), but at the cost of raw speed. Just as it’s not efficient (or safe) to drive your car as fast as possible, it’s not efficient to drive your computer as fast as possible. The solution has been to have multiple “computers” (cores) in one CPU chip. It’s not uncommon to see smartphones with 4 smaller, less powerful cores.

Unfortunately, the historical design of the web browser kind-of assumed this upward trajectory in speed. Also, writing code that’s good at using multiple CPU cores at the same time can be extremely complicated. So, how do we make a fast, efficient browser in the era of lots of small computers?

We have some ideas!

In the upcoming months, we’ll take a closer look at some of changes coming to Firefox and how they will take better advantage of modern hardware to deliver a faster and more stable browser that makes websites shine.

Onward!

[1]: Your brain can do things that are like parsing: the word “eight” is a bunch of letters that spell a word, but you convert them to the number 8 in your head, not the letters e-i-g-h-t.

Air MozillaMartes Mozilleros, 09 May 2017

Martes Mozilleros Reunión bi-semanal para hablar sobre el estado de Mozilla, la comunidad y sus proyectos. Bi-weekly meeting to talk (in Spanish) about Mozilla status, community and...

Gervase MarkhamThunderbird’s Future Home Decided

Here’s the announcement. Rather than moving to live somewhere else like The Document Foundation or the Software Freedom Conservancy, Thunderbird will stay with the Mozilla Foundation as its fiscal home, but will disentangle itself from Mozilla Corporation infrastructure. As someone who has been helping steward this exploration process, I’m glad to see it come to a successful outcome.

Also in the world of Thunderbird, the community is discussing the future of the product, in the face of significant upcoming changes to the Gecko platform. On the table is a “Thunderbird++” rewrite/transformation using web technologies. Interesting times…

Mozilla ThunderbirdThunderbird’s Future Home

Summary

The investigations on Thunderbird’s future home have concluded. The Mozilla Foundation has agreed to serve as the legal and fiscal home for the Thunderbird project, but Thunderbird will migrate off Mozilla Corporation infrastructure, separating the operational aspects of the project.

Background

In late 2015 Mitchell Baker started a discussion on the future of Thunderbird, and later blogged about the outcome of that, including this quote:

I’ve seen some characterize this as Mozilla “dropping” Thunderbird. This is not accurate. We are going to disentangle the technical infrastructure. We are going to assist the Thunderbird community. This includes working with organizations that want to invest in Thunderbird, several of which have stepped forward already. Mozilla Foundation will serve as a fiscal sponsor for Thunderbird donations during this time.

To investigate potential new homes for Thunderbird, Mozilla commissioned a report from Simon Phipps, former president of the OSI.

The Last Year’s Investigations

The Phipps report saw three viable choices for the Thunderbird Project’s future home: the Software Freedom Conservancy (SFC), The Document Foundation (TDF) and a new deal at the Mozilla Foundation. An independent “Thunderbird Foundation” alternative was not recommended as a first step but the report said it “may become appropriate in the future for Thunderbird to separate from its new host and become a full independent entity”.

Since then the Thunderbird Council, the governing body for the Thunderbird project, has worked to determine the most appropriate long term financial and organizational home, using the Phipps report as a starting point. Over the past year, the Council has thoroughly discussed the needs of a future Thunderbird team, and focused on investigating the non-Mozilla organizations as a potential future home. Many meetings and conversations were held with organizations such as TDF and SFC to determine their suitability as potential homes, or models to build on.

In parallel, Thunderbird worked to develop a revenue stream, which would be needed regardless of an eventual home. So the Thunderbird Council arranged to collect donations from our users, with the Mozilla Foundation as fiscal sponsor. Many months of donations have developed a strong revenue stream that has given us the confidence to begin moving away from Mozilla-hosted infrastructure, and to hire a staff to support this process. Our infrastructure is moving to thunderbird.net and we’re already running some Thunderbird-only services, like the ISPDB (used for auto configuring users’ email accounts), on our own.

Legally our existence is still under the Mozilla Foundation through their ownership of the trademark, and their control of the update path and websites that we use. This arrangement has been working well from Thunderbird’s point of view. But there are still pain points – build/release, localization, and divergent plans with respect to add-ons, to name a few. These are pain points for both Thunderbird and Firefox, and we obviously want them resolved. However, the Council feels these pain points would not be addressed by moving to TDF or SFC.

Thus, much has changed since 2015 – we were able to establish a financial home at the Mozilla Foundation, we are successfully collecting donations from our users, and the first steps of migrating infrastructure have been taken. We started questioning the usefulness of moving elsewhere, organizationally. While Mozilla wants to be laser-focused on the success of Firefox, in recent discussions it was clear that they continue to have a strong desire to see Thunderbird succeed. In many ways, there is more need for independent and secure email than ever. As long as Thunderbird doesn’t slow down the progress of Firefox, there seem to be no significant obstacles to continued co-existence.

We have come to the conclusion that a move to a non-Mozilla organization will be a major distraction to addressing technical issues and building a strong Thunderbird team. Also, while we hope to be independent from Gecko in the long term, it is in Thunderbird’s interest to remain as close to Mozilla as possible, in the hope that it gives us better access to people who can help us plan for and sort through Gecko-driven incompatibilities.

We’d like to emphasize that all organizations we were in contact with were extremely welcoming and great to work with. The decision we have made should not reflect negatively on these organizations and we would like to thank them for their support during our orientation phase.

What’s Next

The Mozilla Foundation has agreed to continue as Thunderbird’s legal, fiscal and cultural home, with the following provisos:

  1. The Thunderbird Council and the Mozilla Foundation executive team maintain a good working relationship and make decisions in a timely manner.
  2. The Thunderbird Council and the team make meaningful progress in short order on operational and technical independence from Mozilla Corporation.
  3. Either side may give the other six months notice if they wish to discontinue the Mozilla Foundation’s role as the legal and fiscal host of the Thunderbird project.

Mozilla would invoke proviso 3 if provisos 1 and 2 don’t happen. If proviso 3 happened, Thunderbird would be expected to move to another organization over the course of six months.

From an operational perspective, Thunderbird needs to act independently. The Council will be managing all operations and infrastructure required to serve over 25 million users and the community surrounding it. This will require a certain amount of working capital and the ability to make strong decisions. The Mozilla Foundation will work with the Thunderbird Council to ensure that operational decisions can be made without substantial barriers.

If it becomes necessary for operational success, the Thunderbird Council will register a separate legal organization. The new organization would run certain aspects of Thunderbird’s operations, gradually increasing in capacity. Donor funds would be allocated to support the new organization. The relationship with Mozilla would be contractual, for example permission to use the trademark.

A Bright Future

The Thunderbird Council is optimistic about the future. With the organizational question settled, we can focus on the technical challenges ahead. Thunderbird will remain a Gecko-based application at least in the midterm, but many of the technologies Thunderbird relies upon in that platform will one day no longer be supported. The long term plan is to migrate our code to web technologies, but this will take time, staff, and planning. We are looking for highly skilled volunteer developers who can help us with this endeavor, to make sure the world continues to have a high-performance open-source secure email client it can rely upon.

Mozilla Open Policy & Advocacy BlogEU privacy reform can increase trust, user empowerment, but must be done right

This January, the European Commission introduced a proposal to update the current EU-wide legal framework regarding the privacy and security of communications online, known as the ePrivacy Directive. This effort is timely – trust online isn’t great, and there’s more that we need to do to build it. But the Commission’s draft is far from perfect. Because online privacy is one of the core principles in the Mozilla mission, we are actively working with all of the EU institutions, to share our experiences with investing in privacy online and to advise them on ways we believe European privacy law can improve.

The specific proposal on the table is for an ePrivacy Regulation to replace the current Directive. This proposed Regulation, like the General Data Protection Regulation (GDPR), would be binding and harmonized across the European Union. Although both the GDPR and ePrivacy relate to privacy, the first focuses more on the protection of personal data and the latter more on the confidentiality and security of communications.

Here are the key issues in the proposal as adopted by the Commission:

  • Confidentiality of communications: Establishes that all e-communications data shall be confidential. “Listening, tapping, storing, monitoring, scanning or other kinds of interception, surveillance or processing of electronic communications data” shall be prohibited except as outlined in the Regulation.
  • Consent for tracking/cookies: Consent may be expressed via technical settings of a software application allowing access to the internet (like a browser).
  • Privacy settings: Software permitting electronic communications (like browsers) shall offer a privacy friendly option (e.g. prevent third party cookies).
  • Lawful access: Member states may restrict e-privacy for “public interest”; providers of e-communication services shall establish internal procedures to respond to requests by law enforcement agencies for users’ data.
  • Broader application: The Regulation applies to telcos and ISPs, but also to over-the-top content providers like messaging apps, email providers, VoIP platforms, etc. Any technology using cookies or tracking technology (like device fingerprinting) will also be subject to the rules.

We understand and sympathize on many levels with the goals of this process. And getting this right is important. As part of our ongoing efforts to understand privacy in practice, we recently conducted a survey of Mozilla’s community on how users feel about online privacy. Our survey found massive challenges to trust online for internet users. First, respondents are concerned about their privacy online. 8 out of every 10 respondents fear being hacked by a stranger, and 61% of respondents are concerned about being tracked by advertisers. Second, respondents report not knowing much about how to secure their own privacy, with over 90% of survey participants saying they don’t know much about protecting themselves online. Global surveys of consumers indicate the same sentiments, concluding that “only when consumers around the world trust online companies with their data will those companies be able to make the most of the possibilities offered by global database marketing.” As these surveys illustrate, the core problem is one of trust — internet users don’t trust that their activities online are private, which creates a negative dynamic between internet users and online service providers, preventing an optimal condition for both.

Although intended to address this gap, the current EU framework hasn’t produced behavior from the technology industry that promotes a good experience for users (e.g. the ‘cookie header’ that users simply click through), or most importantly, engendered sufficient trust. In revising the ePrivacy framework, the EU government bodies hope to further encourage the right kind of dynamic, one founded on trust.

For our part, we work to enable trust in internet users by building privacy into our products, with Private Browsing embedded in Firefox and in Firefox Focus, where private browsing is the default. We also believe government action can play a positive role in improving trust and that the proposed ePrivacy Regulation, if done right, would have such an effect. We support the spirit and intention of the ePrivacy Regulation, because it would give EU citizens stronger privacy protection online, fostering individual online security. We will work with the institutions in order to shape a future-proof framework that provides predictability for both users and online service providers, and contributes to a more secure online communications ecosystem.

“Doing it right” will be no easy feat. For instance, the draft Regulation imposes very specific restrictions on the technology industry that may challenge the business models of some ISPs. In some areas, obligations are prescriptive, undermining the principle of technological neutrality that this legislation needs to withstand the test of time in a rapidly changing environment, in addition to potentially restricting companies in freely developing innovative products and services.

Achieving harmony on these seemingly competing principles is where the challenge of a successful reform process lies. The core of the challenge is in ensuring these regulations are implementable and will achieve their goals of giving individual users choice, agency, and control, while not imposing undue or unhelpful burdens, or prematurely regulating burgeoning industries and practices.

The reform process is still in the early stages and is now in the hands of the European Parliament. The intention is to wrap up negotiations and have this legislation implemented at the same time that the GDPR comes into force, which will be May 2018. We think that is an overly aggressive timeline to tackle such a complex, important issue space, and hope that the institutions opt to take more time to more thoroughly assess the Regulation.

At Mozilla, we continue to work on these issues within a philosophy of empowering individuals. We’ll be engaged throughout this process and will share updates soon.

This post was written by Sherrie Quinn, Policy & Legal Extern at Mozilla

The post EU privacy reform can increase trust, user empowerment, but must be done right appeared first on Open Policy & Advocacy.

The Mozilla BlogIntroducing Paperstorm: Drop Airborne Leaflets to Fix EU Copyright

Mozilla and Moniker have created a digital advocacy tool that urges lawmakers to modernize EU copyright law

 

In the EU, outdated copyright law is threatening the health of the Internet.

The EU’s current copyright framework — developed for a time before the Internet — can stymie innovation, preventing entrepreneurs from building on existing data or code. It can stifle creativity, making it technically illegal to create, share and remix memes and other online culture and content. And it can limit the materials that educators and nonprofits like Wikipedia depend on for teaching and learning.

That’s why Mozilla is dropping airborne leaflets — millions of them — onto European cities.

Well, sort of.

Mozilla doesn’t own a fleet of zeppelins. And we like to conserve paper.

So we built Paperstorm.it instead. Paperstorm is a digital advocacy tool that urges EU policy makers to update copyright laws for the Internet age.

Paperstorm allows you to drop copyright reform flyers onto maps of European landmarks, like the Palace of Science and Culture in Warsaw, Poland. When you drop a certain amount, you can then message EU policymakers — like Pavel Svoboda, Chair of the EU Parliament Committee on Legal Affairs — on social media and urge them to support reform.

Alone, you might drop a handful of fliers. But together, we can drop millions — and send a clear, forceful message to EU policymakers.

Paperstorm is a collaboration between Mozilla and our friends at Moniker, the Webby award-winning interactive design studio based in Amsterdam. Paperstorm is another installment in Mozilla’s suite of advocacy media, which includes Codemoji and Post Crimes.

Why now? Copyright reform is at a critical juncture. Presently, lawmakers are crafting amendments to the proposal for a new copyright law, a process that will end this year. These amendments have the potential to make copyright law more Internet-friendly — or, conversely, more restrictive and rooted in the 20th century.

The good news: We know lawmakers are listening. Last year, Mozilla and our allies collected hundreds of thousands of signatures calling for copyright reform that would foster innovation and creativity in Europe. Some members of the European Parliament have been working diligently to improve the Commission’s proposal, and have taken into account some of the changes we’ve called for, such as removing dangerous provisions like mandatory upload filters, and pushing back against extending copyright to links and snippets.

But many other lawmakers need to be convinced not to break the Internet, and to support a modern copyright reform that empowers creators, innovators and Internet users.

“The EU Commission’s proposal to modernize copyright law for the 21st century falls short,” says Raegan MacDonald, Mozilla’s Senior Policy Manager in the EU. “It would stifle, rather than promote, innovation and creativity online.”

“We are especially concerned about the provisions calling for mandatory upload filters, which would force online services from Soundcloud to eBay to Wikipedia to monitor all content posted online in the name of copyright protection,” MacDonald adds. “Such an obligation would have a disastrous impact on the Internet ecosystem, repressing free expression and wedging out smaller players.”

“Paperstorm is part advocacy tool, part art project,” says Luna Maurer, Co-Founder of Moniker. “It’s a way to explore the intersection of the digital and physical worlds, while also standing up for free expression online.”

If you support common-sense copyright law in the EU — and a healthier Internet — join the #paperstorm today.

The post Introducing Paperstorm: Drop Airborne Leaflets to Fix EU Copyright appeared first on The Mozilla Blog.