Myk Melez: SpiderNode In Positron

Last Friday, Brendan Dahl landed SpiderNode integration into Positron. Now, when you run an Electron app in Positron, the app’s main script runs in a JavaScript context that includes the Node module loader and the core Node modules.

The hello-world-server test app demonstrates an Electron BrowserWindow connecting to a Node HTTP server started by the main script. It’s similar to the hello-world test app (a.k.a. the Electron Quick Start app), with this added code to create the HTTP server:

// Load the http module to create an HTTP server.
var http = require('http');

// Configure our HTTP server to respond with Hello World to all requests.
var server = http.createServer(function (request, response) {
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.end("Hello World from node " + process.versions.node + "\n");
});

// Start the server on port 8000, the URL the main script loads below.
server.listen(8000);

The main script then loads a page from the server in an Electron BrowserWindow:

const electron = require('electron');
const app = electron.app;  // Module to control application life.
const BrowserWindow = electron.BrowserWindow;  // Module to create native browser window.
…
var mainWindow = null;
…
// This method will be called when Electron has finished
// initialization and is ready to create browser windows.
app.on('ready', function() {
    // Create the browser window.
    mainWindow = new BrowserWindow({width: 800, height: 600});

    // and load the index.html of the app.
    mainWindow.loadURL('http://localhost:8000');
    …
});

Which results in:

Hello World Server Demo

The simplicity of that screenshot belies the complexity of the implementation! It requires SpiderNode, which depends on SpiderShim (based on ChakraShim from node-chakracore). And it must expose Node APIs to the existing JavaScript context created by the Positron app runner while also synchronizing the SpiderMonkey and libuv event loops.

It’s also the first example of using SpiderNode in a real-world (albeit experimental) project, which is an achievement for that effort and a credit to its principal contributors, especially Ehsan Akhgari, Trevor Saunders, and Brendan himself.

Try it out for yourself:

git clone https://github.com/mozilla/positron
cd positron
git submodule update --init
MOZCONFIG=positron/config/mozconfig ./mach build
./mach run positron/test/hello-world-server/

Or, for a more interesting example, run the basic browser app:

# from inside the positron checkout built above
git clone https://github.com/hokein/electron-sample-apps
./mach run electron-sample-apps/webview/browser/

(Note: Positron now works on Mac and Linux but not Windows, as SpiderNode doesn’t yet build there.)

Mozilla Cloud Services Blog: Device management coming to your Firefox Account

Today we are beginning a phased roll out of a new account management feature to Firefox Accounts users. This new feature aims to give users a clear overview of all services attached to the account, and provide our users with full control over their synced devices.

With the new “Devices” panel in your Firefox Accounts settings, you will be able to manage all your devices that use Firefox Sync. The devices section shows all connected Firefox clients on Desktop, iOS and Android, making it an excellent addition for those who use Firefox Sync on multiple devices. Use the “Disconnect” button to get rid of the devices that you don’t want to sync.

This feature will be made available to all users soon and we have a lot more planned to make account management easier for everyone. Here’s what the first version of the devices view looks like:

Devices
To stay organized you can easily rename your device in the Sync Preferences using the “Device Name” panel:

Updating Device Name

Thanks to everyone who worked on this feature: Phil Booth, Jon Buckley, Vijay Budhram, Alex Davis, Ryan Feeley, Vlad Filippov, Mark Hammond, Ryan Kelly, Sean McArthur, John Morrison, Edouard Oger, Shane Tomlinson. Special thanks to the developers on the mobile teams who helped with device registration: Nick Alexander, Michael Comella, Stephan Leroux and Brian Nicholson.

If you want to get involved with the Firefox Accounts open source project please visit: fxa.readthedocs.io. Make sure to visit the Firefox Accounts settings page in the coming weeks to take more control over your devices!

The Mozilla Blog: Promoting Cybersecurity Awareness

We are happy to support National Cyber Security Awareness Month (NCSAM), a global effort between government and industry to ensure everyone has the resources they need to be safer, more secure and better able to protect their personal information online.

We’ve talked about how cybersecurity is a shared responsibility, and that is the theme for National Cybersecurity Awareness Month – the Internet is a shared resource and securing it is our shared responsibility. This means technology companies, governments, and even users have to work together to protect and improve the security of the Internet. We all have to do our part to make the Internet safer and more secure for everyone. This is a time for all Internet users to Stop. Think. Connect. This month, and all year long, we want to help you be more “CyberAware.”

ncsam

Our responsibility as a technology company is to create secure platforms, build features that improve security, and empower people with education and resources to better protect their security. At Mozilla, we have security features like phishing and malware protection built into Firefox, Firefox Add-ons focused on cybersecurity, and a checkup site to make sure Firefox and all your add-ons and plugins are up to date, just to name a few.

But, the increasing incidents we’ve seen in the news show that as cybersecurity efforts and technology protections advance, so do the threats against Internet security. Now, more than ever, each Internet user has a responsibility to protect themselves and help protect those around them.

What can you do?

There are lots of tips, tools, and resources available to you to help protect your privacy and security online. Try to take advantage of the resources available to increase your cybersecurity awareness and digital literacy skills. We believe that creating awareness and giving people access to the right tools to learn basic Web literacy skills — like reading, writing, and participating online — opens new opportunities to better utilize the Web for your needs.

We’ll list a few basic cybersecurity tips here, and you should also know how each of your devices, services, and accounts handles your private information.

These steps don’t just protect people who care about their own security, they help create a more secure Internet for the billions of people online.

The basic steps to protect your cybersecurity include: (here’s a fun infographic with these tips)

  • Lock down your login: Use strong passwords and the strongest authentication tools available to protect your online accounts and personal information.
  • Keep a clean machine: Make sure all your Internet-connected devices, Web services, and apps are up to date with the latest software, and enable auto-updates when you can.
  • Remember, personal information is like money: Value it and protect it, everything from your location to your purchase history. Be aware and in control of what information is shared about you online.
  • When in doubt, throw it out: Cybercriminals are sneaky and often use links in email, social media, and ads to steal your personal information. Even if you know the source, if something looks suspicious, don’t click on it; delete it.
  • Share with Care: Think before you post. Consider who will see the post and how it might be perceived, now or in the future. And don’t post something about someone else that you wouldn’t want posted about yourself.
  • Own Your Online Presence: Consider limiting how and with whom you share information online. Make sure to set your individual app and website privacy and security settings to meet your needs.

If you’re interested in more ways you can protect your digital privacy, you should check out the Consumer Reports 10 minute digital privacy tuneup that Mozilla contributed to, or for even more tips, you can read the full article with 66 ways to protect your privacy.

To get more information and resources to promote a safer, more secure, and more trusted Internet all month long, visit: Stop.Think.Connect, Stay Safe Online, and the European Cyber Security Month website.

You can join Mozilla, National Cyber Security Alliance (NCSA) and others in a Stop. Think. Connect Twitter chat today at 12 pm PT for more about the basics of online safety. #CyberAware #ChatSTC. You can follow and use the official NCSAM hashtag #CyberAware on Twitter throughout the month.

We’ll also continue to share more about important cybersecurity topics throughout the month.

 

Air Mozilla: Connected Devices Weekly Program Update, 06 Oct 2016

Connected Devices Weekly Program Update Weekly project updates from the Mozilla Connected Devices team.

Soledad Penades: Moving to the DevTools team

As of this week I am working in the DevTools team.

This isn’t entirely surprising, given that I’ve been collaborating with this team a lot in the past (proof!) and also that I care a lot about making sure web developers have the best tools so they can build the best web ever.

I will be managing a few folks and also plan on getting some bugs fixed myself (famous last words? 😏). I am also going to give the talks I agreed to give, so I will still be attending Hackference (Birmingham), CSSConf Asia (Singapore) and JSConf AU (Melbourne).

I’m very excited both about the future and about working with this team full time! Yasss!

It is bittersweet to leave my former team as my colleagues are very cool, but we keep working closely together, and I intend to keep using my devrel-ing skills to announce all the cool stuff coming out of my new team. We will keep building cross team synergies! 😝

🌞 Onward! 🌞


Air Mozilla: Reps Weekly Meeting Oct. 6, 2016

Reps Weekly Meeting Oct. 6, 2016 This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Doug Belshaw: Web Literacy badges in GitHub

I’m delighted to see that Mozilla have worked with Digitalme to create new Open Badges based on their Web Literacy Map. Not only that, but the badges themselves are platform-agnostic, with a GitHub repository containing the details that can be used under an open license.

Web Literacy Badges

As Matt Rogers from Digitalme documents in A journey into the open, there are several levels to working open:

In a recent collaboration with the Mozilla Learning team – I got to understand how I can take our work to the next level of openness. Creating publicly available badge projects is one thing, but it’s another when they’re confined to one platform – even if that is your own. What truly makes a badge project open is its ability to be taken, maybe remixed, and utilised anywhere across the web. Be that on a different badging platform, or via a completely different delivery means entirely.

This is exactly the right path for the Web Literacy work and Open Badges work at Mozilla. It’s along the same lines as something I tried to achieve during my time as Web Literacy Lead there. Now, however, it seems like they’ve got the funding and capacity to get on with it. A big hats-off to Digitalme for wrangling this, and taking the hard, but extremely beneficial and enlightening steps towards working (even) more openly.

If you’re interested in working more openly, why not get in touch with the co-operative I’m part of, We Are Open?


Comments? Questions? I’m @dajbelshaw on Twitter, or you can email me: [email protected]

Mozilla Addons Blog: Friend of Add-ons: Atique Ahmed Ziad

Please meet our newest Friend of Add-ons: Atique Ahmed Ziad, who in September alone closed 14 AMO bugs!

Atique is primarily interested in front-end work. His recent contributions mostly focused on RTL language bugs; however, it was this error messaging bug that proved the most challenging, because it forced Atique to grapple with some complex code.

Beyond crushing bugs, Atique also helps organize Activate campaigns.

When he’s not busy being a tireless champion for the open Web, Atique likes to unwind by taking in a good movie or playing video games; and while he’s one of the nicest guys you’ll ever meet, you’ll want to avoid him in the virtual settings of Grand Theft Auto and Call of Duty.

On behalf of the AMO staff and community, thank you for all of your great work, Atique!

Do you contribute to AMO in some way? If so, don’t forget to add your contributions to our Recognition page!

Frédéric Wang: Fonts at the Web Engines Hackfest 2016

Last week I travelled to Galicia for one of the regular gatherings organized by Igalia. It was a great pleasure to meet again all the Igalians and friends. Moreover, this time was a bit special since we celebrated our 15th anniversary :-)

Igalia Summit October 2016
Photo by Alberto Garcia licensed under CC BY-SA 2.0.

I also attended the third edition of the Web Engines Hackfest, sponsored by Igalia, Collabora and Mozilla. This year, we had various participants from the Web Platform, including folks from Apple, Collabora, Google, Igalia, Huawei, Mozilla and Red Hat. For my first hackfest as an Igalian, I invited some experts on fonts & math rendering to collaborate on OpenType MATH support in HarfBuzz and its use in math rendering engines. In this blog post, I am going to focus on the work I did with Behdad Esfahbod and Khaled Hosny. I think it was again a great and productive hackfest and I am looking forward to attending the next edition!

OpenType MATH in HarfBuzz

Web Engines Hackfest, main room
Photo by @webengineshackfest licensed under CC BY-SA 2.0.

Behdad gave a talk with a nice overview of the work accomplished in HarfBuzz over ten years. One thing that has appeared recently in HarfBuzz is the need for APIs to parse OpenType tables on all platforms. As part of my job at Igalia, I had started experimenting with support for the MATH table some months ago, and it was nice to have Behdad finally available to review, fix and improve the commits.

When I talked to Mozilla employee Karl Tomlinson, it became apparent that the simple shaping API for stretchy operators proposed in my blog post would not cover all the special cases currently implemented in Gecko. Moreover, this shaping API is also very similar to another one existing in HarfBuzz for non-math scripts, so we would have to decide the best way to share the logic.

As a consequence, we decided for now to focus on providing an API to access all the data of the MATH table. After the Web Engines Hackfest, such a math API is now integrated into the development repository of HarfBuzz and will be available in version 1.3.3 :-)

MathML in Web Rendering Engines

Currently, several math rendering engines have their own code to parse the data of the OpenType MATH table. But many of them actually use HarfBuzz for normal text shaping and hence could just rely on the new math API for the math rendering too. Before the hackfest, Khaled had already tested my work-in-progress branch with libmathview, and I had done the same for Igalia’s Chromium MathML branch.

MathML Fraction parameters test
MathML test for OpenType MATH Fraction parameters in Gecko, Blink and WebKit.

Once the new API landed in HarfBuzz, Khaled was also able to use it for the XeTeX typesetting system. I also started to experiment with this for Gecko and WebKit. This seems to work pretty well and we get consistent results for Gecko, Blink and WebKit! Some random thoughts:

  • The math data is exposed through a hb_font_t which contains the text size. This means that the various values are now directly resolved and returned as a fixed-point number, which should help us avoid the rounding errors we may currently have in Gecko or WebKit when multiplying by float factors.
  • HarfBuzz has some magic to automatically handle invalid offsets and sizes that greatly simplifies the code, compared to what exists in Gecko and WebKit.
  • Contrary to Gecko’s implementation, HarfBuzz does not cache the latest result for glyph-specific data. Maybe we want to keep that?
  • The WebKit changes were tested on the GTK port, where HarfBuzz is enabled. Other ports may still need to use the existing parsing code from the WebKit tree. Perhaps Apple should consider adding support for the OpenType MATH table to CoreText?

Brotli/WOFF2/OTS libraries

Web Engines Hackfest, main room
Photo by @webengineshackfest licensed under CC BY-SA 2.0.

We also updated the copies of the WOFF2 and OTS libraries in WebKit and Gecko respectively. This addresses one requirement from the WOFF2 specification and allows us to pass the corresponding conformance test.

Gecko, WebKit and Chromium bundle their own copy of the Brotli, WOFF2 or OTS libraries in their source repositories. However:

  • We have to use more or less automated mechanisms to keep these bundled copies up-to-date. This is especially annoying for Brotli and WOFF2 since they are still in development and we must be sure to always integrate the latest security fixes. Also, we get compiler warnings or coding style errors that do not exist upstream and that must be disabled or patched until they are fixed upstream and imported again.

  • This is obviously not an optimal way to share system libraries and may increase the size of binaries. Using shared libraries is what maintainers of Linux (and other FLOSS systems) generally ask for, and this was raised during the WebKitGTK+ session. Similarly, we should really use the system Brotli/WOFF2 bundled in future releases of Apple’s operating systems.

There are several issues that make it hard for package maintainers to provide these libraries: no released binaries or release tags, no proper build system to generate shared libraries, use of git submodules to include one library’s source code in another, etc. Things have gotten a bit better for Brotli, and I was able to tweak the CMake script to produce shared libraries. For WOFF2, issue 40 and issue 49 have been inactive, but hopefully these will be addressed in the future…

Jared Hirsch: Bootstrapping Test Pilot Community through Public Decision Making

This proposal is pulled verbatim from a message I sent to the Test Pilot mailing list a few minutes ago.

The question is: how do we best begin to build a community around Test Pilot?

My answer: start by making decisions in public.

If this seems interesting to you, read on below.

Proposed Participation Timeline

Q4 2016: Start by making decisions in public, in the Discourse user forums.

Q4 2016 - Q1 2017: Once we’ve made our process accessible to contributors, ask active Mozillians to get involved. Build an awesome core community. Advertise idea submissions to active Mozillians & iterate on the submission system before a huge public influx.

Q1-Q2 2017: Once the core community is in place, and idea submission has been tweaked, get Marketing involved with a public call for ideas. Our awesome core community will help manage the influx: greet newcomers, bash trolls, de-dupe suggestions.

Background

To frame the discussion, I wrote up some thoughts on what a more open, community-centered product development cycle might look like. TL;DR: give community a seat at the table by offering equal visibility into the process, and opportunities to provide input at decision points.

See also Producing OSS, which makes interesting points on the importance of public decision-making.

Suggested Q4 plan

In Q4, we can set the stage for community by making our work public:

  • ask product and UX to move decision making discussions to Discourse
  • ask experiment teams to post in Discourse as they launch, measure, iterate, and wind down
  • make Discourse a secondary call to action on the Test Pilot experiment pages
  • deprecate mailing list in favor of Discourse
  • (if there’s time) ask vetted, active Mozillians to join the conversation
  • (if there’s time) ask Mozillians to share ideas

If we make these changes in Q4, then by Q1, we’ll have plenty of content in Discourse. New community members will tend to model their input on the tone and content of the existing discussion; for this reason, I think we should hold the open call until a bit later. I realize I’m contradicting my own recent suggestion, but I do think this approach will yield better results.

Key Result: 100 monthly active Discourse users

One natural number to measure would be the number of monthly active Discourse users (MADU), meaning: users who post at least once a month in the Test Pilot category. Right now, this number is probably below 10. If all the dev teams and product/UX get involved, that’ll jump to, maybe, 30 MADUs. If we get Mozillians engaged, we could see this number jump into the hundreds. 100 MADUs seems ambitious, but possible.

Q4 plan in detail

It’s a bit buried in the list above, but I think we should deprecate the mailing list, and move discussion to Discourse instead. Our current list has little traffic, and a primitive, plain text archive that doesn’t allow search. It would be easy to move the current traffic to Discourse. Finally, moving to Discourse would encourage the Test Pilot team to use it.

Going into a bit more detail on the types of content that should move to Discourse to help seed our community:

Design phase (product / UX):

  • Idea proposals & discussion
  • Decision making discussions: which ideas to invest in, and why those ideas.

Development / iteration phase (product / UX / dev):

  • Summaries of user interaction data (how are people using the product, how many keep using it)
  • High level discussion of which features to add, change, or remove, grounded in the public interaction data
  • I’m not yet sure how much development discussion should happen on Discourse vs. a specific mailing list; we can ask around to see what approaches different teams would like to try.

Graduation / end of cycle phase (product):

  • Discussions about whether to keep an experiment running, move into Firefox, or retire it.

Wrapping up

That does it for the mailing list post.

What do you think about this proposal? Is this a good way to build the foundations of a participatory Test Pilot community?

Let me know! Twitter and email are in the footer below.

Air Mozilla: The Joy of Coding - Episode 74

The Joy of Coding - Episode 74 mconley livehacks on real Firefox bugs while thinking aloud.

Christian Heilmann: Can we stop bad-mouthing CSS in developer talks, please?

At almost every developer conference right now there will be a talk that features the following “funny GIF”:

funny CSS gif
Peter Griffin aka Family Guy trying to make some blinds close and making a total mess of it, randomly dragging the cords until he gives up and rips them off the window. With the caption CSS.

It is always a crowd pleaser, and can be a good introduction to the issues of CSS and some examples of how to fix them. In most cases – and the likelihood of that increases with how “techy” the conference is – it is the start of a rant about how bad CSS is, how its architecture is terrible and how inconsistent it is. And, and, and…

Here’s the thing: I am sick of this. It is not clever, it is not based in fact and it makes us appear as arrogant know-it-alls that want everything to work the way we are used to. It drives a firm divider between “developers” and “people who do web stuff”, aka “not real developers”. Which is nonsense. Arrogant, dangerous nonsense, not helping us – at all – to grow our developer community and be more exciting for a new, diverse, crowd to join.

Here’s a fact: we build incredibly complex, exciting and beautiful things on the web. The most democratic distribution system of information and – by now – a high-fidelity and exciting software platform. If you think you know every facet of this, that you can do it all without relying on other experts to work with you, and in the one technology you like, you’re blinded by your own ambition. And an arrogant person I really could not be bothered to work with.

Yes, it is easy to poke fun at CSS and its Frankenstein-esque syntax. It is also easy to show that you can do all the things it does with other technologies. But that gives you no right – at all – to belittle and disregard the people who like CSS and took it as their weapon of choice to build great user interfaces.

In other words: if you don’t like it, don’t use it. Work with someone who does like it. It is a self-fulfilling prophecy that when you use a technology you don’t take seriously and don’t like, the end result will be bad. It is a waste of time. When you complain about the issues you faced because you want the technology to bend to your comfort zone’s rules, you really complain about your failure to use it. It doesn’t apply to those who happen to like the technology and play it to its strengths.

Another one that always crops up is the “CSS is awesome” coffee mug:

css is awesome mug

The joke here is that CSS is inadequate to fix the problem of overflowing text. Well, my question is: how should that be handled? Overflow with scroll bars? That’s possible in CSS. Cutting the text off? Also possible. Shortening the text followed by an ellipsis? That’s also possible. Are any of those good solutions? No. The main thing here is that the text is too big to fit the container, and a fixed container is a mistake on the web. You can’t fix anything in an environment that by definition could be any size or form factor. So the mistake here is “fixed container” thinking, not that CSS doesn’t magically do something to the text that you can’t control. That kind of magic, in interfaces, would really get you into trouble.
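For reference, each of the three treatments just mentioned is only a declaration or two in CSS (the class names here are illustrative, not from the mug):

```css
/* Scroll bars when the text exceeds the box */
.scrolling { overflow: auto; }

/* Simply cut the text off at the box edge */
.clipped { overflow: hidden; }

/* Cut a single line of text off and append an ellipsis;
   text-overflow only applies together with the other two properties */
.ellipsis {
  overflow: hidden;
  white-space: nowrap;
  text-overflow: ellipsis;
}
```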

I challenge anyone to look at the mind-boggling things Ana Tudor does with CSS and tell me that’s not “real programming” and based on a “stupid language”.

See the Pen cube broadcast (pure CSS) by Ana Tudor (@thebabydino) on CodePen.


I challenge you not to see the benefit of flexbox and the ability it gives us to build dynamic interfaces that can adapt to different amounts of content and the needs of smaller versus bigger screens as explained by Zoe Mickley Gillenwater:


Zoe Mickley Gillenwater | Flexbox | CSS Day from Web Conferences Amsterdam on Vimeo.
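As a tiny illustration of the kind of adaptive layout flexbox makes cheap (the class names below are made up for this sketch, not taken from the talk):

```css
/* A toolbar whose items share the available row width and wrap
   onto new lines as content or viewport size changes */
.toolbar {
  display: flex;
  flex-wrap: wrap;
}
.toolbar > .item {
  flex: 1 1 12em; /* grow, shrink, preferred width of 12em */
}
```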

I challenge you not to get excited about the opportunities of grid layouts as explained by Rachel Andrew:

I challenge you not to be baffled by the beauty of using type and shapes to build complex layouts that are not hindered by a fixed pixel thinking as explained by Jen Simmons.

I challenge you not to marvel at the power of CSS filters and blend modes and what they empower an artistic mind to do as explained by Una Kravets:


SmashingConf Freiburg 2016 – Una Kravets on Practical Blend Modes from Smashing Magazine on Vimeo.

So next time you think about using “the CSS joke”, please understand that the people who care about CSS are not just trying to colour some text. CSS is a very expressive language for building complex interfaces that cater to a lot of varying user needs. If you can’t get your head around that – and I am admitting that I can’t anymore – have the decency not to belittle those who do. Applaud them for their efforts and work with them instead.

Karl Dubost: Debug your CSS with outline visualizations

Reading the GDS blog post on how to prototype in the browser, I realized that it’s always good to explain little tips for the benefit of others. Their technique is something I use on a daily basis for modifying content, evolving a design, etc.

When diagnosing on webcompat.com, I often use a trick for having a better understanding of the way the elements flow with regards to each other.

Using CSS outline for visualizing

David Lorente reported an issue about the menu of Universia. Basically, two items were missing in the main navigation bar of the Web site.

Hovering over the menu with the mouse and doing ctrl+click to get the contextual menu, I can choose Inspect.

Inspect Contextual Menu

It opens the developer tools, places the cursor on the right element and displays its associated CSS.

Inspector

For this particular issue, because the elements were not immediately visible, I decided to add a higher z-index in case they were hidden by another layer. More specifically, I selected the wrapper element for the navigation bar <div class="header-nav"> and headed to the + sign on the right side.

Add new rule

Clicking on it lets you add a new rule for the selected node (element) in the inspector. In this case, it adds .header-nav.

header nav selector

which I usually edit to select all the children of this node with .header-nav *. Then I add an outline CSS property with a color that gives an acceptable contrast, helping me understand what is happening; in this case, outline: 1px solid pink.

outline css rule
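Put together, the rule built through the inspector amounts to a single declaration:

```css
/* Outline every descendant of the navigation wrapper,
   without affecting box sizes or layout */
.header-nav * {
  outline: 1px solid pink;
}
```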

The result helps to visualize all the children boxes of the div.

Visualization

It is now a lot easier to understand what is going on.

Why CSS outline?

The reason I use CSS outline is that outlines do not participate in the intrinsic sizes of boxes and do not disturb the flow of boxes. They just make the boxes visible for the purpose of the diagnosis.

What are the tricks you use that seem obvious to you? Share them with others.

Oh and the site has been fixed since.

Otsukare!

Jared Hirsch: Test Pilot and Open Product Development

Test Pilot started out as a better way for Mozilla to build ambitious new features in Firefox: prototype features as add-ons, share them with an opt-in beta testing community, and iterate much more quickly than the Firefox release cycle allows. This all makes sense, and it’s amazing how much we’ve accomplished in just over a year, but this conception leaves out the community altogether.

How could Test Pilot be more open, more participatory?

Well, what if the Test Pilot team asked the community for feature ideas?

Even better: what if Test Pilot shipped prototypes built by self-organized teams of community members (volunteer or staff)?

What if we think about Test Pilot as a community of contributors interested in discussing and building new Firefox features?

Yeah, that’s the stuff!

I don’t have the time or space to get deeper into this vision right now, but I will mention it in another post, soon.

Assuming it makes sense to build products in the open from the early stages, how might we get there?


Here’s a rough sketch of an open product development lifecycle for Firefox features, centered around Test Pilot.

Generating and discussing feature ideas

An inventor with an idea creates a Discourse account and creates a post describing their idea. This could be anyone: a product manager at Mozilla, or a new Mozillian located anywhere in the world.

Interested community members discuss the idea, and help the inventor improve their proposal.

Community members with user research skills can help generate qualitative or quantitative insights about ideas they are interested in.

Turning ideas into working prototypes

Community members with design and development skills self-organize to build prototypes of ideas they believe in.

Companies (like Mozilla) can sponsor development of ideas.

Deciding which prototypes belong in Firefox Test Pilot

Every few months, someone from the Firefox product org leads a public discussion about which of the in-progress prototypes might be a good fit for the Firefox roadmap.

The decision-making process is a public, open conversation based on publicly defined criteria (the public Firefox roadmap and Mozilla’s guiding principles).

Input from the community is welcome, the Firefox product org makes the final decision, and deliberations happen in the same public channels used by the rest of the community.

Sharing prototypes with the Firefox Test Pilot audience

This is the part we’ve already built at https://testpilot.firefox.com.

Prototypes ship in the Test Pilot site, and Mozilla provides marketing support and a global audience. Firefox users interested in trying new features install different prototypes, and offer feedback on what they do or don’t like.

Sharing prototypes that don’t make it into Test Pilot

Like any other Firefox add-on: teams ship on AMO (or self-host) and self-promote.

It’s still possible to create a great feature that later makes it into Test Pilot, or becomes a great add-on on its own, and teams can continue to discuss features and updates with the Test Pilot community.

Iterating on the product concept

Once launched in Test Pilot, the product should evolve based on quantitative usage data, as well as qualitative user feedback.

Prototype building teams are encouraged to make their discussions public, so that interested community members can provide input.

From Test Pilot prototype to Firefox

The Firefox product org decides which Test Pilot experiments to turn into real browser features. Again, this is an open discussion, centered around criteria any contributor could understand (except for contractual / partner private factors).

From Test Pilot prototype to AMO

Ideas that fail to catch on with a big audience, at the end of their Test Pilot run, can still be supported by the original team, or handed off to other community members, and permanently hosted on AMO.


Next steps: I’m out of time for now, but I’d like to explore which parts of this theoretical open product dev cycle might make sense at Mozilla. In other words, stay tuned, more to come :-)

What do you think? Let me know! Twitter and email are in the footer below.

Mozilla Addons BlogOctober 2016 Featured Add-ons

Firefox Logo on blue background

Pick of the Month: MailtoWebmails

by Noitidart
Now you can easily customize which Webmail client you want to use whenever you click a “mailto:” link, instead of being pushed to your default desktop email. Get completely set up in just a few simple steps.

“I searched repeatedly for a way to change from Gmail to inbox—this works like a treat.”

Featured: Messenger & Notifier for Facebook™

by glin and Elen Norphen
Access Messenger right from the Firefox toolbar, and instantly receive notifications when you have inbound messages.

“Finally I can reply to messages without being distracted by the news feed! Thank you very much!”

Nominate your favorite add-ons

Featured add-ons are selected by a community board made up of add-on developers, users, and fans. Board members change every six months, so there’s always an opportunity to participate. Stay tuned to this blog for the next call for applications. Here’s further information on AMO’s featured content policies.

If you’d like to nominate an add-on for featuring, please send it to [email protected] for the board’s consideration. We welcome you to submit your own add-on!

Air MozillaWebdev Extravaganza: October 2016

Webdev Extravaganza: October 2016 Once a month web developers across the Mozilla community get together (in person and virtually) to share what cool stuff we've been working on. This...

Mozilla Open Design BlogFine Tuning

At the Brand New Conference in Nashville two weeks ago, we shared a snapshot of our progress and posted the designs here simultaneously. About 100 graphic designers and brand experts stayed after our presentation to share some snacks and their initial reactions.

Since then, we’ve continued to refine the work and started to gather feedback through quantitative testing, reflecting the great advice and counsel we’ve received online and in person. While we’ve continued to work on all four designs, Dino 2.0 and Flame show the most dramatic progress in response to feedback from conference attendees and Mozillians. We wanted to refine these designs prior to testing with broader audiences.

 

Meet Dino 2.1

Our early work on Dino 2.0 focused on communicating that Mozilla is opinionated, bold, and unafraid to be different. Embodying the optimism and quirkiness of our culture, the minimalist design of this dinosaur character needed to extend easily to express our wide array of programs, communities, events, and initiatives. Dino 2.0 met the challenge.

On the other hand, the character’s long jaw reminded some commenters of an alligator and others of a red stapler. Colleagues also pointed out that its playfulness might undermine serious topics and audiences. Would we be less believable showing up to testify about encryption with a dino logo smiling from our briefcases?

So the dinosaur has continued to evolve. A hand-cut font helps shorten the jaw while a rounder outline shifts the mis-perception that we sell office supplies. After much debate, we also decided to make the more serious Mozilla portion – the upper part of the jaw – the core mark.

mozilla-dino_3oct-blog_2

 

mozilla-dino_3oct-blog_3, mozilla-dino_3oct-blog_4, mozilla-dino_3oct-blog_5

Wait, does that doom the dino character? Not at all.

Since the bulk of our Mozilla brand expression occurs on screens, this shift would allow the animated dino to show an even wider range of emotions. Digitally, the core mark can come to life and look surprised, hungry, or annoyed as the situation warrants, without having those expressions show up on a printed report to Congress. And our communities would still have the complete Dino head to use as part of their own self expression.

Should Dino 2.1 end up as one of our finalists, we’ll continue to explore its expressions. Meanwhile, let us know what you think of this evolution.

 

Making Flame eternal.

The ink was still moist on Flame, our newest design direction, when we shared it in Nashville. We felt the flame metaphor was ideal for Mozilla, referencing being a torch-bearer for an equal, accessible internet, and offering a warm place for community to gather. Even so, would a newcomer seeing our name next to a traditional flame assume we were a religious organization? Or a gas company? We needed a flame that was more of the Web and more our own.

So we asked: what if our core mark was in constant motion — an eternal flame that represents and reinforces our purpose? Although we haven’t landed on the exact Flame or the precise font, we are on a better design path now.

mozilla-flame_3oct-blog_2, mozilla-flame_3oct-blog_3, mozilla-flame_3oct-blog_4, mozilla-flame_3oct-blog_5

 

Should the Flame make it into our final round, we will continue to explore different flame motions, shapes, and static resting states, along with a flexible design system. Tell us what you think so far.

What about Protocol 2.0 and Burst? We’ve shifted Protocol 2.0 from Lubalin to an open source font, Arvo Bold, to make it more readily available globally. We continue to experiment with Burst in smaller sizes (with reduced spokes) and as a means to visualize data. All four designs are still in the running.

burst_rotating

Testing 1,2,3.

This week begins our quantitative consumer testing in five global markets. Respondents in our target audience will be asked to compare just one of the four designs to a series of brand attributes for Mozilla, including Unique, Innovative, Inclusive, and others. We have also shared a survey with all Mozillians, asking similar questions plus a specific request to flag any cultural bias. And since web developers are a key audience for us, we’ve extended the survey through the Mozilla Developer Network as well.

This research phase will provide additional data to help us select our final recommendation. It will help us discern, for instance, which of these four pathways resonates best with which segment of our audience. The findings will not be the only factor in our decision-making. Comments from the blog and live crit sessions, our 5-year strategic goals as an organization, and other factors will weigh into the decision.

We’ll share our top-level research findings and our rationale for proceeding as we go. Continued refinement will be our next task for the route(s) selected in this next round, so your insights and opinions are still as valuable as ever.

Thanks again to everyone who has taken the time to be a part of this review process. Three cheers for Open Design!

Tarek ZiadéLyme Disease & Running

I am writing this blog post to share what happened to me, and make more people aware of that vicious illness.

If you don't know about Lyme, read up here: https://en.wikipedia.org/wiki/Lyme_disease

I contracted Lyme disease a year ago and got gradually sick, without knowing at first what was happening to me.

I am an avid runner and I live in a forested area. I do a lot of trail running, which exposes me to ticks. Winters are warmer these days, and ticks are just craving to bite mammals.

In my case, I got bitten in the forest last summer by several ticks, which I removed, and a week later, without making the link between the two events, I got a full week of heavy fever. I did a bunch of tests back then, including for Lyme, and we could not find what was happening; just that my body was fighting something.

Then life went on, and a month later I had an erythema under the armpit that grew to cover half my torso.

I went back to the doctor and did some tests; everything was negative again, and life went on. The erythema eventually dissipated.

About 3 months ago, I started to experience extreme eye fatigue and muscle soreness. I blamed the short nights caused by our newborn baby, and I blamed over-training. But cutting down the training and sleeping more did not help.

This is where it gets interesting and vicious: to me, everything looked like my body gradually reacting to over-training. I went to the osteopath, and he started telling me that I was simply doing too much, not stretching enough, etc. Every time I mentioned Lyme, people were skeptical. It’s very strange how some doctors react when you tell them it could be that.

This disease is not well known, and since its symptoms differ so much from one individual to another due to its auto-immune behavior, some doctors will just end up saying you are having psychosomatic reactions.

Yeah, doctors will end up telling you that it’s all in your head just so they don’t have to face their ignorance. Some Lyme victims go mad because of that; in the worst cases, there are suicides.

At some point, I felt like I had simply broken my body with all those races. I felt 80 years old. A simple 30-minute endurance run felt like running a marathon.

And I went to another doctor and did a new blood test, to eventually discover I had late-stage Lyme disease (probably phase 2): that’s when the Borrelia gets into your muscles, tendons, and nerves.

It took me almost one year to get the confirmation. Right before I got that test result, I really thought I had cancer or something really bad. That is the worst part: not knowing what’s happening to you, and seeing your body degrade without knowing what to do.

They gave me the usual 3 weeks of heavy antibiotics. I felt like crap the first week; sometimes raising my arm was hard. But by the end of the 3 weeks I felt much better, and it looked like I was going to be OK.

After the 3 weeks ended, the symptoms came back, and I went to the hospital to see a neurologist who seemed to know a bit about Lyme. He said that I was probably having post-Lyme symptoms, which is pretty common: your body continues to fight something that’s not there anymore. And that can last for months.

And the symptoms are indeed gradually fading, just as they came.

I am just so worried about developing a chronic form. We'll see.

The main problem in my story is that my doctor did not give me antibiotics when I had the erythema. That was a huge mistake. Lyme is easy to get rid of when you catch it early, and it should be a no-brainer: erythema == antibiotics.

Anyway, some pro tips so you don’t catch that crap on trails:

  • wear long socks in the forest and put plenty of tick/mosquito repellent on them. Winters are warmer; ticks are everywhere.
  • do a full body check after the run and shower. Ticks are very small (around 3–5 mm) when they get on you; they wait for you to run by. Most of the time they are not yet attached under your skin.
  • use a dog/cat tick remover.
  • if any bite gets reddish, go to your doctor immediately and ask for antibiotics. If your doctor is skeptical about Lyme, go see another doctor.

QMOFirefox 50 Beta 3 Testday Results

Hello Mozillians!

As you may already know, last Friday – September 30th – we held a new Testday event, for Firefox 50 Beta 3.

Thank you all for helping us make Mozilla a better place – Julie Myers, Logicoma, Tayba Wasim, Nagaraj V, Suramya Shah, Iryna Thompson, Moin Shaikh, Dragota Rares, Dan Martin, P Avinash Sharma.

From Bangladesh: Hossain Al Ikram, Azmina Akter Papeya, Nazir Ahmed Sabbir, Saddam Hossain, Aminul Islam Alvi, Raihan Ali, Rezaul Huque Nayeem, Md. Rahimul Islam, Sayed Ibn Masud, Roman Syed, Maruf Rahman, Tovikur Rahman, Md. Rakibul Islam, Siful Islam Joy, Sufi Ahmed Hamim, Md Masudur-Rahman, Niaz Bhuiyan Asif, Akash Kishor Sarker, Mohammad Maruf Islam, MD Maksudur Rahman, M Eftekher Shuvo, Tariqul Islam Chowdhury, Abdullah Al Jaber Hridoy, Md Sajib Mullla, MD. Almas Hossain, Rezwana islam ria, Roy Ayers, Nzmul Hossain, Md. Nafis Fuad, Fahim. 

From India: Vibhanshu Chaudhary, Subhrajyoti Sen, Bhuvana Meenakshi K, Paarttipaabhalaji, Nagaraj V, Surentharan.R.A, Rajesh . D, Pavithra.R.

A big thank you goes out to all our active moderators too! 

Results:

Keep an eye on QMO for upcoming events!

This Week In RustThis Week in Rust 150

Hello and welcome to another issue of This Week in Rust! Rust is a systems language pursuing the trifecta: safety, concurrency, and speed. This is a weekly summary of its progress and community. Want something mentioned? Tweet us at @ThisWeekInRust or send us a pull request. Want to get involved? We love contributions.

This Week in Rust is openly developed on GitHub. If you find any errors in this week's issue, please submit a PR.

Updates from Rust Community

News & Blog Posts

New Crates & Project Updates

Other weeklies from Rust community

Crate of the Week

No crate was selected for CotW.

Submit your suggestions and votes for next week!

Call for Participation

Always wanted to contribute to open-source projects but didn't know where to start? Every week we highlight some tasks from the Rust community for you to pick and get started!

Some of these tasks may also have mentors available, visit the task page for more information.

If you are a Rust project owner and are looking for contributors, please submit tasks here.

Updates from Rust Core

181(!) pull requests were merged in the last week.

New Contributors

  • Chris McDonald
  • Frank Rehberger
  • Jesus Garlea
  • Martin Thoresen
  • Nathan Musoke
  • ParkHanbum
  • Paul Lange
  • Paulo Matos
  • Peter N
  • Philip Davis
  • Pweaver (Paul Weaver)
  • Ross Schulman

Approved RFCs

Changes to Rust follow the Rust RFC (request for comments) process. These are the RFCs that were approved for implementation this week:

No RFCs were approved this week.

Final Comment Period

Every week the team announces the 'final comment period' for RFCs and key PRs which are reaching a decision. Express your opinions now. This week's FCPs are:

New RFCs

Upcoming Events

If you are running a Rust event please add it to the calendar to get it mentioned here. Email the Rust Community Team for access.

fn work(on: RustProject) -> Money

No jobs listed for this week.

Tweet us at @ThisWeekInRust to get your job offers listed here!

Friends of the Forest

Our community likes to recognize people who have made outstanding contributions to the Rust Project, its ecosystem, and its community. These people are 'friends of the forest'.

This week's friends of the forest are:

I'd like to nominate Veedrac for his awesome contributions to various performance-related endeavors.

I'd like to highlight tomaka for his numerous projects (glium, vulkano, glutin). I know he's also involved in some other crates I take for granted, like gl_generator.

I like to play with gamedev, but I am a newcomer to OpenGL things and I have been very grateful for projects like glium and gl_generator that not only give me a good starting point, but through various documentation has informed me of OpenGL pitfalls.

He recently wrote a post-mortem for glium, which I think is good as a matter of reflection, but I'm still very impressed with that project, and the others he is tirelessly contributing to.

Well done!

Submit your Friends-of-the-Forest nominations for next week!

Quote of the Week

My favorite new double-meaning programming phrase: "my c++ is a little rusty"

Jake Taylor on Twitter.

Thanks to Zachary Dremann for the suggestion.

Submit your quotes for next week!

This Week in Rust is edited by: nasa42, llogiq, and brson.

The Mozilla BlogMOSS supports four more open source projects in Q3 2016 with $300k

moz-love-open

 

If you have worked with data at Mozilla you have likely seen a data dashboard built with it. Re:dash is enabling Mozilla to become a truly data driven organization.
— Roberto Vitillo, Mozilla

In the third quarter, the Mozilla Open Source Support (MOSS) program has made awards to a number of “plumbing” projects – unobtrusive but essential initiatives which are part of the foundation for building software, building businesses and improving accessibility. This quarter, we awarded over $300k to four projects – three on Track 1 Foundational Technology for projects Mozilla already uses or deploys, and one on Track 2 Mission Partners for projects doing work aligned with our mission.

On the Foundational Technology track, we awarded $100,000 to Redash, a tool for building visualizations of data for better decision-making within organizations, and $50,000 to Review Board, software for doing web-based source code review. Both of these pieces of software are in heavy use at Mozilla. We also awarded $100,000 to Kea, the successor to the venerable ISC DHCP codebase, which deals with allocation of IP addresses on a network. Mozilla uses ISC DHCP, which makes funding its replacement a natural move even though we haven’t deployed it yet.

moss_graphic_v1

On the Mission Partners track, we awarded $56,000 to Speech Rule Engine, a code library which converts mathematical markup into vocalised form (speech) for the sight-impaired, allowing them to fully appreciate mathematical and scientific content on the web.

In addition to all that, we have completed another two MOSS Track 3 Secure Open Source audits, and have more in the pipeline. The first was for the dnsmasq project. Dnsmasq is another piece of Internet plumbing – an embedded server for the DNS and DHCP protocols, used in all mainstream Linux distros, Android, OpenStack, open router projects like openWRT and DD-WRT, and many commercial routers. We’re pleased to say only four issues were found, none of them severe. The second was for the venerable zlib project, a widely-used compression library, which also passed with flying colors.

Applications for Foundational Technology and Mission Partners remain open, with the next batch deadline being the end of November 2016. Please consider whether a project you know could benefit from a MOSS award, and encourage them to apply. You can also submit a suggestion for a project which might benefit from an SOS audit.

Air MozillaBrownbag: Brand Identity

Brownbag: Brand Identity A presentation of the next round of refined designs for the Mozilla brand identity and a Q & A session about the Open Design process...

Robert KaiserThe Neverending Question of Login Systems

I put a lot of work into my content management system in the last week(s), first because I had the time to work on some ongoing backend rework/improvements (after some design improvements on this blog site and my main site) but then to tackle an issue that has been lingering for a while: the handling of logins for users.

When I first created the system (about 13 years ago), I put simple user and password input fields into place, and yes, I didn't know better (just like many people designing small sites probably did and maybe still do) and made a few mistakes, like storing passwords without enough security precautions or sending them to people in plaintext via email (I know, that causes a WTF moment even in me nowadays, but back then I didn't know better).

And I was very happy when the seemingly right solution came along: have really trustworthy people who know how to deal with it store the passwords, and delegate the login process to them, ideally in a decentralized way. In other words, I cheered for Mozilla Persona (and the BrowserID protocol) and integrated my code with that system (about 5 years ago), switching most of the small sites in this content management system over to it fully.

Yay, no need to make my system store and handle passwords in a really safe and secure way, as it didn't need to store passwords at all any more! Everything is awesome, let's tackle other issues. Or so I thought. But, in case you haven't heard, Persona is being shut down on November 30, 2016. Bummer.

So what were the alternatives for my small websites?

Well, I could go back to handling passwords myself, with a lot of research into actually secure practices, a lot of coding to get things right, probably quite a bit of bugfixing afterwards, and ongoing maintenance to keep up with ever-growing security challenges. Not really something I wanted to go with, also because it might make my server's database more attractive to break into (though there aren't many different people with actual logins).

Another alternative is using delegated login via Facebook, Google, GitHub or others (the big question is which), using the OAuth2 protocol. Now, there are two issues there: First, OAuth2 isn't really made for delegated login but for authorizing use of some resource (via an API), so it doesn't return a login identifier (e.g. an email address) but rather an access token for resources, and it needs another, potentially failure-prone roundtrip to actually get such an identifier; so it's more complicated than e.g. Persona (using it just for login is basically misusing it). Second, the OAuth2 providers I know of are entities to which I don't want to report every login on my content management system, both because their Terms of Service allow them to sell that information to anyone, and because I don't trust them enough to know about each and every one of those logins.
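To make that extra roundtrip concrete, here is a minimal sketch of the steps, using GitHub's OAuth2 endpoints as an example; the helper names and credentials are hypothetical, not taken from any real implementation:

```python
from urllib.parse import urlencode

AUTHORIZE_URL = "https://github.com/login/oauth/authorize"

def authorize_redirect(client_id, redirect_uri, state):
    """Step 1: send the user to the provider to approve access."""
    return AUTHORIZE_URL + "?" + urlencode({
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "user:email",
        "state": state,  # anti-CSRF value; must match on the way back
    })

def extract_login(token_response, user_response):
    """Steps 2 and 3: the code-for-token exchange only yields an access
    token, not an identity, so a second request to the provider's user
    API is needed before we finally learn who logged in."""
    token = token_response.get("access_token")
    if not token:
        raise ValueError("provider did not return a token")
    email = user_response.get("email")
    if not email:
        raise ValueError("no usable login identifier for this account")
    return email
```

Compared to Persona's single verify call, that second roundtrip is one more thing that can fail in the middle of a login.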

Firefox Accounts would be an interesting option: Mozilla is trustworthy when it comes to handling password data and wouldn't sell login data or anything like that, and it may support the same BrowserID assertion/verification flow as Persona (which I have implemented already), but it doesn't (yet) allow non-Mozilla sites to use it (and given that it's a CMS, I'd have multiple non-Mozilla sites to use it for). It also seems to support an OAuth2 flow, so it may be an option that way as well, if it were open to use at this point; and I need something before Persona goes away, obviously.

Other options, like "passwordless" logins that usually require a roundtrip to your email account or mobile phone on every login, sounded too inconvenient to use.

That said, I didn't find anything "better" than OAuth2 as a Persona replacement, so I took an online course on it, then put a lot of time into implementing it, and I have a prototype working with GitHub's implementation (while I don't trust them with all those logins, they felt OK enough to test against). That took quite some work as well, but some of the abstraction I did for the Persona implementation could be almost or completely reused (in the latter case, I just abstracted things to a level that works for both), and there's potential in, for example, getting more info than an email from the OAuth2 provider and prefilling some profile fields on user creation. That said, I'm still wondering about an OAuth2 provider that's trustworthy enough privacy-wise; ideally it would be just a login service, so I don't have to require people to register for a whole different web service to use my content management system. Even with the fallback alone and without the federation to IdPs, Mozilla Persona was nicely in that category, and Firefox Accounts would be as well if it were open for public use. (Even better would be if the browser itself acted as an identity/login agent and I could just get a verified email from it, as some of the ideas behind BrowserID and Firefox Accounts envisioned.)
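For comparison, the relying-party side of the BrowserID flow mentioned above boils down to posting the assertion to a verifier and checking its answer. A minimal sketch (the helper name is mine, not from the actual code):

```python
def check_verifier_response(response, expected_audience):
    """Trust the email only if the verifier said "okay" for our audience.

    `response` is the JSON body a BrowserID verifier returns after being
    POSTed the assertion together with the site's own audience.
    """
    if response.get("status") != "okay":
        raise ValueError("assertion rejected: %s" % response.get("reason"))
    if response.get("audience") != expected_audience:
        # An assertion issued for another site must never log anyone in here.
        raise ValueError("audience mismatch")
    return response["email"]
```

One roundtrip, and the verifier hands back a ready-to-use identifier; that is the simplicity OAuth2 loses.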

I also wondered about hosting my own OAuth2 provider, but then I'd need to devise secure password handling on my server yet again, which I originally wanted to avoid. And I'd need to write all that code, unless I can find out how to easily run existing code for an OAuth2 or BrowserID provider on my server.

So, I'm not really happy yet but I have something that can go into production fast if I don't find a better variant before Persona shuts down for good. Do you, dear reader, face similar issues and/or know of good solutions that can help?

Chris FinkeInterpr.it Will Be Shutting Down

Interpr.it is a platform for translating browser extensions that I launched five years ago; it will be shutting down on September 1, 2017.  I no longer have the time to maintain it, and since I stopped writing Firefox extensions, I don’t have any skin in the game either.

I’ve notified everyone that uploaded an extension so that they have ample time to download any translations (333 days). It was not a large Bcc list; although nearly six thousand users created an account during the last five years, only about two dozen of those users uploaded an extension. Eight hundred of those six thousand contributed a translation of at least one string.

For anyone interested in improving the browser extension translation process, I’d suggest writing a GlotPress plugin to add support for Firefox and Chrome-style locale files. It’s been on my todo list for so long that I’m sure I will never get to it.

Daniel Stenbergscreenshotted curl credits

If you have more or better screenshots, please share!

gta-end-credits-libcurl

This shot is taken from the ending sequence of the PC version of the game Grand Theft Auto V. 44 minutes in! See the youtube version.

curl-sky-box

Sky HD is a satellite TV box.

curl-tv-philips

This is a Philips TV. The added use of c-ares I consider a bonus!

bmw

The infotainment display of a BMW car.

ps4

Playstation 4 lists open source products it uses.

ios-credits

This is a screenshot from an iPhone open source license view. The iOS 10 screen, however, looks like this:

curl-ios10

curl in iOS 10 with an older year span than in the much older screenshot?

Instagram credits screenshot

Instagram on an iPhone.

Spotify credits screenshot

Spotify on an iPhone.

curl-virtualbox

Virtualbox (thanks to Anders Nilsson)

curl-battle-net

Battle.net (thanks Anders Nilsson)

curl-freebox

Freebox (thanks Alexis La Goutte)

curl-youtube

The Youtube app on Android. (Thanks Ray Satiro)

curl-youtube-ios

The Youtube app on iOS (Thanks Anthony Bryan)

Tarek ZiadéWeb Services Best Practices

The other day I stumbled, via Twitter, on a reddit comment about micro-services. It really nailed down the best practices around building web services, and I wanted to use it as the basis for a blog post. So all the credit for this post goes to rdsubhas :)

Web Services in 2016

The notion of micro-services rose over the past 5 years to describe the fact that our applications are getting split into smaller pieces that need to interact to provide the same service we used to provide with monolithic apps.

Splitting an app into smaller micro-services is not always the best design decision, in particular when you own all the pieces. Adding more interactions to serve a request just makes things more complex, and when something goes wrong you're dealing with a more complex system.

People often think it's easier to scale an app built from smaller blocks, but that's often not the case, and sometimes you just end up with a slower, over-engineered solution.

So why are we building micro-services?

What really happened, I think, is that most people moved their apps to cloud providers and started to use the providers' services: centralized loggers, distributed databases, and all the fancy services you can use on Amazon, Rackspace, or other places.

In the LAMP architecture, we're now building just one piece of the P and configuring up to 20 services that interact with it.

A good chunk of our daily job now is figuring out how to deploy apps, and even if tools like Kubernetes give us the promise of an abstraction on top of cloud providers, the reality is that you have to learn how AWS or another provider works to build something that works well.

Understanding how multi-zone replication works in RDS is mandatory to make sure you control your application behavior.

Because no matter how fancy and reliable all those services are, the quality of your application will be tied to its ability to deal with problems like network splits or timeouts.

That's where the shift in best practices is: when something goes wrong, it's harder to just tail your Postgres logs and your Python app logs and see what's going on. You have to deal with many parts.

Best Practices

I can't find the original post on Reddit, so I am just going to copy it here and curate it with my own opinions and with the tools we use at Mozilla. I've also removed what I see as redundant tips.

Basic monitoring, instrumentation, health check

We use statsd everywhere and services like Datadog to see what's going on in our services.

We also have two standard heartbeat endpoints that are used to monitor the services. One is a simple round trip, where the service just sends back a 200, and the other is more of a smoke test, where the service tries to use each of its own backends to make sure it can reach them and read from/write to them.

We make this distinction because the simple round-trip health check is hit very often, while the one that exercises all the backends the service uses is hit less often, to avoid generating too much traffic and load.
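A minimal sketch of the two endpoints (the names and return shapes are hypothetical, not Mozilla's actual code):

```python
def heartbeat():
    """Cheap liveness check: the process is up and answering.
    Safe for a load balancer to poll very frequently."""
    return 200, "OK"

def deep_heartbeat(backends):
    """Smoke test: try every backend dependency; meant to be hit rarely.

    `backends` maps a name to a zero-argument callable that raises on
    failure (e.g. a SELECT 1, a Redis PING, a HEAD request to a peer).
    """
    failures = {}
    for name, ping in backends.items():
        try:
            ping()
        except Exception as exc:
            failures[name] = str(exc)
    if failures:
        return 503, failures
    return 200, "OK"
```

Wiring only the cheap check to the frequent poller keeps monitoring itself from becoming a source of load.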

Distributed logging, tracing

Most of our apps are in Python, and we use Sentry to collect tracebacks and sometimes New Relic to detect problems we could not reproduce in a dev environment.

Isolation of the whole build+test+package+promote for every service.

We use Travis CI to trigger most of our builds, tests, and packaging. Having reproducible steps run in an isolated environment like a CI gives us good confidence that the service is not spaghetti-ed with other services.

The bottom line is that "git pull && make test" should work in Travis no matter what, without calling an external service. The Travis YAML file, the Makefile, and all the mocks in the tests are roughly our 3 gates to the outside world. That's as far as we go in terms of build standards.
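As an illustration only (these file contents are hypothetical, not from our actual repositories), the whole gate can be as small as:

```yaml
# .travis.yml: the only CI entry point; nothing here calls an external service
language: python
install:
  - make install    # the Makefile sets up a local virtualenv and the mocks
script:
  - make test       # must pass after a bare "git pull", every time
```

Everything the tests need beyond this lives in the Makefile and in the mocks themselves.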

Maintain backward compatibility as much as possible

The initial tip included forward compatibility. I've removed it, because I don't think it's really a thing when you build web services. Forward compatibility means that an older version of your service can accept requests from a newer version of the client. But I think that should just be a deployment issue and error management on the client side, so you don't bend your data design just to make it work with older service versions.

For backward compatibility, though, I think it's mandatory to make sure you know how to interact with older clients, whatever happens. Depending on your protocol, older clients could get an update triggered, partially work, or just work fine; but you have to get this story right even before the first version of your service is published.

But if your design has changed dramatically, maybe you need to accept that you are building something different, and just treat it as a new service (with all the pain that brings if you need to migrate data).

Firefox Sync was one complex service to migrate from its first version to its latest version because we got a new authentication service along the way.

Ready to do more TDD

I just want to comment on this tip. Doing more TDD implies that it's fine to do less TDD when you build software that's not a service.

I think this is bad advice. You should simply do TDD right. Not less or more, but right.

Doing TDD right, in my opinion, means:

  • 100% coverage, unless you have something very specific you can't mock.
  • Avoid over-mocking at all costs, because testing mocks is often slightly different from testing the real thing.
  • Make sure your tests pass all the time, and are fast to run, otherwise people will just start to skip them.
  • Functional tests are generally superior to unit tests for testing services. I often drop unit tests in service projects because everything is covered by my functional tests. Remember: you are not building a library.

Have engineering methodologies and process-tools to split down features and develop/track/release them across multiple services (xp, pivotal, scrum)

That's a good tip. Trying to reproduce, for the next service, what worked when building the previous one is a great idea.

However, this will only work if the services are built by the same team, because the whole engineering methodology is adopted and adapted by people. You can't just shove the Scrum methodology in people's faces and assume that everyone will work as described in the book. That never happens. What usually happens is that every member of the team brings their own recipes for how things should be done, which tracker to use, and what parts of XP make sense to them, and the team creates its own custom methodology out of all this. And that takes time.

Start a service with a new team, and that whole phase starts again.

Mozilla Reps CommunityIntroducing Regional Coaches

As a way to amplify the Participation team’s focused support to communities, we have created a project called Regional Coaches.

The Reps Regional Coaches project aims to bring support to all Mozilla local communities around the world, thanks to a group of excellent core contributors who will be talking with these communities and coordinating with the Reps program and the Participation team.

We divided the world into 10 regions and selected two regional coaches to take care of the countries in each region.

  • Region 1: USA, Canada
  • Region 2: Mexico, El Salvador, Costa Rica, Panama, Nicaragua, Venezuela, Colombia, Ecuador, Peru, Bolivia, Brazil, Paraguay, Chile, Argentina, Cuba
  • Region 3: Ireland, UK, France, Belgium, Netherlands, Germany, Poland, Sweden, Lithuania, Portugal, Spain, Italy, Switzerland, Austria, Slovenia, Czech Republic.
  • Region 4: Hungary, Albania, Kosovo, Serbia, Bulgaria, Macedonia, Greece, Romania, Croatia, Bosnia, Montenegro, Ukraine, Russia, Israel
  • Region 5: Algeria, Tunisia, Egypt, Jordan, Turkey, Palestine, Azerbaijan, Armenia, Iran, Morocco
  • Region 6: Cameroon, Nigeria, Burkina Faso, Senegal, Ivory Coast, Ghana
  • Region 7: Uganda, Kenya, Rwanda, Madagascar, Mauritius, Zimbabwe, Botswana
  • Region 8: China, Taiwan, Bangladesh, Japan
  • Region 9: India, Nepal, Pakistan, Sri Lanka, Myanmar
  • Region 10: Thailand, Cambodia, Malaysia, Singapore, Philippines, Indonesia, Vietnam, Australia, New Zealand.

These regional coaches are neither a power structure nor decision makers; they are there to listen to the communities and establish two-way communication to:

  • Develop a clear view of local communities’ status, problems, and needs.
  • Help local communities surface any issues or concerns.
  • Provide guidance/coaching on Mozilla’s goals to local communities.
  • Run regular check-ins with communities and volunteers in the region.
  • Coordinate with the rest of the regional coaches on a common protocol and best practices.
  • Be a bridge between communities in the same region.

We want communities to be better integrated with the rest of the org, not just to be aligned with the current organizational needs but also to allow them to be more involved in shaping the strategy and vision for Mozilla and work together with staff as a team, as One Mozilla.

We would like to ask all Reps and mozillians to support our Regional Coaches, helping them to meet communities and work with them. This project is key for bringing support to everyone, amplifying the strategy, vision and work that we have been doing from the Reps program and the Participation team.

Current status

We have on-boarded 18 regional coaches to bring support to 87 countries (wow!) around the world. They have already started to contact local communities and hold video meetings with all of them.

What have we learned so far?

Mozilla communities are very diverse, and their structure and activity levels vary widely. There is also a need for alignment with the current projects and focus activities around Mozilla, and for work to encourage mozillians to get involved in shaping the future.

In region 1, there are no big formal communities and mozillians are working as individuals or city-level groups. The challenge here is to get everyone together.

In region 2 there are a lot of communities, some of them currently re-inventing themselves to align better with focus initiatives. There is huge potential here.

Region 3 is where the oldest communities started, and there is a big difference between the old ones and the emerging ones. The challenge is to get the old ones to the same level of diverse activity and alignment as the new ones.

In region 4 the challenge is to re-activate or start communities in small countries.

Region 5 has been active for a long time, focused mainly on localization. How to align with new emerging focus areas is the main challenge here.

Regions 6 and 7 are also very diverse, with huge potential and a lot of energy. Getting mozillians supercharged again after the Firefox OS era is the big challenge.

Region 8 has some big and active communities (like Bangladesh and Taiwan) and a lot of individuals working as small groups in other countries. The challenge is to bring alignment and get the groups together.

In region 9 the challenge is to bring the huge activity and re-organization that Indian communities are doing to nearby countries, especially the ones that are not fully aligned with the new environment Mozilla is in today.

Region 10 has a couple of big active communities. The challenge is how to expand this to other countries where Mozilla has never had community presence or communities are no longer active.

Comments, feedback? We want to hear from you on Mozilla’s discourse forum.

Rubén MartínAmplifying our support to communities with Reps Regional Coaches

In my previous post, I explained how the Participation staff team was going to work with a clear focus, and today I want to explain how we are going to amplify this support to all local communities thanks to a project inside the Reps program called Regional Coaches.


Firefox NightlyThese Weeks in Firefox: Issue 2

As is fortnightly tradition, the Firefox Desktop team rallied together last Tuesday to share notes and ramblings on things that are going on. Here are some hand-picked, artisanal updates from your friendly neighbourhood Firefox team:

Highlights

Contributor(s) of the Week

Project Updates


Add-ons

Context Graph

Firefox Core Engineering

Form Auto-fill

Privacy / Security

Quality of Experience

Storage Management

Here are the raw meeting notes that were used to derive this list.

Want to help us build Firefox? Get started here!

Here’s a tool to find some mentored, good first bugs to hack on.

Robert O'Callahanrr Paper: "Lightweight User-Space Record And Replay"

Earlier this year we submitted the paper Lightweight User-Space Record And Replay to an academic conference. Reviews were all over the map, but ultimately the paper was rejected, mainly on the contention that most of the key techniques have (individually) been presented in other papers. Anyway, it's probably the best introduction to how rr works and how it performs that we currently have, so I want to make it available now in the hope that it's interesting to people.

The Servo BlogThis Week In Servo 80

In the last week, we landed 103 PRs in the Servo organization’s repositories.

One of the largest challenges for new contributors in the DOM area is understanding how all of the pieces fit together. A new blog post by jeenalee provides a very clear and compelling walkthrough of how to contribute support for a new DOM API in Servo!

In a big step forward, glennw not only turned WebRender on by default but also made it possible to use it in our testing infrastructure! WebRender can now be used with OSMesa as its backend. See this mailing list thread if you have opinions on the future of the deprecated and untested Azure/Moz2D code path.

Finally, we had two students contribute to Servo as part of the Google Summer of Code program, and we wrote up the results of their efforts. Spoilers: they did great and Servo is better for it!

Planning and Status

Our overall roadmap is available online and now includes the Q3 plans. The Q4 and 2017 planning will begin shortly!

This week’s status updates are here.

Notable Additions

  • j-koreth added instructions for building Servo on openSUSE
  • canaltinova fixed origin/clip CSS shorthand parsing behavior
  • mortimergoro fixed some issues with our WebGL shader compilation
  • bholley implemented an AtomicRefCell for use in layout/style node data
  • pcwalton integrated Servo’s time profiler with the macOS signposts
  • larsberg split up the macOS builders, getting our CI landing time back under 50 minutes
  • uk992 fixed the location of the custom bootstrap download directory on Windows with MSVC
  • mmatyas added support for WebRender on ARM devices
  • glennw switched the default renderer to WebRender
  • kichjang added support for letter-spacing in Stylo
  • pcwalton fixed some inline hypothetical box layout issues that were breaking display of Twitter
  • JanZerebecki removed the same-origin-data-url flag from the Fetch implementation
  • manish fixed the Servo documentation build
  • glennw implemented the brightness CSS filter
  • MortimerGoro fixed upside-down WebGL
  • dati91 updated WebBluetooth to use Promises
  • mrobinson simplified collection of stacking contexts during display list building
  • flacerdk implemented the word-break property’s keep-all mode
  • notriddle implemented a sequential fallback for failed float layout speculation
  • glennw made WebRender run in our CI using OSMesa for headless testing
  • manish added the Stylo unit tests in the Servo codebase
  • pcwalton improved incremental layout to make CNN look better in Servo
  • aneeshusa handled the fallout from the Homebrew 1.0 release
  • malisas implemented the Body interface for the Response and Request APIs
  • canaltinova implemented parsing for mask shorthands
  • jeenalee implemented the JS fetch API
  • zack1030 created a new ./mach test-perf command
  • coder206 improved our linux packaging by performing it in a new, clean folder

New Contributors

Interested in helping build a web browser? Take a look at our curated list of issues that are good for new contributors!

Screenshot

New time profiler support for macOS signpost event reporting:


Niko MatsakisObservational equivalence and unsafe code

I spent a really interesting day last week at Northeastern University. First, I saw a fun talk by Philip Haller covering LaCasa, which is a set of extensions to Scala that enable it to track ownership. Many of the techniques reminded me very much of Rust (e.g., the use of spores, which are closures that can limit the types of things they close over); if I have time, I’ll try to write up a more detailed comparison in some later post.

Next, I met with Amal Ahmed and her group to discuss the process of crafting unsafe code guidelines for Rust. This is one very impressive group. It’s this last meeting that I wanted to write about now. The conversation helped me quite a bit to more cleanly separate two distinct concepts in my mind.

The TL;DR of this post is that I think we can limit the capabilities of unsafe code to things you could have written using safe code plus a core set of unsafe abstractions (ignoring the fact that the safe implementation would be unusably slow or consume ridiculous amounts of memory). This is a helpful and important thing to be able to nail down.

Background: observational equivalence

One of the things that we talked about was observational equivalence and how it relates to the unsafe code guidelines. The notion of observational equivalence is really pretty simple: basically it means two bits of code do the same thing, as far as you can tell. I think it’s easiest to think of it in terms of an API. So, for example, consider the HashMap and BTreeMap types in the Rust standard library. Imagine I have some code using a HashMap<i32, T> that only invokes the basic map operations – e.g., new, get, and insert. I would expect to be able to change that code to use a BTreeMap<i32, T> and have it keep working. This is because HashMap and BTreeMap, at least with respect to i32 keys and new/get/insert, are observationally equivalent.

If I expand the set of API routines that I use, however, this equivalence goes away. For example, if I iterate over the map, then a BTreeMap gives me an ordering guarantee, whereas HashMap doesn’t.

Note that the speed and memory use will definitely change as I shift from one to the other, but I still consider them observationally equivalent. This is because I consider such changes unobservable, at least in this setting (crypto code might beg to differ).
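Here is a minimal sketch of that equivalence (and of the point where it breaks): the same new/get/insert sequence run against both map types, followed by the one operation, iteration, that tells them apart.

```rust
use std::collections::{BTreeMap, HashMap};

fn main() {
    // Identical new/get/insert sequence against both map types.
    let mut h: HashMap<i32, &str> = HashMap::new();
    let mut b: BTreeMap<i32, &str> = BTreeMap::new();
    for (k, v) in [(3, "c"), (1, "a"), (2, "b")] {
        h.insert(k, v);
        b.insert(k, v);
    }

    // Observationally equivalent under get: same answers either way.
    assert_eq!(h.get(&2), b.get(&2));

    // But iteration exposes the difference: BTreeMap guarantees key order,
    // while HashMap makes no such promise.
    let ordered: Vec<i32> = b.keys().copied().collect();
    assert_eq!(ordered, vec![1, 2, 3]);
}
```

Under new/get/insert alone, a caller holding either map cannot tell which one it has; only the iteration guarantee breaks the equivalence.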

Composing unsafe abstractions

One thing that I’ve been kind of wrestling with in the unsafe code guidelines is how to break it up. A lot of the attention has gone into thinking about some very low-level decisions: for example, if I make a *mut pointer and an &mut reference, when can they legally alias? But there are some bigger picture questions that are also equally interesting: what kinds of things can unsafe code even do in the first place, whatever types it uses?

One example that I often give has to do with the infamous setjmp/longjmp in C. These are some routines that let you implement a poor man’s exception handling. You call setjmp at one stack frame and then, down the stack, you call longjmp. This will cause all the intermediate stack frames to be popped (with no unwinding or other cleanup) and control to resume from the point where you called setjmp. You can use this to model exceptions (a la Objective C), build coroutines, and of course – this is C – to shoot yourself in the foot (for example, by invoking longjmp when the stack frame that called setjmp has already returned).

So you can imagine someone writing a Rust wrapper for setjmp/longjmp. You could easily guarantee that people use the API in a correct way: e.g., that when you call longjmp, the setjmp frame is still on the stack. But does that make it safe?

One concern is that setjmp/longjmp do not do any form of unwinding. This means that all of the intermediate stack frames are going to be popped and none of the destructors for their local variables will run. This certainly means that memory will leak, but it can have much worse effects if you try to combine it with other unsafe abstractions. Imagine for example that you are using Rayon: Rayon relies on running destructors in order to join its worker threads. So if a user of the setjmp/longjmp API wrote something like this, that would be very bad:

setjmp(|j| {
    rayon::join(
        || { /* original thread */; j.longjmp(); },
        || { /* other thread */ });
});

What is happening here is that we are first calling setjmp using our safe wrapper. I’m imagining that this takes a closure and supplies it some handle j that can be used to longjmp back to the setjmp call (basically like break on steroids). Now we call rayon::join to (potentially) spin off another thread. The way that join works is that the first closure executes on the current thread, but the second closure may get stolen and execute on another thread – in that case, the other thread will be joined before join returns. But here we are calling j.longjmp() in the first closure. This will skip right over the destructor that would have been used to join the second thread. So now potentially we have some other thread executing, accessing stack data and raising all kinds of mischief.

(Note: the current signature of join would probably prohibit this, since it does not reflect the fact that the first closure is known to execute in the original thread, and hence requires that it close over only sendable data, but I’ve contemplated changing that.)

So what went wrong here? We tried to combine two things that independently seemed safe but wound up with a broken system. How did that happen? The problem is that when you write unsafe code, you are not only thinking about what your code does, you’re thinking about what the outside world can do. And in particular you are modeling the potential actions of the outside world using the limits of safe code.

In this case, Rayon was making the assumption that when we call a closure, that closure will do one of four things:

  • loop infinitely;
  • abort the process and all its threads;
  • unwind;
  • return normally.

This is true of all safe code – unless that safe code has access to setjmp/longjmp.

This illustrates the power of unsafe abstractions. They can extend the very vocabulary with which safe code speaks. (Sorry, I know that was ludicrously flowery, but I can’t bring myself to delete it.) Unsafe abstractions can extend the capabilities of safe code. This is very cool, but also – as we see here – potentially dangerous. Clearly, we need some guidelines to decide what kinds of capabilities it is ok to add and which are not.

Comparing setjmp/longjmp and rayon

But how can we decide what capabilities to permit and which to deny? This is where we get back to this notion of observational equivalence. After all, both Rayon and setjmp/longjmp give the user some new powers:

  • Rayon lets you run code in different threads.
  • Setjmp/longjmp lets you pop stack frames without returning or unwinding.

But these two capabilities are qualitatively different. For the most part, Rayon’s superpower is observationally equivalent to safe Rust. That is, I could implement Rayon without using threads at all and you as a safe code author couldn’t tell the difference, except for the fact that your code runs slower (this is a slight simplification; I’ll elaborate below). In contrast, I cannot implement setjmp/longjmp using safe code.

But wait, you say, Just what do you mean by ‘safe code’? OK, that last paragraph was really sloppy. I keep saying things like you could do this in safe Rust, but of course we’ve already seen that the very notion of what safe Rust can do is something that unsafe code can extend. So let me try to make this more precise. Instead of talking about Safe Rust as if it were a monolithic entity, we’ll gradually build up more expressive versions of Rust by taking safe code and adding unsafe capabilities. Then we can talk more precisely about things.

Rust0 – the safe code

Let’s start with Rust0, which corresponds to what you can do without using any unsafe code at all, anywhere. Rust0 is a remarkably incapable language. The most obvious limitation is that you have no access to the heap (Box and Vec are unsafely implemented libraries), so you are limited to local variables. You can still do quite a lot of interesting things: you have arrays and slices, closures, enums, and so forth. But everything must live on the stack and hence ultimately follow a stack discipline. Essentially, you can never return anything from a function whose size is not statically known. We can’t even use static variables to stash stuff, since those are inherently shared and hence immutable unless you have some unsafe code in the mix (e.g., Mutex).

Rust1 – the heap (Vec)

So now let’s consider Rust1, which is Rust0 but with access to Vec. We don’t have to worry about how Vec is implemented. Instead, we can just think of Vec as if it were part of Rust itself (much like how ~[T] used to be, in the bad old days). Suddenly our capabilities are much increased!

For example, one thing we can do is to implement the Box type (Box<T> is basically a Vec<T> whose length is always 1, after all). We can also implement something that acts identically to HashMap and BTreeMap in pure safe code (obviously the performance characteristics will be different).

(At first, I thought that giving access to Box would be enough, but you can’t really simulate Vec just by using Box. Go ahead and try and you’ll see what I mean.)
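A minimal sketch of that Box-from-Vec construction (VecBox is a name I made up for illustration; a real Box also differs in representation and ergonomics):

```rust
// A toy Box built on Vec: a heap allocation holding exactly one value.
struct VecBox<T> {
    storage: Vec<T>, // invariant: always exactly one element
}

impl<T> VecBox<T> {
    fn new(value: T) -> VecBox<T> {
        VecBox { storage: vec![value] }
    }
    fn get(&self) -> &T {
        &self.storage[0]
    }
    fn get_mut(&mut self) -> &mut T {
        &mut self.storage[0]
    }
    fn into_inner(mut self) -> T {
        self.storage.pop().unwrap() // invariant guarantees one element
    }
}

fn main() {
    let mut b = VecBox::new(41);
    *b.get_mut() += 1;
    assert_eq!(*b.get(), 42);
    assert_eq!(b.into_inner(), 42);
}
```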

Rust2 – sharing (Rc, Arc)

This is sort of an interesting one. Even if you have Vec, you still cannot implement Rc or Arc in Rust1. At first, I thought perhaps we could fake it by cloning data – so, for example, if you want an Rc<T>, you could (behind the scenes) make a Box<T>. Then when you clone the Rc<T> you just clone the box. Since we don’t yet have Cell or RefCell, I reasoned, you wouldn’t be able to tell that the data had been cloned. But of course that won’t work, because you can use an Rc<T> for any T, not just T that implement Clone.
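The failure of the cloning trick is easy to demonstrate, because Rc<T> places no Clone bound on T. A small sketch (NotClone is my own illustrative type):

```rust
use std::rc::Rc;

// A type that deliberately does not implement Clone.
struct NotClone(i32);

fn main() {
    // Rc shares the value without requiring NotClone: Clone,
    // which is why simulating Rc by cloning the data can't work.
    let a = Rc::new(NotClone(7));
    let b = Rc::clone(&a); // clones the handle, not the value
    assert_eq!(a.0 + b.0, 14);
    assert_eq!(Rc::strong_count(&a), 2);
}
```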

Rust3 – non-atomic mutation

That brings us to another fundamental capability. Cell and RefCell permit mutation when data is shared. This can’t be modeled with just Rc, Box, or Vec, all of which maintain the invariant that mutable data is uniquely reachable.
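A minimal sketch of that capability: Rc provides the sharing, and RefCell provides mutation through a shared handle, something no combination of Rc, Box, and Vec alone permits.

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Two handles to the same value; without RefCell, data reached
    // through Rc would be immutable.
    let a: Rc<RefCell<i32>> = Rc::new(RefCell::new(0));
    let b = Rc::clone(&a);

    *a.borrow_mut() += 1;       // mutate through one handle...
    assert_eq!(*b.borrow(), 1); // ...and observe through the other
}
```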

Rust4 – asynchronous threading

This is an interesting level. Here we add the ability to spawn a thread, as described in std::thread (note that this thread runs asynchronously and cannot access data on the parent’s stack frame). At first, I thought that threading didn’t add expressive power since we lacked the ability to share mutable data across threads (we can share immutable data with Arc).

After all, you could implement std::thread in safe code by having it queue up the closure to run and then, when the current thread finishes, have it execute. This isn’t really correct for a number of reasons (what is this scheduler that overarches the safe code? Where do you queue up the data?), but it seems almost true.

But there is another way that adding std::thread is important. It means that safe code can observe memory in an asynchronous thread, which affects the kinds of unsafe code that we might write. After all, the whole purpose of this exercise is to figure out the limits of what safe code can do, so that unsafe code knows what it has to be wary of. So long as safe code did not have access to std::thread, one could imagine writing an unsafe function like this:

fn foo(x: &Arc<i32>) {
    let p: *const i32 = &*x;
    let q: *mut i32 = p as *mut i32;
    unsafe {
        *q += 1;
        *q -= 1;
    }
}

This function takes a shared i32 and temporarily increments and then decrements it. The important point here is that the invariant that the Arc<i32> is immutable is broken, but it is restored before foo returns. Without threads, safe code can’t tell the difference between foo(&my_arc) and a no-op. But with threads, foo() might trigger a data-race. (This is all leaving aside the question of compiler optimization and aliasing rules, of course.)
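Concretely, the observation channel looks like this. This is a safe-only sketch: the spawned thread merely reads through its own Arc handle, and the racing write from foo is deliberately omitted, since actually running the two concurrently would be undefined behavior.

```rust
use std::sync::Arc;
use std::thread;

fn main() {
    // With std::thread available, safe code can observe shared data
    // from another thread while some other function runs. This is what
    // makes foo's temporary mutation of the Arc<i32> observable.
    let shared: Arc<i32> = Arc::new(10);
    let peer = Arc::clone(&shared);

    // The spawned thread reads the supposedly-immutable value.
    let observer = thread::spawn(move || *peer);

    assert_eq!(observer.join().unwrap(), 10);
    assert_eq!(*shared, 10);
}
```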

(Hat tip to Alan Jeffreys for pointing this out to me.)

Rust5 – communication between threads and processes

The next level I think are abstractions that enable threads to communicate with one another. This includes both within a process (e.g., AtomicU32) and across processes (e.g., I/O).

This is an interesting level to me because I think it represents the point where the effects of a library like rayon become observable to safe code. Until this point, the only data that could be shared across Rayon threads was immutable, and hence I think the precise interleavings could also be simulated. But once you throw atomics into the mix, and in particular the control they give you over the memory model (i.e., they do not require sequential consistency), you can definitely observe whether threading is truly in use. The same is true for I/O and so forth.

So this is the level that shows that what I wrote earlier, that Rayon’s superpower is observationally equivalent to safe Rust, is actually false. I think it is observationally equivalent to safe Rust4, but not Rust5. Basically Rayon serves as a kind of Rust6, in which we grow Rust5 by adding scoped threads, which allow sharing data on stack frames.

And so on

We can keep going with this exercise, which I actually think is quite valuable, but I’ll stop here for now. What I’d like to do asynchronously is to go over the standard library and interesting third-party packages and try to nail down the core unsafe abstractions that you need to build Rust, as well as the dependencies between them.

But I want to bring this back to the core point: the focus in the unsafe code guidelines has been on exploring what unsafe code can do in the small. Basically, what types it ought to use to achieve certain kinds of aliasing and so forth. But I think it’s also very important to nail down what unsafe code can do in the large. How do we know whether (say) abomonation, deque, and so forth represent legal libraries?

As I left the meeting with Amal’s group, she posed this question to me. Is there something where all three of these things are true:

  • you cannot simulate it using the standard library;
  • you can do it with unsafe code;
  • and it's a reasonable thing to do.

Whenever the answer is yes, that’s a candidate for growing another Rust level. We already saw one yes answer in this blog post, right at the end: scoped threads, which enable threading with access to stack contents. Beyond that, most of the potential answers I’ve come up with are access to various kernel capabilities:

  • dynamic linking;
  • shared memory across processes;
  • processes themselves. =)

What’s a bit interesting about these is that they seem to be mostly about the operating system itself. They don’t feel fundamental in the same way as scoped threads: in other words, you could imagine simulating the O/S itself in safe code, and then you could build these things. Not quite how to think about that yet.

In any case, I’d be interested to hear about other fundamental abstractions that you can think of.

Coda: Picking and choosing your language levels

Oh, one last thing. It might seem like defining all these language levels is a bit academic. But it can be very useful to pick them apart. For example, imagine you are targeting a processor that has no preemption and always uses cooperative multithreading. In that case, the concerns I talked about in Rust4 may not apply, and you may be able to do more aggressive things in your unsafe code.

Comments

Please leave comments in this thread on the Rust internals forum.

Chris McDonaldi-can-manage-it Weekly Update 3

Weekly post already? But it seems like the last one was just the other day! It’s true, it has been less than a week since the last one, but I feel like the weekend is a good time for me to write these, so you’re getting another update. This post is going to be very tech heavy. So I’m going to put the less tech heavy stuff in the next couple of paragraphs, and then I’m going to explain my implementation for educational purposes.

I’m currently reading Game Engine Architecture by Jason Gregory, and one of the early chapters focuses on development tools and how important they are. My previous full-time job was building development tools for web developers, so I’ve already developed an appreciation for having them. Also, you may remember my last post, where I talked about the debugging tools I’ve added to my game.

Games require a lot of thought about the performance of the code that is written, and one of the primary metrics the game industry uses is FPS, or frames per second: the number of times a full frame is rendered to the screen each second. A common standard is 60 FPS, which is what most “high definition” monitors and TVs can display. Because the frames need to be roughly evenly spaced, each frame gets about 16.6 milliseconds to be fully calculated and rendered.

So, I built a tool to let me analyze the amount of time each frame takes to render. I knew I’d want to graph the data, and I didn’t have the ability to make graphs in my game engine; I can’t even display text. So I went with Electron, a framework that lets me use the sort of code and techniques I use for web development and am very familiar with. This screenshot shows the results:

Screen Shot 2016-10-01 at 6.43.00 PM.png

In the background is my text editor with some code, and a bunch of debug information in my terminal. On the right with the pretty colors is my game. It is over there rendering about 400-450 FPS on my mac. On the left in black and white is my stats viewer. Right now it just shows the duration of every frame. The graph sizes itself dynamically; at the moment it was showing a 2ms-25ms range. An interesting thing to note: I’m averaging 400 FPS, but I have spikes that take over 16.6ms, so the frames are not evenly spaced and it looks more like ~58 FPS.

Ok, that’s the tool I built and a brief explanation. Next, I’m going to go into the socket server I wrote to let the apps communicate. This is the very tech-heavy part, so for friends just reading along to see what I’m up to who aren’t programmers: this is the time to hit the eject button if you find that stuff boring and kinda wish I’d just shut up sometimes.

To start with, this gist has the full code that I’ll be talking about here. I’m going to use snippets interleaved with text, so refer to the gist for context if needed. This is a very simple socket server I wrote to export a few numbers out of my engine. I expect to expand it and make it more featureful, as well as bidirectional, so I can opt in or out of debugging stuff or tweak settings.

Let’s first look at the imports. I say that as if imports were interesting, but one thing to note is that I’m not using anything outside of std for my stats collection and socket server. Keep in mind this is a proof of concept, not something that needs to serve hundreds of thousands of users per second or anything.

use std::io::Write;
use std::net::TcpListener;
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

I’ve pulled in the Write trait from std::io so I can write to the sockets that connect. Next up is TcpListener, which is the standard library’s way to listen for new socket connections. Then we have channels for communicating across threads easily. Speaking of threads, I pull in that module as well.

Ok, so now that we know the pieces we’re working with, let’s talk design. I wanted my stats display to work with a single initializing call, then send data over a channel to a stats collection thread. Because channels in Rust are MPSC channels, or Multiple Producer Single Consumer channels, they can have many senders but only one consumer. This is what led to the interesting design of the initializing function seen below:

pub fn run_stats_server () -> Sender<Stats> {
    let (stats_tx, stats_rx) = channel();
    thread::Builder::new()
        .name("Stats:Collector".into())
        .spawn(move || {
            let new_socket_rx = stats_socket_server();

            let mut outputs = vec![];
            while let Ok(stats) = stats_rx.recv() {
                while let Ok(new_output) = new_socket_rx.try_recv() {
                    outputs.push(new_output);
                }

                let mut dead_ones = vec![];
                for (number, output) in outputs.iter().enumerate() {
                    if let Err(_) = output.send(stats) {
                        dead_ones.push(number);
                    }
                }

                // Remove from the highest index down so that earlier
                // indices remain valid after each removal.
                for dead in dead_ones.into_iter().rev() {
                    outputs.remove(dead);
                }
            }
        })
        .unwrap();
    stats_tx
}

Let’s work our way through this. At the start we have our function signature: run_stats_server is the name of the function; it takes no arguments and returns the Sender side of a channel carrying Stats objects. That channel is how we’ll export data from the engine to the stats collector. Next we create a channel, using the common Rust naming of tx, or “transmit”, for the Sender and rx for the Receiver side. These will send and receive Stats objects, so we name them accordingly.

Next, we start building up the thread that will house our stats collection. We make sure to give it a name so stack traces, profilers, and other development tools can help us identify what we are seeing. In this case, Stats:Collector. We spawn the thread and hand it a closure, using the move keyword to specify that values it captures from the enclosing function should become owned by the closure.

We’re going to skip the implementation of stats_socket_server() for now, except to note that it returns a Receiver<Sender<Stats>>: the receiving side of a channel that carries the sending sides of channels carrying Stats objects. Oooph, a mouthful! Remember the “interesting” design? This is the heart of it. Because any number of clients could connect to the socket over the life of the app, I needed to be able to receive from a single channel on multiple threads. But as noted above, channels are single consumer. This means I have to fan the messages out across multiple channels myself. Part of that design is that any time a new connection comes in, the stats collection service gets another channel to send to.

We make some storage for the channels we’ll be getting back from the socket server, then launch into our loop. A reader may notice that the pattern while let Ok(value) = chan_rx.recv() {} is littered all over my code. I just learned of it, and it is terribly useful for working with channels. You see that stats_rx.recv() call in the code above? It blocks the thread until something is written to stats_tx. When it does return, the value is a Result that is either Ok(T), where T is the type carried by the channel, or Err(E), where E is some error type.

Channels return an Err when you try to read or write to them and the other side of the channel has been closed. Generally, when this channel fails it is because I’ve started shutting down the main thread and the Stats:Collector thread hasn’t shut down yet. So as long as the channel is still open, the server keeps running.

Once we get past this while let, we have a new Stats object to work with. We check whether any new connections have come in and add them to the outputs vector. We do it in this order because new connections only matter if there is new data to send them; we aren’t sending any history. Notice how this loop uses try_recv() instead of recv() to get messages from the channel. That’s because we don’t want to wait for a message if there isn’t one; we just want to check and keep going. The try version of the function immediately returns an Err if no message is ready.

We make a vector to hold onto the indices of the dead channels as we try to send the stats payload to each of them. Since channels return errors when the other side has closed, we close the socket’s side of the channel when the socket closes, letting the error cascade to here. We then record the index so we can remove it later. We can’t remove it now, since we’re iterating over the vector, and Rust ensures that while something is reading the data, nothing can write to it. Also note that a channel’s send function takes ownership of the object you are sending. Since my Stats objects are pretty small and simple, I made them copyable, and Rust automatically creates a copy for each outgoing channel.

In the last part of the loop, we do a quick pass to clean up any dead channels. The only other thing of note in this function is that the thread creation uses .unwrap() as a deliberate choice: thread creation should never fail, and if it does, the application is in some state we didn’t account for (probably low memory or too many threads) and should crash. Finally, the function returns the stats_tx we made at the top.

Now we get to the other function that makes up this stats collector and server. The goal of this function is to listen for new socket connections and return channels that send to them. Without further ado, here it is:

fn stats_socket_server() -> Receiver<Sender<Stats>> {
    let (new_socket_tx, new_socket_rx) = channel();
    thread::Builder::new()
        .name("Stats:SocketServer".into())
        .spawn(move || {
            let server = TcpListener::bind("127.0.0.1:6327").unwrap();
            let mut connection_id = 0;
            for stream in server.incoming() {
                if let Ok(mut stream) = stream {
                    let (tx, rx): (_, Receiver<Stats>) = channel();
                    new_socket_tx.send(tx).unwrap();
                    thread::Builder::new()
                        .name(format!("Stats:SocketServer:Socket:{}",
                                      connection_id))
                        .spawn(move || {
                            while let Ok(stats) = rx.recv() {
                                let message = format!("[{},{}]\n",
                                                       stats.when,
                                                       stats.duration)
                                                  .into_bytes();
                                // write_all retries until the whole
                                // message is written or an error occurs.
                                if let Err(_) = stream.write_all(&message) {
                                    // Connection died.
                                    break;
                                }
                            }
                        })
                        .unwrap();
                    connection_id += 1;
                }
            }
        })
        .unwrap();
    new_socket_rx
}

We’ve already discussed the function signature above, but now we get to see the Sender side of the channel-sending channel in use. Like our first function, this one immediately creates a channel; one side, new_socket_rx, is returned at the bottom of the function, and the other we’ll use soon.

Also familiar is the thread building. This time we name it Stats:SocketServer, as that is what will be running in this thread. Moving on, TcpListener shows up: we create a new TcpListener bound to localhost on port 6327 and unwrap the value. We also create a counter we’ll use to uniquely identify the socket threads.

We use the .incoming() function much the same way as we use .recv() on channels. It yields Ok(TcpStream) on a successful connection or Err(E) when an error happens. We ignore the errors for now and grab the stream in the success case. Each stream gets its own channel, so we create one, simply named tx and rx. We send tx over new_socket_tx, which is connected to the channel-sending channel we return.

We build yet another thread. One thread per connection would be wasteful if I planned on having a lot of connections, but since I’ll typically have zero or one, a thread for each isn’t too expensive. This is where we use that connection_id counter to uniquely name the thread; because we may have several of these at the same time, naming them lets us tell them apart.

Inside the thread, we use the now-familiar pattern of .recv() to block and wait for messages. Once we get one, we format it as a two-element JSON array with a newline on the end. I didn’t want to worry about escaping or pulling in a full JSON serialization library, so I just write the values into a string and send that. The reason for the newline is so the receiving side of the socket can treat it as a “newline delimited JSON stream”, which is a convenient way to speak across languages. If there is an error writing to the socket, we break out of the loop.

The rest is just a little bookkeeping: tracking the connection_id and returning the channel-sending channel. While this description has gotten pretty long, the implementation is relatively simple. Speaking of things to build out over time, there’s one last bit of code we haven’t discussed on the Rust side: the Stats struct.

#[derive(Clone, Copy, Debug)]
pub struct Stats {
    pub when: u64,
    pub duration: u64
}

The reason I didn’t mention it sooner is that it is pretty boring. It holds two u64s, which are unsigned 64-bit integers, or whole non-negative numbers, that I send over the wire. With time this will certainly grow larger, though I’m not sure in what ways. I could have used a 2-tuple like (u64, u64) instead of a struct; as far as I know they are just as memory efficient. I went with a struct for two reasons. First, it is a name whose contents I can change without having to touch code everywhere it passes through, only where the struct is created or accessed. If I added another u64 to the tuple, the function signatures and every point where the data is created or accessed would have to change.

The other reason is proper use of the type system. There are many reasons to create a (u64, u64) that have nothing to do with stats; by creating a type, we force the API user to be specific about what their data is, both because the fields are referenced by name and because they live in a container with a very specific name. Granted, I’m the API user as well as the implementer, but in six months it may as well have been implemented by you, for how familiar it’ll be to me.

The Electron side of this is actually pretty boring. Because JS is built to work well with events, and this data comes in as a series of events, I basically just propagate them from the socket connection to Electron’s IPC (Inter-Process Communication) layer, which is one of the first things folks learn when making Electron apps. For the graph I used Smoothie, and basically just copied their example code and replaced the call to Math.random() with my data.

This project was meant to be the start of development tools for my game engine. A proof of concept for having those tools be external but hooked up via a socket. Next steps will be making the data presentation nicer, possibly making it two way so I can see what debugging tools are enabled and change those settings from this tool, and many other things.

I really hope that this explanation of some Rust code was fun and helpful. If you have questions, feel free to ask. Keep in mind this tool and code are not meant to be bulletproof, production-grade software used by many people, but more an exploration of a brain worm I had. While I’m keeping most of my source private, all the source shown here should be considered under the ISC license, which basically says: do whatever you want with it, and don’t blame me if it turns out to be terrible.


Andy McKayYour own AMO

Back when I started on addons.mozilla.org (AMO) there was a suggestion lurking in the background... "what if I wanted to run my own copy of addons.mozilla.org"?

I've never been quite sure whether that would be something someone would actually want to do, but people kept mentioning it. I think some associated Mozilla projects might have tried it for a while, but in the six years of the project I've seen zero bugs filed by anyone actually doing it. Just some talk of "well, if you wanted to do it...".

I think we can finally lay that to rest: while AMO is an open source project (and long may it stay that way) and running your own version is technically possible, it's not something Mozilla should worry about or support.

This decision is bolstered by a couple of things that happened in the add-ons community recently: add-on signing, which means that only Mozilla can sign add-ons for Firefox, and the use of Firefox Accounts for authentication.

These are things you can work around or re-purpose, but in the end you'll probably find that these things are not worth the effort when it comes down to it.

From a contribution point of view, AMO is very easy to set up and install these days: pull down the Docker containers, run them, and you're up and running. You'll have a setup really similar to production in a few minutes. As an aside: development and production actually use slightly different Docker containers, but that will be merged in the future.

From a development point of view, knowing that AMO is only ever deployed one way makes life very much easier. We don't have to support multiple OSes, environments, or combinations that will never happen in production.

Recently we've started to move to an API-driven site, which means that all the data in AMO is now exposed through an API. So if you want to do something with AMO data, the best thing to do is start playing with the API to grab some data and remix it as much as you'd like (example).

So AMO is still open source and will remain so; it just won't support every single option in its development, and I think that's a good thing.

Robert O'Callahanrr 4.4.0 Released

I just pushed out the release of rr 4.4.0. It's mostly the usual reliability and syscall coverage improvements. There are a few highlights:

  • Releases are now built with CMAKE_BUILD_TYPE=Release. This significantly improves performance on some workloads.
  • We support recording and replaying Chromium-based applications, e.g. chromium-browser and google-chrome. One significant issue was that Chromium (via SQLite) requires writes performed by syscalls to be synced automatically with memory-maps of the same file, even though this is technically not required by POSIX. Another significant issue is that Chromium spawns a Linux task with an invalid TLS area, so we had to ensure rr's injected preload library does not depend on working glibc TLS.
  • We support Linux 4.8 kernels. This was a significant amount of work because in 4.8, PTRACE_SYSCALL notifications moved to being delivered before seccomp notifications instead of afterward. (It's a good change, though, because as well as fixing a security hole, it also improves rr recording performance; the number of ptrace notifications for each ptrace-recorded syscall decreases from 3 to 2.) This also uncovered a serious (to rr) kernel bug with missing PTRACE_EVENT_EXIT notifications, which fortunately we were able to get fixed upstream (thanks to Kees Cook).
  • Keno Fischer contributed some nice performance improvements to the "slow path" where we are forced to context-switch between tracees and rr.
  • Tom Tromey contributed support for accessing thread-local variables while debugging during replay. This is notable because the "API" glibc exports for this is ghastly.

J.C. JonesLet's Encrypt's Growth to 10 Million Active Unique FQDNs

Yesterday Let's Encrypt reached a new milestone: the unique set of all fully-qualified domain names in the currently-unexpired certificates issued by Let's Encrypt is now 10,022,446.

This data is coming from the same source as my previous posts: my CT box which is maintaining a state of Censys.io and Certificate Transparency using github.com/jcjones/ct-sql and a much-abused MariaDB server.

Let's Encrypt Growth Timeline

You can take a look at the graph in live-form, as well as some of the datasets coming from it at ct.tacticalsecret.com.

This is the future of the Let's Encrypt Statistics page on letsencrypt.org. The current graphs on LE's site are my doing, and they were 20 minutes of work late one night just to get something out there (we've all done that, right?). Of course, they've stayed online as "the" stats far longer than I ever intended. Doubly problematic, those graphs' queries look like they show data all the way back to first issuance, but they actually don't: there are some LIMIT statements in there because the queries were fast and ugly.

The update to the Let's Encrypt website to use these new datasets is live as PR #61 on the LE Website repo, awaiting @LetsEncrypt_Ops time to set up the cron job. In the meanwhile, enjoy ct.tacticalsecret.com as the demo and the bleeding-edge site.

Niko MatsakisAnnouncing intorust.com

For the past year or so, I and a few others have been iterating on some tutorial slides for learning Rust. I’ve given this tutorial here at the local Boston Rust Meetup a few times, and we used the same basic approach at RustConf; I’ve been pretty happy with the results. But until now it’s been limited to in person events.

That’s why I’m so happy to announce a new site, Into Rust. Into Rust contains screencasts of many of these slides, and in particular the ones I consider most important: those that cover Ownership and Borrowing, which I think is the best place to start teaching Rust. I’ve divided up the material into roughly 30min screencasts so that they should be relatively easy to consume in one sitting – each also has some associated exercises to help make your knowledge more concrete.

I want to give special thanks to Liz Baillie, who did all the awesome artwork on the site.

Cameron Kaisergdb7 patchlevel 4 available

First, in the "this makes me happy" dept.: a Commodore 64 in a Gdansk, Poland auto repair shop still punching a clock for the last quarter-century. Take that, Phil Schiller, you elitist pig.

Second, as promised, patchlevel 4 of the TenFourFox debugger (our hacked version of gdb) is available from SourceForge. This is a minor bugfix update that wallpapers a crash when doing certain backtraces or other operations requiring complex symbol resolution. However, the minimum patchlevel to debug TenFourFox is still 2, so this upgrade is merely recommended, not required.

Yunier José Sosa VázquezHow-to: Keep Firefox from wearing out your SSD

Solid-state drives, or SSDs as they are commonly known, keep gaining ground on traditional hard disks, and practically anyone buying a modern computer will choose one of these storage units over a mechanical disk. However, SSDs are not eternal: their lifespan is limited by the number of write operations specified by their manufacturers.

With that in mind, we should be careful and stay informed about the time “left” on our SSD so we don't suddenly lose the data stored on it. If you want to know more about the topic, you can read this article published on Blogthinkbig.

According to a study by STH, the Firefox and Chrome browsers wear down SSDs by writing roughly 10 GB every day, and the main culprit is the generation of the recovery.js files used to save the current session's data in case of an unexpected crash or shutdown.

The good news for Firefox users is that this value can be changed via the about:config page. In Chrome it is not possible to adjust this setting.

In Firefox, do the following:

  1. Open the about:config page and accept the warning.
  2. Find the browser.sessionstore.interval preference and change its value to whatever you like. You will see the value 15000, which means a new recovery.js is generated every 15 seconds; simply replace that number with a larger one. 1000 equals 1 second.
  3. If you want Firefox not to save the state of what you are doing (not recommended), change the browser.sessionhistory.max_entries preference to 0. By default, 50 entries are kept in the history.

I hope this article was useful to all of you who have SSDs.

Source: omicrono

Karl Dubost[worklog] Edition 038. Exploring python 3 in between bugs

I managed to break a bit of my Python installation; I will need to figure it out next week. Taipei is coming up quickly, and it seems I will be speaking at another event in November. Tune of the Week: Jardin d'hiver

Webcompat Life

Progress this week:

Today: 2016-10-03T11:13:33.042445
347 open issues
----------------------
needsinfo       14
needsdiagnosis  109
needscontact    12
contactready    23
sitewait        166
----------------------

You are welcome to participate

Webcompat issues

(a selection of some of the bugs worked on this week).

Webcompat.com development

Reading List

Follow Your Nose

TODO

  • Document how to write tests on webcompat.com using test fixtures.
  • ToWrite: Amazon prefetching resources with <object> for Firefox only.

Otsukare!

Mitchell BakerMozilla Hosting the U.S. Commerce Department Digital Economy Board of Advisors

Today Mozilla is hosting the second meeting of the Digital Economy Board of Advisors of the United States Department of Commerce, of which I am co-chair.

Support for the global open Internet is the heart of Mozilla’s identity and strategy. We build for the digital world. We see and understand the opportunities it offers, as well as the threats to its future. We live in a world where a free and open Internet is not available to all of the world’s citizens; where trust and security online cannot be taken for granted; and where independence and innovation are thwarted by powerful interests as often as they are protected by good public policy. As I noted in my original post on being named to the Board, these challenges are central to the “Digital Economy Agenda,” and a key reason why I agreed to participate.

Department of Commerce Secretary Pritzker noted earlier this year: “we are no longer moving toward the digital economy. We have arrived.” The purpose of the Board is to advise the Commerce Department in responding to today’s new status quo. Today technology provides platforms that open new opportunities for entrepreneurs. Yet not everyone shares the benefits. The changing nature of work must also be better understood. And we struggle to measure these gains, making it harder to design policies that maximize them, and harder still to defend the future of our digital economy against myopic and reactionary interests.

The Digital Economy Board of Advisors was convened to explore these challenges, and provide expert advice from a range of sectors of the digital economy to the Commerce Department as it develops future policies. At today’s meeting, working groups within the Board will present their initial findings. We don’t expect to agree on everything, of course. Our goal is to draw out the shared conclusions and direction to provide a balanced, sustainable, durable basis for future Commerce Department policy processes. I will follow up with another post on this topic shortly.

Today’s meeting is a public meeting. There will be two live streams: one for the 8:30 am-12:30 pm PT pre-lunch session and one for the 1:30-3:00 pm PT post-lunch session. We welcome you to join us.

Although the Board has many more months left in its tenure, I can see a trend towards healthy alignment between our mission and the outcomes of the Board’s activities. I’m proud to serve as co-chair of this esteemed group of individuals.

Tim TaubertTLS Version Intolerance

A few weeks ago I listened to Hanno Böck talk about TLS version intolerance at the Berlin AppSec & Crypto Meetup. He explained how with TLS 1.3 just around the corner there again are growing concerns about faulty TLS stacks found in HTTP servers, load balancers, routers, firewalls, and similar software and devices.

I decided to dig a little deeper and will use this post to explain version intolerance, how version fallbacks work and why they’re insecure, as well as describe the downgrade protection mechanisms available in TLS 1.2 and 1.3. It will end with a look at version negotiation in TLS 1.3 and a proposal that aims to prevent similar problems in the future.

What is version intolerance?

Every time a new TLS version is specified, browsers usually are the fastest to implement and update their deployments. Most major browser vendors have a few people involved in the standardization process to guide the standard and give early feedback about implementation issues.

As soon as the spec is finished, and often well before that, clients will have been equipped with support for the new TLS protocol version and will happily announce it to any server they connect to:

Client: Hi! The highest TLS version I support is 1.2.
Server: Hi! I too support TLS 1.2 so let’s use that to communicate.
[TLS 1.2 connection will be established.]

In this case the highest TLS version supported by the client is 1.2, and so the server picks it because it supports that as well. Let’s see what happens if the client supports 1.2 but the server does not:

Client: Hi! The highest TLS version I support is 1.2.
Server: Hi! I only support TLS 1.1 so let’s use that to communicate.
[TLS 1.1 connection will be established.]

This too is how it should work if a client tries to connect with a protocol version unknown to the server. Should the client insist on a specific version and not accept the one picked by the server, it will have to terminate the connection.

Unfortunately, there are a few servers and more devices out there that implement TLS version negotiation incorrectly. The conversation might go like this:

Client: Hi! The highest TLS version I support is 1.2.
Server: ALERT! I don’t know that version. Handshake failure.
[Connection will be terminated.]

Or:

Client: Hi! The highest TLS version I support is 1.2.
Server: TCP FIN! I don’t know that version.
[Connection will be terminated.]

Or even worse:

Client: Hi! The highest TLS version I support is 1.2.
Server: (I don’t know this version so let’s just not respond.)
[Connection will hang.]

The same can happen with the infamous F5 load balancer that can’t handle ClientHello messages with a length between 256 and 512 bytes. Other devices abort the connection when receiving a large ClientHello split into multiple TLS records. TLS 1.3 might actually cause more problems of this kind due to more extensions and client key shares.

What are version fallbacks?

As browsers usually want to ship new TLS versions as soon as possible, more than a decade ago vendors saw a need to prevent connection failures due to version intolerance. The easy solution was to decrease the advertised version number by one with every failed attempt:

Client: Hi! The highest TLS version I support is 1.2.
Server: ALERT! Handshake failure. (Or FIN. Or hang.)
[TLS version fallback to 1.1.]
Client: Hi! The highest TLS version I support is 1.1.
Server: Hi! I support TLS 1.1 so let’s use that to communicate.
[TLS 1.1 connection will be established.]

A client supporting everything from TLS 1.0 to TLS 1.2 would start trying to establish a 1.2 connection, then a 1.1 connection, and if even that failed a 1.0 connection.

Why are these insecure?

What makes these fallbacks insecure is that the connection can be downgraded by a MITM, by sending alerts or TCP packets to the client, or blocking packets from the server. To the client this is indistinguishable from a network error.

The POODLE attack is one example where an attacker abuses the version fallback to force an SSL 3.0 connection. In response, browser vendors disabled version fallbacks to SSL 3.0, and then SSL 3.0 entirely, to prevent even up-to-date clients from being exploited. Insecure version fallbacks in browsers pretty much break the actual version negotiation mechanism.

Version fallbacks have been disabled in Firefox since version 37 and in Chrome since version 50. Browser telemetry data showed they were no longer necessary: after years, TLS 1.2 and correct version negotiation were deployed widely enough.

The TLS_FALLBACK_SCSV cipher suite

You might wonder whether there's a secure way to do version fallbacks, and other people did too. Adam Langley and Bodo Möller proposed a special cipher suite in RFC 7507 that helps a client detect whether the downgrade was initiated by a MITM.

Whenever the client includes TLS_FALLBACK_SCSV {0x56, 0x00} in the list of cipher suites it signals to the server that this is a repeated connection attempt, but this time with a version lower than the highest it supports, because previous attempts failed. If the server supports a higher version than advertised by the client, it MUST abort the connection.

The drawback, however, is that a client, even if it implements fallback with a Signaling Cipher Suite Value, doesn't know the highest protocol version supported by the server, nor whether the server implements a TLS_FALLBACK_SCSV check. Common web servers will likely be updated faster than others, but router or load balancer manufacturers might not deem it important enough to implement the check and ship updates for it.
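The server-side check itself is small. Here is a sketch (hypothetical function and types; versions are shown as (major, minor) wire values, so TLS 1.2 is (3, 3)):

```rust
// The SCSV from RFC 7507, {0x56, 0x00}, as a single u16 cipher suite value.
const TLS_FALLBACK_SCSV: u16 = 0x5600;

// Sketch of the server-side check: if the client signals a fallback but
// the server supports a higher version than the client advertised, a
// previous attempt was likely tampered with, so the handshake must abort.
fn check_fallback_scsv(
    client_version: (u8, u8), // highest version in the ClientHello
    server_max: (u8, u8),     // highest version the server supports
    client_ciphers: &[u16],
) -> Result<(), &'static str> {
    if client_ciphers.contains(&TLS_FALLBACK_SCSV) && server_max > client_version {
        return Err("alert: inappropriate_fallback"); // MUST abort
    }
    Ok(())
}
```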

Signatures in TLS 1.2

It has long been known to be problematic that signatures in TLS 1.2 don't cover the list of cipher suites and other messages sent before server authentication. They sign the ephemeral DH parameters sent by the server and include the *Hello.random values as nonces to prevent replay attacks:

h = Hash(ClientHello.random + ServerHello.random + ServerParams)

Signing at least the list of cipher suites would have helped prevent downgrade attacks like FREAK and Logjam. TLS 1.3 will sign all messages before server authentication, even though it makes Transcript Collision Attacks somewhat easier to mount. With SHA-1 not allowed for signatures, that will hopefully not become a problem anytime soon.

Downgrade Sentinels in TLS 1.3

With neither the client version nor its cipher suites (for the SCSV) included in the hash signed by the server’s certificate in TLS 1.2, how do you secure TLS 1.3 against downgrades like FREAK and Logjam? Stuff a special value into ServerHello.random.

The TLS WG decided to put static values (sometimes called downgrade sentinels) into the server’s nonce sent with the ServerHello message. TLS 1.3 servers responding to a ClientHello indicating a maximum supported version of TLS 1.2 MUST set the last eight bytes of the nonce to:

0x44 0x4F 0x57 0x4E 0x47 0x52 0x44 0x01

If the client advertises a maximum supported version of TLS 1.1 or below the server SHOULD set the last eight bytes of the nonce to:

0x44 0x4F 0x57 0x4E 0x47 0x52 0x44 0x00

If not connecting with a downgraded version, a client MUST check whether the server nonce ends with either of the two sentinels and, in such a case, abort the connection. The TLS 1.3 spec here introduces an update to TLS 1.2 that requires servers and clients to update their implementations.
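A client-side sketch of that check might look as follows (hypothetical function; `server_random` is the 32-byte ServerHello.random, and its last eight bytes are compared against the two sentinels quoted above):

```rust
// The two downgrade sentinels: "DOWNGRD" plus a 0x01 byte (sent when the
// client offered at most TLS 1.2) or a 0x00 byte (TLS 1.1 and below).
const DOWNGRADE_TLS12: [u8; 8] = [0x44, 0x4F, 0x57, 0x4E, 0x47, 0x52, 0x44, 0x01];
const DOWNGRADE_TLS11: [u8; 8] = [0x44, 0x4F, 0x57, 0x4E, 0x47, 0x52, 0x44, 0x00];

// A TLS 1.3 capable client that ended up negotiating an older version
// must make sure the server did not intend to speak something newer.
fn check_downgrade(server_random: &[u8; 32], negotiated_tls13: bool) -> Result<(), &'static str> {
    if negotiated_tls13 {
        return Ok(()); // no downgrade happened, nothing to check
    }
    if server_random[24..] == DOWNGRADE_TLS12 || server_random[24..] == DOWNGRADE_TLS11 {
        return Err("alert: illegal_parameter"); // downgrade sentinel found
    }
    Ok(())
}
```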

Unfortunately, this downgrade protection relies on a ServerKeyExchange message being sent and is thus of limited value. Static RSA key exchanges are still valid in TLS 1.2, and unless the server admin disables all non-forward-secure cipher suites the protection can be bypassed.

The comeback of insecure fallbacks?

Current measurements show that enabling TLS 1.3 by default would break a significant fraction of TLS handshakes due to version intolerance. According to Ivan Ristić, as of July 2016, 3.2% of servers from the SSL Pulse data set reject TLS 1.3 handshakes.

This is a very high number and would affect far too many people. Alas, with TLS 1.3 we have only limited downgrade protection for forward-secure cipher suites. And that is assuming that most servers either support TLS 1.3 or update their 1.2 implementations. TLS_FALLBACK_SCSV, if supported by the server, will help as long as there are no attacks tampering with the list of cipher suites.

The TLS working group has been thinking about how to handle intolerance without bringing back version fallbacks, and there might be light at the end of the tunnel.

Version negotiation with extensions

The next version of the proposed TLS 1.3 spec, draft 16, will introduce a new version negotiation mechanism based on extensions. The current ClientHello.version field will be frozen to TLS 1.2, i.e. {3, 3}, and renamed to legacy_version. Any number greater than that MUST be ignored by servers.

To negotiate a TLS 1.3 connection the protocol now requires the client to send a supported_versions extension. This is a list of versions the client supports, in preference order, with the most preferred version first. Clients MUST send this extension as servers are required to negotiate TLS 1.2 if it’s not present. Any version number unknown to the server MUST be ignored.
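In code, the server side of this negotiation could be sketched like this (hypothetical function; versions are the wire values, e.g. 0x0303 for TLS 1.2, and 0x7f10 stands in for a draft version of TLS 1.3):

```rust
const TLS12: u16 = 0x0303; // the frozen legacy_version value

// Sketch of server-side version selection with supported_versions:
// walk the client's list in its preference order, skip anything the
// server doesn't know, and fall back to TLS 1.2 if the extension is
// absent.
fn select_version(client_versions: Option<&[u16]>, server_supported: &[u16]) -> Option<u16> {
    match client_versions {
        None => {
            // No extension: the server is required to negotiate TLS 1.2.
            if server_supported.contains(&TLS12) { Some(TLS12) } else { None }
        }
        Some(list) => list.iter().copied().find(|v| server_supported.contains(v)),
    }
}
```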

This still leaves potential problems with big ClientHello messages or choking on unknown extensions unaddressed, but according to David Benjamin the main problem is ClientHello.version. We will hopefully be able to ship browsers that have TLS 1.3 enabled by default, without bringing back insecure version fallbacks.

However, it's not unlikely that implementers will screw up even the new version negotiation mechanism, and we'll have similar problems a few years down the road.

GREASE-ing the future

David Benjamin, following Adam Langley’s advice to have one joint and keep it well oiled, proposed GREASE (Generate Random Extensions And Sustain Extensibility), a mechanism to prevent extensibility failures in the TLS ecosystem.

The heart of the mechanism is to have clients inject “unknown values” into places where capabilities are advertised by the client, and the best match selected by the server. Servers MUST ignore unknown values to allow introducing new capabilities to the ecosystem without breaking interoperability.

These values will be advertised pseudo-randomly to break misbehaving servers early in the implementation process. Proposed injection points are cipher suites, supported groups, extensions, and ALPN identifiers. Should the server respond with a GREASE value selected in the ServerHello message the client MUST abort the connection.
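The proposed GREASE values follow a fixed pattern: both bytes are 0xXA for the same nibble X, i.e. 0x0A0A, 0x1A1A, …, 0xFAFA. A sketch of generating and recognizing them (hypothetical helpers; a real client would draw the nibble from a secure random source per connection):

```rust
// Build a GREASE value of the form 0xXAXA from a 4-bit seed nibble.
fn grease_value(seed: u8) -> u16 {
    let nibble = (seed & 0x0F) as u16;
    (nibble << 12) | 0x0A00 | (nibble << 4) | 0x000A
}

// A value is GREASE if both bytes are 0xXA with the same high nibble.
fn is_grease(v: u16) -> bool {
    (v & 0x0F0F) == 0x0A0A && (v >> 12) == ((v >> 4) & 0x0F)
}
```

A client injects one such value into its advertised cipher suites or extensions; a conforming server ignores it, and if a ServerHello ever echoes back a value for which `is_grease` is true, the client aborts.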

Kim MoirBeyond the Code 2016 recap

I've had the opportunity to attend the Beyond the Code conference for the past two years. This year the venue moved to Toronto; the previous two events had been held in Ottawa. The conference is organized by Shopify, who again managed to have a really great speaker lineup this year on a variety of interesting topics. It was a two-track conference, so I'll summarize some of the talks I attended.

The conference started off with Anna Lambert of Shopify welcoming everyone to the conference.





The first speaker was Atlee Clark, Director of App and Developer relations at Shopify who discussed the wheel of diversity.


The wheel of diversity is a way of mapping the characteristics that you're born with (age, gender, gender expression, race or ethnicity, national origin, mental/physical ability) along with those that you acquire through life (appearance, education, political belief, religion, income, language and communication skills, work experience, family, organizational role). When you look at your team, you can map how diverse it is by assigning each characteristic a colour. (Of course, some of these characteristics are personal and might not be shared with others.) If you map your team and it's mostly the same colour, then you probably will not bring different perspectives together when you work, because you all have similar backgrounds and life experiences. This is especially important when developing products.



This wheel applies to hiring too. You want to have different perspectives when you're interviewing someone. Atlee mentioned that when she was hiring for a new role, she mapped out the characteristics of the people who would be conducting the hiring interviews and found there was a lot of yellow.


So she switched up the team that would be conducting the interviews to include people with more diverse perspectives.

She finished by stating that this is just a tool, keep it simple, and practice makes it better. 

The next talk was by Erica Joy, who is a build and release engineer at Slack, as well as a diversity advocate.  I have to admit, when I saw she was going to speak at Beyond the Code, I immediately pulled out my credit card and purchased a conference ticket.  She is one of my tech heroes.  Not only did she build the build and release pipeline at Slack from the ground up, she is an amazing writer and advocate for change in the tech industry.   I highly recommend reading everything she has written on Medium, her chapter in Lean Out and all her discussions on twitter.  So fantastic.

Her talk at the conference was "Building a Diverse Corporate Culture: Diversity and Inclusion in Tech". She talked about how literally thousands of companies say they value inclusion and diversity. However, few talk about what they are willing to give up in order to achieve it. Are you willing to give up your window seat with a great view? Something else so that others can be paid fairly? She mentioned that change is never free. People need both mentorship and sponsorship in order to progress in their careers.





I really liked her discussion around hiring and referrals. She stated that when you hire people you already know, you're probably excluding equally or better qualified people that you don't know. By default, women of colour are underpaid.

Pay gap for white women, African American women and Hispanic women compared to a white man in the United States.

Some companies have referral systems that give larger referral bonuses for people who are underrepresented in tech; she gave the example of Intel, which has this in place. This is a way to incentivize your referral system so you don't just hire all your white friends.

The average white American has 91 white friends and one black friend so it's not very likely that they will refer non-white people. Not sure what the numbers are like in Canada but I'd guess that they are quite similar.
  
In addition, don't ask people to work for free, to speak at conferences or do diversity and inclusion work.  Her words were "We can't pay rent with exposure".

Spend time talking to diversity and inclusion experts. There are people who have spent their entire lives conducting research in this area, and you can learn from their expertise. Meritocracy is a myth; we are just lucky to be in the right place at the right time. She mentioned that her colleague Duretti Hirpa at Slack points out the need for accomplices, not allies: people who will actually speak up for others, so that those feeling pain or facing a difficult work environment don't have to do all the work of fighting for change.




In most companies, there aren't escalation paths for human issues either.  If a person is making sexist or racist remarks, shouldn't that be a firing offense? 

If people were really working hard on diversity and inclusion, we would see more women and people of colour on boards and in leadership positions.  But we don't.

She closed with a quote from Beyonce:

"If everything was perfect, you would never learn and you would never grow"

💜💜💜

The next talk I attended was by Coraline Ada Ehmke, who is an application engineer at Github. Her talk was about the "Broken Promise of Open Source". Open source has the core principles of the free exchange of ideas, success through collaboration, shared ownership and meritocracy.


However, meritocracy is a myth. Currently, only 6% of Github users are women. The environment can be toxic, which drives a lot of people away. She mentioned that we don't have numbers for diversity in open source other than for women, but Github plans to run a survey soon to try to acquire more data.


Gabriel Fayant from Assembly of Seven Generations gave a talk entitled "Walking in Both Worlds, traditional ways of being and the world of technology". I found this quite interesting; she talked about traditional ceremonies and how they promote the idea of living in the moment, and thus looking at your phone during a drum ceremony isn't living the full experience. A question from the audience, from someone who worked in the engineering faculty at the University of Toronto, was how we can work with indigenous communities to share our knowledge of technology and make youth producers of tech, not just consumers.

The next talk was by Sandi Metz, entitled "Madame Santi tells your future".  This was a totally fascinating look at the history of printing text from scrolls all the way to computers.

She gave the same talk at another conference earlier, so you can watch it here. It described the progression of printing technology from 7000 years ago until today. Each new technology disrupted the previous one, and it was difficult for those who worked on the previous technology to make the jump to the new one.

So according to Sandi, what is your future?
  • What you are working on now probably won't be relevant in 10 years
  • You will all die
  • All the people you love will die
  • Your body will start to fail you
  • Life is short
  • Tell people that you love them
  • Guard your health
  • Spend time with your kids
  • Get some exercise (she loves to bike)
  • We are bigger than tech
  • Community and schools need help
  • She gave the example of Habitat for Humanity where she volunteers
  • These organizations also need help to write code, they might not have the knowledge or time to do it right

The last talk I attended was by Sabrina Geremia of Google Canada. She talked about the factors that encourage a girl to consider computer science (encouragement, career perception, self-perception and academic exposure).


I found that this talk was interesting but it focused a bit too much on the pipeline argument - that the major problem is that girls are not enrolling in CS courses.  If you look at all the problems with environment, culture, lack of pay equity and opportunities for promotion due to bias, maybe choosing a career where there is more diversity is a better choice.  For instance, law, accounting and medicine have much better numbers for these issues, despite there still being an imbalance.

At the end of the day, there was a panel to discuss diversity issues:

Moderator: Ariti Sharma, Shopify, Panelists: Mohammed Asaduallah, Format, Katie Krepps, Capital One Canada, Lateesha Thomas, Dev Bootcamp, Ramya Raghavan, Google, Kara Melton, TWG, Gladstone Grant, Microsoft Canada
Some of my notes from the panel
  • Be intentional about seeking out talent
  • Fix culture to be more diverse
  • Recruit from bootcamps. Better diversity today.  Don't wait for universities to change the ratios.
  • Environment impacts retention
  • Conduct an engagement survey to see if underrepresented groups feel that their voices are being heard.
  • There is a need for sponsorship, not just mentoring.  Define a role that doesn't exist at the company.  A sponsor can make that role happen by advocating for it at higher levels
  • Mentors do better if matched with similar demographics; they will understand the challenges that you will face in the industry better than a white man who has never directly experienced sexism or racism.
  • Sponsors tend to be men due to the demographics of our industry
  • At Microsoft, when you reach a certain level you are expected to mentor an underrepresented person
  • Look at compensation and representation across diverse groups
  • Attrition is normal; it varies by region and is especially acute in San Francisco.
  • Women leave companies at 2x the rate of men due to culture
  • You shouldn't stay at a place if you are burnt out, take care of yourself.

Compared to the previous two iterations of this conference, it seemed that this time it focused a lot more on solutions to have more diversity and inclusion in your company. The previous two conferences I attended seemed to focus more on technical talks by diverse speakers.


As a side note, there were a lot of Shopify folks in attendance because they ran the conference. They sent a bus of people from their head office in Ottawa to attend it. I was really struck by how diverse some of the teams were. I met a group of women who described themselves as a team of "five badass women developers" 💯 As someone who has been the only woman on her team for most of her career, this was beautiful to see and gave me hope for the future of our industry. I've visited the Ottawa Shopify office several times (Mr. Releng works there) and I know that the representation in their office doesn't match the demographics of the Beyond the Code attendees, which tended to be more women and people of colour. But still, it is refreshing to see a company making a real effort to make their culture inclusive. I've read that it is easier to make your culture inclusive from the start, rather than trying to make difficult culture changes years later when your teams are all homogeneous. So kudos to them for setting an example for other companies.

Thank you Shopify for organizing this conference, I learned a lot and I look forward to the next one!

Mozilla Security BlogMitigating Logjam: Enforcing Stronger Diffie-Hellman Key Exchange

In response to recent developments attacking Diffie-Hellman key exchange (https://weakdh.org/) and to protect the privacy of Firefox users, we have increased the minimum key size for TLS handshakes using Diffie-Hellman key exchange to 1023 bits. A small number of servers are not configured to use strong enough keys. If a user attempts to connect to such a server, they will encounter the error “ssl_error_weak_server_ephemeral_dh_key”.

Support.Mozilla.OrgWhat’s Up with SUMO – 29th September

Hello, SUMO Nation!

Change is a constant, and Mozilla is no different. Bigger and smaller changes are coming up across many a project, including SUMO – and we need your help figuring out what they should be like. Learn more about the ways you can make us be better below!

Welcome, new contributors!

If you just joined us, don’t hesitate – come over and say “hi” in the forums!

Contributors of the week

We salute you!

Don’t forget that if you are new to SUMO and someone helped you get started in a nice way you can nominate them for the Buddy of the Month!

SUMO Community meetings

  • LATEST ONE: 28th of September- you can read the notes here and see the video at AirMozilla.
  • NEXT ONE: happening on the 5th of October!
  • If you want to add a discussion topic to the upcoming meeting agenda:
    • Start a thread in the Community Forums, so that everyone in the community can see what will be discussed and voice their opinion here before Wednesday (this will make it easier to have an efficient meeting).
    • Please do so as soon as you can before the meeting, so that people have time to read, think, and reply (and also add it to the agenda).
    • If you can, please attend the meeting in person (or via IRC), so we can follow up on your discussion topic during the meeting with your feedback.

Community

Platform

Social

  • Thank you for the SUMO Day today! It was a record day for the number of people logging in – you rock!
  • The new training for filtering in widgets is available here:

    http://screencast.com/t/llm6PF5rI2 – it also shows the new support thread-specific inbox for the dashboard.

  • Some issues popping up nowadays are startup crashes – caused by AVG and WebSense in particular.
  • Inactive accounts may be removed soon, so if you’re still active, please log in this week. If you no longer have an account, please get in touch with Rachel!
  • Want to join us? Please email Rachel and/or Madalina to get started supporting Mozilla’s product users on Facebook and Twitter. We need your help! Use the step-by-step guide here. Take a look at some useful videos:

Support Forum

Knowledge Base & L10n

  • We are 5 weeks before the next release / 1 week after the current release. What does that mean? (Reminder: we are following the process/schedule outlined here.)
    • No work on next release content for KB editors or localizers 
    • All existing content is open for editing and localization as usual; please focus on localizing the most recent / popular content
  • Since pizza turned out to be a great success, if you have ideas on how to virtually gather your l10n teammates, contact me about that!

Firefox

  • for Android
    • Version 50 is slated to come out on November 8th. It should bring video viewing and controlling improvements.
  • for Desktop
    • Version 50 (November 8th as well) will bring the following goodies:
      • WebRTC – full duplex audio streams
      • Tracking Protection supporting Do Not Track
      • Electrolysis – e10s RTL for Windows and Mac
      • First e10s sandbox for Mac OS X and Windows
      • Find in page with a mode to search for whole words only
      • New preference for cycling tabs using Ctrl + Tab
      • Improved printing options via the Reader Mode
  • for iOS
    • Still quiet… Keep using 5.0!

…and that’s it for this week! Remember that we <3 you all for being there for the users when it matters most! Keep rocking the helpful web!

Soledad PenadesTalking about Web Audio in WeCodeSign Podcast

I recorded an episode for the WeCodeSign podcast. It’s in Spanish!

You can download / listen from their website.

We actually talked about more than Web Audio; there’s a list of links to things we mentioned during the episode. From progressive enhancement to Firefox’s Web Audio editor, to the old PCMania tracking stories, to Firefox for iOS… lots of things!

I was really pleased with the experience. The guys were really good at planning, and did a great job editing the podcast as well (and they use Audacity!).

Totally recommended—in fact I suggested that both my fantastic colleague Belén and the very cool Buriticá are interviewed at some point in the future.

I’d love to hear what they have to say!

Throwback to the last time I recorded a podcast in Spanish – at least this time I wasn't fighting a massive cold! 🙃


Air MozillaReps Weekly Meeting

This is a weekly call with some of the Reps to discuss all matters about/affecting Reps and invite Reps to share their work with everyone.

Mozilla Addons BlogWebExtensions in Firefox 51

Firefox 51 landed in Developer Edition this week, so we have another update on WebExtensions for you. In this update, we’re making it easier for you to port your existing add-ons to WebExtensions. In addition to being fully compatible with multiprocess Firefox, WebExtensions are becoming the standard for add-on development.

Embedded WebExtensions

In Firefox Developer Edition, you can now embed a WebExtensions add-on inside an existing SDK or bootstrapped add-on.

This is especially useful to developers of SDK or bootstrapped add-ons who want to start migrating to WebExtensions and take advantage of new APIs like Native Messaging, but can’t fully migrate yet. It’s also useful for developers who want to complete data migration towards WebExtensions, and who want to take parts of their add-on that are not compatible with multiprocess Firefox and make them compatible.

For more documentation on this, please head over to MDN or check out some examples.

If you need help porting to WebExtensions, please start with the compatibility checker, and check out these resources.

Manifest Change

Because of confusion around the use of strict_min_version in WebExtensions manifests, we've prevented the use of * in strict_min_version; for example, 48.* is no longer valid. If you upload an add-on to addons.mozilla.org, we'll warn you of that fact.
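For example, a manifest that used to rely on the wildcard now needs an explicit version (a sketch using the manifest keys of the time; the add-on ID shown is hypothetical):

```json
{
  "applications": {
    "gecko": {
      "id": "my-addon@example.com",
      "strict_min_version": "48.0"
    }
  }
}
```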

API Changes

The clipboardWrite permission is now enabled, which removes the need to be in a user gesture. It is usable from extension tabs, popups and content scripts.

When a WebExtensions add-on is uninstalled, any local storage is now cleared. If you’d like to persist data across an uninstall then you can use the upcoming sync storage.

The management API now supports the uninstallSelf and getSelf methods. The idle.queryState API has been updated to accurately reflect the state; previously it always returned the value “idle”.

In the webRequest API, onBeforeRequest is now supported in Firefox Nightly and Developer Edition. There are some platform changes that are required to get that to land in a Release version of Firefox.

Developers have been testing out Native messaging, and a couple of bugs were filed and fixed on that. New, more detailed documentation has been written. One of the useful pieces of feedback involved the round-trip time, and that performance has now improved.

There have been a few improvements to the appearance of popup windows, including the popup arrow, the corners of the popup, and reduced flicker in the animation. Here’s a before and after:

popup-before

popup-after

Out of process extensions

Now that the majority of the work on multiprocess Firefox has been completed, we are looking ahead to the many improvements it can bring. One of them is allowing WebExtensions to be run in a separate process. This process sandboxing of add-ons will bring clear performance and security benefits.

But before we can do that, there is quite a bit of work that needs to be done. The main tracking bug lists some of these tasks. There is also a video of Rob Wu presenting the work he has done on this. There currently isn’t a timeline for when this will be landed, but the work is progressing.

Recognition

We’d also like to say thank you to four new contributors to WebExtensions who’ve helped with this release. Thanks to sj, Jorg K, fiveNinePlusR and Tomislav.

Update: link to Robs presentation fixed.

Firefox NightlyFirefox Nightly got its “What’s New” page back last week!

Years ago, every time we released a new version of Firefox and bumped the version number for all Firefox channels, nightly builds also got a “What’s New” page displayed at restart after that major version number change (this old page is still available on the WayBack Machine and you can even see a video with ex-QA team lead Juan Becerra).

Then, at some point (Bug 748503), the call to that What’s New page was redirected to the First Run page. It made sense at the time as nobody was actively maintaining that content and it had not been updated in years, but it was also shutting down one of the few direct communication channels with our Nightly users.

Kohei Yoshino and I worked on resurrecting that page and turning it into a simple yet effective communication channel with our Nightly users, where they can get news about what’s new in the Nightly world.

What's New page for Nightly

Unlike the old page we had, this new updated version is integrated correctly into the mozilla.org framework (bedrock), which means that we inherit the nice templates they create and have a workflow that allows localization of the page (see the French and Japanese versions), and we might even be able to provide conditional content based on geolocation in the future.

We have created this page with the objective of increasing participation and communication with our core technical users, and we intend to update it periodically and make it useful not only to Mozilla, with calls for feedback and testing of recently landed features, but also to Nightly users (how about having a monthly power-user tip there, for example?).

If you have ideas on what information could be part of this What’s New page, don’t hesitate to leave a comment on the blog or to reach out to me directly (pascal At mozilla Dot com)!

CREDITS

Many thanks to Kohei for his great work on the design and the quality of his code. Thanks to the rest of the Release Management team and in particular to Liz Henry and Marcia Knous for helping fix my English! Many thanks to the mozilla.org webdev team for helping with reviews and suggesting nice visual tricks such as the responsive multi-column layout and improved typography tips for readability. Finally, thanks to the localizers that took the time to translate that page in a couple of days before we shipped it even though the expected audience is very small!

BONUS

We were asked via our @FirefoxNightly Twitter account if we could provide the nice background on the What’s New page as a wallpaper for desktop. Instead of providing the file, I am showing you in the following video tutorial how you can do it yourself with the Firefox Nightly Developer Tools. Enjoy hacking with your browser and the Web; that’s what Nightly is for!

Michael KaplyKeyword Search is No Longer Feeling Lucky

UPDATE: I put a hack into Keyword Search that automatically clicks on the first result if you are using “I’m Feeling Lucky.” This is the best I can do for now.

I’m getting a lot of reports that the Google “I’m Feeling Lucky” option is no longer working with Keyword Search. Unfortunately Google seems to have broken this in their latest search update even though they’ve left the button on the homepage. There’s nothing I can really do to work around it at this time.

If you want a similar feature, you can switch to DuckDuckGo and use their “I’m Feeling Ducky” option.

Daniel Stenberg25,000 curl questions on stackoverflow

Over time, I’ve reluctantly come to terms with the fact that a lot of questions and answers about curl are not handled on the mailing lists we have set up in the project itself.

A primary such external site with curl related questions is of course stackoverflow – hardly news to programmers of today. The questions tagged with curl are of course only a very tiny fraction of the vast amount of questions and answers that accumulate on that busy site.

The pile of questions tagged with curl on stackoverflow has just surpassed the staggering number of 25,000. Of course, these questions involve persons who ask about particular curl behaviors (and a large portion is about PHP/CURL), but there’s also a significant number of tagged questions where curl is only used to do something, and that other something is actually what the question is about. ‘libcurl’ is used as a separate tag, often independently of the ‘curl’ one; libcurl is tagged on almost 2,000 questions.

But still. 25,000 questions. Wow.

I visit that site every so often and answer some questions, but I often end up feeling a great “distance” between me and the questions there, and I have a hard time bridging that gap. Also, stackoverflow the site and its format aren’t really suitable for debugging or solving problems within curl, so I often end up trying to get the user to move over and file an issue on curl’s github page or discuss the curl problem on a mailing list instead: forums more suitable for plenty of back-and-forth before the solution or fix is figured out.

Now, any bets for how long it takes until we reach 100K questions?

Niko MatsakisDistinguishing reuse from override

In my previous post, I started discussing the idea of intersection impls, which are a possible extension to specialization. I am specifically looking at the idea of making it possible to add blanket impls to (e.g.) implement Clone for any Copy type. We saw that intersection impls, while useful, do not enable us to do this in a backwards compatible way.

Today I want to dive a bit deeper into specialization. We’ll see that specialization actually couples together two things: refinement of behavior and reuse of code. This is no accident, and its normally a natural thing to do, but I’ll show that, in order to enable the kinds of blanket impls I want, it’s important to be able to tease those apart somewhat.

This post doesn’t really propose anything. Instead it merely explores some of the implications of having specialization rules that are not based purely on subsets of types, but instead go into other areas.

Requirements for backwards compatibility

In the previous post, my primary motivating example focused on the Copy and Clone traits. Specifically, I wanted to be able to add an impl like the following (we’ll call it impl A):

impl<T: Copy> Clone for T { // impl A
    default fn clone(&self) -> T {
        *self
    }
}

The idea is that if I have a Copy type, I should not have to write a Clone impl by hand. I should get one automatically.

The problem is that there are already lots of Clone impls in the wild (in fact, every Copy type has one, since Copy is a subtrait of Clone, and hence implementing Copy requires implementing Clone too). To be backwards compatible, we have to do two things:

  • continue to compile those Clone impls without generating errors;
  • give those existing Clone impls precedence over the new one.

The last point may not be immediately obvious. What I’m saying is that if you already had a type with a Copy and a Clone impl, then any attempts to clone that type need to keep calling the clone() method you wrote. Otherwise the behavior of your code might change in subtle ways.
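Concretely, on stable Rust today a hand-written Clone impl on a Copy type is the only candidate, so it is the one that runs; here is a minimal runnable sketch (the Point type and its fields are mine) of the behavior that must be preserved:

```rust
// A Copy type with a hand-written Clone impl. Today this impl is the only
// candidate for `.clone()`; the backwards-compatibility requirement is that
// it keeps winning even after a blanket `impl<T: Copy> Clone for T` exists.
#[derive(Debug, PartialEq)]
struct Point { x: i32, y: i32 }

impl Copy for Point {}

impl Clone for Point {
    fn clone(&self) -> Point {
        println!("hand-written clone ran");
        Point { x: self.x, y: self.y }
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p.clone(); // invokes the hand-written impl above
    assert_eq!(q, Point { x: 1, y: 2 });
}
```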

So for example imagine that I am developing a widget crate with some types like these:

struct Widget<T> { data: Option<T> }

impl<T: Copy> Copy for Widget<T> { } // impl B

impl<T: Clone> Clone for Widget<T> { // impl C
    fn clone(&self) -> Widget<T> {
        Widget {
            data: self.data.clone()
        }
    }
}

Then, for backwards compatibility, we want that if I have a variable widget of type Widget<T> for any T (including cases where T: Copy, and hence Widget<T>: Copy), then widget.clone() invokes impl C.

Thought experiment: Named impls and explicit specialization

For the purposes of this post, I’d like to engage now in a thought experiment. Imagine that, instead of using type subsets as the basis for specialization, we gave every impl a name, and we could explicitly specify when one impl specializes another using that name. When I say that an impl X specializes an impl Y, I mean primarily that items in impl X override items in impl Y:

  • When we go looking for an associated item, we use the one in X first.

However, in the specialization RFC as it currently stands, specializing is also tied to reuse. In particular:

  • If there is no item in X, then we go looking in Y.

The point of this thought experiment is to show that we may want to separate these two concepts.

To avoid inventing syntax, I’ll use a #[name] attribute to specify the name of an impl and a #[specializes] attribute to declare when one impl specializes another. So we might declare our two Clone impls from the previous section as follows:

#[name = "A"]
impl<T: Copy> Clone for T {...}

#[name = "C"]
#[specializes = "A"]
impl<T: Clone> Clone for Widget<T> {...}

Interestingly, it turns out that this scheme of using explicit names interacts really poorly with the reuse aspects of the specialization RFC. The Clone trait is kind of too simple to show what I mean, so let’s consider an alternative trait, Dump, which has two methods:

trait Dump {
    fn display(&self);
    fn debug(&self);
}

Now imagine that I have a blanket implementation of Dump that applies to any type that implements Debug. It defines both display and debug to print to stdout using the Debug trait. Let’s call this impl D.

#[name = "D"]
impl<T> Dump for T
    where T: Debug,
{
    default fn display(&self) {
        self.debug()
    }

    default fn debug(&self) {
        println!("{:?}", self);
    }
}

Now, maybe I’d like to specialize this impl so that, for types that also implement Display, display uses their Display implementation instead. I don’t want to change the behavior for debug, so I leave that method unchanged. This is sort of analogous to subtyping in an OO language: I am refining the impl for Dump by tweaking how it behaves in certain scenarios. We’ll call this impl E.

#[name = "E"]
#[specializes = "D"]
impl<T> Dump for T
    where T: Display + Debug,
{
    fn display(&self) {
        println!("{}", self);
    }
}

So far, everything is fine. In fact, if you just remove the #[name] and #[specializes] annotations, this example would work with specialization as currently implemented. But imagine that we did a slightly different thing. Imagine we wrote impl E but without the requirement that T: Debug (everything else is the same). Let’s call this variant impl F.

#[name = "F"]
#[specializes = "D"]
impl<T> Dump for T
    where T: Display,
{
    fn display(&self) {
        println!("{}", self);
    }
}

Now we no longer have the subset of types property. Because of the #[specializes] annotation, impl F specializes impl D, but in fact it applies to an overlapping, but different set of types (those that implement Display rather than those that implement Debug).

But losing the subset-of-types property makes the reuse in impl F invalid. Impl F only defines the display() method, and it claims to inherit the debug() method from impl D. But how can it do that? The code in impl D was written under the assumption that the types it applies to implement Debug, and it uses methods from the Debug trait. Clearly we can’t reuse that code, since if we did so we might not have the methods we need.

So the takeaway here is that if an impl A wants to reuse some items from impl B, then impl A must apply to a subset of impl B’s types. That guarantees that the item from impl B will still be well-typed inside of impl A.

What does this mean for copy and clone?

Interesting thought experiment, you are thinking, but how does this relate to `Copy` and `Clone`? Well, it turns out that if we ever want to be able to add things like an autoconversion impl between Copy and Clone (and Ord and PartialOrd, etc), we are going to have to move away from subsets of types as the sole basis for specialization. This implies we will have to separate the concept of when you can reuse (which requires a subset of types) from when you can override (which can be more general).

Basically, in order to add a blanket impl backwards compatibly, we have to allow impls to override one another in situations where reuse would not be possible. Let’s go through an example. Imagine that – at timestep 0 – the Dump trait was defined in a crate dump, but without any blanket impl:

// In crate `dump`, timestep 0
trait Dump {
    fn display(&self);
    fn debug(&self);
}

Now some other crate widget implements Dump for its type Widget, at timestep 1:

// In crate `widget`, timestep 1
extern crate dump;

struct Widget<T> { ... }

// impl G:
impl<T: Debug> Debug for Widget<T> {...}

// impl H:
impl<T> Dump for Widget<T> {
    fn display(&self) {...}
    fn debug(&self) {...}
}

Now, at timestep 2, we wish to add an implementation of Dump that works for any type that implements Debug (as before):

// In crate `dump`, timestep 2
impl<T> Dump for T // impl I
    where T: Debug,
{
    default fn display(&self) {
        self.debug()
    }

    default fn debug(&self) {
        println!("{:?}", self);
    }
}

If we assume that this set of impls will be accepted – somehow, under any rules – we have created a scenario very similar to our explicit specialization. Remember that we said in the beginning that, for backwards compatibility, we need to make it so that adding the new blanket impl (impl I) does not cause any existing code to change what impl it is using. That means that Widget<T>: Dump needs to keep resolving to impl H, the original impl from the crate widget, even if impl I also applies.

This basically means that impl H overrides impl I (that is, in cases where both impls apply, impl H takes precedence). But impl H cannot reuse from impl I, since impl H does not apply to a subset of the blanket impl’s types. Rather, these impls apply to overlapping but distinct sets of types. For example, the Widget impl applies to all Widget<T>, even in cases where T: Debug does not hold. But the blanket impl applies to i32, which is not a widget at all.

Conclusion

This blog post argues that if we want to support adding blanket impls backwards compatibly, we have to be careful about reuse. I actually don’t think this is a mega-big deal, but it’s an interesting observation, and one that wasn’t obvious to me at first. It means that subset of types will always remain a relevant criterion that we have to test for, no matter what rules we wind up with (which might in turn mean that intersection impls remain relevant).

The way I see this playing out is that we have some rules for when one impl specializes another. Those rules do not guarantee a subset of types, and in fact the impls may merely overlap. If, additionally, one impl matches a subset of the other’s types, then that first impl may reuse items from the other impl.

PS: Why not use names, anyway?

You might be thinking to yourself right now boy, it is nice to have names and be able to say explicitly what we specialized by what. And I would agree. In fact, since specializable impls must mark their items as default, you could easily imagine a scheme where those impls had to also be given a name at the same time. Unfortunately, that would not at all support my copy-clone use case, since in that case we want to add the base impl after the fact, and hence the extant specializing impls would have to be modified to add a #[specializes] annotation. Also, we tried giving impls names back in the day; it felt quite artificial, since they don’t have an identity of their own, really.

Comments

Since this is a continuation of my previous post, I’ll just re-use the same internals thread for comments.

Christian HeilmannQuick tip: using modulo to re-start loops without the need of an if statement

the more you know

A few days ago Jake Archibald posted a JSBin example of five ways to center vertically in CSS to stop the meme of “CSS is too hard and useless”. What I found really interesting in this example is how he animated showing the different examples (this being a CSS demo, I probably would’ve done a CSS animation and delays, but he wanted to support OldIE, hence the use of className instead of classList):

var els = document.querySelectorAll('p');
var showing = 0;
setInterval(function() {
  // this is easier with classlist, but meh:
  els[showing].className = els[showing].className.replace(' active', '');
  showing = (showing + 1) % 5;
  els[showing].className += ' active';
}, 4000);

The interesting part to me here is the showing = (showing + 1) % 5; line. This means that if showing is 4, showing becomes 0, thus starting the looping demo back from the first example. This is the remainder operator of JavaScript, giving you the remainder of dividing the first value by the second. So, in the case of (4 + 1) % 5, this is zero.

Whenever I used to write something like this, I’d do an if statement, like:

showing++;
if (showing === 5) { showing = 0; }

Using the remainder seems cleaner, especially when, instead of the hard-coded 5, you just use the length of the element collection.

var els = document.querySelectorAll('p');
var all = els.length;
var c = 'active';
var showing = 0;
setInterval(function() {
  els[showing].classList.remove(c);
  showing = (showing + 1) % all;
  els[showing].classList.add(c);
}, 4000);

A neat little trick to keep in mind.

Chris McDonaldi-can-manage-it Weekly Update 2

A little over a week ago, I started this series about the game I’m writing. Welcome to the second installment. It took a little longer than a week to get around to writing. I wanted to complete the task I set out for myself at the end of my last post, determining what tile the user clicked on, before coming back and writing up my progress. But while we’re on the topic, the “weekly” will likely be a loose amount of time. I’ll aim for each weekend, but I don’t want guilt from not posting getting in the way of building the game.

Also, you may notice the name changed just a little bit. I decided to go with the self-motivating and cuter name of i-can-manage-it. The name better captures my state of mind when I’m building this. I just assume I can solve a problem and keep working on it until I understand how to solve it or why that approach is not as good as some other approach. I can manage building this game, you’ll be able to manage stuff in the game, we’ll all have a grand time.

So with the intro out of the way, let’s talk progress. I’m going to bullet-point the things I’ve done and then discuss them in further detail below.

  • Learned more math!
  • Built a bunch of debugging tools into my rendering engine!
  • Can determine what tile the mouse is over!
  • Wrote my first special effect shader!

Learned more math!

If you are still near enough to high school to remember a good amount of the math from it and want to play with computer graphics, keep practicing it! So far I haven’t needed anything terribly advanced to do the graphics I’m currently rendering. In my high school, Algebra 2 covered matrix math to a minor degree. Back then I didn’t realize that this was a start into linear algebra. Similarly, I didn’t consider all the angle and area calculations in geometry to be an important life lesson, just neat attributes of the world expressed in math.

In my last post I mentioned this blog post on 3d transformations, which talks about several but not necessarily all of the coordinate systems a game would have. So, I organized my world coordinate system, the coordinates that my map outputs and game rules use, so that it matched how X and Y change in OpenGL coordinates. X, as you’d expect, gets larger going toward the right of the screen. And if you’ve done much math or looked at graphs, you’ve seen demonstrations of Y getting larger going toward the top. OpenGL works this way, and so I made my map render this way.

You then apply a series of 4×4 matrices that correspond to things like moving the object to where it should be in world coordinates from its local coordinates, which are the coordinates that might be exported from 3d modelling or generated by the game engine. You also apply a 4×4 matrix for the window’s aspect ratio, zoom, pan, and probably other stuff too.

That whole transform process I described above results in a bunch of points that aren’t even on the screen. OpenGL determines that by looking at points between -1 and 1 on each axis; anything outside of that range is culled, which means that the graphics card won’t put it on the screen.

I learned that a neat property of these matrices is that many of them are invertible, which means you can invert the matrix, apply it to a point on the screen, and get back where that point is in your world coordinates. If we wanted to know what object was at the center of the screen, we’d take that inverted matrix and multiply it by {x: 0, y: 0, z: 0, w: 1} (as far as I can tell, the w serves to make this math magic all work) and get back what world coordinates were at the center of the view. In my case, because my world is 2d, that means I can just calculate what tile is at that x and y coordinate and what is the top-most thing on that tile. If you had a 3d world, you’d then need to do something like ray casting, which sends a ray out from the specified point along the camera’s z axis until it encounters something (or hits the back edge).
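The inversion idea can be sketched with a toy 2D pan-and-zoom transform (a simplified stand-in of my own, not the actual engine code or cgmath’s 4×4 matrices): the forward transform maps world coordinates into OpenGL’s -1 to 1 range, and its inverse maps a screen point back into the world.

```rust
// A toy view transform: pan the world, then zoom it into NDC (-1..1).
#[derive(Clone, Copy)]
struct View { pan_x: f32, pan_y: f32, zoom: f32 }

impl View {
    fn world_to_ndc(&self, wx: f32, wy: f32) -> (f32, f32) {
        ((wx - self.pan_x) * self.zoom, (wy - self.pan_y) * self.zoom)
    }
    // the inverse transform: undo the zoom, then undo the pan
    fn ndc_to_world(&self, nx: f32, ny: f32) -> (f32, f32) {
        (nx / self.zoom + self.pan_x, ny / self.zoom + self.pan_y)
    }
}

fn main() {
    let view = View { pan_x: 10.0, pan_y: -4.0, zoom: 0.25 };
    // the center of the screen (0, 0 in NDC) maps back to the pan point
    assert_eq!(view.ndc_to_world(0.0, 0.0), (10.0, -4.0));
    // round-tripping a point through the transform returns it unchanged
    let (nx, ny) = view.world_to_ndc(12.0, -3.0);
    assert_eq!(view.ndc_to_world(nx, ny), (12.0, -3.0));
}
```

With the world coordinates in hand, finding the tile is just flooring the x and y.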

I spent an afternoon at the library and wrote a few example programs to test this inversion stuff, checking my pen-and-paper math using the cgmath crate. That way I could make sure I understood the math, as well as how to make cgmath do the same thing. I definitely ran into a few snags where I multiplied or added the wrong numbers when working on paper due to taking shortcuts. Taking the time to also write the math using code meant I’d catch these errors quickly and then correct how I thought about things. It was so productive and felt great. Also, being surrounded by knowledge in the library is one of my favorite things.

Built a bunch of debugging tools into my rendering engine!

Through my career, I’ve found that the longer you expect a project to last, the more time you should spend on making sure it is debuggable. Since I expect this project to take up the majority of my spare hacking time for at least a few years, maybe even becoming the project I work on longer than any other project before it, I know that each debugging tool is probably a sound investment.

Every time I add a one-off debugging tool, I work on it for a while, getting it to a point where it solves my problem at hand. Then, once I’m ready to clean up my code, I think about how many other types of problems that debugging tool might solve and how hard it would be to make it easy to access in the future. Luckily, most debugging tools are more awesome when you can toggle them on the fly. If the tool is easy to toggle, I definitely leave it in until it causes trouble adding a new feature.

An example of adapting a tool to keep it: my FPS (frames per second) counter was logging the FPS to the console every second and had become a hassle when working on other problems, because other log lines would scroll by due to the FPS chatter. So I added a key to toggle the FPS printing while keeping the calculation running every frame. I’d thought about removing the calculation too, but decided I’ll probably want to track that metric for a long time, so it should probably be a permanent fixture and cost.

A tool I’m pretty proud of had to do with my tile map rendering. My tiles are rendered as a series of triangles, 2 per tile, stitched into a triangle strip, which is a series of points where every three consecutive points form a triangle. I also used degenerate triangles, which are triangles that have no area, so OpenGL doesn’t render them. I generate this triangle strip once, then save it and reuse it with some updated metadata on each frame.
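The stitching can be sketched like this (a simplified version of the idea, not the actual engine code): one strip covers a w×h grid of unit tiles, and repeating the last vertex of a row plus the first vertex of the next row produces the zero-area joins.

```rust
// Build one triangle strip for a w x h tile grid. Each row is a zig-zag of
// top/bottom vertices; between rows we duplicate two vertices, producing
// degenerate (zero-area) triangles that the GPU skips.
fn tile_strip(w: usize, h: usize) -> Vec<(f32, f32)> {
    let mut v = Vec::new();
    for row in 0..h {
        let (top, bot) = (row as f32, row as f32 + 1.0);
        if row > 0 {
            // repeat the previous vertex and this row's first vertex:
            // the triangles they form have no area, so nothing is drawn
            let last = *v.last().unwrap();
            v.push(last);
            v.push((0.0, top));
        }
        for col in 0..=w {
            v.push((col as f32, top));
            v.push((col as f32, bot));
        }
    }
    v
}

fn main() {
    let strip = tile_strip(3, 2);
    // each row contributes (w+1)*2 vertices, plus 2 degenerate joins per seam
    assert_eq!(strip.len(), 2 * (3 + 1) * 2 + 2);
}
```

Rendering a slice of this vector, as described below, is what makes stepping through the vertices so easy.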

I had some of the points mixed up, causing triangles that crossed the whole map and rendered over the tiles. I added the ability to switch to line drawing instead of filled triangles, which helped some of the debugging because I could see more of the triangles. Then I realized I could take a slice of the triangle strip and only render the first couple points. By adding a couple key bindings I could make that dynamic, so I could step through the vertices and verify the order they were drawn in. I immediately found the issue and felt how powerful this debug tool could be.

Debugging takes up an incredible amount of time; I’m hoping that by making sure I’ve got a large toolkit, I’ll be able to quickly overcome any bug that comes up.

Can determine what tile the mouse is over!

I spent time learning and relearning the math mentioned in the first bullet point to solve this problem. But I found another bit of math I needed for this. Because of how older technology worked, mouse pointer coordinates start in the upper left of the screen and grow bigger going toward the right (like OpenGL) and toward the bottom (the opposite of OpenGL). Also, because OpenGL coordinates are a -1 to 1 range for the window, I needed to turn the mouse pointer coordinates into that range as well.
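That conversion is small but easy to get backwards; a sketch of it (a simplified version, not the actual engine code):

```rust
// Convert window mouse coordinates (origin top-left, y growing downward)
// into OpenGL's normalized device coordinates (-1..1, y growing upward).
fn mouse_to_ndc(mx: f32, my: f32, width: f32, height: f32) -> (f32, f32) {
    let x = mx / width * 2.0 - 1.0;
    let y = 1.0 - my / height * 2.0; // flip: top of the window is +1 in NDC
    (x, y)
}

fn main() {
    // the top-left corner of an 800x600 window is (-1, 1) in NDC
    assert_eq!(mouse_to_ndc(0.0, 0.0, 800.0, 600.0), (-1.0, 1.0));
    // the center of the window is (0, 0)
    assert_eq!(mouse_to_ndc(400.0, 300.0, 800.0, 600.0), (0.0, 0.0));
}
```

Doing the flip exactly once, in one place, is what the bug hunt below was all about.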

This inversion of the Y coordinate was a huge source of my problems for a couple days. To make a long story short, I inverted the Y coordinate when I first got it, then I was inverting it again when I was trying to work out what tile the mouse was over. This was coupled with me inverting the Y coordinate in the triangle strip from my map, instead of using a matrix transform, to account for how I was drawing the map to the console. This combination of bugs meant that if I didn’t pan the camera at all, I could get the tile the mouse was over correctly. But as soon as I panned up or down, the Y coordinate would be off, moving in the opposite direction of the panning. It took me a long time to hunt this combination of bugs down.

But the days of debugging made me take a lot of critical looks at my code, and I took the time to clean up my code and math. Not abstracting it really, just organizing it into more logical blocks and moving some things out of the rendering loop, only recalculating them as needed. This may sound like optimization, but the goal wasn’t to make the code faster, just more logically organized. I also got a bunch of neat debugging tools in addition to the couple I mentioned above.

So while this project took me a bit longer than expected, I made my code better and am better prepared for my next project.

Wrote my first special effect shader!

I was attempting to rest my brain from the mouse pointer problem by working on shader effects. It was something I wanted to start learning, and I set a goal of having a circle at the mouse pointer that moves outwards. I spent most of my hacking on Sunday on this problem, and here are the results. In the upper left, click the 2 and change it to 0.5 to make it nice and smooth. Hide the code in the upper left if that isn’t interesting to you.

First off, glslsandbox is pretty neat. I was able to immediately start experimenting with a shader that had mouse interaction. I started by trying to draw a box around the mouse pointer. I did this because it was simple and I figured calculating the circle would be more expensive than checking the bounding box. I was quickly able to get there. Then a bit of Pythagorean theorem, thanks high school geometry, and I was able to calculate a circle.

The only trouble was that it wasn’t actually a circle; it was an elliptical disc instead, matching the aspect ratio of the window. Because my window was a rectangle instead of a square, my circle reflected that the window was shorter than it was wide. In the interest of just getting things working, I pulled the orthographic projection I was using in my rendering engine, translated it to glsl, and it worked!
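The underlying fix can be sketched outside of glsl (a simplified version of the idea, not the actual shader): scale the horizontal distance by the window’s aspect ratio before applying the Pythagorean theorem, so the test describes a true circle rather than an ellipse.

```rust
// Is (px, py) within radius r of (cx, cy), in normalized window coordinates?
// Without the aspect correction, the same r covers a wider span horizontally
// on a wide window, producing an elliptical disc instead of a circle.
fn in_circle(px: f32, py: f32, cx: f32, cy: f32, r: f32, aspect: f32) -> bool {
    let dx = (px - cx) * aspect; // correct for a non-square window
    let dy = py - cy;
    (dx * dx + dy * dy).sqrt() <= r
}

fn main() {
    // on a 2:1 window, a point 0.4 to the right of center falls outside an
    // r = 0.5 circle once corrected (0.4 * 2.0 = 0.8 > 0.5) ...
    assert!(!in_circle(0.4, 0.0, 0.0, 0.0, 0.5, 2.0));
    // ... while the same distance vertically is inside
    assert!(in_circle(0.0, 0.4, 0.0, 0.0, 0.5, 2.0));
}
```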

Next was to add another circle on the inside, which was pretty simple because I’d already done it once, and to scale the circle’s size with time. Honestly, despite all the maybe scary-looking math on that page, it was relatively simple to toss together. I know there are whole research papers on just parts of graphical effects, but it is good to know that some of the simpler ones can be tossed together in a few hours. Then later, if I decide I really want to use the effect, I can take the time to deeply understand the problem and write a version using fewer operations to be more efficient.

On that note, I’m not looking for feedback on that shader I wrote. I know the math is inefficient and the code is pretty messy. I want to use this shader as practice for taking an effect shader and making it faster. Once I’ve exhausted my knowledge and research I’ll start soliciting friends for feedback; thanks for respecting that!

Wrapping up this incredibly long blog post I want to say everyone in my life has been so incredibly supportive of me building my own game. Co-workers have given me tips on tools to use and books to read, friends have given input on the ideas for my game engine helping guide me in an area I don’t know well. Last and most amazing is my wife, who listens to me prattle away about my problems in my game engine or how some neat math thing I learned works, and then encourages me with her smile.

Catch you in the next update!


Mitchell BakerUN High Level Panel and UN Secretary General Ban Ki-moon issue report on Women’s Economic Empowerment

“Gender equality remains the greatest human rights challenge of our time.”  UN Secretary General Ban Ki-moon, September 22, 2016.

To address this challenge the Secretary General championed the 2010 creation of UN Women, the UN’s newest entity. To focus attention on concrete actions in the economic sphere he created the “High Level Panel on Women’s Economic Empowerment” of which I am a member.

The Panel presented its initial findings and commitments last week during the UN General Assembly Session in New York. Here is the Secretary General with the co-chairs, the heads of the IMF and the World Bank, the Executive Director of UN Women, and the moderator and founder of All Africa Media, each of whom is a panel member.

UN General Assembly Session in New York

Photo Credit: Anar Simpson

The findings are set out in the Panel’s initial report. Key to the report is the identification of drivers of change, which have been deemed by the panel to enhance women’s economic empowerment:

  1. Breaking stereotypes: Tackling adverse social norms and promoting positive role models
  2. Leveling the playing field for women: Ensuring legal protection and reforming discriminatory laws and regulations
  3. Investing in care: Recognizing, reducing and redistributing unpaid work and care
  4. Ensuring a fair share of assets: Building assets—Digital, financial and property
  5. Businesses creating opportunities: Changing business culture and practice
  6. Governments creating opportunities: Improving public sector practices in employment and procurement
  7. Enhancing women’s voices: Strengthening visibility, collective voice and representation
  8. Improving sex-disaggregated data and gender analysis

Chapter Four of the report describes a range of actions that are being undertaken by Panel Members for each of the above drivers. For example under the Building assets driver: DFID and the government of Tanzania are extending land rights to more than 150,000 Tanzanian women by the end of 2017. Tanzania will use media to educate people on women’s land rights and laws pertaining to property ownership. Clearly this is a concrete action that can serve as a precedent for others.

As a panel member, Mozilla is contributing to the work on Building Assets – Digital. Here is my statement during the session in New York:

“Mozilla is honored to be a part of this Panel. Our focus is digital inclusion. We know that access to the richness of the Internet can bring huge benefits to Women’s Economic Empowerment. We are working with technology companies in Silicon Valley and beyond to identify those activities which provide additional opportunity for women. Some of those companies are with us today.

Through our work on the Panel we have identified a significant interest among technology companies in finding ways to do more. We are building a working group with these companies and the governments of Costa Rica, Tanzania and the U.A.E. to address women’s economic empowerment through technology.

We expect the period from today’s report through the March meeting to be rich with activity. The possibilities are huge and the rewards great. We are committed to an internet that is open and accessible to all.”

You can watch a recording of the UN High Level Panel on Women’s Economic Empowerment here. For my statement, view starting at: 2.07.53.

There is an immense amount of work to be done to meet the greatest human rights challenge of our time. I left the Panel’s meeting hopeful that we are on the cusp of great progress.

Hub FiguièreIntroducing gudev-rs

A couple of weeks ago, I released gudev-rs, Rust wrappers for gudev. The goal was to be able to receive events from udev into a Gtk application written in Rust. I had a need for it, so I made it and shared it.

It is mostly auto-generated using gir-rs from the gtk-rs project. The license is MIT.

Source code

If you have problems, suggestions, or patches, please feel free to submit them.

The Rust Programming Language BlogAnnouncing Rust 1.12

The Rust team is happy to announce the latest version of Rust, 1.12. Rust is a systems programming language with the slogan “fast, reliable, productive: pick three.”

As always, you can install Rust 1.12 from the appropriate page on our website, and check out the detailed release notes for 1.12 on GitHub. 1361 patches were landed in this release.

What’s in 1.12 stable

The release of 1.12 might be one of the most significant Rust releases since 1.0. We have a lot to cover, but if you don’t have time for that, here’s a summary:

The largest user-facing change in 1.12 stable is the new error message format emitted by rustc. We’ve previously talked about this format, and this is the first stable release where it is broadly available. These error messages are the result of many hours of volunteer effort to design, test, and update every one of rustc’s errors to the new format. We’re excited to see what you think of them:

A new borrow-check error

The largest internal change in this release is moving to a new compiler backend based on the new Rust MIR. While this feature does not result in anything user-visible today, it paves the way for a number of future compiler optimizations, and for some codebases it already results in improvements to compile times and reductions in code size.

Overhauled error messages

With 1.12 we’re introducing a new error format which helps to surface a lot of the internal knowledge about why an error is occurring to you, the developer. It does this by putting your code front and center, highlighting the parts relevant to the error with annotations describing what went wrong.

For example, in 1.11 if an implementation of a trait didn’t match the trait declaration, you would see an error like the one below:

An old mismatched trait error

In the new error format we represent the error by instead showing the points in the code that matter the most. Here is the relevant line in the trait declaration, and the relevant line in the implementation, using labels to describe why they don’t match:

A new mismatched trait error

Initially, this error design was built to aid in understanding borrow-checking errors, but we found, as with the error above, the format can be broadly applied to a wide variety of errors. If you would like to learn more about the design, check out the previous blog post on the subject.

Finally, you can also get these errors as JSON with a flag. Remember that error we showed above, at the start of the post? Here’s an example of attempting to compile that code while passing the --error-format=json flag:

$ rustc borrowck-assign-comp.rs --error-format=json
{"message":"cannot assign to `p.x` because it is borrowed","level":"error","spans":[{"file_name":"borrowck-assign-comp.rs","byte_start":562,"byte_end":563,"line_start":15,"line_end":15,"column_start":14,"column_end":15,"is_primary":false,"text":[{"text":"    let q = &p;","highlight_start":14,"highlight_end":15}],"label":"borrow of `p.x` occurs here","suggested_replacement":null,"expansion":null}],"label":"assignment to borrowed `p.x` occurs here","suggested_replacement":null,"expansion":null}],"children":[],"rendered":null}
{"message":"aborting due to previous error","code":null,"level":"error","spans":[],"children":[],"rendered":null}

We’ve actually elided a bit of this for brevity’s sake, but you get the idea. This output is primarily for tooling; we are continuing to invest in supporting IDEs and other useful development tools. This output is a small part of that effort.

MIR code generation

The new Rust “mid-level IR”, usually called “MIR”, gives the compiler a simpler way to think about Rust code than its previous way of operating entirely on the Rust abstract syntax tree. It makes analysis and optimizations possible that have historically been difficult to implement correctly. The first of many upcoming changes to the compiler enabled by MIR is a rewrite of the pass that generates LLVM IR, what rustc calls “translation”, and after many months of effort the MIR-based backend has proved itself ready for prime-time.

MIR exposes perfect information about the program’s control flow, so the compiler knows exactly whether types are moved or not. This means it knows statically whether or not a value’s destructor needs to run. In cases where a value may or may not be moved at the end of a scope, the compiler now simply uses a single bitflag on the stack, which is in turn easier for LLVM’s optimization passes to reason about. The end result is less work for the compiler and less bloat at runtime. In addition, because MIR is a simpler ‘language’ than the full AST, it’s also easier to write compiler passes for, and easier to verify that they are correct.
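The conditional-drop case can be illustrated with a small sketch (the types and names here are invented for illustration, not taken from the compiler): whether `v`'s destructor runs at the end of its scope depends on a runtime condition, which is exactly the situation the stack bitflag tracks:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Hypothetical type: `Noisy` records when its destructor runs.
struct Noisy {
    name: &'static str,
    log: Rc<RefCell<Vec<String>>>,
}

impl Drop for Noisy {
    fn drop(&mut self) {
        self.log.borrow_mut().push(format!("dropping {}", self.name));
    }
}

fn consume(v: Noisy) {
    v.log.borrow_mut().push(format!("consumed {}", v.name));
    // `v` was moved into this function, so its destructor runs here.
}

// Whether `v`'s destructor runs at the end of the inner block depends
// on `take`, so the compiler must track the move at runtime (with MIR,
// via a single drop flag on the stack).
fn demo(take: bool) -> Vec<String> {
    let log = Rc::new(RefCell::new(Vec::new()));
    {
        let v = Noisy { name: "v", log: log.clone() };
        if take {
            consume(v); // moved on this branch only
        }
        // if `take` was false, `v` is still live and is dropped here
    }
    Rc::try_unwrap(log).unwrap().into_inner()
}

fn main() {
    assert_eq!(demo(true), vec!["consumed v", "dropping v"]);
    assert_eq!(demo(false), vec!["dropping v"]);
    println!("drop order matches on both paths");
}
```

On the `take` path the destructor runs inside `consume`; on the other path it runs at the end of the block, and the drop flag is what tells the generated code which case holds.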

Other improvements

See the detailed release notes for more.

Library stabilizations

This release sees a number of small quality-of-life improvements for various types in the standard library:

See the detailed release notes for more.

Cargo features

The biggest feature added to Cargo this cycle is “workspaces.” Defined in RFC 1525, workspaces allow a group of Rust packages to share the same Cargo.lock file. If you have a project that’s split up into multiple packages, this makes it much easier to keep shared dependencies on a single version. To enable this feature, most multi-package projects need to add a single key, [workspace], to their top-level Cargo.toml, but more complex setups may require more configuration.
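As a concrete sketch, a hypothetical two-package project (a top-level `app` crate with a subdirectory package `core-utils`) would opt in with just the `[workspace]` table in its top-level Cargo.toml:

```toml
# Top-level Cargo.toml of a hypothetical project; `core-utils`
# is a package in a subdirectory of the same repository.
[package]
name = "app"
version = "0.1.0"

# An empty [workspace] table is enough: path dependencies become
# workspace members and share the single top-level Cargo.lock.
[workspace]

[dependencies]
core-utils = { path = "core-utils" }
```

With this in place, builds in either package resolve dependencies against the one shared lockfile.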

Another significant feature is the ability to override the source of a crate. Using this with tools like cargo-vendor and cargo-local-registry allow vendoring dependencies locally in a robust fashion. Eventually this support will be the foundation of supporting mirrors of crates.io as well.
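To sketch how the override is wired up (the `vendored-sources` name and `vendor` directory below are just the conventions used by cargo-vendor, not requirements), a `.cargo/config` file redirects the crates.io source to a local directory:

```toml
# .cargo/config — replace the crates.io source with a vendored copy.
[source.crates-io]
replace-with = "vendored-sources"

[source.vendored-sources]
directory = "vendor"
```

Cargo then resolves every crates.io dependency from the `vendor` directory instead of the network.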

There are some other improvements as well:

See the detailed release notes for more.

Contributors to 1.12

We had 176 individuals contribute to 1.12. Thank you so much!

  • Aaron Gallagher
  • abhi
  • Adam Medziński
  • Ahmed Charles
  • Alan Somers
  • Alexander Altman
  • Alexander Merritt
  • Alex Burka
  • Alex Crichton
  • Amanieu d’Antras
  • Andrea Pretto
  • Andre Bogus
  • Andrew
  • Andrew Cann
  • Andrew Paseltiner
  • Andrii Dmytrenko
  • Antti Keränen
  • Aravind Gollakota
  • Ariel Ben-Yehuda
  • Bastien Dejean
  • Ben Boeckel
  • Ben Stern
  • bors
  • Brendan Cully
  • Brett Cannon
  • Brian Anderson
  • Bruno Tavares
  • Cameron Hart
  • Camille Roussel
  • Cengiz Can
  • CensoredUsername
  • cgswords
  • Chiu-Hsiang Hsu
  • Chris Stankus
  • Christian Poveda
  • Christophe Vu-Brugier
  • Clement Miao
  • Corey Farwell
  • CrLF0710
  • crypto-universe
  • Daniel Campbell
  • David
  • [email protected]
  • Diggory Blake
  • Dominik Boehi
  • Doug Goldstein
  • Dridi Boukelmoune
  • Eduard Burtescu
  • Eduard-Mihai Burtescu
  • Evgeny Safronov
  • Federico Ravasio
  • Felix Rath
  • Felix S. Klock II
  • Fran Guijarro
  • Georg Brandl
  • ggomez
  • gnzlbg
  • Guillaume Gomez
  • hank-der-hafenarbeiter
  • Hariharan R
  • Isaac Andrade
  • Ivan Nejgebauer
  • Ivan Ukhov
  • Jack O’Connor
  • Jake Goulding
  • Jakub Hlusička
  • James Miller
  • Jan-Erik Rediger
  • Jared Manning
  • Jared Wyles
  • Jeffrey Seyfried
  • Jethro Beekman
  • Jonas Schievink
  • Jonathan A. Kollasch
  • Jonathan Creekmore
  • Jonathan Giddy
  • Jonathan Turner
  • Jorge Aparicio
  • José manuel Barroso Galindo
  • Josh Stone
  • Jupp Müller
  • Kaivo Anastetiks
  • kc1212
  • Keith Yeung
  • Knight
  • Krzysztof Garczynski
  • Loïc Damien
  • Luke Hinds
  • Luqman Aden
  • m4b
  • Manish Goregaokar
  • Marco A L Barbosa
  • Mark Buer
  • Mark-Simulacrum
  • Martin Pool
  • Masood Malekghassemi
  • Matthew Piziak
  • Matthias Rabault
  • Matt Horn
  • mcarton
  • M Farkas-Dyck
  • Michael Gattozzi
  • Michael Neumann
  • Michael Rosenberg
  • Michael Woerister
  • Mike Hommey
  • Mikhail Modin
  • mitchmindtree
  • mLuby
  • Moritz Ulrich
  • Murarth
  • Nick Cameron
  • Nick Massey
  • Nikhil Shagrithaya
  • Niko Matsakis
  • Novotnik, Petr
  • Oliver Forral
  • Oliver Middleton
  • Oliver Schneider
  • Omer Sheikh
  • Panashe M. Fundira
  • Patrick McCann
  • Paul Woolcock
  • Peter C. Norton
  • Phlogistic Fugu
  • Pietro Albini
  • Rahiel Kasim
  • Rahul Sharma
  • Robert Williamson
  • Roy Brunton
  • Ryan Scheel
  • Ryan Scott
  • saml
  • Sam Payson
  • Samuel Cormier-Iijima
  • Scott A Carr
  • Sean McArthur
  • Sebastian Thiel
  • Seo Sanghyeon
  • Shantanu Raj
  • ShyamSundarB
  • silenuss
  • Simonas Kazlauskas
  • srdja
  • Srinivas Reddy Thatiparthy
  • Stefan Schindler
  • Stephen Lazaro
  • Steve Klabnik
  • Steven Fackler
  • Steven Walter
  • Sylvestre Ledru
  • Tamir Duberstein
  • Terry Sun
  • TheZoq2
  • Thomas Garcia
  • Tim Neumann
  • Timon Van Overveldt
  • Tobias Bucher
  • Tomasz Miąsko
  • trixnz
  • Tshepang Lekhonkhobe
  • ubsan
  • Ulrik Sverdrup
  • Vadim Chugunov
  • Vadim Petrochenkov
  • Vincent Prouillet
  • Vladimir Vukicevic
  • Wang Xuerui
  • Wesley Wiser
  • William Lee
  • Ximin Luo
  • Yojan Shrestha
  • Yossi Konstantinovsky
  • Zack M. Davis
  • Zhen Zhang
  • 吴冉波

Mozilla Addons BlogHow Video DownloadHelper Became Compatible with Multiprocess Firefox

Today’s post comes from Michel Gutierrez (mig), the developer of Video DownloadHelper, among other add-ons. He shares his story about the process of modernizing his XUL add-on to make it compatible with multiprocess Firefox (e10s).

***

Video DownloadHelper (VDH) is an add-on that extracts videos and image files from the Internet and saves them to your hard drive. As you surf the Web, VDH will show you a menu of download options when it detects something it can save for you.

It was first released in July 2006, when Firefox was on version 1.5. At the time, both the main add-on code and DOM window content were running in the same process. This was helpful because video URLs could easily be extracted from the window content by the add-on. The Smart Naming feature was also able to extract video names from the Web page.

When multiprocess Firefox architecture was first discussed, it was immediately clear that VDH needed a full rewrite with a brand new architecture. In multiprocess Firefox, DOM content for webpages runs in a separate process, which means the amount of asynchronous communication required with the add-on code increases significantly. It wasn’t possible to simply adapt the existing code and architecture, because doing so would have made the code hard to read and unmaintainable.

The Migration

After some consideration, we decided to update the add-on using SDK APIs. Here were our requirements:

  • Code running in the content process needed to run separately from code running in JavaScript modules and the main process. Communication must occur via message passing.
  • Preferences needed to be available in the content process, as there are many adjustable parameters that affect the user interface.
  • Localization of HTML pages within the content script should be as easy as possible.

In VDH, the choice was made to handle all of these requirements with the same client-server architecture commonly used in regular Web applications: the components that have access to the preferences, localization, and data storage APIs (running in the main process) serve this data, through the messaging API provided by the SDK, to the UI components and the components injected into the page (running in the content process).

Limitations

Migrating to the SDK enabled us to become compatible with multiprocess Firefox, but it wasn’t a perfect solution. Low-level SDK APIs, which aren’t guaranteed to work with e10s or stay compatible with future versions of Firefox, were required to implement anything more than simple features. Also, an increased amount of communication between processes is required even for seemingly simple interactions.

  • Resizing content panels can only occur in the background process, but only the content process knows what the dimensions should be. This gets more complicated when the size dynamically changes or depends on various parameters.
  • Critical features of VDH, like monitoring network traffic or launching external programs, require low-level APIs.
  • Capturing tab thumbnails from the Add-on SDK API does not work in e10s mode. This feature had to be reimplemented in the add-on using a framescript.
  • When intercepting network responses, the Add-on SDK does not decode compressed responses.
  • The SDK provides no easy means to determine if e10s is enabled or not, which would be useful as long as glitches remain where the add-on has to act differently.

Future Direction

Despite these limitations, making VDH compatible with multiprocess Firefox was a great success. Taking the time to rewrite the add-on also improved the general architecture and prepared it for the changes needed for WebExtensions. The first e10s-compatible version of VDH was version 5.0.1, which has been available since March 2015.

Looking forward, the next big challenge is making VDH compatible with WebExtensions. We considered migrating directly to WebExtensions, but the legacy and low-level SDK APIs used in VDH could not be replaced at the time without compromising the add-on’s features.

To fully complete the transition to WebExtensions, additional APIs may need to be created. As extension developers, we’ve found it helpful to work with Mozilla to define those APIs and design them to be general enough to be useful in many other types of add-ons.

A note from the add-ons team: resources for migrating your add-ons to WebExtensions can be found here.

The Servo BlogWrapping up Google Summer of Code 2016

This year Servo had two students working on projects as part of the Google Summer of Code program. Rahul Sharma (creativcoder) tackled the daunting project of implementing initial support for the new ServiceWorker API, under the mentorship of jdm, while Zhen Zhang (izgzhen) implemented missing support for files, blobs, and related APIs under the direction of Manishearth. Let’s see how they did!

ServiceWorker support

Three months is not enough time for a single student to implement the entire set of features defined by the ServiceWorker specification, so the goal of this project was to implement the fundamental pieces required for sharing workers between multiple pages, along with the related DOM APIs for registering and interacting with the workers, and finally enabling interception of network requests to support the worker’s fetch event.

Notable pull requests include:

Rahul put together a full writeup of his activities as part of GSoC; please check it out! His mentor was very pleased with Rahul’s work over the course of the summer - it was a large, complex task, and he tackled it with enthusiasm and diligence. Congratulations on completing the project, and thank you for your efforts, Rahul!

File/Blob support

The second project by Zhen Zhang was to implement most of the File specification. The scope of the project included things like file upload form controls, manipulating files from the DOM, and the creation/management of blob URIs. As of now almost all of the spec is implemented, except for the ability to construct and serialize Blobs to/from ArrayBuffers (due to the lack of ArrayBuffer bindings at the time), URL.createFor, and handling fragments in blob URIs.

Notable pull requests include:

Status updates and design docs live in this repo. The midterm summary is a particularly good read, as it explains the preliminary design for the refcounted blob store. Zhen was quite fun to work with, and showed lots of initiative in exploring solutions. Thank you for your help, Zhen!

Armen Zambrano[NEW] Added build status updates - Usability improvements for Firefox automation initiative - Status update #6

[NEW] Starting with this newsletter, we will also cover build automation improvements, since they help with the end-to-end time of your pushes.

In this update we will look at the progress made in the last two weeks.

A reminder that this quarter’s main focus is on:
  • Debugging tests on interactive workers (only Linux on TaskCluster)
  • Improve end to end times on Try (Thunder Try project)

For all bugs and priorities you can check out the project management page for it:

Status update:
Debugging tests on interactive workers
---------------------------------------------------

Accomplished recently:
  • Fixed regression that broke the interactive wizard
  • Support for Android reftests landed

Upcoming:
  • Support for Android xpcshell
  • Video demonstration


Thunder Try - Improve end to end times on try
---------------------------------------------

Project #1 - Artifact builds on automation
##########################################

Accomplished recently:
  • Windows and Mac artifact builds are soon to land
  • |mach try| now supports --artifact option
  • Compiled-code test jobs error out early when run with --artifact on try

Upcoming:
  • Windows and Mac artifact builds available on Try
  • Fix triggering of test jobs on Buildbot with artifact build

Project #2 - S3 Cloud Compiler Cache
####################################

Nothing new in this edition.

Project #3 - Metrics
####################

Accomplished recently:
  • Drill-down charts, which lead to a detailed view, with optional wait times included (missing the 10% outliers, so it looks almost the same)


Upcoming:
  • Iron out interactivity bugs
  • Show outliers
  • Post these (static) pages to my people page
  • Fix ActiveData production to handle these queries (I am currently using a development version of ActiveData, but that version has some nasty anomalies)

Project #4 - Build automation improvements
##########################################
Upcoming:


Project #5 - Run Web platform tests from the source checkout
############################################################
Accomplished recently:
  • WPT is now running from the source checkout in automation

Upcoming:
  • There are still parts of automation relying on a test zip. The next step is to minimize those, so you can get a loaner, pull any revision from any repo, and test WPT changes in an environment that is exactly what the automation tests run in.

Other
#####
  • Bug 1300812 - Make Mozharness downloads and unpacks actions handle better intermittent S3/EC2 issues
    • This adds retry logic to reduce intermittent oranges


Creative Commons License
This work by Zambrano Gasparnian, Armen is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 Unported License.

Air MozillaThe Joy of Coding - Episode 73

The Joy of Coding - Episode 73 mconley livehacks on real Firefox bugs while thinking aloud.

Cameron KaiserFirefox OS goes Tier WONTFIX

I suppose it shouldn't be totally unexpected given the end of FirefoxOS phone development a few months ago, but a platform going from supported to take-it-completely-out-of-mozilla-central in less than a year is rather startling: not only has commercial development on FirefoxOS completely ceased (at version 2.6), but the plan is to remove all B2G code from the source tree entirely. It's not an absolutely clean comparison to us because some APIs are still relevant to current versions of macOS (née OS X), but even support for our now ancient cats was only gradually removed in stages from the codebase, and even some portions of pre-10.4 code persisted until relatively recently. The new state of FirefoxOS, where platform support is actually unwelcome in the repository, is beyond the lowest state of Tier-3, where even our own antiquated port lives. Unofficially this would be something like the "tier WONTFIX" BDS referenced in mozilla.dev.planning a number of years ago.

There may be a plan to fork the repository, but they'd need someone crazy dedicated to keep chugging out builds. We're not anything on that level of nuts around here. Noooo.

The Mozilla BlogFirefox’s Test Pilot Program Launches Three New Experimental Features

Earlier this year we launched our first set of experiments for Test Pilot, a program designed to give you access to experimental Firefox features that are in the early stages of development. We’ve been delighted to see so many of you participating in the experiments and providing feedback, which ultimately, will help us determine which features end up in Firefox for all to enjoy.

Since our launch, we’ve been hard at work on new innovations, and today we’re excited to announce the release of three new Test Pilot experiments. These features will help you share and manage screenshots; keep streaming video front and center; and protect your online privacy.

What Are The New Experiments?

Min Vid:

Keep your favorite entertainment front and center. Min Vid plays your videos in a small window on top of your other tabs so you can continue to watch while answering email, reading the news or, yes, even while you work. Min Vid currently supports videos hosted by YouTube and Vimeo.

Page Shot:

The print screen button doesn’t always cut it. The Page Shot feature lets you take, find and share screenshots with just a few clicks by creating a link for easy sharing. You’ll also be able to search for your screenshots by their title, and even the text captured in the image, so you can find them when you need them.

Tracking Protection:

We’ve had Tracking Protection in Private Browsing for a while, but now you can block trackers that follow you across the web by default. Turn it on, and browse free and breathe easy. This experiment will help us understand where Tracking Protection breaks the web so that we can improve it for all Firefox users.

How do I get started?

Test Pilot experiments are currently available in English only. To activate Test Pilot and help us build the future of Firefox, visit testpilot.firefox.com.

As you’re experimenting with new features within Test Pilot, you might find some bugs, or lose some of the polish from the general Firefox release, so Test Pilot allows you to easily enable or disable features at any time.

Your feedback will help us determine what ultimately ends up in Firefox – we’re looking forward to your thoughts!