From its inception, npm has been keenly focused on open source values. As we’ve grown as a company, however, we’ve learned the important lesson that making source code available under an open license is the bare minimum for open source software. To take it even further, we’ve also learned that “open source” doesn’t necessarily mean community-driven. With these insights in mind, the web team has decided to make some changes to the community interface of npm’s website — with the goal of creating a more efficient and effective experience for everyone involved.
- npm/newww is being retired and made private
- npm/www has been created for new issues and release notes

npm/newww

As you may (or may not!) have noticed, the repo that used to house npm’s website (npm/newww) isn’t in sync with the production website (http://www.npmjs.com).
A few months back, the team made the executive decision to close source the npm website. There were several reasons for this:
This was a super tough call, and there were strong arguments on both sides. In the end, though, the team reached a unified understanding that this was the best call for both the company and the community. The repo will officially shut down tomorrow, Friday, July 29, 2016.
One of the things we’re aware of is that many in the Node community were using the website as an example repo for using the Hapi framework. While we’re completely flattered by this, we honestly don’t believe the codebase is currently in a state to serve that role — it’s a katamari of many practices over many years rolled into one right now!
That being said, we do care about sharing our work with the world, and we’re excited to publish many of the website’s components as open source, reusable packages.
npm/www

In place of the npm/newww repo, we’ve created npm/www! The goals of this repo are to give the community a place to:
While the source code for the website will no longer be available, the hope is that this new repo can be a more effective way to organize and respond to the needs the community has. We’re super excited to hear your thoughts, questions, and concerns — head over to npm/www now so we can start collaborating!
I don’t want to bury the lede, so here it is: npm has a new CTO, and her name is CJ Silverio. My title is changing from CTO to COO, and I will be taking over a range of new responsibilities, including modeling our business, defining and tracking our metrics, and bringing that data to bear on our daily operations, sales, and marketing. This will allow Isaac to concentrate more on defining the product and strategy of npm as we continue to grow.
CJ will be following this post with a post of her own, giving her thoughts about her new role. I could write a long post about how awesome CJ is — and she is awesome — but that wouldn’t achieve much more than make her embarrassed. Instead, I thought I’d take this chance to answer a question I get a lot, before I forget the answer: what does a CTO actually do?
The answer is that it depends on the needs of the company. npm has grown from 3 people to 25 in the past 2.5 years, and in that time my job changed radically from quarter to quarter. Every time I got the hang of the job, the needs of the company would shift and I’d find myself doing something new. So this is my list of some of the things a CTO might do. Not all of them are a good idea, as you’ll see. The chronological order is an over-simplification: I was doing a small piece of all of these tasks all the time, but each quarter definitely had a focus, so that’s how I’ve organized them.
Started this quarter: CJ, Raquel.
npm Inc had a bumpy launch: the registry was extremely unstable, because it was running on insufficient hardware and had not been architected for high uptime. Our priority was to get the registry to stay up. I was spinning up hardware by hand, without the benefit of automation. By April we had found the hot spots and mostly met the load, but CJ was the first person to stridently make the case that we had to automate our way out of this. I handed operations to her.
Started: Ben, Forrest, Maciej.
Once the fires were out, we could finally think about building products, and we had a choice: do we build a paid product on top of the current (highly technically indebted) architecture, or build a new product and architecture? We decided on a new, modular architecture that we could use to build npm Enterprise first, and then extend later to become “Registry 2.0”. Between recruitment, management, and other duties, I discovered by the end of the quarter that it was already impossible to find time to write code.
This was the quarter we dug in and built npm Enterprise. My job became primarily that of an engineering manager: keeping everybody informed about what everybody else was up to, assigning tasks, deciding priorities of new work vs. sustaining and operational work, and handling the kind of interpersonal issues which every growing company experiences. I found I was relying on CJ a lot when solving these kinds of problems.
Started: Rebecca
With npm Enterprise delivered to its first customer, we started learning how to sell it. I went to conferences, gave talks, went to meetings and sales calls, wrote documentation and blog posts, and generally tried to make noise. I was never particularly good at any of this, so I was grateful when Ben took over npm Enterprise as a product, which started around this time.
In February 2014 I had written the system that to this day serves our download counts, but we were starting a process of raising our series A, and that data wasn’t good enough. I dredged up my Hadoop knowledge from a previous job and started crunching numbers, getting new numbers we hadn’t seen before, like unique IP counts and other trends. This is one job I’m keeping as I move to COO, since measuring these metrics and optimizing them is a big part of my new role.
Started: Ernie, Ryan
We’d been hiring all the time, of course, but we closed our series A in Q1 2015, so there was a sudden burst of recruitment; most of the new hires didn’t actually start until the next quarter. By the end of this process we’d hired so many people that I never had to do recruitment again: the teams were now big enough to interview and hire their own people.
Started: Kat, Stephanie, Emily, Jeff, Chris, Jonathan, Aria, Angela
With so many new people, we had a sudden burst of momentum, and it became necessary for the first time to devote substantial effort to planning “what do we do next?” Until this point the next move had been obvious: put out the fire, all hands on deck. Now we had enough people that some of them could work on longer-term projects, which was good news, but meant we had to pull our heads up and think about the longer term. To accomplish this, I handed management of nearly all the engineering team to CJ, who became VP of engineering.
Started: Ashley, Andrea, Andrew (yes, it was confusing)
We had already launched npm Private Modules, a single-user product, but it hadn’t really taken off. We were sure we knew why: npm Organizations, a product for teams, was demanded by nearly everybody. It was a lot more complicated, and with more people there was a lot more coordination to do, so I started doing the kind of time, task, and dependency management a project manager does. I will be the first to admit that I was not particularly good at it, and nobody was upset when I mostly gave this task to Nicole the following quarter. We launched Orgs in November, and it was an instant hit, becoming half of npm’s revenue by the end of the year.
Started: Nicole, Jerry
Now with two product lines and a bunch of engineers, fully defining what the product should do (or not do), and what the next priority was, became critical. Isaac was too busy CEO’ing to do this, so he gave it to the most available person: me. This was not a huge success, partly because I was still stuck in project management mode, which is a very different thing, and partly because I’m just not as creative as Isaac when it comes to product. Everybody learned something, even if it was “Laurie isn’t very good at this”.
Started: Kiera
Isaac’s baby was born on April 1st (a fact that, combined with his not having mentioned they were even expecting a baby, led many people to assume at first that his announcement of parenthood was a joke). He went on parental leave for most of Q2, so I took over as interim CEO. CJ, already VP of eng, effectively started being CTO at this time.
When Isaac came back from parental leave, we’d learned some things: I had, of necessity, handled the administrative and executive functions of a CEO for a quarter. CJ had handled those of a CTO. We now had two people who could be CTO, and one overloaded CEO with a talent for product. The course of action was obvious: Isaac handed everything that wasn’t product over to me so he could focus on product development, while I handed over CTO duties to CJ. We needed a title for “CEO stuff that isn’t product” and picked COO, mostly because it’s a title people recognize.
You’ll notice a common thread, which is that as I moved to new tasks I was mostly handing them to CJ. Honestly, it was pretty clear to me from day 1 that CJ was just as qualified to be CTO as I was, if not more — she has an extra decade’s worth of experience on me and is a better engineer to boot. The only thing she lacked was the belief that she could do it, and over the last two and a half years it has been a pleasure watching her confidence grow as she’s mastered every new challenge I put in front of her, and more than a little funny watching her repeatedly express surprise at her ability to do all these things. It’s been like slowly persuading an amnesiac Clark Kent that he is, in fact, Superman.
I’ve often referred to CJ as npm’s secret weapon. Well, now the secret is out. npm has the best CTO I could possibly imagine, and I can’t wait to see what she does next.
Earlier today, July 6, 2016, the npm registry experienced a read outage for 0.5% of all package tarballs across all network regions. Not all packages and versions were affected, but the tarballs that were affected were completely unavailable from every region of our CDN for the duration of the outage.
The unavailable tarballs were offline for about 16 hours, from mid-afternoon PDT on July 5 to early morning July 6. All tarballs should now be available for read.
Here’s the outage timeline:
Over the next hour, 502 rates fell back to their normal level of zero.
We’re adding an alert on all 500-class status codes, not just 503s. This alert will catch the category of errors, not simply this specific problem.
We’re also revising our operational playbook to encourage more frequent examination of our CDN logs; we could have caught this problem soon after introducing it if we had verified that our guess about the source of the 502s actually made them vanish from our CDN logging. We can also build better tools for examining patterns of errors across POPs, which would have made it immediately clear that the error was not specific to the US East coast and was therefore unlikely to have been caused by an outage in our CDN.
Read on if you would like the details of the bug.
The root cause for this outage was an interesting interaction of file modification time, nginx’s method of generating etags, and cache headers.
We recently examined our CDN caching strategies and learned that we were not caching as effectively as we might, because of a property of nginx. Nginx’s etags are generated using the file modification time as well as its size, roughly as mtime + '-' + the file size in bytes. This meant that if mtimes for package tarballs varied across our nginx instances, our CDN would treat the files from each server as distinct, and cache them separately. Getting the most from our CDN’s caches and from our users’ local tarball caches is key to good performance on npm installs, so we took steps to make the etags match across all our services.
Our chosen scheme was to set each file’s modification time to the first 32 bits of its md5 hash, read as a big-endian integer. This was entirely arbitrary, but it looked sufficient after testing in our staging environment: we produced consistent etags. Unfortunately, the script that applied this change to our production environment failed to clamp the resulting integer, producing negative numbers for timestamps. Ordinarily, this would result in the infamous Dec 31, 1969 date one sees for timestamps before the Unix epoch.
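In shell, the idea looks roughly like this (a sketch of the scheme, not our production script):

# derive a deterministic mtime from the tarball's content: the first 32 bits of its md5
hash32=$(md5sum pkg.tgz | cut -c1-8)   # first 8 hex chars = first 32 bits
mtime=$((16#$hash32))                  # the value of those bits as an unsigned integer

# the production bug: the script read this value as a *signed* 32-bit integer,
# so any hash starting at 0x80000000 or above wrapped to a negative timestamp
touch -d "@$mtime" pkg.tgz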
Unfortunately, negative mtimes triggered an nginx bug. Nginx will serve the first request for a file in this state and deliver the negative etag. However, if a negative etag arrives in the if-none-match header, nginx attempts to serve a 304 but never completes the request. This resulted in the bad gateway message returned by our CDN to users attempting to fetch a tarball with a bad mtime.
You can observe this behavior yourself with nginx and curl:
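Here’s a minimal sketch, assuming GNU touch and a local nginx serving files out of /srv/files on port 8080 (the etag placeholder is whatever the first response returned):

# give a file an mtime before the Unix epoch (a negative timestamp)
echo 'hello' > /srv/files/pkg.tgz
touch -d @-1234567890 /srv/files/pkg.tgz

# the first request succeeds; note the etag derived from the negative mtime
curl -i http://localhost:8080/pkg.tgz

# replay that etag: nginx answers 304 but never completes the request
curl -i -H 'If-None-Match: "<etag-from-first-response>"' http://localhost:8080/pkg.tgz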
The final request never completes even though nginx has correctly given it a 304 status.
Because this only affected a small subset of tarballs, not including the tarball fetched by our smoketest alert, all servers remained in the pool. We have an alert on above-normal 503 error rates served by our CDN, but this error state produced 502s and was not caught.
All the tarballs that were producing a 502 bad gateway error turned out to have negative timestamps in their file mtimes. The fix was to touch them all so their times were inconsistent across our servers but valid, thus both busting our CDN’s cache and dodging the nginx behavior.
The logs from our CDN are invaluable, because they tell us what quality of service our users are truly experiencing. Sometimes everything looks green on our own monitoring, but it’s not green from our users’ perspective. The logs are how we know.

Today, npm Enterprise grows hugely more extensible and powerful with the release of npm Enterprise add-ons.
It’s now possible to integrate third-party developer tools directly into npm Enterprise. This has the power to combine what were once discrete parts of your development workflow into a single user experience, and to knock out the barriers that stand in the way of bringing open source development’s many-small-reusable-parts methodology into larger organizations.
npm Enterprise now exposes an API that allows third-party developers to build on top of our npm Enterprise product:
With this deceptively simple functionality, developers can offer a huge amount of value to enrich the process of using npm within the enterprise.
Enterprise developers already want to take advantage of the same code discovery, re-use, and collaboration enjoyed by millions of open source developers, billions of times every month. But this requires accommodating their companies’ op-sec, licensing, and code quality processes, which often predate the modern era.
For example…
In the past, it was possible to manually research the security implications of external code. But with the average npm package relying on over 100 dependencies and subdependencies, this process just doesn’t scale.
Without a way to ensure the security of each package, a company can’t take advantage of open source code.
Software that is missing a license, or that’s governed by a license unblessed by a company’s legal department, simply can’t be used at larger companies. Much like security screening, many companies have relied upon manually reviewing the license requirements of each piece of external code. And just like security research, trying to manually confirm the licensing of every dependency (and their dependencies, and their dependencies…) is impossible to scale.
Enterprise developers need a way to understand the license implications of packages they’re considering using, and companies need a way to certify that all of their projects are legally kosher.
Will bug reports be patched quickly? Is the code written well? Do packages rely on stale or abandoned dependencies? These questions demand answers before an enterprise can consider relying on open source code.
Without a way to quantitatively analyze the quality of every code package in a project, many enterprise teams simply don’t adopt open source code or workflows for mission-critical projects.
Our three launch partners, Node Security Platform, FOSSA, and bitHound, address these concerns, respectively.
You can learn about the specifics of each of them here:
By integrating them directly into the tool that enterprise developers use to browse and manage packages, we make it as easy as possible to scratch enterprise development’s specific itches. As more incredible add-ons join the platform, the barriers to open source-style development at big companies get knocked down, one by one.
The Node Security Platform, FOSSA, and bitHound add-ons are available to existing npm Enterprise customers today. Simply contact us at [email protected] to get set up.
If you’re looking to bring npm Enterprise and add-ons into your enterprise, let us show you how easy it is with a free 30-day trial.
Interested in building your own add-on? Awesome. Stay tuned: API documentation is on its way.
The movement to bring open source code, workflows, and tools into the enterprise is called InnerSource, and it’s the beginning of a revolution.
When companies develop proprietary code the same way communities build open source projects, then the open source community’s methods and tooling become the default way to build software.
Everyone stands to benefit from InnerSource because everyone stands to benefit from building software the right way: open source packages see more adoption and community participation, companies build projects faster and cheaper without re-inventing wheels, and developers are empowered to build amazing things.
Add-ons are an exciting step forward for us. We’re thrilled you’re joining us.
When using npm Enterprise, we sometimes encounter public packages in our private registry that need to fetch resources from the public internet when being installed by a client via npm install.
Unfortunately, this poses a problem for developers who work in an environment with limited or no access to the public internet.
Let’s take a look at some of the more common types of problems in this area and talk about ways we can work around them.
Note that these problems are not specific to npm Enterprise — but to using certain public packages in any limited-access environment. That being said, there are some things that npm (as an organization and software vendor) can do to better prevent or handle some of these problems. We’re still working to make these improvements.
Typically, developers will discover the problem when installing packages from their private registry. When this happens, we need to determine the type of problem it is and where in the dependency graph the problematic dependency resides.
Here are some common problem types:
Git repo dependency
This is when a package dependency is listed in a package.json file with a reference to a Git repository instead of with a semver version range. Typically these point to a particular branch or revision in a public GitHub or Bitbucket repository. They are mainly used when the package contents have not been published to the public npm registry.
When the npm client encounters these, it attempts to fetch the package from the Git repository directly, which is a problem for folks who do not have network access to the repository.
Shrinkwrapped package
This is when the internal contents of a package contain an npm-shrinkwrap.json file that lists a specific version and URL to use for each mentioned package from the dependency tree.
During a normal npm install, the npm client attempts to fetch the dependencies listed in npm-shrinkwrap.json directly from the URLs contained in the file. This poses a problem when the client installing the shrinkwrapped package does not have access to the URLs that the shrinkwrap author has access to.
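You can see which URLs a shrinkwrapped package will try to install from by grepping for its resolved entries (the output line here is illustrative):

# list the hardcoded tarball URLs a shrinkwrap will install from
grep '"resolved"' npm-shrinkwrap.json | head -5
#   "resolved": "https://registry.npmjs.org/supports-color/-/supports-color-1.2.0.tgz",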
Package with install script or node-gyp dependency
This is when a package attempts to defer some setup process until the package is installed, using a script defined in package.json, which typically involves building platform-specific binaries or Node add-ons on the client’s machine.
On a typical install, the npm client will find and run these scripts to automatically fetch and build the required resources, targeting the platform the client is running on. But when limited internet access means the necessary resources can’t be fetched, the install will fail, and the package will most likely be unusable until whatever the install script was meant to accomplish has been achieved on the client’s machine.
To determine the location of the problematic dependency, we can boil it down to two categories:
Direct dependency
A direct dependency is one that is explicitly listed in your own package.json file — a dependency that your project/package uses directly in code or in an npm run script.
Transitive dependency
A transitive dependency is one that is not explicitly listed in your own package.json file — a dependency that comes from anywhere in the tree of your direct dependencies’ dependencies.
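A quick way to tell which category you’re dealing with is npm ls, which prints every path through which a package enters your tree:

# show every dependency path that pulls in the problematic package
npm ls grunt-mocha-istanbul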
Just as publishing a package to the public registry requires access to the public internet, most of these solutions require internet access, at least on a temporary basis. Once the solution is in place, access to public resources can be restricted again.
For starters, remember that it’s generally a good idea to use the latest version of the npm client. To install or upgrade to the latest version, regardless of what version of Node you have installed, run npm i -g npm@latest (and make sure npm -v prints the version that was installed).
Let’s go over the problem types in more detail.
Unfortunately, a dependency that references a Git repository (instead of a semver range for a published package) must be replaced with a published package. To do this, you’ll first need to publish the Git repository’s contents as a package to your npm Enterprise registry, then fork the project that has the Git dependency and replace that dependency with the package you published. Finally, publish the forked project and use it as a dependency instead of the original.
It’s usually a good idea to open an issue on the project with the Git dependency, politely asking the maintainers to replace it, if possible. Generally, we discourage Git dependencies in package.json; they’re typically used only temporarily, while a maintainer waits for an upstream fix to be applied and published.
Example: let’s replace the "grunt-mocha-istanbul": "christian-bromann/grunt-mocha-istanbul" Git dependency defined in version 4.0.4 of the webdriverio package, assuming that webdriverio is a direct dependency and grunt-mocha-istanbul is a transitive dependency.
We’ll tackle this in two main steps: forking and publishing the transitive dependency, and forking and publishing the direct dependency.
Clone the project that is referenced by the Git dependency
Optionally, you can create a remote fork first (e.g., in GitHub or Bitbucket) and then clone your fork locally. Otherwise, you can just clone/download the project directly from the remote repository. It’s a good idea to use source control so you can keep a history of your changes, but you could also probably get away with downloading and extracting the project contents.
Example:
git clone https://github.com/christian-bromann/grunt-mocha-istanbul.git
Create a new branch to hold your customizations
Again, this is so you can keep a history of your changes. It’s probably a good idea to include the current version of the package in the branch name, in case you need to repeat these steps when a later version is available.
Example:
cd grunt-mocha-istanbul
git checkout -b myco-custom-3.0.1
Add your scope to the package name in package.json
In our example, change "grunt-mocha-istanbul" to "@myco/grunt-mocha-istanbul".
Commit your changes to your branch and publish the scoped package to your npm Enterprise registry
Assuming you have already configured npm to associate your scope to your private registry, publishing should be as simple as npm publish.
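If you haven’t associated the scope yet, it’s a single config setting (the registry URL here is an example):

# tell npm that @myco packages live in your npm Enterprise registry
npm config set @myco:registry https://npm-registry.myco.com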
Example:
git add package.json
git commit -m 'add @myco scope to package name'
npm publish
Clone the project’s source code locally
Either create a remote fork first (e.g., in GitHub or Bitbucket) and clone your fork locally, or just clone/download the project directly from the original remote repository. It’s a good idea to use source control so you can keep a history of your changes.
Example:
git clone https://github.com/webdriverio/webdriverio.git
Create a new branch to hold your customizations
This is so you can keep a history of your changes. It’s probably a good idea to include the current version of the package in the branch name, in case you need to repeat these steps when a later version is available.
Example:
cd webdriverio
git checkout -b myco-custom-4.0.4
Add your scope to the package name in package.json
In our example, change "webdriverio" to "@myco/webdriverio".
Replace the Git dependency with the scoped package
This means updating the reference in package.json, and it may mean updating require() or import statements too. You should basically do a find-and-replace, finding the unscoped package name and judiciously replacing it with the scoped package name.
In our example, we only need to update the reference in package.json from "grunt-mocha-istanbul": "christian-bromann/grunt-mocha-istanbul" to "@myco/grunt-mocha-istanbul": "^3.0.1".
Commit your changes to your branch and publish the scoped package to your npm Enterprise registry
Assuming you’ve already configured npm to associate your scope to your private registry, publishing should be as simple as npm publish.
In our example of webdriverio, we next need to deal with the shrinkwrap URLs before we can publish (handled below). In other scenarios, it may be possible to publish now.
Example:
git add .
git commit -m 'replace git dep with scoped fork'
npm publish
Update your downstream project(s) to use the scoped package as a direct dependency (in package.json and in any require() statements)
In our example, this basically means doing a find-and-replace to find references to webdriverio and judiciously replace them with @myco/webdriverio. However, webdriverio also contains an npm-shrinkwrap.json file. We’ll cover that in the next section.
It just so happens that our sample direct dependency above (webdriverio) also uses an npm-shrinkwrap.json file to pin certain dependencies to specific versions. Unfortunately the shrinkwrap file contains hardcoded URLs to the public registry. We need a way to either ignore or fix the URLs.
A quick workaround is to install packages using the --no-shrinkwrap flag. This will tell the npm client to ignore any shrinkwrap files it finds in the package dependency tree and, instead, install the dependencies from package.json in the normal fashion.
This is considered a workaround rather than a long-term solution: it’s possible that installing from package.json will install versions of dependencies that don’t exactly match the ones listed in npm-shrinkwrap.json, even though the versions of the package’s direct dependencies are guaranteed to be within the declared semver range.
Example:
npm install webdriverio --no-shrinkwrap
(As noted above, webdriverio@4.0.4 also has a Git dependency, so just ignoring the shrinkwrap isn’t quite enough for this package.)
If you want to use the exact versions from the shrinkwrap file without using the URLs in it, you’ll have to use your own custom fork of the project that contains a modified shrinkwrap file.
Here’s the general idea:
(Note that steps 1-3 are identical to the fork-publish instructions for a direct dependency above. If you’ve already completed them, skip to step 4.)
Clone the project’s source code locally
Either create a remote fork first (e.g., in GitHub or Bitbucket) and clone your fork locally, or just clone/download the project directly from the original remote repository. It’s a good idea to use source control so you can keep a history of your changes.
Example:
git clone https://github.com/webdriverio/webdriverio.git
Create a new branch to hold your customizations
This is so you can keep a history of your changes. It’s probably a good idea to include the current version of the package in the branch name, in case you need to repeat these steps when a later version is available.
Example:
cd webdriverio
git checkout -b myco-custom-4.0.4
Add your scope to the package name in package.json
In our example, change "webdriverio" to "@myco/webdriverio".
Use rewrite-shrinkwrap-urls to modify npm-shrinkwrap.json, pointing the URLs to your npm Enterprise registry
Unfortunately this is slightly more complicated than a find-and-replace, since the tarball URL structure of the public registry is different than the one used for an npm Enterprise private registry.
In the example below, replace {your-registry} with the base URL of your private registry, e.g., https://npm-registry.myco.com or http://localhost:8080. The value you use should come from the Full URL of npm Enterprise registry setting in your Enterprise admin UI Settings page.
Example:
npm install -g rewrite-shrinkwrap-urls
rewrite-shrinkwrap-urls -r {your-registry}
git diff npm-shrinkwrap.json
Commit your changes to your branch and publish the scoped package to your npm Enterprise registry
Assuming you’ve already configured npm to associate your scope to your private registry, publishing should be as simple as npm publish.
Be mindful of any prepublish or publish scripts that may be defined in package.json. You can try skipping those scripts when publishing via npm publish --ignore-scripts, but running the scripts may be necessary to put the package into a usable state, e.g., if source transpilation is required.
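A quick way to check what you’re dealing with (a sketch):

# see which lifecycle scripts the package defines
grep -A5 '"scripts"' package.json
# publish without running them, if you've confirmed that's safe
npm publish --ignore-scripts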
Example:
git add npm-shrinkwrap.json package.json
git commit -m 'add @myco scope to package name' package.json
git commit -m 'rewrite shrinkwrap urls' npm-shrinkwrap.json
npm publish
Note that a prepublish script will probably need to install the package’s dependencies in order to run. In this case, npm install will be executed first. If this happens, it should pull all dependencies in the shrinkwrap file from your registry. If any of those packages don’t yet exist in your registry, you’ll need either to enable the Read Through Cache setting in your Enterprise instance or to manually add the packages to the white-list by running npme add-package webdriverio from your server’s shell and answering Y at the prompt to add dependencies.
Update your downstream project(s) to use the scoped package as a direct dependency (in package.json and in any require() statements)
In our example, this basically means doing a find-and-replace to find references to webdriverio and judiciously replace them with @myco/webdriverio.
This is less than ideal, obviously. We’re currently considering ways to improve handling of shrinkwrapped packages on the server side, but a better solution is not yet available.
Some packages want or need to run some script(s) on installation in order to build platform-specific dependencies or otherwise put the package into a usable state. This approach means that a package can be distributed as platform-independent source without having to prebundle binaries or provide multiple installation options.
Unfortunately, this also means that these packages typically need access to the public internet to fetch required resources. In these cases, we can’t really work around the approach itself; the best we can do is separate the step of fetching the package from the registry from the step of setting up the platform-specific resources it needs.
As a quick first attempt, you can ignore lifecycle scripts when installing packages via npm install {pkg-name} --ignore-scripts.
Unfortunately, install scripts typically do some sort of platform-specific setup to make the package usable. Thus, you should review the install or postinstall scripts from the package’s package.json file and determine if you need to attempt to run them separately or somehow achieve the same result manually.
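For example, you can list a package’s lifecycle scripts straight from the registry (node-sass shown as an illustration):

npm view node-sass scripts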
When node-gyp is involved in the setup process, the package requires platform-specific binaries to be built and plugged into the Node runtime on the client’s system. In order to build the binaries, the package will typically need to fetch source header files for the Node API.
The best we can do is attempt to set up the node-gyp build toolchain manually. This requires Python and a C/C++ compiler. You can read more about this at the following locations:
General installation: https://github.com/nodejs/node-gyp#installation
Windows issues: https://github.com/nodejs/node-gyp/issues/629
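As a rough sketch for a Debian/Ubuntu build host (package names vary by platform):

# install the toolchain node-gyp needs: python and a C/C++ compiler
sudo apt-get install -y python build-essential

# install node-gyp itself, then pre-fetch the Node API headers
# while the machine still has internet access
npm install -g node-gyp
node-gyp install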
A good example of a package with a node-gyp dependency is node-sass.
Once the build toolchain is in place, the package’s install script may not need to fetch any external resources.
If you’ve made it all the way to the end, surely you’ll agree that npm could be handling things better to minimize challenges faced by folks with restricted internet access. We feel it’s in the community’s best interest to at least raise awareness of these problems and their potential workarounds until we can get a more robust solution in place.
If you have feedback or questions, as always, please don’t hesitate to let us know.
Today, we’re excited to announce a simple, powerful new way to track changes to the npm registry — and build your own amazing new developer tools: hooks.
Hooks are notifications of npm registry events that you’ve subscribed to. Using hooks, you can build integrations that do something useful (or silly) in response to package changes on the registry.
Each time a package is changed, we’ll send an HTTP POST payload to the URI you’ve configured for your hook. You can add hooks to follow specific packages, to follow all the activity of given npm users, or to follow all the packages in an organization or user scope.
For example, you could watch all packages published in the @npm scope by setting up a hook for @npm. If you wanted to watch just lodash, you could set up a hook for lodash.
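While you’re developing a receiver, you can simulate a delivery with curl; the payload fields below are illustrative, not the full schema:

# pretend to be the registry: POST a JSON event to your configured URI
curl -X POST https://example.com/webhooks \
  -H 'content-type: application/json' \
  -d '{"event":"package:publish","name":"lodash"}'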
If you have a paid individual or organizational npm account, you can start using hooks right now.
Each user may configure a total of 100 hooks, and how you use them is up to you: you can put all 100 on a single package, or scatter them across 100 different packages. If you use a hook to watch a scope, this counts as a single hook, regardless of how many packages are in the scope. You can watch any open source package on the npm registry, and any private package that you control (you’ll only receive hooks for packages you have permission to see).

Create your first hook right now using the wombat CLI tool.
First, install wombat the usual way: npm install -g wombat. Then, set up some hooks:
Watch the npm package:
wombat hook add npm https://example.com/webhooks shared-secret-text
Watch the @slack organization for updates to their API clients:
wombat hook add @slack https://example.com/webhooks but-sadly-not-very-secret
Watch the ever-prolific substack:
wombat hook add --type=owner substack https://example.com/webhooks this-secret-is-very-shared
Look at all your hooks and when they were last triggered:
wombat hook ls
Protip: Wombat has several other interesting commands. wombat --help will tell you all about them.
We’re also making public an API for working with hooks. Read the docs for details on how you can use the API to manage your hooks without using wombat.

You can use hooks to trigger integration testing, trigger a deploy, make an announcement in a chat channel, or trigger an update of your own packages.
To get you started, here are some of the things we’ve built while developing hooks for you:
npm-hook-receiver: an example receiver that creates a restify server to listen for hook HTTP posts. Source code.
npm-hook-slack: the world’s simplest Slackbot for reporting package events to Slack; built on npm-hook-receiver.
captain-hook: a much more interesting Slackbot that lets you manage your webhooks as well as receive the posts.
wombat: a CLI tool for inspecting and editing your hooks. This client exercises the full hooks API. Source code.
ifttt-hook-translator: Code to receive a webhook and translate it to an IFTTT event, which you can then use to trigger anything else you can do on IFTTT.
citgm-harness: This is a proof-of-concept of how node.js’s Canary in the Gold Mine suite might use hooks to drive its package tests. Specific package publications trigger continuous integration testing of a different project, which is one way to test that you haven’t broken your downstream dependents.

We’re releasing hooks as a beta, and where we take it from here is up to you. What do you think about it? Are there other events you’d like to watch? Is 100 hooks just right, too many, or not enough?
We’re really (really) (really) interested to see what you come up with. If you build something useful (or silly) using hooks, don’t be shy to drop us a line or poke us on Twitter.
This is the tip of an excitement iceberg — exciteberg, if you will — of cool new ways to use npm. Watch this space!
npm ♥ you!

The 283,000 (!) packages in the npm Registry are only useful if it’s easy for developers to integrate them into their projects and deploy their code, so we’re excited by any chance to streamline your workflow.
If you work with Bitbucket, starting today, it’s easier than ever to install and publish npm private packages — to either your npm account or your self-hosted npm Enterprise installation.
Bitbucket Pipelines is a new continuous integration service built into Bitbucket Cloud for end-to-end visibility, from coding to deployment. We’re excited that they’re launching with npm support.
Why?
How?
To publish with your npm account:

1. Add the bitbucket-pipelines.yml supplied in this repository.
2. Set the NPM_TOKEN environment variable. You can find the token in ~/.npmrc after you log in to the registry.

To publish to your self-hosted npm Enterprise registry:

1. Add the bitbucket-pipelines.yml supplied in this repository.
2. Set the NPM_TOKEN environment variable: you can find the token in ~/.npmrc after you log in to the registry.
3. Set NPM_REGISTRY_URL to the full URL of your private registry (with scheme).
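Not sure where that token lives? Log in locally and copy it out of ~/.npmrc (the token shown is a dummy, and the registry line will differ for npm Enterprise):

npm login
grep authToken ~/.npmrc
# //registry.npmjs.org/:_authToken=00000000-0000-0000-0000-000000000000

Alongside the new pipelines integration, the npm for Bitbucket add-on has been updated to support private modules.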
This helps complete an elegant CI/CD workflow:
Get started by installing the add-on now.
We have more exciting integrations and improvements in the … pipeline (sorry), but it helps to know what matters to you. Don’t be shy to share feedback in the comments or hit us up on Twitter.
Last month, we released a “one-click” installer for npm Enterprise on AWS. Fresh on its heels, we’re excited to announce support for Google Compute Engine.
Getting npm Enterprise up and running on GCE is easy:
1. Authenticate: gcloud auth login
2. Set your project: gcloud config set project my-project
3. Set your compute zone: gcloud config set compute/zone us-east1-d
4. Fetch npm’s configuration template:
curl -XGET https://raw.githubusercontent.com/npm/npme-installer/master/npme-gce.jinja > /tmp/npme-gce.jinja
Run the npm Enterprise deploy template:
gcloud deployment-manager deployments create npme-deployment --config /tmp/npme-gce.jinja --properties="zone=us-east1-d"
Note: You can replace us-east1-d with whatever zone you’d like to deploy to.
Once the deployment completes, visit your cloud console and you’ll see a running server called npm-enterprise. That’s all there is to it!
You can configure your instance by visiting your server’s address on port 8800. The npm Enterprise admin UI will point you in the right direction with information on the various settings that you can configure.
It’s our continued goal to make npm Enterprise painless to run, regardless of your infrastructure. Are we missing a platform that you’d love for us to support? Let us know in the comments.
How many npm users are there? It’s a surprisingly tricky question.
There are a little over 211,000 registered npm users, of whom about 73,000 have published packages. But far more people than that use npm: most things npm does don’t require you to log in or register. So how many users are there, and what are they up to? Our first and most obvious clue is the number of packages they’re downloading:

That’s over a billion downloads per week, and that only counts package installs that weren’t already in cache – about 66% of npm package installs don’t require downloading any packages, because they fetch from the cache. Still, the growth is truly, without exaggeration, exponential.
So what’s driving all the downloads? Are the same people building ever-more-complicated applications, downloading more packages to do it? Are the same people running build servers over and over? Or are there actually more people? Our next clue comes from the number of unique IPs hitting the registry:

Here’s a ton of growth again, close to 100% year-on-year, but much more linear than the downloads: 3.1 million IPs hit the registry in March. Of course, IP addresses are not people. Some of these IPs are build servers and other robots. Other IP addresses are companies or educational institutions that serve thousands or even tens of thousands of people. So while it doesn’t correlate perfectly, generally speaking, more IPs means more people are using npm.

Every time npm runs it generates a unique ID that it sends as a header for every request it makes during that run. This ID is random and not tied to you in any way, and once npm finishes running it is deleted. We use this ID for debugging the registry: it lets us see that these 5, 10 or 50 requests were all part of the same operation, which makes it easier to see what’s going on when somebody has a problem. It also makes it possible to say roughly how many times npm is run – or at least, run in a way that makes a request to the registry. There were 84 million npm sessions in March: this number is growing faster than IPs, but less quickly than downloads.
We can take these last two and combine them:

This number is interesting because it’s not going anywhere. The ratio of packages downloaded to npm sessions is essentially constant (this is not the same as the number of packages downloaded per install, because many npm sessions don’t result in downloads): from the numbers above, over four billion downloads a month across 84 million sessions works out to roughly 50 package downloads per session. But this is a clear signal: the number of packages per install isn’t rising. Applications aren’t getting notably more complicated; people are installing packages more often because they are writing more applications.
Here’s a final clue:

The number of packages downloaded by an IP is also rising linearly. So, not only are more people using npm, but the people who are already using npm are using it more and more frequently. And then of course there’s this number:

Another way of counting npm users is counting people who visit npm’s website. This has also grown enormously: 400% since we started the company. In the last 90 days, just over 4 million unique users visited our site. Ordinarily, you take web user numbers with a grain of salt, since there are lots of ways they can be wrong. But combined with the IPs, the sessions, and the download numbers, we think that number is probably accurate, maybe even a little conservative.
There are so many sources of error! There are robots that crawl the registry. Lots of companies host their own internal registry caches, or run npm Enterprise, and so have their own npm website and registry and never hit ours. There’s the entire nation of China, which finds it difficult to access npm through the Great Firewall and is served by our hard-working friends at cnpmjs. There are errors that inflate our numbers and errors that deflate them. If you think we’re way off, let us know. But we think we have enough for a good guess.
We think there are four million npm users, and we think that number is doubling every year. Over at the node.js foundation, they see similar growth numbers. Not only are there more of them, but they’re more engaged than ever before. This is awesome! The 25 very hard working people of npm Inc. thank you for your participation and your contributions to the community, and we hope to see even more of you.
In a previous blog post we showed you how easy it is to run npm Enterprise on Amazon Web Services. Today, we’re happy to announce the public availability of the npm Enterprise Amazon Machine Image (AMI). Now, it’s even easier to run your own private npm registry and website on AWS!
Using our AMI, there is nothing to install. Just launch an instance, configure it using the npm Enterprise admin web UI, and you’re done: it’s a true point-and-click solution for sharing and managing private JavaScript packages within your company.
Let’s take a quick look at the details.
We have AMIs for several AWS regions. When you launch a new instance in the AWS EC2 Console, find the right one by searching for the relevant AMI ID under the Community AMIs tab. Note that new AMI versions are published about every month and include the date of publication in the AMI name.
Here’s a list of the AMI IDs by region:
- us-east-1 (N. Virginia): ami-edd65bfa
- us-west-1 (N. California): ami-61db9d01
- us-west-2 (Oregon): ami-dc34f6bc
- eu-central-1 (Frankfurt): ami-5ec13431
- ap-southeast-2 (Sydney): ami-7d32181e

Ensure the AMI comes from owner 666882590071.
If you don’t see your preferred region in the list above, contact our support team, and we’ll get one created for you!

When you launch an instance of the AMI, you’ll need to:
- Choose an instance type of m3.large or better
- Open ports 22 (ssh), 8080 (registry), 8081 (website), and 8800 (npm Enterprise admin UI)
- Choose a .pem key pair: this allows you to ssh into your server instance

It’s not necessary, but if you’d prefer to attach an EBS volume for registry data that is separate from the root volume, you can. However, the root EBS volume cannot be smaller than 16 GB.
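If you prefer the command line, an equivalent launch looks roughly like this (the AMI ID is the us-east-1 image above; the key name and security group are examples, and the security group must open the ports listed):

aws ec2 run-instances \
  --region us-east-1 \
  --image-id ami-edd65bfa \
  --instance-type m3.large \
  --key-name my-npme-key \
  --security-group-ids sg-12345678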
For more information (or screenshots) on any of the above, see our docs for Running npm Enterprise in AWS.
You don’t have to, but you can ssh into your EC2 instance to make sure it’s up and running; if you do, you’ll be greeted with a welcome message.
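The connection itself is standard ssh with the .pem key pair you chose at launch (the key path, login user, and hostname below are examples and may vary with the AMI’s base image):

ssh -i ~/.ssh/my-npme-key.pem ubuntu@ec2-54-0-0-1.compute-1.amazonaws.com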

Open your favorite web browser, access your server on port 8800, and follow the prompts to configure and start your appliance.
You’ll need a license key. If you haven’t already purchased one, you can get a free trial key here.
For more information on configuring npm Enterprise, visit our docs.
That’s it! Once you’ve configured and started the appliance, your private npm registry and website are ready for use. See this document for configuring your npm CLI to use your new private registry.
We’re continually striving to provide you the best solutions for distributing, discovering, and reusing your JavaScript code and packages. We hope this AMI makes it just that much easier to leverage the same tools within your organization that work so well in open source communities around the world - a concept we refer to as InnerSource.
As always, if you have questions or feedback, please reach out.