Recently a serious vulnerability in Apache CouchDB was discovered, patched, and disclosed. In this post we discuss its impact on the npm registry and correct some speculation about that impact.
The npm package registry is a well-known deployment of Apache CouchDB and we occasionally sing its praises in public. (To recap: it’s wonderfully easy operationally, and its replication features are its superpower.) Recently, two security vulnerabilities were found and patched in Couch. You can read the details in their blog post: Apache CouchDB CVE-2017-12635 and CVE-2017-12636. The CouchDB project very kindly let us know that we would want to patch our public-facing services on their next software release. They did not disclose to us the details of the vulnerability, but advised us it was serious. We followed our usual operational procedures and updated.
Max Justicz, who discovered one of the patched vulnerabilities, has written their own blog post on the topic: Remote Code Execution in CouchDB (and Privilege Escalation in the npm Registry). They raise some questions about how npm might have been affected by this vulnerability. In particular I’d like to address this speculation:
I am almost certain that registry.npmjs.org was vulnerable to the privilege escalation/admin account creation part of this attack, which would have allowed an attacker to modify packages. This is because user creation on npm is more or less identical to the vanilla CouchDB user creation flow.
This speculation, while reasonable, is incorrect. Neither the public npm registry nor npm Enterprise installations are vulnerable to this bug, because they do not use the CouchDB user system. The API presented is identical to CouchDB’s for compatibility with older versions of the npm client, but it is implemented via a proxy service that intercepts requests, rewrites them, and forwards them to another microservice. npm’s user and access-control data is stored in PostgreSQL. This has been true since early 2015.
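To illustrate the idea (this is a sketch only — npm has not published its internal proxy code, and every name below is hypothetical), a registry-compatible proxy of this kind might route legacy CouchDB-style user requests away from CouchDB entirely:

```javascript
// Illustrative sketch of a proxy that presents CouchDB's user API for
// compatibility, but routes account operations to a separate microservice.
// All names ('user-service', 'registry', routeRequest) are hypothetical.
function routeRequest(method, path) {
  // Legacy npm clients create accounts with PUT /-/user/org.couchdb.user:<name>.
  // Intercept those, rewrite them, and forward to the user microservice,
  // which stores accounts in PostgreSQL rather than CouchDB's _users database.
  if (method === 'PUT' && path.startsWith('/-/user/org.couchdb.user:')) {
    return { backend: 'user-service', rewrittenPath: '/users/' + path.split(':')[1] };
  }
  // Everything else (package reads, etc.) can pass through unchanged.
  return { backend: 'registry', rewrittenPath: path };
}

console.log(routeRequest('PUT', '/-/user/org.couchdb.user:alice'));
// { backend: 'user-service', rewrittenPath: '/users/alice' }
```

Because account creation never reaches a CouchDB `_users` database in such a design, a CouchDB user-validation bug has nothing to exploit.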
There were two services that were vulnerable to the bug until patched, however, as speculated in the blog post: our two public replication services. skimdb.npmjs.com is our legacy replication service, which provides a CouchDB changes feed for all unscoped packages in the registry. replicate.npmjs.com is a replication service for all public packages in the registry, including scoped packages. (These are packages with names like @slack/client or @std/esm.) We patched those CouchDB instances last week when the fix for this vulnerability landed.
We do not allow publication to these two CouchDB instances, so a breach would not have affected data in the npm registry. (Package data is downstreamed into these services through means decoupled from main registry operations.)
We deeply appreciate the responsible disclosure and cooperative spirit displayed by Max Justicz, who also made sure we were patched in advance of publishing his blog post. If you are aware of a package vulnerability or a vulnerability in any of npm’s systems, please do contact us via email: [email protected].

This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q: Hi! Can you state your name and what you do, and what your company does?
Luke Sneeringer, SWE: Our company is Google.
How about this: what specifically are you doing? What does your team do?
LS: I am responsible for the authorship and maintenance of cloud client libraries in Python and Node.js.
Justin Beckwith, Product Manager: Essentially, Google has over 100 APIs and services that we provide to developers, and for each of those APIs and services we have a set of libraries we use to access them. The folks on this team help build the client libraries. Some libraries are automatically generated while others are hand-crafted, but for each API service that Google has, we want to have a corresponding npm module that makes it easy and delightful for Node.js users to use.
How’s your day going?
JB: My day’s going awesome! We’re at a Node conference, the best time of the year. You get to see all your friends and hang out with people that you only get to see here at Node.js Interactive and at Node Summit.
Today, we announced the public beta of Firestore, and of course we published an npm package. Cloud Firestore is a fully-managed NoSQL database, designed to easily store and sync app data at global scale.
Tell me the story of npm at your company. What specific problem did you have that private packages and Orgs solved?
JB: Google is a large company with a lot of products that span a lot of different spaces, but we want to have a single, consistent way for all of our developers to be able to publish their packages. More importantly, we need to have some sort of organizational mesh for the maintenance of those packages. For instance, if Ali publishes a package one day, and then tomorrow he leaves Google, we need to make sure we have the appropriate organization in place so that multiple people have the right access.
Ali Shiekh, SWE: We use npm Organization features to manage our modules and have teams set up to manage each of the distinct libraries that we have.
JB: We’re also users of some of the metrics that y’all produce. We use the number of daily installs for each module to measure adoption of our libraries and to figure out how they’re performing, not only against other npm modules but also other languages we support on the platform.
How do you consume that? Just logging into the website?
JB: No, we do a HTTP call, grab the response, and put it into BigQuery. Then we do analytics over that data in BigQuery and have it visualized on our dashboards.
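That workflow can be sketched against npm’s public downloads endpoint. (The URL shape below matches the api.npmjs.org downloads API; the BigQuery loading step from the interview is out of scope here.)

```javascript
// Build a request URL for npm's public per-package download counts.
// Endpoint shape: https://api.npmjs.org/downloads/point/<period>/<package>
function downloadsUrl(period, pkg) {
  return 'https://api.npmjs.org/downloads/point/' + period + '/' + encodeURIComponent(pkg);
}

// The API responds with JSON along the lines of:
//   { "downloads": 12345, "start": "...", "end": "...", "package": "express" }
function extractCount(responseBody) {
  return JSON.parse(responseBody).downloads;
}

console.log(downloadsUrl('last-day', 'express'));
// https://api.npmjs.org/downloads/point/last-day/express
```

From there, the extracted counts can be loaded into whatever analytics store you use, as the Google team does with BigQuery.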
How can private packages and orgs help you out?
JB: At Google, any time we release a product, there are four release phases that we go through, and the first is what we call an EAP, which is an “Early Access Preview” for a few hundred customers. When we’re distributing those EAPs, it can be difficult to get packages in the hands of customers. We don’t want to disclose a product that’s coming out, because we haven’t announced it yet, but we still need validation and feedback from people that we’ve built the right API and we have the right thing. Moving forward, that’s the way we’d like to use private packages.
What’s an improvement on our side that you would like to see? Or something you would like to see in the future?
LS: Something that we would be interested in seeing npm develop is the ability to have certain versions of a public package be private. Let’s say that we have 1.0, and there’s a 2.0 being developed in private that’s being EAP’ed.… I don’t think you have the concept yet of a private version of a public package.
Along with that, better management of package privacy. Managing an EAP for five people is very different than managing an EAP for 300 people. Another thing that would be nice would be the ability to give another npm Org access to a module at once.
AS: The user interface for Org controls and access management of teams within Orgs seems not fully fleshed out at this point. Getting more of that Org management into the UI would be a lot nicer. Before Orgs existed, we had a lot of modules that were published without Orgs, and getting them added to the Org is fairly complicated because you have to go through a support ticket to get that done.
JB: Until this morning, you want to know what my first answer would have been?
2FA?
JB: 2FA! Yes.
LS: Also, Fido U2F, please. That is the standard behind security keys.
Would you recommend that another organization use npm Orgs?
JB: Well, yes, and also, it’s a provocative question… is there an alternative?
LS: I was going to say; you’re the only game in town.
Any other cool stuff you’d like to promote?
JB: The Firestore launch, of course, but another thing is the Google Stackdriver Debugger agent. We released this service called Stackdriver Debugger that lets you do passive debugging in production. You push your code to App Engine, or Kubernetes, or Compute Engine. While it’s running, you can set breakpoints, and when that code gets hit, it will take a snapshot and visualize that without blocking the request. It’s a passive production debugger.
Did you just ‘one more thing’ us there? ‘Oh also, check out this amazing thing’!
JB: It’s kind of dark magic, actually. It’s a little ridiculous.

This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q: Hi! Can you state your name, what you do, and what your company does?
A: Hi there! I’m Marcus from Elsevier. I work in a team building their global ecommerce platform.
How’s your day going?
Not too bad. Had a kickoff meeting for an exciting upcoming project.
Tell me the story of npm at your company. What specific problem did you have that private packages and orgs solved?
Our backend is made up mostly of microservices. Various APIs that handle things like orders and customers have a corresponding custom client as an npm module that we publish to a private npm Org. This helps us handle API changes gracefully through versioning of the endpoints and clients, as well as distributing TypeScript type definitions for the requests and responses of all our APIs.
Can you tell us a story about a specific package you wanted to make that private packages really enabled you to do?
As a platform, we handle multiple online shops within the company. Some are built by us, others by other teams. We have a single API that takes in all orders, stores them in a database, and passes them off to various fulfillment systems. We created an npm module to expose the types for a submitted order along with a generated JSON schema that our API uses to validate orders sent to it. This module gets an upgrade as our endpoint is versioned to handle new data. These types are currently only available as TypeScript definitions, but we are working on generating types for PHP as well.
Does your company do open source? How do you negotiate what you keep private and public (feel free to be as vague as you need to be)?
Not as much as we’d like. A lot of what we work on is specific to our organisation. Anything that’s generic we try to open source, like the above-mentioned json-schema-php-generator.
To people who are unsure what they could use private packages for - how would you explain the use case?
I think anything that is shared across multiple applications (be that logic, type definitions or something else) is a great candidate for private packages. As I said above, we use them heavily for sharing type definitions, but we also have a few modules that simply contain organisation-wide logic (such as country/region data) as a JSON file with some helper methods.
How’s it going? How’s the day to day experience of using private packages/orgs?
Mostly it’s working well. We use npm extensively already so incorporating our own modules has seen very little friction. With tools like np we can make sure our packages are published in a consistent way with passing tests.
How would you see the product improved or expanded in the future?
We have run into one issue. With our versioned endpoint we still need to support older versions until we can ensure all clients have moved to the latest. We have an npm module containing our JSON schema to validate against, but we were unable to import multiple versions of a single module. Ideally, we’d like to be able to have version 1, 2, and 3 as dependencies and import them separately into our validator. We managed to work around this by moving the old version to a sub-directory and publishing that as its own package with the version appended to the end of the name.
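The workaround they describe might end up looking something like this in the validator’s package.json — the older schemas are republished under version-suffixed names so all three can be depended on side by side (package names here are hypothetical):

```json
{
  "dependencies": {
    "order-schema": "^3.0.0",
    "order-schema-v1": "^1.0.0",
    "order-schema-v2": "^1.0.0"
  }
}
```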
Would you recommend that another org or company use private packages or orgs and why?
If you have a use case that requires some form of shared code or assets between applications it’s definitely worth giving private npm modules a try. They fit nicely into existing workflows and are easy enough to remove or replace if they don’t fit.

This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q: Hi! Can you state your name and what you do, and what your company does?
A: LogRocket helps product teams build better experiences for their users. By recording videos of user sessions along with logs and network data, LogRocket surfaces UX problems and reveals the root cause of every bug.
How’s your day going?
Splendidly.
Tell me the story of npm at your company. What specific problem did you have that private packages and orgs solved?
We run a monorepo. Our server code, our frontend code, and our publicly published SDK all coexist and share packages in this repository. This setup is great, but we knew it could easily become monolithic. Having everything split into independent packages helps enforce clean separation. We use lerna to minimize duplicates and keep versions consistent throughout the app.
Can you tell us a story about a specific package you wanted to make that private packages really enabled you to do?
We use a lot of workers to offload data processing in our application. Packages such as promise-worker are lifesavers when dealing with workers in a complex application.
Does your company do open source? How do you negotiate what you keep private and public (feel free to be as vague as you need to be)?
We do! As a rule of thumb, if it’s only used by the SDK then we don’t mind making it public. Anything used by our application code is private.
To people who are unsure what they could use private packages for - how would you explain the use case?
For us, it works for anything that is used by more than one part of the application. Using git subrepos is a common solution to this, but we’ve found those to be quite messy in practice. Configuration, error reporting, and logging all come to mind. We also use packages to share API definitions between services so they’re never out of sync.
How’s it going? How’s the day to day experience of using private packages/orgs?
It’s great.
How would you see the product improved or expanded in the future?
We would love to see more tooling around lock-step publishing, with the end goal of replacing lerna with something native.
Would you recommend that another org or company use private packages or orgs and why?
Absolutely! Especially as the number of packages and contributors increases, having a single approach to package management is priceless. We have a few packages that are used across all different parts of the application. We could not build or distribute these as easily if they were not in a single place.
Any cool npm stuff your company has done publicly that you’d like to promote?
Aside from the main LogRocket SDK, we also use npm for distributing our plugins for React, Redux, GraphQL, VueJS, Mobx, rxjs, @ngrx/store, and more. Check out our plugin docs for more details.
Hey y'all, this is a big new feature release! We’ve got some security-related goodies plus some quality-of-life improvements for anyone who uses the public registry (so, virtually everyone).
To get this version, run: npm install "npm@^5.5.0" -g
Barring any major bugs, it will be the default npm version on 2017-10-11, which you can install by running: npm install -g npm@latest.
The changes largely came together in one piece, so I’m just gonna leave the commit line here:
f6ebf5e8b f97ad6a38 f644018e6 8af91528c 346a34260 Two-factor authentication, profile editing, and token management. (@iarna)

You can now enable two-factor authentication for your npm account. You can even do it from the CLI. In fact, you have to, for the time being:
npm profile enable-tfa
With the default two-factor authentication mode you’ll be prompted to enter a one-time password when logging in, when publishing and when modifying access rights to your modules.
You can now create, list, and delete authentication tokens from the comfort of the command line. Authentication tokens created this way can have new restrictions placed on them. For instance, you can create a read-only token to give to your CI: it will be able to download your private modules but it won’t be able to publish or modify modules. You can also create tokens that can only be used from certain network addresses. This way you can lock down access to your corporate VPN or other trusted machines.
Deleting tokens isn’t new, you could do it via the website but now you can do it via the CLI as well.
You can finally change your password from the CLI with npm profile set password! You can also update your email address with npm profile set email <address>. If you change your email address we’ll send you a new verification email so you can verify that it’s yours.
You can also update all of the other attributes of your profile that previously you could only update via the website: fullname, homepage, freenode, twitter and github.
All of these features were implemented in a standalone library, so if you have use for them in your own project you can find them in npm-profile on the registry. There’s also a little mini-CLI written just for it at npm-profile-cli. You might also be interested in the API documentation for these new features: user profile editing and authentication.
- 5ee55dc71 install.sh: Drop support for upgrading from npm@1, as npm@5 can’t run on any Node.js version that ships npm@1. This fixes an issue some folks were seeing when trying to upgrade using the install script at http://npmjs.com/install.sh. (@iarna)
- 5cad1699a [email protected]: Fix a bug where npm would crash when more than one lifecycle script got queued to run. (@zkat)
- cd256cbb2 [email protected]: Fix a bug where test directories would always be excluded from published modules. (@isaacs)
- 2a11f0215 Fix formatting of the unsupported-version warning. (@iarna)
- 6d2a285a5 [email protected]
- 69e64e27b [email protected]
- 34e0f4209 [email protected]
- 10d31739d [email protected]
- 2b02e86c0 [email protected]
- b81fff808 [email protected]: Fixes a long-standing bug in rimraf’s attempts to work around Windows limitations, where it owns a file and can change its perms but can’t remove it without first changing its perms. This may be an improvement for Windows users of npm under some circumstances. (@isaacs)

Today, we are announcing two new ways to protect your npm account. Please read on to learn how you can use these security features to keep your code safe and increase everyone’s trust in the more than 550,000 packages of code in the npm Registry.
Two-factor authentication (2FA)
Now, you can sync your npm account with an authentication application like Google Authenticator or Authy. When you log in, you’ll be prompted for a single-use numeric code generated by the app.
2FA is another layer of defense for your account, preventing third parties from altering your code even if they steal or guess your credentials. This is one of the easiest and most important ways to ensure that only you can access your npm account.
Read-only tokens
If your continuous integration / continuous deployment (CI/CD) workflow includes linking your npm account to tools like Travis CI with authentication tokens, you can now create read-only tokens for tools that don’t need to publish. You can also restrict tokens to work from only specified ranges of IP addresses.
Even if your token is compromised — for example, if you accidentally commit it to GitHub — no one else can alter your code, and only authorized CI servers will be able to download your code.
Learn more about tokens in this doc:
Set these up now (please)
The npm community is now larger than the population of New York City, so it’s never been more important to reinforce trust and encourage collaboration. Every developer who secures their npm account with these new methods helps ensure the safety and integrity of the code we all discover, share, and reuse to build amazing things.
Learn how to activate 2FA in this doc:
Using Two-Factor Authentication
Watch this space
There has never been a major security incident caused by leaked npm credentials, but our security work is never finished. We work continuously to protect the npm Registry and detect and remove malicious code, and we try to keep you informed of our efforts.
If you ever believe you’ve encountered malicious code on the Registry or in npm itself, contact us right away via the npm website or by emailing [email protected]. If you have any feedback or questions about what we’ve rolled out today, just contact [email protected].
Thanks for helping keep the npm community safe.
UPDATE: To try out TFA, you’ll need version [email protected] or newer of the npm client. To get it, run `npm install npm@latest -g`.

Editor’s note: This is a guest post from Adam Baldwin of ^Lift Security and the Node Security Platform. As we discussed in earlier posts, Adam conducts constant security reviews of the Registry and its contents and keeps us apprised of anything that might compromise our security.
Over the years I’ve spent a lot of time digging through the half million public packages on npm. There are millions of public tarballs to go with those public packages, which means there’s a lot of code to look through and experiment with. Buried in that code is a surprising amount of sensitive information: authentication tokens, passwords, and production test data including credit card numbers.
You, as a developer publishing to npm, want to avoid leaking your data like this. I’ll share some tips for how to control what you’re publishing and keep your secrets out of the public registry.
Let’s explore the behavior of npm publish, because understanding how it chooses which files to include is critical to controlling what gets published. If you want to dive in deeper, the npm documentation goes into more detail, but I’ll cover the important points here.
When you run npm publish, npm bundles up all the files in the current directory. It makes a few decisions for you about what to include and what to ignore. To make these decisions, it uses the contents of several files in your project directory. These files include .gitignore, .npmignore, and the files array in the package.json. It also always includes certain files and ignores others.
npm will always include these files in a package:
- package.json
- README and its variants, like README.md
- CHANGELOG and its variants
- LICENSE and the alternative spelling LICENCE

npm will always ignore these file name patterns:

- .*.swp
- ._*
- .DS_Store
- .git, .hg, .svn, and CVS version control directories
- .npmrc
- .lock-wscript
- .wafpickle-*
- config.gypi
- npm-debug.log

One of the most common ways to exclude files and folders is to specify them in a .gitignore file. This is because files you do not want to commit to your repository are also typically files you do not want to be published.
npm also honors a file called .npmignore, which behaves exactly the same as .gitignore. These files are not cumulative. Adding an .npmignore file to your project replaces .gitignore entirely. If you try to use both, you will inadvertently publish a file you thought you had excluded.
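A minimal model of this precedence rule (illustrative only — npm’s real implementation also applies the files array and its built-in include/ignore rules):

```javascript
// Illustrative model of which ignore file `npm publish` consults.
// .npmignore and .gitignore are alternatives, not cumulative:
// if .npmignore exists, .gitignore is not read at all.
function ignoreFileUsed(projectFiles) {
  if (projectFiles.includes('.npmignore')) return '.npmignore';
  if (projectFiles.includes('.gitignore')) return '.gitignore';
  return null; // neither present; only npm's built-in rules apply
}

console.log(ignoreFileUsed(['.gitignore', '.npmignore', 'production.json']));
// '.npmignore' -- .gitignore is ignored entirely, whatever it lists
```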
This is how that might happen:
1. You add your production.json configuration file to your .gitignore file because it contains sensitive information.
2. You create an .npmignore but forget to add any files to it.
3. You run npm publish.
4. Because .npmignore exists, it is consulted instead of .gitignore, but it includes no files to ignore!
5. Your production.json file is therefore published, and your sensitive information is leaked.

Stick to using .gitignore if you can! If you are using a different version control system, use .npmignore. If you are using git and have an ignore file but wish to publish some of the files you’re not committing (perhaps the result of build steps), start by copying your .gitignore file to .npmignore, then edit it to remove the files you don’t want in git but do want in your package.
There’s an even better way of controlling exactly which files are published with your package: whitelisting with the files array. Only 57,000 packages use this method of controlling what goes into them, probably because it requires you to take inventory of your package. It’s by far the safest way to do it, though.
The files array specifies each file or directory to include in your publish. Only those files are included, plus the ones npm always includes no matter what (such as package.json), minus the ones denied by another rule.
Here’s a package.json file with a files array:
{
  "name": "@adam_baldwin/wombats",
  "version": "1.0.0",
  "description": "",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "files": [
    "index.js"
  ],
  "keywords": [],
  "author": "Adam Baldwin <[email protected]> (https://liftsecurity.io)",
  "license": "ISC"
}
No matter what other files exist in this project directory during npm publish, only the index.js file will be packed up into the tarball (plus the readme and package.json, of course!)
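The selection logic can be modeled roughly like this (a simplification: real npm also expands directories and glob patterns, and applies the full ignore rules listed earlier):

```javascript
// Rough model of the files-array whitelist: the published set is the files
// array plus npm's always-included names, minus its always-ignored names.
// (Simplified to flat file names; the always-include/ignore lists are abridged.)
const ALWAYS_INCLUDE = ['package.json', 'README.md', 'CHANGELOG.md', 'LICENSE'];
const ALWAYS_IGNORE = ['.npmrc', 'npm-debug.log', '.DS_Store'];

function publishedFiles(projectFiles, filesArray) {
  return projectFiles.filter(f =>
    (filesArray.includes(f) || ALWAYS_INCLUDE.includes(f)) &&
    !ALWAYS_IGNORE.includes(f)
  );
}

console.log(publishedFiles(
  ['index.js', 'secret.json', 'package.json', 'README.md', '.npmrc'],
  ['index.js']
));
// [ 'index.js', 'package.json', 'README.md' ]
```

Note how secret.json never makes it into the tarball: with a whitelist, leaking a file requires actively listing it, rather than merely forgetting to ignore it.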
You can use the npm-packlist module to programmatically get a list of the files npm would include for a specific directory. You can also run npm itself to find what it would include. This command lists the files that would get packed up:
tar tvf $(npm pack)
If you are using private modules in a continuous integration (CI) system you will need to provide that service with an authentication token. In the past you had to provide a regular authentication token out of your .npmrc, which gives your CI system the ability to do everything you can do with your npm account. Now it’s possible to generate a read-only token that can limit the damage if a token is leaked via CI.
This feature isn’t yet supported by the npm CLI, but you can use the public registry API to generate a token by hand. Here’s an example with curl:
curl -u [USERNAME]:[PASSWORD] https://registry.npmjs.org/-/npm/v1/tokens \
-X POST -H 'content-type: application/json' \
-d '{"password": "[PASSWORD]", "readonly": true}'
You can review and delete the tokens you have created via the npm website, under Your profile > Tokens.

If you accidentally published a module containing sensitive information, you should consider that data compromised. It would be nice if you could just unpublish the module and hope that nobody saw the mistake, but the reality is that as soon as you publish a module it’s replicated to all of the registry mirrors and other third parties, like ^Lift. The only way to ensure that your services and data aren’t compromised is to invalidate and change any API keys or passwords that were published.
If a secret you can’t change has been leaked, your first step should be to unpublish the package to limit the damage, then take any other actions that are appropriate for the kind of data that leaked. The npm support team will help you unpublish any packages that are outside the 24 hour window for unpublication.
I hope this has been a good refresher on how to help protect what you publish in the registry and ensure that you don’t accidentally leak sensitive data. I encourage you to try using whitelists to control which files go into your next package.
Preventing the mistaken distribution of sensitive information via the npm Registry or any other format requires you and your team to continually educate yourselves on best practices. We will continue to discuss common security issues or mistakes on the npm blog, so be sure to follow along and stay informed on the best ways to secure your code.
This is a small bug fix release wrapping up most of the issues introduced with 5.4.0.
- 0b28ac72d #18458 Fix a bug on Windows where rolling back of failed optional dependencies would fail. (@marcins)
- 3a1b29991 [email protected]: Revert update of write-file-atomic. There were changes made to it that were resulting in EACCES errors for many users. (@iarna)
- cd8687e12 Fix a bug where, if npm decided it needed to move a module during an upgrade, it would strip out much of the package.json. This would result in broken trees after package updates.
- 5bd0244ee #18385 Fix npm outdated when run on non-registry dependencies. (@joshclow) (@iarna)

We’ve talked about our support policy before, and it hasn’t changed, but I wanted to take a moment to provide some clarification.
The npm CLI supports running on any version of Node.js currently supported by the Node.js Foundation. That is, we support the most recent version (even if that’s not an LTS release) and we support any version still in maintenance.
With npm@5 we support 4, 6 and 8. That will likely expand to include Node.js 9 if we don’t have an npm@6 by then.
We support the latest release of each major version we support. At the time of this writing, that means v4.8.4, v6.11.3, and v8.4.0. We simply cannot support the huge number of individual releases, particularly when they often contain bugs that have already been fixed in later releases.
We will not drop support for a major version of Node.js without a major version bump in npm itself. This means that npm’s support for a major Node.js version won’t change until sometime after it drops out of maintenance. So, for example, when Node v4 drops out of maintenance in April 2018 npm will continue to support it until its next major version, whatever that may be.

Over the years our legacy APIs have not had rate-limiting built into them, other than the implicit, informal rate limiting caused by performance bottlenecks. Most of the time, for most users of our public APIs, this has been sufficient. As the registry grows, however, we’ve seen heavier use of our APIs, some of which has been heavy enough to prompt us to take action. If we can identify the user and suspect it’s a bug, we reach out. Sometimes we simply block IPs when the usage is at levels that affect other people’s ability to use the APIs.
This isn’t ideal, and we’d rather give API users clear signals about what the allowed rates are and when they are being rate-limited. We’re therefore rolling out more explicit rate-limiting for all registry APIs. Your tools should be ready to handle responses with HTTP status code 429, which means that you have exceeded the allowed limit.
We will be allowing logged-in users to make requests at a higher rate than anonymous users.
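A client that expects 429s typically backs off before retrying. A minimal sketch (the use of a Retry-After header here is an assumption about the response, not a documented guarantee of the registry API):

```javascript
// Sketch of computing a retry delay after an HTTP 429 from the registry.
// Honor Retry-After when the server sends it; otherwise fall back to
// exponential backoff, capped so retries never wait more than a minute.
function retryDelayMs(attempt, retryAfterHeader) {
  if (retryAfterHeader) return Number(retryAfterHeader) * 1000; // seconds -> ms
  return Math.min(60000, 1000 * 2 ** attempt); // 1s, 2s, 4s, ... capped at 60s
}

console.log(retryDelayMs(3, null)); // 8000
```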
For some services we have already begun limiting the amount of data that can be requested at a time. For example, we limit package search requests to queries that are at least three characters long. We have also taken steps to prevent API users from exploiting bugs to work around that limit.
We’ve also re-instituted limits in our downloads API. Our previous implementation of this API limited date ranges to well under a year for performance reasons. Our current implementation performs much better, but it turned out to have its breaking point as well. You may now request at most 18 months of data at a time for single-package queries. Bulk package data queries are limited to at most 128 packages and 365 days of data.
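Tools that query the downloads API can validate requests client-side before sending them. A sketch of those stated limits (the 548-day figure is an approximation of 18 months; the server’s exact cutoff may differ):

```javascript
// Check a downloads-API query against the published limits:
// single package: at most ~18 months of data; bulk: at most 128 packages
// and 365 days. Enforcement is server-side; this just fails fast locally.
function queryAllowed(packageCount, days) {
  if (packageCount === 1) return days <= 548; // ~18 months, approximated
  return packageCount <= 128 && days <= 365;
}

console.log(queryAllowed(1, 500));  // true
console.log(queryAllowed(200, 30)); // false: too many packages for a bulk query
```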
We reserve the right to further limit API usage without warning when we see a pattern of requests causing the API to be unusable for most callers. We’ll follow up with documentation in these cases. Our primary goal is to prevent API use from either deliberately or accidentally making the service unresponsive for other users.
All of our existing registry API documentation is available in the registry repo on GitHub, and you can find the most up-to-date statements about our rate-limiting policies there.