Over the years our legacy APIs have not had rate-limiting built into them, other than the implicit, informal rate limiting caused by performance bottlenecks. Most of the time, for most users of our public APIs, this has been sufficient. As the registry grows, however, we’ve seen heavier use of our APIs, some of which has been heavy enough to prompt us to take action. If we can identify the user and suspect it’s a bug, we reach out. Sometimes we simply block IPs when the usage is at levels that affect other people’s ability to use the APIs.
This isn’t ideal, and we’d rather give API users clear signals about what the allowed rates are and when they are being rate-limited. We’re therefore rolling out more explicit rate-limiting for all registry APIs. Your tools should be ready to handle responses with HTTP status code 429, which means that you have exceeded the allowed limit.
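For example, a client might back off and retry when it sees a 429. Here’s a minimal sketch (the endpoint is just an example, and the fixed backoff is a stand-in for whatever retry policy suits your tool):

    # retry a registry request when rate-limited (HTTP 429)
    url="https://registry.npmjs.org/lodash"   # example endpoint
    for attempt in 1 2 3 4 5; do
      status=$(curl -s -o /tmp/resp.json -w '%{http_code}' "$url")
      [ "$status" != "429" ] && break   # done, or a non-rate-limit error
      sleep $((attempt * 10))           # back off before trying again
    done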
We will be allowing logged-in users to make requests at a higher rate than anonymous users.
For some services we have already begun limiting the amount of data that can be requested at a time. For example, we limit package search requests to queries that are at least three characters long. We have also taken steps to prevent API users from exploiting bugs to work around that limit.
We’ve also re-instituted limits in our downloads API. Our previous implementation of this API limited date ranges to well under a year for performance reasons. Our current implementation performs much better, but turned out to have its breaking point as well. You may now request at most 18 months of data at a time for single-package queries. Bulk package data queries are limited to at most 128 packages and 365 days of data.
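As a sketch, queries that stay within the new limits look something like this (see the registry repo on GitHub for the authoritative API reference):

    # single package: up to 18 months of data per request
    curl "https://api.npmjs.org/downloads/range/2016-01-01:2017-06-30/express"
    # bulk query: up to 128 packages and 365 days of data per request
    curl "https://api.npmjs.org/downloads/range/2016-07-01:2017-06-30/express,lodash,react"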
We reserve the right to further limit API usage without warning when we see a pattern of requests causing the API to be unusable for most callers. We’ll follow up with documentation in these cases. Our primary goal is to prevent API use from either deliberately or accidentally making the service unresponsive for other users.
All of our existing registry API documentation is available in the registry repo on GitHub, and you can find the most up-to-date statements about our rate-limiting policies there.

npm’s newest wombat is… an actual wombat. Teacup is a female wombat joey being nursed and raised at the Sleepy Burrows Wombat Sanctuary in Gundaroo, Australia. When npm adopted her shortly after she arrived at Sleepy Burrows in July, Teacup weighed just under 200 grams (7 oz.), but her caretakers have done their best to simulate life in her mother’s pouch, providing milk and rubbing her with hemp oil (be warned: cuteness ahead!) to help her regulate body temperature.
A month and a half later, Teacup has grown to over 800 grams (28 oz.), is growing a healthy layer of hair and is learning how to walk. We will be paying very close attention as she matures, because now “watching baby wombat videos” counts as legitimate workday activity. Stay tuned for updates!
If you’d like to support the Sleepy Burrows Wombat Sanctuary, you can learn more, and lose the next week of your life watching all the videos, on their website.
npm, Inc. and I will continue to throw our weight behind our values, including diversity and inclusivity in the Node.js project. I am encouraged to see that the Node.js Foundation board also recognizes the importance of these values, and is taking steps to correct the failures in project governance that risked calling their commitment into doubt.
There is tremendous risk if the Node.js Foundation doesn’t decisively expand its community of open source contributors. The Node.js ecosystem is larger than ever. Its continued growth depends on technical innovation, and innovation requires a healthy culture. Any project will suffer without contributions from a broad selection of its members, and any project will lose relevance if its leaders don’t actively promote inclusive conduct.
Node.js developers are an extremely diverse community who care deeply about inclusivity, and are not shy about expressing themselves through direct action. The Node.js project is stronger when they speak up.
I am confident that the leaders of Node.js Foundation will take the right actions to put this challenge behind us. It isn’t the first time that the community has spoken up about its needs, and I hope it isn’t the last. I am extremely proud of the community we’ve all built together, and excited to see it continue to grow and mature.

This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q: Hi! Can you state your name and what you do?
A: Hi! I’m Jan and I’m an iOS developer at Clue.
How’s your day going?
Chilling with my cat, so purrrrrretttty good.
Tell me the story of npm at your company.
Our products are two mobile apps for iOS and Android. We write some logic in JavaScript so that we don’t have to do it twice on both platforms and can share it. Using a real package manager to handle that instead of someone going from time to time “hey, we should maybe update the JS in the apps, huh?” is pretty nice.
Can you tell us a story about a specific package you wanted to make that private packages really enabled you to do?
Some of the core logic of our app would be really error-prone and tricky to re-write for each of our platforms — but it contains a bunch of proprietary logic, so keeping it private was a must-have.
Does your company do open source? How do you negotiate what you keep private and public?
We do have a GitHub org and a few repos up there with little helper things that we’ve built over the years, but it’s not an important part of our work. We’ve recently been talking internally about carving out bits and pieces that would be useful in broader contexts and open-sourcing those, but nothing concrete yet.
To people who are unsure what they could use private packages for, how would you explain the use case?
By making analogy to GitHub private orgs/repos. You know how your source code is in a private repo? Well, the build artifacts of your JS library can be, too!
How’s the day-to-day experience of using private packages?
Pretty seamless! I have a few nitpicks about the web interface (getting to the private package takes way too many clicks, and I’d love to see a version history), but otherwise I can’t say I’ve noticed any problems.
Oh! There was an issue earlier this year when the person who set up the org left the company. I remember people complaining about the process of transferring the ownership being a PITA, but I wasn’t super involved with that, so I don’t really remember the specifics…
Editor’s note: We’re always happy to help! If you have any issues, please reach out to support at [email protected].
Would you recommend that another org or company use private packages or orgs? Why?
Yes. “Please stop copy-pasting files between repos.”
Any questions I didn’t ask that you wish I did?
Nyup, I think you got it covered.
Any cool npm stuff your company has done publicly that you’d like to promote?
Sadly not…
Here’s another small big release, with a handful of fixes and a couple of small new features! This release has been incubating rather longer than usual and it’s grown quite a bit in that time. I’m also excited to say that it has contributions from 27 different folks, which is a new record for us. Our previous record was 5.1.0 with 21. Before that, the record had been held by 1.3.16 since December of 2013.
If you can’t get enough of the bleeding edge, I encourage you to check out our canary release of npm. Get it with npm install -g npmc. It’s going to be seeing some exciting stuff in the next couple of weeks, starting with a rewritten npm dedupe, but moving on to… well, you’ll just have to wait and find out.
d080379f6 [email protected] Updates extract to use tar@4, which is much faster than the older tar@2. It reduces install times by as much as 10%. (@zkat)4cd6a1774 0195c0a8c #16804 [email protected] Update publish to use tar@4. tar@4 brings many advantages over tar@2: It’s faster, better tested and easier to work with. It also produces exactly the same byte-for-byte output when producing tarballs from the same set of files. This will have some nice carry on effects for things like caching builds from git. And finally, last but certainly not least, upgrading to it also let’s us finally eliminate fstream—if you know what that is you’ll know why we’re so relieved. (@isaacs)1ac470dd2 #10382 If you make a typo when writing a command now, npm will print a brief “did you mean…” message with some possible alternatives to what you meant. (@watilde)20c46228d #12356 When running lifecycle scripts, INIT_CWD will now contain the original working directory that npm was executed from. Remember that you can use npm run-script even if you’re not inside your package root directory! (@MichaelQQ)be91e1726 4e7c41f4a [email protected]: Fixes a number of issues on Windows and adds support for several more languages: Korean, Norwegian (bokmål and nynorsk), Ukrainian, Serbian, Bahasa Indonesia, Polish, Dutch and Arabic. (@zkat)2dec601c6 #17142 Add the new commit-hooks option to npm version so that you can disable commit hooks when committing the version bump. (@faazshift)bde151902 #14461 Make output from npm ping clear as to its success or failure. (@legodude17)b6d5549d2 #17844 Make package-lock.json sorting locale-agnostic. Previously, sorting would vary by locale, due to using localeCompare for key sorting. This’ll give you a little package-lock.json churn as it reshuffles things, sorry! (@LotharSee)44b98b9dd #17919 Fix a crash where npm prune --production would fail while removing .bin. (@fasterthanlime)c3d1d3ba8 #17816 Fail more smoothly when attempting to install an invalid package name. (@SamuelMarks)55ac2fca8 #12784 Guard against stack overflows when marking packages as failed. (@vtravieso)597cc0e4b #15087 Stop outputting progressbars or using color on dumb terminals. (@iarna)7a7710ba7 #15088 Don’t exclude modules that are both dev & prod when using npm ls --production. (@iarna)867df2b02 #18164 Only do multiple procs on OSX for now. We’ve seen a handful of issues relating to this in Docker and in on Windows with antivirus. (@zkat)23540af7b #18117 Some package managers would write spaces to the _from field in package.json’s in the form of name @spec. This was causing npm to fail to interpret them. We now handle that correctly and doubly make sure we don’t do that ourselves. (@IgorNadj)0ef320cb4 #16634 Convert any bin script with a shbang a the start to Unix line-endings. (These sorts of scripts are not compatible with Windows line-endings even on Windows.) (@ScottFreeCode)71191ca22 #16476 [email protected] Running an install with --ignore-scripts was resulting in the the package object being mutated to have the lifecycle scripts removed from it and that in turn was being written out to disk, causing further problems. This fixes that: No more mutation, no more unexpected changes. (@addaleax)459fa9d51 npm/read-package-json#74 #17802 [email protected] Use unix-style slashes for generated bin entries, which lets them be cross platform even when produced on Windows. (@iarna)5ec72ab5b #18229 Make install.sh find nodejs on debian. 
(@cebe)b019680db #10846 Remind users that they have to install missing peerDependencies manually. (@ryanflorence)3aee5986a #17898 Minor punctuation fixes to the README. (@AndersDJohnson)e0d0a7e1d #17832 Fix grammar, format, and spelling in documentation for run-script. (@simonua)3fd6a5f2f #17897 Add more info about using files with npm pack/npm publish. (@davidjgoss)f00cdc6eb #17785 Add a note about filenames for certificates on Windows, which use a different extension and file type. (@lgp1985)0cea6f974 #18022 Clarify usage for the files field in package.json. (@xcambar)a0fdd1571 #15234 Clarify the behavior of the files array in the package-json docs. (@jbcpollak)cecd6aa5d #18137 Clarify interaction between npmignore and files in package.json. (@supertong)6b8972039 #18044 Corrected the typo in package-locks docs. (@vikramnr)6e012924f #17667 Fix description of package.json in npm-scripts docs. (@tripu)48d84171a f60b05d63 [email protected] Perf improvements. (@zkat)f4650b5d4 [email protected]: Serialize writes to the same file so that results are deterministic. Cleanup tempfiles when process is interrupted or killed. (@ferm10n) (@iarna)96d78df98 80e2f4960 4f49f687b 07d2296b1 a267ab430 #18176 #18025 Move the lifecycle code out of npm into a separate library, npm-lifecycle. Shh, I didn’t tell you this, but this portends to some pretty cool stuff to come very soon now. (@mikesherov)0933c7eaf #18025 Force Travis to use Precise instead of Trusty. We have issues with our couchdb setup and Trusty. =/ (@mikesherov)afb086230 #18138 Fix typos in files-and-ignores test. (@supertong)3e6d11cde #18175 Update dependencies to eliminate transitive dependencies with the WTFPL license, which some more serious corporate lawyery types aren’t super comfortable with. (@zkat)ee4c9bd8a #16474 The tests in test/tap/lifecycle-signal.js, as well as the features they are testing, are partially broken. This moves them from being skipped in CI to being disabled only for certain platforms. In particular, because npm spawns its lifecycle scripts in a shell, signals are not necessarily forwarded by the shell and won’t cause scripts to exit; also, shells may report the signal they receive using their exit status, rather than terminating themselves with a signal. (@addaleax)9462e5d9c #16547 Remove unused file: bin/read-package-json.js (@metux)0756d687d #16550 The build tools for the documentation need to be built/installed before the documents, even with parallel builds. Make has a simple mechanism which was made exactly for that: target dependencies. (@metux)
Recently, there’s been some buzz around the next great architectural shift in systems. There is a rising interest in the evolution of decentralized edge computing as a core part of that shift.
For over two years, npm has been using edge computing concepts to ensure that the developer experience for users of npm Enterprise, our private registry product, matches the experience of using the centralized, cloud-hosted version of the npm Registry.
Here’s why we’re doing that, and how:
Many enterprises have strict requirements that prevent them from using cloud-hosted products for critical parts of their infrastructure. This approach makes sense from a regulatory compliance perspective, but it makes life inconvenient for developers within those companies who wish to take advantage of open-source code from the npm Registry, or who wish to use npm to share and reuse their own code with their colleagues.

npm Enterprise allows developers at big companies to run a version of the npm Registry behind their firewall. Of course, it wouldn’t be enough for enterprise customers to simply deploy a fresh install of npm Enterprise with an empty registry. Much of npm’s value comes from the 500,000 packages available on the public registry, and being able to combine these packages with private code. Without access to these packages, developers would waste time reinventing a lot of wheels.
npm Enterprise lets companies mix public packages from the public registry with private code stored within their private registry without risks or complexity.
We designed npmE so that each npmE server is a private edge node to the public registry. Each npmE instance can replicate select parts or all of the npm Registry to offer this functionality to end users. It also provides additional local services that are only accessible to these users, based on a company’s unique requirements.

Our customers are able to configure their private registry as a full mirror of the public Registry to decrease latency, cut bandwidth costs, and offer the npm Registry to end users who are restricted from accessing the public internet. Alternatively, they may selectively mirror the npm Registry using specific whitelists managed by the admin console or the npm command line.
When combined with npmE add-ons which enforce code quality, evaluate how packages and their dependencies are licensed, and scan for security vulnerabilities, this architecture gives companies total control of the public packages their developers may use.
At the same time, these developers may find, share, and re-use proprietary code by publishing to their private local registry. Private code stays private by never leaving the company’s own infrastructure.
End users don’t have to think about where each package is located; they can just pull from the npm Enterprise server. Behind the scenes, the server automatically determines the scope and proxy of the package pull.
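On the client side, that mostly comes down to registry configuration. A minimal sketch, with a hypothetical server address and scope (real setups vary):

    # ~/.npmrc — route everything through the npm Enterprise server
    registry=https://npme.example.com
    # or route only the company’s scope there, leaving the rest public
    @mycompany:registry=https://npme.example.com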

There are two primary challenges of an edge-node architecture: deploying and maintaining the nodes themselves, and building the enterprise-specific features around them.
For many years, deploying and maintaining a private instance of this kind of architecture would have been prohibitively difficult for everyone but the most advanced IT organizations. These sorts of enterprise software installations took several months to implement and involved manual processes of configuring servers, runtimes, and components. Every enterprise IT org had to take on the ops role for its own enterprise instance.
Fortunately, it’s now much easier to deploy and maintain private edge nodes thanks to technologies like containerization, orchestration and scheduling platforms.
Deployment and management are now baked into the design and development effort of modern applications. Generally, this creates reproducible and consistent cloud-native deployments, but it is also becoming the foundation of modern enterprise software deployments. This automation and inherent portability allows our customers to deploy into their own environments without deep knowledge of our architecture.
Of course, it isn’t quite as easy and magical as it all sounds. We initially built out our own containerized installation methods by packing all of our services into a single container. This approach still required npm Enterprise customers to be quite technical to complete the deployment. The system also lacked the tooling for managing versions, customers, backups, and updates.
After a few months of banging our heads against the wall, we decided that we were dedicating too many resources to deploying and managing Enterprise instances. We adopted Replicated as our enterprise version-management platform. Replicated provides workflows for integrating our existing CI/CD release pipeline with our enterprise release channels.
Similar to the way that npm packages are versioned for automatic updates, we use discrete versioned images for each of the services that make up npm Enterprise. We organize these into specific channels — “stable,” “beta,” and “unstable” — and when we promote a set of images to “stable,” Replicated automatically notifies our customers that an update to npm Enterprise is available and makes the update as simple as a single click. Our customers don’t have to manually update services, and we don’t have to manually push containers around in order to keep the edge nodes of the npm Registry on recent versions.

Beyond deployment and management, there was also the problem of developing enterprise-specific features such as change management processes, LDAP integration, and an admin dashboard, which enterprise customers require but which fall outside our core product expertise. Many of these features are included (or at least made easier) in the Replicated platform, and provide a consistent experience that enterprise IT admins are now familiar with.
These enterprise-ready features are important to our Enterprise customers, but since they aren’t a core part of our value proposition, it has made a ton of sense for us to leverage a partner to power them as much as possible.
The state of the art in edge-node architecture is still evolving, but it is gaining traction in a variety of use cases. A growing number of JS developers rely on npm, and as a result, a growing number of enterprises will need npm Enterprise. For developers to be effective, it’s imperative that they benefit from the global Registry of npm packages.
By partnering with Replicated to pioneer an architecture that delivers on that promise while reducing management overhead and satisfying security requirements, we can see an emerging future that embraces the distributed nature of the internet. To learn more about Replicated’s technology, visit their site.
npm’s core and enterprise offerings are constantly improving. To try out a fully private, enterprise edge-node instance of npm, just reach out or download a free trial today.
Editor’s note: This is a guest post from Jenn Schiffer, who originally posted it on her blog extremely online and incredibly logged on (you can see the original post here). We feel 100% in agreement about the importance of an inclusive, collaborative, and supportive community and code ecosystem.
I was having lunch the other day with a very cool local dev evangelist and among the many interesting topics we covered in the dev rel world, the one that stuck out was how developers tend to over-worry about whether their contributions to a code ecosystem are actually valuable. This comes up a lot when people ask me if it’s “okay” to post a static site on Glitch or use it to save code snippets or prototype throwaway things (yes!).
When I say “code ecosystem” I mean places like the npm registry, GitHub, and even Glitch. And when I say that developers worry about their contributions there, I mean that we have an actual problem where we judge whether contributions are “worthy” of being there at all.
I’ve had many debates with past coworkers, pals, and collaborators about whether a node module that returns the number of seconds in a minute belongs in npm (sure!) or if it’s okay to have a GitHub repo that only contains a readme file listing one’s favorite pizzerias (definitely!), and I find it hard to believe that it’s the ecosystem that detractors of this kind of use are worried about. Honestly, I think it has to do with their ideas about the value of repo and module counts and contributions.
The open source community has a big problem with projecting our insecurities about our own metrics onto other developers and it’s a bad cycle that makes people of all levels of experience worry about whether their contributions are some fake idea of “worthy” or not. It’s been used as a tool of harm against women in the community (“she hardly has any green on her GitHub”) which in part is harm against the entire community and our code ecosystems.
As developers we need to stop giving weight to these metrics and chill tf out when it comes to judging our peers. As ecosystem owners, we need to better moderate and have more discussions with the community about what we expect of our users with regards to both social behavior and code contributions. I don’t want people judging Glitch users on how many Glitch projects they have or if they are mostly “just remixes,” much like I witness with GitHub.
We call Glitch “the friendly community where you’ll build the app of your dreams” — and that dream app can be a static site, or a markdown file listing your favorite pizzerias. Hey, it may inspire you to learn how to evolve it into a map of those places using the Google Maps API, or not! All of our dreams are different, and so all of our contributions to Glitch will be different, and I think that is a big part of what will keep Glitch a friendly community.
On August 1, a user notified us via Twitter that a package with a name very similar to the popular cross-env package was sending environment variables from its installation context out to npm.hacktask.net. We investigated this report immediately and took action to remove the package. Further investigation led us to remove about 40 packages in total.
On July 19 a user named hacktask published a number of packages with names very similar to some popular npm packages. We refer to this practice as “typo-squatting”. In the past, it’s been mostly accidental. In a few cases we’ve seen deliberate typo-squatting by authors of libraries that compete with existing packages. This time, the package naming was both deliberate and malicious—the intent was to collect useful data from tricked users.
All of hacktask’s packages have been removed from the npm registry.
Adam Baldwin of Lift Security also looked into this incident to see if there were any other packages, not owned by hacktask, with the same package setup code. He has every file in the public registry indexed by content hash to make scans like this possible. He did not find any other instances of that specific file with those contents.
Following is a list of hacktask’s packages, with a count of total downloads from 7/19 to 7/31.
Download counts for these packages are larger in the last two days because of public interest in the problem. The numbers from before exposure are more revealing of the effect of the malware. Note that 30-40 downloads is typical for any public package published to the registry, from registry mirrors automatically downloading copies. From this you can see that the real danger came from the crossenv package, which had nearly 700 downloads, with some secondary exposure from the jquery typosquats. But even in that case, most of the downloads come from mirrors requesting copies of the 16 versions of crossenv published. Our estimate is that there were at most 50 real installations of crossenv, probably fewer.
babelcli: 42
cross-env.js: 43
crossenv: 679
d3.js: 72
fabric-js: 46
ffmepg: 44
gruntcli: 67
http-proxy.js: 41
jquery.js: 136
mariadb: 92
mongose: 196
mssql-node: 46
mssql.js: 48
mysqljs: 77
node-fabric: 87
node-opencv: 94
node-opensl: 40
node-openssl: 29
node-sqlite: 61
node-tkinter: 39
nodecaffe: 40
nodefabric: 44
nodeffmpeg: 39
nodemailer-js: 40
nodemailer.js: 39
nodemssql: 44
noderequest: 40
nodesass: 66
nodesqlite: 45
opencv.js: 40
openssl.js: 43
proxy.js: 43
shadowsock: 40
smb: 40
sqlite.js: 48
sqliter: 45
sqlserver: 50
tkinter: 45
If you downloaded and installed any of these packages, you should immediately revoke and replace any credentials you might have had in your shell environment.
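If you’re not sure what was in scope, a quick audit of your shell environment is a good start. A minimal sketch (the grep pattern is just a heuristic, not an exhaustive check):

    # list environment variables that commonly hold credentials
    env | grep -iE 'token|secret|key|pass|auth'
    # if your npm auth token may have been exposed, invalidate it
    npm logout && npm login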
hacktask’s email address is banned from using npm. In this era of throwaway email addresses, that is not sufficient to prevent the human being behind it from trying again, but we felt it was a necessary gesture.
We are supporting ^Lift Security and the Node Security Project in their ongoing work to do static analysis of public registry packages, but this will not find every problem. Determining if a package contains malicious content when published is, of course, equivalent to the halting problem and therefore not something we can do.
We’re discussing various approaches to detecting and preventing publication—either accidental or malicious—of packages with names very close to existing packages. There are programmatic ways to detect this, and we might use them to block publication. We’re using the Smyte service to detect spam as it is published to the registry, and will be experimenting with using it to detect other kinds of violations of our terms of service.
Please do reach out to us immediately if you find malware on the registry. The best way to do so is by sending email to [email protected]. We will act to clean up the problem and find related problems if we can.

You probably know ^Lift Security for its work as the Node Security Project, which reviews the most popular of the half-million packages in the npm Registry to find security vulnerabilities. However, you might not know that ^Lift also reviews the npm registry itself.
Since npm was founded, we have worked with Adam Baldwin and his team to conduct periodic security reviews of the code that we use to power the world’s largest software registry. Their methods include penetration tests and code audits of the contents of all private packages and of all of npm’s operations.
We work with ^Lift’s engineers to get a much-needed second opinion about our work. They clearly explain tradeoffs and priorities in their code reviews and give us actionable suggestions for mitigating risks.
Earlier this week, ^Lift completed another penetration test of the registry and I am currently reviewing their report of what they found. As always, they’ve shown us that we have things to do to tighten up our operations, including using HSTS and changing the ways some of our APIs operate.
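(HSTS, or HTTP Strict Transport Security, is a response header that tells clients to connect only over HTTPS. You can check whether a host sends it with a one-liner like this:

    curl -sI https://registry.npmjs.org | grep -i strict-transport-security

)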
In npm, Inc.’s three and a half years of operations, there has never been an incident in which a stranger exploited a vulnerability to steal user credentials — but our work to improve security is never done. Every change to a system can have security implications, and we’re constantly working on a lot of things! Every npm user and package on the npm registry is safer because of ^Lift’s reviews.
“We’re dedicated to creating good processes so that a solid foundation exists for new features to help users protect their accounts and the software they publish such as two factor authentication or package signing,” Adam said.
“Security is always at odds when it comes to other business objectives. From ^Lift’s perspective, npm does a great job prioritizing security when necessary and also taking it seriously. We don’t have to go to extreme lengths to convince them when something is an issue. npm takes the time to understand and make it a part of its overall business plans.”
npm will always favor transparency — we’re happy to describe, in specific detail, our security processes and policies, including how you can help. Take a look, share your feedback, and watch this space. We’ll be sharing the security processes and features we add in order to keep the npm community in the loop.
The #npm channel on irc.freenode.net is being devoiced. That means: if you’re not a moderator in the channel, you won’t be able to post there. Instead, you’ll be redirected to this message.
As an official communication channel, IRC is difficult for us to adequately moderate and maintain. A welcoming and compassionate npm community is important to us, and as a small team, moderating IRC effectively has proven very difficult. Since #npm is probably always going to be seen as an “official” npm venue, we cannot in good conscience continue to let it fall into disrepair.
Here are some much better places to seek help, guidance, and support, which are actively moderated and maintained, either by us or by others.
#Node.js channel on irc.freenode.net is a good place to go for general purpose Node.js or JavaScript chat. And if you create an unofficial #npm channel, we won’t stop you, but we also can’t commit to helping you moderate it.
This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q. Hi! Can you state your name and what you do?
A. Hi 👋, I’m Max from JavaScript Studio. I’m currently forming a startup around a cloud service that scans JavaScript modules for runtime errors.
How’s your day going?
It’s going great! Calm enough to get some work done.
Tell me the story of npm at your company.
I want to share my work early with interested developers, but I also want to avoid running my own infrastructure. So I’ve set up the npm @studio Org where I can publish packages privately and manage who can install and/or publish.
Can you tell us a story about a specific package you wanted to make that private packages really enabled you to do?
It’s really a story about many packages, because I prefer slicing projects into small parts. This usually means a lot of setup, like cloning multiple git repositories and linking them. With private packages you can clone a single repository, run `npm install`, and you’re good to go. It makes modular projects more approachable.
Does your company do open source? How do you negotiate what you keep private and public?
Yes, I’m planning to open source as much of my work as possible. There are three questions I ask:
• Is it general purpose?
• Is the scope of the module well-defined?
• Does the current form of the project provide value?
Especially the first point is important. It makes little sense to open source projects that are only useful in the context of JavaScript Studio. To give an example, I have a couple of projects that allow me to run parts of my AWS setup offline. I use them to run tests or when coding at the airport. They are general purpose, the scope is clear, and they (hopefully) provide value to others. I’m planning to open source those.
To people who are unsure how they could use private packages, how would you explain the use case?
I think the main use case is sharing your work privately with no hidden costs like infrastructure maintenance. It’s also nice to know I can easily open source a project any time by flipping a switch.
How’s it going? How’s the day to day experience?
It’s 100% reliable. I’ve never had a single service issue. It just works™.
How would you see the product improved or expanded in the future?
I think it could be easier to add people to teams with “install only” roles. As a first step, I want to allow others to install my modules, but not necessarily publish new versions.
A nice feature would be notifications, like desktop notifications or Slack integration. If someone publishes a module in my org, I’d like to be notified.
Would you recommend that another company use private packages and orgs?
If there are no objections to uploading packages to npm, it’s just the easiest way to get started for small teams. Especially if you have little capacity for infrastructure maintenance.
Any cool npm stuff your company has done publicly that you’d like to promote?
I started open sourcing the ground work projects that are used by most of the modules making up JavaScript Studio.
The first one is a release tool that integrates with the `npm version` command to prepare a changelog from git commits. I make all my releases with this tool:
https://www.npmjs.com/package/@studio/changes
The second project is a tiny JSON logger with a fancy formatter featuring emoji.
npm’s open source terms of use require that you provide us with a valid email address. Starting next week, you will need to verify your email before you can publish new packages.
This change affects only the requirements for new packages. You do not need to verify your email address to publish new versions of existing packages.
When npm was a smaller registry with fewer users, we were not an attractive spam target, but this is no longer true. We’ve seen a recent increase in spammers publishing many packages to the registry, sometimes thousands of packages at once. Sometimes spammers publish these packages from a single account, and sometimes they create a new account for every package published. Spammers can, currently, create accounts very easily and begin spamming immediately since no verification step is required.
Requiring valid email addresses for people intending to publish new packages is one of several steps we’re taking to slow down spammers. We are also working with Smyte to identify spam packages from their metadata and README data as they are published, so we can clean up incidents faster than in the past.
Log into your account on the npm website and go to your profile page. Mine, for example, is https://www.npmjs.com/~ceejbot. If your email address needs verification, you’ll see a banner like this one:
Click the “send it again” link to send the verification email.
If you need to change your email address, you can do so on the email edit page.
This change takes effect next week, on Tuesday, July 25.
You need to have a valid email address associated with your npm account to publish new packages. Verify your email address now if you have not already done so.
Contact our support team if you have questions about this requirement or experience problems following the steps above. npm loves you, but it doesn’t love spam.
As mentioned before, we’re continuing to do relatively rapid, smaller releases as we keep working on stomping out npm@5 issues! We’ve made a lot of progress since 5.0 already, and this release is no exception.
• 1e3a46944 #17616 Add --link filter option to npm ls. (@richardsimko)
• 33df0aaa [email protected] (@zkat)
• 8e979bf80 Revert the change where npm stopped flattening modules that required peerDeps. This caused problems because folks were using peer deps to indicate that the target of the peer dep needed to be able to require the dependency, and had been relying on the fact that peer deps didn’t change the shape of the tree (as of npm@3). The fix that will actually work for people is for a peer dep to insist on never being installed deeper than the thing it relies on. At the moment this is tricky because the thing the peer dep relies on may not yet have been added to the tree, so we don’t know where it is. (@iarna)
• 7f28a77f3 #17733 Split remove and unbuild actions into two, to get uninstall lifecycles and the removal of transitive symlinks during uninstallation to run in the right order. (@iarna)
• 637f2548f #17748 When rolling back, use the symlink project-relative path, fixing some issues with fs-vacuum getting confused while removing symlinked things. (@iarna)
• f153b5b22 #17706 Use semver to compare node versions in npm doctor instead of a plain > comparison. (@leo-shopify)
• 542f7561 #17742 Fix issue where npm version would sometimes not commit package-locks. (@markpeterfejes)
• 51a9e63d #17777 Fix bug exposed by other bugfixes where the wrong package would be removed. (@iarna)
Fixes to the newly bundled npx:
• npx works again.
• update-notifier will no longer run for the npx bundled with npm.
• npx <cmd> in a subdirectory of your project should be able to find your node_modules/.bin now. Oops.
Have we mentioned we really like documentation patches? Keep sending them in! Small patches are just fine, and they’re a great way to get started contributing to npm!
• fb42d55a9 #17728 Document semver git urls in package.json docs. (@sankethkatta)
• f398c700f #17684 Tweak heading hierarchy in package.json docs. (@sonicdoe)
• d5ad65e50 #17691 Explicitly document the --no-save flag for uninstall. (@timneedham)
This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q: Hi! Can you state your name and what you do, and what your company does?
A: Hi, I’m Clemens and I’m the lead frontend developer at Civey in Berlin, Germany. We do representative online opinion research. Anyone can embed our widget in their website to allow their users to take part in over 500 polls on a variety of topics.
How’s your day going?
I’ve just spent the last few hours optimizing our webpack config, but besides that I’m doing great. ;) The weather is starting to get nicer here in Berlin and the weekend is just around the corner. Also, we released some exciting new features this week and are working on even more.
Tell me the story of npm at your company. What specific problem did you have that private packages and orgs solved?
We’ve been using npm in our daily frontend workflow from the start of the company. That’s how we handle all of our dependencies, build steps etc.
We started out with just one product: our voting widget. When we started building our webapp shortly thereafter, we quickly noticed that we’d like to share some parts of the authentication logic or the API interaction layer.
So we moved them to separate git repositories and specified them as dependencies in our package.json. With more and more packages being split out we wanted to get proper releases and versioning. This is where private packages on npm came into play.
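In npm terms, that migration looks roughly like this (the package names and URL are hypothetical, just to illustrate the shape of the change):

    # before: consuming shared code straight from a git repository
    npm install git+ssh://[email protected]/civey/auth-logic.git
    # after: a properly versioned private package under the org’s scope
    npm install @civey/auth-logic@^1.2.0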
Can you tell us a story about a specific package you wanted to make that private packages really enabled you to do?
We recently launched a third product, which we call “Analytics Widget,” and are working on refactoring our website. We quickly noticed that we would like to reuse and share a lot of our already existing UI components. So we started setting up a UI component library and published it as a private npm package. While our other internal libraries could potentially be open-sourced at some point, branded and very specific UI components aren’t of great value to the wider community. They are of great value to my team, though, as they allow us to build up a solid foundation for our user interfaces. One of our developers just replaced all of the forms in our webapp with a new, redesigned implementation in a matter of hours. This was possible because moving code to a separate module makes you think about the API you expose. If it’s well designed, reusing it becomes almost trivial.
Does your company do open-source? How do you negotiate what you keep private and public?
We currently do not maintain any open-source projects, but our developers are encouraged to contribute to the libraries we use. In the future we’re looking to potentially extract useful libraries out of our products.
To people who are unsure what they could use private packages for, how would you explain the use case?
As a JavaScript developer, you probably know and love a lot of npm modules. Sometimes you want to share a piece of code but cannot open-source it. With private packages, it’s just an `npm install` away. Another aspect of using modules is better structure for your code. You can split out pieces of functionality, write tests, and forget about it. As I mentioned above, it makes you think about abstractions and boundaries differently, which can help make you a better developer.
How’s it going? How’s the day to day experience of using private packages/orgs?
Publishing and installing private packages works flawlessly and is lightning fast. We rarely have to think about it, because we automatically publish from our CI system. For us as developers, the npm CLI tool has really made a big step forward with v5 and is a pleasure to work with. Private packages aren’t any different to work with than open-source packages; you just have to remember to put your org scope in the package name. This makes them really pleasant to work with, as most JavaScript developers are already familiar with open-source npm modules.
How would you see the product improved or expanded in the future?
From the point of view of our CTO, the organization-administration UI on the npm website could be improved. It is a little barebones with regards to managing billing, etc. Also, more fine-grained access token management would be valuable for our CI system.
Would you recommend that another org or company use private packages or orgs and why?
If your org or company has multiple projects that share code that can’t be open-sourced (yet), npm private packages are the best solution to keep them organized. Each module can have proper tests, be integrated in CI flows and get correctly versioned releases. This will allow developers to confidently make changes in a large system and update modules with care. Additionally, a lot of JavaScript developers are already familiar with npm’s ecosystem which makes onboarding a breeze. For most of our projects it essentially just takes an `npm install && npm start` to get started.
Any cool npm stuff your company has done publicly that you’d like to promote?
No, but if you speak German feel free to check out our service, answer some questions and get realtime representative opinion data on all kinds of interesting topics. ;)
Those of you upgrading npm to its latest version, [email protected], might notice that it installs a new binary alongside the usual npm: npx.
npx is a tool intended to help round out the experience of using packages from the npm registry — the same way npm makes it super easy to install and manage dependencies hosted on the registry, npx makes it easy to use CLI tools and other executables hosted on the registry. It greatly simplifies a number of things that, until now, required a bit of ceremony to do with plain npm:

For the past couple of years, the npm ecosystem has been moving more and more towards installing tools as project-local devDependencies, instead of requiring users to install them globally. This means that tools like mocha, grunt, and bower, which were once primarily installed globally on a system, can now have their versions managed on a per-project basis. It also means that all you need to do to get an npm-based project up and running is to make sure you have node+npm on your system, clone the git repo, and run npm it to install and test. Since npm run-script adds local binaries to your PATH, this works just fine!
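In practice, bootstrapping a project looks like this (the repo URL is just a stand-in):

    git clone https://github.com/some-user/some-project.git
    cd some-project
    npm it    # shorthand for npm install && npm test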
The downside is that this gives you no fast/convenient way to invoke local binaries interactively. There are several ways to do this, and they all have some annoyance to them: you can add those tools to your scripts, but then you need to remember to pass arguments through using --; you can do shell tricks like alias npmx=PATH=$(npm bin):$PATH; or you can just type the path to them manually, like ./node_modules/.bin/mocha. These all work, but none are quite ideal.
npx gives you what I think is the best solution: $ npx mocha is all you need to do to use your local installation. If you go an extra step and configure the shell auto-fallback (more on this below), then $ mocha inside a project directory will do the trick for you!
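Side by side (assuming mocha is a local devDependency):

    # the old workarounds
    ./node_modules/.bin/mocha
    PATH=$(npm bin):$PATH mocha
    # with npx
    npx mocha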
For bonus points, npx has basically no overhead if invoking an already-installed binary — it’s clever enough to load the code for the tool directly into the current running node process! This is about as fast as this sort of thing gets, and makes it a perfectly acceptable tool for scripting.

Have you ever run into a situation where you want to try some CLI tool, but it’s annoying to have to install a global just to run it once? npx is great for that, too. Calling npx <command> when <command> isn’t already in your $PATH will automatically install a package with that name from the npm registry for you, and invoke it. When it’s done, the installed package won’t be anywhere in your globals, so you won’t have to worry about pollution in the long-term.
This feature is ideal for things like generators, too. Tools like yeoman or create-react-app only ever get called once in a blue moon. By the time you run them again, they’ll already be far out of date, so you end up having to run an install every time you want to use them anyway.
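For instance, scaffolding a new app without ever installing the generator globally:

    npx create-react-app my-app   # fetches the latest version, runs it, leaves no global behind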
As a tool maintainer, I like this feature a lot because it means I can just put $ npx my-tool into the README.md instructions, instead of trying to get people over the hurdle of actually installing it. To be frank, saying “oh, just copy-paste this one command, it’s zero commitment” is more palatable to users who are unsure about whether to use a tool or not.
Here are some other fun packages that you might want to try using with npx: happy-birthday, benny-hill, workin-hard, cowsay, yo, create-react-app, npm-check. Go ahead! A command to get a full-fledged local REST server running is small enough to fit in a tweet.
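For example, here’s one way to do it with json-server, one of several such tools (the db.json contents are just an example):

    echo '{"posts":[{"id":1,"title":"hello"}]}' > db.json
    npx json-server --watch db.json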

As it turns out, there’s this cool package called node-bin on the npm registry. This means that you can very easily try out node commands using different node versions, without having to use a version manager like nvm, nave, or n. All you need is a stock [email protected] installation!
The -p option for npx allows you to specify packages to install and add to the running $PATH, so it means you can do fun things such as: $ npx -p node-bin@6 npm it to install and test your current npm package as if you were running node@6 globally. I use this all the time myself — and even recently had to use it a lot with one project, due to one of my testing libraries breaking under node@8. It’s been a real life-saver, and I’ve found it much easier to use for this sort of use-case than version managers, which I always somehow find a way to break or misconfigure.
Note: node-bin only works on *nix platforms. It’s the excellent work of Aria Stewart. In the future, that same package will be available as simply node, so you’ll be able to do $ npx node@6 ... directly, including on Windows.

A lot of npm users these days take advantage of the really cool run-script feature. Not only does it arrange your $PATH such that local binaries are accessible, but it also adds a whole slew of environment variables that you can access in those scripts! You can see what these extra variables are with $ npm run env | grep npm_.
This can make it tricky to develop and test out run scripts — and it means that even with tricks like $(npm bin)/some-bin, you still won’t have access to those magical env vars while working interactively.
But wait! npx has yet another trick up its sleeve: when you use the -c option, the script written inside the string argument will have full access to the same env variables as a regular run script! You can even use pipes and multiple commands with a single npx invocation!
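For example (these particular variables are populated from your package.json, so run this inside a package directory):

    # run-script env vars are available inside the -c string
    npx -c 'echo "$npm_package_name@$npm_package_version"'
    # pipes and multiple commands work too
    npx -c 'npm ls --parseable | wc -l'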

It’s become pretty common to use gist.github.com to share all sorts of utility scripts, instead of setting up entire git repos, releasing new tools, etc.
With npx, you can take it a step further: since npx accepts any specifier that npm itself does, you can create a gist that people can invoke directly, with a single command!
Try it out yourself with https://gist.github.com/zkat/4bc19503fe9e9309e2bfaa2c58074d32!
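That is, running it is a one-liner:

    npx https://gist.github.com/zkat/4bc19503fe9e9309e2bfaa2c58074d32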
Note: Stay safe out there! Always make sure to read through gists when executing them like this, much like you would when running .sh scripts!

This awesome feature, added by Félix Saparelli, means that for many of these use cases you never even need to call npx directly! The main difference between regular npx usage and the fallback is that the fallback doesn’t install new packages unless you use the pkg@version syntax: a safety net against potentially-dangerous typosquatting.
Setting up the auto-fallback is straightforward: look in the npx documentation for the command to use for your current shell, add it to .bashrc / .zshrc / .fishrc, then restart your shell (or use source or some other mechanism to refresh the shell).
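At the time of writing, the documented incantations look roughly like this (check the npx docs for the current version for your shell):

    # in .bashrc / .zshrc
    source <(npx --shell-auto-fallback zsh)
    # in fish config
    source (npx --shell-auto-fallback fish | psub)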
Now, you can do things like $ standard@8 --version to try out different versions of things, and if you’re inside an npm project, $ mocha will automatically fall back to the locally-installed version of mocha, provided it’s not already installed globally.
You can get npx now by installing [email protected] or later — or, if you don’t want to use npm, you can install the standalone version of npx! It’s totally compatible with other package managers, since any npm usage is only done for internal operations. Oh, and it’s available in 10 different languages, thanks to contributions by a bunch of early adopters from all over the world, with --help and all system messages translated and automatically available based on system locale!
Do you have a favorite feature? Have you already been using it? If you have something cool to show off that I didn’t list here, share it in the comments! I’d love to hear what other people are up to!
It’s only been a couple of days but we’ve got some bug fixes we wanted to get out to you all. We also believe that npx is ready to be bundled with npm, which we’re really excited about!
npx is a tool intended to help round out the experience of using packages from the npm registry — the same way npm makes it super easy to install and manage dependencies hosted on the registry, npx is meant to make it easy to use CLI tools and other executables hosted on the registry. It greatly simplifies a number of things that, until now, required a bit of ceremony to do with plain npm.
@zkat has a great introduction post to npx that I highly recommend you give a read.
• 9fe905c39 #17652 Fix max callstack exceeded loops with trees with circular links. (@iarna)
• c0a289b1b #17606 Make sure that when we write package.json and package-lock.json we always use unix path separators. (@Standard8)
• 1658b79ca #17654 Make npm outdated show results for globals again. Previously it never thought they were out of date. (@iarna)
• 06c154fd6 #17678 Stop flattening modules that have peer dependencies. We’re making this change to support scenarios where the module requiring a peer dependency is flattened but the peer dependency itself is not, due to conflicts. In those cases, the module requiring the peer dep can’t be flattened past the location its peer dep was placed in. This initial fix is naive, never flattening peer deps, and we can look into doing something more sophisticated later on. (@iarna)
• 88aafee8b #17677 There was an issue where updating a flattened dependency would sometimes unflatten it. This only happened when the dependency had dependencies that in turn required the original dependency. (@iarna)
• b58ec8eab #17626 Integrators who were building their own copies of npm ran into issues because make install and https://npmjs.com/install.sh weren’t aware that npm install creates links now when given a directory to work on. This does not impact folks installing npm with npm install -g npm. (@iarna)
The npm CLI project does not have designated LTS releases. The project only regularly does releases to the most recent major release.
In the event of a security issue, the npm CLI project will back port security patches to any version of npm currently shipping with a supported Node.js version, that is to say, any Node.js version still in its maintenance window.
From time to time the npm CLI project may do a release of an older version of npm at the request of the Node.js project. Historically this has only been for important updates to node-gyp.
These older versions of npm will continue to work with the registry.npmjs.org registry but may not support all of its latest features.
Questions or requests to change this policy should be filed as issues on the npm CLI repo, so the discussion can be tracked in one place.
During the npm@3 release cycle, npm@2 was maintained as an LTS release. Support for this version ended when npm@4 was released and no new version was promoted to LTS status.

This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q: Hi! Can you state your name and what you do?
A: Hi there! I’m Dan Gebhardt. I’m a co-founder of Cerebris, which is a small web application consulting firm I run with my brother Larry Gebhardt. We’re pretty heavily into open source — I’m on the core teams for Ember.js, Glimmer.js, the JSONAPI spec, and Orbit.js.
How’s your day going?
Whew, it’s a hot one today in New Hampshire! But things are going well. I’m putting some finishing touches on a docs site for Orbit, which feels good because it’s been so long in the making.
What is your history with npm?
Although I’ve been working on web apps for a very long time, I haven’t done much Node development. As a result, I’ve only become a regular npm user in the past few years as it’s gained traction for front-end development. During that time, I’ve been really pleased to see how quickly npm has matured. And not just the npm service, which seems to have scaled quite well, but also the CLI, which is getting both faster and more deterministic (yay lockfiles!).
What problem did you have that npm Orgs helped you fix?
Tom Dale and I started developing Glimmer.js as a standalone component library separate from Ember in late 2016. Although Ember itself is architected very modularly, the core framework does not feel very modular in practice because of the way it is currently published and typically consumed. When building Glimmer.js we quite deliberately decided to package and publish it as modularly as possible from the start. We not only wanted to share as much as possible between Ember and Glimmer — we also wanted to make packages as useful as possible on their own.
We chose to publish all of the core Glimmer packages through the @glimmer Org. This means that “official” packages all get an authoritative scope that differentiates them from non-scoped community packages. Furthermore, developers can use different packages, such as the dependency injection library @glimmer/di, independent from the rest of Glimmer.
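That independence is visible right in the install command; each scoped package stands on its own:

    npm install @glimmer/di    # use the DI library without the rest of Glimmer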
How’s the day-to-day experience of using Orgs?
There’s very little friction to working with Orgs. As the rest of the Ember core team has gotten involved in developing Glimmer, assignments and authorization have been simple and straightforward. As packages are published, core team members are automatically assigned rights, which reduces the overhead of creating and managing packages.
The only extra thing to remember about scoped packages is that they are private by default, so it’s necessary to explicitly publish packages with public access using `npm publish --access=public`. This is not a problem though, since you only have to remember this on the initial publish (and it’s no doubt a good safety check).
How would you see the product improved or expanded in the future?
I like using the npm CLI to manage packages, teams, and assignments, but a more interactive dashboard would be nice. Have I mentioned Ember.js? ;)
Would you recommend that other groups or companies use Orgs?
Most definitely. I can recommend Orgs for multi-package open source projects, even if they only have one member, because of the clarity that scoped packages provide to a community. Once you have multiple developers working on a project, you also gain the benefits of permission management. And even though I’ve only used Orgs for open source packages, I can easily see wanting to use private Orgs as well to get the same benefits for proprietary code.
What’s your favorite npm feature/hack?
I’ve become a real fan of using lerna for multi-package “mono-repos.” Lerna nicely solves the problem of managing dependencies across several local packages. Instead of needing to `npm link` them all individually, lerna can link them all together with one “bootstrap” command. It’s also quite useful for publishing multiple packages at once.
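As a rough sketch of that workflow (assuming lerna is already configured for the repo; exact commands and flags vary by lerna version):

$ npm install -g lerna
$ lerna bootstrap   # installs deps for every package and links local ones together
$ lerna publish     # bumps versions and publishes the packages that changed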
What is the most important/interesting/relevant problem with the JavaScript package ecosystem right now? If you could magically solve it, how would you?
A few months ago I would have said lockfiles, but I’m grateful that yarn and npm 5 have jumped that hurdle already.
Instead, I’ll give a rather boring answer: I think we need more rigorous conventions for defining the entry points to our packages. In this era of advanced build tooling and transpilers, the current conventions around defining `main`, `module`, and even `types` in `package.json` seem inadequate. Stronger conventions could identify distributions by language (e.g., TypeScript), language level (e.g., ES5), and module format (e.g., commonjs). This would allow for automatic discovery of the least lossy version of sources appropriate for any given application, and allow for the most optimized JavaScript to be shipped to browsers using tools like babel-preset-env.
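To illustrate, a package following today’s conventions might declare its entry points like this in package.json. The file paths are hypothetical; of these fields, only `main` is formally specified by npm itself, while `module` and `types` are community conventions understood by bundlers and TypeScript:

{
  "name": "some-lib",
  "main": "dist/commonjs/es5/index.js",
  "module": "dist/modules/es2017/index.js",
  "types": "dist/types/index.d.ts"
}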
Any cool npm stuff your company has done that you’d like to promote?
Nothing specific to npm tooling, just lots of exciting stuff happening in the @glimmer, @ember, and @orbit Orgs :)
Hey y'all. This is another minor patch release with a variety of little fixes we’ve been accumulating~
f0a37ace9 Fix npm doctor when hitting registries without ping. (@zkat)
64f0105e8 Fix invalid format error when setting cache-related headers. (@zkat)
d2969c80e Fix spurious EINTEGRITY issue. (@zkat)
800cb2b4e #17076 Use legacy from field to improve upgrade experience from legacy shrinkwraps and installs. (@zkat)
4100d47ea #17007 Restore loose semver parsing to match older npm behavior when running into invalid semver ranges in dependencies. (@zkat)
35316cce2 #17005 Emulate npm@4’s behavior of simply marking the peerDep as invalid, instead of crashing. (@zkat)
e7e8ee5c5 #16937 Workaround for separate bug where requested was somehow null. (@forivall)
2d9629bb2 Better logging output for git errors. (@zkat)
2235aea73 More scp-url fixes: parsing only worked correctly when a committish was present. (@zkat)
80c33cf5e Standardize package permissions on tarball extraction, instead of using perms from the tarball. This matches previous npm behavior and fixes a number of incompatibilities in the wild. (@zkat)
2b1e40efb Limit shallow cloning to hosts which are known to support it. (@zkat)

npm’s documentation recommends that you use semantic versioning, which we also call SemVer, but it doesn’t explain why you’d use SemVer in the first place.
This post is a quick overview of SemVer and why it’s a good idea.
At its most basic, SemVer is a contract between the producers and consumers of packages that establishes how risky an upgrade is — that is, how likely it is that an upgrade will break something. The different digits that comprise a SemVer version number each have meaning, which is where the “semantic” part comes from.
There’s a great deal of nuance to the full semver specification but it takes just a few seconds to review the core idea.
A simple semver version number looks like this: 1.5.4. These three numbers, left to right, are called the major, minor, and patch versions.
A more descriptive way to think of them is as the breaking, feature, and fix versions.
You release a major version when the new release will definitely break something in users’ code unless they change their code to adopt it. You release a minor version when you introduce a feature that adds functionality in a backwards-compatible way, i.e., you add a feature that doesn’t require users of the previous versions to change their code. You release a patch version when you make a backwards-compatible bug fix, like closing a security flaw or correcting the code to match documented behavior.
By separating releases by risk, SemVer allows the consumer of the software to set rules about how to automatically pull in new versions.
A pretty common set of rules when using a library in development is: take new feature and fix versions automatically, but never take a new breaking version without explicitly deciding to upgrade.
When using npm, you can express this set of rules by listing a package version as ^1.3.5 or 1.x. These are the default rules npm will apply to a package when you add it to your project’s package.json. npm@5 ensures that this happens automatically when you run npm install.
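Concretely, a dependency saved with the default caret prefix looks like this in package.json (the package name is a placeholder); ^1.3.5 matches any version from 1.3.5 up to, but not including, 2.0.0:

{
  "dependencies": {
    "some-lib": "^1.3.5"
  }
}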
However, you might not care about new features as long as there are no bugs: take new fix versions automatically, but nothing else.
You would express those rules in npm using a range like ~1.3.5 or 1.3.x. You can make this the default behavior of npm using the save-prefix configuration option.
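For example, you could make tilde ranges the default using the save-prefix option mentioned above (package name is a placeholder); ~1.3.5 matches 1.3.x versions at or above 1.3.5, but not 1.4.0:

$ npm config set save-prefix='~'
$ npm install some-lib   # now saved to package.json as "some-lib": "~1.3.5"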
The best formulation of rules isn’t 100% clear: a fix version isn’t guaranteed not to break your code, the author just doesn’t think it will. But excluding fix versions entirely might leave you open to a known security problem, in which case your code is “broken” simply by staying as-is.
Many people accept feature and fix versions in development, but lock down the packages they depend on to exact, known-good versions once in production by using the package-lock.json feature of npm@5.
SemVer allows you — and npm — to automatically manage, and thus reduce, the risk of breaking your software by baking information about relative risk into the version number. The key word here is automatically.
Imagine if everybody used a single number for their version, which they incremented every time they made any kind of change. Every time a package changed, you would need to go to the project’s home page or changelog and find out what changed in the new version. It might not be immediately clear if that change would break existing code, so you would have to ask the author or install it and test the software to find out.
Imagine instead if everybody used a single number for their version and incremented it only when they’d added a bunch of new features that they were really proud of. This would be even worse. Not only would you not know if a change was going to break your code, but if an update did break your code, you’d have no way of specifying that you wanted a specific earlier version.
Either of these extreme alternatives would be painful for the consumers of a package, but even more painful for the author of a package, who would constantly be getting inquiries from users about how risky an upgrade was. A good author might put that information in a known place on their home page, but not everyone might be able to find it.
By making this form of communication automatic, SemVer and npm save everybody involved a great deal of time and energy. Authors and users alike can spend less time on emails, phone calls, and meetings about software, and more time writing software.
It’s common for a modern JavaScript project to depend on 700–1200 packages. When you’re using that many packages, any system that requires you to manually check for updates is totally unworkable, making SemVer critical — but SemVer is also why there are that many packages in the first place.
10 years ago, the JavaScript world was dominated by a handful of very large libraries, like YUI, Mootools, and jQuery. These “kitchen sink” libraries tried to cover every use case, so you would probably pick one and stick with it. Different libraries were not guaranteed to work well together, so you’d have to consider compatibility before adding a new one to your project.
Then Node.js came along, and server-side JavaScript developers began using npm to add new libraries with very little effort.
This “many small modules” pattern became hugely popular, and npm’s automatic use of SemVer allowed it to blossom without software being constantly broken by unexpected changes. In the last few years, tools like webpack and babel have unlocked the 500,000 packages of the npm ecosystem for use on the client side in browsers, and the pattern proved equally popular with front-end developers.
The evidence suggests that using a large number of smaller modules is a more popular pattern than a handful of large libraries. Why that’s the case is up for debate, but its popularity is undeniable, and SemVer and npm are a big part of what make it possible.

This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q: Hi! Can you state your name and what you do?
A: Ahoy, I’m Alistair Brown and I’m a lead front-end engineer at ShopKeep, primarily focusing on our BackOffice app, which enables more than 23,000 merchants to manage their business operations from anywhere. With ShopKeep’s BackOffice, business owners manage everything from inventory to customized reporting specific to their business, so this is a vital component of the ShopKeep product.
How’s your day going?
It’s going pretty well — my team is mostly front-end focused, so we use npm many times every day. We’re currently prepping some dependency upgrades to make sure we’re ready to jump on the newest version of React (v16) when it’s released. It’s important to us that we stay up to date, getting the benefits of optimization, bug fixes, and new tools.
What is your history with npm?
I’ve used npm in a few jobs to manage dependencies as well as publish some personal projects as modules on the npm registry. A few years ago, I was given an iKettle for Christmas and spent much of that holiday creating an npm module so I could boil water remotely using JavaScript — not a very popular module, but a lot of fun to build! More recently, I’m excited about the release of npm5. We’ve just rolled it out across our developer machines and onto the CI servers, and we’re really seeing the benefits.
What problem did you have that npm Orgs helped you fix?
The main problem we wanted to solve was being able to share code between upcoming projects. The npm Organization setup allowed us to create our own modules and keep control over who could access them. Having private packages within the organization has allowed us the freedom to create a versioned module, but without the fanfare of opening it up to the world.
Can you tell us a story about a specific package you wanted to make that private packages really enabled you to do?
At Node.js Interactive Europe last year, I’d been inspired by a talk by Aria Stewart, called “Radical Modularity.” With the concept of “anything can be a package” in mind, we first started small with our brand colours (JSON, SASS, etc.) and configs. I explored pulling these components out of our code base into separate modules as part of a Code Smash (our internal hackathon). This allowed us to test the waters. As we mainly write in React and had created a number of generic components, there were lots of packages we wanted to extract. In the end, we started modularizing everything and have even extracted out our icon assets.
How’s the day-to-day experience of using private packages and orgs?
It’s super easy. Day to day, there’s no difference from using any other package from npm. Once the code is out in a module, we get to treat it just like any other piece of third-party code. There had been a little bit of fear that the scope prefix would cause problems with existing tooling, but so far there have been no problems at all — a great feat!
Does your company do open source? How do you negotiate what you keep private and public?
We have several repositories of useful tools that we’ve open-sourced on GitHub, hoping these same tools could be useful for other developers. These range from shpkpr, a tool we use for managing applications on marathon and supporting zero-downtime deploys, to our air traffic controller slack bot, which helps us coordinate deployments to all of the different services we run. Open sourcing a project is an important undertaking and we always want to make sure that we have pride in what we release. Using private packages gives us that middle ground, allowing us to separate out reusable code but keep it internal until we’re ready to show it off.
To people who are unsure how they could use private packages, how would you explain the use case?
We started off wanting to get code reuse by sharing code as a package. Making private packages allowed us to be more confident about pulling the code out, knowing it wasn’t suddenly visible to the world. Our ESLint config is a nice example of a small reusable module we created, containing rules which enforce our internal code style. Splitting this out allowed us to apply the rules across multiple codebases by extending from this central config. Later, we added a new rule, and having immutable packages meant we could cut a new version and stagger the updates to dependent projects. Really, we get all the benefits that you’d expect from using a third-party package, while keeping control of updating and distribution.
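A consuming project’s ESLint configuration can then extend the shared package by name; here @myorg/eslint-config is a hypothetical stand-in for such a scoped config package:

{
  "extends": "@myorg/eslint-config"
}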
How would you see the product improved or expanded in the future?
With the rapid development of the JavaScript ecosystem, it can be hard to keep up to date with new versions as they come out. The `outdated` command helps towards this, but anything that can be built to help developers stay on the latest and greatest would be really handy.
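For reference, `npm outdated` compares what’s installed against what your ranges allow and what’s been published, with output along these lines (package name and versions invented for illustration):

$ npm outdated
Package    Current  Wanted  Latest  Location
some-lib     1.3.5   1.3.9   2.0.0  backoffice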
Would you recommend that other groups or companies use Orgs?
Definitely! It’s not just so you can use private packages, it’s also a great way to group your modules under a brand and avoid naming clashes. With the recent pricing change making organizations free, there really is no excuse for open source groups and companies not to publish their modules under an org.
What’s your favorite npm feature/hack?
I’m a huge fan of npm scripts. It’s allowed us to provide a single interface for useful commands and avoid forcing developers to install global dependencies. From building our application with gulp, upgrading tooling with a shell script, to publishing multiple modules with lerna, the developer experience stays the same by hiding the internals behind the simplicity of `npm run`.
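The pattern looks roughly like this in package.json; the script names and tools here are illustrative, with dependencies like gulp installed locally to the project rather than globally:

{
  "scripts": {
    "build": "gulp build",
    "upgrade-tooling": "sh scripts/upgrade.sh",
    "publish-all": "lerna publish"
  }
}

Developers then only ever need to run commands like `npm run build`, regardless of what tool sits behind them.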
What is the most important/interesting/relevant problem with the JavaScript package ecosystem right now? If you could magically solve it, how would you?
Building a package manager is a difficult problem to solve and it’s great to see so much engagement in this space. Deterministic installs is something that has been really important, so it’s good to see this in npm5 and yarn. I think the natural next step is a client-agnostic lock file. When there are multiple developers on a project, making sure that we can replicate a development environment across all dev machines and CI servers is very important — we use a shrinkwrap file (moving soon to a package-lock.json!), but those are npm-specific. Reducing that barrier between different packaging clients should allow for more experimentation on new approaches and optimisations.
Any cool npm stuff your company has done that you’d like to promote?
No — we’re just happy users!
The npm cli GitHub project is one of the most active on all of GitHub. The npm cli team is made up of two people, Rebecca Turner and Kat Marchán. At the time of this writing there are 3,244 open issues in the issue tracker. That’s clearly more issues than a team of two can reasonably handle, even with the invaluable help of the community. It’s even more untenable when you consider that this same team fixes the vast majority of the bugs and implements new features.
Prior to January of 2017, some effort was made to work the issue tracker while still working on the cli. That was put aside so that we could focus on npm@5. Going forward, we cannot and do not want to continue ignoring the issue tracker. Even with its overwhelming size, it’s extremely helpful.
So in an attempt to make it serve our team better we’re going to begin automatically closing issues that go too long without activity. This reflects the existing reality that older issues are unlikely to get attention. By making this explicit we hope that it will help ensure that issues that are pain points do not disappear into the churn: If our bots close your issue and it’s still a problem for you, please open a new version of it.
The initial policies are outlined below. We’ll revisit these over time and may adjust some of the numbers. To begin with, we are only considering closing issues that are neither assigned to a team member nor assigned to a milestone. If we do either of those the issue or pull request will stay open.
For issues triaged as support: Issues with no activity for three days will be closed. While we want to give community members an opportunity to help each other, ultimately this is not the venue for that. You’ll likely be better served by joining something like package.community and asking your questions there.
For issues that have not received any triage (no labels): Issues with no activity for seven days will be closed. Ideally all incoming issues would be triaged, but in practice that isn’t feasible. Most of these issues are support issues.
For all other issues: Issues with no activity for thirty days will be closed. If your issue is closed and it’s still a problem for you, then we encourage you to open a new issue.
For pull requests: Pull requests with no activity for sixty days will be closed. You are always welcome to open a new pull request (but please rebase onto npm/npm#latest).

This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q: Hi! Can you state your name and what you do?
A: Gregor, community manager at the open source project Hoodie, and co-founder at Neighbourhoodie, a consultancy; we do Greenkeeper.
How’s your day going?
I just arrived in Berlin and life is good!
What is your history with npm?
Love at first sight! I’m a big fan of all of what you do! npm is a big inspiration for how a company can self-sustain and be a vital part of the open source community at the same time.
What problems has npm helped you fix?
We love small modules. Because of the maintenance overhead that comes with small modules, we created semantic-release and eventually Greenkeeper. Here is an overview of all our modules.
The `@hoodie` scope allows us to signal that this is a module created by us, the core hoodie team, and that it’s part of the core architecture. I could imagine using the scope in the future for officially supported 3rd party plugins, too.
How’s it going? How’s the day to day experience?
Our release process is entirely automated via semantic-release so that we don’t use different npm accounts to release software. Technically, it’s all released using the https://www.npmjs.com/~hoodie account.
How would you see the product improved or expanded in the future?
Hmm I can’t think of anything… I’ll ask around.
How would you recommend that other groups or companies use npm?
I don’t see why companies would not use scopes. I think it’s a great way to signal an “official” package, in order to differentiate it from 3rd-party community modules.
What’s your favorite npm feature/hack?
As a developer, I love that I can subscribe to the live changes of the entire npm registry. It allows us to build really cool tools, including Greenkeeper.
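Those live changes come from the registry’s public CouchDB-style replication endpoint; a minimal sketch of following it from the command line (endpoint as it exists at the time of writing, parameters may vary):

$ curl 'https://replicate.npmjs.com/_changes?feed=continuous&since=now'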
What is the most important/interesting/relevant problem with the JavaScript package ecosystem right now? If you could magically solve it, how would you?
For me, a big challenge is fixing a bug in a library like `hoodie` that is caused by a dependency (or a sub-dependency, or a sub-sub-dependency…). It would be cool if there was a way to easily set up a development environment in which I could test the bug on the main module while working on the dependency, until I have it resolved. That would make it simple to release a new fixed version of the dependency and update the package.json of the main module to request the fixed version.
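One existing way to approximate that workflow is npm link, sketched here with hypothetical paths:

$ cd ~/src/some-dependency
$ npm link                     # register this working copy globally
$ cd ~/src/hoodie
$ npm link some-dependency     # symlink it into node_modules, replacing the registry copy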
This is kind of related: let’s say I have a module A with a dependency of module B and B depends on module C, so it’s A=>B=>C. Now, if I fix C and release a new fix version, I cannot release a new version of A that enforces this new version, because it’s a sub-dependency. I’m not sure what the right approach to this problem is, but that’s one that’s bothering me related to npm.
Any cool npm stuff your company has done that you’d like to promote?
Over the last few days we’ve been resetting the passwords for more than a thousand users and sending email informing them of the reset. Here is some detail about why we’re doing this.
We often revoke npm credentials that were leaked through testing service logs or accidentally checked into GitHub. Accidentally leaking environment variables like npm auth tokens in CI logs is a common mistake! We have also reset passwords for users who were found to have used common or weak passwords for their npm accounts, such as their username or the string “password”.
In this case, however, passwords for a number of users were available online, accessible via Google search. These passwords were made public through security breaches of other sites, and, unfortunately, the owners of some hacked accounts re-used the passwords for their npm accounts. This was discovered by an independent security researcher, who informed us of his discovery and set a short deadline for action on our part before he contacted you himself.
We have reset the passwords and revoked all extant auth tokens for the users whose passwords were publicly available.
To our knowledge, at no time has npm’s account information been accessed inappropriately. In all of these cases, the credentials were leaked either by the npm users themselves accidentally, or in breaches of other sites.
Here are the steps we’ve taken to help protect you from problems like this:
Here’s how you can protect yourself from credential leaks like this:
Here’s another patch release, soon after the other!
This particular release includes a slew of fixes to npm’s git support, which was causing some issues for a chunk of people, especially those using self-hosted/Enterprise repos. All of those should be back in working condition now.
There’s another shiny thing you might wanna know about: npm has a Canary release now! The npm5 experiment we did during our beta proved to be incredibly successful: users were able to have a tight feedback loop between reports and getting the bugfixes they needed, and the CLI team was able to roll out experimental patches and have the community try them out right away. So we want to keep doing that.
From now on, you’ll be able to install the ‘npm canary’ with npm i -g npmc. This release will be a separate binary (npmc. Because canary. Get it?), which will update independently of the main CLI. Most of the time, this will track release-next or something close to it. We might occasionally toss experimental branches in there to see if our more adventurous users run into anything interesting with it. For example, the current canary ([email protected]) includes an experimental multiproc branch that parallelizes tarball extraction across multiple processes.
If you find any issues while running the canary version, please report them and let us know it came from npmc! It would be tremendously helpful, and finding things early is a huge reason to have it there. Happy hacking!
Just a heads up: We’re preparing to do a massive cleanup of the issue tracker. It’s been a long time since it was something we could really keep up with, and we didn’t have a process for dealing with it that could actually be sustainable.
We’re still sussing the details out, and we’ll talk about it more when we’re about to do it, but the plan is essentially to close old, abandoned issues and start over. We will also add some automation around issue management so that things that we can’t keep up with don’t just stay around forever.
Stay tuned!
1f26e9567 [email protected]: Fixes installing committishes that look like semver, even though they’re not using the required #semver: syntax. (@zkat)
85ea1e0b9 [email protected]: This includes the npa git-parsing patch to make it so non-hosted SCP-style identifiers are correctly handled. Previously, npa would mangle them (even though hosted-git-info is doing the right thing for them). (@zkat)

The new summary output has been really well received! One downside that reared its head as more people used it, though, is that it doesn’t really tell you anything about the toplevel versions it installed. So, if you did npm i -g foo, it would just say “added 1 package”. This patch by @rmg keeps things concise while still telling you what you got! So now, you’ll see something like this:
$ npm i -g foo bar
+ [email protected]
+ [email protected]
added 234 packages in .005ms
362f9fd5b #16899 For every package that is given as an argument to install, print the name and version that was actually installed. (@rmg)
a47593a98 #16835 Fix a crash while installing with --no-shrinkwrap. (@jacknagel)
89e0cb816 #16818 Fixes a spelling error in the docs. Because the CLI team has trouble spelling “package”, I guess. (@ankon)
c01fbc46e #16895 Remove --save from npm init instructions, since it’s now the default. (@jhwohlgemuth)
80c42d218 Guard against cycles when inflating bundles, as symlinks are bundles now. (@iarna)
7fe7f8665 #16674 Write the builtin config for npmc, not just npm. This is hardcoded for npm self-installations and is needed for Canary to work right. (@zkat)
63df4fcdd #16894 [email protected]: Fixes an issue parsing SDK versions on Windows, among other things. (@refack)
5bb15c3c4 [email protected]: Fixes some racyness while reading the tree. (@iarna)
a6f7a52e7 [email protected]: Remove nested function declaration for speed up (@mikesherov)

An anonymous person recently registered domains that appear to be affiliated with npm, Inc., and in recent days has contacted some npm users to promote a commercial service that our users could confuse for an npm, Inc. product.
npm, Inc. is not affiliated with this individual, we do not endorse these actions, and we are taking action to protect our users and defend our intellectual property rights.
This is what we know:
A few weeks ago, an unknown party registered the domains npm-cdn.com and npm-js.com. They are hosted on DigitalOcean behind the Cloudflare CDN.
npm, Inc. is not affiliated with these domain names, which we believe are an intentional attempt to confuse npm users as to their association with npm, Inc. They are a violation of npm’s trademark policy.
The domains point to sites that run a fork of unpkg, a CDN backend written by Michael J. Jackson. Michael is an upstanding Open Source citizen with whom we have a longstanding relationship. Unpkg is not released under a license that would allow others to use the codebase in this way. The npm-cdn.com and npm-js.com forks are code theft.
A few days ago, the anonymous party began creating a flood of automated accounts on the npm Registry, and created thousands of empty packages that link back to their website. These actions are in violation of npm’s terms of use.
They also have emailed npm maintainers to advertise their product, and BCC’ed other maintainers about packages with which they’re not involved. This is a violation of not only our terms of use, but also common decency.
We have reached out to Cloudflare and DigitalOcean to shut down this abusive behavior. These companies’ processes take some time, and are still underway. As the situation unfolds, we’ll keep the community informed. If you’re not sure whether a piece of communication is really from npm, Inc., contact npm support at [email protected] for assistance.
Supporting the npm Registry and the Open Source community remains our highest priority. We’ll continue to take every possible action to support the community and help developers build amazing things.
— Isaac Z. Schlueter, CEO
npm loves everyone!
By popular demand, this year we’re making the npm team’s Pride shirts available to all, with help from our friends at Teespring. Select your favorite design and click through for types and sizes — or collect them all! — and 100% of proceeds will benefit The Trevor Project.
Are we missing a design? Reach out and let us know!

It’s here!
Starting today, if you type `npm install npm@latest -g`, you’ll be updated to npm version 5. In addition, npm@5 is bundled in all new installations of Node.js 8, which has replaced Node.js 7 in the Node Project’s current release line.
Over the last year and a half, we’ve been working to address a huge number of pain points, some of which had existed since the registry was created. Today’s release is the biggest ever improvement to npm’s speed, consistency, and user experience.
The definitive list of what’s new and what’s changed is in our release notes,
but here are some highlights:
We’ve reworked package metadata, package download, and package caching, and this has sped things up significantly. In general, expect performance improvements of 20–100%; we’ve also seen some installations and version bumps that run 5x faster.

(Installing the npm website on our own dev environments went from 99 seconds using npm@4 to 27 seconds with npm@5. Now we spend less time jousting.)
Since npm was originally designed, developers have changed how they use npm. Not only is the npm ecosystem exponentially larger, but the number of dependencies in the average npm package has increased 250% since 2014. More devs now install useful tools like Babel, Webpack, and Tap locally, instead of globally. It’s a best practice, but it means that `npm install` does much more work.
Given the size of our community, any speed bump adds up to massive savings for millions of users, not to mention all of our Orgs and npm Enterprise customers. Making npm@5 fast was an obvious goal with awesome rewards.
Default lockfiles
Shrinkwrap has been a part of npm for a long time, but npm@5 makes lockfiles the default, so all npm installs are now reproducible. The files you get when you install a given version of a package will be the same, every time you install it.
We’ve found countless common and time consuming problems can be tied to the “drift” that occurs when different developer environments utilize different package versions. With default lockfiles, this is no longer a problem. You won’t lose time trying to figure out a bug only to learn that it came from people running different versions of a library.
SHA-512 hashes
npm@5 adds support for any tarball hash function supported by Node.js, and it publishes with SHA-512 hashes. By checking all downloaded packages, you’re protected against data corruption and malicious attacks, and you can trust that the code you download from the registry is consistent and safe.
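In practice this shows up as the integrity field in your lockfile; a sketch of a single entry, with a placeholder package name and hash:

"some-lib": {
  "version": "1.3.5",
  "resolved": "https://registry.npmjs.org/some-lib/-/some-lib-1.3.5.tgz",
  "integrity": "sha512-<base64 hash here>"
}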
Self-healing cache
Our new caching is wicked fast, but it’s also more resilient. Multiple npm processes won’t corrupt a shared cache, and npm@5 will check data on both insertion and extraction to prevent installing corrupted data. If a cache entry fails an integrity check, npm@5 will automatically remove it and re-fetch.
With your feedback, we’ve improved the user experience with optimizations throughout npm@5. A big part of this is more informative and helpful output. The best example of this is that npm no longer shows you the entire tree on package install; instead, you’ll see a summary report of what was installed. We made this change because of the larger number of dependencies in the average package. A file-by-file readout turned out to be pretty unwieldy beyond a certain quantity.
npm@5 is a huge step forward for both npm and our awesome community, and today’s release is just the beginning. A series of improvements in the pipeline will make using npm as frictionless as possible and faster than ever before.
But: npm exists because of its users, and our goal remains being open and flexible to help people build amazing things, so we depend on your feedback.
What works for you? What should we improve next? How much faster are your installs? Let us know. Don’t hesitate to find us on Twitter, and, if you run into any trouble, be sure to drop us a note.
Wowowowowow npm@5!
This release marks months of hard work for the young, scrappy, and hungry CLI team, and includes some changes we’ve been hoping to do for literally years. npm@5 takes npm a pretty big step forward, significantly improving its performance in almost all common situations, fixing a bunch of old errors due to the architecture, and just generally making it more robust and fault-tolerant. It comes with changes to make life easier for people doing monorepos, for users who want consistency/security guarantees, and brings semver support to git dependencies. See below for all the deets!
Existing npm caches will no longer be used: you will have to redownload any cached packages. There is no tool or intention to reuse old caches. (#15666)
npm install ./packages/subdir will now create a symlink instead of a regular installation. file://path/to/tarball.tgz will not change – only directories are symlinked. (#15900)
npm will now scold you if you capitalize its name. seriously it will fight you.
npm will --save by default now. Additionally, package-lock.json will be automatically created unless an npm-shrinkwrap.json exists. (#15666)
Git dependencies support semver through user/repo#semver:^1.2.3 (#15308) (#15666) (@sankethkatta)
Git dependencies with prepare scripts will have their devDependencies installed, and npm install run in their directory before being packed.
npm cache commands have been rewritten and don’t really work anything like they did before. (#15666)
--cache-min and --cache-max have been deprecated. (#15666)
Running npm while offline will no longer insist on retrying network requests. npm will now immediately fall back to cache if possible, or fail. (#15666)
package locks no longer exclude optionalDependencies that failed to build. This means package-lock.json and npm-shrinkwrap.json should now be cross-platform. (#15900)
If you generated your package lock against registry A, and you switch to registry B, npm will now try to install the packages from registry B, instead of A. If you want to use different registries for different packages, use scope-specific registries (npm config set @myscope:registry=https://myownregist.ry/packages/). Different registries for different unscoped packages are not supported anymore.
Shrinkwrap and package-lock no longer warn and exit without saving the lockfile.
Local tarballs can now only be installed if they have a file extension of .tar, .tar.gz, or .tgz.
A new loglevel, notice, has been added and set as default.
One binary to rule them all: ./cli.js has been removed in favor of ./bin/npm-cli.js. In case you were doing something with ./cli.js itself. (#12096) (@watilde)
The “extremely legacy” _token couchToken has been removed. (#12986)
A new, standardised lockfile feature meant for cross-package-manager compatibility (package-lock.json), and a new format and semantics for shrinkwrap. (#16441)
--save is no longer necessary. All installs will be saved by default. You can prevent saving with --no-save. Installing optional and dev deps is unchanged: use -D/--save-dev and -O/--save-optional if you want them saved into those fields instead. Note that since npm@3, npm will automatically update npm-shrinkwrap.json when you save: this will also be true for package-lock.json. (#15666)
Installing a package directory now ends up creating a symlink and does the Right Thing™ as far as saving to and installing from the package lock goes. If you have a monorepo, this might make things much easier to work with, and probably a lot faster too. 😁 (#15900)
Project-level (toplevel) preinstall scripts now run before anything else, and can modify node_modules before the CLI reads it.
Two new scripts have been added, prepack and postpack, which will run on both npm pack and npm publish, but NOT on npm install (without arguments). Combined with the fact that prepublishOnly is run before the tarball is generated, this should round out the general story as far as putzing around with your code before publication.
Git dependencies with prepare scripts will now have their devDependencies installed, and their prepare script executed as if under npm pack.
Git dependencies now support semver-based matching: npm install git://github.com/npm/npm#semver:^5 (#15308, #15666)
node-gyp now supports node-gyp.cmd on Windows (#14568)
npm no longer blasts your screen with the whole installed tree. Instead, you’ll see a summary report of the install that is much kinder on your shell real estate, especially for large projects. (#15914):
$ npm install
npm added 125, removed 32, updated 148 and moved 5 packages in 5.032s.
$
--parseable and --json now work more consistently across various commands, particularly install and ls.
Indentation is now detected and preserved for package.json, package-lock.json, and npm-shrinkwrap.json. If the package lock is missing, it will default to package.json’s current indentation.
sha512 and sha1 checksums. Versions of npm from 5 onwards will use the strongest algorithm available to verify downloads. (npm/npm-registry-client#157)

We’ve been talking about rewriting the cache for a loooong time. So here it is. Lots of exciting stuff ahead. The rewrite will also enable some exciting future features, but we’ll talk about those when they’re actually in the works. #15666 is the main PR for all these changes. Additional PRs/commits are linked inline.
Package metadata, package download, and caching infrastructure replaced.
It’s a bit faster. Hopefully it will be noticeable. 🤔
With the shrinkwrap and package-lock changes, tarballs will be looked up in the cache by content address (and verified with it).
Corrupted cache entries will automatically be removed and re-fetched on integrity check failure.
npm CLI now supports tarball hashes with any hash function supported by Node.js. That is, it will use sha512 for tarballs from registries that send a sha512 checksum as the tarball hash. Publishing with sha512 is added by npm/npm-registry-client#157 and may be backfilled by the registry for older entries.
Remote tarball requests are now cached. This means that even if you’re missing the integrity field in your shrinkwrap or package-lock, npm will be able to install from the cache.
Downloads for large packages are streamed in and out of disk. npm is now able to install packages of """any""" size without running out of memory. Support for publishing them is pending (due to registry limitations).
Automatic fallback-to-offline mode. npm will seamlessly use your cache if you are offline, or if you lose access to a particular registry (for example, if you can no longer access a private npm repo, or if your git host is unavailable).
A new --prefer-offline option will make npm skip any conditional requests (304 checks) for stale cache data, and only hit the network if something is missing from the cache.
A new --prefer-online option that will force npm to revalidate cached data (with 304 checks), ignoring any staleness checks, and refreshing the cache with revalidated, fresh data.
A new --offline option will force npm to use the cache or exit. It will error with an ENOTCACHED code if anything it tries to install isn’t already in the cache.
A new npm cache verify command that will garbage collect your cache, reducing disk usage for things you don’t need (-handwave-), and will do full integrity verification on both the index and the content. This is also hooked into npm doctor as part of its larger suite of checking tools.
The new cache is very fault tolerant and supports concurrent access.
npm cache clear is no longer useful for anything except clearing up disk space.

Package metadata is cached separately per registry and package type: you can’t have package name conflicts between locally-installed packages, private repo packages, and public repo packages. Identical tarball data will still be shared/deduplicated as long as their hashes match.
HTTP cache-related headers and features are “fully” (lol) supported for both metadata and tarball requests – if you have your own registry, you can define your own cache settings the CLI will obey!
prepublishOnly now runs before the tarball to publish is created, after prepare has run.
Since before the release of npm 2.0 in 2014 we have encouraged developers using our APIs to use token authentication instead of passing username and password in a basic auth header. Over the next few weeks we will be turning the recommendation into a requirement: basic http authentication will no longer work for any of the npm registry endpoints that require authorization. Instead you should use bearer tokens.
There are two exceptions:
The /login endpoint remains the endpoint to use to log into the npm registry and generate an auth token for later use. The /whoami endpoint will continue to respond with the username for a successful login. Both of these endpoints are monitored and rate-limited to detect abuse.
If you’re an npm user, this change will likely not affect you. Log in with the npm cli as you would normally:
npm login
A successful login will store an auth token in your .npmrc, which the client will use for all actions that require auth.
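The stored credential looks something like this in your .npmrc (the token value here is a placeholder):

//registry.npmjs.org/:_authToken=00000000-0000-0000-0000-000000000000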
If you are using the npm cli to interact with registries other than npm’s, you should also not be affected. We have no plans to remove support for basic auth from the npm cli.
If you are a developer using npm’s API, make sure you’re using a bearer token when you need to authenticate with the registry. For more information about how to do this, please see the documentation for npm/npm-registry-client. This package is what the official command-line client uses to do this work.
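At the HTTP level, that means sending the token in an Authorization header. A sketch against the whoami endpoint mentioned above, with a placeholder token (the exact endpoints are documented in npm-registry-client):

$ curl -H 'Authorization: Bearer 00000000-0000-0000-0000-000000000000' https://registry.npmjs.org/-/whoami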
If you have any questions or requests for us, please contact npm support. We want to hear about how you’re using our APIs and how you’d like them to evolve to support your use cases.

This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q. Hi! Can you state your name and what you do?
A. Hi! I’m Rob Tirserio. I’m a Tech Lead at Remedy Health Media.
How’s your day going?
Can’t complain. ← written before I had 2 meetings so now I’m sad :(
Tell me the story of npm at your company. What specific problem did you have that private packages and Orgs solved?
We started using private modules at the beginning of this year. We are using it for a few modules that really don’t make a lot of sense other than for internal use. For example, one is a client for an internal API. Using Orgs allowed this client to be shared with multiple projects that needed to interface with the API.
Can you tell us a story about a specific package you wanted to make that private packages really enabled you to do?
The API client. It was just an easier way to allow other projects/teams at the company to integrate with an API.
Does your company do open source? How do you negotiate what you keep private and public?
We do open source and have been trying to do more. We just discuss internally whether or not we feel it makes sense to release something open source. If so, we get some of the people who didn’t work on it to review and offer feedback.
To people who are unsure what they could use private packages for, how would you explain the use case?
I think it’s great for creating reusable modules that may need to be shared across projects/teams. It really can be anything from configurations to a project template specific to an organization.
How’s it going? How’s the day to day experience of using private packages and Orgs?
We don’t have a ton to manage, so it’s pretty easy. It’s nice to only have to tell everyone a version number to change in their project when new changes go out.
How would you see the product improved or expanded in the future?
Being able to get more info from the org packages list page would be nice. I do appreciate the simplicity of the site and ease of releasing/updating modules.
Would you recommend that another org or company use private packages or Orgs and why?
I would recommend everyone use npm and modules in general. Keeping things small and focused makes things easier to manage and maintain. It also increases the chance that something is identified as a candidate to be released to the community.
Any cool npm stuff your company has done publicly that you’d like to promote?
Our only significant open source module is ‘contentpull’ (https://www.npmjs.com/package/contentpull), which started as an internal module that we decided to release. It’s just a wrapper for the Contentful CMS, but we rely on it heavily.
This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q. Hi! Can you state your name and what you do?
A. Fabian Cook, Lead Software Developer at NZDigital, and Software Developer/Owner at Shipper NZ.
How’s your day going?
Pretty good, nice sunny day here today.
Tell me the story of npm at your company. What specific problem did you have that private packages and orgs solved?
At Shipper, we have a stack that we sub-license to other partners, one being NZDigital. We previously had to add these partners to our BitBucket account so that we could include our packages in their software. This wasn’t the best way of doing things, and it didn’t work very well as we got further along and wanted to release more versions of everything.
By using private packages, we were able to provide these partners access to the versioned set of modules on npm.
Can you tell us a story about a specific package you wanted to make that private packages really enabled you to do?
Really any of them: for example, our core modules used for identity management, service boilerplates, those kinds of things. Things that are tailored for us specifically and don’t necessarily have value in the open source community.
Does your company do open source? How do you negotiate what you keep private and public (feel free to be as vague as you need to be)?
If it’s something we have made very generic, for example @shipper/fastway, we are happy to make it open source. There is no reason to keep that kind of thing closed, and it helps the wider community.
We also have small modules like @shipper/shipper-mongodb-database and @shipper/shipper-mongodb-async-collection that are very thin; they are just helpers for us, nothing we should be holding back from others.
To people who are unsure what they could use private packages for - how would you explain the use case?
Pretty simple in the end: it’s like using a private GitHub or BitBucket repo. Either you want it private for “closed source” business reasons, or you may be developing something in its infancy and want to iron out the wrinkles before other people get involved and you’re stuck with a way of doing things that doesn’t match what you intended.
How’s it going? How’s the day to day experience of using private packages/orgs?
Going pretty well. We had a couple of hiccups where I accidentally made a private package public, but apart from that it’s been good.
How would you see the product improved or expanded in the future?
We have “private” in the package.json files; it would be good to have “protected” or something along those lines where we can only publish it as private.
Would you recommend that another org or company use private packages or orgs and why?
Definitely! I’ve used them with 4 companies now and it’s been great; no reason not to use them.
Any cool npm stuff your company has done publicly that you’d like to promote?
I wish there were things I could put out there at this time; we are still learning the ropes in our industry, trying to get through it all!
A little release to tide you over while we hammer out the last bits for npm@5.
d13c9b2f2 [email protected]: The name: prompt is now package name: to make this less ambiguous for new users.
The default package name is now a valid package name. For example: If your package directory has mixed case, the default package name will be all lower case.
f08c66323 #16213 Add --allow-same-version option to npm version so that you can use npm version to run your version lifecycles and tag your git repo without actually changing the version number in your package.json. (@lucastheisen)
f5e8becd0 Timing has been added throughout the install implementation. You can see it by running a command with --loglevel=timing. You can also run commands with --timing which will write an npm-debug.log even on success and add an entry to _timing.json in your cache with the timing information from that run. (@iarna)
9c860f2ed #16021 Fix a crash in npm doctor when used with a registry that does not support the ping API endpoint. (@watilde)
65b9943e9 #16364 Shorten the ELIFECYCLE error message. The shorter error message should make it much easier to discern the actual cause of the error. (@j-f1)
a87a4a835 [email protected]: Fix flashing of the progress bar when your terminal is very narrow. (@iarna)
41c10974f [email protected]: Wait for fsync to complete before considering our file written to disk. This will improve certain sorts of Windows diagnostic problems.
2afa9240c #16336 Don’t ham-it-up when expecting JSON. (@bdukes)
566f3eebe #16296 Use a single convention when referring to the <command> you’re running. (@desfero)
ccbb94934 #16267 Fix a missing space in the example package.json. (@famousgarkin)

Every npm account comes with a free scope, which allows you to publish packages within your own personal name space, like “@myname/mypackage”. Orgs also get a free scope, so you can publish “@mycompany/mypackage”.
Scopes have existed on the registry for a while, but with the announcement of free orgs last month, they suddenly got a LOT more popular. The adoption of npm orgs by the open source community has been fantastic. In response, we finished some long-promised work to better support scoped packages in other parts of npm.
At long last, we provide download statistics for scoped packages like @types/lodash and @angular/common on our website. Hooray! We finally got to close one of our oldest bugs!
And in case you missed it, we now support scoped searches in npm’s package search, so you can search just within a given scope. You can also filter by keyword and maintainer.
As before, download stats are also available via our public download counts API. Docs are going to be updated soon, but here’s an example call. The system currently has only a week’s worth of stats for scoped packages, but we are slowly backfilling those.
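A point query against the downloads API looks roughly like this (package name and time period are illustrative; scoped names may need the slash URL-encoded):

$ curl 'https://api.npmjs.org/downloads/point/last-week/@types%2Flodash'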
The stats via our API are only available for public packages, for obvious reasons. We will be integrating stats for private packages into the website in the coming months.
We thank you for your patience while our small team got around to these features, and we hope you love them!
Welcome a wrinkle on npm’s registry API!
Codename: Corgi
This release has some bug fixes, but it’s mostly about bringing support for MUCH smaller package metadata. How much smaller? Well, for npm itself it reduces 416K of gzip compressed JSON to 24K.
As a user, all you have to do is update to get to use the new API. If you’re interested in the details we’ve documented the changes in detail.
Package metadata: now smaller. This means a smaller cache and less to download.
86dad0d74 Add support for filtered package metadata. (@iarna)
41789cffa [email protected] (@iarna)

Previously we needed to extract every package’s tarball to look for an npm-shrinkwrap.json before we could begin working through what its dependencies were. This was one of the things stopping npm’s network accesses from happening more concurrently. The new filtered package metadata provides a new key, _hasShrinkwrap. When that’s set to false then we know we don’t have to look for one.
4f5060eb3 #15969 Add support for skipping npm-shrinkwrap.json extraction when the registry can affirm that one doesn’t exist. (@iarna)
878aceb25 #16129 Better handle Ctrl-C while running scripts. npm will now no longer exit until the script it is running has exited. If you press Ctrl-C a second time it will kill the script rather than just forwarding the Ctrl-C. (@jaridmargolin)
def75eebf [email protected]: Preserve case of the user name part of shortcut specifiers, previously they were lowercased. (@iarna)
eb3789fd1 [email protected]: Add support for VS2017 and Chakracore improvements. (@refack) (@kunalspathak)
245e25315 [email protected] (@mcollina)
30357ebc5 [email protected] (@isaacs)

This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Hey all, I’m Paul Betts and I’m the lead developer on the Slack Desktop application.
Ugh pass, long story :)
The Slack Desktop app is an Electron app, so our app is of course 100% all-in with regards to the npm ecosystem. We publish our own public packages as well as use private packages.
In the Desktop app, we actually use private packages in a somewhat unusual way. In order to package the software needed for the Calls feature, we need to ship precompiled versions of our modified WebRTC, and to interface it with Electron, we write a native node module. We found that the easiest way to get that Node module to have the versions of WebRTC it needed was to make WebRTC itself an npm package, so that npm install just pulls everything in correctly and builds the Calls native module. Who knew that npm is a great C++ package system :)
We generally try to make anything public that we can in the Desktop team. Because our app is built on open-source, our contributions are a small way to repay the development community for enabling us to build this product. Generally, we choose to make things private when they don’t make sense for other people — i.e., if it has a change specific to our setup. We don’t want people to find a package and think, “Oh, this @slack version must be better / endorsed by Slack!” but instead it’s just weird and broken.
Any time you want to have private source code as an npm package, use Orgs and private packages.
Everything works great. It’s headache-free.
It’d be cool to think about how to support the “Temporary scoped package to work around PR” workflow — i.e.,
npm deprecates mine and tells me to move to the new upstream one.

Might be a little too heavy for npm, Inc. to do, but thinking about that use-case of “scoped packages for temporary forks, that go away after awhile” would be something to explore.
Of course, if you need them they work great, and they don’t have the weirdo problems that pinning to git revisions has, like certain npm scripts not running.
Nope, we’re a fairly non-trivial project and npm’s CLI and package system has always had an answer for everything we’ve thrown at it, including per-platform packages. It’s really great.
We’ve published a bunch of public npm packages in the Desktop team. A mostly-complete list is at https://slack.engineering/building-hybrid-applications-with-electron-dc67686de5fb#.wbcehjpau
Today, we’re excited to announce that npm Orgs, our collaboration tool for helping teams manage permissions and share their code, is free for all developers of open source packages. You may invite an unlimited number of collaborators to manage an unlimited number of public packages for $0.
We launched Orgs in 2015 for companies that needed to mix public and private code. They wanted an easy way to set permissions for multiple team members and multiple packages. Now, teams who don’t need private packages can use this functionality too.
Why would we give away our most popular product? Making it easier to collaborate on open source projects is good for the whole community, and anything that reduces friction makes it easier for everyone to build amazing things.
Take a look at our Orgs docs to learn how these work, and create a free Org now to supercharge how you collaborate on development projects.
If you run into any questions, just drop a line to our support team at [email protected] or tweet us @npmjs. We can’t wait to see what you build.
😩😤😅 Okay! We have another next release for ya today. So, yes! With v4.4.3 we fixed the bug that made bundled scoped modules uninstallable. But somehow I overlooked the fact that we: A) were using these and B) that made upgrading to v4.4.3 impossible. 😭
So I’ve renamed those two scoped modules to no longer use scopes, and we now have a shiny new test to ensure that scoped modules don’t creep into our transitive deps and make it impossible to upgrade npm.
(None of these woes apply to most of you, because most of you don’t use bundled dependencies. npm does, because we want the published artifact to be installable without your having to already have npm.)
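For context, bundled dependencies are regular dependencies that are also shipped inside the package’s own tarball, declared via the bundledDependencies field in package.json. A minimal sketch, with an illustrative version number:

```json
{
  "name": "npm",
  "version": "4.4.3",
  "dependencies": {
    "move-concurrently": "^1.0.0"
  },
  "bundledDependencies": [
    "move-concurrently"
  ]
}
```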
- 2a7409fcb #16066 Ensure we aren’t using any scoped modules, because npms prior to 4.4.3 can’t install dependencies that have bundled scoped modules. This didn’t show up sooner because they ALSO had a bug that caused bundled scoped modules to not be included in the bundle. (@iarna)
- eb4c70796 #16066 Switch to move-concurrently to remove a scoped dependency. (@iarna)

We’re going to start publishing our changelogs over here on the npmjs blog. Today we have for you v4.4.3 from myself (@iarna) and v4.4.2 from Kat (@zkat).
This is a small patch release, mostly because the published tarball for v4.4.2 was missing a couple of modules, due to a bug involving scoped modules, bundled dependencies and legacy tree layouts.
There are a couple of other things here that happened to be ready to go. So without further ado…
- 3d80f8f70 npm/fs-vacuum#6 [email protected]: Make sure we never, ever remove home directories. Previously, if your home directory was entirely empty then we might rmdir it. (@helio-frota)
- 1af85ca9f #16040 Fix bug where bundled transitive dependencies that happened to be installed under bundled scoped dependencies wouldn’t be included in the tarball when building a package. (@iarna)
- 13c7fdc2e #16040 Fix a bug where bundled scoped dependencies couldn’t be extracted. (@iarna)
- d6cde98c2 #16040 Stop printing ENOENT errors more than once. (@iarna)
- 722fbf0f6 #16040 Rewrite the extract action for greater clarity. Specifically, this involves moving things around structurally to do the same thing d0c6d194 did, but in a more comprehensive manner. This also fixes a long-standing bug where errors from the move step would be eaten during this phase, and as a result we would get mysterious crashes in the finalize phase when finalize tried to act on them. (@iarna)
- 6754dabb6 #16040 Flatten out @npmcorp/move’s deps for backwards-compatibility reasons. Versions prior to this one will fail to install any package that bundles a scoped dependency. This was responsible for ENOENT errors during the finalize phase. (@iarna)

This week, the focus of the release was mainly going through all of npm’s deps that we manage ourselves and making sure all their PRs and versions were up to date. That means there’s a few fixes here and there. Nothing too big codewise, though.
The most exciting part of this release is probably our shiny new Contributing and Troubleshooting docs! @snopeks did some ✨fantastic✨ work hashing it out, and we’re really hoping this is a nice big step towards making contributing to npm easier. The troubleshooting doc will also hopefully solve common issues for people! Do you think something is missing from it? File a PR and we’ll add it! The current document is just a baseline for further editing and additions.
Also there’s maybe a bit of an easter egg in this release. ‘Cause those are fun and I’m a huge nerd. 😉
- 07e997a #15756 Overhaul CONTRIBUTING.md and add new TROUBLESHOOTING.md files. 🙌🏼 (@snopeks)
- 2f3e4b6 #15833 Mention the 24-hour unpublish policy on the main registry. (@carols10cents)
- 84be534 #15888 Stop flattening ls-tree output. From now on, deduped deps will be marked as such in the place where they would’ve been before getting hoisted by the installer. (@iarna)
- e9a5dca #15967 Limit metadata fetches to 10 concurrent requests. (@iarna)
- 46aa9bc #15967 Limit concurrent installer actions to 10. (@iarna)
- c3b994b #15901 Use an EXDEV-aware move instead of rename. This will allow moving across devices and moving when filesystems don’t support renaming directories full of files. It might make folks using Docker a bit happier. (@iarna)
- 0de1a9c #15735 Autocomplete support for npm scripts with : colons in the name. (@beyondcompute)
- 84b0b92 #15874 Stop using the undocumented res.writeHeader alias for res.writeHead. (@ChALkeR)
- 895ffe4 #15824 Fix empty versions column in npm search output. (@bcoe)
- 38c8d7a [email protected]: npm/init-package-json#61 Exclude existing devDependencies from being added to dependencies. Fixes #12260. (@addaleax)

If you’re using this endpoint in your own client for npm’s API, read on for more information. If you only use the official npm clients, this deprecation won’t affect you.
In six months, on September 1, 2017, we plan to shut down the servers that back the GET /-/all endpoint of registry.npmjs.org. This endpoint has until recently provided search results for the npm command-line client. If you didn’t know that the npm CLI has search built into it, it’s because this API was designed for an era when the registry had an order of magnitude fewer packages. Over the last few years, as that number has gone from 10,000 to 400,000, this API has become increasingly unusable for its purpose. The increasing size of the full JSON response—234M at the time of this writing—means that even if a request for the payload succeeds, many clients run out of memory when parsing it.
Late in 2016, we improved our web site’s search with a new search service, which we then made available as a public API at GET /-/v1/search. The very latest npm clients use this new search endpoint directly and provide fast and usefully-ordered search results. A few weeks ago we began redirecting older npm clients to the new endpoint as well, which means that no currently-supported CLI versions are using the older search data.
If you are using the GET /-/all endpoint to search package contents, we strongly encourage you to use GET /-/v1/search instead. You can find documentation for this endpoint in the registry repo on GitHub.
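For a quick taste of the new endpoint, here’s a minimal query. The response shape shown matches the documented format, but see the registry docs for full details:

```js
// Fetch the top five results for "web framework" from the new search API.
const https = require('https');

https.get('https://registry.npmjs.org/-/v1/search?text=web+framework&size=5', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const { objects, total } = JSON.parse(body);
    console.log(`${total} packages matched`);
    for (const { package: pkg } of objects) {
      console.log(`${pkg.name}@${pkg.version}: ${pkg.description}`);
    }
  });
});
```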
If you are using the endpoint as a way to get a list of all packages, we encourage you to write a registry follower that watches the changes stream at replicate.npmjs.com for public packages. We provide sample code and libraries to support you.
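A follower can start as simply as polling the CouchDB-style _changes feed and remembering the last sequence number it processed. A minimal single-request sketch:

```js
// Read one page of the public changes feed. A real follower would loop,
// persist `last_seq`, and pass it back as `since` on the next request.
const https = require('https');

https.get('https://replicate.npmjs.com/_changes?since=0&limit=10', (res) => {
  let body = '';
  res.on('data', (chunk) => { body += chunk; });
  res.on('end', () => {
    const { results, last_seq } = JSON.parse(body);
    for (const change of results) {
      console.log('package changed:', change.id);
    }
    console.log('resume from sequence:', last_seq);
  });
});
```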
If you are using the endpoint for another purpose that isn’t supported by either of these two methods, we would like to hear from you. Our intent is to support your use of the npm registry’s public data as efficiently as we can. Contact us to tell us what you need!
My first npm publish was unusual. npm didn’t exist at the time, so that presented a bit of a challenge.
This is the story of helping to invent a universe so that I could make an apple pie from scratch.
Back in 2009, I was working at Yahoo! as a Front-End Engineer, which meant that I wrote a lot of PHP and JavaScript. I had just finished a project where we generated front-end components on the back-end by parsing data into templates and shipping the result to the client, and then, later on, did the same work on the front-end in JavaScript with the same templates and data services.
These days, that’d be called “fast boot” or “isomorphic templates” or something clever, but back in those dark days, it required tediously maintaining two implementations of a view layer, one in PHP and the other in JavaScript. Maintaining the same thing in two languages was downright awful.
“Well”, I figured, “JavaScript is a language, and we can control what’s on the server, why not just run JavaScript on the server?”
The state of the art in server-side JavaScript (SSJS) was Rhino on the JVM. The problem was, unless you compiled your JavaScript into JVM bytecode using arcane special magicks, it was godawful slow. I started messing around with V8 and SpiderMonkey, thinking “I want something like PHP, but JavaScript”.
The SSJS community at that time was a very different place than the Node.js community of today. There were dozens of projects, any one of which could’ve seemed like it would be the breakout hit. SpiderApe and v8-juice were trying to make it easier to embed SpiderMonkey and V8, and to add a standard library to each. v8cgi (renamed to TeaJS) provided a CGI binding to use V8 in Apache2. I started messing around with K7, which provided a bunch of macros for using V8 in various contexts, and Narwhal, which was the only one of these that seemed to be delivering a fully thought-out platform for making programs. There were also Helma and RingoJS, and probably a bunch of others I’m forgetting.
A few years ago, we used to joke that every Node.js dev had their own test framework and argument parser. Well, in 2009, every server-side JavaScript developer had their own SSJS platform.
The contributors to all of these platforms got together on a mailing list and tried to form some kind of standard for server-side JavaScript programming. Front-end JavaScript has the DOM, we reasoned, while server-side JavaScript suffered from a dearth of portability. What we needed, clearly, was a standards body! This effort was initially called “ServerJS”, but then expanded its scope and became CommonJS.
The first proper “module” I wrote in JavaScript was a port of a URL parser I’d written for YUI. I landed it in Narwhal. There was no userland, really; just lots of little cores.
Some time later, in August of 2009, I gave a tech talk about SSJS and demonstrated using Narwhal and Jack, a Rack-like thing built on top of Narwhal, using the JSGI protocol.
After the talk, one of the people in the audience asked if I’d ever tried out Node.js. As it turned out, I had, but like so many SSJS platforms of the day, it hadn’t made much of an impression on me. Ergo: not a thing.
“I dunno,” he said. “Maybe try it again. It’s pretty nifty.”
He insisted that it was fast, and I was like, “Meh. JVM is fine.”
I checked the website again, and they’d added a “Community” section. Also, the docs still sucked, but it was version 0.0.6 now, which was like, 4 more than it was the first time I’d checked, so whoever this Ryan guy was, he was at least working hard on the thing.
It compiled successfully, and I was hooked! It started up so fast compared to Rhino! And it had tests that ran when I did make test, and they passed!
3 important lessons for OSS success: have a website (with a community section!), keep shipping new versions, and make your tests pass.
I gradually stopped paying much attention to CommonJS, and instead just threw my efforts at Node. I hung out on the mailing list and in IRC during all my free time.
The problem with Node back then was that even though a growing number of people were all writing really interesting programs, it was hard to share them. So, I wrote this thing, which was a port of a bash script I was using to play with people’s code.
Technically that wasn’t “publishing”, though. In order to actually publish to npm there had to be an npm registry. Today, that registry is a webservice at https://registry.npmjs.org/, run by npm, Inc. The first registry was a git repo called “npm-data”. I collected up the handful of modules that’d been shared on the mailing list and in the Node.js wiki, and made a JSON file with links to them.
One principle of package management that I felt was really important was that no one person should be the bottleneck in community growth. Especially if that person is me. Because I really hate that crap.
I don’t mind working really hard on lots of challenging stuff, but if I have to do some simple task over and over again, especially if other people are depending on me to do it, it’s like torture to me. The prospect of being in someone’s critical path for deploying their module was just… ugh. Gross.
I needed a web service type thing that would let people publish packages and then could download those packages and install them.
I got to talking to Mikeal Rogers, who worked at Couch.IO. He built the first npm registry CouchApp, and got it functional.
Fun fact! For a little while, anyone could publish any package, and we relied on the honor system to keep anyone from clobbering anyone else’s name. It was an ok system for a short while, since there were only about 4 or 5 people in the world who knew this thing existed, but we got an authentication and authorization system set up before anyone could take advantage of it.
By that time, I’d quit my job at Yahoo! and was taking a sabbatical. If you can afford it, I highly recommend saving up a little nest egg and taking a few months off to see what comes out of you. Muses can be fickle, and tend to call when least expected.
You’re thinking that the culmination of this story is that I published npm to npm and that was my first npm publish, and it’ll be super meta and awesome like that. It’d be a beautiful punchline.
Real life is sloppy sometimes.
I knew that I wanted npm to be able to accept abbreviated versions of commands, so that npm inst would do the same thing as npm install. (To this day, the friendly CLI shorthands are some of npm’s most beloved features.)
The first thing I published to npm was abbrev. I’d written it already, mostly as a sort of coding crossword puzzle some… Saturday? Wednesday? All the days were pretty identical during those two lazy/exhausting months of funemployment.
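abbrev’s whole job fits in one example: given a set of words, it maps every unambiguous prefix to its full word. A quick sketch of it in action:

```js
const abbrev = require('abbrev');

console.log(abbrev('install', 'init'));
// => { ins: 'install', inst: 'install', insta: 'install',
//      instal: 'install', install: 'install',
//      ini: 'init', init: 'init' }
// 'i' and 'in' are ambiguous, so they're left out. npm uses the same
// idea to let `npm inst` resolve to `npm install`.
```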
Since abbrev was only one module, with no build command, it was really easy to publish and install repeatedly. Ever since then, it’s always been one of my go-to testing modules to make sure things are working properly. Not only was it my first npm publish, it was the first npm publish, and it was published probably dozens or hundreds of times to http://localhost:5984/ while I was working on npm. So, of course, when I had a registry running on my little DreamHost instance, abbrev was the first thing I published to it.
The really wacky part: despite it being the first thing I’d published with npm, I didn’t actually use abbrev in npm until 5 months later. That whole time I kept trying to figure out how to have proper dependencies in the thing that installed dependencies. Eventually, I gave up and threw it in a utils folder.
Looking back over abbrev now, it’s amazing to me how little it’s changed. Most of the code is still that initial implementation from May 2010.
The moral of the story is that you don’t know how it’s going to end.
Package publishers are the people who make the npm registry the largest (and awesomest?!) package ecosystem in the world (universe?!). Today we are kicking off a campaign to show y’all a little love.
For a while now, we’ve noted that the community loves tweeting about publishing their first npm package. They are some of our favorite tweets; so much so, in fact, that we’ve decided to collect and publicize them!
Here’s how it works:
If you are a new publisher:
If you are a seasoned publisher:
If you are a member of the community:
Everyone:
To everyone who has published a package, be it big, small, popular, or only used by you: THANK YOU SO MUCH for contributing to the npm ecosystem. We’d be nothing without you, and we’re glad you are here <3
Want to publish your first package this week? Awesome. Learn how with this doc.
This piece is part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
A. Phil Schleihauf, co-founder and developer at OpenRide.
Great, thank you! Hope yours is good too :)
We’ve been node/react server/client since day 1, so we naturally used npm to manage dependencies. Private npm packages let us move code outside our private monorepo, so client and server could depend on specific versions, de-coupling library development from synchronization with the app.
We wrote our own testing framework. The intent was to open-source it eventually, but keeping it private let us design it in private, slack on documentation, not fix bugs, etc., until we had a solid design to clean up and release publicly.
We intended to move more things out of the monorepo into private packages, but in the end, did not.
Yes. We discuss it case-by-case. We’re definitely biased toward open-sourcing everything, but we also like to release things that we feel meet a level of quality that we’re happy with, and things that we intend to maintain.
To me, the killer use-case is code reuse. Moving shared code to a library vastly decreases friction around updating that library—it can have its own fast CI, its own releases whenever it wants, and app code can upgrade to the latest version at leisure. It’s nice to be able to get that for code that’s not suitable for open-source, whether because it’s too domain-specific, just-not-ready, or whatever.
I want to stress the joy of a separate CI. Our main app’s CI takes 12–20 minutes to run. Our private package’s CI runs in under a minute. Moving stuff out of our monorepo onto its own fast CI has a positive dev-happiness effect!
Good. There is friction—new devs need to have npm accounts and permissions have to be set up for them. CI and build pipelines need to log in to pull the private dependencies, which required some hacks when we set it up. But those are all fixed-cost (or per-dev) items, and the benefits are worth it.
I don’t think we’ve caught up to private orgs yet—I think we had to set up an “organization user” when we were getting things up and running, so I’d have to get up to date before answering this :)
I would ask about their deploy/CI setup first. If they can get the keys/passwords in the right place at the right time easily, then I’d recommend it. So for example, at least at the time I set things up, we couldn’t get a Docker auto-build to work. We keep secrets in our CircleCI environment variables, and have a script that builds our Docker images in that environment instead.
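The usual shape of that setup, for what it’s worth, is a project-local .npmrc that reads the auth token from an environment variable (npm expands these), so the secret lives in the CI settings rather than in the repo. The variable name here is illustrative:

```ini
; .npmrc committed with the project; NPM_TOKEN is set in the CI environment
//registry.npmjs.org/:_authToken=${NPM_TOKEN}
```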
Q: What’s the difference between a hippo and a zippo? A: One’s really heavy, the other is a little lighter.
Over last month’s holidays, with the help of npms.io, npm introduced an improved search platform and brought it to the npmjs.com web experience. We’re really proud of how this project went: it was an opportunity to work with folks in the community and pull in an open-source solution that people love.
As we promised at the time, here are some more details about the how and the why, and an exciting announcement about bringing new search to the npm command-line tool.
It turns out we’ve improved search several times in the life of the company, and the story of search, like any story about npm, is a story about the JavaScript community’s terrifyingly ridiculous growth:
At each of the steps along the way, we’ve had to make significant changes to our search algorithm, to support the growing ecosystem.
When the registry was just getting started, guessing a few keywords was a great way to find the module you were looking for, e.g., “http request”, “xml parser”, “node globber”.
npm’s first search implementation was exclusively for the CLI, and it was simple: it matched each package’s metadata fields (description, keywords, name, etc.) against the arguments provided to npm ls [some key words]. That’s all there was to it: no stop-word removal, no stemming, no fancy-pants search-engine technology.
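In modern JavaScript terms, the whole algorithm was roughly this. This is a schematic reconstruction, not the actual code:

```js
// Schematic of the original CLI search: scan every package's metadata and
// keep the ones whose fields contain all of the search words.
function naiveSearch(allPackages, words) {
  return allPackages.filter((pkg) => {
    const haystack = [pkg.name, pkg.description || '', ...(pkg.keywords || [])]
      .join(' ')
      .toLowerCase();
    return words.every((word) => haystack.includes(word.toLowerCase()));
  });
}
```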
With only a few hundred packages in the registry, this worked great … for a while.
In December 2010, just a few months after we released search for the CLI, Mikeal Rogers implemented search.npmjs.org.
Mikeal’s code introduced several improvements over the initial search implementation. search.npmjs.org was definitely a step forward for search; it also set in motion the npm website’s search drifting away from the npm CLI’s search… something that’s taken us until now to correct.
At a few hundred packages in the registry, the approach to search described above worked great, but as the ecosystem grew and users adopted the tiny-modules approach to development, search began to fail.
To help address this growing discoverability problem, several implementations of search grew out of the community, and these third-party search sites introduced many cool innovations.
In 2014, npmjs.com adopted the indexer used by npmsearch.com. This significantly sped up search results, while also improving the discovery algorithm by ranking based on download counts.
This was a major improvement to the search algorithm, and a step in the right direction, but…
When npm, Inc. formed in 2014, our first goal as a company was to make the registry a stable platform that people took for granted. As we stabilized the registry, this plan paid off. More ecosystems began calling the registry home: jQuery, React, and Meteor, to name a few. Between 2014 and early 2017, this helped see the number of modules in the registry climb to over 400,000! …but our search algorithm did not age well.
As we researched the other search engines people used in the community, it became obvious that people were impressed by the quality of the results returned by npms.io.
This set in motion a conversation with the folks behind npms.io, and culminated in our deciding to deploy npms.io as npm’s third-generation search.
npms.io is by far the most advanced search algorithm npm has ever offered. npms.io’s analyzer takes into account three categories of information in its ranking: a package’s quality, its popularity, and its maintenance.
By ranking results based on this variety of qualities, the algorithm can surface modules that in the past might have been ignored. express is the top hit for “web framework”, for example, despite not having “web” or “framework” in its name.
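Under the hood, each package gets a score in each category, and the categories are combined into a single ranking with weights. The real formula and weights live in the open-source npms analyzer; the sketch below is illustrative only:

```js
// Illustrative only: combine npms's three category scores into one number.
// The actual weights and formula are defined in the npms analyzer project.
function overallScore(detail) {
  const { quality, popularity, maintenance } = detail;
  return 0.3 * quality + 0.35 * popularity + 0.35 * maintenance;
}

// e.g. overallScore({ quality: 0.9, popularity: 0.8, maintenance: 0.95 })
```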
So far, the response from the community has been wonderful, and we’re excited to continue working with and deploying the npms.io project.
What’s next for search at npm?
We think this is very exciting news. An upcoming update to the npm command-line tool will make the CLI hit the shiny new search endpoint directly. This will unify the website and CLI search experience for the first time since 2010. It will also make default npm search on the main registry blazing fast.
The PR is basically ready, with only a handful of remaining to-dos. Check it out.
As mentioned, npms.io is an open-source project. We hope that the JavaScript community will pitch in to continue to make our search algorithm top-notch.
Where’s feature x? What took so long? How will search work when we reach a million packages? These are good questions, and you can help with the answers. Please, join the discussion, and help make search even more amazing.
This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Hi! I’m Jesse Pollak and I’m one of the co-founders and the Chief Product Officer at Clef.
It’s going pretty well! Just ate a yogurt and apple :)
Just took a look back through our commit history. It looks like we started using npm to manage our frontend packages for Clef in February of 2014. At that point, we were migrating from a frontend that consisted of spaghetti jQuery code to React. We chose npm to manage our dependencies because it seemed like the obvious choice.
Two years later, in March 2016, I see our first commit that references a private package. At the time, we were starting to build a new product that required reuse of a bunch of frontend logic from our core Clef product. To solve that problem, we started breaking small components and libraries out of our code base into npm packages. We needed a place to publish them — but we didn’t want them to be public — so private npm packages in a @clef org was the obvious choice! Getting started was really easy and we had our first private package published in less than a day.
One of the coolest parts of Clef is the Clef Wave (you can check it out at getclef.com/demo). We’ve written a bunch of front-end logic to render and animate the wave, but since it’s a core part of our system, we don’t want to publish it as open source, although we may soon :). We need to use the wave across a bunch of different products, and private packages let us use the npm infrastructure to develop, publish, and install the necessary packages to render and animate the Clef Wave without making the unminified source public.
We do do open source (you can check out our work here)! Generally, we publish anything that we think would be generally usable by a broader audience (e.g., React components, Flask libraries), but we don’t open source our core products. That’s changing with Instant 2FA though — we’ll be open sourcing the entire code base!
All the developer friendliness of open source on npm — without the open source! It makes developing, publishing, and installing private packages dead simple.
Great!
Everything works pretty great right now. The one thing that’s been frustrating, which I would like to see improved, is adding outside collaborators to private packages. We have some packages that we need to share with partners, and in the current organization model, we’d need to pay for every npm user in the partner organization as a part of our organization — that’s really frustrating!
Ideally, we’d be able to add anyone to a private package for free and it would only cost money if we needed to add them to our whole organization.
Yes! npm makes publishing JS (and other front-end related code) dead simple and everyone should be using it!
This piece is a part of our Customer Convos series. We’re sharing stories of how people use npm at work. Want to share your thoughts? Drop us a line.
Q. Hi! Can you state your name and what you do?
A. I’m Karen. I’m an engineer at Mapbox on the Directions team; directions as in routing: a to b, take me home, give me an ETA! (Read more here.)
How’s your day going?
Umm… I’m on a bus headed up to New York for a few days, so, great!
Tell me the story of npm at your company. What specific problem did you have that private packages and orgs solved?
Mapbox adopted Node.js as the base of a lot of projects early on, so npm has always been important to publishing and deployment, but there are also private, usually security-related tools that need to end up on the same servers as product code that we can’t publish to npm.
We had developed internal infrastructure for publishing and distributing these private modules, but having them available through npm the same way as public modules is much more convenient because of semver support and less overhead to onboard new team members, i.e., new people only have to learn npm (or not, if they were already familiar!) rather than npm and our internal system.
Can you tell us a story about a specific package you wanted to make that private packages really enabled you to do?
Hmm, nothing too interesting here, since we’d gotten by OK with the internal system we had before private orgs.
Does your company do open source? How do you negotiate what you keep private and public? (Feel free to be as vague as you need to be)
Open source has been a core Mapbox engineering value since the beginning (like, small Drupalshop beginning). We consider it an asset to contribute to FOSS communities and benefit from community contributions. In addition to maintaining some 600 open repos under the Mapbox GitHub account, we sponsor a few other big open source projects that are very important in the GIS space, like leaflet, libosmium, and OSRM (Open Source Routing Machine).
Our general rule of thumb is to work in the open from the get-go because it’s easier to keep a project open than it is to later open a closed project. However, sometimes in the interest of time, security, or bizness we develop projects privately, too.
To people who are unsure what they could use private packages for, how would you explain the use case?
Security, for platform infrastructure and user privacy, but with the convenience of open project management. Sharing source code for a cool shader is one thing, sharing a project that is a wrapper around cloud provider account access is another.
How’s it going? How’s the day-to-day experience of using private packages/orgs?
Once we got internal docs written on how to get people using them, it’s been a smooth ride.
Would you recommend that another org or company use private packages or orgs? Why?
I would, as long as they try to maintain open packages as well!
Any cool npm stuff your company has done publicly that you’d like to promote?
Haha, I’ll plug the project I’m working on… We’ve put a lot of work into making Node bindings for OSRM easily available on npm, so anyone can use a routing engine written in C++ without worrying about finicky build systems!
Happy holidays from npm! We’re sorry we couldn’t get you all socks this year, but we have a gift we hope you’ll like.
Today, npm and npms.io are proud to take the wrapping paper off a completely revamped npm registry search experience. It’s available today on npmjs.com and available in the npm CLI application after the holiday break. Give it a whirl!
It’s been apparent for some time that the search feature on npmjs.com was lacking, to put it mildly.
Why? One word: success.
The npm registry has grown beyond even our wildest dreams. In 2014, the registry had 60,000 packages with around eight million downloads per day. We’re now close to 400,000 packages and 300 million downloads every day. In a very basic sense, our search just wasn’t built for that large of a registry.
The problem extended beyond sheer numbers, though. At the same time as our community has grown, it’s become obvious that code quality — features like whether a package is still maintained, and how often; and whether a lot of developers depend on it — is an important consideration, too. npm is, and always should be, home to works in progress, first packages, and even, uh… this stuff — but the community also depends on us to help point to reusable code that’s been vetted by others, is kept up to date, and works well for a given task.
Look, search is hard, and we’ve been busy. Fortunately, help was available from — who else? — the community. In this case: André Cruz, CTO at MOXY in Porto, Portugal, who saw the need and attacked it.
It was a pity that such a great package manager and community had its search engine pulling it down. I saw an opportunity to make a proper alternative that, given enough dedication, could completely solve the problem.
Solve it he did, bringing on another André — André Duarte of Feedzai — along the way. The project that eventually became npms.io started as a small side project but blossomed into a full-fledged solution to npm’s search problem.
The first commit happened on the 24th of January, and after a few months of hard work and sleepless nights, we were able to validate that our strategy was right.
npms.io launched on July 12th to rave reviews. Almost immediately, we reached out to the two Andrés to figure out a way to work together. Our own Benjamin Coe worked with them to integrate npms’s search methodology into the npmjs.com user experience and scale it to power over 1.3 million searches a month.
We consider this a flagship example of npm’s approach to building software. We’ve always favored working with existing technologies if possible rather than reinventing the wheel — so we’re using a technology that grew out of the community.
Today’s release is just the start. Because npm search is powered by npms, that means our search is open source, and you can help. We’re looking to build a small team around npms that will push the project further and keep it sustainable. Take a look and get involved.
When developers devote time to building amazing things and sharing them back with the npm community, it validates and strengthens the philosophies behind open source development. Being able to more quickly discover, share, and contribute code empowers all of us to build more amazing things.
As gifts go, we think it beats an ugly sweater. Please enjoy, share your feedback, and watch this space for more awesomeness to come in 2017.
Recently, we reached out to people who’ve traded us some money for our goods and services, and asked them to tell us how it’s been working out. We didn’t pay them anything, and didn’t edit the content of their responses — just asked for their thoughts in their own words.
Over the next few months, we’ll be sharing the conversations we’ve had. Interested in sharing your thoughts? Drop us a line.
A. Revin Guillen, Staff Software Engineer at a company called Ellie Mae, which makes products that help banks and lenders process mortgages. I’m one of the lead developers on a large team of sub-teams that are building the next version of the main product (it’s a web app; the existing product is a Windows desktop app). I “grew up” in the open source world and enjoy bringing the culture and mindset to enterprise teams.
Peachy so far. Been dealing with some finicky Angular 1 + Angular 2 integration work, working on designing architecture/patterns for the teams to use for pieces of our app, etc… Typical stuff.
The short answer is it was the simplest way to build proprietary/internal stuff that still worked with the grain of our JS tooling.
The slightly longer answer is, well, we’re building a web app, so that means lots of JavaScript. And by the time I joined the company, npm was already in place for doing general dependency management. Most of the JS devs aren’t really familiar with an npm-centric workflow or how open source projects typically work (we manage everything on a private GitHub Enterprise setup), so we’ve ended up building a CLI utility that manages dependencies, does component scaffolding, configures and runs webpack builds, etc., similar in concept to the Ember or Angular 2 CLIs. And since we have a lot of teams building web apps that have a lot of the same components or needs, it just made sense to have our internal stuff be npm dependencies, but private to our @elliemae org. There’s a bit of dogfooding at work here as well, because we intend to export a platform/SDK on which our customers can build their own banking apps, and we’ll end up making some of these things public so those apps can use them.
I don’t really have a specific package in mind that only worked out because it was private, but we have a few that contain things like internal URLs or test credentials — things that can’t be available to the public but are still useful to package up and make easily installable by internal devs.
We haven’t released anything yet, but we intend to do so. I’m not 100% clear on how the negotiations on that stuff work, other than just, the default will be to open source everything, but upper management will look at something and say there’s too much of our “secret sauce” (or whatever) in that part, so don’t open source that. For the most part, at least for now, the really proprietary stuff will be living in the cloud services we’re using to power the web apps, and those won’t be open source at all, as far as I know.
For me it’s: 1) npm based workflow is largely simple, widely known, and greatly eases the pain of building apps as compositions of smaller pieces. 2) In an organization where you have multiple projects sharing dependencies that shouldn’t be public (for any reason, including legal issues), it just makes sense to build those dependencies just like “normal” ones, and limit access to them so they’re not public, but still work with the toolchain.
Overall, it works quite well. We do have some self-inflicted problems stemming from the wide range of JavaScript experience across our group: 1) people coming from environments where semver isn’t really a thing don’t understand a lot of the nuances involved, and 2) not everyone buys into Full Semver™, so versioning gets done slightly differently depending on who owns a package.
To keep things working, people lock the dependency versions down manually, down to the literal patch version. This makes package updates slow to propagate internally because the consuming packages have to all be manually updated, and it sometimes leads to lots of QA fallout because everyone needs to worry about breaking changes on every version bump. This is mostly a training issue; it’s easy to overlook in a large organization.
Probably the only slightly painful thing for me personally on it is that I have open source projects I contribute to on the same laptop as my work projects, so managing my two different npm logins can be kind of finicky. I usually end up copying around the .npmrc from each work project. Some are maintained by a shared npm account, some are maintained by my own @elliemae npm org account, and it’s not always clear to me which ones will work where. Supposedly, my user is an admin in our org, but there have been private packages that didn’t let me push new versions until I logged in as the shared account. It’s probably that some kind of permissions thing isn’t set up correctly.
This may already exist in a simpler form that only needs me to go looking for it, but per the issue above: an easy way to manage multiple logins (and possibly even multiple registries) on a per-project basis would be nice.
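For what it’s worth, one partial answer already exists: npm reads a project-local .npmrc alongside your user-level one, so a work project can carry its own scope-to-registry mapping and credentials. The values below are illustrative:

```ini
; project-local .npmrc for work projects under the @elliemae scope
@elliemae:registry=https://registry.npmjs.org/
//registry.npmjs.org/:_authToken=${WORK_NPM_TOKEN}
```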
Yes, absolutely.
One of my biggest development philosophies is to choose tools that work with the grain of how you want to work, and work with the grain of your tools. npm based workflow is a successful model, so if you can’t make all your code public, it’s the perfect way to still use your nice tools without jumping through unnecessary hoops. Plus, if someone has any intention of releasing their work later, building it on npm from the beginning saves a lot of hassle come release time.
“Q: why are your answers boring? Is it because you are personally boring?”
A: possibly. But also there’s the issue that enterprise development inherently tends toward mediocrity, and the lines of business are typically fairly boring as it is (even if the technical challenges are fun to work on).
Not yet. Hopefully someday. There are some things going on internally, but I can’t really talk about them to the world yet!
npm has one of the most active issue trackers on GitHub, so we receive constant exposure to your pain points and concerns. This means we have some pretty solid guesses of what we should build and how to prioritize it. However, given the huge number of users and projects who depend on npm, it’s essential that we understand everyone’s needs and build the right thing.
Since the release of npm 4, the team responsible for npm’s CLI has been hard at work on npm’s next major version. npm 4 was intended to clear out some breaking changes that had built up over time, but npm 5 will be a more ambitious release — so we’re trying a new process for soliciting feedback and hearing your concerns.
The process is broken down into a couple pieces. Let’s talk about them, and how you can get involved.
Before the release of Node.js 8, we intend to ship npm@5. Among other things, it will bring a revamped npm-shrinkwrap.json, npm’s lockfile. In particular, shrinkwrap was originally intended as a finalizing tool used prior to deployment, but we intend the new shrinkwrap to be used for the entire lifecycle of your JavaScript applications.

How did npm get to where it is? Well, not really on purpose.
npm was a product of rapid, iterative design, organically developed in response to immediate needs. A fair amount of what people have come to rely upon in the CLI were things that were either experiments on Isaac’s part, or contributions from community members trying to solve specific problems.
A great example of this is npm shrinkwrap. While Isaac was at Joyent, his colleague (and DTrace wizard) Dave Pacheco submitted npm shrinkwrap as a patch to solve some specific deployment issues that Joyent faced with npm.
This rapid iteration is part of what makes npm one of open source’s great success stories, but it also makes maintaining it a challenge. Absent design documents or specifications, developers assume that if they can get something to work with npm, that must be how it was designed to work. That’s a logical assumption, but as time passes and people build ever more sophisticated workflows on top of this assumption, it gets harder for us as maintainers to make even small simplifications to how npm works without breaking somebody’s workflow.
One conclusion I’ve drawn from this: we’re past the point where the CLI can afford to exist without some sort of specifications.
Writing down what we expect the CLI to do helps both maintainers and developers when there’s confusion, and makes it easier to integrate npm into your larger solutions for building and maintaining JavaScript applications.
Another conclusion I’ve come to: the specs we produce will be much better if you’re involved.
Every member of the CLI team has learned far more than we ever wanted about the intricacies of package management, but we’re not omniscient. We see only a tiny fraction of your use cases — and in most cases, we learn more about what we need to build by letting you talk directly to one another.
To support this, we’ve come up with a process to encourage you to help us figure out how to build the right functionality to meet your needs.
The flow is straightforward:
- …#rfcs channel. This allows the other teams at npm to make sure we’re building the right thing, and that the spec makes sense. (This is not just engineering — feedback from product design, support, documentation, and marketing is important at this point too.)
- …doc/specs/. Over time this will grow to include specifications of how we intend all of the CLI’s core components to behave.

This process shouldn’t look particularly new or exciting; we’ve borrowed pieces of it from other projects. We’ve also tried to balance our need to get a good level of feedback — from both the project’s collaborators and the broader community — with still keeping things moving along. To be ready for Node.js 8’s release, we need to finish the breaking changes for npm@5 by March of 2017. This is a tight deadline given our ambitions.
Good news: we’re almost ready to put up the first set of RFCs!
Kicking things off, Rebecca and Kat have been working on a set of changes to how symbolic links and local dependencies work with shrinkwrap. At least one of those doesn’t work at all in shrinkwrap today, so their proposals are interesting and at least a little radical. We’ll get them up shortly after everyone returns from their winter holidays. After that, we’ll keep a steady stream coming until the release of npm@5.
We’re still striving to find newer and better ways to make clear what we’re doing, and to get your feedback. We’re looking forward to your input on these new proposals — I’m confident it will help us produce a much better new npm.
Thanks as always for your time and attention. From our family to yours: best wishes for the holidays and happy new year!
Last week we were notified by security researcher Deian Stefan of a potential security issue for some packages in the npm Registry. The issue was publicly disclosed today. We appreciate his responsible disclosure of the issue, and his efforts in researching it thoroughly.
The root of the problem is that HTTP URLs are insecure and are vulnerable to interception by “man in the middle” (MITM) attacks. These insecure URLs can be embedded in shrinkwrap files, thus forcing a victim to download code that is not what they were expecting. An attacker who can deliver arbitrary code in this way can do any number of malicious things.
We judge the severity of a security flaw by two criteria: impact and likelihood. The impact of this attack is very severe. However, the probability of this attack is very low.
The npm Registry has served metadata over HTTPS for over 4 years and switched entirely to HTTPS in April 2016. This probably accounts for why the number of affected packages is so low: packages created or updated since April 2016 will not have this vulnerability.
All registry packages are available over HTTPS, so if you discover a package with HTTP URLs in its shrinkwrap file, the fix is as simple as adding an “S” to the URLs and pushing an update to the package.
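Concretely, an affected lockfile entry looks something like this (the package name here is made up for illustration); the fix is just the scheme of the resolved URL:

```json
{
  "dependencies": {
    "some-dependency": {
      "version": "1.2.3",
      "resolved": "http://registry.npmjs.org/some-dependency/-/some-dependency-1.2.3.tgz"
    }
  }
}
```

Change http:// to https:// in the resolved field and publish a new version of the package.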
The changes necessary to prevent this from recurring have already happened, so only individual package authors need to act to update specific insecure packages. We also recommend that authors deprecate insecure package versions with a message referring to the vulnerability.
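Deprecation is a single command; the package name, version range, and message below are hypothetical:

```sh
npm deprecate some-dependency@"<1.2.4" "Resolves dependencies over insecure HTTP. Please upgrade to 1.2.4 or later."
```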