npm has only been a company for 3 years, but it has been a code base for around 5–6 years. Much of it has been rewritten, but the cores of the CLI and registry are still the original code. Having only worked at npm for a year at this point, there’s still a lot of things left for me to learn about how the whole system works.
Sometimes, a user files a bug which, in the process of debugging it, teaches you some things you didn’t know about your own system. This is the story of one of those bugs.
Over the past week or so, several people filed issues regarding some strange truncating in npm package pages. In one issue, a user reported what appeared to be a broken link in their README:
Another user pointed out that the entire end portion of their README was missing!
As a maintainer of npm’s markdown parser, marky-markdown, I was concerned that these issues were a result of a parsing rule gone awry. However, another marky-markdown maintainer, @revin, quickly noted something odd: the description was cut off at exactly 255 characters, and the README was cut off at exactly 64kb. As my colleague @aredridel pointed out: those numbers are smoking guns.
Indeed, an internal npm service called registry-relational-follower was truncating both the READMEs and descriptions of packages published to the npm registry. This was a surprise to me and my colleagues, so I filed an issue on our public registry repo. In nearly no time at all, our CTO @ceejbot responded by saying that this truncation was intended behavior(!) and closed the issue.
“TIL!” I thought. And that’s when I decided to dig into how the registry handles READMEs… and why.
Before I dive into exactly what happens to your packages’ READMEs between the moment you write and publish them and the moment they’re rendered on the npm website, let’s address the 800-lb gorilla in the room:
When I discovered that the registry was arbitrarily truncating READMEs, I thought: “Seems bad.”
Maybe you thought this, too.
Indeed, at least one other person did, commenting on the closed issue:
This may be desired by npm, but I doubt any package authors desire their descriptions to be truncated. Also, see zero-one-infinity.
I should point out that commenting negatively on an already closed issue isn’t the best move in the world. However, I appreciated this comment, because it gave me new words to explain my own vaguely negative feelings about this truncation situation — fancy words with a nice name: The Zero One Infinity rule.
The Zero One Infinity rule is a guiding principle made popular by Dutch computer scientist Willem van der Poel and goes as follows:
Allow none of foo, one of foo, or any number of foo. —Jargon File
This principle stands to eliminate arbitrary restrictions of any kind. Functionally, it suggests that if you are going to allow something at all, you should allow either exactly one of it or an unbounded number of it. It pairs naturally with a symbiotic rule, the Principle of Least Astonishment, which states:
If a necessary feature has a high astonishment factor, it may be necessary to redesign the feature.
In the end, these principles are fancy, important-sounding ways of saying: arbitrary restrictions are surprising, and we shouldn’t be surprising our users.
Now that we can agree that surprising users with strange and seemingly arbitrary restrictions is no bueno … why does the npm registry currently have this restriction? Certainly npm’s developers don’t want to be surprising developers, right?
Indeed, they don’t! The current restriction on description and README size is a Band-Aid that npm’s registry developers were forced to apply as a result of the original architecture of the npm registry: large READMEs were making npm slow.
How the heck…, you might be thinking. Reasonable. Let’s take a look.
Currently, here is how your READMEs are dealt with by the registry:
When you type npm publish, the CLI tool takes a look at your .npmignore (or your .gitignore, if no .npmignore is present) and the files key of your package.json. Based on what it finds there, the CLI takes the files you intend to publish and runs npm pack, which packs everything up in a tarball, or .tar.gz file. npm doesn’t allow you to ever ignore the README file, so that gets packed up no matter what!
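If you’re ever curious what will actually end up in that tarball, you can run the pack step yourself and list the result. A quick sketch (the package and file names here are hypothetical):
npm pack
# produces something like my-pkg-1.0.0.tgz; list its contents:
tar -tzf my-pkg-1.0.0.tgz
# package/README.md appears in the listing no matter what your .npmignore says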
When you type npm publish, your README gets packed into a package tarball. This is what gets downloaded when someone npm installs your package. But this is not the only thing that happens with your README.
So while npm publish runs npm pack, it also runs a script called publish.js that builds an object containing the package’s metadata. Over the course of your package’s life (as you publish new versions), this metadata grows. First, read-package-json is run and grabs the content of your README file based on what you’ve listed in your package.json. Then publish.js adds this README data to the metadata for your package. You can think of this metadata as a more verbose version of your package.json — if you ever want to check out what it looks like, you can go to http://registry.npmjs.com/. For example, check out http://registry.npmjs.com/marky-markdown. As you’ll see, there’s README data in there for whichever version of your package has the latest tag!
Finally, publish.js sends this metadata, including your README, to validate-and-store… and here is where we bump into our truncation situation.
npm publish sends the entire README data to the registry, but the entire README does not get written to the database. Instead, when the database receives the README, it truncates it at 64kb before inserting.
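You can check this for yourself by pulling a package’s metadata and measuring the readme field; a README that comes back hovering right at the 64kb mark has almost certainly been cut off. A quick sketch, assuming you have jq installed:
curl -s http://registry.npmjs.com/marky-markdown | jq -r '.readme' | wc -c
# well under 65536 bytes: the whole README made it; right at 65536: truncated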
This means: while we talk about a package on the npm registry as a single entity, the truth is that a single package is actually made up of multiple components that are dealt with by the npm registry services differently. Notably, there’s one service for tarballs, and another for metadata, and your README is added to both.
This means that the registry has 2 versions of your README:
- The original version as a file in the package tarball
- A potentially truncated version in the package metadata
As you may now be guessing, users have been seeing truncated READMEs on the npm website because the npm website uses the README data from package metadata. This makes a fair amount of sense: if we wanted to use the READMEs in the package tarballs, we’d have to unpack every package tarball to retrieve the README, and that would not be super efficient. Reading README data from a JSON response, which is how the npm registry serves package metadata, seems at least a little more reasonable than unpacking over 350,000 tarballs.
So now we know where the READMEs are truncated, and how those truncated READMEs are used — but it’s still not necessarily clear why. Understanding this requires a bit of archaeology.
Like many things about npm, this truncation was not always the case. On January 20, 2014, @isaacs committed the 64kb README truncation to npm-registry-couchapp, and he had several very good reasons for doing so:
First, allowing extremely large READMEs exposed us to a potential DDoS attack. An unsavory actor could automate publishing several packages with epically large READMEs and take down a bunch of npm’s infrastructure.
Second, extremely large READMEs in the package metadata were exploding the file size of that document, which made GET requests to retrieve package data very slow. Requesting the package metadata happens for every package on an npm install, so ostensibly a single npm install could be gummed up by having to read several packages with very long READMEs — READMEs that wouldn’t even be useful to the end user, who would either use the unpacked README from the tarball or wouldn’t need the README at all if, for example, the package was a transitive dependency far down in the dependency tree.
Interestingly enough, the predicament of exploding document size was a problem that npm had dealt with before.
Remember when we pointed out that a single package is actually a set of data managed by several different services? Like many things at npm, this also was not always the case.
Originally, npm’s registry was entirely contained by a single service, a CouchApp, on top of a CouchDB database. CouchDB is a database that uses JSON for documents, JavaScript for MapReduce indexes, and regular HTTP for its API.
CouchDB comes with out-of-the-box functionality called a CouchApp: a web application served directly from CouchDB. npm’s registry was originally exclusively a CouchApp: packages were single, document-based entities with the tarballs as attachments on the documents. The simplicity of this architecture made it easy to work with and maintain, i.e., a totally reasonable version 1.
Soon after that, though, npm began to grow extremely quickly — package publishes and downloads exploded — and the original architecture scaled poorly. As packages grew in size and number, and dependency trees grew in length and complexity, performance ground to a halt and npm’s registry would crash often. This was a period of intense growing pains for npm.
To mitigate this situation, @isaacs split the registry into two pieces: a registry that had only metadata (attachments were moved to an object store called Manta and removed from the CouchDB), which he called skim, and another registry that contained both the metadata and the tarball attachment called full-fat. This splitting was the first of what would be multiple (and ongoing!) refactoring efforts to reduce the size of package metadata documents and to distribute package processing across multiple services to improve performance.
If you look at the npm registry architecture today, you’ll see the effects of our now CTO @ceejbot’s effort to continue to split the monolith: slowly separating out registry functionality into multiple smaller services, some of which are no longer backed by the original CouchDB but by Postgres instead.
Turns out that nobody thinks that arbitrarily restricting README length is a good thing. There are plans in the works for a registry version 3, and changing up the README lifecycle is definitely in the cards. Much like the original shift that @isaacs made when he created the skim and full-fat registry services, the team would ideally like to see README data removed from the package metadata document and moved to a service that can render them and serve them statically to the website. This would bring several awesome benefits:
- No more README truncating! Good-bye, arbitrary restrictions!
- Rendering READMEs ahead of time and serving them statically instead of parsing them on request. (Yes, we cache, but still…)
- READMEs for all versions of a package! By lowering the cost of READMEs, we can not only parse more of a single README, but parse more READMEs, too! :)
npm cares deeply about backwards compatibility, so all of the original endpoints and functionality of our original API will continue to be supported as the npm registry grows out of its CouchApp and CouchDB origins. This means there will always be a service where you can request a package’s metadata and get the README for the latest version. However, npm itself doesn’t have to use that service. Moving on from it towards our vision of registry version 3 will be an awesome improvement, across several axes.
systems as designed are great, but systems as found are awful
This is not a shot at npm; this statement is pretty ubiquitously true. Most systems that are of any interest to anyone are the products of a long and likely complicated history of constraints and motivations, and such circumstances often produce strange results. As displeasing as the systems you find might be, there is still a pleasure in finding out how a system “works” (for certain values of “work,” of course).
In the end, the “fix” for the “bug” was “we’ve got a plan for that, but it’s gonna take a while.” That isn’t all that satisfying. However, the process of tracking down a seemingly simple element of the npm registry system and exploring it across services and time was extremely rewarding.
In fact, in the process of writing this post I became aware that Crates.io, the website for the Rust Programming Language’s package manager Cargo, was dealing with a very similar situation regarding their package READMEs. Instead of trying to remove READMEs from their package metadata like us, they’re considering putting them in! If I hadn’t had the opportunity to dig around in the internals of npm’s registry, I might not have been ready to offer them suggestions with the strength of 5 years of experience.
So — the moral of the story is this: When you can, take the time to dig through the caves of your own software and ask questions about past decisions and lessons. Then, write down what you learn. It might be helpful one day, and probably sooner than you think.
Here’s how we deploy node services at npm:
cd ~/code/exciting-service
git push origin +master:deploy-production
That’s it: git push and we’ve deployed.
Of course, a lot is triggered by that simple action. This blog post is all about the things that happen after we type a git command and press Return.

As we worked on our system, we were motivated by a few guiding principles: deploying and rolling back should be frictionless, each step should be separable, and everything should be repeatable.
Why? We want no barriers to pushing out code once it’s been tested and reviewed, and no barriers to rolling it back if something surprising happens — so any friction in the process should be present before code is merged into master, via a review process, not after we’ve decided it’s good. By separating the steps, we gain finer control over how things happen. Finally, making things repeatable means the system is more robust.
What happens when you do that force-push to the deploy-production branch? It starts at the moment an instance on AWS is configured for its role in life.
We use Terraform and Ansible to manage our deployed infrastructure. At the moment I’m typing, we have around 120 AWS instances of various sizes, in four different AWS regions. We use Packer to pre-bake an AMI based on Ubuntu Trusty with most of npm’s operational requirements, and push it out to each AWS region.
For example, we pre-install a recent LTS release of node as well as our monitoring system onto the AMI. This pre-bake greatly shortens the time it takes to provision a new instance. Terraform reads a configuration file describing the desired instance, creates it, adds it to any security groups needed and so on, then runs an Ansible playbook to configure it.
Ansible sets up which services a host is expected to run. It writes a rules file for the incoming webhooks listener, then populates the deployment scripts. It sets up a webhook on GitHub for each of the services this instance needs to run. Ansible then concludes its work by running all of the deploy scripts for the new instance once, to get its services started. After that, it can be added to the production rotation by pointing our CDN at it, or by pointing other processes to it through a configuration change.
This setup phase happens less often than you might think. We treat microservices instances as disposable, but most of them are quite long-lived.
So our new instance, configured to run its little suite of microservices, is now happily running. Suppose you then do some new development work on one of those microservices. You make a pull request to the repo in the usual way, which gets reviewed by your colleagues and tested on Travis. You’re ready to run it for real!
You do that force-push to deploy-staging, and this is what happens: A reference gets repointed on the GitHub remote. GitHub notifies a web hooks service listening on running instances. This webhooks service compares the incoming hook payload against its configured rules, decides it has a match, & runs a deploy script.
Our deploy scripts are written in bash, and we’ve separated each step of a deploy into a separate script that can be invoked on its own. We don’t just invoke them through GitHub hooks! One of our Slack chatbots is set up to respond to commands to invoke these scripts on specific hosts. Here’s what they do:
Each step reports success or failure to our company Slack so we know if a deploy went wrong, and if so at which step. We emit metrics on each step as well, so we can annotate our dashboards with deploy events.
We name our deploy branches deploy-foo, so we have, for instance, deploy-staging, deploy-canary, and deploy-production branches for each repo, representing each of our deployment environments. Staging is an internal development environment with a snapshot of production data but very light load and no redundancy. Canary hosts are hosts in the production line that only take a small percentage of production load, enough to shake out load-related problems. And production is, as you expect, the hosts that take production traffic.
Every host runs a haproxy, which does load balancing as well as TLS termination. We use TLS for most internal communication among services, even within a datacenter. Unless there’s a good reason for a microservice to be a singleton, there are N copies of everything running on each host, where N is usually 4.
When we roll services, we take them out of haproxy briefly using its API, restart, then wait until they come back up again. Every service has two monitoring hooks at conventional endpoints: a low-cost ping and a higher-cost status check. The ping is tested for response before we put the service back into haproxy. A failure to come back up before a timeout stops the whole roll on that host.
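In shell terms, one roll step looks roughly like the following. This is only a sketch: the backend name, socket path, upstart job name, port, and ping path are all hypothetical, and the real scripts also enforce the timeout described above.
echo 'disable server web_back/exciting-service-0' | socat stdio /var/run/haproxy.sock
restart exciting-service-0
# wait for the low-cost ping endpoint before putting the process back in rotation
until curl -sf http://localhost:6000/_ping; do sleep 1; done
echo 'enable server web_back/exciting-service-0' | socat stdio /var/run/haproxy.sock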
You’ll notice that we don’t do any cross-host orchestration. If a deploy is plain bad and fails on every host, we’ll lose at most 1 process out of 4, so we’re still serving requests (though at diminished capacity). Our Slack operational incidents channel gets a warning message when this happens, so the person who did the deploy can act immediately. This level of orchestration has been good enough thus far when combined with monitoring and reporting in Slack.
You’ll also notice that we’re not doing any auto-scaling or managing clusters of containers using, e.g., Kubernetes or CoreOS. We haven’t had any problems that needed to be solved with that kind of complexity yet, and in fact my major pushes over the last year have been to simplify the system rather than add more moving parts. Right now, we are more likely to add copies of services for redundancy reasons than for scaling reasons.
Configuration is a perennial pain. Our current config situation is best described as “less painful than it used to be.”
We store all service configuration in an etcd cluster. Engineers write to it with a command-line tool, then a second tool pulls from it and writes configuration at deploy time. This means config is frozen at the moment of deploy, in the upstart config. If a process crashes & restarts, it comes up with the same configuration as its peers. We do not have plans to read config on the fly. (Since node processes are so fast to restart, I prefer killing a process & restarting with known state to trying to manage all state in a long-lived process.)
Each service has a configuration template file that requests the config data it requires. This file is in TOML format for human readability. At deploy time the script runs & requests keys from etcd, namespaced by the config value, the service requesting the config, and the configuration group of the host. This lets us separate hosts by region or by cluster, so we can, for example, point a service at a Redis in the same AWS data center.
Here’s an example:
> furthermore get /slack_token/
slack_token matches:
/slack_token == xoxb-deadbeef
/slack_token.LOUDBOT == XOXB-0DDBA11
/slack_token.hermione == xoxb-5ca1ab1e
/slack_token.korzybski == xoxb-ca11ab1e
/slack_token.slouchbot == xoxb-cafed00d
Each of our chatbots has a different Slack API token stored in the config database, but in their config templates they need only say they require a variable named slack_token[1].
These config variables are converted into environment variable specifications or command-line options in an upstart file, controlled by the configuration template. All config is baked into the upstart file and an inspection of that file tells you everything you need to know.
Here’s LOUDBOT’s config template:
app = "LOUDBOT"
description = "YELL AND THEN YELL SOME MORE"
start = "node REAL_TIME_LOUDIE.js"
processes = 1
[environment]
SERVICE_NAME = "LOUDBOT"
SLACK_TOKEN = "{{slack_token}}"
And the generated upstart file:
# LOUDBOT node 0
description "YELL AND THEN YELL SOME MORE"
start on started network-services
stop on stopping network-services
respawn
setuid ubuntu
setgid ubuntu
limit nofile 1000000 1000000
script
cd /mnt/deploys/LOUDBOT
SERVICE_NAME="LOUDBOT" \
SLACK_TOKEN="XOXB-0DDBA11" \
node REAL_TIME_LOUDIE.js \
>> logs/LOUDBOT0.log 2>&1
end script
This situation is vulnerable to the usual mess-ups: somebody forgets to override a config option for a cluster, or to add a new config value to the production etcd as well as to the staging etcd. That said, it’s at least easily inspectable, both in the db and via the results of a config run.
The system I describe above is sui generis, and it’s not clear that any of the components would be useful to anybody else. But our habit as an engineering organization is to open-source all our tools by default, so everything except the bash scripts is available if you’d find it useful. In particular, furthermore is handy if you work with etcd a lot.
[1] The tokens in this post aren’t real. And, yes, LOUDBOT’s are always all-caps.
Today, Facebook announced that they have open sourced Yarn, a backwards-compatible client for the npm registry. This joins a list of other third-party registry clients that include ied, pnpm, npm-install and npmd. (Apologies if we missed any.) Yarn’s arrival is great news for npm’s users worldwide and we’re happy to see it.
Like other third-party registry clients, Yarn takes the list of priorities that our official npm client balances, and shifts them around a little. It also solves a number of problems that Facebook was encountering using npm at their unique global scale. Yarn includes another take on npm’s shrinkwrap feature and some clever performance work. We’ve also been working on these specific features, so we’ll be paying close attention.
Mostly! We haven’t had time to run extensive tests on the compatibility of Yarn, but it seems to work great with public packages. It does not authenticate to the registry the way the official client does, so it’s currently unable to work with private packages. The Yarn team is aware of this issue and has said they’ll address it.
Whenever a big company gets involved in an open source project, there’s some understandable anxiety from the community about its intentions.
Yarn publishes to npm’s own registry by default, so Yarn users continue to be part of the existing community and benefit from the same 350,000+ packages as users of the official npm client. Yarn pulls packages from registry.yarnpkg.com, which allows them to run experiments with the Yarn client. This is a proxy that pulls packages from the official npm registry, much like npmjs.cf.
Like so many other companies around the world, Facebook benefits from the united open source JavaScript community on npm.
As I said at the start, we’re happy to see Yarn join the ranks of open source npm clients. This is how open source software is supposed to work!
The developers behind Yarn — Seb, James, Christoph, and Konstantin — are prolific publishers of npm packages and pillars of the npm community.
Through their efforts, Facebook and others have put a lot of developer time into this project to solve problems they encountered. Sharing the fruits of their labor will allow ideas and bugfixes to flow back and forth between npm’s official client and all the others. Everyone benefits as a result.
Yarn also shows that one of the world’s largest tech companies, which is already behind hugely popular JavaScript projects like React, is invested in and committed to the ongoing health of the npm community. That’s great news for JavaScript devs everywhere.
We’re pleased to see Yarn get off to such a great start, and look forward to seeing where it goes.
From its inception, npm has been keenly focused on open source values. As we’ve grown as a company, however, we’ve learned the important lesson that making source code available under an open license is the bare minimum for open source software. To take it even further, we’ve also learned that “open source” doesn’t necessarily mean community-driven. With these insights in mind, the web team has decided to make some changes to the community interface of npm’s website — with the goal of creating a more efficient and effective experience for everyone involved.
- npm/newww is being retired and made private
- npm/www has been created for new issues and release notes
npm/newww
As you may (or may not!) have noticed, the repo that used to house npm’s website (npm/newww) isn’t in sync with the production website (http://www.npmjs.com).
A few months back, the team made the executive decision to close-source the npm website, for several reasons.
This was a super tough call, and there were strong arguments from both sides. In the end, though, the team reached a unified understanding that this was both the best call for the company and for the community. The repo will be officially shutting down tomorrow, Friday, July 29, 2016.
One of the things we’re aware of is that many in the Node community were using the website as an example repo for using the Hapi framework. While we’re completely flattered by this, we honestly don’t believe the codebase is currently in a state to serve that role — it’s a katamari of many practices over many years rolled into one right now!
That being said, we do care about sharing our work with the world, and we intend (and are excited!) to publish many of the website’s components as packages that will be open sourced and reusable.
npm/www
In place of the npm/newww repo, we’ve created npm/www! The goal of this repo is to give the community a place to collaborate with the web team on the website.
While the source code for the website will no longer be available, the hope is that this new repo can be a more effective way to organize and respond to the needs the community has. We’re super excited to hear your thoughts, questions, and concerns — head over to npm/www now so we can start collaborating!
I don’t want to bury the lede, so here it is: npm has a new CTO, and her name is CJ Silverio. My title is changing from CTO to COO, and I will be taking over a range of new responsibilities, including modeling our business, defining and tracking our metrics, and bringing that data to bear on our daily operations, sales, and marketing. This will allow Isaac to concentrate more on defining the product and strategy of npm as we continue to grow.
CJ will be following this post with a post of her own, giving her thoughts about her new role. I could write a long post about how awesome CJ is — and she is awesome — but that wouldn’t achieve much more than make her embarrassed. Instead I thought I’d take this chance to answer a question I get a lot, before I forget the answer:
The answer is that it depends on the needs of the company. npm has grown from 3 people to 25 in the past 2.5 years, and in that time my job changed radically from quarter to quarter. Every time I got the hang of the job, the needs of the company would shift and I found myself doing something new. So this is my list of some of the things a CTO might do. Not all of them are a good idea, as you’ll see. The chronological order is an over-simplification: I was doing a small piece of all of these tasks all the time, but each quarter definitely had a focus, so that’s where I’ve described each one.
Started this quarter: CJ, Raquel.
npm Inc had a bumpy launch: the registry was extremely unstable, because it was running on insufficient hardware and had not been architected for high uptime. Our priority was to get the registry to stay up. I was spinning up hardware by hand, without the benefit of automation. By April we had found the hot spots and mostly met the load, but CJ was the first person to stridently make the case that we had to automate our way out of this. I handed operations to her.
Started: Ben, Forrest, Maciej.
Once the fires were out, we could finally think about building products, and we had a choice: do we build a paid product on top of the current (highly technically indebted) architecture, or build a new product and architecture? We decided on a new, modular architecture that we could use to build npm Enterprise first, and then extend later to become “Registry 2.0”. Between recruitment, management, and other duties, I discovered by the end of the quarter that it was already impossible to find time to write code.
This was the quarter we dug in and built npm Enterprise. My job became primarily that of an engineering manager: keeping everybody informed about what everybody else was up to, assigning tasks, deciding priorities of new work vs. sustaining and operational work, and handling the kind of interpersonal issues which every growing company experiences. I found I was relying on CJ a lot when solving these kinds of problems.
Started: Rebecca
With npm Enterprise delivered to its first customer, we started learning how to sell it. I went to conferences, gave talks, went to meetings and sales calls, wrote documentation and blog posts, and generally tried to make noise. I was never particularly good at any of this, so I was grateful when Ben took over npm Enterprise as a product, which started around this time.
In February 2014 I had written the system that to this day serves our download counts, but we were starting a process of raising our series A, and that data wasn’t good enough. I dredged up my Hadoop knowledge from a previous job and started crunching numbers, getting new numbers we hadn’t seen before, like unique IP counts and other trends. This is one job I’m keeping as I move to COO, since measuring these metrics and optimizing them is a big part of my new role.
Started: Ernie, Ryan
We’d been hiring all the time, of course, but we closed our series A in Q1 2015, so there was a sudden burst of recruitment at this time; most of those hires didn’t actually start until the next quarter. By the end of this process we’d hired so many people that I never had to do recruitment again: the teams were now big enough to interview and hire their own people.
Started: Kat, Stephanie, Emily, Jeff, Chris, Jonathan, Aria, Angela
With so many new people, we had a sudden burst of momentum, and it became necessary for the first time to devote substantial effort to planning “what do we do next?” Until this point the next move had been obvious: put out the fire, all hands on deck. Now we had enough people that some of them could work on longer-term projects, which was good news, but meant we had to pull our heads up and think about the longer term. To accomplish this, I handed management of nearly all the engineering team to CJ, who became VP of engineering.
Started: Ashley, Andrea, Andrew (yes, it was confusing)
We had already launched npm Private Modules, a single-user product, but it hadn’t really taken off. We were sure we knew why: npm Organizations, a product for teams, was demanded by nearly everybody. It was a lot more complicated, and with more people there was a lot more coordination to do, so I started doing the kind of time, tasks, and dependency management of a project manager. I will be the first to admit that I was not particularly good at it, and nobody was upset when I mostly gave this task to Nicole the following quarter. We launched Orgs in November, and it was an instant hit, becoming half of npm’s revenue by the end of the year.
Started: Nicole, Jerry
Now with two product lines and a bunch of engineers, fully defining what the product should do (or not do), and what the next priority was, became critical. Isaac was too busy CEO’ing to do this, so he gave it to the most available person: me. This was not a huge success, partly because I was still stuck in project management mode, which is a very different thing, and partly because I’m just not as creative as Isaac when it comes to product. Everybody learned something, even if it was “Laurie isn’t very good at this”.
Started: Kiera
Isaac’s baby was born on April 1st (a fact that, combined with his not having mentioned they were even expecting a baby, led many people to assume at first that his announcement of parenthood was a joke). He went on parental leave for most of Q2, so I took over as interim CEO. CJ, already VP of eng, effectively started being CTO at this time.
When Isaac came back from parental leave, we’d learned some things: I had, of necessity, handled the administrative and executive functions of a CEO for a quarter. CJ had handled those of a CTO. We now had two people who could be CTO, and one overloaded CEO with a talent for product. The course of action was obvious: Isaac handed over everything he could that was not-product to me, to focus on product development, while I handed over CTO duties to CJ. We needed a title for “CEO stuff that isn’t product” and picked COO mostly because it’s a title people recognize.
You’ll notice a common thread, which is that as I moved to new tasks I was mostly handing them to CJ. Honestly, it was pretty clear to me from day 1 that CJ was just as qualified to be CTO as I was, if not more — she has an extra decade’s worth of experience on me and is a better engineer to boot. The only thing she lacked was the belief that she could, and over the last two and a half years it has been a pleasure watching her confidence grow as she’s mastered every new challenge I put in front of her, and more than a little funny watching her repeatedly express surprise at her ability to do all these things. It’s been like slowly persuading an amnesiac Clark Kent that he is, in fact, Superman.
I’ve often referred to CJ as npm’s secret weapon. Well, now the secret is out. npm has the best CTO I could possibly imagine, and I can’t wait to see what she does next.
Earlier today, July 6, 2016, the npm registry experienced a read outage for 0.5% of all package tarballs for all network regions. Not all packages and versions were affected, but the ones that were affected were completely unavailable during the outage for any region of our CDN.
The unavailable tarballs were offline for about 16 hours, from mid-afternoon PDT on July 5 to early morning July 6. All tarballs should now be available for read.
Here’s the outage timeline:
Over the next hour, 502 rates fell back to their normal level of zero.
We’re adding an alert on all 500-class status codes, not just 503s. This alert will catch the category of errors, not simply this specific problem.
We’re also revising our operational playbook to encourage more frequent examination of our CDN logs; we could have caught this problem soon after introducing it if we had verified that our guess about the source of the 502s actually made them vanish from our CDN logging. We can also do better with tools for examining the patterns of errors across POPs, which would have made it immediately clear that the error was not specific to the US East coast and was therefore unlikely to have been caused by an outage in our CDN.
Read on if you would like the details of the bug.
The root cause for this outage was an interesting interaction of file modification time, nginx’s method of generating etags, and cache headers.
We recently examined our CDN caching strategies and learned that we were not caching as effectively as we might, because of a property of nginx. Nginx’s etags are generated using the file modification time as well as its size, roughly as mtime + '-' + the file size in bytes (both in hexadecimal). This meant that if mtimes for package tarballs varied across our nginx instances, our CDN would treat the files from each server as distinct, and cache them separately. Getting the most from our CDN’s caches and from our users’ local tarball caches is key to good performance on npm installs, so we took steps to make the etags match across all our services.
Our chosen scheme was to set each tarball’s file modification time to the first 32-bit big-endian integer of its md5 hash. This was entirely arbitrary but looked sufficient after testing in our staging environment: we produced consistent etags. Unfortunately, the script that applied this change to our production environment failed to clamp the resulting integer, resulting in negative numbers for timestamps. Ordinarily, this would just result in the infamous Dec 31, 1969 date one sees for timestamps before the Unix epoch.
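To make the mistake concrete, here’s a shell sketch of the scheme (the production tooling wasn’t necessarily bash, and writing a pre-epoch mtime like this assumes GNU touch; the point is the missing clamp):
hex=$(md5sum package.tgz | cut -c1-8)
secs=$((16#$hex))
# correct: clamp (or keep treating the value as unsigned) so it stays non-negative
# what happened instead: the value was read as a signed 32-bit int,
# so anything at or above 0x80000000 wrapped around to a negative number
[ "$secs" -ge 2147483648 ] && secs=$((secs - 4294967296))
touch -d "@$secs" package.tgz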
Unfortunately, negative mtimes triggered an nginx bug. Nginx will serve the first request for a file in this state and deliver the negative etag. However, if that negative etag comes back in an If-None-Match header, nginx attempts to serve a 304 but never completes the request. This resulted in the bad gateway message returned by our CDN to users attempting to fetch a tarball with the bad mtime.
You can observe this behavior yourself with nginx and curl:
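Something like the following reproduces it; the file path, port, and ETag value are hypothetical, and you need nginx serving a file whose mtime has been forced negative (as with GNU touch):
touch -d @-1234567 /srv/www/some-tarball.tgz
curl -sI http://localhost:8080/some-tarball.tgz
# copy the ETag from the response headers and send it back:
curl -v -H 'If-None-Match: "-12d687-2000"' http://localhost:8080/some-tarball.tgz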
The final request never completes even though nginx has correctly given it a 304 status.
Because this only affected a small subset of tarballs, not including the tarball fetched by our smoketest alert, all servers remained in the pool. We have an alert on above-normal 503 error rates served by our CDN, but this error state produced 502s and was not caught.
All the tarballs that were producing a 502 Bad Gateway error turned out to have negative timestamps in their file mtimes. The fix was to touch them all so their times were inconsistent across our servers but valid, thus both busting our CDN’s cache and dodging the nginx behavior.
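A sketch of that cleanup, assuming GNU find and a directory full of tarballs (the path is hypothetical):
# anything not newer than the epoch has a pre-epoch (negative) mtime; give it a fresh, valid one
find /srv/tarballs -type f ! -newermt '1970-01-01 00:00:00 UTC' -exec touch {} +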
The logs from our CDN are invaluable, because they tell us what quality of service our users are truly experiencing. Sometimes everything looks green on our own monitoring, but it’s not green from our users’ perspective. The logs are how we know.
Today, npm Enterprise becomes hugely more extensible and powerful with the release of npm Enterprise add-ons.
It’s now possible to integrate third-party developer tools directly into npm Enterprise. This has the power to combine what were discrete parts of your development workflow into a single user experience, and to knock out the barriers that stand in the way of bringing open source development’s many-small-reusable-parts methodology into larger organizations.
npm Enterprise now exposes an API that allows third-party developers to build on top of the product.
With this deceptively simple functionality, developers can offer a huge amount of value to enrich the process of using npm within the enterprise.
Enterprise developers already want to take advantage of the same code discovery, re-use, and collaboration enjoyed by millions of open source developers, billions of times every month. But this requires accommodating their companies’ op-sec, licensing, and code quality processes, which often predate the modern era.
For example…
In the past, it was possible to manually research the security implications of external code. But with the average npm package relying on over 100 dependencies and subdependencies, this process just doesn’t scale.
Without a way to ensure the security of each package, a company can’t take advantage of open source code.
Software that is missing a license, or that’s governed by a license unblessed by a company’s legal department, simply can’t be used at larger companies. Much like security screening, many companies have relied upon manually reviewing the license requirements of each piece of external code. And just like security research, trying to manually confirm the licensing of every dependency (and their dependencies, and their dependencies…) is impossible to scale.
Enterprise developers need a way to understand the license implications of packages they’re considering using, and companies need a way to certify that all of their projects are legally kosher.
Will bug reports be patched quickly? Is the code written well? Do packages rely on stale or abandoned dependencies? These questions demand answers before an enterprise can consider relying on open source code.
Without a way to quantitatively analyze the quality of every code package in a project, many enterprise teams simply don’t adopt open source code or workflows for mission-critical projects.
Our three launch partners, Node Security Platform, FOSSA, and bitHound, address these concerns, respectively.
You can learn about the specifics of each of them here:
By integrating them directly into the tool that enterprise developers use to browse and manage packages, we make it as easy as possible to scratch enterprise development’s specific itches. As more incredible add-ons join the platform, the barriers to open source-style development at big companies get knocked down, one by one.
The Node Security Platform, FOSSA, and bitHound add-ons are available to existing npm Enterprise customers today. Simply contact us at [email protected] to get set up.
If you’re looking to bring npm Enterprise and add-ons into your enterprise, let us show you how easy it is with a free 30-day trial.
Interested in building your own add-on? Awesome. Stay tuned: API documentation is on its way.
The movement to bring open source code, workflows, and tools into the enterprise is called InnerSource, and it’s the beginning of a revolution.
When companies develop proprietary code the same way communities build open source projects, then the open source community’s methods and tooling become the default way to build software.
Everyone stands to benefit from InnerSource because everyone stands to benefit from building software the right way: open source packages see more adoption and community participation, companies build projects faster and cheaper without re-inventing wheels, and developers are empowered to build amazing things.
Add-ons are an exciting step forward for us. We’re thrilled you’re joining us.
When using npm Enterprise, we sometimes encounter public packages in our private registry that need to fetch resources from the public internet when being installed by a client via npm install.
Unfortunately, this poses a problem for developers who work in an environment with limited or no access to the public internet.
Let’s take a look at some of the more common types of problems in this area and talk about ways we can work around them.
Note that these problems are not specific to npm Enterprise — but to using certain public packages in any limited-access environment. That being said, there are some things that npm (as an organization and software vendor) can do to better prevent or handle some of these problems. We’re still working to make these improvements.
Typically, developers will discover the problem when installing packages from their private registry. When this happens, we need to determine the type of problem it is and where in the dependency graph the problematic dependency resides.
Here are some common problem types:
Git repo dependency
This is when a package dependency is listed in a package.json file with a reference to a Git repository instead of with a semver version range. Typically these point to a particular branch or revision in a public GitHub or Bitbucket repository. They are mainly used when the package contents have not been published to the public npm registry.
When the npm client encounters these, it attempts to fetch the package from the Git repository directly, which is a problem for folks who do not have network access to the repository.
Shrinkwrapped package
This is when the internal contents of a package contain an npm-shrinkwrap.json file that lists a specific version and URL to use for each mentioned package from the dependency tree.
During a normal npm install, the npm client attempts to fetch the dependencies listed in npm-shrinkwrap.json directly from the URLs contained in the file. This poses a problem when the client installing the shrinkwrapped package does not have access to the URLs that the shrinkwrap author has access to.
Package with install script or node-gyp dependency
This is when a package attempts to defer some setup process until the package is installed, using a script defined in package.json, which typically involves building platform-specific binaries or Node add-ons on the client’s machine.
On a typical install, the npm client will find and run these scripts in order to automatically fetch and build the required resources, targeting the platform that the client is running on. But when limited internet access means the necessary resources cannot be fetched, the install will fail. Most likely the package will be unusable until the end result of running the install script on the client’s machine is achieved.
To determine the location of the problematic dependency, we can boil it down to two categories:
Direct dependency
A direct dependency is one that is explicitly listed in your own package.json file — a dependency that your project/package uses directly in code or in an npm run script.
Transitive dependency
A transitive dependency is one that is not explicitly listed in your own package.json file — a dependency that comes from anywhere in the tree of your direct dependencies’ dependencies.
Just as publishing a package to the public registry requires access to the public internet, most of these solutions require internet access, at least on a temporary basis. Once the solution is in place, access to public resources can be restricted again.
For starters, remember that it’s generally a good idea to use the latest version of the npm client. To install or upgrade to the latest version, regardless of what version of Node you have installed, run npm i -g npm@latest (and make sure npm -v prints the version that was installed).
Let’s go over the problem types in more detail.
Unfortunately, a dependency that references a Git repository (instead of a semver range for a published package) must be replaced with a published package. To do this, you’ll need to first publish the Git repository as a package to your npm Enterprise registry and then fork the project with the Git dependency and replace the dependency with the package you published. Then, publish the forked project, and use that package as a dependency (instead of the original).
It’s usually a good idea to open an issue on the project with the Git dependency, politely asking the maintainers to replace the Git dependency, if possible. Generally, we discourage using Git dependencies in package.json, and it’s typically only used temporarily while a maintainer waits for an upstream fix to be applied and published.
Example: let’s replace the "grunt-mocha-istanbul": "christian-bromann/grunt-mocha-istanbul" Git dependency defined in version 4.0.4 of the webdriverio package, assuming that webdriverio is a direct dependency and grunt-mocha-istanbul is a transitive dependency.
We’ll tackle this in two main steps: forking and publishing the transitive dependency, and forking and publishing the direct dependency.
Clone the project that is referenced by the Git dependency
Optionally, you can create a remote fork first (e.g., in GitHub or Bitbucket) and then clone your fork locally. Otherwise, you can just clone/download the project directly from the remote repository. It’s a good idea to use source control so you can keep a history of your changes, but you could also probably get away with downloading and extracting the project contents.
Example:
git clone https://github.com/christian-bromann/grunt-mocha-istanbul.git
Create a new branch to hold your customizations
Again, this is so you can keep a history of your changes. It’s probably a good idea to include the current version of the package in the branch name, in case you need to repeat these steps when a later version is available.
Example:
cd grunt-mocha-istanbul
git checkout -b myco-custom-3.0.1
Add your scope to the package name in package.json
In our example, change "grunt-mocha-istanbul" to "@myco/grunt-mocha-istanbul".
Commit your changes to your branch and publish the scoped package to your npm Enterprise registry
Assuming you have already configured npm to associate your scope to your private registry, publishing should be as simple as npm publish.
Example:
git add package.json
git commit -m 'add @myco scope to package name'
npm publish
Clone the project’s source code locally
Either create a remote fork first (e.g., in GitHub or Bitbucket) and clone your fork locally, or just clone/download the project directly from the original remote repository. It’s a good idea to use source control so you can keep a history of your changes.
Example:
git clone https://github.com/webdriverio/webdriverio.git
Create a new branch to hold your customizations
This is so you can keep a history of your changes. It’s probably a good idea to include the current version of the package in the branch name, in case you need to repeat these steps when a later version is available.
Example:
cd webdriverio
git checkout -b myco-custom-4.0.4
Add your scope to the package name in package.json
In our example, change "webdriverio" to "@myco/webdriverio".
Replace the Git dependency with the scoped package
This means updating the reference in package.json, and it may mean updating require() or import statements too. You should basically do a find-and-replace, finding the unscoped package name and judiciously replacing it with the scoped package name.
In our example, we only need to update the reference in package.json from "grunt-mocha-istanbul": "christian-bromann/grunt-mocha-istanbul" to "@myco/grunt-mocha-istanbul": "^3.0.1".
Commit your changes to your branch and publish the scoped package to your npm Enterprise registry
Assuming you’ve already configured npm to associate your scope to your private registry, publishing should be as simple as npm publish.
In our example of webdriverio, we next need to deal with the shrinkwrap URLs before we can publish (handled below). In other scenarios, it may be possible to publish now.
Example:
git add .
git commit -m 'replace git dep with scoped fork'
npm publish
Update your downstream project(s) to use the scoped package as a direct dependency (in package.json and in any require() statements)
In our example, this basically means doing a find-and-replace to find references to webdriverio and judiciously replace them with @myco/webdriverio. However, webdriverio also contains an npm-shrinkwrap.json file. We’ll cover that in the next section.
It just so happens that our sample direct dependency above (webdriverio) also uses an npm-shrinkwrap.json file to pin certain dependencies to specific versions. Unfortunately the shrinkwrap file contains hardcoded URLs to the public registry. We need a way to either ignore or fix the URLs.
A quick workaround is to install packages using the --no-shrinkwrap flag. This will tell the npm client to ignore any shrinkwrap files it finds in the package dependency tree and, instead, install the dependencies from package.json in the normal fashion.
This is considered a workaround rather than a long-term solution: it’s possible that installing from package.json will install versions of dependencies that don’t exactly match the ones listed in npm-shrinkwrap.json, even though the versions of the package’s direct dependencies are guaranteed to be within the declared semver range.
Example:
npm install webdriverio --no-shrinkwrap
(As noted above, webdriverio@4.0.4 also has a Git dependency, so just ignoring the shrinkwrap isn’t quite enough for this package.)
If you want to use the exact versions from the shrinkwrap file without using the URLs in it, you’ll have to use your own custom fork of the project that contains a modified shrinkwrap file.
Here’s the general idea:
(Note that steps 1-3 are identical to the fork-publish instructions for a direct dependency above. If you’ve already completed them, skip to step 4.)
Clone the project’s source code locally
Either create a remote fork first (e.g., in GitHub or Bitbucket) and clone your fork locally, or just clone/download the project directly from the original remote repository. It’s a good idea to use source control so you can keep a history of your changes.
Example:
git clone https://github.com/webdriverio/webdriverio.git
Create a new branch to hold your customizations
This is so you can keep a history of your changes. It’s probably a good idea to include the current version of the package in the branch name, in case you need to repeat these steps when a later version is available.
Example:
cd webdriverio
git checkout -b myco-custom-4.0.4
Add your scope to the package name in package.json
In our example, change "webdriverio" to "@myco/webdriverio".
Use rewrite-shrinkwrap-urls to modify npm-shrinkwrap.json, pointing the URLs to your npm Enterprise registry
Unfortunately this is slightly more complicated than a find-and-replace, since the tarball URL structure of the public registry is different than the one used for an npm Enterprise private registry.
In the example below, replace {your-registry} with the base URL of your private registry, e.g., https://npm-registry.myco.com or http://localhost:8080. The value you use should come from the Full URL of npm Enterprise registry setting in your Enterprise admin UI Settings page.
Example:
npm install -g rewrite-shrinkwrap-urls
rewrite-shrinkwrap-urls -r {your-registry}
git diff npm-shrinkwrap.json
Commit your changes to your branch and publish the scoped package to your npm Enterprise registry
Assuming you’ve already configured npm to associate your scope to your private registry, publishing should be as simple as npm publish.
Be mindful of any prepublish or publish scripts that may be defined in package.json. You can try skipping those scripts when publishing via npm publish --ignore-scripts, but running the scripts may be necessary to put the package into a usable state, e.g., if source transpilation is required.
Example:
git add npm-shrinkwrap.json package.json
git commit -m 'add @myco scope to package name' package.json
git commit -m 'rewrite shrinkwrap urls' npm-shrinkwrap.json
npm publish
Note that a prepublish script will probably need to install the package’s dependencies in order to run. In this case, npm install will be executed first. If this happens, it should pull all dependencies in the shrinkwrap file from your registry. If any of those packages don’t yet exist in your registry, you’ll need either to enable the Read Through Cache setting in your Enterprise instance or to manually add the packages to the white-list by running npme add-package webdriverio from your server’s shell and answering Y at the prompt to add dependencies.
Update your downstream project(s) to use the scoped package as a direct dependency (in package.json and in any require() statements)
In our example, this basically means doing a find-and-replace to find references to webdriverio and judiciously replace them with @myco/webdriverio.
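A low-tech way to find every reference that needs updating (review each match by hand rather than replacing blindly; the file globs are just a suggestion):
grep -rn --include=package.json --include='*.js' 'webdriverio' .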
This whole process is less than ideal, obviously. We’re currently considering ways to improve handling of shrinkwrapped packages on the server side, but a better solution is not yet available.
Some packages want or need to run some script(s) on installation in order to build platform-specific dependencies or otherwise put the package into a usable state. This approach means that a package can be distributed as platform-independent source without having to prebundle binaries or provide multiple installation options.
Unfortunately this also means that these packages typically need access to the public internet in order to fetch required resources. In these cases, we can’t really do much to work around this approach, other than attempting to separate the step of fetching the package from the registry from the step of setting up the platform-specific resources it needs.
As a quick first attempt, you can ignore lifecycle scripts when installing packages via npm install {pkg-name} --ignore-scripts.
Unfortunately, install scripts typically do some sort of platform-specific setup to make the package usable. Thus, you should review the install or postinstall scripts from the package’s package.json file and determine if you need to attempt to run them separately or somehow achieve the same result manually.
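You can inspect those scripts without installing anything. For example (node-sass is just a well-known package with a native build step):
npm view node-sass scripts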
When node-gyp is involved in the setup process, the package requires platform-specific binaries to be built and plugged into the Node runtime on the client’s system. In order to build the binaries, the package will typically need to fetch source header files for the Node API.
The best we can do is attempt to set up the node-gyp build toolchain manually. This requires Python and a C/C++ compiler. You can read more about this at the following locations:
General installation: https://github.com/nodejs/node-gyp#installation
Windows issues: https://github.com/nodejs/node-gyp/issues/629
A good example of a package with a node-gyp dependency is node-sass.
Once the build toolchain is in place, the package’s install script may not need to fetch any external resources.
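As a hedged sketch, setting up the toolchain on a Debian/Ubuntu build host might look like the following; package names and paths vary by platform, so treat this as a starting point and consult the node-gyp installation docs linked above:
# install Python and a C/C++ compiler toolchain (Debian/Ubuntu package names assumed)
sudo apt-get install -y python build-essential
# tell npm/node-gyp which Python to use
npm config set python /usr/bin/python
# retry the install now that the toolchain is available
npm install node-sass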
If you’ve made it all the way to the end, surely you’ll agree that npm could be handling things better to minimize challenges faced by folks with restricted internet access. We feel it’s in the community’s best interest to at least raise awareness of these problems and their potential workarounds until we can get a more robust solution in place.
If you have feedback or questions, as always, please don’t hesitate to let us know.
Today, we’re excited to announce a simple, powerful new way to track changes to the npm registry — and build your own amazing new developer tools: hooks.
Hooks are notifications of npm registry events that you’ve subscribed to. Using hooks, you can build integrations that do something useful (or silly) in response to package changes on the registry.
Each time a package is changed, we’ll send an HTTP POST payload to the URI you’ve configured for your hook. You can add hooks to follow specific packages, to follow all the activity of given npm users, or to follow all the packages in an organization or user scope.
For example, you could watch all packages published in the @npm scope by setting up a hook for @npm. If you wanted to watch just lodash, you could set up a hook for lodash.
If you have a paid individual or organizational npm account, you can start using hooks right now.
Each user may configure a total of 100 hooks, and how you use them is up to you: you can put all 100 on a single package, or scatter them across 100 different packages. If you use a hook to watch a scope, this counts as a single hook, regardless of how many packages are in the scope. You can watch any open source package on the npm registry, and any private package that you control (you’ll only receive hooks for packages you have permission to see).

Create your first hook right now using the wombat cli tool.
First, install wombat the usual way: npm install -g wombat. Then, set up some hooks:
Watch the npm package: wombat hook add npm https://example.com/webhooks shared-secret-text
Watch the @slack organization for updates to their API clients: wombat hook add @slack https://example.com/webhooks but-sadly-not-very-secret
Watch the ever-prolific substack: wombat hook add --type=owner substack https://example.com/webhooks this-secret-is-very-shared
Look at all your hooks and when they were last triggered: wombat hook ls
Protip: Wombat has several other interesting commands. wombat --help will tell you all about them.
We’re also making public an API for working with hooks. Read the docs for details on how you can use the API to manage your hooks without using wombat.

You can use hooks to trigger integration testing, trigger a deploy, make an announcement in a chat channel, or trigger an update of your own packages.
To get you started, here are some of the things we’ve built while developing hooks for you:
npm-hook-receiver: an example receiver that creates a restify server to listen for hook HTTP posts. Source code.
npm-hook-slack: the world’s simplest Slackbot for reporting package events to Slack; built on npm-hook-receiver.
captain-hook: a much more interesting Slackbot that lets you manage your webhooks as well as receive the posts.
wombat: a CLI tool for inspecting and editing your hooks. This client exercises the full hooks API. Source code.
ifttt-hook-translator: Code to receive a webhook and translate it to an IFTTT event, which you can then use to trigger anything else you can do on IFTTT.
citgm-harness: This is a proof-of-concept of how node.js’s Canary in the Gold Mine suite might use hooks to drive its package tests. Specific package publications trigger continuous integration testing of a different project, which is one way to test that you haven’t broken your downstream dependents.

We’re releasing hooks as a beta, and where we take it from here is up to you. What do you think about it? Are there other events you’d like to watch? Is 100 hooks just right, too many, or not enough?
We’re really (really) (really) interested to see what you come up with. If you build something useful (or silly) using hooks, don’t be shy to drop us a line or poke us on Twitter.
This is the tip of an excitement iceberg — exciteberg, if you will — of cool new ways to use npm. Watch this space!
npm ♥ you!
The 283,000 (!) packages in the npm Registry are only useful if it’s easy for developers to integrate them into their projects and deploy their code, so we’re excited by any chance to streamline your workflow.
If you work with Bitbucket, starting today, it’s easier than ever to install and publish npm private packages — to either your npm account or your self-hosted npm Enterprise installation.
Bitbucket Pipelines is a new continuous integration service built into Bitbucket Cloud for end-to-end visibility, from coding to deployment. We’re excited that they’re launching with npm support.
Why?
How?…
To publish to the public npm registry from a Pipelines build:
Use the bitbucket-pipelines.yml supplied in this repository.
Set the NPM_TOKEN environment variable to the auth token found in your ~/.npmrc after you log in to the registry.
To publish to a private registry, such as an npm Enterprise installation:
Use the bitbucket-pipelines.yml supplied in this repository.
Set the NPM_TOKEN environment variable to the auth token found in your ~/.npmrc after you log in to the registry.
Set NPM_REGISTRY_URL to the full URL of your private registry (with scheme).
Alongside the new pipelines integration, the npm for Bitbucket add-on has been updated to support private modules.
This helps complete an elegant CI/CD workflow:
Get started by installing the add-on now.
We have more exciting integrations and improvements in the … pipeline (sorry), but it helps to know what matters to you. Don’t be shy to share feedback in the comments or hit us up on Twitter.
Last month, we released a “one-click” installer for npm Enterprise on AWS. Fresh on its heels, we’re excited to announce support for Google Compute Engine.
Getting npm Enterprise up and running on GCE is easy:
Log in: gcloud auth login
Set your project: gcloud config set project my-project
Set your compute zone: gcloud config set compute/zone us-east1-d
Fetch npm’s configuration template:
curl -XGET https://raw.githubusercontent.com/npm/npme-installer/master/npme-gce.jinja > /tmp/npme-gce.jinja
Run the npm Enterprise deploy template:
gcloud deployment-manager deployments create npme-deployment --config /tmp/npme-gce.jinja --properties="zone=us-east1-d"
Note: You can replace us-east1-d with whatever zone you’d like to deploy to.
When you get back, if you visit your cloud console you’ll see a running server called npm-enterprise. That’s all there is to it!
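If you prefer the command line to the cloud console, a quick sketch of checking on the deployment looks like this:
# list running instances; you should see npm-enterprise
gcloud compute instances list
# inspect the deployment itself
gcloud deployment-manager deployments describe npme-deployment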
You can configure your instance by visiting your new server on port 8800. The npm Enterprise admin UI will point you in the right direction with information on the various settings that you can configure.
It’s our continued goal to make npm Enterprise painless to run, regardless of your infrastructure. Are we missing a platform that you’d love for us to support? Let us know in the comments.
How many npm users are there? It’s a surprisingly tricky question.
There are a little over 211,000 registered npm users, of whom about 73,000 have published packages. But far more people than that use npm: most things npm does do not require you to log in or register. So how many are there, and what are they up to? Our first and most obvious clue is the number of packages they are downloading:
That’s over a billion downloads per week, and that only counts package installs that weren’t already in cache – about 66% of npm package installs don’t require downloading any packages, because they fetch from the cache. Still, the growth is truly, without exaggeration, exponential.
So what’s driving all the downloads? Are the same people building ever-more-complicated applications, downloading more packages to do it? Are the same people running build servers over and over? Or are there actually more people? Our next clue comes from the number of unique IPs hitting the registry:
Here’s a ton of growth again, close to 100% year-on-year, but much more linear than the downloads: 3.1 million IPs hit the registry in March. Of course, IP addresses are not people. Some of these IPs are build servers and other robots. Other IP addresses are companies or educational institutions that serve thousands or even tens of thousands of people. So while it doesn’t correlate perfectly, generally speaking, more IPs means more people are using npm.
Every time npm runs it generates a unique ID that it sends as a header for every request it makes during that run. This ID is random and not tied to you in any way, and once npm finishes running it is deleted. We use this ID for debugging the registry: it lets us see that these 5, 10 or 50 requests were all part of the same operation, which makes it easier to see what’s going on when somebody has a problem. It also makes it possible to say roughly how many times npm is run – or at least, run in a way that makes a request to the registry. There were 84 million npm sessions in March: this number is growing faster than IPs, but less quickly than downloads.
We can take these last two and combine them:
This number is interesting because it’s not going anywhere. The ratio of packages downloaded to npm sessions is essentially constant (this is not the same as the number of packages downloaded per install, because many npm sessions don’t result in downloads). But this is a clear signal: the number of packages per install isn’t rising. Applications aren’t getting notably more complicated; people are installing packages more often because they are writing more applications.
Here’s a final clue:
The number of packages downloaded by an IP is also rising linearly. So, not only are more people using npm, but the people who are already using npm are using it more and more frequently. And then of course there’s this number:
Another way of counting npm users is counting people who visit npm’s website. This also grew enormously; 400% since we started the company. In the last 90 days, npm saw just over 4 million unique users visit our site. Ordinarily, you take web user numbers with a grain of salt – there’s lots of ways they can be wrong. But combined with the IPs, the sessions, and the download numbers, we think that number is probably accurate, maybe even a little conservative.
There are so many sources of error! There are robots who crawl the registry. There are lots of companies who host their own internal registry caches, or run npm Enterprise, and so have their own npm website and registry and never hit ours. There’s the entire nation of China, which finds it difficult to access npm through the great firewall, and is served by our hard-working friends at cnpmjs. There are errors that inflate our numbers, and errors that deflate them. If you think we’re way off, let us know. But we think we have enough for a good guess.
We think there are four million npm users, and we think that number is doubling every year. Over at the node.js foundation, they see similar growth numbers. Not only are there more of them, but they’re more engaged than ever before. This is awesome! The 25 very hard working people of npm Inc. thank you for your participation and your contributions to the community, and we hope to see even more of you.
In a previous blog post we showed you how easy it is to run npm Enterprise on Amazon Web Services. Today, we’re happy to announce the public availability of the npm Enterprise Amazon Machine Image (AMI). Now, it’s even easier to run your own private npm registry and website on AWS!
Using our AMI, there is nothing to install. Just launch an instance, configure it using the npm Enterprise admin web UI, and you’re done: it’s a true point-and-click solution for sharing and managing private JavaScript packages within your company.
Let’s take a quick look at the details.
We have AMIs for several AWS regions. When you launch a new instance in the AWS EC2 Console, find the right one by searching for the relevant AMI ID under the Community AMIs tab. Note that new AMI versions are published about every month and include the date of publication in the AMI name.
Here’s a list of the AMI IDs by region:
us-east-1 (N. Virginia): ami-edd65bfa
us-west-1 (N. California): ami-61db9d01
us-west-2 (Oregon): ami-dc34f6bc
eu-central-1 (Frankfurt): ami-5ec13431
ap-southeast-2 (Sydney): ami-7d32181e
Ensure the AMI comes from owner 666882590071.
If you don’t see your preferred region in the list above, contact our support team, and we’ll get one created for you!
When you launch an instance of the AMI, you’ll need to:
Choose an instance type of m3.large or better.
Open inbound ports 22 (ssh), 8080 (registry), 8081 (website), and 8800 (npm Enterprise admin UI).
Select or create a .pem key pair: this allows you to ssh into your server instance.
It’s not necessary, but if you’d prefer to attach an EBS volume for registry data that is separate from the root volume, you can. However, the root EBS volume cannot be smaller than 16 GB.
For more information (or screenshots) on any of the above, see our docs for Running npm Enterprise in AWS.
You don’t have to, but you can ssh into your EC2 instance to make sure it’s up and running. If you do, you should see a welcome message like the following:
Open your favorite web browser, access your server on port 8800, and follow the prompts to configure and start your appliance.
You’ll need a license key. If you haven’t already purchased one, you can get a free trial key here.
For more information on configuring npm Enterprise, visit our docs.
That’s it! Once you’ve configured and started the appliance, your private npm registry and website are ready for use. See this document for configuring your npm CLI to use your new private registry.
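As a minimal sketch (the scope name @myco is just an example), pointing your CLI at the new registry looks something like:
npm login --registry http://{your-server}:8080 --scope @myco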
We’re continually striving to provide you the best solutions for distributing, discovering, and reusing your JavaScript code and packages. We hope this AMI makes it just that much easier to leverage the same tools within your organization that work so well in open source communities around the world - a concept we refer to as InnerSource.
As always, if you have questions or feedback, please reach out.
Effective yesterday morning, all requests to the npm registry are made via HTTPS.
Practically this means:
If you request http://registry.npmjs.org/pkgname, you still get a JSON response.
Tarball URLs in that response that previously pointed to http://registry.npmjs.org/pkgname/-/pkgname-1.2.3.tgz now point to https://registry.npmjs.org/pkgname/-/pkgname-1.2.3.tgz.
Requests to http://registry.npmjs.org/pkgname will 301 (redirect) over to https://registry.npmjs.org/pkgname.
Does this affect the integrity of installed packages? No! The CLI client checks a shasum to verify the package, and that check has always been over HTTPS.
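You can see the redirect for yourself with a quick HEAD request (output abbreviated; exact headers may vary):
$ curl -I http://registry.npmjs.org/npm
HTTP/1.1 301 Moved Permanently
Location: https://registry.npmjs.org/npm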
We’ve developed an ecosystem of tools that you can use to replicate the registry in a way that is resilient to these changes:
_changes feed: https://skimdb.npmjs.com/registry/_changes?descending=true&limit=10
For every change in a package in the registry, the whole package object (with changes) gets emitted as data on the _changes feed of CouchDB.
follower: https://github.com/npm/concurrent-couch-follower
Users wishing to follow the changes feed can use our CouchDB follower wrapper, which will ensure you don’t miss any documents even if you process them asynchronously.
normalizer: https://github.com/npm/normalize-registry-metadata
Finally, we also provide a normalizer, so that you can clean up the data you receive, and implement the changes from the changes feed.
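To get a feel for the feed before wiring up a follower, you can fetch the most recent change directly:
curl 'https://skimdb.npmjs.com/registry/_changes?descending=true&limit=1'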
We will never stop making replicating public packages utterly trivial. If anything, we’ll keep making it easier.
We believe these tools should minimize any disruption from our transition to HTTPS — but of course there are edge cases! If you experience difficulty, we want to hear about it and help you out. As always, don’t be shy to reach out: [email protected].
Happy replicating!
Last week, [email protected] (npm LTS) and [email protected] were released to latest. Among other improvements, these fix a vulnerability that could cause the unintentional leakage of bearer tokens.
Here are details on this vulnerability and how it affects you.
An up to date npm is the most secure npm. Update npm to get this patch, as well as other patches:
npm install npm@latest -g
If you believe that your bearer token may have been leaked, invalidate your current npm bearer tokens and rerun npm login to generate new tokens. Keep in mind that this may cause continuous integration builds in services like Travis to break, in which case you’ll need to update the tokens in your CI server’s configuration.
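Depending on your npm version, one way to rotate your credentials is to log out, which discards the stored token, and then log back in to generate a new one; this is just a sketch, so check the docs for your version:
npm logout
npm login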
Since 2014, npm’s registry has used HTTP bearer tokens to authenticate requests from npm’s command-line interface. A design flaw meant that the CLI was sending these bearer tokens with every request made by logged-in users, regardless of the destination of their request. (The bearer tokens should only have been included for requests made against a registry, or registries, used for the current install.)
An attacker could exploit this flaw by setting up an HTTP server that could collect authentication information, then use this authentication information to impersonate the users whose tokens they collected. This impersonation would allow them to do anything the compromised users could do, including publishing new versions of packages.
With the fixes we’ve released, the CLI will only send bearer tokens with requests made against a registry.
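For context, a bearer token is the _authToken value stored in your ~/.npmrc, associated with a specific registry host, along these lines (the value below is a placeholder):
//registry.npmjs.org/:_authToken=00000000-0000-0000-0000-000000000000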
Maybe.
npm’s CLI team believes that the fix won’t break any existing registry setups. Due to the large number of registry software suites out in the wild, though, it’s possible our change will be breaking in some cases.
If so, please file an issue describing the software you’re using and how it broke. Our team will work with you to mitigate the breakage.
Thanks to Mitar, Will White & the team at Mapbox, Max Motovilov, and James Taylor for reporting this vulnerability to npm. You can learn more about npm’s security policy on our security page.
-/-
Node.js has also posted about this disclosure. You can read that here.
This week, after we announced changes to our unpublish policy, some community members created a series of packages that depend on every package in the registry. Functionally, these packages exist to ensure that every package has at least one dependent package. A full list of these packages is here.
Other members of the community quickly notified us of these packages. In response, we have contacted the authors of these packages and we are removing all of these packages from the registry, effective 6:00pm PST today (March 30, 2016).
All of the authors we contacted were cooperative and responded to our requests in good faith :) This is not a fight; we’re communicating this situation for the sake of transparency, not drama.
Here’s why we’ve taken this step:
We utterly depend on our community — for making the npm registry useful and safe for everyone, and for helping us make policies that balance everyone’s needs and keep everyone safe. We’re grateful to everyone who reached out to alert us to these packages — honestly, thank you! — and we’re also open to continue discussing it with you to understand your concerns. If you have concerns or questions, please contact [email protected].
Here is the email we sent to the authors of these packages.
One of Node.js’ core strengths is the community’s trust in npm’s registry. As it’s grown, the registry has filled with packages that are more and more interconnected.
A byproduct of being so interdependent is that a single actor can wreak significant havoc across the ecosystem. If a publisher unpublishes a package that others depend upon, this breaks every downstream project that depends upon it, possibly thousands of projects.
Last Tuesday’s events revealed that this danger isn’t just hypothetical, and it’s one for which we already should have been prepared. It’s our mission to help the community succeed, and by failing to protect the community, we didn’t uphold that mission.
We’re sorry.
This week, we’ve seen a lot of discussion about why unpublish exists at all. Similar discussions happen within npm, Inc. There are important and legitimate reasons for the feature, so we have no intention of removing it, but we are now significantly changing how unpublish behaves and the policies that surround it.
These changes, which incorporate helpful feedback from a lot of community members, are intended to ensure that events like Tuesday’s don’t happen again.
Going forward, if you try to unpublish a given package@version:
If the version is less than 24 hours old, you can unpublish it. The package will be completely removed from the registry. No new packages can be published using the same name and version.
If the version is older than 24 hours, then the unpublish will fail, with a message to contact [email protected].
If you contact support, they will check to see if removing that version of your package would break any other installs. If so, we will not remove it. You’ll either have to transfer ownership of the package or reach out to the owners of dependent packages to change their dependency.
If every version of a package is removed, it will be replaced with a security placeholder package, so that the formerly used name will not be susceptible to malicious squatting.
If another member of the community wishes to publish a package with the same name as a security placeholder, they’ll need to contact [email protected]. npm will determine whether to grant this request. (Generally, we will.)
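For reference, unpublishing a single version from the CLI looks like this (the package name and version are placeholders):
npm unpublish my-package@1.0.0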
This can be a bit difficult to understand in the abstract. Let’s walk through some examples.
Brenna is a maintainer of a popular package named “supertools”. Supertools has 3 published versions: 0.0.1, 0.3.0, and 0.3.1. Many packages depend on all the versions of supertools, and, across all versions, supertools gets around 2 million downloads a month.
Brenna does a huge refactor and publishes 1.0.0. An hour later, she realizes that there is a huge vulnerability in the project and needs to unpublish. Version 1.0.0 is less than 24 hours old. Brenna is able to unpublish version 1.0.0.
Embarrassed, Brenna wants to unpublish the whole package. However, because the other versions of supertools are older than 24 hours Brenna has to contact [email protected] to continue to unpublish. After discussing the matter, Brenna opts instead to transfer ownership of the package to Sarah.
Supreet is the maintainer of a package named “fab-framework-plugin”, which has 2 published versions: 0.0.1 and 1.0.0. fab-framework-plugin gets around 5,000 downloads monthly across both versions, but most packages depend on it via ^1.0.0.
Supreet realizes that there are several serious bugs in 1.0.0 and would like to completely unpublish the version. He attempts to unpublish and is prompted to talk to [email protected] because the 1.0.0 version of his package is older than 24 hours. Instead, Supreet publishes a new version with bug fixes, 1.0.1.
Because all dependents are satisfied by 1.0.1, support agrees to grant Supreet’s request to delete 1.0.0.
Tef works for Super Private Company, which has several private packages it uses to implement static analysis on Node.js packages.
Working late one night, Tef accidentally publicly publishes a private package called “@super-private-company/secrets”. Immediately noting his mistake, Tef unpublishes secrets. Because secrets was only up for a few minutes — well within the 24-hour window for unrestricted unpublishes — Tef is able to successfully unpublish.
Because Tef is a responsible developer aware of security best-practices, Tef realizes that the contents of secrets have been effectively disclosed, and spends the rest of the evening resetting passwords and apologizing to his coworkers.
Charlotte is the maintainer of a package called “superfoo”. superfoo is a framework on which no packages depend. However, the consultancy Cool Kids Club has been using it to develop their applications for years. These applications are private, and not published to the registry, so they don’t count as packages that depend on superfoo.
Charlotte burns out on open source and decides to unpublish all of their packages, including superfoo. Even though there are no published dependents on superfoo, superfoo is older than 24 hours, and therefore Charlotte must contact [email protected] to unpublish it.
After Charlotte contacts support, insisting on the removal of superfoo, npm deprecates superfoo with a message that it is no longer supported. Whenever it is installed, a notice is displayed to the installer.
Cool Kids Club sees this notice and republishes superfoo as “coolfoo”. Cool Kids Club software now depends on “coolfoo” and therefore does not break.
This policy is a first step towards balancing the rights of individual publishers with npm’s responsibility to maintain the social cohesion of the open source community.
The policy still relies on human beings making human decisions with their human brains. It’s a fairly clear policy, but there is “meat in the machine”, and that means it will eventually reach scaling problems as our community continues to grow.
In the future, we may extend this policy (including both the human and automated portions) to take into account such metrics as download activity, dependency checking, and other measures of how essential a package is to the community.
In balancing individual and community needs, we’re extremely cognizant that developers feel a sense of ownership over their code. Being able to remove it is a part of that ownership.
However, npm exists to facilitate a productive community. That means we must balance individual ownership with collective benefit.
That tension is at the very core of open source. No package ecosystem can survive without the ability to share and distribute code. That’s why, when you publish a package to the registry, you agree to our Terms of Service. The key lines are:
Your Content belongs to you. You decide whether and how to license it. But at a minimum, you license npm to provide Your Content to users of npm Services when you share Your Content. That special license allows npm to copy, publish, and analyze Your Content, and to share its analyses with others. npm may run computer code in Your Content to analyze it, but npm’s special license alone does not give npm the right to run code for its functionality in npm products or services.
When Your Content is removed from the Website or the Public Registry, whether by you or npm, npm’s special license ends when the last copy disappears from npm’s backups, caches, and other systems. Other licenses, such as open source licenses, may continue after Your Content is removed. Those licenses may give others, or npm itself, the right to share Your Content with npm Services again.
These lines are the result of a clarification that we asked our lawyer to make for the purposes of making this policy as understandable as possible. You can see that in this PR.
We don’t try to hide our policies; in fact, we encourage you to review the full list of changes and updates, linked from every policy page.
We acknowledge that there are cases where you are justified in wanting to remove your code, and also that removing packages can cause harm to other users. That’s exactly why we are working so hard on this issue.
This new policy is just the first of many steps we’ll be taking. We’ll be depending on you to help us consider edge cases, make tough choices, and continue building a robust ecosystem where we can all build amazing things.
You probably have questions about this policy change, and maybe you have a perspective you’d like to share, too.
We appreciate your feedback, even when we can’t respond to all of it. Your participation in this ecosystem is the core of its greatness. Please keep commenting and contributing: you are an important part of this community!
Please post comments and questions here. We’ve moved to a Github issue for improved moderation.
Disclaimer: we had been told this vulnerability would be disclosed on Monday, not Friday, so this post is a little rushed and may be edited later.
As disclosed to us in January and formally discussed in CERT vulnerability note VU#319816, it is possible for a maliciously written npm package, when installed, to execute a script that inserts a copy of itself into a new package, publishes that package to the registry, and injects itself into other packages owned by that user.
npm cannot guarantee that packages available on the registry are safe. If you see malicious code on the registry, report it to [email protected] and it will be taken down.
If you are installing a package that you do not trust, you can avoid this vulnerability by running
npm install --ignore-scripts
If you wish to never run scripts at install time, you can instead run
npm config set ignore-scripts true
Either or both of these steps will prevent you from spreading a worm at install time.
If you install a package that contains malicious code and then execute it (e.g. by require()ing it into your code) it could still perform malicious actions. You should not execute any software downloaded from the Internet if you do not trust it, including software downloaded from npm.
Installation and other lifecycle scripts are a useful tool that allows package authors to set up configuration, compile binary dependencies, and perform other actions that make using npm packages convenient.
On balance, it’s npm’s belief that the utility of having installation scripts is greater than the risk of worms. This is a tradeoff that we will continue to evaluate.
Package scripts have been a feature of npm since the very beginning. The implications of this feature were clear from the start, but not everyone in the ever-expanding npm community is fully aware of them. Disclosures of this kind are helpful for that reason.
You should report malicious packages to [email protected]. Per our terms of service, they will be taken down. Authors publishing malicious code to the registry may be banned from the registry.
npm monitors publish frequency. A spreading worm would set off alarms within npm, and if a spreading worm were identified we could halt all publishing while infected packages were identified and taken down.
npm is working with security vendors to introduce enhanced security vulnerability scanning and mitigation services. This work is underway but not yet ready.
At root, it is impossible to guarantee that any new piece of software is benign short of manually inspecting it, as mobile app stores do. The work required to do this would be prohibitively expensive. Instead, we rely on users to flag suspicious packages and act quickly to remove them from the registry.
Other potential steps can be taken to make publishing without an author’s knowledge harder, including implementing 2-factor authentication on publishing. This functionality is already available via integrations in npm On-Site, and npm is working to make various 2-factor solutions available to the public registry. This work is also not yet complete.
Ultimately, if a large number of users make a concerted effort to publish malicious packages to npm, malicious packages will be available on npm. npm is largely a community of benevolent, helpful people, and so the overwhelming majority of software in the registry is safe and often useful. We hope the npm community continues to help us to keep things that way, and we will do our best to continuously improve the reliability and security of the registry.
Earlier this week, many npm users suffered a disruption when a package that many projects depend on — directly or indirectly — was unpublished by its author, as part of a dispute over a package name. The event generated a lot of attention and raised many concerns, because of the scale of disruption, the circumstances that led to this dispute, and the actions npm, Inc. took in response.
Here’s an explanation of what happened.
In recent weeks, Azer Koçulu and
Kik exchanged
correspondence
over the use of the module name kik. They weren’t able to come to an
agreement. Last week, a representative of Kik contacted us to ask for
help resolving the disagreement.
This hasn’t been the first time that members of the community have disagreed over a name. In a global namespace for unscoped modules, collisions are inevitable. npm has a package name dispute resolution policy for this reason. That policy encourages parties to attempt an amicable solution, and when one is impossible, articulates how we resolve the dispute.
The policy’s overarching goal is this: provide npm users with the package they expect. This covers spam, typo-squatting, misleading package names, and also more complicated cases such as this one. Entirely on this basis, we concluded that the package name “kik” ought to be maintained by Kik, and informed both parties.
So far, this followed a process that is routine, though rare. What happened next, though, was unprecedented.
Under our dispute policy, an existing package with a disputed name
typically remains on the npm registry; the new owner of the name
publishes their package with a breaking version number. Anyone using
Azer’s existing kik package would have continued to find it.
In this case, though, without warning to developers of dependent
projects, Azer unpublished his kik package and 272 other packages.
One of those was left-pad.
This impacted many thousands of projects. Shortly after 2:30 PM
(Pacific Time) on Tuesday, March 22, we began observing hundreds of
failures per minute, as dependent projects — and their dependents, and
their dependents… — all failed when requesting the now-unpublished
package.
Within ten minutes, Cameron Westland
stepped in and published a functionally identical version of
left-pad. This was possible because left-pad is open source, and we
allow anyone to use an abandoned package name as long as they don’t
use the same version numbers.
Cameron’s left-pad was published as version 1.0.0, but we
continued to observe many errors. This happened because a number of
dependency chains, including babel and atom, were bringing it in
via line-numbers, which explicitly requested 0.0.3.
We conferred with Cameron and took the unprecedented step of
re-publishing the original 0.0.3. This required relying on a backup,
since re-publishing isn’t otherwise possible. We announced this plan
at 4:05 PM and
completed the
operation by
4:55 PM.
The duration of the disruption was 2.5 hours.
We stand by our package name dispute resolution policy, and the decision to which it led us.
Given two packages vying for the name kik, we believe that a
substantial number of users who type npm install kik would be
confused to receive code unrelated to the messaging app with over 200
million users.
The dispute resolution policy minimizes disruption.
Transferring ownership of a package’s name doesn’t remove current versions of the package. Dependents can still retrieve and install it. Nothing breaks.
Had Azer taken no action, Kik would have published a new version of
kik and everyone depending upon Azer’s package could have continued
to find it.
It was abrupt unpublishing, not our resolution policy, that led to yesterday’s disruptions.
The community stepped in.
It’s pretty remarkable that Cameron stepped in to replace left-pad
within ten minutes. The other 272 affected modules were adopted by
others in the community in a similar time. They either re-published
forks of the original modules or created “dummy” packages to prevent
malicious publishing of modules under their names.
We’re grateful to everyone who stepped in. With their explicit permission, we are working with them to transfer these to npm’s direct control.
Unrestricted un-publishing caused a lot of pain.
There are historical reasons for why it’s possible to un-publish a package from the npm registry. However, we’ve hit an inflection point in the size of the community and how critical npm has become to the Node and front-end development communities.
Abruptly removing a package disrupted many thousands of developers and threatened everyone’s trust in the foundation of open source software: that developers can rely and build upon one another’s work.
npm needs safeguards to keep anyone from causing so much disruption. If these had been in place yesterday, this post-mortem wouldn’t be necessary.
Poor communication made matters worse.
In the immediate wake of yesterday’s disruption, and continuing even now on blogs and Twitter, a lot of impassioned debate was based on falsehoods.
npm did not “steal” Azer’s code.
left-pad was open-source code, and explicitly allows
republishing by any other author. That’s what happened in this
case.
This incident did not arise because of intellectual property law.
We’re aware that Kik and Azer discussed the legal issues surrounding the “Kik” trademark, but that wasn’t pertinent. Our decision relied on our dispute resolution policy. It was solely an editorial choice, made in the best interests of the vast majority of npm’s users.
npm won’t suddenly take your package name.
Our guiding principle is to prevent confusion among npm users. In the rare event that another member of the community requests our help resolving a conflict, we work out a resolution by communicating with both sides. In the overwhelming majority of cases, these resolutions are amicable.
It took us too long to get you this update. If this were a purely technical operations outage, our internal processes would have been much more up to the challenge.
There are technical and social aspects to this problem. Any reasonable course of action must address both of these.
We will make it harder to un-publish a version of a package if doing so would break other packages.
We are still fleshing out the technical details of how this will work. Like any registry change, we will of course take our time to consider and implement it with care.
We will make it harder to maliciously adopt an abandoned package name.
If a package with known dependents is completely unpublished, we’ll replace that package with a placeholder package that prevents immediate adoption of that name. It will still be possible to get the name of an abandoned package by contacting npm support.
We are updating our internal policies to help our team stay in sync and address community conflict more effectively.
In a community of millions of developers, some conflict is inevitable. We can’t head off every disagreement, but we can earn your trust that our policies and actions are biased to supporting as many developers as possible.
Keeping dependencies up to date in your modules is a tedious chore,
but it’s very important; the further you drift away from the
latest release of a dependency, the more difficult and risky an
upgrade becomes. I’ve found that, despite developers’ best
intentions, dependencies tend to gradually drift behind:

Enter Greenkeeper. Greenkeeper solves the problem of keeping your dependencies up to date:
automatically creating a branch when dependent modules have changed;
kicking off CI, so that you can see if the changes break your build;
and

Open source developers (like me!) who have integrated Greenkeeper into their continuous integration workflow have found it indispensable. But what if you’re bringing open source’s best practices to the enterprise?
Good news!
We’re excited today to announce that Greenkeeper now integrates with npm On-Site. The integration allows you to configure Greenkeeper’s automated dependency management workflow for the modules that you develop privately within your company.

This integration fits wonderfully into our mission of Inner Source: empowering developers within companies of all sizes to benefit from the awesome tooling and best practices of the open source community.
Greenkeeper solves a very real problem for developers — keeping dependencies up to date — and it’s already seen wide adoption in the open source world. We’re excited to partner with Greenkeeper to bring this same slick workflow to companies’ private module development.
If you’d like to give Greenkeeper a shot, they’d love to hear from you: contact Greenkeeper about starting an enterprise trial.
And, as always, if you’d like to talk about how to bring npm into your large organization — or just see a demo of how easy it is to spin up an npm instance behind your firewall — give it a whirl: discover npm On-Site
npm, Inc. helps more than 3 million open source developers build communities, learn, collaborate, and ultimately publish software that’s used by many of the largest companies in the world.
I believe that this open source development process works. Some great studies back up this belief: In their 2015 article InnerSource: Internal Open Source at PayPal, PayPal describes introducing open source practices into their engineering culture, and the results are exciting:
The results were visible after 6 months. The Checkout Platform team spends 0% of its time rewriting code and just 10% reviewing submissions. The team was able to do a major refactoring and a 4x increase in performance without planning for it. The mindset moved from blocking change to mentoring and coaching.
In this post I focus on tooling, one important aspect of the open source process:
I’ll conclude by discussing the steps that npm On-Site and other industry leaders are taking to bring these open source tools and practices into the enterprise.
It’s amazing to watch best practices spread across OSS communities. Just a few years ago, projects rarely had unit-tests; now unit-tests are one of the first things folks look for in a healthy project. Developers learned running large open source projects that writing descriptive unit-tests was a great habit: It helped people familiarize themselves with codebases, and prevented regressions as new features were added.
How do good habits like unit-testing spread? There’s a wonderful feedback loop created by the social sites where people share and discover code:
npm’s website maintains a list of most-depended-upon modules, and getting a module onto this list is a great motivator for module builders. But how do developers get on this list? What works best is studying the projects of other maintainers already on the list, interacting with those developers, learning from them, and emulating their methodologies. This process means habits and best practices spread quickly across the whole community.
Badges are a great example of this process:
Among the first things developers look for in a healthy OSS project is a set of green badges along the top of its README documentation. These badges emerged organically from the developer community to indicate qualities including passing unit-tests, healthy code coverage percentages, and up-to-date project dependencies. The emergence of badges has, in turn, led to better software development practices in the community. So cool!
As effective development habits spread across the community, developers start to converge on the tools that best facilitate these practices. Let’s look at the top 10 modules listed on the npm registry:
As the developer community arrives at a standard set of tools, incentives grow for these tools to work together elegantly.
Consider one average workflow of an npm module developer:
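As a purely hypothetical sketch of that loop, assuming the tooling described above (the branch and version names are made up):
git checkout -b add-feature
npm test                      # run the unit tests locally
git push origin add-feature   # open a pull request; CI and coverage checks run automatically
# once the pull request is reviewed and merged:
npm version minor
npm publish                   # cut a release; the badges stay green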
Various checks and balances automatically ensure code quality along the way — and at the end of the day, all of our badges remain green.
With the right set of tools, tasks that were once frustrating and manual — e.g., making sure that a pull request doesn’t break your project — become fun.
npm is dedicated to the concept of InnerSource. Originally coined by Tim O'Reilly in 2000, the term refers to applying the best open source practices and tools to building enterprise software.
To help make InnerSource a reality, npm On-Site has teamed up with Travis CI Enterprise, Coveralls Enterprise, and GitHub Enterprise to bring OSS developers’ elegant workflow to the enterprise.
What does this look like exactly?
Travis CI, Coveralls.io, and npm On-Site are working to build a similar installation experience. If you can get one product up and running, you can run the full suite. We’re also working together to keep our roadmaps in alignment; features will be built that complement the full suite of products.
I’m excited about this direction for npm On-Site, and it’s wonderful to collaborate with other Open Source leaders. With hard work and evangelism, InnerSource can help make writing code at work as much fun as writing open source software on weekends. At the same time, open source practices can help make the enterprise software development cycle faster, while producing higher quality software.
Do you think that Open Source practices apply well in the Enterprise? Are there other useful tools that make your development workflow better? I would love to hear from you.
npm On-Site allows you to run your own private npm registry and website behind the firewall, and it’s designed to run on several different infrastructures. One of the easiest ways to run it, which we test extensively, is using Amazon Web Services.
In this post, we’ll show you exactly how to set this up.
Here’s the general idea:
That’s all it takes to get up and running!
Let’s start by logging into the AWS Console. From there, click on “EC2” in the top left to access the EC2 Dashboard.
In AWS, a private virtual server is called an “EC2 instance.” We’ll need to create, or “launch,” an instance that can run npm On-Site. Before we can launch an instance, though, we’ll need a “security group” to allow inbound TCP communication on the ports on which npm On-Site services listen. An AWS security group is a set of rules defining what type of network traffic is allowed for an EC2 instance.
To create a security group, click “Security Groups” under “Network & Security” in the vertical navigation bar on the left, then click the blue “Create Security Group” button near the top.
The ports needed for inbound traffic include:
Port | Reason
---- | -----------------
8080 | Registry
8081 | Website
8082 | Auth endpoints
8800 | Admin web console
You’ll also need SSH access into your instance, so include port 22 in your security group. Then give your security group a name that you will recognize later, like “npmo”:
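If you’d rather script this than click through the console, an equivalent security group can be created with the AWS CLI; the group name and wide-open CIDR below are just examples, so tighten them for your environment:
aws ec2 create-security-group --group-name npmo --description "npm On-Site"
for port in 22 8080 8081 8082 8800; do
  aws ec2 authorize-security-group-ingress --group-name npmo --protocol tcp --port $port --cidr 0.0.0.0/0
done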
In a production environment, you’ll probably want to front the registry and website with a load balancer or routing layer, using a DNS name and standard ports (80 for HTTP and 443 for HTTPS). For purposes of this walkthrough, though, we’ll do without that initially and just access the services directly on the ports to which they bind.
Now that we have a security group defined, we’re ready to launch an EC2 instance.
Click on “Instances” under “Instances” in the left navigation panel, then click the blue “Launch Instance” button.
This starts the multi-step instance wizard.
Our first step is to select a base Amazon Machine Image (or AMI) as a starting point. npm On-Site supports Ubuntu 14+, CentOS 7, and RHEL 7. For this walkthrough, let’s assume CentOS 7. You can use the top centos 7 search result in the “AWS Marketplace.” Just make sure it’s 64-bit. Click the blue “Select” button.
The next step is choosing an instance type. This determines how many resources your server will be allocated. We recommend using an m3.large. Select the radio button in the first column of the table, then click “Next: Configure Instance Details” on the bottom right.
Go with default instance details, then click “Next: Add Storage” on the bottom right.
The next step is to configure storage volumes for your instance. We recommend adding an EBS volume that has at least 50 GB. We’ll use this volume to store the data for our npm On-Site registry, and using EBS will make it easy to create snapshots of your package data for backup or transfer purposes.
Click “Add New Volume” button, select “EBS” as the “Volume Type,” and enter your desired amount of storage in “Size.” Then click the “Next: Tag Instance” button on the bottom right.
Give your instance a name and click “Next: Configure Security Group.”
Choose “Select an existing security group” at the top, then select the Security Group you created in step 1 from the list. Next, click the blue “Review and Launch.”
Review your settings and click the blue “Launch” button when you’re satisfied. This will open a dialog to select or create a key pair that you’ll need to access your instance over SSH.
In the dialog, select “Create a new key pair” if you don’t already have one. Give it a name and click “Download Key Pair.” Remember where you save this file, as you’ll need it to SSH into your server. Once you’ve downloaded the .pem key pair file, click “Launch Instances” in blue.
Wait for your instance to launch, and view its status in the “Instances” list.
Note that if you’re running your EC2 instance in an AWS VPC virtual private cloud, then you may need to explicitly set your network interface MTU setting to 1500. You can read about why and how to do this in the AWS docs.
Now that we have a server instance up and running, we need to prepare our attached EBS volume for use. Note that it will be attached, but not formatted or mounted initially.
Access your server using the .pem key pair file via SSH. For Mac or Linux users, use the canonical ssh CLI program. For Windows users, try PuTTY.
To SSH into your server, you’ll need its public IP address. You can find the public IP for your server in the Instances list. Note that for CentOS 7, the username is centos, but if you chose a different Amazon Machine Image, the username may be different (e.g., ubuntu for Ubuntu, ec2-user for RHEL, or admin for Debian).
$ ssh -i ~/.ssh/my-key-pair.pem centos@{public-ip}
First find the volume’s device name, e.g. /dev/xvdb, using the lsblk command:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
xvda 202:0 0 8G 0 disk
└─xvda1 202:1 0 8G 0 part /
xvdb 202:16 0 50G 0 disk
Let’s quickly verify that our volume does not yet have a file system:
# sudo file -s {device}
$ sudo file -s /dev/xvdb
/dev/xvdb: data
The output should say data, meaning there is no file system formatted on the volume yet. Let’s add one:
# sudo mkfs -t ext4 {device}
$ sudo mkfs -t ext4 /dev/xvdb
mke2fs 1.42.9 (28-Dec-2013)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=2162163712
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
4096000, 7962624, 11239424
Allocating group tables: done
Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done
Now we can create a mount point, like /data, for our formatted volume:
# sudo mkdir {mount_point}
$ sudo mkdir /data
Then we can mount the volume to the mount point and check it with the df -h command:
# sudo mount {device} {mount_point}
$ sudo mount /dev/xvdb /data
$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 8.0G 823M 7.2G 11% /
devtmpfs 3.6G 0 3.6G 0% /dev
tmpfs 3.5G 0 3.5G 0% /dev/shm
tmpfs 3.5G 17M 3.5G 1% /run
tmpfs 3.5G 0 3.5G 0% /sys/fs/cgroup
/dev/xvdb 50G 53M 47G 1% /data
So far, so good. Now, in order to preserve the mount on reboot, we need to add an entry to fstab like so:
# keep the original fstab config in case we mess up
sudo mv /etc/fstab /etc/fstab.orig
# copy the original fstab config
sudo cp /etc/fstab.orig /etc/fstab
# and modify the new fstab config
sudo vi /etc/fstab
A word on editors:
At npm, we’re non-partisan, go-along-to-get-along types. Your editor is your choice! You do you! We’re cool with anything you choose — except your angry notes telling us about your choice.
That being said: depending on the type of AMI you’re using, you might have to (/get to!) use vi to edit the file. (This is the case on CentOS, for example. On other systems, you may be able to use nano.) For the uninitiated, a step-by-step:
With the file open, add an entry under the last line: go to the last line by hitting Shift + G, then Shift + 4. Enter insert mode by hitting i, use the right arrow key to move to the end of the line, then press Enter to add a new line as shown below.
On CentOS or RHEL, use defaults,nofail as the fs mount options. On Ubuntu or Debian, use defaults,nofail,nobootwait.
#{device} {mount_point} {fs_type} {fs_mount_ops} {fs_freq} {fs_passno}
/dev/xvdb /data ext4 defaults,nofail 0 2
Exit insert mode with the esc key, then save and exit by entering :wq.
Now, test our new fstab config. If everything is okay, this command should give no output:
$ sudo mount -a
Let’s make sure we have proper file permissions on our new mount point, and create a new directory to house all of our registry data:
$ sudo chown -R $(whoami):$(id -gn) /data
$ mkdir /data/npmo
Our EBS volume is now ready to go, and we can install Node.js and npm On-Site!
In this step, the walkthrough will follow our documentation’s standard installation, but we’ll hit the highlights here.
First, install Node.js and update npm. Note that this command is specific to CentOS or RHEL:
$ curl -sL https://rpm.nodesource.com/setup_4.x | sudo -E bash -
$ sudo yum -y install nodejs
$ sudo npm install npm@latest -g
Next, install npmo and answer any prompts:
$ sudo npm install npmo -g --unsafe
Once that’s done, complete the installation by configuring your On-Site instance via the admin web console at https://{your-server}:8800. At this point we’ll defer to the installation doc, with the exception that we should configure storage settings to use our mounted EBS volume.
When you reach the Settings page, find the “Storage” section and change the /usr/local/lib/npme path prefix to /data/npmo for all configured paths:
For testing purposes, you may want to select “Open” as the “Authentication” option.
Once you save your configuration settings, you’ll be prompted to start the registry components and go to the Dashboard view. All components will be downloaded and started as lightweight containers.
Once you see a status of “Started,” your registry is ready for use!
Back at your local machine’s terminal prompt, configure your npm CLI client to use your new private registry:
Authenticate with your registry and associate the registry to a scope name. (The scope is a namespace or prefix that you’ll use for your private packages.)
$ npm login --registry http://{your-server}:8080 --scope @demo
Now, whenever npm sees the @demo scope in a package name — like @demo/test-pkg — it automatically will publish to and install from the private npm On-Site registry you’ve configured.
To quickly verify this, let’s create a tiny module and publish it as a private package:
$ mkdir test-pkg
$ cd test-pkg
$ npm init -y --scope @demo
$ echo "module.exports = 'test successful'\n" > index.js
$ npm publish
Visit your registry’s website at http://{your-server}:8081/ and find the @demo/test-pkg package under “recently updated packages.”
Now, let’s make sure we can install our private package:
$ mkdir downstream
$ cd downstream
$ npm install @demo/test-pkg
$ node -e "console.log(require('@demo/test-pkg'))"
test successful
As you can see, your package was downloaded to a local node_modules directory, allowing you to require() and use it.
That’s it!
With luck, this has demonstrated how easy it is to run your own private registry with npm On-Site and AWS.
What’s next? For more advanced topics or questions, check out our docs. Also, don’t hesitate to drop us a line at [email protected].
Happy publishing!
npm, Inc. is now 2 years old, and we’ve come a long way from where the project and community were at the beginning of 2014. Here’s a brief overview of what happened this year, and what’s planned for 2016.
Reliance on modular JavaScript is increasing, the community of npm users is growing incredibly quickly, and the rate at which these figures grow is also increasing. In the last year, npm users downloaded 25 billion packages.
This kind of growth means our “most of these things happened in the last year” metrics will almost certainly be repeated next year — and it also means that the dominant majority of our users are brand new.
Events like Node School and other community efforts are helpful for getting newcomers integrated. In 2016, a major priority for npm, Inc. will be to help newcomers succeed at using npm in their projects, both for Open Source and at work.
Since Private Packages debuted in April, many thousands of you have signed up. Support for organizations shipped a few months ago, and we continue to work to improve it.
The npm On-Site team has been busily cranking away to make it easier than ever to get the full npm experience behind a corporate firewall.
npm, Inc. went into 2015 with 11 employees and came into 2016 with 27. A key part of this has been building out teams for support, sales, and marketing — so that everyone everywhere can hear about npm and be successful with it.
This increase in people meant that we were tripping over one another in our old office, so we moved into a much larger space on Lake Merritt. If you come to an Oakland Node School event, you can enjoy our couches and coffee while you learn how to write JavaScript packages.
As I write this, our registry uptime has been 100% over the last month. Superstition dictates I shouldn’t brag too much about this, but the hard work of our registry and ops team means registry reliability has become something the rest of us can usually forget about.
One of this company’s initial goals was to make registry uptime remarkably unremarkable, boring, and expected. We’re still committed to making it even less noticeable. With a community as fast-growing as ours, staying ahead of exponentially increasing usage has to be a top priority.
The npm CLI team has been working hard to pay down technical debt and continue to maintain good Open Source discipline. Part of this has been conducting weekly calls as public hangouts, and establishing a clear set of priorities to help guide the vision of the project.
In the next year, you’ll see npm get faster, more reliable, and easier to understand.
In 2015, the Node Foundation was created as a home for the Node.js project. npm, Inc. was one of the initial Silver Sponsors, and several members of our team have actively participated in the Foundation since before it was even called that — when it was the Node Advisory Board and the io.js Technical Committee.
In 2016, it’s my personal goal to move the open-source npm CLI project away from single-company ownership. The npm CLI belongs in a more standard and mature Open Source governance structure: within a foundation.
Exact plans for this transition and governance structure are still being drawn up. The worst possible outcome would be to damage something that works well. We’re taking very deliberate steps and weighing the pros and cons of every choice.
The ideal arrangement creates clear lines of accountability from the CLI team to the community they serve, and provides a structure for more people to get involved with the open-source project in a productive way.
InnerSource means applying the culture, practices, and tools of Open Source Software to the domain of commercial software development.
InnerSource is what we do at npm, Inc. It’s at the heart of our skills and passion, and it’s the clearest way to create value for our customers. As a long-time OSS participant, I’ve tried to bring some of the best aspects of open participation to this company, and our products are designed to help you use well-established open source methodologies at your job.
I originally wrote npm because I wanted an easier way to share JavaScript code and experiment with what other people were creating. We started this company when we saw a movement happening: people taking what works in Open Source, and bringing it into their companies.
Companies have somewhat different needs than individual Open Source developers, but not wildly different ones. They need control of their infrastructure, protection against accidentally violating software licenses, better support for managing teams, and visibility into what everyone across the company is working on.
npm On-Site provides a full-featured solution for companies that want an npm registry — complete with the website, first-class support of all npm features, and a whitelist (or other extensible policy) to control what gets installed behind the firewall. With a few small additions, it’s literally the exact same code that runs npm’s public registry and website for Open Source devs. You can’t get more InnerSource than that.
npm Private Packages provides a SaaS for companies that want to use npm to manage their private code, but don’t want or need to run the registry within their own network.
More importantly, we’re not alone in this approach. A lot of important dev tool companies are coming into an almost identical three-part strategy: free for Open Source, paid SaaS, and full-featured on-site enterprise software.
Expect to hear more about this from us and our friends in the coming year.
npm literally would not exist without the contributions of our community, and npm, Inc. continues to depend upon your input, PRs, packages, and feedback. Please don’t be shy to get in touch.
Have an improvement? Send us a pull request (and score free socks!).
Need help? Contact support or find us on Twitter.
Any thoughts? I want to hear from you. Drop me a line: [email protected]. If you have an npm account, then you probably got this message in your inbox, and can reply to it. (And if you don’t, then go create one!)
Sign up for the Weekly and get this sent to you instead!
Since we launched Private Packages this spring, support for managing developer teams with varying permissions and multiple projects has been — by far — the most requested feature improvement.
With help from inexhaustible beta testers, and after some isolated bumps during last week's rollout, organizational support is here. For $7/dev/mo., enjoy easier management of viewing & publishing for an unlimited number of packages, plus control over access to your own scope name. You also can add existing paid Private Packages users to your org for free.
We hope you'll be as excited as we are, and we definitely want your feedback and ideas about what comes next. Learn more about Orgs here, sign up here, and get in touch.
What a month!
Great news, but two important things to keep in mind:
When upgrading from 2 to 3, don’t overlook an important breaking change in
the way peerDependencies works. There’s a great explanation of that change in
our release notes, here. This is especially important if you maintain a
grunt, gulp, or broccoli plug-in.
If you develop on Windows, you may not see the new version of npm when you
upgrade, depending on how you have your PATH set up. Specifically, if npm’s
global install path comes before the Node folder in PATH, and you’d
previously upgraded npm yourself, you might find yourself continuing to use the
last version of npm you’d installed yourself.
For more info, don’t forget that all of the content in our CHANGELOG is
super useful and awesome. And remember, if you’re really stuck, you can
always email [email protected], tweet us at
@npm_support, or find us on IRC at #npm on Freenode.
Alongside the Node.js 5 release was the “re-awakening” of Node’s LTS strategy. (You can read about it at the Node.js LTS repo or in this post by Rod Vagg.)
This is a fine time to remind you that npm has its own LTS strategy,
designed to complement Node.js’s. npm@2 is included in Node.js 4 LTS Argon;
therefore, this version of npm will continue to receive fixes for crashers and
security issues — and support for new features on npm, Inc’s registry — for at
least the next six months.
Also, if you’re still on Node 0.10 (why?): npm@1 is being deprecated soon
and new versions of Node 0.10 LTS will be getting npm@2. Notes
here.
Nodevember is still one of our favorite conferences and not just, but not not just, for the food. If you weren’t able to make it, you absolutely are too late for Jon Q’s special-edition stickers, but not too late to catch up on some of npm humans’ Nodevember talks:
If you’ve ever browsed for packages and wished you had an effortless sandbox in
which to test a few out, dig Tonic Dev, an npm-connected node REPL that
allows you to simply require any package in our public registry.
We were fans when this first came out, and we’re so pleased with what this adds to the package discovery experience that we’ve added a link to Tonic to every package page. Just click the link in the right side panel and you’ll be dropped into a node REPL with your chosen package pre-added as the special ingredient.
You might have noticed that our CLI reference docs just got a lot more "webby". This is part of a strong push to make our documentation more accessible and usable. Happy click-holing! Substantial additional improvements to docs are coming, but for that we need your help. We love your email and tweets.
Microsoft Virtual Academy published an awesomely thorough 2 hour (!) tutorial that takes you from zero to npm package hero. Mastering Node.js Modules and Packages with Visual Studio Code.
Monica Garnica joins the sales team from the legal industry; in her free time she likes “checking out a movie or a new restaurant” (and therefore, presumably, also a new movie about a restaurant (!)).
Andrew Goode is based in Atlanta and hacked for 8 years in the telematics industry, but now he’s helping make npm On-Site as … goode as can be. (Sorry.)
Andrea Zodrow joins the sales team from Wind River Systems. In her off time she chases around her almost-2-year-old son Zach or watches “anything Marvel or zombie related.”
Hired connects software developers, data scientists, designers, and sales talent with over 2,500 vetted tech companies in 13 major tech hubs, probably including yours. Developers on Hired receive an average of 5 interview requests within a week. Looking for a job? Check them out.
Today, npm released a set of features that I’m really excited about: Private Packages for Organizations.
This release allows teams to use private npm packages more effectively. It’s intended for businesses that manage developer teams, with varying permissions and multiple projects.
You can read the official announcement, and learn more about the product on our website.
A big part of the motivation to start a company around this project was that people using npm at work wanted to be able to manage teams and control access to private code. We got a lot of feedback from the early adopters of private packages for individuals, and even more from our beta testers who’ve been banging on this stuff for the last few months as it’s come together.
The team at npm has worked hard on this release, and we look forward to feedback and suggestions from the community. Now that it’s live, we’ll be continuing to improve and polish it as we see how more people use it. Contact us and tell us how we’re doing :)
Monday, November 9, npm will be breaking in the new Microsoft Reactor space with an event targeted at Windows-based npm and Node users:
Hear talks from Microsoft's Sara Itani and ag_dubs, who will lead the group through key environment setup details and managing packages for your applications using cross-platform Visual Studio tools.
The year was 1995.
Your Humble Narrator wakes up, crushes Ecto Cooler and Dunkaroos, feeds his Tamagotchi, and goes for a quick morning Moon Shoes session. After coming back, he decides to move the hit counter on his link of the Power Rangers webring into its own HTML frame. So he opens the shiny new Windows 95 Start Menu™, and (after a quick game of Oregon Trail) launches Notepad and starts moving it into one of those fancy new frames that Netscape just started supporting. But he hits a snag — a lot of users are still on Lynx and Mosaic. What’s a hip, young developer to do?
User Agent sniff, of course! A few lines of Perl will do the trick:
my($ua) = $ENV{HTTP_USER_AGENT};
if (index($ua, "Mozilla") > -1) {
# send framed hit counter
} else {
# send s00per lame non-frame version
}
And done — now off to that Kid ’n’ Play concert!
Everything goes great for a while, until this newfangled “Internet Explorer” comes out and ruins the great detection code! IE supports frames, but its user-agent isn’t “Mozilla” so now we have to rewrite our user agent sniff code rather than implement those awesome new table-based layouts everyone is doing! What a bummer.
There’s got to be a better way!
Maybe, rather than knowing which browser supports each feature, and abstracting compatibility from that, we can just query the browser itself to find out…
And for this reason, and this reason alone, JavaScript was created¹.
With the power of JS, we can quickly and easily ask the browser if it supports frames, set a cookie, and serve the non-frame version to anyone missing that cookie.
if ("frames" in window) {
document.cookie = ‘frames=true”
}
With that, feature detection was born.
Fast forward two decades:
There are dozens of features added to the browser every year, often in various states of polish and conformity. It’s becoming increasingly difficult to know if the newest and shiniest features are supported in the browser in which your code happens to be executing. Since you’re a code genius and you know that this is a problem that others hit all the time, you reach for a library that already knows all of the edge cases, rather than hand-rolling each one.
Of all of the options out there, Modernizr is by far the most popular way to get this information.
How to install: npm install -g modernizr
External requirements: none
There are hundreds of different things Modernizr will detect for you. Since you’re a responsible web developer, rather than include all of them, you can use the Modernizr package to create custom builds using only the features that you want to support.
The configuration is just a standard JSON file:
{
"minify": true,
"feature-detects": ["canvas"]
}
With that, you can create your very own build of Modernizr as easily as this:
modernizr -c config.json
In addition to the command line interface, there is a programmatic API, so you can require() at will.
There are a lot more features available, so run modernizr --help to see them all.
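If you'd rather go the programmatic route, a minimal sketch looks something like this (the output filename is just an example; the config object mirrors the config.json above):
var fs = require('fs');
var modernizr = require('modernizr');

modernizr.build({ minify: true, 'feature-detects': ['canvas'] }, function (result) {
  // `result` is the generated Modernizr source as a string
  fs.writeFileSync('modernizr-custom.js', result);
});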
How to install: npm install <build url>.tar / bower install <build url>.tar
External requirements: none
You may not want to have to install a global bin just to generate this file every time (we have tools for a reason, after all). Modernizr created a hosted downloading service that integrates with npm and bower.
Just grab the build URL found at the top of any generated modernizr file:
…and add a .tar.gz to the end of it.
Now you can add that to your package.json (or bower.json), run the requisite install, and you have a built version ready for processing and deployment.
npm install --save https://modernizr.com/download?-cssanimations-csscalc-csscolumns-csstransforms.tar
As convenient as that is, it can be tedious to keep manually adding the features to your config one at a time as you add them. That is why Richard Herrera created the grunt-modernizr and gulp-modernizr packages.
How to install: npm install grunt-modernizr --save-dev
External requirements: Grunt
grunt-modernizr (like gulp-modernizr) uses customizr under the hood to crawl your entire project (.html, .css, .sass, and .js files) looking for references to Modernizr. It dynamically creates the configuration needed for your exact project, and creates the modernizr.js file that suits you best — automagically.
At the end of the day, however, there are some things that you just can’t detect properly on the frontend. Sometimes your only option is to do user-agent sniffing.
How to install: npm install useragent
External requirements: none
If you truly, 100%, really do need to sniff, then I beg you to not roll your own. There are millions — millions — of different useragents in the wild, all trying to be simultaneously like everyone else and different, too. Rather than rediscover those quirks and edge cases, use a package that has the smarts to handle them all. useragent is one of the fastest, and its data is constantly updated from browserscope.
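Here's a quick sketch of what that looks like (the user-agent string is just a sample; in a real server you'd pass req.headers['user-agent'] instead):
var useragent = require('useragent');

var agent = useragent.parse(
  'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_11_0) AppleWebKit/537.36 ' +
  '(KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36'
);

console.log(agent.toAgent());     // e.g. "Chrome 46.0.2490"
console.log(agent.os.toString()); // e.g. "Mac OS X 10.11.0"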
And if you don't even want to think about browsers? If you just want to code for the latest browsers and polyfill everything else, FTLabs has just the thing for you.
How to install: npm install polyfill-service (not needed if using hosted version)
External requirements: none
This is mostly intended as a hosted service that you can just include in your web page for free (thanks, FT!):
<script src="https://cdn.polyfill.io/v2/polyfill.min.js"></script>
The polyfill-service is also a great package you can run yourself. It takes a user-agent and spits out an optimized bundle of polyfilled goodness that gives that browser the same interfaces as the latest and greatest (or at least as close as it can come).
The web is progressing faster and faster, and there are a ton of new things to play with out there. Whether what you’re writing is just scratching your own itch or will become the Next Big Thing, make sure you’re utilizing all of the tools at your disposal to give your users the best experience possible.
¹ This probably is not actually the reason JavaScript was created.
Sign up for the Weekly and get this sent to you instead!
Surely you’ve noticed that non-violent communication is a big deal at npm — and it is particularly useful in our support program. Writes support human Stephanie:
We get feedback from users in a variety of different emotional states. We know npm can be tricky, especially when you’re a new user, but luckily we’re here to help. nvc actually is a great list of what we need to help you fully.
From 30,000 feet, nvc’s core tenets are these:
Regardless of the error you’re experiencing, npm’s here for you, your feelings are valid, and we’ll do our very best to help, every time. Don’t be shy.
Today in totally rad things in our inbox, @noffle writes to remind us:
npm view is pretty darn cool, but not a lot of folks know about it. I wrote a short and sweet blog post about it.
And so he did. Get this:
The view subcommand of npm is pretty cool. It will happily print any property of the package to standard output.
Like a project’s README! Combine this with a lightweight Markdown viewer and you can easily skim a project’s docs without having to go search the Web.
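A couple of examples to give you the idea (lodash here is just a stand-in for whatever package you're curious about):
$ npm view lodash version          # just the latest version number
$ npm view lodash repository.url   # nested properties work, too
$ npm view lodash readme | less    # skim the README without leaving your terminal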
Give it a read, and share your tips, too.
Are you a Windows developer based in the Bay Area? Save the date!
Put Monday, November 9th on your calendar: Microsoft will be hosting npm at Folsom Reactor, a new space dedicated to the developer and startup community in the heart of San Francisco. The evening begins at 6:30 p.m., with speakers from both npm and Microsoft.
The topic: “Node and npm: Tools, Tips, and Tricks for the Windows Developer.”
Last month, our own Nick C asked for your help curating a thumb-drive’s-worth of open source apps and resources for the CS department at the University of Havana.
Nick’s returned from his trip. The bad news is that not all of the thumb drives made it into the country; the good news is that some of them did, and
I can report without a doubt: Cuba is such an exciting and vibrant country on the verge of a(nother) revolution in their connections with the greater world. I’m really excited to see what the next decade brings for the empowered youth and how they process the change that’s about to wash upon their shores.
Head to our blog for Nick’s notes & photos, socio.
Joe8Bit recently did some pairing with a coworker with low visual acuity, and observed
pre blocks (while very important) were very difficult to distinguish from the surrounding text
Thanks to his help and his code, we landed high-contrast styling to our code blocks — so now our docs are more accessible to everyone.
If you see something, patch something, and don’t forget…
…if you submit a PR to npm or our docs, you earn the eternal gratitude of the wombatariat, but just as compellingly, free socks.
Unlike our npm shirts, stickers, and phone chargers, which any old yokel can find in the shop, submitting a PR is the only way — short of being or dating/marrying an npm human — to get these beauts.
Hop to it.
Jonathan Barronville joins npm to work on the website, send Beyoncé GIFs on Slack, and, assuming he has the time, help improve search! He's based in Boston, where he cofounded Runway Technologies, a Harvard i-lab startup that built smarter search for fashion shopping.
$ npm pack
$ curl -F package=missingsocks.tgz http://washmyclothes.io
h/t @LewisCowper
Hired connects software developers, data scientists, designers, and sales talent with over 2,500 vetted tech companies in 13 major tech hubs, probably including yours. Developers on Hired receive an average of 5 interview requests within a week. Looking for a job? Check them out.
I just returned from a great trip to Havana, and I can report without a doubt: Cuba is such an exciting and vibrant country on the verge of a(nother) revolution in their connections with the greater world. I’m really excited to see what the next decade brings for the empowered youth and how they process the change that’s about to wash upon their shores.
Here are some notes from the trip.
We had high hopes of delivering Los Paquetes of open source software and freedom, burned onto a few dozen thumb drives. Unfortunately, these were stifled somewhat at the airport.
Apparently, there exists a ledger of what, and how much, can be brought into Cuba at any given point, unbeknownst to the casual tourist. As I passed through the customs line, I dutifully itemized everything I had brought, all of which was nicely packaged and consolidated in a single wombat-themed npm sock. Unfortunately, I exceeded the items in this “ledger of allotment” by 33 thumb drives, 3 smart phones and 4 baseballs (!). It didn’t seem relevant that the flash memory in my personal camera and phone contained more GBs than all of the confiscated thumb drives combined.
To their credit, Cuban officials couldn’t have been nicer about taking the contraband, itemizing it in triplicate, placing everything in a burlap sack with double security ties, and putting it into a warehouse for eternal safekeeping. The whole process took only three short hours!
The good news is that I did succeed in bringing in a few thumb drives, on which were Tor clients, NodeSchool modules, a lightweight Linux distro, the movie Revolution OS, and a downsampled version of Laurie’s Stuff Everyone Knows talk. This meant I did have a few gifts such as the allowed thumbdrives, stickers, wombat socks and battery chargers for when I went to meet with my new comrades at the University of Havana’s School of Computer Science — just not enough to really spread the free-market love.
When we arrived at the university, a nice man noticed us looking for the CS department and offered a quick tour of the building, on which there was a balcony where Fidel himself had orated. He then found a friend who took us on a walking tour of the surrounding student neighborhoods, guided us to a bar / house where Castro used to live, pumped us full of Negronis, and finally hit us up for money — all this from a supposed Spanish professor!
Lesson learned: be very careful whom you ask for directions, or you’ll wind up a bit tipsy and a few Cuban Pesos lighter.
Back on track at the University and free of our diversions, we finally found the droids we were looking for, and enjoyed a good hour or so with about a dozen students. Here are a few things we learned and observed:
The importance of the connected society for Cuba was clear. Even intermittent access to the Internet via WiFi was creating new community centers throughout the country. What a benefit it might be to own a business nearby one of these chosen corners.
Walking along the Malecón (sea wall), you'll find clusters of people on one block, and none on the other. The reason was clear, whether it was the plaza square of a small agricultural town or the center of Havana: people congregate with their laptops, tablets, and phones wherever there's a signal. However, there exist only 35 of those hotspots in a country of 11 million.
Unfortunately, the cost of this service is still prohibitively high — and try using your laptop for work with no power outlet, in full sunlight, under 100% humidity, with frequent torrential rain showers rolling through with tropical force. This doesn’t speak much for a nation’s hope in general productivity.
What this means is, like my introduction to Cuba through multiple stamped customs forms in carbon triplicate, there is both a charm and a challenge to the old ways. There are aspects of Cuban society that truly are in a time warp, often purposefully so.
When the next Cuban revolution begins (either slowly or in upheaval), it will be led by these, the next generation of technorati, who have taught themselves how to access and leverage information freely. For now, this access remains unique to a small percentage of the population, but the students we met can’t be far removed from becoming future revolutionaries, as they’re too smart and resourceful to do otherwise.
Sign up for the Weekly and get this sent to you instead!
The reality is that some teams’ workflows and legal constraints will always require hosting code on site. For you, our enterprise product makes it possible to share and distribute modules within your team, behind your firewall. npm On-Site is a turn-key way to manage internal sharing of private modules, selectively mirror the public registry, and assume control of development and deployment, while playing nice with GitHub Enterprise and other OAuth providers.
If you don’t follow Ben on Twitter, you’ve missed some pretty great feature updates. Now, you can take care of installation and configuration from a simple web GUI and — this one’s huge — your npm On-Site installation can include your very own hosted version of the npm registry website.
Go check it out, and if you’d like a 1-on-1 demo, schedule a chat.
Henrik Joreteg reflects on the bandwidth requirements of some common tools and frameworks and wonders,
Are all these heavier tools/frameworks even viable for mobile use? I’m not convinced they all are.
The post serves less to flog any specific frameworks than to recommend,
I don't want your experiences as a developer to lead you to think the mobile web isn't viable just because sending a megabyte of JS made the app slow. Maybe the mobile web is fast enough and we just need to stop pretending we can get away with inefficiencies that we don't feel on a desktop. I think we need to be much more minimalist from the start.
Namely, pre-rendering HTML and seeing how much you can accomplish with React + Redux.
It’s hardly the end of this topic, but we think it’s an interesting perspective. Give it a read.
We ♥ web development, but an increasing number of users are turning to apps, and app stores, before searching the web for you & your stuff. What web development tools can be leveraged to build apps for devices? Jeff Burtoft contributed a post to the npm blog with pointers to some helpful tools, including some neat Cordova plugins.
Check it out and share your thoughts too.
The first of two (!) new humans to join this week,
When she’s not teaching the world how to use npm, she’ll talk your ear off about philosophy and neuroscience, and maybe — maybe — the Weekly will come out once a week again. [GitHub, twitter].
Building web applications is hard! Is the network reliable? Will bandwidth be constrained? Fortunately, there’s all-other-things-being-equal.
All other things being equal, everything should work. Will all other things remain equal? The assertions in these helpers offer a perfect simulation of a production environment for a JavaScript application, in Node or in the browser! They are compatible with every testing library and they are guaranteed to be accurate in all cases.
Oh?
const assert = require('assert');
const allOtherThingsBeingEqual = require('all-other-things-being-equal');
assert.ok(allOtherThingsBeingEqual.networkIsReliable());
assert.ok(allOtherThingsBeingEqual.networkIsSecure());
assert.ok(allOtherThingsBeingEqual.latencyIsZero());
assert.ok(allOtherThingsBeingEqual.bandwidthIsInfinite());
assert.ok(allOtherThingsBeingEqual.codeCoverageEqualsTestCaseCoverage());
assert.ok(allOtherThingsBeingEqual());
Write your code to target environments where these assertions pass, et voila.
Not sure why it took this long for someone to think this up.
Hired connects software developers, data scientists, designers, and sales talent with over 2,500 vetted tech companies in 13 major tech hubs, probably including yours. Developers on Hired receive an average of 5 interview requests within a week. Looking for a job? Check them out.
Maybe you already have a successful web app, or you’re starting a new app from scratch, but your goal is the same: you want to reach users. The web is powerful, and has an unparalleled reach since it can be accessed from any device on any platform — however there are a growing number of users who are turning to “apps” to find solutions, and the first place they go to find these apps isn’t the web, it’s the app stores.
Companies are turning to the web as a tool for cross device development. With the powerful, modern tools available to web developers today, it’s no surprise why. Let’s look at some of the most popular of these tools and find what’s best suited to help you accomplish your goals.
When building for devices, your options fall within two main application architectures: packaged content or hosted content. Although there are many “hybrid” options of web apps which blend the hosted and the packaged — as well as the web app with native code — we’re going to look at the tooling to get you going in the right direction. Which architecture do you choose? Look at these descriptions to choose what looks closest to the type of app you want to build.
Hosted web apps are apps submitted to app markets that point to web content that remains on the web server, as opposed to being packaged and submitted to the store. Hosted apps have these characteristics:
This is the most popular type of web app in the Chrome App Market, and a growing trend in many other markets. Hosted apps bring a lot of benefit for companies that already have a fully featured web app and don’t want to start from scratch with native code. This approach also allows web developers not only to continue using their skill sets and frameworks, but to continue using the same workflow they use for the web.
Another benefit of hosted web apps is that you don’t need to push a new version to the store every time you need to update your app.
Many platforms can support hosted web apps, but the process for developing for each of those platforms is different, for now: the W3C Manifest for Web Apps will standardize web apps in the future. When targeting users across devices, you’ll want to try this tool for building hosted web apps:
How to install: npm install manifoldjs -g
External requirements: to test in emulators, the appropriate platform emulator may need to be installed:
iOS: Xcode
Android: Android SDK
Windows: phone emulator (optional; can be tested directly in Windows 10)
Chrome: Chrome browser
Firefox OS: Firefox browser
What it does: ManifoldJS puts the focus on the W3C Manifest for Web Apps — a standards-driven, open source approach for creating apps — and then uses that metadata to create a hosted native app on each platform. Where a platform supports hosted apps, ManifoldJS builds them natively; it falls back to Cordova to polyfill the platforms that don't have native support.
More info at https://www.manifoldjs.com
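For reference, the manifest ManifoldJS consumes is a small JSON file. A minimal sketch, with illustrative names and paths, looks something like this:
{
  "name": "My Hosted App",
  "short_name": "MyApp",
  "start_url": "/index.html",
  "display": "standalone",
  "icons": [
    { "src": "/icons/app-192.png", "sizes": "192x192", "type": "image/png" }
  ]
}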
Hosted apps also open up new and interesting opportunities not possible in the browser, via platform APIs. iOS and Android apps can be configured to access Cordova APIs like media capture and contacts, and on Windows 10 you get access to all those APIs plus the entire Windows Universal API set. New features can be added to your existing app (by simply feature-detecting the APIs), or you can build a new app with all the same features and advantages of a native app.
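Feature detection works just as well here as it does in the browser. A quick sketch, where the two helper functions are hypothetical placeholders for whatever your app actually does:
// only use the Cordova contacts plugin when it's actually available
if (navigator.contacts && navigator.contacts.find) {
  pickFromDeviceContacts();   // hypothetical helper for the in-app path
} else {
  showManualContactForm();    // hypothetical fallback for the plain browser
}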
A packaged web app is a type of web app distributed in its entirety to a user through a single download. Generally, we see packaged apps within app markets. Packaged apps share these characteristics:
Although some markets, such as Windows Store and the Amazon Appstore, enable you to submit packaged apps directly to the market without the need of any external packaging tools, if you’re trying to reach a cross-platform user base, you might consider a tool that allows you to submit to multiple markets. These markets also expose additional APIs for packaged apps that aren’t available to apps via the browser.
How to install: npm install cordova -g
External requirements: to test in emulators, the appropriate platform emulator may need to be installed:
iOS: Xcode
Android: Android SDK
Windows: Visual Studio (optional)
What it does: Installs the Cordova CLI, which will generate the base packages for each of the targeted platforms. Cordova also has tools that help you test and package for each platform.
More info at http://cordova.apache.org
One of the benefits of building a store app, compared to a plain web app, is the ability to access exciting platform features. Within Cordova, you can add plugins to provide that functionality, and Cordova normalizes the API so that the web code you write can be the same on every platform.
How to install: cordova plugin add phonegap-plugin-barcodescanner (it uses npm!)
External requirements: this plugin works with Cordova, so you must first install Cordova
What it does: provides a full fledged bar code scanner for your cross-platform packaged app, UI and all.
More info at https://www.npmjs.com/package/phonegap-plugin-barcodescanner
How to install: cordova plugin add phonegap-plugin-push (it uses npm!)
External requirements: this plugin works with Cordova, so you must first install Cordova
What it does: provides push notifications for iOS, Android and Windows Universal apps
More info at https://www.npmjs.com/package/phonegap-plugin-push
To learn more about web apps for stores and how they are built, check out these resources:
Sign up for the Weekly and get this sent to you instead!

We ❤️ hipster hackers (and we’re typing this on a MacBook Pro while drinking cold-brew &c.), but stats don’t lie, and a large percentage of developers are still avid Windows users. After our own Ben Coe broke Atom on Windows with one of the open-source projects he contributes to, he got motivated to test all of his libraries on Windows.
Good news: there are some awesome tools available that can help:
Jeff adds:
IE’s VMs have saved my ass more times than I can count.
Hoodie has a pretty swell mission:
making the lives of frontend developers easier by abstracting away the backend and keeping you from worrying about backends
Write frontend code, hook it up to Hoodie’s API, and you’re off to the races.
Hoodie itself is built on Node, so setting up the dev environment makes use of scripts defined in its package.json. Hoodie's docs helpfully spell out what each script does, explain why the environment is set up as it is, and help you consider using npm as a build tool.
We especially like their reminder that the “scripts” in a package.json don’t have to be JavaScript. Defining a command as a script tells npm to run the command whenever you use the script’s name as an argument to npm run — and it can be anything that’s executable from the console. It’s possible (and really useful) to hugely automate your builds without resorting to baroque tools.
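For instance, a package.json along these lines (the host and paths are made up) wires an arbitrary shell command into npm, and npm run deploy will happily run it with no JavaScript involved:
{
  "scripts": {
    "start": "node server.js",
    "deploy": "rsync -az public/ deploy@example.com:/var/www/site"
  }
}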
Give Hoodie’s writeup a read and give scripts a whirl.
Robots are crazy cool, but historically, the field of robotics has had some crazy high barriers to entry, not least of which are years of education and scads of funding. Nodebots represent a huge step forward in the accessibility and speed of hacking and robot building by using JavaScript to reduce the complexity of writing the code that operates robotic hardware.
If this sounds, well, crazy, check out Raquel’s Strange Loop talk, No, Really … Robots and JavaScript?!:
Let’s discuss why, of all the languages on the planet, JavaScript is the perfect starting point for a future of robotics. As a roboticist-turned-web-developer, I will provide some deep insights not only into the world of robotics, but also into JavaScript and its server-side cousin, Node.js. We’ll talk about what JavaScript-enabled robots can already do, what they can’t do yet, and what they might be able to do with a bit of elbow grease.
Crazy!

Wombat emeritus Shivani Negi has a groovy step-by-step tutorial that solves a major problem: namely, that your email inbox doesn’t have enough GIFs of baby wombats. And…
along the way we’ll make use of Giphy, nodemailer, cron, Google’s Developer Console, and Heroku.
It’s another in a continuing series of posts to familiarize newer developers — which, statistically, probably includes you — with the power & ease of using Node & npm to build neat things. Watch this space for more to come, and don’t be shy about letting us know what else you’d like to learn.
Hired connects software developers, data scientists, designers, and sales talent with over 2,500 vetted tech companies in 13 major tech hubs, probably including yours. Developers on Hired receive an average of 5 interview requests within a week. Looking for a job? Check them out.
Mornings can be rough. So, I thought, what’s better than the likes of this lil’ guy bringing some cheer to our bloated inboxes? And along the way we’ll make use of Giphy, nodemailer, cron, Google’s Developer Console, and Heroku.
Let’s go:

First, let’s make sure we have npm and node installed. You’ll also need to download the Heroku toolbelt (find more on installing Heroku here or feel free to use another application deployment tool).
Use the following command and walk through the prompts to get started with a new package.json file:
npm init
Next, there are a few handy modules we're gonna need. Use the following command to install these modules and save the dependencies to your package.json file:
npm install giphy nodemailer dotenv express cron --save
Next, create a file called “app.js”. Let’s go ahead and instantiate some modules in this file.
require('dotenv').load();
var express = require('express'),
app = express(),
giphy = require( 'giphy' )( 'dc6zaTOxFJmzC' ),
nodemailer = require("nodemailer"),
CronJob = require('cron').CronJob;
Giphy is a wonderful resource for pulling GIFs for any occasion. Giphy's API is open to the public — meaning, as a developer, you don't need a unique secret key to use Giphy's API. We're going to set up a cron job that executes every day at 8:30 am EST (UTC–5) to call Giphy and request new wombats.
Giphy’s search call allows you to specify a number of things in your query, including the keyword you’re searching for, the number of GIFs you want the call to return, and the rating of the GIFs (e.g., Y, G, PG, PG-13 or R). Let’s use some incantations to take all the GIFs we receive and only assign one, at random, to the variable gif, using the public beta key dc6zaTOxFJmzC:
new CronJob('30 8 * * *', function() {
giphy.search({
q: 'wombats',
limit:100,
rating: 'g'
}, function (err, res) {
var gifs = res.data;
var gif = gifs[Math.floor(Math.random()*gifs.length)];
We'll use Gmail to deliver the wombats to our inbox, but before we can start emailing anything, we need to register our app on console.developers.google.com so that Google doesn't block our emails.
First, click Create a project… and name it anything you like. Next, navigate to Credentials under APIs & Auth. Under Add Credentials, click OAuth 2.0 Client ID.

Make sure to include https://developers.google.com/oauthplayground under Authorized Redirect URLs. We’ll get to why this is necessary in just a minute.

When you press Create, a window will greet you with your new Client ID and Client Secret.
Next, go to https://developers.google.com/oauthplayground and click on the gear button in the upper right-hand corner. Check Use your own OAuth Credentials and enter the Client ID and Client Secret keys you just created into the respective boxes.
Under the Step 1 dropdown, set https://mail.google.com as your scope like the following:

This step will redirect us to a window asking for permission to authorize use of the Gmail API. We’ll then be redirected back to https://developers.google.com/oauthplayground. If we hadn’t included this URL under Authorized Redirect URLs back in the developers console, we’d receive an error.
Under the Step 2 dropdown, click Exchange Authorization Code for Tokens. The resulting Refresh Token is important to us. Lastly, make sure to click Auto-refresh the token before it expires.
Whew! Now, on to the code.
First, we’re going to store all our new keys in a .env file to keep them private (If you add/commit any files to GitHub, make sure to include .env in a .gitignore file). Your .env file should look something like the following:
[email protected]
MY_PASSWORD=some_numbers
MY_CLIENT_ID=lots_of_numbers.apps.googleusercontent.com
MY_CLIENT_SECRET=some_more_numbers
MY_REFRESH_TOKEN=more_numbers
Next, use nodemailer to authenticate our Gmail login.
var smtpTransport = nodemailer.createTransport("SMTP",{
service: "Gmail",
auth: {
XOAuth2: {
user: process.env.MY_EMAIL, // Your gmail address.
// Not @developer.gserviceaccount.com
clientId: process.env.MY_CLIENT_ID,
clientSecret: process.env.MY_CLIENT_SECRET,
refreshToken: process.env.MY_REFRESH_TOKEN
}
}
});
Having done this, now it’s time to add more to our cron job code — so that it doesn’t just retrieve a wombat from Giphy, but emails it too. Here’s what to add:
new CronJob('30 8 * * *', function() {
giphy.search({
q: 'wombats',
limit:100,
rating: 'g'
}, function (err, res) {
var gifs = res.data;
var gif = gifs[Math.floor(Math.random()*gifs.length)];
smtpTransport.sendMail({
from: process.env.MY_EMAIL, // sender address
to: "[email protected]", // receiver address
subject: "DAILY GIF ✔", // subject
text: gif.images.downsized_large.url // body
}, function(error, response){
if(error){
console.log(error);
}else{
console.log("Message sent: " + response.message);
}
});
});
}, null, true, 'America/New_York');
Almost there! But we need to do a few more things. Heroku expects your app to bind to the port it supplies via the PORT environment variable (5000 is a common default when running locally). So, using express, we'll tell our app which port to listen on:
app.set('port', (process.env.PORT || 5000));

app.get('/', function(request, response) {
  response.send('App is running');
});

app.listen(app.get('port'), function() {
  console.log('App is running');
  // the rest of our app lives here, wrapped inside this function
});
Create a file titled ‘Procfile’ without a file extension. This will tell Heroku the specific command it’ll need to use to run our app.
For web applications, we’d specify a “web” process which is responsible for responding to HTTP requests from users. All other processes are “workers.” These run continuously in the background. The contents of the Procfile should look like the following:
worker: node app.js
Now, use the following commands to launch your app:
git add .
git commit -m "commit message"
git push heroku master
There you have it! You should be all set to receive a cute wombat in your inbox every morning.
If you receive a cute one (this is a trick question: they’re all cute), send it our way: tweet it with #wombatlove and bask in the glory and possible npm swag. And if you run into any trouble, just reach out.
Happy wombatting!
Sign up for the Weekly and get this sent to you instead!
It’s a light Weekly this week because a good chunk of our humans are at Open Source & Feelings. If you are, too, we have three requests:
It's a profoundly important toolkit when collaborating with others in the open source community, because it can
Turn conversations away from blame and antagonism, towards meaningful connection and opportunities for growth and collaboration
Check it out. Open source is a social machine, and compassion helps keep the gears from grinding to a halt.
It wasn't much of a secret that the first versions of npm 3 have been … slower than we like. In one typical example, npm ls in npm 2 on a MacBook took around 5 seconds; the same command in npm 3 was closer to 50 seconds.
Good news, by which we mean great news: in npm 3.3.6, we’ve made huge strides at tuning performance to get things sped back up. We found one example of something going from 6 minutes down to 14 seconds. Rebecca gently suggests:
Performance just got 5 bazillion times better.
Yooge.
3.3.6 is tagged next at the moment, and will become latest next week. Let us know how it goes.
We plugged this last week, but there are still some tickets available to participate in NewCo’s first annual Oakland Festival.
Thursday, October 8 at 3pm, we hope you’ll stop by npm, Inc. World Headquarters and Yak Sanctuary. Isaac talks work/life balance; you see our digs & snag swag; everyone wins.
Tickets: NewCo Oakland. Use the discount code HC30OAK for 30% off.
If you’ve ever used JSFiddle to try out or demonstrate JavaScript snippets online, you’re gonna dig Tonic.
Tonic lets you require() any module in npm and run it right in your browser, which comes in handy for blog posts, docs, teaching, or just more easily showing off cool things.
Hired connects developers with over 2,500 vetted tech companies in 13 major tech hubs, probably including yours. Developers on Hired receive an average of 5 interview requests within a week. Looking for a job? Check them out.
Sign up for the Weekly and get this sent to you instead!
When you introduce a new contributor to a project — or just try to set up your new laptop — needing to run multiple global installations means plenty of opportunities for confusion or mistakes. Even the relatively straightforward process of locally building Angular.js still involves three separate shell commands and running “install” four times.
The good news is that npm scripting can help. The scripts section of a package.json can define all sorts of package-specific commands, like how to test the package and what runs after the application terminates, but as K.Adam at Bocoup points out,
commands used within npm script commands have direct access to locally-installed packages: this means that modules don’t have to be globally installed in order to be exposed to the user via npm scripts.
With npm package scripts, you can provide a common façade for your tools, simplify your libraries’ workflows, and get back to building great things with the best tools for the job.
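As a concrete sketch (the package names and versions are illustrative), a package.json like this needs no global installs at all, because npm puts node_modules/.bin on the PATH while scripts run. A new contributor just runs npm install and then npm test:
{
  "devDependencies": {
    "jshint": "^2.8.0",
    "mocha": "^2.3.0"
  },
  "scripts": {
    "lint": "jshint lib/",
    "test": "npm run lint && mocha test/"
  }
}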
Check out K.Adam’s thorough, excellent, explanation: A Façade for Tooling with NPM Package Scripts.
We know from search and download counts in the npm registry, and the kinds of support requests we receive, that a good number of you use npm for front-end asset and dependency management — and we build our own website with npm (delicious, delicious dog food). Exactly how much of the workflow involved in building a website can you manage and automate with npm?
Just ask Youssef Kababe:
you can literally use npm to do everything and that’s what we will do now! We’ll create a simple website by managing everything using npm!
Go follow Youssef’s step-by-step tutorial: npm-based front-end workflow.
Node is 4! But among the implications that carries is the need to recompile all of your C++ addons.
James Kyle reminds us to remind you: don’t forget about rebuild, which runs the npm build command on the folders for each package you specify.
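In practice it's a one-liner (node-sass below is just an example of a compiled addon):
$ npm rebuild            # recompile every compiled addon under node_modules
$ npm rebuild node-sass  # or just the one that started complaining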
NewCo is a pretty cool idea:
Our mission is to identify, celebrate, and connect the engines of positive change in our society.
One of the unique features of their NewCo Festivals is that they introduce neat companies by literally bringing festivalgoers to the companies — "get out to get in" — in a sort of business-conference-cum-open-studio format.
In two weeks, they’ll host the 4th Annual NewCo San Francisco and the very first NewCo Oakland Festival. We’re proud to be included. If you’re in the area, this Thursday, October 8 at 3pm, we hope you’ll stop by the new place. Isaac will speak on work/life balance, you can meet the team and pet Haggis, and there will be swag.
Tickets are here: NewCo Oakland. Use the discount code HC30OAK for 30% off.
Old and busted: npm shirts, stickers, and phone chargers.
New hotness: npm socks.
But there’s a catch. Kasey ran these off as neat gifts for npm’s humans, not for sale on the store. If you want a pair, you have to work for it.
Sam said he’d cry, but all you have to do is help us fix all our bugs. James Hartig already did, and so can you. Submit a patch, receive a patch … of fashionable 80% cotton / 16% nylon fabric, sewn into the shape of a sock.
Seems like a fair trade to us.
Hired connects Node developers with over 2,500 vetted tech companies in 13 major tech hubs, probably including yours. Developers on Hired receive an average of 5 interview requests within a week. Looking for a job? Check them out.
Sign up for the Weekly and get this sent to you instead!
If you don’t watch npm on GitHub (you should!), you missed Rebecca T.’s understated mic-drop:
the week of this release, v3.3.3 will become latest and this version (v3.3.4) will become next!!
To understand what this means for you, now’s a good time to revisit our release notes from when 3 initially came out in beta. Much goodness awaits, including an improved multi-stage installer, flatter dependencies, and new shrinkwrap behavior. Run, don’t walk, to get the scoop.
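If you'd like to kick the tires before the tags flip, something like this should work (you may need sudo, depending on how Node is installed):
$ npm install -g npm@next     # the release-candidate line
$ npm install -g npm@latest   # back to the stable tag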
In Jason Koebler’s fantastic Vice series about the challenges of getting online in Cuba, (part 1, part 2) he calls our attention to los paquetes: weekly deliveries of media and online content from the U.S. and elsewhere distributed via thumbdrive.
Our own Nick C. is preparing for a trip to Cuba and will be delivering paquetes of his own, but…
Instead of filling these with Mr. Bean videos or the latest from the land of Westeros, I’m appealing to the Node community to help me support the young developer in Cuba. Let’s provide them the open-source tools, instructions, and offers of mentorship they might need to get started with their engineering careers.
He’s opened up a GitHub repo for collecting apps, tools, and docs to distribute to Cuban computer science students who don’t benefit from the fast, free Internet which the rest of us take for granted — and soliciting our help with the curation.
Read more about Nick’s project on our blog, and consider helping out.
We use marky-markdown to parse Markdown content like READMEs for display on npmjs.com. While chasing down a memory leak, we traced the problem to oniguruma, a regex module also used by the Atom editor.
Our pull request with a fix was quickly accepted, which took care of our leak and helps out all of its other dependents.
Quoth our Ryan D.: “another win for small modules, and one less persistent leak waking me up at night.” Amen.
You’re probably new here, and that’s okay. Here’s a neat InfoWorld writeup that provides a simple npm walk-through: Inside npm: Building and sharing JavaScript packages [free registration req’d].
Once that gets you hooked, we recommend these docs, which will get you the rest of the way to becoming a ninja rockstar unicorn npm power user.
And don’t forget: at any point, if you need help, we’re here.
40% of npm registry users use Windows, but 100% of Node developers would benefit from making cross-platform development easier. Meet ievms: spin up virtual machines for testing IE6 – IE11 and MSEdge with a single command. Bringing more instances of IE6 into the world in 2015 runs the risk of summoning a helldemon apocalypse: for the sake of all humanity, please use your power responsibly.
DanceJS is a unique speaker series that aims to foster a community around audio-hacking and dance music. Early bird tickets are only $10 through 9/25.
We know there are a multitude of npm issues our users deal with every day, and a bunch of different ways to get in touch with us, so we thought it would be useful to have a single list to help you decide the best way to reach us. So here’s all the ways you can talk to us, and what they’re good for:
In about a month’s time, I’ll be going to Cuba for a week, hoping to do all the things that a traditional tourist usually does — down my share of mojitos, learn firsthand the different blends of tobacco, and indulge in a sunset dinner on a rooftop with barking dogs in the background. And having jumped into npm the Monday after leaving my old position, I’m looking forward to spending a week away.
However, since I’m going under the guise of the fairly non-descript “In Support of the Cuban People” visa, I feel it bears a bit more responsibility than just libations and victuals.
In my pre-trip research, I’ve been fairly surprised by the reports about how far behind Cuba is with regards to connectivity. A government-monitored network of 35 hotspots for 11 million people presents a certain bottleneck for progress. Recently, Jason Koebler of Vice has authored a series of insightful articles (Life, Offline and the Internet Dealers) about the challenges Cubans face when trying to connect to the outside world.
Particularly interesting: Koebler mentions the weekly delivery of “Los Paquetes”: thumb drives that arrive from the United States every Tuesday, loaded with some of the prior week’s television shows and movies. Here at npm, we often joke that thumb drives are a thing of the past, but this is the de facto way most Cubans access online content, since the alternative is a long line and a week’s wages for something resembling a dial-up connection.
This Columbus Day, ironically enough, I’ll be passing through Aeroporto de Havana with a rucksack of 50 thumb drives. Instead of filling these with Mr. Bean videos or the latest from the land of Westeros, I’m appealing to the Node community to help me support the young developer in Cuba. Let’s provide them the open-source tools, instructions, and offers of mentorship they might need to get started with their engineering careers. With your help, I hope to index, categorize, and transfer ~2GB of open-source software for the students of the Computer Science department at the University of Havana. Together, we can provide them a Paquete de Codigo Abierto: an open source package to be delivered personally. If you’re willing to help, here’s what we might do:
Please: no pirated software, propaganda, or viruses. Let’s use this as an offering of goodwill, not a political platform. I’ll be heading out on October 10th by way of Mexico City. If you’re able to contribute some time to the repo before then, I’d appreciate it. While you’re at it, feel free to drop me a line with suggestions of what else I ought to see while I’m there. For those of you local to California Norte, I can offer stories over cigars and rum upon my return.
Sign up for the Weekly and get this sent to you instead!
Russell at lob.com wrote an in-depth tutorial that shows how to build an Express.js app that prints and mails a postcard of any of your Instagram photos. It’s fantastic.
We highly recommend checking out the tutorial and a working demo. a) It’s fun to build something cool, and b) This is the kind of tutorial we hope will proliferate within the npm community. Give it a whirl and let us know how it goes.
(Our address is 1999 Harrison Street, ste. 1150, Oakland, CA 94612, USA, by the way. Oh, no reason.)
While you’re in a building mood:
If you’re an absolute beginner — or just love unleashing Markovian havoc upon your colleagues — don’t miss this from our own Shivani Negi:
How to Build A Slackbot + Deploy an App to Heroku for Absolute Beginners takes you step-by-step through getting Node.js & npm set up in your dev environment, cloning CJ’s LOUDBOT, and getting it up on Heroku.
TRUST US, IT’S COOL.
If you somehow forgot to make it to ToulouseJS this week, all is not lost. Matt Winchester sends word of his rad talk at UtahJS:
Why we love the letters n p m: Ninja parade month? Nixon’s Presidential mistakes? Just what does npm mean? What does it do? Can it benefit front end development? We’ll take a tour of npm’s package making features. From installing to linking to semver, I’ll neatly present machinations that can benefit your development!
It’s coming up Friday, September 25. Tickets are here.
We’re working on big improvements to package discovery and publishing and managing private modules, so we’re looking for willing guinea pigs to share opinions and feedback. In exchange for participating in a 30-minute usability session, you get the thanks of a grateful wombatariat; also, free things.
Odds are none of us intends to exclude or hurt fellow members of the community, but polarizing and gender-favoring language has a way of slipping into what we write. Sometimes it’s a big help to have a second set of eyes that can look things over, notice what we’ve overlooked, and nudge us towards being more considerate and inclusive.
Check out alex, which helps “catch insensitive, inconsiderate writing” by identifying possibly offensive language and suggesting helpful alternatives.
Hired connects Node developers with over 2,500 vetted tech companies in 13 major tech hubs, probably including yours. Developers on Hired receive an average of 5 interview requests within a week. Looking for a job? Check them out.
Sign up for the Weekly and get this sent to you instead!
Today in good news you can use, a neat story about NodeRedis.
A while back, Matt Ranney gave the module redis to the community. Just this week, contributors got the first release in a year out the door.
It’s great work by the team, and also a great reminder that if you don’t have the time to maintain something anymore, the npm community is here to help.
Moving your module into a GitHub organization is a useful first step; then email your main contributors and give them ownership on npm and commit access on GitHub. It turns out that being given ownership in a project helps contributors feel proud to jump in and help. Win!
Have an orphan project in need of adoption? Tell us.
We didn’t get Jon Q to draw us a wombat avec baguette, but don’t let that stop you. If you find yourself in Toulouse next week, don’t miss Maxime Warnier’s talk at Toulouse JS: npm: et s'il ne devait en rester qu'un (“npm: There Can Be Only One” — Highlander jokes transcend borders and tongues).
The talk is Tuesday, September 8th in Toulouse; tickets (free!) are on Eventbrite. Send a postcard.
(And: giving a talk about npm? Don’t forget to tell us; we’ll help spread the word.)
You: have always wanted to know which versions of your npm package the world is using, and view npm packages and their dependencies as a graph.
npmrank: uses PageRank to identify popular or important packages. There’s a neat online demo to sort packages by PageRank, and Andrei also gave us a rundown of the top 100 packages with the most dependencies and the top 100 packages upon which the most other packages depend.
Check it out.
npm2dot converts an npm dependency list to a dot file so you can visualize it in Graphviz. Pretty! It’s also a neat way to see what we mean when we say that npm 3 (coming soon!) flattens dependencies:
On the left, a package’s dependencies installed using npm’s current version. On the right, the same dependencies installed using npm 3: it’s easy to see a flatter structure and fewer nodes.
For a good time, try using this to compare Express’ development environment and production environment:
Are you as brave as @therockywounded? As strong as @danmactough? As noble as @AaronMakeStuff? Yes. Yes, you are.
Are you also a heavy user of npm Private Modules? Our own Nick Cawthon is working on improvements to publishing and managing private modules, and needs your help. Sign up here for a 30-minute usability session to influence the product and score neat stuff.
This week the Weekly is a little thin, and a little late, because we’ve been busily relocating to npm’s new world headquarters along Lake Merritt in Uptown Oakland.
The move reflects our growth to 23 humans — and counting — and gives us space for more classes, events, and wombat petting zoos. If you’re in the area, let’s chill.
Hired connects Node developers with over 2,500 vetted tech companies in 13 major tech hubs, probably including yours. Developers on Hired receive an average of 5 interview requests within a week. Looking for a job? Check them out.
Back in ye olde days of the internet, there was IRC (Internet Relay Chat), a chat network with channels and servers for every microcosm in the world. Folks would make these little bots or services — they’d greet you when you joined a channel, or play trivia, Uno, and other chat-based games with you. Unfortunately, if you wanted to make one yourself, you likely needed some knowledge of Perl, client-server architecture, and a handful of Unix commands.
Today, most will opt for a more aesthetic alternative to IRC. Namely, Slack — a chat-based team communication tool. The best part? You can create a Slackbot with substantially fewer roadblocks.
I’ll try not to make assumptions about the technologies you may or may not have on your machine here, so let’s go ahead and start from scratch. If you already have any or all of the following downloaded, you’re ahead of the game!
1. Install Node.js and npm
Mac
Go to nodejs.org, click ‘Install’, and run through the install process.
Ubuntu
You should be able to use the following:
curl -sL https://deb.nodesource.com/setup_0.10 | sudo -E bash -
sudo apt-get install -y nodejs
More installation help at https://github.com/nodesource/distributions#deb
Windows
Go ahead and download the Windows binary.
2. Create a GitHub account (https://github.com/) and download git (http://git-scm.com/download)
3. Clone this repository https://github.com/ceejbot/LOUDBOT-SLACK by using the following command in your terminal:
git clone git@github.com:ceejbot/LOUDBOT-SLACK.git
4. Create an account and download the Heroku Toolbelt on your machine: https://toolbelt.heroku.com/
Phew. Now that we have all that installed, on with the tutorial!
1. Go into your Slack group, click on the caret and then “Configure Integrations” in the dropdown

2. Under ‘All Integrations’ → ‘DIY Integrations & Customizations’, click on ‘Outgoing WebHooks’

3. The next screen should look like the following. Go ahead and click on ‘Add Outgoing WebHooks Integration’

4. Set the channel to the specific channel you would like your slackbot to be active on, and copy the corresponding token (Rest assured, we’ll be able to add our slackbot on multiple channels. We’ll get to that in a bit)

5. Open a terminal, navigate to the directory you cloned, and use the following command to create your own file to store environment variables. (Note that the project’s .gitignore already includes this .env file.)
cat .env.example > .env
6. Next, using a text editor of your choice, open the loudbot directory
7. Open .env and replace the contents of TOKENS with the token you copied from Slack. Now, I promised you could add loudbot to multiple channels: to do this, repeat the above steps to add outgoing webhook integrations for as many channels as you would like. Just remember to copy the corresponding tokens into the TOKENS list in .env.
8. To complete the SLACK_TOKEN field, we’ll have to revisit the Slack ‘Configure Integrations’ page. Under ‘DIY Integrations & Customizations’, there should be a service called ‘Bots’

9. Create a new Bot, name it ‘LOUDBOT’ and copy the ‘API TOKEN’ under Integration Settings into the SLACK_TOKEN field in your .env file.
1. Next, we need to put loudbot on a server so it can constantly listen and respond to our Slack messages.
2. Create a file entitled ‘Procfile’ without a file extension. This is the file that tells Heroku what commands to use to run your application. The contents of the file are a single line that should be the following:
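As a rough sketch (the server.js filename here is an assumption; check the repository’s package.json for its actual entry point), a web-process Procfile for a Node app that answers Slack’s outgoing webhooks looks like:
web: node server.js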

3. Finally, in your terminal, within your loudbot repository, run the following Heroku commands.
heroku create
This will create a new heroku app (note you can only have a max of 5 heroku apps on the free plan). Git remotes are references to remote repositories and this command will also create a git remote you can reference as ‘heroku’ on the command line.
git add .
git commit -m "my commit message"
These are git commands that’ll help track your changes to the app. They will commit your changes to your local repository, in preparation for deploying your app to Heroku.
git push heroku master
This will actually push your app to Heroku.
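A final note: because .env is listed in .gitignore, it won’t be pushed with the rest of the app. If the bot reads its tokens from the environment on Heroku (a reasonable assumption, though check the repo’s README), you’ll probably also want to set them there and tail the logs to confirm everything boots. The values below are placeholders:
heroku config:set TOKENS=your-webhook-token SLACK_TOKEN=your-bot-api-token
heroku logs --tail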
And there you have it — your app is live. LOUDBOT learns from your shouting and you can talk with LOUDBOT in any channel it is active in using all-caps. Enjoy!

Sign up for the Weekly and get this sent to you instead!
Here’s a tidy little thing that was added to git a while back:
git config push.followTags true
Once you set this config in git, any tags that you create with npm version will automatically be grabbed and included when you make a regular push to GitHub, or wherever your repos are. No more needing to remember to specially handle your tags!
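Here’s a minimal sketch of the resulting workflow (the patch bump and tag name are just examples):
git config --global push.followTags true   # or per-repo, as above
npm version patch                          # commits and tags the new version, e.g. v1.2.4
git push                                   # the tag now travels along with the commit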
This has been available for a few versions of git, but its (re)discovery within the npm office was met with joy.
“Whoaaaaaa!” —an npm, Inc. developer
“Suddenly tags aren’t a massive pain in the ass!” —another npm, Inc. developer
Give it a shot.
Also on the topic of simplifying your workload…
If there’s a package.json file in your project, you have the opportunity to include time-saving scripts for your development process. npm test is the most common — in fact, it’s included by default when you initialize a new package.json — but there’s much more you can script to cover both testing and what code gets run after the application is terminated.
Learn more in this guest post from Kenneth Ormandy of Surge.
Dave from Croissant took the time to list out (and alphabetize) the 40 (!) npm modules their team can’t live without: Sharing is caring.
40 covered, 178,221 to go. Want to tell us about your favorites?
If you weren’t able to attend BrazilJS, you missed not only a collectible npm USB charger, but a fine keynote from our own Laurie Voss.
You’re on your own for the chargers — your battery’ll last longer if you just turn down the screen brightness… — but if you want Laurie’s slides, you’re in luck: npm past, present & future.
Each week, we ask for volunteers to help our Nick Cawthon review upcoming improvements to package discovery, registry pages, and npm feature management, and to share your feedback. This can bring not only lucre and fineries but the immortality of fame. Just ask these heroes:
Sign up here for a 30-minute UX session.
It’s so exciting that React’s core and renderer are now separated! This allows custom renderers for any browser! Even things that aren’t browsers! Like… the command line. We don’t know why you’d want to do that, but you can!
Behold: react-blessed, a React custom renderer for blessed.
Hired connects Node developers with over 2,500 vetted tech companies in 13 major tech hubs, probably including yours. Developers on Hired receive an average of 5 interview requests within a week. Looking for a job? Check them out.
If there’s a package.json file in your project, you have the opportunity to include time-saving scripts for your development process. npm test, the most common example, is actually included by default when you initialize a new package.json file. Most Node.js based projects make use of this pattern, and it’s increasingly common for front-end projects, too.
Knowing you can run npm install; npm test within a repository you don’t know much about yet is reassuring. This is an increasingly common first step after cloning a project. Beyond this, the script section of your package.json could cover,
everything from how the package should be tested to what code should get run after the application is terminated.
—K.Adam White, A Facade for Tooling with NPM Package Scripts
We use these scripts heavily while working on Surge. npm’s pre- and post- scripts, however, are employed much less often. They help you automatically run some tasks before or after others, and they can be used to make your project much friendlier to new developers—whether they are new to JavaScript, npm, or just to your project.
Many projects will include a test script that looks something like this in the package.json file:
// …
"scripts": {
  "test": "node test/my-tests.js"
}
Let’s imagine you’d like to add code linting next, using Standard, to improve consistency. First, add Standard to your project by running the following command in your terminal:
npm install --save-dev standard
Now, the latest stable version of Standard has been added to your devDependencies in your package.json. The Standard README recommends you add it to your package.json test script, like so:
"scripts": {
"test": "standard && node test/my-tests.js"
}
This script works well at first: you are now linting your JavaScript code and then running your tests. This should help improve code consistency by making linting a routine action, and you’ll need to worry less about nitpicking inconsistencies from contributors.
But this isn’t particularly friendly to someone new to your project who’s just looking to run the tests: as soon as they start contributing, the linter will vocalize concerns over their use of semicolons or extra whitespace. Unfortunately, a linter yelling at a potential contributor when they are just trying to dig into the code a little is an effective way to deter them from contributing at all. It doesn’t matter if they are a new developer or just new to your project.
For greater flexibility, let’s make each step a separate script:
"scripts": {
"lint": "standard",
"test": "node test/my-tests.js"
}
Now, you can run your linter whenever you’d like, independently of your tests:
npm run lint
—but there is something convenient about having it run automatically as part of your tests. Luckily, npm’s pre- and post-run scripts can take care of that.
Rather than running your linting as part of your test script, consider running it as a subsequent step, only if the tests pass:
"scripts": {
"lint": "standard",
"test": "node test/my-tests.js",
"posttest": "npm run lint"
}
Potential contributors will be spared the syntax warnings until their changes make the tests pass, at which point they might actually be ready to prepare a pull request.
Your code is tested and linted, so there’s no excuse not to deploy it! Surge helps front-end developers publish any directory of static files. Pass a folder to surge, and it can be published for free, with a subdomain or custom domain. First, install Surge as a development dependency:
npm install --save-dev surge
Then, add a deployment run script to your package.json:
"deploy": "surge ./path/to/dist"
Using the command npm run deploy in your terminal will start the publishing process.
Running the tests before each deploy shouldn’t be something you need to think about, however. Just as you added linting after your tests with a post-run script, why not run the tests before deploying with a pre-run script?¹
"scripts": {
"lint": "standard",
"test": "node test/my-tests.js",
"posttest": "npm run lint",
"predeploy": "npm test",
"deploy": "surge ./path/to/dist"
}
Prepending pre or post to any run script name will automatically run it before or after the root task. In this case, running npm run deploy will automatically trigger predeploy, running the tests before the project is deployed to Surge.
These run scripts are also available in an example repository on GitHub.
This approach doesn’t require the project to actually take advantage of npm run scripts proper; Grunt, Gulp, or any other build tool can still be used while aliasing the most common commands to npm run scripts:
"scripts": {
"start": "gulp",
"test": "gulp test",
"posttest": "gulp lint",
"deploy": "gulp deploy"
}
Use your best judgement on how many of these to include, but including some basics can make getting started with your project a lot clearer without someone needing to read your Gulp- or Gruntfile.js.
Run scripts help summarise common tasks within your project, and pre- and post-run scripts can order those tasks in a more friendly manner. Try adding one to the next package.json file you find yourself editing.
¹ Update 2015-09-01: An earlier revision of this post incorrectly read "posttest": "npm run test" here instead of "posttest": "npm run lint". We’ve fixed it. Thanks to commenters including Joe Zimmerman for making the catch!
Sign up for the weekly and get this sent to you instead!
You might have noticed that it’s possible to install npm@latest, which fetches npm 2.13.5, or npm@beta, which fetches npm 3.3.0. Those tags, latest and beta, are examples of distribution tags, and they’re pretty great.
Tagging enables multiple streams of development — think “stable” and “canary” — in order to test out changes before they’re widely used by everyone, and lets users decide whether they want to opt in to testing new things or just want your package to work. (Consider using latest to mean “the newest we intend most people to use”, not just “the newest.”)
Add, remove, and enumerate distribution tags with dist-tag.
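For example (mypkg below is a stand-in for your own package):
npm dist-tag ls mypkg                     # enumerate a package’s existing tags
npm publish --tag beta                    # publish without moving `latest`
npm dist-tag add mypkg@2.0.0-beta.1 beta  # point a tag at an already-published version
npm install mypkg@beta                    # users opt in to the beta stream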
(Also on the subject of versions: if you missed it, check out last week’s explainer on semantic versioning.)
Over 40% of the developers who use npm’s registry are running Windows — a number which more or less held steady for a year, and now is increasing month over month — so anything that improves the process of Node development on a Windows machine has a major impact.
The good news is that we have a monthly check-in with Microsoft’s Visual Studio team to keep an open line and make things more awesome. The also-good news is that you can help. Do you develop in Windows? What works, what hurts, and what’s on your wish list? Let us know: tweet, email, or reach out through any of the other usual channels.
We have received actual fan mail about our upcoming Orgs support for teams and groups, and we hope you’ll love it too. Want to try this out before our release later this fall? Sign up for the beta.
We’re also still looking for volunteers to participate in a 30-minute usability session to shape upcoming improvements to package discovery, registry pages, and managing npm features. This is your chance to influence a tool used over 2 billion times in the last month alone, and score neat swag. Can you lend a hand?
Angela Eichner comes to us this week from Loggly to help bring npm to the largest names in technology, finance, and commerce. If you’re looking to support npm for large teams, self-host the npm registry on premise, or talk custom solutions, get in touch.
What are you working on? Who’s doing inspiring things? What projects could use a hand?
Drop us a note with your suggestions for the Weekly. Just reply to this email with your ideas.
standard is a module that enforces consistent JavaScript style in your project.
semicolons requires semicolons and will throw an exception if every line does not end with one.
We absolutely do not recommend attempting to use both of these at the same time.
Hired connects Node developers with over 2,500 vetted tech companies in 13 major tech hubs, probably including yours. Developers on Hired receive an average of 5 interview requests within a week. Looking for a job? Check them out.
Sign up for the weekly and get this sent to you instead!

When you execute npm install in a clean project directory, package.json specifies the version of each dependency that gets installed. But npm makes it possible to specify a range of accepted versions instead of exact version numbers.
If this sounds a) awesome and b) worth understanding in detail, we highly recommend this neat ByteArcher explainer on Semver.
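As a quick, illustrative taste (package names made up): an exact version pins a single release, a tilde range like ~1.2.3 accepts patch updates (>=1.2.3 <1.3.0), and a caret range like ^1.2.3 accepts minor and patch updates (>=1.2.3 <2.0.0):
"dependencies": {
  "exact-pkg": "1.2.3",
  "patch-updates-pkg": "~1.2.3",
  "minor-and-patch-pkg": "^1.2.3",
  "explicit-range-pkg": ">=1.2.0 <2.0.0"
}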
The Cyberwizard Institute is an open, collaborative, and free programming school based out of Oakland’s Sudo Room hackerspace. They’ve got a nice workshop on using npm, and many many more helpful videos on their YouTube channel.
How do you improve upon perfection? One way to find out. Actually, two!:
As part of our ongoing focus on making developers’ lives easier, we need your help participating in a 30-minute usability session with Nick Cawthon, our new head of design & UX. Laugh, cry, screenshare — and help us improve our product.
Coming this fall, Orgs will streamline the process for delegating permissions, managing roles, and collaborating with your team on publishing and sharing packages. But first, we need a small number of teams to help with testing and feedback. If you and your team would like early access to our newest feature, sign up here to help out.
Oh geez, we’re still growing.
If you’re not doing Node.js in your job, and you’d like to be, get a load of these companies looking for JavaScript and Node.js developers. (Have companies to add to this list? email [email protected]!)
Holy wah, a new record: in the last 30 days, npm users have downloaded over 2 billion packages. Also: in the 20 months since npm, Inc. started, the registry got 300% bigger, and registry traffic has grown 1100%.
We love hearing from you. Just drop us a note with your suggestions for the Weekly and heads-ups of what’s up.

Of the 173,288 (and counting!) packages on the npm registry, this is possibly the most edible. Here’s a fun package for ordering pizza from Domino’s: npm: dominos.

ohai! After a lengthy interview process (which I was lucky enough to come out of the wormhole still intact), I’m still dusting off my desk, cursing the unholy marriage of Apple and the USB-C standard, all the while systematically forgetting every password I’ve ever known.
I’m coming from a wonderful two-and-a-half years with the good folks o’er there at Loggly, now returning to my native Eastern Bay Area to join the union of npm on this crazy new thing called JavaScript. It is a strange, obscure language that has something to do with DHTML… I think.
OK - down to business. I’m here to do a few things: 1) to help lead Product Design toward a more intuitive and organized workflow, as mostly represented through the WWW site; 2) to guide the creative inputs of this wonderful brand as they manifest across any number of different points, from product pages to data visualizations; and 3) lastly, to establish a research practice where we take the voice of you (yes, that’s right - YOU!) the user… and make sure that it is paramount in everything we decide to design and develop going forward.
On that last note, if you’d be OK with spending a half-hour of your time walking me through a screenshare of some of your common workflows with npm - I’d love to hear from you. I want to see your messy room - how you resolve your conflict errors, your upgrade / hierarchy workarounds, how you determine which one of those five image-resizing modules you eventually decide to use, and (hopefully) how you use npm to really shine. Your feedback will go a long way to helping our product evolve, and I can promise you some wombat and/or bitfont-themed apparel in return (may take 6-8 weeks for delivery).
<shameless self-promo> Also, if you’d like to learn more about user research, there’s a talk at SxSW that I hear is going to be really good. </shameless self-promo> All kidding aside, I’m honored and excited to start working with you, the hoomans and wombats of node.
Nick Cawthon
[email protected]
Sign up for the weekly and get this sent to you instead!
Do you have a friend who wants to get started with Node.js modules? Here’s a great tutorial from @jdaudier.
Learning about npm is easier when you’re doing it with other people. If you’re giving a talk about npm, we’d love to feature it in the Weekly. Just reply to this note or fill out this form with your talk’s details.
Holy wah, you’ve been busy. Last week, the npm community
If you don’t follow Laurie on Twitter, you’ve missed more amazing stats-’n’-graphs. See, e.g., npm requests per region, and the resulting global traffic graph.
We frequently get questions about npm’s download stats — what do and don’t we count?, are an author’s package download counts “real”?… — so here’s a detailed post for the curious: how npm download counts work.
If you’re having trouble using a module, try npm owner ls.
It’s a one-step way to find a module’s contributor so you can get in touch. Also, npm bug will open the GitHub issues page in a new browser window.
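For example, with a well-known package name standing in for whichever module is giving you grief:
npm owner ls express   # list the maintainers of a package
npm bugs express       # open the package’s issue tracker in your browser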
And of course, if you’re having trouble installing a module from our registry, we’re always here to help: [email protected].
What are you working on? Who’s doing inspiring things? What projects could use a hand?
Drop us a note with your suggestions for the Weekly. Just reply to this email with your ideas.
npm i baby -g
—Adam Bretz (@adambretz), July 21, 2015
Sign up for the weekly and get it sent to you instead.

We’re on the shortlist for the net awards Game Changer of the Year! Thanks to everyone who nominated us.
Wondering how we got to npm 3? Rebecca, Forrest, and Kat give you the full backstory and latest news on the NodeUp Podcast.
It turns out there was a bug in [email protected] (which made it hard to download modules locally), so Rebecca released a new version out of cycle yesterday. You can upgrade with npm install -g [email protected].
If you want to upgrade npm on windows, check out npm-windows-upgrade.
Upgrading npm on Windows requires manual steps to ensure that PowerShell/CMD find the new version of npm. This is a small tool made by Microsoft DX engineers with :heart: for npm and Node, reducing the process to a simple command.
Stephan Bönnemann explains the confusion around the prepublish lifecycle script and why its quirky behavior is actually more helpful than it may seem.
Want to learn how to Node? Come to the npm office this weekend to hang out with other learners and mentors at NodeSchool Oakland. Space is limited, so be sure to sign up.
If you want to follow along from home, you can check out all of the workshops and download them using npm.
Sign up for the weekly.
If you’re an npm@3 beta tester, be advised: the tags for downloading the latest and next versions of npm@3 have changed. They are now [email protected] and [email protected]. Check out the CHANGELOG for more info. Also, thank you to all of you for helping us squash all those bugs!
Kat also snuck a lot of things into npm@2 while Forrest was away, so check those out in the CHANGELOG.
Rebecca talked to InfoQ about npm@3, particularly how the changes to the installation process will help Windows users.
We’ve alerted private modules users via email, but in case you missed it there was a security issue with private modules last week. Metadata about private modules was leaked, but package contents and private user information were not. Read more in the post-mortem.
If you’re working with React.js components, you’ll probably want to listen to the React Podcast on Webpack vs Browserify as tools for managing your front-end dependencies. The discussion starts around the 7-minute mark.
On the same subject, Lin explained how you can use Browserify with npm (and Babel) for front-end package management at jQuerySF last month and will be talking about Webpack at React Rally in August.
In his AMA, Sindre Sorhus explains how he uses npm as a way to capture his code snippets.
Why copy-paste when you can require it and with the benefit of having a clear intent. Fixing a bug in a snippet means updating one module instead of manually fixing all the instances where the snippet is used.
One of the most amazing parts of working at npm is the passion of our userbase. People make fan art. A fan invented our mascot. This stuff just happens.
Yesterday at lunch, we got into a run of punny song titles based on musicals, leading to a series of tweets about npm the musical. By late that evening, what landed in our inbox? Full lyrics to one of the songs, courtesy of the astonishingly creative Revin Guillen. Not only is it a catchy song, it’s a surprisingly thorough overview of npm’s features and functionality. We are blown away.
(To the tune of I Am The Very Model Of A Modern Major General.)
I am the very model of a modern package manager
I’ve information current, deprecated in my cache-ager
I know the code you write has other modules (“mod-you-uhls”) it depends on
And I collect it all for you from first the moment you log on
My registry with calculations teeming mathematical
Dependencies relationshipped by edges linked quite graph-ical
The versions, stars, and issues, bugs all published for the world to see
With many cheerful links to source control for the reposit'ry
With many cheerful links to source control for the reposit'ry
With many cheerful links to source control for the reposit'ry
With many cheerful links to source control for the reposiposit'ry
I’m very good at testing, tagging, publishing your libraries
I find the dupes and de- the dupes and you can rebuild all of these
In short, in matters JavaScript, dependency, and modular
I am the very model of a modern package manager
In short, in matters JavaScript, dependency, and modular
It is the very model of a modern package manager
I know your repo’s history, can shrinkwrap all the deps you need
I answer all the queries, many millions served today indeed
I quote the docs to you when asked, or open them in Chrome at least
I’m active all the time, in time zones all the way from west to east
I can uninstall, unlink, unpublish, unstar, or just un
I know your login name and I can whoami for everyone
My interface is quick and you can write instructions easily
Abbreviate commands so they’re as terse as ‘r’, ’s’, 'i’, and 'c’
Abbreviate commands so they’re as terse as 'r’, ’s’, 'i’, and 'c’
Abbreviate commands so they’re as terse as 'r’, ’s’, 'i’, and 'c’
Abbreviate commands so they’re as terse as 'r’, ’s’, 'i’, and 'i’ and 'c’
Then I can run-script all the things, lifecycle hooks I can perform
And tell you all about a package: view 'f you want me to inform
In short, in matters JavaScript, dependency, and modular
I am the very model of a modern package manager
In short, in matters JavaScript, dependency, and modular
It is the very model of a modern package manager
In fact when I know what is meant by “version” or (heh) “verison”
When I can give you scripts to help your shell do tab comp-uh-letion
When certain variables are all there in the environment
I enter “plumbing mode”; my output’s based upon the arguments
When I have learnt what progress has been made by npm upgrade
Or npm update, when identical operations made
In short, when registry’s upgraded underneath transparentlyj
You’ll say a better package manager had never ran à ceej
You’ll say a better package manager had never ran à ceej
You’ll say a better package manager had never ran à ceej
You’ll say a better package manager had never ran à ran â ceej
For my packager'il knowledge, though I’m plucky and adventury
Has really really taken off since funding came in ventury
But still in matters JavaScript, dependency, and modular
I am the very model of a modern package manager
But still in matters JavaScript, dependency, and modular
It is the very model of a modern package manager