For about a year now, npm’s engineering teams have been writing RFCs as a routine part of our design process. I’m going to talk a little bit about why we do this, what we get from it, and what it can’t do for us.
I’ve been doing something like this for a while on my own, as part of my own design process for coding projects. I often practice readme-first development: I write the README for a project, with usage examples and a sketch of the API, before I write any code. This helps me think through how I want my users to interact with my library. This works for small projects, or projects where the discovery process is limited to how people will want to use it.
For larger projects, or ones where I’m not sure in advance what the solution should look like, I use a different process. Here, I write things I call one-pagers, named for about how long they should be. These start with a problem statement, continue with an exploration of useful facts about the problem, then move on to a proposal for a specific solution. These are useful to me even if nobody else ever reads them, because the act of writing them makes me slow down and think about what I’m proposing to do.
npm’s RFC process is an elaboration of the one-pager process, but renamed to reflect that seeking out feedback on the document is just as important as writing it. It’s a request for comment!
Let’s get some comments!
npm’s more formalized RFC process adds a bit more to the document than I used to include, because it is intended for a wider audience. Our RFCs have four sections: a problem statement, background, possible solutions, and a proposal.
I’ll go into detail about what each of these sections is and why it’s helpful for us.
Often, your hardest task is identifying the problem clearly and specifically.
Sometimes the “problem” arises from non-engineering concerns: “we need to implement this new feature” is a good trigger for an engineering design proposal. Sometimes you’re fixing a bug. Other times, it’s more vague to start with: “this system needs to scale” or “this user action is too slow.”
The goal of writing the problem statement is to focus on this problem instead of others that might be just to the side. It also pushes you to consider the properties of a good solution, which you must understand when you’re deciding what action to take.
I think of this as the research section: How did we get here? Why do we need to act on this now? What’s contributing to the situation? Writing this section pushes you to discover constraints on the solution or requirements that might be lurking. Knowing why a current system behaves the way it does is handy when you’re evaluating a replacement for it.
Writing this section is meant to help you understand the problem fully and describe it clearly enough that your colleagues can understand it too.
Is there more than one way to solve this problem? Probably! If you haven’t come up with more than one solution yet, try to do so now. Ponder the possibilities and pull apart their tradeoffs. The research work you did writing the background section sets you up to do this.
The goal of writing this section is to shift you into thinking about what you’re going to do next.
Okay: now you drive a stake into the ground and have an opinion. What’s your proposal for solving the problem?
Answer these questions if you can:
* Why does this proposal meet the constraints better than other solutions?
* What are your solution’s requirements? These might range from the technical (“this will be a breaking change, so we’ll need to release a major version number bump”) to the human (“the support team will need a new tool to view this data”).
* What are the consequences of your choice?
* Which systems will need to change?
* Which systems will be replaced?
Go into as much detail as you can, given that you haven’t started implementation yet. By trying to answer these questions, you may conclude that you don’t know enough yet, and you need to do more research — and proposing to do more research is a perfectly reasonable proposal.
Be prepared: this section will be the focus of your colleagues’ commentary when they read it. Also, don’t be afraid to make a strawman proposal that you know will be replaced. Reacting to a strawman might be just what your team needs to inspire some new ideas.
A request for comment is not complete until you’ve requested the comments!
Share your RFC with your colleagues and solicit their feedback. At npm, we do this by posting the RFC to a specific channel in our company Slack. We tag in people whom we especially want feedback from, but the entire company is welcome to read and comment on RFCs.
Your colleagues will have insights you don’t, on both the problem and your proposed solution. Use this feedback to revise your RFC as necessary and tune up your proposal. You might need to revise significantly and get another round of feedback, or you might be ready to start implementation work.
The goal of writing an RFC is not to produce a document that gets followed as if it were a spec. The goal also is not to be correct and marvelously brilliant in your first proposal. The journey is the point of writing an RFC.
Writing an RFC pushes you to think clearly about the problem you want to solve and to do research into it. By discussing it with your colleagues, you improve both your understanding of the problem and your understanding of the tradeoffs required by possible solutions.
I guarantee that your team is collectively smarter than any one person in it, even you. Together, you can solve harder problems than any of you could alone, so use this collective brain! It might take practice — it’s not always fun to have your ideas challenged — but once you can receive, consider, and incorporate others’ feedback, you will be smarter for it, and your eventual solution will be better.
Note the corollary: when you comment on an RFC, your job is to be constructive and collaborative.
Another interesting corollary: you can tune the RFC focus to suit the needs of your team. Do you, as a team, need to spend more time thinking through the operational requirements of your work? Make that a required section of the RFC. Do you always forget about support tools? Your RFC template can remind you. Encourage your team to think about the parts of the process it needs to think about more.
Don’t add too much overhead to this document, though. Don’t let this process consume too much time or be an excuse for procrastinating or over-architecting.
A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness. —Alfred Korzybski, Science and Sanity (1933)
You should emerge from writing an RFC with a better map of the path ahead of you, but you still need to put on your boots and hike. Happy trails.
npm has only been a company for 3 years, but it has been a code base for around 5–6 years. Much of it has been rewritten, but the cores of the CLI and registry are still the original code. Having only worked at npm for a year at this point, I still have a lot left to learn about how the whole system works.
Sometimes, a user files a bug which, in the process of debugging it, teaches you some things you didn’t know about your own system. This is the story of one of those bugs.
Over the past week or so, several people filed issues regarding some strange truncating in npm package pages. In one issue, a user reported what appeared to be a broken link in their README:

Another user pointed out that the entire end portion of their README was missing!
As a maintainer of npm’s markdown parser, marky-markdown, I was concerned that these issues were a result of a parsing rule gone awry. However, another marky-markdown maintainer, @revin, quickly noted something odd: the description was cut off at exactly 255 characters, and the README was cut off at exactly 64kb. As my colleague @aredridel pointed out: those numbers are smoking guns.
Indeed, an internal npm service called registry-relational-follower was truncating both the READMEs and descriptions of packages published to the npm registry. This was a surprise to me and my colleagues, so I filed an issue on our public registry repo. In nearly no time at all, our CTO @ceejbot responded by saying that this truncation was intended behavior(!) and closed the issue.
“TIL!” I thought. And that’s when I decided to dig into how the registry handles READMEs… and why.
Before I dive into exactly what happens to your packages’ READMEs, from the moment you write & publish them to the moment they’re rendered on the npm website, let’s address the 800-lb gorilla in the room:
When I discovered that the registry was arbitrarily truncating READMEs, I thought: “Seems bad.”
Maybe you thought this, too.
Indeed, at least one other person did, commenting on the closed issue:
This may be desired by npm, but I doubt any package authors desire their descriptions to be truncated. Also, see zero-one-infinity.
I should point out that commenting negatively on an already closed issue isn’t the best move in the world. However, I appreciated this comment, because it gave me new words to explain my own vaguely negative feelings about this truncation situation — fancy words with a nice name: The Zero One Infinity rule.
The Zero One Infinity rule is a guiding principle made popular by Dutch computer scientist Willem Van der Poel, and it goes as follows:
Allow none of foo, one of foo, or any number of foo. —Jargon File
This principle aims to eliminate arbitrary restrictions of any kind. Functionally, it suggests that if you are going to allow something at all, you should allow one of it or an infinite number of it. It aligns with a closely related rule: the Principle of Least Astonishment, which states:
If a necessary feature has a high astonishment factor, it may be necessary to redesign the feature.
In the end, these principles are fancy, important-sounding ways of saying: arbitrary restrictions are surprising, and we shouldn’t be surprising our users.
Now that we can agree that surprising users with strange and seemingly arbitrary restrictions is no bueno … why does the npm registry currently have this restriction? Certainly npm’s developers don’t want to be surprising developers, right?
Indeed, they don’t! The current restriction on description and README size is a Band-Aid that npm’s registry developers were forced to apply as a result of the original architecture of the npm registry: large READMEs were making npm slow.
How the heck…, you might be thinking. Reasonable. Let’s take a look.
Currently, here is how your READMEs are dealt with by the registry:
When you type npm publish, the CLI tool takes a look at your .npmignore (or your .gitignore, if no .npmignore is present) and the files key of your package.json. Based on what it finds there, the CLI takes the files you intend to publish and runs npm pack, which packs everything up in a tarball, or .tar.gz file. npm doesn’t allow you to ever ignore the README file, so that gets packed up no matter what!
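If you’re curious what actually ends up in that tarball, you can build one yourself. This is a quick sketch; the filename below is a placeholder, since npm pack names the tarball <name>-<version>.tgz:

npm pack
tar -tzf your-package-1.0.0.tgz   # everything lives under package/, including package/README.md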
When you type npm publish, your README gets packed into a package tarball. This is what gets downloaded when someone npm installs your package. But this is not the only thing that happens with your README.
So while npm publish runs npm pack, it also runs a script called publish.js that builds an object containing the package’s metadata. Over the course of your package’s life (as you publish new versions), this metadata grows. First, read-package-json is run and grabs the content of your README file based on what you’ve listed in your package.json. Then publish.js adds this README data to the metadata for your package. You can think of this metadata as a more verbose version of your package.json — if you ever want to check out what it looks like, you can go to http://registry.npmjs.com/. For example, check out http://registry.npmjs.com/marky-markdown. As you’ll see, there’s README data in there for whichever version of your package has the latest tag!
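If you’d rather poke at that metadata from the command line, it’s one curl away. This sketch assumes you have jq installed for pulling out the field; jq isn’t part of npm:

curl -s http://registry.npmjs.com/marky-markdown | jq -r '.readme' | head -n 20

That prints the first few lines of the README stored in the package metadata, which corresponds to the latest-tagged version of the package.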
Finally, publish.js sends this metadata, including your README, to validate-and-store… and here is where we bump into our truncation situation.
npm publish sends the entire README data to the registry, but the entire README does not get written to the database. Instead, when the database receives the README, it truncates it at 64kb before inserting.
This means that, while we talk about a package on the npm registry as a single entity, the truth is that a single package is actually made up of multiple components that are handled differently by the npm registry’s services. Notably, there’s one service for tarballs and another for metadata, and your README is added to both.
This means that the registry has 2 versions of your README:
- The original version, as a file in the package tarball
- A potentially truncated version, in the package metadata
As you may now be guessing, users have been seeing truncated READMEs on the npm website because the npm website uses the README data from package metadata. This makes a fair amount of sense: if we wanted to use the READMEs in the package tarballs, we’d have to unpack every package tarball to retrieve the README, and that would not be super efficient. Reading README data from a JSON response, which is how the npm registry serves package metadata, seems at least a little more reasonable than unpacking over 350,000 tarballs.
So now we know where the READMEs are truncated, and how those truncated READMEs are used — but it’s still not necessarily clear why. Understanding this requires a bit of archaeology.
Like many things about npm, this truncation was not always the case. On January 20, 2014, @isaacs committed the 64kb README truncation to npm-registry-couchapp, and he had several very good reasons for doing so:
First, allowing extremely large READMEs exposed us to a potential DDoS attack. An unsavory actor could automate publishing several packages with epically large READMEs and take down a bunch of npm’s infrastructure.
Second, extremely large READMEs in the package metadata were exploding the file size of that document, which made GET requests to retrieve package data very slow. Requesting the package metadata happens for every package on an npm install, so ostensibly a single npm install could be gummed up by having to read several packages with very long READMEs — READMEs that wouldn’t even be useful to the end user, who would either use the unpacked README from the tarball or wouldn’t need the README at all if, for example, the package was a transitive dependency far down in the dependency tree.
Interestingly enough, the predicament of exploding document size was a problem that npm had dealt with before.
Remember when we pointed out that a single package is actually a set of data managed by several different services? Like many things at npm, this also was not always the case.
Originally, npm’s registry was entirely contained by a single service, a CouchApp, on top of a CouchDB database. CouchDB is a database that uses JSON for documents, JavaScript for MapReduce indexes, and regular HTTP for its API.
CouchDB comes with out-of-the-box functionality called CouchApp, a web application served directly from CouchDB. npm’s registry was originally exclusively a CouchApp: packages were single, document-based entities with the tarballs as attachments on the documents. The simplicity of this architecture made it easy to work with and maintain, i.e., a totally reasonable version 1.
Soon after that, though, npm began to grow extremely quickly — package publishes and downloads exploded — and the original architecture scaled poorly. As packages grew in size and number, and dependency trees grew in length and complexity, performance ground to a halt and npm’s registry would crash often. This was a period of intense growing pains for npm.
To mitigate this situation, @isaacs split the registry into two pieces: a registry that had only metadata (attachments were moved to an object store called Manta and removed from the CouchDB), which he called skim, and another registry that contained both the metadata and the tarball attachments, called full-fat. This split was the first of what would be multiple (and ongoing!) refactoring efforts to reduce the size of package metadata documents and to distribute package processing across multiple services to improve performance.
If you look at the npm registry architecture today, you’ll see the effects of our now-CTO @ceejbot’s effort to continue to split the monolith: slowly separating registry functionality into multiple smaller services, some of which are no longer backed by the original CouchDB but by Postgres instead.
Turns out that nobody thinks that arbitrarily restricting README length is a good thing. There are plans in the works for a registry version 3, and changing up the README lifecycle is definitely in the cards. Much like the original shift that @isaacs made when he created the skim and full-fat registry services, the team would ideally like to see README data removed from the package metadata document and moved to a service that can render them and serve them statically to the website. This would bring several awesome benefits:
- No more README truncating! Good-bye, arbitrary restrictions!
- Pre-rendering READMEs and serving them statically instead of parsing them on request. (Yes, we cache, but still…)
- READMEs for all versions of a package! By lowering the cost of READMEs, we can not only parse more of a single README, but parse more READMEs too! :)
npm cares deeply about backwards compatibility, so all of the endpoints and functionality of our original API will continue to be supported as the npm registry grows out of its CouchApp and CouchDB origins. This means there will always be a service where you can request a package’s metadata and get the README for the latest version. However, npm itself doesn’t have to use that service. Moving on from it towards our vision of registry version 3 will be an awesome improvement, across several axes.
systems as designed are great, but systems as found are awful
This is not a shot at npm; this statement is pretty ubiquitously true. Most systems that are of any interest to anyone are the products of a long and likely complicated history of constraints and motivations, and such circumstances often produce strange results. As displeasing as the systems you find might be, there is still a pleasure in finding out how a system “works” (for certain values of “work,” of course).
In the end, the “fix” for the “bug” was “we’ve got a plan for that, but it’s gonna take a while.” That isn’t all that satisfying. However, the process of tracking down a seemingly simple element of the npm registry system and exploring it across services and time was extremely rewarding.
In fact, in the process of writing this post I became aware that Crates.io, the website for the Rust Programming Language’s package manager Cargo, was dealing with a very similar situation regarding their package READMEs. Instead of trying to remove READMEs from package metadata like us, they’re considering putting them in! If I hadn’t had the opportunity to dig around in the internals of npm’s registry, I might not have been ready to offer them suggestions backed by 5 years of the registry’s history.
So — the moral of the story is this: When you can, take the time to dig through the caves of your own software and ask questions about past decisions and lessons. Then, write down what you learn. It might be helpful one day, and probably sooner than you think.
Here’s how we deploy node services at npm:
cd ~/code/exciting-service
git push origin +master:deploy-production
That’s it: git push and we’ve deployed.
Of course, a lot is triggered by that simple action. This blog post is all about the things that happen after we type a git command and press Return.

As we worked on our system, we were motivated by a few guiding principles: deploys and rollbacks should be low-friction, each step should be separate and invokable on its own, and everything should be repeatable.
Why? We want no barriers to pushing out code once it’s been tested and reviewed, and no barriers to rolling it back if something surprising happens — so any friction in the process should be present before code is merged into master, via a review process, not after we’ve decided it’s good. By separating the steps, we gain finer control over how things happen. Finally, making things repeatable means the system is more robust.
What happens when you do that force-push to the deploy-production branch? It starts at the moment an instance on AWS is configured for its role in life.
We use Terraform and Ansible to manage our deployed infrastructure. At the moment I’m typing, we have around 120 AWS instances of various sizes, in four different AWS regions. We use Packer to pre-bake an AMI based on Ubuntu Trusty with most of npm’s operational requirements, and push it out to each AWS region.
For example, we pre-install a recent LTS release of node as well as our monitoring system onto the AMI. This pre-bake greatly shortens the time it takes to provision a new instance. Terraform reads a configuration file describing the desired instance, creates it, adds it to any security groups needed and so on, then runs an Ansible playbook to configure it.
Ansible sets up which services a host is expected to run. It writes a rules file for the incoming webhooks listener, then populates the deployment scripts. It sets up a webhook on GitHub for each of the services this instance needs to run. Ansible then concludes its work by running all of the deploy scripts for the new instance once, to get its services started. After that, it can be added to the production rotation by pointing our CDN at it, or by pointing other processes to it through a configuration change.
This setup phase happens less often than you might think. We treat microservices instances as disposable, but most of them are quite long-lived.
So our new instance, configured to run its little suite of microservices, is now happily running. Suppose you then do some new development work on one of those microservices. You make a pull request to the repo in the usual way, which gets reviewed by your colleagues and tested on Travis. You’re ready to run it for real!
You do that force-push to deploy-staging, and this is what happens:
- A reference gets repointed on the GitHub remote.
- GitHub notifies a webhooks service listening on running instances.
- This webhooks service compares the incoming hook payload against its configured rules, decides it has a match, & runs a deploy script.
Our deploy scripts are written in bash, and we’ve separated each step of a deploy into a separate script that can be invoked on its own. We don’t just invoke them through GitHub hooks! One of our Slack chatbots is set up to respond to commands to invoke these scripts on specific hosts. Here’s what they do:
Each step reports success or failure to our company Slack so we know if a deploy went wrong, and if so at which step. We emit metrics on each step as well, so we can annotate our dashboards with deploy events.
We name our deploy branches deploy-foo, so we have, for instance, deploy-staging, deploy-canary, and deploy-production branches for each repo, representing each of our deployment environments. Staging is an internal development environment with a snapshot of production data but very light load and no redundancy. Canary hosts are hosts in the production line that only take a small percentage of production load, enough to shake out load-related problems. And production is, as you expect, the hosts that take production traffic.
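Concretely, the same force-push from the top of this post works for any of these environments; you just aim it at the branch for the environment you want to roll:

git push origin +master:deploy-staging
git push origin +master:deploy-canary
git push origin +master:deploy-production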
Every host runs a haproxy, which does load balancing as well as TLS termination. We use TLS for most internal communication among services, even within a datacenter. Unless there’s a good reason for a microservice to be a singleton, there are N copies of everything running on each host, where N is usually 4.
When we roll services, we take them out of haproxy briefly using its API, restart, then wait until they come back up again. Every service has two monitoring hooks at conventional endpoints: a low-cost ping and a higher-cost status check. The ping is tested for response before we put the service back into haproxy. A failure to come back up before a timeout stops the whole roll on that host.
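Here’s a rough sketch of what a single roll step looks like. This is not our actual script: the backend and server names, the upstart job name, the port, and the ping path are all placeholders, and the real scripts abort the roll on a timeout rather than waiting forever.

echo "disable server exciting-service/process0" | sudo socat stdio /var/run/haproxy.sock
sudo restart exciting-service-0                               # upstart job for this process
until curl -sf http://localhost:6000/ping; do sleep 1; done   # wait for the low-cost ping endpoint
echo "enable server exciting-service/process0" | sudo socat stdio /var/run/haproxy.sock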
You’ll notice that we don’t do any cross-host orchestration. If a deploy is plain bad and fails on every host, we’ll lose at most 1 process out of 4, so we’re still serving requests (though at diminished capacity). Our Slack operational incidents channel gets a warning message when this happens, so the person who did the deploy can act immediately. This level of orchestration has been good enough thus far when combined with monitoring and reporting in Slack.
You’ll also notice that we’re not doing any auto-scaling or managing clusters of containers using, e.g., Kubernetes or CoreOS. We haven’t had any problems that needed to be solved with that kind of complexity yet, and in fact my major pushes over the last year have been to simplify the system rather than add more moving parts. Right now, we are more likely to add copies of services for redundancy reasons than for scaling reasons.
Configuration is a perennial pain. Our current config situation is best described as “less painful than it used to be.”
We store all service configuration in an etcd cluster. Engineers write to it with a command-line tool, then a second tool pulls from it and writes configuration at deploy time. This means config is frozen at the moment of deploy, in the upstart config. If a process crashes & restarts, it comes up with the same configuration as its peers. We do not have plans to read config on the fly. (Since node processes are so fast to restart, I prefer killing a process & restarting with known state to trying to manage all state in a long-lived process.)
Each service has a configuration template file that requests the config data it requires. This file is in TOML format for human readability. At deploy time the script runs & requests keys from etcd, namespaced by the config value, the service requesting the config, and the configuration group of the host. This lets us separate hosts by region or by cluster, so we can, for example, point a service at a Redis in the same AWS data center.
Here’s an example:
> furthermore get /slack_token/
slack_token matches:
/slack_token == xoxb-deadbeef
/slack_token.LOUDBOT == XOXB-0DDBA11
/slack_token.hermione == xoxb-5ca1ab1e
/slack_token.korzybski == xoxb-ca11ab1e
/slack_token.slouchbot == xoxb-cafed00d
Each of our chatbots has a different Slack API token stored in the config database, but in their config templates they need only say they require a variable named slack_token[1].
These config variables are converted into environment variable specifications or command-line options in an upstart file, controlled by the configuration template. All config is baked into the upstart file and an inspection of that file tells you everything you need to know.
Here’s LOUDBOT’s config template:
app = "LOUDBOT"
description = "YELL AND THEN YELL SOME MORE"
start = "node REAL_TIME_LOUDIE.js"
processes = 1
[environment]
SERVICE_NAME = "LOUDBOT"
SLACK_TOKEN = "{{slack_token}}"
And the generated upstart file:
# LOUDBOT node 0
description "YELL AND THEN YELL SOME MORE"
start on started network-services
stop on stopping network-services
respawn
setuid ubuntu
setgid ubuntu
limit nofile 1000000 1000000
script
cd /mnt/deploys/LOUDBOT
SERVICE_NAME="LOUDBOT" \
SLACK_TOKEN="XOXB-0DDBA11" \
node REAL_TIME_LOUDIE.js \
>> logs/LOUDBOT0.log 2>&1
end script
This situation is vulnerable to the usual mess-ups: somebody forgets to override a config option for a cluster, or to add a new config value to the production etcd as well as to the staging etcd. That said, it’s at least easily inspectable, both in the db and via the results of a config run.
The system I describe above is sui generis, and it’s not clear that any of the components would be useful to anybody else. But our habit as an engineering organization is to open-source all our tools by default, so everything except the bash scripts is available if you’d find it useful. In particular, furthermore is handy if you work with etcd a lot.
[1] The tokens in this post aren’t real. And, yes, LOUDBOT’s are always all-caps.
Today, Facebook announced that they have open sourced Yarn, a backwards-compatible client for the npm registry. This joins a list of other third-party registry clients that include ied, pnpm, npm-install and npmd. (Apologies if we missed any.) Yarn’s arrival is great news for npm’s users worldwide and we’re happy to see it.
Like other third-party registry clients, Yarn takes the list of priorities that our official npm client balances, and shifts them around a little. It also solves a number of problems that Facebook was encountering using npm at their unique global scale. Yarn includes another take on npm’s shrinkwrap feature and some clever performance work. We’ve also been working on these specific features, so we’ll be paying close attention.
Is Yarn compatible with the npm registry? Mostly! We haven’t had time to run extensive tests on Yarn’s compatibility, but it seems to work great with public packages. It does not authenticate to the registry the way the official client does, so it’s currently unable to work with private packages. The Yarn team is aware of this issue and has said it will be addressed.
Whenever a big company gets involved in an open source project, there’s some understandable anxiety from the community about its intentions.
Yarn publishes to npm’s own registry by default, so Yarn users continue to be part of the existing community and benefit from the same 350,000+ packages as users of the official npm client. Yarn pulls packages from registry.yarnpkg.com, which allows them to run experiments with the Yarn client. This is a proxy that pulls packages from the official npm registry, much like npmjs.cf.
Like so many other companies around the world, Facebook benefits from the united open source JavaScript community on npm.
As I said at the start, we’re happy to see Yarn join the ranks of open source npm clients. This is how open source software is supposed to work!
The developers behind Yarn — Seb, James, Christoph, and Konstantin — are prolific publishers of npm packages and pillars of the npm community.
Through their efforts, Facebook and others have put a lot of developer time into this project to solve problems they encountered. Sharing the fruits of their labor will allow ideas and bugfixes to flow back and forth between npm’s official client and all the others. Everyone benefits as a result.
Yarn also shows that one of the world’s largest tech companies, which is already behind hugely popular JavaScript projects like React, is invested in and committed to the ongoing health of the npm community. That’s great news for JavaScript devs everywhere.
We’re pleased to see Yarn get off to such a great start, and look forward to seeing where it goes.
From its inception, npm has been keenly focused on open source values. As we’ve grown as a company, however, we’ve learned the important lesson that making source code available under an open license is the bare minimum for open source software. To take it even further, we’ve also learned that “open source” doesn’t necessarily mean community-driven. With these insights in mind, the web team has decided to make some changes to the community interface of npm’s website — with the goal of creating a more efficient and effective experience for everyone involved.
npm/newww is being retired and made private. npm/www has been created for new issues and release notes.
As you may (or may not!) have noticed, the repo that used to house npm’s website (npm/newww) isn’t in sync with the production website (http://www.npmjs.com).
A few months back, the team made the executive decision to close-source the npm website, for several reasons.
This was a super tough call, and there were strong arguments from both sides. In the end, though, the team reached a unified understanding that this was both the best call for the company and for the community. The repo will be officially shutting down tomorrow, Friday, July 29, 2016.
One of the things we’re aware of is that many in the Node community were using the website as an example repo for using the Hapi framework. While we’re completely flattered by this, we honestly don’t believe the codebase is currently in a state to serve that role — it’s a katamari of many practices over many years rolled into one right now!
That being said, we do care about sharing our work with the world, and we intend to publish many of the website’s components as open source, reusable packages. We’re excited to do so.
In place of the npm/newww repo, we’ve created npm/www! The goal of this repo is to give the community a place to file new issues, share feedback, and follow the website’s release notes.
While the source code for the website will no longer be available, the hope is that this new repo can be a more effective way to organize and respond to the needs the community has. We’re super excited to hear your thoughts, questions, and concerns — head over to npm/www now so we can start collaborating!
I don’t want to bury the lede, so here it is: npm has a new CTO, and her name is CJ Silverio. My title is changing from CTO to COO, and I will be taking over a range of new responsibilities, including modeling our business, defining and tracking our metrics, and bringing that data to bear on our daily operations, sales, and marketing. This will allow Isaac to concentrate more on defining the product and strategy of npm as we continue to grow.
CJ will be following this post with a post of her own, giving her thoughts about her new role. I could write a long post about how awesome CJ is — and she is awesome — but that wouldn’t achieve much more than make her embarrassed. Instead I thought I’d take this chance to answer a question I get a lot, before I forget the answer:
The answer is that it depends on the needs of the company. npm has grown from 3 people to 25 in the past 2.5 years, and in that time my job changed radically from quarter to quarter. Every time I got the hang of the job, the needs of the company would shift and I’d find myself doing something new. So this is my list of some of the things a CTO might do. Not all of them are a good idea, as you’ll see. The chronological order is an over-simplification: I was doing a small piece of all of these tasks all the time, but each quarter definitely had a focus, so that’s where I’ve described each one.
Started this quarter: CJ, Raquel.
npm Inc had a bumpy launch: the registry was extremely unstable, because it was running on insufficient hardware and had not been architected for high uptime. Our priority was to get the registry to stay up. I was spinning up hardware by hand, without the benefit of automation. By April we had found the hot spots and mostly met the load, but CJ was the first person to stridently make the case that we had to automate our way out of this. I handed operations to her.
Started: Ben, Forrest, Maciej.
Once the fires were out, we could finally think about building products, and we had a choice: do we build a paid product on top of the current (highly technically indebted) architecture, or build a new product and architecture? We decided on a new, modular architecture that we could use to build npm Enterprise first, and then extend later to become “Registry 2.0”. Between recruitment, management, and other duties, I discovered by the end of the quarter that it was already impossible to find time to write code.
This was the quarter we dug in and built npm Enterprise. My job became primarily that of an engineering manager: keeping everybody informed about what everybody else was up to, assigning tasks, deciding priorities of new work vs. sustaining and operational work, and handling the kind of interpersonal issues which every growing company experiences. I found I was relying on CJ a lot when solving these kinds of problems.
Started: Rebecca
With npm Enterprise delivered to its first customer, we started learning how to sell it. I went to conferences, gave talks, went to meetings and sales calls, wrote documentation and blog posts, and generally tried to make noise. I was never particularly good at any of this, so I was grateful when Ben took over npm Enterprise as a product, which happened around this time.
In February 2014 I had written the system that to this day serves our download counts, but we were starting a process of raising our series A, and that data wasn’t good enough. I dredged up my Hadoop knowledge from a previous job and started crunching numbers, getting new numbers we hadn’t seen before, like unique IP counts and other trends. This is one job I’m keeping as I move to COO, since measuring these metrics and optimizing them is a big part of my new role.
Started: Ernie, Ryan
We’d been hiring all the time, of course, but we closed our series A in Q1 2015, so there was a sudden burst of recruitment at this time; most of the new hires didn’t actually start until the next quarter. By the end of this process we’d hired so many people that I never had to do recruitment again: the teams were now big enough to interview and hire their own people.
Started: Kat, Stephanie, Emily, Jeff, Chris, Jonathan, Aria, Angela
With so many new people, we had a sudden burst of momentum, and it became necessary for the first time to devote substantial effort to planning “what do we do next?” Until this point the next move had been obvious: put out the fire, all hands on deck. Now we had enough people that some of them could work on longer-term projects, which was good news, but meant we had to pull our heads up and think about the longer term. To accomplish this, I handed management of nearly all the engineering team to CJ, who became VP of engineering.
Started: Ashley, Andrea, Andrew (yes, it was confusing)
We had already launched npm Private Modules, a single-user product, but it hadn’t really taken off. We were sure we knew why: npm Organizations, a product for teams, was demanded by nearly everybody. It was a lot more complicated, and with more people there was a lot more coordination to do, so I started doing the kind of time, task, and dependency management a project manager does. I will be the first to admit that I was not particularly good at it, and nobody was upset when I mostly gave this task to Nicole the following quarter. We launched Orgs in November, and it was an instant hit, becoming half of npm’s revenue by the end of the year.
Started: Nicole, Jerry
Now with two product lines and a bunch of engineers, fully defining what the product should do (or not do), and what the next priority was, became critical. Isaac was too busy CEO’ing to do this, so he gave it to the most available person: me. This was not a huge success, partly because I was still stuck in project management mode, which is a very different thing, and partly because I’m just not as creative as Isaac when it comes to product. Everybody learned something, even if it was “Laurie isn’t very good at this”.
Started: Kiera
Isaac’s baby was born on April 1st (a fact that, combined with his not having mentioned they were even expecting a baby, led many people to assume at first that his announcement of parenthood was a joke). He went on parental leave for most of Q2, so I took over as interim CEO. CJ, already VP of eng, effectively started being CTO at this time.
When Isaac came back from parental leave, we’d learned some things: I had, of necessity, handled the administrative and executive functions of a CEO for a quarter. CJ had handled those of a CTO. We now had two people who could be CTO, and one overloaded CEO with a talent for product. The course of action was obvious: Isaac handed over to me everything he could that wasn’t product, so he could focus on product development, while I handed over CTO duties to CJ. We needed a title for “CEO stuff that isn’t product” and picked COO, mostly because it’s a title people recognize.
You’ll notice a common thread, which is that as I moved to new tasks I was mostly handing them to CJ. Honestly, it was pretty clear to me from day 1 that CJ was just as qualified to be CTO as I was, if not more — she has an extra decade’s worth of experience on me and is a better engineer to boot. The only thing she lacked was the belief that she could, and over the last two and a half years it has been a pleasure watching her confidence grow as she’s mastered every new challenge I put in front of her, and more than a little funny watching her repeatedly express surprise at her ability to do all these things. It’s been like slowly persuading an amnesiac Clark Kent that he is, in fact, Superman.
I’ve often referred to CJ as npm’s secret weapon. Well, now the secret is out. npm has the best CTO I could possibly imagine, and I can’t wait to see what she does next.
Earlier today, July 6, 2016, the npm registry experienced a read outage for 0.5% of all package tarballs for all network regions. Not all packages and versions were affected, but the ones that were affected were completely unavailable during the outage for any region of our CDN.
The unavailable tarballs were offline for about 16 hours, from mid-afternoon PDT on July 5 to early morning July 6. All tarballs should now be available for read.
Here’s the outage timeline:
Over the next hour, 502 rates fell back to their normal level of zero.
We’re adding an alert on all 500-class status codes, not just 503s. This alert will catch the category of errors, not simply this specific problem.
We’re also revising our operational playbook to encourage examining our CDN logs more frequently; we could have caught the problem very soon after introducing it if we had carefully verified that our guess about the source of the 502s actually made them vanish from our CDN logging. We can also do better with tools for examining the patterns of errors across POPs, which would have made it immediately clear that the error was not specific to the US East coast and was therefore unlikely to have been caused by an outage in our CDN.
Read on if you would like the details of the bug.
The root cause for this outage was an interesting interaction of file modification time, nginx’s method of generating etags, and cache headers.
We recently examined our CDN caching strategies and learned that we were not caching as effectively as we might, because of a property of nginx. Nginx’s etags are generated using the file modification time as well as its size, roughly as mtime + '-' + the file size in bytes. This meant that if mtimes for package tarballs varied across our nginx instances, our CDN would treat the files from each server as distinct, and cache them separately. Getting the most from our CDN’s caches and from our users’ local tarball caches is key to good performance on npm installs, so we took steps to make the etags match across all our services.
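For example, one quick way to check whether two backends agree is to compare the ETag header each returns for the same tarball; the hostnames and path here are placeholders:

curl -sI http://tarballs-1.internal/some-package-1.0.0.tgz | grep -i etag
curl -sI http://tarballs-2.internal/some-package-1.0.0.tgz | grep -i etag

If the mtimes differ between the two hosts, so do the ETags, and the CDN caches the responses separately.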
Our chosen scheme was to set each file’s modification time to the first 32-bit big-endian integer of its md5 hash. This was entirely arbitrary but looked sufficient after testing in our staging environment: we produced consistent etags. Unfortunately, the script that applied this change to our production environment failed to clamp the resulting integer, resulting in negative numbers for timestamps. Ordinarily, this would just produce the infamous Dec 31, 1969 date one sees for timestamps before the Unix epoch.
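A rough bash sketch of the intended scheme looks like this. It is not the script we ran, and the filename is a placeholder. Note that bash arithmetic treats the value as unsigned, whereas the production script ended up with signed 32-bit values, which is where the negative timestamps came from:

hash=$(md5sum some-package-1.0.0.tgz | cut -c1-8)   # first 32 bits of the md5, as hex
touch -d "@$((16#$hash))" some-package-1.0.0.tgz    # reuse that value as the file's mtime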
Unfortunately, negative mtimes triggered an nginx bug. Nginx will serve the first request for a file in this state and deliver the negative etag. However, if that negative etag comes back in the if-none-match header, nginx attempts to serve a 304 but never completes the request. This is what produced the bad gateway message our CDN returned to users attempting to fetch a tarball with a bad mtime.
You can observe this behavior yourself with nginx and curl:
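Here’s a sketch of that reproduction, assuming a stock nginx serving files out of /usr/share/nginx/html and an nginx version affected by the bug; fill in the etag from the first response by hand:

echo 'hello' > /usr/share/nginx/html/hello.txt
touch -d '1969-12-31 23:59:00 UTC' /usr/share/nginx/html/hello.txt    # pre-epoch, so a negative mtime
curl -si http://localhost/hello.txt | grep -i etag                    # note the etag nginx generates
curl -si -H 'If-None-Match: "<etag from the previous response>"' http://localhost/hello.txt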
The final request never completes even though nginx has correctly given it a 304 status.
Because this only affected a small subset of tarballs, not including the tarball fetched by our smoketest alert, all servers remained in the pool. We have an alert on above-normal 503 error rates served by our CDN, but this error state produced 502s and was not caught.
All the tarballs that were producing a 502 bad gateway error turned out to have negative timestamps in their file mtimes. The fix was to touch them all so their times were inconsistent across our servers but valid, thus both busting our CDN’s cache and dodging the nginx behavior.
The logs from our CDN are invaluable, because they tell us what quality of service our users are truly experiencing. Sometimes everything looks green on our own monitoring, but it’s not green from our users’ perspective. The logs are how we know.

Today, with the release of npm Enterprise add-ons, npm Enterprise has grown hugely more extensible and powerful.
It’s now possible to integrate third-parties’ developer tools directly into npm Enterprise. This has the power to combine what were discrete parts of your development workflow into a single user experience, and knock out the barriers that stand in the way of bringing open source development’s many-small-reusable-parts methodology into larger organizations.
npm Enterprise now exposes an API that allows third-party developers to build on top of the product.
With this deceptively simple functionality, developers can offer a huge amount of value to enrich the process of using npm within the enterprise.
Enterprise developers already want to take advantage of the same code discovery, re-use, and collaboration enjoyed by millions of open source developers, billions of times every month. But this requires accommodating their companies’ op-sec, licensing, and code quality processes, which often predate the modern era.
For example…
In the past, it was possible to manually research the security implications of external code. But with the average npm package relying on over 100 dependencies and subdependencies, this process just doesn’t scale.
Without a way to ensure the security of each package, a company can’t take advantage of open source code.
Software that is missing a license, or that’s governed by a license unblessed by a company’s legal department, simply can’t be used at larger companies. Much like security screening, many companies have relied upon manually reviewing the license requirements of each piece of external code. And just like security research, trying to manually confirm the licensing of every dependency (and their dependencies, and their dependencies…) is impossible to scale.
Enterprise developers need a way to understand the license implications of packages they’re considering using, and companies need a way to certify that all of their projects are legally kosher.
Will bug reports be patched quickly? Is the code written well? Do packages rely on stale or abandoned dependencies? These questions demand answers before an enterprise can consider relying on open source code.
Without a way to quantitatively analyze the quality of every code package in a project, many enterprise teams simply don’t adopt open source code or workflows for mission-critical projects.
Our three launch partners, Node Security Platform, FOSSA, and bitHound, address these concerns, respectively.
You can learn about the specifics of each of them here:
By integrating them directly into the tool that enterprise developers use to browse and manage packages, we make it as easy as possible to scratch enterprise development’s specific itches. As more incredible add-ons join the platform, the barriers to open source-style development at big companies get knocked down, one by one.
The Node Security Platform, FOSSA, and bitHound add-ons are available to existing npm Enterprise customers today. Simply contact us at [email protected] to get set up.
If you’re looking to bring npm Enterprise and add-ons into your enterprise, let us show you how easy it is with a free 30-day trial.
Interested in building your own add-on? Awesome. Stay tuned: API documentation is on its way.
The movement to bring open source code, workflows, and tools into the enterprise is called InnerSource, and it’s the beginning of a revolution.
When companies develop proprietary code the same way communities build open source projects, then the open source community’s methods and tooling become the default way to build software.
Everyone stands to benefit from InnerSource because everyone stands to benefit from building software the right way: open source packages see more adoption and community participation, companies build projects faster and cheaper without re-inventing wheels, and developers are empowered to build amazing things.
Add-ons are an exciting step forward for us. We’re thrilled you’re joining us.
When using npm Enterprise, we sometimes encounter public packages in our private registry that need to fetch resources from the public internet when being installed by a client via npm install.
Unfortunately, this poses a problem for developers who work in an environment with limited or no access to the public internet.
Let’s take a look at some of the more common types of problems in this area and talk about ways we can work around them.
Note that these problems are not specific to npm Enterprise; they apply to using certain public packages in any limited-access environment. That being said, there are some things that npm (as an organization and software vendor) can do to better prevent or handle some of these problems, and we’re still working to make those improvements.
Typically, developers will discover the problem when installing packages from their private registry. When this happens, we need to determine the type of problem it is and where in the dependency graph the problematic dependency resides.
Here are some common problem types:
Git repo dependency
This is when a package dependency is listed in a package.json file with a reference to a Git repository instead of with a semver version range. Typically these point to a particular branch or revision in a public GitHub or Bitbucket repository. They are mainly used when the package contents have not been published to the public npm registry.
When the npm client encounters these, it attempts to fetch the package from the Git repository directly, which is a problem for folks who do not have network access to the repository.
Shrinkwrapped package
This is when the internal contents of a package contain an npm-shrinkwrap.json file that lists a specific version and URL to use for each mentioned package from the dependency tree.
During a normal npm install, the npm client attempts to fetch the dependencies listed in npm-shrinkwrap.json directly from the URLs contained in the file. This poses a problem when the client installing the shrinkwrapped package does not have access to the URLs that the shrinkwrap author has access to.
Package with install script or node-gyp dependency
This is when a package attempts to defer some setup process until the package is installed, using a script defined in package.json, which typically involves building platform-specific binaries or Node add-ons on the client’s machine.
On a typical install, the npm client will find and run these scripts in order to automatically fetch and build the required resources, targeting the platform that the client is running on. But when limited internet access means the necessary resources cannot be fetched, the install will fail, and the package will most likely be unusable until whatever the install script was supposed to produce is available on the client’s machine.
To determine the location of the problematic dependency, we can boil it down to two categories:
Direct dependency
A direct dependency is one that is explicitly listed in your own package.json file — a dependency that your project/package uses directly in code or in an npm run script.
Transitive dependency
A transitive dependency is one that is not explicitly listed in your own package.json file — a dependency that comes from anywhere in the tree of your direct dependencies’ dependencies.
Just as publishing a package to the public registry requires access to the public internet, most of these solutions require internet access, at least temporarily. Once the solution is in place, access to public resources can be restricted again.
For starters, remember that it’s generally a good idea to use the latest version of the npm client. To install or upgrade to the latest version, regardless of what version of Node you have installed, run npm i -g npm@latest (and make sure npm -v prints the version that was installed).
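Spelled out as commands, that upgrade-and-check looks like this:

npm i -g npm@latest
npm -v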
Let’s go over the problem types in more detail.
Unfortunately, a dependency that references a Git repository (instead of a semver range for a published package) must be replaced with a published package. To do this, you’ll first need to publish the Git repository’s contents as a package to your npm Enterprise registry, then fork the project that declares the Git dependency and replace that dependency with the package you published. Finally, publish the forked project and use it as a dependency instead of the original.
It’s usually a good idea to open an issue on the project with the Git dependency, politely asking the maintainers to replace the Git dependency, if possible. Generally, we discourage using Git dependencies in package.json, and it’s typically only used temporarily while a maintainer waits for an upstream fix to be applied and published.
Example: let’s replace the "grunt-mocha-istanbul": "christian-bromann/grunt-mocha-istanbul" Git dependency defined in version 4.0.4 of the webdriverio package, assuming that webdriverio is a direct dependency and grunt-mocha-istanbul is a transitive dependency.
We’ll tackle this in two main steps: forking and publishing the transitive dependency, and forking and publishing the direct dependency.
Clone the project that is referenced by the Git dependency
Optionally, you can create a remote fork first (e.g., in GitHub or Bitbucket) and then clone your fork locally. Otherwise, you can just clone/download the project directly from the remote repository. It’s a good idea to use source control so you can keep a history of your changes, but you could also probably get away with downloading and extracting the project contents.
Example:
git clone https://github.com/christian-bromann/grunt-mocha-istanbul.git
Create a new branch to hold your customizations
Again, this is so you can keep a history of your changes. It’s probably a good idea to include the current version of the package in the branch name, in case you need to repeat these steps when a later version is available.
Example:
cd grunt-mocha-istanbul
git checkout -b myco-custom-3.0.1
Add your scope to the package name in package.json
In our example, change "grunt-mocha-istanbul" to "@myco/grunt-mocha-istanbul".
Commit your changes to your branch and publish the scoped package to your npm Enterprise registry
Assuming you have already configured npm to associate your scope to your private registry, publishing should be as simple as npm publish.
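If that association isn’t in place yet, a one-time configuration along these lines sets it up; the @myco scope and registry URL are the hypothetical examples used throughout this post, so substitute your own:

npm config set @myco:registry https://npm-registry.myco.com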
Example:
git add package.json
git commit -m 'add @myco scope to package name'
npm publish
Clone the project’s source code locally
Either create a remote fork first (e.g., in GitHub or Bitbucket) and clone your fork locally, or just clone/download the project directly from the original remote repository. It’s a good idea to use source control so you can keep a history of your changes.
Example:
git clone https://github.com/webdriverio/webdriverio.git
Create a new branch to hold your customizations
This is so you can keep a history of your changes. It’s probably a good idea to include the current version of the package in the branch name, in case you need to repeat these steps when a later version is available.
Example:
cd webdriverio
git checkout -b myco-custom-4.0.4
Add your scope to the package name in package.json
In our example, change "webdriverio" to "@myco/webdriverio".
Replace the Git dependency with the scoped package
This means updating the reference in package.json, and it may mean updating require() or import statements too. You should basically do a find-and-replace, finding the unscoped package name and judiciously replacing it with the scoped package name.
In our example, we only need to update the reference in package.json from "grunt-mocha-istanbul": "christian-bromann/grunt-mocha-istanbul" to "@myco/grunt-mocha-istanbul": "^3.0.1".
Commit your changes to your branch and publish the scoped package to your npm Enterprise registry
Assuming you’ve already configured npm to associate your scope to your private registry, publishing should be as simple as npm publish.
In our example of webdriverio, we next need to deal with the shrinkwrap URLs before we can publish (handled below). In other scenarios, it may be possible to publish now.
Example:
git add .
git commit -m 'replace git dep with scoped fork'
npm publish
Update your downstream project(s) to use the scoped package as a direct dependency (in package.json and in any require() statements)
In our example, this basically means doing a find-and-replace to find references to webdriverio and judiciously replace them with @myco/webdriverio. However, webdriverio also contains an npm-shrinkwrap.json file. We’ll cover that in the next section.
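Once the scoped fork is published and installable from your registry, the downstream update might look roughly like this (package names as in our running example):
# Swap the direct dependency for the scoped fork
npm uninstall --save webdriverio
npm install --save @myco/webdriverio
# Find require() calls that still reference the unscoped name
grep -rn "require('webdriverio')" --exclude-dir=node_modules .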
It just so happens that our sample direct dependency above (webdriverio) also uses an npm-shrinkwrap.json file to pin certain dependencies to specific versions. Unfortunately the shrinkwrap file contains hardcoded URLs to the public registry. We need a way to either ignore or fix the URLs.
A quick workaround is to install packages using the --no-shrinkwrap flag. This will tell the npm client to ignore any shrinkwrap files it finds in the package dependency tree and, instead, install the dependencies from package.json in the normal fashion.
This is considered a workaround rather than a long-term solution: it’s possible that installing from package.json will install versions of dependencies that don’t exactly match the ones listed in npm-shrinkwrap.json, even though the versions of the package’s direct dependencies are guaranteed to be within the declared semver range.
Example:
npm install webdriverio --no-shrinkwrap
(As noted above, webdriverio@4.0.4 also has a Git dependency, so just ignoring the shrinkwrap isn’t quite enough for this package.)
If you want to use the exact versions from the shrinkwrap file without using the URLs in it, you’ll have to use your own custom fork of the project that contains a modified shrinkwrap file.
Here’s the general idea:
(Note that steps 1-3 are identical to the fork-publish instructions for a direct dependency above. If you’ve already completed them, skip to step 4.)
Clone the project’s source code locally
Either create a remote fork first (e.g., in GitHub or Bitbucket) and clone your fork locally, or just clone/download the project directly from the original remote repository. It’s a good idea to use source control so you can keep a history of your changes.
Example:
git clone https://github.com/webdriverio/webdriverio.git
Create a new branch to hold your customizations
This is so you can keep a history of your changes. It’s probably a good idea to include the current version of the package in the branch name, in case you need to repeat these steps when a later version is available.
Example:
cd webdriverio
git checkout -b myco-custom-4.0.4
Add your scope to the package name in package.json
In our example, change "webdriverio" to "@myco/webdriverio".
Use rewrite-shrinkwrap-urls to modify npm-shrinkwrap.json, pointing the URLs to your npm Enterprise registry
Unfortunately this is slightly more complicated than a find-and-replace, since the tarball URL structure of the public registry is different than the one used for an npm Enterprise private registry.
In the example below, replace {your-registry} with the base URL of your private registry, e.g., https://npm-registry.myco.com or http://localhost:8080. The value you use should come from the Full URL of npm Enterprise registry setting in your Enterprise admin UI Settings page.
Example:
npm install -g rewrite-shrinkwrap-urls
rewrite-shrinkwrap-urls -r {your-registry}
git diff npm-shrinkwrap.json
Commit your changes to your branch and publish the scoped package to your npm Enterprise registry
Assuming you’ve already configured npm to associate your scope with your private registry, publishing should be as simple as npm publish.
Be mindful of any prepublish or publish scripts that may be defined in package.json. You can try skipping those scripts when publishing via npm publish --ignore-scripts, but running the scripts may be necessary to put the package into a usable state, e.g., if source transpilation is required.
Example:
git add npm-shrinkwrap.json package.json
git commit -m 'add @myco scope to package name' package.json
git commit -m 'rewrite shrinkwrap urls' npm-shrinkwrap.json
npm publish
Note that a prepublish script will probably need the package’s dependencies installed in order to run; in that case, npm install will be executed first. If this happens, it should pull all of the dependencies in the shrinkwrap file from your registry. If any of those packages don’t yet exist in your registry, you’ll need to either enable the Read Through Cache setting in your Enterprise instance or manually add the packages to the whitelist by running npme add-package webdriverio from your server’s shell and answering Y at the prompt to add dependencies.
Update your downstream project(s) to use the scoped package as a direct dependency (in package.json and in any require() statements)
In our example, this basically means doing a find-and-replace to find references to webdriverio and judiciously replace them with @myco/webdriverio.
This is less than ideal, obviously. We’re currently considering ways to improve handling of shrinkwrapped packages on the server side, but a better solution is not yet available.
Some packages want or need to run some script(s) on installation in order to build platform-specific dependencies or otherwise put the package into a usable state. This approach means that a package can be distributed as platform-independent source without having to prebundle binaries or provide multiple installation options.
Unfortunately, this also means that these packages typically need access to the public internet in order to fetch required resources. In these cases, we can’t do much to work around the approach itself, other than attempt to separate the step of fetching the package from the registry from the step of setting up the platform-specific resources it needs.
As a quick first attempt, you can ignore lifecycle scripts when installing packages via npm install {pkg-name} --ignore-scripts.
Unfortunately, install scripts typically do some sort of platform-specific setup to make the package usable. Thus, you should review the install or postinstall scripts from the package’s package.json file and determine if you need to attempt to run them separately or somehow achieve the same result manually.
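For instance, using node-sass purely as an illustration, you might install without running the scripts and then inspect what they would have done:
# Install without running lifecycle scripts
npm install node-sass --ignore-scripts
# Review the scripts the package declares before deciding how to handle them
grep -A 10 '"scripts"' node_modules/node-sass/package.json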
When node-gyp is involved in the setup process, the package requires platform-specific binaries to be built and plugged into the Node runtime on the client’s system. In order to build the binaries, the package will typically need to fetch source header files for the Node API.
The best we can do is attempt to set up the node-gyp build toolchain manually. This requires Python and a C/C++ compiler. You can read more about this at the following locations:
General installation: https://github.com/nodejs/node-gyp#installation
Windows issues: https://github.com/nodejs/node-gyp/issues/629
A good example of a package with a node-gyp dependency is node-sass.
Once the build toolchain is in place, the package’s install script may not need to fetch any external resources.
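As a rough sketch, toolchain setup on a Debian/Ubuntu build host might look like the following; package names vary by platform, and the disturl value is a placeholder for an internal mirror of the Node header tarballs, if you maintain one:
# Compiler toolchain that node-gyp needs (platform-specific; see the node-gyp docs above)
apt-get install -y python make g++
npm install -g node-gyp
# Optionally point header downloads at an internal mirror instead of the public internet
npm config set disturl https://node-mirror.myco.com/dist
# Re-run the skipped build step once the toolchain is in place
npm rebuild node-sass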
If you’ve made it all the way to the end, surely you’ll agree that npm could be handling things better to minimize challenges faced by folks with restricted internet access. We feel it’s in the community’s best interest to at least raise awareness of these problems and their potential workarounds until we can get a more robust solution in place.
If you have feedback or questions, as always, please don’t hesitate to let us know.
Today, we’re excited to announce a simple, powerful new way to track changes to the npm registry — and build your own amazing new developer tools: hooks.
Hooks are notifications of npm registry events that you’ve subscribed to. Using hooks, you can build integrations that do something useful (or silly) in response to package changes on the registry.
Each time a package is changed, we’ll send an HTTP POST payload to the URI you’ve configured for your hook. You can add hooks to follow specific packages, to follow all the activity of given npm users, or to follow all the packages in an organization or user scope.
For example, you could watch all packages published in the @npm scope by setting up a hook for @npm. If you wanted to watch just lodash, you could set up a hook for lodash.
If you have a paid individual or organizational npm account, you can start using hooks right now.
Each user may configure a total of 100 hooks, and how you use them is up to you: you can put all 100 on a single package, or scatter them across 100 different packages. If you use a hook to watch a scope, this counts as a single hook, regardless of how many packages are in the scope. You can watch any open source package on the npm registry, and any private package that you control (you’ll only receive hooks for packages you have permission to see).

Create your first hook right now using the wombat CLI tool.
First, install wombat the usual way: npm install -g wombat. Then, set up some hooks:
Watch the npm package:
wombat hook add npm https://example.com/webhooks shared-secret-text
Watch the @slack organization for updates to their API clients:
wombat hook add @slack https://example.com/webhooks but-sadly-not-very-secret
Watch the ever-prolific substack:
wombat hook add --type=owner substack https://example.com/webhooks this-secret-is-very-shared
Look at all your hooks and when they were last triggered:
wombat hook ls
Protip: Wombat has several other interesting commands. wombat --help will tell you all about them.
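The last argument in each wombat command above is a shared secret that npm uses to sign deliveries to your endpoint. Assuming the signature is an HMAC-SHA256 of the raw request body (the scheme the npm-hook-receiver example listed below appears to verify, via the x-npm-signature header), you can sanity-check a captured body like this; body.json is a placeholder for the saved raw body:
# Compute the HMAC of the raw body with your shared secret and compare it to the signature header
openssl dgst -sha256 -hmac "shared-secret-text" body.json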
We’re also making public an API for working with hooks. Read the docs for details on how you can use the API to manage your hooks without using wombat.

You can use hooks to trigger integration testing, trigger a deploy, make an announcement in a chat channel, or trigger an update of your own packages.
To get you started, here are some of the things we’ve built while developing hooks for you:
npm-hook-receiver: an example receiver that creates a restify server to listen for hook HTTP posts. Source code.
npm-hook-slack: the world’s simplest Slackbot for reporting package events to Slack; built on npm-hook-receiver.
captain-hook: a much more interesting Slackbot that lets you manage your webhooks as well as receive the posts.
wombat: a CLI tool for inspecting and editing your hooks. This client exercises the full hooks API. Source code.
ifttt-hook-translator: Code to receive a webhook and translate it to an IFTTT event, which you can then use to trigger anything else you can do on IFTTT.
citgm-harness: This is a proof-of-concept of how node.js’s Canary in the Gold Mine suite might use hooks to drive its package tests. Specific package publications trigger continuous integration testing of a different project, which is one way to test that you haven’t broken your downstream dependents.

We’re releasing hooks as a beta, and where we take it from here is up to you. What do you think about it? Are there other events you’d like to watch? Is 100 hooks just right, too many, or not enough?
We’re really (really) (really) interested to see what you come up with. If you build something useful (or silly) using hooks, don’t be shy to drop us a line or poke us on Twitter.
This is the tip of an excitement iceberg — exciteberg, if you will — of cool new ways to use npm. Watch this space!
npm ♥ you!