I have a medium-sized team (~10 people, but we expect to grow) that manages a set of EC2 servers on AWS (currently a few dozen, but that number is also expected to grow).

EC2 requires an AWS-generated private SSH key, without a passphrase (that's how AWS generates them), in order to access EC2 servers. As the number of servers, environments and products my team manages grows (and as people move on and off the team), I'm less and less happy with our current solution for securing access to EC2 servers - namely, having a single private key that is copied to each team member's local machine.

I'm considering a few options, and I would appreciate it if you could suggest a better option and discuss why it's better:

  1. Keep using a single SSH key for all our systems.
    • Pros: simple to manage, relatively secure (assuming team trust)
    • Cons: once a team member has the key, there is no way to revoke their access; a single leak compromises the security of all systems; the key has no passphrase
  2. Have a single SSH key for each product/environment, distribute to all team members.
    • Pros: Still not difficult to manage, relatively secure (assuming team trust)
    • Cons: keys can't be revoked; a single leak compromises all systems (maybe not all, if the source was a junior member who doesn't have all the keys); the keys have no passphrases; awkward to use, as the user has to unload and reload keys when moving between environments
  3. Build a bastion server for each product/environment; create a single SSH key for each product/environment and install the private key in the bastion server's known user account; install each team member's personal public key in the known user account.
    • Pros: allows key revocation; compromise of a bastion compromises only one environment; if a leak of a user key is detected, compromise of untouched systems can be easily prevented; allows use of passphrases to access bastions
    • Cons: relatively complex to manage (creating additional servers and running a non-trivial installation, adding team members, removing team members); costly (servers aren't free); complicates the software tools the team uses; complex key revocation
  4. Use a key storage service; create a single key for each environment/product and store it in the service; control access to the service using a password or a personal SSH key; operations start by identifying the environment/product being accessed and obtaining the key into the SSH agent (a rough sketch of this flow follows the list).
    • Pros: easy to manage (assuming the service is available); keys can be revoked on a per-user or per-environment basis; a single point to protect (that is not mission critical); relies on the OpenSSH agent to secure keys outside the service.
    • Cons: Single point of failure; may complicate usage scenario
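
For concreteness, here is a minimal sketch of how an option #4 workflow could look on a team member's machine. The key store, its URL and the credential variables are hypothetical placeholders, not an existing service:

    # Fetch the per-environment private key over an authenticated channel and
    # load it straight into the local ssh-agent, without writing it to disk.
    curl -fsS -u "$USER:$STORE_PASSWORD" \
        "https://keystore.example.com/keys/prod-webapp" \
      | ssh-add -t 3600 -    # "-" reads the key from stdin; it expires from the agent after an hour

Revocation would then amount to removing a user's access to the store and rotating the affected environment keys.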

I'm currently leaning towards #4, but are there serious issues I'm missing? Is there an existing service like that which I can use, or do I have to roll my own?

Note: we don't use an orchestration/configuration server à la Puppet/Chef - our orchestration software is mostly home-grown and installed on each team member's local system. It is basically just a set of recipes loaded from source control and used to execute various scenarios, mostly using AWS APIs. Each team member has a personal AWS API key and the orchestration software uses it to call the AWS API. In addition, some scenarios call for SSH access to system servers, and that is where the problem described above arises. The EC2 servers are accessed using the default AMI user (usually "ubuntu") and the software uses NOPASSWD sudo to execute local operations.
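
For illustration, a typical recipe step today boils down to something roughly like the snippet below (simplified; the product tag, key path and restarted service are made up). The AWS call uses the member's personal API credentials, while the SSH step uses the single shared key - which is exactly the part I want to change:

    # Find the product's running instances via the AWS API, then run a command
    # on each one over SSH as the default AMI user, relying on NOPASSWD sudo.
    for host in $(aws ec2 describe-instances \
          --filters "Name=tag:Product,Values=webapp" "Name=instance-state-name,Values=running" \
          --query 'Reservations[].Instances[].PublicDnsName' --output text); do
      ssh -i ~/.ssh/shared-team-key.pem "ubuntu@$host" 'sudo service webapp restart'
    done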

    
So screw the whole idea behind SSH's decentralized key management. You want key escrow? – jas- Aug 26 '15 at 3:57
    
Yes, something like that. I want to use personalized access to a key escrow vault :) Decentralized key management works great for n:1 relationships, or even 1:n, but it gets really messy when you have an n:n setup. – Guss Aug 26 '15 at 8:46
Everything gets messy with n:n – jas- Aug 27 '15 at 2:46

Not sure why you're using AWS-created keys. The "Network & Security / Key Pairs" screen also has an "Import key pair" button; I've used it successfully.

When you add a new team member who might be standing up new images, get his public key, import it.

Keep a copy of your standard "authorized_keys" file, with one or more public keys per person, on a webserver or S3 (it could be VPN-facing, or secured by a team-shared username and password, if you're trying to keep your team identities confidential). Then part of your deploy process is to run:

curl -u username:password \
  https://team-server/private/authorized_keys.txt >/home/ec2-user/.ssh/authorized_keys

(Such a command could also be added to the instance user data to run at instance creation.)
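
For example, a minimal user-data script along those lines might be (assuming the default "ec2-user" account as above; the URL and credentials are the same placeholders):

    #!/bin/bash
    # Runs once at first boot via cloud-init: pull the team's current public keys.
    curl -fsS -u username:password \
      https://team-server/private/authorized_keys.txt \
      > /home/ec2-user/.ssh/authorized_keys
    chown ec2-user:ec2-user /home/ec2-user/.ssh/authorized_keys
    chmod 600 /home/ec2-user/.ssh/authorized_keys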

To add a new team-member, add his public key to the file, then run that command on all servers.

To remove a team-member, remove his key from the file, then run that command on all servers.
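
If you keep a simple host list, "run that command on all servers" can itself be one small loop; a rough sketch (hosts.txt is assumed to exist, one hostname per line):

    # Push the refreshed authorized_keys file to every known host in one pass.
    # ssh -n stops ssh from swallowing the rest of the host list on stdin.
    while read -r host; do
      ssh -n "ec2-user@$host" 'curl -fsS -u username:password https://team-server/private/authorized_keys.txt > ~/.ssh/authorized_keys'
    done < hosts.txt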

    
Ok, so the setup will be to import all members' public keys to Amazon, and when a deployment scenario is played out, the orchestration software will need to choose the correct key (for the member running the scenario) to use in the AWS API call to start the server, then log in and load the rest of the keys? I think it's doable. – Guss May 4 '15 at 20:53
    
How do you actually start the deployment scenario? If the member running the scenario directly initiates it from the command line, the orchestration software may not need to do anything - simply use SSH agent forwarding. Otherwise, you run into the issue that your orchestration server may need the team member's private key (a big no-no in SSH security). – Kevin Keane May 5 '15 at 3:23
    
Ah, sorry it wasn't clear - there is no orchestration server. The orchestration software is installed on each team member's local system. I've updated my question to reflect that. – Guss May 5 '15 at 15:31

After re-reading all this, I insist: granting root access directly to many users is generally not a good idea. Using sudo is the recommended way (you could even use su, with group restrictions).

Why not use the original Un*x group behaviour?

  • For each user, add a dedicated account (you could manage user account replication by using LDAP, NIS, or even a simple shell script that rsyncs your passwd, group and shadow files and each $USER/.ssh/authorized_keys... NIS is nice!)

    adduser alice
    
  • Set a specific password for each user (users may be allowed, or required, to change it... or not)

    passwd alice
    
  • Add the user to a group with specific rights (to access sudo, for example)

    adduser alice sudo
    
  • In the same way, you could create another group with shared rights, for working together on a specific project, even without access to root.

This approach gives you the ability to (a consolidated sketch of these steps appears at the end of this answer):

  • let users connect without a password, but only to their own account, and then ask for their password when they run sudo.

    su - alice <<'eocmd'
       curl -u username:password \
          https://team-server/private/authorized_keys.txt >.ssh/authorized_keys
    eocmd
    
  • add or delete as many users as you want

    deluser alice
    
  • drop or restore a user's right to

    • access any host/server

      usermod --expiredate 1 alice
      
    • access specific group rights.

      deluser alice sudo
      
  • keep a specific authorized_keys file for each user, for example

    su - alice -c 'curl ... >>.ssh/authorized_keys'
    

    to permit one user to connect from many different machines. In fact, as each authorized_keys file is owned by its user, each user can do what he wants without compromising the whole host.

... and keep PermitRootLogin set to no in /etc/ssh/sshd_config!
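
Put together, onboarding a new team member looks roughly like this consolidated sketch (assuming Debian/Ubuntu adduser semantics; "alice" and the per-user key URL are placeholders):

    # Create the personal account, grant sudo via group membership,
    # and install the user's own authorized_keys as that user.
    adduser --disabled-password --gecos '' alice
    adduser alice sudo
    su - alice -c \
       'mkdir -p -m 700 .ssh && curl -u username:password https://team-server/private/alice_keys.txt > .ssh/authorized_keys'
    # Offboarding is the reverse: deluser alice sudo; usermod --expiredate 1 alice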

    
Read The Linux System Administrator's Guide for further docs. – F. Hauri May 5 '15 at 8:35
    
We are definitely not logging in as root, sorry it wasn't clear - I've updated my question with details. – Guss May 5 '15 at 15:40
    
If I understand correctly, what you suggest is to either use a central login service (which will need to be managed and protected - I'm not happy doing that once, but as we are using VPC and there is no way that I will put this server on the public internet, we'll need one for each environment), or manage multiple users and multiple SSH keys on multiple servers in a remote "farm" (multiples of those, as we plan to expand to multiple AWS sites). That looks like way too much heavy lifting. Also, requiring passwords on each system is a big no-no, as it will break our orchestration software. – Guss May 5 '15 at 15:44
    
Hem, yes - even if you need to keep passwordless root access for administrative scripts, having a separate account for each sysadmin is mostly good practice. Of course, each sysadmin who has access to sudo could install his own backdoor. You have to keep this in mind. – F. Hauri May 5 '15 at 15:59

Maybe a solution such as FreeIPA could help you. It provides a way to make sure that SSH keys are properly distributed, as well as managing sudo profiles and authentication.
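
If you go that way, key distribution in FreeIPA is handled as a per-user attribute rather than as files on each host; roughly like the following sketch (the user name and key file are placeholders, and the exact options may differ between versions):

    # Create the user in FreeIPA and attach her public key; enrolled hosts can
    # then look the key up centrally instead of reading ~/.ssh/authorized_keys.
    ipa user-add alice --first=Alice --last=Example
    ipa user-mod alice --sshpubkey="$(cat alice_id_rsa.pub)"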

Z.

An interesting concept, but I don't believe this is the right solution for me: FreeIPA is kind of heavy lifting and requires installing software on each server (what FreeIPA calls "client") and its use case (SSO to per-user named accounts) doesn't really fit the EC2 model (the only account on the server is the one with sudo permissions and the public part of the AWS key pair). – Guss May 4 '15 at 16:50
    
You don't actually need the client; you could pick and choose only the pieces you want, and manually manage the configurations. Like the Linux-to-AD style, the FreeIPA client packages are just scripts that automate the various configuration changes. There are no agents or anything like that, unless you count SSSD as one... but again, that's a choice. – TechZilla Nov 7 '16 at 17:46

A typical best-practices answer would be to use something like LDAP. With it, you can define users, groups, and more.

By connecting your SSH authentication backend via PAM to your LDAP server, you can maintain separate accounts for each member of your team. I may be mistaken, but you should also be able to maintain separate $HOME/.ssh/authorized_keys files for each user as well.
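
One concrete way to wire that up (via SSSD on top of the LDAP server, so treat it as one possible implementation rather than the only one) is to have sshd ask the directory for keys through AuthorizedKeysCommand:

    # Sketch: let sshd fetch each user's public keys from SSSD/LDAP instead of
    # reading ~/.ssh/authorized_keys from local disk (assumes sssd is already
    # configured against your LDAP server).
    cat >> /etc/ssh/sshd_config <<'EOF'
    AuthorizedKeysCommand /usr/bin/sss_ssh_authorizedkeys
    AuthorizedKeysCommandUser nobody
    EOF
    service ssh restart    # or: systemctl restart sshd, depending on the distro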

By making users members of the sudoers group, you allow them to request root permissions on demand. These accesses will be recorded in logs, so you'll have auditability later on down the line.

This setup is pretty standard in larger corporations.

You're not the first to suggest that the best way to manage security in a dynamic cluster is to individually manage user credentials on all servers. I don't think this scales - even if I could deploy LDAP (which I can't, due to network partitioning). How would you get the keys of a new member to all servers? For the sake of the argument, here's how yesterday went - I deployed two new clusters (>5 servers each) and recycled 8 servers in another network (i.e. killed them and created new ones). If I spend only 5 extra minutes on each server setting up security, the system will never be ready on time. – Guss Aug 26 '15 at 8:54

When I need to distribute public SSH keys to my servers, I use an online tool (https://keydistributor.io) and fetch the keys on each server from its API. In that tool (just like Puppet, Chef, etc.) I can define groups, where each user belongs to a group.

    
This looks like an interesting service. It's a bit expensive when you have many servers, and I'm not sure how they handle auto-scaling servers - but it's a step in the right direction. I'm not sure why someone would choose to downvote you. – Guss Nov 28 '16 at 17:40
    
@Guss look at Azure Key Vault if you're looking for something scalable – Little Code Nov 28 '16 at 18:12
