<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
  <channel>
    <title>Servers for Hackers</title>
    <link>https://serversforhackers.com</link>
    <description>Server administration for programmers.</description>
    <item>
      <title>Customize your Login Screen via Linux's Message of the Day (Ubuntu/CentOS)</title>
      <link>https://serversforhackers.com/video/customize-your-login-screen-via-linuxs-message-of-the-day-ubuntucentos</link>
      <description>See how you can customize the screen you see when you login to give you important information quickly.</description>
    </item>
    <item>
      <title>Docker for Gulp Build Tasks</title>
      <link>https://serversforhackers.com/docker-for-gulp-build-tasks</link>
      <description>I run my development environments inside of a Vagrant virtual machine. Most of my projects use a `gulp` pipeline to build static assets.&#13;
&#13;
**Generally, you can decide to run Gulp in one of two places:**&#13;
&#13;
1. **On the host machine:** Install Node (via NVM or not) on your host machine (perhaps your Mac computer) and run `gulp watch` from the host machine.&#13;
2. **On the guest machine:** Install Node on the guest (the virtual machine) and run `gulp watch` from there.&#13;
&#13;
Here are some pros and cons of that. Some of these cons are purely subjective (my personal preference). You can disagree with them, I won't mind.&#13;
&#13;
## NodeJS On the Host&#13;
&#13;
**The Pro:** NodeJS on the host machine doesn't miss file changes. Since the files are actually on the host machine file system (not shared into the VM), inotify and friends are generally working as they should. This means that `gulp watch` doesn't bat an eye.&#13;
&#13;
**The Con:** NodeJS is one of those tools I often fight with - either `npm install` inexplicably fails, or NodeJS versions don't match up, or, even when everything should work, it just refuses to install correctly.&#13;
&#13;
Therefore I HATE having it installed directly on my Mac. NVM doesn't really solve this for me. I instead prefer to install development-related software on a virtual machine, as these can be destroyed and re-created very easily. NodeJS, especially, has a fast-moving community. It's always changing.&#13;
&#13;
## NodeJS on the Guest&#13;
&#13;
**The Pro:** NodeJS on the guest lets me worry a LOT less about testing and installing things. VMs are servers that I can destroy and re-create easily. I don't mind experimenting with tools, or throwing multiple versions of binaries around inside of it willy-nilly.&#13;
&#13;
(Personal preference, remember!)&#13;
&#13;
**The Con**: The default file sharing that Virtualbox uses can easily miss file changes, making `gulp watch` frustrating to work with.&#13;
&#13;
NFS makes this situation better, but is **not** infallible. It should be noted that I've tried LOTS of various NFS configurations, including using [`cachefilesd`](https://github.com/fideloper/Vaprobash/commit/677e19d61337b11bc64eb829016028f1047c7d76#diff-23b6f443c01ea2efcb4f36eedfea9089).&#13;
&#13;
### Anything else we can try?&#13;
&#13;
&lt;blockquote class="twitter-tweet" data-lang="en"&gt;&lt;p lang="en" dir="ltr"&gt;Picard management tip: There is always a non-obvious third option. When caught between a rock and a hard place, find a jackhammer.&lt;/p&gt;— Picard Tips (@PicardTips) &lt;a href="https://twitter.com/PicardTips/status/591013198628687872"&gt;April 22, 2015&lt;/a&gt;&lt;/blockquote&gt;&#13;
&#13;
Agreed!&#13;
&#13;
## Docker&#13;
&#13;
I've recently been getting reacquainted with Docker and its [new tools](https://blog.docker.com/2016/06/docker-1-12-built-in-orchestration/). I realized I could use it effectively here.&#13;
&#13;
**My goals in using Docker to run Gulp build steps were:**&#13;
&#13;
1. Have NodeJS in something super ephemeral - something I wouldn't mind making mistakes on.&#13;
2. Have `gulp watch` work more reliably.&#13;
&#13;
The following setup does both of these things! Note that point 2 works because Docker's volume mounting is much better than either NFS or Virtualbox's default file sharing.&#13;
&#13;
&gt; This is possible on my Mac because of the [Docker app (in beta)](https://docs.docker.com/engine/installation/mac/), which is solving the sticky bits of running Docker on Mac's kernel. It no longer runs in a Virtualbox VM thanks to [*magic*](https://github.com/mist64/xhyve/).&#13;
&#13;
Let's see how Docker helped me here:&#13;
&#13;
### Dockerfile&#13;
&#13;
First, we'll create a new Docker image that has NodeJS in it. I install Node 5, since that's what my project happened to expect. Node 6 is the latest and might be what you want instead.&#13;
&#13;
```&#13;
FROM ubuntu:16.04&#13;
&#13;
MAINTAINER Chris Fidao&#13;
&#13;
WORKDIR /opt&#13;
&#13;
ADD setup_5.x /tmp/setup_5.x&#13;
RUN bash /tmp/setup_5.x&#13;
&#13;
RUN apt-get update&#13;
RUN apt-get install -y build-essential&#13;
RUN apt-get install -y nodejs&#13;
RUN /usr/bin/npm install -g gulp&#13;
RUN /usr/bin/npm install -g bower&#13;
&#13;
VOLUME ["/opt"]&#13;
CMD ["gulp", "watch"]&#13;
```&#13;
&#13;
All we do here is grab the latest Ubuntu LTS and install NodeJS. I then globally install Gulp and Bower. This is taken straight out of the [Homestead install script](https://github.com/laravel/settler/blob/9ad04c61eed8bbb163b567d4f8f9015ad6636a46/scripts/provision.sh#L170-L174).&#13;
&#13;
**Three things to note:** &#13;
&#13;
1. I add a file called `setup_5.x`. This is directly from the [Nodesource Node 5.x installer](https://deb.nodesource.com/setup_5.x), which I downloaded and saved as a file first.&#13;
2. The working directory is set to `/opt`, so any commands run will be relative to that directory.&#13;
3. Using `CMD ["gulp", "watch"]` means that if I don't specify any other command, this container will run `gulp watch`. I can, however, specify any other command I want. And I will! You'll see why.&#13;
&#13;
Once this Dockerfile is saved, we can build a new image. I'll name (tag) it simply `gulp`:&#13;
&#13;
```bash&#13;
$ cd /path/to/dir/with/Dockerfile&#13;
&#13;
$ docker build -t gulp .&#13;
# ...it builds...&#13;
&#13;
$ docker images&#13;
REPOSITORY    TAG       IMAGE ID        CREATED         SIZE&#13;
gulp          latest    59fe57f1d14a    17 hours ago    460.6 MB&#13;
ubuntu        16.04     2fa927b5cdd3    5 weeks ago     122 MB&#13;
```&#13;
&#13;
### Building node_modules&#13;
&#13;
There are two steps to actually using this:&#13;
&#13;
1. Use the container to build the `node_modules` directory&#13;
2. Use the container to run `gulp watch`&#13;
&#13;
Let's first assume your project has not had `npm install` run yet. (I backed up and deleted my old `node_modules` directory).&#13;
&#13;
```bash&#13;
# Get old node_modules dir out of the way&#13;
$ cd ~/Sites/some-project&#13;
$ cp -R node_modules ~/some-project-node_modules&#13;
$ rm -rf node_modules # everyone's familiar with this command&#13;
&#13;
# Use our container to build a new node_modules dir&#13;
$ docker run --rm -v ~/Sites/some-project:/opt gulp npm install&#13;
```&#13;
&#13;
Let's cover what's happening there:&#13;
&#13;
1. `docker run` - run a container&#13;
2. `--rm` - Delete the container when it exits. We don't need this container to persist.&#13;
3. `-v ~/Sites/some-project:/opt` - Share our host computer's `some-project` dir with the `/opt` dir, which we set as the working directory of the image (and which remains the working directory when we use this image to make a new container)&#13;
4. `gulp` - specify the image from which the container should be created&#13;
5. `npm install` - The default CMD set was `gulp watch`, but we're overriding it in this case so we can have it install our npm dependencies first. This way, anything specific to the container's node version and architecture (Node 5.x, Ubuntu 16.04 x64) will be created correctly.&#13;
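&#13;
Since the two `docker run` invocations differ only in the trailing command, they can be folded into a small helper. This is purely a convenience sketch (the `gulp_run` name is hypothetical, and the image is assumed to be tagged `gulp` as above) - it echoes the command so you can inspect it first:

```bash
#!/usr/bin/env bash

# Hypothetical helper around the `docker run` invocation above.
# Pass a command (e.g. "npm install") to override the image's
# default CMD, or pass nothing to fall through to `gulp watch`.
gulp_run() {
    local project_dir="$1"; shift
    # --rm: throw the container away when it exits
    # -v:   share the project into /opt, the image's working directory
    echo docker run --rm -v "${project_dir}:/opt" gulp "$@"
}

gulp_run ~/Sites/some-project npm install
gulp_run ~/Sites/some-project
```

Drop the leading `echo` to actually run the containers.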
&#13;
### Running "Gulp Watch"&#13;
&#13;
Once that's done, a new `node_modules` directory will exist! We're ready to run `gulp watch` now:&#13;
&#13;
```bash&#13;
$ docker run --rm -v ~/Sites/some-project:/opt gulp&#13;
```&#13;
&#13;
This is almost the exact same command, except we didn't give the container a command to run. Since we defined `CMD ["gulp", "watch"]` in the Dockerfile, the `gulp watch` command will be run by default.&#13;
&#13;
&gt; Don't confuse the image name `gulp` with the commands run inside the containers. The first container ran `npm install`; the second container ran `gulp watch`.&#13;
&#13;
I use `--rm` on both containers since the containers don't need to persist - the results of their actions are saved on our host computer thanks to the volume mounting.&#13;
&#13;
When you're done with that container, you can open a new shell and run `docker ps` to get the docker ID or name, and then run `docker stop &lt;container-id-or-name&gt;` to stop it.&#13;
&#13;
&lt;script async src="//platform.twitter.com/widgets.js" charset="utf-8"&gt;&lt;/script&gt;</description>
    </item>
    <item>
      <title>Letsencrypt for Free &amp; Easy SSL Certificates</title>
      <link>https://serversforhackers.com/video/letsencrypt-for-free-easy-ssl-certificates</link>
      <description>See how easy it is to use letsencrypt to create and automatically renew FREE SSL certificates!</description>
    </item>
    <item>
      <title>Logrotate for Forge</title>
      <link>https://serversforhackers.com/logrotate-for-forge</link>
      <description>Laravel Forge (and Envoyer) keep your Laravel application's `storage` directory persistent through deployments. One side effect of this is that your `storage/logs/laravel.log` file is always growing.&#13;
&#13;
Linux systems come with [Logrotate, which can help you manage your growing log files](https://serversforhackers.com/managing-logs-with-logrotate). It's really simple - we can basically set it and forget it.&#13;
&#13;
&gt; **Note:** This is best used when your logs are set to `single` rather than `daily` within your `config/app.php` file.&#13;
&#13;
## Logrotate for Forge&#13;
&#13;
First, we need to know the application's log storage directory. If my application is `foo.example.com`, that location is likely `/home/forge/foo.example.com/storage/logs`.&#13;
&#13;
Knowing that, we can make a logrotate configuration for it at `/etc/logrotate.d/foo.example.com`:&#13;
&#13;
```&#13;
/home/forge/foo.example.com/storage/logs/*.log {&#13;
    su forge forge&#13;
    weekly&#13;
    missingok&#13;
    rotate 24&#13;
    compress&#13;
    notifempty&#13;
    create 755 forge forge&#13;
}&#13;
```&#13;
&#13;
Once you save that file, you're done!&#13;
&#13;
Let's go over the options we set there:&#13;
&#13;
* First we set the file path, and tell it to find any file ending in `.log`&#13;
* **su forge forge** tells logrotate to use user/group `forge` to process these logs&#13;
* **weekly** tells logrotate to rotate logs weekly (versus daily, monthly, yearly)&#13;
* **missingok** - If no *.log files are found, don't freak out&#13;
* **rotate 24** - Keep 24 archived log files before deleting the oldest. At weekly rotation, that's 24 weeks, or about 6 months&#13;
* **compress** - Compress archived log files so they take less disk space&#13;
* **notifempty** - Don't rotate the log if it's empty&#13;
* **create 755 forge forge**  - Create new log files with these permissions/owner settings&#13;
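&#13;
If a log grows faster than weekly rotation keeps up with, logrotate can also rotate on size instead. As a sketch (the `50M` threshold is an arbitrary example, not something Forge requires), the same file could read:

```
/home/forge/foo.example.com/storage/logs/*.log {
    su forge forge
    size 50M
    missingok
    rotate 24
    compress
    notifempty
    create 755 forge forge
}
```

With `size`, the log is rotated whenever it exceeds the threshold at the time logrotate's (usually daily) cron job runs.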
&#13;
## Testing&#13;
&#13;
To test this command out, we can force the running of logrotate with the following command:&#13;
&#13;
```bash&#13;
# Force logrotate to run against the config file for foo.example.com&#13;
sudo logrotate --force /etc/logrotate.d/foo.example.com&#13;
```&#13;
&#13;
This will tell you if there are any errors. If you get no output, you should be able to check your log directory and see some newly rotated logs!&#13;
&#13;
## Resources&#13;
&#13;
* See more about [logrotate here](https://serversforhackers.com/managing-logs-with-logrotate)</description>
    </item>
    <item>
      <title>Mysqldump with Modern MySQL</title>
      <link>https://serversforhackers.com/mysqldump-with-modern-mysql</link>
      <description>Mysqldump has *many* options ([I count 111 options](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#idm140518991203200) 😁).&#13;
&#13;
Most of us are likely keeping it simple. Here's how I've typically exported a single database:&#13;
&#13;
```bash&#13;
mysqldump some_database &gt; some_database.sql&#13;
&#13;
# Or with user auth&#13;
mysqldump -u some_user -p some_database &gt; some_database.sql&#13;
&#13;
# Or with gzip compression&#13;
mysqldump some_database | gzip &gt; some_database.sql.gz&#13;
&#13;
# Or with the "pv" tool, which lets us know how much data is&#13;
# flowing between our pipes - useful for knowing if the mysqldump&#13;
# has stalled&#13;
mysqldump some_database | pv | gzip &gt; some_database.sql.gz&#13;
# 102kB 0:01:23 [1.38MB/s] [  &lt;=&gt;&#13;
```&#13;
&#13;
However, it's worth digging into this command a bit to learn what's going on. If you're using mysqldump against a production database, its use can cause real issues for your users while it's running.&#13;
&#13;
## Defaults&#13;
&#13;
First, let's cover mysqldump's defaults. Unless we explicitly tell it not to, mysqldump is using the [`--opt`](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_opt) flag. The `opt` option is an alias for the following flags:&#13;
&#13;
* [--add-drop-table](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_add-drop-table) - Write a DROP TABLE statement before each CREATE TABLE statement, letting you re-use the resulting .sql file over and over with idempotence.&#13;
* [--add-locks](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_add-locks) - This applies when you're importing your dump file (not when running mysqldump). Surrounds each table dump with LOCK TABLES and UNLOCK TABLES statements. This results in faster inserts when the dump file is reloaded. This means that while you're importing data, each table will be locked from reads and writes while it's (re-)creating a table.&#13;
* [--create-options](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_create-options) - Include all MySQL-specific table options in the CREATE TABLE statements. In testing this (turn it off using `--create-options=false`), I found that the main/most obvious difference was the absence of `AUTO_INCREMENT` on primary keys when setting this option to false.&#13;
* [--disable-keys](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_disable-keys) - *This option is effective only for nonunique indexes of **MyISAM** tables.* This makes loading the dump file faster *(for MyISAM tables)* because the indexes are created after all rows are inserted.&#13;
* [--extended-insert](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_extended-insert) - Write INSERT statements using multiple-row syntax that includes several VALUES lists. Skipping this option may be required for tables with large columns (usually blobs) that push queries over the client/server "max_allowed_packet" configuration, but you should generally always use this option. Using a single query per insert slows down imports considerably.&#13;
* [--lock-tables](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_lock-tables) - Unlike `add-locks`, this applies when you're running mysqldump. This locks all tables for the duration of the mysqldump, making it a bad option to use on a live environment. Primarily it's used to protect data integrity when dumping MyISAM tables. Since InnoDB is rightly the default table storage engine nowadays, this option usually should be overridden by using `--skip-lock-tables` to stop the behavior and `--single-transaction` to run mysqldump within a transaction, which I'll cover in a bit.&#13;
* [--quick](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_quick) - Reads out large tables in a way that doesn't require having enough RAM to fit the full table in memory.&#13;
* [--set-charset](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_set-charset) - Write SET NAMES default_character_set to the output. This DOES NOT perform any character set conversion (mysqldump won't do that with *any* flag). Instead, it just adds the character set info to the dump so it's set when re-importing the dump file.&#13;
&#13;
&#13;
So the defaults are pretty good, with the exception of **--lock-tables**. This causes the database to become unusable while mysqldump is running, but it doesn't need to be this way!&#13;
&#13;
We can use mysqldump more intelligently.&#13;
&#13;
## Mysqldump and Table Locks&#13;
&#13;
When using mysqldump, there's a trade-off to be made between halting/affecting database performance and ensuring data integrity. Your strategy will largely be determined by what storage engine(s) you're using in your database tables.&#13;
&#13;
&gt; Since each table can have a separate storage engine, this can get interesting :D&#13;
&#13;
By default, mysqldump locks all the tables it's about to dump. This ensures the data is in a **consistent state** during the dump.&#13;
&#13;
### Data Consistency&#13;
&#13;
A "consistent state" means that the data is in an expected state. More specifically, all relationships should match up. Imagine if mysqldump exports the first 5 tables out of 20. If table 1 and table 20 got new rows related to each other by primary/foreign keys after mysqldump dumped table 1 but before it dumped table 20, then we're in an inconsistent state. Table 20 has data relating to a row in table 1 that did not make it into the dump file.&#13;
&#13;
MyISAM tables require this locking because they don't support transactions. However, InnoDB (the default storage engine as of MySQL 5.5.5) supports transactions. Mysqldump defaults to a conservative setting of locking everything, but we don't need to use that default - we can avoid locking tables completely.&#13;
&#13;
### Mysqldump with Transactions&#13;
&#13;
As a rule of thumb, **unless you are using MyISAM for a specific reason, you should be using the InnoDB storage engine** on all tables. If you've been porting around a database to various MySQL servers for years (back when MyISAM used to be the default storage engine), check to make sure your tables are using InnoDB.&#13;
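&#13;
One quick way to check is to ask the `information_schema` database, which records each table's storage engine (a sketch, using this article's `some_database` example name):

```sql
-- List each table's engine; anything reporting MyISAM is a
-- candidate for conversion to InnoDB
SELECT TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'some_database';
```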
&#13;
This is the important one:&#13;
&#13;
**Assuming you are using InnoDB tables, your mysqldump should look something like this:**&#13;
&#13;
```bash&#13;
mysqldump --single-transaction --skip-lock-tables some_database &gt; some_database.sql&#13;
```&#13;
&#13;
The [`--single-transaction`](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_single-transaction) flag will start a transaction before running. Rather than lock the entire database, this will let mysqldump read the database in the current state at the time of the transaction, making for a consistent data dump.&#13;
&#13;
&gt; The `single-transaction` options uses the default transaction isolation mode: [REPEATABLE READ](https://dev.mysql.com/doc/refman/5.7/en/set-transaction.html#isolevel_repeatable-read).&#13;
&#13;
Note that if you have a mix of MyISAM and InnoDB tables, using the above options can leave your MyISAM (or Memory tables, for that matter) in an inconsistent state, since it does not lock reads/writes to MyISAM tables.&#13;
&#13;
In that case, I suggest dumping your MyISAM tables separately from InnoDB tables. &#13;
&#13;
However, if that still results in inconsistent state (if the MyISAM table has PK/FK relationships to InnoDB tables), then using the [`--lock-tables`](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_lock-tables) option becomes the only way to guarantee the database is in a consistent state when using mysqldump. &#13;
&#13;
This means that in that situation, you'll have to be careful about when you run mysqldump on a live database. Perhaps run it on a replica database instead of a master one, or investigate options such as Xtrabackup, which copies the mysql data directory and does not cause down time.&#13;
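&#13;
To make those flags the default habit, they can be folded into a small shell function. This is just a sketch of a convenience wrapper (the `backup_db` name is hypothetical) - it prints the pipeline rather than running it, so you can eyeball it first:

```bash
#!/usr/bin/env bash

# Hypothetical wrapper combining the flags above with the gzip
# pipeline from earlier, plus a date-stamped output filename.
# It echoes the command; remove the `echo` to execute for real.
backup_db() {
    local db="$1"
    local out="${db}_$(date +%F).sql.gz"
    echo "mysqldump --single-transaction --skip-lock-tables ${db} | gzip > ${out}"
}

backup_db some_database
```

Remember this assumes InnoDB tables - mix in MyISAM and you're back to the locking trade-offs above.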
&#13;
## Replication&#13;
&#13;
If you're using replication, you already have a backup on your replica servers. That's awesome! However, off-site backups are still a good thing to have. In such a setup, I try to run mysqldump on the replica server instead of a master server.&#13;
&#13;
In terms of mysqldump, this has a few implications:&#13;
&#13;
1. Running mysqldump on a replica server means the data it receives might be slightly behind the master server. &#13;
    - For regular backups, this is likely fine. If you need the data to be at a certain point, then you need to wait until that data has reached the replica server.&#13;
2. Running mysqldump on a replica is preferred (IMO) since, in theory, there is already a built-in assumption that the replica servers will be behind anyway - adding the bit of "strain" of a mysqldump shouldn't be a big deal.&#13;
&#13;
In any case, there are some useful flags to use when replication is in place (or when binlogs are enabled in general).&#13;
&#13;
### Master Data&#13;
&#13;
The [`--master-data`](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_master-data) flag adds output to a dump file which allows it to be used to set up another server as a replica of the master. The replica needs the master data to know where to start replication.&#13;
&#13;
The `--master-data` option automatically turns off `--lock-tables`, since the included binlog position says where replication should start from - you won't lose queries even if the dump ends up in an inconsistent state. (Again, that's only a consideration if you have MyISAM tables.)&#13;
&#13;
If `--single-transaction` is also used, a global read lock is acquired only for a short time at the beginning of the dump.&#13;
&#13;
Use this when dumping from a master server.&#13;
&#13;
### Dump Replica&#13;
&#13;
The [`--dump-slave`](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_dump-slave) option is very similar to `--master-data`, except its use case is:&#13;
&#13;
1. Instead of being a dump from the master server, it's meant to be a dump of a replica server&#13;
2. It will contain the same master information as the replica server being dumped, whereas `--master-data` sets the dumped server itself as the master&#13;
&#13;
Use this when dumping from a replica server.&#13;
&#13;
&gt; From the docs: "This option should not be used if the server where the dump is going to be applied uses gtid_mode=ON and MASTER_AUTOPOSITION=1."&#13;
&gt; &#13;
&gt; GTID is a newer way to do MySQL replication as of MySQL 5.6. It's a nicer method, so --dump-slave in theory can be one to ignore.&#13;
&#13;
&lt;!--&#13;
In theory, replica servers are always up to speed with the master server. However, lag is a **real** reality of database replication ([with exceptions](http://stackoverflow.com/questions/29381442/eventual-consistency-vs-strong-eventual-consistency-vs-strong-consistency)). Write-heavy database loads, slow networks and large geographic distances between database servers are all contributors to replication lag.&#13;
&#13;
In terms of mysqldump, this has as few implications:&#13;
&#13;
1. Running mysqldump on a replica server means the data it receives might be slightly behind the master server. &#13;
    - For regular backups, this is likely fine. If you need the data to be at a certain point, then you need to wait until that data has reached the replica server.&#13;
2. Running mysqldump on a replica is prefered (IMO) since in theory, there is already a built-in assumption that the replica servers will be behind anyway - adding a bit of "strain" of a mysqldump shouldn't be a big deal.&#13;
&#13;
&gt; A bit of a sidenote: A replica server being behind the master database only matters if your application uses the replicas to spread read queries (a common use case). That's not always the case - sometimes replicas are only used for backend/reporting systems, or just as "hot backups".&#13;
&gt; &#13;
&gt; Using MySQL replication may have implications on your applications - they may need to handle situations where an update (write query) isn't immediately reflected in the UI.&#13;
--&gt;&#13;
&#13;
&#13;
## Dump more than one (all) database&#13;
&#13;
I generally dump specific databases, which lets me more easily recover a specific database if I need to.&#13;
&#13;
However, you can dump multiple databases:&#13;
&#13;
```bash&#13;
mysqldump --single-transaction --skip-lock-tables --databases db1 db2 db3 \&#13;
    &gt; db1_db2_and_db3.sql&#13;
```&#13;
&#13;
You can also dump specific tables from a single database:&#13;
&#13;
```bash&#13;
mysqldump --single-transaction --skip-lock-tables some_database table_one table_two table_three \&#13;
    &gt; some_database_only_three_tables.sql&#13;
```&#13;
&#13;
You can also dump every database on the server. Note that this likely includes the internal `mysql` database as well:&#13;
&#13;
```bash&#13;
mysqldump --single-transaction --skip-lock-tables --flush-privileges --all-databases &gt; entire_database_server.sql&#13;
```&#13;
&#13;
The above command used the [`--all-databases`](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_all-databases) option along with the [`--flush-privileges`](http://dev.mysql.com/doc/refman/5.7/en/mysqldump.html#option_mysqldump_flush-privileges) option.&#13;
&#13;
Since we'll get the internal `mysql` database, which includes mysql users and privileges, the `--flush-privileges` option adds a `FLUSH PRIVILEGES` query at the end of the dump, needed since the dump may change users and privileges when being imported.&#13;
&#13;
## That's it!&#13;
&#13;
There are many, many options you can use with mysqldump. However, we covered what I think are the most important for using mysqldump in a modern implementation of MySQL.&#13;
&#13;
&lt;blockquote style="padding: 30px; background: #eeeeee;"&gt;&lt;p&gt;Side note, If you're interested in a service to help you manage &lt;strong&gt;MySQL-optimized, backup and (eventually) replication-enabled&lt;/strong&gt; database servers, &lt;a href="http://sqlops.launchrock.com/" title="mysql replication"&gt;sign up here to let me know&lt;/a&gt;! The idea is to allow you to better manage your MySQL servers, taking advantage of many of MySQL's more advanced options, especially around backup and recovery.&lt;/p&gt;&lt;/blockquote&gt;</description>
    </item>
    <item>
      <title>MySQL Network Security</title>
      <link>https://serversforhackers.com/mysql-network-security</link>
      <description>On Digital Ocean, Linode and similar clouds, private networks are open to the entire data center (e.g. Newark or NYC3). They are **NOT** private to your specific account. My Linode server in Newark can potentially communicate to your Linode server in Newark over their private network address.&#13;
&#13;
If your database requires access from other servers, it needs to be protected.&#13;
&#13;
## Our Available Tools&#13;
&#13;
MySQL comes with plenty of network security tools. Here are the essential ones:&#13;
&#13;
1. MySQL Bind-Address&#13;
2. MySQL User security&#13;
3. Firewall&#13;
&#13;
### MySQL Bind Address&#13;
&#13;
* MySQL can bind to no networks (connect over `localhost` only, e.g. a unix socket)&#13;
* MySQL can bind to all networks (`0.0.0.0`)&#13;
* MySQL can bind to a specific network, e.g. a public network that the whole internet can reach, or a private network that can only be reached from within a data center&#13;
&#13;
&gt; MySQL usually listens to a `localhost` unix socket **in addition to** a network location, so localhost connections should be just fine still no matter what setup you use.&#13;
&#13;
The more restrictive we can be, the better. If our application is on the same server as the database, we can close mysql off from binding to any network (defaulting instead to listening only on the local unix socket). More common is to also bind to the loopback network address `127.0.0.1` so both `localhost` (unix socket) and `127.0.0.1` (tcp socket) connections work.&#13;
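&#13;
The bind address lives in the MySQL server configuration. The exact path varies by distribution and MySQL version - on Ubuntu 16.04 it's typically `/etc/mysql/mysql.conf.d/mysqld.cnf` (treat this as a sketch):

```
[mysqld]
# Listen on the IPv4 loopback only; localhost unix socket
# connections keep working regardless
bind-address = 127.0.0.1

# Or bind to the server's private network address instead, e.g.:
# bind-address = 10.132.30.23
```

MySQL needs a restart after changing this (e.g. `sudo service mysql restart`).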
&#13;
&lt;!--&#13;
&gt; You can use an SSH tunnel, but I haven't heard of anyone using one in production, due to the overhead of the tunnel, reliability of SSH connections and externalities needed to make it work (e.g. SSH keys, config and their management).&#13;
&gt;&#13;
&gt; More advanced options for network security are using a VPN for SSL tunnel.&#13;
--&gt;&#13;
&#13;
### MySQL User Security&#13;
&#13;
In addition to setting what networks MySQL listens on, we can set **where** users are allowed to connect **from**. This means we can say "*user `my_app_user` can only connect to MySQL from the server whose address is `192.168.33.10`*".&#13;
&#13;
Let's see how that looks in MySQL.&#13;
&#13;
#### What users exist&#13;
&#13;
Run the following to see what users exist on the MySQL server:&#13;
&#13;
```sql&#13;
mysql&gt; SELECT User, Host from mysql.user;&#13;
+-----------+--------------------+&#13;
| User      | Host               |&#13;
+-----------+--------------------+&#13;
| root      | 127.0.0.1          |&#13;
| root      | ::1                |&#13;
| mysql.sys | localhost          |&#13;
| root      | localhost          |&#13;
+-----------+--------------------+&#13;
4 rows in set (0.00 sec)&#13;
```&#13;
&#13;
We can see that we have three `root` users and one system user.&#13;
&#13;
* `root@127.0.0.1` - Can connect using the loopback IPv4 address `127.0.0.1`&#13;
* `root@::1` - Can connect using the loopback IPv6 address `::1`&#13;
* `root@localhost` - Can connect using the unix socket&#13;
&#13;
Note that `localhost` in MySQL will mean connecting over the Unix socket, even if the hostname `localhost` resolves to IP address `127.0.0.1`.&#13;
&#13;
&lt;!--&#13;
#### Localhost Users&#13;
&#13;
I only let super users (users with privileges `ALL`, `SUPER`, &amp; `GRANT`) connect over local networks. This forces me to access the server over SSH (another layer of security) to perform more destructive commands.&#13;
&#13;
We can create a new user to connect via localhost:&#13;
&#13;
```sql&#13;
-- Create new user&#13;
CREATE USER 'my_super_user'@'localhost' IDENTIFIED BY 'some-strong-password';&#13;
-- Grant that user all the powers, if you want&#13;
GRANT ALL, SUPER ON *.* TO 'my_super_user'@'localhost' WITH GRANT OPTION;&#13;
```&#13;
&#13;
That's all good, but we need to know how to connect our remote users securely.&#13;
--&gt;&#13;
&#13;
#### Application Users&#13;
&#13;
We need to make MySQL users for our applications to use.&#13;
&#13;
Let's pretend that our MySQL server is in a single region with networks:&#13;
&#13;
* **Public IPv4:** 159.203.81.145&#13;
* **Private IPv4:** 10.132.30.23&#13;
&#13;
And an application server in the same data center with networks:&#13;
&#13;
* **Public IPv4:** 104.131.100.163&#13;
* **Private IPv4:** 10.132.51.34&#13;
&#13;
Since these two servers are within the same private network (`10.132.*.*`), they can communicate to each other. Let's set the application server to be able to connect to the MySQL server.&#13;
&#13;
We have a few tools we can use:&#13;
&#13;
* Hostnames (`example.com`)&#13;
* Explicit IP addresses (`192.168.10.10`)&#13;
* Wildcards (`192.168.10.%`)&#13;
* Netmasks&#13;
&#13;
Here we use an explicit IP address, some wildcards, and then some netmasks to see how they can be used:&#13;
&#13;
```sql&#13;
-- From the MySQL Server, create a new user&#13;
&#13;
-- Create a user that can only connect from this one server&#13;
CREATE USER 'my_app_user'@'10.132.51.34' IDENTIFIED BY 'some-strong-password';&#13;
&#13;
-- Create a user that can connect from any IP starting in `10.132.51.`&#13;
CREATE USER 'my_app_user'@'10.132.51.%' IDENTIFIED BY 'some-strong-password';&#13;
&#13;
-- Create a user that can connect from any IP starting in `10.132.`&#13;
CREATE USER 'my_app_user'@'10.132.%' IDENTIFIED BY 'some-strong-password';&#13;
&#13;
-- Create a user that can connect from any IP starting in `10.`&#13;
CREATE USER 'my_app_user'@'10.%' IDENTIFIED BY 'some-strong-password';&#13;
&#13;
-- Create a user that can connect from `10.132.51.*`&#13;
CREATE USER 'my_app_user'@'10.132.51.0/255.255.255.0' IDENTIFIED BY 'some-strong-password';&#13;
&#13;
-- Create a user that can connect from `10.132.*.*`&#13;
CREATE USER 'my_app_user'@'10.132.0.0/255.255.0.0' IDENTIFIED BY 'some-strong-password';&#13;
&#13;
-- Create a user that can connect from `10.*.*.*`&#13;
CREATE USER 'my_app_user'@'10.0.0.0/255.0.0.0' IDENTIFIED BY 'some-strong-password';&#13;
```&#13;
&#13;
The first example there is very specific - the user can only connect to MySQL if they are connecting from that one server at IP address `10.132.51.34`.&#13;
&#13;
The next few examples use wildcards (`%`). Wildcards follow the same matching rules as `LIKE` in MySQL, and are best used to match hostnames (e.g. `'my_app_user'@'%.example.com'`).&#13;
&#13;
Finally we have some netmask examples. Netmasks are specific to IPv4 network addresses (they don't work for IPv6 currently).&#13;
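If you want to sanity-check which client addresses a given netmask account actually covers, Python's `ipaddress` module is handy (this is purely a verification aid, not part of MySQL):

```python
import ipaddress

# The netmask account '10.132.51.0/255.255.255.0' covers the same
# clients as the wildcard account '10.132.51.%'
network = ipaddress.ip_network(u'10.132.51.0/255.255.255.0')

print(ipaddress.ip_address(u'10.132.51.34') in network)  # True - our app server
print(ipaddress.ip_address(u'10.132.99.10') in network)  # False - different subnet
```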
&#13;
&gt; In a cloud like Digital Ocean or Linode, I create multiple users who can only connect from a specific IP address (no wildcard, no netmask), since **all other options leave open the possibility that someone else's server can connect to your MySQL instance**.&#13;
&#13;
#### Public Networks&#13;
&#13;
The concepts here work for a public network as well. If your database needs to accept connections from other regions, the private network won't work. In that case, you "should" look into setting up a VPN or SSL tunnel, so a private network can be set up across regions.&#13;
&#13;
However, we can get fairly secure without a VPN, which is complex to set up and maintain.&#13;
&#13;
You can bind your MySQL to either all networks (if you also need private network access), `0.0.0.0`, or to the public network, `159.203.81.145` in our example.&#13;
&#13;
User setup is then the same - you can define the IP address or hostname the user can connect from.&#13;
&#13;
```sql&#13;
-- Create a user that can only connect from this one server&#13;
CREATE USER 'my_app_user'@'104.131.100.163' IDENTIFIED BY 'some-strong-password';&#13;
```&#13;
&#13;
&#13;
### Firewall&#13;
&#13;
On top of what MySQL provides, we can (should) also use our firewalls to protect us.&#13;
&#13;
For example, we can set our firewall to only accept connections to port 3306 if they are coming to our private network (the destination is our private network):&#13;
&#13;
```bash&#13;
# Append rule to the end of the INPUT chain&#13;
sudo iptables -A INPUT -p tcp -m tcp --dport 3306 -d 10.132.30.23 -j ACCEPT&#13;
&#13;
# Or, insert it into the middle of our chain (position 3 here)&#13;
sudo iptables -I INPUT 3 -p tcp -m tcp --dport 3306 -d 10.132.30.23 -j ACCEPT&#13;
```&#13;
&#13;
The `-d` flag sets the destination network. &#13;
&#13;
Alternatively, we can set what network interface allows traffic to port 3306:&#13;
&#13;
```bash&#13;
# Append rule to the end of the INPUT chain&#13;
sudo iptables -A INPUT -p tcp -m tcp --dport 3306 -i eth1 -j ACCEPT&#13;
&#13;
# Or, insert it into the middle of our chain (position 3 here)&#13;
sudo iptables -I INPUT 3 -p tcp -m tcp --dport 3306 -i eth1 -j ACCEPT&#13;
```&#13;
&#13;
This works well on Digital Ocean where the `eth1` interface (`-i`) is your private network. &#13;
&#13;
Linode, however, doesn't setup a network interface for the private traffic, so this firewall rule wouldn't work there.&#13;
&#13;
And of course we can set rules to only allow traffic from specific IP addresses or networks, using both netmask and CIDR notation.&#13;
&#13;
```bash&#13;
# Allow traffic from a specific IP&#13;
sudo iptables -A INPUT -p tcp -m tcp --dport 3306 -s 10.132.51.34  -j ACCEPT&#13;
&#13;
# Allow traffic from a netmask range of IP addresses 10.132.51.*&#13;
sudo iptables -A INPUT -p tcp -m tcp --dport 3306 -s 10.132.51.0/255.255.255.0  -j ACCEPT&#13;
&#13;
# Allow traffic from a CIDR range of IP addresses 10.132.51.*&#13;
sudo iptables -A INPUT -p tcp -m tcp --dport 3306 -s 10.132.51.0/24  -j ACCEPT&#13;
```&#13;
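The netmask and CIDR rules above describe the identical range of addresses. If you ever need to confirm that a dotted mask and a prefix length line up, a quick check (in Python here, purely as a scratchpad):

```python
import ipaddress

# 255.255.255.0 and /24 are two spellings of the same network
masked = ipaddress.ip_network(u'10.132.51.0/255.255.255.0')
cidr = ipaddress.ip_network(u'10.132.51.0/24')

print(masked == cidr)      # True
print(cidr.num_addresses)  # 256 (10.132.51.0 through 10.132.51.255)
```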
&#13;
## TL;DR&#13;
&#13;
1. Bind MySQL to `0.0.0.0` if you want it to listen on all networks, private and public. Bind to the private-network IP address of the MySQL server to listen only on the private network.&#13;
2. MySQL will listen on a local Unix socket as well, used when connecting to `localhost`.&#13;
3. Set up new MySQL users to be restrictive in where they can connect from - a specific IP address or as small a range of IPs as possible.&#13;
4. Use firewalls in conjunction with MySQL user restrictions.&#13;
&#13;
## Resources&#13;
&#13;
* More on [MySQL User security](https://serversforhackers.com/video/mysql-user-security)&#13;
* [Connecting to a remote MySQL server securely over SSH](https://serversforhackers.com/video/connecting-to-mysql-via-ssh) from your workstation&#13;
* [MySQL bind-address, firewall and mysql users](https://serversforhackers.com/video/application-servers-and-mysql)&#13;
&#13;
&lt;blockquote style="padding: 30px; background: #eeeeee;"&gt;&lt;p&gt;Side note, If you're interested in a service to help you manage &lt;strong&gt;MySQL-optimized, backup and (eventually) replication-enabled&lt;/strong&gt; database servers, &lt;a href="http://sqlops.launchrock.com/" title="mysql replication"&gt;sign up here to let me know&lt;/a&gt;! The idea is to allow you to better manage your MySQL servers, taking advantage of many of MySQL's more advanced options, especially around backup and recovery.&lt;/p&gt;&lt;/blockquote&gt;</description>
    </item>
    <item>
      <title>Running Ansible 2 Programmatically</title>
      <link>https://serversforhackers.com/running-ansible-2-programmatically</link>
      <description>Ansible 2 is out, and that means it's time to upgrade the previous article on [Running Ansible Programmatically](https://serversforhackers.com/running-ansible-programmatically) for Ansible 2, which has significant API changes under the hood.&#13;
&#13;
## Use Case&#13;
&#13;
At work, we are spinning up hosted trials for a historically on-premise product (no multi-tenancy).&#13;
&#13;
To ensure things run smoothly, we need logging and reporting of Ansible runs while these trials spin up or are updated.&#13;
&#13;
Each server instance (installation of the application) has unique data (license, domain configuration, etc).&#13;
&#13;
Running Ansible programmatically gives us the most flexibility and has proven to be a reliable way to go about this.&#13;
&#13;
At the cost of some code complexity, we gain the ability to avoid generating host and variable files on the system (although dynamic host generations may have let us do this - this is certainly not THE WAY™ to solve this problem).&#13;
&#13;
&gt; There are ways to accomplish all of this without diving into Ansible's internal API. The trade-off seems to be control vs "running a CLI command from another program", which always feels dirty.&#13;
&gt; &#13;
&gt; Learning some of Ansible's internals was fun, so I went ahead and did it.&#13;
&#13;
Overall, there's just more control when calling the Ansible API programmatically.&#13;
&#13;
## Install Dependencies&#13;
&#13;
Ansible 2 is the latest stable, so we don't need to do anything fancy to get it. We can get a virtual environment (Python 2.7, because I live in the stone-age) up and running and install dependencies into it. I'm using an Ubuntu server in this case:&#13;
&#13;
```bash&#13;
# Get pip&#13;
sudo apt-get install -y python-pip&#13;
&#13;
# Get/update pip and virtualenv&#13;
sudo pip install -U pip virtualenv&#13;
&#13;
# Create virtualenv&#13;
cd /path/to/runner/script&#13;
virtualenv ./.env&#13;
source ./.env/bin/activate&#13;
&#13;
# Install Ansible into the virtualenv&#13;
pip install ansible&#13;
```&#13;
&#13;
Then, with the virtual environment active, we call upon Ansible from our Python scripts.&#13;
&#13;
## Ansible Config&#13;
&#13;
In the Ansible 1 series, I first tried and eventually gave up on trying to set the path to the ansible.cfg file in code. I didn't even try again in Ansible 2, opting instead to (again) set the environmental variable `ANSIBLE_CONFIG`.&#13;
&#13;
That variable looks something like `ANSIBLE_CONFIG=/path/to/ansible.cfg`.&#13;
&#13;
The `ansible.cfg` file looks something like this:&#13;
&#13;
```ini&#13;
[defaults]&#13;
log_path = /var/log/ansible/ansible.log&#13;
callback_plugins = /path/to/project/callback_plugins:~/.ansible/plugins/callback_plugins/:/usr/share/ansible_plugins/callback_plugins&#13;
&#13;
[ssh_connection]&#13;
ssh_args = -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -o IdentitiesOnly=yes -o ControlMaster=auto -o ControlPersist=60s&#13;
control_path = /home/your_username/.ansible/cp/ansible-ssh-%%h-%%p-%%r&#13;
```&#13;
&#13;
Here's what is set here:&#13;
&#13;
* `log_path` - Uses the internal log plugin to log all actions to a file. I may turn this off in production in favor of our own log plugin, which will send logs to another source (database or a log aggregator).&#13;
* `callback_plugins` - The file path a custom logging plugin (or any plugin!) would be auto-loaded from, if any. **I add the project's path in first**, and then append the default paths after.&#13;
* `ssh_args` - Sets SSH options as you would normally set with the `-o` flag. This controls how Ansible connects to the host server. The ones I used will prevent the prompt that asks if this is a trusted host (be cautious with doing that!) and ensure Ansible uses our private key to access the server. Check out the video on Logging in with SSH to see some examples of those options used with the SSH command.&#13;
* `control_path` - I set the SSH control path to a directory writable by the user running the script/Ansible code (and thus using SSH to connect to the remote server). This is likely optional in reality.&#13;
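Because Ansible reads `ANSIBLE_CONFIG` when its configuration module first loads, one convenient option is to set the variable in-process, before anything from `ansible` is imported (a sketch - the path here is hypothetical):

```python
import os

# Set this before any "import ansible ..." statement runs, otherwise
# Ansible will already have loaded its configuration without it.
os.environ['ANSIBLE_CONFIG'] = '/path/to/ansible.cfg'  # hypothetical path

# from task import Runner  # Ansible-backed imports are safe from here on
```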
&#13;
## The Script(s)&#13;
&#13;
Like in our previous article, we'll dive right into the main script.&#13;
&#13;
```python&#13;
import os&#13;
from tempfile import NamedTemporaryFile&#13;
from ansible.inventory import Inventory&#13;
from ansible.vars import VariableManager&#13;
from ansible.parsing.dataloader import DataLoader&#13;
from ansible.executor import playbook_executor&#13;
from ansible.utils.display import Display&#13;
&#13;
&#13;
class Options(object):&#13;
    """&#13;
    Options class to replace Ansible OptParser&#13;
    """&#13;
    def __init__(self, verbosity=None, inventory=None, listhosts=None, subset=None, module_paths=None, extra_vars=None,&#13;
                 forks=None, ask_vault_pass=None, vault_password_files=None, new_vault_password_file=None,&#13;
                 output_file=None, tags=None, skip_tags=None, one_line=None, tree=None, ask_sudo_pass=None, ask_su_pass=None,&#13;
                 sudo=None, sudo_user=None, become=None, become_method=None, become_user=None, become_ask_pass=None,&#13;
                 ask_pass=None, private_key_file=None, remote_user=None, connection=None, timeout=None, ssh_common_args=None,&#13;
                 sftp_extra_args=None, scp_extra_args=None, ssh_extra_args=None, poll_interval=None, seconds=None, check=None,&#13;
                 syntax=None, diff=None, force_handlers=None, flush_cache=None, listtasks=None, listtags=None, module_path=None):&#13;
        self.verbosity = verbosity&#13;
        self.inventory = inventory&#13;
        self.listhosts = listhosts&#13;
        self.subset = subset&#13;
        self.module_paths = module_paths&#13;
        self.extra_vars = extra_vars&#13;
        self.forks = forks&#13;
        self.ask_vault_pass = ask_vault_pass&#13;
        self.vault_password_files = vault_password_files&#13;
        self.new_vault_password_file = new_vault_password_file&#13;
        self.output_file = output_file&#13;
        self.tags = tags&#13;
        self.skip_tags = skip_tags&#13;
        self.one_line = one_line&#13;
        self.tree = tree&#13;
        self.ask_sudo_pass = ask_sudo_pass&#13;
        self.ask_su_pass = ask_su_pass&#13;
        self.sudo = sudo&#13;
        self.sudo_user = sudo_user&#13;
        self.become = become&#13;
        self.become_method = become_method&#13;
        self.become_user = become_user&#13;
        self.become_ask_pass = become_ask_pass&#13;
        self.ask_pass = ask_pass&#13;
        self.private_key_file = private_key_file&#13;
        self.remote_user = remote_user&#13;
        self.connection = connection&#13;
        self.timeout = timeout&#13;
        self.ssh_common_args = ssh_common_args&#13;
        self.sftp_extra_args = sftp_extra_args&#13;
        self.scp_extra_args = scp_extra_args&#13;
        self.ssh_extra_args = ssh_extra_args&#13;
        self.poll_interval = poll_interval&#13;
        self.seconds = seconds&#13;
        self.check = check&#13;
        self.syntax = syntax&#13;
        self.diff = diff&#13;
        self.force_handlers = force_handlers&#13;
        self.flush_cache = flush_cache&#13;
        self.listtasks = listtasks&#13;
        self.listtags = listtags&#13;
        self.module_path = module_path&#13;
&#13;
&#13;
class Runner(object):&#13;
&#13;
    def __init__(self, hostnames, playbook, private_key_file, run_data, become_pass, verbosity=0):&#13;
&#13;
        self.run_data = run_data&#13;
&#13;
        self.options = Options()&#13;
        self.options.private_key_file = private_key_file&#13;
        self.options.verbosity = verbosity&#13;
        self.options.connection = 'ssh'  # Need a connection type "smart" or "ssh"&#13;
        self.options.become = True&#13;
        self.options.become_method = 'sudo'&#13;
        self.options.become_user = 'root'&#13;
&#13;
        # Set global verbosity&#13;
        self.display = Display()&#13;
        self.display.verbosity = self.options.verbosity&#13;
        # Executor appears to have its own&#13;
        # verbosity object/setting as well&#13;
        playbook_executor.verbosity = self.options.verbosity&#13;
&#13;
        # Become Pass Needed if not logging in as user root&#13;
        passwords = {'become_pass': become_pass}&#13;
&#13;
        # Gets data from YAML/JSON files&#13;
        self.loader = DataLoader()&#13;
        self.loader.set_vault_password(os.environ['VAULT_PASS'])&#13;
&#13;
        # All the variables from all the various places&#13;
        self.variable_manager = VariableManager()&#13;
        self.variable_manager.extra_vars = self.run_data&#13;
&#13;
        # Parse hosts, I haven't found a good way to&#13;
        # pass hosts in without using a parsed template :(&#13;
        # (Maybe you know how?)&#13;
        self.hosts = NamedTemporaryFile(delete=False)&#13;
        self.hosts.write("""[run_hosts]&#13;
%s&#13;
""" % hostnames)&#13;
        self.hosts.close()&#13;
&#13;
        # This was my attempt to pass in hosts directly.&#13;
        # &#13;
        # Also Note: In py2.7, "isinstance(foo, str)" is valid for&#13;
        #            latin chars only. Luckily, hostnames are &#13;
        #            ascii-only, which overlaps latin charset&#13;
        ## if isinstance(hostnames, str):&#13;
        ##     hostnames = {"customers": {"hosts": [hostnames]}}&#13;
&#13;
        # Set inventory, using most of above objects&#13;
        self.inventory = Inventory(loader=self.loader, variable_manager=self.variable_manager, host_list=self.hosts.name)&#13;
        self.variable_manager.set_inventory(self.inventory)&#13;
&#13;
        # Playbook to run. Assumes it is&#13;
        # local to this python file&#13;
        pb_dir = os.path.dirname(__file__)&#13;
        playbook = "%s/%s" % (pb_dir, playbook)&#13;
&#13;
        # Setup playbook executor, but don't run until run() called&#13;
        self.pbex = playbook_executor.PlaybookExecutor(&#13;
            playbooks=[playbook], &#13;
            inventory=self.inventory, &#13;
            variable_manager=self.variable_manager,&#13;
            loader=self.loader, &#13;
            options=self.options, &#13;
            passwords=passwords)&#13;
&#13;
    def run(self):&#13;
        # Results of PlaybookExecutor&#13;
        self.pbex.run()&#13;
        stats = self.pbex._tqm._stats&#13;
&#13;
        # Test if success for record_logs&#13;
        run_success = True&#13;
        hosts = sorted(stats.processed.keys())&#13;
        for h in hosts:&#13;
            t = stats.summarize(h)&#13;
            if t['unreachable'] &gt; 0 or t['failures'] &gt; 0:&#13;
                run_success = False&#13;
&#13;
        # Dirty hack to send callback to save logs with data we want&#13;
        # Note that function "record_logs" is one I created and put into&#13;
        # the playbook callback file&#13;
        self.pbex._tqm.send_callback(&#13;
            'record_logs', &#13;
            user_id=self.run_data['user_id'], &#13;
            success=run_success&#13;
        )&#13;
&#13;
        # Remove created temporary files&#13;
        os.remove(self.hosts.name)&#13;
&#13;
        return stats&#13;
```&#13;
&#13;
We'll cover what all this is doing - in particular, what that huge, ugly `Options` class is doing there.&#13;
&#13;
### Some Assumptions&#13;
&#13;
I have a directory called `roles` in the same directory as this script. However, you can set the roles path in your `ansible.cfg` file.&#13;
&#13;
Playbooks are assumed to be in the same directory as this script as well. That's hard-coded above via `pb_dir = os.path.dirname(__file__)`.&#13;
&#13;
Lastly, as noted, when this code is run, ensure the `ANSIBLE_CONFIG` environment variable is set with the full path to the `ansible.cfg` file.&#13;
&#13;
Now let's start from the top to cover the file.&#13;
&#13;
### Imports&#13;
&#13;
We import the Ansible objects that we'll use to run everything. The only non-Ansible imports are `os` (to get environment variables and the current file path of this script) and `NamedTemporaryFile`, useful for generating files with dynamic content for when Ansible expects a file. &#13;
&#13;
Note that Ansible has refactored a lot of code in Ansible 2, so responsibilities like data loading and variable management each live in one class (`DataLoader`, `VariableManager`), making things like variable loading precedence much more consistent.&#13;
&#13;
### Options&#13;
&#13;
We made a monster `Options` class, which is basically a glorified dict. During a regular CLI call to Ansible, an option parser gets available options (via `optparse`). These are options like "ask vault password", "become user", "limit hosts" and other common options we would pass to Ansible.&#13;
&#13;
Since we're not calling it via the CLI, we need something else to provide options. In its place, we provide an object with the same properties, painfully taken from the [Github code](https://github.com/ansible/ansible/blob/v2.0.1.0-1/lib/ansible/cli/playbook.py) and some experimentation.&#13;
&#13;
&gt; Note that this is similar to how Ansible itself runs Ansible programmatically in their [Python API docs](http://docs.ansible.com/ansible/developing_api.html). They use `options = Options(connection='local', module_path='/path/to/mymodules', forks=100, ... )`.&#13;
&#13;
This sets almost all the options (and more!) that you might pass to a CLI call to `ansible` or `ansible-playbook`.&#13;
&#13;
We'll later fill out some of these options (not all - some just need to exist, even if they have a `None` value) and pass our Options object to the `PlaybookExecutor`.&#13;
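If typing out that monster class bothers you, a `namedtuple` with defaults gives you the same duck-typed object in a few lines. This is a sketch, not the article's actual code: only a few representative field names are listed, and in practice you'd include every option name the executor reads.

```python
from collections import namedtuple

# A compact stand-in for the big Options class; every field defaults to None.
# Only a handful of representative fields are shown here.
_fields = ('verbosity', 'connection', 'become', 'become_method',
           'become_user', 'private_key_file', 'check', 'module_path')

Options = namedtuple('Options', _fields)
Options.__new__.__defaults__ = (None,) * len(_fields)  # works on py2.7 and py3

options = Options(connection='ssh', become=True, become_method='sudo',
                  become_user='root', verbosity=0)

print(options.connection)  # 'ssh'
print(options.check)       # None (defaulted)
```

Note that a namedtuple is immutable, so unlike the class above you set everything at construction time (or use `_replace`) rather than assigning properties afterwards.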
&#13;
### Runner&#13;
&#13;
Here's the magic - I made a `Runner` class, responsible for collecting needed data and running the Ansible Playbook executor.&#13;
&#13;
The runner needs a few bits of information, which of course you can customize to your needs. As noted, it uses an instance of the **Options** object and sets the important bits to it, such as the `become` options, `verbosity`, `private_key_file` location and more.&#13;
&#13;
In this case, we can pass our desired verbosity to the **Display** object, which will set how much is output to stdout when we run this.&#13;
&#13;
We create a **DataLoader** instance, which will load in YAML/JSON data from our roles. This gets passed a Vault password as well, in case you are encrypting any variable data using Ansible Vault. **Note** we have a second environmental variable, `VAULT_PASS`. You may want to pass that in instead of using an environmental variable - whatever works for you.&#13;
&#13;
Then the script creates a **VariableManager** object, which is responsible for adding in all variables from the various sources, and keeping [variable precedence](http://docs.ansible.com/ansible/playbooks_variables.html#variable-precedence-where-should-i-put-a-variable) consistent.&#13;
&#13;
We pass in any `extra_vars` to this object. **This is the main way in which I've chosen to add in variable data for the roles to use.** It skips using Jinja2 or other methods of passing in host variables, although those methods are also available to you. Since our use case was to have a lot of custom data per customer, this method of passing variables to Ansible made sense to us.&#13;
&#13;
After that, we create a **NamedTemporaryFile** and create a small hosts file entry (I'm assuming one host at a time, you don't have to!). I avoided using Jinja2 there, but you could easily do that just like in the [previous article on running Ansible 1 programmatically](https://serversforhackers.com/running-ansible-programmatically).&#13;
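That temporary-inventory trick is easy to try in isolation. Here's the same pattern on its own (the hostname is made up):

```python
import os
from tempfile import NamedTemporaryFile

hostnames = '192.168.10.233'  # hypothetical host

# Write a minimal INI-style inventory to a temp file Ansible can read
hosts = NamedTemporaryFile(mode='w', delete=False)
hosts.write("[run_hosts]\n%s\n" % hostnames)
hosts.close()

# hosts.name is the path you hand to Inventory(host_list=...)
with open(hosts.name) as f:
    content = f.read()

os.remove(hosts.name)  # clean up, as Runner.run() does

print(content)
```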
&#13;
Next, we create an **Inventory** object and pass it the items it needs. &#13;
&#13;
**Finally** we create an instance of **PlaybookExecutor** with all our objects. That's then ready to run!&#13;
&#13;
The actual execution of the playbook is in a `run` method, so we can call it when we need to. The `__init__` method just sets everything up for us.&#13;
&#13;
This should run your roles against your hosts! It will still output the usual data to Stderr/Stdout.&#13;
&#13;
## Callback Module&#13;
&#13;
I needed a callback module to log Ansible runs to a database. Here's how to do that!&#13;
&#13;
First, the callback module's path is set in the `ansible.cfg` file. The following callback module is in that defined directory location.&#13;
&#13;
Second, a note: You may have noticed that on the bottom of the Runner object, we reach deep into the PlaybookExecutor's Task Queue Manager object and tell it to send a (custom) callback. This object is meant to be a private property of the PlaybookExecutor, but Python's "We're all adults here" philosophy makes adding a custom callback possible.&#13;
&#13;
Here's the callback module:&#13;
&#13;
```python&#13;
from datetime import datetime&#13;
from ansible.plugins.callback import CallbackBase&#13;
&#13;
from some_project.storage import Logs # A custom object to store to the database&#13;
&#13;
&#13;
class PlayLogger:&#13;
    """Store log output in a single object.&#13;
    We create a new object per Ansible run&#13;
    """&#13;
    def __init__(self):&#13;
        self.log = ''&#13;
        self.runtime = 0&#13;
&#13;
    def append(self, log_line):&#13;
        """append to log"""&#13;
        self.log += log_line+"\n\n"&#13;
&#13;
    def banner(self, msg):&#13;
        """Output Trailing Stars"""&#13;
        width = 78 - len(msg)&#13;
        if width &lt; 3:&#13;
            width = 3&#13;
        filler = "*" * width&#13;
        return "\n%s %s " % (msg, filler)&#13;
&#13;
&#13;
class CallbackModule(CallbackBase):&#13;
    """&#13;
    Reference: https://github.com/ansible/ansible/blob/v2.0.0.2-1/lib/ansible/plugins/callback/default.py&#13;
    """&#13;
&#13;
    CALLBACK_VERSION = 2.0&#13;
    CALLBACK_TYPE = 'stored'&#13;
    CALLBACK_NAME = 'database'&#13;
&#13;
    def __init__(self):&#13;
        super(CallbackModule, self).__init__()&#13;
        self.logger = PlayLogger()&#13;
        self.start_time = datetime.now()&#13;
&#13;
    def v2_runner_on_failed(self, result, ignore_errors=False):&#13;
        delegated_vars = result._result.get('_ansible_delegated_vars', None)&#13;
&#13;
        # Catch an exception&#13;
        # This may never be called because default handler deletes&#13;
        # the exception, since Ansible thinks it knows better&#13;
        if 'exception' in result._result:&#13;
            # Extract the error message and log it&#13;
            error = result._result['exception'].strip().split('\n')[-1]&#13;
            self.logger.append(error)&#13;
&#13;
            # Remove the exception from the result so it's not shown every time&#13;
            del result._result['exception']&#13;
&#13;
        # Else log the reason for the failure&#13;
        if result._task.loop and 'results' in result._result:&#13;
            self._process_items(result)  # item_on_failed, item_on_skipped, item_on_ok&#13;
        else:&#13;
            if delegated_vars:&#13;
                self.logger.append("fatal: [%s -&gt; %s]: FAILED! =&gt; %s" % (result._host.get_name(), delegated_vars['ansible_host'], self._dump_results(result._result)))&#13;
            else:&#13;
                self.logger.append("fatal: [%s]: FAILED! =&gt; %s" % (result._host.get_name(), self._dump_results(result._result)))&#13;
&#13;
    def v2_runner_on_ok(self, result):&#13;
        self._clean_results(result._result, result._task.action)&#13;
        delegated_vars = result._result.get('_ansible_delegated_vars', None)&#13;
        if result._task.action == 'include':&#13;
            return&#13;
        elif result._result.get('changed', False):&#13;
            if delegated_vars:&#13;
                msg = "changed: [%s -&gt; %s]" % (result._host.get_name(), delegated_vars['ansible_host'])&#13;
            else:&#13;
                msg = "changed: [%s]" % result._host.get_name()&#13;
        else:&#13;
            if delegated_vars:&#13;
                msg = "ok: [%s -&gt; %s]" % (result._host.get_name(), delegated_vars['ansible_host'])&#13;
            else:&#13;
                msg = "ok: [%s]" % result._host.get_name()&#13;
&#13;
        if result._task.loop and 'results' in result._result:&#13;
            self._process_items(result)  # item_on_failed, item_on_skipped, item_on_ok&#13;
        else:&#13;
            self.logger.append(msg)&#13;
&#13;
    def v2_runner_on_skipped(self, result):&#13;
        if result._task.loop and 'results' in result._result:&#13;
            self._process_items(result)  # item_on_failed, item_on_skipped, item_on_ok&#13;
        else:&#13;
            msg = "skipping: [%s]" % result._host.get_name()&#13;
            self.logger.append(msg)&#13;
&#13;
    def v2_runner_on_unreachable(self, result):&#13;
        delegated_vars = result._result.get('_ansible_delegated_vars', None)&#13;
        if delegated_vars:&#13;
            self.logger.append("fatal: [%s -&gt; %s]: UNREACHABLE! =&gt; %s" % (result._host.get_name(), delegated_vars['ansible_host'], self._dump_results(result._result)))&#13;
        else:&#13;
            self.logger.append("fatal: [%s]: UNREACHABLE! =&gt; %s" % (result._host.get_name(), self._dump_results(result._result)))&#13;
&#13;
    def v2_runner_on_no_hosts(self, task):&#13;
        self.logger.append("skipping: no hosts matched")&#13;
&#13;
    def v2_playbook_on_task_start(self, task, is_conditional):&#13;
        self.logger.append("TASK [%s]" % task.get_name().strip())&#13;
&#13;
    def v2_playbook_on_play_start(self, play):&#13;
        name = play.get_name().strip()&#13;
        if not name:&#13;
            msg = "PLAY"&#13;
        else:&#13;
            msg = "PLAY [%s]" % name&#13;
&#13;
        self.logger.append(msg)&#13;
&#13;
    def v2_playbook_item_on_ok(self, result):&#13;
        delegated_vars = result._result.get('_ansible_delegated_vars', None)&#13;
        if result._task.action == 'include':&#13;
            return&#13;
        elif result._result.get('changed', False):&#13;
            if delegated_vars:&#13;
                msg = "changed: [%s -&gt; %s]" % (result._host.get_name(), delegated_vars['ansible_host'])&#13;
            else:&#13;
                msg = "changed: [%s]" % result._host.get_name()&#13;
        else:&#13;
            if delegated_vars:&#13;
                msg = "ok: [%s -&gt; %s]" % (result._host.get_name(), delegated_vars['ansible_host'])&#13;
            else:&#13;
                msg = "ok: [%s]" % result._host.get_name()&#13;
&#13;
        msg += " =&gt; (item=%s)" % (result._result['item'])&#13;
&#13;
        self.logger.append(msg)&#13;
&#13;
    def v2_playbook_item_on_failed(self, result):&#13;
        delegated_vars = result._result.get('_ansible_delegated_vars', None)&#13;
        if 'exception' in result._result:&#13;
            # Extract the error message and log it&#13;
            error = result._result['exception'].strip().split('\n')[-1]&#13;
            self.logger.append(error)&#13;
&#13;
            # Remove the exception from the result so it's not shown every time&#13;
            del result._result['exception']&#13;
&#13;
        if delegated_vars:&#13;
            self.logger.append("failed: [%s -&gt; %s] =&gt; (item=%s) =&gt; %s" % (result._host.get_name(), delegated_vars['ansible_host'], result._result['item'], self._dump_results(result._result)))&#13;
        else:&#13;
            self.logger.append("failed: [%s] =&gt; (item=%s) =&gt; %s" % (result._host.get_name(), result._result['item'], self._dump_results(result._result)))&#13;
&#13;
    def v2_playbook_item_on_skipped(self, result):&#13;
        msg = "skipping: [%s] =&gt; (item=%s) " % (result._host.get_name(), result._result['item'])&#13;
        self.logger.append(msg)&#13;
&#13;
    def v2_playbook_on_stats(self, stats):&#13;
        run_time = datetime.now() - self.start_time&#13;
        self.logger.runtime = run_time.seconds  # returns an int, unlike run_time.total_seconds()&#13;
&#13;
        hosts = sorted(stats.processed.keys())&#13;
        for h in hosts:&#13;
            t = stats.summarize(h)&#13;
&#13;
            msg = "PLAY RECAP [%s] : %s %s %s %s %s" % (&#13;
                h,&#13;
                "ok: %s" % (t['ok']),&#13;
                "changed: %s" % (t['changed']),&#13;
                "unreachable: %s" % (t['unreachable']),&#13;
                "skipped: %s" % (t['skipped']),&#13;
                "failed: %s" % (t['failures']),&#13;
            )&#13;
&#13;
            self.logger.append(msg)&#13;
&#13;
    def record_logs(self, user_id, success=False):&#13;
        """&#13;
        Special callback added to this callback plugin&#13;
        Called by the Runner object&#13;
        :param user_id:&#13;
        :return:&#13;
        """&#13;
&#13;
        log_storage = Logs()&#13;
        return log_storage.save_log(user_id, self.logger.log, self.logger.runtime, success)&#13;
```&#13;
&#13;
One thing not shown here is the `some_project.storage.Logs` object, which has some boilerplate for saving log output to a database.&#13;
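For the curious, here is a minimal sketch of what such a `Logs` object might look like, backed by sqlite3. The table name and columns are assumptions for illustration, not the real `some_project.storage` code:

```python
import sqlite3


class Logs(object):
    """Hypothetical stand-in for some_project.storage.Logs."""

    def __init__(self, db_path=':memory:'):
        self.conn = sqlite3.connect(db_path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS ansible_logs "
            "(user_id INTEGER, log TEXT, runtime INTEGER, success INTEGER)")

    def save_log(self, user_id, log, runtime, success):
        # Store one row per Ansible run
        self.conn.execute(
            "INSERT INTO ansible_logs VALUES (?, ?, ?, ?)",
            (user_id, log, runtime, 1 if success else 0))
        self.conn.commit()
        return True


storage = Logs()
storage.save_log(12345, "PLAY RECAP ...", 42, True)
```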
&#13;
We have a `PlayLogger` object that does two things:&#13;
&#13;
1. Concatenates log output string together&#13;
2. Times how long the Ansible run takes&#13;
&#13;
Then we have the callback module object, which is basically copy-and-paste boilerplate from [the default callback module](https://github.com/ansible/ansible/blob/v2.0.0.2-1/lib/ansible/plugins/callback/default.py) with a few tweaks. In particular, I edit how exceptions are handled (ignoring verbosity settings) and remove calls to "Display", since we're saving output to a log string rather than outputting data to Stderr/Stdout.&#13;
&#13;
The most interesting part here is the added method `record_logs`. This is the custom callback method we call from the `Runner` object. It Just Works™ and is amazing! In that method, we collect a `user_id` to give this Ansible run context (that's specific to our use case and we pass it a bunch more information in reality, including the ID of the server it was run on).&#13;
&#13;
## Running It&#13;
&#13;
Here's how to use it, assuming we have a playbook called `run.yaml`.&#13;
&#13;
```python&#13;
from task import Runner&#13;
&#13;
# You may want this to run as user root instead,&#13;
# or make this an environment variable, or&#13;
# a CLI prompt. Whatever you want!&#13;
become_user_password = 'foo-whatever'&#13;
&#13;
run_data = {&#13;
    'user_id': 12345,&#13;
    'foo': 'bar',&#13;
    'baz': 'cux-or-whatever-this-one-is'&#13;
}&#13;
&#13;
runner = Runner(&#13;
    hostnames='192.168.10.233',&#13;
    playbook='run.yaml',&#13;
    private_key='/home/user/.ssh/id_whatever',&#13;
    run_data=run_data,&#13;
    become_pass=become_user_password,&#13;
    verbosity=0&#13;
)&#13;
&#13;
stats = runner.run()&#13;
&#13;
# Maybe do something with stats here? If you want!&#13;
```&#13;
&#13;
&#13;
That's it! You can run Ansible 2 programmatically now!&#13;
</description>
    </item>
    <item>
      <title>Using New Relic's Free Server Monitoring</title>
      <link>https://serversforhackers.com/using-new-relics-free-server-monitoring</link>
      <description>## Why New Relic?&#13;
&#13;
Purely due to random circumstances at work (and on the side), I hadn't used New Relic in quite some time (~3 years). Recently I took another look and found that their Server monitoring tier was separated from their other offerings - [APM](https://newrelic.com/application-monitoring), [Insights](https://newrelic.com/insights), Mobile, Browser and others.&#13;
&#13;
This isn't how I remembered it, but perhaps the distinction was just in the messaging/marketing. In any case, the exciting part is that **the Server monitoring tier is free**!&#13;
&#13;
While you'll get the most value out of New Relic's paid offerings for your web application, I've found the free Server monitoring to still be an instant life saver, especially when monitoring many (70+) servers.&#13;
&#13;
&gt; The free server monitoring tier is a lead-in to make it easier for you to decide to buy into the paid products, of course. Whether or not it stays free remains to be seen!&#13;
&#13;
At work, we run customer trials and paid hosting through AWS. Since the main application is an on-premise application (and not the typical multi-tenant code/database structure), we have a server per application instance. That makes for a lot of running servers!&#13;
&#13;
We've set up CloudWatch with SNS for alarms and notifications, but it's still hard to use CloudWatch to get a good overview of all the servers. We decided to put New Relic on a subset of the servers to see what insights we could find.&#13;
&#13;
We were immediately happy that we did.&#13;
&#13;
## Install&#13;
&#13;
Installing the server monitoring daemon is really easy, particularly because they hand you OS-specific instructions on a silver platter when you choose to add a new server within their web console.&#13;
&#13;
For Debian/Ubuntu servers, these instructions are as follows (only slightly tweaked for those not running as user root):&#13;
&#13;
```bash&#13;
# Add Repository and Signing Key&#13;
echo deb http://apt.newrelic.com/debian/ newrelic non-free | sudo tee /etc/apt/sources.list.d/newrelic.list&#13;
wget -O- https://download.newrelic.com/548C16BF.gpg | sudo apt-key add -&#13;
&#13;
# Update repos and install their package&#13;
sudo apt-get update&#13;
sudo apt-get install newrelic-sysmond&#13;
&#13;
# Configure the local install with your license key:&#13;
sudo nrsysmond-config --set license_key=1234567890abcdefghijklmnopqrstuvwxyz&#13;
&#13;
# Start the service:&#13;
sudo service newrelic-sysmond start&#13;
```&#13;
&#13;
Their instructions will conveniently give you your license key. **Don't blindly copy and paste** the license key above 🚨.&#13;
&#13;
## Configure&#13;
&#13;
You technically don't need any more configuration, but I do like to change one more thing. The server name as shown in New Relic will match the server's host name, which you can find by running the command `hostname`.&#13;
&#13;
This might not be the way you want to identify it, so you can either wait for the server to appear in New Relic, or configure the hostname ahead of time in the New Relic configuration installed on your server. I choose the latter approach.&#13;
&#13;
Edit the `/etc/newrelic/nrsysmond.cfg` file and change the `hostname` section to whatever is appropriate for your server.&#13;
&#13;
```&#13;
# You'll find your license key in here, which you can set manually&#13;
# instead of running the `nrsysmond-config` command&#13;
license_key=1234567890abcdefghijklmnopqrstuvwxyz&#13;
&#13;
# Find the hostname entry and set it&#13;
hostname=some-server.example.com&#13;
```&#13;
&#13;
After making any changes, start (or restart) the `newrelic-sysmond` service:&#13;
&#13;
```bash&#13;
sudo service newrelic-sysmond restart&#13;
```&#13;
&#13;
You'll soon see that server appear in your New Relic account under the Server section, without any further work on your part.&#13;
&#13;
## What We Watch&#13;
&#13;
This monitoring on New Relic isn't super advanced, but it has some really nice features:&#13;
&#13;
### Processes and process count&#13;
&#13;
Note that PHP is a bit out of control there - that was one thing we were alerted to, as the server had high CPU usage above their default threshold for 15 minutes.&#13;
&#13;
![server process list and count](https://s3.amazonaws.com/sfh-assets/process_list.png)&#13;
&#13;
We found some zombie PHP processes, which we killed off quickly. We immediately saw CPU reduction back to normal levels.&#13;
&#13;
```bash&#13;
# Find a process&#13;
$ ps aux | grep php&#13;
www-data  5889  0.0  3.3 438856 34004 ?        S    Jan12   0:17 php: /path/to/file.php&#13;
&#13;
# Kill it based on process ID&#13;
$ sudo kill 5889&#13;
```&#13;
&#13;
![cpu usage reduction](https://s3.amazonaws.com/sfh-assets/cpu_reduction.png)&#13;
&#13;
We can get a more detailed view of each process type as well, sorting by memory and CPU usage:&#13;
&#13;
![process memory usage](https://s3.amazonaws.com/sfh-assets/process_memory2.png)&#13;
&#13;
### Disk Usage&#13;
&#13;
New Relic also gave us alerts about disk usage. One server had excess files taking up a lot of space. We were able to reduce the disk space used by manually deleting extra media files, giving us time to provision more hard drive space.&#13;
&#13;
![disk drive usage](https://s3.amazonaws.com/sfh-assets/disk_available.png)&#13;
&#13;
### Alerts&#13;
&#13;
Alerts may not be sent to you right off the bat, but they can be easily configured.&#13;
&#13;
In any case, when viewing a list of all your servers, you will see recent events in the sidebar worth paying attention to.&#13;
&#13;
![new relic alerts](https://s3.amazonaws.com/sfh-assets/recent_events.png)&#13;
&#13;
&#13;
## Automated with Ansible&#13;
&#13;
I automated this installation with a super simple Ansible Playbook (specific to Debian/Ubuntu). The basics are as follows:&#13;
&#13;
File `tasks/main.yml`:&#13;
&#13;
```yaml&#13;
---&#13;
- name: Add New Relic Repository&#13;
  apt_repository:&#13;
    repo: deb http://apt.newrelic.com/debian/ newrelic non-free&#13;
    state: present&#13;
&#13;
- name: Add New Relic Signing Key&#13;
  apt_key:&#13;
    url: https://download.newrelic.com/548C16BF.gpg&#13;
    state: present&#13;
&#13;
- name: Update Repositories and Install New Relic Daemon&#13;
  apt:&#13;
    name: newrelic-sysmond&#13;
    update_cache: yes&#13;
    state: present&#13;
&#13;
- name: Add New Relic Config&#13;
  template:&#13;
    src: nrsysmond.cfg.j2&#13;
    dest: /etc/newrelic/nrsysmond.cfg&#13;
    owner: newrelic&#13;
    group: newrelic&#13;
&#13;
- name: Restart New Relic&#13;
  service:&#13;
    name: newrelic-sysmond&#13;
    state: restarted&#13;
```&#13;
&#13;
File `templates/nrsysmond.cfg.j2`, with 2 variables:&#13;
&#13;
```&#13;
#&#13;
# Option : license_key&#13;
# Value  : 40-character hexadecimal string provided by New Relic. This is&#13;
#          required in order for the server monitor to start.&#13;
# Default: none&#13;
#&#13;
license_key={{ new_relic_key }}&#13;
&#13;
# Much taken out for brevity....&#13;
&#13;
&#13;
#&#13;
# Setting: hostname&#13;
# Type   : string&#13;
# Purpose: Sets the name of the host (max 64 characters) that you wish to use&#13;
#          for reporting. This is usually determined automatically on startup&#13;
#          but you may want to change it if, for example, you have machine&#13;
#          generated hostnames that are not visually useful (for example, the&#13;
#          names generated by Docker containers).&#13;
# Default: The system configured host name&#13;
#&#13;
hostname={{ domain }}&#13;
&#13;
&#13;
# Much taken out for brevity....&#13;
```&#13;
&#13;
Then you can set your `defaults/main.yml` or `vars/main.yml` file with your variable definitions:&#13;
&#13;
```yaml&#13;
new_relic_key: 1234567890abcdefghijklmnopqrstuvwxyz&#13;
domain: whatever.example.com&#13;
```&#13;
&#13;
&gt; When using Ansible for this, I actually provided the domain variable as a [host variable](http://docs.ansible.com/ansible/intro_inventory.html#host-variables), included in the hosts inventory file for each host being updated.</description>
    </item>
    <item>
      <title>TCP load balancing with Nginx (SSL Pass-thru)</title>
      <link>https://serversforhackers.com/tcp-load-balancing-with-nginx-ssl-pass-thru</link>
      <description>Learn to use Nginx 1.9.* to load balance TCP traffic. In this case, we'll setup SSL Passthrough to pass SSL traffic  received at the load balancer onto the web servers.</description>
    </item>
    <item>
      <title>Installing PHP-7 with Memcached</title>
      <link>https://serversforhackers.com/video/installing-php-7-with-memcached</link>
      <description>Let's install PHP7 and Nginx on a new Ubuntu 14.04 server, and manually build the (not yet packaged) memcached module for PHP7.</description>
    </item>
    <item>
      <title>HTTP 2.0 With Nginx </title>
      <link>https://serversforhackers.com/video/http-20-with-nginx</link>
      <description>Installing the latest Nginx from the MAINLINE branch to get http2 support.</description>
    </item>
    <item>
      <title>Curl With HTTP2 Support</title>
      <link>https://serversforhackers.com/video/curl-with-http2-support</link>
      <description>Installing HTTP2 support with the curl command</description>
    </item>
    <item>
      <title>Using apt-get</title>
      <link>https://serversforhackers.com/video/using-apt-get</link>
      <description>Using apt-get to install, search, query and remove software on Debian and Ubuntu servers.</description>
    </item>
    <item>
      <title>Creating and Using SSH Keys</title>
      <link>https://serversforhackers.com/video/creating-and-using-ssh-keys</link>
      <description>Generate an SSH key and use it to log into a user on a new server.</description>
    </item>
    <item>
      <title>Amazon RDS</title>
      <link>https://serversforhackers.com/amazon-rds</link>
      <description>I've recently been [investigating interest levels](http://sqlops.launchrock.com/) in a small application that spins up master/replica MySQL servers (with back-ups, yadda yadda yadda).&#13;
&#13;
Tangentially from this, I've also gotten a few questions about how such a setup would work within Amazon's [Managed Relational Database Service](https://aws.amazon.com/rds/) (RDS).&#13;
&#13;
Amazon has a few concepts going on which make the picture of how everything works together a little hazy at first. Here's my attempt at clearing up how the RDS service works.&#13;
&#13;
## Regions and Availability Zones&#13;
&#13;
First, we should cover [Regions vs Availability Zones](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html).&#13;
&#13;
AWS has **~11 regions**. These are separate physical locations, very geographically distant. &#13;
&#13;
**For example, there is:**&#13;
&#13;
* US-East (N. Virginia)&#13;
* US-West (N. California &amp; Oregon)&#13;
* EU-West (Ireland)&#13;
* AP-Southeast (Singapore, Sydney) &#13;
* and 6 others spread across the world&#13;
&#13;
Each region has **multiple Availability Zones** (AZ). These zones are also physically different locations, but within the same general geographical area.&#13;
&#13;
&gt; These availability zones are close together and connected by high-bandwidth networks, making the network latency between them pretty low.&#13;
&#13;
![google maps of aws locations](https://s3.amazonaws.com/sfh-assets/google_map.png)&#13;
&#13;
In theory, **us-east-1d** can explode without affecting **us-east-1e**. However, **practical experience says otherwise**. About twice a year, we've seen large service interruptions which tend to affect entire regions (looking at you, **us-east-1**).&#13;
&#13;
**Some generalizations about regions and availability zones:**&#13;
&#13;
1. Network connections between AZ's are usually quite fast, often fast enough to not worry about inter-AZ communication for most web applications. YMMV.&#13;
2. Network connections between Regions can/will be slower due to geographical distance.&#13;
3. Services (EC2, SES, etc) have different prices in different regions. **us-east-1** tends to be the cheapest (but most over-used).&#13;
&#13;
With that out of the way, let's actually talk about RDS. Specifically, we'll talk about MySQL. Some details will be different per database.&#13;
&#13;
## The Basics&#13;
&#13;
RDS is a managed database service. In return for paying an arm and a leg (although it's still cheaper than hiring a human being!), you get a managed database.&#13;
&#13;
### Server Size&#13;
&#13;
One of your first decisions in setting up an RDS instance will be to decide how large the database server is. Because the cheaper [T instance type uses a system of CPU credits](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/t2-instances.html#t2-instances-cpu-credits) to minimize cost for burst workloads, I tend to stay away from it with RDS, which often has a more consistent workload. The T series of instances will throttle available CPU after credits are used.&#13;
&#13;
Of course, depending on your work load, using a server type that's better for burst usage can make sense.&#13;
&#13;
![rds server sizes](https://s3.amazonaws.com/sfh-assets/rds_server_sizes.png)&#13;
&#13;
I like the **m3.large** (2 vCPU, 7.5GiB RAM) instance size to start with, although that's probably overkill for small/starting applications. &#13;
&#13;
&gt; If you're in a position where you can fit your application and database workload into a 1-2 CPU server/2-4GB RAM, I don't suggest using AWS at all. Linode/Digital Ocean is much more cost-effective and simple.&#13;
&#13;
In **us-east-1**, with 100GB of storage, that's **~$144/mo** (not counting bandwidth usage) with no failover (Multi-AZ) nor replication.&#13;
&#13;
![aws price rds single instance](https://s3.amazonaws.com/sfh-assets/rds_single.png)&#13;
&#13;
**For this price, you get:**&#13;
&#13;
* A managed database instance that (probably?) won't suffer too many issues.&#13;
* Backups kept for n (selectable) days. These allow point-in-time recovery, with **granularity down to the second** 🎉!&#13;
&#13;
![restore rds point in time backup](https://s3.amazonaws.com/sfh-assets/rds_restore.png)&#13;
&#13;
## Multi-AZ&#13;
&#13;
The next decision to make is whether or not to enable a [Multi-AZ deployment](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html). Amazon pushes hard on this in their documentation, but **this is not replication**. At least, not the replication you're thinking of.&#13;
&#13;
**Multi-AZ is about two things:**&#13;
&#13;
1. Durability&#13;
2. Fast failover&#13;
&#13;
In addition to Amazon's default durability metrics, Multi-AZ deployments will make a **synchronous** copy of your database to *another* AZ within the region you're creating the database.&#13;
&#13;
If your RDS instance is in **us-east-1d**, the copy might end up in **us-east-1c**.&#13;
&#13;
*Synchronous* is the interesting part there. Amazon is guaranteeing that any change made on the primary database is also made in the copy. This isn't something a database will usually guarantee without a lot of potential slowdown, as it's typically done via transactional locking - the database waits for the replica to report that it has completed the transaction before moving on to the next query.&#13;
&#13;
Since AZ's in Amazon are physically different locations, my best guess on how this is accomplished is by proxying TCP connections so they go to both AZ's at once. I don't think this is a copy done at the hard-disk level (e.g. a RAID array), due to the physical locations being different.&#13;
&#13;
### What's it good for?&#13;
&#13;
The best reason for this setup is so **your application can failover fast** in the event that the AZ goes down. According to the [opening announcement of Multi-AZ](https://aws.amazon.com/blogs/aws/amazon-rds-multi-az-deployment/) (from 2010), this failover takes ~3 minutes.&#13;
&#13;
Amazon takes care of this for you - the database host doesn't even change, so your application should switch over without too much interruption. Other than that phantom 3 minutes, I suppose. I haven't had this happen to me yet to find out.&#13;
&#13;
The other nice thing is that backups are done on the replica database, so it doesn't cause any IO congestion on your production database.&#13;
&#13;
Note that **you can't use this replica** unless the production server fails over.&#13;
&#13;
Also, this considerably increases your cost. Selecting this brings it up to **~$293/mo**, a little more than double the cost.&#13;
&#13;
![rds multi-az pricing](https://s3.amazonaws.com/sfh-assets/rds_multi_az.png)&#13;
&#13;
### Considerations&#13;
&#13;
The only thing to watch out for here is if you are putting RDS in the same AZ as your application servers. If your application servers are also down, then having a database failing over to another AZ isn't really going to help you.&#13;
&#13;
That's why it's also recommended to use at least 2 AZ's for any application. This means using a load balanced environment for any application where it isn't OK for it to go down once in a while.&#13;
&#13;
I don't really worry about network lag between AZ's, and AWS is essentially set up with the assumption that you won't either. They want you to use multiple AZ's for RDS and for your applications.&#13;
&#13;
## Read Replication&#13;
&#13;
[RDS does indeed offer replication](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.MySQL) that you can use to distribute **read** queries (there isn't any option for multiple write replicas, because databases are hard).&#13;
&#13;
Like regular old database replication, this is asynchronous. Your replica is not guaranteed to be in sync with your master database.&#13;
&#13;
Other than the obvious benefit of distributing expensive read queries, AWS does some other smart things. For example, databases occasionally go into maintenance for updates. In Multi-AZ deployments, the database can fail over to the standby. If you use replication, your application can continue to make read queries against a replica database.&#13;
&#13;
You need to be careful, however, as **it's possible to accidentally send write queries to the replication database** (I believe this is an aspect of MySQL, where a super user can override a read-only configuration).&#13;
&#13;
Read replication databases can be [promoted to a "regular" database](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.Promote). In fact, this is how you get [cross-region replication](http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ReadRepl.html#USER_ReadRepl.XRgn) with RDS. For "serious" (important, larger budget) deployments, this may be an important step in keeping your application available and/or keeping low response time for read queries around the world.&#13;
&#13;
Note that **replication lag will increase using cross-region deployment**, due to the large geographic distance between regions.&#13;
&#13;
Read replicas are charged just like a regular database instance, so if you're using a database with Multi-AZ and one read replica, you're paying **~$438/mo**, roughly triple what you pay for just one instance.&#13;
&#13;
![rds multi-az with replication price](https://s3.amazonaws.com/sfh-assets/rds_multi_az_replication.png)&#13;
&#13;
Although you can **save big money overall by using reserved instances**:&#13;
&#13;
![rds price reduction with reserved instances](https://s3.amazonaws.com/sfh-assets/rds_reserved.png)&#13;
&#13;
Let's see that math for one instance, with Multi-AZ and a read replica:&#13;
&#13;
* $438.46/mo for 3 years = **$15,784.56**&#13;
* Reserved for 3 yrs with partial up-front payment is $237.32/mo + $2,158.62 = **$10,702.14**&#13;
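&#13;
That comparison is simple enough to double-check yourself (prices pulled from the screenshots above):&#13;
&#13;
```python&#13;
# Quick sanity check of the 3-year cost math above&#13;
months = 36&#13;
on_demand = 438.46 * months                  # on-demand, Multi-AZ + read replica&#13;
reserved = 237.32 * months + 2158.62         # reserved monthly + partial up-front&#13;
&#13;
print(round(on_demand, 2))             # 15784.56&#13;
print(round(reserved, 2))              # 10702.14&#13;
print(round(on_demand - reserved, 2))  # 5082.42 saved over 3 years&#13;
```&#13;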
&#13;
If you know you'll have the database for that long, reserved instances give you significant savings. Keep in mind you can, if workload and compliance permit, create many individual databases within an RDS instance as well.&#13;
&#13;
## The rent is too damned high!&#13;
&#13;
Those plugged into the Entrepreneurial Scene™ may "know" that products "should" (all those air quotes!) be priced based on the value the product brings:&#13;
&#13;
1. Revenue brought in&#13;
2. Time savings&#13;
3. Money savings&#13;
&#13;
AWS, in theory, replaces the technical know-how of a consultant or full-time DBA, and Amazon charges based on (under!) that value.&#13;
&#13;
With RDS, you're paying to relieve administrative burden and disaster recovery. It's not cheap.&#13;
&#13;
&lt;blockquote style="padding: 30px; background: #eeeeee;"&gt;&lt;p&gt;Side note, If you're interested in a less expensive, but unmanaged &lt;strong&gt;MySQL-optimized, backup and (eventually) replication service&lt;/strong&gt;, &lt;a href="http://sqlops.launchrock.com/" title="mysql replication"&gt;sign up here to let me know&lt;/a&gt;! The idea with this application is it will allow you to better manage your own MySQL instances,  while taking advantage of many RDS-like features.&lt;/p&gt;&lt;/blockquote&gt;</description>
    </item>
    <item>
      <title>PHP-FPM: Process Management</title>
      <link>https://serversforhackers.com/video/php-fpm-process-management</link>
      <description>Learn how to manage how PHP-FPM creates and uses PHP processes to get the most out of your server.</description>
    </item>
    <item>
      <title>PHP-FPM: Configuration the Listen Directive</title>
      <link>https://serversforhackers.com/video/php-fpm-configuration-the-listen-directive</link>
      <description>PHP-FPM can listen on multiple sockets. I also listen on Unix sockets, or TCP sockets. See how this works and how to ensure Nginx is properly sending requests to PHP-FPM.</description>
    </item>
    <item>
      <title>PHP-FPM: Multiple Resource Pools</title>
      <link>https://serversforhackers.com/video/php-fpm-multiple-resource-pools</link>
      <description>Split PHP-FPM into multiple resource pools to separate applications and enhance system security!</description>
    </item>
    <item>
      <title>Apache and PHP-FPM</title>
      <link>https://serversforhackers.com/video/apache-and-php-fpm</link>
      <description>Learn to hook Apache up to PHP-FPM using Apache's proxy modules.</description>
    </item>
    <item>
      <title>Automatic Security Updates: CentOS</title>
      <link>https://serversforhackers.com/video/automatic-security-updates-centos</link>
      <description>On CentOS servers, we can enable the automatic download and installation of security updates. Let's see how to protect our servers by installing the `yum-cron` package!</description>
    </item>
    <item>
      <title>Backup and Restore MySQL with mysqldump</title>
      <link>https://serversforhackers.com/video/backup-and-restore-mysql-with-mysqldump</link>
      <description>Let's see how to backup and restore MySQL databases, with a few extra tricks, using the `mysqldump` tool.</description>
    </item>
    <item>
      <title>MySQL User Security</title>
      <link>https://serversforhackers.com/video/mysql-user-security</link>
      <description>Learn how to setup users in MySQL to be accessible from various hosts, remote or local, as well as ranges of IP addresses. We'll cover giving users specific grants to determine what they can do to the databases they are assigned.</description>
    </item>
    <item>
      <title>LEMP on RedHat/CentOS</title>
      <link>https://serversforhackers.com/video/lemp-on-redhatcentos</link>
      <description>Let's install Nginx, PHP and MySQL (MariaDB) on a RedHat or CentOS 7 server!</description>
    </item>
    <item>
      <title>LEMP on Debian</title>
      <link>https://serversforhackers.com/video/lemp-on-debian</link>
      <description>Learn how to install &amp; configure Nginx, MySQL and PHP on a Debian server, which has minor differences from the Ubuntu install video.</description>
    </item>
    <item>
      <title>LEMP on Ubuntu</title>
      <link>https://serversforhackers.com/video/lemp-on-ubuntu</link>
      <description>Learn how to install &amp; configure Nginx, MySQL and PHP on an Ubuntu server.</description>
    </item>
    <item>
      <title>Your First LAMP Server on CentOS/RedHat 7</title>
      <link>https://serversforhackers.com/video/your-first-lamp-server-on-centosredhat-7</link>
      <description>Get up and running with a LAMP server for your PHP applications.&#13;
&#13;
* Use `yum search` and `yum install` commands to find and install packages&#13;
* Install Apache and MariaDB (MySQL)&#13;
* Install PHP and all the modules you'll need to get started</description>
    </item>
    <item>
      <title>Automatic Security Updates</title>
      <link>https://serversforhackers.com/video/automatic-security-updates</link>
      <description>On Debian/Ubuntu servers, we can enable the automatic download and installation of security updates. Let's see how to protect our servers by enabling unattended upgrades!</description>
    </item>
    <item>
      <title>Sudo and Sudoers Configuration</title>
      <link>https://serversforhackers.com/video/sudo-and-sudoers-configuration</link>
      <description>We can configure who can use the sudo command and how. You may have noticed that the Vagrant user on your development server can use sudo without a password. Similarly, AWS servers allow the same thing. Find out how that's done, and much more!</description>
    </item>
    <item>
      <title>Users and SSH Security</title>
      <link>https://serversforhackers.com/video/users-and-ssh-security</link>
      <description>Many server providers only give us a root login when we spin up a server. Other ones create a user which can use "sudo" without needing a password (just as dangerous). In this video we'll see how to manage users and their server access abilities.</description>
    </item>
    <item>
      <title>Process Monitoring with Systemd</title>
      <link>https://serversforhackers.com/video/process-monitoring-with-systemd</link>
      <description>For most Linux distributions, Systemd will be the officially support init process. This will, among many other things, monitor our processes like SysVInit and Upstart did. Get a leg up and learn how to use it now!</description>
    </item>
  </channel>
</rss>
