This weekend we spent some time moving the PostGIS/pgRouting Windows buildbot, Winnie, to new hardware. Leo handled the hardware and I handled installing and reconfiguring everything. While I was at it, I upgraded to the new Jenkins. Vicky Vergara has been bugging me to set up pgTap so she can run her pgRouting pgTap tests to make sure they work on Windows. She's got 22488 tests; she just loves pgTap. Last time I tried installing pgTap I gave up, but I was in the mood for experimentation, so I gave it another chance.
The reason I like Unicode a lot is that it allows me to code in text-based environments and still have nice output. Today, we’re going to play with Regional Indicator Symbols, which implement flags as a Unicode combination of letters from 🇦 to 🇿. For instance, if you display 🇫 and 🇷 concatenated together, you get 🇫🇷. Let’s try that from our PostgreSQL prompt!
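For illustration outside the database, the same combination can be built directly from code points (the regional indicator block starts at U+1F1E6); here is a minimal Python sketch, with the helper name made up:

```python
# Regional Indicator Symbols occupy U+1F1E6 (🇦) through U+1F1FF (🇿).
REGIONAL_A = 0x1F1E6

def flag(country_code: str) -> str:
    """Map an ASCII country code like 'FR' to its regional-indicator pair."""
    return "".join(chr(REGIONAL_A + ord(c) - ord("A")) for c in country_code.upper())

print(flag("FR"))  # 🇫🇷 (rendered as the French flag on most systems)
```

The rendering as an actual flag glyph is up to the font; the underlying string is always the two regional indicator code points.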
This year Postgres Open and PGConf SV have combined to create a bigger and better conference right in downtown San Francisco. I’m obviously biased as I’m one of the co-chairs, and I know every conference organizer says picking the talks was hard, but I’m especially excited for the line-up this year. The hard part for me is going to be deciding which talks to miss because I’m sitting in the concurrent session. You can see the full list of talk and tutorial sessions, but I thought it’d be fun to do a rundown of some of my favorites.
Postgres indexing itself has long been on my wishlist. Andrew Kane from Instacart, and creator of PgHero, has bottled up many learnings into a new tool: Dexter. I suspect we’ll get a look at all that went into it, how it works, and how you can leverage it for a more automatically tuned database.
Migration talks are all too common: from Postgres to MySQL, from MySQL to Postgres, or from Dynamo to Postgres. But this one has a slightly different flavor: from Postgres to sharded Postgres with Citus. Sharding into a distributed system of course brings new things to consider and think about, and here you’ll learn about them from firsthand experience, so hopefully you can avoid the mistakes yourself.
This one looks to be a great under-the-hood look as well as likely very practical. It’ll cover MVCC, which is at the core of so much of how Postgres works, and then bring it up to what that means for things like locks. Best of all, this one, like so many others, comes with lots of real-world experience from Segment.
I love me some window functions, though they're not always the easiest things to work with. They can let you easily do things like compute month-over-month growth between rows in a single query. Bruce, who's always a great presenter, walks us through them and all the things they're capable of.
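As a rough sketch of what month-over-month growth means (the SQL version would typically use the `lag()` window function), here it is computed in plain Python over made-up monthly totals:

```python
# Hypothetical monthly revenue totals, ordered by month (illustrative data).
monthly = [("2017-01", 100.0), ("2017-02", 110.0), ("2017-03", 99.0)]

def mom_growth(rows):
    """Return (month, fractional growth vs. prior month) pairs,
    mirroring what lag() provides in a window-function query."""
    return [
        (cur[0], (cur[1] - prev[1]) / prev[1])
        for prev, cur in zip(rows, rows[1:])
    ]

print(mom_growth(monthly))
# February grew 10% over January; March shrank 10% vs. February
```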
Instagram is well known a
[...]The PostGIS development team is pleased to announce the release of PostGIS 2.3.3. As befits a patch release, the focus is on bugs and breakages. Best served with PostgreSQL 9.6.3+ and pgRouting 2.4.1.
One of the coolest things about Postgres functions is that they can return rows as if they were a table. Not only is it possible, but creating a set-returning function in Postgres is almost frighteningly trivial when compared to other database engines. In fact, some Oracle users are probably rolling their eyes and muttering “Just use a cursor!” already. Just hear us out!
Postgres functions can return cursors too, but using them afterwards isn’t exactly a friendly experience. Cursors are passed by reference, so the function must either accept a parameter to name the cursor, or it generates something wacky like “<unnamed portal 1>”. After that, cursors can only be used with FETCH instead of SELECT, greatly limiting their utility.
What about views, then? Obviously they can return rows like a table, so why not just write everything as a query and turn it into a view? It’s true that practically any function can be emulated with a sufficiently advanced query. This is especially true since Postgres 9 added CTEs, allowing it to generate step-by-step transformations. Some use cases aren’t well covered by that approach, however.
Consider the Fibonacci Sequence. Here’s a function that iteratively generates the whole sequence up to the Nth value:
CREATE OR REPLACE FUNCTION fibonacci(nLoops INT)
RETURNS SETOF NUMERIC AS
$$
DECLARE
  i INT;
  x NUMERIC := 0;
  y NUMERIC := 1;
  z NUMERIC := 1;
BEGIN
  FOR i IN 1..nLoops LOOP
    RETURN NEXT x;
    x = y;
    y = z;
    z = x + y;
  END LOOP;
END;
$$ LANGUAGE plpgsql;
SELECT max(fibonacci) FROM fibonacci(100);
max
-----------------------
218922995834555169026
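A quick way to sanity-check that result outside the database is the same iteration in Python, whose integers are arbitrary precision much like NUMERIC:

```python
def fibonacci(n_loops: int):
    """Yield the first n_loops Fibonacci numbers, mirroring the plpgsql loop."""
    x, y = 0, 1
    for _ in range(n_loops):
        yield x
        x, y = y, x + y

print(max(fibonacci(100)))  # 218922995834555169026
```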
Amusingly enough, this particular function works until the NUMERIC datatype itself is exhausted, provided sufficient hardware. Our test VM was able to survive until the 400,000th result before subsequent attempts exhausted our meager 8GB of temporary disk space. We’d print it here, but sadly a number with 83,595 digits won’t fit in the article.
Now cons
[...]The modern calendar is a trap for the young engineer’s mind. We deal with the calendar on a daily basis, and until exposed to its insanity it’s rather common to think that calendar-based computations are easy. That is, until you’ve tried to do it once. A very good read about how the current calendar came to be the way it is now is Erik Naggum’s The Long, Painful History of Time.
With the SCRAM implementation done in Postgres 10, it is time to move on with other things in this area. The next, and perhaps last, area of focus is the implementation of channel binding, for which a patch has been submitted for integration into Postgres 11.
Channel binding is a concept defined in RFC 5056, to ensure that the frontend and the backend connecting to each other are the same in order to prevent man-in-the-middle attacks.
RFC 5802, which is the specification for SCRAM, has a section dedicated to channel binding. In the context of Postgres, if a client attempts an SSL connection, the server needs to advertise the SASL mechanism named SCRAM-SHA-256-PLUS, on top of the existing SCRAM-SHA-256 implemented in Postgres 10, to let the client choose whether it wants to perform channel binding. For Postgres, with an SSL connection, and if the server has published the -PLUS mechanism, a client has to choose the channel binding option; otherwise the server treats the choice of the non-PLUS mechanism as a downgrade attack. The protocol used for the exchange is defined in the documentation.
As mentioned in RFC 5929, there are several types of channel binding, each defining its channel binding data in its own way:
All of them need to be encoded with base-64. Note that, per the definition available, a SCRAM implementation has to implement tls-unique, the other types being optional.
The patch proposed to the community at the time this post is written implements two of them: tls-unique and tls-server-end-point. The former because of its mandatory nature, and the second one after discussion to ease the integration of channel binding in the Postgres JDBC driver, where getting the TLS finish message data can be a dependency pain.
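To make the encoding step concrete: for tls-server-end-point, the channel binding data is a hash of the server certificate (SHA-256 when the certificate's signature algorithm is SHA-1, MD5, or SHA-256 itself), then base-64 encoded. A rough Python sketch, using a dummy certificate body since the real DER bytes come from the TLS layer:

```python
import base64
import hashlib

# Stand-in for the server certificate in DER form; a real implementation
# would pull these bytes off the TLS connection (assumption for illustration).
cert_der = b"dummy-der-encoded-certificate"

# tls-server-end-point: hash the certificate, then base-64 encode the digest.
binding_data = base64.b64encode(hashlib.sha256(cert_der).digest())
print(binding_data.decode("ascii"))
```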
Now, finding out how to implement bo
[...]There was a poll recently on @amplifypostgres to determine where the Postgres community should have its interactive communication. The options were Google Hangouts, Slack, Reddit, or “Other”. The results were not surprising, with Google Hangouts beating Slack, with 157 votes cast. There were also notable mentions of IRC and Gitter. There were a couple of long-time Postgresql.Org members who asked the inevitable, “What is wrong with IRC?” Of course there is nothing wrong with IRC, but most users, when you tell them to use IRC, will say, “IRWhat?”, which is either a sign of disdain or ignorance depending on the user.
The problem, and what is driving this post, was an additional comment made by a long-time community member: that we need to educate the users, because the community (Postgresql) is on IRC and mailing lists.
That thinking ignores the reality that many community silos have been built up over the years. If IRC and mailing lists were sufficient for the needs of the community, these silos wouldn’t thrive. They are active and serve the needs of many communities that are linked to, but not directly integrated into, Postgresql.Org.
| Silo | Members |
| --- | --- |
| | 14000+ |
| | 9100+ |
| | 5100+ |
| | 4500+ |
| Facebook PostgreSQL в России | 2500+ |
| | 1100+ |
| Notable Mention | |
| StackOverflow #postgresql | 70,000+ Questions |
That doesn’t count the many other Postgres communities. The NYCPUG has over 2300 users, and I guarantee you that at most a small two-digit percentage participate on “the mailing lists” or “IRC”. Let us also not forget some of the largest communities in the world, such as Japan and Asia. Very few of them participate on the .Org lists; that doesn’t mean they aren’t part of the community.
If, as a community, our goal is to not only build software but also to build people, then we have to let go of our old-man, get-off-my-lawn tendencies and embrace new forms of collaboration. I am a bonafide master of “Good lord, why do
[...]CREATE EXTENSION replace_empty_string;
CREATE TABLE res (
id int4,
idesc text,
test1 varchar,
test2 text
);
CREATE TRIGGER res_replace_empty_string
BEFORE UPDATE OR INSERT ON res
FOR EACH ROW
EXECUTE PROCEDURE replace_empty_string ();
INSERT INTO res VALUES (1, 'first', NULL, '');
INSERT INTO res VALUES (2, NULL, '', 'Hello');
\pset null ****
SELECT * FROM res;
id | idesc | test1 | test2
----+-------+-------+-------
1 | first | **** | ****
2 | **** | **** | Hello
(2 rows)
UPDATE res SET idesc = ''
WHERE id = 1;
SELECT * FROM res;
id | idesc | test1 | test2
----+-------+-------+-------
2 | **** | **** | Hello
1 | **** | **** | ****
(2 rows)
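The trigger's effect, normalizing empty strings to NULL before a row is stored, can be sketched in application code as well; here's a minimal Python analogue (the column names and helper name are purely illustrative):

```python
def replace_empty_strings(row: dict) -> dict:
    """Mimic the trigger: turn empty-string values into None (SQL NULL)."""
    return {k: (None if v == "" else v) for k, v in row.items()}

print(replace_empty_strings({"id": 1, "idesc": "first", "test1": None, "test2": ""}))
# {'id': 1, 'idesc': 'first', 'test1': None, 'test2': None}
```

Doing this in a trigger rather than in every client guarantees the normalization happens no matter which application writes the row.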
Some weeks ago, Beta 1 of the upcoming Postgres 10 was released. Besides the very visible version numbering scheme change (no more X.Y.Z), the release notes promised, among other very cool and long-awaited features – like built-in Logical Replication and Table Partitioning – many performance improvements, especially in the area of analytics. […]
The post A quick check on Postgres 10 Beta Analytics Performance vs version 9.6 appeared first on Cybertec - The PostgreSQL Database Company.

We are pleased to announce pgAdmin3 LTS is now available on OSX and Windows for PostgreSQL 10!
As v1.0 of pgAdmin4 was released in September of 2016, the pgAdmin Development Team decided to no longer support pgAdmin III with any more fixes. We believe that pgAdmin3 usage should fade out over time and have forked the code to include basic support for PostgreSQL 9.6+.
Try it out!
If you haven’t already installed PostgreSQL, start here.
(Windows users: don’t prefix the pgc command with ./ as shown in the following commands.)
cd <directory of installation>
./pgc install pgadmin3
./pgc start pgadmin3
If you haven’t already, install pgDevOps.
In pgDevOps, navigate to Package Manager, select the pgAdmin3 component, and install.
cd <directory of installation>
./pgc update
./pgc upgrade pgadmin3
cd <directory of installation>
./pgc remove pgadmin3
As always, give us your feedback so that we can continue to support and respond to the needs of the PostgreSQL community.
You might be aware of the interval data type, which allows mathematical operations on date, time, and timestamp values. This can lead to odd behavior, but this email posting showed me a new oddity when dealing with months containing different numbers of days. First, let's summarize how many days are in each of the first five months of 2017:
2017-01  31
2017-02  28
2017-03  31
2017-04  30
2017-05  31
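Those day counts can be confirmed with Python's standard calendar module, which is handy when reasoning about why interval arithmetic across month boundaries behaves oddly:

```python
import calendar

# monthrange returns (weekday of the 1st, number of days in the month).
for month in range(1, 6):
    days = calendar.monthrange(2017, month)[1]
    print(f"2017-{month:02d}  {days}")
```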
The GDAL community just announced the release of GDAL 2.2… and BigSQL just made it easier to implement!
We support Diegos and Yodas with 2 ways to install:

For GIS analysts (scientists, statisticians, analysts, etc) who find themselves as the de facto DBA (among other things).
“I don’t have time for this! I’m saving this sea turtle. Just give me a GUI!”
2. Make sure to check the box for including the pgDevOps component with your install.

3. Open pgDevOps in your web browser:
<ip of machine>:8051
If installing on local machine:
localhost:8051


That’s it!

For DBAs who find themselves supporting these strange spatially enabled databases.
“Real men only use command line! And what the hell is GIS?”
Install pgc via the command line to a sandbox.
*Note: this command will install the BigSQL distribution into the directory you are currently in.
MAC / Linux:
python -c "$(curl -fsSL http://s3.amazonaws.com/pgcentral/install.py)"
Windows:
@powershell -NoProfile -ExecutionPolicy unrestricted -Command "iex ((new-object net.webclient).DownloadString('http://s3.amazonaws.com/pgcentral/install.ps1'))"
Get into the product home directory by navigating via the command line to the bigsql directory. (Windows users: don’t prefix the pgc command with ./ as shown in the examples below.)
cd bigsql
Using the pgc command line tool, run the update command to get the latest distribution:
./pgc update
Then run the install, init, and start commands on pg96:
./pgc install pg96
./pgc init pg96
./pgc start pg96
Next, install postgis:
./pgc install postgis23-pg96
That’s it!
In order for all your Po
[...]If you often use the TIMESTAMP data type, you might not be making full use of it. Consider these queries:
CREATE TABLE tztest (x TIMESTAMP);
INSERT INTO tztest VALUES (CURRENT_TIMESTAMP);
SELECT * FROM tztest;
x
----------------------------
2016-10-25 18:49:20.220891
SHOW timezone;
TimeZone
------------
US/Eastern
SET timezone = 'Asia/Tokyo';
SELECT * FROM tztest;
x
----------------------------
2016-10-25 18:49:20.220891
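The analogous pitfall exists in application code: a timestamp with no zone attached never moves when the "session" zone changes. A small Python sketch with fixed offsets (US/Eastern shown as UTC-4 and Tokyo as UTC+9, chosen just for illustration):

```python
from datetime import datetime, timedelta, timezone

eastern = timezone(timedelta(hours=-4))  # US/Eastern (EDT), fixed for the example
tokyo = timezone(timedelta(hours=9))     # Asia/Tokyo

naive = datetime(2016, 10, 25, 18, 49, 20)  # like TIMESTAMP: no zone info
aware = naive.replace(tzinfo=eastern)       # like TIMESTAMPTZ: zone-aware

# The naive value has nothing to convert; the aware one shifts correctly.
print(naive)                    # 2016-10-25 18:49:20
print(aware.astimezone(tokyo))  # 2016-10-26 07:49:20+09:00
```

This is why TIMESTAMP WITH TIME ZONE is usually the safer default for recording points in time.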
Business logic is supposed to be the part of the application where you deal with customer or user facing decisions and computations. It is often argued that this part should be well separated from the rest of the technical infrastructure of your code. Of course, SQL and relational database design is meant to support your business cases (or user stories), so then we can ask ourselves if SQL should be part of your business logic implementation. Or actually, how much of your business logic should be SQL?
pip install pg_chameleon --upgrade
chameleon.py upgrade_schema --config <your_config> --debug

PostgreSQL has a feature called dollar-quoting, which allows you to include a body of text without escaping the single quotes. This feature has existed for quite some time. You've probably seen this in action when defining functions, for example:
CREATE OR REPLACE FUNCTION hello_world(param_your_name text)
RETURNS text AS
$$
SELECT 'Hello world. My name is ' || param_your_name || '.';
$$
language sql STRICT;
Which is easier to read, than the equivalent escape quoted function:
CREATE OR REPLACE FUNCTION hello_world(param_your_name text)
RETURNS text AS
'
SELECT ''Hello world. My name is '' || param_your_name || ''.'';
'
language sql STRICT;
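One subtlety worth knowing: the text between the dollar quotes must not itself contain the tag. When generating SQL from application code you can pick a tag that doesn't collide with the body; here's a small Python sketch (the helper name is made up):

```python
def dollar_quote(body: str, tag: str = "fn") -> str:
    """Wrap body in PostgreSQL dollar quotes, extending the tag until it
    no longer appears inside the body."""
    while f"${tag}$" in body:
        tag += "x"
    return f"${tag}${body}${tag}$"

print(dollar_quote("SELECT 'Hello world.';"))
# $fn$SELECT 'Hello world.';$fn$
```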
PL/Java has been around since 2005, but it has regularly struggled to gain users. Unfortunately, a lot of these problems are specific to the Java language and hamper its adoption.
First, there are a limited number of people who know both Java and the Postgres backend code. Fortunately Chapman Flack has recently resumed PL/Java development.
Second, there is the deployment complexity of binding PL/Java to a Java Runtime Environment (JRE) in an easily-deployed way. This 2014 email thread discusses the problems of JRE packaging and installation.
Testing within the context of a database can sometimes seem a bit overwhelming. Not only is it necessary to create a reasonable set of test data to bootstrap multiple tables, but we need some reliable invocation system to compare query, view, and function output to expected results. And that is no small feat. For a very long time, the "easiest" way to test end-user database objects involved building a Makefile that included PGXS. Then we had to create the tests, run the tests, and copy over the output into static expected files. Modifying any test always meant updating the static output file as well.
It’s great that the tools are there, yet why do we need to be halfway to a Postgres developer in order to follow good QA practices with Postgres? Well, maybe we don’t. It turns out, like many things in the Postgres world, that there’s an extension for that. These days, all the cool kids are using pgTAP. Not only does pgTAP make it easier to write tests for Postgres, but its design generally encourages test-driven development. Regardless of how any of us might view that development paradigm, having the availability makes Postgres a more welcoming platform for those who fully embrace it.
To that end, let’s say we have our trusty sensor_log table and we want to write a get/set function pair to control and access its contents. As always, the sensor_log table looks like this:
CREATE TABLE sensor_log (
  sensor_log_id SERIAL PRIMARY KEY,
  location VARCHAR NOT NULL,
  reading BIGINT NOT NULL,
  reading_date TIMESTAMP NOT NULL
);

CREATE INDEX idx_sensor_log_location ON sensor_log (location);
CREATE INDEX idx_sensor_log_date ON sensor_log (reading_date);
We also need to install pgTAP itself. That part, at least, is as easy as the usual extension install. For RHEL and Debian system users, it’s not even necessary to build anything, since pgTAP is in the official Postgres YUM and APT repositories. After that, it’s just a quick extension install:
CREATE EXTENSION pgtap;
In the spirit of TDD, we should write the
[...]In December 2016, ISO released a new version of the international SQL standard (ISO/IEC 9075:2016). It supersedes the previous version from 2011.
This article is a brief overview of the new features being introduced into the SQL language. Strictly speaking, this article covers the additions to part 2 of the standard (SQL/Foundation), i.e. the most commonly used part.
This article also shows the availability of these features among six major databases. Note that the respective figures—shown below—only reflect whether or not the databases support the features in the way described by the standard. For example, an X in the JSON row does not mean the database has no JSON support—it simply means the database doesn't support the JSON features described by the SQL standard. As a matter of fact, all tested databases support JSON in some way—but not necessarily in the way described by the standard.
Before going through all the shiny new features, let's look at some trivia: part 2 of the SQL:2016 standard has 1732 pages—that's 260 pages (~18%) more than the 2011 edition. It introduces 44 new optional features (+14%). Let's take a look at them…
Row pattern recognition captures groups of rows that follow a pattern. Rows in matched groups can be filtered, grouped, and aggregated. The pattern itself is described with a simple regular expression syntax.
The main use of row pattern recognition is to check time series for patterns. However, the match_recognize clause combines aspects of the where, group by, having, and over clauses (window functions), so it is also useful in many other cases.
I have given a presentation about row pattern recognition. It discusses several examples in two implementation variants: with and without the new match_recognize clause. The examples cover some typical use cases, and also some atypical use cases, for row pattern recognition:
Consecutive events: identifying sessions in a web-log; tolerating gaps in sequences
Top-N per group (might be faster than window functions!)
Time in
The use of server-side logic, particularly stored procedures, has been a highly contentious topic among database professionals for decades. The question has always been what amount of logic should be encoded in the database vs. in client applications or application servers.
Beyond using the database as a simple data store, there are three levels in which logic can be added to Postgres:
Let's look at each of these in turn:
If you’ve followed my previous posts (here and here), then you already have one or more versions of PostgreSQL installed on your Mac. Maybe these are solely for test or dev purposes and you don’t really care about any of the data therein, but if you do, let me guide you to pgBackRest.
pgBackRest aims to be a simple, reliable backup and restore system that can seamlessly scale up to the largest databases and workloads.
Instead of relying on traditional backup tools like tar and rsync, pgBackRest implements all backup features internally and uses a custom protocol for communicating with remote systems. Removing reliance on tar and rsync allows for better solutions to database-specific backup challenges. The custom remote protocol allows for more flexibility and limits the types of connections that are required to perform a backup which increases security.
pgBackRest is written in Perl, but don’t hold that against it. As of the 1.19 release, pgBackRest can now use S3 buckets as the storage backend. I really like pgBackRest and tend to use it for myself and customers over any of the other tools in the PostgreSQL ecosphere. So, let’s get started by downloading the latest release from their site, and then installing it. For some reason, no one has added pgBackRest to Homebrew yet (someone, pls!) so let’s do it the manual way:
(Keep in mind that I already had Perl set up to connect to PostgreSQL for other uses. You might need to install DBD::Pg.)
Now that pgBackRest is installed, let’s configure it. First, we’ll want to set some of the global properties that affect all pgBackRest operations:
As you can see, we set the following: